
Is your data ready for AI? It depends what you're building

In my recent interview with Global Finance, I argued that strategy must precede AI in treasury. This is not a philosophical position but a practical constraint. You don't adopt AI for its own sake. You adopt it to support specific decisions, with outputs you're prepared to defend.

And trust is the starting line. Every year, Kyriba supports 3B bank transactions, 500M payments ($15T), 100M API calls, and 25TB of data. These aren't abstract metrics - they represent customers trusting us with data and workflows that move money. That's the scale where treasury operates, and it's why the readiness question matters.

So when treasury teams ask, "Is our data ready for AI?", the better question is: "What are we trying to build?"

AI in treasury is a spectrum, not a single tool. At one end sit productivity copilots that help teams move faster - answering questions, pulling audit trails, surfacing variance drivers without the archaeological dig. At the other end sit predictive models for forecasting, fraud signals, and exposure management - systems whose outputs can shape decisions, and therefore require a higher standard of trust.

One helps you find information faster. The other recommends what to do next. They do not require the same foundations. Treating them as equivalent is how programs stall.

Two outcomes, two standards

Productivity AI tolerates imperfection. If a conversational assistant retrieves slightly incomplete information or suggests the wrong report, the cost is minor - some rework, a follow-up check. A human remains in the loop, applying context and sanity tests. The bar is accessibility and speed, not perfection.

Decision-making AI cannot afford the same latitude. When a model influences forecasts, flags anomalies, or surfaces risk exposure, incomplete inputs do not merely limit effectiveness - they create financial risk. A forecasting model trained on patchy bank feeds will encode the gaps as truth. If exposures are invisible to the system, they will be invisible to the forecast. The danger is not loud failure. It is quiet failure: outputs that look plausible, sound confident, and invite action - but rest on sand.

The common implementation error is applying one standard to both. Some teams delay productivity wins because they assume everything needs decision-grade data. Others deploy decision-influencing models before the foundations exist to make outputs trustworthy.

Access is not accuracy

Modern AI feels effortless, especially when conversational. That aids adoption. It also obscures an important line: making data easier to query is not the same as making it decision-ready.

Conversational interfaces reduce friction. They let you ask questions in natural language, retrieve information faster, understand what exists and what does not. That is valuable for productivity. But ease of access does nothing to resolve incompleteness, inconsistency, or gaps in coverage. And when the output influences a financial decision, those gaps compound.

The old constraint still applies: garbage in, garbage out. If you train a forecast on incomplete history, the model learns the wrong story - not maliciously, just mechanically. If cash positions systematically understate reality for certain entities, the model encodes that pattern as normal. The forecast will be confident. It will be wrong.

What trusted data means in treasury

Here is a scenario most treasury teams recognize: yesterday's bank statements are missing for a handful of subsidiaries.

A productivity assistant helps you identify the missing feeds and guide resolution. Useful, even when the data picture is incomplete.

A forecasting model, by contrast, may treat the absence as information - learning that those entities consistently report late, or understate positions, or exhibit patterns that are artifacts of missing data rather than economic reality. The model's confidence intervals may look reassuringly tight. Its outputs may align with recent trends. But if the inputs are wrong, sophistication is irrelevant.

For decision-making use cases - cash forecasting, FX risk, exposure management - the question is not "good enough." It is whether you have visibility into what matters across entities, currencies, and systems. Material losses in treasury typically trace to blind spots: incomplete capture, inconsistent mapping, feeds that arrive too late or not at all.

You cannot manage what you cannot see.

The trust progression

This is where strategy becomes capability. Reliable AI outcomes follow a clear sequence:

  • Trusted Connectivity – secure, auditable data flows from banks, ERPs, platforms, and subsidiaries. This is unglamorous infrastructure. It is also the enabling condition.

  • Trusted Data – complete, accurate, standardized information across sources. Not perfection. Sufficient completeness and consistency to support decisions and withstand scrutiny.

  • Trusted Analytics – validated models with explainable outputs and clear lineage back to source transactions. If you cannot explain how you reached a number, you cannot defend the decision it informed.

  • Trusted AI – systems that augment human expertise with transparency and control. Productivity AI requires access controls and verification. Decision-making AI requires validation, thresholds, exception workflows, audit trails, and human-in-the-loop controls for material actions.

Governance is not bureaucratic overhead. It is what makes outcomes dependable. In treasury, autonomy is not the goal. Enhanced decision-making is. AI proposes. Humans approve.
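As a minimal illustration of the "trusted data before trusted analytics" step, the gate between incomplete feeds and a decision-grade forecast could look like a simple completeness check. This is a hypothetical sketch, not a Kyriba implementation; the entity codes, threshold, and function names are invented for the example:

```python
def statement_coverage(expected_entities, received_statements):
    """Return the coverage ratio and the entities whose bank feeds are missing."""
    missing = sorted(set(expected_entities) - set(received_statements))
    coverage = 1 - len(missing) / len(expected_entities)
    return coverage, missing


def forecast_is_runnable(expected_entities, received_statements, threshold=1.0):
    """Decision-grade forecasts demand full coverage by default; anything less
    is held for an exception workflow where a human resolves the missing feeds
    (AI proposes, humans approve)."""
    coverage, missing = statement_coverage(expected_entities, received_statements)
    if coverage < threshold:
        return False, "Hold forecast: missing feeds for " + ", ".join(missing)
    return True, "All feeds present: forecast may run"
```

With hypothetical subsidiaries DE01, FR02, and US03, a day where FR02's statement never arrives would return a hold decision rather than silently training the model on a gap. The design choice mirrors the argument above: the productivity layer can surface the missing feed, but the decision layer refuses to treat absence as information.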

Start with strategy

Ask "what are we trying to build?" and the path becomes clear.

Start with productivity wins where accessibility matters most - faster retrieval, less time navigating systems. Build the foundations - connectivity, standardization, auditability, governance - in parallel. Deploy decision-influencing AI only when those foundations can support the stakes.

Strategy precedes AI because trust precedes action. Get the sequence right, and you will not just move faster. You will move safely.

Written By

Bob Stark

Global Head of Enablement

Bob Stark is the Global Head of Market Strategy at Kyriba. A product and go-to-market financial technology leader for 25 years, he works directly with clients, partners, and industry influencers to ensure Kyriba is at the forefront of financial technology. He has empowered finance leaders at some of the world's largest companies, and is a frequent speaker and author on treasury, risk management, and payments.
