From cyber risk to cash risk: governing agentic automation

Finance leaders are entering a new phase of automation. Systems do not just recommend the next step. They take it. Agentic tools can draft, reconcile, update, and trigger workflows across the application stack. The payoff is faster cycle times, fewer handoffs, and less manual work.

But the risk scales, too. When the “actor” in your process is software operating at machine speed, control gaps stop being annoying and start being expensive. The downside is not merely a breach. It is cash risk, meaning unauthorized, unauditable actions that can alter financial outcomes.

A recent example from the broader AI ecosystem illustrates the pattern. Security researchers found that an AI-agent social platform exposed private data and large volumes of credentials due to basic access-control failures. More importantly, it enabled unauthorized write actions that could modify live content. Reuters also noted the absence of identity verification, meaning it was unclear which “actors” were real agents versus humans posting through scripts. You do not need to care about that specific platform to care about the lesson. As agents proliferate, the identity perimeter is expanding faster than most organizations can secure it.

When we evaluate agentic AI platforms, whether for ourselves or customers, we ask the same foundational questions. How does this solution manage non-human identity lifecycle? What data can these identities access, and how is that scoped, logged, and governed? These are not hypothetical concerns. They are design requirements. When we built TAI, Kyriba’s trusted agentic AI, we answered these questions first. TAI inherits your role-based access controls, enforces least-privilege scoping on every action, and ensures that agents can only see and do what you are authorized to see and do. No shortcuts. No black boxes. In finance, trust is the starting point.

For CFOs, this is not a future problem. It is a governance problem that shows up in the same places CFOs already feel pressure: automation ROI, capital efficiency, liquidity performance, and control assurance under board and regulatory scrutiny.

A new identity perimeter is forming, and it touches money

For the last decade, most control programs were built around a familiar perimeter: employees, endpoints, and applications. Agentic automation changes the shape of that perimeter. The “actors” in your environment now include agents, tokens, automation credentials, connectors, and service accounts. Many of them operate continuously, not just during business hours.

This matters because CFO organizations are increasingly judged on outcomes that depend on trusted, timely financial operations. Cash visibility. Forecast accuracy. Working capital discipline. Efficient execution. When agentic automation is introduced without finance-grade controls, the risk is not only technical. It is measurable in performance.

If automation is meant to improve capital efficiency, it has to do so without introducing new exception volume, rework, or control failures. Otherwise, the math behind the ROI breaks. You do not just lose time. You lose trust in the process, and you lose the ability to scale automation across the finance operating model.

In finance, integrity beats confidentiality

Confidentiality matters. But in finance, integrity is existential. A leak is damaging. An unauthorized change can be catastrophic.

That is why unauthorized write access is the detail that should make finance teams sit up. If an agent, or someone impersonating an agent, can change the inputs that downstream processes trust, you can get valid-looking outputs that are wrong. Wrong inputs can ripple across cash forecasting, liquidity decisions, payment execution, reconciliations, and close outcomes. They also create a second-order problem. Teams spend time proving what happened instead of improving performance.

In my experience advising business leaders through digital transformation, the teams that scale automation successfully are not the ones with the most sophisticated AI. They are the ones who can answer two questions. If this goes wrong, can we prove what happened and who is accountable? And can we demonstrate to regulators and the board that we know what data these systems touched? Organizations that skip governance and privacy by design end up in a cycle I see repeatedly. They deploy, hit a control or compliance failure, pull back, and rebuild trust. That cycle is expensive, and it stalls every other priority.

That is why we built TAI with governance and privacy by design: full audit trails, transparent reasoning steps, human-in-the-loop approvals for sensitive actions, and data that never leaves your trusted environment. The teams that scale AI successfully do not skip these controls. They embed them from day one. With TAI, Kyriba made that the default, not an add-on.

Integrity failures map cleanly to CFO-owned risk areas:

  • Vendor and payee setup and modifications

  • Payment proposal creation and release

  • Bank account change workflows

  • Reconciliations and exception handling

  • Controls evidence and audit readiness

The uncomfortable truth is that integrity failures can be harder to detect than data theft. A stolen dataset is a discrete event. A poisoned workflow can run quietly, producing clean logs that reflect a process that executed as designed. The problem is that the design trusted the wrong identity, token, or connector.

The question finance will keep asking: who is accountable?

Finance controls rely on answering three questions with confidence:

  1. Who did it?

  2. Were they allowed to do it?

  3. Can we prove it after the fact?

In a human-centered environment, those map to user identities, roles, approvals, and audit trails. In an agentic environment, the mapping is not automatic. If you cannot tell whether an “actor” is an agent, a script, or a human behaving like either, accountability gets muddy fast.
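In data terms, answering those three questions means every action produces a record that binds the actor, the permission that allowed it, and the approval behind it, stored so it cannot be quietly rewritten. A minimal sketch of that idea, using a hypothetical record shape (field names are illustrative, not drawn from any specific product):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional
import hashlib
import json

@dataclass(frozen=True)
class AuditRecord:
    """One entry per action, binding the action to an identity, a policy
    decision, and an approval, so all three questions stay answerable."""
    actor_id: str            # who did it (named human, agent, or service account)
    actor_type: str          # "human" | "agent" | "service_account"
    action: str              # e.g. "payment.release"
    permitted_by: str        # the role or policy that allowed the action
    approved_by: Optional[str]  # a distinct approver, when one is required
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def chain_hash(prev_hash: str, record: AuditRecord) -> str:
    """Hash each record together with the previous hash, so the trail
    cannot be rewritten after the fact without detection."""
    payload = json.dumps(record.__dict__, sort_keys=True)
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()
```

Chaining each entry to the previous one is one common way to make an audit trail tamper-evident: altering any historical record changes every hash after it.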

Stop arguing about intelligence and start proving enforceability

A lot of AI governance conversations start and end with model performance: accuracy, bias, hallucinations. Those topics matter, but they are not the first-order finance risk when agents connect to systems that can move money or change books and records.

The first-order risk is control enforceability. Can you enforce the rules of the road, every time, at scale?

  • Can you constrain what the agent is allowed to do, precisely and reliably?

  • Can you prevent it from collapsing separation of duties?

  • Can you ensure approvals are real approvals, not auto-approved loops?

  • Can you produce evidence that stands up to audit and scrutiny?

If the answer is “we think so,” you are behind. Agentic automation works by chaining actions. So does fraud. And even without fraud, uncontrolled automation creates error propagation. Both outcomes degrade the metrics CFOs care about, including capital efficiency and liquidity performance.

A diagnostic CFOs can use before scaling agentic automation

If you cannot answer these four questions with confidence, you are not ready to scale AI agents in finance.

  1. Authority: what exactly can the agent do, and what is explicitly forbidden?
    “AI can help” is not a permission model. Define allowed actions at the level you would define permissions for a human: create, modify, submit, approve, release, reconcile. Be equally explicit about what the agent can never do.

  2. Separation of duties: can the agent both create and approve?
    If an agent can propose a vendor change and also approve it, you have automated a control failure. Machine speed does not make it safer. It makes it faster.

  3. Auditability: can we prove who or what triggered each action and why?
    “Logs exist” is not enough. Finance needs an evidentiary trail that ties each action to an identity, a policy decision, an approval, and a business purpose. It should be captured in a way that cannot be rewritten after the fact.

  4. Resilience: do we have a kill switch that works at the speed of money?
    If something goes wrong, can you immediately disable the agent, revoke tokens, and halt downstream actions? And can you restore a known-good state without introducing further integrity issues?

If you are building agents in-house, this is the conversation to have with your development team before the first line of code.
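One way to frame that conversation is a policy gate that every agent action must pass before it executes. The sketch below covers three of the four questions, with authority as an explicit allow-list, separation of duties as a hard rule, and resilience as a kill switch; the agent names and actions are illustrative assumptions, not a reference implementation:

```python
class PolicyViolation(Exception):
    """Raised when an agent action fails a control check."""

# Authority: an explicit allow-list per agent. Anything absent is forbidden.
ALLOWED_ACTIONS = {
    "ap-agent": {"create_payment_proposal"},
    "approval-agent": {"approve_payment_proposal"},
}

# Resilience: a global kill switch that halts every agent action at once.
AGENTS_ENABLED = True

def authorize(agent_id: str, action: str, created_by: str = "") -> None:
    """Gate every agent action against the diagnostic questions.
    Raises PolicyViolation instead of letting the action proceed."""
    if not AGENTS_ENABLED:
        raise PolicyViolation("kill switch engaged: all agent actions halted")
    if action not in ALLOWED_ACTIONS.get(agent_id, set()):
        raise PolicyViolation(f"{agent_id} is not authorized to {action}")
    # Separation of duties: the identity that created a proposal
    # may never be the identity that approves it.
    if action == "approve_payment_proposal" and created_by == agent_id:
        raise PolicyViolation("creator cannot approve its own proposal")
```

The fourth question, auditability, belongs in the same gate: in practice, every call to a function like this would also emit an audit record, whether the action was allowed or blocked.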

The bottom line

Agentic automation will create productivity. It will also expand the identity perimeter in ways many organizations are not yet governing. The lesson from recent incidents in the agent ecosystem is not “don’t move fast.” It is that basic control failures, including exposed credentials, weak access controls, and unclear identity, can become integrity failures with broad downstream impact.

For CFOs, that is the reframing: this is not just cyber risk. It is cash risk. It affects automation ROI, capital efficiency, and liquidity performance. It also shows up under SOX, in audits, and in audit committee conversations.

The winners will not be the organizations that deploy the most agents first. They will be the ones who can prove, before money moves, that every agent action is authorized, constrained, and auditable.

Written By

Kevin Bailey

Chief Information Officer & Chief Information Security Officer

Kevin Bailey is Kyriba's Chief Information & Security Officer, driving enterprise-wide digital transformation while building world-class security capabilities. With more than 15 years of technology leadership experience, Kevin leads initiatives spanning cloud modernization, IT infrastructure optimization, and security program development across complex global environments. He specializes in aligning technology investments to business outcomes, scaling security operations, and delivering measurable results in both innovation and risk management. Kevin's integrated approach ensures that digital transformation and security work in tandem to accelerate business growth and strengthen organizational resilience.

Related resources

  • Agentic AI is a gold rush for CFOs and treasurers—not a sugar rush

  • Why CFOs should start their AI journey in treasury

  • Nacha rule changes 2026: what finance teams must do to ensure compliance