
As AI agents transact, financial services must rethink compliance

April 08, 2026

  • AI
  • Cross Border Payments
  • Customer


By Nejc Korosec, Head of Compliance at Moneyhub

Financial services has spent decades building its compliance architecture on a single, load-bearing assumption: that every transaction traces back to a verified human being with genuine intent. Know Your Customer (KYC) is the bedrock of that architecture, and it is about to be tested in ways it was never designed to handle.

This won’t be due to some regulatory change or external influence from bad actors. It is more fundamental than that. The customer, as we have always understood them, will gradually step back from the transaction itself, operating at a supervisory level rather than an active one.

Buying agents will negotiate with selling agents, and finance agents will autonomously manage supplier payments. AI systems will agree contract terms without a human ever reviewing the small print.

This will be a huge win for efficiency and, if we are unprepared, an absolute minefield for compliance.

The industry urgently needs a new compliance bedrock for this new era – Know Your Agent, or KYA.

KYC was built for humans, but that’s no longer enough on its own

Let’s drill into the issue in more detail. Fundamentally, KYC works because we know a human is at the source. When a person completes an application, they can give consent and be held accountable, but agentic AI disrupts that logic.

When an autonomous system negotiates and executes financial decisions on a customer’s behalf, there is no longer a human at the point of transaction, only a model acting on instructions that may have been set days, weeks, or months earlier. The question of whether those instructions still reflect the customer’s intent, and whether the agent has stayed within them, is one that KYC was never designed to answer.

In this new world, KYA will work alongside KYC rather than replace it, extending the same core principles of identity verification, authority and accountability to a new category of actor. But where KYC was built in response to problems that already existed, refined over years of regulatory iteration, KYA needs to be built ahead of the curve. The window to build this infrastructure is now, not after the agentic economy has scaled beyond the reach of current compliance frameworks.

The liability vacuum

In January 2026, FCA Executive Director Sheldon Mills confirmed that SM&CR accountability remains applicable to AI-driven decision-making. Responsibility for AI-driven outcomes sits with the financial institution deploying them. That position was designed for a world where humans set the parameters and machines executed within them, and a compliance team could trace any decision back to a human judgment call. In an agentic world, that process breaks down. The machine is not just executing within parameters a human defines, it is interpreting them, negotiating within them, and acting on them.

When two probabilistic models negotiate a contract autonomously and that contract breaches regulatory requirements, existing frameworks offer no clean answer on where liability sits.

Consider a straightforward example: a lending agent, acting on a customer’s behalf, agrees repayment terms with a creditor’s agent. The terms fall outside the customer’s pre-authorised range by a margin the model judged acceptable. No human has reviewed or approved it. Under current frameworks, the firm bears liability, yet no clear chain of accountability exists, leaving the compliance team without the means to reconstruct how the decision was reached.

The speed compounds the problem. Agent-to-agent transactions execute in milliseconds. By the time a compliance team identifies an out-of-policy agreement, it may already have settled and cascaded into downstream positions, an accountability vacuum that widens with every autonomous system deployed without a verification framework behind it.

What KYA requires in practice

Building a KYA framework does not mean rebuilding compliance from scratch. It means extending existing principles into territory they were not designed to cover.

The starting point is verification. Before any AI agent transacts on a customer’s behalf, there must be a verifiable, auditable record of what it has been authorised to do and what it has not. Every agent acting in a financial context should carry a credential issued by the institution, scoped to specific transaction types, and revocable at the customer’s instruction. Without that, there is no reliable basis for establishing whether an agent has acted within its mandate or beyond it.
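To make the idea concrete, here is a minimal Python sketch of what such a scoped, revocable credential might look like. The class name, fields, and transaction types are illustrative assumptions, not an existing standard or API:

```python
from dataclasses import dataclass

@dataclass
class AgentCredential:
    """Credential issued by the institution to a customer's AI agent."""
    agent_id: str
    customer_id: str
    allowed_types: frozenset  # transaction types the agent is scoped to
    revoked: bool = False

    def revoke(self) -> None:
        """Customer (or institution) withdraws the agent's mandate."""
        self.revoked = True

    def authorises(self, transaction_type: str) -> bool:
        """True only if the credential is live and scoped to this type."""
        return not self.revoked and transaction_type in self.allowed_types

cred = AgentCredential("agent-7", "cust-42", frozenset({"bill_payment"}))
assert cred.authorises("bill_payment")
assert not cred.authorises("securities_trade")  # outside the scoped mandate
cred.revoke()
assert not cred.authorises("bill_payment")      # revocation takes immediate effect
```

The point of the sketch is the audit trail: every transaction attempt can be checked against an explicit, revocable mandate rather than inferred after the fact.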

Verification alone is not enough. The architecture matters too. Separating the reasoning layer from the execution layer means an agent can negotiate and assess freely, but the actual movement of money passes through a deterministic API that enforces hard limits. If an agent agrees a payment exceeding a customer’s pre-set threshold, the execution layer rejects it regardless of the reasoning behind it. That separation is not a restriction on what agentic systems can achieve; it is what makes autonomous operation trustworthy in the first place.
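This separation can be sketched in a few lines of Python. The class and method names, and the limit values, are illustrative assumptions; the essential property is that the agent's rationale is logged but cannot influence the deterministic check:

```python
class ExecutionLayer:
    """Deterministic gate between the reasoning model and money movement.

    The limit is fixed at authorisation time and cannot be altered by
    the reasoning model sitting on top of it.
    """

    def __init__(self, limit_per_payment: int):
        self._limit = limit_per_payment

    def execute(self, amount: int, rationale: str) -> str:
        # The rationale is recorded for audit but has no bearing
        # on whether the payment is allowed.
        if amount > self._limit:
            return "REJECTED: exceeds pre-authorised limit"
        return "SETTLED"

gate = ExecutionLayer(limit_per_payment=500)
assert gate.execute(450, "negotiated early-payment discount") == "SETTLED"
assert gate.execute(620, "model judged overage acceptable").startswith("REJECTED")
```

However persuasive the agent's reasoning, the second payment never settles, which is exactly the guarantee a compliance team needs to be able to point to.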

In this model, the human element is non-negotiable. For high-value transactions, or situations the system hasn’t encountered before, a human risk officer should act as the final approver. Not to do the negotiating (the agent handles that), but to retain the veto. And with it, firms retain the clear line of legal liability that regulators will expect them to demonstrate.
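The routing logic behind that veto can be sketched as a simple escalation rule. The threshold, scenario names, and function are hypothetical, chosen only to illustrate the two escalation triggers the paragraph describes (high value, or a situation the system has not seen before):

```python
HIGH_VALUE_THRESHOLD = 10_000
KNOWN_SCENARIOS = {"bill_payment", "supplier_payment"}

def route(amount: int, scenario: str) -> str:
    """Decide whether an agent-agreed transaction auto-executes or
    queues for a human risk officer's approval (the veto)."""
    if amount >= HIGH_VALUE_THRESHOLD or scenario not in KNOWN_SCENARIOS:
        return "QUEUE_FOR_HUMAN_APPROVAL"
    return "AUTO_EXECUTE"

assert route(250, "bill_payment") == "AUTO_EXECUTE"
assert route(25_000, "bill_payment") == "QUEUE_FOR_HUMAN_APPROVAL"
assert route(250, "novel_derivative") == "QUEUE_FOR_HUMAN_APPROVAL"
```

The queue is the accountability mechanism: every transaction that lands in it carries a named human approver, which is the chain of responsibility regulators will ask firms to demonstrate.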

The infrastructure question

At Moneyhub, our Smart Payments infrastructure, built on Variable Recurring Payments, already operates on these principles. Consent is hard-coded at the point of authorisation. Transaction limits are fixed and cannot be altered by the reasoning model acting on top of them. For instance, an AI agent may reason that paying a bill early saves money, but Moneyhub’s constraints ensure that payment never triggers an overdraft.

We did not build it this way because regulators required it, but because the alternative, autonomous movement of money without immutable constraints, is not a system any compliance professional should be comfortable signing off.

The time to act is now

With 2030 fast approaching, significant portions of commerce are expected to run directly between agents, with humans operating in a supervisory rather than transactional role. The compliance infrastructure to govern that world does not yet exist.

The firms that begin building this infrastructure now, by establishing verification, execution controls and human oversight before the regulation arrives, will not be playing catch-up when the time comes. They will have written the template everyone else follows.
