FinTech automation forces rethink of fraud controls

By Vriti Gothi

Today

  • AI
  • Cross Border Payments
  • Digital Banking
The growing integration of AI-powered agents into digital commerce and financial services is reshaping how consumers browse, transact, and manage their money while simultaneously introducing a new and complex set of fraud risks for banks, payment providers, and retailers.

From general-purpose AI tools from companies such as OpenAI and Perplexity to retailer-developed shopping assistants and banking copilots, AI agents are increasingly being entrusted with tasks that involve sensitive financial decisions. These tools can search for products, compare prices, initiate payments, manage subscriptions and, in some cases, interact directly with financial accounts. While this evolution promises greater convenience and automation, it also expands the attack surface that fraudsters can target.

Traditional fraud prevention systems, which often rely on rules-based controls, static thresholds or basic bot detection, are proving increasingly insufficient in this new environment. As AI agents become more capable of mimicking human behaviour and executing complex sequences of actions, distinguishing between legitimate automated activity and malicious automation becomes significantly more difficult.
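To illustrate why static thresholds fall short, consider a minimal, hypothetical sketch: a classic rule flags any single transaction above a fixed limit, so an automated agent that splits one large movement of funds into many individually "normal" transfers passes every check. The threshold value and function names below are illustrative, not drawn from any real system.

```python
# Hypothetical static, rules-based control: flag only transactions
# that individually exceed a fixed limit.
THRESHOLD = 1000.0  # illustrative limit, not from any real system


def static_rule_flags(amount: float) -> bool:
    """Classic threshold rule: flag a transaction only if it exceeds the limit."""
    return amount > THRESHOLD


# One large transfer is caught by the rule...
single_transfer_flagged = static_rule_flags(5000.0)

# ...but the same value, split by a scripted agent into ten smaller
# transfers, slips past every individual check.
split_transfers = [500.0] * 10
any_split_flagged = any(static_rule_flags(a) for a in split_transfers)
```

The rule sees each transfer in isolation, which is exactly the blind spot that sequence-aware, behavioural approaches aim to close.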

For banks and retailers, simply blocking automated agents altogether is not a viable long-term strategy. Many organisations are actively investing in AI-driven customer experiences to improve efficiency, reduce friction and stay competitive in digital-first markets.

This shift is driving growing interest in behavioural intelligence as a core component of fraud prevention. Rather than focusing solely on individual transactions or login attempts, behavioural approaches analyse how users, human or automated, interact with digital systems over time. This includes examining interaction patterns, timing, navigation flows and sequences of behaviour to assess intent.
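One way to make the behavioural idea concrete is timing analysis: humans click and type with noisy, irregular gaps, while a scripted agent often fires events at near-constant intervals. The following is a minimal sketch, assuming session events arrive as timestamps; the `machine_like` function and the cut-off value are hypothetical illustrations, not a production detector, and real behavioural systems combine many such signals over time.

```python
from statistics import mean, pstdev


def machine_like(timestamps: list[float], cv_threshold: float = 0.1) -> bool:
    """Flag a session whose inter-event timing is implausibly regular.

    The coefficient of variation (stdev / mean) of the gaps between
    events is a crude proxy for regularity: scripted agents tend to
    produce near-constant gaps, humans do not. cv_threshold is an
    illustrative cut-off, not a tuned production value.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return False  # too little evidence to judge
    cv = pstdev(gaps) / mean(gaps)
    return cv < cv_threshold


# A scripted agent emitting an event every 200 ms exactly:
bot_times = [0.2 * i for i in range(10)]

# A human with irregular pauses between actions:
human_times = [0.0, 0.4, 0.9, 2.1, 2.3, 3.8, 4.0, 5.6]
```

A single timing feature like this is easy to evade on its own; the point of behavioural intelligence is that navigation flows, sequences and interaction patterns are scored together, so mimicking all of them at once becomes far harder.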

Jonathan Frost, Director of Global Advisory for EMEA at BioCatch, commented, “As people begin to rely on AI agents, from tools such as OpenAI and Perplexity to retailer-built shopping assistants, to browse, buy and manage their finances, it opens up another opportunity for fraudsters to exploit. Criminals are among the earliest adopters of new technology and will use these tools to attack systems at scale. Unlike legitimate businesses, they operate without fear of failure or ethical constraint, allowing them to test and adapt their attacks at speed. For retailers and financial institutions, blocking AI agents outright risks disrupting the future of digital commerce. The challenge is to become more intelligent and flexible, moving beyond simple bot detection to identifying legitimate agents from malicious ones. This requires analysing behaviour over time and examining patterns and sequences of interaction to understand intent. Such behavioural insight enables support for AI-driven experiences while still detecting abuse, fraud, and automation designed to harm.”

The implications extend across multiple areas of FinTech, including digital banking, payments, embedded finance and eCommerce. As AI agents become intermediaries between consumers and financial systems, institutions must ensure that trust frameworks evolve alongside automation. This includes redefining identity verification, rethinking authentication models and ensuring compliance with regulatory expectations around fraud prevention and consumer protection.

Regulators are also likely to pay closer attention as AI-driven financial interactions scale. Questions around liability, accountability and transparency will become more prominent, particularly in cases where automated agents initiate transactions or make decisions on behalf of users.

As the FinTech sector continues to embrace AI as an operating layer for digital services, the ability to distinguish between beneficial automation and malicious exploitation is emerging as a strategic priority. Industry observers expect behavioural analytics and adaptive security models to play an increasingly central role as financial institutions seek to protect customers without stifling the next generation of AI-enabled commerce.
