FinTech automation forces rethink of fraud controls
By Vriti Gothi

The growing integration of AI-powered agents into digital commerce and financial services is reshaping how consumers browse, transact, and manage their money while simultaneously introducing a new and complex set of fraud risks for banks, payment providers, and retailers.
From general-purpose AI tools such as OpenAI's ChatGPT and Perplexity to retailer-developed shopping assistants and banking copilots, AI agents are increasingly being entrusted with tasks that involve sensitive financial decisions. These tools can search for products, compare prices, initiate payments, manage subscriptions, and, in some cases, interact directly with financial accounts. While this evolution promises greater convenience and automation, it also expands the surface area that fraudsters can target.
Traditional fraud prevention systems, which often rely on rules-based controls, static thresholds or basic bot detection, are proving increasingly insufficient in this new environment. As AI agents become more capable of mimicking human behaviour and executing complex sequences of actions, distinguishing between legitimate automated activity and malicious automation becomes significantly more difficult.
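The weakness described above can be sketched in a few lines. The check below is a deliberately simplified, hypothetical example of a rules-based control (the thresholds, field names, and bot flag are illustrative, not drawn from any real system): an automated agent that paces itself under every static threshold passes without triggering a single rule.

```python
# Illustrative sketch of a static, rules-based fraud check.
# All thresholds and fields are hypothetical; real systems are far richer.
from dataclasses import dataclass


@dataclass
class Transaction:
    amount: float       # transaction value
    tx_per_hour: int    # account velocity over the past hour
    is_known_bot: bool  # naive user-agent / fingerprint bot flag


def rules_based_flag(tx: Transaction) -> bool:
    """Flag the transaction if any static rule fires."""
    return (
        tx.amount > 5000        # static amount threshold
        or tx.tx_per_hour > 20  # static velocity threshold
        or tx.is_known_bot      # basic bot detection
    )


# An AI agent making modest, well-paced purchases clears every rule:
agent_tx = Transaction(amount=120.0, tx_per_hour=3, is_known_bot=False)
print(rules_based_flag(agent_tx))  # False: nothing fires
```

Because each rule evaluates one attribute in isolation, automation that stays inside every individual limit is indistinguishable from a legitimate customer, which is precisely the gap behavioural approaches aim to close.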
For banks and retailers, simply blocking automated agents altogether is not a viable long-term strategy. Many organisations are actively investing in AI-driven customer experiences to improve efficiency, reduce friction and stay competitive in digital-first markets.
This shift is driving growing interest in behavioural intelligence as a core component of fraud prevention. Rather than focusing solely on individual transactions or login attempts, behavioural approaches analyse how users, whether human or automated, interact with digital systems over time. This includes examining interaction patterns, timing, navigation flows and sequences of behaviour to assess intent.
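One timing-based signal of the kind described above can be sketched as follows. This is a hypothetical illustration, not a production detector: scripted automation often acts with near-uniform gaps between events, while human sessions are irregular, so the spread of inter-event gaps relative to their mean offers a crude behavioural cue. The function names and the threshold are assumptions made for the example.

```python
# Minimal sketch of one behavioural signal: inter-event timing regularity.
# Very uniform timing between actions can hint at scripted automation.
import statistics


def timing_regularity(event_times: list[float]) -> float:
    """Coefficient of variation of the gaps between events.

    Lower values mean more clockwork-like, uniform timing.
    """
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    mean_gap = statistics.mean(gaps)
    return statistics.pstdev(gaps) / mean_gap if mean_gap else 0.0


def looks_scripted(event_times: list[float], threshold: float = 0.1) -> bool:
    """Flag sessions whose timing is suspiciously uniform (illustrative)."""
    return timing_regularity(event_times) < threshold


human_session = [0.0, 1.3, 2.1, 4.8, 5.2, 7.9]  # irregular gaps
agent_session = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]  # clockwork gaps
print(looks_scripted(human_session), looks_scripted(agent_session))  # False True
```

In practice such a signal would be one input among many: navigation order, dwell times, and sequence patterns would be combined, since sophisticated agents can deliberately randomise their timing.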
The implications extend across multiple areas of FinTech, including digital banking, payments, embedded finance and eCommerce. As AI agents become intermediaries between consumers and financial systems, institutions must ensure that trust frameworks evolve alongside automation. This includes redefining identity verification, rethinking authentication models and ensuring compliance with regulatory expectations around fraud prevention and consumer protection.
Regulators are also likely to pay closer attention as AI-driven financial interactions scale. Questions around liability, accountability and transparency will become more prominent, particularly in cases where automated agents initiate transactions or make decisions on behalf of users.
As the FinTech sector continues to embrace AI as an operating layer for digital services, the ability to distinguish between beneficial automation and malicious exploitation is emerging as a strategic priority. Industry observers expect behavioural analytics and adaptive security models to play an increasingly central role as financial institutions seek to protect customers without stifling the next generation of AI-enabled commerce.
IBSi FinTech Journal

