Agentic AI to drive next wave of fraud in 2026
By Vriti Gothi

Financial institutions across the UK and Europe are preparing for a more volatile fraud landscape in 2026 as criminals adopt sophisticated technologies faster than banks and regulators can respond. Despite UK Finance data indicating that fraud losses stabilised in 2025, experts warn that the levelling-off disguises an escalation in attacker capabilities and a widening enforcement gap.
Industry analysts say the apparent plateau reflects a shift rather than a slowdown. Attackers are moving from traditional phishing and impersonation attempts to more complex methods that rely on agentic AI, remote-access malware, and long-form social engineering. These tactics enable criminals to scale operations, mimic legitimate behaviour and evade conventional fraud controls.
Jonathan Frost, Director of Global Advisory for EMEA at BioCatch, said: “The plateau masks a shift in tactics, with criminals increasingly relying on agentic AI, remote-access scams, and sophisticated social engineering.” With more than 7,000 remote purchase fraud attempts recorded daily in the UK, Frost warned that new consumer-technology features, such as remote device control on smartphones, could amplify the misuse of screen-sharing tools and mobile malware.
Agentic AI Reshapes the Threat Landscape
The emergence of agentic AI is expected to be one of the defining challenges of 2026. These autonomous systems, capable of completing tasks and making decisions independently, can be used to imitate genuine customer interactions with unprecedented precision. This raises the stakes for banks that must distinguish between human activity, legitimate automated behaviour and fraudulent AI-driven actions.
“In 2026, fraud will hinge on spotting the behavioural tells that separate humans from machines,” Frost said, noting that criminals often adopt emerging technologies faster than legitimate institutions.
While banks are investing in AI to enhance risk detection, the same tools give fraudsters greater reach, scalability and the ability to conduct hyper-personalised scams.
Big Tech Platforms Remain a Major Vector
Fraud originating on social media and online platforms continues to be a significant concern. Research indicates that around 70% of authorised push payment (APP) scams begin online, often through ads, impersonation pages or influencer-style content. Frost pointed to recent disclosures suggesting that a notable share of revenue on major platforms may be linked to fraudulent content.
With key elements of the UK’s Online Safety Act now pushed to 2027, the regulatory gap is likely to widen. “The delay gives criminals a clear runway to exploit social platforms with little resistance,” he said.
Regulatory Momentum Slows
Several major reforms, including the EU’s PSD3 and Payment Services Regulation, have slipped to 2027, raising concerns that policy responses are falling behind technological risks. Mandatory reimbursement schemes have provided victim support but have not reduced case volumes.
Collaboration and Behavioural Biometrics Gain Traction
Banks are increasingly prioritising prevention over remediation, with behavioural biometrics emerging as a critical tool for identifying AI-generated or manipulated interactions. Frost emphasised that stopping scams before payment occurs must be the industry’s focus. Real-time intelligence-sharing networks, already in use in markets such as Australia, are gaining attention as a potential model for the UK and EU.
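Behavioural biometrics of the kind Frost describes typically look for statistical “tells” in how input is produced rather than what is typed. As a minimal illustrative sketch only (not BioCatch’s method; the coefficient-of-variation feature and threshold here are assumptions chosen for demonstration), one simple tell is that scripted input tends to have unnaturally regular keystroke timing, while human typing cadence is noisy:

```python
import statistics

def looks_automated(keystroke_times, cv_threshold=0.15):
    """Flag a session whose inter-keystroke timing is suspiciously regular.

    Human typing cadence is noisy; scripted input tends to show a very low
    coefficient of variation (stdev / mean) across inter-key intervals.
    keystroke_times: timestamps (in seconds) of successive key events.
    """
    if len(keystroke_times) < 3:
        return False  # too few events to judge either way
    intervals = [b - a for a, b in zip(keystroke_times, keystroke_times[1:])]
    mean = statistics.mean(intervals)
    if mean <= 0:
        return True  # zero or negative gaps are not plausible human input
    cv = statistics.stdev(intervals) / mean
    return cv < cv_threshold

# A bot emitting one key every 100 ms exactly vs. a noisy human cadence.
bot_session = [0.1 * i for i in range(20)]
human_session = [0.0, 0.14, 0.31, 0.42, 0.68, 0.79, 1.05, 1.18, 1.44, 1.52]
```

Production systems combine many such signals (mouse trajectories, touch pressure, navigation patterns) and score them with trained models rather than a single fixed threshold; the sketch only conveys the underlying idea of separating human from machine behaviour statistically.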
“Only a united, cross-sector response will allow us to meaningfully turn the tide in 2026,” he said.
