As AI-powered fraud takes centre stage, avoid becoming another spectator

July 04, 2024


By Simon Horswell, Senior Fraud Specialist, Onfido

Generative AI has dominated the headlines for more than a year now. From providing simple, instant access to information and advice through large language models (LLMs) to accelerating business processes, it has captured the imagination of consumers and organisations alike. With models like ChatGPT, AI has been democratised to the point where almost anyone can benefit from its outputs. More than half of US employees report using generative AI to support day-to-day activities, while no fewer than four million workers in the UK have embraced platforms like ChatGPT for work.

Yet, with all the benefits brought by its widespread accessibility, the risks have also become amplified. Malicious actors, for instance, now see AI as a gateway to powering sophisticated fraud tactics, enabling them to scam around the clock and at scale. Mitigating this threat is an ever-evolving challenge and it begins with regulation and operational guidance. That’s why global lawmakers and regulators have been entering into new agreements, launching AI Safety Summits and partnering to design new restrictions and rules to protect both individuals and businesses.

But the game of cat-and-mouse shows no sign of slowing, particularly as AI is set to dominate the business agenda for the foreseeable future. So, what are the biggest fraud trends that business leaders should know about to keep themselves – and their customers – safe?

We are all at risk of deepfakes

In 2023, we witnessed a surge in AI-manipulated and synthesised media, with deepfake fraud attempts jumping 3000% year-on-year. Driven by the availability of simple-to-use AI tools, fraudsters scaled sophisticated attacks without needing technical skills or heavy resources.

Deepfakes are already hitting high-profile public figures: politicians such as Keir Starmer and Sadiq Khan have fallen victim to deepfake attacks, as have celebrities and business leaders. With a record-breaking 40-plus countries – representing more than 40% of the world’s population – due to hold elections this year, deepfakes have the potential to inflict significant reputational damage, destabilise trust in public leaders and spread misinformation. What’s more, as AI continues to attract public interest, an election provides an opportune moment for fraudsters to double down on deepfakes to cause disruption.

With this in mind, businesses need to remain vigilant. Onfido’s Identity Fraud Report shows that 80% of attacks on biometric systems, like those used in e-voting, were videos of videos displayed on a screen. This suggests fraudsters will always opt for the easiest, most cost-effective route. Should they go mainstream, deepfakes will be a major threat to contend with.

Smishing and phishing attacks are becoming ever more sophisticated

As AI becomes ever more accessible, malicious actors are taking advantage to orchestrate large-scale and highly convincing smishing (SMS phishing) and phishing attacks. The widespread availability of generative AI tools has lowered the barriers to entry, enabling cybercriminals to craft deceptive messages that appear more authentic than ever. This means the typical signs of a scam we all look for, such as spelling and grammar mistakes, will be harder to spot.

Attackers will continue to use Gen AI and LLMs in the likes of phishing and smishing to make the content and images appear more legitimate. But hope is not lost – as scams become more advanced, so does the AI used to keep businesses and individuals safe. We are seeing an AI vs AI battleground emerge, and it’s crucial that businesses deploy systems that have been trained on the very latest attack vectors so they can stay on top of the evolving threat landscape.

Social engineering attacks are here to stay

In contrast to using AI to create convincing spoofs, we’re increasingly witnessing the technology used for a much simpler attack – eliciting people’s private information through social engineering scams.

Cybercriminals have shown over the past year that sophisticated technology is not always necessary to carry out successful cyberattacks at scale. For example, Caesars Entertainment recently confirmed that a social engineering attack had stolen data from members of its customer rewards programme.

In identity verification, we’ve already seen a number of scams that convince unsuspecting victims to create fake accounts on the fraudsters’ behalf. Bad actors use several different scenarios to encourage people to complete the application process using their authentic documents and matching biometric images. These range from fake job ads to delivery drivers asking for proof of identity on people’s doorsteps. The difficulties around detecting these seemingly genuine applications mean they pose a real threat to many of the standard practices in remote identity verification, so businesses need to be ready with the right defences to combat these threats.

Regulation is gaining momentum

From the EU AI Act to the Online Fraud Charter, we’ve seen policymakers take significant steps to protect the public from fraud. Notably, the UK introduced a new Anti-Fraud Champion, responsible for driving collaboration between the government, law enforcement, and the private sector to help block fraud, which accounts for 40% of all crime.

The trend towards tighter regulation is here to stay, and businesses will see new rules come into force as governments try to stay one step ahead of bad actors. It’s important to note that failure to comply with these laws and regulations can have serious consequences, including fines, legal action, and reputational damage. For instance, since the start of 2024, UK banks have been required to refund customers who have been tricked by scammers in what is known as authorised push payment (APP) fraud. With losses to APP fraud reaching almost £500m in 2022, this is no doubt already raising significant concerns for banks.

New legislation will play a pivotal role in reducing fraud cases, but there is a delicate balance here. While it is imperative to address the risks posed by AI, we need to be careful not to demonise the technology completely, as this would detract from its role in innovation and providing robust safeguarding against new and emerging threats.

Going forward, the convergence of technology, regulation and criminal ingenuity will continue to define how fraud takes place. Businesses must stay vigilant, proactively embrace cybersecurity measures, and collaborate with regulators to navigate the delicate balance between leveraging AI’s capabilities and safeguarding against malicious intent. We’re at a pivotal moment in the ongoing battle against AI-driven fraud. Businesses need to make sure they are on the right side of the fight.
