Is AI the New Weapon in Chargeback Fraud?
By Puja Sharma
The current interest in generative artificial intelligence (AI) has led some to conclude that we may see a new wave of AI-enabled fraud. However, the payments industry shouldn’t overestimate the abilities of large language models (LLMs) or underestimate the power of existing anti-fraud systems, says the leading chargeback technology platform, Chargebacks911.
AI has played a vital part in anti-fraud and anti-chargeback operations for over a decade, long before the current wave of enthusiasm for AI (specifically, for large language models like ChatGPT). It has been used for everything from checking the validity of individual chargeback claims to consolidating compelling evidence to overturn fraudulent chargebacks. Given the sheer quantity of transaction disputes, it would be impossible for human operators to examine and contest every chargeback, or even a small fraction of them. But while AI is the leading tool for businesses seeking to avoid losing money to chargeback abuse, the same technology could have unwelcome consequences in the hands of malicious customers engaging in friendly fraud.
“The same technology that allows companies to prevent chargebacks could also be turned to automating the creation of false chargebacks of far better ‘quality’ than those produced by amateurs,” says Monica Eaton, CEO of Chargebacks911. “Doing so would allow bad actors to work at far larger scales, with chargeback claims that stand a higher chance of evading detection.”
According to Eaton, this would be a disaster for online merchants, and it raises the question: “What are the capabilities of AI-powered fraud, and what can companies do to blunt its impact?”
AI and Large Language Models
The first and perhaps most important aspect of the current craze for artificial intelligence is understanding the difference between ‘true’ AI and large language models.
A large language model works by ingesting and annotating vast amounts of written text in order to find patterns and generate realistic messaging and responses. While an LLM can recognise requests and produce responses of its own, these models have a serious problem with ‘hallucinations’: basic mistakes caused by a host of factors, such as incomplete or noisy training data or a misreading of context. This prevents them from ever attaining ‘artificial general intelligence’ (AGI), the state of being truly indistinguishable from human intellect, and makes them unsuitable for many commercial applications, especially where current, up-to-date data is needed. LLMs can’t understand human requests, but they can convincingly match their output to our input.
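To make that last point concrete, here is a deliberately tiny sketch of the pattern-matching idea: a toy bigram model that predicts the next word purely from word frequencies in its training text. Production LLMs are vastly more sophisticated neural networks; the training snippet and function names below are invented purely for illustration.

```python
# Toy illustration: a bigram "model" that predicts the next word purely
# from patterns counted in its training text. It has no understanding of
# meaning; it only matches its output to statistical regularities in the
# input, which is the core point made above.
from collections import Counter, defaultdict

training_text = (
    "the chargeback was filed the chargeback was reversed "
    "the refund was issued the dispute was filed"
)

# Count which word tends to follow each word in the training data.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower seen in training, or '?' if unseen."""
    if word not in follows:
        return "?"  # input outside the training distribution: no pattern to match
    return follows[word].most_common(1)[0][0]

print(predict_next("chargeback"))  # 'was', learned from frequency, not meaning
print(predict_next("merchant"))    # '?', the model has never seen this word
```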
According to Eaton: “The machine-learning algorithms used by Chargebacks911 and other anti-fraud companies aren’t designed to mimic humans; they are built to extract certain information from a dataset. They do not make ‘decisions’ with this information, but rather follow decision trees based on parameters set by their overseers. These systems can be very sophisticated, up to the point of being able to improve themselves, but they are not ‘intelligence’ in any real sense, and perhaps this is for the best.”
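A minimal sketch of what such a parameterised decision tree might look like is shown below. The fields, thresholds, and outcomes are all hypothetical, chosen only to illustrate the structure Eaton describes; this is not Chargebacks911’s actual logic.

```python
# Hypothetical sketch of a parameterised decision tree for triaging a
# dispute: the system extracts a few features and walks fixed rules set
# by human overseers. Thresholds and field names are invented.
from dataclasses import dataclass

@dataclass
class Dispute:
    amount: float             # transaction value in the merchant's currency
    days_since_purchase: int  # age of the transaction when disputed
    prior_disputes: int       # disputes previously filed by this customer

def assess(dispute: Dispute) -> str:
    """Follow a fixed decision tree; no 'intelligence', just branching."""
    if dispute.prior_disputes >= 3:
        return "flag: repeat disputer"
    if dispute.days_since_purchase > 90:
        return "flag: stale claim"
    if dispute.amount < 10.0:
        return "auto-accept: below contest threshold"
    return "route to evidence review"

print(assess(Dispute(amount=250.0, days_since_purchase=120, prior_disputes=1)))
# -> 'flag: stale claim'
```

The parameters (the 90-day window, the $10 floor) are exactly the kind of overseer-set values Eaton refers to: they can be tuned, or even re-tuned automatically, without the system ever “deciding” anything on its own.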
Eaton adds: “For this reason, LLMs have limited applications in fighting chargebacks. Aside from acting as a responsive customer service tool, the ability of LLMs to generate large amounts of relatively convincing, but often inaccurate, text isn’t going to move the needle on the epidemic of chargeback fraud.”
Combating AI-Enabled Chargeback Fraud
“It seems entirely possible that LLMs could be used to create large amounts of relatively convincing written content to support fraud. Is this the death knell for our efforts to fight chargeback fraud? Now that fraudsters are using the latest generation of AI, are anti-fraud companies outgunned?” asks Eaton.
“In a sense, no,” answers Eaton. “Creating written content is not a skill that is in very high demand when it comes to effectively carrying out online fraud. The anti-fraud systems used not just by Chargebacks911 but by every major payments company look for much more than written content; they analyse potentially thousands of signals, however small and seemingly insignificant, to build a complete threat assessment of each transaction and chargeback request. Even if the written elements submitted by fraudsters are perfectly crafted, there are still more than enough chokepoints where fraudulent information is detected, and our track record shows that our constantly updated systems are more than capable of alerting merchants to AI-enabled fraud.”
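The multi-signal idea in that quote can be sketched as a simple weighted score, in which no single signal, including polished written evidence, decides the outcome on its own. The signal names and weights below are hypothetical, invented only to illustrate the aggregation pattern.

```python
# Hedged sketch of multi-signal threat scoring: a score is aggregated
# across many independent checks, so the written claim is only one weak
# input among many. All signal names and weights are hypothetical.
SIGNAL_WEIGHTS = {
    "ip_matches_billing_region": -1.0,  # consistent geolocation lowers risk
    "device_seen_before":        -0.5,
    "delivery_confirmed":        -2.0,  # proof of delivery undercuts 'item not received'
    "claim_text_generic":         0.5,  # polished-but-generic text is one weak signal
    "velocity_spike":             2.0,  # burst of disputes from one account
}

def threat_score(signals: dict[str, bool]) -> float:
    """Sum the weights of all signals that fired for this chargeback."""
    return sum(weight for name, weight in SIGNAL_WEIGHTS.items() if signals.get(name))

observed = {
    "ip_matches_billing_region": True,
    "delivery_confirmed": True,
    "claim_text_generic": True,  # even flawless AI-written text barely moves the score
    "velocity_spike": True,
}
print(threat_score(observed))  # -0.5: the other signals outweigh the written claim
```

The design point is that an AI-written claim only perturbs one of many inputs; the geolocation, device, and delivery signals still dominate the final assessment, which is why better prose alone does not outgun these systems.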