
Financial fraud among the top deepfake threat scenarios in 2023

By Puja Sharma

June 14, 2023

  • Anti fraud
  • Cybercrime
  • Cybercriminals


The number of deepfake videos online is increasing at an annual rate of 900%, according to the World Economic Forum (WEF). Many deepfake fraud cases have made major headlines, with reports relating to harassment, revenge, and crypto scams.

The use of neural networks and deep learning (hence ‘deepfake’) allows users all over the world to combine image, video, and audio material into realistic videos in which a person’s face or body has been digitally altered to make them appear to be someone else. These manipulated videos and images are frequently used for malicious purposes, such as spreading false information.

Financial fraud

Deepfakes can be used for social engineering, where criminals use synthetic images to impersonate celebrities and bait victims into scams. For example, an artificially created video of Elon Musk promising high returns from a dubious cryptocurrency investment scheme went viral last year, causing users to lose their money. To create deepfakes like this one, scammers use footage of celebrities or splice together old videos, then launch live streams on social media platforms promising to double any cryptocurrency payment sent to them.

Business risks

Deepfakes are also used to target businesses for crimes such as extortion of company managers, blackmail, and industrial espionage. In one known case, cybercriminals deceived a bank manager in the UAE and stole $35 million using a voice deepfake; a short recording of an employee’s boss’s voice was enough to generate a convincing fake.

In general, the aims of scammers who exploit deepfakes include disinformation, manipulation of public opinion, blackmail, and even espionage. According to an FBI warning, HR managers are already on alert for deepfakes used by candidates applying for remote work. In the case of Binance, attackers used images of people found on the Internet to create deepfakes and were even able to add those people’s photos to resumes. If they manage to trick HR managers this way and later receive an offer, they can steal employer data.

While the number of deepfakes is increasing, creating them remains an expensive type of fraud that requires a big budget. To prepare an attack, cybercriminals need a large amount of data: photos, videos, and audio of the person they want to impersonate. Different angles, lighting brightness, and facial expressions all play a big role in the final quality. For a realistic result, up-to-date computing power and software are necessary. All this demands significant resources and is available only to a small number of cybercriminals. Despite the dangers a deepfake can pose, it therefore remains an extremely rare threat that only a small number of buyers can afford: the price of one minute of a deepfake can start from 20,000 US dollars.

The most serious threat that deepfakes pose to business is not always the theft of corporate data. Sometimes reputational risks have very severe consequences: imagine a video is published in which your executive (apparently) makes polarizing statements on sensitive issues. For corporations, this can quickly lead to a crash in share prices.

“However, even though the risks of such a threat are extremely high, the chance that you will be attacked in this way remains extremely low due to the cost of creating deepfakes and the fact that few attackers can create a high-quality deepfake,” said Dmitry Anikin, a senior security expert at Kaspersky.

“What you can do today is be aware of the key characteristics of deepfake videos to look out for and keep a skeptical attitude to voicemail and videos you receive. Also, ensure your employees understand what a deepfake is and how they can recognize it: for instance, jerky movement, shifts in skin tone, strange blinking or no blinking at all, and so on,” Anikin added.
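The blinking tell mentioned above is one of the few cues that can be checked programmatically. A common heuristic in deepfake-detection research is the eye aspect ratio (EAR): given six landmark points around an eye, the ratio of the vertical lid distances to the horizontal corner distance drops sharply when the eye closes, so a video with no EAR dips over many frames shows the "no blinking at all" artifact. The sketch below is illustrative only, not Kaspersky's method; it assumes landmark coordinates have already been extracted by some face-tracking library, and the 0.2 threshold is a conventional rule of thumb, not a value from the article.

```python
import math

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR from six (x, y) eye landmarks: p1/p4 are the horizontal
    eye corners, p2/p3 the upper lid, p6/p5 the lower lid.
    Open eyes give roughly 0.25-0.35; closed eyes approach 0."""
    vertical = math.dist(p2, p6) + math.dist(p3, p5)
    horizontal = math.dist(p1, p4)
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, threshold=0.2):
    """Count closed-eye episodes in a per-frame EAR series.
    A run of consecutive below-threshold frames counts as one blink."""
    blinks = 0
    closed = False
    for ear in ear_per_frame:
        if ear < threshold and not closed:
            blinks += 1
            closed = True
        elif ear >= threshold:
            closed = False
    return blinks

# Hypothetical landmark sets for an open and a nearly closed eye.
open_eye = [(0, 0), (1, 1), (3, 1), (4, 0), (3, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.1), (3, 0.1), (4, 0), (3, -0.1), (1, -0.1)]
print(eye_aspect_ratio(*open_eye))    # 0.5 (eye open)
print(eye_aspect_ratio(*closed_eye))  # 0.05 (eye closed)
print(count_blinks([0.5, 0.5, 0.05, 0.05, 0.5, 0.05, 0.5]))  # 2 blinks
```

In practice a real clip at 30 fps should show a below-threshold dip every few hundred frames; a long series that never dips (or dips at a mechanically regular interval) is a reason for the skepticism Anikin recommends, not proof of a fake on its own.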

Continuous monitoring of darknet resources provides valuable insights into the deepfake industry, allowing researchers to track the latest trends and activities of threat actors in this space. By monitoring the darknet, researchers can uncover new tools, services, and marketplaces used for the creation and distribution of deepfakes. This type of monitoring is a critical component of deepfake research and helps improve our understanding of the evolving threat landscape.
