Security experts sound the alarm on Deepfake threats in the age of AI
By Puja Sharma
While the time and effort to create these attacks often outweigh their potential ‘rewards’, Kaspersky warns that companies and consumers must still be aware that deepfakes will likely become more of a concern in the future
The widespread adoption of artificial intelligence (AI) and machine learning technologies in recent years is providing threat actors with sophisticated new tools to perpetrate their attacks. One of these is deepfakes: generated human-like speech, or photo and video replicas of people.
Kaspersky research has found deepfake creation tools and services available on darknet marketplaces. These services offer generative AI video creation for a variety of purposes, including fraud, blackmail, and stealing confidential data. According to estimates by Kaspersky experts, one minute of deepfake video can be purchased for as little as $300.
There are also concerns about the significant divide in digital literacy amongst Internet users. According to the recent Kaspersky Business Digitisation Survey, 51% of employees surveyed in the Middle East, Türkiye and Africa (META) region said they could tell a deepfake from a real image; however, in a test, only 25% could actually distinguish a real image from an AI-generated one. This puts organisations at risk, given that employees are often the primary targets of phishing and other social engineering attacks.
According to an IBSi report, AI-driven fraud remains the most prominent challenge across industries, with crypto the main target sector (representing 88% of all deepfake cases detected in 2023), followed by fintech (8%). In the APAC region, Vietnam and Japan rank highest for the prevalence of deepfake fraud. Japan, in particular, has seen notably widespread use of deepfakes in the entertainment sector, while Vietnam, with its rapidly growing digital economy and online-native population, stands as an appealing target for fraudsters.
To combat the prevalence of AI-powered fraud, countries are proactively introducing measures and regulations aimed at safeguarding businesses and individuals from the harmful impacts of AI, such as the effects of deepfakes. For instance, in October 2023 the Hong Kong Monetary Authority (HKMA) published a circular with enhanced measures to defend e-banking from fraudsters, including enhanced monitoring of suspicious transactions and additional customer authentication.
Cybercriminals can, for example, create a fake video of a CEO requesting a wire transfer or authorising a payment, which can be used to steal corporate funds. They can also create compromising videos or images of individuals and use them to extort money or information.
“Despite the technology for creating high-quality deepfakes not being widely available yet, one of the most likely use cases that will come from this is generating voices in real-time to impersonate someone. For example, a finance worker at a multinational firm was recently tricked into transferring $25 million to fraudsters after deepfake technology was used to pose as the company’s chief financial officer in a video conference call. It’s important to remember that deepfakes are a threat not only to businesses, but also to individual users – they spread misinformation, are used for scams, or to impersonate someone without consent – and are a growing cyberthreat to be protected from,” said Vladislav Tushkanov, Lead Data Scientist at Kaspersky.