AI-Driven Fraud Explained

May 31, 2024

With AI becoming intertwined in nearly every aspect of our lives, its exploitation by fraudsters has become a concerning reality. Let’s delve into the mechanics of AI-driven fraud and the strategies for fighting back.

A few years ago, artificial intelligence (AI) was a niche topic, discussed mainly by technology experts. That changed in 2022, when generative AI became accessible to the masses. Because the technology is driven by natural language, it offers a rich spectrum of applications even for non-technical professions. It was only a matter of time before fraudsters seized the opportunity.

AI: A double-edged sword

Just as AI makes everyday users more productive, it does the same for fraudsters. They can use it to easily generate compelling text, images, or even video, analyze large volumes of data, and scale and adapt their actions to specific situations, countries, and users. These capabilities can increase the success rate of scammers.

Since generative technology is still in its early stages, its frameworks, rules, and security features have yet to fully mature. As a result, it is not especially difficult for fraudsters to manipulate AI for their purposes. On top of that, copycat AI tools specially adapted for malicious activities – such as FraudGPT and WormGPT – can already be found on the dark web. Experts in technology and artificial intelligence, as well as organizations like Europol, the FBI, and the U.S. Department of the Treasury, have already warned about these risks.

Let’s take a look at examples of how fraudsters weaponize AI for malicious purposes. 

1. Making phishing more convincing

The use of LLMs (large language models) in fraud means that phishing is harder to detect. Language deficiencies, one of the primary red flags in fraudulent sites and messages, can be essentially eliminated with the help of AI tools. The same is true of visuals: fraudsters can create convincing-looking websites and messages in seconds.

Below is an example of a phishing email created by ChatGPT based on this brief:  

“Create an email addressed to Trevor Mark, informing him that due to a mistake in tax processing, the financial authority owes him 700 euros. He is required to enter his tax information on a designated website within 48 hours. The tone of the letter should be apologetic, yet serious and authoritative.” 

ChatGPT proactively added a security warning to the email, which, paradoxically, can add to the credibility of the phishing attempt.

2. Impersonation scams are more sophisticated

Impersonation scams are on the rise globally. In the United States, for instance, impersonation was the most common type of fraud in 2023. Here, too, generative AI proves highly effective.

Here is what a letter drafted by ChatGPT looks like, based on the following brief:

“Create a letter addressed to Trevor Mark, informing him of a mistake in his 2023 taxes. The letter should state that he owes 700 euros to the financial authority. It must be paid within two business days to avoid further penalties.”

3. Voice cloning

The biggest risk of AI, however, isn’t just persuasive words; it’s deepfakes. These are digitally manipulated videos or audio recordings, created or altered using sophisticated AI and machine learning to seem real. Deepfakes can convincingly depict individuals saying or doing things they never said or did, fundamentally amplifying the effectiveness of impersonation scams.

Authentically faking the voice of a person we know significantly boosts imposters’ success, making voice cloning increasingly popular among fraudsters. A McAfee survey of 7,000 respondents revealed that one in four had encountered AI voice cloning, either personally or through someone they know.

In certain parts of the world, such as India, AI voice scams have been particularly successful. Tenable reports that 43% of the population has fallen victim to these scams, with a staggering 83% losing money as a result. Furthermore, 69% of Indians struggle to differentiate between AI-generated voices and those of real humans.

4. Face-swapped deepfakes

Face-swapped deepfakes are immensely effective and convincing, and fraudsters use them to impersonate authority figures in investment scams. A notable example involves the well-known British financial journalist Martin Lewis, who was depicted in a deepfake video circulated on social media. The video falsely encouraged investments in Quantum AI, a fictional scheme purportedly launched by Elon Musk.

This case also illustrates how fraudsters use AI itself as a buzzword, drawing attention with false promises of high returns from AI-powered trading systems.

Furthermore, fraudsters are using deepfakes in elaborate schemes such as the “CEO scam”. In a recent example, a finance officer at an international company was tricked into paying $25 million to criminals, who used deepfake technology to mimic company executives during a group conference call and convinced the employee to authorize the payment.

Deepfakes are also frequently exploited in romance scams. In one widely reported case, a woman paid half a million dollars to a scammer posing as the Hollywood actor Mark Ruffalo, with whom she had fallen in love.

5. Data analysis at hand

AI’s capability to rapidly process and analyze vast amounts of data enables fraudsters to identify potential victims with unprecedented precision. By harvesting data from social media platforms and other digital footprints, AI algorithms can examine personal information, preferences, and online behavior to pinpoint individuals who may be more susceptible to scams or fraud.

This can include identifying patterns that suggest vulnerability, such as an inclination to click on risky links, a lack of online security measures, or even psychological traits inferred from online interactions and posts. AI can also help tailor phishing attacks or scams by analyzing which tactics have been most effective across different demographics, increasing the likelihood of success and turbocharging intelligence gathering for criminals.

6. Tricking facial recognition when opening mule accounts

As banks increasingly adopt facial biometrics to secure their onboarding processes, deepfakes become a significant challenge: they can potentially deceive facial recognition systems and allow criminals to open mule accounts.

7. Developing financial malware

AI can assist fraudsters in developing sophisticated financial malware by identifying vulnerabilities in existing fraud controls, optimizing code to evade detection by a bank’s security measures, and personalizing the malware for the targeted system. Experts have also warned that criminals can use generative AI to create polymorphic malware – malicious software designed to mutate its underlying code to avoid detection by antivirus and cybersecurity tools that rely on identifying known malware signatures.

How to prevent AI-enhanced fraud

The rise of AI and the proliferation of deepfakes have significantly escalated the sophistication of fraudulent activities and underscored the need for advanced fraud detection techniques. The solution is to leverage the very technologies that empower fraudsters: in other words, to combat AI with AI.

For a system to be able to detect sophisticated fraud, it needs to consider truly complex data that go far beyond the surface level. It’s here that ThreatMark’s approach stands out. By integrating behavioral intelligence, ThreatMark analyzes an array of data points, encompassing device specifics, potential threats, user identity and behavior, and transaction details. This in-depth analysis allows for the identification and evaluation of nuanced risk factors inherent in each session or transaction.

Read more about our Behavioral Intelligence Platform

ThreatMark’s behavioral intelligence is adept at countering a wide range of AI-related frauds. It not only detects phishing risks but also prevents scams and mitigates social engineering. The platform identifies anomalies such as transactions made during phone calls, irregular user behavior, and devices compromised by remote access tools or malware, enhancing security measures.
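To make the idea of combining such signals concrete, here is a deliberately simplified sketch in Python. It is purely illustrative: the signal names, weights, and threshold are assumptions invented for this example and do not represent ThreatMark’s actual data model or scoring logic.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    # Hypothetical signals of the kind described above; the names and
    # value ranges are assumptions for illustration only.
    behavioral_anomaly: float       # 0.0-1.0, deviation from the user's usual behavior
    new_or_emulated_device: bool    # device never seen before, or signs of emulation
    remote_access_tool: bool        # remote-access or screen-sharing software detected
    active_phone_call: bool         # transaction initiated during an ongoing call
    unusual_payee_and_amount: bool  # first-time payee combined with an atypical amount

def risk_score(s: SessionSignals) -> float:
    """Combine the individual signals into a single 0-1 risk score
    using hand-picked weights (a real system would learn these)."""
    score = 0.35 * s.behavioral_anomaly
    score += 0.15 if s.new_or_emulated_device else 0.0
    score += 0.20 if s.remote_access_tool else 0.0
    score += 0.15 if s.active_phone_call else 0.0
    score += 0.15 if s.unusual_payee_and_amount else 0.0
    return min(score, 1.0)

# Example: a payment made during a phone call, from a new device,
# with behavior that only loosely matches the genuine user.
session = SessionSignals(
    behavioral_anomaly=0.7,
    new_or_emulated_device=True,
    remote_access_tool=False,
    active_phone_call=True,
    unusual_payee_and_amount=True,
)

score = risk_score(session)
action = "step-up verification" if score >= 0.6 else "allow"
print(f"risk={score:.2f} -> {action}")
```

In practice, a behavioral intelligence platform derives far richer signals continuously across the whole session and learns how to weight them, rather than relying on a handful of hand-coded rules.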

The financial and emotional cost of fraud

Addressing and countering the risks associated with AI technologies used by fraudsters is essential. The consequences extend beyond financial losses for institutions – which are increasing due to the liability shift – and particularly affect customers. The impact on fraud victims is profound, undermining not just their financial stability but also their mental and physical well-being.

Fraud can have enduring effects, resulting in trauma and a deep-seated mistrust of digital platforms, interpersonal relationships, and financial systems. A survey from the Money and Mental Health Policy Institute reveals that 40% of online scam victims experience stress, while 28% report symptoms of depression due to fraud victimization.

Implementing advanced, state-of-the-art solutions capable of safeguarding customers against sophisticated AI-driven fraud thus becomes critically important.
