
New Account Fraud: Essential Strategies for Detection and Prevention
New account fraud—once seen as a niche tactic—is now a mainstream threat.
From synthetic identities to AI-powered deception, fraudsters are evolving fast, and the cost to financial institutions is skyrocketing.
In 2022, PayPal revealed it had deleted 4.5 million fake accounts created by bots designed to exploit small $5 or $10 sign-up incentives. It wasn’t an isolated incident. It was a calculated attack on the company’s account-creation processes, exposing just how easily fraudsters can game onboarding systems at scale. And it’s not just fintechs at risk: any organization offering digital account creation is a potential target.
Recent innovations in technology and accessibility have made it easier than ever to open an account—whether at a bank, loan provider, or digital platform—entirely online and remotely, without stepping into a physical branch.
While this convenience is welcomed by most, it has also created fertile ground for a growing type of fraud: new account fraud. Criminals are now exploiting weaknesses in digital onboarding and identity verification systems to slip through the cracks.
The good news? Banks and organizations can defend themselves effectively—but it requires rethinking fraud detection and embracing innovation at the same pace as the attackers.
Let’s explore the risks and opportunities that come with fighting new account fraud in a digital-first world.
What Is New Account Fraud?
New account fraud—also known as account opening fraud or account creation fraud—is a type of identity fraud (much like account takeover fraud) where criminals open accounts using stolen or fabricated identities with the intent to commit financial crimes. These accounts are then used to apply for credit, launder money, abuse bonuses, or build trust before a final “bust-out.”
New account fraud affects a broad range of organizations—from banks and credit unions to fintechs, telecoms, online betting and gaming companies, and e-commerce platforms. Any institution offering accounts with financial or transactional value is at risk.
The Silent Threat: How New Account Fraud Works—and Why It Slips Through the Cracks
With recent shifts in digital banking and the growing competition from neobanks and fintechs, financial organizations are increasingly focused on maximizing user experience and bringing innovative, frictionless services—including onboarding. But this drive toward convenience comes with a trade-off: in cybersecurity, lower friction often means higher risk.
Fraudsters are quick to exploit these gaps. They use stolen identities, synthetic profiles, and deepfake technology to bypass verification systems and slip through digital defenses. The result? Fraudulent accounts (such as new bank accounts, credit card accounts, and others) that appear legitimate—until it’s too late.
Too often, financial institutions only detect the fraud after the damage is done—after credit lines have been abused, accounts used for money laundering, or fraud rings have used them to cash out stolen funds or test compromised cards. And by then, the cost—both financial and reputational—can be severe.
What the Data Says About New Account Fraud
Unfortunately, organizations often fail to detect fraudulent activity before the damage occurs. Frequently, the fraud is uncovered only after money laundering or other crimes have been committed, leading to significant financial losses. This is a crucial challenge, because the data shows that new account fraud is proliferating, and so is the damage to consumers and organizations.
According to Javelin, new account fraud losses in the U.S. totaled $3.9 billion in 2022 and reached $6.2 billion in 2024, a nearly 60% increase in just two years.
How Fraudsters Create and Exploit Fake Accounts
Identity theft
Identity theft is one of the oldest and most common tactics in the fraud playbook. Criminals use stolen personal information to impersonate real individuals during onboarding.
The personally identifiable information (PII) used in this type of fraud is typically obtained through data breaches or purchased on the dark web. These breaches are a major enabler of identity theft. For instance, the Identity Theft Resource Center reported a 312% increase in breach notifications in 2024 compared to the previous year, suggesting that nearly every adult in the U.S. may have been affected.
Another common way fraudsters collect PII is through social engineering—especially phishing and impersonation scams. In these attacks, criminals trick individuals into revealing sensitive data by posing as trusted entities such as banks, government agencies, or customer support teams.
Synthetic identities
Synthetic identity fraud happens when fraudsters create a fabricated persona by blending real and fake information—often including a legitimate social security number paired with a fictitious name or date of birth. These so-called “Frankenstein identities” can slip past onboarding checks and are used to open accounts, apply for loans, and commit long-term fraud.
Unlike traditional identity theft or account takeover fraud, which typically result in a quick, one-off loss before being detected, synthetic identity fraud is often a slow-burn scheme. Fraudsters may begin by applying for a store credit card with a fake identity, fully expecting rejection. But that application alone can generate a credit file—effectively bringing the identity to life.
With AI making these schemes more convincing and scalable, synthetic identity fraud is now considered the fastest-growing financial crime in the U.S., with losses projected to exceed $23 billion by 2030.
Bot attacks
Automated bot attacks have become a powerful tool for fraudsters looking to exploit account opening processes. Bots can rapidly submit thousands of fake or synthetic applications, bypass basic KYC checks using stolen or fabricated data, and even mimic human behavior to avoid detection. They’re commonly used to create fraudulent accounts at scale, abuse incentive programs, or prepare sleeper accounts for future exploitation.
For banks and other organizations, this means not only more illegitimate accounts to detect—but also increased pressure on onboarding systems that were never designed to handle such sophisticated, high-volume automation.
In 2024, automated bot traffic surpassed human-generated traffic for the first time in a decade, accounting for 51% of all global web activity. This spike is largely driven by the increasing availability of AI-powered tools that make bot creation easier and more effective than ever.
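The kinds of signals defenders use against automated signups can be illustrated with a toy sketch. The detector below (a simplified illustration, not any vendor’s actual product logic; all thresholds are hypothetical) flags two classic bot tells: bursts of applications from one IP inside a sliding time window, and forms completed faster than a human plausibly could.

```python
from collections import defaultdict, deque
from dataclasses import dataclass

@dataclass
class SignupAttempt:
    ip: str
    timestamp: float          # seconds since epoch
    form_fill_seconds: float  # time from page load to form submit

class BotSignupDetector:
    """Toy heuristic detector: flags signup bursts from a single IP
    and forms filled implausibly fast. Thresholds are illustrative."""

    def __init__(self, max_per_window=5, window_seconds=60.0,
                 min_fill_seconds=3.0):
        self.max_per_window = max_per_window
        self.window_seconds = window_seconds
        self.min_fill_seconds = min_fill_seconds
        self._recent = defaultdict(deque)  # ip -> recent timestamps

    def assess(self, attempt: SignupAttempt) -> list[str]:
        flags = []
        q = self._recent[attempt.ip]
        # drop attempts that fell outside the sliding window
        while q and attempt.timestamp - q[0] > self.window_seconds:
            q.popleft()
        q.append(attempt.timestamp)
        if len(q) > self.max_per_window:
            flags.append("velocity")          # too many signups, same IP
        if attempt.form_fill_seconds < self.min_fill_seconds:
            flags.append("superhuman_speed")  # filled faster than a human
        return flags
```

Real-world systems layer many more signals (device fingerprints, header anomalies, behavioral biometrics) on top of such velocity checks, since sophisticated bots rotate IPs and throttle their own speed.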
Deepfakes
Deepfake technology is quickly becoming a preferred weapon for fraudsters trying to bypass biometric verification during onboarding. Using AI-generated images, videos, or voice recordings, attackers can convincingly impersonate real individuals—or even create entirely fake ones from scratch.
This poses a serious challenge for banks that rely on facial recognition or video KYC. While many systems include liveness checks—like blinking or head-turning—modern deepfakes can mimic these actions with increasing precision, making them harder to detect.
The scale of the threat is growing fast. Deepfake incidents targeting financial institutions surged by 700% in 2023, and deepfakes now account for 40% of all video biometric fraud attempts, as attackers become more sophisticated in their use of AI-generated content.
Fake documents
Fake documents are a critical enabler of new account fraud, helping fraudsters pass identity checks and slip through onboarding undetected. Forged IDs, passports, residential visas, utility bills, and income statements are commonly used to support stolen or synthetic identities as part of what’s known as application fraud.
Many businesses still rely on manual reviews of documents, but as onboarding speeds up and customer expectations shift, automation is becoming inevitable. But when this automation isn’t supported by robust fraud detection and verification tools, it creates a critical vulnerability—enabling fake documents to be processed at scale. The result? More fake accounts, and more financial and reputational damage.
As Jan Indra noted on the Behind Enemy Lines podcast, “The fake documents market is massive—it’s much bigger than one would think. There are hundreds of fake document vendors operating openly on the web. It’s a full-blown industry.” With such easy access to high-quality forgeries—often enhanced by AI—document fraud is a growing threat that financial institutions can’t afford to ignore.
Why Traditional Fraud Detection Is Failing
As new account fraud tactics become more advanced, legacy identity verification methods are falling behind. Many institutions still rely on rigid, rule-based fraud detection models that don’t adapt to evolving threats. This creates a dangerous blind spot—allowing fraud to slip through undetected while generating a high volume of false positives that frustrate legitimate customers and add unnecessary friction to the user journey.
The same goes for organizations that still depend on manual document reviews. As fake documents grow more sophisticated and convincing, human reviewers alone often can’t keep up. What once worked is now proving inadequate in the face of AI-enhanced forgeries and increasingly scalable attacks.
Some companies are even rethinking their incentive programs, which are commonly exploited by fraudsters. For instance, after purging 4.5 million illegitimate accounts, PayPal changed its approach to customer acquisition—acknowledging how easily such programs can be abused at scale.
But the most urgent shift must happen at the core: fraud prevention strategies need to evolve. That means moving beyond static rules and manual checks toward dynamic, intelligent systems that can detect and adapt to AI-powered fraud tactics in real time.
What Banks Need to Do Differently
Fighting new account fraud requires more than patching old systems—it demands a fundamental shift in how fraud risk is understood and managed at the onboarding stage. Here’s what forward-looking institutions should prioritize:
1. Adopt a Risk-Based Approach
Not every new account carries the same level of risk. Banks should move away from blanket checks and adopt a risk-based model that evaluates each onboarding attempt using contextual and behavioral data—such as device intelligence, location, and application patterns. This allows teams to focus resources where the actual risk lies.
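A risk-based model can be pictured as a weighted combination of onboarding signals that routes each application to a proportionate level of scrutiny. The sketch below is a minimal illustration of the idea (not ThreatMark’s actual scoring model; the signal names, weights, and routing thresholds are all hypothetical).

```python
# Hypothetical signal weights for a toy onboarding risk model.
RISK_WEIGHTS = {
    "new_device": 0.2,        # device never seen before
    "ip_geo_mismatch": 0.3,   # IP country differs from stated address
    "disposable_email": 0.25, # throwaway email domain
    "velocity_flag": 0.4,     # many applications share this device/IP
}

def risk_score(signals: dict[str, bool]) -> float:
    """Sum the weights of every signal that fired, capped at 1.0."""
    score = sum(w for name, w in RISK_WEIGHTS.items() if signals.get(name))
    return min(score, 1.0)

def route(score: float) -> str:
    """Risk-based routing: low risk sails through, medium risk gets
    step-up verification, high risk goes to manual review."""
    if score < 0.3:
        return "auto_approve"
    if score < 0.6:
        return "step_up_verification"  # e.g. extra document or biometric check
    return "manual_review"
```

The point of the design is proportionality: most legitimate applicants trigger few or no signals and experience zero added friction, while review effort concentrates on the small high-risk tail.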
ThreatMark’s Behavioral Intelligence Platform, for example, analyzes a wide range of subtle digital signals—captured across devices and throughout the user journey—enabling organizations to assess risk in context rather than relying on static, rule-based models. This holistic approach helps prioritize real threats while minimizing friction for legitimate users.
2. Enhance Identity Verification
Static KYC checks are no longer enough. Institutions should layer multiple verification methods—combining biometric authentication, behavioral analytics, and AI-powered document verification. This multi-layered defense makes it significantly harder for fraudsters to game the system using fake or synthetic identities.
ThreatMark’s Behavioral Intelligence Platform adds a critical layer of insight to the onboarding process—even before identity verification begins. By analyzing users’ cognitive and behavioral patterns in real time, the platform can distinguish between legitimate and high-risk activity, helping organizations prevent new account fraud before it takes root.
3. Leverage AI and Machine Learning
Today’s fraudsters are agile, fast, and tech-savvy. Banks must counter this with AI-driven systems that learn and adapt in real time. Machine learning models can detect subtle anomalies, flag emerging fraud patterns, and reduce false positives—something traditional rules-based systems can’t do.
ThreatMark’s Behavioral Intelligence Platform uses advanced machine learning to analyze subtle behavioral signals—like mouse movements, typing cadence, navigation flow, and on-page interactions—captured during user activity. These insights not only help detect sophisticated fraud attempts but also distinguish human behavior from bots, making it a powerful defense against automated attacks during onboarding.
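One intuition behind such behavioral signals: human typing is irregular, while scripted input tends to be near-perfectly regular. The toy function below (a simplified illustration, not the platform’s actual model; the threshold is hypothetical) scores the regularity of inter-keystroke intervals with the coefficient of variation and flags suspiciously uniform input.

```python
import statistics

def keystroke_regularity(intervals: list[float]) -> float:
    """Coefficient of variation of inter-keystroke intervals (seconds).
    Human typing is irregular (high CV); scripted input is often
    near-perfectly regular (CV close to 0)."""
    mean = statistics.mean(intervals)
    if mean == 0:
        return 0.0
    return statistics.pstdev(intervals) / mean

def looks_automated(intervals: list[float], threshold: float = 0.15) -> bool:
    # Hypothetical cutoff; real systems learn thresholds from labeled
    # data and combine many features, not just one statistic.
    return keystroke_regularity(intervals) < threshold
```

Production models combine dozens of such features (dwell times, mouse trajectories, scroll dynamics) in trained classifiers; a single statistic like this is only a building block.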
4. Connect the Dots Across Channels
Fraud doesn’t happen in a vacuum. Fraudsters move across channels and devices, testing weak spots as they go. That’s why banks need to implement cross-channel monitoring—linking signals from mobile, web, and in-branch interactions to get a full view of risk and stop coordinated fraud attempts.
ThreatMark’s Behavioral Intelligence Platform provides deep, cross-channel insights into fraud patterns—enabling organizations to detect coordinated activity, disrupt fraud rings, and protect customers at scale.
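A common first step in surfacing coordinated activity is linking accounts that share identifiers across channels. The sketch below (an illustrative simplification, not any vendor’s implementation; the attribute names are assumptions) clusters account IDs by shared device fingerprint, phone number, or IP address.

```python
from collections import defaultdict

def link_by_shared_attributes(accounts: list[dict]) -> dict[str, set[str]]:
    """Group account IDs that share a device fingerprint, phone, or IP.
    Attributes shared by multiple accounts are candidate ring links."""
    clusters = defaultdict(set)
    for acct in accounts:
        for key in ("device_id", "phone", "ip"):
            value = acct.get(key)
            if value:
                clusters[f"{key}:{value}"].add(acct["account_id"])
    # keep only attributes shared by more than one account
    return {attr: ids for attr, ids in clusters.items() if len(ids) > 1}
```

In practice these pairwise links feed a graph of accounts and identifiers, where connected components spanning many “unrelated” customers are a strong fraud-ring indicator.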
Learn more about Behavioral Intelligence
The Future of New Account Fraud Prevention
The fight against new account fraud is only going to intensify. As digital onboarding becomes the default, financial institutions are under increasing pressure to deliver fast, seamless user experiences—while staying compliant and secure.
Regulatory expectations are rising, especially in regions like the EU, where PSD3 is expected to enforce stricter controls around identity verification, liability, and fraud mitigation. Banks will need to invest in fraud prevention tools that meet these standards—without adding unnecessary friction to the customer journey.
To strike that balance, AI-driven fraud detection will play a central role. Adaptive, intelligent systems can evaluate behavioral patterns, detect anomalies in real time, and respond dynamically—enabling institutions to spot fraud without slowing down onboarding for genuine customers.
But even the most advanced tools won’t be enough in isolation. Collaboration will be key. There will be a growing need for fraud intelligence platforms that enable secure collaboration between institutions—allowing them to share real-time insights on emerging fraud tactics and money mule activity.
New account fraud is evolving fast—but so can the defenses. The institutions that succeed will be those that innovate together, stay agile, and treat onboarding as both a business opportunity and a critical security checkpoint.