
Social Engineering Fraud: Detecting Manipulation Inside the Banking Session
It’s becoming increasingly difficult to find a fraud scheme that doesn’t involve some form of social engineering.
Let’s explore what social engineering means for today’s banking environment, why it consistently bypasses traditional controls, and what banks can do to better protect customers from manipulation-driven fraud.
What Is Social Engineering (and What It Isn’t)
Broadly speaking, social engineering is “any act that influences a person to take action that may or may not be in their best interest.”
In the context of banking fraud, it refers to the deliberate exploitation of psychology to influence people into taking specific actions: typically revealing sensitive information or authorizing a transaction that ultimately results in financial loss. In simple terms, it’s a modern form of a confidence trick.
Not every fraud involving human action is social engineering. First-party fraud, where a customer knowingly commits fraud, doesn’t qualify because there’s no manipulation involved. The same applies to purely technical attacks, where the attacker bypasses the user instead of influencing them. And while users do make mistakes, error alone isn’t social engineering.
Generally, social engineering involves the following characteristics:
- The user is manipulated or deceived, not technically bypassed.
- It exploits human psychology, especially trust.
- It typically forms part of a broader, more complex fraud scheme.
Social Engineering vs Scam vs Authorized Fraud
Social engineering, scams, and authorized fraud are often closely connected and discussed together—but they represent distinct concepts. Without going too deep into theory, here’s how they differ:
- Social engineering = the manipulation mechanism
The psychological techniques used by attackers to influence a victim’s behavior.
- Scam = the fraud scheme
A real-world fraud scenario that uses manipulation to drive the victim to take a harmful action.
- Authorized fraud = the broader fraud category
Fraud where the legitimate user performs the action themselves, typically under manipulation or deception.
Social Engineering Methods
Social engineering manifests in a range of techniques, each designed to manipulate users into actions that benefit the attacker. While these methods differ in execution, they all share a common goal: influencing user behavior rather than bypassing it.
Phishing
Phishing—malicious content designed to manipulate users into sharing sensitive information, transferring money, clicking malicious links, or taking other harmful actions—is one of the most common social engineering methods and a leading initial attack vector behind data breaches and credential theft.
AI is rapidly transforming phishing into a more sophisticated and scalable threat. Large language models (LLMs) enable attackers to generate highly personalized messages at scale, automate campaign creation, and accelerate the deployment of phishing infrastructure, making social engineering attacks more convincing and harder to detect.
Pretexting
Pretexting involves creating a fabricated scenario to manipulate the victim. The attacker typically poses as a trusted authority or someone able to resolve the situation—such as a police officer, bank employee, security manager, IT support, or a company executive or supplier in business email compromise (BEC) attacks. The scenario is carefully crafted to minimize doubt and appear credible to the victim.
To strengthen the pretext, fraudsters often gather information about their target in advance, frequently using social media profiles and publicly available sources. This kind of research is neither complex nor time-consuming. According to IBM, fraudsters can craft a convincing story using information from social media feeds and other public resources in as little as 100 minutes of online research.
Pretexting is a critical element that adds credibility to social engineering attacks. As a result, most social engineering attacks involve some degree of pretexting.
Baiting
Baiting lures victims into unknowingly giving up sensitive information or downloading malicious code by tempting them with something desirable or valuable. The bait could be a new movie, free music, or a hard-to-find product.
When the victim takes the bait, they may click a malicious link, visit a compromised website, or attempt to purchase the product, inadvertently exposing themselves to harm.
Quid Pro Quo
Quid pro quo, meaning “something for something”, is a social engineering technique where the attacker offers a service or benefit in exchange for sensitive information or access. For example, an attacker may pose as IT support, offering to resolve a technical issue while requesting login credentials.
Unlike baiting, which relies on curiosity or temptation without direct interaction, quid pro quo involves an explicit exchange: something is offered in return for access or information.
Scareware
Scareware uses fear to manipulate users into sharing sensitive information or downloading malware. It often appears as fake alerts or warnings, such as law enforcement notices accusing the user of a crime or technical support messages claiming the device is infected.
A typical example is a fake email or pop-up claiming, “Your bank account has been compromised. Verify your identity to avoid account suspension.” The user is then redirected to a phishing site or prompted to download malicious software disguised as a security tool.
Why Social Engineering Sits at the Core of Modern Fraud
Social engineering is a dominant driver of fraud. Some sources estimate that up to 98% of cyberattacks incorporate elements of social engineering. So what makes it so effective?
The Human Factor Behind Social Engineering
Since the story of the Trojan horse (a classic example of baiting), one pattern remains constant: humans are often the weakest link in any security system.
Social engineering works because it taps into how people think and decide. It exploits cognitive biases such as authority, trust, urgency, and reciprocity to override critical thinking and steer victims toward specific actions.
Its effectiveness is further amplified by attackers tailoring scenarios to the victim’s profile and preferences. In romance scams, for example, younger men are more often targeted through messages with a sexual subtext, often involving attackers posing as attractive women. Women, on the other hand, are more commonly targeted with narratives built on trust and emotional connection.
Compounding the problem, many victims feel embarrassed and choose not to report the incident, making the true scale of social engineering fraud significantly harder to measure.
How Technology Scales Social Engineering Attacks
The term “social engineering” was popularized in cybersecurity by Kevin Mitnick in the 1990s. Since then, the rise of the internet and digital banking has dramatically expanded the attack surface, giving criminals access to billions of potential targets worldwide.
While early attacks relied on simple human manipulation, today’s campaigns have evolved into complex, scalable operations. Attackers now use AI and machine learning to automate and personalize social engineering techniques such as phishing and pretexting. These technologies enable highly tailored messages, realistic impersonation, and rapid deployment of attack infrastructure.
As Dr. Nicola Harding, Criminologist and Founder of The Financial Crime Lab, noted in the Behind Enemy Lines podcast, “the biggest problem is not the tricks changing, but the fraudsters’ ability to scale.”
Why Social Engineering Bypasses Traditional Banking Controls
Traditional banking controls are primarily designed to detect unauthorized activity. However, in social engineering scenarios, the transaction is typically initiated by a legitimate user acting under manipulation.
From a system perspective, the session appears consistent with the user’s normal behavior. The identity is authenticated, the device is recognized, and the context does not immediately indicate compromise. Yet the intent behind the transaction is malicious.
This creates a structural blind spot. Controls based on static rules, thresholds, and known fraud patterns are not designed to capture manipulation-driven risk, and therefore may fail to identify these transactions as fraudulent.
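To make this blind spot concrete, here is a minimal sketch of a static, rules-based check evaluating a coached transaction. The rule names and thresholds are hypothetical illustrations, not any bank's actual rule set:

```python
# Minimal sketch of a static, rules-based fraud check.
# All field names and thresholds are hypothetical.

def static_rules_check(txn: dict) -> bool:
    """Return True if the transaction passes every static check."""
    checks = [
        txn["authenticated"],                 # identity verified (e.g. via SCA)
        txn["device_known"],                  # recognized device fingerprint
        txn["geo_usual"],                     # login from a familiar location
        txn["amount"] <= txn["daily_limit"],  # within the configured threshold
    ]
    return all(checks)

# A coached session: the legitimate customer authenticates and initiates
# the payment themselves, so every static check passes, even though the
# intent behind the transaction is fraudulent.
coached_txn = {
    "authenticated": True,
    "device_known": True,
    "geo_usual": True,
    "amount": 4_800,
    "daily_limit": 5_000,
}

print(static_rules_check(coached_txn))  # → True: the payment slips through
```

The point of the sketch is that every condition such a rule set can test is genuinely satisfied; nothing in the transaction's static attributes encodes the manipulation happening outside the session.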
The Anatomy of a Coached Session
Let’s look at what a banking session actually looks like when a customer is being guided by an attacker.
1. The manipulation phase
The main part of social engineering fraud happens (sometimes long) before the banking session. The attacker contacts the customer and uses social engineering techniques and methods to establish a believable narrative. The more effort the scammer invests in this phase, the better the fraud ROI.
This is crucial: by the time the customer opens their banking app, the fraud is already well underway.
2. The verified identity
The customer then logs in using correct credentials, a trusted device, and their usual location. From a controls perspective, everything looks valid.
Notably, multi-factor authentication is largely ineffective against social engineering attacks. According to the European Banking Authority (EBA), fraud rates are even higher for credit transfers authenticated with strong customer authentication (SCA) than for those exempted from SCA.
3. The controlled session
The attacker controls the session by controlling the customer, guiding them on where to click, what to enter, and what to ignore. This can happen via messaging platforms, phone calls, or remote access tools. Fraudsters may also bypass the banking app and go straight to the mobile browser, where safeguards are typically looser and the session is easier to steer using remote access tools.
With the right capabilities in place, these behavioral, device, and threat signals can be detected in real time.
4. The “clean” transaction
Once authentication is complete and the transaction passes standard checks, it is treated as legitimate by traditional fraud detection tools. This creates a fundamental gap: systems validate who is acting, but not why.
Even when a pop-up alert is triggered, the fraudster often encourages the customer to proceed.
5. The cash-out phase
Once the payment is sent, it is rapidly moved through a chain of money mule accounts. Funds are split, rerouted across multiple institutions, and often withdrawn or converted within minutes.
Without cross-bank visibility, stopping the flow is nearly impossible.
How Can Banks Detect Social Engineering Attacks?
Social engineering attacks, while sophisticated, are not undetectable or unpreventable. By focusing on the right signals, banks can detect them in real time.
Advanced fraud detection technology based on understanding how social engineering attacks unfold plays a critical role. The moment a user acts under the influence of a fraudster, their behavior changes, leaving subtle signals that indicate manipulation.
ThreatMark’s Behavioral Intelligence Platform detects coached behavior across a range of signals, including:
- Continuous Behavioral Biometrics: Each user has a unique pattern in how they hold their phone, operate it, and make touch gestures. When a fraudster obtains credentials through social engineering and attempts to log in, their interaction patterns will not match the legitimate user’s established behavioral profile. This mismatch is detected in real time throughout the entire session and is one of the most reliable indicators of a fraud attempt.
- Anomalous payment: Unusual payments (whether in amount, timing, priority, or beneficiary) can be a strong signal of social engineering, especially when viewed alongside what’s happening in the session.
- Active mobile phone call: A common social engineering tactic involves calling the victim on their mobile device and talking them through the transaction. An active call during a banking session is often an indicator of a scam in progress.
- Active remote access tool (RAT): When scammers convince an unsuspecting customer to install a remote access tool, they gain the ability to hijack the user’s legitimate session. Once the tool is activated, behavioral indicators (such as unusual mouse movements or keystroke patterns) can reveal that remote access is in use. ThreatMark extends this visibility to the mobile browser, helping improve detection accuracy in social engineering cases.
ThreatMark captures these in-session signals in real time, along with additional context about the device, user behavior, and the transaction itself. Rather than assessing signals in isolation, it evaluates them together to build a reliable, context-aware view of the entire session—enabling earlier and more accurate detection of social engineering attacks.
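As a rough illustration of what evaluating signals together (rather than in isolation) might look like, here is a hedged sketch of a combined session-risk score. The signal names, weights, and threshold are hypothetical, and a real platform would rely on trained models rather than hand-set weights:

```python
# Illustrative sketch of combining in-session signals into one risk score.
# Signal names and weights are hypothetical, not ThreatMark's actual model.

from dataclasses import dataclass

@dataclass
class SessionSignals:
    biometric_mismatch: float  # 0.0 (matches user's profile) .. 1.0 (clear mismatch)
    payment_anomaly: float     # 0.0 (typical payment) .. 1.0 (highly unusual)
    active_call: bool          # phone call ongoing during the banking session
    remote_access_tool: bool   # remote access tool (RAT) activity detected

def session_risk(s: SessionSignals) -> float:
    """Weighted combination of signals, capped at 1.0."""
    score = 0.35 * s.biometric_mismatch + 0.25 * s.payment_anomaly
    if s.active_call:
        score += 0.15
    if s.remote_access_tool:
        score += 0.25
    return min(score, 1.0)

# A coached session: behavior still roughly matches the real user, but an
# unusual payment made during an active phone call pushes the combined score
# past a review threshold that no single signal would reach on its own.
coached = SessionSignals(
    biometric_mismatch=0.2,
    payment_anomaly=0.8,
    active_call=True,
    remote_access_tool=False,
)
REVIEW_THRESHOLD = 0.4  # hypothetical cut-off for manual review
print(session_risk(coached) > REVIEW_THRESHOLD)
```

The design point is the same one the section makes: a mild biometric deviation or an odd payment alone may look benign, but the combination of signals within one session is what reveals a coached transaction.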
