Banks Have Excellent Cyber Defenses, Yet They Are Useless. “It’s Like a System Shock”

May 11, 2026

All it takes is $50 and tools bought on the dark web, and you can launch the most effective scam. The cost of launching a scam is going down, while the cost of protection is going up.

→ Joanna Sosnowska: Let’s say I am bored with the media and the legal world, and I invite you to start a data‑stealing company. How much money do we need to get started?

Lukáš Jakubíček, cybersecurity expert at ThreatMark and speaker at the Banking Fraud Summit in Bucharest: I’m in. We’ll buy most of what we need online. I’d start by planning the perfect scam. Today, it looks like this: we create an ad and publish it on Facebook, in the Google ecosystem, or on another platform. What do we choose?

OK, we write: “Invest with us. Join a revolutionary crypto platform. Earn $5,000 back in a week.” To be sure, we add a deepfake featuring a celebrity encouraging people to invest. The ad leads to a phishing site, one designed to steal data. There, people will leave their contact information; all we need is a name and a phone number.

Once we have their contact info, we need to set up a call center that will call those numbers and pretend to be, for example, a bank employee who is concerned about activity on the account. They will say that the money must be transferred immediately to another account. Or they will encourage crypto deposits.

→ What do we need to start?

Let’s say we want to save money, so we use artificial intelligence. Any AI model will let us create a phishing website practically for free. It will be ready in a minute, because AI will generate everything we want: the investment description and the graphics.

While we’re at it, we can generate a client dashboard right away. After depositing money, that’s where the victim will check how their money is “growing.”

→ Now we need to put it online.

So we need hosting. Since we’re talking about phishing, it’s best to maintain as much anonymity as possible. So we choose a foreign hosting provider that’s hard to contact. Ideally, they should also accept payments in cryptocurrency.

→ How much does that cost?

About $4 per month.

→ So our campaign is now live. Now we need users.

That brings us back to ads. We generate graphics or deepfakes using AI for free. We publish them on Facebook and Google, where we pay per click.

Let’s assume a click costs half a dollar. How much we spend depends on our imagination. Scammers usually pay for campaigns with stolen credit cards.

→ What is the risk that someone will take our phishing site down?

High. These sites have very short lifespans. Companies like ThreatMark, and organizations responsible for network security like Poland’s NASK, will take them down as soon as they find them. It could be a matter of hours, or it could be days. During that time, we will already have many victims. And most importantly, creating a new site costs next to nothing and takes no effort. AI can automate the entire process of generating new sites. One disappears and another one appears almost immediately.

→ We have a server, a database, a Facebook campaign, and a fake website. Do we need anything else?

Staff who will run the scam. People who carry out fraud over the phone. So we need a call center.

→ Is this the most common scam online right now?

Absolutely. Over the last three years, scammers have completely shifted from unauthorized fraud, where they broke into bank accounts and made illegal transactions themselves, to authorized fraud, where they manipulate victims over the phone or through other communication channels into approving the payments themselves.

If you’d asked me three years ago what the most common scam was, I’d have said phishing and bank account takeovers. Now the situation is completely different.

About 90 percent of all digital financial losses come from fraud where a manipulated user personally authorizes the payments leaving their account.

Banks have developed very strong technical protections that detect account takeover and protect users. Against authorized fraud, those defenses are almost useless. It is like a system shock. A total change.

→ I’m not happy with how we set up our business. Can we simplify it even further?

We can automate the entire process. You pay a small monthly subscription and everything is done for you. You can buy everything: website creation, databases of potential victims, even “phishlets,” meaning ready‑made templates of real company websites to impersonate. All major brands are available: Facebook, banks, whatever you want. Hosting is included, along with SMS gateways for mass messaging.

→ Isn’t that a much more expensive option?

That depends on how much you value your time. Phishing‑as‑a‑service can save you hours of work, not least because when one site stops working, another can be launched almost immediately.

Altogether, it costs about $50 per month. At that price, you get support for nearly every element of a successful scam.

→ So can we say that the amount of money needed to launch an online scam is trending toward zero, while the amount needed to detect it keeps rising?

Definitely. Including the cost of analyzing what is happening inside the banking system.

Just like in our thought experiment about setting up a scam operation: even if we already have a website, a call center, and the whole system, we still need the most important element: a secure account into which the victim’s money can be transferred.

From the scammer’s perspective, the money has to go to an account that is not linked to them, otherwise they will be prosecuted. It has to be a “clean” account. If banks detect suspicious activity, they can shut it down. Scammers usually have dozens of accounts at their disposal and use them in rotation.

A clean account that can be used freely is a very valuable asset. But even this can be acquired cheaply. I have seen offers on Facebook selling accounts for services like Revolut, PaySafe, and PayPal. These are not newly created accounts. People resell existing accounts with full transaction history. From an investigator’s or analyst’s perspective, such an account looks more credible, assuming there were no prior warning signals.

→ So even cashing out is becoming cheaper for scammers?

Yes. And for banks, detecting these transactions becomes more expensive because it’s more complicated.

Of course, money can be sent to a crypto exchange, but exchanges are regulated, so you still have to register and go through KYC, Know Your Customer. Each account can be linked to a wallet.

One could try to circumvent a bank’s KYC procedures, or hire, for example, a person experiencing homelessness, use their information, and open an account in their name. However, the most common scenario involves withdrawing money from an ATM in a distant country. And that’s where the digital trail of the money ends.

Scammers are still trying to work around that. In the US, for example, gift card scams are becoming more popular. Cards that can be used to top up Netflix, Spotify, gaming consoles, or app stores.

→ How does that work?

Let’s say I am a scammer and I convince you to send me money. You fall for it.

I can convince you to go to a store and buy hundreds of gift cards. For investigators, there will be virtually no digital trail, because it ends at the store. You give me those cards, and I sell them, say, for half their value. Either way, I come out ahead.

In our region, another scam is becoming popular. Here’s how it works: a bank employee calls you and says that someone in another city is trying to take out a loan in your name. They ask if you want to approve it. Of course, you say you’re not approving anything because it’s a scam.

The “employee” replies, “In that case, we need to withdraw the money from your account to keep it safe. We’ll send a police courier who will come, take your money, and put it in our police safe.”

People fall for this and hand over thousands of euros to strangers they have never seen. Interestingly, these couriers are often Uber or Bolt drivers.

→ Wait. Platform couriers transport money from scams? Do they know?

That is the million‑dollar question.

→ If scammers are attacking the user, not the bank, how can banks defend themselves?

The only partial solution is to collect as much data as possible about the user during the session, while they are logged in.

There are three main categories of data.

The first is normal user behavior. Take me: Lukáš usually logs in from a laptop, checks the balance, then checks the mortgage, and makes three or four transfers. That is typical, measurable behavior that fits his profile.

The second category is unusual behavior. Lukáš logs in again and suddenly goes to change transaction limits and pushes them to the maximum. Or he changes his home address. His typing speed changes. He navigates the app in a way that suggests automated behavior, possibly a bot.

The third category is “fraudster behavior.” You can recognize signs of remote control. Someone else is controlling the mouse. Someone else is typing on the keyboard. These signals need to be monitored, just like active phone calls.

If you are logged into your bank and simultaneously talking to someone on your phone, that is now a very strong warning signal.
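The three categories above can be sketched as a toy risk score over session signals. Everything here is an assumption for illustration: the signal names, weights, and thresholds are invented and are not ThreatMark's actual model or any bank's real detection logic.

```python
from dataclasses import dataclass

@dataclass
class Session:
    """Signals a bank might collect during one logged-in session (illustrative)."""
    typing_speed_cpm: float        # characters per minute in this session
    baseline_typing_cpm: float     # user's historical average
    raised_limits_to_max: bool     # pushed transaction limits to the maximum
    changed_home_address: bool
    remote_control_detected: bool  # mouse/keyboard driven by a remote-access tool
    active_phone_call: bool        # user is on a call while banking

def risk_score(s: Session) -> float:
    """Toy score in [0, 1]; all weights are illustrative assumptions."""
    score = 0.0
    # Category 2: behavior unusual for this specific user
    if s.baseline_typing_cpm > 0:
        deviation = abs(s.typing_speed_cpm - s.baseline_typing_cpm) / s.baseline_typing_cpm
        if deviation > 0.5:            # typing speed changed by more than 50%
            score += 0.2
    if s.raised_limits_to_max:
        score += 0.2
    if s.changed_home_address:
        score += 0.1
    # Category 3: outright "fraudster behavior"
    if s.remote_control_detected:
        score += 0.4
    if s.active_phone_call:            # "very strong warning signal" per the interview
        score += 0.3
    return min(score, 1.0)

# A session matching the remote-control-plus-phone-call pattern:
suspicious = Session(typing_speed_cpm=310, baseline_typing_cpm=190,
                     raised_limits_to_max=True, changed_home_address=False,
                     remote_control_detected=True, active_phone_call=True)
print(risk_score(suspicious))  # → 1.0
```

Real systems combine hundreds of such signals with learned models rather than hand-set weights; the sketch only shows how the three categories map onto distinct inputs.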

→ Today scammers are real people. In three years, will it be AI calling us?

Without a doubt. Scammers will create perfect AI voice agents and run their scams with AI‑based tools. I am one hundred percent sure of that.

→ If that happens, their effectiveness and scale will increase exponentially.

Absolutely. With AI, they will eliminate what does not work even faster.

→ Will AI change other threats too?

Yes. Take phishing, or the act of stealing information. Even today, we’re seeing fewer and fewer of those amateurish, slapdash emails—like the Nigerian prince asking for help—aren’t we?

Today phishing emails are almost flawless, precisely because they’re written and translated into other languages by artificial intelligence. They include all the classic psychological tricks: impersonating authority, such as banks or governments, and applying time pressure, such as “act within an hour or you will lose your money.”

When that is combined with perfect language, meaning flawless translation into multiple languages, the chance that someone will fall for it increases.

Let me give an example: Czech is one of the more complex languages, and until now automatic translations haven’t handled it very well. Emails in English often start with “Dear John,” right? In Czech, that would be “Drahý Johne.” An automatic translator, however, often produced “Miláčku Johne,” meaning “darling John” or “sweetest John.”

AI now translates it all correctly.

→ What direction will fraud take over the next 3–5 years?

Everything will become fully automated. As AI improves, attacks will become more convincing. We will also see more advanced deepfakes and cloning of voice and appearance.

Until now, voice and facial recognition were used as security measures for bank accounts. They are now becoming obsolete. Banks will struggle more and more against synthetic and fake identities.

→ So what should banks—and everyone who has money in the bank—do to protect themselves from fraud?

Banks need to collect as much session interaction data as possible. Companies that miss the chance to deploy preventative solutions will regret it.

Individuals like you and me, my mother and your mother, need a healthy level of paranoia. If someone calls with an amazing offer or tries to scare us by saying something is wrong with our bank account, we should ask: “Why does my bank want me to send money by mail?” “Why do they want gift cards?” “Would a police officer ever contact me on WhatsApp?”

There is a perfect scam for everyone. Even for me. Even for you. People are, and will remain, by far the most important link in the chain.

 

Lukáš Jakubíček – an independent consultant for ThreatMark, a comprehensive fraud prevention system. He regularly speaks at international conferences on fraud prevention and cybersecurity and helps organize the Banking Fraud Summit, a series of events attended by the world’s leading fraud prevention experts. He thinks like a fraudster, but he has a moral compass.

 


This article first appeared on Wyborcza.biz as an interview by Joanna Sosnowska, published on April 14, 2026.