How AI Deepfake Attacks Have Evolved

In early 2024, the security community was shaken by one of the most sophisticated corporate heists to date. A finance employee at the Hong Kong office of Arup (a multinational professional services firm) made 15 separate transfers totalling $25 million to fraudsters. The transfers went to five different Hong Kong bank accounts controlled by deepfake fraudsters.
The attack started conventionally, with a phishing email, and then escalated into a sophisticated real-time AI deepfake video conference call.
The employee first received a phishing email, purportedly from the company’s UK-based Chief Financial Officer (CFO). On the video conference call that followed, the finance employee saw and heard people who looked and sounded like the CFO and several other colleagues. In reality, every participant on the call, apart from the victim, was an AI-generated deepfake.
The employee was initially suspicious of the email, but the hyper-realistic, multi-person video call won them over: the visual and auditory cues convinced them the transaction was legitimate and sanctioned by top management.
During the call, the deepfake participants instructed the finance employee to execute a series of high-value transfers, framing the whole affair as a "confidential transaction."
Upon discovery, Arup took immediate action, notifying the Hong Kong police about the incident of fraud in January 2024.
But it was too late to intervene.
The heist had succeeded. The money was gone.
“Seeing is Believing” is an Obsolete Narrative
AI capabilities grow more powerful with each passing day, and they are more accessible than ever, to all of us and to attackers alike.
Attackers keep inventing new techniques precisely because we already have strong defenses against the known ones. Traditional attacks are mostly predictable and technical, with a layer of human-applied social engineering on top.
You could spot those social engineering attacks simply by believing what you were seeing, and basic technical defenses were good enough to stop the rest.
Phishing-awareness training, likewise, is a proven defense mechanism most organizations rely on. But those technical defenses and that old training don’t work against modern AI-assisted social engineering attacks.
It’s now possible for attackers to craft deepfake social engineering attacks using Artificial Intelligence (How? See the Vibe Hacking section below).
That’s what happened with Arup’s finance employee.
Attackers tricked them into transferring the money using a deepfake video conference call. And the big mistake: Arup’s employee believed what they saw.
But deepfake attack techniques are designed to fool even an eagle’s eye.
Human senses fail against these attacks because the deception happens in real time, using the faces and voices of the people you trust most.
Vibe Hacking: When AI Starts Hacking Alone
On November 13, 2025, Anthropic reported the first major cyberattack largely executed by an autonomous AI agent. Here’s the report: https://assets.anthropic.com/m/ec212e6566a0d47/original/Disrupting-the-first-reported-AI-orchestrated-cyber-espionage-campaign.pdf
The attack was attributed to a Chinese state-sponsored group (GTG-1002) using the popular AI coding tool Claude Code. The agent executed 80% to 90% of the hacking steps, gathering information, finding weak spots, stealing passwords, and analyzing data, without a human constantly telling it what to do.
This is the key change.
We’ve moved from people using AI as an assistant to AI being the hacker itself. This drastically increases both the speed and the scale of attacks. And these attacks can bypass biometric liveness checks and human verification layers as well.
Reports show that in the first three months of 2025 alone, the number of confirmed deepfake incidents was more than the total number in all of 2024.
This shows how quickly hackers are using this technology.
Financial losses from these scams in just Q1 2025 passed $200 million. Imagine the kind of destruction these standalone AI agents could cause in the near future.
We’re moving from AI-assisted Social Engineering to Standalone AI Agent Social Engineering.
How can we still prevent future attacks?
The Double-Check Rule
Rule: If a request comes in via video or email, the employee must call the person on a known desk extension or send a quick message on a different, internal chat system.
Protocol: Treat all urgent, secret, or high-value requests as possible deepfakes. Never confirm a request using the same app or channel it was received on.
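The out-of-band rule above can be enforced in code, not just in policy. Here is a minimal sketch; the channel names and the `can_confirm` helper are illustrative assumptions, not part of any real product:

```python
# Double-Check Rule sketch: a request may only be confirmed on a
# known-internal channel DIFFERENT from the one it arrived on.
# Channel names here are illustrative assumptions.

APPROVED_CHANNELS = {"desk_phone", "internal_chat", "email", "video_call"}

def can_confirm(received_on: str, confirm_on: str) -> bool:
    """Confirmation must use a different, known-internal channel."""
    if confirm_on not in APPROVED_CHANNELS:
        return False  # unknown channel: never trust it for confirmation
    return confirm_on != received_on  # same channel: attacker controls it

# A transfer request that arrived on a video call:
print(can_confirm("video_call", "video_call"))  # False: same channel
print(can_confirm("video_call", "desk_phone"))  # True: out-of-band
```

The key design choice is that the check fails closed: an unrecognized channel is rejected outright rather than merely compared against the inbound one.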
The Two-Person Rule
Change how financial approvals work so that at least two people must approve major transactions (the four-eyes principle), no matter who the boss on camera appears to be.
Principle: No single person, not even the CFO, should have the power to approve large transfers alone. This simple rule makes it much harder for a single deepfake attack to succeed.
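The four-eyes principle reduces to one check at payment time: above a threshold, count distinct human approvers. A minimal sketch, where the threshold value and data shapes are assumptions to be set by your own risk policy:

```python
# Two-Person (four-eyes) Rule sketch: transfers above a threshold need
# at least two DISTINCT approvers, regardless of anyone's rank.
# The threshold is an illustrative assumption; set it per your policy.

FOUR_EYES_THRESHOLD = 10_000  # e.g. USD

def transfer_allowed(amount: float, approvers: set) -> bool:
    """Using a set of approver IDs makes duplicates count only once."""
    if amount < FOUR_EYES_THRESHOLD:
        return len(approvers) >= 1
    return len(approvers) >= 2  # large transfer: two humans required

print(transfer_allowed(25_000_000, {"cfo"}))                # False
print(transfer_allowed(25_000_000, {"cfo", "controller"}))  # True
```

Representing approvers as a set is deliberate: the same person approving twice (or a deepfake of the CFO "approving" on camera) still counts as one approver.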
Train the Staff
Train employees to use simple, personal verification questions that an attacker who only scraped public videos (or is a basic AI) cannot answer.
In July 2024, an alert Ferrari executive stopped a deepfake voice clone of the company’s CEO, Benedetto Vigna.
The attacker was demanding an urgent transfer. The executive simply interrupted:
"Sorry, Benedetto, but I need to identify you,"
and asked a question based on a recent, private talk:
"What book did you recommend to me recently?"
The AI attacker couldn't answer and immediately hung up the call.
Conclusion
AI deepfake social engineering and standalone AI-agent attacks are real attack vectors for most organizations in 2025 and beyond. Ignoring this fact could prove destructive: if your organization doesn’t evolve and adapt alongside these attack techniques, becoming a target is inevitable.
You can do security testing yourself, or I can do it for you.
I handle security, you take care of business.
Book a free consultation call at https://cybermehul.com and get security assessments done before attackers do.



