“AI has fully defeated most of the ways that people authenticate currently.”
That was the warning Sam Altman, CEO of OpenAI, delivered at a Federal Reserve conference this July.
He didn’t say it at a tech event in Silicon Valley, but in front of the regulators who oversee the stability of the financial system.
For centuries, fraud relied on human gullibility. In 2025, it’s powered by machines that can fake your voice, your face, and even your identity with unsettling precision. Fraudsters have now industrialised what once depended on a forged signature or a convincing lie.
With only a few seconds of someone’s speech, a scammer can create a voice clone that sounds indistinguishable from the real person. With a set of images pulled from LinkedIn, they can build a deepfake avatar capable of joining a Zoom call.
This is the crisis that Altman wanted the Fed to understand. And judging by the bluntness of his language, he knows it is not a distant hypothetical but a present-day problem.
A Crisis in the Numbers
The financial fallout is already staggering. In 2024, scams drained more than US$12.5 billion from consumers, a jump of 25% from the year before. Nearly half of all fraud attempts in the financial sector now involve AI in some form. And it is not just the sheer volume of attempts that worries experts, but their success rate.
Almost a third of AI-driven fraud attacks bypass existing security measures.
Deepfakes in particular have exploded, with incidents growing more than tenfold between 2022 and 2023.
One British engineering firm learned this the hard way when an employee was tricked into wiring US$25 million after a video call with what appeared to be the company’s CFO and senior executives. Every one of them was a synthetic fabrication.
Despite this, preparedness is shockingly low. Only 22% of financial institutions have invested in AI-powered defences of their own. Eight out of ten companies have no plan at all for handling deepfake attacks. Consumers are equally vulnerable.
Most people admit they cannot tell the difference between a real voice and an AI-cloned one.
Acting as Both the Prophet and the Profiteer
The genius, and the horror, of AI fraud is that it no longer needs to exploit software vulnerabilities. Instead, it exploits human ones.
A cloned voice asking a grandparent for urgent bail money feels more real than any phishing email. A boss on a live video call insisting on an emergency transfer is harder to question than an email attachment.
What we are witnessing is not simply more fraud, but a change in its nature. The battlefield has shifted from systems to psychology, from code to cognition. Scammers do not need to break into your bank account if they can convince you to open it for them.
Sam Altman’s warning has also stirred an uncomfortable debate.
On the one hand, his message is clear and urgent. The foundations of digital trust are cracking under AI’s weight.
On the other hand, critics argue that it is a fire his own company helped light.
OpenAI and its peers built the very tools now being weaponised, and some even accuse Altman of playing both the prophet and the profiteer, issuing warnings about the dangers while selling the technology that fuels them.
There is also a suspicion that calls for stricter regulation conveniently benefit the largest players. Complex rules are easier for giants to comply with than for startups, raising the spectre of regulatory capture. Whether altruistic or strategic, the fact remains that policymakers are now being pushed to act, and fast.
Fighting Back
Defences are emerging, though they are uneven. Banks are experimenting with AI to monitor transactions in real time, spotting unusual behaviour before losses mount. Regulators in the United States are rolling out new rules to address impersonation scams, while the European Union is moving towards government-backed digital identity wallets.
At the corporate level, some companies are ditching outdated biometric checks and replacing them with layered security systems that verify not only who you are, but how you behave online. Training employees to distrust even convincing requests is becoming just as important as any piece of software.
And for individuals, the advice is as unglamorous as it is effective.
Hang up and call back. Verify before you trust. Share less of your voice and face online.
Some families have even introduced “safe words” to confirm identity during emergency calls. Low-tech solutions still matter in a high-tech fraud world.
What AI fraud is really stealing is not just money, but certainty.
The certainty that the voice on the phone is your child. The certainty that the person on screen is your boss. The certainty that seeing and hearing are enough.
Suddenly, in this “new” world, certainty must be earned, not assumed.
If this feels like the kind of issue you’d like to unpack further, Fintech News Singapore is running a webinar on September 9, 2025, called How AI is Transforming FSI’s Approach to Fraud. It’s worth tuning in if you want to hear how the people on the frontlines are thinking about what comes next.
Head over to register.
Featured image by TechCrunch via Wikimedia Commons.