AI-driven Cryptocurrency Fraud on the Rise, According to Sam Altman
In the ever-evolving world of technology, the capabilities of AI systems are expanding at an unprecedented rate. This development, while offering numerous benefits, poses new challenges for security and identity verification, particularly in the realm of crypto scams.
Recent advancements in AI have led to the release of the ChatGPT Agent, an innovation that allows AI systems to interact with computers in a more human-like manner and complete multi-step tasks. The agent, developed by OpenAI, the company Sam Altman leads, can even switch between apps and log into different accounts, raising concerns about potential malicious uses.
Scammers have used these growing AI capabilities to automate their operations, making fraud more prevalent. A prime example of this trend is the rise in crypto scams, which have increased by 456% over the last year. In 2024 alone, the FBI received about 150,000 cryptocurrency-related fraud complaints, with reported losses totalling over $3.9 billion.
Deepfake audio and video clips, generated by AI, have significantly contributed to this rise. Scammers have been impersonating high-profile figures like Elon Musk and senior bank representatives to promote fake cryptocurrency giveaways or solicit fraudulent investments. In some cases, they have even tricked employees into transferring large sums to fraudulent accounts by impersonating CEOs or business leaders in phone calls.
The use of deepfakes has caused large-scale financial damage: deepfake crypto scams produced losses exceeding $100 million in Canada in 2025, and across broader investment scams, $190 million of 2024 losses were cryptocurrency-related and often involved deepfake content.
Moreover, readily available AI tools and tutorials have made it easier to run successful scams at scale. Cybercriminals have been accessing sophisticated AI tools for creating deepfake videos, along with guides for bypassing Know Your Customer (KYC) checks on crypto exchanges.
Sam Altman's warning that AI could defeat the existing security apparatus for verifying identity and accessing sensitive accounts echoes broader concerns AI executives have raised about the risks of artificial general intelligence. Notably, Altman delivered these remarks at a banking regulatory conference while dressed in a hot dog suit, adding a touch of levity to an otherwise serious discussion.
In conclusion, AI-powered deepfake technologies have revolutionised crypto scams by enabling highly realistic impersonations that exploit trust in known individuals, dramatically increasing the volume and success of such fraudulent schemes in the last year. As we continue to push the boundaries of AI, it is essential to address these security challenges and ensure that technology serves to benefit society rather than facilitate fraud.
- Future advancements in artificial intelligence (AI) could yield even more sophisticated systems like the ChatGPT Agent, capable of interacting with computers in a human-like manner and completing multi-step tasks.
- The growth in AI capabilities has made crypto scams more prevalent by automating the process, with deepfake audio and video clips generated by AI playing a significant role in this trend.
- Scammers have been creating deepfakes to impersonate high-profile figures and exploit trust in known individuals; deepfake crypto scams caused losses exceeding $100 million in Canada alone in 2025.
- Security and identity verification are becoming increasingly challenging, as AI tools and tutorials enable more successful scams at scale; cybercriminals are using AI tools to create deepfake videos and to bypass Know Your Customer (KYC) checks on crypto exchanges.