- Pig-butchering scammers are using new AI tools to trick victims.
- Prosecutors warn of a wave of new cases flooding the globe.
- Crypto scams cost victims more than $2.5 billion in 2022.
You can’t say we didn’t see it coming.
So fraudsters and cybercriminals tapping into large language models to lift crypto from victims should come as no surprise.
Still, it came as a surprise to one man who struck up an online conversation on Tandem, a language-exchange app that doubles as a dating platform.
The other conversationalist swiftly convinced him to move the chat to WhatsApp.
Whatever the intended end result may have been, the con was bungled by a lengthy text that revealed, “as a language model of ‘me’, I don’t have feelings or emotions like humans do.” The message effectively outed the use of an advanced chatbot that, if it wasn’t ChatGPT, bore a striking resemblance to it.
Alarmed, the would-be victim contacted cybersecurity firm Sophos, which confirmed that fears of criminals tapping into the new technology had been realised.
‘We can now say that, at least in the case of pig-butchering scams, this is, in fact, happening’— Sean Gallagher
“We can now say that, at least in the case of pig-butchering scams, this is, in fact, happening,” said Sean Gallagher, principal threat researcher at Sophos, in a report highlighting the case.
Pig-butchering scams are cons in which fraudsters attempt to trick victims into investing in crypto platforms over periods of time, fattening up the take before they cash out, leaving the victims out of pocket and ashamed of being duped.
The FBI estimates that crypto scams like these cost victims more than $2.5 billion in 2022.
Still, using ChatGPT and similar tools will enable criminals “to scale this in a way that’s exponentially worse from what we’re already seeing,” said Erin West, a California prosecutor who has made a name for herself clawing back millions in stolen crypto.
The technology will enable criminals to better tailor the conversations they have with victims, overcome language barriers, reach more people “and cause more destruction,” she told DL News, adding: “The addition of ChatGPT makes it so much more dangerous.”
The stark warning comes on the back of reports of cybercriminals using similar tools to write malware at speed.
‘It wasn’t a matter of if but when’
Security firms and law enforcement are rattled by the development, but no one we spoke with was surprised.
“It wasn’t a matter of if, but when,” Bobby Cornwell, vice president, strategic partner enablement and integration, at cybersecurity firm SonicWall, told DL News.
These stories have been brewing since OpenAI unveiled ChatGPT in November.
The advanced chatbot was trained on oceans of data. Drawing on huge datasets of books, code, websites, and articles, it can write complex code and even hold conversations based on user prompts.
It was a slam dunk for OpenAI. By January, ChatGPT had reached about 100 million monthly users, making it the fastest-growing app in history.
In July, Tesla CEO Elon Musk launched a new company, xAI, with the aim of creating a rival to ChatGPT that’s less “woke” than OpenAI’s offering.
“The danger of training AI to be woke — in other words, lie — is deadly,” Musk tweeted in December.
AI companies raised more than $25 billion from investors in the first half of 2023, according to Crunchbase.
The dark side of ChatGPT
Rather than taking a victory lap, OpenAI CEO Sam Altman has been on a world tour of sorts, warning against the dangers of unrestricted AI.
He has testified in the US Congress and told reporters he’s lost sleep over the fear that OpenAI has “done something really bad” by letting out the proverbial AI genie.
Whether his statements arise from genuine dread or are part of an elaborate PR strategy remains uncertain.
Whatever the motivation, digital thugs have already used ChatGPT or similar tools to power their schemes.
Cybercriminals are even claiming to have created their own versions of ChatGPT, unburdened by safety measures installed by OpenAI.
They are advertising them on the dark web as tools to supercharge phishing attacks and write malware, Wired reported earlier in August.
“Cybercriminals and the threat actor community are technology savvy, motivated and tend to be early adopters of new technology trends when they see advantages and improvement on how they can evolve their tools, techniques, and practices,” Gary Alterson, vice president, managed security, at cybersecurity firm Kivu Consulting, told DL News.
“So it’s no surprise that threat actors are adopting generative AI as they can get those same gains in productivity and potentially improve how well they can hide, conduct reconnaissance, build malicious code, trick people and security software, and more.”
These developments have left cybersecurity experts and law enforcement agencies scrambling to meet the threat.
Cybersecurity firms must “create more intelligence into our systems to look for AI-generated content,” Cornwell said, adding that creating such countermeasures will take time.
‘It’s difficult to fight a monster this big and this wealthy’— Erin West
West said that while she and other crimefighters can warn against these threats, the criminal syndicates behind pig-butchering scams were difficult to take down even before they tapped into AI.
Many criminal networks are run by Chinese nationals operating in Cambodia, Laos, and Myanmar, according to the Global Anti-Scam Organization.
The billions made from their crimes enable them to pay off local law enforcement and avoid international sanctions, West said.
“It’s difficult to fight a monster this big and this wealthy,” she said.
OpenAI did not return our requests for comment.
Eric Johansson is DL News’ London-based News Editor. He covers crypto culture, investments, and politics. You can reach him at firstname.lastname@example.org or on Telegram at ericjohanssonlj.