
How crypto scammers are embracing new AI technology

Crypto scammers are using ChatGPT-like AI tools to con victims in a new wave of pig-butchering scams. Credit: Rita Fortunato/DL News.
  • Pig-butchering scammers are using new AI tools to trick victims.
  • Prosecutors warn of a growing wave of new cases flooding the globe.
  • Crypto scams cost victims more than $2.5 billion in 2022.

You can’t say we didn’t see it coming.

In the months since OpenAI launched ChatGPT and upended the startup ecosystem, the technology has been branded a threat to journalism, screenwriting, and national security.

So fraudsters and cybercriminals tapping into large language models to lift crypto from victims should come as no surprise.

Still, it came as a surprise to one man using Tandem — a language-exchange app that doubles as a dating platform — when he struck up an online conversation.

The other conversationalist swiftly convinced him to move the chat to WhatsApp.

Whatever the intended end result may have been, the con was bungled by a lengthy text that revealed, “as a language model of ‘me’, I don’t have feelings or emotions like humans do,” effectively outing the use of an advanced chatbot that, if it wasn’t ChatGPT, bore a striking similarity to it.

Alarmed, the would-be victim contacted cybersecurity firm Sophos, which confirmed that fears of criminals tapping into the new technology had been realised.

‘We can now say that, at least in the case of pig-butchering scams, this is, in fact, happening’

—  Sean Gallagher

“We can now say that, at least in the case of pig-butchering scams, this is, in fact, happening,” said Sean Gallagher, principal threat researcher at Sophos, in a report highlighting the case.



Pig-butchering scams are long cons in which fraudsters trick victims into investing in fraudulent crypto platforms over time, fattening up the take before they cash out and leaving the victims out of pocket and ashamed of having been duped.

The FBI estimates that crypto scams like these cost victims more than $2.5 billion in 2022.

Still, using ChatGPT and similar tools will enable criminals “to scale this in a way that’s exponentially worse than what we’re already seeing,” said Erin West, a California prosecutor who has made a name for herself clawing back millions in stolen crypto.


The technology will enable criminals to better tailor the conversations they have with victims, overcome language barriers, reach more people “and cause more destruction,” she told DL News, adding: “The addition of ChatGPT makes it so much more dangerous.”

The stark warning comes on the back of reports of cybercriminals using similar tools to write malware at speed.

‘It wasn’t a matter of if but when’

Security firms and law enforcement are rattled by the development, but no one we spoke with was surprised.

“It wasn’t a matter of if, but when,” Bobby Cornwell, vice president of strategic partner enablement and integration at cybersecurity firm SonicWall, told DL News.

Such stories have been brewing since OpenAI unveiled ChatGPT in November 2022.

The advanced chatbot was trained on oceans of data, including books, code, websites, and articles, enabling it to write complex code and even hold conversations based on user prompts.


It was a slam dunk for OpenAI. By January, it had already reached about 100 million monthly users, making it the fastest-growing app in history.

People have used it for everything from writing online dating profiles and scary stories to generating video scripts and debugging code.

Tech giants have responded by accelerating development of their own conversational AI tools: Google’s Bard and Meta’s LLaMA.

(LLaMA is not affiliated with DL News’ parent company DefiLlama.)

In July, Tesla CEO Elon Musk launched a new company, xAI, with the aim of creating a chatbot that’s less “woke” than OpenAI’s.

“The danger of training AI to be woke — in other words, lie — is deadly,” Musk tweeted in December 2022.

AI companies raised more than $25 billion from investors in the first half of 2023, according to Crunchbase.

The dark side of ChatGPT

Rather than taking a victory lap, OpenAI CEO Sam Altman has been on a world tour of sorts, warning against the dangers of unrestricted AI.

He has testified before the US Congress and told reporters he’s lost sleep over the fear that OpenAI has “done something really bad” by letting the proverbial AI genie out of the bottle.


Whether his statements arise from real dread or are part of an elaborate PR strategy remains uncertain.

Whatever the motivation, digital thugs have already used ChatGPT or similar tools to power their schemes.

Cybercriminals are even claiming to have created their own versions of ChatGPT, unburdened by safety measures installed by OpenAI.

They are advertising them on the dark web as tools to supercharge phishing attacks and write malware, Wired reported earlier in August.


“Cybercriminals and the threat actor community are technology savvy, motivated and tend to be early adopters of new technology trends when they see advantages and improvement on how they can evolve their tools, techniques, and practices,” Gary Alterson, vice president, managed security, at cybersecurity firm Kivu Consulting, told DL News.

“So it’s no surprise that threat actors are adopting generative AI as they can get those same gains in productivity and potentially improve how well they can hide, conduct reconnaissance, build malicious code, trick people and security software, and more.”

These developments have left cybersecurity experts and law enforcement agencies scrambling to meet the threat.

Cybersecurity firms must “create more intelligence into our systems to look for AI-generated content,” Cornwell said, adding that creating such countermeasures will take time.

‘It’s difficult to fight a monster this big and this wealthy’

—  Erin West

West said that while she and other crimefighters can warn against these threats, the criminal syndicates behind pig-butchering scams were difficult to take down even before they tapped into AI.

Many criminal networks are run by Chinese nationals operating in Cambodia, Laos, and Myanmar, according to the Global Anti-Scam Organization.

The billions made from their crimes enable them to pay off local law enforcement and avoid international sanctions, West said.

“It’s difficult to fight a monster this big and this wealthy,” she said.

OpenAI did not respond to requests for comment.

Eric Johansson is DL News’ London-based News Editor. He covers crypto culture, investments, and politics. You can reach him at eric@dlnews.com or on Telegram at ericjohanssonlj.
