Artificial intelligence is terrible at trading crypto. Here’s what could change that

AI agents are popular, but they struggle at tasks like crypto trading. Illustration: Hilary B; Source: Shutterstock.
  • AI agents are popular.
  • But they often make mistakes when asked to navigate the crypto market.
  • One firm is trying to reduce errors and improve reliability.

It was a simple request.

Nick Emmons, co-founder and CEO at Allora Labs, a firm building a decentralised artificial intelligence network, asked a new AI agent he was testing out to trade some crypto into US dollars for him.

Despite being given explicit instructions, the programme started trading a completely different asset from the one requested.

“It’s completely gone off the rails and done something entirely unrelated to what it was initially directed to do,” Emmons told DL News. This kind of aberrant behaviour among agents is all too common, he said.

AI agents

AI agents, autonomous software programmes designed to achieve specific goals without constant human oversight, sit at the bleeding edge of the booming artificial intelligence sector.

In the crypto industry, AI-focused firms have raised over $500 million so far this year, with many touting agents that can analyse potential investments, manage crypto asset portfolios, and even trade on behalf of users.

There’s just one snag.

When these agents are given real money and put in live situations, things often go wrong.

“There’s an infinite set of possibilities for the management of capital to go wrong,” Emmons said. “They could lose it altogether. They could put it in the wrong assets. They can misinterpret numerical inputs to make incorrect financial decisions, all sorts of things.”

That’s an issue, because industry insiders are strong believers in the technology.

Tech giants Google and Microsoft are funnelling resources into building their own AI agent platforms.

In a recent survey of IT executives, 93% said their organisations are already developing, or plan to develop, their own versions of the technology, according to a July report from OutSystems, an AI-powered coding platform.

The market for AI agents is estimated to surpass $50 billion within the next five years, according to Boston Consulting Group.

With so much interest, anyone who can address the current issues with AI agents stands to profit heavily from this growth.

Large language models

According to Emmons, the reason for the issues with AI agents is that most rely solely on large language models, or LLMs.

“LLMs hallucinate pretty egregiously a lot of the time,” Emmons said. “When you’re dealing with numerical or quantitative settings, those hallucinations can result in some very extreme errors.”

Other finance-specific problems with AI agents include over-reliance on historical data, poor performance when market conditions change, and failure to account for liquidity and slippage, according to Amplework, an AI development and consultancy firm.
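
Slippage illustrates the problem: the price an agent sees when it decides to trade is not the price it gets once its order moves a thin market. As a rough illustration, the Python snippet below shows how the effective price of a sale worsens with size on a constant-product liquidity pool; the pool reserves are invented for the example.

    # Toy example: slippage when selling into a constant-product (x*y=k) pool.
    # Pool reserves are invented purely for illustration.
    eth_reserve = 2_000.0        # ETH in the pool
    usdc_reserve = 6_000_000.0   # USDC in the pool, so the quoted price is $3,000 per ETH

    def sell_eth(amount_eth: float) -> float:
        """USDC received for selling amount_eth into the pool (fees ignored)."""
        k = eth_reserve * usdc_reserve
        new_usdc = k / (eth_reserve + amount_eth)
        return usdc_reserve - new_usdc

    quoted_price = usdc_reserve / eth_reserve
    for size in (1, 50, 200):
        effective = sell_eth(size) / size
        print(f"sell {size:>3} ETH: effective price ${effective:,.0f} vs quoted ${quoted_price:,.0f}")

With these made-up reserves, a 1 ETH sale fills close to the quoted $3,000, while a 200 ETH sale fills at roughly $2,727, a gap an agent has to account for before sizing a trade.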

AI agents have also been shown to collude with each other and engage in anti-competitive practices like price fixing, according to a recent study from the University of Pennsylvania’s Wharton School and the Hong Kong University of Science and Technology.

Emmons’ company, Allora, is offsetting the pitfalls of LLMs with traditional machine learning through its decentralised AI network. This, Emmons said, lets AI agents benefit from the things LLMs are good at while reducing errors and hallucinations.

“It’s about figuring out the right marriage between these two somewhat distinct technologies,” Emmons said.
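
One way to picture that marriage, purely as an illustration rather than Allora’s actual design, is an LLM that parses the user’s instruction into a structured trade, a conventional numerical model that supplies a forecast, and a final check that refuses to execute when the two disagree or when the trade drifts away from what was asked. The Python sketch below uses invented function names and an assumed forecast value.

    # Hypothetical sketch: pair an LLM's free-form trade intent with a
    # numerical model's forecast, and only execute when the checks pass.
    from dataclasses import dataclass

    @dataclass
    class TradeIntent:
        asset: str     # asset the agent wants to trade, e.g. "ETH"
        side: str      # "sell" or "buy"
        amount: float  # quantity in units of the asset

    def parse_intent_with_llm(instruction: str) -> TradeIntent:
        # In practice this would call an LLM; here we hard-code the article's
        # example request of trading some crypto into US dollars.
        return TradeIntent(asset="ETH", side="sell", amount=1.0)

    def forecast_return(asset: str) -> float:
        # Stand-in for a traditional ML model trained on market data;
        # the value is assumed for illustration.
        return -0.002

    def execute(intent: TradeIntent, requested_asset: str) -> str:
        # Guardrail 1: the agent may only touch the asset it was asked about.
        if intent.asset != requested_asset:
            return "rejected: intent drifted to a different asset"
        # Guardrail 2: the numerical forecast must support the trade direction.
        if intent.side == "sell" and forecast_return(intent.asset) > 0:
            return "rejected: model expects the asset to rise"
        return f"executed: {intent.side} {intent.amount} {intent.asset}"

    intent = parse_intent_with_llm("Trade 1 ETH into US dollars")
    print(execute(intent, requested_asset="ETH"))

In this sketch the LLM never acts on its own arithmetic: it only translates language into a structured request, and the numbers that decide whether money moves come from the conventional model.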

Allora is already using its network in decentralised finance.

It has live applications actively managing liquidity on Uniswap, the biggest decentralised exchange, and engaging in looping, a kind of leveraged borrowing strategy that juices the yield DeFi users can earn by staking Ethereum.
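
Looping, in rough terms, means posting a staked, yield-bearing asset as collateral, borrowing against it, and staking the borrowed funds as well, so the same capital earns the staking yield several times over while paying interest on the debt. The back-of-the-envelope Python sketch below uses assumed rates and an assumed loan-to-value limit, not figures from Allora or any live protocol.

    # Back-of-the-envelope looping economics; all figures are assumed.
    staking_yield = 0.035  # 3.5% annual yield on staked ETH
    borrow_rate = 0.025    # 2.5% annual cost to borrow ETH
    ltv = 0.70             # borrow up to 70% of collateral value
    loops = 5

    collateral = 1.0  # start by staking 1 ETH
    debt = 0.0
    for _ in range(loops):
        borrowed = collateral * ltv - debt  # borrow up to the LTV limit
        if borrowed <= 0:
            break
        debt += borrowed
        collateral += borrowed              # stake the borrowed ETH again

    net_yield = collateral * staking_yield - debt * borrow_rate
    print(f"exposure: {collateral:.2f} ETH, debt: {debt:.2f} ETH")
    print(f"net yield on 1 ETH of capital: {net_yield:.2%}")

With those assumed numbers, five loops turn a 3.5% staking yield into roughly 5.4% net, while also multiplying the position’s exposure to falling prices and rising borrowing costs.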

AI versus humans?

But even with Allora’s AI network reducing errors, there are still risks.

Emmons said tighter parameters also need to be in place to ensure safety. “We need the wallets we’re equipping agents with to have a set of contracts and function calls even more specific so they can’t just throw the money away.”
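
A common way to express that kind of restriction is an allowlist: the agent’s wallet signs a transaction only if the target contract, the function being called and the amount at stake are all pre-approved. The sketch below is a simplified illustration, with the contract labels, function names and spending cap invented for the example.

    # Hypothetical allowlist guard for an agent's wallet: the agent may only
    # call pre-approved functions on pre-approved contracts, under a spend cap.
    ALLOWED_CALLS = {
        "dex_router": {"swapExactTokensForTokens"},
        "lending_pool": {"deposit", "borrow", "repay"},
    }
    MAX_VALUE_PER_TX = 5_000  # assumed cap, in US dollars

    def approve_transaction(contract: str, function: str, value_usd: float) -> bool:
        """Return True only if the call is on the allowlist and under the cap."""
        allowed = ALLOWED_CALLS.get(contract)
        if allowed is None or function not in allowed:
            return False
        return value_usd <= MAX_VALUE_PER_TX

    print(approve_transaction("memecoin_contract", "buy", 1_000))                # prints False
    print(approve_transaction("dex_router", "swapExactTokensForTokens", 1_000))  # prints True

An agent that goes off the rails, as in the episode Emmons described, would have its unexpected calls rejected before any funds move.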

To be sure, it’s not like human traders are infallible, either. Between 2006 and 2008, former Société Générale trader Jérôme Kerviel lost around $7.2 billion of his employer’s money by placing a series of unauthorised, high-stakes trades.

There is also debate over whether AI agents will ever be able to act fully autonomously without human supervision.

A 2024 paper published by Google DeepMind researchers argued that AI agents must be able to engage in causal reasoning to function properly, a capability the authors said current models lack.

Emmons, on the other hand, is more optimistic.

“A lot of the AI agent stuff will be fairly hands off,” he said. “We’re either here or very close to it.”

Tim Craig is DL News’ Edinburgh-based DeFi Correspondent. Reach out with tips at tim@dlnews.com.