
A conversation with Dr. Clark Alexander, Co-Founder & Head of AI at Argentum AI

Illustration: Hilary B; Source: Argentum AI

Dr. Clark Alexander holds a PhD in Mathematical Physics and Noncommutative Geometry from Northwestern University and has taught at several major institutions. He is a hands-on expert in Data Science, Quantum Finance, AI, and Supply Chain Optimization, known for his work with the IEEE on post-quantum cryptography.

We caught up with Dr. Clark Alexander, the Co-Founder and Head of AI at Argentum AI, on the floor of Token2049 to discuss the future of computation. As a physicist and mathematician with extensive experience across various industries, including quantum computing, molecular simulations, electricity trading, and logistics, Dr. Alexander brings a unique, first-principles perspective to the AI and Web3 space.

This background informs Argentum AI’s primary mission, which is to develop a decentralized computing platform that addresses the fundamental inefficiencies in today’s AI landscape. By focusing on specialized models and creating an open, agnostic marketplace for compute resources, it aims to resolve the “square peg, round hole” problem that currently affects the industry.

You’ve spoken about the limits of LLMs. Where do you see these limits first appearing in real-world applications?

The primary limitation of Large Language Models is a physical barrier, not a theoretical one. A few years ago, I was at Oak Ridge National Lab when they were installing the world’s largest supercomputer. The lead engineer told me they couldn’t build it twice as large simply because they couldn’t afford the electricity bill.

That’s the core issue: an LLM’s computational needs grow quadratically as you add more data, but the energy required to run those computations grows cubically. That mismatch means we are approaching the physical limits of how much electricity can be produced on Earth, making energy the single most critical bottleneck for real-world applications.
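To make the mismatch concrete, here is a toy sketch of the scaling Dr. Alexander describes, written in Rust (the language he advocates later in the interview). The exponents come from his claim; the absolute numbers are arbitrary and purely illustrative.

```rust
// Toy illustration of the scaling mismatch: compute grows ~n^2 with the
// amount of data n, while energy grows ~n^3, so each unit of compute
// costs more energy as data scales. Only the relative growth matters.
fn main() {
    for k in 0..4 {
        let n = 2f64.powi(k); // data scaled by 1x, 2x, 4x, 8x
        let compute = n.powi(2);
        let energy = n.powi(3);
        println!(
            "data x{:>2}: compute x{:>3}, energy x{:>4}, energy per unit of compute x{:>2}",
            n, compute, energy, energy / compute
        );
    }
}
```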

What do you think of SLMs, and what advantages do you believe they offer over large-scale LLMs in enterprise or scientific use cases?

I believe that large language models are, in most cases, overkill. As a mathematician and physicist, if I want to learn about these subjects, I prefer a model trained on relevant sources, such as MathWorld or Wikipedia. I want a “WebMD” Small Language Model (SLM) that provides information about medical topics; I don’t need that same model to know the history of Indonesia. You can connect these specialized SLMs and have a simple AI classifier acting like a receptionist, directing your query to the appropriate expert model. It’s far more efficient.
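A minimal sketch of that “receptionist” pattern in Rust, with hypothetical domains and a keyword lookup standing in for the classifier (a real router would use a small trained model):

```rust
// The "receptionist" routing pattern: a cheap classifier decides which
// specialized SLM should answer a query, so no single giant model is
// needed. Domains and keywords here are hypothetical stand-ins.

#[derive(Debug)]
enum Domain {
    Medicine,    // the "WebMD" SLM
    Mathematics, // the MathWorld/Wikipedia-trained SLM
    General,     // fallback model
}

fn classify(query: &str) -> Domain {
    let q = query.to_lowercase();
    if ["symptom", "dosage", "diagnosis"].iter().any(|k| q.contains(*k)) {
        Domain::Medicine
    } else if ["integral", "theorem", "matrix"].iter().any(|k| q.contains(*k)) {
        Domain::Mathematics
    } else {
        Domain::General
    }
}

fn main() {
    for query in ["What is the right dosage of ibuprofen?", "State the spectral theorem."] {
        // Dispatch each query to the expert model instead of one giant LLM.
        println!("{:?} <- {:?}", classify(query), query);
    }
}
```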

How does the confusion between mathematical computing and physical computing affect how people evaluate AI’s capabilities? And in practice, how does Argentum AI address this in its decentralized compute approach?

We are aiming to be compute-agnostic and build an auction-based marketplace. An AI agent manages the auction, matching a user’s compute job with a provider. A user might not need an H100 GPU right now; perhaps they just want the job completed by tomorrow morning. They could run it overnight on several CPUs for a much lower cost.

Currently, there are lots of “square pegs in round holes” in computing. Our agent learns the user’s needs, whether it’s speed or cost, and finds the most efficient provider. This creates a market where you use the minimal resources necessary to solve the problem, saving everyone money and energy.
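As a sketch of what that matching step might look like, here is a toy scoring rule in Rust; the provider specs, numbers, and weighting are invented for illustration and are not Argentum AI’s actual logic:

```rust
// Hypothetical sketch of the auction agent's matching step: score each
// provider against the user's trade-off between cost and completion time.
// All specs and the scoring rule itself are invented for illustration.

struct Provider {
    name: &'static str,
    cost_per_hour: f64,   // arbitrary currency units
    hours_to_finish: f64, // estimated wall-clock time for the job
}

/// Lower is better. `cost_weight` in [0, 1]: 1.0 means only cost matters.
fn score(p: &Provider, cost_weight: f64) -> f64 {
    cost_weight * (p.cost_per_hour * p.hours_to_finish)
        + (1.0 - cost_weight) * p.hours_to_finish
}

fn main() {
    let providers = [
        Provider { name: "h100-now", cost_per_hour: 40.0, hours_to_finish: 1.0 },
        Provider { name: "cpu-farm-overnight", cost_per_hour: 2.0, hours_to_finish: 10.0 },
    ];
    // A user who only needs results by morning weights cost heavily.
    let cost_weight = 0.9;
    let best = providers
        .iter()
        .min_by(|a, b| score(a, cost_weight).partial_cmp(&score(b, cost_weight)).unwrap())
        .unwrap();
    println!("matched provider: {}", best.name); // -> cpu-farm-overnight
}
```

In a real system the weighting would presumably be inferred by the agent from the user’s stated needs, which is the “agent learns the user’s needs” step he describes above.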

AI training consumes massive amounts of energy and water. How urgent is this issue, and what technical or policy solutions can help mitigate its impact?

The energy issue is highly urgent and, as I mentioned, represents a physical barrier to the growth of large-scale AI. A major technical cause is the choice of programming language. Python, for example, is very energy inefficient because it was built as a teaching language. My calculations show that training a model in an efficient language like Rust can be six orders of magnitude, or a million times, more efficient in terms of bits per kilowatt-hour.

As a solution, our platform, Argentum AI, will use a market-based approach with tiered pricing. This will incentivize developers to use more energy-efficient languages by making inefficient ones, such as Python, more expensive to run. Tackling the energy problem is a critical focus for us.

Decentralized networks often raise questions about reliability. How do you ensure a compute job is completed correctly?

We use a system of bilateral agreements, tracked from end to end. As a compute provider, I don’t get paid until you, the user, verify that the work was completed correctly. If there’s a dispute, for instance if I claim a job is done and you say it isn’t, it goes to arbitration. We will have a governance system, aided by an AI agent, that follows a set of rules to determine who is in the right. It’s a system of checks and balances to ensure fairness and reliability.
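The flow he describes maps naturally onto a small state machine. Here is a simplified Rust sketch of that escrow-and-dispute logic; the states and transitions are a guess at the scheme, not Argentum AI’s actual protocol:

```rust
// Sketch of the bilateral-agreement flow: payment sits in escrow, is
// released only on user verification, and escalates to arbitration on
// dispute. A simplified guess at the scheme, not the actual protocol.

#[derive(Debug)]
enum JobState {
    Escrowed,      // user's payment locked, provider working
    Delivered,     // provider claims the job is done
    Paid,          // user verified the result; funds released
    InArbitration, // user disputed; governance/AI agent decides
}

fn verify(state: JobState, user_accepts: bool) -> JobState {
    match (state, user_accepts) {
        (JobState::Delivered, true) => JobState::Paid,
        (JobState::Delivered, false) => JobState::InArbitration,
        (s, _) => s, // verification is a no-op in any other state
    }
}

fn main() {
    // Happy path: delivered work the user accepts gets paid out.
    println!("accepted:  {:?}", verify(JobState::Delivered, true));
    // Dispute path: a rejected claim escalates to arbitration.
    println!("disputed:  {:?}", verify(JobState::Delivered, false));
    // Funds stay escrowed if verification is attempted before delivery.
    println!("premature: {:?}", verify(JobState::Escrowed, true));
}
```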

There’s a debate over whether Python is killing AI. From your perspective, how do programming language choices constrain or enable breakthroughs in AI?

Python has a unique problem: it’s very energy inefficient and slow because it was built as a teaching language. It was a great stepping stone for scientists, but it’s not suitable for industrial-scale applications.

I recently performed a calculation: if you train a model in an efficient language like Rust versus Python, the difference in bits per kilowatt-hour is six orders of magnitude. That’s a million times more energy efficient. We can create a market force for this on Argentum AI by creating tiered pricing for jobs based on the language used. If you want to run Python, it’s fine, but you have to pay for it.
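Here is what such tiered pricing could look like as code; the multipliers below are invented for illustration, since the interview gives only the rough Rust-versus-Python ordering, not a price table:

```rust
// Hypothetical tiered pricing: a job's cost is scaled by a multiplier
// tied to its language's energy profile. Multipliers are invented for
// illustration; only the Rust-vs-Python ordering comes from the interview.

fn energy_multiplier(language: &str) -> f64 {
    match language {
        "rust" | "c" | "c++" => 1.0, // efficient compiled baseline
        "java" | "go" => 2.0,        // assumed mid tier
        "python" => 50.0,            // assumed premium tier for interpreted code
        _ => 10.0,                   // unknown languages priced cautiously
    }
}

fn job_price(base_compute_cost: f64, language: &str) -> f64 {
    base_compute_cost * energy_multiplier(language)
}

fn main() {
    for lang in ["rust", "python"] {
        // Same job, same base cost -- the surcharge does the incentivizing.
        println!("{:>6}: {:.2} credits", lang, job_price(10.0, lang));
    }
}
```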

Given your IEEE work, how urgent is the integration of post-quantum cryptography in blockchain and AI-driven systems?

We aim to be one of the few, if not the only, post-quantum secure blockchains. While current quantum computers will probably never break RSA encryption, due to a different energy limit, that doesn’t rule out the emergence of new computational methods. Much of post-quantum cryptography is lattice-based. It is secure because it’s mathematically hard to fit a quantum bit, which geometrically resembles a circle, perfectly onto a lattice structure. We plan to use lattice-based encryption to secure our platform for the future.
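For background: most lattice-based schemes, including those NIST has standardized, rest on the Learning-With-Errors (LWE) problem. The Rust toy below shows the shape of an LWE instance; its parameters are far too small to be secure, and the whole thing is illustrative rather than Argentum AI’s construction:

```rust
// Toy Learning-With-Errors (LWE) instance, the hard problem under most
// lattice-based post-quantum cryptography. Given the public pair
// (A, b = A*s + e mod Q), recovering the secret s is believed hard even
// for quantum computers. Parameters here are toy-sized, not secure.

const Q: u64 = 97; // small prime modulus
const N: usize = 4;

// Tiny deterministic xorshift PRNG so the example needs no dependencies.
fn next(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

fn main() {
    let mut rng: u64 = 0x2545F4914F6CDD1D;
    // Secret vector s and small error (noise) vector e.
    let s: Vec<u64> = (0..N).map(|_| next(&mut rng) % Q).collect();
    let e: Vec<u64> = (0..N).map(|_| next(&mut rng) % 3).collect();
    // Public matrix A and public vector b = A*s + e (mod Q).
    let a: Vec<Vec<u64>> = (0..N)
        .map(|_| (0..N).map(|_| next(&mut rng) % Q).collect())
        .collect();
    let b: Vec<u64> = (0..N)
        .map(|i| {
            let dot = (0..N).map(|j| a[i][j] * s[j] % Q).sum::<u64>() % Q;
            (dot + e[i]) % Q
        })
        .collect();
    // Without the noise e, solving b = A*s for s is easy linear algebra;
    // the noise is exactly what makes the lattice problem hard.
    println!("public A = {:?}", a);
    println!("public b = {:?}", b);
}
```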

What is your vision for Argentum AI in one year? What does success look like?

I know this is ambitious, but within one year, I aim to see a diverse range of applications, not just AI, being implemented and deployed on our platform. I want to see users running molecular simulations, financial pricing, complex optimizations, and AI training for both language and vision models at various scales. Success for me is witnessing these real-world scientific and industrial compute tasks being completed on Argentum AI.