DL Research Content

Catching up with Leo Fan on Cysic's mainnet, CyClaw's AI agents, and why he disagrees with Charles Hoskinson

Leo Fan; Illustration: Andrés Tapia; Source: CYSIC;

The race to scale zero-knowledge proofs and AI computation keeps running into a hard hardware bottleneck: developers can wait hours for a single proof to be generated. We sat down with Leo Fan, founder of Cysic, at EthCC last week to discuss how his team is overcoming those limits to make real-time proving a reality.

With a PhD in cryptography from Cornell and deep expertise in hardware design, Leo is perfectly positioned to solve these exact performance issues. We caught up with him right after the highly anticipated Cysic mainnet launch, a defining moment for the ComputeFi economy.

He shared fascinating insights on how automated AI coding loops are outperforming human engineers and how decentralised networks will democratise access to high-performance compute.

Read more about his vision for the verifiable compute engine of Web3 below.

When we last spoke at Consensus in February, mainnet was just two months old. What did Day One feel like? Has the market caught up to what Cysic has been building?

We launched mainnet in December, so the most stressful period was already behind us by February. The difference between testnet and mainnet is stark. Real-value assets on mainnet incentivise adversaries to probe every aspect of the network and explore unanticipated edge cases.

We built an internal monitoring system that tracks several key factors around the clock. We also have team members across time zones watching it continuously.

We prepared multiple contingency plans for every foreseeable attack scenario, making the first two months extremely intense. Now, we are fully focused on shipping new products. We open-sourced our proprietary ZKVM, establishing it as the industry's top performer.

Alongside that, we are expanding the Cysic AI ecosystem. We just launched CyClaw, an open cloud platform where users can run AI instances directly on our hardware and add custom skills. This setup ensures complete privacy by keeping the models isolated from your personal data.

Who is connecting hardware to the network, and does it appear to be the kind of participants you expected?

During the testnet, we had over 100,000 GPU contributors. After the mainnet launch, that number decreased to around 10,000. Many testnet participants were primarily farming for the airdrop. That said, we remain the largest zero-knowledge proof network in the industry by a significant margin. We are far ahead of our closest competitors in active proving.

The contributors who remained are genuine early supporters. They keep their machines connected to Cysic because it functions as a highly active compute marketplace. Their hardware generates real yield through ZK proving, AI inference, and other workloads layered on top of our blockchain coordination system.

Most people who understand blockchains know that they come in a variety of flavours. Can you explain where Cysic excels as a Proof-of-Compute blockchain and why that matters to someone building an app or a protocol?

Our consensus mechanism is proof of compute. This operates as a hybrid of proof-of-work and proof-of-stake, defining "work" strictly as useful computation. The machines on our network generate ZK proofs and execute AI inference and training. The network then translates that output into a computing score. Achieving a higher computing score grants you more influence in the consensus process and yields greater rewards.
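The mapping Leo describes, from useful computation to consensus influence, can be illustrated with a toy model. The weights, field names, and scoring function below are hypothetical stand-ins, not Cysic's actual formula:

```python
# Toy model of a proof-of-compute score: useful work (ZK proofs,
# AI inference) is tallied per node and normalised into a share of
# consensus influence. All weights are illustrative, not Cysic's.
from dataclasses import dataclass

@dataclass
class NodeWork:
    zk_proofs: int         # proofs generated this epoch
    inference_tokens: int  # AI inference tokens served this epoch

def computing_score(work: NodeWork,
                    proof_weight: float = 10.0,
                    token_weight: float = 0.001) -> float:
    """Combine workload types into one score (hypothetical weights)."""
    return work.zk_proofs * proof_weight + work.inference_tokens * token_weight

def consensus_shares(fleet: dict[str, NodeWork]) -> dict[str, float]:
    """Normalise scores so influence and rewards sum to 1."""
    scores = {node: computing_score(w) for node, w in fleet.items()}
    total = sum(scores.values()) or 1.0
    return {node: s / total for node, s in scores.items()}

fleet = {
    "node-a": NodeWork(zk_proofs=120, inference_tokens=500_000),
    "node-b": NodeWork(zk_proofs=30, inference_tokens=2_000_000),
}
shares = consensus_shares(fleet)
```

The point of the sketch is the shape of the incentive: a node that does more useful work earns a proportionally larger share of consensus weight and rewards, with no upfront stake required.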

I view this as a highly natural model for a compute-focused network. Early proof-of-work models faced valid criticism for environmental waste. While proof of stake solved the environmental issue, it introduced a new hurdle: the need for massive upfront token purchases. Token volatility often makes that route unappealing for everyday users.

Purchasing a GPU server offers a more practical opportunity. That physical hardware delivers valuable AI inference for other projects, generating productive returns from a highly tangible asset.

Charles Hoskinson argues that crypto can safely rely on hyperscalers like AWS because advanced cryptography neutralises the infrastructure risk. You disagree. Where does Cysic mainnet fit into that debate?

Cryptography can mitigate some of that risk, but it certainly does not resolve everything. AWS has experienced several major outages over the years. If you look at the Ethereum validator set, a massive portion of those validators run on AWS, often in the very same data centres. The decentralisation becomes highly superficial when you look at the physical infrastructure.

We believe a decentralised compute network like Cysic serves as a highly resilient complement. Our nodes are community-operated machines spread across countless locations globally. They can seamlessly run ZK proofs, AI inference, and validator workloads. If AWS goes down, projects that rely on a decentralised compute layer remain fully protected and operational.

You mentioned AI verification as one of the workload types the network can handle. With AI dominating every conversation right now, where does Cysic fit into that world?

We fit into that world in two main ways. First, our large fleet of GPU machines enables us to run open-source large language models like Minimax and Llama. We generate inference tokens internally to power our own products, removing the need to purchase tokens from external providers.

Second, we offer cryptographic verification for proprietary models. When you receive an output from a proprietary model like GPT-5, you currently have no way to confirm it actually originated from that specific model. We employ ZK proofs to cryptographically verify that a given token was produced by the exact model and API endpoint you specified. That is a trust guarantee not available anywhere else at present.

You have just launched CyClaw, which lets anyone deploy an AI agent on Telegram in minutes without needing hardware or wasting hours on setup. What does this reveal about the direction Cysic is heading in, and who is the intended user of this product?

You can think of an AI agent as an operating system, similar to macOS or Windows, offered in versions with different capabilities. Skills act like installable applications: you add the ones you need, such as browsing or research, and the agent instantly acquires those capabilities. We will soon launch an agent marketplace alongside our own skill hub.
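The operating-system analogy can be sketched as a minimal plugin registry. The skill names and the interface here are hypothetical illustrations, not CyClaw's actual API:

```python
# Minimal sketch of an agent with installable "skills", mirroring the
# OS/application analogy. Skill names and the interface are hypothetical.
from typing import Callable

class Agent:
    def __init__(self) -> None:
        self.skills: dict[str, Callable[[str], str]] = {}

    def install(self, name: str, handler: Callable[[str], str]) -> None:
        """Installing a skill instantly grants the agent that capability."""
        self.skills[name] = handler

    def run(self, skill: str, query: str) -> str:
        if skill not in self.skills:
            raise KeyError(f"skill '{skill}' not installed")
        return self.skills[skill](query)

agent = Agent()
agent.install("browse", lambda q: f"fetched results for: {q}")
agent.install("research", lambda q: f"summary of sources on: {q}")
answer = agent.run("browse", "zk hardware")
```

A marketplace in this picture is simply a catalogue of handlers that can be installed into the same registry.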

The initial group of users will be individuals who prefer not to run a local AI model on their own laptops due to privacy concerns and the complexity of the setup process. With CyClaw, you simply click four or five buttons, and the setup is completely done.

The agent is hosted on our servers, ensuring your private documents and data remain secure. It integrates with Telegram, Slack, and WeChat, and allows voice interaction. In the long run, the same infrastructure can easily expand to meet the needs of institutions.

The pitch for node operators is that idle hardware can generate yield. How realistic is that for someone who is not already deep in crypto, and what does the setup actually involve?

The setup depends entirely on what kind of user you are. For a non-technical user, you simply download the Cysic app on your phone, paste in your wallet address, and press start.

The phone performs a lightweight task to verify ZK proofs generated by the network. This takes about 50 milliseconds and uses roughly 200 kilobytes of bandwidth. It runs quietly in the background and earns yield without requiring any hardware purchases.
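The phone-side task is verification, which is far cheaper than proving. As a stand-in for a real ZK verifier, the sketch below uses a hash-commitment check; the task format and the check itself are illustrative assumptions, not Cysic's protocol:

```python
# Stand-in for the lightweight phone-side loop: receive a small task
# payload and verify it. A real deployment would invoke a ZK verifier;
# here a SHA-256 commitment check plays that role for illustration.
import hashlib

def verify(statement: bytes, proof: bytes) -> bool:
    """Cheap check standing in for ZK verification (milliseconds of work)."""
    return hashlib.sha256(statement).digest() == proof

def handle_task(task: dict) -> bool:
    # In Leo's figures, each task costs ~50 ms and ~200 KB of bandwidth.
    return verify(task["statement"], task["proof"])

stmt = b"block 1234 state transition"
task = {"statement": stmt, "proof": hashlib.sha256(stmt).digest()}
```

The asymmetry is the whole point: generating the proof takes serious hardware, while checking it is light enough to run in the background on a phone.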

For more technical users with GPU servers, the setup is also very straightforward. You download a binary and install it on your server using our step-by-step documentation. You can essentially copy and paste the commands into your terminal. Depending on your bandwidth, it takes around five minutes. Once it is running, the server participates fully as a compute node.

What is the main point you want a newcomer to grasp about the necessity of a decentralised compute network?

Centralised providers offer very reliable service. However, they lack censorship resistance and present single points of failure. A major outage can take down your entire stack, as we have all seen when AWS goes offline for a few hours.

We suggest adding us as an extra layer in your network. Projects that use decentralised compute remain fully operational even during major cloud outages. Besides the resilience benefit, we provide more cost-effective infrastructure for many heavy workloads.

How do you see zero-knowledge proofs evolving over the next five years in terms of adoption and the underlying technology?

When we started in 2022, there were over 15 different ZK algorithm variants. Everyone was building their own versions. Over time, the community recognised the different trade-offs, and the best approaches began merging. We are now left with three or four serious contenders.

In the next two to three years, I expect the field to converge further, likely to two or three dominant algorithms. This convergence is good news for the industry: once the community agrees on a smaller set of approaches, engineering effort shifts from algorithm design to pure optimisation. Performance will improve much faster, and those algorithms will become deeply integrated across the entire ecosystem.

Same question for AI. What does the world look like with AI in five years?

The coding analogy is very helpful here. A few years ago, tools like Codex could autocomplete a line of code when you pressed the tab key. That marked the early stage of AI-assisted development. Nowadays, tools like Cursor can generate entire functions.

What we are building internally at Cysic extends far beyond that. You draft your plan in simple English. An AI layer translates it into a technical specification. A second layer of builders then implements various components simultaneously. A third layer of reviewers spots errors and suggests improvements. The builders then continue to iterate based on that feedback in a continuous loop.
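The plan-to-spec-to-build-to-review loop Leo describes can be sketched structurally. Here `call_model` is a stub standing in for an LLM call, and the whole pipeline is an illustration of the loop's shape, not Cysic's internal system:

```python
# Structural sketch of the plan -> spec -> build -> review loop.
# call_model is a stub standing in for a real LLM invocation.
def call_model(role: str, prompt: str) -> str:
    return f"[{role} output for: {prompt[:30]}]"  # placeholder response

def develop(plan_english: str, max_rounds: int = 3) -> str:
    spec = call_model("spec-writer", plan_english)   # layer 1: English -> spec
    code = call_model("builder", spec)               # layer 2: implement
    for _ in range(max_rounds):                      # layer 3: review loop
        review = call_model("reviewer", code)
        if "no issues" in review:                    # reviewer approves -> stop
            break
        code = call_model("builder", f"{spec}\nfeedback: {review}")
    return code

result = develop("Add a faster field-multiplication routine")
```

With real models behind each role, the loop keeps iterating on reviewer feedback until the output passes review, which is what lets it run unattended for weeks.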

We have shown internally that this process can run continuously for two to three weeks without human intervention. During this period, it generated over 300,000 lines of code for internal purposes. We have confirmed that the code is fully functional and actually outperforms some of our best hand-written ZKVM code by 10 to 15 per cent.

In five years, natural language software development will be the norm, with humans defining the intent and AI systems managing the implementation independently.