We recently spoke with Jake Salerno, VP of GTM at 0G Labs, during EthCC to discuss the critical need for verifiable infrastructure in the age of artificial intelligence.
Jake brings a decade of experience in the blockchain space. He previously spent three years at Chainlink Labs driving go-to-market strategies for mission-critical data products. At 0G Labs, he leads business development and ecosystem integrations for the world’s first decentralised AI operating system.
In this interview, Jake explains how 0G is solving the AI centralisation crisis, the importance of hardware-level verification, and his ambitious product roadmap for 2026. Read more about his vision for a democratised AI future below.
You spent three years at Chainlink Labs driving GTM for data products before joining 0G. What did that experience teach you about what decentralised infrastructure needs before enterprise buyers take it seriously?
I have spent ten years in this field, and every year has been more exciting than the last. I started my blockchain career in academia in 2016, co-founding a blockchain club. I later moved to Chainlink Labs to work on price feeds and low-latency data streams. We helped secure quadrillions of dollars in the derivatives market.
About two years ago, AI entered my world. I could not deny the inevitable convergence of these two technologies. I believe the future of AI and agentic finance will reside on blockchains.
After meeting the 0G co-founders, I saw infrastructure that perfectly matched that conviction. We are now building a full operating system for onchain AI to ensure these systems scale securely.
You mentioned that 0G began with a question about how to make AI a public good to prevent it from being controlled by a few companies. What are your views on the current problem of AI centralisation, and how does 0G address this today?
Currently, companies control the compute, and your data goes straight to their servers. You have to trust their privacy policy and hope they protect your data. Researchers recently found over 30,000 exposed instances of a popular AI project across 82 countries. That specific project had zero built-in privacy.
We make AI a public good through strict verification. Decentralisation relies on verification to work effectively. The main concern with centralisation is who has the authority to verify what happens on those servers.
0G tackles this with three layers of verification. We run sealed inference inside trusted execution environment (TEE) hardware. We also introduced Agentic ID, a new standard for verified agent identity. Lastly, we launched DiLoCoX, a 107-billion-parameter model built through verified decentralised training.
Jensen Huang validated decentralised AI training in a recent podcast, and 0G was already ahead with the DiLoCoX research article. Why did it take a prominent figure like Jensen to make a bullish case when the facts were already there?
We published the DiLoCoX paper back in June 2025. Jensen Huang recently discussed decentralised training on a major podcast, drawing significant attention, capital, and legitimacy to the category as a whole. We strongly welcome this increased visibility.
Infrastructure projects rarely attract widespread attention through research papers alone; it takes a compelling market narrative. Jensen reached more people in a single episode than our research paper did in nine months.
The market is finally paying attention. When people examine the timeline, they will see that 0G was the first mover by nine months.
Bittensor’s Covenant-72B and 0G’s DiLoCoX are both advancing decentralised AI training from different perspectives. Are these complementary, or are they fundamentally different approaches to how decentralised AI training is developed?
These represent different architectural approaches. Bittensor depends on economic incentives to keep participants honest. 0G relies on hardware verification via trusted execution environments to entirely eliminate the need for trust.
Both approaches expand the broader category. However, the verification question remains essential. When a model handles financial transactions, relying on hardware proof offers a much stronger guarantee than depending on economic incentives. 0G also integrates training, inference, storage, data availability, and settlement into a single comprehensive platform.
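The contrast between the two trust models can be made concrete. Hardware attestation is, at its core, a signature over an inference result produced by a key that never leaves the enclave. The sketch below is purely illustrative and assumes nothing about 0G's actual API or protocol: an HMAC with a hypothetical `ENCLAVE_KEY` stands in for the hardware-backed attestation quote a real TEE would produce.

```python
# Illustrative sketch only: a stand-in for hardware-backed inference
# verification. Real TEEs emit attestation quotes signed by keys fused
# into the hardware; here an HMAC plays that role.
import hashlib
import hmac
import json

ENCLAVE_KEY = b"hypothetical-enclave-key"  # in a real TEE, never leaves hardware


def sealed_inference(prompt: str) -> dict:
    """Run a 'model' inside the enclave and attest to its output."""
    output = prompt.upper()  # placeholder for an actual model call
    payload = json.dumps({"prompt": prompt, "output": output}, sort_keys=True)
    tag = hmac.new(ENCLAVE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "attestation": tag}


def verify(result: dict) -> bool:
    """Check that the payload really came from the enclave, unmodified."""
    expected = hmac.new(
        ENCLAVE_KEY, result["payload"].encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, result["attestation"])


res = sealed_inference("hello")
print(verify(res))  # True: the output is provably from the enclave
res["payload"] = res["payload"].replace("HELLO", "HACKED")
print(verify(res))  # False: any tampering breaks the attestation
```

The design difference follows directly: under economic incentives, a dishonest result merely costs the attacker their stake; under attestation, forging a result requires the hardware key itself.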
Your EthCC keynote is titled “Why Verification Should Be A First-Class Citizen in AI,” focusing on three trust issues in AI. Can you walk us through that?
My talk centred on establishing verification as a fundamental element in AI. We identified three core trust questions that have shaped 0G since its conception: what an AI has computed, who an AI agent is, and how an AI model was trained.
Our answers are sealed inference, Agentic ID, and our 107-billion-parameter model. The room was packed during the presentation, and the verification message clearly resonated with the audience. We are actively bringing these verified products to market this quarter.
0G has over 350 integrations across compute, storage, DA, and the L1. What does the distribution of these integrations look like?
When we prepared for our mainnet launch, we coordinated all the crucial supply-side integrations. You need specific partners to ensure your infrastructure operates securely and efficiently. We launched with 100 launch-day partners across vital categories such as validators, bridges, oracles, and custodians.
Our wider integration network covers both the supply and demand sides. We support builders through our compute, storage, and settlement layers. We also collaborate with various infrastructure providers to offer the best possible tools for entrepreneurs building on 0G.
With all the recent developments and the spotlight on decentralised AI, what direction is 0G heading in 2026? Are there any updates or new points of focus you can tease?
We are currently in the midst of our most intensive shipping period ever. Throughout April, we will be launching a new product each week. This includes Gassed AI, Zero Studio, Company in a Box, the 0G Library, 0G Pay, and the 0G Private Computer.
We are also launching Agentic ID, providing every agent with an onchain identity and passport. As we move into the second quarter, we will introduce a unified 0G SDK and increase our presence at the NTU Singapore AI Research Hub.
By the end of June, every layer of our stack will feature a consumer-facing product. Our core mission remains the same. We are making AI a public good, and we are now delivering the precise tools to make that a reality.
What is your vision for AI in five years? How do you see it evolving by 2030 from a user or technology perspective?
I see AI as something owned by the global community. We are working towards a self-sovereign, verifiable, and democratised AI landscape. Making AI a public good guarantees the technology serves our best interests.
We want to empower humanity with transparent models. Reaching that point would mark a significant milestone in AI development, ensuring the technology genuinely benefits humanity.
Any closing thoughts or things you would like to add?
We look forward to sharing more updates during our upcoming product launches. We are always seeking new partnerships and community feedback. This is a collaborative movement meant to benefit the entire ecosystem.
