As artificial intelligence floods the internet with bots and autonomous agents, proving you are a real human has never been more critical. Yet current solutions often force users to surrender their privacy or risk losing access entirely.
We met with Shady El Damaty, Co-founder of human.tech, during EthCC to discuss how his team is solving this exact problem. Coming from a background in neuroscience and decentralised science, Shady is building the technology stack required to keep humans in the loop. By combining zero-knowledge cryptography with protected self-custody, human.tech offers a secure way to verify identity without compromising personal data.
Read more below to discover how Shady's approach to transparent accountability protects protocols from massive Sybil attacks and ensures you remain in complete control of your digital life.
You came out of academia and decentralised science before building Web3 production systems. What was the moment you realised cryptography could be more than a research interest?
I remember it clearly. I had just finished collecting data for a neuroimaging study and was struggling with reproducibility. It is notoriously hard to obtain consistent results from neuroimaging data. I found myself working with hash functions and signatures to verify every step of the analysis pipeline. I needed to confirm that nothing had been tampered with along the way. That was my first time using cryptography practically.
The real breakthrough came when I discovered IPFS. I was working on a data consortium between institutions to share data for collaborative projects. Meaningful research requires hundreds of terabytes of data. However, that data almost never leaves the collecting institution due to strict patient privacy rules.
IPFS fundamentally changed that perspective. My co-founder, Nana, and I developed a tool similar to BitTorrent for scientific data. It enabled research organisations to share data directly without relying on centralised storage. Afterwards, I was fully convinced there was a lot to build in this space.
Holonym became human.tech. What changed, and what does the rebrand signal about where the protocol is headed?
While Holonym still operates as a labs company building new software, human.tech represents our expanded technology framework. We created this new technology stack to focus entirely on keeping humans in the loop, which serves as the guiding principle for our entire roadmap today.
That principle breaks down into two specific actions. First, we have to prove personhood to clearly distinguish real people from the growing number of autonomous agents.
Second, we have to give users total control. By building secure keys and wallets, we ensure people maintain full custody of their digital lives. Proof of personhood and absolute self-custody are the core pillars driving human.tech forward.
There is a tension at the core of proof of personhood. You need enough signal to verify a real human, but revealing that signal can compromise privacy. How does human.tech balance the two?
This is one of the most significant issues in the digital space right now. As more of our lives become digital, the pressure to uphold social order and enforce accountability is leading governments towards surveillance-heavy solutions. Anonymity is increasingly frowned upon. The instinct is to demand strict identity verification, especially as AI blurs the boundaries between humans and agents.
Privacy has never been more vital. We face a genuine tension between privacy and accountability. The discourse has historically been stuck between complete anonymity maximalism and total government-controlled identity. Neither offers a practical solution.
We have introduced a third approach called transparent accountability. Using zero-knowledge proofs, a user can interact with an application and consent to its terms. As long as they adhere to the rules and do not exploit the protocol, they remain completely private. The cryptographic proof verifies they are authorised to use the system without ever revealing their real identity.
However, if someone breaks the rules and attempts to extract stolen funds, the programmable privacy primitives allow that identity data to be disclosed retroactively. Privacy serves as the default setting. You only risk losing it if you violate the terms you explicitly agreed to.
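As a rough illustration of the escrow idea behind transparent accountability, the toy sketch below has a user publish only a hiding commitment while encrypting their identity to an escrow key that is used solely after a rule violation. All of the primitives here (the SHA-256 counter-mode stream cipher, the escrow-key disclosure flow) are stand-ins; the real protocol uses zero-knowledge proofs and proper public-key escrow, not this scheme.

```python
# Toy sketch of "transparent accountability": privacy by default,
# identity recoverable only by an escrow key holder after a violation.
# Illustrative only -- not the human.tech construction.
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy stream cipher: SHA-256 in counter mode (illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def register(identity: bytes, escrow_key: bytes) -> dict:
    """User publishes a hiding commitment plus an escrow ciphertext.
    In a real system, a ZK proof would show the ciphertext encrypts a
    valid, authorised identity without revealing it."""
    nonce = secrets.token_bytes(16)
    commitment = hashlib.sha256(nonce + identity).hexdigest()
    pad = keystream(escrow_key, nonce, len(identity))
    ciphertext = bytes(a ^ b for a, b in zip(identity, pad))
    return {"commitment": commitment, "nonce": nonce, "ciphertext": ciphertext}

def disclose(record: dict, escrow_key: bytes) -> bytes:
    """Retroactive disclosure: only the escrow key holder (e.g. invoked
    after the agreed terms are broken) can recover the identity."""
    pad = keystream(escrow_key, record["nonce"], len(record["ciphertext"]))
    identity = bytes(a ^ b for a, b in zip(record["ciphertext"], pad))
    assert hashlib.sha256(record["nonce"] + identity).hexdigest() == record["commitment"]
    return identity
```

During normal use, only the commitment is ever seen; the identity surfaces solely through the disclosure path.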
Human Passport takes a multi-signal aggregation approach rather than a single verification method. Can you explain this, and how it compares to iris-based approaches like Worldcoin?
Worldcoin adopted a strict one-person, one-iris mapping approach. They selected the iris because it has an extremely low false positive rate, making it excellent for uniquely identifying individuals worldwide.
I believe this approach carries serious architectural risks. Establishing a permanent one-to-one mapping of a person to a biometric and a cryptographic key is highly restrictive. Although the Worldcoin founders are genuinely well-intentioned and care deeply about security, I find their approach flawed.
Consider common user errors. If your account becomes corrupted or your app is deleted, the hardware will recognise you as previously registered and will refuse to issue a new key. You permanently lose access. Additionally, if a lending application uses your identity as collateral and you default, you could be permanently banned from the ecosystem. You would effectively cease to exist within the system.
The better approach relies on probabilistic signals. We provide developers with the tools to make informed guesses about whether a user is human based on the stakes of their specific use case. A UBI distribution requires strong guarantees against bot farms, whereas a social media application should never require biometrics just to create an account.
Human Passport collects 30 different types of signals, including onchain transaction history, government IDs, and biometrics. Developers can select the specific signal threshold suitable for their context. There is no hardware requirement and no permanent biometric binding. We firmly believe this is the best model for the ecosystem.
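The probabilistic, multi-signal model can be sketched as a weighted score checked against an application-chosen threshold. The signal names and weights below are invented for illustration; Human Passport's actual scoring model is not specified here.

```python
# Hypothetical signal weights -- not the real Human Passport model.
SIGNAL_WEIGHTS = {
    "onchain_history": 1.5,
    "social_account": 0.5,
    "government_id": 10.0,
    "biometric": 15.0,
}

def humanity_score(signals: set[str]) -> float:
    """Aggregate whatever signals a user has presented."""
    return sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals)

def meets_threshold(signals: set[str], threshold: float) -> bool:
    """Each application picks its own bar: a UBI distribution might demand
    a high threshold, while a social app accepts a much lower one."""
    return humanity_score(signals) >= threshold
```

The point is that the developer, not the identity provider, decides how much assurance their use case needs.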
Human Passport has protected more than $500 million in funds and issued more than 43 million verifiable credentials. Can you put a face on the threat it is defending against? What does a successful Sybil attack on a protocol actually look like?
It's hard to put a single face on it, but I can share a specific example we encountered. We identified highly organised Sybil farms operating in Laos, Vietnam, and Bangladesh. During our verification process, if a user encounters a document error, we retain session data for three days to assist with troubleshooting before deleting it.
That data revealed a warehouse in Bangladesh with a long queue of people outside. Each person handed their identity documents to a single operator who verified credentials one by one against a single crypto wallet. It acted as a literal production line for Sybil addresses.
If you perform retroactive Sybil analysis across major airdrops using cluster detection, the results are truly staggering. Tens of billions of dollars have been siphoned off by organised airdrop farmers.
Airdrops offer a genuinely valuable way to bootstrap protocols, but they have also created an industry of organised operations designed to extract value at scale. We are actively working to defend against these very threats.
Human Network pioneered the production use of VOPRF, a cryptographic primitive that lets someone derive a key from a password or biometric without the network seeing the underlying data. What will it take for this to catch on, and what does open-sourcing the code change for builders?
We identified the Verifiable Oblivious Pseudorandom Function about two and a half years ago as an excellent primitive for two specific purposes.
Firstly, it provides a nullifier. This is a cryptographic record that proves an identity credential has been used only once without revealing who used it. Secondly, it is highly effective for deriving a cryptographic wallet key from low-entropy data such as a voice or a face. The function generates a deterministic random string mapped to that data, ensuring only the true owner can regenerate the key.
Human Passport employs this technology to secure user credentials, and our wallet product utilises it for key derivation. Human Network has been operational since April last year as the first VOPRF network in production, generating over 3.5 million keys.
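The blinding trick at the heart of an OPRF can be shown in a few lines. The sketch below runs over a tiny safe-prime group purely for illustration; the parameters, hashing, and verifiability machinery of the real Human Network protocol are entirely different.

```python
# Toy (V)OPRF flow over a small safe-prime group -- illustration only.
import hashlib
import secrets

P = 2879  # toy safe prime, P = 2*Q + 1 (nowhere near production size)
Q = 1439  # prime order of the quadratic-residue subgroup

def hash_to_group(data: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(data).digest(), "big")
    return pow(h % P, 2, P)  # squaring lands in the order-Q subgroup

def blind(data: bytes):
    """Client hides its input (e.g. a biometric hash) behind a random exponent."""
    r = secrets.randbelow(Q - 1) + 1
    return pow(hash_to_group(data), r, P), r

def server_evaluate(blinded: int, server_key: int) -> int:
    """Server applies its secret key without ever seeing the input."""
    return pow(blinded, server_key, P)

def unblind(evaluated: int, r: int) -> int:
    """Client removes the blinding, recovering H(data)^server_key --
    a deterministic value only the true input owner can regenerate."""
    return pow(evaluated, pow(r, -1, Q), P)
```

Because the output is deterministic for a given input and server key, it can serve both as a nullifier (the same credential always maps to the same value) and as seed material for a wallet key.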
The standard is quickly gaining traction. We have open-sourced the implementation, and the Ethereum Foundation funded research into it. We actively share our cryptography with other projects to enable new use cases. The clearest sign of adoption is Worldcoin's migration of its entire backend to VOPRF in its recent update. They rebuilt their system from scratch around this exact standard.
Human Network is secured through restaking via EigenLayer and Symbiotic. How effective is the combination of staking and threshold cryptography against collusion risk?
Staking is essential to threshold networks and a key enabler of multi-party computation. The main challenge with this computation is collusion. In a typical two-of-three setup, if two nodes agree to share a user's private key data externally, there is nothing technically preventing them. This presents a significant vulnerability.
Staking fundamentally alters the incentive structure. Nodes that do not behave honestly and meet protocol requirements face slashing and exclusion from the network. They also forfeit their restaking rewards. Restakers naturally avoid allocating assets to nodes with poor reputations.
Reputation effectively becomes a form of load-bearing security. We see this as a significant breakthrough for bootstrapping trustless threshold protocols. Imposing economic skin in the game makes collusion considerably more costly and immediately easier to detect.
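The incentive mechanism described above reduces to a simple rule: misbehaving nodes lose their stake and drop out of the active set. The sketch below uses made-up numbers and mechanics; the actual EigenLayer and Symbiotic slashing rules are far richer.

```python
# Toy slashing model -- hypothetical, not the EigenLayer/Symbiotic rules.
from dataclasses import dataclass

@dataclass
class Node:
    operator: str
    stake: float
    slashed: bool = False

def slash(node: Node, fraction: float = 1.0) -> float:
    """A node caught colluding forfeits (a fraction of) its stake."""
    penalty = node.stake * fraction
    node.stake -= penalty
    node.slashed = True
    return penalty

def active_set(nodes: list[Node]) -> list[Node]:
    """Slashed nodes are excluded from the threshold network."""
    return [n for n in nodes if not n.slashed]
```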
Can you walk us through Wallet as a protocol, and what is wrong with the way wallets work today? Why does fixing it require a protocol-level intervention?
We designed Wallet to address a specific issue we consistently encountered with identity. If you want identity verification to be truly private, all secrets must remain on the client side. The pre-processed credential data is stored on the user's device, protected by a password.
This creates a massive portability problem. If you lose your device, your proof of personhood disappears because no server synchronises the data. Our initial solution involved third-party wallets to encrypt the credential data and store it on a server. It was effective, but it felt makeshift and awkward. We knew we could build something much better.
Wallet employs two-party computation. One key share is derived on the client side from biometrics or social accounts. The other share resides in a secure enclave or network. Transactions can only execute when both shares are combined.
The threat model is deliberately very strict. We do not trust the user, the network, the dApp, or the smart contract. If the enclave fails, funds remain frozen until service resumes, ensuring a safe failure mode. This provides protected self-custody. Users maintain full control over their keys, while policy enforcement within the enclave ensures they are properly informed of risks before signing any transaction.
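The two-share idea can be illustrated with simple XOR secret sharing: neither the client share nor the enclave share alone says anything about the key. Note this toy recombines the key inside `sign`, which real two-party computation (e.g. threshold ECDSA) deliberately never does; it is only meant to show that both parties are required.

```python
# Toy two-share signing sketch -- real 2PC never reconstructs the key.
import hashlib
import secrets

def split_key(key: bytes) -> tuple[bytes, bytes]:
    """XOR secret sharing: each share alone is uniformly random."""
    client_share = secrets.token_bytes(len(key))
    enclave_share = bytes(a ^ b for a, b in zip(key, client_share))
    return client_share, enclave_share

def sign(client_share: bytes, enclave_share: bytes, message: bytes) -> str:
    """Both shares are needed; a toy MAC stands in for a real signature."""
    key = bytes(a ^ b for a, b in zip(client_share, enclave_share))
    return hashlib.sha256(key + message).hexdigest()
```

If the enclave share is unavailable, no valid signature can be produced, which mirrors the safe failure mode described above: funds freeze rather than leak.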
You have described a future in which every online action must answer two questions: is this a human or a bot, and, if it is a bot, is there a verified human behind it? Do you see this becoming a regulatory or contractual requirement?
Absolutely. Legislation on autonomous systems is currently progressing in the EU that requires operators to demonstrate that humans remain involved. You cannot deploy AI agents without direct accountability for their actions.
We are already seeing the market move in this direction. Enabling users to associate agents with their identities serves as a delegated proof of personhood. I fully anticipate regulatory pressure to enforce that link of accountability from agent to person at both the protocol and government levels.
I recently spoke with the head of decentralised AI at the Ethereum Foundation. Visa and Mastercard are actively addressing this specific concern. They aim to enable autonomous agents to interact with payments, but they must verify that an agent is not acting on behalf of someone on a sanctions list.
"Know Your Agent" will become essential infrastructure. Agent reputation systems will be extremely important. The best approach right now is pluralistic. We must support many ways to define agent identity and allow the ecosystem to discover what works best before converging on a single standard.
The Covenant of Humanistic Technologies lies at the heart of human.tech's broader mission. As AI capabilities expand, has embedding those principles into production infrastructure become more pressing?
Absolutely. The Covenant functions as a bilateral terms-of-use agreement between builders and users.
As writing software becomes simpler, the design philosophy underpinning what is built becomes increasingly crucial. Anyone can write code nowadays. The real challenge lies in developing a nuanced, principled design philosophy for how applications should be created and what they owe to their users.
The Covenant clearly states those exact principles. Privacy and security go well beyond simply making users feel comfortable. They are essential qualities that enhance system resilience and greatly increase the chances of long-term success. Incorporating them into the infrastructure from the outset is the only way to achieve compounding returns on trust over time.
