Opinion by: Rob Viglione, co-founder and CEO of Horizen Labs
Can you trust your AI to be unbiased? A recent research paper suggests the answer is more complicated than a simple yes or no. Unfortunately, bias isn't just a bug; without proper cryptographic guardrails, it's a persistent feature.
A September 2024 study from Imperial College London shows how zero-knowledge proofs (ZKPs) can help companies verify that their machine learning (ML) models treat all demographic groups equally while still keeping model details and user data private.
Zero-knowledge proofs are cryptographic methods that enable one party to prove to another that a statement is true without revealing any additional information beyond the statement’s validity. When defining “fairness,” however, we open up a whole new can of worms.
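To make the idea concrete, consider demographic parity, one common way to formalize fairness (the study may define fairness differently; this is only an illustration). The minimal Python sketch below checks whether a model's positive-prediction rates are roughly equal across groups. In a ZKP deployment, a check like this would be compiled into a proof circuit so the model owner could demonstrate it passes without exposing the model's weights or the underlying user data.

```python
# Illustrative sketch only: demographic parity is one common fairness
# criterion, not necessarily the one used in the study. In a real ZKP
# system, this check would run inside a proof circuit so the model
# owner proves it passes without revealing weights or data.
from typing import Sequence


def demographic_parity_gap(predictions: Sequence[int],
                           groups: Sequence[str]) -> float:
    """Largest difference in positive-prediction rates between any
    two demographic groups (0.0 means perfect parity)."""
    rates = {}
    for g in set(groups):
        members = [p for p, gr in zip(predictions, groups) if gr == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())


# Hypothetical example: a lender proves, in zero knowledge, that the
# gap for its private credit model stays below an agreed threshold.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps = ["A", "A", "A", "A", "B", "B", "B", "B"]
assert demographic_parity_gap(preds, grps) <= 0.5  # hypothetical threshold
```

The key point is that the fairness statement, not the model, is what gets published: the verifier learns only that the gap is below the threshold, nothing about how the model works.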
Machine learning bias
With machine learning models, bias manifests in dramatically different ways. It can cause a credit scoring service to rate a person differently based on their friends' and communities' credit scores, which can be inherently discriminatory. It can also prompt AI image generators to depict the Pope and ancient Greeks as people of various races, as Google's AI tool Gemini infamously did last year.