Internal Announcement - NovaNet
May 21, 2025

Verifiable AI: The ZKP Advantage

In the rapidly evolving landscape of artificial intelligence, a critical question emerges: how do we trust what our AI systems are actually doing? As models grow more complex and decisions more consequential, the ability to verify AI behavior becomes not just desirable, but essential. This concern is amplified by the recent surge in on-device language models, bringing AI capabilities directly to our phones, laptops, and IoT devices. Enter zero-knowledge proofs (ZKPs) - the cryptographic equivalent of saying "I can prove I know something without revealing what that something is."

The Verification Problem in AI

Today's large language models are essentially black boxes - we input a prompt, something magical happens across billions of parameters, and out comes a response. Did the model actually follow the constraints we set? Did it use the intended dataset? Is it behaving according to its specifications? With on-device models becoming increasingly prevalent in 2025, these questions take on new urgency as AI computation moves from centralized data centers to billions of distributed endpoints.

NovaNet's zkEngine: Bringing Certainty to Uncertainty

NovaNet's zkEngine represents a significant advancement in addressing these verification challenges. This open-source framework enables the creation of zero-knowledge proofs for various aspects of AI systems, from model architecture to runtime behavior.

What makes zkEngine particularly powerful is its approach to handling the complexity inherent in modern AI systems. Rather than treating the entire model as a monolithic entity that must be proven all at once (which would be computationally prohibitive), zkEngine allows for compositional proofs, breaking down verification into manageable components.

The NovaNet Advantage: Memory Efficiency

The Achilles' heel of zero-knowledge proof systems has always been their voracious memory appetite. Traditional ZKP frameworks typically require RAM proportional to the computation being proven—making them impractical for large AI models with billions of parameters.

NovaNet's zkEngine implements a groundbreaking memory management system that dramatically reduces this overhead through:

  • Streaming proof generation: Rather than loading the entire computational graph into memory, zkEngine processes it in carefully orchestrated segments.
  • Optimal witness compression: The system employs sophisticated techniques to compress intermediate values without compromising proof validity.
  • Hierarchical memory allocation: zkEngine utilizes a multi-tiered memory architecture that intelligently places frequently accessed proof components in faster memory regions.

The result? ZKP generation for AI systems that would require terabytes of RAM in conventional systems can now run on standard hardware configurations—democratizing access to verification technology. This breakthrough is particularly crucial for on-device models, where memory constraints are even more stringent.
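To give a feel for the streaming idea, here is a minimal, self-contained sketch in Python. It is not zkEngine's actual algorithm: a running hash chain stands in for a folded proof state, but it illustrates the key property, namely that the prover absorbs one segment of the computation trace at a time, so peak memory is bounded by the segment size rather than the full trace length.

```python
import hashlib

def commit_trace_streaming(trace_segments):
    """Fold a long computation trace into one commitment, one segment
    at a time. Memory use is bounded by the size of a single segment,
    never the whole trace (the streaming property described above)."""
    acc = b"\x00" * 32  # running accumulator (stand-in for a folded proof state)
    for segment in trace_segments:
        h = hashlib.sha256()
        h.update(acc)                      # chain in the previous state
        h.update(repr(segment).encode())   # absorb the current segment
        acc = h.digest()
    return acc.hex()

# A verifier replaying the same segments reproduces the same commitment,
# even when the trace arrives as a stream (a generator) rather than a list.
segments = [[i, i * i] for i in range(1000)]
assert commit_trace_streaming(segments) == commit_trace_streaming(iter(segments))
```

Because the accumulator is fixed-size, the same loop works whether the trace has a thousand segments or a billion.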

AI-Native Circuit Design

Unlike generic ZKP frameworks retrofitted for AI applications, zkEngine was built from the ground up with AI verification in mind:

  • Optimized matrix operation circuits: Special-purpose circuits for the bread-and-butter operations of neural networks that reduce proof complexity by orders of magnitude.
  • Attention mechanism verification: Custom circuits designed specifically for efficiently proving the correct execution of transformer attention mechanisms—crucial for verifying LLMs.
  • Activation function libraries: Pre-optimized circuits for common activation functions that minimize the verification overhead for these non-linear operations.
  • Stochastic operation handling: Specialized constructs for proving properties about the probabilistic components of AI systems, such as sampling operations.
  • Quantization-aware proving: Special support for the quantized operations prevalent in on-device models, ensuring verification remains efficient for these memory-optimized architectures.
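As a toy illustration of what a matrix-operation circuit constrains, the sketch below builds the constraint set for a small integer matrix multiply and checks a claimed witness against it. Real circuits compile such constraints into arithmetic gates over a finite field; this Python version only shows the shape of the statement being proven, not zkEngine's internal representation.

```python
def matmul_constraints(n, m, p):
    """One constraint per output entry: out[i][j] == sum_k a[i][k] * b[k][j].
    A real ZK circuit would compile each constraint into arithmetic gates;
    here we just record the index pattern."""
    return [((i, j), [(i, k, k, j) for k in range(m)])
            for i in range(n) for j in range(p)]

def check_witness(a, b, out, constraints):
    """Check every constraint against the witness (a, b, out).
    Integer arithmetic mirrors the quantized setting mentioned above."""
    for (i, j), terms in constraints:
        acc = sum(a[ai][ak] * b[bk][bj] for ai, ak, bk, bj in terms)
        if acc != out[i][j]:
            return False
    return True

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
out = [[19, 22], [43, 50]]        # the claimed product
cs = matmul_constraints(2, 2, 2)
assert check_witness(a, b, out, cs)
```

The point of a special-purpose circuit is that this regular index pattern can be exploited: the prover commits to the pattern once rather than re-deriving it gate by gate for every entry.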

Compositional Verification: Greater Than the Sum of Its Parts

Perhaps zkEngine's most powerful feature is its compositional approach to verification:

  • Proof recycling: Previously generated proofs can be reused as building blocks for more complex verifications, dramatically reducing computational overhead.
  • Layered verification: Different aspects of an AI system can be verified independently—from data preprocessing to model architecture to output post-processing—and then composed into holistic assurances.
  • Recursive proof integration: zkEngine leverages recent advances in recursive SNARK technology, allowing proofs to verify other proofs in a nested structure that enables increasingly complex verification statements.
  • Distributed verification: Critical for on-device models, zkEngine supports splitting verification workloads across multiple devices in a privacy-preserving manner.
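The compositional idea can be sketched with a toy proof object, where a hash commitment stands in for a real SNARK. Each leaf "proof" binds a statement to a hidden witness, and a composed proof commits to its children, so a verifier who has checked the children need only check the parent. This is an illustrative model of proof recycling and layered verification, not zkEngine's proof format.

```python
import hashlib

def _h(*parts):
    """Hash a sequence of strings/bytes into one hex digest."""
    h = hashlib.sha256()
    for p in parts:
        h.update(p if isinstance(p, bytes) else str(p).encode())
    return h.hexdigest()

def leaf_proof(statement, secret_witness):
    # Stand-in for a base proof: binds the statement to a hidden witness.
    return {"statement": statement, "proof": _h(statement, secret_witness)}

def compose(statement, subproofs):
    # A composed proof commits to its children; previously generated
    # (and previously verified) child proofs are reused as-is.
    return {"statement": statement,
            "children": subproofs,
            "proof": _h(statement, *[p["proof"] for p in subproofs])}

def verify_composed(p):
    """Check that the parent proof is consistent with its children."""
    expected = _h(p["statement"], *[c["proof"] for c in p["children"]])
    return p["proof"] == expected

pre = leaf_proof("preprocessing ran on licensed data", "witness-A")
inf = leaf_proof("inference used certified weights", "witness-B")
pipeline = compose("full pipeline verified", [pre, inf])
assert verify_composed(pipeline)
```

In a real recursive SNARK the parent proof cryptographically verifies the children inside the circuit; the hash chain above only models the bookkeeping that makes layered verification composable.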

Flexible Proving Systems

NovaNet recognizes that different verification scenarios have different requirements. zkEngine provides:

  • Multi-backend support: The framework can leverage various proving systems (Groth16, Plonk, Halo2, etc.) depending on the specific verification needs.
  • Hybrid verification models: zkEngine can combine different proving techniques within a single verification flow, optimizing for performance where it matters most.
  • Custom constraint systems: Developers can define specialized constraint systems for unique AI architectures or operations.
  • Resource-adaptive proving: Automatically selects the most appropriate proving strategy based on available computational resources—perfect for the heterogeneous world of on-device AI.
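A resource-adaptive dispatcher might look like the following sketch. The thresholds are invented for illustration, and only the backend names already mentioned above are real proving systems; zkEngine's actual selection logic is certainly more nuanced.

```python
def select_backend(circuit_size, available_ram_mb):
    """Pick a proving strategy from coarse resource heuristics.
    The cutoffs here are illustrative assumptions, not zkEngine's
    real policy."""
    if available_ram_mb < 512:
        return "streaming-folding"      # low-memory devices: trade speed for RAM
    if circuit_size < 1_000_000 and available_ram_mb >= 4096:
        return "groth16"                # small circuits, compact proofs
    return "recursive-snark"            # large circuits, moderate memory

assert select_backend(10_000, 256) == "streaming-folding"
assert select_backend(10_000, 8192) == "groth16"
assert select_backend(50_000_000, 8192) == "recursive-snark"
```

The design point is that callers never hard-code a backend: the same verification request degrades gracefully from a workstation down to a phone.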

On-Device Language Models: The New Verification Frontier

The past year has seen an explosion in the deployment of sophisticated language models directly on consumer devices. From Apple's integration of MindMac into iOS 19, to Samsung's NeuralCore running 7B parameter models on Galaxy devices, to Google's MiniGemini deployment across Android ecosystems—AI is increasingly moving to the edge.

This shift brings several key advantages:

  • Improved privacy through local processing
  • Reduced latency by eliminating server round-trips
  • Offline functionality for uninterrupted use
  • Reduced infrastructure costs for providers

But it also introduces new verification concerns:

  • How can users trust that on-device models respect privacy boundaries?
  • How can providers ensure model integrity across billions of endpoints?
  • How can regulators audit AI behavior without centralized control points?

zkEngine for On-Device Verification

NovaNet's zkEngine is uniquely positioned to address these challenges:

  • Lightweight proofs: zkEngine's memory efficiency makes it practical to generate and verify proofs even on mobile devices with limited resources.
  • Privacy-preserving auditing: Models can generate proofs of proper behavior without exposing user data or proprietary model details.
  • Tamper evidence: Proofs can demonstrate that an on-device model hasn't been modified from its certified version, a critical security feature.
  • Local-to-cloud verification bridge: zkEngine enables a device to prove properties about its local AI execution to remote parties without revealing the inputs or detailed execution trace.
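The local-to-cloud bridge can be illustrated with a toy commitment scheme, where a salted hash stands in for a real zero-knowledge proof. The device binds its output to the certified model and to a commitment of the input; the server checks the binding without ever seeing the input. All names here are hypothetical, and a hash commitment alone is not zero-knowledge in the cryptographic sense; it only models the information flow.

```python
import hashlib
import os

def _digest(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(str(p).encode())
    return h.hexdigest()

def device_prove(model_id, user_input, output):
    """Device side: commit to the input (which stays local) and bind
    the output to the certified model identity."""
    salt = os.urandom(16).hex()
    input_commitment = _digest(salt, user_input)   # raw input never leaves the device
    tag = _digest(model_id, input_commitment, output)
    return {"model_id": model_id,
            "input_commitment": input_commitment,
            "output": output,
            "tag": tag}

def cloud_verify(claim, certified_models):
    """Server side: check the binding without access to the input."""
    if claim["model_id"] not in certified_models:
        return False
    expected = _digest(claim["model_id"], claim["input_commitment"], claim["output"])
    return claim["tag"] == expected

claim = device_prove("llm-v3-certified", "private prompt", "safe response")
assert cloud_verify(claim, {"llm-v3-certified"})
```

A real deployment would replace the tag with a SNARK proving "this output was produced by the certified model on the committed input", which is exactly the statement the hash only asserts.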

Real-World Application: Verified Federated Learning

One particularly exciting application combines zkEngine with the growing trend of federated learning for on-device models:

  1. Devices can prove they've correctly applied model updates without revealing their training data
  2. Aggregators can verify contributions without accessing sensitive information
  3. End users gain confidence that their device's AI behaviors match global standards
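The three steps above can be sketched as a toy federated round. Commitments stand in for the zero-knowledge proofs a real system would use, and the norm bound is an invented stand-in for "the update came from correct local training"; the sketch shows only who checks what.

```python
import hashlib

def commit(update, nonce):
    """Commit to an update vector with a client-chosen nonce."""
    return hashlib.sha256((repr(update) + nonce).encode()).hexdigest()

def client_round(local_update, nonce):
    """Client side: ship the update with a commitment the aggregator
    can check. A real ZKP would additionally prove the update was
    derived correctly, without revealing the training data."""
    return {"update": local_update,
            "commitment": commit(local_update, nonce),
            "nonce": nonce}

def aggregate(contributions, max_norm=10.0):
    """Aggregator side: verify each contribution, drop tampered or
    out-of-bound updates, and average the rest."""
    accepted = []
    for c in contributions:
        ok = commit(c["update"], c["nonce"]) == c["commitment"]
        bounded = sum(x * x for x in c["update"]) ** 0.5 <= max_norm
        if ok and bounded:
            accepted.append(c["update"])
    n = len(accepted)
    return [sum(col) / n for col in zip(*accepted)]

c1 = client_round([1.0, 2.0], "n1")
c2 = client_round([3.0, 4.0], "n2")
assert aggregate([c1, c2]) == [2.0, 3.0]
```

Note that the aggregator never inspects any client's data, only proofs about it, which is the property the numbered steps above describe.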

ZKPs vs. TEEs: Why Zero-Knowledge Has the Edge

Trusted Execution Environments (TEEs) like Intel SGX or AMD SEV have been proposed as another approach to AI verification. While TEEs do provide a protected environment for computation, they fall short compared to ZKP-based approaches like zkEngine in several critical ways:

  1. Trust Assumptions: TEEs ultimately require trusting the hardware manufacturer and the integrity of the secure enclave. ZKPs, by contrast, rely only on mathematical guarantees, eliminating this trust dependency.
  2. Side-Channel Vulnerabilities: TEEs have repeatedly shown vulnerability to various side-channel attacks. ZKPs sidestep this class of attack because verification happens entirely separately from the computation being proven.
  3. Auditability: A ZKP creates a persistent, verifiable proof that can be checked by anyone at any time. TEE attestations are ephemeral and tied to the execution environment.
  4. Hardware Independence: NovaNet's zkEngine works across any computational platform, whereas TEEs are restricted to specific hardware architectures and vendors—a significant limitation in the diverse on-device ecosystem.
  5. Proof Composition: Perhaps most importantly, ZKPs allow for compositional reasoning about system properties. You can combine multiple proofs to establish complex behavioral guarantees—something TEE-based approaches cannot offer.
  6. Resource Requirements: For on-device models, TEEs impose significant overhead, while zkEngine's efficient proving systems can be tailored to device capabilities.

Real-World Applications

The implications of zkEngine's capabilities extend across numerous domains:

  • Regulatory Compliance: Proving that AI systems adhere to regulatory requirements without revealing proprietary details.
  • zkML (Zero-Knowledge Machine Learning): Enabling privacy-preserving inference where neither the model nor the input data is revealed.
  • Agent Behavior Verification: Proving that autonomous AI agents follow specific safety protocols without constant monitoring.
  • Dataset Usage Verification: Demonstrating that models were trained only on licensed or permitted data.
  • Model Integrity: Proving that a deployed model matches its certified version, without revealing model weights.
  • On-Device Privacy Guarantees: Verifying that local models don't exfiltrate sensitive user data while maintaining personalization.
  • Cross-Device Consistency: Ensuring that distributed AI systems maintain consistent behavior across heterogeneous devices.

Conclusion: Trust Through Mathematics

In the realm of AI verification, zero-knowledge proofs offer something uniquely valuable: trust based on mathematical certainty rather than blind faith. NovaNet's zkEngine brings this powerful cryptographic technique to the AI ecosystem in a practical, scalable way that extends from data centers to the billions of devices in our pockets.

As the saying goes in cryptography circles: "In God we trust, all others must bring data." With ZKP technology, we might amend this to: "In God we trust, all AIs must bring proofs."
