Venice AI rolls out verifiable end-to-end encryption for AI chats
Venice AI just rolled out verifiable end-to-end encryption for its AI chats. The update lets users cryptographically verify that their conversations are shielded from the data-harvesting practices now standard across the AI industry.
The AI industry has a massive surveillance problem, as every major player happily hoovers up your chat history to train their next frontier models. Venice AI is trying to flip that script.
The privacy-focused platform, founded by crypto veteran Erik Voorhees, has launched verifiable trusted execution environments (TEEs) and end-to-end encryption (E2EE) for its Pro users. This shifts the privacy promise from a corporate “trust us” to a mathematical “verify it yourself.”
- The hardware enclaves: Pro users can now run inference inside secure hardware enclaves powered by NEAR AI Cloud and Phala Network, which ensures the GPU operators physically cannot read the prompts.
- True end-to-end encryption: In the highest security tier, prompts are encrypted on your local device and only decrypted once inside the verified enclave, so neither Venice nor the compute providers ever see the plaintext.
- The receipt: Every protected response comes with a verification icon that generates a full attestation report when clicked. This allows anyone to independently audit that the computation actually happened inside genuine secure hardware.
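To make the attestation step concrete, here is a deliberately simplified sketch of what “verify it yourself” means. Real TEE attestation (Intel TDX, NVIDIA confidential computing, etc.) relies on vendor-signed certificate chains and hardware-bound keys; in this toy version, an HMAC with a shared key stands in for the vendor's signature, and all names (`EXPECTED_MEASUREMENT`, `verify_attestation`) are hypothetical, not Venice's actual API.

```python
import hmac
import hashlib
import json
import secrets

# Hash of the enclave code the client expects to be running.
# In a real TEE, this "measurement" is computed by the hardware at boot.
EXPECTED_MEASUREMENT = hashlib.sha256(b"enclave-code-v1").hexdigest()


def sign_report(vendor_key: bytes, report: dict) -> str:
    """Stand-in for the hardware vendor signing an attestation report."""
    payload = json.dumps(report, sort_keys=True).encode()
    return hmac.new(vendor_key, payload, hashlib.sha256).hexdigest()


def verify_attestation(vendor_key: bytes, report: dict, signature: str) -> bool:
    """Client-side check: is the report genuine, and is it the code we expect?"""
    payload = json.dumps(report, sort_keys=True).encode()
    expected_sig = hmac.new(vendor_key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, signature):
        return False  # report was not produced by genuine hardware
    # Even a genuine report fails if the enclave runs unexpected code.
    return report.get("measurement") == EXPECTED_MEASUREMENT


vendor_key = secrets.token_bytes(32)
report = {"measurement": EXPECTED_MEASUREMENT, "gpu": "H100"}
sig = sign_report(vendor_key, report)
print(verify_attestation(vendor_key, report, sig))  # True

# A validly signed report for tampered enclave code still fails the check.
tampered = dict(report, measurement="deadbeef")
print(verify_attestation(vendor_key, tampered, sign_report(vendor_key, tampered)))  # False
```

The key point the sketch illustrates: the user does not have to trust Venice's word, because the signed measurement lets anyone independently confirm which code handled their prompt.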
The functional trade-offs: Locking down your data comes with real friction. If you want absolute, mathematically proven privacy, you have to sacrifice convenience.
Running in strict E2EE mode intentionally disables features like web search, memory, and file uploads to preserve encryption integrity. It also makes responses noticeably slower, a deliberate trade of speed for security.
- Anonymous (Free): This tier routes queries to closed models like GPT or Claude through a proxy. Your identity is hidden from the provider, but OpenAI or Anthropic might still retain the data.
- Private (Default): This mode uses open-source models with a strict zero-retention policy enforced by contract, so nothing is stored server-side, but it lacks hardware-level cryptographic proof.
- TEE & E2EE (Pro): These are the new hardware-locked, fully encrypted tiers that guarantee no one, not even Venice, can read your chats.
The Bottom Line: As Big Tech increasingly treats every user interaction as free training data, Venice is building a necessary off-ramp. Offering verifiable encryption is a massive step forward for AI privacy, even if it currently means trading off some speed and functionality to keep the surveillance at bay.
Check out the new modes at venice.ai/chat.
If you need on-demand GPUs for training, fine-tuning, inference, or running open-source models, give RunPod a try.
- Available hardware: H100, H200, A100, L40S, RTX 4090, RTX 5090, and 30+ more
- Cost: significantly cheaper than AWS or GCP, billed per second, no contracts
- Setup: spins up in under a minute, 30+ regions worldwide

Get the core business tech news delivered straight to your inbox. We track AI, automation, SaaS, and cybersecurity so you don't have to.
Just read what you want, and be done with it.
