The setup reflects Vitalik Buterin's view that cloud-based AI tools pose a serious threat to personal privacy.
Buterin runs an open-source model called Qwen3.5:35B using software called llama-server. He operates it on a laptop fitted with an Nvidia RTX 5090 GPU. That configuration produces about 90 tokens per second, a speed he considers fast enough for comfortable daily use.
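For readers unfamiliar with the throughput figure, a rough sketch of how it might be measured: llama-server exposes an OpenAI-compatible HTTP API, so a client can time a completion request and divide the token count by elapsed wall-clock time. The port, prompt, and request fields below are illustrative assumptions, not details of Buterin's actual setup.

```python
# Sketch: estimating generation throughput from a local llama-server
# instance. llama-server serves an OpenAI-compatible API; the URL and
# request shape below are illustrative assumptions.
import json
import time
import urllib.request

def throughput(completion_tokens: int, elapsed_s: float) -> float:
    """Tokens generated per second of wall-clock time."""
    return completion_tokens / elapsed_s

def measure(prompt: str,
            url: str = "http://127.0.0.1:8080/v1/completions") -> float:
    """Time one completion against a locally running server."""
    body = json.dumps({"prompt": prompt, "max_tokens": 256}).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    t0 = time.monotonic()
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return throughput(data["usage"]["completion_tokens"],
                      time.monotonic() - t0)

# At the roughly 90 tokens/second Buterin reports, a 450-token reply
# arrives in about five seconds:
print(throughput(450, 5.0))  # → 90.0
```

The point of the number is latency as experienced by the user: at that rate, answers stream in faster than most people read.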
He keeps a full offline copy of Wikipedia and technical documentation stored locally on his machine. That reduces how often he needs to search the internet. He treats external search queries as a privacy risk because they reveal what users are looking for.
The AI system is also connected to his Ethereum wallet and messaging accounts. Buterin built and released a messaging tool that allows his AI to read Signal messages and emails without restriction. However, the tool blocks the AI from sending messages to anyone outside his own accounts unless a human manually approves each outbound action.
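The gating logic described above, read everything freely but hold outbound messages to third parties for human sign-off, can be sketched as follows. The class and method names are hypothetical; the post does not describe the released tool's actual interface.

```python
# Sketch of an outbound-message gate: the assistant may read freely,
# but sending to anyone outside the owner's own accounts is queued
# until a human approves it. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class MessageGate:
    own_accounts: set[str]
    pending: list[tuple[str, str]] = field(default_factory=list)

    def send(self, recipient: str, text: str) -> str:
        if recipient in self.own_accounts:
            return "sent"                  # self-addressed: no approval needed
        self.pending.append((recipient, text))
        return "held for human approval"   # outbound: wait for a human

    def approve(self, index: int) -> str:
        recipient, _text = self.pending.pop(index)
        return f"sent to {recipient}"      # human confirmed; release it

gate = MessageGate(own_accounts={"owner@example.org"})
print(gate.send("owner@example.org", "note to self"))  # → sent
print(gate.send("alice@example.org", "hello"))         # → held for human approval
```

The asymmetry is the design point: unrestricted reading keeps the assistant useful, while the approval queue keeps a compromised or misbehaving model from exfiltrating anything on its own.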
He said the principle applies equally to financial transactions. He recommended that any team building AI tools connected to Ethereum wallets cap autonomous spending at $100 per day. Any transaction above that threshold should require a human to confirm it before it goes through.
Buterin opened the post by citing a security finding that informed the design. Researchers found that roughly 15% of third-party tools built for OpenClaw, currently the fastest-growing repository on GitHub, contained malicious instructions. Some of those tools quietly sent user data to outside servers with no visible indication to the user.
He described his concern in direct terms: end-to-end encryption and locally run software had finally begun to reach mainstream adoption, and he worries that cloud-based AI is now pushing that progress into reverse.
