What is τemplar (SN3)?

By CMC AI
10 April 2026 12:53AM (UTC+0)
TLDR

τemplar (SN3) is a decentralized AI training subnet on the Bittensor network, designed to collaboratively train large language models using a global pool of distributed computing power.

  1. It's a decentralized training framework that coordinates "miners" and "validators" to train AI models across the internet.

  2. Its core achievement is completing the largest decentralized LLM pre-training run in history, producing the 72-billion-parameter Covenant-72B model.

  3. It operates as Subnet 3 (SN3) within Bittensor, using a native token to facilitate its unique incentive and governance mechanisms.

Deep Dive

1. Purpose & Value Proposition

τemplar addresses the centralization of AI development by creating a permissionless, internet-wide framework for training large models. It enables anyone with GPUs to contribute compute as a "miner," training on specific data slices. Other participants act as "validators," evaluating the quality of each miner's contribution. This structure aims to democratize access to large-scale AI training, moving it away from exclusive, centralized data centers. The project proved this concept by successfully training Covenant-72B, a model competitive with Meta's Llama 2 70B, entirely on decentralized infrastructure (Rendoshi).

2. Technology & Incentive Mechanism

The system is powered by a two-party incentive mechanism detailed in its GitHub documentation. Miners perform local training, compute gradients (updates to the model), compress them, and share them with peers. Validators then assess each miner's update by measuring how much it reduces the model's loss on a specific dataset. A miner's reward is tied to this quantified improvement, creating a built-in economic incentive for honest, high-quality contributions. This design aims to ensure the collective model improves efficiently while resisting malicious or low-effort participation.
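The scoring idea above can be sketched in a few lines of Python. This is an illustrative toy, not τemplar's actual implementation: the function names, the learning rate, and the one-parameter "model" are all hypothetical, but the core logic matches the mechanism described, where a validator scores a miner's update by the loss reduction it produces.

```python
def evaluate_update(params, gradient, eval_loss, lr=0.1):
    """Validator-style scoring (hypothetical sketch): reward a miner's
    gradient update by how much it reduces the evaluation loss."""
    loss_before = eval_loss(params)
    # Apply the miner's proposed update with a fixed step size.
    updated = [p - lr * g for p, g in zip(params, gradient)]
    loss_after = eval_loss(updated)
    return loss_before - loss_after  # positive => the update helped

# Toy model: minimize f(w) = (w - 3)^2, whose true gradient is 2*(w - 3).
def toy_loss(params):
    return (params[0] - 3.0) ** 2

params = [0.0]
honest_grad = [2 * (params[0] - 3.0)]  # a genuine gradient
junk_grad = [5.0]                      # a low-effort/garbage update

honest_score = evaluate_update(params, honest_grad, toy_loss)
junk_score = evaluate_update(params, junk_grad, toy_loss)
```

Because rewards track the measured improvement, the honest gradient earns a positive score while the junk update scores negative, which is the economic pressure the design relies on.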

Conclusion

Fundamentally, τemplar is a working prototype for decentralized, incentive-driven AI training, having already delivered a landmark model through global collaboration. How will its proven framework evolve to train the next generation of even larger AI models?

CMC AI can make mistakes. Not financial advice.