Deep Dive
1. Last Code Release (26 January 2025)
Overview: The last documented update to the τemplar repository was a minor version release over a year ago. It focused on internal tooling for miners rather than user-facing features.
The update to version 0.2.13 introduced "gradient analysis" and "similarities" features, likely analytical tools that help miners optimize their machine learning models. It also fixed a bug involving missing gradient logic and carried out routine maintenance such as renaming components and updating type checks. This suggests development was focused on stabilizing and improving the backend infrastructure for network participants.
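The release notes do not document how the "similarities" tooling works. A common way gradient-sharing networks compare miners' contributions is cosine similarity between flattened gradient vectors; the sketch below is purely illustrative (the function name and plain-list representation are assumptions, not taken from the τemplar codebase):

```python
import math

def cosine_similarity(grad_a, grad_b):
    """Cosine similarity between two flattened gradient vectors.

    Returns 1.0 for updates pointing the same direction,
    0.0 for orthogonal updates, and -1.0 for opposed updates.
    """
    dot = sum(a * b for a, b in zip(grad_a, grad_b))
    norm_a = math.sqrt(sum(a * a for a in grad_a))
    norm_b = math.sqrt(sum(b * b for b in grad_b))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0  # a zero gradient has no direction to compare
    return dot / (norm_a * norm_b)

# Parallel gradients score ~1.0; orthogonal ones score 0.0.
print(cosine_similarity([1.0, 2.0], [2.0, 4.0]))
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))
```

A score near 1.0 would suggest two miners submitted near-identical updates, which is one plausible use for such a tool (detecting redundant or copied gradients).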
What this means: This is neutral for τemplar (SN3) because the update was minor and occurred long ago. It shows foundational work was being done, but the long gap since the last commit indicates public development activity has been quiet, with focus likely shifting to network operations.
(tplr-ai/templar)
2. Historic Model Training (10 March 2026)
Overview: While not a codebase update, the subnet's major achievement was the live, on-chain deployment of the Covenant-72B large language model. This proved the network's core utility.
The τemplar team completed the pre-training of Covenant-72B, a 72-billion-parameter AI model. It was trained on over 1.1 trillion tokens across more than 70 globally distributed nodes using commodity hardware, without a centralized data center. The model achieved a competitive score of 67.1 on the MMLU benchmark, demonstrating that Bittensor's decentralized approach can rival centralized AI labs.
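Training one model across 70+ independent nodes generally requires aggregating each node's gradients before every weight update. τemplar's actual protocol is not described here; as a minimal illustration of the general technique, a synchronous element-wise average of per-node gradients (all names below are hypothetical) looks like:

```python
def average_gradients(node_grads):
    """Element-wise average of gradient vectors reported by each node.

    node_grads: list of equal-length lists, one per participating node.
    In synchronous data-parallel training, every node then applies this
    averaged gradient so all model replicas stay in sync.
    """
    if not node_grads:
        raise ValueError("no gradients to aggregate")
    n = len(node_grads)
    width = len(node_grads[0])
    return [sum(g[i] for g in node_grads) / n for i in range(width)]

# Three nodes contribute gradients for a two-parameter model.
print(average_gradients([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]))  # → [3.0, 4.0]
```

The engineering difficulty at τemplar's scale lies less in the arithmetic than in doing this over the open internet with untrusted, heterogeneous commodity hardware rather than a data-center interconnect.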
What this means: This is extremely bullish for τemplar (SN3) because it validates the entire subnet's purpose. The successful deployment of a competitive large model creates real utility, drives demand for TAO staking to secure the network, and directly fueled a 194% price rally for the SN3 token in March.
(CoinMarketCap)
Conclusion
τemplar's development trajectory has pivoted from frequent code commits to demonstrating large-scale, real-world utility through its decentralized AI network. The historic training of Covenant-72B has become its defining achievement, shifting the value proposition from development activity to proven operational scale. Will the next phase focus on optimizing this live model or expanding to new AI tasks?