Training large-scale AI models requires enormous computational power and GPU time, making it prohibitively expensive for individuals and smaller organizations. Our solution is to decentralize the training process by enabling individuals—called Ai Runners—to contribute their unused GPU resources to a distributed network. In return, they are rewarded with PUR tokens.
Once trained, the AI models will be hosted on ai.purrfectuniverse.com and will be accessible exclusively through PUR tokens, creating a closed-loop token economy. This design ensures that token holders benefit directly from the utility of the network while incentivizing ongoing GPU contributions to power future models.
In summary:
Ai Runners contribute GPU power → earn PUR tokens
PUR tokens unlock access to powerful AI models on our platform
Training process is decentralized using proven tech like Hivemind (see the sketch at the end of this post)
Sustainable token economy that rewards contributors and enables fair access
This approach significantly reduces AI infrastructure costs, democratizes access to advanced models, and establishes a scalable, community-driven AI ecosystem.
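For anyone who wants to see what a training node could look like in practice, here is a minimal sketch using Hivemind, adapted from its public quickstart. The run ID, batch sizes, and toy model are illustrative placeholders, not our actual configuration:

```python
import torch
import hivemind

# The first peer starts its own DHT; later peers would instead pass
# initial_peers=[...] pointing at an existing node.
dht = hivemind.DHT(start=True)

model = torch.nn.Linear(784, 10)                          # toy model, for illustration
base_opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# hivemind.Optimizer wraps a regular torch optimizer and averages
# updates with other peers once the swarm has accumulated enough samples.
opt = hivemind.Optimizer(
    dht=dht,
    run_id="pur_demo_run",       # peers sharing a run_id train together
    batch_size_per_step=32,      # samples this node contributes per step
    target_batch_size=10_000,    # global samples per collaborative update
    optimizer=base_opt,
    use_local_updates=True,      # apply local steps between averaging rounds
    verbose=True,
)
# From here the usual loop applies: opt.zero_grad(), loss.backward(), opt.step()
```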
Things to think about
Challenges to face
The system is tolerant to faults, but is it tolerant to attacks?
Hey! Your idea with PUR tokens and AI is great, and I had a few ideas, so I want to share them here.
For the safety of AI training, I propose a Reputation Smart Contract: Ai Runners would earn scores based on the quality of their GPU contributions, which protects models from bad data and motivates honest participants with extra PUR rewards. Access to AI models on ai.purrfectuniverse.com would be handled via NFT tickets that unlock different models.
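To make the reputation idea concrete, here is a rough Python sketch of the scoring logic such a contract might encode. The class name, the moving-average update, and the thresholds are all my own illustrative assumptions, not a finished design:

```python
from dataclasses import dataclass

@dataclass
class RunnerReputation:
    score: float = 0.5  # starts neutral, stays in [0, 1]

    def record_contribution(self, quality: float, alpha: float = 0.1) -> None:
        """Blend a new quality measurement (0..1, e.g. a gradient-validation
        result) into the running score via an exponential moving average."""
        self.score = (1 - alpha) * self.score + alpha * quality

    def reward_multiplier(self) -> float:
        """Honest, high-quality runners earn bonus PUR; low scores earn none."""
        if self.score < 0.2:      # illustrative slashing threshold
            return 0.0
        return 1.0 + self.score   # up to 2x rewards for a perfect record
```

A real on-chain version would also have to define who measures "quality" and how that measurement is verified, which circles back to the Byzantine-tolerance question below.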
I see huge potential in an AI Quest Builder for MW0rld, where AI models, powered by PUR tokens, would create unique in-game experiences. The player pays PUR tokens to have the AI generate an in-game quest, either just for themselves or for everyone, depending on their choice. The reward in PUR tokens varies with the difficulty of the quest (e.g. conditions such as environment, items, game time). The quest is written as a smart contract on the blockchain, where it is automatically verified as completed and the reward is paid out. Alternatively, the AI could generate a unique NFT at coordinates on the map: the player pays PUR to pick it up, and upon reaching the in-game position calls the smart contract, which transfers the NFT to their wallet.
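The quest flow could be sketched as escrow logic like the following. Every name and rule here is hypothetical; a production version would live in a contract language and need a trusted oracle or verifiable in-game proof for completion:

```python
quests = {}  # quest_id -> quest record; stands in for contract storage

def create_quest(quest_id, creator, reward_pur, conditions, public=True):
    """Player pays PUR up front; the reward is held in escrow."""
    quests[quest_id] = {
        "creator": creator,
        "reward": reward_pur,      # scaled by difficulty off-chain
        "conditions": conditions,  # e.g. environment, items, game time
        "public": public,          # just for the creator, or for everyone
        "completed_by": None,
    }

def complete_quest(quest_id, player, proof_ok):
    """Called when the game submits proof that the conditions were met."""
    q = quests[quest_id]
    if q["completed_by"] is None and proof_ok:
        if q["public"] or player == q["creator"]:
            q["completed_by"] = player
            return q["reward"]     # paid out to the player in PUR
    return 0
```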
What do you think of these ideas? I’d love to discuss this further with you!
Here are the challenges that need to be solved IMO before we go further:
Performance metrics: there is a reason why LLMs are trained on $50k H100 cards: they can fit a whole LLM while sparing communication overhead. Insufficient VRAM, which forces loading from RAM, is already a huge performance hit. Multiple graphics cards in the same computer are also a large performance hit; multiple computers in the same datacenter hit even harder. Multiple unreliable computers spread around the internet sound like a further huge hit. It would be amazing to have some metrics on the performance impact (e.g. is it really more efficient to train on such a network with hundreds of nodes rather than on a single H100 card in a computer with a lot of CPU and RAM?).
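To put rough numbers on the communication side, here is a back-of-envelope calculation. The model size, link speeds, and the assumption of exchanging full gradients every round are all simplifications (real systems compress and overlap communication), but the orders of magnitude are the point:

```python
def sync_time_seconds(n_params, bytes_per_param, link_bits_per_s):
    """Time to ship one full set of gradients over a given link."""
    return n_params * bytes_per_param * 8 / link_bits_per_s

ONE_B_FP16 = (1e9, 2)          # 1B-parameter model, fp16 gradients

home_uplink = 100e6            # 100 Mbit/s residential uplink
nvlink      = 600e9 * 8        # ~600 GB/s NVLink, in bits/s

print(sync_time_seconds(*ONE_B_FP16, home_uplink))  # ~160 s per exchange
print(sync_time_seconds(*ONE_B_FP16, nvlink))       # ~0.003 s per exchange
```

That gap of roughly five orders of magnitude is exactly why these metrics need to be measured rather than assumed.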
Byzantine tolerance: the linked algorithms assume everyone in the network is honest (and not too faulty). As soon as any incentives are put in place, we need to figure out tolerance to attackers who pretend to participate but actually send random data (or worse, malicious data to derail the training or plant backdoors in the resulting NN). Note that Byzantine tolerance adds extra overhead of its own.
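One standard direction from the Byzantine-robust SGD literature is to replace plain averaging with a robust aggregator such as a coordinate-wise trimmed mean. A minimal sketch (the trim ratio is an arbitrary choice here):

```python
import numpy as np

def trimmed_mean(peer_grads, trim_ratio=0.2):
    """Coordinate-wise trimmed mean: sort each coordinate across peers and
    drop the k most extreme values from both tails before averaging, so a
    minority of peers sending junk cannot drag the update arbitrarily far."""
    stacked = np.stack(peer_grads)          # shape: (n_peers, n_params)
    k = int(len(peer_grads) * trim_ratio)   # peers trimmed per tail
    assert 2 * k < len(peer_grads), "too few peers for this trim ratio"
    trimmed = np.sort(stacked, axis=0)[k : len(peer_grads) - k]
    return trimmed.mean(axis=0)
```

Note that the sort in there is exactly the kind of extra overhead mentioned above: robust aggregation is strictly more work than a plain mean.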
Hello again everyone and thanks for participating in this discussion.
As @damir emphasized, we must examine performance and security challenges carefully. Through my research, I identified two primary approaches that directly influence our direction:
Full Model Training
What it is:
Every parameter in the model is updated end-to-end, exactly as in centralized training setups.
Advantages:
Maximum Accuracy: All weights are tuned, achieving the highest model quality.
Deterministic Results: Predictable behavior simplifies debugging and validation.
Drawbacks:
Heavy Hardware Needs: Demands GPUs with ≥40 GB of VRAM (e.g. A100/H100) and high-speed interconnects.
High Network Overhead: Frequent synchronizations increase latency and bandwidth usage.
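The ≥40 GB figure follows from a common rule of thumb for mixed-precision training with Adam: roughly 16 bytes per parameter for weights, gradients, and optimizer state, with activations excluded. A quick calculation:

```python
def training_vram_gb(n_params):
    """Rule of thumb for mixed-precision Adam training:
    2 B fp16 weights + 2 B fp16 grads + 4 B fp32 master weights
    + 8 B fp32 Adam moments = 16 bytes per parameter (no activations)."""
    return n_params * 16 / 1e9

print(training_vram_gb(2.5e9))  # ~40 GB: roughly one A100's worth
print(training_vram_gb(7e9))    # ~112 GB: a 7B model already needs sharding
```

Activations come on top of that, which is why full training saturates even 40 GB cards without sharding or offloading.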