Can Crypto enable peer-to-peer trading of AI computing power? Three projects to help you understand the new trend
Author: Arkady childe
In today's data-driven era, artificial intelligence (AI) technology is advancing at an unprecedented pace. In particular, the training of large AI models keeps pushing the boundaries of what is technically possible, but it also brings significant challenges. Decentralized distributed computing networks have an important role to play in training these large models, yet they face considerable technical bottlenecks and challenges.
One of the biggest demands of decentralized networks is support for the training of large AI models. However, this process involves complex data synchronization and network optimization issues, and resolving these issues is crucial for ensuring the efficiency and effectiveness of the computing network. Additionally, data privacy and security are also important factors that cannot be overlooked. How to conduct effective model training while ensuring data privacy has become an urgent problem to solve.
Currently, technologies such as secure multi-party computation, differential privacy, federated learning, and homomorphic encryption have demonstrated their advantages in specific scenarios, but they also have limitations, especially when dealing with data privacy issues in large-scale distributed computing networks. For example, zero-knowledge proof (ZKP) technology has great potential in this regard, but applying it to the training of large models in large-scale distributed computing networks will require years of research and development. This not only requires more attention and resource investment from academia but also faces significant technical costs and practical application challenges.
Compared to model training, decentralized distributed computing networks show greater practical potential in model inference, and the growth in this area is expected to be enormous. Nevertheless, the inference process still faces numerous challenges, including communication latency, data privacy, and model security. Because inference involves relatively low computational complexity and less data interaction, it is better suited to distributed environments, but overcoming these challenges remains a topic worth exploring in depth.
In this context, we take a closer look at three representative projects in decentralized distributed computing networks: Akash Network, Gensyn, and Together. The goal is to better understand this track, which has the potential to change how production is organized in the future.
Akash Network: A fully open-source P2P cloud marketplace that activates global idle computing power with token incentives
Akash Network is an open-source platform with the core idea of establishing a decentralized peer-to-peer cloud marketplace that connects users seeking cloud services with infrastructure providers that have excess computing resources.
The Akash platform is specifically designed for hosting and managing deployments while providing cloud management services for running Kubernetes workloads. In simple terms, Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.
Users on the Akash platform, referred to as "tenants," are primarily developers who wish to deploy Docker containers to cloud providers that meet specific criteria. An important feature of Docker containers is that they include packaged code and its dependencies, ensuring that applications run the same way in any computing environment. This means that whether developing on a laptop, testing in a sandbox, or running in the cloud, applications do not need to change their code.
One unique aspect of the Akash marketplace is its reverse auction model, which lets users set their own prices and describe the resource requirements for the containers they want to deploy. When a cloud provider's computing resources are underutilized, it can rent them out through the Akash marketplace, much as an Airbnb host rents out a spare room. Notably, the cost of deploying containers through Akash is roughly one-tenth that of the three major cloud service providers (Amazon Web Services, Google Cloud, and Microsoft Azure).
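To make the mechanism concrete, here is a minimal sketch of how a reverse auction marketplace might match a tenant's order with provider bids. The data structures and function names are illustrative only and are not Akash's actual API; in practice the matching and lease creation happen on-chain.

```python
from dataclasses import dataclass

@dataclass
class Order:
    """A tenant's deployment order: resource needs plus the maximum price they will pay."""
    cpu_units: int
    memory_gb: int
    max_price: float

@dataclass
class Bid:
    """A provider's offer to host the workload at a given price."""
    provider: str
    price: float

def match_order(order: Order, bids: list[Bid]) -> Bid | None:
    """Reverse auction: among bids at or below the tenant's maximum price,
    the cheapest provider wins the lease."""
    eligible = [b for b in bids if b.price <= order.max_price]
    return min(eligible, key=lambda b: b.price) if eligible else None

order = Order(cpu_units=4, memory_gb=8, max_price=1.0)
bids = [Bid("provider-a", 0.8), Bid("provider-b", 0.5), Bid("provider-c", 1.2)]
print(match_order(order, bids))  # Bid(provider='provider-b', price=0.5)
```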
All transactions and records on Akash Network are conducted on-chain using its token—Akash Token (AKT). This network is built on the Cosmos SDK framework and utilizes the Tendermint Byzantine Fault Tolerance (BFT) engine to support its Delegated Proof of Stake (DPoS) consensus algorithm. AKT serves not only as a medium of exchange but also plays multiple roles in the Akash network, including ensuring network security, providing rewards, participating in network governance, and processing transactions.
In this way, Akash Network not only provides a more cost-effective cloud service option but also showcases the innovative application of blockchain technology in the modern cloud computing field.
Gensyn: Decomposing complex machine learning tasks into multiple sub-tasks to improve processing efficiency
Gensyn is a blockchain-based decentralized deep learning computing protocol specifically designed to address the demands of the AI computing market.
The core of the protocol lies in decomposing complex machine learning tasks into multiple sub-tasks and achieving highly parallelized computation through participants' computing resources. This approach not only improves computational efficiency but also automates task allocation, verification, and rewards through smart contracts, eliminating the need for centralized management.
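As a rough illustration of this decomposition pattern (a toy data-parallel sketch, not Gensyn's actual protocol), the example below splits a gradient computation across several data shards. In a real network each shard would be handled by a different solver node, with allocation, verification, and payment coordinated by smart contracts.

```python
import numpy as np

def local_gradient(w, X_shard, y_shard):
    """Mean-squared-error gradient computed on one solver's data shard."""
    residual = X_shard @ w - y_shard
    return 2.0 * X_shard.T @ residual / len(y_shard)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(1200, 5)), rng.normal(size=1200)
w = np.zeros(5)

# Decompose the job: each shard would be assigned to a different solver.
shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))

for _ in range(100):
    # In a real network these per-shard gradients are computed in parallel by separate nodes.
    grads = [local_gradient(w, Xs, ys) for Xs, ys in shards]
    w -= 0.01 * np.mean(grads, axis=0)  # aggregate the sub-task results and update
```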
The team successfully completed a $43 million Series A funding round led by the well-known venture capital firm a16z in June 2023, bringing total funding to $50 million.
The Gensyn protocol resembles a smart computing network, with key features including:
Probabilistic learning proofs: Utilizing metadata from the gradient-based optimization process to construct certificates that the work was performed, allowing task completion to be verified quickly (a toy sketch of this spot-checking idea follows this list);
Graph-based localization protocol: Employing a multi-granularity, graph-based localization protocol, combined with cross-validation of consistent execution, so that disputed work can be pinpointed and verified consistently;
Truebit-style incentive games: Using staking and slashing mechanisms to construct incentive games that ensure participants perform their tasks honestly.
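To give a feel for the spot-checking idea behind probabilistic learning proofs, here is a toy sketch (not Gensyn's actual construction): the solver publishes periodic checkpoints of a deterministic training run, and a validator re-executes only one randomly chosen segment instead of the whole job.

```python
import hashlib, random

def train_step(w, lr=0.1):
    """One deterministic toy 'training' step (stand-in for a real gradient update)."""
    return [x - lr * (2 * x) for x in w]  # gradient descent on sum(x^2)

def digest(state):
    return hashlib.sha256(repr([round(x, 12) for x in state]).encode()).hexdigest()

# Solver: run the full job, publishing a checkpoint (state + digest) every 10 steps.
w = [1.0, -2.0, 3.0]
checkpoints = [(list(w), digest(w))]
for step in range(1, 101):
    w = train_step(w)
    if step % 10 == 0:
        checkpoints.append((list(w), digest(w)))

# Validator: re-execute one randomly chosen 10-step segment and compare digests,
# rather than re-running all 100 steps.
i = random.randrange(len(checkpoints) - 1)
state, _ = checkpoints[i]
for _ in range(10):
    state = train_step(state)
print(digest(state) == checkpoints[i + 1][1])  # True when the solver's work checks out
```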
Additionally, the four main roles in the Gensyn system are as follows (a toy sketch of how staking and slashing settle their interactions follows this list):
Submitter: The end user of the system who provides tasks that need computation and pays fees;
Solver: Executes model training and generates proofs that need to be verified by validators;
Validator: Responsible for verifying the accuracy of the proofs provided by solvers;
Reporter: Acts as a safeguard for the system, reviewing validators' work and raising challenges when problems are found.
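The sketch below is a deliberately simplified settlement rule, not Gensyn's actual contract logic, showing how staking and slashing can make honesty the profitable strategy: a solver that passes verification earns the submitter's fee, while a solver caught by an upheld challenge loses its stake to the challenger.

```python
def settle(task_fee: float, solver_stake: float, challenge_upheld: bool):
    """Toy settlement rule: returns (solver_payout, challenger_reward).
    An upheld challenge slashes the solver's stake and pays it to the challenger;
    otherwise the solver gets its stake back plus the submitter's task fee."""
    if challenge_upheld:
        return 0.0, solver_stake
    return solver_stake + task_fee, 0.0

print(settle(task_fee=10, solver_stake=50, challenge_upheld=False))  # honest work: (60.0, 0.0)
print(settle(task_fee=10, solver_stake=50, challenge_upheld=True))   # caught cheating: (0.0, 50)
```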
The Gensyn protocol offers notable advantages in cost and performance. Much as Ethereum's transition from proof of work to proof of stake opened a new way for participants to earn rewards, Gensyn lets participants earn rewards by contributing their computing resources, which lowers computation costs and improves resource utilization. Python simulation results indicate that while model training under the Gensyn protocol adds roughly 46% time overhead, it still represents a significant improvement over other methods.
As a blockchain-based decentralized computing protocol, Gensyn aims to accelerate AI model training and reduce costs by implementing the distribution and rewards of machine learning tasks through smart contracts. Despite facing challenges such as communication and privacy, Gensyn offers an effective way to utilize idle computing power and considers diverse model scales and demands for broader and more flexible applications.
Together: Focused on large model development and application, with $20 million in seed financing
Together is an open-source company dedicated to providing decentralized AI computing solutions, focusing on the development and application of large models. The company's vision is to make AI accessible and usable for anyone, anywhere. In May of this year, Together completed a $20 million seed round of financing led by Lux Capital.
Together was co-founded by Chris, Percy, and Ce, driven by the realization that training large models requires large clusters of high-end GPUs and entails enormous expense. They believe that these resources, and the ability to train models, should not be concentrated in the hands of a few large companies.
Together's development strategy emphasizes the application of open-source models and distributed computing. They believe that the premise for using decentralized computing networks is that models must be open-source, which helps reduce costs and complexity. Their recently released LLaMA-based RedPajama is an example, initiated by Together in collaboration with multiple research teams, aiming to develop a series of fully open-source large language models.
In terms of model inference, Together's R&D team has made a series of updates to the RedPajama-INCITE-3B model, including using LoRA for low-cost fine-tuning and making the model run more efficiently on CPUs. On the model training side, Together is working on the communication bottlenecks of decentralized training, including scheduling optimization and communication compression.
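For readers unfamiliar with LoRA, the sketch below shows what low-cost fine-tuning of this kind typically looks like using the open-source Hugging Face transformers and peft libraries. This is an illustrative example with assumed hyperparameters, not Together's actual training code, and the model id should be checked against what is currently published on Hugging Face.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Illustrative model id; RedPajama-INCITE checkpoints are published on Hugging Face.
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1")

# LoRA freezes the base weights and trains small low-rank adapter matrices instead,
# which drastically cuts the memory and compute needed for fine-tuning.
lora_config = LoraConfig(
    r=8,                                 # rank of the adapter matrices (assumed value)
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # attention projection in GPT-NeoX-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()       # only a small fraction of parameters are trainable
```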
The Together team has a diverse professional background, covering areas from large model development to cloud computing and hardware optimization, demonstrating a comprehensive consideration of AI computing projects. Their strategy reflects a long-term development plan, encompassing the development of open-source large models, testing the application of distributed computing in model inference, and laying out distributed computing for large model training.
As the project is still in its early stages, many key details, such as network incentive mechanisms and token use cases, have yet to be disclosed. These factors are crucial for the success of crypto projects. Therefore, the industry continues to pay close attention to Together's future development and further detailed disclosures.
The future of decentralized AI is vast, but the challenges must be gradually overcome
When examining the integration of decentralized computing networks and AI technology, we find a field full of both challenges and potential. Although AI and Web3 are two distinct fields, they are a natural fit: distributed technology can help limit AI monopolies and promote the formation of decentralized consensus mechanisms. Decentralized computing networks not only provide distributed computing power and privacy protection but can also enhance the credibility and reliability of AI models, supporting rapid deployment and operation.
However, the development of this field is not without obstacles. Compared with centralized computing networks, decentralized networks face high communication costs, and they must overcome numerous technical issues to ensure node reliability and security and to manage distributed computing resources effectively.
Returning to commercial reality, while the deep integration of AI and Web3 is promising, it faces challenges such as high R&D costs and unclear business models. Fields like AI and Web3 are still in the early stages of development, and their true potential remains to be proven over time.