Born on the Edge: How Do Decentralized Computing Networks Empower Crypto and AI?

Youbi Capital
2024-06-12 14:08:23
Recently, Aethir and io.net issued their tokens, making decentralized computing networks a closely watched track. How should blockchain and AI be combined? What are the leading projects in the decentralized computing network track, and how do they compare? What challenges and opportunities does the track face? This article examines these questions in detail.

Author: Jane Doe, Chen Li

Corresponding Author: Youbi Investment Team

The Intersection of AI and Crypto

On May 23, chip giant Nvidia released its fiscal Q1 2025 results. The report showed first-quarter revenue of $26 billion, of which data center revenue grew 427% year-over-year to an astonishing $22.6 billion. That Nvidia can single-handedly prop up the earnings picture of the U.S. stock market reflects the explosive demand for computing power among global tech companies competing in the AI race: as top-tier tech companies expand their AI ambitions, their demand for computing power grows exponentially. According to TrendForce's forecast, in 2024 the demand for high-end AI servers from the four major U.S. cloud service providers (Microsoft, Google, AWS, and Meta) is expected to account for 20.2%, 16.6%, 16%, and 10.8% of global demand respectively, together exceeding 60%.

Image Source: https://investor.nvidia.com/financial-info/financial-reports/default.aspx

"Chip shortage" has become an annual buzzword in recent years. On one hand, the training and inference of large language models (LLMs) require substantial computing power; with model iterations, the cost and demand for computing power increase exponentially. On the other hand, large companies like Meta purchase massive amounts of chips, causing global computing resources to tilt towards these tech giants, making it increasingly difficult for small businesses to obtain the computing resources they need. The dilemma faced by small businesses arises not only from the chip supply shortage caused by surging demand but also from structural contradictions in supply. Currently, there are still a large number of idle GPUs on the supply side; for instance, some data centers have a significant amount of idle computing power (with utilization rates only between 12% and 18%), and due to reduced profits in crypto mining, a large amount of computing resources has also become idle. Although not all of this computing power is suitable for specialized applications like AI training, consumer-grade hardware can still play a significant role in other areas such as AI inference, cloud gaming rendering, and cloud mobile services. The opportunity to integrate and utilize this portion of computing resources is enormous.

Shifting focus from AI to crypto: after three years of stagnation in the crypto market, another bull market has finally arrived, with Bitcoin hitting new all-time highs and memecoins emerging in waves. Although AI and crypto have both been buzzwords in recent years, artificial intelligence and blockchain, as two important technologies, have seemed like parallel lines yet to find an intersection. Earlier this year, Vitalik published an article titled "The promise and challenges of crypto + AI applications", discussing future scenarios where the two could combine. He described many visions, including using blockchain, MPC (multi-party computation), and other cryptographic techniques for decentralized training and inference of AI, which could open the black box of machine learning and make AI models more trustless. There is still a long way to go to realize these visions. However, one use case Vitalik mentioned, using crypto's economic incentives to empower AI, is an important direction achievable in the short term, and decentralized computing networks are one of the most suitable AI + crypto scenarios at this stage.

Decentralized Computing Networks

Currently, there are already several projects developing in the decentralized computing network space. The underlying logic of these projects is similar and can be summarized as: using tokens to incentivize computing power holders to participate in providing computing power services, allowing these scattered computing resources to aggregate into a certain scale of decentralized computing networks. This can improve the utilization of idle computing power while meeting customer computing needs at a lower cost, achieving a win-win situation for both buyers and sellers.

To give readers a comprehensive understanding of this space in a short time, this article deconstructs specific projects and the overall track from both micro and macro perspectives, aiming to help readers grasp each project's core competitive advantages and the track's overall development. We introduce and analyze five projects (Aethir, io.net, Render Network, Akash Network, and Gensyn) and then summarize and evaluate both the projects and the track.

From an analytical framework perspective, if we focus on a specific decentralized computing network, we can break it down into four core components:

  • Hardware Network: Integrates dispersed computing resources together, enabling the sharing and load balancing of computing resources through nodes distributed globally, forming the foundational layer of decentralized computing networks.

  • Bilateral Market: Matches computing power providers with demanders through reasonable pricing and discovery mechanisms, providing a secure trading platform to ensure transparency, fairness, and trustworthiness in transactions between supply and demand.

  • Consensus Mechanism: Ensures that nodes within the network operate correctly and complete their tasks. The consensus mechanism mainly monitors two levels: 1) liveness: whether nodes are online and actively ready to accept tasks; 2) proof of work: whether a node effectively and correctly completed a task after receiving it, ensuring that computing power is not diverted to other purposes that occupy its processes and threads. A minimal sketch of both checks follows this list.

  • Token Incentives: The token model is used to incentivize more participants to provide/use services and to capture this network effect with tokens, achieving community profit sharing.
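
To make the consensus bullet's two checks concrete, here is a minimal sketch in Python (hypothetical names and thresholds; it does not depict any specific project's protocol): a liveness heartbeat plus a heavily simplified proof-of-work check that compares a node's returned result against a redundant verifier's checksum.

```python
import hashlib
import time

HEARTBEAT_TIMEOUT = 60  # seconds; hypothetical liveness window

class Node:
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.last_heartbeat = 0.0

    def heartbeat(self) -> None:
        # Liveness: the node periodically reports that it is online
        # and ready to accept tasks.
        self.last_heartbeat = time.time()

    def is_live(self) -> bool:
        return time.time() - self.last_heartbeat < HEARTBEAT_TIMEOUT

def verify_result(result_payload: bytes, verifier_checksum: str) -> bool:
    # Proof of work, heavily simplified: hash the node's returned result
    # and compare it with a checksum computed independently by a redundant
    # verifier node, confirming the task was actually executed.
    return hashlib.sha256(result_payload).hexdigest() == verifier_checksum
```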

From a bird's-eye view of the entire decentralized computing track, Blockworks Research's report provides a good analytical framework, allowing us to categorize the projects in this space into three different layers.

  • Bare Metal Layer: The foundational layer of the decentralized computing stack, primarily tasked with collecting raw computing resources and making them callable via APIs.

  • Orchestration Layer: The middle layer of the decentralized computing stack, mainly responsible for coordinating and abstracting, handling scheduling, scaling, operation, load balancing, and fault tolerance of computing power. Its main role is to "abstract" the complexity of managing underlying hardware, providing a more advanced user interface for specific customer groups.

  • Aggregation Layer: The top layer of the decentralized computing stack, primarily responsible for integration, providing a unified interface for users to perform various computing tasks in one place, such as AI training, rendering, zkML, etc. It acts as the orchestration and distribution layer for multiple decentralized computing services.

[Figure: the three layers of the decentralized computing stack. Image Source: Youbi Capital]
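
The sketch below (hypothetical interfaces, illustrative only) shows how the three layers might compose in code: the bare metal layer exposes raw GPUs through an API, an orchestrator schedules over one pool, and an aggregator routes different task types across multiple backends.

```python
from abc import ABC, abstractmethod

class BareMetal(ABC):
    """Bare metal layer: exposes raw compute resources via an API."""
    @abstractmethod
    def list_gpus(self) -> list[dict]: ...
    @abstractmethod
    def lease(self, gpu_id: str, hours: int) -> str: ...

class Orchestrator:
    """Orchestration layer: schedules and load-balances over one pool."""
    def __init__(self, pool: BareMetal):
        self.pool = pool

    def run(self, task: dict) -> str:
        # Pick the cheapest GPU that meets the task's memory requirement.
        candidates = [g for g in self.pool.list_gpus()
                      if g["mem_gb"] >= task["min_mem_gb"]]
        best = min(candidates, key=lambda g: g["price_per_hour"])
        return self.pool.lease(best["id"], task["hours"])

class Aggregator:
    """Aggregation layer: one unified interface over many orchestrators."""
    def __init__(self, backends: dict[str, Orchestrator]):
        self.backends = backends

    def run(self, task: dict) -> str:
        # Route AI training, rendering, zkML, etc. to a suitable backend.
        return self.backends[task["kind"]].run(task)
```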

Based on the above two analytical frameworks, we will conduct a horizontal comparison of the selected five projects and evaluate them from four aspects—core business, market positioning, hardware facilities, and financial performance.


2.1 Core Business

From a foundational logic perspective, decentralized computing networks are highly homogeneous, utilizing tokens to incentivize idle computing power holders to provide computing services. Around this foundational logic, we can understand the differences in core business among projects from three aspects:

  • Source of Idle Computing Power:

  • There are two main sources of idle computing power in the market: 1) idle computing power from enterprises such as data centers and miners; 2) idle computing power from individual users. Data center computing power is usually professional-grade hardware, while individual users typically purchase consumer-grade chips.

  • Aethir, Akash Network, and Gensyn primarily collect computing power from enterprises. The benefits of collecting computing power from enterprises include: 1) enterprises and data centers usually have higher quality hardware and professional maintenance teams, leading to better performance and reliability of computing resources; 2) the computing resources from enterprises and data centers are often more homogeneous, and centralized management and monitoring make resource scheduling and maintenance more efficient. However, this approach requires the project to have commercial ties with enterprises that hold the computing power, which can affect scalability and decentralization.

  • Render Network and io.net primarily incentivize individuals to provide their idle computing power. The benefits of collecting computing power from individuals include: 1) individuals' idle computing power has a lower explicit cost, providing more economical computing resources; 2) the network's scalability and degree of decentralization are higher, enhancing the system's resilience and robustness. The downside is that individual resources are widely distributed and heterogeneous, making management and scheduling complex and raising operational difficulty. Additionally, relying on individual computing power to bootstrap initial network effects is more challenging (a harder cold start). Finally, individual devices may pose more security risks, which can lead to data breaches and misuse of computing power.

  • Computing Power Consumers

  • From the perspective of computing power consumers, Aethir, io.net, and Gensyn primarily target enterprises. B-end workloads such as AI and real-time game rendering demand high-performance computing, typically requiring high-end GPUs or professional-grade hardware. Moreover, B-end clients demand stable, reliable computing resources, so high-quality service level agreements are needed to keep projects running smoothly and to provide timely technical support. Additionally, migration costs for B-end clients are high; if a decentralized network lacks a mature SDK that lets project teams deploy quickly (for example, Akash Network requires users to develop against remote ports), it will be difficult to attract clients to migrate. Unless there is a significant price advantage, clients are very reluctant to move.

  • Render Network and Akash Network primarily provide computing services for individuals. To serve C-end users, projects need to design simple and user-friendly interfaces and tools to provide a good consumer experience. Moreover, consumers are very price-sensitive, so projects need to offer competitive pricing.

  • Types of Hardware

  • Common computing hardware resources include CPUs, FPGAs, GPUs, ASICs, and SoCs. These hardware types have significant differences in design goals, performance characteristics, and application areas. In summary, CPUs are better suited for general computing tasks, FPGAs excel in high parallel processing and programmability, GPUs perform exceptionally well in parallel computing, ASICs are most efficient for specific tasks, and SoCs integrate multiple functions, suitable for highly integrated applications. The choice of hardware depends on the specific application requirements, performance needs, and cost considerations. The decentralized computing projects we discuss mainly focus on collecting GPU computing power, which is determined by the type of project business and the characteristics of GPUs. This is because GPUs have unique advantages in AI training, parallel computing, and multimedia rendering.

  • Although most of these projects involve GPU integration, different applications have different hardware requirements, leading to heterogeneous optimization targets and parameters such as parallelism/serial dependencies, memory, and latency. For example, rendering workloads are actually better suited to consumer-grade GPUs than to the more powerful data center GPUs: rendering demands strong ray tracing, and consumer chips like the 4090 have enhanced RT cores optimized specifically for ray-tracing tasks. AI training and inference, on the other hand, require professional-grade GPUs. Therefore, Render Network can gather consumer-grade GPUs like the RTX 3090 and 4090 from individuals, while io.net needs more professional-grade GPUs like the H100 and A100 to meet the needs of AI startups. The toy matching sketch below illustrates this segmentation.
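
A toy illustration of the workload-to-GPU segmentation just described (specs simplified and illustrative; real schedulers also weigh interconnect bandwidth, latency, driver versions, and price):

```python
from dataclasses import dataclass

@dataclass
class GPU:
    model: str
    mem_gb: int
    rt_cores: bool  # dedicated ray-tracing units (consumer RTX line)
    grade: str      # "consumer" or "datacenter"

INVENTORY = [
    GPU("RTX 3090", 24, rt_cores=True,  grade="consumer"),
    GPU("RTX 4090", 24, rt_cores=True,  grade="consumer"),
    GPU("A100",     80, rt_cores=False, grade="datacenter"),
    GPU("H100",     80, rt_cores=False, grade="datacenter"),
]

def candidates(workload: str) -> list[str]:
    if workload == "rendering":
        # Ray tracing benefits from RT cores, so consumer cards qualify.
        return [g.model for g in INVENTORY if g.rt_cores]
    if workload == "ai_training":
        # Training favors large-memory, professional-grade hardware.
        return [g.model for g in INVENTORY
                if g.grade == "datacenter" and g.mem_gb >= 40]
    raise ValueError(f"unknown workload: {workload}")

print(candidates("rendering"))    # ['RTX 3090', 'RTX 4090']
print(candidates("ai_training"))  # ['A100', 'H100']
```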

2.2 Market Positioning

In terms of project positioning, the core issues, optimization focuses, and value capture capabilities that need to be addressed differ across the bare metal layer, orchestration layer, and aggregation layer.

  • The bare metal layer focuses on collecting and utilizing physical resources; the orchestration layer focuses on scheduling and optimizing computing power, designing optimizations around the needs of its customer group; the aggregation layer is general-purpose, focusing on integrating and abstracting different resources. From a value chain perspective, each project should start from the bare metal layer and work its way up the stack.

  • From the perspective of value capture, the ability to capture value increases layer by layer from the bare metal layer, orchestration layer to aggregation layer. The aggregation layer can capture the most value because the aggregation platform can achieve the largest network effects and directly reach the most users, acting as the traffic entry point for the decentralized network, thus occupying the highest value capture position in the entire computing resource management stack.

  • Correspondingly, the difficulty of building an aggregation platform is also the greatest; projects need to comprehensively address issues such as technical complexity, heterogeneous resource management, system reliability and scalability, achieving network effects, security and privacy protection, and complex operational management. These challenges are not conducive to the project's cold start and depend on the development situation and timing of the track. It is unrealistic to pursue an aggregation layer before the orchestration layer has matured and captured a certain market share.

  • Currently, Aethir, Render Network, Akash Network, and Gensyn all belong to the orchestration layer, providing services for specific targets and customer groups. Aethir's main business is real-time rendering for cloud gaming, providing B-end clients with development and deployment environments and tools; Render Network focuses on video rendering; Akash Network aims to provide an open marketplace similar to Taobao; and Gensyn focuses on AI training. io.net positions itself as an aggregation layer, but the functionality it has shipped so far is still some distance from that of a complete aggregation layer: although it has already collected hardware from Render Network and Filecoin, the abstraction and integration of those hardware resources is not yet finished.

2.3 Hardware Facilities

  • Currently, not all projects have disclosed detailed network data. Comparatively, io.net's explorer has the best UI, showing parameters such as GPU/CPU counts, types, prices, distribution, network usage, and node income. However, at the end of April, io.net's frontend was attacked because its PUT/POST endpoints lacked authentication, allowing attackers to tamper with frontend data. This should serve as a warning to other projects about the security and reliability of network data.

  • In terms of the number and model of GPUs, io.net, as an aggregation layer, should have the most hardware collected. Aethir follows closely, while the hardware situation of other projects is less transparent. From the GPU model perspective, io.net has both professional-grade GPUs like A100 and consumer-grade GPUs like 4090, with a wide variety, aligning with io.net's aggregation positioning. io.net can choose the most suitable GPU based on specific task requirements. However, different models and brands of GPUs may require different drivers and configurations, and software also needs complex optimization, increasing management and maintenance complexity. Currently, task allocation in io.net mainly relies on user self-selection.

  • Aethir has released its own mining hardware: in May, the Aethir Edge, backed by Qualcomm, was officially launched. It breaks away from the single, centralized GPU cluster deployed far from users and pushes computing power out to the edge. Aethir Edge will combine with H100 cluster computing power to serve AI scenarios, hosting trained models and providing inference services to users at optimal cost. This solution sits closer to users, delivering faster service and better cost-effectiveness.

  • From a supply and demand perspective, take Akash Network as an example: its statistics show roughly 16k CPUs and 378 GPUs on the network. Based on leasing demand, utilization is 11.1% for CPUs and 19.3% for GPUs (the short calculation after this list turns these percentages into absolute counts). Among GPUs, only the professional-grade H100 has a relatively high rental rate, while most other models sit idle. Other networks face a similar situation: overall demand is not high, and apart from popular chips like the A100 and H100, most computing power remains idle.

  • From a pricing perspective, these networks hold no especially prominent cost advantage over traditional service providers, except when compared with the major cloud computing giants.
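
A two-line reproduction of the Akash utilization figures quoted above (the counts are the article's point-in-time statistics, not live network data):

```python
# Utilization figures quoted above, converted into absolute counts.
total_cpus, total_gpus = 16_000, 378
cpu_util, gpu_util = 0.111, 0.193

print(f"CPUs in active leases: {total_cpus * cpu_util:,.0f}")  # ~1,776
print(f"GPUs in active leases: {total_gpus * gpu_util:,.0f}")  # ~73
```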

2.4 Financial Performance

  • Regardless of how the token model is designed, a healthy tokenomics must meet the following basic conditions: 1) user demand for the network needs to be reflected in the token price, meaning the token can achieve value capture; 2) all participants, whether developers, nodes, or users, need to receive long-term fair incentives; 3) ensure decentralized governance to avoid excessive holdings by insiders; 4) a reasonable inflation and deflation mechanism and token release cycle to prevent drastic fluctuations in token prices that could affect the network's stability and sustainability.

  • If we broadly categorize token models into BME (burn-and-mint equilibrium) and SFA (stake for access), the sources of deflationary pressure differ between the two: under BME, tokens are burned when users purchase services, so deflationary pressure is determined by demand; under SFA, service providers/nodes must stake tokens to qualify to provide services, so deflationary pressure comes from the supply side (the toy simulation after this list contrasts the two). BME's advantage is that it suits non-standardized goods, but if network demand is insufficient it can face sustained inflationary pressure. The projects' token models differ in their details, but broadly, Aethir leans toward SFA, while io.net, Render Network, and Akash Network lean toward BME; Gensyn's model is not yet settled.

  • In terms of revenue, network demand is directly reflected in overall network revenue (we do not discuss miner income here; miners earn task rewards plus project subsidies). Among publicly available figures, io.net's are the highest. Aethir has not yet disclosed revenue, but public information indicates it has signed orders with many B-end clients.

  • In terms of token price, only Render Network and Akash Network have extended trading histories; Aethir and io.net issued their tokens only recently, so their price performance needs further observation and is not discussed in detail here, and Gensyn's plans remain unclear. Judging from the two longer-listed projects, as well as other already-listed projects in the same track not covered here, decentralized computing networks have overall shown very impressive price performance, reflecting, to some extent, significant market potential and high community expectations.
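
A stylized simulation of the two pressures, with entirely hypothetical parameters (real token models add emission schedules, vesting, and fee splits):

```python
def bme_supply(supply: float, burn_per_epoch: float,
               mint_per_epoch: float, epochs: int) -> float:
    # Burn-and-mint equilibrium: users burn tokens to buy services, so the
    # net supply change per epoch is mint minus demand-driven burn.
    for _ in range(epochs):
        supply += mint_per_epoch - burn_per_epoch
    return supply

def sfa_liquid_float(supply: float, providers: int,
                     stake_per_provider: float) -> float:
    # Stake-for-access: providers lock tokens to qualify to serve, so the
    # liquid float shrinks as the supply side grows.
    return supply - providers * stake_per_provider

# Weak demand under BME -> net inflation (1,020,000 after 100 epochs):
print(bme_supply(1_000_000, burn_per_epoch=800, mint_per_epoch=1_000, epochs=100))
# 200 staked providers under SFA -> 800,000 tokens left liquid:
print(sfa_liquid_float(1_000_000, providers=200, stake_per_provider=1_000))
```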

2.5 Summary

  • The decentralized computing network track is developing rapidly, with many projects able to rely on product services to serve customers and generate certain revenues. The track has moved beyond pure narrative and entered a developmental stage where it can provide preliminary services.

  • Weak demand is a common issue faced by decentralized computing networks, as long-term customer demand has not been well validated and explored. However, the demand side has not overly affected token prices, with several projects that have issued tokens performing well.

  • AI is the main narrative for decentralized computing networks, but it is not the only business. In addition to applications in AI training and inference, computing power can also be used for real-time rendering in cloud gaming, cloud mobile services, and more.

  • The hardware heterogeneity of computing networks is relatively high, and the quality and scale of computing networks need further improvement.

  • For C-end users, the cost advantage is not very obvious. For B-end users, in addition to cost savings, considerations such as service stability, reliability, technical support, compliance, and legal support are also important, and Web3 projects generally do not perform well in these areas.

Closing Thoughts

The explosive growth of AI has undoubtedly led to a massive demand for computing power. Since 2012, the computing power used in AI training tasks has been growing exponentially, currently doubling every 3.5 months (in contrast, Moore's Law states it doubles every 18 months). Since 2012, the demand for computing power has increased over 300,000 times, far exceeding the 12-fold growth predicted by Moore's Law. It is projected that the GPU market will grow at a compound annual growth rate of 32% to over $200 billion in the next five years. AMD's estimates are even higher, with the company predicting that the GPU chip market will reach $400 billion by 2027.

Image Source: https://www.stateof.ai/
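
As a rough consistency check on those figures (assuming a measurement window of roughly 64 months, about 2012 to 2018, as in the original analysis), a 3.5-month doubling period does imply growth on the order of 300,000x:

```latex
\mathrm{compute}(t) = \mathrm{compute}(0)\cdot 2^{t/T},\quad T = 3.5\ \text{months}
\;\Rightarrow\; 2^{64/3.5} \approx 2^{18.3} \approx 3.2\times10^{5}
```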

The explosive growth of artificial intelligence and other compute-intensive workloads (such as AR/VR rendering) has exposed structural inefficiencies in traditional cloud computing and existing compute markets. In theory, decentralized computing networks can provide more flexible, cost-effective, and efficient solutions by utilizing distributed idle computing resources to meet the enormous market demand. The combination of crypto and AI therefore has tremendous market potential, but it also faces fierce competition from traditional enterprises, high entry barriers, and a complex market environment. Overall, among all crypto tracks, decentralized computing networks are one of the verticals most likely to achieve real demand.

Image Source: https://vitalik.eth.limo/general/2024/01/30/cryptoai.html

The future is bright, but the road is tortuous. To realize the vision above, numerous problems and challenges still need solving. At this stage, if projects simply offer traditional cloud services, their profit margins are very thin. On the demand side, large enterprises generally build their own computing power, while pure C-end developers mostly choose established cloud services; whether the small and medium-sized enterprises that would actually use decentralized computing resources will generate stable demand still needs further exploration and validation. On the other hand, AI is a vast market with an extremely high ceiling and room for imagination. To address the broader market, future decentralized computing providers will need to move toward model/AI services, exploring more crypto + AI use cases and expanding the value they can create. However, several issues and challenges stand in the way of pushing further into AI:

  • Price Advantage Is Not Obvious: As the earlier comparisons show, the cost advantage of decentralized computing networks has yet to materialize. One likely reason is that for in-demand professional chips like the H100 and A100, market forces keep prices high regardless of where the hardware sits. Additionally, although decentralized networks can collect idle computing resources, decentralization forgoes economies of scale, and high network and bandwidth costs plus significant management and operational complexity further push up computing costs.

  • The Specificity of AI Training: Training AI in a decentralized manner currently faces a significant technical bottleneck, which shows up directly in the GPU workflow. In large language model training, each GPU first receives a pre-processed data batch and performs forward and backward propagation to produce gradients; the gradients are then aggregated across all GPUs (for example, via all-reduce) and the model parameters updated so that every replica stays synchronized. This cycle repeats until all batches are trained or a predetermined number of epochs is reached, and it involves heavy data transmission and synchronization. Questions such as which parallelism and synchronization strategies to use, how to optimize network bandwidth and latency, and how to reduce communication costs have not yet been well answered. At present, using decentralized computing networks for AI training remains unrealistic (see the back-of-the-envelope estimate after this list).

  • Data Security and Privacy: In large language model training, every stage that touches data processing and transmission, such as data distribution, model training, and parameter and gradient aggregation, can affect data security and privacy, and privacy is all the more critical where models or datasets are sensitive. If data privacy issues cannot be resolved, the demand side cannot scale.
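
To make the communication bottleneck concrete, here is a back-of-the-envelope estimate (illustrative parameters; ring all-reduce is one standard synchronization strategy) comparing per-step gradient traffic over a datacenter interconnect versus a typical home internet connection:

```python
# Per-step gradient synchronization time for data-parallel training,
# assuming fp32 gradients and ring all-reduce. Numbers are illustrative.

def allreduce_seconds(params: float, nodes: int, bandwidth_gbps: float) -> float:
    grad_bytes = params * 4                          # fp32 = 4 bytes/gradient
    # Ring all-reduce: each node transfers ~2*(N-1)/N of the gradient data.
    per_node_bytes = 2 * (nodes - 1) / nodes * grad_bytes
    return per_node_bytes * 8 / (bandwidth_gbps * 1e9)

params = 7e9  # a 7B-parameter model
print(f"datacenter (~400 Gbps): {allreduce_seconds(params, 8, 400):.1f} s/step")
print(f"home internet (~1 Gbps): {allreduce_seconds(params, 8, 1):.0f} s/step")
# ~1 s/step in a datacenter vs. ~392 s/step over the public internet,
# which is why internet-scale decentralized training remains impractical.
```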

From the most realistic perspective, a decentralized computing network needs to balance current demand exploration with future market space. It should accurately identify product positioning and target customer groups, such as initially targeting non-AI or Web3 native projects, starting from relatively marginal demands to establish an early user base. At the same time, it should continuously explore various scenarios of AI and crypto integration, pushing the boundaries of technology to achieve service transformation and upgrades.

References

https://www.stateof.ai/

https://vitalik.eth.limo/general/2024/01/30/cryptoai.html

https://foresightnews.pro/article/detail/34368

https://app.blockworksresearch.com/unlocked/compute-de-pi-ns-paths-to-adoption-in-an-ai-dominated-market?callback=%2Fresearch%2Fcompute-de-pi-ns-paths-to-adoption-in-an-ai-dominated-market

https://research.web3caff.com/zh/archives/17351?ref=1554
