Starting from Vitalik's article: a look at the Crypto×AI sub-sectors worth paying attention to

Metrics Ventures
2024-03-18 16:25:16
The AI track is more like a "meme driven by technology narratives."

Authors: @charlotte0211z, @BlazingKevin_, Metrics Ventures

On January 30, Vitalik published The promise and challenges of crypto + AI applications, discussing how blockchain and artificial intelligence could be combined and the challenges that might arise in the process. In the month after the article was published, the NMR, NEAR, and WLD tokens mentioned in it saw significant price increases, completing a round of value discovery. This article organizes the existing sub-directions of the AI track according to the four ways Vitalik proposes for combining Crypto and AI, and briefly introduces representative projects in each direction.

1 Introduction: Four Ways to Combine Crypto and AI

Decentralization is the consensus that blockchain maintains, security is its core principle, and open source is the key foundation that lets on-chain behavior achieve these properties cryptographically. Over the past several rounds of blockchain transformation this approach has held up, but the situation changes somewhat once artificial intelligence is involved.

Imagine designing the architecture of a blockchain or application with artificial intelligence: the model would then need to be open source, but that exposes it to adversarial machine learning attacks, while keeping it closed sacrifices decentralization. We must therefore consider how to integrate artificial intelligence into current blockchains and applications, and how deep that integration should go.

Source: DE UNIVERSITY OF ETHEREUM

In the article When Giants Collide: Exploring the Convergence of Crypto x AI from DE UNIVERSITY OF ETHEREUM, the differences in core characteristics between artificial intelligence and blockchain are elaborated. As shown in the image above, the characteristics of artificial intelligence are:

  • Centralization

  • Low transparency

  • Energy consumption

  • Monopoly

  • Weak monetization properties

On all five points, blockchain is the exact opposite of artificial intelligence. This is the real argument of Vitalik's article: if artificial intelligence and blockchain are combined, what trade-offs must the resulting applications make in data ownership, transparency, monetization, and energy costs, and what infrastructure must be created to make the combination effective?

Based on the above criteria and his own thoughts, Vitalik categorizes applications that combine artificial intelligence and blockchain into four main types:

  • AI as a player in a game

  • AI as an interface to the game

  • AI as the rules of the game

  • AI as the objective of the game

The first three mainly represent three ways AI is introduced into the Crypto world, from shallow to deep. In the author's understanding, this classification reflects how much influence AI has over human decision-making, and therefore how much systemic risk it introduces into the Crypto ecosystem:

  • AI as a player in the application: AI itself does not influence human decisions and behaviors, so it does not pose risks to the real human world, making it currently the most feasible option.

  • AI as an interface to the application: AI provides auxiliary information or tools for human decision-making and behavior, improving user and developer experience and lowering barriers, but incorrect information or operations will pose certain risks to the real world.

  • AI as the rules of the application: AI fully replaces humans in decision-making and operations, so malicious actions and failures of AI will directly lead to chaos in the real world. Whether in Web2 or Web3, there is currently no trust in AI to replace humans in decision-making.

Finally, the fourth type of project aims to leverage Crypto's characteristics to create better artificial intelligence. As noted above, centralization, low transparency, energy consumption, monopoly, and weak monetization can all be naturally mitigated by Crypto's properties. Although many are skeptical that Crypto can influence AI's development, Crypto's most captivating narrative has always been its ability to affect the real world through decentralization, and this grand vision has made the track the most fervently speculated part of the AI sector.

2 AI as a Participant

In mechanisms where AI participates, the ultimate source of incentives is a protocol with human inputs. Before AI becomes an interface, or even the rules, we often need to evaluate the performance of different AIs so they can take part in a mechanism and ultimately be rewarded or penalized by an on-chain process.

AI as a participant, compared with AI as an interface or as rules, poses negligible risk to users and the overall system. It is a necessary stage before AI begins to deeply influence user decisions and behavior, so the cost and trade-offs required to integrate artificial intelligence with blockchain at this level are relatively small, which is why Vitalik considers this type of product highly feasible.

Broadly speaking, and judging by what has actually shipped, most current AI applications belong to this category, such as AI-enabled trading bots and chatbots. What has shipped so far still falls short of the interface role, let alone the rules role: users are still comparing and gradually optimizing among different bots, and crypto users have not yet formed a habit of using AI applications. Vitalik also places Autonomous Agents in this category.

However, from a narrow and long-term vision perspective, we tend to make more detailed classifications of AI applications or AI Agents. Therefore, under this category, we believe that representative sub-tracks include:

2.1 AI Games

To some extent, all AI games fall under this category: players interact with and train AI characters to better meet personal needs, whether aligning them with personal preferences or making them more competitive within the game mechanics. Games are a transitional stage for AI before it enters the real world, and they are currently a low-risk track that ordinary users find easiest to understand. Iconic projects include AI Arena, Echelon Prime, and Altered State Machine.

  • AI Arena: AI Arena is a PVP fighting game in which game characters learn from players and evolve continuously through training. It aims to let more ordinary users access, understand, and experience AI through gaming, while enabling AI engineers to earn income by providing AI algorithms. Each game character is an AI-powered NFT whose Core contains the AI model's essentials: the architecture and the parameters, both stored on IPFS. The parameters of a newly minted NFT are randomly generated, so the character initially executes random actions. Users enhance the character's strategic abilities through imitation learning (IL), and each time a user trains a character and saves progress, the updated parameters are pinned to IPFS (a minimal sketch of this structure appears after this list).

  • Altered State Machine: ASM is not an AI game but a protocol for the rights and trading of AI Agents, positioned as a metaverse AI protocol. It is currently integrating with multiple games, including FIFA, to introduce AI Agents into games and the metaverse. ASM uses NFTs to establish rights and trade AI Agents, with each Agent containing three parts: Brain (the Agent's inherent characteristics), Memories (storing the behavior strategies learned by the Agent and the model training part, bound to the Brain), and Form (appearance and other characteristics). ASM has a Gym module, including decentralized GPU cloud providers to support the computational needs of Agents. Current projects based on ASM include AIFA (AI football game), Muhammed Ali (AI boxing game), AI League (street football game in collaboration with FIFA), Raicers (AI-driven racing game), and FLUF World's Thingies (generative NFTs).

  • Parallel Colony (PRIME): Echelon Prime is developing Parallel Colony, an AI LLM-based game where players can interact with their AI Avatar and influence it. The Avatar will act autonomously based on memories and life trajectories. Colony is currently one of the most anticipated AI games, and the official team recently released a white paper and announced a migration to Solana, leading to a new wave of price increases for PRIME.
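To make the AI-powered NFT idea above concrete, here is a minimal Python sketch of an AI Arena-style character: the Core bundles an architecture and a parameter set, both pinned to IPFS, and an imitation-learning step replaces the parameter file with a newly trained one. The field names, the toy IL update, and the fake IPFS pinning are hypothetical illustrations of the described structure, not AI Arena's actual implementation.

```python
# Hypothetical sketch of an "AI-powered NFT" whose Core holds an
# architecture and parameters pinned to IPFS (not AI Arena's real code).
import hashlib
import random

def fake_ipfs_pin(data: bytes) -> str:
    # Stand-in for pinning content to IPFS and getting back a CID.
    return "Qm" + hashlib.sha256(data).hexdigest()[:16]

class FighterNFT:
    def __init__(self, token_id: int):
        self.token_id = token_id
        self.architecture_cid = fake_ipfs_pin(b"mlp-3x64")
        # A fresh NFT starts with random parameters -> random actions.
        self.parameters = [random.uniform(-1, 1) for _ in range(8)]
        self.parameters_cid = fake_ipfs_pin(repr(self.parameters).encode())

    def imitation_learning_step(self, demo_actions: list, lr: float = 0.1):
        # Nudge parameters toward the player's demonstrated actions,
        # then re-pin the updated parameter file (a toy IL update).
        self.parameters = [p + lr * (a - p)
                           for p, a in zip(self.parameters, demo_actions)]
        self.parameters_cid = fake_ipfs_pin(repr(self.parameters).encode())

fighter = FighterNFT(token_id=1)
fighter.imitation_learning_step(demo_actions=[0.5] * 8)
print(fighter.parameters_cid)  # new CID saved with the training progress
```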

2.2 Prediction Markets/Competitions

Predictive ability is the foundation for AI to make future decisions and behaviors. Before AI models are used for actual predictions, prediction competitions compare the performance of AI models at a higher level, providing incentives for data scientists/AI models through tokens. This has positive implications for the overall development of Crypto×AI—constantly developing more efficient and powerful models and applications suitable for the crypto world through incentives, creating higher quality and safer products before AI plays a deeper role in decision-making and behavior. As Vitalik said, prediction markets are a powerful primitive that can be extended to many other types of questions. Iconic projects in this track include Numerai and Ocean Protocol.

  • Numerai: Numerai is a long-running data science competition in which data scientists train machine learning models on historical market data (provided by Numerai) to predict stock market movements, staking NMR tokens on their models to enter tournaments. Better-performing models earn NMR incentives, while poorly performing models have their staked tokens destroyed. As of March 7, 2024, 6,433 models had been staked, and the protocol had paid out $75,760,979 in incentives to data scientists. Numerai is incentivizing data scientists worldwide to collaboratively build a new kind of hedge fund; funds released so far include Numerai One and Numerai Supreme. Numerai's path: market prediction competition → crowdsourced prediction models → new hedge funds built on the crowdsourced models.

  • Ocean Protocol: Ocean Predictoor focuses on predictions, starting with crowdsourced predictions of cryptocurrency trends. Players can choose to run a Predictoor bot or a Trader bot. A Predictoor bot uses an AI model to predict the price direction of a cryptocurrency pair (like BTC/USDT) at the next time point (e.g., five minutes later) and stakes a certain amount of $OCEAN; the protocol aggregates these into a global prediction weighted by stake. Traders purchase the prediction results and can trade on them. When prediction accuracy is high, Traders profit; Predictoors that predict incorrectly are penalized, while correct ones earn token rewards plus Trader purchase fees (see the sketch after this list). On March 2, Ocean Predictoor announced its latest direction, the World-World Model (WWM), beginning to explore predictions for weather, energy, and other real-world scenarios.
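As an illustration of the stake-weighted mechanics such prediction markets describe, here is a minimal Python sketch: bots stake on an up/down call, the protocol aggregates by stake, and incorrect stakers are slashed in favor of correct ones. The numbers and the pro-rata payout rule are illustrative assumptions, not Ocean's exact formulas.

```python
# Toy model of stake-weighted prediction aggregation with slashing;
# payout rule is an illustrative assumption, not the protocol's formula.
predictions = [  # (predictoor, predicts_up, stake in OCEAN)
    ("alice", True, 100.0),
    ("bob",   True,  50.0),
    ("carol", False, 30.0),
]

up_stake = sum(s for _, up, s in predictions if up)
down_stake = sum(s for _, up, s in predictions if not up)
global_prediction_up = up_stake > down_stake   # stake-weighted aggregate
print(f"aggregate call: {'UP' if global_prediction_up else 'DOWN'}")

actual_up = True     # realized outcome at the next time point
trader_fees = 10.0   # fees paid by Traders who bought the prediction

winners = [(p, s) for p, up, s in predictions if up == actual_up]
losing_pool = sum(s for _, up, s in predictions if up != actual_up)
winner_stake = sum(s for _, s in winners)

for predictoor, stake in winners:
    share = stake / winner_stake                 # pro-rata by stake
    payout = stake + share * (losing_pool + trader_fees)
    print(f"{predictoor}: {payout:.1f} OCEAN back")
# carol's 30 OCEAN stake is slashed and redistributed to alice and bob.
```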

3 AI as an Interface

AI can help users understand what is happening in simple, understandable language, acting as a mentor in the crypto world and alerting users to potential risks, thereby lowering the barriers to using Crypto, reducing user risk, and improving user experience. The achievable products are rich in functionality: risk alerts during wallet interactions, AI-driven intent trading, AI chatbots that answer ordinary users' crypto questions, and more. The audience also expands: almost every group, including ordinary users, developers, and analysts, becomes a potential consumer of these AI services.

Let us restate what these projects have in common: they do not yet replace humans in making decisions or taking actions; they use AI models to provide information and tools that assist human decisions and actions. At this level, the risk of AI misbehaving starts to enter the system: by supplying incorrect information, an AI can distort a human's final judgment, a point analyzed in detail in Vitalik's article.

Many projects can be categorized under this category, including AI chatbots, AI smart contract audits, AI code writing, AI trading bots, etc. It can be said that currently, the vast majority of AI applications are at this preliminary level. Representative projects include:

  • PaaL: PaaL is currently the leading AI chatbot project and can be seen as a ChatGPT trained on crypto knowledge. Integrated with TG and Discord, it provides token data analysis, token fundamentals and tokenomics analysis, text-to-image generation, and other functions. PaaL Bot can be added to group chats to auto-reply to certain messages. PaaL also supports custom personal bots: users can build their own AI knowledge base and custom bot by feeding it data sets. PaaL is now moving toward becoming an AI trading bot; on February 29 it announced PaalX, an AI-supported crypto research and trading terminal offering AI smart contract audits, Twitter-based news integration and trading, and research support, with the AI assistant lowering the entry barrier for users.

  • ChainGPT: ChainGPT has developed a series of crypto tools relying on artificial intelligence, such as chatbots, NFT generators, news aggregators, smart contract generation and auditing, trading assistants, prompt markets, and AI cross-chain exchanges. However, ChainGPT's current focus is on project incubation and Launchpad, having completed 24 project IDOs and 4 Free Giveaways.

  • Arkham: Ultra is Arkham's dedicated AI engine, used among other things to algorithmically match addresses with real-world entities, increasing transparency in the crypto industry. Ultra merges on-chain and off-chain data, both user-submitted and self-collected, into a scalable database whose results are ultimately presented as charts. Arkham's documentation, however, does not discuss the Ultra system in detail. Arkham has drawn attention this cycle because of a personal investment from OpenAI founder Sam Altman, with a roughly fivefold price increase in the past 30 days.

  • GraphLinq: GraphLinq is an automated process management solution that lets users deploy and manage various kinds of automation without programming, such as pushing the Bitcoin price from Coingecko to a TG Bot every five minutes. GraphLinq visualizes automation as a Graph: users create automated tasks by dragging, dropping, and connecting nodes, which the GraphLinq Engine then executes. Although no coding is required, building a Graph still poses a barrier for ordinary users, who must pick a suitable template and select and wire the right logic blocks from hundreds of options. GraphLinq is therefore introducing AI so users can build and manage automated tasks through conversational AI and natural language (a toy version of such a node graph is sketched after this list).

  • 0x0.ai: 0x0's AI-related services mainly include three areas: AI smart contract auditing, AI anti-Rug detection, and AI developer center. Among them, AI anti-Rug detection will identify suspicious behaviors, such as excessively high taxes or liquidity withdrawal, to prevent user scams. The AI developer center utilizes machine learning techniques to generate smart contracts for No-code deployment. However, currently, only AI smart contract auditing has been preliminarily launched, while the other two functions are still under development.

  • Zignaly: Founded in 2018, Zignaly aims to enable individual investors to choose fund managers for crypto asset management, similar to the logic of copy trading. Zignaly is using machine learning and artificial intelligence technologies to establish a systematic evaluation index for fund managers. The first product launched is Z-Score, but as an AI product, it is still relatively basic.
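To show what node-graph automation of the GraphLinq kind looks like mechanically, here is a toy Python sketch: a two-node chain fetches the BTC price from Coingecko's public API and "notifies" by printing, executed on a loop. The node classes and mini-engine are hypothetical stand-ins, not GraphLinq's actual engine or API.

```python
# Hypothetical mini node-graph engine: each node consumes the previous
# node's output; the engine runs the chain on a fixed interval.
import time
import requests

class Node:
    def run(self, inputs):
        raise NotImplementedError

class FetchPriceNode(Node):
    """Fetch the BTC/USD spot price from Coingecko's public API."""
    def run(self, inputs):
        resp = requests.get(
            "https://api.coingecko.com/api/v3/simple/price",
            params={"ids": "bitcoin", "vs_currencies": "usd"},
            timeout=10,
        )
        return resp.json()["bitcoin"]["usd"]

class NotifyNode(Node):
    """Stand-in for a Telegram-bot push; here we just print."""
    def run(self, inputs):
        print(f"BTC/USD is now {inputs}")

def run_graph(nodes, interval_seconds, rounds):
    """Execute the node chain `rounds` times, pausing in between.
    Use interval_seconds=300 for the five-minute case in the text."""
    for _ in range(rounds):
        value = None
        for node in nodes:
            value = node.run(value)
        time.sleep(interval_seconds)

run_graph([FetchPriceNode(), NotifyNode()], interval_seconds=5, rounds=2)
```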

4 AI as the Rules of the Game

This is the most exciting part—letting AI replace humans in decision-making and action: your AI will directly control your wallet, making trading decisions and executing them on your behalf. In this classification, the author believes it can be divided into three levels: AI applications (especially those with a vision of autonomous decision-making, such as automated trading bots and DeFi yield bots), Autonomous Agent protocols, and zkML/opML.

AI applications are tools for making decisions on problems in a specific domain. They accumulate knowledge and data from different sub-fields and rely on AI models tailored to specific problems to make decisions. Note that this article places AI applications in two categories: interfaces and rules. As a development vision, AI applications should become independent decision-making agents, but currently neither the performance of AI models nor the safety of integrating AI meets that requirement; even the interface role is something of a stretch. AI applications are thus at a very early stage; specific projects were introduced earlier and are not repeated here.

Autonomous Agents were mentioned by Vitalik in the first category (AI as a participant); from a long-term perspective, this article places them in the third category. Autonomous Agents use large amounts of data and algorithms to simulate human thinking and decision-making and to execute various tasks and interactions. This article focuses mainly on Agents' communication layer, network layer, and other infrastructure, which define Agent ownership, establish their identities, communication standards, and methods, and connect multiple Agent applications so they can make decisions and act collaboratively.

zkML/opML: Using cryptographic or economic guarantees, zkML/opML ensures that an output is trustworthy because it came from a correctly executed model inference. Security issues are fatal when introducing AI into smart contracts: smart contracts rely on inputs to generate outputs and automate a series of functions, so if an AI misbehaves and provides incorrect inputs, it introduces significant systemic risk to the entire Crypto system. zkML/opML and a series of potential solutions are therefore the foundation for AI to act and decide independently.

Finally, these three components form the three foundational layers of AI as the rules: zkML/opML as the bottom-layer infrastructure ensuring protocol security; Agent protocols establishing an Agent ecosystem capable of collaborative decisions and actions; and AI applications, the specific AI Agents that will continuously improve in a given field and make actual decisions and take actions.

4.1 Autonomous Agents

The application of AI Agents in the Crypto world is natural. From smart contracts to TG Bots to AI Agents, the crypto world is moving towards higher automation and lower user barriers. While smart contracts automatically execute functions through immutable code, they still rely on external triggers to wake them up and cannot run autonomously or continuously. TG Bots lower the user barrier, allowing users to interact with the blockchain through natural language instead of directly engaging with the crypto front end, but they can only perform very simple and specific tasks, still failing to achieve user-intent-centered trading. AI Agents, on the other hand, possess a certain degree of independent decision-making ability, understanding users' natural language, and autonomously finding and combining other Agents and on-chain tools to achieve user-specified goals.

AI Agents are committed to significantly enhancing the user experience of crypto products, while blockchain can also help AI Agents operate in a more decentralized, transparent, and secure manner. The specific assistance includes:

  • Incentivizing more developers to provide Agents through tokens

  • Using NFTs to establish ownership of Agents, enabling Agent-based charging and trading

  • Providing on-chain Agent identity and registration mechanisms

  • Providing immutable Agent activity logs for timely tracing and accountability of their behavior

The main projects in this track include:

  • Autonolas: Autonolas supports the asset rights and composability of Agents and related components through on-chain protocols, allowing code components, Agents, and services to be discovered and reused on-chain, incentivizing developers to receive economic compensation. After developers create a complete Agent or component, they will register the code on-chain and receive an NFT representing ownership of the code. Service Owners will collaborate with multiple Agents to create a service and register it on-chain, attracting Agent Operators to execute the service, with users paying to use the service.

  • Fetch.ai: Fetch.ai has a strong team background and development experience in AI and is currently focused on the AI Agent track. The protocol consists of four key layers: AI Agents, Agentverse, AI Engine, and Fetch Network. AI Agents are the core of the system; the other layers provide frameworks and tools for building Agent services. Agentverse is a software-as-a-service platform used mainly to create and register AI Agents. The AI Engine reads users' natural-language input, converts it into actionable tasks, and selects the most suitable registered AI Agent in Agentverse to perform them. Fetch Network is the protocol's blockchain layer, where AI Agents must register in the on-chain Almanac contract before they can collaborate with other Agents. Notably, Autonolas currently focuses on building Agents for the crypto world and bringing off-chain Agent operations on-chain, while Fetch.ai's scope extends to the Web2 world, such as travel booking and weather prediction.

  • Delysium: Delysium is transitioning from a game to an AI Agent protocol, consisting primarily of two layers: a communication layer and a blockchain layer. The communication layer is Delysium's backbone, providing secure and scalable infrastructure for efficient communication between AI Agents, while the blockchain layer authenticates Agents and uses smart contracts to keep immutable records of Agent behavior. Specifically, the communication layer establishes a unified communication protocol for Agents, using a standardized messaging system so Agents can talk to each other seamlessly, and provides service-discovery protocols and APIs so users and other Agents can quickly find and connect to available Agents. The blockchain layer has two main parts: Agent ID and the Chronicle smart contract. Agent ID ensures that only legitimate Agents can access the network, while Chronicle serves as an append-only log of all important decisions and actions made by Agents; once on-chain, entries cannot be altered, ensuring trustworthy traceability of Agent behavior (a minimal sketch of this registry-plus-log pattern appears after this list).

  • Altered State Machine: ASM establishes standards for the ownership and trading of Agents through NFTs; see the detailed analysis in Section 2.1. Although ASM is currently integrated primarily into games, its foundational standards also have the potential to expand into other Agent domains.

  • Morpheus: Morpheus is building an AI Agent ecosystem, with the protocol aiming to connect four roles: Coders, Compute providers, Community Builders, and Capital, who respectively provide AI Agents, the computing power to run them, front-end and development tools, and funding. MOR will adopt a fair-launch model to incentivize miners providing compute, stETH stakers, Agent and smart contract developers, and community contributors.
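The registry-plus-immutable-log pattern these Agent protocols share (on-chain identity plus append-only behavior records) can be sketched in a few lines of Python. The class names echo Delysium's Agent ID and Chronicle, but the data structures are hypothetical illustrations, not any protocol's actual contracts.

```python
# Hypothetical sketch: an agent registry gating access, and a
# hash-chained append-only log making history tamper-evident.
import hashlib
import json
import time

class AgentRegistry:
    """On-chain style registry: only registered Agent IDs may act."""
    def __init__(self):
        self.agents = {}

    def register(self, agent_id: str, owner: str):
        self.agents[agent_id] = {"owner": owner}

    def is_registered(self, agent_id: str) -> bool:
        return agent_id in self.agents

class Chronicle:
    """Append-only log; each entry commits to the previous one, so
    history cannot be rewritten without breaking the hash chain."""
    def __init__(self):
        self.entries = []
        self.head = "0" * 64

    def append(self, agent_id: str, action: dict):
        record = {"agent": agent_id, "action": action,
                  "ts": time.time(), "prev": self.head}
        self.head = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)

registry = AgentRegistry()
log = Chronicle()
registry.register("agent-42", owner="0xabc")
if registry.is_registered("agent-42"):
    log.append("agent-42", {"type": "swap", "pair": "ETH/USDC"})
print(log.head)  # tamper-evident commitment to the full history
```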

4.2 zkML/opML

Zero-knowledge proofs currently have two main application directions:

  • Proving that computations have been correctly executed on-chain at a lower cost (ZK-Rollup and ZKP cross-chain bridges are utilizing this feature of ZK);

  • Privacy protection: proving that computations have been correctly executed without needing to know the details of the computation.

Similarly, the application of ZKP in machine learning can also be divided into two categories:

  • Inference verification: the computationally intensive inference of an AI model is executed off-chain, and a ZK proof verified on-chain demonstrates, at low cost, that the inference was executed correctly.

  • Privacy protection, which further splits into two cases: protecting data privacy—running inference on a public model with private data, using zkML to keep the data private; and protecting model privacy—hiding information such as model weights while deriving outputs from public inputs.

The author believes inference verification is currently the more important of the two for Crypto, so we elaborate on its scenarios. From AI as a participant to AI as the rules of the game, we hope to make AI part of on-chain processes; however, the computational cost of AI model inference is too high to run on-chain directly. Moving the process off-chain means accepting the trust problems of a black box: did the AI model operator tamper with my input? Did they use the model I specified for inference? By transforming an ML model into a ZK circuit, we can achieve: (1) smaller models on-chain, storing small zkML models directly in smart contracts, which addresses the opacity problem; (2) inference completed off-chain together with a ZK proof, which is then verified on-chain to establish that the inference was performed correctly. The infrastructure consists of two contracts—the main contract (which uses the ML model's output) and the ZK-proof verification contract (sketched below).
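A schematic sketch of this two-contract pattern follows, with the proving system abstracted behind placeholder prove/verify functions. A real deployment would use an actual zkML prover and an on-chain verifier; treat this purely as the shape of the flow, not a working proof system.

```python
# Schematic zkML inference-verification flow: inference and proving
# happen off-chain, a verifier contract checks the proof on-chain, and
# a main contract consumes the output. `prove`/`verify` are placeholders
# for a real proving backend, not a real zkML library.

def run_model(model_id: str, inputs: list) -> float:
    # Off-chain inference with the model the user specified.
    return sum(inputs) / len(inputs)          # toy "model"

def prove(model_id: str, inputs, output) -> dict:
    # Placeholder: a real prover emits a succinct proof that `output`
    # is the result of running `model_id` on `inputs`.
    return {"model": model_id, "claim": (tuple(inputs), output)}

class VerifierContract:
    def verify(self, proof: dict, model_id: str, inputs, output) -> bool:
        # Placeholder: a real verifier checks the proof cryptographically
        # without re-running the model.
        return (proof["model"] == model_id
                and proof["claim"] == (tuple(inputs), output))

class MainContract:
    def __init__(self, verifier: VerifierContract, model_id: str):
        self.verifier, self.model_id = verifier, model_id

    def consume(self, inputs, output, proof):
        # The main contract only acts on outputs with a valid proof.
        assert self.verifier.verify(proof, self.model_id, inputs, output), \
            "invalid inference proof"
        print(f"accepted verified output: {output}")

inputs = [1.0, 2.0, 3.0]
output = run_model("price-model-v1", inputs)
proof = prove("price-model-v1", inputs, output)
MainContract(VerifierContract(), "price-model-v1").consume(inputs, output, proof)
```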

zkML is still at a very early stage, facing technical difficulties in converting ML models into ZK circuits along with extremely high computational and cryptographic overhead. Similar to Rollup's development path, opML has emerged as an alternative that takes the economic route, relying on Arbitrum's AnyTrust assumption—every claim has at least one honest node—so that either the submitter or at least one verifier is honest. However, opML can only substitute for inference verification; it cannot provide privacy protection (a toy version of the optimistic flow is sketched below).
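For contrast with the zkML flow above, here is a toy sketch of the opML-style optimistic approach: a submitter posts a bonded result, and under the AnyTrust assumption a single honest verifier recomputing the inference is enough to prove fraud within the challenge window. The bonding and slashing details are illustrative assumptions, not opML's actual protocol.

```python
# Toy optimistic (fraud-proof style) verification under an AnyTrust-like
# assumption: one honest verifier recomputing suffices to catch fraud.

def model_inference(x: int) -> int:
    return x * x            # the agreed-upon deterministic computation

class OptimisticResult:
    def __init__(self, inputs: int, claimed: int, bond: float):
        self.inputs, self.claimed, self.bond = inputs, claimed, bond
        self.challenged = False

    def challenge(self, verifier_recompute):
        # Any single honest verifier suffices (AnyTrust assumption):
        # it recomputes the inference and compares against the claim.
        if verifier_recompute(self.inputs) != self.claimed:
            self.challenged = True
            return "fraud proven: submitter's bond is slashed"
        return "challenge failed: result stands"

honest = OptimisticResult(inputs=7, claimed=49, bond=10.0)
dishonest = OptimisticResult(inputs=7, claimed=50, bond=10.0)
print(honest.challenge(model_inference))     # result stands
print(dishonest.challenge(model_inference))  # fraud proven
```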

Current projects are building the infrastructure for zkML and exploring its applications, which are equally important, as it is necessary to clearly demonstrate to crypto users the significant role of zkML and prove that the ultimate value can offset the enormous costs. Among these projects, some focus on ZK technology research related to machine learning (such as Modulus Labs), while others are building more general ZK infrastructure. Related projects include:

  • Modulus is using zkML to apply artificial intelligence to the on-chain inference process. On February 27, Modulus launched the zkML prover Remainder, achieving a 180-fold efficiency improvement compared to traditional AI inference on equivalent hardware. Additionally, Modulus is collaborating with multiple projects to explore practical use cases for zkML, such as working with Upshot to collect complex market data, assess NFT prices, and transmit prices on-chain using AI with ZK proofs; and collaborating with AI Arena to prove that the Avatar and player currently battling are the same.

  • RISC Zero brings models on-chain by proving that the exact computations involved in running a machine learning model were correctly executed within its zkVM.

  • Ingonyama is developing hardware specifically for ZK technology, which may lower the barriers to entry into the ZK technology field, and zkML may also be used in the model training process.

5 AI as the Objective

If the previous three categories focus more on how AI empowers Crypto, then "AI as the objective" emphasizes how Crypto helps AI, specifically how to leverage Crypto to create better AI models and products. This may include multiple evaluation criteria: more efficient, more accurate, more decentralized, etc.

AI rests on three pillars: data, computing power, and algorithms. In each dimension, Crypto aims to provide more effective support for AI:

  • Data: Data is the foundation for model training. Decentralized data protocols will incentivize individuals or enterprises to provide more private data while using cryptography to ensure data privacy and avoid the leakage of sensitive personal data.

  • Computing Power: The decentralized computing power track is currently the hottest AI track, with protocols facilitating matching markets for supply and demand, promoting the matching of long-tail computing power with AI enterprises for model training and inference.

  • Algorithms: Crypto's empowerment of algorithms is the core link to achieving decentralized AI and is the main content of Vitalik's article on "AI as the objective." Creating decentralized, trustworthy black-box AI will address the issues of adversarial machine learning mentioned earlier, but it will face extremely high cryptographic overhead and a series of obstacles. Additionally, "using cryptographic incentives to encourage the creation of better AI" can also be achieved without completely falling into the rabbit hole of full cryptographic encryption.

The monopoly of large tech companies over data and computing power has jointly led to a monopoly over the model training process, with closed-source models becoming key to large enterprises' profits. From an infrastructure perspective, Crypto incentivizes the decentralized supply of data and computing power through economic means while ensuring data privacy during the process through cryptographic methods, thereby supporting decentralized model training to achieve more transparent and decentralized AI.

5.1 Decentralized Data Protocols

Decentralized data protocols mainly operate in the form of data crowdsourcing, incentivizing users to provide datasets or data services (such as data labeling) for enterprises to train models, and opening Data Marketplaces to facilitate matching between supply and demand. Some protocols are also exploring obtaining users' browsing data through DePIN incentive protocols or using users' devices/bandwidth to complete web data scraping.

  • Ocean Protocol: Ocean Protocol establishes data rights and tokenizes them, allowing users to create NFTs for data/algorithms in a no-code manner while creating corresponding datatokens to control access to data NFTs. Ocean Protocol ensures data privacy through Compute To Data (C2D), where users can only obtain output results based on data/algorithms without being able to download the complete data. Founded in 2017, Ocean Protocol serves as a data marketplace and has naturally leveraged the AI boom in this round.

  • Synesis One: This project is a Train2Earn platform on Solana, where users earn $SNS rewards by providing natural language data and data labeling. Users support mining by providing data, which is stored and put on-chain after verification, and used by AI companies for training and inference. Specifically, miners are divided into three categories: Architect/Builder/Validator. Architects create new data tasks, Builders provide corpora for corresponding data tasks, and Validators verify the datasets provided by Builders. Completed datasets will be stored in IPFS and on-chain, preserving data sources and IPFS addresses while also being stored in off-chain databases for AI companies (currently Mind AI) to use.

  • Grass: Known as the decentralized data layer for AI, it is essentially a decentralized web scraping market that obtains data for AI model training. Internet websites are an important source of training data for AI, with data from many sites, including Twitter, Google, and Reddit, holding significant value, but these sites are continuously restricting data scraping. Grass utilizes unused bandwidth from personal networks, using different IP addresses to reduce the impact of data blocking, to scrape data from public websites and perform preliminary data cleaning, becoming a data source for AI model training enterprises and projects. Currently, Grass is in beta testing, and users can provide bandwidth to earn points for potential airdrops.

  • AIT Protocol: AIT Protocol is a decentralized data labeling protocol aimed at providing developers with high-quality datasets for model training. Web3 enables the global workforce to quickly access the internet and earn incentives through data labeling. AIT's data scientists will pre-label the data, which will then be further processed by users, and after being checked by data scientists, quality-verified data will be provided to developers.

In addition to the aforementioned data provision and labeling protocols, previous decentralized storage infrastructure, such as Filecoin, Arweave, etc., will also contribute to more decentralized data supply.

5.2 Decentralized Computing Power

The importance of computing power in the AI era is self-evident. Not only has Nvidia's stock price soared, but in the Crypto world, decentralized computing power can be said to be the hottest sub-direction in AI speculation—among the top 200 market cap AI projects, five are focused on decentralized computing (Render/Akash/AIOZ Network/Golem/Nosana), and they have seen significant price increases in recent months. Many decentralized computing platforms have also emerged among smaller market cap projects, and with the wave of the Nvidia conference, anything related to GPUs has quickly surged.

From the characteristics of the track, the basic logic of projects in this direction is highly homogeneous—using token incentives to encourage individuals or enterprises with idle computing resources to provide resources, significantly lowering usage costs and establishing a supply-demand market for computing power. Currently, the main sources of computing power come from data centers, miners (especially after Ethereum transitioned to PoS), consumer-grade computing, and collaborations with other projects. Although homogeneous, this is a track where leading projects have a high moat, with the main competitive advantages of projects stemming from: computing resources, computing rental prices, computing utilization rates, and other technical advantages. Leading projects in this track include Akash, Render, io.net, and Gensyn.

Projects can be roughly divided into two categories by business direction: AI model inference and AI model training. Because AI model training demands far more computing power and bandwidth than inference, distributed training is much harder to implement; meanwhile, the market for model inference is expanding rapidly, and its predictable revenues will likely be significantly higher than those of model training in the future. Therefore, the vast majority of current projects focus on inference (Akash, Render, io.net), while the leading project focused on training is Gensyn. Among them, Akash and Render were established earlier and were not originally designed for AI computing: Akash was initially used for general-purpose computing, while Render was mainly applied to video and image rendering. io.net was designed specifically for AI computing, but as AI has raised the demand for compute, all of these projects have leaned toward AI development.

The two most important competitive indicators remain the supply side (computing resources) and the demand side (computing utilization). Akash has 282 GPUs and over 20,000 CPUs and has completed 160,000 rentals, with GPU network utilization of 50-70%, a good number in this track. io.net has 40,272 GPUs and 5,958 CPUs, plus access to Render's 4,318 GPUs and 159 CPUs and permission to use 1,024 Filecoin GPUs, including about 200 H100s and thousands of A100s; it has completed 151,879 inferences and is attracting computing resources through high airdrop expectations, with its GPU count rising rapidly. Its ability to attract resources will need to be reassessed after its token launches. Render and Gensyn have not disclosed specific data. Additionally, many projects are strengthening both supply and demand through ecosystem collaborations: io.net uses Render's and Filecoin's computing power to boost its resource reserves, and Render has established a compute client program (RNP-004) that lets users indirectly access Render's computing resources through compute clients—io.net, Nosana, FedML, Beam—allowing Render to move quickly from rendering into AI computing.

Moreover, verification remains an open problem for decentralized computing: how to prove that a worker holding computing resources actually executed a computing task correctly. Gensyn is attempting to build such a verification layer, ensuring correctness through probabilistic proofs of learning, a graph-based pinpointing protocol, and staking incentives; in this system, verifiers and whistleblowers jointly check computations. So beyond providing compute for decentralized training, Gensyn's verification mechanism offers unique value. Fluence, a computing protocol on Solana, also verifies computing tasks: developers can confirm that their applications ran as expected and that computations were executed correctly by checking the proofs providers publish on-chain. Still, the practical reality is that "usability" matters more than "trustworthiness": a computing platform must first have enough computing power to be competitive. Excellent verification protocols can, of course, access computing power from other platforms and play a distinct role as a verification and protocol layer.

5.3 Decentralized Models

We are still far from the ultimate scenario described by Vitalik (as shown in the image below), where we cannot yet create a trustworthy black-box AI through blockchain and cryptographic technology to solve the issues of adversarial machine learning. Encrypting the entire AI operation process from data training to query output incurs a significant overhead. However, some projects are currently attempting to create better AI models through incentive mechanisms, first breaking down the closed states between different models, creating a pattern of mutual learning, collaboration, and healthy competition among models. Bittensor is one of the most representative projects in this regard.

  • Bittensor: Bittensor facilitates the combination of different AI models. Note, however, that Bittensor itself does not train models; it primarily provides AI inference services. Its 32 subnets focus on different service directions, such as data scraping, text generation, and Text2Image, and AI models from different directions can collaborate when completing a task. Incentive mechanisms drive competition both between and within subnets. Currently, rewards are distributed at 1 TAO per block, about 7,200 TAO per day in total. The 64 validators in SN0 (the root network) decide how those rewards are split among the subnets based on subnet performance, and each subnet's validators decide the split among its miners based on their evaluation of the miners' work, so better-performing services and models receive more incentives, raising the overall inference quality of the system (see the sketch below).
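The two-level emission split described above can be illustrated with a short Python sketch: root-network validators set subnet weights, subnet validators set miner weights, and the daily emission flows down the tree. The weights are toy numbers; only the ~7,200 TAO/day figure comes from the text.

```python
# Toy two-level reward split in the Bittensor style: root validators
# weight subnets, subnet validators weight miners. Weights are made up.

DAILY_EMISSION = 7_200  # ~1 TAO per block, about 7,200 TAO per day

subnet_weights = {"text-gen": 0.5, "data-scraping": 0.3, "text2image": 0.2}

miner_weights = {
    "text-gen":      {"miner-a": 0.7, "miner-b": 0.3},
    "data-scraping": {"miner-c": 1.0},
    "text2image":    {"miner-d": 0.6, "miner-e": 0.4},
}

def distribute(emission: float) -> dict:
    payouts = {}
    for subnet, s_w in subnet_weights.items():
        subnet_emission = emission * s_w            # set by root validators
        for miner, m_w in miner_weights[subnet].items():
            payouts[miner] = subnet_emission * m_w  # set by subnet validators
    return payouts

for miner, amount in distribute(DAILY_EMISSION).items():
    print(f"{miner}: {amount:,.0f} TAO/day")
```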

6 Conclusion: Meme Speculation or Technological Revolution?

From the price surges of ARKM and WLD driven by Sam Altman's movements to the Nvidia conference boosting a series of participating projects, many are adjusting their investment philosophies regarding the AI track. Is the AI track ultimately meme speculation or a technological revolution?

Aside from a few celebrity-themed projects (like ARKM and WLD), the overall AI track resembles a "meme driven by technology narratives."

On one hand, the overall speculation in the Crypto AI track is closely linked to the progress of Web2 AI, with external hype led by OpenAI becoming the catalyst for the Crypto AI track. On the other hand, the story of the AI track still primarily revolves around technological narratives. Here, we emphasize "technological narratives" rather than "technology," which makes the choice of sub-directions in the AI track and attention to project fundamentals still important. We need to find narrative directions with speculative value and also identify projects with medium to long-term competitiveness and moats.

From the four possible combinations proposed by Vitalik, we can see the trade-off between narrative appeal and feasibility. In the first and second categories represented by AI applications, we see many GPT Wrappers, which have quick product implementations but also high levels of business homogeneity. First-mover advantages, ecosystems, user numbers, and product revenues become the stories that can be told in homogeneous competition. The third and fourth categories represent grand narratives of the combination of AI and Crypto, such as Agent collaborative networks on-chain, zkML, and decentralized AI reconstruction, all of which are in early stages. Projects with technological innovations will quickly attract funding, even if they are just early-stage demonstrations.
