Who has a better chance of becoming the true world computer, AO or ICP?

PermaDAO
2025-02-10 23:24:53
One is a modular, infinitely scalable decentralized computing network; the other is a structured, tightly governed distributed system. Which is the better computing infrastructure for the AI era?

💡 Editor's Recommendation:

In the world of blockchain, decentralized computing is a promised land that is hard to reach. Traditional smart contract platforms like Ethereum are constrained by high computation costs and limited scalability, while a new generation of computing architectures is trying to break through these limits. AO and ICP are currently the two most representative paradigms: one focuses on modular decoupling and infinite scalability, the other emphasizes structured management and high security.

The author of this article, Blockpunk, is a researcher at Trustless Labs and an OG of the ICP ecosystem. He previously created the ICP League incubator, has long been active in its technology and developer communities, and has a keen interest in and deep understanding of AO. If you are curious about the future of blockchain and want to know what a truly verifiable and decentralized computing platform for the AI era will look like, or if you are looking for new public chain narratives and investment opportunities, this article is well worth a read. It not only analyzes the core mechanisms, consensus models, and scalability of AO and ICP in detail, but also compares them in terms of security, degree of decentralization, and future potential.

In this rapidly changing crypto industry, who is the real "world computer"? The outcome of this competition may determine the future of Web3. Read this article to get a head start on understanding the latest landscape of decentralized computing!

The integration with AI has become a hot trend in today's crypto world, with countless AI agents starting to issue, hold, and trade cryptocurrencies. The explosion of new applications brings demand for new infrastructure, making verifiable and decentralized AI computing infrastructure particularly important. However, smart contract platforms represented by ETH and decentralized computing power platforms represented by Akash and IO cannot satisfy verifiability and decentralization at the same time.

In 2024, the team behind the well-known decentralized storage protocol Arweave announced the AO architecture, a decentralized general computing network that supports fast, low-cost scaling, enabling it to run many computation-intensive tasks, such as the inference processes of AI agents. Computing resources on AO are coordinated through its message-passing rules, which immutably record the order and content of requests on top of Arweave's holographic consensus, allowing anyone to derive the correct state through recomputation. Combined with optimistic security guarantees, this makes the computation verifiable.

AO's computing network no longer requires consensus on the entire computation process, which keeps the network flexible and efficient; its processes (which can be viewed as "smart contracts") run under the Actor model and interact through messages, without needing to maintain shared state data. This design sounds somewhat similar to DFINITY's Internet Computer (ICP), which achieves similar goals through a structured subnet system of computing resources, and developers often draw analogies between the two. This article compares these two protocols.

Consensus Computing vs. General Computing

Both ICP and AO aim to achieve flexible scalability of computation by decoupling consensus from the content of computation, thereby providing cheaper computation and handling more complex problems. In contrast, in traditional smart contract networks represented by Ethereum, all computing nodes share a common state memory, and any computation that changes the state requires every node to perform the same calculation redundantly to reach consensus. This fully redundant design guarantees the uniqueness of consensus, but computation is very expensive and the network's computing capacity is hard to scale, making it suitable only for high-value transactions. Even high-performance public chains like Solana struggle to afford the intensive computing demands of AI.

As general computing networks, neither AO nor ICP has a globally shared state memory, so there is no need to reach consensus on the computation that changes state; consensus is only required on the execution order of transactions/requests, followed by verification of the computation results. Based on optimistic assumptions about the security of the node virtual machines, as long as the input request content and order are consistent, the final state will also be consistent. State changes in smart contracts (called "canisters" in ICP and "processes" in AO) can therefore be computed in parallel across multiple nodes, without all nodes computing exactly the same task at the same time. This greatly reduces the cost of computation and increases scalability, supporting more complex business operations, including the decentralized operation of AI models. Both AO and ICP claim "infinite scalability"; we compare the differences below.
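
To make "consensus on order, not on computation" concrete, here is a minimal sketch in TypeScript (all names are hypothetical illustrations, not an actual AO or ICP API): a deterministic transition function is replayed over an ordered message log, so any party holding the same log derives the same final state, and only the log itself needs consensus.

```typescript
// Illustrative sketch: state is derived, not agreed upon.
// Only the ordered message log requires consensus.

interface Message {
  seq: number; // unique order assigned by the consensus layer
  from: string;
  data: string;
}

type State = Map<string, number>;

// Deterministic transition: same state + same message => same next state.
function apply(state: State, msg: Message): State {
  const next = new Map(state);
  const balance = next.get(msg.from) ?? 0;
  next.set(msg.from, balance + msg.data.length); // toy rule, deterministic
  return next;
}

// Any node can recompute the final state from the agreed-upon log.
function replay(log: Message[]): State {
  return log
    .slice()
    .sort((a, b) => a.seq - b.seq) // the order comes from consensus
    .reduce(apply, new Map<string, number>());
}

const log: Message[] = [
  { seq: 2, from: "alice", data: "transfer" },
  { seq: 1, from: "bob", data: "mint" },
];

// Two independent replays always agree, without consensus on execution.
console.log(replay(log));
console.log(replay(log));
```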

Since the network no longer jointly maintains a large shared state, each smart contract is treated as an independent transaction handler, and contracts interact asynchronously through messages. Decentralized general computing networks therefore usually adopt the Actor programming model, which makes composability between contracts weaker than on smart contract platforms like ETH and poses certain challenges for DeFi. However, business-specific programming standards can still address this: for example, the FusionFi Protocol on the AO network standardizes DeFi business logic through a unified "ticket-settlement" model to achieve interoperability. Given that the AO ecosystem is still in its early stages, such protocols are quite forward-looking.
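
The Actor pattern both networks build on can be roughly illustrated as follows (a hypothetical TypeScript sketch, not either platform's SDK): each contract owns its state privately and interacts only by enqueuing asynchronous messages, with no shared memory to synchronize.

```typescript
// Minimal Actor sketch: private state, asynchronous mailbox, no shared memory.

type ActorMessage = { method: string; payload: number; replyTo?: Actor };

class Actor {
  private state = 0;                    // private; no other actor can read it
  private mailbox: ActorMessage[] = [];

  constructor(public readonly id: string) {}

  send(msg: ActorMessage): void {
    this.mailbox.push(msg);             // asynchronous: just enqueue
  }

  // Process one queued message at a time (the Actor invariant).
  step(): void {
    const msg = this.mailbox.shift();
    if (!msg) return;
    if (msg.method === "add") this.state += msg.payload;
    if (msg.method === "read" && msg.replyTo) {
      msg.replyTo.send({ method: "add", payload: this.state });
    }
  }
}

const a = new Actor("A");
const b = new Actor("B");
a.send({ method: "add", payload: 5 });
a.step();
a.send({ method: "read", replyTo: b }); // B learns A's state only via a message
a.step();
b.step();
```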

Implementation of AO

AO is built on the foundation of the Arweave permanent storage network and operates through a new node network. Its nodes are divided into three groups: Message Units (MU), Computing Units (CU), and Scheduling Units (SU).

In the AO network, smart contracts are referred to as "processes," which are a set of executable code permanently stored on Arweave.

When a user needs to interact with a process, they sign and send a request. AO specifies the message format; requests are accepted by AO's Message Units (MU), which verify the signatures and forward them to the Scheduling Units (SU). The SU continuously receives requests, assigns each message a unique number, and uploads it to the Arweave network, which reaches consensus on the transaction order. Once consensus on the order is achieved, the task is assigned to the Computing Units (CU). The CU performs the actual computation, updates the state values, and returns the results to the MU, which forwards them to the user or feeds them back into the SU as a request for the next process.
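
The request path described above can be sketched as a simple pipeline (hypothetical TypeScript mirroring the MU, SU, and CU roles; the function names and data shapes are illustrative, not AO's actual interfaces):

```typescript
// Sketch of AO's request path: MU verifies, SU orders and persists, CU computes.

interface SignedRequest { process: string; body: string; signature: string }
interface OrderedMessage extends SignedRequest { seq: number }

// MU: check the signature, then forward (signature check stubbed out here).
function messengerUnit(req: SignedRequest): SignedRequest {
  if (!req.signature) throw new Error("invalid signature");
  return req;
}

// SU: assign a unique, monotonically increasing number and persist the
// message to Arweave, which provides consensus on the order.
let counter = 0;
const arweaveLog: OrderedMessage[] = []; // stand-in for permanent storage
function schedulerUnit(req: SignedRequest): OrderedMessage {
  const ordered = { ...req, seq: counter++ };
  arweaveLog.push(ordered); // in reality: an Arweave transaction
  return ordered;
}

// CU: execute the message against the process state and return a result.
const processState = new Map<string, string[]>();
function computeUnit(msg: OrderedMessage): string {
  const history = processState.get(msg.process) ?? [];
  history.push(msg.body);
  processState.set(msg.process, history);
  return `processed #${msg.seq}`;
}

// A user request flows MU -> SU -> CU; the result returns via MU.
const result = computeUnit(schedulerUnit(messengerUnit({
  process: "my-process", body: "hello", signature: "0xabc",
})));
console.log(result); // "processed #0"
```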

SU can be seen as the connection point between AO and the Arweave consensus layer, while CU forms a decentralized computing power network. Consensus and computing resources in AO are thus completely decoupled: as more and higher-performance nodes join the CU group, AO as a whole gains stronger computing capability, supporting more processes and more complex process computation, with capacity supplied flexibly on demand.

So, how is the verifiability of its computation results ensured? AO adopts an economic approach, requiring CU and SU nodes to stake a certain amount of AO assets. CUs compete based on factors like computing performance and price, earning revenue by providing computing power.

Since all requests are recorded in the Arweave consensus, anyone can trace back these requests to reconstruct the entire state history of a process. If malicious behavior or computation errors are detected, a challenge can be raised against the AO network, and by introducing more CU nodes to recompute, the correct result can be obtained; the AO staked by the erroneous node is forfeited. Arweave does not verify the states of processes running in the AO network; it merely records transactions faithfully. Arweave has no computing capabilities, and the challenge process takes place within the AO network. Each process on AO can be viewed as a "sovereign chain" with its own consensus, with Arweave serving as its DA (Data Availability) layer.
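
A toy model of this optimistic scheme might look like the following (hypothetical TypeScript; the stake amounts and the majority rule are invented for illustration): results are accepted by default, but a challenge triggers recomputation by additional CUs, and any CU whose result diverges from the majority loses its stake.

```typescript
// Toy model of AO's optimistic verification: trust by default,
// recompute on challenge, slash the minority.

interface CU { id: string; stake: number; compute: (input: number) => number }

const correct = (x: number) => x * 2;                // the honest computation
const cus: CU[] = [
  { id: "cu-1", stake: 100, compute: correct },
  { id: "cu-2", stake: 100, compute: correct },
  { id: "cu-3", stake: 100, compute: (x) => x * 3 }, // faulty or malicious
];

// On challenge, more CUs recompute the disputed input; the majority
// result is taken as correct and dissenting CUs are slashed.
function challenge(input: number): number {
  const votes = new Map<number, CU[]>();
  for (const cu of cus) {
    const out = cu.compute(input);
    votes.set(out, [...(votes.get(out) ?? []), cu]);
  }
  const [winning] = [...votes.entries()].sort(
    (a, b) => b[1].length - a[1].length,
  )[0];
  for (const [out, group] of votes) {
    if (out !== winning) group.forEach((cu) => (cu.stake = 0)); // slash
  }
  return winning;
}

console.log(challenge(21));                        // 42: majority result wins
console.log(cus.map((c) => `${c.id}:${c.stake}`)); // cu-3 slashed to 0
```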

AO grants developers complete flexibility, allowing them to freely choose nodes in the CU market, customize the virtual machines that run their programs, and even customize the consensus mechanisms within processes.

Implementation of ICP

Unlike AO, which decouples resources into multiple node groups, ICP uses a more uniform data-center-based node structure, providing a structured, multi-subnet resource system consisting of data centers, nodes, subnets, and software canisters.

At the lowest level of the ICP network is a series of decentralized data centers running the ICP client program, which virtualizes nodes with standardized computing resources according to performance. These nodes are randomly combined into subnets by ICP's core governance system, the NNS (Network Nervous System). The nodes of a subnet handle computing tasks, reach consensus through an optimized interactive BFT protocol, and produce and propagate blocks.

Multiple subnets exist simultaneously within the ICP network, with each group of nodes serving only one subnet and maintaining its internal consensus. Different subnets produce blocks in parallel at the same rate and interact through cross-subnet requests.

Within each subnet, node resources are abstracted as "canisters": business logic runs inside canisters, and the subnet has no large shared state. Each canister maintains only its own state and has a maximum capacity limit (due to Wasm virtual machine constraints), and the subnet's blocks do not record the states of the canisters in the network.

Within the same subnet, computing tasks run redundantly across all nodes, but they run in parallel across different subnets. When the network needs to scale, ICP's core governance system, NNS, dynamically adds and merges subnets to meet usage demands.
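
The contrast between intra-subnet redundancy and inter-subnet parallelism can be sketched like this (hypothetical TypeScript; the node counts and the quorum rule are simplified stand-ins for ICP's actual BFT consensus):

```typescript
// Sketch: every node in a subnet computes the same task redundantly,
// while different subnets advance independently in parallel.

interface Subnet { id: string; nodes: string[] }

// All nodes in one subnet execute the task; a BFT-style quorum of
// matching results is required before the state update is accepted.
function runOnSubnet(subnet: Subnet, task: (node: string) => number): number {
  const results = subnet.nodes.map(task);          // full redundancy
  const quorum = Math.floor((2 * subnet.nodes.length) / 3) + 1;
  const agreed = results.filter((r) => r === results[0]).length;
  if (agreed < quorum) throw new Error(`${subnet.id}: no consensus`);
  return results[0];
}

const subnets: Subnet[] = [
  { id: "subnet-a", nodes: ["a1", "a2", "a3", "a4"] },
  { id: "subnet-b", nodes: ["b1", "b2", "b3", "b4"] },
];

// Subnets proceed in parallel; adding a subnet adds capacity,
// while adding nodes to a subnet adds only redundancy.
const outputs = subnets.map((s) => runOnSubnet(s, () => 7));
console.log(outputs); // [7, 7]
```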

AO vs ICP

Both AO and ICP are built around the Actor messaging model, which is a typical framework for concurrent distributed computing networks, and both default to using WebAssembly as the execution virtual machine.

Unlike traditional blockchains, neither AO nor ICP maintains a single global chain of blocks. Under the Actor model, execution in the default virtual machines is deterministic, so the system only needs to ensure consistency in the order of transaction requests to achieve consistent state within each process. Multiple Actors can run in parallel, which leaves significant room for scaling and makes computation cheap enough to support general workloads such as running AI.

However, in terms of overall design philosophy, AO and ICP stand on completely opposite sides.

  1. Structured vs. Modular

    The design philosophy of ICP resembles traditional network models, abstracting the resources of the underlying data centers into fixed services, including hot storage, computing, and transmission. In contrast, AO employs a modular design more familiar to crypto developers, completely separating resources such as transmission, consensus verification, computation, and storage, and dividing them among distinct node groups.

    Therefore, for ICP, the hardware requirements for nodes in the network are very high, as they must meet the minimum bar for participating in system consensus.

    Developers must accept a unified standard of program-hosting service, with the resources for these services constrained within individual canisters. For example, a canister's maximum available memory is currently 4 GB, which limits some applications, such as running larger AI models.

    ICP also tries to serve diverse needs by creating subnets with different characteristics, but this relies on central planning and development by the DFINITY Foundation.

    For AO, the CU group resembles a free computing power market where developers can choose the specifications and number of nodes according to their needs and price preferences, so developers can run virtually any process on AO. This is also friendlier for node participants: since CUs and MUs scale independently, the degree of decentralization is higher.

    AO's high modularity supports customization of virtual machines, transaction ordering models, messaging models, and payment methods. Therefore, if developers need a private computing environment, they can choose CU in a TEE environment without waiting for official AO development. Modularity brings more flexibility and lowers the entry costs for some developers.

  2. Security

    ICP relies on subnets for operation: when a canister is hosted on a subnet, its computation is executed across all subnet nodes, and state verification is completed by the improved BFT consensus among those nodes. Although this creates some redundancy, a canister's security is exactly that of its subnet.

    Within a subnet, when two canisters call each other, such as when the input of canister B is the output of canister A, no additional security issues arise; only calls that cross two subnets need to account for the security difference between them. Currently, a subnet contains 13 to 34 nodes and reaches finality in about 2 seconds.

    In AO, computation is delegated to CUs chosen by developers in the market. For security, AO takes a more token-economic approach: CU nodes must stake $AO, and computation results are assumed trustworthy by default. Because AO records all requests through consensus on Arweave, anyone can read the public records and verify the correctness of the current state by recomputing step by step. If issues arise, more CUs can be brought in from the market to recompute and converge on the correct result, and the stake of the erroneous CU is forfeited.

    This completely decouples consensus from computation, giving AO far greater scalability and flexibility than ICP. When verification is not needed, developers can even compute on their local devices, simply uploading requests to Arweave through the SU.

    However, this also complicates inter-process calls, since different processes may run under different security guarantees. For example, if process B has 9 CUs performing redundant computation while process A runs on only a single CU, then before accepting requests from process A, process B must consider whether A might transmit an incorrect result. Inter-process interaction is thus constrained by security, and finality takes longer, potentially requiring up to half an hour for confirmation from Arweave. The mitigation is to set minimum CU counts and standards, and to require different confirmation times for transactions of different values (see the sketch after this list).

    Nevertheless, AO has an advantage that ICP does not possess: permanent storage of the complete transaction history, so anyone can replay the state at any moment. Although AO lacks the traditional block-and-chain model, this aligns better with crypto's notion of verifiability. In ICP, by contrast, subnet nodes are responsible only for computation and consensus on results and do not store every transaction request, so historical information is unverifiable. This means ICP has no unified DA; if a canister misbehaves and is then deleted, the wrongdoing leaves no trace. Although ICP developers have spontaneously built a series of ledger canisters to record call histories, this is still hard for crypto developers to accept.

  3. Degree of Decentralization

    The degree of decentralization in ICP has long been criticized. System-level tasks such as node registration, subnet creation, and subnet merging must be decided by the governance system known as the NNS. ICP holders participate in the NNS through staking, and because general computation must run across multiple replicas, node hardware requirements are very high, creating a steep participation threshold. The rollout of new features and capabilities in ICP therefore depends on launching new subnets, which must pass NNS governance and thus further depends on the DFINITY Foundation, which holds a large share of the voting power.

    In contrast, AO's fully decoupled approach returns more power to developers. An independent process can be viewed as an independent subnet, a sovereign L2, for which developers only need to pay fees. The modular design also makes it easy for developers to introduce new features, and for node providers the cost of participation is lower than on ICP.
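
Returning to the inter-process security issue from the Security section above, one way to picture the minimum-CU mitigation is the following sketch (hypothetical TypeScript; the thresholds are invented for illustration): a receiving process inspects the replication level behind an incoming message and accepts it only if the sender meets a minimum CU count, demanding more redundancy and Arweave finality for high-value transfers.

```typescript
// Sketch: inter-process messages carry the sender's security level,
// and the receiver enforces a minimum before trusting the result.

interface InterProcessMessage {
  fromProcess: string;
  cuCount: number;           // how many CUs redundantly computed this result
  arweaveConfirmed: boolean; // whether the underlying log is finalized
  value: number;
}

// Policy (illustrative numbers): small transfers accept 1 CU optimistically;
// large transfers demand more redundancy plus Arweave finality.
function accept(msg: InterProcessMessage): boolean {
  const minCUs = msg.value > 1_000 ? 9 : 1;
  if (msg.cuCount < minCUs) return false;
  if (msg.value > 1_000 && !msg.arweaveConfirmed) return false;
  return true;
}

console.log(accept({ fromProcess: "A", cuCount: 1, arweaveConfirmed: false, value: 10 }));    // true
console.log(accept({ fromProcess: "A", cuCount: 1, arweaveConfirmed: false, value: 5_000 })); // false
console.log(accept({ fromProcess: "A", cuCount: 9, arweaveConfirmed: true, value: 5_000 }));  // true
```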

Conclusion

The ideal of a world computer is grand, but there is no single optimal solution. ICP offers better security and rapid finality, but the system is more complex, subject to more restrictions, and some of its design choices are hard for crypto developers to accept. AO's highly decoupled design makes scaling easier and provides more flexibility, which developers will favor, but it also introduces complexity on the security side.

From a development perspective, in the rapidly changing crypto world it is hard for any single paradigm to hold absolute dominance for long; even ETH faces competition (with Solana catching up). Only systems that are sufficiently decoupled and modular can evolve quickly, adapt to challenges, and survive. As a latecomer, AO is poised to become a strong competitor in decentralized general computing, especially in the AI field.
