Three perspectives to understand the disruptive innovation of AO

PermaDAO
2025-02-06 23:02:59
AO can be understood as a network of infinite shards and infinite scalability. Each Process is a shard.

AO is not a blockchain in the traditional sense. Its unconventional and counterintuitive design can easily confuse researchers who are just getting acquainted with AO, especially when they try to frame AO within the architecture of traditional blockchains:

  1. What kind of consensus mechanism is the "holographic consensus" mentioned by AO, if it is neither PoS nor PoW?
  2. Without a hash chain and even without blocks, how does AO ensure data immutability?
  3. Without a coordinating hub, how does AO guarantee the consistency of the global state?
  4. Without a redundant computing mechanism, who ensures the reliability of computations? What happens if a computation fails?
  5. Without shared security, how does AO ensure interoperability between Processes?

I will help you understand AO from three perspectives, using familiar blockchain concepts to move from the known to the unknown and grasp AO on an intuitive level.

Sharding Perspective

Thanks to the education provided by public chains like Ethereum 2.0, Polkadot, and Near, you should already be familiar with the concept of "sharding."

Concept of Sharding: In blockchain, sharding is a solution to improve network scalability by splitting the network into multiple shards, each independently validating and processing transactions and generating its own blocks, thereby enhancing overall network efficiency. Shards can achieve synchronous interoperability, while asynchronous interoperability between shards is realized through certain communication protocols.

Polkadot is the most typical sharding architecture. In Polkadot, each parachain is a shard that independently collects transactions and produces its own blocks, with validation handled by a randomly assigned group of validators from the relay chain. Communication between parachains goes through the unified XCM message format to achieve interoperability.

AO's Extreme Sharding

From the perspective of sharding, AO can be understood as an extreme form of "sharding": each Process is a shard. Imagine if each smart contract on Ethereum ran on a separate shard—this is exactly what AO is. Each Process is independent, and calls between Processes rely on message-driven communication, occurring in a completely asynchronous manner.
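To make the "every contract is its own shard" idea concrete, here is a minimal Python sketch. The `Process`, `inbox`, and `handler` names are illustrative, not AO's actual API: each Process holds its own state and interacts with others only through asynchronous messages, never through shared memory or a global ledger.

```python
import asyncio

class Process:
    """Toy model of an AO-style Process: an isolated shard with its
    own state, reachable only via asynchronous messages."""

    def __init__(self, name, handler):
        self.name = name
        self.handler = handler          # state-transition function
        self.inbox = asyncio.Queue()    # messages arrive here
        self.state = {}

    async def run(self, n_messages):
        for _ in range(n_messages):
            msg = await self.inbox.get()            # fully asynchronous
            self.state = self.handler(self.state, msg)

async def main():
    # `counter` simply tallies the messages it receives.
    counter = Process("counter",
                      lambda s, m: {**s, "count": s.get("count", 0) + 1})
    task = asyncio.create_task(counter.run(2))
    # Another process "calls" counter by sending messages to its inbox.
    await counter.inbox.put({"from": "sender", "action": "inc"})
    await counter.inbox.put({"from": "sender", "action": "inc"})
    await task
    return counter.state

print(asyncio.run(main()))  # {'count': 2}
```

Because no Process ever touches another's state directly, adding more Processes never creates contention, which is what makes the shard count effectively unbounded.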

Modular Perspective

However, we notice a key point: in the design of Polkadot, there is a "relay chain," and in ETH2.0, there is a "beacon chain." Their role is to serve as a unified consensus layer, providing shared security. The unified consensus layer is responsible for providing direct or indirect validation services for all shards and the messages between them. AO, however, seems to lack this component, so how is AO's consensus layer designed?

AO's consensus layer is actually Arweave. From a modular perspective, AO can be understood as an L2 of Arweave, a Rollup with Arweave serving as the L1. All logs of messages generated during the operation of the AO network are uploaded to Arweave for permanent storage, meaning there is an immutable record of the AO network's operation on Arweave. You might ask: since Arweave is a decentralized storage platform without much computational power, how does it validate the data uploaded from the AO network?

The answer is: Arweave does not validate; the AO network itself has an optimistic arbitration mechanism. Arweave accepts all message data uploaded from the AO network. Each message carries the process ID of its sender, the signature of the CU (computational unit) that executed it, and the signature of the SU (sorting unit) that ordered it. In case of a dispute, the immutable message records on Arweave allow additional nodes to be brought in to recompute and produce the correct fork; the original erroneous fork is discarded, and the deposit of the faulty CU or SU is slashed on the correct fork. Note that the MU only collects a Process's pending messages and passes them to the SU; it is trustless, posts no deposit, and faces no penalties.
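The dispute flow described above can be sketched as follows. This is a toy model: HMAC stands in for real cryptographic signatures, and all field names (`cu_sig`, `su_sig`, etc.) are illustrative rather than AO's actual wire format.

```python
import hashlib
import hmac

def sign(key: bytes, payload: bytes) -> str:
    # HMAC as a stand-in for a real digital signature
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def make_message(process_id, data, cu_key, su_key, seq):
    payload = f"{process_id}:{data}:{seq}".encode()
    return {
        "process_id": process_id,            # sending Process
        "data": data,                        # claimed computation result
        "seq": seq,                          # order assigned by the SU
        "cu_sig": sign(cu_key, payload),     # CU attests to the result
        "su_sig": sign(su_key, payload),     # SU attests to the ordering
    }

def arbitrate(msg, recompute, cu_deposit):
    """On dispute, other nodes recompute from the immutable log on
    Arweave; if the result differs, the faulty CU's deposit is slashed
    and the correct fork's result is adopted."""
    correct = recompute(msg["process_id"], msg["seq"])
    if correct != msg["data"]:
        cu_deposit["slashed"] = True
        return correct
    return msg["data"]

# An honest message survives arbitration untouched:
msg = make_message("proc-1", "balance=10", b"cu-key", b"su-key", 7)
deposit = {"slashed": False}
result = arbitrate(msg, lambda pid, seq: "balance=10", deposit)
```

The key point the sketch captures is that verification is lazy: signatures and the Arweave log make every step attributable, but recomputation only happens when someone challenges.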

AO is very much like an Optimistic Rollup with Arweave as L1, except that the validation challenge process does not occur on L1 but within the AO network itself.

However, there is still a problem: it is not feasible for every message to wait for confirmation on Arweave, since Arweave's finality time exceeds half an hour. AO therefore has its own soft consensus layer, just as Ethereum's Rollups do: most transactions are confirmed immediately rather than waiting for L1 confirmation.

In AO, the Processes actually decide the verification intensity autonomously.

As the receiver of a message, a Process must decide whether to wait for Arweave's confirmation before processing it, or to process it once the soft consensus layer has confirmed it. Even within the soft-consensus phase, the Process can adopt flexible strategies: it can proceed after a single CU confirms, or wait for multiple CUs to redundantly compute and cross-validate, with the redundancy level set by the Process itself.

In practical applications, the verification intensity often correlates with the transaction amount, for example:

  • For small transactions, a fast verification strategy is adopted, processing after a single confirmation.
  • For medium transactions, different redundancy levels are employed based on the specific amount, with multi-point confirmation before processing.
  • For large transactions, a cautious verification strategy is used, processing only after confirmation from the Arweave network.
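Such an amount-based policy might look like the following sketch. The thresholds and redundancy levels here are invented for illustration; a real Process would tune them to its own risk tolerance.

```python
def verification_strategy(amount: float) -> dict:
    """Pick a verification intensity based on the amount at stake
    (all thresholds are illustrative, not AO defaults)."""
    if amount < 100:
        # Small transaction: fast path, single CU confirmation
        return {"mode": "single_cu", "redundancy": 1}
    if amount < 10_000:
        # Medium transaction: redundancy scales with the amount
        redundancy = 2 if amount < 1_000 else 3
        return {"mode": "multi_cu", "redundancy": redundancy}
    # Large transaction: wait for Arweave finality (hard consensus)
    return {"mode": "arweave_finality", "redundancy": 3}
```

Because this policy lives inside the receiving Process rather than in the protocol, two applications on the same network can make completely different security/latency trade-offs.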

This is the model of "holographic consensus" + "flexible verification" that AO refers to. By decoupling "verifiability" from the act of "verification" itself, AO adopts a completely different approach to consensus issues compared to traditional blockchains. The responsibility for message verification does not lie with the network itself but with the receiving Process or, in other words, with the application developers.

It is precisely because of this consensus model that AO can adopt an "extreme sharding" model that is hubless and infinitely scalable.

Of course, flexible verification means verification intensity varies across Processes, which can break trust chains during complex interoperability: a failure in a single link of a long call chain can cause the entire transaction to fail or go wrong. Such issues have in fact already surfaced during the AO testnet phase. I believe AO should set a minimum verification-intensity standard for all verification tasks, and we look forward to the new designs AO's upcoming mainnet will bring.

Resource Perspective

In traditional blockchain systems, resources are abstracted as "block space," which can be understood as a collection of storage, computing, and transmission resources provided by nodes, organically combined through on-chain blocks to provide a runtime carrier for on-chain applications. Block space is a limited resource; in traditional blockchains, different applications must compete for block space and pay for it, while nodes profit from this payment.

AO does not have the concept of blocks, and naturally, it does not have the concept of "block space." However, like other smart contracts on chains, each Process on AO also consumes resources during operation. It requires nodes to temporarily store transaction and state data and needs nodes to consume computing resources to execute computation tasks. The messages it sends need to be transmitted to the target Process by the MU and SU.

In AO, nodes fall into three categories: CU (computational unit), MU (message unit), and SU (sorting unit). The CU is the core that carries computation tasks, while the MU and SU handle communication. When a Process needs to interact with another Process, it generates a message and places it in its outbound queue, where the CU running the Process signs it. The MU extracts the message from the outbound queue and submits it to the SU, which assigns it a unique sequence number and uploads it to Arweave for permanent storage. The MU then delivers the message to the inbound queue of the target Process, completing delivery. The MU can be understood as the collector and deliverer of messages, and the SU as their sorter and uploader.
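The pipeline described above can be sketched in a few lines. All class, function, and field names here are illustrative stand-ins for the CU/MU/SU roles, not real AO interfaces.

```python
from collections import deque

class Network:
    """Stylized message flow: CU signs the outbox message, MU collects
    it and submits it to the SU, the SU sequences it and uploads it to
    Arweave, and the MU delivers it to the target's inbox."""

    def __init__(self):
        self.seq = 0
        self.arweave_log = []                  # simulated permanent storage

    def send(self, process_id, data, outbox):
        # CU signs the message as it enters the Process's outbound queue
        outbox.append({"from": process_id, "data": data,
                       "cu_sig": f"cu-sig({data})"})

    def route(self, outbox, inbox):
        while outbox:
            msg = outbox.popleft()             # MU extracts from the outbox
            self.seq += 1
            msg["seq"] = self.seq              # SU assigns a unique order
            self.arweave_log.append(dict(msg)) # SU uploads to Arweave
            inbox.append(msg)                  # MU delivers to the target

net = Network()
outbox, inbox = deque(), deque()
net.send("proc-A", "transfer 5", outbox)
net.route(outbox, inbox)
```

Notice that the only globally shared artifact is the append-only `arweave_log`; computation and delivery happen entirely between the nodes involved.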

As for storage resources, nodes in the AO network only need to hold the temporary data required for computation, which can be discarded once computation completes; Arweave is responsible for permanent storage. Although Arweave cannot scale horizontally, its storage performance ceiling is extremely high, and the AO network's storage demands are unlikely to reach it in the foreseeable future.

We find that the computing resources, transmission resources, and storage resources in the AO network are decoupled. Apart from the unified storage resources provided by Arweave, computing and transmission resources can scale horizontally without any limitations.

The more CU nodes join the network, and the higher their performance, the greater the network's computing power and the more Processes it can support; likewise, more and better MU and SU nodes mean faster message transmission. In other words, "block space" in AO can be continuously created. Applications can either purchase services from public CU, MU, and SU nodes on the open market or run private nodes of their own. If an application's business grows, it can scale its own nodes to boost performance, just as Web2 applications do. This is unimaginable in traditional blockchains.

At the pricing level, AO can adjust flexibly through supply and demand, allowing resource supply to scale with demand. This adjustment is highly responsive, since nodes can join and leave the network quickly. Looking back at Ethereum, when resource demand surges, users have no choice but to endure high gas fees, because Ethereum cannot improve its performance by adding nodes.
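A minimal sketch of such demand-driven pricing follows; the `sensitivity` constant and the linear adjustment rule are invented for illustration, since the article does not specify AO's actual pricing formula.

```python
def adjust_price(price: float, demand: float, supply: float,
                 sensitivity: float = 0.1) -> float:
    """Nudge the unit price toward supply/demand balance: when demand
    outstrips supply the price rises (attracting more nodes), and when
    supply exceeds demand it falls (letting idle nodes exit)."""
    if supply == 0:
        return price * (1 + sensitivity)
    utilization = demand / supply
    return price * (1 + sensitivity * (utilization - 1.0))
```

The point of the toy model is the feedback loop: unlike a fixed-capacity chain where excess demand can only raise fees, here a higher price expands supply, which in turn pushes the price back down.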

Summary

In summary, we have approached the principles and mechanisms of AO through concepts familiar to most crypto researchers, such as "sharding," "modularity," "Rollup," and "block space," helping everyone understand how AO achieves almost unlimited scalability through disruptive innovation.

Now, looking back at the initial questions, are the answers clearer?

  1. What kind of consensus mechanism is the "holographic consensus" mentioned by AO, if it is neither PoS nor PoW?

AO's consensus mechanism is actually a design close to Op Rollup. It relies on Arweave at the hard consensus layer, while at the soft consensus layer, each Process can autonomously decide the verification intensity and how many CU nodes will perform redundant computations.

  2. Without a hash chain and even without blocks, how does AO ensure data immutability?

The DA data uploaded to Arweave is immutable, providing verifiability for all computation and transmission processes on AO. AO itself does not need to limit the processing capacity within a unit of time, so there is no need to set blocks. The structures used to ensure data immutability, such as "hash chains" and "blocks," are present on the Arweave chain.

  3. Without a coordinating hub, how does AO guarantee the consistency of the global state?

Each Process is an independent "shard," independently managing its transactions and states, and Processes interact through message-driven communication. Therefore, global state consistency is not required. Arweave's permanent storage provides global verifiability and historical traceability, combined with the optimistic challenge mechanism, which can be used for dispute resolution.

  4. Without a redundant computing mechanism, who ensures the reliability of computations? What happens if a computation fails?

AO does not have a globally enforced redundant computing mechanism; each Process can decide how to verify the reliability of each incoming message. If a computation fails, it can be discovered and corrected through optimistic challenges.

  5. Without shared security, how does AO ensure interoperability between Processes?

Processes need to manage the trust of each Process they interoperate with independently, and different levels of verification intensity can be applied to Processes with varying security levels. For complex interoperability with intricate call chains, to avoid high error correction costs due to trust chain breaks, AO may have a minimum verification intensity requirement.
