Three Perspectives on the Disruptive Innovation of AO
Summary (within 50 words)
AO can be understood as a network with infinite sharding and scalability, where each Process acts as an independent shard.
Author: 0xmiddle
Translator: Emily
Reviewer: Eliot Sayes
Source: Content Guild Research
Originally published at: @perma_daoCN
Original link: https://x.com/perma_daoCN/status/1882013077362852060
AO is not a blockchain in the traditional sense. Its unconventional, counterintuitive design can easily confuse researchers who are new to AO, especially when they try to frame it within the architecture of traditional blockchains:
• What is AO’s “Holographic Consensus” if it uses neither PoS nor PoW?
• How does AO ensure data immutability without a hash chain, or even blocks?
• How does AO ensure global state consistency without a central coordinator?
• Without a redundant computation mechanism, who guarantees computational reliability? What happens when a computation goes wrong?
• How does AO ensure interoperability between Processes without shared security?
I will explain AO from three perspectives, using concepts already familiar to blockchain researchers, to help bridge from the known to the unknown and build an intuitive understanding of AO.
Sharding Perspective
After the education of Ethereum 2.0, Polkadot, Near, and other public chains, most blockchain researchers are already familiar with the concept of “sharding”.
What is Sharding?
In blockchain systems, sharding is a scalability solution that splits the network into multiple shards. Each shard independently verifies and processes transactions and produces its own blocks, improving overall network efficiency. Interoperability within a shard is synchronous, while shards interoperate with one another asynchronously through dedicated communication protocols.
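The division of labor described above can be sketched as a toy model in Python. This is an illustration of the sharding idea only, not code from AO or any real chain; the routing rule, `Shard` class, and message shapes are all invented for the example.

```python
import hashlib

NUM_SHARDS = 4

def shard_of(account: str) -> int:
    # Stable routing rule: hash the account name into one of NUM_SHARDS shards.
    return int(hashlib.sha256(account.encode()).hexdigest(), 16) % NUM_SHARDS

class Shard:
    """Each shard independently verifies transactions and packages its own blocks."""
    def __init__(self, sid: int):
        self.sid = sid
        self.mempool = []   # transactions awaiting this shard's next block
        self.blocks = []    # a "block" here is simply a list of transactions
        self.inbox = []     # asynchronous messages arriving from other shards

    def submit(self, tx: dict):
        self.mempool.append(tx)

    def produce_block(self):
        block, self.mempool = self.mempool, []
        self.blocks.append(block)
        return block

shards = [Shard(i) for i in range(NUM_SHARDS)]

# Intra-shard: the transaction is handled synchronously by its own shard.
shards[shard_of("alice")].submit({"from": "alice", "amount": 10})

# Cross-shard: interoperation is asynchronous, via a message to the target's inbox.
shards[shard_of("bob")].inbox.append({"from_shard": shard_of("alice"), "call": "transfer"})

for s in shards:
    s.produce_block()
```

The key property the sketch captures is that no shard ever needs to see another shard’s mempool or blocks; coordination happens only through the asynchronous inbox.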
Polkadot is the classic example of a sharded architecture. In Polkadot, each parachain acts as a shard: parachains independently collect and package their own blocks, and the relay chain randomly assigns a validator group to verify them. Parachains communicate through the standardized XCM message format to achieve interoperability.
AO’s Extreme Sharding
From a sharding perspective, AO can be understood as sharding taken to its extreme: each Process is its own shard. Imagine if every Ethereum smart contract ran on a separate shard. That is essentially AO. Each Process is independent, and calls between Processes are message-driven and fully asynchronous.
Modular Perspective
However, there is a key difference. Polkadot’s design includes a “relay chain”, and Ethereum 2.0 has a “beacon chain”; both serve as a unified consensus layer that provides shared security. That layer is responsible for directly or indirectly validating all shards and the messages passed between them. AO appears to have no such component, so how is AO’s consensus layer designed?
AO’s consensus layer is actually Arweave. From a modular perspective, AO can be understood as an L2 on Arweave: a Rollup with Arweave as its L1. The logs of all messages produced during AO’s operation are uploaded to Arweave for permanent storage, which means Arweave holds an immutable, tamper-proof record of the AO network’s activity. You might ask: Arweave is a decentralized storage platform with limited computational capacity, so how does it verify the data uploaded from AO?
The answer is that Arweave does not verify anything; the AO network has its own optimistic arbitration mechanism. Arweave accepts all message data uploaded from AO without filtering. Each message carries the ID of the Process that emitted it, the signature of the Compute Unit (CU) that executed it, and the signature of the Sequencing Unit (SU) that ordered it. When a dispute arises, the immutable message records on Arweave allow additional nodes to be brought in to recompute the results, create a correct fork, discard the erroneous fork, and slash the stake of the faulty CU or SU on the correct fork. Note that Message Units (MUs) merely collect a Process’s outgoing messages and pass them to SUs; they are trustless, post no stake, and are not subject to slashing.
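The arbitration loop can be illustrated with a small Python sketch. Everything here is a toy under stated assumptions: the `compute` function, stake amounts, slash size, and log entries are invented, and the real mechanism operates over signed messages on Arweave rather than Python dicts.

```python
def compute(msg: dict) -> int:
    """The deterministic computation that honest nodes re-execute on dispute."""
    return msg["a"] + msg["b"]

stakes = {"cu-1": 100, "cu-2": 100}

# Stand-in for Arweave: an append-only log of messages and claimed results.
log = [
    {"msg": {"a": 1, "b": 2}, "cu": "cu-1", "claimed": 3},  # honest result
    {"msg": {"a": 2, "b": 2}, "cu": "cu-2", "claimed": 5},  # faulty result
]

def arbitrate(entry: dict, quorum: int = 3) -> int:
    # Bring in extra nodes to recompute from the immutable record;
    # the majority result defines the correct fork.
    results = [compute(entry["msg"]) for _ in range(quorum)]
    correct = max(set(results), key=results.count)
    if entry["claimed"] != correct:
        stakes[entry["cu"]] -= 50  # slash the deposit of the faulty CU
    return correct

verdicts = [arbitrate(e) for e in log]
```

Because the log is immutable, anyone can replay it later; the claimed result either matches the recomputation or the claimant loses its deposit.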
AO closely resembles an Optimistic Rollup with Arweave as its L1, except that the verification challenge process takes place within the AO network itself rather than on L1.
There is still a problem, though: it is impractical to wait for every message to be included on Arweave before confirming it, since Arweave’s finality takes more than half an hour. AO therefore has its own soft consensus layer, just as Ethereum’s Rollups do: most transactions are booked without waiting for L1 confirmation.
Each Process in AO decides its own verification intensity.
As the recipient of a message, a Process decides whether to wait for Arweave confirmation before processing it, or to process it once the soft consensus layer has confirmed it. Even within the soft consensus stage, a Process can adopt flexible strategies: it may process after a single CU’s confirmation, or require redundant confirmation and cross-validation by multiple CUs, with the redundancy level set by the Process itself.
In practice, verification intensity often correlates with the transaction amount. For example:
• Small transactions use a fast verification strategy and are processed after a single confirmation.
• Medium-sized transactions use multi-point confirmation, with the redundancy level chosen according to the amount.
• Large transactions adopt a cautious strategy and are processed only after confirmation on the Arweave network.
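The tiered policy above can be written as a simple dispatch rule. The thresholds, redundancy levels, and field names below are illustrative assumptions, not defaults defined by AO; a real Process would pick its own values.

```python
def verification_plan(amount: float) -> dict:
    """Map a transaction amount to a verification strategy.

    Thresholds and redundancy levels here are made up for illustration.
    """
    if amount < 100:
        # Small: single-point confirmation, process immediately.
        return {"strategy": "fast", "cu_confirmations": 1, "wait_for_arweave": False}
    if amount < 10_000:
        # Medium: redundancy grows with the amount at stake.
        redundancy = 3 if amount < 1_000 else 5
        return {"strategy": "redundant", "cu_confirmations": redundancy, "wait_for_arweave": False}
    # Large: wait for Arweave finality before processing.
    return {"strategy": "cautious", "cu_confirmations": 1, "wait_for_arweave": True}
```

The point of the sketch is that the policy lives in the receiving application, not in the protocol: two Processes can run entirely different `verification_plan` functions on the same network.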
This is AO’s model of “Holographic Consensus” plus “Flexible Verification”. By decoupling “verifiability” from the act of “verification” itself, AO takes an approach to consensus entirely unlike that of traditional blockchains: responsibility for verifying messages lies not with the network but with the receiving Process, or rather with the application developer.
It is precisely this consensus model that makes AO’s “extreme sharding” possible: a hubless architecture with virtually unlimited scalability.
However, flexible verification means that different Processes verify at different intensities. In complex interoperability scenarios this can break the trust chain: a failure at a single step in a long call chain can cause the whole transaction to fail or produce errors. Such issues have in fact already surfaced during AO’s testnet phase. I believe AO should set a minimum verification intensity for all verification tasks; let’s see what new designs AO’s upcoming mainnet will introduce.
Resource Perspective
In traditional blockchain systems, resources are abstracted as “block space”: the combined storage, computation, and transmission resources provided by nodes, bound together through on-chain blocks to serve as the substrate on which applications run. Block space is a finite resource; on traditional blockchains, applications compete for it and pay for it, and nodes profit from those payments.
AO has no concept of blocks, and consequently no “block space”. But just like smart contracts on other chains, every Process on AO consumes resources while running: it needs nodes to temporarily store transaction and state data, it needs nodes to spend computational resources executing its tasks, and the messages it sends must be carried by MUs and SUs to the target Process.
In AO, nodes fall into three categories: Compute Units (CUs), Message Units (MUs), and Sequencing Units (SUs). CUs carry the computational workload, while MUs and SUs handle communication.
When a Process needs to interact with another Process, it generates a message and places it in its outbound queue. The CU running the Process signs the message; an MU extracts it from the outbound queue and submits it to an SU; the SU assigns the message a unique sequence number and uploads it to Arweave for permanent storage. The MU then delivers the message to the inbound queue of the target Process, completing the delivery. Think of MUs as message collectors and couriers, and SUs as message sequencers and uploaders.
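The delivery path can be simulated end to end in a few lines of Python. This is a sketch of the flow just described, with invented data structures; real CUs, MUs, and SUs are networked services exchanging signed messages, and the “signatures” here are placeholder strings.

```python
import itertools

seq = itertools.count(1)
arweave_log = []  # stand-in for Arweave's permanent storage

def cu_sign(msg: dict, cu_id: str) -> dict:
    # The CU running the sending Process signs the outgoing message.
    return {**msg, "cu_sig": cu_id}

def su_sequence(msg: dict, su_id: str) -> dict:
    # The SU assigns a unique sequence number, signs, and uploads the message.
    stamped = {**msg, "seq": next(seq), "su_sig": su_id}
    arweave_log.append(stamped)
    return stamped

def mu_deliver(sender: dict, receiver: dict):
    # The MU drains the sender's outbound queue, routes each message
    # through the SU, then places it in the receiver's inbound queue.
    while sender["outbox"]:
        msg = cu_sign(sender["outbox"].pop(0), cu_id="cu-1")
        receiver["inbox"].append(su_sequence(msg, su_id="su-1"))

proc_a = {"outbox": [{"to": "proc_b", "action": "ping"}], "inbox": []}
proc_b = {"outbox": [], "inbox": []}
mu_deliver(proc_a, proc_b)
```

Note that by the time a message reaches the target inbox it already carries both signatures and a sequence number, and a copy sits in the permanent log; that is what later makes optimistic arbitration possible.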
As for storage resources, AO’s nodes keep only the temporary data needed for computation, which can be discarded once processing is complete. Permanent storage is handled by Arweave. Although Arweave cannot scale horizontally, its storage capacity ceiling is extremely high, and AO’s storage demands will not come close to it in the foreseeable future.
We can see that computation, transmission, and storage resources in AO are all decoupled. Apart from the unified storage provided by Arweave, both computation and transmission resources can scale horizontally without limit.
The more high-performance CU nodes join the network, the more computing power it has and the more Processes it can support. Likewise, the more high-performance MU and SU nodes join, the faster the network transmits messages. In other words, AO’s equivalent of “block space” can be created continuously. An application can either buy public CU, MU, and SU services on the open market or run private nodes to serve itself. If its business grows, it can simply scale up its own nodes to boost performance, just as Web2 applications do. This is unimaginable on a traditional blockchain.
At the pricing level, AO adjusts flexibly through supply and demand, so the supply of resources can expand and contract with need. This adjustment is highly responsive, since nodes can join and leave very quickly. Looking back at Ethereum, when resource demand spikes, users can do nothing but endure high gas fees, because Ethereum cannot improve performance by adding more nodes.
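The contrast can be made concrete with a toy congestion-pricing rule. The formula and all the numbers are invented for illustration; neither AO nor Ethereum prices resources this way exactly, but the shape of the argument holds.

```python
def price(demand: float, capacity: float, base: float = 1.0) -> float:
    """Toy congestion pricing: fees climb as demand outstrips capacity."""
    utilization = demand / capacity
    return base * max(1.0, utilization ** 2)

# Demand doubles. A fixed-capacity chain can only let fees spike...
fixed_capacity_fee = price(demand=200, capacity=100)
# ...while an elastic network adds nodes (capacity 100 -> 200) and fees settle.
elastic_fee = price(demand=200, capacity=200)
```

On the fixed-capacity side the fee quadruples; on the elastic side new nodes absorb the demand and the fee returns to the base rate.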
Conclusion
In this article we started from concepts most crypto researchers know well, such as “sharding”, “modularity”, “Rollups”, and “block space”, to cut into AO’s principles and mechanisms and show how AO achieves near-infinite scalability through disruptive innovation.
Now, looking back at the questions posed at the beginning, are they clear to you?
1. What is AO’s “Holographic Consensus” if it uses neither PoS nor PoW?
AO’s consensus mechanism is in fact a design close to an Optimistic Rollup. At the hard consensus level it relies on Arweave; at the soft consensus level, each Process independently decides its verification intensity and how many CU nodes perform redundant computation.
2. How does AO ensure data immutability without a hash chain, or even blocks?
The DA data uploaded to Arweave is immutable, providing verifiability for all computation and transmission in AO. AO itself has no need to cap its processing capacity per unit of time, so it needs no blocks. The structures that guarantee immutability, hash chains and blocks, do exist on the Arweave chain.
3. How does AO ensure global state consistency without a central coordinator?
Each Process is an independent “shard” that manages its own transactions and state, and Processes interact through message passing, so global state consistency is not needed. Arweave’s permanent storage provides global verifiability and historical traceability, which, combined with the optimistic challenge mechanism, supports dispute resolution.
4. Without a redundant computation mechanism, who guarantees computational reliability? What happens when a computation goes wrong?
AO has no globally enforced redundant computation mechanism; each Process decides for itself how to verify the reliability of each incoming message. If a computation goes wrong, it can be detected and corrected through optimistic challenges.
5. How does AO ensure interoperability between Processes without shared security?
A Process must manage the trust it grants to each Process it interoperates with, and can apply different verification intensities to Processes of different security levels. For interoperation involving complex call chains, AO may introduce a minimum verification intensity requirement to avoid the high error-correction costs of a broken trust chain.

🏆 Spot typos, grammatical errors, or inaccuracies in this article? Report and Earn!
Disclaimer: This article does not represent the views of PermaDAO. PermaDAO does not provide investment advice or endorse any projects. Readers should comply with their country's laws when engaging in Web3 activities.
🔗 More about PermaDAO: Website | Twitter | Telegram | Discord | Medium | Youtube