Multicoin Capital Partner: Analyzing Why Modular Blockchains Are Overrated from 7 Perspectives

Multicoin Capital
2023-08-16 12:54:44
Return to first principles.

Author: Kyle Samani, Partner at Multicoin Capital

Compiled by: Luffy, Foresight News

Over the past two years, the debate on blockchain scalability has centered on one key question: modularity versus integration.

Note: discussions in crypto often conflate "monolithic" and "integrated" systems. The technical debate between integrated and modular systems spans roughly 40 years of history, and the conversation in crypto should be viewed through that same historical lens; it is far from a new debate.

When considering modularity versus integration, the most important design decision that blockchains can make is to what extent they expose the complexity of the stack to application developers. The customers of blockchains are application developers, and therefore the final design decisions should take their perspective into account.

Today, modularity is largely hailed as the primary means of blockchain scalability. In this article, I will question this assumption from first principles, revealing the cultural myths and hidden costs of modular systems, and sharing the conclusions I have drawn over the past six years while contemplating this debate.

Modular Systems Increase Development Complexity

So far, the greatest hidden cost of modular systems is the increased complexity of the development process.

Modular systems significantly increase the complexity that application developers must manage, both in the context of their own applications (technical complexity) and in the context of interacting with other applications (social complexity).

In the context of cryptocurrency, modular blockchains theoretically allow for greater specialization, but at the cost of creating new complexity. This complexity, both technical and social, is passed on to application developers, ultimately making it harder to build applications.

For example, consider the OP Stack. As of now, it appears to be the most popular modular framework. The OP Stack forces developers to choose between adopting the Law of Chains (which brings significant social complexity) or forking and maintaining the code themselves. Both choices impose significant downstream complexity on builders. If you fork, will other ecosystem participants (CEXs, fiat on-ramps, etc.) support you, given that they must bear the cost of complying with yet another technical standard? If you follow the Law of Chains, what rules and constraints will be imposed on you today, and what rules tomorrow?

Source: OSI Model

Modern operating systems (OS) are large complex systems that contain hundreds of subsystems. Modern operating systems handle layers 2-6 in the diagram above. This is a typical example of integrating modular components to manage the complexity of the stack exposed to application developers. Application developers do not want to deal with anything below layer 7, which is precisely why operating systems exist: they manage the complexity of the layers below so that application developers can focus on layer 7. Therefore, modularity itself should not be the goal, but rather a means to achieve an end.

Every major software system in today's world—cloud backends, operating systems, database engines, game engines, etc.—is highly integrated while consisting of many modular subsystems. Software systems tend to be highly integrated to maximize performance and reduce development complexity. The same is true for blockchains.

As an aside, Ethereum is reducing the complexity that arose during the 2011-2014 era of Bitcoin forks. Modularity advocates often cite the Open Systems Interconnection (OSI) model as an argument for separating data availability (DA) from execution; however, this argument is widely misunderstood. Properly understood, the OSI model is an argument for integrated systems, not modular ones.

Modular Chains Cannot Execute Code Faster

By construction, the common definition of a "modular chain" is one that separates data availability (DA) from execution: one set of nodes is responsible for DA, while another set (or sets) is responsible for execution. The node sets may or may not overlap.

In practice, separating DA and execution does not inherently improve the performance of either: some hardware somewhere must provide DA, and some hardware somewhere must perform execution. Splitting these functions onto different machines does not make either one faster. While separation can reduce computational cost, it can only do so by centralizing execution.

It bears repeating: whether the architecture is modular or integrated, some hardware somewhere must do the work, and moving DA and execution onto separate hardware does not, by itself, accelerate the system or increase its total capacity.

Some argue that modularity allows many EVMs to run in parallel as Rollups, scaling execution horizontally. While this is theoretically true, it really highlights the limitation of the EVM as a single-threaded processor rather than supporting the premise that separating DA from execution scales the total throughput of the system.

Modularity on its own does not improve throughput.

Modularization Increases User Transaction Costs

By definition, each L1 and L2 is an independent asset ledger with its own state. These separate shards of state can communicate via cross-chain bridges such as LayerZero and Wormhole, but doing so adds transaction latency and additional complexity for both developers and users.

The more asset ledgers there are, the more fragmented the global state of all accounts becomes. This is a serious problem for applications and users that span multiple chains. State fragmentation leads to a series of consequences:

  • Reduced liquidity, resulting in higher transaction slippage;
  • Increased total Gas consumption (cross-chain transactions require at least two transactions across at least two asset ledgers);
  • Increased duplicated computation across asset ledgers (which reduces total system throughput): whenever the price of ETH-USDC moves on Binance or Coinbase, an arbitrage opportunity appears in every ETH-USDC pool on every asset ledger. One can easily imagine a world in which each such price move triggers more than ten transactions spread across various asset ledgers. Keeping prices consistent across fragmented state is an extremely inefficient use of block space (a rough illustration follows this list).
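
To make the slippage and duplicated-arbitrage points above concrete, here is a back-of-the-envelope sketch (a toy constant-product AMM model with made-up numbers, purely illustrative):

```python
# Toy illustration of fragmenting the same liquidity across N asset ledgers.
# Assumes identical constant-product (x*y = k) ETH-USDC pools; numbers are made up.

def price_impact(trade_usdc: float, pool_usdc: float) -> float:
    """Approximate relative price impact of a buy against a constant-product pool."""
    return trade_usdc / (pool_usdc + trade_usdc)

TOTAL_LIQUIDITY_USDC = 10_000_000   # total USDC-side liquidity across all pools
TRADE_USDC = 100_000                # size of a single swap

for n_ledgers in (1, 10):
    pool = TOTAL_LIQUIDITY_USDC / n_ledgers
    impact = price_impact(TRADE_USDC, pool)
    # Every CEX price move opens an arbitrage opportunity in each pool, so up to
    # n_ledgers arbitrage transactions are needed per move instead of one.
    print(f"{n_ledgers} ledger(s): ~{impact:.2%} price impact, "
          f"up to {n_ledgers} arbitrage txs per CEX price move")
```

Splitting the same total liquidity across ten ledgers makes the same trade suffer roughly nine times the price impact, and every CEX price move now invites up to ten arbitrage transactions instead of one.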

It is important to recognize that creating more asset ledgers significantly increases costs across all these dimensions, especially those related to DeFi.

The primary input for DeFi is on-chain state (i.e., who owns what assets). When teams launch application chains/Rollups, they naturally create state fragmentation, which is very detrimental to DeFi, whether for developers, who must manage added complexity (bridges, wallets, latency, cross-chain MEV, etc.), or for users, who face worse slippage and settlement delays.

The ideal conditions for DeFi are: assets are issued on a single asset ledger and traded within a single state machine. The more asset ledgers there are, the more complexity developers must manage, and the higher the costs users must bear.

Application Rollups Do Not Create New Profit Opportunities for Developers

Supporters of application chains/Rollups argue that incentives will push application developers to build on their own Rollups rather than on an L1 or L2 so that applications can capture MEV for themselves. However, this idea is flawed: running an application Rollup is not the only way to capture MEV back to an application-layer token, and in most cases it is not the best way. Applications can simply encode logic in smart contracts on a general-purpose chain to capture MEV back to their own tokens. Let's consider a few examples:

  • Liquidation: If Compound or Aave DAO wants to capture a portion of the MEV flowing to liquidation bots, they can simply update their respective contracts to direct a portion of the fees currently going to liquidators back to their own DAO, without needing a new chain/Rollup.
  • Oracles: Oracle tokens can capture MEV by selling back-running services. In addition to a price update, an oracle can bundle any on-chain transaction that is guaranteed to run immediately after that update, and sell this back-running service to searchers, block builders, and so on.
  • NFT Minting: NFT mints are rife with flipper bots. This can easily be mitigated by encoding a declining profit-redistribution schedule: for example, if someone resells their NFT within two weeks of minting, 100% of the proceeds go back to the issuer or DAO, and the percentage declines over time (a minimal sketch follows this list).
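
To illustrate the last point, here is a minimal sketch of a declining profit-redistribution schedule (Python, with hypothetical windows and a linear decay; the exact curve is an assumption, not something prescribed by the article):

```python
from datetime import datetime, timedelta

# Hypothetical declining resale-fee schedule for an NFT mint.
# Within the first two weeks, 100% of resale revenue goes back to the
# issuer/DAO; the share then decays linearly to 0% over the following weeks.

FULL_CLAWBACK_WINDOW = timedelta(weeks=2)   # 100% of revenue returned
DECAY_WINDOW = timedelta(weeks=6)           # share decays to 0% after this

def issuer_share(mint_time: datetime, resale_time: datetime) -> float:
    """Fraction of resale revenue redirected to the issuer/DAO."""
    age = resale_time - mint_time
    if age <= FULL_CLAWBACK_WINDOW:
        return 1.0
    if age >= FULL_CLAWBACK_WINDOW + DECAY_WINDOW:
        return 0.0
    # Linear decay between the two windows.
    elapsed = (age - FULL_CLAWBACK_WINDOW) / DECAY_WINDOW
    return 1.0 - elapsed

# Example: a flip 3 days after mint returns all revenue to the DAO;
# a sale 5 weeks after mint returns half.
mint = datetime(2023, 8, 1)
print(issuer_share(mint, mint + timedelta(days=3)))    # 1.0
print(issuer_share(mint, mint + timedelta(weeks=5)))   # 0.5
```

On-chain, the same schedule would live in the mint or marketplace contract's transfer logic; the point is that the redistribution fits in a few lines of a general-purpose smart contract rather than requiring a dedicated chain.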

There is no universal answer for capturing MEV back to application-layer tokens. However, with a little thought, application developers can easily capture MEV back to their own tokens on a general chain. Launching an entirely new chain is not necessary, as it introduces additional technical and social complexity and creates more wallet and liquidity headaches for users.

Application Rollups Cannot Solve Cross-Application Congestion Issues

Many believe that application chains/Rollups ensure that applications are not affected by gas spikes caused by other on-chain activities (such as popular NFT minting). This perspective is partially correct but mostly incorrect.

This problem is historical in nature and is fundamentally rooted in the single-threaded design of the EVM, not in whether DA and execution are separated. All L2s must pay fees to L1, and L1 fees can spike at any time. During the memecoin craze earlier this year, transaction fees on Arbitrum and Optimism briefly exceeded $10, and Optimism recently saw another fee spike after the launch of Worldcoin.

There are only two solutions to fee spikes: 1) maximize L1 DA, and 2) make the fee market as granular as possible.

If L1's resources are constrained, usage peaks in various L2s will pass through to L1, leading to higher costs for all other L2s. Therefore, application chains/Rollups cannot escape the impact of gas spikes.

The proliferation of EVM L2s is merely a crude attempt to localize fee markets. It is better than a single fee market on one EVM L1, but it does not solve the core problem. Once you recognize that the solution is to localize fee markets, the logical endpoint is a fee market for each piece of state, not a fee market for each L2.

Other chains have already reached this conclusion: Solana and Aptos localize fee markets natively. This took years of engineering work tailored to their respective execution environments. Most modularity advocates severely underestimate the importance and difficulty of engineering localized fee markets.

Localized fee markets, Source: https://blog.labeleven.dev/why-solana
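
To make "a fee market for each piece of state" concrete, here is a simplified, illustrative sketch of a per-account congestion surcharge (a toy model in Python; it is not how Solana or Aptos actually implement localized fee markets):

```python
# Illustrative sketch of a local (per-state) fee market: the base fee is global,
# but each account accrues its own congestion multiplier, so a hot NFT mint
# raises fees only for transactions touching that account. Toy model only.

BASE_FEE = 5_000          # base price per transaction (assumed units)
TARGET_LOAD = 100         # target writes per account per window (assumed)
ADJUSTMENT = 1.125        # multiplier step per window of excess demand (assumed)

class LocalFeeMarket:
    def __init__(self):
        self.multiplier = {}   # account -> current congestion multiplier
        self.load = {}         # account -> writes in the current window

    def record_write(self, account: str):
        self.load[account] = self.load.get(account, 0) + 1

    def end_window(self):
        # Raise the multiplier for overloaded accounts, relax it otherwise.
        for account in set(self.load) | set(self.multiplier):
            writes = self.load.get(account, 0)
            m = self.multiplier.get(account, 1.0)
            if writes > TARGET_LOAD:
                m *= ADJUSTMENT
            else:
                m = max(1.0, m / ADJUSTMENT)
            self.multiplier[account] = m
        self.load.clear()

    def fee(self, accounts_touched: list) -> int:
        # A transaction pays for the hottest piece of state it touches.
        m = max((self.multiplier.get(a, 1.0) for a in accounts_touched), default=1.0)
        return int(BASE_FEE * m)
```

A transaction touching only a quiet AMM pool keeps paying roughly the base fee even while a popular mint account's multiplier climbs; in a single global fee market, that same mint would raise prices for everyone.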

By launching multiple chains, developers do not unlock true performance gains. When applications drive increased transaction volume, the costs of all L2 chains are affected.

Flexibility is Overrated

Supporters of modular chains argue that modular architectures are more flexible. This statement is clearly true, but is it really important?

For six years, I have been trying to find meaningful flexibility that general L1s cannot provide for application developers. But so far, apart from three very specific use cases, no one has clearly articulated why flexibility is important and how it directly aids scalability. The three specific use cases where I find flexibility to be important are:

Applications that leverage "hot" state. Hot state is state that is necessary for coordinating some set of operations in real time but is only committed to the chain temporarily; it does not live on-chain forever. Some examples of hot state include (a sketch follows the examples below):

  • Limit orders in DEXs, such as dYdX and Sei (many limit orders ultimately get canceled).
  • Real-time coordination and identification of order flow in dFlow (dFlow is a protocol that facilitates a decentralized order flow market between market makers and wallets).
  • Low-latency oracles like Pyth, which runs as an independent SVM chain. Pyth generates so much data that the core team decided it was best to publish high-frequency price updates on a dedicated chain and then bridge prices to other chains on demand via Wormhole.
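
As a rough illustration of the limit-order case above (a toy sketch, not how dYdX or Sei actually work): resting orders live in fast, transient "hot" memory where they can be placed and cancelled at high frequency, and only matched fills are ever committed to the durable asset ledger.

```python
# Toy illustration of "hot" state: resting limit orders are held in memory
# and can be placed or cancelled at high frequency; only fills are committed
# to the durable asset ledger. Illustrative only, not a real exchange design.

class HotOrderBook:
    def __init__(self):
        self.bids = {}        # order_id -> (price, size): hot, transient state
        self.committed = []   # matched fills written to the ledger: durable state

    def place_bid(self, order_id: str, price: float, size: float):
        self.bids[order_id] = (price, size)

    def cancel(self, order_id: str):
        # Cancelled orders simply vanish; nothing ever reaches the ledger.
        self.bids.pop(order_id, None)

    def sell(self, price: float, size: float):
        # Match against the best resting bid at or above the asking price.
        candidates = [(oid, p, s) for oid, (p, s) in self.bids.items() if p >= price]
        if not candidates:
            return
        oid, p, s = max(candidates, key=lambda c: c[1])
        filled = min(size, s)
        self.committed.append({"order": oid, "price": p, "size": filled})
        if s - filled > 0:
            self.bids[oid] = (p, s - filled)
        else:
            del self.bids[oid]
```

The vast majority of placements and cancellations never consume block space, which is exactly why this kind of state is a poor fit for being committed back to an L1.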

Chains that modify consensus. The best examples are Osmosis (where all transactions are encrypted before being sent to validators) and Thorchain (which prioritizes transactions in a block based on the fees paid).

Infrastructure that needs to leverage threshold signature schemes (TSS) in some way. Some examples in this regard are Sommelier, Thorchain, Osmosis, Wormhole, and Web3Auth.

Except for Pyth and Wormhole, all the examples listed above are built using the Cosmos SDK and run as independent chains. This underscores the applicability and scalability of the Cosmos SDK for all three use cases: hot state, consensus modification, and threshold signature scheme (TSS) systems.

However, most of the projects in the above three use cases are not applications; they are infrastructure.

Pyth and dFlow are not applications; they are infrastructure. Sommelier, Wormhole, Sei, and Web3Auth are not applications; they are infrastructure. Among them, the only user-facing applications are a specific type: DEXs (dYdX, Osmosis, Thorchain).

For six years, I have been asking supporters of Cosmos and Polkadot what use cases their added flexibility actually enables. I believe there is now enough data to draw some inferences:

First, the infrastructure examples should not exist as Rollups, either because they generate too much low-value data (hot state, where the whole point is that the data is not committed back to L1) or because the function they perform is intentionally unrelated to updating state on an asset ledger (e.g., all of the TSS use cases).

Second, the only application type I have seen that benefits from changes to core system design is the DEX. This is because DEXs are rife with MEV and because general-purpose chains cannot match the latency of CEXs. Consensus determines transaction-execution quality and MEV, so consensus-level changes naturally create opportunities for DEX innovation. However, as noted earlier, the primary input for a spot DEX is the assets being traded. DEXs compete for assets, and therefore for asset issuers. In this framing, standalone DEX chains are unlikely to succeed, because asset issuers' primary concern when issuing assets is not DEX-related MEV but rather general smart contract functionality and the ability to incorporate that functionality into their respective applications.

Derivatives DEXs, however, do not need to compete for asset issuers: they primarily rely on collateral such as USDC and on oracle price feeds, and they must lock user assets to collateralize derivative positions anyway. Therefore, to the extent standalone DEX chains make sense at all, they make the most sense for derivatives-focused DEXs such as dYdX and Sei.

Consider the applications that exist today on general-purpose integrated L1s: games, DeSoc systems (such as Farcaster and Lens), DePIN protocols (such as Helium, Hivemapper, Render Network, DIMO, and Daylight), Sound, NFT exchanges, and so on. None of these particularly benefits from the flexibility of modifying consensus; each has a fairly simple, obvious, and common set of requirements for its asset ledger: low fees, low latency, access to spot DEXs, access to stablecoins, and access to fiat on-ramps such as CEXs.

I believe we now have enough data to say that the vast majority of user-facing applications share the general requirements listed above. While some applications can marginally optimize other variables by customizing parts of the stack, the trade-offs those customizations bring are usually not worth it (more bridging, less wallet support, less indexing/query support, fewer fiat on-ramps, etc.).

Launching new asset ledgers is one way to achieve flexibility, but it rarely adds value and almost always brings technical and social complexity, with minimal ultimate benefits for application developers.

Scaling DA Does Not Require Re-staking

You will also hear modularity advocates talk about re-staking in the context of scaling. This is the most speculative argument put forth by supporters of modular chains, but it is worth discussing.

The argument goes roughly as follows: thanks to re-staking (e.g., via systems like EigenLayer), the crypto ecosystem can re-stake ETH indefinitely to power an arbitrary number of DA layers (e.g., EigenDA) and execution layers; scalability is thereby solved in every dimension while ensuring that ETH accrues value.

Despite the enormous uncertainty between today's reality and that theoretical future, let us take for granted that all of the layered assumptions work as advertised.

Currently, Ethereum's DA is about 83 KB/s. With the rollout of EIP-4844 later this year, that speed can roughly double to about 166 KB/s. EigenDA can add an additional 10 MB/s, but it requires a different set of security assumptions (not all ETH will be re-staked to EigenDA).

In contrast, Solana currently offers about 125 MB/s of DA (32,000 shreds per block, 1,280 bytes per shred, 2.5 blocks per second), far more than Ethereum or EigenDA. Moreover, Solana's DA capacity scales over time with Nielsen's Law.
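
As a back-of-the-envelope check using only the figures quoted above (treat the results as rough orders of magnitude, not precise measurements):

```python
# Rough DA-bandwidth comparison using the figures quoted in the text.
KB, MB = 1024, 1024 ** 2

eth_da_today = 83 * KB     # ~83 KB/s today
eth_da_4844  = 166 * KB    # ~166 KB/s after EIP-4844 (roughly double)
eigen_da     = 10 * MB     # ~10 MB/s, under different security assumptions

# Solana, from the parameters quoted in parentheses above.
shreds_per_block  = 32_000
bytes_per_shred   = 1_280
blocks_per_second = 2.5
solana_da = shreds_per_block * bytes_per_shred * blocks_per_second

# ~98 MB/s with these parameters, the same order of magnitude as the
# ~125 MB/s figure quoted above.
print(solana_da / MB)
print(solana_da / eth_da_4844)   # roughly 600x Ethereum after EIP-4844
print(solana_da / eigen_da)      # roughly 10x EigenDA
```

Even under these rough assumptions, the gap is about one order of magnitude versus EigenDA and nearly three versus Ethereum after EIP-4844.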

There are many ways to scale DA through re-staking and modularization, but these mechanisms are not necessary today and will introduce significant technical and social complexity.

Building for Application Developers

After years of contemplation, I conclude that modularity itself should not be a goal.

Blockchains must serve their customers (i.e., application developers), and therefore, blockchains should abstract the complexity at the infrastructure level so that developers can focus on building world-class applications.

Modularity is great. But the key to building winning technology is figuring out which parts of the stack need to be integrated and which parts should be left to others. As it stands, chains that integrate DA and execution essentially provide a simpler end-user and developer experience and ultimately lay a better foundation for first-class applications.
