Jump Crypto: A Detailed Explanation of Various Blockchain Scalability Solutions

JumpCrypto
2022-04-01 17:18:03
The ability to effectively scale blockchain is a key factor determining the future success of the cryptocurrency industry.

Author: Rahul Maganti, Partner at Jump Crypto

Original Title: "A Framework for Analyzing L1s"

Compiled by: Hu Tao, Chain Catcher

Introduction

In the previous article, we established a framework for analyzing L1s, particularly considering the countless new chains that have recently emerged. We also briefly pointed out that many of the motivations behind these novel L1s are primarily focused on finding solutions for blockchain scalability. Let's take a closer look at some of these solutions. In this article, our goals are to:

  • Provide an overview of various Layer 1 and Layer 2 scaling solutions.
  • Analyze and compare these different solutions along some core dimensions.
  • Offer our views on which scaling architectures are the most promising.

The Scalability Trilemma

In a blog post from early 2017, Vitalik Buterin introduced the scalability trilemma, referring to the three main attributes that define the feasibility of blockchain systems: (1) Decentralization; (2) Security; (3) Scalability.

Among these three aspects, we believe that scalability remains the most challenging issue to solve without excessively compromising the other two pillars. Security and decentralization are still crucial for the performance of these systems, but as we will see later, addressing the challenges of scaling distributed systems also provides key breakthroughs for decentralization and security, for very fundamental reasons. Therefore, we emphasize that the ability to effectively scale blockchains will be a key factor in determining the future success of the crypto industry more broadly.

Broadly speaking, there are two main categories of scaling: Layer 1 and Layer 2. Both are relevant and critical for increasing the throughput of blockchains, but they focus on different aspects or layers of the Web3 stack. Over the past few years, scaling has undoubtedly received a lot of attention and is often touted as a key pathway to the mass adoption of blockchain technology, especially in light of the rising retail use cases and increasing transaction volumes.

Layer 1 (L1s)

A few major scaling architectures stand out for Layer 1:

  • State Sharding
  • Parallel Execution
  • Improvements in Consensus Models
  • Validity Proofs

State Sharding

There are many types of sharding, but the core principles remain unchanged:

  • Sharding distributes the costs of validation and computation, so not every node needs to validate every transaction.
  • Nodes in a shard, like those in a larger chain, must: (1) relay transactions; (2) validate transactions; (3) store the state of the shard.
  • Shard chains should retain the security primitives of non-sharded chains through: (1) a valid consensus mechanism; (2) security proofs or signature aggregation.

Sharding allows a chain to be split into K different independent subnets or shards. If there are N nodes in total in the network, then there are N/K nodes operating in each of the K subnets. When a set of nodes in a given shard (say K1) validates a block, it provides a proof or a set of signatures claiming that the subnet is valid. All other nodes, S-{K1}, only need to verify the signatures or proofs. (The time for verification is usually much shorter than rerunning the computation itself).

To understand the scaling advantages of sharding, it helps to look at how this architecture increases the chain's total computational capacity. Assume the average capacity of a node is O(C), and suppose the chain needs to process B blocks. A non-sharded chain's capacity is just O(C), since every node processes every block; a sharded chain can process blocks in parallel, so its capacity is O(C·B). In general, the savings in runtime costs are multiplicative! A more in-depth technical explanation from Vitalik can be found here. Sharding has been one of the most significant foundational components of the Ethereum 2.0 roadmap in recent years.
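The capacity argument above can be sketched in a few lines. This is a toy model for intuition only; the function names and numbers are illustrative, not part of any protocol.

```python
# Toy model of sharded vs. non-sharded capacity (illustrative only).
# Assume each node can process C units of computation per slot.

def unsharded_capacity(c: int) -> int:
    """Every node re-executes every block, so the chain as a whole
    can process no more than one node's worth of work: O(C)."""
    return c

def sharded_capacity(c: int, b: int) -> int:
    """B blocks are validated in parallel across shards, so total
    capacity grows multiplicatively: O(C * B)."""
    return c * b

print(unsharded_capacity(1000))     # 1000 units/slot
print(sharded_capacity(1000, 64))   # 64000 units/slot across 64 parallel blocks
```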

Parallel Execution

Sharding and parallel execution are similar in many ways. While sharding validates blocks in parallel across different sub-chains, parallel execution divides up the work of processing transactions within a single node. The effect of this architecture is that nodes can now process thousands of contracts in parallel!

We won't go into detail about how it works, but here is a great article that delves deeper into how parallel execution works in Solana through Sealevel.

Consensus Models

Consensus is at the core of Layer 1 blockchain protocols—participants in the network need a way to reach agreement on the chain's state for transactions/data that are to be finalized on the chain. Thus, as new transactions are added and the chain progresses, consensus serves as a means of ensuring the consistency of the shared state. Different consensus mechanisms, however, lead to fundamental differences in the key metrics we use to measure blockchain performance: security, fault tolerance, decentralization, scalability, and so on. That said, a consensus model alone does not determine the performance of a blockchain system. Different consensus models suit different scaling mechanisms, and it is the combination that ultimately determines the efficacy of a particular network.

Layer 2 (L2s)

Fundamentally, Layer 2 scaling is predicated on the premise that resources (whether computational or otherwise) on Layer 1 become prohibitively expensive. To reduce costs for users, services, and other community participants, heavy computational loads should be moved off-chain (to Layer 2), while still attempting to retain the underlying security guarantees provided by cryptographic and game-theoretic primitives on Layer 1 (public/private key pairs, elliptic curves, consensus models, etc.).

Early attempts in this area primarily involved establishing "trusted channels" between two parties off-chain, and then completing state updates on Layer 1. State channels achieve this by "locking certain parts of the blockchain state into a multi-signature contract controlled by a defined set of participants." Plasma chains, first proposed by Joseph Poon and Vitalik Buterin in this paper, allow for the creation of an unlimited number of child chains, which then use fraud proofs to finalize transactions on Layer 1.

Rollups (And Their Benefits)

Rollups are also a way to move computation off-chain (to Layer 2) while still recording messages or transactions on-chain (Layer 1). Transactions that would originally be recorded, mined, and verified on Layer 1 are recorded, aggregated, and verified on Layer 2, and then published back to the original Layer 1. This model achieves two goals: (1) it frees up computational resources on the base layer; (2) it still retains the underlying cryptographic security guarantees of Layer 1.

  • Transactions are "rolled up" into batches and ordered by a Sequencer, then passed to a collection contract
  • Contracts executed on L2 store the off-chain contract calls
  • The contract then sends the Merkle root of the new state back to the L1 chain as calldata
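The last step above—committing a batch to a single Merkle root—can be sketched as follows. This is a minimal illustration, not any production rollup's actual batch format.

```python
# Minimal sketch: batch L2 transactions and compute the Merkle root
# that would be posted back to L1 as calldata (illustrative format).
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash the leaves, then pairwise-hash levels until one root remains."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:       # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

batch = [b"tx1: alice->bob 5", b"tx2: bob->carol 3", b"tx3: carol->dave 1"]
root = merkle_root(batch)
print(root.hex())   # 32-byte commitment to the whole batch
```

Posting only this 32-byte root (plus the compressed batch data) is what frees up L1 computation while keeping the data available for verification.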

Optimistic Rollup

Validators publish transactions to the chain under the assumption that they are valid. If they choose to, other validators can challenge the transaction, but they are certainly not required to. (Think of it as an innocent until proven guilty model). However, once a challenge is initiated, both parties (say Alice and Bob) are forced to participate in a dispute resolution protocol.

At a high level, the dispute resolution algorithm works as follows:

  1. Alice claims her assertion is correct. Bob disagrees.
  2. Alice then splits the assertion into equal parts (for simplicity, assume this is a bisection).
  3. Bob must then choose which part of the assertion he believes is incorrect (say the first half).
  4. Recursively run steps 1 - 3.
  5. Alice and Bob play this game until the size of the sub-assertions is just one instruction. Now, the protocol simply executes this instruction. If Alice is correct, then Bob loses his stake, and vice versa.
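The five steps above can be simulated directly. The sketch below is a toy version of the bisection game (real protocols such as Arbitrum's are far more involved); `resolve` assumes both parties commit to a full trace of intermediate states, which is a simplification.

```python
# Toy bisection dispute game, loosely modelled on the steps above.
# A "program" is a list of single-step instructions over an integer state.

def execute_one(op, state):
    return op(state)

def trace(instructions, state=0):
    """Honest execution trace: the state after each instruction."""
    states = [state]
    for op in instructions:
        state = execute_one(op, state)
        states.append(state)
    return states

def resolve(instructions, alice_states, bob_states):
    """Both parties agree on the initial state and disagree on the final
    one. Halve the disputed range until a single instruction remains,
    then execute that one instruction to pick the winner."""
    lo, hi = 0, len(instructions)
    while hi - lo > 1:                       # O(log n) rounds
        mid = (lo + hi) // 2
        if alice_states[mid] != bob_states[mid]:
            hi = mid                         # challenge the first half
        else:
            lo = mid                         # disagreement is in the second half
    truth = execute_one(instructions[lo], alice_states[lo])
    return "alice" if truth == alice_states[hi] else "bob"

increment = lambda s: s + 1
program = [increment] * 8
honest = trace(program)                              # [0, 1, ..., 8]
cheating = honest[:5] + [s + 100 for s in honest[5:]]  # fraud from step 5

print(resolve(program, cheating, honest))   # Alice cheated -> "bob"
print(resolve(program, honest, cheating))   # Bob cheated  -> "alice"
```

Note that only one instruction is ever re-executed by the protocol itself, which is exactly why the honest case is cheap and the disputed case is logarithmic.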

A more in-depth explanation of the Arbitrum dispute resolution protocol can be found here.

In the honest (undisputed) case, the cost is constant, O(1). In the disputed case, the resolution algorithm runs in O(log n), where n is the size of the original assertion.

A key result of this Optimistic verification and dispute resolution architecture is that Optimistic Rollups have a guarantee of an honest party, meaning that to secure the chain, the protocol only needs one honest party to detect and report fraud.

Zero-Knowledge Rollups

In many current blockchain systems and Layer 1s, consensus is achieved by effectively "rerunning" transaction computations to verify updates to the chain's state. In other words, to complete a transaction on the network, nodes in the network need to perform the same computations. This seems like a naive way to verify the history of the chain—indeed it is! So the question becomes, is there a way to ensure we can quickly verify the correctness of transactions without having to replicate computations across a large number of nodes? (For those with a background in complexity theory, this idea is at the core of P vs. NP). Well, yes! This is where ZK rollups come into play—essentially, they ensure that the cost of verification is significantly lower than the cost of executing the computation.

Now, let's delve into how ZK-Rollups achieve this while maintaining a high level of security. The high-level ZK-rollup protocol includes the following components:

  • ZK Verifier - Verifies proofs on the chain.
  • ZK Prover - Obtains data from applications or services and outputs proofs.
  • On-chain Contracts - Track on-chain data and verify system state.
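The three components listed above can be sketched structurally as follows. The "proof" here is a trivially forgeable mock standing in for a real SNARK/STARK backend—it is neither zero-knowledge nor sound—so treat this purely as a picture of how the pieces fit together.

```python
# Structural sketch of a ZK-rollup pipeline with a MOCK proof system.
# A real SNARK/STARK makes the proof sound and succinct; this one is not.
import hashlib
from dataclasses import dataclass

def h(b: bytes) -> str:
    return hashlib.sha256(b).hexdigest()

@dataclass
class Proof:
    new_state_root: str
    commitment: str              # stands in for a real succinct proof

class ZKProver:
    """Takes a batch of transactions off-chain, computes the new state,
    and outputs a (mock) proof."""
    def prove(self, state_root: str, batch: list[str]) -> Proof:
        new_root = h((state_root + "".join(batch)).encode())
        return Proof(new_root, h(("proof:" + new_root).encode()))

class ZKVerifier:
    """Checks the proof without re-executing the batch; with a real
    proof system this is far cheaper than re-execution."""
    def verify(self, proof: Proof) -> bool:
        return proof.commitment == h(("proof:" + proof.new_state_root).encode())

class RollupContract:
    """On-chain contract: tracks the state root, accepting updates
    only when accompanied by a valid proof."""
    def __init__(self, genesis: str):
        self.state_root = genesis
        self.verifier = ZKVerifier()
    def submit(self, proof: Proof) -> bool:
        if self.verifier.verify(proof):
            self.state_root = proof.new_state_root
            return True
        return False

prover = ZKProver()
contract = RollupContract("genesis")
proof = prover.prove(contract.state_root, ["tx1", "tx2"])
print(contract.submit(proof))    # True: state root advances with the proof
```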

A large number of zero-knowledge proof systems have emerged, especially over the past year. There are two main categories of proofs: (1) SNARKs; (2) STARKs, although the boundaries between them are becoming increasingly blurred every day.

We won't discuss the technical details of how ZK proof systems work now, but here is a good diagram illustrating how we obtain something akin to proofs that can be effectively verified from smart contracts.

Key Dimensions for Comparing Rollups

Speed

As we mentioned earlier, the goal of scaling is to provide a way to increase the speed at which the network processes transactions while reducing computational costs. Because Optimistic Rollups do not generate proofs for each transaction (there is no additional cost in the honest case), they are generally much faster than ZK Rollups.

Privacy

ZK proofs are inherently privacy-preserving because verifying a computation does not require access to its underlying inputs. Consider a concrete example: suppose I want to prove to you that I know the combination to a locked box. A naive way would be to share the combination with you and have you try to open the box; if it opens, it is clear that I know the combination. But suppose I need to prove that I know the combination without revealing any information about the combination itself. Let's design a simple ZK-style protocol to demonstrate how this could work:

  • I ask you to write a sentence on a piece of paper.
  • I hand you the box and have you slide the paper through a small slit in it.
  • I turn my back to you, enter the combination, and open the box.
  • I take out the slip of paper and hand it back to you.
  • You confirm that the slip of paper is yours!

That's it! A simple zero-knowledge proof. Once you confirm that the slip of paper is indeed the one you put into the box, I have proven that I can open the box—and therefore that I know the combination—without revealing anything about the combination itself.
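The slip-of-paper protocol can be simulated as a short program. This is an analogy made executable, not real cryptography—the `LockBox` class and its methods are invented for illustration.

```python
# Toy simulation of the slip-of-paper protocol (an analogy, not a
# real cryptographic zero-knowledge proof).
import secrets

class LockBox:
    def __init__(self, combination: str):
        self._combination = combination
        self._contents = []

    def insert_through_slit(self, note: str):
        self._contents.append(note)          # anyone can push paper in

    def open(self, attempt: str):
        if attempt != self._combination:     # only the right combination opens it
            raise PermissionError("wrong combination")
        contents, self._contents = self._contents, []
        return contents

# The prover knows the combination; the verifier does not.
combination = "31-7-24"
box = LockBox(combination)

verifier_note = secrets.token_hex(8)         # an unpredictable sentence
box.insert_through_slit(verifier_note)       # verifier's move

returned = box.open(combination)             # prover's move (back turned)
assert verifier_note in returned             # verifier is convinced
print("verifier convinced; combination never revealed")
```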

In this way, zero-knowledge proofs are particularly good at allowing one party to prove the truth of a statement to another without revealing any information the other party does not already possess.

EVM Compatibility

The Ethereum Virtual Machine (EVM) defines a set of instructions or opcodes for implementing basic computing and blockchain-specific operations. Smart contracts on Ethereum are compiled into this bytecode. The bytecode is then executed as EVM opcodes. EVM compatibility means there is a 1:1 mapping between the instruction set of the virtual machine you are running and the EVM instruction set.

The largest Layer 2 solutions on the market today are built on Ethereum. When Ethereum-native projects want to migrate to Layer 2, EVM compatibility provides a seamless, minimal-code scaling path. Projects only need to redeploy their contracts on L2 and bridge their tokens from L1.

The largest Optimistic Rollups projects, Arbitrum and Optimism/Boba, are both EVM compatible. zkSync is one of the few ZK Rollups built with EVM compatibility in mind, but it still lacks support for some EVM opcodes, including ADDMOD, SMOD, MULMOD, EXP, and CREATE2. While the lack of support for CREATE2 does pose issues for counterfactual interactions with contracts, limiting upgradability and user onboarding, we believe that support for these opcodes will be implemented soon and will not become a significant barrier to using ZK rollups in the long run.
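For intuition on what a few of the opcodes named above actually do, here are their semantics sketched in Python, modelled on the EVM's 256-bit unsigned arithmetic. This is a sketch for understanding, not a compliant EVM implementation.

```python
# Sketch of the semantics of ADDMOD, MULMOD, and EXP from the EVM
# instruction set (256-bit arithmetic; not a compliant implementation).
U256 = 2 ** 256

def addmod(a: int, b: int, n: int) -> int:
    """ADDMOD: (a + b) % n computed at arbitrary precision
    (no 256-bit overflow of the intermediate sum); 0 when n == 0."""
    return (a + b) % n if n else 0

def mulmod(a: int, b: int, n: int) -> int:
    """MULMOD: (a * b) % n at arbitrary precision; 0 when n == 0."""
    return (a * b) % n if n else 0

def exp(a: int, b: int) -> int:
    """EXP: a ** b, truncated to 256 bits."""
    return pow(a, b, U256)

# The intermediate sum here exceeds 256 bits, yet ADDMOD stays exact:
print(addmod(U256 - 1, U256 - 1, 97))
```

The arbitrary-precision intermediate values are what make these opcodes awkward for ZK circuits, which is one reason they tend to lag in EVM-compatible ZK rollups.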

Bridging

Because L2s are independent chains, they do not automatically inherit native L1 tokens. Native L1 tokens on Ethereum must be bridged to the corresponding L2 to interact with dApps and services deployed there. The ability to bridge tokens seamlessly remains a key challenge, with different projects exploring various architectures. Typically, once a user calls depositL1, an equivalent token must be minted on the L2 side. Designing a highly generic architecture for this process can be particularly challenging, given the wide variety of tokens and token standards across protocols.
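The deposit-then-mint flow described above is often called "lock-and-mint," and can be sketched as follows. The names (`depositL1`, `mint_from_deposit`) are illustrative, not a real bridge's API, and a real bridge would relay the deposit event with a proof rather than a direct call.

```python
# Minimal lock-and-mint sketch of an L1 -> L2 token bridge
# (illustrative names; not a real bridge API).

class L1Bridge:
    def __init__(self):
        self.locked = {}                     # user -> amount locked on L1

    def depositL1(self, user: str, amount: int) -> dict:
        """Lock tokens on L1 and emit a deposit event. In practice the
        event is relayed/proved to the L2 side, not returned directly."""
        self.locked[user] = self.locked.get(user, 0) + amount
        return {"user": user, "amount": amount}

class L2Token:
    def __init__(self):
        self.balances = {}                   # user -> wrapped balance on L2

    def mint_from_deposit(self, event: dict):
        """Mint an equivalent wrapped token against the L1 deposit."""
        u, amt = event["user"], event["amount"]
        self.balances[u] = self.balances.get(u, 0) + amt

bridge, token = L1Bridge(), L2Token()
token.mint_from_deposit(bridge.depositL1("alice", 100))
print(token.balances["alice"])   # 100 wrapped tokens usable on L2
```

The invariant a real bridge must maintain is the one visible here: total minted on L2 never exceeds total locked on L1.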

Finality

Finality refers to the ability to confirm the validity of transactions on the chain. On Layer 1, a transaction submitted by a user is finalized relatively quickly (though it still takes time for nodes to process transactions from the mempool). On Layer 2, this is not necessarily the case. State updates submitted to a Layer 2 chain running an Optimistic Rollup protocol are initially assumed valid. However, if the validator submitting an update is malicious, there must be enough time for an honest party to challenge the claim. Typically, this challenge period is set to about 7 days. In practice, users wanting to withdraw funds from L2 may need to wait around 2 weeks!

On the other hand, ZK Rollups do not require such a long challenge period because each state update is verified using a proof system. Therefore, transactions on ZK Rollups protocols are as final as transactions on the underlying Layer 1. Unsurprisingly, the instant finality provided by ZK Rollups has become a key advantage in the race for L2 scaling superiority.

Some argue that while Optimistic Rollups do not necessarily guarantee rapid finality on L1, fast withdrawals provide a clear and user-friendly workaround by allowing users to access funds before the challenge period ends. While this does provide users with a way to access their liquidity, this approach has several issues:

  • Additional overhead for maintaining liquidity pools for L2 to L1 withdrawals.
  • Fast withdrawals are not universal—only supporting token withdrawals. Arbitrary L2 to L1 calls cannot be supported.
  • Liquidity providers cannot guarantee the validity of transactions before the challenge period ends.
  • Liquidity providers must: (1) trust those they provide liquidity to, limiting the benefits of decentralization; (2) build their own fraud/validity proofs, effectively undermining the purpose of leveraging the built-in fraud proofs/consensus protocols of the L2 chain.

Sequencing

A sequencer is like any other full node, except that it has arbitrary control over the ordering of transactions in the inbox queue. Without this ordering, other nodes/participants in the network cannot determine the outcome of a specific batch of transactions. In this sense, the sequencer provides users with a degree of certainty when executing transactions.
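The sequencer's role can be sketched in a few lines—the point being that whatever ordering policy it applies (here, an invented fee-priority rule for illustration) fully determines the batch outcome that other nodes later verify.

```python
# Sketch of a sequencer's ordering role (illustrative policy only).

class Sequencer:
    def __init__(self):
        self.inbox = []

    def receive(self, tx: dict):
        self.inbox.append(tx)

    def sequence(self) -> list:
        """The sequencer may order the inbox however it likes; here it
        sorts by offered fee, highest first. This choice alone fixes
        the outcome of the batch for everyone downstream."""
        batch = sorted(self.inbox, key=lambda tx: -tx["fee"])
        self.inbox = []
        return batch

seq = Sequencer()
seq.receive({"id": "a", "fee": 1})
seq.receive({"id": "b", "fee": 5})
print([tx["id"] for tx in seq.sequence()])   # ['b', 'a']
```

That this single component picks the canonical order is exactly why it is both a throughput win and a centralization concern.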

The main argument against using sequencers for this purpose is that they create a point of failure—if the sequencer fails, availability may be compromised. Wait a minute… what does this mean? Doesn't this undermine the vision of decentralization? Well… sort of. Sequencers are typically run by the projects developing the L2 and are often viewed as semi-trusted entities acting on behalf of project stakeholders. For the decentralization hardliners who cringe at this notion, you may find comfort in knowing that significant work and research on decentralized fair ordering is underway (see here and here).

Recent sequencer outages in large L2 ecosystems (including Arbitrum and Optimism) continue to demonstrate the demand for fault-tolerant, decentralized sequencing.

Capital Efficiency

Another key point of comparison between Optimistic Rollups and ZK Rollups is their capital efficiency. As mentioned earlier, Optimistic L2 relies on fraud proofs to secure the chain, while ZK Rollups leverage validity proofs.

The security provided by fraud proofs is based on a simple game-theoretic principle: the cost for an attacker to fork the chain should exceed the value they can extract from the network. In the case of Optimistic Rollups, validators stake a certain amount of tokens (e.g., ETH) on Rollup blocks they believe to be valid as the chain progresses. Malicious actors (those found guilty and reported by honest nodes) will be penalized.
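The game-theoretic condition above can be written as a back-of-the-envelope calculation. The function and numbers below are illustrative assumptions, not parameters of any real protocol.

```python
# Back-of-the-envelope model of fraud-proof security (illustrative):
# an attack is unprofitable when the expected slashing loss exceeds
# the expected extractable value.

def attack_profitable(extractable_value: float, stake: float,
                      detection_prob: float) -> bool:
    """Expected attacker profit: value gained if undetected minus
    stake slashed if detected."""
    expected_gain = (1 - detection_prob) * extractable_value
    expected_loss = detection_prob * stake
    return expected_gain > expected_loss

# With even one honest watcher, detection_prob ~ 1.0, so any nonzero
# stake deters the attack:
print(attack_profitable(1_000_000, 10_000, detection_prob=1.0))   # False
```

This is why the honest-party guarantee matters: detection probability, not stake size alone, carries most of the security.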

Thus, there is a fundamental trade-off between capital efficiency and security: improving capital efficiency means shortening the delay/challenge period, which in turn increases the likelihood that a fraudulent assertion goes undetected or unchallenged by other validators in the network.

Adjusting the delay period therefore amounts to moving along a curve of capital efficiency versus security. As the delay period changes, users should weigh its impact on this security/finality trade-off rather than treating the parameter as inconsequential.

Currently, the 7-day delay periods for projects like Arbitrum and Optimism are determined by the community considering these aspects. Here is an in-depth explanation by Ed Felten from Offchain Labs on how they determine the optimal length of the delay period.

By construction, validity proofs rely on cryptographic rather than game-theoretic assumptions, and so are far less susceptible to this capital-efficiency/security trade-off.

Specific Application Chains/Scaling

When we talk about a multi-chain future, what exactly do we mean? Will there be a multitude of high-performance Layer 1s with different architectures, more Layer 2 scaling solutions, or only a few Layer 3 chains customized for specific use cases?

We believe that demand for blockchain-based services will fundamentally be driven by users' needs for specific types of applications, whether NFT minting or DeFi protocols for lending, staking, and so on. In the long run, like any technology, we expect users to want the underlying primitives abstracted away (in this case, the L1s and L2s providing core infrastructure for settlement, scalability, and security).

Application-specific chains provide a mechanism to deploy high-performance services by leveraging narrow optimizations. Therefore, we expect these types of chains to become a key component of the Web3 infrastructure aimed at driving mass adoption.

The emergence of these chains can occur in two main ways:

  • Independent ecosystems with their own primitives focused on very specific applications.
  • Additional layers built on existing L1 and L2 chains, but fine-tuned to optimize performance for specific use cases.

In the short to medium term, these independent chains may see significant growth, but we believe this is a function of their short-term novelty rather than a signal of sustainable interest and usage. Even now, more mature application-specific chains like Celo seem relatively scarce. While these independent application-specific chain ecosystems provide excellent performance for specific use cases, they often lack the characteristics that make general-purpose ecosystems so powerful:

  • Flexibility and ease of use
  • High composability
  • Liquidity aggregation and access to native assets

Next-generation scaling infrastructure must strike a balance between these two approaches.

Fractal Scaling Approach

The fractal scaling approach is highly related to this "layered model" of blockchain scaling. It provides a unique way to unify otherwise isolated, different application-specific chain ecosystems with the broader community, helping to maintain composability, achieve access to general logic, and gain security guarantees from the underlying L1 and L2.

How does it work?

  • Transactions are split among local instances based on the scenarios they intend to serve.
  • Leverages the security, scalability, and privacy properties of the underlying L1/L2 layer while optimizing for unique custom needs.
  • Utilizes new architectures based on proof-of-proofs and recursive proofs (for storage and computation).
  • Any message is accompanied by a proof of the validity of that message and the history leading to that message.

Here is a great article from Starkware discussing the architecture of fractal scaling.

Closing Thoughts

Blockchain scaling has become more prominent over the past few years, and for good reason—the cost of validating computations on highly decentralized chains like Ethereum has become unfeasible. As blockchain adoption grows, the computational complexity of on-chain transactions is also rapidly increasing, further raising the costs of securing the chain. Optimizing existing Layer 1s and architectures like dynamic sharding may be very valuable, but the sharp increase in demand necessitates a more nuanced approach to developing secure, scalable, and sustainable decentralized systems.

We believe in this chain-layer approach based on building optimized chains for specific behaviors, including general-purpose computation for specific applications and logic that supports privacy. Therefore, we view Rollups and other Layer 2 technologies as core to scaling throughput by enabling off-chain computation/storage and rapid verification.

If you have any questions, comments, or thoughts, please reach out to @Rahul Maganti!
