Limitations of Blockchain Scalability


Author: Vitalik Buterin

Original Title: The Limits to Blockchain Scalability

Publication Date: May 23, 2021

How far can we push the scalability of a blockchain? Can we really achieve what Elon Musk has called for, a tenfold speedup in block time, a tenfold increase in block size, and a hundredfold reduction in fees, without leading to extreme centralization and violating the essential properties of a blockchain? If not, how far can we go? What happens if we change the consensus algorithm? More importantly, what happens if we introduce features like ZK-SNARKs or sharding? A sharded blockchain can theoretically keep adding shards; can we really do that?

It turns out that there are important yet quite subtle technical factors limiting blockchain scalability, with or without sharding. Some of them can be worked around, but even the workarounds have limits. This article explores many of these issues.

[Image: If we simply raise the parameters, the problem seems to be solved. But what price do we pay for that?]

The Ability for Ordinary Users to Run Nodes is Crucial for Blockchain Decentralization

Imagine it is a little past 2 AM and you receive an urgent call from your partner on the other side of the world who helps run your mining pool (or staking pool). About 14 minutes ago, your pool and a few others split off from the chain, which still carries 79% of the network's hash power. According to your node, the majority chain's blocks are invalid: a balance error has appeared, with a block seemingly assigning 4.5 million extra coins to an unknown address.

An hour later, you and two other small pool participants who also encountered unexpected issues, along with some block explorers and exchanges, are in a chat room when someone posts a link to a tweet that starts with "Announcing a new on-chain sustainable protocol development fund."

By morning, the discussion is all over Twitter and an unmoderated community forum. But by then, a significant portion of those 4.5 million coins has already been converted into other assets on-chain and passed through billions of dollars of DeFi transactions. 79% of the consensus nodes, along with all the major blockchain explorers and light-wallet endpoints, are following the new chain. Perhaps the new developer fund will finance some development, or perhaps it will all be swallowed up by the leading mining pools, exchanges, and their affiliates. But whatever the outcome, the fund has effectively become a fait accompli, and ordinary users have no way to resist.

[Image: Perhaps there could be a movie about this. Maybe it would be funded by MolochDAO or another organization.]

Could this scenario happen in your blockchain? The elites of your blockchain community, including mining pools, block explorers, and hosted node providers, are probably well coordinated; most likely they are all in the same Telegram channels and WeChat groups. If they really wanted to suddenly change the protocol rules for their own benefit, they probably could. The Ethereum blockchain has fully resolved a consensus failure within ten hours; a blockchain with a single client implementation, needing to deploy a code change to only a few dozen nodes, could coordinate a client change even faster. The only reliable way to resist this kind of coordinated social attack is "passive defense," and that strength comes from a decentralized group: the users.

Imagine if users run validating nodes for the blockchain (whether directly validating or through other indirect technologies) and automatically reject blocks that violate protocol rules, even if over 90% of miners or stakers support those blocks; how would the story unfold?

If every user runs a validating node, the attack quickly fails: a few mining pools and exchanges fork off and look foolish in the process. But even if only some users run validating nodes, the attackers cannot achieve a clean victory. Instead, the attack leads to chaos, with different users seeing different versions of the chain. At the very least, the ensuing market panic and likely persistent chain split would greatly reduce the attackers' profits. The mere prospect of such a protracted conflict is enough to deter most attacks.


Hasu's view on this:

"Let’s be clear: we can resist malicious protocol changes because we have a culture of users validating the blockchain, not because of PoW or PoS."

If your community consists of 37 node operators and 80,000 passive listeners who only check signatures and block headers, the attacker wins. If everyone runs a node, the attacker fails. We do not know the exact threshold at which herd immunity against coordinated attacks kicks in, but one thing is absolutely clear: more nodes are good, fewer nodes are bad, and we certainly need more than a few hundred or a few thousand.

So What is the Upper Limit for Full Node Operations?

To allow as many users as possible to run a full node, we will focus on ordinary consumer-grade hardware. Some of the barriers can be lowered by requiring specialized hardware that is easy to obtain, but the resulting gains in scalability are smaller than one might imagine.

The ability of full nodes to handle a large number of transactions is primarily limited by three factors:

  • Computational Power: How much CPU can we allocate to run nodes while ensuring security?
  • Bandwidth: Based on current network connections, how many bytes can a block contain?
  • Storage: How much space can we reasonably require users to use for storage? Additionally, what should the read speed be? (i.e., is HDD sufficient, or do we need SSD?)

Many misconceptions about significantly scaling blockchain using "simple" technologies stem from overly optimistic estimates of these numbers. We can discuss these three factors in turn:

Computational Power

  • Incorrect Answer: 100% of the CPU can be used for block validation
  • Correct Answer: About 5-10% of the CPU can be used for block validation

There are four main reasons why this limit is so low:

  • We need a security margin to cover the possibility of DoS attacks (transactions created by attackers exploiting code weaknesses require longer processing times than regular transactions).
  • Nodes need to be able to sync with the blockchain after going offline. If I go offline for a minute, I should be able to sync within a few seconds.
  • Running nodes should not quickly drain batteries or slow down the operation of other applications.
  • Nodes also have other non-block-production tasks to perform, mostly verifying and responding to incoming transactions and requests on the P2P network.

Note that until recently, most explanations for "why only 5-10%?" focused on a different problem: because PoW blocks arrive at unpredictable times, long block validation times increase the risk of several blocks being created at the same time. There are many fixes for this problem, such as Bitcoin NG or simply using proof of stake (PoS). However, these fixes do not address the other four issues, so they have not delivered the large scalability gains many expected.

Parallelism is also not a panacea. Typically, even clients of seemingly single-threaded blockchains have already been parallelized: signatures can be validated by one thread, while execution is done by other threads, and there is a separate thread handling transaction pool logic in the background. Moreover, the closer the utilization of all threads is to 100%, the more energy consumption running nodes incurs, and the lower the security margin against DoS attacks.

Bandwidth

  • Incorrect Answer: If a 10 MB block is produced every 2-3 seconds, then most users' networks are greater than 10 MB/second, so they can certainly handle these blocks.
  • Correct Answer: Perhaps we can handle 1-5 MB blocks every 12 seconds, but that is still difficult.

Today, we often hear widely circulated statistics about how much bandwidth internet connections can provide: 100 Mbps or even 1 Gbps figures are common. However, there are significant discrepancies between claimed bandwidth and expected actual bandwidth for several reasons:

  1. "Mbps" refers to "millions of bits per second"; a bit is 1/8 of a byte, so we need to divide the claimed number of bits by 8 to get the number of bytes.
  2. Internet service providers, like many other companies, often overstate the bandwidth they actually deliver.
  3. There are always multiple applications using the same network connection, so nodes cannot monopolize the entire bandwidth.
  4. P2P networks inevitably introduce overhead: nodes often end up downloading and re-uploading the same block multiple times (not to mention that transactions must be broadcast through the mempool before being packed into blocks).

When Starkware ran an experiment in 2019, publishing 500 kB blocks for the first time after the reduction in transaction data gas costs, a few nodes were actually unable to handle blocks of that size. The ability to handle large blocks has improved and will continue to improve. However, no matter what we do, we cannot simply take the average bandwidth in MB/sec, convince ourselves that a 1-second latency is acceptable, and expect to handle blocks of that size.
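To make the gap between advertised and usable bandwidth concrete, here is a rough back-of-the-envelope sketch in Python. All of the discount factors are illustrative assumptions, not measured values; the point is only that they multiply together.

```python
# Rough estimate of how much block data a home connection can realistically
# sustain through a p2p network. All factors below are illustrative assumptions.

ADVERTISED_MBPS = 100                  # megabits per second, as advertised by the ISP
raw_mb_per_sec = ADVERTISED_MBPS / 8   # convert megabits to megabytes: 12.5 MB/sec

NODE_SHARE_OF_LINK = 0.25   # the node should not monopolize the connection
REAL_VS_ADVERTISED = 0.5    # actual sustained throughput vs the advertised figure
P2P_OVERHEAD = 4            # each byte is downloaded/re-uploaded several times

effective_mb_per_sec = raw_mb_per_sec * NODE_SHARE_OF_LINK * REAL_VS_ADVERTISED / P2P_OVERHEAD

SLOT_SECONDS = 12           # Ethereum-style block interval
usable_block_mb = effective_mb_per_sec * SLOT_SECONDS
print(f"usable block size ≈ {usable_block_mb:.1f} MB per {SLOT_SECONDS}-second slot")
# With these assumptions the result is a few MB per slot, in line with the
# "1-5 MB every 12 seconds" figure above, despite the "100 Mbps" headline number.
```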

Storage

  • Incorrect Answer: 10 TB
  • Correct Answer: 512 GB

As you might guess, the main argument here is the same as elsewhere: the difference between theory and practice. In theory, we can buy an 8 TB solid-state drive on Amazon (it does need to be an SSD or NVMe drive; HDDs are too slow for storing blockchain state). In practice, the laptop I am using to write this blog post has 512 GB, and if you ask people to buy their own hardware, many will get lazy (or they cannot afford an $800 8 TB SSD) and use a centralized service instead. And even if the blockchain fits on some storage device, a high level of activity can quickly fill up the disk and force you to buy a new one.

[Image: A group of blockchain protocol researchers surveyed about their disk space. I know the sample size is small, but still…]

Moreover, the size of storage determines how long it takes a new node to come online and start participating in the network. Any data that existing nodes must store is data that a new node must download. This initial sync time (and bandwidth) is also a major barrier to users running nodes. As I write this blog post, syncing a new geth node took me about 15 hours. If Ethereum's usage grew tenfold, syncing a new geth node would take at least a week, and more likely it would simply get the node's internet connection throttled. This matters even more during an attack, when a successful response likely requires many users to spin up new nodes they were not previously running.
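As a rough illustration of why sync time scales with usage, here is a small sketch. The data size, sustained bandwidth, and processing overhead are assumptions chosen only to land near the figures mentioned above; they are not measurements of geth.

```python
# Initial sync time ≈ (data a new node must download) / (effective bandwidth),
# further inflated by verification and state-building work. Illustrative numbers only.

chain_data_gb = 320           # assumed data a fresh node must download today
effective_mb_per_sec = 12     # assumed sustained download rate after p2p overhead
processing_overhead = 2.0     # assume verification roughly doubles wall-clock time

def sync_time_hours(data_gb: float) -> float:
    seconds = data_gb * 1024 / effective_mb_per_sec * processing_overhead
    return seconds / 3600

print(f"today: ~{sync_time_hours(chain_data_gb):.0f} hours")
print(f"at 10x usage: ~{sync_time_hours(chain_data_gb * 10) / 24:.1f} days")
# In practice, at 10x the connection would likely also get throttled,
# pushing the real figure well past this linear estimate.
```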

Interaction Effects

Additionally, there are interaction effects between these three categories of costs. Because databases use tree structures internally to store and retrieve data, the cost of fetching data from a database grows with the logarithm of the database's size. In fact, because the top level (or top few levels) of the tree can be cached in RAM, the disk-access cost is proportional to the logarithm of the database size expressed as a multiple of the size of the data cached in RAM.

[Image: Do not take this diagram too literally; different databases work in different ways, and the portion in memory is often just a single (but large) layer (see the LSM trees used in leveldb). But the basic principle is the same.]

For example, if the cache is 4 GB and we assume that each layer of the database is 4 times larger than the previous layer, then Ethereum's current ~64 GB state would require ~2 accesses. However, if the state size increases 4 times to ~256 GB, this would increase to ~3 accesses. Thus, increasing the gas limit by 4 times can actually translate to an increase in block validation time of about 6 times. This effect could be even greater: disks take longer to read and write when full than when empty.
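The arithmetic above can be spelled out. Here is a minimal sketch, using the same assumptions as the example (a 4 GB RAM cache and each database layer 4x larger than the previous one):

```python
import math

CACHE_GB = 4       # database layers assumed to fit in RAM
BRANCHING = 4      # assumed size ratio between successive database layers

def disk_accesses(state_gb: float) -> float:
    """Approximate disk reads per lookup: log_BRANCHING(state size / cached size)."""
    return math.log(state_gb / CACHE_GB, BRANCHING)

def relative_verification_cost(gas_multiplier: float, base_state_gb: float = 64) -> float:
    """Relative block verification cost if the gas limit (and state size) scale together."""
    base = disk_accesses(base_state_gb)                    # ~2 accesses at 64 GB
    grown = disk_accesses(base_state_gb * gas_multiplier)  # ~3 accesses at 256 GB
    return gas_multiplier * grown / base

print(disk_accesses(64))                 # ~2.0
print(disk_accesses(256))                # ~3.0
print(relative_verification_cost(4))     # ~6.0: a 4x gas limit -> roughly 6x verification time
```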

What Does This Mean for Ethereum?

Currently, running a node on the Ethereum blockchain is already a challenge for many users, although it is still possible on ordinary hardware (I just synced a node on my laptop while writing this article!). So we are close to hitting a bottleneck. The issue core developers are most concerned about is storage size. Hence, the valiant efforts currently going into addressing computation and data bottlenecks, and even changes to the consensus algorithm, are unlikely to result in large gas limit increases being accepted. Even resolving Ethereum's biggest outstanding DoS vulnerability would only allow the gas limit to rise by about 20%.

For the problem of storage size, the only solutions are statelessness and state expiry. Statelessness allows a class of nodes to validate without maintaining permanent storage. State expiry deactivates state that has not been accessed recently, requiring users to manually provide proofs to revive it. Both of these paths have been researched for a long time, and a proof-of-concept implementation of statelessness has already begun. Together, these two improvements can greatly ease these concerns and open up room for a significant gas limit increase. But even after statelessness and state expiry are implemented, the gas limit may only safely increase by about 3x before other limitations kick in.

Another possible mid-term solution is to use ZK-SNARKs to validate transactions. ZK-SNARKs would ensure that ordinary users do not have to personally store the state or validate blocks, although they would still need to download all of the data in blocks to guard against data-unavailability attacks. Additionally, even if attackers cannot force invalid blocks through, if running a consensus node is too difficult, there is still a risk of coordinated censorship attacks. Hence, ZK-SNARKs cannot increase node capacity infinitely, but they can still increase it substantially (perhaps by 1-2 orders of magnitude). Some blockchains are exploring this approach at layer 1, while Ethereum benefits from it through layer 2 protocols (ZK rollups) such as zksync, Loopring, and Starknet.

What About Sharding?

Sharding fundamentally addresses the above limitations by decoupling the data contained on the blockchain from the data that a single node needs to process and store. Nodes validate blocks not by personally downloading and executing them, but by using advanced mathematics and cryptographic techniques to indirectly validate blocks.

As a result, sharded blockchains can safely achieve very high levels of throughput that non-sharded blockchains cannot. This does require a significant amount of cryptographic technology to effectively replace naive full validation to reject invalid blocks, but it is achievable: the theory is already in place, and proof-of-concept implementations based on draft specifications are underway.


Ethereum plans to adopt quadratic sharding, where total scalability is limited by the fact that a node must be able to handle both a single shard and the beacon chain, while the beacon chain must perform some fixed amount of management work per shard. If shards are too large, nodes can no longer handle a single shard; if there are too many shards, nodes can no longer handle the beacon chain. The product of these two constraints gives the upper bound.
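A minimal sketch of why this arrangement is called "quadratic": if a single node can do roughly C units of work, a shard can be at most ~C in size and the beacon chain can manage at most ~C shards, so total capacity is bounded by roughly C². The constants below are arbitrary placeholders; only the structure matters.

```python
# Quadratic sharding, schematically. Constants are placeholders, not real parameters.

NODE_CAPACITY = 100            # units of work a single node can do per slot (assumed)
BEACON_COST_PER_SHARD = 1      # fixed beacon-chain bookkeeping per shard (assumed)

max_shard_size = NODE_CAPACITY                              # a node must validate one full shard
max_shard_count = NODE_CAPACITY // BEACON_COST_PER_SHARD    # a node must also follow the beacon chain

total_capacity = max_shard_size * max_shard_count           # ~ NODE_CAPACITY ** 2
print(total_capacity)   # 10000: doubling what one node can handle quadruples total capacity
```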

One can imagine going further with cubic sharding, or even exponential sharding. Data availability sampling would certainly become much more complex in such a design, but it is achievable. However, Ethereum is not going beyond quadratic sharding, because the extra scalability gained by going from sharding of transactions to sharding of sharding of transactions cannot be realized without pushing other risks to unacceptable levels.

So what are these risks?

Minimum User Count

One can imagine a non-sharded blockchain continuing to run as long as even a single user cares to participate. Sharded blockchains are not like that: no single node can process the whole chain, so enough nodes are needed to process it collectively. If each node can handle 50 TPS and the chain handles 10,000 TPS, then the chain needs at least 200 nodes to survive. If at any time the chain has fewer than 200 nodes, nodes may fall out of sync, or stop detecting invalid blocks, or many other bad things may happen, depending on how the node software is configured.

In practice, due to the need for redundancy (including data availability sampling), the safe minimum number is several times higher than simply "chain TPS divided by node TPS." For the example above, we set it at 1,000 nodes.
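The arithmetic behind this minimum, as a small sketch; the redundancy factor is an assumption chosen to match the 1,000-node figure above:

```python
import math

CHAIN_TPS = 10_000     # total throughput the sharded chain targets
NODE_TPS = 50          # throughput a single node can validate
REDUNDANCY = 5         # assumed safety factor for data availability sampling, churn, etc.

bare_minimum = math.ceil(CHAIN_TPS / NODE_TPS)   # 200 nodes just to cover the load
safe_minimum = bare_minimum * REDUNDANCY         # 1,000 nodes once redundancy is included
print(bare_minimum, safe_minimum)
```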

If the capacity of a sharded blockchain increases tenfold, the minimum user count also increases tenfold. Now one might ask: why not start with a lower capacity, raise it only when we see many users and actually need it, and lower it again if usage falls back?

There are several issues here:

  1. The blockchain itself cannot reliably detect how many unique users it has, so some governance is needed to detect and set the number of shards. Governance over capacity limits can easily become a source of division and conflict.
  2. What if many users suddenly go offline at the same time?
  3. Increasing the minimum user count required to launch a fork makes defending against malicious control more difficult.

A minimum user count of 1,000 is arguably fine. A minimum user count of 1 million, on the other hand, is certainly not. Even a minimum user count of 10,000 is arguably starting to get risky. It therefore seems difficult to justify a sharded blockchain with more than a few hundred shards.

Historical Retrievability

A crucial property of blockchains that users truly value is permanence. A digital asset stored on a company's server stops existing within ten years once the company goes bankrupt or loses interest in maintaining the ecosystem. An NFT on Ethereum, by contrast, is permanent.

[Image: Yes, by 2372, people will still be able to download and view your CryptoKitties.]

However, once the capacity of the blockchain becomes too high, storing all this data will become more difficult, until at some point there is a significant risk that some historical data will ultimately… not be stored by anyone.

Quantifying this risk is easy. Take the blockchain's data capacity in MB/sec and multiply by ~30 to get the amount of data stored per year in TB. The current sharding plan has a data capacity of about 1.3 MB/sec, which works out to about 40 TB/year. If that increases tenfold, it becomes 400 TB/year. If we want the data to be not just accessible but conveniently accessible, we also need metadata (such as decompressed aggregated transactions), bringing it to about 4 PB per year, or 40 PB after ten years. The Internet Archive uses 50 PB. So this is a reasonable upper bound on how large a sharded blockchain can safely get.
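The storage arithmetic above, spelled out; the ~10x factor for convenient access (metadata, decompressed aggregated transactions) is taken from the estimate in the text, and everything else is straight multiplication:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600   # ~31.5 million, hence the "multiply by ~30" rule of thumb

def tb_per_year(mb_per_sec: float) -> float:
    return mb_per_sec * SECONDS_PER_YEAR / 1e6   # MB/sec of capacity -> TB stored per year

print(tb_per_year(1.3))    # ~41 TB/year at the current plan's ~1.3 MB/sec
print(tb_per_year(13))     # ~410 TB/year at 10x capacity

CONVENIENCE_FACTOR = 10    # metadata, decompressed aggregated transactions (from the text)
pb_per_year = tb_per_year(13) * CONVENIENCE_FACTOR / 1000
print(pb_per_year, pb_per_year * 10)   # ~4 PB/year, ~40 PB over ten years (Internet Archive: ~50 PB)
```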

Thus, it appears that in these two dimensions, Ethereum's sharding design is actually very close to a reasonable maximum safe value. The constants can increase a bit, but not too much.

Conclusion

There are two approaches to scaling blockchain: fundamental technical improvements and simply raising parameters. First, raising parameters sounds very appealing: if you are doing math on a napkin, it is easy to convince yourself that consumer-grade laptops can handle thousands of transactions per second without needing ZK-SNARKs, rollups, or sharding. Unfortunately, there are many subtle reasons explaining why this approach is fundamentally flawed.

A computer running a blockchain node cannot devote 100% of its CPU to validating the chain; it needs a large safety margin against unexpected DoS attacks, it needs spare capacity for tasks like processing transactions in the mempool, and users do not want a node to make their computer unusable for anything else. Bandwidth is similarly limited: a 10 MB/s connection does not mean the node can handle a 10 MB block every second! Perhaps it can only handle 1-5 MB blocks every 12 seconds. The same goes for storage. Raising the hardware requirements for running a node and limiting node operation to a few dedicated operators is not a solution. For a blockchain to be decentralized, it is crucial that ordinary users can run nodes and that there is a culture in which running nodes is a common activity.

However, fundamental technical improvements are feasible. Currently, Ethereum's main bottleneck is storage size; statelessness and state expiry can address this and allow perhaps a ~3x increase, but not more, since we also want running a node to become easier than it is today. A sharded blockchain can scale much further, because no single node in a sharded blockchain needs to process every transaction. But even sharded blockchains have capacity limits: as capacity rises, the minimum number of users needed for safety rises, and the cost of archiving the chain (and the risk that data is lost if no one archives it) rises too. But we need not worry too much: these limits still leave enough room to process over a million transactions per second with full blockchain security. It will, however, take a lot of work to do this without sacrificing the decentralization that makes blockchains so valuable.
