EigenLayer: Introducing Ethereum-Level Trust into Middleware

IOSG Ventures
2022-11-15 11:54:46

Author: Jiawei, IOSG Ventures

Introduction

In the current Ethereum ecosystem, there are many middleware solutions.

On the left side is the perspective of the application layer. Some dApps rely on middleware to operate: for example, DeFi derivatives depend on oracles for price feeds; cross-chain asset transfers rely on cross-chain bridges as third-party relays.

On the right side is the modular perspective. For instance, Rollup sequencing requires building a Sequencer network; for off-chain data availability, we have DACs, or DA-purpose Layer 1s such as Polygon Avail and Celestia.

These various middleware solutions exist independently of Ethereum itself, operating validator networks: that is, investing some tokens and hardware resources to provide services for the middleware.

Our trust in middleware comes from Economic Security: honest work is rewarded, while malicious behavior leads to slashing of staked tokens. The degree of this trust derives from the value of the staked assets.

If we picture all the protocols/middleware in the Ethereum ecosystem that rely on Economic Security as a cake, it looks like this: the cake is divided into slices according to the size of each staking network.

However, current Economic Security still has some issues:

  • For middleware. Validators of middleware need to invest funds to safeguard the network, which incurs certain marginal costs. Due to token value capture considerations, validators are often required to stake the native tokens of the middleware, leading to uncertainty in their risk exposure due to price fluctuations.

    Furthermore, the security of middleware depends on the overall value of the staked tokens; if the token crashes, the cost of attacking the network decreases, potentially triggering security incidents. This issue is particularly evident in protocols with relatively weak token market capitalizations.

  • For dApps. For example, some dApps do not need to rely on middleware (imagine a Pure Swap DEX) and only need to trust Ethereum; for some dApps that rely on middleware (such as derivatives that require oracle price feeds), their security actually depends on the trust assumptions of both Ethereum and the middleware.

    The trust assumptions of middleware essentially stem from trust in a distributed validator network. We have seen numerous incidents of asset losses due to incorrect price feeds from oracles.

This further leads to a "weakest-link" (barrel) effect:

  • Suppose there is a highly composable DeFi application A, with a related TVL reaching billions, while the trust in oracle B relies solely on hundreds of millions of staked assets. Once a problem arises, the risk transmission and nesting brought about by the inter-protocol connections may infinitely amplify the losses caused by the oracle;

  • Suppose there is a modular blockchain C that adopts data availability solution D, execution layer solution F, etc. If any part of it behaves improperly or is attacked, the impact will be on the entire chain C itself, even if other parts of the system are functioning correctly.

It is evident that system security depends on its weakest link, and seemingly trivial weaknesses may trigger systemic risks.

What Does EigenLayer Do?

The idea behind EigenLayer is not complicated:

Similar to shared security, it attempts to raise the Economic Security of middleware to a level equivalent to that of Ethereum.

This is achieved through "Restaking".

Restaking means re-staking the ETH exposure of the Ethereum validator network:

Originally, validators stake on the Ethereum network to earn rewards, and malicious actions lead to slashing of their staked assets. Similarly, after Restaking, they can earn staking rewards on the middleware network, but if they act maliciously, they will be slashed for their original ETH stake.

Concretely, Restaking works as follows: stakers set the withdrawal address of their Ethereum stake to the EigenLayer smart contract, thereby granting it slashing authority.
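
This mechanism can be sketched in a few lines of Python. It is an illustrative model only: the class and method names are hypothetical and do not reflect EigenLayer's actual contract interface.

```python
# Illustrative sketch of the restaking flow described above; all names
# are hypothetical, not EigenLayer's actual interface.

class RestakedValidator:
    def __init__(self, staked_eth: float):
        self.staked_eth = staked_eth      # original Ethereum stake (e.g. 32 ETH)
        self.services = set()             # middleware opted into via restaking

    def opt_in(self, middleware: str):
        """Point the withdrawal credentials at the EigenLayer contract,
        granting it slashing authority for this middleware."""
        self.services.add(middleware)

    def slash(self, fraction: float) -> float:
        """Misbehavior on ANY opted-in service burns part of the same
        ETH stake that secures Ethereum itself."""
        penalty = self.staked_eth * fraction
        self.staked_eth -= penalty
        return penalty

v = RestakedValidator(32.0)
v.opt_in("EigenDA")
burned = v.slash(0.5)    # a 50% slashing event
print(v.staked_eth)      # 16.0
```

The point of the sketch is that opting into a new middleware adds slashing conditions without adding new collateral: the same 32 ETH backs every service the validator serves.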

In addition to directly Restaking $ETH, EigenLayer offers two further options to expand its Total Addressable Market: staking WETH/USDC and stETH/USDC LP tokens.

Moreover, to maintain the value capture of the middleware's native tokens, middleware can choose to retain their native token staking requirements while introducing EigenLayer, meaning that Economic Security comes from both their native tokens and Ethereum, thus avoiding the "death spiral" caused by a single token's price crash.

Feasibility

Overall, participating in EigenLayer's Restaking imposes both capital and hardware requirements on validators.

The capital requirement for Ethereum validation is 32 ETH, which remains unchanged under Restaking; however, opting into new middleware adds potential risk exposure, such as Inactivity penalties and Slashing.

As for hardware, in order to lower the participation threshold for validators and achieve sufficient decentralization, the hardware requirements for Ethereum validators after the Merge are quite low: a slightly better home computer can meet the recommended configuration, leaving some hardware capacity to spare. Analogous to miners mining multiple coins when computational resources allow, Restaking, from a hardware perspective, uses this spare capacity to support multiple middleware solutions.

Does this sound similar to Cosmos's Interchain Security? Is that all? In fact, the impact of EigenLayer on the post-merge Ethereum ecosystem may go beyond this. In this article, we will further elaborate on EigenDA.

EigenDA

Note: This section briefly introduces data availability (DA), erasure codes, and KZG commitments. The data availability layer is the part of the modular stack that provides data availability for Rollups. Erasure codes and KZG commitments are components of data availability sampling (DAS): erasure coding makes it possible to verify the availability of all data by randomly downloading only a portion of it, and to reconstruct the full data when necessary, while KZG commitments ensure that the erasure coding was performed correctly. To stay on topic, this section omits some details, terminology, and background; for context, please refer to IOSG's previous articles "The Merge is Coming: A Detailed Explanation of Ethereum's Latest Technical Roadmap" and "Dissecting the Data Availability Layer: The Overlooked Lego Blocks in the Modular Future."
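
The sampling guarantee behind DAS can be quantified with a short calculation. Assuming 2x erasure-coding redundancy, an adversary must withhold at least half of the extended data to make the blob unrecoverable, so each independent random sample has at least a 1/2 chance of hitting a withheld chunk:

```python
# Probability that k independent random samples all miss withheld data,
# assuming 2x erasure-coding redundancy (the adversary must withhold at
# least 50% of chunks to make the blob unrecoverable).

def miss_probability(k: int, withheld_fraction: float = 0.5) -> float:
    return (1 - withheld_fraction) ** k

for k in (10, 20, 30):
    print(k, miss_probability(k))
# After 30 samples the chance of being fooled is below one in a billion.
```

This is why light clients can gain high confidence in data availability while downloading only a tiny fraction of the blob.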

As a simple recap, we can categorize the current DA solutions into on-chain and off-chain parts.

The on-chain part, Pure Rollup, places DA entirely on-chain, which requires a constant 16 gas per (non-zero) byte of calldata and accounts for 80%-95% of Rollup costs. Once Danksharding is introduced, the cost of on-chain DA will drop significantly.
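
A back-of-envelope calculation shows why calldata dominates Rollup costs. The gas price used below is an illustrative assumption, not a quoted figure:

```python
# Back-of-envelope cost of posting rollup data as calldata.
# The gas price is an illustrative assumption.

CALLDATA_GAS_PER_BYTE = 16  # post-EIP-2028 cost per non-zero calldata byte

def calldata_cost_eth(n_bytes: int, gas_price_gwei: float) -> float:
    gas = n_bytes * CALLDATA_GAS_PER_BYTE
    return gas * gas_price_gwei * 1e-9   # gwei -> ETH

# Posting 100 KB of transaction data at an assumed 20 gwei:
cost = calldata_cost_eth(100_000, 20)
print(cost)  # ~0.032 ETH per 100 KB, paid on every batch
```

At that rate a Rollup posting data continuously spends ETH on calldata around the clock, which is the cost the off-chain DA solutions below try to eliminate.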

In off-chain DA, the solutions form a spectrum of trade-offs between security and overhead.

Pure Validium places DA entirely off-chain with no guarantees: the off-chain data custodians may go offline at any time. Rollup-specific solutions include StarkEx, zkPorter, and Arbitrum Nova, which rely on a small committee of well-known third parties (a DAC) to ensure DA.

EigenDA is a generalized DA solution, belonging to the same category as Celestia and Polygon Avail. However, EigenDA has some differences in its approach compared to the other two.

For comparison, we will first ignore EigenDA and look at how Celestia's DA works.

Taking Celestia's Quantum Gravity Bridge as an example:

The L2 Contract on the Ethereum main chain verifies validity proofs or fraud proofs as usual, with the difference being that DA is provided by Celestia. There are no smart contracts on the Celestia chain, and it does not compute data; it only ensures data availability.

The L2 Operator publishes transaction data to the Celestia main chain, where Celestia's validators sign the Merkle Root of the DA Attestation and send it to the DA Bridge Contract on the Ethereum main chain for verification and storage.

This effectively uses the Merkle Root of the DA Attestation to prove all DA, and the DA Bridge Contract on the Ethereum main chain only needs to verify and store this Merkle Root. Compared to storing DA on-chain, this greatly reduces the overhead of ensuring DA while providing security guarantees from the Celestia chain itself.
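
The "one root attests to all the data" idea can be sketched with a plain binary Merkle tree (Celestia actually uses a namespaced Merkle tree; this sketch shows only the underlying principle):

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves: list) -> bytes:
    """Plain binary Merkle root over a list of byte strings.
    (Celestia uses a namespaced Merkle tree; same core idea.)"""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])          # duplicate last node if odd
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blob_chunks = [b"tx-batch-0", b"tx-batch-1", b"tx-batch-2"]
root = merkle_root(blob_chunks)
# Validators sign this 32-byte root; the DA Bridge Contract on Ethereum
# stores only the root, never the underlying data.
print(len(root))  # 32
```

However large the Data Blob grows, the bridge contract's verification and storage cost stays constant at one 32-byte root plus a signature check.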

What happens on the Celestia chain? First, Data Blobs are propagated through a P2P network and reach consensus on the Data Blobs based on Tendermint consensus. Each Celestia full node must download the entire Data Blob. (Note that we are only discussing full nodes here; Celestia's light nodes can use DAS to ensure data availability, which will not be elaborated on here.)

Since Celestia itself still acts as Layer 1, it needs to broadcast and reach consensus on Data Blobs, which imposes high requirements on the network's full nodes (128 MB/s download and 12.5 MB/s upload), while the achieved throughput may not be high (1.4 MB/s).

EigenDA adopts a different architecture: no consensus is needed, and no P2P network is required.

How is this achieved?

First, EigenDA nodes must restake their ETH exposure in the EigenLayer contract; EigenDA nodes are thus a subset of Ethereum stakers.

Second, the party demanding data availability (a Rollup, for example, referred to as the Disperser) takes the Data Blob, encodes it with erasure codes (the encoded size depends on the redundancy ratio), computes a KZG commitment over it, and publishes the commitment to the EigenDA smart contract.

Subsequently, the Disperser distributes the encoded data to EigenDA nodes. Upon receiving their shares, the nodes check them against the KZG commitment on the EigenDA smart contract and, once correctness is confirmed, sign an Attestation. The Disperser collects these signatures, generates an aggregate signature, and publishes it to the EigenDA smart contract, which verifies it.
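
The message flow above can be sketched as follows. This is a schematic only: the KZG commitment and BLS signatures are stubbed with plain hashes, and every node receives the full blob rather than a unique erasure-coded share.

```python
# Schematic of the disperser flow; cryptography is stubbed with SHA-256,
# so this illustrates the message flow only, not real KZG/BLS operations.
import hashlib

def commit(data: bytes) -> bytes:            # stand-in for a KZG commitment
    return hashlib.sha256(data).digest()

def sign(node_id: str, commitment: bytes) -> bytes:  # stand-in for a BLS signature
    return hashlib.sha256(node_id.encode() + commitment).digest()

def disperse(blob: bytes, nodes: list):
    commitment = commit(blob)                # [1] publish commitment on-chain
    shares = {n: blob for n in nodes}        # [2] real flow: unique erasure-coded share per node
    attestations = []
    for node, share in shares.items():       # [3] each node checks the commitment, then signs
        assert commit(share) == commitment
        attestations.append(sign(node, commitment))
    # stand-in for BLS signature aggregation:
    aggregated = hashlib.sha256(b"".join(attestations)).digest()
    return commitment, aggregated            # both land on the EigenDA contract

commitment, agg = disperse(b"rollup-batch", ["node-a", "node-b", "node-c"])
print(len(commitment), len(agg))  # 32 32
```

Note that no node talks to any other node: all communication is node-to-Disperser, and all verification happens in the contract on Ethereum.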

In this workflow, EigenDA nodes merely sign an Attestation claiming they have stored the encoded Data Blob, and the EigenDA smart contract only verifies the correctness of the aggregate signature. So how do we ensure that EigenDA nodes have actually stored the data?

EigenDA employs Proof of Custody. This addresses the problem of Lazy Validators who skip the work they are supposed to do (such as storing the data) yet sign the results as if they had done it (for example, attesting that data is available without ever storing it).

Proof of Custody works much like a fraud proof: if Lazy Validators appear, anyone can submit a proof to the EigenDA smart contract; if the proof verifies, the Lazy Validators are slashed. (For more details on Proof of Custody, refer to Dankrad's article; we will not elaborate further here.)
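
The challenge game can be illustrated with a toy model: an honest node can derive a custody response from the data it stored, while a lazy node cannot, and a failed response is grounds for slashing. The hash-based "response" here is purely illustrative, not the actual Proof of Custody construction.

```python
# Toy custody game: signing without storing fails any later challenge.
# The hash-based response is illustrative, not the real construction.
import hashlib

def custody_response(stored_share, challenge: bytes):
    """An honest node derives the response from the data it stored;
    a lazy node has nothing to derive it from."""
    if stored_share is None:
        return None
    return hashlib.sha256(stored_share + challenge).digest()

def gets_slashed(share: bytes, challenge: bytes, response) -> bool:
    """Contract-side check: a wrong or missing response means slashing."""
    expected = hashlib.sha256(share + challenge).digest()
    return response != expected

share, challenge = b"encoded-chunk-7", b"random-challenge"
honest = custody_response(share, challenge)
lazy = custody_response(None, challenge)
print(gets_slashed(share, challenge, honest))  # False
print(gets_slashed(share, challenge, lazy))    # True
```

The economic effect is that signing an Attestation without storing the data becomes a losing bet as soon as any challenger exists.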

Summary

From the above discussions and comparisons, we can see:

Celestia's approach is consistent with traditional Layer 1, essentially doing Everybody-talks-to-everybody (consensus) and Everybody-sends-everyone-else-everything (broadcast), with the distinction that Celestia's consensus and broadcast are specifically for Data Blobs, ensuring only data availability.

EigenDA, on the other hand, does Everybody-talks-to-disperser (i.e., step [3] Disperser obtains Attestation) and Disperser-sends-each-node-a-unique-share (i.e., step [2] Disperser distributes data to EigenDA nodes), decoupling data availability from consensus.

The reason EigenDA requires neither consensus nor a P2P network is that it effectively "hitches a ride" on Ethereum: the Disperser publishes Commitments and Aggregated Attestations to smart contracts deployed on Ethereum, and signature verification also happens on Ethereum, which provides the consensus guarantees. This avoids the throughput bottlenecks of consensus protocols and P2P networks.

This is reflected in the differences between node requirements and throughput.

In terms of security, Celestia uses Tendermint for consensus, so an attacker controlling 2/3 of Celestia's staked tokens could mount a majority attack. Meanwhile, Celestia employs fraud proofs for erasure coding, with light clients performing DAS; this requires at least one honest full node and enough light clients performing DAS.

EigenDA's security, on the other hand, fundamentally relies on the validator set of Ethereum, inheriting Ethereum's slashing primitives to provide Economic Security guarantees for the DA layer. The more stakers participate in Restaking on EigenDA, the greater the security. Additionally, lowering node requirements also helps enhance decentralization.

It is important to note that EigenDA is an application-layer DA, distinct from the protocol-layer DA of Danksharding—the advantages of application-specific over general-purpose include sovereignty and flexibility. This allows for customizing different solutions to meet the data availability needs of various Rollups.

Discussion on Economic Security

Finally, let's revisit Economic Security.

We assume that most Economic Security participants are rational, driven by economic incentives, and always inclined to maximize their profits. These participants may be validators of middleware, providing hardware facilities, staking native middleware tokens, and receiving tokens as rewards.

Rational participants will consider input versus output: if these inputs were placed elsewhere, could they yield greater returns? Therefore, middleware needs to ensure that the price of its tokens remains at a certain level. If the token incentives are substantial enough, it will naturally attract more validators to join, further enhancing the network's decentralization; if the token value cannot be maintained, project teams may have to fund the validator set out of pocket, leading to centralization and censorship issues.

Additionally, there is the consideration of security levels—the security of middleware depends on the overall value of the staked tokens; if the token crashes, the cost of attacking the network decreases.

In summary, middleware needs to continuously enhance the value of its protocol tokens to strengthen incentives, thereby ensuring that Economic Security remains robust. Beyond building the middleware service itself, project teams need to incur significant marginal costs.

EigenLayer's Restaking simultaneously addresses both of these issues:

  • Regarding input versus output, if the hardware facilities' capacity is sufficient, validators do not need to incur additional token costs but can extend their existing ETH staking shares to new protocols.

    Of course, this expands certain risk exposures. How to measure this risk cannot be determined until implementation details are disclosed, but intuitively, as long as validators do not intend to act maliciously, the risk stays within a controllable range, because Inactivity fundamentally differs from Slashing: Inactivity may result from accidental downtime or votes missed due to network issues, whereas Slashing results from malicious actions and leads to removal from the validator network and loss of ETH.

  • Regarding security levels, this will depend on EigenLayer itself and the adoption rate for specific middleware. Currently, a total of 14,836,535 ETH are staked on the Ethereum network; at current market prices, assuming only 1% of ETH participates in Restaking for a specific middleware, it could generate nearly $200 million in asset protection. Furthermore, in terms of decentralization, Ethereum's validator set is also the most decentralized group in the crypto ecosystem.
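
The figure in the last bullet is simple arithmetic; the staked-ETH count comes from the article, while the ETH price is an assumption (roughly the market price in late 2022):

```python
# Reproducing the ~$200M estimate; the ETH price is an assumption
# (~$1,250 around the time of writing, late 2022).
TOTAL_STAKED_ETH = 14_836_535
ETH_PRICE_USD = 1_250                      # assumed

restaked = TOTAL_STAKED_ETH * 0.01         # 1% participates for one middleware
security_usd = restaked * ETH_PRICE_USD
print(round(security_usd / 1e6))           # 185 (million USD)
```

Even at a 1% participation rate, the economic security backing a single middleware approaches $200 million, far above what most middleware staking networks secure with native tokens.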

Closing Thoughts

As EigenLayer is still in its early stages, we lack materials on specific implementations; the content of this article is more of a logical overview. Some technical details still require further exploration and discussion.

However, we have already seen the innovation proposed by EigenLayer in hyperscaling Ethereum, and there will be many interesting topics worth discussing on top of EigenLayer. If you read this article carefully and understand EigenLayer's vision and positioning, you might feel as excited as we do.

IOSG continues to focus on and actively embrace the Ethereum ecosystem, and will keep track of the potential changes EigenLayer brings to the future landscape of Ethereum and its investment opportunities.

Pay attention to EigenLayer :)

Please note: Some ideas in this article are derived from community discussions with the EigenLayer team.

References

https://messari.io/report/eigenlayer-to-stake-and-re-stake-again

https://twitter.com/SalomonCrypto/status/1572094840619532288

https://twitter.com/_nishil_/status/1573018197829115905

https://twitter.com/MeirBank/status/1589013673385000960
