Jump Crypto: A Detailed Analysis of the Blockchain Infrastructure Segmentation and Landscape
Original Authors: Rahul Maganti/Partner at Jump Crypto, Saurabh Sharma/Vice President at Jump Crypto
Original Title: "Peeking Under the Hood: Key Pillars of Crypto Infrastructure"
Compiled by: Ze Yi, Lin Qi, Chain Catcher
Introduction
With the rapid emergence of cross-chain bridges, new frameworks, and other core crypto protocols, effectively navigating the blockchain infrastructure landscape remains a key challenge for users, developers, and investors. The term "blockchain infrastructure" can encompass a wide variety of products and services, from underlying network stacks to consensus models and virtual machines. We reserve a deeper analysis of the various "core" components that constitute L1/L2 chains for a later publication (stay tuned!). In this article, our specific goals are:
- To provide a broad overview of the key components of blockchain infrastructure.
- To break these components down into clear, digestible sub-sections.
Infrastructure Map
We will define the ecosystem of blockchain infrastructure as protocols aimed at supporting L1 and L2 development in the following key areas:
- Layer 0 Infrastructure: (1) Decentralized cloud services (storage, computation, indexing); (2) Node infrastructure (RPC, staking/validators)
- Middleware: (1) Data availability; (2) Communication/messaging protocols
- Blockchain Development: (1) Security and testing; (2) Developer tools (out-of-the-box tools, front-end/back-end libraries, languages/IDEs).
Layer 0 Infrastructure
Decentralized Cloud Services
Cloud services are crucial to Web2 development: as the computational and data demands of applications grow, service providers that deliver data and computation quickly and economically become essential. Web3 applications have similar needs for data and computation but aim to stay true to the decentralized spirit of blockchain. As a result, protocols have emerged that aim to create decentralized versions of these Web2 services. The decentralized cloud consists of three core components:
- Storage - Data/files are stored on servers operated by many entities. These networks achieve high fault tolerance as data is replicated or sharded across multiple machines.
- Computation - Similar to storage, computation is centralized in the Web2 paradigm. Decentralized computation focuses on distributing this computation across many nodes to achieve higher fault tolerance (if one or a group of nodes fails, the network can still service requests with minimal disruption to performance).
- Indexing - In the Web2 world, data is stored on a server or a set of servers owned and operated by a single entity, making querying this data relatively easy. Since blockchain nodes are distributed, data can be isolated, scattered across different regions, and often under incompatible standards. Indexing protocols aggregate this data and provide an easy-to-use and standardized API to access it.
Several projects provide storage, computation, and indexing (such as Aleph and Akash Network), while others are more specialized (for example, The Graph for indexing, Arweave/Filecoin for storage).
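To make the replication idea concrete, here is a minimal sketch (in Python, with illustrative node names and an illustrative replication factor; this is not how any specific network above works) of how a decentralized storage network might deterministically choose which nodes hold a given piece of data, using rendezvous hashing:

```python
import hashlib

def shard_placement(key: str, nodes: list[str], replicas: int = 2) -> list[str]:
    """Pick `replicas` nodes for a data item via rendezvous (HRW) hashing:
    every node gets a score for this key, and the top scorers store it."""
    scored = sorted(
        nodes,
        key=lambda n: hashlib.sha256(f"{key}:{n}".encode()).hexdigest(),
        reverse=True,
    )
    return scored[:replicas]

nodes = ["node-a", "node-b", "node-c", "node-d"]
placement = shard_placement("file-123", nodes, replicas=2)
print(placement)  # two nodes chosen deterministically for this key
```

Because each key's placement depends only on the key and the current node set, a node joining or leaving moves only the keys it scored highest on, keeping re-replication traffic small; this is one way such networks tolerate node failure.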
Node Infrastructure
Remote Procedure Calls (RPC) are central to the functionality of many types of software systems. They allow one program to call or access a program on another computer. This is particularly useful for blockchains, which must service a large number of incoming requests from various machines operating in different regions and environments. Protocols like Alchemy, Syndica, and Infura provide this infrastructure as a service, enabling builders to focus on high-level application development rather than the underlying mechanisms involved in routing calls to nodes.
Like many RPC providers, Alchemy owns and operates all of its nodes. For many in the crypto community, the dangers of centralized RPC are evident: it introduces a single point of failure that could jeopardize the liveness of the blockchain (i.e., if Alchemy fails, applications will be unable to retrieve or access on-chain data). Recently, decentralized RPC protocols like Pocket have emerged to address these issues, but the effectiveness of this approach remains to be tested at scale.
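As a sketch of why avoiding a single RPC provider matters, the snippet below (Python; the provider URLs and the transport function are invented for illustration) builds a standard JSON-RPC 2.0 payload and falls back across several providers instead of depending on one:

```python
import json

def make_rpc_request(method: str, params: list, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 payload of the kind sent to blockchain nodes."""
    return json.dumps({"jsonrpc": "2.0", "id": request_id,
                       "method": method, "params": params})

def call_with_failover(endpoints, transport, method, params):
    """Try each endpoint in turn; falling back to the next provider on
    error removes the single point of failure of one centralized RPC."""
    payload = make_rpc_request(method, params)
    for url in endpoints:
        try:
            return transport(url, payload)
        except ConnectionError:
            continue
    raise RuntimeError("all RPC endpoints failed")

# Simulated transport: the first provider is down, the second answers.
def fake_transport(url, payload):
    if url == "https://rpc.provider-one.example":
        raise ConnectionError("provider down")
    return {"jsonrpc": "2.0", "id": 1, "result": "0x10d4f"}

result = call_with_failover(
    ["https://rpc.provider-one.example", "https://rpc.provider-two.example"],
    fake_transport, "eth_blockNumber", [])
print(result["result"])  # answer from the surviving provider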
Staking/validators— the security of a blockchain relies on a distributed set of nodes to validate transactions on the chain, but someone must actually run the nodes that participate in consensus. In many cases, the time, cost, and energy required to run a node can be daunting, leading many participants to opt out and rely on others to take on the responsibility of securing the chain.
However, this attitude creates a serious problem: if everyone decides to shift security to others, no one will validate. Services like P2P and Blockdaemon run infrastructure that allows less sophisticated or capital-constrained users to participate in consensus, often by pooling capital. Some argue that these staking providers introduce unnecessary centralization, but the alternative may be worse: without such providers, the barrier to entry for ordinary network participants to run nodes would be too high, potentially leading to even greater centralization.
Middleware
Data Availability
Applications consume data in large quantities. In the Web2 paradigm, this data typically comes directly from users or third-party providers in a centralized manner (data providers are compensated for aggregating and selling data to specific companies and applications— like Amazon, Google, or other machine learning data providers).
DApps are also significant consumers of data but require validators to make this data available for users or applications running on-chain. It is crucial to provide this data in a decentralized manner to minimize trust assumptions. Applications can access high-fidelity data quickly and efficiently through two main methods:
Data oracles like Pyth and Chainlink provide access to data streams, allowing crypto networks to connect reliably and in a decentralized manner with traditional systems and other external sources of information. This includes high-quality financial data (e.g., asset prices). This service is vital for the broad use cases that extend DeFi into trading, lending, sports betting, insurance, and many other areas.
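As a rough illustration of the aggregation problem an oracle network solves (a toy sketch, not Pyth's or Chainlink's actual algorithm), combining several independent publisher quotes with a median resists any single bad or malicious quote:

```python
from statistics import median

def aggregate_price(quotes: list[float]) -> tuple[float, float]:
    """Combine independent publisher quotes into one price plus a crude
    spread, in the spirit of multi-publisher oracle aggregation."""
    if not quotes:
        raise ValueError("no quotes to aggregate")
    mid = median(quotes)            # robust to a single outlier
    spread = max(quotes) - min(quotes)  # rough disagreement measure
    return mid, spread

# One publisher reports a wildly wrong price; the median ignores it.
price, conf = aggregate_price([101.2, 100.8, 101.0, 150.0])
print(price, conf)
```

A wide spread between publishers can also be surfaced to consumers as a confidence signal, so downstream applications know when not to trust the feed.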
Data availability layers are specifically designed to order transactions and make data available for the chains they support. Typically, by sampling a small portion of each block, they generate proofs that give clients high-probability assurance that all of the block's data has been published. Data availability proofs are key to ensuring the reliability of rollup sequencers and reducing rollup transaction-processing costs. Celestia is a great example of this layer.
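The sampling idea can be sketched with a plain Merkle tree (a deliberate simplification: Celestia's actual design uses 2D erasure coding and namespaced Merkle trees). A light client picks a random chunk and checks its inclusion proof against the block's data root:

```python
import hashlib
import random

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash leaves pairwise up to a single root commitment."""
    level = [h(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int):
    """Collect the sibling hashes needed to recompute the root."""
    level = [h(l) for l in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))  # (hash, sibling-is-left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf: bytes, proof, root: bytes) -> bool:
    node = h(leaf)
    for sib, is_left in proof:
        node = h(sib + node) if is_left else h(node + sib)
    return node == root

chunks = [f"chunk-{i}".encode() for i in range(8)]
root = merkle_root(chunks)           # published in the block header
i = random.randrange(len(chunks))    # light client samples at random
assert verify(chunks[i], merkle_proof(chunks, i), root)
```

Repeating this sampling a handful of times gives the client high probability that a block producer withholding any chunk would be caught, without the client ever downloading the whole block.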
Communication and Messaging
As the number of Layer 1s and their ecosystems grows, the demand for cross-chain composability and interoperability becomes more urgent. Cross-chain bridges allow previously isolated ecosystems to interact meaningfully, much as new trade routes once connected disparate regions and ushered in a new era of knowledge sharing! Wormhole, LayerZero, and other bridge solutions support generic messaging, allowing all types of data and information (including assets) to move across multiple ecosystems; applications can even make arbitrary function calls across chains, letting them reach other communities without deploying there. Other protocols, like Synapse and Celer, are limited to cross-chain transfers of assets or tokens.
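A generic cross-chain message can be sketched as a digest over (source chain, target chain, nonce, payload) attested by a quorum of guardians. The snippet below is purely illustrative: HMAC stands in for a real signature scheme such as ECDSA, and the field names and guardian keys are invented, not any bridge's actual wire format:

```python
import hashlib
import hmac

def message_digest(src_chain: str, dst_chain: str,
                   nonce: int, payload: bytes) -> bytes:
    """Canonical digest of a cross-chain message (fields illustrative)."""
    preimage = f"{src_chain}|{dst_chain}|{nonce}|".encode() + payload
    return hashlib.sha256(preimage).digest()

def attest(guardian_key: bytes, digest: bytes) -> bytes:
    # Stand-in for a real signature produced by a guardian/relayer node.
    return hmac.new(guardian_key, digest, hashlib.sha256).digest()

def verify_quorum(keys, signatures, digest, threshold: int) -> bool:
    """Accept the message only if enough guardians attested to it."""
    valid = sum(hmac.compare_digest(attest(k, digest), s)
                for k, s in zip(keys, signatures))
    return valid >= threshold

keys = [b"guardian-1", b"guardian-2", b"guardian-3"]
digest = message_digest("solana", "ethereum", 7, b"transfer:100")
sigs = [attest(k, digest) for k in keys]
print(verify_quorum(keys, sigs, digest, threshold=2))  # True
```

The nonce matters: without it, an observer could replay an old attested message on the destination chain, which is one of the classic bridge failure modes.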
On-chain messaging remains a key component of blockchain infrastructure. As DApp development and retail demand grow, the ability of protocols to interact with their users in meaningful yet decentralized ways will become a key driver of growth. Here are several potential areas where on-chain messaging could be useful:
- Token claim notifications.
- Enabling built-in communication messaging in wallets.
- Notifications about important updates to protocols.
- Notifications tracking key issues (e.g., risk indicators for DeFi applications, security vulnerabilities).
Notable projects developing on-chain communication protocols include Dialect, Ethereum Push Notification Service (EPNS), and XMTP.
Blockchain Development
Security and Testing
Security and testing practices in crypto are relatively underdeveloped, yet they are undeniably critical to the success of the entire ecosystem. Crypto applications are particularly sensitive to security risks because they often handle user assets directly: small errors in design or implementation can lead to severe economic consequences.
There are seven main security and testing methods:
- Unit Testing is a core part of most software testing suites. Developers write tests to check the behavior of small, atomic parts of a program. Several practical unit-testing frameworks exist: Waffle and Truffle on Ethereum, for example, while the Anchor testing framework is the standard on Solana.
- Integration Testing exercises multiple software modules as a group. Since libraries and higher-level drivers interact with each other and with lower-level modules in many ways (for example, a TypeScript library calling a set of underlying smart contracts), it is crucial to test the flow of data and information between these modules.
- Auditing has become a core part of the blockchain security process development. Before publicly releasing smart contracts, protocols often leverage third-party code auditors to check and verify every line of code. We place great importance on auditors to ensure the highest level of security. Trail of Bits, Open Zeppelin, and Quantstamp are some trusted institutions in the blockchain auditing space.
- Formal Verification involves checking whether a program or software component satisfies a set of properties. Typically, someone writes a formal specification detailing how the program should behave; a formal verification framework then transforms this specification into a set of constraints, which are solved and checked. Certora and Runtime Verification are two of the leading projects using formal verification to strengthen smart contract security.
- Simulation— quantitative trading firms have long used agent-based simulations to backtest algorithmic trading strategies. Given the high cost of experimenting on a live blockchain, simulation provides a way to parameterize protocols and test various hypotheses. Chaos Labs and Gauntlet are two platforms that use scenario-based simulations to protect blockchains and protocols.
- Bug Bounties leverage the decentralized spirit of the crypto space to address large-scale security challenges. High rewards incentivize community members and hackers to report and resolve critical vulnerabilities, so bounty programs play a unique role in turning "gray hats" into "white hats." For example, Wormhole offers a bug bounty worth up to $10 million on the bounty platform Immunefi! We encourage anyone to participate!
- Testnets provide an environment that closely mirrors mainnet, letting developers test and debug their applications before deploying them. Many testnets use Proof-of-Authority or other lightweight consensus mechanisms with a small validator set to optimize for speed, and testnet tokens have no real value, so users acquire them through faucets. Many testnets mirror a mainnet L1 (like Ethereum's Rinkeby, Kovan, and Ropsten).
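To illustrate the unit-testing style above in a framework-neutral way (a toy Python example, not Waffle/Truffle/Anchor syntax), each test targets one small, atomic behavior of a token-transfer function, the kind of logic a smart contract test suite would cover:

```python
def transfer(balances: dict, sender: str, receiver: str, amount: int) -> dict:
    """Toy token-transfer logic: the small atomic unit a test targets."""
    if amount <= 0 or balances.get(sender, 0) < amount:
        raise ValueError("invalid transfer")
    out = dict(balances)  # leave the input state untouched
    out[sender] -= amount
    out[receiver] = out.get(receiver, 0) + amount
    return out

# Unit test 1: the happy path moves exactly `amount`.
b = transfer({"alice": 10}, "alice", "bob", 4)
assert b == {"alice": 6, "bob": 4}

# Unit test 2: an overdraft must be rejected, not silently allowed.
try:
    transfer({"alice": 10}, "alice", "bob", 11)
    assert False, "expected ValueError"
except ValueError:
    pass
```

The same two-sided habit carries over to contract testing: assert both that valid state transitions produce exactly the expected state and that invalid ones revert.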
Each method has its own advantages and disadvantages and is certainly not mutually exclusive; different testing styles are often used at different stages of project development:
- Stage 1: Write unit tests while building contracts.
- Stage 2: Once higher-level program abstractions are built, integration testing becomes very important for testing interactions between modules.
- Stage 3: Code audits are conducted during testnet/mainnet releases or major feature releases.
- Stage 4: Formal verification is often paired with code audits and provides additional security assurances. Once a program is specified, the rest of the process can be automated, making it easy to integrate with Continuous Integration or Continuous Deployment tools.
- Stage 5: Launch applications on testnets to check throughput, traffic, and other scaling parameters.
- Stage 6: Launch bug bounty programs after deploying to mainnet, leveraging community resources to find and fix issues.
Developer Tools
The growth of any technology or ecosystem relies on the success of its developers— especially in the crypto space. We categorize developer tools into four main categories:
- Out-of-the-Box Tools
- SDKs for developing new L1s help abstract away the process of creating and deploying consensus models. Pre-built modules trade some flexibility and customization for development speed and standardization. The Cosmos SDK is a great example, supporting rapid development of new blockchains within the Cosmos ecosystem; Binance Chain and Terra are well-known chains built on Cosmos.
- Smart contract development— many tools help developers write smart contracts quickly. For example, Truffle boxes contain simple, useful example Solidity contracts (voting, etc.), and the community can contribute additions to the repository.
- Front-end/Back-end Tools— many tools simplify application development, such as libraries for connecting applications to the chain (e.g., ethers.js, web3.js).
- Upgrading and Interacting with Contracts (e.g., the OpenZeppelin SDK)— various ecosystem-specific tools (e.g., Anchor's IDL for Solana smart contracts, ink! for Parity smart contracts) handle writing RPC request handlers, emitting IDLs, and generating clients from IDLs.
- Languages and IDEs— the programming model for blockchain often differs significantly from traditional software systems. Programming languages used for blockchain development are designed to facilitate this model. For EVM-compatible chains, Solidity and Vyper are widely used. Other languages like Rust are heavily used for public chains like Solana and Terra.
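To illustrate the IDL-to-client idea mentioned above (using a hypothetical, minimal IDL; real IDLs such as Anchor's carry much more detail, including account and type information), a generator can turn each declared instruction into a callable method instead of forcing developers to hand-write every call site:

```python
import json

# A hypothetical, minimal IDL describing one program's instructions.
idl = json.loads("""{
  "name": "counter",
  "instructions": [
    {"name": "initialize", "args": ["start"]},
    {"name": "increment", "args": []}
  ]
}""")

class GeneratedClient:
    """Attach one method per IDL instruction; each method builds an
    RPC-style request dict rather than being written by hand."""
    def __init__(self, idl: dict):
        self.program = idl["name"]
        for ix in idl["instructions"]:
            self._add(ix["name"], ix["args"])

    def _add(self, name: str, arg_names: list):
        def method(*args):
            return {"program": self.program, "instruction": name,
                    "args": dict(zip(arg_names, args))}
        setattr(self, name, method)

client = GeneratedClient(idl)
print(client.initialize(5))
print(client.increment())
```

This is the essence of what IDL-driven tooling automates: the on-chain program declares its interface once, and clients in any language are generated from that single source of truth.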
Conclusion
Blockchain infrastructure can be an overloaded and confusing term, often synonymous with a range of products and services covering everything from smart contract auditing to cross-chain bridges. Therefore, discussions about crypto infrastructure are either too broad and chaotic or too specific and targeted for the average reader. We hope this article strikes the right balance for those just entering the crypto space and those seeking a deeper overview.
Of course, the crypto industry changes rapidly, and the protocols referenced here may no longer be a representative sample of the ecosystem in even two or three months. Nevertheless, we believe the main objective of this article (breaking infrastructure down into more easily understandable and digestible parts) will remain relevant. As the blockchain infrastructure landscape evolves, we will provide clear and consistent updates to our thinking.
For questions or comments, please reach out to Rahul Maganti (@rahulmaganti) and Saurabh Sharma (@zsparta). Let us know where we went wrong or where you disagree! Special thanks to Nikhil Suri (@nsuri) and Lucas Baker (@sansgravitas) for their valuable feedback.