Tron Industry Weekly Report: BTC may continue to bottom below $80,000, decentralized storage protocol Walrus secures a record $140 million in funding
I. Outlook
1. Macroeconomic Summary and Future Predictions
Last week, the Trump administration announced a 25% tariff on all non-American-made cars, a decision that once again triggered panic in the market. This tariff policy could not only lead to a significant increase in the prices of imported cars and parts but may also provoke retaliatory measures from trade partners, further escalating international trade tensions. Moving forward, investors need to closely monitor the progress of trade negotiations and changes in the global economic landscape.
2. Market Movements and Warnings in the Cryptocurrency Industry
Last week, the cryptocurrency market experienced a significant pullback driven by macro-level fear, with previously accumulated gains being rapidly reversed within just a few days. This volatility primarily stemmed from the renewed uncertainty in the global macroeconomic environment. Looking ahead to this week, the market's focus will be on whether Bitcoin and Ethereum prices can effectively break below previous lows. This level is not only a crucial technical support but also a key psychological barrier for the market. On April 2, the U.S. officially initiated the imposition of reciprocal tariffs. If this move does not further exacerbate market panic, the cryptocurrency market may see an opportunity for a phase of bottom-fishing. However, investors must remain vigilant and closely monitor market dynamics and relevant indicators.
3. Industry and Sector Hotspots
Cobo and YZI led the latest round for Particle, the modular L1 chain-abstraction platform, with HashKey participating for the second time; Particle greatly enhances user experience and developer efficiency by simplifying cross-chain operations and payments, though it still faces challenges around liquidity and centralized management. Skate, focused on seamlessly linking mainstream-VM application-layer protocols, offers an innovative and efficient solution: by providing a unified application state, simplifying cross-chain task execution, and ensuring security, it significantly reduces the complexity developers and users face in a multi-chain environment. Arcium is a fast, flexible, and low-cost infrastructure aimed at providing access to cryptographic computing through blockchain. Finally, Walrus, an innovative decentralized storage solution, raised a record $140 million.
II. Market Hotspot Sectors and Potential Projects of the Week
1. Performance of Potential Sectors
1.1. Analysis of the Features of Skate, the HashKey-Led Protocol for Seamlessly Linking Mainstream VM Application Layers
Skate is an infrastructure layer for dApps that connects all virtual machines (EVM, TonVM, SolanaVM), allowing users to interact seamlessly from their native chains. For users, Skate provides applications that run in their preferred environment. For developers, Skate manages the complexity of cross-chain interactions and introduces a new application paradigm: applications built across all chains and virtual machines that serve every chain from a unified application state.
++Architecture Overview++
Skate's infrastructure consists of three foundational layers:
- Skate's Central Chain: The central hub that handles all logical operations and stores application states.
- Pre-confirmation AVS: An AVS deployed on EigenLayer that facilitates the secure delegation of re-staked ETH to Skate's executor network. It serves as the primary source of truth, ensuring that executors perform the required operations on the target chain.
- Executor Network: A network of executors responsible for executing operations defined by applications. Each application has its own set of executors.
As the central chain, Skate maintains and updates a shared state and provides instructions to connected peripheral chains, which only respond to the calldata provided by Skate. This process is implemented through the executor network, where each executor is a registered AVS operator responsible for executing these tasks. In the event of dishonest behavior, the pre-confirmation AVS serves as the source of truth for penalizing the violating operators.
User Flow
Skate primarily operates through intents, where each intent encapsulates the key information expressing the operation the user wishes to execute, while also defining the necessary parameters and boundaries. Users only need to sign the intent through their local wallets and interact solely on that chain, creating a user-native environment.
The intent flow is as follows:
- Source Chain: Users initiate operations on the TON/Solana/EVM chain by signing the intent.
- Skate: Executors receive the intent and call the processIntent function. This creates a task encapsulating the key information needed for execution, and the system emits a TaskSubmitted event. AVS validators listen for the TaskSubmitted event and verify the content of each task; once consensus is reached in the pre-confirmation AVS, the forwarder issues the signatures required for task execution.
- Target Chain: Executors call the executeTask function on the Gateway contract. The Gateway contract verifies that the task has been validated by the AVS, confirming the validity of the forwarder's signature before executing the functions defined in the task. The calldata of the function call is executed, and the intent is marked as complete.
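To make this flow concrete, here is a minimal Python sketch of the intent lifecycle under the description above. The names processIntent, executeTask, and the TaskSubmitted event come from the text; everything else (types, quorum size, signature format) is an illustrative assumption, not Skate's actual API.

```python
from dataclasses import dataclass, field

QUORUM = 2  # illustrative pre-confirmation AVS quorum size

@dataclass
class Intent:
    user: str
    source_chain: str
    action: str        # the operation the user wants to execute
    params: dict
    signature: str     # signed with the user's native wallet

@dataclass
class Task:
    intent: Intent
    validator_sigs: list = field(default_factory=list)

    @property
    def confirmed(self) -> bool:
        return len(self.validator_sigs) >= QUORUM

def process_intent(intent: Intent) -> Task:
    """Executor-side handler: wrap the intent into a task and
    (conceptually) emit a TaskSubmitted event for AVS validators."""
    task = Task(intent=intent)
    print(f"TaskSubmitted: {intent.action} from {intent.source_chain}")
    return task

def avs_validate(task: Task, validators: list) -> None:
    """Each AVS validator checks the task and contributes a signature;
    once a quorum signs, the forwarder can release execution signatures."""
    for v in validators:
        task.validator_sigs.append(f"sig:{v}")

def execute_task(task: Task) -> str:
    """Target-chain Gateway: check the AVS validation (via the
    forwarder's signatures) before running the calldata."""
    if not task.confirmed:
        raise PermissionError("task not validated by the pre-confirmation AVS")
    return f"executed {task.intent.action}; intent marked complete"

# A user signs an intent on TON; it is validated and executed downstream.
intent = Intent("alice", "TON", "swap", {"amount": 10}, "0xsigned")
task = process_intent(intent)
avs_validate(task, ["validator-1", "validator-2"])
print(execute_task(task))
```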
++Commentary++
Skate provides an innovative and efficient solution for cross-chain operations of decentralized applications. By offering a unified application state, simplifying cross-chain task execution, and ensuring security, Skate significantly reduces the complexity for developers and users in a multi-chain environment. Its flexible architecture and easy integration features give it broad application prospects in the multi-chain ecosystem. However, to achieve comprehensive implementation in high concurrency and multi-chain ecosystems, Skate still needs to continuously work on performance optimization and cross-chain compatibility.
1.2. How the Decentralized Cryptographic Computing Network Arcium, Backed by Coinbase, NGC, and Long Hash, Achieves Its Vision
Arcium is a fast, flexible, and low-cost infrastructure aimed at providing access to cryptographic computing through blockchain: a cryptographic supercomputer offering large-scale encrypted computation, supporting developers, applications, and the entire industry in computing over fully encrypted data within a trustless, verifiable, and efficient framework. Through secure multi-party computation (MPC), Arcium provides scalable and secure cryptographic solutions for Web2 and Web3 projects, backed by a decentralized network.
++Architecture Overview++
The Arcium network aims to provide secure distributed confidential computing for various applications, from artificial intelligence to decentralized finance (DeFi) and beyond. It is based on advanced cryptographic technologies, including multi-party computation (MPC), achieving trustless and verifiable computation without the need for central authority intervention.
- Multi-Party Execution Environments (MXEs)
MXEs are dedicated, isolated environments for defining and securely executing computing tasks. They support parallel processing (as multiple clusters can execute computations for different MXEs simultaneously), thereby enhancing throughput and security.
MXEs are highly configurable, allowing computing clients to define security requirements, encryption schemes, and performance parameters based on their needs. While individual computing tasks are executed in specific clusters of Arx nodes, multiple clusters can be associated with one MXE. This ensures that even if some nodes are offline or overloaded, computing tasks can still be reliably executed. By pre-defining these configurations, clients can flexibly customize the environment according to specific use case requirements.
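As a rough illustration of this configurability, the sketch below models an MXE definition as a plain data structure. All field names and protocol identifiers (for example "cerberus") are assumptions for illustration only, not Arcium's actual configuration schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MXEConfig:
    """Hypothetical MXE definition: the client pins the security model,
    encryption scheme, and execution parameters up front."""
    mpc_protocol: str        # e.g. "cerberus" (dishonest-majority setting)
    encryption_scheme: str   # scheme protecting inputs and outputs
    min_cluster_size: int    # Arx nodes required per computation
    fallback_clusters: int   # extra clusters kept for availability

    def validate(self) -> None:
        if self.min_cluster_size < 2:
            raise ValueError("MPC needs at least two parties")
        if self.fallback_clusters < 0:
            raise ValueError("fallback cluster count cannot be negative")

# A client pins a dishonest-majority protocol with one fallback cluster,
# so its tasks still run if the primary cluster is offline or overloaded.
cfg = MXEConfig("cerberus", "aes-256-gcm", 4, 1)
cfg.validate()
```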
- arxOS
arxOS is the distributed execution engine within the Arcium network, responsible for coordinating the execution of computing tasks and driving Arx nodes and clusters. Each node (similar to cores in a computer) provides computing resources to execute the tasks defined by MXEs.
- Arcis (Arcium's Developer Framework)
Arcis is a Rust-based developer framework and compiler that enables developers to build applications on the Arcium infrastructure, with support for all of Arcium's multi-party computation (MPC) protocols.
- Arx Node Clusters (Running arxOS)
Clusters of Arx nodes (each running arxOS) provide a customizable trust model, supporting dishonest-majority protocols (initially Cerberus) and "honest but curious" protocols (such as Manticore). Other protocols (including honest-majority protocols) will be added in the future to support more use cases.
- Chain-Level Enforcement
All state management and coordination of computing tasks are handled on-chain through the Solana blockchain, which serves as the consensus layer coordinating the operations of Arx nodes. This ensures fair reward distribution, enforcement of network rules, and alignment of nodes with the current state of the network. Tasks are queued in a decentralized memory pool architecture, where on-chain components help determine which computing tasks have the highest priority, identify misconduct, and manage execution order.
Nodes ensure compliance with network rules by staking collateral. If misconduct or deviation from the protocol occurs, the system implements a penalty mechanism, punishing violating nodes through slashing to maintain the integrity of the network.
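A minimal sketch of the collateral-and-slashing idea described above, with all numbers and rules chosen purely for illustration rather than taken from Arcium's protocol:

```python
class StakeRegistry:
    """Toy on-chain registry: nodes post collateral, and proven
    misconduct burns a fixed fraction of their stake."""

    SLASH_FRACTION = 0.5  # illustrative penalty rate

    def __init__(self):
        self.stakes: dict[str, float] = {}

    def register(self, node: str, collateral: float) -> None:
        self.stakes[node] = collateral

    def slash(self, node: str) -> float:
        """Burn part of a violator's stake and return the amount burned."""
        penalty = self.stakes[node] * self.SLASH_FRACTION
        self.stakes[node] -= penalty
        return penalty

registry = StakeRegistry()
registry.register("arx-node-1", 1000.0)
print(registry.slash("arx-node-1"))    # 500.0 burned for misconduct
print(registry.stakes["arx-node-1"])   # 500.0 remaining at stake
```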
++Commentary++
The following are key features that make the Arcium network a cutting-edge secure computing solution:
- Trustless, Arbitrary Cryptographic Computing: The Arcium network achieves trustless computing through its multi-party execution environments (MXEs), allowing arbitrary computations on encrypted data without exposing the content of the data.
- Guaranteed Execution: Through a blockchain-based coordination system, the Arcium network ensures that all computations within MXEs can be reliably executed. Arcium's protocols enforce compliance through staking and penalty mechanisms, requiring nodes to commit collateral, which will be penalized if they deviate from agreed execution rules, thus ensuring the correct completion of each computing task.
- Verifiability and Privacy Protection: Arcium provides a verifiable computing mechanism that allows participants to publicly audit the correctness of computation results, enhancing the transparency and reliability of data processing.
- On-Chain Coordination: The network utilizes the Solana blockchain to manage node scheduling, compensation, and performance incentives. Staking, penalties, and other incentive mechanisms are fully executed on-chain, ensuring the decentralization and fairness of the system.
- Developer-Friendly Interfaces: Arcium offers dual interfaces: one is a web-based graphical interface for non-technical users, and the other is a Solana-compatible SDK for developers to create custom applications. This design makes confidential computing accessible to ordinary users while meeting the needs of highly technical developers.
- Multi-Chain Compatibility: Although initially based on Solana, the Arcium network is designed with multi-chain compatibility in mind, capable of supporting access from different blockchain platforms.
Through these features, the Arcium network aims to redefine how sensitive data is processed and shared in a trustless environment, promoting the broader application of secure multi-party computation (MPC).
1.3. What Are the Characteristics of Particle, the Modular L1 Chain-Abstraction Platform Led by Cobo and YZI, with HashKey Participating Twice?
Particle Network has fundamentally simplified the user experience of Web3 through wallet abstraction and chain abstraction. With its wallet abstraction SDK, developers can guide users into smart accounts with one-click social logins.
Additionally, Particle Network's chain abstraction technology stack, with Universal Accounts as its flagship product, allows users to have unified accounts and balances across every chain.
Particle Network's live wallet abstraction product suite consists of three key components:
- User Onboarding: By simplifying the registration process, users can more easily enter the Web3 ecosystem, enhancing the user experience.
- Account Abstraction: Through account abstraction, users' assets and operations are no longer dependent on a single chain, improving flexibility and convenience for cross-chain operations.
- Chain Abstraction (upcoming): Chain abstraction will further strengthen cross-chain capabilities, allowing users to seamlessly operate and manage assets across multiple blockchains through a unified on-chain account experience.
++Architecture Analysis++
Particle Network coordinates and completes cross-chain transactions in a high-performance EVM execution environment through its Universal Accounts and three core functions:
- Universal Accounts: Provide a unified account state and balance, allowing users to manage assets and operations across all chains through a single account.
- Universal Liquidity: Ensures that funds can be seamlessly transferred and utilized across different chains through cross-chain liquidity pools.
- Universal Gas: Simplifies the user experience by automatically managing the gas fees required for cross-chain transactions.
These three core functions work together to enable Particle Network to unify interactions across all chains and achieve automated cross-chain fund transfers through atomic cross-chain transactions, helping users achieve their goals without manual intervention.
Universal Accounts: Particle Network's Universal Accounts aggregate token balances across all chains, allowing users to utilize assets from all chains in any decentralized application (dApp) as if using a single wallet.
This functionality is achieved through Universal Liquidity. Universal Accounts can be understood as specialized smart accounts deployed and coordinated across all chains. Users only need to connect a wallet to create and manage a Universal Account, and the system automatically assigns management permissions. The connected wallet can be generated through Particle Network's Modular Smart Wallet-as-a-Service or can be a regular Web3 wallet such as MetaMask, UniSat, or Keplr.
Developers can easily integrate Universal Account functionality into their dApps by implementing Particle Network's universal SDK, empowering cross-chain asset management and operations.
Universal Liquidity: Universal Liquidity is the technical architecture that supports aggregating balances across all chains. Its core function is coordinated by Particle Network through atomic cross-chain transactions and exchanges. These atomic transaction sequences are driven by Bundler nodes, executing user operations and completing actions on the target chain.
Universal Liquidity relies on a network of liquidity providers (also known as fillers) to move intermediary tokens (such as USDC and USDT) across chains through token pools. These liquidity providers ensure that assets can flow smoothly across chains.
For example, suppose a user wants to purchase an NFT priced in ETH using USDC on the Base chain. In this scenario:
- Particle Network aggregates the user's USDC balances across multiple chains.
- The user confirms the purchase using those aggregated assets.
- Particle Network automatically exchanges the USDC for ETH and completes the NFT purchase.
These additional on-chain operations require only a few seconds of processing time and are transparent to the user, who does not need to intervene manually. In this way, Particle Network simplifies the management of cross-chain assets, making cross-chain transactions and operations seamless and automated.
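The flow can be sketched in code. Everything below (function names, balances, rates) is hypothetical and only mirrors the steps just described; it is not Particle Network's SDK.

```python
# Hypothetical sketch of the cross-chain NFT purchase flow described above.

def aggregate_balances(user: str, token: str, chains: list) -> dict:
    """Step 1: collect the user's token balances across chains
    (stubbed here with fixed numbers)."""
    return {chain: 50.0 for chain in chains}  # e.g. 50 USDC per chain

def buy_nft_with_usdc(user: str, nft_price_eth: float, eth_per_usdc: float):
    chains = ["Ethereum", "Arbitrum", "Polygon"]
    balances = aggregate_balances(user, "USDC", chains)
    usdc_needed = nft_price_eth / eth_per_usdc

    if sum(balances.values()) < usdc_needed:
        raise ValueError("insufficient aggregated USDC")

    # Steps 2-3: once the user confirms, the network pulls USDC from
    # several chains atomically, swaps it to ETH, and settles on Base.
    pulled, remaining = {}, usdc_needed
    for chain, bal in balances.items():
        take = min(bal, remaining)
        pulled[chain], remaining = take, remaining - take
        if remaining == 0:
            break
    return {"swapped_to_eth": nft_price_eth, "sources": pulled}

print(buy_nft_with_usdc("alice", nft_price_eth=0.05, eth_per_usdc=0.0005))
```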
Universal Gas: By unifying balances across chains through Universal Liquidity, Particle Network also addresses the fragmentation issue of gas tokens.
In the past, users needed to hold various gas tokens in different wallets to pay gas fees on different chains, which posed significant usability barriers. To solve this problem, Particle Network uses its native Paymaster, allowing users to pay gas fees with any token from any chain. These transactions will ultimately be settled through the chain's native token (PARTI) on Particle Network's L1.
Users do not need to hold PARTI tokens to use Universal Accounts, as their gas tokens will be automatically exchanged and used for settlement. This makes cross-chain operations and payments much simpler, eliminating the need for users to manage multiple gas tokens.
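A toy sketch of this gas-settlement idea: a Paymaster accepts whatever token the user holds, and the fee is ultimately settled in PARTI on Particle's L1. The exchange rates and function names are illustrative assumptions.

```python
# Illustrative-only exchange rates into PARTI.
RATES_TO_PARTI = {"USDC": 8.0, "ETH": 15000.0, "SOL": 1200.0}

def pay_gas(user_token: str, gas_cost_parti: float) -> dict:
    """Hypothetical Paymaster: quote how much of the user's token covers
    the gas, then settle the fee in PARTI on Particle's L1."""
    rate = RATES_TO_PARTI[user_token]
    token_amount = gas_cost_parti / rate
    return {
        "charged": f"{token_amount:.6f} {user_token}",
        "settled": f"{gas_cost_parti} PARTI on Particle L1",
    }

# The user pays in USDC and never has to hold PARTI directly.
print(pay_gas("USDC", gas_cost_parti=2.0))
```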
++Commentary++
Advantages:
- Unified Management of Cross-Chain Assets: Universal Accounts and Universal Liquidity allow users to manage and utilize assets across different chains without worrying about asset fragmentation or the complexity of cross-chain transfers.
- Simplified User Experience: Through social logins and Modular Smart Wallet-as-a-Service, users can easily access Web3, lowering the entry barrier.
- Automation of Cross-Chain Transactions: Atomic cross-chain transactions and Universal Gas make the automatic conversion and payment of assets and gas tokens seamless, enhancing user convenience.
- Developer-Friendly: Developers can easily integrate cross-chain functionality into their dApps using Particle Network's universal SDK, reducing the complexity of cross-chain integration.
Disadvantages:
- Dependence on Liquidity Providers: Liquidity providers (such as for cross-chain transfers of USDC and USDT) require widespread participation to ensure smooth liquidity. If liquidity pools are insufficient or provider participation is low, it may affect the smoothness of transactions.
- Centralization Risks: Particle Network relies to some extent on its native Paymaster to handle gas fee payments and settlements, which may introduce risks and dependencies related to centralization.
- Compatibility and Popularity: Although it supports multiple wallets (such as MetaMask, Keplr, etc.), compatibility between different chains and wallets may still pose a significant challenge for user experience, especially for smaller chains or wallet providers.
Overall, Particle Network greatly enhances user experience and developer efficiency by simplifying cross-chain operations and payments, but it also faces challenges related to liquidity and centralized management.
2. Detailed Analysis of Projects to Watch This Week
2.1. In-Depth Look at Walrus, the Innovative Decentralized Storage Solution Led by a16z, Which Raised a Record $140 Million This Month
++Introduction++
Walrus is an innovative solution for decentralized big-data storage. It combines fast, linearly decodable erasure codes that scale to hundreds of storage nodes, achieving high resilience at low storage overhead, and it uses the next-generation public chain Sui as its control plane, managing everything from the lifecycle of storage nodes to the lifecycle of blobs, as well as economics and incentive mechanisms, eliminating the need for a fully custom blockchain protocol.
At the core of Walrus is a new coding protocol called Red Stuff, which employs an innovative two-dimensional (2D) coding algorithm based on fountain codes. Unlike RS coding, fountain codes primarily rely on XOR or other very fast operations on large data blocks, avoiding complex mathematical computations. This simplicity allows for encoding large files in a single transmission, significantly speeding up processing. The 2D encoding of Red Stuff enables recovery of lost fragments in proportion to the amount of lost data. Additionally, Red Stuff incorporates authenticated data structures to prevent malicious clients, ensuring the consistency of stored and retrieved data.
Walrus operates in epochs, each managed by a committee of storage nodes. All operations within an epoch can be sharded by blobid, achieving high scalability. The system handles blob writes by encoding data into primary and secondary fragments, generating Merkle commitments, and distributing these fragments to storage nodes. Reads collect and verify fragments, with the system providing both a best-effort path and an incentivized path to cope with potential failures. To keep blob reads and writes available while participants naturally churn in a permissionless system, Walrus includes an efficient committee reconfiguration protocol.
Another key innovation of Walrus is its storage proof method, which is a mechanism for verifying whether storage nodes indeed hold the data they claim to possess. Walrus addresses the scalability challenges associated with these proofs by incentivizing all storage nodes to hold fragments of all stored files. This complete replication allows for a new storage proof mechanism that challenges storage nodes as a whole rather than individually for each file. Consequently, the cost of proving file storage grows logarithmically with the number of stored files, rather than linearly as in many existing systems.
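To see why an aggregate challenge yields logarithmic costs, consider a Merkle-style accumulator over all stored files: answering one challenge takes a proof of about log2(N) hashes instead of N per-file responses. The sketch below is only an illustration of that asymptotic argument, not Walrus's actual challenge protocol.

```python
import hashlib, math

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build_levels(leaves):
    """Build a Merkle tree bottom-up; duplicate the last node on odd levels."""
    levels = [[H(x) for x in leaves]]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]
        levels.append([H(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def prove(levels, idx):
    """Inclusion proof for leaf idx: one sibling hash per level."""
    proof = []
    for lvl in levels[:-1]:
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]
        proof.append(lvl[idx ^ 1])
        idx //= 2
    return proof

files = [f"file-{i}".encode() for i in range(1024)]
levels = build_levels(files)
proof = prove(levels, idx=137)
# Proving possession against one aggregate root costs log2(1024) = 10 hashes,
# instead of answering 1024 separate per-file challenges.
print(len(proof), "hashes vs", math.ceil(math.log2(len(files))))
```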
Finally, Walrus introduces a staking-based economic model, combining rewards and penalties to align incentives and enforce long-term commitments. The system includes a pricing mechanism for storage resources and write operations, along with a token governance model for parameter adjustments.
++Technical Analysis++
Red Stuff Coding Protocol
Current industry coding protocols achieve low overhead factors and extremely high guarantees but are still unsuitable for long-term deployment. The main challenge lies in the fact that in a long-running large-scale system, storage nodes frequently encounter failures, losing their fragments and needing to be replaced. Furthermore, in a permissionless system, even if storage nodes have sufficient incentives to participate, natural replacements will occur among nodes.
Both scenarios lead to a significant amount of data needing to be transmitted across the network, equivalent to the total amount of stored data, to recover lost fragments for new storage nodes. This is extremely costly. Therefore, the team aims to ensure that the recovery cost during node replacements is proportional only to the amount of data that needs to be recovered and decreases inversely with the number of storage nodes (n).
To achieve this, Red Stuff encodes large data blocks in a two-dimensional (2D) manner. The primary dimension is equivalent to the RS coding used in previous systems. However, to efficiently recover fragments, Walrus also encodes in the secondary dimension. Red Stuff is based on linear erasure codes and the Twin-code framework, which provides efficient recovery in fault-tolerant settings suitable for environments with trusted writers. The team has adapted this framework for Byzantine fault-tolerant environments and optimized it for single storage node clusters, which will be detailed below.
- Encoding
The starting point is to split a large data block into f + 1 fragments. Rather than immediately encoding repair fragments, Red Stuff first adds a second dimension during the splitting process:
(a) Two-Dimensional Primary Encoding. The file is split into 2f + 1 columns and f + 1 rows. Each column is encoded as an independent blob containing 2f repair symbols. Then, the extended part of each row becomes the corresponding node's primary fragment.
(b) Two-Dimensional Secondary Encoding. The file is split into 2f + 1 columns and f + 1 rows. Each row is encoded as an independent blob containing f repair symbols. Then, the extended part of each column becomes the corresponding node's secondary fragment.
Figure 2: 2D Encoding / Red Stuff
The original blob is split into f + 1 primary fragments (vertical in the figure) and 2f + 1 secondary fragments (horizontal in the figure). Figure 2 illustrates this process. Ultimately, the file is split into (f + 1)(2f + 1) symbols, which can be visualized in a [f + 1, 2f + 1] matrix.
Given this matrix, repair symbols are generated in both dimensions. We take each of the 2f + 1 columns (each of size f + 1) and extend it to n symbols, making the number of rows in the matrix n. We assign each row as a primary fragment of a node (see Figure 2a). This nearly triples the amount of data we need to send. To provide efficient recovery for each fragment, we also extend the original [f + 1, 2f + 1] matrix, expanding each row from 2f + 1 symbols to n symbols (see Figure 2b) and using our encoding scheme. In this way, we create n columns, with each column assigned as the corresponding node's secondary fragment.
For each fragment (primary and secondary), the writer W also computes a commitment over its symbols. For each primary fragment, the commitment covers all symbols in the extended row; for each secondary fragment, it covers all values in the extended column. In the final step, the client creates a list containing these fragment commitments, which serves as the blob commitment.
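The 2D structure can be made concrete with a small sketch. For readability it uses Reed-Solomon encoding via Lagrange interpolation over a prime field as the linear erasure code; Red Stuff itself relies on much faster fountain-style (XOR-based) codes, so treat this as an illustration of the row/column geometry, not the production codec.

```python
P = 2**31 - 1  # a Mersenne prime; symbols are integers mod P

def lagrange_extend(symbols, n):
    """Treat `symbols` as evaluations of a polynomial at x = 0..k-1 and
    extend to evaluations at x = 0..n-1 (systematic RS encoding)."""
    k = len(symbols)
    out = list(symbols)
    for x in range(k, n):
        acc = 0
        for i, y in enumerate(symbols):
            num, den = 1, 1
            for j in range(k):
                if j != i:
                    num = num * (x - j) % P
                    den = den * (i - j) % P
            acc = (acc + y * num * pow(den, P - 2, P)) % P
        out.append(acc)
    return out

def red_stuff_encode(matrix, f):
    """matrix: (f+1) rows x (2f+1) cols of source symbols.
    Returns (primary, secondary) fragments for n = 3f + 1 nodes."""
    n = 3 * f + 1
    rows, cols = f + 1, 2 * f + 1
    # Primary dimension: extend every column (length f+1) to n symbols;
    # node i's primary fragment is row i of the extended matrix.
    ext_cols = [lagrange_extend([matrix[r][c] for r in range(rows)], n)
                for c in range(cols)]
    primary = [[ext_cols[c][i] for c in range(cols)] for i in range(n)]
    # Secondary dimension: extend every row (length 2f+1) to n symbols;
    # node i's secondary fragment is column i of the extended matrix.
    ext_rows = [lagrange_extend(matrix[r], n) for r in range(rows)]
    secondary = [[ext_rows[r][i] for r in range(rows)] for i in range(n)]
    return primary, secondary

f = 1                            # tolerate f Byzantine nodes, n = 3f+1 = 4
blob = [[1, 2, 3], [4, 5, 6]]    # (f+1) x (2f+1) source symbols
prim, sec = red_stuff_encode(blob, f)
print(len(prim), "primary fragments of", len(prim[0]), "symbols each")
```

With f = 1, the 2 x 3 source matrix yields four primary fragments of three symbols and four secondary fragments of two symbols, matching the row/column split of Figure 2.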
- Writing Protocol
The writing protocol of Red Stuff follows the same pattern as the RS coding protocol. The writer W first encodes the blob and creates a fragment pair for each node. A fragment pair i is a pairing of the i-th primary fragment and secondary fragment. There are a total of n = 3f + 1 fragment pairs, equivalent to the number of nodes.
Next, W sends the commitments of all fragments to each node along with the corresponding fragment pairs. Each node checks whether its fragment in the fragment pair matches the commitment, recalculates the blob's commitment, and replies with a signed confirmation. Once 2f + 1 signatures are collected, W generates a certificate and publishes it on-chain to prove that the blob will be available.
In a theoretical asynchronous network model, assuming reliable transmission, all correct nodes will eventually receive a fragment pair from an honest writer. However, in practical protocols, the writer may need to stop retransmitting. Once 2f + 1 signatures are collected, retransmission can safely stop, ensuring that at least f + 1 correct nodes (selected from the 2f + 1 responding nodes) hold the blob's fragment pair.
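A toy sketch of the write path: the writer keeps sending fragment pairs until 2f + 1 signed acknowledgements arrive, which together form the write certificate. Node behavior is randomly stubbed; all names and shapes are illustrative.

```python
import random

def write_blob(nodes: list, f: int, fragment_pairs: dict) -> list:
    """Send each node its fragment pair and collect signed acks;
    stop as soon as 2f + 1 signatures form a write certificate."""
    sigs = []
    for node in nodes:
        pair = fragment_pairs[node]
        # The node would verify `pair` against the fragment commitments
        # before signing (verification stubbed; a node may not respond).
        if random.random() < 0.9:
            sigs.append(f"ack:{node}")
        if len(sigs) == 2 * f + 1:
            return sigs  # certificate reached: retransmission can stop
    raise TimeoutError("quorum not reached; writer keeps retransmitting")

f = 1
nodes = [f"node-{i}" for i in range(3 * f + 1)]
pairs = {n: ("primary-sliver", "secondary-sliver") for n in nodes}
try:
    print("write certificate:", write_blob(nodes, f, pairs))
except TimeoutError as err:
    print(err)
```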
- Recovery Protocol
(a) Nodes 1 and 3 jointly hold two rows and two columns.
In this case, nodes 1 and 3 together hold two rows and two columns of the file. The fragments held by each node are distributed across different rows and columns of the two-dimensional encoding, ensuring that data is stored redundantly across multiple nodes for high availability and fault tolerance.
(b) Each node sends its row/column intersection with node 4's column/row to node 4 (in red). Node 3 needs to encode this row.
In this step, nodes 1 and 3 send node 4 the symbols at the intersections of their rows/columns with node 4's column/row. Specifically, node 3 must extend (encode) its row to produce the symbol at the intersection with node 4's column before sending it. Node 4 can then assemble the fragments it needs for recovery or verification. This process preserves data integrity and redundancy, so data remains recoverable even when some nodes fail.
(c) Node 4 uses f + 1 symbols on its column to recover the complete secondary fragment (in green). Then, node 4 sends the recovered column intersection to the rows of other recovery nodes.
In this step, node 4 uses f + 1 symbols on its column to recover the complete secondary fragment. The recovery process is based on data intersection, ensuring efficient data recovery. Once node 4 recovers its secondary fragment, it sends the recovered column intersection to other nodes that are recovering, helping them recover their row data. This interaction guarantees the smooth progress of data recovery, and collaboration among multiple nodes can accelerate the recovery process.
(d) Node 4 uses f + 1 symbols on its row and all recovered secondary symbols sent by other honest recovery nodes (in green) (these symbols should be at least 2f, plus the 1 symbol recovered in the previous step) to recover its primary fragment (in deep blue).
At this stage, node 4 not only uses f + 1 symbols on its row to recover its primary fragment but also needs to utilize secondary symbols sent by other honest recovery nodes to assist in completing the recovery. By using these symbols received from other nodes, node 4 can recover its primary fragment. To ensure the accuracy of the recovery, node 4 will receive at least 2f + 1 valid secondary symbols (including the 1 symbol recovered in the previous step). This mechanism enhances fault tolerance and data recovery capability by integrating data from multiple sources.
- Reading Protocol
The reading protocol is the same as for RS coding: nodes only need their primary fragments. The reader R first asks any node for the blob's commitment set and checks, via the commitment-opening protocol, that the returned set matches the requested blob commitment. Next, R sends read requests for the blob to all nodes, which respond with their held primary fragments (possibly incrementally, to save bandwidth). Each response is checked against the corresponding commitment in the blob's commitment set.
When R collects f + 1 correct primary fragments, R decodes the blob, re-encodes it, recalculates the blob commitment, and compares it with the requested blob commitment. If the two commitments match (i.e., they are the same as the commitment published by W on-chain), R outputs blob B; otherwise, R outputs an error or an indication of inability to recover.
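Continuing the Reed-Solomon illustration from the encoding sketch, the decoder below recovers the source matrix from any f + 1 correct primary fragments. The hard-coded fragments are exactly those the encoding sketch produces for f = 1 and blob [[1, 2, 3], [4, 5, 6]]; the commitment re-check is noted but stubbed out.

```python
P = 2**31 - 1  # same prime field as in the encoding sketch above

def lagrange_eval(points, x):
    """Evaluate, at x, the unique polynomial through `points` (mod P)."""
    acc = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        acc = (acc + yi * num * pow(den, P - 2, P)) % P
    return acc

def read_blob(fragments: dict, f: int):
    """Decode from any f + 1 correct primary fragments.
    `fragments` maps node index -> its primary fragment (2f + 1 symbols)."""
    chosen = list(fragments.items())[: f + 1]
    matrix = []
    for r in range(f + 1):               # recover source rows r = 0..f
        row = []
        for c in range(2 * f + 1):
            pts = [(i, frag[c]) for i, frag in chosen]
            row.append(lagrange_eval(pts, r))
        matrix.append(row)
    return matrix

# Fragments produced by the encoding sketch for f = 1 and source blob
# [[1, 2, 3], [4, 5, 6]]; nodes 0 and 1 are assumed unavailable.
fragments = {2: [7, 8, 9], 3: [10, 11, 12]}
recovered = read_blob(fragments, f=1)
assert recovered == [[1, 2, 3], [4, 5, 6]]
# After decoding, R would re-encode and recompute the blob commitment to
# compare against the on-chain commitment (commitment step omitted here).
print(recovered)
```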
Walrus Decentralized Secure Blob Storage
- Writing a Blob
The process of writing a Blob in Walrus can be illustrated by Figure 4.
This process begins with the writer (➊) encoding the Blob using Red Stuff, as shown in Figure 2. This process generates sliver pairs, a set of commitments for the slivers, and a Blob commitment. The writer derives a blobid by hashing the Blob commitment along with metadata such as the file's length and encoding type.
Next, the writer (➋) submits a transaction to the blockchain to secure storage space for the Blob across a series of epochs and registers the Blob. The transaction carries the Blob's size and the Blob commitment, from which the blobid can be re-derived. The blockchain smart contract must ensure there is enough space on each node to store the encoded slivers, along with all metadata related to the Blob commitment. Payment may be sent along with the transaction to secure the space, or previously acquired free space can be attached as a resource to the request; the implementation allows both options.
Once the registration transaction is submitted (➌), the writer notifies the storage nodes that they are responsible for storing the slivers of that blobid, while sending the transaction, commitments, and the primary and secondary slivers allocated to each storage node along with proofs that the slivers are consistent with the published blobid. The storage nodes will verify the commitments and return a signed confirmation for the blobid after successfully storing the commitments and sliver pairs.
Finally, the writer waits to collect 2f + 1 signed confirmations (➍), which constitute a write certificate. This certificate will then be published on-chain (➎), marking the Blob's Point of Availability (PoA) in Walrus. The PoA indicates that storage nodes are obligated to maintain the availability of these slivers within the specified Epochs for reading. At this point, the writer can delete the Blob from local storage and can go offline. Additionally, the writer can use the PoA as proof of the Blob's availability to third-party users and smart contracts.
Nodes will listen for blockchain events to check whether the Blob has reached its PoA. If they do not hold the sliver pairs for that Blob, they will execute the recovery process to obtain all commitments and sliver pairs for the Blob until the PoA timestamp. This ensures that ultimately all correct nodes will hold all sliver pairs for the Blob.
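As a small illustration of the blobid derivation mentioned in step ➊, the sketch below hashes a blob commitment together with its metadata. The field names and serialization are assumptions, not Walrus's actual wire format.

```python
import hashlib, json

def derive_blob_id(blob_commitment: bytes, length: int, encoding: str) -> str:
    """Hypothetical blobid derivation: bind the id to the blob commitment
    plus metadata (length, encoding type), as the write flow describes."""
    meta = json.dumps({"length": length, "encoding": encoding},
                      sort_keys=True).encode()
    return hashlib.sha256(blob_commitment + meta).hexdigest()

# Anyone holding the on-chain commitment and metadata can re-derive the
# blobid and check that it matches the registered blob.
blob_id = derive_blob_id(b"\x01" * 32, length=1 << 20, encoding="RedStuff")
print(blob_id)
```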
++Summary++
In summary, Walrus's contributions include:
- Defining the problem of asynchronous complete data sharing and proposing Red Stuff, the first protocol capable of efficiently solving this problem under Byzantine fault tolerance.
- Introducing Walrus, the first permissionless decentralized storage protocol designed for low replication cost that can efficiently recover data lost to failures or participant churn.
- Introducing a staking-based economic model that combines rewards and penalties to align incentives and enforce long-term commitments, and proposing the first asynchronous challenge protocol for efficient storage proofs.
III. Industry Data Analysis
1. Overall Market Performance
1.1 Spot BTC & ETH ETF
From March 24, 2025, to March 29, 2025, the fund flows of Bitcoin (BTC) and Ethereum (ETH) ETFs exhibited different trends:
Bitcoin ETF:
- March 24, 2025: The Bitcoin ETF saw a net inflow of $84.2 million, marking the seventh consecutive day of positive inflows, with total inflows reaching $869.8 million.
- March 25, 2025: The Bitcoin ETF recorded a net inflow of $26.8 million again, bringing the cumulative inflow over eight days to $896.6 million.
- March 26, 2025: The Bitcoin ETF continued to grow, with a net inflow of $89.6 million, marking the ninth consecutive day of inflows, with total inflows reaching $986.2 million.
- March 27, 2025: The Bitcoin ETF had a net inflow of $89 million, maintaining the positive inflow trend.
- March 28, 2025: The Bitcoin ETF continued to record a net inflow of $89 million, sustaining the consecutive positive inflow trend.
Ethereum ETF:
- March 24, 2025: The Ethereum ETF saw zero net flows, ending a 13-day streak of outflows.
- March 25, 2025: The Ethereum ETF saw a net outflow of $3.3 million, resuming the outflow trend.
- March 26, 2025: The Ethereum ETF again saw a net outflow, of $5.9 million, with investor sentiment remaining cautious.
- March 27, 2025: The Ethereum ETF saw a net outflow of $4.2 million, indicating lingering market caution.
- March 28, 2025: The Ethereum ETF saw another net outflow of $4.2 million, extending the outflow trend.
In total, Ethereum spot ETFs recorded a cumulative net outflow of approximately $17.6 million over the week.
1.2. Price Trends of Spot BTC vs ETH
BTC
Analysis
As expected, BTC failed its test of the upper wedge boundary (around $89,000) last week and turned lower. This week, traders should watch three key supports: $81,400, the psychological $80,000 round number, and the year-to-date low of $76,600. For those waiting for an entry, each of these levels can serve as a point for phased accumulation.
ETH
Analysis
After failing to hold above $2,000, ETH has pulled back toward this year's low of $1,760. The subsequent trend will largely depend on BTC's performance: if BTC can stabilize above the $80,000 mark and begin a rebound, ETH is likely to form a double bottom above $1,760 and could rise to resistance around $2,300. Conversely, if BTC falls back below $80,000 and seeks support at $76,600 or lower, ETH is likely to drop to the first support at $1,700 or even the second support at $1,500.
1.3. Fear & Greed Index
2. Public Chain Data
2.1. BTC Layer 2 Summary
Analysis
From March 24 to March 28, 2025, the Bitcoin Layer-2 (L2) ecosystem experienced some significant developments:
Stacks' sBTC Deposit Cap Increase: Stacks announced the completion of the cap-2 expansion for sBTC, raising the deposit cap by 2,000 BTC, bringing the total capacity to 3,000 BTC (approximately $250 million). This increase aims to enhance liquidity and support the growing demand for Bitcoin-backed DeFi applications on the Stacks platform.
Citrea's Testnet Milestone: Bitcoin L2 solution Citrea reported an important milestone—its testnet transaction volume surpassed 10 million. The platform also updated the Clementine design, simplifying zero-knowledge proof (ZKP) validators and enhancing security, laying the groundwork for the scalability of Bitcoin transactions.
BOB's BitVM Bridging Activation: BOB (Build on Bitcoin) successfully activated BitVM bridging on its testnet, allowing users to mint Yield BTC from BTC with minimal trust assumptions. This advancement enhances interoperability between Bitcoin and other blockchain networks, enabling more complex transactions without compromising security.
Bitlayer's BitVM Bridge Release: Bitlayer launched its BitVM bridge, likewise allowing users to mint Yield BTC from BTC under minimal trust assumptions. This innovation improves the scalability and flexibility of Bitcoin transactions, supporting the development of DeFi applications within the Bitcoin ecosystem.
2.2. EVM & Non-EVM Layer 1 Summary
Analysis
EVM-Compatible Layer 1 Blockchains:
- BNB Chain's 2025 Roadmap: BNB Chain announced its vision for 2025, planning to expand to 100 million transactions per day, enhance security to address miner extractable value (MEV) issues, and introduce smart wallet solutions similar to EIP-7702. The roadmap also emphasizes the integration of artificial intelligence (AI) use cases, focusing on leveraging valuable private data and enhancing developer tools.
- Polkadot's 2025 Development: Polkadot released its 2025 roadmap, highlighting support for EVM and Solidity, aiming to enhance interoperability and scalability. The plan includes implementing a multi-core architecture to increase capacity and upgrading cross-chain messaging through XCM v5.
Non-EVM Layer 1 Blockchains:
- W Chain Mainnet Soft Launch: W Chain, a hybrid blockchain network based in Singapore, announced its Layer 1 mainnet has entered the soft launch phase. Following a successful testnet phase, W Chain introduced bridging capabilities to enhance cross-platform compatibility and interoperability. The commercial mainnet is expected to officially launch in March 2025, with plans to introduce features such as decentralized exchanges (DEX) and ambassador programs.
- N1 Blockchain Investor Support Confirmed: N1, an ultra-low-latency Layer 1 blockchain, confirmed that its original investors, including Multicoin Capital and Arthur Hayes, will continue to back the project ahead of its upcoming mainnet launch. N1 aims to give developers unrestricted scalability and ultra-low latency for decentralized applications (DApps), and it supports multiple programming languages to simplify development.
2.3. EVM Layer 2 Summary
Analysis
Between March 24 and March 29, 2025, several significant developments occurred in the EVM Layer 2 ecosystem:
- Polygon zkEVM Mainnet Beta Launch: On March 27, 2025, Polygon successfully launched the zkEVM (Zero-Knowledge Ethereum Virtual Machine) mainnet beta. This Layer 2 scaling solution improves Ethereum's scalability by executing off-chain computations, enabling faster and lower-cost transactions. Developers can seamlessly migrate their Ethereum applications to Polygon's zkEVM, as it is fully compatible with Ethereum's codebase.
- Telos Foundation's ZK-EVM Development Roadmap: The Telos Foundation announced a development roadmap for ZK-EVM based on SNARKtor. The plan includes deploying hardware-accelerated zkEVM on the Telos testnet in Q4 2024, followed by integration with the Ethereum mainnet in Q1 2025. The subsequent phases aim to integrate SNARKtor to improve verification efficiency on Layer 1, with full integration expected by Q4 2025.
IV. Macroeconomic Data Review and Key Data Release Points for Next Week
The core PCE price index year-on-year for February, released on March 28, recorded 2.7% (expected 2.7%, previous value 2.6%), marking the third consecutive month above the Federal Reserve's target, primarily driven by rising import costs due to tariffs.
This week (March 31 - April 4), important macro data release points include:
April 1: U.S. March ISM Manufacturing PMI
April 2: U.S. March ADP Employment Numbers
April 3: U.S. Initial Jobless Claims for the week ending March 29
April 4: U.S. March Unemployment Rate; U.S. March Adjusted Non-Farm Payrolls
V. Regulatory Policies
During the week, the U.S. SEC concluded its investigations into Crypto.com and Immutable, and Trump pardoned the co-founders of BitMEX. A dedicated stablecoin bill has also been formally placed on the discussion agenda, accelerating the cryptocurrency industry's path toward deregulation and compliance.
U.S.: Oklahoma Passes Strategic Bitcoin Reserve Bill
The Oklahoma House voted to pass a strategic Bitcoin reserve bill. The bill would allow the state to invest up to 10% of public funds in Bitcoin or any digital asset with a market capitalization exceeding $500 billion.
Additionally, the U.S. Department of Justice announced the dismantling of an ongoing terrorism financing scheme, seizing approximately $201,400 (at current value) in cryptocurrency, which was stored in wallets and accounts intended to fund Hamas. The seized funds originated from fundraising addresses reportedly controlled by Hamas, which have been used to launder over $1.5 million in virtual currency since October 2024.
Panama: Proposed Cryptocurrency Bill Released
Panama has released a proposed cryptocurrency bill to regulate cryptocurrencies and promote the development of blockchain-based services. The proposed bill establishes a legal framework for the use of digital assets, sets licensing requirements for service providers, and includes strict compliance measures in line with international financial standards. Digital assets are recognized as a legitimate means of payment, allowing individuals and businesses to freely agree to use digital assets in commercial and civil contracts.
EU: Possible Implementation of 100% Capital Support Requirements for Crypto Assets
According to Cointelegraph, EU insurance regulators have proposed implementing 100% capital support requirements for insurance companies holding crypto assets, citing "inherent risks and high volatility" associated with crypto assets.
South Korea: Proposed Access Blocking for 17 Overseas Applications Including Kucoin
The Financial Intelligence Unit (FIU) of South Korea announced that starting March 25, it will implement domestic access restrictions on the Google Play platform applications of 17 overseas virtual asset service providers (VASPs) that are not registered in South Korea, including KuCoin and MEXC. This means that users will not be able to install the relevant applications, and existing users will also be unable to update them.