The impact of DeepSeek on Web3 AI upstream and downstream protocols

BlockBooster
2025-02-12 19:06:50
Agents are the industry's last hope for AI. The emergence of DeepSeek has removed the constraints of computational power and painted a picture of a coming application explosion.

Author: Kevin, Researcher at BlockBooster

TLDR:

  • The emergence of DeepSeek shatters the computational power moat, with open-source models leading the new direction of computational optimization;
  • DeepSeek benefits the model and application layers in the industry supply chain, negatively impacting computational protocols in the infrastructure;
  • The advantages of DeepSeek inadvertently burst the last bubble in the Agent track, with DeFAI likely to give birth to new life;
  • The zero-sum game of project financing is expected to come to an end, with community launches and a small amount of VC funding becoming the norm.

The impact triggered by DeepSeek will ripple across the upstream and downstream of the AI industry this year. DeepSeek has enabled consumer-grade graphics cards to handle large-model training tasks that previously required high-end GPUs. The first moat surrounding AI development, computational power, is beginning to collapse. With algorithm efficiency improving at a staggering 68% per year while hardware performance advances only at the pace of Moore's Law, the deeply entrenched valuation models of the past three years are no longer applicable. The next chapter of AI will be opened by open-source models.

Although Web3 AI protocols are fundamentally different from Web2's, they inevitably feel the influence of DeepSeek, which will give rise to entirely new use cases across the upstream and downstream of Web3 AI: the infrastructure, middleware, model, and application layers.

Sorting Out the Collaborative Relationships of Upstream and Downstream Protocols

Through an analysis of technical architecture, functional positioning, and practical use cases, I have divided the entire ecosystem into four layers: infrastructure, middleware, model, and application, and sorted out their dependencies:

Infrastructure Layer

The infrastructure layer provides decentralized underlying resources (computational power, storage, L1), where computational power protocols include: Render, Akash, io.net, etc.; storage protocols include: Arweave, Filecoin, Storj, etc.; and L1 includes: NEAR, Olas, Fetch.ai, etc.

Computational power protocols support model training, inference, and framework operation; storage protocols preserve training data, model parameters, and on-chain interaction records; L1 optimizes data transmission efficiency and reduces latency through dedicated nodes.

Middleware Layer

The middleware layer serves as a bridge connecting the infrastructure with upper-layer applications, providing framework development tools, data services, and privacy protection. Data labeling protocols include: Grass, Masa, Vana, etc.; development framework protocols include: Eliza, ARC, Swarms, etc.; privacy computing protocols include: Phala, etc.

The data service layer provides fuel for model training, while the development framework relies on the computational power and storage of the infrastructure layer, and the privacy computing layer protects data security during training/inference.

Model Layer

The model layer is used for model development, training, and distribution, with open-source model training platforms like Bittensor.

The model layer relies on the computational power of the infrastructure layer and the data from the middleware layer; models are deployed on-chain through development frameworks; the model market delivers training results to the application layer.

Application Layer

The application layer consists of AI products aimed at end users, where Agents include: GOAT, AIXBT, etc.; DeFAI protocols include: Griffain, Buzz, etc.

The application layer calls pre-trained models from the model layer; relies on privacy computing from the middleware layer; and complex applications require real-time computational power from the infrastructure layer.
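The layered dependencies described above can be sketched as a small dependency map. The protocol names come from the article; the map structure and the traversal helper are illustrative assumptions, not any formal specification of these ecosystems:

```python
# Sketch of the four-layer Web3 AI stack described in the article.
# Protocol lists are from the article; the dependency structure is an
# illustrative assumption for this example.
LAYERS = {
    "infrastructure": ["Render", "Akash", "io.net", "Arweave", "Filecoin", "NEAR"],
    "middleware": ["Grass", "Masa", "Vana", "Eliza", "ARC", "Phala"],
    "model": ["Bittensor"],
    "application": ["GOAT", "AIXBT", "Griffain", "Buzz"],
}

# Each layer relies on the layers beneath it, per the article's description.
DEPENDS_ON = {
    "application": ["model", "middleware", "infrastructure"],
    "model": ["middleware", "infrastructure"],
    "middleware": ["infrastructure"],
    "infrastructure": [],
}

def transitive_deps(layer: str) -> set[str]:
    """Return every layer that `layer` directly or indirectly relies on."""
    seen: set[str] = set()
    stack = list(DEPENDS_ON[layer])
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(DEPENDS_ON[dep])
    return seen
```

For example, `transitive_deps("application")` returns all three lower layers, matching the article's point that complex applications ultimately lean on real-time infrastructure-layer computational power.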

DeepSeek May Have a Negative Impact on Decentralized Computational Power

According to one survey, about 70% of Web3 AI projects actually call OpenAI or centralized cloud platforms, only 15% use decentralized GPUs (such as Bittensor subnet models), and the remaining 15% employ a hybrid architecture (sensitive data processed locally, general tasks in the cloud).

The actual usage rate of decentralized computational power protocols is far below expectations and does not match their actual market value. The reasons for the low usage rate are threefold: Web2 developers carry over their existing toolchains when migrating to Web3; decentralized GPU platforms have yet to achieve price advantages; and some projects evade data compliance checks under the guise of "decentralization," while still relying on centralized clouds for computational power.

AWS/GCP occupy over 90% of the AI computational power market, while Akash's equivalent computational power is only 0.2% of AWS's. The moat of centralized cloud platforms includes cluster management, RDMA high-speed networking, and elastic scaling. Decentralized cloud platforms offer Web3-modified versions of these technologies but cannot fully address the remaining flaws: latency (communication between distributed nodes is six times slower than within centralized clouds) and toolchain fragmentation (PyTorch/TensorFlow do not natively support decentralized scheduling).

DeepSeek reduces computational power consumption by 50% through sparse training, and dynamic model pruning enables consumer-grade GPUs to train models with billions of parameters. Short-term market expectations for high-end GPUs have been significantly downgraded, and the potential of edge computing has been re-evaluated. As shown in the figure above, before DeepSeek emerged, the vast majority of protocols and applications in the industry ran on platforms like AWS, with only a few use cases deployed on decentralized GPU networks; those use cases focused on the price advantage of consumer-grade computational power and did not account for the impact of latency.
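The article attributes DeepSeek's savings to sparse training and dynamic pruning without detailing the mechanism. As a hedged illustration of the general idea only (not DeepSeek's actual method), the sketch below applies generic magnitude pruning to zero out half of a weight matrix; a sparse kernel could then skip roughly half of the multiply-accumulates:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights.

    Generic magnitude pruning, shown only to illustrate how sparsity
    cuts compute; DeepSeek's actual training method is not specified
    in the article.
    """
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256))
pruned = magnitude_prune(w, sparsity=0.5)  # ~50% of entries become zero
```

With 50% sparsity, half of the parameters (and, with suitable sparse kernels, roughly half of the arithmetic) drop out, which is the kind of saving the article describes.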

This situation may worsen with the arrival of DeepSeek. DeepSeek has released long-tail developers from their constraints, and low-cost, efficient inference models will spread at unprecedented speed. Many centralized cloud platforms and several countries have already begun deploying DeepSeek, and the sharp reduction in inference costs will give rise to a wave of front-end applications with huge demand for consumer-grade GPUs. Facing this impending market, centralized cloud platforms will launch a new round of user-acquisition battles, competing not only with the leading platforms but also with countless small centralized clouds. The most direct weapon will be price cuts. It is foreseeable that 4090 prices on centralized platforms will fall, which would be a disaster for Web3 computational power platforms. When price is no longer their only moat and they too are forced to cut prices, io.net, Render, Akash, and the rest may not be able to bear it. A price war would destroy their last remaining valuation ceiling, and the downward spiral of declining revenue and user loss may force decentralized computational power protocols to pivot in a new direction.

The Specific Significance of DeepSeek to Industry Upstream and Downstream Protocols

As shown in the figure, I believe DeepSeek will have different impacts on the infrastructure layer, model layer, and application layer. From a positive perspective:

The application layer will benefit from the significant reduction in inference costs: more Agent applications can afford to stay online for extended periods and complete tasks in real time;

At the same time, model costs as low as DeepSeek's can enable DeFAI protocols to form more complex swarms, with thousands of Agents deployed for a single use case. Each Agent's role will be narrow and clearly defined, greatly improving user experience and preventing the model from misinterpreting and misexecuting user inputs;

Developers in the application layer can fine-tune models, feeding in prices, on-chain data and analytics, and governance data for DeFi-related AI applications, without paying high licensing fees.

The significance of the open-source model layer has been validated since the advent of DeepSeek: high-end models are now available to long-tail developers, stimulating a broad development boom;

The computational power walls built around high-end GPUs over the past three years have been completely shattered, giving developers more choices and establishing a direction for open-source models. In the future, the competition among AI models will no longer be about computational power but rather algorithms, and this shift in belief will become the cornerstone of confidence for open-source model developers;

Specific subnets around DeepSeek will emerge one after another, with model parameters increasing under equivalent computational power, attracting more developers to join the open-source community.

From a negative perspective:

The latency inherent in infrastructure-layer computational power protocols cannot be optimized away;

Moreover, a hybrid network composed of A100s and 4090s places higher demands on coordination algorithms, which is not a strength of decentralized platforms.

DeepSeek Bursts the Last Bubble in the Agent Track, DeFAI May Give Birth to New Life, and Industry Financing Methods Are Set to Change

Agents are the industry's last hope for AI. The emergence of DeepSeek removed the constraints of computational power and painted a picture of a coming application explosion. What should have been a huge boon for the Agent track was punctured by its strong correlation with the broader industry, the US stock market, and Federal Reserve policy, bursting the remaining bubble and sending the track's market value plummeting.

In the wave of AI and industry integration, technological breakthroughs and market games have always gone hand in hand. The chain reaction triggered by the fluctuations in Nvidia's market value serves as a mirror, reflecting the deep-seated dilemmas in the AI narrative within the industry: from On-chain Agents to DeFAI engines, beneath the seemingly complete ecological map lies the harsh reality of weak technological infrastructure, hollowed-out value logic, and capital dominance. The superficially prosperous on-chain ecosystem conceals hidden ailments: a large number of high FDV tokens compete for limited liquidity, outdated assets rely on FOMO sentiment to survive, and developers are trapped in PVP competition, consuming innovative momentum. When incremental funds and user growth hit a ceiling, the entire industry falls into the "innovator's dilemma"—eager for breakthrough narratives while struggling to escape the shackles of path dependence. This state of rupture presents a historic opportunity for AI Agents: it is not only an upgrade of the technological toolbox but also a reconstruction of the value creation paradigm.

Over the past year, more and more teams in the industry have discovered that traditional financing models are failing: the tactic of giving small shares to VCs, maintaining high control, and waiting for a pump is no longer sustainable. With VCs tightening their pockets, retail investors refusing to take over, and high thresholds for major exchange listings, a new playbook better suited to bear markets is emerging: joint launches with leading KOLs plus a small number of VCs, large community distributions, and low-market-cap cold starts.

Innovators represented by Soon and Pump Fun are opening new paths through "community launches"—collaborating with leading KOLs to endorse and directly distribute 40%-60% of tokens to the community, launching projects at valuations as low as $10 million FDV, achieving millions of dollars in financing. This model builds consensus FOMO through KOL influence, allowing teams to lock in profits early while exchanging high liquidity for market depth. Although it sacrifices short-term control advantages, it can repurchase tokens at low prices during bear markets through compliant market-making mechanisms. Essentially, this is a paradigm shift in power structure: from a VC-led game of hot potato (institutional takeovers - listings - retail purchases) to a transparent game of community consensus pricing, forming a new symbiotic relationship between project parties and the community in liquidity premiums. As the industry enters a period of transparency revolution, projects that cling to traditional control logic may become the remnants of an era swept away by the tide of power migration.
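The launch figures above (40%-60% of tokens to the community at roughly $10 million FDV) imply the "millions of dollars in financing" the article mentions. A back-of-envelope check, where `sold_fraction` is a purely illustrative assumption about how much of the community allocation is actually sold at launch:

```python
def implied_raise(fdv_usd: float, community_share: float, sold_fraction: float) -> float:
    """Back-of-envelope proceeds if `sold_fraction` of the community
    allocation is sold at the launch's fully diluted valuation."""
    return fdv_usd * community_share * sold_fraction

# Article's figures: $10M FDV, 40%-60% of tokens to the community.
# Assumption for illustration: half of that allocation is sold at launch.
low = implied_raise(10_000_000, 0.40, 0.5)   # $2.0M
high = implied_raise(10_000_000, 0.60, 0.5)  # $3.0M
```

Even under these conservative assumptions the raise lands in the low millions, consistent with the article's claim, while leaving most supply in community hands.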

The short-term pain in the market precisely confirms the irreversibility of the long-term technological surge. When AI Agents reduce on-chain interaction costs by two orders of magnitude, and adaptive models continuously optimize the capital efficiency of DeFi protocols, the industry can expect the long-awaited mass adoption. This transformation does not rely on conceptual hype or capital incubation but is rooted in technology that penetrates real demand; just as the electrical revolution did not stall because light bulb companies went bankrupt, Agents will become the true golden track after the bubble bursts. DeFAI may be the fertile ground for this new life: as low-cost inference becomes commonplace, we may soon see hundreds of Agents combined into a single swarm use case. Under equivalent computational power, the significant increase in model parameters will allow Agents in the open-source era to be fine-tuned far more thoroughly, and even complex user instructions can be decomposed into task pipelines that individual Agents execute in full. With each Agent optimizing its own on-chain operations, DeFi protocols may see greater overall activity and liquidity. More complex DeFi products led by DeFAI will emerge, and this is precisely where new opportunities arise after the last round of bubbles burst.
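The swarm pattern described above, where a complex user instruction is decomposed into narrow tasks each handled by one specialized Agent, can be sketched minimally. The agent roles and routing below are hypothetical illustrations, not any specific DeFAI protocol's design:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """A narrowly scoped agent: one role, one handler function."""
    role: str
    handle: Callable[[str], str]

def run_pipeline(instruction: str, pipeline: list[Agent]) -> list[str]:
    """Pass the user's instruction through each specialized agent in order,
    collecting one result per agent (the task-pipeline decomposition the
    article describes)."""
    return [f"{agent.role}: {agent.handle(instruction)}" for agent in pipeline]

# Hypothetical DeFi swarm: quote -> risk check -> execution plan.
swarm = [
    Agent("price-feed", lambda q: "fetched pool quotes"),
    Agent("risk-check", lambda q: "slippage within bounds"),
    Agent("executor", lambda q: "built swap transaction"),
]
steps = run_pipeline("swap 1 ETH to USDC at best price", swarm)
```

Because each agent's role is small and explicit, a cheap fine-tuned model can serve each stage fully, which is the user-experience gain the article attributes to low-cost inference.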

About BlockBooster: BlockBooster is an Asian Web3 venture studio supported by OKX Ventures and other top institutions, dedicated to being a trusted partner for outstanding entrepreneurs. We connect Web3 projects with the real world through strategic investments and deep incubation, helping quality entrepreneurial projects grow.
