In the era of AI, how should Web3 companies compete with traditional AI giants?

ChainCatcher Selection
2024-05-12 20:21:18
This article is not blind optimism or promotion, but a sober reflection on today's challenges and tomorrow's opportunities.

Original Title: “Flipping the AI coin”

Original Author: Gagra Ventures

Original Compilation: Fairy, ChainCatcher

Editor's Note: The author looks past the halo of technology at the many obstacles, such as capital and hardware, that Web3 projects face in advancing AI development. Although Web3's original intention is to break centralization and realize the ideal of decentralization, in practice it is often swayed by market narratives and token incentives, drifting away from that purpose.

ChainCatcher compiles the original text as follows:

The calls to combine AI and Web3 are growing louder, but this is not another optimistic venture-capital piece. We are optimistic about merging these two technologies, but the text below is a call to action; without one, that optimism will not be realized.

Why? Because developing and running the best AI models requires huge capital expenditure, cutting-edge hardware that is often hard to obtain, and very domain-specific R&D. Crowdsourcing these resources through crypto incentives, as most Web3 AI projects are doing, is not enough to offset the tens of billions of dollars invested by the large companies that control AI development. Given the hardware constraints, this may be the first major software paradigm that clever and creative engineers outside the incumbent organizations have no way to crack.

Software is "eating the world" at an ever-faster pace and will soon grow exponentially as artificial intelligence accelerates. As things stand, all of these gains are flowing to the tech giants, while end users, from governments to large enterprises, are increasingly constrained by their power.

Misaligned Incentives

All of this is happening at a highly inconvenient time: 90% of decentralized-network participants are busy chasing the "golden egg" of easy, narrative-driven fiat gains.

Developers follow the investors in our industry, not the other way around. This manifests in many forms, from open acknowledgment to subtler, subconscious motivations, but narratives, and the markets that form around them, drive most decisions in Web3. As in classic reflexive bubbles, participants are too focused on the inside to notice the outside world, except where it helps advance this cycle's narrative. And AI is clearly the biggest narrative, since it is in the middle of its own boom.

We have communicated with dozens of teams at the intersection of AI and cryptocurrency and can confirm that many of them are very capable, mission-driven, and passionate builders. But human nature is such that when faced with temptation, we often yield to it and then rationalize those choices afterward.

The path of easy liquidity has been a historical curse of the crypto industry; at this point, it has delayed valuable adoption and development by years. It has turned even the most loyal crypto believers toward "pumping the token," rationalized on the grounds that builders who hold tokens may have better opportunities.

The low sophistication of institutional and retail capital gives builders the opportunity to make claims detached from reality while still benefiting from valuations as if those claims had already been realized. The result is deep-rooted moral hazard and capital destruction, and few such strategies prove effective over the long term. Demand is the mother of all invention; when demand disappears, so does invention.

The timing of this situation couldn't be worse. While all the smartest tech entrepreneurs, state actors, and businesses of all sizes are scrambling to ensure they get a piece of the AI revolution, cryptocurrency founders and investors are opting for "quick 10x." And in our view, this is the real opportunity cost.

Overview of Web3 AI Prospects

Given the aforementioned incentives, Web3 AI projects can actually be categorized into:

  • Reasonable (which can be further divided into realists and idealists)
  • Semi-reasonable
  • Fraudulent

Fundamentally, we believe project builders should clearly understand how to keep pace with their Web2 competitors and know which areas are competitive and which are delusional, even though these delusional areas may be marketed to venture capital firms and the public.

Our goal is to be able to compete here and now. Otherwise, the pace of AI development may leave Web3 behind, and the world will instead leap to a "Web4" split between Western corporate AI and Chinese state AI. Those who cannot become competitive in time, and who count on distributed technology catching up over a longer horizon, are too optimistic to be taken seriously.

Clearly, this is just a very rough overview, and even among the "fraudulent" group, there are at least a few serious teams (perhaps more are just delusional). But this article is a call to action, so we do not intend to be objective but rather urge readers to feel a sense of urgency.

Reasonable:

A small number of founders developing middleware solutions for "AI on-chain" understand that decentralized training or inference of the models users actually need (that is, cutting-edge ones) is currently impractical, if not impossible.

Therefore, finding a way to connect the best centralized models to on-chain environments, so they can benefit from sophisticated automation, is a good enough first step for them. Hardware-isolated TEEs ("air-gapped" processors) that can host API access points, two-way oracles (for indexing on-chain and off-chain data in both directions), and co-processor architectures that give agents verifiable off-chain compute environments currently seem to be the best available solutions.
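To make that pattern more concrete, below is a minimal Python sketch of the flow described above, under purely hypothetical assumptions: the enclave key, the model-call stub, and the attestation format are illustrative placeholders rather than any specific protocol's API, and a real deployment would rely on genuine TEE attestation and on-chain signature verification rather than the HMAC stand-in used here.

```python
# Hypothetical sketch: an off-chain service, imagined as running inside a
# hardware-isolated TEE, fulfils an on-chain inference request by calling a
# centralized model API and returning a signed result that an oracle or
# co-processor contract could later verify. All names are placeholders.

import hashlib
import hmac
import json

# In a real TEE this key would be sealed to the enclave; here it is a stand-in.
ENCLAVE_SIGNING_KEY = b"hypothetical-enclave-key"


def call_centralized_model(prompt: str) -> str:
    """Placeholder for a call to a closed-source model API (e.g. an HTTP request)."""
    return f"model output for: {prompt}"


def fulfil_inference_request(request_id: int, prompt: str) -> dict:
    """Handle a request relayed from chain and return a signed attestation."""
    output = call_centralized_model(prompt)
    payload = json.dumps(
        {"request_id": request_id, "prompt": prompt, "output": output},
        sort_keys=True,
    ).encode()
    # HMAC stands in for the enclave's attestation/signature over the result.
    signature = hmac.new(ENCLAVE_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "signature": signature}


def verify_attestation(response: dict) -> bool:
    """What an on-chain verifier or oracle would check before accepting the result."""
    expected = hmac.new(
        ENCLAVE_SIGNING_KEY, response["payload"].encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, response["signature"])


if __name__ == "__main__":
    resp = fulfil_inference_request(42, "summarize this governance proposal")
    print(verify_attestation(resp))  # True
```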

There is also a co-processor architecture that uses zero-knowledge proofs (ZKPs) to snapshot state changes (rather than verifying complete computations), which we believe is feasible in the medium term.

For the same issue, a more idealistic approach is to attempt to verify off-chain inference to align its trust assumptions with on-chain computation.

We believe the goal should be to enable AI to perform on-chain and off-chain tasks within a unified operating environment. However, most supporters of verifiable inference talk about thorny goals like "trusting model weights," which will only become relevant in a few years, if ever. Recently, founders in this camp have begun exploring alternative ways to verify inference, though initially all of them were ZKP-based. While many smart teams are working on ZKML (zero-knowledge machine learning), they risk overestimating how quickly cryptographic optimization can advance relative to the complexity and computational requirements of AI models. We therefore believe they are not competitive today, although some recent progress is interesting and should not be overlooked.

Semi-reasonable:

Consumer applications wrap closed-source and open-source models (for example, Stable Diffusion or Midjourney for image generation). Some of these teams entered the market early and have traction with real users, so it would be unfair to label them all frauds, but only a few are thinking deeply about how to develop their underlying models in a decentralized way or how to innovate on incentive design. On the token side, there are some interesting governance and ownership designs. Most projects in this category, however, simply tokenize a centralized wrapper around an API such as OpenAI's to capture a valuation premium or give the team faster liquidity.

The problem that neither of the above two camps has solved is the training and inference of large models in a decentralized environment. Currently, it is impossible to train foundational models within a reasonable time frame without relying on tightly connected hardware clusters. Given the level of competition, "reasonable time" is a key factor.

Recently, there have been some promising research results: in theory, methods like "Differential Data Flow" could eventually extend to distributed computing networks and increase their capacity (as network capabilities catch up with data-flow demands). However, competitive model training still requires communication between localized clusters rather than individual distributed devices, as well as cutting-edge compute (retail GPUs are becoming increasingly uncompetitive).

Research on enabling localized inference by reducing model size (one of the two approaches to decentralizing inference) has also made progress recently, but no existing Web3 protocol takes advantage of it yet.

The issues of decentralized training and inference logically lead us to the last of the three camps, which is by far the most important one and thus the most emotionally triggering for us.

Fraudulent:

Infrastructure applications here mainly target the decentralized-server space, offering bare hardware or decentralized environments for model training and hosting. Some software-infrastructure projects push protocols such as federated learning (decentralized model training), or combine software and hardware components into a single platform where people can essentially train and deploy their decentralized models end to end. Most of them lack the sophistication needed to actually solve the stated problems, and the naive idea of "token incentives plus a market tailwind" prevails. None of the solutions we see in public or private markets can compete meaningfully here and now. Some may evolve into viable (but niche) products, but what we need now are fresh, competitive solutions, and those can only come from innovative designs that address the bottlenecks of distributed computing. In training, speed is not the only major problem: verifying completed work and coordinating training workloads are, too, on top of bandwidth bottlenecks.

We need a set of competitive, truly decentralized foundational models, and those require decentralized training and inference to work. Losing on AI could completely negate everything achieved since Ethereum emerged as the "decentralized world computer": if computers become AI, and AI is centralized, then there will be no world computer to speak of, except in some dystopian version.

Training and inference are at the core of AI innovation. As other areas of the AI world move toward tighter architectures, Web3 needs some orthogonal solutions to compete, as the feasibility of direct competition is becoming increasingly low.

The Scale of the Problem

Everything comes down to compute. The more that is invested in training and inference, the better the results. Yes, there are adjustments and optimizations here and there, and compute itself is not homogeneous; there are all kinds of new methods for overcoming the bottlenecks of traditional von Neumann processing units. But it all still comes down to how many matrix multiplications you can perform, over how large a block of memory, and how fast.
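A rough back-of-envelope sketch of what that means in practice, using the commonly cited approximation of roughly 6 x parameters x tokens floating-point operations to train a transformer; the model size, token count, and per-GPU throughput below are assumed, illustrative figures, not measurements:

```python
# Illustrative back-of-envelope only: model size, token count, and per-GPU
# throughput are assumed figures, not measurements.
params = 70e9                      # assume a 70B-parameter model
tokens = 2e12                      # assume a 2T-token training set
train_flops = 6 * params * tokens  # ~6*N*D rule of thumb: ~8.4e23 FLOPs

sustained_flops_per_gpu = 4e14     # assume ~400 TFLOP/s sustained per cutting-edge GPU
gpu_seconds = train_flops / sustained_flops_per_gpu

for cluster_size in (1, 1_000, 10_000):
    days = gpu_seconds / cluster_size / 86_400
    print(f"{cluster_size:>6} GPUs -> ~{days:,.0f} days of wall-clock training")

# A single device would need decades of continuous compute; only thousands of
# tightly interconnected accelerators bring this down to days or weeks, which
# is why loosely coupled retail hardware struggles to compete.
```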

This is why we see the so-called "hyperscalers" making such aggressive investments in data centers, each hoping to build a full stack with AI models on top and the hardware that powers them underneath: OpenAI (models) + Microsoft (compute), Anthropic (models) + AWS (compute), Google (both), and Meta (increasingly both, as it doubles down on its own data-center buildout). There are more nuances, interaction dynamics, and stakeholders involved, but we will not enumerate them all. Taken together, the hyperscalers are investing unprecedented billions in data-center construction and creating synergies between their compute and AI offerings, which are expected to yield enormous returns as AI spreads through the global economy.

Let's take a look at the expected construction levels of these four companies just this year:

NVIDIA CEO Jensen Huang has suggested that a total of $1 trillion will be invested in AI acceleration over the next few years. Recently, he doubled that prediction to $2 trillion, reportedly due to interest from sovereign enterprises.

Analysts at Altimeter expect global spending on AI-related data centers to reach $160 billion in 2024 and over $200 billion in 2025.

Now, compare these numbers with the incentives provided by Web3 for independent data center operators to encourage them to expand capital expenditures on the latest AI hardware:

Currently, the combined market capitalization of all decentralized physical infrastructure (DePIN) projects is about $40 billion, made up mostly of relatively illiquid and speculative tokens. Essentially, these networks' market caps set the estimated upper bound on the total capital expenditure of their contributors, since that buildout is incentivized with tokens. However, the current market cap is of little use as a reference, since those tokens have already been issued.

So let's assume that, over the next 3-5 years, an additional $80 billion (twice the existing value) of private and public DePIN token capital comes to market as incentives, and assume these tokens are used 100% for AI use cases. Even dividing this very rough estimate by 3 (years) and comparing its dollar value with the cash the hyperscalers are investing in 2024 alone, it is clear that slapping token incentives onto a pile of "decentralized GPU network" projects is not enough.
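Spelling out that rough arithmetic (using only the figures already cited above; the three-year spread and the 100% AI allocation are the same assumptions as in the text):

```python
# The paragraph's rough arithmetic, spelled out. All figures are the article's
# own assumptions or cited estimates; nothing new is introduced here.
assumed_new_tokens = 2 * 40e9  # twice the current ~$40B DePIN market cap
years = 3                      # low end of the 3-5 year window

web3_incentive_per_year = assumed_new_tokens / years  # ~$26.7B per year
hyperscaler_capex_2024 = 160e9                        # Altimeter's 2024 estimate

print(f"Web3 token incentives per year: ~${web3_incentive_per_year / 1e9:.1f}B")
print(f"Hyperscaler AI data-center spend in 2024: ~${hyperscaler_capex_2024 / 1e9:.0f}B")
print(f"Gap: ~{hyperscaler_capex_2024 / web3_incentive_per_year:.0f}x in a single year")
```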

Moreover, billions of dollars in investor demand are needed to absorb these tokens, as the operators of these networks will sell a large amount of mined tokens to cover significant capital and operational expenses. More funds are needed to drive these tokens up and incentivize expansion beyond hyperscale companies.

However, those with a deep understanding of how Web3 servers currently operate may argue that a large portion of "decentralized physical infrastructure" is actually running on the cloud services of these hyperscale companies. Certainly, the surge in demand for GPUs and other AI-specific hardware is driving more supply, which will ultimately make cloud leasing or purchasing cheaper. At least, that is the expectation.

But at the same time, consider this: NVIDIA now has to prioritize among customers for its latest-generation GPUs. It has also begun competing with the largest cloud providers on their own turf, offering AI platform services to enterprise customers already locked into those hyperscale clouds. This will eventually push it either to build its own data centers over time (which would erode the fat margins it currently enjoys, and is therefore unlikely) or to restrict its AI hardware sales largely to its partner network of cloud providers.

Additionally, NVIDIA's competitors launching AI-specific hardware mostly rely on the same TSMC-made chips as NVIDIA. So essentially all AI hardware companies are currently competing for TSMC's capacity, and TSMC, too, must prioritize certain customers. Samsung, and potentially Intel (which is trying to return to cutting-edge chip manufacturing as quickly as possible to produce chips for its own hardware), may be able to absorb some of the extra demand, but TSMC currently produces most AI-related chips, and scaling and calibrating cutting-edge manufacturing (3 nm and 2 nm) takes years.

Finally, due to U.S. restrictions on NVIDIA and TSMC, China is essentially cut off from the latest generation of AI hardware. Unlike Web3, Chinese companies actually have their own competitive models, especially LLMs from companies like Baidu and Alibaba, which require a large amount of previous-generation equipment to operate.

For one or more of the reasons above, there is a non-negligible risk that, as the AI arms race intensifies and takes precedence over the cloud business, the hyperscalers will limit outside access to their AI hardware. Essentially, it is a scenario in which they keep all AI-related cloud capacity for themselves, no longer offering it to anyone else, while also gobbling up all of the latest hardware. That would push other large buyers, sovereign nations included, to compete harder for the remaining computing supply, while the consumer-grade GPUs left over become increasingly uncompetitive.

Clearly, this is an extreme scenario, but if the hardware bottleneck persists, the rewards will be great enough for the major players to withdraw their capacity from the open market. As a result, decentralized operators, such as second-tier data centers and owners of retail-grade hardware (who make up the majority of Web3 DePIN providers), would be shut out of the competition.

The Other Side of the Coin

While cryptocurrency founders are still asleep, AI giants are closely watching cryptocurrency. Government pressure and competition may prompt them to adopt cryptocurrency to avoid being shut down or heavily regulated.

Stability AI's founder recently stepped down to begin "decentralizing" his company, one of the earliest public hints. He had previously made no secret of his plan to launch a token after the company's successful listing, which somewhat exposes the real motive behind the anticipated move.

Similarly, while Sam Altman is not involved in operating Worldcoin, the crypto project he co-founded, its token undoubtedly trades as a proxy for OpenAI. Whether there is a path to connecting internet token projects with AI R&D projects will only become clear with time, but the Worldcoin team seems to recognize that the market is testing this hypothesis.

For us, it is very telling that AI giants are exploring different paths to decentralization. The problem we see here, again, is that Web3 has not produced meaningful solutions. "Governance tokens" are for the most part a meme, and currently only tokens that explicitly avoid direct ties between asset holders and the development and operation of their networks, such as BTC and ETH, are truly decentralized.

The same incentives that slow technological development also shape how the governance designs of crypto networks evolve. Startup teams simply slap a "governance token" on their product, hoping to stumble onto a new path as momentum builds, but in the end they remain stuck in the "governance theater" that surrounds resource allocation.

Conclusion

The AI race is on, and everyone is taking it very seriously. We can find no flaw in the big tech companies' logic of expanding computing capacity: more compute means better AI, and better AI means lower costs, new revenue, and greater market share. For us, that means the bubble is justified, but all the frauds will still be flushed out in the inevitable future shakeout.

Centralized, big-corporate AI dominates the field, and startups struggle to keep up. Web3, though late, is joining the race too. Compared with Web2 startups, the market rewards crypto AI projects far too generously, which pushes founders to shift their focus from shipping products to pumping token prices at a critical moment, and that window is closing fast. So far, no innovation has managed to sidestep the need to scale compute in order to compete.

Now a credible open-source movement is emerging around consumer-facing models. It began with just a few centralized companies (such as Meta and Stability AI) choosing to compete for market share against larger closed-source rivals, but the community is now catching up and putting pressure on the leading AI companies. That pressure will keep weighing on closed-source AI development, though its impact will remain limited until open-source products truly catch up. This is another significant opportunity for Web3, but only if it solves decentralized model training and inference.

So while on the surface there appears to be a "classic" disruption opportunity, the reality is far from it. AI is inseparable from compute, and without breakthrough innovation, nothing will change that in the next 3-5 years, the critical period that will determine who controls and directs the development of AI.

And while demand is driving supply-side efforts, the computing market itself cannot "bloom everywhere": structural factors, such as chip manufacturing constraints and economies of scale, limit competition among manufacturers.

We remain optimistic about human ingenuity and are confident that there are enough smart and noble people who can attempt to crack the AI problem in a way that benefits the free world rather than top-down corporate or government control. However, this opportunity seems very slim, at best a coin toss, while Web3 founders are busy flipping coins for economic benefits rather than making a real impact on the world.
