The other side of AI tokens: Most projects are busy with financial interests rather than real-world impact
Original Title: Flipping the AI coin
Author: Gagra
Compiled by: Deep Tide TechFlow
Abstract
- This is not another optimistic venture capital piece on the "AI + Web3" space. We are optimistic about the convergence of these two technologies, but this article is a call to action; without one, that optimism will eventually lose its justification.
- Why? Because developing and running the best AI models requires massive capital expenditure on cutting-edge and often hard-to-obtain hardware, as well as domain-specific R&D. Simply crowdsourcing through crypto incentives, as most Web3 AI projects do, cannot offset the tens of billions of dollars invested by the large companies that firmly control AI development. Given the hardware constraints, this may be the first major software paradigm that smart and creative engineers outside the incumbent organizations simply lack the resources to disrupt.
- The speed at which software is "eating the world" is picking up and will soon grow exponentially as AI accelerates. Right now, all of this "cake" is going to the tech giants, while end users, from governments and large enterprises down to consumers, become ever more dependent on their power.
Incentive Misalignment
All of this could not be unfolding at a worse time, when 90% of decentralized-network participants are busy chasing the easy gains of narrative-driven development. Yes, developers are following investors into our industry rather than the other way around. The motivations range from openly acknowledged to subtler, subconscious ones, but the narratives and the markets that form around them drive a large share of Web3 decision-making. Participants are so immersed in this reflexive bubble that they barely notice the outside world, except insofar as it helps propel the narratives that feed the cycle. And AI is clearly the biggest narrative of them all, since it is in the middle of its own boom.
We have engaged with dozens of teams at the intersection of AI x Crypto and can confirm that many of these teams are highly capable, mission-driven, and passionately building projects. But human nature is such that when faced with temptation, we often yield to it and then rationalize those choices afterward.
The easy availability of liquidity has been a historical curse for the crypto industry, slowing its development and delaying useful adoption by years. It turns even the most faithful crypto believers toward "speculative tokens." The rationalization is that more capital, held in the form of a token, gives these builders a better chance.
The relative immaturity of both institutional and retail capital lets builders make unrealistic claims while still being valued as if those claims had already been delivered. The result is moral hazard and capital destruction, and very few such strategies work out in the long run. Demand is the mother of invention; when demand disappears, so do the inventions.
The timing of this situation could not be worse. While all the smartest tech entrepreneurs, national leaders, and businesses of all sizes are scrambling to ensure they benefit from the AI revolution, crypto founders and investors are opting for "rapid growth." In our view, this is the true opportunity cost.
Overview of the Web3 AI Market
Given the incentives mentioned above, the categorization of Web3 AI projects essentially boils down to:
- Legitimate (further divided into realists and idealists)
- Semi-legitimate
- Fakes
Broadly speaking, we believe builders know perfectly well what it takes to keep pace with their Web2 competitors and in which verticals they can realistically compete, while in the others they are more like dreamers, even though all of it can still be marketed to venture capitalists and an immature public.
The goal is to be able to compete right now. Otherwise, the pace of AI development may leave Web3 behind as the world moves toward a dystopian Web4 of Western corporate AI and Chinese state AI. Those who cannot become competitive quickly, and who count on distributed technology catching up over a longer time frame, are too optimistic to be taken seriously.
Clearly, this is a very rough generalization, and even among the fakes, there are at least a few serious teams (perhaps more dreamers). But this article is a call to action, so we do not intend to remain objective but rather urge readers to feel a sense of urgency.
Legitimate
"On-chain AI" middleware. The founders behind these solutions, though few, understand that decentralized training or reasoning for the models users actually want is currently unfeasible, if not impossible. Therefore, connecting the best centralized models to on-chain environments to benefit from complex automation is a good enough first step for them. Currently, hardware isolation environments (TEE, or "trusted execution environments"), bi-directional oracles (for bi-directional indexing of on-chain and off-chain data), and providing verifiable off-chain computing environments for agents seem to be the best solutions. There are also some co-processor architectures using zero-knowledge proofs (ZKP) for snapshot state changes, rather than verifying complete computations, which we also consider feasible in the medium term.
A more idealistic approach to the same problem tries to verify off-chain inference so that its trust assumptions match those of on-chain computation. In our view, the goal should be to let AI perform on-chain and off-chain tasks within a single coherent runtime. However, most proponents of inference verifiability talk about vague goals like being able to "trust the model weights," which will not actually matter for years, if ever. Founders in this camp have recently begun exploring alternative ways to verify inference, but it all started with ZKPs. While many smart teams are working on so-called ZKML, they are taking too big a gamble in expecting cryptographic optimizations to outpace the complexity and compute requirements of AI models. We therefore do not consider them competitive for now. Still, some recent developments are interesting and should not be ignored.
Semi-Legitimate
Consumer applications that wrap closed- and open-source models (e.g., Stable Diffusion or Midjourney for image generation). Some of these teams were first to market and have real user traction. So it would be unfair to label them all as fakes, but only a few are thinking deeply about how to develop their underlying models in a decentralized way and innovate on incentive design. There are some interesting experiments with governance and ownership here. Most projects in this category, however, simply add a token on top of an otherwise centralized wrapper, such as one around the OpenAI API, to capture a valuation premium or provide faster liquidity for the team.
Neither of the camps above addresses training and inference for large models in a decentralized setting. Today there is no way to train a foundational model in a reasonable time frame without relying on tightly interconnected hardware clusters. And given the level of competition, "reasonable time" is the key constraint.
Some promising recent research suggests that, in theory, methods like differential data flow could scale to distributed compute networks and increase their capacity in the future (as network capabilities catch up with data-flow requirements). But competitive model training still demands communication between localized clusters, rather than across single distributed devices, as well as cutting-edge compute (retail GPUs are increasingly uncompetitive).
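To see why, consider a back-of-envelope estimate, a minimal sketch assuming naive data parallelism where every worker exchanges a full set of fp16 gradients each optimizer step, with round, illustrative numbers for the model and the links (none of these figures come from the article):

```python
# Why gradient synchronization over the open internet dominates step time.
# Assumptions: naive data parallelism, full fp16 gradient exchange per step,
# no overlap, compression, or topology tricks. All numbers are illustrative.

params = 70e9                 # a 70B-parameter model
grad_bytes = params * 2       # fp16 gradients: ~140 GB per step

links_bits_per_second = {
    "consumer broadband (1 Gbit/s)": 1e9,
    "datacenter InfiniBand (400 Gbit/s)": 400e9,
    "NVLink-class fabric (~7 Tbit/s)": 7.2e12,
}

for name, bps in links_bits_per_second.items():
    seconds = grad_bytes * 8 / bps
    print(f"{name}: ~{seconds:,.1f} s of pure transfer per step")

# Over broadband the exchange alone takes ~19 minutes per step; inside a
# tightly coupled cluster it is a fraction of a second. That gap is why
# competitive training stays within localized clusters.
```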
There has also been research progress on localizing inference (one of the two ways to decentralize it) by shrinking model sizes, but no existing Web3 protocol takes advantage of it yet.
The problems of decentralized training and inference logically lead us to the last and most important camp, and the one that triggers the strongest emotional response in us.
Fakes
Infrastructure plays focused mainly on the decentralized server space, offering bare-metal hardware or decentralized environments for model training and hosting. There are also software-infrastructure projects pushing protocols for federated learning (decentralized model training), or merging software and hardware components into a single platform where people can essentially train and deploy decentralized models end to end. Most of them lack the sophistication needed to actually solve the stated problems; the naive idea of "token incentives + market tailwinds" prevails here. None of the solutions we see in the public or private markets can compete meaningfully right now. Some may evolve into viable (but niche) products, but what we need now is fresh, competitive solutions, and those can only come from designs that address the bottlenecks of distributed computing. In training, the problem is not just speed but also the verifiability of completed work and the coordination of training workloads, which compounds the bandwidth bottleneck.
We need a set of competitive, genuinely decentralized foundational models, and they need decentralized training and inference to work. If computers become intelligent while AI stays centralized, there will be no world computer to speak of, only some dystopian version of one.
Training and inference are the core of AI innovation. As the rest of the AI world moves toward ever tighter architectures, Web3 needs orthogonal solutions to compete, because competing head-on is becoming less and less feasible.
The Scale of the Problem
It all comes down to compute. The more you throw at training and inference alike, the better the results. Yes, there are tweaks and optimizations, and compute itself is not homogeneous; there are all kinds of new approaches to overcoming the bottlenecks of traditional von Neumann processing units. But it still boils down to how many matrix multiplications you can run over how large a block of memory, and how fast.
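As a rough illustration of what "more compute, better results" means in practice, here is a minimal sketch using the common ~6·N·D rule of thumb for dense transformer training FLOPs; the model size, token count, and hardware figures below are illustrative assumptions, not numbers from the article:

```python
# Back-of-envelope: training time is essentially total FLOPs divided by how
# much sustained matrix-multiply throughput you can buy. Illustrative numbers.

params = 70e9            # model parameters (N)
tokens = 1.4e12          # training tokens (D)
train_flops = 6 * params * tokens            # ~6*N*D rule of thumb: ~5.9e23

gpus = 1024              # tightly interconnected accelerators
peak_flops = 1e15        # ~1 PFLOP/s per accelerator (order of magnitude)
utilization = 0.4        # realistic fraction of peak actually sustained

seconds = train_flops / (gpus * peak_flops * utilization)
print(f"~{seconds / 86400:.0f} days on {gpus} accelerators")  # roughly 17 days

# Halve the cluster (or the utilization) and the run takes twice as long,
# which is exactly why capital expenditure on compute is the deciding factor.
```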
This is why we see the so-called "hyperscale operators" investing so heavily in data centers, all aiming to build a vertically integrated stack with leading AI models at the top and the hardware that powers them underneath: OpenAI (models) + Microsoft (compute), Anthropic (models) + AWS (compute), Google (both), and Meta (doubling down on data-center expansion and increasingly involved in both). There are more nuances, dynamics, and participants, but we will not go into them here. The big picture is that the hyperscale operators are spending unprecedented tens of billions of dollars on data-center expansion and on creating synergies between their compute and AI offerings, expecting huge returns as AI spreads through the global economy.
Let’s just look at the expected expansion levels of these four companies this year:
- Meta expects capital expenditures in 2024 to be between $30-37 billion, likely heavily skewed toward data centers.
- Microsoft’s capital expenditure in 2023 was about $11.5 billion, and it is rumored to invest $40-50 billion in 2024-25! This can be partially inferred from the massive data center investments announced in several countries: $3.2 billion in the UK, $3.5 billion in Australia, €2.1 billion in Spain, €3.2 billion in Germany, $1 billion in Georgia, and $10 billion in Wisconsin. And these are just some regional investments in their network of 300 data centers spread across more than 60 regions. There are also rumors that Microsoft may spend an additional $100 billion to build a supercomputer for OpenAI!
- Amazon’s leadership expects capital expenditure to grow significantly in 2024 from the $48 billion spent in 2023, driven primarily by AWS infrastructure expansion for AI.
- Google spent $11 billion in Q4 2023 alone to expand its servers and data centers. The company acknowledges that these investments were made in anticipation of AI demand and expects both the pace and the total amount of its infrastructure spending to increase significantly in 2024 because of AI.
[Figure: spending on NVIDIA AI hardware in 2023]
NVIDIA CEO Jensen Huang has been touting $1 trillion of investment in AI acceleration over the coming years. He recently doubled that prediction to $2 trillion, reportedly prompted by interest from sovereign players. Analysts at Altimeter expect global spending on AI-related data centers to reach $160 billion in 2024 and over $200 billion in 2025.
Now compare those numbers with what Web3 can offer independent data-center operators to incentivize them to expand capital expenditure on the latest AI hardware:
- The total market capitalization of all decentralized physical infrastructure (DePIN) projects currently sits at around $40 billion, in relatively illiquid and mostly speculative tokens. Essentially, the market cap of these networks is an estimate of the upper bound on their contributors' total capital expenditure, since that build-out is incentivized with the tokens. However, the current market cap is of little use here, because most of it has already been issued.
- So let us assume that over the next 3-5 years an additional $80 billion (twice the current value) of private and public DePIN token market cap enters the market as incentives, and assume all of it goes to AI use cases.
Even if we spread this very rough estimate over 3 years and compare its dollar value with the cash the hyperscale operators are spending in 2024 alone, it is clear that throwing token incentives at a pile of "decentralized GPU network" projects is not enough.
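The arithmetic, a minimal sketch using only the hypothetical $80 billion incentive figure above and the ~$160 billion 2024 estimate cited earlier, makes the gap explicit:

```python
# Comparing the optimistic DePIN incentive budget with hyperscaler cash spend.
# Inputs are the rough figures already used in the text, nothing more precise.

depin_incentives_usd = 80e9      # assumed new token incentives over ~3 years
years = 3
hyperscalers_2024_usd = 160e9    # analysts' estimate for AI data centers, 2024

depin_per_year = depin_incentives_usd / years
print(f"DePIN incentives: ~${depin_per_year / 1e9:.0f}B per year")    # ~$27B
print(f"Hyperscalers, 2024 alone: ~${hyperscalers_2024_usd / 1e9:.0f}B")
print(f"Gap: ~{hyperscalers_2024_usd / depin_per_year:.0f}x")          # ~6x
```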
There also needs to be billions of dollars of real demand to absorb these tokens, since the operators of these networks sell a large share of the coins they mine to cover their capital expenditure. And billions more are needed to push token values up and incentivize enough build-out to overtake the hyperscale operators.
However, anyone with a close understanding of how most Web3 servers run today knows that a significant share of this "decentralized physical infrastructure" actually runs on the cloud services of those very hyperscale operators. Of course, the surge in demand for GPUs and other AI-specialized hardware is also driving more supply, which should eventually make them cheaper to rent in the cloud or to buy outright. At least, that is the expectation.
But consider this at the same time: NVIDIA now has to prioritize which customers get the latest generation of GPUs. And NVIDIA is also starting to compete with the largest cloud providers on their own turf, offering AI platform services to enterprise customers already locked into those hyperscalers. This will eventually force it either to build its own data centers over time (which would erode the fat margins it currently enjoys, so unlikely) or to limit its AI hardware sales significantly to its partner-network cloud providers.
On top of that, NVIDIA's competitors are rolling out their own AI-specific hardware, most of it built on the same TSMC processes NVIDIA uses. So virtually every AI hardware company is competing for TSMC capacity, and TSMC, too, has to prioritize certain customers. Samsung, and potentially Intel (which is trying to get back to leading-edge chip manufacturing soon), may be able to absorb additional demand, but TSMC currently produces most AI-related chips, and scaling and calibrating leading-edge chip manufacturing (3nm and 2nm) takes years.
Most importantly, virtually all leading-edge chip manufacturing today is done by TSMC in Taiwan and Samsung in South Korea, and the risk of military conflict could materialize before the facilities being built in the U.S. to offset that risk, which are not expected to produce next-generation chips for several years, come online.
Finally, because of U.S. restrictions imposed via NVIDIA and TSMC, China is essentially cut off from the latest generation of AI hardware and is competing for whatever compute remains, much like the Web3 DePIN networks. Unlike Web3, however, Chinese companies do have competitive models of their own, in particular the LLMs from the likes of Baidu and Alibaba, which need large quantities of previous-generation hardware to run.
So, for one or more of the reasons above, there is a non-negligible risk that, as the AI race intensifies and takes priority over their cloud businesses, the hyperscale cloud providers simply restrict outside access to their AI hardware. Essentially, this is a scenario in which they take all AI-related cloud capacity for themselves, stop offering it to anyone else, and absorb all the latest hardware. If that happens, the remaining compute supply will be in even higher demand from the other large players, including sovereign nations, while consumer-grade GPUs grow ever less competitive.
Clearly, this is an extreme scenario, but the prize is so large that the big players will not back off if the hardware bottlenecks persist. That would leave decentralized operators, second-tier data centers and retail-grade hardware owners, who make up the majority of Web3 DePIN providers, out of the competition.
The Other Side of the Coin
While crypto founders remain oblivious, the AI giants are watching crypto closely. Government pressure and competition may push them toward crypto to avoid being shut down or heavily regulated.
Stability AI's founder recently stepped down to start "decentralizing" his company, one of the earliest public hints. He had previously been open about plans to launch a token, but only after the company's successful IPO, which somewhat gives away the real motives behind the move.
Similarly, while Sam Altman is not operationally involved in Worldcoin, the crypto project he co-founded, its token certainly trades like a proxy for OpenAI. Whether there is a path from a free internet money project to an AI R&D project remains to be seen, but the Worldcoin team seems to recognize that the market is testing this hypothesis.
It makes sense to us that AI giants would explore various paths toward decentralization. The problem we see is that Web3 has yet to propose a meaningful solution. "Governance tokens" are largely a meme, and only those tokens that explicitly avoid direct ties between asset holders and the development and operation of their networks, such as $BTC and $ETH, are genuinely decentralized today.
The same (dis)incentives that slow technological development also hold back the evolution of designs for governing crypto networks. Startup teams slap a "governance token" label on their product in the hope of working it out as they go, only to end up entrenched in "governance theater" around resource allocation.
Conclusion
The AI race is on, and everyone is taking it very seriously. We cannot fault the thinking of the big tech companies: more compute means better AI, and better AI means lower costs, new revenue streams, and more market share. For us, that means the bubble is rational, but all the pretenders will still be flushed out in the inevitable shakeout.
Centralized, big-corporate AI is dominating the field, and legitimate startups find it hard to keep up. The Web3 space has joined the race late, but it is joining. The market is rewarding crypto AI projects far more lavishly than comparable Web2 startups, which pulls founders' focus from shipping products toward driving up the token at the very moment when the window to catch up is closing fast. So far, no orthogonal innovation has emerged here that could sidestep scaling compute as the way to compete.
There is now a credible open-source movement around consumer-facing models, initially driven by a few centralized players that chose to compete with larger closed-source rivals for market share (Meta, Stability AI, and the like). But now the community is catching up and putting pressure on the leading AI companies. That pressure will keep weighing on closed-source AI development, but it will not have a material impact unless open source actually catches up. This is another big opportunity for the Web3 space, but only if it solves decentralized model training and inference.
So while on the surface this looks like an opening for "classic" disruptors, the reality is far from it. AI is, above all, about compute, and nothing can change that without breakthrough innovation over the next 3-5 years, a period that will be decisive for who controls and directs AI development.
And even with demand pulling on the supply side, the compute market itself cannot "let a hundred flowers bloom": competition among manufacturers is constrained by structural factors such as chip fabrication and economies of scale.
We are optimistic about human ingenuity and confident there are enough smart and noble people out there to try to tilt the AI problem space toward the free world rather than top-down corporate or state control. But the odds look very slim, a gamble at best, and Web3 founders are busy with financial upside rather than real-world impact.