Comprehensively understand the ecological landscape of the computing DePIN track, excluding io.net
Original Title: "The Case for Compute DePINs"
Author: PAUL TIMOFEEV
Compiled by: Deep Tide TechFlow
Key Points
With the rise of deep learning and generative AI, both of which depend on compute-intensive workloads, demand for computing resources has surged. However, because large companies and governments hoard these resources, startups and independent developers now face a GPU shortage in the market, leading to exorbitant costs and/or unavailable resources.
Compute DePINs can create a decentralized market for computing resources like GPUs by allowing anyone in the world to provide their idle supply in exchange for monetary rewards. This aims to help underserved GPU consumers access new supply channels, thereby obtaining the development resources needed for their workloads at reduced costs and overhead.
Compute DePINs still face many economic and technical challenges when competing with traditional centralized service providers, some of which will resolve themselves over time, while others will require new solutions and optimizations.
Computing is the New Oil
Since the Industrial Revolution, technology has propelled humanity forward at an unprecedented pace, affecting or completely transforming nearly every aspect of daily life. Computers eventually emerged as the culmination of collective efforts by researchers, scholars, and computer engineers. Originally designed to solve large-scale arithmetic tasks for advanced military operations, computers have evolved into a pillar of modern life. As the influence of computers on humanity continues to grow at an unprecedented rate, the demand for these machines and their driving resources is also increasing, outpacing available supply. This, in turn, creates market dynamics where most developers and businesses cannot access critical resources, leaving the development of machine learning and generative artificial intelligence—one of today's most transformative technologies—in the hands of a few well-funded players. Meanwhile, the vast supply of idle computing resources presents a lucrative opportunity to help alleviate the imbalance between supply and demand for computing, intensifying the need for coordination mechanisms between participants on both sides. Therefore, we believe that decentralized systems supported by blockchain technology and digital assets are crucial for the broader, more democratic, and responsible development of generative AI products and services.
Computing Resources
Computing can be defined as a variety of activities, applications, or workloads where a computer produces explicit outputs based on given inputs. Ultimately, it refers to the computational and processing power of computers, which is the core utility of these machines, driving many parts of the modern world, generating up to $1.1 trillion in revenue in just the past year.
Computing resources refer to various hardware and software components that enable computing and processing. As the number of applications and functionalities they enable continues to grow, these components become increasingly important and more prevalent in people's daily lives. This has led to a race among national powers and enterprises to accumulate as many of these resources as possible as a means of survival. This is reflected in the market performance of companies providing these resources (e.g., Nvidia, whose market value has increased by over 3000% in the past five years).
GPU
GPUs are one of the most important resources in modern high-performance computing. The core function of a GPU is to act as a dedicated circuit that accelerates computer graphics workloads through parallel processing. Initially serving the gaming and personal computer industries, GPUs have evolved to support many emerging technologies that shape the future world (such as consoles and personal computers, mobile devices, cloud computing, and the Internet of Things). However, with the rise of machine learning and artificial intelligence, the demand for these resources has particularly intensified—by executing computations in parallel, GPUs accelerate ML and AI operations, enhancing the processing power and capabilities of the final technologies.
The Rise of AI
At the core of AI is the ability to enable computers and machines to simulate human intelligence and problem-solving capabilities. AI models, structured as neural networks, consist of many different data blocks. The models require processing power to identify and learn the relationships between these data points, which they reference when creating outputs based on given inputs.
Although AI development and production are not new (Frank Rosenblatt built the Mark I Perceptron, the first neural-network-based computer that "learned" through trial and error, in 1958), much of the academic research that laid the groundwork for AI as we know it today was published in the late 1990s and early 2000s, and the industry has been evolving ever since.
In addition to R&D work, "narrow" AI models power many applications in use today. Examples include social media feed algorithms, virtual assistants such as Apple's Siri and Amazon's Alexa, and customized product recommendations. Notably, the rise of deep learning has changed the trajectory toward artificial general intelligence (AGI). Deep learning algorithms utilize larger, or "deeper," neural networks than classical machine learning applications, serving as a more scalable and more broadly capable alternative. Generative AI models "encode a simplified representation of their training data and reference it to produce similar but distinct new outputs."
Deep learning enables developers to scale generative AI models to images, speech, and other complex data types, with milestone applications like ChatGPT setting records for the fastest user growth in modern history, representing just early iterations of what generative AI and deep learning can achieve.
Given this, generative AI development involves multiple compute-intensive workloads that require significant processing power and computational capacity, which should come as no surprise.
Accordingly, the development of deep learning applications is constrained by three key compute-intensive workloads:
Training - Models must process and analyze large datasets to learn how to respond to given inputs.
Tuning - Models undergo a series of iterative processes where various hyperparameters are adjusted and optimized to improve performance and quality.
Simulation - Before deployment, certain models (e.g., reinforcement learning algorithms) undergo a series of simulations for testing.
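To make the tuning workload concrete, here is a minimal, purely illustrative sketch (the toy `train_model` scoring rule is invented): every hyperparameter combination triggers what would, in practice, be a full training run, which is why tuning multiplies the compute bill.

```python
import itertools

def train_model(learning_rate, batch_size):
    """Toy stand-in for a full training run. In practice each call
    would occupy one or more GPUs for hours or days."""
    # Invented scoring rule: lower learning rates and larger batches score better.
    return 1.0 / (learning_rate * 100) + batch_size / 256

def tune(param_grid):
    """Exhaustive search: every combination requires a full training run."""
    best_score, best_params = float("-inf"), None
    for lr, bs in itertools.product(param_grid["learning_rate"],
                                    param_grid["batch_size"]):
        score = train_model(lr, bs)
        if score > best_score:
            best_score, best_params = score, (lr, bs)
    return best_params, best_score

grid = {"learning_rate": [0.1, 0.01, 0.001], "batch_size": [32, 128, 256]}
params, score = tune(grid)
# A 3 x 3 grid already means 9 full training runs for a single tuning pass.
```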
Compute Crunch: Demand Exceeds Supply
In recent decades, many technological advancements have driven an unprecedented surge in demand for computing and processing power. As a result, the demand for computing resources like GPUs far exceeds available supply today, creating bottlenecks in AI development that will only continue to grow without effective solutions.
The broader supply constraints are further supported by numerous companies purchasing GPUs beyond their actual needs, both as a competitive advantage and as a means of survival in the modern global economy. Computing providers often adopt contract structures that require long-term capital commitments, granting customers supplies that exceed their demand requirements.
Epoch's research indicates that the overall number of compute-intensive AI models being released is rapidly increasing, suggesting that the resource demand driving these technologies will continue to grow quickly.
As the complexity of AI models continues to grow, so will application developers' demand for computing and processing power. In turn, GPU performance, and consequently GPU availability, will play an increasingly important role. This is already evident: demand for high-end GPUs (such as those produced by Nvidia) keeps rising, and Nvidia has dubbed GPUs the "rare earth metals" or "gold" of the AI industry.
The rapid commercialization of AI has the potential to hand control over to a few tech giants, similar to today's social media industry, raising concerns about the ethical foundations of these models. A notable example is the recent controversy surrounding Google Gemini. While many of its strange responses to various prompts did not pose an actual danger at the time, the incident showcased the inherent risks of a few companies dominating and controlling AI development.
Today's tech startups face increasing challenges in acquiring the computing resources to support their AI models, which execute many compute-intensive processes before deployment. For smaller enterprises, accumulating a significant number of GPUs is fundamentally unsustainable, and while traditional cloud computing services (like AWS or Google Cloud) provide a seamless and convenient developer experience, their limited capacity ultimately leads to high costs that many developers cannot afford. Ultimately, not everyone can raise $7 trillion to cover their hardware costs.
So how large is the opportunity?
Nvidia estimated that there are over 40,000 companies globally using GPUs for AI and accelerated computing, with a developer community exceeding 4 million. Looking ahead, the global AI market is projected to grow from $515 billion in 2023 to $2.74 trillion by 2032, with a compound annual growth rate of 20.4%. Meanwhile, the GPU market is expected to reach $400 billion by 2032, with a compound annual growth rate of 25%.
However, following the AI revolution, the imbalance between the supply and demand of computing resources is increasingly exacerbated, risking a rather dystopian future in which the development of transformative technologies is dominated by a few well-funded giants. Therefore, we believe that all roads lead to decentralized alternatives that can help bridge the gap between AI developers' needs and available resources.
The Role of DePIN
What are DePINs?
DePIN is a term created by the Messari research team, representing Decentralized Physical Infrastructure Networks. Specifically, decentralization means that no single entity extracts rent and restricts access. Physical infrastructure refers to the "real-life" physical resources utilized. The network refers to a group of participants working in coordination to achieve a predetermined goal or a series of goals. Today, the total market capitalization of DePINs is approximately $28.3 billion.
At the core of DePINs is a global network of nodes that connect physical infrastructure resources to the blockchain, creating a decentralized market that connects buyers and suppliers of resources, where anyone can become a supplier and be rewarded for their services and contributions to the network's value. In this context, centralized intermediaries that restrict network access through various legal and regulatory means and service fees are replaced by decentralized protocols composed of smart contracts and code, managed by their respective token holders.
The value of DePINs lies in their provision of decentralized, accessible, low-cost, and scalable alternatives to traditional resource networks and service providers. They enable decentralized markets to serve specific end goals; the costs of goods and services are determined by market dynamics, and anyone can participate at any time, naturally lowering unit costs due to the increase in the number of suppliers and the minimization of profit margins.
Using blockchain allows DePINs to build crypto-economic incentive systems that help ensure network participants are appropriately compensated for their services, transforming key value providers into stakeholders. However, it is important to note that network effects, achieved by transforming small individual networks into larger, more productive systems, are key to realizing many of the benefits of DePINs. Furthermore, while token rewards have proven to be a powerful tool for network bootstrapping mechanisms, establishing sustainable incentives to help users retain and adopt long-term remains a critical challenge in the broader DePIN space.
How do DePINs work?
To better understand the value of DePINs in realizing a decentralized computing market, it is important to recognize the different structural components involved and how they work together to form a decentralized resource network. Let’s consider the structure and participants of a DePIN.
Protocol
Decentralized protocols, which are a set of smart contracts built on top of the underlying "base layer" blockchain network, facilitate trustless interactions between network participants. Ideally, the protocol should be managed by a diverse group of stakeholders who are actively committed to contributing to the long-term success of the network. These stakeholders then use their shares of the protocol tokens to vote on proposed changes and developments for the DePIN. Given that successfully coordinating a distributed network is a significant challenge in itself, the core team typically retains the power to implement these changes initially before transferring that power to a decentralized autonomous organization (DAO).
Network Participants
The end users of the resource network are its most valuable participants and can be categorized based on their functions.
Suppliers: Individuals or entities that provide resources to the network in exchange for monetary rewards paid in the native DePIN token. Suppliers "connect" to the network through blockchain-native protocols, which may be permissionless or may enforce an on-chain whitelisting process. By receiving tokens, suppliers gain a stake in the network, similar to equity stakeholders, allowing them to vote on proposals and developments they believe will drive demand and network value, thereby supporting higher token prices over time. Of course, suppliers may also treat DePIN rewards as a form of passive income and sell their tokens upon receipt.
Consumers: These are individuals or entities actively seeking resources provided by DePINs, such as AI startups looking for GPUs, representing the demand side of the economic equation. If using DePINs offers a tangible advantage over traditional alternatives (e.g., lower costs and overhead requirements), consumers will be attracted to use DePINs, representing organic demand for the network. DePINs typically require consumers to pay for resources using their native tokens to generate value and maintain stable cash flow.
Resources
DePINs can serve different markets and adopt various business models for resource allocation. Blockworks provides a good framework here: custom hardware DePINs, which distribute dedicated proprietary hardware to suppliers, and commodity hardware DePINs, which enable the lending out of existing idle resources, including but not limited to computing, storage, and bandwidth.
Economic Model
In an ideally functioning DePIN, value comes from the revenue consumers pay suppliers for resources. Ongoing demand for the network implies sustained demand for the native token, aligning the economic incentives of suppliers and token holders. Generating sustainable organic demand in the early stages is a challenge for most startups, which is why DePINs offer inflationary token incentives to attract early suppliers and bootstrap the network's supply side, on the premise that supply will generate demand, which in turn attracts more organic supply. This is similar to how venture capital firms subsidized Uber's passenger fares in its early days to bootstrap an initial customer base, attract drivers, and strengthen its network effects.
DePINs need to manage token incentives as strategically as possible, as they play a critical role in the overall success of the network. When demand and network revenue rise, token issuance should decrease. Conversely, when demand and revenue decline, token issuance should be used again to incentivize supply.
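The issuance policy described above can be sketched as a simple rule. The function name, linear schedule, and revenue target below are illustrative assumptions, not any specific protocol's tokenomics:

```python
def token_issuance(base_emission, network_revenue, target_revenue):
    """Hypothetical emission schedule: scale inflationary rewards down as
    organic revenue approaches a target, and back up when it falls short.
    The linear rule is illustrative, not any protocol's published spec."""
    shortfall = max(0.0, 1.0 - network_revenue / target_revenue)
    return base_emission * shortfall

# Zero revenue => full emissions; at or above target => emissions stop.
assert token_issuance(1_000, 0, 10_000) == 1_000
assert token_issuance(1_000, 10_000, 10_000) == 0
```

The key design choice is that emissions respond to demand rather than following a fixed schedule, so the protocol stops paying for supply it no longer needs.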
To further illustrate what a successful DePIN network looks like, consider the "DePIN Flywheel," a positive feedback loop that guides DePINs. It can be summarized as follows:
DePIN distributes inflationary token rewards to incentivize suppliers to provide resources to the network, establishing a foundational supply level available for consumption.
Assuming the number of suppliers begins to grow, competitive dynamics start to form within the network, improving the overall quality of goods and services offered until it provides services superior to existing market solutions, thereby gaining a competitive advantage. This means that decentralized systems surpass traditional centralized service providers, which is no easy feat.
Organic demand for DePIN begins to form, providing suppliers with legitimate cash flow. This presents an enticing opportunity for investors and suppliers to continue driving network demand and thus token prices.
The increase in token prices boosts supplier revenue, attracting more suppliers and restarting the flywheel.
This framework presents an enticing growth strategy, although it is important to note that it is largely theoretical and assumes that the resources provided by the network have sustained competitive appeal.
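As a purely theoretical illustration, the flywheel's feedback loop can be modeled as a toy simulation. The coupling coefficients below are arbitrary assumptions, chosen only to show the direction of each effect:

```python
def flywheel(steps, suppliers=10, demand=5.0, token_price=1.0):
    """Toy model of the DePIN flywheel: rewards attract suppliers,
    suppliers improve service quality, quality attracts demand,
    demand supports the token price, and price attracts more suppliers.
    All coefficients are invented for illustration."""
    history = []
    for _ in range(steps):
        quality = suppliers ** 0.5                        # more suppliers, better service
        demand *= 1 + 0.01 * quality                      # quality attracts consumers
        token_price *= 1 + 0.005 * (demand - 5.0) / 5.0   # demand supports price
        suppliers += int(token_price)                     # price attracts suppliers
        history.append((suppliers, round(demand, 2), round(token_price, 3)))
    return history

run = flywheel(5)
# Each step, supplier count, demand, and price all ratchet upward.
```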
Compute DePINs
Decentralized computing markets belong to a broader movement known as the "sharing economy," a peer-to-peer economic system where consumers share goods and services directly with other consumers through online platforms. This model was pioneered by companies like eBay and is now dominated by companies like Airbnb and Uber, ultimately poised to disrupt global markets as the next generation of transformative technologies sweeps through. The sharing economy was valued at $150 billion in 2023, projected to grow to nearly $800 billion by 2031, showcasing broader trends in consumer behavior, from which we believe DePINs will benefit and play a key role.
Fundamental Principles
Compute DePINs are peer-to-peer networks that connect suppliers and buyers through decentralized markets, facilitating the allocation of computing resources. A key distinction of these networks is their focus on commodity hardware resources, which many people already possess today. As discussed, the emergence of deep learning and generative AI has led to a surge in demand for processing power due to their resource-intensive workloads, creating bottlenecks in AI development for accessing critical resources. Simply put, decentralized computing markets aim to alleviate these bottlenecks by creating a new flow of supply—a global supply flow that anyone can participate in.
In a compute DePIN, any individual or entity can lend out their idle resources at any time and receive appropriate compensation. Simultaneously, any individual or entity can access the necessary resources from a global permissionless network at lower costs and with greater flexibility than existing market products. Thus, we can describe the participants in compute DePINs through a simple economic framework:
Supply Side: Individuals or entities that own computing resources and are willing to lend or sell their computing resources for compensation.
Demand Side: Individuals or entities that need computing resources and are willing to pay a price for them.
Key Advantages of Compute DePINs
Compute DePINs offer numerous advantages that make them an attractive alternative to centralized service providers and markets. First, enabling permissionless cross-border market participation unlocks a new flow of supply, increasing the number of critical resources needed for compute-intensive workloads. Compute DePINs focus on hardware resources that most people already own—anyone with a gaming PC already has a GPU they can rent out. This broadens the range of developers and teams that can participate in building the next generation of goods and services, benefiting more people globally.
Furthermore, the blockchain infrastructure supporting DePINs provides efficient and scalable settlement rails for facilitating the small payments required for peer-to-peer transactions. Crypto-native financial assets (tokens) provide a shared unit of value that demand-side participants use to pay suppliers, aligning economic incentives with a distribution mechanism consistent with today's increasingly globalized economy. Referring back to the DePIN flywheel we previously constructed, strategically managing economic incentives is highly beneficial for increasing the network effects of DePINs (on both the supply and demand sides), which in turn increases competition among suppliers. This dynamic lowers unit costs while improving service quality, creating a sustainable competitive advantage for DePINs, from which suppliers as token holders and key value providers can benefit.
DePINs are similar to cloud computing service providers in the flexible user experience they aim to provide, with resources accessible and payable on demand. According to Grandview Research, the global cloud computing market is projected to grow at a compound annual growth rate of 21.2%, reaching over $2.4 trillion by 2030, demonstrating the viability of such business models in the context of future growth in demand for computing resources. Modern cloud computing platforms utilize centralized servers to handle all communications between client devices and servers, creating single points of failure in their operations. However, being built on blockchain allows DePINs to provide stronger censorship resistance and resilience than traditional service providers. Attacking a single organization or entity (such as a centralized cloud service provider) would jeopardize the entire foundational resource network, while DePINs are designed to withstand such events due to their distributed nature. First, the blockchain itself is a globally distributed network of dedicated nodes designed to resist centralized network authority. Additionally, compute DePINs allow for permissionless network participation, bypassing legal and regulatory barriers. Depending on the nature of token distribution, DePINs can adopt fair voting processes to vote on proposed changes and developments to the protocol, eliminating the possibility of a single entity suddenly shutting down the entire network.
The Current State of Compute DePINs
Render Network
Render Network is a compute DePIN that connects buyers and sellers of GPUs through a decentralized computing market, with transactions conducted using its native token. Render's GPU market involves two key parties—creators seeking access to processing power and node operators renting out idle GPUs to creators in exchange for native Render token compensation. Node operators are ranked based on a reputation system, and creators can choose GPUs from a multi-tiered pricing system. The Proof-of-Render (POR) consensus algorithm coordinates operations, with node operators committing their computing resources (GPUs) to process tasks, specifically graphic rendering work. Upon task completion, the POR algorithm updates the status of node operators, including changes in reputation scores based on task quality. Render's blockchain infrastructure facilitates work payments, providing transparent and efficient settlement rails for transactions between suppliers and buyers using network tokens.
Render Network was initially conceived by Jules Urbach in 2009, launching on Ethereum (RNDR) in September 2020, and migrating to Solana (RENDER) about three years later to improve network performance and reduce operational costs.
As of the time of writing, Render Network has processed up to 33 million tasks (measured in rendered frames), and its node count has grown to 5,600 since inception. Approximately 60k RENDER tokens have been burned in the course of allocating work credits to node operators.
IO Net
Io Net is launching a decentralized GPU network on Solana, serving as a coordination layer between a vast supply of idle computing resources and individuals and entities needing those resources. Io Net's unique selling point is that it does not compete directly with other DePINs in the market but aggregates GPUs from various sources (including data centers, miners, and other DePINs like Render Network and Filecoin) while utilizing a proprietary DePIN—Internet-of-GPUs (IoG)—to coordinate operations and align incentives among market participants. Io Net customers can customize their workload clusters on IO Cloud by selecting processor types, locations, communication speeds, compliance, and service times. Conversely, anyone with supported GPU models (12 GB RAM, 256 GB SSD) can participate as an IO Worker, lending their idle computing resources to the network. While service payments are currently settled in fiat and USDC, the network will soon support payments in the native $IO token as well. The pricing of resources is determined by supply and demand, as well as various GPU specifications and configuration algorithms. Io Net's ultimate goal is to become the preferred GPU market by providing lower costs and higher service quality than modern cloud service providers.
The multi-layer IO architecture can be mapped as follows:
UI Layer - Comprising public websites, customer areas, and Workers areas.
Security Layer - This layer consists of firewalls for network protection, authentication services for user verification, and logging services for tracking activities.
API Layer - This layer serves as the communication layer, consisting of public APIs (for websites), private APIs (for Workers), and internal APIs (for cluster management, analytics, and monitoring reports).
Backend Layer - The backend layer manages Workers, cluster/GPU operations, customer interactions, billing and usage monitoring, analytics, and auto-scaling.
Database Layer - This layer serves as the system's data repository, utilizing primary storage (for structured data) and caching (for frequently accessed temporary data).
Message Broker and Task Layer - This layer facilitates asynchronous communication and task management.
Infrastructure Layer - This layer includes GPU pools, orchestration tools, and manages task deployment.
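The cluster-customization flow described earlier (selecting processor type, location, and communication speed on IO Cloud) can be sketched as a simple filter over a worker pool. The field names and `build_cluster` helper are hypothetical, not Io Net's actual API:

```python
# Hypothetical worker pool; in practice these would be IO Workers
# registered on the network with their hardware attributes.
workers = [
    {"gpu": "RTX 4090", "location": "US", "speed_mbps": 1000},
    {"gpu": "A100",     "location": "EU", "speed_mbps": 400},
    {"gpu": "RTX 4090", "location": "EU", "speed_mbps": 800},
]

def build_cluster(pool, gpu, location, min_speed, size):
    """Filter the pool on the customer's requirements and return a
    cluster of the requested size, or None if supply is insufficient."""
    matches = [w for w in pool
               if w["gpu"] == gpu
               and w["location"] == location
               and w["speed_mbps"] >= min_speed]
    return matches[:size] if len(matches) >= size else None

cluster = build_cluster(workers, "RTX 4090", "EU", 500, 1)
```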
Current Statistics/Roadmap
As of the time of writing:
Total Network Revenue - $1.08m
Total Compute Hours - 837.6k hours
Total Cluster-Ready GPUs - 20.4K
Total Cluster-Ready CPUs - 5.6k
Total On-Chain Transactions - 1.67m
Total Inference Counts - 335.7k
Total Created Clusters - 15.1k
(Data sourced from Io Net explorer)
Aethir
Aethir is a cloud computing DePIN that facilitates the sharing of high-performance computing resources across compute-intensive fields and applications. It uses resource pooling to significantly reduce the cost of global GPU allocation while distributing ownership of those resources across its participants. Aethir is designed for high-performance workloads, suitable for industries such as gaming and AI model training and inference. By unifying GPU clusters into a single network, Aethir's design aims to increase cluster scale, thereby enhancing the overall performance and reliability of the services offered on its network.
Aethir Network is a decentralized economy composed of miners, developers, users, token holders, and the Aethir DAO. The three key roles that ensure the network operates successfully are containers, indexers, and verifiers. Containers are the core nodes of the network, performing essential operations to maintain network activity, including validating transactions and rendering digital content in real-time. Verifiers act as quality assurance personnel, continuously monitoring the performance and service quality of containers to ensure reliable and efficient operations for GPU consumers. Indexers serve as matchmakers between users and the best available containers. Supporting this structure is the Arbitrum Layer 2 blockchain, which provides a decentralized settlement layer for payments for goods and services on the Aethir network using the native $ATH token.
Proof of Render
Nodes in the Aethir network have two key functions—Render Capacity Proof, where a subset of these working nodes is randomly selected every 15 minutes to validate transactions; and Render Work Proof, which closely monitors network performance to ensure users receive optimal service, adjusting resources based on demand and geographic location. Miner rewards are distributed to participants running nodes on the Aethir network, calculated based on the value of the computing resources they lend, with rewards paid in the native $ATH token.
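The random 15-minute sampling used for Render Capacity Proof can be illustrated with a simple subset draw. The sampling fraction and seeding below are assumptions for the sketch, not Aethir's published parameters:

```python
import random

def select_validators(nodes, fraction, seed=None):
    """Draw a random subset of container nodes to prove rendering
    capacity, as in the 15-minute sampling described above. The
    fraction and deterministic seed are illustrative only."""
    rng = random.Random(seed)
    k = max(1, int(len(nodes) * fraction))
    return rng.sample(nodes, k)

nodes = [f"container-{i}" for i in range(100)]
checked = select_validators(nodes, 0.1, seed=42)
# 10% of 100 containers are sampled for this round.
```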
Nosana
Nosana is a decentralized GPU network built on Solana. Nosana allows anyone to contribute idle computing resources and earn rewards in the form of $NOS tokens. The DePIN facilitates the economical allocation of GPUs that can be used to run complex AI workloads without the overhead of traditional cloud solutions. Anyone can run a Nosana node by lending out idle GPUs, earning token rewards proportional to the GPU power they provide to the network.
The network connects two parties involved in the allocation of computing resources: users seeking access to computing resources and node operators providing computing resources. Significant protocol decisions and upgrades are voted on by NOS token holders and managed by the Nosana DAO.
Nosana has laid out an extensive roadmap for its future plans—Galactica (v1.0 - H1/H2 2024) will launch the mainnet, release CLI and SDK, and focus on expanding the network through consumer GPU container nodes. Triangulum (v1.X - H2 2024) will integrate major machine learning protocols and connectors, such as PyTorch, HuggingFace, and TensorFlow. Whirlpool (v1.X - H1 2025) will expand support for diverse GPUs from AMD, Intel, and Apple Silicon. Sombrero (v1.X - H2 2025) will increase support for medium to large enterprises, fiat payments, billing, and team functionalities.
Akash
Akash Network is an open-source proof-of-stake network built on the Cosmos SDK, allowing anyone to join and contribute permissionlessly, creating a decentralized cloud computing marketplace. The $AKT token is used to secure the network, facilitate resource payments, and coordinate economic activities among network participants. The Akash network consists of several key components:
Blockchain Layer, providing consensus using Tendermint Core and Cosmos SDK.
Application Layer, managing deployments and resource allocation.
Provider Layer, managing resources, bidding, and user application deployments.
User Layer, enabling users to interact with the Akash network, manage resources, and monitor application status using CLI, console, and dashboards.
The network initially focused on storage and CPU leasing services, but as the demand for AI training and inference workloads has grown, it has expanded its service offerings to include GPU leasing and allocation, responding to these needs through its AkashML platform. AkashML uses a "reverse auction" system where customers (referred to as tenants) submit their desired GPU prices, and computing suppliers (referred to as providers) compete to supply the requested GPUs.
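The reverse auction described above can be sketched as follows. This minimal model (a single ceiling price, lowest qualifying bid wins) omits the attribute filters and tie-breaking a real deployment would involve:

```python
def reverse_auction(max_price, bids):
    """Minimal reverse-auction sketch: the tenant names a ceiling price
    and the cheapest qualifying provider bid wins. Bids above the
    ceiling are excluded; no match returns None."""
    qualifying = [(price, provider) for provider, price in bids.items()
                  if price <= max_price]
    if not qualifying:
        return None
    price, provider = min(qualifying)
    return provider, price

bids = {"provider-a": 1.20, "provider-b": 0.85, "provider-c": 0.95}
winner = reverse_auction(1.00, bids)   # tenant will pay at most $1.00/hr
```

Note the inversion relative to a normal auction: providers compete prices down toward the tenant's ceiling, rather than buyers bidding prices up.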
As of the time of writing, the Akash blockchain has completed over 12.9 million transactions, with over $535,000 spent on accessing computing resources and over 189,000 unique deployments leased.
Honorable Mentions
The field of compute DePINs is still evolving, with many teams competing to bring innovative and efficient solutions to market. Other examples worth further exploration include Hyperbolic, which is building a resource pool collaborative open-access platform for AI development, and Exabits, which is establishing a distributed computing capacity network supported by computing miners.
Important Considerations and Future Outlook
Having covered the fundamental principles of compute DePINs and reviewed several live case studies, it is important to consider the implications of these decentralized networks, including their advantages and disadvantages.
Challenges
Building distributed networks at scale often requires trade-offs in performance, security, and resilience. For instance, training AI models on a globally distributed network of commodity hardware can be far less cost- and time-efficient than training with a centralized service provider. As previously mentioned, AI models and their workloads are growing ever more complex and increasingly demand high-performance GPUs rather than commodity hardware.
This is why large enterprises hoard high-performance GPUs in bulk, and it is an inherent challenge for compute DePINs, which aim to address the GPU shortage by establishing a permissionless market where anyone can lend out idle supply (for more on the challenges faced by decentralized AI protocols, see this tweet). Protocols can mitigate this in two key ways: first, by setting benchmark requirements for GPU providers who wish to contribute to the network, and second, by aggregating the resources supplied to the network into greater collective capacity. Even so, this model is hard to establish relative to centralized service providers, which can deploy far more capital to deal directly with hardware vendors like Nvidia. This is a consideration DePINs should keep in mind going forward. If a decentralized protocol is sufficiently funded, its DAO can vote to allocate a portion of the treasury to purchase high-performance GPUs, which can then be managed in a decentralized manner and lent out at rates above commodity GPU pricing.
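The two mitigations above — benchmark gating and aggregation — can be sketched in a few lines. The threshold and provider records here are entirely hypothetical; a real protocol would verify benchmarks cryptographically or through spot checks rather than trusting self-reported figures.

```python
MIN_TFLOPS = 50.0  # hypothetical benchmark threshold for admission

providers = [
    {"id": "gpu-1", "tflops": 82.6},  # e.g. a high-end consumer card
    {"id": "gpu-2", "tflops": 35.0},  # below threshold, rejected
    {"id": "gpu-3", "tflops": 65.0},
]

# 1) Benchmark gate: only providers meeting the threshold join the network.
admitted = [p for p in providers if p["tflops"] >= MIN_TFLOPS]

# 2) Aggregation: pool admitted capacity into a single cluster-level figure
#    that can be marketed to tenants as one unit of supply.
pooled_tflops = sum(p["tflops"] for p in admitted)

print([p["id"] for p in admitted], pooled_tflops)
```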
Another challenge specific to compute DePINs is managing appropriate resource utilization. In their early stages, most compute DePINs will face a structural shortfall in demand, as many startups do today. Generally, the challenge for a DePIN is to build up enough supply early on to reach a minimum viable quality of service; without supply, the network cannot generate sustainable demand or serve its customers during demand peaks. On the other hand, excess supply is also a problem: above a certain threshold, additional supply only helps when network utilization is near or at capacity. Otherwise, the DePIN risks overpaying for supply and leaving resources underutilized, and unless the protocol raises token emissions to keep suppliers participating, supplier revenue will decline.
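The point that supply beyond demand adds no revenue while diluting per-provider earnings can be made concrete with a toy model. The numbers and function names here are illustrative assumptions, not data from any protocol.

```python
def provider_revenue(demand_hours: float, supply_hours: float, price: float) -> float:
    """Total revenue paid to the supply side: only utilized hours earn fees."""
    utilized = min(demand_hours, supply_hours)
    return utilized * price

def utilization(demand_hours: float, supply_hours: float) -> float:
    """Fraction of supplied capacity actually rented."""
    return min(demand_hours, supply_hours) / supply_hours

# Fixed demand of 800 GPU-hours at $1/hour. Once supply exceeds demand,
# total revenue plateaus while utilization (and per-provider income) falls.
for supply in (500, 800, 1600):
    print(supply, utilization(800, supply), provider_revenue(800, supply, 1.0))
```

At 1,600 supplied hours, utilization halves while total fees stay flat — the gap must be covered by token emissions, or suppliers churn.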
Without broad geographic coverage, a telecommunications network is of little use. A ride-hailing network is of little use if passengers must wait a long time for a ride. And a DePIN is of little use if it must pay providers over the long term for resources that go unused. Centralized service providers can forecast resource demand and manage supply efficiently, while compute DePINs lack a central authority to manage utilization. It is therefore especially important that DePINs manage resource utilization as strategically as possible.
A larger question is whether decentralized GPU markets will face a GPU shortage at all in the future. Mark Zuckerberg recently stated in an interview that he believes energy, rather than computing resources, will become the new bottleneck, as companies will race to build data centers at scale instead of hoarding compute as they do today. This implies a potential reduction in GPU costs, but it also raises the question of how AI startups will compete with large companies on the performance and quality of their goods and services if proprietary data centers raise the overall bar for AI model performance.
The Case for Compute DePINs
To reiterate: the gap between the compute demands of increasingly complex AI models and the available supply of high-performance GPUs and other computing resources is widening.
Compute DePINs are poised to become innovative disruptors in the computing market space, currently dominated by major hardware manufacturers and cloud computing service providers, based on several key capabilities:
1) Providing lower costs for goods and services.
2) Offering stronger censorship resistance and network resilience guarantees.
3) Benefiting from potential regulatory guidelines that may require AI models to be as open as possible for fine-tuning and training, making them easily accessible to anyone.
The proportion of U.S. households with access to a computer and the internet has grown to near 100%, and the proportion in many regions worldwide has also risen significantly. This suggests a growing pool of potential computing resource providers (GPU owners) who would be willing to lend out idle supply given sufficient monetary incentives and a seamless transaction process. This is, of course, a very rough estimate, but it suggests that the foundation for a sustainable sharing economy in computing resources may already exist.
Beyond AI, future demand for computing will also come from many other industries, such as quantum computing. The quantum computing market is projected to grow from $928.8 million in 2023 to roughly $6.53 billion by 2030, a compound annual growth rate of 32.1%. This industry will require a different class of resources, but it will be interesting to see whether any quantum computing DePINs launch and what form they take.
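As a sanity check on the projection above, compounding the 2023 figure at 32.1% over the seven years to 2030 lands close to the cited 2030 number (small differences come from rounding in the reported CAGR):

```python
start = 928.8        # 2023 market size, $ millions (cited figure)
cagr = 0.321         # cited compound annual growth rate
years = 2030 - 2023  # 7 compounding periods

projected = start * (1 + cagr) ** years
print(round(projected, 1))  # lands near the reported ~$6,528.8M ($6.53B)
```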
"An open model running on consumer hardware is an important hedge against the future value being highly concentrated in AI and the central servers reading and mediating the thoughts of most humans being controlled by a few."—Vitalik Buterin
Large enterprises are not the target audience of DePINs, nor will they be. Compute DePINs bring back the individual developer, the independent builder, and the startup with minimal funding and resources, enabling idle supply to be turned into innovative ideas and solutions through access to greater computational power. AI will undoubtedly change the lives of billions. Rather than worrying that it will replace everyone's jobs, we should encourage the idea that AI can enhance the capabilities of individuals, entrepreneurs, startups, and the broader public.