Is the real number of GPUs on io.net a mystery? What issues do decentralized AI protocols face?
Author: @rargulati, MartinShkreli
Compiled by: Baihua Blockchain
@ionet is a decentralized compute network built on Solana, part of the DePIN and AI sectors. It has received funding from Multicoin Capital and Moonhill Capital; the amount is undisclosed.
io.net is a Solana-based decentralized cloud platform for machine-learning training on GPUs, providing instant, permissionless access to a global network of GPUs and CPUs. The platform claims 25,000 nodes and uses clustering technology to pool GPUs together, which it says saves large-scale AI startups up to 90% on compute costs.
It sits in the currently hot DePIN and AI sectors. Below is an analysis of its GPU count and outstanding issues from two commentators on X today:
How many GPUs (Graphics Processing Units) does @ionet have?
On X, @MartinShkreli laid out four candidate answers:
1) 7,648 (the number shown when attempting a deployment)
2) 11,107 (manually tallied from their explorer)
3) 69,415 (an unexplained number that never changes?)
4) 564,306 (no support, transparency, or substantiation here; not even CoreWeave or AWS has that many)
The real answer is actually 320.
Why 320?
Take a look at the explorer page. Every GPU shows as "free," yet you still can't rent one. If they are free, why can't you rent them? People want to get paid, right?
You can only actually rent 320.
If a GPU can't be rented, it effectively doesn't exist. Even if you could rent them, it would increase…
@rargulati said Martin is entirely right to question this. Decentralized AI protocols have the following issues:
1) There is no cost- or time-efficient way to do useful training online across a highly distributed, general-purpose hardware architecture. That would require a major breakthrough I am not currently aware of. This is why FANG companies spend more money than all the liquidity in crypto on expensive hardware, network interconnects, data-center maintenance, and so on.
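A rough back-of-envelope calculation shows why training over consumer internet links is so far from datacenter fabrics. All numbers below (model size, link speeds) are illustrative assumptions for the sketch, not measurements of any real network:

```python
# Back-of-envelope: time to exchange one round of gradients, comparing an
# assumed consumer uplink with an assumed datacenter-class interconnect.
# All figures are illustrative assumptions.

MODEL_PARAMS = 7e9        # assume a 7B-parameter model
BYTES_PER_PARAM = 2       # fp16 gradients
grad_bytes = MODEL_PARAMS * BYTES_PER_PARAM  # bytes per sync step

# Assumed effective per-link bandwidth, converted to bytes per second
consumer_uplink = 100e6 / 8   # ~100 Mbit/s home connection
datacenter_link = 400e9 / 8   # ~400 Gbit/s NVLink/InfiniBand-class fabric

consumer_seconds = grad_bytes / consumer_uplink
datacenter_seconds = grad_bytes / datacenter_link

print(f"per-step gradient sync over consumer link: {consumer_seconds:,.0f} s")
print(f"per-step gradient sync in a datacenter:    {datacenter_seconds:.2f} s")
```

Under these assumptions a single synchronization step takes minutes over a home connection versus a fraction of a second in a datacenter, and that gap compounds over the millions of steps a training run requires.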
2) Running inference on general-purpose hardware sounds like a good use case, but rapid hardware and software progress means a generic decentralized approach underperforms in most critical use cases. See the recent OpenAI delays and Groq's rise.
3) Inference with correctly routed requests, co-located with GPU clusters, using decentralized crypto incentives to lower the cost of capital, compete with AWS, and draw in hobbyist participants: it sounds like a good idea, but with so many vendors, liquidity in the GPU spot market is fragmented, and no one has aggregated enough supply to serve people running real businesses.
4) The software routing algorithms must be very good; otherwise, consumer operators running general-purpose hardware will cause many operational problems. Set aside network outages and congestion control: if an operator decides to play a game or open anything that uses WebGL, you may see a service interruption from that operator. Unpredictable supply burdens operations and creates uncertainty for demand-side requesters.
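The supply-unpredictability point can be made concrete with a simple probability sketch. Assuming (purely for illustration) that each consumer-operated node is available 95% of the time and nodes fail independently, the chance that an entire cluster is up at once collapses quickly as the cluster grows:

```python
# Illustrative: probability that every node in a cluster of consumer-operated
# GPUs is available at the same moment. The 95% per-node uptime and the
# independence assumption are both assumptions for the sketch.
per_node_uptime = 0.95

for n in (8, 64, 512):
    cluster_up = per_node_uptime ** n  # all n nodes up simultaneously
    print(f"{n:4d} nodes all up: {cluster_up:.2%}")
```

Even at 8 nodes the whole cluster is fully available only about two-thirds of the time, and at hundreds of nodes it is effectively never, which is why routing must tolerate churn rather than assume stable machines.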
These are all thorny problems that will take a long, long time to solve. Every current attempt is a joke.