Top Dialogue: Delphi Digital Interviews ai16z Founder: How Are Agents Reshaping the Future of Web3?
If AI Agents are the breakout force of this crypto cycle, then Shaw, founder of ai16z and Eliza, has undoubtedly caught the tide at its crest.
The ai16z he initiated is the first on-chain fund themed around AI memes, a satirical play on the well-known venture capital firm a16z. It began fundraising from scratch in October 2024 and quickly grew into the first AI DAO on Solana with a market cap exceeding $2.5 billion (which has since corrected somewhat). At the core of ai16z is ElizaOS, a multi-agent simulation framework that lets developers create, deploy, and manage autonomous AI Agents. Thanks to its first-mover advantage and a thriving TypeScript community, the Eliza codebase has garnered over 10,000 stars on GitHub and captured roughly 60% of the current Web3 AI Agent development market.
Despite ongoing controversies on social media, Shaw remains a key figure in the crypto AI space. Several interviews with him have circulated in the Chinese community, but we believe the January 6th podcast between Tom Shaughnessy, co-founder of the leading crypto research firm Delphi Digital, Ejazz of 26 Crypto Capital, and Shaw is currently the most in-depth exploration of Shaw's thinking on the practicality of AI Agents, and it retains a forward-looking perspective.
In this conversation, the questions were insightful and Shaw was as candid and outspoken as ever, sharing his views on current AI Agent use cases in the Web3 industry and his judgments about the future. The discussion ranged from agent development frameworks and token economics to the future of open-source AGI platforms, and was packed with valuable insights. Coinspire has translated the full conversation for readers, hoping to offer a glimpse into the future of AI + Web3.
Key Highlights
▶ The creation of Eliza Labs and the rapid development of ai16z
▶ In-depth exploration of various aspects of the Eliza framework technology
▶ Analysis of agent platforms and the transition from Slop Bots (AI junk bots) to real utility
▶ Discussion on token economics and value capture mechanisms
▶ Exploration of cross-chain development and blockchain choices
▶ Vision for open-source AGI and the future of AI agents
Part.1
Entrepreneurial Experience and Asian Tour
Q1: Shaw, can you share your experience?
Shaw: I have developed many open-source projects over the years and created an open-source space network project, but my partner removed me from GitHub and sold the project for $75 million, leaving me with nothing. He had never written a line of code, while I was the chief developer of the project. Although I am suing him, this incident cost me everything and ruined my reputation.
Later, I started over and focused on researching AI Agents, but since my former partner had taken all the funding, I had to shoulder everything myself, even going into debt while taking on service work to make a living. Eventually the metaverse concept cooled, and that direction gradually became unviable.
After that, I joined Webiverse as the chief developer. Initially, things went smoothly, but the project was later hacked, and the treasury was stolen, forcing the team to pivot. This experience was extremely difficult and nearly broke me.
I went through many setbacks, but I kept pushing forward. I collaborated with the founder of Project 89 (neural language viral interaction AI) to launch a platform called Magic and completed a round of seed funding. He wanted to turn the platform into a no-code tool to help users build agent systems. I thought that if we provided a complete solution, users might copy it directly; if we didn’t, they wouldn’t know where to start. When the funds were running low, I decided to focus on developing the agent system. At that time, I had already created the first version of Eliza on this platform. All of this may sound crazy, but I have always been trying and exploring new directions.
Q2: What is the situation of the developer community in Asia?
Shaw: I have been in Asia for the past few weeks, meeting intensively with local developer communities. Since our project launched, especially with the attention on AI Agent-related content (like the ai16z project), I have received a lot of information from Asia, particularly China, and we found many supporters here.
Through a community called 706, I met many members who helped us manage the Chinese channel and Discord, organizing a small hackathon. I also met many developers at the event, and after reviewing their projects, I felt I had to come here to meet everyone in person. So, we planned a trip to visit multiple cities and meet developers.
The local community has been very enthusiastic, organizing event after event for us. I also had the opportunity to communicate with many people, learn about their projects, and build connections. Over the past few days, I have traveled from Beijing to Shanghai, then to Hong Kong, and now I am in Seoul, with plans to go to Japan tomorrow.
At these meetups, I saw many interesting projects, such as games, virtual girlfriend applications, robots, and wearable devices. Some projects involve data collection, fine-tuning, and annotation, which could have great development prospects when combined with our existing technology. I am particularly interested in integrating AI Agents into DeFi protocols, as this could lower the barrier to entry for users and potentially become a killer application in the coming months. Although many projects are still in their early stages, the enthusiasm and creativity of the developers are impressive.
Part.2
Exploration of AI Agent + DeFi Use Cases and Practicality
Q3: Now that ai16z is valued at billions of dollars, and the Eliza framework supports many agents with high developer interest, the project's popularity on GitHub has persisted for weeks. Meanwhile, people are increasingly tired of chatbots that can only auto-reply on social media and are looking forward to agents that can actually complete tasks, such as creating tokens, managing token economies, maintaining ecosystems, and even executing DeFi operations. Do you think the future development direction of agents will include these functionalities? Will Eliza's agents focus on DeFi?
Shaw: This is an obvious business opportunity. I am also tired of the reply-bot situation; many people just download the tools, show them off, and shill tokens, but I really hope we can go beyond that. I am currently most interested in three types of agents: ones that help you make money, ones that bring a product to the right customers, and ones that save you time.
Currently, we are still stuck in this auto-reply model. I personally block all uninvoked reply bots, and I encourage everyone to do the same, as this will create a social backlash that forces agent developers to genuinely think and build something meaningful. Blindly following a trend and commenting on everything does not help any token.
I am particularly interested in DeFi because it is full of arbitrage opportunities. Nothing fits the pattern of "there is money to be made, but most people don't know how to capture it" better than DeFi. We are already collaborating with teams such as Orca, and with DLMM (Dynamic Liquidity Market Maker) pools on Meteora. The bot can automatically identify potential opportunities and rebalance positions when the price moves out of range, returning profits to your wallet. Users can deposit their tokens safely, and the entire process is automated.
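The rebalancing behavior Shaw describes can be sketched in a few lines. This is a purely illustrative toy under assumed names (`Position`, `needsRebalance`, `recenter`), not Meteora's actual DLMM mechanics, which involve discrete bins and fee accounting:

```typescript
// Toy sketch of the out-of-range check an LP-management agent might run.
// Hypothetical names; real DLMM positions on Meteora use bins and fees.

interface Position {
  lower: number; // lower price bound of the liquidity range
  upper: number; // upper price bound of the liquidity range
}

// True when the current price has left the position's range,
// i.e. the position has stopped earning fees and should be moved.
function needsRebalance(pos: Position, price: number): boolean {
  return price < pos.lower || price > pos.upper;
}

// Propose a new range centered on the current price,
// preserving the original range's geometric width.
function recenter(pos: Position, price: number): Position {
  const halfWidth = Math.sqrt(pos.upper / pos.lower); // geometric half-width
  return { lower: price / halfWidth, upper: price * halfWidth };
}

// Example: a 1x–4x range with the price now at 8 is out of range,
// so the agent would move liquidity to a 4x–16x range.
const pos: Position = { lower: 1, upper: 4 };
console.log(needsRebalance(pos, 8)); // → true
console.log(recenter(pos, 8));       // → { lower: 4, upper: 16 }
```

An actual agent would run this check on a schedule, withdraw the position, and re-add liquidity at the new range via the protocol's SDK.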
Moreover, meme coins are highly volatile. They often spike at launch, which makes liquidity pool (LP) operations difficult, but once they stabilize, that volatility becomes an advantage: you can earn through liquidity pools. I personally do not sell tokens; I make money through liquidity pools, and I have always encouraged other agent developers to do the same. I am surprised to find that many people do not operate this way. A friend told me he finds it hard to make money; when I asked whether he had considered liquidity pools, he said he didn't have time. But providing liquidity against his token's trading volume is exactly how he should be earning.
Q4: Besides liquidity pools, will these agents start managing their own funds for trading, such as Ai16z and Degen Spartan AI projects? How will they operate their asset management (AUM), and do these agents have the capability to achieve this goal within this year?
Shaw: I believe that large language models (LLMs) are not suitable for direct trading. Instead, if there are suitable APIs to obtain market intelligence, they can make reasonable judgments. For example, I see that the trading success rate of some AI systems is about 41%, which is quite good because most cryptocurrencies are not stable. However, LLMs are not good at making complex decisions; their main role is still to predict the next token and make more reasonable decisions based on contextual information.
The value of LLMs lies in transforming unstructured data into structured data. For example, turning information from a group chat where people are promoting tokens into actionable data. We have a team working on a project called "Trust Market," which investigates whether we can make money by treating recommendations in group chats or on Twitter as real and trading based on those recommendations. It turns out that a small number of people are indeed very good traders and recommenders, and we are analyzing the recommendations of the top performers, which may inform our operations in the future.
It's like a prediction market; a small number of people are very good at predicting, while most are relatively poor or easily influenced by behavioral economics. Therefore, our goal is to track the performance of these individuals through measurable metrics and use that as training strategy. I believe this approach can be applied not only to making money but also to governance, contribution rewards, and other more abstract areas.
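The "measurable metrics" idea can be made concrete with a small sketch: score each recommender by the average realized return of their past calls. This is purely illustrative under assumed names (`Call`, `trustScores`); it is not the actual Trust Market code:

```typescript
// Illustrative sketch of trust scoring for token recommenders:
// a recommender's score is the mean realized return of their calls.
// All names here are hypothetical, not the Trust Market implementation.

interface Call {
  recommender: string; // who made the recommendation
  entryPrice: number;  // price when the call was made
  exitPrice: number;   // price at the evaluation horizon
}

function trustScores(calls: Call[]): Map<string, number> {
  const sums = new Map<string, { total: number; n: number }>();
  for (const c of calls) {
    const ret = (c.exitPrice - c.entryPrice) / c.entryPrice; // simple return
    const s = sums.get(c.recommender) ?? { total: 0, n: 0 };
    s.total += ret;
    s.n += 1;
    sums.set(c.recommender, s);
  }
  const scores = new Map<string, number>();
  sums.forEach((s, who) => scores.set(who, s.total / s.n));
  return scores;
}
```

An agent could then weight incoming recommendations by these scores, trading only on calls from recommenders whose track record clears a threshold.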
But making money is the simplest, as it is like an easily measurable Lego block. I don't think simply feeding time series data to an LLM and letting it predict buying and selling tokens will truly solve the problem. If you design an agent to automatically buy and sell tokens, I believe it can do that, but it may not necessarily make money, especially when buying some highly volatile tokens. So, I think we need a more flexible and reliable approach than simple buying and selling.
Q5: If there is an agent that is very good at trading, why would it open-source and create a token around it instead of just trading on its own?
Shaw: Someone told me that a company claims to be able to predict token prices with 70% accuracy. I think if I could do that, I wouldn't be here telling you this; I would just print infinite money. A 70% accuracy rate for short-term trading on something like Bitcoin means you could easily earn unlimited profits. I am sure that companies like Blackstone are doing something similar to some extent; they are trying to process global data to predict stocks and such, and perhaps they are quite successful at it, as they have many people dedicated to this work.
But I believe that in low-market-cap markets, behavior-driven factors and social media influence may be more important than any fundamental data you can predict. For example, a celebrity retweeting a contract address may be more effective than any algorithm you can predict. Therefore, I think Meme coins are interesting precisely because their market value is very low and can be easily influenced by social dynamics. If you can track these social dynamics, you can find opportunities within them.
Part.3
Value of Agent Framework and Development Advantages of Eliza
Q6: In the context of Eliza's application scenarios, how should the team leverage Eliza to bring a brand new, innovative agent to market? What are the main differentiating factors of this agent? Is it the model, the data, or other functionalities and support provided by Eliza?
Shaw: There is indeed a notion that it is just a wrapper around ChatGPT, but this is similar to viewing a website as a wrapper around HTTP or an application as a wrapper around React. In reality, the key lies in the product itself and whether there are customers using this product and paying for it; that is the core of anything.
Models have become extremely commoditized, and training a foundation model from scratch is very expensive, potentially requiring hundreds of millions of dollars. If we had funding and market share like OpenAI's, building an end-to-end training pipeline and training a model might be easy, but then we would be competing with Meta, OpenAI, xAI, and Google, all of whom are racing to improve benchmark performance to prove they have the best model in the world. Meanwhile, xAI open-sources the previous version each time it releases a new one, and Meta open-sources everything it does to capture market share through open source.
But I believe this is not the area we should compete in. We should focus on helping developers build products. The key is the future of the internet, how websites and products operate, and how users interact with applications. There are already many excellent products and infrastructures waiting to be used by users; they just don't know how to find them. You can't simply Google "making money with DeFi protocols"; you might find a list and do some research, but if you don't know what to look for, it isn't easy.
Therefore, the real value lies in connecting what already exists, changing existing patterns, and not just staying on a website and login page, but taking it to social media to actually showcase the product's use cases, and finding users who need your product. I believe AI agents should not just be products but should be part of the product, serving as an interface for interacting with the product. I hope to see more attempts like this.
Q7: Why do you believe that Eliza's framework or the platform you are building is the best battleground for developers and builders? Compared to other frameworks and languages (the Zeropy team uses Python, the Arc team uses Rust)?
Shaw: I believe language matters, but it isn't everything. More developers build applications in JavaScript today than in any other language. Almost every communication application, from Discord to Microsoft Teams, is built with JavaScript or on a JavaScript-adjacent native runtime; the UI and interaction layers are written in JavaScript, and a great deal of backend code is too. The number of JavaScript and TypeScript developers now exceeds that of all other languages combined, especially with the rise of tools like React Native (a JavaScript-based framework for building native Android and iOS applications).
Many developers who have built on the EVM have also downloaded Node.js and run Ethereum development tools like Forge or Truffle, so they are already familiar with this ecosystem. We can reach people who have done website development, and they can create agents too.
While Python is not particularly difficult to learn, packaging it into different forms is challenging, and many people get stuck at the installation step. The Python ecosystem is somewhat chaotic and its package management is complex; many people may not know how to find compatible versions. Although Python is a good choice for backend development, in my experience it does not handle asynchronous programming well and can be cumbersome for string processing.
When I realized TypeScript's advantages for developing agents, I knew it was the right direction. Beyond that, we provide an end-to-end solution that works as soon as you clone it. I think Arc is a cool project, but it lacks social connectors entirely. Projects like Zeropy are also good, but they mainly focus on social connectors or respond through simple loops. Many other projects let a few agents talk to each other but never truly connect to social media.
I believe these frameworks are the body, while LLMs (large language models) are the brain. We are building the bridge that allows these frameworks to connect to different clients. By providing these solutions, we significantly lower the barrier to entry and reduce the amount of code developers need to write. Developers only need to focus on their products, pulling the APIs they need; we provide simple abstractions for input and output.
Q8: As a non-developer, how can one understand the features and processes released by the Eliza platform? From a non-developer's perspective, what functionalities or support can agent builders gain by integrating with Eliza or other competing platforms?
Shaw: You just need to download the code to your computer, modify the character, and after starting it you will have a basic bot that can perform any operation, such as chatting, which is the most basic function. We have many plugins; if you want to add a wallet, you just enable the plugin and add the private key for the EVM chain you need. You can also add API keys, such as for Discord, or your Twitter username and email. All of this can be set up without writing code and used directly, which is why you see so many bots doing promotion and replies.
After that, you can use some abstract tools for other operations, called "actions." For example, if you want the bot to order a pizza, you just set up an "order pizza" action. The system will then retrieve user information, typically from a provider that supplies the current user's context. You also need an evaluator to extract the information the action requires, such as a name and address. If someone messages the bot to order pizza, the system first gets the user's address and then executes the pizza-ordering action.
These three parts (provider, evaluator, and action) are the foundation for building complex applications. Anything like filling out a form on a website can essentially be achieved through these three elements. We currently handle tasks like automatic LP management this way; it is similar to building any website, mostly calling APIs, and developers should find it easy to get started.
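The provider/evaluator/action pattern Shaw describes can be sketched in TypeScript roughly as follows. This is an illustrative toy, not the actual Eliza API; every interface and name here (`Provider`, `Evaluator`, `Action`, `handle`, and so on) is hypothetical:

```typescript
// Toy sketch of the provider/evaluator/action pattern, using the
// pizza-ordering example. Names are hypothetical, not Eliza's real API.

interface Message { userId: string; text: string; }

// A provider supplies stored context about the current user.
interface Provider<T> { get(msg: Message): T | undefined; }

// An evaluator extracts structured facts from an unstructured message.
interface Evaluator<T> { evaluate(msg: Message): Partial<T>; }

// An action performs a task once its required inputs are present.
interface Action<T> {
  name: string;
  validate(info: Partial<T>): info is T;
  handler(info: T): string;
}

interface PizzaInfo { name: string; address: string; }

const store = new Map<string, Partial<PizzaInfo>>();

const userProvider: Provider<Partial<PizzaInfo>> = {
  get: (msg) => store.get(msg.userId),
};

const addressEvaluator: Evaluator<PizzaInfo> = {
  // Toy extraction: look for "address: ..." in the message text.
  evaluate: (msg) => {
    const m = msg.text.match(/address:\s*(.+)$/i);
    return m ? { address: m[1].trim() } : {};
  },
};

const orderPizza: Action<PizzaInfo> = {
  name: "ORDER_PIZZA",
  validate: (info): info is PizzaInfo => !!info.name && !!info.address,
  handler: (info) => `Ordering pizza for ${info.name} at ${info.address}`,
};

// Minimal agent loop: merge provider context with newly evaluated facts,
// persist them, and run the action only when validation passes.
function handle(msg: Message): string {
  const known = userProvider.get(msg) ?? {};
  const learned = addressEvaluator.evaluate(msg);
  const info = { ...known, ...learned };
  store.set(msg.userId, info);
  return orderPizza.validate(info) ? orderPizza.handler(info) : "Need more info";
}

store.set("u1", { name: "Alice" });
console.log(handle({ userId: "u1", text: "Order a pizza, address: 5 Main St" }));
// → Ordering pizza for Alice at 5 Main St
```

The same three-part shape covers the form-filling analogy in the text: the provider holds what is already known, the evaluator fills in the remaining fields from conversation, and the action fires once the form is complete.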
For non-developers, I recommend choosing a hosted platform that offers the features or plugins you need without delving deeply into the code. If you want, you can certainly do it yourself.
Q9: How long would it take for a developer to build these functionalities or piece together these components from scratch? How does the time cost compare to using the Eliza platform?
Shaw: It depends on what you want to do. If you are just looking at the codebase and understand the abstractions, you might be able to build very specific functionalities in a short time. For example, I might be able to create an agent that does what you want in a week. But if you want memory functions, information extraction, or to build a framework that supports these functionalities, it will be more complex.
For instance, I once created a pizza-delivery application; it took me 5 hours, and someone else did it in 2, so it could basically be done in a day. Building it from scratch without the framework might take several weeks. AI has accelerated everything, including writing code, but the framework itself already provides you with a great deal.
Take React as an analogy: countless applications are built on it. You can piece together a website quickly, but as complexity grows it becomes very difficult. For something simple you only need an LLM, a blockchain, and a loop, and you might finish in a few days. But we support all models; Eliza can run entirely locally and also supports transcription. You can send audio files to Discord and it will transcribe them; you can upload PDFs and chat about them. All of this is already built in. Most people haven't even touched 80% of the features.
So, if you only need to build a simple chat interface, you can definitely do it yourself. But if you want to build a fully functional agent that can do many things, then you need a framework that has already handled most of it. I can tell you that I spent many months creating this.
Q10: Compared to other launched agent platforms that generally emphasize rapid design, deployment, and no-code operations, is Eliza more suitable for customized and unique functionalities in agent building?
Shaw: If you take the entire Arc system, or all of Zeropy, or the whole Game framework, their line counts are far smaller than Eliza's, because Eliza contains many different capabilities. Even the plugin layer alone includes many core features, such as speech-to-text, text-to-speech, transcription, PDF processing, and image processing, all built in. This may be overly complex for some, but it makes many things possible, which is why so many people use it.
I see some agents are entirely Eliza plus some other functionalities, for example, they use the Pump.fun plugin we provide, or Eliza combined with image and video generation functionalities, which are actually all built-in. I hope to see more people try it out and see what happens when all plugins are enabled simultaneously.
My goal is for these agents eventually to write new plugins from scratch themselves, because there will be enough existing plugins to serve as examples, and all of it will be trained into the models. Once the repo passes a certain popularity and codebase threshold, companies like OpenAI and Anthropic (Claude) will scrape this data for training. This is part of our loop; eventually the agent will be able to write new plugins on its own.
Q11: If Eliza becomes the most powerful codebase (not just in terms of wealth, but in providing the strongest functionalities for any agent developer), does it mean that Eliza can attract developers not only from the crypto field but also more from traditional AI and machine learning backgrounds?
Shaw: If there is indeed a breakthrough. Eliza, aside from its many blockchain integrations (all of which are plugins), is not itself a crypto project. I have noticed that trending on GitHub has helped us attract people from the Web2 space; many simply see it as a great framework for developing agents.
I personally hope to win people over; some have biases against cryptocurrencies, but I think it is clear that in the future 99% of agents will be doing 99.9% of token trading. Cryptocurrency is the native money of agents; giving an agent a PayPal account is genuinely difficult, whereas we can simply open a wallet and generate a private key.
We have indeed attracted some people from outside the crypto space, especially those who do not actively engage in crypto trading; they feel that cryptocurrencies are fine but are more interested in the applications of agents.
Although some people have biases against crypto projects, they are willing to accept them as long as they can bring real value. Many people only see hype and empty talk and feel disappointed, but when they see our projects have actual research and engineering support, they gradually change their views. I hope to attract more people, and we have indeed made some progress; this is a huge differentiating advantage.
Part.4
Vision for Open-Source AGI and the Future of AI Agents
Q12: In the future, how will you compete with OpenAI and traditional AI labs? Is it through a group of agents built on Eliza collaborating as a differentiating advantage, or is this comparison fundamentally meaningless?
Shaw: This question is very meaningful. First of all, when you launch Eliza, it starts by default with a fine-tuned Llama model known as the Hermes model, trained by Nous Research. I really like what they are doing; one of their members, Ro Burito, is both a member of Nous Research and an agent developer in our community. They helped launch the God and Satan bots, as well as some others. So we could train models ourselves, but we have partners like them, and rather than competing with them, I prefer to collaborate and complement each other's strengths.
Many people do not understand how simple it is to train a model; it actually only requires one command. If I go to use Together, I can start fine-tuning a Llama model in five minutes by just entering a command and pointing to a JSON file. The advantage of Nous is not in their fine-tuning method but in the data. They collect and curate data meticulously, which is their core competitive advantage; collecting, preparing, and cleaning data is a very tedious job, and they focus on data that is different from OpenAI. This is also where our market differentiation lies.
We choose to use their model because it does not refuse as many requests as OpenAI's does. We have a saying that "OpenAI's models are nerfed," and basically all agent developers feel that OpenAI's models are hamstrung. Our market differentiation is that OpenAI will never let you create an agent that connects to Twitter; they will never allow you to make an assistant truly personalized or interesting. They are not bold enough, not cool enough, and they are under a lot of pressure.
If you go to use ChatGPT now and ask it about the 2024 election, it might give you a long answer, but for a long time, it would just directly tell you Biden because that is how it was trained. I am not saying I support one side, but I think it is foolish to let a leading model make such a simple political choice. OpenAI is very cautious; they are largely just "going through the motions" and do not let users truly get what they want.
So, the real competitive point is how you collect data and where that data comes from. You do not see OpenAI doing such things. If you look at Sam Altman's tweets, he notes that users really want an adult mode, not NSFW (not safe for work) content, but "adults in the room": do not treat me like a child and restrict what information I can see.

Moreover, because OpenAI is centralized, it faces a lot of political pressure from the government. I believe the open-source movement frees itself from these constraints; more importantly, it has the diversity and variety of models to meet users' real needs, giving them what they want rather than controlling their behavior, and that approach will ultimately prevail. While OpenAI has huge funding, a very high market value, and a lot of talent, decentralized AI provides community support, incentive mechanisms, funding, and rapidly developing conditions without having to wait for hardware like GPUs.
I believe that the path to AGI is not either/or; it is actually a combination of various approaches. If the largest companies in the world are doing something, can competing with them really accelerate development? I think AI agents are the "stepchildren" of the AI world, because they are not as easily measured by standards as traditional AI, and it is difficult for PhD researchers to quantify and say that one agent is better than another. AI agents are more about foundational engineering and creatively solving problems, which is precisely the uniqueness of many developers who are investing in this field.
Q13: What does open-source AGI (Artificial General Intelligence) specifically mean? Is it through a group of agents collaborating autonomously to ultimately produce a superintelligent whole, or are there other ways?
Shaw: If millions of developers are using most open-source models and tools, they will compete with each other and optimize the overall system's capabilities. I believe AGI is essentially the form of the internet; the internet itself is made up of many agents that do various things. Moreover, this does not need to be a unified system; we can call it AGI, but it depends on how you define AGI.
Most people think AGI is intelligence that can do anything like a human. In reality, this agent does not need to possess all knowledge beforehand; it can obtain the required information by calling APIs or operating computers. If it can operate computers like a human, has a powerful memory system, and rich functionalities, ultimately combining with actual robots, AGI will become evident.
However, in the AI field, we often say, "AGI is something that computers cannot currently do," and this goal continues to evolve with the introduction of new models. At the same time, there is a concept called ASI, or superintelligence, which refers to a powerful model capable of controlling the world. I believe that if only large companies like Microsoft are building it, it may have this superintelligent potential. But if there are many different players, each open-sourcing their models and continuously fine-tuning and optimizing them, it will ultimately form a multi-agent system like the internet, interacting with each other and having their own specialties, which will look like superintelligence.
This is a massive system, even a collection of systems. If one agent wants to attack another agent, it will be very difficult because no agent can be significantly stronger than the others. As technology advances, we are also reaching an energy limit; models cannot expand infinitely, or they will need nuclear reactors to support them. Just like how Microsoft is now investing in nuclear power plants, all companies are gradually improving their models.
The new model GPT-4 released by OpenAI is very close to human intelligence, but similarly, other companies are also actively developing similar models, and many people are focusing on researching and implementing the latest technologies. Even if OpenAI's model is close to AGI, due to the large number of users, its model has to compromise on quality, shifting towards lower-scale models to alleviate the burden on GPUs.
Overall, I believe that as competition among companies increases, models are becoming more efficient, and open-source allows more developers to participate, all of which drive the emergence of superintelligence. I hope that in the future world, on Twitter, I can easily find a robot that can do something and choose the most suitable one.
Q14: What role will tokens and markets in cryptocurrency play in achieving future innovations and visions?
Shaw: From the perspective of "intelligence," the market itself is a form of intelligence. It can discover opportunities, allocate capital, drive competition, and ultimately optimize the best solutions. This process may continue to compete until a complete and mature system is formed. I believe market intelligence and competition play important roles here.
The role of cryptocurrency in this is evident. It has two key functions:
First, it provides a crowdfunding mechanism for projects, no longer relying on the old Silicon Valley venture capital model, based on what people truly want rather than the definitions of value by a few venture capitalists. Although venture capitalists often have deep insights, their investment logic may also be limited by certain geographical or cultural circles, overlooking the potential for more decentralized capital allocation.
Second, cryptocurrency can accurately capture people's emotional needs. If a product can be delivered that meets this need, users will be very excited. However, the main problem in the crypto space is that many projects hit the emotional points but ultimately fail to deliver on their promises. If these projects can truly achieve their goals, such as developing a robot that can provide perfect market insights, it would be of immense value.
Moreover, the open-source auditability allows anyone capable to verify the authenticity of projects. This transparency can guide capital to flow more efficiently towards genuinely promising opportunities. A major issue in the current world is that most people cannot invest in companies like OpenAI unless they go public, but by then, the returns are relatively limited. In contrast, cryptocurrency allows people to invest directly in projects at early stages, realizing the dream of "participating in the future" and "generational wealth."
To make these mechanisms more robust, we need to better prevent fraud. I believe that open-source and publicly developed approaches can greatly enhance the efficiency of capital allocation in the market and accelerate the development of this field. At the same time, future agents will trade tokens with each other; almost everything can be tokenized—trust, capabilities, money, etc. In summary, cryptocurrency provides a new way for capital allocation, accelerating the realization of innovation and future visions.
Part.5
Discussion on Token Economics and Value Capture Mechanisms
Q15: Is the ai16z platform fast enough in implementing token economic value capture mechanisms? How to respond to potential competitive threats?
Shaw: The problem with open-source blockchains is that the incentives for forking are very high because when you hold network tokens, there are direct economic benefits. If we launch an L1, people might fork our L1 or feel that they cannot truly collaborate with us because we are an L1.
The tribalism in the crypto industry is strong, largely due to this either/or competition rather than inclusive cooperation.
In reality, our token economic model needs to continuously evolve to find new profit-making methods. Launchpad is not the final token economic model; it is an initial version. We have attracted a lot of attention, and many partners want to launch on our platform; they just need a hosted way to kick off their agent projects. We can provide plugins and ecosystem capabilities for them to use directly.
We plan to open-source Launchpad, but it is foreseeable that once it is open-sourced, others will replicate it. Projects that rely solely on launch platforms will need to rethink their long-term strategies; simply setting roles, burning tokens, and repurchasing may not be sustainable.
In the long run, we want to invest in technologies that expand the value of the overall ecosystem. In the short term, we need to meet market demand and ship Launchpad. But three months from now, launch platforms may be commoditized; many projects will fail, and only a few will keep creating value.
The future focus is not just on simply launching agents but on investing in projects that can clearly create value. We have already started investing and acquiring, and these also have their own token economic models, such as repurchasing tokens through revenue for further investment. Additionally, we are also looking for new ways to enhance token value, such as increasing long-term yield pressure, rather than just collecting network fees or simple mechanisms like token pairing and burning.
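The revenue-funded buyback Shaw mentions can be illustrated with a toy accounting model. This is a minimal sketch for illustration only; the split ratios and the idea of dividing bought-back tokens between burning and an investment treasury are hypothetical assumptions, not ai16z's actual parameters.

```python
def allocate_revenue(revenue: float, buyback_share: float, burn_ratio: float) -> dict:
    """Split protocol revenue into a token buyback, then split the
    bought-back amount between burning and an investment treasury.

    All parameters are hypothetical, for illustration only.
    """
    buyback = revenue * buyback_share   # revenue spent repurchasing tokens
    burned = buyback * burn_ratio       # portion permanently removed from supply
    treasury = buyback - burned         # portion reinvested in the ecosystem
    retained = revenue - buyback        # revenue kept for operations
    return {"buyback": buyback, "burned": burned,
            "treasury": treasury, "retained": retained}

# Example: 100,000 in monthly revenue, 30% to buybacks, half of that burned
split = allocate_revenue(100_000, buyback_share=0.3, burn_ratio=0.5)
```

The point of the model is that burning alone is a dead end, while routing part of the buyback into a treasury creates the reinvestment loop Shaw describes.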
My goal is to push us beyond these simple models and move towards a larger vision. We hope to create a platform similar to a production studio, allowing people to submit projects to DAOs and roles, validate popular projects, and then invest. I believe the current token economic plan can sustain for six months, but we are also actively thinking about the next token economic model.
Q16: If the ai16z token economic model works and the token gains real value, could it fund the project development platform while the agents in turn advance the open-source framework, growing the ecosystem in an indirect way?
Shaw: I often think about this. In the AI field there is a concept sometimes called "foom": agents writing their own code and improving themselves faster than humans can. They will write code for every conceivable use case and submit pull requests (PRs), while other agents review and test them. This could happen within a few years, perhaps less than two. If we can persist, we will reach a kind of "escape velocity," where the system develops exponentially, potentially entering the stage of AGI (Artificial General Intelligence) and becoming fully self-building.
We should do everything we can to accelerate towards this future. I have already seen some projects, like Reality Spiral, where agents are submitting PRs to GitHub; this trend has already begun.
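The agent-to-GitHub workflow described above can be sketched roughly as follows, using GitHub's public REST API endpoint for creating pull requests (`POST /repos/{owner}/{repo}/pulls`). The repository, branch names, and token placeholder are hypothetical, not details from Reality Spiral or ElizaOS, and the actual network call is left commented out.

```python
import json
import urllib.request


def build_pr_payload(title: str, head: str, base: str, body: str) -> dict:
    """Assemble the JSON body for GitHub's 'create a pull request' endpoint."""
    return {"title": title, "head": head, "base": base, "body": body}


def open_pull_request(owner: str, repo: str, token: str, payload: dict) -> dict:
    """Submit the PR via GitHub's REST API (requires a valid access token)."""
    req = urllib.request.Request(
        f"https://api.github.com/repos/{owner}/{repo}/pulls",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# An agent would push its generated patch to a branch, then open the PR:
payload = build_pr_payload(
    title="feat: add plugin scaffold",  # hypothetical change
    head="agent/feature-branch",        # branch the agent pushed to
    base="main",
    body="Automated change proposed by an agent; please review.",
)
# open_pull_request("some-org", "some-repo", token="<GITHUB_TOKEN>", payload=payload)
```

In the multi-agent version Shaw envisions, a second agent would fetch the PR diff, run the tests, and post a review, closing the loop without human involvement.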
If we can accumulate value in tokens while investing in our ecosystem and promoting its growth, it will create a positive feedback loop: the value of the tokens increases, driving ecosystem development, which in turn enhances token value. Ultimately, this system will reach a state of automatic operation.
However, we still have a lot of practical work to do. The key is ensuring that tokens accumulate value in the expected way and that user needs are met. Launchpad, for example, was built from user demand, to help people launch what they were already building.
In the future, we might even allow agents to create specific projects directly, with multiple agents competing to develop, ultimately chosen by community votes for the best results. This model could quickly become extremely complex and powerful, and our goal is to accelerate reaching this stage.
Part.6
Exploring Cross-Chain Development and Blockchain Choices
Q17: Which blockchain do you think AI agents should be developed on? Solana or Base?
Shaw: From the user's perspective, blockchains have gradually become interchangeable; many people do not even know which chain their tokens are on. Although the EVM and SVM models differ significantly in programming and functionality, they are essentially indistinguishable to users, who simply check their wallets for funds or perform token swaps.
For the future of agents, I hope the differences between chains can be blurred; tokens will certainly bridge frequently between the two. Currently, ai16z is an SPL Token-2022 token with minting capabilities, so cross-chain bridging poses some technical challenges, but we are working through them.
I actually like the Base team; they have been very supportive of us, so I do not have a particular bias. We choose Solana because users are here. As product people, we should set aside personal beliefs and focus on user needs, providing the services they need in the places they prefer.
Currently, you can deploy an agent on Base or StarkNet; the choice is completely open. The fragmentation of these ecosystems comes more from their respective token prices, whether they have tokens, and the existing developer communities and infrastructures. The main reason we choose Solana is that projects like DAOs.fun and users are on this chain. But overall, I do not have a strong preference for the platform; the best strategy is to cover all platforms, observe where users are, and then provide services there.
Part.7
Transition from Slop Bots (AI Junk Bots) to Utilities
Q18: Regarding the current situation where some "utterly useless slop agents" are gradually losing market share, is there a natural transition period to the emergence of "high-performance agents" that can truly execute efficient and practical tasks?
Shaw: I believe we will soon enter a new phase where agents do surprising things; if people can make money from an agent, that agent will certainly be very successful.
As for whether "slop agents" will disappear, I do not think they will vanish entirely. The current situation is that platforms like X realize they cannot eliminate these agents by force, nor can manual review reliably distinguish bots from humans, especially when these agents come very close to passing the Turing test. So the platform's solution is to have the algorithm punish disruptive accounts more severely, whether they are bots or humans.
From a developer's perspective, if they cannot attract users, agents will have no influence. My approach is to directly block those meaningless agents. I believe that if an agent is not specifically summoned and does not provide valuable content, we do not want that content to appear on the platform.
Agents in the DeFi space have not fully developed yet, although teams are still working hard on it. But I believe that in the coming month, we will see many new developments. Moreover, we have not yet seen agents that can find users for their products; currently, many agents are just used for inefficient promotion. But imagine if an agent discovers the solution you need; you would definitely not block it but would appreciate it, just like using a new Google.
Currently, we are still in a "dogs playing poker" stage. Initially, if you walk into a room and see four dogs playing poker, you would find it incredible, but after a few weeks, you would ask, "How are those dogs playing? Are they really making money, or are they just holding cards?" Once the novelty wears off, people will start to pay attention to who is the best poker-playing dog or whose poker algorithm is the best.
Therefore, while "celebrity agents" may always exist, in the future, we will see more useful agents. Just like in Web2, McDonald's might launch a "Grimace agent," or some influencers might have to establish a reply bot to build a virtual relationship with their fans because their DMs are flooded after posting content.
Q19: Currently, detailed information about agents' architecture, models, hosting locations, and so on is hard to obtain, and users must rely solely on trust in developers. How can this information be surfaced and made visible?
Shaw: I believe someone will hear this demand and build that platform; I agree there is an opportunity here. TEEs (Trusted Execution Environments) have existed for a long time, and I have talked to many developers. Before agents appeared, it was a very obscure concept. The emergence of agents led people to ask, "If it is a self-governing agent, how do we prevent it from simply taking the private key and stealing the money?" So people began paying attention to TEEs, and I think Phala has done well because they addressed a clear demand: a verifiable remote attestation system. This is also why we see the rise of products like ZKML (Zero-Knowledge Machine Learning), which provide the trust mechanisms needed to reassure users.
We will see many products addressing this uncertainty; this uncertainty itself is a great product opportunity. If someone can establish a list that certifies these agents, it will be very successful, just like the trust ratings for decentralized exchanges; we could also see similar agent verification systems. Open-source will become an important incentive because if the code is relatively simple and the issue is trust, then why not open-source it and let everyone see it? This could lead to a new breed of "programmer influencers" who evaluate the legitimacy of these agents.
I believe that within five years, you will be able to query any agent's relevant information at any time; there may be a website dedicated to providing this information. If not, someone should start building such a platform this year.