Arweave Founder Talks with NEAR Co-Founder: Exploring the Path of AI and Blockchain Integration
Author: Siwei Guai Guai, BlockBeats
Translator's Note: On June 14, the AO Foundation officially launched the tokenomics of the decentralized supercomputer AO. Just two days earlier, on the evening of June 12, BeWater partner Lulu invited Arweave and AO founder Sam Williams, along with NEAR Protocol co-founder Illia Polosukhin, for an in-depth discussion on the integration of AI and blockchain. Sam elaborated on the underlying architecture of AO, which is based on an actor-oriented paradigm and a decentralized Erlang model, aiming to create a decentralized computing network that is infinitely scalable and supports heterogeneous process interactions.
Sam also envisioned the potential applications of AO in the DeFi space, where the introduction of trustworthy AI strategies could enable true "agent finance." Illia shared the latest advancements of NEAR Protocol in scalability and AI integration, including the introduction of chain abstraction and chain signature functionalities, as well as the development of peer-to-peer payment and AI inference routers. Additionally, both guests expressed their views on the current priorities and research focuses within their respective ecosystems, as well as innovative projects they are optimistic about.
How Illia and Sam Got Involved in AI and Crypto
Lulu: First, could you introduce yourselves and tell us how you got involved in both AI and blockchain?
Illia: My background is in machine learning and artificial intelligence, and I worked in the field for about 10 years before entering the crypto space. I am best known for the paper "Attention Is All You Need," which introduced the Transformer model now widely used across modern machine learning, AI, and deep learning. Before that, I was involved in many projects, including TensorFlow, the machine learning framework Google open-sourced in 2014-2015. I also did research on question-answering systems and machine translation, and applied some of that research in Google.com and other Google products.
Later, I co-founded NEAR.ai with Alex, initially as an AI company focused on teaching machines to program. We believed that in the future, people would be able to communicate with computers through natural language, and the computers would automatically program themselves. In 2017, this sounded like science fiction, but we did a lot of research. We crowdsourced more training data, with students from places like China and Eastern Europe completing small tasks for us, such as writing code and drafting code comments. However, we faced challenges in paying them, as PayPal could not transfer money to users in China.
Someone suggested using Bitcoin, but at that time, Bitcoin transaction fees were already quite high. So we began to delve deeper into it. We had a background in scalability; at Google, everything was about scale. My co-founder Alex created a sharded database company serving Fortune 500 companies. It was strange to see the state of blockchain technology at that time, where almost everything was running on a single machine, limited by the capabilities of that single machine.
Thus, we set out to build a new protocol, which became NEAR Protocol. It is a sharded Layer 1 focused on scalability, usability, and developer convenience. We launched the mainnet in 2020 and have been growing the ecosystem since. In 2022, Alex joined OpenAI, and in 2023 he founded an AI company focused on foundation models. Recently, we announced his return to lead the NEAR.ai team and continue the work on teaching machines to program that we started in 2017.
Lulu: That's a fascinating story! I didn't know that NEAR started as an AI company and is now refocusing on AI. Next, Sam, could you introduce yourself and your project?
Sam: We got involved in this field about seven years ago, and I had been following Bitcoin for a long time. We discovered an exciting but underexplored idea: you can store data on a network that will be replicated globally, without a single centralized point of failure. This inspired us to create an archive that is never forgotten and replicated in multiple places, making it impossible for any single organization or government to censor the content.
Thus, our mission became to scale Bitcoin, or rather to scale Bitcoin-style on-chain data storage to any size, so that we could create a knowledge base for humanity: storing all of history and forming an immutable, trustless historical log, ensuring we never forget how we got to where we are today.
We started this work seven years ago, and we have had the mainnet live for over six years now. In the process, we realized that permanent on-chain storage could provide far more functionality than we initially imagined. Initially, our idea was to store newspaper articles. But shortly after launching the mainnet, we realized that if you could store all this content around the world, you were essentially planting the seeds for a permanent decentralized network. Moreover, around 2020, we realized that if you have a deterministic virtual machine and a permanently ordered log that interacts with programs, you could essentially create a smart contract system.
We first tried this system in 2020 and called it SmartWeave. We borrowed the concept of lazy evaluation from computer science, popularized mainly by the programming language Haskell. We knew the concept had long been used in production environments, but it had not really been applied to blockchains. Typically, blockchains execute smart contracts at the moment messages are written. We see a blockchain instead as an append-only data structure with rules for admitting new information, where code does not have to execute at the same time the data is written. Since we already had an arbitrarily scalable data log, this was a natural way for us to think, though it was relatively rare at the time. The only other team doing something similar was what is now Celestia (formerly LazyLedger).
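To make the lazy-evaluation idea concrete, here is a minimal TypeScript sketch: messages are appended to a log without running any code, and any reader can later derive the state by replaying a deterministic transition function over the log. The types and the `write`/`evaluate` helpers are illustrative assumptions, not SmartWeave's actual interfaces.

```typescript
// Hypothetical sketch of lazy evaluation over an append-only log.
// Names (Message, State, evaluate) are illustrative, not SmartWeave APIs.

type Message = { from: string; input: { op: "add"; amount: number } };
type State = { total: number };

// Writing: messages are simply appended; no code runs at write time.
const log: Message[] = [];
function write(msg: Message): void {
  log.push(msg);
}

// Reading: any node can lazily derive the current state by replaying
// the deterministic transition function over the full log.
function transition(state: State, msg: Message): State {
  return { total: state.total + msg.input.amount };
}

function evaluate(initial: State): State {
  return log.reduce(transition, initial);
}

write({ from: "alice", input: { op: "add", amount: 3 } });
write({ from: "bob", input: { op: "add", amount: 4 } });
console.log(evaluate({ total: 0 })); // { total: 7 }
```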
This led to a Cambrian explosion of computing systems on Arweave. There are about three or four major projects, some of which have developed their own communities, feature sets, and security trade-offs. In the process, we realized that we not only needed to leverage the underlying layer's data availability to store these logs, but also a mechanism for delegating data availability guarantees. Specifically, you can submit data to a bundling node or another delegate (what we now call a scheduler unit), which uploads the data to the Arweave network and gives you an economically incentivized guarantee that the data will be written to the network. Once that mechanism is in place, you have a system whose computation can scale horizontally. Essentially, you have a series of processes, which can be thought of as rollups on Ethereum, sharing the same dataset and able to communicate with each other.
The name AO (Actor-Oriented) comes from a paradigm in computer science, and we built a system that combines all these components, featuring a native messaging system, data availability providers, and a decentralized computing network. Thus, the lazy evaluation component becomes a distributed collection where anyone can start a node to resolve contract states. When you combine these elements, you get a decentralized supercomputer. At its core, we have an arbitrarily scalable message log that records all messages involved in the computation. I find this particularly interesting because you can perform parallel computations, and your process does not affect the scalability or utilization of my process, meaning you can perform computations of arbitrary depth, such as running large-scale AI workloads within the network. Currently, our ecosystem is actively promoting this idea, exploring what happens when market intelligence is introduced into the smart contract system at the foundational layer. This way, you essentially have intelligent agents working on your behalf, which are trustworthy and verifiable, just like the underlying smart contracts.
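A minimal sketch of the actor-oriented style described here: each process owns its state, receives messages in its own mailbox, and never touches another process's state, so processes can be scheduled independently. The `Process` class and message shape below are assumptions for illustration, not the AO data protocol.

```typescript
// Minimal actor-style sketch (illustrative only, not the AO protocol):
// each process owns its state and reacts to messages from its own mailbox;
// processes never touch each other's state directly.

type Msg = { to: string; tag: string; data: number };

class Process {
  private state = 0;
  private mailbox: Msg[] = [];
  constructor(public readonly id: string, private net: Map<string, Process>) {}

  send(msg: Msg): void {
    this.net.get(msg.to)?.mailbox.push(msg);
  }

  // Each process drains its own mailbox independently, so two processes
  // never contend for the same state and can be scheduled in parallel.
  step(): void {
    for (const msg of this.mailbox.splice(0)) {
      if (msg.tag === "credit") this.state += msg.data;
    }
  }

  read(): number {
    return this.state;
  }
}

const net = new Map<string, Process>();
const a = new Process("a", net);
const b = new Process("b", net);
net.set("a", a).set("b", b);

a.send({ to: "b", tag: "credit", data: 5 });
b.step();
console.log(b.read()); // 5
```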
The Underlying Concepts and Technical Architecture of AO
Lulu: As we know, NEAR Protocol and Arweave are now driving the intersection of AI and cryptocurrency. I want to delve deeper into this. Since Sam has touched on some of the underlying concepts and architecture of AO, I might start with AO and then shift to AI later. The concepts you described make me feel that those agents are autonomously running, coordinating, and allowing AI agents or applications to work on top of AO. Could you elaborate on the parallel execution or autonomous agents within the AO infrastructure? Is the metaphor of building a decentralized Erlang accurate?
Sam: Before I begin, I want to mention that during my PhD I built an Erlang-based operating system that ran on bare metal. The exciting thing about Erlang is that it is a simple yet expressive environment in which every piece of computation is expected to run in parallel, rather than the shared-state model that has become the norm in the crypto space.
The elegance of this lies in how beautifully it maps to the real world. Just as we are having this conversation together, we are actually independent actors computing in our own minds, then listening, thinking, and talking. Erlang's agent-based, or actor-oriented, architecture is remarkable. Right after my talk at the AO summit, one of the co-founders of Erlang spoke about how they came up with this architecture around 1989. At the time, they weren't even aware of the term "actor-oriented." But it is such a natural concept that many people arrived at the same idea independently, because it simply makes sense.
For me, if you want to build truly scalable systems, you have to make them pass messages rather than share state. Shared state is what happens in Ethereum, Solana, and almost every other blockchain; NEAR is actually an exception, because its sharding means there is no single global state, only local states.
When we built AO, the goal was to combine these concepts. We wanted processes that could execute in parallel, allowing for arbitrarily large-scale computation while separating the interactions of these processes from their execution environments, ultimately forming a decentralized version of Erlang. For those less familiar with distributed technology, the simplest way to understand it is to think of it as a decentralized supercomputer. With AO, you can start a terminal within the system. As a developer, the most natural way to use it is to start your local process and then interact with it, just like you would with a local command line interface. As we move towards consumer adoption, people are building UIs and everything you would expect. Fundamentally, it allows you to run personal computations in this decentralized computing cloud and interact using a unified message format. We referenced the TCP/IP protocol that runs the internet when designing this part, trying to create a protocol that can be viewed as the TCP/IP of computation itself.
The AO data protocol does not enforce any specific virtual machine. You can use any VM you want; we have implemented both 32-bit and 64-bit versions of WASM, and others in the ecosystem have implemented the EVM. If you have this shared messaging layer (we use Arweave), you can let all these highly heterogeneous processes interact in a shared environment, like an internet of computation. Once that infrastructure is in place, the natural next step is to explore what can be done with intelligent, verifiable, trustless computation. The obvious application is AI within smart contracts, allowing agents to make intelligent decisions in the market, potentially competing with each other or representing humans against other humans. When we look at the global financial system, about 83% of trades on NASDAQ are executed by robots. That is how the world operates.
In the past, we couldn't put the intelligent part on-chain and make it trustworthy. But in the Arweave ecosystem there is another parallel workstream we call RAIL, which stands for Responsible AI Ledger. It essentially provides a way to create records of different models' inputs and outputs and store those records in a public, transparent manner, so that you can query and ask, "Hey, did this data I'm looking at come from an AI model?" If we can promote this, we believe it can solve a fundamental problem we see today. For example, someone sends you a news article from a website you don't trust, with what appears to be a photo or video of a politician doing something foolish. Is it real? RAIL provides a ledger that many competing companies can use in a transparent and neutral way to store records of the output they generate, just as they use the internet. And they can do this at very low cost.
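A hedged sketch of how such a ledger could work in principle: providers publish a hash of each model output, and anyone can later check whether a piece of content matches a recorded output. The `publish`/`wasGenerated` helpers are hypothetical, not the RAIL specification.

```typescript
import { createHash } from "node:crypto";

// Hypothetical sketch of a "responsible AI ledger": providers publish a
// hash of each generated artifact, so anyone can later check whether a
// piece of content matches a recorded model output. Not the RAIL spec.

type LedgerRecord = { model: string; outputHash: string; timestamp: number };
const ledger: LedgerRecord[] = [];

function sha256(data: string): string {
  return createHash("sha256").update(data).digest("hex");
}

// A model provider records what it generated.
function publish(model: string, output: string): void {
  ledger.push({ model, outputHash: sha256(output), timestamp: Date.now() });
}

// Anyone can ask: "was this content produced by a registered model?"
function wasGenerated(content: string): LedgerRecord | undefined {
  const h = sha256(content);
  return ledger.find((r) => r.outputHash === h);
}

publish("image-model-x", "bytes-of-a-generated-image");
console.log(wasGenerated("bytes-of-a-generated-image")?.model); // "image-model-x"
console.log(wasGenerated("some-real-photo")); // undefined
```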
Illia's Perspective on Blockchain Scalability
Lulu: I'm curious about Illia's perspective on the scalability of the AO approach or model. You have worked on the Transformer model, which aims to address the bottlenecks of sequential processing. I want to ask, what is NEAR's approach to scalability? In a previous AMA chat, you mentioned that you are exploring a direction where multiple small models form a system, which could be one of the solutions.
Illia: Scalability can manifest in many different ways in blockchain, and we can continue along Sam's topic. What we see now is that if you use a single large language model (LLM), it has some limitations in reasoning. You need to prompt it in a specific way for it to run for a while. Over time, the models will continue to improve and become more general. But in any case, you are tuning these models (which can be seen as raw intelligence) to perform specific functions and tasks and to reason better in specific contexts.
If you want them to perform more general work and processes, you need multiple models running in different contexts, executing different aspects of tasks. For a very specific example, we are currently developing an end-to-end process. You could say, "Hey, I want to build this application." The final output is a fully constructed application, complete with correct, formally verified smart contracts, and the user experience has been thoroughly tested. In real life, typically, one person does not build all these things, and the same idea applies here. You actually want AI to play different roles and take on different functions at different times, right?
First, you need an AI agent that acts as a product manager, actually gathering requirements, figuring out what you really want, what the trade-offs are, and what the user stories and experiences are. Then there might be an AI designer responsible for translating these designs into the frontend. Next could be an architect responsible for the backend and middleware architecture. Then there’s the AI developer, writing code and ensuring that the smart contracts and all frontend work are formally verified. Finally, there might be an AI tester ensuring everything runs smoothly, testing through the browser. This creates a set of AI agents that, while they may use the same model, are fine-tuned for specific functions. They each play their roles independently in the process, interacting using prompts, structures, tools, and the observed environment to build a complete workflow.
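A toy TypeScript sketch of this role-specialized pipeline, where each agent consumes the previous agent's artifact; the `Agent` interface and `runPipeline` function are illustrative stand-ins for fine-tuned models, not NEAR.ai code.

```typescript
// Illustrative sketch of a role-specialized agent pipeline (the roles are
// from the discussion; the Agent interface and runPipeline are assumptions).

interface Agent {
  role: string;
  // In a real system this would call a fine-tuned model; here it is a stub.
  run(input: string): Promise<string>;
}

const makeAgent = (role: string): Agent => ({
  role,
  run: async (input) => `${input}\n[${role}] produced its artifact`,
});

// Each stage consumes the previous stage's output, like hand-offs in a team.
const pipeline: Agent[] = [
  makeAgent("product-manager"), // gathers requirements
  makeAgent("designer"),        // turns requirements into UX
  makeAgent("architect"),       // plans backend and middleware
  makeAgent("developer"),       // writes and formally verifies contracts
  makeAgent("tester"),          // exercises the app end to end
];

async function runPipeline(request: string): Promise<string> {
  let artifact = request;
  for (const agent of pipeline) {
    artifact = await agent.run(artifact);
  }
  return artifact;
}

runPipeline("Build me a token-swap app").then(console.log);
```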
This is what Sam is talking about, having many different agents that asynchronously complete their work, observing the environment and figuring out what to do. So you do need a framework; you need a system to continuously improve them. From the user's perspective, you send a request and interact with different agents, but they work together as if they are a single system completing the task. At a lower level, they might actually pay each other for exchanging information, or different agents owned by different owners interact with each other to actually accomplish something. This is a new version of an API, smarter and more natural language-driven. All of this requires a lot of framework structure, as well as payment and settlement systems.
There is a new way of describing this called AI commerce, where all these agents interact with each other to complete tasks. This is the system we are all moving towards. If you consider the scalability of such a system, several issues need to be addressed. As I mentioned, NEAR is designed to support billions of users, including humans, AI agents, and even cats, as long as they can transact. Each NEAR account or smart contract runs in parallel, allowing for continued scaling and transactions. At a lower level, you probably don't want to send a transaction every time you call an AI agent or API; that wouldn't be reasonable, no matter how cheap NEAR is. Therefore, we are developing a peer-to-peer protocol that lets agent nodes and clients (whether human or AI) connect with one another and pay for API calls, data retrieval, and so on, with cryptoeconomic rules ensuring they respond or else lose part of their collateral.
This is a new system that allows for scaling beyond NEAR, providing micropayments. We call it yoctoNEAR, equal to 10^-24 NEAR. This way, you can exchange messages with payments attached at the network level, so that all operations and interactions can be settled through this payment system. This addresses a fundamental issue in blockchain: we lack a payment system with sufficient bandwidth and low enough latency, and there are many free-rider problems. This is a very interesting aspect of scalability, not limited to blockchain scalability but applicable to a future world that may have billions of agents. In that world, even on your own device, multiple agents may be running simultaneously, executing various tasks in the background.
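As a rough illustration of per-call micropayments denominated in yoctoNEAR (10^-24 NEAR): calls are metered off-chain against a provider's collateral and settled in one batch, so each API call does not need its own transaction. The channel structure below is an assumption for illustration, not NEAR's actual peer-to-peer protocol.

```typescript
// Hedged sketch of per-call micropayment accounting in yoctoNEAR
// (1 NEAR = 10^24 yoctoNEAR). The channel/collateral logic is an
// assumption for illustration, not NEAR's actual peer-to-peer protocol.

const YOCTO_PER_NEAR = 10n ** 24n; // use bigint: values exceed Number range

type Channel = {
  client: string;
  provider: string;
  collateral: bigint; // posted by the provider; slashed if it fails to respond
  spent: bigint;      // running total owed by the client
};

function openChannel(client: string, provider: string, collateralNear: bigint): Channel {
  return { client, provider, collateral: collateralNear * YOCTO_PER_NEAR, spent: 0n };
}

// Each API call just bumps an off-chain counter; nothing hits the chain yet.
function meterCall(ch: Channel, priceYocto: bigint): void {
  ch.spent += priceYocto;
}

// Settlement batches many calls into a single on-chain transaction.
function settle(ch: Channel): { to: string; amountYocto: bigint } {
  const amount = ch.spent;
  ch.spent = 0n;
  return { to: ch.provider, amountYocto: amount };
}

const ch = openChannel("client.near", "inference-node.near", 10n);
for (let i = 0; i < 1_000; i++) meterCall(ch, 50_000_000_000_000_000_000n); // 0.00005 NEAR per call
console.log(settle(ch)); // one settlement instead of 1,000 transactions
```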
Applications of AO in DeFi: Agent Finance
Lulu: This use case is very interesting. AI-driven payments typically demand high frequency and complex strategies, which performance limitations have so far made impossible, so I look forward to seeing how better scalability can meet those demands. In our hackathon, Sam and the team mentioned that AO is also exploring how to use the new AI infrastructure to support DeFi use cases. Sam, could you elaborate on how your infrastructure applies to these new DeFi scenarios?
Sam: We call it Agent Finance. It refers to the two sides we see in markets. DeFi did very well in its first phase, decentralizing various economic primitives and bringing them on-chain so that users can access them without trusting any intermediary. But when we think about markets, we think about the numbers going up and down, and about the intelligence driving those decisions. When you can bring that intelligence itself on-chain, you get a trustless financial instrument, like a fund.
A simple example is, suppose we want to build a meme coin trading hedge fund. Our strategy is to buy Trump coins when we see mentions of Trump and buy Biden coins when we see mentions of Biden. In AO, you can use oracle services like 0rbit to get the full content of web pages, such as The Wall Street Journal or The New York Times, and then feed it into your agent, which processes this data and analyzes how many times Trump has been mentioned. You can also perform sentiment analysis to understand market trends. Then, your agent will buy and sell these assets based on this information.
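A toy version of that strategy in TypeScript; the oracle fetch and swap functions are stubs standing in for 0rbit and an on-chain exchange, not their real APIs.

```typescript
// Toy sketch of the "mention-counting" strategy described above. The oracle
// fetch and swap calls are stand-ins, not the 0rbit or AO message APIs.

type Oracle = (url: string) => Promise<string>;
type Swap = (token: string, amount: number) => void;

async function runStrategy(fetchPage: Oracle, swap: Swap): Promise<void> {
  const page = await fetchPage("https://example-news-site.test/front-page");
  const text = page.toLowerCase();

  const trumpMentions = (text.match(/trump/g) ?? []).length;
  const bidenMentions = (text.match(/biden/g) ?? []).length;

  // Naive signal: buy whichever name is mentioned more this cycle.
  if (trumpMentions > bidenMentions) swap("TRUMP-MEME", 100);
  else if (bidenMentions > trumpMentions) swap("BIDEN-MEME", 100);
}

// Wiring with stubs so the sketch runs standalone.
const fakeOracle: Oracle = async () => "Trump rally... Trump speech... Biden town hall";
const fakeSwap: Swap = (token, amount) => console.log(`buy ${amount} of ${token}`);
runStrategy(fakeOracle, fakeSwap); // buy 100 of TRUMP-MEME
```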
Interestingly, we can make the agents themselves operate without trust. This way, you have a hedge fund that can execute strategies, and you can invest funds into it without trusting the fund manager. This is another aspect of finance that the DeFi world has not truly touched upon, which is making informed decisions and then acting on them. If we can make these decision-making processes trustworthy, we can unify the entire system into what looks like a truly decentralized economy, rather than just a settlement layer involving different economic games.
We see this as a huge opportunity, and some people in the ecosystem are already starting to build these components. We have a team creating a trustless portfolio manager that buys and sells assets based on the proportions you want. For example, you want 50% to be Arweave tokens and 50% to be stablecoins. When the prices of these things change, it will automatically execute trades. There’s also an interesting concept behind this; AO has a feature we call cron messages. This means processes can wake themselves up and decide to do something autonomously in the environment. You can set your hedge fund smart contract to wake up every five seconds or five minutes, fetch data from the network, process it, and take action in the environment. This makes it completely autonomous, as it can interact with the environment; in a sense, it is "alive."
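A minimal sketch of such a self-waking rebalancer; AO's cron messages are delivered by the network itself, so the `setInterval` call below is only a stand-in for that wake-up signal.

```typescript
// Hedged sketch of a self-waking 50/50 rebalancer. AO's actual cron messages
// are delivered by the network; setInterval below merely stands in for them.

type Portfolio = { ar: number; stable: number }; // values in a common unit, e.g. USD

function rebalance(p: Portfolio, tolerance = 0.01): Portfolio {
  const total = p.ar + p.stable;
  const target = total / 2;
  // Only trade when drift exceeds the tolerance band, to avoid churn.
  if (Math.abs(p.ar - target) / total <= tolerance) return p;
  return { ar: target, stable: target }; // swap the excess side into the other
}

let portfolio: Portfolio = { ar: 70, stable: 30 };

// Stand-in for a cron message waking the process every five minutes.
setInterval(() => {
  portfolio = rebalance(portfolio);
  console.log("rebalanced:", portfolio);
}, 5 * 60 * 1000);
```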
Executing smart contracts on Ethereum requires an external trigger, and a lot of infrastructure has been built to work around this, but it is not seamless. In AO, these capabilities are built in. So you will see a market on-chain in which agents constantly compete with one another. This will significantly increase usage of the network in ways the crypto space has never seen before.
Overall Strategy and Development Focus of NEAR.ai
Lulu: NEAR.ai is advancing some promising use cases. Can you tell us more about other aspects or the overall strategy and some key focuses?
Illia: Indeed, there are many things happening at every level, with various products and projects that can be integrated. It all obviously starts with the NEAR blockchain itself. Many projects need a scalable blockchain, some form of authentication, payment, and coordination. NEAR's smart contracts are written in Rust and JavaScript, which is very convenient for many use cases. One interesting thing is that NEAR's recent protocol upgrade introduced what are called yield/resume precompiles. These precompiles allow smart contracts to pause execution, waiting for external events to occur, whether from another smart contract or AI inference, and then resume execution. This is very useful for smart contracts that need input from LLMs (like ChatGPT) or verifiable reasoning.
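A conceptual sketch of the yield/resume pattern (not the near-sdk API): the contract method pauses at the point where it needs an off-chain answer, and a later callback resumes it with the external result.

```typescript
// Conceptual sketch of the yield/resume pattern (not the near-sdk API):
// the contract method pauses at the point where it needs an off-chain
// answer, and a later callback resumes it with the external result.

type Pending = { resume: (answer: string) => void };
const pending = new Map<string, Pending>();

// "Contract" method: yields until an external responder provides input.
function askModel(requestId: string, prompt: string): Promise<string> {
  console.log(`yielded, waiting on: ${prompt}`);
  return new Promise((resolve) => pending.set(requestId, { resume: resolve }));
}

// Off-chain responder (e.g. an AI inference node) calls back later.
function resumeWithAnswer(requestId: string, answer: string): void {
  pending.get(requestId)?.resume(answer);
  pending.delete(requestId);
}

async function contractMethod(): Promise<void> {
  const verdict = await askModel("req-1", "Is this loan application fraudulent?");
  console.log(`resumed with external answer: ${verdict}`); // execution continues here
}

contractMethod();
resumeWithAnswer("req-1", "low risk");
```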
We also introduced chain abstraction and chain signature functionalities, which are unique features NEAR has brought in over the past six months. Any NEAR account can transact on other chains. This is very useful for building agents, AI inference, or other infrastructures because now you can conduct cross-chain transactions via NEAR without worrying about transaction fees, tokens, RPC, and other infrastructures. All of this is handled for you through the chain signature infrastructure. Regular users can also use this feature. There is a HOT Wallet built on NEAR on Telegram, which has just launched Base integration on the mainnet, with about 140,000 users using Base through this Telegram wallet.
Furthermore, we plan to develop a peer-to-peer network that will involve agents, AI inference nodes, and other storage nodes in more provable communication protocols. This is very important because the current network stack is very limited and lacks native payment functionality. Although we often say that blockchain is "internet money," we have not yet solved the problem of sending data packets with money at the network level. We are addressing this issue, which is very useful for all AI use cases and broader Web3 applications.
In addition, we are developing what we call AI inference routers, which are essentially a place that can plug into all use cases, middleware, decentralized inference, and on-chain and off-chain data providers. This router can serve as a framework that truly connects all the projects being built in the NEAR ecosystem and then provides all of this to the NEAR user base. NEAR has over 15 million monthly active users across different models and applications.
Some applications are exploring how to deploy models on user devices, known as edge computing. This approach keeps data stored locally and operates through the relevant protocols and SDKs. From a privacy perspective, it has great potential: in the future, many applications will run on the user's device, generating or precompiling the user experience with purely local models so that no data leaks out. On the developer side, we have a lot of research underway aimed at making it easy for anyone to build and publish applications on Web3 and to formally verify them on the backend. This will become an important topic, as LLMs become increasingly powerful at discovering vulnerabilities in codebases.
In summary, this is a complete tech stack: the underlying blockchain infrastructure, chain abstraction for Web3, and peer-to-peer connections well suited to linking off-chain and on-chain participants. On top of that come the AI inference router and local data storage, which are particularly useful when private data needs to be accessed without leaking it externally. Finally, on the developer side, we are integrating all of this research so that future applications can be built by AI. In the medium to long term, this will be a very important direction.
Priorities and Research Focus of AO
Lulu: I would like to ask Sam, what are AO's current priorities and research focuses?
Sam: One idea I am particularly interested in is leveraging the scalability AO provides to build a deterministic subset of CUDA, an abstract GPU driver. GPU computation is normally non-deterministic, so it cannot be used safely for the kind of verifiable computation AO does, which is why no one could trust such processes. Solving this is theoretically possible; we just need to address the sources of non-determinism at the device level. There has been some interesting research, but it has to be handled in a way that is always 100% deterministic, which is essential for smart contract execution. We already have a plugin system within AO that supports this functionality as a driver. The framework is in place; we just need to work out how to implement it precisely. There are many technical details, but the basic idea is to make jobs in the GPU environment predictable enough for this type of computation.
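One concrete source of the non-determinism referred to here is that floating-point reductions depend on accumulation order, which parallel GPU schedulers do not fix. The toy example below shows the problem and one possible mitigation, fixed-point accumulation; it is only an illustration, not AO's driver design.

```typescript
// Illustration of one source of GPU non-determinism: floating-point
// reductions depend on accumulation order, which parallel schedulers do
// not fix. Quantizing to integers (bigint) makes the result order-
// independent. This is a toy example, not AO's driver design.

const values = Array.from({ length: 1_000 }, (_, i) => 1 / (i + 1));

function floatSum(xs: number[]): number {
  return xs.reduce((acc, x) => acc + x, 0);
}

// Same numbers, different order: the float results can differ in the last bits.
const forward = floatSum(values);
const reversed = floatSum([...values].reverse());
console.log(forward === reversed); // often false

// Fixed-point accumulation: scale to integers first, then sum exactly.
const SCALE = 10n ** 12n;
function fixedPointSum(xs: number[]): bigint {
  return xs
    .map((x) => BigInt(Math.round(x * Number(SCALE))))
    .reduce((acc, x) => acc + x, 0n);
}
console.log(fixedPointSum(values) === fixedPointSum([...values].reverse())); // true
```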
Another area I am interested in is whether we can use this on-chain AI capability to enable decentralized, or at least open and distributed, model training, especially fine-tuning. The basic idea is that if you can define a clear criterion for a task, you can train models against that criterion. Can we create a system where people stake tokens to incentivize miners to compete to build better models? This may not attract a very diverse set of miners, but that doesn't matter, because it allows models to be trained in the open. Then, when miners upload models, they can attach a Universal Data License tag stipulating that anyone can use the models, but commercial use owes specific royalties. Royalties can be distributed to contributors through tokens. By combining all these elements, we can create an incentive mechanism for training open-source models.
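A hedged sketch of that incentive loop: backers stake toward a training objective, miners submit candidate models scored against the agreed criterion, and the best submission takes the stake, with a royalty rate recorded for commercial use. All names and numbers are illustrative assumptions, not an AO design.

```typescript
// Hedged sketch of the incentive loop described above: backers stake toward a
// training objective, miners submit candidate models, the best score wins the
// stake, and a royalty rate is recorded for later commercial use. All names
// and numbers are illustrative assumptions.

type Submission = { miner: string; score: number }; // score vs. the agreed criterion

type Bounty = {
  objective: string;
  stake: number;      // tokens pledged by backers
  royaltyBps: number; // basis points owed on commercial use
  submissions: Submission[];
};

const bounty: Bounty = {
  objective: "fine-tune a model to summarize Arweave transactions",
  stake: 10_000,
  royaltyBps: 250, // 2.5%
  submissions: [],
};

function submit(b: Bounty, miner: string, score: number): void {
  b.submissions.push({ miner, score });
}

// Settlement: the highest-scoring miner takes the stake; royalties accrue to them.
function settleBounty(b: Bounty): { winner: string; reward: number; royaltyBps: number } {
  const winner = b.submissions.reduce((best, s) => (s.score > best.score ? s : best));
  return { winner: winner.miner, reward: b.stake, royaltyBps: b.royaltyBps };
}

submit(bounty, "miner-a", 0.71);
submit(bounty, "miner-b", 0.83);
console.log(settleBounty(bounty)); // { winner: "miner-b", reward: 10000, royaltyBps: 250 }
```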
I also think the previously mentioned RAIL initiative is very important. We have discussed with some major AI providers, or inference providers, the possibility of supporting it, and they have shown strong interest. If we can get them to actually implement it and write this data to the network, then users could right-click any image on the internet and query whether it was generated by Stable Diffusion or DALL·E. These are all very interesting areas we are exploring.
Illia and Sam's Favorite Projects
Lulu: Please each nominate a recent AI or crypto project you like, it can be any project.
Illia: I'm going to take a shortcut. We hold AI Office Hours every week, inviting various projects, and recently we had Masa and Compute Labs. Both projects are fantastic; I'll use Compute Labs as an example. Compute Labs essentially turns actual computing resources (like GPUs and other hardware) into real assets that people can participate in economically, allowing users to earn from these devices. Compute marketplaces are booming in the crypto space, and they seem like a natural fit for cryptocurrencies to facilitate. But the problem is that these marketplaces lack moats and network effects, leading to fierce competition and compressed margins, so a compute marketplace ends up being a complement to other business models. Compute Labs provides a very crypto-native business model: capital formation and asset tokenization. It opens up to ordinary participants opportunities that would normally require building a data center. The compute marketplace is just one part of it, with the main goal of providing access to computing resources. This model also fits into the broader decentralized AI ecosystem, providing underlying computing resources and giving a wider group of investors the opportunity to participate in the innovation.
Sam: There are many great projects in the AO ecosystem, and I don't want to play favorites, but I think the underlying infrastructure that Autonomous Finance is building makes "agent finance" possible. That is very cool, and they are really at the forefront of it. I also want to thank the broader open-source AI community, especially Meta for open-sourcing the Llama models, which has encouraged many others to open-source theirs. Without this trend, if OpenAI had turned into ClosedAI after GPT-2, we might have fallen into a dark age, especially in the crypto space, because we would not have access to these models. Everyone would have to rent closed-source models from one or two major providers. Fortunately, that has not happened, which is great. Ironically, we should give a thumbs up to Meta, the king of Web2.