Arweave Founder Talks with NEAR Co-Founder: Exploring the Integration of AI and Blockchain
Author: Siwei Guai Guai, BlockBeats
Translator's Note: On June 14, the AO Foundation officially launched the token economics of the decentralized supercomputer AO. Just two days earlier, on the evening of June 12, Lulu, a partner at the decentralized hackathon platform and accelerator BeWater, invited Sam Williams, founder of Arweave and AO, and Illia Polosukhin, co-founder of NEAR Protocol, for an in-depth discussion on the integration of AI and blockchain. Sam elaborated on the underlying architecture of AO, which is based on an actor-oriented paradigm and a decentralized Erlang model, aiming to create a decentralized computing network that is infinitely scalable and supports heterogeneous process interactions.
Sam also envisioned the potential applications of AO in DeFi scenarios, where the introduction of trustworthy AI strategies could enable true "agent finance." Illia shared the latest advancements of NEAR Protocol in scalability and AI integration, including the introduction of chain abstraction and chain signature functionalities, as well as the development of peer-to-peer payment and AI inference routers. Additionally, both guests expressed their views on the current priorities and research focuses within their respective ecosystems, as well as innovative projects they are optimistic about.
How Illia and Sam Got Involved in AI and Crypto
Lulu: First, please introduce yourselves and tell us how you got involved in both AI and blockchain.
Illia: My background is in machine learning and artificial intelligence; I worked in the field for about 10 years before entering the crypto space. I am best known for the paper "Attention Is All You Need," which introduced the Transformer model now used throughout modern machine learning, AI, and deep learning. Before that, I worked on many projects, including TensorFlow, the machine learning framework Google open-sourced around 2014-2015, as well as research on question-answering systems and machine translation, some of which shipped in Google.com and other Google products.
Later, I co-founded NEAR.ai with Alex, initially as an AI company focused on teaching machines to program. We believed that in the future people would communicate with computers in natural language and the computers would write the programs themselves. In 2017 this sounded like science fiction, but we did a lot of research and crowdsourced training data, with students from China, Eastern Europe, and elsewhere completing small tasks for us, such as writing code and annotating it with comments. We ran into trouble paying them, however, because PayPal could not transfer funds to users in China.
Someone suggested using Bitcoin, but by then Bitcoin's transaction fees were already quite high, so we began digging deeper into the space. We came from a scalability background: at Google everything had to scale, and my co-founder Alex had built a sharded-database company serving Fortune 500 clients. It was strange to see the state of blockchain technology at the time, where almost everything ran on a single machine and was limited by what that one machine could do.
So we set out to build a new protocol, which became NEAR Protocol: a sharded Layer 1 focused on scalability, usability, and developer experience. We launched the mainnet in 2020 and have been growing the ecosystem since. In 2022 Alex joined OpenAI, and in 2023 he founded an AI company focused on foundation models. Recently we announced his return to lead the NEAR.ai team and continue the work we started in 2017 on teaching machines to program.
Lulu: That's a fascinating story! I didn't know that NEAR initially started as an AI company and is now refocusing on AI. Next, Sam, please introduce yourself and your project.
Sam: We started getting involved in this field about seven years ago, and I had been following Bitcoin for a long time. We discovered an exciting but underexplored idea: you can store data on a network that will be replicated globally, without a single centralized point of failure. This inspired us to create a permanent archive that is replicated in multiple locations, making it impossible for any single organization or government to censor the content.
So our mission became scaling Bitcoin, or rather enabling Bitcoin-style on-chain data storage at any scale, so that we could build a knowledge base for humanity: an immutable, trustless log of history that ensures we never forget how we arrived at where we are today.
We began this work seven years ago, and the mainnet has now been live for over six years. Along the way we realized that permanent on-chain storage offers far more than we first imagined. Our initial idea was to store newspaper articles, but shortly after mainnet launch we realized that if you can store all of this content around the world, you are essentially planting the seeds of a permanent, decentralized web. Then, around 2020, we realized that if you have a deterministic virtual machine and a permanent, ordered log of interactions with programs, you essentially have a smart contract system.
We first tried this in 2020 with a system called SmartWeave, borrowing the computer-science concept of lazy evaluation, popularized mainly by the programming language Haskell. The idea had long been used in production environments but had never really been applied to blockchains. Typically in this space, smart contracts are executed at the moment messages are written. We see a blockchain instead as an append-only data structure with rules for admitting new information; the code does not have to run at the moment the data is written. Since we already had an arbitrarily scalable data log, this was a natural way for us to think, though it was rare at the time. The only other team thinking this way was what is now Celestia (formerly LazyLedger).
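To make the lazy-evaluation idea concrete, here is a minimal Python sketch (purely illustrative, not SmartWeave's actual code): writing only appends to an ordered log, and any reader can later derive the current state by folding a deterministic transition function over that log.

```python
from functools import reduce

# Append-only log: the "chain" only orders and stores messages; no code runs on write.
message_log = []

def write(message):
    """Writing is cheap: just append to the ordered log."""
    message_log.append(message)

def transition(state, message):
    """Deterministic state-transition function (a toy token contract)."""
    balances = dict(state)
    if message["op"] == "mint":
        balances[message["to"]] = balances.get(message["to"], 0) + message["amount"]
    elif message["op"] == "transfer":
        if balances.get(message["from"], 0) >= message["amount"]:
            balances[message["from"]] -= message["amount"]
            balances[message["to"]] = balances.get(message["to"], 0) + message["amount"]
    return balances

def evaluate(log):
    """Lazy evaluation: anyone can recompute the state on demand by replaying the log."""
    return reduce(transition, log, {})

write({"op": "mint", "to": "alice", "amount": 100})
write({"op": "transfer", "from": "alice", "to": "bob", "amount": 40})
print(evaluate(message_log))  # {'alice': 60, 'bob': 40}
```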
This led to a Cambrian explosion of computing systems on Arweave: roughly three or four major projects, each developing its own community, feature set, and security trade-offs. In the process we realized that we needed not only the base layer's data availability to store these logs, but also a mechanism for delegating data availability guarantees. Specifically, you can submit data to a packing node or another representative (now called a scheduler unit), which uploads it to the Arweave network and gives you an economically incentivized guarantee that it will be written. Once that mechanism was in place, we had a system capable of scaling computation horizontally: essentially a set of processes, much like rollups on Ethereum, sharing the same dataset and able to communicate with each other.
The name AO (Actor-Oriented) comes from a paradigm in computer science, and we built a system that combines all these components: a native messaging system, data availability providers, and a decentralized computing network. The lazy-evaluation component thus becomes a distributed set of nodes, where anyone can start a node and resolve contract state. When you combine these elements, you get a decentralized supercomputer. At its core is an arbitrarily scalable message log that records every message involved in the computation. I find this particularly interesting because computations can run in parallel: your process does not affect the scalability or utilization of mine, so you can run computation of arbitrary depth, such as large-scale AI workloads, inside the network. Our ecosystem is now actively pushing this idea, exploring what happens when market intelligence is introduced at the base layer of the smart contract system. You end up with intelligent agents working on your behalf that are as trustworthy and verifiable as the underlying smart contracts.
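As an illustration of the "one message format, many virtual machines" idea, here is a hypothetical Python sketch; the field names and routing logic are invented for illustration and are not AO's actual data protocol.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    """Hypothetical, simplified message envelope: one shared format for every process."""
    target: str                                # id of the process being addressed
    tags: dict = field(default_factory=dict)   # metadata, e.g. {"Action": "Ping"}
    data: str = ""                             # arbitrary payload

class Process:
    """A process only ever sees messages; how it executes them (WASM, EVM, ...) is its own concern."""
    def __init__(self, pid, handler):
        self.pid, self.handler, self.inbox = pid, handler, []

    def deliver(self, msg):
        self.inbox.append(msg)

    def step(self):
        outbox = []
        while self.inbox:
            outbox.extend(self.handler(self.pid, self.inbox.pop(0)))
        return outbox   # new messages, handed back to the shared messaging layer

def route(processes, messages):
    """The shared messaging layer: persist (here: print) and deliver each message in order."""
    while messages:
        msg = messages.pop(0)
        print("log:", msg)                      # on AO this record would be stored permanently
        if msg.target in processes:
            processes[msg.target].deliver(msg)
            messages.extend(processes[msg.target].step())

procs = {
    "ping": Process("ping", lambda pid, m: [Message(target="pong", tags={"Action": "Ping"})]
                    if m.tags.get("Action") == "Start" else []),
    "pong": Process("pong", lambda pid, m: []),  # terminal handler
}
route(procs, [Message(target="ping", tags={"Action": "Start"})])
```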
The Underlying Concepts and Technical Architecture of AO
Lulu: As we know, NEAR Protocol and Arweave are now driving the intersection of AI and cryptocurrency. I would like to delve deeper into this. Since Sam has touched on some of the underlying concepts and architecture of AO, I might start with AO and then shift to AI later. The concepts you described make me feel like those agents are autonomously running, coordinating, and allowing AI agents or applications to operate on top of AO. Could you elaborate on the parallel execution or autonomous agents within the AO infrastructure? Is the metaphor of building a decentralized Erlang accurate?
Sam: Before I begin, I should mention that during my PhD I built an Erlang-based operating system that ran on bare metal. What is exciting about Erlang is that it is a simple yet expressive environment in which every piece of computation is expected to run in parallel, rather than under the shared-state model that has become the norm in the crypto space.
Its elegance lies in how beautifully it maps onto the real world. Just as in this conversation, we are independent actors, computing in our own heads, then listening, thinking, and speaking. Erlang's actor model, its actor-oriented architecture, really is remarkable. Right after my talk at the AO summit, one of Erlang's co-creators spoke about how they arrived at this architecture around 1989; at the time they did not even know the term "actor-oriented." But it is such a natural concept that many people have arrived at the same idea independently, because it simply makes sense.
For me, if you want to build truly scalable systems, they have to be message-passing rather than state-sharing. Shared state is what you get in Ethereum, Solana, and almost every other blockchain; NEAR is actually an exception, since it is sharded and contracts hold local state rather than one shared global state.
When we built AO, the goal was to combine these concepts. We wanted processes that could execute in parallel, capable of arbitrarily large-scale computations, while separating the interactions of these processes from their execution environments, ultimately forming a decentralized version of Erlang. For those who are less familiar with distributed technology, the simplest way to understand it is to think of it as a decentralized supercomputer. With AO, you can start a terminal within the system. As a developer, the most natural way to use it is to start your local process and then interact with it, just like you would with a local command line interface. As we move towards consumer adoption, people are building UIs and everything you would expect. Fundamentally, it allows you to run personal computations in this decentralized computing cloud and interact using a unified message format. We referenced the TCP/IP protocol that runs the internet when designing this part, trying to create a protocol that could be seen as the TCP/IP of computation itself.
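To ground the decentralized-Erlang analogy, here is a tiny actor-model sketch using Python threads as stand-ins for processes (illustrative only, not AO or Erlang code): each actor owns its own state and can only be reached through messages to its mailbox, so one actor's workload never touches another's state.

```python
import threading, queue

class Actor(threading.Thread):
    """A minimal Erlang-style actor: private state, a mailbox, and message passing only."""
    def __init__(self, name):
        super().__init__(daemon=True)
        self.name, self.mailbox, self.state = name, queue.Queue(), {}

    def send(self, msg):
        self.mailbox.put(msg)   # the only way to influence an actor

    def run(self):
        while True:
            msg = self.mailbox.get()
            if msg["op"] == "stop":
                break
            elif msg["op"] == "incr":
                self.state["count"] = self.state.get("count", 0) + 1
            elif msg["op"] == "report":
                msg["reply_to"].send({"op": "result", "from": self.name,
                                      "count": self.state.get("count", 0)})
            elif msg["op"] == "result":
                print(f"{self.name} heard: {msg['from']} counted {msg['count']}")

# Two independent actors: they run in parallel and never share state directly.
a, b = Actor("a"), Actor("b")
a.start(); b.start()
for _ in range(3):
    a.send({"op": "incr"})
a.send({"op": "report", "reply_to": b})   # a replies to b with a message, not shared memory
a.send({"op": "stop"})
a.join(timeout=1)
b.send({"op": "stop"})
b.join(timeout=1)
```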
The data protocol of AO does not mandate any specific type of virtual machine. You can use any virtual machine you want; we have implemented both WASM32 and 64-bit versions. Others in the ecosystem have implemented EVM. If you have this shared messaging layer (we use Arweave), then you can allow all these highly heterogeneous processes to interact in a shared environment, just like the internet of computation. Once this infrastructure is in place, the next step is naturally to explore what can be done using intelligent, verifiable, trustless computation. The obvious applications are AI or smart contracts, allowing agents to make intelligent decisions in the market, potentially competing with each other or representing humans against other humans. When we look at the global financial system, about 83% of trades on NASDAQ are executed by robots. This is how the world operates.
In the past, we couldn't put the intelligence itself on-chain and make it trustworthy. But in the Arweave ecosystem there is a parallel workstream we call RAIL, the Responsible AI Ledger. It is essentially a way to record the inputs and outputs of different models and store those records publicly and transparently, so you can query it and ask, "Hey, did this piece of data come from an AI model?" If we can get this adopted, we believe it solves a fundamental problem we see today. For example, if someone sends you a news article from an untrusted website with a picture or video of a politician doing something foolish, how do you know whether it's real? RAIL provides a ledger that many competing companies can use in a transparent, neutral way to store records of their generated outputs, just as they all use the internet, and they can do it at very low cost.
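As a rough illustration of what such a ledger could look like (a hypothetical structure, not RAIL's actual schema): providers append a fingerprint of each generated output, and anyone can later check whether a given artifact matches a recorded generation.

```python
import hashlib, time

# A hypothetical, minimal "responsible AI ledger": providers append records of what
# their models produced; anyone can later check whether a given artifact appears.
ledger = []  # in RAIL, records like these would be stored permanently on Arweave

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def log_generation(provider: str, model: str, output: bytes):
    ledger.append({
        "provider": provider,
        "model": model,
        "output_hash": fingerprint(output),
        "timestamp": time.time(),
    })

def lookup(content: bytes):
    """Return any ledger records whose fingerprint matches this content."""
    h = fingerprint(content)
    return [r for r in ledger if r["output_hash"] == h]

image = b"...bytes of a generated image..."
log_generation("ExampleAI", "image-model-v1", image)
print(lookup(image))                  # matching record(s) -> likely AI-generated
print(lookup(b"an unrelated photo"))  # [] -> no record on the ledger
```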
Illia's Perspective on Blockchain Scalability
Lulu: I'm curious about Illia's perspective on the scalability of the AO approach or model. You have worked on the Transformer model, which aims to address the bottlenecks of sequential processing. I want to ask, what is NEAR's approach to scalability? In a previous AMA chat, you mentioned that you are exploring a direction where multiple small models form a system, which could be one of the solutions.
Illia: Scalability can manifest in many different ways in blockchain, and we can continue along the lines of Sam's discussion. What we see now is that if you use a single large language model (LLM), it has some limitations in reasoning. You need to prompt it in a specific way for it to run for a while. Over time, the models will continue to improve and become more general. But in any case, you are tuning these models (which can be seen as primitive intelligence) to perform specific functions and tasks and to reason better in specific contexts.
If you want them to perform more general work and processes, you need multiple models running in different contexts, executing different aspects of tasks. For a very specific example, we are currently developing an end-to-end process. You could say, "Hey, I want to build this application." The final output is a fully constructed application, complete with correct, formally verified smart contracts, and the user experience has been thoroughly tested. In real life, there usually isn't one person building all these things, and the same idea applies here. You actually want AI to play different roles and take on different responsibilities at different times, right?
First, you need an AI agent acting as a product manager, gathering requirements, figuring out what you really want, what the trade-offs are, and what the user stories and experiences are. Then there might be an AI designer responsible for translating those designs into the frontend. Next could be an architect responsible for the backend and middleware architecture. Then there is the AI developer, writing code and ensuring that the smart contracts and all frontend work are formally verified. Finally, there might be an AI tester ensuring everything runs smoothly, testing through a browser. This forms a set of AI agents that, while they may use the same model, are fine-tuned for specific functions. They interact in the process, each playing their role independently, using prompts, structures, tools, and the observed environment to build a complete workflow.
This is what Sam was talking about, having many different agents that asynchronously complete their work, observing the environment and figuring out what to do. So you do need a framework, a system to continuously improve them. From the user's perspective, you send a request and interact with different agents, but they work together as a single system to get the job done. At a lower level, they might actually be paying each other for exchanging information, or different agents owned by different owners interact to actually accomplish something. This is a new version of an API, smarter and more natural language-driven. All of this requires a lot of framework structure, as well as payment and settlement systems.
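A schematic of that role-based pipeline in Python; `call_llm`, the role names, and the prompts are placeholders for illustration, not NEAR.ai's actual implementation.

```python
# Hypothetical sketch of the role-specialized agent pipeline: the same underlying
# model, prompted or fine-tuned for a role, with each agent handing its output on.

def call_llm(role_prompt: str, task: str) -> str:
    # Placeholder: in practice this would call a prompted or fine-tuned model.
    return f"[{role_prompt}] output for: {task}"

ROLES = [
    ("product_manager", "Gather requirements and write user stories"),
    ("designer",        "Turn the user stories into a UI design"),
    ("architect",       "Design the backend and middleware"),
    ("developer",       "Write and formally verify the smart contracts and frontend"),
    ("tester",          "Exercise the application end to end in a browser"),
]

def build_app(request: str) -> dict:
    artifacts, context = {}, request
    for role, responsibility in ROLES:
        # Each agent sees the original request plus everything produced so far.
        context = call_llm(f"You are the {role}. {responsibility}.", context)
        artifacts[role] = context
    return artifacts

for role, artifact in build_app("I want a tipping app").items():
    print(role, "->", artifact)
```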
There is a new term for this, AI commerce: all of these agents interacting with one another to complete tasks. That is the system we are all moving towards. If you consider the scalability of such a system, several issues need addressing. As I mentioned, NEAR is designed to support billions of users, whether humans, AI agents, or even cats, as long as they can transact. Each NEAR account and smart contract runs in parallel, which allows continued scaling. At a lower level, though, you probably don't want to send an on-chain transaction every time you call an AI agent or an API; that wouldn't be reasonable no matter how cheap NEAR is. So we are developing a peer-to-peer protocol that lets agents, nodes, and clients, human or AI, connect to one another and pay for API calls, data retrieval, and so on, with crypto-economic rules ensuring they respond or lose part of their collateral.
This is a new system that scales beyond NEAR itself by enabling micropayments. We call the unit yoctoNEAR, 10^-24 of a NEAR. It lets you exchange messages at the network level with payments attached, so every operation and interaction can be settled through this payment system. That addresses a fundamental gap in blockchain: we lack a payment system that works at network bandwidth and latency, which leads to many free-rider problems. It is a very interesting dimension of scalability, not just blockchain scalability but scalability for a future world with potentially billions of agents, where even on your own device multiple agents may be running simultaneously, executing tasks in the background.
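A toy model of what such a paid, collateral-backed call might look like; the names, numbers, and the 10x slashing rule are invented for illustration and are not the actual protocol.

```python
from dataclasses import dataclass

YOCTO = 10**24  # 1 NEAR = 10^24 yoctoNEAR, the smallest unit referred to above

@dataclass
class PaidRequest:
    """Hypothetical shape of a peer-to-peer message that carries its own payment."""
    caller: str
    provider: str
    method: str          # e.g. "infer", "fetch"
    payment_yocto: int   # micropayment attached to this single call

class ProviderNode:
    """A node that has posted collateral; failing to respond forfeits part of it."""
    def __init__(self, name, collateral_yocto):
        self.name, self.collateral, self.earned = name, collateral_yocto, 0

    def handle(self, req: PaidRequest, respond: bool):
        if respond:
            self.earned += req.payment_yocto
            return {"ok": True, "result": f"{req.method} done for {req.caller}"}
        # Illustrative crypto-economic rule: no response -> lose 10x the fee.
        self.collateral -= 10 * req.payment_yocto
        return {"ok": False, "slashed": 10 * req.payment_yocto}

node = ProviderNode("inference-node-1", collateral_yocto=5 * YOCTO)
req = PaidRequest("alice", node.name, "infer", payment_yocto=2_000)  # 2000 yoctoNEAR
print(node.handle(req, respond=True))
print(node.handle(req, respond=False), "remaining collateral:", node.collateral)
```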
AO's Applications in DeFi: Agent Finance
Lulu: This use case is very interesting. I believe AI payments typically demand high-frequency settlement and complex strategies, which have not yet been realized due to performance limitations, so I look forward to seeing how better scalability can meet these needs. In our hackathon, Sam and the team mentioned that AO is also exploring using new AI infrastructure to support DeFi use cases. Sam, could you elaborate on how your infrastructure applies in the new DeFi scenarios?
Sam: We call it agent finance, and it reflects two aspects of how we see markets. DeFi did very well in its first phase: it decentralized various economic primitives and brought them on-chain, letting users operate without trusting any intermediaries. But when we think about markets, we think about both the price movements of digital assets and the intelligence driving those decisions. When you can bring that intelligence itself on-chain, you get a trustless financial instrument, like a fund.
A simple example is, suppose we want to build a meme coin trading hedge fund. Our strategy is to buy Trump coins when we see mentions of Trump and buy Biden coins when we see mentions of Biden. In AO, you can use oracle services like 0rbit to get the full content of web pages, such as The Wall Street Journal or The New York Times, and then input that into your agent, which processes the data and analyzes how many times Trump was mentioned. You can also perform sentiment analysis to understand market trends. Then, your agent will buy and sell those assets based on that information.
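A toy version of that strategy in Python, with the oracle call stubbed out (on AO the page contents would come from a service such as 0rbit; the asset names and signal logic here are made up):

```python
# A toy version of the "meme coin hedge fund" strategy described above.

def fetch_page(url: str) -> str:
    """Stubbed oracle result; a real agent would receive the full page contents."""
    return "Trump said ... Trump also ... Biden replied ..."

def decide(url: str) -> list:
    text = fetch_page(url).lower()
    trump, biden = text.count("trump"), text.count("biden")
    orders = []
    if trump > biden:
        orders.append({"side": "buy", "asset": "TRUMP-MEME", "signal": trump - biden})
    elif biden > trump:
        orders.append({"side": "buy", "asset": "BIDEN-MEME", "signal": biden - trump})
    return orders

print(decide("https://example.com/front-page"))  # e.g. buy TRUMP-MEME with signal 1
```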
Interestingly, we can make the agents themselves trustless. This way, you have a hedge fund that can execute strategies, and you can invest funds in it without trusting the fund manager. This is another aspect of finance that the DeFi world has not truly touched upon: making informed decisions and then taking action. If we can make these decision-making processes trustworthy, we can unify the entire system into what looks like a truly decentralized economy, rather than just a settlement layer involving different economic games.
We see this as a huge opportunity, and some people in the ecosystem have already started building these components. We have a team creating a trustless portfolio manager that buys and sells assets based on the proportions you want. For example, you want 50% to be Arweave tokens and 50% to be stablecoins. When the prices of these things change, it will automatically execute trades. There is also an interesting concept behind this: AO has a feature we call cron messages. This means processes can wake themselves up and decide to do something autonomously in the environment. You can set your hedge fund smart contract to wake up every five seconds or five minutes, retrieve data from the network, process the data, and take action in the environment. This makes it completely autonomous, as it can interact with the environment; in a sense, it is "alive."
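A sketch of what the cron-triggered rebalancing logic might look like; the prices, balances, and tolerance are stand-ins, and on AO this would live inside the process itself rather than in a Python script.

```python
# Illustrative cron-driven portfolio manager: on every wake-up the process checks
# its target weights and emits the trades needed to restore them.

TARGET = {"AR": 0.5, "USDC": 0.5}        # 50% Arweave token, 50% stablecoin

def rebalance(holdings: dict, prices: dict, tolerance=0.02):
    total = sum(holdings[a] * prices[a] for a in holdings)
    trades = []
    for asset, weight in TARGET.items():
        current = holdings[asset] * prices[asset] / total
        drift = current - weight
        if abs(drift) > tolerance:
            trades.append({"asset": asset,
                           "side": "sell" if drift > 0 else "buy",
                           "usd_amount": round(abs(drift) * total, 2)})
    return trades

def on_cron_tick():
    """What runs each time the process wakes itself up (every five seconds or minutes)."""
    holdings = {"AR": 120.0, "USDC": 2_000.0}   # stand-in on-chain balances
    prices = {"AR": 30.0, "USDC": 1.0}          # stand-in oracle prices
    return rebalance(holdings, prices)

print(on_cron_tick())  # AR is ~64% of the book here, so it emits a sell/buy pair
```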
Executing a smart contract on Ethereum requires an external trigger, and a lot of infrastructure has been built to work around that, but it is not seamless. In AO this capability is built in. So you will see an on-chain market in which agents compete with one another continuously, which will drive network usage in ways the crypto space has never seen before.
NEAR.ai's Overall Strategy and Development Focus
Lulu: NEAR.ai is advancing some promising use cases. Can you tell us more about other aspects or the overall strategy and some key focuses?
Illia: Indeed, a lot is happening at every level, with many products and projects to integrate. It all obviously starts with the NEAR blockchain itself: many projects need a scalable blockchain plus some form of authentication, payment, and coordination. NEAR's smart contracts are written in Rust and JavaScript, which is convenient for many use cases. One interesting thing is that a recent NEAR protocol upgrade introduced what we call yield/resume precompiles. They let a smart contract pause execution while it waits for an external event, whether from another smart contract or from AI inference, and then resume. This is very useful for contracts that need input from an LLM (like ChatGPT) or from verifiable inference.
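The yield/resume control flow can be illustrated with a Python generator; this is only an analogy for the pattern, not the NEAR SDK API.

```python
# The yield/resume idea, illustrated with a generator: the contract pauses at the
# point where it needs an external answer, and execution resumes later when that
# answer (e.g. an LLM response) is delivered by an off-chain responder.

def contract_call(question: str):
    # ... contract logic before the pause ...
    answer = yield {"awaiting": "ai_inference", "prompt": question}  # execution yields here
    # ... contract logic resumes once the responder provides `answer` ...
    return f"stored on-chain: {answer}"

pending = contract_call("Is this loan application fraudulent?")
request = next(pending)                 # contract runs until it must wait
print("contract paused, waiting for:", request)

try:
    pending.send("low fraud risk (score 0.07)")   # off-chain responder resumes it
except StopIteration as done:
    print(done.value)                   # 'stored on-chain: low fraud risk (score 0.07)'
```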
We also launched chain abstraction and chain signatures, features NEAR introduced over the past six months that let any NEAR account transact on other chains. This is very useful for building agents, AI inference, and other infrastructure, because you can now transact cross-chain through NEAR without worrying about gas fees, tokens, RPCs, and the rest; the chain-signature infrastructure handles all of that for you. Ordinary users can use it too: HOT Wallet, a Telegram wallet built on NEAR, just launched Base integration on mainnet, and about 140,000 users are already using Base through it.
Beyond that, we plan to build the peer-to-peer network that connects agents, AI inference nodes, and storage nodes through a more provable communication protocol. This matters because the current network stack is very limited and has no native payments. We like to say blockchain is "internet money," yet we still have not solved sending money along with data packets at the network level. We are addressing that, and it is useful for all the AI use cases as well as broader Web3 applications.
We are also developing what we call an AI inference router, essentially a single place to plug in all the use cases, middleware, decentralized inference, and on-chain and off-chain data providers. The router acts as a framework that connects all the projects being built in the NEAR ecosystem and then exposes them to NEAR's user base, which is over 15 million monthly active users across different applications.
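A minimal sketch of the routing idea; the class, capability names, and selection policy are hypothetical, not NEAR's actual router.

```python
# Hypothetical "AI inference router": one entry point that dispatches a request to
# whichever registered provider (decentralized inference, on-chain data, off-chain
# API, local model) matches the requested capability.

class InferenceRouter:
    def __init__(self):
        self.providers = {}          # capability -> list of handler functions

    def register(self, capability: str, handler):
        self.providers.setdefault(capability, []).append(handler)

    def route(self, capability: str, payload: dict):
        handlers = self.providers.get(capability)
        if not handlers:
            raise ValueError(f"no provider registered for {capability!r}")
        # Trivial policy: first provider wins; a real router would weigh price,
        # latency, and verifiability.
        return handlers[0](payload)

router = InferenceRouter()
router.register("chat", lambda p: f"model reply to: {p['prompt']}")
router.register("onchain-data", lambda p: {"account": p["account"], "balance": 42})

print(router.route("chat", {"prompt": "summarize my portfolio"}))
print(router.route("onchain-data", {"account": "alice.near"}))
```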
Some applications are exploring how to deploy models on user devices, i.e., edge computing, which also means storing data locally and working with the related protocols and SDKs. From a privacy perspective this has great potential: in the future many applications will run on the user's device and generate or precompile the user experience with local models only, so data never leaks out. On the developer side, we have a lot of research underway aimed at making it easy for anyone to build and deploy Web3 applications and formally verify them on the backend. This will become an important topic as LLMs get increasingly powerful at discovering vulnerabilities in codebases.
In summary, it is a complete stack: the underlying blockchain infrastructure, chain abstraction across Web3, and peer-to-peer connectivity for linking off-chain and on-chain participants; then the AI inference router and local data storage, which suit cases where private data must be used without leaking it; and finally, on the developer side, integrating all of this research so that future applications can be built by AI. In the medium to long term, that will be a very important direction.
AO's Priorities and Research Focus
Lulu: I would like to ask Sam, what are AO's current priorities and research focuses?
Sam: One idea I am particularly interested in is leveraging AO's scalability to build a deterministic subset of CUDA, an abstract GPU driver. GPU computation is typically non-deterministic, so it cannot be used safely for the kind of computation AO performs, and no one would trust such processes. Solving this is theoretically possible; we just need to address the sources of non-determinism at the device level. There has already been some interesting research, but it has to be handled in a way that is always 100% deterministic, which is crucial for smart contract execution. We already have a plugin system within AO that can support this as a driver. The framework is there; we just need to work out the implementation precisely. There are many technical details, but fundamentally it comes down to making jobs in the GPU environment predictable enough for this type of computation.
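Part of the difficulty is that floating-point arithmetic is not associative, so a parallel reduction that combines values in a different order can produce a different bit-level result. A small Python demonstration of that effect:

```python
import random

# The same numbers summed in two different orders usually disagree at the bit level,
# which is one reason parallel GPU reductions are hard to make deterministic.
values = [random.uniform(-1e10, 1e10) for _ in range(100_000)] + [1e-6] * 100_000

ordered = sum(values)            # one summation order
shuffled = values[:]
random.shuffle(shuffled)
reordered = sum(shuffled)        # the same numbers, another order

print(ordered == reordered)      # very likely False
print(abs(ordered - reordered))  # typically a small but nonzero discrepancy
```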
Another area I'm interested in is whether on-chain AI can enable decentralized, or at least open and distributed, model training, especially fine-tuning. The basic idea is that if you can define a clear criterion for a task, you can train a model against that criterion. Can we create a system where people stake tokens to incentivize miners to compete to build better models? It may not attract an especially diverse set of miners, but that doesn't matter, because it enables open model training. When miners upload their models, they can attach a universal data license tag stating that anyone may use the model, but commercial use requires paying specific royalties, which can be distributed to contributors in tokens. Combining all of these elements gives us an incentive mechanism for training open-source models.
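A toy sketch of the staking-and-competition mechanics described above; the criterion, models, and reward handling are placeholders, not an actual protocol.

```python
# Illustrative training competition: a pool of staked tokens goes to whichever
# submitted model scores best against a publicly agreed criterion.

def evaluate(model, benchmark):
    """The agreed criterion: higher is better (e.g. accuracy on a held-out set)."""
    return sum(1 for x, y in benchmark if model(x) == y) / len(benchmark)

def settle_round(submissions: dict, benchmark, reward_pool: int):
    scores = {miner: evaluate(model, benchmark) for miner, model in submissions.items()}
    winner = max(scores, key=scores.get)
    return {"winner": winner, "score": scores[winner],
            "reward": reward_pool, "scores": scores}

benchmark = [(0, 0), (1, 1), (2, 0), (3, 1)]   # toy labeled data
submissions = {
    "miner_a": lambda x: x % 2,                # gets the parity rule right
    "miner_b": lambda x: 0,                    # always predicts 0
}
print(settle_round(submissions, benchmark, reward_pool=1_000))
# miner_a scores 1.0 and takes the staked reward; its model would then be published
# under the license terms (free to use, royalties on commercial use).
```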
I also think the RAIL initiative I mentioned earlier is very important. We have discussed supporting it with some of the major AI or inference providers, and they have shown genuine interest. If we can get them to actually implement it and write this data to the network, users could right-click any image on the internet and query whether it was generated by Stable Diffusion or DALL·E. These are all very interesting areas we are exploring.
Illia and Sam's Favorite Projects
Lulu: Please each nominate a recent AI or crypto project you like; it can be any project.
Illia: I'm going to take a shortcut. We hold AI Office Hours every week and invite projects on; recently we had Masa and Compute Labs. Both are fantastic, and I'll use Compute Labs as the example. Compute Labs essentially turns real computing resources, like GPUs and other hardware, into assets people can gain economic exposure to, letting users earn from those devices. Compute marketplaces are booming in crypto right now and seem like a natural fit for it, but the problem is that these marketplaces lack moats and network effects, so competition is fierce and margins get compressed; a compute marketplace ends up being a complement to other business models. Compute Labs offers a genuinely crypto-native model: capital formation and the tokenization of compute assets. It opens participation to people who would otherwise need to build data centers, with the marketplace being just one part of it and the main goal being access to computing resources. The model also fits the broader decentralized AI ecosystem, giving a much wider group of investors a way to participate in this innovation by supplying the underlying compute.
Sam: There are many great projects in the AO ecosystem and I don't want to play favorites, but I think the underlying infrastructure Autonomous Finance is building is what makes "agent finance" possible; it's very cool, and they are really at the forefront. I also want to thank the broader open-source AI community, especially Meta's approach of open-sourcing the Llama models, which has encouraged many others to open-source theirs. Without that trend, after OpenAI turned into "ClosedAI" following GPT-2, we might have fallen into a dark age, especially in the crypto space, because we would not have access to these models; everyone would be renting closed-source models from one or two major providers. Fortunately that has not happened, which is great. Ironically, I still want to give a thumbs up to Meta, the king of Web2.