Vitalik Buterin: Prospects and Challenges of Crypto+AI Applications

Vitalik Buterin
2024-01-30 22:04:32
Vitalik elaborates on the four intersections of Crypto+AI.

Author: Vitalik Buterin

Compiler: Karen, Foresight News

Special thanks to the Worldcoin and Modulus Labs teams, Xinyuan Sun, Martin Koeppelmann, and Illia Polosukhin for their feedback and discussions.

For many years, people have asked me the question: "Where is the most fruitful intersection between cryptocurrency and AI?" This is a reasonable question: cryptocurrency and AI are two major deep (software) technology trends of the past decade, and there must be some connection between them.

On the surface, it is easy to find synergies between the two: the decentralization of crypto can counterbalance the centralization of AI; AI is opaque, while crypto brings transparency; AI needs data, and blockchains are good at storing and tracking data. However, over the years, when people have asked me to dig into specific applications, my answer has often been disappointing: "Yes, there are indeed some applications worth exploring, but not many."

In the past three years, with the rise of much more powerful AI in the form of modern LLMs (large language models), and the emergence of stronger cryptographic tools beyond blockchain scaling solutions, such as zero-knowledge proofs, fully homomorphic encryption, and two-party and multi-party secure computation, I have begun to see a shift. There are indeed some promising applications of AI within blockchain ecosystems, or of AI combined with cryptography, although caution is needed in how AI is applied. A particular challenge is that in cryptography, open source is the only way to make something truly secure, but in AI, a model (and even its training data) being open significantly increases its vulnerability to adversarial machine learning attacks. This article categorizes the different ways in which Crypto+AI may intersect and explores the prospects and challenges of each category.

A summary of the Crypto+AI intersection from the uETH blog post. But how can we truly realize these synergies in specific applications?

Four Major Intersections of Crypto+AI

AI is a very broad concept: you can think of AI as a set of algorithms, where the algorithms you create are not driven by explicit specifications, but by stirring a large computational soup and applying some form of optimization pressure to push the soup to generate algorithms with the desired properties.

This description should not be taken lightly: it includes the process that created us humans! But it also means that AI algorithms share some common characteristics: they possess very powerful capabilities, while our ability to know or understand what is going on inside them is limited.

There are many ways to categorize artificial intelligence. For the interactions between AI and blockchains discussed in this article (see Virgil Griffith's article "Ethereum is game-changing technology, literally"), I will categorize them as follows:

  • AI as a participant in the game (highest feasibility): AIs participate in mechanisms where the ultimate source of incentives comes from a protocol with human inputs.

  • AI as a game interface (great potential but with risks): AI helps users understand the surrounding crypto world and ensures their actions (such as signing messages and transactions) align with their intentions to avoid being deceived or scammed.

  • AI as game rules (requires extreme caution): Blockchain, DAOs, and similar mechanisms directly invoke AI. For example, "AI judges."

  • AI as a game objective (long-term and interesting): The goal of designing blockchains, DAOs, and similar mechanisms is to build and maintain an AI that can be used for other purposes, with the use of cryptographic techniques either to better incentivize training or to prevent AI from leaking private data or being misused.

Let's review each one.

AI as a Game Participant

In fact, this category has existed for nearly a decade, at least since decentralized exchanges (DEX) began to be widely used. Whenever there is an exchange, there are opportunities to make money through arbitrage, and bots can perform arbitrage better than humans.

This use case has been around for a long time, even with much simpler AI than what we have now, but ultimately it is indeed a real intersection of AI and cryptocurrency. Recently, we have often seen MEV (Maximal Extractable Value) arbitrage bots exploiting each other. Any blockchain application involving auctions or trades will have arbitrage bots.

However, AI arbitrage bots are only the first example of a much larger category, which I expect will soon start to include many other applications. Let's take a look at AIOmen, a demo of a prediction market where AIs are the players:

Prediction markets have long been a holy grail of epistemic technology. Back in 2014, I was excited about using prediction markets as an input into governance ("futarchy"), and I experimented with them extensively in the most recent elections. But so far, prediction markets have not taken off much in practice, for several reasons: the largest participants are often irrational, people with the right insights are unwilling to take the time to bet unless a lot of money is involved, the markets are often not liquid enough, and so on.

One response to this is to point to the user experience improvements being made by Polymarket and other new prediction markets, and to hope that they can refine things and succeed where previous iterations failed. People are willing to bet hundreds of billions on sports, so why wouldn't they put enough money on the U.S. elections or LK99 to attract serious players? But this argument must contend with the fact that previous iterations failed to reach such scale (at least compared to the dreams of their supporters), so it seems that some new element is needed for prediction markets to succeed. Another response, therefore, is to point to one specific feature of the prediction market ecosystem that we can expect to see in the 2020s but did not see in the 2010s: the possibility of ubiquitous participation by AIs.

AIs are willing to work for less than $1 an hour and have encyclopedic knowledge; if that is not enough, they can even be integrated with real-time web search. If you create a market and provide a $50 liquidity subsidy, humans may not care enough to bid, but thousands of AIs will swarm in and make their best guesses.

The incentive to do well on any single question may be small, but the incentive to build an AI that makes good predictions in general could be worth millions. Note that you don't even need humans to adjudicate most questions: you can use a multi-round dispute system similar to Augur or Kleros, where AIs also participate in the earlier rounds. Humans only need to respond in the few cases where a series of escalations has taken place and both sides have committed significant amounts of money.
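To make this mechanism concrete, here is a minimal Python sketch of a micro prediction market with escalating dispute rounds. Everything in it is a hypothetical illustration (the MicroMarket class, the doubling-bond rule, and the human_jury placeholder are assumptions of my own), not part of Augur, Kleros, or AIOmen:

```python
from dataclasses import dataclass, field

def human_jury(question: str) -> float:
    # Placeholder: the slow, expensive human adjudication step that is only
    # reached after repeated, well-funded escalation.
    raise NotImplementedError("escalated to human adjudication")

@dataclass
class MicroMarket:
    """Toy micro prediction market: AIs submit probabilistic answers with
    small stakes; disputes escalate with a doubling bond (hypothetical rule)."""
    question: str
    subsidy: float                 # e.g. a $50 liquidity subsidy to attract AI players
    max_rounds: int = 3
    bids: dict = field(default_factory=dict)   # agent -> (probability, stake)

    def submit(self, agent: str, probability: float, stake: float) -> None:
        self.bids[agent] = (probability, stake)

    def provisional_outcome(self) -> float:
        # Stake-weighted average of the AI answers acts as the round-0 result.
        total = sum(stake for _, stake in self.bids.values())
        return sum(p * stake for p, stake in self.bids.values()) / total

    def resolve(self, disputes: list) -> float:
        """disputes: list of (claimed_outcome, bond) escalation attempts."""
        outcome = self.provisional_outcome()
        required_bond = self.subsidy
        for rounds_used, (claimed, bond) in enumerate(disputes, start=1):
            if bond < required_bond:
                break               # challenger did not fund the round; outcome stands
            outcome, required_bond = claimed, required_bond * 2
            if rounds_used == self.max_rounds:
                return human_jury(self.question)   # humans only see fully escalated cases
        return outcome

# Example: two AIs answer a question; an underfunded dispute fails.
market = MicroMarket(question="Will LK99 replicate?", subsidy=50.0)
market.submit("ai_alpha", probability=0.03, stake=5.0)
market.submit("ai_beta", probability=0.05, stake=3.0)
print(market.resolve(disputes=[(0.9, 20.0)]))   # bond below 50, so ~0.0375 stands
```

The design choice worth noticing is that the subsidy only needs to be large enough to attract AI players, while the doubling bond makes frivolous escalation expensive, so human attention is reserved for the rare, heavily contested cases.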

This is a powerful primitive because once you can make "prediction markets" work at such a micro scale, you can repeat the "prediction market" primitive for many other types of questions, such as:

  • Is this social media post acceptable according to [terms of use]?
  • What will happen to the price of stock X (e.g., see Numerai)?
  • Is this account messaging me really Elon Musk?
  • Is this work submitted on an online task market acceptable?
  • Is the DApp on https://examplefinance.network a scam?
  • Is 0x1b54….98c3 really the address of the "Casinu Inu" ERC20 token?

You may notice that many of these ideas lean towards the "info defense" direction I mentioned earlier. Broadly speaking, the question is: how do we help users distinguish between true and false information and identify fraud without giving a centralized authority the power to decide right from wrong, to avoid the abuse of that power? At a micro level, the answer could be "AI."

But at a macro level, the question is: who built the AI? AI is a reflection of its creation process, and thus there are inevitably biases. A higher-level game is needed to judge the performance of different AIs, allowing AIs to participate as players in the game.

This way of using AI, where AI participates in a mechanism and ultimately receives rewards or penalties (probabilistically) from humans through an on-chain mechanism, is, I believe, very much worth exploring. Now is the right time to look more deeply into use cases like this, because blockchain scaling has finally succeeded, making anything "micro", which was often not feasible on-chain before, now viable.

A related class of applications is developing towards highly autonomous agents, using blockchain to cooperate better, whether through payments or by making credible commitments using smart contracts.

AI as a Game Interface

One idea I proposed in "My techno-optimism" is that there is a market opportunity for user-facing software that can protect users' interests by interpreting and identifying dangers in the online world they are browsing. The scam detection feature of MetaMask is an existing example.

Another example is the simulation feature of the Rabby wallet, which shows users the expected outcomes of the transactions they are about to sign.

These tools have the potential to be greatly enhanced by AI. AI could provide richer, more human-understandable explanations of what kind of DApp you are participating in, the consequences of the more complex operations you are signing, whether a particular token is genuine (e.g. BITCOIN is not just a string of characters; it is the name of a major real cryptocurrency, which is not an ERC20 token and whose price is far higher than $0.045), and so on. Some projects are starting to go all the way in this direction (e.g. the LangChain wallet, which uses AI as the primary interface). My personal view is that a purely AI interface is probably too risky at the moment, as it increases the risk of other kinds of errors, but AI combined with a more traditional interface is becoming very viable.
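As a rough sketch of what "AI combined with a more traditional interface" could look like, here is a hypothetical Python example; the KNOWN_TOKENS registry, the llm_explain stand-in, and the review_transaction function are assumptions for illustration, not any real wallet's API. The deterministic registry check gates the decision, while the AI layer only explains:

```python
# Hypothetical token registry: symbol -> canonical contract address
# (placeholder values, not real addresses).
KNOWN_TOKENS = {
    "USDC": "0x" + "a" * 40,
}

def llm_explain(transaction: dict) -> str:
    # Stand-in for a call to a language model that summarizes the transaction
    # in plain language; not a real API.
    return (f"This transaction interacts with {transaction.get('to', 'an unknown contract')} "
            f"and moves {transaction.get('value', 'an unknown amount')}.")

def review_transaction(transaction: dict) -> dict:
    warnings = []

    # Deterministic check, independent of the AI: does the token's contract
    # address match the registry entry for the symbol it claims to be?
    symbol = transaction.get("token_symbol")
    if symbol in KNOWN_TOKENS and transaction.get("token_address") != KNOWN_TOKENS[symbol]:
        warnings.append(f"Token claims to be {symbol} but its contract address "
                        f"does not match the known one; it may be an impostor.")

    # AI layer: a richer, human-readable explanation of what signing will do.
    explanation = llm_explain(transaction)
    return {"explanation": explanation, "warnings": warnings, "safe_to_sign": not warnings}

# Example: a token pretending to be USDC at a different address gets flagged.
print(review_transaction({
    "to": "0x" + "b" * 40,
    "value": "1000 tokens",
    "token_symbol": "USDC",
    "token_address": "0x" + "c" * 40,
}))
```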

There is one particular risk worth mentioning. I will discuss this more in the "AI as Game Rules" section below, but the general issue is adversarial machine learning: if a user has access to an AI assistant inside an open-source wallet, then bad actors will have access to that AI assistant too, giving them unlimited opportunities to optimize their scams so that they do not trigger the wallet's defenses. All modern AIs have flaws somewhere, and it is not too hard for a training process, even one with only limited access to the model, to find them.

This is where "AIs as players in on-chain micro-markets" works better: each individual AI is vulnerable to the same risks, but you are intentionally creating an open ecosystem that is iteratively improved upon by dozens of people. Moreover, each individual AI is closed: the security of the system comes from the openness of the rules of the game, not from the internal workings of each participant.

In summary: AI can help users understand what is happening in simple language, can act as a real-time mentor, protecting users from errors, but caution is needed when encountering malicious misinformers and scammers.

AI as Game Rules

Now, let’s discuss applications that many people are excited about, but I believe are the riskiest, and we need to proceed with extreme caution: what I mean by AI being part of the game rules. This relates to the excitement of mainstream political elites about "AI judges" (for example, related articles can be found on the "World Government Summit" website), and there are similar aspirations in blockchain applications. If a blockchain-based smart contract or DAO needs to make subjective decisions, can you simply make AI a part of the contract or DAO to help execute those rules?

This is where adversarial machine learning will be an extremely daunting challenge. Here’s a simple argument:

If an AI model that plays a key role in the mechanism is closed, you cannot verify its internal workings, so it is no better than a centralized application.

If the AI model is open, then attackers can download it and simulate it locally, design highly optimized attacks to deceive the model, and then replay those attacks on the live network.


Example of adversarial machine learning. Source: researchgate.net

Now, readers who frequently read this blog (or are crypto natives) may already be anticipating what I am about to say: but wait! We have fancy zero-knowledge proofs and other very cool forms of cryptography. Surely we can do some cryptographic magic to hide the internal workings of the model so that attackers cannot optimize their attacks, while at the same time proving that the model is being executed correctly and was built with a reasonable training process on a reasonable dataset.

Typically, this is exactly the kind of thinking I advocate in this blog and other articles. However, in the case of AI computation, there are two main objections:

  • Cryptographic overhead: Performing a task in SNARKs (or MPC, etc.) is much less efficient than executing it in plaintext. Given that AI itself has high computational demands, is it computationally feasible to perform AI computations in a cryptographic black box?
  • Black box adversarial machine learning attacks: Even without understanding the internal workings of the model, there are ways to optimize attacks against AI models. If hidden too tightly, you may make it easier for those choosing training data to compromise the integrity of the model through poisoning attacks.

Both are complex rabbit holes that need to be explored one by one.

Cryptographic Overhead

Cryptographic tools, especially general tools like ZK-SNARK and MPC, have high overhead. Directly verifying an Ethereum block takes several hundred milliseconds, but generating a ZK-SNARK to prove the correctness of such a block can take hours. The overhead of other cryptographic tools (like MPC) can be even greater.

AI computation itself is already very expensive: the most powerful language models output words only slightly faster than human reading speed, not to mention that training these models typically costs millions of dollars. The quality difference between top models and those trying to economize on training costs or parameter counts is significant. At first glance, this is a good reason to be skeptical about wrapping AI in cryptography to add guarantees.

Fortunately, AI is a very special type of computation that allows for various optimizations that more "unstructured" types of computation like ZK-EVM cannot benefit from. Let’s take a look at the basic structure of AI models:

Typically, AI models consist mainly of a series of matrix multiplications, interspersed with element-wise nonlinear operations such as the ReLU function (y = max(x, 0)). Asymptotically, matrix multiplication accounts for most of the work. This is convenient for cryptography, because many forms of cryptography can perform linear operations almost "for free" (at least if you encrypt the model but not the inputs to it).
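As a concrete (if toy) illustration of this structure, here is a short numpy sketch of a forward pass: all the heavy work sits in the matrix multiplications, with cheap element-wise ReLUs in between. The layer sizes and weights are arbitrary placeholders:

```python
import numpy as np

def relu(x):
    # Element-wise nonlinearity: y = max(x, 0)
    return np.maximum(x, 0)

def forward(x, weight_matrices):
    """Toy multi-layer network: alternate matrix multiplications with ReLU.

    For an n x n weight matrix, the multiplication costs on the order of n^2
    operations per input vector, while the ReLU costs only n comparisons.
    This is why matrix multiplication dominates asymptotically, and why
    cryptographic schemes that handle linear operations cheaply are a good fit.
    """
    for i, W in enumerate(weight_matrices):
        x = W @ x                       # linear layer: the expensive part
        if i < len(weight_matrices) - 1:
            x = relu(x)                 # nonlinear layer: cheap in plaintext,
                                        # but the main bottleneck inside SNARKs/MPC
    return x

# Example: a 3-layer network on a random 64-dimensional input.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 64)) for _ in range(3)]
output = forward(rng.standard_normal(64), layers)
```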

If you are a cryptographer, you may have heard of a similar phenomenon in homomorphic encryption: performing additions on encrypted ciphertexts is very easy, but multiplication is very hard, and we did not find a way to do multiplications of unlimited depth until 2009.

For ZK-SNARKs, a protocol from 2013 achieves an overhead of less than 4x for proving matrix multiplications. Unfortunately, the overhead on the nonlinear layers is still significant, and the best implementations in practice show an overhead of around 200x.

However, there is hope that further research can greatly reduce this overhead; see Ryan Cao's presentation of a recent GKR-based approach, as well as my own simplified explanation of how the main component of GKR works.

But for many applications, we do not just want to prove that the AI output was computed correctly; we also want to hide the model. There are some naive approaches to this: you can split the model so that a different set of servers redundantly stores each layer, and hope that a server leaking some of its layers does not leak too much data. But there are also surprisingly effective forms of specialized multi-party computation.

In both cases, the moral of the story is the same: the bulk of AI computation is matrix multiplication, for which very efficient ZK-SNARKs, MPC (or even FHE) can be designed, so the total overhead of putting AI inside a cryptographic box is surprisingly low. Typically, the nonlinear layers are the biggest bottleneck despite their smaller size; perhaps newer techniques like lookup arguments can help.

Black Box Adversarial Machine Learning

Now, let’s discuss another important issue: even if the contents of the model remain private and you can only access the model through an "API," there are still types of attacks you can perform. Quoting a 2016 paper:

Many machine learning models are vulnerable to adversarial examples: specially designed inputs that cause machine learning models to produce incorrect outputs. An adversarial example that affects one model often affects another model, even if the two models have different architectures or were trained on different training sets, as long as both models are trained to perform the same task. Therefore, attackers can train their own substitute models, create adversarial examples for the substitute models, and transfer them to the victim model with little information about the victim.

Potentially, even if you have very limited or no access to the model you want to attack, you can create attacks just from the training data. As of 2023, such attacks remain a significant issue.

To effectively mitigate such black box attacks, we need to do two things:

  1. Truly restrict who or what can query the model and the number of queries. A black box with unrestricted API access is insecure; a black box with very restricted API access may be secure.
  2. Hide the training data, while making sure the process used to create the training data is not corrupted.

Regarding the former, the project that has done the most in this area may be Worldcoin, an earlier version of which I analyzed in depth (along with other protocols). Worldcoin extensively uses AI models at the protocol level to (i) convert iris scans into short "iris codes" that are easy to compare for similarity, and (ii) verify that the thing being scanned is actually a human being.

The main defense that Worldcoin relies on is that no one is allowed to simply call the AI model: instead, it uses trusted hardware to ensure that the model only accepts inputs digitally signed by the orb camera.
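A simplified Python sketch of these two defenses together is shown below. It is not Worldcoin's actual code: the HMAC-based signature, the quota number, and the function names are assumptions made purely for illustration (a real deployment would use asymmetric signatures and hardware attestation):

```python
import hashlib
import hmac

# Hypothetical symmetric key provisioned into the trusted hardware (the camera).
ORB_KEY = b"example-device-key"

MAX_QUERIES_PER_CALLER = 10      # illustrative quota
query_counts = {}                # caller -> number of queries made so far

def orb_sign(image_bytes: bytes) -> bytes:
    # Computed inside the trusted hardware when the picture is taken.
    return hmac.new(ORB_KEY, image_bytes, hashlib.sha256).digest()

def run_hidden_model(image_bytes: bytes) -> dict:
    # Placeholder for the private iris-code / liveness model.
    return {"iris_code": hashlib.sha256(image_bytes).hexdigest()[:16]}

def guarded_model_query(caller: str, image_bytes: bytes, signature: bytes) -> dict:
    # Defense 1: only inputs signed by the trusted camera are accepted, so
    # attackers cannot feed the model arbitrary synthetic images.
    if not hmac.compare_digest(orb_sign(image_bytes), signature):
        raise PermissionError("input was not produced by trusted hardware")

    # Defense 2: strict per-caller query limits make black-box attack
    # optimization (which needs many queries) much harder.
    query_counts[caller] = query_counts.get(caller, 0) + 1
    if query_counts[caller] > MAX_QUERIES_PER_CALLER:
        raise PermissionError("query quota exceeded")

    return run_hidden_model(image_bytes)   # the model itself stays private

# Example: a properly signed input passes; a forged one would be rejected.
image = b"iris-scan-bytes"
print(guarded_model_query("alice", image, orb_sign(image)))
```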

This approach does not fully work: it turns out that you can mount adversarial attacks against biometric AI in the form of physical patches or jewelry worn on the face.

Wearing something extra on the forehead can evade detection or even impersonate someone else. Source: https://arxiv.org/pdf/2109.09320.pdf

However, our hope is that if all defenses are combined, including hiding the AI model itself, strictly limiting the number of queries, and requiring some form of authentication for each query, then adversarial attacks will become very difficult, thus making the system more secure.

This leads to the second question: how do we hide the training data? This is where "AI democratically governed by a DAO" may actually make sense: we can create an on-chain DAO that governs who is allowed to submit training data (and what attestations are required on the data itself), who is allowed to query, and how many queries they can make, and use cryptographic techniques like MPC to encrypt the entire pipeline of creating and running the AI, from each individual user's training input all the way to the final output of each query. This DAO could also satisfy the widely popular goal of compensating people for submitting data.
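To show just the governance layer of such a scheme, here is a hypothetical Python sketch; the TrainingDataDAO class and the mpc_inference placeholder are assumptions for illustration, and the actual training and inference would run inside MPC or FHE so that no single party ever sees the data or the model:

```python
from dataclasses import dataclass, field

def mpc_inference(encrypted_input: bytes) -> bytes:
    # Placeholder: in the real scheme, inference runs inside multi-party
    # computation and only an encrypted/authorized output leaves the system.
    return b"encrypted-output"

@dataclass
class TrainingDataDAO:
    """Toy sketch of DAO-managed access control over a private AI."""
    approved_submitters: set = field(default_factory=set)
    query_quota: dict = field(default_factory=dict)    # member -> remaining queries
    data_credits: dict = field(default_factory=dict)   # submitter -> accrued reward units

    def vote_in_submitter(self, submitter: str) -> None:
        # In reality, the outcome of an on-chain governance vote.
        self.approved_submitters.add(submitter)

    def submit_training_data(self, submitter: str, encrypted_blob: bytes) -> None:
        if submitter not in self.approved_submitters:
            raise PermissionError("submitter not approved by the DAO")
        # The blob stays encrypted; only the MPC committee can use it, and the
        # submitter accrues credit so they can later be compensated.
        self.data_credits[submitter] = self.data_credits.get(submitter, 0) + 1

    def query(self, member: str, encrypted_input: bytes) -> bytes:
        if self.query_quota.get(member, 0) <= 0:
            raise PermissionError("no query quota remaining")
        self.query_quota[member] -= 1
        return mpc_inference(encrypted_input)
```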

It is important to reiterate that this plan is very ambitious and has many aspects that could prove impractical:

  • For such a completely black box architecture, the cryptographic overhead may still be too high to compete with traditional closed "trust me" methods.
  • The reality may be that there is no good way to decentralize the training data submission process and prevent poisoning attacks.
  • The multi-party computation machinery could fail to preserve its safety or privacy guarantees if participants collude; after all, this has happened again and again with cross-chain bridges.

One reason I did not open this section with the warning "Don't make AI judges, that's dystopian" is that our society is already highly dependent on unaccountable centralized AI judges: the algorithms that decide which kinds of posts and political opinions get boosted and demoted, or even censored, on social media.

I do believe that further expanding this trend at the current stage is a rather bad idea, but I do not think that the blockchain community experimenting more with AI would be the main reason for making the situation worse.

In fact, cryptographic techniques offer some very fundamental and low-risk ways to improve even existing centralized systems, and I am quite confident about this. One simple technique is AI with delayed publication: when a social media site uses AI-based ranking of posts, it can publish a ZK-SNARK proving the hash of the model that generated each ranking. The site can commit to publicly releasing its AI model after a certain delay (e.g. one year).

Once the model is made public, users can check the hash to verify that the correct model was released, and the community can test the model to validate its fairness. The release delay will ensure that the model is outdated by the time it is released.
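A minimal sketch of this commit-then-reveal pattern, under assumptions of my own (the class and function names are hypothetical, and the ZK-SNARK that would tie each ranking to the committed model is reduced to a placeholder string), could look like this:

```python
import hashlib
import time

def model_hash(model_bytes: bytes) -> str:
    return hashlib.sha256(model_bytes).hexdigest()

class DelayedModelPublication:
    """Commit to the ranking model's hash now; reveal the model after a delay."""

    def __init__(self, model_bytes: bytes, reveal_delay_seconds: int):
        self.commitment = model_hash(model_bytes)       # published immediately
        self.reveal_time = time.time() + reveal_delay_seconds
        self._model_bytes = model_bytes                 # kept private until reveal

    def publish_ranking(self, posts: list) -> dict:
        ranking = sorted(posts)                         # stand-in for the real model's output
        return {
            "ranking": ranking,
            "model_commitment": self.commitment,
            # In the full scheme, this would be a ZK-SNARK proving the ranking
            # was produced by the model matching the committed hash.
            "proof": "zk-snark-placeholder",
        }

    def reveal_model(self) -> bytes:
        if time.time() < self.reveal_time:
            raise PermissionError("model not yet due for publication")
        return self._model_bytes

def verify_reveal(commitment: str, revealed_model: bytes) -> bool:
    # Anyone can check that the model eventually released matches the hash
    # that was committed to while the rankings were being served.
    return model_hash(revealed_model) == commitment
```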

Thus, compared to the centralized world, the question is not whether we can do better, but how well we can do it. However, for the decentralized world, caution is needed: if someone builds a prediction market or stablecoin using an AI oracle, and then someone discovers that the oracle is attackable, a large amount of funds could potentially disappear in an instant.

AI as a Game Objective

If the techniques described above for creating a scalable, decentralized, private AI, whose contents are a black box not known by anyone, can actually be made to work, then they could also be used to create AIs with utility going beyond blockchains. The NEAR protocol team is making this a core goal of their ongoing work.

There are two reasons for doing this:

  1. If a "trusted black box AI" can be created by running training and inference processes through some combination of blockchain and multi-party computation, then many applications where users are concerned about bias or deception in the system can benefit from it. Many people have expressed a desire for democratic governance of the AI we rely on; cryptographic and blockchain-based technologies may be the way to achieve this goal.

  2. From the perspective of AI safety, this would be a way to create decentralized AI with a natural emergency stop switch, and it can limit queries that attempt to use AI for malicious purposes.

Notably, "using cryptographic incentives to encourage better AI production" can be achieved without completely falling into the rabbit hole of fully encrypting with cryptography: methods like BitTensor fall into this category.

Conclusion

As blockchain and AI both continue to evolve, the number of use cases at their intersection is growing, though some of these use cases are more meaningful and more robust than others.

Overall, the use cases where the underlying mechanism continues to be designed roughly as before, but the individual participants become AIs, are the most immediately promising and the easiest to get right, allowing mechanisms to operate effectively at a much more micro scale.

The most challenging to get right are applications that attempt to use blockchains and cryptographic techniques to create a "singleton": a single decentralized trusted AI that some application relies on for some purpose.

These applications have the potential to enhance functionality and improve AI safety while avoiding centralization risks.

However, the underlying assumptions may fail in many ways. Therefore, caution is needed, especially when deploying these applications in high-value and high-risk environments.

I look forward to seeing more constructive attempts at AI application cases in all these areas, so we can see which use cases are truly viable at scale.
