SevenX Ventures: Understanding ZKML in One Article - How Do Zero-Knowledge Proofs and Blockchain Work in Artificial Intelligence and Machine Learning?

SevenX Ventures
2023-12-28 11:59:48
ZK+ML=?

Author: SevenX Ventures

For us crypto enthusiasts, artificial intelligence has been a hot topic for quite some time. Interestingly, no one wants to see AI go out of control. Blockchain was originally invented to keep the dollar from spiraling out of control, so we might likewise try to keep AI from going off the rails. Moreover, we now have a new technology, zero-knowledge proofs, to help ensure that things do not go wrong. But to tame the beast that is AI, we must first understand how it works.

A Brief Introduction to Machine Learning

Artificial intelligence has gone through several name changes, from "expert systems" to "neural networks," then to "graphical models," and finally evolving into "machine learning." All of these are subsets of "artificial intelligence," and as people have given it different names, our understanding of AI has deepened. Let's delve a little deeper into machine learning and unveil its mysteries.

Note: Most machine learning models today are neural networks, because they perform exceptionally well on many tasks. In this article, "machine learning" mainly refers to neural-network-based machine learning.

How does machine learning work?

First, let's quickly understand the internal workings of machine learning:

  • Input data preprocessing: input data must be processed into a format usable as model input. This usually involves preprocessing and feature engineering to extract useful information and transform the data into a suitable form, such as an input matrix or tensor (a high-dimensional matrix). Manual feature engineering was the hallmark of the expert-system approach; with the advent of deep learning, dedicated processing layers handle much of the preprocessing automatically.
  • Setting initial model parameters: these include the number of layers, activation functions, initial weights, biases, the learning rate, and so on. Some parameters are adjusted during training by optimization algorithms to improve the model's accuracy.
  • Training: the input data is fed into the neural network, typically starting with one or more layers for feature extraction and relationship modeling, such as convolutional layers (CNN), recurrent layers (RNN), or self-attention layers. These layers learn to extract relevant features from the input data and model the relationships between those features.
    • The outputs of these layers are then passed to one or more further layers that perform additional computations and transformations on the data. These typically involve matrix multiplications with learnable weight matrices and the application of nonlinear activation functions, but may also include other operations, such as convolution and pooling in convolutional networks or repeated steps in recurrent networks. The outputs of these layers serve as the input to the next layer in the model, or as the final prediction output.
  • Obtaining the model's output: the network's computed output is usually a vector or matrix, representing, for example, image-classification probabilities or sentiment-analysis scores, depending on the network's application. There is typically also an error-assessment and parameter-update module that automatically updates the parameters based on the model's objective.
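For readers who prefer code, here is a minimal sketch of the loop described above, written in PyTorch (which the article itself references later); the architecture, shapes, and hyperparameters are illustrative assumptions only, not a prescription:

```python
import torch
import torch.nn as nn

# Minimal sketch of the workflow above: forward pass through layers,
# error assessment (loss), and automatic parameter updates.
model = nn.Sequential(
    nn.Linear(16, 32),  # learnable weight matrix + bias
    nn.ReLU(),          # nonlinear activation function
    nn.Linear(32, 3),   # output layer: scores for 3 classes
)
loss_fn = nn.CrossEntropyLoss()                           # error assessment
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # learning rate

x = torch.randn(8, 16)         # a batch of 8 preprocessed input vectors
y = torch.randint(0, 3, (8,))  # ground-truth labels

for step in range(100):
    logits = model(x)          # forward pass
    loss = loss_fn(logits, y)  # compare output against the objective
    optimizer.zero_grad()
    loss.backward()            # gradients via backpropagation
    optimizer.step()           # automatic parameter update
```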

If the above explanation seems too obscure, you can refer to the following example of using a CNN model to recognize images of apples.

  • Load the image into the model in the form of a pixel value matrix. This matrix can be represented as a 3D tensor with dimensions (height, width, channels).
  • Set the initial parameters of the CNN model.
  • The input image passes through multiple hidden layers in the CNN, with each layer applying convolutional filters to extract increasingly complex features from the image. The output of each layer is passed through a nonlinear activation function and then pooled to reduce the dimensionality of the feature maps. The final layer is typically a fully connected layer that generates output predictions based on the extracted features.
  • The final output of the CNN is the category with the highest probability. This is the predicted label for the input image.
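As a companion to the apple example, here is a hedged PyTorch sketch of such a pipeline; the layer sizes and the 32×32 input are our own illustrative assumptions, not a production architecture:

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Illustrative CNN mirroring the steps above: convolutional filters,
    nonlinear activations, pooling, then a fully connected classifier."""
    def __init__(self, num_classes: int = 2):  # e.g. {apple, not apple}
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),   # low-level features
            nn.ReLU(),
            nn.MaxPool2d(2),                             # reduce dimensionality
            nn.Conv2d(8, 16, kernel_size=3, padding=1),  # more complex features
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 8 * 8, num_classes)  # fully connected

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

image = torch.rand(1, 3, 32, 32)   # pixel-value tensor (batch, C, H, W)
logits = TinyCNN()(image)
prediction = logits.argmax(dim=1)  # category with the highest score
print(prediction)
```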

The Trust Framework of Machine Learning

We can summarize the above into a machine learning trust framework comprising four fundamental layers of machine learning. For the overall machine learning process to be reliable, each of these layers must be trustworthy:

  • Input: raw data needs to be preprocessed and sometimes kept confidential.
    • Integrity: the input data has not been tampered with, has not been contaminated by malicious inputs, and has been correctly preprocessed.
    • Privacy: if necessary, the input data is not disclosed.
  • Output: needs to be accurately generated and transmitted.
    • Integrity: the output is generated correctly.
    • Privacy: if necessary, the output is not disclosed.
  • Model type / algorithm: the model should compute correctly.
    • Integrity: the model executes correctly.
    • Privacy: if necessary, the model itself or its computations are not disclosed.
    • Different neural network models have different algorithms and layers, suited to different use cases and inputs:
      • Convolutional neural networks (CNNs) are typically used for tasks involving grid-like data, such as images, where local patterns and features can be captured by applying convolution operations to small input regions.
      • Recurrent neural networks (RNNs), on the other hand, are well suited to sequential data, such as time series or natural language, where hidden states can capture information from previous time steps and model temporal dependencies.
      • Self-attention layers are useful for capturing relationships between elements of an input sequence, making them very effective for tasks such as machine translation or summarization, where long-range dependencies are crucial.
      • Other model types also exist, including multilayer perceptrons (MLPs).
  • Model parameters: in some cases, parameters should be transparent or democratically generated, but in all cases they should not be easily tampered with.
    • Integrity: parameters are generated, maintained, and managed correctly.
    • Privacy: model owners often keep machine learning model parameters confidential to protect the intellectual property and competitive advantage of the organization that developed the model. This was very common before training transformer models became extremely expensive, and it remains a major issue for the industry.

Trust Issues in Machine Learning

With the explosive growth of machine learning (ML) applications (with a compound annual growth rate of over 20%) and their increasing integration into daily life, such as the recently popular ChatGPT, the trust issues surrounding machine learning have become increasingly critical and cannot be ignored. Therefore, it is essential to identify and address these trust issues to ensure the responsible use of AI and prevent its potential misuse. But what exactly are these issues? Let's delve deeper.

Insufficient Transparency or Provability

Trust issues have long plagued machine learning, primarily for two reasons:

  • Inherent privacy: as mentioned above, model parameters are often confidential, and in some cases the model inputs must also be kept secret, which naturally creates trust issues between model owners and model users.
  • Algorithmic black boxes: machine learning models are sometimes called "black boxes" because their computations involve many automated steps that are hard to understand or explain. These steps involve complex algorithms and huge amounts of data, yielding uncertain and sometimes random outputs, which leaves the algorithms open to accusations of bias or even discrimination.

Before going deeper, one broader assumption of this article is that the model is "ready for use": it has been well trained and meets its objective. A model may not be suitable for all situations, and models improve at an astonishing rate; the typical lifespan of a machine learning model is between 2 and 18 months, depending on the application scenario.

A Detailed Breakdown of Trust Issues in Machine Learning

There are some trust issues during the model training process, and Gensyn is currently working to generate effective proofs to facilitate this process. However, this article primarily focuses on the model inference process. Now, let's use the four building blocks of machine learning to uncover potential trust issues:

  • Input:
    • The data sources have not been tampered with.
    • Private input data is not stolen by the model operator (a privacy issue).
  • Model:
    • The model itself is as accurate as advertised.
    • The computation is carried out correctly.
  • Parameters:
    • The model parameters have not been altered and are consistent with what was advertised.
    • Model parameters that are valuable to the model owner are not disclosed during the process (a privacy issue).
  • Output:
    • The output can be proven correct (something that improves as all of the above elements improve).

How to Apply ZK to the Machine Learning Trust Framework?

Some of the trust issues above can be addressed by putting things on-chain: uploading the inputs and the machine learning parameters to the chain and computing the model on-chain can ensure the correctness of inputs, parameters, and model computation. However, this approach may sacrifice scalability and privacy. Giza is working on this on Starknet, but because of cost it only supports simple machine learning models like regression, not neural networks. ZK technology can address these trust issues more effectively. Currently, the "ZK" in ZKML usually refers to zkSNARKs. First, let's quickly review some zkSNARK basics:

A zkSNARK proof shows that the statement "I know some secret input w such that the result of this computation f is OUT" is true, without revealing what w is. The proof-generation process can be summarized in the following steps:

  • Formulate the statement to be proven: f(x,w)=true

"I correctly classified this image x using a machine learning model f with private parameters w."

  • Convert the statement into an arithmetic circuit: different circuit construction methods include R1CS, QAP, Plonkish, and so on.
    • Compared with other use cases, ZKML requires an extra step here: quantization. Neural network inference is typically performed in floating point, and simulating floating-point arithmetic in the prime field of an arithmetic circuit is very costly. Different quantization methods strike different balances between accuracy and resource requirements.
    • Some circuit construction methods, such as R1CS, are inefficient for neural networks; this part can be tuned to improve performance.
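As an illustration of the quantization step, here is a hedged sketch of one common approach, fixed-point encoding into a prime field; the scale factor and the stand-in modulus are assumptions, and real systems choose these per layer and rescale after multiplications:

```python
import numpy as np

SCALE = 2**16    # fixed-point scale factor (illustrative assumption)
P = 2**61 - 1    # stand-in prime modulus; real circuits use their own field prime

def quantize(x: np.ndarray) -> np.ndarray:
    """Map floats to field elements by fixed-point rounding; negative
    values wrap around into the field, as in circuit arithmetic."""
    return np.round(x * SCALE).astype(np.int64) % P

def dequantize(q: np.ndarray) -> np.ndarray:
    """Recover approximate floats; elements above P//2 encode negatives."""
    signed = np.where(q > P // 2, q - P, q)
    return signed / SCALE

weights = np.array([0.5, -1.25, 3.14159])
print(dequantize(quantize(weights)))  # close to the originals, small rounding error
# Note: multiplying two quantized values yields scale SCALE**2, so real systems
# insert a rescaling step; this is where the accuracy/cost trade-off arises.
```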

  • Generate a proving key and a verification key.
  • Create a witness: when w=w*, f(x,w)=true.
  • Create a hash commitment: the witness w* is committed to by hashing it with a cryptographic hash function, and the resulting hash value can be made public.

This helps ensure that the private inputs or model parameters have not been tampered with or modified during the computation. This step is crucial, because even minor modifications can significantly change the model's behavior and output.
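A minimal sketch of such a commitment follows; SHA-256 is used here for readability, while production ZK systems typically prefer circuit-friendly hashes such as Poseidon:

```python
import hashlib
import numpy as np

def commit(params: np.ndarray) -> str:
    """Commit to the witness (here, model weights) by hashing a canonical
    byte serialization; the hash can be published while w* stays private."""
    return hashlib.sha256(params.tobytes()).hexdigest()

w_star = np.array([0.5, -1.25, 3.14159])
commitment = commit(w_star)

# Even a tiny modification to the parameters changes the commitment, so a
# verifier can detect proofs generated against tampered weights.
assert commit(w_star + 1e-9) != commitment
```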

  • Generate proof: Different proof systems use different proof generation algorithms.
  • Special zero-knowledge rules need to be designed for machine learning operations such as matrix multiplication and convolution layers, in order to obtain efficient, sublinear-time protocols for these computations.
    • General-purpose zkSNARK systems like Groth16 may not handle neural networks well, because the computational load is too high.
    • Since 2020, many new ZK proof systems have emerged to optimize ZK proof generation for model inference, including vCNN, ZEN, zkCNN, and pvCNN. However, most of them are optimized for CNN models and can only be applied to a few major datasets, such as MNIST or CIFAR-10.
    • In 2022, Daniel Kang, Tatsunori Hashimoto, Ion Stoica, and Yi Sun (a founder of Axiom) proposed a new proving scheme based on Halo 2, achieving ZK proof generation for ImageNet-scale models for the first time. Their optimizations focus mainly on the arithmetization, featuring novel lookup arguments for non-linear operations and reuse of sub-circuits across layers.
    • Modulus Labs has benchmarked different proof systems for on-chain inference, finding that zkCNN and plonky2 perform best on proving time; zkCNN and halo2 do well on peak prover memory usage; plonky2 is fast but at the cost of memory consumption; and zkCNN only applies to CNN models. Modulus Labs is also developing a new zkSNARK system designed specifically for ZKML, along with a new virtual machine.
  • Verify the proof: the verifier uses the verification key to check the proof, without needing any knowledge of the witness.

Thus, applying zero-knowledge technology to machine learning models can solve many of the trust issues above. Similar techniques based on interactive verification can achieve comparable effects, but they demand more resources on the verifier's side and may face additional privacy issues. Note that, depending on the specific model, generating proofs can take substantial time and resources, so there will be trade-offs when applying this technology to real-world use cases.

Current State of Existing Solutions

Next, what are the existing solutions? Note that model providers may have many reasons not to generate ZKML proofs. For those brave enough to try ZKML and who find it meaningful, several solutions are available depending on where the model and the inputs live:

  • If the input data is on-chain, consider using Axiom as a solution:
  • Axiom is building a zero-knowledge coprocessor for Ethereum to improve user access to blockchain data and provide a richer view of on-chain data. Reliable machine learning computation over on-chain data is feasible:

    • First, Axiom imports on-chain data by storing the Merkle roots of Ethereum block hashes in its smart contract AxiomV0, verified trustlessly via ZK-SNARKs. The AxiomV0StoragePf contract then allows batch verification of arbitrary historical Ethereum storage proofs against the trusted root of cached block hashes in AxiomV0.
    • Next, the machine learning input data can be extracted from this imported historical data.
    • Axiom then applies verified machine learning operations on top, using an optimized halo2 backend to verify the validity of each part of the computation.
    • Finally, Axiom attaches a ZK proof to each query result, and the Axiom smart contract verifies the proof. Any relevant party who needs it can access the result from the smart contract.

  • If the model is placed on-chain, consider using RISCZero as a solution:
  • By running a machine learning model inside RISC Zero's ZKVM, one can prove that the exact computations involved in the model were executed correctly. The computation and verification process can be completed offline, in an environment of the user's choosing, or within the Bonsai Network, a general-purpose rollup.

    • First, the model's source code is compiled into a RISC-V binary. When this binary is executed in the ZKVM, the output is paired with a computation receipt bearing a cryptographic seal. This seal serves as a zero-knowledge argument of computational integrity, binding the cryptographic imageID (which identifies the RISC-V binary that was executed) to the declared code output, enabling quick verification by third parties.
    • When the model is executed in the ZKVM, the state-change computations are completed entirely inside the VM, leaking no information about the model's internal state to the outside.
    • Once model execution is complete, the generated seal becomes a zero-knowledge proof of computational integrity. The RISC Zero ZKVM is a RISC-V virtual machine that can generate zero-knowledge proofs for the code it executes; using the ZKVM, one can generate a cryptographic receipt that anyone can verify was produced by the ZKVM's guest code, and publishing the receipt leaks no additional information about the code's execution (such as the inputs provided).

  • The process of generating the ZK proof itself uses an interactive protocol with a random oracle standing in as the verifier; the seal on a RISC Zero receipt is essentially a record of this interactive protocol.
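To illustrate the receipt idea only (this is emphatically not the RISC Zero API; the names and the fake "seal" below are our own simulation), a conceptual sketch:

```python
import hashlib

def image_id(binary: bytes) -> str:
    """Identifier of the exact RISC-V binary that was executed."""
    return hashlib.sha256(binary).hexdigest()

def make_receipt(binary: bytes, journal: bytes) -> dict:
    """Pair the public output (journal) with a 'seal'. In the real system the
    seal is a succinct ZK proof produced by the zkVM; here it is faked with a
    hash purely to show what the receipt binds together."""
    iid = image_id(binary)
    seal = hashlib.sha256(iid.encode() + journal).hexdigest()
    return {"image_id": iid, "journal": journal, "seal": seal}

def verify(receipt: dict, expected_image_id: str) -> bool:
    """Check that the receipt came from the expected binary and that the seal
    matches the declared output; the execution's inputs are never revealed."""
    recomputed = hashlib.sha256(
        receipt["image_id"].encode() + receipt["journal"]
    ).hexdigest()
    return (receipt["image_id"] == expected_image_id
            and receipt["seal"] == recomputed)

receipt = make_receipt(b"model-binary", b"predicted-label: apple")
print(verify(receipt, image_id(b"model-binary")))  # True
```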

  • If you want to import models directly from popular machine learning software (like TensorFlow or PyTorch), consider using ezkl as a solution:
  • ezkl is a library and command-line tool for doing inference for deep learning models and other computational graphs in a zk-SNARK.
    • First, export the final model as a .onnx file and some sample inputs as a .json file.
    • Then, point ezkl at the .onnx and .json files to generate a zk-SNARK circuit that can prove ZKML statements (a minimal sketch of this export flow appears after this list).
  • It sounds simple, right? ezkl aims to provide an abstraction layer that allows high-level operations to be called and laid out in Halo 2 circuits. ezkl abstracts away much of the complexity while retaining remarkable flexibility: its quantized models get automatically determined scale factors, it supports swapping in other proof systems as new solutions arrive, and it supports multiple kinds of virtual machines, including the EVM and WASM.
  • On the proof-system side, ezkl customizes Halo 2 circuits with aggregated proofs (converting hard-to-verify proofs into easy-to-verify proofs via an intermediary) and recursion (which can address memory issues but is difficult to adapt to halo2). ezkl also optimizes the whole process through fusion and abstraction (reducing overhead with high-level proofs).
  • It is also worth noting that, unlike general-purpose ZKML projects, Accessor Labs focuses on ZKML tools purpose-built for fully on-chain games, which may involve AI NPCs, automatically updated gameplay, natural-language game interfaces, and more.
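Here is a minimal sketch of the export step described in the list above, assuming a small trained PyTorch model; the input-JSON key name and the CLI commands in the comments are indicative only and vary across ezkl versions, so consult the ezkl documentation:

```python
import json
import torch
import torch.nn as nn

# A small stand-in model; in practice you would export your own trained network.
model = nn.Sequential(
    nn.Conv2d(3, 4, kernel_size=3),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(4 * 30 * 30, 2),
)
model.eval()

dummy = torch.rand(1, 3, 32, 32)
torch.onnx.export(model, dummy, "network.onnx")  # export the final model as .onnx

# Export a sample input as .json ("input_data" key per ezkl's examples; check
# your version's docs).
with open("input.json", "w") as f:
    json.dump({"input_data": [dummy.flatten().tolist()]}, f)

# Then point ezkl at the files, roughly (subcommands are version-dependent):
#   ezkl gen-settings -M network.onnx
#   ezkl compile-circuit -M network.onnx
#   ezkl setup; ezkl gen-witness; ezkl prove; ezkl verify
```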

Where Are the Use Cases?

Addressing machine learning's trust issues with ZK technology means it can now be applied to more "high-risk" and "highly deterministic" use cases, not just chatting with people or telling cat pictures from dog pictures. Web3 has been exploring many such use cases. This is no coincidence: most Web3 applications run, or intend to run, on blockchains, whose particular properties make computation secure, tamper-resistant, and deterministic. An AI that can be verified to behave well should be able to operate in a trustless, decentralized environment, right?

Use Cases for ZK+ML in Web3

Many Web3 applications sacrifice user experience for security and decentralization, since these are clearly their priority and the infrastructure imposes limits. AI/ML could enrich the user experience, which is undoubtedly helpful, but it previously seemed impossible without compromise. Now, thanks to ZK, we can comfortably imagine combining AI/ML with Web3 applications without sacrificing too much security or decentralization.

Essentially, these are Web3 applications (which may or may not exist at the time of writing) that implement ML/AI in a trustless manner. By "trustless," we mean either that the application runs in a trustless environment/platform or that its operation can be proven verifiable. Note that not every ML/AI use case (even in Web3) needs, or prefers, to operate trustlessly. We will analyze the ML capabilities used across the various Web3 domains, then identify the parts that require ZKML: typically the high-value parts that people will pay extra to have proven. Most of the use cases/applications mentioned below are still experimental research and remain far from actual adoption. We will discuss the reasons later.

DeFi

DeFi is one of the few categories of blockchain protocols and Web3 applications with proven product-market fit. Creating, storing, and managing wealth and capital in a permissionless manner is unprecedented in human history. We have identified many use cases where AI/ML models must operate permissionlessly to preserve security and decentralization.

  • Risk Assessment: modern finance needs AI/ML models for all kinds of risk assessment, from preventing fraud and money laundering to issuing uncollateralized loans. Ensuring that these AI/ML models operate verifiably means we can prevent them from being manipulated into imposing censorship, which would undermine the permissionless nature of DeFi products.
  • Asset Management: automated trading strategies are nothing new in traditional finance or DeFi. There have been attempts to apply AI/ML-generated trading strategies, but only a few decentralized ones have succeeded. A typical application in the current DeFi space is the Rocky Bot experiment by Modulus Labs.
  • Rocky Bot: Modulus Labs created a trading bot on StarkNet that uses AI for decision-making.
    • An L1 contract holds funds and swaps WETH/USDC on Uniswap.
    • This covers the "output" part of the ML trust framework: the output is generated on L2, transmitted to L1, and used for execution, without being tampered with along the way.

    • An L2 contract implements a simple (but flexible) three-layer neural network to predict future WETH prices, using historical WETH price data as input (a toy sketch of such a network follows below).
    • This covers the "input" and "model" parts: the historical price inputs come from the blockchain, and the model's execution is computed in CairoVM (a ZKVM), with its execution trace generating a ZK proof for verification.
    • A simple frontend handles visualization, alongside PyTorch code for training the regressors and classifiers.
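For illustration, here is a hedged PyTorch sketch of the kind of "simple three-layer network" described above; Modulus Labs' actual Cairo implementation is not reproduced here, and the window size and layer widths are our own assumptions:

```python
import torch
import torch.nn as nn

# A three-layer price predictor: a sliding window of historical WETH prices
# in, one predicted future price out. Entirely illustrative.
model = nn.Sequential(
    nn.Linear(10, 16), nn.ReLU(),  # input: the 10 most recent WETH prices
    nn.Linear(16, 8), nn.ReLU(),
    nn.Linear(8, 1),               # output: predicted next price
)

history = 1800.0 + 50.0 * torch.rand(1, 10)  # fake on-chain price history
predicted_price = model(history)
print(predicted_price.item())
# In the Rocky Bot setup, this inference would run in CairoVM, and its
# execution trace would back a ZK proof verified before the L1 trade executes.
```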

  • Automated Market Makers and Liquidity Provision: essentially a combination of the risk-assessment and asset-management efforts above, just with different trading volumes, timelines, and asset types. Many research papers study ML-based market-making for stock markets; it may only be a matter of time before some of these findings are applied to DeFi products.
  • For example, LyraFinance is collaborating with Modulus Labs to enhance its AMM with intelligent features for more efficient capital utilization.
  • Honorable Mentions:
  • The Warp.cc team developed a tutorial project demonstrating how to deploy a smart contract that runs a trained neural network to predict the Bitcoin price. This aligns with the "input" and "model" parts of our framework: the input uses data provided by RedStone Oracles, and the model executes as a Warp smart contract on Arweave.
  • Since this is a first iteration and does not yet involve ZK, it lands in our honorable mentions, but the Warp team is considering implementing a ZK component in the future.

Gaming

Gaming has many intersections with machine learning:

The gray areas in the diagram represent our preliminary assessment of whether each machine learning capability in gaming needs a corresponding ZKML proof. Leela Chess Zero is a very interesting example of applying ZKML to gaming:

  • AI Agent:
  • Leela Chess Zero (LC0): a fully on-chain AI chess player built by Modulus Labs that competes against a group of human players from the community.
    • LC0 and the humans take turns playing (as is customary in chess).
    • LC0's moves are computed by a simplified, circuit-friendly LC0 model.
    • Each LC0 move carries a Halo2 SNARK proof to ensure there is no intervention from a human think tank: only the simplified LC0 model makes the decisions.
    • This aligns with the "model" part: the model's execution carries a ZK proof verifying that the computation has not been tampered with.

  • Data Analysis and Prediction: this has been a common use of AI/ML in the Web2 gaming world. However, we find few reasons to add ZK to this ML process; with little value directly at stake, it may not be worth the effort. That said, if certain analyses or predictions are used to determine users' rewards, ZK may be worth implementing to ensure the results are correct.
  • Honorable Mentions:
  • AI Arena is an Ethereum-native game in which players from around the world design, train, and battle NFT characters driven by artificial neural networks. Talented researchers from around the world compete to create the best machine learning (ML) models to fight in game battles. AI Arena focuses primarily on feedforward neural networks, which generally have lower computational overhead than convolutional (CNN) or recurrent (RNN) networks. Even so, models are currently uploaded to the platform only after training is complete, so the project earns only an honorable mention.
  • GiroGiro.AI is building an AI toolkit that enables the public to create artificial intelligence for personal or commercial use. Users can create various types of AI systems based on an intuitive and automated AI workflow platform by entering minimal data and selecting algorithms (or models for improvement). Although the project is in its very early stages, we are excited to see what GiroGiro can bring to products focused on gaming finance and the metaverse, thus listing it as an honorable mention.

DID and Social

In the DID and social domain, the intersection of Web3 and ML currently shows up mainly in proof of personhood and proof of credentials; other areas may develop, but they will take longer.

  • Proof of Personhood
  • Worldcoin uses a device called the Orb to determine whether someone is a real person, rather than a bot or a fraudulent verification attempt. It does this by analyzing facial and iris features with various camera sensors and machine learning models. Once this determination is made, the Orb takes a set of photos of the person's iris and uses several machine learning models and other computer vision techniques to create an iris encoding, a digital representation of the most important features of an individual's iris pattern. The specific registration steps are as follows (a minimal sketch of the uniqueness check appears after the steps):
    • The user generates a Semaphore key pair on their phone and provides the hashed public key to the Orb via a QR code.
    • The Orb scans the user's iris and locally computes the user's IrisHash. It then sends a signed message containing the hashed public key and IrisHash to the registration sequencer.
    • The sequencer verifies the Orb's signature and then checks whether the IrisHash matches any existing ones in the database. If the uniqueness check passes, the IrisHash and public key are saved.
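A minimal sketch of the sequencer's uniqueness check in the last step; the names are illustrative, signature verification is elided, and this is not Worldcoin's actual code:

```python
import hashlib

registered: dict[str, str] = {}  # IrisHash -> hashed Semaphore public key

def register(iris_hash: str, hashed_pubkey: str) -> bool:
    """Save the pair only if this iris has never been seen before."""
    if iris_hash in registered:  # uniqueness check fails: already enrolled
        return False
    registered[iris_hash] = hashed_pubkey
    return True

iris_hash = hashlib.sha256(b"iris-feature-bytes").hexdigest()  # stand-in IrisHash
print(register(iris_hash, "hashed-pubkey-1"))  # True: first registration succeeds
print(register(iris_hash, "hashed-pubkey-2"))  # False: duplicate iris rejected
```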
  • Worldcoin uses the open-source Semaphore zero-knowledge proof system to turn the uniqueness of the IrisHash into the uniqueness of a user account, without linking the two. This ensures that newly registered users can successfully claim their WorldCoins. The steps are as follows:
    • The user's app locally generates a wallet address.
    • The app uses Semaphore to prove that it possesses the private key of a previously registered public key. Because this is a zero-knowledge proof, it does not reveal which public key.
    • The proof is sent to the sequencer, which verifies it and initiates the deposit of tokens into the provided wallet address. A so-called nullifier, sent along with the proof, ensures that the user cannot claim rewards twice.


  • WorldCoin uses ZK technology to ensure that the outputs of its ML models do not leak users' personal data, since there is no link between them. In this case it belongs to the "output" part of our trust framework: the output is transmitted and used as intended, which here means privately.
  • Proof of Action
  • Astraly is a reputation-based token-issuance platform on StarkNet that seeks to discover and support the latest and greatest StarkNet projects. Measuring reputation is challenging because reputation is an abstract concept that simple metrics cannot easily quantify. When dealing with complex metrics, more comprehensive and diverse inputs usually yield better results, which is why Astraly turned to Modulus Labs for more accurate reputation ratings powered by ML models.
  • Personalized Recommendations and Content Filtering
  • Twitter recently open-sourced the algorithm for its "For You" timeline, but users cannot verify whether the algorithm is functioning correctly because the weights of the ML models used for tweet ranking are confidential. This raises concerns about bias and censorship.
  • However, Daniel Kang, Edward Gan, Ion Stoica, and Yi Sun have demonstrated a solution using ezkl: proving that Twitter's algorithm ran honestly without revealing the model weights, which helps balance privacy and transparency. Using a ZKML framework, Twitter can commit to a specific version of its ranking model and publish a proof that it generated a particular final output ranking for a given user and set of tweets. This enables users to verify that the computation is correct without trusting the system. While much work remains to make ZKML practical, this is a positive step toward transparency in social media. It thus belongs to the "model" part of our ML trust framework.

Revisiting the ML Trust Framework from a Use Case Perspective

As we have seen, potential use cases for ZKML in Web3 are still in their infancy, but they should not be overlooked; in the future, as ZKML usage expands, demand for ZKML providers may emerge, forming the closed loop shown in the diagram below:

ZKML service providers focus primarily on the "model" and "parameters" parts of the ML trust framework (although most of what we currently see labeled "parameters" really concerns "models"). Note that the "input" and "output" parts are more often addressed by blockchain-based solutions, whether as data sources or data destinations. ZK alone or blockchain alone may not achieve full trustworthiness, but together they can.

How Far Are We from Mass Adoption?

Finally, we can focus on the current feasibility status of ZKML and how far we are from the mass adoption of ZKML.

Modulus Labs' paper provides us with some data and insights on the feasibility of ZKML applications by testing Worldcoin (with strict accuracy and memory requirements) and AI Arena (with cost-effectiveness and time requirements):

If Worldcoin were to use ZKML, the prover's memory consumption would exceed the capacity of any commercial mobile hardware. If AI Arena's matches used ZKML, zkCNN would multiply time and cost by roughly two orders of magnitude (0.6 s versus the original 0.008 s). So, unfortunately, neither can directly apply ZKML today as far as proving time and prover memory usage are concerned.

What about proof size and verification time? We can refer to the paper by Daniel Kang, Tatsunori Hashimoto, Ion Stoica, and Yi Sun. As shown there, their DNN inference solution reaches 79% accuracy on ImageNet (model type: DCNN, 16 layers, 3.4 million parameters) while requiring only 10 seconds of verification time and a 5952-byte proof. Furthermore, the zkSNARK can be scaled down to a verification time of only 0.7 seconds at 59% accuracy. These results suggest that zkSNARKs for ImageNet-scale models are already feasible in terms of proof size and verification time.

Currently, the main technical bottlenecks are proving time and memory consumption; applying ZKML to Web3 use cases is not yet technically feasible. Can ZKML catch up with the pace of AI development? We can compare a few empirical data points:

  • The pace of machine learning model development: the GPT-1 model released in 2018 had 117 million parameters, while GPT-3, released in 2020, had 175 billion parameters, an increase of roughly 1,500x in just two years.
  • The pace of optimization of zero-knowledge systems: The performance growth of zero-knowledge systems essentially follows a "Moore's Law"-like pace. New zero-knowledge systems emerge almost every year, and we expect rapid growth in prover performance to continue for some time.

From these data points, although machine learning models are developing very quickly, zero-knowledge proof systems are also steadily improving. In the near term, ZKML may still have a chance to gradually catch up with AI, but it will need continuous technological innovation and optimization to close the gap. In other words, although ZKML faces technical bottlenecks in Web3 applications today, we have reason to expect it to play a larger role in Web3 scenarios as zero-knowledge proof technology evolves. Comparing the raw improvement rates of cutting-edge ML and ZK, the outlook is not especially optimistic; however, with continuing improvements in convolution performance, ZK hardware, and ZK proof systems tailored to highly structured neural network operations, we hope ZKML's development can meet Web3's needs, starting with classic machine learning functionality.

While it may be difficult to use blockchain + ZK to verify whether the information ChatGPT gives me is trustworthy, we may be able to fit some smaller, older ML models into ZK circuits.

Conclusion

"Power tends to corrupt, and absolute power corrupts absolutely." With the incredible power of artificial intelligence and ML, there is currently no foolproof method to place it under governance. History has repeatedly shown that governments either provide the aftermath of late intervention or completely ban it in advance. Blockchain + ZK offers one of the few solutions capable of taming the beast in a provable and verifiable manner.

We look forward to seeing more product innovation in the ZKML space, where ZK and blockchain provide a secure, trustworthy environment for AI/ML to operate in. We also expect these product innovations to spawn entirely new business models, since in the permissionless crypto world we are not constrained by the familiar SaaS commercialization playbook. We look forward to supporting more builders who want to establish their exciting ideas in this fascinating overlap of "wild-west anarchy" and "ivory-tower elitism." We are still early, but we may already be on the path to saving the world.
