Dialogue with Vitalik: Exploring the Vision of Ethereum 2025, the Innovative Integration of POS, L2, Cryptography, and AI
Original Title: [DappLearning] Vitalik Buterin Chinese Interview
Original Author: DappLearning
On April 7, 2025, Vitalik and Xiao Wei appeared at the Pop-X HK Research House event co-hosted by DappLearning, ETHDimsum, Panta Rhei, and UETH.
During a break in the event, Yan, the founder of the DappLearning community, interviewed Vitalik. The interview covered multiple topics including ETH POS, Layer2, cryptography, and AI. The conversation was in Chinese, and Vitalik's Chinese was very fluent.
Below is the content of the interview (the original content has been edited for readability):
01 Views on POS Upgrade
Yan:
Hello, Vitalik, I am Yan from the DappLearning community. It is a great honor to interview you here.
I started learning about Ethereum in 2017. I remember that in 2018 and 2019, there was a heated discussion about POW and POS, and this topic may continue to be debated.
Looking at it now, (ETH) POS has been running stably for over four years, with millions of Validators in the consensus network. However, at the same time, the exchange rate of ETH to BTC has been declining, which has both positive aspects and some challenges.
So, from this point in time, what is your view on Ethereum's POS upgrade?
Vitalik:
I think the prices of BTC and ETH have nothing to do with POW and POS.
There are many different voices in the BTC and ETH communities, and what these two communities are doing is completely different; their ways of thinking are also completely different.
Regarding the price of ETH, I think there is a problem: ETH has many possible futures, and (one can imagine) in these futures, there will be many successful applications on Ethereum, but these successful applications may not bring enough value to ETH.
This is a concern for many people in the community, but it is actually a very normal issue. Take Google, for example: they have many products and do many interesting things, yet over 90% of their revenue is still related to their search business.
The relationship between Ethereum's ecosystem applications and ETH (price) is similar. Some applications pay a lot of transaction fees and consume a lot of ETH, while there are many (applications) that may be relatively successful, but they do not correspondingly bring that much success to ETH.
So this is something we need to think about and continue to optimize. We need to support more applications that have long-term value for Ethereum holders and ETH.
Therefore, I think the future success of ETH may appear in these areas. I don't think it has much relevance to improvements in consensus algorithms.
02 PBS Architecture and Centralization Concerns
Yan:
Yes, the prosperity of the ETH ecosystem is also an important reason that attracts us developers to build on it.
OK, what do you think about the ETH2.0 PBS (Proposer-Builder Separation) architecture? This is a good direction; in the future, everyone can use a mobile phone as a light node to verify (ZK) proofs, and anyone can stake 1 ether to become a Validator.
But Builders may become more centralized, as they need to handle anti-MEV measures and generate ZK proofs; and if Based Rollup is adopted, Builders may take on even more responsibilities, such as acting as Sequencers.
In this case, will Builders become too centralized? Even if Validators are sufficiently decentralized, the system is a chain: if one link in the middle has a problem, the operation of the whole system is affected. So, how do we solve the censorship resistance issue here?
Vitalik:
Yes, I think this is a very important philosophical question.
In the early days of Bitcoin and Ethereum, there was a subconscious assumption:
Building a block and validating a block were the same operation.
Suppose you build a block containing 100 transactions: your own node has to process all of that gas, and when you broadcast the block to the world, every other node must redo the same amount of work (consuming the same gas). So if we set the gas limit such that every laptop or MacBook, or a server of a certain size, can build blocks, then validating those blocks requires similarly configured nodes.
This was the previous technology. Now we have many new technologies: ZK, DAS (data availability sampling), and Statelessness (stateless validation).
Before using these technologies, building a block and validating a block needed to be symmetrical, but now it can become asymmetrical. So the difficulty of building a block may become very high, but the difficulty of validating a block may become very low.
Using stateless clients as an example: if we adopt stateless technology and increase the gas limit tenfold, the computational demand for building a block becomes enormous, and a regular computer may no longer be able to handle it. At that point we may need a particularly high-performance Mac Studio or an even more powerful server.
But the cost of validation becomes lower, because validation does not require much storage, relying only on bandwidth and CPU. If we add ZK, the CPU cost of validation can also be eliminated; if we add DAS, the cost of validation becomes very, very low. So building a block gets more expensive while validating one gets very cheap.
So is this better compared to the current situation?
This question is quite complex. I think about it this way: if there are some super nodes in the Ethereum network, that is, some nodes have higher computational power, we need them to perform high-performance computing.
So how do we prevent them from acting maliciously? For example, there are several types of attacks:
First: Creating a 51% attack.
Second: Censorship attack. If they refuse to accept some users' transactions, how can we reduce this type of risk?
Third: Anti-MEV related operations. How can we reduce these risks?
Regarding the 51% attack: since validation is done by Attesters, and those Attester nodes validate using DAS, ZK proofs, and stateless clients, the cost of validation will be very low, so the threshold for becoming a consensus node remains relatively low.
For example, suppose some Super Nodes build blocks, and the situation arises that 90% of the blocks are built by you, 5% by someone else, and 5% by others. Even if you refuse to include any transactions, it is not particularly bad. Why? Because you cannot interfere with the consensus process itself.
So you cannot perform a 51% attack; the only thing you can do is to refuse certain users' transactions.
Users may just need to wait for ten blocks or twenty blocks for another person to include their transaction in a block, which is the first point.
The second point is that we have the concept of FOCIL (Fork-Choice enforced Inclusion Lists). What is FOCIL for?
FOCIL separates the role of selecting transactions from the role of executing them. This way, the role of choosing which transactions go into the next block can be more decentralized: through FOCIL, smaller nodes have the independent ability to force transactions into the next block, and even if you are a larger node, your power is actually very limited[1].
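To make the mechanism concrete, here is a toy Python sketch of a FOCIL-style inclusion list (illustrative only, not the actual protocol; all function names are invented): committee members each publish a list of transactions the next block must contain, and a block that omits a forced transaction is invalid unless it is already full.

```python
# Toy sketch of a FOCIL-style inclusion list (illustrative, not the spec).
# A committee of small nodes forces transactions into the next block;
# the (possibly centralized) builder keeps ordering freedom but cannot censor.

def aggregate_inclusion_lists(committee_lists):
    """Union of all committee members' inclusion lists."""
    forced = set()
    for txs in committee_lists:
        forced.update(txs)
    return forced

def block_is_valid(block_txs, committee_lists, block_full=False):
    """A block is valid only if it contains every forced transaction
    (unless the block is already full)."""
    forced = aggregate_inclusion_lists(committee_lists)
    return block_full or forced.issubset(set(block_txs))

# A builder that drops a forced transaction produces an invalid block:
committee = [["tx_a", "tx_b"], ["tx_b", "tx_c"]]
print(block_is_valid(["tx_a", "tx_b", "tx_c", "tx_d"], committee))  # True
print(block_is_valid(["tx_a", "tx_d"], committee))                  # False
```

The point of the design is visible in the sketch: the builder still decides ordering and extra content, but validity itself enforces inclusion, so censorship requires capturing the whole committee rather than the builder.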
This method is more complex than before. Previously, we imagined each node as a personal laptop. But if you look at Bitcoin, it also has a fairly hybrid architecture now, because Bitcoin miners are effectively mining data centers.
So in POS, it is done like this: some nodes need more computational power and resources. But the rights of these nodes are limited, while other nodes can be very decentralized, ensuring the security and decentralization of the network. However, this method is more complex, so this is also a challenge for us.
Yan:
Very good thinking. Centralization is not necessarily a bad thing, as long as we can limit their malicious actions.
Vitalik:
Yes.
03 Issues Between Layer1 and Layer2, and Future Directions
Yan:
Thank you for answering my long-standing confusion. We come to the second part of the question. As a witness to Ethereum's journey, Layer2 has actually been very successful. The TPS issue has indeed been resolved. Unlike during the ICO days (when transactions were congested).
I personally think Layer2 is quite usable now, but liquidity fragmentation across Layer2s has led many people to propose various solutions. What do you think of the relationship between Layer1 and Layer2? Is the current Ethereum mainnet too laid-back, too decentralized, with no constraints on Layer2? Should Layer1 establish rules with Layer2, create profit-sharing models, or adopt solutions like Based Rollup? Justin Drake recently proposed this on Bankless, and I also agree with it. What do you think, and when might corresponding solutions launch?
Vitalik:
I think there are several issues with our Layer2 now.
First, their progress in security is not fast enough. So I have been pushing for Layer2 to upgrade to Stage 1 and hope to upgrade to Stage 2 this year. I have been urging them to do this while supporting L2BEAT to do more transparency work in this area.
Second, there is the issue of L2 interoperability. That is, cross-chain transactions and communication between two L2s. If two L2s are in the same ecosystem, interoperability needs to be simpler, faster, and cheaper than it is now.
Last year we started this work, now called the Open Intents Framework, along with Chain-specific addresses, which is mostly UX-related work.
In fact, I think the cross-chain issue for L2 is probably 80% a UX problem.
Although the process of solving UX issues may be painful, as long as the direction is correct, we can simplify complex problems. This is also the direction we are working towards.
Some things need to go further. For example, the withdrawal time for Optimistic Rollup is one week. If you have a token on Optimism or Arbitrum, you need to wait a week to cross-chain to Layer1 or another Layer2.
You can have market makers front the liquidity and wait out the week (for a fee). For small transactions, ordinary users can already cross from one Layer2 to another through the Open Intents Framework (e.g., the Across Protocol). But for larger transactions, market makers' liquidity is limited, so the fees they charge are relatively high. Last week I published an article[2] supporting a 2-of-3 verification scheme: OP + ZK + TEE.
Because with 2-of-3, we can meet three requirements simultaneously.
First, the system is completely trustless, with no need for a Security Council; TEE technology serves only an auxiliary role, so we do not need to fully trust it.
Second, we can start using ZK technology, but ZK technology is still in its early stages, so we cannot fully rely on it yet.
Third, we can reduce the withdrawal time from one week to one hour.
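The finalization rule itself is just a majority vote among the three systems; a minimal sketch (illustrative, assuming each verifier yields an independent boolean verdict):

```python
# Minimal sketch of 2-of-3 finalization (OP + ZK + TEE), illustrative only.
# A withdrawal claim finalizes once any two of the three independent proof
# systems agree, so no single system (including the TEE) is fully trusted.

def finalize(op_ok: bool, zk_ok: bool, tee_ok: bool) -> bool:
    """Accept a claim if at least 2 of the 3 verifiers approve."""
    return (op_ok + zk_ok + tee_ok) >= 2

# ZK and TEE agree -> no need to wait out the optimistic challenge window:
print(finalize(op_ok=False, zk_ok=True, tee_ok=True))   # True
# A single faulty or compromised verifier cannot finalize on its own:
print(finalize(op_ok=False, zk_ok=False, tee_ok=True))  # False
```

This is why the scheme tolerates the immaturity of any one component: a failed ZK prover or a compromised TEE alone can neither finalize a bad claim nor block a good one.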
You can imagine that if users use the Open Intents Framework, the liquidity cost for market makers will drop by a factor of 168, because the waiting time for market makers (to perform rebalance operations) shrinks from one week to one hour. In the long term, we plan to reduce the withdrawal time from one hour to twelve seconds (the current block time), and with SSF (single-slot finality) it could drop to four seconds.
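The figures above follow from simple ratios of the rebalancing window, since a market maker's capital is locked for one withdrawal period per rebalance; a quick back-of-the-envelope check (time constants only, no protocol logic):

```python
# Back-of-the-envelope check of the liquidity-cost reductions mentioned above:
# a market maker's capital is locked for one withdrawal period per rebalance,
# so cutting the period cuts the required working capital proportionally.

WEEK_S = 7 * 24 * 3600      # current Optimistic Rollup challenge window
HOUR_S = 3600               # 2-of-3 withdrawal-time target
SLOT_S = 12                 # current Ethereum slot time
SSF_S = 4                   # hoped-for time under single-slot finality

print(WEEK_S // HOUR_S)     # 168: week -> hour
print(HOUR_S // SLOT_S)     # 300: hour -> one slot
print(SLOT_S // SSF_S)      # 3:   slot -> SSF
```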
Currently, we will also adopt methods like zk-SNARK aggregation to parallelize ZK proving and reduce latency a bit. Of course, if users rely on ZK directly, they do not need to go through Intents; but if they go through Intents, the cost will be very low. This is all part of interoperability.
Regarding the role of Layer1, in the early stages of the Layer2 Roadmap, many people thought we could completely replicate Bitcoin's Roadmap, where Layer1 would have very few uses, only doing proofs (doing minimal work), while Layer2 could do everything else.
However, we found that if Layer1 does not play any role at all, it is dangerous for ETH.
One of our biggest concerns, which we discussed earlier, is that the success of Ethereum applications may not translate into the success of ETH.
If ETH is not successful, it will lead to our community having no money and no way to support the next round of applications. So if Layer1 does not play a role at all, the user experience and the entire architecture will be controlled by Layer2 and some applications. There will be no one representing ETH. So if we can assign more roles to Layer1 in some applications, it will be better for ETH.
Next, we need to answer the question: What will Layer1 do? What will Layer2 do?
In February, I published an article[3]: even in a Layer2-centric world, there are many important things Layer1 needs to do. For example, Layer2s need to submit proofs to Layer1; if a Layer2 has issues, users need to exit through Layer1 to another Layer2. Additionally, keystore wallets, oracle data, and so on can be placed on Layer1. Many such mechanisms rely on Layer1.
There are also some high-value applications, such as DeFi, that are actually more suitable for Layer1. One important reason why some DeFi applications are more suitable for Layer1 is their Time Horizon (investment period); users need to wait a long time, such as one year, two years, or three years.
This is especially evident in prediction markets, where sometimes prediction markets will ask questions like what will happen in 2028?
Here lies a problem: if the governance of a Layer2 goes wrong, in theory all its users can exit, moving to Layer1 or another Layer2. But if an application on that Layer2 has assets locked in long-term smart contracts, users cannot exit. So many theoretically safe DeFi applications are actually not very safe.
For these reasons, some applications should still be built on Layer1, so we are starting to pay more attention to Layer1's scalability.
We now have a roadmap, and by 2026, there will be about four to five methods to enhance Layer1's scalability.
The first is Delayed Execution (separating block validation from execution): we validate blocks in one slot and execute them in the next. The advantage is that the maximum acceptable execution time can increase from about 200 milliseconds to 3 or 6 seconds, allowing much more processing time[4].
The second is Block-Level Access Lists: each block specifies up front which accounts' state and which storage slots it will read. This is somewhat like statelessness without witnesses, and the advantage is that EVM execution and IO can proceed in parallel, a relatively simple way to implement parallel processing.
The third is Multidimensional Gas Pricing[5], which sets separate capacity limits for each resource within a block; this is very important for security.
Another is historical data handling (EIP-4444), under which not every node must permanently store all history. Each node might store only 1%, and with a p2p approach your node stores one part while another node stores a different part. This way the history is stored in a more decentralized fashion.
So if we can combine these four solutions, we now believe we can increase Layer1's gas limit tenfold. All our applications will then have the opportunity to rely more on Layer1 and do more there, which will benefit both Layer1 and ETH.
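As an illustration of the block-level access list idea above, declaring state reads up front lets a client warm its state cache in parallel before sequential EVM execution; a toy Python sketch (illustrative, not client code; `fetch_account_state` stands in for real disk reads):

```python
# Toy illustration of block-level access lists enabling parallel IO:
# because the block declares up front which accounts it touches, a client
# can prefetch all of that state concurrently instead of discovering
# reads one by one during sequential EVM execution.
from concurrent.futures import ThreadPoolExecutor

def fetch_account_state(address):
    """Stand-in for a (slow) state/storage read from disk."""
    return {"address": address, "balance": 0, "nonce": 0}

def prefetch_state(block_access_list):
    """Fetch every declared account's state in parallel."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        states = pool.map(fetch_account_state, block_access_list)
    return {s["address"]: s for s in states}

access_list = ["0xaaa", "0xbbb", "0xccc"]  # hypothetical declared accounts
state_cache = prefetch_state(access_list)
# Execution can now run against the warm cache with no blocking IO.
print(sorted(state_cache))
```

In a real client the prefetch would overlap disk latency across many accounts, which is why this is described as a relatively simple route to parallelism.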
Yan:
Okay, the next question, are we likely to welcome the Pectra upgrade this month?
Vitalik:
Actually, we hope to do two things: approximately at the end of this month, we will conduct the Pectra upgrade, and then in Q3 or Q4, we will conduct the Fusaka upgrade.
Yan:
Wow, so soon?
Vitalik:
Hopefully.
Yan:
The next question I have is also related to this. As someone who has witnessed Ethereum's growth, we know that Ethereum has developed about five or six clients (consensus clients and execution clients) simultaneously to ensure security, which involves a lot of coordination work, leading to longer development cycles.
This has its pros and cons. Compared to other Layer1s, it may indeed be slow, but it is also safer.
However, what kind of solutions can we have so that we do not have to wait a year and a half for an upgrade? I have seen you propose some solutions; could you elaborate on them?
Vitalik:
Yes, one solution is that we can improve coordination efficiency. We now have more people who can move between different teams to ensure more efficient communication between teams.
If a client team has a problem, they can raise the issue and let the research team know. Actually, the advantage of Thomas becoming one of our new EDs (executive directors) is that he comes from a client team and is now also inside the EF, so he can facilitate this coordination; that is the first point.
The second point is that we can be stricter with client teams. Our current approach is that if there are five teams, we need all five teams to be fully prepared before we announce the next hard fork (network upgrade). We are now considering that we can start the upgrade as long as four teams complete it, so we do not need to wait for the slowest one, and we can also motivate everyone more.
04 Views on Cryptography and AI
Yan:
So appropriate competition is still necessary. It’s great; I really look forward to every upgrade, but let’s not keep everyone waiting too long.
Next, I want to ask some questions related to cryptography, which are quite broad.
In 2021, when our community was just established, we gathered developers from major exchanges and researchers from Ventures to discuss DeFi. In 2021, everyone was participating in understanding DeFi, learning, and designing DeFi; it was a nationwide craze.
Looking back at ZK: whether for the public or for developers, learning ZK (Groth16, Plonk, Halo2) has become increasingly difficult to keep up with, and the pace of technological advancement is very fast.
Additionally, we now see ZKVMs developing rapidly, so the ZKEVM direction is not as hot as before. Once ZKVMs mature, developers may not need to focus much on the underlying ZK.
What are your suggestions and views on this?
Vitalik:
I think for the ZK ecosystem, the best direction is for most ZK developers to know some high-level language, an HLL (High Level Language). They can write their application code in the HLL, while researchers of proof systems continue to improve and optimize the underlying algorithms. Development needs to be layered; developers should not need to know what happens in the layer below.
Currently there is a problem: Circom and Groth16 have a very developed ecosystem, but this significantly limits ZK applications, because Groth16 has many drawbacks, such as each application needing its own trusted setup, and its efficiency is not very high. So we are thinking about allocating more resources here to help some modern HLLs succeed.
Another good route is ZK RISC-V, because RISC-V can serve as a common target beneath the HLL: many applications, including the EVM and others, can be compiled to and proven over RISC-V[6].
Yan:
Okay, so developers just need to learn Rust, which is great. I attended Devcon in Bangkok last year and also learned about the development of applied cryptography, which was quite eye-opening for me.
Regarding applied cryptography, what do you think about the combination of ZKP with MPC and FHE, and what advice do you have for developers?
Vitalik:
Yes, this is very interesting. I think FHE has good prospects now, but there is a concern: MPC and FHE always require a committee, meaning seven or more selected nodes. If enough of those nodes are compromised, say 51% or 33%, your system has problems. It is equivalent to having a Security Council, and actually more serious than a Security Council, because for a Layer2 at Stage 1, the Security Council needs 75% of its members compromised before problems arise[7]; this is the first point.
The second point is that Security Council members, if they are reliable, keep their keys mostly offline in cold wallets. In most MPC and FHE setups, however, the committee must stay online to keep the system running, so the nodes may be deployed on VPSs or other servers, making them easier to attack.
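The threshold comparison can be made concrete with a quick calculation (a toy sketch; the 7-member committee size comes from the example above):

```python
# Quick comparison of the attack thresholds discussed above (toy numbers).
# An MPC/FHE committee must stay online and typically breaks once a third
# or half of its members are compromised, whereas a Stage 1 rollup's
# Security Council requires a 75% quorum to act against the system[7].
import math

def min_to_compromise(n_members: int, quorum_fraction: float) -> int:
    """Smallest number of members an attacker must control to reach the quorum."""
    return math.ceil(n_members * quorum_fraction)

committee_size = 7  # "seven or more nodes", as mentioned above
print(min_to_compromise(committee_size, 1 / 3))   # 3: breaks a 33% threshold
print(min_to_compromise(committee_size, 1 / 2))   # 4: breaks a 51% threshold
print(min_to_compromise(committee_size, 3 / 4))   # 6: needed against a 75% council
```

With a small committee the absolute number of machines an attacker must capture is tiny, which is the core of the concern: the quorum fraction matters less than the fact that a handful of always-online servers hold the keys.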
This worries me a bit. I think many applications can still be developed, which have advantages but are not perfect.
Yan:
Finally, I want to ask a relatively light question. I see you have been paying attention to AI recently. I want to list some viewpoints.
For example, Elon Musk said that humanity might just be a guiding program for silicon-based civilization.
Then there is a viewpoint in "The Network State" that centralized states may prefer AI, while democratic states prefer blockchain.
From our experience in the crypto space, decentralization presupposes that everyone will abide by the rules, check and balance each other, and understand the risks, which ultimately leads to elite politics. So what do you think of these viewpoints? Just share your thoughts.
Vitalik:
Yes, I am thinking about where to start answering.
Because the field of AI is very complex. For example, five years ago, no one would have predicted that the U.S. would have the best closed-source AI in the world, while China would have the best open-source AI. AI can enhance everyone's capabilities, and sometimes it can also enhance the power of some centralized (states).
However, AI can also have a relatively democratizing effect. When I use AI myself, I find that in areas where I am already among the top thousand globally, such as in some ZK development fields, AI actually helps me very little in ZK; I still need to write most of the code myself. But in areas where I am a novice, AI can help me a lot. For example, in Android app development, I had never done it before. I made an app ten years ago using a framework and wrote it in JavaScript, then converted it to an app; apart from that, I had never written a native Android app.
Earlier this year, I did an experiment where I wanted to try writing an app through GPT, and it was completed within an hour. This shows that the gap between experts and novices has been significantly reduced with the help of AI, and AI can also provide many new opportunities.
Yan:
To add a point, I really appreciate the new perspective you provided. I previously thought that with AI, experienced programmers would learn faster, while new programmers would find it less friendly. However, in some aspects, it indeed enhances the capabilities of newcomers. It may be a form of equality rather than division, right?
Vitalik:
Yes, but now a very important issue that also needs to be considered is what effects the combination of the technologies we are developing, including blockchain, AI, cryptography, and some other technologies, will have on society.
Yan:
So you still hope that humanity will not just be ruled by elites, right? You also hope to achieve a Pareto optimality for the entire society, where ordinary people become super individuals through the empowerment of AI and blockchain.
Vitalik:
Yes, yes, super individuals, super communities, super humans.
05 Expectations for the Ethereum Ecosystem and Advice for Developers
Yan:
OK, then we come to the last question, what are your expectations and messages for the developer community? What would you like to say to the Ethereum community developers?
Vitalik:
For these Ethereum application developers, it’s time to think.
There are many opportunities to develop applications in Ethereum now, and many things that were previously impossible to do can now be done.
There are many reasons for this, such as:
First: The previous TPS of Layer1 was completely insufficient, but now this problem is gone;
Second: The privacy issue that could not be solved before can now be addressed;
Third: Because of AI, the difficulty of developing anything has decreased. It can be said that although the complexity of the Ethereum ecosystem has increased somewhat, AI still allows everyone to better understand Ethereum.
So I think many things that failed before, including ten years ago or five years ago, may now succeed.
In the current blockchain application ecosystem, I think the biggest problem is that we have two types of applications.
The first type can be said to be very open, decentralized, secure, and particularly idealistic (applications). But they only have 42 users. The second type can be said to be casinos. The problem is that these two extremes are both unhealthy.
So what we hope to do is create applications that,
First, users genuinely like to use and that have real value; such applications will be better for the world.
Second, have real business models and are economically sustainable, not needing to rely on the limited funds of the Foundation or other organizations; this is also a challenge.
But now I think everyone has more resources than before, so if you can find a good idea and execute it well, your chances of success are very high.
Yan:
Looking back, I think Ethereum has been quite successful, continuously leading the industry, and striving to solve the problems faced by the industry under the premise of decentralization.
Another point I deeply resonate with is that our community has always been non-profit, through Gitcoin Grants in the Ethereum ecosystem, as well as OP's retroactive rewards and airdrop rewards from other projects. We have found that building in the Ethereum community can receive a lot of support. We are also thinking about how to ensure the community can operate sustainably and stably.
Building on Ethereum is truly exciting, and we hope to see the true realization of the world computer soon. Thank you for your valuable time.
The interview took place at Mo Sing Leng, Hong Kong
April 7, 2025
The references mentioned by Vitalik in the article are summarized as follows:
[1]: https://ethresear.ch/t/fork-choice-enforced-inclusion-lists-focil-a-simple-committee-based-inclusion-list-proposal/19870
[2]: https://ethereum-magicians.org/t/a-simple-l2-security-and-finalization-roadmap/23309
[3]: https://vitalik.eth.limo/general/2025/02/14/l1scaling.html
[4]: https://ethresear.ch/t/delayed-execution-and-skipped-transactions/21677
[5]: https://vitalik.eth.limo/general/2024/05/09/multidim.html
[6]: https://ethereum-magicians.org/t/long-term-l1-execution-layer-proposal-replace-the-evm-with-risc-v/23617
[7]: https://specs.optimism.io/protocol/stage-1.html?highlight=75#stage-1-rollup