Vitalik's new article: A Year in Review of Decentralized Accelerationism and Artificial Intelligence
Original Title: "d/acc: one year later"
Author: Vitalik Buterin, Founder of Ethereum
Compiled by: Leek, Foresight News
This article revolves around the concept of decentralized, defensive acceleration (d/acc), exploring its applications and challenges in technological development, including AI safety and regulation, its connections to cryptocurrencies, and public goods funding. The author elaborates on what d/acc means, compares it with alternative strategies for addressing AI risk, discusses the value of cryptocurrencies and experiments in public goods funding mechanisms, and closes with a look at the decades ahead: full of challenges, but also of opportunities for humanity to build a safer and better world with the tools and ideas already at hand.
Introduction
Special thanks to Liraz Siri, Janine Leger, and the Balvi volunteers for their feedback and review.
About a year ago, I wrote an article on technological optimism, expressing my overall enthusiasm for technology and the tremendous benefits it can bring, while also conveying caution on certain specific issues, primarily superintelligent AI and the catastrophic risks it could pose if built improperly, including the risk of humanity irreversibly losing its power.
One of the core points of that article was a principle: decentralized, democratic, differential defensive acceleration. We should accelerate technological development, but focus deliberately on technologies that improve our ability to defend rather than those that cause harm, and we should work to diffuse power rather than concentrate it in the hands of a few elites who would decide right and wrong on everyone's behalf. The model of defense should be democratic Switzerland or the historically quasi-anarchist Zomia region, not the lords and castles of medieval feudalism.
In the year since then, these ideas and concepts have undergone significant development and maturation. I shared these thoughts on the "80,000 Hours" platform (note: an organization focused on career choices) and received numerous responses, most of which were positive, though there were also some criticisms.
The work itself has continued to progress and produce tangible results: we have seen advances in verifiable open-source vaccines; a deepening appreciation of the value of healthy indoor air; "community notes" continuing to play a positive role; a breakout year for prediction markets as an information tool; ZK-SNARKs (zero-knowledge succinct non-interactive arguments of knowledge) deployed in government identity systems and on social media; account abstraction securing Ethereum wallets; open-source imaging tools finding applications in medicine and brain-computer interfaces (BCI); and more.
Last autumn, we held the first significant d/acc event: "d/acc Discovery Day" (d/aDDy) at Devcon, which brought together speakers from various pillar areas of d/acc (biology, physics, networks, information defense, and neurotechnology) for a full day of activities. People who have been dedicated to these technologies for years have increasingly understood each other's work, while outsiders have become more aware of the broader vision: the same values that drive the development of Ethereum and cryptocurrencies can extend to a wider world.
The Connotation and Extension of d/acc
It is 2042. You see a news report that a new epidemic may break out in your city. You are used to such news: people tend to overreact to every animal disease mutation, and most of the time nothing comes of it. The previous two potential epidemics were detected early through wastewater monitoring and open-source analysis of social media, and were contained at their inception. This time, however, is different: prediction markets put the probability of at least 10,000 cases at 60%, which makes you anxious.
Just yesterday, the genetic sequence of the virus was identified. An update for the air testing device in your pocket was released, allowing it to detect the new virus (either through a single breath test or after being exposed to indoor air for 15 minutes). Meanwhile, open-source instructions and code for generating vaccines using equipment available at any modern medical facility are expected to be released within weeks. Most people have not yet taken any action; they mainly rely on widely adopted air filtration and ventilation measures to protect themselves.
Due to your own immune issues, you act more cautiously: the open-source locally running personal assistant AI you use, in addition to handling routine tasks like navigation, restaurant, and event recommendations, also takes into account real-time air testing data and carbon dioxide levels, recommending only the safest places to you. This data is provided by thousands of participants and devices, and the risk of data being leaked or misused is minimized through ZK-SNARKs and differential privacy techniques (if you intend to contribute data to these datasets, other personal assistant AIs will verify whether these cryptographic tools are indeed effective).
Two months later, the epidemic has fizzled out: it seems that 60% of people followed the basic protocols, wearing masks when the air tester alarmed to indicate the virus's presence, and isolating at home if they personally tested positive. This was enough to push the transmission rate, already greatly reduced by passive heavy air filtration, below 1. A disease that simulations suggested could have been five times worse than the COVID-19 pandemic of twenty years earlier ended up having no serious impact.
Devcon's d/acc Day
The d/acc event at Devcon achieved a very positive outcome: the d/acc concept successfully brought together people from different fields and genuinely sparked their interest in each other's work.
Holding an event with "diversity" is not difficult, but enabling people from different backgrounds and interests to truly establish close connections is quite challenging. I still vividly remember my experiences in middle and high school, being forced to watch lengthy operas, which I personally found tedious. I knew I "should" appreciate them, as failing to do so would label me as an uncultured computer science slacker, but I could not resonate with the content of the operas on a deeper level. However, the atmosphere of d/acc Day was entirely different: it felt like people genuinely loved learning about various works from different fields.
If we aspire to build a future brighter than domination, deceleration, and destruction, we must engage in this kind of broad alliance building. d/acc seems to have achieved significant success in this regard, and this alone is enough to highlight the precious value of this concept.
The core idea of d/acc is simple: decentralized, democratic, differential defensive acceleration. It aims to build technologies that tilt the offense-defense balance toward defense, without relying on handing more power to central authorities along the way. There is an inherent connection between these two aspects: any decentralized, democratic, or liberal political structure tends to thrive when defense is easy, and to struggle when defense is hard; in the latter case, the more likely outcome is a chaotic period of war of all against all, with an eventual equilibrium of rule by the strongest.
One way to understand the significance of attempting to achieve decentralization, defensiveness, and acceleration simultaneously is to compare it with the ideas generated by abandoning any one of these three aspects.
Chart from last year's "My Technological Optimism"
Decentralized acceleration, but neglecting the "differentiated defense" aspect
Essentially, this is being an effective accelerationist (e/acc) while also pursuing decentralization. Many people take this approach; some describe themselves as d/acc, but helpfully describe their focus as "offense." Many others show a more moderate enthusiasm for "decentralized AI" and similar topics, but in my view they pay clearly insufficient attention to the "defense" side.
In my opinion, this approach may mitigate the risk of specific groups imposing dictatorial rule over humanity, but it fails to address potential structural issues: in an environment conducive to offense, there is always a persistent risk of disaster, or someone may position themselves as a protector and permanently occupy a dominant position. Regarding AI, it also cannot adequately address the risk of humanity's overall power being diminished relative to AI.
Differentiated defense acceleration, but ignoring "decentralization and democracy"
Accepting centralized control in exchange for safety has always held some appeal, and readers are no doubt familiar with many such examples and their drawbacks. Recently, some have worried that extreme centralized control may be the only answer to future extreme technologies: consider the hypothetical scenario where "everyone wears a 'freedom tag,' a successor to today's more limited wearable monitoring devices, akin to the ankle tags used as alternatives to imprisonment in several countries… encrypted video and audio are continuously uploaded and interpreted in real time by machines." But centralized control is a matter of degree. A milder, often-overlooked, yet still harmful form of it is resistance to public scrutiny in biotech (e.g., food, vaccines), together with the closed-source norms that allow that resistance to go unchallenged.
The risks of this approach are evident: the center itself often becomes the source of risk. We saw this during the COVID-19 pandemic, where gain-of-function research funded by multiple major world governments may have been the source of the outbreak, where centralized epistemology led the World Health Organization to refuse for years to acknowledge that COVID was airborne, and where mandatory social distancing and vaccine mandates triggered political backlash that may last for decades. Similar situations are likely to recur in any risk scenario related to AI or other dangerous technologies. By contrast, a decentralized approach would better address risks emanating from the center itself.
Decentralized defense, but excluding acceleration
Essentially, this is an attempt to slow technological progress or promote economic decline.
This strategy faces two challenges. First, technological and economic growth is, overall, enormously beneficial to humanity, and any delay carries immeasurable costs. Second, in a non-totalitarian world, stagnation is unstable: whoever "cheats" the most, finding plausible ways to keep advancing, gains the upper hand. Deceleration strategies can work to some extent in specific contexts: European food regulation, which keeps food there arguably healthier than in the United States, is one instance; the success of nuclear non-proliferation so far is another. But they cannot work forever.
Through d/acc, we are committed to achieving the following goals:
- Upholding principles in the face of today's increasingly tribalized world, rather than building just anything: we build specific things that make the world safer and better.
- Recognizing that exponential technological progress means the world will become extremely strange, and that humanity's total "footprint" in the universe will only grow. Our ability to protect vulnerable animals, plants, and people from harm must keep improving, and the only way out is through.
- Building technologies that can genuinely protect us, rather than relying on the assumption that "good people (or good AIs) will control everything." We achieve this by constructing tools that are naturally more effective for building and protecting than for destruction.
Another perspective on thinking about d/acc is to return to a framework from the late 2000s European Pirate Party movement: empowerment.
Our goal is to build a world that retains human agency, achieving negative freedom by avoiding active interference from others (whether ordinary citizens, governments, or superintelligent robots) in our ability to shape our own destinies, while also achieving positive freedom by ensuring we have the knowledge and resources to exercise that ability. This echoes a classical liberal tradition that has persisted for centuries, encompassing Stewart Brand's focus on "tool access" and John Stuart Mill's emphasis on education and freedom as key elements of human progress—perhaps we could also add Buckminster Fuller's vision that the process of solving global problems should be participatory and widely distributed. Given the technological landscape of the 21st century, we can view d/acc as a means to achieve these same goals.
The Third Dimension: The Synergistic Development of Survival and Prosperity
In my article last year, d/acc particularly focused on defensive technologies: physical defense, biological defense, network defense, and information defense. However, mere decentralized defense is insufficient to build a great world: we also need a forward-looking positive vision that clarifies what humanity can achieve after gaining new decentralization and security.
Last year's article indeed contained a positive vision in two aspects:
- When addressing the challenges of superintelligence, I proposed a path (which is not original to me) for how we can achieve superintelligence without losing power:
- Currently, construct AI as a tool rather than a highly autonomous agent.
- In the future, use tools like virtual reality, electromyography, and brain-computer interfaces to establish a closer feedback mechanism between AI and humans.
- Over time, gradually move towards the ultimate outcome, where superintelligence is a product of close integration between machines and humans.
- When discussing information defense, I also mentioned that besides defensive social technologies aimed at helping communities maintain cohesion and engage in high-quality discussions in the face of attackers, there are also progressive social technologies that can assist communities in making high-quality judgments more easily: Pol.is is one example, as are prediction markets.
But at the time, these two points felt disconnected from d/acc's core argument: "here are some ideas for building a more democratic, defense-favoring world at the base layer, and, by the way, here are some unrelated thoughts on how we might achieve superintelligence."
However, I believe that in reality, there are some crucial connections between the d/acc technologies labeled as "defensive" and "progressive." Let's expand the d/acc chart from last year's article by adding this axis (while renaming it "Survival and Prosperity") to see what results it presents:
There exists a consistent pattern across various fields, where the sciences, ideas, and tools that help us "survive" in a certain domain are closely related to those that enable us to "prosper." Here are some specific examples:
- Many recent studies on combating COVID-19 focus on the persistent presence of the virus in the body, which is seen as a key mechanism for long COVID. Recently, there have also been signs that the persistent presence of the virus may be a pathogenic factor for Alzheimer's disease—if this view holds, then addressing the issue of viral persistence across all types of organisms may become key to tackling the aging problem.
- Low-cost and micro-imaging tools, such as those being developed by Openwater, have tremendous potential in treating microthrombi, viral persistence, cancer, etc., and can also be applied in the field of brain-computer interfaces.
- The idea of promoting the construction of social tools suitable for highly adversarial environments (like community notes) and social tools for reasonable cooperative environments (like Pol.is) is very similar.
- Prediction markets hold significant value in both high cooperation and high confrontation environments.
- Zero-knowledge proofs and similar technologies allow for computations on data while protecting privacy, increasing the amount of data available for beneficial work like scientific research while enhancing privacy protection.
- Solar energy and batteries are crucial for driving the next wave of clean economic growth, while also demonstrating excellence in decentralization and physical resilience.
Moreover, there are significant interdependencies between different disciplines:
- Brain-computer interfaces matter both as an information defense technology and as a collaboration technology, because they enable more nuanced communication of our thoughts and intentions. BCI is not merely about machines connecting to minds: it can also be mind-to-machine-to-mind interaction. This resonates with the value of brain-computer interfaces for pluralism.
- Many biotechnologies rely on information sharing, and in many cases, people are only willing to share information when they are confident it will only be used for specific applications. This relies on privacy technologies (like zero-knowledge proofs, fully homomorphic encryption, obfuscation techniques, etc.).
- Collaborative technologies can be used to coordinate funding for any other technological domains.
The Dilemma: AI Safety, Urgent Timelines, and Regulatory Quandaries
Different people have vastly different AI timelines. The chart is from Zuzalu in Montenegro in 2023.
The most compelling counterarguments to my article last year came from the AI safety community. Their argument was: "Of course, if we had half a century to develop strong AI, we could focus on building all these beneficial things. But in reality, it seems we may only have three years to develop general AI, and another three years to develop superintelligence. Therefore, if we do not want the world to fall into destruction or otherwise irreversible dilemmas, we cannot merely accelerate the development of beneficial technologies; we must also slow down the development of harmful technologies, which means we need strong regulatory measures that may anger the powerful." In my article last year, aside from vaguely calling for not constructing risky forms of superintelligence, I did not propose any specific strategies for "slowing down harmful technology development." So here, it is necessary to directly address this issue: if we find ourselves in the least ideal world, where AI risks are extremely high and the timeline may be as short as five years, what regulatory measures would I support?
Reasons for Caution Towards New Regulations
Last year, the main AI regulatory proposal was California's SB-1047. SB-1047 required developers of the most powerful models (those costing over $100 million to train, or over $10 million to fine-tune) to take a series of safety-testing measures before release, and would have held AI model developers accountable if they failed to exercise sufficient caution. Many critics argued the bill was "a threat to open source"; I disagreed, since the cost threshold meant it would affect only the most powerful models: even Llama3 likely falls below it. In retrospect, however, I believe the bill had a more serious problem: like most regulation, it overfit the present situation. The focus on training cost has already proven fragile in the face of new technology: the recent state-of-the-art DeepSeek v3 model cost only $6 million to train, and in new models like o1, cost is shifting from training to inference.
Actors Most Likely Responsible for AI Superintelligence Catastrophe Scenarios
In reality, the actors most likely responsible for AI superintelligence catastrophe scenarios are military forces. As we have witnessed in the past half-century of biosafety (and earlier), militaries are willing to take some terrible actions and are prone to making mistakes. Today, the application of AI in military contexts is rapidly advancing (as seen in Ukraine and Gaza). Moreover, any safety regulations passed by governments will, by default, exempt their own military forces and companies closely cooperating with the military.
Response Strategies
Nevertheless, these arguments do not provide us with a reason to be helpless. On the contrary, we can use them as guidance to attempt to formulate rules that raise the least concerns.
Strategy 1: Accountability
If someone's actions cause legally actionable harm in some way, they may be sued. This does not address the risk posed by militaries and other "above the law" actors, but it is a very general approach that avoids overfitting, which is why libertarian-leaning economists typically support it.
The main accountability targets considered so far are as follows:
- Users: those who use AI.
- Deployers: intermediaries providing AI services to users.
- Developers: those who build AI.
Holding users accountable seems to align best with incentives. While the link between how a model is developed and how it ends up being used is often unclear, users decide exactly how an AI is used. Holding users accountable creates strong pressure to use AI in what I consider the right way: building mech suits for the human mind rather than creating new self-sustaining forms of intelligent life. The former responds regularly to user intent, and so will not cause catastrophic actions unless the user wants them. The latter carries the greatest risk of spiraling out of control and triggering the classic "AI gone rogue" scenario. Another benefit of placing liability as close to end use as possible is that it minimizes the risk of liability pushing people toward actions that are harmful in other ways (e.g., closed-sourcing, know-your-customer (KYC) surveillance, state/corporate collusion to covertly restrict users, as when banks refuse to serve certain customers, or excluding large regions of the world).
A classic counterargument to solely holding users accountable is that users may be ordinary individuals with little money, or even anonymous, making it impossible for anyone to actually pay for catastrophic damages. This viewpoint may be exaggerated: even if some users are too small to bear responsibility, ordinary customers of AI developers are not, so AI developers will still be incentivized to build products that assure users they will not face high accountability risks. That said, this is still a valid point that needs addressing. You need to incentivize someone with resources in the pipeline to take appropriate precautions, and both deployers and developers are easily identifiable targets who still have significant influence over the safety of the models.
Deployer accountability seems reasonable. A common concern is that it would not apply to open-source models, but this seems manageable, especially since the most powerful models are likely to be closed-source (and if they turn out to be open-source, then deployer accountability, while not very useful, would not cause much harm). Developer accountability faces the same concern (though for open-source models there is some barrier to fine-tuning a model to do something it was not originally permitted to do), and the same counterargument applies. As a general principle, imposing a "tax" on control, essentially saying "you can build something you cannot control, or build something you can control, but if you build something you can control, then 20% of that control must be used for our purposes," seems like a reasonable position for a legal system to take.
One idea that seems not to have been fully explored is to hold other actors in the pipeline accountable, who are more likely to have sufficient resources. A very d/acc-aligned idea is to hold accountable the owners or operators of any devices that AI takes over (e.g., through hacking) during the execution of certain catastrophic harmful actions. This would create a very broad incentive for people to work towards making the infrastructure of the world (especially in computing and biology) as safe as possible.
Strategy 2: A Global "Soft Pause" Button on Industrial-Scale Hardware
If I were convinced we need measures more muscular than liability rules, this is the strategy I would choose. The goal is to have the ability to reduce worldwide available computing power by about 90-99% for 1-2 years at a critical moment, to buy humanity more time to prepare. The value of 1-2 years should not be underestimated: one year of "wartime mode" can easily be worth a hundred years of work under conditions of complacency. Ways to implement a "pause" are already being explored, including concrete proposals such as requiring hardware registration and verifying location.
A more advanced approach is to use clever cryptography: for example, industrial-scale (but not consumer-grade) AI hardware could be equipped with a trusted hardware chip that allows it to keep operating only if, every week, it receives a 3-of-3 signature from major international bodies, including at least one non-military-affiliated institution. The signatures would be device-agnostic (if desired, we could even require a zero-knowledge proof that they were published on a blockchain), so it would be all-or-nothing: there would be no practical way to authorize one device to keep running without authorizing all the others.
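To make the shape of this mechanism concrete, here is a minimal sketch of the weekly 3-of-3 heartbeat check, assuming hypothetical signing institutions and using Ed25519 signatures from Python's `cryptography` library. A real design would also need secure timekeeping, anti-rollback protection, and tamper-resistant hardware, none of which are modeled here.

```python
# A toy sketch (not a real hardware design): an industrial AI chip keeps
# running only if it sees fresh signatures from all three international
# bodies over the current week number. The signed message contains no
# device ID, so one signature set authorizes every device, or none.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical signers standing in for the three institutions.
signers = [Ed25519PrivateKey.generate() for _ in range(3)]
TRUSTED_KEYS = [s.public_key() for s in signers]  # burned into the chip

def heartbeat_message(week: int) -> bytes:
    # Deliberately device-agnostic: only the epoch number is signed.
    return f"ai-hw-heartbeat:week={week}".encode()

def chip_may_run(week: int, signatures: list[bytes]) -> bool:
    """Firmware-side check: all three signatures must verify (3-of-3)."""
    if len(signatures) != len(TRUSTED_KEYS):
        return False
    msg = heartbeat_message(week)
    for key, sig in zip(TRUSTED_KEYS, signatures):
        try:
            key.verify(sig, msg)
        except InvalidSignature:
            return False
    return True

# Normal week: all three institutions sign, every chip keeps running.
week = 2860
sigs = [s.sign(heartbeat_message(week)) for s in signers]
assert chip_may_run(week, sigs)

# "Soft pause": if any one institution withholds its signature, no valid
# signature set exists that re-enables any subset of devices.
assert not chip_may_run(week, sigs[:2] + [b"\x00" * 64])
```

The all-or-nothing property comes from signing only the epoch and never a device identifier; publishing the signatures (or zero-knowledge proofs of them) on a blockchain, as suggested above, would make it publicly verifiable that no side channel selectively re-enabled anyone.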
This seems to "meet the requirements" in maximizing benefits and minimizing risks:
- This is a useful capability: if we receive indications that an AI approaching superintelligence begins to do things that could lead to catastrophic harm, we would want to transition more slowly.
- Merely possessing the capability to soft-pause, until such a critical moment arrives, does developers little harm.
- Focusing on industrial-scale hardware and only targeting 90-99% avoids dystopian practices such as implanting spy chips or forced kill switches in consumer-grade laptops, or coercing small countries into harsh measures against their will.
- Focusing on hardware seems robust to technological change. We have seen across multiple generations of AI that quality depends heavily on available computing power, especially in the early versions of a new paradigm. Thus, reducing available compute by 10-100x could easily decide a rapid confrontation between a runaway superintelligent AI and the humans trying to stop it.
- The inherent hassle of needing to obtain signatures weekly will strongly deter the idea of extending this scheme to consumer-grade hardware.
- Verification can be conducted through random checks, and operating at the hardware level will make it difficult to exempt specific users (methods based on legal shutdown rather than technical means do not possess this all-or-nothing attribute, making them more susceptible to sliding into exemptions for militaries, etc.).
Hardware regulation is already being strongly considered, although typically within the framework of export controls, which essentially embodies a "we trust our side but not the other side" mentality. Leopold Aschenbrenner famously argued that the U.S. should race to achieve a decisive advantage and then force China to sign an agreement limiting the number of devices they can operate. In my view, this approach seems risky and may combine the flaws of multipolar competition and centralization. If we must limit people, it seems better to equally limit everyone and strive for actual cooperation to organize implementation, rather than one side attempting to dominate all.
The Role of d/acc Technologies in AI Risks
Both of these strategies (accountability and the hardware pause button) have vulnerabilities, and it is clear that they are merely temporary stopgap measures: if something can be done on a supercomputer at time T, it is likely to be possible on a laptop at time T + 5 years as well. Therefore, we need more stable measures to buy time. Many d/acc technologies are relevant here. We can view the role of d/acc technologies as follows: if AI were to take over the world, how would it do so?
- It hacks into our computers → network defense
- It creates super plagues → biological defense
- It persuades us (to trust it, or to distrust one another) → information defense
As briefly mentioned above, accountability rules are a naturally d/acc-aligned regulatory approach because they can effectively incentivize the adoption of these defensive measures worldwide and take them seriously. Taiwan has recently been experimenting with holding parties accountable for false advertising, which can be seen as an example of using accountability to encourage information defense. We should not be overly eager to impose accountability everywhere and must remember the benefits of ordinary freedoms that enable small players to innovate without fear of litigation, but where we genuinely wish to push for safety more strongly, accountability can be quite flexible and effective.
The Role of Cryptocurrencies in d/acc
Many aspects of d/acc extend far beyond typical blockchain themes: biosafety, brain-computer interfaces, and collaborative discourse tools seem to diverge significantly from what cryptocurrency enthusiasts usually discuss. However, I believe there are some important connections between cryptocurrencies and d/acc, particularly:
- d/acc is an extension of the fundamental values of cryptocurrencies (decentralization, censorship resistance, an open global economy and society) to other technological domains.
- Because cryptocurrency users are natural early adopters and share aligned values, the cryptocurrency community is a natural early user base for d/acc technologies. Its strong emphasis on community (online and offline, from events to pop-up cities), and the fact that these communities actually do high-stakes things together rather than just talk to each other, make the crypto community an especially attractive incubator and testbed for d/acc technologies that fundamentally operate at the level of groups rather than individuals (e.g., much of information defense and biological defense).
- Many cryptocurrency technologies can be applied in d/acc thematic areas: blockchain can be used to build more robust and decentralized financial, governance, and social media infrastructures, while zero-knowledge proofs can be used to protect privacy, etc. Today, many of the largest prediction markets are built on blockchains, and they are gradually becoming more complex, decentralized, and democratic.
- There are also win-win collaboration opportunities in adjacent technological areas that are very useful for cryptocurrency projects while being key to achieving d/acc goals: formal verification, computer software and hardware security, and robust governance technologies with adversarial resilience. These make Ethereum blockchain, wallets, and decentralized autonomous organizations (DAOs) more secure and robust, and they also achieve important civil defense goals, such as reducing our vulnerability to cyberattacks (including potential attacks from superintelligent AI).
Cursive is an application that uses fully homomorphic encryption (FHE) to let users discover areas of mutual interest with other users while protecting privacy. Edge City in Chiang Mai, one of Zuzalu's many offshoots, used the application.
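To illustrate the kind of privacy-preserving matching described in the caption: Cursive itself uses FHE, but the "learn only the overlap" property can be sketched with a simpler, classic Diffie-Hellman-style private set intersection. All names here are illustrative, and the parameters are demo-grade, not a vetted construction.

```python
# Toy private set intersection: two users learn which interests they
# share and nothing else. (Cursive uses FHE; this swaps in DH-style
# blinding purely to illustrate the privacy property.)
import hashlib
import secrets

P = 2**255 - 19  # a large prime; fine for a demo, not a vetted group

def h(item: str) -> int:
    """Hash an interest string into the multiplicative group mod P."""
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

def blind(items: list[str], key: int) -> list[int]:
    """Each hashed item raised to a secret exponent: H(x)^key mod P."""
    return [pow(h(x), key, P) for x in items]

alice_interests = ["biosecurity", "zk-proofs", "prediction markets"]
bob_interests = ["zk-proofs", "longevity", "prediction markets"]
a = secrets.randbelow(P - 2) + 1  # Alice's secret exponent
b = secrets.randbelow(P - 2) + 1  # Bob's secret exponent

alice_blinded = blind(alice_interests, a)  # Alice -> Bob
bob_blinded = blind(bob_interests, b)      # Bob -> Alice

# Bob re-blinds Alice's values and returns them; exponentiation
# commutes, so H(x)^(ab) collides exactly when both sides listed x.
alice_doubly = [pow(v, b, P) for v in alice_blinded]  # Bob -> Alice
bob_doubly = {pow(v, a, P) for v in bob_blinded}      # Alice computes

shared = [item for item, v in zip(alice_interests, alice_doubly)
          if v in bob_doubly]
print(shared)  # ['zk-proofs', 'prediction markets']
```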
d/acc and Public Goods Funding
One question I have long been interested in is how to come up with better mechanisms for funding public goods: projects that are valuable to very large groups but lack naturally accessible commercial models. My past work in this area includes my contributions to quadratic funding and its applications in Gitcoin grants, retroactive public goods funding (retro PGF), and the recent deep funding initiatives.
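For readers new to the mechanism, here is a minimal sketch of the quadratic funding rule mentioned above. The pool-scaling convention and the example numbers are my own illustrative choices; production systems such as Gitcoin add pairwise bounding, matching caps, and sybil resistance on top.

```python
# Quadratic funding in one function: a project's raw subsidy tops its
# total up to the square of the sum of square roots of contributions,
# so many small donors count for more than one big one. The matching
# pool is then split pro-rata among projects by raw subsidy.
from math import sqrt

def quadratic_match(contribs: dict[str, list[float]], pool: float) -> dict[str, float]:
    raw = {
        name: sum(sqrt(c) for c in cs) ** 2 - sum(cs)
        for name, cs in contribs.items()
    }
    total = sum(raw.values()) or 1.0  # avoid division by zero
    return {name: pool * r / total for name, r in raw.items()}

contribs = {
    "open-vaccine-docs": [1.0] * 100,  # 100 donors giving $1 each
    "big-donor-project": [100.0],      # one donor giving $100
}
print(quadratic_match(contribs, pool=1000.0))
# {'open-vaccine-docs': 1000.0, 'big-donor-project': 0.0}
# Same $100 raised, but broad support captures the whole pool.
```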
Many people are skeptical about the concept of public goods. This skepticism typically comes from two angles:
- Public goods have historically been used as a justification for governments to impose hard central planning and intervention on society and the economy.
- A common view is that public goods funding lacks rigor, operates on social desirability bias (funding what sounds good rather than what is good), and favors insiders who can play social games.
These are important criticisms and reasonable ones. However, I believe that strong decentralized public goods funding is crucial for the d/acc vision, as a key goal of d/acc (minimizing central control points) inherently obstructs many traditional business models. It is possible to build successful businesses on open-source foundations—several Balvi grantees are doing so—but in some cases, it is difficult enough that important projects require additional ongoing support. Therefore, we must do the hard work of figuring out how to fund public goods in a way that addresses both of the aforementioned criticisms.
The solution to the first problem is essentially credible neutrality and decentralization. Central planning is problematic because it hands control over to elites who may become abusive of power and because it often overfits the current situation, becoming increasingly ineffective over time. Quadratic funding and similar mechanisms are precisely about funding public goods in a way that is as credibly neutral and (in terms of architecture and politics) decentralized as possible.
The second problem is more challenging. A common criticism of quadratic funding is that it quickly becomes a popularity contest, forcing project funders to spend heavily on publicity. Projects "in front of" users (e.g., end-user applications) get funded, while more behind-the-scenes projects (the proverbial "dependency maintained by some random person in Nebraska") get nothing at all. Optimism's retroactive funding relies on a smaller number of expert badge holders; there, the popularity-contest effect is reduced, but the social effect of having close personal relationships with badge holders is amplified.
Deep funding is my latest effort to address this issue. Deep funding has two main innovations:
- Dependency graphs. We do not ask each juror a global question ("What is the value of project A to humanity?"), but a local one ("Which is more valuable for outcome C: project A or project B? And by how much?"). Humans are notoriously bad at global questions: in a famous study, when asked how much they would pay to save N birds, respondents answered roughly the same $80 for N = 2,000, N = 20,000, and N = 200,000. Local questions are far more tractable. We then combine the local answers into a global answer by maintaining a "dependency graph": for each project, which other projects contributed to its success, and by how much?
- AI as a refinement of human judgment. Each juror is assigned only a small random sample of the questions. An open competition lets anyone submit an AI model that attempts to fill in all the edges of the graph. The final answer is a weighted sum of the models most compatible with the jurors' answers (for code examples, see here; a toy sketch of both ideas also follows this list). This approach lets the mechanism scale to very large sizes while requiring only a small number of "bits of information" from jurors. Fewer bits reduce the opportunities for corruption and make each bit higher quality: jurors can think long about each question instead of quickly clicking through hundreds. And using an open competition for the AI reduces bias from any single training and administration process. The open market of AIs is the engine; humans hold the steering wheel.
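Here is a toy sketch of how the two ideas above fit together. The names and the inverse-error weighting are my own illustrative choices, not the production design the post links to: models propose full edge weightings, jurors spot-check a few edges with local ratio questions, and the final answer weights each model by its agreement with the jury.

```python
# Deep-funding-style aggregation, reduced to a toy: three projects feed
# into outcome C; two "models" propose credit shares for every edge; a
# jury answers one local question; models closer to the jury get more
# weight in the final edge values.
import math

edges = [("A", "C"), ("B", "C"), ("D", "C")]  # "X contributed to C"

models = {  # each model fills in ALL edges of the dependency graph
    "model1": {("A", "C"): 0.5, ("B", "C"): 0.3, ("D", "C"): 0.2},
    "model2": {("A", "C"): 0.2, ("B", "C"): 0.2, ("D", "C"): 0.6},
}

# One juror answer, local not global: "A vs B for C, by what ratio?"
jury_ratios = {(("A", "C"), ("B", "C")): 2.0}  # A ~ 2x B's contribution

def model_error(weights: dict) -> float:
    """Squared log-error of the model's implied ratios vs the jury's."""
    return sum(
        (math.log(weights[e1] / weights[e2]) - math.log(r)) ** 2
        for (e1, e2), r in jury_ratios.items()
    )

# Weight models by inverse error (one of many reasonable schemes).
inv = {m: 1.0 / (1e-9 + model_error(w)) for m, w in models.items()}
total = sum(inv.values())
final = {e: sum(inv[m] / total * models[m][e] for m in models) for e in edges}
print(final)  # model1 dominates: its A:B ratio (~1.67) is nearer to 2
```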
But deep funding is just the latest example; there have been other ideas for public goods funding before, and there will be more in the future. allo.expert has done a great job cataloging them. The fundamental goal is to create a social tool that can fund public goods with at least a level of accuracy, fairness, and open access comparable to market funding for private goods. It does not have to be perfect; after all, markets themselves are far from perfect. But it should be effective enough that developers engaged in high-quality open-source projects beneficial to everyone can continue to do so without feeling the need to make unacceptable compromises.
Today, most leading projects in d/acc thematic areas—vaccines, brain-computer interfaces, "edge brain-computer interfaces" like wrist electromyography and eye tracking, anti-aging drugs, hardware, etc.—are proprietary projects. This has significant downsides in ensuring public trust, as we have already seen in many of the aforementioned fields. It also shifts attention to competitive dynamics ("Our team must win this critical industry!"), rather than ensuring that these technologies emerge quickly enough to protect us in a world of superintelligent AI. For these reasons, strong public goods funding could be a powerful advocate for openness and freedom. This is another way the cryptocurrency community can assist d/acc: by earnestly exploring these funding mechanisms and making them work well in their own context, preparing for broader applications in open-source science and technology.
The Future
The coming decades present significant challenges. Recently, I have been contemplating two challenges:
- A powerful new wave of technologies, above all strong AI, is approaching fast, and these technologies come with important traps we need to avoid. "Artificial superintelligence" could arrive in five years or in fifty. In either case, it is not clear that the default outcome is automatically positive, and as this article and the previous one describe, there are multiple traps to avoid.
- The world is becoming increasingly uncooperative. Many powerful actors that previously seemed to act based on noble principles (universalism, freedom, shared humanity… etc.) are now more openly and actively pursuing their own personal or tribal interests.
However, each of these challenges holds a glimmer of hope. First, we now have very powerful tools to complete our remaining work more quickly:
- Current and near-future AI can be used to build other technologies and can serve as a factor in governance (as seen in deep funding or information finance). It is also highly relevant to brain-computer interfaces, which can themselves provide further productivity boosts.
- Coordination at scale is now more possible than ever before. The internet and social media have expanded coordination's reach, global finance (including cryptocurrencies) has strengthened its power, information defense and collaboration tools can now improve its quality, and perhaps soon brain-computer interfaces in human-machine-human form will deepen it.
- Formal verification, sandbox technologies (web browsers, Docker, Qubes, GrapheneOS, etc.), secure hardware modules, and other technologies are improving, making better cybersecurity possible.
- Writing any type of software is much easier than it was two years ago.
- Recent foundational research on how viruses work, particularly the simple insight that the most important mode of transmission is airborne, shows a much clearer path for improving biological defense.
- Recent advancements in biotechnology (e.g., CRISPR, advancements in biological imaging) have made various biotechnologies, whether for defense, longevity, super happiness, exploring multiple new biological hypotheses, or simply doing very cool things, more accessible.
- The convergence of computing and biotechnology is making possible personal biotech tools that you can use to adapt, monitor, and improve your own health. Cyber-defense technologies such as cryptography make this personalized dimension more viable.
Second, many of the principles we cherish are no longer monopolized by specific parts of the old powers; they can be reclaimed by a broad coalition that welcomes anyone in the world to join. This may be the greatest benefit of the recent political "realignment" around the world, which is worth leveraging. Cryptocurrencies have excelled at capitalizing on this and finding global appeal; d/acc can do the same.
Access to tools means we can adapt and improve our biology and our environment, and the "defense" part of d/acc means we can do so without infringing on others' freedom to do the same. Liberal pluralist principles mean we can differ greatly in how we do this, while our commitment to shared human goals means it should get done.
We humans remain the brightest stars. The task before us—to build a brighter 21st century while protecting human survival, freedom, and agency as we reach for the stars—is a challenging one. But I believe we are up to the task.