Gavin Wood: How to prevent Sybil attacks for effective airdrops?

Gavin Wood
2024-08-30 19:29:31
In Web3, you will hardly interact with humans anymore.

Author: Gavin Wood

Gavin has recently been focusing on the issue of Sybil resistance. PolkaWorld revisits Dr. Gavin Wood's keynote speech at Polkadot Decoded 2024 to explore his insights on how to prevent Sybil attacks.

What is a Sybil attack?

You may know that I have been researching several projects, writing a white paper focused on the JAM project, and doing some coding work in that direction. In fact, for the past two years I have been thinking about a question that is critical for this field: how to prevent Sybil attacks. The question is ubiquitous. Blockchain systems are built on game theory, and when we analyze games we often need to limit the number of participants or manage the arbitrary behavior they might exhibit.

When we design digital systems, we very much want to be able to determine whether a specific endpoint, that is, a digital endpoint, is operated by a human. I want to clarify that I am not discussing identity here. Identity is certainly important, but the goal is not to determine the real-world identity behind an endpoint; it is to distinguish a device acting on its own from a device that is currently being operated by a human. There is also a supplementary question: if the device is indeed operated by a human, can we give that person a pseudonym that identifies them in a specific context, so that if they interact with us again from that device, we can recognize them?

As our modes of interaction have shifted from primarily communicating with other people (as in the 1980s, when I was born) to interacting with systems, these digital systems, especially decentralized Web3 systems, have become increasingly important. In the 1980s, people mostly communicated directly with one another; by the 1990s, we began to interact with services over the phone, such as telephone banking. That was a significant change. Initially, telephone banking meant large, manually operated call centers where we still spoke to people, but those systems eventually evolved into today's automated voice systems. With the development of the internet, this interpersonal interaction has decreased dramatically, and in everyday services we hardly interact directly with humans at all anymore. With the rise of Web2 e-commerce, the trend became even more pronounced. Web3 cements it: in Web3, you almost never interact with humans. The core idea of Web3 is to let you interact with machines, and even to let machines interact with each other.

What is the significance of Sybil resistance?

So, what does this mean? Sybil resistance is a fundamental element of any genuine social structure and sits at the core of many of our social systems, including business, governance, voting, and opinion aggregation. All of these rely heavily on Sybil resistance to build communities. Many mechanisms that we take for granted in organizations are in fact based on the assumption of Sybil resistance. Fair use, noise control, community management: all of these are built on that defensive capability. Many things require us to confirm that a given entity is a real human. If someone behaves badly, we may want to exclude them from the community for a while. You can see this in digital services, and of course it exists in the real world as well.

With Sybil resistance, we can introduce mechanisms that constrain behavior without erecting entry barriers or sacrificing the accessibility of the system. For example, there are two basic ways to incentivize behavior: the "carrot and stick", that is, reward and punishment. The stick (punishment) approach requires you to put down a deposit, which is forfeited if you misbehave; staking is a simple example. The carrot (reward) approach assumes you will behave well and, if you fail to meet expectations, takes away some of your privileges. This is essentially how most civil societies operate.

However, without Sybil resistance on the blockchain, this approach cannot be implemented effectively. In civil society such mechanisms work because once someone is imprisoned, they cannot commit the same crime again, at least not while incarcerated. Freedom is something people are born with, and governments can, in principle, take it away. I am not suggesting that we imprison anyone on-chain; my point is that similar constraints cannot currently be enforced on-chain. That makes it hard to curb bad behavior in free services, rather than merely rewarding good behavior. Commerce and promotional activity rely heavily on being able to confirm that the people transacting are real humans.
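To make the point concrete, here is a minimal sketch in plain Rust (all names are hypothetical, not any real chain API) of why exclusion only works once checks are person-level: an account-level ban is evaded by simply creating a new account, while a ban keyed to a Sybil-resistant person alias actually holds.

    use std::collections::HashSet;

    type AccountId = u64;
    type PersonAlias = u64;

    struct Service {
        banned_accounts: HashSet<AccountId>,
        banned_persons: HashSet<PersonAlias>,
    }

    impl Service {
        // Account-level ban: trivially evaded by creating a fresh account.
        fn allow_by_account(&self, account: AccountId) -> bool {
            !self.banned_accounts.contains(&account)
        }

        // Person-level ban: evading it means defeating the personhood mechanism itself.
        fn allow_by_person(&self, person: PersonAlias) -> bool {
            !self.banned_persons.contains(&person)
        }
    }

    fn main() {
        let service = Service {
            banned_accounts: [1].into_iter().collect(),
            banned_persons: [10].into_iter().collect(),
        };
        // The misbehaving person (alias 10) simply switches from account 1 to account 2...
        assert!(!service.allow_by_account(1));
        assert!(service.allow_by_account(2));
        // ...but stays excluded when the check is person-level.
        assert!(!service.allow_by_person(10));
    }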

This is a screenshot of a website I occasionally use. It sells a very good whiskey that many people love and that is hard to buy in its country of origin. In Europe it is relatively cheap, and it seems they keep prices down by limiting the quantity any individual can purchase. An operation like that is nearly impossible to implement in a truly Web3 system.

There are also significant difficulties in community building, in airdrops, and in identifying community members and distributing to them. Overall, airdrops are very inefficient in terms of capital expenditure, because the goal of an airdrop is to reach as many people as possible. To achieve a fair distribution, you would first identify individuals and then give everyone the same amount. In practice all kinds of issues arise, such as wildly differing wallet balances, and you end up with a very unbalanced distribution curve with huge disparities. The result is that most people receive barely enough to be a meaningful incentive.
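As an illustration of the capital-efficiency point, the following sketch (plain Rust, hypothetical types, not a real airdrop implementation) contrasts splitting a budget across accounts with splitting it across verified persons: once Sybil accounts are present, the per-account version lets one person capture a multiple of the fair share.

    use std::collections::{HashMap, HashSet};

    type AccountId = u32;
    type PersonAlias = u64; // Sybil-resistant pseudonym for a verified person

    // Naive airdrop: every account gets an equal share, so one person with many
    // accounts captures a multiple of the fair share.
    fn airdrop_per_account(accounts: &[AccountId], budget: u64) -> HashMap<AccountId, u64> {
        let share = budget / accounts.len() as u64;
        accounts.iter().map(|a| (*a, share)).collect()
    }

    // Person-keyed airdrop: the budget is split across unique verified persons,
    // regardless of how many accounts each of them controls.
    fn airdrop_per_person(
        account_owner: &HashMap<AccountId, PersonAlias>,
        budget: u64,
    ) -> HashMap<PersonAlias, u64> {
        let persons: HashSet<PersonAlias> = account_owner.values().copied().collect();
        let share = budget / persons.len() as u64;
        persons.into_iter().map(|p| (p, share)).collect()
    }

    fn main() {
        // Person 1 controls accounts 1, 2 and 3; person 2 controls account 4.
        let owner: HashMap<AccountId, PersonAlias> =
            [(1, 1), (2, 1), (3, 1), (4, 2)].into_iter().collect();
        let accounts: Vec<AccountId> = owner.keys().copied().collect();

        // Per account: person 1 effectively captures 750 of the 1,000 budget.
        println!("{:?}", airdrop_per_account(&accounts, 1_000));
        // Per person: each verified person receives 500.
        println!("{:?}", airdrop_per_person(&owner, 1_000));
    }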

Then there is the issue of "fair use". Its impact today is relatively minor: if you use too many network resources, the provider typically just reduces your speed, and you can continue to use the network.

Going back 10 to 15 years, if you used too much internet bandwidth, your internet service provider might decide you were not making reasonable use of its "unlimited" service and would essentially cut you off entirely, rather than merely throttling you as happens now. That practice let them offer nearly unlimited internet service to most users, because they could identify users and distinguish who was using resources reasonably.

One foundation of Web2 is the freemium service model, which relies largely on the ability to identify users. Twenty or more years ago, user identification mechanisms might not have been very sophisticated, but that is no longer the case. If you want to open an account, there are usually three or more mechanisms for confirming that you are a real individual and a user they have not seen before. For example, trying to register an Apple account without buying an iPhone feels like a challenge; these companies are generally reluctant to give you an account. Of course, they advertise that accounts are free, but I do not know what the AI in the background is doing; I tried ten times before finally succeeding. In the end, I still had to buy an iPhone.

I believe that if we could better identify individuals, many processes similar to "Oracleization" (information verification) would become much easier.

A typical example of using proof of personhood to verify information in society is the jury system. When we need an impartial judge (an oracle) to decide whether someone is guilty, the system randomly selects an odd number of ordinary people from society to hear the evidence and reach a verdict. Similarly, in other areas of social life, such as representation and opinion gathering, representation is an important component of society, and we manage it through Sybil-resistant means. Of course, because civic infrastructure is currently inadequate, this management is often imperfect, especially where representation gets confused with identity. Often, when you want to vote, you must prove your real identity, for example by showing a driver's license or passport. But in reality a ballot represents a share of voting power; it does not need to be tied directly to your personal identity.

How do we achieve Sybil resistance? What are the current solutions?

So, how should this be done?

In Web2 and before it, there were many ways to perform this kind of verification. In today's Web2 systems, these methods are usually combined. For example, if you want to create a new Google account, you may have to pass a CAPTCHA and verify via email and SMS. Sometimes these checks escalate all the way to talking to a real person. If you have ever had your Amazon account locked, you know what I mean: it is essentially a maze you navigate until you find the right buttons and phone options to finally reach a real customer-service representative. For stronger Sybil resistance, information such as ID cards or credit cards may be used.

However, once we enter the world of Web3, I have not found any truly satisfactory solution. There are candidates, but they differ significantly along three dimensions: whether they are decentralized, whether they protect privacy, and whether they are genuinely resilient (that is, resistant to attack).

Resilience is becoming an increasingly significant issue, and in fact most existing systems run into problems on more than one of these fronts.

There is a class of systems I call "common confessional systems", in which you disclose your private information to a particular authority, which then holds information about you that you may not want to share with others. For example, you might scan your passport and submit it, so the authority ends up holding everyone's passport data, which puts it in a position of power simply because it possesses all that information. Common confessional systems are not suitable for Web3.

You also sometimes see proof-of-personhood systems that look like Web3 but rely on a "common key-management authority": a single powerful authority decides, by controlling the keys, who counts as a legitimate individual. In other words, this authority has the power to determine who is a "real user" in the system. Sometimes these authorities even hold keys on users' behalf, but more often they simply retain the power to decide who is a legitimate individual.

Both approaches rely on centralized authorities to control users' private or identity information, which contradicts Web3's decentralized, user-empowering philosophy.

Putting something on-chain does not make it Web3. You can simply move Web2 strategies, or strategies that rely on centralized authorities, onto a chain, but doing so does not change the strategy itself. It only means the strategy may be executed more resiliently; the strategy itself is still not Web3. Likewise, just because a name is a long hexadecimal string does not mean it is private. Without specific countermeasures, that string can still be linked to real-world identity information.

If a system relies on a common confessional mechanism, it is not a privacy-preserving solution. We have seen enough data breaches to know that merely putting data behind a corporate firewall or inside some trusted hardware does not make it secure. And a proof-of-personhood solution suitable for Web3 needs not local individual identity or local community membership, but global individuality, which is a completely different concept.

Some systems attempt to address this, but they rely on proprietary hardware and common key-management authorities, so they are not truly Web3 solutions. For example, the Worldcoin project tries to solve the problem with trusted hardware, but it uses a unified key-management authority and centralized data sources, so it does not align well with the decentralized philosophy of Web3.

Another example is Gitcoin Passport, which is widely used in the Ethereum community as an aggregation platform for other identity and personhood solutions. It relies on a federated key-management authority to establish individual identities, and its data sources often depend on centralized authorities, including centralized entities like Coinbase.

Idena is an interesting Web3 solution with no common key-management authority and no centralized entity. However, it is only a single mechanism, and it remains unclear whether that mechanism will be sufficiently resilient as the AI industry evolves. So far it has held up reasonably well, but its user base is still small, on the order of a thousand users.

Overall, there is currently no method that can completely solve this problem.

Gavin's perspective on achieving Sybil resistance

On individual identity there are two ways of thinking: one remote, one local. Machines have no inherent understanding of "individual identity", and it is unlikely that some cryptographic technique will suddenly solve the problem. Some will say that fingerprints or other biometrics give humans a uniqueness that machines can measure, but a purely digital system has great difficulty proving this. Perhaps the system that comes closest is Worldcoin, yet even that is just a machine measuring something in a way that is hopefully not too easily gamed.

So we should understand individual identity primarily as a question of authentication: how do elements of a digital system verify that other elements are real individuals? The question, then, is what this authentication rests on. Is it physical contact, or some more remote form of evidence? Do we believe an account belongs to a real individual because we have met that person, and at the time we met them they were not simultaneously meeting anyone else, so we can infer they are a unique individual in that context? Or is it only because of information we see on a screen, backed by other evidence of their individuality?

When we talk about remote authentication, that is, authentication based on indirect, non-physical evidence, AI is likely to cause problems. If we rely on physical evidence instead, practicality becomes the problem. So we are caught between these two limitations. Still, I believe that with innovation and imagination we can find workable solutions.

So what do we need?

So, what do we need? What is our plan?

I believe that making Polkadot more useful in the real world, not just in DeFi, NFTs, and the virtual blockchain domain, hinges on finding a simple way to identify individuals. By identification I do not mean determining who a person is, as in "I know this is Gavin Wood", but establishing that "this is a unique individual". I do not think there will be a single solution, so we need a modular and extensible framework.

First, we can integrate existing, reasonable solutions (such as Idena). Second, the system should not be limited by one person's ideas or rely solely on one person's notion of which mechanisms might work. It should be open, so that anyone can contribute solutions.
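A rough sketch of what such a modular, open framework could look like is given below. It is plain Rust with invented names (PersonhoodProvider, PersonhoodRegistry), not an existing Polkadot or JAM API: each mechanism plugs in behind one trait, anyone can register a new one, and the framework combines whatever evidence is available.

    // One pluggable proof-of-personhood mechanism (an Idena-style test,
    // an in-person attestation, and so on).
    trait PersonhoodProvider {
        fn name(&self) -> &'static str;
        // Confidence in [0.0, 1.0] that `account` is a unique individual.
        fn confidence(&self, account: u32) -> f32;
    }

    struct IdenaLike;
    impl PersonhoodProvider for IdenaLike {
        fn name(&self) -> &'static str { "idena-like test" }
        fn confidence(&self, _account: u32) -> f32 { 0.8 } // placeholder score
    }

    struct MeetupAttestation;
    impl PersonhoodProvider for MeetupAttestation {
        fn name(&self) -> &'static str { "in-person attestation" }
        fn confidence(&self, _account: u32) -> f32 { 0.95 } // placeholder score
    }

    // The framework itself is open: anyone can register a new mechanism.
    struct PersonhoodRegistry {
        providers: Vec<Box<dyn PersonhoodProvider>>,
    }

    impl PersonhoodRegistry {
        fn register(&mut self, p: Box<dyn PersonhoodProvider>) {
            self.providers.push(p);
        }
        // Combine evidence by taking the strongest available proof.
        fn is_person(&self, account: u32, threshold: f32) -> bool {
            self.providers
                .iter()
                .map(|p| p.confidence(account))
                .fold(0.0_f32, f32::max)
                >= threshold
        }
    }

    fn main() {
        let mut registry = PersonhoodRegistry { providers: Vec::new() };
        registry.register(Box::new(IdenaLike));
        registry.register(Box::new(MeetupAttestation));
        for p in &registry.providers {
            println!("registered mechanism: {}", p.name());
        }
        assert!(registry.is_person(42, 0.9));
    }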

Next, we need strong contextual pseudonymity. In fact, I initially wrote "anonymity", and to some extent I do mean anonymity, in the sense of anonymity with respect to your real-world identity. But we also want pseudonymity, so that in any specific context you can prove not only that you are a unique individual, but also that you are the same unique individual when you use the system again in that same context.
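The sketch below illustrates the contextual-pseudonym idea under simplifying assumptions: a private per-person seed and the standard library's non-cryptographic hasher stand in for whatever cryptographic construction (a PRF or a zero-knowledge scheme) a real system would use. The property shown is exactly the one described above: stable within a context, unlinkable across contexts.

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    // Derive a context-specific pseudonym from a person's private seed.
    // The same (seed, context) pair always yields the same alias; a different
    // context yields an unrelated alias.
    fn contextual_alias(person_seed: u64, context: &[u8]) -> u64 {
        let mut h = DefaultHasher::new();
        person_seed.hash(&mut h);
        context.hash(&mut h);
        h.finish()
    }

    fn main() {
        let seed = 0xC0FFEE; // held privately by the person
        let a1 = contextual_alias(seed, b"My context");
        let a2 = contextual_alias(seed, b"My context");
        let b1 = contextual_alias(seed, b"Another context");
        assert_eq!(a1, a2); // recognisable again in the same context
        assert_ne!(a1, b1); // unlinkable across contexts
    }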

Finally, we need solid SDKs and APIs so that this capability is as easy to use as anything else in Substrate, in Polkadot smart contracts, or in the upcoming JAM ecosystem. It must be easy to use. To be specific, I do not know how many people here have written FRAME code, but when writing a new blockchain you often see a line like let account = ensure_signed(origin);. This line takes the origin of the transaction and confirms whether it comes from an account, and if so, tells me which account. But an account is not equivalent to an individual: one person may use one or many accounts, and a script may likewise use one or many accounts. An account by itself tells us nothing about individual identity. So if we want to ensure that a transaction comes from a real person rather than from one of a million accounts, we need to be able to replace that line with something like let alias = ensure_person(origin, &b"My context");.
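To show the intended difference, here is a toy model in plain Rust. It is not the real FRAME API: Origin, ensure_signed, and especially ensure_person and lookup_alias are simplified stand-ins for how a runtime might check that a transaction comes from a verified person and hand back a context-specific alias rather than an account.

    type AccountId = u64;
    type Alias = u64;

    enum Origin {
        Signed(AccountId),
        None,
    }

    #[derive(Debug)]
    enum Error {
        BadOrigin,
        NotAPerson,
    }

    // Today: any signing account passes, whether it belongs to a human or is
    // one of a million script-controlled accounts.
    fn ensure_signed(origin: Origin) -> Result<AccountId, Error> {
        match origin {
            Origin::Signed(who) => Ok(who),
            _ => Err(Error::BadOrigin),
        }
    }

    // Hypothetical: the account must additionally map to a verified person, and
    // what comes back is a context-specific alias, not the account itself.
    fn ensure_person(origin: Origin, context: &[u8]) -> Result<Alias, Error> {
        let who = ensure_signed(origin)?;
        lookup_alias(who, context).ok_or(Error::NotAPerson)
    }

    // Stub lookup; a real runtime would consult the personhood system here.
    fn lookup_alias(_who: AccountId, _context: &[u8]) -> Option<Alias> {
        Some(7) // pretend the account maps to verified person alias 7
    }

    fn main() {
        let alias = ensure_person(Origin::Signed(1), b"My context").unwrap();
        println!("verified person alias: {alias}");
        assert!(ensure_person(Origin::None, b"My context").is_err());
    }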

There are two benefits worth noting.

First, we are not just asking whether an account is signing a transaction, but whether a person is signing the transaction. This brings a huge difference in the functionality we can achieve.

Second, and importantly, different operations have different contexts, and anonymity and pseudonymity are enforced within those contexts. When the context changes, the pseudonym changes; pseudonyms from different contexts cannot be linked to each other, nor can a pseudonym be linked to the person behind it. This is a fully anonymous pseudonym system, and it becomes a very important tool in blockchain development, especially when building systems that are useful in the real world.

So, what constraints might we place on the actual mechanisms for identifying individuals? First, the mechanism must be widely accessible. If it only lets part of the population participate, it will not be very useful. It should not require owning assets or paying expensive fees, or at least not excessively high ones.

Inevitably, there will be trade-offs between different mechanisms. I do not believe there will be a one-size-fits-all solution. But some trade-offs are acceptable, while others are not. Resilience, decentralization, and sovereignty should not be compromised, but some mechanisms may require less effort but more commitment, while others may require more effort but less commitment. We should have a reasonable expectation that individuals verified by the system (i.e., accounts linked to a person, or pseudonyms) are indeed unique individuals in the real world.

Different mechanisms will overlap and differ in how well they measure individuality in a decentralized, resilient, non-authoritarian Web3 manner. This means we cannot expect perfection, but the error should not be off by orders of magnitude; the differences should be well under an order of magnitude. The system must also have a strong ability to resist identity abuse, so that a small number of individuals or organizations cannot acquire large numbers of individual identities.

Crucially, the system must have safeguards against that. Some mechanisms may only be able to provide individual-identity scores with less than full confidence; that is a more ambitious goal, and some mechanisms may achieve it while others do not. Some will be binary: either we believe this account is a unique individual or we do not. Others may report, say, 50% confidence, which could also mean the individual has two accounts and we have 50% confidence in each of them.
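One way to express such judgements, purely as an illustration (the enum and names here are invented, not part of any existing API), is a type that lets both binary and probabilistic mechanisms report their result:

    // A personhood judgement can be all-or-nothing or probabilistic.
    enum PersonhoodJudgement {
        // Either we believe this account is a unique individual or we do not.
        Binary(bool),
        // A confidence in [0.0, 1.0]; 0.5 might also mean one person is
        // plausibly behind two accounts, each held at 50% confidence.
        Confidence(f32),
    }

    fn accept(judgement: &PersonhoodJudgement, threshold: f32) -> bool {
        match judgement {
            PersonhoodJudgement::Binary(yes) => *yes,
            PersonhoodJudgement::Confidence(c) => *c >= threshold,
        }
    }

    fn main() {
        assert!(accept(&PersonhoodJudgement::Binary(true), 0.9));
        assert!(!accept(&PersonhoodJudgement::Confidence(0.5), 0.9));
    }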

Of course, all of this must be permissionless and not difficult to implement. I should not have to stress this, but there must be no common confessional mechanisms or common key-management authorities in the system.

What are the benefits of doing this?

So why do this? What are the benefits?

We have discussed some of the ways society uses or relies on individuality. But how do we realize this on-chain? We can begin to imagine a Polkadot system with no transaction fees, where reasonable use is free. Imagine a "Plaza Chain", which, if you are not familiar with the idea, is essentially an upgraded Asset Hub with smart-contract capability and access to the staking system.

On such a Plaza Chain, we can imagine gas being free: as long as you stay within reasonable usage limits, you pay nothing. Of course, if you run scripts or send huge numbers of transactions, you pay fees, because that exceeds an ordinary individual's allowance. Imagine systems like this opening up to the public for free, and communities being launched efficiently and effectively through targeted airdrops. At the same time, we can envision more advanced governance for Polkadot.
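A minimal sketch of that fee policy, with assumed quota and fee numbers and invented names (this is not an actual Polkadot fee mechanism), might look like this: verified persons get a free allowance per period, and anything beyond it, or anything from an unverified account, pays the normal fee.

    struct UsageRecord {
        is_verified_person: bool,
        txs_this_period: u32,
    }

    const FREE_TX_QUOTA: u32 = 100; // assumed per-period allowance for a person
    const NORMAL_FEE: u64 = 1_000;  // assumed base fee in the smallest unit

    fn fee_for_next_tx(user: &UsageRecord) -> u64 {
        if user.is_verified_person && user.txs_this_period < FREE_TX_QUOTA {
            0 // within reasonable personal use: gas is free
        } else {
            NORMAL_FEE // scripts, heavy users, and unverified accounts pay
        }
    }

    fn main() {
        let casual = UsageRecord { is_verified_person: true, txs_this_period: 3 };
        let bot = UsageRecord { is_verified_person: false, txs_this_period: 3 };
        assert_eq!(fee_for_next_tx(&casual), 0);
        assert_eq!(fee_for_next_tx(&bot), NORMAL_FEE);
    }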

Now, I am not particularly convinced by "one person, one vote". In some cases it is needed for legitimacy, but it usually does not produce particularly good results. However, we can consider other voting methods, such as quadratic voting or regional voting. For certain representative elements, one person, one vote may still be illuminating.

We can also imagine a jury-like oracle system, in which parachains and smart contracts use local, lower-tier oracle systems, perhaps for price feeds or to resolve disputes between users. They could also, when needed, call on a "grand jury" or "supreme court" that selects members at random from known unique individuals to make a decision, help resolve the dispute, and earn a small reward. Because the members are drawn randomly from a large, fair pool, we can expect this to provide a resilient and credible dispute-resolution mechanism.
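Below is a small sketch of such a jury oracle, with invented names and a toy linear-congruential step standing in for a real on-chain randomness source: an odd number of jurors is drawn from the verified-person set and a simple majority decides.

    type Alias = u64;

    // Pick an odd-sized jury from the verified-person set. The simple
    // linear-congruential step below stands in for an on-chain randomness source.
    fn select_jury(persons: &[Alias], seed: u64, size: usize) -> Vec<Alias> {
        assert!(size % 2 == 1 && size <= persons.len(),
            "need an odd jury no larger than the person set");
        let mut jury = Vec::with_capacity(size);
        let mut state = seed;
        while jury.len() < size {
            let candidate = persons[(state % persons.len() as u64) as usize];
            if !jury.contains(&candidate) {
                jury.push(candidate);
            }
            state = state.wrapping_mul(6364136223846793005).wrapping_add(1);
        }
        jury
    }

    // Majority verdict over boolean votes (for example, "did the seller deliver?").
    fn verdict(votes: &[bool]) -> bool {
        votes.iter().filter(|v| **v).count() * 2 > votes.len()
    }

    fn main() {
        let persons: Vec<Alias> = (1..=1_000).collect();
        let jury = select_jury(&persons, 42, 5);
        println!("jury: {jury:?}");
        assert!(verdict(&[true, true, false, true, false]));
    }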

You can imagine noise control systems, especially in decentralized social media integrations, which can help manage spam and bad behavior. In DeFi, we can envision reputation control systems similar to credit scoring, but perhaps more focused on whether you have been found to default on payments, allowing the system to provide services similar to a freemium model.

Well, that concludes the first part of this presentation, and I hope it has been helpful to you.
