Vitalik on Worldcoin: How will biometric proof of personhood reshape the world?

ChainCatcher Selection
2023-07-24 21:08:08
In principle, even though the various ways of implementing it carry risks, the concept of proof of personhood remains extremely valuable. At the same time, a world with no proof of personhood at all cannot avoid risks either: such a world seems more likely to be dominated by centralized identity solutions, money, small closed communities, or some combination of the three.

Original Title: What do I think about biometric proof of personhood?

Original Author: Vitalik

Translated by: Qianwen, bayemon.eth, ChainCatcher

Special thanks to the Worldcoin team, the Proof of Humanity community, and Andrew Miller for the discussions.

People in the Ethereum community have been working hard to build a decentralized proof-of-personhood solution, a tricky problem but potentially one of the most valuable tools we could have. Proof of personhood, also known as the "unique human" problem, is a limited form of real-world identity: it asserts that a given registered account is controlled by a real person (and a different real person from every other registered account), ideally without revealing which real person it is.

There have been several attempts to address this problem: Proof of Humanity, BrightID, Idena, and Circles are all examples. Some of them come with their own applications (often a UBI token), and some have found use in Gitcoin Passport to verify which accounts are valid for quadratic voting. Zero-knowledge technology like Sismo adds privacy to many of these solutions. Recently, we have seen the rise of a much larger and more ambitious identity project: Worldcoin.

Worldcoin was co-founded by Sam Altman, best known as the CEO of OpenAI. The idea behind the project is simple: artificial intelligence will create a lot of wealth for humanity, but it may also put many people out of work and make it nearly impossible to tell who is human and who is a bot. So we need to plug this gap by:

(i) creating a very good proof-of-personhood system, so that humans can prove they are indeed human;

(ii) giving everyone a UBI.

What makes Worldcoin unique is that it relies on highly sophisticated biometric technology, using a dedicated hardware device called Orb to scan each user's iris: their goal is to produce a large number of Orbs and distribute them widely around the world, placing them in public locations so that anyone can easily obtain an ID. Notably, Worldcoin also promises to achieve decentralization over time. Initially, this means technical decentralization: becoming an L2 on Ethereum using the Optimism stack and employing ZK-SNARKs and other cryptographic techniques to protect user privacy. Later, it will also include decentralized governance of the system itself.

Worldcoin has faced criticism over privacy and security issues with the Orb, design issues with its token, and some ethical choices the company has made. Some of these criticisms are very specific, targeting decisions the project could easily have made differently (and indeed, the Worldcoin project itself may be willing to change them). But there is also a more fundamental question: is it a good idea to use biometrics at all, whether Worldcoin's eye-scanning or simpler methods like uploading a video of your face or the human-verification games used in Idena? And there are criticisms of all proofs of personhood, arguing that the risks include unavoidable privacy leaks, further erosion of people's ability to browse the internet anonymously, coercion by authoritarian governments, and the potential impossibility of being secure and decentralized at the same time.

This article will discuss these issues and offer some arguments to help you decide whether it is a good idea to bow before this spherical deity and scan your eyes (or face, voice, etc.), and whether the natural alternatives, using social graph-based proof of personhood or giving up on proof of personhood entirely, are any better.

What is proof of personhood, and why is it important?

The simplest definition is: it creates a list of public keys, with the system guaranteeing that each public key is controlled by a unique human. In other words, if you are human, you can put one key on the list, but you cannot put two keys on the list; if you are a robot, you cannot put any keys on the list.
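
To make the definition concrete, here is a minimal sketch in Python (illustrative only, not the API of any actual project) of the invariant such a registry enforces: one key per unique human, zero keys per bot. How the "uniqueness tag" is obtained (an iris hash, a vouching ceremony, etc.) is exactly what the rest of this article is about.

```python
# Minimal sketch (illustrative, not any project's actual API) of the
# interface a proof-of-personhood registry implies: each verified human
# may register exactly one public key, and bots may register none.

class PersonhoodRegistry:
    def __init__(self):
        self.keys = set()          # public keys, one per unique human
        self.humans_seen = set()   # opaque uniqueness tags (e.g. an iris hash)

    def register(self, uniqueness_tag: str, public_key: str) -> bool:
        """Admit one key per unique human; reject duplicates."""
        if uniqueness_tag in self.humans_seen or public_key in self.keys:
            return False  # this human (or this key) is already registered
        self.humans_seen.add(uniqueness_tag)
        self.keys.add(public_key)
        return True

    def is_verified(self, public_key: str) -> bool:
        return public_key in self.keys
```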

Proof of personhood is valuable because it solves many of the anti-spam and anti-concentration-of-power problems that people face, while avoiding dependence on centralized authorities and revealing as little information as possible. If proof of personhood is not solved, decentralized governance (including micro-governance, such as votes on social media posts) becomes much easier for very wealthy actors, including hostile governments, to capture. Many services can only prevent denial-of-service attacks by putting a price on access, and sometimes a price high enough to deter attackers is also too high for many lower-income legitimate users.

Many major applications in today's world solve this problem with government-backed identity systems such as credit cards and passports. This solves the problem, but it makes large, perhaps unacceptable, sacrifices on privacy, and it can be trivially attacked by governments themselves.

How many proof-of-personhood supporters see the twin risks we face.

In many proof-of-personhood projects, not just Worldcoin but also others (Circles, BrightID, Idena), the flagship application is a built-in "N tokens per person" token (sometimes described as a UBI coin). Each user registered in the system receives a fixed quantity of tokens each day (or hour, or week); a minimal sketch of this pattern follows the list below. But there are many other applications:

  • Token distribution airdrops
  • Token or NFT sales, providing better terms for less wealthy users
  • Voting in DAOs
  • Graph-based reputation systems
  • Quadratic voting (and funding and attention payments)
  • Defending against bot/fake attacks on social media
  • CAPTCHA alternatives to prevent DoS attacks
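
As a rough illustration of the flagship UBI application mentioned above, here is a hedged sketch of the drip pattern (the rate, names, and structure are invented for illustration and are not any project's actual token logic); it reuses the PersonhoodRegistry sketch from earlier:

```python
import time

# Hedged sketch of the "N tokens per person per day" pattern: every
# verified unique human can claim a fixed drip, and bots get nothing
# because they cannot appear in the registry at all.

DRIP_PER_DAY = 10          # tokens per verified person per day (made-up rate)
SECONDS_PER_DAY = 86_400

class UBIToken:
    def __init__(self, registry):
        self.registry = registry   # a PersonhoodRegistry-like object
        self.balances = {}         # public_key -> token balance
        self.last_claim = {}       # public_key -> timestamp of last claim

    def claim(self, public_key, now=None):
        """Mint the tokens accrued since the holder's last claim."""
        now = time.time() if now is None else now
        if not self.registry.is_verified(public_key):
            raise ValueError("not a verified unique human")
        last = self.last_claim.get(public_key, now - SECONDS_PER_DAY)
        payout = int((now - last) / SECONDS_PER_DAY * DRIP_PER_DAY)
        if payout > 0:
            self.balances[public_key] = self.balances.get(public_key, 0) + payout
            self.last_claim[public_key] = now   # toy: fractional days forfeited
        return payout
```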

In many of these cases, the common goal is to establish open and democratic mechanisms to avoid centralized control by project operators and dominance by the wealthiest users. The latter is particularly important in decentralized governance. In many cases, existing solutions rely on a combination of the following two aspects:

(1) Highly opaque AI algorithms, which leave wide latitude to imperceptibly discriminate against users whom the operators simply dislike;

(2) Centralized identity verification, also known as KYC.

An effective proof-of-personhood solution would be a much better alternative, providing the security properties these applications need without the flaws of the existing centralized approaches.

What early attempts at proof of personhood have there been?

Proof of personhood mainly takes two forms: social graph-based proof and biometric proof.

Social graph-based proof of personhood relies on some form of vouching: if Alice, Bob, Charlie, and David are all verified humans, and they all say that Emily is a verified human, then Emily is probably also a verified human. Vouching is usually reinforced with incentives: if Alice says Emily is human but it turns out she is not, both Alice and Emily may be penalized. Biometric proof of personhood verifies some physical or behavioral trait of Emily that distinguishes humans from bots (and distinguishes individual humans from one another). Most projects use a combination of the two techniques.

The four systems I mentioned at the beginning of the article work roughly as follows:

Proof of Humanity: You upload a video of yourself and provide a deposit. To be approved, an existing user must vouch for you, and you must wait through a period during which you can be challenged. If there is a challenge, the Kleros decentralized court determines whether your video is genuine; if it is not, you lose your deposit and the challenger receives a reward.

BrightID: You join video-call "verification parties" with other users, where everyone verifies each other. Higher levels of verification are available through Bitu: if enough Bitu-verified users vouch for you, you pass Bitu verification.

Idena: You play a CAPTCHA game at a specific time (to prevent people from participating multiple times); part of the CAPTCHA game involves creating and verifying CAPTCHAs, and then using those CAPTCHAs to verify others.

Circles: Existing Circles users vouch for you. What makes Circles unique is that it does not attempt to create a globally verifiable ID; instead, it creates a graph of trust relationships, where someone's trustworthiness can only be interpreted from your own position in the graph.

How does Worldcoin work?

Each Worldcoin user installs an app on their phone, which generates a private and public key, much like an Ethereum wallet. They then visit an Orb in person. The user looks into the Orb's camera while showing it a QR code generated by the Worldcoin app, which contains their public key. The Orb scans the user's eyes and uses sophisticated hardware scanning and machine-learned classifiers to verify two things:

1) The user is a real person

2) The user's iris does not match the iris of any user who has previously used the system

If both checks pass, the Orb signs a message approving a specialized hash of the user's iris scan. The hash is uploaded to a database (currently a centralized server, to be replaced by a decentralized on-chain system once they are confident the hashing mechanism works). The system does not store full iris scans, only the hashes, which are used to check uniqueness. From that point on, the user has a World ID.

A World ID holder can prove they are a unique human by generating a ZK-SNARK proving that they hold the private key corresponding to some public key in the database, without revealing which key that is. So even if someone re-scans your iris, they cannot see any actions that you have taken.
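
The ZK-SNARK itself is beyond the scope of a short sketch, but the statement it proves can be made concrete. In the toy code below (illustrative, not Worldcoin's actual circuit), registered public keys are leaves of a Merkle tree, and a membership proof shows that "my key is under this root". In a real Semaphore-style system the Merkle path is a private witness inside the SNARK, so the verifier learns nothing about which leaf is yours; here it is shown in the clear purely to make the structure visible.

```python
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

# Toy illustration of the statement a Semaphore-style membership proof makes.
# A real system proves this inside a ZK-SNARK so the verifier learns nothing
# about WHICH leaf (public key) is yours; the path below is in the clear
# purely to make the statement concrete.

def merkle_root(leaves):
    level = [H(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:           # duplicate the last node on odd levels
            level.append(level[-1])
        level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, index):
    """Sibling hashes from leaf to root (the witness a SNARK keeps secret)."""
    level, path = [H(leaf) for leaf in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        path.append((level[sibling], index % 2))
        level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify_membership(root, leaf, path):
    node = H(leaf)
    for sibling, leaf_is_right in path:
        node = H(sibling, node) if leaf_is_right else H(node, sibling)
    return node == root

keys = [b"alice-pk", b"bob-pk", b"carol-pk", b"dave-pk"]
root = merkle_root(keys)
proof = merkle_path(keys, 1)                      # Bob's secret witness
assert verify_membership(root, b"bob-pk", proof)  # "my key is in the registry"
```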

What are the main issues with Worldcoin's construction?

Four major risks stand out:

  • Privacy. The registry of iris scans may leak information. At the very least, if someone else scans your iris, they can check it against the database to determine whether you have a World ID, and iris scans might reveal more information than that.
  • Accessibility. World IDs will not be reliably accessible to everyone unless there are enough Orbs for anyone to reach one.
  • Centralization. The Orb is a hardware device, and we have no way to verify that it was constructed correctly and has no backdoors. So even if the software layer is perfect and fully decentralized, the Worldcoin Foundation still has the ability to insert a backdoor into the system, letting it create arbitrarily many fake human identities.
  • Security. Users' phones may be hacked, users may be coerced into scanning their irises while presenting a public key that belongs to someone else, and there is the possibility of 3D-printed "fake people" passing the iris scan and obtaining World IDs.

It is important to distinguish between (i) issues specific to Worldcoin's choices, (ii) issues that any biometric proof of personhood will inevitably have, and (iii) issues that any general proof of personhood will have.

For example, registering for Proof of Humanity means publishing a video of your face on the internet. Joining a BrightID verification party does not expose your face quite that publicly, but it still exposes who you are to many people. Joining Circles publicly exposes your social graph.

In contrast, Worldcoin does much better at protecting privacy. On the other hand, Worldcoin depends on specialized hardware, which introduces the challenge of trusting the Orb manufacturer to have constructed the Orbs correctly, a challenge that does not exist in Proof of Humanity, BrightID, or Circles. It is even conceivable that in the future someone other than Worldcoin will create a different specialized-hardware solution with different trade-offs.

How do biometric proof of personhood solutions address privacy issues?

The most obvious and largest potential privacy leak of any proof-of-personhood system is linking every action a person takes to their real-world identity. Such a leak is severe, arguably unacceptably so, but fortunately zero-knowledge proof technology solves this problem easily.

Instead of directly signing with the private key whose corresponding public key is in the database, a user can make a ZK-SNARK proving that they possess the private key for some public key in the database, without revealing which one. This can be done with general-purpose tools like Sismo (see here for the Proof of Humanity-specific implementation), and Worldcoin has its own built-in implementation. It is worth giving credit to the crypto-native proofs of personhood here: they take this basic anonymization step seriously, while essentially all centralized identity solutions fail to do so.

A more subtle but still significant privacy leak is the mere existence of a public registry of biometric scans. In the case of Proof of Humanity, this is a lot of data: you get a video of every participant, making it very clear to anyone who cares to investigate who all the Proof of Humanity participants are. In Worldcoin, the leak is much more limited: the Orb locally computes and publishes only a hash of each iris scan. This hash is not a conventional hash like SHA-256; it is a specialized algorithm based on machine-learned Gabor filters that deals with the inherent inexactness of any biometric scan and ensures that successive hashes of the same person's iris produce similar outputs.

Chart: blue shows the percentage of bits that differ between two scans of the same person's iris; orange shows the percentage of bits that differ between the irises of two different people.

These iris hashes leak only a small amount of data. If an adversary can forcibly (or covertly) scan your iris, they can compute your iris hash themselves and check it against the database of iris hashes to determine whether you are in the system. This ability to check whether someone is registered is something the system itself needs in order to prevent people from registering more than once, but it could always be abused.
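
A toy version of the uniqueness check this implies might look as follows (the code length and threshold below are invented for illustration and are not Worldcoin's actual parameters): two scans are treated as the same eye when the fraction of differing bits is small.

```python
# Toy uniqueness check in the spirit the article describes: iris codes are
# bit strings, two scans of the same eye differ in few bits, and scans of
# different eyes differ in roughly half of them.

THRESHOLD = 0.30   # fraction of differing bits below which we say "same eye"

def hamming_fraction(code_a: int, code_b: int, nbits: int) -> float:
    """Fraction of bit positions where the two codes disagree."""
    return bin(code_a ^ code_b).count("1") / nbits

def is_duplicate(new_code: int, registered_codes, nbits: int = 256) -> bool:
    """Reject a registration if the new scan matches any existing one."""
    return any(
        hamming_fraction(new_code, existing, nbits) < THRESHOLD
        for existing in registered_codes
    )
```

For distributions like those in the chart above, a threshold sits comfortably between the "same person" cluster and the "different people" cluster, which is why a fuzzy match can double as a duplicate check.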

Moreover, iris hash values may leak a certain amount of medical data (including gender, race, and possibly medical conditions), but this leakage is far less than the data that almost all other large-scale data collection systems currently capture (for example, even street cameras).

Overall, in my view, storing iris hash values is sufficient to protect privacy. If others disagree with this assessment and decide to design a system with stronger privacy, there are two ways to achieve this:

1) If the iris hashing algorithm can be improved so that the difference between two scans of the same person is much smaller (for example, reliably under 10% of bits flipped), then instead of storing the full iris hash, the system can store a smaller number of error-correction bits (see: fuzzy extractors). If the difference between two scans is under 10%, the number of bits that must be published shrinks by at least a factor of five.
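
For intuition, here is a toy "code-offset" secure sketch, the construction underlying fuzzy extractors, using a 5x repetition code (real systems would use far stronger codes and longer strings; this only illustrates why publishing error-correction data reveals fewer bits than publishing the hash itself):

```python
import secrets

# Toy code-offset secure sketch: publish iris XOR codeword(random key).
# The published sketch reveals only the "noise pattern" relative to a
# random codeword, so it leaks fewer bits than the iris code itself.

R = 5  # each key bit is repeated R times; corrects up to 2 flips per group

def encode(bits):                       # repetition-code encoder
    return [b for b in bits for _ in range(R)]

def decode(bits):                       # majority vote per group of R
    return [int(sum(bits[i:i + R]) > R // 2) for i in range(0, len(bits), R)]

def sketch(iris_bits):
    """Publish iris XOR codeword(random key); the key itself stays secret."""
    key = [secrets.randbelow(2) for _ in range(len(iris_bits) // R)]
    return [i ^ c for i, c in zip(iris_bits, encode(key))]

def recover_key(noisy_iris_bits, public_sketch):
    """A fresh, slightly different scan still recovers the same key."""
    return decode([i ^ s for i, s in zip(noisy_iris_bits, public_sketch)])

iris = [secrets.randbelow(2) for _ in range(40)]     # 40-bit toy iris code
pub = sketch(iris)
key = recover_key(iris, pub)
noisy = iris.copy()
noisy[3] ^= 1                                        # two bits of scan noise,
noisy[17] ^= 1                                       # in different groups
assert recover_key(noisy, pub) == key                # same key recovered
```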

2) If we want to go further, we could store the database of iris hashes inside a multi-party computation (MPC) system that only Orbs can access (with rate limits), making the data completely inaccessible to everyone else, at the cost of much greater protocol complexity and the social complexity of choosing the MPC participants. The benefit of this approach is that users would be unable to prove a link between two different World IDs they held at different times, even if they wanted to.

Unfortunately, these techniques do not apply to Proof of Humanity, because Proof of Humanity requires each participant's full video to be publicly available, so that it can be challenged when there are signs that it is fake (including AI-generated fakes) and investigated in more detail in such cases.

Overall, while staring into an Orb so it can deep-scan your eyeballs may feel dystopian, specialized hardware systems do seem able to do a decent job of protecting privacy. The flip side, however, is that specialized hardware systems bring much greater centralization concerns. So we seem to be stuck in a dilemma: we must trade one set of values off against another.

What accessibility issues do biometric identification systems face?

Specialized hardware brings accessibility problems because specialized hardware is not very accessible. Today, somewhere between 51% and 64% of people in sub-Saharan Africa own smartphones, a figure expected to rise to 87% by 2030. But while there are billions of smartphones in the world, there are only a few hundred Orbs. Even with much larger-scale distributed manufacturing, it would be hard to reach a world where there is an Orb within five kilometers of everyone.

It is worth noting that many other forms of proof of personhood have even more severe accessibility issues. It is difficult to join a social graph-based proof of personhood system unless you already know someone in the social graph. This makes such systems easily limited to single communities within a single country.

Even centralized identity verification systems have learned this lesson: India's Aadhaar ID system is based on biometric technology because it allows for rapid inclusion of large populations while avoiding a lot of duplicate and fraudulent accounts (thus saving significant costs). Of course, the Aadhaar system as a whole is much weaker in terms of privacy protection than any large-scale proposals put forth by the cryptocurrency community.

From an accessibility standpoint, the best-performing systems are actually ones like Proof of Humanity, which you can sign up for using nothing but a smartphone.

What are the centralization issues with biometric identification systems?

  1. Centralization risk in the system's top-level governance (especially the body that makes final high-level decisions when different participants in the system disagree on subjective judgments).

  2. Centralization risk unique to systems that use specialized hardware.

  3. Centralization risk from proprietary algorithms being used to determine who is an authentic participant.

Any proof-of-personhood system must contend with (1), except perhaps a system in which the set of accepted IDs is entirely subjective. If a system uses incentives priced in an outside asset (like ETH, USDC, or DAI), then it cannot be fully subjective, and governance risk becomes unavoidable.

The second risk is much greater for Worldcoin than for Proof of Humanity (or BrightID), because Worldcoin depends on specialized hardware and the other systems do not.

The third risk is particularly present in "logically centralized" systems, where a single system does the verification, unless all the algorithms are open-source and we have assurance that they are actually running the code they claim to be. For systems that rely purely on users verifying other users (like Proof of Humanity), it is not a risk.

How does Worldcoin address the centralization issue of hardware?

Currently, an entity called Tools for Humanity, an affiliate of Worldcoin, is the only organization making Orbs. However, most of the Orb's source code is public: the hardware specifications are in this GitHub repository, and the remaining source code is expected to be released soon. The license is another of those "source-available, but only open-source after four years" licenses similar to the Uniswap BSL, and in addition to preventing forking it prohibits behavior they consider unethical, specifically listing mass surveillance and three international declarations of civil rights.

The team's stated goal is to allow and encourage other organizations to create Orbs and, over time, to transition from Orbs created by Tools for Humanity to a DAO that approves and manages which organizations can make Orbs the system recognizes.

This design has two problems:

1) It may ultimately fail to achieve true decentralization, due to a familiar pitfall of federated protocols: over time, one manufacturer may come to dominate in practice, re-centralizing the system. The governance body can limit how many valid Orbs each manufacturer may produce, but that requires careful management, and it puts significant pressure on governance to simultaneously decentralize and monitor the ecosystem against threats. That is much harder than, say, a static DAO that only handles top-level dispute resolution.

2) It is essentially impossible to ensure the security of this distributed manufacturing mechanism, with two risks present:

Extreme vulnerability to bad Orb manufacturers: if even one Orb manufacturer is malicious or hacked, it can generate an unlimited number of fake iris-scan hashes and give them World IDs.

Government restrictions on Orbs: governments that do not want their citizens to participate in the Worldcoin ecosystem can prohibit Orbs from entering their countries. Moreover, they can even force citizens to undergo iris scans to obtain their accounts, and citizens will have no recourse.

To enable the system to more effectively resist attacks from bad Orb manufacturers, the Worldcoin team suggests conducting regular audits of Orbs to verify whether the manufacturing process is correct, whether key hardware components are made according to specifications, and whether they have been tampered with afterward. This is a challenging task: it is essentially similar to the International Atomic Energy Agency (IAEA) nuclear inspectorate, but for Orbs. We hope that even in the case of imperfect auditing systems, the number of fake Orbs can be significantly reduced.

To prevent any bad Orbs from slipping through the cracks and harming the system, a second mitigation is needed: World IDs registered through different Orb manufacturers, and ideally through different individual Orbs, should be distinguishable from one another. This information can be private, stored only on the World ID holder's device, so long as it can be proven when necessary. That way, the ecosystem can respond to (inevitable) attacks by removing individual Orb manufacturers, or even individual Orbs, from the whitelist as needed. If we see the North Korean government going around forcing people to scan their eyes, those Orbs, and any accounts produced by them, could be immediately and retroactively disabled.
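
A sketch of the bookkeeping this requires (an assumed design for illustration, not Worldcoin's actual schema): each registration carries provenance, and validity is evaluated against revocation lists at both the Orb and the manufacturer level.

```python
from collections import namedtuple

# Hedged sketch of provenance-based revocation: keep enough metadata
# (possibly client-side, proven on demand) to disable registrations by
# individual Orb or by whole manufacturer after an attack is discovered.

Registration = namedtuple("Registration", ["world_id", "orb_id", "maker"])

class ProvenanceLedger:
    def __init__(self):
        self.registrations = []
        self.revoked_orbs = set()
        self.revoked_makers = set()

    def register(self, world_id, orb_id, maker):
        self.registrations.append(Registration(world_id, orb_id, maker))

    def revoke_orb(self, orb_id):
        self.revoked_orbs.add(orb_id)     # e.g. a single compromised device

    def revoke_maker(self, maker):
        self.revoked_makers.add(maker)    # e.g. a malicious manufacturer

    def is_valid(self, world_id):
        """A World ID stays valid only if its whole provenance chain does."""
        for r in self.registrations:
            if r.world_id == world_id:
                return (r.orb_id not in self.revoked_orbs
                        and r.maker not in self.revoked_makers)
        return False
```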

General security issues with proof of personhood

In addition to the unique issues posed by the Worldcoin system, there are some problems that affect general proof of personhood designs. The main issues I can think of are as follows:

  1. 3D-printed fakes: People can use AI to generate photos of fake humans or even 3D print fake humans that are credible enough for the Orb software to accept. As long as there are groups doing this, they can generate an unlimited number of identities.
  2. Selling IDs: someone can provide another person's public key during registration instead of their own, giving that person control of the registered ID in exchange for money. This seems to be happening already, and in addition to selling, IDs may also be rented out.
  3. Hacking phones: If a person's phone is hacked, the hacker can steal the keys controlling their World ID.
  4. Government coercion to collect IDs: governments can force their citizens to verify while presenting a QR code belonging to the government. In this way, a malicious government could acquire millions of IDs. In biometric systems this could even be done covertly: a government could use disguised Orbs to extract World IDs from everyone entering the country at passport control.

(1) is a problem unique to biometric proof-of-personhood systems. (2) and (3) are common to both biometric and non-biometric designs, and (4) is common to both as well, although the techniques needed to mitigate it differ between the two cases; in this section I will focus on mitigations in the biometric case.

These are quite serious problems, some of which have been adequately addressed in existing protocols, others can be mitigated through future improvements, and some seem to be fundamental limitations.

How to address the issue of 3D-printed fakes?

For Worldcoin, this risk is much smaller than for systems like Proof of Humanity: an in-person scan can examine many features of a person and is hard to fake, compared with a carefully fabricated video. Specialized hardware is inherently harder to fool than ordinary hardware, and ordinary hardware is harder to fool than a digital algorithm verifying remotely submitted photos and videos.

Could someone eventually 3D-print something that fools even specialized hardware? Probably. I expect that at some point the tension between openness and security will grow: open-source AI algorithms are inherently more vulnerable to adversarial machine learning, while closed-source algorithms are better protected but make it hard to verify that nothing malicious was introduced during their training. Perhaps future ZK-ML technology could give us the best of both worlds; then again, even the best AI algorithm may someday be fooled by the best 3D-printed fake.

How to prevent ID selling?

In the short term, preventing this kind of ID selling is quite hard, because most people in the world have not even heard of proof-of-personhood protocols: if you tell them they can get $30 by holding up a QR code and scanning their eyes, they will happily do it. Once more people understand what these protocols are, a fairly simple mitigation becomes possible: let registered ID holders re-register, cancelling their previous ID. This makes ID selling far less credible, because the seller can simply go re-register and cancel the ID they just sold. However, getting there requires the protocol to be widely known and the Orbs to be easy enough to reach that on-demand registration is realistic.
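
A minimal sketch of that mitigation (names and structure invented for illustration): the iris hash acts as the stable anchor, so registering again displaces, and thereby invalidates, whatever key was registered before, including one that was sold.

```python
# Hedged sketch of re-registration: the biometric uniqueness tag, not the
# key, is the durable identity anchor, so a fresh scan always wins.

class ReRegisterableRegistry:
    def __init__(self):
        self.key_by_iris = {}     # iris_hash -> currently valid public key
        self.revoked = set()      # every key that was ever displaced

    def register(self, iris_hash, public_key):
        old = self.key_by_iris.get(iris_hash)
        if old is not None:
            self.revoked.add(old)          # cancels a sold or stolen ID
        self.key_by_iris[iris_hash] = public_key

    def is_valid(self, public_key):
        return (public_key not in self.revoked
                and public_key in self.key_by_iris.values())

registry = ReRegisterableRegistry()
registry.register("iris-123", "key-sold-to-buyer")   # victim sells their ID
registry.register("iris-123", "fresh-key")           # ...then re-registers
assert not registry.is_valid("key-sold-to-buyer")    # the buyer's ID is dead
```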

This is also one reason why building a UBI coin into a proof-of-personhood system is valuable: a UBI coin gives people an easily understandable incentive to learn about the protocol and sign up, and to immediately re-register if they previously registered on someone else's behalf. Re-registration also effectively mitigates the risk of hacked phones.

Can we prevent coercion in biometric identification systems?

It depends on what kind of coercion we are talking about. Possible forms of coercion include:

  • Governments scanning people's eyes (or faces, etc.) at border controls and other routine government checkpoints, registering citizens and frequently re-registering them.
  • Governments prohibiting the use of Orbs domestically to prevent people from independently re-registering.
  • Individuals purchasing IDs and then threatening others that if their ID becomes invalid due to re-registration, they will harm the person who re-registers.
  • Applications (possibly government-run) that require people to sign in directly with their public key, letting the operator see the corresponding biometric scan and thus the link between the user's current ID and any future ID they obtain by re-registering. A common worry is that this makes it too easy to create a permanent record that follows a person for their whole life.

It seems very hard to prevent these situations entirely, especially in the hands of unsophisticated users. Users could leave their country to (re-)register at an Orb in a safer country, but that is a difficult and expensive process. In a genuinely hostile legal environment, seeking out an independent Orb is too difficult and risky.

What is feasible is to make such abuses harder to pull off and easier to detect. Requiring users to speak a specific phrase during registration is a good example: it may be enough to prevent covert scanning, and it forces coercion to be much more overt. The registration phrase could even include a statement affirming that the respondent knows they have the right to re-register independently and may receive a UBI coin or other rewards. If coerced mass registrations are detected, the devices used for them could have their access revoked. To stop applications from linking people's current and prior IDs and building a permanent record, the default proof-of-personhood app could lock the user's keys in trusted hardware, preventing any application from using the keys directly without the anonymizing ZK-SNARK layer in between. If a government or application developer wanted to get around this, they would have to mandate the use of their own custom app.

Combining these techniques with vigilance about ID abuse, it seems possible to lock out truly hostile regimes while keeping merely mediocre regimes honest (as much of the world is). This can be done either by a project like Worldcoin or Proof of Humanity maintaining its own bureaucracy for the task, or by revealing more information about how an ID was registered (for example, in Worldcoin, which Orb it came from) and leaving the classification task to the community.

How to prevent ID renting (e.g. for vote selling)?

Re-registration does not prevent ID renting. In some applications this is not a problem: the price of renting the right to collect the day's UBI coin will simply be the value of that day's UBI coin. But in applications like community voting, vote selling is a major problem.

Systems like MACI can prevent you from credibly selling your vote, by letting you later cast another vote that invalidates the earlier one, in such a way that no one can tell whether you actually did so. However, if the briber controls the key you received at registration time, this does not help.
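
To see why the mechanism defeats vote buying when the voter does control their key, here is a toy version of the idea (the encryption and the coordinator's zero-knowledge tally proof that real MACI relies on are elided; only the key-change logic is shown):

```python
# Toy of the MACI idea: a later key change plus revote silently invalidates
# any earlier vote, so a vote buyer can never tell whether what they bought
# still counts. Real MACI submits all of this encrypted to a coordinator.

class ToyMACI:
    def __init__(self):
        self.current_key = {}   # voter_id -> currently valid voting key
        self.votes = {}         # voter_id -> (key used, choice)

    def register(self, voter_id, key):
        self.current_key[voter_id] = key

    def vote(self, voter_id, key, choice):
        self.votes[voter_id] = (key, choice)       # visible only encrypted

    def change_key(self, voter_id, old_key, new_key):
        if self.current_key.get(voter_id) == old_key:
            self.current_key[voter_id] = new_key   # old key is now useless

    def tally(self):
        # only votes cast with the voter's CURRENT key are counted
        return [c for v, (k, c) in self.votes.items()
                if self.current_key.get(v) == k]

m = ToyMACI()
m.register("v1", "kA")
m.vote("v1", "kA", "candidate-1")     # the vote the briber paid for
m.change_key("v1", "kA", "kB")        # voter secretly rotates their key...
m.vote("v1", "kB", "candidate-2")     # ...and votes again
assert m.tally() == ["candidate-2"]   # the bought vote never counts
```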

I believe there are two solutions:

  1. Run the whole application inside an MPC: this would also cover the re-registration process, so that when a person registers with the MPC, the MPC assigns them an ID that is separate from, and not linkable to, their proof-of-personhood ID, and when a person re-registers, only the MPC knows which account to deactivate. This prevents users from proving facts about their actions, because every important step is done using private information known only to the MPC.
  2. Decentralized registration ceremonies: essentially, an in-person key-registration protocol requiring, say, four randomly selected local participants to complete together. This ensures registration is a trusted process that an attacker cannot spy on along the way.

In practice, social graph-based systems may do better here, because they can create local, decentralized registration processes as a byproduct of how they work.

Biometric technology vs. social graph-based verification

Aside from biometrics, the main other contender for proof of personhood so far is social graph-based verification. Social graph-based verification systems all rest on the same principle: if a large number of existing verified identities attest to the validity of your identity, then you are probably valid and should get verified status too.

If only a few real users (accidentally or maliciously) verify a fake user, then basic graph theory techniques can be used to set an upper limit on the number of fake users verified by the system.
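
One classic technique in this family, sketched below in the spirit of SybilRank-style trust propagation (not the exact algorithm of any specific project): trust flows outward from known-verified seed users, with a fraction teleporting back to the seeds each round, so a sybil region connected to the honest region by only a few attack edges can only ever soak up the limited trust crossing those edges.

```python
# Sketch of seed-based trust propagation (personalized PageRank style).
# Sybil nodes reachable only through a few "attack edges" end up with
# low trust, which bounds how many fakes a few bad vouches can admit.

def trust_scores(graph, seeds, rounds=20, alpha=0.15):
    """graph: node -> list of neighbors; seeds: known-verified humans."""
    trust = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in graph}
    for _ in range(rounds):
        # each round, alpha of the total trust teleports back to the seeds
        nxt = {n: (alpha / len(seeds) if n in seeds else 0.0) for n in graph}
        for node, neighbors in graph.items():
            for nb in neighbors:
                nxt[nb] += (1 - alpha) * trust[node] / len(neighbors)
        trust = nxt
    # normalize by degree so hubs aren't favored just for being hubs
    return {n: trust[n] / max(len(graph[n]), 1) for n in graph}

# Honest triangle A-B-C (A is a seed); sybils X, Y hang off one attack edge.
g = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "X"],
     "X": ["C", "Y"], "Y": ["X"]}
s = trust_scores(g, seeds={"A"})
assert min(s["A"], s["B"], s["C"]) > max(s["X"], s["Y"])
```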

Supporters of social graph-based verification often describe it as a better alternative to biometric technology for several reasons:

  • It does not rely on specialized hardware, making it easier to deploy.
  • It avoids a long arms race between 3D fake manufacturers and Orbs.
  • It does not require the collection of biometric data, which is more favorable for privacy.
  • It may be friendlier to pseudonymity: if someone chooses to split their online life into multiple separate identities, both identities can potentially be verified (though maintaining multiple genuinely separate identities sacrifices network effects and costs enough that it is not something attackers can do at scale).
  • Biometric methods provide a binary "is human / is not human" score, which is fragile: people who are accidentally rejected could end up with no UBI at all and potentially no ability to participate in online life. Social graph-based methods can give a more nuanced numerical score, which may be mildly unfair to some participants but is unlikely to shut anyone out of online life completely.

For these arguments, my view is generally supportive. These are indeed the real advantages of social graph-based methods and should be taken seriously. However, social graph-based methods also have shortcomings, which are worth considering:

  • Initial connections: To join a social graph-based system, users must know someone in the graph. This poses challenges for large-scale applications and may potentially exclude entire regions that are unlucky during the initial onboarding process.
  • Privacy: While social graph-based methods can avoid collecting biometric data, they often end up leaking information about a person's social relationships, which may lead to greater risks. Of course, zero-knowledge technologies can mitigate this issue (for example, see the suggestions proposed by Barry Whitehat), but the inherent interdependencies in the graph and the need for mathematical analysis of the graph make it difficult to achieve the same level of data concealment as biometric technology.
  • Inequality: Everyone can only have one biometric ID, but a social butterfly can leverage their connections to generate many IDs. Fundamentally, social graph-based systems can flexibly provide multiple pseudonyms to those who genuinely need this feature (like event organizers), which may also mean that more powerful and well-connected individuals can obtain more pseudonyms than those with less power and connections.
  • Risk of collapsing into centralization: most people are too lazy to spend time reporting to an internet application who is and is not a real person. So over time, the system may come to favor easy enrollment paths that depend on centralized authorities, and the social graph of the system's users will de facto become the graph of which countries recognize which people as citizens, giving us centralized KYC with needless extra steps.

In the real world, is proof of personhood compatible with pseudonyms?

In principle, proof of personhood is compatible with all kinds of pseudonyms. Applications can be designed so that one person with one proof of personhood can create up to, say, five profiles, leaving room for pseudonymous accounts. One could even use a quadratic formula in which N accounts cost $N²: one account costs $1, two cost $4, and ten cost $100. But would they do this?

A pessimist, however, might argue that trying to create a more privacy-friendly form of ID and hoping it gets adopted in the right way is unrealistic, because the powers that be do not care about the privacy of ordinary people: if a powerful actor gets a tool that can be used to learn much more about people, they will use it that way. In such a world, the argument goes, the only realistic approach is, unfortunately, to throw sand in the gears of any identity solution and defend a world of full anonymity and small high-trust digital communities.

I fully understand the reasoning behind this approach, but I worry that even if it succeeds, it would lead to a world where no one has any way to counter the concentration of wealth and governance power, since one person can always pretend to be ten thousand, and such concentration is easily captured by those already in power. Instead, I favor a moderate approach: vigorously advocate for proof-of-personhood solutions with strong privacy, potentially even including a "registering N accounts costs $N²" mechanism if needed, and create something with privacy-friendly values that has a chance of being accepted by the wider world.

So my view is that, in the absence of an ideal form of proof of personhood, we have three different approaches, each with its own unique advantages and disadvantages, as the comparison chart below shows:

Our ideal approach is to treat these three techniques as complementary and combine them all. As India's Aadhaar has shown, specialized-hardware biometrics have the advantage of being secure at scale; their weakness is decentralization, which can be addressed by holding individual Orbs accountable. General-purpose biometrics can be adopted very cheaply today, but their security is declining rapidly and may only keep working for another one or two years. Social graph-based systems can be bootstrapped from a few hundred people socially close to a founding team, but they must constantly trade off between ignoring most regions outright and enrolling them in ways that are vulnerable to attack; a social graph-based system seeded from tens of millions of biometric ID holders, however, could actually work. Biometric bootstrapping may work better in the short term, while social graph-based techniques may be more robust in the long term and could take on a larger share of the responsibility as their algorithms improve.

A feasible hybrid solution

Every team is capable of making many mistakes, and there are inevitably strong tensions between commercial interests and the needs of the broader community, so we must stay vigilant. As a community, we should push all participants out of their comfort zones on open-sourcing their technology, demand third-party audits and even third-party-written software, and insist on other checks and balances. We also need more alternatives within each of the three categories.

At the same time, we must also commend the work that has already been done: many teams running these systems have shown their commitment to privacy, which is far more serious than any identity system run by governments or large corporations, and this is a quality we should promote.

Building an effective and reliable proof-of-personhood system, especially one managed by people far removed from the cryptocurrency community, looks quite challenging. I do not at all envy those attempting the task, and it will likely take years to find a formula that works. In principle, even though every implementation carries risks, the concept of proof of personhood remains extremely valuable; and a world with no proof of personhood at all cannot avoid risks either: such a world seems more likely to be dominated by centralized identity solutions, money, small closed communities, or some combination of the three. I look forward to seeing more progress on every type of proof of personhood, and I hope to see the different approaches eventually come together into a coherent whole.
