a16z: 7 Recommendations to Avoid Token Design Flaws

a16z
2023-04-23 11:00:06
Most effective token models have unique elements tailored to their objectives, while most flawed token designs share common bugs.

Original Title: 7 Sanity Checks Before Designing a Token

Author: Guy Wuollet

Compiled by: Katie Gu, Odaily Planet Daily

Tokens are a powerful new primitive that can be defined in various ways. The token design space is vast, but we are still in the early stages of exploration.

In fact, many teams struggle to find the "right" token design for their projects. The industry lacks battle-tested design frameworks, so newcomers run into the same challenges again and again. Fortunately, there are (a few) early examples of successful token design. Most effective token models have unique elements tailored to their goals, but most flawed token designs share common bugs. This article therefore discusses why we should think in terms of token research and design, not just "tokenomics," and outlines seven tips for avoiding common pitfalls.

#1 Clarify the Goals of Token Design

The biggest issue in token design is building a complex token model before the goals are clear. The first step should be to identify the goals and make sure the entire team fully understands them: What are they? Why do they matter? What do you actually want to achieve? Failing to define goals rigorously often leads to redesigns and wasted time. Clearly defined goals also help you avoid "designing a token economy for the sake of designing a token economy," a common failure mode in tokenomics.

Moreover, the goals should center on the token itself, a point that is often overlooked. Examples of clear goals include:

  • Designing a token model for a game that achieves optimal scalability and supports modeling.

  • A DeFi protocol aiming to design a token model that reasonably allocates risk among participants.

  • Designing a reputation protocol in which money cannot directly buy reputation (e.g., by separating liquidity from reputation signals).

  • Designing a storage network that ensures file availability with low latency.

  • Designing a staking network that provides maximum economic security.

  • Designing a governance mechanism that elicits true user preferences or maximizes participation.

There are many such examples. Design the token to support the use case and achieve the goal, not the other way around.

So how do you start defining a clear goal? Well-defined goals often stem from the "project mission." While the "project mission" is often high-level and abstract, the goals should be specific and distilled to their most basic form.

Let’s take EIP-1559 as an example. Roughgarden articulated a clear goal for EIP-1559: "EIP-1559 should improve user experience through simple fee estimation in the form of 'obviously best bids' outside of periods of rapid demand growth."

He then posed another clear framing: "Can we redesign Ethereum's transaction fee mechanism so that setting a transaction's gas price is as easy as shopping on Amazon? Ideally, a posted-price mechanism that offers each user a take-it-or-leave-it gas price."

What these two examples have in common is that they state a high-level goal, offer a relevant analogy to help others understand it, and then go on to outline the design that best supports that goal.
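Roughgarden's posted-price goal was eventually realized in EIP-1559's base-fee update rule. Below is a minimal Python sketch of that rule, using the real protocol constants but omitting edge-case details such as the minimum per-block increment:

```python
# Simplified sketch of the EIP-1559 base-fee update rule, which realizes the
# "posted price" goal: the protocol publishes a base fee that each user can
# simply take or leave. Constants follow the EIP-1559 specification.
ELASTICITY = 2                # a block may use up to 2x the gas target
MAX_CHANGE_DENOMINATOR = 8    # base fee moves at most 12.5% per block

def next_base_fee(base_fee: int, gas_used: int, gas_target: int) -> int:
    """Return the base fee for the next block (integer wei, as on-chain)."""
    if gas_used == gas_target:
        return base_fee
    delta = gas_used - gas_target
    # The change is proportional to how far the block deviated from target,
    # capped at +/- 1/8 of the current base fee.
    adjustment = base_fee * abs(delta) // gas_target // MAX_CHANGE_DENOMINATOR
    return base_fee + adjustment if delta > 0 else base_fee - adjustment

# A completely full block (2x target) raises the fee by 12.5%;
# an empty block lowers it by 12.5%.
print(next_base_fee(100_000_000, 30_000_000, 15_000_000))  # 112500000
print(next_base_fee(100_000_000, 0, 15_000_000))           # 87500000
```

Because the fee is a posted price rather than an auction bid, fee estimation becomes trivial for users outside of periods of rapidly rising demand, which is exactly the stated goal.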

#2 Evaluate Existing Work Based on Fundamental Principles

When creating something new, it’s a good idea to start by studying what already exists. When evaluating existing protocols and literature, they should be assessed objectively based on their technical merits.

Token models are often evaluated based on the token's price or the popularity of the associated project. These factors may have little to do with whether the token model achieves its stated goals. Judging token models by valuation, popularity, or other shallow metrics can lead builders astray: if you assume other token models work when they do not, you may build a fundamentally flawed token model of your own.

#3 Clarify Your Assumptions

Clearly articulate your assumptions. When you are focused on building a token, it is easy to take fundamental assumptions for granted, and just as easy to misstate the assumptions you are actually making.

Consider a new protocol that assumes its hardware bottleneck is computation speed. Encoding that assumption in the token model (e.g., by limiting the hardware costs required to participate in the protocol) helps align the design with the expected behavior.

However, if the protocol and token designers do not clearly express their assumptions, or if the assumptions they express are incorrect, then participants who realize this mismatch may extract value from the protocol. Hackers are often those who understand the system better than the original builders.

Clarifying your assumptions makes it easier for others to understand your token design and ensures it operates as intended. If you do not clarify your assumptions, you cannot validate them.

#4 Validate Your Assumptions

There is a saying: "It's not what you don't know that gets you into trouble. It's what you know for sure that just ain't so."

Token models often make a series of assumptions. This approach partly stems from the Byzantine fault-tolerant systems research that inspired blockchains: the system makes an assumption and guarantees certain outputs if the assumption holds. For example, Bitcoin guarantees liveness in a synchronous network model and guarantees consistency as long as a majority (51%) of the network's hash power is honest. Several smaller blockchains have suffered 51% attacks, violating the honest-majority assumption the chain needs to function correctly.
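The honest-majority assumption can be made concrete with the gambler's-ruin analysis from the Bitcoin whitepaper: an attacker controlling fraction q of the hash power catches up from a z-block deficit with probability (q/p)^z when q < p, and with certainty otherwise. A minimal sketch (simplified: it omits the whitepaper's Poisson term for the attacker's progress during the confirmation window):

```python
def catch_up_probability(q: float, z: int) -> float:
    """Probability that an attacker with fraction q of total hash power
    ever overtakes the honest chain from z blocks behind
    (gambler's-ruin result from the Bitcoin whitepaper)."""
    p = 1.0 - q                  # honest fraction of hash power
    if q >= p:
        return 1.0               # a majority attacker always catches up
    return (q / p) ** z

# With 10% of hash power, a 6-block deficit is nearly hopeless for the
# attacker, but at 51% the honest-majority assumption fails outright.
print(catch_up_probability(0.10, 6))
print(catch_up_probability(0.51, 6))  # 1.0
```

The discontinuity at the 50% threshold is the point: the security guarantee holds everywhere the assumption holds, and nowhere it does not.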

Token designers can validate their assumptions in various ways. Rigorous statistical modeling, often in the form of agent-based models, can help test them. Assumptions about user behavior can often be validated by talking to users, and ideally by observing what people actually do rather than what they say they do. Incentivized testnets, which generate empirical results in a sandboxed environment, raise the odds of successful validation further. Formal verification or intensive audits also help ensure a codebase behaves as expected.
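As a sketch of what a minimal agent-based model looks like, the toy simulation below tests a hypothetical assumption, "at a 5% staking yield, at least half of holders will stake," against agents with randomly drawn opportunity costs. All names and numbers here are illustrative modeling choices, not real protocol parameters:

```python
import random

def simulate_staking(yield_rate: float, n_agents: int = 10_000,
                     seed: int = 0) -> float:
    """Fraction of agents that choose to stake at a given yield."""
    rng = random.Random(seed)
    staked = 0
    for _ in range(n_agents):
        # Each agent has a private required yield (its opportunity cost),
        # drawn from an assumed distribution between 0% and 10%.
        required = rng.uniform(0.0, 0.10)
        if yield_rate >= required:
            staked += 1
    return staked / n_agents

participation = simulate_staking(0.05)
print(f"participation at 5% yield: {participation:.1%}")
```

With the uniform opportunity-cost distribution assumed here, roughly half of the agents clear a 5% yield. The value of the exercise is that the assumption becomes testable code rather than a hidden belief, and the distribution itself can then be replaced with data from real user research.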

#5 Clarify "Abstraction Barriers"

"Abstraction barriers" are the interfaces between different layers of a system or protocol. They are used to separate different components of the system, allowing each component to be designed, implemented, and modified independently. Clear abstraction barriers are useful in all engineering fields, especially in software design, but are even more necessary for decentralized development and large teams building complex systems that individuals cannot fully understand.

In token design, the goal of clear abstraction barriers is to minimize complexity. Reducing the (internal) dependencies between the components of a token model leads to cleaner code, fewer bugs, and better token designs.

For example, many blockchains are built by large engineering teams. One team may make assumptions about hardware costs over a period and use that to determine how many miners contribute hardware to the blockchain at a given token price. If another team relies on the token price as a parameter but is unaware of the first team's assumptions about hardware costs, they can easily make contradictory assumptions.
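One way to make such an abstraction barrier concrete is to put shared assumptions behind a single typed interface that every team's module must consume, so two teams cannot silently bake in contradictory figures. The sketch below is purely illustrative; all names and numbers are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical abstraction barrier: economic assumptions live in one frozen,
# typed object. Any module that needs the token price or hardware cost must
# read it from here, so contradictory hard-coded values cannot creep in.
@dataclass(frozen=True)
class EconomicAssumptions:
    hardware_cost_usd: float   # assumed cost of one mining rig
    token_price_usd: float     # assumed token price over the period

def expected_miner_count(a: EconomicAssumptions, daily_reward_tokens: float,
                         amortized_days: float = 365.0) -> int:
    """Team A's model: miners join while daily revenue covers the
    amortized hardware cost."""
    daily_cost = a.hardware_cost_usd / amortized_days
    daily_revenue_pool = daily_reward_tokens * a.token_price_usd
    return int(daily_revenue_pool / daily_cost)

assumptions = EconomicAssumptions(hardware_cost_usd=3_650.0,
                                  token_price_usd=2.0)
# Team B reads token_price_usd from the same object instead of assuming
# its own number, so a change to the assumption propagates everywhere.
print(expected_miner_count(assumptions, daily_reward_tokens=500.0))  # 100
```

The `frozen=True` flag makes the shared assumptions immutable, which forces any change to go through an explicit, reviewable update rather than a local patch in one team's code.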

At the application layer, clear abstraction barriers are crucial for achieving composability. As more protocols combine, the ability to adapt, build, scale, and remix will only become more important. Greater composability brings greater possibilities but also greater complexity: when applications want to compose, they must understand the details of the protocols they are composing with.

Opaque assumptions and interfaces can lead to subtle bugs, as seen in many early DeFi protocols. Ambiguous abstraction barriers also increase the communication overhead between teams working on different components of a protocol, extending development time, and they add complexity that makes the protocol's mechanisms harder to fully understand.

By creating clear abstraction barriers, token designers can more easily predict how specific changes will affect each part of the token design. Clear abstraction barriers also make it easier to scale tokens or protocols and create a more inclusive and scalable builder community.

#6 Reduce Dependency on External Parameters

External parameters are not inherent to the system itself, but they affect its overall performance and success: for example, the cost of computational resources, transaction volume, or latency at the time the token model is created.

However, when a token model only works within a limited parameter range, unexpected behavior can arise. For example, a protocol that sells a service and offers rebates in the form of fixed token rewards may find that, if the token price rises unexpectedly, the value of the token reward exceeds the cost of the service. In that case it becomes profitable to buy unlimited services from the protocol simply to farm the token rewards.
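The failure mode is easy to see with concrete numbers (all hypothetical): a service priced at $10 with a fixed rebate of 5 tokens implicitly assumes the token stays below $2.

```python
# Hypothetical protocol: a $10 service rebated with a fixed 5 tokens.
# The design only works while the token trades below $2, an external
# parameter the protocol does not control.
def profit_per_purchase(token_price_usd: float,
                        service_cost_usd: float = 10.0,
                        reward_tokens: float = 5.0) -> float:
    """Buyer's profit from one purchase, counting the rebate as income."""
    return reward_tokens * token_price_usd - service_cost_usd

print(profit_per_purchase(1.0))  # -5.0 -> rebate works as intended
print(profit_per_purchase(3.0))  # 5.0  -> every purchase mints free money
```

Above the $2 break-even price, a rational actor buys the service in unlimited quantity purely for the rebate, draining the reward pool without contributing real demand.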

To give another example, decentralized networks often rely on cryptographic algorithms or computational puzzles that are difficult, but not impossible, to solve. The difficulty often depends on an exogenous variable, such as how fast a computer can compute a hash function or a zero-knowledge proof. Suppose a protocol assumes a given hash function can be computed at a certain speed and pays token rewards accordingly. If someone invents a faster way to compute the hash function, or simply commands resources disproportionate to their actual work in the system, they can collect unexpectedly large token rewards.
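The same arithmetic applies to hash-speed assumptions. The hypothetical reward schedule below is sized so that a miner at the design-time assumed speed earns about 100 tokens per hour; a hundred-fold hardware speedup then pays a hundred-fold reward for the same economic contribution:

```python
# Hypothetical reward schedule that bakes in an exogenous assumption about
# hash speed. All constants are illustrative, not from any real protocol.
ASSUMED_HASHES_PER_PROOF = 1_000_000   # work required per submitted proof
ASSUMED_HASHES_PER_SEC = 10_000        # design-time hardware assumption
# Sized so a miner at the assumed speed earns ~100 tokens per hour.
TOKENS_PER_PROOF = 100 * ASSUMED_HASHES_PER_PROOF / (ASSUMED_HASHES_PER_SEC * 3600)

def tokens_per_hour(actual_hashes_per_sec: float) -> float:
    """Hourly payout to a miner as a function of their real hash speed."""
    proofs = actual_hashes_per_sec * 3600 / ASSUMED_HASHES_PER_PROOF
    return proofs * TOKENS_PER_PROOF

print(tokens_per_hour(10_000))     # ~100: matches the design assumption
print(tokens_per_hour(1_000_000))  # ~10,000: 100x overpayment after a
                                   # hardware or algorithmic breakthrough
```

Real proof-of-work chains defend against exactly this drift by retargeting difficulty as observed hash rate changes, which is one way to reduce dependence on the external parameter rather than assume it fixed.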

#7 Re-validate Assumptions

Designing a token should be like designing an adversarial system. User behavior will change as the way the token works changes.

A common mistake is adjusting the token model without verifying that arbitrary user behavior still yields acceptable results. Do not assume that user behavior will stay the same when the token model changes. This mistake often occurs late in the design process: someone has spent considerable time defining the token's goals, outlining its functions, and validating that it operates as intended. Then they identify an edge case and change the token design to accommodate it, but forget to re-validate the entire token model. By fixing one edge case, they create another (or several) unintended consequences.

Do not waste your hard work: whenever the project changes its token model, re-validate that the model still operates as intended.
