Exclusive Interview with Trusta Labs: The Chaos and Order of Airdrops
Author: Azuma, Odaily Planet Daily
LayerZero's largest witch hunt in history has officially concluded.
Over the past month, from inviting witches (sybils) to self-report in exchange for retaining 15% of their airdrop allocation, to officially partnering with data analysis firms such as Chaos Labs and Nansen for active screening, to incentivizing hunters with a 10% bounty for valid reports, LayerZero's every move has captured the community's attention.
Especially during the final reporting phase, although the idea was not original to LayerZero (Hop Protocol and others had similar designs earlier), as one of the most highly anticipated potential airdrop projects in the market, LayerZero stirred far bigger waves than its predecessors. Driven by the bounty, countless hunters submitted thousands of witch reports, forcing GitHub to temporarily suspend accounts to relieve server pressure. Even after the event migrated to Commonwealth and a 0.5 ETH deposit requirement was added, hunters still submitted over 3,000 witch reports within three days.
Judging by the results alone, LayerZero seems to have achieved its desired outcome from this community-driven event: a massive sample of data for fine-tuning the airdrop details. However, community feedback shows that the controversy and doubt surrounding the event have never dissipated. Are the reports submitted by hunters valid? Will professional institutions themselves turn into hunters? Can LayerZero, acting as the reviewing judge, balance efficiency and fairness? Will the final witch list be reused by later projects, and how far-reaching will this model's impact on the airdrop paradigm be? Many of these questions remain unanswered.
Trusta Labs, a Web3 startup founded by a former big-tech anti-fraud team, is dedicated to building Web3 identity and reputation infrastructure with AI. It has launched the industry's first Web3 user value assessment system, MEDIA Score, and the wallet analysis tool TrustGo, which leading projects such as Celestia, Manta, and Starknet have adopted as on-chain analysis standards for screening airdrop users. With these questions in mind, Odaily Planet Daily spoke with Trusta Labs.
LayerZero conducted a large-scale social experiment, but what was the outcome?
Looking back at the development history of airdrop design, it is essentially a dynamic game between project parties and users (including witches). Since the classic airdrop of Uniswap, subsequent projects launching TGE have put considerable effort into rule design, culminating in the high-profile actions of LayerZero. Overall, the project's efforts to hunt witches have been continuously intensified.
The fundamental reason, according to Trusta Labs, is that as more users (including witches) flock in, the airdrop market has settled into a worsening "too many monks, too little porridge" situation: users' expectations far exceed the chips that project teams can offer. Given this trend, later projects have had to intensify their witch hunts, clawing allocations back from witches to secure more chips for the users the project actually wants to reward.
This is especially true for projects like LayerZero, whose interacting addresses number in the millions. However, unlike earlier leading projects such as Arbitrum and Starknet, LayerZero did not delegate its airdrop design to a professional institution; instead, it launched this nearly month-long, large-scale social experiment itself, attempting a more thorough hunt for witches.
However, Trusta Labs believes that LayerZero's experiment has certain issues in planning and execution, which is the fundamental reason for the community's strong reaction.
From a planning perspective, the biggest problem is that LayerZero failed to define the token economic model and airdrop allocation in advance. Simply put, no one knows how much the airdrop will distribute or which addresses qualify. Self-reporting supposedly lets an address retain 15% of its original allocation, and reporting earns a 10% bounty, but no one knows what the original allocation actually is, and there are no details on how the deducted tokens will be redistributed. This lack of transparency leaves room for manipulation and makes the rules hard for the community to trust.
As for the execution aspect, the effectiveness of the three major phases—self-reporting, screening, and reporting—also deserves scrutiny.
First, regarding the self-reporting phase, Trusta Labs believes this approach is unlikely to be very effective: addresses that choose to self-report generally hold small allocations, so they have little impact on the overall chip distribution. In hindsight, the proportion of addresses that self-reported was indeed low. LayerZero also hoped to refine its screening mechanism using the behavioral logic of self-reporting addresses, but those addresses are scattered, whereas witches usually appear in clusters with similar behavior; it is difficult to derive a unified logic from scattered samples.
Next is the official screening phase. LayerZero commissioned Chaos Labs and Nansen to conduct witch-logic analysis, but after the list was released, many users reported that their independently operated addresses had been wrongly marked as witches, and LayerZero had to open an appeal channel for a second review. Trusta Labs infers that LayerZero may have included too many low-allocation addresses in its sample selection, biasing the model and leaving logical gaps in its witch identification.
The biggest issues arose in the reporting phase. LayerZero's intention in mobilizing the community may have been to lighten its workload and improve efficiency through collective effort, but in the end it had to verify over 3,000 reports one by one to filter out the logically valid ones, a far more time-consuming and labor-intensive task. A rough screen risks missing valid reports; a detailed screen demands enormous manpower and time. Indeed, LayerZero founder Bryan Pellegrino himself lamented that he wished he had more time for the work. Worse, malicious reporting, list plagiarism, address poisoning, and similar behavior during this phase exposed the darker side of human nature, and community dissatisfaction steadily escalated.
Odaily Note: Pellegrino expressed on X that he wished he could have two more months for detailed checks.
In summary, LayerZero initiated an unprecedented large-scale witch-cleansing experiment in the industry. This courage and effort deserve commendation, but upon review, the planning and execution of this experiment may still have much room for optimization.
LayerZero plants trees, zkSync enjoys the shade?
After LayerZero's experiment concluded, a question arose: after a month of turmoil, with the team spending time and effort and the community exhausted, who actually benefits?
Trusta Labs' answer to this question is somewhat unexpected: zkSync. In Trusta Labs' view, the biggest output of LayerZero's experiment is the final witch list, which later projects can use for free, while the community's anger stays focused on LayerZero.
Trusta Labs predicts that projects planning a TGE in the near future may choose to wait for LayerZero's final list and then cross-check it against their own screening rules. For example, zkSync, as another highly anticipated airdrop project, could simply use LayerZero's list to implement strict witch screening in a relatively gentle manner, avoiding a direct community backlash.
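The cross-checking that Trusta Labs describes amounts to simple set operations: combine a published witch list with addresses flagged by a project's own rules, choosing how strict the exclusion policy should be. A minimal sketch in Python, where every address and both lists are invented for illustration:

```python
# Hypothetical sketch: a later project cross-checks a published witch list
# against its own in-house screening rules before finalizing exclusions.

published_sybils = {"0xaaa", "0xbbb", "0xccc"}  # earlier project's final list
own_flagged = {"0xbbb", "0xccc", "0xddd"}       # addresses flagged in-house

# Strict policy: exclude any address flagged by either source.
strict_exclusions = published_sybils | own_flagged

# Lenient policy: exclude only addresses both sources agree on.
lenient_exclusions = published_sybils & own_flagged

print(sorted(strict_exclusions))   # ['0xaaa', '0xbbb', '0xccc', '0xddd']
print(sorted(lenient_exclusions))  # ['0xbbb', '0xccc']
```

The lenient intersection is what makes the approach "relatively gentle": only addresses independently flagged twice are cut, shifting blame for any single flag onto the earlier list's author.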
As for whether LayerZero's three-tiered cleansing model will become a new paradigm for future airdrops, Trusta Labs does not think so.
From a project team's perspective, LayerZero's case has proven this is no simple task: it is labor-intensive, and it tests the team's insight into human nature and its ability to withstand pressure. Across the crypto industry, it is hard to find another founder like Pellegrino, who both understands how witches think and has the stamina for the fight. Bryan, a former professional poker player, can hold high-frequency, multilingual "debates" with the community on X or Telegram every day, and even seems to enjoy it, so other projects will find his model hard to emulate.
How to design a reasonable airdrop?
Looking at the airdrop plans of major projects in the current market, they can generally be divided into two main schools: "radical" and "moderate."
The perfect representative of the "radical" school is clearly LayerZero, which is committed to eliminating all witches. The main reason such projects choose high-profile actions is to improve rule design and create momentum for the upcoming TGE, attracting more attention through more engaging gameplay. However, as mentioned earlier, such actions require high levels of energy and pressure resistance from the project itself, making replication difficult.
Additionally, the points-based airdrop schemes that have risen rapidly over the past year, represented by EigenLayer and Blast, also belong to the "radical" school to some extent. Projects that adopt points systems are generally early-stage: their core products are not yet live and cannot accumulate interaction data, so they fall back on a simpler metric (TVL) to please VCs or persuade the market, shifting "from PoW to PoS" by using points to attract larger TVL and more attention in advance. However, in Trusta Labs' view, "points are tokens that can be minted infinitely," which gives project teams greater initiative and more room to maneuver. The market has already shown signs of fatigue; if points systems are to survive, they may need to become more transparent, such as the purely on-chain points scheme promoted by Trusta Labs and Linea.
More projects will still choose the relatively traditional "moderate" school, quietly screening addresses and suddenly throwing out an airdrop announcement one day. For such projects, the biggest challenge mainly lies in address analysis, as determining how to reasonably filter data samples and how to identify the behavior logic of witches is no easy task.
In this regard, Trusta Labs suggests that project parties should directly and efficiently delegate the more specialized parts of airdrop design to professional institutions, which will collaborate with project parties to complete the rule design based on their needs and preferences.
To design a reasonable airdrop, project parties need a top-down design approach and adhere to basic principles such as data-driven, rule transparency, and equitable benefits. The first step is to clarify the token economic model and distribution allocation; the second step is to profile different user groups (developers, early users, active users, other ecosystem participants, etc.), choose a focus direction, and determine the allocation ratio for each group; the third step is the concrete witch screening work.
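The first two steps above, fixing the total allocation and then splitting it across profiled user groups, can be sketched as a simple budget calculation. All numbers, group names, and weights below are illustrative assumptions, not figures from any real project:

```python
# Hypothetical sketch of the top-down design steps described above:
# fix the total airdrop allocation first, then split it across profiled
# user groups, leaving witch screening as a final per-address filter.

TOTAL_AIRDROP = 1_000_000_000 * 0.10  # e.g. 10% of a 1B token supply

group_weights = {  # focus direction and ratios are purely illustrative
    "developers": 0.15,
    "early_users": 0.35,
    "active_users": 0.40,
    "ecosystem_participants": 0.10,
}
assert abs(sum(group_weights.values()) - 1.0) < 1e-9  # ratios must sum to 1

group_budgets = {g: TOTAL_AIRDROP * w for g, w in group_weights.items()}
for group, budget in group_budgets.items():
    print(f"{group}: {budget:,.0f} tokens")
```

Fixing these numbers before any screening happens is exactly the transparency Trusta Labs faults LayerZero for lacking: with the budgets published, deductions and redistributions can be audited by anyone.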
Regarding the distinction between witches and ordinary users (commonly referred to as "airdrop farmers"), Trusta Labs provides a relatively clear definition. In the institution's view, witches generally refer to those who operate a large number of accounts for on-chain interactions using scripts, forming address clusters with strong behavioral consistency; while "airdrop farmers" generally refer to individuals who interact based on their understanding and planning for airdrop purposes, with relatively fewer addresses and manual operations. The screening work should focus on "witches" rather than "airdrop farmers."
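The definition above, witches as large clusters of addresses with strong behavioral consistency versus farmers as small sets of manually operated addresses, suggests a simple clustering heuristic. The sketch below groups addresses by an exact interaction fingerprint and flags only large identical-behavior clusters; the data, threshold, and fingerprinting scheme are all invented for illustration (real systems like Trusta Labs' use far richer features):

```python
# Hypothetical sketch of the "witch vs. farmer" distinction: group addresses
# by their on-chain action sequence and flag only large clusters of identical
# behavior as witches, leaving small or unique patterns (farmers) alone.
from collections import defaultdict

SYBIL_CLUSTER_SIZE = 3  # illustrative threshold for "appearing in clusters"

activity = {  # address -> ordered interaction fingerprint (all data invented)
    "0x01": ("bridge", "swap", "mint"),
    "0x02": ("bridge", "swap", "mint"),
    "0x03": ("bridge", "swap", "mint"),
    "0x04": ("bridge", "swap", "mint"),
    "0x05": ("bridge", "stake"),  # a manual "farmer" with its own plan
}

clusters = defaultdict(list)
for addr, fingerprint in activity.items():
    clusters[fingerprint].append(addr)

sybils = {a for members in clusters.values()
          if len(members) >= SYBIL_CLUSTER_SIZE for a in members}
print(sorted(sybils))  # ['0x01', '0x02', '0x03', '0x04']
```

Note that the lone address "0x05" survives even though it farmed the same protocol: under this definition, screening targets coordinated scale, not intent.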
Will professional institutions engage in "insider trading" or "bounty hunting"?
As mentioned earlier, Trusta Labs has undertaken airdrop design for many leading projects like Celestia, Starknet, and Manta in the past few months. Some users have raised doubts about whether professional institutions like Trusta Labs would take the opportunity to engage in "insider trading" for profit.
Not long ago, during LayerZero's reporting phase, a report containing 470,000 addresses caused a stir in the community (subsequent community analysis of third-party Dune data found that most of the 470,000 addresses ranked among LayerZero's top 600,000 interacting addresses, making it hard to believe they were all operated by a single entity). Some even suspected that this list originated from Trusta Labs.
In response to such doubts, Trusta Labs stated: "Trusta Labs aims to keep its business simple, transparent, and professional. We absolutely will not leverage our technical advantages to engage in under-the-table dealings, nor will we compete for benefits with C-end users, as this would not be conducive to the long-term sustainability of our business or the fair development of the industry."
For example, regarding "insider trading": while it might yield short-term profits, exposure would do irreversible damage to the institution's reputation and future business. As for "bounty hunting": reporting witches generally requires explaining the behavioral logic behind each report, which risks exposing the institution's witch-detection algorithms; the potential downside far outweighs the small profit.
Trusta Labs also specifically mentioned the report covering 470,000 addresses that appeared during LayerZero's reporting phase: "Any user with a bit of knowledge can see that report is nonsense; we certainly wouldn't write something so embarrassing…"
Airdrops are becoming increasingly competitive; do ordinary users still have a chance?
Looking back at the airdrop market's development, as the concept of "airdrop farming" becomes more widespread and strong players like scientists and studios keep entering, the bar for earning airdrops seems to keep rising. Many readers have told us that although 2024 has already seen quite a few airdrops, overall returns are lower than one or two "big airdrops" from previous years, and "anti-farming" measures are noticeably more frequent.
For ordinary users, the wealth effect of airdrops seems to be gradually dissipating. In response to this issue, Trusta Labs offers a unique suggestion based on its own experience.
In Trusta Labs' view, "airdrop farming" is one of the best ways for users to understand projects, and can be treated as an alternative form of investment research. If, during their interactions, users find suitable targets that match their own judgment and taste, especially high-potential projects in niche sectors, they can use that firsthand experience to inform investment decisions. From Trusta Labs' own experience, the institution came to systematically understand Celestia's business model and ecosystem potential while helping with its airdrop design, then made a substantial investment shortly after TIA launched and reaped significant profits in the secondary market.
The airdrop market is becoming increasingly competitive, and it will only get more so in the future—the chefs (project parties) are unable to keep up with the growing number of diners (users), and the amount of chips individuals can obtain will only decrease relatively as the group continues to expand. As ordinary players, we cannot change the trend, but perhaps we can slightly adjust our operational mindset.