Analysis of the Current Situation, Competitive Landscape, and Future Opportunities of the Integration of AI and the Web3 Data Industry (Part 2)

Footprint Analytics
2023-12-06 13:09:56

Author: Footprint Analytics


The emergence of GPT has drawn global attention to large language models, with various industries attempting to leverage this "black technology" to enhance work efficiency and accelerate industry development. Future3 Campus, in collaboration with Footprint Analytics, has conducted an in-depth study on the infinite possibilities of the integration of AI and Web3, jointly releasing the research report titled "Analysis of the Current Situation, Competitive Landscape, and Future Opportunities for the Integration of AI and Web3 Data Industry." This report is divided into two parts; this article is the second part, authored by Future3 Campus researchers Sherry and Humphrey.

Abstract:

  • The integration of AI and Web3 data is enhancing data processing efficiency and improving user experience. Currently, the exploration of LLM in the blockchain data industry mainly focuses on using AI technology to improve data processing efficiency, leveraging the interactive advantages of LLM to build AI Agents, and utilizing AI for pricing and trading strategy analysis.
  • The application of AI in the Web3 data field still faces several challenges, such as accuracy, interpretability, and commercialization. There is still a long way to go before AI can completely replace human intervention.
  • The core competitiveness of Web3 data companies lies not only in AI technology itself but also in their ability to accumulate data and apply in-depth analysis.
  • AI may not be a solution to the commercialization of data products in the short term; commercialization requires more productization efforts.

1. Current Status and Development Path of the Integration of Web3 Data Industry and AI

1.1 Dune

Dune is currently a leading open data analysis community in the Web3 industry, providing tools for querying, extracting, and visualizing large amounts of blockchain data. It allows users and data analysts to query on-chain data from Dune's pre-filled database using simple SQL queries and generate corresponding charts and insights.

In March 2023, Dune proposed plans regarding AI and future integration with LLM, and in October, it launched its Dune AI product. The core focus of Dune AI-related products is to enhance Wizard UX by leveraging the powerful language and analytical capabilities of LLM to better serve users in querying data and writing SQL on Dune.

(1) Query Explanation: A product released in March that allows users to obtain natural language explanations of SQL queries by clicking a button, aimed at helping users better understand complex SQL queries, thereby improving the efficiency and accuracy of data analysis.

(2) Query Translation: Dune plans to unify different SQL query engines on Dune (such as Postgres and Spark SQL) to DuneSQL, enabling LLM to provide automated query language translation capabilities, helping users transition more smoothly to facilitate the promotion of DuneSQL products.

(3) Natural Language Query: Launched in October as Dune AI. It allows users to ask questions in plain English and obtain data. The goal of this feature is to enable users without SQL knowledge to easily access and analyze data.

(4) Search Optimization: Dune plans to utilize LLM to improve search functionality, helping users filter information more effectively.

(5) Guide Knowledge Base: Dune plans to release a chatbot to help users quickly browse blockchain and SQL knowledge in Spellbook and Dune documentation.

(6) Simplifying SQL Writing (Dune Wand): In August, Dune launched the Wand series of SQL tools. Create Wand allows users to generate complete queries from natural language prompts, Edit Wand lets users modify existing queries, and Debug Wand automatically fixes syntax errors in queries. The core of these tools is LLM technology, which simplifies the query-writing process, allowing analysts to focus on the core logic of data analysis without worrying about code and syntax issues.
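As a rough illustration of how a natural-language-to-SQL feature like Dune AI or Create Wand might frame its request to an LLM, the sketch below assembles a schema-constrained prompt. The table name, columns, and function are hypothetical examples, not Dune's actual implementation.

```python
# A minimal, hypothetical sketch of prompt construction for NL-to-SQL.
# Schema and names are illustrative assumptions, not Dune's real internals.

def build_sql_prompt(question: str, table: str, columns: list[str]) -> str:
    """Assemble an LLM prompt that constrains generation to a known schema."""
    schema = f"Table {table} ({', '.join(columns)})"
    return (
        "You are a DuneSQL assistant. Using only the schema below, "
        "translate the user's question into a single SQL query.\n"
        f"Schema: {schema}\n"
        f"Question: {question}\n"
        "SQL:"
    )

prompt = build_sql_prompt(
    "What was the daily DEX volume last week?",
    "dex_trades",
    ["block_time", "token_pair", "amount_usd"],
)
print(prompt)
```

Constraining generation to a known schema is one common way to reduce hallucinated table and column names before the query is validated and executed.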

1.2 Footprint Analytics

Footprint Analytics is a blockchain data solution provider that leverages artificial intelligence technology to offer a no-code data analysis platform, a unified data API product, and the Web3 project BI platform Footprint Growth Analytics.

Footprint's advantage lies in its on-chain data production line and ecosystem tools. By establishing a unified data lake, it connects on-chain and off-chain data with a meta-database of on-chain business registrations, ensuring that data remains accessible, usable, and high-quality during analysis. Footprint's long-term strategy will focus on technological depth and platform construction to create a "machine factory" capable of producing on-chain data and applications.

The integration of Footprint products with AI is as follows:

Since the launch of LLM models, Footprint has been exploring the integration of existing data products with AI to enhance data processing and analysis efficiency, creating more user-friendly products. In May 2023, Footprint began providing users with natural language interactive data analysis capabilities, upgrading its original no-code features to advanced product functionalities, allowing users to quickly access data and generate charts through dialogue without needing to familiarize themselves with the platform's tables and designs.

Additionally, the current market for LLM + Web3 data products mainly focuses on reducing user entry barriers and changing interaction paradigms. However, Footprint's emphasis in product and AI development is not only on improving the user experience in data analysis but also on accumulating vertical data and business understanding in the crypto field, as well as training language models specific to the crypto domain to enhance the efficiency and accuracy of vertical scenario applications. Footprint's advantages in this area will be reflected in the following aspects:

  • Data Knowledge Volume (quality and quantity of the knowledge base). The efficiency of data accumulation, sources, volume, and categories. Particularly, the Footprint MetaMosaic sub-product reflects the accumulation of relationship graphs and static data of specific business logic.
  • Knowledge Architecture. Footprint has accumulated structured data tables, abstracted by business segment, across more than 30 public chains. Knowledge of the production process from raw data to structured data reinforces the understanding of raw data and improves model training.
  • Data Types. Training on non-standard, unstructured raw on-chain data versus structured, business-meaningful data tables and metrics differs significantly in training efficiency and machine cost. A typical example: feeding an LLM requires a considerable amount of data, not only specialized data from the crypto field but also more readable, structured data, along with a larger user base to provide feedback data.
  • Crypto Capital Flow Data. Footprint abstracts capital flow data closely related to investment, covering for each transaction its time, entities (with flow direction), token type, amount (and the associated token price at the time), business type, and labels for tokens and entities. This serves as a knowledge base and data source for LLMs to analyze major capital flows, identify chip distribution, monitor capital movements, detect on-chain anomalies, and track smart money.
  • Injection of Private Data. Footprint categorizes models into three layers: a foundational large model with world knowledge (OpenAI and other open-source models), specialized vertical models, and personalized expert knowledge models. This allows users to unify their various knowledge sources on Footprint for management and train private LLMs using private data, suitable for more personalized application scenarios.
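The capital-flow record described in the list above can be pictured as a simple typed schema. The field names below are assumptions for illustration, not Footprint's actual data model; they just mirror the attributes the text enumerates (time, entities with direction, token type, amount and price, business type, labels).

```python
# Illustrative schema for one capital-flow record as described in the text.
# Field names and types are assumptions, not Footprint's real tables.
from dataclasses import dataclass, field

@dataclass
class CapitalFlow:
    timestamp: int            # unix time of the transaction
    from_entity: str          # sending entity (flow direction: out)
    to_entity: str            # receiving entity (flow direction: in)
    token: str                # token type
    amount: float             # token amount transferred
    price_usd: float          # token price at transaction time
    business_type: str        # e.g. "dex_swap", "bridge", "transfer"
    labels: list[str] = field(default_factory=list)  # entity/token labels

    @property
    def value_usd(self) -> float:
        """USD value at transaction time, derived from amount and price."""
        return self.amount * self.price_usd

flow = CapitalFlow(1701849600, "whale_0xabc", "binance_hot", "ETH",
                   120.0, 2250.0, "transfer", ["smart_money"])
print(flow.value_usd)  # 270000.0
```

Records in this shape are what make downstream tasks like tracking smart money or detecting anomalous flows tractable for an LLM or any other model: the raw transfer has already been enriched with entities, prices, and labels.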

In exploring the integration of LLM models, Footprint has encountered a series of challenges, most typically token (context window) limits, time-consuming prompts, and unstable responses. The on-chain data vertical that Footprint operates in faces greater challenges due to the diversity, large volume, and rapid change of on-chain data entities. How this data should be fed to LLMs requires more research and exploration across the industry, and the current toolchain is still relatively nascent, requiring more tools to address specific issues.

In the future, Footprint's integration of technology and products with AI will include the following:

(1) In terms of technology, Footprint will explore and optimize LLM models in three areas:

  • Supporting LLM reasoning on structured data, allowing a large amount of accumulated structured data and knowledge in the crypto field to be applied in LLM's data consumption and production.
  • Helping users build personalized knowledge bases (including knowledge, data, and experience) and using private data to enhance the capabilities of already optimized crypto LLMs, enabling everyone to build their own models.
  • Enabling AI-assisted analysis and content production, allowing users to create their own GPT through dialogue, combining capital flow data and private knowledge bases to produce and share crypto investment content.

(2) In terms of products, Footprint will focus on exploring innovations in AI product applications and business models. According to Footprint's recent product promotion plan, it will launch a platform for AI crypto content generation and sharing for users.

Additionally, for future partner expansion, Footprint will explore the following two areas:

First, strengthening cooperation with KOLs to assist in the production of valuable content and community operations, as well as monetizing knowledge.

Second, expanding cooperation with more project parties and data providers to create open, win-win user incentives and data collaboration, establishing a mutually beneficial one-stop data service platform.

1.3 GoPlus Security

GoPlus Security is currently a leading user security infrastructure in the Web3 industry, providing various user-oriented security services. It has been integrated into mainstream digital wallets, market websites, DEXs, and various other Web3 applications. Users can directly utilize asset security detection, transfer authorization, and anti-phishing features. The user security solutions provided by GoPlus comprehensively cover the entire lifecycle of user security to protect user assets from various types of attackers.

GoPlus's development and planning with AI are as follows:

GoPlus's exploration of AI technology is mainly reflected in its AI automated detection and AI security assistant products:

(1) AI Automated Detection

Since 2022, GoPlus has been developing an AI-based automated detection engine to comprehensively enhance the efficiency and accuracy of security detection. GoPlus's security engine employs a multi-layered, funnel-style analysis approach, incorporating static code detection, dynamic detection, and feature or behavior detection. This composite process enables the engine to effectively identify and analyze the characteristics of potential risk samples and thereby model attack types and behaviors. These models are key to the engine's ability to identify and prevent security threats, helping it determine whether risk samples exhibit specific attack characteristics.

In addition, through long-term iteration and optimization, the GoPlus security engine has accumulated substantial security data and experience, allowing it to respond quickly and effectively to emerging security threats and ensuring timely detection and prevention of complex and novel attacks, thus providing comprehensive protection for users. The engine currently applies AI-related algorithms and technologies in multiple security scenarios, including risk contract detection, phishing website detection, malicious address detection, and risk transaction detection. On one hand, AI rapidly reduces risk exposure, improves detection efficiency, and lowers detection costs; on the other, it reduces the complexity and time cost of human involvement and improves the accuracy of risk-sample judgments, especially for new scenarios that are difficult to define manually or identify with rule-based engines, where AI can better aggregate features and form more effective analysis methods.
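The "funnel-style" multi-layer analysis described above can be sketched as a pipeline in which each layer screens samples so that later, more expensive checks only run when earlier ones have not already flagged the sample. The checks below are toy placeholders, not GoPlus's real detectors.

```python
# Toy sketch of a funnel-style, multi-layer detection pipeline.
# The individual checks are illustrative placeholders, not real detectors.

def static_check(sample: dict) -> bool:
    # Cheap static-code check, e.g. a known-risky opcode pattern.
    return "selfdestruct" in sample.get("code", "")

def dynamic_check(sample: dict) -> bool:
    # Dynamic (simulated-execution) check, e.g. a transfer that fails.
    return sample.get("sim_transfer_fails", False)

def behavior_check(sample: dict) -> bool:
    # Behavioral/feature check, e.g. deployer linked to prior incidents.
    return sample.get("deployer_flagged", False)

def detect(sample: dict) -> str:
    """Run layers in order of cost; stop at the first one that flags."""
    for layer, check in [("static", static_check),
                         ("dynamic", dynamic_check),
                         ("behavior", behavior_check)]:
        if check(sample):
            return f"risky:{layer}"
    return "clean"

print(detect({"code": "selfdestruct"}))            # risky:static
print(detect({"deployer_flagged": True}))          # risky:behavior
print(detect({"code": "", "sim_transfer_fails": False}))  # clean
```

The design point is ordering: cheap static screening first, costly dynamic and behavioral analysis only for what gets past it, which keeps throughput high across large sample volumes.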

In 2023, with the development of large models, GoPlus quickly adapted to and adopted LLM. Compared to traditional AI algorithms, LLM has significantly improved efficiency and effectiveness in data recognition, processing, and analysis. The emergence of LLM has accelerated GoPlus's technological exploration in AI automated detection. In the direction of dynamic fuzz testing, GoPlus utilizes LLM technology to effectively generate transaction sequences and explore deeper states to discover contract risks.

(2) AI Security Assistant

GoPlus is also developing an AI security assistant based on LLM natural language processing capabilities to provide instant security consulting and improve user experience. Built on the GPT large model and fed with front-end business data, this self-developed user security agent can automatically analyze user inquiries, generate solutions, break down tasks, and execute them, providing the required security services. The AI assistant simplifies communication between users and security issues, lowering the threshold for user understanding.

In terms of product functionality, given the importance of AI in the security field, AI has the potential to fundamentally change the structure of existing security engines or antivirus engines, leading to the emergence of entirely new engine architectures centered around AI. GoPlus will continue to train and optimize AI models, aiming to transform AI from an auxiliary tool into a core function of its security detection engine.

In terms of business models, although GoPlus's services are currently mainly aimed at developers and project parties, the company is exploring more products and services directly targeting C-end users, as well as new revenue models related to AI. Providing efficient, accurate, and low-cost C-end services will be GoPlus's core competitiveness in the future. This requires the company to continuously research and conduct more training and output on AI large models that interact with users. At the same time, GoPlus will collaborate with other teams to share its security data and promote AI applications in the security field, preparing for potential industry transformations in the future.

1.4 Trusta Labs

Trusta Labs, established in 2022, is an AI-driven data startup in the Web3 field. Trusta Labs focuses on efficiently processing and accurately analyzing blockchain data using advanced AI technology to build on-chain reputation and security infrastructure. Currently, Trusta Labs' business mainly includes two products: TrustScan and TrustGo.

(1) TrustScan is a product designed for B-end clients, primarily used to help Web3 projects analyze on-chain user behavior and refine user acquisition, engagement, and retention to identify high-value and genuine users.

(2) TrustGo is a product aimed at C-end clients, providing a MEDIA analysis tool that evaluates on-chain addresses from five dimensions (capital amount, activity, diversity, identity rights, loyalty). This product emphasizes in-depth analysis of on-chain data to enhance the quality and security of trading decisions.
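The five MEDIA dimensions named above (capital amount, activity, diversity, identity rights, loyalty) suggest a simple composite score. The sketch below is a hypothetical illustration of such a scoring scheme; the normalization and equal weighting are invented for clarity and are not Trusta Labs' actual methodology.

```python
# Hypothetical composite score over the five MEDIA-style dimensions.
# Scales and equal weights are assumptions for illustration only.

def media_score(monetary: float, activity: float, diversity: float,
                identity: float, loyalty: float) -> float:
    """Each dimension is pre-normalized to [0, 100]; return the mean."""
    dims = [monetary, activity, diversity, identity, loyalty]
    if not all(0 <= d <= 100 for d in dims):
        raise ValueError("dimensions must be normalized to [0, 100]")
    return sum(dims) / len(dims)

score = media_score(monetary=80, activity=60, diversity=40,
                    identity=20, loyalty=50)
print(score)  # 50.0
```

A real system would weight dimensions unequally and learn those weights from labeled data; the point here is only that multi-dimensional address evaluation reduces to normalizing each signal and combining them.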

Trusta Labs' development and planning with AI are as follows:

Currently, both of Trusta Labs' products use AI models to process and analyze interaction data from on-chain addresses. The behavioral data of on-chain address interactions is sequential data, which is well suited to training AI models. During the cleaning, organizing, and labeling of on-chain data, Trusta Labs delegates a significant amount of work to AI, greatly improving the quality and efficiency of data processing while reducing substantial labor costs. Trusta Labs uses AI technology to conduct in-depth analysis and mining of on-chain address interaction data, effectively identifying likely Sybil addresses for B-end clients; in several projects that have adopted its products, Trusta Labs has effectively prevented potential Sybil attacks. For C-end clients, TrustGo uses existing AI models to help users gain deeper insight into their own on-chain behavior data.
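One intuition for why sequential behavior data suits Sybil detection: Sybil farms tend to replay near-identical action sequences across many addresses, so pairwise sequence similarity is a simple signal. The heuristic below is a toy illustration only; production systems train models on such sequences rather than using a single hand-written similarity measure.

```python
# Toy Sybil signal: similarity of two addresses' action sequences,
# measured by shared consecutive action pairs (bigrams).
# Illustrative heuristic only, not a production detection method.

def jaccard_bigrams(seq_a: list[str], seq_b: list[str]) -> float:
    """Jaccard similarity over consecutive action pairs in each sequence."""
    big_a = set(zip(seq_a, seq_a[1:]))
    big_b = set(zip(seq_b, seq_b[1:]))
    if not big_a and not big_b:
        return 0.0
    return len(big_a & big_b) / len(big_a | big_b)

bot_a = ["bridge", "swap", "mint", "swap", "bridge"]
bot_b = ["bridge", "swap", "mint", "swap", "bridge"]
human = ["swap", "lend", "vote", "nft_buy"]

print(jaccard_bigrams(bot_a, bot_b))  # 1.0 -> likely same scripted cluster
print(jaccard_bigrams(bot_a, human))  # 0.0 -> unrelated behavior
```

Clustering addresses whose sequences score highly similar, then inspecting funding sources and timing within each cluster, is the general shape of this class of analysis.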

Trusta Labs has been closely monitoring the technological progress and application practices of LLM models. As the costs of model training and inference continue to decrease, and as substantial corpora and user behavior data accumulate in the Web3 field, Trusta Labs will look for suitable opportunities to introduce LLM technology, leveraging AI productivity to provide deeper data mining and analysis capabilities for its products and users. On the basis of the rich data it already provides, Trusta Labs hopes to use AI's intelligent analysis models to offer more reasonable and objective data interpretation, such as qualitative and quantitative analyses of identified Sybil accounts for B-end users, helping them better understand the reasoning behind the analysis and giving them more detailed material when handling complaints from their own clients.

On the other hand, Trusta Labs also plans to use open-source or relatively mature LLM models, combined with an intent-centric design philosophy, to build AI Agents that help users resolve on-chain interaction issues more quickly and efficiently. In concrete scenarios, users will be able to communicate in natural language with intelligent assistants trained on LLMs; the assistant can respond "smartly" with relevant on-chain data and suggest plans for subsequent operations based on that information. This achieves truly intent-centered, one-stop intelligent operation, significantly lowering the threshold for users to use data and simplifying the execution of on-chain operations.

Additionally, Trusta believes that as more AI-based data products emerge in the future, the core competitive elements of each product may not lie in which LLM model is used, but rather in a deeper understanding and interpretation of the data already mastered. Only by analyzing the mastered data and combining it with LLM models can smarter AI models be trained.

1.5 0xScope

0xScope, established in 2022, is an innovation platform centered around data, focusing on the integration of blockchain technology and artificial intelligence. 0xScope aims to change the way people process, use, and view data. Currently, 0xScope has launched 0xScope SaaS products and 0xScopescan for B-end and C-end clients, respectively.

(1) 0xScope SaaS products are enterprise-oriented SaaS solutions that empower enterprise clients to manage post-investment, make better investment decisions, understand user behavior, and closely monitor competitive dynamics.

(2) 0xScopescan is a B2C product that allows cryptocurrency traders to investigate the flow and activity of funds on selected blockchains.

0xScope's business focus is to abstract general data models from on-chain data, simplifying on-chain data analysis work and transforming on-chain data into understandable operational data, thereby assisting users in conducting in-depth analysis of on-chain data. Utilizing the data tool platform provided by 0xScope not only enhances the quality of on-chain data and uncovers hidden information but also significantly lowers the threshold for data mining.

0xScope's development and planning with AI are as follows:

0xScope's products are being upgraded in conjunction with large models, encompassing two directions: first, further lowering the user entry threshold through natural language interaction; second, utilizing AI models to improve processing efficiency in data cleaning, parsing, modeling, and analysis. Additionally, an AI interactive module with Chat functionality is about to be launched in 0xScope's products, which will greatly reduce the threshold for users to conduct data queries and analysis, allowing interaction and querying of underlying data using only natural language.

However, in the process of training and using AI, 0xScope has identified several challenges: first, the costs and time associated with AI training are high. After posing a question, AI requires a considerable amount of time to respond. This challenge forces the team to streamline and focus business processes, concentrating on vertical domain Q&A rather than becoming a comprehensive super AI assistant. Second, the outputs of LLM models are uncontrollable. Data-related products aim to provide precise results, but the outputs from LLM models may deviate from actual situations, which can be detrimental to the user experience of data products. Furthermore, the outputs of large models may involve users' private data. Therefore, when using LLM models in products, the team needs to impose significant restrictions to ensure that the outputs of AI models are controllable and accurate.

In the future, 0xScope plans to utilize AI to focus on specific vertical tracks and delve deeper. Currently, with a substantial accumulation of on-chain data, 0xScope can define the identities of on-chain users and will continue to use AI tools to abstract on-chain user behavior, thereby creating a unique data modeling system to reveal the implicit information within on-chain data.

In terms of collaboration, 0xScope will focus on two groups: first, the target audience that products can directly serve, such as developers, project parties, VCs, exchanges, etc., who require the data currently provided by the products; second, partners who need AI Chat, such as Debank, Chainbase, etc., who only need relevant knowledge and data to directly call AI Chat.

2. VC Insight: The Commercialization and Future Development Path of AI + Web3 Data Companies

This section provides insights from interviews with four seasoned VC investors, examining the current state and development of the AI + Web3 data industry from an investment and market perspective, the core competitiveness of Web3 data companies, and their future commercialization paths.

2.1 Current Status and Development of the AI + Web3 Data Industry

Currently, the integration of AI and Web3 data is in a phase of active exploration. From the development directions of various leading Web3 data companies, the combination of AI technology and LLM is an indispensable trend. However, LLM has its own technical limitations and cannot yet solve many problems in the current data industry.

Therefore, we need to recognize that blindly integrating with AI will not necessarily enhance a project's advantages or create hype around AI concepts; rather, it is essential to explore truly practical and promising application areas. From the VC perspective, the integration of AI and Web3 data has already seen explorations in the following areas:

1) Enhancing the capabilities of Web3 data products through AI technology, including improving internal data processing and analysis efficiency for enterprises, as well as enhancing automated analysis and retrieval capabilities for user data products. For example, Yuxing from SevenX Ventures mentioned that the primary benefit of using AI technology in Web3 data is efficiency, such as Dune using LLM models for code anomaly detection and converting natural language into SQL for information indexing; there are also projects using AI for security alerts, where AI algorithms perform anomaly detection more effectively than pure mathematical statistics, thus enabling more efficient security monitoring. Additionally, Zixi from Matrix Partners noted that enterprises can save significant labor costs by training AI models for data pre-labeling. Nevertheless, VCs believe that AI plays a supportive role in enhancing the capabilities and efficiency of Web3 data products, such as in data pre-labeling, and that human review may still be necessary to ensure accuracy.
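The "pure mathematical statistics" baseline that the AI approaches above are compared against is worth making concrete. A standard statistical anomaly detector flags values more than k standard deviations from the mean; learned models improve on this when anomalies are not simply extreme values. This is a generic textbook baseline, not any specific project's method.

```python
# Generic statistical anomaly-detection baseline: flag values more than
# k standard deviations from the mean. Sample data is illustrative.
import statistics

def zscore_anomalies(values: list[float], k: float = 3.0) -> list[int]:
    """Return indices of values more than k population stddevs from the mean."""
    mean = statistics.fmean(values)
    sd = statistics.pstdev(values)
    if sd == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / sd > k]

# e.g. gas used per transaction, with one anomalous contract interaction
gas_used = [21000, 21500, 20800, 21200, 950000, 21100]
print(zscore_anomalies(gas_used, k=2.0))  # [4]
```

The limitation that motivates learned detectors is visible even here: a single large outlier inflates both the mean and the standard deviation, and anomalies that mimic normal magnitudes (e.g. many small coordinated transfers) escape this test entirely.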

2) Utilizing LLM's advantages in adaptability and interaction to create AI Agents/Bots. For instance, using large language models to retrieve data across the entire Web3 landscape, including on-chain data and off-chain news data, for information aggregation and sentiment analysis. Harper from Hashkey Capital believes that such AI Agents are more focused on information integration, generation, and interaction with users, and may be relatively weaker in terms of information accuracy and efficiency.

Although there are already numerous cases in the above two applications, the technology and products are still in the early stages of exploration, necessitating continuous technical optimization and product improvement in the future.

3) Using AI for pricing and trading strategy analysis: Currently, there are projects in the market that utilize AI technology to estimate prices for NFTs, such as NFTGo, which is invested by Qiming Venture Partners, and some professional trading teams that use AI for data analysis and trading execution. Additionally, Ocean Protocol recently released an AI product for price prediction. These products seem imaginative, but their acceptance by users, especially in terms of accuracy, still requires validation.

On the other hand, many VCs, especially those with investments in Web2, are more focused on the advantages and application scenarios that Web3 and blockchain technology can bring to AI technology. Blockchain's characteristics of being publicly verifiable and decentralized, along with cryptographic technology providing privacy protection, combined with Web3's restructuring of production relationships, may offer new opportunities for AI:

1) AI Data Rights and Verification. The emergence of AI has produced a glut of cheaply generated data content. Tang Yi from Qiming Venture Partners mentioned that for digital works and other content, it is challenging to determine their quality and creators; establishing a new system for data content rights may require assistance from blockchain. Zixi from Matrix Partners noted that some data exchanges are trading data as NFTs, which can resolve data rights issues.

Additionally, Yuxing from SevenX Ventures mentioned that Web3 data can help address AI's forgery and black-box problems. AI currently faces black-box issues in both model algorithms and data, which lead to deviations in output results. The transparency and public verifiability of Web3 data make the training sources and results of AI models clearer, ensuring fairness and reducing bias and error. That said, the current volume of Web3 data is still insufficient to empower AI training, so this will not be realized in the short term. In the meantime, this characteristic can be leveraged to put Web2 data on-chain to prevent deep forgery by AI.

2) AI Data Labeling Crowdsourcing and UGC Communities: Traditional AI labeling suffers from low efficiency and quality, especially in specialized domains that require interdisciplinary knowledge; general-purpose data labeling companies cannot cover these areas, which often demand in-house professional teams. Introducing crowdsourced data labeling through blockchain and Web3 mechanisms can effectively address this issue, as with Questlab, a Matrix Partners portfolio company that provides crowdsourced data labeling services using blockchain technology. Additionally, in some open-source model communities, blockchain concepts can be used to solve the economics of compensating model creators.

3) Data Privacy Deployment: The combination of blockchain technology and cryptographic techniques can ensure data privacy and decentralization. Zixi from Matrix Partners mentioned a synthetic data company in their portfolio that generates synthetic data using large models, applicable mainly to software testing, data analysis, and AI large-model training. The company faces many data privacy issues when handling data and uses the Oasis blockchain to effectively avoid privacy and regulatory problems.

2.2 How AI + Web3 Data Companies Build Core Competitiveness

For Web3 technology companies, the introduction of AI can increase project attractiveness or attention to some extent. However, most AI-related products from Web3 technology companies are not sufficient to become the core competitiveness of the company; they mainly provide a friendlier experience and improved efficiency. For example, the threshold for AI Agents is not high; companies that enter the market first may have a first-mover advantage, but this does not create barriers.

The true core competitiveness and barriers in the Web3 data industry should stem from the team's data capabilities and how they apply AI technology to solve specific analytical scenario problems.

First, the team's data capabilities include the quality of data sources and the ability of the team to analyze data and adjust models, which is the foundation for subsequent work. In interviews, SevenX Ventures, Matrix Partners, and Hashkey Capital all agreed that the core competitiveness of AI + Web3 data companies depends on the quality of data sources. On this basis, engineers need to be proficient in model fine-tuning, data processing, and parsing based on data sources.

On the other hand, the specific scenarios in which the team integrates AI technology are also crucial, and these scenarios should be valuable. Harper believes that although the combination of Web3 data companies and AI currently mostly starts with AI Agents, their positioning is different. For example, Space and Time, invested by Hashkey Capital, collaborated with chainML to launch infrastructure for creating AI agents, with the DeFi agent created being used by Space and Time.

2.3 Future Commercialization Paths for Web3 Data Companies

Another important topic for Web3 data companies is commercialization. For a long time, the profit model of data analysis companies has been fairly uniform: mostly free for C-end users and monetized through B-end clients, which depends heavily on enterprises' willingness to pay. In the Web3 field, the willingness of enterprises to pay is inherently low, compounded by the fact that most projects are startups, making it difficult for project parties to sustain long-term payments. Therefore, Web3 data companies currently face a challenging commercialization situation.

In this regard, VCs generally believe that the current integration of AI technology, applied only to internally solve production process issues, has not fundamentally changed the monetization difficulties. Some new product forms, such as AI Bots, may enhance user willingness to pay in the ToC sector to some extent, but this is still not very strong. AI may not be a solution to the commercialization problems of data products in the short term; commercialization requires more productization efforts, such as finding more suitable scenarios and innovative business models.

In the future, the path of integrating Web3 with AI may generate new business models by combining Web3's economic model with AI data, primarily in the ToC sector. Zixi from Matrix Partners mentioned that AI products could incorporate some token mechanics to enhance community engagement, daily activity, and emotional connection, which is feasible and easier to monetize. Tang Yi from Qiming Venture Partners noted that from an ideological perspective, the value system of Web3 can be integrated with AI, making it suitable as a bot account system or value conversion system. For example, a bot could have its own account, earn money through its intelligent components, and pay for maintaining its underlying computing power. However, this concept belongs to future speculation, and practical applications may still have a long way to go.

In terms of the original business model, where users pay directly, sufficiently strong product capabilities are needed to enhance users' willingness to pay: for example, higher-quality data sources, and benefits derived from the data that exceed what users pay. This depends not only on the application of AI technology but also on the capabilities of the data team itself.


This article is jointly published by Footprint Analytics, Future3 Campus, and HashKey Capital.

Footprint Analytics is a blockchain data solution provider. Leveraging cutting-edge AI technology, we offer the first no-code data analysis platform and unified data API in the crypto field, enabling users to quickly retrieve NFT, GameFi, and wallet address capital flow tracking data across more than 30 public chain ecosystems.

Footprint official website: https://www.footprint.network

Twitter: https://twitter.com/Footprint_Data

WeChat official account: Footprint Blockchain Analysis

Join the community: Add the assistant on WeChat to join the group footprint_analytics

Future3 Campus is a Web3.0 innovation incubation platform jointly initiated by Wanxiang Blockchain Lab and HashKey Capital, focusing on three major tracks: Web3.0 Massive Adoption, DePIN, and AI, with Shanghai, the Guangdong-Hong Kong-Macao Greater Bay Area, and Singapore as the main incubation bases, radiating the global Web3.0 ecosystem. At the same time, Future3 Campus will launch an initial seed fund of 50 million USD for the incubation of Web3.0 projects, truly serving innovation and entrepreneurship in the Web3.0 field.

HashKey Capital is an asset management institution focused on investing in blockchain technology and digital assets, currently managing over 1 billion USD in assets. As one of the largest and most influential blockchain investment institutions in Asia, and one of the earliest institutional investors in Ethereum, HashKey Capital plays a leading role in connecting Web2 and Web3, working with entrepreneurs, investors, communities, and regulatory agencies to build a sustainable blockchain ecosystem. The company is located in Hong Kong, Singapore, Japan, the United States, and other regions, and has already invested in over 500 global enterprises across various tracks, including Layer 1, protocols, Crypto Finance, Web3 infrastructure, applications, NFTs, and the Metaverse. Representative projects include Cosmos, Coinlist, Aztec, Blockdaemon, dYdX, imToken, Animoca Brands, Falcon X, Space and Time, Mask Network, Polkadot, Moonbeam, and Galxe (formerly Project Galaxy).
