What Zuckerberg said about AI and the metaverse on Meta's earnings call

Tencent Technology
2024-04-25 15:15:50

Original Title: “Zuckerberg says it will take Meta years to make money from generative AI”
Author: Alex Heath
Translation: Tencent Technology

On April 25, Meta, the parent company of Facebook, released its earnings report for the first quarter of 2024 (ended March 31) after the U.S. stock market closed on Wednesday. Revenue for the quarter was $36.5 billion, up 27% from $28.6 billion a year earlier and slightly above Wall Street's forecast of $36.1 billion. Net profit was $12.4 billion, more than double the $5.7 billion of a year earlier, and diluted earnings per share were $4.71, up 114% year over year.
However, Meta's push into artificial intelligence requires enormous computing power, and the associated costs are extremely high. The Silicon Valley giant raised its planned capital expenditures for the year from an initial estimate of $30 billion to $37 billion up to a range of $35 billion to $40 billion. The adjustment is driven mainly by Meta's heavy investment in AI infrastructure, such as data centers, and in chip design and development.
Meta also guided revenue for the current quarter to between $36.5 billion and $39 billion, short of analysts' consensus. The double blow of higher spending and a lighter-than-expected outlook rattled investors, and Meta's shares fell more than 16% in after-hours trading on Wednesday.
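As a quick sanity check on the reported figures (a reader's sketch, not part of the report), the year-over-year numbers quoted above are internally consistent:

```python
# Reader's sketch: verify the year-over-year growth figures quoted above.
revenue_q1_2024, revenue_q1_2023 = 36.5, 28.6  # billions of dollars
profit_q1_2024, profit_q1_2023 = 12.4, 5.7     # billions of dollars
eps_q1_2024, eps_growth = 4.71, 1.14           # diluted EPS and its reported YoY growth

# Revenue: (36.5 / 28.6 - 1) is roughly 27-28%, matching the reported ~27%
revenue_growth = (revenue_q1_2024 / revenue_q1_2023 - 1) * 100
print(f"Revenue growth: {revenue_growth:.1f}%")

# Net profit: 12.4 / 5.7 exceeds 2x, i.e. "more than doubling"
profit_multiple = profit_q1_2024 / profit_q1_2023
print(f"Profit multiple: {profit_multiple:.2f}x")

# EPS up 114% implies a prior-year diluted EPS of about $2.20
implied_prior_eps = eps_q1_2024 / (1 + eps_growth)
print(f"Implied Q1 2023 EPS: ${implied_prior_eps:.2f}")
```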

After the earnings report was released, Meta CEO Mark Zuckerberg, CFO Susan Li, and other executives participated in an analyst conference call to interpret the report and answer analysts' questions.

Here are Zuckerberg's opening remarks during the earnings call:

Thank you, and thank you all for being here! Judging by the momentum of our product development and the overall performance of our business, the year is off to a good start. We estimate that more than 3.2 billion people now use at least one of our apps every day, and we are also seeing healthy growth in the U.S. market.

I want to highlight WhatsApp, which has seen a continuous increase in daily active users and message volume in the U.S., and I believe our progress in this area is quite significant. Of course, the advancements in artificial intelligence and the Metaverse are also noteworthy, which is what I want to focus on sharing today.

First, let's start with artificial intelligence. We are building a diverse range of AI services: Meta AI, our assistant, which you can ask any question across our apps and glasses; creator AIs, which help creators engage more deeply with their communities and let fans interact with them; business AIs, which we believe every business on our platform will eventually use to assist customers with shopping and support; internal coding and development AIs; and hardware, such as glasses, for people to interact with AI.

Last week, we released a new version of Meta AI, now powered by our latest open-source large model, Llama 3. Our goal is to make Meta AI the leading AI service in terms of quality and user experience.

The initial launch of Meta AI has achieved satisfactory results, with tens of millions of people trying it and providing very positive feedback. When I first communicated with our team, most of the feedback I received was that people were eager for us to roll out Meta AI in more places.

Therefore, we have begun rolling out Meta AI in several English-speaking countries and plan to expand the service to more languages and countries in the coming months. I believe everyone is familiar with our product playbook by now.

We have always taken a cautious approach, releasing early versions of products to a limited audience to gather valuable feedback and make improvements. Once we believe the product is mature and ready for a broader user base, we will roll it out to more people. The version released last fall is a solid foundation for us to move into the next phase of growth.

For Meta AI, we firmly believe that, powered by Llama 3, it is now one of the smartest AI assistants on the market, and it is free to use. We are not content with simply having an excellent product; we are also committed to making it easy for a broad base of users to adopt. To that end, we are integrating it more prominently across WhatsApp, Messenger, Instagram, and Facebook.

It is worth mentioning that this new version can not only answer complex queries but also offers some unique and compelling features. For example, Meta AI can now animate still images, generate high-quality images quickly, and even create and update images in real time as you type. I have seen many people online raving about this, saying they have never experienced anything like it.

In terms of the core AI model and the intelligence supporting Meta AI, I am very satisfied with Llama 3's performance so far. The 8 billion and 70 billion parameter models we released have demonstrated outstanding performance at their respective scales. The 400 billion parameter model currently in training seems to be leading the industry in several benchmark tests, and I look forward to seeing other models gain further improvements from our open-source contributions.

Overall, I believe the achievements our team has made here are an important milestone, fully demonstrating that we have exceptional talent, rich data, and the ability to scale infrastructure to build world-leading AI models and services. This reinforces my belief that we should continue to invest heavily in the coming years to build more advanced models and create the largest AI service in the world.

Of course, as we continue to expand our capital expenditures and energy costs in artificial intelligence, we will also keep optimizing the rest of the company's operations. In fact, even as we shift a significant amount of existing resources toward AI, we will expand our investments prudently until these new products generate meaningful revenue.

I want to emphasize that our stock has historically been volatile during phases like this in our product strategy, when we are focused on scaling new products but have not yet monetized them. We saw this in our transition to mobile with Reels, Stories, and other formats. I expect that before we fully scale Meta AI, business AIs, and other more profitable services, we will need to go through a multiyear investment cycle.

From historical experience, building these new large-scale experiences in our apps is a long-term value investment for us and for the investors who have always supported us. Currently, we have seen some positive early signs. However, compared to other experiences we have added to our apps, building leading artificial intelligence will be a more challenging task that may take years to complete.

However, from a positive perspective, once our new AI services reach scale, we will have a solid foundation for achieving effective profitability. There are various ways to help build a large-scale business, including expanding business information, introducing advertising or paid content into AI interactions, and allowing people to pay for access to more advanced AI models and more computing resources.

Most importantly, artificial intelligence is already helping us improve engagement in our apps, which naturally lets us show more ads, and it is directly improving advertising to deliver more value. Therefore, as long as the technology and products develop in the direction we expect, each of them will, over time, bring tremendous value to people and businesses.

Some of our efforts have already made significant progress. Currently, about 30% of the posts in the Facebook feed are delivered by our AI recommendation system, roughly double the share of a couple of years ago. Even more exciting, for the first time, more than 50% of the content people see on Instagram is AI-recommended.

Artificial intelligence also plays a crucial role in helping us create value for advertisers. It enables us to show users more precise and relevant ads. For example, our two end-to-end AI tools, Advantage+ Shopping and Advantage+ App Campaigns, have seen their revenue double since last year.

In the process of expanding Meta AI and other AI services, we have always placed efficiency at the core. These efficiency improvements come partly from our advancements in model training and operation, as well as the power of the open-source community. We are working to improve cost-effectiveness, which is one of the main areas where I hope open-source can help us achieve breakthroughs, as we have witnessed in open computing.

Additionally, we have made good progress on our custom silicon. Our Meta Training and Inference Accelerator (MTIA) chips are now successfully handling some recommendation workloads, letting them run on a cheaper hardware stack. As this program matures over the next few years, we plan to extend it to more types of workloads. Of course, while increasing these investments, we will also prudently manage headcount and other cost growth across the company.

In addition to our efforts in artificial intelligence, the Metaverse is another long-term focus for us. It is indeed fascinating to see how these two themes can merge. This integration is particularly evident in the field of glasses. I once thought that augmented reality (AR) glasses would struggle to become mainstream products without holographic displays.

But I firmly believe that stylish, display-less AI glasses will occupy a significant market, which is undoubtedly a highlight in the long-term maturity of this product. Glasses are the ideal device for AI assistants, capable of capturing what you see and hear, providing a comprehensive and in-depth understanding of your surroundings as you handle various tasks.

The Meta AI glasses with visual capabilities that we released this week are a perfect example; now you can ask questions about anything in front of you and receive instant answers. Currently, one strategic dynamic I have been contemplating is that the work of Reality Labs will increasingly integrate into our AI research and development.

Although in the financial report, we view the family of applications and Reality Labs as two separate business segments, strategically, they are closely connected and together form our core business. The vision of Reality Labs is to build the next-generation computing platform, laying a solid foundation for us to create outstanding applications and experiences.

Over time, we need to explore more appropriate ways to present the value created by these two areas. As the glasses ecosystem continues to expand, hardware costs do not seem to have significantly increased, while value flows into multiple different areas. The Ray-Ban Meta glasses we launched in collaboration with EssilorLuxottica continue to sell well, with several styles and colors already sold out.

Therefore, we are intensifying our efforts to develop more styles and bring them to market as soon as possible. Yesterday, we just released a new cat-eye design, Skyler, which is more feminine and charming. Overall, I am confident in our approach of starting with classic styles and gradually expanding to more options.

If we want everyone to use wearable AI, I firmly believe that compared to phones or watches, people's design needs for glasses will be more diverse and personalized. Therefore, collaborating with leading eyewear brands will be a key strategy for us to serve a broader market. Over time, this open ecosystem model will also help us expand the virtual reality and mixed reality headset market. To this end, we announced the opening of Meta Horizon OS, our operating system developed specifically for Quest.

As the ecosystem matures, I believe mixed reality applications will become richly diverse, and the demand for different designs will far exceed what we can build ourselves. For example, a work-focused headset may not suit sports but could be made lighter by tethering to a laptop; a fitness-focused headset might use sweat-wicking materials to optimize comfort; an entertainment headset might prioritize the highest-resolution displays; and a gaming headset might focus on peripheral connectivity and haptic feedback, or integrate Xbox controllers and a Game Pass subscription.

It is important to clarify that while our first-party Quest devices have achieved significant success in the market, we will continue to push the frontiers of technology to allow more people to enjoy the charm of mixed reality. At the same time, opening our ecosystem and operating system will help the entire mixed reality field achieve faster development.

In addition to artificial intelligence and the Metaverse, our applications have also made significant progress in functionality and experience. As I mentioned earlier, WhatsApp is experiencing strong growth in the U.S., and the application of AI recommendations in feeds and Reels videos has also become an important trend. However, it is worth mentioning that video functionality remains a major highlight in our applications.

This month, we launched a brand new full-screen video player on Facebook, which integrates Reels short videos, long videos, and live content into a coherent viewing experience through a unified recommendation system. On Instagram, Reels and video content continue to attract users, with Reels alone accounting for half of the time users spend in the app. Additionally, Threads is also growing steadily, with monthly active users surpassing 150 million and maintaining good momentum.

Of course, I also want to mention some news my daughters will be especially excited about: Taylor Swift has now joined Threads. This is big news in our household.

Overall, I am proud of what we have achieved so far this year. We have strong execution capabilities to seize the various opportunities before us. Thank you to all the teams driving these advancements and to every partner who has joined us on this journey.

Here is CFO Susan Li's interpretation of the earnings report:

Two core elements are driving our revenue growth: our ability to create engaging experiences for the community and how we effectively convert this engagement into monetary value over time.

First, we are satisfied with the trend of user engagement, and our product prioritization is showing strong momentum. Our investments in developing advanced recommendation systems are continuously driving user engagement on the platform. This clearly indicates that users are discovering more added value by exploring content from accounts they had not previously followed. As we continue to optimize these systems, the accuracy of recommended content in the apps is steadily improving. With the ongoing evolution of our models, we see tremendous potential for further enhancing recommendation relevance and personalization.

Video content continues to grow on our platform and now accounts for more than 60% of the time users spend on Facebook and Instagram. Reels is clearly the main driver of that growth, and we are working to bring Reels, longer-form video, and live video together into a deeply integrated experience on Facebook.

In April of this year, we launched this unified video experience in the U.S. and Canada, which is gradually supported by our new ranking architecture. We expect that over time, this architecture will provide users with more relevant and precise video recommendations. Additionally, we have introduced deeper generative AI integrations in our apps in the U.S. and over a dozen other countries.

Now, people can use Meta AI in our chat interface and also experience it in Facebook recommendations and groups. We expect these integrations to enhance our social discovery strategy, as our recommendation systems can help users uncover and explore their interests, while Meta AI can guide them to delve deeper into topics of interest. As we continue to roll out valuable new features and expand the community, Threads also maintains strong appeal.

Next, let's talk about the second key factor driving revenue growth—the improvement in monetization efficiency. This work mainly falls into two areas: first, optimizing the level of ad placements within organic user engagement. Here, we are continuously studying users' ad viewing preferences to better grasp the optimal timing, location, and audience for ad placements to achieve more efficient ad optimization.

For example, we have made significant progress in precise ad placements, able to adjust the position and quantity of ads in real-time based on user interest in ad content, while striving to minimize ad disruption. Additionally, we are constantly exploring innovative ad formats. We will continue to focus on this area and view current monetization levels in video and messaging, which are relatively low, as potential growth opportunities.

On the path to improving monetization efficiency, optimizing marketing performance is also crucial. Similar to our work in organic recommendations, artificial intelligence plays an increasingly important role in this area. We are continuously refining our ad models to provide advertisers with better performance.

One notable outcome is our new ad ranking architecture—Meta Lattice. Since last year, we have begun to widely promote this innovative architecture. Meta Lattice allows us to run larger models that can generalize learning across multiple objectives and interfaces, significantly improving operational efficiency and enhancing ad effectiveness compared to many smaller ad models optimized for single objectives and interfaces in the past.

Additionally, we are using artificial intelligence to give advertisers more automation. With our Advantage+ product suite, advertisers can automate individual steps of campaign setup, such as selecting which ad creative to show, or use our end-to-end automation tools, Advantage+ Shopping and Advantage+ App Campaigns, to automate their campaigns entirely. More and more advertisers are adopting these solutions, and we look forward to driving broader adoption this year while applying what we learn to a wider range of ad investments.

Next, please allow me to delve into our capital allocation strategy. Currently, we still see many attractive investment opportunities that can help our core business thrive in the short term while capturing important long-term opportunities in generative AI and Reality Labs.

As we develop more advanced, compute-intensive recommendation models for the training and inference needs of generative AI and continue to expand capacity, we recognize that having sufficient infrastructure capacity is crucial to seizing these opportunities. Therefore, we expect to significantly increase our investments in infrastructure over the next few years to ensure we can meet the growing computational demands.

Additionally, another long-term strategic focus is Reality Labs, and we will continue to invest heavily in this area. Notably, we are beginning to observe that our AI initiatives are gradually merging with the work of Reality Labs. For example, through the Ray-Ban Meta smart glasses, users in the U.S. and Canada can now utilize our multimodal Meta AI assistant to complete daily tasks without frequently using their phones.
In the long run, we expect generative AI to play an increasingly important role in our mixed reality products, making it easier to develop immersive experiences. Accelerating AI research and development will ensure that we provide the highest quality service to users as we transition to the next computing platform.

Here is the analyst Q&A session:

Goldman Sachs analyst Eric Sheridan: Mark, you mentioned using past investment cycles as a comparison, such as Stories and Reels. I understand you haven't provided long-term guidance today, but through these comparisons, how should investors think about the length and depth of the investment cycles for artificial intelligence, the broader Reality Labs, and mixed reality? Also, you all mentioned the impact of AI on the advertising ecosystem. What metrics are you focusing on in terms of consumer adoption or utility to understand whether AI adoption is aligning with the investment cycle?

Zuckerberg: Regarding timing, I think it is indeed difficult to infer from past cycles. But I want to emphasize that we typically spend several years focusing on building and scaling products. We often do not pay much attention to monetization until a new area reaches a significant scale, as improving monetization in other areas before new products reach scale is a higher leverage point for us. Therefore, when a product reaches a certain stage, savvy investors will notice the scale of the product and may even see clear monetization opportunities before revenue is realized.

In fact, we have already seen such trends in Stories, Reels, and the transition to mobile devices. Essentially, we first build inventory for a period of time and then gradually monetize it. During this time, as the product scales, we may sometimes not even be profitable from this new product. So, I think that is the analogy I am making, but it indicates that in the next phase, we should focus on how Meta AI is meaningfully launched as consumer products scale.

While we currently do not have any specific data to share, I want to emphasize that this is something I am primarily focused on this year and will likely be devoted to developing this product and other AI products and their engagement for much of next year. I believe that if we have enough confidence that these products are progressing along a good trajectory, they will ultimately become very successful businesses. That is the key point I want to emphasize.

Morgan Stanley analyst Brian Nowak: I have two questions. First, regarding the improvements in the recommendation engine, Susan previously mentioned opportunities to further enhance model relevance. Can you elaborate on that? Can you give us an example of areas where you are still using suboptimal models or where there are opportunities to capture and utilize improvements, as well as data that you have not fully leveraged? The second question is, what are the main limiting factors you encounter when encouraging advertisers to gradually adopt AI tools? How do you plan to overcome these challenges in 2024 and 2025?
Susan Li: On your first question, about how we use and improve recommendation models to drive engagement: in the past, each of our recommendation products had its own independent AI model. In recent years, however, we have developed a new model architecture designed to power multiple recommendation products at once. Last year we validated it first on Facebook Reels, where it increased watch time by 8% to 10%.

This year we plan to extend that architecture beyond Facebook Reels recommendations to other surfaces, such as the Facebook video tab. Although the specific impact is still hard to predict, we are confident that over time this new architecture will deliver more high-quality video recommendations, and if it succeeds, we will explore using it to power other recommendations as well.

We face a similar situation in advertising. We launched the new Meta Lattice model architecture, which consolidates many smaller, specialized models into a larger, more comprehensive one that predicts ad performance more accurately. It has already been deployed on Facebook and Instagram, driving ad-performance gains through 2023, and in 2024 we will continue extending it to support more ad objectives and surfaces, such as web and apps. We have invested heavily in these foundational model architectures for both organic engagement and advertising, and we expect them to keep improving effectiveness over time.

As for your second question, about what limits advertisers' testing and adoption of the next generation of AI tools, I see two main aspects. First, the new generation of AI ad-creative features in our ad creation tools is still early. We have seen some verticals and advertisers of various sizes begin experimenting with them, but further rollout and refinement are needed. In particular, we have noticed broad adoption of the image-expansion feature among small businesses, and that will be a key focus for us in 2024. We aim to improve the quality of generated ad content by continuously strengthening the underlying foundation models and building more innovative features on top of them.

Second, we see tremendous potential in business AIs. We are testing features that let businesses use AI in commercial messaging chats, so these AIs can interact with customers in real time and offer shopping advice and other services. Although these tests are still very early, the feedback has been encouraging: businesses report that the AI saves them significant time, and consumers notice faster response times. We have also learned a lot from these tests, which helps the AIs improve over time, so we will expand the tests in the coming months and gradually scale business AIs while maintaining quality.

Bernstein Research analyst Mark Shmulik: In the advertising market space, we have heard discussions about the contributions of Chinese advertisers multiple times. Can you share the latest trends regarding this portion of spending?

Susan Li: In the first quarter, advertising spending from China-based advertisers continued to grow strongly, driven primarily by the online commerce and gaming industries. This also shows up in our Asia-Pacific advertiser segment, where advertising revenue grew 41% year over year in the first quarter, again the fastest of any region. We also saw strong growth elsewhere; for example, total revenue growth from North American advertisers accelerated by 6 percentage points. I should emphasize that we are not quantifying the first-quarter revenue contribution from China-based advertisers, nor do we have forward-looking expectations for it to share. However, given that Chinese advertisers spent much of 2023 recovering from earlier pandemic headwinds, we will be lapping periods of increasingly strong demand as 2024 progresses.

JPMorgan analyst Doug Anmuth: Mark, can you share what significant changes you see in the current AI business environment and opportunities compared to three months ago? Also, given the developments in AI, do we need to increase investments to seize larger opportunities?

Zuckerberg: Yes. My view is that we have become more optimistic and ambitious about the prospects of AI. Last year, we released the Llama 2 model, and at that time, we were very excited about its potential, believing it would become the cornerstone for building many valuable products and integrating into our social products. However, the situation has changed significantly now. With the latest models, we are not just building excellent AI models; we are able to create a whole new range of outstanding social and business products based on these models. In fact, I believe we have proven our capability to build leading models and rank among the world's top AI companies. This brings us many additional opportunities, not just the obvious ones.

As I mentioned in my previous remarks, combining Llama 2 with the success of Meta AI, we achieved a true technological validation. This proves that we have the talent, data, and scalable infrastructure to lead in this field. Through Meta AI, we are working to make it the most widely used and best AI assistant in the world, and I believe this will bring tremendous value.

Therefore, all these factors make me more determined to ensure that our investments in AI can maintain our leading position. In pursuing this goal, we will also strive to scale our products and achieve profitability at the right time. As I mentioned earlier, we have gone through similar cycles, but fundamentally, seeing our team create such outstanding products makes me more optimistic about the future and set higher goals. I believe these products will ultimately become a very important series of products for us, and their significance may become even more pronounced.

Bank of America analyst Justin Post: Mark, regarding capital expenditures, you mentioned an investment cycle. I want to know if you are considering shifting some investments from the Metaverse to the AI space? Is there a possibility for them to merge, allowing funds from other areas to support AI development? Additionally, long-term capital returns are a key issue for investors. We understand that past capital expenditures have yielded high returns, and profit margins remain at a high level. So, how do you view the returns on invested capital over the next two to three years?

Zuckerberg: Regarding the resource transfer issue, we have actually made such adjustments in multiple areas. Whether it is computational resources or other types of resources, we are working to reallocate them to AI-related projects. In particular, I remain optimistic about the long-term potential of Reality Labs to establish a new computing platform. As I mentioned earlier, eyewear is an important area of investment in Reality Labs. We believe this will be a highly promising platform for the future. Moreover, with technological advancements, we are no longer solely reliant on holographic displays to explore this market. Our collaboration with Ray-Ban on the glasses project is progressing well, leading me to believe it could become a very meaningful platform and may achieve this goal sooner than we expected. Therefore, while the work of Reality Labs is increasingly focused on AI, I still firmly believe we need to focus on building these long-term platforms.

Barclays analyst Ross Sandler: Mark, regarding the collaboration between Meta AI and natural search citations with Google and Bing, I am curious. Do you believe that Meta AI has the potential to generate search ad revenue in the long run? Or do you think it might move towards a paid subscription model like other platforms? Additionally, you mentioned that you are developing AI tools for businesses and creators. So, when technology advances to the point where we can interact with a custom AI for Taylor Swift to purchase concert tickets, how do you view the evolution of the business model? What would the outcome be?

Zuckerberg: In our collaboration with Google and Microsoft, we are primarily integrating real-time information into Meta AI, which indeed adds value to our services. However, this is fundamentally different from search advertising. Currently, we do not plan to enter the search advertising space. I believe this will be a completely different industry. Over time, advertising and paid content may emerge in Meta AI interactions. For example, users may pay for access to more advanced models, enjoy more powerful computing capabilities, or access certain premium features. But these are still in the early stages of development.

In fact, I believe the biggest potential opportunity may lie in the business information space. This can not only enhance user engagement and ad quality in our existing apps but also provide strong support for creators and the over 100 million businesses on our platform. They can interact with their communities more easily through AI, boost sales, and optimize customer service. For creators, whether selling concert tickets or promoting products, AI can open up new business opportunities for them. Currently, many creators and businesses have not fully utilized advertising to achieve their business goals, and the commercialization level in the business information space is relatively low. However, with the proliferation of AI, I believe this situation will improve. AI will significantly lower communication costs, providing businesses and creators with more touchpoints while improving the quality and effectiveness of advertising.

I firmly believe this is likely to be an exciting opportunity, although it may not manifest in the next quarter. It is not something that can be rapidly expanded in just one quarter, but it is also not an opportunity that requires a long wait of five years. Therefore, in my view, this is a highly promising opportunity, and I am genuinely excited about it. As Meta AI continues to expand, I believe it will have its unique monetization opportunities, and we will gradually establish corresponding business models. But before that, our primary goal is to make Meta AI a core part of the work for hundreds of millions or even billions of people. This is our important task ahead and the key to creating tremendous value. We firmly believe that Meta AI has great promotional potential, and this is precisely where we will focus our efforts next.

Citi analyst Ronald Josey: Mark, I want to further explore the topic you previously mentioned regarding the series of improvements, investments, and innovations that have fostered an optimistic atmosphere within Meta. We felt this deeply during the recent Meta AI experience. So, can you share how a 400 billion parameter model might enrich the experience for Meta users in the future? And how do you see these models evolving in the coming months and years as information delivery becomes a more important focus? This is clearly a grand vision.

Zuckerberg: In my view, the next key stage in the development of these models will be handling more complex tasks and evolving into more agent-like entities, rather than remaining simple chatbots. By chatbots, I mean the kind where you send a message and it replies with a message, an almost one-to-one interaction. Agents work at a higher level: you give one an intention or goal, and it autonomously executes multiple queries in the background to help you achieve it, whether that goal is searching for information online or ultimately finding the product you want to purchase. These tasks are both complex and diverse, and I believe people may not yet realize how rich the requests they will be able to make of computers will become.

I firmly believe that as the scale and performance of these models improve, they will enable more interesting and in-depth interactions. For example, in business applications, you do not just want a sales or customer-support chatbot that can only respond to what you say. If you are a business with a clear goal, you want to better support customers, position your products in a certain way, and encourage people to buy things they are interested in. That interaction is more like a multi-turn, in-depth conversation. So compared with the agents we will have a year from now, today's chatbot-based business agents will seem very primitive.

Moreover, the reasoning and planning capabilities of these models are continuously improving, enabling them to better help creators and businesses achieve their commercial goals. I therefore believe this will be a very powerful capability with tremendous opportunities. Most importantly, what we are demonstrating now is that we can build leading models internally. I think this is very meaningful, and we will continue to invest in this area for the long term, because I believe it is an excellent long-term strategy. On today's call, I simply wanted to emphasize our focus here and our long-term investment, because that is what we are doing.
