Exclusive Interview with OpenAI's First Investor Vinod Khosla: 7 Important Questions About AI
Author: Lynn Yang, Silicon Release
One evening last week, I listened to a podcast on Spotify while having dinner.
The guest was Vinod Khosla, the first investor in OpenAI and founder of the top tech venture capital firm Khosla Ventures.
Clearly, it's not every day that you get the chance to hear the thoughts of OpenAI's first investor.
So I organized Khosla's key points to share here. These are his seven key takeaways about AI:
(1)
Context: The host asked Khosla what AI use cases he primarily uses in his daily life as OpenAI's first investor.
Khosla: Mainly two, ChatGPT and Tesla's autonomous driving.
Regarding Tesla, the number of times I just let it drive is astonishing. It feels like complete autonomous driving. You know, a few nights ago, I landed at 3 AM. At that time, I thought: I'm too tired; I won't be a safe driver. So I just said: Take me home. That experience was amazing.
These are the main uses of AI for me. And I use these two applications many times a day.
As for ChatGPT, I'm using it to plan my spring garden.
I told ChatGPT: I want plants that grow in zone 9a (referring to plants suitable for USDA hardiness zone 9a). I want the height for each area because I'm layering them.
Then I also said: I want some flowers that bloom in spring, some that bloom in early summer, some that bloom in late summer, and some that bloom in autumn.
This is actually a design-like task. I had ChatGPT help me arrange 20 plants, and it provided me with all this information, including: watering needs, climate zones, heights, areas without sunlight, semi-shaded areas, and bright shade areas.
So, ChatGPT did something amazing. Those tasks would have taken me 3-4 hours. So yes, I designed the garden entirely myself; I didn't hire a designer. And I can assure you: my garden is blooming now, and you wouldn't believe how it looks.
(2)
Context: The host asked Khosla how he views Apple's announcement of its AI strategy and collaboration with OpenAI, and what impact this collaboration will have on the AI startup ecosystem in the coming years.
Khosla: I think Apple needs to do something for AI, as Siri's reputation has started to decline.
First, Apple made the smart move of staying open, allowing users to access any LLM. At the same time, Apple has chosen to embed and integrate OpenAI's ChatGPT into iOS, which was enough to make Elon Musk uneasy and prompt him to claim he would ban Apple devices.
So I think what's more important is that Apple is actually showcasing something very significant: How do we interact with computers?
I believe that over time, Siri will evolve into the beginning of a true human interface. From this perspective, I think this is big news because we are witnessing this beginning. It's exciting.
From OpenAI's perspective, this collaboration has clearly secured it the best position in the competition: direct interaction with users. In fact, many companies want that business.
On the other hand, I do think Apple must have carefully considered where the best AI will be in one to two years.
Therefore, in many ways, Apple's collaboration with OpenAI is a validation of OpenAI and a very important milestone regarding how humans interact with machines.
(3)
Context: The host asked Khosla: Apple's case illustrates that a small model can do many things, so what will be the positioning of large models in the future? If everyone wants small models, will it turn into a scenario where you can talk to many "people," some with an IQ of 50, some with an IQ of 100, and some with an IQ of 10,000? The key question is where you want to spend your money: asking a question of someone with an IQ of 10,000, or of someone with an IQ of 70 who knows the contents of your email? This involves balancing product direction against the cost of model computation. Do you think the future will be a competition like this?
Khosla: Small models and large models are different and cannot replace each other.
Moreover, I might disagree with the IQ assumption for the future. In fact, what I believe will happen is that the cost of computation will become very low.
I bet that in a year, the cost of computation will be 1/5 to 1/10 of what it is today. Therefore, my advice to all our startups is: ignore your computation costs because any assumptions you make, any dollars you spend optimizing software, will become worthless within a year.
The reason is that every owner of a large model is trying to reduce computation costs. As engineers at OpenAI, Google, and the cloud computing companies work to bring down the cost of expensive AI chips, computation will soon become very cheap.
So forget about it, and rely on competition among the large models in the market, like Google's Gemini and OpenAI's models, to drive costs down to negligible levels. In fact, once costs drop to 10% or less of the current level, they won't matter.
Additionally, for a large model to outperform the other large models, its training costs must be an order of magnitude higher. This is why I believe open-source models are not viable: the training costs are too high. But once you accept that training cost, you will want your model to be used as widely as possible, for two reasons:
First, you want to maximize your returns, and the model with the lowest costs will yield the greatest returns.
Second, and more importantly, there is a lot of data available for you to train the next generation of models.
Therefore, for various reasons, you want to maximize usage. And you should play the long game: I believe the AI model game is mostly played over a five-year timeframe, not within a year, and over that timeframe costs will fall.
Today, Nvidia extracts quite a tax from everyone, but each model will come to run on many types of GPUs and other compute, and what model companies need most is data generation. So I believe that in the coming years, revenue will not be a significant metric for model companies.
Of course, you don't want to lose more money than you can afford. But you also don't want to optimize for profit, because you're trying to build a large user base, gather a lot of data from user usage, and learn to become a better model.
I do believe models still have a lot of room to improve in intelligence, whether in reasoning, probabilistic thinking, or some form of pattern matching.
So I think we will see amazing progress almost every year. Some companies execute better than others, and this is the main difference between companies: OpenAI excels in execution, while Google has outstanding technology but lacks clarity in execution.
(4)
Context: The host asked Khosla: over a five-year timeframe, some people in the tech industry really believe that all the value of AI will flow to the existing large companies, and that even then the technology will be commoditized. What do you think the five-year outlook will be? And which AI areas not covered by the existing large companies are you most focused on?
Khosla: So I don't believe that building foundation models and trying to compete with OpenAI and Google is a good position to be in.
Because the large LLMs will belong to the large players who can run very large clusters and can pay for proprietary content and data, whether that's paying Reddit or a company with access to every scientific article.
So the biggest players do have an advantage.
On the other hand, we recently announced an investment in a symbolic logic company called Symbolica. They take a very different approach to building models, one that doesn't rely on large amounts of data or massive computation. It's a high-risk, high-upside investment; if Symbolica succeeds, the impact will be dramatic.
So I think there are still other approaches at the model level. If I called my friend Josh Tenenbaum at MIT, he would say the biggest contribution is probabilistic programming, because human thinking is probabilistic, not pattern matching. That's an important factor.
Therefore, I believe foundational technology is far from finished. We are leaning ever more heavily on transformer models, but there are other kinds of models yet to be developed. It's just that everyone is afraid to invest in anything outside transformers. We haven't been.
You know, I am very focused on esoteric things. In fact, Symbolica is based on a theory called category theory, which most mathematicians have never heard of.
So we made a big bet, probably about 15 to 18 months ago. I think investing in cloud computing is foolish: people buy GPUs to build clouds, but they will lose to Amazon's scale and efficiency, and to Microsoft's.
Both companies are building custom chips so that within a few years they won't have to pay the Nvidia tax. Yes, there's AMD, and there's still a lot to do in the chip space. But at the next level up, the application level, there are huge opportunities.
(5)
Context: In the following content, Khosla talks about the huge opportunities he sees in AI applications and lists many examples.
Khosla: One of my important predictions is that in the future, almost all expertise will be free.
So by this logic, whether you're talking about primary care providers, teachers, structural engineers, or oncologists, there are hundreds or even thousands of professional fields, and each of them will produce a very successful company.
Recently, we also invested in a company building AI structural engineers, called Hedral. And of course, we invested in something very popular, Devin; everyone knows Devin. They are building an AI programmer, not a tool like Copilot for programmers: an actual programmer.
It's an odd question, but how many structural engineers are there now, and how much do we spend on structural engineering? You hand a building's structure to a structural engineer and, two months later, you get a design back plus one revision. An AI structural engineer can give you five revisions in five hours, saving months on a construction project. So that's a great niche example, and it could be a multi-billion-dollar niche market.
So my point is: ……