Miss Jia's conversation with Kevin Kelly: Judgments about AI that I never wrote in my book
Author: Miss Jia | Editor: Tian Siqi
"Slower than it looks; LLMs tend average; Not replacing humans; New, not substitutions; Cloud first, then AI; Must change your org; Just beginning… These seven predictions are not exhaustive. Can you give us some guesses that have never been mentioned elsewhere?" I asked KK.
KK took off his glasses, paused for at least 30 seconds, then put me through several long rounds of questions until he suddenly cut in.
"Then here comes my prediction. My prediction is that in 10 years, training data won't be important," KK said.
Kevin Kelly, known as "KK" among tech enthusiasts, has become a symbol of an era with his beard and graying hair. He has written books such as "Out of Control," "What Technology Wants," and "The Inevitable," and is hailed as the "father of Silicon Valley spirit," having predicted trends like cloud computing, virtual reality, and the Internet of Things over 30 years ago. On June 16, 2024, he visited Suzhou to participate in a tech lecture co-hosted by Suzhou Technology Business School and Shanghai Jiao Tong University’s Shanghai Advanced Institute of Finance. The above conversation took place during an exclusive interview in the conference room after his lecture, which extended from the originally planned 20 minutes to nearly an hour.
In this article, Miss Jia engages in a deep conversation with Kevin Kelly, discussing recent developments, AI innovation, and the essence of humanity. Aside from some differing judgments on specific details, KK and "Jiazi Guangnian" share similar views: the "progress bar" of AI changing the world has just begun.
1. Recent Situation: "That occupies all my time"
"You must have 1,000 hours. I may have trained for 800 hours, but it's still not 1,000 hours."
Miss Jia: News comes and goes, and the world's attitude towards AI has changed a lot, especially people's views on AI 2.0, AGI, or large models. Recently, how much time have you devoted to tracking cutting-edge AI developments?
Kevin Kelly: That occupies all my time; what I do is continuously read articles about artificial intelligence.
Miss Jia: Who's your favorite writer?
Kevin Kelly: As you said, there are new articles every day, possibly every hour, about new discoveries regarding language models.
Just last week, there was a paper from Anthropic about the features inside a model and how to manipulate them, which bears on the "black box" problem in AI. They said we can actually see a little of the mechanism behind it, which is very interesting.
Miss Jia: Do you usually use AI applications like Midjourney, Pika, Runway, etc.?
Kevin Kelly: I create one AI-generated artwork every day; I've been doing it for a year.
Miss Jia: So you are now an AI native?
Kevin Kelly: Still on the way. You must have 1,000 hours. I may have trained for 800 hours, but it's still not 1,000 hours.
Miss Jia: You are a philosophical thinker in the tech field. Has your tech philosophy iterated or changed in the recent wave of AI?
Kevin Kelly: That's a good question. My view of tech philosophy hasn't changed; if new phenomena arise, I would think they are continually confirming and reinforcing my philosophy.
So far, I haven't seen any events that might change my view of technology. My tech theory is evolving, and everything I see in AI hasn't changed my underlying tech philosophy.
2. AI View: "What I really worry about: the weaponization of artificial intelligence"
What are the best and worst decisions made by OpenAI?
Miss Jia: At the top of your official website, there is a small line: OVER THE LONG TERM, THE FUTURE IS DECIDED BY OPTIMISTS. Given the recent series of advancements in AI and the rapid iteration wave, along with the identity crisis of humanity you just mentioned, do you have any concerns?
Kevin Kelly: Overall, I'm not particularly worried. There are some things I do care about, but I believe we will solve them. There are also problems we don't yet know how to solve. Take climate change: we know what needs to be done. But in the field of AI there are issues we don't know how to address, and these could trouble us in the future, such as the weaponization of AI. Should we allow a robotic soldier to exist? Should AI have the capability to kill? We don't know, and it is genuinely hard to decide. So this is what I really worry about: the weaponization of artificial intelligence.
Of course, I also care about whether AI is open source or closed source, whether it is public or only owned by companies. My thought is that it should be public.
Miss Jia: Do you think AI should be open source?
Kevin Kelly: Yes, the source code should be open to the public; this is another thing I care about.
Miss Jia: You still consider yourself an optimist.
Kevin Kelly: I am very optimistic. I believe we will eventually solve these AI-related issues; we just don't know how to do it yet. That is to say, the outcome is certain; it's just that the path is not clear, so I am very optimistic. Of course, there are some things that others worry about that I do not, like I am not worried about unemployment. Additionally, I am not worried that artificial intelligence will pose a threat to us.
Miss Jia: You have fans all over the world and must know many great scientist friends. Do they agree with your views on AI, or do more of them hold opposing opinions?
Kevin Kelly: This topic is indeed interesting; there is currently a huge divide. There are two camps regarding super AI; some very excellent scientists are worried, while another group of excellent scientists is not concerned, which is fascinating. I am in the camp that is not worried about AI.
Miss Jia: So far, what are the best and worst decisions made by OpenAI?
Kevin Kelly: The worst decision is that OpenAI did not open-source its large models; that is a very bad decision. Another bad decision was (at one point) firing founder Sam Altman.
The best decision is that OpenAI has always maintained rapid development, rapid iteration, and continuous innovation, and that momentum is also what brought Sam back. It is firm in insisting that development should not be overly cautious but should genuinely try to grow fast.
3. Boundaries: "AI excels at hill climbing, not hill making"
You can have Midjourney or Dall-E draw a famous astronaut riding a horse, but you can't have the horse ride the astronaut because that is outside the learning scope.
Miss Jia: You mentioned two types of creativity, Type 1 and Type 2. You drew a very amusing picture, saying AI excels at hill climbing rather than hill making. What is the difference between the two?
Kevin Kelly: The creativity of large language models is really only one type of creativity, the kind that operates within known boundaries. They fill in and explore everything within the space of what we already know. They do not invent entirely new domains.
Breakthroughs are basically about creating new territories, rather than finding solutions within existing constraints.
What they are mainly doing now is looking for answers within the scope of what we know. You can have Midjourney or Dall-E draw a famous astronaut riding a horse, but you can't have the horse ride the astronaut because that is outside the learning scope.
Miss Jia: Are you a proponent of scaling laws?
Kevin Kelly: To some extent, yes. To make it easier for "Jiazi Guangnian" readers to understand, let me explain first. Scaling laws say there is a mathematical, proportional relationship describing how a model's loss falls as the model gets larger, and how far it remains from optimal performance.
We don't know if this extends indefinitely. Can it keep scaling forever? Will the curve eventually flatten out? So far, I think the evidence suggests it keeps following a straight line (on a log-log scale). This is different from the internet.
Of course, the evidence does not come from the scaling laws themselves; the scaling laws themselves are a hypothesis.
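As a rough illustration (mine, not KK's), the scaling laws he describes are usually written as a power law in model size. The sketch below uses the functional form popularized by Kaplan et al.; the constants `n_c` and `alpha` are assumed example values, not claims about any particular model:

```python
# Illustrative sketch of a neural scaling law (Kaplan et al. style),
# with assumed constants: loss falls as a power law in parameter count N,
# which is why it looks like a straight line on a log-log plot.

def scaling_law_loss(n_params, n_c=8.8e13, alpha=0.076):
    """Hypothetical loss L(N) = (N_c / N) ** alpha."""
    return (n_c / n_params) ** alpha

# A power law means doubling model size always cuts loss by the SAME
# constant factor (2 ** alpha), no matter how big the model already is.
ratio_small = scaling_law_loss(1e8) / scaling_law_loss(2e8)
ratio_large = scaling_law_loss(1e10) / scaling_law_loss(2e10)
print(ratio_small, ratio_large)  # the two ratios are identical
```

That constant improvement ratio is exactly what "the curve keeps following a straight line" means here; the open question KK raises is whether real models keep obeying it indefinitely.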
Miss Jia: Recently there has been a popular viewpoint in the AI industry: everything comes down to datasets. Given enough time, AI's effectiveness has little to do with algorithms or other methods; only the dataset matters.
Kevin Kelly: There is a paper that says the quality and impact of data are greater than algorithms. I believe this is very likely.
I predict we will see an AI company market itself on its training data. Someone will say: we didn't do anything special with algorithms; we simply trained on the best data. We trained it on high-quality books and other high-quality material. We didn't train it on Reddit.
It's like education. If you have a child, how would you educate them? What will you let them read? Will you let them watch Twitter, or read the classics? Some will say: our AI reads only the classics, the highest-quality books, the highest-quality scientific journals. It does not read Reddit, Twitter, or Weibo. It reads good stuff; it received the best training. Some companies will use this very carefully curated training data as a selling point. Yesterday, Getty Images announced it would release an AI image generator trained only on the Getty library.
4. Speculation: "In 10 years, training data won't be important"
In 10 years, we won't need millions of data points to have reasoning abilities.
Miss Jia: Your fame largely comes from your reputation as a prophet, yet you just put big words on the screen: No predictions. Still, you offered seven judgments: Slower than it looks; LLMs tend average; Not replacing humans; New, not substitutions; Cloud first, then AI; Must change your org; Just beginning.
The judgments are not exhaustive. Can you give us some guesses that you have never mentioned elsewhere?
Kevin Kelly: (pauses for a long time) Generally speaking, if I have ideas, I will definitely tell others. Let's continue the conversation, and then I will try to come up with one.
(continues to pause) Regarding artificial intelligence, I don't know much about AI in China. You are obviously reading papers too; what do you think is happening in China regarding AI right now?
Miss Jia: I think the similarities between China and the U.S. are much greater than people imagine.
Kevin Kelly: Similarities? How so?
Miss Jia: For example, talent. China has many young talents, students or people in startups. They are very similar to the young talents I encounter in the U.S. or other countries, because AI is so new and so cutting-edge.
My own background is mathematics, and compared with mathematics, AI's history is short. Many friends around me think artificial intelligence is too complex and difficult to understand, but AI has only a little over half a century of history; if you just want an overview of its history and subfields, reading two or three books is enough. As a discipline, everyone starts from a similar point. China may not have big names like Musk or Altman, but when you look at the young talent, the overall foundation is very similar.
The second dimension is data. Perhaps China may have some advantages.
Kevin Kelly: Who has access to the data? Can a young startup access this data?
Miss Jia: I think we are just getting started. The government is trying to establish basic infrastructure to allow people to access the data they want in a good way.
Kevin Kelly: What does that good way look like?
Miss Jia: Data markets. You know, data has been written into basic policies and has become a factor, just like capital, labor, technology, and land; they are referred to as "production factors" in China.
Kevin Kelly: Do your entrepreneurs have no difficulty accessing data?
Miss Jia: It's not that there are no difficulties. They can access data much as entrepreneurs in other countries can, perhaps more easily, but they still face many challenges. I think the biggest challenge is not policy or permissions but the datasets themselves: different languages have different datasets.
Kevin Kelly: Then here comes my prediction. My prediction is that in 10 years, training data will no longer be important.
Right now, all large language models rely on methods that require enormous amounts of data, but other types of cognition and intelligence are what they lack. A human toddler can tell cats from dogs after seeing 12 examples; toddlers do not need 12 million data points to know the difference.
I believe that in 10 years, we won't need millions of data points to have reasoning abilities. This is a huge advantage for startups because they don't need to possess all this data. This is my speculation.
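KK's toddler analogy can be sketched with a toy example (mine, not from the interview): a 1-nearest-neighbour classifier that "learns" cats versus dogs from only a dozen labelled points. The two numeric features are invented purely for illustration:

```python
# Toy illustration of learning from a handful of examples rather than
# millions of data points. Features are hypothetical, e.g.
# [ear_pointiness, snout_length], scaled to 0..1.

def nearest_neighbor(examples, query):
    """Return the label of the training example closest to `query`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: sq_dist(ex[0], query))[1]

# Exactly a dozen labelled examples, like the toddler's 12 sightings.
examples = [
    ([0.90, 0.20], "cat"), ([0.80, 0.30], "cat"), ([0.95, 0.10], "cat"),
    ([0.85, 0.25], "cat"), ([0.90, 0.15], "cat"), ([0.88, 0.20], "cat"),
    ([0.20, 0.90], "dog"), ([0.30, 0.80], "dog"), ([0.25, 0.85], "dog"),
    ([0.15, 0.95], "dog"), ([0.20, 0.80], "dog"), ([0.30, 0.90], "dog"),
]

print(nearest_neighbor(examples, [0.87, 0.18]))  # near the "cat" cluster
```

Of course, real-world perception is far messier than two clean clusters; the point of the sketch is only that some forms of generalization need very little data, which is the gap KK expects future systems to close.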
5. Essence: "Consciousness makes humans unique, but we will also give consciousness to AI"
We will discover what it means to be human together with AI.
Miss Jia: Can you give me some insights about your views on the essence of humanity and artificial intelligence?
Kevin Kelly: The question is that we also don't know what the essence of humanity is. The way to find out the answer is to create artificial intelligence. We will succeed.
We once thought creativity was what made us unique, but we have changed our minds because AI also has creativity. So now we say: well, consciousness is what makes us unique. But we will give consciousness to AI too…
Miss Jia: How far will this process of "giving" continue?
Kevin Kelly: Driven by technology and AI, we will continuously redefine ourselves. The more important question is not who we are, but who we want to become. What do we want humanity to be? This is a more powerful question.
Because we have a little bit of choice. This is exciting for me; this is the ultimate charm of AI. It illuminates the fog of who we are and inspires us to become the kind of people we should be.
Miss Jia: Accelerated computing is touching the unmapped territories of science; what are the limits of this path?
Kevin Kelly: Just as we have no theory about intelligence, we also have no theory about humanity.
We cannot predict where AI is going because we have no theory of AI. We have no theory that says: if you do this, then that will happen; if you build that, you will get this; if you run all these calculations, you will get this result… We currently have no such theories. This is quite unusual.
In physics, we have theories—if you build a collider large enough, you will find that particle. We do not have such theories in the field of intelligence. But what is exciting is that we will discover what it means to be human together with AI.
Miss Jia: I like your answer.
Kevin Kelly: I like your questions.
Right: Kevin Kelly, Left: Zhang Yijia, Founder & CEO of Jiazi Guangnian (Image Source: "Jiazi Guangnian" Photography)