Insider details on Ilya's departure from OpenAI: Altman cut the team's computing power and prioritized money-making products, and employees who complain on the way out lose their equity

Quantum Bit
2024-05-18 22:23:32
The scientific faction is completely out of OpenAI, and the outside world still does not know how far GPT has actually progressed.

Author: Quantum Bit

A series of 13 tweets!

OpenAI's super alignment lead Jan Leike, who just followed Ilya out of the company, revealed the real reason for his departure, along with more insider information.

First, computing power fell short: the 20% promised to the super alignment team was never fully delivered, leaving the team rowing against the current, and the work grew increasingly difficult.

Second, safety was not a priority: governance and safety questions around AGI mattered less than shipping "shiny products."

Soon after, others dug up more inside details.

For example, departing OpenAI employees are required to sign an agreement guaranteeing they will not criticize OpenAI after leaving; refusing to sign is treated as automatically forfeiting their company equity.

Even so, a few holdouts refused to sign and are spilling the beans anyway, saying that the core leadership has long disagreed over how much priority safety should get.

Since last year's internal power struggle, the ideological conflict between the two factions had been building toward a breaking point, which is why the split has now come so completely into the open.

So even though Altman has sent a co-founder to take over the super alignment team, the outside world does not view the move favorably.


Twitter users following along thanked Jan for having the courage to drop this bombshell and lamented:

Wow, it seems OpenAI really doesn't pay much attention to safety!


However, Altman, who currently leads OpenAI, is holding steady for now.

He publicly thanked Jan for his contributions to OpenAI's super alignment and safety work, saying he was genuinely sad to see Jan leave.

Of course, the key point is this:

Wait a couple of days, and I will post a tweet longer than this one.

The promised 20% of computing power was apparently overstated

Since last year's internal power struggle at OpenAI, its central figure, former chief scientist Ilya, had almost entirely stopped appearing or speaking in public.

Even before he publicly announced his departure, speculation was rife. Many believed Ilya must have seen something terrifying, such as an AI system capable of destroying humanity.

One user wrote: "The first thing I think about every day when I wake up is what Ilya saw."

This time, Jan laid it out, stating that the core reason is the differing views on the priority of safety between the technical and market factions.

The disagreement is severe, and the consequences are… well, everyone has seen them.

According to Vox, sources familiar with OpenAI revealed that employees who prioritize safety have lost confidence in Altman: "This is a process of trust collapsing bit by bit."

But as you can see, on public platforms and occasions, not many departing employees are willing to openly discuss this matter.

Part of the reason is that OpenAI has long required departing employees to sign non-disparagement agreements. Refusing to sign is equivalent to giving up the OpenAI stock options they have already received, so an employee who speaks out may lose a significant amount of money.

However, the dominoes keep falling, one after another.

Ilya's resignation has intensified the recent wave of departures from OpenAI.

Following Jan, at least five members of the safety team have also announced their departures.

Among them is a hardliner who did not sign the non-disparagement agreement, Daniel Kokotajlo (hereafter referred to as DK).

Last year, DK wrote that he believes the probability of an AI-induced existential disaster is 70%.

DK joined OpenAI in 2022 and worked on the governance team, primarily guiding OpenAI's safe deployment of AI.

But he also resigned recently and gave an interview:

OpenAI is training more powerful AI systems, aiming to ultimately surpass human intelligence comprehensively.
This could be the best thing that has ever happened to humanity, but if we act carelessly, it could also be the worst.

DK explained that when he joined OpenAI, he was filled with hope and expectations for responsible governance, hoping that OpenAI would be more responsible as it approached AGI. However, many in the team gradually realized that OpenAI would not go that route.

"Gradually losing confidence in OpenAI's leadership and their ability to handle AGI responsibly" is the reason DK resigned.


Disappointment over the future of AGI safety work is one reason so many have left in the wave of departures that Ilya's exit intensified.

Another part of the reason is that the super alignment team probably does not have the abundant resources for research that the outside world imagines.

Even running at full capacity, the super alignment team could at most receive the 20% of computing power that OpenAI had promised.

Moreover, many of the team's requests are often denied.

Of course, this is partly because computing resources are extremely important to an AI company and every bit must be allocated carefully, and partly because the super alignment team's job is "to address the different types of safety issues that will arise if the company succeeds in building AGI."

In other words, the super alignment team is tackling the safety problems OpenAI will face in the future. The emphasis is on the future: problems that are not yet certain to materialize.


As of this writing, Altman had not yet posted the "tweet longer than Jan's revelations" that he promised.

But he briefly acknowledged that Jan's concerns about safety are valid: "We have a lot to do; we are committed to doing it."

On that point, all we can do is pull up a chair and wait; we will follow the drama as soon as it unfolds.

In summary, many people have left the super alignment team, and the departures of Ilya and Jan in particular have left this embattled team without its leaders.

The follow-up arrangement is that co-founder John Schulman will take over, but there will no longer be a dedicated team.

The new super alignment team will be a more loosely connected group, with members distributed across various departments of the company, which an OpenAI spokesperson described as "more deeply integrated."

This has also raised doubts from the outside world, as John's original full-time job was to ensure the safety of current OpenAI products.

It remains to be seen whether John can manage the additional responsibilities and effectively lead two teams focused on current and future safety issues.

The Ilya-Altman Conflict

If we extend the timeline, today's fragmentation is actually a sequel to the Ilya-Altman conflict within OpenAI.

Going back to November of last year, when Ilya was still there, he worked with the OpenAI board in an attempt to oust Altman.

The reason given at the time was that Altman was not candid enough in his communications; in other words, "we do not trust him."

But the outcome is well known: Altman and his "allies" threatened to leave for Microsoft, the board backed down, and the ouster failed. Ilya left the board, and Altman brought in new board members more favorable to himself.

After that, Ilya disappeared from social media until he officially announced his departure a few days ago. It is said that he had not been seen in the OpenAI office for about six months.

At that time, he left behind a thought-provoking tweet, which he quickly deleted.

In the past month I have learned many lessons. One such lesson is that the phrase "the beatings will continue until morale improves" applies more often than it has any right to.

However, insiders revealed that Ilya has been remotely co-leading the super alignment team.

On Altman's side, the biggest accusation from employees is that his words and actions do not match: he claims to want to prioritize safety, yet his actions contradict that.

In addition to the promised computing resources not being provided, there were also recent efforts to raise funds from places like Saudi Arabia to build chips.

Those employees who prioritize safety are baffled.

If he truly cared about building and deploying AI as safely as possible, why would he be so frantically accumulating chips to accelerate technological development?

Earlier, OpenAI had also ordered $51 million worth of chips from a startup that Altman personally invested in.

And during the days of the boardroom fight, a whistleblower letter from former OpenAI employees seemed to confirm this description of Altman yet again.

It is precisely this persistent gap between words and actions that has led employees to gradually lose confidence in OpenAI and in Altman.

Ilya is like this, Jan Leike is like this, and the super alignment team is like this.

Some thoughtful users have compiled the key milestones of related events over the past few years. A friendly reminder: the P(doom) below refers to "the probability of AI triggering an apocalyptic scenario."

  • In 2021, the leads of the GPT-3 team left OpenAI over "safety" concerns and founded Anthropic; one of them put P(doom) at 10-25%;
  • In 2021, the head of RLHF safety research left; P(doom) was 50%;
  • In 2023, the OpenAI board ousted Altman;
  • In 2024, OpenAI fired two safety researchers;
  • In 2024, a researcher particularly focused on safety left OpenAI, putting P(doom) at 70%;
  • In 2024, Ilya and Jan Leike left.


Technical Faction or Market Faction?

Now that large models have developed this far, the question "How do we reach AGI?" really comes down to two routes.

The technical faction wants the technology to be mature and controllable before it is applied; the market faction believes in a "gradual" approach of opening it up and applying it along the way.

This is also the fundamental disagreement in the Ilya-Altman conflict, namely OpenAI's mission:

Should it focus on AGI and super alignment, or on expanding the ChatGPT service?

The larger ChatGPT's service grows, the more computing power it needs, and that also takes time away from AGI safety research.

If OpenAI were a non-profit dedicated to research, it ought to spend more time on super alignment.

However, judging from some of OpenAI's external moves, the answer is clearly not that; the company simply wants to lead the large-model race and provide more services to businesses and consumers.

In Ilya's view, this is extremely dangerous. Even though we do not know what will happen as the scale grows, Ilya believes the best approach is to put safety first.

Openness and transparency are how we humans can ensure the safe construction of AGI, rather than doing so in some secretive manner.

But under Altman's leadership, OpenAI seems to pursue neither open source nor super alignment. Instead, it is focused solely on racing toward AGI while trying to build a moat.

So, in the end, will AI scientist Ilya's choice prove correct, or will Silicon Valley businessman Altman prevail?

It is still unknown. But at least OpenAI is currently facing a critical choice.

Industry insiders have summarized two key signals.

One: ChatGPT is OpenAI's main source of revenue; without a better model waiting in the wings, the company would not be giving GPT-4 away to everyone for free.

The other: if the departing members (Jan, Ilya, and others) were not worried that far more capable models are coming soon, they would not care so much about alignment. If AI stays at its current level, it hardly matters.

However, the fundamental contradiction within OpenAI remains unresolved: on one side, the AI scientists' concern about developing AGI responsibly; on the other, the Silicon Valley market faction's urgency to commercialize in order to keep the technology sustainable.

The two sides have become irreconcilable. The scientific faction is exiting OpenAI entirely, and the outside world still does not know how far GPT has really progressed.

The eager onlookers wanting to know the answer to this question are feeling a bit exhausted.

A sense of helplessness sets in, just as Ilya's mentor Geoffrey Hinton, one of the Turing Award trio, put it:

I am old, I worry, but I feel powerless.

Reference links:
[1]https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence
[2]https://x.com/janleike/status/1791498174659715494
[3]https://twitter.com/sama/status/1791543264090472660
