
More insider details revealed about OpenAI disbanding its safety team: resource allocation and safety concerns

ChainCatcher news: On May 14 Eastern Time (last Tuesday), OpenAI's Chief Scientist Ilya Sutskever officially announced his departure. The same day, Jan Leike, co-lead of OpenAI's Superalignment team, also announced he was leaving. Last Friday, OpenAI confirmed that the Superalignment team, co-led by Sutskever and Leike, had been disbanded.

In the early hours of May 18, Jan Leike posted 13 tweets on the social platform X, revealing the real reasons for his departure and further insider details. In summary, there were two issues: insufficient computational resources, and OpenAI not placing enough emphasis on safety. Leike said that far more resources and effort should be invested in preparing for the next generation of AI models, but that the current development path could not reach that goal. Over the past few months his team had faced significant challenges, at times struggling to obtain enough compute.

In response to Leike's revelations, Altman posted a hasty reply on May 18: "I am very grateful for Jan Leike's contributions to OpenAI's superalignment research and safety culture, and I am very sorry to see him leave the company. He pointed out that we still have a lot of work to do; we agree, and we are committed to advancing it. In the coming days I will write a more detailed post on this topic."