OpenAI quietly recruited a security team backed by Altman, but it has nothing to do with superalignment
Author: Zhao Jian, Jiazi Guangnian
Eight days after OpenAI's chief scientist Ilya Sutskever announced his departure, OpenAI quietly recruited an entire team focused on security.
The team comes from Indent, a data security startup based in California, USA. On May 23, Indent's co-founder and CEO Fouad Matin announced on X that he would be joining OpenAI to oversee security-related work.
Although the details have not been disclosed, it is highly likely that Indent will be fully integrated into OpenAI. Indent announced on its website, "After careful consideration, we have made a very difficult decision to shut down Indent in the coming months," and "Services will cease after July 15."
It is worth mentioning that OpenAI CEO Sam Altman participated in Indent's $5.6 million seed round financing in 2021, making them old acquaintances.
Recently, OpenAI has been embroiled in controversy, particularly the chain reaction triggered by Ilya Sutskever's departure, which also led to the resignation of Jan Leike, co-lead of the safety-focused superalignment team. That team, which the two jointly led, was established only in July last year and is now in disarray.
However, upon closer inspection, the addition of the Indent team, while a fresh infusion of talent for the security effort, has no relation to the superalignment team.
The inclusion of the Indent team further clarifies one thing: Sam Altman is transforming OpenAI into a fully commercialized company.
1. Who is Indent?
First, let’s introduce Indent.
Founded in 2018, Indent provided data security services built around a simple offering: automating the approval process for access permissions.
For example, when engineers need to view production server logs or customer support requires admin access to sensitive systems, they can use Indent's application to request access without IT department assistance. Reviewers can receive notifications via Slack and approve directly from there, and once the time expires, access is automatically revoked.
Indent provides on-demand access control for everyone in the company, allowing them to access what they need when they need it.
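To make the workflow concrete, here is a minimal, hypothetical Python sketch of what such time-bound, approval-gated access might look like; the class and field names are purely illustrative and are not Indent's actual API.

```python
# Hypothetical illustration of time-bound access approval (not Indent's real API).
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class AccessGrant:
    requester: str
    resource: str
    reason: str
    duration: timedelta
    approved_by: Optional[str] = None
    granted_at: Optional[datetime] = None

    def approve(self, reviewer: str) -> None:
        """A reviewer approves (e.g. from a Slack notification) and the clock starts."""
        self.approved_by = reviewer
        self.granted_at = datetime.now(timezone.utc)

    def is_active(self) -> bool:
        """Access is valid only after approval and within the granted time window."""
        if self.granted_at is None:
            return False
        return datetime.now(timezone.utc) < self.granted_at + self.duration

# Example: an engineer requests two hours of read access to production logs.
grant = AccessGrant(
    requester="engineer@example.com",
    resource="prod-server-logs",
    reason="debugging a customer-reported incident",
    duration=timedelta(hours=2),
)
grant.approve(reviewer="oncall-lead@example.com")
print(grant.is_active())  # True now; automatically False once the window expires
```

The point of this design is that access is granted narrowly and revokes itself, so nobody has to remember to clean up permissions afterward.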
This seemingly simple service addresses an important need: as teams grow, more employees require access to an ever-growing number of services, and the approval process for that access can take days, weeks, or even months. The process can of course be simplified, but the simplest method is often not the right one, because it can introduce security problems. And when critical business is at stake, responding to a customer within a few hours rather than a few days can yield completely different outcomes.
Many companies use dozens of applications to handle critical services, collaboration, or customer data across different teams, each with dozens of potential roles or sub-permissions, which can easily spiral out of control.
Indent provides teams with the simplest and safest way to achieve democratized access management and accountability.
In 2023, following the rise of large models, Indent expanded its data security business into the large model field.
In March 2024, Indent co-founder and CEO Fouad Matin published an article titled "The Million-Dollar AI Engineering Problem."
He argued that model weights, biases, and the data used to train them are the crown jewels of artificial intelligence and the most valuable assets of companies that develop custom models or fine-tune existing ones, which often invest millions of dollars in engineering time, compute, and data collection.
Large language models, however, carry the risk of leakage. He cited Llama as an example: Meta initially did not intend to fully open-source Llama and imposed some access restrictions, but the weights were leaked on 4chan, forcing Meta to fully open-source the model.
As a result, Indent specifically proposed security solutions for model weights, training data, and fine-tuning data.
2. Deep Connections with Altman
Indent has a long-standing connection with OpenAI.
Indent has two co-founders: Fouad Matin serves as CEO, and Dan Gillespie serves as CTO.
Fouad Matin is an engineer, privacy advocate, and streetwear enthusiast who previously worked on data infrastructure products at Segment. In 2016, he co-founded VotePlz, a nonpartisan voter registration and turnout nonprofit. He is passionate about helping people find fulfilling work and previously founded a referral recruiting company through the YC W16 program.
Dan Gillespie was the first non-Google employee to manage a Kubernetes release and has been a regular contributor since the project's early days. He got into K8s by co-founding and serving as CTO of a collaborative deployment tooling company (YC W16), where he built Minikube. The company was acquired by CoreOS, which later became part of Red Hat and subsequently IBM.
Their backgrounds show a close relationship with YC in their early years. Sam Altman joined the startup incubator YC as a partner in 2011 and served as its president from 2014 until becoming OpenAI's CEO in 2019.
On December 21, 2021, Indent announced it had raised $5.6 million in seed funding led by Shardul Shah (partner at Index Ventures), Kevin Mahaffey (CTO of Lookout), and Swift Ventures, with a star-studded lineup of co-investors including Sam Altman and his brother Jack Altman.
The close relationship between the two parties, along with Indent's future involvement in large model security, laid the groundwork for Indent's integration into OpenAI.
3. Indent is Not Joining Superalignment
Is OpenAI's recruitment of the entire Indent team a reinforcement of the superalignment team? The answer is no; these are two completely different teams.
OpenAI's security team actually consists of three teams: Safety Systems, Preparedness, and Superalignment.
The division of labor among the three is as follows: the Safety Systems team focuses on the deployment risks of current models, aiming to reduce abuse of existing models and products such as ChatGPT; the Preparedness team focuses on safety evaluation of frontier models; and the Superalignment team focuses on aligning superintelligence, laying the groundwork for the safety of superintelligent models that may exist in the more distant future.
The Safety Systems team is relatively mature. It is divided into four subteams: safety engineering, model safety research, safety reasoning research, and human-AI interaction, bringing together experts from engineering, research, policy, human-AI collaboration, and product management. OpenAI says this combination of talent has proven very effective, giving it access to a wide range of solutions, from pre-training improvements and model fine-tuning to inference-time monitoring and mitigation.
Research on frontier AI risks, by contrast, has not yet reached the level it needs to. To bridge this gap and systematize its safety thinking, OpenAI released an initial version of its "Preparedness Framework" in December 2023, describing the process by which it tracks, evaluates, forecasts, and mitigates the catastrophic risks posed by increasingly powerful models.
OpenAI also announced the establishment of a dedicated team to oversee technical work and build an operational structure for safety decision-making. The Preparedness team will drive technical work to examine the limits of frontier model capabilities, conduct assessments, and compile reports. OpenAI is creating a cross-functional safety advisory group to review all reports and simultaneously send them to leadership and the board. While leadership makes decisions, the board has the authority to overturn those decisions.
The Superalignment team is the newest of the three, formed on July 5, 2023, with the goal of solving, by 2027, the core technical challenges of steering and controlling AI systems far smarter than humans. OpenAI said it would dedicate 20% of the company's computing resources to this work.
There is currently no clear, feasible solution to superalignment. OpenAI's research approach is to use smaller models that are already aligned to supervise larger ones, gradually working up to superintelligence by scaling this scheme and stress-testing the entire pipeline along the way.
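As a rough analogy for this weak-to-strong idea, the following sketch uses toy scikit-learn models as stand-ins: a small model trained on a limited amount of trusted labels supervises a much larger pool of data, and a bigger model then learns only from those weak labels. This is only an illustration of the concept, not OpenAI's actual training code.

```python
# Toy sketch of weak-to-strong supervision (an analogy, not OpenAI's method).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic data standing in for an alignment-relevant task.
X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_sup, X_rest, y_sup, y_rest = train_test_split(X, y, train_size=200, random_state=0)
X_pool, X_test, _, y_test = train_test_split(X_rest, y_rest, test_size=0.3, random_state=0)

# 1. The small "weak supervisor" is trained on the limited trusted labels.
weak = LogisticRegression(max_iter=1000).fit(X_sup, y_sup)

# 2. The weak supervisor labels a much larger pool of data.
weak_labels = weak.predict(X_pool)

# 3. The larger "strong" model learns only from the weak model's labels.
strong = GradientBoostingClassifier().fit(X_pool, weak_labels)

# Stress test: does the strong student exceed its weak supervisor on held-out truth?
print("weak supervisor accuracy:", round(weak.score(X_test, y_test), 3))
print("strong student accuracy: ", round(strong.score(X_test, y_test), 3))
```

Whether the stronger student can genuinely exceed its weaker supervisor, instead of merely copying its mistakes, is precisely what has to be stress-tested.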
The Superalignment team was co-led by Ilya Sutskever and Jan Leike, both of whom have now left. According to media reports, the superalignment team has disbanded following their departures.
The Indent team did not join the Superalignment team. According to what the Indent team posted on X, they are joining OpenAI's Preparedness team, responsible for frontier-model preparedness and customer data management.
This indicates that OpenAI is increasing its investment in frontier models.
At the recent VivaTech summit held in Paris, Romain Huet, OpenAI's developer experience lead, revealed in a presentation that OpenAI's next new model, "GPT Next," will be released later in 2024.
Image: a slide OpenAI presented at VivaTech, via X.
OpenAI's upcoming focus is likely on the capabilities and safety of this new model.
4. OpenAI's "Original Sin"
Connecting the dissolution of the superalignment team with the addition of the Indent team leads to one conclusion: OpenAI is accelerating its pursuit of model deployment and commercialization.
This point had already been made public in the resignation statement of Jan Leike, co-lead of the superalignment team.
Jan Leike believes far more of OpenAI's bandwidth should go into preparing for the next generation of models, including safety, monitoring, preparedness, adversarial robustness, superalignment, confidentiality, societal impact, and related topics, but in recent months his team has been "struggling for compute"; even the initially promised 20% of computing resources was not delivered.
He believes that OpenAI's safety culture and processes are no longer prioritized, while shiny products are favored.
In response, Greg Brockman, OpenAI's president, stated in a lengthy reply:
We believe that such (increasingly powerful) systems will be very beneficial and helpful to people, and that it is possible to deliver them safely, but this requires a great deal of foundational work. This includes being thoughtful about what they are connected to as they are trained, solutions to hard problems such as scalable oversight, and other new kinds of safety work. As we build in this direction, we are still not sure when we will reach our safety bar for releasing products, and if that delays release timelines, it's okay.
As mentioned earlier, when OpenAI established the superalignment team, it set a timeline for guiding and controlling superintelligent systems smarter than humans by 2027. Greg Brockman's response effectively changes this timeline—"If there are delays in release, that’s okay."
It is important to emphasize that OpenAI does not disregard safety, but it is clear that its emphasis on safety comes with a condition—all safety must be premised on deployable models and commercializable products. Clearly, under relatively limited resources, even OpenAI must make trade-offs.
And Sam Altman is inclined to choose the path of the pure businessman.
Is safety in conflict with commercialization? For virtually every other company in the world, this is not a contradiction. But OpenAI is the exception.
In March 2023, after the release of GPT-4, Elon Musk posed a soul-searching question: "I’m confused, I donated $100 million to a nonprofit organization, how did it turn into a for-profit organization valued at $30 billion?"
When OpenAI's internal conflict erupted, the outside world had already more or less guessed that this tension was its source. On March 8 of this year, after the investigation results were released, OpenAI's official announcement not only stated that Sam Altman would return but also said the company would make significant improvements to its governance structure, including "adopting a new set of corporate governance guidelines" and "strengthening OpenAI's conflict of interest policies."
However, as of the superalignment team's dissolution, no such new policies had appeared. This may be why many departing employees are disappointed with OpenAI.