OpenAI and Anthropic agree to grant the U.S. government access to new AI models
ChainCatcher news: according to The Verge, OpenAI and Anthropic have agreed to give the U.S. government access to new AI models to help improve model safety. The two companies have signed memoranda of understanding with the U.S. AI Safety Institute, allowing the government to assess safety risks and help mitigate potential issues. The U.S. AI Safety Institute said it will also work with its UK counterpart to provide feedback on safety improvements.
Separately, California recently passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), which requires AI companies to take specific safety measures before training advanced foundation models. The bill has drawn opposition from companies including OpenAI and Anthropic, which argue it could harm small open-source developers.