
Everything you need to know about OpenAI and Anthropic's agreement to share AI models

OpenAI and Anthropic have agreed to share pre- and post-release AI models with the U.S. AI Safety Institute. The agency, created by President Biden’s executive order in 2023, will provide safety feedback to companies to improve their models. OpenAI CEO Sam Altman hinted at the agreement earlier this month.

The AI Safety Institute did not name other AI companies, but in a statement to Engadget, a Google spokesperson said the company is in discussions with the agency and will share more information when it is available. Google began rolling out updated Gemini models for its chatbot and image generator this week.

“Safety is essential to spurring breakthrough technological innovation,” Elizabeth Kelly, director of the AI Safety Institute, wrote in a statement. “With these agreements in place, we look forward to beginning our technical collaboration with Anthropic and OpenAI to advance the science of AI safety. These agreements are just the beginning, but they represent an important milestone as we work to help responsibly manage the future of AI.”

The AI Safety Institute is part of the National Institute of Standards and Technology (NIST). It creates and publishes guidelines, standardized tests, and best practices for testing and evaluating potentially dangerous AI systems.


“Just as AI has the potential to do great good, it also has the potential to cause profound harm, from AI-powered cyberattacks on a scale beyond anything we’ve seen before to AI-crafted bioweapons that could endanger millions of lives,” Vice President Kamala Harris said in late 2023 after the agency’s creation.

The first-of-its-kind agreement takes the form of a memorandum of understanding (a formal but non-binding agreement), giving the agency access to each company’s “major new prototypes” before and after their public release. The agency describes the arrangement as collaborative research on risk mitigation that will assess capabilities and safety, and the US AI Safety Institute will also collaborate with the UK AI Safety Institute.

This comes as federal and state regulators are trying to create guardrails for AI while the rapidly advancing technology is still nascent.

This week, the California State Assembly passed an AI safety bill (SB 1047) that would require safety testing for AI models that cost more than $100 million to develop or use a certain amount of computing power. The bill would also require AI companies to build kill switches that could shut down models if they become “unworkable or unmanageable.”

Unlike the non-binding agreement with the federal government, the California bill would have some enforcement power, giving the state attorney general license to sue if AI developers fail to comply, especially during threat-level events.

However, the bill still requires an additional vote and the signature of Gov. Gavin Newsom, who has until Sept. 30 to decide whether to give it the green light.
