OpenAI and Anthropic agree to let U.S. AI Safety Institute test and evaluate new models


OpenAI and Anthropic, the two most richly valued artificial intelligence startups, have agreed to let the U.S. AI Safety Institute test their new models before releasing them to the public, following increased concerns in the industry about safety and ethics in AI.

The institute, housed within the Department of Commerce at the National Institute of Standards and Technology (NIST), said in a press release that it will get “access to major new models from each company prior to and following their public release.”

The group was established after the Biden-Harris administration issued the U.S. government’s first-ever executive order on artificial intelligence in October 2023, requiring new safety assessments, equity and civil rights guidance and research on AI’s impact on the labor market.

“We are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models,” OpenAI CEO Sam Altman wrote in a post on X. OpenAI also confirmed to CNBC on Thursday that, in the past year, the company has doubled its number of weekly active users to 200 million. Axios was first to report on the number.

The news comes a day after reports surfaced that OpenAI is in talks to raise a funding round valuing the company at more than $100 billion. Thrive Capital is leading the round and will invest $1 billion, according to a source with knowledge of the matter who asked not to be named because the details are confidential.

Anthropic, founded by ex-OpenAI research executives and employees, was most recently valued at $18.4 billion. Anthropic counts Amazon as a leading investor, while OpenAI is heavily backed by Microsoft.

The agreements between the government, OpenAI and Anthropic “will enable collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks,” according to Thursday’s release.

Jason Kwon, OpenAI’s chief strategy officer, told CNBC in a statement that, “We strongly support the U.S. AI Safety Institute’s mission and look forward to working together to inform safety best practices and standards for AI models.”

Jack Clark, co-founder of Anthropic, said the company’s “collaboration with the U.S. AI Safety Institute leverages their wide expertise to rigorously test our models before widespread deployment” and “strengthens our ability to identify and mitigate risks, advancing responsible AI development.”

A number of AI developers and researchers have expressed concerns about safety and ethics in the increasingly for-profit AI industry. Current and former OpenAI employees published an open letter on June 4, describing potential problems with the rapid advancements taking place in AI and a lack of oversight and whistleblower protections.

“AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this,” they wrote. AI companies, they added, “currently have only weak obligations to share some of this information with governments, and none with civil society,” and they cannot be “relied upon to share it voluntarily.”

Days after the letter was published, a source familiar with the matter confirmed to CNBC that the FTC and the Department of Justice were set to open antitrust investigations into OpenAI, Microsoft and Nvidia. FTC Chair Lina Khan has described her agency’s action as a “market inquiry into the investments and partnerships being formed between AI developers and major cloud service providers.”

On Wednesday, California lawmakers passed a hot-button AI safety bill, sending it to Governor Gavin Newsom’s desk. Newsom, a Democrat, must decide whether to veto the legislation or sign it into law by Sept. 30. The bill, which would make safety testing and other safeguards mandatory for AI models of a certain cost or computing power, has been contested by some tech companies for its potential to slow innovation.



