Anthropic adds Claude 4 security measures to limit risk of users developing weapons


Anthropic on Thursday said it activated tighter artificial intelligence safety controls for Claude Opus 4, its latest AI model.

The new AI Safety Level 3 (ASL-3) controls are to “limit the risk of Claude being misused specifically for the development or acquisition of chemical, biological, radiological, and nuclear (CBRN) weapons,” the company wrote in a blog post.

The company, which is backed by Amazon, said it was taking the measures as a precaution and that its team had not yet determined whether Opus 4 had crossed the capability threshold that would require that protection.

Anthropic announced Claude Opus 4 and Claude Sonnet 4 on Thursday, touting the advanced ability of the models to “analyze thousands of data sources, execute long-running tasks, write human-quality content, and perform complex actions,” per a release.

The company said Sonnet 4 did not need the tighter controls.

Jared Kaplan, Anthropic’s chief science officer, noted that the advanced nature of the new Claude models has its challenges.

“The more complex the task is, the more risk there is that the model is going to kind of go off the rails … and we’re really focused on addressing that so that people can really delegate a lot of work at once to our models,” he said.

The company released an updated safety policy in March addressing the risk that AI models could help users develop chemical and biological weapons.

Major safety questions remain about a technology that is advancing at a breakneck pace and has shown worrying cracks in safety and accuracy.

Last week, Elon Musk’s Grok chatbot from xAI continued to bring up the topic of “white genocide” in South Africa in responses to unrelated comments.

xAI later attributed the bizarre behavior to an “unauthorized modification.”

Olivia Gambelin, AI ethicist and author of the book “Responsible AI,” said the Grok example shows how easily these models can be tampered with “at will.”

AI researchers and experts told CNBC that the push among the industry’s biggest players to prioritize profits over research has led companies to take shortcuts and forgo rigorous testing.

James White, chief technology officer at cybersecurity startup CalypsoAI, said companies sacrificing security for advancement means models are less likely to reject malicious prompts.

“The models are getting better, but they’re also more likely to be good at bad stuff,” said White, whose company performs safety and security audits of Meta, Google, OpenAI and other companies. “It’s easier to trick them to do bad stuff.”

CNBC’s Hayden Field and Jonathan Vanian contributed to this report.


