Anthropic adds Claude 4 security measures to limit risk of users developing weapons


Anthropic on Thursday said it activated a tighter artificial intelligence control for Claude Opus 4, its latest AI model.

The new AI Safety Level 3 (ASL-3) controls are to “limit the risk of Claude being misused specifically for the development or acquisition of chemical, biological, radiological, and nuclear (CBRN) weapons,” the company wrote in a blog post.

The company, which is backed by Amazon, said it was taking the measures as a precaution and that the team had not yet determined whether Opus 4 had crossed the capability threshold that would require the protection.

Anthropic announced Claude Opus 4 and Claude Sonnet 4 on Thursday, touting the advanced ability of the models to “analyze thousands of data sources, execute long-running tasks, write human-quality content, and perform complex actions,” per a release.

The company said Sonnet 4 did not need the tighter controls.

Jared Kaplan, Anthropic’s chief science officer, noted that the advanced nature of the new Claude models has its challenges.

“The more complex the task is, the more risk there is that the model is going to kind of go off the rails … and we’re really focused on addressing that so that people can really delegate a lot of work at once to our models,” he said.

The company released an updated safety policy in March addressing the risks involved with AI models and the ability to help users develop chemical and biological weapons.

Major safety questions remain about a technology that is advancing at a breakneck pace and has shown worrying cracks in safety and accuracy.

Last week, Elon Musk’s Grok chatbot from xAI continued to bring up the topic of “white genocide” in South Africa in responses to unrelated comments.

The company later attributed the bizarre behavior to an “unauthorized modification.”

Olivia Gambelin, AI ethicist and author of the book “Responsible AI,” said the Grok example shows how easily these models can be tampered with “at will.”

AI researchers and experts told CNBC that the push from the power players to prioritize profits over research has led to companies taking shortcuts and forgoing rigorous testing.

James White, chief technology officer at cybersecurity startup CalypsoAI, said companies sacrificing security for advancement means models are less likely to reject malicious prompts.

“The models are getting better, but they’re also more likely to be good at bad stuff,” said White, whose company performs safety and security audits of Meta, Google, OpenAI and other companies. “It’s easier to trick them to do bad stuff.”

CNBC’s Hayden Field and Jonathan Vanian contributed to this report.
