Google removes pledge to not use AI for weapons, surveillance

Sundar Pichai, CEO of Alphabet Inc., during Stanford’s 2024 Business, Government, and Society forum in Stanford, California, April 3, 2024.

Justin Sullivan | Getty Images

Google has removed a pledge to abstain from using AI for potentially harmful applications, such as weapons and surveillance, according to the company’s updated “AI Principles.”

A prior version of the company’s AI principles said the company would not pursue “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” and “technologies that gather or use information for surveillance violating internationally accepted norms.”

Those objectives are no longer displayed on its AI Principles website.

“There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape,” reads a Tuesday blog post co-written by Demis Hassabis, CEO of Google DeepMind. “We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights.”

The company’s updated principles reflect Google’s growing ambitions to offer its AI technology and services to more users and clients, which has included governments. The change is also in line with increasing rhetoric out of Silicon Valley leaders about a winner-take-all AI race between the U.S. and China, with Palantir’s CTO Shyam Sankar saying Monday that “it’s going to be a whole-of-nation effort that extends well beyond the DoD in order for us as a nation to win.”

The previous version of the company’s AI principles said Google would “take into account a broad range of social and economic factors.” The new AI principles state Google will “proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.”

In its Tuesday blog post, Google said it will “stay consistent with widely accepted principles of international law and human rights — always evaluating specific work by carefully assessing whether the benefits substantially outweigh potential risks.”

The new AI principles were first reported by The Washington Post on Tuesday, ahead of Google’s fourth-quarter earnings. The company’s results missed Wall Street’s revenue expectations and drove shares down as much as 9% in after-hours trading.

Hundreds of protesters, including Google workers, gathered in front of Google's San Francisco offices and shut down traffic on the One Market Street block on the evening of Dec. 14, 2023, demanding an end to the company's work with the Israeli government and protesting Israeli attacks on Gaza.

Anadolu | Getty Images

Google established its AI principles in 2018 after declining to renew a government contract called Project Maven, which used artificial intelligence to help the government analyze and interpret drone videos. Before the deal ended, several thousand employees signed a petition against the contract and dozens resigned in opposition to Google's involvement. The company also dropped out of the bidding for a $10 billion Pentagon cloud contract, saying at the time it "couldn't be sure" the deal would align with its AI principles.

Touting its AI technology to clients, Pichai's leadership team has aggressively pursued federal government contracts, heightening tensions in some corners of Google's outspoken workforce.

“We believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security,” Google’s Tuesday blog post said.

Google last year terminated more than 50 employees after a series of protests against Project Nimbus, a $1.2 billion joint contract with Amazon that provides the Israeli government and military with cloud computing and AI services. Executives repeatedly said the contract didn’t violate any of the company’s “AI principles.”

However, documents and reports showed the company's agreement allowed for giving Israel AI tools that included image categorization and object tracking, as well as provisions for state-owned weapons manufacturers. The New York Times reported in December that, four months before signing on to Nimbus, Google officials expressed concern that the deal would harm its reputation and that "Google Cloud services could be used for, or linked to, the facilitation of human rights violations."

Meanwhile, the company had been cracking down on internal discussions around geopolitical conflicts like the war in Gaza.

Google announced updated guidelines for its Memegen internal forum in September that further restricted discussions of geopolitical content, international relations, military conflicts, economic actions and territorial disputes, according to internal documents viewed by CNBC at the time.

Google did not immediately respond to a request for comment.
