Google removes pledge to not use AI for weapons, surveillance

Sundar Pichai, CEO of Alphabet Inc., during Stanford’s 2024 Business, Government, and Society forum in Stanford, California, April 3, 2024.

Justin Sullivan | Getty Images

Google has removed a pledge to abstain from using AI for potentially harmful applications, such as weapons and surveillance, according to the company’s updated “AI Principles.”

A prior version of the principles said Google would not pursue “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” or “technologies that gather or use information for surveillance violating internationally accepted norms.”

Those objectives are no longer displayed on its AI Principles website.

“There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape,” reads a Tuesday blog post co-written by Demis Hassabis, CEO of Google DeepMind. “We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights.”

The company’s updated principles reflect Google’s growing ambitions to offer its AI technology and services to more users and clients, including governments. The change is also in line with increasing rhetoric from Silicon Valley leaders about a winner-take-all AI race between the U.S. and China, with Palantir CTO Shyam Sankar saying Monday that “it’s going to be a whole-of-nation effort that extends well beyond the DoD in order for us as a nation to win.”

The previous version of the company’s AI principles said Google would “take into account a broad range of social and economic factors.” The new AI principles state Google will “proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.”

In its Tuesday blog post, Google said it will “stay consistent with widely accepted principles of international law and human rights — always evaluating specific work by carefully assessing whether the benefits substantially outweigh potential risks.”

The new AI principles were first reported by The Washington Post on Tuesday, ahead of Google’s fourth-quarter earnings. The company’s results missed Wall Street’s revenue expectations and drove shares down as much as 9% in after-hours trading.

Hundreds of protesters, including Google workers, gathered in front of Google’s San Francisco offices and shut down traffic on the One Market Street block on the evening of Dec. 14, 2023, demanding an end to the company’s work with the Israeli government and protesting Israeli attacks on Gaza.

Anadolu | Getty Images

Google established its AI principles in 2018 after declining to renew a government contract called Project Maven, which used artificial intelligence to help the government analyze and interpret drone videos. Before the deal ended, several thousand employees signed a petition against the contract and dozens resigned in opposition to Google’s involvement. The company also dropped out of bidding for a $10 billion Pentagon cloud contract, in part because it “couldn’t be sure” the work would align with its AI principles, the company said at the time.

Touting its AI technology to clients, Pichai’s leadership team has aggressively pursued federal government contracts, which has heightened strain in some corners of Google’s outspoken workforce.

“We believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security,” Google’s Tuesday blog post said.

Google last year terminated more than 50 employees after a series of protests against Project Nimbus, a $1.2 billion joint contract with Amazon that provides the Israeli government and military with cloud computing and AI services. Executives repeatedly said the contract didn’t violate any of the company’s “AI principles.”

However, documents and reports showed the company’s agreement allowed it to give Israel AI tools including image categorization and object tracking, and contained provisions for state-owned weapons manufacturers. The New York Times reported in December that, four months before signing on to Nimbus, Google officials expressed concern that the deal would harm the company’s reputation and that “Google Cloud services could be used for, or linked to, the facilitation of human rights violations.”

Meanwhile, the company had been cracking down on internal discussions around geopolitical conflicts like the war in Gaza.

Google announced updated guidelines for its Memegen internal forum in September that further restricted political discussions about geopolitics, international relations, military conflicts, economic actions and territorial disputes, according to internal documents viewed by CNBC at the time.

Google did not immediately respond to a request for comment.

