Google has quietly tweaked its AI principles, removing a key section that promised not to use AI for potentially harmful applications such as weapons and surveillance. Until last week, its public AI principles page included a section titled “applications we will not pursue,” but that section has since been removed. The change suggests the company may be shifting its stance on how its AI can be used, raising questions about where it is headed next.
In a recent blog post, James Manyika, a senior vice president at Google, and DeepMind CEO Demis Hassabis wrote that the tech firm believes that “democracies should lead in AI development.” They added that “companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”
“As we make progress towards AGI, developing AI needs to be both innovative and safe. To help ensure this, we’ve made updates to our Frontier Safety Framework – our set of protocols to help us stay ahead of possible severe risks,” Google DeepMind (@GoogleDeepMind) posted on X on February 4, 2025.
Google’s updated AI principles now note a commitment to reducing unintended harm and avoiding unfair bias, while also ensuring its AI aligns with “widely accepted principles of international law and human rights.”
The company had previously stated that it would avoid creating technologies “that cause or are likely to cause overall harm.” Google also said that it would not use AI to produce “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”
This included applications that “gather or use information for surveillance violating internationally accepted norms.”
Google joins other tech firms as it changes tack
In 2018, Google rolled out its AI principles after facing backlash from employees over a Pentagon contract. The contract, known as Project Maven, used Google’s computer vision algorithms to analyze drone footage.
Thousands of employees signed an open letter to CEO Sundar Pichai, making their position clear: “We believe that Google should not be in the business of war.” In response to the protests, Google decided not to renew the contract. This latest move, however, is just one piece of a broader trend, as major tech companies rethink and shift their policies under the new Trump administration.
This week, Meta released a new policy document suggesting there may be situations in which it chooses not to release a powerful AI system it has built in-house. At the same time, the company has replaced fact-checkers with user-driven notes.
ReadWrite has reached out to Google for comment.
Featured image: Canva