The contract includes a safety provision stating that Google AI "is not intended for, and should not be used for" large-scale domestic surveillance or autonomous weapons operating without human control. However, Charlie Bullock, a senior research fellow at the Institute for Law & AI, said the "should not be used for" language is not legally binding: it merely records the parties' view that such uses are undesirable, and violating it would not constitute a breach of contract. The agreement also stipulates: "This agreement does not grant the right to control or veto decisions on lawful government actions."
Compared with OpenAI's February agreement with the Pentagon, Google's terms are notably more permissive. OpenAI retained "full discretion over safety systems," whereas Google agreed to adjust its AI safety settings and filters at the government's request. A Google spokesperson noted that these filters were designed for consumer products and that the company routinely makes such adjustments for enterprise clients. Google is the third company to sign a classified AI agreement with the Pentagon, after xAI and OpenAI. Anthropic, which refused to relax its safety restrictions, has been flagged by the Pentagon as a "supply chain risk" and is currently in legal proceedings.
