Silicon Valley is increasingly divided over the use of AI in weapon systems. Some companies see military AI applications as a path to growth, while others argue that the technology is still too flawed and unreliable to be trusted in such critical roles. Hundreds of Google employees have signed a petition against the company's new defense contract, echoing the sentiment of the earlier Project Maven protest. Google, for its part, has entered a deal that permits weapons targeting as long as humans retain decision-making authority. Anthropic's more restrictive usage policies, which prohibited military applications, led the Department of Defense (DoD) to drop its models, creating an opening for competitors. And because Google has agreed to let the government modify its safety filters, the company also gives up some of its ability to block particular uses of its models.
Meanwhile, the DoD is rapidly expanding its use of AI foundation models within its classified systems. With the reliability of AI in high-risk scenarios still fiercely debated, the open question is how human oversight will function when AI generates recommendations at the speed of modern warfare. OpenAI and xAI have already been contracted by the DoD for their foundation models, so the three companies now form a competitive field for AI in weapon systems, each with varying degrees of human oversight. How those oversight mechanisms will operate in weapon system environments is likely to be a major point of contention going forward.
Google's own workforce has made its concern plain. More than 560 employees have signed a petition asking the company to decline the contract on ethical grounds; so far, the company has not responded.
The terms of the contract show that both parties are keenly aware of the AI safety issues at stake. According to Reuters, Google may be required to alter the safety filters in its models at the request of defense officials, but the company retains some limits on how the DoD can use them: the models cannot be used for domestic mass surveillance and cannot select targets autonomously without appropriate human oversight. That phrase, "appropriate human oversight," is significant. It describes weapon systems in which AI identifies and evaluates targets while a human ultimately decides whether to authorize firing, parameters consistent with current drone strike protocols, where human operators review algorithmic recommendations before any lethal action is taken.
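To make the distinction concrete, here is a purely illustrative sketch, not drawn from the contract or the Reuters reporting, of what a human-in-the-loop gate of this kind might look like: the model may only recommend, and nothing proceeds without an explicit, affirmative human decision. All names here are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Decision(Enum):
    AUTHORIZED = auto()
    DENIED = auto()


@dataclass(frozen=True)
class TargetRecommendation:
    """Output of an AI targeting model: advisory only, never an action."""
    target_id: str
    confidence: float
    rationale: str


def human_in_the_loop_gate(rec: TargetRecommendation) -> Decision:
    """A human operator reviews the model's recommendation and makes the
    final call. The AI can surface and rank candidates, but no action is
    taken without explicit human authorization."""
    print(f"Candidate {rec.target_id} (model confidence {rec.confidence:.0%})")
    print(f"Model rationale: {rec.rationale}")
    answer = input("Authorize? [y/N] ").strip().lower()
    # Default-deny: anything other than an explicit "y" blocks the action.
    return Decision.AUTHORIZED if answer == "y" else Decision.DENIED
```

The design choice worth noting is the default-deny posture: silence or ambiguity resolves to no action, which is what separates human-in-the-loop oversight from a system that merely lets a human veto an otherwise autonomous decision.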
The deal also follows a falling-out between the Pentagon and Anthropic. Reports indicate that the DoD removed Anthropic's Claude models from defense networks after the company refused to loosen its strict restrictions on how the models could be used. Google, by contrast, appears to have accepted conditional limits on its models' use.

