Leading AI developers like OpenAI and Anthropic are threading a delicate needle to sell software to the U.S. military: making the Department of Defense more efficient without letting their AI become lethal.
Although their tools are not currently being used as weapons, AI gives the Pentagon “significant advantages” in identifying, tracking, and assessing threats, Dr. Radha Plumb, the Pentagon’s chief digital and AI officer, told TechCrunch in a phone interview.
“We’re clearly adding more ways to expedite execution of the kill chain so that commanders can respond in a timely manner to protect their forces,” Plumb said.
“Kill chain” refers to the military process of identifying, tracking, and eliminating threats that involves a complex system of sensors, platforms, and weapons. According to Plumb, generative AI has proven useful in the planning and strategy stages of the kill chain.
The relationship between the Department of Defense and AI developers is relatively new. OpenAI, Anthropic, and Meta rolled back their usage policies in 2024 to allow U.S. intelligence and defense agencies to use their AI systems. However, they still do not allow their AI to harm humans.
Asked how the Department of Defense works with AI model providers, Plumb said, “We’re very clear about what we will and won’t use their technology for.”
Still, this marked the beginning of a speed-dating round between AI companies and defense contractors.
Meta partnered with Lockheed Martin, Booz Allen, and others in November to bring its Llama AI models to defense agencies. That same month, Anthropic partnered with Palantir. In December, OpenAI signed a similar deal with Anduril. More quietly, Cohere has also been deploying its models with Palantir.
If generative AI proves its usefulness at the Pentagon, it could prompt Silicon Valley to loosen its policies on AI usage and allow more military applications.
“Playing out different scenarios is something that generative AI can be helpful with,” Plumb said. “It allows commanders not only to take advantage of the full range of tools available to them, but also to think creatively about different response options and potential trade-offs in an environment where there is a potential threat, or set of threats, that needs to be prosecuted.”
It is unclear whose technology the Pentagon is using for this work. Using generative AI in kill chains (even in early planning stages) appears to violate the usage policies of several major model developers. For example, Anthropic’s policy prohibits using its models to create or modify “systems designed to cause harm or loss of human life.”
In response to our questions, Anthropic pointed TechCrunch to a recent interview with the Financial Times in which CEO Dario Amodei defended its military work:
“The position that AI should never be used in defense and intelligence settings doesn’t make sense to me. The position that we should go gangbusters and use it to make anything we want, up to and including doomsday weapons, is obviously just as crazy. We are trying to find a middle ground and do things responsibly.”
OpenAI, Meta, and Cohere did not respond to TechCrunch’s requests for comment.
Life, death and AI weapons
In recent months, a debate has erupted in defense technology over whether AI weapons should really be allowed to make life-or-death decisions. Some argue that the US military already has such weapons.
Anduril CEO Palmer Luckey recently noted in a post on X that the U.S. military has a long history of purchasing and using autonomous weapons systems, such as CIWS turrets.
“The Department of Defense has been purchasing and using autonomous weapons systems for decades. Their use (and export) is governed by rules that are well understood, tightly defined, and explicitly regulated by the government, not merely voluntary,” Luckey said.
But when TechCrunch asked if the Pentagon would buy and operate fully autonomous weapons, meaning weapons without human involvement, Plumb rejected the idea on principle.
“The short answer is no,” Plumb said. “As a matter of both reliability and ethics, decisions to use force will always involve humans, and that includes our weapons systems.”
The term “autonomy” is somewhat vague and has sparked debate across the tech industry about when automated systems such as AI-coded agents, self-driving cars, and self-launching weapons become truly independent.
Plumb said the idea that automated systems independently make life-and-death decisions is “too binary” and that the reality is not so “science fiction.” Rather, she suggested that the Department of Defense’s use of AI systems is actually a collaboration between humans and machines, with senior leaders making active decisions throughout the process.
“People tend to think about this as if there’s a robot somewhere, a gonculator spitting out a piece of paper, and a human just checking a box,” Plumb said. “This is not how human-machine teaming works, and this is not how to effectively use this type of AI system.”
AI safety in the Department of Defense
Military partnerships haven’t always gone over well with Silicon Valley employees. Last year, dozens of Amazon and Google employees were fired and arrested after protesting their companies’ military contracts with Israel, cloud deals that fell under the codename “Project Nimbus.”
In comparison, the response from the AI community has been quite muted. Some AI researchers, such as Anthropic’s Evan Hubinger, say the use of AI in the military is inevitable and that working directly with the military is critical to getting it right.
“If we are serious about the catastrophic risks posed by AI, the U.S. government is an extremely important actor to engage, and simply blocking the U.S. government from using AI is not a viable strategy,” Hubinger said in a November post on the online forum LessWrong. “It is not enough to focus only on catastrophic risks; we also need to prevent any way our models could be misused by governments.”