Leading AI developers like OpenAI and Anthropic are threading a fine needle to sell software to the United States military: make the Pentagon more efficient without letting AI kill people.
The Pentagon’s Chief Digital and Artificial Intelligence Officer, Dr. Radha Plumb, told TechCrunch in a telephone interview that these tools aren’t being used as weapons today, but that AI gives the Department of Defense a “significant advantage” in identifying, tracking, and assessing threats.
“We are clearly increasing ways to accelerate the execution of the kill chain so that our commanders can respond in time to protect our forces,” Plumb said.
The “kill chain” refers to the military’s process of identifying, tracking, and eliminating threats, which involves a complex system of sensors, platforms, and weapons. According to Plumb, generative AI is proving useful in the planning and strategy stages of the kill chain.
The relationship between the Pentagon and AI developers is relatively new. OpenAI, Anthropic, and Meta walked back their usage policies in 2024 to let US intelligence and defense agencies use their AI systems. However, they still don’t allow their AI to harm humans.
“We’ve been really clear about what we will and won’t use their technology for,” Plumb said when asked how the Pentagon works with AI model providers.
Nevertheless, this has kicked off a round of speed dating between AI companies and defense contractors.
Meta partnered with Lockheed Martin and Booz Allen, among others, to bring its Llama AI models to defense agencies in November. That same month, Anthropic teamed up with Palantir. In December, OpenAI struck a similar deal with Anduril. More quietly, Cohere has also been deploying its models with Palantir.
As generative AI proves useful in the Pentagon, it could push Silicon Valley to relax its policies on AI use and allow more military applications.
“Playing out different scenarios is something that generative AI can be useful for,” Plumb said. “It allows you to take advantage of all the tools that our commanders have, but it also allows you to think creatively about different response options and potential compromises in an environment where there is a potential threat to be pursued, or a range of threats.”
It’s unclear whose technology the Pentagon used for this work; using generative AI in the kill chain (even at an early planning stage) appears to violate the usage policies of several leading model developers. Anthropic’s policy, for example, prohibits using its models to produce or modify “systems designed to cause harm to or loss of human life.”
In response to our questions, Anthropic pointed TechCrunch to a recent interview its CEO, Dario Amodei, gave to the Financial Times, in which he defended the company’s military work:
The position that we should never use AI in defense and intelligence settings doesn’t make sense to me. The position that we should go all in and use it to make anything we want, up to and including doomsday weapons, is obviously just as crazy. We’re trying to find the middle ground, to do things responsibly.
OpenAI, Meta and Cohere did not respond to TechCrunch’s request for comment.
In recent months, a debate has kicked off in the defense tech world over whether AI weapons should really be allowed to make life-and-death decisions. Some argue the US military already has weapons that do.
Anduril CEO Palmer Luckey recently noted on X that the US military has a long history of purchasing and using autonomous weapons systems, such as the CIWS turret.
“The DoD has been acquiring and using autonomous weapons systems for decades. Their use (and export!) is well-understood, well-defined, and clearly regulated by rules that are strictly non-voluntary,” said Luckey.
But when TechCrunch asked whether the Pentagon is considering fully autonomous weapons, Plumb rejected the idea.
“No, that’s the short answer,” Plumb said. “As a matter of both reliability and ethics, we will always have humans involved in the decision to use force, and that includes our weapons systems.”
The word “autonomy” is somewhat ambiguous, and it has sparked debate across the tech industry about when automated systems, such as AI coding agents, self-driving cars, or self-firing weapons, become truly independent.
Plumb said the idea of automated systems making life-and-death decisions independently is “very binary” and that the reality is less “science fiction.” Rather, she suggested that the Pentagon’s use of AI systems is really a collaboration between humans and machines, with senior leaders making active decisions throughout the process.
“People think of it as there are robots somewhere, and then the gonculator (a fictional autonomous machine) spits out a sheet of paper and people just check the box,” Plumb said. “That’s not how human-machine teaming works, and it’s not an effective way to use these kinds of AI systems.”
The military’s partnerships with Silicon Valley haven’t always gone over well with tech workers. Last year, dozens of Amazon and Google employees were fired and arrested after protesting their companies’ military contracts with Israel, cloud deals codenamed “Project Nimbus.”
In comparison, there has been a fairly muted response from the AI community. Some AI researchers, such as Evan Hubinger of Anthropic, say that the use of AI in the military is inevitable, and that working directly with the military is critical to ensuring it functions properly.
“If you take the catastrophic risks of AI seriously, the US government is an extremely important actor to engage, and trying to block the US government out of using AI is not a viable strategy,” Hubinger said in a November post to the LessWrong online forum. “It’s not enough to just focus on catastrophic risks; you also have to prevent the government from misusing your models.”