AI startup Anthropic says it will not give the U.S. military unlimited access to its technology, despite pressure from the U.S. Department of Defense.
The company made its position clear on Thursday after receiving an ultimatum from the Pentagon.
Anthropic refuses unrestricted military use
Anthropic CEO Dario Amodei said the company cannot agree to the Pentagon’s request to allow unrestricted military use of its AI systems.
“These threats do not change our position,” Amodei said. “We cannot in good conscience agree to their request.”
According to the company, the U.S. government gave Anthropic until Friday evening to approve full military use of its AI — even if that use goes against the company’s internal ethical standards.
If Anthropic refuses, the government could force compliance using the Defense Production Act.
Ethical concerns over surveillance and autonomous weapons
Anthropic said its technology is already used by the Pentagon and intelligence agencies in some cases, mainly for defensive and analytical work.
However, the company says it has clear limits.
Amodei explained that Anthropic will not support the use of AI for mass surveillance of U.S. citizens or fully autonomous weapons.
“Using these systems for mass domestic surveillance is incompatible with democratic values,” he said.
He also warned that today’s AI systems are not reliable enough to control lethal weapons without human oversight, adding that the company does not want to create tools that could put soldiers or civilians at risk.
Pentagon issues a strong warning
After a meeting earlier this week, the Pentagon told Anthropic it must approve unrestricted military use of its technology by 5:01 pm Friday or risk being forced to cooperate.
Officials said they could use the Defense Production Act — a Cold War-era law that allows the U.S. government to require companies to prioritize national security needs. The law was previously used during the COVID-19 pandemic.
The Pentagon also warned it could label Anthropic a supply chain risk, a designation usually reserved for companies linked to rival countries. Such a label could damage the company's reputation and limit its ability to work with the U.S. government.
A senior defense official responded to Anthropic’s concerns by saying the Pentagon always operates within the law.
“Legality is the Pentagon’s responsibility as the end user,” the official said.
AI race inside the defense sector
The dispute also highlights growing competition among AI companies working with the U.S. military.
Anthropic was awarded a $200 million government contract last year alongside OpenAI and Google to provide AI models for military projects.
The Pentagon also confirmed that Grok, the system developed by Elon Musk's xAI, has already been cleared for use in a classified environment. Other companies, including OpenAI and Google, are reportedly close to receiving similar approvals.
A clash between safety and national security
Anthropic was founded in 2021 by former OpenAI employees with a focus on building safer AI systems.
That mission is now putting the company in a direct clash with government officials who want broader access to AI for military purposes.
Amodei said Anthropic understands that military decisions are made by governments, not private companies. But he believes there are certain situations in which AI could undermine democratic values rather than protect them.