Written by: Haim Ravia, Dotan Hammer
On March 9, 2026, AI company Anthropic filed two federal lawsuits against the Department of Defense—one in the U.S. District Court for the Northern District of California and one in the D.C. Circuit Court of Appeals—challenging the Pentagon’s designation of Anthropic as a “supply chain risk.” The lawsuits cap a weeks-long public confrontation between one of the world’s leading AI companies and the U.S. military over the boundaries of government control over commercial AI technology.
The dispute arose after Anthropic maintained two firm red lines in its contract negotiations with the Department of Defense: it would not allow its Claude AI models to be used for mass surveillance of Americans, and it would not permit its technology to power fully autonomous weapons without human oversight over targeting and firing decisions. Defense Secretary Pete Hegseth argued that the Pentagon should have unrestricted access to AI systems for “any lawful purpose” and that a private contractor should not be permitted to dictate how the military can use its technology.
After negotiations broke down, the Department of Defense designated Anthropic a supply chain risk—a designation typically reserved for foreign adversaries that could potentially sabotage or subvert U.S. interests. National security experts described the use of this label against an American company as highly unusual and, according to some, unprecedented. President Trump subsequently directed all federal agencies to immediately cease using Anthropic’s technology, and the General Services Administration terminated Anthropic’s “OneGov” contract, ending the availability of Anthropic’s services across federal systems.
In its California complaint, Anthropic raises five counts. First, it alleges the Department of Defense violated the Administrative Procedure Act by designating Anthropic without following the procedures Congress mandated, which generally require a risk assessment, notification of the targeted company, an opportunity to respond, a written national security determination, and notification to Congress before a vendor may be excluded from federal supply chains. Second and third, it argues the designation constitutes unconstitutional retaliation against Anthropic’s protected speech about the limitations and safety risks of its AI systems, in violation of the First Amendment. Fourth and fifth, Anthropic claims it was denied due process both in the supply chain risk designation itself and in the unilateral contract cancellations by other executive branch departments. Anthropic separately challenges the designation in the D.C. Circuit under the specific statutory provision the government invoked, 10 U.S.C. § 3252.
The case has attracted extraordinary attention. Dozens of scientists and researchers at rival AI companies OpenAI and Google DeepMind filed an amicus brief in their personal capacities supporting Anthropic, arguing that the supply chain risk designation could harm U.S. competitiveness in AI and chill public discussion about the risks and benefits of the technology. Anthropic is seeking injunctive relief, claiming the government’s actions could reduce its 2026 revenue by several billion dollars and jeopardize hundreds of millions of dollars in existing private-sector contracts.
Click here to read the complaint filed by Anthropic in the Northern District of California.

Israel’s Supreme Court ruled on March 22, 2026, that the municipality of Ramat Gan made “reckless” use of artificial intelligence in administrative proceedings concerning a child with special needs. The court found that the municipality relied on materials generated by AI tools that included a non-existent Education Ministry directive and fabricated court rulings, and ordered the municipality to pay 30,000 NIS (approximately $9,600) in legal costs.