A federal judge has issued a preliminary injunction blocking the U.S. government from enforcing a supply-chain risk designation against Anthropic, the AI company behind Claude. The designation, issued by the Federal Acquisition Security Council (FASC), would have restricted federal agencies from purchasing Anthropic’s services over unspecified national security concerns related to its supply chain.

The ruling comes in response to a lawsuit filed by Anthropic challenging the process as opaque and procedurally flawed. The judge found that Anthropic demonstrated a likelihood of success on its claims that the designation was arbitrary and violated due process, as the company was not given adequate notice or a meaningful opportunity to contest the allegations.

The injunction halts the designation’s enforcement pending a full trial on the merits, allowing Anthropic to continue its government business operations for now. The case highlights growing tensions between rapid AI development and government oversight mechanisms.

Read the full article: https://www.wired.com/story/anthropic-supply-chain-risk-designation-injunction/