
Justice Department Says Anthropic Can’t Be Trusted With Warfighting Systems

The U.S. Department of Justice has filed a legal response to a lawsuit from AI company Anthropic, arguing that the company should not be entrusted with developing or managing autonomous warfighting systems. The government’s filing asserts that Anthropic’s AI models, including Claude, are not sufficiently reliable or safe for such high-stakes military applications. It cites concerns about the potential for unpredictable behavior, alignment issues, and the fundamental limitations of current large language models in life-or-death combat scenarios. This legal stance emerges from an ongoing dispute over defense contracts and highlights the growing debate about the appropriate role of private AI firms in national security. Read the full article: https://www.wired.com/story/department-of-defense-responds-to-anthropic-lawsuit/


Wired
