US Government Phases Out Anthropic AI Over Security Concerns
The US administration has directed federal agencies to phase out Anthropic's AI tools, including the Claude chatbot, citing supply-chain risks to national security. The move comes amid public backlash against OpenAI's Pentagon deal, which had boosted Claude's popularity and driven it to the top of app charts. Meanwhile, the Pentagon has labeled Anthropic uncooperative on military applications such as intelligence and cybersecurity. The episode underscores growing tension between AI ethics commitments and national defense needs.