Opinion: Red lines and Red flags


Image by: Reuters

The fierce standoff over Claude isn’t just a contract fight. It’s about who controls the future of military AI.

In Washington and Silicon Valley, a conflict once relegated to specialist policy briefings has burst into view as arm's-length diplomacy between the U.S. Department of Defense and Anthropic, the San Francisco-based AI lab, approaches a critical deadline.

At stake is the future of AI governance and what limits, if any, private developers can place on how governments use powerful models.

For years, Anthropic has distinguished itself from peers by embracing a safety-first stance. Its flagship model, Claude, was designed with guardrails that explicitly prohibit use in fully autonomous lethal weapons or domestic surveillance.

Those restrictions have been central to the company’s identity and its appeal to customers wary of unfettered AI.

The Pentagon has responded sharply. Defense Secretary Pete Hegseth has given Anthropic until ...


Copyright of this story solely belongs to thenextweb.com.