Anthropic weakens its safety pledge in the wake of the Pentagon's pressure campaign


The Claude maker's new policy trades hard lines in the sand for flexible gray areas. (Aerps.com / Unsplash)

Two stories about the Claude maker Anthropic broke on Tuesday that, taken together, arguably paint a chilling picture. First, US Defense Secretary Pete Hegseth is reportedly pressuring Anthropic to relax its AI safeguards and give the military unrestrained access to its Claude AI chatbot. Then, on the same day the Hegseth news broke, the company dropped its centerpiece safety pledge.

On Tuesday, Anthropic said it was modifying its Responsible Scaling Policy (RSP) to lower its safety guardrails. Until now, the company's core pledge had been to stop training new AI models unless specific safety safeguards could be guaranteed in advance. That policy, which set hard tripwires to halt development, was a big part of Anthropic's pitch to businesses and consumers.

“Two and a half years later, our ...


Copyright of this story solely belongs to Engadget.