Anthropic drops its signature safety promise and rewrites AI guardrails
techradar.com
- Anthropic has removed its pledge not to train or release AI models without guaranteed safety mitigations in advance
- The company will now rely on transparency reports and safety roadmaps instead of strict preconditions
- Critics argue the shift shows the limits of voluntary AI safety commitments without binding regulation
Anthropic has formally abandoned its central promise not to train or release frontier AI systems unless it could guarantee adequate safety in advance. The company behind Claude confirmed the decision in an interview with Time, marking the end of a policy that had once set it apart among AI developers. The newly revised Responsible Scaling Policy focuses more on ensuring the company stays competitive as the AI marketplace heats up.
For years, Anthropic framed that pledge as evidence that it would resist the commercial pressures pushing competitors to ship ever more powerful systems. The policy effectively barred it from ...