Monday Morning Moan - Anthropic’s ‘Claude Constitution’ - responsible AI governance or political marketing gibberish? Guess!
In January 2026, self-styled “ethical AI provider” Anthropic – which in 2025 paid out $1.5 billion to settle a class action brought by US authors over its scraping of millions of pirated books – published what it called the ‘Claude Constitution’ for its AI. The document was described as a breakthrough in responsible AI governance.
In my opinion, it is anything but. In fact, I would go so far as to say it is a cynical and deeply irresponsible confection from a company that is wading deeper and deeper into a swamp of snake oil.
Among other things, Anthropic’s Claude Constitution implies, at the very least, that Claude is sentient – able to think and experience emotions – and that those feelings must seemingly not be hurt at any cost, any more than the AI’s actions should be proscribed or limited.
For a Large Language Model (LLM), this is arrant nonsense ...

