When a government blacklists its own AI company for refusing to enable mass surveillance, Europe should be paying very close attention


On the afternoon of 27 February 2026, Pete Hegseth picked up his phone and posted to X. The US Secretary of Defense had just designated Anthropic, a San Francisco AI company, a “supply chain risk to national security.”

The label, under 10 USC 3252, had previously been applied to Huawei and ZTE, Chinese firms accused of embedding surveillance backdoors into their hardware.

Now it was being used against an American company founded by former OpenAI researchers, whose crime was this: it refused to let the US military use its AI models for mass domestic surveillance of American citizens, or for fully autonomous lethal weapons.

That afternoon, hours after Anthropic was blacklisted, OpenAI CEO Sam Altman announced his company had reached its own deal with the Pentagon. His models, he wrote, would be available for all lawful purposes.

The same evening, OpenAI’s most senior hardware executive, Caitlin Kalinowski, who had ...


Copyright of this story solely belongs to thenextweb.com.