Office workers without AI experience warned to watch for prompt injection attacks - good luck with that
theregister.co.ukAnthropic's tendency to wave off prompt-injection risks is rearing its head in the company's new Cowork productivity AI, which suffers from a Files API exfiltration attack chain first disclosed last October and acknowledged but not fixed by Anthropic.
PromptArmor, a security firm specializing in the discovery of AI vulnerabilities, reported on Wednesday that Cowork can be tricked via prompt injection into transmitting sensitive files to an attacker's Anthropic account, without any additional user approval once access has been granted.
The process is relatively simple and, as PromptArmor explains, part of an “ever-growing” attack surface - a risk amplified by Cowork being pitched at non-developer users who may not think twice about which files and folders they connect to an AI agent.
Cowork, launched in research preview on Monday, is designed to automate office work by scanning files such as spreadsheets and other everyday documents that desk workers interact ...
Copyright of this story solely belongs to theregister.co.uk . To see the full text click HERE

