AI agents are fast, loose, and out of control, MIT study finds
The majority of agentic AI systems disclose nothing about what safety testing they have undergone, and many have no documented way to shut down a rogue bot, a study by MIT found.

ZDNET's key takeaways
- Agentic AI technology is marked by a lack of disclosure about risks.
- Some systems are worse than others.
- AI developers need to step up and take responsibility.
Editor's note: This article has been updated with responses from Perplexity, OpenAI, and IBM.
Agentic technology is moving fully into the mainstream of artificial intelligence with the announcement this week that OpenAI has hired Peter Steinberg, the creator of the open-source software framework OpenClaw.
The OpenClaw software attracted heavy attention last month not only for enabling wild capabilities -- agents that can, for example, send and receive email on your behalf -- but ...
Copyright of this story solely belongs to zdnet.com.

