AI agents are fast, loose and out of control, MIT study finds
zdnet.com

The vast majority of agentic AI systems disclose nothing about what safety testing, if any, has been conducted, and many have no documented way to shut down a rogue bot, a study by MIT and collaborators found.

ZDNET's key takeaways
- Agentic AI technology is marked by a lack of disclosure about risks.
- Some systems are worse than others.
- AI developers need to step up and take responsibility.
Agentic technology moved further into the AI mainstream this week with the announcement that OpenAI has hired Peter Steinberger, the creator of the open-source software framework OpenClaw.
The OpenClaw software attracted heavy attention last month, not only for the wild capabilities it enables -- agents that can, for example, send and receive email on your behalf -- but also for its dramatic security flaws, including the ...

