What are the new systemic risks of agentic AI? The Association for Computing Machinery weighs in
We are in the early innings of new agentic AI approaches built on a foundation of Large Language Models (LLMs). They show promise for a range of problems, but introduce many new risks we are still trying to understand. In addition to the existing issues with LLMs, such as hallucinations, these new systems can evolve unpredictably, interact with other agents in opaque ways, and operate beyond human control. All of these risks can erode trust with enterprises, regulators, and users.
The Association for Computing Machinery's (ACM) Europe Technology Policy Committee recently explored these concerns to guide industry discussions. Gerhard Schimpf, one of the paper's co-authors and a longtime systems researcher, says:
What’s new is not just that the systems are more capable. It’s that we’re increasingly delegating strings of actions, not just individual predictions, to AI systems, often in environments where no human can reasonably ...

