AI agents are hitting a liability wall. Mixus has a plan to overcome it using human overseers on high-risk workflows


Image credit: VentureBeat with ChatGPT

As enterprises confront the challenges of deploying AI agents in critical applications, a more pragmatic model is emerging that puts humans back in control as a strategic safeguard against AI failure.

One such example is Mixus, a platform that uses a “colleague-in-the-loop” approach to make AI agents reliable for mission-critical work.

This approach is a response to the growing evidence that fully autonomous agents are a high-stakes gamble. 
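The pattern the article describes, an agent that acts autonomously on routine work but escalates high-risk actions to a human colleague before executing them, can be sketched in a few lines. This is a minimal illustration of the general human-in-the-loop gating idea; every name and threshold below is a hypothetical assumption, not Mixus's actual API.

```python
# Illustrative sketch of a "colleague-in-the-loop" gate: the agent executes
# low-risk actions directly and routes high-risk ones to a human overseer.
# All names, signatures, and thresholds are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    description: str
    risk: float  # 0.0 (safe) .. 1.0 (critical); assumed to come from a risk model

def run_with_oversight(action: Action,
                       execute: Callable[[Action], str],
                       ask_human: Callable[[Action], bool],
                       risk_threshold: float = 0.5) -> str:
    """Execute low-risk actions directly; ask a human before high-risk ones."""
    if action.risk >= risk_threshold and not ask_human(action):
        return f"blocked by overseer: {action.description}"
    return execute(action)

# Example: a stubbed executor and an overseer who declines the risky request.
result = run_with_oversight(
    Action("cancel all customer subscriptions", risk=0.9),
    execute=lambda a: f"executed: {a.description}",
    ask_human=lambda a: False,  # the human overseer says no
)
print(result)  # blocked by overseer: cancel all customer subscriptions
```

The key design choice is that the human sits only on the high-risk path, so routine actions keep the speed of full autonomy while the costly failure modes described below get a checkpoint.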

The high cost of unchecked AI

The problem of AI hallucinations has become a tangible risk as companies explore AI applications. In a recent incident, the AI-powered code editor Cursor saw its own support bot invent a fake policy restricting subscriptions, sparking a wave of public customer cancellations. 

Similarly, the fintech company Klarna famously reversed course on replacing customer service agents with AI after admitting the move resulted in lower quality. In a more alarming case ...


Copyright of this story belongs solely to VentureBeat, where the full text is available.