As enterprises face the challenges of deploying AI agents in critical applications, a new, more pragmatic model is emerging that puts humans back in control as a strategic safeguard against AI failure.
One such example is Mixus, a platform that uses a "colleague-in-the-loop" approach to make AI agents reliable for mission-critical work.
This approach is a response to the growing evidence that fully autonomous agents are a high-stakes gamble.
The high cost of unchecked AI
The problem of AI hallucinations has become a tangible risk as companies explore AI applications. In a recent incident, the AI-powered code editor Cursor saw its own support bot invent a fake policy restricting subscriptions, sparking a wave of public customer cancellations.
Similarly, the fintech company Klarna famously reversed course on replacing customer service agents with AI after admitting the move resulted in lower quality. In a more alarming case, New York City's AI-powered business chatbot advised entrepreneurs to engage in illegal practices, highlighting the catastrophic compliance risks of unmonitored agents.
These incidents are symptoms of a larger capability gap. According to a May 2025 Salesforce research paper, today's leading agents succeed only 58% of the time on single-step tasks and just 35% of the time on multi-step ones, highlighting "a significant gap between current LLM capabilities and the multifaceted demands of real-world enterprise scenarios."
The colleague-in-the-loop model
To bridge this gap, a new approach focuses on structured human oversight. "An AI agent should act at your direction and on your behalf," Mixus co-founder Elliot Katz told VentureBeat. "But without built-in organizational oversight, fully autonomous agents often create more problems than they solve."
This philosophy underpins Mixus's colleague-in-the-loop model, which embeds human verification directly into automated workflows. For example, a large retailer might receive weekly reports from thousands of stores containing critical operational data (e.g., sales volumes, labor hours, productivity ratios, compensation requests from headquarters). Human analysts must spend hours manually reviewing the data and making decisions based on heuristics. With Mixus, the AI agent automates the heavy lifting, analyzing complex patterns and flagging anomalies such as unusually high wage requests or productivity outliers.
For high-stakes decisions like payment authorizations or policy violations (workflows a human user has defined as "high-risk"), the agent pauses and requires human approval before proceeding. This division of labor between AI and humans is built into the agent creation process itself.
"This approach means humans only get involved when their expertise actually adds value, typically the critical 5-10% of decisions that could have significant impact, while the remaining 90-95% of routine tasks flow through automatically," Katz said. "You get the speed of full automation for standard operations, but human oversight kicks in precisely when context, judgment, and accountability matter most."
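In code terms, the gating pattern described above reduces to a simple routing rule. The sketch below is purely illustrative (Mixus has not published its implementation); the workflow names, risk scores, and threshold are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    task_id: str
    kind: str          # hypothetical workflow type, e.g. "payment_authorization"
    risk_score: float  # assumed 0.0 (routine) to 1.0 (critical)

# Workflow types a human marked "high-risk" at agent-creation time (illustrative)
HIGH_RISK_KINDS = {"payment_authorization", "policy_violation"}
RISK_THRESHOLD = 0.8  # assumed cutoff for escalating anomalous routine tasks

def needs_human_approval(d: Decision) -> bool:
    """Pause the agent when the workflow type or risk score demands review."""
    return d.kind in HIGH_RISK_KINDS or d.risk_score >= RISK_THRESHOLD

def route(decisions):
    """Split tasks into those that flow through and those that wait for a human."""
    auto, escalated = [], []
    for d in decisions:
        (escalated if needs_human_approval(d) else auto).append(d)
    return auto, escalated

auto, escalated = route([
    Decision("t1", "report_summary", 0.1),         # routine: flows through
    Decision("t2", "payment_authorization", 0.3),  # high-risk type: escalated
    Decision("t3", "report_summary", 0.9),         # anomalous score: escalated
])
print(len(auto), len(escalated))  # → 1 2
```

The point of the pattern is that the escalation criteria are declared up front by a human, not decided by the agent at run time, which is what keeps the critical 5-10% of decisions under human control.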
In a demo that the Mixus team showed to VentureBeat, creating an agent is an intuitive process that can be done with plain-text instructions. To build a fact-checking agent for reporters, for example, co-founder Shai Magzimof simply described the multi-step process in natural language and instructed the platform to embed human verification steps with specific thresholds, such as when a claim is high-risk and could result in reputational damage or legal penalties.
One of the platform's core strengths is its integrations with tools like Google Drive, email, and Slack, allowing enterprise users to bring their own data sources into workflows and interact with agents directly from their communication platform of choice, without having to switch contexts or learn a new interface (for example, the fact-checking agent was instructed to send approval requests to the editor's email).
The platform's integration capabilities extend further to meet specific enterprise needs. Mixus supports the Model Context Protocol (MCP), which allows businesses to connect agents to their bespoke tools and APIs, avoiding the need to reinvent the wheel for existing internal systems. Combined with integrations for other enterprise software like Jira and Salesforce, this allows agents to perform complex, cross-platform tasks, such as checking on open engineering tickets and reporting the status back to a manager on Slack.
Human oversight as a strategic multiplier
The enterprise AI space is currently undergoing a reality check as companies move from experimentation to production. The consensus among many industry leaders is that humans in the loop are a practical necessity for agents to perform reliably.
Mixus's collaborative model changes the economics of scaling AI. Mixus predicts that by 2030, agent deployment may grow 1000x and each human overseer will become 50x more efficient as AI agents become more reliable. But the total need for human oversight will still grow.
"Each human overseer manages exponentially more AI work over time, but you still need more total oversight as AI deployment explodes across your organization," Katz said.

For enterprise leaders, this means human skills will evolve rather than disappear. Instead of being replaced by AI, experts will be promoted to roles where they orchestrate fleets of AI agents and handle the high-stakes decisions flagged for their review.
In this framework, building a strong human oversight function becomes a competitive advantage, allowing companies to deploy AI more aggressively and safely than their rivals.
"Companies that master this multiplication will dominate their industries, while those chasing full automation will struggle with reliability, compliance, and trust," Katz said.