As enterprises struggle to deploy AI agents in critical applications, a new, more pragmatic model is emerging that keeps humans in control as a strategic safeguard against AI failure.
One such example is Mixus, a platform that uses a “colleague-in-the-loop” approach to make AI agents reliable for mission-critical work.
This approach is a response to growing evidence that fully autonomous agents are a high-stakes gamble.
The high cost of unchecked AI
The problem of AI hallucinations has become a tangible risk as companies push AI into production. In a recent incident, the AI-powered code editor Cursor saw its support bot invent a fake policy restricting subscriptions, triggering a wave of public customer cancellations.
Similarly, the fintech company Klarna famously reversed course on replacing its customer service agents with AI after admitting the move had led to lower quality. In a more alarming case, New York City’s AI-powered business chatbot advised entrepreneurs to engage in illegal practices, highlighting the catastrophic compliance risks of unmonitored agents.
These incidents are symptoms of a larger capability gap. According to a May 2025 Salesforce research paper, today’s leading agents succeed only 58% of the time on single-step tasks and just 35% of the time on multi-step ones, “highlighting a significant gap between current LLM capabilities and the multifaceted demands of real-world enterprise scenarios.”
The colleague-in-the-loop model
To bridge this gap, a new approach focuses on structured human oversight. “An AI agent should act at your direction and on your behalf,” Mixus co-founder Elliot Katz told VentureBeat. “But without built-in organizational oversight, fully autonomous agents often create more problems than they solve.”
This philosophy underlies Mixus’s colleague-in-the-loop model, which embeds human verification directly into automated workflows. For example, a large retailer may receive weekly reports from thousands of stores containing critical operational data (e.g., sales volumes, labor hours, productivity ratios, compensation requests to headquarters). Human analysts would otherwise spend hours manually reviewing the data and making decisions. With Mixus, the AI agent automatically does the heavy lifting, analyzing complex patterns and flagging discrepancies such as unusually high compensation requests or productivity outliers.
For high-stakes decisions, such as payment authorizations or policy violations, the agent pauses at workflow steps a human user has defined as “high-risk” and requires human approval before proceeding. This division of labor between AI and humans is built into the agent creation process.
“This approach means humans are only involved when their expertise actually adds value, usually the critical 5–10% of decisions that could have significant impact, while the remaining 90–95% of routine tasks flow through automatically,” Katz said. “You get the full speed of automation for standard operations, and human oversight kicks in exactly when context, judgment, and accountability matter most.”
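The routing described above can be sketched in a few lines. This is purely illustrative (Mixus’s actual implementation is not public): hypothetical report rows are either processed automatically or escalated to a human approval queue based on user-defined risk thresholds.

```python
# Illustrative sketch of a colleague-in-the-loop triage step.
# ReportRow, the thresholds, and the field names are all hypothetical.
from dataclasses import dataclass, field


@dataclass
class ReportRow:
    store_id: str
    comp_request: float   # compensation requested from headquarters, in dollars
    productivity: float   # output per labor hour


@dataclass
class Triage:
    auto_processed: list = field(default_factory=list)
    needs_approval: list = field(default_factory=list)


def triage(rows, comp_limit=5_000.0, productivity_floor=0.5):
    """Flag anomalies (high pay requests, productivity outliers) for human
    review; everything else flows through automatically."""
    result = Triage()
    for row in rows:
        high_risk = (row.comp_request > comp_limit
                     or row.productivity < productivity_floor)
        (result.needs_approval if high_risk else result.auto_processed).append(row)
    return result


rows = [
    ReportRow("store-001", comp_request=1_200.0, productivity=1.1),
    ReportRow("store-002", comp_request=9_800.0, productivity=0.9),  # high pay request
    ReportRow("store-003", comp_request=800.0, productivity=0.2),    # productivity outlier
]
queues = triage(rows)
print(len(queues.auto_processed), len(queues.needs_approval))  # → 1 2
```

In this toy version, one routine row flows through untouched while the two flagged rows wait for a human decision, mirroring the 90–95% / 5–10% split Katz describes.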
In a demo the Mixus team shared, creating an agent is an intuitive process that can be done with plain-language instructions. For example, to build a fact-checking agent for journalists, co-founder Shai Magzimof simply described the multi-step process in natural language and directed the platform to embed human verification steps at specific thresholds, such as when a claim is high-risk and could result in reputational damage or legal consequences.
One of the platform’s key strengths is its integration with tools such as Google Drive, email, and Slack, which lets enterprise users bring their own data sources into workflows and interact with agents directly from the communication platforms of their choice, without switching contexts or learning a new interface. (The fact-checking agent, for instance, was instructed to send its approval requests via email.)
The platform’s integration capabilities extend further to meet specific enterprise needs. Mixus supports the Model Context Protocol (MCP), which lets businesses connect agents to their bespoke tools and APIs, avoiding the need to reinvent the wheel for existing internal systems. Combined with integrations for other enterprise software such as Jira and Salesforce, this allows agents to perform complex, cross-platform tasks, such as checking open engineering tickets and reporting the status back to a manager on Slack.
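A cross-platform task like the Jira-to-Slack example above reduces to composing tool calls. The sketch below uses hypothetical stub functions in place of real Jira and Slack connectors (which, in Mixus’s case, would be reached via MCP tool servers or native integrations; the actual wiring is not public):

```python
# Hypothetical sketch: stubs stand in for real Jira/Slack tool calls.

def fetch_open_tickets(project: str) -> list[dict]:
    # Stub for a Jira query a tool server might expose; returns canned data.
    return [
        {"key": "ENG-101", "summary": "Fix login timeout", "assignee": "dana"},
        {"key": "ENG-102", "summary": "Patch CSV export", "assignee": None},
    ]


def post_to_slack(channel: str, text: str) -> str:
    # Stub for a Slack "post message" tool; echoes what would be sent.
    return f"[{channel}] {text}"


def report_open_tickets(project: str, channel: str) -> str:
    # The agent's task: query one system, summarize, report into another.
    tickets = fetch_open_tickets(project)
    unassigned = [t["key"] for t in tickets if t["assignee"] is None]
    summary = (f"{project}: {len(tickets)} open tickets, "
               f"{len(unassigned)} unassigned ({', '.join(unassigned) or 'none'})")
    return post_to_slack(channel, summary)


msg = report_open_tickets("ENG", "#eng-managers")
print(msg)  # → [#eng-managers] ENG: 2 open tickets, 1 unassigned (ENG-102)
```

The value of an MCP-style setup is that each stub becomes a declared tool the agent can call, so the same orchestration logic works against whatever internal systems a company already runs.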
Human oversight as a strategic multiplier
The enterprise AI space is currently undergoing a reality check as companies move from pilots to production. The consensus among many industry leaders is that humans in the loop are a practical necessity for agents to perform reliably.
Mixus’s collaborative model changes the economics of scaling AI. Mixus predicts that agent deployments will grow 1,000x and that each human overseer will become steadily more efficient as AI agents grow more reliable. But the total need for human oversight will still increase.
“Each human overseer handles more AI work over time, but you still need more total oversight as AI deployment explodes across your organization,” Katz said.

For enterprise leaders, this means human skills will evolve rather than disappear. Instead of being replaced by AI, experts will be promoted into roles where they orchestrate AI agents and handle the high-stakes decisions flagged for their review.
In this framework, building a strong human oversight function becomes a competitive advantage, allowing companies to deploy AI more aggressively and more safely than their rivals.
“Companies that master this multiplier effect will dominate their industries, while those chasing full automation will struggle with reliability, compliance, and trust,” Katz said.