When Better AI Makes Oversight Harder
Human-AI collaboration is often presented as the gold standard for modern organizations. AI systems handle scale and speed, while humans provide judgment, oversight, and accountability. In theory, these hybrid teams should outperform humans or AI working alone.
New research from Hamsa Bastani, Associate Professor of Operations, Information and Decisions, and Gérard Cachon, Fred R. Sullivan Professor of Operations, Information and Decisions, both at the Wharton School, reveals a counterintuitive challenge: as AI systems become more reliable, organizations may find it increasingly difficult, and costly, to motivate humans to oversee them effectively.
The Human-AI Contracting Paradox
Modern AI tools tend to fail rarely, but unpredictably. When failures occur, they can be expensive, reputationally damaging, or even dangerous. That is why organizations insist on “human-in-the-loop” designs.
The research shows that vigilance is not free. When AI errors are infrequent, humans must expend effort reviewing outputs that are almost always correct. As a result, the compensation required to ensure consistent oversight rises sharply as AI reliability improves.
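To make the intuition concrete, consider a deliberately simplified back-of-the-envelope sketch (an illustration of the underlying incentive logic, not the authors' formal contracting model): suppose a reviewer bears a small effort cost for each AI output they check carefully and is rewarded only when they catch an error. For careful review to be worth the effort, the expected reward per item must cover that cost, so the bonus required per caught error grows roughly in inverse proportion to the AI's error rate. The numbers below are hypothetical.

```python
# Illustrative sketch only (hypothetical numbers, not the researchers' model):
# a reviewer pays effort_cost to carefully check one AI output and earns a
# bonus only when they catch an error. Careful review is worthwhile when
#   error_rate * bonus >= effort_cost,  i.e.  bonus >= effort_cost / error_rate.

def min_bonus_per_caught_error(effort_cost: float, error_rate: float) -> float:
    """Smallest per-catch bonus that makes careful review pay off for the reviewer."""
    return effort_cost / error_rate

if __name__ == "__main__":
    effort_cost = 2.0  # assumed cost, in dollars, of carefully checking one output
    for error_rate in (0.10, 0.01, 0.001):  # the AI becomes more reliable
        bonus = min_bonus_per_caught_error(effort_cost, error_rate)
        print(f"AI error rate {error_rate:>6.1%}: bonus needed per caught error ≈ ${bonus:,.0f}")
```

In this toy model, every tenfold improvement in AI reliability raises the reward needed to keep a reviewer attentive by roughly tenfold, which is the economic intuition behind the paradox.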
This creates what the researchers call a human-AI contracting paradox. Even when human-AI collaboration would produce the best outcomes for the organization, rational leaders may choose to:
- Limit or delay adoption of advanced AI tools,
- Rely entirely on AI and accept occasional failures, or
- Prefer less reliable AI systems because they keep humans more engaged.
In short, better AI can create worse economic incentives for oversight.
Key Takeaways for Senior Leaders
- Human oversight is an economic decision, not just a design choice.
If oversight is essential, it must be explicitly rewarded. Professional norms alone are not enough to sustain attention when AI rarely fails.
- More reliable AI does not automatically lead to better outcomes.
As AI error rates decline, the cost of motivating human vigilance increases. Leaders should anticipate this tradeoff rather than assuming reliability solves governance problems.
- Specialization beats uniform reliability.
Organizations benefit when AI is predictably strong at some tasks and predictably weak at others. When humans can tell when AI is likely to fail, oversight becomes more targeted and less costly.
- Redesign roles around judgment, not constant monitoring.
Humans add the most value when they decide whether AI should be trusted, not when they are asked to check everything.
- Align AI governance with incentives.
Mandating “human-in-the-loop” processes without rethinking compensation and accountability can create the illusion of control without the reality.
Why Organizations Get Stuck

This paradox helps explain why many AI deployments stall after early success. The obstacle is often not employee distrust of or resistance to AI, but a misalignment between how the AI performs and how the humans overseeing it are incentivized.
Oversight is often treated as a passive responsibility rather than an active role. When incentives fail to reflect the real cost of vigilance, organizations either overpay for supervision or quietly lose it.
This content was created with the assistance of generative AI. All AI-generated materials are reviewed and edited by the Wharton AI & Analytics Initiative to ensure accuracy, clarity, and alignment with our standards.
About Wharton AI & Analytics Insights
Wharton AI & Analytics Insights is a thought leadership series from the Wharton AI & Analytics Initiative. Featuring short-form videos and curated digital content, the series highlights cutting-edge faculty research and real-world business applications in artificial intelligence and analytics. Designed for corporate partners, alumni, and industry professionals, the series brings Wharton expertise to the forefront of today’s most dynamic technologies.
