In many boardrooms, a dangerous delusion is taking hold. Executives are championing artificial intelligence by painting illusions of a seamless, frictionless future where knowledge workers become instantly more productive, creative, and compliant. In this case, however, it isn’t AI that’s hallucinating; it’s business executives.
Although we should be concerned about AI systems “hallucinating,” a more consequential form of fabrication thrives in the C-suite: the fantasy that adopting AI will somehow be fast, easy, and immune to the messy realities of human organizations.
The Instant Impact Delusion
Executives see Microsoft’s Copilot or Salesforce’s Agentforce and conjure a future where knowledge workers instantly become more productive. They imagine AI agents summarizing meetings, drafting emails, generating content, even managing suites of agents with the same ease as rolling out a new laptop or switching to cloud email.
Deploying AI requires transformative thinking. Agents and copilots don’t simply slot into existing workflows; they reshape how work is conducted, distributed, and evaluated. Leaders who treat AI as an “add-on” ignore the necessity of redesigning work for an environment that is fundamentally different from the one that existed before AI arrived.
McDonald’s and Taco Bell both experimented with AI in their drive-thrus, hoping for immediate gains in speed and accuracy. Instead, the systems often misinterpreted orders, frustrating customers and forcing human workers to step in. These pilots failed because leaders treated AI like a plug-and-play upgrade rather than a shift in operations that required new policies, training, and workflows.
Without structural adjustments such as shifts in policy, practice, and expectations, even promising tools risk becoming intriguing pilots that never scale into meaningful work experiences.
The Organizational Amnesia Delusion
Employees are told to implement pilots, restructure data pipelines, and craft use cases, all while maintaining existing workflows. The paradox is clear: the same people tasked with building the bridge are often the ones being told they may not cross it once it is complete. Expecting people to design for their own obsolescence, with enthusiasm, is naïve at best and corrosive at worst.
Without co-creating post-AI career paths, organizations risk losing the core expertise that will be required to innovate, codify new knowledge, and maintain the relevance of their knowledge sources.
And while AI vendors promise systems that will eventually learn from experience, right now, people are the only conduit for learning and adapting. AI may eventually get very good at doing what was important yesterday, but it may remain incapable of handling emergent situations for which it has not been trained.
Some reasoning by analogy may work for minor shifts, but huge market changes like the ones we are currently experiencing will remain beyond AI’s ability to cope. If knowledgeable people are uncertain, and they are the source of the content that trains AI models, then we should expect AI models to also be uncertain.
Organizations have always struggled with change. Process improvement, ERP rollouts, and digital transformations each revealed the slow, contested nature of altering human systems. Reflect on Hershey’s 1999 ERP fiasco or Target’s Canadian supply chain failure. And those were deterministic systems.
Many leaders who should know better currently act as if AI will somehow skip the historical lessons of organizational inertia. That is its own hallucination: a belief that AI will bypass the stubborn patterns of power, trust, and resistance.
The Complexity Denial Delusion
Organizations are complex, woven from rules, tacit practices, and fragile agreements about who does what. Hallucinating executives imagine that AI can simply take over tasks without unraveling these interdependencies. But knowledge work is rarely a collection of discrete, interchangeable actions.
Work creates and depends on context. An assistant who drafts an email may alter tone and timing in ways that shift relationships. An agent that automates a process may inadvertently surface informal exceptions that were once handled through tacit knowledge. IBM’s early oncology work at MD Anderson resulted in Watson being sidelined not because of its technical failings, but because of complex and ill-advised management tactics that most likely doomed the project from the outset.
Replacing deeply rooted work requires more than licensing software. It requires organizations to revisit why the work exists, who benefits, and how its transformation alters not only productivity but meaning. Air Canada was reminded of this need when a BC tribunal held the airline liable after its bot provided incorrect bereavement-fare guidance, illustrating that seemingly simple, discrete tasks live inside legal, policy, and service entanglements.
Leaders who wave away the complexity of change are as unmoored from reality as an AI confidently citing a paper that does not exist.
The Passive Participation Delusion
Perhaps the most vivid hallucination is the belief that those at risk of replacement will cheerfully participate in building their replacements. In practice, their cooperation is indispensable. Training AI on enterprise data requires access, annotation, and feedback—all of which come from the very people whose future is uncertain. Without their involvement, implementations stall, accuracy suffers, and the promise of widespread adoption becomes the next failed investment.
Kathryn Sullivan, for instance, spent 25 years at Commonwealth Bank of Australia, where she helped train its chatbot, Bumblebee. She was eventually laid off, along with 45 co-workers, once the AI started answering the calls she had helped script. The union intervened, and the bank reversed the decision after admitting that its assessment of the business impact had been inadequate, but not before the emotional fallout had reverberated across the workforce.
Cooperation is not conjured through vision statements and town hall meetings. Organizations must reckon as much with fairness, purpose, and intent as they do with efficiency. Trust cannot be automated, only cultivated; it is easily undermined, and once lost, it is expensive to reestablish.
Facing the Real Hallucinations
Organizations treat AI hallucinations as a threat even as their executives make pronouncements or enact policies that also qualify as hallucinations. An AI misattributing a quote or offering incorrect product advice sparks panic about brand risk. A leader overestimating what AI can do by next quarter is met with applause for ambition. Yet the latter is far more consequential.
Technical hallucinations can often be corrected with guardrails, fine-tuning, or improved retrieval or representation approaches. Managerial hallucinations can derail investments, alienate talent, and undermine strategic credibility.
If organizations are serious about AI, they must confront their own illusions. They must recognize that hallucinations are not confined to the tensors of large language models, but flourish in boardrooms and planning committees.
AI will not erase the need for strategy, nor will it shortcut the hard labor of organizational change. Leaders cannot hallucinate their way past the difficult conversations about reskilling, redistribution of work, and the organizational recalibration required to coexist with intelligent systems.
Talking Your Organization Down
It is tempting to keep the spotlight on AI’s hallucinations—they are visible, measurable, and often correctable. But that is a distraction from the hallucinations that really matter. The test, which I have not yet seen in an AI assessment tool, is not whether AI mistakes can be curtailed, but whether leaders can resist their own fantasies long enough to address complexity, design meaningful change, and treat their people as partners rather than obstacles.
AI is not a dream machine, nor is it a nightmare engine. It is a mirror. Many of its “hallucinations” reflect the human propensity to leap ahead of evidence, to see what we want rather than what is. Organizations that recognize this may learn to avoid managerial hallucinations long enough to reflect on how to employ AI effectively in service of strategy, rather than as yet another tool to bludgeon workers into becoming more productive.
So, how can leaders talk their organization down from these managerial hallucinations? It starts with staging an intervention.
Stage a Reality Check, Not a Pilot: Before greenlighting a pilot, ask questions that check for managerial overreach. Are the expectations too grandiose? Has the AI strategy been aligned with the business strategy to the point that AI is deployed in service of strategy rather than as an independent strategy?
Incentivize, Don’t Command, Cooperation: Revisit role descriptions and explicitly include incentives and rewards for taking leadership in the deployment and adoption of AI systems. Look for opportunities to change career paths to reflect the new reality of AI-first processes and the roles people will play in their success.
Benchmark Against Your Past, Not Vendor Promises: Remind your organization of the lessons from past change initiatives. If strategy is the story an organization tells itself about its future, that story should be grounded in stories about the past: the lessons learned from previous technology shifts, how those shifts eventually proved successful, the pitfalls encountered, and the approaches that moved the organization beyond those obstacles.
Redefine “Productivity” for the AI Era: While AI may offer some immediate returns, large-scale transformations are likely to take months or years to realize, and current accounting methods make it hard to connect initial investments with those returns. Take time early in this disruptive cycle to evaluate new technology investment accounting approaches, including ideas like “The Serendipity Economy,” which recognizes the unique ways that value emerges from human-technology cooperation.