Artificial Understanding
The Intelligence We Built and the Comprehension We Didn’t
Part One of a Three-Part Series
Issue 260, April 16, 2026
I attended a virtual event last week built around a discussion of AI twins for CEOs and other C-level executives. The technology discussion was impressive; there are so many options. I geeked out on the frameworks, the policy and procedure points, and where progress is happening as we continue to run rather than walk. It was a thought-provoking discussion on so many levels.
The concept is straightforward: create an AI-powered digital replica of yourself that can engage with stakeholders, respond to inquiries, and extend your executive presence across time zones, languages, and platforms without you being in the room, along with a myriad of other potential uses for offloading tasks.
Companies like Read AI have launched digital twin products that can respond to emails and schedule meetings on your behalf. At CES 2026, IgniteTech demonstrated MyPersonas, a platform that builds AI-powered replicas of employees using their video, voice, and written materials. MirrorMeAI is offering leaders the ability to scale their presence across more than 175 languages with zero additional filming time. The promise is compelling and very exciting.
The reality that struck me as I sat and listened to the questions and conversations that followed was something else entirely.
What struck me was not the sophistication of the technology being discussed, not the incredibly smart people in the discussion (and there were many), and not the potential of an AI twin, which would likely be enormously helpful to many.
It was the gap between what the technology could do and what the people in the room actually understood about it. The questions kept circling back to the same anxieties: What happens to my data? What about my privacy? How do I know the AI version of me will not say something I would never say? Can this thing hallucinate in a meeting? And, to me, the most important question: what will this do to the people we are hoping to groom for future leadership roles if an AI twin takes over the work those up-and-coming leaders would otherwise do?
Beneath these specific concerns lay a broader unease that nobody quite articulated but nearly everyone seemed to feel: we are being asked to trust something we, individually, organizationally, and as a society, do not fully comprehend, and the stakes of getting it wrong are not theoretical.
The room was divided in a way that I have seen repeatedly in my consulting work over the past year. There were those who understood what AI is actually doing at a functional level, who could ask informed questions about training data, about how models reason (and are prompted) around strategic considerations, and about the boundaries of what these systems can and cannot reliably produce. And there were those who were operating at the surface. They understood the marketing language. They knew the competitive pressure to adopt. But their engagement with AI remained fundamentally at the level of what it promises rather than what it is.

That gap is not a technology problem. It is a comprehension problem. And it is one of the most concerning gaps in organizational strategy and leadership today.
The Comprehension Deficit
We are living through the fastest adoption of a transformative technology in human history, and the people making the most critical decisions about it are, in many cases, the least equipped to understand what they are deciding. That is not an insult. It is a structural reality. A Harvard Business Review analysis published in February 2026 examined where senior leaders are struggling most with AI adoption and found that the challenge is not resistance or lack of enthusiasm. Leaders are eager and optimistic about AI's potential, particularly its potential to decrease operational costs. The challenge is that their understanding of AI remains largely conceptual rather than functional. They know what AI is supposed to do. They do not know how it does it, where it fails, or why.
The data reinforces what I observed in that virtual room. DataCamp’s 2026 analysis of enterprise AI readiness found that 59 percent of enterprise leaders acknowledge their organization has an AI skills gap. Seventy-two percent say basic AI literacy is becoming more and more important for day-to-day work, and yet only 35 percent report having a mature, workforce-wide upskilling program in place. The gap between recognizing the problem and doing something about it is enormous. And it is not closing. It is widening because the technology is advancing faster than most organizations’ ability to build genuine comprehension.
I wrote about this dynamic in Issue 233 when I introduced what I call Cognitive Territory Theory, the idea that our brains categorize AI interactions into three psychological zones: low-stakes assistance, competence zones, and high-stakes irreversible decisions. In that piece, I explored why a CEO who happily follows a GPS recommendation will agonize over an AI-generated strategic analysis. The answer is not irrationality. It is evolutionary psychology meeting exponential technology. Our brains evolved to assess risk quickly, particularly in environments where the threats were visible and the consequences were physical. Think of our fight-or-flight response: it is something we often do not consider consciously; we simply react.
We are now asking those same brains to evaluate risks that are invisible, statistical, and compounding at a pace no biological system was designed to process. I felt this was such an important consideration that I focused the very first episode of The Human Factor Podcast on the same topic last year.
What I observed this week adds another dimension to that framework. It is not just that executives (and, really, most people) are cautious about AI in high-stakes domains. It is that many are making high-stakes decisions about AI without the foundational literacy to evaluate what they are actually approving, deploying, or depending on. This harkens back to the often blind faith we have historically placed in new technologies, where understanding was lacking but the decision to adopt felt necessary. Once again, often without comprehension of the intended and unintended consequences.
We are operating in cognitive territory we have not mapped, let alone begun to understand.
The Hallucination Problem Is a Trust Problem
The hallucination question came up multiple times during the event, and it deserves serious attention because it illustrates the comprehension gap perfectly. When executives ask, “Can this AI hallucinate?” they are asking a legitimate question. AI systems do produce outputs that are confidently stated and factually wrong. A survey of 1,200 C-suite executives found that 71 percent are hesitant to scale AI deployment without what they call “hallucination proofing,” viewing it as a direct threat to decision-making integrity. They aren’t wrong.
These are real concerns. But here is what the hallucination conversation reveals about the comprehension gap: most of the people asking the question do not understand why hallucinations occur. They treat hallucination as a bug that will eventually be fixed, like a software glitch waiting for a patch. It is not. Hallucinations are a structural feature of how large language models work. These systems generate outputs by predicting what word or phrase is most likely to follow another based on statistical patterns in their training data. They do not understand truth (at least not yet). They do not verify facts against reality unless specifically asked to do so, and even then, the results may not be perfect. They produce language that sounds correct because it follows the patterns of correct language, and sometimes those patterns lead to outputs that are entirely fabricated.
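For readers who want to see the mechanism rather than take it on faith, here is a deliberately toy sketch in Python. The word probabilities are invented for this illustration and do not come from any real model; they stand in for patterns a system might absorb from a corpus where "Sydney" appears near "capital of Australia" more often than "Canberra." The point is only that a system built this way selects the statistically likely continuation, with no step that checks the claim against reality.

# A toy next-token generator. The probabilities below are invented
# for illustration; nothing in this process verifies facts.
next_token_probs = {
    ("The", "capital"): {"of": 0.95, "city": 0.05},
    ("capital", "of"): {"Australia": 0.6, "France": 0.4},
    ("of", "Australia"): {"is": 1.0},
    ("Australia", "is"): {"Sydney.": 0.55, "Canberra.": 0.45},
}

def generate(prompt: str, max_tokens: int = 6) -> str:
    """Greedily extend the prompt with the most probable next token."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        context = tuple(tokens[-2:])
        candidates = next_token_probs.get(context)
        if not candidates:
            break
        # The only criterion is statistical likelihood, not truth.
        tokens.append(max(candidates, key=candidates.get))
    return " ".join(tokens)

print(generate("The capital"))  # -> "The capital of Australia is Sydney." Fluent, confident, wrong.

Real systems are vastly more sophisticated than this two-word lookup, but the core dynamic, picking what is likely rather than what is true, is the same, and that is why verification has to live outside the model.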
When you understand this, you engage with AI differently. You verify outputs. You design workflows with human checkpoints. You build what I have called “deliberate friction” into high-stakes processes. But when you do not understand this, when you treat AI as a smarter version of a search engine or an infallible advisor, you are building organizational processes on a foundation you have not tested.
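To make the deliberate friction idea concrete, here is a minimal sketch of what a human checkpoint can look like inside a workflow. The thresholds, risk labels, and review step are illustrative assumptions on my part, not a prescribed implementation; what matters is that the pause for a human is designed in, not bolted on after something goes wrong.

# A minimal sketch of "deliberate friction": route AI output to a human
# whenever the stakes are high or the system's confidence is low.
from dataclasses import dataclass

@dataclass
class AIOutput:
    text: str
    confidence: float  # model-reported or estimated confidence, 0.0 to 1.0
    stakes: str        # "low", "medium", or "high", as judged by the workflow owner

def needs_human_review(output: AIOutput, confidence_floor: float = 0.9) -> bool:
    """High-stakes work always gets a human checkpoint; low confidence does too."""
    return output.stakes == "high" or output.confidence < confidence_floor

def release(output: AIOutput) -> str:
    if needs_human_review(output):
        return f"HOLD for human sign-off: {output.text}"
    return f"Released without review: {output.text}"

print(release(AIOutput("Draft reply to a routine scheduling email", 0.97, "low")))
print(release(AIOutput("Board-level market forecast", 0.97, "high")))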
Employees are already feeling the consequences. Current research indicates that workers spend an average of 4.5 hours per week verifying whether what AI told them is actually accurate, representing approximately $14,200 per employee per year in pure verification overhead. That is not a technology cost. That is a comprehension cost. Organizations that understood what these systems actually produce would have designed verification into the workflow from the beginning rather than discovering the need for it after deployment. Once again, it is a case of the Wild West (think back to the start of the web) clashing with our very human defaults to conserve energy and lessen the load while embracing potentially unreasonable expectations.
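As a back-of-the-envelope check on that $14,200 figure: the working weeks and hourly cost below are my assumptions for illustration, not numbers published with the cited research, but they show how quickly 4.5 hours a week compounds.

hours_per_week = 4.5          # time spent verifying AI output (the cited figure)
working_weeks_per_year = 48   # assumption for illustration
loaded_hourly_cost = 65       # assumed fully loaded cost per employee hour, in dollars

annual_verification_cost = hours_per_week * working_weeks_per_year * loaded_hourly_cost
print(f"${annual_verification_cost:,.0f} per employee per year")  # about $14,040, close to the cited $14,200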
The Mirror We Are Not Looking Into
In Issue 253, I wrote about what I called The Algorithmic Mirror, the way AI reflects back the biases, inconsistencies and unexamined assumptions that organizations and individuals carry but rarely confront. That piece explored the identity threat that emerges when AI reveals uncomfortable truths about how we actually think and decide, as opposed to how we believe we think and decide.
The comprehension gap I am describing here is a different dimension of the same phenomenon. When executives engage with AI at the surface level, they are not just failing to understand the technology. They are failing to see what the technology reveals about their own organizations. An AI twin that hallucinates in a customer-facing interaction is not just a technology failure. It is exposing the fact that the organization deployed a system it did not understand into an environment where the consequences of failure are significant. That decision reveals something about how the organization evaluates risk, how it allocates resources to understanding before deploying, and how much genuine versus performative due diligence went into the adoption process.
I wrote in Issue 194 about managing ethical dilemmas through a human-centric approach to AI adoption, and I referenced Tristan Harris’s warning that “if we don’t understand the risks, we won’t get to a positive future.” Harris described a world where our technology is ahead of the average individual’s capability to understand it. That was January 2025. Fifteen months later, the gap has only grown. The technology has advanced dramatically while the average executive’s functional understanding has barely moved. We have more tools, more platforms, more vendor pitches, and more competitive pressure to adopt. What we do not have more of is a genuine comprehension of what we are adopting and what it means for the people it touches.
And this is where the conversation needs to expand beyond the boardroom. Because the comprehension gap is not just an executive problem. It extends to every level of every organization and into every household. The same forces that leave a CEO unable to explain why an AI system hallucinates leave a parent unable to understand what an AI companion is actually doing when it engages their teenager in conversation. The same surface-level understanding that leads to deployment without adequate safeguards in the enterprise leads to adoption without adequate understanding in the lives of the most vulnerable people in our society.
What Genuine AI Literacy Requires
The solution is not to slow down AI adoption. That ship has sailed, and the competitive dynamics are real. The solution is to close the comprehension gap with the same urgency we are applying to adoption itself. And that requires a fundamentally different approach than what most organizations are currently pursuing.
AI literacy at the leadership level does not mean learning to code or understanding the mathematics of neural networks. It means understanding five things with enough depth to make informed decisions.
First, how these systems actually produce their outputs, not at the level of a computer scientist but at the level of someone who needs to know what they can and cannot reliably do.
Second, where the boundaries of reliability are and how those boundaries shift depending on the domain, the data, and the task.
Third, what data these systems were trained on and what that means for the biases, gaps, and limitations in their outputs.
Fourth, what happens when these systems fail and how to design processes that account for failure rather than assuming it away.
And fifth, what the human and organizational consequences of deployment look like, not in the vendor’s pitch deck but in the lived experience of the people who interact with these systems every day.
Harvard Business Review’s January 2026 survey on how executives are thinking about AI found that the organizations seeing the strongest returns are those that pair AI investment with structured workforce capability building. They are nearly twice as likely to achieve meaningful ROI compared to organizations that deploy tools without building understanding. That finding should be a wake-up call for every leader who has approved an AI budget without a corresponding investment in genuine comprehension.
In my own practice, I have been living this evolution. Over the past several months, I have integrated AI, not as a replacement for judgment but as a collaborator that extends my capabilities. And the single most important thing I have learned is that the value of AI is directly proportional to the depth of your understanding of what it can do, what it should and should not do, and what it simply cannot do. When I understand the boundaries, I use it brilliantly. When I assume capabilities it does not have, the results expose my own comprehension gap. Every leader I work with is navigating this same learning curve, whether they recognize it or not.
What Comes Next
The comprehension gap at the executive level is consequential, but it is only the beginning of the story. The same forces that leave leaders unable to fully evaluate the AI systems they are deploying are playing out at a far more personal and far more dangerous level in the lives of young people who are growing up with AI as a baseline feature of their world rather than a tool they chose to adopt.
In next week’s issue, Part Two of this series, we will examine the human cost of the comprehension gap. It runs from the erosion of attention spans and critical thinking that I wrote about in “Brain Rot, Attention Spans and You” to the emergence of AI companions like Character.ai that simulate emotional connection, with developmental consequences we are only beginning to understand. The conversation about AI cannot remain confined to the boardroom. It must extend to the family room, the classroom, and the deeply personal question of what these systems are doing to the way we think, relate, and develop as human beings.
The intelligence we built is extraordinary. The understanding we owe ourselves, our organizations, and especially the next generation is long overdue.
Join the Conversation
What is your experience with the AI comprehension gap? Are the leaders in your organization making informed decisions about AI, or are they operating at the surface? I would welcome your perspective. Share your thoughts on LinkedIn or subscribe to Ideas and Innovations on Substack for weekly insights on leadership, transformation, and the human side of organizational change.
The Truth About Transformation
For a comprehensive framework on navigating transformation in the age of AI, my book The Truth About Transformation explores why most change initiatives fail and what leaders can do differently. Available on Amazon.
Go Deeper
Subscribe to the Human Factor Podcast where we explore the psychology of change, the dynamics of resistance, and the strategies that help leaders and organizations navigate transformation successfully. Now in its second season with episodes covering AI adoption resistance, the algorithmic mirror, structural silence, and the broken contracts of organizational change.
Connect With Us
What leadership challenges are shaping your decisions right now? Share your experiences and join the conversation.
Kevin Novak
Kevin Novak is the Founder & CEO of 2040 Digital, a professor of digital strategy and organizational transformation, and author of The Truth About Transformation. He is the creator of the Human Factor Method™, a framework that integrates psychology, identity, and behavior into how organizations navigate change. Kevin publishes the long-running Ideas & Innovations newsletter, hosts the Human Factor Podcast, and advises executives, associations, and global organizations on strategy, transformation, and the human dynamics that determine success or failure.
