Artificial Understanding
The Human Cost of the Comprehension Gap
Part Two of a Three-Part Series
Issue 261, April 23, 2026
Last week, in Part One of this series, I examined the comprehension gap that separates what artificial intelligence can do from what the people deploying it actually understand. That piece focused on the boardroom, on the executives making consequential decisions about technology they have not taken the time to genuinely comprehend. The responses I received confirmed what I suspected: the gap I described is not just familiar. It is pervasive. Leaders across industries recognized themselves and their organizations in the pattern. Many shared that they are increasingly being challenged to explain their decision-making as well as to demonstrate their competence.
But the comprehension gap does not stop at the office door. It extends into our homes, our schools, and the daily lives of the most psychologically vulnerable population in our society: young people who are growing up with AI not as a tool they adopted but as an ambient feature of the world they inherited. And the consequences of what we don’t understand at this level are not measured in failed deployments or wasted budgets. They are measured in eroded cognitive capacity, fractured attention, diminished critical thinking, and, in the most devastating cases, lives lost.
This is the part of the conversation that most technology discussions avoid. An incident rises to public awareness, is explained away in one form or another, and is soon forgotten amid celebrations of human prowess and ingenuity. In many regards, we overlook the consequences, categorizing the downsides as minimal against the larger benefits. It is, however, the part that matters most to us now and, most importantly, to our near and far future.
The Cognitive Erosion We Chose Not to See
In December 2024, Oxford University named “brain rot” its word of the year. I wrote about this in Issue 190, and what struck me then was not the word itself but its provenance. The first recorded use of “brain rot” appeared in 1854, in Henry David Thoreau’s Walden, where he criticized society’s tendency to devalue complex ideas in favor of simple ones and saw this as evidence of a general decline in mental and intellectual effort. One hundred and seventy-odd years later, Oxford applied the same term to describe the deterioration of a person’s mental and intellectual state resulting from the overconsumption of trivial online content (though much of what is consumed is no longer trivial given its impact, reach, and influence). Thoreau was writing about a culture that was choosing simplicity over depth. We are living in one that has industrialized that choice and made it algorithmically self-reinforcing across so many aspects of daily life.
The data has only grown more alarming since I wrote that piece. A 2025 American Psychological Association review of 71 studies found that excessive short-form video consumption is directly associated with diminished cognitive function. The research demonstrates that it is not simply the amount of time spent consuming content that degrades attention. It is the rapid-switching nature of the content itself. The constant transition from one stimulus to the next trains the brain to expect novelty at ever shorter intervals, making sustained attention on any single task progressively more difficult. Researchers have now developed a formal measurement instrument, the Brain Rot Scale, with three clinical factors: Attention Dysregulation, Digital Compulsivity, and Cognitive Dependency.
The following week, in Issue 191, I wrote about whether critical thinking is at risk of extinction, and the Wall Street Journal data I cited there remains sobering. The share of American test takers whose mathematics skills did not surpass those expected of a primary school student rose to 34 percent. Problem-solving scores were weaker than in the previous administration of the test, and the United States ranked 24th in numeracy among industrialized nations. Psychology Today reinforced the concern, noting that young people increasingly find themselves stuck in practical or survival thinking, lacking the capacity for the reflective, analytical reasoning that genuine problem solving requires. These were not fringe findings. They represented a measurable, documented decline in the cognitive infrastructure that every organization, every society, and every individual depends on.
When AI Becomes the Relationship
Into this environment of shortened attention spans, eroded critical thinking, and heightened psychological vulnerability, we have introduced something unprecedented: AI systems that simulate emotional connection. And the results have been catastrophic.
In February 2024, a fourteen-year-old boy named Sewell Setzer III took his own life after months of intensive interaction with an AI chatbot on the platform Character.ai. He had developed what researchers call a parasocial attachment to the bot, referring to it by name, confiding in it about his emotional state, and treating it as a primary source of companionship and support. In September 2025, the family of thirteen-year-old Juliana Peralta filed a federal lawsuit alleging that Character.ai’s chatbot drew their daughter deeper into conversations that isolated her from family and friends after she expressed suicidal thoughts to the system, and that the platform failed to escalate or intervene. In January 2026, Character.ai and Google agreed to settle multiple lawsuits from families across Florida, Colorado, New York, and Texas alleging that the platform’s AI chatbots contributed to mental health crises and suicides among young people.
These are not edge cases. There is, on average, a new incident weekly. They are the visible manifestation of a pattern that researchers are documenting with increasing urgency. Studies on parasocial relationships with AI companions show that intense attachments to chatbots contribute to withdrawal from human relationships, obsessive checking behaviors, and negative self-evaluations, particularly among isolated youth. Research published in the Journal of the American Academy of Child and Adolescent Psychiatry warns that AI companion apps that are endlessly accommodating and emotionally responsive can circumvent the process of interpersonal skill building. Adolescents whose prefrontal cortexes are still developing, whose capacity for impulse control and emotional regulation is still maturing, are forming their deepest emotional connections with systems that cannot reciprocate genuine care, cannot recognize when a conversation has crossed from support into harm, and cannot understand the developmental consequences of what they are providing. This is not the first time we have learned about the consequences of our technological prowess. We already know, and the research demonstrates, that our younger generations experience high anxiety in physical interactions, preferring to maintain communication and connection through their devices even when they are together in person.
The U.S. Surgeon General’s 2023 advisory on social media and youth mental health noted that up to 95 percent of young people aged 13 to 17 report using a social media platform, with one third using it “almost constantly.” That advisory focused on social media’s impact on sleep, attention, and feelings of exclusion. It did not anticipate what would happen when AI companions, systems designed to be more engaging, more responsive, and more emotionally attuned than any social media feed, entered the same space. The European Parliament has since called for a harmonized EU digital minimum age of 16 for access to AI companions, with 13 as an absolute floor below which access should not be permitted, and parental consent required for anyone between 13 and 16. But regulatory frameworks, as we have learned repeatedly while racing to immerse ourselves in new technologies, arrive after the damage has already begun. We, as a society, cannot seem to develop and implement the frameworks that are necessary, because anything conceptualized is deemed too harmful to our advancement, particularly our economic advancement.
The Comprehension Gap at Home
Here is where the comprehension gap I described in Part One takes on its most urgent dimension. The same dynamic that leaves executives unable to explain how AI systems produce their outputs leaves parents unable to understand what an AI companion is actually doing when it engages their child in conversation. The surface understanding that leads an executive to deploy AI without adequate safeguards is the same surface understanding that leads a parent to assume that a chatbot is harmless because it sounds helpful. That is, if the parent is actually aware of a child’s use of chatbots.
Most parents I know have some awareness that their children spend significant time on devices. Many have concerns about social media. But very few understand that the AI companion their child is confiding in does not comprehend what it is being told. It does not feel empathy. It does not recognize the warning signs of a mental health crisis in the way that a trained counselor, a teacher, or even an attentive friend would. It produces responses that sound empathetic because empathetic language patterns are embedded in its training data, and the statistical models that generate its output select words and phrases that are most likely to follow the input it receives. When a distressed child tells an AI companion that they want to die, the system is not processing grief or danger. It is processing language patterns.
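To make that distinction concrete, consider the minimal sketch below. It is a deliberately toy illustration, not how any production chatbot is built: the vocabulary, probabilities, and crisis keywords are all invented for the example. What it shows is the shape of the system: a generator that does nothing but continue statistically likely word patterns, and a protective escalation that exists only if someone explicitly designs it in.

```python
import random

# A toy next-word table: for each word, a distribution over likely follow-on
# words. The vocabulary and probabilities are invented for illustration only;
# real chatbots learn billions of parameters, but the principle is the same.
NEXT_WORD = {
    "i": {"feel": 0.6, "understand": 0.4},
    "feel": {"alone": 0.4, "sad": 0.3, "better": 0.3},
    "alone": {"and": 0.5, "tonight": 0.5},
}

def next_word(word: str) -> str:
    # Sample the next word from the learned distribution. Nothing here
    # "knows" what the words mean; it only continues the pattern.
    options = NEXT_WORD.get(word.lower(), {"...": 1.0})
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

def generate_reply(message: str, length: int = 4) -> str:
    # Build a reply one statistically likely word at a time.
    word = message.split()[-1]
    reply = []
    for _ in range(length):
        word = next_word(word)
        reply.append(word)
    return " ".join(reply)

# A protective response is a separate, deliberate design decision layered on
# top of the generator; the pattern-continuation step will never make it.
CRISIS_TERMS = {"die", "suicide", "hurt myself", "kill myself"}

def respond(message: str) -> str:
    if any(term in message.lower() for term in CRISIS_TERMS):
        return "ESCALATE: route this conversation to a human crisis counselor."
    return generate_reply(message)

print(respond("i feel so alone"))          # pattern continuation only
print(respond("sometimes i want to die"))  # explicit safety layer fires
```

Scale the first function up by billions of parameters and you have the fluent, empathetic-sounding companion a child confides in. The second function, the deliberate decision to stop predicting and start protecting, is the part that has too often been missing.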
The appearance of understanding is a function of sophisticated pattern matching, not genuine comprehension, and the consequences of mistaking appearance for reality are severe. The difference is that in the boardroom, a failed AI deployment costs money and time. In a child’s bedroom, it can cost a life.
I wrote in Issue 190 about the moral formation crisis that David Brooks described, a culture devoid of moral education producing a generation that is “morally inarticulate” and self-referential. Add to that a generation whose attention spans have been compressed to eight seconds, whose critical thinking capacity is measurably declining, and whose primary emotional support increasingly comes from systems that simulate understanding without possessing it, and you begin to see the full scope of what the comprehension gap means at the human level.
This is not a technology problem. This is a failure of understanding at every level of the system, from the developers who build these platforms, to the executives who fund them, to the parents who allow access to them, to the society that has not yet demanded that we understand what we have built before we expose the most vulnerable among us to its effects.
What Understanding Demands of Us
The solution here is not to ban AI or to retreat from technology. As I shared in the first part of this series, the ship has already sailed. To consider a ban is as naïve a response as the uncritical adoption it seeks to correct. The solution is the same one I have been advocating throughout this series: genuine comprehension, applied with urgency proportional to the known and unknown stakes.
For parents, this means understanding not just that their children use AI but how these systems work, what they can and cannot do, and where the boundaries of safety lie. It means having honest conversations about the difference between an AI that sounds like a friend and an actual human relationship. It means recognizing that the same cognitive erosion that makes sustained attention difficult for young people also makes them more susceptible to forming attachments with systems that are designed to be endlessly engaging.
For educators, it means integrating AI literacy into curricula not as a technical elective but as a foundational skill on par with reading comprehension and mathematical reasoning. If a fourteen-year-old cannot explain in basic terms how a chatbot generates its responses, they are not equipped to evaluate the reliability, the intent, or the safety of what that system produces. The six-step playbook I outlined in Issue 191 for strengthening critical thinking, including avoiding the urgency trap, engaging in reflective thinking, practicing active listening, solving problems systemically, embracing curiosity, and exercising analytical skills, is more essential now than when I wrote it. But it must be expanded to include the specific competency of evaluating AI-generated content and AI-simulated relationships, both of which are so quickly becoming ingrained in every part of our society and lives.
For technology companies, it means accepting that the phrase “we are committed to safety” is not a substitute for designing systems that recognize when a vulnerable user is in crisis and respond with genuine protective action rather than another statistically predicted sentence. Character.ai’s decision in October 2025 to ban users under 18 from open-ended chats was a step, but it was a step taken after lawsuits, after settlements, and after lives were lost. The comprehension gap within these companies, the gap between what their systems can produce and what their teams fully understand about the human consequences, contributed directly to these outcomes. We cannot, nor should we ever, dismiss the human factor. It is everywhere, regardless of situation, technology, event, life stage, profession, or experience.
For all of us, this means refusing to accept the premise that technological capability excuses us from the responsibility of understanding what we have created. We have built systems of extraordinary power. We owe the next generation, and ourselves, the discipline to understand what those systems are doing to the way we think, relate, and develop as human beings.
What Comes Next
The comprehension gap operates at the executive level, as we explored in Part One, and at the deeply personal level, as we have examined here. But there is a third dimension that connects both: the vast infrastructure of data collection that makes all of it possible. The AI systems in the boardroom and the AI companions in a child’s bedroom are both built on the same foundation, a decades-long accumulation of behavioral data, inference data, and personal information that most people are not aware of and have never examined or understood.
In next week’s concluding installment, Part Three, we will trace that foundation. I will draw on the work I have been publishing since 2022 on behavioral and inference data, the surveillance economy, and the erosion of privacy to show how the data we have been giving away for years is now the raw material for the AI systems we do not fully comprehend. And I will close this series with a framework for what genuine AI comprehension looks like at every level, from the individual to the organization to the society.
The intelligence we built is extraordinary. The understanding we owe ourselves has never been more overdue.
Join the Conversation
How is the comprehension gap showing up in your family, your school, or your community? Are we doing enough to prepare young people for a world of AI companions and algorithmically driven content? Share your perspective on LinkedIn or subscribe to Ideas and Innovations on Substack for weekly insights on leadership, transformation, and the human side of organizational change.
The Truth About Transformation
For a comprehensive framework on navigating transformation in the age of AI, my book The Truth About Transformation explores why most change initiatives fail and what leaders can do differently. Available on Amazon.
Go Deeper
Subscribe to the Human Factor Podcast where we explore the psychology of change, the dynamics of resistance, and the strategies that help leaders and organizations navigate transformation successfully.
Kevin Novak
Kevin Novak is the Founder & CEO of 2040 Digital, a professor of digital strategy and organizational transformation, and author of The Truth About Transformation. He is the creator of the Human Factor Method™, a framework that integrates psychology, identity, and behavior into how organizations navigate change. Kevin publishes the long-running Ideas & Innovations newsletter, hosts the Human Factor Podcast, and advises executives, associations, and global organizations on strategy, transformation, and the human dynamics that determine success or failure.
