Artificial Understanding
What Feeds the Machine and What It Means for All of Us
Part Three of a Three-Part Series
Issue 262, April 30, 2026
Over the past two weeks, this series has examined the comprehension gap that defines our relationship with artificial intelligence. In Part One, I explored how executives are making consequential decisions about AI systems they do not fully understand, creating organizational risk that grows in proportion to the gap between capability and comprehension. In Part Two, I examined the human cost of that same gap: the cognitive erosion, the parasocial attachments, and the devastating consequences for young people growing up in an environment where AI simulates understanding without possessing it. Both dimensions of the comprehension gap are real and urgent. But neither can be fully understood without examining the foundation they both rest on: the vast, largely invisible infrastructure of data that makes all of it possible.
Every AI system that hallucinates in a boardroom presentation and every AI companion that simulates empathy with a distressed teenager is built on the same raw material. It is built on us. On our behavioral data, our inference data, our preferences, our patterns, our words, our movements, and the digital trails we have been leaving for decades, often without awareness and almost always without genuine understanding of where that data goes, who uses it, and what it enables. This is the final piece of the comprehension gap, and in many ways it is the most consequential. Because the data infrastructure that feeds AI was built long before most people understood what AI would become, and the rules governing that infrastructure have never caught up to the reality of what it now powers.
The Data We Gave Away
In July 2022, in Issue 65 of this newsletter, I wrote a piece called “Behavioral and Inference Data: A 360 Perspective” that examined how our digital actions (the sites we visit, the products we research, the content we consume, the words we choose in our communications, even the GPS coordinates of our devices) are collected, aggregated, and analyzed to build increasingly detailed profiles of who we are, what we want, and what we are likely to do next. I also lecture on the topic in the courses I teach at the University of Maryland, where it reliably draws gasps and expressions of shock from students.
The concept of inference data that I explored in that piece is straightforward: individual data points that are meaningless in isolation can be combined to reveal intimate details about a person’s identity, beliefs, emotional state, and behavior. What seemed abstract nearly four years ago is now the operational reality powering the AI systems that are reshaping every aspect of our professional and personal lives.
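To make that mechanism concrete, here is a minimal sketch in Python. The data points, marker terms, and scoring rule are entirely hypothetical; real broker models are proprietary and far more sophisticated. The sketch illustrates only the structural point: no single input is sensitive, yet the combination yields a sensitive label.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    source: str  # where the data point was collected
    value: str   # the observed behavior

# Each data point is innocuous on its own; the profile emerges only in combination.
signals = [
    Signal("browser history", "visited a prenatal vitamins product page"),
    Signal("loyalty card", "stopped buying wine after years of regular purchases"),
    Signal("location data", "weekly visits to a medical office building"),
    Signal("search queries", "searched 'early pregnancy symptoms'"),
]

# Hypothetical scoring rule: a broker's model needs only correlated signals,
# not a disclosure, to attach a sensitive label to a person.
def infer_sensitive_attribute(signals):
    markers = ("prenatal", "pregnancy", "medical office", "stopped buying wine")
    hits = sum(any(m in s.value for m in markers) for s in signals)
    return hits / len(signals)  # crude likelihood between 0 and 1

print(f"Inferred likelihood: {infer_sensitive_attribute(signals):.0%}")
```

Nothing in that input list required the person to disclose anything, and no single signal would trigger a privacy objection on its own. The inference lives entirely in the combination, which is precisely why consent frameworks built around individual data points fail to capture it.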
The data trails I described in 2022 have not diminished. They have expanded exponentially. Every interaction with a digital platform, every voice command to a smart device, every purchase through a reward program, every scroll through a social media feed generates data that is recorded, stored, and available for use. In March 2024, I expanded on this in Issue 152, “What We Know About You: Welcome to the Surveillance State,” drawing on Byron Tau’s investigation into how commercial data brokers sell personal behavioral data to government agencies. Tau documented that the data being sold commercially (our geolocation data, our shopping patterns, our communication metadata) would require a search warrant if collected through traditional intelligence methods. Michael Morell, a former deputy director of the Central Intelligence Agency, noted that if this same data were gathered through intelligence operations it would be classified as “top secret sensitive.” Yet it is available for purchase by anyone willing to pay for it, because we volunteered it through our daily digital interactions.
What has changed since I wrote those pieces is not the data collection itself. What has changed is what that data is now being used to build. In 2022, the primary concern was targeted advertising and commercial manipulation. Today, that same behavioral and inference data is the training material for AI systems that generate strategic recommendations for executives, simulate emotional relationships with teenagers, produce content that is indistinguishable from human writing, and create digital twins that purport to represent individual human beings. The infrastructure that was built to sell us products is now the infrastructure that powers systems claiming to think, decide, and relate on our behalf.
The Privacy Reckoning
The privacy concerns that executives raised at the AI twins event I attended three weeks ago are not new fears. They are old fears encountering new consequences. When a CEO asks, “What happens to my data?” in the context of an AI digital twin, the honest answer is that what happens to their data has been happening for years. The behavioral patterns, communication styles, decision-making tendencies, and professional relationships that an AI twin would need to simulate have been captured in fragments across dozens of platforms for as long as that executive has been operating in a digital environment. The AI twin is not creating a new privacy problem. It is making visible a privacy problem that has existed beneath the surface of our digital lives for at least the past two decades.
Regulatory frameworks are attempting to close the distance, reactive as ever rather than proactive, but the gap between the pace of technology and the pace of governance remains enormous. The European Union’s AI Act, the world’s first comprehensive legal framework for artificial intelligence, becomes fully enforceable in August 2026. It classifies AI systems by risk level, requires transparency in algorithmic decision making, mandates human oversight of systems that affect fundamental rights, and prohibits social scoring and the segmentation of people based on behavior and personal characteristics. It is a significant step. But it arrives after years of largely unregulated data collection and AI training have already occurred. A further challenge is the growing pressure to weaken the EU AI Act’s provisions in order to preserve economic competitiveness, because most countries and regions of the world are following the United States in declining to enact strong regulation. The risk is that the most ambitious governance framework in the world may be diluted before it is fully tested.
The European Data Protection Board has raised concerns that proposed amendments to the GDPR, which would create a “legitimate interest” basis for processing personal data to train AI models, lack adequate safeguards, and that opt-out mechanisms are “insufficient for data subjects whose information has already been collected.”
In the United States, the regulatory landscape remains fragmented, with little appetite at the federal level to impose constraints that might slow economic growth. The federal government has gone further, drafting policies that would preempt states from enacting their own AI and privacy regulations.
Existing state-level privacy laws in California, Colorado, Virginia, and other states address inference data to varying degrees, but there is no comprehensive federal framework governing how personal data is collected, sold, or used to train AI systems. The American Privacy Rights Act, in its various incarnations, would have established national standards, but it has repeatedly stalled in Congress.
Meanwhile, the data continues to flow. Every day, the behavioral exhaust of billions of digital interactions becomes training material for systems that grow more capable, more pervasive, and less understood by the people whose data makes them possible.
The Circle Closes
Consider the full arc of what I have described across this series, and you begin to see a pattern that is both circular and self-reinforcing. We generate data through our digital behavior, often without awareness. That data is collected, aggregated, and sold by commercial brokers. It is used to train AI systems that produce outputs we do not fully understand. Those AI systems are deployed by executives who lack the literacy to evaluate them, consumed by young people whose cognitive defenses have been eroded by the same digital ecosystem that generated the data in the first place, and governed by regulatory frameworks that consistently arrive after the consequences have already manifested.
But the circle does not stop at consumption. AI systems trained on our real behavioral patterns can now generate fabricated behavioral data at scale: synthetic comments that read as contextually plausible because they were trained on genuine human engagement, manufactured social proof that exploits the same heuristics we evolved to navigate real communities, and coordinated inauthentic activity that is increasingly indistinguishable from organic human interaction. The same data we gave away to build these systems is now being used to forge the social signals we depend on to judge what is real, what is trustworthy, and what deserves our attention. The loop is not merely self-reinforcing. It is self-corrupting because the fabricated signals feed back into the same algorithmic systems, distorting the next cycle of recommendations for every authentic user in the ecosystem.
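A toy simulation makes the self-corrupting structure of this loop easier to see. Everything here is invented for illustration: the numbers, the engagement rate, and the deliberately simplified ranking rule bear no relation to any real platform’s algorithm. The only claim the sketch makes is structural: a ranker that cannot distinguish authentic from synthetic engagement lets fabricated signals seed genuine ones.

```python
# Toy feedback-loop model. All figures are invented for illustration;
# the point is the structure, not the numbers.

authentic = 100       # genuine human engagements
synthetic = 0         # bot-generated engagements
ENGAGE_RATE = 0.05    # fraction of newly reached users who engage
BOT_TOPUP = 200       # synthetic engagements injected per cycle

for cycle in range(1, 6):
    score = authentic + synthetic            # ranking signal: the two are fungible
    reach = score * 10                       # higher score -> wider distribution
    authentic += int(reach * ENGAGE_RATE)    # real users follow the apparent crowd
    synthetic += BOT_TOPUP                   # the bot farm keeps priming the loop
    print(f"cycle {cycle}: score={score}, authentic={authentic}, synthetic={synthetic}")

# Within a few cycles, much of the "authentic" engagement is downstream of
# distribution that the fabricated signals purchased.
```

Run the loop and the authentic count grows severalfold within a few cycles, but most of that growth rides on distribution the synthetic engagement bought. That is the sense in which the loop corrupts rather than merely amplifies: the organic signal itself becomes a product of the fabricated one.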
The comprehension gap is not a single problem. It is a system of interconnected failures of understanding that reinforce each other at every level. Executives do not understand how AI works because they were never required to. Parents do not understand what AI companions do because the platforms that build them have no incentive to explain. Young people do not understand the data they generate because the education system has not prioritized that knowledge. Regulators do not understand the pace of change because the technology evolves faster than policy cycles allow. And all of us, collectively, do not understand the full implications of what we have built because the system is designed to be invisible. The data collection is seamless. The AI outputs are polished. The appearance of understanding is convincing. And the actual understanding, the genuine comprehension of what is happening beneath the surface, remains dangerously thin.
A recent paper by Kim, Yu, and Yi at ddai Inc. introduces a concept that captures one of the most insidious dimensions of this comprehension gap. They call it the LLM fallacy: a cognitive attribution error in which individuals misinterpret LLM-assisted outputs as evidence of their own independent competence. This produces a systematic divergence between perceived and actual capability. The opacity, fluency, and low-friction interaction patterns of these systems obscure the boundary between human and machine contributions, leading users to infer competence from the quality of outputs rather than from the processes that generated them.
The researchers map this across four domains: computational, where users generate functional code without understanding the underlying logic; linguistic, where individuals produce fluent text in languages they do not actually speak; analytical, where LLM-generated reasoning is adopted as one’s own; and creative, where AI-assisted content is attributed as personal authorship.
The implications for education, hiring, and organizational decision-making are significant because the fallacy means that the people using AI most confidently may be the ones who understand it least. The comprehension gap is not just about understanding what AI does. It is about understanding what AI does to our understanding of ourselves.
Which is why, in Issue 233, I introduced the concept of Conscious Evolution, the idea that we must deliberately choose to become more human because of AI, not less. That concept has never been more essential than it is right now. Because the alternative, unconscious evolution, is what we are currently experiencing. We are being shaped by systems we did not design, trained on data we did not knowingly provide, operating by principles we do not understand, and producing consequences we did not anticipate. That is not evolution. That is drift. And drift in a system this powerful, with consequences this far-reaching, is something we cannot afford.
A Framework for Genuine Comprehension
I want to close this series not with alarm but with a framework for the work that lies ahead. The comprehension gap is real, but it is not yet set in stone. It is the product of choices we have made, and it can still be narrowed by choices we make differently. Closing it requires action at every level of the system.
At the individual level, comprehension begins with awareness of the data you generate and the AI systems you interact with. It means understanding that every digital interaction creates information that may be collected, aggregated, and used in ways you did not intend. It means evaluating AI outputs with the same critical thinking you would apply to any source of information, asking not just “Is this helpful?” but “How was this produced, what data informed it, and where might it be wrong?” It means recognizing that AI is increasingly capable of manufacturing the social signals we have always relied on to judge whether something is worth our attention, and that the instinct to follow the crowd becomes dangerous when the crowd itself can be fabricated. And it means having honest conversations with the young people in your life about the difference between AI that simulates understanding and the genuine human connections that no system can replace.
At the organizational level, comprehension requires treating AI literacy as a strategic investment, not a training checkbox. It means building verification into workflows from the outset rather than discovering the need for it after deployment. It means demanding transparency from vendors about how their systems work, what data they use, and where their boundaries of reliability lie. It means creating ethical frameworks that go beyond compliance and address the genuine human consequences of the technologies you deploy. And it means recognizing, as I wrote in Issue 233, that the leaders who will thrive are not those who become human-AI hybrids but those who develop the wisdom to know when to trust a machine’s recommendation and when to override it based on factors that algorithms cannot compute.
At the societal level, comprehension demands that we close the governance gap with the same urgency we are applying to AI development. The EU AI Act is a meaningful start, but it must be followed by frameworks that specifically address AI training data consent, AI companions for minors, and the commercial data brokerage ecosystem that feeds the entire system. In the United States, the absence of comprehensive federal AI and privacy legislation is no longer a policy debate. It is a societal vulnerability that grows more consequential with every passing quarter. And in education, AI literacy must become a foundational competency, not for future computer scientists, but for every person who will live and work in a world shaped by these systems, which is all of us.
The Human Factor
Everything I have written across 262 issues of this newsletter, across two seasons of the Human Factor Podcast, and across the pages of The Truth About Transformation comes back to a single conviction: the most consequential variable in any system, any organization, any transformation, and any technology is the human being at the center of it.
AI does not understand this. It cannot understand this.
It processes patterns. We create meaning.
It generates language. We build relationships.
It simulates empathy. We feel it.
The comprehension gap I have described across this series is not, at its root, a failure of technology. It is a failure of human responsibility. We built extraordinary systems and then failed to do the harder, slower, less glamorous work of understanding what we built and what it does to us.
We let the pace and potential benefits of innovation outrun the pace of comprehension, and the consequences are now visible at every level, from boardrooms making uninformed decisions to teenagers forming attachments with systems that cannot care for them to a global data infrastructure operating largely without the knowledge or consent of the people whose lives it shapes.
Closing the comprehension gap is the defining challenge of this moment. Not because AI is dangerous, but because AI without understanding is dangerous. Not because technology is bad, but because technology deployed without genuine comprehension produces outcomes that no one intended and no one is prepared for.
We owe ourselves, our organizations, and the next generation the discipline to understand what we have built. That understanding is not optional. It is not a luxury for the technically inclined. It is a fundamental human responsibility in an age of artificial intelligence.
The intelligence is artificial. The understanding must be real.
Join the Conversation
What does genuine AI comprehension look like in your organization, your family, or your community? What are we getting right, and where are we still falling short? Share your thoughts on LinkedIn or subscribe to the Ideas & Innovations newsletter for weekly insights on leadership, transformation, and the human side of organizational change.
Explore the full archive at: 2040digital.com/newsletter
Assess Your Organization’s Transformation Readiness: transformationassessment.com
The Truth About Transformation
For a comprehensive framework on navigating transformation in the age of AI, my book The Truth About Transformation explores why most change initiatives fail and what leaders can do differently. Available on Amazon.
Go Deeper
Subscribe to The Human Factor Podcast where we explore the psychology of change, the dynamics of resistance, and the strategies that help leaders and organizations navigate transformation successfully. Now in its second season with episodes covering AI adoption resistance, the algorithmic mirror, structural silence, and the broken contracts of organizational change.
Connect With Us
What leadership challenges are shaping your decisions right now? Share your experiences and join the conversation.
Kevin Novak
Kevin Novak is the Founder & CEO of 2040 Digital, a professor of digital strategy and organizational transformation, and author of The Truth About Transformation. He is the creator of the Human Factor Method™, a framework that integrates psychology, identity, and behavior into how organizations navigate change. Kevin publishes the long-running Ideas & Innovations newsletter, hosts the Human Factor Podcast, and advises executives, associations, and global organizations on strategy, transformation, and the human dynamics that determine success or failure.
