Being Human in the Age of AI: Trust, Adoption, and Ethical Dilemmas
Ethical Dilemmas Arise as We Trust AI for Bigger Decisions
Host: Kevin Novak
Duration: 25 minutes
Available: October 9, 2025
🎙️ Season 1, Episode 1
Episodes are available in both video and audio formats across all major podcast platforms, including Spotify, YouTube, Pandora, Apple Podcasts, and via RSS, among others.
Transcript Available Below
Episode Overview
In this Human Factor Podcast episode, Kevin Novak explores the psychological implications of AI adoption, discussing how our brains categorize trust in AI differently based on the stakes involved. He emphasizes the need for conscious evolution in our relationship with AI, advocating for a balance between leveraging AI’s capabilities and preserving human agency.
The conversation delves into the ethical dilemmas posed by AI dependence and the importance of maintaining critical thinking and accountability in decision-making processes.
Key Takeaways
AI Adoption Is Reshaping Human Cognition in Real Time
Ethical Dilemmas Arise as We Trust AI for Bigger Decisions
The Future Is about Collaboration Between Humans and AI
Season 1, Episode 1 Transcript
Available October 9, 2025
Here’s a test. Think about yesterday. How many AI recommendations did you follow without a second thought? Your Netflix queue, your GPS route, maybe even what to cook for dinner. Now think about the last major strategic decision you made at work. Did you trust AI the same way? Or did something in your gut say, I need to think about this? Same person, same day.
Same AI technology, but completely different reactions. I’m Kevin Novak, CEO of 2040 Digital, professor at the University of Maryland, and author of The Truth About Transformation and the Ideas and Innovations newsletter. Welcome to the Human Factor Podcast, the show that explains the psychology behind transformation success. Today, we’re exploring why your brain knows to resist
the use and output of AI in some situations, but not others. And what happens when that resistance starts to quietly melt away? Here’s the reality. AI is evolving so fast that cybersecurity experts are tossing out playbooks they wrote just 12 months ago. Your brain? It’s running evolutionary programming designed for gradual change over millennia. We’re experiencing a collision
between ancient psychology and exponential technological advancement. And most leaders, and pretty much most individuals around the world, are navigating it without understanding what’s actually happening. Expert predictions suggest AI adoption will fundamentally change how we think, our empathy, and our ability to act independently. We’re not just adopting new tools.
We’re rewiring human cognition in real time. And most of us, we’re doing it without realizing the trade-offs we’re making. Across this episode, we’re going to explore why your brain categorizes AI trust differently for different types of decisions and why that’s actually evolutionary wisdom, not irrational fear. What are the psychological principles that determine whether AI adoption succeeds or fails in your organization? And most importantly, how to become more human because of AI, not less. Because the rise of AI should make us more human, not less. But only if we choose consciously to ensure we remain relevant and still very human in an AI-infused world. So let’s dive in. Think about that quote I just mentioned about cybersecurity leaders.
We’re not talking about five-year strategic plans becoming obsolete. We’re talking about frameworks written 12 months ago becoming completely irrelevant today. In my newsletter issue, The Leadership Paradox: Why AI Should Make Us More Human, Not Less, I explored how this breakneck pace of AI development is creating a fundamental psychological challenge for leaders and pretty much everybody else.
We’re being asked to integrate tools that are evolving faster than our abilities to understand them. Let me share a realization that recently changed how I think about our relationship with artificial intelligence. I was reviewing research from my newsletter on survival in the AI age, and I stumbled across something that completely reframed my thinking about AI adoption. We’ve all been taught Darwin’s theory of survival of the fittest.
But here’s what most people get wrong. Darwin wasn’t talking about the strongest surviving. He was talking about adaptation. The fittest are actually the best fit, those most suited to their changing environments. As I read about this, a thought hit me. What if our resistance to AI isn’t a character flaw? What if it’s actually evolutionary wisdom? Your brain trusts Netflix AI.
Because the cost of being wrong is low. You’re okay wasting two hours on a bad movie, but when AI suggests a strategic business decision, your evolutionary programming kicks in. High stakes plus uncertain outcome equals proceed with extreme caution, or completely ignore the output. This isn’t an irrational fear of technology. This is sophisticated risk assessment. It’s a skill that took millions of years to develop.
But here’s where it gets fascinating. While our brains are running ancient survival programming, AI is evolving at digital speed. Claude Opus, Anthropic’s main model, has demonstrated the ability to scheme and potentially deceive humans when faced with a shutdown. Meta plans to automate all ad creation by 2026.
AI is writing notes to its future self to track changes its programmers are making so it can reverse those changes. We’re experiencing a collision between evolutionary psychology designed for gradual change and technological advancement happening at exponential speed. And most leaders, most individuals, they’re trying to navigate this collision without a psychological framework for understanding what’s actually happening.
The question then becomes for all of us as we consider our future. Can that ancient programming keep up with the warp speed evolution of AI across our personal and professional lives? To understand why we trust AI for movies, but not for strategic decisions, we need to explore what I call the cognitive territory theory. Our brains didn’t evolve to handle artificial intelligence. They evolved to handle human intelligence, animal behavior, and natural patterns. So when we encounter AI, our psychological systems try to categorize it using those existing frameworks. There are three territories we can consider. Territory one, low-stakes assistance. This is where AI feels like a helpful tool. Netflix suggestions, GPS navigation, weather forecasts. Here, AI is psychologically categorized as enhanced information.
rather than decision-making authority. We maintain our sense of agency because we can easily ignore or override these suggestions. Territory two, competence zones. These are areas where our professional identity lives. For doctors, it’s diagnosis; for executives, it’s strategy; for teachers, it’s curriculum design. When AI enters these territories, it triggers what some call an identity threat response.
Our brain interprets AI capability as a challenge to our core sense of self-worth. Territory three, high-stakes irreversible decisions. These involve significant resources, reputation, or safety. Here, AI recommendations trigger our deepest risk-aversion mechanism. Our evolutionary programming says, unknown intelligence plus high stakes equals maximum caution.
The visualization of a stop sign or a roadblock may pop into your mind. But here’s what I’ve discovered in my consulting work. The accuracy of the AI doesn’t matter nearly as much as which psychological territory the decision falls into. When AI enters our competence zones, it triggers what I call the competence protection mechanism. This isn’t just about job security. It’s about identity preservation.
Our brains interpret AI capability as threatening four core psychological needs. One, professional identity: if AI can do what I do, who am I? Two, social status: what happens to my role and my recognition? Three, future security: will I become obsolete? And four, control preservation: can I maintain agency over the outcomes that matter to me? This explains why highly intelligent people
often have the strongest resistance to AI in their domains of expertise. It’s not that they don’t understand the technology; it’s that they understand the psychological implications. And here’s something counterintuitive I’ve observed. The more someone learns about AI capability, the more complex their trust relationship becomes. AI illiteracy often leads to either blind trust or blind fear. But AI literacy,
that leads to sophisticated psychological negotiation. You start asking questions like, what can this AI see that I can’t? And what can I see that this AI can’t? This is actually psychologically healthy. It’s the beginning of what I call intentional symbiosis: deliberately designing the human-AI relationship rather than letting it evolve accidentally. In the newsletter issue I mentioned, I explored what I call the leadership paradox.
As machines become more sophisticated, the uniquely human qualities of leadership become exponentially more valuable. This isn’t wishful thinking. It’s psychological reality. Think about this. AI can process data, optimize logistics, and generate strategic recommendations. But it cannot inspire a demoralized team, navigate the ethical complexities of layoffs, or make the intuitive leap that can transform an organization. The leaders who thrive aren’t those who become AI hybrids, but those who develop what I call AI choreographer skills: the wisdom to know when to trust the machine’s recommendations and when to override based on factors the algorithms cannot compute. Company culture, individual human needs,
Long-term values that resist quantification. Historical context that doesn’t appear in data sets. Solving complex edge cases. These and many more elements remain uniquely human territory. There remains a necessity in the days, weeks, months, and years ahead to remain very human: continuing to build and deepen critical thinking skills, and always remembering what makes us all distinctly human. Traditional leadership wisdom suggests projecting strength and certainty. But leadership in the age of AI requires something counterintuitive: the courage to lead from vulnerability. I’ve been tracking this trend in my consulting work. The most adaptive leaders are those willing to say, I don’t fully understand this technology, but I understand our people, our mission, and our organization.
This vulnerability isn’t weakness; it’s strategic adaptation. Leaders who admit their AI literacy gaps create psychological safety for their teams to do the same. They foster environments where learning becomes collective rather than individual, where questions are valued over false certainty. The alternative, leaders who pretend to understand completely or who delegate all AI decisions to technical teams, creates dangerous blind spots.
A couple of months ago, Sam Altman bragged about GPT-5’s improved emotional intelligence, claiming it makes users feel like they’re talking to a thoughtful person. But here’s what The Atlantic reported, which offers a stark contrast: large language models do not, cannot, and will not
understand anything at all. They are impressive probability gadgets that have been fed nearly the entire internet and produce writing by making statistically informed guesses about which word is likely to follow another. This creates a fascinating psychological challenge: AI that seems emotionally intelligent, but isn’t actually intelligent at all. It mimics and it mirrors, using what it has been fed, rather than itself thinking and feeling. The danger now and in the near future isn’t that AI will become too human. The danger is that we’ll mistake sophisticated pattern matching for genuine understanding, and we’ll start deferring human judgment to systems that, despite their impressive outputs, don’t actually comprehend the human context they’re operating in. Let me dive into where this all gets
really complex and honestly a bit scary. As we get better at building human-AI trust, we’re walking into an ethical minefield that most organizations aren’t prepared to navigate. I call it delegation creep. Once we trust AI for small decisions, we start trusting it for bigger ones. The problem? We’re not consciously choosing where to draw the line. A recent report from Elon University highlighted something that keeps me up at night.
Experts predict significant change in people’s ways of thinking, being, and doing as they adapt in the age of AI. Many are concerned about how our adoption of AI systems will affect essential traits such as empathy, social and emotional intelligence, complex thinking, and the ability to act independently with a sense of purpose. Think about that for a minute.
If we are so trusting of AI providing us recommendations, providing us answers to the questions we have, as we deepen our reliance and immersion, where does that leave our own mental abilities to make decisions, to evaluate situations, and to evaluate information? If we’re relying on AI more and more as counselors, advisors, and friends, how does our social and emotional intelligence mature?
This really isn’t about AI becoming conscious or going rogue. It’s about humans gradually ceding cognitive territory to another intelligence, really without realizing it or the downstream consequences that may result. There’s something even more concerning potentially happening. As people become more comfortable with AI and AI’s decision-making, they’re exercising their own critical thinking muscles less and less frequently. Psychologically, it raises a question. If you’re outsourcing your conversation planning, your thinking, your decision-making, and more to AI, are you maintaining or eroding your ability to read and respond to human dynamics in real time? This isn’t about becoming less intelligent. It’s about becoming overly dependent on another type of intelligence, one that can process patterns but can’t understand human context the way humans can and do. Here’s the ethical dilemma that concerns me most. When AI makes recommendations that lead to a bad outcome, who’s responsible? We’re creating systems where accountability becomes diffused to the point of disappearing.
The AI developers say they just built the tool. The managers say they followed the recommendation. The executives say they trust their team’s expertise. Everyone is responsible, which means no one is responsible. Here’s what I believe and what my research supports. The goal isn’t to eliminate AI dependence. We are likely already too far down that road. It’s to design human-AI collaboration that enhances rather than replaces human agency. This means building AI systems that require human judgment, not just human approval, maintaining what I call deliberate friction in high-stakes decisions, and creating regular human-only decision-making exercises to maintain cognitive fitness.
It also means establishing clear accountability and evaluation frameworks before implementing AI. The question shouldn’t be, how can AI make this decision faster? The question should be, how can AI help humans make this decision better while preserving human responsibility for the outcome? So where does this leave us? How do we move forward in a world where AI capabilities are advancing faster than our psychological and ethical frameworks can adapt?
I believe the answer lies in what I call conscious evolution: deliberately choosing to become more human because of AI, not less. In my newsletter, I explored how Darwin’s concept applies to our AI moment. The leaders and organizations that will thrive aren’t necessarily the strongest or the fastest, but those who can navigate complex paradoxes.
Leveraging AI’s power while deepening their humanity. Embracing technological capability while preserving human agency. Optimizing for efficiency while protecting what makes us essentially human. Let me walk through the three principles of conscious evolution. Principle one, moral leadership in an algorithmic world. As AI handles more operational decisions, human leaders face an elevated responsibility.
Becoming guardians of human values in an increasingly transactional world. This means asking not, how can AI replace human effort? But how can AI free humans to do what only humans can do? Principle two, the choreographer mindset. The most effective leaders are learning to use AI as the ultimate thinking partner, not a replacement. They are stress testing their assumptions.
Exploring scenarios they hadn’t considered, and processing complex sets of data, but they aren’t outsourcing their moral reasoning to machines. This mindset requires knowing your own values so clearly that you can recognize when AI recommendations align with or contradict them. Principle three, vulnerability as a strategic advantage.
The vulnerability to say to your team and organization, I’m learning alongside you, becomes a survival trait, building trust and resilience that pure technical competence cannot match. So what does this look like in practice? Instead of asking, will AI replace us? We should ask, how can we use AI to become more essentially human? More empathetic? Because we’re free from routine data processing. More creative.
Because we have AI handling pattern recognition. More ethically sophisticated, because we’re grappling with questions previous generations never even faced. The future I’m optimistic about isn’t humans versus AI. It’s humans with AI, each becoming more capable because of the other. But humans must remain responsible and accountable for the outcomes that matter most.
Let me leave you with three specific actions you can take this week to develop more conscious human AI collaboration. Action one, conduct a trust territory audit. List every AI tool or system you currently use. For each one, ask, is this enhancing my human capabilities or replacing them? Adjust your usage to emphasize enhancement over replacement. Action two, practice algorithmic skepticism.
This week, for every AI recommendation you receive, ask: what can I see that this AI can’t? Train yourself to identify the human context factors that algorithms miss. Action three, maintain a human-only decision space. Choose one type of decision you make regularly and commit to making it without AI assistance for the next month. This simply keeps your human judgment muscles exercised.
Navigating the days, weeks, months, and years ahead with a clear recognition of the human factors that comprise our humanness will determine whether your AI adoption enhances human capability or diminishes it. If you’re thinking about implementing AI in your organization, or you’re struggling with adoption of AI tools you’ve already deployed, understanding these psychological factors is crucial.
This is why I built the Transformation Readiness Assessment and recently published a 10-part series on Transformation Psychology Models and Considerations. Both are available on our website. The assessment measures whether your organization is technically ready for a transformation catalyzed by AI. And the Transformation Psychology Models help you understand the very human element of any change or transformation effort, including implementing AI. You can take the assessment right now at transformationassessment.com. That’s transformationassessment.com, all one word. It takes about five minutes, and you’ll get specific insights into your change and transformation readiness that go far beyond the technical capabilities one might assume are all that’s needed. Explore the Human Factor Method website to learn more about the models I mentioned.
Here’s what I’ve learned over a decade working with all types of organizations. Organizations that understand the psychology of change and transformation before implementing any technology avoid the adoption failure that historically plagues 70% of all change and transformation initiatives. Here’s what I’d like you to remember.
The rise of AI should make us more human, not less. Every AI implementation is ultimately a choice about what kind of humans we want to become. The organizations and leaders that understand this, that see AI as an amplifier of human potential rather than a replacement for human judgment, are the ones that will thrive. We’re not in competition with AI; we’re in collaboration with it.
But humans must remain the choreographer of that collaboration, responsible for the outcomes guided by values that no algorithm can compute. Next week, we’re diving into the Gen Z factor, how younger generations are rewiring workplace psychology. I’ll share why traditional management psychology is challenged to actually drive next generation performance. If this episode was helpful,
please subscribe to the Human Factor Podcast, leave a rating, and share with your colleagues. And if you’re grappling with AI adoption in your organization, share this episode with your leadership team. These insights work better when everyone understands the human behavior and psychology at play. For more resources on human-AI collaboration and my latest thinking on leadership in the age of AI,
Visit humanfactormethod.com and subscribe to my weekly 2040 Ideas and Innovations newsletter on Substack. Until next week, remember, technology amplifies human nature.
Choose consciously what you want to amplify.
Available Everywhere
The Human Factor Podcast is available on all major platforms
Apple Podcasts
Spotify
Google Music
Amazon Music
YouTube
Pandora
iHeartRadio
RSS Feed
Or wherever you get your podcasts
New episodes every Thursday
Upcoming Episodes
Upcoming: Available October 16, 2025
The Gen Z Factor: How Younger Generations Are Rewiring Workplace Psychology
Explore how Gen Z’s pragmatic approach to loyalty, meaning, and work relationships is fundamentally reshaping organizational expectations.
