Human Factor Podcast Season 1 Episode 010: Measuring the Human Factor – When Surveys Lie and Behavior Reveals the Truth
Learn How to Measure What People Do, Not Say
Host: Kevin Novak
Duration: 32 minutes
Available: December 11, 2025
🎙️ Season 1, Episode 10
Episodes are available in both video and audio formats across all major podcast platforms, including Spotify, YouTube, Pandora, Apple Podcasts, and via RSS, among others.
Transcript Available Below
Episode Overview
Why do transformation initiatives fail despite dashboards showing 82% employee support? Because we’re measuring the wrong things. In this episode, Kevin Novak reveals the measurement crisis hiding in plain sight: a consistent 40 to 50 percentage point gap between what people say they’ll do and what they actually do.
Drawing from real consulting experience where behavioral data exposed that only 31% of employees were genuinely adopting a change that surveys claimed 82% supported, Kevin introduces the four domains of human factor measurement that make psychological readiness visible and actionable.
You’ll learn why surveys measure intention instead of behavior, why training completion rates reveal compliance instead of capability, and why system logins show access frequency instead of genuine adoption.
Most importantly, you’ll discover a practical framework for measuring what actually predicts transformation success: behavioral readiness, psychological safety, adoption velocity, and sustainability indicators. If your transformation metrics keep showing green while your outcomes stay red, this episode explains why and what to do about it.
Learn more about the Human Factor Podcast
Subscribe to the Ideas and Innovations Newsletter (It's free)
Key Takeaways
Traditional Metrics Mislead Because Surveys Measure Intention
Declining Error Reports During Transformation Often Signal Fear Rather Than Success
Learn About Behavioral Readiness, Psychological Safety, Adoption Velocity, and Sustainability Indicators
Season 1, Episode 10 Transcript
Available December 11, 2025
Episode 010: Measuring the Human Factor – When Surveys Lie and Behavior Reveals the Truth
DURATION: 32 minutes
HOST: Kevin Novak
SHOW: The Human Factor Podcast
An organization hired us to figure out why their digital transformation was stalling. They had run an engagement survey three months earlier. Support for the initiative was at 82%. Leadership was baffled. If four out of five people supported the transformation, why wasn’t anything actually changing?
So we started measuring something different. Not what people said they would do, but what they actually did. We tracked system logins, feature usage, time spent in new workflows versus old workarounds. We monitored question frequency in training sessions and error reporting rates. We looked at whether people were helping their colleagues learn the new system or quietly reverting to spreadsheets when no one was watching.
The behavioral data told a completely different story. Actual adoption was 31%. Two out of three people who claimed to support the transformation were actively avoiding it in their daily work. They weren’t lying on the survey. They genuinely believed they supported the initiative. But their behavior revealed what their conscious minds couldn’t admit: they weren’t psychologically ready for change.
This gap between stated support and actual behavior is the measurement crisis at the heart of transformation failure. And it’s why everything we’ve discussed in the first nine episodes of this series, from resistance patterns to transformation fatigue, remains invisible to most organizations until it’s far too late.
I’m Kevin Novak, CEO of 2040 Digital, Professor at the University of Maryland, and author of The Truth About Transformation: Leading in the Age of AI, Uncertainty, and Human Complexity and the Ideas and Innovations weekly newsletter.
Welcome to The Human Factor Podcast, the show that explores the intersection of humanity, technology, and transformation along with the psychology behind transformation success.
Today we’re exploring how to measure the human factor: the psychological readiness that determines whether your transformation will succeed or become another casualty of the 70% failure rate.
This connects directly to the subtitle of my book: Leading in the Age of AI, Uncertainty, and Human Complexity. Because in an era where technology changes faster than psychology can adapt, understanding how to measure what’s actually happening inside your organization, not what surveys tell you is happening, becomes the difference between transformation success and complete and expensive failure.
Here’s the fundamental problem with how organizations measure transformation: we measure what people say instead of what they do. We ask people if they support the change, and we’re surprised when stated support doesn’t translate to behavioral change. We track training completion rates and assume that completing a course means someone has actually changed how they work. We count system logins and conclude that access frequency equals adoption.
These aren’t measurements of human factors. They’re measurements of compliance theater. And they explain why organizations keep investing in transformations that fail at predictable rates despite all the dashboards showing green.
Today I’m going to share the measurement framework we developed at 2040 Digital to make psychological readiness visible and actionable for our clients. Let’s start by understanding why the metrics you’re currently using are probably lying to you.
SECTION 1: THE MEASUREMENT GAP
Research from implementation science reveals a consistent pattern across industries and transformation types. Self-reported support for organizational change typically ranges between 70% and 85%. People genuinely believe they support the initiative. But actual adoption rates, measured by sustained behavior change rather than survey responses, average between 30% and 45%.
That’s a gap of 40 to 50 percentage points between what people say and what they do. And here’s the critical insight: this isn’t a communication problem or a training gap. It’s a measurement problem. We’re measuring the wrong things, and then we’re surprised when our measurements don’t predict the outcomes we hoped for.
Let me explain why traditional transformation metrics systematically mislead leaders.
First, surveys measure intention, not behavior. When someone says they support a transformation, they’re reporting their conscious intention. They genuinely believe they support it. But intention and behavior are governed by different psychological systems. Your conscious mind can commit to change while your unconscious mind is working overtime to protect you from it. This is exactly what we explored in Episode 8 on hidden resistance. The quality director who sincerely supports digital transformation still raises quality concerns that delay implementation. She’s not lying. She’s protecting herself from competence threat in ways she can’t consciously recognize. Surveys also tend to steer respondents toward the answers the survey creators expect. Whoever authors a survey writes the questions and then, often to gain consistency of response, writes the multiple-choice answers as well. That construction forces respondents to choose from what was provided. Across a decade of consulting work, I have rarely seen the truth emerge from a survey. Those hidden gems of challenge, resistance, and important knowledge only present themselves outside the survey construct, and only then can you recognize the very human behaviors that are manifesting.
Second, training metrics measure compliance, not capability. When organizations track training completion rates or training test scores, they’re measuring whether people showed up, not whether they retained information and will put the knowledge into action, and not whether they embrace the change. The research on training transfer, the extent to which skills learned in training actually appear in job performance, is sobering. Studies consistently show that only 10% to 20% of training content translates to sustained behavior change on the job. Yet organizations continue to use training completion as a proxy for transformation readiness.
Third, usage metrics measure frequency of access, not adoption. System logins, feature clicks, and time in application are all proxies for adoption, not evidence of it. Someone can log into a new system every day and still be doing their actual work in Excel spreadsheets. They’ve learned the minimum necessary to appear compliant while maintaining their comfortable workflows. This is one of the most common patterns we see in digital transformation. The dashboard shows 90% usage. The reality is that maybe 40% of people have genuinely changed how they work.
Fourth, milestone metrics measure project progress, not psychological readiness. Traditional transformation scorecards track implementation milestones: technology deployed, training delivered, processes documented. These are important operational metrics, but they tell you nothing about whether people are psychologically prepared to work differently. You can achieve every milestone and still fail the transformation because the human factors weren’t ready.
The fundamental problem is that we’re measuring visible activities and assuming they reflect invisible psychology. It’s like measuring the number of gym memberships sold and concluding that everyone in the city is fit. The membership is a leading indicator of intention, not a measurement of outcome. As humans, there is a lot we intend, aspire to, or even genuinely believe in the moment that we will do or complete, when in reality we don’t.
So if traditional metrics measure the wrong things, what should we be measuring instead? This brings us to the framework we’ve developed for making human factors visible.
SECTION 2: THE FOUR DOMAINS OF HUMAN FACTOR MEASUREMENT
Our framework focuses on four domains of human factor measurement. Each domain addresses a different aspect of psychological readiness, and together they predict transformation success with significantly greater accuracy than traditional approaches. Let me walk you through each one.
Domain One is Behavioral Readiness which measures actual behavior patterns rather than stated intentions. Instead of asking people if they’re ready for change, we observe whether their behaviors indicate readiness. Here are specific indicators we track.
Workaround frequency tells us how often people are creating parallel processes that bypass new systems. Workarounds are a reliable indicator of psychological resistance. When someone says they support the new system but maintains their own spreadsheet just in case, they’re demonstrating that their behavior hasn’t actually changed.
Feature depth versus feature breadth reveals whether people are using the full capabilities of new tools, or only the minimum features required for compliance. Surface-level adoption, using new tools to do exactly what old tools did, suggests the psychological shift hasn’t occurred.
Proactive engagement versus reactive compliance shows whether people seek out information about the transformation, or only engage when required. Someone who attends optional training sessions and asks questions in team meetings is behaviorally ready. Someone who does the minimum required is demonstrating resistance through compliance theater.
Help-seeking patterns indicate who people are asking for help, and how often. Psychologically ready individuals ask questions early and publicly. Those experiencing resistance either don’t ask or ask privately because they don’t want to appear incompetent.
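To make the feature depth versus feature breadth indicator concrete, here is a minimal, illustrative sketch of how one might flag surface-level adoption from raw usage events. The event format, user names, and the 30% depth threshold are my own assumptions for the example, not values from the episode; a real analysis would calibrate the threshold against known deep adopters.

```python
def classify_adoption(usage_events, total_features, depth_threshold=0.3):
    """Label each user 'deep' or 'surface' by the share of distinct
    features they touch -- depth of use, not frequency of login."""
    features_by_user = {}
    for user, feature in usage_events:  # e.g. ("ana", "export")
        features_by_user.setdefault(user, set()).add(feature)

    labels = {}
    for user, features in features_by_user.items():
        share = len(features) / total_features
        labels[user] = "deep" if share >= depth_threshold else "surface"
    return labels

# Hypothetical event log: ana explores three distinct features,
# ben logs in often but only ever touches two.
events = [
    ("ana", "dashboard"), ("ana", "export"), ("ana", "automation"),
    ("ben", "dashboard"), ("ben", "dashboard"), ("ben", "export"),
]
print(classify_adoption(events, total_features=10))
```

Note that ben generates more raw events than ana yet still classifies as a surface adopter, which is exactly the pattern the episode warns about: access frequency looking like adoption on a dashboard.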
Behavioral readiness tells us what people are doing. But behavior doesn’t happen in a vacuum. It happens within an emotional environment that either supports or undermines change. Think back to Episode 7, where Elizabeth Stewart and I emphasized the criticality of psychological safety.
That brings us to our second domain: Psychological Safety
Psychological safety measures the emotional environment for change. Transformation requires people to acknowledge what they don’t know, make mistakes while learning, and admit when they’re struggling. If the environment doesn’t feel safe for these behaviors, people will hide their difficulties and fake adoption. This also connects directly to our Episode 7 conversation with Elizabeth Stewart about why vulnerability is essential for transformation.
Question frequency in meetings is a powerful indicator. In psychologically safe environments, people ask questions publicly. When question frequency drops, it often indicates that people don’t feel safe revealing what they don’t understand.
Error reporting rates tell us whether mistakes are being reported promptly, or whether people are hiding errors until they become too big to conceal. A decrease in error reporting during transformation often means people are afraid of the consequences of admitting they’re struggling with new systems.
Peer support dynamics reveal whether team members are helping each other learn or competing to appear competent. Psychologically safe teams share tips, troubleshoot together, and openly discuss challenges. Unsafe teams keep struggles private.
Feedback quality matters as well. When people provide specific, constructive feedback about transformation challenges, it indicates psychological safety. When feedback becomes vague or disappears entirely, people don’t feel safe being honest about problems.
Behavioral readiness and psychological safety tell us about current state. But transformation is a process that unfolds over time. We also need to understand the trajectory of change.
The Third Domain is Adoption Velocity
Adoption velocity measures the speed and quality of behavior change over time. This domain helps identify whether people are genuinely progressing toward new behaviors or stuck in patterns of resistance.
Learning curve progression determines whether people are improving at a normal rate or showing signs of stalled development. Stalled learning curves often indicate psychological barriers that training can’t address. Remember from Episode 9, when cognitive resources are depleted, learning capacity diminishes regardless of training quality.
Early adopter identification matters because in every organization, some people embrace change naturally. Identifying these early adopters and understanding their characteristics helps predict who else is likely to adopt and who will need additional support.
Influence network activation tracks whether early adopters are influencing others. Transformation spreads through social networks. If early adopters aren’t bringing their colleagues along, something is blocking the natural diffusion of new behaviors.
Regression patterns reveal whether people who adopted new behaviors are reverting to old ones under stress. Regression is one of the most reliable indicators of insufficient psychological readiness. The behavior changed temporarily, but the underlying psychology didn’t shift enough to sustain it.
This brings us to the ultimate question: Are changes actually sticking? Are they becoming part of how your organization operates, or will they fade when attention moves elsewhere?
Let’s talk about Domain Four: Sustainability Indicators
Sustainability indicators measure whether behavior changes are becoming embedded habits or remaining fragile, conscious efforts that will fade when leadership focus shifts to the next priority. Much of what I am about to share requires active watching and listening across the organization. Yes, system metrics help inform, but real listening is a superpower in determining whether the organization and its culture have reached a level of sustainability with the transformation effort. Some examples:
Habit formation metrics determine whether new behaviors are becoming automatic. Early in transformation, people have to consciously choose new behaviors. Successful transformation means those behaviors become default.
Cultural integration measures whether new behaviors are becoming part of team identity. When teams start describing themselves in terms of the new approach, rather than as people who used to work differently, the culture has shifted.
Continuous improvement engagement looks at whether people are suggesting improvements to new processes or just complying with what they were told to do. Genuine adoption leads to ownership, and ownership leads to improvement suggestions.
Stress test resilience examines what happens when the organization faces pressure. Do new behaviors persist, or do people revert to familiar patterns? The real test of transformation success isn’t whether people can work differently when everything is calm. It’s whether the new behaviors survive adversity.
These four domains, behavioral readiness, psychological safety, adoption velocity, and sustainability, work together to reveal what traditional metrics hide. Let me show you how this looks in practice by returning to the organization I mentioned at the opening.
SECTION 3: MEASUREMENT IN PRACTICE
When we applied the four domains of human factor measurement to the organization where 82% survey support masked 31% actual adoption, here’s what we discovered.
In behavioral readiness, we found that 67% of employees had created personal workarounds that bypassed the new system. Feature usage analysis showed that 78% of users were only accessing three basic features despite the system having extensive capabilities that could transform their workflows. Help requests had dropped 40% since the transformation began, which initially looked like a positive sign until we realized it meant people had stopped trying to learn and were just doing the minimum to appear compliant.
In psychological safety, we found that question frequency in transformation-related meetings had dropped from an average of eight questions per meeting to two. Error reports had declined 60%, which leadership had initially interpreted as successful adoption. In reality, people were hiding mistakes rather than reporting them. In team interviews, we discovered widespread fear of appearing incompetent in the new system. People were afraid to ask questions because they thought everyone else had already figured it out.
In adoption velocity, learning curve analysis revealed that 45% of employees had plateaued at basic proficiency within the first two weeks and hadn’t improved since. Early adopters existed but were isolated. They weren’t influencing their peers because the broader culture didn’t support experimentation. Most concerning, we found a 28% regression rate among people who had initially shown strong adoption. When quarter-end pressure hit, they reverted to familiar processes because the new behaviors weren’t sufficiently embedded.
In sustainability indicators, habit formation metrics showed that only 23% of employees were using the new system automatically. The rest were still consciously choosing between old and new approaches, and choosing old more often than not. Cultural integration hadn’t occurred. Teams still described themselves by their old workflows. When asked about their work, they’d say things like “we’re the team that handles the quarterly reports” rather than describing their role in the new system.
This data told a completely different story than the 82% survey support. But more importantly, it told us exactly what to address.
The Intervention
The data allowed us to design an intervention that addressed the actual barriers, not the imagined ones. Instead of more training, which leadership had assumed was the solution, we focused on psychological safety and identity threat.
We created peer learning pods where small groups could struggle together without judgment. We redesigned error reporting to celebrate learning rather than punish mistakes. We worked with managers to shift their language from “are you using the new system?” to “what are you learning as you work with the new system?” We helped early adopters become visible mentors rather than isolated experts.
Six months later, behavioral adoption had increased from 31% to 71%. Not because we trained people better, but because we addressed the psychological barriers that made them unable to adopt what they already knew how to do.
So how do you start measuring human factors in your own organization? Let me give you a practical approach you can implement immediately.
Practical Implementation
First, audit your current metrics. List every transformation metric you’re currently tracking. For each one, ask: does this measure what people say, or what they do? I cannot stress enough how important that determination is to any project success.
Do your current metrics measure compliance, or genuine capability change? You’ll likely find that most of your metrics fall into the intention and compliance categories. That’s not wrong, but it’s incomplete.
Second, identify behavioral indicators for each domain. You don’t need sophisticated measurement systems to start. Look for simple behavioral signals. In behavioral readiness, count workarounds and survey teams about parallel processes. In psychological safety, track question frequency in meetings and notice whether errors are being reported or hidden. In adoption velocity, identify your early adopters and ask whether they’re influencing others. In sustainability, pay attention to what happens under pressure.
Third, create baseline measurements before assuming you understand current state. Measure the actual current state of human factors before launching interventions. Most organizations don’t do this, which means they can’t tell whether their interventions are working or whether adoption was going to happen anyway.
Fourth, triangulate data sources. No single metric tells the whole story. Use multiple indicators for each domain. Look for patterns across domains. When behavioral readiness, psychological safety, adoption velocity, and sustainability indicators all point in the same direction, you can be confident in your assessment.
Fifth, use measurement as intervention. The act of measuring human factors can itself change behavior. When people know that psychological safety is being tracked, managers pay more attention to it. When workarounds are being counted, people become more conscious of them. Measurement isn’t just about data. It’s about signaling what the organization values.
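Pulling the audit steps together: once you have reduced each employee to a behavioral adopted/not-adopted judgment (itself triangulated from signals like workarounds, feature depth, and help-seeking), the say-do gap the episode describes is simply the difference between two rates. This sketch uses made-up flag data and my own field names; it is an illustration of the arithmetic, not a tool from the episode.

```python
def say_do_gap(stated_support_rate, adopter_flags):
    """Compare survey-stated support with observed behavioral adoption.

    stated_support_rate: fraction of survey respondents claiming support.
    adopter_flags: one True/False per employee, True meaning behavioral
    signals indicate genuine adoption.
    """
    behavioral_rate = sum(adopter_flags) / len(adopter_flags)
    return {
        "stated": stated_support_rate,
        "behavioral": behavioral_rate,
        "gap_points": round((stated_support_rate - behavioral_rate) * 100, 1),
    }

# Illustrative data echoing the episode's case: 82% stated support,
# but only 3 of 10 employees showing genuine behavioral adoption.
flags = [True, False, False, True, False, False, False, True, False, False]
print(say_do_gap(0.82, flags))
```

With these toy numbers the gap lands in the 40-to-50-plus point range the episode reports; the value of the computation is less the arithmetic than the discipline of deriving the behavioral rate from observed signals rather than from the survey itself.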
Everything I’ve described so far helps you measure human factors during transformation. But what about measuring readiness before a transformation begins? This is where you can prevent problems rather than just diagnose them.
SECTION 4: ASSESSING ORGANIZATIONAL READINESS
The best time to understand psychological readiness is before you launch your initiative. Once transformation is underway, you’re measuring resistance that’s already forming. But if you can assess readiness in advance, you can address barriers before they become entrenched. That understanding informs the plan, the activities, and yes, the measures that become hinge points for achieving your goals.
At transformationassessment.com, we’ve built an assessment based on years of research and consulting experience that measures the gap between stated intentions and likely behavior. The assessment examines five dimensions of transformation readiness.
Leadership alignment measures whether your leadership team is genuinely unified around the change or fractured in ways that will create resistance. Capability readiness assesses whether people have the skills they need and feel confident in their ability to develop new ones. Cultural preparedness examines the psychological safety and openness to change in your organizational culture. Communication effectiveness determines whether your messaging is creating clarity or confusion. Engagement foundation measures the quality of relationships and trust that change will be built upon.
What makes this assessment different from typical engagement surveys is that it’s designed to reveal the gap between conscious support and unconscious readiness. The questions are structured to surface the psychological barriers that people can’t self-report directly because they’re not consciously aware of them.
For leaders planning their next initiative, this assessment provides the kind of pre-work that separates successful transformations from the 70% that fail. It helps you know where the psychological landmines are buried before you step on them. Take the assessment at transformationassessment.com.
The measurement framework we’ve discussed today, the four domains, the behavioral indicators, and the readiness assessment, represents a fundamentally different approach to understanding transformation. Instead of asking what people think, we observe what they do. Instead of measuring compliance, we measure genuine psychological change.
CLOSING
Let me leave you with this perspective: The gap between stated support and actual adoption isn’t a communication failure or a training problem. It’s a measurement problem. We’ve been measuring the wrong things, and then we’ve been surprised when our green dashboards don’t predict green outcomes.
Traditional transformation metrics measure the wrong things. Surveys measure intention, not behavior. Training metrics measure compliance, not capability. Usage metrics measure access, not adoption. Milestone metrics measure project progress, not psychological readiness. The gap between stated support and actual adoption is consistently 40 to 50 percentage points.
The four domains of human factor measurement provide a more accurate picture. Behavioral readiness measures what people actually do, not what they say they’ll do. Psychological safety measures whether the environment supports honest struggle and learning. Adoption velocity measures the speed and quality of behavior change over time. Sustainability indicators measure whether changes are becoming embedded habits.
When you measure human factors instead of just activities, you can identify barriers before they become fatal, design interventions that address real problems, and predict transformation success with much greater accuracy.
In our next episode, I’m exploring a danger that threatens every successful organization: drift.
What happens when the very success that built your market position becomes the psychological barrier that prevents you from seeing where the market is heading?
Why do iconic companies like Blockbuster, Kodak, and BlackBerry fail not because of one catastrophic decision, but through thousands of small, reasonable choices that gradually pulled them away from relevance?
How do you recognize when your organization has shifted from actively steering to passively floating, becoming a passenger instead of a captain?
Episode 11 examines the four stages of organizational drift, the six human factors that enable it to take hold, and the strategic framework for building the awareness that prevents drift before it becomes fatal.
If you found today’s episode valuable, subscribe to The Human Factor Podcast wherever you listen to podcasts, leave a rating and a review, and share this episode with colleagues who are struggling to understand why their transformation metrics aren’t predicting outcomes.
To explore these concepts further, visit 2040digital.com for the change leadership series and subscribe to my Ideas and Innovations newsletter.
Until next time, remember: You can’t transform an organization without transforming the humans in it. And you can’t transform what you don’t measure.
And finally, transformation isn’t about technology, it’s about people.
This has been The Human Factor Podcast. I’m Kevin Novak. Thanks for watching or listening.
END OF EPISODE
Available Everywhere
The Human Factor Podcast is available on all major platforms
Apple Podcasts
Spotify
Google Music
Amazon Music
YouTube
Pandora
iHeartRadio
RSS Feed
Or wherever you get your podcasts
New episodes every Thursday
Upcoming Episodes
Upcoming: Available December 18, 2025
EPISODE 011: The Drift that Destroys – When Success Becomes the Enemy of Survival
Learn the uncomfortable truth about organizational drift: it doesn’t happen to failing companies. It happens to successful ones.
The companies most vulnerable to drift are the ones that have the most reason to believe they’re doing everything right.
