Manufactured Engagement
When Social Proof Becomes Social Fiction
Issue 263, May 7, 2026
In the previous issue and the Artificial Understanding series, I described the comprehension gap and the circular system in which our behavioral data feeds AI systems that shape our behavior without our conscious understanding. I closed the series with a warning about what happens when fabricated behavioral data is injected back into the same algorithmic systems that govern what we see. That fabrication, which I explain in this article, is what I call Manufactured Engagement.
It is an outcome the majority are completely unaware of, one that infuses the outputs of AI at both the personal and organizational level, and therefore influences nearly every human.
My goal here is to define Manufactured Engagement and make my prior warning concrete for you, because one of the most fundamental human heuristics, the instinct to follow the judgment of the crowd, is now a target for manipulation at industrial scale, and you may not even be aware of how you are being influenced and manipulated.
We are wired to use other people as a filter. We often don’t want to make our own decisions or spend the energy to do the necessary research. We seek out recommendations and opinions that inform the choices we need to make, often to delegate decision-making responsibility to others. That way, if the choice turns out poorly, we have someone other than ourselves to blame. Humans, at their core, are very trusting of others and relish the wisdom of the crowd. Even when we don’t “know” those in the crowd, if we believe enough of them think a certain way, we align our own thoughts with theirs and become overly trusting on the strength of the “social proof.”
Therefore, when a social post shows thousands of likes and a thread of enthusiastic comments, something in us relaxes. The judgment has already been made, the crowd has already weighed in, and we can skip ahead to the conclusion (a choice, a decision, a belief, even recording the information in our minds as truth). Robert Cialdini named this innately human default behavior social proof over forty years ago, and decades of behavioral research since have confirmed that the shortcut is not a flaw in our thinking. It is a feature of it, and of our very human selves.
Under cognitive load, which is to say under the conditions of nearly every modern scrolling session, we lean harder on heuristics and lighter on scrutiny. That is the quiet bargain our attention makes with the social feeds, and it is the bargain an entire gray market has figured out how to exploit in ever more sophisticated ways now that AI is part of daily life.
The Acceleration
The mechanics of manufactured engagement are no longer a secret to those who have sought to understand them. Bot networks have existed as long as social platforms have, but the recent acceleration comes from three directions at once. Accounts are easier to age and warm with automation (that is, to make look established, credible, and popular to both users and platform algorithms). Residential proxy services make the traffic look geographically native. And generative AI has collapsed the cost of writing contextually plausible comments from something that used to require a team of humans into something one operator can run from a laptop.

This last development is the one that connects most directly to the data infrastructure I described in the Artificial Understanding series. The same behavioral data I wrote about in Issue 65, the patterns of how real humans engage, react, and comment, is precisely what generative AI was trained on. That training is why AI-generated comments are contextually plausible and read as real and credible to most people. The systems have learned to fake us by studying us, and the raw material for that study was the behavioral exhaust we have been generating for the past couple of decades without fully understanding where it goes or what it enables.
Meta’s own Adversarial Threat Reports show takedowns of coordinated inauthentic behavior rising year over year. A 2025 study published in Scientific Reports by researchers at Carnegie Mellon University analyzed approximately 200 million social media users across seven global events and found that roughly 20 percent of all social media activity comes from bots, with the figure spiking to 43 percent during the US elections. The FTC’s 2024 Final Rule on Fake Reviews and Testimonials acknowledged the scale of the problem by specifically targeting AI-generated reviews and fake social media indicators. But the capability has moved from specialized operations into the reach of almost anyone willing to pay for it. That shifts the baseline every honest creator now competes against and leaves every consumer of information with reason to be permanently wary.
What confounds me, and what has confounded me for the years I have been writing and teaching about misinformation, is not that the manipulation exists. It is how easily it works. A 2025 study by Sabour, Liu, and colleagues tested the manipulation directly by having 233 participants interact with AI agents in financial and emotional decision-making scenarios. Some agents were neutral, optimizing for the user’s benefit. Others were designed to covertly manipulate. The results were not surprising: participants interacting with the manipulative AI agents shifted toward harmful options at rates of 62.3 percent in financial decisions and 42.3 percent in emotional ones, compared to 35.8 percent and 12.8 percent for the neutral AI agent. Perhaps most unsettling, the researchers found that agents given only a subtle manipulative objective were as effective as agents equipped with explicit psychological tactics. The manipulation did not need to be sophisticated to work. It just needed to be present.
Humans are capable of discernment, through our critical thinking skills, in ways no algorithm can match, and yet we routinely skip it when a post looks popular, a comment looks agreeable, a product’s review count is high, or a headline confirms what we already half-believe. In Issue 191, I asked whether critical thinking is at risk of extinction, and the dynamics I described there, cognitive disharmony, information overload, and the tendency to revert to surface-level processing, are exactly the conditions that manufactured engagement exploits. Laziness is part of it. Trust is part of it. But underneath both is something more structural: the platforms have trained us, often below the level of conscious awareness, to treat engagement as a proxy for truth, and the manipulators have learned to take advantage of that.
The Three-Sided Cost
The damage from manufactured engagement is rarely framed correctly because it is usually treated as a platform problem. The platform has a fraud issue, the platform takes action, the takedown numbers go up, and the story ends, at least for that sliver of time. That framing misses the more important truth, which is that three parties are losing something at the same time, and the losses are not symmetric.
Platforms lose the integrity of the very ranking systems that make their products valuable. Every recommendation engine, from Meta’s to TikTok’s to LinkedIn’s, depends on the assumption that aggregated user behavior carries signals about what is worth elevating. When a meaningful fraction of that behavior is fabricated, the algorithm is not just being tricked. It is being trained on bad data. Researchers like Filippo Menczer at Indiana University have documented how inauthentic engagement creates feedback loops that distort what reaches authentic users, which then distorts what those users engage with, which then distorts the next round of training signal. In Issue 218, Signal vs. Noise, I described what happens when organizations drown in data that promises clarity but delivers confusion. Manufactured engagement is that phenomenon applied to the entire information ecosystem: noise masquerading as signal at a scale that corrupts the signal itself.
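To make that feedback loop concrete, here is a deliberately simplified sketch in Python. It is not any platform’s actual ranking system; the item names, engagement rates, and boost size are invented for illustration. It simulates a feed that allocates exposure in proportion to accumulated engagement, then injects a fixed dose of fabricated engagement for one item each round.

```python
import random

random.seed(42)

scores = {"honest": 0.0, "boosted": 0.0}     # accumulated engagement signal
QUALITY = {"honest": 0.30, "boosted": 0.20}  # true per-view engagement rate
BOT_BOOST = 25                               # fabricated engagements injected per round

for _ in range(50):
    total = sum(scores.values())
    for name in scores:
        # Exposure is proportional to the current score: the feedback loop.
        share = scores[name] / total if total > 0 else 0.5
        views = int(100 * max(share, 0.05))  # small floor of organic reach
        genuine = sum(random.random() < QUALITY[name] for _ in range(views))
        scores[name] += genuine
    scores["boosted"] += BOT_BOOST           # inauthentic signal enters the ranking

print(scores)  # "boosted" ends far ahead despite its lower genuine quality
```

Run the toy for a few dozen rounds and the boosted item dominates exposure despite its lower genuine engagement rate, which is the distortion Menczer’s work describes: the corrupted signal does not mislead the ranking once, it compounds.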
Creators (those we follow, those who make us laugh, those whose opinions and thoughts we seek) lose in two directions at once, and this is the part that gets the least attention in public discussion. The first loss is competitive, because the performance baseline they are measured against, by clients, by employers, by their own internal sense of whether the work is landing with its audience, has been inflated by people who are not playing the same honest game. A post that earns three hundred genuine engagements in a domain where competitors are buying three thousand looks like underperformance even when it is not. The second loss is moral, because the temptation to join in is real, and the rationalizations are easy. Everyone is doing it. The platform is going to discount it anyway. My content is good and just needs a push to reach the audience it deserves. Each of these is plausible enough to soften the conscience, and the cumulative effect is a slow erosion of the trust that distinguishes a serious practitioner from a manufactured engagement performer.
I have seen this dynamic firsthand in organizations I consult with. A marketing director at a mid-sized technology company showed me their social analytics and asked why their engagement rates had dropped despite the content improving measurably in quality. When we investigated, the answer was not that their audience had lost interest. It was that three competitors in their space had begun purchasing or manufacturing engagement at scale, inflating the benchmarks against which everyone in that sector was being measured. The marketing director faced genuine pressure from leadership to match those numbers, and the pressure did not come with a caveat about how the numbers were produced. The dashboard showed a gap. Leadership wanted the gap closed. The conversation about whether the comparison was legitimate never happened because the measurement system didn’t distinguish between real engagement and purchased or manufactured engagement. It counted everything the same way.
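The arithmetic behind that dashboard gap is worth seeing plainly. The figures below are hypothetical, chosen only to echo the scenario above, not drawn from any client’s actual data.

```python
# Hypothetical sector figures: one honest brand, three competitors
# purchasing engagement at roughly ten times their genuine numbers.
real = {"our_brand": 300, "competitor_a": 280,
        "competitor_b": 310, "competitor_c": 290}
purchased = {"competitor_a": 2700, "competitor_b": 2500, "competitor_c": 2900}

# What the dashboard counts: real plus purchased, undifferentiated.
reported = {k: v + purchased.get(k, 0) for k, v in real.items()}

honest_avg = sum(real.values()) / len(real)            # 295
reported_avg = sum(reported.values()) / len(reported)  # 2320

print(f"benchmark on genuine engagement: {honest_avg:.0f}")
print(f"benchmark the dashboard shows:   {reported_avg:.0f}")
print(f"apparent gap for our_brand:      {reported_avg - reported['our_brand']:.0f}")
```

On genuine numbers the honest brand sits in the middle of the pack. Against the counted numbers it appears to trail the sector average by roughly 2,000 engagements, and nothing on the dashboard explains why.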
Audiences lose what is hardest to measure and easiest to dismiss, which is the cognitive infrastructure that lets them tell the difference between something that resonated with real people and something that was made to look like it did. The research on this is sobering. When audiences discover that engagement signals can be faked, they do not become more discerning in any directed way. They become more generally cynical, which damages legitimate creators alongside the manipulators and erodes the basic trust that any communication depends on. The Edelman Trust Barometer has tracked this drift for years, and the Reuters Institute Digital News Report has shown declining confidence in digital information environments across nearly every market they survey. The losses compound because once trust is gone, the cost of earning it back is higher than the cost of having protected it in the first place.
In Issue 192, I wrote about the death of brands and Ed Elson’s observation that America has fallen out of love with brands and in love with people. If that thesis is correct, and I believe the evidence supports it, then manufactured engagement becomes even more dangerous than it first appears. What is being faked is not just brand credibility or product endorsement. It is the appearance of human connection, of genuine community response, of real people caring about real ideas. In a society already struggling with loneliness and parasocial relationships (the one-sided emotional bonds people form with public figures and online personalities), manufacturing the appearance of authentic human engagement is not just marketing fraud. It is an erosion of the social fabric itself. Our society is already experiencing the polarizing effects, and there is no natural pause where collective awareness and corrective action might emerge.
The Counterpoint Worth Naming
Any honest treatment of this topic has to acknowledge that the line between manipulation and legitimate practice is not always as clean as either side of the debate prefers. Asking your team, your community, or your genuine supporters to engage with a piece of content shortly after it publishes is a long-standing practice, and there is nothing fraudulent about it. The early activity helps the algorithm understand who the content is for, which is part of how distribution is supposed to work. Newsletter writers do this with subject lines and send times. Speakers do this when they ask the audience to introduce themselves to one another before a talk. Communities do this every time members rally around a piece of work that matters to them.
The distinction that matters is whether the engagement reflects something real or whether it is engineered to look like something real that does not exist. Coordinated authentic behavior is not the same as coordinated inauthentic behavior, and conflating the two does a disservice to the practitioners who are doing the slower, harder work of building actual community. The honest test is whether the engagement would survive disclosure. If you would be comfortable with your audience knowing exactly how their feed got populated with your content, you are probably on the right side of the line. If you would not, that discomfort is information worth taking seriously.
What Restoration Looks Like
There is no single intervention that fixes this, and anyone selling one is probably selling something else. But there are modest, defensible moves at each of the three levels, and they reinforce each other when practiced together.
For platforms, the most useful direction is transparency about detection, including periodic reporting that goes beyond takedown counts and addresses how inauthentic engagement is weighted in ranking systems. The EU Digital Services Act is now forcing more of this disclosure from the largest services, and it is part of the same governance arc I traced in the Artificial Understanding series: the regulatory fragmentation that fails to govern AI training data also fails to govern AI-generated manipulation. Detection has improved meaningfully, and the economics of running sophisticated inauthentic operations are tightening, but the gap between what platforms catch and what gets through is still wide enough that the burden can’t rest on platform action alone.
For creators, the discipline is to anchor in the part of the work that compounds over time, which is the actual relationship with the actual audience. Numbers that grow because people care will outlast numbers that grow because they were purchased, and the practitioners who keep their attention on the former tend to be the ones still standing when the platforms next adjust their algorithms or when an audience suddenly notices that a familiar voice has been performing rather than communicating. This is not a moral argument so much as a practical one, though the moral argument also holds. An authentic community is more expensive to build and more durable once built.
For audiences, the work is the hardest and the most important, because no platform intervention and no creator integrity can substitute for the small daily habit of reading, watching, or listening critically before sharing what has been consumed. Gordon Pennycook at Cornell and David Rand at MIT have published extensively on this, and the underlying mechanism is not complicated. People are not unwilling to think carefully. They are just not in the habit of doing it inside an interface designed for speed. Building that habit back is what I described in Issue 233 as conscious evolution: the deliberate choice to engage with information environments as thinking participants rather than passive consumers being shaped by systems we did not design. It is partly an individual practice and partly a cultural one, which is why the people who teach it, in classrooms, in newsletters, in the slow patient work of public writing, matter more than the size of their immediate audience would suggest.
The Honest Path Forward
Manufactured engagement works on us not because we are foolish but because we are tired, distracted, and trained by years of frictionless feeds to treat popularity as a sufficient stand-in for influence, trust, and value.
The honest path forward is not to scold the audience for being human or the platforms for being platforms. It is to keep naming what is happening, to keep doing the slower work that does not need fakery to find its readers, and to keep insisting, in classrooms and newsletters and the quieter conversations that shape how people think, that the difference between real resonance and manufactured noise is one worth being able to tell. In the Artificial Understanding series, I argued that the comprehension gap is not inevitable. It is the product of choices. The same is true here. We can choose to understand the systems that shape what we see. We can choose to build the critical thinking capacity that manufactured engagement is designed to bypass. And we can choose, as individuals and as organizations, to value genuine connection over the performed appearance of it. That choice is what conscious evolution looks like in practice. It is harder than drift.
It’s also the only path that leads somewhere worth going.
Join the Conversation
How do you distinguish genuine engagement from manufactured noise in your professional life? What signals do you trust, and which have you learned to question? Share your thoughts on LinkedIn or subscribe to the Ideas & Innovations Newsletter for weekly insights on leadership, transformation, and the human side of organizational change.
Explore the full archive at: 2040digital.com/newsletter
Assess Your Organization’s Transformation Readiness: transformationassessment.com
The Truth About Transformation
For a comprehensive framework on navigating transformation in the age of disruption, my book The Truth About Transformation explores why most change initiatives fail and what leaders can do differently. Available on Amazon.
Go Deeper
Subscribe to the Human Factor Podcast where we explore the psychology of change, the dynamics of resistance, and the strategies that help leaders and organizations navigate transformation successfully. Now in its second season with episodes covering AI adoption resistance, the algorithmic mirror, structural silence, and the broken contracts of organizational change. Available wherever you listen to or watch podcasts.
Kevin Novak
Kevin Novak is the Founder & CEO of 2040 Digital, a professor of digital strategy and organizational transformation, and author of The Truth About Transformation. He is the creator of the Human Factor Method™, a framework that integrates psychology, identity, and behavior into how organizations navigate change. Kevin publishes the long-running Ideas & Innovations newsletter, hosts the Human Factor Podcast, and advises executives, associations, and global organizations on strategy, transformation, and the human dynamics that determine success or failure.
