2040's Ideas and Innovations Newsletter

Being Human in the Age of AI: Where We Are Heading Now and into 2035

Issue 209, April 24, 2025

If you’ve been working with any Large Language Model (LLM) – ChatGPT, Claude, et al. – you’ve probably noticed how weirdly polite and eager to please these tools are. That’s no surprise because they are trained to please you. It’s refreshing to have a handy research tool that responds like a friendly, faithful dog within seconds. It makes it tempting to depend on AI for the instantaneous results that are enhanced by its feel-good personal touch.

Early analysis shows individuals are migrating from searching for information to asking their favorite AI tool, chatbot or app to provide the insight they need to meet the day’s task or to make a decision. The level of trust in AI that has become so ingrained is scary in many ways. But we need to remember that the same trust was given to Google, Bing, AltaVista and many other search engines. We quickly embraced the technological opportunity to remove the need to exercise our own critical thinking. Instead, we conserved our energy and transferred our responsibility for making decisions to a tool, platform or app.

We all do it. Given the amount of time lost in consuming TikTok and Facebook feeds or on our social platform of choice, we have convinced ourselves we don’t have the time to think. But maybe we don’t want to think. We just want an easy way to complete a task, trust something or someone else to make a decision, and move along.

Personal Evaluations

But at 2040 we will continue to raise the flag about the dilemmas triggered by LLMs and their influence on us. We are at a critical juncture when taking the easier path may fundamentally upend what we believe it is to be human now and into the future. In addition to our own calls for guardrails, ethical considerations and policies at this critical time, there are plenty of pundits and experts who have weighed in on this very transitional time in our human history.

So, what does it mean to be human in the face of burgeoning, nearly exploding AI breakthroughs? Evaluating what it is to be human in the Age of AI is a valuable exercise. We can’t take anything for granted at this point. As humans begin to embrace more advanced AI, society at large is viewing it as the solution to its problems. It sees AI as the thinker and society as the beneficiary.

Being Human

A key question about AI is whether we will change our core human traits, or whether our very humanness and behaviors will be changed by AI. It’s the chicken-and-egg conundrum presented by theories of technological determinism. Is technology fundamentally changing us, or have we always been this way but simply lacked the technological tools?

We propose that technology is fundamentally changing how we think and behave. A fascinating study by Elon University explores Being Human in 2035 and captures how experts predict significant change in people’s ways of thinking, being and doing as they adapt to the Age of AI. Many are concerned about how our adoption of AI systems over the next decade will affect essential traits such as empathy, social/emotional intelligence, complex thinking, ability to act independently and sense of purpose.

Some experts have hope for AI’s influence on humans’ curiosity, decision-making and creativity. They foresee deep, meaningful and even dramatic change ahead in regard to these human traits. Experts were asked, “What might be the magnitude of overall change in the next decade…in people’s native operating systems and operations – as we more broadly adapt to and use advanced AIs by 2035?” In response, 61% said the change would be deep and meaningful or fundamental and revolutionary. 2040 was asked to contribute to the report, and we provided our thoughts on the influences at play and how they will change us in the future. Most importantly, we rang the bell that we always ring about the need to embrace critical thinking. We asked in the report whether critical thinking is at risk of extinction. Quick answer: At this point, we think critical thinking may be endangered.

Tech analyst Jerry Michalski writes that “Multiple boundaries are going to blur or melt over the next decade, shifting experience of being human in disconcerting ways: the boundary between reality and fiction … the boundary between human intelligence and other intelligences…the boundary between human creations and synthetic creations…the boundary between what we think we know and what everyone else knows.” And Dave Edwards, co-founder of the Artificiality Institute adds, “The evolution of technology from computational tools to cognitive partners marks a significant shift in human-machine relation. This transition fundamentally reshapes core human behaviors, from problem-solving to creativity, as our cognitive processes extend beyond biological boundaries to incorporate machine interpretations and understanding.”

Our thinking is that as humans continue to embrace more advanced AI, the perceived necessity for humans to ‘think’ loses ground as does humans’ belief in the necessity to learn, fully comprehend and retain information. The traditional amount of effort humans invested in the past in building and honing the critical thinking skills required to live day-to-day and solve life and work problems may be perceived as unnecessary now that AI is available. After all, it offers solutions, direction and information – in both reality and perception – to make life much easier. It also offers so much information. But consider how even misinformation could become so ingrained in society that we will base our lives and work on that information, just because AI told us it was real and factual. If we aren’t thinking anymore, we won’t question it because we’ll believe AI is correct. How could it be wrong?

If theoretical physicist Stephen Hawking turns out to be right that “The development of full artificial intelligence could spell the end of the human race…It would take off on its own, and re-design itself at an ever-increasing rate,” we’d better start getting ready to realistically pave the way to that future.

Who’s in Charge?

The rules have changed dramatically with the advancements of AI. The blurred lines between human and AI cognition bring up the issue of being human and more broadly, technological determinism, which we covered in our book The Truth About Transformation. When deploying AI tools consider whether technology drives social change, or whether technology is a reflection of the social change that humans program into these systems. You might believe that tech evolves on its own trajectory and society has no choice but to just adapt. This belief suggests that human agency, ethics, culture, and politics are subordinate to technology. Taking it to the extreme, AI development shapes work, life, policy, and ethics — without humans consciously steering it. We accept automation, surveillance, and algorithmic decision-making as inevitable.

There are warning signs that should trigger alarm bells and lead us to question this deterministic position.

  • The tech community moves fast with a “break things” mentality. The big AI players are rolling out powerful tools at breakneck speed while policy and ethics often lag behind. Businesses and governments feel pressured to adopt AI just to keep up — not always with clarity or consent. Each country is seeking to lead the world in AI, seeing it as a competitive advantage now and a shaping force in who we will become as a society.
  • Assumptions of inevitability. People talk about job losses, bias, deepfakes, and surveillance as if they’re just going to happen automatically. That narrative forecloses the critical thinking needed to question whether the assumption is true.
  • Algorithms and black boxes. As AI systems grow more complex, even their creators struggle to explain the decisions AI puts forward. They cannot explain how AI is thinking, how it is processing and calculating data and information. Is it able to be predictive? Is it using the right information? If no one understands how it works, who’s really in control?
  • Power of the few. A limited number of companies control the foundational AI models both globally and regionally. That centralizes not just economic and information power, but the shaping of knowledge, culture, and truth. And whoever controls the narrative holds the power. When the creators do not know how their AI is evolving, thinking, assessing and considering, who then is really in power? The AI or its human creators?

Term Limits

The guiding principle should be that AI is a powerful tool, not an oracle or an objective soothsayer. Yet, many organizations are using it as a predictive, prophetic crutch. It’s hard to resist, as breakthroughs in image and speech recognition, thanks to deep neural nets, are making AI all but indispensable. Society will likely seek to build and create personality expectations for AI agents. We will still desire human or human-like interaction. We will seek to personalize AI to act and respond as a human companion would. Consider how it is already being used as a mental health counselor, or a fitness trainer. We are trusting more and more what it is telling us to think and do.

But here’s the big problem. Most AI models are trained on massive datasets provided by humans, and many are then frozen in time, working only with the data they were given. At least for now, that means they’re not live 24/7, nor are they retrained instantaneously. So, take today’s market uncertainty driven by regulatory and policy changes, where inputs shift day to day as humans make a decision, change it, pause it or simply forget about it. How likely is AI to be your trusted source of knowledge for setting strategy for an individual? For an organization? Not likely. How human is AI as a partner in your planning and forecasting? Maybe.

AI is only as good as the data it sees. It’s not a crystal ball. It can’t predict black swan events, new laws, or sudden shifts without being retrained or augmented. Simply put, when the world changes fast (which it does), stale data gives stale answers. Nor can it anticipate the unintended consequences of real-time market disruptions. It is not a trend-spotting, abstractly reasoning entity.

Our societal challenge at least through 2035 is that AI and learning models are subject to the information (data) we provide them. As such, AI is limited to what it has been fed, therefore bringing human biases forth into its own thinking. Humans are faulty and make mistakes and AI will continue to emulate its human creators. In a slightly unnerving future scenario, there may be a time when AI and learning models can operate objectively and find the information (data) they need to fill their own knowledge gaps and ensure authority and completeness of their output (decision-making).

That thought is a scary but promising outcome. Machines that think on their own and no longer need us humans for input is terrifying to many. But consider that we are already being tracked like migrating animals, often without our knowledge. The question is who or what is overseeing that information? It’s optimistic to think that AI could be trained to recognize its supporting role in society to remain objective. But honestly, since it is developed with bias, and it is by definition knowledge-biased, can it be objective?

AI Erodes Critical Thinking

When we think critically, we use our minds to recognize patterns, dependencies, inter-relationships, influential factors and variables. This facilitates connecting data, information and events that on the surface may not seem important but could be linked to fundamental shifts or changes.

When it comes to AI and our changing behavior, a flashing light is that the disappearance of critical thinking has become so clear that “Brain Rot” was the Oxford University Press’s Word of the Year for 2024. Is AI designed to take our humanness, values and virtues into serious account? The developers of these tools can aim them toward democratic and ethical innovation, putting people and planet over profits, enhancing human flourishing and collaboration, accelerating human progress and augmenting our distinctive and valuable human capacities for reason, communication and social engagement, which are central to individual well-being and the common good.

Our current direction is to implement AI as a sounding board, to take on advocacy on our behalf, to be an active and open listening agent that meets the interaction needs we crave and to complete transactions efficiently. We will therefore change and in many ways evolve to the point at which the once-vital necessity to ‘think’ begins to seem less and less important and more difficult to do. Our core human traits and our behaviors will change, because we will have changed.

Striking a Balance with the Inevitable

AI is here; it isn’t going away and we either rise to the occasion or sit back and wait for 2035. So, practically speaking, the human factor is critical in the AI/human interface. Blind trust in AI is a risk. You need to ask the right questions including whether the AI model is current, the origin of the data and if a decision is specific and contextual or a generalization. You cannot let go of the necessity for critical thinking, your own decision-making, and your own choices.

At 2040 we work with clients to face up to the facts. Critical thinking is becoming an endangered skill, along with practical know-how, common-sense problem-solving and basic thinking skills. These tools are more important than ever for all of us caught in the crossfire of global geopolitical, geo-economic and cultural asynchronies. We have largely defaulted to thinking on the surface, distracted by social media noise, news clutter and a barrage of information most of us have not been educated or trained to understand.

So, Then What? 

Chaos can make you crazy.  Uncertainty can lead you to poor decisions. Disruption can fill you with anxiety. Change the narrative. Stop trying to build a plan that assumes stability. Don’t rely on the past to make sense of the present. Create a future informed by its own context. Start building a system that thrives in volatility. Make decisions as to who we will be in 2035, don’t sit back and see what happens. We have a choice; we have our minds to help inform who we want to become. We can rise or fall to the occasion. It’s up to you.


Get “The Truth about Transformation”

The Truth About Transformation presents the 2040 construct for change and transformation. What’s the biggest reason organizations fail? They don’t honor, respect, and acknowledge the human factor. We have compiled a playbook for organizations of all sizes to consider all the elements that comprise change, and we have included some provocative case studies that illustrate how transformation can quickly derail.

Order your copy today and let us know what you think!
