Can You Imagine Living Only in the Past?
Issue 109, May 18, 2023
What if there were term limits on your ideas, thoughts, hopes and dreams? What if you reached a point in your life where you learned nothing new and were limited by your past experiences and body of knowledge? If that were the case and you led an organization, you might become irrelevant rather quickly, since you would lack the knowledge and scope to address today’s issues and challenges.
Generative AI may be, loosely speaking, a case of technological dementia: like individuals trapped in the past, it has a repetitive present and a future informed only by what it has already known.
A White Knight, or Dark Force?
The excitement about AI has hit an inflection point across society, the tech sector, and even the stock market. Investors have turned their attention to any company that touts a project leveraging AI.
Society is about to change, perhaps radically, and there is much that we really should consider. We seem to be at a point where we are making major commitments and embracing a future with AI as the answer to all things. It will do our work for us. It will answer any question we have. AI will write the business plan we have been procrastinating over. And it will create a visually stunning, information-packed report that will impress any CEO.
It will give us someone (or something) to talk to when we are lonely or too afraid to discuss our emotions with real people. This is a double-edged sword. As Axios reports, experts worry that unregulated health-services-focused AI might further cocoon people from the relationships and conversations they need. But in the short term, AI-powered virtual companions, pets and mental health support services are already being developed and offered to select audiences to fight the loneliness epidemic. And by the way, if you haven’t yet tried the conversational companion heypi.com, you may want to. It is as exciting as it is frightening. Above all, it is an immersive conversational experience with an AI that seems all too human.
In an essay on AI published this Sunday in the Wall Street Journal, Walter Isaacson posed two very important questions: Should our goal be to make humans and machines tight partners so that they make progress by working in symbiosis? Or is it inevitable that machines will become super-intelligent on their own, eventually leaving us mere mortals behind?
Society needs to ponder these two incredibly important questions before we dive into the deep ocean of AI’s promise. In his essay Isaacson wrote, “What is potentially unnerving about the latest advances in AI is that these systems, following the vision of Alan Turing, are being designed so that they can learn on their own. They even appear to be developing their own agency rather than being tightly tethered to human instructions and intentions. Most frighteningly, their creators, as was the case with Dr. Frankenstein, sometimes don’t fully understand how and why they produce certain answers, have hallucinations, make false assertions and even seem to express unsettling emotions.”
We aren’t here to answer all his questions, but we are here, as critical thinkers, to raise a red flag as we rush headlong into our adoption of AI. It is time for everyone to flex their critical thinking muscles.
When AI Gets into Trouble
For all the noise about generative AI, we are finding it is not the tool to use for foresight and innovative thinking. It is a tool for accessing the past and drawing conclusions based on its input and programming. If you’ve been playing with or using ChatGPT or similar tools, what you receive in response is a representation of what has been said in the past, much of it created by humans and published on the internet. The response reflects past thinking and the biases of the past; it does not always reflect what is current or predict the future. ChatGPT is still using an aged-out dataset, and although Google’s Bard appears to be using more recent data, it is still dated.
So, like any responsible thought leader, we asked ChatGPT about the above description, and got this response:
“The first part of the statement is partially true. Generative AI, such as ChatGPT, is indeed a tool that uses its programming and input data to generate output based on patterns and information from the past. However, it is not accurate to say that its output only reflects past thinking and bias. While it is true that the model’s output is based on patterns and information learned from the past, the model can also generate novel and creative responses that go beyond what it has seen in its training data.
The second part of the statement is also partially true. It is true that the responses provided by Chat GPT are based on past data and do not always reflect what is current or predictive of the future. However, it is not accurate to say that the responses always reflect past thinking and bias. The model’s output can be influenced by current events and trends and can generate responses that are not explicitly present in its training data.
In summary, while the statement contains some truths, it oversimplifies the capabilities and limitations of generative AI. While models like ChatGPT are based on past data, they can still generate novel and creative responses that go beyond what they have seen in the past, and their output can be influenced by current events and trends.”
Like any good innovator defending his or her new product, ChatGPT is a strong advocate for itself. And in that way, it is inherently human. It may believe it is novel and creative, but does the output of its programming really know how to synthesize the present to predict the future? Does its understanding of “novel” and “creative” represent the accepted definition of those words and concepts? Or does its understanding represent a more abstract technological classification of what it believes is “creative” and “novel”?
Facts: Humans program AI. Engineers create the algorithmic models AI uses to learn. Is Isaacson right in his concern about the program self-learning and becoming its own agent? Can it ever be better than the humans who programmed it at predicting the future without using the past as rationale? We think not. Despite its thoughtful answers, we still contend that generative AI, by definition, can use only the past to understand the present, lacks context, and is self-limited.
Only the Facts
So, let’s take a deeper look at this controversial and aspirational technology and help you break down the various types of technologies that represent AI. All of us are guilty of using the term generative AI with a degree of poetic license. By way of Harvard, here is a lexicon that can help us use the terms accurately and make us more responsible in our AI conversations and decisions.
- Artificial Intelligence (AI) is an umbrella term for any theory, computer system, or software that is developed to allow machines to perform tasks that normally require human intelligence. The virtual assistant software on your smartphone is an example of artificial intelligence.
- Machine learning is a field that develops and uses algorithms and statistical models to allow computer systems to learn and adapt without needing to follow specific instructions. Asking the GPS on your phone to calculate the estimated time of arrival to your next destination is an example of machine learning playing out in your everyday life.
- Generative AI is a type of AI system capable of generating text, images, or other media in response to prompts.
- Natural Language Processing is the field of artificial intelligence where computer science meets linguistics to allow computers to understand and process human language.
- ChatGPT is a chatbot developed by OpenAI that uses generative AI and natural language processing to simulate human-like conversations in a chat window, where the user can ask the bot to help with a variety of tasks, including drafting emails, essays, code, and more.
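To make the lexicon concrete, here is a minimal sketch of how those last three terms come together in practice: a single chat request to a generative, NLP-driven model. It assumes you have an OpenAI API key and the 2023-era openai Python library installed; the model name and prompt are illustrative only.

```python
# A minimal sketch of how a chatbot like ChatGPT is typically called in
# practice, using the 2023-era OpenAI Python library (v0.x API).
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: you have an OpenAI API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # a generative, NLP-based chat model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Draft a two-sentence email declining a meeting."},
    ],
)

# The reply is generated text: patterns synthesized from the model's
# training data, not a lookup of verified facts.
print(response["choices"][0]["message"]["content"])
```

Nothing in that exchange involves looking anything up; the reply is synthesized from patterns in the model’s training data, which is exactly why the age and scope of that data matter so much.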
The Case For, and Against, AI
We do not seek to be naysayers; rather, we see exciting potential for AI to complement and help society. Think augmented intelligence, not artificial. By training and temperament, we at 2040 remain a bit more skeptical than many others. We see challenges that need to be solved before we fully immerse ourselves. So, let’s consider a few of these challenges that need some work.
- Limitation of Algorithms
According to tech expert Sergio Brotons writing for Lingaro, “Generative AI is still limited in what it can accomplish due to its reliance on data-driven algorithms. While these algorithms may be able to recognize patterns or trends within data sets, they have difficulty understanding context when presented with new information or scenarios outside of their training parameters. This means that generative AI cannot draw conclusions or make decisions based on complex situations — something that only humans can do at present.” If results show that AI is overconfident about its own abilities, and if the humans who created it cannot explain why it has gone off the rails (misinformation, lies, darkness in its words and tone), we really need to stop and think about our overly eager and tight embrace of its promise.
- Emulating Human Emotion
Brotons concludes that “Generative AI cannot replace human creativity completely as it lacks the ability to come up with novel ideas or recognize abstract concepts such as humor or irony — all things which require a human touch.” We need to reconsider the belief that AI can indeed represent new and novel ideas.
- Dated Data
In our ongoing desire for deeper thinking, note that generative AI has term limits of its own when it comes to how we deploy it. Industry experts, and we here at 2040, believe that generative AI is self-limited by its reliance on preexisting data to learn and identify patterns, which it then synthesizes into new conclusions. As in any technological system, if the dataset is limited in scope, so is the output: it cannot reimagine the future with new ideas or solutions. This harkens back to our opening question about a future in which humans face a hard stop on new experiences and therefore stop accumulating fresh knowledge.
- Creative Problem Solving
Think about it: The concept of “thinking outside the box” is contrary to any computer programming model. The whole point is to work within the box to solve problems, producing results that improve on, yet still resemble, what has been done in the past. That said, it may only be a matter of time before AI takes that quantum leap into its own agency, as Isaacson predicts. But even then, its solutions may still be rooted in the past.
- Confirmation Bias
In the meantime, the accuracy of generative AI’s output is an issue that limits the technology’s brilliance. Poor data can lead to inaccurate or incomplete output. We have often surfaced, in our book “The Truth about Transformation” and via this newsletter, how our faults and limitations are replicated in our technologies. Train AI on images of only one race or ethnicity, one type of dog, or really any other limited dataset, and its conclusions will inevitably be skewed (see the toy sketch at the end of this list). Brotons adds, “Users who are easily impressed by generative AI or overvalue the AI’s output may suffer from the ‘It’s Perfect’ effect. They just assume it’s flawless without question. This cognitive bias is analogous to the Dunning-Kruger Effect, where individuals overestimate their abilities and knowledge despite lacking expertise or experience.” This overconfidence in AI can lead to errors and misunderstandings that can ruin an organization’s credibility. As widely reported, it is critical for businesses to proofread, fact-check, and vet cultural and contextual references when using AI.
- Controlling the Narrative
Context is everything, and critical thinking makes context relevant in any strategy or plan. Human beings can spontaneously make connections, through critical thinking, that provide relevance and align context. For the record, critical thinking is “the intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action” (The Foundation for Critical Thinking). And context is an underlying foundation of critical thinking because it helps people better understand themselves, their motivations, their goals, and the situation, environment, and world around them. When you can distill information to find the most important parts and apply them to your life, you can change your situation and promote personal growth and overall happiness (Indeed). We are far from convinced that any AI program can provide these benefits.
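To see how a limited dataset caps the output, consider a deliberately simple toy model. This is our own illustration, not code from any of the sources above: a “classifier” that has only ever seen one category can never answer with anything else, no matter what it is shown.

```python
# A toy illustration of the limited-dataset problem. This is a deliberately
# naive "model": it memorizes the labels it saw in training and can only
# ever answer with the most common one.
from collections import Counter

def train(examples):
    """'Learn' by counting the labels present in the training data."""
    return Counter(label for _, label in examples)

def predict(model, new_item):
    # Whatever the input, the output is drawn from past labels only.
    return model.most_common(1)[0][0]

# The training set contains only golden retrievers...
training_data = [
    ("photo_001", "golden retriever"),
    ("photo_002", "golden retriever"),
    ("photo_003", "golden retriever"),
]

model = train(training_data)
# ...so even a cat is "recognized" as a golden retriever.
print(predict(model, "photo of a tabby cat"))  # -> golden retriever
```

Real generative models are vastly more sophisticated, but the principle scales: the scope of the training data bounds the scope of the conclusions the system can reach.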
Forging a Synergistic Future
Latanya Sweeney, Professor of the Practice of Government and Technology at the Harvard Kennedy School and in the Harvard Faculty of Arts and Sciences, predicts that 90% of content will no longer be generated by humans; it will be generated by bots. In addition to that shift, Sweeney adds that “recent developments in artificial intelligence technologies are forcing us to reimagine how we engage with the world around us. ChatGPT may signal a major shift in how we engage with the internet, each other, and the world.”
At 2040 we work with clients to build better organizations by balancing the human factor with the tech tools at their disposal. We stress how important it is to be mindful and aware of the perils of technological determinism, that is, the potential of technology to determine our society’s values and behavior. A perfect example is overreliance on generative AI without guardrails. How simple it is to write a job description, create a business strategy, predict future growth, design an organizational structure, and produce content and marketing messaging by prompting ChatGPT to stream out the answers. And how dangerous it can be to use this output without the human touch to ensure it is relevant and meaningful. Otherwise, we are truly enabling Huxley’s Brave New World as our future.
Get “The Truth about Transformation”
The 2040 construct for change and transformation. What’s the biggest reason organizations fail? They don’t honor, respect, and acknowledge the human factor. We have compiled a playbook for organizations of all sizes to consider all the elements that comprise change, and we have included some provocative case studies that illustrate how transformation can quickly derail.