2040's Ideas and Innovations Newsletter
Humans and AI Align

ChatGPT: Shapeshifting Our World

Issue 95, February 16, 2023

If you’ve been paying even a nanosecond of attention to the tech news headlines, you might think the universe shifted on November 30, 2022, when OpenAI introduced ChatGPT, its generative, natural-language program that produces text in response to user prompts. Yes, we know that everyone under the tech corridors’ suns has weighed in on its disruption. And guess what, we’re going to as well, and as you will discover, for a different reason.

Temples to Big Tech

But first, a moment of context. For decades we have been reeling from the unbridled worship of big tech. For starters, just look at the monumental architecture of their corporate headquarters as temples to tech. On the positive side, the country initiated widespread support of STEM curricula. Girls Who Code became a badge of honor. The U.S. became the mecca for students from all over the world to study at our universities and compete for engineering degrees … and jobs. A legion of 20-year-olds drank the startup Kool-Aid and raised millions of dollars to fund so many solutions looking for a problem.

Apple, Amazon, Alphabet, Microsoft, and Meta became corporate gurus dedicated to reimagining our collective future. Twitter and TikTok became the preferred sources of news and information. Instagram and Snap took everyone down rabbit holes appealing to their vanities. And Activision, Electronic Arts, and a myriad of other game developers stole our attention and dropped us into a ready-player-one black hole. Everyone thought they’d get rich riding on the coattails of the big tech geniuses.

Our love affair with tech enabled the rise of next-gen tech entrepreneurs including Mark Zuckerberg, Elizabeth Holmes, and Sam Bankman-Fried.

Fault Lines

And then the veneer began to crack. Privacy. Misinformation. Bullying. Shaming. Self-indulgence. Narcissism. Bias. Losses of millions of dollars. The unbridled, unbounded, consequential decision making of young, inexperienced leaders left so many of us across the business world thinking WTF.

Even the compromising physical impacts of tech are coming to light, as research has revealed young teenage girls developing facial and bodily tics in response to TikTok content consumption. Our young people’s minds are being reprogrammed to prefer short-form content. We are becoming attention-deficient and attention-deprived, forcing the cultural conversation to “get to the point” before we lose interest. When you think about all this, what is our society going to look like in the near and longer term? How will our organizations function? Be led? By sound bites without substance?

Cause and Effect

Society aligned to the promises tech titans made and embraced the excitement of the possibilities. As is often the case, the public loves to worship the pioneers who break with the norm. The American psyche likes the entrepreneurial spirit when rule breakers make day-to-day life a bit more interesting and forward looking.

The cascading effects of the conceit of tech companies are creating a domino effect across our economy. They believed they were too big to fail, with the resulting waves of layoffs and pressure from the Street to go for the short term or go home. Suddenly, after years of free passes, tech CEOs took million-dollar pay cuts. Tech stock prices became unstable. The public lost money. And through the haze, instead of being the hero, big tech started to look like the anti-hero.

Our intent is not to be overly negative or paint a picture of the end of the world as we know it. Our lives have indeed improved and have been made easier with tech innovations in so many ways; medicine is one obvious example. But at 2040, we live by Newton’s third law of physics, paraphrased, “for every action there is an equal and opposite reaction.” Any reaction results in consequences, many unforeseen and surely unintended.

And that’s where we find ourselves right now.

Code Breakers

So, what happened? ChatGPT and any tech leveraging a Generative Pre-trained Transformer (GPT) are brilliant metaphors for watching how a good idea can become distorted at warp speed. Consider that in today’s ever-growing, tech-dependent society, the promise of artificial intelligence (AI) is often seen as a silver-bullet solution to problems that haven’t even manifested yet. GPT is currently creating waves of enthusiasm and optimism about shortcutting work processes and even saving money. It has been adopted overnight to game copywriting, fake college applications (think personal statements and essays), take tests, write papers, and perform as a customer service chatbot on steroids. It is lauded as much as a toy as it is as a tool to augment medical diagnoses and provide serious, fact-based sources of business and financial information.

Just two weeks ago, headlines broadcast that ChatGPT passed medical board exams — just barely, but it passed. That alone should concern every spectrum of society. How do we know if anyone has used alternative means to achieve their credentials? That is a real crack in the foundation of trust.

Imitation Games

So, ChatGPT gained 100 million users in two months. And in a case of imitation-is-the-sincerest-form-of-flattery syndrome, Microsoft made a $10 billion investment in OpenAI to fold ChatGPT into Bing. In a gamesmanship move, “Microsoft is offering citations with its answers, allowing people to fact-check the AI-powered answers they receive. Microsoft is using Bing’s vast knowledge of the Web to make answers more reliable than those served up by ChatGPT, helping to address a key shortcoming — that it can be confidently wrong,” as reported by Axios.

Then Google launched its own conversational AI service, Bard. “Bard can be an outlet for creativity, and a launchpad for curiosity, helping you to explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now, and then get drills to build your skills,” stated Alphabet CEO Sundar Pichai. Hey, not so fast. When Bard launched, it totally botched the Webb Telescope question, and Alphabet’s stock shed $100 billion in market value in real time, according to Reuters.

Shares in Baidu soared, after the Chinese search engine giant said it would be launching its own ChatGPT-style service. CNN reported, “Its artificial intelligence chatbot called Wenxin Yiyan in Chinese or ERNIE Bot in English will launch in March.” CNN added, “ERNIE, which stands for Enhanced Representation through Knowledge Integration, is based on a language model that Baidu first developed in 2019.”

Meta has taken a different approach with its fortunes spent on AI, using it to rank news feed items, moderate content, and translate text, according to Axios. “Amazon uses conversational AI for Alexa’s voice recognition, to optimize its warehouse operations and for other purposes,” adds Axios, and Apple CEO Tim Cook has said that AI has the potential to change just about everything his company does.

The me-tooism isn’t surprising. It’s easier to follow the leader than to create in the first place. Our question: Is this a case of critical-thinking-based innovation or a lemming-like march over an edge?

The launch of ChatGPT, and the excitement that resulted, caused many companies working on their own AI offerings to throw caution and concern to the wind, turn a blind eye to imperfection, and join the fray for fear of losing any aspirational or real competitive ranking.

Today’s reality, as a result, is that field testing of the system and its applications is happening in real time, with the public as the research lab. At a time when society is already struggling with the proliferation and reach of “fake news” and alternative facts, the potential of ChatGPT to deliver unchecked facts comes with consequences. If you project forward, unchecked generative language can feed radical beliefs and contribute to our overall anger and rage. And let’s not forget those mental health and physical impacts we already discussed. With innocent citizens as the test subjects in the GPT research lab, and without a controlled environment, GPT is going to come with unintended consequences. Caution is once again being thrown to the wind for the sake of potentially transformative technology and innovation.

This seemingly unstoppable, unchecked ChatGPT and its cousins are going to be huge moneymakers, promising a future of search, content creation, and knowledge gathering on steroids.

Unintended Consequences

The alarmists have, for good reason, waved the red flag about the potential harm ChatGPT can inflict. CNET (an online, tech-focused publication) was called out for using ChatGPT to create articles for its website. Across content marketing circles, ChatGPT (and similar generative AI tools) are being embraced and secured by any number of companies seeking to expand their online audiences by increasing the amount of content on their sites and apps. Businesses see an easy, resistance-free path to achieving goals and handling tasks by using ChatGPT. And the tool is unlocking a new category of writers who have become content engineers and AI specialists. It’s no surprise, since our evolutionary programming defaults to expending the least amount of energy possible to gain what we need and want. ChatGPT fits that bill.

But ChatGPT is also stuck in time, with a fixed body of knowledge ending in 2021, when its training data was assembled. ChatGPT is not yet designed to evolve with new information and input in its current form. So, Microsoft is seeking to contend with ChatGPT’s dated knowledge by promising the integration of its own tools to ensure currency across news and recently published materials. The new, improved Bing using GPT is itself a prototype that requires further development and maturation in how it considers and processes information. In our tumultuous times, a 15-month information gap could be catastrophic for any individual or organization seeking help in making a decision or gaining understanding of an issue. In contrast to the human mind and our ongoing collection of knowledge, ChatGPT is based on limited and dated information and data, limited knowledge that far too many people may use instead of doing the work to find out the facts.

The Fatal Flaw

We live in a digital world infused with misinformation, which will surely increase. People don’t like to expend the energy to analyze, second-guess, and be skeptical. We have search filter bubbles based on our browser histories working against us by feeding us what we want to know, not what we need to know. Bias, the spread of fake news, and high anxiety among teens about body image and the like make headlines. With ChatGPT, if we don’t ask the AI the right questions the right way, we get incorrect, incoherent, or out-of-context responses. As with all AI, can we really trust this powerful new tool?

So, a word of caution. This natural language tool has been created, programmed, and directed by human beings. The system is only as good as its input. Humans are prone to error; they can be forgetful, careless, and apt to overlook details. Since technology is created by humans, it is important to recognize that human faults also permeate our technologies, including artificial intelligence.

A related issue that we cover in our book, The Truth About Transformation, is technological determinism, a perspective on the ongoing challenge of understanding how human behavior has been affected by technology. In other words, a society’s technology determines its cultural values, social structure, and history. Therefore, social progress follows an inevitable course that is driven by technological innovation. Ironically, humans have created the technology that has radically shaped their behavior. As technology has evolved, our partnership with it has evolved too. What is not often apparent are the consequences of this symbiotic relationship. Have we always had our innate abilities, or has the adoption of technology fundamentally changed humans? Look around you and consider your own behavior changes brought about by technology. As we seek to advance technology, and in the process evolve humanity, we rarely examine the consequences that will come in the near or far term, and the impact those consequences will have on each human.

We can easily be manipulated by algorithms, and as a result, we don’t know what we’ve missed, because the decision about what to show us was made without our involvement. It takes technological savviness and heightened awareness to understand how these systems work. At an individual level, the impacts may not be significant, but when viewed at an organizational or societal level, they are significant and can often be severe. We have become painfully aware of the impact of misinformation, disinformation, bias, and bullying online.

Generally speaking, the human factor flies directly in the face of the tech community’s belief that technology is the silver bullet, the answer to everything. In both organizations and society, technology is a tool created by humans to solve problems. How technology is changing organizational cultures and workplaces is a work in progress, with positive and negative changes that will be significant.

More on Critical Thinking

Critical thinking is essential to navigate today’s society, especially in consideration of generative AI. An example discussed in The Truth About Transformation cites the challenges with AI facial recognition programs. If the tool has only been trained on one race, one sex, or a limited set of individuals to use as prototypes, any true facial recognition is compromised from the get-go. Incomplete data entered into the system guarantees a faulty outcome.
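The compromise is easy to demonstrate in miniature. Below is a purely illustrative sketch, not any real recognition system: every function name and data point is invented. A nearest-centroid “matcher” trained only on feature vectors from one narrow population is still forced to return a match for a face unlike anything it was trained on, and so is confidently wrong by construction.

```python
# Illustrative sketch (not a real recognition system): a nearest-centroid
# "matcher" trained only on one narrow group of feature vectors.
from math import dist

def train_centroids(samples_by_person):
    """Average each person's feature vectors into a single prototype."""
    return {
        person: [sum(xs) / len(xs) for xs in zip(*vectors)]
        for person, vectors in samples_by_person.items()
    }

def identify(centroids, features):
    """Return the closest known prototype; there is no notion of 'unknown'."""
    return min(centroids, key=lambda person: dist(centroids[person], features))

# Training data drawn from one narrow population (a tight cluster near 0.2).
training = {
    "alice": [[0.20, 0.21], [0.19, 0.22]],
    "bob":   [[0.25, 0.18], [0.24, 0.20]],
}
centroids = train_centroids(training)

# A face from an unrepresented population is still forced into a match:
print(identify(centroids, [0.90, 0.85]))  # confidently wrong, by design
```

The flaw is structural, not a bug: with no examples from the wider population and no reject option, the system cannot know what it does not know.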

Another example of the necessity for critical thinking is the use of generative AI to create content. Yes, the process is easy and quick, and depending on the amount of content available, it can result in increases in traffic and therefore increases in advertising revenue. But is the content accurate? Is it free from bias? Is it current? Is the organization putting itself in legal jeopardy? In countless recent examples, editors have fact-checked ChatGPT only to discover factual errors that needed to be corrected. It takes rigorous critical thinking to flag those suspicious inconsistencies.

Those who are using generative AI for business purposes or for experimentation are in learning mode. And one of the biggest lessons is to ask the right questions. Since generative AI is programmed by humans, it may not yet respond to prompts reflecting the diversity of the human thought process. In fact, those prompts (questions) may not align with our common vocabulary, the words we use and the context we attach to them. Editors who have experimented with ChatGPT have discovered that the generative language only gets close to human cadence after repeated prompt rephrasing and clarification. And that requires critical thinking.
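The rephrase-and-clarify loop those editors describe can be sketched as a simple program. This is a hedged illustration, not the real ChatGPT API: `fake_model` is an invented stub that only answers well once the prompt carries enough context, standing in for the real system, and the keyword check stands in for human editorial judgment.

```python
# Illustrative sketch of iterative prompt refinement. `fake_model` is a
# stand-in stub, not a real API call; a real generative model would go here.
def fake_model(prompt: str) -> str:
    # The stub only answers well when the prompt carries enough context.
    if "audience:" in prompt and "format:" in prompt:
        return "A clear, on-target answer."
    return "A vague, off-target answer."

def refine(question: str, clarifications: list[str]) -> tuple[str, int]:
    """Re-prompt, adding one clarification at a time, until the answer lands."""
    prompt = question
    for attempts, extra in enumerate(clarifications, start=1):
        answer = fake_model(prompt)
        if "on-target" in answer:        # crude stand-in for human judgment
            return answer, attempts
        prompt = f"{prompt}\n{extra}"    # rephrase: add the missing context
    return fake_model(prompt), len(clarifications) + 1

answer, attempts = refine(
    "Explain the Webb telescope.",
    ["audience: a 9-year-old", "format: three short sentences"],
)
print(attempts)  # 3: the answer only landed on the third attempt
```

The point of the sketch is the loop itself: each unsatisfying answer tells the human what context the prompt was missing, and supplying it is an act of critical thinking, not automation.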

Get “The Truth about Transformation”

The 2040 construct for change and transformation. What’s the biggest reason organizations fail? They don’t honor, respect, and acknowledge the human factor. We have compiled a playbook for organizations of all sizes to consider all the elements that comprise change, and we have included some provocative case studies that illustrate how transformation can quickly derail.

Order your copy today and let us know what you think!
