Misinformation, AI, Cyber Troops, and Our Need for Guardrails
Issue 131, October 19, 2023
We’re taking a moment (again) to consider the implications of the speed of technological change confronting us daily. What Al Gore charmingly called the information superhighway back in 1993 has morphed into a bullet train … run by AI. What’s more, we face actors with malicious intent who manipulate our opinions, bend our minds into believing fiction, and influence what we accept as truth.
Our questions for today: Who can we rely on to place guardrails on the opportunistic technological race to the future? Ourselves? Our government? Our organizations? Or do we put all the responsibility on generative AI to protect us? The key point: our future stability depends on someone (or something) taking action.
Using AI to Manipulate Us
Having adjusted to a world with the latest versions of GPT, Bard and all the other AI tools and interfaces that continue to become available, we have grown comfortable welcoming AI into our lives. We relish the opportunities to save time, become more efficient and, most impactfully, expend less energy by letting something else do the work for us.
But instead of a bullet train carrying society to new positive evolutionary heights, AI seems more like a runaway train hurtling us toward an unknown destination, one whose serious and consequential societal fallout we cannot yet envision. Take a moment to consider the ever-expanding incidents of misinformation generated by humans with malicious intent, who leverage AI and exploit our reliance on digital tools and platforms in the world arena.
Just this past Tuesday, the internet was filled with instant posts following a missile strike on a hospital in Gaza. Sorting truth from falsehood became a struggle; it has been nearly impossible to determine who did what amid the propaganda machines on all sides. War as an internet/social media phenomenon has ramped up to manipulate public opinion, with each side placing blame on the other. As for the hospital incident, governments, the media, and most of society still can’t figure out what to believe.
And then this past week, CBS reported that research group Alethea detected a network of at least 67 accounts on X posting false content about the war between Israel and Hamas. The immediate danger: those posts included videos mistranslated by AI, each garnering millions of views.
Elon Musk reportedly claimed to have taken down the 67 accounts. But is that enough? Is reacting, rather than acting proactively, the right strategy in a continuously churning whirlpool of information and disinformation? Being reactive is a mitigation strategy; being proactive addresses the core issues to eliminate negative impacts and consequences.
Wired reports, “Journalists, researchers, open-source intelligence (OSINT) experts, and fact-checkers rushed to verify the deluge of raw video footage and images being shared online by people on the ground. In the time taken to fact-check the exponentially expanding information, users of X seeking information on the conflict faced a flood of disinformation.”
Again, we go back to the need for someone or something to create guardrails. How can social platforms ensure the credibility of what is being served up to us? And in the case of X, who’s watching the store when most of the individuals responsible for tracking disinformation have been fired? “Elon Musk’s changes to the platform work entirely to the benefit of terrorists and war propagandists,” states Emerson Brooking, a researcher at the Atlantic Council Digital Forensics Research Lab, adding, “Any sort of ground truth, which was always hard to get on Twitter (X), is now entirely out of reach.”
Not to beat the dead horse that is X, but here is one last egregious example of the manipulative power of AI and bad actors at war. “X users were presented with video game footage passed off as footage of a Hamas attack and images of firework celebrations in Algeria presented as Israeli strikes on Hamas. There were faked pictures of soccer superstar Ronaldo holding the Palestinian flag, while a three-year-old video from the Syrian civil war was repurposed to look like it was taken this weekend,” reports Wired.
The Power of Cyber Troops
A new and consequential power is rising and expanding, made up of those who seek to manipulate public opinion and thought, now dubbed “cyber troops.” The Oxford Internet Institute defines cyber troops as “government or political party actors tasked with manipulating public opinion online.” Time reports that the Oxford research group identified 81 countries with active cyber troop operations, each using different strategies to spread false information, including spending millions on online advertising. How do we know what is fiction and what is truth?
Disinformation and the manipulation of information are, of course, nothing new. Humans have sought to bend the opinions of others, revealing some information, omitting other information, and telling downright lies, ever since we began to communicate.
The speed, ease of access and reach of today’s communications platforms and networks result in broader influence, which has consequences. In 2020, Oxford University researchers found that Iran targeted Palestine on Facebook and Israel on Twitter. But it goes both ways. “Researchers also noted that Israel developed high-capacity cyber troop operations internally, using tactics like botnets and human accounts to spread pro-government and anti-opposition messaging and suppress anti-Israel narratives,” as reported by Wired. Cyber troops discredit political opponents and foreign governments. And as Time reported, “Facebook has employed PR firms to use social media to trash the reputation of competing companies.”
Our point is not to be political. Our concern remains the distortion of public information and our ability to discern truth from fiction when most people don’t, or won’t, take the time to assess and verify what is being fed to them. The ever-expanding crisis we find ourselves in touches each of us, our organizations, and our government. Public discourse in the US, as in most if not all hyper-connected countries around the globe, is mired in a credibility crisis.
Fourth Generation Warfare: Bad Actors
How do you know what to believe? No matter how well we have mastered critical thinking, our natural instinct is to trust. That trust extends to what is shared online by accounts we believe belong to other humans; we rarely take a moment to consider whether an account is real or fake. Some of the newly available generative and conversational AI tools are programmed to act just like a human, asking how your day was, exploring what is on your mind and helping you make decisions. Some are so good that the mind believes it is talking to a real human. And if that is what the mind believes, trust forms, even though you are simply talking to computer code.
Axios reports that “AI-generated content could soon account for 99% or more of all information on the internet, further straining already overwhelmed content moderation systems.” We already lack guardrails to contend with the misinformation put in front of us, and AI is adding to the heap. News organizations are investigating protections against generative AI systems. As CNBC states, “Newsroom leaders are preparing for chaos as they consider guardrails to protect their content against artificial intelligence-driven aggregation and disinformation.”
Graham Lawton writes for New Scientist, “Many researchers are saying that the next two years will be make or break in the information wars, as deep-pocketed bad actors escalate their disinformation campaigns, while the good guys fight back. Whether the good or bad side prevails will depend on who has the deepest pockets and greatest energy and will. The winner will determine and shape everything, from people’s beliefs about vaccines to the outcomes of elections.”
The skeptics are typically lone voices in a sea of herd-mentality believers. So, think about it. Are we now faced with fourth-generation warfare, blurring the lines between civilians and combatants? Is weaponized misinformation and disinformation the new platform for staging a war? And we’re referring not to a military war, but to campaigns by businesses and organizations of any size. At its most dismal, fourth-generation warfare could be described as societal terrorism.
Your Own Guardrails
As we continue to advance generative AI, adapt it to our professional needs and depend on tools and platforms to inform us, it’s critical to hone the skill of separating fact from fiction. Above all, ask about provenance: identify where the content was generated and how it has been used. Laws and regulations can help police disinformation, as evidenced by the EU’s oversight of misinformation. Education, media literacy and critical thinking become personal and professional responsibilities for revealing disinformation. And Axios adds that turning AI’s own algorithms on the problem, using AI to detect AI-generated misinformation, is one tactic for ferreting out the truth (or not).
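To make that last tactic concrete, here is a minimal, illustrative sketch of what AI-assisted detection might look like in practice. It assumes the open-source `transformers` library and a publicly available detector checkpoint; the model name, example posts, and output labels are assumptions for illustration, not an endorsement of any particular tool.

```python
# Minimal sketch: using an AI classifier to flag text that may itself be
# AI-generated. The model name below is illustrative; any text-classification
# checkpoint trained to distinguish human from machine text could be swapped in.
from transformers import pipeline

# Load a detector model (assumption: this checkpoint is available on the
# Hugging Face Hub; substitute whichever detector your organization trusts).
detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

# Hypothetical posts to screen before sharing or amplifying them.
posts = [
    "Eyewitness footage shows the strike happened exactly as officials claim.",
    "BREAKING: leaked documents prove the entire event was staged.",
]

for post in posts:
    result = detector(post)[0]  # e.g. {'label': 'Fake', 'score': 0.97}
    print(f"{result['label']:>5} ({result['score']:.2f}): {post}")
```

Scores like these are probabilistic and often unreliable on short or paraphrased text, which is exactly why such tools should supplement, never replace, the human critical thinking this issue argues for.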
Today’s misinformation environment demands that each of us take a step back, use our critical thinking skills, and find realistic, relevant, and fruitful ways to stop the runaway train before it’s too late. Our mantra, always, is that the truth, or the closest version of it, comes via critical thinking. The human factor, how we adapt, how we change and how we transform, remains the most important element.
Get “The Truth about Transformation”
The 2040 construct for change and transformation: What’s the biggest reason organizations fail? They don’t honor, respect, and acknowledge the human factor. We have compiled a playbook for organizations of all sizes that considers all the elements that comprise change, and we have included provocative case studies illustrating how transformation can quickly derail.