A Danger of AI Tomorrow is Denying What We See Today
#TIPS4FAMILIES | The suggestion by the WH that videos of POTUS were "manipulated" sets troubling precedent for observable reality at a time when gov't has responsibility for leading policy on AI...
Comments this week by President Biden's press secretary, Karine Jean-Pierre, suggesting that unflattering videos of the president were "purposely being altered" were a deeply troubling abdication of the administration's responsibility to lead on language related to AI.
It’s a very big deal at a time when we are all trying to navigate how AI may alter our observable reality and what that means to everything we see.
By suggesting that video clips from the recent G7 meeting, or star-studded Hollywood fundraiser for Biden, were manipulated without detail or specifics as to how, the White House has now put into motion a precedent that will make it harder to evaluate future content that *is* actually manipulated by AI.
What Happened?
The question asked by the press in the briefing room was specific to Obama putting his hands on Biden’s back as they left the stage at the Hollywood fundraiser hosted by George Clooney and other luminaries. No matter the angle, and whether or not the clip was shortened, nothing changed about that very real interaction.
There were other aspects to the highly charged coverage, including the question of whether Biden “froze”… but that is something a viewer needs to decide for themselves based on what they see. It doesn’t make the video itself “fake.”
In fact, the vast majority of media, of all political leanings, shared full clips of all the Biden interactions during the period covering the G7 and Hollywood fundraiser. What the press secretary should have been more specific about (rather than throwing around the word “fake”) was that the content in some cases was cut to leave out context — which may or may not matter to you.
For instance, at the G7 gathering where leaders watched parachutists land and Biden seemed to wander off, he was indeed heading toward a man who had just landed and giving him a thumbs up. Does that change how you see it? Maybe.
The point is that we need to train ourselves to look for context and even seek out primary sourced data when we can. In this case, an even longer video straight from the G7 was easy to find.
The answer is not to allow ourselves to be guided to a narrative but try to consume as much content as we can to decide the answer for ourselves. In the future, we want more content and more sources…not less and not a single arbiter of truth.
Politics as Usual at an Unusual Time
It’s an understandably bad time to navigate these newly fraught questions about reality, as elections have always been exercises in forced persuasion.
The more serious point here, though, is that the public's understanding of how video can be manipulated in increasingly sophisticated ways via artificial intelligence is new and fragile. We are “kicking” the general public when they are down.
Loosely throwing around buzzwords that most Americans are not yet familiar with, such as "cheapfake," is not business as usual. Manipulating a public already struggling to navigate AI is as cynical and unethical as anything we could expect from someone in office.
While billions of dollars are being spent on AI innovation, we are already in a tenuous place where innovation is moving much faster than the general public's ability to master what's new.
“The Liar’s Dividend”
The administration has inched dangerously close to a practice experts have warned about for years. Coined by scholars Bobby Chesney and Danielle Citron in 2019, “the liar’s dividend” describes how public figures might exploit the mere idea of deepfakes to dismiss real content they find inconvenient.
Academics at the Governance and Responsible AI Lab (GRAIL) at Purdue University have also weighed in on the practice, noting "…politicians may seek to profit from spreading misinformation about misinformation." They also conducted research on the general public's capacity to be swayed by such an approach, finding that "…people are indeed more willing to support politicians who cry wolf over fake news, and these false claims produce greater dividends for politicians than remaining silent or apologizing after a scandal."
Remarkably, warnings about the "liar's dividend" have come from all sides of the political spectrum. The Brennan Center also published a report on the topic this year, warning that "unscrupulous public figures or stakeholders" may exploit consumer awareness about the risk of deepfakes to "…falsely claim that legitimate audio content or video footage is artificially generated and fake."
From Zero to “Fake”
Surely none of these experts and academics expected that the president's press secretary would so willingly prove their point, or that an administration would be so bent on a narrative of Biden as full of vigor and mental acuity at the expense of Americans' vulnerability in understanding new technology.
The public sector is supposed to protect citizens from exploitation, not exploit their collective weakness at a time of great societal change. We need Washington to commit to defining what is factual with clinical specificity, not to engage in mind-bending wordplay and textbook gaslighting.
If you were to look at Obama leading Biden off the stage and thought it resembled guiding an aging parent somewhere and that’s not the state you want your president to be in, then great. If you thought it represented two affectionate colleagues leaving an event and you are happy Biden is fit for office, fine. But in no case should you be told the interaction did not happen. It's deeply irresponsible.
The Holy Grail is Your Definition of Truth
Our future depends on maintaining a shared basis in observable reality.
As AI becomes more advanced and integrated into society, clear leadership rooted in facts will be paramount to navigate these waters successfully.
We must trust ourselves and our fellow citizens to come to a personal conclusion about what “truth” is based on the simple consumption of “facts.” And our children must understand the difference between content edited for length and content altered by AI into complete fabrication.
Crying wolf about deepfakes when none exist is the opposite of leadership – it undermines public trust and sows confusion at a time when honesty is essential. We deserve better from our elected officials…of all parties.