This Week: POTUS, Humanoids & Artifacts of the Past
#NEWSLETTER | You'd be forgiven for feeling whiplash on the news front, especially regarding POTUS. But there is a lesson here about finding one's truth and not getting too distracted by noise...
If AI has the potential to control how we see the world, then the story of President Biden’s mental acuity has been a test of whether we can stay focused on the facts while waves of opinion churn beneath us.
The jury is out on whether we passed or not…
Has the Reality of Biden’s Fitness Really Changed?
In a painfully short period of time, media and pundits have swung from dismissing any coverage questioning Biden’s health (most notably The Wall Street Journal’s June exposé on the topic) to urgently pleading that POTUS abandon his reelection campaign.
In fact, it was only a couple of weeks ago that I wrote about the dangers of the White House calling videos of Biden’s European trip “cheapfakes” at a time when we are actually at risk of being confused by AI-generated/manipulated video and image content. More below ⤵️
The point is not the substance of the debate per se, but instead how we have gotten from there to here in such a short and dizzying amount of time.
The hope should be that we get back to taking in facts and formulating our own opinions rather than co-opting the opinions of others and treating them as factual in the name of loyalty or tribalism…
The Humanoids are Coming
For all of the election news, you’d be forgiven for missing the (mostly trade) coverage of Morgan Stanley’s report on millions of humanoids joining us over the next decade or so. I wrote this week about how we should begin to consider a world shared with these new robotic neighbors… ⤵️
If you think this isn’t something to worry about today, you’d be mistaken. Chatbots have already exposed the complex emotional response we humans have to anthropomorphic machines, and the risk is here right now.
Professor Shannon Vallor of the University of Edinburgh explains this challenge in a fantastic interview with Techpolicy.press (I highly recommend reading the full exchange):
I think where we're at today is actually quite a dangerous place because we have in many ways an often seamless appearance of human-like agency because of the language capability of these systems. And yet in some respects, these tools are no more like human agents than old style chatbots or calculators. They don't have consciousness, they don't have sentience, they don't have emotion or feeling or a literal experience of the world, but they can say that they do and that they feel things, want things, that they want to help us, for example. And they can do so in such convincing ways that it will be really unnatural feeling for people to treat them as mere tools.
A Good Reminder About Fact Finding
An old friend works at historical records company Find My Past, and this week she reminded me of how much rich historical data is available online. As much as we rely on social media and online news to inform our opinions, we’d be smart to supplement them with primary source material.
It’s a great exercise in counteracting the influence of media, including a future of AI-driven news content.
It will take more work, but there is no other way… in the future, identifying what is factual will fall to each of us individually. It’s going to get harder, so teaching kids solid research skills now, so they can determine their own positions, is a great start.
What are your thoughts this week? Do get in touch…