Emotional AI is Here 😬 [UPDATED]
#NEWSLETTER | Of the many ways data is increasingly extracted from us and turned into AI innovation, emotional data deserves particular attention right now...
[May 17 UPDATE | The speed of AI development means that at times I will add notes to past published content to keep it all connected. In this case, OpenAI, maker of ChatGPT, launched a new version this week with updates that include mimicking human emotion via a now voice-activated bot. More to come on the larger emotional/psychological implications (updates will be here), but for now the context is privacy: OpenAI will train the bot on our verbal interactions, and other companies will now be increasingly motivated to collect this data too. Questions? As always, feel free to get in touch]
Data associated with our image continues to be some of the most valuable for creating powerful new innovations. But this information also remains shockingly unprotected. It’s why TikTok is problematic, and why we need to pay attention to who is collecting our biometric data and for what reasons (remember, our face, like a fingerprint, is unique to us).
Biometric Data & Emotional AI
What can be created by analyzing one’s face? Well, deepfakes and other fraudulent vehicles, but also the ability to analyze our “emotions.” For years, companies have analyzed text-based information, such as social media posts, to determine “sentiment”: how people are responding to an issue, brand, or idea.
The buzziest new applications (albeit a long time in the making) look at micro-movements in our faces, often accompanied by changes in voice or even heart rate, to help determine “how we feel.”
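To make the text-based version of this concrete, here is a minimal sketch of how “sentiment” scoring works, using the open-source VADER analyzer that ships with Python’s NLTK library. The sample posts and the score thresholds are my own illustrative assumptions; real commercial systems are far more elaborate (and, as noted above, increasingly multimodal).

```python
# A minimal sketch of text-based sentiment scoring using NLTK's
# open-source VADER analyzer. The sample posts and the +/-0.05
# thresholds are illustrative assumptions, not any company's pipeline.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch

analyzer = SentimentIntensityAnalyzer()

posts = [
    "I love this brand, the new release is fantastic!",
    "Terrible customer service. Never buying from them again.",
    "The update shipped today.",
]

for post in posts:
    scores = analyzer.polarity_scores(post)
    compound = scores["compound"]  # -1 (most negative) to +1 (most positive)
    if compound >= 0.05:
        label = "positive"
    elif compound <= -0.05:
        label = "negative"
    else:
        label = "neutral"
    print(f"{label:>8}  {compound:+.2f}  {post}")
```

The point is less the code than the principle: a numeric score stands in for a feeling, and decisions get made on that number.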
For years, China has monitored children in the classroom to assess interest and aptitude, and employees to gauge whether they are “joyful.” More surprising is how fast companies elsewhere have jumped on the bandwagon, using emotional AI on workers, including on job applicants via their video interviews (and, no surprise, with much controversy as a result).
The Data We Give vs. What is Taken
This holy grail of assessing and swaying our feelings toward a subject raises thorny questions of privacy and coercion, and it shows how important it is to protect our data and to think critically when faced with applications that use our information.
I was lucky to be invited by Liam O’Neill to talk about the issue of biometric data on his show New York Insider recently (thank you Liam 🙏). I hope you’ll find the explanation and detail useful.
What Are We Even Talking About?
Overlaying all of these issues is how we talk about AI, technology, and future applications — and whether we are all “speaking the same language.”
The problem isn’t that people lack the capacity to understand AI, but that we don’t yet talk about, or understand, AI in the same way.
Neuroscientist and policy analyst Joseph B. Keller homed in on this challenge brilliantly in his article, “Beyond the Buzz: Clear Language is Necessary for Clear Policy on AI.” Keller perfectly captures the urgent need to ensure we are all talking about the same thing, with the same words to describe it.
I’ve started to build a resource center with a directory of terms, documents and reports. I hope you find it helpful.
Speaking of Emotions…
Stoking our emotional flames via algorithms is, of course, fundamental to the business models of social media companies (and the cause of much ire over how they have ensnared kids as a result). But the fear now is that our emotions will be read, and manipulated, far more effectively using artificial intelligence.
But “Everyone…”
It’s the “everyone is talking/worrying/thinking about it” phenomenon that should concern us when considering AI’s impact on issues, ideas, and our perception of the world.
At one extreme is a concern consuming our national intelligence agencies: the chaos, discord, and danger caused by online vehicles seeking to sway people en masse. But even more basically, this amplification can elevate some issues while obscuring others.
For me, the digital amplification of some issues over others was made real as I read about California finally voting to outlaw child marriage (with one key legislator still resisting).
There is no federal law prohibiting child marriage, and in some states, most notably California, there is no minimum age (… it’s estimated that 10,000 kids are married in the state each year). Incredibly, in California you can’t get a divorce or access women’s shelter services until you are 18. It’s absurd. You can read more here.
I once asked the organization fighting to outlaw child marriage how, when there is so much public outrage and advocacy regarding children, so many could be unaware of this issue. They said the media have rebuffed their efforts to tell the story because it’s “too dark and depressing.”
This disparity stuck with me as a reminder of how what is deemed important in the digital realm is skewed by the money, resources, or even “clicks” behind any given issue. The reality is that we are seeing this play out in real time, and with more urgency, as the months go by.
So when your kids say “but everyone…,” it’s a great opportunity to discuss what that means in an AI-led future...