What is Emotional Artificial Intelligence?
#INVESTIGATION | When considering what makes us human & the data that goes into building AI, emotional and psychological info seems like an obvious ingredient. But it's a minefield of a category that needs our focus.
Perhaps you are already over the seemingly clockwork emotional rollercoaster of “what is AI, and which aspects of this technological whirlwind should I worry about today?”
But “emotional artificial intelligence” (EAI) is probably the least talked-about aspect of AI innovation, and yet arguably one of the most important, as it raises existential questions about what makes us uniquely human, and whether that essence can be replicated.
Getting to the crux of EAI is not particularly difficult. Because if artificial intelligence seeks to mimic humans (ostensibly for our collective benefit), then we should expect every aspect of our beings to be scrutinized and thrown into the data pot.
What is Emotional AI?
When we think and feel, we of course “emote” externally, from the tiniest ways our faces register thought to other cues including heart rate, skin conductivity, brainwaves, respiration, and even the tone and tenor of our voices. These tiny non-verbal “hints” are how we humans understand one another: the proverbial “reading the room,” you could say.
(Note: as I write about new topics, I find it handy to include a list of words commonly associated with the topic. Hope it helps. A full list can be found in the “Resources” section.)
It’s then, of course, no surprise that intrepid innovators would consider the emotional interplay between our thoughts, expressions, and actions as useful data to collect and process.
The current and potential applications of EAI run the gamut too, from ads targeting consumers with products and services tailored to their mood, and chatbots with more empathetic delivery, to devices that can “predict” a health event such as a stroke, or alert drivers who may not realize they are getting drowsy at the wheel.
Bigger picture, though, the idea that human choice can be predicted before a person is aware of their own choice, even if possible, is a deeply troubling meditation on self-determination. Even more concerning: who becomes the arbiter and decision maker here?
Led by researchers in the UK, The Emotional AI Lab is a fantastic resource.
More immediately, we also need to discuss the ethics around the commercialization and sale of EAI data. Unfortunately, there are few calls right now to do so. Nor do many of us understand when we are handing over the ingredients of EAI to companies.
OMFG.
Okay, let’s say it: This is creepy. Downright dystopian, in fact.
EAI raises profound existential questions, but jumping to the bleakest of Black Mirror-like future scenarios is not helpful. There is much that can be accomplished by soberly considering the topic right now. Because if we ignore EAI, we are more likely to face the worst-case scenario than if we consider the possibilities that are right before us.
The Good, Bad & Ugly
So, let’s dig into how EAI could be useful, what could go wrong (even very wrong) and the sensible ways to consider EAI in the context of each of our lives.
It’s also important to remember that none of these applications is possible without tracking human emotion via image, voice, and other biometric data, so a good start is making a list of every way you are giving that up right now. This is not to say halt and go off the grid, but rather to make sure it’s clear what data you are sharing, why, how it will be used, and whether you are happy with that use.
The Good: Positive Applications for Emotional AI
Of course, it’s okay to be excited by the ways more nuanced emotional and psychological data could be used to enhance AI, particularly as it relates to our health and well-being.
For instance, certain conditions produce biometric “signs” before an event (like a stroke), and the ability to act more quickly could save your life. EAI could also better the lives of those with sensory processing disorders, or other conditions where it is harder to communicate verbally. A pioneering researcher in the space, MIT professor Rosalind Picard, has gone a step further by developing wearables used by patients with conditions such as epilepsy.
There are also ways in which EAI can help us avoid hazards and obstacles by improving response time. Affectiva, a pioneer in the field, has already developed a tool to determine if a driver is distracted or tired before they might even realize it. Smart devices could also help you resist eating certain foods while on a diet or encourage good behavior such as brushing your teeth. A technology that “knows us” could certainly reinforce the behaviors we want reinforced, provided we have consent and control over the input and output. And that could be a great thing.
The Bad: Algorithms Already Work Us & Poor Execution of Tools
Of course, we are already faced daily with a barrage of digital cues (mainly via social media) intended to elicit a reaction that prompts an action (e.g., tweeting in response to content that angers you). That has been a subject of great controversy, especially as it relates to kids.
So it is fair to ask whether advertisers would be more likely to manipulate our emotions to further stir the pot and generate revenue (it’s almost a rhetorical question, of course).
EAI has also been embraced in the workplace, with mixed results. The company HireVue supplied technology to help employers evaluate job candidates, and was then, unsurprisingly, excoriated for having introduced bias via these tools.
It’s downright shocking how many companies are jumping in and experimenting with EAI despite little-to-no consensus around its efficacy, nor any agreed-upon standard for evaluating its output.
The Ugly: Combined with Biometrics, Stalking and Coercion
Unfortunately, a messy rollout or biased application of EAI isn’t as bad as it gets, either. When you move farther down the spectrum and look at communist countries such as China, for instance, we are now talking about the control of human beings.
A couple of years ago there were reports of the Chinese government using Uyghurs as test subjects for emotional AI applications, and more recently news of officials putting sensors into the hard hats of workers to monitor their brain waves, using EAI to monitor workers’ “positivity levels” and, finally, to assess the learning patterns of students.
Unfortunately, the impact is not just domestic either, as Chinese companies unveil chatbot applications with an “emotional response” built in, and cook up other ways to turn technology built on the surveillance of their own people on unsuspecting Americans. We can expect a lot more too, as proposed US regulation remains chaotic and unresolved.
How to Counter EAI: Create Friction
With untested innovation coming fast and furious, it’s easy to throw up your hands and just let it happen. But some of this innovation is good, even great, so instead we should consider “creating friction.”
This simply means: slow down, ask questions, don’t always check that opt-in button, and treat your data like the valued resource it is. It’s also important to remember that nothing is free. In fact, “free” in technology speak often means the data you are giving up is worth more than the service you are receiving in return.
Bigger picture, and longer term, we also need government agencies in place to smartly regulate AI tools and services without hindering innovation or infringing on consumer rights.
Unfortunately, the messaging to date from the US government has been muddled, stoking fears that innovation is indeed being throttled, or working consumers up into feeling as if their right to free speech is at risk.
But if we shift our thinking toward creating something that resembles the agencies that protect us from dangerous food, products, and chemicals, and that even eschews products (e.g., data) procured in unethical ways (e.g., Chinese surveillance), we’ll start to move in the right direction.
But, for today, we can help ourselves by paying attention, keeping up with what is coming down the pike, talking to one another, and acting like the shareholders we all are in this grand experiment ahead.