Affective computing and the emotion economy: "Trust" takes on a whole new meaning

Rise of the Emotion Economy

The futurist Richard Yonck coined the term “emotion economy” to describe the network of emotionally intelligent devices that is revolutionizing the technological world as we know it – and with it, the way we conduct business.  Emotional data is becoming an essential tool for creating more effective advertising and forging stronger brand loyalty than ever before.  A market research report released in October estimated that the global affective computing market would grow at a CAGR of 28.46% between 2018 and 2026, reaching a value in the tens of billions of dollars by 2021.

Data is already increasingly emotion-focused.  This not only appeals to the modern user’s desire to express how they feel about absolutely everything, it also provides valuable data to corporations like Facebook.  The emotion economy seeks to take this one step further by creating AI that can respond to emotional feedback – and, in doing so, win loyal customers who view the devices as friends, not just technology.  The fledgling company Centiment is already tapping into this trend by collecting affective user data and using it to create emotionally informed advertising.

Affective computing seeks to function as an artificial limbic system

Among the first to recognize the importance of emotion in machine decision-making was Rosalind Picard, author of Affective Computing and founding director of MIT’s Affective Computing Research Group.  Picard identified the primitive limbic system as key to influencing decisions made by higher brain regions.  She compared primitive AI to a camera lens, which takes in information without making sense of it.  In a human, it’s the limbic system that tells us which parts of the picture are important.

Since then, Picard has been investigating how to replicate the role of the limbic system in AI.  Emotion is difficult to translate into code, but scientists are learning to gather data on visual and other cues and use it to make increasingly informed inferences about people’s mental states.

Picard went on to co-found Affectiva, a company whose AI wearables were designed to help bridge the emotional gap for people with autism.  She had learned that autistic people were having trouble conveying their stress levels because they display different facial cues than other people, and her lab developed technology that used wearable sensors in clothing to pick up on alternative signals so that these emotions could be made clearer.  Picard also runs a company called Empatica, which produces a bracelet that can warn caregivers of a patient’s impending epileptic seizure.

Welcome to the future: Affective computing is already all around us

There are dozens of other examples of affective AI devices and progress is being made all the time – earlier this year, researchers claimed they could detect emotion based solely on facial color.

MiRo, a beta project by Consequential Robotics, is a platform for developing companion robots.  The robots have six senses and brains modeled after animals, with personalities to boot.  Not only do these robots provide company, they can also monitor your day-to-day life and remind you of things you’ve forgotten.

Another company in Israel, BeyondVerbal, claims to have developed AI that can understand when a person’s voice is angry, sad or lonely, as well as identify vocal biomarkers that are indicative of different health conditions.

Perhaps most astounding of all is the humanoid robot Pepper, which looks like something straight out of Star Wars.  Pepper can recognize basic human emotions and respond accordingly.  The robot is already being used as a visitors’ assistant at more than 2,000 companies around the world.

And in the EU, an AI-powered border control agent called iBorderCtrl will be piloted by the Hungarian National Police to test its efficacy as a lie detector.  iBorderCtrl will analyze 38 facial micro-gestures and single out people who appear to be lying.  The project has raised some red flags, especially since iBorderCtrl currently has only a 76% accuracy rate.

The ethics of affective computing: Are we going too fast?

An article published in Nature last year called for AI-powered neurotechnologies to be held to an ethical code that would preserve people’s privacy, identity, agency and equality.  The authors pointed out that although these technologies are ostensibly developed for the benefit of users, there is always the possibility of their being repurposed in ways they were never intended for.  The sheer amount of data and autonomy we entrust to AI technologies is something to be wary of.  Take the UK government, which is using artificial emotional intelligence to monitor people’s emotions on social media.  Isn’t that just a little Big Brother?

That’s not to mention the creepiness factor of having machines that purport to be capable of understanding our emotions and responding to them.  As Judith Shulevitz of The Atlantic points out, the implications of replacing real human interactions with technological facsimiles – especially in today’s world, where people are increasingly glued to their smartphones and spend less and less time talking to their neighbours – are, frankly, chilling.  Shulevitz notes, “even though my adult self knew perfectly well that Sweet dreams was a glitch, a part of me wanted to believe that Alexa meant it.”

In addition to the privacy implications of the reams of data we’re gifting these smart devices, the social and emotional implications of replacing friends with metal cylinders can already be seen in the many cases of people who have gotten so intimate with Amazon’s Alexa that they have expressed suicidal thoughts to their artificial friend.  Picard hastens to remind us that just because a machine has been programmed to respond appropriately to your emotions doesn’t mean it’s actually understood how you’re feeling or is able to empathize with you.  And meanwhile, we’re still not past the kind of glitches that caused Alexa to randomly burst into maniacal laughter.

What kind of responsibility are affective AI devices starting to shoulder, and what will they do with it?