On 2nd July 2021, I gave a talk for The Open Data Institute about whether or not an artificial intelligence can detect emotion.

The event was recorded and is embedded below (could YouTube have picked a worse freeze frame? …hmmm, quite possibly given my natural resting face 🙂 ). The talk lasted around 25 minutes and was followed by some fantastic questions and discussions with attendees.

I have written about this subject before, and links are included in the references below. This talk focused on positioning emotion detection within the field of affective computing, and then looked at two key challenges to creating an AI that can reliably detect emotion: first, issues with how machines learn from data and how easily that learning can go wrong; and, second, issues with the underlying scientific theories for interpreting emotions expressed visually and through language. And that is before we start considering the moral and ethical implications of deploying emotion detection in real-world situations. But it didn’t hurt to open the talk with the recent news that Canon in China had deployed a camera and algorithm requiring staff to smile to gain access to meeting rooms… it is far from the first example, but seriously?

So the short version: humans aren’t very good at detecting emotion. Which means it is asking a lot to expect an AI to do any better. We lack scientific consensus on what an emotion is, what range of emotions exists, and how to categorise them. Whilst there may be some basic common expressions, the majority we recognise are neither universal nor discrete. A facial expression can represent elation or rage, and without context it is not easy to pick which. We use emotions to communicate, to express how we feel, and to hide how we feel. Some people are lucky enough to have a naturally pleasant-looking resting face. Most of us are more like Jeremy Renner. Forcing people to smile to make a ‘happy’ workplace is likely to create the opposite. And this should be no surprise to anyone with any shred of empathy.


Emphasising just how messy the world of human experience is, a few days after the talk, an article appeared in The Guardian – So happy to see you: our brains respond emotionally to faces we find in inanimate objects, study reveals. Face pareidolia – seeing faces in random objects or patterns of light and shadow – used to be considered a symptom of psychosis. It is now recognised to be an everyday phenomenon, arising from an error in visual perception. Pretty much every human being does it. We spot faces and shapes in clouds and in the design of everyday things. The latter is often deliberate, a manipulation. I once attended a talk by the head of design for BMW describing how they chose the ‘smile’ for the front of the redesigned Mini.

So here’s the rub. We train computers by providing them with the examples to learn from. And when it comes to detecting human faces and emotions, we provide them with lots of images of human faces expressing different emotions. Yet we see faces everywhere. What does the computer see?
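To make the training problem concrete, here is a deliberately toy sketch of the kind of supervised classifier the talk alludes to. The feature names (mouth curvature, eyebrow raise) and all the numbers are invented for illustration; real emotion-detection systems learn from raw pixel data with deep neural networks, not two hand-picked features. The point it demonstrates stands either way: the model must output a label even for an expression that is genuinely ambiguous.

```python
# Toy nearest-centroid "emotion classifier" on made-up 2-D features
# (mouth curvature, eyebrow raise). Everything here is illustrative,
# not how a production system works.

def centroid(points):
    """Mean of a list of equal-length feature tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(x, centroids):
    # Pick the label whose centroid is closest (squared Euclidean distance)
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2 for a, b in zip(x, centroids[lbl])))

# Hand-labelled "training images", reduced to feature pairs
training = {
    "happy": [(0.9, 0.2), (0.8, 0.3), (0.85, 0.25)],   # upturned mouth, relaxed brows
    "angry": [(0.1, 0.9), (0.15, 0.85), (0.2, 0.8)],   # flat mouth, raised brows
}
centroids = {label: centroid(pts) for label, pts in training.items()}

# An ambiguous expression: wide grin *and* raised brows (elation? rage?)
ambiguous = (0.85, 0.85)
print(classify(ambiguous, centroids))  # the model still emits a single confident label
```

The classifier has no notion of context or of “I don’t know”: whatever it sees, including a face found in a cloud or the front of a Mini, it will map to the nearest learned category.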

References and further reading

  • Alais, D. et al. (2021) A shared mechanism for facial expression in human faces and face pareidolia, Proceedings of the Royal Society B, 20210966. http://doi.org/10.1098/rspb.2021.0966
  • Blastland, M. (2019) The Hidden Half: How the World Conceals its Secrets, Atlantic Books
  • Feldman Barrett, L. (2018) How Emotions Are Made: The Secret Life of the Brain, Pan Books
  • Fry, H. (2018) Hello World: How to be Human in the Age of the Machine, published by Doubleday
  • Izard, C E. (2010) The Many Meanings/Aspects of Emotion: Definitions, Functions, Activation, and Regulation, Emotion Review 2(4): 363-370
  • Lu, D. (2021) So happy to see you: our brains respond emotionally to faces we find in inanimate objects, study reveals, published 7 July 2021, The Guardian
  • Nettle, D. (2005) Happiness: The Science behind your smile, Oxford University Press
  • Patel, N V. (2017) Why Doctors Aren’t Afraid of a Better More Efficient AI Diagnosing Cancer, published 22 December 2017, The Daily Beast
  • Picard, R. (1995) Affective Computing, MIT Media Laboratory Perceptual Computing Section Technical Report No. 321
  • Richardson, S. (2020) Affective computing in the modern workplace, Business Information Review, 37(2): 78-85
  • Rutherford, A and Fry, H. (2021) Rutherford and Fry’s Complete Guide to Absolutely Everything (Abridged), published by Bantam Press
  • Sap, M. et al. (2019) The Risk of Racial Bias in Hate Speech Detection, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 1668–1678, Florence, Italy
  • Zhang, M. (2021) Canon Uses AI Cameras That Only Let Smiling Workers Inside Offices, published June 17, 2021, PetaPixel

Featured image: ‘And the living was easy’ by Thomas Hawk, CC BY-NC 2.0

