
An analysis of election candidates' profile photos and personal statements using cognitive algorithms. Does happiness matter? Can it be faked?
In January, I took part in the 24-hour Hacking Happiness hackathon organised by the Digital Catapult in London. I pitched an idea to examine a data set from a recent election and uncover whether or not happiness had an effect on winning. Four collaborators joined me in playing with the data for insights into detecting and measuring happiness: a huge thanks to Arash Pessian (WikiLens App), Billiejo Charlton (Brighton University), Lisa Johnson-Davies (Leeds University) and Samantha Ahern (University College London).
The following is a short summary of some of the findings. This was a very short and intense burst of data analysis as a team, and was meant to be fun as well as useful. It is intended only to provoke some thoughts and ideas about how sentiment can (or can’t) be detected in digital content.
First up, it was a hackathon so the team needed a name. Hat-tip to Arash for suggesting ‘FaceVote’.
The data
The data set consisted of information about people nominated for the role of Police and Crime Commissioner (PCC) in England – their profile photos and personal statements, along with the results of the election.
Elections were held in 2016 for each of the police authorities, excluding the larger ones such as the Metropolitan Police Service covering London and the Greater Manchester police. That left 168 candidates nominated for election in 36 areas, so a candidate had roughly a 21% chance of winning (36 winners out of 168 candidates, or 36/168 ≈ 0.214). The true baseline would depend on the area, as different areas had different numbers of candidates, but for speed and simplicity we used the general baseline of 0.214 as the conversion rate for comparing positive and negative profiles.
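For anyone checking the arithmetic, the general baseline falls out of a one-liner, while a candidate's true baseline is just one divided by the number of candidates in their area. A minimal sketch in Python – the per-area counts shown are made-up placeholders, not the real nomination figures:

```python
# General baseline: 36 areas each elect one winner, from 168 candidates in total.
winners, candidates = 36, 168
print(f"general baseline: {winners / candidates:.3f}")  # 0.214

# The true baseline for a candidate is 1 / (candidates in their area).
# These counts are made-up placeholders, not the real figures.
candidates_per_area = {"Area A": 4, "Area B": 5, "Area C": 3}
for area, n in candidates_per_area.items():
    print(f"{area}: baseline {1 / n:.3f}")
```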
All the data about the candidates is available (at the time of writing this article) on the PCC web site Choose my PCC. The election results can be found on the Association of Police and Crime Commissioners web site.
To avoid any copyright issues, I will not reproduce any photos here.
The tools
For detecting, analysing and manipulating emotions, the following tools were used:
- Microsoft Cognitive Services API – machine-based face and emotion detection in images
- LIWC Text Analytics – machine-based sentiment analysis in text
- Mechanical Turk – human-based analysis of images and text
- Liquify (included in Adobe Photoshop 2015.5) – human/machine manipulation of images
Does a happy face matter?
The first test was to determine if the expression in the profile photo and sentiment in the personal statement correlated with winning.
For the profile photos, Microsoft Cognitive Services was used to detect emotions, via the Emotion API. For a reminder of how the Emotion API works, see the related blog post: Computer Vision Accuracy. The Emotion API detects emotion from facial expressions, using the Emotional Facial Action Coding System (EMFACS) to score each face for its Happiness, Sadness, Surprise, Fear, Anger, Disgust, Contempt and Neutral expressions. The scores for a face should add up to 1.0.
All 168 faces registered their dominant scores for either Happiness or Neutral – either strongly (>0.66) in one of those two emotions, or split between them. Most faces scored less than 0.05 for every other emotion; the highest score for any other emotion was 0.15 for Sadness, by one individual.
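The Emotion API has since been folded into Microsoft's Face API, but at the time a scoring call looked roughly like the sketch below. The endpoint region, subscription key and file path are placeholders, not our actual setup:

```python
import requests

# Placeholders: substitute your own region and subscription key.
ENDPOINT = "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize"
KEY = "YOUR_SUBSCRIPTION_KEY"

def score_face(image_path):
    """Send a profile photo to the Emotion API and return its emotion scores."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            ENDPOINT,
            headers={
                "Ocp-Apim-Subscription-Key": KEY,
                "Content-Type": "application/octet-stream",
            },
            data=f.read(),
        )
    resp.raise_for_status()
    faces = resp.json()        # one entry per detected face
    # e.g. {"happiness": 0.93, "neutral": 0.05, "sadness": 0.01, ...}
    return faces[0]["scores"]

def dominant_emotion(scores):
    """Label a face by its highest-scoring emotion."""
    return max(scores, key=scores.get)
```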
So did smiling outperform a neutral expression?
Short answer: No.
Smiling was the most popular choice of profile photo – more than two-thirds of candidates opted for it. But the conversion rate from nomination to winner was similar regardless of expression: just over 20% of smiling candidates and just over 20% of neutral-faced candidates were elected.
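Once each face has a dominant expression, the comparison reduces to a conversion rate per group. A minimal sketch, assuming a list of (expression, won) pairs built by joining the API output with the published results – the records below are invented, not the real data:

```python
from collections import Counter

# Hypothetical (dominant expression, won election?) pairs.
records = [("happiness", True), ("happiness", False), ("neutral", False),
           ("neutral", True), ("happiness", False)]

nominated = Counter(expr for expr, _ in records)
elected = Counter(expr for expr, won in records if won)

for expr, total in nominated.items():
    print(f"{expr}: conversion rate {elected[expr] / total:.3f}")
```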
Does a positive personal statement matter?
The second test was to explore the use of emotion in the personal statement each candidate provided. Billiejo took charge on this analysis, using the LIWC Text Analytics tool.
The following image shows two samples, with identifying remarks removed or anonymised. They give an indication of how LIWC would detect positive and negative terms.
The statements averaged 300 words in length and most contained a mix of positive and negative emotions. The total score was simply the positive score minus the negative score: above 0 meant the statement was marked as positive overall, below 0 meant it was marked as negative overall.
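LIWC reports its category scores (the posemo and negemo columns in its export) as percentages of the words in each statement, so the net score and label reduce to a few lines. A sketch assuming a LIWC2015-style CSV export – column names may differ between LIWC versions:

```python
import csv

def net_sentiment(liwc_csv):
    """Yield (statement id, net score, label) for each row of a LIWC export.

    Assumes LIWC2015-style column names ('posemo' and 'negemo', each a
    percentage of words); other LIWC versions may name these differently.
    """
    with open(liwc_csv, newline="") as f:
        for row in csv.DictReader(f):
            total = float(row["posemo"]) - float(row["negemo"])
            label = "positive" if total > 0 else "negative"
            yield row["Filename"], total, label
```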
So, do positive statements outperform negative statements?
Short answer: Yes.
As with the photos, the majority of candidates opted for positive statements; only 12.5% went with the fear factor. It was not a winning strategy: just one pessimistic candidate got elected. Whilst positive statements exceeded the baseline conversion rate of 0.214, negative statements had a conversion rate of less than 0.05.
Did political alliance matter?
An interesting aspect of the PCC elections is that the role is supposed to provide independent oversight of the police, yet candidates could stand for a political party (or declare themselves independent). We wondered if politics mattered…
First of all, how many of each party were elected?
Well, the Conservatives were the most successful. More than half of their nominated candidates won. Labour came second. A handful of independents won and the rest all failed.
Looking at just the Conservative and Labour nominations, we wondered if there was any political distinction in the use of positive or negative phrases in personal statements. Billiejo plotted the positive and negative scores for each candidate, with Conservatives coloured blue and Labour coloured red.
Short answer: Possibly…
The better answer would be to calculate the difference statistically, but just looking at the scatter plot, there is some visible separation between the parties. Whilst there is some overlap and there are outliers, the Labour nominees tended to score higher for negative and lower for positive words, whilst the Conservative nominees tended to score higher for positive and lower for negative words.
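For anyone wanting to reproduce that kind of plot, a minimal matplotlib sketch – the scores below are invented placeholders rather than the real LIWC values:

```python
import matplotlib.pyplot as plt

# Invented placeholder scores; the real values came from the LIWC analysis.
conservative = [(4.1, 0.8), (3.6, 1.2), (4.8, 0.5)]  # (positive, negative)
labour = [(2.9, 1.9), (3.1, 2.4), (2.5, 1.6)]

for points, colour, label in [(conservative, "blue", "Conservative"),
                              (labour, "red", "Labour")]:
    xs, ys = zip(*points)
    plt.scatter(xs, ys, c=colour, label=label)

plt.xlabel("Positive score (posemo)")
plt.ylabel("Negative score (negemo)")
plt.legend()
plt.show()
```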
Can machine-learning algorithms be tricked?
Moving back to examining the faces, Arash had a great idea – would manipulating the expressions in the photos alter voting choice? This opens up two possibilities. First, can the machine-based algorithms be tricked by faked photos? Second, do humans change their preferences when presented with different expressions?
We selected a face with a very high neutral score (99.7% neutral) and altered the expression using the new Liquify tool available in Adobe Photoshop. We attempted to create the different emotions being detected: happy, sad, angry and disgusted.
So, was the Microsoft Emotion API fooled?
Short answer: No.
The scores for the modified images ranged from 88.7% to 99.4% Neutral. The best trick we pulled was the least convincing to the human eye: an artificial, joker-like smile that picked up 10% Happiness. The one our team voted the most believable smile had the least effect, scoring 99.4% Neutral compared to the original image’s 99.7%.
To be fair, with a bit more time and experimentation with the Liquify tool, we might have managed more trickery, given that the tool includes the ability to alter other aspects of the face. But within our limited time window, we had one more trick to try.
Would people vote differently based on facial expression?
The last test was to see if humans could be tricked into voting differently based on modified photos. Using Amazon’s Mechanical Turk, we posted a question: which person would you vote for? In the first pairing, we presented a modified happy face against a different, neutral face. In the second, we presented a modified sad face against the same neutral face.
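We set the question up through the Mechanical Turk web interface in a hurry, but for anyone wanting to script this, the boto3 MTurk client can post a HIT along the lines below. The endpoint, reward and HTML body are illustrative; a real HIT would embed the two candidate photos and wire the worker’s choice back to MTurk via the standard external-submit form:

```python
import boto3

# Sandbox endpoint, so test HITs cost nothing; swap for the live endpoint to publish.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# Minimal HTMLQuestion payload; photos and the answer form are omitted here.
question_xml = """<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <!DOCTYPE html>
    <html><body>
      <p>Which person would you vote for?</p>
      <!-- candidate photos and the answer form would go here -->
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>"""

hit = mturk.create_hit(
    Title="Which person would you vote for?",
    Description="Choose between two candidate profile photos.",
    Keywords="survey, voting, photos",
    Reward="0.05",
    MaxAssignments=50,
    LifetimeInSeconds=86400,          # HIT available for one day
    AssignmentDurationInSeconds=300,  # five minutes per response
    Question=question_xml,
)
print(hit["HIT"]["HITId"])
```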
So were people influenced in ways the machines were not?
Short answer: Yes….
…but before this post goes viral and launches our media careers in the world of cognitive analytics :-), a caveat: the results were very weak. Due to the time limits of a hackathon, and delays getting Mechanical Turk up and running, we only had a very small sample of respondents. So it was a fun result, but not a particularly scientific one. It did, however, provide some food for thought and ideas to explore further beyond the hackathon.
With the rise in the deliberate generation and posting of fake news, for commercial and/or political reasons, our ability to sift through misdirections – the kind that may delight at a magic show but have more serious consequences in real life – is becoming more important than ever. We are not only creating tools to detect emotions, we are creating tools to fake them too. And the machines are likely to be better than us at telling the difference…
The following video summarises recent research into facial reenactment, with researchers taking control of prominent politicians’ facial expressions:
and here’s the source for that video, outlining the methods used – Face2Face: Real-time Face Capture and Reenactment, by researchers at the University of Erlangen-Nuremberg, the Max Planck Institute for Informatics and Stanford University:
If you found this post interesting, you might also be interested in an earlier test of Microsoft’s emotion algorithm – Computer Vision Accuracy – and a look at why the Facebook experiment to manipulate emotions by altering news feeds was ethically wrong – Design = Manipulation.