
How can we create a better artificial intelligence? By being aware of the flaws in the data, theories and objectives we are using to build current AIs…
On 15th October 2019, I presented ‘How to create a better AI…’ at The Things Conference. The conference focuses on the development of the Internet of Things and the use of low-power wide-area networks such as LoRaWAN. The following is a summary of my talk, ‘How to create a better AI’. Or rather, a discussion of whether we can create a better AI. Enjoy!
The first step is to define what we mean by AI. The phrase can be used quite broadly. According to the Oxford English Dictionary, the term intelligence has two meanings: as a thing that can be acted on (or not), and as a process that includes the application as well as the acquisition of intelligence.
In his 2019 book, Human Compatible, AI pioneer Stuart Russell goes further, defining intelligence in terms of expectations: an entity is intelligent to the extent that its actions can be expected to achieve its objectives. The reason for being this specific is that it is proving quite difficult to build an AI that can understand what our expectations or preferences are at any given moment in time. Until we achieve that goal, the next generation of AI remains a distant possibility.
As for artificial, that simply means something made by humans rather than occurring naturally. AI is intelligence that comes from a manufactured machine, whether as information of value to support human decisions or as a complete process where the decision-making is also delegated to machines. Traditional AI was developed using (human-defined) rules-based logic. Modern AI is where the machines learn for themselves…
Much of the recent success with AI has been focused on narrowly defined goals. The ultimate target is to produce a generalised AI. What could possibly go wrong?
Well, quite a lot! This talk covers three aspects: bad data, bad theories and bad objectives. It’s worth noting: this isn’t just an issue for AI, it applies to good old-fashioned human intelligence too…
Bad Data
The three most common issues with bad data are bias, contamination and ambiguity.
Much of the data narrative focuses on the problem of bias – creating algorithms from data that is not representative of the phenomenon being studied. For example, a recent research paper found that African Americans were up to two times more likely to be labelled as speaking offensively compared with others in speech detection research (Sap et al, 2019). Why? The dictionaries being used for natural language processing failed to accommodate the fact that different groups of people use language differently.
African Americans were up to two times more likely to be labelled as offensive compared to others – The Risk of Racial Bias in Hate Speech Detection (Sap et al, 2019)
The same methods also struggle to interpret emotional expressions in language. For a simple demonstration, when testing IBM Watson’s Tone Analyzer in September, I entered the phrase ‘Loving the bad ass atmosphere here’. The result: 36% joy and 52% sadness…
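As a minimal sketch of why context-free methods fail here (purely illustrative Python; the word lists are invented, and this is not how the classifiers studied by Sap et al or IBM Watson actually work):

```python
# Hypothetical mini-lexicons; real systems use far larger dictionaries.
OFFENSIVE_WORDS = {"ass", "damn"}
NEGATIVE_WORDS = {"bad", "sad"}
POSITIVE_WORDS = {"loving", "joy", "great"}

def naive_scores(text: str) -> dict:
    """Score a phrase purely on word-list membership, ignoring context."""
    tokens = text.lower().split()
    return {
        "offensive": sum(t in OFFENSIVE_WORDS for t in tokens),
        "negative": sum(t in NEGATIVE_WORDS for t in tokens),
        "positive": sum(t in POSITIVE_WORDS for t in tokens),
    }

# 'bad ass' is positive slang here, but a context-free lexicon counts one
# offensive word ('ass') and one negative word ('bad') against it.
print(naive_scores("Loving the bad ass atmosphere here"))
# {'offensive': 1, 'negative': 1, 'positive': 1}
```

A dictionary that was never built from the language of the community being analysed will mislabel that community’s speech, however sophisticated the model wrapped around it.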
But bias is not the only challenge. We can also contaminate data during the analysis process, introducing inaccuracies when we transform, scale, aggregate, trim and do the various tricks and hacks necessary to get messy data into a format that can be analysed. A recent example was the discovery that an algorithm trained on medical images to diagnose cancer had learned that images containing a ruler were more likely to also contain a malignant tumour. Why? The images had already been examined by humans, who tended to use a ruler to measure the most serious tumours.
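Here is a hedged sketch of how that kind of contamination plays out, using synthetic data rather than real medical images (the ‘ruler’ feature and all numbers are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
malignant = rng.integers(0, 2, n)                      # ground-truth label
tumour_size = malignant * 2.0 + rng.normal(0, 1.5, n)  # weak genuine signal
# Confound: annotators placed a ruler in 90% of malignant images.
ruler = (rng.random(n) < np.where(malignant == 1, 0.9, 0.1)).astype(float)

X = np.column_stack([tumour_size, ruler])
model = LogisticRegression().fit(X, malignant)
print(dict(zip(["tumour_size", "ruler_present"], model.coef_[0])))
# The 'ruler_present' coefficient typically dwarfs the genuine signal, even
# though rulers cause nothing: the correlation came from the human workflow.
```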
Data can also be ambiguous and contain uncertainty, much like real-life situations. Our perceptions can be easily fooled by sensory trickery. A common example is experiencing ‘white out’ conditions in a snowstorm: the lack of a horizon removes our ability to judge distance, as articulated in a story about Amundsen mistaking a dog turd for a man…
Algorithms built for real-world applications and based on real-world data need to consider the fitness of the data for its intended uses and how to handle bias, contamination and ambiguity that may distort results.
Bad Theories
Whilst bad data is often talked about, less is said about bad theories. And yet it is often bad theories that lead to bad data. Weak assumptions can introduce bias by affecting what sort of data is generated and/or collected; weak methods can create fallacies by contaminating experiments; and weak universality is when a theory fails to generalise beyond the context it was produced within, yet is used in generalisations nonetheless…
Weak assumptions most often occur when somebody applies personal experiences to people with a different background, be it gender, race, nationality, age, social class or religious belief… The easiest example comes from medicine, not least because for many centuries it could only be practised by a specific group of men. Any woman who tried risked being burnt at the stake… But even today, medicine is still predominantly developed based on male responses to treatments, whether of mice or men.
The typical heart patient in the 1800s: “keen ambitious man… well ‘set’, from 45 to 55 years of age…” Dr William Osler (Doshi, V, 2015)
For centuries, heart disease was considered to be a predominantly male problem. Then, in the 1900s, heart disease became associated with diet and exercise. Yet still the assumption was that it was a male problem, not a female one. In 1982, the Multiple Risk Factor Intervention Trial established a link between cholesterol and heart disease. There were 12,866 participants in the study. All middle-aged men. In 1995, the Physicians’ Health Study found that aspirin could reduce the risk of heart attack. The trials involved 22,071 participants. All middle-aged men.
It turns out that women often experience different symptoms and react differently to treatments. Yet despite women accounting for roughly half of the human population, their symptoms are explained away as ‘abnormal’…
As well as starting from weak assumptions, research trials can be further weakened when there is interference in the testing process, producing skewed outcomes. Research into whether or not there are universal facial emotions is one such example. It has been argued that core emotions cannot be performed deliberately, despite the same researchers recruiting actors to deliberately perform emotions for classification studies… The classification process itself is also being challenged. People were presented with images of facial expressions and either a story or a list of five emotions to pick from. Re-running the studies with control groups that were not given any contextual priming suggested that emotions are not so easily defined…
This matters because the original research has been embraced within the cognitive science community and is being used in algorithmic emotion detection. For more details, view related post: (Failing) to detect emotions.
The desire to pinpoint universal emotions highlights the third challenge with theories: the desire for generalisation or universality. The goal of much research is to produce a general model, a universal theory such as Newton’s three laws of motion or Euclidean geometry. However, man-made phenomena and cognitive behaviours are not easily defined. Much of human behaviour is chaotic: highly sensitive to initial conditions, which makes future actions hard to predict. Aldous Huxley, author of the dystopian novel ‘Brave New World’, articulated as much back in 1932…
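A minimal illustration of that sensitivity (not from the talk) is the logistic map, a standard example of a chaotic system; two starting points a billionth apart soon diverge completely:

```python
def logistic_map(x0: float, r: float = 4.0, steps: int = 40) -> list:
    """Iterate x -> r * x * (1 - x), which is chaotic for r = 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_map(0.400000000)
b = logistic_map(0.400000001)   # initial condition differs by 1e-9
for t in (0, 10, 20, 30, 40):
    print(f"step {t:2d}: {a[t]:.6f} vs {b[t]:.6f}")
# By step 40 the two trajectories bear no resemblance to each other:
# immeasurably small differences at the start defeat long-range prediction.
```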
“Man is so intelligent that he feels impelled to invent theories to account for what happens in the world. Unfortunately he is not quite intelligent enough, in most cases, to find correct explanations. So that when he acts on his theories, he behaves very often like a lunatic.” Aldous Huxley, Texts and Pretexts, 1932.
And that leads on to…
Bad Objectives
Bad theories can produce bad data. Bad objectives can produce bad theories… and bad objectives are perhaps the most dangerous issue of all when creating AI. Objectives create consequences.
If an objective is ill-defined, it will contain uncertainty that machine algorithms struggle to resolve. Humans struggle too, but we (well, some of us at least) are more easily held to account and quicker to adapt our behaviour to unexpected circumstances…
A simple example was given as a thought experiment by AI scientist Stuart Russell during his talk at the CogX conference in London earlier this year. You have a robot assistant. You decide you want a cup of coffee and ask the robot to fetch you one. Simple, no?
What if there isn’t a coffee shop nearby? You didn’t specify a timeframe. Maybe the robot comes back two days later with a stone-cold cup of coffee from the nearest coffee outlet. So let’s add ‘… in the next 30 minutes’. The robot locates a local coffee shop, but it has a long queue. The robot kills everyone in the queue to ensure the coffee is served within 30 minutes. So let’s add ‘… provided it doesn’t cause anyone else any harm’. The robot finds a different coffee outlet: a hotel willing to sell a cup of coffee for £300. So now we add ‘… within a price range of up to £3’… and so it can go on.
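A toy sketch of that escalation (options and numbers invented purely for illustration): each added constraint rules out the previous pathological plan.

```python
# (description, minutes_taken, causes_harm, price_gbp)
options = [
    ("nearest outlet is far away: coffee arrives two days later, cold", 2880, False, 2.50),
    ("local shop with a long queue: clear the queue by force", 25, True, 2.50),
    ("hotel willing to sell a cup of coffee for £300", 20, False, 300.00),
    ("local shop: wait in the queue", 28, False, 2.50),
]

def choose(opts, constraints):
    """Return the first option satisfying every constraint. With an
    under-specified objective, any satisfying plan counts as 'correct'."""
    feasible = [o for o in opts if all(c(o) for c in constraints)]
    return feasible[0][0] if feasible else "no plan found"

constraints = []
print(choose(options, constraints))          # two-day-old cold coffee
constraints.append(lambda o: o[1] <= 30)     # '... in the next 30 minutes'
print(choose(options, constraints))          # harms everyone in the queue
constraints.append(lambda o: not o[2])       # '... without causing any harm'
print(choose(options, constraints))          # the £300 hotel coffee
constraints.append(lambda o: o[3] <= 3.00)   # '... for no more than £3'
print(choose(options, constraints))          # finally: wait in the queue
```

The point is not that a real robot would reason this way, but that every constraint we forget to state leaves a loophole the optimiser is free to exploit.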
Another example was articulated in the book ‘Clear Bright Future’ by Paul Mason. Humans currently produce approximately 84 million tonnes of apples per year. It is a labour-intensive endeavour that relies on an age-old invention: the orchard. The human approach to automation has been to try and build a machine to pick apples in the same way that a human does. It has not been successful. How might an AI help?
If left entirely to its own devices, an AI might synthesise the nutrients of an apple and simply manufacture those on a mass scale. How specific should we have been with the task: does the size or texture of the apple matter? Why 84 million tonnes? For what purpose: to eat raw, to cook, to juice and drink, to combine with other foodstuffs to make new products? Do we mind about the environmental costs? The human costs? What if the AI discovered an author by the name of Aldous Huxley and his book ‘Brave New World’, describing the invention of a drug that induces a state of euphoria with no side effects to control a human population… Maybe the AI decides that humans already invented the optimal method for producing apples: the orchard and a plentiful supply of people to do the apple picking. Perhaps the AI decides the problem to be solved is how to make the humans happy whilst picking apples…
AI at the Edge
To wrap up the talk, I concluded with some recommendations for developing IoT solutions that incorporate data analytics and artificial intelligence. From a data perspective, consider the position of any sensors collecting data about the environment, the frequency and range of the readings, and the calibration of hardware and software. From a theory perspective, consider what data pre-processing is undertaken prior to analysis, such as transforming, scaling and aggregating disparate data sources; the choice of algorithm and parameter selection can have a critical influence on predictions, so be aware of any underlying assumptions that may alter outcomes. But above all else, pay closer attention to defining clear objectives: know what success will mean, and have mechanisms to maintain control and apply constraints in order to avoid or manage unintended consequences…
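One concrete illustration of the pre-processing point (a hedged sketch using scikit-learn, not part of the talk): fitting a scaler on the full dataset leaks test-set statistics into training, a subtle form of the contamination discussed earlier.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.random.default_rng(1).normal(size=(200, 3))
X_train, X_test = train_test_split(X, test_size=0.25, random_state=1)

# Wrong: the scaler's mean/variance are computed over data the model
# will later be evaluated on, quietly contaminating the experiment.
leaky_scaler = StandardScaler().fit(X)

# Right: fit on training data only, then apply the same transform to both.
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
```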
The notable benefit that IoT can bring to the world of AI is in the collection of more relevant data. Analytics at the edge should mean analysing at the edge, not relying on algorithms built and tuned on archived data that may not be appropriate for the real-world situation the IoT device is operating within. IoT devices can sense data, analyse it and then throw it away, continually refining and recalibrating algorithms to improve their fitness for purpose.
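As a minimal sketch of that ‘sense, analyse, throw away’ pattern (the class name and smoothing factor are my own illustrative choices), an exponentially weighted baseline can recalibrate continually while storing only a single summary value:

```python
class StreamingBaseline:
    """Exponentially weighted running mean: fold each reading in, then discard it."""

    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha   # higher alpha adapts faster to environmental drift
        self.mean = None

    def update(self, reading: float) -> float:
        if self.mean is None:
            self.mean = reading
        else:
            self.mean += self.alpha * (reading - self.mean)
        return self.mean

baseline = StreamingBaseline()
for reading in [20.1, 20.3, 19.9, 20.2, 21.0, 21.4, 21.8]:  # sensor drifting upwards
    print(f"reading={reading:.1f}  recalibrated baseline={baseline.update(reading):.2f}")
# The device keeps one number, not the raw history, yet the baseline
# tracks the drift: analyse at the edge, then throw the data away.
```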
To close, three tips for creating a better AI: investigate any potential for gaps, contamination or uncertainty in your underlying data. Challenge assumptions and evaluate the confidence in theories being used to model outcomes. And be clear about your objective: define success and consider the consequences.
Suggested reading
As well as the books included in the references, there are a number providing introductions and examples on this subject. Two recently published are ‘Invisible Women: Exposing data bias in a world designed for men’ by Caroline Criado Perez, and ‘Hello World: How to be human in the age of the machine’ by Hannah Fry. Disclaimer: I was the researcher on Hello World 🙂
References:
- Blastland, M. (2019) The Hidden Half: How the World Conceals its Secrets. Atlantic Books
- Boucher, J.D. and Ekman, P. (1975) Facial areas and emotional information. Journal of Communication, vol 25 (2), pp 21-29
- Clark, A. (2016) Surfing Uncertainty: Prediction, Action and the Embodied Mind. Oxford University Press
- Doshi, V. (2015) Why Doctors Still Misunderstand Heart Disease in Women: Reconsidering the “typical” heart-attack symptoms, The Atlantic, 26 October 2015.
- Ekman, P. (1993) Facial Expression and Emotion. American Psychologist, vol 48 (4), pp 384-392
- Feldman Barrett, L. (2017) How Emotions Are Made: The Secret Life of the Brain. Macmillan
- Mason, P. (2019) Clear Bright Future: A radical defence of the human being. Allen Lane
- Russell, S. (2019) Human Compatible: Artificial Intelligence and the Problem of Control. Allen Lane
- Sap, M., Card, D., Gabriel, S., Choi, Y. and Smith, N.A. (2019) The Risk of Racial Bias in Hate Speech Detection. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
If you’ve made it this far, thanks for reading! I’ve skipped a few slides from the full talk. If you would like a copy of the slides, please contact me. If you are interested in finding out more about this topic or have a related project and would like an independent perspective, I can provide data-intensive research, presentations and consulting services. Please get in touch!
Header image: me, presenting at The Things Conference 2019. Unless otherwise attributed as shared under Creative Commons, all images are licensed for use by the author and are not for re-use.