This is an interview for IBM’s SpacePod podcast series, discussing the impact of data and technology on spatial intelligence, with thoughts about the effect the current COVID-19 pandemic is having on our relationship with space.
On 6 May 2020, I had the pleasure of being interviewed for IBM’s SpacePod podcast series, hosted by Paul Russell from IBM’s Watson IoT division and Paul Gatland from IBM’s AI Applications division. Paul R leads IBM’s strategy in Europe for smart buildings and workspace and Paul G is responsible for IBM’s Real Estate solution, IBM TRIRIGA and the new Smarter Buildings solution, Building Insights. So no surprises that this podcast series is all about ways to use data and technology to create smarter spaces.
The podcast is included at the end of this post. To keep it to a ten-minute limit, Paul R had to do some editing so I thought I’d post some fuller thoughts here, in the style of a transcript with some key quotes highlighted. Enjoy!
Q: I really want to explore new ways to think about space. We are fixated in our world about the physical space and how we use tech to manage it and this gets us into the whole IoT and workplace management side but I want to hear from you because I think you have a different perspective?
The overwhelming focus seems to be on ‘robotising’ human decisions, which isn’t always a healthy development.
Well, my interest has always been in the human side of technology, how it can be used to help us. The idea of computers in physical space was first envisaged by Mark Weiser of Xerox PARC nearly 30 years ago, when he famously predicted that technology would one day become ubiquitous and disappear into the background. We're just now starting to see that happen. And so we are seeing a lot of talk about smart buildings, smart transport, smart spaces and so on. But what do we mean by smart? We are generally referring to technology that can acquire and respond to data in real-time through the use of built-in sensors and actuators. It's an extension of what made a mobile phone become a smartphone.
But smart for whom? At the moment, there appears to be an overwhelming emphasis on monitoring, whether to extract more productivity from people in the workplace, to manipulate consumers through content targeting, or simply for surveillance. That can be used for health and safety, such as sensing when a person is experiencing fatigue in a potentially dangerous environment such as a construction site. But it can also be used to police or manipulate an environment. And that creates privacy and ethical concerns. Just this week, Tim Bray resigned as a VP at Amazon in protest at the treatment of warehouse workers, saying that whilst Amazon is "exceptionally well-managed and has demonstrated great skill at spotting opportunities… it has a corresponding lack of vision about the human costs…"
The great promise of ubiquitous computing and smart spaces was in creating personalised environments tailored to our individual wants and needs, to augment our intelligence, and the overwhelming focus seems to be on ‘robotising’ human decisions, which isn’t always a healthy development. Particularly given many AI-based solutions are immature or unproven, and we have a real lack of legal protections for people who may be impacted by AI-led decisions.
Q: When we think about the post COVID world and how we are going to get back to offices, airports and restaurants the knowledge of space – who, how, when, what, where – and the interpretation of what will be a real ‘big data’ output, what problems and opportunities can you foresee?
There are too many unknowns about the COVID-19 virus right now, at least publicly, to be able to predict any future scenario with confidence. If a cure or vaccine is found, and its threat eliminated, I wouldn’t be surprised if people quickly revert to life as before. Hope not. But a mentor once told me that we can either choose to change or change is forced upon us, and that most people don’t choose. It’s the classic start to the ’Hero’s Journey’ – the hero is given a challenge, rejects the challenge, some calamity befalls them and then they take up the challenge… and the story begins… So is Covid-19 the calamity? It just might be.
In the short term, assuming social distancing is still required, there is potential to use data to distribute demand for constrained spaces such as public transport and restaurants. Restaurants are understandably concerned about the viability of their business if they have to reduce the number of tables. But for those with peaky demand (lots of demand at weekends or in the early evening, quiet at other times), there's potential to spread that demand, particularly if working patterns also become more flexible.
People desire social contact but it may be that the work environment moves closer to home, shared spaces not with colleagues but simply with other people in the same neighbourhood where you live.
For the first time, we can now see what impact urban activities have on our daily environment, and in turn our health, because we have a lot of sensors monitoring things like air pollution. Whilst it obviously does not offset the death toll from Covid-19, which is awful, there is already research being published claiming that the drop in urban air pollution could be saving thousands of lives. It is possible that the data will provide a demonstrable case for a shift towards cleaner energy and transportation, and different working practices such as more remote working and demand-responsive services that reduce the congestion caused by peak commuting times.
If remote working becomes more established as part of 'business as usual', then office space will require a fundamental rethink, from redesigning interiors to reimagining what an office is. People desire social contact but it may be that the work environment moves closer to home, shared spaces not with colleagues but simply with other people in the same neighbourhood where you live, and fewer visits to a company office when direct interactions with colleagues are needed. Which may not be a daily requirement. That could have interesting consequences for the desirability of living in cities.
Virtual space is different from physical space, and we could perhaps be using sensors to design better interfaces and help with some of those challenges.
Remote working is highlighting the difficulties with virtual meetings. You can't simply put everyone in front of a webcam and replicate a physical meeting environment. Communication becomes harder: in physical proximity we pick up subtle cues, like shifts in body language, that aren't detectable when you are staring at a portrait photo of someone. Perhaps sensors could help with that and be incorporated into the software interface? Also, because you are literally staring at each other's faces, you are forced into a more constant attentive state, which is very tiring compared with the more typical partial attentive state most of us have during group face-to-face meetings. Virtual space is different from physical space, and we could perhaps be using sensors to design better interfaces and help with some of those challenges.
But, on a grander scale: in over 30 years of talks, there has been a huge struggle to achieve global consensus on climate change, in agreeing its causes let alone considering measures to tackle and/or accommodate it. Could Covid-19 be the tipping point towards making positive strides towards a more sustainable future for humankind? It would be some consolation for the hurt being felt by so many right now and for the socio-economic repercussions that are going to be felt over the coming years.
Q: Are we now reaching a tipping point in our understanding of space where the role of software and sensors is going to have to go through a shift in purpose and outcome?
1 in 3 local authorities in England is using computer algorithms in welfare decisions. I hope the ability to be humane isn’t being taken out of the equation.
Up until now, we have still been seeing IoT projects predominantly as experimental pilots rather than a strategic investment. It’s quite possible that Covid-19 will drive organisations towards a fundamental rethink of how they use IT to benefit the business. It’s created a massive increase in the use of cloud computing for online services and I suspect the follow-on phase as we come out of lockdown will see a similar bump towards mobile and IoT services in physical settings. So yes, it is possible that we will see a transformation. The risk is shifting too quickly and adopting solutions that aren’t ready. We’ve already seen this in incorporating AI and algorithms in real-world decisions.
AI isn’t always very good at performing human-like decisions. It is getting better at individual sensory tasks like object recognition but real-world decisions are complex, messy, ambiguous and uncertain, and sometimes the choices are arbitrary. When that is the case, feelings become incredibly important. It is what we are talking about when we refer to our gut instincts and that is something an AI doesn’t currently have or have access to. Humans are fallible too but I do think we need to be careful about how AI is used in decisions that have human costs and consequences. And I don’t think we are there yet. I was quite concerned to read recently that 1 in 3 local authorities in England is already using computer algorithms in welfare decisions. I hope the ability to be humane isn’t being taken out of the equation.
Q: I know you did some of the research for Hannah Fry’s book, Hello World, a great book BTW and well recommended. I would be fascinated to get a couple of stories from that research that might be relevant?
Obviously, I am biased but I would recommend it to anyone interested in how humans and computers can make flawed decisions, both alone and together. There are so many great examples in the book to learn from. But relevant to what we have been talking about today is how bad decisions occur at the two extremes: when a situation is extremely rare or has never been experienced before, which is what we are experiencing right now with the COVID-19 pandemic, and when a situation is more common than we realise, which is true of many everyday decisions.
So, starting with the first scenario: having to make a decision when facing a rare or unique situation. This is an example of when relying on our gut can be really dangerous, because those feelings are unlikely to be coming from a valid experience. We need to rely on the data, and yet too often we ignore it.
When facing a rare or unique situation that we have no experience in, we tend to make the data fit a situation that we can understand, and it can lead to making terrible choices.
When we automate physical systems and processes, it is common to build in alarms and safety mechanisms in case anything abnormal occurs. And, frustratingly, we humans seem incredibly persistent in misreading them, ignoring them and even overriding them. So one story from the book was an incident at Alton Towers some years ago that led to disastrous consequences. An occupied train on a rollercoaster smashed into an empty train that had stalled earlier on the track, causing terrible injuries to the people at the front of the occupied train. How did it happen? Engineers had been inspecting part of the track for a fault and, after the inspection, sent an empty train around the track as the final test before reopening the ride. The train stalled. Now for an empty train to stall is incredibly unusual. Adding to the uniqueness of the situation, the ride operators had requested an empty train out of reserves to increase capacity, because queues were forming while waiting for the ride to reopen. When the ride restarted, the first train out was stopped at the top of the first incline because a sensor on the track detected the stalled train. It's a standard safety mechanism. It triggered a collision-detection alert. There didn't appear to be any missing trains because the engineers didn't realise that an additional train had been called out of reserves. They assumed the system hadn't reset properly after repairing the fault and overrode the alert, with tragic consequences.
There is sadly no shortage of examples of us refusing to believe what the systems are telling us because the situation does not seem to make sense. Another example from the book: planes often fly on auto-pilot for extended times during mid-flight. An Air France flight suffered an unexpected sensor failure and the autopilot handed control back to an inexperienced co-pilot, who was on duty whilst the captain and the other co-pilot rested. The pilot had accumulated plenty of hours in flight simulators, but little flight experience. The plane hit some turbulence and he over-reacted, putting the plane into a steep climb to pull out of it. The plane went so high that air could no longer flow over the wings. It stalled and went into a free-fall, but with its nose up. Alarms sounded in the cockpit and alerted the other pilots. There was still time to rescue the flight by pushing the stick forward, dropping the nose to allow air back over the wings. But the inexperienced co-pilot kept his stick pulled back, thinking he was trying to pull out of a nose-down dive. Because that is what you would expect to be doing, and the type of scenario that would have been rehearsed in training. By the time the captain realised the mistake and ordered them to drop the nose, it was too late.
Whilst such disasters are thankfully rare, the concern is that pilots are increasingly inexperienced in the nuances of real-world flight because of reliance on auto-pilot. The question is whether or not the same could become true for drivers of autonomous vehicles. If we sit back whilst our cars drive us around for most of the time, will we be in any position to competently take over and handle an emergency situation? There are a number of stories in the book exploring these sorts of scenarios.
Human beings are susceptible to spotting patterns that aren’t really there. As we gather more and more data in physical environments, we are going to produce more inconvenient data and we need to get better at embracing it, not burying it.
The other extreme is when we think a scenario is less common than it actually is. The easiest example is the case of mistaken identity, because your face is unfortunately similar to somebody leading a less moral life. A great example went viral a couple of years ago when a man shoplifting from a store here in the UK bore a close resemblance to the American actor David Schwimmer, one of the cast of the old TV series Friends. Whilst that instance was funny, if the police mistake your identity it can be terrible. One example from the book is the case of Steven Talley, an American who was somewhat brutally arrested on suspicion of being a bank robber because facial recognition software had matched him to CCTV footage. Despite having a solid alibi, he still spent two months in jail and it took over a year to clear his name, by which time he had lost his job, his home and access to his children. The thing is, if you look at the photos included in the book, you honestly would think it's him. With nearly 8 billion people in the world, we are all going to have our share of doppelgangers. It didn't need a computer to make a mistaken identity. But because an algorithm confirmed what people thought, his alibi was not trusted. We've seen similar mistakes occur with DNA testing. If your DNA matches that at a crime scene, even a solid alibi might not be enough. There have been examples of people serving lengthy prison sentences before someone finally explains how the DNA was contaminated.
Human beings are susceptible to spotting patterns that aren't really there. If you flip a coin and it turns up heads 10 times in a row, you might think the coin is weighted, a trick. But flip a coin enough times and, whilst it will average 50% heads and 50% tails, what it is unlikely to do is turn up H, T, H, T, H, T in a nice tidy sequence. At some point, it becomes likely that you will get heads 10 times in a row. (To put a number on it: after somewhere around 1,400 flips, a run of ten heads somewhere in the sequence becomes more likely than not.)
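You don't have to take that on intuition: the probability of a run of ten heads in n flips can be computed exactly with a short dynamic program that tracks the length of the current run of heads. This is a sketch of my own to illustrate the point, not something from the interview:

```python
def prob_run(n_flips, run_len=10):
    """Exact probability of seeing `run_len` heads in a row
    somewhere in `n_flips` fair coin flips."""
    # state[i] = probability the current run of heads has length i
    # (and the target run has not yet occurred)
    state = [0.0] * run_len
    state[0] = 1.0
    hit = 0.0  # probability the run has already occurred
    for _ in range(n_flips):
        new = [0.0] * run_len
        new[0] = sum(state) * 0.5           # tails resets the run
        for i in range(run_len - 1):
            new[i + 1] = state[i] * 0.5     # heads extends the run
        hit += state[run_len - 1] * 0.5     # heads completes the run
        state = new
    return hit
```

Running this shows the probability is still well under 50% at 1,024 flips and crosses 50% in the low thousands; the exact crossover depends on the run length you are looking for.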
A real-world example of this is in the random shuffling of music playlists. Some years ago, Spotify announced that its shuffle would no longer be truly random, because people were complaining that it didn't feel random: songs from the same artist kept playing in sequence. But it was the same challenge. If enough people shuffle their music, somebody is going to get a shuffle that plays an entire album from the same artist in order. So Spotify created an algorithm to make a non-random shuffle that feels more random to humans.
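Spotify's actual implementation isn't what matters here, but the general idea they have described publicly can be sketched: instead of a uniform shuffle, spread each artist's tracks at roughly even positions through the playlist with a little random jitter, so the same artist rarely clusters. This is a simplified illustration under that assumption, not Spotify's code:

```python
import random
from collections import defaultdict

def artist_spread_shuffle(tracks, seed=None):
    """Shuffle a playlist while avoiding clusters of the same artist.

    Each artist's tracks are assigned roughly evenly spaced positions
    in [0, 1) with random jitter, then the whole list is sorted by
    position. A sketch of the 'feels more random' idea, not a real API.
    """
    rng = random.Random(seed)
    by_artist = defaultdict(list)
    for track in tracks:
        by_artist[track["artist"]].append(track)

    placed = []
    for artist_tracks in by_artist.values():
        rng.shuffle(artist_tracks)        # random order within an artist
        n = len(artist_tracks)
        offset = rng.random() / n         # random starting point per artist
        for i, track in enumerate(artist_tracks):
            jitter = rng.uniform(-0.2, 0.2) / n
            placed.append((offset + i / n + jitter, track))

    placed.sort(key=lambda pair: pair[0])
    return [track for _, track in placed]
```

A uniform Fisher-Yates shuffle is "correct" randomness; this deliberately biases away from it because humans judge clustered output as non-random.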
So we have two challenges when dealing with real-world evidence. When facing a rare or unique situation that we have no experience in, we tend to make the data fit a situation that we can understand, and it can lead to making terrible choices. We may already be seeing this happen with government responses to Covid-19. For example, some people believe that early warnings were ignored or not taken seriously because previous recent epidemics, such as SARS, H1N1, MERS and Ebola, remained contained within continents. And when we think the data has given us the answer we need, we ignore or explain away evidence that conflicts with it. We need to be aware of these flaws in decision-making processes. Time and again, a disaster could have been avoided by not explaining away an inconvenient piece of data. As we gather more and more data in physical environments, we are going to produce more inconvenient data and we need to get better at embracing it, not burying it.
Q: Last question. What is your view on the long term future around how mankind will interact with space? You can be as speculative as possible we are amongst friends here!!!
We are social, and we are spatial, we like to move around and we like to be with others…
Without wanting to go too deep philosophically, human beings are social creatures with brains. Why do we have a brain? All vertebrates and most invertebrate animals have brains whilst plants rooted to the ground do not, which suggests we evolved brains to enable us to physically move around. As an aside, that doesn’t mean plants don’t have their own capacity for intelligence, just that it doesn’t function in any way like an animal with a brain. Trees have been shown to communicate with one another through their roots, which is a whole other subject (and a rant I could get into as to why city planners should stop planting individual trees spaced neatly apart along streets and instead only plant them in groups).
So it would seem that a brain is necessary to be able to move about in space. How has being social benefitted us? Well at the most basic level, it enables us to cooperatively rear our next generation and to divide labour in more productive ways. As the saying goes, a selfish individual will beat an altruistic individual, but groups of altruists will beat groups of selfish individuals. According to famous evolutionary biologist Edward O. Wilson, out of all the species of animals on land, there are only 20 that we know of that have attained what we would consider being advanced social life, defined as being some degree of altruistic division of labour. Most are insects. But it does include us.
So we are social, and we are spatial, we like to move around and we like to be with others. Covid-19 is putting a huge dampener on both right now. For the way humans interact with space to fundamentally change would mean a change to what it is to be human. Which means we are no longer talking about humankind or Homo sapiens. There are already authors pondering if our next evolution is to incorporate AI, to become cyborgs. That's perhaps a whole other podcast. But for those interested, a couple of great books that broach the subject are Homo Deus by Yuval Noah Harari and Life 3.0 by Max Tegmark. After finishing the excellent Hello World by Dr Hannah Fry, both of those are well worth the read.
And that’s it. It was a wonderful chat and some thought-provoking questions to ponder over. Thanks again to Paul Russell of IBM for inviting me to be on SpacePod!
And here’s the shortened podcast:
Header image: Screen wall at Satellite Applications Catapult, photo taken in September 2016.