A Timeline of Modern AI

Over the past six years, I’ve been updating a timeline of modern AI advances. I first created it for my PhD research and have since updated it in various versions and iterations depending on the audience or industry I’m presenting it to. It mostly focuses on computer vision, natural language processing, and game dynamics (the various Alpha breakthroughs produced by DeepMind). All rely on what I call modern AI: the use of deep learning and reinforcement learning, advances on traditional machine learning that was mostly just computational statistics (we can have a whole separate debate on frequentist statistics vs Bayesian probabilities ☺️), along with some hardware and open-source language developments that boosted progress in AI models. (I’m using the loosest possible definition of AI here; another separate debate…)

Here’s the general version closing out 2022. For computer vision, it took nine years from the combination of a new approach to training neural networks (Hinton et al., 2006) with GPUs to unlock compute limitations (Raina et al., 2009) to crack what was considered a virtually impossible task: for computers to outperform humans at object detection in photo-realistic images. But that approach wasn’t enough to achieve a breakthrough in natural language. Then came a modification, a paper proposing transformer models (Vaswani et al., 2017). Within four years, Microsoft announced a model that paralleled humans at completion prediction, reading comprehension and commonsense reasoning. Just over a year later, ChatGPT was launched…

The image is openly licensed under Creative Commons. If you do reuse it, please keep the attribution. Thanks!

Charting the developments in modern AI from 2006 to 2022
Modern AI timeline by Sharon Richardson, 3 January 2023, CC BY-SA 4.0

This article has been cross-posted on LinkedIn

Beware the 25 percent*

I came across two articles today that mentioned 25 percent as the share of people supporting a strongly partisan (i.e. unreliable) view. In both cases the research took place in the USA, so it may or may not apply elsewhere.

First, a book review describing how the US Supreme Court has been using a rather undemocratic ‘Shadow Docket’ procedure to advance a right-wing agenda – The Shadow Docket review: how the US supreme court keeps the sunlight out. In the wake of the controversial overturning of Roe v Wade, it seems just 25 percent of Americans have confidence in the Supreme Court. And not without reason, it seems…

Second, an article outlining new research showing that search algorithms are not (entirely) to blame for people developing increasingly extreme views – The partisans beyond the filter bubble. It has often been assumed that the personalisation of content recommendations by algorithms creates an echo chamber: you click one link to a partisan site, you’ll be served up a bunch more, and your viewpoint becomes increasingly partisan. It’s all the algorithm’s fault. But that turns out not to be the case. The research shows that not everyone follows this behaviour pattern; in reality, most people do not continue to engage with unreliable sites. Guess how many do? Yup, roughly 25 percent. The research is US-based, and the demographics appear to skew towards ageing right-wingers…

So, I could be drawing a hugely spurious correlation, but it seems about 1 in 4 people are more likely to pick extreme and unreliable content and will then share it with anyone and everyone. The algorithms amplify the opportunities to view unreliable content, and yes, that is problematic, but it is a relatively small group of people who actively engage with it and spread the word. If you’ve been exposed to unreliable content, it is more likely that a person shared it with you than that an algorithm promoted it. As commented in the second article:

There’s a relatively small group who are attracted to trouble, and very probably spread what they find around, creating friction even with those who broadly agree with them…

Charles Arthur, Social Warming

References

Header image: State Capital Protest in Raleigh (Nov 2020) by Anthony Crider

* OK, so the second article mentions it could be somewhere between 25 and 30 percent… 🙂

Spotify’s strategy for machine learning

On June 10, 2021, Google hosted an Applied ML seminar with an opening keynote by Tony Jebara, VP of Engineering and Head of Machine Learning at Spotify. Jebara presented some fascinating insights into Spotify’s current strategy for machine learning and their growing use of reinforcement learning.


What is time?

On 3 June 2021, New Scientist hosted an online talk by Sean Carroll, research professor of theoretical physics at the California Institute of Technology, titled ‘How Time Works’…


Look who’s talking…

Deeply held beliefs can be hard to overcome, no matter the evidence. But they should still be challenged when there is data available. Which gender is most likely to interrupt and dominate the conversation in meetings…?
