On 3 June 2021, New Scientist hosted an online talk by Sean Carroll, Research Professor of Theoretical Physics at the California Institute of Technology, titled ‘How Time Works’…

The event was ticketed and not publicly available to share. Here are the abstract, a few soundbites, and links to related online talks that are publicly available at the time of writing.

For a short, condensed overview of some of the concepts covered in this talk, there is an 8-minute online interview as part of a series of people discussing whether or not time is real: Closer To Truth interview series: Is Time Real?

I’m not a physicist, so I’m going to keep the soundbites here short for fear of making myself look like an idiot 🙂

Central to the talk was the concept that the fundamental laws of physics make no distinction between the past and the future. In theory, if you have perfect information about a system at one moment in time, you can predict its future and retrodict its past. But in our universe, we experience the past and the future very differently. We have memories and artefacts of the past but not of the future: an asymmetry of knowledge. Similarly, we think we have the ability to make different decisions that lead to different futures, the potential to exercise free will and arbitrarily choose to turn left or right when faced with a choice: an asymmetry of influence. A further asymmetry is the relationship between cause and effect: a cause always comes before (or at the same time as) its effect, never after it. None of these asymmetries is built into the fundamental laws of physics.
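
To make the prediction/retrodiction point concrete, here is a minimal sketch of my own (not something shown in the talk): simulate a frictionless projectile under Newton’s laws, then flip its velocity and run the very same equations forward again. The system retraces its steps exactly, because nothing in the underlying law distinguishes past from future.

```python
# Time-reversibility of Newtonian mechanics: evolve a projectile
# forward, reverse its velocity, evolve again, and recover the start.
G = -9.81   # gravitational acceleration (m/s^2)
DT = 0.001  # time step (s)
STEPS = 2000

def step(x, v, dt):
    """One velocity Verlet step under constant gravity."""
    v_half = v + 0.5 * G * dt
    x_new = x + v_half * dt
    v_new = v_half + 0.5 * G * dt
    return x_new, v_new

x, v = 0.0, 20.0  # initial height (m) and upward velocity (m/s)
for _ in range(STEPS):
    x, v = step(x, v, DT)

v = -v  # 'retrodiction': flip the velocity and apply the same law
for _ in range(STEPS):
    x, v = step(x, v, DT)

print(round(x, 6), round(-v, 6))  # back to 0.0 and 20.0, the initial state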

The answers to these puzzles are believed to come from the second law of thermodynamics and its concept of entropy. (The talk also dabbled in quantum physics… I’m not even going to try and summarise that.) As repeated throughout a certain song by Muse, the second law states that, in an isolated system, entropy can only increase. Such a system reaches equilibrium when its entropy is at its highest, when the system is completely disordered. The talk used a simple example to demonstrate this: a cup of coffee with cream placed on the top.
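
In standard textbook notation (my addition, not a slide from the talk), the second law and Boltzmann’s definition of entropy can be written as:

$$ \Delta S \geq 0, \qquad S = k_B \ln W $$

where $S$ is the entropy of an isolated system, $k_B$ is Boltzmann’s constant, and $W$ is the number of microscopic arrangements consistent with the system’s macroscopic appearance. Mixed coffee and cream can be realised by vastly more molecular arrangements than separated coffee and cream, which is why mixing is the direction things go.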

Three glasses, each containing coffee and cream in a different state of disorder.

In the image above, starting from the left, the first glass contains cream sitting on top of the coffee: the most ordered state of the three. In the second glass, the cream has begun to mix with the coffee, making it more disordered than the first. In the third glass, the cream and coffee are completely mixed up: the most disordered state for the coffee and the cream.

On the journey from coffee and cream that is separate and ordered to completely mixed, the drink, as a system, goes from simple to complex and back again. What is meant by simplicity versus complexity here is, “how much information do you need to be able to explain what is going on.” The more information that is needed, the more complex the system. When the coffee and cream are completely separated, you don’t need much information. The same is true when the two are completely mixed and the cream is evenly dispersed throughout the coffee. The system is at its most complex in between, when the cream is only partly mixed into the coffee.

A really interesting analogy was given to describe measuring complexity. If you took a photo of each glass, the size of the JPEG would be largest for the middle glass, because it contains the most detail to capture…
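
That analogy can be played with directly. Below is a rough sketch of my own (an illustration under my own assumptions, not anything shown in the talk): “cream” cells are randomly swapped into “coffee” cells on a grid, each snapshot is coarse-grained into block averages (standing in for the way a lossy JPEG discards pixel-level detail), and the zlib-compressed size of each snapshot serves as a crude complexity measure.

```python
# Complexity as compressed size: mix "cream" into "coffee" on a grid
# and track the zlib-compressed size of a coarse-grained snapshot.
import random
import zlib

random.seed(0)
N = 64  # the glass is an N x N grid of cells

# Ordered start: cream (1) fills the top quarter, coffee (0) the rest.
grid = [[1 if row < N // 4 else 0 for _ in range(N)] for row in range(N)]

def coarse(g, b=8):
    """Average over b x b blocks, quantised to 16 grey levels: a crude
    stand-in for what a lossy JPEG does to fine detail."""
    out = []
    for br in range(0, N, b):
        for bc in range(0, N, b):
            s = sum(g[r][c] for r in range(br, br + b)
                            for c in range(bc, bc + b))
            out.append(int(15 * s / (b * b)))
    return bytes(out)

def mix(g, swaps):
    """Swap random neighbouring cells: a toy model of diffusion."""
    for _ in range(swaps):
        r = random.randrange(N - 1)
        c = random.randrange(N - 1)
        dr, dc = random.choice([(1, 0), (0, 1)])
        g[r][c], g[r + dr][c + dc] = g[r + dr][c + dc], g[r][c]

for t in range(9):
    print(f"after {t} rounds of mixing: "
          f"{len(zlib.compress(coarse(grid)))} bytes")
    mix(grid, 600_000)
```

The exact byte counts depend on the grid size and the compressor, but the shape of the output mirrors the talk’s point: the compressed size starts small, rises while tendrils of cream are working their way in, and falls back once everything is uniform. Maximum complexity sits in the middle of the journey, not at either end.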

In Professor Carroll’s words, there is almost a law – not quite a law, because we don’t yet understand it – that the journey of a closed system from low entropy to high entropy is also a journey from simplicity to complexity and back again, as shown in the diagram below.

Order and Complexity in the Universe

This movement from simplicity to complexity and back again, as entropy continues to increase, is key to understanding our perception of time: entropy gives time a direction. It is part of why our universe is the way that it is. But why do we imagine multiple possible futures?

One hypothesis is linked to evolution, and to the moment when organisms that previously lived in the water gained the ability to move onto land. The first species believed to have achieved this is Tiktaalik, about 375 million years ago. The argument goes: when you live under water, you cannot see very far, so everything is nearby. You can move at metres per second, and you can only see metres around you, meaning that as soon as you see something, it can reach you in seconds. The evolutionary pressure in that kind of situation is to react quickly and not overthink it. As soon as you see something, you have to judge: is it a friend, a foe or food? Once you climb onto land, you can see much, much further, and that enables different strategies. You now have the time to think, ‘what if I did this versus that…?’ and contemplate different hypothetical futures. The current thinking in the field of neuroscience, based on fMRI studies of the brain, is that our brains repurposed areas already developed for storing and replaying memories in order to imagine different hypothetical scenarios about the future.

Humans are not the only beings able to imagine the future and react accordingly. But we have gone further: we talk to each other about it, planning and cooperating to bring possible future things into existence.

There was a lot more content going into the details of how we perceive the past and future differently, along with a rather sobering explanation of how the universe will eventually reach an equilibrium of disorder: the final star burns out in about a quadrillion years’ time, and the final black hole evaporates some time after that, leaving nothing but empty space. But it was a fascinating talk that pitched entropy as the key player in explaining how we remember the past and make constant predictions about the future, including imagining different possible futures and how to bring some of them into existence. Free will is still up for debate!

I’ll close out with Professor Carroll’s beautiful and optimistic finish:

“[Time and increasing entropy give us] the ability to think about a future that we don’t know anything about, we don’t have any pictures, we don’t have any artefacts. But we have the ability to bring it into existence. It’s human, it’s important, and it’s compatible with the fundamental laws of physics. It gives us hope for the future.”

Professor Sean Carroll

References and further reading

And if you were wondering what the Muse song is all about, courtesy of YouTube (available at the time of posting, and a warning: the video is a bit depressing, but quite clever)…


Feature photo ‘Time’ by Mat Brown from Pexels



Join the conversation! 2 Comments

  1. I still like the definition I came across at least 2 decades ago.
    Time: a measure of the passage of nowness in which things happen

    🙂

  2. Love it! I may borrow that one from you… 😉 naturally with attribution.
