It is entirely possible that we will create machines that can self-learn to the point we lose control of them. But it is a flawed assumption that they will be smarter than humans. Or that being smarter, however you define it, is always a good thing.


This weekend there has been news coverage about artificial intelligence and the coming ‘singularity’. In a nutshell, the singularity is the point at which a machine becomes so smart it can self-learn, and will design and build all other machines. Human intervention no longer required. There are some people, blame Ray Kurzweil, who think this is good news and will lead to immortality (currently estimated to occur around 2050). Then there are others, including the newly formed Centre for the Study of Existential Risk at Cambridge in the UK, who think more along the lines of a Terminator film and that humans may struggle to survive into the next century.

One aspect of discussions about artificial intelligence that drives me slightly nuts is the comparison between computers and brains: the claim that computing power will soon reach a level where it surpasses the brain’s capacity, enabling ‘smarter-than-human intelligence’.

No. It. Will. Not.

A brain is unique to organic beings with motion and emotion. Why is it that a happy person and a sad person, experiencing the exact same circumstances, will perceive the situation completely differently? When did emotional intelligence, serendipity and spontaneous creativity become so devalued? Being able to process more information faster than another being – organic, machine or cyborg – does not guarantee you will be smarter.

A study by George Miller and Patricia Gildea, reported in the book ‘The Social Life of Information’, compared learning words through practice in conversation with learning them from a dictionary alone. The latter can result in the perfectly rational: “me and my parents correlate because without them I wouldn’t be here.” Smarter? You decide.

It is entirely possible that we will create machines that can self-learn to the point we lose control of them. And that might create some new challenges for the world. But it is a flawed assumption that they will be smarter than humans. Or that being smarter, however you define it, is always a good thing.

References:

Flickr image at the start of this post courtesy of Steven T


Comments:

  1. We really had better get Asimov’s 4 laws built into the firmware soon.

  2. 🙂 Well if you can embed a Windows product key, how hard can it be?


