A Short Introduction to Artificial Intelligence

What do Steven Spielberg and Stephen Hawking have in common? Or Alan Turing and Isaac Asimov? They have each theorised about the future of thinking computers, or Artificial Intelligence (AI).

AI has come to mean many things in popular culture and in the media – android children, helpful servant robots, even dangerously self-aware computers like HAL in 2001: A Space Odyssey. But the reality of AI is different from what movies and TV would have us believe.

4IR.org is producing a series of articles (this being the first) to help separate the science fiction from the reality of AI, and to facilitate understanding of what is happening in AI research right now. It isn’t aimed at the AI-educated amongst us but at novices (like me), and is designed as an efficient AI hack. The series, Everything AI, will help you to understand the benefits – and potential risks – of AI for human health, society, and work into the future.

The beginnings of AI

Artificial Intelligence is a science “inspired by…the ways people use their nervous systems and bodies to sense, learn, reason and take action” (Stanford University, 2016). The concept of a thinking machine has been theorised for over three hundred years, with writers like Jonathan Swift imagining a mechanical ‘Engine’ that could write poetry. However, the modern idea of Artificial Intelligence was first posited in 1943 by two researchers, Warren S. McCulloch and Walter Pitts (Press, 2016). They conceived of an artificial ‘neural network’, a model that mimicked the way the human brain works and could perform simple thinking tasks. This idea formed the basis of AI research as we know it today.

In 1950, British computer scientist Alan Turing expanded upon this idea by considering the ways in which a computer could learn to think like a human being. His paper, ‘Computing Machinery and Intelligence,’ was a critical turning point in the theory of Artificial Intelligence. In it, Turing proposed the now famous ‘Imitation Game,’ also known as ‘The Turing Test,’ an experiment in which a computer could demonstrate ‘thinking’ equivalent to that of a human. Turing argued that this was indeed a possibility (Turing, 1950).

Since the 1950s, computer scientists have continued to experiment with AI – both to attempt to pass the Turing Test (there is an annual competition, the Loebner Prize, whose top prize has never been won) and to build on the work of early researchers.

Progress in AI

Much of the early work in AI focused on teaching computers the skills of humans and, where possible, on surpassing human achievement in those skills. For example, computers have been taught speech recognition, language translation, games (chess, general knowledge, and the Chinese game Go), and navigation. AI scientists have been very successful in this field, and we already use many of the products they have created.

However, there are some areas in which AI researchers have struggled, and their success in general has been called “patchy and unpredictable” (Stanford University, 2016). Just as many of us grew up expecting to own a personal jetpack by 2017, so too many of us believed the promises of science fiction and expected the robot housemaids of television and film. These remain far from reality.

One of the problems is that machines still cannot generalise what they learn. The kind of learning machines are so far capable of is narrow and skill-based, and it cannot be transferred beyond the task they have been taught. For example, AlphaGo, developed by Google’s DeepMind and trained using deep-learning techniques to play the complex Chinese game Go, beat the world-champion player Lee Sedol in 2016. While AlphaGo is unquestionably the world’s best Go player, it currently cannot do anything else except play Go. This makes its direct application limited, although the knowledge arising from this project has applications in other areas of AI. The ability for a computer to apply its knowledge in other areas, as humans do – often called general intelligence – is the ultimate goal of AI researchers. Currently this remains elusive.

Fast Facts

  • Research in AI has been progressing for the past 65 years, and with big names (and big money) like Google, Amazon and Facebook investing heavily in the research, we can expect continuing advances
  • AI research has achieved some significant progress. Google Maps, Google Translate, modern video games, and Siri are all AI systems that we use every day
  • AI has the capacity to have “potentially profound positive impacts on our society and economy…between now and 2030” according to the One Hundred Year Study on Artificial Intelligence, published by Stanford University in 2016
  • However, the research, while almost continuous since 1950, has yet to reliably and successfully mimic the way real humans learn and transfer skills from one situation to the next. The real traits of humanness and human learning remain elusive.

 
