FAQ

What is AI?

Defining AI

Artificial Intelligence (AI) is a term used for computer programs that can perform a range of tasks, such as identifying a brain tumour on an MRI scan or recognising a face in a crowd from surveillance footage. These programs are not intelligent in the way that humans are, but on narrowly defined tasks, such as playing chess or Go, they can appear to outperform a human.

Intelligent machines

The term ‘artificial intelligence’ (AI) was coined by the computer scientist John McCarthy early in his career, in 1955, in the title of a proposal for a summer research project exploring ways in which a machine could reason like a human. He later defined the term as ‘the science and engineering of making intelligent machines’.¹ The proposal asserted that ‘every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it’.² Decades later, AI is everywhere, applied to a vast array of tasks from assessing the likelihood of petty criminals reoffending to identifying cancerous tumours.

Expert systems

Early AI technology tried to encode the rules governing human decision-making so that a computer could emulate the reasoning a human goes through when solving a problem. Such systems became known as ‘expert systems’ and are still referred to today as ‘traditional AI’. However, these early expert systems, and the claims made for AI built on this approach, failed to produce anything remarkable in terms of simulating human capability, nor did they result in useful applications.

The problem with expert systems was that they required programmers to understand the rules surrounding a particular topic, for example, how to pick company stocks that would perform well in the financial sector. Once the rules were understood, they were encoded in a computer program. This was easier said than done, as the sketch below suggests.
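To make the idea concrete, here is a minimal toy sketch of the rule-based approach, not any historical system. The rules, thresholds and field names are invented for illustration; a real expert system encoded hundreds of such if-then rules painstakingly elicited from human experts.

```python
# A toy rule-based "expert system" for stock screening.
# All rules and thresholds below are hypothetical, for illustration only.

def screen_stock(stock):
    """Apply hand-written if-then rules and return a recommendation."""
    if stock["pe_ratio"] < 15 and stock["debt_to_equity"] < 0.5:
        return "buy"    # rule 1: cheaply priced and lightly leveraged
    if stock["revenue_growth"] < 0:
        return "sell"   # rule 2: shrinking revenue
    return "hold"       # default when no rule fires

print(screen_stock({"pe_ratio": 12, "debt_to_equity": 0.3, "revenue_growth": 0.05}))
# -> buy
```

The difficulty was never the programming: it was getting experts to articulate rules like these completely and consistently in the first place.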

Statistical breakthrough

Statistically based algorithms called hidden Markov models (HMMs) provided a breakthrough in the 1980s and were a forerunner of today’s data-driven approaches, including the artificial neural network (ANN). These algorithms essentially perform pattern matching of unseen input against models pre-trained on masses of data. Early methods required that the training data be labelled, for example, an image tagged as a white dog or a black cat.
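The sketch below shows the flavour of this pattern matching, under stated assumptions: two hypothetical pre-trained HMMs (all probabilities invented for illustration) each score an unseen observation sequence using the standard forward algorithm, and the input is matched to whichever model explains it best.

```python
# A minimal sketch of matching unseen input against pre-trained HMMs.
# The models and observations are made up; real systems learn these
# parameters from masses of labelled training data.

def forward_likelihood(obs, start_p, trans_p, emit_p):
    """Forward algorithm: probability of an observation sequence under an HMM."""
    n_states = len(start_p)
    # Initialise with the first observation.
    alpha = [start_p[s] * emit_p[s][obs[0]] for s in range(n_states)]
    # Extend step by step over the rest of the sequence.
    for o in obs[1:]:
        alpha = [
            sum(alpha[prev] * trans_p[prev][s] for prev in range(n_states))
            * emit_p[s][o]
            for s in range(n_states)
        ]
    return sum(alpha)

# Two hypothetical pre-trained models over observation symbols 0 and 1,
# e.g. acoustic features for the spoken words "yes" and "no".
model_yes = dict(start_p=[0.6, 0.4],
                 trans_p=[[0.7, 0.3], [0.4, 0.6]],
                 emit_p=[[0.9, 0.1], [0.2, 0.8]])
model_no = dict(start_p=[0.5, 0.5],
                trans_p=[[0.5, 0.5], [0.5, 0.5]],
                emit_p=[[0.1, 0.9], [0.8, 0.2]])

unseen = [0, 0, 1, 0]  # a new, unlabelled observation sequence

# Recognition is pattern matching: pick the model that best explains the input.
scores = {name: forward_likelihood(unseen, **m)
          for name, m in [("yes", model_yes), ("no", model_no)]}
print(max(scores, key=scores.get), scores)
```

No rules about the problem domain are written down anywhere; everything the system ‘knows’ sits in the trained model parameters.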

Not intelligent

The algorithms developed over the last few decades have become so good that they can now exceed human pattern-matching capabilities in tasks such as image recognition for spotting cancerous tumours on MRI scans. Despite these improvements in performance, the underlying technology has not changed fundamentally since the 1980s, and there have been no significant breakthroughs in understanding how to model intelligence. In fact, no understanding in the human cognitive sense is taking place at all. Artificial intelligence is just that: artificial. It simulates only certain restricted aspects of human cognitive capability, and on limited tasks this can leave people with the illusion that they are witnessing a machine as intelligent as they are! There is, however, no doubt that AI has appeared to advance rapidly since 2010.

References

1  J. McCarthy, What Is Artificial Intelligence? (Stanford, Calif.: Computer Science Department, Stanford University, 2007).

2  J. McCarthy, M. L. Minsky, N. Rochester and C. E. Shannon, ‘A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence’ (1955).
