A short history of artificial intelligence
The father of artificial intelligence is the British mathematician Alan Turing. In 1950 he predicted that there would one day be a machine able to match and even surpass human intelligence. He devised a test, the "Turing test", to judge whether a machine can be said to be intelligent.
In the test, a human and a computer, both hidden from view, are asked the same questions by a human questioner. If the computer passes, the questioner is unable to tell the machine from the human.
The first conference on artificial intelligence was held at Dartmouth College in the United States in 1956. It led to the establishment of the A.I. laboratories at MIT (Massachusetts Institute of Technology).
In the 1990s AI leaped forward thanks to so-called artificial neural networks (ANNs). ANN software is modelled on the human brain and its network of neurons. Like a brain, ANNs can be trained by feeding them large quantities of data. In the last ten years new training techniques, known as deep learning (a branch of machine learning), have improved ANNs dramatically.
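The idea of "training" described above can be illustrated with a deliberately tiny sketch: a single artificial neuron whose weights are nudged, example by example, until it learns a simple rule (here the logical AND function). This is an illustrative toy, not any real library's API; the function names, learning rate and epoch count are all assumptions chosen for the example.

```python
import math

def sigmoid(x):
    # Squashes any number into the range (0, 1), like a neuron "firing".
    return 1.0 / (1.0 + math.exp(-x))

# Training data: inputs and the expected output of logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0   # the neuron starts knowing nothing
rate = 1.0                     # how big each correction is (assumed value)

for _ in range(5000):          # training = many small corrections
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + bias)
        err = out - target     # how wrong the neuron currently is
        # Nudge each weight in the direction that reduces the error.
        grad = err * out * (1 - out)
        w1 -= rate * grad * x1
        w2 -= rate * grad * x2
        bias -= rate * grad

def predict(x1, x2):
    return round(sigmoid(w1 * x1 + w2 * x2 + bias))
```

Real ANNs chain thousands or millions of such neurons in layers, but the principle is the same: expose the network to data and repeatedly adjust the weights.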
The practical applications of AI have soared thanks to the rise of the internet, which has made billions of text documents, images and videos available for AI training; an industry arms race; and the use of graphics processing units (GPUs), the specialised chips used in PCs to generate graphics, which turned out to be very well suited to ANNs. Applications now range from self-driving cars, medical diagnosis and stock trading to speech recognition and crime prediction.
These are the main current AI streams: