Can Machines Think?
Alan Turing asked himself the same question over six decades ago in his famous article entitled "Computing machinery and intelligence" (1950). His answer was to come up with what is today known as the Turing test, although in his day he called it "The Imitation Game" – a name that's now familiar to everyone, thanks to the movie based on one of the most interesting episodes in the life of the mathematical genius: the time he spent deciphering the Enigma code, which played such a key role in the outcome of the Second World War.
But as well as the end of the war (in part), we also owe many other things to Turing. Considered one of the fathers of information technology, in the last century he was already pondering one of the great questions of today's society: in what direction are machines evolving, and what form will their interactions with human beings take?
Although we're still very far from the scenario of technological apocalypse suggested by a fair number of movies and books, evidence from new developments in ICT now makes this question much easier to answer. Sixty years ago Turing could not have foreseen the current situation, and yet he still set out to design a method that would answer scientifically the question of whether a machine could think for itself. He created the Turing test: a conversation in natural language between a human being and a machine, designed to generate a verbal interaction in which it is impossible to tell the difference between human and software. In five minutes of conversation, the machine must convince at least 30% of the judges assessing the chat that whoever is behind the screen (communicating only through text, as in a virtual chat room) is a human being. If it does, the machine has passed the test.
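The pass criterion described above – fooling at least 30% of the judges within the five-minute conversations – can be sketched as a simple check. This is only a minimal illustration of the threshold rule, not any official test harness, and the judge verdicts below are invented:

```python
def passes_turing_test(verdicts, threshold=0.30):
    """Return True if the machine fooled at least `threshold` of the judges.

    `verdicts` is a list of booleans: True means a judge believed
    the hidden interlocutor was human.
    """
    fooled = sum(verdicts) / len(verdicts)
    return fooled >= threshold

# Hypothetical session: 4 of 10 judges fooled -> 40% >= 30%, a pass.
print(passes_turing_test([True] * 4 + [False] * 6))
```

By this rule, Eugene Goostman's 33% in 2014 clears the 30% bar only narrowly, which is exactly why the result was so contested.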
The first to "pass" the Turing test
This took place in 2014, over six decades after the tragic death of the British mathematician on 7 June 1954. Two years have already passed since the controversial experiment performed by a controversial scientist, Kevin Warwick. The results – as might be expected – were also controversial, and rekindled the debate on artificial intelligence. Warwick has spent his whole life studying artificial intelligence, and is currently experimenting with robotics and the world of cyborgs at Reading University. Chips, electrostimulators and a whole array of issues to do with the potential of the human brain, in combination with technological tools, are all part of this researcher's everyday work. In 2014 he organized the largest-scale Turing test in history: 30 judges and five machines that took part in a total of 300 conversations.
The first to pass the Turing test was Eugene Goostman, a chatbot developed by its programmers to simulate the personality of a Ukrainian adolescent. This aspect played in his favor during the test: as he didn't imitate the conversation of an adult, he was able to naturally mimic the lack of knowledge characteristic of his age. Eugene only just passed, with 33%, and his achievement raised many questions and objections within the scientific community. "They were the official parameters established by Turing," said Kevin Warwick, the official organizer, in the British newspaper The Independent a few days after the event.
At the time, Turing laid down a series of conditions, such as avoiding mathematical questions, although he never mentioned anything about not including "children". The controversial pass mark in 2014 opened up the debate between those who consider the Turing test to be the cornerstone of AI (Warwick himself defined it as such) and those who doubt this method can answer the question of whether a machine "thinks" for itself or not.
However, it is all a question of perspective, even in the world of artificial intelligence. Eugene's result also shows that in 66.7% of cases the machines did not succeed in taking in the judges, and so we can rule out (for the moment) a successful software rebellion. But it begs the question of whether this test also assesses "natural intelligence" – the typical intelligence of human beings. What happens if the results say you're a machine?
Dory Gascueña for OpenMind