Read the full article in New Scientist.
“In the summer of 1956, a remarkable collection of scientists and engineers gathered at Dartmouth College in Hanover, New Hampshire. Among them were computer scientist Marvin Minsky, information theorist Claude Shannon and two future Nobel prizewinners, Herbert Simon and John Nash. Their task: to spend the summer months inventing a new field of science called “artificial intelligence” (AI).
They did not lack ambition, writing in their funding application: “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Their wish list was “to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves”. They thought that “a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”
It took rather longer than a summer, but 60 years and many disappointments later, the field of AI seems to have finally found its way. In 2016, we can ask a computer questions, sit back while semi-autonomous cars negotiate traffic, and use smartphones to translate speech or printed text across most languages. We trust computers to check passports, screen our correspondence and fix our spelling. Even more remarkably, we have become so used to these tools working that we complain when they fail.
As we rapidly get used to this convenience, it is easy to forget that AI hasn’t always been this way.
At the Dartmouth conference, and at various meetings that followed it, the defining goals for the field were already clear: machine translation, computer vision, text understanding, speech recognition, control of robots and machine learning. For the following three decades, significant resources were ploughed into research, but none of the goals were achieved. It was not until the late 1990s that many of the advances predicted in 1956 started to happen. But before this wave of success, the field had to learn an important and humbling lesson.
While its goals have remained essentially the same, the methods of creating AI have changed dramatically. The instinct of those early engineers was to program machines from the top down. They expected to generate intelligent behaviour by first creating a mathematical model of how we might process speech, text or images, and then by implementing that model in the form of a computer program, perhaps one that would reason logically about those tasks. They were proven wrong.
They also expected that any breakthrough in AI would provide us with further understanding about our own intelligence. Wrong again.
Over the years, it became increasingly clear that those systems weren’t suited to dealing with the messiness of the real world. By the early 1990s, with little to show for decades of work, most engineers started abandoning the dream of a general-purpose top-down reasoning machine. They started looking at humbler projects, focusing on specific tasks that were more likely to be solved.
Some early success came in systems to recommend products. While it can be difficult to know why a customer might want to buy an item, it can be easy to know which item they might like on the basis of previous transactions, either their own or those of similar customers. If you liked the first and second Harry Potter films, you might like the third. A full understanding of the problem was not required for a solution: you could detect useful correlations just by combing through a lot of data.
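The correlation idea described above can be sketched as a toy item-to-item recommender that simply counts which products appear together in customers’ baskets. This is a minimal illustration, not the method of any particular system; all names and data below are invented for the example.

```python
from collections import defaultdict

# Hypothetical transaction data: each set is one customer's purchase history.
transactions = [
    {"harry_potter_1", "harry_potter_2", "harry_potter_3"},
    {"harry_potter_1", "harry_potter_2"},
    {"harry_potter_1", "documentary"},
]

def co_occurrence(baskets):
    """Count how often each ordered pair of distinct items shares a basket."""
    counts = defaultdict(int)
    for basket in baskets:
        for a in basket:
            for b in basket:
                if a != b:
                    counts[(a, b)] += 1
    return counts

def recommend(item, baskets, k=1):
    """Return the k items most often bought alongside `item`."""
    counts = co_occurrence(baskets)
    scores = {b: n for (a, b), n in counts.items() if a == item}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# A customer who liked the second film gets the item that most often
# accompanies it in other baskets -- no model of *why* is needed.
print(recommend("harry_potter_2", transactions))
```

Note that the recommender never reasons about film plots or customer motives; it exploits nothing but co-occurrence statistics, which is exactly the bottom-up shortcut the article describes.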
Could similar bottom-up shortcuts emulate other forms of intelligent behaviour? After all, there were many other problems in AI where no theory existed, but there was plenty of data to analyse. This pragmatic attitude produced success in speech recognition, machine translation and simple computer vision tasks such as recognising handwritten digits….”