Intelligence rethought: AIs know us, but don’t think like us

Read the full article in New Scientist 

[Image: people on a train. Credit: Gilles Coulon/Tendance Floue]

Can a human-made creature ever surprise its creator, taking initiatives of its own? This question has been asked for centuries, from the golem of Jewish folklore to Frankenstein to I, Robot. There are various answers, but at least one computing pioneer knew well where she stood. “The Analytical Engine has no pretensions whatever to originate anything,” said Ada Lovelace, Charles Babbage’s collaborator, in 1843, removing any doubt about what a computing machine can ever hope to do. “It can do whatever we know how to order it to perform,” she added. “It can follow analysis; but it has no power of anticipating any analytical relations or truths.” 

But 173 years later, a computer program developed just over a mile away from her house in London beat a master of the game Go. None of AlphaGo’s programmers can come close to defeating such a strong player, let alone the program they created. They don’t even understand its strategies. This machine has learned to do things that its programmers can’t do and don’t understand.

Far from being an exception, AlphaGo is the new normal. Engineers began creating machines that could learn from experience decades ago, and that ability is now the key to modern artificial intelligence (AI). We use such machines every day, usually without realising it.

For programmers who develop such machines, the whole point is to make them learn things that we don’t know or understand well enough to program in directly. This approach – called machine learning – has been extremely fruitful. It is the secret sauce of modern AI and has delivered recent successes (and spectacular failures) in autonomous cars, product recommendations, personal assistants, Go and more.

How can a machine learn? When I was growing up, my bicycle never learned its way home and my typewriter never suggested a word or spotted a spelling mistake. Mechanical behaviour was synonymous with being fixed, predictable and rigid. For a long time, a “learning machine” sounded like a contradiction, yet today we talk happily of machines that are flexible, adaptive, even curious.

In artificial intelligence, a machine is said to learn when it improves its behaviour with experience. To get a feel for how machines can perform such a feat, consider the autocomplete function on your smartphone.

If you activate this function, the software will propose possible completions of the word you are typing. How can it know what you are about to type? At no point did a programmer build in a model of your intentions, or the complex grammatical rules of your language. Rather, the algorithm proposes the word that has the highest probability of being used next. It “knows” this from a statistical analysis of vast quantities of existing text. This analysis was done mostly when the autocomplete tool was being created, but it can be augmented along the way with data from your own usage. The software can literally learn your style.
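To make this concrete, here is a minimal sketch of the idea in Python, using simple bigram counts (which word tends to follow which). The class, the toy corpus and the method names are illustrative assumptions, not how any particular phone implements it; a real system is trained on vastly more text and uses more sophisticated statistical models.

    from collections import Counter, defaultdict

    class Autocomplete:
        """Suggest the next word from bigram statistics."""

        def __init__(self):
            # For each word, count the words that have followed it.
            self.following = defaultdict(Counter)

        def train(self, text):
            """Update the counts from any text: a corpus or the user's own typing."""
            words = text.lower().split()
            for current, nxt in zip(words, words[1:]):
                self.following[current][nxt] += 1

        def suggest(self, word, n=3):
            """Return the n most frequent words seen after `word`."""
            return [w for w, _ in self.following[word.lower()].most_common(n)]

    ac = Autocomplete()
    ac.train("the cat sat on the mat and the cat slept")  # stand-in for a huge corpus
    ac.train("the cat is mine")                           # the user's own message
    print(ac.suggest("the"))                              # ['cat', 'mat']

Note that the same train method runs on the original corpus and on the user’s own messages: “learning your style” here amounts to nothing more than updating the counts with experience.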

The same basic algorithm can handle different languages, adapt to different users and incorporate words and phrases it has never seen before, such as your name or street. The quality of its suggestions will depend mostly on the quantity and quality of data on which it is trained. So long as the data set is sufficiently large and close in topic to what you are writing, the suggestions should be helpful. The more you use it, the more it learns the kinds of words and expressions you use. It improves its behaviour on the basis of experience, which is the definition of learning.

Note that a system of this type will probably need to be exposed to hundreds of millions of phrases, which means being trained on several million documents. That would be difficult for a human, but is no challenge at all for modern hardware.

If you feel that this is cheating, because the algorithm is not really intelligent, then brace yourself. Things get worse.

The next step up in complexity is a product recommendation agent. Consider your favourite online shop. Using your previous purchases, or even just your browsing history, the agent will try to find the items in its catalogue that have the highest probability of being of interest to you. These will be computed from the analysis of a database containing millions of transactions, searches and items. Here, too, the number of parameters that need to be extracted from the training set can be staggering: Amazon has more than 200 million customers and in excess of 3 million books in its catalogue.
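As a rough illustration, the sketch below implements one simple version of this idea in Python: item-to-item co-occurrence counting, where the agent recommends whatever has most often been bought alongside the items you already chose. The function, the toy data and the co-occurrence scheme are illustrative assumptions; commercial recommenders use much richer statistical models.

    from collections import Counter, defaultdict
    from itertools import combinations

    def recommend(transactions, basket, n=3):
        """Suggest the n items most often bought together with the user's basket.

        transactions: past purchases, one list of items per customer.
        basket: the items the current user has bought or browsed.
        """
        # Count how often each pair of items appears in the same purchase.
        co_counts = defaultdict(Counter)
        for purchase in transactions:
            for a, b in combinations(purchase, 2):
                co_counts[a][b] += 1
                co_counts[b][a] += 1

        # Score candidates by how often they co-occur with the basket.
        scores = Counter()
        for item in basket:
            scores.update(co_counts[item])
        for item in basket:
            del scores[item]  # never recommend what the user already has
        return [item for item, _ in scores.most_common(n)]

    past = [["dune", "foundation", "hyperion"],
            ["dune", "foundation"],
            ["dune", "neuromancer"]]
    print(recommend(past, ["dune"]))  # ['foundation', 'hyperion', 'neuromancer']

No model of the customer’s mind appears anywhere in this code; it is counting and sorting, done at a scale no human could match.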

Matching users to products on the basis of previous transactions requires statistical analysis on a massive scale. As with autocomplete, no traditional understanding is required: the agent needs no psychological model of its customers, and no literary criticism of the novels it recommends. No wonder some question whether such agents should be called “intelligent” at all. But the word “learning” is beyond dispute: these agents really do get better with experience.


Continue reading the article in New Scientist