Artificial intelligence is not new. Programmers and scientists have been working on building smart systems for more than fifty years, and have been thinking about it for much longer than that. But something has changed – two things, I think – and they are leading to much, much better and smarter systems.
The first thing is perhaps the obvious one. Computers are getting faster, and we are getting pretty good at combining lots of individual computers into networked supercomputers. This makes even dumb software better, but alone it’s not enough. Doing the wrong thing really fast doesn’t make up for being wrong.
Traditionally, artificial intelligence relied on systems of rules, accessed by a piece of software called an inference engine. The job of this engine was to look at a problem, work out what the rules said about it, and, if possible, provide an inferential, logical path to an answer. Scientists quickly discovered that binary rules were too simple – the world is not black and white. They invented fuzzy logic, which allows us to represent concepts with some nuance – warmer, cooler, not just a specific temperature, for example. But these systems had a crippling flaw: they tended to be inflexible.
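The contrast with binary rules can be shown in a few lines of code. This is a minimal sketch, assuming triangular membership functions; the temperature categories and their boundaries are illustrative inventions, not taken from any real system:

```python
# A minimal sketch of fuzzy membership. Instead of a hard yes/no rule
# ("is it warm?"), each category gets a degree of membership in [0, 1].

def triangular(x, left, peak, right):
    """Degree of membership of x in a triangular fuzzy set."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def describe_temperature(celsius):
    """Return fuzzy memberships instead of a single crisp label."""
    return {
        "cool": triangular(celsius, -5, 10, 20),
        "warm": triangular(celsius, 15, 25, 35),
    }

print(describe_temperature(18))  # partly cool, partly warm at once
```

A binary rule would force 18°C into exactly one bucket; the fuzzy version lets it be somewhat cool and somewhat warm at the same time, which is the nuance the paragraph above describes.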
The next big step, and the second factor that is a game-changer, was machine learning, in which a system develops, adapts, and improves its rules based on training data. Training involves combining raw inputs with known right answers, and the rules are tuned by the software itself to improve the “fit” between its predictions and the desired outcomes.
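The tuning loop described above can be illustrated with a toy example. The sketch below fits a one-variable linear rule by gradient descent; the data, learning rate, and step count are illustrative assumptions, not drawn from any real system:

```python
# Toy "tuning to improve fit": nudge the parameters of a simple rule
# (y = w*x + b) so its predictions match the known right answers.

def train(data, steps=1000, lr=0.01):
    """Fit y ~ w*x + b by repeatedly shrinking the squared error."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(steps):
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# "Known right answers" generated from the hidden rule y = 2x + 1.
samples = [(x, 2 * x + 1) for x in range(-3, 4)]
w, b = train(samples)  # w approaches 2, b approaches 1
```

The software never sees the rule y = 2x + 1 directly; it recovers it purely by adjusting its parameters to improve the fit between predictions and training answers.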
Google’s AlphaGo program, which recently defeated one of the world’s best players in a five-game match, was successful because it combined a powerful network of computers with machine learning and training sets of many high-quality Go games. Commentators observed that in many cases it seemed to make exactly the move you would expect of a good human player, but sometimes it would be surprisingly different – occasionally making a mistake, but more often showing brilliance.
Sebastian Thrun, the Google Fellow who led the Stanford team that won the DARPA Grand Challenge with the first really functional autonomous car, described the insight that made the difference. They realized that for the car to make consistently good decisions, it needed to be able to assess the quality of its sensor data and to better understand what was important. Their solution was to have a human driver operate the car while the software ran alongside, making its own decisions about what to do. When the car and the human driver agreed, those inputs and rules were promoted, or given higher weights in the calculations. When the car got it wrong, the inputs and rules were demoted. With these adjustments, the car became almost as good as a human driver.
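The promote/demote idea can be sketched very roughly. Everything here is an illustrative assumption – the feature names, threshold, and update rule are invented for the example, and the real system was far more sophisticated:

```python
# A much-simplified sketch of promoting/demoting inputs based on
# agreement with a human driver. Feature names and values are invented.

def decide(weights, features):
    """Choose 'brake' if the weighted evidence crosses a threshold."""
    score = sum(weights[name] * value for name, value in features.items())
    return "brake" if score > 0.5 else "continue"

def update(weights, features, machine_action, human_action, step=0.1):
    """Promote inputs when the car agrees with the human; demote otherwise."""
    sign = 1.0 if machine_action == human_action else -1.0
    for name, value in features.items():
        if value > 0:  # only adjust inputs that contributed to this decision
            weights[name] = max(0.0, weights[name] + sign * step * value)

weights = {"lidar_obstacle": 0.5, "camera_obstacle": 0.3}
features = {"lidar_obstacle": 1.0, "camera_obstacle": 0.2}

action = decide(weights, features)       # software's own decision
update(weights, features, action, "brake")  # human driver braked too
```

When the machine and the human agree, the inputs that drove the decision gain influence; when they disagree, those inputs lose it. Repeated over many miles of driving, this is the shape of the feedback loop the paragraph describes.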
The fascinating, and sobering, point of all this is that we now have the basis of a repeatable, scalable method for creating intelligent systems. They don’t have to be all that good at the beginning. But if they can interact rapidly with an expert or community of experts, they will quickly grow stronger. With sufficiently rich training data, they can become better than any individual human.
Alan Turing famously suggested that if we can’t tell the difference between the performance of a human and a machine, we really ought to consider the machine to be intelligent. Because of the way they learn, we are now able to develop programs that come ever closer to passing the Turing test.
I don’t know what the next great AI breakthrough will be, but from Siri and Alexa, to driverless cars and autonomous rocket landings, we can see the flowering of machine intelligence all around us.