The phrase “artificial intelligence” (AI for short) conjures up images of robots with superhuman brains conspiring to take over the world. So far, nothing like that has happened outside of science fiction. But some recent developments in the field may lead to big changes in the way engineers deal with computers, and could make millions of present-day jobs obsolete in the bargain.
The field of AI was conceived in the 1950s, around the same time computers became advanced enough in memory and processing speed to outperform humans in narrow but significant ways. One of the early leaders was MIT computer scientist Marvin Minsky (1927–2016), who confidently predicted in 1967 that “within a generation… the problem of creating ‘artificial intelligence’ will substantially be solved.” Back then, most AI programs consisted of explicit programming instructions like “if this happens, do that.” In the 1960s, Minsky and others also investigated an alternative to such explicit programming: a system called a “neural network.” But in a 1969 book, he concluded that neural networks would never be as useful for AI as the top-down, explicit-instruction method. That conclusion discouraged research on neural networks, and in the meantime AI funding crashed as results failed to live up to early promises.
AI neural networks use software to imitate the partly analog, partly digital way that neurons in the brain work. Despite Minsky’s verdict, some AI researchers never gave up on the idea. During the 1990s, flexible neural networks that could learn from their mistakes, and thereby improve their performance, began to win international contests that pitted pattern-recognition programs against one another. Progress in the last five years or so with advanced deep-learning neural networks, which stack many layers of artificial neurons, has been spectacular.
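To make the “imitate neurons in software” idea concrete, here is a minimal sketch of an artificial neuron and a tiny two-layer network built from three of them. The weights are arbitrary illustrative numbers, not learned values, and the whole structure is a simplified assumption for illustration, not any particular production system.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum of its inputs, passed
    through a smooth squashing function (a sigmoid), loosely imitating
    how a biological neuron fires more or less strongly."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def tiny_network(inputs):
    """A toy two-layer network: two hidden neurons feeding one output
    neuron. The specific weights here are made up for illustration."""
    h1 = neuron(inputs, [0.5, -0.4], 0.1)
    h2 = neuron(inputs, [-0.3, 0.8], 0.0)
    return neuron([h1, h2], [1.2, -0.7], 0.2)

print(tiny_network([1.0, 0.5]))  # a single score between 0 and 1
```

A deep-learning network is the same idea scaled up enormously: many layers, millions of weights, and the weights set by training rather than by hand.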
In March of this year, a program called AlphaGo, developed using deep-learning techniques by a division of Google, defeated a world-champion Go player named Lee Sedol. (For those not familiar with the game, Go is played on a sort of checkerboard on steroids, and experts say it’s even more complicated than playing chess.)
Once you’ve written a deep-learning neural-network program, you’re not done yet: you still have to train it. For example, suppose you want your AI system to recognize cat photos. Training amounts to showing the software a large set of photos, some of cats, each labeled “CAT,” and some of anything else, labeled “NOT A CAT.” After enough of this sort of thing, the software starts to learn the difference.
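The learn-from-labeled-examples loop described above can be sketched with a single trainable neuron. Real image recognition works on pixels with deep networks; here, as a stand-in assumption, each “photo” is just a pair of made-up numeric features, and the data values are hypothetical.

```python
import math
import random

# Hypothetical stand-ins for photos: two made-up features per example.
# Label 1.0 means "CAT", 0.0 means "NOT A CAT".
data = [
    ([0.9, 0.80], 1.0), ([0.8, 0.90], 1.0), ([0.7, 0.95], 1.0),
    ([0.1, 0.20], 0.0), ([0.2, 0.10], 0.0), ([0.15, 0.05], 0.0),
]

def predict(w, b, x):
    """Single neuron: weighted sum squashed to a 0..1 'cat score'."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=500, lr=0.5):
    """Show the labeled examples repeatedly; after each one, nudge the
    weights in the direction that shrinks the mistake (gradient descent)."""
    random.seed(0)
    w = [random.uniform(-0.1, 0.1) for _ in range(2)]
    b = 0.0
    for _ in range(epochs):
        for x, label in data:
            err = predict(w, b, x) - label   # how wrong were we?
            for i in range(len(w)):
                w[i] -= lr * err * x[i]      # learn from the mistake
            b -= lr * err
    return w, b

w, b = train(data)
for x, label in data:
    print(x, "-> CAT" if predict(w, b, x) > 0.5 else "-> NOT A CAT")
```

After enough passes over the labeled examples, the weights settle into values that separate the two categories, which is all “learning the difference” means at this level.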
There is a philosophical problem with all this that troubles some people. In the old direct-programming days, if someone asked you how your program did a certain thing, you could show logically, step by step, exactly how the program executed. But with neural-network AI, even the software developers don’t know what’s really going on when the system recognizes a particular image, except that a whole lot of numbers are flying around in ways that get results. And it’s results that the corporate world is interested in, not the philosophy. The corporate data-mining service CB Insights estimates that the total U.S. venture-capital money put into AI startup companies in 2010 was about $20 million. In 2013, that figure quadrupled to $80 million, and in 2015 it soared above $300 million. Hot AI researchers in academia are being lured away in droves to work for the likes of Google and smaller AI startup firms.
What does the advent of new powerful AI programs mean for working engineers and designers? I can think of two things right away.
One is that systems of all kinds will need more and better-integrated sensors: audio, video, and maybe kinds of sensors no one has thought of yet. So sensors with wider bandwidths and closed-loop controls (think of Robby the Robot turning to look at you when you speak up) will probably show up in tandem with increasingly sophisticated and effective AI systems. A glance at one of Google’s self-driving cars tells you that sensors are an important part of the AI game. Sensors will be integrated with the system to an unprecedented degree. The entire system, including its likely users and use environment, will have to be considered holistically as the AI software learns about its surroundings and what it is expected to do.
Once learning-capable AI systems are widely deployed, the training part may be something that ordinary engineers can handle, perhaps even better than task-specific programmers. Already one startup company is developing neural-network AI systems to be deployed on the cloud so that, in principle, anyone with Internet access can use them.
Finally, there is the question of ethics. Certain AI neural-network programs have demonstrated “superhuman” pattern-recognition capabilities, meaning that they are better at recognizing certain images than human beings are. One can imagine all sorts of dire consequences from this trend. Let’s hope we don’t learn about controlling these new machines the hard way: by allowing a tragedy to happen and then cleaning up the mess. The better way is to minimize the downsides of powerful AI software intelligently while its great potential benefits are realized. But we humans will need to keep our wits about us, because the computers will be using theirs too.
This blog originally appeared in the July/August 2016 print issue of Product Design & Development.