
Artificial intelligence, or “AI” for short, is no more complicated an idea than the two words that make it up: any computer program that displays human-like abilities counts as AI, whether it’s something as complex as a full-fledged talking robot or as simple as the spam filter on your email inbox. Today’s AI may feel rather invisible, but its influence on our society is on par with the robot-filled world Čapek invented in his play. Our cell phones are “smart” now; so are our televisions, our homes, and our kitchen appliances. Companies worldwide tout new “AI-driven” tools for finance, marketing, medicine, industry, and just about every other field. You’re probably reading this, right now, on a device running more than one AI-infused program.
Yet we can trace even the most advanced AI of today back to a single machine first designed over sixty years ago.
On March 10th, 2016, Lee Sedol faced much the same fate as Kasparov before him. This time, the machine was a deep-learning program named AlphaGo. Sedol was one of the world’s top-ranked players of Go, an ancient Chinese board game. Chess is complex, but Go can make chess look like checkers: the game is non-directional, meaning players can place their stones on any open intersection of its 19×19 grid. The aim of the game is to capture territory on the board by shutting out your opponent’s stones.
Because Go’s stones never move once placed and almost any empty point is a legal play, a computer cannot consider every possible outcome on the board. The number of possibilities is incomprehensible, even to a machine. So brute-force search on the level of Deep Blue would not suffice.
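To get a feel for the scale, here is a rough back-of-envelope comparison in Python, using widely cited Shannon-style estimates (roughly 35 legal moves per turn over about 80 turns for a chess game, versus roughly 250 moves over about 150 turns for Go). The figures are illustrative approximations, not exact counts.

```python
# Rough game-tree estimates (Shannon-style, widely cited approximations):
# chess offers ~35 legal moves per turn over ~80 turns per game;
# Go offers ~250 legal moves per turn over ~150 turns per game.
chess_tree = 35 ** 80
go_tree = 250 ** 150

# Report orders of magnitude; for comparison, the observable
# universe contains roughly 10^80 atoms.
print(f"chess game tree: ~10^{len(str(chess_tree)) - 1}")  # ~10^123
print(f"Go game tree:    ~10^{len(str(go_tree)) - 1}")     # ~10^359
```

Even these crude figures put Go’s game tree hundreds of orders of magnitude beyond chess’s, and both dwarf the roughly 10^80 atoms in the observable universe.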
AlphaGo was given the rules of Go, but then it was left to its own devices. Instead of computing every possible move in every possible scenario, AlphaGo’s programmers fed it hundreds of thousands of past Go games played by strong human players. The algorithm processed all that data and used it to predict which move an expert would make in a given position.
The more positions it saw, the better AlphaGo became at guessing which moves would work best in a given situation. After parsing all that human data, AlphaGo was set to play against itself, hundreds of thousands of times, learning to improve by trying to beat its own best strategies. By the time it met Lee Sedol, AlphaGo had played more games of Go than any human could fit into a lifetime. Sedol didn’t have a chance.
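To make the self-play idea concrete, here is a minimal toy sketch: a tabular agent that learns tic-tac-toe by playing itself and nudging each move’s value toward wins and away from losses. Every name in it is invented for illustration, and the approach is deliberately simplistic; AlphaGo’s real system paired deep neural networks with Monte Carlo tree search, not a lookup table.

```python
# Toy self-play learner for tic-tac-toe (illustration only, not AlphaGo).
import random
from collections import defaultdict

EMPTY = " "
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != EMPTY and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == EMPTY]

# value[(board, move)]: learned estimate of how good a move is (starts at 0)
value = defaultdict(float)

def choose_move(board, epsilon=0.1):
    """Usually pick the best-known move; occasionally explore at random."""
    moves = legal_moves(board)
    if random.random() < epsilon:
        return random.choice(moves)
    return max(moves, key=lambda m: value[(board, m)])

def self_play_game():
    """Play one game against itself; return each side's moves and the winner."""
    board = EMPTY * 9
    history = {"X": [], "O": []}
    player = "X"
    while True:
        move = choose_move(board)
        history[player].append((board, move))
        board = board[:move] + player + board[move + 1:]
        w = winner(board)
        if w or not legal_moves(board):
            return history, w
        player = "O" if player == "X" else "X"

def train(games=20000, lr=0.05):
    """Nudge each move's value toward the game's final result."""
    for _ in range(games):
        history, w = self_play_game()
        for player, moves in history.items():
            # reward: +1 for a win, -1 for a loss, 0 for a draw
            reward = 0.0 if w is None else (1.0 if w == player else -1.0)
            for board, move in moves:
                value[(board, move)] += lr * (reward - value[(board, move)])

train()
print(f"learned values for {len(value)} (position, move) pairs")
```

The point is the loop, not the game: on each iteration the agent’s opponent is its own current best strategy, so every improvement raises the bar for the next round.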
In the second game of their five-game series, Lee Sedol walked off the set of his internationally televised match, as if to say: “What am I supposed to do?”
At this rate of progress, it’s impossible to imagine with any accuracy what AI will look like fifty, or even twenty, years from now. The prospect is exciting and terrifying in equal parts. After all, we are talking about machines that surpass human capabilities in ways we can’t always fully understand. This is known as the “black box” problem: when we cannot discern the process by which an algorithm reached its conclusion, the algorithm is a black box to us.
What happens when we apply AI beyond board games and image recognition, to major industries and life-or-death decisions? What happens when more algorithms start producing outcomes optimized past our understanding? The implications of machines that exceed human control have been explored in many a science fiction story.
In Karel Capek’s 1929 play R.U.R., Harry Domin shrugs off a chilling omen about the dangers of building artificial people.
DOMIN
(laughing it off)
“A revolt of the Robots,” that’s a fine idea, Miss Glory. It would be easier for you to cause bolts and screws to rebel, than our Robots. You know, Helena, you’re wonderful, you’ve turned the heads of us all.
The play ends when the robots revolt, killing off their human creators.
Artificial intelligence may be the defining technology of our age. If we wish to keep it aligned with our best interests and under human control, we must first understand how it works.