Games AI Plays

published on 26 March 2022

Let’s take a look at the competitions between humans and machines in games.

Deep Blue Defeats Garry Kasparov

I could sense a new kind of intelligence.  

Garry Kasparov

In 1997, IBM’s Deep Blue beat the world’s best chess player at the time, Garry Kasparov.

Deep Blue’s powerful CPUs were massively parallel and could assess 200 million positions per second. It had a library of thousands of opening, middle-game, and endgame moves and could compare candidate moves against a database of 700,000 grandmaster games. It could see up to eight moves ahead.

Kasparov got off to a good start and won the first game, but he lost the second and went on to lose the match.

Deep Blue wasn’t an AI system that learned on its own. It was an extremely fast computer that had essentially “brute-forced” its way to victory.
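The brute-force idea can be shown in miniature with the textbook minimax algorithm on tic-tac-toe. Deep Blue’s real search (massively parallel alpha-beta with hand-tuned evaluation functions) was far more sophisticated, but the core principle is the same: enumerate the moves, score the outcomes, pick the best.

```python
# Exhaustive minimax search on tic-tac-toe. The board is a 9-character
# string of "X", "O", or " " (empty), read left to right, top to bottom.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`: +1 if X can force a win, -1 if O can."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    free = [i for i, s in enumerate(board) if s == " "]
    if not free:
        return 0, None   # draw
    best = None
    for m in free:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, "O" if player == "X" else "X")
        if best is None or (player == "X" and score > best[0]) \
                or (player == "O" and score < best[0]):
            best = (score, m)
    return best

# Exhaustive search confirms that perfect tic-tac-toe play is a draw:
print(minimax(" " * 9, "X")[0])   # 0
```

Deep Blue applied the same exhaustive principle to chess, where the tree is vastly larger — hence the need for hardware that could examine 200 million positions per second.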

IBM's Watson Becomes Jeopardy Champion

In Jeopardy, players have to come up with the question that matches a given answer. IBM programmed Watson to understand everyday language and respond in kind. In AI, this is the field of natural language processing.

IBM spent four years building Watson. Watson's database included over 250 million pages of information from dictionaries, encyclopedias, books, newspapers, and the full text of Wikipedia.

Watson can process the equivalent of over a million books per second. It’s capable of forming hypotheses and ranking them at lightning speed. It thinks much like a person: searching through unstructured data and determining the best way to answer a question. It can understand human language and categorize the meaning of words and sentences in the appropriate context.

In 2011, Watson stood on a stage between Brad Rutter and Ken Jennings, the blue light on its face revolving as it spoke quietly in a man’s voice. It correctly found question after question to become the world Jeopardy champion.

AlphaGo Defeats the Reigning Go Champion

Go is a two-player board game in which the aim is to capture more territory than the opponent.

What makes Go difficult is that, unlike Chess, it has very few rules. As a result, the number of possible moves is astronomically larger than in Chess: the number of possible board configurations in Go exceeds the number of atoms in the known universe. How can a computer deal with the exponentially greater mathematical complexity of Go? Not by brute force alone. Computational power is insufficient without something more.
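A back-of-the-envelope calculation makes the gap concrete. The figures below are rough, commonly quoted approximations (about 35 legal moves per Chess position over an ~80-ply game, about 250 per Go position over ~150 plies), not exact counts:

```python
# Rough game-tree size comparison: branching_factor ** game_length.
chess_tree = 35 ** 80
go_tree = 250 ** 150
atoms_in_universe = 10 ** 80   # commonly cited order-of-magnitude estimate

print(len(str(chess_tree)) - 1)       # chess tree ~ 10^123
print(len(str(go_tree)) - 1)          # Go tree ~ 10^359
print(go_tree > atoms_in_universe)    # True
```

Even at 200 million positions per second, no amount of raw search could exhaust a space of that size; something beyond brute force is required.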

In 2016, DeepMind, a London-based company that Google had acquired two years earlier, unveiled AlphaGo. AlphaGo studied thousands of human games and played against itself millions of times, improving with each game.

AlphaGo beat Lee Sedol, who was the reigning world Go champion. 

AlphaGo played with an apparent creativity, departing from conventional human playing styles.

Lee later commented that the computer had displayed human-level intuition.

Video games

Games have evolved from two-dimensional playing surfaces of board games into complex, three-dimensional virtual landscapes of video games.

Dota 2 is a video game produced by Valve Corporation, an American video game company.

OpenAI developed a program called OpenAI Five that could play Dota 2. It played tens of thousands of games against itself, accumulating the equivalent of roughly 180 years of gaming time each day.

In April 2019, OpenAI Five beat a top professional team of Dota 2 players.


The primary accelerant of AI has been “deep learning,” a machine learning technique based on the use of multilayered artificial neural networks.
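The phrase “multilayered artificial neural network” can be made concrete with a minimal two-layer sketch in Python using NumPy. The layer sizes and random inputs here are arbitrary stand-ins for illustration, not any real system’s architecture:

```python
import numpy as np

# A minimal "deep" network: two layers of weights with a nonlinearity
# between them. Real game-playing networks are far larger and are
# trained by gradient descent on self-play data; this only shows the
# basic structure of stacked layers.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

def forward(x, params):
    (w1, b1), (w2, b2) = params
    h = relu(x @ w1 + b1)    # hidden layer
    return h @ w2 + b2       # output layer (e.g. a single position score)

params = [
    (rng.normal(size=(9, 16)), np.zeros(16)),   # 9 inputs -> 16 hidden units
    (rng.normal(size=(16, 1)), np.zeros(1)),    # 16 hidden units -> 1 score
]

board = rng.normal(size=9)   # stand-in for an encoded game position
print(forward(board, params).shape)   # (1,)
```

Stacking more such layers is what puts the “deep” in deep learning: each layer transforms the previous one’s output, letting the network learn progressively more abstract features of a position.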

Recent deep learning models that play games are entirely self-taught. They aren’t given any human games to study; instead, they work everything out from scratch by playing millions and millions of games against themselves. These deep learning models are more powerful because they are free of the constraints of human knowledge.
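The idea of learning purely from self-play can be sketched with a toy example: two copies of the same policy play simplified Nim (take one or two stones; whoever takes the last stone wins) and update a shared value table from each game’s outcome. This illustrates only the principle; real systems use deep networks and far more sophisticated training, not this tabular scheme.

```python
import random

random.seed(0)

N = 10   # starting number of stones
Q = {}   # Q[(stones, move)] -> estimated value of `move` for the player to act

def moves(stones):
    return [m for m in (1, 2) if m <= stones]

def choose(stones, eps):
    """Epsilon-greedy move selection over the shared value table."""
    if random.random() < eps:
        return random.choice(moves(stones))
    return max(moves(stones), key=lambda m: Q.get((stones, m), 0.0))

for episode in range(20000):
    stones, history = N, []
    while stones > 0:
        m = choose(stones, eps=0.2)   # both "players" use the same policy
        history.append((stones, m))
        stones -= m
    # The player who took the last stone won. Walk the game backwards,
    # nudging each visited (state, move) toward the mover's outcome.
    outcome = 1.0
    for stones, m in reversed(history):
        old = Q.get((stones, m), 0.0)
        Q[(stones, m)] = old + 0.1 * (outcome - old)
        outcome = -outcome   # players alternate, so the sign flips

# After training, greedy play should leave the opponent a multiple of 3:
print(choose(4, eps=0.0), choose(5, eps=0.0))
```

No human strategy is ever provided, yet with enough self-play the table converges on the known winning strategy for this game (always leave the opponent a multiple of three stones) — learned entirely from the outcomes of games against itself.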
