Nine-Year-Old Japanese Girl Becomes World's Youngest Professional Go Player

Sumire Nakamura will make her professional debut in April.

Sumire Nakamura, a nine-year-old from Japan, is set to become the world’s youngest-ever professional player of the game Go when she makes her debut later this year.

Nakamura, a primary school student from Osaka, started playing the strategy game aged three. She will begin her professional career on 1 April.

The previous youngest professional player was 11-year-old Rina Fujisawa. Nakamura’s father was a ninth-degree professional player who won a national title in 1998. 

Training program introduces the next generation to the game

The talented nine-year-old was trained in the game partly through a special programme aimed at elevating budding talent and creating a new generation of top Japanese players who can compete with their Chinese and Korean counterparts in international tournaments.

At a press conference, Nakamura told the assembled crowd that she loves to win and hopes to claim a title while she is still at junior high school.

Go is a strategy game in which players compete to occupy territory on the board by placing black or white stones on a 19 x 19 grid. It can become incredibly complex.

A game starts with 181 black and 180 white stones to hand, and play across the board's 361 points can produce an astonishing 10 to the power of 170 possible positions. Chess, by comparison, has about 10 to the power of 60 possible moves.
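To give a rough sense of where a figure that large comes from, here is a back-of-envelope sketch (an illustration for this article, not an exact count): each of the board's 361 points can be empty, black or white, which alone gives roughly 10 to the power of 172 raw configurations, of which the oft-quoted 10 to the power of 170 are legal positions.

```python
# Back-of-envelope upper bound on Go board configurations (illustrative only).
# Each of the 19 x 19 = 361 points is empty, black, or white; the widely
# quoted ~10^170 figure counts only the legal subset of these configurations.
import math

points = 19 * 19                 # 361 intersections
raw_configs = 3 ** points        # every point: empty / black / white
print(f"about 10^{int(math.log10(raw_configs))} raw configurations")  # ~10^172
```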

Go is thought to have originated in China more than 2,500 years ago. There are around 20 million active players worldwide, mostly in East Asia. 

DeepMind's AlphaGo beats world's best

The ancient game has made headlines in other ways in the last few years thanks to the development of the deep neural network system AlphaZero by Google's DeepMind.

The system can teach itself challenging games like chess, shogi (Japanese chess) and Go to the level where it can beat the world’s best players, despite starting its training from random play, with no inbuilt domain knowledge beyond the basic rules of the game.

To learn the games, an untrained neural network plays millions of games against itself via a process of trial and error called reinforcement learning. 

Initially, these games are played entirely at random, but over time the system learns which moves and strategies result in wins and losses and adjusts its gameplay accordingly, so that it more consistently chooses advantageous moves.
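To make that loop concrete, here is a minimal self-play sketch, written for this article rather than drawn from DeepMind's code: a simple lookup-table learner on a toy "take one to three stones, last stone wins" game stands in for the neural network and for Go itself. The game, the ten-stone starting position and the parameter values are illustrative assumptions; the point is the shape of the loop, in which the program plays against itself, credits the moves on the winning side, penalises those on the losing side, and repeats.

```python
# A toy self-play reinforcement learning loop (illustrative only; this is not
# DeepMind's code, and the game here is "take 1-3 stones, last stone wins",
# not Go). Both players share one value table and start out playing randomly.
import random
from collections import defaultdict

Q = defaultdict(float)        # Q[(stones_left, take)] -> estimated value for the mover
ALPHA, EPSILON = 0.1, 0.1     # learning rate and exploration rate

def choose(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPSILON:                     # sometimes explore at random
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(stones, m)])   # otherwise pick the best-known move

def play_one_game(start=10):
    stones, history = start, []                       # history of (state, move) pairs
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    outcome = 1.0                                      # the player who took the last stone wins
    for state, move in reversed(history):              # walk back through the game
        Q[(state, move)] += ALPHA * (outcome - Q[(state, move)])
        outcome = -outcome                             # the other player's perspective flips sign

for _ in range(20000):                                 # AlphaZero plays millions of games
    play_one_game()

# The greedy policy should now prefer the winning move in most positions
# (leave the opponent a multiple of four stones).
print({s: max((1, 2, 3), key=lambda m: Q[(s, m)]) for s in range(4, 11)})
```

AlphaZero swaps the lookup table for a deep neural network and guides its move selection with a tree search, but the trial-and-error loop it learns from is the same basic idea.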


The more complex the game, the more training the network needs: about nine hours for chess, 12 hours for shogi, and 13 days for Go.

Neural network self-trains

Unlike traditional chess engines such as IBM’s Deep Blue, which ‘rely on thousands of rules and heuristics handcrafted by strong human players that try to account for every eventuality in a game’, AlphaZero creates its own style from its learning journey.

This unique style will be examined in detail in a forthcoming book called Game Changer being written by Chess Grandmaster Matthew Sadler and Women’s International Master Natasha Regan, who have analyzed thousands of AlphaZero’s chess games.
