AlphaGo Just Beat the World Champion at Go (Could AI Help Theoretical Physics?)
Last year AlphaGo, a deep-learning AI program that plays Go, beat Lee Sedol, a top-ranked South Korean Go player. On May 23, 2017, it beat Ke Jie, the Chinese world champion. AlphaGo was developed by DeepMind, the AI subsidiary of Alphabet Inc., Google's parent company.
Since Go is widely considered the most complex game humans play, AI has reached a significant milestone, and it comes not long after AI's striking advances in chess, medical diagnosis, self-driving vehicles, and other fields.
Here are two things about AlphaGo that have a bearing on whether or not it could help theoretical physics out of its apparent rut.
- AlphaGo did not approach the game of Go theoretically but rather empirically, i.e., experientially. It played as many games of Go as possible and learned how to play from what worked and what did not. Where a theorist looks to abstract mathematics for guidance, a natural philosopher looks to nature itself. AlphaGo, acting more like the natural philosopher, studied the actual game empirically and observationally.
- As noted by the article linked above: “Players have praised the technology’s ability to make unorthodox moves and challenge assumptions core to a game that draws on thousands of years of tradition.” This may be of great help to theoretical physicists, who are often bound to old and deeply entrenched assumptions that may be thwarting progress without the physicists realizing it. Deep-learning AI programs, if fed enough well-tested and verified information gathered from observation of nature, might be able to suggest where our current assumptions have led us astray.
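The empirical, self-play learning style described in the first point can be illustrated with a toy example. The sketch below is hypothetical and far simpler than AlphaGo's actual method (which combines deep neural networks with Monte Carlo tree search): it uses tabular Q-learning, via self-play, on single-pile Nim, where each player removes 1-3 stones and whoever takes the last stone wins. The agent is given no theory of the game; it learns a strategy purely from the outcomes of games it plays against itself.

```python
import random
from collections import defaultdict

# Hypothetical toy sketch, NOT AlphaGo's algorithm: tabular Q-learning
# via self-play on single-pile Nim (take 1-3 stones; taking the last
# stone wins).  The agent learns only from games it plays itself.

random.seed(0)
Q = defaultdict(float)   # Q[(stones_left, stones_taken)] -> estimated value
ALPHA, EPS = 0.5, 0.2    # learning rate, exploration probability

def legal(stones):
    return [a for a in (1, 2, 3) if a <= stones]

def best(stones):
    return max(legal(stones), key=lambda a: Q[(stones, a)])

for _ in range(20000):                       # self-play episodes
    stones = random.randint(1, 12)
    while stones > 0:
        # epsilon-greedy: mostly exploit what worked, sometimes explore
        a = random.choice(legal(stones)) if random.random() < EPS else best(stones)
        nxt = stones - a
        if nxt == 0:
            target = 1.0                     # took the last stone: a win
        else:
            # the opponent moves next, so this position is worth minus
            # the opponent's best value (negamax-style backup)
            target = -max(Q[(nxt, b)] for b in legal(nxt))
        Q[(stones, a)] += ALPHA * (target - Q[(stones, a)])
        stones = nxt

# Inspect the learned policy for small piles
print({s: best(s) for s in range(1, 9)})
```

With enough episodes the agent should rediscover the classic Nim strategy, leaving its opponent a multiple of four stones, without ever being told the rule. That is the empirical route in miniature: the regularity is found in the data of played games, not derived from theory first.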
I may be a bit biased, but I think I know of at least one unifying paradigm that an advanced AI would seriously consider: a cosmos governed by well-stratified (i.e., discrete) fractal self-similarity.