Robert Oldershaw
Mar 14, 2019

Deep Neural Networks and Theoretical Breakthroughs

A growing number of theoretical physicists have come to regard their field as stagnating, given the disappointments of string/brane theory, supersymmetry, the WIMP conjecture, and the “nightmare scenario” that unfolded at the LHC, as well as the lack of progress in quantum gravity research and problems like the vacuum energy density crisis. The fact that the theoretically predicted vacuum energy density of the microcosm differs from the observed value by up to 120 orders of magnitude led the Nobel prize-winning particle physicist Frank Wilczek to make the following admission.

“We do not understand the disparity. In my opinion, it is the biggest and most profound gap in our current understanding of the physical world. … [The solution to the problem] might require inventing entirely new ideas, and abandoning old ones we thought to be well-established. … Since vacuum energy density is central to both fundamental physics and cosmology, and yet poorly understood, experimental research into its nature must be regarded as a top priority for physical science.”

It is possible that either the Standard Model of Particle Physics or the Standard Model of Cosmology, or possibly both, rest on one or more bedrock assumptions that are incorrect and have blinded us to a more elegant and unified understanding of the Universe. Clearly the problem is not with the experimental side of physics, which is advancing at an impressive rate on many fronts. The problem seems more likely to be that we are mentally confined within a sub-optimal Kuhnian paradigm, unable to see the next stage in paradigmatic evolution, which Feynman referred to as “the next great awakening”.

This is where deep neural networks might give us a little help, and perhaps much more than that. A sufficiently advanced neural net could be trained on all of the physical data found in the Handbook of Chemistry and Physics. It could learn about the force laws, conservation laws, and symmetries that we are aware of and that are well tested.

Crucially, if something in our current knowledge is merely assumed or not yet definitively tested, the neural net could be explicitly aware of that. Moreover, it could be instructed that science does not deal in “absolute knowledge”, but rather in imperfect models that hopefully grow ever better. Every piece of information could be given a confidence score: the masses of stable particles like the proton would get very high scores, while the masses of galaxies would get lower scores. The neural net should also be taught that at least 80% of the mass in the observable universe is in an unknown dark matter form, that the acceleration attributed to dark energy is not understood, and that we do not know how to unify quantum mechanics and general relativity. It should be informed that 20–30 key parameters have to be put into the Standard Models “by hand”, and that fundamental constants like the fine structure constant are not understood at a fundamental level.
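One way to picture this is a tiny sketch in Python. Everything in it, the fact names, values, uncertainties, and scores, is an illustrative assumption rather than a real dataset or a real training input; the point is only that each entry carries an explicit confidence alongside its value.

```python
# A minimal sketch of the confidence-scored knowledge base described above.
# All names, example values, and scores are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class PhysicsFact:
    name: str          # what the quantity or statement refers to
    value: float       # measured or assumed numerical value
    uncertainty: float # relative uncertainty of the value
    confidence: float  # 0.0 (pure assumption) to 1.0 (extremely well tested)

knowledge_base = [
    # precisely measured quantities get scores near 1.0
    PhysicsFact("proton mass (kg)", 1.67262192e-27, 3e-10, 0.999),
    PhysicsFact("fine-structure constant", 7.2973525693e-3, 1.5e-10, 0.999),
    # model-dependent quantities get noticeably lower scores
    PhysicsFact("mass of a typical galaxy (kg)", 1e42, 0.5, 0.4),
    # bedrock assumptions are flagged explicitly with very low scores
    PhysicsFact("dark matter is a WIMP", 1.0, 1.0, 0.05),
]

for fact in knowledge_base:
    print(f"{fact.name:35s} confidence={fact.confidence:.3f}")
```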

So the neural net could be taught everything we know that we don’t know, and everything we think we do know, with every physical measurement, modeling parameter, and theory qualified by rigorous confidence limits. Assumptions could be omitted, or given very low confidence scores. The neural net could then be asked to use pattern recognition or other strategies to propose paradigms that offer the most elegant and unified syntheses of the available knowledge, along with a confidence score for each paradigm.
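To make the “confidence limits” idea concrete, here is a rough Python sketch of how such scores could enter a scoring function: a candidate “paradigm” is penalized heavily for disagreeing with well-established facts and only lightly for disagreeing with assumptions. The facts, values, and candidate paradigms are all hypothetical stand-ins, not real models.

```python
# A rough sketch of confidence-weighted scoring, not a real training pipeline.
# Disagreement with trusted facts costs a lot; disagreement with assumptions
# costs almost nothing. All data below are illustrative assumptions.

import math

# (name, accepted value, confidence score) -- same idea as the sketch above
facts = [
    ("proton mass (kg)", 1.67262192e-27, 0.999),
    ("fine-structure constant", 7.2973525693e-3, 0.999),
    ("mass of a typical galaxy (kg)", 1e42, 0.4),
]

def weighted_disagreement(predictions, facts):
    """Confidence-weighted error in orders of magnitude; lower is better."""
    total = weight = 0.0
    for name, value, confidence in facts:
        if name not in predictions:
            continue
        err = (math.log10(predictions[name]) - math.log10(value)) ** 2
        total += confidence * err
        weight += confidence
    return total / weight if weight else float("inf")

# two hypothetical candidate "paradigms", each a set of predicted values
candidates = {
    "paradigm A": {"proton mass (kg)": 1.7e-27, "fine-structure constant": 7.3e-3},
    "paradigm B": {"proton mass (kg)": 1e-25, "fine-structure constant": 7.3e-3},
}

for name, preds in candidates.items():
    print(f"{name}: weighted disagreement = {weighted_disagreement(preds, facts):.3f}")
```

In any real system the candidates would come from a vast, learned model space rather than two hand-written dictionaries, but the confidence-weighting principle would be the same.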

It might make theoretical physics fun and exciting again.
