Neural Mechanics and Symmetry: Hidenori Tanaka on NeurIPS 2020

Last month’s Neural Information Processing Systems conference (NeurIPS 2020), which took place (virtually) from December 6-12, 2020, was another occasion for Physics & Informatics (PHI) Lab Senior Research Scientist Hidenori Tanaka to advance work that he has undertaken with Stanford University colleagues on the dynamics of neural networks. In a previous blog post, we discussed Dr. Tanaka’s paper at NeurIPS 2019. As he explained, last year’s paper “suggests that we can model the complex brain with complex artificial neural networks, perform model-reduction on those networks and gain intuition and understand how the brain operates.” This year’s paper focuses on model compression via “pruning” and, in particular, on how to prune without any data by leveraging the concept of symmetry. For more on this year’s conference, reaction to the paper and Dr. Tanaka’s future plans, please take a look at the following interview:
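
(Editor’s note: to make the “without any data” idea concrete, the sketch below shows a SynFlow-style saliency score in PyTorch. The function name, the all-ones input and the single-pass form are illustrative simplifications under our own assumptions, not the exact procedure from the paper, which applies such scores iteratively over several pruning rounds.)

```python
import copy
import torch

def synflow_style_scores(model, input_shape):
    """Score parameters without any training data (SynFlow-style sketch).

    A constant all-ones input is pushed through a copy of the network whose
    weights have been replaced by their absolute values; each parameter is
    then scored by |weight * gradient| of the resulting scalar output.
    """
    model = copy.deepcopy(model)          # keep the original weights intact
    for p in model.parameters():
        p.data = p.data.abs()             # linearize: use |weights| only

    ones = torch.ones(1, *input_shape)    # synthetic input, no data needed
    model(ones).sum().backward()          # scalar "synaptic flow" objective

    return {name: (p.detach() * p.grad).abs()
            for name, p in model.named_parameters()}

# Example: score the weights of a small MLP; the lowest-scoring
# parameters would be the candidates for removal.
mlp = torch.nn.Sequential(torch.nn.Linear(784, 256),
                          torch.nn.ReLU(),
                          torch.nn.Linear(256, 10))
scores = synflow_style_scores(mlp, (784,))
```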

How did your Pruning Neural Networks paper at NeurIPS 2020 go? Did the virtual format allow for good questions and feedback?

Yes. NeurIPS 2020 was held on an online platform called “Gather.town.” It was a fresh experience for me to walk around in a virtual poster room that nicely emulates the physical conference site. It even recreated spontaneous encounters with people. I’d like to thank the NeurIPS organizers for a great experience during these unusual times.

What kind of questions and comments have you received so far?

We have received positive feedback. In particular, we are happy to find that our proposed method to prune neural networks “without any data” has been received as a fresh idea with potential. We also received helpful suggestions for improvements that we look forward to pursuing, in particular from people who are working on practical deployments of deep learning models in the real world.

How about the work on Neural Mechanics – you presented that paper at a separate workshop, correct? In what ways does it differ from the Pruning paper?

Yes, I’m very excited that another work with Daniel Kunin, Surya Ganguli and collaborators at Stanford is out. Our newest work, “Neural Mechanics,” studies the learning dynamics of neural networks theoretically, and it was originally inspired by a conservation law that we harnessed in the pruning paper.

How was it received? Are you making any adjustments to your argument or planning any particular follow-up research?

The full paper version of the work is out on arXiv. In this new work, we present a theoretical framework that simplifies, unifies and generalizes an array of results in deep learning theory by drawing parallels with the framework of analytical mechanics in physics. Symmetry is one of the most beautiful and powerful concepts in modern physics, and we’ve harnessed it to solve for the learning dynamics of millions of parameters.
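
(Editor’s note: for readers curious what a symmetry-derived conservation law looks like, here is a standard illustrative example rather than an excerpt from the paper: in a two-layer network with a homogeneous nonlinearity such as ReLU, the loss has a rescaling symmetry, and gradient flow then conserves the difference of the squared layer norms.)

```latex
% Rescaling symmetry: the loss is unchanged under
%   W_1 \mapsto \lambda W_1, \quad W_2 \mapsto W_2 / \lambda .
% Differentiating L(\lambda W_1, W_2/\lambda) = L(W_1, W_2) at \lambda = 1:
\langle W_1, \nabla_{W_1} L \rangle - \langle W_2, \nabla_{W_2} L \rangle = 0 .
% Under gradient flow \dot{W} = -\nabla_W L, this yields a conserved quantity:
\frac{d}{dt}\left( \lVert W_1 \rVert^2 - \lVert W_2 \rVert^2 \right)
  = 2\langle W_1, \dot{W}_1 \rangle - 2\langle W_2, \dot{W}_2 \rangle
  = -2\left( \langle W_1, \nabla_{W_1} L \rangle - \langle W_2, \nabla_{W_2} L \rangle \right)
  = 0 .
```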

Looking ahead at the new year, what do you see as any further developments or trends regarding deep learning models?

The original success of deep learning was in image recognition, and people called it a “black box” whose machinery is very hard to understand. Over the last few years, the research community has been making progress both in applying deep learning to broader problems and in better understanding the machinery. In particular, this year, I’ve seen an increasing number of works where people apply deep learning to solve scientific problems.

Looking forward, we are excited to contribute to the rapidly expanding frontier of neural computation in the PHI Lab: by harnessing the theoretical frameworks of physics to better understand the complex learning dynamics of deep neural networks, by applying the resulting methods to solve problems in physics, and by rethinking the foundational paradigm of current neural computation from the perspectives of physical hardware and biological computation in the brain.
