Computational Neuroscience: A Conversation with Hidenori Tanaka

By NTT Staff

It makes sense that someone with a Ph.D. in Applied Physics would feel at home in the Physics & Informatics (PHI) Lab. What distinguishes Dr. Hidenori Tanaka, one of the research scientists in the PHI Lab, is his keen interest in biological and artificial neural networks.

Just as investigative journalists are told to “follow the money,” Dr. Tanaka’s working principle has been to follow “new quantitative experiments.” What that has meant in practical terms is that after receiving his doctorate, he became a fellow and visiting scholar at Stanford University, where he has not only worked with Professors Surya Ganguli and Daniel Fisher in the Department of Applied Physics, but has also been able to collaborate with Professor Stephen Baccus in the Department of Neurobiology, a field that has been generating plenty of fresh experimental data in recent years.

In particular, along with Professors Ganguli and Baccus, Dr. Tanaka was a lead author of a paper presented at NeurIPS 2019 that advanced basic understanding of biological neural networks in the brain through artificial neural networks. In the conversation below, Dr. Tanaka discusses his research interests, the intersection of physics and neuroscience, and some follow-up from the NeurIPS paper.

How did you become interested in neural networks? Was that at Harvard? Could you say a few words about your dissertation?

One of my guiding principles in research is that where there are new quantitative experiments, there is a chance to build new theoretical physics. Historically, experimental data generated by telescopes, particle accelerators and semiconductor technology have been critical drivers in astrophysics, particle physics and condensed matter physics. Currently, experimentalists and practitioners are making revolutionary progress in the fields of biological and artificial neural networks. Towards the end of my Ph.D. at Harvard, I became quite excited about identifying the fresh mysteries that these systems generate.

The two senior authors on the NeurIPS paper are Stanford Professors Ganguli and Baccus. The former is a physicist (and your mentor at Stanford); the latter is a neurobiologist. What do you think physicists bring to an investigation of neural networks?

The theoretical and computational study of intelligent systems is a very interdisciplinary field where people from neuroscience, psychology, physics, mathematics, computer science and more meet. Some of these disciplines bring experimental techniques, while others bring rigorous mathematical analysis. Physicists stand in between, applying the tools of applied mathematics to model data that directly interfaces with nature.

You discussed pruning in the paper on deep learning and retinal prediction; now you’ve co-authored another paper with that concept in the title. Could you summarize and compare these two papers?

The central challenge in theoretical modeling is to capture the essence of natural phenomena while keeping the model as simple as possible. Bringing that aspiration for simplicity to the study of neural networks, we recently worked on two projects: first, a model reduction method that carves out important sub-circuits from complex deep neural networks, helping us understand the brain; and second, a model compression algorithm that increases the time, memory and energy efficiency of deep neural networks.

What kind of applications do you foresee for the SynFlow algorithm in the recent preprint?

First, the theory we presented in the preprint stands as a guiding principle for designing pruning algorithms that avoid a major failure mode known as "layer-collapse." The algorithm we propose, SynFlow, encapsulates these theoretical ideas by design. Furthermore, SynFlow demonstrated that it is possible to successfully prune neural networks at initialization without ever looking at data, which challenges the existing paradigm that data must be used to quantify which synapses are important.
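To make the data-free idea concrete, here is a minimal sketch in PyTorch of a synaptic-flow-style saliency score. It is an illustration under stated assumptions, not the paper's reference implementation: the helper name synflow_scores and the toy multilayer perceptron are hypothetical, and the single-shot pruning step at the end stands in for the iterative schedule the actual SynFlow algorithm uses.

```python
# Minimal sketch (not the authors' code): score every weight without any data
# by linearizing the network with absolute-valued parameters, pushing an
# all-ones "input" through it, and using |theta * dR/dtheta| as saliency.
import torch
import torch.nn as nn

def synflow_scores(model, input_shape):
    # Temporarily replace parameters by their absolute values, remembering signs.
    signs = {}
    for name, param in model.named_parameters():
        signs[name] = torch.sign(param.data)
        param.data.abs_()

    model.zero_grad()
    ones = torch.ones(1, *input_shape)   # all-ones input, no real data involved
    R = model(ones).sum()                # scalar "synaptic flow" through the net
    R.backward()

    scores = {name: (param.grad * param.data).abs()
              for name, param in model.named_parameters()}

    # Restore the original signed parameters.
    for name, param in model.named_parameters():
        param.data *= signs[name]
    return scores

# Example: score a toy MLP at initialization and zero out the lowest-scoring
# half of each parameter tensor (a crude single-shot prune for illustration).
model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
scores = synflow_scores(model, input_shape=(784,))
for name, param in model.named_parameters():
    s = scores[name]
    threshold = s.flatten().kthvalue(max(1, s.numel() // 2)).values
    param.data[s <= threshold] = 0.0
```

Note that the preprint's algorithm prunes iteratively over many rounds with a sparsity schedule rather than in one shot; that iteration is what lets it reach extreme sparsities while avoiding layer-collapse.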

How relevant is this work to the other research in the PHI Lab?

At the PHI Lab, we are fundamentally rethinking the computer by drawing inspiration broadly from physical and computational principles in the natural world, such as quantum mechanics, optics and neural networks. In these works, we unified tools and ideas from physics, neuroscience and machine learning: first, to better understand the mechanisms of information processing in the brain; and second, to increase the computational efficiency of modern deep neural networks for various applications.

Could you talk a bit about your ongoing work or other forthcoming articles?

Inspired by practical issues in model compression, we are now deepening the theoretical framework of conservation laws in neural networks. I believe we can identify new directions through this interplay of practical problem solving and theoretical formulation.
