Rethinking Computation: When Quantum Information and Brain Science Intersect

By NTT Research PHI Lab

In December 2019, we welcomed Dr. Hidenori Tanaka to the Physics and Informatics (PHI) Lab; he had previously been a post-doctoral fellow and visiting scholar at Stanford University. While there, he was the lead author of a paper presented at NeurIPS 2019, a leading machine learning conference. This paper is one of many published by members of our PHI Lab since December, but we wanted to highlight it first for several reasons.

First, the paper, titled “From deep learning to mechanistic understanding in neuroscience: the structure of retinal prediction,” allows us to address a question about our strategy. The PHI Lab is engaged in three overlapping areas of research: optical technologies, which form the basis of the Coherent Ising machine (CIM); quantum information science, a category that includes quantum computing; and neuroscience. How does neuroscience fit with the other two disciplines? The answer is that our broader goal is to apply fundamental principles of intelligent systems, including the brain, to radically redesign artificial computers, both classical and quantum.

This paper also illustrates a multi-disciplinary approach that we endorse. Research began through collaboration between the labs of Stanford University Professors Surya Ganguli and Stephen Baccus, the paper’s two senior authors. Dr. Ganguli, one of four Stanford professors who are lead investigators on collaborative projects with our PHI Lab, is an associate professor in the Department of Applied Physics. Dr. Baccus is a professor in the Department of Neurobiology. In other words, this work was conducted at the intersection of two departments: applied physics and neurobiology.

Finally, the paper itself is significant. While the authors note that deep feedforward neural networks have successfully modeled biological sensory processing, they raise the profound question of whether this simply substitutes one complex system for another. Their answer is no: the convolutional neural network employed in this research does more than trade complexity; it actually advances knowledge. 

The authors were able to show that in the case of the retina, complex models derived from machine learning can not only replicate sensory responses but also generate valid hypotheses about computational mechanisms in the brain. “Unlike natural systems that physicists usually deal with, our brain is notoriously complicated and rejects simple mathematical models,” said Dr. Tanaka. “Our paper suggests that we can model the complex brain with complex artificial neural networks, perform model-reduction on those networks and gain intuition and understanding of how the brain operates.”
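
To make the idea of model reduction concrete, here is a minimal sketch in Python. It is not the paper’s own procedure, which involves a CNN trained on retinal recordings; instead it uses a hypothetical toy network (the names `respond`, `W1`, `w2`, and the random-weight setup are ours, for illustration only) in which a few hidden units dominate the output. Ranking units by their contribution and keeping only the dominant few yields a far simpler model that still tracks the full one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained network: a random two-layer model in which
# a handful of hidden units carry most of the output (a hypothetical
# setup for illustration, not the paper's retinal CNN).
n_in, n_hidden, n_key = 50, 200, 20
W1 = rng.normal(0.0, 1.0, (n_hidden, n_in))
w2 = rng.normal(0.0, 0.05, n_hidden)
w2[:n_key] = rng.normal(0.0, 1.0, n_key)   # a few dominant units

def respond(x, units=slice(None)):
    """Model output using only the selected hidden units."""
    h = np.maximum(W1[units] @ x, 0.0)      # ReLU hidden layer
    return w2[units] @ h

# Model reduction step: rank hidden units by the variance of their
# weighted activations over many stimuli, then keep only the top few.
stimuli = rng.normal(0.0, 1.0, (1000, n_in))
contrib = np.array([w2 * np.maximum(W1 @ x, 0.0) for x in stimuli])
keep = np.argsort(contrib.var(axis=0))[::-1][:n_key]

# The reduced model closely tracks the full one on held-out stimuli.
test = rng.normal(0.0, 1.0, (200, n_in))
full = np.array([respond(x) for x in test])
reduced = np.array([respond(x, keep) for x in test])
print(f"correlation, full vs. reduced: {np.corrcoef(full, reduced)[0, 1]:.3f}")
```

The point of the exercise is the payoff described in the quote above: once a complex model is reduced to a few dominant components, those components can be inspected and interpreted directly.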

Model reduction is a key contribution, and Dr. Tanaka is leading a follow-up effort that introduces a new pruning algorithm. Whereas the 2019 paper sought a better understanding of the brain by reducing the complexity of models of biological neural networks, this year’s submission aims to make deep learning more powerful and efficient by removing parameters from artificial neural networks.
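
The new algorithm itself is not detailed in this post, but the general mechanics of parameter removal are easy to sketch. Below is a generic baseline, iterative magnitude pruning, which assumes nothing about the follow-up method: at each step the smallest-magnitude surviving weights are masked out, leaving a much sparser parameter set.

```python
import numpy as np

rng = np.random.default_rng(1)

# A dense weight matrix standing in for one layer of a trained network.
W = rng.normal(0.0, 1.0, (100, 100))
mask = np.ones_like(W, dtype=bool)

# Iterative magnitude pruning: at each step, drop the half of the
# surviving weights with the smallest absolute values.
for step in range(5):
    threshold = np.quantile(np.abs(W[mask]), 0.5)
    mask &= np.abs(W) > threshold
    print(f"step {step}: {1.0 - mask.mean():.1%} of weights removed")

W_pruned = W * mask   # sparse weights used at inference time
```

In practice, pruning steps like these are typically interleaved with retraining so that accuracy recovers as parameters are removed.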

Our PHI Lab team as a whole is advancing on several fronts. We will soon highlight other papers already published this year and provide an update on the results of significant CIM-related experiments.
