Upgrade 2021: PHI Lab Speakers

September 21, 2021

Utilizing Biological Neural Networks to Optimize Artificial Neural Networks

Hidenori Tanaka, Research Group Leader | NTT Research Physics & Informatics Lab

Summary

While machine learning and the development of artificial neural networks have accelerated in recent years, physicists seeking to further improve the performance of these networks have turned to biological neural networks, in hopes of developing artificial networks with even greater computational power.

In his talk at Upgrade 2021, Hidenori Tanaka, Senior Scientist and Head of the Neural Network Group at the NTT Research Physics & Informatics (PHI) Lab, discussed the benefits of studying biological neural networks and how combining properties of natural and artificial networks can lead to greater efficiency.

“The key insight in building our theory is to make a parallel between the dynamics of parameters during learning and the physical process,” Tanaka said.

Every neural network, whether artificial or biological, learns from a large and evolving set of data. In a computer, however, learning dynamics unfold as a sequence of discrete updates to the network’s parameters, whereas biological systems evolve continuously in time. To bridge this gap, the researchers model learning dynamics in continuous time, drawing elements from classical mechanics; the result is a general framework that applies to any kind of learning system.
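As a concrete illustration (ours, not one spelled out in the talk), the standard gradient-descent update can be read as the Euler discretization of a continuous-time differential equation; sending the step size to zero recovers the “continuous time approximation” Tanaka refers to below:

\[
\theta_{k+1} = \theta_k - \eta \,\nabla L(\theta_k)
\qquad \xrightarrow{\;\eta \to 0\;} \qquad
\frac{d\theta}{dt} = -\nabla L(\theta),
\]

where \(\theta\) denotes the network parameters, \(L\) the loss, and \(\eta\) the learning rate.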

A fundamental part of this research was finding equations that describe these learning dynamics accurately. Artificial networks have symmetry properties that differ from those of typical physical systems, so the researchers combined several tools from classical mechanics to account for these disparities: equations of motion derived from a Lagrangian function, together with Noether’s theorem, which associates every continuous symmetry with a conserved quantity. This combination can address the differences and improve the efficiency of neural network training.
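Here is a minimal sketch of how such a conservation law shows up in practice (our illustration in JAX, not code from the talk; all names are hypothetical). It uses a well-known symmetry of ReLU networks: rescaling a hidden unit’s incoming weights by a factor and its outgoing weights by the inverse leaves the network’s function unchanged, and Noether-style reasoning predicts that the difference between the squared norms of those weights stays constant under (near-)continuous gradient flow.

```python
# Sketch: a rescaling symmetry of a ReLU network and its conserved quantity.
# Assumes JAX is installed; all names here are our own illustration.
import jax
import jax.numpy as jnp

def loss(params, x, y):
    w1, w2 = params
    h = jax.nn.relu(w1 @ x)               # hidden activations
    return jnp.mean((w2 @ h - y) ** 2)    # mean squared error

def conserved(params):
    w1, w2 = params
    # Per hidden unit: ||incoming||^2 - ||outgoing||^2. This is the invariant
    # predicted by the symmetry w1_i -> a * w1_i, w2_i -> w2_i / a.
    return jnp.sum(w1**2, axis=1) - jnp.sum(w2**2, axis=0)

key = jax.random.PRNGKey(0)
k1, k2, kx, ky = jax.random.split(key, 4)
params = (0.5 * jax.random.normal(k1, (8, 4)),   # w1: 4 inputs -> 8 hidden
          0.5 * jax.random.normal(k2, (3, 8)))   # w2: 8 hidden -> 3 outputs
x = jax.random.normal(kx, (4, 32))               # toy batch of 32 inputs
y = jax.random.normal(ky, (3, 32))               # toy targets

q0 = conserved(params)
grad_fn = jax.jit(jax.grad(loss))
lr = 1e-3  # a small step approximates continuous-time gradient flow
for _ in range(2000):
    g1, g2 = grad_fn(params, x, y)
    params = (params[0] - lr * g1, params[1] - lr * g2)

drift = jnp.max(jnp.abs(conserved(params) - q0))
print("max drift of the conserved quantity:", drift)  # small for small lr
```

The smaller the learning rate, the more closely the discrete updates track the continuous flow and the better the quantity is conserved, mirroring the continuous-time approximation described above.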

When the researchers used this combination of equations to account for the differing properties of biological and artificial neural networks, the theoretical and experimental results matched exactly.

“Unlike many of the previous theories, we didn’t need to make any big assumption here except for the continuous time approximation,” Tanaka said, “and this experiment basically suggests that our continuous time model is really matching the reality at the largest scale.”

Through this research, physicists are not only getting closer to solving challenging optimization problems with neural networks; they may also uncover new potential use cases for these systems.

“In the future, it’s going to be great to have more conversations with people at PHI Lab and collaborators to see how these kinds of views on learning can be applied to a broader class of neural networks beyond just deep learning,” Tanaka said.

Hidenori Tanaka

Research Group Leader | NTT Research Physics & Informatics Lab

Hidenori Tanaka is a theorist who is fascinated by questions at the interface of physics, neuroscience, and machine learning. His guiding questions include: What can deep learning models tell us about the computational mechanisms of the brain? What is the learning algorithm governing our brain? How are these mechanisms realized while respecting the laws of physics? At the PHI Lab, he aims to harness scientific discoveries that lead to more natural intelligent algorithms and hardware.