Upgrade 2021: PHI Lab Speakers

September 21, 2021

The Potential of Optical Neural Networks to Overcome Electronic Hardware Limitations

Ryan Hamerly, Senior Scientist, NTT Physics and Informatics Lab

Summary

Convolutional neural networks have seen significant advancements over the past 20 years, but further improvements are difficult to come by due to the limitations of modern hardware. Scientists are working to overcome these limitations by integrating optics, with the goal of expanding the capabilities of deep neural networks.

In his talk at Upgrade 2021, the NTT Research Summit, Dr. Ryan Hamerly, Senior Scientist at NTT Research, discussed how the use of optics in computing can overcome the limitations of chip hardware and minimize energy consumption.

In the early days of neural network development, existing hardware was sufficient. But as deep neural networks improved, they demanded ever more data and computation, and as the amount of input data to a neural network grows, the serial performance of microprocessors eventually flatlines.

“The goal of any neural network hardware is going to be to minimize the energy consumption, normalized to your performance,” Hamerly said. “We’re more and more limited in the performance of chips, not due to the energy consumption in processing itself, but due to the energy consumption and data movement of interconnects.”
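The figure of merit Hamerly describes, energy normalized to performance, is commonly expressed as energy per multiply-accumulate (MAC). A minimal sketch of that back-of-envelope calculation, using hypothetical numbers not taken from the talk:

```python
def energy_per_mac(power_watts: float, throughput_macs_per_s: float) -> float:
    """Energy consumed per multiply-accumulate operation, in joules:
    total power divided by throughput."""
    return power_watts / throughput_macs_per_s

# Hypothetical electronic accelerator: 100 W at 1e14 MAC/s -> 1 pJ per MAC.
e_mac = energy_per_mac(100.0, 1e14)
print(f"{e_mac * 1e12:.1f} pJ/MAC")  # -> 1.0 pJ/MAC
```

If data movement over interconnects dominates that budget, as Hamerly argues, then making the interconnect nearly free in the optical domain directly lowers the per-MAC figure.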

Hamerly’s team pursued optics as a solution because effective use of optical neural networks reduces bottlenecks in data processing and movement by enabling a new way for data to be input to the system. The result is a hybrid approach to a neural network with both electronic and photonic functionality.

“Many of the things that are easy in electronics, such as nonlinearity and memory, are hard to do in photonics, at least at the required energy scales,” Hamerly said. Other things that are easy to do in electronics, such as interconnects, require a high energy cost in terms of communication and fan-out, whereas they are essentially free in the optical domain. Interconnects, then, “are still believed to be a promising application of optics,” he said.

Researchers developed theories on an effective way to perform the interconnect function by taking a hybrid approach and splitting the computing problem into two tasks. First, the idea is to encode the learnable parameters of the neural network into an optical format. The resulting signal is then sent to electronic post-processing, where it is de-multiplexed. With that small amount of additional post-processing, it may be possible to use this data to perform complicated computations. Hamerly and his team are currently conducting experiments to test this theory.
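The two-stage split described above can be sketched in simulation. In this illustrative model (the function names, noise term, and nonlinearity are assumptions for demonstration, not the team's actual experimental setup), an "optical" stage carries the interconnect-heavy linear algebra and an electronic stage handles de-multiplexing and the nonlinearity, which are cheap in electronics:

```python
import numpy as np

rng = np.random.default_rng(0)

def optical_linear_stage(weights: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Model the optical stage: weights applied in the optical domain,
    represented here as a matrix-vector product, plus a small additive
    term standing in for detection noise (illustrative assumption)."""
    return weights @ x + 0.01 * rng.standard_normal(weights.shape[0])

def electronic_postprocess(y: np.ndarray) -> np.ndarray:
    """Model the electronic stage: the de-multiplexed signal gets the
    nonlinearity, which is easy to do in electronics."""
    return np.maximum(y, 0.0)  # ReLU

# One hybrid layer: optics does the linear interconnect work,
# electronics does the nonlinearity.
W = rng.standard_normal((4, 8))   # hypothetical layer weights
x = rng.standard_normal(8)        # hypothetical input vector
out = electronic_postprocess(optical_linear_stage(W, x))
print(out.shape)  # (4,)
```

The design choice mirrors the talk's argument: the matrix-vector product is where interconnect and fan-out costs dominate in electronics, so that is the part moved into optics.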

“Optics is important because of these applications: deep neural networks for learning complex tasks and the trends of the exponential growth in these deep neural networks,” he said. At the same time, optics can help meet the energy challenges inherent in sustaining the trajectory of Moore’s Law.

Click below for the full transcript.

Ryan Hamerly

Senior Scientist, NTT Physics and Informatics Lab

Ryan Hamerly first discovered physics in high school, where he taught himself electromagnetism to build a Tesla coil. During college (B.S. 2010, Caltech) he studied theoretical particle physics and general relativity. Since graduate school (Ph.D. 2016, Stanford), Hamerly has pursued research in quantum control, quantum optics, and nonlinear optics. His current work focuses on the emerging nexus of photonics, deep learning, quantum computing, and optimization.