Dr. Ryan Hamerly: From a Tesla Coil to Photonics and Optics

Ryan Hamerly is a Senior Scientist at NTT Research. He joined the Physics & Informatics (PHI) Lab in October 2019, following two years as a Postdoctoral Fellow at the Massachusetts Institute of Technology (MIT), where he worked with Professor Dirk Englund on photonics for machine learning. He was previously a Postdoctoral Researcher at the National Institute of Informatics in Tokyo, where he worked with Professors Yoshihisa Yamamoto and Shoko Utsunomiya on photonics and quantum computing, focusing on the Coherent Ising Machine. He received his Ph.D. from Stanford in 2016. His current work focuses on the emerging nexus of photonics, deep learning, quantum computing and optimization. For more on Dr. Hamerly’s background and current research, please read the following Q&A:

Toward the end of this NTT Research video, you mention that building a Tesla coil in high school was your introduction to physics. What then led you into the area of physics (linear-quadratic-Gaussian control, optical parametric oscillator dynamics, Ising machines, etc.) that you eventually pursued in your doctorate?

I always knew I wanted to study physics. Then, in my first year at Stanford, after bouncing around a number of groups looking for something interesting, I attended a lecture on nanophotonics. I contacted a number of professors in the field, but they all either declined or didn’t respond. A few days before the term ended, I stopped by Prof. Mabuchi’s office to see if I could catch him in person, and as luck would have it, he was around that morning. And that’s where it started.

In retrospect, getting involved in quantum computing and nanophotonics was a great career choice, but I didn’t know it at the time. I was not paying attention to who was writing papers or winning awards. I just wanted to work on something interesting.

What else can you say about Professor Mabuchi and how he influenced your understanding and approach to physics?

Prof. Mabuchi has a deep understanding of the field, both the general directions as well as the nitty-gritty technical details. He was also an ideal manager for me: both involved and hands-off at the same time. He put me on an initial project, and we had many long scientific discussions, but he also expected me to develop and pursue my own ideas independently. This gave me space to grow as a student and explore new research directions.

Was it that prior affiliation with Professor Yamamoto that helped bring you to NTT Research? What other reasons made this a good fit?

Obviously that affiliation was a major reason why I was recruited, but what really attracted me to NTT Research’s PHI Lab was the combination of talent, vision and flexibility that I saw in the lab. The people I get to work with are amazing — clear rising thought leaders in the field. The lab’s vision of advancing quantum and neuromorphic computing, with the Coherent Ising Machine as a key archetype, gives us a concrete direction to coordinate our research efforts, supported by Prof. Yamamoto as well as Kazuhiro Gomi and everyone on the corporate side. And NTT Research is very flexible about the research we pursue, freeing us from the demands of grant program officers or product development cycles. The theory is that if you hire the best people, you can usually trust them to work on the right things.

Several of your recent papers have “interferometer” in their titles. You also refer to the Mach-Zehnder interferometer in your discussion of the beam-splitter mesh approach to weight-stationary optical neural networks (ONNs) in your Upgrade 2021 talk. Is that one of the technologies within your scope as head of the hardware and devices group in the PHI Lab? What else falls within that domain?

“Interferometer” is just a fancy word for when light beams split and recombine, leading to interference. The idea isn’t restricted to light either; you can make interferometers with sound, radio and even clouds of atoms. What’s special about interferometers is they’re passive: they let you do certain types of math without consuming energy. But in addition to making good interferometer circuits, optical neural networks and Ising machines rely on other tools, including the nonlinear optics and frequency-comb sources that my colleagues Marc Jankowski and Myoung-Gyun Suh work on. A practical platform will need to combine all of these components on a single chip — a daunting prospect, but with the incredible progress in fabrication technology and the growing commercial applications of photonics, I would find it more surprising if this doesn’t happen in the next five years.
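To make the “certain types of math” point concrete, here is a minimal numerical sketch of the 2×2 transfer matrix of a single Mach-Zehnder interferometer; the phase-shifter convention and the parameter values are illustrative assumptions, not a description of any particular device.

```python
import numpy as np

def mzi_unitary(theta, phi):
    """2x2 unitary of a Mach-Zehnder interferometer: an input phase
    shifter (phi), a 50:50 beam splitter, an internal phase shift
    (theta), and a second 50:50 beam splitter. Conventions vary;
    this is one common choice."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)      # 50:50 beam splitter
    inner = np.diag([np.exp(1j * theta), 1.0])          # internal phase shift
    outer = np.diag([np.exp(1j * phi), 1.0])            # input phase shift
    return bs @ inner @ bs @ outer

U = mzi_unitary(theta=0.7, phi=1.3)
x = np.array([1.0, 0.0])                        # input optical amplitudes
print(U @ x)                                    # output amplitudes after interference
print(np.allclose(U.conj().T @ U, np.eye(2)))   # lossless (unitary): True
```

A mesh of such 2×2 blocks can realize an arbitrary N×N unitary matrix, which is what lets a passive interferometer circuit apply a matrix transformation to the optical amplitudes without consuming energy in the mixing itself.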

Two examples you discussed in your Upgrade 2021 talk were output-stationary ONNs and the NetCast concept. Could you elaborate on what you mean by potentially enabling up to 10⁶ modes in the free-space demo of output-stationary ONNs? (What’s a mode in this context?) In any case, it sounds like an extremely dense ONN matrix.

Prof. David Miller (Stanford) likes to talk about free-space optics as the ultimate limit of communication. Normally, light is guided through fibers or, if it’s on-chip, integrated waveguides. In general, you only have one or two modes per waveguide, and that limits your data density because there are only so many waveguides that you can pack on a chip. But if you image data through free space, every pixel (that’s your “mode”) can carry independent information. Since we regularly process megapixel images in even cheap cell phone cameras, free-space optics potentially enables computation with millions of modes. The real difficulties are in mapping that immense computational power onto a relevant problem, overcoming the problems of optical aberration and crosstalk, and developing the electro-optic hardware to interface with the demanding dataflows.
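As a rough illustration of the output-stationary dataflow described above, here is a toy numerical sketch in which each output detector (pixel) accumulates its own running sum of products as the input is streamed in; the array sizes are arbitrary, and this models only the arithmetic, not the optics.

```python
import numpy as np

# Toy sketch of an output-stationary matrix-vector product: the input
# vector x is streamed one element per time step, every output detector
# sees that element weighted by its own coefficient, and the partial
# products accumulate at the detector. Shapes are illustrative only.
rng = np.random.default_rng(0)
n_outputs, n_inputs = 1000, 1000
W = rng.standard_normal((n_outputs, n_inputs))   # weights
x = rng.standard_normal(n_inputs)                # input data

y = np.zeros(n_outputs)
for k in range(n_inputs):        # one "frame" of the streamed input
    y += W[:, k] * x[k]          # each detector integrates its product

print(np.allclose(y, W @ x))     # matches an ordinary matrix-vector multiply
```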

Is the key to the NetCast concept this idea of splitting computing into 1) weight-related tasks and encoding at the server and 2) modulation at the client? Is the ‘value proposition’ of this approach the potential for a sub-single photon per multiply-accumulate (MAC)?

In edge computing, you care much more about the energy cost on the client side as compared to the server. So, if optics allows you to split that cost between the client and the server, dramatically reducing the client-side cost without adding significant latency, you’d take it. Zeptoscale computing (that’s sub-single photon per MAC) is more of an aspirational goal. Recently, scientists in the group of my colleague Prof. Peter McMahon, who also presented at Upgrade 2021, experimentally reached this limit, so we know that it is fundamentally possible. But even if NetCast only reaches a femtojoule per MAC at first, there is a large value proposition in that too.
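For a sense of scale, a back-of-the-envelope calculation shows why sub-single-photon operation lands in the zeptojoule regime; the 1550 nm wavelength is an assumption typical of telecom-band photonics.

```python
# Back-of-the-envelope optical energy scales (E = h*c / wavelength).
h = 6.626e-34            # Planck constant, J*s
c = 2.998e8              # speed of light, m/s
wavelength = 1550e-9     # assumed telecom-band wavelength, m

photon_energy = h * c / wavelength
print(f"one photon:      {photon_energy:.2e} J")          # ~1.3e-19 J (~130 zJ)
print(f"0.1 photon/MAC:  {0.1 * photon_energy:.2e} J")    # well below an attojoule
print(f"photons in 1 fJ: {1e-15 / photon_energy:.0f}")    # thousands of photons
```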

Are there any other recent or ongoing papers/research that you would like to mention?

Check out our recent arXiv paper titled “Infinitely Scalable Multiport Interferometers.” One of the big problems of analog computing has been the propagation of errors. Even small errors, cascaded through a deep enough circuit, can make a computer too noisy to be useful. The same problems rear their heads in photonic circuits. In our paper, we develop calibration routines to (at least partially) cancel these errors out. Paradoxically, we find that the effect of errors decreases as the circuit grows larger, leading to analog photonic circuits that are, in a sense, asymptotically perfect. For this work, I’d like to highlight the contributions made by Saumil Bandyopadhyay, one of the excellent students in Prof. Dirk Englund’s group whom I have the chance to work with thanks to NTT Research’s collaboration with MIT.
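As context for why error propagation matters, the following toy sketch applies small random phase errors at every layer of a cascaded circuit and shows the realized transformation drifting away from the ideal one as the depth grows. This is not the calibration scheme from the paper, and the circuit model and error magnitude are assumptions chosen purely for illustration.

```python
import numpy as np

# Toy illustration of error accumulation in a cascaded analog circuit.
# Each layer is an ideal random unitary; the "fabricated" version picks
# up a small random phase error on every mode. NOT the paper's method.
rng = np.random.default_rng(1)
n, sigma = 8, 0.02                     # modes, phase-error std (radians)

def random_unitary(n):
    """Haar-random n x n unitary via QR of a complex Gaussian matrix."""
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

for depth in (10, 100, 1000):
    ideal = np.eye(n, dtype=complex)
    noisy = np.eye(n, dtype=complex)
    for _ in range(depth):
        U = random_unitary(n)
        err = np.diag(np.exp(1j * sigma * rng.standard_normal(n)))
        ideal = U @ ideal
        noisy = err @ U @ noisy
    infidelity = 1 - abs(np.trace(ideal.conj().T @ noisy) / n) ** 2
    print(f"depth {depth:4d}: infidelity ~ {infidelity:.3f}")
```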
