Yoshihisa Yamamoto spoke at the NTT Innovation Summit 2020
PHI Lab director Yoshihisa Yamamoto participated in a panel discussion on the new field of coherent Ising machines with Prof. Hideo Mabuchi (Stanford) and Prof. Surya Ganguli (Stanford) at the NTT Innovation Summit 2020 on November 20.
The contents are as follows.
1) Questioner ー Dr. Yamamoto, if I could start off with you. Before we get into the details of today’s specific topic, I think it would be of great benefit to our audience to understand, even if only at a high level, what the mission or goal of the Physics and Informatics Lab is and what general areas the lab covers. Would you mind giving us that high-level overview?
Dr. Yamamoto ー The main mission of the NTT Physics and Informatics Lab is to seek a next-generation information processing technology; in particular, we look for a “simple, efficient, and practical” computing scheme. The two mainstream research directions for next-generation computing technology in the recent past have been quantum computing and neuromorphic computing. The two research communities independently search for, respectively, a faster computer and a lower-energy computer than modern digital computers based on the von Neumann architecture. The former approach is inspired by the mathematical theory of quantum mechanics, and the latter is inspired by nature, specifically the brain. We believe that future computing technology will be invented in the cross-disciplinary field between the two. We try to merge the two disciplines by introducing “practicality” into quantum computing and “simplicity” into neuromorphic computing.
2) Questioner ー In my keynote this week, I talked briefly about the potential of a new era of computing, which includes more powerful processing capability with the advent of quantum computers and their ability to solve problems that are computationally too difficult for today’s computers. However, as I also mentioned, we’re not so close to mainstream with quantum computing. Can you give us your views on where we are with quantum computing – what is possible today, and how far away are we from it becoming more mainstream? Maybe let’s start with you, Dr. Mabuchi, and then Dr. Ganguli, if you want to add your thoughts after?
Dr. Yamamoto ー I do not believe the current concept and approach of quantum computing will eventually become a commercial technology, not because the development of the required hardware platforms will take a long time, but because the very principle of the unitary evolution of state vectors in a closed Hilbert space is not powerful enough to solve various computationally hard problems. A quantum speedup by this principle is limited to very few problems, and vast categories of hard problems are not efficiently solved. Making a quantum computing machine an open-dissipative system coupled to external reservoirs, and letting the machine self-organize its final state into a solution state, are the keys to success. This is the completely opposite direction from mainstream quantum computing research, but we trust that a future exists in this direction.
3) Questioner ー So, the Coherent Ising Machine, or CIM. Let me open this up to the entire panel, and whoever wants to start, please go for it. But I’m curious if you could expound on what exactly the CIM is, how it works, and your vision for what its impact could be? Oh, perhaps also include how the CIM idea came to be and actually became a focus area for the PHI Lab? I’m quite interested in that backstory.
Dr. Yamamoto ー My background is in laser physics and quantum electronics. I have been working on various experimental tricks to realize a single-mode laser and on a theoretical understanding of its quantum statistical properties. Among millions of modes in a laser gain bandwidth, an optical cavity selects one particular mode to oscillate and eventually converts all input energy into that particular mode’s energy. This remarkable phenomenon is, of course, a consequence of the laser phase transition and bosonic final-state stimulation. Around 1995, I asked whether this laser phase transition and bosonic final-state stimulation could provide us with a new computing principle. I tried to realize this idea first by using Bose-Einstein condensates (of exciton-polaritons) and later by using injection-locked semiconductor lasers. However, we failed in both approaches: the BEC approach was not practical, and the semiconductor laser approach was not stable.
Then I talked to my colleague, Professor Robert Byer of Stanford University, about the basic idea and asked him, “What is the most stable laser for our experiment?” He answered that a degenerate optical parametric oscillator was finally coming along as a practical coherent light source, probably better than any laser oscillator for our purpose. Indeed, Bob and I had co-supervised a graduate student, Darwin Serkland, who built the first PPLN-waveguide-OPO-based squeezer back in 1995. Immediately after this conversation, Bob suggested that his post-doc, Alireza Marandi, who is now an assistant professor at Caltech, work on the project. I suggested my post-doc, Peter McMahon, who is now an assistant professor at Cornell, for the project. This is how the research on the CIM started on the Stanford campus.
4) Questioner ー Now, as I understand it, the initial prototypes are promising in their computational performance, but there is a problem: the machine is not always able to get back to its true ground state when the laser pump rate increases. And my sincere apologies if I dumbed that down too much. Can the group divide and conquer this – first clarify the problem, because I’m sure I oversimplified it, and then the potential solutions – I also understand there are two specific solutions under consideration?
Dr. Yamamoto ー As you pointed out correctly, the initial CIM is often trapped in local minima (sub-optimal solutions) rather than relaxing all the way to the ground states (optimal solutions). Professors Hideo Mabuchi and Surya Ganguli have clarified the mechanism responsible for this problem. One way to fix it is to implement an error detection/correction feedback loop in the CIM, an idea invented by Professor Timothee Leleu. We call this new machine a closed-loop CIM. The closed-loop CIM can self-examine whether it is still relaxing toward a local minimum or is already trapped in one by measuring the real-time energy and comparing it to the lowest energy visited previously. If the latter is true, the machine destabilizes its state by flipping the pump phase to a negative value and increasing the Ising coupling term exponentially. In this way, the closed-loop CIM continues to explore from one local minimum to another without being trapped in any of them, including the global minima (optimal solutions), and during this exploration a ground state (optimal solution) is discovered. A recent numerical study shows that this theoretical prediction is indeed realizable, and the success probability of finding an optimal solution is improved exponentially compared to the initial CIM.
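The closed-loop control logic described above can be illustrated with a toy simulation. The sketch below is only a minimal mean-field caricature, not NTT’s actual implementation: the amplitude dynamics are simplified, and the problem instance, parameter values, stall threshold, and kick length are all invented for illustration. It tracks the lowest Ising energy visited and, when no improvement occurs for a while (i.e., the machine is trapped), briefly flips the pump to a negative value and strengthens the Ising coupling to escape the current minimum.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Ising problem: random symmetric +/-1 couplings, zero diagonal.
n = 12
J = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), 1)
J = J + J.T

def ising_energy(s, J):
    """Ising energy E = -1/2 * s^T J s for spins s in {-1, +1}."""
    return -0.5 * s @ J @ s

# Simplified mean-field amplitude dynamics with the closed-loop rule:
# measure the real-time energy, remember the lowest energy visited, and
# when progress stalls (trapped in a minimum), flip the pump sign and
# increase the Ising coupling strength to destabilize the current state.
x = 0.01 * rng.standard_normal(n)     # OPO pulse amplitudes
pump, coupling, dt = 1.2, 0.05, 0.02  # illustrative constants
best_energy = np.inf
stall, kick = 0, 0

for step in range(4000):
    p = -pump if kick > 0 else pump   # negative pump destabilizes
    x += dt * ((p - 1.0 - x**2) * x + coupling * (J @ x))
    kick = max(kick - 1, 0)
    e = ising_energy(np.sign(x), J)   # spin readout: sign of amplitude
    if e < best_energy:
        best_energy, stall = e, 0
    else:
        stall += 1
    if stall > 200:                          # trapped in a minimum
        kick = 50                            # destabilize for 50 steps
        coupling = min(coupling * 1.5, 0.3)  # strengthen coupling (capped)
        stall = 0

print("best Ising energy visited:", best_energy)
```

Here `sign(x)` maps each oscillator amplitude to an Ising spin; in the real machine, the energy readout and feedback are implemented in the optical/electronic hardware, so this code is a schematic of the control logic only.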
5) Questioner ー Based on what I’ve heard so far, it seems like there is a lot for one company to tackle on its own, let alone just the PHI Lab. Dr. Yamamoto, I was reading a little about the joint research agreements you have with several other institutions, from universities to government agencies and a private company. Can you share some highlights of the institutions you’re working with as it relates to the CIM? I remember reading about MIT and NASA, for example.
Dr. Yamamoto ー Yes, the project needs many different backgrounds and disciplines, and we are fortunate to have some of the best minds as our collaborators. For example, Prof. Hideo Mabuchi and Prof. Surya Ganguli have brought the most advanced and sophisticated quantum optics theory and neural network theory to the project. We have a team consisting of ten institutions (Stanford, Caltech, Cornell, MIT, NASA Ames, 1QBit, Swinburne University of Technology, RIKEN (Japan), Tokyo Institute of Technology, and the University of Tokyo) and 18 co-PIs. In this way, we can cover a broad spectrum from fundamental physics to application algorithms, and from nonlinear optical device fabrication to benchmark experiments against modern heuristics.