Upgrade 2020: PHI LAB
Speakers

Franco Nori

Research Scientist, University of Michigan

Machine Learning Applied to Computationally Difficult Problems in Quantum Physics

Dr. Franco Nori, of the University of Michigan Physics Department, presented examples of novel uses of machine learning applied to three quantum physics problems.

 

The first problem concerns identifying unknown quantum phases of matter in an efficient manner; the machine-learning methods typically applied to this classification task come with limitations. Dr. Nori presented cutting-edge neural-network techniques that overcome those limitations.

 

The second example involves quantum state tomography, the process of reconstructing an unknown quantum state from measurements on an ensemble of identically prepared systems (systems sharing the same density matrix). Currently, quantum tomography requires many different measurement settings and heavy post-processing to reconstruct the state of a quantum system, and these methods become wildly inefficient, in terms of both cost and time, as the quantum system scales up in size. Dr. Nori proposed that machine learning can overcome this problem of scale.
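
To see where the scaling pressure comes from, consider the simplest possible case of a single qubit, whose density matrix can be rebuilt from just three Pauli expectation values. The sketch below (plain NumPy, an illustration rather than code from the talk) performs that reconstruction; the same recipe needs on the order of 4^n expectation values for n qubits, which is why learning-based reconstruction is attractive.

```python
# Minimal linear-inversion tomography for a single qubit (illustrative only,
# not code from the talk). For n qubits the analogous recipe needs ~4**n
# expectation values, which is the scaling problem described above.
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def reconstruct(exp_x, exp_y, exp_z):
    """Rebuild a single-qubit density matrix from measured Pauli expectation values."""
    return 0.5 * (I + exp_x * X + exp_y * Y + exp_z * Z)

# Example: noisy measurements of the |+> state (ideal values <X>=1, <Y>=0, <Z>=0)
rho = reconstruct(0.97, 0.02, -0.01)
print(np.round(rho, 3))        # approximately the |+><+| projector [[0.5, 0.5], [0.5, 0.5]]
print(np.trace(rho).real)      # trace stays 1 by construction
```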

 

His last example involved managing experimental data. The solutions discussed in the presentation all rely on immense amounts of data from the frontiers of quantum physics. As Dr. Nori stated, machine learning is meant to handle big data, and that includes data from state-of-the-art experiments.

Dirk Englund

Associate Professor of Electrical Engineering and Computer Science at MIT

How Optical Technologies Overcome Limitations of Electronics in Machine Learning

Machine learning technology is facing limitations imposed by computational bottlenecks in electronics for tasks involving functions such as vision, games, control, and language processing. Dr. Dirk Englund, Associate Professor of Electrical Engineering and Computer Science at MIT, talked about the benefits of using photonic integrated circuits to overcome these bottlenecks.

 

The goal isn’t to build an optical computer, Dr. Englund said, but rather to use photonic technologies selectively in place of electronics to make advancements in the overall system. “Photonics really is on the rise in computing, but you have to be careful in how you compare it to electronics, and find where the gain is to be had,” he said.
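
The workload photonics is best placed to absorb is the dense linear algebra at the heart of neural networks. The sketch below is a rough illustration of one common mapping, factoring a layer's weight matrix with an SVD so that each factor could in principle be realized optically; the decomposition and numbers here are assumptions for illustration, not a description of Dr. Englund's specific architecture.

```python
# Sketch of how a weight matrix can be mapped onto photonic hardware
# (an assumption based on common photonic neural-network designs, not a
# description of the architecture discussed in the talk). A dense layer's
# weight matrix W is factored as W = U @ diag(s) @ Vh; the two unitaries
# can be realized as interferometer meshes and the diagonal as per-channel
# gain/attenuation, so the multiply happens as light propagates.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))          # weights of a small dense layer
x = rng.normal(size=4)               # input activations

U, s, Vh = np.linalg.svd(W)          # W = U @ diag(s) @ Vh

y_electronic = W @ x                 # what a CPU/GPU would compute
y_photonic = U @ (s * (Vh @ x))      # same result, staged as the optics would apply it

assert np.allclose(y_electronic, y_photonic)
```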

 

Dr. Englund listed a few promising theoretical results of using photonic integrated circuits, including:

  • An overall improvement in the output of machine learning using an optically integrated device
  • Limitations on error rates
  • The possibility of reversible as well as irreversible computing, which could push machine learning technologies beyond the Landauer limit for digital computing (a quick sense of that limit's scale is sketched below).
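
For a sense of scale, the Landauer limit referenced in the last point puts the minimum heat dissipated per irreversibly erased bit at k_B·T·ln 2. The quick calculation below uses only standard physical constants and is illustrative rather than something taken from the talk.

```python
# Back-of-the-envelope Landauer bound: minimum heat dissipated per
# irreversibly erased bit is k_B * T * ln(2). Illustrative numbers only.
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # room temperature, K

energy_per_bit = k_B * T * math.log(2)
print(f"{energy_per_bit:.2e} J per erased bit")   # ~2.87e-21 J

# Erasing 10^15 bits per second at this floor would dissipate only about
# 3 microwatts, which is why approaching (or, with reversible logic,
# sidestepping) this limit is an attractive long-term target.
print(f"{energy_per_bit * 1e15 * 1e6:.1f} microwatts at 1e15 bit erasures per second")
```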

Zoltan Toroczkai

Professor of Physics, University of Notre Dame

The Path to Identifying Fundamental Limits of Continuous Time Analog Computing

Boolean Satisfiability (SAT), the first problem proven NP-complete in the early 1970s, is a family of logical constraint satisfaction problems that remain intractable today. An efficient algorithm for SAT would translate into efficient solutions for all other problems in NP, though the general consensus is that no such efficient solution exists.

 

“If we had an efficient SAT-solver, or NP-complete problem solver, it would literally, positively influence thousands of problems in applications in industry and science,” says Dr. Zoltan Toroczkai, Professor of Physics at the University of Notre Dame. In his talk, he presents his current thinking on the physical limits of continuous-time analog computing from the viewpoint of the SAT problem.

 

Dr. Toroczkai briefly describes the SAT problem as the goal of assigning truth values (true or false) to Boolean variables such that all the given logical constraints of the problem are satisfied (evaluate to true). There are varying degrees of difficulty of these problems, the most alluring being those for which the solution time does not increase polynomially, but rather exponentially, with the size of the problem.
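
To make "assigning truth values so that every constraint evaluates to true" concrete, here is a deliberately tiny SAT instance checked by exhaustive search (an illustrative sketch, not code from the talk). The exhaustive route needs 2^n assignments for n variables, which is exactly the exponential growth that motivates looking for other kinds of solvers.

```python
# Brute-force check of a tiny SAT instance (illustrative example). Clauses
# are lists of literals: positive int = variable must be True, negative
# int = variable must be False. The formula below is
# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3).
from itertools import product

clauses = [[1, -2], [2, 3], [-1, -3]]
n_vars = 3

def satisfies(assignment, clauses):
    """assignment[i] is the truth value of variable i+1."""
    return all(
        any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# Exhaustive search over 2**n candidate assignments -- the exponential wall.
solutions = [a for a in product([False, True], repeat=n_vars) if satisfies(a, clauses)]
print(solutions)   # [(False, False, True), (True, True, False)] -- two satisfying assignments
```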

Isaac Chuang

Professor of Physics, MIT

New Thinking on Reducing Cost and Time in Programmable Quantum Simulators

A quantum simulation that can determine molecular properties enables the advancement of technologies like quantum computers and aids in the progress of quantum chemistry and quantum physics. However, major complications in the form of cost and time arise from undertaking these calculations, given current methods and computing power.

 

“This problem is as hard as the hardest quantum computation,” said Dr. Isaac Chuang, MIT faculty member in Electrical Engineering, Computer Science, and Physics, during his presentation at Upgrade 2020.

 

Dr. Chuang presented two approaches to reducing the cost and time of quantum simulations of complex molecules, namely:

  • Optimizing a quantum simulation
  • Increasing hardware efficiency

Classical simulations scale exponentially, which means that as the system grows, the calculations quickly become intractable. With a quantum computer, however, it’s possible to scale polynomially. “This would be a significant advantage if realizable,” Dr. Chuang says.
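
The contrast is easy to make concrete: a brute-force classical simulation stores 2^n complex amplitudes for an n-qubit (or n-orbital) system, so the memory requirement doubles with every added qubit. The arithmetic below is an illustration of that growth, not a figure from the talk.

```python
# Memory needed to store a full classical state vector: 2**n complex
# amplitudes at 16 bytes each (complex128). Illustrative arithmetic only.
def statevector_gib(n_qubits: int) -> float:
    return (2 ** n_qubits) * 16 / 2**30

for n in (30, 40, 50, 60):
    print(f"{n} qubits -> {statevector_gib(n):,.0f} GiB")
# 30 qubits fit on a workstation (~16 GiB); 40 need ~16 TiB; 50 need ~16 PiB;
# 60 need ~16 EiB. Every added qubit doubles the requirement, whereas a
# quantum simulator needs roughly one more physical qubit.
```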

 

Advancing quantum simulation further requires increasing the efficiency of current hardware. Traditionally, quantum computers use a circuit model, with quantum circuits acting on qubits. Dr. Chuang considers a contrasting concept by way of the Franck-Condon problem in quantum chemistry, which concerns the transition probabilities between the vibrational states of a molecule.

 

 “This approach represents a far more efficient utilization of hardware resources compared with the standard qubit model, because of the natural match of the resonators with the physical system being simulated,” Dr. Chuang explains.

Hideo Mabuchi

Professor, Applied Physics, Stanford University

How Coherent Ising Machines Can Bring Improvements to Combinatorial Optimization and other Computing Challenges

Current research into optimization algorithms brings the added benefit of foresight into future computational challenges. By testing optimization techniques on tough problems, researchers can more precisely characterize what makes a problem difficult and how that difficulty will affect scalability.

 

In his talk at Upgrade 2020, Dr. Hideo Mabuchi, Professor of Applied Physics at Stanford University, assessed the effectiveness of optimization heuristics using the Ising problem as a testbed. “We see that a critical frontier for cutting-edge academic research involves both the development of novel heuristic algorithms that deliver better performance with lower cost, and providing deep conceptual insight into what makes a given problem instance easy or hard for such algorithms,” he said.

 

Solving for the ground state of an Ising model represents a canonical problem in combinatorial optimization, Dr. Mabuchi said. That is, to find the ground state of an Ising model is to find an optimal configuration of its variables.
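
As a concrete illustration (a toy example, not one from the talk), the sketch below writes the Ising energy as E(s) = -Σ J_ij s_i s_j over spins s_i = ±1 and finds the ground state of a three-spin instance by exhaustive search. Because the number of configurations doubles with every added spin, this brute-force route fails quickly, which is the opening for heuristic hardware such as Ising machines.

```python
# Brute-force search for the ground state of a tiny Ising instance
# (illustrative only). Energy: E(s) = -sum_{i<j} J[i, j] * s[i] * s[j].
from itertools import product
import numpy as np

J = np.array([[0.0,  1.0, -0.5],
              [0.0,  0.0,  2.0],
              [0.0,  0.0,  0.0]])   # couplings, upper triangle only

def energy(spins):
    s = np.array(spins)
    return -np.sum(J * np.outer(s, s))

# 2**n spin configurations -- exactly the exponential search space
# that heuristic Ising machines try to navigate more cleverly.
best = min(product([-1, 1], repeat=3), key=energy)
print(best, energy(best))   # one of the two degenerate all-aligned ground states, energy -2.5
```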

 

To address the issue, he suggests using “highly customized, special purpose hardware architectures on which we may run entirely unconventional algorithms.”

Eli Yablonovitch

Professor, UC Berkeley

Eli Yablonovitch Outlines the Promise of Physics-Based Optimization Principles for the Future of Computing

In the physical world we constantly witness examples of optimization under constraints. Light through a glass window takes the route that requires the least amount of time. A leaf falling from a forest canopy follows a path which generates the least entropy in its floating freefall. It is thus useful to recognize where optimization principles come into play, and exploit them to solve optimization problems across many fields, including electronic engineering.

 

In his Upgrade 2020 talk, Dr. Eli Yablonovitch, world-renowned physicist, engineer and Director of UC Berkeley’s Center for Energy Efficient Electronics Science, discussed how optimization principles of physics, specifically the principle of least entropy generation, can solve difficult engineering and computational problems.  An equivalent of the entropy statement is that nature will try to dissipate the least amount of heat, subject to the external driving terms.

 

Nature’s optimization principles can incorporate constraints, analogous to the method of Lagrange multipliers taught in undergraduate classes.
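
As a reminder of how that textbook method works, the sketch below solves a standard toy problem, minimizing a quadratic "dissipation-like" cost subject to a linear constraint, both analytically via a Lagrange multiplier and numerically. The example is illustrative and not drawn from the talk.

```python
# Textbook Lagrange-multiplier example (illustrative only): minimize
# f(x, y) = x**2 + y**2 subject to x + y = 1.
# Setting the gradient of L = x**2 + y**2 - lam * (x + y - 1) to zero gives
# 2x = lam, 2y = lam, x + y = 1  ->  x = y = 0.5, lam = 1.
from scipy.optimize import minimize

res = minimize(
    lambda v: v[0] ** 2 + v[1] ** 2,                              # cost to dissipate
    x0=[0.0, 0.0],
    constraints={"type": "eq", "fun": lambda v: v[0] + v[1] - 1.0},  # driving constraint
)
print(res.x)   # ~[0.5, 0.5], matching the analytic Lagrange-multiplier solution
```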

“Physics will try to go to the state of lowest power dissipation. Another way of saying it is we go from some initial state to some final state,” he said. “The promise is that the physics-based hardware will perform the same function at far greater speed and far less power dissipation than conventional digital algorithms.”

Alireza Marandi

Assistant Professor, California Institute of Technology

Computing Opportunities Using Optical Parametric Oscillator Networks

As we reach limitations of standard computing, the need arises for different types of networks capable of solving incredibly complex and costly problems, from protein folding to social network optimization. In his talk, Dr. Alireza Marandi, Assistant Professor of Electrical Engineering and Applied Physics at Caltech, discussed the opportunities provided by Networks of Optical Parametric Oscillators (OPOs), which use the power of phase transitions for computation.

 

Dr. Marandi and his team began by solving the Ising problem on OPO networks as a means to explore their efficacy. This NP-hard problem, finding the ground state of the Ising model, is encountered in many real-world applications and is costly to solve with standard computers. It thus serves as a benchmark for measuring the performance of different types of novel networks, including OPO networks.
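
One simplified way to picture how an OPO network relaxes toward a low-energy spin configuration is a set of coupled mean-field amplitude equations, with the sign of each oscillator's amplitude standing in for an Ising spin. The sketch below integrates such a toy model; the particular equations and parameters are assumptions chosen for illustration, not the dynamics presented in Dr. Marandi's talk.

```python
# Toy mean-field model of an OPO-based Ising machine (an illustrative
# assumption, not the equations used in the talk). Each oscillator has a
# real amplitude x_i whose sign encodes an Ising spin. Dynamics:
#   dx_i/dt = (p - 1 - x_i**2) * x_i + eps * sum_j J[i, j] * x_j
# Above threshold (p > 1) the amplitudes grow and the couplings bias the
# network toward low-energy sign patterns.
import numpy as np

rng = np.random.default_rng(1)
J = np.array([[ 0.0, 1.0, -0.5],
              [ 1.0, 0.0,  2.0],
              [-0.5, 2.0,  0.0]])   # symmetric Ising couplings

x = 1e-3 * rng.normal(size=3)       # start near zero, as the OPOs would
p, eps, dt = 1.5, 0.1, 0.01

for _ in range(5000):               # crude Euler integration
    x += dt * ((p - 1 - x**2) * x + eps * (J @ x))

spins = np.sign(x).astype(int)
print(spins)                        # a candidate low-energy configuration, e.g. all spins aligned
```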

 

“That’s why there is this demand for making a machine that can target these problems. And, hopefully, it can provide some meaningful computational benefit, compared to the standard digital computers,” Dr. Marandi said. “Can we look at other phase transitions… can we utilize them for computing, and can we bring them to the quantum regime?”

Amir Safavi-Naeini

Assistant Professor, Applied Physics, Stanford University

A Platform of Possibilities: Phononic and Photonic Circuits in Harmony

It’s possible to efficiently process both quantum and classical information on a single platform by employing mechanical motion and the efficiency of optics, according to Dr. Amir Safavi-Naeini, Assistant Professor of Applied Physics at Stanford University. In his Upgrade 2020 talk, Dr. Safavi-Naeini discussed his work-in-progress solution to overcoming limitations encountered with current platforms.

 

“Our goal is to realize a platform for quantum coherent information processing that enables functionality that currently does not exist in other platforms,” he says.

 

Such a platform would seek to achieve three primary goals. The first is to combine low loss with a nonlinearity that can be dispersion-engineered into highly efficient broadband circuits. Second, the circuits within the platform need to be programmable and reconfigurable, which requires effective modulation and switching. Lastly, it must enable large-scale programmable dynamics between many different oscillators on a chip, which can be achieved using advances in superconducting circuits. The findings presented by Dr. Safavi-Naeini build the framework for achieving these goals, which he estimates are possible within the next few years.

Timothee Leleu

Project Professor, University of Tokyo

Neuromorphic In Silico Simulator for the Coherent Ising Machine

The human brain is an inspiring model of what can be achieved computationally. To design a more effective simulation of a Coherent Ising Machine (CIM), we can use what we understand of the brain’s balance between speed, size, and energy efficiency.

 

“The brain computes using billions of neurons using only 20 Watts of power and operates at a relatively low frequency,” said Timothee Leleu, Project Professor of Information and Electronics at the University of Tokyo, in his talk at Upgrade 2020. “These impressive characteristics motivate us to try to investigate what kind of new inspired principles may be useful for designing better Ising machines.”

 

Looking at the human brain as inspiration, one can conclude that it is indeed possible to create a CIM that is at once low-frequency, low-energy, and fast. By analyzing three principles of the brain, we can, at least temporarily, push past the limitations of in-silico simulations. These three principles are:

  • Microstructure, or how a local structure is repeated
  • Asymmetry of connections and how they incite connectivity
  • Hierarchical organization of activity, or how these local structures organize communication

Leleu presented a proof of concept implementing these brain-inspired ideas on an FPGA, which is discussed in a paper he wrote with colleagues including F. Khoyratee and R. Hamerly.