## Zoltan Toroczkai

**University of Notre Dame**

The Path to Identifying Fundamental Limits of Continuous Time Analog Computing

Boolean Satisfiability (SAT), the first problem proven to be NP-complete in the early 1970s, is a family of logical constraint satisfaction problems that remains intractable today. An efficient solution to SAT would translate into efficient solutions for every other hard problem in NP, though the general consensus is that no such efficient solution exists.

“If we had an efficient SAT-solver, or NP-complete problem solver, it would literally, positively influence thousands of problems in applications in industry and science,” says Dr. Zoltan Toroczkai, Professor of Physics at the University of Notre Dame. In a talk at Upgrade 2020, the NTT Research Summit, he presents his current thinking on the physical limits of continuous-time analog computing from the viewpoint of the SAT problem.

Dr. Toroczkai briefly describes the SAT problem as the task of assigning truth values (true or false) to Boolean variables so that all of the problem’s logical constraints are satisfied (evaluate to true). These problems come in varying degrees of difficulty, the most alluring being those for which the solution time grows not polynomially but exponentially with the size of the problem. Dr. Toroczkai frames this as a genuine limitation of physical computing, not merely an abstract mathematical one. “What we are really looking at is the relationship between problem hardness and the physical cost of the computing of its solution,” he says. An even harder version of SAT is Max-SAT where, given a possibly unsatisfiable SAT problem, the goal is to maximize the number of satisfied constraints.
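
To make the problem statement concrete, here is a minimal sketch (the tiny example formula and helper names are our own, not from the talk) of checking and exhaustively solving a small SAT/Max-SAT instance. The brute-force loop makes the exponential scaling explicit: it visits all 2^n assignments.

```python
from itertools import product

# A CNF formula as a list of clauses; each clause is a tuple of integer
# literals: +i means variable i, -i means "not variable i".
# Here: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
FORMULA = [(1, 2), (-1, 3), (-2, -3)]

def satisfied_count(formula, assignment):
    """Number of clauses satisfied by an assignment (dict: var -> bool).
    For Max-SAT, this count is exactly the quantity to maximize."""
    return sum(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

def brute_force_sat(formula):
    """Exhaustive search over all 2^n assignments -- the exponential
    scaling that SAT solvers try to beat."""
    n = max(abs(lit) for clause in formula for lit in clause)
    best, best_count = None, -1
    for bits in product([False, True], repeat=n):
        assignment = dict(enumerate(bits, start=1))
        count = satisfied_count(formula, assignment)
        if count > best_count:
            best, best_count = assignment, count
    return best, best_count  # best_count == len(formula) iff satisfiable

assignment, count = brute_force_sat(FORMULA)
print(count == len(FORMULA))  # True: this tiny instance is satisfiable
```

The same `satisfied_count` function serves both problems: SAT asks whether the maximum equals the number of clauses, while Max-SAT asks for the maximum itself.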

In a recent paper, he and his colleagues propose an analog solver based on differential equations for Max-SAT problems. Even when simulated on digital computers, it performs at least as well as the best digital heuristic solvers from the Max-SAT competitions; running such solvers on purpose-built analog hardware would yield further significant speedups. This highlights the potential of continuous functions and differential equations for solving discrete problems that traditionally belong to the domain of discrete algorithms and digital computing.
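
The flavor of such continuous-time dynamical solvers can be illustrated with a toy sketch, modeled loosely on published CTDS-style formulations rather than on the specific solver in the paper (the instance, constants, and function names are ours). Each clause gets a continuous "energy" term K_m that vanishes when the clause is satisfied, the spin-like variables s_i follow the gradient of a weighted clause energy, and auxiliary weights a_m grow exponentially while their clause stays unsatisfied:

```python
import numpy as np

# Tiny 3-variable CNF: (x1 or x2) and (not x1 or x3) and (not x2 or not x3).
# c[m][i] is +1 / -1 / 0 if variable i appears plain / negated / not at all
# in clause m.
c = np.array([[ 1,  1,  0],
              [-1,  0,  1],
              [ 0, -1, -1]])

def clause_values(s):
    """K_m(s) = product over clause m's literals of (1 - c_mi * s_i) / 2.
    K_m == 0 exactly when clause m is satisfied at a corner s in {-1,1}^N."""
    K = np.ones(len(c))
    for m, row in enumerate(c):
        for i, cmi in enumerate(row):
            if cmi != 0:
                K[m] *= (1 - cmi * s[i]) / 2
    return K

def ctds_solve(steps=4000, dt=0.05):
    s = np.array([0.1, -0.2, 0.3])   # deterministic, slightly asymmetric start
    a = np.ones(len(c))              # auxiliary clause-weight variables
    for _ in range(steps):
        K = clause_values(s)
        grad = np.zeros_like(s)
        for m, row in enumerate(c):
            for i, cmi in enumerate(row):
                if cmi != 0:
                    # K_mi: the clause product with variable i's factor removed
                    Kmi = K[m] / max((1 - cmi * s[i]) / 2, 1e-12)
                    grad[i] += 2 * a[m] * cmi * Kmi * K[m]
        s = np.clip(s + dt * grad, -1.0, 1.0)        # Euler step, box-constrained
        a = np.minimum(a * np.exp(dt * K), 1e6)      # exponential growth of a_m
    return s > 0  # read out a Boolean assignment from the signs

x = ctds_solve()
# Check the recovered assignment against the clauses.
sat = all(any((row[i] == 1) == x[i] for i in range(3) if row[i] != 0)
          for row in c)
print(sat)
```

The auxiliary variables a_m are also what foreshadows the cost discussed next: on hard instances they can grow exponentially, and in a physical realization each of them must be carried by a real physical quantity.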

In an analog solver, which in theory can be orders of magnitude faster, the problem shifts to the physical limitations of the computing device: every continuous variable of the solver is represented by a *physical* variable in an actual analog device, and if any of those variables must scale exponentially with problem size, that ultimately translates into a very high computational cost.

Furthermore, when we consider other, similarly non-digital computing systems (oscillator networks, coherent Ising machines, etc.), their performances “all hinge on our ability to control real variables with arbitrarily high precision,” Dr. Toroczkai says. He argues that every such system must eventually run into a precision bottleneck: the lack of infinite precision (i.e., the inherent imperfections of available systems) will prevent these approaches from overcoming the limits of computing. He notes that the source of this loss is not just physical, as in device noise, but also algorithmic. As imperfections pile up, so does the inaccuracy of the algorithm, because its inherently nonlinear dynamics magnify these errors as the algorithm runs. Thus, while the exact mathematical solution of the differential equations defining the analog solver might find SAT solutions efficiently, actually computing those search trajectories with sufficient accuracy in a real physical device would be impossible for hard enough problems.

Dr. Toroczkai remains optimistic. While a boundary for efficient continuous-time analog computing does exist, he notes that the next step is to identify exactly where it lies. “We also have to be mindful of the possibilities; what are the limits, how close we can get to them? How much better can we do in the analog, continuous-time domain, compared to the digital?” he says. Using the SAT problem, and other NP problems, to keep pressing against the boundaries of analog computing can only further the science. There is still a future in analog computing, and it resides in the concrete identification of the limits of its power.

For the full transcript of Zoltan Toroczkai’s presentation, click here.

Watch Zoltan Toroczkai’s full presentation below.

Towards Understanding the Fundamental Limits of Analog, Continuous-time Computing


## Recent publications by Zoltan Toroczkai

- Accelerating a continuous-time analog SAT solver using GPUs
- An Ising Hamiltonian Solver using Stochastic Phase-Transition NanoOscillators
- Predictability of cortico-cortical connections in the mammalian brain
- Bounded Continuous-Time Satisfiability Solver
- Zoltan Toroczkai's Google Scholar page
- Transcript of Zoltan Toroczkai's presentation