Beyond the two moderated panels, Day 2 of Upgrade 2023 included eight more talks. The morning sessions featured scientists from the NTT Research Cryptography and Information Security (CIS) Lab, Medical and Health Informatics (MEI) Lab, and Physics and Informatics (PHI) Lab. In the afternoon, leaders from other NTT groups discussed several innovative technologies and service offerings; we will review those talks in a separate article. What follows are summaries of the three presentations on groundbreaking NTT Research initiatives.
Privacy Preserving Aggregate Statistics. “Companies want information about how their customers use their products,” CIS Lab Senior Scientist Elette Boyle said, in an admittedly non-shocking statement. That information helps companies direct future efforts and improve their products. Yet there’s a problem: consumers don’t like being tracked, and strengthened privacy laws have limited companies’ ability to gather usage data. But what if we shifted from individual to group data, which would serve most business purposes just as well? “Is there a way that a business can leverage these aggregate statistics…without the exposure of the risks and the problem and the responsibilities of actually learning the private data of the individual?” Boyle’s answer to her rhetorical question is yes: the CIS Lab has developed a way to split private data into two pieces that hides individual values but still enables aggregation. This highly efficient, low-overhead “Private Telemetry System,” in which NTT could act as a third-party data collector and another company as a neutral auditor, is a rare case of a cryptographic breakthrough with potential near-term applications.
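The “split private data into two pieces” idea Boyle described matches a standard cryptographic building block called additive secret sharing. The sketch below is purely illustrative of that general technique, not the CIS Lab’s actual protocol; the modulus and the two-server setup are our own assumptions:

```python
import secrets

MOD = 2**61 - 1  # large modulus, an illustrative choice

def split(value):
    """Split a private value into two random-looking shares.
    Either share alone reveals nothing; together they sum to the value mod MOD."""
    r = secrets.randbelow(MOD)
    return r, (value - r) % MOD

def partial_sum(shares):
    """Each server sums only the shares it holds."""
    return sum(shares) % MOD

# Each user splits a private usage count between two non-colluding servers.
values = [3, 7, 2, 9]
shares_a, shares_b = zip(*(split(v) for v in values))

# Combining the two partial sums reveals only the aggregate,
# never any individual's value.
total = (partial_sum(shares_a) + partial_sum(shares_b)) % MOD
# total == sum(values) == 21
```

Here each server learns only a stream of uniformly random numbers; the aggregate statistic emerges only when the two partial sums are combined, which is the property that lets a business learn group-level usage without the risks of holding individual data.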
Probabilistic Estimation of Cardiovascular Bio Digital Twin (CVBioDT) Parameters. MEI Lab Research Scientist Iris Shelly noted that in most healthcare encounters, clinicians have scarce patient information and limited capacity to consume it. “As a result,” Shelly said, “your physician is fundamentally only seeing a snapshot-in-time of your condition when you visit their office.” The MEI Lab aims to change this scenario through its CVBioDT initiative, whose design is based on a mechanistic model of cardiovascular physiology, with interconnected layers for the autonomic nervous system, kidneys and lungs. Tuning this generic model to create an individual digital twin then requires parameter estimation. To that end, the MEI Lab is working with a unique software platform that combines population-level knowledge with a patient’s own data. The process involves two steps: first, building up a digital population of hypotheses and corresponding predicted measurements under different contexts; second, using Bayesian inference to combine observed patient measurements with observed context and discover which of the original hypotheses best explain the current observations. “This distribution of hypotheses,” Shelly said, “is the bio digital twin.”
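The two steps Shelly described can be illustrated with a toy Bayesian update over a population of hypotheses. Everything concrete below (the one-parameter stand-in for the mechanistic model, the Gaussian measurement noise, the numbers) is an invented assumption for illustration, not the MEI Lab’s platform:

```python
import math
import random

random.seed(0)

# A stand-in for the mechanistic CV model: maps a single hypothetical
# parameter (e.g., a vascular tone coefficient) to a predicted
# measurement (e.g., mean arterial pressure in mmHg).
def predict(param):
    return 60 + 20 * param

# Step 1: build a "digital population" of parameter hypotheses,
# each implicitly paired with the measurement the model predicts.
hypotheses = [random.uniform(0.0, 2.0) for _ in range(1000)]

# Step 2: combine an observed patient measurement with those
# predictions via Bayes' rule, assuming Gaussian measurement noise.
observed = 95.0  # mmHg, invented
sigma = 5.0      # measurement noise, invented

def likelihood(h):
    err = observed - predict(h)
    return math.exp(-0.5 * (err / sigma) ** 2)

weights = [likelihood(h) for h in hypotheses]
total = sum(weights)
posterior = [w / total for w in weights]

# The re-weighted hypothesis population is the (toy) digital twin:
# a distribution over parameters most likely to explain the observation.
estimate = sum(h * p for h, p in zip(hypotheses, posterior))
```

The result is not a single “best” parameter but a weighted distribution over hypotheses, which is exactly the sense in which Shelly could say the distribution itself is the bio digital twin.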
Integrated Nonlinear Optics for Coherent Information Processing. “There’s a tremendous appetite for new modalities of computation,” PHI Lab Principal Scientist Tim McKenna said. Why? The problems that need solving keep getting bigger, while computing based on silicon electronics has reached its limit. “You can see,” McKenna said, pointing to a logarithmic-scale scatterplot, “that we’ve basically been plateaued for a decade.” Photonics provides a way out: when computations are performed with pulses of light instead of electrons, clock rates accelerate a thousandfold and computers become more efficient. The question is what kind of photonics. Silicon-based photonics cannot outperform the silicon transistor in nonlinear processing, memory and gain, all areas critical to computing. But lithium niobate is a game changer. The first thin-film lithium niobate (TFLN) optical parametric oscillator (OPO) was integrated on a chip only two years ago at Stanford, and the PHI Lab, which has taken an OPO-based approach to quantum computing, has made TFLN devices “that check all the boxes.” McKenna is enthusiastic: “They’re going to excel in machine learning, linear algebra and optimization problems. Those are some of the biggest of the big data tasks.”