Security and Privacy at Upgrade 2024: Part 1

The Security and Privacy track of Upgrade 2024 consisted of six sessions, encompassing theory and practice. Moderated by NTT Security Global Chief Information Security Officer John Petrie, the discussions ranged from cryptographic innovation to Generative AI (GenAI) to cybersecurity strategies and tactics. We will review the first three sessions – on the Cryptography & Information Security (CIS) Lab, cryptographic proofs, and GenAI – in this article, and the remaining three in a second blog post.

In the opening session, Petrie interviewed CIS Lab Director and Distinguished Scientist Brent Waters and CIS Lab Senior Scientist Elette Boyle about the lab and their work. As with NTT Research as a whole, the CIS Lab is concerned with basic research, which can be defined, as Waters explained, by this question: “Does it change the science?” He added that a bona fide, top-tier basic research lab – ranked as such by external metrics, such as the number of papers delivered at industry-leading conferences – nonetheless creates corporate value in two ways: it lends credibility to a company and enables it to think beyond its existing business to what could be the next big idea.

Waters said the CIS Lab’s success to date relates to having gotten “the right people.” Then there is what Boyle called the X factor of putting a large number of top theoretical cryptographers (10 to be precise) on the same team. “For example, if you look even at top universities, the cryptography group will maybe be one or a couple of people,” she said. “Here we have a powerhouse. And I think even just having everybody there together, the interaction, you walk down the hall, you ask the world expert some question about something and then you start chatting over coffee and you start developing one of the next directions of research that can really lead to a next big thing.” These internal dynamics have been augmented by external collaboration that includes research agreements, internships, post-doctoral fellowships and a visitor program. NTT Research has also become one of the three sponsors, along with Stanford University and the University of California, Berkeley, of a recurrent workshop on cryptography research called the Bay Area Crypto Day.

The CIS Lab’s status in part reflects what Waters has achieved. The cryptographic community, for instance, has affirmed that four papers he co-authored have stood the “test of time.” NTT Data, with support from NTT Research, is now engaged in efforts to commercialize one of his innovations, attribute-based encryption (ABE), which maps access to particular data to users with policy-defined attributes. When asked what comes next, Boyle pointed to Secure Multi-Party Computation (MPC), which enables multiple parties to jointly compute a function while keeping their individual inputs private. Looking ahead, Waters would like to revisit an effort from the 1980s to make the initial “hardness” assumptions of cryptographic theory smaller, or more believable.
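The flavor of MPC can be conveyed with a minimal sketch of additive secret sharing, one of its classic building blocks. This toy example (not one of the CIS Lab's protocols) lets three parties learn the sum of their private salaries without any party seeing another's input:

```python
import secrets

Q = 2**61 - 1  # modulus for arithmetic secret sharing


def share(value, n_parties):
    """Split a value into n additive shares that sum to it mod Q."""
    shares = [secrets.randbelow(Q) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares


def reconstruct(shares):
    """Recombine shares; any proper subset looks uniformly random."""
    return sum(shares) % Q


# Three parties each hold a private salary and want only the total.
salaries = [50_000, 72_000, 61_000]
all_shares = [share(s, 3) for s in salaries]

# Party i sums the i-th share of every input; it only ever sees
# random-looking values, never another party's salary.
partial_sums = [sum(col) % Q for col in zip(*all_shares)]

print(reconstruct(partial_sums))  # 183000
```

Because addition commutes with the sharing, the recombined partial sums equal the true total; real MPC protocols extend this idea to multiplication and arbitrary functions.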

In the next session, CIS Lab Senior Scientist Abhishek Jain discussed the concept of cryptographic proofs, specifically, succinct non-interactive arguments (SNARGs). To illustrate the concept, Jain pointed out that someone wanting to train a very large neural network is likely to rent capacity from a cloud services provider to do the actual training. But how do you then verify that the training was done correctly, without resorting to a lengthy process of retraining the network yourself, which is what you didn’t want to do in the first place? Enter SNARGs, whose proof size and verification time remain small, independent of the complexity of the computation being verified. “We can ask the cloud to just send me a very short proof that the training was done correctly,” Jain said. “And because the proof is short, I can verify it very quickly.”
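Constructing an actual SNARG is far beyond a blog snippet, but the cost asymmetry Jain describes – expensive to produce, cheap to check – can be illustrated with a classic toy certificate. Finding a factor of a large number takes many trial divisions, while checking that the factor divides the number is a single operation. (This is only an analogy for the prover/verifier gap; the names below are illustrative, and a real SNARG certifies arbitrary computations, not just compositeness.)

```python
def find_factor(n):
    """The 'prover' side: heavy work (trial division) to find a factor."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None  # no factor found: n is prime


def verify_composite(n, factor):
    """The 'verifier' side: a couple of comparisons and one modulo
    operation, however long the prover's search took."""
    return factor is not None and 1 < factor < n and n % factor == 0


n = 1_000_003 * 1_000_033  # composite, but with no small factors
proof = find_factor(n)     # slow: on the order of 10**6 trial divisions
assert verify_composite(n, proof)  # instant
```

Here the short certificate (one factor) convinces the verifier without redoing the search, just as a SNARG lets the client skip retraining the network.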

Besides outsourced computation, electronic voting and blockchain scalability are two other examples that call for succinctness and non-interactivity (i.e., requiring just one message to communicate). In fact, SNARGs have already become a key part of the multi-billion-dollar blockchain industry. The problem is that cryptographers are dissatisfied with the assumptions being used. Jain said that one promising workaround involves changing the problem statement to one of proving multiple claims simultaneously. While that approach intuitively sounds more difficult, Jain said it enables constructions from standard assumptions.

Pivoting fully back toward practice, in the third session of this track, Moshe Karako, CTO of NTT Innovation Lab (IL) Israel, discussed ways to safeguard the human-AI interface and build trust and security into conversational AI. There is no doubt that the use of these tools is widespread. Karako pointed to a 2023 survey in Nature of 1,600 scientists, which indicated that almost 30 percent had used GenAI to help write manuscripts. A U.K. business survey has shown that the percentage of employees using ChatGPT day-to-day is even higher. Yet how secure are these large language models (LLMs)? What does their unsafe or unethical use look like? What business risks do they pose? And what can you do about it?

As for the security question, NTT IL has a team of hackers who have successfully broken these tools; for instance, by using an AI bot to outsmart a CAPTCHA. Karako’s team has also been able to get ChatGPT to recommend the use of malware, an unsafe use that clearly poses a business risk. This kind of exploratory (“white hat”) hacking actually helps, as the goals are to make GenAI more resistant to attack and to implement ways to protect users and data. The NTT IL virtual smart lab (VSL), which has benefited about 90 global users, is another resource. “That’s a playground, a sandbox, somewhere you can play with emerging technologies in a very safe area,” Karako said. “A live demo is always much better than just showing [customers or internal execs] a presentation of what you may or may not do.”
