From weather sensors and autonomous vehicles to electric grid monitoring and cloud gaming, the world of edge computing is growing increasingly complex, but the world of HPC hasn’t necessarily kept pace with these rapid innovations at the edge. At a panel at Nvidia’s virtual GTC22 (“HPC, AI, and the Edge”), five experts discussed how leading-edge HPC applications can benefit from deeper incorporation of AI and edge technologies.
On the panel: Tom Gibbs, developer relations for Nvidia; Michael Bussmann, founding manager of the Center for Advanced Systems Understanding (CASUS); Ryan Coffee, senior staff scientist for the SLAC National Accelerator Laboratory; Brian Spears, a principal investigator for inertial confinement fusion (ICF) energy research at Lawrence Livermore National Laboratory (LLNL); and Arvind Ramanathan, a principal investigator for computational biology research at Argonne National Laboratory.
The edge of a deluge, and a deluge of the edge
Early in the panel, Gibbs — who served as the moderator of the discussion — termed the 2020s the “decade of the experiment,” explaining that virtually every HPC-adjacent domain was in the midst of having a major experimental instrument (or a major upgrade to an existing instrument) come online. “It’s really exciting; but on the other side, these are going to produce huge volumes of rich data,” he said. “And how we can use and manage that data most effectively to produce new science is really one of the key questions.”
Coffee agreed. “I’m actually at the X-ray laser facility at SLAC, and so I come at this from a short pulse time-resolved molecular physics perspective,” he said. “And we are facing an impending data deluge as we potentially move to a million-shots-per-second data rate, and so that’s pulled me over the last half decade more into computing at the edge. And so where I feed into this is: how do we actually integrate the intelligence at the sensor with what’s going on with HPC in the cloud?”
“One of the major opportunities I see moving forward is: we can look at rare events now,” he continued. “No one in their right mind is really going to move terabytes per second — that just doesn’t make sense — however, we need the ability to record terabytes per second to watch for the anomalies that actually are driving the science that happens right now.”
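Coffee’s point (record everything, keep only the anomalies) can be sketched as a simple edge-side filter. The Python below is purely illustrative; the detector frames, the anomaly score, and the threshold are hypothetical stand-ins, not SLAC’s actual pipeline. The idea is just that only frames whose score clears a threshold ever leave the sensor:

```python
import numpy as np

def stream_rare_events(frames, score_fn, threshold):
    """Yield only the frames whose anomaly score exceeds the threshold.

    Everything else is dropped at the sensor, so the full
    terabytes-per-second stream never has to leave the edge.
    """
    for i, frame in enumerate(frames):
        score = score_fn(frame)
        if score > threshold:
            yield i, frame, score

# Toy example: "frames" are 1-D detector readouts; the anomaly score is
# simply the deviation of each frame's mean from a zero baseline.
rng = np.random.default_rng(0)
frames = [rng.normal(0.0, 1.0, 1024) for _ in range(1000)]
frames[123] += 5.0  # inject one rare event

kept = list(stream_rare_events(frames, lambda f: abs(f.mean()), threshold=1.0))
print([i for i, _, _ in kept])  # only the injected anomaly survives
```

In a real deployment the score function would itself be a trained model running in hardware at the sensor, which is exactly the edge intelligence the panelists describe.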
Spears, hailing from fusion research at LLNL, spoke to his field as a prime example. He pointed to recent success at LLNL’s ICF experiment, where the team produced 1.35 megajoules of energy from a fusion reaction (nearly break-even), and noted that the JET team in Europe had achieved a similar breakthrough in the last few months. But fusion research, he said, depended on “data streams off of our cameras for experiments that last for — you know, some of the action is happening over 100 picoseconds.”
“We need the ability to record terabytes per second to watch for the anomalies that actually are driving the science that happens right now.”
Sharpening AI’s edge
So: huge amounts of data at very fast timescales, with the aim of moving from once-daily experiments to many per second. Spears explained how they planned on handling this. “We’re going to do things fast; we’re gonna do them in hardware at the edge; we’re probably gonna do them with an AI model that can do low-precision, fast compute, but that’s going to be linked back to a very high-precision model that comes from a leadership-class institution.”
“You can start to see from these applications the convergence of the experiment and timescales,” he said, “driving changes in the way we think about representing the physics and the model and moving that to the edge[.]”
AI, then, accelerates this same strategy, helping to whittle down the data that moves from the edge to the larger facilities. “You can use AI in terms of guiding where the experiments must go, in terms of seeing what data we might have missed,” Ramanathan said.
Bussmann agreed, citing many fields employing a “live stream [of data] that will not be recorded forever” — “so we have to make fast decisions and we have to make intelligent decisions,” he said. “We realize that this is an overarching subject across domains by now because of the capabilities that have become [widespread].”
“AI provides the capability to put a wrapper around that, train a lightweight surrogate model, and take what I actually think in my head and move it toward the edge of the computing facility,” Spears said. “We can run the experiments now for two purposes: one is to optimize what’s going on with the actual experiment itself — so we can be moving to a brighter beam or a higher-temperature plasma — but we can also say, ‘I was wrong about what I was thinking, because as a human, I have some weaknesses in my conception of the way the world looks. So I can also steer my experiment to the places where I’m not very good, and I can use the experiment to make my model better.’ And if I can tighten those loops by doing the computing at the edge … I can have these dual outcomes of making my experiment better and making my model better.”
“If I can tighten those loops by doing the computing at the edge … I can have these dual outcomes of making my experiment better and making my model better.”
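Spears’s loop (use each experiment both to optimize the design and to repair the surrogate where it is weakest) is essentially active learning. Here is a minimal Python sketch of that pattern, with a hypothetical toy “experiment” and a bootstrap ensemble of polynomial fits standing in for the lightweight surrogate and its uncertainty estimate; none of these choices come from the panel itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def experiment(x):
    """Stand-in for the real experiment (a hypothetical noisy response)."""
    return np.sin(3 * x) + 0.05 * rng.normal(size=np.shape(x))

def fit_ensemble(xs, ys, n_models=5, degree=4):
    """Lightweight surrogate: an ensemble of small polynomial fits whose
    disagreement serves as an uncertainty estimate."""
    models = []
    for _ in range(n_models):
        idx = rng.choice(len(xs), size=len(xs), replace=True)  # bootstrap
        models.append(np.polyfit(xs[idx], ys[idx], degree))
    return models

def uncertainty(models, grid):
    preds = np.array([np.polyval(m, grid) for m in models])
    return preds.std(axis=0)

# Active-learning loop: each round, run the experiment where the
# surrogate disagrees with itself the most ("where I'm not very good"),
# then retrain the surrogate on the enlarged data set.
grid = np.linspace(-1, 1, 201)
xs = rng.uniform(-1, 1, 8)
ys = experiment(xs)
for _round in range(10):
    models = fit_ensemble(xs, ys)
    x_next = grid[np.argmax(uncertainty(models, grid))]
    xs = np.append(xs, x_next)
    ys = np.append(ys, experiment(x_next))

print(f"{len(xs)} experiments run")
```

Tightening this loop at the edge, as Spears describes, means running the query-and-retrain cycle next to the instrument rather than shipping raw data back to a datacenter between shots.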
Just a matter of time
Much of the discussion in the latter half of the panel focused on how these AI and edge technologies could be used to usefully interpolate sparse or low-resolution data. “You really need these surrogate models,” Ramanathan said, explaining that his drug discovery work spanned 15 to 20 orders of magnitude and that, to tackle it, it was useful to “build models that can adaptively sample this landscape without having all of this information”: rare event identification.
“I can run a one-dimensional model really cheaply — I can run 500 million of those, maybe,” Spears said. “A two-dimensional model is a few hundred or a thousand times more expensive. A three-dimensional model is thousands of times more expensive than that. All of those have advantages in helping me probe around in parameter space, so what a workflow tool allows us to do is make decisions back at the datacenter saying: run interactively all of these 1D simulations and let me make a decision about how much information gain I’m getting from these simulations.”
“And when I think I’ve found a region or parameter or design space that is high-value real estate, I’ll make a workflow decision to say, ‘plant some 2D simulations instead of 1Ds’ and I’ll home in on another more precise area of the high-value real estate. And then I can elevate again to the three-dimensional model which I can only run a few times. That’s all high-precision computing that’s being steered on a machine like Sierra that we have at Lawrence Livermore National Laboratory.”
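The multi-fidelity escalation Spears describes can be sketched as a staged workflow. In the Python below, the relative costs loosely follow the ratios he quotes, but the simulator, the design space, and the selection rules are all hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical relative costs, loosely following the quoted ratios:
# 2-D runs hundreds of times the 1-D cost, 3-D thousands of times 2-D.
COST = {"1D": 1, "2D": 500, "3D": 500 * 2000}

def simulate(x, fidelity):
    """Stand-in simulator: higher fidelity means less model error."""
    noise = {"1D": 0.3, "2D": 0.05, "3D": 0.005}[fidelity]
    return np.exp(-(x - 0.7) ** 2 / 0.01) + noise * rng.normal()

budget_spent = 0

# Stage 1: carpet the design space with cheap 1-D runs.
xs = np.linspace(0, 1, 500)
scores1 = np.array([simulate(x, "1D") for x in xs])
budget_spent += len(xs) * COST["1D"]

# Stage 2: plant a handful of 2-D runs in the top-scoring region.
top = xs[np.argsort(scores1)[-10:]]
scores2 = np.array([simulate(x, "2D") for x in top])
budget_spent += len(top) * COST["2D"]

# Stage 3: a few 3-D runs at the best 2-D candidates.
best = top[np.argsort(scores2)[-2:]]
scores3 = [simulate(x, "3D") for x in best]
budget_spent += len(best) * COST["3D"]

print(f"total cost: {budget_spent} (in 1-D units)")
```

The design choice mirrored here is the one Spears names: spend the bulk of the run count at the cheapest fidelity to estimate where the information gain is, and reserve the expensive fidelities for the high-value real estate.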
“All of our problems are logarithmically scaled, right?” added Coffee. “We have multiple scales that we want to be sensitive to — it doesn’t matter which domain you’re in. … We all are using computers to help us do the thing that we don’t do well, which is swallow data quickly enough.”
“When you start talking about integrating workflows for multiple domains and they all have a similar pattern of use, doesn’t that beg us to ask for an infrastructure where we bind HPC together with edge to follow a common model and a common infrastructure across domains?” he continued. “I think we’re all asking for the same infrastructure. And this infrastructure is really, now, not just what happens in the datacenter, right? It’s what happens in the datacenter and how it’s sort of almost neurologically connected to all of the edge sensors that are distributed broadly across our culture.”
“We all are using computers to help us do the thing that we don’t do well, which is swallow data quickly enough.”