
Virtual reactor

VERA analyzes nuclear reactor designs in unprecedented detail


Some people would call ORNL computer scientist Tom Evans an optimist. He's building a mathematical model of a nuclear reactor so detailed that the only computer it can run on hasn't been built yet.

Most of the time, reactor models are used by utility companies to optimize and troubleshoot power plant operations. Usually, these simulations run on small, networked groups of computers, called "clusters," or on garden-variety desktop and laptop computers—which makes Evans' gargantuan programming effort all the more mysterious. That is, until you start thinking like a computer scientist.

VERA's users can monitor the performance of any part of the reactor core at any point in its fuel cycle, receiving a data-based snapshot of what is happening in each location. Image: Andrew Godfrey, Tom Evans, Greg Davidson and Josh Jarrell

"The fact of the matter is that today's supercomputer is tomorrow's cluster—and the next day's laptop," he explains. "If we don't push the envelope in the area of supercomputing now, we won't have anything to offer users working on desktops and clusters five years from now." Evans works for the laboratory's Consortium for Advanced Simulation of Light Water Reactors, and over the next five years, he and his colleagues will be focused on producing a "virtual reactor" called VERA. This software tool will enable users in the utility industry to simulate and analyze almost every aspect of the performance of existing or proposed reactor designs in unprecedented detail.

Development of VERA, which began on the laboratory's Jaguar supercomputer, will eventually coalesce on Titan, Jaguar's brawnier successor, which recently came online.

Virtual blueprint

Currently, because most nuclear utilities have limited access to high-end computational resources like Jaguar or Titan, the industry standard for reactor simulation consists primarily of low-resolution models that do a good job of capturing basic reactor behavior but don't provide a great deal of detail or flexibility. CASL's goal is to use VERA's higher-resolution view of reactor operations to get a better understanding of problems that these models don't anticipate, such as corrosion, interactions among fuel rod components and various safety concerns, and then to share VERA's capabilities with utility companies.

VERA provides users with a comprehensive view of reactor core operations by integrating a range of specialized modeling tools designed to simulate what goes on inside a reactor and providing a framework that enables them to interact with one another.

"We take a number of factors into consideration," Evans says, "including thermal hydraulics, neutron transport, fuel depletion, chemistry and a variety of other processes that affect the behavior of the core. There is a lot of physics at play here. Each piece of VERA has a set of model equations that represent the physics involved."

The upshot of VERA's wide-ranging perspective is that its users can monitor the performance of any part of the reactor core at any point in its fuel cycle. An engineer can ask the model about the temperature of a particular fuel pin or the power being produced by a specific part of the core and get a data-based snapshot of what is happening in each location.
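As a toy illustration of that kind of query, the snippet below indexes a pin-resolved power array by assembly, pin position and axial level. The array shape and the data are invented for illustration and do not reflect VERA's actual output format.

    import numpy as np

    # Invented layout: 193 assemblies, a 17 x 17 pin lattice and 49 axial levels,
    # roughly the dimensions of a typical pressurized water reactor core.
    pin_power = np.random.default_rng(0).random((193, 17, 17, 49))

    # "What is the relative power of pin (8, 8) in assembly 57 at the core midplane?"
    print(pin_power[57, 8, 8, 24])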

Evans and his colleagues constantly benchmark the data produced by VERA against data from real-world reactors operating under identical conditions. These comparisons give them a high level of confidence that measurements made on the model will match those made on operating reactors at any point in time.

Hybrid path forward

The technology that will enable Titan to support VERA's computational demands is a novel hybrid computer architecture that combines the sophistication of traditional central processing units (CPUs) with the raw data-handling power of graphics processing units (GPUs).

This combination is expected to provide Titan with 10 times the power of its predecessor. However, with that surge in speed comes a similar increase in the complexity of the software needed to make the most of the GPUs' ability to process thousands of streams of data simultaneously.

"I like to compare moving to a hybrid computer architecture to the transition we made from serial to parallel computers several years ago," Evans says. "It wasn't a drastic change to go from 1 to 4 to 8 to many more processor cores. Obviously things happen when you're using 100,000 cores that don't happen with 1,000 cores, but fundamentally it wasn't a tremendous shift.

"What we're doing now to adapt to hybrid architecture is much more dramatic. We had years to scale from one processor to thousands of processors, so each step up felt like a rolling hill. Getting ready to take advantage of the new hybrid architecture is like going from one processor to 300,000 cores overnight."

Optimizing for Titan

Fortunately, Evans and his colleagues have been able to draw on work done in the laboratory's Center for Accelerated Application Readiness (CAAR), which has brought together dozens of researchers from national labs, computer vendors and universities to ensure that critical computer codes in a range of research areas will be ready to take full advantage of Titan when it comes online later this year.

"The reality is that GPUs are vectorized machines," Evans says, "which means they are designed to break problems into many threads of operation. So we have to think about programming in a different way. For example, if I want to multiply 5 × 4 on a GPU, it will work faster to do it many times on multiple threads even though the same operation is repeated. GPUs do not perform single operations efficiently. It's an entirely different way of thinking about programming, so it's a fairly steep learning curve."

Evans notes that the driving force behind the move to hybrid supercomputers is not just the need for greater computational power, but the difficulty of sustaining both the current architecture and the current level of power consumption.

"We can't keep scaling the number of chips up indefinitely," he says, "and in order to get more computational power per megawatt, we need to take advantage of new architectures. That's the only way we're going to move forward."

Once Titan is up and running, Evans and his colleagues hope it will be able to complete a simulation of an entire reactor fuel cycle in a week. The data provided by this kind of detailed model would be invaluable to power plant operators, and the one-week timeframe would mesh well with the engineering workflows at most nuclear power facilities.

Evans and his colleagues have tested pieces of VERA on a developmental version of Titan, and preliminary results indicate it's three to five times too slow to meet the one-week goal. However, the next-generation GPUs that will be available when the full version of Titan goes online are expected to make up some or all of the shortfall, providing a 3X to 4X boost in speed.

"We're leveraging our expertise with numerics and algorithms to optimize our code and make the GPUs work harder for us," Evans says, "so we may be able to do this. Creating a model with this level of detail was inconceivable not too long ago, and now it's within the realm of possibility. Will we capture every little piece of physics that happens in the reactor? No. Will we capture a more comprehensive model of the reactor core? I think we will. Will the performance metrics from the development machine really scale up to the full machine? We will find out.

"We've seen a lot of signs that suggest we're on the right path," Evans says. —Jim Pearce