Oak Ridge goes gaga for Nvidia GPUs

Fermi chases Cell for HPC dough

Hybrid futures

It would not be at all surprising to see the future Oak Ridge machine take a hybrid architecture that hooks Fermi GPUs into Opteron server nodes over PCI-Express 2.0 links, much as IBM uses PCI-Express 1.0 links to lash Cell boards to the Opteron nodes in Roadrunner.
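
For a sense of how such a hybrid looks from the software side, here's a minimal sketch (ours, not Oak Ridge's) of a host program using the CUDA runtime to enumerate the GPU coprocessors hanging off a node's PCI-Express links. It assumes a CUDA toolkit recent enough to report PCI bus IDs in its device properties; compile with nvcc.

// Sketch: how a host node discovers its PCIe-attached GPU coprocessors
// through the CUDA runtime. The output format here is illustrative.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable GPUs found on this node\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // Each device hangs off the host's PCI-Express fabric, much as
        // the Cell boards do in Roadrunner's tri-blade units.
        std::printf("GPU %d: %s, %d multiprocessors, PCI bus %d\n",
                    dev, prop.name, prop.multiProcessorCount, prop.pciBusID);
    }
    return 0;
}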

Jeff Nichols, associate lab director for computing and computational sciences at Oak Ridge, said in a statement that the Fermi GPUs, which at around 500 gigaflops have eight times the double-precision floating point performance of the current Teslas, would enable "substantial scientific breakthroughs that would be impossible without the new technology."

Working with the future Tesla GPU co-processors and their successors, Oak Ridge is hoping to push through the exaflops barrier within ten years. Getting to 10 petaflops next year with a parallel super that uses the Fermi GPUs is just a down payment.

The important thing about the Fermi GPUs is that Nvidia's CUDA programming environment supports not just C, but C++ as well. Once Fortran compilers can also see and dispatch work to the GPUs, the combination of decent double-precision performance plus C++ and Fortran support should truly push GPU co-processors into the mainstream. This is exactly what Nvidia, AMD (with its FireStream GPUs), and Intel (with its Larrabee GPUs) are all hoping for.
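
To make that concrete, here's a minimal sketch of a double-precision daxpy (y = a*x + y) written in CUDA C++, the kind of kernel Fermi's beefed-up double-precision units are aimed at. The kernel and buffer names are ours for illustration, and it assumes a device of compute capability 1.3 or better, the first generation with double-precision support (compile with: nvcc -arch=sm_13 daxpy.cu).

// Sketch: dispatching double-precision work to the GPU from C++ host code.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void daxpy(int n, double a, const double* x, double* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];  // one element per thread
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(double);

    // Host-side data, initialized so the result is easy to check.
    double *hx = new double[n], *hy = new double[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0; hy[i] = 2.0; }

    // Copy across the PCI-Express link to the GPU, run, copy back.
    double *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    daxpy<<<(n + 255) / 256, 256>>>(n, 3.0, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    std::printf("y[0] = %f (expect 5.0)\n", hy[0]);  // 3.0 * 1.0 + 2.0
    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}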

Cell multiplication

The question now is, what will Big Blue do to counter these moves onto its hybrid supercomputing turf?

Several years back, when the Cell chips were first being commercialized and offered terrible double-precision floating-point performance (something like 42 gigaflops of double-precision against 460 gigaflops single-precision for a two-socket Cell blade), Big Blue's roadmap called for a two-socket Cell board that could deliver 460 gigaflops of single-precision and 217 gigaflops of double-precision math. We know this blade server as the BladeCenter QS22.

The roadmap also called for a BladeCenter QS2Z, whose Cell chips would each have two Power cores and a whopping 32 vector processors, using next-generation memory and interconnect technology; the QS2Z would sport 2 teraflops per blade at single precision and 1 teraflops at double precision.

That's about twice the oomph in a Cell chip compared to the forthcoming Fermi GPUs. Oak Ridge knew that, of course, but maybe this future Cell chip never made it out of the concept stage, which is where it stood in early 2007.

IBM is mum on its Cell roadmap at this point, but this future chip was slated for delivery in the first half of 2010, more or less concurrently with the Fermi GPU co-processors. ®
