HPCwire

Since 1986 - Covering the Fastest Computers
in the World and the People Who Run Them

Green500 Turns Blue


The latest Green500 rankings were announced last week, revealing that top performance and power efficiency can indeed go hand in hand. According to the latest list, the greenest machines, in fact the entire top 20, were all IBM Blue Gene/Q supercomputers. Blue Gene/Q, of course, is the platform that captured the number one spot on the latest TOP500 list, and it accounts for four of the ten fastest supercomputers in the world.

Not only did Blue Gene/Q dominate the top of the Green500, it did so in commanding fashion. Although the smaller Q machines tended to be slightly more energy efficient, all 20 delivered more than 2,000 megaflops/watt. That turned out to be about twice the efficiency of the average for the next 20 supercomputers on the list.

Of those following 20 systems, 10 are accelerator-based. In fact, the 21st and 22nd most efficient supercomputers are the Intel MIC-accelerated prototype cluster (1380.67 megaflops/watt) and the ATI Radeon-equipped DEGIMA cluster (1379.79 megaflops/watt). The remainder all use NVIDIA GPU parts and are only somewhat less power-efficient.

It's hard to draw a lot of conclusions about the efficiency of accelerator-equipped machines, since the ratio between the more energy-efficient GPUs (or MIC coprocessors) and the CPUs on these machines has a big impact on the overall results. In other words, a system with a high GPU:CPU ratio would tend to yield more megaflops/watt than one with a lower ratio. Further, the current crop of accelerator-based systems tends to yield sub-par Linpack performance (the basis of both the TOP500 and Green500 results) relative to peak, although this "bias" does point out that it can be difficult to extract performance, and performance per watt, from these heterogeneous platforms.
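To make the ratio effect concrete, here is a small sketch, using made-up efficiency figures rather than measured values, showing that a system's overall megaflops/watt is simply a power-weighted blend of its CPU and accelerator efficiencies:

```python
# Illustrative sketch: system-level megaflops/watt is a power-weighted
# blend of CPU and accelerator efficiency, so shifting more of the power
# budget toward accelerators raises the overall figure.
# All numbers below are hypothetical, not taken from any real machine.

def blended_mflops_per_watt(cpu_watts, acc_watts, cpu_eff, acc_eff):
    """Overall megaflops/watt given per-component power budgets (watts)
    and per-component efficiencies (megaflops/watt)."""
    total_mflops = cpu_watts * cpu_eff + acc_watts * acc_eff
    return total_mflops / (cpu_watts + acc_watts)

# Assume CPUs deliver ~500 megaflops/watt and accelerators ~2000:
print(blended_mflops_per_watt(100, 100, 500, 2000))  # 1:1 power split -> 1250.0
print(blended_mflops_per_watt(50, 250, 500, 2000))   # 1:5 power split -> 1750.0
```

The 1:5 split yields 1750 megaflops/watt versus 1250 for the 1:1 split, even though the component efficiencies are identical, which is why raw Green500 numbers say as much about system balance as about the silicon itself.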

A number of x86 CPU-only systems, especially those employing the latest Intel "Sandy Bridge" processors, did rather well in the latest rankings. In this category is the new 2.9 petaflop SuperMUC machine that just booted up at Germany's Leibniz Supercomputing Centre (LRZ). This IBM iDataPlex cluster sits at number 4 on the TOP500 list and manages a very respectable number 39 placement on the Green500. The system uses an innovative hot-water cooling system that not only saves energy costs, but whose waste heat is repurposed for local use at the LRZ facility. The machine also employs system software designed to optimize energy consumption.

The other instructive lesson of SuperMUC is that institutions are willing to endure relatively high energy costs to get leading-edge performance. (SuperMUC is currently the speediest supercomputer in Europe.) Even though its innovative cooling system will supposedly save around a million euros per year in energy costs, the high price of electricity in Germany will still make SuperMUC the most expensive supercomputer in Europe to operate.

According to Arndt Bode, chairman of the board at LRZ, who spoke about the new system at ISC'12, energy costs there are rather steep: 0.158 €/kilowatt-hour as of 2010. That's around 10 times the cost at Oak Ridge National Laboratory, perhaps the least expensive place to do supercomputing in the US, thanks in large part to cheap blocks of power purchased from the Tennessee Valley Authority. Since SuperMUC consumes 3.4 megawatts, the Germans are paying what an equivalent 34-megawatt system would cost Oak Ridge today.
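The comparison is easy to verify with a back-of-the-envelope calculation, using the rate and power draw quoted above and assuming year-round operation:

```python
# Back-of-the-envelope check of the LRZ vs. Oak Ridge cost comparison,
# using the figures quoted in the article.

power_mw = 3.4             # SuperMUC's power draw
lrz_rate = 0.158           # euros per kWh at LRZ (2010 figure)
ornl_rate = lrz_rate / 10  # roughly a tenth, per the article

hours_per_year = 24 * 365
kwh_per_year = power_mw * 1000 * hours_per_year

lrz_cost = kwh_per_year * lrz_rate    # ~4.7 million euros/year
ornl_cost = kwh_per_year * ornl_rate  # ~0.47 million euros/year

print(f"LRZ:  {lrz_cost / 1e6:.2f}M euros/year")
print(f"ORNL: {ornl_cost / 1e6:.2f}M euros/year")
```

At a tenth the rate, running 3.4 MW at LRZ costs roughly what running 34 MW would cost at Oak Ridge, which is exactly the article's point.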

Considering that supercomputing designers have drawn a 20MW line in the sand for exascale systems, the Germans, in effect, have already resigned themselves to that level of cost. Of course, not everyone is going to be able to plop their exascale systems in the Tennessee Valley (or at other cheap energy locales like Iceland). And energy prices are likely to rise between now and the end of the decade, almost everywhere. But 20MW or more (maybe significantly more) is doable for at least some geographies today, assuming the HPC funding and political will is there.

Any way you look at it, exaflops-level supercomputing is going to be an expensive proposition, at least initially. The average price of a megawatt-year in the US is about a million dollars, and even at Oak Ridge, it probably costs between $200,000 and $300,000. That's plenty of motivation to reduce the energy footprint of these machines.

Which brings us back to Blue Gene/Q. The largest such system today, the number one Sequoia machine at Lawrence Livermore, delivers 20 (peak) petaflops and draws 7.9MW when it's running floating-point-heavy codes like Linpack. But it would need to be 50 times more powerful to reach an exaflop, and roughly 25 times as energy-efficient to squeeze such a machine into 20MW.
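The scaling arithmetic checks out roughly as follows, using Sequoia's published Linpack result (about 16.3 petaflops) alongside its 20-petaflop peak:

```python
# Sketch of the exascale scaling arithmetic, based on Sequoia's
# published figures: 20 petaflops peak, ~16.3 petaflops on Linpack,
# and 7.9 MW of power draw.

peak_pflops = 20.0
linpack_pflops = 16.3
power_mw = 7.9
exascale_budget_mw = 20.0

perf_gap = 1000.0 / peak_pflops  # 50x more peak flops for an exaflop

# Efficiency in gigaflops/watt: 1 petaflop = 1e6 gigaflops, 1 MW = 1e6 W.
current_eff = linpack_pflops * 1e6 / (power_mw * 1e6)  # ~2.1 GF/W
target_eff = 1e9 / (exascale_budget_mw * 1e6)          # 50 GF/W in 20 MW
eff_gap = target_eff / current_eff

print(f"performance gap: {perf_gap:.0f}x")
print(f"efficiency gap:  {eff_gap:.0f}x")
```

On Linpack numbers the efficiency gap works out to roughly 24x, in line with the approximately 25-fold improvement the article calls for.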

IBM appears to be on the right track here, at least from the processor standpoint. Unlike a conventional x86-based HPC cluster, Blue Gene/Q is powered by a custom SoC based on the PowerPC A2 core. That processor merges the network and compute on-chip, and is designed as a low-power, high-throughput, high-core-count (16) architecture. The clock frequency is a modest 1.6 GHz, about half that of a top-bin Xeon. All exascale processors are likely to follow this general design.

It's not all up to the processor, however. Memory and system network components will also need analogous redesigns to address their own power issues for exascale. By the way, it would be instructive if the Green500 could expand its mandate and develop useful performance-per-watt metrics aimed at main memory and interconnects. Linpack is a notoriously poor measure of data movement, which has become the limiting factor for many applications, "big data" and otherwise. A starting point might be to incorporate the Graph 500 results into a separate set of Green500 rankings.

In the meantime, the list is drawing some much-needed attention to HPC power issues. And competition for those top Green500 spots is going to heat up. In the absence of a Blue Gene/R follow-on -- and at this point, IBM has kept mum about extending the BG franchise -- there is likely to be some stiff competition from machines powered by the upcoming NVIDIA Kepler K20 GPUs and Intel MIC coprocessors, and their successors. AMD APU-based systems might show up in a couple of years, and the newer SPARC64 offerings from Fujitsu or Chinese systems based on domestically designed chips like Godson may make their presence felt as well. The green revolution in HPC is just beginning.
