Plug In and Power Down


The electricity that powers the processor cores and memory at the heart of servers drives up operational costs and carries a hefty environmental footprint. That reality has spurred funding for green projects focused on improving power efficiency in datacenters. Last week, FIT4Green announced results showing that its software can eliminate a considerable share of the energy used by compute clusters.

Formed in January 2010, FIT4Green is an EU-funded project tasked with improving energy efficiency and reducing CO2 emissions. The project set out to achieve that goal by developing a power-aware plug-in that sits on top of existing datacenter management tools. The software is designed to work across different types of workloads and has been validated at the Italian energy company ENI (service/enterprise computing), the Jülich Supercomputing Centre (HPC), and HP (cloud computing).

Project researchers say their software can deliver 20 percent, and in some cases up to 50 percent, direct energy savings. CO2 emissions drop in proportion to the reduced power draw, and the lower power consumption in turn cuts cooling requirements, yielding additional savings.

At its core, the FIT4Green plug-in reallocates workloads: once jobs have been consolidated onto fewer machines, the equipment left unused is switched off. The group maintains that the plug-in does not affect service level agreements (SLAs) or quality of service (QoS) metrics, and the technology has been tested across a variety of datacenters with differing workloads.
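The announcement does not spell out the consolidation algorithm, but the general consolidate-then-power-down idea can be sketched in a few lines of Python. This is a minimal, hypothetical illustration only: the Job and Server classes, the first-fit-decreasing heuristic, and the power_down() stub are assumptions made for clarity, not FIT4Green's actual implementation.

from dataclasses import dataclass, field

@dataclass
class Job:
    name: str
    cores: int

@dataclass
class Server:
    name: str
    total_cores: int
    jobs: list = field(default_factory=list)

    @property
    def free_cores(self):
        return self.total_cores - sum(j.cores for j in self.jobs)

def consolidate(servers, jobs):
    """First-fit-decreasing: place the biggest jobs first, each onto the
    most heavily loaded server that can still hold it, so that as many
    servers as possible end up with no work at all."""
    for job in sorted(jobs, key=lambda j: j.cores, reverse=True):
        candidates = [s for s in servers if s.free_cores >= job.cores]
        if not candidates:
            continue  # no room anywhere; a real scheduler would queue the job
        target = min(candidates, key=lambda s: s.free_cores)  # fullest server
        target.jobs.append(job)
    return [s for s in servers if not s.jobs]  # these can be switched off

def power_down(server):
    # Stand-in for a real power-management call (e.g. an IPMI command).
    print(f"powering down {server.name}")

servers = [Server("node1", 16), Server("node2", 16), Server("node3", 16)]
jobs = [Job("a", 8), Job("b", 6), Job("c", 4)]
for idle in consolidate(servers, jobs):
    power_down(idle)  # prints: powering down node3

The three jobs fit comfortably on two of the three servers, leaving the third idle and eligible for shutdown, which is the effect the plug-in exploits.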

On the HPC front, the project worked with two test clusters, 'Juggle' and 'Jufit,' housed at the Jülich Supercomputing Centre. A paper describes how the plug-in reduced single-site power consumption by 27.3 percent when the systems had no workload; as utilization increased, the energy savings decreased. That makes sense, since higher utilization leaves less room for job consolidation.

Even better results were observed using a federated model, in which jobs are prioritized for either speed or energy efficiency. Under this scheme, the FIT4Green software reduced energy consumption by up to 51.7 percent. The savings came from placing unused servers into standby mode and from allocating jobs to whichever federated datacenter offered the best energy efficiency.
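As a rough illustration of that federated placement idea, the sketch below routes each job to the site expected to spend the least energy on it, unless the job is flagged as time-critical, in which case the fastest site wins. The Datacenter class, the joules-per-core-hour metric, and the relative_speed field are hypothetical stand-ins, not FIT4Green's actual cost model.

from dataclasses import dataclass

@dataclass
class Datacenter:
    name: str
    joules_per_core_hour: float  # assumed site-level energy-efficiency metric
    relative_speed: float        # >1.0 means faster nodes than the baseline

def place(job_core_hours, time_critical, datacenters):
    """Pick a site for one job: the fastest site if the job is time-critical,
    otherwise the site expected to burn the least energy running it."""
    if time_critical:
        return max(datacenters, key=lambda d: d.relative_speed)
    return min(datacenters, key=lambda d: d.joules_per_core_hour * job_core_hours)

sites = [Datacenter("site-A", 3.2e5, 1.0), Datacenter("site-B", 2.1e5, 0.8)]
print(place(100, time_critical=False, datacenters=sites).name)  # site-B (most efficient)
print(place(100, time_critical=True, datacenters=sites).name)   # site-A (fastest)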

The group has made 16 public deliverables freely available on its website, and the plug-in code has been released as open-source software.


