Report Details Application Speedups Enabled by GPU-Accelerated Supercomputing


Hybrid platforms accelerate answers and insights in science and engineering

OAK RIDGE, Tenn., July 17 -- In the quest to simulate the natural world from subatomic particles to the vast cosmos and the engineered world from turbines to advanced fuels, can scientists and engineers benefit from extreme-scale supercomputers that use application-code accelerators called GPUs (graphics processing units)? When GPU accelerators are compared with today's fastest central processing units (CPUs), early results from diverse areas of research show 1.5- to 3-fold speedups for most codes. That acceleration means increased realism of simulations and decreased time to results. With the availability of new, higher-performance GPUs later this year, such as the Kepler GPU chip to be installed in the 20-petaflop Titan supercomputer at Oak Ridge National Laboratory (ORNL), application speedups are expected to be even more substantial.

A special report titled Accelerating Computational Science Symposium 2012 details these findings, which were presented earlier this year at the symposium of the same name in Washington, D.C. The Oak Ridge Leadership Computing Facility, which ORNL operates for the U.S. Department of Energy Office of Science, co-hosted the meeting with the National Center for Supercomputing Applications and the Swiss National Supercomputing Centre. Additional sponsors were Cray Inc. and NVIDIA. The meeting convened nearly 100 experts in science, engineering, and computing from around the world to discuss research advances that are now possible with extreme-scale hybrid supercomputers, which combine traditional CPUs with high-performance, energy-efficient GPUs. Attendees explored how hybrid supercomputers speed discoveries, such as deeper understanding of phenomena from earthquakes to supernovas, and innovations, such as next-generation catalysts, materials, engines, and reactors.
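
For readers unfamiliar with the CPU-plus-GPU division of labor described above, the short sketch below illustrates the basic offload pattern in CUDA: the CPU (host) stages data and launches work, while the GPU (device) runs a data-parallel kernel across many threads. This is illustrative only; the kernel, array names, and problem size are generic assumptions and are not drawn from the report or from any of the symposium application codes.

// Minimal sketch of the hybrid CPU+GPU offload pattern (illustrative only).
// The host prepares data, copies it to the GPU, launches a data-parallel
// kernel, and copies the result back. Names (saxpy, n, a, x, y) are generic.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Data-parallel kernel: each GPU thread updates one element of y.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host (CPU) arrays.
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device (GPU) arrays.
    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, dx, dy);

    // Copy the result back to the CPU and spot-check it.
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f (expected 4.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}

In practice, the speedups reported at the symposium come from restructuring full application codes so that their most data-parallel work runs on the GPU in this fashion while the CPU handles the remaining logic and communication.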

The Accelerating Computational Science Symposium 2012 report is available for download here.

-----

Source: Dawn Levy, Oak Ridge Leadership Computing Facility
