Pleiades Gets a Speed Bump


WASHINGTON, June 25 -- NASA's flagship Pleiades supercomputer just received a boost to help keep pace with the intensive number-crunching requirements of scientists and engineers working on some of the agency's most challenging missions. 

Pleiades is critical for the modeling, simulation and analysis of a diverse set of agency projects in aeronautics research, Earth and space sciences and the design and operation of future space exploration vehicles. The supercomputer is located at the NASA Advanced Supercomputing (NAS) facility at Ames Research Center in Moffett Field, Calif. 

An expansion completed earlier this month has increased Pleiades' sustained performance rate by 14 percent, to 1.24 petaflops -- 1.24 quadrillion calculations per second. To put this enormous number into perspective, if everyone in the world did one calculation per second for eight hours a day, it would take about 370 days to complete what this supercomputer can calculate in 60 seconds.
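
That comparison holds up to a quick back-of-envelope check. Here is a minimal sketch in Python, assuming a world population of roughly 7 billion (the approximate 2012 estimate; the article does not state the figure it used):

    # Back-of-envelope check of the "about 370 days" comparison.
    PLEIADES_FLOPS = 1.24e15        # 1.24 petaflops, sustained (from the article)
    WORLD_POPULATION = 7.0e9        # people; assumed 2012 estimate, not in the article
    SECONDS_PER_WORKDAY = 8 * 3600  # one calculation per second, eight hours a day

    pleiades_minute = PLEIADES_FLOPS * 60               # Pleiades' output in 60 seconds
    human_day = WORLD_POPULATION * SECONDS_PER_WORKDAY  # humanity's output in one day

    print(round(pleiades_minute / human_day))           # -> 369, i.e. "about 370 days"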

"As we move toward NASA's next phase in advanced computing, Pleiades must be able to handle the increasing requirements of more than 1,200 users across the country who rely on the system to perform their large, complex calculations," said Rupak Biswas, chief of the NAS division at Ames. "Right now, for example, the system is being used to improve our understanding of how solar flares and other space weather events can affect critical technologies on Earth. Pleiades also plays a key role in producing high-fidelity simulations used for possible vehicle designs such as NASA's upcoming Space Launch System." 

Since Pleiades' installation in 2008, NAS has performed eight major upgrades to the system. The latest expansion adds 24 of the newest-generation systems containing advanced processors. More than 65 miles of cabling interconnects the Pleiades nodes with data storage systems and the hyperwall-2 visualization system. 

Recently, scientists have counted on Pleiades to generate the "Bolshoi" cosmological simulation -- the largest simulation of its kind to date -- to help explain how galaxies and the large-scale structure of the universe have evolved over billions of years. The system has also proven essential for processing the massive amounts of star data gathered by NASA's Kepler spacecraft, leading to the discovery of new Earth-sized planets in the Milky Way galaxy. The upgraded capability of Pleiades will enable NASA scientists to solve challenging problems like these more quickly, using even larger datasets. 

For more information about NASA Advanced Supercomputing, visit: http://www.nas.nasa.gov

For more information about Pleiades, visit: http://go.nasa.gov/MJ4NvN

-----

Source: NASA

 


