June 25, 2012
WASHINGTON, June 25 -- NASA's flagship Pleiades supercomputer just received a boost to help keep pace with the intensive number-crunching requirements of scientists and engineers working on some of the agency's most challenging missions.
Pleiades is critical for the modeling, simulation and analysis of a diverse set of agency projects in aeronautics research, Earth and space sciences and the design and operation of future space exploration vehicles. The supercomputer is located at the NASA Advanced Supercomputing (NAS) facility at Ames Research Center in Moffett Field, Calif.
An expansion completed earlier this month has increased Pleiades' sustained performance rate by 14 percent to 1.24 petaflops -- or 1.24 quadrillion calculations per second. To put this enormous number into perspective, if everyone in the world did one calculation per second for eight hours a day, it would take about 370 days to complete what this supercomputer can calculate in 60 seconds.
"As we move toward NASA's next phase in advanced computing, Pleiades must be able to handle the increasing requirements of more than 1,200 users across the country who rely on the system to perform their large, complex calculations," said Rupak Biswas, chief of the NAS division at Ames. "Right now, for example, the system is being used to improve our understanding of how solar flares and other space weather events can affect critical technologies on Earth. Pleiades also plays a key role in producing high-fidelity simulations used for possible vehicle designs such as NASA's upcoming Space Launch System."
Since Pleiades' installation in 2008, NAS has performed eight major upgrades to the system. The latest expansion adds 24 of the newest generation systems containing advanced processors. More than 65 miles of cabling interconnects Pleiades nodes with data storage systems and the hyperwall-2 visualization system.
Recently, scientists have counted on Pleiades for generating the "Bolshoi" cosmological simulation -- the largest simulation of its kind to date -- to help explain how galaxies and the large-scale structure of the universe have evolved over billions of years. The system also has proven essential for processing massive amounts of star data gathered from NASA's Kepler spacecraft, leading to the discovery of new Earth-sized planets in the Milky Way galaxy. The upgraded capability of Pleiades will enable NASA scientists to solve challenging problems like these more quickly, using even larger datasets.
For more information about NASA Advanced Supercomputing, visit: http://www.nas.nasa.gov
For more information about Pleiades, visit: http://go.nasa.gov/MJ4NvN
-----
Source: NASA
Computer memory is currently undergoing something of an identity crisis. For the past eight years, multicore microprocessors have been widening the gap between compute and memory performance, a discontinuity known as the memory wall. It's now fairly clear that this gap will not be closed with conventional DRAM products. But there is one technology under development that aims to close it, and its first use case will likely be in the ethereal realm of supercomputing.
The latest Green500 rankings were announced last week, revealing that top performance and power efficiency can indeed go hand in hand. According to the latest list, the greenest machines, in fact the top 20 systems, were all IBM Blue Gene/Q supercomputers. Blue Gene/Q, of course, is the platform that captured the number one spot on the latest TOP500 list, and is represented by four of the ten fastest supercomputers in the world.
The US Department of Energy's National Energy Research Scientific Computing Center (NERSC) has ordered a two-petaflop "Cascade" supercomputer, Cray's next-generation HPC platform. The DOE is shelling out $40 million for the system, including about 6.5 petabytes of the company's Sonexion storage. Installation is scheduled for sometime in 2013.
Jul 11, 2012 | Computer scientist builds intelligent machine with single-core laptop and some slick algorithms.
Jul 10, 2012 | Science cloud crunched data that helped build the case for the historic announcement.
Jul 09, 2012 | EU project offers software that makes datacenters more energy-efficient.
Jul 05, 2012 | Processor speed and power consumption are now at odds, forcing chipmakers to rethink their designs.
Jul 03, 2012 | University consortium launches with two terascale machines.
06/25/2012 | NetApp | A single hour of data collection can result in 7+ million files from just one camera. Collection opportunities are limited and must be successful every time. As defense and intelligence agencies seek to use the data collected to make mission-critical battlefield decisions, there's greater emphasis on smart data and imagery collection, capture, storage and analysis to drive real-time intelligence. The data gathered must be accurately and systematically analyzed, integrated and disseminated to those who need it: troops on the ground. This reality leads to an inevitable challenge: warfighters swimming in sensors, drowning in data. With the millions, if not billions, of sensors providing all-seeing reports of the combat environment, managing the overload demands a file system and storage infrastructure that scales and performs while protecting the data collected. Part II of our whitepaper series highlights NetApp's scalable, modular, and flexible storage solution to handle the demanding requirements of sophisticated ISR environments.
Join Michael for a look at the first PGI Accelerator Fortran and C compilers to include comprehensive support for OpenACC, the new open standard for programming accelerators using compiler directives.