Gary Johnson
Jailbreaking HPC
Post Date: March 05, 2012 @ 5:50 PM, Pacific Standard Time
Is high performance computing in jail? Traditional wisdom has held that the engines of HPC are unique, its people are scarce, and its resources need to be held close in protected locations, its "temples." But those attitudes are out of sync with today's realities, and are holding back the technology's true potential.
Gary Johnson
Number Crunching, Data Crunching and Energy Efficiency: the HPC Hat Trick
Post Date: February 02, 2012 @ 2:54 PM, Pacific Standard Time
In the world of high performance computing, there are three distinct metrics in play: number crunching speed, data crunching speed, and energy efficiency. Can a computer excel at all three, or is our best recourse to try for something less than a hat trick?
Gary M. Johnson is the founder of Computational Science Solutions, LLC, and a specialist in HPC management as well as the development of national science and technology policy. He is also involved in the creation of education and research programs in computational engineering and science.
Even with its promise of easy access to pay-per-use computing, HPC-as-a-Service as a delivery model has yet to be widely embraced by high performance computing users. In this article, authors Wolfgang Gentzsch and Burak Yenier describe an HPC service experiment that brings together industry users, resource providers, software providers, and HPC experts, which they believe will help pave the way for wider adoption.
Read more...
One by one, US government HPC labs are getting into the industry partnership business. The latest is Lawrence Livermore National Laboratory (LLNL), which this week announced it was teaming with IBM to form "Deep Computing Solutions," a collaboration that is being folded into LLNL's new High Performance Computing Innovation Center.
Read more...
As the supercomputing faithful prepare for exascale computing, there is a great deal of talk about moving beyond the two-decades-old MPI programming model. The HPC programmers of tomorrow are going to have to write codes that can deal with systems hundreds of times larger than the top supercomputers of today, and the general feeling is that MPI, by itself, will not make that transition gracefully. One of the alternatives being offered is a PGAS model known as GASPI...
Read more...
Jun 28, 2012 |
Google scientists build neural network with visual smarts.
Read more...
Jun 26, 2012 |
Researchers look to boost speed of phase change memory.
Read more...
Jun 25, 2012 |
SGI's new UV 2 super swallows Wikipedia and maps the history of the world.
Read more...
06/25/2012 | NetApp | A single hour of data collection can result in more than 7 million files from just one camera. Collection opportunities are limited and must be successful every time. As defense and intelligence agencies seek to use the data collected to make mission-critical battlefield decisions, there's greater emphasis on smart data and imagery collection, capture, storage and analysis to drive real-time intelligence. The data gathered must be accurately and systematically analyzed, integrated and disseminated to those who need it – troops on the ground.

This reality leads to an inevitable challenge – warfighters swimming in sensors, drowning in data. With the millions, if not billions, of sensors providing all-seeing reports of the combat environment, managing the overload demands a file system and storage infrastructure that scales and performs while protecting the data collected. Part II of our whitepaper series highlights NetApp's scalable, modular, and flexible storage solution for handling the demanding requirements of sophisticated ISR environments.