Lawrence Livermore, IBM Offer Petascale Supercomputer to Industry


One by one, US government HPC labs are getting into the industry partnership business. The latest is Lawrence Livermore National Laboratory (LLNL), which this week announced it is teaming with IBM to form "Deep Computing Solutions," a collaboration that is being folded into LLNL's new High Performance Computing Innovation Center (HPCIC).

Announced in June 2011, LLNL's HPCIC is tasked with spurring industry innovation by bringing companies into the HPC fold. The idea is to marry the technology with the needs of businesses in order to boost the competitive capabilities of US companies. This subtle shift in the mission of national labs, which is occurring at both NSF and DOE sites, reflects a desire by policymakers to leverage the government's multi-million dollar investments in supercomputing for the betterment of the domestic economy.

HPCIC, like similar public-private partnership programs at the Ohio Supercomputer Center (OSC), Oak Ridge National Laboratory (ORNL), the National Center for Supercomputing Applications (NCSA), and elsewhere, is designed to help US companies compete more effectively in the global marketplace via HPC. Among other things, this can involve using supercomputers to run virtual product simulations, model new energy systems and materials, or employ data mining for informatics-type applications and better business decision-making.

In the latter case, the idea is to demonstrate use cases for "big data" applications using supercomputing technology. Data-intensive computing has become a major focus across the national labs, with many in the community now looking at the application domain as a way to mainstream HPC technology.

Perhaps the most notable aspect of the HPCIC program is the size of the resource being allocated to support the work. In this case, IBM will be providing one of the most powerful supercomputers in the world: Vulcan, a Blue Gene/Q system with a peak performance of 5 petaflops. The machine is partially installed at LLNL as a 400-teraflop system, which currently sits at number 48 on the latest TOP500 list. The full 5-petaflop machine is scheduled to be deployed later this summer.

As such, it will be one of a handful of petascale supercomputers in the world available to industrial users. According to LLNL deputy department head Doug East, Vulcan will mostly be used as a capacity machine, but "with the ability to schedule capability-class jobs as well, including dedicated access times for petascale efforts as they evolve."

Vulcan was acquired under the same $200 million-plus contract used to procure Dawn, LLNL's 500-teraflop Blue Gene/P system, and Sequoia, the lab's new 20-petaflop Blue Gene/Q, which, as of last week, earned the title of the world's top supercomputer. The 24-rack Vulcan is basically a quarter-sized Sequoia, and if it were up and running today, the machine would likely be ranked as the 4th most powerful computer on the planet.
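
As a quick sanity check on that rack arithmetic, here is a minimal back-of-the-envelope sketch in Python. It assumes the standard Blue Gene/Q configuration of 1,024 compute nodes per rack at roughly 204.8 peak gigaflops per node; those per-rack figures are background assumptions, not numbers from the announcement:

    # Back-of-the-envelope check of the rack-count arithmetic above.
    # Assumes the standard Blue Gene/Q layout: 1,024 compute nodes per rack,
    # each with a peak of ~204.8 gigaflops (16 cores x 1.6 GHz x 8 flops/cycle).
    GFLOPS_PER_NODE = 204.8
    NODES_PER_RACK = 1024

    def peak_petaflops(racks):
        """Peak performance, in petaflops, for a given number of BG/Q racks."""
        return racks * NODES_PER_RACK * GFLOPS_PER_NODE / 1e6

    print("Vulcan  (24 racks): %.1f petaflops peak" % peak_petaflops(24))  # ~5.0
    print("Sequoia (96 racks): %.1f petaflops peak" % peak_petaflops(96))  # ~20.1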

Unlike Sequoia, whose prime mission is to serve the NNSA and its primary task of maintaining the nation's nuclear weapons stockpile, Vulcan's flops will be parceled out to commercial businesses. The focus will be on industries such as manufacturing, transportation, and power generation, which fits into the DOE's mission to advance the nation's energy prospects and the NNSA's mandate to support national, economic and energy security. Although Vulcan will be the go-to resource for HPCIC's Deep Computing Solutions, it is also slated to support unclassified NNSA research, LLNL academic alliances and other in-house science and technology efforts.

Specific program partners were not announced, but HPCIC's current affiliations include Energy Exemplar (energy systems) and Navistar (truck fuel efficiency). In contrast to the DOE INCITE program, proprietary access to Vulcan will be available on demand. Open access initiatives, based on reviewed proposals, may also be offered from time to time.

As with the other public-private partnership programs based at national laboratories, the idea is to leverage not only the supercomputing resources but also the labs' in-house HPC expertise. In fact, according to East, providing access to the physical supercomputer resources is secondary. "We are seeking clients that are primarily interested in utilizing the HPC application and domain expertise of LLNL and IBM to enhance their competitive position," he told HPCwire.

In the case of LLNL, that encompasses over 20 years of HPC experience, including 8 years of accumulated experience developing codes specifically for Blue Gene machines. "We believe that we can help American industry solve some of their largest problems, taking their computing to the next level, and using that capability that we have developed in order to help them be more competitive in their business," said HPCIC director Frederick Streitz.

IBM expertise will also be made available to the partners. The effort will involve a significant number of people from the company's research arm, according to IBM's Dave Turek. What Big Blue gets in return, besides client good will and perhaps the promise of future business, is feedback on what commercial businesses like these require in order to exploit high performance computing.
