IBM to Build Cluster for University of Wyoming


June 27 -- IBM has been chosen to design and build the University of Wyoming's campus cluster, a high-performance computing center that may have as many as 100 UW faculty members using it for their computational science research starting this fall.

The campus cluster, formally known as the Advanced Research Computing Center (ARCC), will occupy approximately 150 square feet (five racks of computer equipment) in the UW Information Technology Building. Its capacity will be roughly 3 percent of the 75,000 CPUs, or core hours, available to UW at the National Center for Atmospheric Research (NCAR)-Wyoming Supercomputing Center (NWSC) in Cheyenne. The CPU is essentially the brains of the computer, where most calculations take place.

“Overall, they (IBM) were the best value for the university,” says Tim Kuhfuss, UW’s director of research support for Information Technology. “Their price was competitive, and they certainly understand our vision for a ‘condominium model’ and can deliver it.”

Nuts and bolts

Under a condominium model, the university will provide the basic infrastructure for the campus cluster -- the personnel to run it, basic networking and the basic computer architecture to keep it running. In exchange, UW researchers will buy computing nodes (computers) or storage. That investment is expected to come from successful faculty grant proposals, which will include requests for funding for the computational resources needed for their particular research projects.

Under the broad strokes of the contract, UW will pay IBM $1 million for the initial hardware needed for the cluster. UW has budgeted $1 million annually for hardware in the second and third years of the contract, but may spend more or less depending on how much money UW researchers contribute.

The contract includes an option to renew annually -- in one-year increments -- for two additional years beyond the first three. IBM also offered the university access to its research divisions, which was “very attractive to our faculty,” Kuhfuss says.

“They were looking at this as a partner, not a customer-vendor relationship,” Kuhfuss says. “That was something that was important to us.”

“IBM’s Smarter Planet Initiative focuses on a number of industries like energy. Research partnerships with universities are one of the key incubators to implement improvements in a variety of industries,” says Kent Winchell, IBM’s deep computing chief technology officer. “The UW School of Energy Resources, combined with the new university-wide plan for high-performance computing, aligns with IBM goals for a smarter planet. There also is synergy with the recent NSF/NCAR supercomputer located in Cheyenne for climate and environmental science.”

Initially, seven companies bid for the project. That number was reduced to three before IBM was chosen, Kuhfuss says. From an architectural standpoint, any of the three finalists qualified, but it was IBM’s desire to be a partner rather than just a vendor that made the difference, Kuhfuss says.

IBM will develop and test the system in its development lab in Boulder, Colo., before delivering the hardware and storage racks to UW’s IT Building sometime in July. Kuhfuss expects the campus cluster to be operational between August and October.

Advancing computational research on campus

The campus cluster, nicknamed “Moran” after Mount Moran in western Wyoming’s Teton Range, will serve two purposes.

First, it will enable atmospheric and earth sciences faculty members -- who will also have access to the NWSC -- to learn what to expect from the software. The cluster gives that group the opportunity to work out the issues that arise when scaling parallel algorithms up from tens or hundreds of processors to thousands, before moving to tens of thousands of processors on the NWSC supercomputer (a sketch of this kind of scaling test follows the next paragraph).

Second, the cluster will provide a research resource for UW research faculty members -- such as bioinformaticists, social scientists, pure mathematicians and theoretical physicists -- whose research doesn’t fall within the scope of the NWSC.
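As a concrete illustration of the scaling work described above, here is a minimal sketch of the kind of strong-scaling probe such groups run. It assumes an MPI environment (the article does not specify Moran's programming model), and the file name and test problem are hypothetical: each rank integrates a slice of 4/(1+x^2) to approximate pi, so the per-rank work shrinks as ranks are added and communication overhead becomes visible at higher processor counts.

    /* pi_scaling.c -- hypothetical strong-scaling probe (illustration only;
     * not from the UW announcement). With a fixed total problem size, adding
     * ranks shrinks each rank's share of the work, exposing how communication
     * and synchronization costs grow relative to computation. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const long n = 100000000L;            /* fixed total problem size */
        const double h = 1.0 / (double)n;

        double t0 = MPI_Wtime();
        double local = 0.0;
        for (long i = rank; i < n; i += size) {  /* cyclic partition of the interval */
            double x = h * ((double)i + 0.5);
            local += 4.0 / (1.0 + x * x);        /* midpoint rule for 4/(1+x^2) */
        }
        local *= h;

        double pi = 0.0;                         /* global sum lands on rank 0 */
        MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("ranks=%d  pi=%.12f  time=%.3fs\n", size, pi, t1 - t0);

        MPI_Finalize();
        return 0;
    }

Built with mpicc and run at increasing rank counts (mpirun -np 16, then -np 128, and so on), a test like this shows where speedup flattens out -- exactly the class of issue the article says faculty will work through on Moran before moving to tens of thousands of processors at the NWSC.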

Kuhfuss says there will initially be a trial month or “free-range period,” most likely October, when any UW faculty member can use nodes (one node is roughly equal to 16 desktop computers) on the cluster to conduct research. After that, an organized resource allocation system will be created for the ARCC, says Tim Brewer, end user support manager of research support for information technology, who reports to Kuhfuss.

Jeff Lang, a high-performance computing architect and administrator, who also reports to Kuhfuss, will handle on-site, day-to-day operations of the ARCC.

Winchell, who graduated with a computer science degree from UW in 1981, recalled his undergraduate days, when UW’s Laboratory Information System (LIS) purchased CDC Cyber computer systems, which he said were state of the art at the time.

“Access to those systems created a passion in me for using IT to solve complex problems,” Winchell says. “It’s exciting to see UW keep up the tradition of providing state-of-the-art systems to researchers and students.”

-----

Source: University of Wyoming
