HPCwire

Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them


HPC Community Remembers Allan Snavely


Allan Snavely, the chief technology officer of Lawrence Livermore National Laboratory, died unexpectedly last Saturday of an apparent heart attack. Snavely was a well-regarded HPC expert and a co-creator of the “Gordon” supercomputer at the San Diego Supercomputer Center (SDSC). Along with his colleagues, he won the SC09 Storage Challenge award for an early version of that system. U-T San Diego published an article on Tuesday recounting his accomplishments.

Snavely received his undergraduate, master’s and doctoral degrees from the University of California, San Diego. In 1994 he began working at the university’s San Diego Supercomputer Center. Along with Laura Carrington, he co-founded the Performance Modeling and Characterization laboratory in 2001.

One of the lab’s main objectives is to improve supercomputing speed and sophistication. Carrington and her associates work to optimize the interaction between supercomputing hardware and applications. In 2007 and 2008, she and Snavely were finalists for the Gordon Bell Prize.

In May of this year, Snavely left SDSC to join Lawrence Livermore National Laboratory as its new chief technology officer. The position allowed him to work at a DOE site with a stellar reputation in supercomputing, while still maintaining his connection with SDSC. Part of his new duties at the lab involved exascale research.

During a recent visit to SDSC, Snavely sketched out a design for an updated, more powerful version of the Gordon supercomputer. According to center director Mike Norman, SDSC may yet build such a system based on that sketch.

Allan Snavely was 49 years old. He is survived by his wife of 22 years, Nancy, and their 9-year-old daughter. Nancy recalled his passion for supercomputing, remarking, “I think he just loved the invention process. Problem-solving was his favorite thing.”

A memorial service will be held on August 12th, and some faculty members at UC San Diego plan to hold a fall seminar in Snavely’s memory.


Full story at U-T San Diego

Feature Articles

Researchers Squeeze GPU Performance from 11 Big Science Apps

In a report published this week, researchers documented that GPU-equipped supercomputers enabled application speedups between 1.4x and 6.1x across a range of well-known science codes. While those results aren't the order of magnitude performance increases that were being bandied about in the early days of GPU computing, the researchers were encouraged that the technology is producing consistently good results with some of the most popular HPC science applications in the world.
Read more...

Intel Expands HPC Collection with Whamcloud Buy

Intel Corporation has acquired Whamcloud, a startup devoted to supporting the open source Lustre parallel file system and its user community. The deal marks the latest in a line of high performance computing acquisitions that Intel has made over the past few years to expand its HPC footprint.
Read more...

DOE Primes Pump for Exascale Supercomputers

Intel, AMD, NVIDIA, and Whamcloud have been awarded tens of millions of dollars by the US Department of Energy (DOE) to kick-start research and development required to build exascale supercomputers. The work will be performed under the FastForward program, a joint effort run by the DOE Office of Science and the National Nuclear Security Administration (NNSA) that will focus on developing future hardware and software technologies capable of supporting such machines.
Read more...

