HPCwire



SDSC Mourns the Loss of Allan Snavely


Co-PI of the ‘Gordon’ Supercomputer Suffers Heart Attack

SAN DIEGO, July 16 -- Dr. Allan Snavely, a widely recognized expert in high-performance computing whose innovative thinking led to the development of the Gordon supercomputer at the San Diego Supercomputer Center (SDSC) at UC San Diego, died of an apparent heart attack on Saturday, July 14. He was 49.

Dr. Snavely, an avid cyclist, had just completed a ride up and down Mt. Diablo, a peak of almost 3,900 feet that is visible from most of the San Francisco Bay area and much of northern California.

Dr. Snavely joined SDSC in 1994 and held a variety of leadership positions, serving as associate director of the center before joining the Lawrence Livermore National Laboratory as Chief Technical Officer earlier this year. He was part of LLNL’s Advanced Simulation and Computing (ASC) program. While at SDSC, Dr. Snavely was also an adjunct professor in computer science and engineering at UC San Diego.

As an active researcher, Dr. Snavely regularly advised policy makers and federal agencies on the strategic value of high-performance computing, with a focus on improving the speed and efficiency of data-intensive supercomputer systems. He also directed SDSC’s Performance Modeling and Characterization (PMaC) Laboratory, which he founded in 2001 to advance the understanding of factors that affect the performance of HPC applications, in order to guide scientific code development and improve architectural design.

Dr. Snavely was a co-PI, along with SDSC Director Michael Norman, in the development and deployment of Gordon, the first supercomputer to employ massive amounts of flash memory, common in smaller devices such as laptops and cell phones, to help speed solutions often hamstrung by slower spinning-disk storage. Gordon, the result of a five-year, $20 million National Science Foundation (NSF) award, went online earlier this year. With the ability to perform more than 36 million input/output operations per second, Gordon is considered one of the most capable systems in the world when it comes to moving and analyzing huge amounts of data.

“It’s difficult to summarize a life so full of accomplishment,” said SDSC Director Michael Norman. “Allan would rather blaze his own trail than travel along a well-worn path accepted by others. He was driven to stay ahead of the curve, striving to find innovative solutions to advance high performance computing for science and society. The Gordon system embodies Allan’s out-of-the-box thinking. The term visionary is sometimes overused, but I and others believe Allan fit the description. The HPC community has lost a true visionary.”

Dr. Snavely and his research collaborator at SDSC, Laura Carrington, were among finalists for the prestigious Gordon Bell Prize two years running (2007 and 2008). In 2009, he shared the SC09 Storage Challenge Award for the design of Dash, a prototype for the much larger Gordon supercomputer.

“Most people know about Allan’s great accomplishments in high-performance computing but one thing they might not know is that he was very much a family man,” said Carrington. “Our daughters were to be born a few months apart. Allan said ‘Our productivity is going to go down a lot next year but family is first.’ That really tells you what truly mattered to him.”

Said Wayne Pfeiffer, SDSC distinguished scientist: “I’ve known Allan since 1996, but really didn’t get to know him until the next year when he started working on performance evaluation for an HPC system as part of an NSF grant for which I was the principal investigator.

“Allan was really enthusiastic about this project, as he was about most of the things he spent his time on. He earned his Ph.D. from UC San Diego in 2000, carrying that great enthusiasm into his future work as founding director of the Performance Modeling and Characterization Laboratory. More recently, Allan's intellectual drive and leadership were instrumental in the preparation of our winning Gordon proposal. He was a ferocious competitor at work and away from work.”

“Our thoughts and prayers are now with Allan’s family, his wife Nancy and his daughter Sophia,” added Norman.

Arrangements for a memorial service have not been finalized at this time.

-----

Source: SDSC

