Parallel Computing Researchers Honored by ACM Groups


NEW YORK, July 18 --  ACM’s Special Interest Groups on Algorithms and Computation Theory (SIGACT) and Operating Systems (SIGOPS) have recognized two research papers that fundamentally changed parallel computing in both theory and practice.  In 1993, Maurice Herlihy and J. Eliot B. Moss introduced transactional memory, a new abstraction for multiprocessor architecture that greatly simplifies the programming of concurrent computer systems.  Two years later, Nir Shavit and Dan Touitou built on the hardware-based transactional synchronization methods of Herlihy and Moss, offering a novel software method for supporting the transactional memory abstraction in parallel computing.  Today, the authors will receive the 2012 Edsger W. Dijkstra Prize, sponsored jointly by SIGACT, SIGOPS, and the European Association for Theoretical Computer Science (EATCS), for their outstanding papers in distributed computing at the ACM Symposium on Principles of Distributed Computing (PODC), held July 16-18 in Madeira, Portugal.

In terms of fostering research, transactional memory has become a truly transformative idea for parallel computing on shared memory systems, where all processors share a memory that can be used to exchange information between them.  Nearly 1,400 citations have been recorded for the Herlihy and Moss paper since it was published, and almost 1,000 have been reported for the Shavit and Touitou paper.  In addition, software architects have developed dozens of runtime implementations of remarkable algorithmic variety.  At least four major compilers now support transactional memory in C++.  Hardware implementations have been developed by Azul, Sun (Oracle), AMD (on paper), IBM, and Intel.  The IBM and Intel implementations, in particular, make it likely that hardware support will continue.
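
As a minimal sketch of what the abstraction looks like to a programmer, the fragment below uses GCC's transactional memory extension (enabled with -fgnu-tm and backed by its libitm runtime), one possible realization of the C++ compiler support mentioned above; the Account type and transfer function are illustrative only and are not taken from the Herlihy-Moss paper.

    // Minimal sketch of compiler-supported transactional memory in C++.
    // Build (with a GCC that ships libitm):  g++ -fgnu-tm -O2 transfer.cpp
    #include <cstdio>

    struct Account { long balance; };

    // Move 'amount' between accounts as one atomic step: no thread can
    // observe the money gone from 'from' but not yet added to 'to',
    // and the programmer names no locks and imposes no lock ordering.
    void transfer(Account &from, Account &to, long amount)
    {
        __transaction_atomic {
            from.balance -= amount;
            to.balance   += amount;
        }
    }

    int main()
    {
        Account a{100}, b{0};
        transfer(a, b, 30);
        std::printf("a=%ld b=%ld\n", a.balance, b.balance);
        return 0;
    }

The compound update inside the __transaction_atomic block behaves as a single atomic step: the runtime (or, on processors with hardware support, the memory system) detects conflicting transactions and re-executes them, which is what lets the programmer reason about the block as if it ran alone.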

At first, Herlihy and Moss’s proposal, introduced in their paper titled “Transactional Memory: Architectural Support for Lock-Free Data Structures,” proved too ambitious for the hardware of the day, and their work was largely ignored within the architecture community for most of the 1990s.  Within the theory community, however, it inspired multiple explorations of the limits of software emulation, most notably the “Software Transactional Memory” paper of Shavit and Touitou. 
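
The software route can be sketched just as briefly.  The toy_stm fragment below is not the Shavit-Touitou construction (theirs is non-blocking and acquires ownership of the memory words a transaction updates); it is a deliberately simplified illustration of the general shape shared by many software transactional memories: buffer writes, record a consistent starting point, validate at commit time, and retry on conflict.  All of its names (toy_stm, Transaction, atomically) are invented for this example.

    // A deliberately tiny software transactional memory, for illustration only.
    // Build:  g++ -std=c++14 -pthread toy_stm.cpp
    #include <atomic>
    #include <cstdio>
    #include <map>
    #include <mutex>
    #include <thread>
    #include <vector>

    namespace toy_stm {

    std::mutex commit_lock;                        // serializes commits
    std::atomic<unsigned long> global_version{0};  // bumped on every commit

    struct Transaction {
        unsigned long snapshot = global_version.load();   // version seen at start
        std::map<std::atomic<long>*, long> write_set;     // buffered writes

        long read(std::atomic<long> *addr) {
            auto it = write_set.find(addr);        // read-your-own-writes
            return it != write_set.end() ? it->second : addr->load();
        }

        void write(std::atomic<long> *addr, long value) { write_set[addr] = value; }

        // True if the transaction committed; false means the caller must retry.
        bool commit() {
            std::lock_guard<std::mutex> g(commit_lock);
            if (global_version.load() != snapshot)
                return false;                      // someone committed since we began
            for (auto &kv : write_set)             // publish buffered writes
                kv.first->store(kv.second);
            global_version.fetch_add(1);
            return true;
        }
    };

    // Run 'body' as a transaction, retrying until it commits.
    template <typename F>
    void atomically(F body) {
        for (;;) {
            Transaction tx;
            body(tx);
            if (tx.commit()) return;
        }
    }

    }  // namespace toy_stm

    int main() {
        std::atomic<long> a{100}, b{0};
        std::vector<std::thread> workers;
        for (int i = 0; i < 4; ++i)
            workers.emplace_back([&] {
                for (int k = 0; k < 1000; ++k)
                    toy_stm::atomically([&](toy_stm::Transaction &tx) {
                        tx.write(&a, tx.read(&a) - 1);   // debit
                        tx.write(&b, tx.read(&b) + 1);   // credit
                    });
            });
        for (auto &t : workers) t.join();
        // The sum is preserved no matter how the transactions interleave.
        std::printf("a=%ld b=%ld total=%ld\n", a.load(), b.load(), a.load() + b.load());
        return 0;
    }

Because commits here are serialized and validated against a single global version counter, the sketch is correct but conservative; the challenge Shavit and Touitou took on was providing the same transactional guarantees in software without blocking and without such a global bottleneck.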

A professor of computer science at Brown University, Maurice Herlihy was awarded the 2003 Dijkstra Prize for “Wait-Free Synchronization.”  He is a co-recipient with Nir Shavit of the 2004 Gödel Prize for their paper titled “The Topological Structure of Asynchronous Computability.”  An ACM Fellow and co-author with Shavit of the textbook The Art of Multiprocessor Programming, Herlihy graduated with an A.B. degree in Mathematics from Harvard University.  He received M.S. and Ph.D. degrees in Computer Science from the Massachusetts Institute of Technology (MIT).

J. Eliot B. Moss is a professor of Computer Science at the University of Massachusetts Amherst.  A Fellow of ACM and a Fellow of IEEE, he authored Nested Transactions: An Approach to Reliable Distributed Computing.  He was educated at MIT, where he earned a B.S. degree in Electrical Engineering and Computer Science as well as M.S., E.E., and Ph.D. degrees in Computer Science.

Nir Shavit is a professor of Electrical Engineering and Computer Science at MIT, and a professor of Computer Science at Tel Aviv University.  A co-recipient with Maurice Herlihy of the 2004 Gödel Prize for their paper titled “The Topological Structure of Asynchronous Computability,” he co-authored the textbook The Art of Multiprocessor Programming with Herlihy.  Shavit received B.Sc. and M.Sc. degrees in Computer Science from The Technion – Israel Institute of Technology, and a Ph.D. degree in Computer Science from the Hebrew University of Jerusalem.

Dan Touitou has spent the last fifteen years working in the data storage and networking industries.  He is currently Chief Technology Officer at Toga Networks, a company that provides consulting services to network equipment vendors.  Before that, he was a Distinguished Engineer at Cisco Systems.  He received a B.Sc. degree in Computer Science from the Technion – Israel Institute of Technology and M.Sc. and Ph.D. degrees from Tel Aviv University.

The Dijkstra Prize includes an award of $2,000, and is named in honor of Edsger W. Dijkstra, a pioneer in distributed computing.  He received the 1972 ACM A.M. Turing Award for fundamental contributions to developing programming languages. The Dijkstra Prize is given for outstanding papers on the principles of distributed computing, whose significance and impact on the theory and/or practice of distributed computing have been evident for at least a decade. 

About ACM

ACM, the Association for Computing Machinery (www.acm.org), is the world’s largest educational and scientific computing society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field’s challenges. ACM strengthens the computing profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

About SIGACT

The ACM Special Interest Group on Algorithms and Computation Theory (http://sigact.acm.org) fosters and promotes the discovery and dissemination of high quality research in the domain of theoretical computer science.  The field includes algorithms, data structures, complexity theory, distributed computation, parallel computation, VLSI, machine learning, computational biology, computational geometry, information theory, cryptography, quantum computation, computational number theory and algebra, program semantics and verification, automata theory, and the study of randomness. Work in this field is often distinguished by its emphasis on mathematical technique and rigor.

About SIGOPS

The ACM Special Interest Group on Operating Systems (http://sigops.org) addresses a broad spectrum of issues associated with operating systems research and development. Members are drawn from a broad community spanning industry, academia, and government. SIGOPS supports many conferences and workshops. Areas of special interest include interactions with computer architecture, multiprocessing, distributed and mobile computing, networking, resource management, security, and interprocess communication.

About EATCS

The European Association for Theoretical Computer Science (http://eatcs.org) is an international organization aimed at promoting research in the wide field of the foundations of computer science (ranging from formal languages, abstract computation models, algorithm design and complexity analysis, to applications of logic and semantics in programming).  It facilitates the exchange of ideas and results among computer scientists, in particular through the organization of the annual International Conference on Automata, Languages and Programming (ICALP).

-----

Source: ACM
