HPCwire

Since 1986 - Covering the Fastest Computers
in the World and the People Who Run Them


Features



Adapteva Unveils 64-Core Chip

Aug 22, 2012 | Multicore chipmaker Adapteva is sampling its 4th-generation multicore processor, known as Epiphany-IV. The 64-core chip delivers a peak performance of 100 gigaflops and draws just two watts of power, yielding a stunning 50 gigaflops/watt. The engineering samples were manufactured by GLOBALFOUNDRIES on its latest 28nm process technology.
Read more...
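The headline efficiency figure follows directly from the two numbers quoted above; a quick sanity-check sketch:

```python
# The article quotes 100 gigaflops peak at 2 watts for the Epiphany-IV;
# dividing the two reproduces the 50 gigaflops/watt figure.
peak_gflops = 100.0
power_watts = 2.0
efficiency = peak_gflops / power_watts
print(f"{efficiency:.0f} GFLOPS/W")
```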

A Petabyte of Flash in a Rack

Aug 20, 2012 | Solid state storage specialist Nimbus Data Systems has unveiled its third-generation flash memory array, setting new benchmarks on resiliency, performance, and capacity. The new product, known as Gemini, offers up to 48 TB of capacity and over 1 million IOPS per 2U box. And despite moving to the less expensive and less reliable consumer-grade MLC flash, Nimbus has managed to double the endurance of its storage arrays.
Read more...
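The "petabyte in a rack" claim can be checked with back-of-the-envelope arithmetic from the quoted density. The 42U rack height is an assumption for illustration; the article does not specify it.

```python
# At 48 TB per 2U enclosure, a standard 42U rack (an assumption,
# not stated in the article) holds 21 enclosures.
rack_units = 42
enclosure_units = 2
tb_per_enclosure = 48

enclosures = rack_units // enclosure_units   # 21 boxes
total_tb = enclosures * tb_per_enclosure     # 1008 TB, roughly a petabyte
print(f"{enclosures} enclosures, {total_tb} TB (~1 PB)")
```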

Analyst Weighs In on 64-Bit ARM

Aug 16, 2012 | In a recent report in Real World Technologies, chip guru David Kanter dissects the new 64-bit ARM design and what it might mean to the IT landscape. His take on the architecture is almost uniformly positive, noting that not only did the designers manage to develop an elegant instruction set that was backward compatible with the existing ISA, but they also took the extra step to jettison a few of the poorly designed features of the 32-bit architecture.
Read more...

Climate Science Triggers Torrent of Big Data Challenges

Aug 15, 2012 | Supercomputers at Oak Ridge National Laboratory produce some of the world’s largest scientific datasets, many of which are related to climate change research. In this interview, Galen Shipman, data-systems architect for ORNL’s Computing and Computational Sciences Directorate and the person who oversees data management at the OLCF, discusses strategies for coping with the “3 Vs” of big data: variety, velocity, and volume.
Read more...

Startup Aims to Upend Enterprise Storage with MLC Flash-Based Systems

Aug 14, 2012 | Silicon Valley startup Skyera has unveiled a solid state storage system that the company believes will be a game changer for enterprise storage. The product, known as Skyhawk, will use consumer-grade multi-level cell (MLC) flash memory as the basis for a bulk storage solution at a price point of less than $3 per gigabyte.
Read more...
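The quoted price ceiling of $3 per gigabyte scales straightforwardly; the larger capacities below are illustrative arithmetic only, not figures from the article.

```python
# What "less than $3 per gigabyte" implies at larger scales
# (decimal units assumed: 1 TB = 1000 GB, 1 PB = 1000 TB).
price_per_gb = 3.0   # upper bound quoted in the article
gb_per_tb = 1000
tb_per_pb = 1000

per_tb = price_per_gb * gb_per_tb            # $3,000 per TB
per_pb = per_tb * tb_per_pb                  # $3,000,000 per PB
print(f"${per_tb:,.0f} per TB, ${per_pb:,.0f} per PB")
```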

AMD Unveils Teraflop GPU with ECC Support

Aug 08, 2012 | Advanced Micro Devices (AMD) has launched six new FirePro processors for workstation users who want high-end graphics and computation in a single box. One of them promises a teraflop of double precision performance as well as support for error-correcting code (ECC) memory. The new offerings also include two APUs (Accelerated Processing Units) that glue four CPU cores and hundreds of FirePro GPU stream cores onto the same chip.
Read more...

Steven Reiner Urges Scientists to Tell Their Stories

Aug 07, 2012 | At the annual Extreme Science and Engineering Discovery Environment (XSEDE) conference in Chicago, journalist Steven Reiner talked about the crucial role scientists play in educating the public about their work. A lifelong journalist and Emmy-award winning producer, Reiner believes it is essential to our society that researchers explain what they do, how they do it, and why it is important. “Scientists have a responsibility to share the meaning and implications of their work,” he said.
Read more...

Proving the Case for Climate Change with Hi-Res Models

Aug 02, 2012 | Although serious scientists believe we’re past the point of debating the validity of climate change, the computer models that support this research are not perfect. Fortunately, the latest improvements to high-resolution climate simulations are not only improving the fidelity of the models, but are also deepening our understanding of climate dynamics, both qualitatively and quantitatively.
Read more...

Drug Discovery Looks for Its Next Fix

Jul 31, 2012 | Despite the highly profitable nature of the pharmaceutical business and the large amount of R&D money companies throw at creating new medicines, the pace of drug development is agonizingly slow. Over the last few years, on average, less than two dozen new drugs have been introduced per year. One of the more promising technologies that could help speed up this process is supercomputing.
Read more...

Australia Goes on Spending Spree in Supercomputing Market

Jul 26, 2012 | While governments in much of the rest of the world are wringing their hands over stagnant or shrinking R&D budgets, Australia is buying up HPC machinery like there is no tomorrow. Just this week, Cray, IBM, and SGI announced supercomputing deals that would send the vendors' latest and greatest HPC equipment Down Under. In this case, the three systems are headed to various research facilities in New South Wales and Western Australia.
Read more...

Lack of Minority Representation in Science and Engineering Endangering US Economic Health

Jul 26, 2012 | Rapid growth in certain segments of the nation’s population is pushing the country’s educational challenges to a crisis level, while too many of the “precious few” under-represented minority students pursuing science, technology, engineering, or mathematics (STEM) disciplines are dropping out or changing majors, according to Richard Tapia, an internationally known mathematician.
Read more...

NASA Builds Supercomputing Lab for Earth Scientists

Jul 25, 2012 | This week, NASA announced it would soon be launching a new HPC and data facility that will give Earth scientists access to four decades of satellite imagery and other datasets. Known as the NASA Earth Exchange (NEX), the facility is being promoted as a "virtual laboratory" for researchers interested in applying supercomputing resources to studying areas like climate change, soil and vegetation patterns, and other environmental topics.
Read more...

Mellanox Roars Through Second Quarter As InfiniBand Revenue Takes Off

Jul 24, 2012 | With the rollout of high performance, lossless Ethernet products over the last few years, there were more than a few analysts predicting the slow retreat of InfiniBand. But thanks to a peculiar confluence of technology roadmaps, a payoff in some investments made by Mellanox, and a pent-up demand for server and storage deployment now being alleviated by Intel's Romley platform, InfiniBand is having a big year.
Read more...

Too Big to FLOP?

Jul 19, 2012 | At the cutting edge of HPC, bigger has always been seen as better and user demand has been the justification. However, as we now grapple with trans-petaflop machines and strive for exaflop ones, is evidence emerging that contradicts these notions? Might computers be getting too big to effectively serve up those FLOPS?
Read more...

Researchers Squeeze GPU Performance from 11 Big Science Apps

Jul 18, 2012 | In a report published this week, researchers documented that GPU-equipped supercomputers enabled application speedups between 1.4x and 6.1x across a range of well-known science codes. While those results aren't the order of magnitude performance increases that were being bandied about in the early days of GPU computing, the researchers were encouraged that the technology is producing consistently good results with some of the most popular HPC science applications in the world.
Read more...

Intel Expands HPC Collection with Whamcloud Buy

Jul 16, 2012 | Intel Corporation has acquired Whamcloud, a startup devoted to supporting the open source Lustre parallel file system and its user community. The deal marks the latest in a line of high performance computing acquisitions that Intel has made over the past few years to expand its HPC footprint.
Read more...

DOE Primes Pump for Exascale Supercomputers

Jul 12, 2012 | Intel, AMD, NVIDIA, and Whamcloud have been awarded tens of millions of dollars by the US Department of Energy (DOE) to kick-start research and development required to build exascale supercomputers. The work will be performed under the FastForward program, a joint effort run by the DOE Office of Science and the National Nuclear Security Administration (NNSA) that will focus on developing future hardware and software technologies capable of supporting such machines.
Read more...

Hybrid Memory Cube Angles for Exascale

Jul 10, 2012 | Computer memory is currently undergoing something of an identity crisis. For the past 8 years, multicore microprocessors have been creating a performance discontinuity, the so-called memory wall. It's now fairly clear that this widening gap between compute and memory performance will not be solved with conventional DRAM products. But there is one technology under development that aims to close that gap, and its first use case will likely be in the ethereal realm of supercomputing.
Read more...

Green500 Turns Blue

Jul 05, 2012 | The latest Green500 rankings were announced last week, revealing that top performance and power efficiency can indeed go hand in hand. According to the latest list, the greenest machines, in fact the top 20 systems, were all IBM Blue Gene/Q supercomputers. Blue Gene/Q, of course, is the platform that captured the number one spot on the latest TOP500 list, and it accounts for four of the ten fastest supercomputers in the world.
Read more...

NERSC Signs Up for Multi-Petaflop "Cascade" Supercomputer

Jul 03, 2012 | The US Department of Energy's National Energy Research Scientific Computing Center (NERSC) has ordered a two-petaflop "Cascade" supercomputer, Cray's next-generation HPC platform. The DOE is shelling out $40 million for the system, including about 6.5 petabytes of the company's Sonexion storage. Installation is scheduled for sometime in 2013.
Read more...

The Uber-Cloud Experiment

Jun 28, 2012 | Even with its promise of easy access to pay-per-use computing, HPC-as-a-Service as a delivery model has yet to be widely embraced by high performance computing users. In this article, authors Wolfgang Gentzsch and Burak Yenier describe an HPC service experiment that brings together industry users, resource providers, software providers, and HPC experts, which they believe will help pave the way for wider adoption.
Read more...

Lawrence Livermore, IBM Offer Petascale Supercomputer to Industry

Jun 27, 2012 | One by one, US government HPC labs are getting into the industry partnership business. The latest is Lawrence Livermore National Laboratory (LLNL), which this week announced it was teaming with IBM to form "Deep Computing Solutions," a collaboration that is being folded into LLNL's new High Performance Computing Innovation Center.
Read more...

An HPC Programming Model for the Exascale Age

Jun 26, 2012 | As the supercomputing faithful prepare for exascale computing, there is a great deal of talk about moving beyond the two-decades-old MPI programming model. The HPC programmers of tomorrow are going to have to write codes that are able to deal with systems hundreds of times larger than the top supercomputers of today, and the general feeling is that MPI, by itself, will not make that transition gracefully. One of the alternatives being offered is a PGAS model known as GASPI...
Read more...

Exascale Computing: The View from Argonne

Jun 21, 2012 | As a result of the dissolution of DARPA's UHPC program, the driving force behind exascale research in the US now resides with the Department of Energy, which has embarked upon a program to help develop this technology. To get a lab-centric view of the path to exascale, HPCwire asked three of the top directors at Argonne National Laboratory -- Rick Stevens, Michael Papka, and Marc Snir -- to provide some context for the challenges and benefits of developing these extreme scale systems.
Read more...

TOP500 Gets Dressed Up with New Blue Genes

Jun 19, 2012 | The 39th TOP500 list was released today at the International Supercomputing Conference in Hamburg, Germany, with a new machine at the top. Sequoia, an IBM Blue Gene/Q machine, delivered a world record 16 petaflops on Linpack, knocking RIKEN's 10-petaflop K Computer into second place. The Japanese K machine had held the TOP500 title for a year.
Read more...





Around the Web

Modeling Proteins at Supercomputing Speeds on Your PC

Aug 21, 2012 | A clever molecular dynamics algorithm and GPU computing deliver HPC to the desktop.
Read more...

Supercomputers Study Singing Mice

Aug 14, 2012 | Rodent vocalizations could be linked to speech disorders in humans.
Read more...

GPU Computing Gets Jolt of Java

Aug 13, 2012 | Free compiler allows Java developers to target GPU accelerators.
Read more...
