Keeping Moore’s Law Alive


Predictions of the end of Moore's Law have circulated almost since the observation was first made, with prominent figures in high performance computing and physics laying out scenarios for how chip advances will eventually grind to a halt. Last week, IEEE Spectrum devoted a podcast to the subject, discussing a number of design changes aimed at extending silicon's viability.

In a recent IEEE Spectrum article, associate editor Rachel Courtland explained that silicon has become increasingly difficult to work with as semiconductor manufacturers continue to push the physical limits of the technology. Transistors have become so small that they have begun to leak electrical current. This problem has led to a search for new technologies that may eventually replace or enhance conventional chip designs.

Courtland met up with Bernd Hoefflinger, editor of Chips 2020, a book in which experts in the field lay out their views on the future of computing. In the interview, Hoefflinger noted that computational performance is not the only issue at hand; the power consumed by these technologies has a profound impact on their practicality. Said Hoefflinger:

“They expect 1000 times more computations per second within a decade. If we were to try to accomplish this with today’s technology, we would eat up the world’s total electric power within five years. Total electric power!”

He was referring to Dennard scaling, a companion to Moore's Law: as transistors get smaller, they switch faster and consume less power. Unfortunately, that scaling is running out of steam, and overcoming the limitation has become a primary focus of semiconductor designers. Hoefflinger believes that if the energy needed to perform a simple multiplication could be reduced to 1 femtojoule, silicon could keep Moore's Law alive for another decade. A femtojoule is roughly 10 percent of the energy released when a human synapse fires.
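
To put the femtojoule target in perspective, here is a rough back-of-envelope sketch (not from the article; the exascale operation rate and the picojoule baseline are illustrative assumptions) showing how energy per multiply translates into sustained power draw:

# Back-of-envelope sketch: how energy per operation translates into power draw.
# The operation rate and the ~1 pJ baseline below are illustrative assumptions,
# not figures from the article.

def power_watts(ops_per_second: float, joules_per_op: float) -> float:
    """Average power (in watts) needed to sustain a given operation rate."""
    return ops_per_second * joules_per_op

exa_ops = 1e18  # assume an exascale-class workload: 10^18 multiplies per second

# At roughly a picojoule per multiply, the arithmetic alone draws about a megawatt.
print(power_watts(exa_ops, 1e-12))   # 1e6 W = 1 MW

# At the 1 femtojoule target, the same work costs on the order of a kilowatt.
print(power_watts(exa_ops, 1e-15))   # 1e3 W = 1 kW

Every thousandfold increase in computation rate at a fixed energy per operation multiplies the power bill by the same factor, which is the arithmetic behind Hoefflinger's warning above.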

To reach these low-power benchmarks, new 3-D circuit designs have emerged. 3-D chips have already entered the market, using wires to connect multiple stacked dies, and tri-gate (FinFET) transistors have been developed as well, but Hoefflinger thinks another design holds more promise.

According to him, 3-D merged transistors can be built that combine two transistors into a single device. Instead of giving the p-doped and n-doped transistors their own gates, the pair shares a single gate, with a PMOS transistor on one side and an NMOS transistor on the other. These devices have sometimes been referred to as "hamburger transistors."

Another way to reduce power concerns how calculations are performed. For example, if multiplication were carried out starting with the most significant bits (rather than the least significant bits), it could cut the number of transistors required for a calculation. While the reduction might not bring the energy all the way down to one femtojoule, it could lower consumption "by an order of magnitude or two".
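
The article does not spell out how the MSB-first scheme would work in hardware, but a minimal software sketch can show the underlying intuition: if only the most significant bits of the operands are used, a multiplier produces far fewer partial products (and so needs far fewer transistors) at the cost of a small, bounded error. The function below is purely illustrative, and keep_bits is a made-up parameter.

def msb_first_multiply(a: int, b: int, keep_bits: int = 4) -> int:
    """Approximate a * b using only the top keep_bits bits of each operand."""
    def truncate(x: int) -> int:
        shift = max(x.bit_length() - keep_bits, 0)
        return (x >> shift) << shift   # zero out the low-order bits
    return truncate(a) * truncate(b)

exact = 45321 * 78654
approx = msb_first_multiply(45321, 78654)
print(exact, approx, abs(exact - approx) / exact)   # relative error well under 10 percent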

Lastly, Hoefflinger suggested restructuring chip circuitry along the lines of communications circuitry. Such a design would allow for integrated error correction, which in turn would permit lower operating voltages.
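
As a hypothetical illustration of the idea (this is the generic technique borrowed from communications, not Hoefflinger's specific circuit), a (7,4) Hamming code adds three parity bits to every four data bits so that any single flipped bit can be detected and corrected. That kind of redundancy is what lets a circuit tolerate the occasional errors that come with running at lower supply voltages.

def hamming74_encode(d: list) -> list:
    """Encode 4 data bits [d1, d2, d3, d4] into 7 bits [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c: list) -> list:
    """Locate and fix a single-bit error, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]    # parity check over positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]    # parity check over positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]    # parity check over positions 4, 5, 6, 7
    error_pos = s1 + 2 * s2 + 4 * s3  # 0 means no error detected
    if error_pos:
        c[error_pos - 1] ^= 1         # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode([1, 0, 1, 1])
word[5] ^= 1                          # simulate a low-voltage bit flip
print(hamming74_correct(word))        # recovers [1, 0, 1, 1]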

If all of these power-reduction techniques are adopted, Moore's Law may be extended beyond 2020. Hoefflinger concedes it could go either way, but he is encouraged that these issues are getting so much attention right now.


Full story at IEEE Spectrum
