June 26, 2012
One of the more promising solid state memory technologies on the horizon is Phase Change Memory (PCM). PCM has the potential to write data faster than current DRAM chips by switching a material between disordered and crystalline states. This has led some people to believe that the technology might enable the much-sought-after instantaneous computer boot-up. Ars Technica discussed the future prospects for PCM last week.
At the atomic level, PCM stores data in a compound of germanium, antimony and tellurium. When a voltage is applied, the atoms arrange themselves into an ordered crystal. The data can then be erased by melting the crystalline material back into a disordered state. To read the information, a computer measures the electrical resistance of the material, which differs between the two states.
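The read mechanism described above amounts to a resistance check: the amorphous (melted) state has high resistance, the crystalline state low. As a purely hypothetical illustration (real controllers do this in hardware, and the threshold value below is invented), it can be sketched as:

```python
# Hypothetical sketch of interpreting a PCM cell read.
# The threshold is an assumed value for illustration only.

AMORPHOUS_THRESHOLD_OHMS = 100_000  # assumed boundary between the two phases

def read_bit(cell_resistance_ohms: float) -> int:
    """Low resistance -> ordered crystal -> 1; high resistance -> amorphous -> 0."""
    return 1 if cell_resistance_ohms < AMORPHOUS_THRESHOLD_OHMS else 0

print(read_bit(2_000))    # crystalline cell -> 1
print(read_bit(500_000))  # amorphous (reset) cell -> 0
```

Because the two resistance states are far apart, a single comparison is enough to distinguish them; this is what makes the read path simple and fast relative to the write path discussed below.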
An important attribute of phase change memory is that the technology is non-volatile: unlike standard RAM offerings, it does not require power to retain information. Along with the possibility of replacing system memory, these chips might end up competing with NAND flash as well.
Some memory manufacturers are dabbling with PCM on a small scale. Micron offers phase change modules with densities up to 128 MB, and Samsung shipped PCM in an unnamed cell phone but later removed it.
Despite these benefits, PCM suffers from an inherent issue that has slowed its path to adoption: write speed. Current DRAM technology can perform write operations within a 1-10 nanosecond window, which is faster than the time it takes for the germanium, antimony and tellurium compound in PCM to crystallize. Other crystalline compounds with faster reaction times have been researched, but they are not as stable as the current PCM design, slowly erasing themselves over time even at low temperatures.
Recent research from the University of Cambridge has given the technology new hope, though. Stephen Elliott, Professor of Chemical Physics at the university, and his colleagues have discovered a method to improve PCM write speed.
By priming the material with a 0.3-volt bias, the researchers triggered crystallization with a 500-picosecond burst at 1 volt. Essentially, the low voltage made the material act like water at near-freezing temperatures: a few crystalline seeds formed, enabling the material to change phase at an accelerated rate when the additional voltage arrived. Writes were ten times faster than in the similar compounds tested, and the material remained stable for 10,000 write-rewrite cycles.
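The two-step write sequence above can be expressed as a toy timing model. The 0.3 V priming bias, the 500 ps / 1 V pulse, and the tenfold speedup come from the article; the unprimed crystallization time below is simply back-calculated from that reported speedup, not a measured figure.

```python
# Toy model of the Cambridge two-step PCM write described above.
# Pulse parameters are from the article; the unprimed timing is an
# illustrative assumption derived from the reported 10x improvement.

def write_time_ps(primed: bool) -> float:
    """Return an assumed crystallization time in picoseconds."""
    if primed:
        # A 0.3 V bias seeds small crystals first, so a 1 V,
        # 500 ps burst is enough to finish crystallization.
        return 500.0
    # Without priming, assume the roughly 10x slower behavior
    # the article reports for comparable compounds.
    return 5_000.0

speedup = write_time_ps(primed=False) / write_time_ps(primed=True)
print(f"speedup with priming: {speedup:.0f}x")
```

The model also makes the trade-off in the next paragraph concrete: the priming bias buys speed, but it is extra current that must flow during every write.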
The need for extra electrical current during the write cycle could become an Achilles' heel for phase change memory, however, since it increases the overall power draw. The technique is a recent development, though, and further optimizations may follow. If the price point and power consumption prove competitive, PCM may indeed replace one or more current memory technologies.
Full story at Ars Technica
Even with its promise of easy access to pay-per-use computing, HPC-as-a-Service as a delivery model has yet to be widely embraced by high performance computing users. In this article, authors Wolfgang Gentzsch and Burak Yenier describe an HPC service experiment that brings together industry users, resource providers, software providers, and HPC experts, which they believe will help pave the way for wider adoption.
Read more...
One by one, US government HPC labs are getting into the industry partnership business. The latest is Lawrence Livermore National Laboratory (LLNL), which this week announced it was teaming with IBM to form "Deep Computing Solutions," a collaboration that is being folded into LLNL's new High Performance Computing Innovation Center.
Read more...
As the supercomputing faithful prepare for exascale computing, there is a great deal of talk about moving beyond the two-decades-old MPI programming model. The HPC programmers of tomorrow are going to have to write codes that are able to deal with systems hundreds of times larger than the top supercomputers of today, and the general feeling is that MPI, by itself, will not make that transition gracefully. One of the alternatives being offered is a PGAS model known as GASPI...
Read more...
06/25/2012 | NetApp | A single hour of data collection can result in 7+ million files from just one camera. Collection opportunities are limited and must be successful every time. As defense and intelligence agencies seek to use the data collected to make mission-critical battlefield decisions, there’s greater emphasis on smart data and imagery collection, capture, storage and analysis to drive real-time intelligence. The data gathered must accurately and systematically be analyzed, integrated and disseminated to those who need it – troops on the ground. This reality leads to an inevitable challenge – warfighters swimming in sensors, drowning in data. With the millions, if not billions, of sensors providing all-seeing reports of the combat environment, managing the overload demands a file system and storage infrastructure that scales and performs while protecting the data collected. Part II of our whitepaper series highlights NetApp’s scalable, modular, and flexible storage solution to handle the demanding requirements of sophisticated ISR environments.