July 09, 2012
SEATTLE, WA, July 09 -- Global supercomputer leader Cray Inc. today announced it has been awarded a supercomputer contract to provide the Finnish IT Center for Science Ltd. (CSC) with a next-generation Cray supercomputer code-named "Cascade."
CSC is Finland's national high performance computing (HPC) facility, and the Cascade supercomputer will enable CSC to provide cost-effective supercomputing capacity for the needs of science and research across Finland. The acquisition of the Cascade system is an investment of the Finnish state and the procurement is handled by state-owned CSC on behalf of the Ministry of Education, Science and Culture.
Researchers and scientists will use Cray's next-generation Cascade supercomputer to solve scientific and engineering problems in a wide range of fields including climate change, energy research, materials science, gene interactions and medical research. The system will also be a Tier-1 resource under the European PRACE (Partnership for Advanced Computing in Europe) initiative.
"One of our primary goals is to provide the Finnish research community with extremely high performance computing capability and pave their way towards new scientific innovations," said Kimmo Koski, managing director of CSC. "We chose the Cascade system for its ability to enable breakthrough science in a production environment, and we are convinced it will strengthen our position as a world-class research facility."
"We are very excited to be the supercomputer provider for CSC in Finland, which is one of the leading supercomputer centers in Europe, and to be able to continue our successful partnership," said Dr. Ulla Thiel, Cray vice president, Europe. "We are very pleased that researchers and scientists at CSC Finland, as well as PRACE users throughout Europe, will have access to our next-generation Cascade supercomputer and its innovative HPC technologies. The Cascade system will be a valuable resource for running scientific and engineering applications that demand outstanding parallel computational performance and scalability."
Cray's Cascade supercomputer, which is expected to be widely available in the first half of 2013, is the next step in Cray's Adaptive Supercomputing vision. The system will feature major advancements to the Cray Linux Environment, Cray's HPC-optimized programming environment, and the next-generation Aries interconnect chipset. Cascade will also feature support for Intel(R) Xeon(R) processors -- a first for Cray's high-end systems. The Cascade supercomputer is in part made possible by Cray's participation in the Defense Advanced Research Projects Agency's (DARPA) High Productivity Computing Systems program.
The multi-year, multi-phase contract, which includes products and services, is valued at more than $12 million, and the vast majority of the system is expected to be delivered in 2014.
About CSC
CSC is a state-owned non-profit company providing services to the Ministry of Education, Science and Culture. CSC provides IT support and resources for academia, research institutes and companies, including modeling, computing and information services. CSC offers Finland's widest selection of scientific software and databases and Finland's most powerful supercomputing environment, which researchers can use via the Funet network.
About Cray
As a global leader in supercomputing, Cray Inc. provides highly advanced supercomputers and world-class services and support to government, industry and academia. Cray technology is designed to enable scientists and engineers to achieve remarkable breakthroughs by accelerating performance, improving efficiency and extending the capabilities of their most demanding applications. Cray's Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to surpass today's limitations and meet the market's continued demand for realized performance. Go to www.cray.com for more information.
-----
Source: Cray Inc.