July 17, 2012
July 17 -- A new storage environment providing 7.8PB of storage and an additional 19.5PB of backup capability is set to improve long-term data storage for hundreds of UK users of the HECToR (High-End Computing Terascale Resource) supercomputer. HECToR is hosted by EPCC at the University of Edinburgh and funded by the Engineering and Physical Sciences Research Council (EPSRC) and the Natural Environment Research Council (NERC).
The additional storage complements HECToR’s existing one petabyte of disk space. Although tightly integrated with HECToR, the new storage environment is built independently and, because it is designed to outlive HECToR, will be available for use with successive supercomputers.
The storage environment was designed and built by data processing, data management and storage provider OCF plc. It uses storage hardware from DataDirect Networks (DDN), and archive hardware and file management software from IBM.
“We needed a more data-centric view of high performance computing,” says Professor Arthur Trew, University of Edinburgh. “Data persists beyond any computer, including HECToR, so we’re prioritising data storage, management and analysis. Doing this enables us to upgrade HECToR and integrate its successor without fear of impacting access to research data. Our expectation is that any future computer must be able to integrate seamlessly with our storage.”
Scientists currently store highly complex simulations on site at Edinburgh; file sizes vary from user to user, but each can potentially be gigabytes in size. How data moves on for further interrogation is unique to each researcher and may involve transferring it to other data repositories off site, moving it to different parts of the country or simply “taking it home” using portable media.
Julian Fielden, OCF managing director, says: “There is lots of talk and consensus at the moment that the problem with big data isn’t really the capacity to store it, but how to access, use and find the data and, in doing so, make it into useful information. The collective investment of the research councils is cleverly helping to avoid this problem by making storage independent of the machine that generated it. Combined with good network access and IBM’s parallel file system GPFS, the data becomes easy to locate and use by any researcher irrespective of location.”
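By way of illustration, because a parallel file system such as GPFS presents one namespace to every node that mounts it, a researcher could locate recent, large simulation outputs with a few lines of Python run from anywhere on the network. This is a minimal sketch only: the mount point, project layout and file extension are assumptions, not details of the EPCC installation.

# Minimal sketch: find recent, large simulation outputs on a shared
# parallel file system mount. Paths and thresholds are illustrative only.
import time
from pathlib import Path

MOUNT = Path("/gpfs/projects/climate")   # hypothetical shared mount point
MIN_SIZE = 1 * 1024**3                   # only report files of 1 GB or more
MAX_AGE_DAYS = 30                        # only report recently modified files

cutoff = time.time() - MAX_AGE_DAYS * 86400
for path in MOUNT.rglob("*.nc"):         # hypothetical simulation output files
    info = path.stat()
    if info.st_size >= MIN_SIZE and info.st_mtime >= cutoff:
        print(f"{info.st_size / 1024**3:6.1f} GB  {path}")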
“As we enter the big data era, organisations in every field of endeavour are addressing the world’s most pressing scientific and medical questions – questions that would have been too complex to address just a few years ago,” says Bill Cox, DDN Vice President of Worldwide Channel Sales. “EPCC and its partner organisations have built a technologically advanced, state-of-the-art facility at the University of Edinburgh that opens a world of possibility to researchers across the UK. DDN is very pleased to join with OCF in assisting on this important project.”
The storage environment built by OCF provides:
seamless storage capacity expansion to handle the explosive growth of big data and digital information;
improved efficiency through enterprise-wide, interdepartmental file sharing;
proven commercial-grade reliability to eliminate production outages and ease information lifecycle management with policy-driven automation (a sketch of this idea follows the list);
cost-effective disaster recovery and business continuity;
Active File Management to enable asynchronous access and control of local and remote files.
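To make the policy-driven automation above concrete, the sketch below writes a single migration rule in the style of GPFS’s SQL-like policy language, moving files untouched for 90 days from a fast disk pool to an archive pool. The pool names, threshold, file system path and policy file name are assumptions, not the policy actually deployed at EPCC; an administrator would adapt the rule and apply it with the mmapplypolicy command.

# Minimal sketch of a GPFS-style lifecycle policy. Pool names, the 90-day
# threshold and the file system path are assumptions for illustration.
from pathlib import Path

POLICY = """
RULE 'archive_cold_data'
  MIGRATE FROM POOL 'system' TO POOL 'archive'
  WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 90
"""

policy_file = Path("archive_cold_data.pol")   # hypothetical policy file name
policy_file.write_text(POLICY)

# Applying the policy is a privileged, cluster-specific operation, so this
# sketch only prints the command an administrator might run:
print(f"mmapplypolicy /gpfs/hector -P {policy_file}")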
-----
Source: OCF