Apr 17, 2013 |
Advances in data-intensive supercomputing increase understanding of autism and related disorders, set the stage for future treatments.
Read more...
Apr 16, 2013 |
For a growing number of non-traditional HPC workloads, the cloud is the place to be.
Read more...
Apr 15, 2013 |
Oak Ridge Leadership Computing | Getting scientific applications to scale across Titan's 300,000 compute cores means there will be bugs. Finding those bugs is where Allinea DDT comes in.
Read more...
Apr 11, 2013 |
The Rockhopper cluster has been in production for over a year now, long enough for additional details to emerge on this interesting HPC cloud use case.
Read more...
Apr 11, 2013 |
Gordon, a supercomputer at the San Diego Supercomputer Center on the campus of the University of California, San Diego, is helping chart the direction of the Large Hadron Collider's next research project.
Read more...
Apr 10, 2013 |
Amazon's EC2 Cluster Compute instance goes head-to-head with a Myrinet 10GigE cluster.
Read more...
Apr 10, 2013 |
Randall J. Leveque, Professor of Applied Mathematics at the University of Washington in Seattle, will teach a free course that brings the principles of parallelism in high performance computing to those working in scientific computing.
Read more...
Apr 10, 2013 |
Before the annual SC conference last year, Russian HPC vendor T-Platforms delivered the first supercomputer from that nation. Not long before, it completed an installation for PRACE and worked on complex software challenges via a partnership with the Jülich Supercomputing Centre. From the outside, the company appears to have done bang-up business in 2012, but it has gone into radio silence since.
Read more...
Apr 09, 2013 |
The Top500 list is dominated each year by unified systems, that is, systems in which the same vendor provides most of the hardware, the same CPUs are used throughout, and so on. If HPC systems are built this way, why aren't more datacenters built the same way?
Read more...
Apr 08, 2013 |
Financial Review | The chief executive of Intersect, the New South Wales (NSW) university consortium, makes the case for a holistic funding strategy.
Read more...
Apr 02, 2013 |
The large-scale classical physics problems that remain unsolved must, for the most part, be run in parallel on high-performance machines like the Kraken supercomputer. Millions of variables culled from billions of particles combine to make this type of research intractable for ordinary computational physics.
Read more...
Mar 29, 2013 |
Intel has put out feelers for a well-connected champion of exascale technologies to bolster its role around the new efforts and funding that....
Read more...
Mar 28, 2013 |
The Yellowstone supercomputer is a 1.5-petaflop (peak) IBM iDataPlex system. The machine was first tasked with 11 compute-intensive projects as part of the Accelerated Scientific Discovery (ASD) initiative.
Read more...
Mar 27, 2013 |
One of the biggest areas of concern with brain trauma is swelling, which can become life threatening if it’s not caught in time. UCLA, with the help of Excel Medical Electronics and IBM, is turning to big data to proactively prevent this problem.
Read more...
Mar 26, 2013 |
Tutorial describes how to use CUDA for parallel programming in the AWS cloud.
Read more...
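For readers who want a feel for what such a tutorial covers, here is a minimal CUDA vector-add sketch; the kernel name and array sizes are illustrative assumptions, not code from the tutorial itself. On a GPU-equipped AWS instance it compiles with nvcc (nvcc vecadd.cu -o vecadd).

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each thread adds one element; illustrative only, not code from the tutorial.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n) c[i] = a[i] + b[i];                   // guard the tail block
}

int main() {
    const int n = 1 << 20;                 // 1M elements (assumed size)
    const size_t bytes = n * sizeof(float);

    // Host buffers with known inputs
    float *ha = (float*)malloc(bytes), *hb = (float*)malloc(bytes), *hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers and host-to-device copies
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;  // enough blocks to cover n
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);  // implicitly synchronizes
    printf("c[0] = %f\n", hc[0]);                       // expect 3.000000

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}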
Mar 26, 2013 |
Federal Computer Week | The US Postal Service relies on the power of big data to analyze over 528 million mail pieces per day for signs of fraud.
Read more...
Mar 25, 2013 |
IDC report highlights the continued shift to large system sales.
Read more...
Mar 25, 2013 |
Hopper, the Opteron-powered Cray system at NERSC, has been tasked with helping scientists on the Planck space telescope project separate ancient light from sensor noise to help astronomers understand the....
Read more...
Mar 22, 2013 |
The New York Times | Lockheed Martin puts D-Wave quantum computer to work manufacturing aircraft systems.
Read more...
Mar 22, 2013 |
Sixth grade science project asks "How Super Is Your Computer?"
Read more...
Mar 21, 2013 |
The New York Times | UCSD rolls out a next-generation bypass network across its La Jolla, Calif., campus.
Read more...
Mar 20, 2013 |
Lawrence Livermore National Laboratory | LLNL researchers have successfully harnessed all 1,572,864 of Sequoia's cores for one impressive simulation.
Read more...
Mar 20, 2013 |
As NCSA's Blue Waters supercomputer approaches full service status, we thought it would be appropriate to see how the machine was built.
Read more...
Mar 14, 2013 |
QMachine leverages the processing power of Web browsers to create a commodity supercomputer.
Read more...
Mar 13, 2013 |
Quantum Cures wants your help identifying drug candidates for orphan and rare diseases.
Read more...
Later this month, Indiana University will formally introduce the successor to the Big Red system, the aptly named Big Red II. The Cray-crafted and tuned system is 25 times faster than its baby brother (the 4,100-core original Big Red from 2006) and sports some notable improvements across its 1,020 nodes. According to Thomas Sterling, there are theoretical lessons that can be applied to...
Read more...
A giant leap in bone structure research paves the way for advances in osteoporosis treatment; details from UCSD's Research CyberInfrastructure (RCI) Program reveal what PIs really want; and a cloud computing programming model puts the focus on predictable performance. Plus GPU-related research and more...
Read more...
Despite the important advances that middleware enables in both the HPC and enterprise spheres, it generally fails to elicit the same excitement as, say, brand-new leadership-class hardware. But middleware such as Adaptive Computing's intelligent management engine, Moab, is cool, and you don't have to take Adaptive's word for it: during the company's annual user event last week, Gartner gave Adaptive its "Cool Vendor" stamp of approval.
Read more...