Latest Posts
Inside the Titan Supercomputer: 299K AMD x86 Cores and 18.6K NVIDIA GPUs
by Anand Lal Shimpi on 10/31/2012

Earlier this month I drove out to Oak Ridge, Tennessee to pay a visit to the Oak Ridge National Laboratory (ORNL). I'd never been to a national lab before, but my ORNL visit was for a very specific purpose: to witness the final installation of the Titan supercomputer.

ORNL is a US Department of Energy laboratory that's managed by UT-Battelle. Oak Ridge has a core competency in computational science, which both sets it apart among DoE labs and makes it an ideal home for a big supercomputer.

Titan is the latest supercomputer to be deployed at Oak Ridge, although it's technically a significant upgrade rather than a brand new installation. Jaguar, the supercomputer being upgraded, featured 18,688 compute nodes - each with a 12-core AMD Opteron CPU. Titan takes the Jaguar base, maintaining the same number of compute nodes, but moves to 16-core Opteron CPUs paired with an NVIDIA Kepler K20 GPU per node. The result is 18,688 CPUs and 18,688 GPUs, all networked together to make a supercomputer that should be capable of landing at or near the top of the TOP500 list.
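As a quick sanity check on the headline figures, here is the arithmetic behind the core and GPU counts, written out as a minimal Python sketch (the node and per-node numbers come straight from the paragraph above; nothing else is assumed):

    # Back-of-the-envelope totals for Titan, using the counts quoted above.
    nodes = 18688            # compute nodes carried over from Jaguar
    cores_per_cpu = 16       # one 16-core AMD Opteron per node
    gpus_per_node = 1        # one NVIDIA Kepler GPU per node

    total_cpu_cores = nodes * cores_per_cpu  # 299,008 -> the "299K" in the title
    total_gpus = nodes * gpus_per_node       # 18,688  -> the "18.6K" in the title

    print(f"CPU cores: {total_cpu_cores:,}")
    print(f"GPUs:      {total_gpus:,}")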

Over the course of a day in Oak Ridge I got a look at everything from how Titan was built to the types of applications that are run on the supercomputer. Having seen a lot of impressive technology demonstrations over the years, I have to say that my experience at Oak Ridge with Titan is probably one of the best. Normally I cover compute as it applies to making things look cooler or faster on consumer devices. I may even dabble in talking about how better computers enable more efficient datacenters (though that's more Johan's beat). But it's very rare that I get to look at the application of computing to better understanding life, the world and universe around us. It's meaningful, impactful compute.

Read on for our inside look at the Titan supercomputer.

Intel Announces Xeon Phi Family of Co-Processors – MIC Goes Retail
by Ryan Smith on 6/19/2012

As conference season is in full swing, this week’s big technical conference is the 2012 International Supercomputing Conference (ISC) taking place over in Hamburg, Germany. ISC is one of the traditional venues for major supercomputing and high performance computing (HPC) announcements and this year is no exception. Several companies will ...

Rendering and HPC Benchmark Session Using Our Best Servers
by Johan De Gelas on 9/30/2011

Each time we publish a new server platform review, several of our readers inquire about HPC and rendering benchmarks. We're always willing to accommodate reasonable requests, so we're going to start expanding beyond our usual labor-intensive virtualization benchmarks. This article is our first attempt; it was a bumpy ride, but it produced some very interesting insights.

The increasing core counts on modern servers make finding useful benchmarks difficult, and given how many businesses have chosen virtualization, that has been our major focus. For this article, we decided to run the latest version of Cinebench and the STARS Euler3D CFD benchmark, as both seemed like they would be fairly easy to work with. That didn't end up being the case, but at least our testing results are a lot more interesting than we imagined they would be.
