Latest Posts
The Bulldozer Aftermath: Delving Even Deeper
by Johan De Gelas on 5/30/2012

It has been months since AMD's Bulldozer architecture surprised the hardware enthusiast community with performance that was all over the place. Opinions varied wildly, from “server benchmarks are here, and they're a catastrophe” to “Best Server Processor of 2011”. The least you can say is that the new architecture's idiosyncrasies have stirred up a lot of dust.

Although there have been quite a few attempts to understand what Bulldozer is all about, we cannot help but feel that many questions are still unanswered. Since this architecture is the foundation of AMD's server, workstation, and notebook future (Trinity is based on an improved Bulldozer core called "Piledriver"), it is interesting enough to dig a little deeper. Did AMD take a wrong turn with this architecture? And if not, can the first implementation "Bulldozer" be fixed relatively easily?

We decided to delve deeper into the SAP and SPEC CPU2006 results, as well as to profile our own benchmarks. Using the profiling data and correlating it with what we know about AMD's Bulldozer and Intel's Sandy Bridge, we attempt to solve the puzzle.

LaCie's 2big NAS Review
by Ganesh T S on 5/28/2012

The SMB (Small to Medium Business) / SOHO (Small Office / Home Office) NAS market is a highly competitive one. We have been reviewing a number of ARM-based 2-bay and 4-bay NAS units over the last year or so. In addition, we have also looked at some x86-based high-end systems such as the LaCie 5big Storage Server and the QNAP TS-659 Pro II.

On May 15th, LaCie launched an updated version of their 2big Network 2 2-bay product, the 2big NAS. The 2big NAS comes in diskless and 6TB versions, priced at $299.99 and $649.00 respectively. At this price point, the NAS competes with advanced 2-bay SMB solutions such as the Synology DS211+, and not the LG NAS N2A2 (which is geared primarily towards home users). Do the features and performance match up to the price point? Read on for our analysis and detailed review.

ioSafe SoloPRO: Disaster Proofing Your Storage Needs
by Ganesh T S on 4/9/2012

Consumers understand the importance of keeping their documents and other material possessions safe from unexpected disasters. To that end, many invest in fireproof and waterproof safes. However, as the digital economy grows, many possessions, such as documents and photo albums, exist as bits and bytes rather than as tangible items that can be placed in a safe. This brings to the fore the need to find a disaster-proof home for those bits and bytes, in both personal and business settings.

Storage media (hard disks, in particular) are quite sensitive to environmental conditions, and protecting them from disasters such as fires and floods is an interesting problem. ioSafe has been in the business of selling disaster-proof storage solutions for the last seven years. Their products have been well reviewed, and their CES demonstrations have always drawn a large audience. We have had the ioSafe SoloPRO 1TB USB 3.0 version in-house over the last month. To tell the truth, I spent more time reading up on and understanding ioSafe's technology than actually testing the drive. A number of other ioSafe reviews have already subjected their products to harsh conditions and proved that the hard drive inside remains salvageable. In this review, we will concentrate more on ioSafe's technology itself. Read on for our coverage.

The Xeon E5-2600: Dual Sandy Bridge for Servers
by Johan De Gelas on 3/6/2012

Eight improved cores, 16 threads, and 40 integrated PCIe 3.0 lanes: the new Socket 2011 Xeon E5-2660 manages to package it all in a very modest power envelope of 95W TDP (at 2.2 GHz). If you read the Intel Xeon E5 paper specs, it looks more and more likely that Intel has pulled off another "Nehalem": much better performance and richer features while consuming less power. Yes, as much as we like a good fight, the question is not whether Intel will outperform the competition and the previous Intel generation, but by how much...

Intel sent us both the Xeon E5-2690 - their newest performance champ - and the more performance/watt oriented E5-2660. We managed to turn the latter into a chip that should perform like the Xeon E5-2630, a chip that is in the price range of the best Opteron 6200s. We compare Intel's latest Xeons with the Xeon X5650 and the Opteron 6276 and 6174. So whether you are searching for the performance champ, the best balance between performance and energy consumption, or the best deal for your money, you should find an answer in this article. We improved our regular server performance testing with some HPC (LS-DYNA) and the renewed OLAP tests. Read on...

The Opteron 6276: a closer look
by Johan De Gelas on 2/9/2012

When we first looked at the Opteron 6276, our time was limited and we were only able to run our virtualization, compression, encryption, and rendering benchmarks. Most servers capable of running 20 or more cores/threads target the virtualization market, so that's a logical area to benchmark. The other benchmarks either test a small part of the server workload (compression and encryption) or represent a niche (e.g. rendering), but we included those benchmarks for a simple reason: they gave us additional insight into the performance profile of the Interlagos Opteron, they were easy to run, and, last but not least, the readers who use such applications still benefit.

Back in 2008, however, we discussed the elements of a thorough server review. Our list of important areas to test included ERP, OLTP, OLAP, Web, and Collaborative/E-mail applications. Looking at our initial Interlagos review, several of these are missing in action, but much has changed since 2008. The exploding core counts have made other bottlenecks (memory, I/O) much harder to overcome, the web application that we used back in 2009 stopped scaling beyond 12 cores due to lock contention problems, the Exchange benchmark turned out to be an absolute nightmare to scale beyond 8 threads, and the only manageable OLTP test—Swingbench Calling Circle—needed an increasing number of SSDs to scale.
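The lock contention problem is worth a quick illustration. The sketch below applies Amdahl's law to show why a workload with a contended critical section stops benefiting from extra cores; the 8% serialized fraction is purely a hypothetical figure for illustration, not a measurement from the web test mentioned above.

```python
# Hypothetical illustration of why lock contention caps multi-core scaling.
# Amdahl's law: speedup(n) = 1 / (s + (1 - s) / n), where s is the fraction of
# time spent serialized (e.g. inside a contended lock).

def amdahl_speedup(cores: int, serial_fraction: float) -> float:
    """Best-case speedup on `cores` cores when `serial_fraction` of the work is serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

if __name__ == "__main__":
    s = 0.08  # assumed: 8% of time spent inside a contended lock (illustrative only)
    for cores in (1, 2, 4, 8, 12, 16, 24, 32):
        print(f"{cores:2d} cores -> {amdahl_speedup(cores, s):5.2f}x speedup")
    # The curve flattens toward the 1/s = 12.5x ceiling as cores are added; real
    # lock contention is usually worse, since the cost of fighting over the lock
    # itself grows with the number of threads.
```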

The ballooning core counts have steadily made it harder, and in some cases next to impossible, to benchmark applications on native Linux or Windows. Thus, we reacted the same way most companies have: we virtualized our benchmark applications. It's only with a hypervisor that these multi-core monsters make sense in most enterprises, but there are always exceptions. Since quite a few of our readers still like seeing "native" Linux and Windows benchmarks, and quite a few ERP, OLTP, and OLAP servers still run without any form of virtualization, we took the time to complete our previous review and give the Opteron Interlagos another chance.

A Look at Enterprise Performance of Intel SSDs
by Anand Lal Shimpi on 2/8/2012

For most of AnandTech's history, we've hosted our own server infrastructure. A benefit of running our own infrastructure is that we gain a lot of hands-on experience with enterprise environments that we'd otherwise have to report on from a distance.

When I first started covering SSDs four years ago, I became obsessed with the idea of migrating nearly every system over to something SSD based. The first to make the switch were our CPU testbeds. Moving away from mechanical drives ensured better benchmark consistency between runs, as any variation in IO load was easily absorbed by the tremendous amount of headroom that an SSD offers. The holy grail, of course, was migrating all of the AnandTech servers over to SSDs. Over the years our servers have tended to die in the following order: hard drives, power supplies, motherboards. We tend to stay on a hardware platform until the systems start showing their age (e.g. motherboards start dying), but that's usually long enough that we encounter an annoying number of hard drive failures. A well validated SSD should have a predictable failure rate, making it an ideal candidate for an enterprise environment where downtime is quite costly and, in the case of a small business, very annoying.

Our most recent server move is a long story for a separate article, but to summarize: we recently switched hosting providers and data centers. Our hardware was formerly on the East Coast, and the new data center is in the middle of the country. At our old host we were trying out a new cloud platform, while our new home would be a mixture of a traditional back-end and a virtualized front-end. With a tight timetable for the move and no desire to deploy an easily portable solution at our old home before making the move, we were faced with a difficult task: how do we physically move our servers halfway across the country with minimal downtime?

Thankfully, our new host had temporary hardware very similar in capability to our new infrastructure that they were willing to run the site on while we moved our own hardware. The only exception was, as you might guess, a relative lack of SSDs. Our new hardware uses a combination of consumer and enterprise SSDs, but our new host only had mechanical drives or consumer-grade SSDs on tap (Intel SSD 320s).

In preparing for this move, I realized we hadn't publicly discussed the performance and endurance issues associated with using consumer SSDs in an enterprise environment. What follows is a discussion of just that. Read on...
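As a rough sketch of the endurance side of that question, a drive's usable lifetime can be estimated from its capacity, rated P/E cycles, write amplification, and the host's daily write volume. All of the numbers below are hypothetical placeholders to show the arithmetic, not figures from our servers or from any vendor's specifications.

```python
# Hypothetical back-of-the-envelope SSD endurance estimate; every figure here is
# an illustrative assumption, not a measured or vendor-published number.

def endurance_years(capacity_gb: float, pe_cycles: float,
                    write_amplification: float, host_writes_gb_per_day: float) -> float:
    """Estimated years until the NAND's rated P/E cycles are exhausted."""
    nand_write_budget_gb = capacity_gb * pe_cycles             # total NAND writes allowed
    host_write_budget_gb = nand_write_budget_gb / write_amplification
    return host_write_budget_gb / host_writes_gb_per_day / 365.0

if __name__ == "__main__":
    # Assumed: 300GB consumer MLC drive, 3,000 P/E cycles, write amplification of 10
    # under a random-write-heavy database load, 200GB of host writes per day.
    print(f"{endurance_years(300, 3000, 10, 200):.1f} years")   # ~1.2 years
    # The same drive under a light, mostly sequential load (WA 1.1, 20GB/day).
    print(f"{endurance_years(300, 3000, 1.1, 20):.0f} years")   # ~112 years
```

The point of the sketch is simply that the same consumer drive can look either perfectly adequate or alarmingly short-lived depending on the write pattern, which is exactly why the enterprise question deserves its own discussion.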
