HPCwire

Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them



Whitepapers

HPCwire's white paper database contains reports from leading thought leaders and idea generators in the HPC industry.

Most Recent White Papers



The IT Data Explosion Is Game Changing for Storage Requirements
Source: NetApp
Release Date: June 4, 2012

Data-intensive computing has been an integral part of high-performance computing (HPC) and other large datacenter workloads for decades, but recent developments have dramatically raised the stakes for system requirements — including storage resiliency. The storage systems of today's largest HPC systems often reach capacities of 15–30PB, not counting scratch disk, and feature thousands or tens of thousands of disk drives. Even in more mainstream HPC and enterprise datacenters, storage systems today may include hundreds of drives, with capacities often doubling every two to three years. With this many drives, normal failure rates can mean that a disk is failing somewhere in the system often enough to make MTTF a serious concern at the system level.
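As a back-of-envelope sketch of why drive counts at this scale turn MTTF into a system-level concern (the per-drive MTTF and drive count below are illustrative assumptions, not figures from the paper): treating failures as independent, the expected interval between drive failures anywhere in the system shrinks to roughly the per-drive MTTF divided by the number of drives.

```
// Back-of-envelope sketch of system-level failure intervals for a large
// drive population. The numbers are assumptions for illustration, not
// figures from the NetApp paper.
#include <cstdio>

int main() {
    const double drive_mttf_hours = 1.2e6;  // assumed vendor-class per-drive MTTF
    const int    num_drives       = 10000;  // assumed drive count for a 15-30PB system

    // Treating drive failures as independent, the expected time between
    // failures anywhere in the system is roughly MTTF / N.
    const double system_interval_hours = drive_mttf_hours / num_drives;

    printf("Expected interval between drive failures: %.0f hours (~%.1f days)\n",
           system_interval_hours, system_interval_hours / 24.0);  // 120 h, ~5 days
    return 0;
}
```

Under these assumed numbers, a drive fails somewhere in the system about every five days, which is why resiliency has to be engineered at the system level rather than per drive.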

Exploring the Potential of Heterogeneous Computing
Source: AMD
Release Date: April 2, 2012

Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.

Appro White Paper: Enabling Performance-per-Watt Gains in HPC
Source: Appro
Release Date: April 5, 2012

Designed to meet the growing global demand for HPC solutions, Appro's Xtreme-X™ Supercomputer delivers superior performance-per-watt and reduced I/O latency while bringing significant flexibility to HPC workload configurations, including capacity, hybrid, data-intensive, and capability computing.

Solving Agencies' Big Data Challenges: PED for On-the-Fly Decisions
Source: NetApp
Release Date: March 12, 2012

With the growing volumes of rich sensor data and imagery used today to derive meaningful intelligence, government agencies need to address the challenges posed by these “big” datasets. NetApp provides a scalable, unified, single pool of storage to better handle your processing and analysis of data, driving actionable intelligence in the most demanding environments on earth.

Introducing LRDIMM – A New Class of Memory Modules
Source: Inphi
Release Date: January 17, 2012

This paper introduces the LRDIMM, a new type of memory module for high-capacity servers and high-performance computing platforms. LRDIMM is an abbreviation for Load Reduced Dual Inline Memory Module, the newest type of DIMM supporting DDR3 SDRAM main memory. The LRDIMM is fully pin-compatible with existing JEDEC-standard DDR3 DIMM sockets and supports higher system memory capacities when enabled in the system BIOS.

Debugging CUDA-Accelerated Parallel Applications with TotalView
Source: Rogue Wave Software
Release Date: November 21, 2011

CUDA introduces developers to a number of new concepts (such as kernels, streams, warps and explicitly multi-level memory) that are not encountered in serial or other parallel programming paradigms. In addition, CUDA is frequently used alongside MPI parallelism and host-side multi-core and multi-thread parallelism. The TotalView parallel debugger provides developers with methods to handle these CUDA-specific constructs, as well as an integrated view of all three levels of parallelism within a single debugging session.
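For readers new to these constructs, the minimal sketch below (an illustrative example using the standard CUDA runtime API, not code from the paper or from TotalView) shows the pieces a CUDA-aware debugger has to track: a __global__ kernel, the block/thread launch hierarchy, and an asynchronous stream.

```
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// A __global__ function is a kernel: the unit of device work a CUDA
// debugger lets you set breakpoints in and step through.
__global__ void square(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) out[i] = in[i] * in[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *h_in  = (float *)malloc(bytes);
    float *h_out = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h_in[i] = (float)i;

    float *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);

    // A stream is an ordered, asynchronous work queue; the copies and the
    // kernel launch below all return to the host immediately.
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    cudaMemcpyAsync(d_in, h_in, bytes, cudaMemcpyHostToDevice, stream);

    const int threads = 256;                          // threads per block
    const int blocks  = (n + threads - 1) / threads;  // blocks per grid
    square<<<blocks, threads, 0, stream>>>(d_in, d_out, n);

    cudaMemcpyAsync(h_out, d_out, bytes, cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);                    // wait for queued work

    printf("out[3] = %.1f\n", h_out[3]);              // expect 9.0

    cudaStreamDestroy(stream);
    cudaFree(d_in); cudaFree(d_out);
    free(h_in); free(h_out);
    return 0;
}
```

In a real HPC application, each MPI rank would run host code like this with its own device context and streams, on top of any host-side threading; those are the three levels of parallelism the abstract says TotalView presents within a single session.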

Effectively Applying High-Performance Computing (HPC) to Imaging
Source: Matrox
Release Date: January 9, 2012

Applications whose image resolutions, data rates, and analysis requirements exceed the capabilities of a typical workstation computer still exist today. Moreover, developers must decide how to select and best use the processing technologies at their disposal: multi-core CPU, GPU, and FPGA. This paper examines the suitability of a scalable heterogeneous computing platform for such demanding applications by way of a representative scenario.

Appro and Intel Collaborate on Supercomputing Design Wins for Critical U.S. Infrastructure and Research
Source: Appro
Release Date: February 3, 2012

Appro, a leader in high-performance computing (HPC), has partnered with Intel Corporation to deliver powerful next-generation supercomputing solutions that help maintain the federal government's critical nuclear deterrent infrastructure and support human genome and earthquake research at a top university. By coupling its optimized Xtreme-X™ Supercomputer architecture with the future Intel® Xeon® processor E5 family, Appro was able to deliver a next-generation supercomputer that can tackle today's most difficult problems. This paper provides an overview of these problems and highlights the unique technical capabilities of the Appro/Intel solution, which make it a compelling choice for your supercomputing needs.

Optimizing Data Centers for Big Infrastructure Applications
Source: StackIQ
Release Date: December 8, 2011

Today's application workloads have grown to encompass hundreds or thousands of servers and massive amounts of data. Organizations such as Google and Facebook have developed proprietary tools and techniques to manage their compute environments. But what about the rest of us? This white paper provides an overview of the growing wave of large-scale, ‘Big Infrastructure’ applications and explores the challenges inherent in provisioning, configuring, and deploying software for clusters running massive data workloads.

Big Data Applications in HPC
Source: Intersect360 Research
Release Date: December 4, 2011

"Big Data" is a rapidly growing set of applications touching large datacenters, SMBs, and research

