HPCwire

Since 1986 - Covering the Fastest Computers
in the World and the People Who Run Them


Whitepapers

HPCwire's white paper database contains reports from the leading thought-leaders and idea generators in the HPC industry.

Most Recent White Papers



NetApp Tackling the Data Deluge: File Systems and Storage Technologies
Source: NetApp
Release Date: June 25, 2012

A single hour of data collection can result in more than 7 million files from just one camera. Collection opportunities are limited and must succeed every time. As defense and intelligence agencies seek to use the data they collect to make mission-critical battlefield decisions, there is growing emphasis on smart data and imagery collection, capture, storage, and analysis to drive real-time intelligence. The data gathered must be accurately and systematically analyzed, integrated, and disseminated to those who need it – troops on the ground. This reality leads to an inevitable challenge: warfighters swimming in sensors, drowning in data. With millions, if not billions, of sensors providing all-seeing reports of the combat environment, managing the overload demands a file system and storage infrastructure that scales and performs while protecting the data collected. Part II of our whitepaper series highlights NetApp’s scalable, modular, and flexible storage solution for the demanding requirements of sophisticated ISR environments.

AMD Exploring the Potential of Heterogeneous Computing
Source: AMD
Release Date: April 2, 2012

Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.

Appro High Performance Computing Made Simple: The 10 Gigabit Ethernet Cluster
Source: Appro
Release Date: August 16, 2012

In this white paper you will learn how to balance high performance with simplicity using the Appro HyperGreen™ Cluster, Intel® Xeon® processor 5500 series (Nehalem), and 10 Gigabit Ethernet. A complete turn-key solution that uses standard cabling and ready-to-run Rocks+ software, it is well suited to small and medium research and engineering labs in the academic, government, and industry sectors.

Appro HPC Software Requirements to Support High Performance Cluster Architecture
Source: Appro
Release Date: August 2, 2012

Explore how Appro combines open source HPC software with key compatibility features of the Appro Cluster Engine™ (ACE) management software to provide a complete software stack for its Appro Xtreme-X™ Supercomputer product line, which has been helping customers address pain points in medium to large HPC deployments.

NetApp The IT Data Explosion Is Game Changing for Storage Requirements
Source: NetApp
Release Date: June 4, 2012

Data-intensive computing has been an integral part of high-performance computing (HPC) and other large datacenter workloads for decades, but recent developments have dramatically raised the stakes for system requirements — including storage resiliency. The storage systems of today's largest HPC systems often reach capacities of 15–30PB, not counting scratch disk, and feature thousands or tens of thousands of disk drives. Even in more mainstream HPC and enterprise datacenters, storage systems today may include hundreds of drives, with capacities often doubling every two to three years. With this many drives, normal failure rates can mean that a disk is failing somewhere in the system often enough to make MTTF a serious concern at the system level.

Appro Enabling Performance-per-Watt Gains in HPC
Source: Appro
Release Date: April 5, 2012

Designed to meet the growing global demand for HPC solutions, Appro's Xtreme-X™ Supercomputer delivers superior performance-per-watt and reduced I/O latency while bringing significant flexibility to HPC workload configurations including capacity, hybrid, data intensive and capability computing.

NetApp Solving Agencies’ Big Data Challenges: PED for On-the-Fly Decisions
Source: NetApp
Release Date: March 12, 2012

With the growing volumes of rich sensor data and imagery used today to derive meaningful intelligence, government agencies need to address the challenges posed by these “big” datasets. NetApp provides a scalable, unified, single pool of storage to better handle the processing and analysis of data and drive actionable intelligence in the most demanding environments on earth.

Inphi Introducing LRDIMM – A New Class of Memory Modules
Source: Inphi
Release Date: January 17, 2012

This paper introduces the LRDIMM, a new type of memory module for high capacity servers and high-performance computing platforms. LRDIMM is an abbreviation for Load Reduced Dual Inline Memory Module, the newest type of DIMM supporting DDR3 SDRAM main memory. The LRDIMM is fully pin compatible with existing JEDEC-standard DDR3 DIMM sockets, and supports higher system memory capacities when enabled in the system BIOS.

Rogue Wave Software Debugging CUDA-Accelerated Parallel Applications with TotalView
Source: Rogue Wave Software
Release Date: November 21, 2011

CUDA introduces developers to a number of new concepts (such as kernels, streams, warps and explicitly multi-level memory) that are not encountered in serial or other parallel programming paradigms. In addition, CUDA is frequently used alongside MPI parallelism and host-side multi-core and multi-thread parallelism. The TotalView parallel debugger provides developers with methods to handle these CUDA-specific constructs, as well as an integrated view of all three levels of parallelism within a single debugging session.

Matrox Effectively applying high-performance computing (HPC) to imaging
Source: Matrox
Release Date: January 9, 2012

Applications whose image resolutions, data rates, and analysis requirements exceed the capabilities of a typical workstation continue to exist to this day. Moreover, developers must decide how to select and best use the processing technologies at their disposal – multi-core CPU, GPU, and FPGA. This paper examines the suitability of a scalable heterogeneous computing platform for such demanding applications by way of a representative scenario.


