Welcome to the HPCwire multimedia section!
Feb 15, 2012
In this Inside Analytics 2011 video series, you will hear from a number of key conference participants on topics including high-performance analytics and why it is a game-changer for businesses, the development of the SAS® High-Performance Analytics suite, and how to empower the analytical expert.
Watch now...
Jun 01, 2012
Listen to an interview with SDSC about the most powerful data-intensive supercomputer configuration based on the latest Intel® Xeon® processors. The Gordon supercomputer, listed among the fastest systems on the Top500, offers excellent memory bandwidth and has achieved outstanding results on data-intensive application benchmarks that were not possible with traditional computing methods.
Watch now...
Feb 03, 2012
Appro's recently launched next-generation Xtreme-X™ supercomputer has made headlines across the nation over the past month. In this interview, we sit down with Appro CTO Giri Chukkapalli to discuss how the new Xtreme-X™ system design supports future technologies. He also talks about the Appro Cluster Engine™ (ACE) management software suite, the part of Appro's cluster software stack that is tightly integrated with the new Xtreme-X™ supercomputer.
Watch now...
Feb 03, 2012
Doug Eadline, editor of Cluster Monkey, had a chance to sit down with Jim Ang, technical manager at Sandia National Laboratories, for an interview about the "first of a kind" experimental cluster: an Appro Xtreme-X™ supercomputer using Intel's Knights Ferry (KNF) software development platform for the Intel® Many Integrated Core (MIC) architecture. Just for the record, Knights Ferry is available only to select groups, including Jim Ang's team at Sandia, and represents a potential new direction in HPC. (A sketch of the MIC offload programming style appears after this item.)
Watch now...
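
For readers curious what programming the MIC architecture looked like in practice, here is a minimal sketch in the style of Intel's compiler offload extensions. It is a hypothetical example, not code from Sandia's cluster, and it assumes the "#pragma offload" language extension Intel shipped with its compilers for MIC; details on the restricted KNF software stack may have differed.

    #include <stdio.h>
    #include <stdlib.h>

    #define N 1000000

    /* Functions called on the coprocessor carry a target attribute
     * so the compiler also builds a MIC version of them. */
    __attribute__((target(mic)))
    static double sum_squares(const double *x, int n)
    {
        double s = 0.0;
        #pragma omp parallel for reduction(+:s)
        for (int i = 0; i < n; i++)
            s += x[i] * x[i];
        return s;
    }

    int main(void)
    {
        double *x = malloc(N * sizeof *x);
        for (int i = 0; i < N; i++)
            x[i] = 1.0;

        double s = 0.0;
        /* Ship the array to the coprocessor, run the reduction
         * across its many cores, and copy the scalar result back. */
        #pragma offload target(mic) in(x:length(N)) out(s)
        s = sum_squares(x, N);

        printf("sum of squares = %f\n", s);
        free(x);
        return 0;
    }

The appeal is visible even in this toy: the loop itself is ordinary OpenMP, and only the offload pragma marks where the many-core device takes over.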
Feb 03, 2012
The TLCC2 project, a multi-million-dollar, multi-year contract, was awarded to Appro as a joint procurement by the US Department of Energy's National Nuclear Security Administration (NNSA). Appro has begun delivering Xtreme-X™ supercomputers to three major national laboratories: Sandia (SNL), Los Alamos (LANL), and Lawrence Livermore (LLNL). Matt Leininger of LLNL and Appro's VP of Advanced Technology, John Lee, discuss TLCC2 in further detail as the project gets underway.
Watch now...
Feb 03, 2012
Power, performance, and scalability are just a few of the enhancements in Appro's next-generation Xtreme-X™ supercomputer, which addresses four HPC workload configurations: capacity, hybrid, data-intensive, and capability computing. Appro sustains its position as a leading HPC solutions provider by offering a complete package of innovative, standards-based products and exceptional professional services. The new Xtreme-X™ supercomputer is your solution to a better future.
Watch now...
Sep 06, 2011
Complimentary Webcast! Break free from the database vendors that force you to keep investing in additional skills and hardware to accommodate the inefficiencies of their software. Learn how you can achieve higher DBA efficiency and give your DBAs more time to focus on strategic projects and add more value to your business. Join us to hear best practices and client experiences on reducing both the risk and cost associated with growing Data Center complexity.
Watch now...
Aug 29, 2011
NFS has been the standard protocol for NAS systems since the 1980s. However, with the explosive growth of Linux clusters running demanding technical computing applications, NFS is no longer sufficient for these big data workloads. After years of development effort, driven by Panasas and others, pNFS is now just around the corner and promises to dramatically improve Linux client I/O performance thanks to its parallel architecture. Watch the on-demand webinar – "pNFS: Are We There Yet?" (A sketch of the parallel client I/O pattern pNFS serves appears after this item.)
Watch now...
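
The reason pNFS matters for workloads like these is that clients keep issuing ordinary POSIX I/O while the NFSv4.1 layout lets the kernel client move data to the storage servers in parallel. A minimal sketch, assuming a file on a hypothetical pNFS-enabled NFSv4.1 mount (/mnt/pnfs below is made up): several processes write disjoint ranges of one shared file, exactly the access pattern a parallel layout can service concurrently.

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NPROC  4
    #define CHUNK  (4 * 1024 * 1024)   /* 4 MiB per writer */

    int main(void)
    {
        /* Hypothetical path on an NFSv4.1 mount with pNFS enabled. */
        const char *path = "/mnt/pnfs/shared.dat";

        for (int p = 0; p < NPROC; p++) {
            if (fork() == 0) {
                /* Each child opens the same file and writes its own
                 * disjoint byte range; with a pNFS layout the kernel
                 * client can send these to data servers in parallel. */
                int fd = open(path, O_WRONLY | O_CREAT, 0644);
                if (fd < 0) { perror("open"); _exit(1); }

                char *buf = malloc(CHUNK);
                if (!buf) _exit(1);
                memset(buf, 'A' + p, CHUNK);

                if (pwrite(fd, buf, CHUNK, (off_t)p * CHUNK) != CHUNK)
                    perror("pwrite");

                free(buf);
                close(fd);
                _exit(0);
            }
        }
        for (int p = 0; p < NPROC; p++)
            wait(NULL);

        printf("wrote %d x %d bytes to %s\n", NPROC, CHUNK, path);
        return 0;
    }

On Linux, mounting with NFS version 4.1 (for example, mount -t nfs4 -o minorversion=1 server:/export /mnt/pnfs) is what enables the pNFS client path; the application code itself needs no changes.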
Jun 26, 2011
In this lively panel discussion, Addison Snell of Intersect360 Research engages four forward-thinking leaders of the supercomputing industry on a range of thought-provoking topics, providing a high-pace conclusion to the ISC conference.
Watch now...
Apr 12, 2011
Speed up innovation and eliminate costs with Amazon EC2. No more procuring, configuring, and operating in-house compute clusters. Access compute resources in minutes instead of months. See for yourself: build a 64-core parallel cluster and run a molecular dynamics simulation in under 10 minutes.
Watch now...
One by one, US government HPC labs are getting into the industry partnership business. The latest is Lawrence Livermore National Laboratory (LLNL), which this week announced it was teaming with IBM to form "Deep Computing Solutions," a collaboration that is being folded into LLNL's new High Performance Computing Innovation Center.
Read more...
As the supercomputing faithful prepare for exascale computing, there is a great deal of talk about moving beyond the two-decades-old MPI programming model. The HPC programmers of tomorrow are going to have to write codes that can deal with systems hundreds of times larger than the top supercomputers of today, and the general feeling is that MPI, by itself, will not make that transition gracefully. One of the alternatives being offered is a PGAS model known as GASPI... (A sketch of the one-sided communication style behind PGAS models appears after this item.)
Read more...
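
To make the contrast concrete: PGAS-style models put partitioned memory directly in reach of remote ranks, so communication is one-sided rather than a matched send/receive pair. The sketch below illustrates that one-sided style using MPI's own RMA interface rather than GASPI itself (GASPI's segment-and-notification API is similar in spirit, but its exact calls are not shown here):

    #include <mpi.h>
    #include <stdio.h>

    /* One-sided "put" into a neighbor's exposed window: a sketch of
     * the PGAS-style communication that models like GASPI build on.
     * Uses MPI RMA here for illustration, not the GASPI API itself. */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double local = (double)rank;   /* memory we expose to others */
        MPI_Win win;
        MPI_Win_create(&local, sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        int right = (rank + 1) % size;
        double my_val = (double)rank;

        /* Write our rank directly into the right neighbor's window;
         * the target never posts a matching receive. */
        MPI_Win_fence(0, win);
        MPI_Put(&my_val, 1, MPI_DOUBLE, right, 0, 1, MPI_DOUBLE, win);
        MPI_Win_fence(0, win);

        printf("rank %d received %g from its left neighbor\n", rank, local);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }

Models like GASPI aim to go further still, replacing even the collective fence synchronization shown here with finer-grained per-message notifications, which is part of their pitch for scaling past today's machines.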
As a result of the dissolution of DARPA's UHPC program, the driving force behind exascale research in the US now resides with the Department of Energy, which has embarked upon a program to help develop this technology. To get a lab-centric view of the path to exascale, HPCwire asked three of the top directors at Argonne National Laboratory -- Rick Stevens, Michael Papka, and Marc Snir -- to provide some context for the challenges and benefits of developing these extreme-scale systems.
Read more...
Jun 26, 2012
Researchers look to boost speed of phase change memory.
Read more...
Jun 25, 2012
SGI's new UV 2 super swallows Wikipedia and maps the history of the world.
Read more...
Jun 25, 2012 | NetApp
A single hour of data collection can result in 7+ million files from just one camera – nearly 2,000 new files per second, sustained. Collection opportunities are limited and must be successful every time. As defense and intelligence agencies seek to use the data collected to make mission-critical battlefield decisions, there is greater emphasis on smart data and imagery collection, capture, storage, and analysis to drive real-time intelligence. The data gathered must be accurately and systematically analyzed, integrated, and disseminated to those who need it – the troops on the ground. This reality leads to an inevitable challenge: warfighters swimming in sensors, drowning in data. With millions, if not billions, of sensors providing all-seeing reports of the combat environment, managing the overload demands a file system and storage infrastructure that scales and performs while protecting the data collected. Part II of our whitepaper series highlights NetApp's scalable, modular, and flexible storage solution for the demanding requirements of sophisticated ISR environments.