HPCwire

Since 1986 - Covering the Fastest Computers
in the World and the People Who Run Them


Supercomputer Sails Through World History


A novel use of supercomputing has produced a new approach to researching world history. SGI and Kalev Leetaru, Assistant Director for Text and Digital Media Analytics at the University of Illinois, set out to map the full contents of the English-language edition of Wikipedia using a history analytics application.

To implement the application, Leetaru took advantage of the global shared-memory architecture and high performance of SGI's UV 2000 to perform the data mining entirely in memory. According to the press release, the project can now visually represent historical events using dates, locations and sentiment data gleaned from the text.

Leetaru recently published Culturomics 2.0, a study that mined 100 million global news articles spanning 25 years to build a network of 10 billion people linked by 100 trillion relationships. The resulting 2.4-petabyte dataset visualized changes in society, including the lead-up to the Arab Spring and the likely location of Osama bin Laden.

That led to the idea of building a historical map based on Wikipedia entries. The project encompassed a wide range of analyses, generating videos, graphs and charts detailing any number of relationships. Examples include connectivity structures, visualizations of people cross-referenced within the same article, and graphs charting the online encyclopedia's sentiment over a millennium of history.
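The press release does not describe the project's actual algorithms, but the kind of connectivity structure mentioned above, people linked because they appear in the same article, is essentially a co-occurrence network. A minimal sketch in Python (the articles and names below are invented purely for illustration):

```python
from collections import defaultdict
from itertools import combinations

# Toy corpus: each "article" lists the people it mentions.
articles = {
    "Battle of Waterloo": ["Napoleon Bonaparte", "Duke of Wellington"],
    "Congress of Vienna": ["Napoleon Bonaparte", "Klemens von Metternich"],
    "Hundred Days": ["Napoleon Bonaparte", "Duke of Wellington"],
}

# Edge weight = number of articles in which two people co-occur.
cooccur = defaultdict(int)
for people in articles.values():
    for a, b in combinations(sorted(people), 2):
        cooccur[(a, b)] += 1

for (a, b), weight in sorted(cooccur.items(), key=lambda kv: -kv[1]):
    print(f"{a} -- {b}: {weight}")
```

At Wikipedia scale the same idea produces billions of weighted edges, which is where an in-memory machine like the UV 2000 earns its keep: the whole graph stays resident rather than being paged through disk.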

This is not the first project to attempt to map Wikipedia entries. Previous attempts relied on manually entered metadata, which limited the scope of the location data. In this case, SGI and Leetaru identified and built connections from every location and date found across Wikipedia's four million pages.

To achieve these results, the entire English-language Wikipedia dataset was loaded into the UV 2000's memory, although no specifics were provided about how much RAM or how many processors were involved. The UV 2000 architecture scales up to 4,096 threads on Intel Xeon E5-4600 processors and up to 64 TB of memory.

Once in memory, the Wikipedia data was geo- and date-coded using algorithms that tracked locations and dates in text. An average article included 19 locations and 11 dates. The resulting connections were then placed in a large network structure representing the history of the world. With all tags and connections established, visual analysis of the entire dataset could be generated in “near real-time.”
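The geo- and date-coding step described above amounts to scanning each article for recognizable place names and dates. The real system presumably resolved place names against a full geographic database; as a rough illustration, here is a toy tagger with a hypothetical three-entry gazetteer and a simple year regex (all names and text are invented):

```python
import re

# Hypothetical mini-gazetteer; the real project matched every place
# name in Wikipedia against a comprehensive geographic database.
GAZETTEER = {"Paris", "Moscow", "Waterloo"}

# Matches four-digit years from 1000 through 2099.
YEAR_RE = re.compile(r"\b(1[0-9]{3}|20[0-9]{2})\b")

def tag_article(text):
    """Return the sorted years and gazetteer places found in the text."""
    years = sorted({int(y) for y in YEAR_RE.findall(text)})
    places = sorted({name for name in GAZETTEER if name in text})
    return years, places

text = ("Napoleon entered Moscow in 1812 and was finally defeated "
        "near Waterloo in 1815.")
years, places = tag_article(text)
print(years)   # [1812, 1815]
print(places)  # ['Moscow', 'Waterloo']
```

Each tagged article then contributes its (location, date) pairs as nodes and edges in the world-history network, and with everything held in memory, queries over the full graph can run at interactive speed.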

The in-memory application model gave Leetaru the ability to test theories and explore historical data in ways that were not possible before. “It's very similar to using a word processor instead of using a typewriter,” he said. “I can conduct my research in a completely different way, focusing on the outcomes, not the algorithms.”
