Australian Researchers Model the Common Cold


MELBOURNE, Australia, July 16 -- Rhinovirus infection is linked to about 70 per cent of all asthma exacerbations, with more than 50 per cent of these patients requiring hospitalisation. Furthermore, over 35 per cent of patients with acute exacerbations of chronic obstructive pulmonary disease (COPD) are hospitalised each year due to respiratory viruses, including rhinovirus.

A new antiviral drug to treat rhinovirus infections is being developed by Melbourne company Biota Holdings Ltd. The drug is targeted at people with these existing conditions, for whom the common cold is a serious threat to health and could prove fatal.

A team of researchers led by Professor Michael Parker from St Vincent’s Institute of Medical Research (SVI) and the University of Melbourne is now using information on how the new drug works to create a 3D simulation of the complete rhinovirus on Australia’s fastest supercomputer.

“Our recently published work with Biota shows that the drug binds to the shell that surrounds the virus, called the capsid. But that work doesn’t explain in precise detail how the drug and other similarly acting compounds work,” Professor Parker said.

Professor Parker and his team are working on the newly installed IBM Blue Gene/Q at the University of Melbourne with computational biologists from IBM and the Victorian Life Sciences Computation Initiative (VLSCI).

In production since 1 July 2012, the IBM Blue Gene/Q is the most powerful supercomputer dedicated to life sciences research in the Southern Hemisphere and is currently ranked the fastest in Australia.

“The IBM Blue Gene/Q will provide us with extraordinary 3D computer simulations of the whole virus in a time frame not even dreamt of before,” Professor Parker said.
 
“Supercomputer technology enables us to delve deeper into the mechanisms at play inside a human cell, particularly how drugs work at a molecular level.

“This work offers exciting opportunities for speeding up the discovery and development of new antiviral treatments and hopefully saving many lives around the world,” he said.

Professor Parker said that previously the team had only been able to run smaller simulations of just parts of the virus.
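
For readers curious about what such a simulation involves in practice, the sketch below is a minimal illustration only. The article does not name the molecular dynamics software, force field, or input structures the SVI/VLSCI team uses, so the sketch assumes the open-source OpenMM package, a standard Amber force field, and a hypothetical PDB file holding a single capsid protein subunit.

    # Hedged illustration: the article does not say which MD package or force
    # field is used. This sketch assumes the open-source OpenMM library and a
    # hypothetical PDB file containing one rhinovirus capsid protein subunit.
    from openmm import app, unit, LangevinMiddleIntegrator

    pdb = app.PDBFile("rhinovirus_capsid_subunit.pdb")   # hypothetical input file
    forcefield = app.ForceField("amber14-all.xml", "amber14/tip3p.xml")

    # Solvate the subunit in explicit water; a complete virion would run to
    # millions of atoms, which is why a Blue Gene/Q-class machine is needed.
    modeller = app.Modeller(pdb.topology, pdb.positions)
    modeller.addSolvent(forcefield, padding=1.0 * unit.nanometer)

    system = forcefield.createSystem(
        modeller.topology,
        nonbondedMethod=app.PME,              # long-range electrostatics
        nonbondedCutoff=1.0 * unit.nanometer,
        constraints=app.HBonds,               # permits a 2 fs time step
    )
    integrator = LangevinMiddleIntegrator(
        300 * unit.kelvin, 1.0 / unit.picosecond, 0.002 * unit.picoseconds
    )

    simulation = app.Simulation(modeller.topology, system, integrator)
    simulation.context.setPositions(modeller.positions)
    simulation.minimizeEnergy()

    simulation.reporters.append(app.DCDReporter("trajectory.dcd", 1000))
    simulation.step(50_000)   # 100 ps; production capsid runs are far longer

Scaling from a single solvated subunit like this to a complete virion pushes the atom count into the millions, which is the kind of workload that motivates running the simulations on the Blue Gene/Q.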

Professor James McCluskey, Deputy Vice-Chancellor (Research) at the University of Melbourne, said:

“The work on rhinovirus is an example of how new approaches to treating disease will become possible with the capacity of the IBM Blue Gene/Q. This is exactly how we hoped this extraordinary asset would be utilised by the Victorian research community in collaboration with IBM.”

“This is a terrific facility for Victorian life science researchers, further strengthening Victoria’s reputation as a leading biotechnology centre,” he said.

Dr John Wagner, Manager of the IBM Research Collaboratory for Life Sciences-Melbourne, co-located at the VLSCI, said these types of simulations are the way of the future for drug discovery.

“This is the way we do biology in the 21st Century,” he said.

The newly operational IBM Blue Gene/Q hosted by the University of Melbourne at the VLSCI is ranked 31st on the prestigious global TOP500 list.

The TOP500 list ranks the 500 most powerful computer systems in the world.

The VLSCI is an initiative of the Victorian Government in partnership with the University of Melbourne and the IBM Life Sciences Research Collaboratory, Melbourne.

-----

Source: University of Melbourne

