July 16, 2012
MELBOURNE, Australia, July 16 -- Rhinovirus infection is linked to about 70 per cent of all asthma exacerbations, with more than 50 per cent of these patients requiring hospitalisation. Furthermore, over 35 per cent of patients with acute exacerbations of chronic obstructive pulmonary disease (COPD) are hospitalised each year due to respiratory viruses, including rhinovirus.
A new antiviral drug to treat rhinovirus infections is being developed by Melbourne company Biota Holdings Ltd, targeted at patients with these existing conditions, for whom the common cold poses a serious and potentially fatal threat to their health.
A team of researchers led by Professor Michael Parker from St Vincent’s Institute of Medical Research (SVI) and the University of Melbourne is now using information on how the new drug works to create a 3D simulation of the complete rhinovirus using Australia’s fastest supercomputer.
“Our recently published work with Biota shows that the drug binds to the shell that surrounds the virus, called the capsid. But that work doesn’t explain in precise detail how the drug and other similar acting compounds work,” Professor Parker said.
Professor Parker and his team are working on the newly installed IBM Blue Gene/Q at the University of Melbourne with computational biologists from IBM and the Victorian Life Sciences Computation Initiative (VLSCI).
In production from 1 July 2012, the IBM Blue Gene/Q is the most powerful supercomputer dedicated to life sciences research in the Southern Hemisphere and currently ranked the fastest in Australia.
“The IBM Blue Gene/Q will provide us with extraordinary 3D computer simulations of the whole virus in a time frame not even dreamt of before,” Professor Parker said.
“Supercomputer technology enables us to delve deeper into the mechanisms at play inside a human cell, particularly how drugs work at a molecular level.
“This work offers exciting opportunities for speeding up the discovery and development of new antiviral treatments, and hopefully saving many lives around the world,” he said.
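Simulating a whole virus in 3D at the molecular level is typically done with molecular dynamics: computing the forces between atoms and stepping their positions forward in tiny time increments. The article does not describe the team's actual software or methods, so as a purely illustrative sketch, a minimal Lennard-Jones molecular dynamics loop (hypothetical parameters, reduced units) looks like this:

```python
import numpy as np

def lj_forces(pos, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair forces over all particle pairs (O(N^2) loop)."""
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]          # vector from j to i
            d2 = np.dot(r, r)            # squared separation
            inv6 = (sigma**2 / d2) ** 3  # (sigma/r)^6
            # F = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6) / r^2 * r_vec
            f = 24 * epsilon * (2 * inv6**2 - inv6) / d2 * r
            forces[i] += f
            forces[j] -= f
    return forces

def velocity_verlet(pos, vel, dt=0.001, steps=100, mass=1.0):
    """Integrate Newton's equations of motion with velocity Verlet."""
    f = lj_forces(pos)
    for _ in range(steps):
        pos = pos + vel * dt + 0.5 * (f / mass) * dt**2
        f_new = lj_forces(pos)
        vel = vel + 0.5 * (f + f_new) / mass * dt
        f = f_new
    return pos, vel

# Two particles placed outside the potential minimum drift together.
pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
vel = np.zeros_like(pos)
pos2, vel2 = velocity_verlet(pos, vel)
```

A complete virus capsid plus surrounding water involves millions of atoms, so even with smarter neighbour-list algorithms each timestep requires enormous numbers of force evaluations, repeated over millions of steps, which is why simulations at this scale call for a massively parallel machine like the Blue Gene/Q.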
Professor Parker said that previously the team had only been able to run smaller simulations of just parts of the virus.
Professor James McCluskey, Deputy Vice-Chancellor (Research) at the University of Melbourne, said:
“The work on rhinovirus is an example of how new approaches to treating disease will become possible with the capacity of the IBM Blue Gene/Q. This is exactly how we hoped this extraordinary asset would be utilised by the Victorian research community in collaboration with IBM.”
“This is a terrific facility for Victorian life science researchers, further strengthening Victoria’s reputation as a leading biotechnology centre,” he said.
Dr John Wagner, Manager of the IBM Research Collaboratory for Life Sciences-Melbourne, co-located at the VLSCI, said these types of simulations are the way of the future for drug discovery.
“This is the way we do biology in the 21st Century,” he said.
The newly operational IBM Blue Gene/Q, hosted by the University of Melbourne at the VLSCI, is ranked 31st on the prestigious global TOP500 list, which ranks the 500 most powerful computer systems in the world.
The VLSCI is an initiative of the Victorian Government in partnership with the University of Melbourne and the IBM Life Sciences Research Collaboratory, Melbourne.
-----
Source: University of Melbourne