April 08, 2013
April 8 — Mel Bernstein, Northeastern’s senior vice provost for research and graduate education, has been unanimously elected president of the Massachusetts Green High Performance Computing Center, a state-of-the-art computational infrastructure and collaborative research center in Holyoke, Mass., and chairperson of MGHPCC Holyoke Inc., its nonprofit affiliate. He will succeed Tom Chmura in the position.
In the last 18 months, the MGHPCC has gone from vision to reality. Officially opening its doors in November 2012, the center is an unprecedented example of collaboration between private industry, state government, and five of the commonwealth’s leading research institutions.
Northeastern, Boston University, Harvard University, the Massachusetts Institute of Technology, and the University of Massachusetts have each contributed $10 million to support construction of the facility, which is the first of its kind in the nation. The partnership also includes Massachusetts Gov. Deval Patrick’s office, Cisco Systems, and EMC Corp., a Hopkinton-based data-storage company founded by Northeastern engineering alumni.
“Now, the question is how do we move forward to maximize both the benefits of the facility and the collaborative spirit that has been developed in building it,” Bernstein said.
In recent months, MGHPCC has secured external funding from a variety of sources, including federal research and education grants and a $4.5 million grant from the Massachusetts Life Sciences Center. Bernstein is optimistic that by continuing to encourage collaboration among the center’s partners, it will become an even more competitive facility in its bid for major research funding.
Already, researchers from across the universities have entered into a number of collaborative research projects with seed funding from the facility. These are proof-of-concept projects, laying the foundation for larger-scale work once the MGHPCC is fully operational, which Bernstein expects will happen over the next six months.
“The university is thrilled that this investment has borne so much success already,” said provost Stephen W. Director. “It is particularly important that Mel is now in a position to carry the vision to the next level of research collaboration.”
Bernstein, who also serves as professor of the practice in technology policy and materials engineering, earned his doctorate in metallurgy and materials science from Columbia University. Before joining Northeastern, he held faculty and senior administrative positions at Carnegie Mellon University, Tufts University, and the University of Maryland. In 2003, Bernstein created the U.S. Department of Homeland Security’s Office of University Programs and served as its director for three years.
“My responsibility at the university is to build our research base, and part of that has to be to work collaboratively with other universities both in the commonwealth and across the nation,” Bernstein said. “I plan to use the lessons gained through my work in Washington and elsewhere to make MGHPCC an even greater success.”
-----
Source: Northeastern
Later this month, Indiana University will formally introduce the successor to the Big Red system, the aptly named Big Red II. The Cray-crafted and tuned system is 25 times faster than its baby brother (the 4,100-core original Big Red from 2006) and sports some notable improvements across its 1,020 nodes. According to Thomas Sterling, there are theoretical lessons that can be applied to...
A giant leap in bone structure research paves the way for advances in osteoporosis treatment; details from UCSD's Research CyberInfrastructure (RCI) Program reveal what PIs really want; and a cloud computing programming model puts the focus on predictable performance. Plus GPU-related research and more...
Despite the important advances that middleware enables in both the HPC and enterprise spheres, it generally fails to elicit the same excitement as, say, brand-new leadership-class hardware. But middleware, such as Adaptive Computing's intelligent management engine, Moab, is cool — and you don't have to take Adaptive's word for it. During the company's annual user event last week, Gartner gave Adaptive its "Cool Vendor" stamp of approval.
Apr 17, 2013 | Advances in data-intensive supercomputing increase understanding of autism and related disorders and set the stage for future treatments.
Apr 16, 2013 | For a growing number of non-traditional HPC workloads, the cloud is the place to be.
Apr 15, 2013 | Getting scientific applications to scale across Titan's 300,000 compute cores means there will be bugs. Finding those bugs is where Allinea DDT comes in.
Apr 11, 2013 | The Rockhopper cluster has been in production for over a year now, long enough for additional details to emerge on this interesting HPC cloud use case.
Apr 11, 2013 | Gordon, a supercomputer at the San Diego Supercomputer Center on the campus of the University of California, San Diego, is helping point the direction of the Large Hadron Collider's next research project.
04/15/2013 | Bull | “50% of HPC users say their largest jobs scale to 120 cores or less.” How about yours? Are your codes ready to take advantage of today’s and tomorrow’s ultra-parallel HPC systems? Download this white paper by the analysts at Intersect360 Research to see what Bull and Intel’s Center for Excellence in Parallel Programming can do for your codes.
04/09/2013 | IBM, Omnibond, Xyratex | In today's era of data-intensive computing, the big data phenomenon has grown far beyond a trend; it has become a critical top-line business issue that organizations must tackle in order to remain competitive. With the next generation of tools for managing, storing, and analyzing data flooding the IT marketplace, deciding on the data management solution best suited to your business's requirements can be overwhelming.
The Cray CS300-AC cluster supercomputer offers an energy-efficient, air-cooled design based on modular, industry-standard platforms, featuring the latest processor and network technologies and supporting a wide range of datacenter cooling requirements.
CFD models of 100 million+ cells are increasingly common, and models of a billion+ cells are emerging. To meet demand for 10 to 100x larger models, simulations will have to scale to 10 to 100x more cores, and both the capabilities and the capacity of the compute environment must be enhanced. Learn how Cray and ANSYS are scaling production models to thousands of cores and providing the compute environment to take CFD simulations to the next level of extreme fidelity.