Heterogeneous Multi-core Workshop at NCAR

By Marijke Unger
10/03/2014 - 12:00am

A select group of computational experts familiar with weather and climate applications, along with representatives from industry, gathered at the NCAR Mesa Lab September 17-18 for the fourth Heterogeneous Multi-core Workshop to discuss the latest findings and developments in programming emerging disruptive computing technologies.

Over the past few years, a new architectural paradigm has appeared in high-performance computer design: namely, the emergence of coprocessors, also known as accelerators, that sport a large amount of computing power in the form of many SIMD/vector processing elements. Typically, these coprocessors can be installed in a computing node alongside a couple of conventional microprocessors, which serve as hosts to these computationally powerful sidekicks.

This many-core architectural approach has arisen because a fundamental scaling law of transistors, called Dennard scaling, has broken down. Dennard scaling held that transistors could be reduced in size without increasing their power density, but that relationship no longer holds due to quantum tunneling effects associated with their minute size. In response, the many-core design restricts rising power densities by lowering clock speeds and dramatically increasing the number of processing elements per unit area. The result is a system that is potentially much faster and more efficient in terms of energy per floating-point calculation, but that can also be much more difficult to program.
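
To make the host-plus-coprocessor division of labor concrete, the sketch below shows, in CUDA, a host CPU copying data to an accelerator, launching a kernel across roughly a million lightweight threads, and copying the result back. It is a minimal illustration only; the kernel, array size, and launch configuration are hypothetical and are not drawn from any code discussed at the workshop.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Illustrative kernel: each of many lightweight GPU threads handles one element.
    __global__ void scale(float *x, float a, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= a;
    }

    int main() {
        const int n = 1 << 20;                       // ~1 million elements (illustrative size)
        float *h = (float *)malloc(n * sizeof(float));
        for (int i = 0; i < n; ++i) h[i] = 1.0f;

        float *d;
        cudaMalloc(&d, n * sizeof(float));                            // allocate on the coprocessor
        cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);  // host -> device transfer

        // Launch many threads at once: 256 per block, enough blocks to cover n.
        scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);

        cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);  // device -> host transfer
        printf("h[0] = %f\n", h[0]);

        cudaFree(d);
        free(h);
        return 0;
    }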

The NCAR workshop provided a forum for experts to share experiences and hold open discussions, leading to an improved collective understanding of the utility of these new technologies. The workshop focused specifically on the algorithms, programming models, design strategies, and tools that will be needed to create a new generation of applications capable of efficiently exploiting the disruptive computing power of heterogeneous multi-core platforms. In addition, the workshop sought to create a community of developers who can work together to provide technical feedback to vendors, develop necessary software standards, and further develop programming models for weather and climate applications.

“This workshop is an important way of bringing together people who are doing work on different solutions, to share what worked well for them,” said Ilene Carpenter, a computational scientist at the National Renewable Energy Laboratory. “This meeting is different from most traditional conferences where people present completed work. Instead, sharing what they’ve learned, what they’re trying and what didn’t work makes this a valuable brainstorming session to refine plans for future work.”

Most attendees at the workshop started from the premise that the trends driving the many-core paradigm shift will continue, and that the future of scientific progress in computational weather and climate simulation will depend profoundly on the community's ability to adapt or develop, and then optimize, methods for exploiting massive levels of parallelism. However, the apparent consensus from the meeting was that the size and complexity of weather and climate applications make adoption of radically new technology difficult, as does the current maturity of these architectures and the compilers that target them. This reality was reflected both in the rate of progress in many-core application development and in the relatively modest performance gains (2-4 times in many cases) achieved so far. Thus, significant challenges remain to the widespread adoption of many-core systems.

“The workshop highlighted advances in physics package implementations on both GPUs and Xeon Phi processors for both weather and climate models,” said Srinath Vadlamani, a software engineer at NCAR. “The most successful projects thus far have been running the new enhanced versions of the code simultaneously on the CPUs and accelerated hardware, taking advantage of the faster CPU processors for the more serialized portions of the codes.”
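
As a rough sketch of the hybrid strategy Vadlamani describes (hypothetical code, not from any project presented at the workshop): a data-parallel kernel is launched asynchronously on the GPU while the host CPU works through a serial routine, and the two are synchronized only when both results are needed.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Illustrative data-parallel kernel offloaded to the accelerator.
    __global__ void column_update(float *field, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) field[i] = field[i] * 0.5f + 1.0f;
    }

    // Illustrative serial work kept on the host CPU.
    float serial_diagnostics(int steps) {
        float acc = 0.0f;
        for (int s = 0; s < steps; ++s) acc += 1.0f / (s + 1);
        return acc;
    }

    int main() {
        const int n = 1 << 20;
        float *d_field;
        cudaMalloc(&d_field, n * sizeof(float));
        cudaMemset(d_field, 0, n * sizeof(float));

        // Kernel launches are asynchronous with respect to the host,
        // so the CPU is free to run serial code while the GPU computes.
        column_update<<<(n + 255) / 256, 256>>>(d_field, n);
        float diag = serial_diagnostics(100000);   // runs on the CPU concurrently

        cudaDeviceSynchronize();                   // wait for the accelerator to finish
        printf("diagnostic = %f\n", diag);

        cudaFree(d_field);
        return 0;
    }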