CHAP: 2012 Accelerated Scientific Discovery

The Accelerated Scientific Discovery (ASD) initiative provides dedicated, large-scale computational resources to a small number of projects for a short period, typically the two months or so following acceptance of a new HPC system. These projects are selected to help put the new system through its paces and to pursue scientific objectives that would not be possible through normal allocation opportunities. In most cases, three to five NSF-supported, university-led projects from across the geosciences or the supporting computational sciences are chosen.

For the Yellowstone HPC system, the first system to be installed at the NCAR-Wyoming Supercomputing Center (NWSC), CISL issued a call for proposals in late 2011 and received 10 ASD proposals, with individual requests ranging from 7 million to 31 million core-hours. The requests totaled 132.3 million core-hours, but only 70 million core-hours could be made available. The proposals were reviewed by the CHAP, and the following five awards were made.

Towards seamless high-resolution prediction at intraseasonal and longer timescales

Project lead: James Kinter, Center for Ocean-Land-Atmosphere Studies (COLA)
Yellowstone allocation: 21 million core-hours

The proposed experiments represent a continuation of the highly successful Project Athena, an international collaboration between the Center for Ocean-Land-Atmosphere Studies (COLA) and the European Centre for Medium-Range Weather Forecasts (ECMWF), in support of both centers' ongoing efforts to understand and quantify predictability in the weather and climate system from daily to interannual time scales. Building upon the results of Project Athena, we propose to explore the impact of increased atmospheric resolution on model fidelity and prediction skill in a coupled, seamless framework.

Direct numerical simulation of cumulus cloud core processes over larger volumes and for longer times

Project lead: Lance Collins, Cornell University
Yellowstone allocation: 19 million core-hours

We will simulate particle-turbulence interactions under conditions that mimic cumulus cloud cores, with scales ranging from millimeters up to a few meters, for a period of about 20 minutes. The simulations will be performed with our state-of-the-art, high-Reynolds-number direct numerical simulation code for isotropic turbulence and homogeneous turbulent shear flow. Two-dimensional domain decomposition is used for massively parallel simulations on tens of thousands of processors. With 10^10 grid points and 10^9 particles on 10^5 processors, these would be the largest such simulations ever carried out in the United States. By studying the influence of particle and fluid parameters on droplet clustering, collisions, and coalescence, we will address long-standing questions regarding warm rain initiation in cumulus clouds. We will rely on insight gained from this study to improve large-eddy simulation models of cloud dynamics and to develop scaling relations relevant for the high Reynolds numbers and large domain sizes characteristic of atmospheric clouds. This research also has important implications for climate modeling, as clouds modulate the radiation from the sun and thus affect the global climate balance.
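As a rough illustration of the two-dimensional ("pencil") domain decomposition mentioned above, the sketch below shows how a three-dimensional grid could be partitioned across a 2-D process grid so that each MPI rank owns a thin pencil of the full domain. This is a minimal, hypothetical example, not code from the project's DNS solver; the grid size, process counts, and function names are assumptions for illustration only.

```python
import numpy as np

def block_extents(n, p, coord):
    """Return (start, stop) indices of one rank's block along an axis,
    splitting n grid points as evenly as possible across p ranks."""
    counts = np.full(p, n // p)
    counts[: n % p] += 1                      # spread the remainder over the first ranks
    starts = np.concatenate(([0], np.cumsum(counts)))
    return int(starts[coord]), int(starts[coord + 1])

def local_pencil(nx, ny, nz, py, pz, rank):
    """Map an MPI rank onto a (py x pz) process grid and return the index
    ranges of its x-pencil: all of x, one block of y, one block of z."""
    jy, jz = divmod(rank, pz)                 # rank -> 2-D process coordinates
    y0, y1 = block_extents(ny, py, jy)
    z0, z1 = block_extents(nz, pz, jz)
    return (0, nx), (y0, y1), (z0, z1)

# Illustrative only: a 1024^3 grid on 4096 ranks arranged as a 64 x 64 process grid
for r in (0, 1, 4095):
    print(r, local_pencil(1024, 1024, 1024, 64, 64, r))
```

Spectral DNS codes built on pencil decompositions of this kind typically perform global transforms as sequences of one-dimensional operations separated by transposes among rows and columns of the process grid, which is what allows them to scale to very large processor counts.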

Arrest of frontogenesis in oceanic submesoscale turbulence

Project lead: Baylor Fox-Kemper, University of Colorado, Boulder
Yellowstone allocation: 16 million core-hours

The proposed simulations would be the first to resolve the true multiscale character of the surface frontogenesis equilibration process in the atmosphere or ocean. In the ocean, this equilibration occurs in the presence of both submesoscale and Langmuir turbulence, each of which scales independently and has been shown to have a strong effect on the oceanic mixed layer in global climate. It is presently unknown what the detailed frontogenetic equilibration mechanism will be.

Community computational platforms for developing three-dimensional models of earth structure

Project lead: Thomas Jordan, University of Southern California
Yellowstone allocation: 7.3 million core-hours

Precise information about the structure of the solid Earth comes from seismograms recorded at the surface of a highly heterogeneous lithosphere. Full-3D tomography based on adjoint methods can assimilate this information into three-dimensional models of elastic and anelastic structure. These methods fully account for the physics of wave excitation, propagation, and interaction by numerically solving the inhomogeneous equations of motion for a heterogeneous anelastic solid. Full-3D tomography using adjoint methods requires the execution of complex computational procedures that challenge the most advanced high-performance computing (HPC) systems. In this allocation, we request computational resources to migrate two tomographic platforms to NWSC systems and run them there. The two computational systems we propose to port to NWSC are the AWP-ODC 4th-order, staggered-grid, finite-difference code, which has been widely used for regional earthquake simulation and physics-based seismic hazard analysis, and the SPECFEM3D spectral element code, which is capable of modeling wave propagation through aspherical structures of essentially arbitrary complexity on scales ranging from local to global. We will use these tomographic computational tools to refine Community Velocity Models (CVMs) using earthquake waveforms and ambient-noise Green functions, and to investigate the iterative convergence of the tomographic models from disparate starting models. The resulting improved regional and global 3D models will provide better images of mantle convection and its relationship to crustal tectonics and the geodynamo; will more precisely constrain the plate-tectonic processes of lithospheric creation, deformation, magmatism, and destruction; and will improve imaging of seismic sources, including damaging earthquakes and nuclear explosions.
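For readers unfamiliar with the numerical kernel behind staggered-grid finite-difference codes such as AWP-ODC, the sketch below is a minimal one-dimensional velocity-stress wave solver using the standard fourth-order staggered-grid stencil. It is an illustrative toy under assumed material properties, grid dimensions, and source parameters, not the AWP-ODC or SPECFEM3D codes themselves.

```python
import numpy as np

# Minimal 1-D velocity-stress elastic (SH) wave solver on a staggered grid.
# Fourth-order spatial differences use the standard 9/8, -1/24 coefficients.
C1, C2 = 9.0 / 8.0, -1.0 / 24.0

nx, dx = 1000, 10.0                       # grid points and spacing [m] (illustrative)
rho, vs = 2500.0, 3000.0                  # density [kg/m^3] and shear velocity [m/s]
mu = rho * vs ** 2                        # shear modulus
dt = 0.5 * dx / vs                        # time step well inside the stability limit
nt = 800

v = np.zeros(nx)                          # particle velocity at integer nodes
tau = np.zeros(nx)                        # shear stress at half-offset nodes

def dx_forward(f):
    """Fourth-order staggered derivative, result shifted +1/2 cell."""
    d = np.zeros_like(f)
    d[1:-2] = (C1 * (f[2:-1] - f[1:-2]) + C2 * (f[3:] - f[:-3])) / dx
    return d

def dx_backward(f):
    """Fourth-order staggered derivative, result shifted -1/2 cell."""
    d = np.zeros_like(f)
    d[2:-1] = (C1 * (f[2:-1] - f[1:-2]) + C2 * (f[3:] - f[:-3])) / dx
    return d

src = nx // 2
for it in range(nt):
    t = it * dt
    v += dt / rho * dx_backward(tau)                       # update velocity from stress
    v[src] += dt * np.exp(-((t - 0.25) / 0.05) ** 2)       # simple Gaussian body-force source
    tau += dt * mu * dx_forward(v)                         # update stress from velocity

print("peak |v| after", nt, "steps:", np.abs(v).max())
```

Production codes extend this pattern to three dimensions, add absorbing boundaries and anelastic attenuation, and distribute the grid across many nodes; an adjoint-tomography workflow then wraps repeated forward and adjoint runs of such solvers inside an iterative model-update loop.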

Turbulence in the heliosphere: The role of current sheets and magnetic reconnection

Project lead: Michael Shay, University of Delaware
Yellowstone allocation: 7.2 million core-hours

Turbulence is a ubiquitous multi-scale phenomenon in the heliosphere, playing a critical role in heating the solar corona, accelerating the solar wind, and mediating the interaction of the solar wind with the Earth's magnetosphere. The dissipation of energy in this turbulence is a grand challenge problem and has been the focus of much recent scrutiny. Turbulence is known to form intense concentrations of energy in current sheets, but how these sheets dissipate through magnetic reconnection, and what role they play in the turbulence, remain unknown. Using a fully kinetic simulation code, we will perform the first systematic study of magnetic reconnection of current sheets in turbulence.