Resources overview

These NCAR supercomputing, data storage, and archive systems support climate modeling, weather forecasting, and other critical research.

Access to these resources is available through several allocation opportunities. In general, researchers who are supported by the National Science Foundation to pursue work in the atmospheric and closely related sciences are eligible to apply.

Yellowstone: High-performance computing resource

The 1.5-petaflops Yellowstone HPC system is an IBM iDataPlex cluster with 72,576 Intel Sandy Bridge processor cores. Yellowstone was deployed to enable dramatic improvements in scientific capability across a broad spectrum of important Earth System science applications, and it serves researchers across the United States and around the world.

See Yellowstone system documentation.


Geyser and Caldera: Analysis and visualization systems

The analysis and visualization resource comprises two systems. Geyser offers large-memory nodes (1 TB each) and is used for large-scale analysis and post-processing tasks, including 3D visualization. Caldera is designed for running distributed-memory parallel applications and for developing and testing general-purpose GPU (GPGPU) code.
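
As an illustration only, the minimal sketch below shows the kind of GPGPU code that might be developed and tested on a system like Caldera. It assumes the PyCUDA package and a CUDA toolchain are available; the kernel, array size, and launch configuration are hypothetical and are not drawn from Caldera documentation.

    import numpy as np
    import pycuda.autoinit                      # initialize a CUDA context on the first GPU
    import pycuda.gpuarray as gpuarray
    from pycuda.compiler import SourceModule

    # Compile a trivial element-wise kernel at run time.
    mod = SourceModule("""
    __global__ void scale(float *data, float factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= factor;
    }
    """)
    scale = mod.get_function("scale")

    n = 1 << 20
    data = gpuarray.to_gpu(np.random.rand(n).astype(np.float32))  # host-to-GPU copy
    scale(data, np.float32(2.0), np.int32(n),
          block=(256, 1, 1), grid=((n + 255) // 256, 1))
    result = data.get()                         # GPU-to-host copy of the scaled array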

See Geyser and Caldera system documentation.


GLADE: Centralized file systems and data storage

This central file and data storage resource consists of file system servers and storage devices with 11 PB of usable capacity. It is shared by the Yellowstone system and the Geyser and Caldera analysis and visualization clusters. The centralized file systems allow scientists to generate model output on the supercomputer, then analyze or visualize it on the other clusters without needing to move data between separate systems.
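
As a hedged sketch of that workflow: because GLADE is mounted on Yellowstone and on the analysis clusters alike, a post-processing script on Geyser or Caldera can open model output in place. The example below assumes Python with the netCDF4 package; the file path and variable name are hypothetical.

    from netCDF4 import Dataset

    # Output written by a Yellowstone batch job is opened in place from an
    # analysis node; no copy between separate systems is required.
    path = "/glade/scratch/username/run01/output.nc"   # hypothetical path
    with Dataset(path) as nc:
        temperature = nc.variables["T"][:]             # hypothetical variable name
        print(temperature.shape, temperature.mean())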

See GLADE system documentation.


Janus

Through a collaboration with the University of Colorado, NCAR shares access to Janus, a 184-teraflops Dell supercomputer. Janus is housed on the CU-Boulder campus and has a high-speed network connection to the computing and data storage systems that CISL manages.

See Janus system documentation.


BlueM

BlueM is the Colorado School of Mines' hybrid IBM Blue Gene/Q and iDataPlex system, housed in NCAR's Mesa Lab colocation facility. NCAR researchers can request access to the system for code porting, testing, and evaluation. This high-performance computing system contains two independent compute partitions with different architectures that share a common 480 TB file system; each partition is optimized for a particular type of parallel application.

See BlueM system documentation.


Pronghorn

This 16-node system features two eight-core Intel Sandy Bridge processors and two Intel Xeon Phi coprocessors per node. Pronghorn was installed in the spring of 2013, and its initial primary use is evaluating the capabilities of Intel's Many Integrated Core (MIC) architecture.

See Pronghorn system documentation.


Erebus

The Antarctic Mesoscale Prediction System (AMPS) uses the Erebus cluster in support of the United States Antarctic Program, Antarctic science, and international Antarctic efforts. AMPS is an experimental, real-time numerical weather prediction capability that provides twice-daily forecasts covering Antarctica, using numerical guidance from the Weather Research and Forecasting (WRF) model.


HPSS: NCAR data archive

The High Performance Storage System (HPSS) encompasses tape libraries at the Mesa Lab and new libraries at the NCAR-Wyoming Supercomputing Center (NWSC) that offer greatly increased capacity to accommodate the ever-larger data sets generated by the Yellowstone user community. The 14.5 PB of data stored in HPSS at the Mesa Lab were migrated transparently to new media at NWSC. The Mesa Lab libraries remain in service to provide offsite replication for key scientific data sets and for disaster recovery data.
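
HPSS installations are commonly driven from scripts through the HSI command-line client; that client and the paths below are assumptions for illustration, not details taken from the text above. A minimal Python sketch:

    import subprocess

    local_file = "output.tar"                 # hypothetical local file
    hpss_file = "/home/username/output.tar"   # hypothetical HPSS path

    # HSI's "put local : remote" stores a local file in the archive;
    # "get local : remote" retrieves it.
    subprocess.run(["hsi", "put", local_file, ":", hpss_file], check=True)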

See HPSS documentation.