Geyser and Caldera


The Geyser and Caldera clusters are specialized data analysis and visualization resources within the data-centric Yellowstone environment.

Users access these resources by logging in to yellowstone.ucar.edu and submitting batch or interactive jobs to the appropriate queue. The queues are described in the table below.

Geyser supports most large-scale data analysis and post-processing tasks, including 3D visualization, for applications that do not use distributed-memory parallelism. Supported applications include NCL, VAPOR, MATLAB, Octave, IDL, CDO, and numerous others.

Caldera is designed to run distributed-memory parallel applications; visualization, data analysis and post-processing tasks; and general-purpose GPU code using the "gpgpu" queue. Because Caldera shares the same node and processor architecture as Yellowstone, it can also be useful for compiling code for Yellowstone or testing Yellowstone codes on a smaller scale.


Hardware

Geyser

16 large-memory nodes

1 TB DDR3-1600 memory per node (1000 GB usable memory per node)
IBM x3850, quad-socket nodes
Four 10-core, 2.4-GHz Intel Xeon E7-4870 (Westmere EX) processors per node
FDR Mellanox InfiniBand, full fat tree
1 NVIDIA GPU per node

Caldera

30 nodes, 16 with GPUs

64 GB DDR3-1600 memory per node (62 GB usable memory per node)
IBM dx360 M4, dual-socket nodes
Two 8-core 2.6-GHz Intel Xeon E5-2670 (Sandy Bridge) processors per node with AVX
FDR Mellanox InfiniBand, full fat tree
2 NVIDIA GPUs per node (16 nodes)

The Caldera cluster initially comprised 16 nodes. Fourteen nodes were added in April 2015 when CISL repurposed the Pronghorn evaluation cluster.


Central file systems

Geyser, Caldera, and Yellowstone all mount the central GLADE file systems. This means you can analyze your data files in place, without sending large amounts of data across a network or creating copies in multiple locations. The GLADE scratch file system has a 90-day retention policy to provide ample time to post-process and analyze your simulation output.

In addition to sharing file systems and login nodes with Yellowstone, Geyser and Caldera provide the same software as Yellowstone, plus additional packages that take advantage of their GPUs. Differences between the Yellowstone and Caldera architecture and the Geyser architecture are addressed in Where to compile.


Using Geyser and Caldera

To use Geyser or Caldera, submit an interactive job or batch job through the LSF scheduling system.
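For example, a simple batch job for the geyser queue might look roughly like the following sketch; the project code, file names, and resource values are placeholders to adapt to your own work.

    #!/bin/bash
    #BSUB -P PROJECT_CODE        # project/allocation code (placeholder)
    #BSUB -q geyser              # Geyser shared queue
    #BSUB -W 2:00                # wall-clock limit of 2 hours
    #BSUB -n 1                   # one core
    #BSUB -J postproc            # job name
    #BSUB -o postproc.%J.out     # stdout file (%J expands to the job ID)
    #BSUB -e postproc.%J.err     # stderr file

    ncl average_fields.ncl       # example post-processing task (placeholder script)

Submit the script with bsub < postproc.sh; LSF reads the #BSUB directives from the script header.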

For details, see the queue information below.

Geyser and Caldera queues

To submit debugging jobs, use the "geyser" or "caldera" queues or the "small" queue on Yellowstone.
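For quick interactive debugging, a shared-node session can be requested along these lines (the project code, wall-clock limit, and core count are placeholders):

    bsub -Is -q caldera -W 0:30 -n 4 -P PROJECT_CODE bash

This starts an interactive shell on a Caldera node for half an hour with four cores; substitute -q geyser or -q small to target the other debugging queues.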

The "intviz" queue is intended only for highly interactive visualization tasks that require frequent keyboard or mouse input and immediate responsiveness from the system. Appropriate uses include working with the graphical interfaces for tools such as MATLAB and VAPOR to visualize data. The queue is monitored to ensure that it is used as intended.

Use the "hpss" queue for data transfers.

You can check queue activity online, or log in and run bsumm on the command line.

Queue   | Wall clock | Job size (# cores)                  | Priority | Queue factor | Notes
geyser  | 24 hours   | 1-40 (up to 80 with hyperthreading) | 2        | 1.0          | Interactive and batch use; shared nodes
intviz  | 4 hours    | 1-40 (up to 80 with hyperthreading) | 2        | 1.0          | Interactive use only; shared nodes
bigmem  | 6 hours    | 1-640                               | 1        | 1.0          | Interactive and batch use, exclusive; jobs charged for all 40 cores on each node used; daytime limit of four nodes
caldera | 24 hours   | 1-16 (up to 32 with hyperthreading) | 2        | 1.0          | Interactive and batch use; shared nodes
gpgpu   | 6 hours    | 1-256                               | 1        | 1.0          | Interactive and batch use, exclusive; jobs charged for all 16 cores on each node used; daytime limit of four nodes
hpss    | 24 hours   | 1                                   | 1        | 0            | For HPSS and external data transfer only
The geyser, intviz, and bigmem queues use nodes in the Geyser cluster.
The caldera and gpgpu queues use nodes in the Caldera cluster.
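As a rough sketch of a GPU job on the gpgpu queue (the project code, executable name, and CUDA module name below are assumptions to adapt to your environment):

    #!/bin/bash
    #BSUB -P PROJECT_CODE
    #BSUB -q gpgpu               # Caldera GPU queue; charged for all 16 cores per node
    #BSUB -W 1:00
    #BSUB -n 16
    #BSUB -J gpu_test
    #BSUB -o gpu_test.%J.out

    module load cuda             # CUDA environment (module name is an assumption)
    ./my_gpu_program             # placeholder GPU executable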

Related documentation