Using data analysis and visualization clusters

Interactive jobs | Batch jobs | Compiling your code

The Geyser and Caldera clusters are being decommissioned at the end of December 2018.
Please use the new Casper system for data analysis and visualization.

Cheyenne users submit jobs to the Casper, Geyser, and Caldera data analysis and visualization (DAV) clusters with the open-source Slurm Workload Manager.

Procedures for starting both interactive jobs and batch jobs are described below. 

Begin by logging in on Cheyenne.

Compiling code

The cluster on which you compile your code depends on where you intend to run it. See Compiling your code below.


Interactive jobs

Using execdav

Run the execdav command to start an interactive job. Invoked without arguments, it starts an interactive shell on the first available DAV node with a default wall-clock time of 6 hours.

The execdav command has these optional arguments (an example invocation follows the list):

  • -a project_code (defaults to value of DAV_PROJECT)
  • -t time (minutes:seconds or hours:minutes:seconds; defaults to 6 hours)
  • -n number_of_cores (defaults to 1 core: -n 1)
  • -m nG
    • Use this if you want to specify how much memory to use per node, from 1 to 1100 gigabytes.
      Example: -m 300G
    • If you do not specify memory per node, the default memory available is 1.87G per core that you request.
  • -C constraint
    • Options include skylake, gpu, gp100, v100, x11.
      Example: -C v100
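
For example, the following invocation requests a two-hour session with 4 cores, 64 GB of memory, and a GPU node. The project code and resource values are illustrative; substitute your own.

execdav -a UABC0001 -t 2:00:00 -n 4 -m 64G -C gpu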

To specify which project code to charge for your CPU time, set the DAV_PROJECT environment variable before invoking execdav, as shown here.

export DAV_PROJECT=UABC0001     (bash users)
setenv DAV_PROJECT UABC0001     (tcsh users)

* * *

Using execgy and execca

The execgy and execca commands execute scripts that start interactive sessions on Geyser and Caldera, respectively. A session started with one of these commands uses a single core and has a wall-clock time of 6 hours. Use execdav (above) if you want to specify different resource needs.

Example with output

cheyenne6:~> execgy
mem =
amount of memory is default
Submitting interactive job to slurm using account SCSG0001 ...
submit cmd is
salloc  -C geyser   -N 1  -n 1 -t 6:00:00 -p dav --account=SCSG0001 srun --pty  ... (shortened for space)
salloc: Pending job allocation 132885
salloc: job 132885 queued and waiting for resources
salloc: job 132885 has been allocated resources
salloc: Granted job allocation 132885
salloc: Waiting for resource configuration
salloc: Nodes geyser10 are ready for job
username@geyser10:~>

To end the session, run exit.

Run execgy -help or execca -help for additional information.

* * *

Using exechpss

The exechpss command is used to initiate HSI and HTAR file transfers. See examples in Managing files with HSI and Using HTAR to transfer files.


Batch jobs

Prepare a batch script by following one of the examples below. The system does not import your Cheyenne environment, so be sure your script loads the software modules that you will need to run the job.

Basic Slurm commands

When your script is ready, run sbatch to submit the job.

sbatch script_name

To check on your job's progress, run squeue.

squeue -u $USER

To get a detailed status report, run scontrol show job followed by the job number.

scontrol show job nnn

To kill a job, run scancel with the job number.

scancel nnn
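
Put together, a typical sequence might look like the following, where the script name mpi_job.sh and the job number 132900 are illustrative. (sbatch prints the job number when it accepts the job.)

sbatch mpi_job.sh
squeue -u $USER
scontrol show job 132900
scancel 132900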

Setting constraints and reserving resources

The example batch scripts below can be customized further by setting constraints on which types of nodes the job can use or by reserving certain resources. 

Examples of node constraints

#SBATCH -C geyser
#SBATCH -C caldera
#SBATCH -C x11
#SBATCH -C skylake
#SBATCH -C v100

Use the --gres option to reserve specific resources, such as GPUs.

Example reserving two V100 GPUs

#SBATCH --gres=gpu:v100:2

Constraints and reservations as shown here produce different behavior.

  • If you constrain your job to GPU nodes, you are simply asking Slurm to place the job on a node that has GPUs.
  • If you reserve a number of GPUs, your job will have exclusive access to those GPUs.

In general, it is best to minimize resource constraints and reservations when possible to decrease the length of time your job waits in the queue.

Wall-clock

The wall-clock limit on these clusters is 24 hours.

Specify the time your job needs as in the examples below, which use the hours:minutes:seconds format; it can be shortened to minutes:seconds.
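
For example, these directives request 12 hours and 30 minutes of wall-clock time, respectively (the values are illustrative):

#SBATCH -t 12:00:00
#SBATCH -t 30:00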

Script examples

The examples below show how to create a script for running an MPI job.

For tcsh users

Insert your own project code where indicated and customize other settings as needed for your job.

#!/bin/tcsh
#SBATCH -J job_name
#SBATCH -n 8
#SBATCH --ntasks-per-node=4
#SBATCH --mem=8G
#SBATCH -t 00:60:00
#SBATCH -A project_code
#SBATCH -p dav
#SBATCH -e job_name.err.%J
#SBATCH -o job_name.out.%J

setenv TMPDIR /glade/scratch/$USER/temp
mkdir -p $TMPDIR

module purge
module load gnu ncarenv ncarcompilers
module load openmpi

srun ./mpihello

For bash users

Insert your own project code where indicated and customize other settings as needed for your job.

#!/bin/bash -l
#SBATCH -J job_name
#SBATCH -n 8
#SBATCH --ntasks-per-node=4
#SBATCH --mem=8G
#SBATCH -t 00:60:00
#SBATCH -A project_code
#SBATCH -p dav
#SBATCH -e job_name.err.%J
#SBATCH -o job_name.out.%J

export TMPDIR=/glade/scratch/$USER/temp
mkdir -p $TMPDIR

module purge
module load gnu ncarenv ncarcompilers
module load openmpi

srun ./mpihello
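
To submit either example, save it to a file and pass the file name to sbatch. The file name below is hypothetical; the job's standard output and error go to the files named by the -o and -e directives.

sbatch mpi_job.sh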

Compiling your code

Compile your code on the cluster where you intend to run it.

CISL recommends using the default Intel, GNU, or PGI compilers for parallel programs. Once you are on an appropriate node, do the following (a sample command sequence appears after the list):

  1. Load the compiler.
  2. Load the openmpi module if you plan to use MPI.
  3. Compile your code as you usually do. 
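
As a minimal sketch, assuming the GNU compiler and Open MPI modules used in the batch script examples above and a hypothetical MPI source file mpihello.c, the sequence might look like this:

module purge
module load gnu ncarenv ncarcompilers
module load openmpi
mpicc -o mpihello mpihello.c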

Serial programs can use any compiler.

 

* Some Caldera nodes use the hostname "pronghorn." Compiling on caldera and pronghorn hosts will generate equivalent executables.

Related training courses

NCAR courses