Quick start: Cheyenne

Logging in | Environment | Compiling | Debugging | Cheyenne queues | Submitting jobs

Once you have an account and the necessary software, you can log in and run jobs on the Cheyenne supercomputer.

Logging in also gives you access to the GLADE file spaces, which are shared with other CISL systems.

If you are a Yellowstone user who is migrating work to Cheyenne, review the notice on our GLADE file spaces page about the transition in scratch spaces. You can already access other GLADE spaces from either Cheyenne or Yellowstone without transferring files.

Users who need to run analysis and visualization jobs on the Geyser or Caldera clusters can log in to Yellowstone to submit their jobs. There is no need to transfer output files from Cheyenne for this since all of the clusters mount the same GLADE file systems.


Logging in

To log in to the Cheyenne system from your terminal, use Secure Shell (ssh) as shown here:

ssh -X -l username cheyenne.ucar.edu

You can use this shorter command if your Cheyenne username is the same as your username on your local computer:

ssh -X cheyenne.ucar.edu

After entering your username, you will be prompted to enter a "Token_Response." Use your YubiKey token or CRYPTOCard keypad to generate that response.

Alternatives

You can also use an SSH client to log in. Some of those are described here.
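If you log in often, you can also add a host alias to your local OpenSSH client configuration so your username and X11 forwarding are applied automatically. A minimal sketch (the alias name and username are placeholders; this is standard OpenSSH configuration, not a Cheyenne-specific requirement):

Host cheyenne
    HostName cheyenne.ucar.edu
    User username
    ForwardX11 yes

With this entry in your ~/.ssh/config file, running ssh cheyenne is equivalent to the longer commands shown above.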


Environment

The Cheyenne HPC system uses a Linux operating system, PBS Pro for scheduling jobs, and supports widely used shells on its login and compute nodes. Users also have several compiler choices. See Compiling below.

Operating system: SUSE Linux. Most Yellowstone users will notice few differences from Red Hat Enterprise Linux.

Scheduler: Altair PBS Pro. See Submitting jobs below for some basic information or this detailed documentation about using PBS Pro and running jobs.

Shells: The default login shell for new Cheyenne users is bash, unless they have active Yellowstone accounts with tcsh as the default. To change your default shell, log in to the CISL Systems Accounting Manager (SAM); it may take several hours for a change to take effect. You can confirm which shell is set as your default by entering echo $SHELL on your Cheyenne command line.
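For example, to see your current default shell from the command line:

echo $SHELL    # prints the path of your default login shell, e.g. /bin/bash or /bin/tcsh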

Environment modules

The Cheyenne module utility enables users to easily load and unload compilers and compatible software packages as needed, and to create multiple customized environments for various tasks.

Here are some of the most commonly used commands. (See the Environment modules page for more details.)

module av - Show which modules are available for use with the current compiler.

module help - List switches, subcommands, and arguments for the module utility. Specify a modulefile by name for help with an individual module.

module help netcdf

module list - List the modules that are loaded.

module load - Load the default version of a software package, or load a specified version.

module load modulefile_name
module load modulefile_name/n.n.n

module spider - List all modules that exist on the system.

module swap - Unload one module and load a different one. Example:

module swap netcdf pnetcdf

module unload - Unload the specified software package.

module unload modulefile_name
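For example, a typical session might load a compiler and a compatible I/O library before building a program. This is an illustrative sketch; run module av to see which modules and versions are actually installed:

module load intel            # Intel compiler (loaded by default on Cheyenne)
module load netcdf           # NetCDF build that matches the loaded compiler
module list                  # confirm what is now loaded
module swap netcdf pnetcdf   # switch to Parallel-NetCDF if your code needs it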

Compiling

Cheyenne users have access to Intel, PGI, and GNU compilers. The Intel compiler module is loaded by default.

After loading the compiler module that you want to use, identify and run the appropriate compilation wrapper command from the table below. (If your script already includes one of the following generic MPI commands, there is no need to change it: mpif90, mpif77, ftn; mpicc, cc; mpiCC and CC.)

Also consider using the compiler's diagnostic flags to identify potential problems.

Compiler         Language  Commands for serial programs         Commands for programs using MPI  Flags to enable OpenMP (serial and MPI)
Intel (default)  Fortran   ifort foo.f90                        mpif90 foo.f90                   -qopenmp
                 C         icc foo.c                            mpicc foo.c                      -qopenmp
                 C++       icpc foo.C                           mpicxx foo.C                     -qopenmp
PGI              Fortran   pgfortran (or pgf90, pgf95) foo.f90  mpif90 foo.f90                   -mp
                 C         pgcc foo.c                           mpicc foo.c                      -mp
                 C++       pgcpp (or pgCC) foo.C                mpicxx foo.C                     -mp
GNU              Fortran   gfortran foo.f90                     mpif90 foo.f90                   -fopenmp
                 C         gcc foo.c                            mpicc foo.c                      -fopenmp
                 C++       g++ foo.C                            mpicxx foo.C                     -fopenmp

Include these flags for best performance when you use the Intel compiler:
-march=corei7 -axAVX
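For example, with the default Intel compiler loaded, the following commands build a serial program, an MPI program, and a hybrid MPI/OpenMP program. The source file foo.f90 and the output names are placeholders:

ifort -march=corei7 -axAVX -o foo_serial foo.f90
mpif90 -march=corei7 -axAVX -o foo_mpi foo.f90
mpif90 -qopenmp -march=corei7 -axAVX -o foo_hybrid foo.f90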

Debugging

CISL provides the Allinea Forge tools, DDT and MAP, for debugging, profiling, and optimizing code in the Cheyenne environment.

Performance Reports is another Allinea tool for Cheyenne users. It summarizes the performance of HPC application runs.

See Running Allinea DDT, MAP and PR jobs.
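Before running a code through DDT or MAP, it generally helps to compile with debugging symbols so the tools can relate their output to your source lines. A minimal sketch using the Intel MPI wrapper (these are standard compiler flags, not Cheyenne-specific requirements):

mpif90 -g -O0 -o foo_debug foo.f90     # full symbols, no optimization, for debugging with DDT
mpif90 -g -O2 -o foo_profile foo.f90   # keep optimization but retain symbols for profiling with MAP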


Cheyenne queues

Most of the Cheyenne batch queues are for exclusive use, and jobs are charged for all 36 cores on each node that is used. Jobs in the shared-use "share" queue are charged only for the cores they use.

The "regular" queue, which has a 12-hour wall-clock limit, meets most users' needs for running batch jobs.

See the table on this page for information about other queues.
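For example, you select a queue with the -q option, either in your job script or on the qsub command line:

#PBS -q regular   # exclusive use; charged for all 36 cores on each node
#PBS -q share     # shared use; charged only for the cores requested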


Submitting jobs

Schedule your jobs to run on Cheyenne by submitting them through the PBS Pro workload management system.

To submit a batch job, use the qsub command followed by the name of your PBS batch script file. See this page for job script examples. The page also includes a comparison chart to help Yellowstone users adapt their Platform LSF scripts for use in PBS Pro.

qsub script_name

To start an interactive job, use the qsub command with the necessary options but no script file.

qsub -I -l select=1:ncpus=36:mpiprocs=36 -l walltime=01:00:00 -q small -A project_code
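The options in that request have the following meanings (project_code is a placeholder for your own project accounting code):

-I                                 Run interactively instead of from a script file
-l select=1:ncpus=36:mpiprocs=36   Request one 36-core node with 36 MPI ranks
-l walltime=01:00:00               Set a one-hour wall-clock limit
-q small                           Name the queue in which to run
-A project_code                    Charge the job to the specified project code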

More detailed PBS Pro documentation

Submitting jobs with PBS - Scripts that you can adapt for your own jobs.

Commonly used PBS job commands and similar LSF commands

Task         PBS Pro     Platform LSF
Delete job   qdel        bkill
Get status   qstat       bjobs
             qstat -f    bjobs -l
             qstat -xf   bhist -l
List queues  qstat -Q    bqueues -l
Submit job   qsub        bsub

Examples

qsub script_name      # Submit the specified batch script
qdel jobID            # Delete the specified job
qstat -u $USER        # Show the status of your own jobs
qstat jobID           # Show the status of an individual job
qstat -Q queue_name   # Show the status of the specified queue

Comparing basic PBS and LSF MPI scripts

PBS Pro script:

#!/bin/tcsh
#PBS -N job_name
#PBS -A project_code
#PBS -l walltime=00:05:00
#PBS -l select=2:ncpus=32:mpiprocs=32
#PBS -j oe
#PBS -q queue_name

mpiexec_mpt -n 64 ./hw_mpi.exe

Equivalent Platform LSF script:

#!/bin/tcsh
#BSUB -J job_name
#BSUB -P project_code
#BSUB -W 00:05
#BSUB -n 64
#BSUB -R "span[ptile=16]"
#BSUB -o job_name.%J.out
#BSUB -e job_name.%J.err
#BSUB -q queue_name

mpirun.lsf ./myjob.exe
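In the PBS script, select=2:ncpus=32:mpiprocs=32 requests two 32-core chunks, which supplies the 64 MPI ranks launched by mpiexec_mpt -n 64; the LSF script requests the same 64 tasks with -n 64, placed 16 per node by span[ptile=16].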