Quick start

Logging in | Storage resources | File transfers | Compilers | Libraries
Submitting and monitoring jobs | Checking allocation status | Debugging

Once you have an account and the other basics needed to log in to yellowstone.ucar.edu, you can run jobs on the Yellowstone, Geyser, and Caldera clusters.

Links below provide more detailed information that you may want to review.

Logging in and the Yellowstone environment

To log in to Yellowstone, you need Secure Shell (SSH) on your local machine. If you do not already have it, see Secure Shell access for information.

If your UCAR username and your local computer username are the same, log in like this:

ssh -X yellowstone.ucar.edu

Otherwise, include your username to log in like this:

ssh -X -l username yellowstone.ucar.edu

Next, you will need to enter a "Token_Response" when prompted, using your YubiKey token or CRYPTOCard keypad.


The default shell is tcsh. You can also use the bash and ksh shells. To run a different shell during a session, just enter the shell name after you log in. You can change your default shell through the CISL Systems Accounting Manager (SAM).

Intel is the default compiler.

CISL uses environment modules to help you configure your environment. To see which modules are loaded at any time, use the module list command. For a list of modules that are compatible with those that are loaded, execute the module av command. (The module spider command gives you a complete list of modules.) Then, use module load, module swap, and other commands as needed to customize your environment.
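As a sketch, a typical session for inspecting and adjusting your loaded modules might look like this (the module names shown are illustrative):

```shell
# Show the modules currently loaded in your environment
module list

# Show modules that are compatible with those already loaded
module av

# Load a library module, then swap one compiler environment for another
# (module names are illustrative)
module load netcdf
module swap intel gnu
```

These commands are available only after logging in to the cluster, where the environment modules system is initialized for your shell.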

Storage resources

GLADE file spaces

Each individual with a user account has home, scratch, and work file spaces in the CISL Globally Accessible Data Environment (GLADE), the centralized file service that is common to the Yellowstone, Geyser, and Caldera resources. Dedicated project spaces are available through our allocations process to support longer-term disk needs that are not easily accommodated by the scratch or work spaces.

File space                   Quota    Backup   Purge
User home directory          10 GB    Yes      Not purged
Temporary computational
space (scratch)              10 TB    No       See GLADE
User work space              512 GB   No       Not purged
Project space allocations
(via allocation request)     N/A      No       Not purged

To check your space usage on GLADE, enter the gladequota command on your command line:

gladequota
HPSS long-term storage

For long-term data storage, you can use our High Performance Storage System (HPSS).

See the HPSS Quick start page for documentation.

File transfers and sharing data

Once your files are in place on GLADE, they are accessible when you log in to yellowstone.ucar.edu. Because the files are stored centrally, there is no need to transfer them between the Yellowstone supercomputer and the Geyser and Caldera clusters.

To transfer files between the GLADE file spaces and non-NCAR systems, we recommend using Globus. For transfers to and from your local computer, use Globus Connect Personal. SCP also works for transferring small files.

See sharing data for how to give colleagues access to your data if they are not Yellowstone users.
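For small transfers from your local machine, SCP works as noted above; a hedged example, in which the username, file names, and destination path are placeholders:

```shell
# Copy a local file to your GLADE home directory
# (username, data.nc, and the path are placeholders)
scp data.nc username@yellowstone.ucar.edu:/glade/u/home/username/

# Copy a file from GLADE back to the current local directory
scp username@yellowstone.ucar.edu:/glade/u/home/username/data.nc .
```

For large or recurring transfers, Globus remains the recommended tool.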


Compilers

The Intel compiler module is loaded by default when you log in. CISL recommends using the wrapper commands shown in this table for compiling code:

Intel (default compiler)

Language   For serial       For programs using MPI   Flag to enable OpenMP (serial and MPI)
Fortran    ifort foo.f90    mpif90 foo.f90           -openmp
C          icc foo.c        mpicc foo.c              -openmp
C++        icpc foo.C       mpiCC foo.C              -openmp


The PGI, PathScale, GNU, and NVIDIA compilers can also be used. The wrapper commands for each are shown in our Compiling code documentation, along with information on how to change from one compiler to another.
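To make the wrappers concrete, here is a sketch of building the same Fortran source three ways (foo.f90 and the output names are placeholders):

```shell
# Serial build with the Intel Fortran compiler
ifort -o foo_serial foo.f90

# MPI build using the mpif90 wrapper, which invokes ifort underneath
mpif90 -o foo_mpi foo.f90

# OpenMP-enabled serial build using Intel's -openmp flag
ifort -openmp -o foo_omp foo.f90
```

The same pattern applies to C and C++ with icc/mpicc and icpc/mpiCC, respectively.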


Libraries

Numerous file format and mathematical libraries are provided for use on the Yellowstone, Geyser, and Caldera clusters.

To identify the libraries that are available to you, run the module av command. When you determine which library you want to load, specify it with module load library_name, as in this example:

module load netcdf

See Libraries for more information.

Submitting and monitoring jobs

To run a job on the Yellowstone, Geyser, or Caldera clusters, schedule the job through Platform LSF. Take care to select the most appropriate queue for each job and to provide accurate wall-clock times in your job script. This will help us fit your job into the earliest possible run opportunity.

We recommend passing options to bsub in a batch script file rather than as numerous individual command-line arguments. See our job-script documentation for sample batch scripts.

Keep the system’s usable memory in mind and configure your job script to avoid oversubscribing the cores. Also consider spreading memory-intensive jobs across more nodes than you would request for other jobs, and using fewer than the full number of cores on each node. This makes more memory available to run your processes.
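As a minimal, hedged sketch of an LSF batch script, the following shows the common bsub directives; the project code, queue name, resource counts, and executable name are placeholders you would replace with your own values:

```shell
#!/bin/bash
#BSUB -P PROJECT0001          # project/account code (placeholder)
#BSUB -J myjob                # job name
#BSUB -q regular              # queue name (placeholder)
#BSUB -n 32                   # total number of tasks
#BSUB -W 0:30                 # wall-clock limit (HH:MM)
#BSUB -o myjob.%J.out         # stdout file (%J expands to the job ID)
#BSUB -e myjob.%J.err         # stderr file

# Launch the MPI executable (placeholder name)
mpirun.lsf ./foo_mpi
```

Submit the script with bsub < myjob.sh, then monitor it with bjobs as described below.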


Monitoring jobs

To get information about your unfinished jobs, use the LSF bjobs command without arguments.

To get information regarding unfinished jobs for a user group, add -u and the group name; for a report on all users' unfinished jobs, add -u all instead.

bjobs -u all

For information about your own unfinished jobs in a queue, use -q and the queue name.

bjobs -q queue_name

Checking allocation status

You can track your usage in the CISL Systems Accounting Manager (SAM) if you have a user account and authentication token or a UCAS password.

SAM reports show usage data and charges against allocations for both computing and storage. Charges for computing are calculated and updated daily; storage charges are updated weekly.


Debugging

Load the basic debug module to set the -g compiler option before compiling your code.

module load debug

The module will include other options in addition to -g as development continues.

These additional debugging packages and tools are available for users of the Yellowstone system. Follow the links for documentation.

  • The Allinea DDT, MAP, and Performance Reports tools for debugging, optimizing, and reporting on code performance.
  • The TotalView debugger from Rogue Wave Software for debugging both parallel and serial code. The Yellowstone license for TotalView expires on August 6, 2016, and will not be renewed.

Additional debugging tools are provided with the compilers that are installed on our supercomputing and analysis systems.