Quick start

Logging in | Storage resources | File transfers | Compilers | Libraries
Submitting and monitoring jobs | Checking allocation status | Debugging

The Yellowstone HPC system and the Geyser and Caldera analysis and visualization clusters share login nodes. Once you log in to yellowstone.ucar.edu, you will be able to schedule jobs to run through any queue or start interactive sessions on the Geyser or Caldera nodes.

Here are the basics you need to know to get started.


Logging in and the Yellowstone environment

To log in, use ssh as follows:

ssh -X -l username yellowstone.ucar.edu

You can use this shorter command if your UCAR username is the same as the username on your local computer:

ssh -X yellowstone.ucar.edu

After entering your username, you will be prompted to enter a "Token_Response"; use your YubiKey token or CRYPTOCard keypad to generate it.

Some Mac users and others encounter errors when working with X11 forwarding. One alternative is to use the -Y option instead of -X to enable trusted X11 forwarding, which treats the Yellowstone system as a "trusted" rather than an "untrusted" client. Also see Troubleshooting tips.

Environment

The default shell is tcsh. You can also use the bash and ksh shells. To run a different shell during a session, just enter the shell name after you log in. You can change your default shell through the CISL Systems Accounting Manager (SAM).
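
For example, to switch to bash for the current session, enter:

bash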

Intel is the default compiler.

CISL uses environment modules to help you configure your environment. To see which modules are loaded at any time, use the module list command. For a list of compatible modules available to use at that point, execute the module av command. (The module spider command gives you a complete list of modules.) Then, use module load, module swap, and other commands as needed to customize your environment.
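
For example, this sequence (the module names are illustrative) lists your loaded modules, shows which modules are available, loads the NetCDF library module, and swaps the default Intel compiler module for the GNU module:

module list
module av
module load netcdf
module swap intel gnu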


Storage resources

GLADE file spaces

Each individual with a user account has home, scratch, and work file spaces in the CISL Globally Accessible Data Environment (GLADE), the centralized file service that is common to the Yellowstone, Geyser, and Caldera resources. Dedicated project spaces are available through our allocations process to support longer-term disk needs that are not easily accommodated by the scratch or work spaces.

File space                          Quota   Backup  Purge policy  Description
Home      /glade/u/home/username    10 GB   Yes     Not purged    User home directory
Scratch   /glade/scratch/username   10 TB   No      See GLADE     Temporary computational space
Work      /glade/p/work/username    512 GB  No      Not purged    User work space
Project   /glade/p/project_code     N/A     No      Not purged    Project space allocations
          (via allocation request)

To check your space usage on GLADE, enter the following on your command line:

gladequota

HPSS long-term storage

For long-term data storage, you can use our High Performance Storage System (HPSS).
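
For example, files are commonly written to HPSS with the HSI utility; this is a minimal sketch, and the file name is a placeholder:

hsi put mydata.tar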

See the HPSS Quick start page for more detailed documentation.


File transfers and sharing data

Once your files are in place on GLADE, they are accessible when you log in to yellowstone.ucar.edu. Because they are stored centrally, there is no need to transfer them between the Yellowstone supercomputer and the Geyser and Caldera clusters.

To transfer files between the GLADE file spaces and non-NCAR systems, we recommend using Globus. For transfers to and from your local computer, use Globus Connect Personal. SCP also works for transferring small files.
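
For example, a small file can be copied from your local computer to your GLADE scratch space with SCP (the file name is a placeholder):

scp mydata.nc username@yellowstone.ucar.edu:/glade/scratch/username/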

See sharing data for how to give colleagues access to your data if they are not Yellowstone users.


Compilers

The Intel compiler module is loaded by default when you log in. CISL recommends using the wrapper commands shown in the following table to compile code:

Intel (default compiler)  For serial programs  For programs using MPI  Flag to enable OpenMP (for serial and MPI)
Fortran                   ifort foo.f90        mpif90 foo.f90          -openmp
C                         icc foo.c            mpicc foo.c             -openmp
C++                       icpc foo.C           mpiCC foo.C             -openmp

PGI, PathScale, GNU, and NVIDIA compilers can also be used. The wrapper commands for each are shown in our Compiling code documentation, along with information on changing compiler modules.
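
For example, to build an MPI Fortran program with the default Intel wrappers and OpenMP enabled (file and executable names are placeholders):

mpif90 -openmp foo.f90 -o foo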


Libraries

Numerous file format and mathematical libraries are provided for use on the Yellowstone, Geyser, and Caldera clusters.

To identify the libraries that are available to you, run the module av command. When you have determined which library you want, load it with module load library_name, as in this example:

module load netcdf

See Libraries for more information.


Submitting and monitoring jobs

To run a job on the Yellowstone, Geyser, or Caldera clusters, schedule the job through Platform LSF. When submitting, take care to select the most appropriate queue for each job and to provide accurate wall-clock times in your job script. This will help us fit your job into the earliest possible run opportunity.

We recommend passing options to bsub in a batch script file rather than specifying numerous options individually on the command line.
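
Here is a minimal sketch of such a script; the project code, queue, resource values, and program name are placeholders, and the sample batch scripts in Running jobs show site-specific details:

#!/bin/tcsh
#BSUB -P project_code          # project code to charge
#BSUB -J myjob                 # job name
#BSUB -q small                 # queue
#BSUB -W 00:30                 # wall-clock limit (HH:MM)
#BSUB -n 32                    # number of MPI tasks
#BSUB -o myjob.%J.out          # standard output file (%J is the job ID)
#BSUB -e myjob.%J.err          # standard error file

mpirun.lsf ./foo               # launch the MPI program

Submit the script with bsub < script_name.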

Keep the system’s usable memory in mind and configure your job script to avoid oversubscribing the cores. Also consider spreading memory-intensive jobs across more nodes than you would request for other jobs, and use fewer than the full number of cores on each node. This makes more memory available to run your processes.
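
For example, a resource directive like this one (the value is illustrative) places only eight tasks on each node rather than filling every core, leaving more memory available to each process:

#BSUB -R "span[ptile=8]"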

See Running jobs for sample batch scripts. Also see:

Monitoring jobs

To get information about your unfinished jobs, use the LSF bjobs command without arguments.

To get information regarding unfinished jobs for a user group, add -u and the group name; for a report on all users' unfinished jobs, add -u all instead.
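
bjobs -u group_name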

bjobs -u all

For information about your own unfinished jobs in a specific queue, use -q and the queue name.

bjobs -q queue_name

Checking allocation status

You can track your usage in the CISL Systems Accounting Manager (SAM) if you have a user account and a YubiKey authentication token or CRYPTOCard keypad.

SAM reports show usage data and charges against allocations for both computing and storage. Charges for computing are calculated and updated daily; storage charges are updated weekly.


Debugging

Load the debug module to set the -g option before compiling your code.

module load debug

As development continues, this module will set other options in addition to -g.

To use the TotalView debugger, load both the debug and TotalView modules into your environment before compiling.

module load debug totalview

Submit TotalView debugging jobs through the "small," "geyser," or "caldera" queues.

See Debugging code for more detailed documentation.