The Yellowstone HPC system and the Geyser and Caldera analysis and visualization clusters share login nodes. Once you log in to yellowstone.ucar.edu, you will be able to schedule jobs to run through any queue or start interactive sessions on the Geyser or Caldera nodes.
Here are the basics you need to know to get started.
Logging in and the Yellowstone environment
To log in, use ssh as follows:
ssh -X -l username yellowstone.ucar.edu
If your UCAR username is the same as the username on the local computer you are using, you can shorten the login command to:
ssh -X yellowstone.ucar.edu
After entering your username, you will be prompted to enter a "Token_Response"; use your YubiKey token or CRYPTOCard keypad to generate it.
Some Mac users encounter errors with some applications when using X11 forwarding. An alternative is to use the -Y option instead of -X, which treats the Yellowstone system as a "trusted" rather than an "untrusted" X11 client.
The default shell is tcsh. These others are available for your use if you prefer: csh, bash, and ksh. To change your shell for a session, just enter the shell name after you log in. You can change your default shell through the CISL Systems Accounting Manager (SAM).
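For example, to switch to bash for the current session, just enter:
bash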
Intel is the default compiler.
CISL uses environment modules to help you configure your environment. To see which modules are loaded at any time, use the module list command. For a list of compatible modules available to use at that point, execute the module av command. (The module spider command gives you a complete list of modules.) Then, use module load, module swap, and other commands as needed to customize your environment.
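For example (the compiler module names here are illustrative; use module av to see what is actually installed):
module list              # show currently loaded modules
module av                # list modules compatible with those already loaded
module spider            # list all modules on the system
module load netcdf       # load a library module
module swap intel gnu    # swap one compiler module for another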
GLADE file spaces
Each individual with a user account has home, scratch, and work file spaces in the CISL Globally Accessible Data Environment (GLADE), the centralized file service that is common to the Yellowstone, Geyser, and Caldera resources. Dedicated project spaces are available through our allocations process to support longer-term disk needs that are not easily accommodated by the scratch or work spaces.
The GLADE file spaces are as follows:
- User home directory
- Temporary computational space (scratch)
- User work space
- Project space allocations (via allocation request)
To check your space usage on GLADE, enter the following on your command line:
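gladequota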
Once your files are in place on GLADE, they are accessible when you log in to yellowstone.ucar.edu. Because they are stored centrally, there is no need to transfer them between the Yellowstone supercomputer and the Geyser and Caldera clusters.
Transfers between non-NCAR systems and GLADE
To transfer files between non-NCAR systems and the GLADE file spaces, we recommend using Globus Online. For transfers to and from your local computer, use Globus Connect. For transferring small files, SCP also can be used.
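For example, to copy a file from your local computer to your GLADE scratch space with SCP (the file name is illustrative; replace username with your own):
scp mydata.nc username@yellowstone.ucar.edu:/glade/scratch/username/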
HPSS long-term storage
For long-term data storage, you will be able to use our High Performance Storage System (HPSS).
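Access is typically through the HSI and HTAR utilities. A brief sketch, with illustrative file and directory names:
hsi put mydata.tar          # copy a file to HPSS
hsi get mydata.tar          # retrieve a file from HPSS
htar -cvf mydir.tar mydir   # write a directory to HPSS as a tar archive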
See the HPSS Quick start page for more detailed documentation.
Compiling code
The Intel compiler module is loaded by default when you log in. CISL recommends using the provided wrapper commands rather than invoking the compilers directly; see the table in our Compiling code documentation for the Intel wrapper commands and the flag that enables OpenMP (for both serial and MPI programs).
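As a sketch, a typical compile with the MPI wrappers looks like this (mpif90 and mpicc are the usual wrapper names, and -openmp is the Intel OpenMP flag of this era; confirm both on the Compiling code page):
mpif90 -o model model.f90          # MPI Fortran program
mpicc -openmp -o hybrid hybrid.c   # hybrid MPI/OpenMP C program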
PGI, PathScale, and GNU compilers also can be used. The wrapper commands for each are shown in our Compiling code documentation along with information on changing compiler modules.
Numerous file format and mathematical libraries are provided for use on the Yellowstone, Geyser, and Caldera clusters. To identify the libraries available to you, execute the module av command. When you determine which library you want to load, specify it with module load library_name, as in this example:
module load netcdf
See Libraries for more information.
Submitting and monitoring jobs
All jobs to run on the Yellowstone, Geyser, and Caldera clusters must be scheduled through Platform LSF. When submitting, take care to select the most appropriate queue for each job and to provide accurate wall-clock times in your job script. This will help us fit your job into the earliest possible run opportunity.
We recommend passing options to bsub in a batch script file rather than specifying them individually on the command line.
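Here is a minimal sketch of such a script (the project code, queue, and executable are placeholders; mpirun.lsf is the usual MPI launcher on Yellowstone):
#!/bin/bash
#BSUB -P P12345678          # project code (placeholder)
#BSUB -J myjob              # job name
#BSUB -q small              # queue
#BSUB -W 0:30               # wall-clock limit (HH:MM)
#BSUB -n 32                 # total number of MPI tasks
#BSUB -R "span[ptile=16]"   # place 16 tasks on each node
#BSUB -o myjob.%J.out       # stdout file (%J is the job ID)
#BSUB -e myjob.%J.err       # stderr file

mpirun.lsf ./my_executable
Submit the script with bsub < myscript.lsf; the input redirection is required for LSF to read the #BSUB directives.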
Keep the system's usable memory in mind and configure your job script so that you do not oversubscribe the nodes' memory. Also consider spreading memory-intensive jobs across more nodes than you would request for other jobs, using fewer than the full number of cores on each node; this leaves more memory available for each of your processes.
Notable differences from batch scripts used on the Bluefire system include the following:
- Yellowstone nodes have 16 cores rather than 32. We recommend running no more than 16 tasks per node. To see if your code benefits from the hyper-threading support on Yellowstone, however, you can experiment with up to 32 tasks per node.
- The wall-clock limit for most Yellowstone queues is 12 hours rather than six. The queue structure also is quite different. See Queues and charges.
- New project codes are alphanumeric rather than 8-digit numbers. Example: UUOM00001
- If your Bluefire project or projects were migrated to Yellowstone, your alphanumeric project code is now "P" followed by the 8-digit project number – for example, P12345678. The P is required.
See Running jobs for additional sample batch scripts.
To get information about your own unfinished jobs, use the LSF bjobs command without arguments.
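bjobs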
To get information regarding unfinished jobs for a user group, add -u and the group name; for a report on all users' unfinished jobs, add -u all instead.
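bjobs -u group_name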
bjobs -u all
For information about your own unfinished jobs in a specific queue, use -q and the queue name.
bjobs -q queue_name
Checking allocation status
You can track your usage in the CISL Systems Accounting Manager (SAM) if you have a user account and a YubiKey authentication token or CRYPTOCard keypad.
SAM reports show usage data and charges against allocations for both computing and storage. Charges for computing are calculated and updated daily; storage charges are updated weekly.
Debugging
Load the debug module to set the -g option before compiling your code.
module load debug
This module will include other options in addition to -g as development continues.
To use the TotalView debugger, load both the debug and TotalView modules into your environment before compiling.
module load debug totalview
Submit TotalView debugging jobs through the "small," "geyser," or "caldera" queues.
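For example, you might start an interactive debugging session like this (a sketch; -Is requests an interactive, terminal-attached LSF job, and the project code is a placeholder):
bsub -Is -q geyser -W 1:00 -n 1 -P P12345678 bash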
See Debugging code for more detailed documentation.