New to NCAR and the CISL computing environment?
Here are some of the essentials:
To use our high-performance computing (HPC), data analysis, and visualization resources, you need three things: a project with an allocation for the resources you plan to use, a user account, and a device we call a “token,” which generates one-time passwords. (We assume you have some UNIX/Linux skills and a clear idea of your scientific and computing objectives.)
What you need
A project and allocation. An allocation defines the amount of resources you can use on a system like our Yellowstone HPC cluster. Think of an allocation as a resource budget for your research project. You can request an allocation with yourself as project lead or have a project lead add you to a project. Several types of allocation opportunities are described in our Allocations section.
A project number or code. Your project is identified by an alphanumeric code. Be sure to use the appropriate project code when submitting batch jobs, storing files, and doing other work. This will help you and your colleagues keep track of how resources are being used.
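For example, here is a hedged sketch of charging work to a project when submitting an LSF batch job; the project code P12345678 and script name are hypothetical:

```shell
# Charge a batch job to your project by passing the project code
# to the LSF scheduler with -P (P12345678 is a placeholder).
bsub -P P12345678 < my_job_script.sh
```

Using the same code consistently in job submissions and file organization makes it easier to see how each project's allocation is being consumed.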
A user account. To use CISL resources, you must have your own individual user account. Individuals who need user accounts can be identified when an allocation is requested; or the project lead can request accounts for additional users (graduate students or collaborators, for example) after a project has been awarded. See Access and user accounts.
A token. When your account is created, CISL sends you a YubiKey token and provides a personal identification number (PIN). You need your username, your PIN, and your token to log in to CISL systems. See Authentication and security to learn about using your token to log in and about some important security practices.
Once you have your user account and token, your username and token typically do not change even if you become associated with different projects and allocations over time.
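As a sketch, a login might look like the following; the hostname shown is for the Yellowstone system, and at the password prompt you would enter your PIN followed immediately by the one-time code generated by your token:

```shell
# Log in with your CISL username (replace "username" with yours).
# When prompted for a password, type your PIN and then the
# one-time code from your YubiKey token, with no space between.
ssh -l username yellowstone.ucar.edu
```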
Resources we provide
We provide world-class supercomputing, analysis, and visualization resources as well as software, data, and consulting services to support the geosciences community. As you will see, all are closely interconnected.
Before you begin using these resources, please review the User responsibilities that you accept along with the opportunity.
High-performance computing systems
HPC systems are supercomputers that comprise many thousands of processor cores; Yellowstone has more than 70,000. These computing clusters are where users develop and test their scientific parallel codes and submit batch jobs to run simulations, often with one of several NCAR community models or weather prediction programs.
Data analysis and visualization clusters
The Geyser and Caldera systems are designed to support scientific data post-processing, analysis, and visualization. They provide access to large amounts of memory for data analysis applications, and they support interactive use of scientific data-processing software such as NCAR Command Language (NCL), NCAR Graphics, and the VAPOR interactive 3-D visualization environment.
We provide two types of storage for user files. Our disk-based Globally Accessible Data Environment (best known as GLADE) is accessible from any of the HPC, analysis, and visualization computer clusters that CISL manages; our HPSS system, described below, is for longer-term storage.
Each user has dedicated space on GLADE that includes a home directory, which is backed up daily, as well as scratch and work spaces for short-term use.
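As an illustration, the GLADE spaces are reached through ordinary directory paths. The paths below are typical but illustrative; check the GLADE documentation for the exact locations on your system:

```shell
# Typical GLADE layout (illustrative path names):
cd /glade/u/home/$USER    # home directory, backed up daily
cd /glade/scratch/$USER   # scratch space for short-term use
```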
Our High Performance Storage System (HPSS) is available for longer-term storage of large data sets and other essential files. To copy files to this tape-based system, and to retrieve them later on, you’ll use one of two transfer methods that are explained in our HPSS documentation.
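For instance, assuming HSI is one of the available transfer tools (see the HPSS documentation to confirm which methods apply to your account), archiving and retrieving a file might look like this sketch:

```shell
# Copy a file from GLADE into HPSS (local path first, then the
# HPSS destination after the colon).
hsi put /glade/scratch/$USER/output.nc : output.nc

# Retrieve it from HPSS back to GLADE later.
hsi get /glade/scratch/$USER/output.nc : output.nc
```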
Scientific data collections
Many of our users find the data sets in our Research Data Archive and other repositories invaluable in their work. These data sets include meteorological and oceanographic observations, operational and reanalysis model outputs, and others that support atmospheric and geosciences research.
The Consulting Services Group (CSG) provides expert advice about our computing resources and a range of related topics. These include programming, optimizing code, data analysis and post-processing, visualization, and mass storage.
Getting help: Several ways to contact both CSG and the CISL Help Desk are shown at the top of each of our documentation pages, including this one.
CISL provides training events, workshops, and other presentations each year. These include courses that participants can attend on-site or online, and many are recorded for review at any time. Watch the CISL Daily Bulletin for announcements.
Using NCAR resources
Working within an HPC resource environment that is shared by dozens of institutions and hundreds of individual users may be quite different from your previous experience.
Here are a few additional topics to be aware of before you start. Please also see our Best practices page for some information that will help you get off to a good start.
CISL provides many tools to help you develop and debug code for use on our systems. Our Yellowstone environment, with Red Hat Linux as the operating system, offers all of the programs, compilers, libraries, and other packages necessary for high-performance computing, analysis, and visualization.
Parallel codes are essential for successful computing in the Yellowstone environment. If you aren’t familiar with parallel programming, you may want to read Parallel computing concepts and also take advantage of some of our training opportunities to make the best use of these powerful HPC resources. We provide opportunities for training in Fortran, C, GPU programming, and related topics.
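As a sketch, building an MPI program typically means loading a compiler through the module system and compiling with an MPI wrapper. The module and wrapper names below are assumptions; run `module avail` to see what is actually installed:

```shell
# Load a compiler module (name is illustrative).
module load intel

# Compile a Fortran MPI program with a wrapper script that adds
# the MPI include paths and libraries automatically.
mpif90 -O2 -o my_model my_model.f90
```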
Submitting and running jobs
Because the HPC system and analysis clusters are shared so widely, we employ a scheduling system that balances the workload between large and small jobs, ensures that members of our diverse user community all have fair access, and keeps computing resources as productive as possible.
The Platform LSF (Load Sharing Facility) scheduler distributes the system’s workload based on priorities. These are determined by the CISL fair-share policy, the type of allocation, the user’s choice of queue when submitting individual jobs, job size, and other factors.
Except for small interactive jobs that can run on a login node in 30 minutes (wall clock) or less, both interactive and batch jobs need to be submitted for scheduling.
Since the system doesn’t know precisely when a job will run, and many jobs are simply too big to run interactively, most are submitted as batch jobs so they can run without your intervention.
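As an illustration, a minimal LSF batch script might look like the following sketch. The queue name, project code, and resource values are placeholders, and `mpirun.lsf` is shown as a typical LSF launch wrapper; check the job-submission documentation for the exact settings on your system:

```shell
#!/bin/bash
#BSUB -P P12345678        # project code to charge (placeholder)
#BSUB -J my_sim           # job name
#BSUB -q regular          # queue (illustrative name)
#BSUB -n 64               # number of MPI tasks
#BSUB -W 2:00             # wall-clock limit (hours:minutes)
#BSUB -o my_sim.%J.out    # standard output file (%J = job ID)
#BSUB -e my_sim.%J.err    # standard error file

# Launch the MPI executable under the scheduler.
mpirun.lsf ./my_model
```

You would submit such a script with `bsub < my_job.sh` and monitor it with `bjobs`.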
We encourage all users to take advantage of our data analysis and visualization clusters to analyze the results of their HPC simulations. With the centralized GLADE file spaces, you don’t have to move files from system to system to perform different tasks. When a batch job runs on the HPC system, for example, the data generated are stored on GLADE. You can then use one of the data analysis and visualization clusters to analyze the data without having to transfer any files.
Furthermore, because the Geyser and Caldera clusters are managed by the LSF scheduler, you can set up batch jobs that will automatically process the results of Yellowstone HPC simulations without any manual intervention.
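For example, one way to chain an analysis job to a simulation is with an LSF dependency condition; the script and queue names here are hypothetical:

```shell
# Submit the simulation and capture its numeric job ID from the
# scheduler's "Job <12345> is submitted..." message.
SIM_ID=$(bsub < run_simulation.sh | sed 's/[^0-9]*//g')

# Submit the analysis job so it starts only after the simulation
# finishes successfully ("geyser" is an illustrative queue name).
bsub -w "done($SIM_ID)" -q geyser < analyze_results.sh
```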
When you need to move data from GLADE or our HPSS system to another institution for permanent storage or analysis, you can do so in several ways.
For moving files between GLADE and remote systems, one of the simplest and most efficient ways is to use Globus Online. With its Globus Connect feature you can move files easily to and from your laptop or desktop computer, GLADE, or other destinations.
We also provide SCP and SFTP capabilities through command-line interfaces and Windows clients. These are best suited for transferring small numbers of small files.
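As a sketch, a command-line transfer of a single file from GLADE to a remote system might look like this; the hostname and paths are illustrative:

```shell
# Copy one file from GLADE to a remote host over SSH.
scp /glade/scratch/$USER/results.nc user@remote.univ.edu:/data/
```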
HPSS transfers: Our HPSS environment is optimized and secured for transfers to and from the GLADE file spaces. Before you can transfer HPSS files to a remote system, you will need to retrieve them from HPSS to GLADE. Then you can use one of the transfer methods described above.
Acknowledging CISL support
Once you've conducted your work on CISL resources and are writing up the results for a journal article, presentation, or other published work, we ask that you acknowledge CISL support for the computational aspects of the work.