Are you new to NCAR and the CISL computing environment? Here is what you will need before you can begin:
A project and allocation. An allocation of core-hours and storage space defines the amount of resources that you can use on a system like our Yellowstone HPC cluster. Think of an allocation as a resource budget for your research project. You can request an allocation with yourself as project lead or have a project lead add you to a project. Several types of allocation opportunities are described in our Allocations section.
A project code. Your project is identified by an alphanumeric code. Be sure to use the appropriate project code when you submit batch jobs, store files, and do other work. This will help you and your colleagues keep track of how your resource allocation is being used.
A user account. To use CISL resources, you must have your own individual user account. Individuals who need user accounts can be identified when an allocation is requested, or the project lead can request accounts for additional users (graduate students or collaborators, for example) after a project has been awarded. See User accounts and access.
A token. When your account is created, CISL sends you a YubiKey authentication token and provides a personal identification number (PIN). You need your username, your PIN, and your token to log in. See Authentication and security to learn about using your token to log in and about some important security practices.
Your username and your token typically do not change even if you become associated with different projects and allocations over time.
We provide world-class supercomputing, analysis, and visualization resources as well as software, data, and consulting services to support the geosciences community. All of these resources are closely interconnected.
Before you begin using them, please review the responsibilities that you accept along with the opportunity.
HPC systems are supercomputers that comprise many thousands of processor cores; Yellowstone has more than 72,000. These computing clusters are where users develop and test parallel scientific codes and submit batch jobs to run simulations, often with NCAR community models and weather prediction programs.
The Geyser and Caldera systems are designed to support scientific data post-processing, analysis, and visualization. They provide access to large amounts of memory for data analysis applications, and they support interactive use of scientific data-processing software such as NCAR Command Language (NCL), NCAR Graphics, and the VAPOR interactive 3-D visualization environment.
We provide two types of storage for users. Our disk-based Globally Accessible Data Environment (best known as GLADE) is accessible from any of the HPC, analysis, and visualization computer clusters that CISL manages.
Each user has dedicated space on GLADE that includes a home directory, which is backed up several times each week, as well as scratch and work spaces for short-term use. Some users have special project spaces.
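As a rough sketch, the per-user GLADE spaces conventionally follow a layout like the one below; the exact paths are illustrative, so confirm them in the GLADE documentation for your account:

```shell
# Conventional GLADE locations (illustrative; confirm the paths for your site)
ls /glade/u/home/$USER     # home directory, backed up several times each week
ls /glade/scratch/$USER    # scratch space for short-term use; not backed up
```

Because scratch space is intended for short-term use, treat anything stored there as expendable and archive essential files elsewhere.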
Our High Performance Storage System (HPSS) is available for longer-term storage of large data sets and other essential files. To copy files to this tape-based system, and to retrieve them later on, you’ll use one of the transfer methods that are described in our HPSS documentation.
Many of our users find the data sets in our Research Data Archive and other repositories invaluable in their work. These data sets include meteorological and oceanographic observations, operational and reanalysis model outputs, and others that support atmospheric and geosciences research.
The Consulting Services Group (CSG) provides expert advice about using our computing resources and related topics. These include programming, optimizing code, data analysis and post-processing, visualization, and mass storage.
CISL provides training events, workshops, and other presentations each year. These include courses that participants can attend on-site or online, and many are recorded for reviewing at any time. Check the following links and watch the CISL Daily Bulletin for announcements.
Working within an HPC resource environment that is shared by dozens of institutions and hundreds of individual users may be quite different from your previous experience.
Here are a few additional topics to be aware of before you start. Please also see our Best practices page for some information that will help you make efficient use of your allocation.
CISL provides many tools to help you develop and debug code for use on our systems. Our Yellowstone environment, with Red Hat Linux as the operating system, offers all of the programs, compilers, libraries, and other packages necessary for high-performance computing, analysis, and visualization.
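Software on systems like Yellowstone is typically managed with environment modules. A minimal session might look like the following; the package names are examples, not a guaranteed list:

```shell
module avail          # list the software packages available on the system
module load intel     # load a compiler suite (name is an example)
module load netcdf    # load a library built against that compiler
module list           # confirm what is loaded in your current environment
```

Loading modules adjusts your paths and environment variables so the matching compilers and libraries are found automatically at build time.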
Parallel codes are essential for successful computing in the Yellowstone environment. If you aren't familiar with parallel programming, you may want to read Parallel computing concepts and take advantage of our training opportunities in programming and related topics to make the best use of these powerful HPC resources.
Because the HPC system and analysis clusters are shared so widely, we employ a scheduling system. It balances the workload between large and small jobs, ensures that all members of our diverse user community have fair access, and ensures that computing resources are used as productively as possible.
The scheduler, Platform LSF (Load Sharing Facility), distributes the system's workload based on priorities. These are determined by the CISL fair-share policy, the type of allocation, the user's choice of queue when submitting individual jobs, job size, and other factors.
Except for some small interactive processes that can run on login nodes, both interactive and batch jobs must be submitted through the scheduler. Because you cannot know precisely when a job will start, and many jobs are simply too big to run interactively, most work is submitted as batch jobs that run without manual intervention.
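A minimal LSF batch script might look like the sketch below; the project code, queue name, and executable are placeholders that you would replace with your own values:

```shell
#!/bin/bash
#BSUB -P P12345678              # project code to charge (placeholder)
#BSUB -J my_simulation          # job name
#BSUB -q regular                # queue (example name; see the queue documentation)
#BSUB -n 64                     # number of parallel tasks
#BSUB -W 02:00                  # wall-clock limit (hh:mm)
#BSUB -o my_simulation.%J.out   # stdout file (%J expands to the job ID)
#BSUB -e my_simulation.%J.err   # stderr file

mpirun.lsf ./my_model           # launch the executable (placeholder name)
```

You would typically submit such a script with `bsub < my_script.sh` and monitor its progress with `bjobs`.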
We encourage all users to take advantage of our data analysis and visualization clusters to analyze the results of their HPC simulations. With the centralized GLADE file spaces, you don’t have to move files from system to system to perform different tasks. When a batch job runs on the HPC system, for example, the data generated are stored on GLADE. You can then use one of the other clusters to analyze the data without having to transfer files.
Furthermore, because the Geyser and Caldera clusters are managed by the LSF scheduler, you can set up batch jobs that will automatically process the results of Yellowstone HPC simulations without any manual intervention.
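For example, LSF's job-dependency option lets an analysis job wait until a simulation job has completed; the job names and script names below are placeholders:

```shell
# Submit the simulation to the HPC cluster
bsub -J run_model < run_model.sh

# Submit the analysis so it starts only after run_model finishes successfully
bsub -J analyze -w "done(run_model)" < analyze.sh
```

Because both jobs read and write the same GLADE file spaces, no file transfer is needed between the two steps.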
When you need to move data from GLADE or our HPSS system to another institution for permanent storage or analysis, you can do so in several ways.
For moving files between GLADE and remote systems, one of the simplest and most efficient ways is to use Globus. With its Globus Connect Personal feature, you can move files easily to and from your laptop or desktop computer, GLADE, or other destinations.
We also provide SCP and SFTP capabilities through command-line interfaces and Windows clients. These are best suited for transferring small numbers of small files.
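For small transfers, standard scp commands work from your local machine; the hostname, username, and paths below are illustrative:

```shell
# Copy a result file from GLADE down to your local machine
scp username@yellowstone.ucar.edu:/glade/scratch/username/output.nc .

# Copy a local input file up to your GLADE home directory
scp input.nc username@yellowstone.ucar.edu:
```

For large data sets or many files, Globus is usually the faster and more robust choice.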
HPSS transfers: Our HPSS environment is optimized and secured for transfers to and from the GLADE file spaces. Before you can transfer HPSS files to a remote system, you will need to retrieve them from HPSS to a GLADE directory. Then you can use one of the transfer methods described above.
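With an HPSS client such as HSI, the two-step retrieval might look like this sketch; all paths and hostnames are placeholders:

```shell
# Step 1: pull the file from HPSS tape into a GLADE directory
cd /glade/scratch/username
hsi get archive/run01/output.tar

# Step 2: send it onward from GLADE with Globus, scp, or sftp, e.g.:
scp output.tar username@remote.example.edu:/data/
```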
Once you've conducted your work on CISL resources and are writing up the results for a journal article, presentation, or other published work, we ask that you acknowledge CISL and NCAR support for the computational aspects of the work.