System Status

The tables below are updated every few minutes to show the current status of our supercomputing, data analysis, and visualization resources, as well as other specialized nodes and systems. If a table indicates that a system is unavailable, check your email for timely updates from our Notifier service. Scheduled maintenance downtimes are announced in advance in the Daily Bulletin as well as in Notifier emails.

Derecho node status and queue activity

Login Node   Status   Users   Load
derecho1     UP          70   6.76 %
derecho2     UP          77   1.84 %
derecho3     UP          67   1.52 %
derecho4     UP          91   2.94 %
derecho5     UP          87   1.58 %
derecho6     UP          99   7.02 %
derecho7     UP           -   0.02 %
derecho8     UP           -   0.00 %

CPU Nodes
  Reserved         3   (  0.1 %)
  Offline         31   (  1.2 %)
  Running Jobs  1898   ( 76.3 %)
  Free           556   ( 22.3 %)

GPU Nodes
  Offline          1   (  1.2 %)
  Running Jobs    12   ( 14.6 %)
  Free            69   ( 84.1 %)
Updated 12:15 am MDT Tue Mar 19 2024
Queue       Jobs Running   Jobs Queued   Jobs Held   Nodes Used   Users
cpu                  232             0         400         1837      78
gpu                    4             0           1           11       3
system                 0             0           0            0       -
hybrid                 0             0           0            0       -
pcpu                  13             0          10           86       2
pgpu                   1             0           1            1       2
gpudev                 1             0           0            1       1
cpudev                 1             0           0            1       1
repair                 0             0           0            0       -
jhub                   0             0           0            0       -
S2855391               0             0           0            0       -
R3705318               1             0           0            2       1
R3812721               0             0           0            0       -
Updated 12:15 am MDT Tue Mar 19 2024

Casper node status and queue activity

Login Node      Status   Users   Load
casper-login1   UP         124   17.8 %
casper-login2   UP          94   29.4 %

HTC Nodes (936 of 2304 CPUs in use, 40.6 %)
  Partially Allocated   55   ( 85.9 %)
  Fully Allocated        3   (  4.7 %)
  Free                   6   (  9.4 %)

Large Memory Nodes
  Partially Allocated    1   ( 50.0 %)
  Free                   1   ( 50.0 %)

GP100 Visualization Nodes (6 of 48 GPU sessions in use, 12.5 %)
  Partially Allocated    2   ( 25.0 %)
  Free                   6   ( 75.0 %)

V100 GPU Nodes (13 of 64 GPUs in use, 20.3 %)
  Partially Allocated    5   ( 50.0 %)
  Free                   5   ( 50.0 %)

A100 GPU Nodes (20 of 35 GPUs in use, 57.1 %)
  Partially Allocated    8   ( 72.7 %)
  Free                   3   ( 27.3 %)

RDA Nodes
  Partially Allocated    4   ( 80.0 %)
  Fully Allocated        1   ( 20.0 %)

JupyterHub Login Nodes
  Partially Allocated    6   ( 85.7 %)
  Fully Allocated        1   ( 14.3 %)
Updated 12:15 am MDT Tue Mar 19 2024
Queue       Jobs Running   Jobs Queued   Jobs Held   CPUs Used   Users
htc                  534             0          11         936      64
vis                    6             0           0          38       5
largemem               2             0           0           2       2
gpgpu                 11             0           0          40       6
rda                   66             0           0         126       6
tdd                    0             0           0           0       -
jhublogin            232             0           0         232     232
system                 0             0           0           0       -
S9227431               0             0           0           0       -
Updated 12:15 am MDT Tue Mar 19 2024

V100 and A100 node status

V100 Node   State   # GPUs   # Used   # Avail
casper36    full         4        4         0
casper08    free         8        2         6
casper29    full         4        4         0
casper30    free         8        0         8
casper31    free         8        1         7
casper09    free         4        2         2
casper24    free         8        0         8
casper25    free         4        0         4
casper28    free         8        0         8
casper27    free         8        0         8
Updated 12:15 am MDT Tue Mar 19 2024
A100 Node   State   # GPUs   # Used   # Avail
casper18    free         1        0         1
casper19    full         1        1         0
casper21    free         1        0         1
casper38    full         4        4         0
casper39    free         4        2         2
casper40    free         4        2         2
casper41    free         4        2         2
casper42    full         4        4         0
casper43    free         4        2         2
casper44    free         4        0         4
casper37    free         4        3         1
Updated 12:15 am MDT Tue Mar 19 2024

GLADE file spaces and Campaign Storage

Individual files are automatically removed from the /glade/scratch space if they have not been accessed (modified, read, or copied, for example) in more than 120 days.
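One way to check which of your files are at risk of being purged is to query access times with `find`. This is a sketch, not an official tool: the `SCRATCH` variable and its default path are assumptions, so substitute your own scratch directory, and note that reported access times depend on how the filesystem is mounted (e.g. relatime/noatime options).

```shell
#!/bin/sh
# List files that have not been accessed in more than 120 days and are
# therefore candidates for automatic removal from the scratch space.
# SCRATCH is an assumed variable; point it at your own scratch directory,
# e.g. /glade/derecho/scratch/$USER.
SCRATCH="${SCRATCH:-/glade/derecho/scratch/$USER}"
find "$SCRATCH" -type f -atime +120 -print
```

Reading one of the listed files (or copying it) refreshes its access time and restarts the 120-day clock.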

File Space               TiB Used   TiB Capacity   % Used
/glade/u/home                  80            150      54%
/glade/u/apps                   2             10      17%
/glade/work                 1,063          4,096      26%
/glade/derecho/scratch     21,234         55,814      39%
/glade/campaign           107,904        123,593      88%
Updated 12:00 am MDT Tue Mar 19 2024