Examples: peak_memusage on Geyser

Geyser serial and OpenMP jobs with peak_memusage

Be sure to substitute your own project code, job name, executable name, wall-clock time (hours:minutes:seconds), and so on when customizing this sample script to run your job. Specify input or output redirection as you normally do if needed. For parallel jobs, you will also likely need to adjust the node count and possibly the number of tasks per node.

#!/bin/tcsh
#SBATCH -J peakmem
#SBATCH -n 1
#SBATCH --ntasks-per-node=1
#SBATCH -t 01:00:00
#SBATCH -A project_code
#SBATCH -p dav
#SBATCH -o peakmem.%j
#SBATCH -C geyser
#SBATCH --mem=100G

setenv TMPDIR /glade/scratch/$USER/temp
mkdir -p $TMPDIR

### Load modules required to run the job
module load intel peak_memusage
module li

### Run program
peak_memusage.exe ./executable_name --arguments
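After customizing the script, submit it with sbatch as you would any other batch job. The script name here is just an example:

sbatch peakmem_test.csh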

The output will include a line like the following one.

Used memory in task 0/1: 381.99MiB (+0.67MiB overhead). ExitStatus: 0. Signal: 0

The "overhead" identified in the output is memory that the tool uses to check your program. If the program exits unsuccessfully or if it receives a signal, the exit status and signal number also will be printed.


Geyser MPI jobs with peak_memusage (OpenMPI-compiled binaries)

As with the serial example above, substitute your own project code, job name, executable name, wall-clock time, and other settings when customizing this script, and adjust the node count and tasks per node as needed for your job.
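This script assumes the executable was built with OpenMPI. If you are unsure which MPI library a binary was linked against, one way to check is to list its shared-library dependencies, for example:

ldd ./executable_name | grep -i mpi

The paths in the output generally show whether the MPI libraries come from an OpenMPI or an Intel MPI installation.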

#!/bin/tcsh
#SBATCH -J peakmem
#SBATCH -n 6
#SBATCH --ntasks-per-node=3
#SBATCH -t 01:00:00
#SBATCH -A project_code
#SBATCH -p dav
#SBATCH -o peakmem.%j
#SBATCH -C geyser

setenv TMPDIR /glade/scratch/$USER/temp
mkdir -p $TMPDIR

### Load modules required to run the job
module load intel openmpi peak_memusage
module li

### Run program
srun peak_memusage.exe ./executable_name --arguments

The output will include a line for each MPI task, in the same format as shown for the Intel MPI example in the next section.


Geyser MPI jobs with peak_memusage (Intel MPI-compiled binaries)

As with the previous examples, substitute your own project code, job name, executable name, wall-clock time, and other settings when customizing this script, and adjust the node count and tasks per node as needed for your job.

#!/bin/tcsh
#SBATCH -J peakmem
#SBATCH -n 6
#SBATCH --ntasks-per-node=3
#SBATCH -t 01:00:00
#SBATCH -A project_code
#SBATCH -p dav
#SBATCH -o peakmem.%j
#SBATCH -C geyser

setenv TMPDIR /glade/scratch/$USER/temp
mkdir -p $TMPDIR

### Load modules required to run the job
module load intel impi peak_memusage
module li

### Run program
srun peak_memusage.exe ./executable_name --arguments

The output will include a line for each MPI task that the program used, as shown here.

Used memory in task 5/6: 26.07MiB (+0.57MiB overhead). ExitStatus: 0. Signal: 0
Used memory in task 4/6: 26.07MiB (+0.57MiB overhead). ExitStatus: 0. Signal: 0
Used memory in task 2/6: 26.10MiB (+0.55MiB overhead). ExitStatus: 0. Signal: 0
Used memory in task 1/6: 26.11MiB (+0.55MiB overhead). ExitStatus: 0. Signal: 0
Used memory in task 3/6: 26.20MiB (+0.57MiB overhead). ExitStatus: 0. Signal: 0
Used memory in task 0/6: 26.29MiB (+0.54MiB overhead). ExitStatus: 0. Signal: 0

The "overhead" identified in the output is memory that the tool uses to check your program. If the program exits unsuccessfully or if it receives a signal, the exit status and signal number also will be printed.