Batch Scripts

Batch scripts are plain-text files that specify a job to be run. They consist of batch scheduler (Slurm) directives, which specify the resources requested for the job, followed by the shell commands needed to run the program.

Here is a simple example of a batch script that will be accepted by Slurm on Chinook:

#!/bin/bash
#SBATCH --partition=debug
#SBATCH --ntasks=24
#SBATCH --ntasks-per-node=24
# If running on the bio or analysis queue, add:
#SBATCH --mem=214G

echo "Hello world"

When the batch script is submitted to Slurm using sbatch, the job's ID is printed:

$ ls
hello.slurm
$ sbatch hello.slurm
Submitted batch job 8137

Among other things, Slurm stores what the current working directory was when sbatch was run. Upon job completion (nearly immediate for a trivial job like the one specified by hello.slurm), output is written to a file in that directory.

$ ls
hello.slurm  slurm-8137.out
$ cat slurm-8137.out
Hello world
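
By default, Slurm names this output file slurm-<jobid>.out and writes it to the submission directory. The --output directive (used in the MPI example below) overrides this; the %j placeholder in its value expands to the job ID. As a minimal sketch with a hypothetical filename, the hello-world script could be changed so a run as job 8137 writes hello.8137.log instead:

#!/bin/bash
#SBATCH --partition=debug
#SBATCH --ntasks=1
#SBATCH --output=hello.%j.log

echo "Hello world"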

Running an MPI Application

Here is what a batch script for an MPI application might look like:

#!/bin/sh

#SBATCH --partition=t1standard
#SBATCH --ntasks=<NUMTASKS>
#SBATCH --ntasks-per-node=24
#SBATCH --mail-user=<USERNAME>@alaska.edu
#SBATCH --mail-type=BEGIN
#SBATCH --mail-type=END
#SBATCH --mail-type=FAIL
#SBATCH --output=<APPLICATION>.%j

ulimit -s unlimited
ulimit -l unlimited

# Load any desired modules, usually the same as loaded to compile
. /etc/profile.d/modules.sh
module purge
module load toolchain/pic-intel/2016b
module load slurm

# Slurm starts batch jobs in the submission directory by default; cd there explicitly to be safe
cd "$SLURM_SUBMIT_DIR"
# Generate a list of allocated nodes; will serve as a machinefile for mpirun
srun -l /bin/hostname | sort -n | awk '{print $2}' > ./nodes.$SLURM_JOB_ID
# Launch the MPI application
mpirun -np $SLURM_NTASKS -machinefile ./nodes.$SLURM_JOB_ID ./<APPLICATION>
# Clean up the machinefile
rm ./nodes.$SLURM_JOB_ID
  • <APPLICATION>: The executable to run in parallel (a filled-in example follows this list)

  • <NUMTASKS>: The number of parallel tasks requested from Slurm

  • <USERNAME>: Your Chinook username (same as your UA username)
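
As an illustration only (the username jdoe, the executable name wrf, and the task count are hypothetical), the directive block might look like this once the placeholders are filled in:

#!/bin/sh

#SBATCH --partition=t1standard
#SBATCH --ntasks=48
#SBATCH --ntasks-per-node=24
#SBATCH --mail-user=jdoe@alaska.edu
#SBATCH --mail-type=BEGIN
#SBATCH --mail-type=END
#SBATCH --mail-type=FAIL
#SBATCH --output=wrf.%j

With 48 tasks at 24 tasks per node, Slurm will allocate two whole nodes for the job.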

There are many environment variables that Slurm defines at runtime for jobs. Here are the ones used in the above script (the snippet after this list shows them together):

  • $SLURM_JOB_ID: The job's numeric id

  • $SLURM_NTASKS: The value supplied as <NUMTASKS>

  • $SLURM_SUBMIT_DIR: The current working directory when "sbatch" was invoked
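
A quick way to see these values, and to confirm a job ran where you expected, is to echo them from a job script. This minimal sketch assumes nothing beyond the variables listed above:

#!/bin/bash
#SBATCH --partition=debug
#SBATCH --ntasks=1

# Print the Slurm-provided job information
echo "Job ID:         $SLURM_JOB_ID"
echo "Task count:     $SLURM_NTASKS"
echo "Submitted from: $SLURM_SUBMIT_DIR"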
