Batch Scripts
Batch scripts are plain-text files that specify a job to be run. They consist of batch scheduler (Slurm) directives, which specify the resources requested for the job, followed by the shell commands needed to run a program. Batch scripts are typically bash or csh scripts and can make use of any features available to those shells.
The first part of a batch script is the set of #SBATCH directives, which pass parameters to the Slurm job scheduler to set options for the job, such as time limit, partition, and number of nodes and cores. sbatch processes #SBATCH directives until it reaches the first non-comment, non-whitespace line of the script; any directives after that point are ignored.
Here is a simple example of a batch script that will be accepted by Slurm on Chinook:
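A minimal sketch of such a script, saved here as hello.slurm. The partition name and time limit below are assumptions; substitute values appropriate for your account on Chinook.

    #!/bin/bash
    #SBATCH --job-name=hello
    #SBATCH --partition=debug      # assumed partition name; use one available to you on Chinook
    #SBATCH --ntasks=1
    #SBATCH --time=00:01:00        # assumed time limit for a trivial job

    # The body of the script is ordinary bash
    echo "Hello from $(hostname)"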
On submitting the batch script to Slurm using sbatch, the job's ID is printed:
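For example (the job ID shown is illustrative):

    $ sbatch hello.slurm
    Submitted batch job 1234567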
Among other things, Slurm stores what the current working directory was when sbatch was run. Upon job completion (nearly immediate for a trivial job like the one specified by hello.slurm), output is written to a file in that directory, by default named slurm-<jobid>.out.
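Continuing the illustrative example above, the output of the echo command ends up in the job's output file:

    $ cat slurm-1234567.out
    Hello from <compute node hostname>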
Running an MPI Application
Here is what a batch script for an MPI application might look like:
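The following is a sketch only: the partition name, time limit, mail-notification settings, and the choice of mpirun as the launcher are assumptions, and the launcher in particular depends on which MPI module is loaded (srun may be preferred on your system).

    #!/bin/bash
    #SBATCH --job-name=<APPLICATION>
    #SBATCH --partition=t1standard              # assumed partition name
    #SBATCH --ntasks=<NUMTASKS>
    #SBATCH --time=04:00:00                     # assumed time limit
    #SBATCH --mail-user=<USERNAME>@alaska.edu   # assumes UA email address form
    #SBATCH --mail-type=END,FAIL

    # Run from the directory the job was submitted from
    cd $SLURM_SUBMIT_DIR

    # Record basic job information next to the results
    echo "Job $SLURM_JOB_ID running $SLURM_NTASKS tasks" > job_${SLURM_JOB_ID}.log

    # Launch the MPI application across the allocated tasks
    mpirun -np $SLURM_NTASKS ./<APPLICATION>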
<APPLICATION>: The executable to run in parallel
<NUMTASKS>: The number of parallel tasks requested from Slurm
<USERNAME>: Your Chinook username (same as your UA username)
There are many environment variables that Slurm defines at runtime for jobs. Here are the ones used in the above script:
$SLURM_JOB_ID: The job's numeric id
$SLURM_NTASKS: The value supplied as <NUMTASKS>
$SLURM_SUBMIT_DIR: The current working directory when "sbatch" was invoked
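Inside a running job these variables expand to concrete values; the values suggested in the comments below are illustrative only:

    echo $SLURM_JOB_ID      # e.g. 1234567
    echo $SLURM_NTASKS      # the value passed as <NUMTASKS>, e.g. 24
    echo $SLURM_SUBMIT_DIR  # the directory from which sbatch was run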