Slurm MPI examples

This example shows a job with 28 MPI tasks and 14 tasks per node, which matches the normal nodes on Kebnekaise.

#!/bin/bash
# Example with 28 MPI tasks and 14 tasks per node.
#
# Project/Account (use your own)
#SBATCH -A hpc2n-1234-56
#
# Number of MPI tasks
#SBATCH -n 28
#
# Number of tasks per node
#SBATCH --ntasks-per-node=14
#
# The runtime of this job is at most 12 hours.
#SBATCH --time=12:00:00

# Clear the environment from any previously loaded modules
module purge > /dev/null 2>&1

# Load the module environment suitable for the job
module load foss/2019a

# And finally run the job
srun ./mpi_program

# End of submit file

When submitted, this creates a job spanning two nodes, with 14 tasks on each node.
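If you want to check how the tasks were actually distributed, one option (not part of the original example) is to add a line like the following to the job script. It launches hostname once per task and counts how many tasks landed on each node:

# Optional check: print each allocated node name and the number of tasks on it
srun hostname | sort | uniq -c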

This example shows a 4-node job using --exclusive. The --exclusive flag allocates all cores on each node and makes them available to the job.

#!/bin/bash
# Example with 4 exclusive nodes.
#
# Project/Account (change to your own)
#SBATCH -A hpc2n-1234-56
#
# Number of nodes
#SBATCH -N 4
#
# Use the nodes exclusively
#SBATCH --exclusive
#
# The runtime of this job is at most 12 hours.
#SBATCH --time=12:00:00
#

# Clear the environment from any previously loaded modules
module purge > /dev/null 2>&1

# Load the module environment suitable for the job
module load foss/2019a

# The total number of MPI tasks will be calculated by Slurm, based on
# either the defaults or the command line parameters.
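#
# If you instead want to control the task count yourself, you could give
# srun an explicit value, for example (the placeholder below is only an
# illustration, not part of the original example):
# srun -n <number-of-tasks> ./mpi_program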

srun ./mpi_program

# End of submit file

You can submit this to Slurm with:

sbatch submit_script.sh
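
On success, sbatch prints the ID of the new job, and you can then follow the job in the queue with squeue. A minimal example of what this might look like (the job ID shown is just a placeholder):

$ sbatch submit_script.sh
Submitted batch job 123456
$ squeue -u $USER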