Slurm MPI examples

This example shows a job with 48 tasks and 24 tasks per node, run on Abisko.

#!/bin/bash
# Example with 48 MPI tasks and 24 tasks per node.
#
# Project/Account (use your own)
#SBATCH -A hpc2n-1234-56
#
# Number of MPI tasks
#SBATCH -n 48
#
# Number of tasks per node
#SBATCH --ntasks-per-node=24
#
# The runtime of this job is less than 12 hours.
#SBATCH --time=12:00:00

module load openmpi/gcc

srun ./mpi_program

# End of submit file

This creates a two-node job with 24 tasks per node.
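The node count follows directly from the task layout. A minimal sketch of the arithmetic (the variable names are illustrative, not Slurm settings):

```shell
# Hypothetical values matching the example above.
ntasks=48
tasks_per_node=24

# Round up: Slurm allocates enough nodes to hold all tasks.
nodes=$(( (ntasks + tasks_per_node - 1) / tasks_per_node ))

echo "nodes needed: $nodes"
```

With 48 tasks at 24 tasks per node, this prints 2, matching the two-node allocation described above.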

This example shows a 4-node job with --exclusive, run on Abisko.

#!/bin/bash
# Example with 4 exclusive nodes.
#
# Project/Account (change to your own)
#SBATCH -A hpc2n-1234-56
#
# Number of nodes
#SBATCH -N 4
#
# Use the nodes exclusively
#SBATCH --exclusive
#
# The runtime of this job is less than 12 hours.
#SBATCH --time=12:00:00
#

# Load the compiler and MPI library you compiled the program with. Here, openmpi/gcc   
module load openmpi/gcc

# The total number of MPI tasks is calculated by Slurm from the defaults or from command-line parameters.

srun ./mpi_program

# End of submit file

Submit the script to Slurm with:

sbatch submit_script.sh
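On submission, sbatch prints the ID of the new job. A few standard Slurm commands for following it up, shown as a sketch (12345 is a placeholder job ID):

```shell
# List your own pending and running jobs
squeue -u $USER

# Show the details of one job (replace 12345 with the real job ID)
scontrol show job 12345

# Cancel the job if needed
scancel 12345
```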

Kebnekaise

To run the above examples on Kebnekaise, make the following changes:

  • Load one of the compiler toolchains, for instance foss
    ml foss
  • Run with either
    mpirun ./mpi_program

    or

    srun ./mpi_program

  • Adjust the number of cores per node when needed. Remember, the standard compute nodes on Kebnekaise have only 28 cores; some node types have more. Read the Kebnekaise hardware page for more information.
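Putting those changes together, a sketch of the first example adapted for Kebnekaise's 28-core standard nodes might look like this (the project ID and program name are placeholders):

```shell
#!/bin/bash
# Example with 56 MPI tasks and 28 tasks per node (2 nodes).
#
# Project/Account (use your own)
#SBATCH -A hpc2n-1234-56
#
# Number of MPI tasks
#SBATCH -n 56
#
# Number of tasks per node (28 cores on a standard Kebnekaise node)
#SBATCH --ntasks-per-node=28
#
# The runtime of this job is less than 12 hours.
#SBATCH --time=12:00:00

# Load a compiler toolchain, for instance foss
ml foss

srun ./mpi_program
```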
Updated: 2017-12-06, 15:21