#!/bin/bash
# Example with 28 MPI tasks and 14 tasks per node.
#
# Project/Account (use your own)
#SBATCH -A hpc2n-1234-56
#
# Number of MPI tasks
#SBATCH -n 28
#
# Number of tasks per node
#SBATCH --tasks-per-node=14
#
# Runtime of this job is less than 12 hours.
#SBATCH --time=12:00:00

# Clear the environment of any previously loaded modules
module purge > /dev/null 2>&1

# Load the module environment suitable for the job
module load foss/2019a

# And finally run the job
srun ./mpi_program

# End of submit file
When submitted, this will create a job spanning two nodes, with 14 tasks on each node (28 MPI tasks in total).
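The node count follows directly from the task layout: Slurm divides the total task count by the per-node limit, rounding up. A small sketch of that arithmetic, using the numbers from the example above:

```shell
# How Slurm arrives at the node count from -n and --tasks-per-node:
ntasks=28
tasks_per_node=14
# Ceiling division: 28 tasks at up to 14 per node requires 2 nodes
nodes=$(( (ntasks + tasks_per_node - 1) / tasks_per_node ))
echo "$nodes"
```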
#!/bin/bash
# Example with 4 exclusive nodes.
#
# Project/Account (change to your own)
#SBATCH -A hpc2n-1234-56
#
# Number of nodes
#SBATCH -N 4
#
# Request the nodes exclusively
#SBATCH --exclusive
#
# Runtime of this job is less than 12 hours.
#SBATCH --time=12:00:00

# Clear the environment of any previously loaded modules
module purge > /dev/null 2>&1

# Load the module environment suitable for the job
module load foss/2019a

# The total number of MPI tasks will be calculated by Slurm based on either the defaults or command-line parameters.
srun ./mpi_program

# End of submit file
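With --exclusive and no explicit -n, srun typically launches one task per available core, so the total rank count is nodes × cores per node. The per-node core count is cluster-specific; 28 below is only an assumed value for illustration:

```shell
# Hypothetical layout: 4 exclusive nodes, each with 28 cores
# (28 cores/node is an assumption -- check your cluster's hardware)
nodes=4
cores_per_node=28
total_tasks=$(( nodes * cores_per_node ))
echo "$total_tasks"
```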
You can submit either script to Slurm with:
sbatch submit_script.sh
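Once submitted, you will usually want to track the job. A minimal sketch of that workflow, assuming a standard Slurm installation (these commands only work on a cluster running Slurm, so no expected output is shown):

```shell
# Submit and capture just the job ID (--parsable suppresses the
# "Submitted batch job" text and prints only the ID)
jobid=$(sbatch --parsable submit_script.sh)

# Check the job's state in the queue
squeue -j "$jobid"

# Cancel it if needed
scancel "$jobid"
```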