
Using SUPR

SUPR is the NAISS database containing information about users and projects. SUPR is a self-service portal for users of most Swedish HPC Centres. It is used for registering and updating contact info, applying for new projects, requesting membership in projects, and viewing statistics for the projects you are a member of.

Think Tank

This is an opportunity for users of HPC2N's resources to drop by and have an informal talk with the HPC2N system operators: ask questions, make suggestions, and so on.

To begin with, the following times have been set aside, and rooms have been booked:

2015-09-08, 13:00-14:00: Room MC323SEM

2015-11-10, 13:00-14:00: Room MC323SEM NC275

2015-12-07, 13:00-14:00: Room MC323SEM

If it turns out there is a lot of interest in this, the 'Think Tank' will be made a recurring event.

Job Dependencies - SLURM

A job can be given the constraint that it only starts after another job has finished.

In the following example, we have two jobs, A and B. We want Job B to start only after Job A has successfully completed.

First we start Job A by submitting it via sbatch:

$ sbatch <jobA.sh>

Making note of the assigned job ID for Job A, we then submit Job B with the added condition that it only starts after Job A has successfully completed:
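
A common way to express this dependency is with sbatch's --dependency option; afterok means Job B will only start once Job A has finished with exit code zero:

$ sbatch --dependency=afterok:<jobid of Job A> <jobB.sh>

If Job A fails, Job B will never start (depending on the cluster's configuration it is either left pending with the reason DependencyNeverSatisfied or cancelled). If you script this, sbatch's --parsable option makes it print just the job ID, which is convenient for capturing it in a variable.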

Job Cancellation

To cancel a job, use scancel. You need the job ID of the running or pending job. Only the job's owner and SLURM administrators can cancel jobs.

$ scancel <jobid>

To cancel all your jobs (running and pending) you can run

$ scancel -u <username>

You get the job ID when you submit the job; it is also shown in the output of squeue.
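
scancel can also filter on job state. For example, to cancel only your pending jobs and leave running ones untouched, you can use scancel's --state option:

$ scancel -u <username> --state=PENDING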

Job Status

To see the status of partitions and nodes, use

$ sinfo

To get the status of all SLURM jobs, use

$ squeue

To view only the jobs in the largemem partition on Kebnekaise, use

$ squeue -p largemem

To get the status of an individual job, use

$ scontrol show job <jobid>
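
Two other squeue variations that are often useful are listing only your own jobs, and asking for the scheduler's estimated start times of your pending jobs:

$ squeue -u <username>
$ squeue --start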

Slurm MPI + OpenMP examples

This example shows a hybrid MPI + OpenMP job with 4 MPI tasks and 28 cores per task, i.e. 112 cores in total (four full normal compute nodes on Kebnekaise).

#!/bin/bash
# Example with 4 tasks and 28 cores per task for MPI+OpenMP
#
# Project/Account
#SBATCH -A hpc2n-1234-56
#
# Number of MPI tasks
#SBATCH -n 4
#
# Number of cores per task
#SBATCH -c 28
#
# Runtime of this job is less than 12 hours.
#SBATCH --time=12:00:00
#

# Clear the environment from any previously loaded modules
module purge > /dev/null 2>&1

# Load the module environment suitable for the job
module load foss/2019a
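
# Set OMP_NUM_THREADS to the same value as -c (SLURM_CPUS_PER_TASK),
# with a fallback to 1 in case it is not set.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}

# Launch the hybrid program with srun; the binary name below is only a
# placeholder example.
srun ./my_mpi_openmp_program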

Slurm OpenMP Examples

This example shows a 28-core OpenMP job (the maximum size for one normal compute node on Kebnekaise).

#!/bin/bash
# Example with 28 cores for OpenMP
#
# Project/Account
#SBATCH -A hpc2n-1234-56
#
# Number of cores
#SBATCH -c 28
#
# Runtime of this job is less than 12 hours.
#SBATCH --time=12:00:00
#

# Clear the environment from any previously loaded modules
module purge > /dev/null 2>&1

# Load the module environment suitable for the job
module load foss/2019a
# Set OMP_NUM_THREADS to the same value as -c
# with a fallback in case it isn't set.
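export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}

# Run the OpenMP program; the binary name below is only a placeholder example.
./my_openmp_program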

Updated: 2024-04-17, 14:47