Short Quick-start Guide

This is a quickstart guide to using the compute clusters on HPC2N.

Logging on to Kebnekaise

Follow the instructions below to log on to 'Kebnekaise'.

If this is the first time you are using any of the HPC2N facilities, please change your password after you have logged in. See Login and password at HPC2N for more information. Remember, the HPC2N and SUPR accounts are separate.

Access to our systems is possible by using SSH. The example below uses SSH with standard interactive password login. For information on using Kerberos/GSSAPI for passwordless logins, see the section about Login/password.

If you are using Linux or macOS, just open a terminal and:

  • Enter: ssh -l yourusername kebnekaise.hpc2n.umu.se
  • Enter your password when prompted
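
For example, assuming your HPC2N username is myuser (replace it with your own), the login command looks like this; the user@host form works equally well:

    $ ssh -l myuser kebnekaise.hpc2n.umu.se      # -l gives the remote username
    $ ssh myuser@kebnekaise.hpc2n.umu.se         # equivalent user@host form

You will then be prompted for your password.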

If you are using Windows, you need to use something like PuTTY or Cygwin to connect. You can read a short introduction here.

Choosing where to store your files

Your home directory is very small (25 GB by default), so we strongly recommend storing your files in your project storage. See the section about the different file systems at HPC2N for more information.
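
As a sketch, assuming your project storage is located under /proj/nobackup/your-storage-dir (the actual path for your project is described in the file systems section), you could keep your working data there and add a convenient shortcut from your home directory:

    $ cd /proj/nobackup/your-storage-dir       # work here instead of in $HOME
    $ mkdir -p myproject                       # a directory for this project (name is just an example)
    $ ln -s /proj/nobackup/your-storage-dir/myproject ~/myproject    # optional symlink from your home directory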

Submit a job to the batch system

A set of computing tasks submitted to a batch system is called a job. Jobs can be submitted in two ways: a) from a command line or b) using a job script. We recommend using a job script as it makes troubleshooting easier and also allows you to keep track of batch system parameters you used in the past.

Kebnekaise runs the batch system SLURM.

To create a new job script (also called a submit script or a submit file) you need to:

  • open a new file in one of our text editors (nano, emacs or vim)
  • write a job script including batch system directives
  • remember to load any modules needed
  • save the file and submit it to the batch system queue using the command sbatch (a minimal workflow sketch follows below)
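
As a minimal workflow sketch (the file name myjob.sh and the job ID printed by sbatch are just examples), this could look like:

    $ nano myjob.sh          # write the job script in a text editor
    $ sbatch myjob.sh        # submit it to the batch system queue
    Submitted batch job 123456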

There are several examples, and more information, about using the batch system and writing scripts in the subsection for the batch system.

Batch system directives start with #SBATCH. The first line of the job script states that the Linux shell bash will be used to interpret the script. Here are some of the most common directives.

  • -A specifies the local/SNIC/NAISS project ID, formatted as hpc2nXXXX-YYY, SNICXXX-YY-ZZ, or NAISSXXXX-YY-ZZ (mandatory; spaces and slashes are not allowed in the project ID, but the letters can be upper or lower case)
  • -N specifies the number of nodes that SLURM should allocate for your job. It should only be used together with --ntasks-per-node or with --exclusive. In almost every case it is better to let SLURM calculate the number of nodes required for your job from the number of tasks, the number of cores per task, and the number of tasks per node.
  • -J is a job name (default is the name of the submit file)
  • --output= and --error= specify paths to the standard output and standard error files (the default is that both standard out and error are combined into slurm-jobid.out)
  • -n specifies requested number of tasks. The default is one task.
  • --time= is the real time (as opposed to the processor time) that should be reserved for the job
  • -c specifies the requested number of CPUs (actually cores) per task. This can be useful if the job is multi-threaded and requires more than one core per task for optimal performance. The default is one core per task.
  • --exclusive requests the whole node exclusively for this job
  • By default your job's working directory is the directory you start the job in
  • Before running the program it is necessary to load the appropriate module(s) for the MPI, and your code, to have access to relevant (parallel) libraries (see our modules pages)
  • Generally, you should run your parallel program with srun

The following simple submit file demonstrates how the batch system directives are used.

MPI job with 6 (MPI) tasks and a 1-hour run time

#!/bin/bash
#SBATCH -A <account>
#SBATCH -n 6
#SBATCH --time=01:00:00

# Load compiler toolchain module for MPI, compiler, libraries, etc. as needed/desired. This loads foss, which 
# is GCC, OpenMPI, OpenBLAS/LAPACK, FFTW, and ScaLAPACK
module add foss 

srun ./mpi_program 

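As a further sketch (the program name, job name, and run time are placeholders, not an official HPC2N example), several of the directives described above can be combined, here for a hybrid MPI/OpenMP job:

#!/bin/bash
#SBATCH -A <account>
#SBATCH -J my_jobname              # job name shown in the queue
#SBATCH -n 4                       # 4 MPI tasks
#SBATCH -c 7                       # 7 cores per task, for the threads
#SBATCH --time=00:30:00            # 30 minutes of walltime
#SBATCH --output=job_%j.out        # %j is replaced by the job ID
#SBATCH --error=job_%j.err

# Load the compiler toolchain (GCC, OpenMPI, OpenBLAS/LAPACK, FFTW, ScaLAPACK)
module add foss

# Let OpenMP use the cores allocated per task
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

srun ./my_hybrid_program
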
In order to see how much of your allocation you have used up, use the command projinfo.

Batch system commands

There is a set of batch system commands available to users for managing their jobs. The following is a list of commands useful to end-users: 

  • sbatch <submit_file> submits a job to the batch system (if there are no syntax errors in the submit file the job is processed and inserted into the job queue, the integer job ID is printed on the screen)
  • squeue shows the current job queue (grouped by running and then pending jobs)
  • scontrol show job <jobid> shows detailed information about a specific job
  • scancel <jobid> deletes a job from the queue
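
A short example session (the job ID 123456 and the username myuser are made up):

    $ sbatch myjob.sh                 # submit the job script
    $ squeue -u myuser                # list only your own jobs
    $ scontrol show job 123456        # detailed information about job 123456
    $ scancel 123456                  # remove job 123456 from the queue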

Additional information

More information on batch systems can be found on the Internet. We recommend visiting the following pages (keep in mind that some information may not apply to the HPC2N environment):

  • SLURM example job scripts: Leibniz-Rechenzentrum webpage.
  • SLURM Quick Start User Guide: webpage.

Compiling a parallel program

Follow the instructions below to compile a parallel program.

  • Use your own code or download a small example that sends messages between two nodes using MPI (uses standard output).
  • Download a C Makefile: makefile
  • Download a Fortran Makefile: makefile (rename to makefile after downloading)
  • Edit the makefile
    • Specify the files you want to compile in the makefile
      (change from pingpong to the real name)
  • To set up the environment for working with the MPI compilers, you need to load the appropriate module. For example, to enable the GCC and OpenMPI compilers (and some libraries), enter:

    $ module add foss

    To see which other compilers are available, enter:

    $ module avail

  • Enter the following to make an executable from C or Fortran source code:

    $ make -f makefile
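
Putting the steps together, a complete build could look like this (the source file names pingpong.c and pingpong.f90 are assumptions based on the example above; use the real names of your own files):

    $ module add foss                          # load GCC, OpenMPI and libraries
    $ make -f makefile                         # build using the downloaded makefile
    $ mpicc -O2 -o pingpong pingpong.c         # or compile C code directly with the MPI wrapper
    $ mpifort -O2 -o pingpong pingpong.f90     # or compile Fortran code directly

The resulting executable is then run through the batch system with srun, as in the submit file example above.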

Updated: 2024-03-08, 14:54