Using Kebnekaise

Using the different parts of Kebnekaise

Since Kebnekaise consists of several different node types, using them correctly takes a bit of getting used to.

The simplest case is using the plain CPU compute nodes without caring about which type of compute node the job uses. In this case there is nothing special to do; just write your submit file and specify the number of tasks, cores per task, etc., that you want to allocate. See our Slurm submit file design page for details.
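As a minimal sketch of such a submit file (the project account and program name below are placeholders, not real values):

```shell
#!/bin/bash
# Hypothetical project account -- replace with your own
#SBATCH -A hpc2nXXXX-YYY
# Number of tasks and walltime
#SBATCH -n 28
#SBATCH --time=00:30:00

# Load any modules your program needs here, then run it
srun ./my_program
```

Submit it with `sbatch <filename>`; Slurm then places the job on whichever CPU nodes are available.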

Caveat: There may be modules missing on one or other of the various parts of Kebnekaise. If you notice that this is the case, please inform us.

Specifying CPU type

By default a job can end up on either Broadwell or Skylake nodes, or a combination of both types. To explicitly use only Intel Broadwell or Skylake nodes, one has to specify:

#SBATCH --constraint=broadwell

or

#SBATCH --constraint=skylake

in the submit file.

To use the AMD Zen3 node(s), specify

#SBATCH --constraint=zen3

The "constraint" clause isn't limited to a single value; it can actually be rather complex. See the "sbatch" man page for details and examples.
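As an illustration of the standard sbatch constraint syntax (see the man page for the authoritative description):

```shell
# Accept nodes with either feature (the job may span both types):
#SBATCH --constraint="broadwell|skylake"

# Matching OR: all nodes allocated to the job must share
# one and the same feature from the bracketed list:
#SBATCH --constraint="[broadwell|skylake]"
```

The second form is useful when the job can run on either CPU type but mixing types within one job would be undesirable.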

Using the GPU nodes

We have multiple types of GPU cards available on Kebnekaise: NVIDIA Tesla K80 (Kepler), NVIDIA Tesla V100 (Volta), and NVIDIA A100 (Ampere).

To request GPU resources one has to include a GRES in the submit file. The general format is:

#SBATCH --gres=gpu:<type-of-card>:x

where <type-of-card> is either k80, v100, or a100, and x is 1 or 2.

For the A100 nodes one must, at the moment, also add:

#SBATCH -p amd_gpu

All GPU nodes contain two cards each of the same type.

Note that each K80 card contains two GPU engines, so in practice each K80 node contains four actual GPUs.

On the dual-card nodes one can request either a single card (x = 1) or both cards (x = 2). For each requested card, a whole CPU socket (14 cores, or 24 for the A100 nodes) on the same node is also dedicated to the job. Each card is connected to the PCI-express bus of the corresponding CPU socket.
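Putting the pieces together, a single-V100 job could be requested like this (the project account and program name are hypothetical placeholders):

```shell
#!/bin/bash
# Hypothetical project account -- replace with your own
#SBATCH -A hpc2nXXXX-YYY
# One V100 card; the 14 cores of its CPU socket come with it
#SBATCH --gres=gpu:v100:1
#SBATCH --time=01:00:00

# Load CUDA or other needed modules here, then run
srun ./my_gpu_program
```

For an A100 job, one would use `--gres=gpu:a100:1` and also add `#SBATCH -p amd_gpu`, as noted above.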

One can activate NVIDIA Multi-Process Service (MPS), if so required, by using:

#SBATCH --gres=gpu:k80:x,nvidiamps

If the code that is going to run on the allocated resources expects the GPUs to be in exclusive mode (the default is shared), this can be selected with "gpuexcl", like this:

#SBATCH --gres=gpu:v100:x,gpuexcl

Using the large memory nodes

Using the large memory nodes requires the project to have an allocation on that sub-resource of Kebnekaise. It is called "Kebnekaise Large Memory" in SUPR.

In the submit file one has to specify the large memory partition like this:

#SBATCH -p largemem

Please also note that the large memory nodes consist of 4 sockets with 18 cores each, i.e., 72 cores in total. See our Allocation policy on Kebnekaise page for details on how allocations are done.
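A minimal large-memory submit file could look like the following sketch (the project account and program name are placeholders):

```shell
#!/bin/bash
# Hypothetical project account with a Kebnekaise Large Memory allocation
#SBATCH -A hpc2nXXXX-YYY
#SBATCH -p largemem
# Request all 72 cores of one large memory node
#SBATCH -n 72
#SBATCH --time=02:00:00

srun ./my_memory_hungry_program
```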

Updated: 2023-12-04, 11:15