Kebnekaise

  • Posted on: 13 October 2016
  • By: bbrydsoe

[ Details: Compute nodes (Skylake-SP) | Compute nodes (AMD Zen3) | Largemem nodes | GPU nodes (V100) | GPU nodes (A100) ]

Kebnekaise is the latest supercomputer at HPC2N. It is named after the massif of the same name, which has some of Sweden's highest mountain peaks (Sydtoppen and Nordtoppen). Like the massif, the supercomputer Kebnekaise is a system with many faces.

Kebnekaise was delivered by Lenovo and installed during the summer of 2016, except for the 36 nodes with the (then) new generation of Intel Xeon Phi, also known as Intel Knights Landing (KNL), which were installed during the spring of 2017. These nodes have since been decommissioned. Kebnekaise was opened for general availability on November 7, 2016.

In 2018, Kebnekaise was extended with 52 Intel Xeon Gold 6132 (Skylake) nodes, as well as 10 NVidia V100 (Volta) GPU nodes.

In 2023, Kebnekaise was extended further, with 2 dual NVIDIA A100 GPU nodes and one many-core AMD Zen3 CPU node.

Kebnekaise Celebration and HPC2N Open House was held 30 November 2017.

Node Type         #nodes  CPU                        Cores            Memory        Infiniband  Notes
Compute-skylake   52      Intel Xeon Gold 6132       2x14             192 GB/node   EDR         Some of the Skylake nodes are reserved for WLCG use.
Compute-AMD Zen3  1       AMD Zen3 (AMD EPYC 7763)   2x64             1 TB/node     EDR
Large Memory      20      Intel Xeon E7-8860v4       4x18             3072 GB/node  EDR         Allocations for the Large Memory nodes are handled separately.
2xV100            10      Intel Xeon Gold 6132       2x14             192 GB/node   EDR
                          2x NVidia V100             2x5120 (CUDA)
                                                     2x640 (Tensor)
2xA100            2       AMD Zen3 (AMD EPYC 7413)   2x24             512 GB/node   EDR         These nodes run Ubuntu Jammy 22.04 LTS.
                          2x NVIDIA A100             2x6912 (CUDA)
                                                     2x432 (Tensor)

There is local scratch space on each node (about 170 GB, SSD), which is shared between the jobs currently running on that node. Connected to Kebnekaise is also our parallel file system Ransarn (where your project storage is located), which provides quick access to your files regardless of which node your job runs on. For more information about the different filesystems that are available on our systems, read the Filesystems and Storage page.

All nodes are running Ubuntu Focal (20.04 LTS). We use EasyBuild to build software and we also use a module system called Lmod. We are still improving the portfolio of installed software. The software page currently lists only a few of the installed software packages. Please log in to Kebnekaise (regular: kebnekaise or ThinLinc: kebnekaise-tl) for a list of all available software packages.

NOTE: There is a special login node for the A100 GPUs, with an AMD Zen3 CPU (AMD EPYC 7313) and 1 A100 card: kebnekaise-amd (for ThinLinc: kebnekaise-amd-tl). Like the A100 nodes it runs Ubuntu Jammy 22.04, and it is recommended when you are using the A100 GPUs since it lets you see which software is available on them.

With all the different node types of Kebnekaise, the scheduling of jobs is somewhat more complicated than on our previous systems. Different node types are "charged" differently; see the allocation policy on Kebnekaise page for details. Kebnekaise uses SLURM for job management and scheduling.

HPL performance of Kebnekaise (except the new AMD nodes):
  • Compute-skylake nodes: 87 TFlop/s
  • Large Memory nodes: 34 TFlop/s
  • 2xV100 nodes: 75 TFlop/s

Do note that running all 28 cores with heavy AVX load (on the regular compute nodes) will limit the clock frequency to at most 2.9 GHz per core, and in practice probably to no more than 2.5 GHz.

The AVX clock frequency is not the same as the CPU's regular clock frequency; it has both a lower base frequency and a lower maximum boost.
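
As a rough back-of-envelope illustration (not from the original page): each Skylake core can execute up to 32 double precision FLOPs per cycle (see the detailed node info below), so the 52 Compute-skylake nodes have a theoretical peak of roughly 52 x 28 cores x 32 FLOPs/cycle x 2.6 GHz, which is about 121 TFlop/s at the base frequency. The measured HPL figure of 87 TFlop/s, about 72% of that, is consistent with the lower sustained AVX clock.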

Detailed node Info

Compute nodes, Skylake-SP

Architecture is Intel Xeon Gold 6132 (Skylake-SP).

Each core has:

  • 64 kB L1 cache
    • 32 kB L1 data cache
    • 32 kB L1 instruction cache
  • 1 MB L2 cache (private per core)
  • 1.375 MB L3 cache (total of 19.25 MB shared between cores)

The memory is shared within the whole node, but physically 96 GB is attached to each NUMA island. The memory controller on each NUMA island has 6 channels.
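
Since memory pages are normally allocated on the NUMA island of the core that first writes them, memory-bound codes often benefit from "first-touch" initialization. The following is only an illustrative C/OpenMP sketch (not from the original page; the array size, compiler flags and environment variables are examples):

    #include <stdlib.h>

    /* First-touch initialization: each OpenMP thread writes the part of the
     * array it will later use, so those pages end up on the NUMA island where
     * the thread runs. Compile with e.g. gcc -O3 -fopenmp and pin the threads,
     * for example with OMP_PROC_BIND=close OMP_PLACES=cores. */
    int main(void)
    {
        const long n = 1L << 28;              /* illustrative size, ~2 GB of doubles */
        double *a = malloc((size_t)n * sizeof *a);
        if (!a) return 1;

        #pragma omp parallel for schedule(static)
        for (long i = 0; i < n; i++)
            a[i] = 0.0;                       /* pages are placed near the touching thread */

        #pragma omp parallel for schedule(static)
        for (long i = 0; i < n; i++)
            a[i] = 2.0 * a[i] + 1.0;          /* same schedule -> mostly local memory accesses */

        free(a);
        return 0;
    }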

The Intel Xeon Gold 6132 has two AVX-512 FMA units per core.


Intel Xeon Gold 6132 (Skylake-SP)
  • Instruction set: SSE4.2, AVX, AVX2, AVX-512
  • SP FLOPs/cycle: 64 (32 per AVX-512 FMA unit)
  • DP FLOPs/cycle: 32 (16 per AVX-512 FMA unit)
  • Base Frequency: 2.6 GHz
  • Turbo Mode Frequency (single core): –
  • Turbo Mode Frequency (all cores): –

Thus each core can perform 32 double precision or 64 single precision floating point operations per clock cycle using the two 512-bit fused multiply-add (FMA) units; each 512-bit vector holds eight 64-bit or sixteen 32-bit elements.
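
Purely as an illustration (not from the original page; the function name and compiler flags are examples), here is a minimal C kernel using AVX-512 FMA intrinsics. Each _mm512_fmadd_pd operates on eight doubles and counts as 16 DP FLOPs, so with two FMA units per core this is where the 32 DP FLOPs per cycle quoted above come from. Compile with, for example, gcc -O3 -march=skylake-avx512:

    #include <immintrin.h>
    #include <stddef.h>

    /* y[i] = a*x[i] + y[i] for n doubles, using 512-bit fused multiply-add.
     * One _mm512_fmadd_pd = 8 multiplies + 8 additions = 16 DP FLOPs; with two
     * AVX-512 FMA units per core the peak is 32 DP FLOPs per cycle. */
    void daxpy_avx512(size_t n, double a, const double *x, double *y)
    {
        __m512d va = _mm512_set1_pd(a);
        size_t i = 0;
        for (; i + 8 <= n; i += 8) {
            __m512d vx = _mm512_loadu_pd(x + i);
            __m512d vy = _mm512_loadu_pd(y + i);
            vy = _mm512_fmadd_pd(va, vx, vy);   /* vy = va*vx + vy */
            _mm512_storeu_pd(y + i, vy);
        }
        for (; i < n; i++)                      /* scalar remainder */
            y[i] = a * x[i] + y[i];
    }

At the 2.6 GHz base frequency this corresponds to a peak of about 83 GFlop/s per core in double precision, before accounting for the lower sustained AVX-512 clock discussed above.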

Compute nodes, AMD Zen3

Architecture is AMD Zen3 (AMD EPYC 7763 64-Core).

  • The CPU-only node has 2 CPU sockets with 64 cores each and 1 TB of memory (or 8020 MB/core usable).

Large memory nodes

There are 18 cores on each of the 4 NUMA islands. The cores on each NUMA island share 768 GB memory, but have access to the full 3072 GB on the node. The memory controller on each NUMA island has 4 channels.

Each core has:
  • 64 kB L1 cache
    • 32 kB L1 data cache
    • 32 kB L1 instruction cache
  • 256 kB L2 cache
  • 45 MB L3 cache shared between the cores on each NUMA island

[Image: NUMA layout of a large memory node (largememnode-overlayed_numbering-fixed.png), generated with lstopo]

GPU nodes, V100

We have 10 nodes with NVidia V100 (Volta) GPUs. The CPU cores are identical to those in the Skylake compute nodes, and in addition each node has

  • 2 V100 GPUs, each with
    • 5120 CUDA cores
    • 640 Tensor cores

One V100 GPU is located on each NUMA island.
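
As an illustration only (assuming a CUDA toolkit module is loaded and the file is compiled with nvcc), a small C program using the CUDA runtime API to list the GPUs visible on a node. cudaDeviceProp does not report a "CUDA core" count directly, but on Volta each streaming multiprocessor (SM) has 64 FP32 cores, so a V100's 80 SMs correspond to the 5120 CUDA cores listed above:

    #include <stdio.h>
    #include <cuda_runtime.h>

    /* Print name, SM count and memory size of every GPU visible on the node. */
    int main(void)
    {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            fprintf(stderr, "No CUDA-capable device found\n");
            return 1;
        }
        for (int i = 0; i < count; i++) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("GPU %d: %s, %d SMs, %.1f GB memory\n",
                   i, prop.name, prop.multiProcessorCount,
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        }
        return 0;
    }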

GPU nodes, A100

  • The GPU-enabled nodes (AMD EPYC 7413 24-Core) have 2 CPU sockets with 24 cores each, i.e. 48 cores in total, and 512 GB of memory (or 10600 MB/core usable). Each node is equipped with 2 NVIDIA A100 GPUs.


References and further information


The information used to create the images of the compute node and large memory node on this page was generated with the lstopo command.

Updated: 2024-03-19, 10:33