Libraries

This page is about software libraries. The libraries on HPC2N systems include parallel communication libraries (MPI) as well as various mathematical libraries, including Intel MKL. More detailed documentation on the available libraries follows below.

To access the libraries you need to load a module. More specifically you load the libraries together with a compiler in a compiler toolchain (see 'Installed compilers').

Build environment

Using the libraries available through a compiler toolchain on its own is possible, but it requires a fair bit of manual work: figuring out which paths to add to -I or -L for include files and libraries, and so on.

To make life easier for software builders, there is a special module, buildenv, that can be loaded on top of any toolchain. If it is missing for some toolchain, send a mail to support@hpc2n.umu.se and let us know.

This module defines a large number of environment variables with the relevant settings for the toolchain in use. Among other things it sets CC, CXX, F90, FC, MPICC, MPICXX, MPIF90, CFLAGS, FFLAGS, and much more.

To see all of them, run the following after loading a toolchain:

ml show buildenv

Depending on the software, these environment variables can be used to set the corresponding makefile variables or CMake defines, or simply serve as a guideline for what to put in your makefiles.

Exactly how to use them depends on the software's build system.

An example using the foss toolchain:

ml foss
ml buildenv
ml show buildenv

You will now get a list of all the environment variables that buildenv defines; among them are the variables for the common libraries, such as $LIBBLAS, $LIBLAPACK, $LIBSCALAPACK, and $LIBFFT.

Some of the variables end in "_MT"; these should be used if threaded versions of the libraries are needed.

Note: It is highly recommended to use the environment variables from the buildenv module.
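For example, assuming a toolchain and buildenv are loaded, a small C program that uses BLAS (the file name myprog.c is just an illustration) could be compiled and linked using only the buildenv variables:

  $CC $CFLAGS -o myprog myprog.c $LIBBLAS

The same pattern works with the MPI wrappers ($MPICC, $MPIF90) and the other library variables listed further down on this page.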

MPI Libraries

Message Passing Interface (MPI) is a standardized and portable message-passing system designed to function on a wide variety of parallel computing architectures. Several implementations exist, including OpenMPI and Intel MPI.

A number of compiler toolchains at HPC2N have OpenMPI or Intel MPI installed. The MPI libraries are best loaded through one of the following compiler toolchains:

  • foss: GCC, OpenMPI, OpenBLAS/LAPACK, FFTW, ScaLAPACK
  • gompi: GCC, OpenMPI
  • gompic: GCC, OpenMPI, CUDA
  • goolfc: gompic, OpenBLAS/LAPACK, FFTW, ScaLAPACK
  • iimpi: icc, ifort, IntelMPI
  • intel: icc, ifort, IntelMPI, IntelMKL
  • intelcuda: intel, CUDA
  • gimkl: GCC, IntelMPI, IntelMKL
  • pomkl: PGI, OpenMPI, IntelMKL
  • pompi: PGI, OpenMPI

To compile something, first load the compiler toolchain module with:

ml <compiler toolchain module>

and then use the appropriate MPI wrapper command:

Language     Command (GCC or PGI)   Command (Intel)
Fortran 77   mpif77                 mpiifort
Fortran 90   mpif90                 mpiifort
Fortran 95   mpif90                 mpiifort
C            mpicc                  mpiicc
C++          mpicxx                 mpiicpc

To run, you need to add this to your job submit script:

ml <compiler toolchain module>
mpirun <program>
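As a minimal sketch, here is an MPI "hello world" in C (the file name hello.c is just an illustration). After loading foss it could be compiled with mpicc -o hello hello.c and started in the job script with mpirun ./hello:

  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char *argv[])
  {
      int rank, size;

      MPI_Init(&argc, &argv);                  /* initialize the MPI runtime   */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process' rank           */
      MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes    */

      printf("Hello from rank %d of %d\n", rank, size);

      MPI_Finalize();                          /* shut down MPI before exiting */
      return 0;
  }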

Here are a few links to pages with more information about the different MPI implementations.

OpenMPI, external documentation

Intel MPI, external documentation

Math Libraries

The following list is not exhaustive, but it covers the most popular of the libraries installed at HPC2N.

Use:

ml spider

to see a more complete list of modules, including libraries.

In order to access the libraries, you should first load a suitable compiler toolchain, i.e. one of these:

  • foss: GCC, OpenMPI, OpenBLAS/LAPACK, FFTW, ScaLAPACK
  • goolfc: gompic, OpenBLAS/LAPACK, FFTW, ScaLAPACK

You can also use Intel MKL, recommended for compiling with the Intel compilers.

Examples of linking with math libraries

NOTE: in all the examples, -o <program> is used to give the name <program> to the executable. If you leave this out, your executable will be named a.out.

BLAS

BLAS is available in the form of OpenBLAS or Intel MKL. Intel MKL is often recommended if you are compiling with the Intel compilers. See the section about Intel MKL for more information.

Linking with OpenBLAS:

Load either foss or goolfc. Do ml av to see which versions you can load. Then use one of the following commands to compile and link:

Language     Command
Fortran 77   gfortran -o <program> <program.f> -lopenblas
Fortran 90   gfortran -o <program> <program.f90> -lopenblas
C            gcc -o <program> <program.c> -lopenblas
C++          g++ -o <program> <program.cc> -lopenblas

Or use the environment variable $LIBBLAS to link with.
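As an illustration, a minimal C program calling the BLAS routine ddot through the CBLAS interface shipped with OpenBLAS could look like this (the file name dot.c is hypothetical, and the cblas.h header is assumed to be on the include path set up by the toolchain). Compile it with gcc -o dot dot.c -lopenblas:

  #include <stdio.h>
  #include <cblas.h>

  int main(void)
  {
      double x[] = {1.0, 2.0, 3.0};
      double y[] = {4.0, 5.0, 6.0};

      /* dot product x.y = 1*4 + 2*5 + 3*6 = 32 */
      double d = cblas_ddot(3, x, 1, y, 1);

      printf("dot product = %f\n", d);
      return 0;
  }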

LAPACK

LAPACK is written in Fortran77 and provides routines for solving systems of simultaneous linear equations, least-squares solutions of linear systems of equations, eigenvalue problems, and singular value problems. The associated matrix factorizations (LU, Cholesky, QR, SVD, Schur, generalized Schur) are also provided, as are related computations such as reordering of the Schur factorizations and estimating condition numbers. Dense and banded matrices are handled, but not general sparse matrices. In all areas, similar functionality is provided for real and complex matrices, in both single and double precision.

Link to external information about LAPACK.

You can also use the Intel MKL version of LAPACK.

Linking with LAPACK and OpenBLAS:

To use the Fortran-based LAPACK library you must first load its module and a BLAS module, as well as the compiler you wish to use.

Load either foss or goolfc. Do ml av to see which versions you can load. Then use one of the following commands to compile and link:

Language     Command
Fortran 77   gfortran -o <program> <program.f> -lopenblas
Fortran 90   gfortran -o <program> <program.f90> -lopenblas
C            gcc -o <program> <program.c> -lopenblas
C++          g++ -o <program> <program.cc> -lopenblas

Or use the environment variable $LIBLAPACK to link with.
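As a sketch, a C program can call the Fortran LAPACK routine dgesv directly by declaring its prototype and storing the matrix in column-major order (the file name solve.c is hypothetical, and the trailing underscore in the symbol name is an assumption that matches the usual gfortran name mangling). Since the OpenBLAS library also contains LAPACK, it can be compiled with gcc -o solve solve.c -lopenblas:

  #include <stdio.h>

  /* Fortran LAPACK routine, linked from OpenBLAS */
  extern void dgesv_(int *n, int *nrhs, double *a, int *lda,
                     int *ipiv, double *b, int *ldb, int *info);

  int main(void)
  {
      /* Solve A x = b for a small 2x2 system; A = [3 1; 1 2] stored column by column */
      double a[4] = {3.0, 1.0, 1.0, 2.0};
      double b[2] = {9.0, 8.0};
      int n = 2, nrhs = 1, lda = 2, ldb = 2, ipiv[2], info;

      dgesv_(&n, &nrhs, a, &lda, ipiv, b, &ldb, &info);

      if (info == 0)
          printf("x = (%f, %f)\n", b[0], b[1]);   /* the solution overwrites b */
      else
          printf("dgesv failed with info = %d\n", info);
      return 0;
  }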

BLACS

As of ScaLAPACK version 2, BLACS is now included in the ScaLAPACK library. (Link to external information about BLACS.)

ScaLAPACK

Since ScaLAPACK depends on LAPACK (and BLAS), using it involves multiple libraries.

NOTE: As of version 2, ScaLAPACK includes BLACS. This means that it is tightly coupled to the MPI implementation used to build it. In order to use this library, a compiler and the corresponding MPI libraries need to be loaded first, as well as ScaLAPACK, LAPACK, and BLAS for that compiler. This is easily accomplished by loading a suitable compiler toolchain module.

Linking with ScaLAPACK, OpenBLAS, and LAPACK:

You can load either foss or goolfc. Do ml av to see which versions you can load. In addition, you can use Intel MKL if you are using the Intel compilers.

After loading the compiler toolchain module, use one of the following commands to compile and link with ScaLAPACK:

Language     Command
Fortran 77   mpifort -o <program> <program.f> -lscalapack -lopenblas
Fortran 90   mpifort -o <program> <program.f90> -lscalapack -lopenblas
C            mpicc -o <program> <program.c> -lscalapack -lopenblas -lgfortran
C++          mpicc -o <program> <program.cc> -lscalapack -lopenblas -lgfortran

Or use the environment variable $LIBSCALAPACK to link with.
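As a minimal sketch of the ScaLAPACK/BLACS setup (the file name grid.c is hypothetical, and the C BLACS prototypes are declared by hand since they are not provided in a standard header), the following program only sets up and releases a 1 x nprocs process grid. Compile it with mpicc -o grid grid.c -lscalapack -lopenblas -lgfortran and start it with mpirun:

  #include <stdio.h>
  #include <mpi.h>

  /* C interface to BLACS, part of the ScaLAPACK library */
  extern void Cblacs_pinfo(int *mypnum, int *nprocs);
  extern void Cblacs_get(int icontxt, int what, int *val);
  extern void Cblacs_gridinit(int *icontxt, const char *order, int nprow, int npcol);
  extern void Cblacs_gridinfo(int icontxt, int *nprow, int *npcol, int *myrow, int *mycol);
  extern void Cblacs_gridexit(int icontxt);

  int main(int argc, char *argv[])
  {
      int mypnum, nprocs, ctxt, nprow, npcol, myrow, mycol;

      MPI_Init(&argc, &argv);
      Cblacs_pinfo(&mypnum, &nprocs);            /* my process number and the total count */
      Cblacs_get(-1, 0, &ctxt);                  /* get the default system context        */
      Cblacs_gridinit(&ctxt, "Row", 1, nprocs);  /* a 1 x nprocs process grid             */
      Cblacs_gridinfo(ctxt, &nprow, &npcol, &myrow, &mycol);

      printf("process %d of %d sits at grid position (%d,%d)\n",
             mypnum, nprocs, myrow, mycol);

      Cblacs_gridexit(ctxt);                     /* release the grid ...                  */
      MPI_Finalize();                            /* ... and shut down MPI                 */
      return 0;
  }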

FFTW

There are two versions of FFTW available, version 2.1.5 and version 3.x. Both have MPI support. Note that the API has changed between 2.1.5 and the 3.x versions.

Link to external information about FFTW.

Linking with FFTW:

Do ml spider and look for FFTW to see which versions are available. To use FFTW version 3, you should load it as part of a compiler toolchain. The available modules are foss and goolfc. Do ml av to see which versions you can load. In addition, you can use Intel MKL if you are using the Intel compilers.

Then use one of these commands to compile and link with FFTW3:

Language     Command
Fortran 77   gfortran -o <program> <program.f> -lfftw3 -lm
Fortran 90   gfortran -o <program> <program.f90> -lfftw3 -lm
C            gcc -o <program> <program.c> -lfftw3 -lm
C++          g++ -o <program> <program.cc> -lfftw3 -lm

Or use $LIBFFT -lm to link with ($LIBFFT_MT -lm for threaded).
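As a sketch, the following C program computes a small 1-D complex DFT with FFTW3 (the file name fft.c is hypothetical). Compile it with gcc -o fft fft.c -lfftw3 -lm, or by using $LIBFFT -lm:

  #include <stdio.h>
  #include <fftw3.h>

  #define N 8

  int main(void)
  {
      fftw_complex *in  = (fftw_complex *) fftw_malloc(sizeof(fftw_complex) * N);
      fftw_complex *out = (fftw_complex *) fftw_malloc(sizeof(fftw_complex) * N);

      /* Plan first, then fill the input (FFTW_ESTIMATE leaves the arrays untouched). */
      fftw_plan plan = fftw_plan_dft_1d(N, in, out, FFTW_FORWARD, FFTW_ESTIMATE);

      for (int i = 0; i < N; i++) {
          in[i][0] = i;      /* real part      */
          in[i][1] = 0.0;    /* imaginary part */
      }

      fftw_execute(plan);

      for (int i = 0; i < N; i++)
          printf("out[%d] = %f %+fi\n", i, out[i][0], out[i][1]);

      fftw_destroy_plan(plan);
      fftw_free(in);
      fftw_free(out);
      return 0;
  }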

METIS

METIS is a set of serial programs for partitioning graphs, partitioning finite element meshes, and producing fill reducing orderings for sparse matrices.

Link to external information about METIS.

To see which versions of METIS are available use:

ml spider metis

Then use the corresponding MPI compiler wrappers to build and link with METIS.
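As an illustration of the METIS 5 API (the file name part.c and the small example graph are made up), the following C program partitions a 5-vertex cycle into two parts; it could be compiled with, for example, gcc -o part part.c -lmetis:

  #include <stdio.h>
  #include <metis.h>

  int main(void)
  {
      /* A cycle on 5 vertices, 0-1-2-3-4-0, in CSR form (xadj/adjncy). */
      idx_t nvtxs = 5, ncon = 1, nparts = 2, objval;
      idx_t xadj[]   = {0, 2, 4, 6, 8, 10};
      idx_t adjncy[] = {1, 4, 0, 2, 1, 3, 2, 4, 3, 0};
      idx_t part[5];

      int status = METIS_PartGraphKway(&nvtxs, &ncon, xadj, adjncy,
                                       NULL, NULL, NULL,       /* vwgt, vsize, adjwgt   */
                                       &nparts, NULL, NULL,    /* tpwgts, ubvec         */
                                       NULL, &objval, part);   /* options, objval, part */

      if (status != METIS_OK) {
          fprintf(stderr, "METIS_PartGraphKway failed\n");
          return 1;
      }

      for (idx_t i = 0; i < nvtxs; i++)
          printf("vertex %d -> part %d\n", (int)i, (int)part[i]);
      return 0;
  }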

SCOTCH

SCOTCH is a software package and a set of libraries for sequential and parallel graph partitioning, static mapping, sparse matrix block ordering, and sequential mesh and hypergraph partitioning.

Link to external information about SCOTCH.

To see which versions of SCOTCH are available, and how to load it and its dependencies, use:

ml spider scotch

When the module has been loaded, you can use the environment variable $EBROOTSCOTCH to find the binaries and libraries for SCOTCH.

There is a user manual here where you can see how to use SCOTCH.

CUDA libraries

NOTE: CUDA libraries are only installed on Kebnekaise and can be used with either GCC or Intel compilers. In addition, the NVIDIA CUDA compiler driver nvcc is installed.

You should load one of the following compiler toolchain modules:

  • gompic: GCC, OpenMPI, CUDA
  • goolfc: gompic, OpenBLAS/LAPACK, FFTW, ScaLAPACK
  • iccifortcuda: icc, ifort, CUDA
  • intelcuda: icc, ifort, IntelMPI, IntelMKL, CUDA

After you have loaded the compiler toolchain module, you compile and link with CUDA like this:

Fortran calling CUDA functions (GCC toolchains):
  1) nvcc -c <cudaprogram.cu>
  2) gfortran -lcudart -lcuda <program.f90> <cudaprogram.o>

C / C++ with CUDA:
  GCC, OpenMPI:     mpicc -o <program> <program.cu> -lcuda -lcudart
  Intel, Intel MPI: mpiicc -o <program> <program.cu> -lcuda -lcudart
  NVCC:             nvcc <program.cu> -o <program>

NOTE: CUDA functions can be called directly from Fortran programs: 1) first use the nvcc compiler to create an object file from the .cu file. 2) Then compile the Fortran code together with the object file from the .cu file.
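As a small sketch of linking against the CUDA runtime from plain C (the file name devinfo.c is made up; with a CUDA toolchain loaded, the include and library paths are assumed to be set up by the module), the following program just lists the GPUs visible to the job. Compile it with, for example, gcc -o devinfo devinfo.c -lcudart:

  #include <stdio.h>
  #include <cuda_runtime.h>

  int main(void)
  {
      int count = 0;
      cudaError_t err = cudaGetDeviceCount(&count);

      if (err != cudaSuccess) {
          fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
          return 1;
      }

      for (int i = 0; i < count; i++) {
          struct cudaDeviceProp prop;
          cudaGetDeviceProperties(&prop, i);    /* query name and memory of each GPU */
          printf("device %d: %s, %.1f GiB of memory\n",
                 i, prop.name, prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
      }
      return 0;
  }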

Intel MKL libraries

The Intel MKL libraries contain:

  • ScaLAPACK
  • LAPACK
  • Sparse Solver
  • BLAS
  • Sparse BLAS
  • PBLAS
  • GMP
  • FFTs
  • BLACS
  • VSL
  • VML

More information about MKL and the libraries in it can be found in the official Intel MKL documentation.

Linking with MKL libraries

To use the MKL libraries load one of the following compiler toolchain modules:

  • intel: icc, ifort, IntelMPI, IntelMKL
  • intelcuda: intel, CUDA
  • pomkl: PGI, OpenMPI, IntelMKL
  • gimkl: GCC, IntelMPI, IntelMKL

To use MKL correctly it is vital to have read the documentation.

To find the correct way of linking, take a look at the official Intel MKL documentation.

Using the buildenv module, the common BLAS/LAPACK/ScaLAPACK/FFTW libraries are available through the following environment variables, just like when using a non-MKL toolchain:

  • LIBBLAS
  • LIBLAPACK
  • LIBSCALAPACK
  • LIBFFT

Threaded versions are available from the corresponding environment variables with "_MT" appended.
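For example, with the intel toolchain and buildenv loaded, a hypothetical C program myprog.c that uses LAPACK could be linked against MKL without spelling out the MKL link line by hand:

  $CC -o myprog myprog.c $LIBLAPACK

Here $LIBLAPACK expands to the appropriate MKL libraries for the loaded toolchain.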

There are too many libraries in MKL to show a complete list of combinations. We refer you to the official MKL documentation for examples, and to support@hpc2n.umu.se for help.

Updated: 2017-05-25, 14:33