Libraries

[ MPI Libraries | Math and other Libraries and linkage (incl. Buildenv) | CUDA libraries | Intel MKL Libraries | Linking with MKL ]

This page is about software libraries. The libraries on HPC2N systems include parallel communication libraries (MPI) as well as various mathematical libraries, including Intel MKL. More detailed documentation on the available libraries follows below.

To access the libraries you need to load a module. More specifically, you load the libraries together with a compiler as part of a compiler toolchain (see 'Installed compilers').

Build environment

Using the libraries available through a compiler toolchain on their own is possible, but requires a fair bit of manual work: figuring out which paths to add to -I or -L for include files and libraries, and so on.

To make life as a software builder easier there is a special module available, buildenv, that can be loaded on top of any toolchain. If it is missing for some toolchain, send a mail to support@hpc2n.umu.se and let us know.

This module defines a large number of environment variables with the relevant settings for the used toolchain. Among other things it sets CC, CXX, F90, FC, MPICC, MPICXX, MPIF90, CFLAGS, FFLAGS, and much more.

To see all of them, run the following after loading a toolchain:

ml show buildenv

Depending on the software, you can use these environment variables to set related makefile variables or CMake defines, or simply use them as a guide for what to put in your makefiles.

Exactly how to use them depends on the software's build system.
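
As an illustrative sketch only (assuming a project with a plain Makefile or a CMake build), the buildenv variables can be passed on to the build system like this:

# Pass compilers and flags from buildenv to make
make CC="$CC" FC="$FC" CFLAGS="$CFLAGS" FFLAGS="$FFLAGS"

# Or point CMake at the toolchain compilers
cmake -DCMAKE_C_COMPILER="$CC" -DCMAKE_Fortran_COMPILER="$FC" ..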

An example using the foss toolchain:

ml foss
ml buildenv
ml show buildenv

This prints a long list of environment variables and their values, including those for the common libraries.

Some of these variables end in "_MT"; they should be used if threaded versions of the libraries are needed.

Note: It is highly recommended to use the environment variables from the buildenv module.

MPI Libraries

Message Passing Interface (MPI) is a standardized and portable message-passing system designed to function on a wide variety of parallel computing architectures. Several implementations exist, among others OpenMPI and Intel MPI.

A number of the compiler toolchains at HPC2N have OpenMPI or Intel MPI installed. The MPI libraries are best loaded through one of the following compiler toolchains:

  • foss: GCC, OpenMPI, OpenBLAS/LAPACK, FFTW, ScaLAPACK
  • gompi: GCC, OpenMPI
  • gompic: GCC, OpenMPI, CUDA
  • goolfc: gompic, OpenBLAS/LAPACK, FFTW, ScaLAPACK
  • iimpi: icc, ifort, IntelMPI
  • intel: icc, ifort, IntelMPI, IntelMKL
  • intelcuda: intel, CUDA
  • gimkl: GCC, IntelMPI, IntelMKL
  • pomkl: PGI, OpenMPI, IntelMKL
  • pompi: PGI, OpenMPI

To compile something, first load the compiler toolchain module with:

ml <compiler toolchain module>

and then use the appropriate MPI wrapper command:

Language     Command (gcc or pgi)   Command (intel)
Fortran 77   mpif77                 mpiifort
Fortran 90   mpif90                 mpiifort
Fortran 95   mpif90                 mpiifort
C            mpicc                  mpiicc
C++          mpicxx                 mpiicpc
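
For example, with the foss toolchain loaded, a C or Fortran 90 MPI program (hypothetical file names) can be compiled like this:

mpicc -o mpi_hello mpi_hello.c
mpif90 -o mpi_hello mpi_hello.f90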

To run, you need to add this to your job submit script:

ml <compiler toolchain module>
mpirun <program>
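
A minimal submit script could look like the sketch below (assuming the Slurm batch system, the foss toolchain, and a hypothetical program name; adjust the project ID, number of tasks, and time limit to your job):

#!/bin/bash
#SBATCH -A <your project ID>
#SBATCH -n 4
#SBATCH --time=00:10:00

ml purge
ml foss
mpirun ./mpi_hello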

Here are a few links to pages with more information about the different implementations of the MPI libraries.

OpenMPI, external documentation

Intel MPI, external documentation

Math (and other) Libraries

The following list is not exhaustive, but it covers the most 'popular' of the libraries that are installed at HPC2N.

Use:

ml spider

to see a more complete list of modules, including libraries.

In order to access the libraries, you should first load a suitable compiler toolchain, i.e. one of these:

  • foss: GCC, OpenMPI, OpenBLAS/LAPACK, FFTW, ScaLAPACK
  • goolfc: gompic, OpenBLAS/LAPACK, FFTW, ScaLAPACK

You can also use Intel MKL, recommended for compiling with the Intel compilers.

Examples of linking with math libraries

NOTE: in all the examples, -o <program> is used to give the name <program> to the executable. If you leave this out, your executable will be named a.out.

BLAS

BLAS is available in the form of OpenBLAS or Intel MKL. Intel MKL is often recommended if you are compiling with the Intel compilers. See the section about Intel MKL for more information.

Linking with OpenBLAS:

Load either foss or goolfc. Do ml av to see which versions you can load. Then use one of the following commands to compile and link:

Language     Command
Fortran 77   gfortran -o <program> <program.f> -lopenblas
Fortran 90   gfortran -o <program> <program.f90> -lopenblas
C            gcc -o <program> <program.c> -lopenblas
C++          g++ -o <program> <program.cc> -lopenblas

Or use the environment variable to link with: $LIBBLAS.
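
For example, with a toolchain and the buildenv module loaded, an illustrative sketch of linking through the environment variable (hypothetical file name; use $LIBBLAS_MT if you need the threaded version) is:

gcc -o blas_test blas_test.c $LIBBLAS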

LAPACK

LAPACK is written in Fortran77 and provides routines for solving systems of simultaneous linear equations, least-squares solutions of linear systems of equations, eigenvalue problems, and singular value problems. The associated matrix factorizations (LU, Cholesky, QR, SVD, Schur, generalized Schur) are also provided, as are related computations such as reordering of the Schur factorizations and estimating condition numbers. Dense and banded matrices are handled, but not general sparse matrices. In all areas, similar functionality is provided for real and complex matrices, in both single and double precision.

Link to external information about LAPACK.

You can also use the Intel MKL version of LAPACK.

Linking with LAPACK and OpenBLAS:

To use the Fortran-based LAPACK library you must first load its module and a BLAS module, as well as the compiler you wish to use.

Load either foss or goolfc. Do ml av to see which versions you can load. Then use one of the following commands to compile and link:

Language     Command
Fortran 77   gfortran -o <program> <program.f> -lopenblas
Fortran 90   gfortran -o <program> <program.f90> -lopenblas
C            gcc -o <program> <program.c> -lopenblas
C++          g++ -o <program> <program.cc> -lopenblas

Or use the environment variable $LIBLAPACK to link with.

BLACS

As of ScaLAPACK version 2, BLACS is now included in the ScaLAPACK library. (Link to external information about BLACS.)

ScaLAPACK

Since ScaLAPACK depends on LAPACK (and BLAS), using it involves multiple libraries.

NOTE: As of version 2, ScaLAPACK includes BLACS. This means it is tightly coupled to the MPI implementation used to build it. In order to use this library, a compiler and the corresponding MPI libraries need to be loaded first, as well as ScaLAPACK, LAPACK, and BLAS for that compiler. This is easily accomplished by loading a suitable compiler toolchain module.

Linking with ScaLAPACK, OpenBLAS, and LAPACK:

You can load either foss or goolfc. Do ml av to see which versions you can load. In addition, you can use Intel MKL if you are using the Intel compilers.

After loading the compiler toolchain module, use the following command to compile and link with ScaLAPACK:

Language     Command
Fortran 77   mpifort -o <program> <program.f> -lscalapack -lopenblas
Fortran 90   mpifort -o <program> <program.f90> -lscalapack -lopenblas
C            mpicc -o <program> <program.c> -lscalapack -lopenblas -lgfortran
C++          mpicxx -o <program> <program.cc> -lscalapack -lopenblas -lgfortran

Or use the environment variable $LIBSCALAPACK to link with.

FFTW

There are two versions of FFTW available, version 2.1.5 and version 3.x. Both have MPI support. Note that the API has changed between 2.1.5 and the 3.x versions.

Link to external information about FFTW.

Linking with FFTW:

Do ml spider and look for FFTW to see which versions are available. To use FFTW version 3, you should load it as part of a compiler toolchain. The available modules are foss and goolfc. Do ml av to see which versions you can load. In addition, you can use Intel MKL if you are using the Intel compilers.

Then use one of these commands to compile and link with FFTW3:

Language     Command
Fortran 77   gfortran -o <program> <program.f> -lfftw3 -lm
Fortran 90   gfortran -o <program> <program.f90> -lfftw3 -lm
C            gcc -o <program> <program.c> -lfftw3 -lm
C++          g++ -o <program> <program.cc> -lfftw3 -lm

Or use $LIBFFT -lm to link with ($LIBFFT_MT -lm for threaded).

METIS

METIS is a set of serial programs for partitioning graphs, partitioning finite element meshes, and producing fill reducing orderings for sparse matrices.

Link to external information about METIS.

To see which versions of METIS are available use:

ml spider metis

Then use the corresponding MPI compiler wrappers to build and link with METIS.
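
As an illustrative sketch (assuming the library name libmetis used by standard METIS installations, and a hypothetical file name):

mpicc -o metis_test metis_test.c -lmetis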

ELPA

The publicly available ELPA library provides highly efficient and highly scalable direct eigensolvers for symmetric matrices.
Though especially designed for PetaFlop/s applications solving large problems on massively parallel supercomputers, the ELPA eigensolvers have also proven to be very efficient for smaller matrices.

Link to external information about ELPA.

To see which versions of ELPA are available use:

ml spider elpa

Remember to load any prerequisites (the versions of icc, ifort, impi that ml spider ELPA/<version> lists) before loading the ELPA module. 

You can find the libraries that can be linked with in $EBROOTELPA/lib when the module has been loaded. In addition, there is a USERS_GUIDE.md file with information about how to use ELPA; it can be found in $EBROOTELPA/share/doc/elpa.
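
An illustrative linking sketch (hypothetical file name; the Fortran module files may sit in a version-specific subdirectory under $EBROOTELPA/include, so check the exact paths after loading the module):

mpifort -o elpa_test elpa_test.f90 -I$EBROOTELPA/include -L$EBROOTELPA/lib -lelpa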

Eigen

Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms.

Link to external information about Eigen.

To see which versions of Eigen are available use:

ml spider eigen

Remember to load any prerequisites (the versions of icc, ifort, impi that ml spider Eigen/<version> lists) before loading the Eigen module. 

You can find the Eigen files under the $EBROOTEIGEN directory after the module has been loaded.
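
Since Eigen is a header-only library there is nothing to link against; an illustrative compile sketch (hypothetical file name, assuming the headers are found under $EBROOTEIGEN/include) is:

g++ -I$EBROOTEIGEN/include -o eigen_test eigen_test.cpp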

There is a getting started guide and other documentation on the Eigen homepage.

GSL

The GNU Scientific Library (GSL) is a numerical library for C and C++ programmers. It is free software under the GNU General Public License.

Link to external information about GSL.

To see which versions of GSL are available use:

ml spider GSL

Remember to load any prerequisites (the versions of iccifort+impi OR gcc+openmpi that ml spider GSL/<version> lists) before loading the GSL module. 

The GSL libraries can be found in $EBROOTGSL/lib after the module has been loaded, if you need to check their names. You can get some information about GSL from the commands

man gsl

and

info gsl-ref
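
An illustrative linking sketch (hypothetical file name; -lgslcblas provides the CBLAS implementation shipped with GSL and can be swapped for another BLAS):

gcc -o gsl_test gsl_test.c -lgsl -lgslcblas -lm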

ipp

Intel Integrated Performance Primitives (Intel IPP) is an extensive library of multicore-ready, highly optimized software functions for multimedia, data processing, and communications applications. Intel IPP offers thousands of optimized functions covering frequently used fundamental algorithms.

Link to external information about ipp.

To see which versions of Intel IPP are available and how to load them, use:

ml spider ipp

You can find a getting started guide and other documentation on the Intel Integrated Performance Primitives homepage.

SCOTCH

SCOTCH is a software package and set of libraries for sequential and parallel graph partitioning, static mapping, sparse matrix block ordering, and sequential mesh and hypergraph partitioning.

Link to external information about SCOTCH.

To see which versions of SCOTCH are available, and how to load it and its dependencies, use:

ml spider scotch

When the module has been loaded, you can use the environment variable $EBROOTSCOTCH to find the binaries and libraries for SCOTCH.
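
An illustrative linking sketch (hypothetical file name; depending on the version, extra libraries such as -lz, -lm, or -lpthread may also be needed):

gcc -o scotch_test scotch_test.c -I$EBROOTSCOTCH/include -L$EBROOTSCOTCH/lib -lscotch -lscotcherr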

There is a user manual here where you can see how to use SCOTCH.

Libint

The Libint library is used to evaluate the traditional (electron repulsion) and certain novel two-body matrix elements (integrals) over Cartesian Gaussian functions used in modern atomic and molecular theory.

Link to external information about Libint.

To see which versions of Libint are available, and how to load it and any dependencies, use:

ml spider libint

When the module has been loaded, you can use the environment variable $EBROOTLIBINT to find the binaries and libraries for Libint.

There is some information about Libint and how to use it on the Libint Homepage. There is a brief Libint Programmers Manual here.

libxc

Libxc is a library of exchange-correlation functionals for density-functional theory. The aim is to provide a portable, well tested and reliable set of exchange and correlation functionals.

Link to external information about Libxc.

To see which versions of Libxc are available, and how to load it and any dependencies, use:

ml spider libxc

When the module has been loaded, you can use the environment variable $EBROOTLIBXC to find the binaries and libraries for Libxc.
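
An illustrative linking sketch for a C program (hypothetical file name; newer Libxc versions ship separate libraries for the Fortran interfaces):

gcc -o libxc_test libxc_test.c -I$EBROOTLIBXC/include -L$EBROOTLIBXC/lib -lxc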

There is a Libxc manual here.

Libxsmm

LIBXSMM is a library for small dense and small sparse matrix-matrix multiplications targeting Intel Architecture (x86).

Link to external information about Libxsmm.

To see which versions of Libxsmm are available, and how to load it and any dependencies, use:

ml spider libxsmm

When the module has been loaded, you can use the environment variable $EBROOTLIBXSMM to find the binaries and libraries for Libxsmm.
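
An illustrative linking sketch (hypothetical file name; some LIBXSMM functionality also needs a BLAS library and system libraries such as -lpthread, -lm, or -ldl):

gcc -o xsmm_test xsmm_test.c -I$EBROOTLIBXSMM/include -L$EBROOTLIBXSMM/lib -lxsmm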

There is some Libxsmm documentation here.

MPFR

The MPFR library is a C library for multiple-precision floating-point computations with correct rounding.

Link to external information about MPFR.

To see which versions of MPFR are available, and how to load it and any dependencies, use:

ml spider mpfr

When the module has been loaded, you can use the environment variable $EBROOTMPFR to find the binaries and libraries for MPFR.
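
An illustrative linking sketch (hypothetical file name; MPFR is built on top of GMP, so GMP is linked in as well):

gcc -o mpfr_test mpfr_test.c -lmpfr -lgmp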

The MPFR Reference Guide is here.

NetCDF

NetCDF (Network Common Data Form) is a set of software libraries and self-describing, machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.

Link to external information about NetCDF.

To see which versions of NetCDF are available, and how to load it and any dependencies, use:

ml spider netcdf

When the module has been loaded, you can use the environment variable $EBROOTNETCDF to find the binaries and libraries for NetCDF.
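
An illustrative linking sketch for a C program (hypothetical file name; Fortran programs additionally need the separate netCDF-Fortran library, linked with -lnetcdff):

gcc -o netcdf_test netcdf_test.c -lnetcdf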

There is some information about NetCDF and how to use it on the NetCDF documentation page.

ParMETIS

ParMETIS is an MPI-based parallel library that implements a variety of algorithms for partitioning unstructured graphs, meshes, and for computing fill-reducing orderings of sparse matrices.

Link to external information about ParMETIS.

To see which versions of ParMETIS are available, and how to load it and any dependencies, use:

ml spider parmetis

When the module has been loaded, you can use the environment variable $EBROOTPARMETIS to find the binaries and libraries for ParMETIS.
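
An illustrative linking sketch (hypothetical file name; ParMETIS uses the serial METIS library underneath, so both are linked):

mpicc -o parmetis_test parmetis_test.c -lparmetis -lmetis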

There is a ParMETIS manual here [PDF].

SIONlib

SIONlib is a scalable I/O library for parallel access to task-local files. The library not only supports writing and reading binary data to or from several thousands of processors into a single or a small number of physical files, but also provides global open and close functions to access SIONlib files in parallel. This package provides a stripped-down installation of SIONlib for use with performance tools (e.g., Score-P), with renamed symbols to avoid conflicts when an application using SIONlib itself is linked against a tool requiring a different SIONlib version.

Link to external information about SIONlib.

To see which versions of SIONlib are available, and how to load it and any dependencies, use:

ml spider sionlib

When the module has been loaded, you can use the environment variable $EBROOTSIONLIB to find the binaries and libraries for SIONlib.

There is some documentation for SIONlib here.

CUDA libraries

NOTE: CUDA libraries are only installed on Kebnekaise and can be used with either GCC or Intel compilers. In addition, the NVIDIA CUDA compiler driver nvcc is installed.

You should load one of the following compiler toolchain modules:

  • gompic: GCC, OpenMPI, CUDA
  • goolfc: gompic, OpenBLAS/LAPACK, FFTW, ScaLAPACK
  • iccifortcuda: icc, ifort, CUDA
  • intelcuda: icc, ifort, IntelMPI, IntelMKL, CUDA

After you have loaded the compiler toolchain module, you compile and link with CUDA like this:

Fortran calling CUDA functions:
  1) nvcc -c <cudaprogram.cu>
  2) gfortran -lcudart -lcuda <program.f90> <cudaprogram.o>

C / C++ with CUDA:
  GCC, OpenMPI:      mpicc -o <program> <program.cu> -lcuda -lcudart
  Intel, Intel MPI:  mpiicc -o <program> <program.cu> -lcuda -lcudart
  NVCC:              nvcc <program.cu> -o <program>

NOTE: CUDA functions can be called directly from Fortran programs: 1) first use the nvcc compiler to create an object file from the .cu file. 2) Then compile the Fortran code together with the object file from the .cu file.
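
As an illustrative sketch of the two-step Fortran build (hypothetical file names):

nvcc -c cudakernels.cu
gfortran -o myprog myprog.f90 cudakernels.o -lcudart -lcuda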

Intel MKL libraries

The Intel MKL libraries contain:

  • ScaLAPACK
  • LAPACK
  • Sparse Solver
  • BLAS
  • Sparse BLAS
  • PBLAS
  • GMP
  • FFTs
  • BLACS
  • VSL
  • VML

More information about MKL and the libraries it contains can be found in the external Intel MKL documentation.

Linking with MKL libraries

To use the MKL libraries load one of the following compiler toolchain modules:

  • intel: icc, ifort, IntelMPI, IntelMKL
  • intelcuda: intel, CUDA
  • pomkl: PGI, OpenMPI, IntelMKL
  • gimkl: GCC, IntelMPI, IntelMKL

To use MKL correctly it is vital to have read the documentation.

To find the correct way of linking, take a look at the official Intel MKL documentation.

Using the buildenv module, the common BLAS/LAPACK/ScaLAPACK/FFTW libraries are available in the following environment variables, just like when using a non-MKL toolchain:

  • LIBBLAS
  • LIBLAPACK
  • LIBSCALAPACK
  • LIBFFT

Threaded versions are available from the corresponding environment variables with "_MT" appended.
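
For example, with the intel toolchain and buildenv loaded, an illustrative sketch of linking a LAPACK-using program against MKL through the environment variable (hypothetical file name) is:

ifort -o lapack_test lapack_test.f90 $LIBLAPACK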

There are too many libraries in MKL to show a complete list of combinations. We refer you to the official MKL documentation for examples and support@hpc2n.umu.se for help.

Updated: 2024-03-08, 14:54