Singularity is a free, open-source, cross-platform program that performs operating-system-level virtualization, also known as containerization. It is freely available to users at HPC2N.
Singularity is a container system for Linux HPC that lets you define your own environment and makes your work portable and reproducible on any system that supports it. A container is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings.
Singularity enables you to run applications under a different Linux environment than the one you are currently using. This can, for example, solve the problem of working with proprietary licensed Linux software that only supports another Linux distribution.
Other typical cases where Singularity is useful are:
- Reproducibility of software contained in a single file that can be used at many HPC centers!
- Software development performed on RHEL7 (or any Linux distribution that is different from the one in use on HPC2N clusters)
- Working with proprietary licensed Linux software that doesn't have support for the Linux distribution used on the HPC2N clusters
- You need the flexibility to immediately run your software on other HPC systems that support Singularity
- You want to get software running quickly, even when its versions are rapidly evolving
- Software metadata files are unmanageable (e.g., installing software with Python, R, conda) – a Singularity container allows for the use of a single compressed image file.
But it is also important to know when Singularity is not the best option. For instance:
- When performance is important. Singularity itself generally does not slow down your code, but the images are usually not optimized for the HPC2N clusters.
On HPC2N we have Singularity available as a module.
This section will describe how to use Singularity at HPC2N, and how it might differ from how it is used at other sites. The official Singularity documentation can be found at https://sylabs.io/docs/.
Note that this documentation is meant for Singularity 3.x!
To use Singularity, you first need to load the singularity module. This is done with the "module load" command.
This very simple example shows how to run a newer version of bash using Singularity. It involves loading the latest version of the singularity module, downloading the bash image (once), and then running bash from the image. It is a trivial example and not especially useful, but it illustrates the workflow.
```bash
# Load Singularity module
module load singularity

# Download image (once)
singularity pull docker://bash

# Run the image
singularity exec bash_latest.sif bash

# Alternative way that works for this image
./bash_latest.sif
```
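Beyond `exec`, a few other Singularity subcommands are commonly useful. A minimal sketch, reusing the bash image pulled above:

```shell
# Open an interactive shell inside the image
singularity shell bash_latest.sif

# Run the image's default runscript (equivalent to executing the .sif directly)
singularity run bash_latest.sif

# Show metadata and labels recorded in the image
singularity inspect bash_latest.sif
```

All three subcommands require the singularity module to be loaded first, as shown above.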
This example uses a previously created OpenFOAM example image.
```bash
#!/bin/bash
#SBATCH -n 4
#SBATCH -t 00:10:00

IMAGE=<path to the image>

# singularity exec openfoam.sif find / -xdev -iname '*bashrc' -ipath '*foam*'
FOAM_BASHRC=/opt/OpenFOAM/OpenFOAM-7/etc/bashrc

# This MPI version should match whatever this command says:
# singularity exec openfoam.sif mpirun --version
ml GCC/10.2.0 OpenMPI/4.0.5

# Copied OpenFOAM example
cd damBreak

# Execute the serial stuff in one singularity instance
# Could also be done with three separate singularity
# runs with (very) slight extra overhead
singularity exec $IMAGE bash -c "
source $FOAM_BASHRC &&
blockMesh -case damBreak &&
setFields -case damBreak &&
decomposePar -case damBreak
" || exit 1

# Execute interFoam in parallel
srun singularity exec $IMAGE bash -c "
source $FOAM_BASHRC &&
interFoam -parallel -case damBreak &> result.out
"
```
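Assuming the batch script above is saved under a name of your choosing (here `run_dambreak.sh`, a hypothetical name), it is submitted like any other Slurm job:

```shell
# Submit the batch script to the queue
sbatch run_dambreak.sh

# Monitor your jobs in the queue
squeue -u $USER
```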
More detailed examples and information can be found on the pages about creating an image and about running images.
Specifics of the HPC2N setup
When running a Singularity image at HPC2N, everything below the following directories from the host environment will be available in the running image:
As usual, when running batch jobs, data will have to be placed in the directory tree of a storage project (recommended) or in your $HOME directory tree.
The current configuration does not limit the paths where containers can be stored. Both bind control and fusemount are enabled.
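Since bind control is enabled, additional host directories can be mapped into a container with `--bind` (`-B`). A minimal sketch with hypothetical paths and image name:

```shell
# Make a host directory visible inside the container under /data
# (src[:dest[:opts]] syntax; both paths and image.sif are examples only)
singularity exec --bind /proj/nobackup/myproject:/data image.sif ls -l /data
```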
Comparison, Singularity and Docker
Singularity and Docker provide similar functionality, but there are some important differences in how they work.
| Feature | Docker | Singularity |
|---|---|---|
| Runs Docker containers | X | X |
| Edits Docker containers | X | X |
| Interacts with host devices (like GPUs) | X | X |
| Interacts with host filesystems | X | X |
| Runs without sudo | | X |
| Runs as host user | | X |
| Can become root in container | X | X (using fakeroot, not allowed at HPC2N) |
| Controls network interfaces | X | X (using fakeroot, not allowed at HPC2N) |
| Configurable capabilities for enhanced security | | X |
Containers were created to isolate applications from the host environment. This means that all necessary dependencies are packaged into the application itself, allowing the application to run anywhere containers are supported. With container technology, administrators are no longer bogged down supporting every tool and library under the sun, and developers have complete control over the environment their tools ship with. You can find more information about containers on this page.
Unusual error messages and their solutions
```
/opt/wine-devel/bin/wine: error while loading shared libraries: cannot allocate symbol search list: Cannot allocate memory
```
If you are running applications in a 32-bit image (for example wine), this error can actually mean that some limit settings are too high, so that the application crashes or fails at high addresses.
Run `ulimit -a` both in a job (using a submit script) and on the access node to compare the settings between the different environments.
The solution is to add `ulimit -s 8192` (if the stack is unlimited or too large) in your submit script before calling singularity. This forces the stack into a lower address range, which fits better within the 32-bit environment.
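The effect of lowering the stack limit can be checked directly in the shell; a minimal sketch (the image and application names are hypothetical):

```shell
# Inspect all current limits, as you would on the access node and in a job
ulimit -a

# Lower the soft stack limit to 8192 kB before launching the container
ulimit -s 8192

# Verify the new limit took effect
ulimit -s

# Then call singularity as usual in the submit script, e.g.:
# singularity exec my_32bit_image.sif wine app.exe
```

Note that `ulimit` only affects the current shell and its children, so it must be set in the submit script itself, not on the access node beforehand.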
More information can be found on: