CUDA

Software name: 
CUDA
Policy 

The CUDA Toolkit is freely available to users at HPC2N.

General 

CUDA (formerly Compute Unified Device Architecture) is a parallel computing platform and programming model created by NVIDIA and implemented by the graphics processing units (GPUs) that they produce.  CUDA gives developers access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs.

Description 

The NVIDIA® CUDA® Toolkit provides a development environment for creating high performance GPU-accelerated applications. With the CUDA Toolkit, you can develop, optimize and deploy your applications on GPU-accelerated embedded systems, desktop workstations, enterprise data centers, cloud-based platforms and HPC supercomputers. The toolkit includes GPU-accelerated libraries, debugging and optimization tools, a C/C++ compiler and a runtime library to deploy your application.

Availability 

On HPC2N we have CUDA available as a module on Kebnekaise.

Usage at HPC2N 

CUDA is available on Kebnekaise.

To use the CUDA module, first add it to your environment. Use:

module spider cuda

to see which versions are available. Then do

module spider CUDA-VERSION

where CUDA-VERSION is one of the available versions, to determine how to load the module and any prerequisite modules. Loading the module sets the needed environment variables as well as the path.

You can read more about loading modules on our Accessing software with Lmod page and our Using modules (Lmod) page.

GCC and CUDA

HPC2N has gcccuda, a GNU Compiler Collection (GCC) based compiler toolchain, available along with the CUDA toolkit.

Do

ml spider gcccuda

to learn about available versions and how to load the module.

Intel and CUDA

HPC2N has the iccifortcuda and intelcuda toolchains available. They provide the Intel C/C++ and Fortran compilers, Intel MPI, and Intel MKL, together with the CUDA toolkit.

Do

ml spider iccifortcuda

or

ml spider intelcuda

in order to learn about available versions and how to load the modules.

Compiling and linking

After you have loaded the compiler toolchain module, you compile and link with CUDA like this:

Fortran calling CUDA functions (NVCC):
  1) nvcc -c <cudaprogram.cu>
  2) gfortran -lcudart -lcuda <program.f90> <cudaprogram.o>

C / C++ with CUDA:
  GCC, OpenMPI:     mpicc -o <program> <program.cu> -lcuda -lcudart
  Intel, Intel MPI: mpiicc -o <program> <program.cu> -lcuda -lcudart
  NVCC:             nvcc <program.cu> -o <program>

NOTE: CUDA functions can be called directly from Fortran programs: 1) first use the nvcc compiler to create an object file from the .cu file, then 2) compile the Fortran code together with that object file.
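As a minimal sketch of what the .cu file in this two-step recipe might contain (the file name, function name, and kernel are hypothetical, not part of HPC2N's documentation):

```cuda
// cudaprogram.cu -- hypothetical example for step 1) above.
// Compile with: nvcc -c cudaprogram.cu

#include <cuda_runtime.h>

// A trivial kernel that scales an array in place on the GPU.
__global__ void scale_kernel(float *x, float a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] *= a;
}

// extern "C" gives the wrapper an unmangled symbol name, so a Fortran
// program (e.g. via an ISO_C_BINDING interface block) can call it
// as scale_array().
extern "C" void scale_array(float *x, float a, int n)
{
    float *d_x;
    cudaMalloc(&d_x, n * sizeof(float));
    cudaMemcpy(d_x, x, n * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    scale_kernel<<<blocks, threads>>>(d_x, a, n);

    cudaMemcpy(x, d_x, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d_x);
}
```

The Fortran program in step 2) would then declare scale_array in an interface block and link against the object file as shown in the table above.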

Example, nvcc

To compile a CUDA program with the NVIDIA CUDA compiler driver nvcc, you first need to load a toolchain containing CUDA compilers. Here we use the gcccuda toolchain: 

ml gcccuda

We will be compiling the small test program "hello-world.cu" (and naming the executable "hello"):

nvcc hello-world.cu -o hello
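The source of the test program is not shown on this page; a minimal hello-world.cu could look like the following (the contents are an assumption, only the file name comes from the command above):

```cuda
// hello-world.cu -- a minimal CUDA "hello world"
// (hypothetical contents of the test program compiled above).

#include <cstdio>

__global__ void hello_kernel()
{
    // Each GPU thread prints its block and thread index.
    printf("Hello from block %d, thread %d\n", blockIdx.x, threadIdx.x);
}

int main()
{
    // Launch 2 blocks of 4 threads on the GPU.
    hello_kernel<<<2, 4>>>();
    // Wait for the kernel to finish so all output is flushed.
    cudaDeviceSynchronize();
    return 0;
}
```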

Submitting as a batch job

Let us submit a small job that compiles and runs the above program:

#!/bin/bash 
# Remember to change this to your own project ID! 
#SBATCH -A SNICXXXX-YY-ZZ
#SBATCH --time=00:05:00
#SBATCH --error=job.%J.err
#SBATCH --output=job.%J.out
# We need to run on GPUs. Here asking for 1 
#SBATCH --gres=gpu:k80:1

ml purge
ml gcccuda

nvcc hello-world.cu -o hello
./hello

We submit the above job script with

sbatch <jobscript>

Updated: 2024-12-12, 13:22