The VASP program is not distributed via site licences. However, HPC2N has access to the VASP code in order to support research groups that hold a valid VASP license.
See the VASP license for information regarding terms for published work.
Once you have access to a license, the license holder should either add the license information in SUPR, or contact email@example.com with the license number and the list of users who should have access. You will then be given access to VASP.
Note: only the owner of the license can add/delete users to/from the access list.
VASP is a complex package for performing ab-initio quantum-mechanical molecular dynamics (MD) simulations using pseudopotentials or the projector-augmented wave method and a plane wave basis set. The approach implemented in VASP is based on the (finite-temperature) local-density approximation with the free energy as variational quantity and an exact evaluation of the instantaneous electronic ground state at each MD time step.
VASP uses efficient matrix diagonalisation schemes and an efficient Pulay/Broyden charge density mixing. These techniques avoid the problems that can occur in the original Car-Parrinello method, which is based on the simultaneous integration of electronic and ionic equations of motion.
The interaction between ions and electrons is described by ultra-soft Vanderbilt pseudopotentials (US-PP) or by the projector-augmented wave (PAW) method. US-PP (and the PAW method) allow for a considerable reduction of the number of plane-waves per atom for transition metals and first row elements. Forces and the full stress tensor can be calculated with VASP and used to relax atoms into their instantaneous ground-state.
At HPC2N we have a site installation available for those with a valid VASP contract/license. We can adapt the installed binaries to suit the group's needs.
For users authorized to use them, the VASP binaries are made available by loading the VASP module. You should use:
module spider vasp
to see which versions are available, as well as how to load the module and the needed prerequisites.
Note that while the case does not matter when you use "ml spider", it is necessary to match the case when loading the modules.
Note also that you need to load both the icc and the ifort compilers, despite what 'module spider' says.
VASP has been built in both a normal CPU-only version and a GPU-enabled version.
Depending on which toolchain is loaded, one or the other is available.
Use 'ml spider vasp' to see which versions are available, then 'ml spider VASP/<some-version>' to see which toolchains each version was built with.
To load a CPU-only version make sure that the list of modules to load does not contain CUDA. To load the GPU-enabled version the list must contain CUDA.
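As a quick sanity check, you can inspect the LOADEDMODULES environment variable (maintained by Lmod/Environment Modules as a colon-separated list of loaded modules) to see whether a CUDA module is in your current module list. This is only a sketch, not an official interface:

```shell
# Sketch: check whether a CUDA module is currently loaded, i.e. whether
# the GPU-enabled or the CPU-only VASP build will be available.
# Assumes an Lmod/Environment Modules setup where LOADEDMODULES holds a
# colon-separated list of loaded module names.
if echo "${LOADEDMODULES:-}" | tr ':' '\n' | grep -q '^CUDA/'; then
    echo "CUDA module loaded: the GPU-enabled VASP build is available"
else
    echo "no CUDA module loaded: the CPU-only VASP build is available"
fi
```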
Example: loading VASP/5.4.1-05Feb16-p02-hpc2n on Kebnekaise with GPU support

ml icc/2017.1.132-GCC-5.4.0-2.26
ml ifort/2017.1.132-GCC-5.4.0-2.26
ml impi/2017.1.132
ml CUDA/8.0.44
ml VASP/5.4.1-05Feb16-p02-hpc2n

or, using the intelcuda toolchain:

ml intelcuda/2016.11
ml VASP/5.4.1-05Feb16-p02-hpc2n
Pseudopotentials are installed under $VASP_PP_PATH/potpaw_LDA and $VASP_PP_PATH/potpaw_PBE. The VASP code has been modified to automatically find the site-installed vdw_kernel.bindat ($VASP_VDW_KERNEL).
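For example, a POTCAR file is built by concatenating the potentials for each element, in the same order as the species appear in your POSCAR. A minimal sketch (the element subdirectory names Si and O are illustrative; check $VASP_PP_PATH/potpaw_PBE for what is actually installed on the system):

```shell
# Sketch: build a POTCAR for a hypothetical SiO2 run from the
# site-installed PBE pseudopotentials. The element order must match the
# species order in POSCAR; the subdirectory names (Si, O) follow the
# standard potpaw layout and are examples only.
cat "$VASP_PP_PATH/potpaw_PBE/Si/POTCAR" \
    "$VASP_PP_PATH/potpaw_PBE/O/POTCAR" > POTCAR
```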
Naming scheme for VASP binaries:
- vasp_std - compiled with -DNGZhalf, normal (standard) version for bulk systems
- vasp_gam - compiled with -DwNGZhalf -DNGZhalf, gamma-point only (big supercells or clusters)
- vasp_ncl - compiled without -D(w)NGZ*, for spin-orbit/non-collinear calculations.
- vasp_gpu - standard VASP compiled for GPU
- vasp_gpu_ncl - compiled without -D(w)NGZ*, for spin-orbit/non-collinear calculations and with GPU support
All versions of VASP have been built with ScaLAPACK support.
Versions of VASP with a "-hpc2n" suffix have been built with LONGCHAR turned on.
The x/y/z-restrict patch from NSC is also built in to the "-hpc2n" versions.
The default is set to behave as the code does without the patch.
To lock relaxation along the Z-axis, add the ZRELAX flag; replace ZRELAX with XRELAX or YRELAX as needed.
If you need a differently compiled VASP binary, please contact firstname.lastname@example.org.
Memory consumption guidelines
- The default memory is 2500 MB/core on Abisko.
- The default memory is 4500 MB/core on Kebnekaise.
- If you need more memory than that, you should either run on the bigmem nodes (Abisko) / largemem nodes (Kebnekaise), or increase the number of cores per mpi-task in your allocation; see the examples below.
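As a back-of-the-envelope check, the number of cores to allocate per MPI task can be computed from the per-task memory need by rounding up. A sketch (the 9000 MB per-task requirement is an invented example; 4500 MB/core is the Kebnekaise regular-node figure from above):

```shell
# Sketch: how many cores per MPI task (-c) to request so that each task
# gets enough memory. mem_per_task is a made-up example value; adjust it
# to your own job's requirements.
mem_per_task=9000    # MB needed by one MPI task (example value)
mem_per_core=4500    # MB available per core (Kebnekaise regular nodes)
# Round up: cores = ceil(mem_per_task / mem_per_core)
cores=$(( (mem_per_task + mem_per_core - 1) / mem_per_core ))
echo "request -c $cores in the batch script"   # prints: request -c 2
```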
If any of this is unclear, please send an email to email@example.com and we will try to make it more understandable.
Submit file examples
Abisko (three different examples)
#!/bin/bash
#SBATCH -o vasp.%j.out
#SBATCH -J my_vasp_job
#SBATCH -A SNICXXXX-YY-ZZ
# Use 12 mpi-tasks, all on the same node
#SBATCH -n 12
#SBATCH --ntasks-per-node=12
#SBATCH --time=10:00:00

# Load modules, unless already done before the job is submitted (remember,
# SLURM exports the environment by default, unless you set --export=NONE).
# You can also do 'ml purge' before loading, to make sure you don't have
# anything conflicting loaded.
ml intel/2017a
ml VASP/5.4.1-05Feb16-p02-hpc2n

srun vasp_std
#!/bin/bash
#SBATCH -A SNICXXXX-YY-ZZ
#SBATCH -J my_vasp_job
# Use 24 mpi-tasks, 12 per node
#SBATCH -n 24
#SBATCH --ntasks-per-node=12
# Each task needs 5000 MB of memory, so allocate 2 cores per mpi-task
#SBATCH -c 2
#SBATCH --time=04:00:00

ml intel/2017a
ml VASP/5.4.1-05Feb16-p02-hpc2n

srun vasp_std
#!/bin/bash
#SBATCH -A SNICXXXX-YY-ZZ
#SBATCH -J my-large-mem-vasp-job
# Use 48 mpi-tasks, all on the same node
#SBATCH -n 48
#SBATCH --ntasks-per-node=48
#SBATCH --time=04:00:00
# Need lots of memory per task, use the bigmem nodes
#SBATCH -p bigmem

ml intel/2017a
ml VASP/5.4.1-05Feb16-p02-hpc2n

srun vasp_gam
Kebnekaise (2 different examples)
#!/bin/bash
#SBATCH -A SNICXXXX-YY-ZZ
#SBATCH -J my-cpu-only-vasp-job
# Use 56 mpi-tasks
#SBATCH -n 56

ml icc/2017.1.132-GCC-5.4.0-2.26
ml ifort/2017.1.132-GCC-5.4.0-2.26
ml impi/2017.1.132
ml VASP/5.4.1-05Feb16-p02-hpc2n

srun vasp_std
#!/bin/bash
#SBATCH -A SNICXXXX-YY-ZZ
#SBATCH -J my-GPU-vasp-job
#SBATCH -n 28
# Asking for 2 GPUs
#SBATCH --gres=gpu:k80:2,mps

ml icc/2017.1.132-GCC-5.4.0-2.26
ml ifort/2017.1.132-GCC-5.4.0-2.26
ml CUDA/8.0.44
ml impi/2017.1.132
ml VASP/5.4.1-05Feb16-p02-hpc2n

srun vasp_gpu
Documentation is available on the VASP homepage.