WRF is available for all users at HPC2N.
The Weather Research and Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs.
WRF features multiple dynamical cores, a 3-dimensional variational (3DVAR) data assimilation system, and a software architecture allowing for computational parallelism and system extensibility. WRF is suitable for a broad spectrum of applications across scales ranging from meters to thousands of kilometers.
At HPC2N we have WRF installed as a module on Abisko and Kebnekaise. WPS is installed as a module on Abisko.
The binaries of WRF/WPS are available through the module system.
NOTE that the WPS module is only available on Abisko.
To access them you need to load the module on the command line and/or in the submit file. Use:
ml spider wrf
ml spider wps
to see which versions are available and how to load the module and its dependencies.
WRF 3.8.0 is built with the Intel compilers and Intel MPI, with both MPI and OpenMP enabled. That is the only version available on Kebnekaise. The same version also exists on Abisko, in two builds: one with Intel/Intel MPI and one with GCC/OpenMPI.
WRF/WPS 3.6.1 are built with the foss 2017a toolchain (GCC compilers, OpenMPI).
Example, loading WRF version 3.6.1 (Abisko):

ml GCC/6.3.0-2.27
ml OpenMPI/2.0.2
ml WRF/3.6.1-dmpar
The name of the wrf binary is wrf.exe and it is built with both MPI and OpenMP.
If that is not sufficient, please contact firstname.lastname@example.org with details of what you need and we will see if we can build it.
All other binaries from WRF are available in normal serial versions.
The input tables are located under /pfs/nobackup/data/wrf/geog/
The Vtables are located in $EBROOTWPS/WPS/ungrib/Variable_Tables (the environment variable is only set after the WPS module has been loaded).
Files in $EBROOTWRF/WRFV3/run may need to be copied or linked to your case directory if the program complains about missing files.
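A typical case-directory setup might therefore look like the sketch below. The file name Vtable.GFS is only an example for GFS input data; pick the Vtable that matches your dataset, and copy files instead of linking them if you need to edit them.

```shell
# Run inside your case directory, after loading the WPS and WRF modules
# (which set $EBROOTWPS and $EBROOTWRF).
# Link the Vtable matching your input data (Vtable.GFS is an example):
ln -sf $EBROOTWPS/WPS/ungrib/Variable_Tables/Vtable.GFS ./Vtable
# Link the static run-time files (tables, data files) from the installation:
ln -sf $EBROOTWRF/WRFV3/run/* .
```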
Since wrf is built as a combined OpenMP/MPI binary, special care must be taken in the submit file.
#!/bin/bash
# Request 2 nodes exclusively
#SBATCH -N 2
#SBATCH --exclusive
# We want to run OpenMP on one NUMA unit (the cores that share a memory channel).
# On Abisko this is 6 cores. On Kebnekaise it is 14. Change -c accordingly.
#SBATCH -c 6
# Slurm will then figure out the correct number of MPI tasks available
#SBATCH --time=6:00:00

# WRF version 3.6.1 on Abisko
ml GCC/6.3.0-2.27
ml OpenMPI/2.0.2
ml WRF/3.6.1-dmpar

# Set OMP_NUM_THREADS to the same value as -c, i.e. 6
# Change accordingly for Kebnekaise
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# --cpu_bind=rank_ldom is only possible if --exclusive was used above,
# and allocates one MPI task with its 6 (14) OpenMP cores per NUMA unit.
srun --cpu_bind=rank_ldom wrf.exe
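As a sanity check of the task count Slurm derives in the submit file above, the arithmetic is nodes times cores per node divided by cores per task (assuming Abisko's 48-core nodes, i.e. 8 NUMA units of 6 cores each):

```shell
# How Slurm arrives at the MPI task count for the example above.
NODES=2
CORES_PER_NODE=48   # assumption: Abisko compute nodes have 48 cores
CPUS_PER_TASK=6     # one NUMA unit (-c 6)
echo $(( NODES * CORES_PER_NODE / CPUS_PER_TASK ))   # prints 16 (MPI tasks)
```

On Kebnekaise the same calculation with 28 cores per node and -c 14 gives 4 MPI tasks on 2 nodes.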