File system and storage




  • AFS
    • This is where your home directory is located (cd $HOME)
    • Regularly backed up
    • NOT accessible by the batch system (except the folder Public, with the right settings)
  • PFS
    • Parallel File System
    • Accessible by the batch system

AFS - Andrew File System

Your home directory (i.e. the directory pointed to by the $HOME variable) is located on an AFS file system. This file system is backed up regularly.

Note that since ticket-forwarding to batch jobs does not work, the only AFS access possible from batch jobs is reading files from your Public directory, which is world-readable (yes, readable by the entire world). Use the parallel file system 'pfs' for data management in conjunction with batch jobs.

This generally means you should set up and run your jobs from directories somewhere under /pfs/nobackup/, not from your home directory!
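Since the parallel file system mirrors the home tree, you can build the path to your PFS home by prefixing $HOME with /pfs/nobackup. A minimal sketch (the project directory name is hypothetical):

```shell
#!/bin/sh
# Sketch: derive the path to your "parallel" home directory from $HOME,
# assuming the PFS mirrors the home tree under /pfs/nobackup (as above).
PFS_HOME="/pfs/nobackup$HOME"
echo "$PFS_HOME"
# A job directory would then be created under it, e.g.:
# mkdir -p "$PFS_HOME/myproject" && cd "$PFS_HOME/myproject"
```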

To find the path to your home directory, either run pwd just after logging in, or

$ cd
$ pwd

If you need more space in your home directory, contact HPC2N support and include an explanation of what you need the extra space for.

See AFS at HPC2N for further explanation of AFS.

PFS - Parallel File System

There is a parallel file system (PFS) available on all clusters.

Apart from your usual home directory you also have file space in the parallel file system. This file system is set up in "parallel" to the usual home tree, but starting from /pfs/nobackup instead. Thus, to create a soft link from your home directory to your corresponding home on the parallel file system, you could issue the following command:

$ ln -s /pfs/nobackup$HOME $HOME/pfs

Now, if you do

$ cd ~/pfs

you will end up in your "parallel" home directory.

Your home directory on the parallel file system is very useful, since batch jobs can create files there without any Kerberos ticket or permission manipulations. Moreover, the parallel file system offers high performance when accessed from the nodes, making it suitable for storage that is to be accessed by parallel jobs.
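As a sketch, a batch script submitted from a directory under /pfs/nobackup can simply use relative paths; the job starts in the submit directory, so its output lands on the parallel file system. This assumes a Slurm-style batch system, and the output file name is hypothetical:

```shell
#!/bin/bash
#SBATCH -t 00:05:00
# Sketch: a batch job submitted from a directory under /pfs/nobackup.
# The batch system starts the job in the submit directory, so relative
# paths below write to the parallel file system -- no Kerberos ticket needed.
echo "job runs in: $(pwd)"
echo "result data" > output.txt   # hypothetical result file, lands on pfs
```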

Note that the parallel file system is not intended for permanent storage and there is NO BACKUP of /pfs/nobackup. In case the file system gets full, files that have been unused for some time might get deleted without warning.

Quota on pfs

In order to avoid runaway programs filling the file system, quota limits are in place. Use the quota command to view your current quotas.

There are actually four quota limits on pfs: a soft and a hard limit for disk usage, and a soft and a hard limit for the number of files. The hard limits are really hard limits; you can never go above them. A soft limit may be exceeded for a grace period (4 weeks), but once you have been above it for longer than that, the soft limit behaves as a hard limit until you go below it again.

If your limit is too small, your PI should contact HPC2N support and include an explanation of what you need the extra space for.


/scratch

On some of the computers at HPC2N there is a directory called /scratch. It is a local disk area, usually fairly fast and large. It is intended for (temporary) files you create or need during your computations. Please do not keep files in /scratch that you do not need when not running jobs on the machine, and please make sure your job removes any temporary files it creates.
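One common pattern for keeping /scratch clean is to give each job its own temporary directory and remove it when the job ends. A minimal sketch; /tmp is used as a stand-in base directory so the sketch runs anywhere, and on an HPC2N node you would set it to /scratch:

```shell
#!/bin/sh
# Sketch: per-job temporary directory with automatic cleanup.
# SCRATCH_BASE is a stand-in; on an HPC2N compute node use /scratch.
SCRATCH_BASE="${SCRATCH_BASE:-/tmp}"
WORK=$(mktemp -d "$SCRATCH_BASE/job.XXXXXX")  # unique per-job directory
trap 'rm -rf "$WORK"' EXIT                    # remove temporaries on exit
echo "working in $WORK"
touch "$WORK/temp.dat"   # ... the computation writes temporaries here ...
```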

When anybody needs more space than is available on /scratch, we will remove the oldest/largest files without notice.

There is NO backup of /scratch.

The size of /scratch depends on the type of nodes:

Abisko (all nodes): 352 GB
Kebnekaise, standard compute nodes: 171 GB
Kebnekaise, GPU nodes: 171 GB
Kebnekaise, Largemem nodes: 352 GB (a few have 391 GB)

SweStore - Nationally Accessible Storage

For data archiving and long-term storage we recommend that our users use the SweStore Nationally Accessible Storage. This is a robust, flexible and expandable long-term storage system aimed at storing large amounts of data produced by various Swedish research projects.

For more information, see the SNIC documentation for SweStore.

Updated: 2020-04-01, 12:51