High Performance Computing Center North
|   | Project storage | Home directory | /scratch |
| --- | --- | --- | --- |
| Recommended for batch jobs | Yes | No | Yes |
| Accessible by batch system | Yes | Yes | Yes (node only) |
| Default readability | Group only | Owner | Owner |
| Permission management | chmod, chgrp, ACL | chmod, chgrp, ACL | N/A for batch jobs |
| Notes | This is the storage your group gets allocated through the storage projects. | Your home directory | Per node |
This is your home directory (pointed to by the $HOME variable). It has a quota limit of 25GB by default. Your home directory is backed up regularly.
Since the home directory is quite small, it should not be used for most production jobs. These should instead be run from project storage directories.
To find the path to your home directory, either run `pwd` just after logging in, or:

```
$ cd
$ pwd
/home/u/username
```
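Since the path is also stored in the $HOME environment variable, you can print it directly without changing directory:

```shell
# Print the home directory path, e.g. /home/u/username
echo "$HOME"
```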
It is generally not possible to get more space in your home directory; use project storage instead. If you need more project storage, the PI of your project should apply for it.
However, if you really do need more space in your home directory, have your PI contact firstname.lastname@example.org and include a good explanation of what the extra space is needed for.
Project storage is where a project's members keep the majority of their data. It is applied for through SUPR, as a storage project. While storage projects need to be applied for separately, they are usually linked to a compute project.
Since batch jobs can create files in the project storage space without a Kerberos ticket or any permission manipulation, this is where you should keep your data and run your batch jobs from. Moreover, it offers high performance when accessed from the nodes, making it suitable for data that is accessed from parallel jobs.
Project storage is located below /proj/nobackup/ in the directory name selected during the creation of the proposal.
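For example, to check how much space your project area currently uses (the directory name `myproject` below is hypothetical; substitute the name chosen when the proposal was created):

```shell
#!/bin/sh
# Sketch: inspect a project storage area. "myproject" is a hypothetical
# name; adjust PROJDIR to your own project directory.
PROJDIR="/proj/nobackup/myproject"
if [ -d "$PROJDIR" ]; then
    du -sh "$PROJDIR"    # total space currently used by the project
else
    echo "no such directory: $PROJDIR (adjust PROJDIR to your project)"
fi
```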
Note that the project storage is not intended for permanent storage and there is NO BACKUP of /proj/nobackup.
The size of the storage depends on the allocation. There are small, medium, and large storage projects, each with their own requirements. You can read about this on SUPR. The quota limits apply to the project as a whole; there are no user-level quotas on that space.
There are actually four quota limits for the project storage space: soft and hard limits for disk usage, and soft and hard limits for the number of files. The hard limits are absolute; you can never exceed them. You can exceed a soft limit for a grace period, but once the grace period expires the soft limit behaves like a hard limit until you have gone below it again.
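The soft/hard semantics can be illustrated with a small sketch (the function and the 7-day grace period below are illustrative only; the actual grace period is set by the filesystem configuration):

```python
from datetime import datetime, timedelta

GRACE = timedelta(days=7)  # example value; the real grace period is site-configured

def write_allowed(usage, soft, hard, over_soft_since, now):
    """Illustrate quota semantics: the hard limit is absolute, while the
    soft limit is only enforced after the grace period has expired."""
    if usage >= hard:
        return False                     # hard limit: can never be exceeded
    if usage >= soft and over_soft_since is not None:
        if now - over_soft_since > GRACE:
            return False                 # grace expired: soft acts as hard
    return True

# Below the soft limit: writes are always allowed
assert write_allowed(10, 25, 30, None, datetime(2024, 1, 10))
# Over the soft limit but within the grace period: still allowed
assert write_allowed(26, 25, 30, datetime(2024, 1, 1), datetime(2024, 1, 3))
```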
It is recommended to use the project's storage directory for the project's data. The layout of that directory is the responsibility of the project itself.
NOTE: As the PI, make sure to add to the storage project in SUPR every user who should be granted access to the storage space.
NOTE: The storage project PI can link one or several compute projects to the storage project, thereby allowing users in the compute project access to the storage project without the PI having to explicitly handle access to the storage project.
NOTE: For those who have previously had their storage under their /pfs/nobackup$HOME directory, there are a few things to notice:
Our recommendation is that you use the project storage instead of /scratch when working on compute nodes or login nodes.
On the computers at HPC2N there is a directory called /scratch. It is a small local area shared between the users on the node, and it can be used for saving temporary files you create or need during your computations. Please do not keep files in /scratch that you do not need when you are not running jobs on the machine, and please make sure your job removes any temporary files it creates.
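A minimal job-script sketch of that pattern, assuming the per-job scratch variable $SNIC_TMP commonly exported on SNIC clusters (with a local fallback so the sketch runs anywhere), might look like:

```shell
#!/bin/bash
# Sketch: use node-local scratch for temporaries and clean up afterwards.
# $SNIC_TMP is an assumption here; outside a job we fall back to mktemp.
WORKDIR="${SNIC_TMP:-$(mktemp -d)}"

# Remove our temporary files automatically when the script exits
trap 'rm -rf "$WORKDIR"/tmp-*' EXIT

# ... run the computation, writing temporaries under $WORKDIR ...
echo "intermediate data" > "$WORKDIR/tmp-partial.dat"

# Copy only the results you want to keep back to project storage, e.g.:
# cp "$WORKDIR"/tmp-partial.dat /proj/nobackup/<your-project>/
```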
If anybody needs more space than is available on /scratch, we will remove the oldest/largest files without notice.
There is NO backup of /scratch.
The size of /scratch depends on the node type, and that size is shared in proportion to the number of cores your job has allocated on the node.
- Kebnekaise, standard compute nodes: ~170 GB
- Kebnekaise, GPU nodes: ~170 GB
- Kebnekaise, Largemem nodes: ~350 GB
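As a sketch of that proportional split (the 28-core node size below is an assumption for illustration; core counts vary by node type):

```python
def scratch_share(node_scratch_gb, job_cores, node_cores):
    """Approximate /scratch share for a job that has been allocated
    job_cores out of the node's node_cores."""
    return node_scratch_gb * job_cores / node_cores

# A job using half the cores of a standard node with ~170 GB of /scratch
# gets roughly half of that space.
assert scratch_share(170, 14, 28) == 85.0
```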
For data archiving and long-term storage we recommend using SweStore Nationally Accessible Storage. This is a robust, flexible, and expandable long-term storage system aimed at storing the large amounts of data produced by various Swedish research projects.
For more information, see the documentation for SweStore available at docs.swestore.se