system news

Power failure at HPC2N, both clusters down, 20181019 19:15, *SOLVED 20181022 13:00*

  • Posted on: 19 October 2018
  • By: ake

We have suffered a power failure at HPC2N due to a cooling failure.

The power failure affects both Kebnekaise and Abisko, so all jobs that were running have failed and no new jobs can start.

This may also cause problems logging in to Kebnekaise if your login shell tries to access the /pfs/nobackup file system or load modules.

We have stopped all queues and will assess the situation as soon as we can.


*UPDATE 20181022 09:00*

According to Akademiska Hus, the problem with the cooling system should now be fixed.

GPU outage on Kebnekaise (resolved)

  • Posted on: 25 September 2018
  • By: zao

The GPU nodes of Kebnekaise are temporarily unavailable since the evening of Monday 2018-09-24. Jobs that started after that may have failed with messages about mismatched driver versions and may report missing GPUs.

We are addressing the problem. While work progresses, nodes will be unavailable and the queue may report that jobs are blocked due to resources. We will update this news entry when the issue is resolved.

Kebnekaise running jobs failed 2018-09-05 18:44

  • Posted on: 6 September 2018
  • By: brorerik

Kebnekaise cluster nodes had unexpected problems during
maintenance of an internal service related to them.

This caused disk access timeouts, which in turn terminated
jobs running on the nodes.

The problem has been solved and the cluster is working
normally again.

We are sorry about the problems this has caused.

Maintenance work is complete

  • Posted on: 9 June 2018
  • By: torkel

The upgrade of the Lustre parallel file system (/pfs/nobackup), as well as the rest of the maintenance work done during the week, is now complete. All systems are up and running again, including the batch queues on both Kebnekaise and Abisko. The login nodes are open for access again.

During the week we have, among other things, completed the Lustre upgrade along with a number of related changes and optimizations to the file system setup.

Maintenance affecting all HPC2N systems between 2018-06-04 and 2018-06-08

  • Posted on: 22 May 2018
  • By: ake

We have a major maintenance to upgrade the Lustre parallel file system (/pfs/nobackup).

One of the goals is to introduce new functionality, for instance project based storage.

We will also do a number of related changes and optimizations to the Lustre file system setup.


We are planning for a full week of complete downtime on all HPC2N systems, including login and ThinLinc nodes, starting 2018-06-04 07:30 CEST.

Please make sure to copy any files you want to work with during the downtime to some other system well in advance of the maintenance.
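A common way to stage files off the cluster is an rsync (or scp) pull from a login node to a local machine. The host and remote path in the comment below are placeholders, not verified HPC2N paths; the commands that follow are a local stand-in illustrating the same copy-and-verify pattern:

```shell
# Remote staging (placeholders; adapt user, host, and path to your account):
#   rsync -av <user>@<hpc2n-login-node>:/pfs/nobackup/<path>/project/ ~/backup/project/
# Local stand-in demonstrating the copy-and-verify pattern:
mkdir -p /tmp/stage_src
echo "simulation output" > /tmp/stage_src/result.dat
mkdir -p /tmp/stage_dst
cp -r /tmp/stage_src/. /tmp/stage_dst/
cat /tmp/stage_dst/result.dat
```

Verifying a checksum or simply reading back a file after the transfer is a cheap way to confirm the copy succeeded before the downtime starts.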

Maintenance for Slurm (batch system) upgrade on Abisko and Kebnekaise: 2018-03-16

  • Posted on: 7 March 2018
  • By: ake

The maintenance window has been moved to March 16th.

On March 16 we will have a maintenance window for upgrading Slurm (the batch system) on both Abisko and Kebnekaise.

We have therefore put a reservation in place on both clusters, starting 2018-03-16 08:00.

Jobs will not be allowed to start if their requested runtime reaches beyond that point in time.

Leading up to this maintenance window, it is therefore advantageous to submit jobs with shorter runtimes.
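The scheduling constraint amounts to simple time arithmetic: a job can only start if its requested walltime ends before the reservation begins. A minimal sketch of that check, using illustrative dates and a standard `sbatch --time` request (this is not HPC2N's actual scheduler logic):

```shell
# Sketch: will a job requesting 36 hours of walltime still start before
# the 2018-03-16 08:00 reservation? (The "now" below is illustrative.)
# Such a job would typically be submitted with: sbatch --time=36:00:00 job.sh
RESERVATION_START=$(date -d '2018-03-16 08:00' +%s)
NOW=$(date -d '2018-03-14 12:00' +%s)   # pretend current time
WALLTIME_HOURS=36
JOB_END=$(( NOW + WALLTIME_HOURS * 3600 ))
if [ "$JOB_END" -le "$RESERVATION_START" ]; then
  echo "fits: the job can start before the maintenance window"
else
  echo "blocked: shorten --time or wait until after maintenance"
fi
```

With the example times above, the 36-hour job still fits; a 48-hour request submitted at the same moment would be held until after the maintenance.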


We expect to be finished no later than 2018-03-17 17:00.

Updated: 2018-10-22, 13:02