Schrödinger

Software name: 
Schrödinger
Policy 

Schrödinger is available at HPC2N to users with their own license, except for Desmond and Maestro, for which we have an academic license.

General 

The Schrödinger suite is a collection of programs for computational chemistry and molecular modelling.

Description 

The suite contains programs such as Maestro, Desmond and Jaguar.

  • Maestro is a molecular modelling environment.
  • Desmond is a molecular dynamics simulation tool.
  • Jaguar is an ab initio electronic structure package.
Availability 

On HPC2N, the Schrödinger suite is available as a module on Abisko and Kebnekaise.

Usage at HPC2N 

To use it, add the module to your environment. Use the command:

module spider schrodinger

to see which versions are available and then do

module spider Schrodinger/version

to see how to load the module.

You can read more about loading modules on our Accessing software with Lmod page and our Using modules (Lmod) page.

Loading the module should set all the needed environmental variables as well as the path.

To use any part of the suite except Desmond or Maestro, you need your own license served from a license server.
To make the programs aware of your license server, do the following after loading the module:

export LM_LICENSE_FILE="port@your.license.server:$LM_LICENSE_FILE"

Replace "port" and "your.license.server" with the relevant values; ask your Schrödinger administrator for the correct ones.
You will also have to check with your system administrator that the license server is reachable from HPC2N's networks, which is most easily accomplished by allowing access from 130.239.0.0/16.
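As an illustration, with a hypothetical host and port (replace both with the values from your administrator):

```shell
# Hypothetical host and port -- ask your Schrödinger administrator for the
# real values. Prepend them to any license servers already configured.
export LM_LICENSE_FILE="27008@licserver.example.org:$LM_LICENSE_FILE"
echo "$LM_LICENSE_FILE"
```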

Example for Jaguar

Most of the tools create a submit file and submit it when given the correct parameters.
Here is a typical invocation of Jaguar using 96 cores on 2 nodes of Abisko. Note the -QARGS parameter, which is passed to the sbatch command.

$SCHRODINGER/jaguar run -HOST abisko-batch -OMPI 16 -TPP 6 -QARGS "-n 16 -c 6 --ntasks-per-node=8 -t <hh:mm:ss> -A <YOUR-ACCOUNT-ID>" somefile.in

The important arguments are:

  • -HOST should be followed by abisko-batch if you are running on Abisko.
  • -OMPI specifies how many MPI-tasks you want to use.
    • Note that Abisko has a minimum allocatable unit of 6 cores, so factors of 6 are preferred.
  • -TPP specifies how many OpenMP threads you want to use per MPI task.
  • -QARGS is a string passed on to the batch system. Here you should specify things like the requested runtime and your project-id.
    • Both systems use -A for passing the project-id.
    • You should use "-t hh:mm:ss" for runtime
    • See the respective cluster support pages for examples of other flags to use.
  • NOTE: the argument to -OMPI and -n must be the same
  • NOTE: the argument to -TPP and -c must be the same

Not all of the functionality in Jaguar can use -OMPI or -TPP; check the manual for the details.
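Since -OMPI must match -n and -TPP must match -c, it can help to define the task geometry once in shell variables and build the -QARGS string from them. A sketch, using the same 16x6 layout as the example above (the placeholders stay as in the original command):

```shell
# Define the task geometry once so -OMPI/-n and -TPP/-c cannot drift apart.
NTASKS=16    # MPI tasks: goes to both -OMPI and sbatch's -n
THREADS=6    # OpenMP threads per task: goes to both -TPP and sbatch's -c
QARGS="-n $NTASKS -c $THREADS --ntasks-per-node=8 -t <hh:mm:ss> -A <YOUR-ACCOUNT-ID>"
echo "$QARGS"
# The actual invocation would then be (not run here):
# $SCHRODINGER/jaguar run -HOST abisko-batch -OMPI $NTASKS -TPP $THREADS -QARGS "$QARGS" somefile.in
```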

Submit example for Prime

When using Prime, one should take advantage of its OpenMP capability.
The following submit file is an example of how to run Prime.
NOTE: prime_mmgbsa requires 8 SUITE_* and 8 PSP Plop tokens per subjob.
NOTE: the selected number-of-simultaneous-subjobs * number-of-OpenMP-threads should not exceed the number of cores in a single node on the cluster where you are running.
NOTE: there is also no reason to set number-of-simultaneous-subjobs higher than the number of tokens available from the license server.
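The core-count constraint in the notes above can be checked with a few lines of shell before submitting. The 28 cores per node assumed here is only an example value; check the specification of the cluster you are running on:

```shell
# Example values: 4 simultaneous subjobs with 7 OpenMP threads each.
CORES_PER_NODE=28   # assumption -- check your cluster's node specification
SUBJOBS=4
THREADS=7
if [ $((SUBJOBS * THREADS)) -le "$CORES_PER_NODE" ]; then
    echo "fits on one node"
else
    echo "oversubscribed: reduce subjobs or threads"
fi
```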

#!/bin/bash
#SBATCH -A <your-snic-account>
#SBATCH -J <your-job-name>
#SBATCH -t hh:mm:ss
# Can only run on a single node at the moment so make sure to specify -N 1
#SBATCH -N 1
#SBATCH -n <your-selected-number-of-simultaneous-subjobs-to-use>
#SBATCH -c <your-selected-number-of-OpenMP-threads-to-use>

export LM_LICENSE_FILE="port@your.license.server"

# This example loads Schrodinger 2016-4
module add Schrodinger/2016-4_Linux-x86_64 

export SCHRODINGER_NODEFILE=$(mktemp)
echo localhost:$SLURM_NTASKS > $SCHRODINGER_NODEFILE

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

$SCHRODINGER/prime_mmgbsa -WAIT -LOCAL -NJOBS <number-of-subjobs-to-split-the-workload-into> <your-parameters>

You can set the number-of-subjobs-to-split-the-workload-into as high as you like, but there is no reason to choose it larger than the total number of subjobs in the workload. The more subjobs you split the workload into, the less work is needed during a restart.

To restart a failed or otherwise stopped prime_mmgbsa job, just add -RESTART to the arguments of prime_mmgbsa in the above submit file. Complaints about an -out.maegz file being in an incorrect format are resolved by deleting the offending file and restarting the job.
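A sketch of that recovery step, assuming the job was named myjob (a hypothetical name) and the submit file was saved as prime_submit.sh:

```shell
# Hypothetical job name "myjob": remove the partial output file that
# prime_mmgbsa complains about, then resubmit with -RESTART added.
rm -f myjob-out.maegz
# sbatch prime_submit.sh   # the submit file above, with -RESTART added
```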

Submit example for Glide

When using Glide, we have to create a more complicated submit file. Glide can only run on a single node at the moment.
NOTE: Subjobs can only be used for simplified docking jobs.
NOTE: Glide consumes 4+1 tokens of SUITE_*, one token of GLIDE_MAIN and 4 tokens of GLIDE_SP_DOCKING per job.

#!/bin/bash
#SBATCH -A <your-snic-account>
#SBATCH -J <your-job-name>
#SBATCH -t hh:mm:ss
#SBATCH -N 1
#SBATCH -n <number-of-processors>

export LM_LICENSE_FILE="port@your.license.server"
# This example loads Schrodinger 2016-4
module add Schrodinger/2016-4_Linux-x86_64 

export SCHRODINGER_NODEFILE=$(mktemp)
echo localhost:$SLURM_NTASKS > $SCHRODINGER_NODEFILE

# Run with 16 subjobs
$SCHRODINGER/glide -NJOBS 16 -NOJOBID <your-input-file-and-parameters>
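The two SCHRODINGER_NODEFILE lines used in both submit files simply write a single localhost entry carrying the task count. Their effect can be reproduced outside of a job like this (SLURM_NTASKS is normally set by SLURM; the value 16 is an example):

```shell
SLURM_NTASKS=16                          # set by SLURM inside a real job
export SCHRODINGER_NODEFILE=$(mktemp)    # temporary file for the node list
echo localhost:$SLURM_NTASKS > "$SCHRODINGER_NODEFILE"
cat "$SCHRODINGER_NODEFILE"              # a single line: localhost:<task count>
```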

Additional info 

More information about Schrödinger can be found at the following locations:

Updated: 2017-12-06, 15:21