Parameters of the command “sbatch”
We recommend providing all parameters inside the job script (instead of passing them on the sbatch command line). You can find examples in the corresponding example scripts (MPI, OpenMP, MPI+OpenMP).
Here, only the most important pragmas are given. You can find a complete list of parameters using the command ''man sbatch'' on the login nodes (e.g. lcluster1).
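Put together, a minimal job script with all parameters embedded might look like the following sketch (the project name, job name, file paths and program are placeholders for illustration, not real values):

```shell
#!/bin/bash
#SBATCH -A project01234           # placeholder: your own project name
#SBATCH -J my_job                 # descriptive job name
#SBATCH -n 16                     # number of tasks
#SBATCH --mem-per-cpu=1750        # main memory per core in MByte
#SBATCH -t 00:30:00               # run time limit (hours:minutes:seconds)
#SBATCH -o /path/to/my_job.out    # placeholder path for STDOUT
#SBATCH -e /path/to/my_job.err    # placeholder path for STDERR

srun ./my_program                 # hypothetical program
```

Such a script would then be submitted simply with ''sbatch jobscript.sh'', without any further command line parameters.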
With this option, you choose the project that the core hours used are accounted on. The project name usually consists of the word 'project' followed by a 5-digit number.
Attention: If you omit this pragma, the core hours used will be accounted on your default project (typically your first or main project), which may or may not be intended!
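As a sketch, the accounting pragma in the job script would look like this (the project name is a placeholder; use your own):

```shell
#SBATCH -A project01234   # placeholder: replace with your own project name
```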
This gives the job a more descriptive name.
Send an email at the beginning of the job.
Send an email at the end or upon termination of the job.
Send an email at both events (and in some other special cases).
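In the job script, the mail settings could be sketched as follows (the address is a placeholder):

```shell
#SBATCH --mail-user=your.name@example.org   # placeholder address
#SBATCH --mail-type=END                     # or BEGIN, ALL, NONE
```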
Please note: if you submit a lot of distinct jobs separately, at least the same number of emails will be generated. In the past, this has caused the mail servers of TU Darmstadt to be blacklisted as “spamming hosts” by several mail and internet service providers. The mail and groupware team of the HRZ had to spend considerable effort to get these blacklistings reverted.
Avoid this by
- using job arrays (''#SBATCH -a 1-100'' for 100 similar jobs)
- setting ''--mail-type=NONE'' – instead, use ''squeue'' to check the status of all your jobs.
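A job array replacing 100 individually submitted jobs could be sketched as follows; ''%a'' in the output file name is Slurm's placeholder for the array task index, and ''SLURM_ARRAY_TASK_ID'' is set by Slurm for each task (the program itself is hypothetical):

```shell
#SBATCH -a 1-100
#SBATCH --mail-type=NONE
#SBATCH -o result_%a.out                  # one output file per array task

srun ./my_program $SLURM_ARRAY_TASK_ID    # hypothetical program taking the index
```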
This writes the standard output (STDOUT) of the whole job script to the designated file.
This writes the error channel (STDERR) of the whole job script to the designated file.
For both options, we recommend using the full pathname to avoid overwriting other jobs' files.
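For example, with full pathnames and Slurm's ''%j'' placeholder (expanded to the job ID) to keep the files of different jobs apart (the directory is a placeholder):

```shell
#SBATCH -o /path/to/logs/myjob_%j.out   # STDOUT, one file per job ID
#SBATCH -e /path/to/logs/myjob_%j.err   # STDERR, one file per job ID
```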
This gives the number of tasks for this job. Typically, this corresponds to the number of necessary compute cores for the job.
This gives the number of cores per task. For pure OpenMP jobs, set ''-n'' to 1 and ''-c'' to the number of OpenMP threads. Default: 1
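A pure OpenMP job on, say, 16 cores could thus be sketched as follows (the program name is a placeholder); ''SLURM_CPUS_PER_TASK'' is set by Slurm to the value given with ''-c'':

```shell
#SBATCH -n 1                                # one task ...
#SBATCH -c 16                               # ... with 16 cores

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK # use all requested cores as threads
./my_openmp_program                         # hypothetical OpenMP binary
```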
This defines the maximum required main memory per compute core in MByte.
-t run time
This sets the run time limit for the job (“wall clock time”). If a job has not completed within this time, it will be terminated automatically by the batch system. The expected run time can be given in minutes or in the format hh:mm:ss (hours:minutes:seconds).
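Both forms are equivalent; a 90-minute limit, for example, can be given either way (in a real script, only one of the two lines would be used):

```shell
#SBATCH -t 90         # 90 minutes
#SBATCH -t 01:30:00   # the same limit as hours:minutes:seconds
```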
Requests that the nodes for this job have a certain feature, e.g. AVX2 or NVIDIA GPUs. Features can be combined with “&”. Possible features are, for example:
- ''phi7'' (these features have been deactivated since August 2018)
- ''multi'': only available to certain users
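As a sketch, requesting nodes that provide two features at once could look like this (the feature names here are only illustrative; check which features are actually available):

```shell
#SBATCH -C "avx2&multi"   # both features required; names are examples
```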
This determines dependencies between different jobs. For details, please see ''
This requests a compute node job-exclusively, meaning that none of your other jobs are allowed on this node.
This might be important if you request fewer cores per node than available (16 on our phase I nodes or 24 on our phase II nodes, respectively). In this case, Slurm could dispatch other jobs of the same user to the node. While permitted in general, this could adversely affect the runtime behaviour of the first job (possibly distorting timing and performance analyses).
In any case, jobs of other users are not permitted on nodes already running jobs -- our Slurm configuration is per se user-exclusive.
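A timing-sensitive job that requests fewer cores than a node offers but still keeps the node to itself could be sketched as follows (the program name is a placeholder):

```shell
#SBATCH -n 8          # fewer cores than a full node provides
#SBATCH --exclusive   # no other jobs (not even your own) on this node

srun ./my_benchmark   # hypothetical program for timing measurements
```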