Login Nodes and Queuing System

Cluster Usage

Recommendation: To get a smooth and efficient start with the Lichtenberg-HPC, we advise all new users to attend the “Introduction to the Lichtenberg High Performance Computer”.

Login nodes

Login nodes are used to access the cluster from outside. You can find a current list of accessible login nodes here.

SSH fingerprints of all login nodes (new since 16 November 2017):

  • 1024 SHA256:J6jNs711jkTWTtq2Gmz1aic7/DjMGYcW6H+v9CDBdMQ (DSA)
  • 2048 SHA256:7DwyCfkMxPWNMlm0rwiiTC9GwC6VD83+q7EKROB3Rl0 (RSA)
  • 2048 SHA256:83Sv25pFP5FsaTW03sNr7Q5tNcYX4r7Pw8vf3TEbl0E (RSA1)
  • 256 SHA256:FtiApfklRof19e1DA4Q1KaFQG3kfHjVg0UScR6cHp+0 (ED25519)
  • 256 SHA256:C+fJ3oDd7oj4fY6e+SurbQuW4mq9MgeQonBjI/QMEQM (ECDSA)
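
When you connect for the first time, ssh displays the server's host key fingerprint; compare it against the list above before accepting. For host keys you have already accepted, you can reproduce their fingerprints locally, as in this sketch (assumes the default known_hosts location):

    # print the SHA256 fingerprints of all cached host keys
    ssh-keygen -lf ~/.ssh/known_hosts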

Valid ciphers, MACs and key exchange (KEX) algorithms:

Ciphers: aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr,aes256-cbc,aes192-cbc,aes128-cbc

MACs: hmac-sha2-512,hmac-sha2-256,umac-128-etm@openssh.com,umac-64-etm@openssh.com

KexAlgorithms: diffie-hellman-group-exchange-sha256,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256
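
If you want (or need) to pin your ssh client to these algorithms, a client-side configuration entry could look like the following sketch (the host alias, node name and user name are placeholders, not actual login node names):

    # ~/.ssh/config -- hypothetical entry for a Lichtenberg login node
    Host lichtenberg
        HostName <login-node>            # replace with an actual login node
        User <username>                  # replace with your account name
        Ciphers aes256-gcm@openssh.com,aes256-ctr
        MACs hmac-sha2-512,hmac-sha2-256
        KexAlgorithms diffie-hellman-group-exchange-sha256,ecdh-sha2-nistp521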

With respect to installed software, the login nodes are identical, so it does not matter which one you use. In terms of hardware, however, there are two types (details here).

Users log in via ssh on one of the login nodes and put their jobs into the queue. To do so, you need to specify the resources each job requires (e.g. amount of main memory, number of nodes (and tasks), maximum runtime).
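
In practice, this workflow looks roughly like the following sketch (the login node name and script name are placeholders):

    # log in to one of the login nodes ...
    ssh <username>@<login-node>

    # ... prepare your batch script there, then submit it to the queue
    sbatch jobscript.sh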

The login nodes are not for “productive” calculations!

Shared by all users of the HPC, the login nodes are intended only for

  • job preparation and submission
  • copying data in and out of the cluster with scp (see the example after this list)
  • short test runs of your program
  • debugging your software
  • job status checking
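
Copying data in and out could, for example, be done like this (paths and node name are placeholders):

    # copy a file from your local machine to your cluster home directory
    scp input.dat <username>@<login-node>:~/

    # copy results back from the cluster to the current local directory
    scp <username>@<login-node>:~/results.tar.gz .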

Because the login nodes face the public (and sometimes evil) internet, we have to install (security) updates from time to time. This will happen on short notice (30 minutes), so do not expect all login nodes to be available 24/7.

Batch system Slurm

The arbitration, dispatching and processing of all user jobs on the cluster is organized by the Slurm batch system. Slurm calculates when and where a given job will be started, taking into account all jobs' resource requirements, the current workload of the system, the job's waiting time and the priority of the associated project.

When eligible to run, user jobs are dispatched to one (or more) compute nodes and started there by Slurm.
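
After submission, you can follow what Slurm does with your job using the standard Slurm commands, as in this sketch (<jobid> is the ID printed by sbatch):

    # list your own pending and running jobs
    squeue -u $USER

    # show the full details of one job
    scontrol show job <jobid>

    # cancel a job you no longer need
    scancel <jobid>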

The batch system expects a batch script for each job (array), which contains

  • the resource requirements of the job in the form of #SBATCH … pragmas, and
  • the actual commands and programs you want to run in your job.
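
A minimal batch script might look like the following sketch (job name, resource values and the program to run are illustrative only, not site defaults):

    #!/bin/bash
    #SBATCH --job-name=my_job       # a name for the job
    #SBATCH --ntasks=1              # number of tasks
    #SBATCH --cpus-per-task=4       # CPU cores per task
    #SBATCH --mem-per-cpu=2000      # main memory per core, in MB
    #SBATCH --time=00:30:00         # maximum runtime (hh:mm:ss)

    # the actual commands and programs to run in the job
    srun ./my_program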

The batch script is a plain text file (with UNIX line feeds!). You can create it on your local PC and then transfer it to the login node; on Windows, use “Notepad++” and switch to UNIX (LF) via “Edit” – “EOL Conversion” before saving the script.

Alternatively, you can create the script with a UNIX editor on the login node itself and avoid the fuss with improper line feeds altogether.
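
If a script does end up with Windows line endings anyway, you can usually repair it on the login node, as in this sketch (assumes the common dos2unix utility is installed there):

    # convert Windows (CRLF) line endings to UNIX (LF) in place
    dos2unix jobscript.sh

    # alternative without dos2unix, using sed
    sed -i 's/\r$//' jobscript.sh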

Further information: