Currently Available

Available Login Nodes

(as of May 2020)

lcluster1.hrz.tu-darmstadt.de – 4x Sandy-Bridge, 32 cores (AVX)
lcluster2.hrz.tu-darmstadt.de – 4x Sandy-Bridge, 32 cores (AVX)
lcluster3.hrz.tu-darmstadt.de – 4x Sandy-Bridge, 32 cores (AVX)
lcluster4.hrz.tu-darmstadt.de – 4x Sandy-Bridge, 32 cores (AVX), 1x NVidia “Quadro P2000” GPU

lcluster5.hrz.tu-darmstadt.de – 2x Haswell, 24 cores (AVX2)
lcluster6.hrz.tu-darmstadt.de – 2x Haswell, 24 cores (AVX2)
lcluster7.hrz.tu-darmstadt.de – 2x Haswell, 24 cores (AVX2)
lcluster8.hrz.tu-darmstadt.de – 2x Haswell, 24 cores (AVX2)

lcluster9.hrz.tu-darmstadt.de – 2x Haswell, 24 cores (AVX2)
lcluster10.hrz.tu-darmstadt.de – 2x Haswell, 24 cores (AVX2)
lcluster11.hrz.tu-darmstadt.de – 2x Haswell, 24 cores (AVX2)

All login nodes above are part of the same “Lichtenberg” HPC cluster; they are simply distinct yet equivalent points of entry.
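
A minimal sketch of logging in via SSH; the <TU-ID>@host form is an assumption here, so replace <TU-ID> with your own account name:

# Any of the equivalent login nodes works; lcluster4 additionally has a GPU
ssh <TU-ID>@lcluster1.hrz.tu-darmstadt.de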

Phase II of Lichtenberg 1

Almost all nodes of phase II of the Lichtenberg 1 HPC (avx2) are fully available.

From time to time, compute nodes fail the thorough health check performed before they accept jobs; we repair them to the extent possible.

The number of cores available and allocated (occupied) per island can be listed with the csinfo command.
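
For illustration, a hedged sketch: csinfo is the site-specific command named above, and the plain Slurm query is an assumed rough equivalent whose output format will differ from csinfo's:

# Site-specific per-island core overview
csinfo

# Assumed rough equivalent in plain Slurm: CPUs per partition,
# reported as Allocated/Idle/Other/Total
sinfo -o "%P %C"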

Phase I of Lichtenberg 1

Since 2020-04-27, the phase I nodes of Lichtenberg 1 (avx) have been switched off to free up cooling capacity for the successor system, Lichtenberg 2.

From this first and oldest generation of LB1, only the compute nodes of the GPU section (nvd2) are still running.

Please check your job scripts for any restriction to phase I nodes, e.g.

#SBATCH -C avx

as this now-unnecessary constraint will cause such jobs to remain “Pending” for a long time, waiting for the relatively few nodes of the GPU section.
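
A minimal sketch of locating and relaxing such a constraint; the script path is a hypothetical placeholder, and the avx2 feature tag is assumed from the phase II label above:

# Find job scripts that still request the switched-off phase I feature
# (\b keeps the match from also hitting "avx2")
grep -rn 'SBATCH.*-C avx\b' ~/jobscripts

# Either drop the constraint line entirely, or target phase II instead:
#SBATCH -C avx2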