Projects, accounts & accounting

The Lichtenberg HPC can only be used via projects, which define the approved amount of resources a project may allocate on the HPC. In other words, a project's allotted number of core*hours determines that project's “share” of the HPC's overall computing resources.

All core*hours used within the course of a project are accounted on that project (like money spent is accounted on a bank account).

Project Memberships

To use a project for your scientific calculations, you need to become a member of that project. See the information on project membership for details.

User vs. Project

A user account is personalized and associated with one or more projects (the first project being that user's default project).
Unlike the strictly personal user account, projects can be, and are meant to be, shared among several colleagues and students working on the same scientific problem.

Do not share your user account (neither your password nor your SSH keys)! Collaboration is permitted only through membership in a common project.

Expiration

As projects can have several users/members, and a given user can be a member of several projects, the validity terms of HPC user accounts and HPC projects are completely independent of each other. Both can expire at different dates, and extending one does not imply extending the other.

Jobs vs. Project

Submitting batch jobs is not possible without specifying a project, at least implicitly (the sbatch -A parameter). If a user does not explicitly specify sbatch -A <projectname>, the job is accounted on that user's default project.
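
A minimal batch script illustrating the project parameter might look as follows (the project name, resources, and program are placeholders, not actual values):

```shell
#!/bin/bash
#SBATCH -A project01234      # placeholder: the project this job is accounted on
#SBATCH -n 1                 # one task
#SBATCH -t 00:10:00          # ten minutes of wall time
#SBATCH --mem-per-cpu=1000   # memory per core, in MB

srun ./my_program            # placeholder executable
```

Without the -A line, the same job would simply be booked on your default project.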

Rules of Accounting

The Lichtenberg cluster runs in “user-exclusive” mode: at any given time, a compute node will only execute jobs of one and the same user.

This in turn means that even a single (small) job blocks its assigned compute node for other users. Therefore, the accounting books the equivalent of the full node's core*h on your project, even if your job does not use all of the node's cores!
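
A quick sketch of that arithmetic for the 96-core nodes (the job size and runtime are purely illustrative):

```shell
# A job using only 8 of a node's 96 cores for 2 hours is nevertheless
# billed for the whole node in "user-exclusive" mode:
node_cores=96
wall_hours=2
echo "$(( node_cores * wall_hours )) core*h booked"   # 192 core*h, not 8*2=16
```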

For small jobs (without an overly large memory footprint), we recommend requesting an even divisor of the number of cores per node, so that several of these jobs can share a given compute node without wasting resources. On our compute nodes with 96 cores:

96 / 24 = 4 of your (24-core) jobs per node
96 / 32 = 3 of your (32-core) jobs per node

For this to work, strictly avoid the

#SBATCH --exclusive

directive, as it would assign every (small) job its own, separate compute node!
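
Following the division above, a small job requesting a quarter of a 96-core node could look like this (project name and program are placeholders; note the deliberate absence of --exclusive):

```shell
#!/bin/bash
#SBATCH -A project01234    # placeholder project name
#SBATCH -n 24              # 24 of 96 cores: four such jobs can share one node
#SBATCH -t 01:00:00        # one hour of wall time
# deliberately NO "#SBATCH --exclusive" line here

srun ./my_small_program    # placeholder executable
```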

Resources used

With the commands csum and csreport, any user can get an overview of their current overall resource consumption.

Monthly Usage Report

At the end of each month, users receive an automatic email (the “Lichtenberg User Report”) with a usage overview of all projects they are associated with.

Due to changes in the cluster's job accounting, the former graphs of your projects' usage and job efficiency are currently unavailable.