Software Usage / Environment Modules

Loading and initializing software modules

On the Lichtenberg HPC, various compilers, (scientific) libraries, and other (application) software are installed and readily available. Each of these software packages requires different settings for $PATH, $LD_LIBRARY_PATH, and other environment variables; these settings can adversely affect each other or even be mutually exclusive.

Secondly, different users (and groups) need different versions of the same software, which in general can neither be installed nor used in parallel.

Therefore, the settings for all these software packages and their supported versions are encapsulated in “environment modules”, maintained by our “Lmod” module system. These modules are in no way related to Perl or Python modules and should not be confused with those.

By means of the module system, all currently available software can be listed, loaded, and unloaded using the module command.
With module help, you get an overview of all its parameters (subcommands).

In the following, Module_Name corresponds to (an element of) the output of module avail. You can explicitly specify a particular version of a package, or use short forms such as module load gcc or module load openmpi/intel. When loading with such a short form only, you will get the version currently set as the default (indicated by a (D) in the list of all modules).

  • module avail
    Shows all available software modules.
  • module avail Module_Name
    Shows all available versions of the given software.
    For example with module avail gcc, all available versions of GCC will be listed, or with module avail openmpi/gcc you will see all OpenMPI module versions compatible with GCC.
  • module list
    Shows the software modules already loaded.
  • module whatis Module_Names
    Shows the short descriptions of the named modules.
  • module help Module_Names
    Shows the detailed description (if any) of the named module(s).
  • module load Module_Names
    Loads the named software module(s). Only after this command (and once all other required modules have been loaded) does the software become available to you.
    Tip: It is possible to automate all your recurring “module loading” in your own '~/.bashrc' file.
  • module unload Module_Names
    Unloads the named modules. From then on, the corresponding software is no longer usable.
  • module switch Module_Name_new or module switch Module_Name_old Module_Name_new
    Replaces the previously loaded (old) module version with the named (new) one. Similar to the pair module unload / module load, but preserves the former order of the loaded modules.
  • module purge
    Unloads all currently loaded software modules.
    Recommended inside job scripts before any other “module load …” commands, to always start with a clean and defined environment.
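
The subcommands above combine into a typical interactive sequence; the module names below (gcc, openmpi) are examples, and the exact names and versions on your system may differ:

```shell
# Start from a clean, defined environment.
module purge

# See which modules can currently be loaded.
module avail

# Load a compiler and a matching MPI implementation (example names;
# without explicit versions, the (D)efault versions are loaded).
module load gcc openmpi

# Verify what is now loaded.
module list
```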

Shortcut “ml”:

  • ml (without parameter) = module list
  • ml Module_Name(s) = module load Module_Name(s)

Caution: all batch jobs you submit will inherit the currently loaded modules of your login session! It is thus good practice to begin your job scripts with “module purge”, followed by only the “module load” commands necessary for this job.
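
As a sketch, a minimal Slurm job script following this practice could look like the following (the Slurm options, module names, and program name are illustrative assumptions, not actual recommendations):

```shell
#!/bin/bash
#SBATCH --job-name=example        # illustrative Slurm options
#SBATCH --ntasks=4
#SBATCH --time=00:10:00

# Do not rely on whatever modules happened to be loaded
# in the login session: start clean.
module purge

# Load only what this job needs (example module names).
module load gcc openmpi

srun ./my_program                 # hypothetical binary
```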

Changes effective as of 2019-04-01

Since 1st April 2019, we have used a hierarchical software module tree.
This layout is much clearer to use, and also easier for us to build automatically and to maintain, particularly when the mutual dependencies between software packages change.

For you, the relevant changes are:

  • Modules automatically show up (or remain hidden), depending on which other modules are loaded.
    Unfavourable or impossible combinations of modules are now already ruled out when the tree is built, and are thus easier to avoid.
  • Modules are automatically built with all supported compiler versions--you will thus get compiler-dependent modules compiled with precisely the compiler version you have loaded, not just with “any version of this compiler family”.
  • More recent module versions are available.
  • For several modules, we have set newer versions as the (D)efault (the version loaded if you don't explicitly specify a version number).
    Please note that this may affect the behaviour of jobs submitted after 1st April using long-established job scripts, as long as these contain “module load <module without version number>” commands!
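
To shield long-established job scripts from such default changes, you can pin explicit versions rather than relying on the (D)efault; the version number below is purely illustrative:

```shell
# Relies on the current default version -- may silently change:
module load gcc

# Pinned to an explicit version -- unaffected by default changes:
module load gcc/4.9.4
```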

In case the new module tree does not (yet) suit your needs, you can switch to the old module tree for the time being:
export MODULEPATH=/shared/old_apps/modules/
Any job script subsequently submitted from this session will inherit this setting and use the old module tree. (This dynamic change only affects the current session; if you log in afresh, you will have the new module tree again.)

Beware: since some applications rely on fixed paths, they may not work from the old module tree, even when their modules load without error.

Visibility and availability of conditional modules

Within the new module tree, the command “module spider” still shows all modules in the tree.

However, the “module avail” command will list only those modules you can currently load--depending on which other modules are (not yet) loaded.
Some programs and libraries will show up only after loading a compiler and perhaps an MPI implementation. Some loaded modules will be amended (or even gain new functionality) after loading other modules.

Example: according to “module spider”, there is an fftw module. In “module avail”, however, it will not show up until you load a compiler (“ml gcc” or “ml intel”). After loading a certain compiler module, “ml fftw” will give you exactly the fftw binaries compiled with that compiler.
If you load openmpi or intelmpi (before or after loading fftw), the loaded fftw will be replaced by the respective MPI-enabled fftw (still based on the very same compiler).


If you are not sure which modules are currently invisible but available, you can use --show_hidden, e.g. module --show_hidden avail.
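
The fftw example could look like this on the command line (a sketch only; the actual module names and versions depend on the installed tree):

```shell
module purge
module spider fftw    # shows that fftw exists somewhere in the tree
module avail fftw     # ...but it is not loadable yet: no compiler loaded

ml gcc                # load a compiler
module avail fftw     # now the gcc-built fftw versions appear
ml fftw               # loads the fftw built with the loaded gcc

ml openmpi            # loading an MPI implementation replaces the
                      # serial fftw with the MPI-enabled variant
```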

Renamed or removed modules

Some modules had to be renamed to make the new structure feasible and transparent to you; some others have been removed. If you are affected, change your job scripts as follows:

Old job script:

 module load gcc/4.9.4
 module load openmpi/gcc/3.1.3/4.9.4
 module load fftw/openmpi/3.3.5

New job script:

 ml gcc/4.9.4 openmpi/3.1.3 fftw/3.3.8

For the bash-initiated, a variant compatible with both the old and the new tree:

 module load gcc/4.9.4
 module whatis openmpi/3.1.3 &>/dev/null && ml openmpi/3.1.3 || ml openmpi/gcc/3.1.3/4.9.4
 module whatis fftw/3.3.8    &>/dev/null && ml fftw/3.3.8    || ml fftw/openmpi/3.3.5
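
The same “try the new name, fall back to the old one” idea can be written as a small helper. The sketch below uses a made-up function name (load_first) and plain commands instead of modules, so the control flow can be tried anywhere:

```shell
#!/bin/bash
# load_first: run the first argument that names an existing command --
# a stand-in for "try the new module name, fall back to the old one".
load_first() {
    local candidate
    for candidate in "$@"; do
        if command -v "$candidate" >/dev/null 2>&1; then
            echo "using $candidate"
            return 0
        fi
    done
    echo "none of these found: $*" >&2
    return 1
}

# "new_tool" is hypothetical and will not exist, so we fall back to "ls".
load_first new_tool ls    # prints: using ls
```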

Self-compiled software linked against OpenMPI and/or Slurm

If you have compiled your own software and linked it against OpenMPI, you need to recompile with one of the new openmpi modules loaded. This is also possible before 1st April.
Due to interdependencies between OpenMPI and Slurm, these recompiled binaries might not behave properly (with respect to mpirun/srun) until the new Slurm version is rolled out (i.e. after the downtime).

Since we cannot provide a “test cluster” with the new Slurm version, fully testing your own recompiled binaries against it will only be possible after the downtime. If that is a hard requirement for you, your only option is to cease job submission before 1st April, test your binaries after the downtime, and then submit new jobs.

Please don't hesitate to contact us in case of problems--we are happy to assist.