Located in the basement of building B, the aurum cluster provides computational resources to all employees of IOCB. It consists of 286 compute nodes plus several control and storage servers with roughly 400 TB of storage. Users can run calculations on aurum freely, without applying for computational-resource grants. We are happy to introduce new users to the field of high-performance computing, and we try to be flexible in accommodating special requests, so don’t hesitate to contact us.

The software readily available on the cluster includes Amber and AmberTools, CP2K, GROMACS, LAMMPS, ORCA, PyMOL, RELION, Biopython, Psi, Gaussian, MOPAC, TURBOMOLE, VASP, CASTEP, SMART-Aptamer, and Schrödinger. More details are on the aurum wiki pages, accessible after registration.
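Installed software on most HPC systems is exposed through environment modules. Assuming aurum follows this convention (this page does not say so, and the module name below is a hypothetical example), loading a package before a calculation might look like:

    # list everything installed (assuming an environment-modules/Lmod setup)
    module avail

    # load a hypothetical gromacs module and verify the binary is on PATH
    module load gromacs
    which gmx

Check the wiki pages for the exact module names and versions available on the cluster.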

Request user account · Documentation · Issue tracker

Hardware

The cluster is divided into the following physical partitions, which differ in hardware capabilities; a sample job script is sketched after the table.

Queue    Nodes       Count  Cores  CPU    RAM (GiB)  GPU          Scratch    Network
cpu      a[001-204]  204    36     Intel  87         -            1 TB NVMe  100 Gb¹
mem      a[205-224]  20     36     Intel  181        -            1 TB NVMe  100 Gb¹
bigmem   a[225-232]  8      36     Intel  748        -            1 TB NVMe  100 Gb¹
gpu      a[233-237]  5      36     Intel  82         2×RTX 5000²  1 TB NVMe  100 Gb¹
gpu      b[001-032]  32     32     AMD    126        2×RTX 3090³  2 TB NVMe  25 Gb
mem      b[033-048]  16     64     AMD    1000       -            2 TB NVMe  25 Gb
hugemem  b049        1      64     AMD    1960       -            2 TB NVMe  25 Gb


¹ Omnipath 100 Gb PCIe ×8 adapter
² RTX 5000, 16 GB GDDR6
³ GeForce RTX 3090, 24 GB GDDR6
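This page does not name the batch scheduler, but the queue column suggests one. Assuming a Slurm-style setup in which the queues above map to partitions, a minimal submission script targeting the b-series gpu nodes might look like the sketch below; the partition name and node figures come from the table, while the scratch path and program name are hypothetical.

    #!/bin/bash
    #SBATCH --job-name=example
    #SBATCH --partition=gpu          # queue name from the table above
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=32     # b[001-032] nodes have 32 AMD cores
    #SBATCH --gres=gpu:2             # request both RTX 3090 cards
    #SBATCH --mem=120G               # stay within the 126 GiB on these nodes
    #SBATCH --time=24:00:00

    # run from the node-local 2 TB NVMe scratch; this path is an assumption
    cd /scratch/$USER

    srun ./my_simulation             # hypothetical program name

Such a script would be submitted with sbatch, and sinfo would list the partitions and their current state; consult the documentation linked above for the actual submission procedure on aurum.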