Vienna Scientific Cluster
Since January 2023 the Vienna Scientific Cluster 5 (VSC-5) has been in regular user operation. The older cluster VSC-4 can still be used, as the VSC team strives to always provide two cluster generations at the same time.
Access
The Faculty of Mathematics bought one GPU node of the VSC-5. Each GPU node has 128 CPU cores, 512 GB RAM and two NVIDIA A100 cards with 40 GB memory each. To get access, please contact the math:helpdesk with the following information:
- username (free to choose)
- a strong password (not dictionary-based; at least 10 characters, including one digit, one lower-case letter, one upper-case letter and one of these special characters: '!@#$%^&*?_~(),-')
- first and last name
- your cellphone number (in international format, e.g. +43...) to receive text messages (the cluster uses two-factor authentication)
If you don't feel comfortable sending a password by email, please make an appointment with Martin Piskernig to get an account.
Once your account is created, you'll receive a confirmation email.
Note that only two users can compute on the node at a time! If you need more computing power, please apply for a project.
Hardware
| VSC-5 (CPU + GPU nodes) | |
| --- | --- |
| Year of acquisition | 2022 |
| Manufacturer | Megaware |
| CPU | AMD EPYC 7713 |
| Sockets per node | 2 |
| Clock rate | 2.0 GHz |
| Number of nodes | 710 + 60 |
| Cores per node | 128 |
| Total number of cores | 98,560 |
| Memory per core | 4 GB |
| Memory per node | 512 GB - 2048 GB |
| Total memory | ~400 TB |
| Network | Mellanox HDR InfiniBand (200 Gbit/s) |
| Theoretical computing power | 3.05 PFlop/s |
| LINPACK benchmark | 2.31 PFlop/s |
| Initial entry on the Top500 | 301 (June 2022) |
| Initial entry on the Green500 | 90 (June 2022) |
Software
The operating system of the VSC-5 is AlmaLinux, the job scheduler is SLURM, and Mathematica 12.3.1 as well as Matlab v9.13 R2022b (and v9.7 R2019b) are installed. Please check /opt/sw/vsc4 for various other software packages (or previous versions of these) or use
$ module avail
Login
Use ssh to connect to vsc5.vsc.ac.at (from the university network or VPN):
$ ssh yourloginname@vsc5.vsc.ac.at
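For convenience, you can add a host alias to the SSH configuration on your local machine (a sketch; the alias "vsc5" is freely chosen):

```
# ~/.ssh/config — hypothetical convenience entry
Host vsc5
    HostName vsc5.vsc.ac.at
    User yourloginname
```

With this entry, `ssh vsc5` is enough to connect.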
After entering your cluster password you will be asked for a one-time password (OTP), which you will receive via SMS; the OTP is valid for 12 hours. You can then acquire a GPU node using the following command:
$ salloc -p zen3_0512_a100x2 --gres=gpu:2
Using a single A100 card (i.e. --gres=gpu:1) is also possible, but only with the sbatch command. Note that the node allocation may take some time, usually up to a few minutes. Use the command
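A single-GPU batch job could be sketched as follows. The partition name matches the salloc command above; the job name, time limit and payload command are illustrative, and whether the Faculty QOS must be set explicitly in the script is an assumption:

```shell
# Sketch of a single-GPU batch job (possible only via sbatch, not salloc).
# Writes the job script to a file; submit it on the login node afterwards.
cat > single_gpu_job.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=single_gpu         # illustrative job name
#SBATCH --partition=zen3_0512_a100x2  # Faculty GPU partition
#SBATCH --qos=p70700_a100dual         # assumed: Faculty QOS, set explicitly
#SBATCH --gres=gpu:1                  # request a single A100 card
#SBATCH --time=01:00:00               # illustrative time limit

nvidia-smi                            # show the allocated GPU
EOF
```

Submit the script with `sbatch single_gpu_job.sh` once you are logged in.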
$ squeue -q p70700_a100dual
to view the current state of the private Faculty queue.
For more details see "Access to VSC" documentation in the VSC Wiki.
Backup
There is no backup of your user data; you have to back up your data yourself! Once there is no space left on the /global file system, data older than about 180 days will be deleted.
Support
If you need further support please don't hesitate to contact the VSC support team.