The Lumi supercomputer is divided into two partitions: Lumi-C and Lumi-G.

The internal hardware structure of these partitions is different, so a direct comparison between the two is of little use.

In other words, when comparing against the GPU version, the CPU version of your code should also be executed on Lumi-G.

One Lumi-G node uses 4 AMD MI250X GPUs in combination with a single 64-core AMD EPYC 7A53 "Trento" CPU. The CPU offers 8 threads, of which 1 is dedicated to running the OS, so each node offers 7 processes that can be executed in parallel.
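
If you want to check this layout for yourself, the usual Linux and ROCm inspection tools can be run in a job step on a Lumi-G compute node (see the next section for how to get one). This is only a quick sketch; it assumes `numactl` and the ROCm command-line tools are available in your environment, for example after loading the appropriate modules.

```bash
# Run these from a job step on a Lumi-G compute node (see "Launching jobs" below).
lscpu | head -n 20    # CPU model plus core and thread counts
numactl --hardware    # NUMA layout of the EPYC "Trento" CPU
rocm-smi              # summary of the AMD GPUs visible on the node
```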

### Launching jobs

Lumi uses the [Slurm Workload Manager](https://slurm.schedmd.com/documentation.html), and thus jobs can be launched using a simple `.batch` script. Running interactive jobs is a bit of a hassle, since Lumi does not support SSHing into compute nodes. Luckily, there is a workaround.
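
For reference, a minimal batch script could look roughly like the sketch below. The partition name `standard-g`, the account value, and the GPU count are assumptions rather than values taken from this text; in particular, the sketch assumes each MI250X is exposed to Slurm as two GPU devices, so a full node is requested as 8. Check the official Lumi documentation for the recommended settings.

```bash
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --partition=standard-g     # assumption: name of the Lumi-G batch partition
#SBATCH --account=project_example  # placeholder: your Lumi project account
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=7        # the 7 per-node processes described above
#SBATCH --gpus-per-node=8          # assumption: each MI250X appears as 2 GPU devices
#SBATCH --time=00:30:00

srun ./my_gpu_app                  # placeholder for your application binary
```

The script is submitted with `sbatch job.batch`, and `squeue -u $USER` shows its state in the queue.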
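
A common pattern on Slurm systems that do not allow SSH access to compute nodes is to create a normal allocation and then attach an interactive shell to it with `srun`. The sketch below illustrates that pattern, reusing the same placeholder partition and account values as above; whether it matches the workaround referred to here should be verified against the rest of the Lumi instructions.

```bash
# Request an allocation; salloc returns a shell on the login node with the
# allocation's environment set (resource values are placeholders, as above).
salloc --partition=standard-g --account=project_example \
       --nodes=1 --ntasks=1 --gpus-per-node=1 --time=00:30:00

# From that shell, open an interactive shell on the allocated compute node.
srun --pty bash

# Alternatively, attach to an already running job from another terminal;
# --overlap lets this step share resources already in use by the job.
srun --jobid=1234567 --overlap --pty bash   # replace 1234567 with your job id
```

Exiting the interactive shell ends that job step; the allocation itself ends when you leave the `salloc` shell or its time limit is reached.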