There are a couple of supercomputers available to BSC.

Each has its own hardware and software stack and needs to be managed differently.

This page gives an overview of the basic information, but we *strongly* encourage you to consult the official documentation of each supercomputer. All supercomputers use the [[a.1] Slurm Workload Manager](https://slurm.schedmd.com/documentation.html) for managing jobs, which we will not cover here.
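
That said, the basic Slurm workflow is the same on all of them. The sketch below is a minimal batch script; the account, partition, and binary names are placeholders, and the valid values are machine-specific (see each system's documentation).

```bash
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --account=my_project   # placeholder: your site-specific project/account
#SBATCH --partition=cpu        # placeholder: partition names differ per machine
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=00:10:00

srun ./my_program              # placeholder binary
```

Submit the script with `sbatch job.sh`, inspect the queue with `squeue -u $USER`, and cancel a job with `scancel <jobid>`.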

## MareNostrum 4

*[[a.2] official documentation](https://bsc.es/supportkc/docs/MareNostrum4/intro/)*

MareNostrum 4 is located at BSC and offers only a CPU partition.

Hence, GPU offloading is not an option.

However, it can serve nicely as a separate platform for testing sequential and CPU-optimized versions of your code.

## Lumi

*[[a.3] official documentation](https://docs.lumi-supercomputer.eu/)*

The Lumi supercomputer is divided into two partitions: [Lumi-C](https://docs.lumi-supercomputer.eu/hardware/lumic/) containing CPU nodes and [Lumi-G](https://docs.lumi-supercomputer.eu/hardware/lumig/) containing GPU nodes.

The internal hardware structure of these partitions differs, so a direct comparison between the two is rarely meaningful.

In other words, when comparing against the GPU version, the CPU version of your code should also be executed on Lumi-G.
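
As a hedged sketch of what that looks like in practice (the project ID is a placeholder, and `small-g` is a Lumi-G partition name at the time of writing; verify both against the Lumi docs):

```bash
#!/bin/bash
#SBATCH --account=project_465XXXXXX   # placeholder Lumi project ID
#SBATCH --partition=small-g           # Lumi-G partition; check current docs
#SBATCH --nodes=1
#SBATCH --gpus-per-node=1
#SBATCH --time=00:15:00

# Run the CPU build on the same Lumi-G node type as the GPU runs,
# so both measurements share identical host hardware.
srun ./my_app_cpu
```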

The list below contains modules that are required when building specific projects.

Be careful when experimenting with different module combinations: we noticed that some modules overwrite architecture settings loaded by previously loaded modules, which breaks their inner workings and causes errors.
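
A defensive pattern that avoids this (the module names are illustrative, not a prescribed set) is to reset to a clean environment and load one coherent stack in a single step, rather than mixing modules interactively:

```bash
# Reset to a known state first. On LUMI, `module purge` keeps the sticky
# system modules; on other Cray systems, check the site docs before purging.
module purge

# Load one coherent software stack in one go; names are illustrative.
module load LUMI/23.09 partition/G
module load cray-hdf5 cray-netcdf

# Verify what actually ended up loaded before building anything.
module list
```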

## MeluXina

*[[a.4] official documentation](https://docs.lxp.lu/)*

The MeluXina supercomputer consists of modules for CPU, GPU, FPGA, and large-memory applications.

The internal hardware structure of these modules differs, so a direct comparison between them is rarely meaningful.

In other words, when comparing against the GPU version, the CPU version of your code should also be executed on the GPU module.
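
Our reading of the documentation is that these modules are exposed as Slurm partitions. If so, a quick way to see which ones your account can reach is:

```bash
# Summarize the available partitions and their node counts; cpu, gpu,
# fpga, and largemem are the names we expect here, but verify the output.
sinfo --summarize
```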

We generally make a distinction between 3 use-cases:

**nemo-build:** There are no extra dependencies. HDF5 and NetCDF are built manually to avoid conflicts.
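
A rough sketch of what "built manually" can look like, assuming this targets the NEMO ocean model; the versions, install prefix, and compiler wrapper below are illustrative assumptions, not the exact recipe used:

```bash
#!/bin/bash
set -euo pipefail

PREFIX="$HOME/nemo-deps"   # illustrative install prefix

# 1. HDF5 first, since NetCDF links against it. An MPI compiler wrapper
#    (mpicc here; `cc` on Cray systems) is needed for --enable-parallel.
tar xf hdf5-1.14.3.tar.gz && cd hdf5-1.14.3
CC=mpicc ./configure --prefix="$PREFIX" --enable-parallel
make -j 8 && make install
cd ..

# 2. NetCDF-C against our own HDF5, not whatever a module might provide.
tar xf netcdf-c-4.9.2.tar.gz && cd netcdf-c-4.9.2
CPPFLAGS="-I$PREFIX/include" LDFLAGS="-L$PREFIX/lib" \
  ./configure --prefix="$PREFIX"
make -j 8 && make install
cd ..

# 3. NetCDF-Fortran on top, which a NEMO build typically needs.
tar xf netcdf-fortran-4.6.1.tar.gz && cd netcdf-fortran-4.6.1
CPPFLAGS="-I$PREFIX/include" LDFLAGS="-L$PREFIX/lib" \
  ./configure --prefix="$PREFIX"
make -j 8 && make install
```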

## Sources

- [a.1 Slurm Workload Manager](0.a.-Documentation-&-Manuals#a1-slurm-workload-manager)
- [a.2 MareNostrum4 documentation](0.a.-Documentation-&-Manuals#a2-marenostrum4-documentation)
- [a.3 Lumi documentation](0.a.-Documentation-&-Manuals#a3-lumi-documentation)
- [a.4 MeluXina documentation](0.a.-Documentation-&-Manuals#a4-meluxina-documentation)