Commit 37886ebf authored by erifarov's avatar erifarov

Add special_setup to cluster list

parent 9c8f313a
Pipeline #7336 passed with stage
in 88 minutes and 16 seconds
......@@ -25,7 +25,7 @@
#' to use for the computation. The default value is 1.
#'@param cluster A list of components that define the configuration of the
#' machine to be run on. The components vary across different machines.
#' Check \href{}{startR GitLab} for more
#' Check \href{}{Practical guide on GitLab} for more
#' details and examples. Only needed when the computation is not run locally.
#' The default value is NULL.
#'@param ecflow_suite_dir A character string indicating the path to a folder in
......@@ -125,14 +125,7 @@ After following these steps for the connections in both directions (although fro
Do not forget to add the following lines to your .bashrc on the HPC machine.
If you are planning to run it on CTE-Power:
if [[ $BSC_MACHINE == "power" ]] ; then
module unuse /apps/modules/modulefiles/applications
module use /gpfs/projects/bsc32/software/rhel/7.4/ppc64le/POWER9/modules/all/
If you are on Nord3-v2, then you'll have to add:
If you are planning to run it on Nord3-v2, you have to add:
if [ $BSC_MACHINE == "nord3v2" ]; then
module purge
......@@ -140,6 +133,13 @@ if [ $BSC_MACHINE == "nord3v2" ]; then
module unuse /apps/modules/modulefiles/applications /apps/modules/modulefiles/compilers /apps/modules/modulefiles/tools /apps/modules/modulefiles/libraries /apps/modules/modulefiles/environment
If you are using CTE-Power:
if [[ $BSC_MACHINE == "power" ]] ; then
module unuse /apps/modules/modulefiles/applications
module use /gpfs/projects/bsc32/software/rhel/7.4/ppc64le/POWER9/modules/all/
You can add the following lines in your .bashrc file on your workstation for convenience:
......@@ -585,6 +585,7 @@ The parameter `cluster` expects a list with a number of components that will hav
cluster = list(queue_host = '',
queue_type = 'slurm',
temp_dir = temp_dir,
r_module = 'R/4.1.2-foss-2019b',
cores_per_job = 4,
job_wallclock = '00:10:00',
max_jobs = 4,
......@@ -609,6 +610,7 @@ The cluster components and options are explained next:
- `extra_queue_params`: list of character strings with additional queue headers for the jobs to be submitted to the HPC. Mainly used to specify the amount of memory to reserve for each job (e.g. '#SBATCH --mem-per-cpu=30000'), to request special queuing (e.g. '#SBATCH --qos=bsc_es'), or to request use of specific software (e.g. '#SBATCH --reservation=test-rhel-7.5').
- `bidirectional`: whether the connection between the R workstation and the HPC login node is bidirectional (TRUE) or unidirectional from the workstation to the login node (FALSE).
- `polling_period`: when the connection is unidirectional, the workstation will ask the HPC login node for results every `polling_period` seconds. An excessively small value can overload the login node or result in a temporary ban.
- `special_setup`: name of the machine if the computation requires a special setup. Only MareNostrum 4 needs this parameter (e.g. special_setup = 'marenostrum4').
After the `Compute()` call is executed, an EC-Flow server is automatically started on your workstation, which will orchestrate the work and dispatch jobs onto the HPC. Thanks to the use of EC-Flow, you will also be able to visually monitor the progress of the execution. See the "Collect and the EC-Flow GUI" section.
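Putting the components above together, a full configuration could look like the sketch below. This is not taken from the commit: the host name, paths and module version are placeholders that must be adapted to your account and machine, and `wf` stands for a startR workflow previously built with `AddStep()`.

```r
library(startR)

# Hypothetical cluster configuration for MareNostrum 4;
# all values below are placeholders, not working defaults.
cluster <- list(queue_host = 'mn1.bsc.es',            # placeholder login node
                queue_type = 'slurm',
                temp_dir = '/gpfs/scratch/<user>/startR_tmp/',   # placeholder
                r_module = 'R/4.1.2-foss-2019b',
                cores_per_job = 4,
                job_wallclock = '00:10:00',
                max_jobs = 4,
                bidirectional = FALSE,
                polling_period = 10,
                special_setup = 'marenostrum4')       # required on MareNostrum 4

# Dispatch the workflow 'wf' onto the HPC; the EC-Flow server started on the
# workstation will monitor the jobs until the results are collected.
res <- Compute(wf,
               chunks = list(latitude = 2, longitude = 2),
               cluster = cluster,
               ecflow_suite_dir = '/home/<user>/startR_ecflow/',  # placeholder
               wait = FALSE)
```

With `wait = FALSE` the call returns immediately and the results can be gathered later with `Collect()`, as shown further down in the guide.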
......@@ -903,10 +905,6 @@ res <- Compute(step, list(system4, erai),
wait = FALSE)
### Example of computation of weekly means
### Example with data on an irregular grid with selection of a region
### Example on MareNostrum 4
......@@ -1038,8 +1036,7 @@ cluster = list(queue_host = '',
max_jobs = 4,
extra_queue_params = list('#BSUB -q bsc_es'),
bidirectional = FALSE,
polling_period = 10,
special_setup = 'marenostrum4'
polling_period = 10