# Practical guide for processing large data sets with startR

This guide includes explanations and practical examples for you to learn how to use startR to efficiently process large data sets in parallel on the BSC's HPCs (Nord3-v2, CTE-Power 9, Marenostrum 4, ...). See the main page of the [**startR**](README.md) project for a general overview of the features of startR, without actual guidance on how to use it.

If you would like to start using startR right away on the BSC infrastructure, you can directly go through the "Configuring startR" section, copy/paste the basic startR script example shown at the end of the "Introduction" section into the text editor of your preference, adjust the paths and user names specified in the `Compute()` call, and run the code in an R session after loading the R and ecFlow modules.

## Index

1. [**Motivation**](inst/doc/practical_guide.md#motivation)
2. [**Introduction**](inst/doc/practical_guide.md#introduction)
3. [**Configuring startR**](inst/doc/practical_guide.md#configuring-startr)
4. [**Using startR**](inst/doc/practical_guide.md#using-startr)
   1. [**Start()**](inst/doc/practical_guide.md#start)
   2. [**Step() and AddStep()**](inst/doc/practical_guide.md#step-and-addstep)
   3. [**Compute()**](inst/doc/practical_guide.md#compute)
      1. [**Compute() locally**](inst/doc/practical_guide.md#compute-locally)
      2. [**Compute() on HPCs**](inst/doc/practical_guide.md#compute-on-hpcs)
   4. [**Collect() and the EC-Flow GUI**](inst/doc/practical_guide.md#collect-and-the-ec-flow-gui)
5. [**Additional information**](inst/doc/practical_guide.md#additional-information)
6. [**Other examples**](inst/doc/practical_guide.md#other-examples)
7. [**Compute() cluster templates**](inst/doc/practical_guide.md#compute-cluster-templates)

## Motivation

What would you do if you had to apply a custom statistical analysis procedure to a 10TB climate data set? Probably, you would need to use a scripting language to write a procedure which is able to retrieve a subset of data from the file system (it would rarely be possible to handle all of it at once on a single node), code the custom analysis procedure in that language, and apply it carefully and efficiently to the data. Afterwards, you would need to think of and develop a mechanism to dispatch the job multiple times in parallel to an HPC of your choice, each of the jobs processing a different subset of the data set. You could do this by hand but, ideally, you would rather use EC-Flow or a similar general-purpose workflow manager which would orchestrate the work for you. It would also allow you to visually monitor and control the progress, as well as keep an easy-to-understand record of what you did, in case you need to re-use it in the future. This solution, although it is the recommended way to go, is a demanding one and you could easily spend a few days until you get it running smoothly. Additionally, when developing the job script, you would be exposed to the difficulties of efficiently managing the data and applying the coded procedure to it.

With the constant increase of resolution (in all possible dimensions) of weather and climate model output, and with the growing need for computationally demanding analytical methodologies (e.g. bootstrapping with thousands of repetitions), this kind of divide-and-conquer approach becomes indispensable. While tools exist to simplify and automate this complex procedure, they usually require adapting your data to specific formats, migrating to specific database systems, or an advanced knowledge of computer science or of specific programming languages or frameworks.

startR is yet another tool which allows the R user to apply user-defined functions or procedures to collections of NetCDF files (see Note 1) as large as desired, transparently using computational resources in HPCs (see Note 2) to minimize the time to solution. Although it has been designed to require as few mandatory technical parameters as possible from the user, an experienced user can configure a number of additional parameters to adjust the execution. startR operates on, and provides as outputs, multidimensional arrays with named dimensions, a basic and widely used data structure in R, and this makes the framework more familiar to the general R user.

Although startR can be difficult to use if learnt from the documentation of its functions, it can also be used with little effort by re-using and adapting existing startR scripts, such as the ones provided later in this guide. startR scripts are written in R, and are short (usually under 50 lines of code), concise, and easy to read.

Other things you can expect from startR:
- Combining data from multiple model executions or observational data sources.
- Extracting values for a certain time period, geographical location, etc., from a collection of NetCDF files.
- Obtaining data arrays with results of analytical procedures that are to be plotted or stored as RData or NetCDF for later use in the analysis workflow.

Things that are not supposed to be done with startR:
- Curating/homogenizing model output files or generating files to be stored under /esarchive following the department/community conventions. Although metadata is understood and used by startR, its handling is not 100% consistent yet.

_**Note 1**_: The data files do not need to be migrated to a database system, nor do they have to comply with any specific convention for their file names, the name, number or order of their dimensions and variables, or the distribution of files in the file system. Although only files in the NetCDF format are supported for now, plug-in functions (of approximately 200 lines of code) can be programmed to add support for additional file formats.

_**Note 2**_: The HPCs startR is designed to run on are understood as multi-core multi-node clusters. startR relies on a shared file system across all HPC nodes, and does not implement any kind of distributed storage system for now.

## Introduction

In order to use startR you will need to follow the configuration steps listed in the "Configuring startR" section of this guide to make sure startR works on your workstation with the HPC of your choice.

Afterwards, you will need to understand and use five functions, all of them included in the startR package:
 - **Start()**, for declaring the data sets to be processed
 - **Step()** and **AddStep()**, for specifying the operation to be applied to the data
 - **Compute()**, for specifying the HPC to be employed, the execution parameters (e.g. number of chunks and cores), and to trigger the computation
 - **Collect()** and the **EC-Flow graphical user interface**, for monitoring the progress and collecting the results

Next, you can see an example startR script performing the ensemble mean of a small data set on an HPC cluster such as Nord3-v2 or CTE-Power9, for you to get a broad picture of how the startR functions interact and the information that is represented in a startR script. Note that the `queue_host`, `temp_dir` and `ecflow_suite_dir` parameters in the `Compute()` call are user-specific.

```r
library(startR)

repos <- '/esarchive/exp/ecmwf/system5_m1/6hourly/$var$/$var$_$sdate$.nc'
data <- Start(dat = repos,
              var = 'tas',
              sdate = '20180101',
              ensemble = 'all',
              time = 'all',
              latitude = indices(1:40),
              longitude = indices(1:40))

fun <- function(x) {
  # Expected inputs:
  #   x: array with dimensions ('ensemble', 'time')
  apply(x + 1, 2, mean)
}

step <- Step(fun, 
             target_dims = c('ensemble', 'time'), 
             output_dims = c('time'))

wf <- AddStep(data, step)

  res <- Compute(wf,
                 chunks = list(latitude = 2,
                               longitude = 2),
                 threads_load = 2,
                 threads_compute = 4,
                 cluster = list(queue_host = 'nord4.bsc.es',
                   queue_type = 'slurm',
                   temp_dir = '/gpfs/scratch/bsc32/bsc32734/startR_hpc/',
                   cores_per_job = 4,
                   job_wallclock = '00:10:00',
                   max_jobs = 4,
                   extra_queue_params = list('#SBATCH --mem-per-cpu=3000'),
                   bidirectional = FALSE,
                   polling_period = 10
                 ),
                 ecflow_suite_dir = '/home/Earth/aho/startR_local/',
                 wait = TRUE
                 )
```

## Configuring startR

At BSC, the only configuration step you need to follow is to set up a passwordless connection with the HPC. You do not need to follow the complete list of deployment steps under [**Deployment**](inst/doc/deployment.md) since all dependencies are already installed for you to use.

Specifically, you need to set up passwordless, userless access from your machine to the HPC login node, and from the HPC login node to your machine if at all possible. In order to establish the connection in one of the directions, you need to do the following:

1. generate an ssh pair of keys on the origin host if you do not have one, using `ssh-keygen -t rsa`

2. ssh to the destination node and create a directory in which to store the public key, using `ssh username@hostname_or_ip mkdir -p .ssh`. 'hostname_or_ip' refers to the host name or IP address of the login node of the selected HPC, and 'username' to your account name on the HPC, which may not coincide with the one on your workstation.

3. append your public key to the 'authorized_keys' file in that folder, using `cat .ssh/id_rsa.pub | ssh username@hostname_or_ip 'cat >> .ssh/authorized_keys'`

4. adjust the permissions of the key folder and the authorized keys file, using `ssh username@hostname_or_ip "chmod 700 .ssh; chmod 640 .ssh/authorized_keys"`

5. if your username is different on your workstation and on the login node of the HPC, add an entry to the .ssh/config file on your workstation as follows:
```
  Host short_name_of_the_host
    HostName hostname_or_ip
    User username
    IdentityFile ~/.ssh/id_rsa
```

After following these steps for the connections in both directions (although the connection from the HPC to the workstation might not be possible), you are good to go.

Do not forget to add the following lines to your .bashrc on the HPC machine.

If you are planning to run on Nord3-v2, you have to add:
```
if [ $BSC_MACHINE == "nord3v2" ]; then
  module purge
  module use /gpfs/projects/bsc32/software/suselinux/11/modules/all
  module unuse /apps/modules/modulefiles/applications /apps/modules/modulefiles/compilers /apps/modules/modulefiles/tools /apps/modules/modulefiles/libraries /apps/modules/modulefiles/environment
fi
```
If you are using CTE-Power:
```
if [[ $BSC_MACHINE == "power" ]] ; then
  module unuse /apps/modules/modulefiles/applications
  module use /gpfs/projects/bsc32/software/rhel/7.4/ppc64le/POWER9/modules/all/
fi
```

You can add the following lines to your .bashrc file on your workstation for convenience:
```
alias ctp='ssh -XY username@hostname_or_ip'
alias start='module load R CDO ecFlow'
```

## Using startR

If you have successfully gone through the configuration steps, you will just need to run the following commands in a terminal session and a fresh R session will pop up with the startR environment ready to use.

```
start
R
```

The library can be loaded as follows:

```r
library(startR)
```

### Start()

In order for startR to recognize the data sets you want to process, you first need to declare them. The first step in the declaration of a data set is to build a special path string that encodes where all the involved NetCDF files to be processed are stored. It contains some wildcards in those parts of the path that vary across files. This special path string is also called "path pattern".

Before defining an example path pattern, let's introduce some target NetCDF files. In the "esarchive" at BSC, we can find the following files:

```
/esarchive/exp/ecmwf/system5_m1/6hourly/
  |--tas/
  |   |--tas_19930101.nc
  |   |--tas_19930201.nc
  |   |        ...
  |   |--tas_20171201.nc
  |--tos/
      |--tos_19930101.nc
      |--tos_19930201.nc
      |        ...
      |--tos_20171201.nc
```

A path pattern that can be used to encode the location of these files in a compact way is the following:

```r
repos <- '/esarchive/exp/ecmwf/system5_m1/6hourly/$var$/$var$_$sdate$.nc'
```

The wildcards used (the pieces wrapped between '$' symbols) can be given any names you like. They do not necessarily need to be 'var' or 'sdate' or match any specific key word (although in this case, as explained later, the 'var' name will trigger a special feature of `Start()`).
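
For instance, the start date wildcard could just as well be given another name. The following is a hypothetical variant, not used in the rest of this guide; the '$var$' name is kept because, as noted, it triggers a special feature of `Start()`:

```r
# Same files, but the Start() parameter for the dates would now be
# called 'start_date' instead of 'sdate'.
repos_alt <- '/esarchive/exp/ecmwf/system5_m1/6hourly/$var$/$var$_$start_date$.nc'
```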

Once the path pattern is specified, a `Start()` call can be built, in which you need to provide, as parameters, the specific values of interest for each of the dimensions defined by the wildcards (also called outer dimensions, or file dimensions), as well as for each of the dimensions inside the NetCDF files (inner dimensions).

You can check in advance which dimensions are inside the NetCDF files by using e.g. easyNCDF on one of the files:

```r
easyNCDF::NcReadDims('/esarchive/exp/ecmwf/system5_m1/6hourly/tas/tas_19930101.nc',
                     var_names = 'tas')
```

This will show the names and lengths of the dimensions of the selected variable:

```r
      var longitude  latitude  ensemble      time 
        1      1296       640        25       860
```

_**Note**_: If you check the dimensions of that file with `ncdump -h`, you will realize the 'var' dimension is actually not defined inside. `NcReadDims()` and `Start()`, though, perceive the different variables inside a file as if stored along a virtual dimension called 'var'. You can ignore this for now and assume 'var' is simply a file dimension (since it appears as a wildcard in the path pattern). Read more on this in Note 1 at the end of this section.

Once we know the dimension names, we have all the information we need to put the `Start()` call together:

```r
data <- Start(dat = repos,
              # outer dimensions
              var = 'tas',
              sdate = '19930101',
              # inner dimensions
              ensemble = 'all',
              time = 'all',
              latitude = 'all',
              longitude = 'all')
```

For each of the dimensions, the values or indices of interest (a.k.a. selectors) can be specified in any of the following ways (a combined example is sketched after this list):
- Using one or more numeric indices, for example `time = indices(c(1, 3, 5))`, or `sdate = indices(3:5)`. In the latter case, the third, fourth and fifth start dates appearing in the file system in alphabetical order would be selected ('19930301', '19930401' and '19930501').
- Using one or more actual values, for example `sdate = values('19930101')`, or `ensemble = values(c('r1i1p1', 'r2i1p1'))`, or `latitude = values(c(10, 10.5, 11))`. The `values()` helper function can be omitted (as shown in the example). See Note 2 for details on how value selectors are handled when specified for inner dimensions.
- Using a list of two numeric values, for example `sdate = indices(list(5, 10))`. This will take all indices from the 5th to the 10th.
- Using a list of two actual values, for example `ensemble = values(list('r1i1p1', 'r5i1p1'))` or `latitude = values(list(-45, 75))`. This will take all values, in order, placed between the two values specified (both ends included).
- Using the special keywords 'all', 'first' or 'last'.
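
As an illustration, the following hypothetical call (not used in the rest of this guide) combines several of these selector styles; the latitude and longitude ranges assume that coordinate variables with those names exist inside the files:

```r
data_sub <- Start(dat = repos,
                  var = 'tas',                        # a single actual value
                  sdate = indices(1),                 # first start date found
                  ensemble = indices(1:5),            # first five members
                  time = 'all',                       # special keyword
                  latitude = values(list(-45, 45)),   # all values between -45 and 45
                  longitude = indices(list(1, 100)))  # indices 1 to 100
```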

The dimensions specified in the `Start()` call do not need to follow any specific order, not even the actual order in the path pattern or inside the file. The order, though, can have an impact on the performance of `Start()` as explained in Note 3.

Running the `Start()` call shown above will display some progress and information messages:

```r
* Exploring files... This will take a variable amount of time depending
*   on the issued request and the performance of the file server...
* Detected dimension sizes:
*         dat:    1
*         var:    1
*       sdate:    1
*    ensemble:   25
*        time:  860
*    latitude:  640
*   longitude: 1296
* Total size of involved data:
*   1 x 1 x 1 x 25 x 860 x 640 x 1296 x 8 bytes = 132.9 Gb
* Successfully discovered data dimensions.
Warning messages:
1: ! Warning: Parameter 'pattern_dims' not specified. Taking the first dimension,
!   'dat' as 'pattern_dims'. 
2: ! Warning: Could not find any pattern dim with explicit data set descriptions (in
!   the form of list of lists). Taking the first pattern dim, 'dat', as
!   dimension with pattern specifications.
```

The warnings shown are normal, and could be avoided with a more verbose specification of the parameters to the `Start()` function.
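
For instance, a more explicit (hypothetical) version of the same call could name the pattern dimension and describe the data set as a list of lists, which would avoid both warnings:

```r
data <- Start(dat = list(list(name = 'system5_m1', path = repos)),
              var = 'tas',
              sdate = '19930101',
              ensemble = 'all',
              time = 'all',
              latitude = 'all',
              longitude = 'all',
              pattern_dims = 'dat')
```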

The dimensions of the selected data set and the total size are shown. As you have probably noticed, this `Start()` call is very fast, even though several GB of data are involved (see Note 4 on the size of the data in R). This is because `Start()` is simply discovering the location and dimensions of the involved data.

You can take a quick look at the collected metadata as follows:

```r
str(data)
```
```r
Class 'startR_header' length 9 Start(dat = "/esarchive/exp/ecmwf/system5_m1/6hourly/$var$/$var$_$sdate$.nc",      var = "tas", sdate = "19930101", ensemble = "all", time = "all", latitude = "all",  ...
  ..- attr(*, "Dimensions")= Named num [1:7] 1 1 1 25 860 ...
  .. ..- attr(*, "names")= chr [1:7] "dat" "var" "sdate" "ensemble" ...
  ..- attr(*, "Variables")=List of 2
  .. ..$ common: NULL
  .. ..$ dat1  : NULL
  ..- attr(*, "ExpectedFiles")= chr [1, 1, 1] "/esarchive/exp/ecmwf/system5_m1/6hourly/tas/tas_19930101.nc"
  ..- attr(*, "FileSelectors")=List of 1
  .. ..$ dat1:List of 3
  .. .. ..$ dat  :List of 1
  .. .. .. ..$ : chr "dat1"
  .. .. ..$ var  :List of 1
  .. .. .. ..$ : chr "tas"
  .. .. ..$ sdate:List of 1
  .. .. .. ..$ : chr "19930101"
  ..- attr(*, "PatternDim")= chr "dat"
```

The retrieved information can be accessed with the `attr()` function. For example:

```r
attr(data, 'FileSelectors')$dat1
```
```r
$dat
$dat[[1]]
[1] "dat1"


$var
$var[[1]]
[1] "tas"


$sdate
$sdate[[1]]
[1] "19930101"
```

If you are interested in actually loading the entire data set on your machine, you can do so in two ways (_**be careful**_: loading the data involved in the `Start()` call in this example will most likely stall your machine; try it with a smaller region or a subset of forecast time steps, as in the sketch after this list):
- adding the parameter `retrieve = TRUE` in your `Start()` call.
- evaluating the object returned by `Start()`: `data_load <- eval(data)`
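
For example, a much lighter request (a hypothetical sketch; the subset is illustrative) could be retrieved directly into memory:

```r
data_small <- Start(dat = repos,
                    var = 'tas',
                    sdate = '19930101',
                    ensemble = indices(1),
                    time = indices(1:4),
                    latitude = indices(1:40),
                    longitude = indices(1:40),
                    retrieve = TRUE)
dim(data_small)
```
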
See the section on "How to choose the number of chunks, jobs and cores" for indications on working out the maximum amount of data that can be retrieved with a `Start()` call on a specific machine. 

You may realize that this functionality is similar to the `Load()` function in the s2dv package. In fact, `Start()` is more advanced and flexible, although `Load()` is more mature and consistent for loading typical seasonal to decadal forecasting data. `Load()` will be adapted in the future to use `Start()` internally.

There are no constraints on the number or names of the outer or inner dimensions used in a `Start()` call. In other words, `Start()` will handle NetCDF files with any number of dimensions with any name, as well as files distributed across folders in complex ways, since you can use customized wildcards in the path pattern.

There are a number of advanced parameters and features in `Start()` to handle heterogeneities across files involved in a `Start()` call, such as the `synonims` parameter, or to handle dimensions extending across multiple NetCDF files, such as the `*_across` parameter. See the documentation on the [**Start()**](inst/doc/start.md) function or `?Start` for more information.
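
As a quick illustration of the former, the `synonims` parameter takes, for each affected dimension or variable, the alternative names that may be found across files. A hypothetical sketch (the alternative names are assumptions):

```r
data <- Start(dat = repos,
              var = 'tas',
              sdate = '19930101',
              ensemble = 'all',
              time = 'all',
              latitude = 'all',
              longitude = 'all',
              synonims = list(latitude = c('latitude', 'lat'),
                              longitude = c('longitude', 'lon')))
```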

_**Note 1**_ _on the 'var' dimension_: as mentioned above in this section, `NcReadDims()` shows a virtual dimension called 'var' as if it appeared inside the file. The existence of this dimension is justified by the fact that, many times, NetCDF files contain more than one variable. The 'var' dimension should hence be considered an "inner" dimension. But, in our example, the dimension 'var' is also defined as a file dimension in the path pattern. So, following the logic of `Start()`, there would be two 'var' dimensions, one of them outer and the other inner, and we should consequently specify selectors for each of them. However, as an exception, they are automatically understood to be the same dimension, and the target variable name specified as selector for the outer 'var' dimension is also re-used to select the variable inside the file. This feature is triggered only by the 'var' dimension name and, if other dimension names appeared more than once as inner or outer dimensions, `Start()` would throw an error. The feature described here is useful for the very common case where file paths contain the variable name and that variable is the only climate variable inside the file. If this feature was not available, one could still define the data set as shown in the code snippet below, where there would be some redundancy in the `Start()` call and in the dimensions of the resulting array.
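
A hypothetical sketch of such a redundant definition follows (the wildcard name 'var_in_path' is illustrative):

```r
repos_redundant <- '/esarchive/exp/ecmwf/system5_m1/6hourly/$var_in_path$/$var_in_path$_$sdate$.nc'
data <- Start(dat = repos_redundant,
              var_in_path = 'tas',  # selects the folder and file names
              var = 'tas',          # selects the variable inside each file
              sdate = '19930101',
              ensemble = 'all',
              time = 'all',
              latitude = 'all',
              longitude = 'all')
```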

_**Note 2**_ _on providing values as selectors for inner dimensions_: when values are requested for an inner dimension, the corresponding numeric indices are automatically calculated by comparing the provided values with a variable inside the file with the same name as the dimension for which the values have been requested. In the last example, where specific values are requested for the latitudes, the variable 'latitude' is automatically retrieved from the files. If the name of the variable does not coincide with the name of the dimension, the parameter `*_var` can be specified in the `Start()` call, as detailed in `?Start`.

_**Note 3**_ _on the order of dimensions_: neither the file dimensions nor the inner dimensions need to be specified in the same order as they appear in the path pattern or inside the NetCDF files, respectively. The resulting arrays returned by `Start()` will have the dimensions in the same order as requested in `Start()`, so changing the order in the call can potentially trigger automatic reordering of data, which is time consuming. But, depending on the use case, it may be a good idea to ask for a specific dimension order so that the data is properly arranged for making subsequent calculations more efficient. Remember that the order of the dimensions in R is "big endian"; the values are consecutive along the first (left-most) dimension. In contrast, the order of dimensions in NetCDF files is "little endian". This means that if you want to respect the order of the data values in memory as stored in the NetCDF files, you should request the dimensions in your `Start()` call in the opposite order.

_**Note 4**_ _on the size of the data in R_: if you check the size of the file involved in the example `Start()` call used above ('/esarchive/exp/ecmwf/system5_m1/6hourly/tas/tas_19930101.nc'), you will see it only weighs 34GB. Why is the data then reported to occupy almost 133GB? This is due to two facts: on the one hand, NetCDF files are usually compressed, and their uncompressed size can be substantially greater; in this case, the uncompressed data would occupy about 66GB. On the other hand, the variable we are targeting in the example ('tas') is defined as a float variable inside the NetCDF file, meaning each value is a 4-byte real number. However, R cannot represent 4-byte real numbers; it always takes 8 bytes to represent a real number. This is why, when float numbers are represented in R, the effective size of the data is doubled. In this case, the 66GB of uncompressed float values need to be represented using about 133GB of RAM in R.

### Step() and AddStep()

Once the data sources are declared, you can define the operation to be applied to them. The operation needs to be encapsulated in the form of an R function receiving one or more multidimensional arrays with named dimensions (plus additional helper parameters) and returning one or more multidimensional arrays, which should also have named dimensions. For example:

```r
fun <- function(x) {
  # Root mean square of the input values. The loop simply repeats the same
  # computation a number of times to emulate a longer-running procedure.
  r <- sqrt(sum(x ^ 2) / length(x))
  for (i in 1:100) {
    r <- sqrt(sum(x ^ 2) / length(x))
  }
  # Return an array with a named dimension, as expected by startR
  dim(r) <- c(time = 1)
  r
}
```

As you see, the function only needs to operate on the essential dimensions, not on the whole set of dimensions of the data set. This example function receives only one multidimensional array (with dimensions 'ensemble' and 'time', although not expressed in the code), and returns one multidimensional array (with a single dimension 'time' of length 1). startR will automatically feed the function with subsets of data with only the essential dimensions, but first, a startR "step" for the function has to be built with the `Step()` function.

The `Step()` function requires, as parameters, the names of the dimensions of the input arrays expected by the function (in this example, a single array with the dimensions 'ensemble' and 'time'), as well as the names of the dimensions of the arrays the function returns (in this example, a single array with the dimension 'time'):

```r
step <- Step(fun = fun, 
             target_dims = c('ensemble', 'time'), 
             output_dims = c('time'))
```

The step function should ideally expect arrays with the dimensions in the same order as requested in the `Start()` call, and consequently the dimension names specified in the `Step()` function should appear in the same order. If a different order was specified, startR would reorder the subsets for the step function to receive them in the expected dimension order.

Functions that receive or return multiple multidimensional arrays are also supported. In such cases, lists of vectors of dimension names should be provided as `target_dims` or `output_dims`.
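
For instance, a hypothetical step taking two inputs (say, an experimental and an observational array) and returning a single output could be declared as follows; the function body and dimension names are illustrative:

```r
step2 <- Step(fun = function(exp, obs) {
                # exp: array with dimensions ('ensemble', 'time')
                # obs: array with dimension ('time')
                bias <- apply(exp, 2, mean) - as.vector(obs)
                dim(bias) <- c(time = length(bias))
                bias
              },
              target_dims = list(exp = c('ensemble', 'time'),
                                 obs = 'time'),
              output_dims = c('time'))
```

The corresponding workflow would then be assembled by providing a list of data sources, e.g. `wf2 <- AddStep(list(exp = data_exp, obs = data_obs), step2)`, where `data_exp` and `data_obs` are hypothetical `Start()` headers.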

Since functions wrapped with the `Step()` function will potentially be called thousands of times, it is recommended to keep them as light as possible by, for example, avoiding calls to the `library()` function to load other packages or interacting with files on disk. See the documentation on the parameter `use_libraries` of the `Step()` function, or consider adding additional parameters to the step function with extra information. 

Once the step is built, a workflow of steps can be assembled as follows:

```r
wf <- AddStep(data, step)
```

If the step involved more than one data source, a list of data sources could be provided as first parameter. You can find examples using more than one data source further in this guide.

It is not possible for now to define workflows with more than one step, but this is not a crucial gap since a step function can contain more than one statistical analysis procedure. Furthermore, it is usually enough to perform only the first one or two steps of the analysis workflow on the HPCs because, after these steps, the volume of data involved is reduced substantially and the analysis can go on with conventional methods.

### Compute()

Once the data sources are declared and the workflow is defined, you can proceed to specify the execution parameters (including which platform to run on) and trigger the execution with the `Compute()` function.

Next, a few examples are shown with `Compute()` calls to trigger the processing of a dataset locally (only on the machine where the R session is running) and on different HPCs (Nord3-v2, CTE-Power9 and other HPCs). However, let's first define a `Start()` call that involves a smaller subset of data in order not to make the examples too heavy.

```r
library(startR)

repos <- '/esarchive/exp/ecmwf/system5_m1/6hourly/$var$/$var$_$sdate$.nc'
data <- Start(dat = repos,
              var = 'tas',
              sdate = '19930101',
              ensemble = 'all',
              time = indices(1:2),
              latitude = 'all',
              longitude = 'all')

fun <- function(x) {
  # Expected inputs:
  #   x: array with dimensions ('ensemble', 'time')
  apply(x + 1, 2, mean)
}

step <- Step(fun,
             target_dims = c('ensemble', 'time'),
             output_dims = c('time'))

wf <- AddStep(data, step)
```

The output of the `Start()` call follows, where the size of the selected data is reported.
```r
* Exploring files... This will take a variable amount of time depending
*   on the issued request and the performance of the file server...
* Detected dimension sizes:
*         dat:    1
*         var:    1
*       sdate:    1
*    ensemble:   25
*        time:    2
*    latitude:  640
*   longitude: 1296
* Total size of involved data:
*   1 x 1 x 1 x 25 x 2 x 640 x 1296 x 8 bytes = 316.4 Mb
* Successfully discovered data dimensions.
Warning messages:
1: ! Warning: Parameter 'pattern_dims' not specified. Taking the first dimension,
!   'dat' as 'pattern_dims'. 
2: ! Warning: Could not find any pattern dim with explicit data set descriptions (in
!   the form of list of lists). Taking the first pattern dim, 'dat', as
!   dimension with pattern specifications.
```

#### Compute() locally

When only your own workstation is available, startR can still be useful to process a very large dataset by chunks, thus avoiding a RAM memory overload and consequent crash of the workstation. startR will simply load and process the dataset by chunks, one after the other. The operations defined in the workflow will be applied to each chunk, and the result of each chunk will be stored in a temporary file. `Compute()` will finally gather and merge the results of each chunk and return a single data object, including one or multiple multidimensional data arrays, and additional metadata.

A list of the dimensions along which to split the data, and the number of slices (or "chunks") to make for each, is the only piece of information required for `Compute()` to run locally. It will only be possible to request chunks for those dimensions not required by any of the steps in the workflow built by `Step()` and `AddStep()`.

Following the workflow of steps defined in the example, where the step uses 'time' and 'ensemble' as target dimensions, the dimensions remaining for chunking would be 'dat', 'var', 'sdate', 'latitude' and 'longitude'. Note that defining a step which has many target dimensions should be avoided as it will reduce the chunking options.

As an example, we could request for computation performing two chunks along the 'latitude' dimension, and two chunks along the 'longitude' dimension. This would result in the data being processed in 4 chunks of about 80MB (the size of the involved data, 316MB, divided by 4).

Calculate the size of the chunks before executing the computation, and make sure they fit in the RAM of your machine. You should have as much free RAM as 2x or 3x the expected size of one chunk. Read more on adjusting the number of chunks in the section "How to choose the number of chunks, jobs and cores". 

```r
res <- Compute(wf,
               chunks = list(latitude = 2,
                             longitude = 2))
```

Once the `Compute()` call is executed, the R session will wait for it to return the result. Progress messages will be shown, with a remaining time estimate after the first chunk has been processed.
```r
* Processing chunks... remaining time estimate soon...
* Loading chunk 1 out of 4 ...
* Processing...
* Remaining time estimate (at 2019-01-27 23:31:07) (neglecting merge
*   time): 1.662909 mins
* Loading chunk 2 out of 4 ...
* Processing...
* Loading chunk 3 out of 4 ...
* Processing...
* Loading chunk 4 out of 4 ...
* Processing...
```

After all chunks have been processed, a summary of the execution and its performance is reported.
```r
* Computation ended successfully.
*   Number of chunks: 4
*   Max. number of concurrent chunks (jobs): 1
*   Requested cores per job: NA
*   Load threads per chunk: 1
*   Compute threads per chunk: 1
*   Total time (s): 123.636801242828
*     Chunking setup: 0.000516414642333984
*     Data upload to cluster: 0
*     All chunks: 123.416877985001
*     Transfer results from cluster: 0
*     Merge: 0.219406843185425
*     Each chunk: 
*       queue: 
*         mean: 0
*         min: 0
*         max: 0
*       job setup: 
*         mean: 0
*         min: 0
*         max: 0
*       load: 
*         mean: 3.07621091604233
*         min: 2.05077886581421
*         max: 5.05771064758301
*       compute: 
*         mean: 27.7766432762146
*         min: 27.4815557003021
*         max: 28.199684381485
```

Also, some warning messages will be displayed corresponding to the execution of the `Start()` function to retrieve the data for each chunk.

`Compute()` will return a list of data arrays in your R session, one data array for each output returned by the last step in the specified workflow. In the example here, only one array is returned.
```r
str(res)
```
```r
List of 1
 $ output1: num [1:2, 1, 1, 1, 1:640, 1:1296] 250 250 251 250 251 ...
 - attr(*, "startR_compute_profiling")=List of 14
  ..$ nchunks          : num 4
  ..$ concurrent_chunks: num 1
  ..$ cores_per_job    : logi NA
  ..$ threads_load     : num 1
  ..$ threads_compute  : num 1
  ..$ bychunks_setup   : num 0.000516
  ..$ transfer         : num 0
  ..$ queue            : num 0
  ..$ job_setup        : num 0
  ..$ load             : num [1:4] 5.06 3.14 2.06 2.05
  ..$ compute          : num [1:4] 28.2 27.8 27.5 27.7
  ..$ transfer_back    : num 0
  ..$ merge            : num 0.219
  ..$ total            : num 124
```

The configuration details and profiling information are attached as attributes to the returned list of arrays.

If you check the dimensions of one of the output arrays, you will see that it preserves named dimensions, and that the output dimensions of the workflow steps appear in the first positions (left-most).

```r
dim(res$output1)
```
```r
     time       dat       var     sdate  latitude longitude 
        2         1         1         1       640      1296
```

In addition to performing the computation in chunks, you can adjust the number of execution threads to use for the data retrieval stage (with `threads_load`) and for the computation (with `threads_compute`). Using more than 2 threads for the retrieval will usually be counterproductive, since two will already be able to make full use of the bandwidth between the workstation and the data repository. The optimal number of threads for the computation will depend on the number of processors in your machine, the number of cores they have, and the number of threads supported by each of them.

```r
res <- Compute(wf,
               chunks = list(latitude = 2,
                             longitude = 2),
               threads_load = 2,
               threads_compute = 4)
```
```r
* Computation ended successfully.
*   Number of chunks: 4
*   Max. number of concurrent chunks (jobs): 1
*   Requested cores per job: NA
*   Load threads per chunk: 2
*   Compute threads per chunk: 4
*   Total time (s): 44.6467976570129
*     Chunking setup: 0.000483036041259766
*     Data upload to cluster: 0
*     All chunks: 44.4387269020081
*     Transfer results from cluster: 0
*     Merge: 0.207587718963623
*     Each chunk: 
*       queue: 
*         mean: 0
*         min: 0
*         max: 0
*       job setup: 
*         mean: 0
*         min: 0
*         max: 0
*       load: 
*         mean: 3.08622789382935
*         min: 2.77512788772583
*         max: 3.93441939353943
*       compute: 
*         mean: 8.02220791578293
*         min: 8.01178908348083
*         max: 8.03660178184509
```

#### Compute() on HPCs

In order to run the computation on an HPC, you will need to make sure the passwordless connection with the login node of that HPC is configured, as shown at the beginning of this guide; if possible, in both directions. Also, you will need to know whether there is a shared file system between your workstation and that HPC, and will need information on the number of nodes, cores per node, threads per core, RAM memory per node, and type of workload manager used by that HPC (Slurm, PBS and LSF are supported).

You will need to add two parameters to your `Compute()` call: `cluster` and `ecflow_suite_dir`.

The parameter `ecflow_suite_dir` expects a path to a folder on the workstation in which to store temporary files generated for the automatic management of the workflow. As you will see later, the EC-Flow workflow manager is used transparently for this purpose.

The parameter `cluster` expects a list with a number of components that will have to be provided a bit differently depending on the HPC you want to run on. You can see next an example cluster configuration that will execute the previously defined workflow on Nord3-v2.
```r
  # user-defined
  temp_dir <- '/gpfs/scratch/bsc32/bsc32734/startR_hpc/'
  ecflow_suite_dir <- '/home/Earth/aho/startR_local/'

  res <- Compute(wf,
                 chunks = list(latitude = 2,
                               longitude = 2),
                 threads_load = 2,
                 threads_compute = 4,
                 cluster = list(queue_host = 'nord4.bsc.es',
                   queue_type = 'slurm',
                   temp_dir = temp_dir,
                   r_module = 'R/4.1.2-foss-2019b',
                   cores_per_job = 4,
                   job_wallclock = '00:10:00',
                   max_jobs = 4,
                   extra_queue_params = list('#SBATCH --mem-per-cpu=3000'),
                   bidirectional = FALSE,
                   polling_period = 10
                 ),
                 ecflow_suite_dir = ecflow_suite_dir,
                 wait = TRUE
                 ) 
```

The cluster components and options are explained next:
- `queue_host`: must match the 'short_name_of_the_host' you associated to the login node of the selected HPC in your .ssh/config file.
- `queue_type`: one of 'slurm', 'pbs' or 'lsf'.
- `temp_dir`: directory on the HPC where to store temporary files. Must be accessible from the HPC login node and all HPC nodes.
- `lib_dir`: directory on the HPC where the startR R package and other required R packages are installed, accessible from all HPC nodes. These installed packages must be compatible with the R module specified in `r_module`. This parameter is optional; only required when the libraries are not installed in the R module.
- `r_module`: name of the UNIX environment module to be used for R. If not specified, 'module load R' will be used.
- `cores_per_job`: number of computing cores to be requested when submitting the job for each chunk to the HPC queue. Each node may be capable of supporting more than one computing thread.
- `job_wallclock`: amount of time to reserve the resources when submitting the job for each chunk. Must follow the specific format required by the specified `queue_type`.
- `max_jobs`: maximum number of jobs (chunks) to be queued simultaneously onto the HPC queue. Submitting too many jobs could overload the bandwidth between the HPC nodes and the storage system, or could overload the queue system.
- `extra_queue_params`: list of character strings with additional queue headers for the jobs to be submitted to the HPC. Mainly used to specify the amount of memory to book for each job (e.g. '#SBATCH --mem-per-cpu=30000'), to request special queuing (e.g. '#SBATCH --qos=bsc_es'), or to request use of specific software (e.g. '#SBATCH --reservation=test-rhel-7.5').
- `bidirectional`: whether the connection between the R workstation and the HPC login node is bidirectional (TRUE) or unidirectional from the workstation to the login node (FALSE).
- `polling_period`: when the connection is unidirectional, the workstation will ask the HPC login node for results every `polling_period` seconds. An excessively small value can overload the login node or result in temporary banning.
- `special_setup`: name of the machine if the computation requires a special setup. Only Marenostrum 4 needs this parameter (e.g. special_setup = 'marenostrum4').

After the `Compute()` call is executed, an EC-Flow server is automatically started on your workstation, which will orchestrate the work and dispatch jobs onto the HPC. Thanks to the use of EC-Flow, you will also be able to visually monitor the progress of the execution. See the "Collect() and the EC-Flow GUI" section.

The following messages will be displayed upon execution:

```r
* ATTENTION: Dispatching chunks on a remote cluster. Make sure
*   passwordless access is properly set in both directions.
ping server(bscearth329:5678) succeeded in 00:00:00.001342  ~1 milliseconds
server is already started
* Processing chunks...
* Remaining time estimate soon...
```

At this point, you may want to check the jobs are being dispatched and executed properly onto the HPC. For that, you can either use the EC-Flow GUI (covered in the next section), or you can `ssh` to the login node of the HPC and check the status of the queue with `squeue` or `qstat`, as shown below.
```
[bsc32734@login4 ~]$ squeue
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
            757026      main /STARTR_ bsc32734  R       0:46      1 s02r2b24
            757027      main /STARTR_ bsc32734  R       0:46      1 s04r1b61
            757028      main /STARTR_ bsc32734  R       0:46      1 s04r1b63
            757029      main /STARTR_ bsc32734  R       0:46      1 s04r1b64
```

Here is the output of the execution after waiting for about a minute:
```r
* Remaining time estimate (neglecting queue and merge time) (at
*   2019-01-28 01:16:59): 0 mins (46.22883 secs per chunk)
* Computation ended successfully.
*   Number of chunks: 4
*   Max. number of concurrent chunks (jobs): 4
*   Requested cores per job: 4
*   Load threads per chunk: 2
*   Compute threads per chunk: 4
*   Total time (s): 58.3834805488586
*     Chunking setup: 1.13428020477295
*     Data upload to cluster: 0
*     All chunks: 52.9015641212463
*     Transfer results from cluster: 4.05326294898987
*     Merge: 0.294373273849487
*     Each chunk: 
*       queue: 
*         mean: 10.5
*         min: 7
*         max: 14
*       job setup: 
*         mean: 3.3917236328125
*         min: 1.16518902778625
*         max: 5.61825823783875
*       load: 
*         mean: 5.77260076999664
*         min: 5.27595114707947
*         max: 6.26925039291382
*       compute: 
*         mean: 23.7037765979767
*         min: 23.4937765598297
*         max: 23.9137766361237
* Computation ended successfully.
Warning messages:
1: ! Warning: Parameter 'ecflow_server' has not been specified but execution on
!cluster has been requested. An ecFlow server instance will
!be created on localhost:5678. 
2: ! Warning: ATTENTION: The source chunks will be removed from the
!system. Store the result after Collect() ends if needed.
```

As you have probably realized, this execution has been slower than the local execution, even though 4 simultaneous jobs have been executed on the HPC. This is due to the small size of the data being processed here. The overhead of queuing and starting jobs on the HPC is large compared to the required computation time for this amount of data. The benefit would be obvious in use cases with larger inputs.

Usually, in use cases with larger data inputs, it will be preferable to add the parameter `wait = FALSE` to your `Compute()` call. With this parameter, `Compute()` will return an object with all the information on your startR execution which you will be able to store on your disk. After doing that, you will be able to close your R session and collect the results later on with the `Collect()` function. This is discussed in the next section.

As mentioned above in the definition of the `cluster` parameters, it is strongly recommended to check the section on "How to choose the number of chunks, jobs and cores".

You can find the `cluster` configuration for other HPCs in the [Compute() cluster templates](#compute-cluster-templates) section at the end of this guide.


### Collect() and the EC-Flow GUI

Usually, in use cases where large data inputs are involved, it is convenient to add the parameter `wait = FALSE` to your `Compute()` call. With this parameter, `Compute()` will immediately return an object with information about your startR execution. You will be able to store this object onto disk. After doing that, you will not need to worry in case your workstation turns off in the middle of the computation. You will be able to close your R session, and collect the results later on with the `Collect()` function.

```r
  res <- Compute(wf,
                 chunks = list(latitude = 2,
                               longitude = 2),
                 threads_load = 2,
                 threads_compute = 4,
                 cluster = list(queue_host = 'nord4.bsc.es',
                   queue_type = 'slurm',
                   temp_dir = '/gpfs/scratch/bsc32/bsc32734/startR_hpc/',
                   cores_per_job = 4,
                   job_wallclock = '00:10:00',
                   max_jobs = 4,
                   extra_queue_params = list('#SBATCH --mem-per-cpu=3000'),
                   bidirectional = FALSE,
                   polling_period = 10
                 ),
                 ecflow_suite_dir = '/home/Earth/aho/startR_local/',
                 wait = FALSE
                 ) 
  saveRDS(res, file = 'test_collect.Rds')
```

At this point, after storing the descriptor of the execution and before calling `Collect()`, you may want to visually check the status of the execution. You can do that with the EC-Flow graphical user interface. You need to open a new terminal, load the EC-Flow module if needed, and start the GUI:

```
module load ecFlow
ecflow_ui &
```

After doing that, a window will pop up. You will be able to monitor the status of your EC-Flow suites there. However, if it is the first time you are using the EC-Flow GUI with startR, you will need to register the EC-Flow server that has been started automatically by `Compute()`. You can open the top menu "Manage servers" > "New server" > set host to 'localhost' > set port to '5678' > save.

Note that the host and port can be adjusted with the parameter `ecflow_server` in `Compute()`, which must be provided in the form `c(host = 'hostname', port = port_number)`.

After registering the EC-Flow server, an expandable entry will appear, where you can see listed the jobs to be executed, one for each chunk, with their status represented by a colour. Gray means pending, blue means queuing, green means in progress, and yellow means completed.

You will see that, if you are running on an HPC where the connection with its login node is unidirectional, the jobs remain blue (queuing). This is because the jobs, upon start or completion, cannot send the signals back. In order to retrieve this information, the `Collect()` function must be called from an R terminal.

```r
library(startR)

res <- readRDS('test_collect.Rds')

result <- Collect(res, wait = TRUE)
```

In this example, `Collect()` has been run with the parameter `wait = TRUE`. This will be a blocking call, in which `Collect()` will retrieve information from the HPC, including signals and outputs, each `polling_period` seconds (as described above). `Collect()` will not return until the results of all chunks have been received. Meanwhile, the status of the EC-Flow workflow on the EC-Flow GUI will be updated periodically and you will be able to monitor the status, as shown in the image below (image taken from another use case).

<img src="inst/doc/ecflow_monitoring.png" width="600" />

Upon completion, `Collect()` returns the merged data array, as seen in the "Compute locally" section.

```r
str(result)
```
```r
List of 1
 $ output1: num [1:2, 1, 1, 1, 1:640, 1:1296] 251 252 249 246 245 ...
 - attr(*, "startR_compute_profiling")=List of 14
  ..$ nchunks          : num 4
  ..$ concurrent_chunks: num 4
  ..$ cores_per_job    : num 4
  ..$ threads_load     : num 2
  ..$ threads_compute  : num 4
  ..$ bychunks_setup   : num 1.16
  ..$ transfer         : num 0
  ..$ queue            : Named num [1:4] 18 18 18 18
  .. ..- attr(*, "names")= chr [1:4] "queue" "queue" "queue" "queue"
  ..$ job_setup        : Named num [1:4] 2.25 2.25 2.25 2.25
  .. ..- attr(*, "names")= chr [1:4] "job_setup" "job_setup" "job_setup" "job_setup"
  ..$ load             : Named num [1:4] 5.64 6.19 5.64 5.62
  .. ..- attr(*, "names")= chr [1:4] "load" "load" "load" "load"
  ..$ compute          : Named num [1:4] 24.5 24.4 24.3 24.4
  .. ..- attr(*, "names")= chr [1:4] "compute" "compute" "compute" "compute"
  ..$ transfer_back    : num 1.72
  ..$ merge            : num 0.447
  ..$ total            : logi NA
```

Note that, when the results are collected with `Collect()` instead of calling `Compute()` with the parameter `wait = TRUE`, it will not be possible to know the total time taken by the entire data processing workflow, but we will still be able to know the timings of most of the stages.

You can also run `Collect()` with `wait = FALSE`. This will fail with an error if the results are not yet available, or will return the merged array otherwise.

`Collect()` also has a parameter called `remove`, which by default is set to `TRUE` and triggers removal of all data results received from the HPC (and stored under `ecflow_suite_dir`). If you would like to preserve the data, you can set `remove = FALSE` and `Collect()` it as many times as desired. Alternatively, you can `Collect()` with `remove = TRUE` and store the merged array right after with `saveRDS()`.
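
A minimal sketch of the second alternative (the file name is illustrative):

```r
result <- Collect(res, wait = TRUE)          # chunk files are removed after collection
saveRDS(result, file = 'merged_result.Rds')  # store the merged outputs yourself
```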

## Additional information

### How to choose the number of chunks, jobs and cores

#### Number of chunks and memory per job

Adjusting the number of chunks, simultaneous jobs, cores and memory per job is crucial for a successful and efficient execution of your startR workflow on the HPC.

The most important choice is that of the number of chunks and memory per job. Your first priority should be to make the data chunks fit in the HPC nodes' RAM memory.

After running the `Start()` call (or calls) for a use case, you will get a message on the terminal with the total size of the involved data. You should divide the data in as many chunks as required to make one of them fit in the blocks of RAM memory on the HPC. If you check the specifications of your target HPC, you will see that each node has a fixed amount of RAM (e.g., for CTE-Power 9, each node has 512GB of RAM). But that is not what you are looking for. You want to know the size of the RAM memory modules (e.g., for CTE-Power 9, the memory modules are 32GB each). startR cannot use memory regions allocated in multiple memory modules.

Knowing the size of the involved data and the size of the memory modules, you can work out the ideal number of chunks with a simple division. If the data weighs 100GB and the memory modules are of 32GB, 4 chunks will be required ideally.

However, we must take into account that the handling of these data chunks is not ideal, and that neither startR nor the functions to be applied to the data will manage the data in a 100% efficient manner. This is why the number of chunks should finally be doubled or tripled.
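
As a quick illustration of this rule of thumb, using the numbers from the explanation above:

```r
data_size    <- 100                                 # GB, as reported by Start()
module_size  <- 32                                  # GB per memory module on the HPC
ideal_chunks <- ceiling(data_size / module_size)    # 4
safe_chunks  <- ideal_chunks * 3                    # 12, allowing for overheads
```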

The amount of memory requested for each job, in this case where the data to be processed is a few times larger than the memory modules, must be equal to the size of the memory modules.

If the data to be processed was smaller than the memory modules, it could still be interesting to split it in chunks in order to parallelize the computation and get a faster result. In that case, the memory requested for each job must be equal to 2 or 3 times the size of one chunk.

#### Maximum number of simultaneous jobs

The maximum number of simultaneous jobs should be adjusted according to the capacity of the file system to deal with multiple simultaneous file accesses, according to the bandwidth of the connection between the HPC nodes and the file system, and according to the number of jobs per user supported/allowed by the HPC queue. A recommended practice is to make a test with a subset of the data and try first with a small number of simultaneous jobs (e.g. 2), and keep repeating the test while increasing the number of simultaneous jobs, until the performance of the process stops improving because of any of the reasons mentioned before.

#### Number of cores and computing threads

The number of cores per job should be as large as possible, with two limitations:
- If each HPC node has a total amount of memory "M", an amount of memory "m" per memory module, and "N" cores, the requested number of cores "n" should be such that "n" / "N" does not exceed "m" / "M" (see the sketch after this list).
- Depending on the shape of the chunks, startR has a limited capability to exploit multiple cores. It is recommended to make small tests, increasing the number of cores, to work out a reasonable number of cores to be requested.
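
A hypothetical illustration of the first limitation, with made-up node characteristics (512GB of total memory per node, 32GB memory modules, 160 cores per node):

```r
M <- 512                    # GB of memory per node (assumption)
m <- 32                     # GB per memory module (assumption)
N <- 160                    # cores per node (assumption)
n_max <- floor(N * m / M)   # request at most 10 cores per job
```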

### How to clean a failed execution

- Work out the startR execution ID, either by inspecting the execution description returned by `Compute()` when called with the parameter `wait = FALSE`, or by checking the `ecflow_suite_dir` with `ls -ltr`.
- ssh to the HPC login node and cancel all jobs of your startR execution.
- Close the R session from where your `Compute()` call was made.
- Remove the folder named with your startR execution ID under the `ecflow_suite_dir`.
- ssh to the HPC login node and remove the folder named with your startR execution ID under the `temp_dir`.
- Optionally remove the data under `data_dir` on the HPC login node if the file system is not shared between the workstation and the HPC and you do not want to keep the data in the `data_dir`, which is used as a cache for future startR executions.
- Open the EC-Flow GUI and remove the workflow entry (a.k.a. suite) named with your startR execution ID with right click > "Remove".

### Visualizing the profiling of the execution

As seen in previous sections, profiling measurements of the execution are provided together with the data output. These measurements can be visualized with the `PlotProfiling()` function made available in the source code of the startR package.

This function has not been included in the official set of functions of the package because it requires a number of heavy plotting libraries which take time to load. Since the startR package is loaded in each of the worker jobs on the HPC, this could imply a substantial amount of time spent repeatedly loading unused visualization libraries during the computing stage.

The function takes as input one 'startR_compute_profiling' attribute object, or a list of them, from one or more `Compute()` executions.
```r
source('https://earth.bsc.es/gitlab/es/startR/raw/master/inst/PlotProfiling.R')
PlotProfiling(attr(res, 'startR_compute_profiling'))
```
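
For example, to compare the profiling of two different `Compute()` executions (here hypothetically stored in `res1` and `res2`), a list of their profiling attributes can be passed:

```r
# Compare the timings of two Compute() executions in a single chart
PlotProfiling(list(attr(res1, 'startR_compute_profiling'),
                   attr(res2, 'startR_compute_profiling')))
```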

The resulting chart displays the timings of the different stages of the computation, as shown in the image below. Note that these results were taken from a use case different from the ones used in this guide.

<img src="inst/doc/compute_profiling.png" width="800" />

You can click on the image to expand it.

### What to do if your function has too many target dimensions

### Pending features
- Adding a feature for `Compute()` to run on multiple HPCs or workstations.
- Adding a plug-in to read CSV files.
- Supporting multiple steps in a workflow defined by `AddStep()`.
- Adding a feature in `Start()` to read sparse grid points.
- Allowing for chunking along "essential" (a.k.a. "target") dimensions.

## Other examples

### Using experimental and (date-corresponding) observational data

```r
repos <- paste0('/esnas/exp/ecmwf/system4_m1/6hourly/',
                '$var$/$var$_$sdate$.nc')

system4 <- Start(dat = repos,
                 var = 'sfcWind',
                 #sdate = paste0(1981:2015, '1101'),
                 sdate = paste0(1981:1984, '1101'),
                 #time = indices((30*4+1):(120*4)),
                 time = indices((30*4+1):(30*4+4)),
                 ensemble = 'all',
                 #ensemble = indices(1:6),
                 #latitude = 'all',
                 latitude = indices(1:10),
                 #longitude = 'all',
                 longitude = indices(1:10),
                 return_vars = list(latitude = NULL,
                                    longitude = NULL,
                                    time = c('sdate')))

repos <- paste0('/esnas/recon/ecmwf/erainterim/6hourly/',
                '$var$/$var$_$file_date$.nc')

dates <- attr(system4, 'Variables')$common$time
dates_file <- sort(unique(gsub('-', '',
                               sapply(as.character(dates), substr, 1, 7))))

erai <-    Start(dat = repos,
                 var = 'sfcWind',
                 file_date = dates_file,
                 time = values(dates),
                 #latitude = 'all',
                 latitude = indices(1:10),
                 #longitude = 'all',
                 longitude = indices(1:10),
                 time_var = 'time',
                 time_tolerance = as.difftime(1, units = 'hours'),
                 time_across = 'file_date',
                 return_vars = list(latitude = NULL,
                                    longitude = NULL,
                                    time = 'file_date'),
                 merge_across_dims = TRUE,
                 split_multiselected_dims = TRUE)

# 'eqmcv_atomic' is assumed to be an atomic bias-adjustment function
# (empirical quantile mapping) already defined in the user's environment;
# it is not included in this example.
step <- Step(eqmcv_atomic,
             list(a = c('ensemble', 'sdate'),
                  b = c('sdate')),
             list(c = c('ensemble', 'sdate')))

res <- Compute(step, list(system4, erai),
               chunks = list(latitude = 5,
                             longitude = 5,
                             time = 2),
               cluster = list(queue_host = 'bsceslogin01.bsc.es',
                              max_jobs = 4,
                              cores_per_job = 2),
               shared_dir = '/esnas/scratch/nmanuben/test_bychunk',
               wait = FALSE) 
```
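
Since this `Compute()` call uses `wait = FALSE`, the returned object is only a descriptor of the ongoing execution. As described in the Collect() section of this guide, the actual outputs can be gathered later; a minimal sketch, assuming the default behaviour of `Collect()`:

```r
# Wait for all chunks to finish and merge them into the final output arrays
result <- Collect(res, wait = TRUE)
```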

### Example on MareNostrum 4

```r
library(startR)

repos <- paste0('/esarchive/exp/ecmwf/system5_m1/6hourly/',
                '$var$-longitudeS1latitudeS1all/$var$_$sdate$.nc')
# Slower alternative, using files with a less efficient NetCDF 
# compression configuration
#repos <- '/esarchive/exp/ecmwf/system5_m1/6hourly/$var$/$var$_$sdate$.nc'

data <- Start(dat = repos,
              var = 'tas',
              sdate = indices(1),
              ensemble = 'all',
              time = 'all',
              latitude = indices(1:40),
              longitude = indices(1:40),
              retrieve = FALSE)
lons <- attr(data, 'Variables')$common$longitude
lats <- attr(data, 'Variables')$common$latitude

fun <- function(x) apply(x + 1, 2, mean)
step <- Step(fun, c('ensemble', 'time'), c('time'))
wf <- AddStep(data, step)

res <- Compute(wf,
               chunks = list(latitude = 2,
                             longitude = 2),
               threads_load = 1,
               threads_compute = 2,
               cluster = list(queue_host = 'mn2.bsc.es',
                              queue_type = 'slurm',
                              data_dir = '/gpfs/projects/bsc32/share/startR_data_repos/',
                              temp_dir = '/gpfs/scratch/pr1efe00/pr1efe03/startR_tests/',
                              r_module = 'R/3.4.0',
                              cores_per_job = 2,
                              job_wallclock = '00:10:00',
                              max_jobs = 4,
                              extra_queue_params = list('#SBATCH --qos=prace'),
                              bidirectional = FALSE,
                              polling_period = 10,
                              special_setup = 'marenostrum4'
                             ),
               ecflow_suite_dir = '/home/Earth/nmanuben/test_remove/',
               ecflow_server = NULL,
               silent = FALSE,
               debug = FALSE,
               wait = TRUE)
```
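
As in the other examples, the returned object contains the output arrays together with profiling information; a quick sketch of how to inspect it:

```r
# Dimensions of the first (and only) output of the workflow
dim(res$output1)

# Timings of the different stages of the execution
str(attr(res, 'startR_compute_profiling'))
```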

### Example on CTE-Power using GPUs

### Seasonal forecast verification example on cca

```r
crps <- function(x, y) {
  mean(SpecsVerification::EnsCrps(x, y, R.new = Inf))
}

library(startR)

repos <- '/perm/ms/spesiccf/c3ah/qa4seas/data/seasonal/g1x1/ecmf-system4/msmm/atmos/seas/tprate/12/ecmf-system4_msmm_atmos_seas_sfc_$date$_tprate_g1x1_init12.nc'

data <- Start(dat = repos,
              var = 'tprate',
              date = 'all',
              time = 'all',
              number = 'all',
              latitude = 'all',
              longitude = 'all',
              return_vars = list(time = 'date'))

dates <- attr(data, 'Variables')$common$time

repos <- '/perm/ms/spesiccf/c3ah/qa4seas/data/ecmf-ei_msmm_atmos_seas_sfc_19910101-20161201_t2m_g1x1_init02.nc'

obs <- Start(dat = repos,
             var = 't2m',
             time = values(dates),
             latitude = 'all',
             longitude = 'all',
             split_multiselected_dims = TRUE)

s <- Step(crps, target_dims = list(c('date', 'number'), c('date')),
                output_dims = NULL)
wf <- AddStep(list(data, obs), s)

r <- Compute(wf,
             chunks = list(latitude = 10,
                           longitude = 3),
             cluster = list(queue_host = 'cca',
                            queue_type = 'pbs',
                            max_jobs = 10,
                            init_commands = list('module load ecflow'),
                            r_module = 'R/3.3.1',
                            extra_queue_params = list('#PBS -l EC_billing_account=spesiccf')),
             ecflow_output_dir = '/perm/ms/spesiccf/c3ah/startR_test/',
             is_ecflow_output_dir_shared = FALSE
            )
```

## Compute() cluster templates

### Nord3-v2

```r
cluster = list(queue_host = 'nord4.bsc.es',
               queue_type = 'slurm',
               temp_dir = '/gpfs/scratch/bsc32/bsc32734/startR_hpc/',
               cores_per_job = 2,
               job_wallclock = '01:00:00',
               max_jobs = 4,
               bidirectional = FALSE,
               polling_period = 10
               )
```
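
These cluster configurations are passed as the `cluster` argument of `Compute()`. A minimal sketch using the Nord3-v2 template above (the workflow `wf`, the chunking and the `ecflow_suite_dir` are placeholders to be adapted to your use case):

```r
# 'wf' is a workflow built with AddStep(); chunking and paths are placeholders
res <- Compute(wf,
               chunks = list(latitude = 2, longitude = 2),
               cluster = list(queue_host = 'nord4.bsc.es',
                              queue_type = 'slurm',
                              temp_dir = '/gpfs/scratch/bsc32/bsc32734/startR_hpc/',
                              cores_per_job = 2,
                              job_wallclock = '01:00:00',
                              max_jobs = 4,
                              bidirectional = FALSE,
                              polling_period = 10),
               ecflow_suite_dir = '/home/Earth/<user>/startR_local/',
               wait = TRUE)
```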

### Nord3 (deprecated)

```r
cluster = list(queue_host = 'nord1.bsc.es',
               queue_type = 'lsf',
               data_dir = '/gpfs/projects/bsc32/share/startR_data_repos/',
               temp_dir = '/gpfs/scratch/bsc32/bsc32473/startR_hpc/',
               init_commands = list('module load intel/16.0.1'),
               cores_per_job = 2,
               job_wallclock = '00:10',
               max_jobs = 4,
               extra_queue_params = list('#BSUB -q bsc_es'),
               bidirectional = FALSE,
               polling_period = 10
              )
```

### CTE-Power9

```r
cluster = list(queue_host = 'p9login1.bsc.es',
               queue_type = 'slurm',
               temp_dir = '/gpfs/scratch/bsc32/bsc32473/startR_hpc/',
               cores_per_job = 4,
               job_wallclock = '00:10:00',
               max_jobs = 4,
               bidirectional = FALSE,
               polling_period = 10
              )
```

### BSC ES fat nodes (deprecated)

```r
cluster = list(queue_host = 'bsceslogin01.bsc.es',
               queue_type = 'slurm',
               cores_per_job = 2,
               job_wallclock = '00:10:00',
               max_jobs = 4,
               bidirectional = TRUE
              )
```

### Marenostrum 4

```r
cluster = list(queue_host = 'mn2.bsc.es',
               queue_type = 'slurm',
               data_dir = '/gpfs/projects/bsc32/share/startR_data_repos/',
               temp_dir = '/gpfs/scratch/pr1efe00/pr1efe03/startR_hpc/',
               r_module = 'R/3.4.0',
               cores_per_job = 2,
               job_wallclock = '00:10:00',
               max_jobs = 4,
               extra_queue_params = list('#SBATCH --qos=prace'),
               bidirectional = FALSE,
               polling_period = 10,
               special_setup = 'marenostrum4'
              )
```

### MinoTauro

```r
cluster = list(queue_host = 'mt1.bsc.es',
               queue_type = 'slurm',
               data_dir = '/gpfs/projects/bsc32/share/startR_data_repos/',
               temp_dir = '/gpfs/scratch/bsc32/bsc32473/startR_hpc/',
               r_module = 'R/3.3.3',
               cores_per_job = 2,
               job_wallclock = '00:10:00',
               max_jobs = 4,
               extra_queue_params = list('#SBATCH --qos=bsc_es'),
               bidirectional = FALSE,
               polling_period = 10,
               special_setup = 'marenostrum4'
              )
```

### ECMWF ecgate (workstation) + cca/ccb (HPC)

```r
cluster = list(queue_host = 'cca',
               queue_type = 'pbs',
               max_jobs = 10,
               init_commands = list('module load ecflow'),
               r_module = 'R/3.3.1',
               extra_queue_params = list('#PBS -l EC_billing_account=spesiccf')
              )
```