Observations:

| **HadCRUT4** | Yes | No |
| **HadSLP2** | Yes | No |

## The first steps

In order to run the ESS Verification Suite, you need to load the necessary modules. To do this, run the command `source MODULES` in the terminal from the main folder of the Auto-S2S repository.

Before calling the modules in your script or in the R console, you should run the `prepare_outputs()` function as shown in the [example script](https://earth.bsc.es/gitlab/es/auto-s2s/-/snippets/96), which will read your recipe and set up the directory for your outputs.

If you had a recipe named `recipe-wiki.yml` that looked like the example in this wiki, this directory might look something like this:

`/esarchive/scratch/vagudets/repos/auto-s2s/out-logs/recipe-wiki_20221025164151`

Inside you will find a log file, a copy of your recipe, and your outputs from the Saving and Visualization modules.
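
The unique folder name combines the recipe filename with a creation timestamp. A minimal sketch of that pattern in R (illustrative only; in practice `prepare_outputs()` builds this name for you):

```R
# Illustrative sketch: how the unique output folder name is composed,
# assuming the <recipe-name>_<YYYYmmddHHMMSS> pattern shown above.
recipe_name <- "recipe-wiki"
timestamp <- format(Sys.time(), "%Y%m%d%H%M%S")
folder_name <- paste0(recipe_name, "_", timestamp)
# e.g. "recipe-wiki_20221025164151"
```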

## Loading module

The Loading module retrieves the data requested in the recipe from /esarchive/, interpolates it to the desired grid if interpolation has been requested, and converts it into objects of class `s2dv_cube`, which can be passed on to the other modules in the tool. An `s2dv_cube` object is a list containing the data array in the element `$data`, along with many other elements that store the metadata.
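
As a rough illustration, an `s2dv_cube`-like structure can be mocked up as a plain list. This is a minimal sketch, not the real class; the element names other than `$data` are hypothetical:

```R
# Minimal mock of an s2dv_cube-like structure (illustrative only):
cube <- list(
  data = array(rnorm(2 * 3 * 4),
               dim = c(sdate = 2, latitude = 3, longitude = 4)),
  lat = c(40, 41, 42),              # hypothetical coordinate vectors
  lon = c(0, 1, 2, 3),
  Variable = list(varName = "tas")  # hypothetical metadata element
)
dim(cube$data)  # named dimensions of the data array
```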

Including the `calibrated_data` parameter will save the calibrated datasets.
```R
# Therefore, all the data can be saved at once:
save_data(recipe, data, calibrated_data, skill_metrics, probabilities)

# Or, one can choose to save only some of it. For example,
# saving the calibrated hcst/fcst and corresponding observations only:
save_data(recipe, data, calibrated_data = calibrated_data)

# Or saving the skill metrics only:
save_data(recipe, data, skill_metrics = skill_metrics)
```

### The structure of the output directory

The outputs are saved to a unique folder inside the directory you specified in the recipe, under `outputs/`. Their structure is as follows:

If `fcst_year` has been requested:

`output_dir/outputs/<calibration_method>-<frequency>/<forecast_date>/<var>/`

If `fcst_year` is empty:

- For seasonal data:

`output_dir/outputs/<calibration_method>-<frequency>/hcst-<mmdd>/<var>/`

- For decadal data:

`output_dir/outputs/<calibration_method>-<frequency>/hcst-<yyyy>_<yyyy>/<var>/`

Please take this structure into account when defining `Run:output_dir`, to avoid unintentionally overwriting previous data.

For example, with the recipe in this wiki, the final output directory would be:

`/esarchive/scratch/vagudets/repos/auto-s2s/out-logs/recipe-wiki_20221025164151/outputs/mse_min-monthly_mean/20201101/tas/`
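
This path can be decomposed into the pattern described above. A sketch in R, with the component values taken from the example recipe (illustrative only; the suite composes this path for you):

```R
# Sketch: composing the final output path from its components,
# following the directory structure described above.
output_dir <- "/esarchive/scratch/vagudets/repos/auto-s2s/out-logs/recipe-wiki_20221025164151"
calibration_method <- "mse_min"
frequency <- "monthly_mean"
forecast_date <- "20201101"
var <- "tas"
final_dir <- file.path(output_dir, "outputs",
                       paste0(calibration_method, "-", frequency),
                       forecast_date, var)
```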

The calibrated hindcast and forecast are saved in files named `<var>_<yyyymmdd>.nc`, where `var` is the name of the variable and `yyyymmdd` is the initialization date. There is one file per year loaded. The observations are saved in the same format, in files named `<var>-obs_<yyyymmdd>.nc`.
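
For instance, a hypothetical 1993-1995 hindcast with a November 1 start date would produce file names like these (a sketch of the naming pattern, not actual suite code):

```R
# Illustrative: file names following the <var>_<yyyymmdd>.nc pattern,
# for a hypothetical 1993-1995 hindcast initialized on November 1.
var <- "tas"
years <- 1993:1995
hcst_files <- sprintf("%s_%d1101.nc", var, years)     # "tas_19931101.nc", ...
obs_files  <- sprintf("%s-obs_%d1101.nc", var, years) # "tas-obs_19931101.nc", ...
```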

The file containing the requested quantiles is named `<var>-percentiles_month<mm

The Visualization module provides a few basic plots to visualize the data loaded and computed using the previous modules.

**plot_data()** is the main wrapper function for this module. It generates the plots and saves them to the output directory, under `plots/`, as:

`output_dir/outputs/<calibration_method>-<frequency>/<forecast_date>/<var>/plots`

This function's parameters are similar to those in the Saving module:

`recipe` (the recipe) and `data` (the list obtained from the Loading module) are mandatory arguments. The rest of the arguments (`calibrated_data`, `skill_metrics`, `probabilities`, `archive` and `significance`) are optional:

- `calibrated_data`: List containing the calibrated hindcast and forecast as s2dv_cube objects.
- `skill_metrics`: List in the format of the Skill module output, containing the skill metrics as named arrays.
- `probabilities`: List in the format of the Skill module output, containing the 33rd and 66th percentiles.
- `archive`: List containing the configuration parameters for the datasets. Defaults to the archives in the conf/ folder.
- `significance`: If `TRUE`, the statistical significance dots will be displayed in the plot, when available. It defaults to `FALSE`.
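
Putting these arguments together, a call might look like this (a hedged sketch based on the parameter list above; it requires the recipe and the objects produced by the previous modules, so it is not runnable on its own):

```R
# Sketch of a plot_data() call using the arguments described above.
plot_data(recipe, data,
          calibrated_data = calibrated_data,
          skill_metrics = skill_metrics,
          probabilities = probabilities,
          significance = TRUE)
```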

plot_data() attempts to generate: