|
Auto-S2S is the GitLab repository for the ESS Verification Suite, a modular tool for subseasonal to seasonal to decadal forecast verification workflows. It is intended to have a modularized structure, where each module is a separate part of the code that performs a specific task, so that parts of the workflow can be skipped or reordered.
|
|
The datasets, forecast horizon, time period, skill metrics to compute and other parameters are specified by the user in a configuration file, called "recipe".
|
|
|
|
|
|
- Modules currently available: Loading, Calibration, Anomalies, Skill, Saving, Visualization
|
|
- Modules in development: Downscaling, Scorecards
|
|
- Future modules: Aggregation, Indicators
|
|
|
|
|
|
This tool is in the early stages of development, so the code and the information in this wiki may be subject to frequent changes and updates. This wiki contains all the information needed to use the available modules.
|
In order to use the Verification Suite, users must define a recipe containing all the required parameters.
|
|
|
|
|
Here is an example of a recipe to load monthly mean ECMWF System 5 data from `/esarchive/`, with a 1993 to 2016 hindcast period, the corresponding ERA5 observations, and a 2020 forecast for the November initialization, for the months of November and December.
|
|
The observations will be interpolated to the experiment grid (Regrid type: 'to_system') using bilinear interpolation. The hindcast and forecast will be calibrated using Quantile Mapping, and the Ranked Probability Skill Score (RPSS) and Continuous Ranked Probability Skill Score (CRPSS) will be computed.
|
|
The terciles (1/3, 2/3), quartiles (1/4, 2/4, 3/4), extremes (1/10, 9/10) and their corresponding probability bins will also be computed. Any output files will be saved inside the output directory.
|
|
|
|
|
|
```yaml
|
|
Description:
|
Analysis:
|
freq: monthly_mean # Mandatory, str: 'monthly_mean' or 'daily_mean'
|
|
Datasets:
|
|
System:
|
|
name: ECMWF-SEAS5 # Mandatory, str: System name.
|
|
Multimodel: no # Mandatory, bool: Either yes/true or no/false
|
|
Reference:
|
|
name: ERA5 # Mandatory, str: Reference name.
|
|
Time:
|
|
sdate: '1101' # Mandatory, int: Start date, 'mmdd'
|
|
fcst_year: '2020' # Optional, int: Forecast initialization year 'YYYY'
|
|
hcst_start: '1993' # Mandatory, int: Hindcast initialization start year 'YYYY'
|
|
hcst_end: '2016' # Mandatory, int: Hindcast initialization end year 'YYYY'
|
|
ftime_min: 1 # Mandatory, int: First forecast time step in months. Starts at "1".
|
|
ftime_max: 6 # Mandatory, int: Last forecast time step in months. Starts at "1".
|
|
Region:
|
|
latmin: -10 # Mandatory, int: minimum latitude
|
|
latmax: 10 # Mandatory, int: maximum latitude
|
|
Workflow:
|
|
Calibration:
|
|
method: mse_min # Mandatory, str: Calibration method.
|
|
|
|
Anomalies:
|
|
|
|
compute: no # Mandatory, bool: Either yes/true or no/false
|
|
|
|
cross_validation: no # Mandatory if 'compute: yes', bool: Either yes/true or no/false
|
|
Skill:
|
|
metric: RPSS CRPSS # Mandatory, str: List of skill metrics.
|
|
Probabilities:
|
percentiles: [[1/3, 2/3], [1/4, 2/4, 3/4], [1/10, 9/10]] # Optional: lists of percentile thresholds,
|
# enclosed within brackets.
|
|
Indicators:
|
|
index: no # This feature is not implemented yet
|
|
ncores: 10 # Optional, int: number of cores to be used in parallel computation.
|
|
# If left empty, defaults to 1.
|
|
remove_NAs: TRUE # Optional, bool: Whether to remove NAs.
|
|
# If left empty, defaults to FALSE.
|
|
Output_format: S2S4E # This feature is not implemented yet
```
|
Here is a list of the datasets that can currently be loaded by the tool.
|
### Seasonal datasets
|
|
|
|
|
|
Systems:
|
|
| Forecast System            | Monthly mean    | Daily mean |
|----------------------------|-----------------|------------|
| **ECMWF-SEAS5**            | Yes             | Yes        |
| **DWD-GFCS2.1**            | Yes             | No         |
| **CMCC-SPS3.5**            | Yes             | No         |
| **Meteo-France-System 7**  | Yes             | No         |
| **JMA-CPS2**               | Yes             | No         |
| **ECCC-CanCM4i**           | May to November | No         |
| **UK-MetOffice-GloSea600** | Yes             | No         |
| **NCEP-CFSv2**             | Yes             | No         |
|
|
|
|
|
|
Observations:
|
|
| Reference     | Monthly mean | Daily mean |
|---------------|--------------|------------|
| **ERA5**      | Yes          | Yes        |
| **ERA5-Land** | `tas` only   | Yes        |
| **UERRA**     | No           | `tas` only |
|
|
|
|
|
|
### Decadal datasets
|
|
|
|
|
|
|
|
|
|
In order to run the ESS Verification Suite, you need to load the necessary modules. To do this, you can run the command `source MODULES` in the terminal, from the main folder of the Auto-S2S repository.
|
|
|
|
|
|
Before calling the modules in your script or in the R console, you should run the `prepare_outputs()` function as shown in the [example script](https://earth.bsc.es/gitlab/es/auto-s2s/-/snippets/96), which will read your recipe and set up the directory for your outputs.
|
|
|
|
`prepare_outputs()` will perform a check on your recipe to detect potential errors. If you want to disable this check, you may set the argument `disable_checks = TRUE` when calling the function.
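A minimal sketch of this step, assuming the recipe path is passed as the first argument (see the linked example script for the exact signature):

```r
# Read the recipe and set up the output directory.
# NOTE: the argument names here are illustrative; check the example
# script for the exact signature of prepare_outputs().
recipe <- prepare_outputs("recipe-wiki.yml")

# To skip the recipe checks (not recommended):
# recipe <- prepare_outputs("recipe-wiki.yml", disable_checks = TRUE)
```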
|
|
|
|
|
|
If you had a recipe named `recipe-wiki.yml` that looked like the example in this wiki, this directory might look something like this:
|
|
|
|
|
Inside you will find a log file, a copy of your recipe, and your outputs from the modules you have run.
|
|
|
|
|
The Loading module retrieves the data requested in the recipe from `/esarchive/`, interpolates it to the desired grids if interpolation has been requested, and converts it to objects of class `s2dv_cube`, which can be passed on to the other modules in the tool. An `s2dv_cube` object is a list containing the data array in the element `$data`, along with many other elements that store the metadata.
|
|
|
|
|
|
The output of the main function, `load_datasets()`, is a list containing the hindcast, observations and forecast, named `hcst`, `obs` and `fcst` respectively. `fcst` will be `NULL` if no forecast years have been requested.
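As a sketch (the exact way the recipe is passed to `load_datasets()` may differ; see the example script):

```r
# Load the datasets specified in the recipe.
data <- load_datasets(recipe)

# 'data' is a list of s2dv_cube objects:
# data$hcst  - hindcast
# data$obs   - observations
# data$fcst  - forecast (NULL if no fcst_year was requested)
dim(data$hcst$data)  # dimensions of the hindcast data array
```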
|
|
|
|
|
|
### Regridding
|
|
|
|
|
|
|
|
|
|
## Calibration module
|
|
|
|
|
|
The Calibration module performs bias correction on the loaded data. It accepts the output of the Loading module as input, and also requires the recipe. It applies a calibration method to the hindcast and forecast data using the observations as a reference, and returns the calibrated data and its metadata as an `s2dv_cube` object.
|
|
|
|
|
|
The output of the main function, `calibrate_datasets()`, is a list containing the calibrated hindcast and forecast, named `hcst` and `fcst` respectively. `fcst` will be `NULL` if no forecast years have been requested.
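A minimal sketch, assuming the recipe and the Loading output are passed in that order (the argument order is illustrative):

```r
# Calibrate the loaded hindcast and forecast using the method
# specified in the recipe (mse_min in the example recipe).
calibrated_data <- calibrate_datasets(recipe, data)

# calibrated_data$hcst and calibrated_data$fcst are s2dv_cube objects;
# calibrated_data$fcst is NULL if no forecast years were requested.
```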
|
|
|
|
|
|
- Monthly data: `'bias'`, `'evmos'`, `'mse_min'`, `'crps_min'`, and `'rpc-based'`.
|
|
For more details, see the [CSTools documentation](https://CRAN.R-project.org/package=CSTools) for `CST_Calibration()`.
|
|
|
|
|
|
|
|
## Anomalies module
|
|
|
|
|
|
|
|
The Anomalies module computes the anomalies of the data with respect to the climatological mean, with or without cross-validation, depending on what is specified in the recipe. It accepts the output of either the Loading or the Calibration module as input, and also requires the recipe. It makes use of the CSTools function `CST_Anomaly()`.
|
|
|
|
|
|
|
|
The output of the main function, `compute_anomalies()`, is a list of `s2dv_cube` objects containing the anomalies for the hcst, fcst and obs, as well as the original hcst and obs full fields in case they are needed for later computations.
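A minimal sketch, assuming the same argument order as the other modules (illustrative):

```r
# Compute anomalies with respect to the climatological mean,
# with or without cross-validation depending on the recipe.
anomalies <- compute_anomalies(recipe, data)

# The result contains the hcst, fcst and obs anomalies, plus the
# original hcst and obs full fields for later computations.
```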
|
|
|
|
|
|
## Skill module
|
|
|
|
|
|
The Skill module is the part of the workflow that computes the metrics to assess the quality of a forecast. It accepts the output of the Calibration module as input, and also requires the recipe. It consists of two main functions:
|
For example, if the extremes (1/10, 9/10) are requested, the output will be:
|
- `prob_10_to_90`: Probability of falling between the 10th and 90th percentile.
|
|
- `prob_a90`: Probability of falling above the 90th percentile.
|
|
|
|
|
|
|
|
`$probs_fcst`:
|
|
|
|
- `prob_b10`: Probability of falling below the 10th percentile.
|
|
|
|
- `prob_10_to_90`: Probability of falling between the 10th and 90th percentile.
|
|
|
|
- `prob_a90`: Probability of falling above the 90th percentile.
|
|
|
|
|
|
**Note**: When naming the variables, the probability thresholds are converted to percentiles and rounded to the nearest integer to avoid dots in variable or file names. However, this is just a naming convention; the computations are performed based on the original thresholds specified in the recipe.
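For instance, the threshold 1/3 becomes percentile 33 in the variable name:

```r
# Illustration of the naming convention: thresholds are converted
# to percentiles and rounded to the nearest integer.
paste0("prob_b", round(100 * 1/3))   # "prob_b33"
paste0("prob_a", round(100 * 9/10))  # "prob_a90"
```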
|
|
|
|
|
|
## Saving module
|
|
|
|
|
|
The Saving module contains several functions that export the data (the calibrated hindcast and forecast, the corresponding observations, the skill metrics, percentiles and probabilities) to netCDF files and save them.
|
|
|
|
|
|
`save_data()` serves as the main wrapper function for this module. `recipe` (the recipe) and `data` (the list of `s2dv_cube` objects containing at least the processed hindcast and observation data) are mandatory arguments. The rest of the arguments (`skill_metrics` and `probabilities`) are optional. Including `skill_metrics` and `probabilities` will save the skill metrics and the percentiles and probability bins, respectively.
|
|
|
|
|
|
|
|
|
|
|
|
```R
|
|
# Therefore, all the data can be saved at once:
|
|
save_data(recipe, data, skill_metrics, probabilities)
|
|
|
|
|
|
# Or, one can choose to save only some of it. For example,
|
|
# saving the calibrated data and corresponding observations only:
|
|
save_data(recipe, data)
|
|
|
|
|
|
# Or saving the hcst/fcst/obs and skill metrics only:
|
|
save_data(recipe, data, skill_metrics = skill_metrics)
|
|
|
|
|
|
|
|
# Or only hcst/fcst/obs and the probabilities:
|
|
|
|
save_data(recipe, data, probabilities = probabilities)
|
|
```
|
|
|
|
More flexibility will be added in future releases.
|
|
|
|
|
|
### The structure of the output directory
|
|
|
|
|
|
The outputs are saved to a unique folder inside the directory you specified in the recipe, under outputs/. The structure of the subdirectories and file names will depend on the option you specify for '`Output_format`' in the recipe.
|
|
|
|
|
|
There are two possible output formats currently: 'S2S4E' and 'Scorecards', described below. To request the inclusion of an additional output format, please open an issue.
|
|
|
|
|
|
#### S2S4E (default)
|
|
|
|
With the 'S2S4E' output format, the structure of the subdirectory and the files is as follows:
|
|
|
|
|
|
If `fcst_year` has been requested:
|
|
|
|
- `output_dir/outputs/<system>/<calibration_method>-<frequency>/<forecast_date>/<var>/`
|
|
If `fcst_year` is empty:
|
|
- For seasonal data: `output_dir/outputs/<system>/<calibration_method>-<frequency>/hcst-<mmdd>/<var>/`
|
|
|
|
- For decadal data: `output_dir/outputs/<system>/<calibration_method>-<frequency>/hcst-<yyyy>_<yyyy>/<var>/`
|
|
|
|
|
|
Please take this structure into account when defining `Run:output_dir`, to avoid unintentionally overwriting previous data.
|
|
|
|
For example, in our example recipe, the final output directory will be:
|
|
|
|
|
|
`/esarchive/scratch/vagudets/repos/auto-s2s/out-logs/recipe-wiki_20221025164151/outputs/mse_min-monthly_mean/20201101/tas/`
|
|
|
|
|
|
The structure of the names of the files is:
|
|
|
|
- Skill metrics: `<var>-skill_month<mm>.nc`
|
|
- Correlation: `<var>-corr_month<mm>.nc`
|
|
|
|
- Processed hindcast and forecast data: `<var>_<yyyymmdd>.nc`
|
|
|
|
- Observations: `<var>-obs_<yyyymmdd>.nc`
|
|
|
|
- Probabilities: `<var>-probs_<yyyymmdd>.nc`
|
|
|
|
- Percentiles: `<var>-percentiles_month<mm>.nc`
|
|
|
|
|
|
Where `var` is the name of the variable, `yyyy` is the year, `mm` is the initialization month and `dd` is the initialization day.
|
|
|
|
|
|
#### Scorecards
|
|
|
|
For the 'Scorecards' output format, the structure of the subdirectory and the files is:
|
|
|
|
|
|
- Skill metrics: `<system>/<variable>/scorecards_<system>-<reference>_<variable>-skill_<startyear>_<endyear>_s<mm>.nc`
|
|
|
|
- Processed hindcast and forecast data: `<system>/<variable>/scorecards_<system>-<reference>_<variable>_<yyyymmdd>_<startyear>_<endyear>_s<mm>.nc`
|
|
|
|
- Observations: `<system>/<variable>/scorecards_<system>-<reference>_<variable>-obs_<yyyymmdd>_<startyear>_<endyear>_s<mm>.nc`
|
|
|
|
- Probabilities: `<system>/<variable>/scorecards_<system>-<reference>_<variable>-probs_<yyyymmdd>_<startyear>_<endyear>_s<mm>.nc`
|
|
|
|
- Percentiles: `<system>/<variable>/scorecards_<system>-<reference>_<variable>-percentiles_<startyear>_<endyear>_s<mm>.nc`
|
|
|
|
|
|
|
|
`system` and `reference` are abbreviations of the names of the system and reference, respectively.
|
|
|
|
`var` is the name of the variable, `startyear` and `endyear` are the initial and final hindcast years, `yyyymmdd` is the start date for each year and `mm` is the initialization month.
|
|
|
|
|
|
## Visualization module
|
|
|
|
|
|
The Visualization module provides a few basic plots to visualize the data loaded and computed using the previous modules. The color palettes and titles of the plots will vary depending on the output format requested in the recipe. The palettes are generated using the `clim.colors()` function from the s2dv package.
|
|
|
|
- For 'S2S4E', the "bluered" palette is used.
|
|
|
|
- For 'Scorecards', "purpleorange" is used.
|
|
|
|
|
|
|
|
**plot_data()** is the main wrapper function for this module. It generates the plots and saves them to the same output directory described in the Saving module section, under `plots/`.
|
|
|
|
|
|
This function's parameters are similar to those in the Saving module:
|
|
|
|
|
|
`recipe` (the recipe) and `data` (the list obtained from the Loading module) are mandatory arguments. The rest of the arguments (`skill_metrics`, `probabilities`, and `significance`) are optional:
|
|
- `skill_metrics`: List in the format of the Skill module output, containing the skill metrics as named arrays.
|
|
- `probabilities`: List in the format of the Skill module output, containing the 33rd and 66th percentiles.
|
|
|
|
- `significance`: If set to `TRUE`, the statistical significance dots will be displayed in the plots, when available. It defaults to `FALSE`.
|
|
|
|
|
|
`plot_data()` attempts to generate:
|
|
|
|
|
|
- Plots of all the skill metrics provided in `skill_metrics`.
|
|
- A plot of the forecast ensemble mean, if a forecast has been provided.
|
|
- A Most Likely Terciles plot, if the `probabilities` include the terciles (percentiles 33 and 66).
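For example, assuming the outputs of the previous modules are available:

```r
# Generate all available plots. All arguments except 'recipe' and
# 'data' are optional.
plot_data(recipe, data,
          skill_metrics = skill_metrics,
          probabilities = probabilities,
          significance = TRUE)
```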
|
|
|
|
|
|
The three functions that `plot_data()` calls can also be called independently. In this case, the archive configuration file and the output directory for the plots will have to be provided explicitly:
|
|
|
|
|
|
**plot_skill_metrics(recipe, archive, data_cube, skill_metrics, outdir, significance = F)**: Generates, for each metric in skill_metrics, a figure with one plot per time step, and saves each figure to the output directory `outdir` as `<metric>.png`.
|
|
|
|
|
|
`data_cube` is an s2dv_cube containing the appropriate metadata, for example the hcst object from the Loading module.
|
|
|
|
|