The Loading module retrieves the data requested in the recipe from /esarchive/,

The output of the main function, `Loading()`, is a list containing the hindcast, observations and forecast, named `hcst`, `obs` and `fcst` respectively. `fcst` will be `NULL` if no forecast years have been requested.

### How to call it

```r
data <- Loading(recipe = recipe)
```

### Regridding

The Loading module can interpolate the data while loading them, using CDO as the core tool. The interpolation methods that can be specified in the recipe under `Regrid:method` are those accepted by CDO: `'conservative'`, `'bilinear'`, `'bicubic'`, `'distance-weighted'`, `'con2'`, `'laf'` and `'nn'`. Consult the CDO User Guide for more details.

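As an illustration, a bilinear interpolation could be requested in the recipe like this. Only `Regrid:method` is documented here; the `type` key and its value are hypothetical placeholders shown for context:

```yaml
Regrid:
  # Any CDO-accepted method: conservative, bilinear, bicubic,
  # distance-weighted, con2, laf or nn
  method: bilinear
  # Hypothetical companion setting for the target grid; consult the
  # recipe documentation for the actual key and accepted values
  type: to_system
```
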
For the time being, unit transformation is only available for temperature, precipitation

The output of the function, `Units()`, is a list containing the hindcast, observations and forecast, named `hcst`, `obs` and `fcst` respectively. `fcst` will be `NULL` if no forecast years have been requested.

### How to call it

```r
data <- Units(recipe = recipe, data = data)
```

## Orography correction

The `orography_correction()` function performs orographic temperature correction between the experiment (`data$hcst`, `data$fcst`) and reference (`data$obs`) datasets. To apply this, the orography files for the system and reference datasets are loaded. This correction is relevant for the `mean_bias` metric.

### How to call it

```r
data <- orography_correction(recipe = recipe, data = data)
```

## Calibration module

The Calibration module performs bias correction on the loaded data.

The output of the main function, `Calibration()`, is a list containing the calibrated hindcast and forecast, named `hcst` and `fcst` respectively. `fcst` will be `NULL` if no forecast years have been requested.

### How to call it

```r
data <- Calibration(recipe = recipe, data = data)
```

### Calibration methods currently available:

The calibration method can be requested in the `Workflow:Calibration:method` section of the recipe. **The user can only request one calibration method per recipe.** This is a list of the methods currently available:

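For example, a single method might be requested as follows; `mse_min` is shown purely as an illustrative placeholder, so pick one method from the list of available methods:

```yaml
Workflow:
  Calibration:
    method: mse_min   # illustrative placeholder; only one method per recipe
```
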
The output of the main function, `Anomalies()`, is a list of `s2dv_cube` objects

If cross-validation is chosen, leave-one-out cross-validation will be applied. The cross-validation option is only available when the hindcast and the observations share the same grid.

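A minimal sketch of how this could look in the recipe; the key names under `Workflow:Anomalies` are assumptions here, so check them against the recipe template:

```yaml
Workflow:
  Anomalies:
    compute: yes           # assumed key name
    cross_validation: yes  # leave-one-out; requires hcst and obs on the same grid
```
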
### How to call it

```r
data <- Anomalies(recipe = recipe, data = data)
```

### Recipe template:

```yaml
Additionally, forecast data can be downscaled. In this case, while the hindcast

The output of the main function, `Downscaling()`, is a list containing the downscaled hindcast (forecast) and observations, named `hcst` (`fcst`) and `obs`.

### How to call it

```r
data <- Downscaling(recipe = recipe, data = data)
```

### Downscaling methods currently available:

The first step is to specify the type of downscaling, choosing from the following options: `'none'`, `'analogs'`, `'int'`, `'intbc'`, `'intlr'`, `'logreg'`. Only one downscaling type can be chosen per recipe. Detailed information about each methodology can be found in the CSDownscale documentation.

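As a sketch, the downscaling type might be requested like this; the exact key name under `Workflow:Downscaling` is an assumption:

```yaml
Workflow:
  Downscaling:
    type: intbc   # one of: none, analogs, int, intbc, intlr, logreg
```
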
The Indices module aggregates the hindcast and reference data to compute climato

The main function, `Indices()`, returns the `hcst` and `obs` `s2dv_cube` objects for each requested index, in the form of a list of lists. The 'latitude' and 'longitude' dimensions of the original arrays are aggregated into a single 'region' dimension.

### How to call it

```r
data <- Indices(recipe = recipe, data = data)
```

### Indices currently available:

| Index | Recipe name | Longitude range | Latitude range |
|-------|-------------|-----------------|----------------|

The following metrics are currently available:

The output of `Skill()` is a list containing one or more arrays with named dimensions, usually 'var', 'time', 'longitude' and 'latitude'. For more details on the specific output for each metric, see the documentation for [s2dv](https://CRAN.R-project.org/package=s2dv) and [SpecsVerification](https://CRAN.R-project.org/package=SpecsVerification).

### How to call it

```r
skill_metrics <- Skill(recipe = recipe, data = data)
```

`Probabilities()` returns a list of lists. Inside the lists there are arrays containing the values corresponding to the thresholds defined in the recipe, in `Workflow:Probabilities:percentiles` (`$percentiles`), as well as their probability bins (`$probs`).

Each list contains arrays with named dimensions 'time', 'longitude' and 'latitude'.

For example, if the extremes ([1/10, 9/10]) are requested, the output will be:

**Note**: When naming the variables, the probability thresholds are converted to percentiles and rounded to the nearest integer to avoid dots in variable or file names. However, this is just a naming convention; the computations are performed based on the original thresholds specified in the recipe.

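To illustrate the naming convention, thresholds requested as fractions map to rounded integer percentiles. The nested-list layout shown for `Workflow:Probabilities:percentiles` below is a sketch:

```yaml
Workflow:
  Probabilities:
    percentiles: [[1/3, 2/3], [1/10, 9/10]]
    # naming: 1/3 -> 33, 2/3 -> 67, 1/10 -> 10, 9/10 -> 90;
    # the computations still use the exact fractional thresholds
```
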
### How to call it

```r
probabilities <- Probabilities(recipe = recipe, data = data)
```

### Recipe template:

```yaml
The function `Statistics()` computes the statistics metrics requested in `Workflow:Statistics`

The output of `Statistics()` is a list containing one or more arrays with the same dimensions as the Skill metrics. The statistics metrics are saved in the output directory `outputs/Statistics`. When the output format in the recipe is `S2S4E`, the metrics are all saved in one netCDF file; if the output format is instead set to `scorecards`, each statistic metric is saved in a separate netCDF file.

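The output format mentioned above is selected in the recipe; as a hypothetical sketch (the actual key name and its placement in the recipe may differ):

```yaml
Output_format: scorecards   # or S2S4E; one file per metric vs. a single file
```
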
### How to call it

```r
data <- Statistics(recipe = recipe, data = data)
```

### Recipe template:

```yaml
The Scorecards module takes the output netCDF files that are saved from the Skill

The Saving module contains several functions that export the data (the calibrated hindcast and forecast, the corresponding observations, the skill metrics, percentiles and probabilities) to netCDF files.

Since version 1.1.0 of SUNSET, each module has different options to save all or some of the data produced. These options can be specified in the recipe. To see the options for a specific module, consult the 'Recipe' section of this wiki or the documentation for that module. Once the computations are finished, the output subdirectories and files are generated internally within the module function, according to the user's specifications.

Inside the main output directory, the netCDF files produced by each module will be stored inside `/outputs/<module_name>/`. The old function `save_data()`, which has been renamed to `Saving()`, still exists, but will likely be deprecated in the near future.

The three functions that `Visualization()` calls can also be called independently.

**plot_most_likely_terciles(recipe, archive, fcst, percentiles, outdir)**: Computes the forecast tercile probability bins with respect to the terciles provided in `percentiles`, then generates a figure with one plot per time step and saves it to the directory `outdir` as `forecast_most_likely_terciles.png`.

### How to call it

```r
Visualization(recipe = recipe, data = data,
              skill_metrics = skill_metrics,
              statistics = statistics,
              probabilities = probabilities,
              significance = TRUE)
```

### Recipe template:

```yaml