From 80d5ff85d8847221d80dff72613ac63f7ca85d0a Mon Sep 17 00:00:00 2001
From: Eva Rifa
Date: Mon, 16 Oct 2023 11:30:24 +0200
Subject: [PATCH 01/66] Add Hands-on 2 for PATC 2023

---
 .../PATC2023/handson_2-data-assesment.md | 260 ++++++++++++++++++
 1 file changed, 260 insertions(+)
 create mode 100644 inst/doc/tutorial/PATC2023/handson_2-data-assesment.md

diff --git a/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md b/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md
new file mode 100644
index 00000000..7de88529
--- /dev/null
+++ b/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md
@@ -0,0 +1,260 @@
+# Hands-on 2: Data assesment
'/esarchive/recon/ecmwf/era5/monthly_mean/$var$_f1h-r1440x721cds/$var$_$syear$.nc' + +obs <- Start(dat = path_obs, + var = var, + syear = sdate_obs, + split_multiselected_dims = TRUE, + latitude = values(list(lat.min, lat.max)), + latitude_reorder = Sort(), + longitude = values(list(lon.min, lon.max)), + longitude_reorder = CircularSort(-180, 180), + transform = CDORemapper, +#TODO: Change to relative path +# transform_params = list(grid = './griddes_system7c3s.txt', + transform_params = list(grid = '/esarchive/scratch/aho/git/startR/inst/doc/tutorial/PATC2023/griddes_system7c3s.txt', + method = 'bilinear'), + transform_vars = c('latitude', 'longitude'), + synonims = list(syear = c('syear', 'sdate'), + latitude = c('lat', 'latitude'), + longitude = c('lon', 'longitude')), + return_vars = list(time = 'syear', + longitude = NULL, latitude = NULL), + retrieve = TRUE) +``` + +### 1b) Create `s2dv_cube`: + +Now we convert the data (`startR_array`) into an 's2dv_cube' object with the function **as.s2dv_cube**: + +```r +> str(hcst) + 'startR_array' num [1, 1, 1:24, 1:25, 1:2, 1:60, 1:60] 295 296 297 297 297 ... + - attr(*, "Variables")=List of 2 + ..$ common:List of 4 + .. ..$ time : POSIXct[1:48], format: "1993-12-01" "1994-12-01" ... + .. ..$ longitude: num [1:60(1d)] -19.5 -18.5 -17.5 -16.5 -15.5 -14.5 -13.5 -12.5 -11.5 -10.5 ... + .. .. ..- attr(*, "variables")=List of 1 + .. .. .. ..$ longitude:List of 7 + .. .. .. .. ..$ ndims : num 1 + .. .. .. .. ..$ size : int 360 + .. .. .. .. ..$ units : chr "degrees_east" + .. .. .. .. ..$ dim :List of 1 + .. .. .. .. .. ..$ :List of 10 + +``` +Now, we convert the startR object into 's2dv_cube' object. +```r +hcst <- as.s2dv_cube(hcst) +fcst <- as.s2dv_cube(fcst) +obs <- as.s2dv_cube(obs) +``` +**Questions 1:** + +Find `s2dv_cube` information: + +1. Can you tell what type of object is an **s2dv_cube** in R structures? +2. What are the time dimensions of the object? +3. In which region we have loaded the data? +4. How many coordinates have the object hcst? +5. How many ensemble members have the observation dataset? +6. What is the full variable name of the loaded data? +7. What season is the data loaded from? + +**Exercise 1**: +1. Find the mean, the maximum and the minimum of the fcst, hcst and obs data and compare it. +2. Explore the data (TODO: complete it) + + +## 2. Calibrate the data + +Calibrate the hindcast: +```r +hcst_cal <- CST_Calibration(exp = hcst, obs = obs, + cal.method = "evmos", + eval.method = "leave-one-out", + multi.model = FALSE, + na.fill = TRUE, + na.rm = TRUE, + apply_to = NULL, + alpha = NULL, + memb_dim = "ensemble", + sdate_dim = "syear", + ncores = 10) +``` + +Calibrate the forecast: +```r +fcst_cal <- CST_Calibration(exp = hcst, obs = obs, exp_cor = fcst, + cal.method = "evmos", + eval.method = "leave-one-out", + multi.model = FALSE, + na.fill = TRUE, + na.rm = TRUE, + apply_to = NULL, + alpha = NULL, + memb_dim = "ensemble", + sdate_dim = "syear", + ncores = 10) + +``` + + + +## 2. 
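# Note: 'syear' below is given the two-dimensional array 'sdate_obs' built
# above (one "YYYYMM" string per start date and lead time), and
# 'split_multiselected_dims = TRUE' splits it back into separate 'syear' and
# 'time' dimensions, so each observed month matches its hindcast lead time.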
Compute Anomalies + + +```r +hcst_anom <- CST_Anomaly(exp = hcst_cal, obs = obs, + cross = TRUE, + memb = TRUE, + memb_dim = 'ensemble', + dim_anom = 'syear', + dat_dim = c('dat', 'ensemble'), + ftime_dim = 'time', + ncores = 10) +``` +Compute forecast anomaly field: +```r +clim <- s2dv::Clim(exp = hcst_cal$data, obs = obs$data, + time_dim = "syear", + dat_dim = c("dat", "ensemble"), + memb = FALSE, + memb_dim = "ensemble", + ftime_dim = "time", + ncores = 10) +clim_hcst <- InsertDim(clim$clim_exp, posdim = 1, lendim = 1, name = "syear") +dims <- dim(clim_hcst) +clim_hcst <- rep(clim_hcst, fcst$dim[['ensemble']]) +dim(clim_hcst) <- c(dims, ensemble = fcst$dim[['ensemble']]) +clim_hcst <- Reorder(clim_hcst, order = names(fcst$dim)) +fcst_anom <- fcst_cal$data - clim_hcst +``` + +## 3. Compute skill: RPSS + +Compute Ranked Probability Skill Score: +```r +skill_data <- RPSS(exp = hcst_cal$data, obs = obs$data, memb_dim = 'ensemble', + time_dim = 'syear', Fair = FALSE, cross.val = TRUE, + ncores = 10) + +skill_anom <- RPSS(exp = hcst_anom$exp$data, obs = hcst_anom$obs$data, + time_dim = 'syear', + memb_dim = 'ensemble', + Fair = FALSE, + cross.val = TRUE, + ncores = 10) +``` +Explore the result: + +```r +> summary(skill_data$rpss) + Min. 1st Qu. Median Mean 3rd Qu. Max. +-0.54095 -0.11060 -0.03200 -0.03599 0.04465 0.41849 +> summary(skill_data$sign) + Mode FALSE TRUE +logical 6813 387 +> summary(skill_anom$rpss) + Min. 1st Qu. Median Mean 3rd Qu. Max. +-0.54005 -0.11225 -0.03170 -0.03722 0.04480 0.44376 +> summary(skill_anom$sign) + Mode FALSE TRUE +logical 6798 402 +``` + + +## References \ No newline at end of file -- GitLab From 38047ebccdbccec9df71cec2f6e1d0908b4aea77 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Mon, 16 Oct 2023 12:47:23 +0200 Subject: [PATCH 02/66] Compute only RPSS from Anomalies --- .../PATC2023/handson_2-data-assesment.md | 29 +++++++------------ 1 file changed, 10 insertions(+), 19 deletions(-) diff --git a/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md b/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md index 7de88529..290da549 100644 --- a/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md +++ b/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md @@ -226,32 +226,23 @@ fcst_anom <- fcst_cal$data - clim_hcst ## 3. Compute skill: RPSS -Compute Ranked Probability Skill Score: +Compute Ranked Probability Skill Score for anomalies: ```r -skill_data <- RPSS(exp = hcst_cal$data, obs = obs$data, memb_dim = 'ensemble', - time_dim = 'syear', Fair = FALSE, cross.val = TRUE, - ncores = 10) - -skill_anom <- RPSS(exp = hcst_anom$exp$data, obs = hcst_anom$obs$data, - time_dim = 'syear', - memb_dim = 'ensemble', - Fair = FALSE, - cross.val = TRUE, - ncores = 10) + +skill <- RPSS(exp = hcst_anom$exp$data, obs = hcst_anom$obs$data, + time_dim = 'syear', + memb_dim = 'ensemble', + Fair = FALSE, + cross.val = TRUE, + ncores = 10) ``` Explore the result: ```r -> summary(skill_data$rpss) - Min. 1st Qu. Median Mean 3rd Qu. Max. --0.54095 -0.11060 -0.03200 -0.03599 0.04465 0.41849 -> summary(skill_data$sign) - Mode FALSE TRUE -logical 6813 387 -> summary(skill_anom$rpss) +> summary(skill$rpss) Min. 1st Qu. Median Mean 3rd Qu. Max. 
-0.54005 -0.11225 -0.03170 -0.03722 0.04480 0.44376 -> summary(skill_anom$sign) +> summary(skill$sign) Mode FALSE TRUE logical 6798 402 ``` -- GitLab From d1ee5ea70becb043cea77253e913a637b655955c Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Mon, 16 Oct 2023 16:24:45 +0200 Subject: [PATCH 03/66] Remove rm() --- inst/doc/tutorial/PATC2023/handson_2-data-assesment.md | 3 --- 1 file changed, 3 deletions(-) diff --git a/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md b/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md index 290da549..6fdcff13 100644 --- a/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md +++ b/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md @@ -6,9 +6,6 @@ Use CSTools to calibrate, compute anomalies and skill of a climate dataset over **Load packages** ```r -# Clean the session -rm(list = ls()) - library(CSTools) # CST_Calibration library(startR) # Start # TODO: remove startR if new CSTools is installed -- GitLab From 051c38061a8818f2df0425ea555d2c71db830c5e Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Wed, 25 Oct 2023 18:29:57 +0200 Subject: [PATCH 04/66] Improve hands-on 2 --- .../PATC2023/handson_2-data-assesment.md | 345 +++++++++----- .../PATC2023/handson_2-data-assesment_ans.md | 437 ++++++++++++++++++ 2 files changed, 674 insertions(+), 108 deletions(-) create mode 100644 inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md diff --git a/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md b/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md index 6fdcff13..74ce3eb2 100644 --- a/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md +++ b/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md @@ -2,14 +2,12 @@ **Goal** -Use CSTools to calibrate, compute anomalies and skill of a climate dataset over a period. +Use CSTools and s2dv to calibrate, compute anomalies and skill of a climate dataset over a period. **Load packages** ```r -library(CSTools) # CST_Calibration -library(startR) # Start -# TODO: remove startR if new CSTools is installed -library(s2dv) # +library(CSTools) +library(s2dv) ``` ## 1. Load the data @@ -21,96 +19,98 @@ The `s2dv_cube` object is a structured list that contains the information needed **Note:** If you have already loaded the data with Start, go directly to section 1b). ### 1a) Load the data -TODO: If CSTools new version can be installed, use CST_Start -The following section is taken from ( TODO: cite handson1 ). +The following section is taken from [PATC 2023 startR tutorial](https://earth.bsc.es/gitlab/es/startR/-/blob/doc-bsc_training_2023/inst/doc/tutorial/PATC2023/handson_1-data-loading.md?ref_type=heads). The experiment data are Meteo-France System 7 from ECMWF, and the observation ones are ERA5 from ECMWF. We're going to analyze the near-surface temperature (short name: tas) for seasonal forecast. We will focus on the Europe region (roughly 20W-40E, 20N-80N). The hindcast years are 1993 to 2016, and the forecast year is 2020. The initial month is November. 
```r -#TODO: update the path # Use this one if on workstation or nord3 (have access to /esarchive) path_exp <- "/esarchive/exp/meteofrance/system7c3s/monthly_mean/$var$_f6h/$var$_$syear$.nc" +#---------------------------------------------------------------------- +# Run these two lines if you're on Marenostrum4 and log in with training account +prefix <- '/gpfs/scratch/bsc32/bsc32734/bsc_training_2023/R_handson/' +path_exp <- paste0(prefix, path_exp) +#---------------------------------------------------------------------- -# Use this one if on Marenostrum4 and log in with PATC2021 account -path_exp <- paste0('/gpfs/scratch/nct01/nct01127/d3_R_handson/esarchive/', - 'exp/ecmwf/system5c3s/daily_mean/', - '$var$_s0-24h/$var$_$sdate$.nc') - -var <- 'tas' sdate_hcst <- paste0(1993:2016, '1101') + +hcst <- CST_Start(dat = path_exp, + var = 'tas', + syear = sdate_hcst, + ensemble = 'all', + time = 1:2, + latitude = startR::values(list(20, 80)), + latitude_reorder = startR::Sort(), + longitude = startR::values(list(-20, 40)), + longitude_reorder = startR::CircularSort(-180, 180), + transform = startR::CDORemapper, + transform_params = list(grid = 'r360x181', method = 'bilinear'), + transform_vars = c('latitude', 'longitude'), + synonims = list(latitude = c('lat', 'latitude'), + longitude = c('lon', 'longitude')), + return_vars = list(time = 'syear', + longitude = NULL, latitude = NULL), + retrieve = TRUE) + +``` + +```r sdate_fcst <- '20201101' -lon.min <- -20 -lon.max <- 40 -lat.min <- 20 -lat.max <- 80 -``` - -```r -hcst <- Start(dat = path_exp, - var = var, - syear = sdate_hcst, - ensemble = 'all', - time = 1:2, - latitude = values(list(lat.min, lat.max)), - latitude_reorder = Sort(), - longitude = values(list(lon.min, lon.max)), - longitude_reorder = CircularSort(-180, 180), - synonims = list(syear = c('syear', 'sdate'), - latitude = c('lat', 'latitude'), - longitude = c('lon', 'longitude')), - return_vars = list(time = 'syear', - longitude = NULL, latitude = NULL), - retrieve = TRUE) -``` - -```r -fcst <- Start(dat = path_exp, - var = var, - syear = sdate_fcst, - ensemble = 'all', - time = 1:2, - latitude = values(list(lat.min, lat.max)), - latitude_reorder = Sort(), - longitude = values(list(lon.min, lon.max)), - longitude_reorder = CircularSort(-180, 180), - synonims = list(syear = c('syear', 'sdate'), - latitude = c('lat', 'latitude'), - longitude = c('lon', 'longitude')), - return_vars = list(time = 'syear', - longitude = NULL, latitude = NULL), - retrieve = TRUE) + +fcst <- CST_Start(dat = path_exp, + var = 'tas', + syear = sdate_fcst, + ensemble = 'all', + time = 1:2, + latitude = startR::values(list(20, 80)), + latitude_reorder = startR::Sort(), + longitude = startR::values(list(-20, 40)), + longitude_reorder = startR::CircularSort(-180, 180), + transform = startR::CDORemapper, + transform_params = list(grid = 'r360x181', method = 'bilinear'), + transform_vars = c('latitude', 'longitude'), + synonims = list(syear = c('syear', 'sdate'), + latitude = c('lat', 'latitude'), + longitude = c('lon', 'longitude')), + return_vars = list(time = 'syear', + longitude = NULL, latitude = NULL), + retrieve = TRUE) + ``` ```r # Adjust the day to the correct month -attributes(hcst)$Variables$common$time <- attributes(hcst)$Variables$common$time - lubridate::days(1) +hcst$attrs$Dates <- hcst$attrs$Dates - lubridate::days(1) -date_string <- format(attributes(hcst)$Variables$common$time, '%Y%m') +date_string <- format(hcst$attrs$Dates, '%Y%m') sdate_obs <- array(date_string, dim = c(syear = 24, time = 2)) 
``` ```r path_obs <- '/esarchive/recon/ecmwf/era5/monthly_mean/$var$_f1h-r1440x721cds/$var$_$syear$.nc' +#---------------------------------------------------------------------- +# Run these two lines if you're on Marenostrum4 and log in with training account +prefix <- '/gpfs/scratch/bsc32/bsc32734/bsc_training_2023/R_handson/' +path_obs <- paste0(prefix, path_obs) +#---------------------------------------------------------------------- + +obs <- CST_Start(dat = path_obs, + var = 'tas', + syear = sdate_obs, + split_multiselected_dims = TRUE, + latitude = startR::values(list(20, 80)), + latitude_reorder = startR::Sort(), + longitude = startR::values(list(-20, 40)), + longitude_reorder = startR::CircularSort(-180, 180), + transform = startR::CDORemapper, + transform_params = list(grid = 'r360x181', method = 'bilinear'), + transform_vars = c('latitude', 'longitude'), + synonims = list(syear = c('syear', 'sdate'), + latitude = c('lat', 'latitude'), + longitude = c('lon', 'longitude')), + return_vars = list(time = 'syear', + longitude = NULL, latitude = NULL), + retrieve = TRUE) -obs <- Start(dat = path_obs, - var = var, - syear = sdate_obs, - split_multiselected_dims = TRUE, - latitude = values(list(lat.min, lat.max)), - latitude_reorder = Sort(), - longitude = values(list(lon.min, lon.max)), - longitude_reorder = CircularSort(-180, 180), - transform = CDORemapper, -#TODO: Change to relative path -# transform_params = list(grid = './griddes_system7c3s.txt', - transform_params = list(grid = '/esarchive/scratch/aho/git/startR/inst/doc/tutorial/PATC2023/griddes_system7c3s.txt', - method = 'bilinear'), - transform_vars = c('latitude', 'longitude'), - synonims = list(syear = c('syear', 'sdate'), - latitude = c('lat', 'latitude'), - longitude = c('lon', 'longitude')), - return_vars = list(time = 'syear', - longitude = NULL, latitude = NULL), - retrieve = TRUE) ``` ### 1b) Create `s2dv_cube`: @@ -139,26 +139,121 @@ hcst <- as.s2dv_cube(hcst) fcst <- as.s2dv_cube(fcst) obs <- as.s2dv_cube(obs) ``` + +```r +> str(hcst) +List of 4 + $ data : num [1, 1, 1:24, 1:25, 1:2, 1:60, 1:60] 295 296 297 297 297 ... + $ dims : Named int [1:7] 1 1 24 25 2 60 60 + ..- attr(*, "names")= chr [1:7] "dat" "var" "syear" "ensemble" ... + $ coords:List of 7 + ..$ dat : chr "dat1" + .. ..- attr(*, "indices")= logi FALSE + ..$ var : chr "tas" + .. ..- attr(*, "values")= logi TRUE + .. ..- attr(*, "indices")= logi FALSE + ..$ syear : chr [1:24] "19931101" "19941101" "19951101" "19961101" ... + .. ..- attr(*, "values")= logi TRUE + .. ..- attr(*, "indices")= logi FALSE + ..$ ensemble : int [1:25] 1 2 3 4 5 6 7 8 9 10 ... + .. ..- attr(*, "indices")= logi TRUE + ..$ time : int [1:2] 1 2 + .. ..- attr(*, "indices")= logi TRUE + ..$ latitude : num [1:60] 20.5 21.5 22.5 23.5 24.5 25.5 26.5 27.5 28.5 29.5 ... + .. ..- attr(*, "variables")=List of 1 + .. .. ..$ latitude:List of 7 + .. .. .. ..$ ndims : num 1 + [...] +``` **Questions 1:** -Find `s2dv_cube` information: +Find `s2dv_cube` information of `hcst` object. Here there are some useful functions: +```r +str() +typeof() +class() +dim() +names() +summary() +``` +```r +`s2dv_cube` overview +$ data: [data array] +$ dims: [dimensions vector] +$ coords: [List of coordinates vectors] + [...] +$ attrs: [List of the attributes] + $ Dates + $ Variable: + $ varName + $ metadata + [...] +``` 1. Can you tell what type of object is an **s2dv_cube** in R structures? -2. What are the time dimensions of the object? -3. In which region we have loaded the data? -4. 
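The subtraction above moves each monthly timestamp back into the month it actually describes, and `sdate_obs` then holds one "YYYYMM" string per start date and lead time, ready to request the matching ERA5 files. A quick sanity check (a minimal sketch; the example values assume the November 1993-2016 start dates loaded above):

```r
dim(sdate_obs)
# syear  time
#    24     2
sdate_obs[1, ]  # both lead times of the first start date, e.g. "199311" "199312"
```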
How many coordinates does the object hcst have?
```r hcst_anom <- CST_Anomaly(exp = hcst_cal, obs = obs, @@ -203,29 +304,23 @@ hcst_anom <- CST_Anomaly(exp = hcst_cal, obs = obs, dat_dim = c('dat', 'ensemble'), ftime_dim = 'time', ncores = 10) + ``` -Compute forecast anomaly field: +**Exercise 3**: +1. Calculate the hcst anomalies with the raw hindcast but change the number of ncores. ```r -clim <- s2dv::Clim(exp = hcst_cal$data, obs = obs$data, - time_dim = "syear", - dat_dim = c("dat", "ensemble"), - memb = FALSE, - memb_dim = "ensemble", - ftime_dim = "time", - ncores = 10) -clim_hcst <- InsertDim(clim$clim_exp, posdim = 1, lendim = 1, name = "syear") -dims <- dim(clim_hcst) -clim_hcst <- rep(clim_hcst, fcst$dim[['ensemble']]) -dim(clim_hcst) <- c(dims, ensemble = fcst$dim[['ensemble']]) -clim_hcst <- Reorder(clim_hcst, order = names(fcst$dim)) -fcst_anom <- fcst_cal$data - clim_hcst +t1 <- Sys.time() +hcst_anom_raw <- CST_Anomaly(____) +t2 <- Sys.time() +t2-t1 ``` -## 3. Compute skill: RPSS + +## 4. Compute skill: RPSS +In order to trust the results we need to evaluate the skill of the system. Compute Ranked Probability Skill Score for anomalies: ```r - skill <- RPSS(exp = hcst_anom$exp$data, obs = hcst_anom$obs$data, time_dim = 'syear', memb_dim = 'ensemble', @@ -233,16 +328,50 @@ skill <- RPSS(exp = hcst_anom$exp$data, obs = hcst_anom$obs$data, cross.val = TRUE, ncores = 10) ``` -Explore the result: +**Exercise 4**: +1. Compare the result with the raw results. ```r +skill_raw <- RPSS(exp = hcst_anom_raw$exp$data, obs = hcst_anom_raw$obs$data, + time_dim = 'syear', + memb_dim = 'ensemble', + Fair = FALSE, + cross.val = TRUE, + ncores = 10) > summary(skill$rpss) Min. 1st Qu. Median Mean 3rd Qu. Max. -0.54005 -0.11225 -0.03170 -0.03722 0.04480 0.44376 > summary(skill$sign) Mode FALSE TRUE logical 6798 402 +> summary(skill_raw$rpss) + Min. 1st Qu. Median Mean 3rd Qu. Max. +-0.59113 -0.11045 -0.03069 -0.03585 0.04637 0.44305 +> summary(skill_raw$sign) + Mode FALSE TRUE +logical 7055 387 ``` +## 5. Additional Exercise: Visualization + +We can use the function from s2dv PlotEquiMap to visualize the data. -## References \ No newline at end of file +**Example** + +With the following code, we will plot a map to compare the hindcast raw and calibrated data. Can we appreciate the differences? + +We are going to plot the last year of the hindcast period (2016) for the last timestep (December). Also, we are going to use the last ensemble member (arbirtrary choice). +```r +dim(hcst$data) + # dat var syear ensemble time latitude longitude + # 1 1 24 25 2 60 60 +``` +```r +# Visualization: compare raw and calibrated data +PlotEquiMap(hcst$data[,,24,1,1,,], lat = lat, lon = lon, + filled.continents = FALSE, fileout = "/esarchive/scratch/erifarov/rpackages/PATC2023/figs_res/raw_hcst_24.png") + +PlotEquiMap(hcst_cal$data[,,24,1,1,,], lat = lat, lon = lon, + filled.continents = FALSE, fileout = "/esarchive/scratch/erifarov/rpackages/PATC2023/figs_res/cal_hcst_24.png") + +``` diff --git a/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md b/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md new file mode 100644 index 00000000..d21c5834 --- /dev/null +++ b/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md @@ -0,0 +1,437 @@ +# Hands-on 2: Data assesment + +**Goal** + +Use CSTools and s2dv to calibrate, compute anomalies and skill of a climate dataset over a period. + +**Load packages** +```r +library(CSTools) +library(s2dv) +``` + +## 1. Load the data + +In this section we will use the function **Start** to load the data. 
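To see what the cross-validation means here, consider a minimal one-dimensional sketch (an illustration of the idea only, not the actual `CST_Anomaly` internals): for each start date, the climatology is the mean over all the *other* start dates.

```r
# Toy example: anomalies relative to a leave-one-out (cross-validated) climatology
x <- c(2, 4, 6, 8)  # pretend yearly values at one grid point
anom <- sapply(seq_along(x), function(j) x[j] - mean(x[-j]))
anom
# [1] -4.000000 -1.333333  1.333333  4.000000
```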
Then, we will transform the output `startR_array` into an `s2dv_cube` object so that the data are easier to use within the CSTools functions.
+#---------------------------------------------------------------------- + +obs <- CST_Start(dat = path_obs, + var = 'tas', + syear = sdate_obs, + split_multiselected_dims = TRUE, + latitude = startR::values(list(20, 80)), + latitude_reorder = startR::Sort(), + longitude = startR::values(list(-20, 40)), + longitude_reorder = startR::CircularSort(-180, 180), + transform = startR::CDORemapper, + transform_params = list(grid = 'r360x181', method = 'bilinear'), + transform_vars = c('latitude', 'longitude'), + synonims = list(syear = c('syear', 'sdate'), + latitude = c('lat', 'latitude'), + longitude = c('lon', 'longitude')), + return_vars = list(time = 'syear', + longitude = NULL, latitude = NULL), + retrieve = TRUE) + +``` + +### 1b) Create `s2dv_cube`: + +Now we convert the data (`startR_array`) into an 's2dv_cube' object with the function **as.s2dv_cube**: + +```r +> str(hcst) + 'startR_array' num [1, 1, 1:24, 1:25, 1:2, 1:60, 1:60] 295 296 297 297 297 ... + - attr(*, "Variables")=List of 2 + ..$ common:List of 4 + .. ..$ time : POSIXct[1:48], format: "1993-12-01" "1994-12-01" ... + .. ..$ longitude: num [1:60(1d)] -19.5 -18.5 -17.5 -16.5 -15.5 -14.5 -13.5 -12.5 -11.5 -10.5 ... + .. .. ..- attr(*, "variables")=List of 1 + .. .. .. ..$ longitude:List of 7 + .. .. .. .. ..$ ndims : num 1 + .. .. .. .. ..$ size : int 360 + .. .. .. .. ..$ units : chr "degrees_east" + .. .. .. .. ..$ dim :List of 1 + .. .. .. .. .. ..$ :List of 10 + +``` +Now, we convert the startR object into 's2dv_cube' object. +```r +hcst <- as.s2dv_cube(hcst) +fcst <- as.s2dv_cube(fcst) +obs <- as.s2dv_cube(obs) +``` + +```r +> str(hcst) +List of 4 + $ data : num [1, 1, 1:24, 1:25, 1:2, 1:60, 1:60] 295 296 297 297 297 ... + $ dims : Named int [1:7] 1 1 24 25 2 60 60 + ..- attr(*, "names")= chr [1:7] "dat" "var" "syear" "ensemble" ... + $ coords:List of 7 + ..$ dat : chr "dat1" + .. ..- attr(*, "indices")= logi FALSE + ..$ var : chr "tas" + .. ..- attr(*, "values")= logi TRUE + .. ..- attr(*, "indices")= logi FALSE + ..$ syear : chr [1:24] "19931101" "19941101" "19951101" "19961101" ... + .. ..- attr(*, "values")= logi TRUE + .. ..- attr(*, "indices")= logi FALSE + ..$ ensemble : int [1:25] 1 2 3 4 5 6 7 8 9 10 ... + .. ..- attr(*, "indices")= logi TRUE + ..$ time : int [1:2] 1 2 + .. ..- attr(*, "indices")= logi TRUE + ..$ latitude : num [1:60] 20.5 21.5 22.5 23.5 24.5 25.5 26.5 27.5 28.5 29.5 ... + .. ..- attr(*, "variables")=List of 1 + .. .. ..$ latitude:List of 7 + .. .. .. ..$ ndims : num 1 + [...] +``` +**Questions 1:** + +Find `s2dv_cube` information of `hcst` object. Here there are some useful functions: +```r +str() +typeof() +class() +dim() +names() +summary() +``` +```r +`s2dv_cube` overview +$ data: [data array] +$ dims: [dimensions vector] +$ coords: [List of coordinates vectors] + [...] +$ attrs: [List of the attributes] + $ Dates + $ Variable: + $ varName + $ metadata + [...] +``` + +1. Can you tell what type of object is an **s2dv_cube** in R structures? +```r +class(hcst) +# "s2dv_cube" +typeof(hcst) +# "list" +``` +1. Can you tell what type of object is the element data in common language? +```r +dim(hcst$data) +# dat var syear ensemble time latitude longitude +# 1 1 24 25 2 61 61 +typeof(hcst$data) # base type +# [1] "double" + +# Answer: Multi-dimensional array / N-dimensional array /tensor. +``` +2. The Dates of an `s2dv_cube` can be found in element: `hcst$attrs$Dates`. +Could you tell what are the time dimensions of the object? +```r +dim(hcst$attrs$Dates) +# syear time +# 24 2 +``` +3. 
The coordinates of the `s2dv_cube` are stored in the element `hcst$coords`. What are the coordinate names in the object `hcst`?
+```r +hcst_anom <- CST_Anomaly(exp = hcst_cal, obs = obs, + cross = TRUE, + memb = TRUE, + memb_dim = 'ensemble', + dim_anom = 'syear', + dat_dim = c('dat', 'ensemble'), + ftime_dim = 'time', + ncores = 10) + +``` +**Exercise 3**: +1. Calculate the hcst anomalies with the raw hindcast but change the number of ncores. +```r +t1 <- Sys.time() +hcst_anom_raw <- CST_Anomaly(exp = hcst, obs = obs, + cross = TRUE, + memb = TRUE, + memb_dim = 'ensemble', + dim_anom = 'syear', + dat_dim = c('dat', 'ensemble'), + ftime_dim = 'time', + ncores = 5) +t2 <- Sys.time() +t2-t1 +``` + + +## 4. Compute skill: RPSS + +Compute Ranked Probability Skill Score for anomalies: +```r +skill <- RPSS(exp = hcst_anom$exp$data, obs = hcst_anom$obs$data, + time_dim = 'syear', + memb_dim = 'ensemble', + Fair = FALSE, + cross.val = TRUE, + ncores = 10) +``` +**Exercise 4**: +1. Compare the result with the raw results. + +```r +skill_raw <- RPSS(exp = hcst_anom_raw$exp$data, obs = hcst_anom_raw$obs$data, + time_dim = 'syear', + memb_dim = 'ensemble', + Fair = FALSE, + cross.val = TRUE, + ncores = 10) +> summary(skill$rpss) + Min. 1st Qu. Median Mean 3rd Qu. Max. +-0.54005 -0.11225 -0.03170 -0.03722 0.04480 0.44376 +> summary(skill$sign) + Mode FALSE TRUE +logical 6798 402 +> summary(skill_raw$rpss) + Min. 1st Qu. Median Mean 3rd Qu. Max. +-0.59113 -0.11045 -0.03069 -0.03585 0.04637 0.44305 +> summary(skill_raw$sign) + Mode FALSE TRUE +logical 7055 387 +``` + +## 5. Additional Exercise: Visualization + +We can use the function from s2dv PlotEquiMap to visualize the data. + +**Example** + +With the following code, we will plot a map to compare the hindcast raw and calibrated data. Can we appreciate the differences? + +We are going to plot the last year of the hindcast period (2016) for the last timestep (December). Also, we are going to use the last ensemble member (arbirtrary choice). +```r +dim(hcst$data) + # dat var syear ensemble time latitude longitude + # 1 1 24 25 2 60 60 +``` +```r +lat <- hcst$coords$lat +lon <- hcst$coords$lon +# Visualization: compare raw and calibrated data +PlotEquiMap(hcst$data[,,24,1,1,,], lat = lat, lon = lon, + filled.continents = FALSE, fileout = "/esarchive/scratch/erifarov/rpackages/PATC2023/figs_res/raw_hcst_24.png") + +PlotEquiMap(hcst_cal$data[,,24,1,1,,], lat = lat, lon = lon, + filled.continents = FALSE, fileout = "/esarchive/scratch/erifarov/rpackages/PATC2023/figs_res/cal_hcst_24.png") + +# Visualization of RPSS +PlotEquiMap(skill$rpss[,,1,,], lat = lat, lon = lon, brks = seq(-1, 1, by = 0.1), + filled.continents = FALSE, fileout = "/esarchive/scratch/erifarov/rpackages/PATC2023/figs_res/skill.png") + +``` -- GitLab From e95380995b83fe259f09a06a1f2415ee1018a9b8 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Thu, 26 Oct 2023 11:01:37 +0200 Subject: [PATCH 05/66] Few corrections --- .../PATC2023/handson_2-data-assesment_ans.md | 22 +++++++++++++++++-- 1 file changed, 20 insertions(+), 2 deletions(-) diff --git a/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md b/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md index d21c5834..2ff5e31e 100644 --- a/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md +++ b/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md @@ -316,7 +316,7 @@ fcst_cal <- CST_Calibration(exp = hcst, obs = obs, exp_cor = fcst, alpha = NULL, memb_dim = "ensemble", sdate_dim = "syear", - ncores = 10) + ncores = 14) ``` **Exercise 2**: 1. Test the hcst with a different calibration methods available and compare the results. 
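Note that `CST_Anomaly` returns the anomalies of both datasets in a single object: the hindcast anomalies in `hcst_anom$exp` and the observational ones in `hcst_anom$obs`, which is how they are accessed in the RPSS computation below.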
@@ -353,6 +353,22 @@ hcst_anom <- CST_Anomaly(exp = hcst_cal, obs = obs, ftime_dim = 'time', ncores = 10) +``` +**Compute forecast anomaly field** +```r +clim <- s2dv::Clim(exp = hcst_cal$data, obs = obs$data, + time_dim = "syear", + dat_dim = c("dat", "ensemble"), + memb = FALSE, + memb_dim = "ensemble", + ftime_dim = "time", + ncores = 10) +clim_hcst <- InsertDim(clim$clim_exp, posdim = 1, lendim = 1, name = "syear") +dims <- dim(clim_hcst) +clim_hcst <- rep(clim_hcst, fcst$dim[['ensemble']]) +dim(clim_hcst) <- c(dims, ensemble = fcst$dim[['ensemble']]) +clim_hcst <- Reorder(clim_hcst, order = names(fcst$dim)) +fcst_anom <- fcst_cal$data - clim_hcst ``` **Exercise 3**: 1. Calculate the hcst anomalies with the raw hindcast but change the number of ncores. @@ -429,7 +445,9 @@ PlotEquiMap(hcst$data[,,24,1,1,,], lat = lat, lon = lon, PlotEquiMap(hcst_cal$data[,,24,1,1,,], lat = lat, lon = lon, filled.continents = FALSE, fileout = "/esarchive/scratch/erifarov/rpackages/PATC2023/figs_res/cal_hcst_24.png") - +``` + +```r # Visualization of RPSS PlotEquiMap(skill$rpss[,,1,,], lat = lat, lon = lon, brks = seq(-1, 1, by = 0.1), filled.continents = FALSE, fileout = "/esarchive/scratch/erifarov/rpackages/PATC2023/figs_res/skill.png") -- GitLab From aa48128b97a6b5dffd91fd0cc57ee22918f7f062 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Thu, 26 Oct 2023 15:09:32 +0200 Subject: [PATCH 06/66] Clean text --- .../PATC2023/handson_2-data-assesment.md | 182 ++++--------- .../PATC2023/handson_2-data-assesment_ans.md | 250 ++++++------------ 2 files changed, 140 insertions(+), 292 deletions(-) diff --git a/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md b/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md index 74ce3eb2..548a1dcb 100644 --- a/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md +++ b/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md @@ -2,7 +2,7 @@ **Goal** -Use CSTools and s2dv to calibrate, compute anomalies and skill of a climate dataset over a period. +Use CSTools and s2dv to perform a quality assesment of a climate model. **Load packages** ```r @@ -16,11 +16,11 @@ In this section we will use the function **Start** to load the data. Then, we wi The `s2dv_cube` object is a structured list that contains the information needed to work with multidimensional arrays of data. Coordinates, dimensions, and metadata are neatly combined to allow for a more intuitive, concise, and less error-prone experience. -**Note:** If you have already loaded the data with Start, go directly to section 1b). +**Note:** If you have already loaded the data with Start, go directly to section **b)**. -### 1a) Load the data +### a) Load the data -The following section is taken from [PATC 2023 startR tutorial](https://earth.bsc.es/gitlab/es/startR/-/blob/doc-bsc_training_2023/inst/doc/tutorial/PATC2023/handson_1-data-loading.md?ref_type=heads). The experiment data are Meteo-France System 7 from ECMWF, and the observation ones are ERA5 from ECMWF. We're going to analyze the near-surface temperature (short name: tas) for seasonal forecast. We will focus on the Europe region (roughly 20W-40E, 20N-80N). The hindcast years are 1993 to 2016, and the forecast year is 2020. The initial month is November. +The following section is taken from [PATC 2023 startR tutorial](https://earth.bsc.es/gitlab/es/startR/-/blob/doc-bsc_training_2023/inst/doc/tutorial/PATC2023/handson_1-data-loading.md?ref_type=heads). 
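"evmos" is not the only option: the `cal.method` argument of `CST_Calibration` accepts several methods (as an indication, recent CSTools versions document "bias", "evmos", "mse_min", "crps_min" and "rpc-based"; check the help page of your installed version for the exact list):

```r
# Inspect the documented options of the 'cal.method' argument
?CSTools::CST_Calibration
```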
The experiment data are Meteo-France System 7, and the observational data are the ECMWF ERA5 reanalysis, both for near-surface temperature (short name: tas). We will focus on the Europe region (roughly 20W-40E, 20N-80N). The hindcast years are 1993 to 2016. Both datasets are interpolated onto a common 1-degree regular grid (`r360x181`, bilinear method) at loading time, so that they can be compared grid point by grid point.
-$ attrs: [List of the attributes] - $ Dates - $ Variable: - $ varName - $ metadata - [...] +'s2dv_cube' +Data [ 294.975204467773, 295.99658203125, 296.999153137207, 296.874618530273, 297.662521362305, 297.113525390625, 296.145011901855, 295.981201171875 ... ] +Dimensions ( dat = 1, var = 1, syear = 24, ensemble = 25, time = 2, latitude = 61, longitude = 61 ) +Coordinates + * dat : dat1 + * var : tas + * syear : 19931101, 19941101, 19951101, 19961101, 19971101, 19981101, 19991101, 20001101, 20011101, 20021101, 20031101, 20041101, 20051101, 20061101, 20071101, 20081101, 20091101, 20101101, 20111101, 20121101, 20131101, 20141101, 20151101, 20161101 + ensemble : 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25 + time : 1, 2 + * latitude : 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80 + * longitude : -20, -19, -18, -17, -16, -15, -14, -13, -12, -11, -10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40 +Attributes + Dates : 1993-11-30 1994-11-30 1995-11-30 1996-11-30 1997-11-30 ... + varName : tas + metadata : + time + other : class, tzone + longitude + other : dim + latitude + other : dim + tas + units : K + long name : 2 metre temperature + other : prec, dim, unlim, make_missing_value, missval, hasAddOffset, hasScaleFact, code, table + Datasets : dat1 + when : 2023-10-26 14:43:30 + source_files : /esarchive/exp/meteofrance/system7c3s/monthly_mean/tas_f6h/tas_19931101.nc ... + load_parameters : + ( dat1 ) : dat = dat1, var = tas, syear = 19931101 ... + ... ``` +> **Note:** An `s2dv_cube` object is an structured list in base R. To acces the elements, you need to use the `$` operator (e.g. `hcst$data`, `hcst$coords`, `hcst$attrs`, ...) -1. Can you tell what type of object is an **s2dv_cube** in R structures? +#### Questions 1: +**Goal:** To find `s2dv_cube` information of `hcst` object. + +1. What type of object is an **s2dv_cube** in base R? ```r -class(hcst) -# "s2dv_cube" +class(____) typeof(___) - ``` -1. Can you tell what type of object is the element data in common language? Use the function `dim()` and `typeof()` to check `hcst$data` +2. What type of object is the element `hcst$data` (common language)? Use the function `dim()` and `typeof()` to check `hcst$data`: ```r +typeof(____) dim(____) - -typeof(hcst$data) - ``` -2. The Dates of an `s2dv_cube` can be found in element: `hcst$attrs$Dates`. -Could you tell what are the time dimensions of the object? -```r +3. What are the **time dimensions** of the `hcst` object? The Dates of an `s2dv_cube` can be found in element: `hcst$attrs$Dates`. -``` -3. The coordinates in the `s2dv_cube` are stored in element `hcst$coords`. What are the coordinates names in the object `hcst`? Use the function `names()` to check: +4. What are the coordinates names in the object `hcst`? Use the function `names()` to check. The coordinates in the `s2dv_cube` are stored in element `hcst$coords`. ```r names(____) ``` -4. In which region we have loaded the data? -```r - -``` -5. What is the start date dimension name? What is the ensemble member dimension name? -```r -hcst$dims -# syear -# ensemble -``` +5. In which **latitude** and **longitude** we have loaded the data? -6. 
How many ensemble members have the `hcst`, `fcst` and `obs` datasets? -```r +6. What is the **start date** dimension name of the `hcst`? What is the **ensemble member** dimension name? -``` -7. The metadata of the `s2dv_cube` is stored in `hcst$attrs$Variable` What is the full variable name of the loaded data? Find out the information in `hcst$attrs$Variable$metadata` with the function `str()`: -```r +7. How many **ensemble members** have the `hcst`, `fcst` and `obs` datasets? -``` -8. What season is the data loaded from? You can use the function `month()` -```r +8. What is the full variable name of the loaded data? Find out the information in `hcst$attrs$Variable$metadata` with the function `str()`. -``` -9. What are the units of the data? -```r +9. From what season is the data loaded from? You can use the function `months()`. -``` +10. What are the **units** of the data? **Exercise 1**: 1. Find the mean, the maximum and the minimum of the `fcst`, `hcst` and obs data and compare it: diff --git a/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md b/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md index 2ff5e31e..7362c980 100644 --- a/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md +++ b/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md @@ -2,7 +2,7 @@ **Goal** -Use CSTools and s2dv to calibrate, compute anomalies and skill of a climate dataset over a period. +Use CSTools and s2dv to perform a quality assesment of a climate model. **Load packages** ```r @@ -16,11 +16,11 @@ In this section we will use the function **Start** to load the data. Then, we wi The `s2dv_cube` object is a structured list that contains the information needed to work with multidimensional arrays of data. Coordinates, dimensions, and metadata are neatly combined to allow for a more intuitive, concise, and less error-prone experience. -**Note:** If you have already loaded the data with Start, go directly to section 1b). +> **Note:** If you have already loaded the data with Start, go directly to section **b)**. -### 1a) Load the data +### a) Load the data -The following section is taken from [PATC 2023 startR tutorial](https://earth.bsc.es/gitlab/es/startR/-/blob/doc-bsc_training_2023/inst/doc/tutorial/PATC2023/handson_1-data-loading.md?ref_type=heads). The experiment data are Meteo-France System 7 from ECMWF, and the observation ones are ERA5 from ECMWF. We're going to analyze the near-surface temperature (short name: tas) for seasonal forecast. We will focus on the Europe region (roughly 20W-40E, 20N-80N). The hindcast years are 1993 to 2016, and the forecast year is 2020. The initial month is November. +The following section is taken from [PATC 2023 startR tutorial](https://earth.bsc.es/gitlab/es/startR/-/blob/doc-bsc_training_2023/inst/doc/tutorial/PATC2023/handson_1-data-loading.md?ref_type=heads). The experiment data are Meteo-France System 7 from ECMWF, and the observation ones are ERA5 from ECMWF for near-surface temperature (short name: tas). We will focus on the Europe region (roughly 20W-40E, 20N-80N). The hindcast years are 1993 to 2016. 
```r # Use this one if on workstation or nord3 (have access to /esarchive) @@ -53,30 +53,6 @@ hcst <- CST_Start(dat = path_exp, ``` -```r -sdate_fcst <- '20201101' - -fcst <- CST_Start(dat = path_exp, - var = 'tas', - syear = sdate_fcst, - ensemble = 'all', - time = 1:2, - latitude = startR::values(list(20, 80)), - latitude_reorder = startR::Sort(), - longitude = startR::values(list(-20, 40)), - longitude_reorder = startR::CircularSort(-180, 180), - transform = startR::CDORemapper, - transform_params = list(grid = 'r360x181', method = 'bilinear'), - transform_vars = c('latitude', 'longitude'), - synonims = list(syear = c('syear', 'sdate'), - latitude = c('lat', 'latitude'), - longitude = c('lon', 'longitude')), - return_vars = list(time = 'syear', - longitude = NULL, latitude = NULL), - retrieve = TRUE) - -``` - ```r # Adjust the day to the correct month hcst$attrs$Dates <- hcst$attrs$Dates - lubridate::days(1) @@ -113,114 +89,86 @@ obs <- CST_Start(dat = path_obs, ``` -### 1b) Create `s2dv_cube`: +### b) Create `s2dv_cube`: -Now we convert the data (`startR_array`) into an 's2dv_cube' object with the function **as.s2dv_cube**: +Now we convert the data (`startR_array`) into an 's2dv_cube' object with the function **as.s2dv_cube** from **CSTools** package: -```r -> str(hcst) - 'startR_array' num [1, 1, 1:24, 1:25, 1:2, 1:60, 1:60] 295 296 297 297 297 ... - - attr(*, "Variables")=List of 2 - ..$ common:List of 4 - .. ..$ time : POSIXct[1:48], format: "1993-12-01" "1994-12-01" ... - .. ..$ longitude: num [1:60(1d)] -19.5 -18.5 -17.5 -16.5 -15.5 -14.5 -13.5 -12.5 -11.5 -10.5 ... - .. .. ..- attr(*, "variables")=List of 1 - .. .. .. ..$ longitude:List of 7 - .. .. .. .. ..$ ndims : num 1 - .. .. .. .. ..$ size : int 360 - .. .. .. .. ..$ units : chr "degrees_east" - .. .. .. .. ..$ dim :List of 1 - .. .. .. .. .. ..$ :List of 10 - -``` -Now, we convert the startR object into 's2dv_cube' object. ```r hcst <- as.s2dv_cube(hcst) -fcst <- as.s2dv_cube(fcst) obs <- as.s2dv_cube(obs) ``` - -```r -> str(hcst) -List of 4 - $ data : num [1, 1, 1:24, 1:25, 1:2, 1:60, 1:60] 295 296 297 297 297 ... - $ dims : Named int [1:7] 1 1 24 25 2 60 60 - ..- attr(*, "names")= chr [1:7] "dat" "var" "syear" "ensemble" ... - $ coords:List of 7 - ..$ dat : chr "dat1" - .. ..- attr(*, "indices")= logi FALSE - ..$ var : chr "tas" - .. ..- attr(*, "values")= logi TRUE - .. ..- attr(*, "indices")= logi FALSE - ..$ syear : chr [1:24] "19931101" "19941101" "19951101" "19961101" ... - .. ..- attr(*, "values")= logi TRUE - .. ..- attr(*, "indices")= logi FALSE - ..$ ensemble : int [1:25] 1 2 3 4 5 6 7 8 9 10 ... - .. ..- attr(*, "indices")= logi TRUE - ..$ time : int [1:2] 1 2 - .. ..- attr(*, "indices")= logi TRUE - ..$ latitude : num [1:60] 20.5 21.5 22.5 23.5 24.5 25.5 26.5 27.5 28.5 29.5 ... - .. ..- attr(*, "variables")=List of 1 - .. .. ..$ latitude:List of 7 - .. .. .. ..$ ndims : num 1 - [...] -``` -**Questions 1:** - -Find `s2dv_cube` information of `hcst` object. Here there are some useful functions: +By printing the object, we see that it has been organized following an order. ```r -str() -typeof() -class() -dim() -names() -summary() +hcst ``` ```r -`s2dv_cube` overview -$ data: [data array] -$ dims: [dimensions vector] -$ coords: [List of coordinates vectors] - [...] -$ attrs: [List of the attributes] - $ Dates - $ Variable: - $ varName - $ metadata - [...] -``` - -1. Can you tell what type of object is an **s2dv_cube** in R structures? 
+'s2dv_cube'
+Data [ 294.975204467773, 295.99658203125, 296.999153137207, 296.874618530273, 297.662521362305, 297.113525390625, 296.145011901855, 295.981201171875 ... ]
+Dimensions ( dat = 1, var = 1, syear = 24, ensemble = 25, time = 2, latitude = 61, longitude = 61 )
+Coordinates
+ * dat : dat1
+ * var : tas
+ * syear : 19931101, 19941101, 19951101, 19961101, 19971101, 19981101, 19991101, 20001101, 20011101, 20021101, 20031101, 20041101, 20051101, 20061101, 20071101, 20081101, 20091101, 20101101, 20111101, 20121101, 20131101, 20141101, 20151101, 20161101
+   ensemble : 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25
+   time : 1, 2
+ * latitude : 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80
+ * longitude : -20, -19, -18, -17, -16, -15, -14, -13, -12, -11, -10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40
+Attributes
+   Dates : 1993-11-30 1994-11-30 1995-11-30 1996-11-30 1997-11-30 ...
+   varName : tas
+   metadata :
+      time
+        other : class, tzone
+      longitude
+        other : dim
+      latitude
+        other : dim
+      tas
+        units : K
+        long name : 2 metre temperature
+        other : prec, dim, unlim, make_missing_value, missval, hasAddOffset, hasScaleFact, code, table
+   Datasets : dat1
+   when : 2023-10-26 14:43:30
+   source_files : /esarchive/exp/meteofrance/system7c3s/monthly_mean/tas_f6h/tas_19931101.nc ...
+   load_parameters :
+       ( dat1 ) : dat = dat1, var = tas, syear = 19931101 ...
+       ...
+```
+> **Note:** An `s2dv_cube` object is a structured list in base R. To access the elements, you need to use the `$` operator (e.g. `hcst$data`, `hcst$coords`, `hcst$attrs`, ...).
+
+#### Questions 1:
+**Goal:** To find the `s2dv_cube` information of the `hcst` object.
+
+1. What type of object is an **s2dv_cube** in base R?
```r
class(hcst) # "s2dv_cube"
typeof(hcst) # "list"
```
-1. Can you tell what type of object is the element data in common language?
+2. What type of object is the element `hcst$data` (in common language)? Use the functions `dim()` and `typeof()` to check `hcst$data`:
```r
-dim(hcst$data)
-# dat var syear ensemble time latitude longitude
-# 1 1 24 25 2 61 61
typeof(hcst$data) # base type
# [1] "double"
+dim(hcst$data) # dimensions
+# dat var syear ensemble time latitude longitude
+# 1 1 24 25 2 61 61

-# Answer: Multi-dimensional array / N-dimensional array /tensor.
+# Answer: Multi-dimensional array / N-dimensional array / Tensor.
```
-2. The Dates of an `s2dv_cube` can be found in element: `hcst$attrs$Dates`.
-Could you tell what are the time dimensions of the object?
+3. What are the **time dimensions** of the `hcst` object? The Dates of an `s2dv_cube` can be found in the element `hcst$attrs$Dates`.
```r
dim(hcst$attrs$Dates)
# syear time
# 24 2
```
-3. The coordinates in the `s2dv_cube` are stored in element `hcst$coords`. What are the coordinates names in the object `hcst`?
+4. What are the coordinate names in the object `hcst`? Use the function `names()` to check. The coordinates in the `s2dv_cube` are stored in the element `hcst$coords`.
```r
names(hcst$coords)
# [1] "dat" "var" "syear" "ensemble" "time" "latitude"
# [7] "longitude"
```
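The coordinate vectors can also be used directly, for instance to label or subset the data array. A minimal sketch, using only elements shown above (the name `field` is illustrative):
```r
# Pick the first dataset, variable, start date, member and forecast time,
# keeping the two spatial dimensions
field <- hcst$data[1, 1, 1, 1, 1, , ]
dim(field)                    # 61 x 61 (latitude x longitude)
range(hcst$coords$latitude)   # 20 80
range(hcst$coords$longitude)  # -20 40
```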
-4. In which region we have loaded the data?
+5. For which **latitudes** and **longitudes** have we loaded the data?
```r
hcst$coords$lat
# latitude: 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44
@@ -233,14 +181,14 @@ hcst$coords$lon
# [39] 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
# [58] 37 38 39 40
```
-5. What is the start date dimension name? What is the ensemble member dimension name?
+6. What is the **start date** dimension name of the `hcst`? What is the **ensemble member** dimension name?
```r
hcst$dims
# syear
# ensemble
```
-6. How many ensemble members have the `hcst`, `fcst` and `obs` datasets?
+7. How many **ensemble members** do the `hcst`, `fcst` and `obs` datasets have?
```r
hcst$dims[['ensemble']]
# [1] 25
fcst$dims[['ensemble']]
obs$dims
# No ensemble member in obs
```
-7. The metadata of the `s2dv_cube` is stored in `hcst$attrs$Variable` What is the full variable name of the loaded data?
+8. What is the full variable name of the loaded data? Find out the information in `hcst$attrs$Variable$metadata` with the function `str()`.
```r
str(hcst$attrs$Variable)
str(hcst$attrs$Variable$metadata$tas$long_name)
# chr "2 metre temperature"
```
-8. What season is the data loaded from?
+9. What season is the data loaded from? You can use the function `months()`.
```r
dim(hcst$attrs$Dates)
hcst$attrs$Dates[1,]
months(hcst$attrs$Dates[1,])
# "December" "January"
```
-9. What are the units of the data?
+10. What are the **units** of the data?
```r
hcst$attrs$Variable$metadata$tas$units
# K
@@ -303,42 +251,6 @@ hcst_cal <- CST_Calibration(exp = hcst, obs = obs,
                             ncores = 10)
 ```
-Calibrate the forecast:
-```r
-
-fcst_cal <- CST_Calibration(exp = hcst, obs = obs, exp_cor = fcst,
-                            cal.method = "evmos",
-                            eval.method = "leave-one-out",
-                            multi.model = FALSE,
-                            na.fill = TRUE,
-                            na.rm = TRUE,
-                            apply_to = NULL,
-                            alpha = NULL,
-                            memb_dim = "ensemble",
-                            sdate_dim = "syear",
-                            ncores = 14)
-```
-**Exercise 2**:
-1. Test the hcst with a different calibration methods available and compare the results.
-```r
-hcst_cal_bias <- CST_Calibration(exp = hcst, obs = obs, exp_cor = fcst,
-                                 cal.method = "bias",
-                                 eval.method = "leave-one-out",
-                                 multi.model = FALSE,
-                                 na.fill = TRUE,
-                                 na.rm = TRUE,
-                                 apply_to = NULL,
-                                 alpha = NULL,
-                                 memb_dim = "ensemble",
-                                 sdate_dim = "syear",
-                                 ncores = 10)
-summary(hcst_cal_bias$data)
-   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
-  240.4   273.6   280.0   280.3   288.1   302.7
-summary(hcst_cal$data)
-   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
-  237.9   272.9   279.8   279.6   287.6   303.5
-```

 ## 3. Compute Anomalies

 Now, we will compute the hindcast anomalies with the calibrated `hcst`. We are going to use the function CST_Anomaly from CSTools.

 This function computes the anomalies relative to a climatology computed along the selected dimension (usually starting dates or forecast time), allowing the application or not of cross-validated climatologies. The computation is carried out independently for experimental and observational data products.

 ```r
 hcst_anom <- CST_Anomaly(exp = hcst_cal, obs = obs,
                          cross = TRUE,
                          memb = TRUE,
                          memb_dim = 'ensemble',
                          dim_anom = 'syear',
                          dat_dim = c('dat', 'ensemble'),
                          ftime_dim = 'time',
                          ncores = 10)
 ```
-**Compute forecast anomaly field**
-```r
-clim <- s2dv::Clim(exp = hcst_cal$data, obs = obs$data,
-                   time_dim = "syear",
-                   dat_dim = c("dat", "ensemble"),
-                   memb = FALSE,
-                   memb_dim = "ensemble",
-                   ftime_dim = "time",
-                   ncores = 10)
-clim_hcst <- InsertDim(clim$clim_exp, posdim = 1, lendim = 1, name = "syear")
-dims <- dim(clim_hcst)
-clim_hcst <- rep(clim_hcst, fcst$dim[['ensemble']])
-dim(clim_hcst) <- c(dims, ensemble = fcst$dim[['ensemble']])
-clim_hcst <- Reorder(clim_hcst, order = names(fcst$dim))
-fcst_anom <- fcst_cal$data - clim_hcst
-```
+

 **Exercise 3**:
 1. Calculate the hcst anomalies with the raw hindcast but change the number of ncores.
 ```r
@@ -422,9 +319,10 @@ logical 6798 402
 logical 7055 387
 ```
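Beyond these global summaries, the `sign` element can be used to focus on where the skill is reliable. A minimal sketch, assuming the `skill` list computed above (the name `rpss_sig` is illustrative):
```r
# Mask out the RPSS values that are not statistically significant
rpss_sig <- skill$rpss
rpss_sig[!skill$sign] <- NA
summary(rpss_sig)
```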
-## 5. Additional Exercise: Visualization
+## 5. Additional Exercises: Visualization

 We can use the function PlotEquiMap from s2dv to visualize the data.

+**Exercise 1: Visualization**

 **Example**
@@ -436,6 +334,7 @@ dim(hcst$data)
 # dat var syear ensemble time latitude longitude
 # 1 1 24 25 2 60 60
 ```
+
 ```r
 lat <- hcst$coords$lat
 lon <- hcst$coords$lon
@@ -453,3 +352,26 @@ PlotEquiMap(skill$rpss[,,1,,], lat = lat, lon = lon, brks = seq(-1, 1, by = 0.1)
             filled.continents = FALSE,
             fileout = "/esarchive/scratch/erifarov/rpackages/PATC2023/figs_res/skill.png")
 ```
+
+**Exercise 2: Change Calibration method and compare results**
+
+1. Test the hcst with the different calibration methods available and compare the results.
+```r
+hcst_cal_bias <- CST_Calibration(exp = hcst, obs = obs,
+                                 cal.method = "bias",
+                                 eval.method = "leave-one-out",
+                                 multi.model = FALSE,
+                                 na.fill = TRUE,
+                                 na.rm = TRUE,
+                                 apply_to = NULL,
+                                 alpha = NULL,
+                                 memb_dim = "ensemble",
+                                 sdate_dim = "syear",
+                                 ncores = 10)
+summary(hcst_cal_bias$data)
+   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
+  240.4   273.6   280.0   280.3   288.1   302.7
+summary(hcst_cal$data)
+   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
+  237.9   272.9   279.8   279.6   287.6   303.5
+```
\ No newline at end of file
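A quick visual check can complement the summaries above. A minimal sketch, assuming `hcst`, `hcst_cal` and `hcst_cal_bias` from the chunks above are still in memory (the plotting choices are illustrative):
```r
# Compare the value distributions of the raw and the two calibrated hindcasts
plot(density(as.vector(hcst$data), na.rm = TRUE),
     main = "Raw vs calibrated hindcast", xlab = "tas (K)")
lines(density(as.vector(hcst_cal$data), na.rm = TRUE), col = "blue")
lines(density(as.vector(hcst_cal_bias$data), na.rm = TRUE), col = "red")
legend("topleft", legend = c("raw", "evmos", "bias"),
       col = c("black", "blue", "red"), lty = 1)
```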
-- GitLab

From 2b1396761dcb8dad08e0cb730a5d8b3f461d94a8 Mon Sep 17 00:00:00 2001
From: Eva Rifa
Date: Thu, 26 Oct 2023 15:12:44 +0200
Subject: [PATCH 07/66] Improve text

---
 inst/doc/tutorial/PATC2023/handson_2-data-assesment.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md b/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md
index 548a1dcb..9e81ec98 100644
--- a/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md
+++ b/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md
@@ -16,7 +16,7 @@ In this section we will use the function **Start** to load the data. Then, we wi
 The `s2dv_cube` object is a structured list that contains the information needed to work with multidimensional arrays of data. Coordinates, dimensions, and metadata are neatly combined to allow for a more intuitive, concise, and less error-prone experience.
-**Note:** If you have already loaded the data with Start, go directly to section 1b).
+> **Note:** If you have already loaded the data with Start, go directly to section **b)**.
 ### a) Load the data
@@ -89,7 +89,7 @@ obs <- CST_Start(dat = path_obs,
 ```
-### 1b) Create `s2dv_cube`:
+### b) Create `s2dv_cube`:
 Now we convert the data (`startR_array`) into an 's2dv_cube' object with the function **as.s2dv_cube** from the **CSTools** package:
-- GitLab

From 987f8bad96c5d0479a5722468dceb3f00a3b878c Mon Sep 17 00:00:00 2001
From: Eva Rifa
Date: Thu, 26 Oct 2023 17:34:15 +0200
Subject: [PATCH 08/66] Improve Handson

---
 .../PATC2023/Figures/hcst_anom_cal.png        | Bin 0 -> 12792 bytes
 .../PATC2023/Figures/hcst_anom_raw.png        | Bin 0 -> 12676 bytes
 .../tutorial/PATC2023/Figures/skill_cal.png   | Bin 0 -> 14520 bytes
 .../tutorial/PATC2023/Figures/skill_raw.png   | Bin 0 -> 14718 bytes
 .../PATC2023/handson_2-data-assesment.md      | 191 +++++++++---------
 .../PATC2023/handson_2-data-assesment_ans.md  | 187 ++++++++--------
 6 files changed, 178 insertions(+), 200 deletions(-)
 create mode 100644 inst/doc/tutorial/PATC2023/Figures/hcst_anom_cal.png
 create mode 100644 inst/doc/tutorial/PATC2023/Figures/hcst_anom_raw.png
 create mode 100644 inst/doc/tutorial/PATC2023/Figures/skill_cal.png
 create mode 100644 inst/doc/tutorial/PATC2023/Figures/skill_raw.png

diff --git a/inst/doc/tutorial/PATC2023/Figures/hcst_anom_cal.png b/inst/doc/tutorial/PATC2023/Figures/hcst_anom_cal.png
new file mode 100644
index 0000000000000000000000000000000000000000..749830c4f39d8f610efb4dbd01339491a35fe516
GIT binary patch
[GIT binary patch payloads omitted: base85-encoded image data for the four new figures hcst_anom_cal.png, hcst_anom_raw.png, skill_cal.png and skill_raw.png under inst/doc/tutorial/PATC2023/Figures/ (not human-readable).]
zyddV)?p5nN+hx61yb zd5Q|tpr4|POMDJV_^rkRX!IPQV4Z}u&}K&Hn$bq@ym~x!;VOb0a5~J5Bk?BO@S6*NoCt>f zm>}V?B5$nhR>B1pd06$n%yE;tSikzSy}Ll~ulqzW`8a?Ue@HjMDYpNNqJypoRR@oi zfnT>&pV(zls|tVahJ=gK5*?duv}-Jq$MNY34~X2wkwc9Ovi)X1N2Pk0B|>SCitYF} zgr3bnL~HU$k4450hA!p3U&Loq;-poy0%%;Vx;fuQ3dLg zce#NcGV~K#yL-ZsiMT8m0JxSVYIMg9Xn8mUq|b}r@#2_^X48aIWmDbpPS+`#XU{~E#4wo42v60*|~2@@)K_=#I9@# zJen4~jv0ClkM=oWM_-p40EY|ykm&jS!1p;&3X_l;z_gxb5VIW%F%L1Q=J?+gU=$|d zsS>9rux?~UC@q+J|M;!`RfhBEd&RX=+EIMORWT?DZT`npUc1Z@irzlI0{VY8g*&A^ zF_iYFK=(q27=)!NkRn%Ua1UJVAN=oiqb#K~SiMuf#}~ZiU_a&&ve=;>+5jL;~?FRQMQ=Qjk3b zn%mT$^G{Xew&QycMHrC6B!8Wj=<wCo&5hQh66%ZVB1>Yz9ZuC4wQ+Qz5fY#to;sYsq?U@P@X+s1D3DjK( z+)Z5s&hh4RRJ5YJ|13z9lWI>&`eMeDZ1`&=@7=10yWzzHZ6I2EKA_^JIX6O=dP%@xqJotNJe(pC2me z{s(?SQ7DIU1sbA=DWPUa_P2B71(okwm%;JhN^{6qIH?^P#3zIArVT=CV+T)fshOCP ziScigPdQISRUl#MYSFhk&R8hb8M27fDSQ0X$q9ktF#g*OS-j5&ayV&L4l#2eFiG%; zQ&VR1L7G#l(PhE9l>HnkJpgB9Lqw>Qi;N^@>m^}*=W8JmR$+m*7jLu4tHD_?4?Dio zQI^HMel)TO-2$n!X|^=I>BJ+7M1IAICjRL=jU6%pA(VIuIdx86kVWoDdpcP+q8|KL^mVb2ih>R*l5IoIg5} z$%H)t7O+bB2IN}*pV0Y>h!@dntY}|e5AbAj4R#3~K+fvLnA@Ln($%QZNz3kO>f8{fvP81X%u zObA@c{he2pa8m9BzU-g03tx7!`d`IU4qTAyYRNq?Rg;fs_0Q`vM*VmK@A*&f>5s+( zL2q`cv{K7yejBl%3BT)>@%YYyQ$~Fqn+hPE_Mcp$0S3yt4g1l{MbQWWx?rp6raskJ z8i1{#nHxA4|Nikxrf5{h#}Ilj_ge&1`E;5xU5#492Lv{I- zukVFx&$v2NQP0DOtrtms9`;j)*f0h|-}1?s>oQ^&6Ua}s8KKrfOqu+l`5QF$m8}k= z%t4$o_~{v|cR7 zf+Ek(kK}m6DC1}6r#Q-pk-n$#L0y(kD{6)JXIXRr2Oa2)=Ng61-`ne`_sI18gZRq` z`KbD!6H;rRKZQTd(Bea<879L2_(}MI_R^;r4FBiQ|9_y=%Z;Z>Thx7OJoaLUaN#(% z36C<^%h zzVI3m-ML-)pBL@TKnXQ)I~*)vuFt!Y#xxrEysyoWX5;tT9g&B=e_xSf5zG3K{1;#3 za2^%XhPkZDtc<1jujd+LFCeD!Tu$k!^X$;NTe5KOMS5XAXQF+u_ce z*Gv@Z$eOuo2<=t=xt|rKb%BGmM@@b`98Da6*ahdo&u#l8yuJEuO+S#vO>i+b)b^w) z*WxPEXaXNmYqVyo7;!v`JV*C|A*;& b6qX&dw&zvxM)ouX5@K-4ME8~UjfnpNiX51X literal 0 HcmV?d00001 diff --git a/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md b/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md index 9e81ec98..c32a2c67 100644 --- a/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md +++ b/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md @@ -12,9 +12,9 @@ library(s2dv) ## 1. Load the data -In this section we will use the function **Start** to load the data. Then, we will transfrom the output `startR_array` to an `s2dv_cube` object in order that the data is easy to use within CSTools functions. +In this section we will use the function **Start** to load the data. Then, we will transfrom the output **startR_array** to an **s2dv_cube** object in order that the data is easy to use within CSTools functions. -The `s2dv_cube` object is a structured list that contains the information needed to work with multidimensional arrays of data. Coordinates, dimensions, and metadata are neatly combined to allow for a more intuitive, concise, and less error-prone experience. +The **s2dv_cube** object is a structured list that contains the information needed to work with multidimensional arrays of data. Coordinates, dimensions, and metadata are neatly combined to allow for a more intuitive, concise, and less error-prone experience. > **Note:** If you have already loaded the data with Start, go directly to section **b)**. 
@@ -91,17 +91,16 @@ obs <- CST_Start(dat = path_obs,

### b) Create `s2dv_cube`:

-Now we convert the data (`startR_array`) into an 's2dv_cube' object with the function **as.s2dv_cube** from **CSTools** package:
+Now we convert the **hindcast and observations data** (**startR_array**) into an **s2dv_cube** object with the function **as.s2dv_cube** from the **CSTools** package:

```r
hcst <- as.s2dv_cube(hcst)
obs <- as.s2dv_cube(obs)
```
By printing the object, we can see how its contents are organized:

```r
> hcst
's2dv_cube'
Data [ 294.975204467773, 295.99658203125, 296.999153137207, 296.874618530273, 297.662521362305, 297.113525390625, 296.145011901855, 295.981201171875 ... ]
Dimensions ( dat = 1, var = 1, syear = 24, ensemble = 25, time = 2, latitude = 61, longitude = 61 )
Coordinates
 * dat : dat1
 * var : tas
 * syear : 19931101, 19941101, 19951101, 19961101, 19971101, 19981101, 19991101, 20001101, 20011101, 20021101, 20031101, 20041101, 20051101, 20061101, 20071101, 20081101, 20091101, 20101101, 20111101, 20121101, 20131101, 20141101, 20151101, 20161101
   ensemble : 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25
   time : 1, 2
 * latitude : 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80
 * longitude : -20, -19, -18, -17, -16, -15, -14, -13, -12, -11, -10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40
Attributes
   Dates : 1993-11-30 1994-11-30 1995-11-30 1996-11-30 1997-11-30 ...
   varName : tas
   metadata :
      time
        other : class, tzone
      longitude
        other : dim
      latitude
        other : dim
      tas
        units : K
        long name : 2 metre temperature
        other : prec, dim, unlim, make_missing_value, missval, hasAddOffset, hasScaleFact, code, table
   Datasets : dat1
   when : 2023-10-26 14:43:30
   source_files : /esarchive/exp/meteofrance/system7c3s/monthly_mean/tas_f6h/tas_19931101.nc ...
   load_parameters :
       ( dat1 ) : dat = dat1, var = tas, syear = 19931101 ...
       ...
```
> **Note:** An **s2dv_cube** object is a structured list in base R. To access the elements, you need to use the `$` operator (e.g. `hcst$data`, `hcst$coords`, `hcst$attrs`, ...); a short sketch follows Exercise 1 below.

#### Exercise 1
**Goal:** To find the **s2dv_cube** information of the **hindcast** data.

1. What type of object is an **s2dv_cube** in base R?
```r
class(____)
typeof(____)
```
2. What type of object is the element `hcst$data` (in common language)? Use the functions **dim()** and **typeof()** to check `hcst$data`:
```r
typeof(____)
dim(____)
```
3. What are the **time dimensions** of the **hindcast** data? The Dates of an **s2dv_cube** can be found in the element `hcst$attrs$Dates`.

4. What are the coordinate names in the **hindcast**? Use the function **names()** to check. The coordinates of the **s2dv_cube** are stored in the element `hcst$coords`.

```r
names(____)
```

5. For which **latitudes** and **longitudes** have we loaded the data?

6. What is the **start date** dimension name of the `hcst`? What is the **ensemble member** dimension name?

7. How many **ensemble members** do the **hindcast** and **observations** datasets have?

8. What is the full variable name of the loaded data? Find out the information in `hcst$attrs$Variable$metadata` with the function **str()**.

9. What season is the data loaded from? You can use the function **months()**.

10. What are the **units** of the data?
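As a pointer for Exercise 1, here is a minimal sketch of how the `$` accessor walks the list, assuming the conversion above has been run (only element names shown in the printout are used):
```r
# The four top-level elements of the s2dv_cube list
names(hcst)                   # "data" "dims" "coords" "attrs"
# Nested elements are chained with `$`
dim(hcst$data)                # named dimension vector of the data array
hcst$coords$latitude[1:5]     # first latitude values
hcst$attrs$Variable$varName   # "tas"
```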
+## 2. Calibrate the data
 
+The first step of the quality assessment is to correct the biases and the dispersion errors of the model. The function **Calibration** from **CSTools** allows us to choose among different member-by-member calibration techniques.
 
-## 2. Calibrate the data
-Now, we are going to use the function CST_Calibration from CSTools to calibrate the hindcast. The parameter `cal.method` alllows us to chose the calibration method. In this exercise we are going to chose the `evmos`.
+In this case, we are going to choose the method **"evmos"**, which applies a variance inflation technique to ensure the correction of the bias and the correspondence of variance between forecast and observation (Van Schaeybroeck and Vannitsem, 2011).
 
-The "evmos" method applies a variance inflation technique to ensure the correction of
-the bias and the correspondence of variance between forecast and observation (Van Schaeybroeck and Vannitsem, 2011).
-```r
-hcst_cal <- CST_Calibration(exp = hcst, obs = obs,
-                            cal.method = "evmos",
-                            eval.method = "leave-one-out",
-                            multi.model = FALSE,
-                            na.fill = TRUE,
-                            na.rm = TRUE,
-                            apply_to = NULL,
-                            alpha = NULL,
-                            memb_dim = "ensemble",
-                            sdate_dim = "syear",
-                            ncores = 10)
-```
+> **Note:** The functions of **CSTools** whose names start with the prefix **CST_** work directly with **s2dv_cube** objects. If we are not using an **s2dv_cube**, we can use the standard versions without the prefix, which work with arrays (e.g. **Calibration** and **Anomaly** instead of **CST_Calibration** and **CST_Anomaly**).
 
-Now we are going to calibrate also the forecast:
-```r
+To calibrate the hindcast, we also need the observations, and we have to specify some parameters, such as the adjustment specifications and the dimension names.
+
+#### Exercise 2:
-
-fcst_cal <- CST_Calibration(exp = hcst, obs = obs, exp_cor = fcst,
+**Goal:** Calibrate the hindcast by completing the ensemble member and start date dimension names of the data.
+
+```r
+hcst_cal <- CST_Calibration(exp = hcst, obs = obs,
                             cal.method = "evmos",
                             eval.method = "leave-one-out",
                             multi.model = FALSE,
@@ -205,47 +191,42 @@ fcst_cal <- CST_Calibration(exp = hcst, obs = obs, exp_cor = fcst,
                             na.rm = TRUE,
                             apply_to = NULL,
                             alpha = NULL,
-                            memb_dim = "ensemble",
-                            sdate_dim = "syear",
+                            memb_dim = ____,
+                            sdate_dim = ____,
                             ncores = 10)
 ```
-**Exercise 2**:
-1. Test the hcst with a different calibration methods available and compare the results. You can use the function `summary()` to see the differences.
-```r
-hcst_cal_bias <- CST_Calibration(____)
-```
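The removed exercise above suggested `summary()` for comparing calibration methods; the same idea works for checking what the calibration did. A small sketch (it assumes Exercise 2 has been completed so that `hcst_cal` exists):

```r
# Compare raw and calibrated hindcast distributions against the observations
# (assumes `hcst_cal` was created in Exercise 2):
summary(hcst$data)      # raw hindcast values (tas, in K)
summary(hcst_cal$data)  # calibrated values
summary(obs$data)       # observations, the calibration target
```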
 
 ## 3. Compute Anomalies
+In this section we will compute the hindcast anomalies from the data calibrated in the previous step.
 
-Now, we will compute the hindcast anomalies with the calibrated `hcst`. We are going to use the function CST_Anomaly from CSTools.
+Anomalies are deviations from the average weather conditions over a long period. A positive anomaly indicates that the conditions are higher than the average, while a negative one indicates that they are lower. Calculating anomalies is an important step of the model quality assessment for several reasons, such as removing seasonal variations, easing visualization, and informing policy and decision-making (see the toy computation after Exercise 3).
 
-This function computes the anomalies relative to a climatology computed along the selected dimension (usually starting dates or forecast time) allowing the application or not of crossvalidated climatologies. The computation is carried out independently for experimental and observational data products.
+We are going to use the function **CST_Anomaly** from **CSTools**. This function computes the anomalies relative to a climatology computed along the selected dimension (in our case, the start dates). The computation is carried out independently for the experimental and observational datasets.
+
+#### Exercise 3:
+**Goal:** Calculate the hindcast anomalies from the calibrated hindcast and the observations. You can take a look at the [CSTools package documentation](https://cran.r-project.org/web/packages/CSTools/CSTools.pdf), page 40.
 
 ```r
-hcst_anom <- CST_Anomaly(exp = hcst_cal, obs = obs,
+hcst_anom <- CST_Anomaly(exp = ____,
+                         obs = ____,
                          cross = TRUE,
                          memb = TRUE,
-                         memb_dim = 'ensemble',
-                         dim_anom = 'syear',
+                         memb_dim = ____,
+                         dim_anom = ____,
                          dat_dim = c('dat', 'ensemble'),
-                         ftime_dim = 'time',
+                         ftime_dim = ____,
                          ncores = 10)
 ```
 
-**Exercise 3**:
-1. Calculate the hcst anomalies with the raw hindcast but change the number of ncores.
-```r
-t1 <- Sys.time()
-hcst_anom_raw <- CST_Anomaly(____)
-t2 <- Sys.time()
-t2-t1
-```
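To make the definition above concrete, here is a toy computation in plain base R (illustrative numbers only; this is not what `CST_Anomaly` does internally beyond the basic idea):

```r
# An anomaly is the deviation from the climatological mean along start dates.
x <- c(280, 281, 279, 284)  # toy tas values (K) at one grid point, 4 start dates
clim <- mean(x)             # climatology: 281
x - clim                    # anomalies: -1  0 -2  3
```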
 
 ## 4. Compute skill: RPSS
 
-In order to trust the results we need to evaluate the skill of the system.
+To trust the climate model, we need to evaluate its skill. To do so, we are going to use the Ranked Probability Skill Score (RPSS; Wilks, 2011), a skill score based on the Ranked Probability Score (RPS; Wilks, 2011). It can be used to assess whether a forecast presents an improvement or a worsening with respect to a reference forecast.
 
-Compute Ranked Probability Skill Score for anomalies:
+The RPSS ranges between minus infinity and 1. If the RPSS is positive, the forecast has higher skill than the reference forecast, while a negative value means that it has lower skill. It is computed as `RPSS = 1 - RPS_exp / RPS_ref` (a small numeric sketch follows Exercise 4). The statistical significance is obtained with a Random Walk test at the specified confidence level (DelSole and Tippett, 2016).
+
+Next, we compute the RPSS for the anomalies:
 ```r
 skill <- RPSS(exp = hcst_anom$exp$data, obs = hcst_anom$obs$data,
               time_dim = 'syear',
@@ -254,50 +235,62 @@ skill <- RPSS(exp = hcst_anom$exp$data, obs = hcst_anom$obs$data,
               cross.val = TRUE,
               ncores = 10)
 ```
-**Exercise 4**:
-1. Compare the result with the raw results.
+
+The output of the **RPSS** function is a list of two elements: the first element, `rpss`, is the RPSS itself; the second element, `sign`, is a logical array flagging the statistical significance of the RPSS, with the same dimensions as `rpss`.
+
+#### Exercise 4:
+
+**Goal:** Compare the **RPSS** results for the calibrated and the raw anomalies.
 ```r
-skill_raw <- RPSS(exp = hcst_anom_raw$exp$data, obs = hcst_anom_raw$obs$data,
-                  time_dim = 'syear',
-                  memb_dim = 'ensemble',
-                  Fair = FALSE,
-                  cross.val = TRUE,
-                  ncores = 10)
-> summary(skill$rpss)
-    Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
--0.54005 -0.11225 -0.03170 -0.03722  0.04480  0.44376
-> summary(skill$sign)
-   Mode   FALSE    TRUE
-logical    6798     402
-> summary(skill_raw$rpss)
-    Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
--0.59113 -0.11045 -0.03069 -0.03585  0.04637  0.44305
-> summary(skill_raw$sign)
-   Mode   FALSE    TRUE
-logical    7055     387
-```
+hcst_anom <- CST_Anomaly(exp = ____,
+                         obs = ____,
+                         cross = ____,
+                         memb = ____,
+                         memb_dim = ____,
+                         dim_anom = ____,
+                         dat_dim = ____,
+                         ftime_dim = ____,
+                         ncores = ____)
+skill_raw <- RPSS(exp = ____,
+                  obs = ____,
+                  time_dim = ____,
+                  memb_dim = ____,
+                  Fair = ____,
+                  cross.val = ____,
+                  ncores = ____)
+
+summary(____)
+```
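A tiny numeric sketch of the `RPSS = 1 - RPS_exp / RPS_ref` formula quoted above (the RPS values are made up for illustration):

```r
# Made-up RPS values for a single grid point:
rps_exp <- 0.15        # RPS of the forecast system
rps_ref <- 0.20        # RPS of the reference forecast (e.g. climatology)
1 - rps_exp / rps_ref  # RPSS = 0.25 > 0: the system beats the reference
```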
 
-## 5. Additional Exercise: Visualization
-
-We can use the function from s2dv PlotEquiMap to visualize the data.
+## 5. Additional Exercises: Visualization
 
-**Example**
+#### Exercise 5
+**Goal:** Use the function **PlotEquiMap** from **s2dv** to compare the raw and calibrated data.
 
-With the following code, we will plot a map to compare the hindcast raw and calibrated data. Can we appreciate the differences?
+We are going to plot the **last year** of the hindcast period (**2016**) for the last timestep (**December**). Also, we are going to use the **last ensemble member** (an arbitrary choice).
 
-We are going to plot the last year of the hindcast period (2016) for the last timestep (December). Also, we are going to use the last ensemble member (arbirtrary choice).
 ```r
-dim(hcst$data)
- # dat var syear ensemble time latitude longitude
- #   1   1    24       25    2       60        60
-```
-```r
+lat <- hcst$coords$lat
+lon <- hcst$coords$lon
+
+PlotEquiMap(var = hcst_anom$exp$data[24, , 25, , 2, , ], lat = lat, lon = lon,
+            filled.continents = FALSE,
+            fileout = ____)
+PlotEquiMap(var = ____, lat = ____, lon = ____,
+            filled.continents = ____,
+            fileout = ____)
 ```
+
+#### Exercise 6
+**Goal:** Use the function **PlotEquiMap** from **s2dv** to compare the RPSS results with calibrated and raw data.
+
+```r
+PlotEquiMap(var = ____, lat = ____, lon = ____,
+            brks = seq(-1, 1, by = 0.1),
+            filled.continents = ____,
+            fileout = ____)
+```
\ No newline at end of file
diff --git a/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md b/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md
index 7362c980..33e9bed9 100644
--- a/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md
+++ b/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md
@@ -12,9 +12,9 @@ library(s2dv)
 
 ## 1. Load the data
 
-In this section we will use the function **Start** to load the data. Then, we will transfrom the output `startR_array` to an `s2dv_cube` object in order that the data is easy to use within CSTools functions.
+In this section we will use the function **Start** to load the data. Then, we will transform the output **startR_array** into an **s2dv_cube** object so that the data is easier to use within CSTools functions.
 
-The `s2dv_cube` object is a structured list that contains the information needed to work with multidimensional arrays of data. Coordinates, dimensions, and metadata are neatly combined to allow for a more intuitive, concise, and less error-prone experience.
+The **s2dv_cube** object is a structured list that contains the information needed to work with multidimensional arrays of data. Coordinates, dimensions, and metadata are neatly combined to allow for a more intuitive, concise, and less error-prone experience.
 
 > **Note:** If you have already loaded the data with Start, go directly to section **b)**.
@@ -91,17 +91,16 @@ obs <- CST_Start(dat = path_obs,
 
 ### b) Create `s2dv_cube`:
 
-Now we convert the data (`startR_array`) into an 's2dv_cube' object with the function **as.s2dv_cube** from **CSTools** package:
+Now we convert the **hindcast and observations data** (**startR_array**) into an **s2dv_cube** object with the function **as.s2dv_cube** from the **CSTools** package:
 ```r
 hcst <- as.s2dv_cube(hcst)
 obs <- as.s2dv_cube(obs)
 ```
 By printing the object, we can see how its contents are organized.
+
 ```r
-hcst
-```
-```r
+> hcst
 's2dv_cube'
 Data          [ 294.975204467773, 295.99658203125, 296.999153137207, 296.874618530273, 297.662521362305, 297.113525390625, 296.145011901855, 295.981201171875 ... ]
 Dimensions    ( dat = 1, var = 1, syear = 24, ensemble = 25, time = 2, latitude = 61, longitude = 61 )
@@ -134,10 +133,10 @@ Attributes ( dat1 ) :
    dat = dat1, var = tas, syear = 19931101 ...
 ...
 ```
-> **Note:** An `s2dv_cube` object is an structured list in base R. To acces the elements, you need to use the `$` operator (e.g. `hcst$data`, `hcst$coords`, `hcst$attrs`, ...)
+> **Note:** An **s2dv_cube** object is a structured list in base R. To access its elements, use the `$` operator (e.g. `hcst$data`, `hcst$coords`, `hcst$attrs`, ...)
 
-#### Questions 1:
-**Goal:** To find `s2dv_cube` information of `hcst` object.
+#### Exercise 1
+**Goal:** Explore the **s2dv_cube** information of the **hindcast** data.
 
 1. What type of object is an **s2dv_cube** in base R?
 ```r
@@ -146,7 +145,7 @@ class(hcst)
 typeof(hcst)
 # "list"
 ```
-2. What type of object is the element `hcst$data` (common language)? Use the function `dim()` and `typeof()` to check `hcst$data`:
+2. What type of object is the element `hcst$data` (in common language)? Use the functions **dim()** and **typeof()** to check `hcst$data`:
 ```r
 typeof(hcst$data) # base type
 # [1] "double"
@@ -156,13 +155,13 @@ dim(hcst$data) # dimensions
 
 # Answer: Multi-dimensional array / N-dimensional array / Tensor.
 ```
-3. What are the **time dimensions** of the `hcst` object? The Dates of an `s2dv_cube` can be found in element: `hcst$attrs$Dates`.
+3. What are the **time dimensions** of the **hindcast** data? The Dates of an **s2dv_cube** can be found in the element `hcst$attrs$Dates`.
 ```r
 dim(hcst$attrs$Dates)
 # syear  time 
 #    24     2 
 ```
-4. What are the coordinates names in the object `hcst`? Use the function `names()` to check. The coordinates in the `s2dv_cube` are stored in element `hcst$coords`.
+4. What are the coordinate names in the **hindcast**? Use the function **names()** to check. The coordinates of the **s2dv_cube** are stored in the element `hcst$coords`.
 ```r
 names(hcst$coords)
 # [1] "dat"       "var"       "syear"     "ensemble"  "time"      "latitude" 
@@ -181,14 +180,14 @@ hcst$coords$lon
 # [39]  18  19  20  21  22  23  24  25  26  27  28  29  30  31  32  33  34  35  36
 # [58]  37  38  39  40
 ```
-6. What is the **start date** dimension name of the `hcst`? What is the **ensemble member** dimension name?
+6. What is the **start date** dimension name of the **hindcast**? What is the **ensemble member** dimension name?
 ```r
 hcst$dims
 # syear
 # ensemble
 ```
-7. How many **ensemble members** have the `hcst`, `fcst` and `obs` datasets?
+7. How many **ensemble members** do the **hindcast** and **observations** datasets have?
 ```r
 hcst$dims[['ensemble']]
 # [1] 25
 fcst$dims[['ensemble']]
 obs$dims
 # No ensemble member in obs
 ```
-8. What is the full variable name of the loaded data? Find out the information in `hcst$attrs$Variable$metadata` with the function `str()`.
+8. What is the full variable name of the loaded data? Find the information in `hcst$attrs$Variable$metadata` with the function **str()**.
 ```r
 str(hcst$attrs$Variable)
 str(hcst$attrs$Variable$metadata$tas$long_name)
 #  chr "2 metre temperature"
 ```
-9. From what season is the data loaded from? You can use the function `months()`.
+9. From what season is the data loaded? You can use the function **months()**.
 ```r
 dim(hcst$attrs$Dates)
 hcst$attrs$Dates[1,]
@@ -216,27 +215,20 @@ hcst$attrs$Variable$metadata$tas$units
 # K
 ```
 
-**Exercise 1**:
-1. Find the mean, the maximum and the minimum of the `fcst`, `hcst` and obs data and compare it:
+## 2. Calibrate the data
 
-```r
-summary(hcst$data)
-#  Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
-#  237.3   274.1   280.5   280.1   287.8   303.5 
-summary(hcst$data)
-#  Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
-#  237.3   274.1   280.5   280.1   287.8   303.5 
-summary(fcst$data)
-#  Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
-#  240.6   274.9   280.8   280.8   288.2   303.4 
-summary(obs$data)
-#  Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
-#  239.8   272.9   279.8   279.6   287.7   303.0 
-```
+The first step of the quality assessment is to correct the biases and the dispersion errors of the model. The function **Calibration** from **CSTools** allows us to choose among different member-by-member calibration techniques.
 
-## 2. Calibrate the data
+In this case, we are going to choose the method **"evmos"**, which applies a variance inflation technique to ensure the correction of the bias and the correspondence of variance between forecast and observation (Van Schaeybroeck and Vannitsem, 2011).
+
+> **Note:** The functions of **CSTools** whose names start with the prefix **CST_** work directly with **s2dv_cube** objects. If we are not using an **s2dv_cube**, we can use the standard versions without the prefix, which work with arrays (e.g. **Calibration** and **Anomaly** instead of **CST_Calibration** and **CST_Anomaly**).
+
+To calibrate the hindcast, we also need the observations, and we have to specify some parameters, such as the adjustment specifications and the dimension names.
+
+#### Exercise 2:
+
+**Goal:** Calibrate the hindcast by completing the ensemble member and start date dimension names of the data.
 
-Calibrate the hindcast:
 ```r
 hcst_cal <- CST_Calibration(exp = hcst, obs = obs,
                             cal.method = "evmos",
@@ -251,12 +243,20 @@ hcst_cal <- CST_Calibration(exp = hcst, obs = obs,
                             ncores = 10)
 ```
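A note on `eval.method = "leave-one-out"` used in the calibration calls: each start date is corrected with statistics computed on the remaining years only. A toy sketch of the idea in plain R (illustrative only, not the CSTools internals):

```r
# Leave-one-out: the i-th year never contributes to its own correction.
x <- c(1, 2, 3, 4)
loo_mean <- sapply(seq_along(x), function(i) mean(x[-i]))
loo_mean  # 3.000000 2.666667 2.333333 2.000000
```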
 
 ## 3. Compute Anomalies
 
-Now, we will compute the hindcast anomalies with the calibrated `hcst`.
+In this section we will compute the hindcast anomalies from the data calibrated in the previous step.
+
+Anomalies are deviations from the average weather conditions over a long period. A positive anomaly indicates that the conditions are higher than the average, while a negative one indicates that they are lower. Calculating anomalies is an important step of the model quality assessment for several reasons, such as removing seasonal variations, easing visualization, and informing policy and decision-making.
+
+We are going to use the function **CST_Anomaly** from **CSTools**. This function computes the anomalies relative to a climatology computed along the selected dimension (in our case, the start dates). The computation is carried out independently for the experimental and observational datasets.
+
+#### Exercise 3:
+**Goal:** Calculate the hindcast anomalies from the calibrated hindcast and the observations. You can take a look at the [CSTools package documentation](https://cran.r-project.org/web/packages/CSTools/CSTools.pdf), page 40.
 
 ```r
-hcst_anom <- CST_Anomaly(exp = hcst_cal, obs = obs,
+hcst_anom <- CST_Anomaly(exp = hcst_cal,
+                         obs = obs,
                          cross = TRUE,
                          memb = TRUE,
                          memb_dim = 'ensemble',
                          dim_anom = 'syear',
                          dat_dim = c('dat', 'ensemble'),
                          ftime_dim = 'time',
                          ncores = 10)
 ```
 
-**Exercise 3**:
-1. Calculate the hcst anomalies with the raw hindcast but change the number of ncores.
-```r
-t1 <- Sys.time()
-hcst_anom_raw <- CST_Anomaly(exp = hcst, obs = obs,
-                             cross = TRUE,
-                             memb = TRUE,
-                             memb_dim = 'ensemble',
-                             dim_anom = 'syear',
-                             dat_dim = c('dat', 'ensemble'),
-                             ftime_dim = 'time',
-                             ncores = 5)
-t2 <- Sys.time()
-t2-t1
-```
-
 ## 4. Compute skill: RPSS
 
-Compute Ranked Probability Skill Score for anomalies:
+To trust the climate model, we need to evaluate its skill. To do so, we are going to use the Ranked Probability Skill Score (RPSS; Wilks, 2011), a skill score based on the Ranked Probability Score (RPS; Wilks, 2011). It can be used to assess whether a forecast presents an improvement or a worsening with respect to a reference forecast.
+
+The RPSS ranges between minus infinity and 1. If the RPSS is positive, the forecast has higher skill than the reference forecast, while a negative value means that it has lower skill. It is computed as `RPSS = 1 - RPS_exp / RPS_ref`. The statistical significance is obtained with a Random Walk test at the specified confidence level (DelSole and Tippett, 2016).
+
+Next, we compute the RPSS for the anomalies:
 ```r
-skill <- RPSS(exp = hcst_anom$exp$data, obs = hcst_anom$obs$data,
+skill <- RPSS(exp = hcst_anom$exp$data,
+              obs = hcst_anom$obs$data,
               time_dim = 'syear',
               memb_dim = 'ensemble',
               Fair = FALSE,
               cross.val = TRUE,
               ncores = 10)
 ```
-**Exercise 4**:
-1. Compare the result with the raw results.
+
+The output of the **RPSS** function is a list of two elements: the first element, `rpss`, is the RPSS itself; the second element, `sign`, is a logical array flagging the statistical significance of the RPSS, with the same dimensions as `rpss`.
+
+#### Exercise 4:
+**Goal:** Compare the **RPSS** results for the calibrated and the raw anomalies.
 ```r
-skill_raw <- RPSS(exp = hcst_anom_raw$exp$data, obs = hcst_anom_raw$obs$data,
+hcst_anom_raw <- CST_Anomaly(exp = hcst,
+                             obs = obs,
+                             cross = TRUE,
+                             memb = TRUE,
+                             memb_dim = 'ensemble',
+                             dim_anom = 'syear',
+                             dat_dim = c('dat', 'ensemble'),
+                             ftime_dim = 'time',
+                             ncores = 10)
+skill_raw <- RPSS(exp = hcst_anom_raw$exp$data,
+                  obs = hcst_anom_raw$obs$data,
                   time_dim = 'syear',
                   memb_dim = 'ensemble',
                   Fair = FALSE,
@@ -321,57 +324,39 @@ logical    7055     387
 
 ## 5. Additional Exercises: Visualization
 
-We can use the function from s2dv PlotEquiMap to visualize the data.
-**Exercise 1: Visualization**
-
-**Example**
+#### Exercise 5
+**Goal:** Use the function **PlotEquiMap** from **s2dv** to compare the raw and calibrated data.
 
-With the following code, we will plot a map to compare the hindcast raw and calibrated data. Can we appreciate the differences?
+We are going to plot the **last year** of the hindcast period (**2016**) for the last timestep (**December**). Also, we are going to use the **last ensemble member** (an arbitrary choice).
 
-We are going to plot the last year of the hindcast period (2016) for the last timestep (December). Also, we are going to use the last ensemble member (arbirtrary choice).
 ```r
-dim(hcst$data)
- # dat var syear ensemble time latitude longitude
- #   1   1    24       25    2       60        60
-```
-```r
 lat <- hcst$coords$lat
 lon <- hcst$coords$lon
 
-# Visualization: compare raw and calibrated data
-PlotEquiMap(hcst$data[,,24,1,1,,], lat = lat, lon = lon,
-            filled.continents = FALSE, fileout = "/esarchive/scratch/erifarov/rpackages/PATC2023/figs_res/raw_hcst_24.png")
-PlotEquiMap(hcst_cal$data[,,24,1,1,,], lat = lat, lon = lon,
-            filled.continents = FALSE, fileout = "/esarchive/scratch/erifarov/rpackages/PATC2023/figs_res/cal_hcst_24.png")
-```
-
-```r
-# Visualization of RPSS
-PlotEquiMap(skill$rpss[,,1,,], lat = lat, lon = lon, brks = seq(-1, 1, by = 0.1),
-            filled.continents = FALSE, fileout = "/esarchive/scratch/erifarov/rpackages/PATC2023/figs_res/skill.png")
+PlotEquiMap(hcst_anom$exp$data[24, , 25, , 2, , ], lat = lat, lon = lon,
+            filled.continents = FALSE,
+            fileout = "/esarchive/scratch/erifarov/git/cstools/inst/doc/tutorial/PATC2023/Figures/hcst_anom_cal.png")
+PlotEquiMap(hcst_anom_raw$exp$data[24, , 25, , 2, , ], lat = lat, lon = lon,
+            filled.continents = FALSE,
+            fileout = "/esarchive/scratch/erifarov/git/cstools/inst/doc/tutorial/PATC2023/Figures/hcst_anom_raw.png")
 ```
+![](./Figures/hcst_anom_cal.png)
+![](./Figures/hcst_anom_raw.png)
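For a quick interactive look instead of writing PNG files, `fileout` can be omitted (its default, `NULL`, should open a graphics device). A sketch, with a hypothetical title that is not part of the original script:

```r
# Display the calibrated anomaly map on screen instead of saving it:
PlotEquiMap(hcst_anom$exp$data[24, , 25, , 2, , ], lat = lat, lon = lon,
            toptitle = "Calibrated tas anomalies (2016, member 25)",  # hypothetical label
            filled.continents = FALSE)
```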
 
-**Exercise 2: Change Calibration method and compare results**
+#### Exercise 6
+**Goal:** Use the function **PlotEquiMap** from **s2dv** to compare the RPSS results with calibrated and raw data.
 
-1. Test the hcst with a different calibration methods available and compare the results.
-```r
-hcst_cal_bias <- CST_Calibration(exp = hcst, obs = obs,
-                    cal.method = "bias",
-                    eval.method = "leave-one-out",
-                    multi.model = FALSE,
-                    na.fill = TRUE,
-                    na.rm = TRUE,
-                    apply_to = NULL,
-                    alpha = NULL,
-                    memb_dim = "ensemble",
-                    sdate_dim = "syear",
-                    ncores = 10)
-summary(hcst_cal_bias$data)
-    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
-  240.4   273.6   280.0   280.3   288.1   302.7 
-summary(hcst_cal$data)
-    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
-  237.9   272.9   279.8   279.6   287.6   303.5 
-```
\ No newline at end of file
+```r
+PlotEquiMap(skill$rpss[ , , 2, , ], lat = lat, lon = lon,
+            brks = seq(-1, 1, by = 0.1),
+            filled.continents = FALSE,
+            fileout = "/esarchive/scratch/erifarov/git/cstools/inst/doc/tutorial/PATC2023/Figures/skill_cal.png")
+PlotEquiMap(skill_raw$rpss[ , , 2, , ], lat = lat, lon = lon,
+            brks = seq(-1, 1, by = 0.1),
+            filled.continents = FALSE,
+            fileout = "/esarchive/scratch/erifarov/git/cstools/inst/doc/tutorial/PATC2023/Figures/skill_raw.png")
+```
+![](./Figures/skill_cal.png)
+![](./Figures/skill_raw.png)
\ No newline at end of file
-- 
GitLab


From f887980d71e6d37c30cc364d77a745af6cc92e57 Mon Sep 17 00:00:00 2001
From: Eva Rifa
Date: Thu, 26 Oct 2023 17:58:02 +0200
Subject: [PATCH 09/66] Improve figures from PATC 2023

---
 .../PATC2023/Figures/hcst_anom_cal.png        | Bin 12792 -> 12464 bytes
 .../PATC2023/Figures/hcst_anom_raw.png        | Bin 12676 -> 12089 bytes
 .../tutorial/PATC2023/Figures/skill_cal.png   | Bin 14520 -> 14180 bytes
 .../tutorial/PATC2023/Figures/skill_raw.png   | Bin 14718 -> 14152 bytes
 .../PATC2023/handson_2-data-assesment.md      |  46 ++++++++----------
 .../PATC2023/handson_2-data-assesment_ans.md  |  32 ++++++------
 6 files changed, 37 insertions(+), 41 deletions(-)

diff --git a/inst/doc/tutorial/PATC2023/Figures/hcst_anom_cal.png b/inst/doc/tutorial/PATC2023/Figures/hcst_anom_cal.png
index 749830c4f39d8f610efb4dbd01339491a35fe516..90e676fddac41bae2331599c93c99edd3bb4a344 100644
GIT binary patch
[binary PNG data omitted]
diff --git a/inst/doc/tutorial/PATC2023/Figures/hcst_anom_raw.png b/inst/doc/tutorial/PATC2023/Figures/hcst_anom_raw.png
index 568e3bff8ccf3b812f0c799216e947e142084af7..62ad9574a39aa07731ef06e07c5aaa9c9f8690e0 100644
GIT binary patch
[binary PNG data omitted; the binary patches for Figures/skill_cal.png and Figures/skill_raw.png follow and are omitted as well]
z{|W&dV)_0-WdPDf3{T-+|86}cjiM9-Fbou@weC3^*0vp{4$20#LyGkQm4vWd6C@Bt
z3I!1+2;gx+6aa_OeI~=5HVIzPx5b*=CT**nC=z6?egmzlZ#*(t|7g-;$3R1YXYU1u|ev>Op$YS
z9Omx`L7fMb)gwAi1j&Zh)oU>z(3K(WJGWDm&KYA=ZANYiI_=6Is3b<1V$lB;>aP-aX!C@JQt&~Ap`*zgrkuL
z$^h(teCPk^gZ}^4#FY|x9ky5Au)%sSyFU3!U?+0;!qOsAY?s~XZ~2ys1-p~t(1q}B
zRrR9N-^#7J#!JiT$f1isiz~t1e5b#*YWVvOpY@>l7ES;H0
zasG6ZyJkDpyf}`y(xfSx&V27y;_fToZ{}i~t)tzc*UITa$&qtsO!n55vmbo7g@6h3
z>LSIM*Wbi_o~hToe(*4j&9?oMB_=BT&ski2KWMl7f?33?$vCh9<@BXkxjN`A&ei*T
zb@95bTqZz$DHq~T+nTKYlPs|xkf`>iN%GF!xEqLUJQ~ksY%6xioqb%gck^|`?ejCm0&kiD(9LC;4Bg?5Ftzjew(79QdEM&G8qx_}T(NI(!}fu<
z!Tc$d`$|he=a7LT{zzHgXXF<;ugHCo2^_BSBUqV!v2_si-ap7g!bPcZbm>Oirqqo0
zm0cW^Iln|ssaUA3IBs!s-#*-@2fJ3iv0St5N+}g}x^=h_>~We)_)|5LJuL0jfXr^|
nWg!RLp77rv4gT+MuuFEa=Nw(~5+w%-0xlq+AW<@_1WCfO0+JCVM_Ga(Ns=r<
zkQ@aiXC#RT!o^>B_q|v5pIi5zJ5@7Nr~B(ZQ)f<{nf|&@tf7I{EfOXY003@jBh-xn
z01pNLpc4q*jpQh|S?A`aYN%(badRgSb)eRZVSiL5iE7qO?{xuKeX$dXWL#MT-|KD*
zw#{vSR#BeWJe+kg-i-IyJvu$DO89-S{inb7;``|J^|kjSAI}>v#7_}cJ^(=B@b3hb
z-Yszj02rXHu3{RHy_XZ8x$@DYNG^j@AtzI3+ti}ty28Mko`e$w{jQO=*3jNo@g##?
z1?WJ0>t=<#P>*uC4X9Yq0ncF?0R9Ua5ENpJ4~HQr{|^O{%xHAngW>8fi^6*Gsi%$)
zs}g=XwBkQVU8=5C+oId
zK0M~rD^V;&w5VfP(ng3ogrQ~{RM_ssIBWdJ!NaFpKoZ;~vM%5-pZ@rRxxw%6kE_aN
z>Ojf-0BoJ4>|2j30gcrm7245t>@!)NYnCxy$Z(_Nw5c9|eA;qFk_+JEGD#ul=BwfM
zZ;qTYq@6Qw!CPq~ORMJ;1kj;NiqB@Gmdz_VeTmp
z+<@tQw8acr%-7Qe3T?r}ps@$X4S^Dz6&XK-&j&&g^I!IK1p4dB5LKbiS)pS~Zc(ar
z3k8)xhXrp?Zak|+;+BJ}86(grlA3|Fq6Z2kRrai?;stlv&b=OZH(64Mo=L5mk+5Pj
zO8j1xAaP9LssjO&nAb~{_hCGmuqy$Z<7#omXiSGi$VM0`vWx(MWan|
zZ^}b(>r%~GRKZngqDli_dl!x8
zF*>%Y3caF+0C`CkXlNm?d7Ypcz!#UkP$Mr{_c10Z9n~fM)<({c%pwO_=9tO|ZL6@+
znwgBt6G)rkFeQa=sZJefegQ>>6_-_Je!2cEw5Bw>e%1A;I#5VC8efpVnG@EUYK7~e
zolUx5@fk9SXZR7_&}w7?qO@_jZ)QCZge5|^CNGTIM6D4uS~X$>z+IO8oJ{luSPDYj
z1WIvyxQDXU*U}Ae*Gp@}3*UyzaT<7f0R3SZH|&k%?m()rSTx}*zX&Wl)hqD(woBY9
zV|jyH9GVNr)=pS^^^p4QBbZToHf(Y;UT;0)ZMB%FL4xV=Jh0=F1)L(N3F*-
zgz9Y=u+03UR$IMV(yp7e7X|Eo7tJNuqRIW;
z@nPo>Z?xeLJQRb1u=Z>UU0|NmC0$8FW=F?*tVP+^m52R*$T~Z>pSnA);=>)Uli(!L
z3dcQEpnI$B5AHz2F09Fc*}nI9#?dZ9OmVCF(*DYJtp-=#J
z_X+Bf#@hehQfOBf-R8&!a)Wp~+fw@G298;L~LVaQC_)RLdvZJUNrzwaUegNTbHY@5;)-7l}DoTdz*^L~XwGdIB8iVSQItlA{byR(34T7!AXiantgD5)&(AKAkfC#sAB}aZuR5p0Q6exifm1KG(5o
z|3LLQJ#grr6eHR)0FndBC#=b#Y|X3j6F;RS1K`#>4%IxU!+<%{@gmP9`&2EeHedXjO?
zk8C4x8MB@w2>lx~51{DBT1pF`@eO3&@F~h?|UXp@h
zjMtk{yuH*QG#jeYPdI!!<~qmb@86Lup5YaN
zNIXw61)~N9YKc*jmGX?x6;TI0auuM58yI`P{TSXp{rsI$c{CXzOd5oaHoFI`wzxBk
zAw_}OG79txdkQtAbW<5MBjR-cX*6}x((%VRKJUerFdQf$*1y=74F%FYGnMngr^JT?
zF-lMREEkNp0LGi8>8ANVubLos#>wx+wk(XZCQ8k%J=SY)6YrrVP7PAV|KujV?ga|l
zf&lOG?5Q)~Y(YZU-XtE{WUaovWCbY(xYpc3W%HMuDzWgygAl=|z^bWS)xnmm)ujzw
zUVPqoG4S{^e&D+ks_lMzHKnxT(!vNgb>mdu=J)(S27k$2J2e0wm4F#%tzo-)D$V|_
zq-iJsg1uWk>2rsm>hV;W-W>I0d>LLo|9EIc89Et&v@#pW5bG7H(An+}P)k)krGsW0
zBD@=Yt>KB*-zP6YFdgaOY1(>q2kpkv!k|B`V8ELY#x1!$381;TJ`AKjrb#-0*+0(P
zeh;99;mjGg;NlHHmrHozcSNJq9))xJ+3h-tXx#k!lw<-FfKFUaGvjIi*f~@y7JeWe
zOyxH6lQ9$Uijc;zp;^__PJ9P4%Ijlf-eh}gYLticz?GxAS<632j(?2K(C=KlSG^ckb
zjq@p?x1osvZz;EBTB?poXs}lAGsWa&EcAGEeNZJ!s-g>fA;K2Md;AK-4vYz_nONW>
zwW%>;^IdxiEYd?}Y4VI{mPDP?*-{8RL?!3zL<7KPJ$728*~_~e=97zjF43l7erkSC
z5%izxRG@!@NrCx@tmms$|BFPj{i1y1&@D>9k!(A%xEdd#g5>iZDxo4y^!|h8hzNEq
zHGL3W5{bM6i(X(zDL0miu1uYH_@ZuQLoRaPCd8>*C&msmvJ8W-4>1Tf`&_%3}L3hE5h*l*~)baM;c%Hcv)
zTtI}BaZO|ALrTE4QL}WN2?Zem%f7Rge@()+a%&3FCjSBtE?-Gm-Yg0`uXnf8bunf`
zwsta=S#Zj4+^ts`xK$qX=G(L^gWyi-=drR}=a;Kh3{K)_6%7K%(&ZTiG)+q&Vc>|MKtl;ADU-<)aieK9S!Qr)a_
zN!k380Wmse_HTc+dlE&HjE(j0YAHzo#Ldlpp;@y>j~{wzE83BxduuD-@cbv1vjA~v
z%Jp2$B3JxHC4$*mOSWACXc|2Uiu5fCx2-Q@4Josb
zcoIi@#1g+dOJ9+`$pTIC+5xQC$Jypg>0!5N1aH@YW`DjNjn5;D8eblKjM@YvS+sLo
zDA?7LY@W^SO@XlL#cK6scJMpl0kZ2LM>5lAl0xlzglMKaOfauYMl!@3A`&GVoxS4D
zG~M_qSSmwZ7U~(wQjs7HxpyNN{&+&a5gLC0ps%Qu>o!0I`zAfP9`>^G%KVC5in^cJ
zSWT9vNx_(a+>ZpPiCI|+w8*REs=UrW%vIi#`#&#nFM}G&%%lslDIi&E{py{|Y$iG+
zbiSP7Nlx8G`d;Ojj5@H{+YEn2-;51IE__ux+W4rsJTim=jWH9`*fAk0Qt~g8cr87;
zD)0_^{Cfb7iIi_7y<~MXOqmGHW}~jXPbC=O2OnD{&F5pe030YVRkqRx2}u|M*w!bN
zI>XYxrf{4j|I#{{u9~adXgr82l&&MLLvDf)G0H6Di1@(h7Oz(uO6QgbLSWUB1p`Bb
zKgfN&WOwZz_Fj4~%q*(8vDG}-i0%X-+V`*^@5X&F4(;HWKs+!aQJsGL$Ip!cfF!S;
z`hfb!k3!U!TZaw3UpU!y3=?-(f99hDYRy)X`rhXx7;xwn71!^d?j(pZyMfg|;h|_I
zWx--kiCS%Q(_At(PItbF(zpTYJshEbpIaC_vsy?OqVOu2o=!1d0|GM`Rw)`2;s-YN
zm~;4KU?g|m5cZ&>7Ws(+I@tON&((NWutcceKbs_B?wga5z814vLFSW_gy#`LD}wg>*quSJ=~l#aXX!`-d?cczKdM^MH53KNYS$~acNjA&%XHB9`s
zV$7P*-X-aLu>Il5voQE+MTW06u2hviSfYJYCK0Q&Bk2aGICdlJUAl#)M&d;-;Zgc)
zRg>V#R{GXJVh~hFpeFRH9`Cgu4`|uo%w)p?STzI6cI9P(S+|PD0CLz>NFETgCkM44
zP|vgf#tI$p=i~y9
zO1Zh@o|y$Lz}AfQFiXnIo3JwG(4Xj@IWSaA3%L#r{^MGmmdssWiQGz=cHscnLYrk^
z?)9m1uRro{{1rLRJ>v0`uD~*X9dn5E=k6)VEp`)o2&Yvo$yV-$7{3wq*(Bn8bX!O5
z+Ie9$^IT-k>8Tg9@MYM2DgPPRHUbJ5Pbbw~8Y>P+kg=lSsGALcpc#N>y68d~a{14jz4XG^SV
z$RM+p<6qO{2WB4bEtZVDOw=b}0<#R(?4ee3o&bd>^DSWPQ?E;hsYgVeq@;V1DKa+U
zADV$vY1W}nxneah{~kx+7$&@5m|Y5jsdw>)1e#697(
z-Ope&Ef36orZ`&b1+0!mZ!zBP^v&SXQNM*!U-33>B;aRpXI20?SV-Zieoow_e-d7}
zjJAm=uibnFIw#>`)vND?AdJ}E@R1V9p#$11=mUJgh}5zNzs)+%Baj-l1H6Z2dJTHy
zWzPnzUTJueA-ql9vP3pI6FVMXha}Trx>+U(5K{Jh)Y>lR4xiBGzC$`W)XTex0$sAz
zHjgDb->OVnPLM&^R(pZi`G?~4-&F@LL-dAHcqwxL1m>fbixR{5b_OMKOHqmP+>Az>
zg`1@%hzH(Cy_t1)Mk;b>Up8)SHz`2$WqXW6e3rPFePP-a6j^4u9s8BfQF|uX*B5&@
z@TprR*u`Sdjh0S%_OEVSi~y-JG%iLtt71xGtIJ_g0MDDbl+7nZmynf_!%ZBv71r-C
zZsB#TPRibo*PXZ@9b~KW@hVPv3}Iva^SPn=$*j#SWupKyI7T-48)=0bR|!~^utVhI
zYuIXRR`UBhw=Fw%Q#}vd%|Kh6=AbMrVUYq29eI%9Y}M%Hd;EySDE~
zOSPY}qyIR`rZwK`4K8PW=L6J!T4pAUoc*{L7Y}76hZasG#<~t3k(=p-UXGf(9H3>6
zYcEIctbZw|nSZin%2qNMl&u-c
zqL`cO6Icd$*}v5PVu_87nMnA%ToV?j`2;W#RBe9e@y+Xvpui9z?2G_8QbdsA6hu2C
zXO1mVdcsh~U|pX@@pyjZ!MyL|JcTtl199>VbgIq4O}8vG{W?gB0=+~yI+SL=m9Deq
z+7Mc3Qg%8*f~zU-s>wzC0cXz{bbRIUse1eRNOx9bUyisj5-m7bwRl-ZTWcx-ZwzZz
zzX)`PkG*=5DYCWyd&QFl{XpU~ItYzb5GHp_%zwHOp5>uk{*K)$ZI1a@mfI?Onp-qN
z|GD&N?jj>P-9p3lIa`k!_&3k7H#F;bifFI}zUzTn#
zmG=<5PIK*BKHRZ5EqLt!QdUiQ#>KtaNyNgPbit>qXFD)$lJi3QN}1j0PnwY|W%~4F{!)d
zorOWSaH;rFAgEfna-FO3<-nah5?_nR5N%Mhxspifc~NR_aqd)30=HS(B-ruxV=d;t
zcU4#n4Zjhhy~EIn;Ky+rJVbRh)+~as;0%+3lA^;a-3*+n$KWGB|H8WRPB5(AQ|oRVv1D0^YW8PF{Rydlb7F97|cqK$Y0b
z1>5;^{W$L23gkNf;Y^eJrCu&U)Mc|k{a!8Ksrsw!ls9oQVC2@2SVk$BoznZM1OBQJ
zmH;}YbBEf`gaS?jLf>BkD+Lq*T@-BB>Q+rL6-G@7C@4*i9Pm%6vm~vYp7U}Jg59W~
zM~4{Dyja72N@C&nz#k>J(L;+oXBF-6BJ<>`F77N+7DCF`%p5B~`D=y@fYC`g%218W
z&@$0+)ZlWCd?1cKrttmO$V(m*4pYWzavFoezIUxJ!UJMv{fray)9%&+;+u$u5L4?a3S-Go
zQXxzVL;KJp8K69xs;9oSYo!(@Cs~|PD*O^Ec0Te!vUZ;2xBX+Kxk_Kv+oI@y}&RLQb=&M
zVyL4wWQ_I%=-~y93Q#dGU9W{xra-W;HU*d}YbiIdjIX6qn0EON@su#wP!}UFoRkHN
z&udkpL2e2H4(B&XK_o5E&qmzK(E#Aeu)q7~)ow^4A_Te48qN2&^-}mZR%tgAqYQx5
zeBq_VsJOaAe!Wc-3zOhu*pomf?{MmK$dXvsxN|
zF}iK^ixemWVU+?MHj7D7A%5T^r$EhsDneC_OFZ~}z+nkGLwOrGy7^fD-eyu8sC~hr
zl*S6g5JMBuzW@;W<^2$}-PHT`nrhdEWg=I7z!5Jp!9->2`qz<
zYWS%7`{U%0PL2rmbVZ>Z(Xu@2pY=D<0SNZ1Sn|oabyYN%dV{l2{_q$OLjnCLz}|Gi
zgo??Ue1{_iRKL(y82#xN95S!E55TigkDui~po5AlT-(8a#J3T`e0PK=jA!pr%;#7C
z73}Ml7*&Yous_}>N(O3?jZty_ft~p~)nXFNN>*GM@Igc(l_}{X56hUlG3L)uFs^If
z#v`R9?7+jz-xsVKB`I-lbEIAHZwAaLDcDe(`>Xgv5Fb=dI^R*p28#YNO+@RR#1A$LvrM_2b`n7KJ(ep60sXjIde@LPdsV`vcfYtxNSr;(qi&hUJyT@$(w4y4>U3<
z`3mbV)Fc~h%s(b^+4uoHP8kkB@;|WYYtLInfuLAE{cvUa$5~e{pf394OBZzK@0yX7
z-%7N=p_OyGy|$VByEE-Zx1kXNc5PENX09fw{t?j`u7DzTV>`MRJ-X?8tu$GJhX5A
zwXJ6=z{a7(b<;(J78d5VHbPc
zT+*GBYgSwM!5fS&0a6L+)IZU7{x~F)6!DXLxClgfh<*?bi_-T_?-Zh?AarsW`D&b0
znkoi*u}4Ue51Ab$Pe|&GuWUbh_oiKI4dQOezL_BVMC<4^HHnCzD&xO6nI5dJMCbUZ
zOiK`^^PRv4dpQa?VD%6(G8_w&gVaF&lx|pg@s<4Mn(i=xdhd!ciZ_zs{(?@4aHSNc}|z4qmeEr
zGlIH4cJ7m6xsVnPHP9?mvS2mZD4!>-F7lP2|DAQS!zL&!k3QXT&H7P4erNVj_zs)k
z>uZd2gZW-CS4jeFNJIKWOZdP{QpbE9Pqnwo0S77}3$=HWXA{spsx^!jPJF(<^W^-U
zol|D?u4LRzt~4b?`uJI}K>qgTWAF8pd%mQ%7jl%e+SYQEyS~6b5EQ;f)4|hIb~pFh
zW0aqmWYP4`Z++UkT3?)F3uK429GeH5({E_Iu*#fdeIFRWux0{cH<6K?bXP=Bi^xEJ
zonjx)wNOf3@bJTm&)_bu06S&du#}yT*vWvJM;_MENgKwmr@=k`qgkb-Ak3S?yDcw2
zw@e)dD5A`%`CoT*{l4cdM5e;N^~w3m(LK9Y!L5%73(Xga5VtCKfU*9U^ibD6?7I&a
z1&VE)s^u+RKXiSD3`VeFIAMLtT-nsy_kcp@vC*VOMZLF0T6D@ce;%)%Dw3ih(dsd0
z{=|HrR4autSUAchE2_(_1y|FGGM(MAgL6?k)35sCeyQta=^XzOxO*n@O!TxncdoDD
za+b^XA_*CTcp3k=aOZ77gb+UkjF@IJo>r)tjo^1GaM=1$bUkk{GasccFg9<31@9~s
zs@rd%V|g%LMEr%zH3-9xX8C;Qe1hS&aL8E@lH`pJXrD;^iMg%W2U&~LUX$2|%k*!g
z${t*RtQ7n#ha>k!k-U{;PDFt&h26l`pXtM3?0Orhc>!YF!mVx9;OxckubRg_m&~#^
zg+%=vJ_KV5ea?du#k2#9ICja0&DpBNMTp=5|GL48WWhbIFc&iDIXD>O8EGfdV)U^n-1x|#%4d#4=ebn!ImMT
zV^PCkI+K?T7i4AYJXt&JzS6v@k62&I=F4qPMPU#Eb`4o0
zvihk3(Z-Y>Lwsh7LMV#2@l}k8e)%7X7byLRXsHP7FS!ME3Yhu%wjrR-KRU8_IJ~_7
z?E+22v&$R`#6mcCvqW~%$X~BIXC4G*w?wOqY;s(HfLb9IQ{=?tK^tkP!uiC%JhFJ7
z2od(XQ1CA9h5oF=JTa<1FTc#R?jcXkI!(FUPuw{F>Rgolw_Q;Y}ee@D>vD-4SM}C5#{>*{;p9>M&Fix`t%_!t~pIGmo1dfZhnGuW&FXA^mKRVX>Cs0uv~3@dSoQP0C~GWM{_y2p#a)LQ^fC;u_g(MUU)t-xx>(yS@MKxyJ$R)KPyAK}t|
zrDKuPd&hXzW3DgwZen~N`xKWB-bd;f22~$0&?jE-O^y8co&JD~UGGySvS~U(m_AtJ
z<0_$gD7IjHr+VQhfY}p|UTsOaO$)DgPiLegBWGh<Ez5o8IkXw5S4vEu5Zs2
zZH6?@j=t!WGcFO#r&1Spy?c=E^~b#jEZe`dN+H3Q{d333J>i
zAB#K#6fE=ce$Z(ys^_><@y;E1T99CG+thSb?N3A>ze|PP6BHurRD&JnaCiA4eu6

_qSHv9D#%2qJ$k)otg8P9Ajy#S8v3POv0`8C$$8gLM<(a_7^w zkHsP~hG|ju_kBIIP}*R4ZKGUuq0n$SYhFqn=LZ-ahb{yUSiXm#{qGDW{mMDJr= z69oqBS)k64HcM?`K3jO}}qQMq=|e$t4vz}iz$B@$Kuc((=R3f@rh|mUBm$%;_H@SrKDwuJWPP` z@TnwV1UMW#toQ$U3z)sCD;VbsJ_ud>2F6l4X-J&FoO_?m)iMU zZbe*mQESolAIV|8Y9==&51oIw;r-$pN4ey`@UV|uvlORto28@gyY@eJ2*li7nB-)= zDWGhP|49J2w!QDim@m0|q!2q`6GphYy~_f$?g{1y0r}G@p;gterP5SmRfH3^mw=v; zuRfCp+?CB70)pQ+09%i5XW6y7yw@CfyzyUFPC&9Aj@v;AC5lI=U2150tO#!!)XgQ0 zK}#r|ujp2IZuLGQ928_iw&n9Q4x6Vwv+q-X@gh#(U8f4M>l=D#Gt(~vHJ*5&r$lAy zc97%FW3vz$)!VQgh^el=Ye!G+>^s19b!8aE=_8XrOD9Tb?YSA7kDEn_imaK^#@J*i z5u%-y_el|r?i`XkeU441ler6ED8B-G1mK$c;2Q(_GI>X}E=(}m`oT{BV|XDwQE|a} zW2LFEztdyk_U!Bi-?q8ZiRXRVC`%;Q9eY1#=9jzb#-DEkwEj{SFgK(72{Ud|gwB>2!{|jT;spSEd7HEtR-~);YAsb4+GqoyqPO zw$E>edUsQq_T#HjmXpKD=s3A$D=H-ln0?Dj@{9QLIgAnIhy;cEH*OW1?atE`5alcZEHL8;?7*f2sF$su0$zEZzFX2L z{B>~9e~NDCe}f3zl<3BIPR`9djLuH>iS6#Dxy1**$+;)W#^3w-5V2GeoB>ycC9&BU4KJhks zO+P?@od*3|XWIg#uInVs_Bdcw`@0?l@UA9X4*<;y%UNAN>w#O0u z&~p0<75xvyZ_s(dfdDruCivhUfqK$dSJ#2YVR=2P!O~zGi^h|QmJ=NoT`J$gZJ1mY zBYw6N<2`vA?JBc#h2js!s`iJk)&OmQB3Y zw&luq*V(L$sfN((AzFr0Tb}iT<)jL-*q2Ui%L@HrgL_@TmU32HQgoSaR^-;oWco^F z5p4*^jnnYKwzJNBZSO$0Y<~Q&K!vDFN6_d4gR-q<6mDZIkLFf>tBjJ+T3;o<*~;Ww zH?pH>P9o0ts0lm6a|}xH*NzK*3kMUhx{cb6q(RlcVvhhNlj~+Ani~=9fAvVJAgDQ0 zA1X_%XHkDK{X(oN$O)u=zkRRVMXMHfSoxr1AAl&uD-TQptD%4u!~t}!ixUDESN(u7 zm(FzaHYNb41@&w`-H*LtKYaT_QYt_=1yF(Fe9j{3HwK05tI}*{LbcMu*x45(Zs*~} z$dt~z>C&pAHJgL>%S}^<=@P{&TB8t(gZ#Zo@*xOKhY5`yA0XcUweoq}H z^PIQAVD?r=esWO&AI0Ej0k>4eg(=>QKKC~|s7L1M90$(dA_mQ)L@Dd}-xe3(E+~~2 zIsrWn5?h5Iz3)8D<(HCU60fIYC>ui6n-~wZokFvJGLs5US*AUMJD}ENANS%Nr6U>L zTFGH2$vn}Id~c|0YM2~bVayu(Km+K{g&<=zUjjI55H|PXJ7A>+%}Evju#0F9NUP9w z1sLWDfm}e|vVaMaNdc@ZW-eGG?an{4C`0gVm0HsH`z=B8+)fEFbcqK=^uW^L#Lwg zlBMCQ#bT(Q93K}+0HJnccSh86l(HB=wQr0DVJs|HMJivuxKr|1QYJEha&NmCC9S7%r9< zd@bItbgtD;jJVh1zQ7xE2~|Z@+(N|!$byntw;mn}1Ibk&7!{252K_l3q`+vBJLK@85CBc+4j=?zwihrn zMI=1#O5#m}!Z+0m?&fLedsizlQx>cJC=Kt#d?q}I1gH7~owU>C*GJgbp?^}jCR&eS z6MjJ9JwV-v6xLBFb1zCUKNk-d=AP2o2CUvjmAZjf-<-HLCQ|mm6kt2oglAy1Cs0sM zdGk6N46wQ6IQBpIcC-J9Wg^4jB|F1Uh|PoU#uRx*Pit8Lj5|-wjNAPRb4`p*e4CHs zSanT?hmG~gV}Y1@f^XD!d9AIyiOl95>|$@Ixl?*57_A3z-XsG8sn#wZpH&0{+HYYG za81iZh3m!uh8Ts+1mm9f1MoR5=$5x&{>8tCq;bPrHOX=ubsWLx1fU%NqWBFM|AhxP z5$rU_*P%^;G^ed%JBPML~X3>1JvaV<5)XsQw*OmYt4Y^U?2iWQa_S(Cgv+N9S zq>}*T+cf~I_&VD%K!JEc2v|i8FPeQpRbGS9X8+S_4rFyxoBB^M+4#CW{44PM?FVRZ zx=a`V`)q<%Ic62!n7m&fJt=TU8|{A| zMg+USsRYY&PQk_@<#r9H%OC$j`U%gbP+wD62yl6A*r6;Cs@Ddf7ybXcT_4rfeL)5emuwC2fS|wdatMY4OMI6 zVeU{4{<^!rfh+w0L=x{)S7nqr_C1`S5l~=*!1pxM0lHcE#)#3bqigwhdOR7a|mp+F1{;l48 z@bSuG`g-$E_{S^j>F}*TBRAFVH*4{Ln`8$*iueC=v;Q8aU#-$ieh%aI*6#ou+;&ECW{MXOH>8)o z)hI=RBAbZPk9hJ548!~XLQ1Fc@L$-O|J$Y2CI&l98T(zQ%i*h$C45kSC|{KH#&>n; zrw3(wQ=r9IZ_cl;_T2`X8yX)Ze~c_%#AiL@vR!bc#@s^LNKOHT+BFcIR1PDeQY7YS z6DN%R*&I6{uE{pbU?)5?O|<&L!20AqDzbu5US`*g+iub`yn44@qlN$%`zpZi(DDHl zx3&P0@d|yl#GLWbtfH4M3tB)&Be_7~j&(BWf-mzXOK7^WBZMIYJF_z54iZ p{Hm2320bbNU+!%9f3`YY-EFMd?G=SPm;*P5wuXUvwW|G#{{h{VK$idj literal 14520 zcmd73XHXPh@HRRdfF&w`{9PV-)`02s@a*-=jqd36V9A|y61s~nE^cw7YzUa^oAHc zO8`KC007%TAxx-v*iu2c4Ob7y=RSLLf=G z&q}_UTzRLh`DtnMvxntUg5TQK?rv4$pN*A+-rB=&AO8OR8{iq}e>w{FO^ibz05D$r zx51uim3jdH2pH;VTi?!E&rQf)E3eKq3?M}wSgV$X3^{N7Ezeq{We2ILpZAA6+^ia_ z|8h?ob3p&cc9zd}w&ssHxRlc63aEpg>GlX8du3cUWq$I< zE=cxF`;#Y&O#&?e3u*&D=u9itZc3!l5>at_y~NwM#^x7iCChk4v+ zO&b+;y)zj__G*7hIaz3Zc>IdD01~8q@-0Q{K#N`osQsR#{myz$(;lv1_jciOUg)|s zyV?amFj?(W6%+2JAUTFIR@HYZOEevIAvW|D#PPT8AcZ?Pxy%B>udP2X8}8W2 zy=TPiykW<5-t@?`7x?zz>(C->|E;@&Cf9)DQV$I{TDsVo%e|T-{a%eDiBxFuZESYD zYjO(6FT96+dFnCKP0EcG+%LS|;#1;!U}F3)p%l3~w?watCRFvz9}jrOW^Kzg*q4h! 
zMBpUh_XB%dkO;rSY6g$=+6tuo#6WWl`&;@#!y% zYJ;s+J@+&SzrETdBLT3(7w7XsCu%*SvXu?b`pA)EmVN6-O6j4#=sr+(j<>(g_CJ&# zG^dTp?BbYvl97I7Ao=M5iK#HD@P{qcI)D)~<^#z7w4hLrHmKh6Vnn?Oe~JfXo2$h6%-Bgz$cE*l%Ae%_Y>&dB%-Mt7;y7{<78e0GW|97bFvQ z2Dj5S0piT~#cVExa(71v$B(Nn7VIs!o(z+D@PDXNs_yQZ;_ypCwaEf=O!n;~(d_h8c@C8QTLw_45y7{&M zAcsU-ZnOT{e}(y}e#^a4-q(wX7R7vlGEXvh_m&$;eqSoU{TCQCQxnOX8ILo$4G^7S z#3rv&Q5r<^0qc=GJwT$twVB>AtsRlK1B%Rn1>Z@3dN_1?KLN~aGI@V_O!9`#A0t6u zIOsom6=D}?e9|8#+0qT_ZjDB8SQiV|_pmnM<|U;KUCFfMR+h273+qpvdcSJ`4>l~b zqlG_M%2d$y@}qE{kPuhM6es$-Hu1no)HMJF2bEyNIHyp`qTij`0BPgK@>38PdA>T0 zJKE8NJjqu9R~EqQ(=GssD6+WzYg+MvUwjj(&=wyefe#sHdG2OBUF!WKTW5c@c1`Nt zsz-8vPw*>F$+XY?bCx_dKUHI%!@&u75i0-#ondGI4zd1U{^e-Uz0W9>t5(S5yp>-1 z_-S#f;0P)es|`q{YGe7rXF6!If(pFf=M`z*)9Vh5IBNhZ)@G}BRLztS#Jv?5IX~d? z%;>E;PF#Z>BgBw4dXV=hWe_NO6%GVE%NhS2l;aa61M+0~ifu$_hERnS22t-~pXc)$ zTUn1dcnFjkwK#oZ!>x&zYE|=}x~Jc}oaBwd0mMrI2lHHT5WuG9oV<(!Pyna!e~=Ld zl-HF$77r!0ggf=o7K)Z>mYiZ9?^v7G#LGnI>1}An9HE;!{Joz?s$f}sM>gA3xO1P^ zrkZRQ_kNaE9;_h%%l-;yi0u7Rd>;W7`S2o9)ED98y^$C^t)KuzBVXjLE3q8ogjwP$Tk$wp3xbgLd2bOL~@!;AHC8&e!cX2AAzb^#CO9+4YiZ zv%91L>_I9(vSlgxOCzPWjH?_&I_n0QTgp{&JkW*ceq2|0 z$u)!LQNElw0NtB$2VPfl64wduH7Cc{;0e~KlWIWDkONb2@zSY@)OH1IM?Olb{_1Nw z^yUnoDr7I2a^wrc*GAQg0cfD$fYi@d{L*<*uJH4g^gcs^g6L;);eMfLWjN^^kK=nr z0{W`hFUkprHa|nS2S}y3h>y8J)n8}nRi3h8Z~EP~!3b{Boe|66ZF_I|h(npLA7>Wn zhC|^R`iDP;!FuI9cs1(0(&~-MV_^=n7Fs#2-qef$r+}_iDFQo~vKw8X5}o7XFXQmd zWaE*O#+XOHI50=M!eE+9L-ZZ6KcOGEAfVE>-2aZw@6(v0#w`k9lOp(j$$KP5Q;c!H zp*-_0hpL3baB}A>1?zGd4LFpjJ(XW357R+s(NM2CbY3!Z3Blf!p-)@s|CLHyqayGt zQx=z7Kjilo-p%~B_U9JE0~6vkH%G#-irwz#EF;rVYD#Y-WwtXvV3<}#9YqhA z5Jth`MqY^YLOKVJA;sfO5${ZAUugu`9By(mBqI9$aIbZNYu?xw<_f_wwA}R7haW2* zByutXBYqDp2EBk;BdKl4mf|L1TLVjeY+EN+nVq+%b0R==_r~9N4&Yr5vXNaFVn{S=&v2?EH(8)gicuorF{?2G`m=Y+Tf5c$V)IEcEp&jo!#)j3aj z$y&0Wn&Gq@41=k-zdjfAjWxmfJLg9mW*$JUC_50o#jfDa(@L3-P0TbVEw!-`tIj%b z0&>ts;r#xii{bBd_uT~yoMe^mI8)Uwygk76cS>MN+c-B^eakhfGOe;c7Mt2|wWsWYUCR} zPrS#TNO<#F+fuhg$cGVfN24UK-SIF{)l4n)x}E0-Gz);;vp?Pf`w#sA+v{cSR8SOY z;#wDdxMt-CwOOHl<}8K}CGSjXi#+N>Z|WRfQQ=~YFX_(sjQT)9c0H2DDJ&o!PG23` z>-!Jb0%RG``z?JsOEju{*EKR{%*RIq-pUR(f(UM{Q>supIf$S%rWHd7T0;Tx+dpg`-{LzogN37dE^z&EOPh*xz!q3bM&KW)tdnDkk6n^a% zOjuN7!fSnOJ-2T=>K-lNhR+6BO3Fq zZogjKo19A5-%r}L@=S!;(XRCQLwM*L5Py%4<80fAwm36pPz>WN|XZs6{kN1 zhZHpb3`{ekgAKJ0S{yuA|2+HkYuUv|kQdwZgbA&B@u?pSloPPr+n^#GH2AG~g`7>! 
zuP=PFEs@*9eE%U9>I zLqG@hHD`&1z&d{KcHx;&&zomEm&0!zTs)BOwL-2vU2asKSR4YE)>}_0>IX zw;oA@3z-j$j%=PCUxh}l4?o)oB7GI+xTOtwzWcC{P#rY!i2*a>%s|eL@SXCpbi~Go zWmp_0;|3lrCaE}`LO2WtDNz8`r50_qEI*hx)2)!iuRa1cOQf7`C zXHx2adBv*VJm>pCU|#j-NPGUIKavyMWXIzm9X92#_;R?Pn_IEl?i^kJsPF1ombe$o zyQ}Tj0PwL;O$P<>MAkKtKFvI@VYjF@06GCNOy+Y1>X^WZ?qe}qD zBE75k;mmWPQ$D2$4EkLcg4VD9Q^520!7%L$l;G}-EBO}heYhnY3-365-(CH79t;0x zV}lX3FE{vdk;n68=GT6x#--=pU8{LkWB5XolUb<4*GylR0vUHmHx0r1P)_7482#(| z75nyGz#qjNcRJyK)ZCZeq zPq|z(*3(UOx9C}95_l&7uwpxeZe z0p6AS^S;@0Ge&~;6F4E;h{sCOum_kM3I*U$K=hbQ_!pS0i?$XG@iW)qD-kE<)`X<2 zps!0;hpTZ*ER~0dI&6b8nFgcTYFT}c8%drtSe=TOFgMemyL;E^wiD;SP!ID*S%#Vh zAsNTbk-^>O8sE`!{3`M-&chByTu5Slq#PVlU5R`V(cd|_lKwSKP?sh~T`Q2qW6e5c zNF66+@hf5H5W(E8;j(YXr#&8KWY6c~v|I9?#N~h+)fdFjf|9~>ub-W}?sc`aP4g@a zwcP&-mQ^SvBfjILvLUgt9%ki{BL3joCWW_{aFNVe!K~~x4hxDIvC^UQF(1?51@3K) z67xrB|9r`m*CX2D>UdK#VVd8f;MN)9u*~_(oTc6(^U&-Uo`RJEI1TFGS{jI-Pm26e5=-Kn=4w^*`1U0 zXW3STzTfVu+gLJ4AG%FR&8P)Z8b+q{sX3Ki|Ha)D&}VV2`#nppr9vXezrRQC(Ib3G z9WVM!DtED?NiPVcEI)dr`mCOkzTM(BA4Hym9^8^5hFgrel@hvXx*OdY5HoSDF9aAo zT|Jb|!;-|EI~Z0fwE?|fsw#wW-6<4i#6pumj?fdCYn;H$t-UPzbFy4%#6}EmuxAu4 ziS6ToWtG_Waj$CPhKqK)C+W!}&h1XGW2dr9eG3DAAp_jC=24-(dUB7v;8 zoRK7+(bfE)XSM&VbMuTem2q_Nrub{FOI6vPzrP__MOMOJh;;w8$Pn=4yHhVmK=yA_ z9PzUW?`737>Y_xA@skwi?zI~vwnt`A04*2bP-f5CWxN6iHN<|qcP~Vz?`(BP;JK8g z&DyUzCckU+j}H&Eu?*rJ4E%h3K?8p{hNP&+ks)@S8O5c9p*g-Fs5a9gtx8l^&*{(* zH^7-0m-z5%%OJCGfe1vnoB#Fch-^mzZ#zb6%A26$iB#k1r z`U-`fhxugLI~M4Dq`)X%Ehq{j@mC=ml}w_8AO$gFrcxG48@Edm;LbazsH9Q24kMX~ zW^vrd>~bpM5lqNk8YZN&Yr#~NXSPD%`|iIi9a;7#@z%3JWb8jRHIOa$3~~KI!lLE4 za&*bPFYMkH1$qryAT8Ho6TF(hQE3ucf85?OyPKl4Igs$2VLm-jgoTdF`r*Yxm;g4U z$IICxx~o2SN3+?#KUe1wrqlw1vxobDxp@lV?8+)iufB+*sXn|H5$(F?WUPFz!76|v zh7m7NFtj&D1*h>NI~#jQp~}^SeeVqAiX!zgzWw^S{VIx?ey04!FZD99^xb#L z`3f14*yhzB!vj?jcH$la&zqRSBt*ceJ0)V1HnT=8I?M-0>hf=!@4D@T;o3?luHDl! zjq1^fGJv)`$K>5;$I@zjQ)di&RC@pBdO+J}4=(UpSVEI! zm*pVJL9SD5@`EFSB;7C?VK`*>Dry|IAI=B{Eyc(7GgkccQm{ zoWnN+WGoY3#4;pL`qPr-+viHr4&g$^Osd@^SXP@7Gbx;yuC%UGQxt6~&_Vt+^j;KjJuue5(5NX$w8*jV@)|vulpKDz9!Ddhol& zECc3H{hW{aRavYhvgvRy!Nk2@&EiGYP-sz8buAoh@ONw|ZoU-tYp9y;``IOHV`R~B z-|Pj5u8dcU?=AjQaz0xqEe;#$Sl|!S>)5@45c0IDT&6kH7R3DEkN$M~5)JMOCDH#7 z%$D(N$JE1r-Qi{9wGKKeA@TEWsk|luvB)F{LOo7kUeDxZ%l1yUG;Nl3Y z9IeRO{IcJa)cbMZ98j=SjYY9x)sEfn8Li zlehQOIgv<w*YkpJpm6Q0an?>H&Wd0OYfGLO;>Z>gz263f5)y@DlaKp0?XkE!N z-oy?7^8`TAl1YELxLtJKr+Nm#>MGmPc0T^F`p2x{sjwx5IQ%0C0Z@f&5QooU5Yh0i zpa(`ofj5XP8%_PZp=Ld3oJsG(prgpE0fT1Q2YJ`D8K46s#0rfB$JJp-qa zS-Ox^E`HtwM<5LjMyhInI!ca!EQhfZ|6=)QVbDYl%g+yvWj3UfM8!9pF(dec@4UO# zVf5cANwp}P5Uwa|K=M$cN)GqY-3Guu{ZFbVTv}Ho?|Qj{md4vDPT1f>%Bd0z-2sp( zQBDs%OA{x~>di3=U%x9okt+fa&FF>Pm;i@O6eA5P*S$(hNoa`_agb|U=0bJJ1OTVF zFz*C_EkbF47+-H@?UB6!(WnAgh&&Y+bmD@rdkk{7ceH<&O?@}q>eGIf=ht{e$%{5R zq3As3dI)hM=MnFt2@izMIGV9;#Fl+rXBS(s+~R5_qR>l9J}lt?nlFH|g;F2T599LvM4@dv5 zZUtNqMP%;yKCUwdFlkO4J+RB!u?J$HEj@O%yhjUdoPh2qgcX26MYCFL}!NjWKE zoyHs`&6)0lmYa9R!PC?E>9M8a5C0cj>FBZSW!GJ*Y}w z)zHz;IHXEu6Gk{gq-W|Z-k=nEuPPvXrK|#QeWvf^USTEqmh<^BZ*v+@i*39Y9}?D? 
ze`qGHoaY$PH^en6M=C_x2xjvt&GW?%@_-lzwL~fR#bnzox|GBxI@O{MbpYCI3a37( ztcnpCmUDxN(G`YX;wW(}@!M?u2}3n0cvR%v5&h%!<<{cc2m%7E!ew$xW2@ror;K(4rR;=s0Rjtpf}Vjy-7s(!>iZ>QMHA+CTN8R?)KK5Y zN_?L8Ln^czRq%`jJ){Bs1Korc7^~qJg)xW4cLrDd$UK!-00(0jS_aI+7@zu82>YNl zYdM#QTN*QN^}UVqCI(E<2y9SbiK^LKm)!xq@{z2~29ySsmG@qn&}DI{F?+c$S&Qj& zadJBYX$xrWij+iYGj)N84PIc=!JPvt5*L=J82eS~+r1U7Tg7w!$$~(y3XD2}sW+OE zGckILu^Waiz21Y^qH)~1d!9thv;?+XXO9HQkAc~a*KD{ylJb?OgXDfVJei(m>x{Sp z>Qo$k_M`f-=fkP9a?DGCxqTM=xLD!GNLIR2^Yez+3K}Zty^IUg@k|Ruejh!FX-oZh zs6mI$CoE{i(!DDcJik9bjJT4-OO-CznYkPh1+T9!uH-P2<@XW2|kKa}%s{_6*i{5=@GbhFy8fz`0 z+l3^{1zh@qUpXr`Q*`f=Slu)_1h!?ry%dLIb-_Bbt8fvYUNpQAIppEt2?#L@h?JZ! zpdP!!GV|<(a@~^%{xwRh`$8NJAMe6ICG_kPbCG^56}U)7KSCHOBV>%z)R zYVC_Zt8~g1t7bR`a^@1R_OF$#-y?m*Yryh*(JvSg}=d13~Q*5+r) zb_;{@_80a1iiXDfG&}?KqqDp8GuJX*I&lENf&NzuaNwY6BBk!Xu0B{XcS{8N$lvwU z9f}yeWBlI7Qesg4u`E8+8bjI|#Hs%+QeDpo+X;;Tm~HE zd@cMZOueIqvZUn&%(3v$sB^!^X?p!|&wenbEAQyp+PR8xaVnVNiYkTJM(FRdBEZTc z+{bCJ_4V}@F+Qpz-1DT&o4k(|IEaHs#NT)&lk<7}S%$T>!$|UkQcSA#l9x2#I!uLN z4nYXX%y+cE;>lh$mfzb?ws?HB_%1r7$0S}`hG#hblJfX0*ZFYZL*EQLF?Iqx%cqtI zI^-X$PJV7_qNdw;exW-a(c4%-kG70d62O*-$|n~asXp&w%8L`MGC)`wN1 z_H+zsaI8Nq2gq$63QX*MRv3iWpm=VLDMx}guz#*_QK8gE>(?*8xWE*xPY~hcRGX~o zEwzsY3>p8}JdxTi@n=Kt5ig~?xI28&Tjj<5VPD>T9cpMIIa`dFmwL*g(wpw9DLv_E z7jkQQKd5}HZtlu;^dy5(RQ}WIcVq7m! z3{-u_XU9gm?$;Ce*OKa4mLgxNN6uD5wn>DE%=DLo+_WT>AOlET6$Y+0>kEI1mAp-g zRT+HZ%-bhjC6QHRI7l6n$uMlyE&{bT(ft^?ag5LF85})dS$HFt*ml4ptPrdWksPP+ zTc?_0LOtIi|76y5)K(xpF8WaA^>fun) z6Am<~P-kD8U9zv*dkw%0DMr1>W>f38fdsNAdiS=%|9A(&wJ)`cdH*I3f85;Y=1R>7 z;POdu%|8NZ^WMxioPlGH?am2W*K;3rH%u~I+Ns)?rJtu;^|O+s1#_9at(iK6^%@dy zGFxA0_6bW(eX?7W6sLO3uhQlC&_ZxZlOnnow!Eh9$j`$GcL)$o>Gx$xiI=!){NU~n zY$$PC!T(@G1(qe4YNy4GmaTerC-;Pd3ez?tR^xtr)pUA7i=m@rLPky%CDKRBW~;!v zK%MADdUf$UGI=WYF1<6_mkJ%!V+jzk*vzuKw;wY;m*j*pa&|&#I-Jd>Qf+<|z z`&e10?&N?=v)%6nW(J+Cd6Uv6o{nkoa!>W7F-;NoQ`1X)-Kw!05lODa*Ppv4^ed$f z##ac148^0Xl^A7}_wPwtLQe_GT3Tx*EL$ zCcmpSw@JOynQx^e9K0-Qij@Q%rGFqLZ&!9BWJS|iA3O2Xnw~iCd)7{aTV_M{!P%r5 zua7NUE>TmKid}*2B~8L`v_@J^t7Zn;U49Xb|8mm?+Ua><%0ri>1dY=@;*%}j_MiLe zEZ77qXb(HLCL1##X4ZqVC|YQHpXqXFct5#Eu-bIaT zVMavu9~Z5}^w`=_Ym?&xGl?r++D6~SV95p7FuG#Ok{?*gkIbamx~v=5vM1`iYGPC>7(Ce#F<5&2fdN%t(Nz8 zyFDA{NYp$$wdlGqb9IhHOYJyDqGGeu*tS)L-aaZu%vLPFulR>N_d`LwVDgyL{z@Tj%>)D?SZDNXmVTBdvKWz|cc7^llNYzq{z`)~csl~xHPhU0V# zCSjG979|U7-rnBJaxoT$6k?Z4ADH?n#WXJH43!w~7$-1(H!{&j0c9(W_F=$vGc#;pz*ys;nt2J|PRJ~5*(5^GShcp02B>NM> zs^R3L#*2c&{s{a8QEXf_f>oET#D1_YXT1#$t~Ly*_qGUvy<^)l`KK|PGzCHSIj^CZ zVT&Y%eci}a*_n}Q9=heJH0D4{14%DWGM~TV#mV$7Tisb^3Hk~3gcoin1RQz(zHkj) z74JwVHzQC0cfC!KG2Cg6&}+^1>6Or}xF2-B2Z8g%VSamZs{V1nJ)b~U*m6lmy+x6d zgmi9|2~H32S`Rt`vrq6%!M_3lYE_>2G|B`=S-_iVlI*S!`vn#8RJv>+h|{HK?+HE&zV8;!oAC zyJ0i-)AVR(6l9ChS=?m3aawNmAA}L?PT;BrFr$&tjSZK##IY`k?xa-XxOL0TM|t+D z#)`_Cx^*wXj!x2LuU?;PcSu@0A{8ufOU)-yFi7?1qpBYRIcKlyWJ=Bl_C9wKpZ>sN z>|uQRHg(*4QZGSv`PgR$8HtZ{cq#fL@y)Lm;JBB3Y5_%KN&8>=!}?!3(SIqP5Q8pj6^ykpqJQp@e!~S8Ow3RIjI7+B2K>nGHIxqiOAo5*Z7$8v z-xLsy7N}07Gr)EcL$cd&~;MQ{#6v2gkl`q=(-nKhf_*IbLMRMJFH+;#Md*#<-_Z@S?JOo6vziR%ul22v@D`Ft|5uE{^FJiHut$G7@dH5 zM(XGjpODFYZ)1O26bD0>+>aA@5JM!XA>L#X-wN1>Dbk*DME{BJjSoovTDlTT)=Zat zjz{9`i>Eui2OeHlSa~7ZA_Atwml=J%34m#HO_W^%x|+{bQeK!x@-M^*KO<4c30U*s zI3zJ9H*Fm}x`N8gjdYEzbPS4!*2x^9DvP!_EFN_U4jL(D1*%VU!pO_8PL!o;DI+=w zAb*DSK0Z@`gi>!*QxYz`gb~|J_UY2m!a#v_paz800Wgo~r-0^#GGyaOKq4xQ_D=(G6 zMaS$cxi>67<8Uyd+x7E36l95ey%N1FP>GCQnwOolMd*F6+K*$C6u zK=01cbcjoJPV(n3Dqyy6z$X=l?9shk3Sf;w4Rk(IB|YtE;>1NDL0az7Bvv9Uf?G8v zHNwKZZ>^?EYp%yK_Wv-)NDiO511D#~unJ&!stEhZ08R9GD@T;^;AfYQ`mc;P_8Nq= z4aylvquJUx%!yi!WK>_L!$wkRmZ{m6GX>F*mM?nupm$gn#{81gwo{OM;wPQdBe6m=G?gSY{5CkWu5K4 
z3)cr90t>pZERL|QjMAE{KM7O;zfSx*Eq){+wqCWNJzcSmRrien z=Amo1aOV?(;{K6~>}9wCBXiGOZghWSc#K38zST73QCwG+#q{I6^ewlLwDQdx7K)T% zx}>6QeTrj0^Noy6CMN*hKB9b4_uPi5x$(YitI^nP#b;cxZT$H-;u%{jNYcNTB`6BQ ztIn{}=eH?qNGMN`t z+QpX3lFu@q1*>Aoy|Cnsp78&NZW0|ajmS;Z*E6FhuXauFcK#&9{og#c1elRfHj0yu z5Hvj=q?#e_Y|OWKqjOb*DX?77mn!+&i>oFT&JI9wqs))Gy3w`;{KmwD6u>WEajBXT z%PQkzBqyL@#G0+DPZMOyf#TP~DD7ogy0OYBc-!mo?V{mjK6 z636{QB28JCzV+wRR`?rco$<|6k~1^N5o!qChNA)3u}Ii8l*9f@5wJhDNhfbd@7qhK zvNkcVPr+1E97GcFLJStq%|PI%BKadT`5f(13i)-t?LGv28UdeW(j~p>7nm9 z;r{)9%K>sJjQIwRjHjd2yDgTpGJSo2HZkraznbWj1(9LTT5!$VP}bk7ZByf$eT3J| zNJdsT*n6jz*GAZXPn``+vuY)k!Z5n006<7|!30Ned202567cejS3Wtx*|!_rZ`VB5 z!}*kk=6(P`dehB!(Ev!PGv~`)3ddhLxv7u9DJXQG{Y#Bg|KtH#LgS+st8jP|UEmhQ zmpMya1xf7}DBKkkvFG3g>a_SRJs1VD|F-QwIPFz;JH?SNZ1X2CqQ3c*zm*f4|B@1& znuFWXYkT)lhfIgLRqr#;>Wu_?;=wHUe75{7c}P?jUOWQI^QA)X^g{^C^8*pj!Cg6HG z>e(~}Sf>aSu@T=ZVkj6ni3NWL)`iWQhT9E)Dk6QFDmls@N&bE#pweUU5Qf(f(sMCl z50CbL(5rqZ(5h;_U4F`iy*f=iy#VUIe(MKFq90`^w2h$e#!HmH;Y2>5pq!eRg?29) zQ;DI2FUat+@c(c54M~uN-xtTR0ALShzBvP12*_Fy{>HqPcT{d+9cOa618W+3I?=(v za(%?snPhiphuZdlTvhBJC!^g2$c@jI7e%aMG)jv~mO_4L6sDKt{lg9`-R}Vkm*Ed+(=h25|#_;Y5d}H75!v4wd_=ViJ}W1^W+G?wj0x zaykd+*N587drkl8H&CICF|wJy8%$(F2U)^kr(fnkAchIJ3tj|}F|3M+It|;4fAM|- zqq^F5Hq#PS^|n;_xix=EOhW#|u+u2kit}p{(Dl)w5Yc!%@4sn2vcMBOqwdg+{QaJ) zE&59C3zZlPbhP7Yy9PL#bhwv9|1%e6`21POI4x$=akueZ!s?!&(Alq!2py{sR`73S z+VyEVU}0-6=8hZmh8NdWKC$KfqV|P|W6e|IQJ{-#=c`g3qwYK$-v0O@Jp*Iv7@YWpoFZ>GuGGp!j|tb8bd3HwlzfQ~hGO}ue7U|f{$9^BG20MrH14b6 zm76mjgLwiO$R0RJ{lVkhIj5R?U#)&?C3I8%E;%m7(P<2x9ho6jECCCXuCaIKPe$&M zKdpjaYB<4AvnPx%NqNTy{5ZOHl9^v8jS7b{)k3;QAmP)usMmw>jqkI$iPn&KJTB;Q zg!`1DDf?#UF#!4GzA8q5o~Zx|DFz48MJOCZ6ae7b@&BI+mQWBJ$@8qXSFJ%Q|KZ>K zmFSfNmzERViNE8!HSz618bOh(;VTCpb}LrRFP_R~T27u+|E*Z<2s$pU{%geq!2ad3 zPOH)3rz-;h_CJjm^#7WK5z2o8Juk$+{(=VBKg6X;;2-9k?|&+(O8+Stojnx+=;HsM ztD%FQz7yFPo}8Z0FEsPv^6_Q`IG?dRru@Q10a31Wm2@`h!-#iPYrURVT{=x@|DPej z8W6Zt_wT|k#ljwBq`q@;;>wqw3<@uOIKdyOUM%=GF7&~#-a)k#v!4walxIUAE%Wdq z>c5q*w3@>_MtXA_b?acnly_V>h2?l$&S=)%u2U}Tz7;$mpwtu&y0MaCXP09lqad@T z2Ez%sP>$ko^ED6o&C0059|LBOgQOq^pRS6Oo`Y0e&Xku7jHn%7tGV2H@PvgcIHdG+ zlBM)gDdsa%&DGouCf2L23j=aSU*r~2MX_dHA3x{r4JpD-KgOf^f4wOW{zgs8Upk(% RhXJQQ!;5Bm)jC&W{udP&w_*SQ diff --git a/inst/doc/tutorial/PATC2023/Figures/skill_raw.png b/inst/doc/tutorial/PATC2023/Figures/skill_raw.png index 814fc08d2f9aa044cfa1fd9630f8f6f092c56599..0d5f0f7ec0e4adfa9ebf652f2ecc787b8cc289af 100644 GIT binary patch literal 14152 zcmd6OWmH?=yXHxPYjBDNin|s{ksyTtE$&dEc-TUl)PWHQBiPcb3BE+M?0|0G>4?jD_<*1Z0^zkNASfAwqZ?(WXR-qY<~3+_vVp(g+kTm16{mCKYl z000b7R*=>4d$X4p@3SKQy0kTOHha#wbfV(N)yM4&Z%hP$p`cQ_b|+KTPM3`#3lwk+ zG`zWmL(yDP^B^1=LjWU;00hMm!hpm2p};Gc{Qsrkdn!oTu#4#VhTIJLv78?{xmm$&^t{?-mpO^}9Wi#lL;Vw7Dq3{nd0| zLVBnaJ&NJrN@*H zKH-Z)7a8oKBMZFWYz@rhnm&(bWwLTA{5lU4kvUmE5H5HEM)pV3chorVs~_IBnx2cS ziw3a665i@&$^&C8}Qg_#hv*UW30Ax}N6*v?kh*$F$Uq!ayRwvvl}2mZTdUF{E_ z-=wW`eRw!xm}@m>_2ugiDR-s%gUTSLL&HSRSVeHeHIti|N5#{Gr{3z+q4C7VT&tpc zy;^m2iU%ZVwXNLaUoBN#!koh;zj&0EvgdPT%nwq5=Ta)TMWC>!;)8o0F-E^ zR#Widnt7*3QGXLxrD)ZTm4B83TK-H>)hi+2&U_r4_b89})A!?flH`Vd9%V{?}`k zQU0-bpvbGj<>w-E!Yy^((CSlQV395FoY&@fAxlp*G$JmgWdJn9{vwKf|6!yu^{D}Q zO$DAJ>pjsNIm7*QY~=kUfVN*&sre%s8t)Zy9je}FUVU9vPBZr0KX*%wfw|(L!^7xj zbd&T?A0r<4bR@4{u+o5_mpK^?4@fVh9ID=wSXDH2u4n&zqJGEDuWOYD;#?R|o!I5* zYE<-A)Wp@Yg*&GC-`^ToK{>vvs!4ZPOAL-zcOfU(In%70;Ek#9Qi(%^^R-sw85n(1 znN=w*IsY=&6c^f>mIhw=d8A(kFi(!3SbCml!B@viBTGm2pTRK#Be`iHt@8StH6Y~+ zC*daDx)P%tQ|Cle2;xDOrGb`hj_R`v_IeIHQwoRy4%)hN-mDHIX58q#tk_ze8bo1E z5x?1Tm>40gW$0&99Q1AdI*TqJ1E_eeo3Hx&f;B)2p0dxnpZ?KO9>Mz7OQ>`Gi;A74 
zpDzZ~ou%u!GfRe{b4VroA_rb=CjDt^KT$z>CKx@_WN2(ycC9O#W}`_g({V#@=L z`SoO;S!B>?BBWAjd{Eji`MIC_(KVFKV$H-lI z<@sH$Gew1#ZF+&fTC><>QX9OLau%A}fk*T6~cj?yYNv6HFwD>PIW8Kt` zX^KS?Bn!!4R&4FR^Jl%ZU&gszD#J2c-8P(9;6Xd(bSw|Ov@+66eH^Py109v~)h{yj zME!djS`NxI{423-cK4Jciw*o(pX++ASW4y-;b0{20$|xF00-j%7zVY*LBdf?|Er*o z0F}1&SWWgj$3KvMIB*Y&0}hZd3g|D8EU~UDH2Up%1^_-lzSYIV`veK|!Oiq9Sh?5D z5k&MK%EAAxd`(c~D@ZSd980#Y76)b=SYvgWO-_BgfqSWUQqn(W>a;wzUlxPK5?YtR zeNn@Fku+`TRO-+hYE@~lQ-K4UCHsQ*Ua?C`qj6+aoyJ8v7wV@xv>tlBnIEX!{BQy) z3%`|$#Tw`&LauW6GtASh{B)E?(}IfjRpQ^;9hx0A>IppTv3N|nA9tLTsR=i-q?~ww zB9eS4@~tT=<3t`%u^Sa( zWYWU_IN(~_XV&b!j&24;WL|aT>V_2H{syf*#A5E%OUCtNMjz5$tZvAVFOrn2jqtq~ zptYKI)X+J*mg6^hkYj(*2N$P$1BHD~-LQcleVQPVnykh_+8YjY%`%STF%1u;e{9Za z`I4}}cd3?FK+M)<6bw$=41z|ie?Ux4Ku^o_* zYSpWkjy@oiQ0V;cNjDEe1W|W*duzS!*Mh@{J|AH8!pQc}C@M*P-Wbc;Um7i{fhmvSPH3zwh z*{H^xHEn}_+Ll(iwyh_{eC1+!3BQ2W>ym(3<2z{b$hWM?xoH05(mgpn3g`tyPkrP> z;nP$Nketdp?edfMYLheFZ6bt$|A~VD{PqV((?@;Cbx2!Sisap?DKd)-sL*RBh>2tZrSnM2(b`Hpdwue;Ex6K-@4T! zuJ=I_922MGD-5c8I-2^7iz5#T>L~-ir@f5z9&ep{UpYzc%+Q&Hwh6-%%SY=C_Ip+z z`&|_j8x1r5S$did6E74eKx_)}d#_&6A{DNDup%^J+CnZNuCH1KX)%~WUH}yiH!Phx z1()tPNWb31gy!dWV3aB_OV+&pR{usE18%AEfP0kh@KGlNEa?t}dKk`0Bu|Dwc{^Dq(bOfMJ? zteOEHOfYVeD?3Jf`eUD+A?NPF#Hr!dHx9*$S7(eWs@DVFYY!^Btu^w&nr-P;5QhSomN{090@bpPNve-uqx%*gJC8B zOX~P35B!$xHEh}ayS8eg#25zrLy*C)SfNN7J1_v$t3N7=_3hV=3LfOGjBvvx90+0ki^UJLcR=h#qYnR`1>tNJ}zVLkZmkkBsH%x+wsa(D*P)SuzYv6kXzGpq!NXRT+kS6kXmGs&{Bu5Zu3(`u8rnp z@9(e0&$d$>nU@da0Ih{ZHi7qTqw|rQG1e}Bj~e)ym%LIPZYM$REJxbOT9#wnnqK8( zm5&Vqa_sTa_Pjj9@$+kqpi|omjv{qV!LuC0cRy`S1TReYf}IRke5BlU$%7&=faG>8 zZ)5mWaelD!48Eut`??w?BQ5OfV~Z&hn3^0FG6;mS1IxwEnAIx{X~~_1v|Yfbgy7V^ zUvTm6pVdB-120xV^FFoRwq2u`&=#&@iQ(w?%{7c$JGu1Z&uht10BWT$(VK9a2#M1t ziazms9#=fKp8KBd`6FL%KX@93Q_-VOV_ z&Pl*ZrT^u(iNSv^wRQBK_Y2oDd-1ZN27A^I5zLk_+<#NUD18a}oG98%=o}c-K+pB4 z0KuM`gNT`2!$9i1A2avRf#cu}nJr@n3#R_Z)j>oi>J2tRo&>{+ygFg?X?|2;n4q~M zqGOp*s`c-;jQDHamqi(2-<&cZNjb*M@^yv79)Mup#FwvEwdCVG#g4pR34Ln8+St(g zeo+RI`{Of<33nxkX<#Go)7m&VY(t{Rzgd7`y7vSxL0BA7K>{{~R|o>m0EQ9$OGz1q zljCFhB+kw2cSwRu>!*EmlG%^h&r^)pXz2!pWgiC-B(4ffyj?MURB@*OAgBn55FwoB z{4n;ut(Z*N`inP@$bj*u5QyA1yJM-WCO@%L{-VUdEg1;*LWB)=LG(KH)X(~;#mI9M=;S_F!&Y$+XBs*^e)e2?L91wTS@dD*0@Rr17Z56kIKu&o!FO>EsDrI^ zUup&9@yp`ZyR(F(P)gdLHS1s99(vV(dIT1#V;Pmg)@)GdlEH=Z?6+n#G+JyzKR&a( z$z11uWt~(R@Olkm{bX2Te}heUN80%&I^183%q4=SMnOw$<9V5H_c{1bR zxmh2n^V##3XVBrbflTk2X%SXnxrsgdS`}Oq?H@KzG>;R1>Fjt-r+WpB4RI!>S+jGfTvcYvSCHmf|n9cWUE(-3QqzPodPbY-E=RYc*72n z>mbmp(Thqvau#H4Bvbd*^_=s&c#v+N^SHF)PRr2iO4? 
zyv{d2UC1lVF8<;zy;}-oa9{4(0FTnoVT)jH#;b4I`9t3w%A)03S9fBYqoC~km=Sb} z=}d7MHCq4C&Tv!tt^u+e^+i}CG9QmmcXuX2tSmP0^(oG`^RdrkiM2BlDx6MaG7+q8urjCa-Uzo2EKnFs~P5SwgHQ>bxa~>@+UNO^Q!z94o>L- z!-3Cz^rZ3C6IjUS;kNyZr1m2qjM0i5#k2<4gd(p&FhR*E zENiu3O_Vx@f<$?M5MTZ8`Y?<`YFd=xLr_Dz2p)q4asJvN3aN8Q&4_9M=i~Ih#5TYV z@}I(p!i-SaCb>#J6E?3X7cFEUy+)?xHn)5Z;R?i6cd-svx|H5XmS+Fi3fuY#IOvMO zSA|hg6Ulaf$JV8wB#a0I6@x7$g^_ED8+g+L6*BGY`|kY{;b2%DxW5>E1qj`IoN)a> zk1AR$0L3sQ=mWjdpW0+#JFCRMn9>beLqtr5wgH^LTSVvnSaEmbVlV%qVX;vl2>wVw zL@og2A;&?a`*OhPxPT|zLPwZ2JKG=M98{t zD_?#>vK%s?e<5gLXub7L#{%3bu|Ng<^v6TFyJp*3uI7^9P>JyhWtF!9fZV!=8+_e) zx=liHt}Fcd=YDiHg|8%;h>nEqpTA$s0wn~TLw3D;6*SnvM~}E;>Z*z_#-&C7a7g*4 z`?J{-{XTc1kr)3FN8oeVAdB`RfN#eV_;`z$fwH+@g&ayUFUq_33Cm6f3_o-zatH{h zepij4lRRw(vufn2kKfIfygRA`gHwxR*+#GP<`N%lKQDe*Gj_+vV8pDFZyuN5=+|-Q z)25TRYgt4JSBz)(Xl5Qe*!uLdL%C-~57|pfMel%nYZTd}g(h3r_PL_|(ph%y_i-w6 z!k$v-_gaZZBoq1@Bl#j0>X#|61nCHJP?F!VBT=*KZSv5lI=UX>t(TK6;;`byY2Haq3V?)zOGZCFQu$ml|E zYgiWT(d)MI(amAd0`ApjqN#EVi{z--_?u8U;ywy4LMXO4FImYcO6nzCWqo5~$*16b zqjS*I+B4G<_-wnDIm2#a8b$s7f%yKn{G!f113e8so2Q54`ph1?5G2nI_QlEzc{Ma zeXMH+&j`{f@F?mnwNY{jdc)kxbTWSQK=emD-({Q~C$z!NdiX|-;lf7qGEU(KR7Kn(%P;m6+B z_KF6wrL$bqeZ5GB>DonH)EeZ3mBBIz1|xdeA^Nb%E|yuQ_ds-GBv9w5)ngg<8R$*c zNs3vz8aSBt34bZ*-d8^Kr|7kwsKS@|wb=fW2n?&O@Am_pKGg9LqGjS#Pn1_-E4l1- z0*NTd+@1K#9Gzf1w9f%Mc_?i;N5XIsJU&LdZ0d2pp~#crpre3hl%2l(l-rKY7=5sb zftYjSDoh;@N$wHjcf7Mu{h+5=Z!U_vhm;u9<`wM3g+&gNB{%HJO1A+ir0sz(eN{!t z+5(*g_Yc#y99mKh{!zP+-5>CEZ(%_6`xqv45Gz;bk94kIuxajv+nQ>MXhMHq4agx& z!ADSufGXZ84@U(7A|7+YaY^+_!uHUOD#iq}b zhSSLn6K2|3GjY$Rrh~eJcqrct8*Qecq*Pa-YP+cc660km%L1n)ZN4kpZKytY(KBVdj1NLx zfg7l1rak;@^5ghBJ6@k)u`r66&GZX3e0;G8Uiw?pBiC1J*_0UYtTvnbFq8`><1ISO zd-qnr4mM(UZ$&femJqW-7@%RZD~aPWXQ!?=?@*(jxo-Ylnn|hp^*!IFCp$qFm??S2 zFLfC1>L5pjlfA9Tpd&JL9<@jHQ#F|s*B6Rw@L6)R%GN4ZLY&T(Lh;vgw+p6FBFa|W z$7CplX@cK)pWE!M%VVbIf!}>T2C}9!s16Nj)xC&*s#-Lw!mLX^0gheZMedQTgS%lS zKr%BF7ugkpFZqWEXP~~fPtUKyK(^6}XKf`~7&ME+HUnfX%X~*riD}%Tb!_2k~5>>?Xa)&VC`MxpfGb!dEIlf%S=QxaWOnP;!y&!~D za8mbI7U1ZIe6{)#o*Yg+U`RZ^8suFiA(7fBvlT2n+9xi}0%VI2zzAXX>Uz~+%W7Em z6j@rwuTb8(?e{q?X-fxH{+vIQwbHWPE)-Sx)4y%x_DKAZ)yYreXd{FXvxa!MfVFCg z{9`3R*;p@8ccL z<=OZp;3U%!NloMFkC-3ldr;F4U<>l+>NTlgi%3K==%Z> zezE^LZdLNaU@B+!)Px|8szf?DE^Ku|O176>8PsqYQ z_#7Kaq1;@99s^?y8wadLsI>WR;>jk#sytHtAgspvgd>-udFu zTAv-zd+@e(A;`VfbV2isB&o^m`@+Nhv@sTl~uHCl_e=ssJD40q*ohGfGBtSL>1bu!7^!~~D>&u0Nu8P!Y4D;f#kPRTs3CRZaRu==7HZn0Q!>kiUp7Bh$f7#2pf>Q)Yju>6bF+Vy ze**zaNflf3;K_M|tO(&;xlv_faCWgj`@VXHRhS!5|1g%6LJ_uVNYbpPfiJ%?2K+v( zL$ptt6KvgHn;OE?A~Sd#@kU*oe5^*81D_=v`{7!j!Cub3B)nHbQ; z*^>F4JBkvcP zynQZ-RMgh~QTfrE)FYBX*m@s8(ZCoBR^Ux;0D;?7x9YTuhqAw!94>zb(T-7t|cbBe^WJuZSCnu=bsVk z1{Q{qu|%58@6}D|M>ytFX8`?tql0p|Aeal@^LLVglwYbxWxBL*Z=lfz_w$!_r1q{^ zuYa$r>+G1UD(NiT+zA3S3_x&#XszMoA5g*~yQi+|M^?i(J&6Hb_6pqyu2s!0gl?qe)X@ zgsFCXnLeG{DF%zpoy2f?M36hPr6vH=g!l5;4+Mf?hL<0wPF&%w*F*U3~ zjPUS`p#3YGFa%yXl5H6z0!kUA2=BZqVRYrhp5A?abY_4=mw_4-H}U``s=qj}(7Vd( zlwZQ2`T>JwUF%{#}0k-rpj?- zV4a9(@sQ^Ny8q5lBVyhyU0B@^vmNg^n{Ph=ua~Lq&%|C%=GwdC#QQi{ zS2R5Z>c+g1sUtYOdf#X)i)-NFQ}k`&=7yb;LM?XwWL+ixvpF=~3(uO4K)l{UANYFj zPZAVGa@2uO`i;wrIW6TyQ-v!j>MOOSAH+-wRnfcc6wUZ1|4@cn)7fal7BNvh(FPA8 zNA)*9p3D2SS7O7siDR!@I|PPY6|EHS89^utG9_K7R1Ys3=IVbMu>%+V5+sK$ZbX|F zS})S88{c=TjCMR+j0X{-CwD7UG#TFbvqjn)tk5|%1wGjS6)hj0^Ll{7h^RXgXBe1| z%VjutJBau*TgvDgdI-|B;99(_zb6ib(!Ov;SsBzv>**OiqMt2Wv3Hj+z|Ejr@A+qI zi_stCQf@R(F%k&?lyCpZ1t64mVDk2B?T{*2vWViUD!3^z%4cR?u+K{_a+{SEuCtj) zzTx-igTD)N7}w_F!^@#!q+bt5X893dBKkHW9Pi1GKjs>N#=0Q-jnmRnW#e^5Ydzyc zKB?nY%#1MFxNf}L)#=}}tJ2fQGQv`&7f=|*{r)@~q!mddtRu-h&)I5p(@{9Xl1YYY 
zb!bu+hM9dy&}hOdwWcmXHTqB-Yx~N;bS1ICtDZ>t>LB7h0r~XE2-DA%Y3xtv>!VN^ z(AjHN?*8Tv{#-M!qD;t*Xd0yj;pJ$CE^8(9NVm#0`Hm4}9=$;m^C!XYq-aD%b5=5Y z+SLC+T(qVK=7CmfXSLMF?{CuBn{;MnQl@r#{Xse=6B6`HWDrd801j>TrSJV$Ab~+93@OI#IJ31zh3S~a_Df# zzVvH;G*UQ$GX|P{BMi*KlHFV-iDGnq`3FkNx8;tRxL_&3BHwBnV!MK3j`L@z02Zg( zpT+jT=ku=SO^4e@rP89oTE^*~n_cJF2k1S2?#S7kz8jzdOb$k4O^GA@_8;tH#Mp^< zW(=<<9WBq>$W>EVk6%l{2$P#ATLeGO@-2#Tn(kPS6pNv1?nrqPW*$-va))X=>IsiR3&5sryXkJM`CARm?)i zViwX{U#8XfBv)e6xMXadwJ@MqkwGc9QAmZT6vxs(vifizA_PgyXb@l5r<4!bq|hlp zrO4*LEYC#JYDxt`Obf)27(1^zySaB0@Kd+1DRZRfmAb@!Nj`A4WGh}Kmn5c0;qdWi zkMyOtjs#t=^pn4RveJP^8Ouu09r;Ch|E|9#lOcFWWR2_vmpqGmlfbeysO$f-@%d-! zfvnYQ>+jzEQ7KIr@J9}0q#l6^`4&|>2-CZ4E2k97&Z<~p7WNx^K%v# zGHYBTTR6=Q<#BrU1+`F% zaVC5oHBSsQQ4}G8FPztvY{d$A816OI@e!+%Io`*?Kw4wRAPXvUHEe~D-9pmSp5>%m zmjeU9dO2b7azmh4l%2}yFKa^sjW1%3)35LD<9;84q6ZmLp?~Elk-VLc zR2RueLtti~v{=$Wp*P$u2@jmGS0@4=ar!~MZdD(%P8KaHOMwjU68zsxxiR0|WV$ZJ zJ3nCJ>X+d1)*B%ItI5e4fFA+lKiDq%oEqBKw$h>&c_JakcaP z`iYzxr+Sljj_kAzM+=LbiC>+)+juB`(1Wta?X&kmIH3p}m!e3CU%`AMuN#6*WzW;o zT)#Q`f4t)&=x4w9r2fi!HedJA#?Xt(9!LYDk@u6{qHVmG8{Kgtq$GogC?8A&9Qb?O z3U-#>VgC9YkAEmu`tWb6$6MXzf@CUSb+p0tHF5teH{zD=(VPOl_opk9oCp5xmlZ=9 zC&TX{NNPqa6!GSU;)yZBxR~5j-V}twOg_i7UXmgri1FhmcQDnlCP#G!G;HTj6B?a3 zfUU3a#sgxS=!-F>d4mYyY!aW5_Qhg4E+r1`s^?md{q;wzXHTpCt|t{|h|s^$y}kLI zlTDB+y0OUKg^=yf{ce_(l~0?W=fhJsaVwo7E7`*N5PjfBy zmkxGBlSwoPW?~Be%Qv@e1X5Vzns&4dE>$+m1c&scb!fi=s|#|U#`EqRSy;mGPr z32hFfn#Q`fj4rB>yKbZlil(wf4{)ZGO|C&8x$qec3cAFs6OOVr-(fjqkgHvw%A}h2 zmFg7s9j@fI~DP2hrZa_cC z-QO3D*B=y@qWkDEdYLL*m~J!WIVJ0#YylF$a%g5pKia7MvMdM;1Ibn5zj+Hml2hzr zvmEvKU*s^o)&asOSW3Ndlb1xJ=YHW#nuNl;DrG)p2$G6~;=(2@GIeK3j@ztE8fPazn)C1nPkv}7|izx2+$C*2cfEP#p}yb zB>FQX1)F=-M={6^&zaVXN?5D&;IIG%Ok593*S{);2}P95&@cn z?>T+%jC zOmqH;Lc8lUUeKcb@2Rl2P5NjEv)ewX|8h3YV-Nygm`ix?>C8*(Ev$|mQ2@R7G%@bN zR5w=S+u|e%VEF+DKT>*LpN0Dwhc`!mz?6#r*1Lr62>XRYq}iB5tML3br9$ITyZP8< zoZm~$RFyZ2gs7>$6s&)6KYP2Hce45eK}B?dAXIg*K?d^NIHFxeNZASDKZp@SXE zdNyplv=YzcQe$85c{B=(E4X3Agsc<6GqkoL(Gx?PXu7>MwyQGX#b3})V%XR88Vlf% zBg0Ug{Uy0g#Gwc>So;WIVOv-=pRA^9)IdS zIOYPAZ{+$0#Obv54Y|u_{gK=WI)Mwl9QYf9^gE=+gZleY@u*j+q1*h3;WOdnvjvKh ziGbEi3)HgRrx)Fe(jn4{V1+IlNrU`=y#DqPUyy<}kqQL19>&zM#;@gz`J^4=FEu_2 zNmtUVAS186tMKCOp`*Zqn!nP6od$x305e3xA06{I;UFfA!uiv(^ql|(e7k7YUcg`O z^a;yVN+L*$kKW_O)Tk}~)%%5vgM!cJ_5$h5f)@x-yU9C%!ujrZkOn?% zeA-PO^RAmeZ|^4kv#Xs|vnMXh=<6lrtMhsnBeEJkp7#`*{HO2P>Ytl0f6*AUom%b3 zj{hqEo1mg&51mzfRJyA~HghFY|jIq}7&w2~92=aZp6 zLpI7p2Q>h+fCB@oxFZCqVe=YAEXxO2GoIQa{Tc#_8nE>rYyBFark535u|Jx`g^qNL zo6sr?D~F^0*2&XX>!_bPk;M{C#>QQNmjmOX$1#xBpouHeTo_z?}#VC|L{5NfKUvn=T&?2s{fA!@DYROh!s z&B-ODu0BpklJzE4XLs$rY} zQE+-(4t-bwEZN-xuFlbWwT(+v4QCZcNZY-Z?~UZ*gCp9)g!eC0AN|=vr<_=zq7%t5!E%{i;BN z?)kn}>>G7*IB#(0>_M9K{S?r&VM+Oj6Tm$bDK%!c5K;Z*NawOmFl_uWj^-)5K;s@E zE>X^*7h%BeOKKznnTA;qg{=UQjSlN?d*p04kSti^uR%iW*%}5DBmO$RQa{DEZXGW| z+&CjF=R>(!$Nc@hueyX89*UX~SgipuVe61XFTXS0i*Qk}d(jB!0t#aCE%qC~kU(Y? 
zbn5r)aNwwapu>|Gz<>ua*w>GOl$l}kn0)sPOq4hSu(}5_#N`AG^+?nuMj^eRu=Bx5 z5{cIWSXHxRU*9T^YN(JAlk4P3dJ1f0>j-Fcf5%@kANz=zGP5sUZbnT*k1BpC-ZMY& z><8M8qXZWa)B#(P7J2J7C*dLlXuA)5gvdm<9;i5g;RLuygNX0#X{KbTsI?xdMK#+Z zHYs_|=Rgb-E_7w~PpqW*0V~MiGvq!MiI&8tY6pe1e_+t|oK3=JC1Myu>R9I(*On$n zDcbv)LVQ8h^^O*Bkv`6^d_wtE{b{cRu;r?lYu;E(Gb|#KM6{C?L-`TF$Pj^t+>qiH z;u8S4x<2f}a~Q?mljfy`{$2)s3XOj;Vejsj8B)2?6%eBQJ|}66jk=}^OPFw^xJQDm z=nFsaiWLIg64+_dN3A{4Nz92=yp;HFbXbF!p>pcWb=VOR@uoH7pFj6-he#uG{XVY^ zq!jzh=h=_*Epdvj+j~T1CKlS0sq-|QspzX`R6K**Ds0+Rch=v22(S#MgQpF(CvH|2 z`b4~Ns%`xa1CEXf;0zJ{NFJG$Gg9OaMyQ>FK4H_tT_ewtNGuSl;65`99udT$^&KW4 zSDgg(Uq_y-jOohn;j~Ygcoxb6s~fRM778ec#4;FwXKaYlCDqb6D$O*l=in29VKV9v z1?k?R7!!<`Vyrj<9wq%mmZ$P%CsKvIMYbYTVG^@LWRiO1Bh^?qB%L0*n5(aFvUfF* z_DKvBq!9R=q-(Y_dlH2dBTD`9SA`W4{VW)mRr^LCd+%0bXnEG0+83vW6|s0*3u=(Y zy(x4DnDp!D{?o;8SgSpf%4WcY^5==~#0-|+5Ohv`h5%P^6VFp8#Y3UFE?M?K|FC;4 z*sE6UvD5c=Q$_~EV~~Iu*>1)r7nvLUFl+ZL*X9+~kC&<}doDoOy_d&^@CL&yQ_*)E zep_8(fz}w|4@t@3~b+1l92!$hWeX82_n`gdWTKZVKmytS#1L z$=^?R?R^8=4k4V2&(#J==YAsUH7zu&%XZESxIdqIfb6_GbG%9=@K^fTqNayC(0NR> zuWsMa1&87_>sRf~LxgVjyN=0H1MT3{l#>IBAWl=={kZ`5pCOUtsew*#`++m?>Zl`6 zye$~U&(GN^hQ*Qx>bDJW`Iz%l@yY5AH%EQwA8w}BbDd?KH@Jwo;Uz!pt6Jp3DYIS* zAjn1=19zZAE+{7F_}lC$>34#D>-RPsCtYEBOc(>qEa$O{Zz`61i>I0N&R)TQ)>QPW z^k}Z|Q)$v&e4MKy3!hlu8=lqtwmV%O!B(+6eoXX^&0?=vz4{oa-%0h^pM(unb%o0i zqK1#dK&$6PM>qS{?2s4;eaybKN?Dy2?(PY9^%spsJo`htP`#O3%?%6#f{mc@-J||C zr)-{-)GI9HU$(#@0ItFR|FT@?|6KC(odU^%)20y|-%=%N^ZL$WJ9O^StogR3=I-5T z0mV(jsm0F3RA!Bvu}pJ_{z+X;sPoPJHM^a=xl4ZIyE0|~32 z|2nlI_(u>Uhy{RPKri(_OA}!E8uSlF@auhP69$sn>R+VSg8$O-_5X7kY*X%(cd>RV zU+k+a;VwA={_GtGAgH|I}c) z5%ixWYu%}52G}bUL=hW>z7XnTf?SAoszzb^y0H*4G)7B2@#&Vt{yBk`+1dSf8Ce`5 zk|4(k@Ichz^k;QHW0{69zlp-IK6v{a3=2-1`S$XIbrNwuQa1O*2#MAt%X{)DC%*XW z-e~{7Bqrhro$!qS1YD7LP98v~Hk%N)y}uQHt6hw}#{`Tk53$~eZ2ms^UJ(A8rZE~o z@b8r|$(y1cj8}|>pKXYOUcJ@-SLgq?1!O08AkRH6Z5xh{yTJWN`Kg*ht(>xZHbX3dqjjO*|J2TFqlHb7(0a+vQ~(!sVIYF z7b=l`-(~mH=lgwr&+qs9<9R**JpVj@%s;4$&Yb(4b6@9u-dBvNi9RD84;=&o zVKl(%nnNIP00Mz-A>pTzjqJM1rw?sYBg;#tPik2wR-YqOt`I^0%7J)> zGg%$uRlHJ7qYGcOG=}HaM_kS4V(%<%Y;Tvx?XE8Dw^tmD_nw@b_}#wied>kQ2Wxv5 z0%3Cc_k$K`zI1~?0EmIEmSy0x<>#@flVzxajDxKKoJl~y_CfQ4|Hu~(Am*HO{N(M| z0ml~0c(k_Thh5TIn$@n@F>Bi5^H%5az*_rKE;Jug+SwBDZYu+fHWsN965OR znP1&2VMOb+`4qC&{>c)4Dk-jWdqEy*Uds7i$yRMw!_F~6gBm(rARK+Hc1aQV9M!&i zC_(c?@coB}yPFki8~TyWPo4RVjZq|8B0ob6@l8~;?)CW=T18nuaHnT8=<|e-Ue)m> zv3;MnUQ;@cyvn5T7*k>x<~OH-&G2|q7(+*3{unJBa6yo%!Q}SH>qtFxWvz|j-^0*vv?%8y~I-O4#&$q4zd3 zF*xq81b<<~UlMr}BG4ydD}+H_h2gH-w|ht-rtHDvGE3r_p?yw4n4j%|axy=$HLU6H zxjOt4ER{4BHtvg~6YE*{RPe23yh zMEFj`-N}hZGxrRLU-kSv-r%0dl5FC8 zeS%JGG5xwQz5HFv2Mp9EKI4wTzv6`oGhRrz{EIhWyEOz{&~iZa(ZfmEjCUMuvESl! 
z!S4?s8)8klMdh!t#g}h!9wLW+GoWw8eE#-(Jt7J2cLA8=x64r4{`zf}Q>Z05=yvog zII_-&%l%ygCt(TkUv0dD5QJG01{#Zhx@4URxufnjKw4(3M+4hlV^66i_ED6(5T{q~ zXun|qH@6I9&Ba8`pHE4vRjnl+&CXWQ-ANVg*{?OXpWE2^$gG;v#aqX z{IJ|~Myy{|_eKv7Ndct!zWM8PCcK0DX#njCw9tkt{FL$F_gT~U!+YR28eAk$Um* zOy^af*E^q9m2JY^$(u!@3H5UjG7d;rq%%OjSYQB$9%#RVUOV0uZ98K`w1;izlHjBc zXgkL8xl&{+bI0o9a{C$J3PuGe<|7(u4@xLtLBaCs8B#AM<^NKwY8~aU7}TnsVIhi< z_=uJ$tmumcDKHgEU;Tdzxj5Rq;`rz1@i_e4OWvVy1c5w}?q5~nl9Xw^yv9&mBKp12 zQE8FOH@_@gI8!Oo=K1rVVJ@ZoqQ!S06t#f3TC*XLSp{e@Hdg^oha!&x^#%FRmrjd+ z@Zx@_#j`-t!H7_?twiS|hHR-;mlw6wldM~Ok5yGy@B=N=__!pvyoFZ~lcD<%lv@me zD;(VrNF6AH%bTDmM%B~mQpL6i+%rC6hI{7&8}lj@MY-uB;FXc_GAlXJTA{D1QqN!p z%7l?YFzx^fkwzwHSSRp}P;%M<`Hm`DKXA9v55yVEDp01VRDg#LDnO-ZIycTQf)B0n znh$Nu`jtUmMGJ+@xr&M|qrF9w@htZTR9i}fTZ+H=DH*^@j5xH8l5i~Ikz`)sd;TM|-+rfxWAh?!1P{})fL13%8GuT%simQ^_#`FrIUcVE|N-hZ|6&974qL6#WECEBz2 z>wwM+f?S@&hy7cQ3Rd1Ft^qWI|NZ;#PmoE%EblmT~T5-uPNh0Q^v244Og;N!Y3 z%2UV~7H3~oU$7*<*J+~!kJjxAE2zxiBUS|I(_|!D>eUl)0sz0kjkrWPrLu-xeq(MI zVv)-@wf9aV(;jvF!>&R3@Mw8endKvqh3JtNTl5ie(Qxy>H;fF4s*h{eM6imc?mQ+J z0q*Bvs=!oeyEtNM2p>&f==}WVl7MLb=IKXC@Dn2og1quEJ@p%caQBX#Yp=+c6(@Ps zFfA~2tL4<&_CIeljBmD!zB#fHy>qnCOkdtox%EKkFJEbYY3}Sl2IVG+4JBqM z>^uur_3)&d2L#{a6qF#hR0mdWrhU2J3gym=g@3YNPUm~cA@lU|zCyVeih{n*g(hiQ zEAUqakCX-Su9jsh13LcK!B1V#m@5r08@^9Y;Td|*LxnsN@^+_`xf}U%j||64YaUL$2HMzn5S?6=Q_36i9J1h#`)43W1lOM!;S3wsm-Jz1maLUc4O~s{NF;@PhQp1D`)P+S zBif?Y9C`_$y@lQceRx=o6zbfEZb5n`@*8<(lAG&)DOc8fEPh`{whs9QxA*`FqrF)> zIg*BA(F4fKW}M45UGR^^1rj3tLskvZNznRPIlkr;kzr<(NWO&R!F;xV|Lr7)un~`X zS>U59vEC#i!u`G+g>#|!kMI%>l-H)@Ecp(Bwc@}OjUu^Lj(pv0_P>12%;)vb?_v&~ z4ihmAw%S1-j9N^6=NdQ?I0#Z!GYz58pS)~nf36G}$3A8WD^IPh3deW_DX}KkhZKH8(*%&vwXWVpc3FxYjk0UqvO>P9<{@n)i za}d-U;dU9j&2M@9OIw3`)P$XJBELI8^sHcL&M^bB$C4=(g|)g{4rduj2N;?7vl$a= z6#1qqDlGIiPZ#~B-eQTow8;(KjxcUi2lg%xYGb%h0)exn>#lw!zSJCWSIt$CmsxZ< zyq~wa)iO+m4jbCBj90XCEm-#aYL&6ieqqRbG)gR(;a&%nJM+`s@>yZc!a6~fb+bF# zG;b9+=q~~r`Y-_FddxM*EG=vGj={*>yC+wewje0rkIRn|BNMe=7nj;ZGmfu{P~mZ3 zH;`b}c^6uT9qL?88R~+Vc=!`|W@_h$2R!#T7*$gKwR?HM*@DYHNv^WQ7Ke zs?+S1!8=E;!B9%4Amce zBRk-zHQMTKU&8=teHd|X0k*h-hXiwinPX?nhk3Fbj__=#e}R|Py3l(eJVR5228=k! zyOSnK_`^Rt|L~HEz-vhK`w#V+0IvX}zpO;+>sP9;A2C31cU`)#Lx@j(lU^Qh!XNzE z?!YhIz!KtMIBoH3@BeI;@bab1Mw;4?S{EWkylX;hg?9$9K5N*O%k~!NsqK+B27?$O zQx1?tyfB;W$VL1xHsj^#hQM7(Jueo_qT_>@&jldkN5eEoLj(E)2}4&basUZp9ak7B zs-x8;F$=BEs80qn>BuK4H z9X(N|pIr0h~n!-ANpU?-VLfK7C%#Ie?H*VtXdu+*~d}7!eVEO_}Ih`pmqZ?-xHXJWeM+&c5(S zaVuAN^MW@f^7l*#BN&0yjdCCv9PQ2tLu4lYw}p#krGS@n-(Y~_XZA7GiC3e%jTPfZR%XPdX0Yuny|S* z%iH!nHbh`0ZT!r$Ge%3lX^C4#q&OFWS)U$X7U0M2sW{o5wHNbnj6ZavIL~LnNH{P? zlvW8N3L=G0m-+ii2Zn_UJ?5zvPZ8usf^{m|xqtIgxudt#Pw@qhs@6Kga~};|USZ(Q z5L^TPJtM!}~(F=_ZQ!nKDua)G4iUhd+tsezo}nq_nJ)qqcM$| zQ0a4Mrl)*D^RlB;gPk~7z(yRx;Z}m|_YXd=R!GRXcq;7=h<@yh_hEW(8fXi?ubGI= zuRXf1t1RGB*(iRb2wqU^aHRIsB|p=$*7+o$$SW(Pw_4rGgF-@3=?b1fcXE;9bCn@o z0l}lFPUWKLxk9rgic{f3>zXmiyW3g1hSi@|ym1JK>*IDVL(rQF>-Q3U(x23rxz0s{ zZ0?JV;V;S{-ftXpcNQIe{HY}@`}s8V8v|?drf3r_uvb_2rm1r=DnV3@21?TCxbGjS zv}r*Zl6?15cMcs}BzDH)5iZu)8-|sj2cGt~0bsnV_z)^9c%pCR++apzvuhjW=edPRwXf9j;d+M5n=LMuQ80Ubzt zP8a1DeH{0VM{HduMux_9EBI#&SbedLUnbJ_27CzvEMiedvAmsr!e^yYppwrtgDdV8 zsAWT}soLl=;#F;I+Wlxv zlbyI)uexvjmB;oANvG3M&?-hH9DjqaX~1m%wTl+lEZRN(SyZyv)L|)oxj@(#g<7Sa zCaA%ckNpn~%=Y%Y`H2a$kPYm8G|IWpPDWER<3Q6q|8UhiJxMg#e$46}4M9Wjc^H$R zW6OcBszq4@Z3Z0{xT{Bg))V@bJLl_|H#129#OCLqD5Y?cXh*&&l}lvQjQ#HT`tB~e z(pePvOn7&cwW0M=k5Z3}zCEX)GBQE`)ASuRM(`#FI)c)C(ifwB&4pH};u{U-x9;#U zyM;C7_Pu%{M8cCz^mk;m(W$CD%f)Lj(zAk-Xz2GoTJdBl!Zfd z343`z`5)*%XF@_Av8V19@O;-7To2Rb0}{waSd&K{R)Q@Ab|8^eepNcSsf=aTek?_3rJ3j-F^mQ}8x({Qybd6C1G&g^~A<_wJ@+r|@dtG1a%! 
zVaSp5DNo8C-Zm_|*=mxhrP{oabHOp$56q^)PP;quEImi_69KsVofOi}%?7x7MYiS@ z?=*Pj;&o_DsVD>SIhv99;c9`%1z@?xvHpneK)p6x18$DnInUB$g`yzQ4PX2K^WWVZV0ANI2XE`Bz5 zX^T`TxGNTVOsYl>2sCL#{TmEMV@~(l5;IG9uw`8tO4f9JKP)7>)5^u$x=-M`ALzCn zs9lA^R=ZdHbT&CSJH!nR*L@ylb@4Wfi?NSA*OWLhd--;C4I?f~n%DI}qpJRW1UeDa!TxH{E->5X}1Tn=$ z{G8Rvne+GS-#0;ooBGnFvQnFiT`7fW6=Gk8fg!Qh+7`poHrC7laH_=ay@duz#pU42 zGcC!;S^QQzPnA|zHaxlq=sTfh$?QV88W!f{&+C+fhV7D`jcNROEkLy-W6IWb8aBbH zBOH|fp*KiT(N!}s5oHh%bCt+%S|a_!>726EQH8w5=3L>aa}tL>G7VGz0K4H^XS~U> zL%f$?Wf`qi>Y$>95d<&m4L!Qa{o+6;5!|YI#`w*se+VD)N7)o3Z_D%OUBq9VJ0GED zU+u5X!uz1q_xtPoKAVW62CJhnC;P7Bx*qoT-#q=S-#+JirLCd}JJ?vqJugveJE1RhR94f#$`QV@Z>vFz%eqbQWMnDq0PddL%onMeuOcFsJca9SA z5#vWi+v+7eP}LAMIYl);(cJexps&I4k88=_Dv`SqjN7VxIqC98ciSG6;(lfTgF>T< zGrO6PeM5QiSVzP>Vut zk)U2lcKQ+J;xWcJdU}^j8L$F1=8WWnFpE)%C94qZ_RCP?S3QOwN5oKAb|NBZ{m%E? zA^plwuHtTrC+fcGJ}`Z;a!1#8(x2?tW3EyC%7m%Z>c|Cwk- z8#y>VV74UwcQ443x-^B5Q=O1=5omaGW@r&&zIW$gJs$N*T|72~Mfj ziOY#_XKKPxCW&U17LRK=;SRIk`rU#4g>oMNLG9^lxopq;eYAhho@3l)kY_@#cwFXb zdXlf=d@+l~*AA z6Z}#^O}Z6^bV7Sn$$O#=g%34!HLUfDyzxP9xbTvj_qClWMW#aVo-MwHpr;t`%iCHY z>XV!%ZKOU4dl;4LtsPBd8i}*ME0j1Ok?a0ym=~i3*@kb}=*>LF6H1|Zni|@Mbf?O5 zfxgh|Ts??Uo0b6hbG&l<<<-Bp=Bx14pfl-o{sWwrBk0N$8OQrP5tq1W0 zDhM&~VLlbbqu2tpDyGLqK-*f+2+f5xMr#d_ntZ_G9fANPPUkIno;=kjDY{-MG5Oe)=9^D5a%p9MQ$^$39zl9J&+8co{cFd8Mo?BKg^JKYQNp*^fe;wBkj@0g}Lj*Aus;QL(DFAc=D5MhX!9D1$qQ2<0**T-V zKLw1LvfvPDQ1}=AA;-JEA^G$SMS|scr`1=PL9c(X3=)0R_sq;38$Q>!>hOWVii6~3 zarN7A8RjY`ZGvi&4csaa=d_pQ7RKx$GWTNY#CstSuAzTi0GsbHEHzV5)>S%wpqOd& zL)eB(w}E_&CyRAsJEyGqYby57N)JGuzU$|TtIXrq9bhs872bQ7wM3q}NeHG-2sy8w zXI%6O{_0AK`_U*Va1}-s^2SXW*w_$gKhw19Y)NN%tlp(8JUjZ4c6Hhf-+HI;<dRSL91?pw`EO(@cD(oUC5MVJ-;=n7 zTtsYRmZtAs)$GpH`T{l23L@&?DOkJr;S=u7c8x)oLh;CC(Vm%o5`qgLwbiwk%P(Wq zjBJd;5$`5RIoZbjC(8HM0;3;9Sn%Ai(oR(x89jUpsf-k2_TvWYS%+X01~nF@zAY+& zBy*!Lt9M5er!F6@cZFFRDO`7#XwCEYO{oKAe)1a(Z%Zi*-h~?E0w;i69UU;+1$-D? 
zKyMb%1rkHT;IC1G#vG2lLKjz~nYw4+b7sq(z7ajuKJMwwiYr|F24@#D@r|1d=Ei?< z1s~5{To53iS9K*8A(~#9T3(lLTrzYMT|7JQzph7NCzgMXPad4YFOgJ}87v!!w}jMc zP85t4>kt+YmCEF%^Jo61<(9uC`}>6q?hoa`H;h2NXRJV%^=vV2<@R5|(Rl=R#(zPJ zR;bjaT4DcyFER8B&1UIjr2k&h*20@kky}*cxP3`g-7nANRAoXO`{o1|eXD$nx#^;N zO$EBadvBd9-9tS;h)sx&?la&*ke_})oM*X9@J@RrNiQKx5Dcc>ZSJ z_nXYyW86<)Ew#K1NEMZY)iYNX!2A*ZsV7_c`i(AUT2&&tTGgXgkl0I{5X{Q?nbT|5 zxNl*87TB6QZ?dMYy=b_+*0dDY5iM%>eoh4TAVR^dIxMd&N0>hn=?k7xQQp-XBgi%v zrxzD!l8i*!ch*cnYwbB&=nXx|P<+y|kivZJ>duacpB6Z!f9F8Ij2CSu9RS;~mC6-` zklX=aed?$3Y;U7UeA@SxcS4CIKZ@=CO-TF2FavO?>3e^w&Cij)^iY~wc zFYn=7+eOsAJSY6*M!Rh-L~XCr-<&}?%RRW0a5g>;Z}tIn_Qi za)q|fdzsU_1uosvS1j9XqlawV&r*8D6n$J1&y1&=RYO;kv~?AN(`kk7q0(pwsY06+xrO;GpE$WO&Z|bM)b;hF zVD75H4fou*MjC*g|2qP=!*VvvR`s0lqtWpY>2`)Nd%i~@Jqq1R_##f&&deWr4TO!z%HF=)S?E*WeQt0t<`&{kNlR=6k zz42fglF6UpfYZLbCYyp7tre{X`f+p0Mks8~1Wxcn|=-p8f;UOce;KOql z*<+)+;oqCcH?pN`ceI$`(4Zxcngpi(gZuhBVZ^lRM~5u|pI&6qpePT)3tl7WK7x%y zucdnqR9+oTF7~{0x#dMKW=I@eI%NG6-%3cX+!4DJwmzD4j2HfWeDfCLJoUhU z(M-;sL(P3^T=y-ZVlQK>p4)OU6x}vY(3wX-Y{_UQl!9OK0Gt)PD&| zaria-QHdHiuUbh{qGXk&%wv>SxyJ0s;@#^5YT&>ft3iGDb0Q+tQq&$gu5=y~<)PcpFlkv=;% zP6h4?)}+wNonBNDryF(Km*ntNj?UbdUh~PDXO?6?rO$*?I5#90XRI$mB+}GifD;V( z+Oc#5sgPX#I{|IzIAZZj_ym1BFm7vn$!dn~>5c1jN+c5A)gbSE0C!Ot+0Hlbpzj5@*@@_=S}u8)FO7!lb{m`Ot#LS=XF&70@VCyS;ok5 zDEp0>d3>uy-iF@8kM(SKB#`DoT34VLlPd+!^L{>pYHdK<&wPkmvrBFF`pqvNd}}%8 zMf5#ds=4U%Sr>JUkN*^-Q95t10e7FI=MRw4FaN^Ht5%D8-DT?Gv9|-20S3V(Cj?pi z_kjh~e+lQVBEsWOuc7+tV&J3{?*^(nTYaMvxeskICcvQ)%zx@Rgt_|{KiXVlt3N!* zZJEt9TcVQ4Olpf-{3sMG0#22mL*NVx#4KNG|>Ktjb$m+v|T*lH|2~j%kV0b%$PULH&`Bx#%0faJKa|!+5jeLC?)SEhn77M zEC}*^ZKejkmY%HAQ4-4N+wViErYFtuZzgXWU6C+zrVnDhkC>VZR6aJxbDu))NrTkT z@V#0VApEFCq3~s@n7uK~6qerDuMzyE^okDrqUGG0dINCP z=KvmwoXfcOe!WZFh34#=Twa?@K4SgcZU6LV#3Q`&mATk5FUc8%^x16%QV9*LueT?(TiZ7-0pb5NChC}C{uVHz zMSpL5n&e=p3bI=NDczyLB0nhrC~Qd*q}X!iQI_SYRP$82(-Mw~Ub5}qcu7x0gm}db zJ@J(9ag0X=@5u!|qmEW|c1ePSysNj=<&o?1yfk1KWEXzUWiGHc*+*HzxiKUOQ*gcw zDZKZZMb}}#I*D%Ncg963Jp@^H9xvOLQevCBpdQ3WP=pH`+&L8brh+(g$-=|;zpE2V zO8rf4!8T-CVI_wiGECSfb#wT3sDzO!@YRd6kZ4u*VQ?xzUIlG~yd7NOXIn zslB^_yg?*-$Ah-VT|Yv5gA4htx>(EdCkOHMI3BkEopNcui2?fYaK9k%A&tZUEi&sd ziV_Rk=x2zV7Vl_7sE&>^!?g_sBpBfNA=(*05t@k{@ zHk6f`oM8EE#o#jBQ4qt71{lLr%PPMZDUVH_LG$8yaaljzZ)w(!HB+D)Tvc)*K_J;c zLPDoz=_#!fzvV~X+%u(Ra;))x1Krv!Wvjf-X?4+T)U*H7cY&NmdBu{fPJ#?NnR72p<3??rAP~Hs!}8nQAs{Z!(qWC}cDtm78}^e1zf8Slfxg9--n_qQ+MxyV?0FVsKY}sYVwBB;2^{)v@=59 zV#7>LD191?zEse-z{OqMIL_EZvNCV00+fM-qxXLaRPy1chpmvc)P$2Vy0Aaj8GeM9 zJuEtWLfUTS!zy81RZA}48s*np2_r`_GZ802Gz2> zEBehKECY@cohyndpN8c==qX^W^4st&D5SxZ{*&m|(;gza~TA(mz+k*Sv36SjQuV&&#>(U0{?E69i(=IPoH*cc8EYEt^uuA17%kq^GZo6mvp>`~|TGm>}S@ zhR_t&Z~sTO{{wpNZ>KxOz1TN$j}m>L@^$I$(2r_F=Nuz=TksQ@_NuGyV%X-#zG#r5uj$oKAg*dmhobobj`iU&W6f{nG77D}T?=08! 
zyddV)?p5nN+hx61yb zd5Q|tpr4|POMDJV_^rkRX!IPQV4Z}u&}K&Hn$bq@ym~x!;VOb0a5~J5Bk?BO@S6*NoCt>f zm>}V?B5$nhR>B1pd06$n%yE;tSikzSy}Ll~ulqzW`8a?Ue@HjMDYpNNqJypoRR@oi zfnT>&pV(zls|tVahJ=gK5*?duv}-Jq$MNY34~X2wkwc9Ovi)X1N2Pk0B|>SCitYF} zgr3bnL~HU$k4450hA!p3U&Loq;-poy0%%;Vx;fuQ3dLg zce#NcGV~K#yL-ZsiMT8m0JxSVYIMg9Xn8mUq|b}r@#2_^X48aIWmDbpPS+`#XU{~E#4wo42v60*|~2@@)K_=#I9@# zJen4~jv0ClkM=oWM_-p40EY|ykm&jS!1p;&3X_l;z_gxb5VIW%F%L1Q=J?+gU=$|d zsS>9rux?~UC@q+J|M;!`RfhBEd&RX=+EIMORWT?DZT`npUc1Z@irzlI0{VY8g*&A^ zF_iYFK=(q27=)!NkRn%Ua1UJVAN=oiqb#K~SiMuf#}~ZiU_a&&ve=;>+5jL;~?FRQMQ=Qjk3b zn%mT$^G{Xew&QycMHrC6B!8Wj=<wCo&5hQh66%ZVB1>Yz9ZuC4wQ+Qz5fY#to;sYsq?U@P@X+s1D3DjK( z+)Z5s&hh4RRJ5YJ|13z9lWI>&`eMeDZ1`&=@7=10yWzzHZ6I2EKA_^JIX6O=dP%@xqJotNJe(pC2me z{s(?SQ7DIU1sbA=DWPUa_P2B71(okwm%;JhN^{6qIH?^P#3zIArVT=CV+T)fshOCP ziScigPdQISRUl#MYSFhk&R8hb8M27fDSQ0X$q9ktF#g*OS-j5&ayV&L4l#2eFiG%; zQ&VR1L7G#l(PhE9l>HnkJpgB9Lqw>Qi;N^@>m^}*=W8JmR$+m*7jLu4tHD_?4?Dio zQI^HMel)TO-2$n!X|^=I>BJ+7M1IAICjRL=jU6%pA(VIuIdx86kVWoDdpcP+q8|KL^mVb2ih>R*l5IoIg5} z$%H)t7O+bB2IN}*pV0Y>h!@dntY}|e5AbAj4R#3~K+fvLnA@Ln($%QZNz3kO>f8{fvP81X%u zObA@c{he2pa8m9BzU-g03tx7!`d`IU4qTAyYRNq?Rg;fs_0Q`vM*VmK@A*&f>5s+( zL2q`cv{K7yejBl%3BT)>@%YYyQ$~Fqn+hPE_Mcp$0S3yt4g1l{MbQWWx?rp6raskJ z8i1{#nHxA4|Nikxrf5{h#}Ilj_ge&1`E;5xU5#492Lv{I- zukVFx&$v2NQP0DOtrtms9`;j)*f0h|-}1?s>oQ^&6Ua}s8KKrfOqu+l`5QF$m8}k= z%t4$o_~{v|cR7 zf+Ek(kK}m6DC1}6r#Q-pk-n$#L0y(kD{6)JXIXRr2Oa2)=Ng61-`ne`_sI18gZRq` z`KbD!6H;rRKZQTd(Bea<879L2_(}MI_R^;r4FBiQ|9_y=%Z;Z>Thx7OJoaLUaN#(% z36C<^%h zzVI3m-ML-)pBL@TKnXQ)I~*)vuFt!Y#xxrEysyoWX5;tT9g&B=e_xSf5zG3K{1;#3 za2^%XhPkZDtc<1jujd+LFCeD!Tu$k!^X$;NTe5KOMS5XAXQF+u_ce z*Gv@Z$eOuo2<=t=xt|rKb%BGmM@@b`98Da6*ahdo&u#l8yuJEuO+S#vO>i+b)b^w) z*WxPEXaXNmYqVyo7;!v`JV*C|A*;& b6qX&dw&zvxM)ouX5@K-4ME8~UjfnpNiX51X diff --git a/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md b/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md index c32a2c67..f775a506 100644 --- a/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md +++ b/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md @@ -1,8 +1,6 @@ # Hands-on 2: Data assesment -**Goal** - -Use CSTools and s2dv to perform a quality assesment of a climate model. +**Goal:** Use CSTools and s2dv to perform a quality assesment of a climate model. **Load packages** ```r @@ -12,11 +10,11 @@ library(s2dv) ## 1. Load the data -In this section we will use the function **Start** to load the data. Then, we will transfrom the output **startR_array** to an **s2dv_cube** object in order that the data is easy to use within CSTools functions. +In this section we will use the function **Start** to load the data. Then, we will transfrom the output **startR_array** to an **s2dv_cube** object in order that the data is easy to use within **CSTools** functions. The **s2dv_cube** object is a structured list that contains the information needed to work with multidimensional arrays of data. Coordinates, dimensions, and metadata are neatly combined to allow for a more intuitive, concise, and less error-prone experience. -> **Note:** If you have already loaded the data with Start, go directly to section **b)**. +> **Note:** If you have already loaded the data with **Start**, go directly to section **b)**. ### a) Load the data @@ -97,7 +95,11 @@ Now we convert the **hindcast and observations data** (**startR_array**) into an hcst <- as.s2dv_cube(hcst) obs <- as.s2dv_cube(obs) ``` -By printing the object, we see that it has been organized following an order. +By printing the object, we can see the object structure. 
 
 ```r
 > hcst
@@ -133,28 +135,19 @@ Attributes ( dat1 ) :
    dat = dat1, var = tas, syear = 19931101 ...
 ...
 ```
-> **Note:** An **s2dv_cube** object is a structured list in base R. To access the elements, you need to use the `$` operator (e.g. `hcst$data`, `hcst$coords`, `hcst$attrs`, ...)
+> **Note:** An **s2dv_cube** object is a structured list in base R. To access the elements, you need to use the `$` operator (e.g. `hcst$data`, `hcst$dims`, `hcst$coords`, `hcst$attrs`, ...)
 
 #### Exercise 1
 **Goal:** To find the **s2dv_cube** information of the **hindcast** data.
 
-1. What type of object is an **s2dv_cube** in base R?
-```r
-class(____)
-typeof(____)
-```
+1. What type of object is an **s2dv_cube** in base R? Use the functions **class()** and **typeof()**:
+
 2. What type of object is the element `hcst$data` (in common language)? Use the functions **dim()** and **typeof()** to check `hcst$data`:
-```r
-typeof(____)
-dim(____)
-```
+
 3. What are the **time dimensions** of the **hindcast** data? The Dates of an **s2dv_cube** can be found in the element `hcst$attrs$Dates`.
 
 4. What are the coordinate names in the **hindcast**? Use the function **names()** to check. The coordinates in the **s2dv_cube** are stored in the element `hcst$coords`.
-```r
-names(____)
-```
+
 5. For which **latitudes** and **longitudes** have we loaded the data?
 
 6. What is the **start date** dimension name of the `hcst`? What is the **ensemble member** dimension name?
@@ -167,7 +160,6 @@ names(____)
 
 10. What are the **units** of the data?
 
-
 ## 2. Calibrate the data
 
 The first step to perform a quality assessment is to correct the biases as well as the dispersion errors of the model. The function **Calibration** from **CSTools** allows us to choose from different member-by-member calibration techniques.
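+A minimal sketch of such a call (the argument names are assumed from the CSTools documentation, `"evmos"` is just one of the available methods, and the dimension names follow the data loaded in section 1):
+```r
+hcst_cal <- CST_Calibration(exp = hcst, obs = obs,
+                            cal.method = "evmos",  # assumed method choice
+                            memb_dim = 'ensemble', # dimension names from the Start call
+                            sdate_dim = 'syear',
+                            ncores = 10)
+```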
@@ -204,7 +196,7 @@ Anomalies are deviations from the average weather conditions over a long period. We are going to use the function **CST_Anomaly** from **CSTools**. This function computes the anomalies relative to a climatology computed along the selected dimension (in our case starting dates). The computation is carried out independently for experimental and observational datasets.
 
 #### Exercise 3:
-**Goal:** Calculate the hindcast anomalies from the calibrated hindcast and observations dataset. You can take a look at the [CSTools package documentation](https://cran.r-project.org/web/packages/CSTools/CSTools.pdf) on page 40.
+**Goal:** Calculate the hindcast anomalies from the calibrated hindcast and observations dataset. You can take a look at the [CSTools package documentation](https://cran.r-project.org/web/packages/CSTools/CSTools.pdf) on page 40 to find the missing parameters.
 
 ```r
 hcst_anom <- CST_Anomaly(exp = ____,
@@ -212,7 +204,7 @@ hcst_anom <- CST_Anomaly(exp = ____,
                          cross = TRUE,
                          memb = TRUE,
                          memb_dim = ____,
-                         dim_anom = ____,
+                         dim_anom = 'syear',
                          dat_dim = c('dat', 'ensemble'),
                          ftime_dim = ____,
                          ncores = 10)
@@ -221,14 +213,14 @@ hcst_anom <- CST_Anomaly(exp = ____,
 
 ## 4. Compute skill: RPSS
 
-To trust the climate models we need to evaluate their skill. To do so, we are going to use the Ranked Probability Skill Score (RPSS; Wilks, 2011). It is a skill score based on the Ranked Probability Score (RPS; Wilks, 2011). It can be used to assess whether a forecast presents an improvement or worsening with respect to a reference forecast.
+To trust the climate models we need to evaluate their skill. To do so, we are going to use the **Ranked Probability Skill Score** (RPSS; Wilks, 2011). It is a skill score based on the **Ranked Probability Score** (RPS; Wilks, 2011). It can be used to assess whether a forecast presents an improvement or worsening with respect to a reference forecast.
 
-The RPSS ranges between minus infinity and 1. If the RPSS is positive, it indicates that the forecast has higher skill than the reference forecast, while a negative value means that it has lower skill. It is computed as `RPSS = 1 - RPS_exp / RPS_ref`. The statistical significance is obtained based on a Random Walk test at
-the specified confidence level (DelSole and Tippett, 2016).
+The **RPSS** ranges between minus infinity and 1. If the **RPSS is positive**, it indicates that the **forecast has higher skill than the reference forecast**, while a **negative** value means that it has **lower skill**. It is computed as `RPSS = 1 - RPS_exp / RPS_ref`. The statistical significance is obtained based on a Random Walk test at the specified confidence level (DelSole and Tippett, 2016).
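+As a toy numerical check of this formula (hypothetical RPS values, not computed from our data):
+```r
+rps_exp <- 0.15        # hypothetical RPS of our forecast
+rps_ref <- 0.20        # hypothetical RPS of the reference forecast
+1 - rps_exp / rps_ref  # RPSS = 0.25 > 0: higher skill than the reference
+```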
```r > hcst @@ -133,7 +135,7 @@ Attributes ( dat1 ) : dat = dat1, var = tas, syear = 19931101 ... ... ``` -> **Note:** An **s2dv_cube** object is a structured list in base R. To access the elements, you need to use the `$` operator (e.g. `hcst$data`, `hcst$coords`, `hcst$attrs`, ...) +> **Note:** An **s2dv_cube** object is a structured list in base R. To access the elements, you need to use the `$` operator (e.g. `hcst$data`, `hcst$dims`, `hcst$coords`, `hcst$attrs`, ...) #### Exercise 1 **Goal:** To find the **s2dv_cube** information of the **hindcast** data. @@ -252,7 +254,7 @@ Anomalies are deviations from the average weather conditions over a long period. We are going to use the function **CST_Anomaly** from **CSTools**. This function computes the anomalies relative to a climatology computed along the selected dimension (in our case, the starting dates). The computation is carried out independently for the experimental and observational datasets. #### Exercise 3: -**Goal:** Calculate the hindcast anomalies from the calibrated hindcast and observations dataset. You can take a look at the [CSTools package documentation](https://cran.r-project.org/web/packages/CSTools/CSTools.pdf) on page 40. +**Goal:** Calculate the hindcast anomalies from the calibrated hindcast and observations dataset. You can take a look at the [CSTools package documentation](https://cran.r-project.org/web/packages/CSTools/CSTools.pdf) on page 40 to find the missing parameters. ```r hcst_anom <- CST_Anomaly(exp = hcst_cal, @@ -270,10 +272,9 @@ hcst_anom <- CST_Anomaly(exp = hcst_cal, ## 4. Compute skill: RPSS -To trust the climate models we need to evaluate their skill. To do so, we are going to use the Ranked Probability Skill Score (RPSS; Wilks, 2011). It is a skill score based on the Ranked Probability Score (RPS; Wilks, 2011). It can be used to assess whether a forecast presents an improvement or worsening with respect to a reference forecast. +To trust the climate models we need to evaluate their skill. To do so, we are going to use the **Ranked Probability Skill Score** (RPSS; Wilks, 2011). It is a skill score based on the **Ranked Probability Score** (RPS; Wilks, 2011). It can be used to assess whether a forecast presents an improvement or worsening with respect to a reference forecast. -The RPSS ranges between minus infinity and 1. If the RPSS is positive, it indicates that the forecast has higher skill than the reference forecast, while a negative value means that it has a lower skill. It is computed as `RPSS = 1 - RPS_exp / RPS_ref`. The statistical significance is obtained based on a Random Walk test at -the specified confidence level (DelSole and Tippett, 2016). +The **RPSS** ranges between minus infinity and 1. If the **RPSS is positive**, it indicates that the **forecast has higher skill than the reference forecast**, while a **negative** value means that it has a **lower skill**. It is computed as `RPSS = 1 - RPS_exp / RPS_ref`. The statistical significance is obtained based on a Random Walk test at the specified confidence level (DelSole and Tippett, 2016). Next, we compute the RPSS for anomalies: ```r @@ -289,7 +290,7 @@ skill <- RPSS(exp = hcst_anom$exp$data, The output of the **RPSS** function is a list of two elements.
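Before unpacking the two elements, a quick look at what came back can help — a minimal sketch, assuming the `skill` object computed above:

```r
# Quick inspection of the RPSS output (sketch)
str(skill, max.level = 1)       # list with elements $rpss and $sign
dim(skill$rpss)                 # remaining (e.g. spatial) dimensions of the score
summary(as.vector(skill$sign))  # how many grid points are statistically significant
```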
The first element is the RPSS; the second element, `sign` is a logical array of the statistical significance of the RPSS with the same dimensions as `rpss`. #### Exercise 4: -**Goal:** Compare the **RPSS** results with calibrated and raw anomalies. +**Goal:** Compare the RPSS results with **calibrated** and **raw anomalies**. ```r hcst_anom_raw <- CST_Anomaly(exp = hcst, @@ -330,16 +331,17 @@ logical 7055 387 We are going to plot the **last year** of the hindcast period (**2016**) for the last timestep (**December**). Also, we are going to use the **last ensemble member** (arbirtrary choice). ```r - lat <- hcst$coords$lat lon <- hcst$coords$lon PlotEquiMap(hcst_anom$exp$data[24, , 25, , 2, , ], lat = lat, lon = lon, - filled.continents = FALSE, + filled.continents = FALSE, + toptitle = "Calibrated Hindcast Anomalies", fileout = "/esarchive/scratch/erifarov/git/cstools/inst/doc/tutorial/PATC2023/Figures/hcst_anom_cal.png") PlotEquiMap(hcst_anom_raw$exp$data[24, , 25, , 2, , ], lat = lat, lon = lon, filled.continents = FALSE, + toptitle = "Raw Hindcast Anomalies", fileout = "/esarchive/scratch/erifarov/git/cstools/inst/doc/tutorial/PATC2023/Figures/hcst_anom_raw.png") ``` ![](./Figures/hcst_anom_cal.png) @@ -352,10 +354,12 @@ PlotEquiMap(hcst_anom_raw$exp$data[24, , 25, , 2, , ], lat = lat, lon = lon, PlotEquiMap(skill$rpss[ , , 2, , ], lat = lat, lon = lon, brks = seq(-1, 1, by = 0.1), filled.continents = FALSE, + toptitle = "RPSS from Calibrated Hindcast Anomalies", fileout = "/esarchive/scratch/erifarov/git/cstools/inst/doc/tutorial/PATC2023/Figures/skill_cal.png") PlotEquiMap(skill_raw$rpss[ , , 2, , ], lat = lat, lon = lon, brks = seq(-1, 1, by = 0.1), filled.continents = FALSE, + toptitle = "RPSS from Raw Hindcast Anomalies", fileout = "/esarchive/scratch/erifarov/git/cstools/inst/doc/tutorial/PATC2023/Figures/skill_raw.png") ``` ![](./Figures/skill_cal.png) -- GitLab From 9681f7dfedcdef3a64c8731270057ed6fb3417ef Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Thu, 26 Oct 2023 18:01:48 +0200 Subject: [PATCH 10/66] Remove figure path --- .../PATC2023/handson_2-data-assesment_ans.md | 12 ++++-------- 1 file changed, 4 insertions(+), 8 deletions(-) diff --git a/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md b/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md index 74c31352..09f29d53 100644 --- a/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md +++ b/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md @@ -336,13 +336,11 @@ lon <- hcst$coords$lon PlotEquiMap(hcst_anom$exp$data[24, , 25, , 2, , ], lat = lat, lon = lon, filled.continents = FALSE, - toptitle = "Calibrated Hindcast Anomalies", - fileout = "/esarchive/scratch/erifarov/git/cstools/inst/doc/tutorial/PATC2023/Figures/hcst_anom_cal.png") + toptitle = "Calibrated Hindcast Anomalies") PlotEquiMap(hcst_anom_raw$exp$data[24, , 25, , 2, , ], lat = lat, lon = lon, filled.continents = FALSE, - toptitle = "Raw Hindcast Anomalies", - fileout = "/esarchive/scratch/erifarov/git/cstools/inst/doc/tutorial/PATC2023/Figures/hcst_anom_raw.png") + toptitle = "Raw Hindcast Anomalies") ``` ![](./Figures/hcst_anom_cal.png) ![](./Figures/hcst_anom_raw.png) @@ -354,13 +352,11 @@ PlotEquiMap(hcst_anom_raw$exp$data[24, , 25, , 2, , ], lat = lat, lon = lon, PlotEquiMap(skill$rpss[ , , 2, , ], lat = lat, lon = lon, brks = seq(-1, 1, by = 0.1), filled.continents = FALSE, - toptitle = "RPSS from Calibrated Hindcast Anomalies", - fileout = 
"/esarchive/scratch/erifarov/git/cstools/inst/doc/tutorial/PATC2023/Figures/skill_cal.png") + toptitle = "RPSS from Calibrated Hindcast Anomalies") PlotEquiMap(skill_raw$rpss[ , , 2, , ], lat = lat, lon = lon, brks = seq(-1, 1, by = 0.1), filled.continents = FALSE, - toptitle = "RPSS from Raw Hindcast Anomalies", - fileout = "/esarchive/scratch/erifarov/git/cstools/inst/doc/tutorial/PATC2023/Figures/skill_raw.png") + toptitle = "RPSS from Raw Hindcast Anomalies") ``` ![](./Figures/skill_cal.png) ![](./Figures/skill_raw.png) \ No newline at end of file -- GitLab From 28c6c79e96222e22f880b468552a26d7c1240e0d Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Mon, 30 Oct 2023 10:10:57 +0100 Subject: [PATCH 11/66] Update path --- inst/doc/tutorial/PATC2023/handson_2-data-assesment.md | 5 ++--- inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md | 4 ++-- 2 files changed, 4 insertions(+), 5 deletions(-) diff --git a/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md b/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md index f775a506..a193bb6e 100644 --- a/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md +++ b/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md @@ -25,7 +25,7 @@ The following section is taken from [PATC 2023 startR tutorial](https://earth.bs path_exp <- "/esarchive/exp/meteofrance/system7c3s/monthly_mean/$var$_f6h/$var$_$syear$.nc" #---------------------------------------------------------------------- # Run these two lines if you're on Marenostrum4 and log in with training account -prefix <- '/gpfs/scratch/bsc32/bsc32734/bsc_training_2023/R_handson/' +prefix <- '/gpfs/scratch/nct01/nct01001/d2_handson_R/' path_exp <- paste0(prefix, path_exp) #---------------------------------------------------------------------- @@ -63,7 +63,7 @@ sdate_obs <- array(date_string, dim = c(syear = 24, time = 2)) path_obs <- '/esarchive/recon/ecmwf/era5/monthly_mean/$var$_f1h-r1440x721cds/$var$_$syear$.nc' #---------------------------------------------------------------------- # Run these two lines if you're on Marenostrum4 and log in with training account -prefix <- '/gpfs/scratch/bsc32/bsc32734/bsc_training_2023/R_handson/' +prefix <- '/gpfs/scratch/nct01/nct01001/d2_handson_R/' path_obs <- paste0(prefix, path_obs) #---------------------------------------------------------------------- @@ -264,7 +264,6 @@ summary(____) We are going to plot the **last year** of the hindcast period (**2016**) for the last timestep (**December**). Also, we are going to use the **last ensemble member** (arbirtrary choice). 
```r - lat <- hcst$coords$lat lon <- hcst$coords$lon diff --git a/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md b/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md index 09f29d53..fad95ccb 100644 --- a/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md +++ b/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md @@ -25,7 +25,7 @@ The following section is taken from [PATC 2023 startR tutorial](https://earth.bs path_exp <- "/esarchive/exp/meteofrance/system7c3s/monthly_mean/$var$_f6h/$var$_$syear$.nc" #---------------------------------------------------------------------- # Run these two lines if you're on Marenostrum4 and log in with training account -prefix <- '/gpfs/scratch/bsc32/bsc32734/bsc_training_2023/R_handson/' +prefix <- '/gpfs/scratch/nct01/nct01001/d2_handson_R/' path_exp <- paste0(prefix, path_exp) #---------------------------------------------------------------------- @@ -63,7 +63,7 @@ sdate_obs <- array(date_string, dim = c(syear = 24, time = 2)) path_obs <- '/esarchive/recon/ecmwf/era5/monthly_mean/$var$_f1h-r1440x721cds/$var$_$syear$.nc' #---------------------------------------------------------------------- # Run these two lines if you're on Marenostrum4 and log in with training account -prefix <- '/gpfs/scratch/bsc32/bsc32734/bsc_training_2023/R_handson/' +prefix <- '/gpfs/scratch/nct01/nct01001/d2_handson_R/' path_obs <- paste0(prefix, path_obs) #---------------------------------------------------------------------- -- GitLab From b31ef4857898cefc3d60016c8d6ca1dfeff0c726 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Tue, 31 Oct 2023 17:08:27 +0100 Subject: [PATCH 12/66] Correct dates dimensions in s2dv_cube --- inst/doc/tutorial/PATC2023/handson_2-data-assesment.md | 3 ++- inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md | 4 ++++ 2 files changed, 6 insertions(+), 1 deletion(-) diff --git a/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md b/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md index a193bb6e..02061f08 100644 --- a/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md +++ b/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md @@ -53,8 +53,9 @@ hcst <- CST_Start(dat = path_exp, ```r # Adjust the day to the correct month +dates_dim <- dim(hcst$attrs$Dates) hcst$attrs$Dates <- hcst$attrs$Dates - lubridate::days(1) - +dim(hcst$attrs$Dates) <- dates_dim date_string <- format(hcst$attrs$Dates, '%Y%m') sdate_obs <- array(date_string, dim = c(syear = 24, time = 2)) ``` diff --git a/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md b/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md index fad95ccb..48b679ff 100644 --- a/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md +++ b/inst/doc/tutorial/PATC2023/handson_2-data-assesment_ans.md @@ -52,8 +52,12 @@ hcst <- CST_Start(dat = path_exp, ``` ```r +# Save Dates dimensions +dates_dim <- dim(hcst$attrs$Dates) # Adjust the day to the correct month hcst$attrs$Dates <- hcst$attrs$Dates - lubridate::days(1) +# Add again Dates dimensions +dim(hcst$attrs$Dates) <- dates_dim date_string <- format(hcst$attrs$Dates, '%Y%m') sdate_obs <- array(date_string, dim = c(syear = 24, time = 2)) -- GitLab From 9fc9658dea29b7506c09f21409a0a6da77c629f5 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Tue, 31 Oct 2023 17:09:56 +0100 Subject: [PATCH 13/66] Minor correction --- inst/doc/tutorial/PATC2023/handson_2-data-assesment.md | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md 
b/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md index 02061f08..f02ed138 100644 --- a/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md +++ b/inst/doc/tutorial/PATC2023/handson_2-data-assesment.md @@ -52,10 +52,13 @@ hcst <- CST_Start(dat = path_exp, ``` ```r -# Adjust the day to the correct month +# Save Dates dimensions dates_dim <- dim(hcst$attrs$Dates) +# Adjust the day to the correct month hcst$attrs$Dates <- hcst$attrs$Dates - lubridate::days(1) +# Add again Dates dimensions dim(hcst$attrs$Dates) <- dates_dim + date_string <- format(hcst$attrs$Dates, '%Y%m') sdate_obs <- array(date_string, dim = c(syear = 24, time = 2)) ``` -- GitLab From 62c8e3462262d7c364fbcf6dbb052971fd5e6095 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Thu, 9 Nov 2023 10:37:22 +0100 Subject: [PATCH 14/66] Add lintr package in pipeline --- .gitlab-ci.yml | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml index e245a6cc..cbc39ada 100644 --- a/.gitlab-ci.yml +++ b/.gitlab-ci.yml @@ -9,3 +9,10 @@ build: - R CMD build --resave-data . - R CMD check --as-cran --no-manual --run-donttest CSTools_*.tar.gz - R -e 'covr::package_coverage()' + +lint-check: + stage: build + script: + - module load R/4.1.2-foss-2015a-bare + - echo "Run lintr on the package..." + - Rscript -e 'lintr::lint_package(path = ".")' -- GitLab From 947876c64cabd03e1704d18564ba0ac43b693bc4 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Mon, 13 Nov 2023 12:43:12 +0100 Subject: [PATCH 15/66] Add citation --- inst/CITATION | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+) create mode 100644 inst/CITATION diff --git a/inst/CITATION b/inst/CITATION new file mode 100644 index 00000000..455af823 --- /dev/null +++ b/inst/CITATION @@ -0,0 +1,24 @@ +citHeader("To cite package 'CSTools' in publications use:") + +yr <- sub('.*(2[[:digit:]]{3})-.*', '\\1', meta$Date, perl = TRUE) +if (length(yr) == 0) yr <- format(Sys.Date(), '%Y') + +bibentry( + bibtype = 'Manual', + title = paste0(meta$Package, ': ', meta$Title), + author = Filter(function(p) 'aut' %in% p$role, as.person(meta$Author)), + year = yr, + note = paste('R package version', meta$Version), + url = meta$URL +) + +bibentry( + bibtype = "Article", + author = c(person("Núria", "Pérez-Zanón", email = "nuria.perez@bsc.es"), person("", "et al.")), + title = "Climate Services Toolbox (CSTools) v4.0: from climate forecasts to climate forecast information", + doi = "10.5194/gmd-15-6115-2022", + url = "https://gmd.copernicus.org/articles/15/6115/2022/", + journal = "Geoscientific Model Development", + publisher = "European Geosciences Union", + year = "2022" +) -- GitLab From c7403141e889d4a1c485489a7d40166b815d6ddb Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Tue, 28 Nov 2023 18:38:09 +0100 Subject: [PATCH 16/66] Add the development with easyNCDF into CST_SaveExp and changed name of the function --- DESCRIPTION | 2 +- NAMESPACE | 3 + R/CST_SaveCube.R | 729 ++++++++++++++++++++++++++++++++++++++++++++ man/CST_SaveCube.Rd | 114 +++++++ man/SaveCube.Rd | 143 +++++++++ 5 files changed, 990 insertions(+), 1 deletion(-) create mode 100644 R/CST_SaveCube.R create mode 100644 man/CST_SaveCube.Rd create mode 100644 man/SaveCube.Rd diff --git a/DESCRIPTION b/DESCRIPTION index 3af5dcb1..4e41770d 100644 --- a/DESCRIPTION +++ b/DESCRIPTION @@ -90,5 +90,5 @@ VignetteBuilder: knitr License: GPL-3 Encoding: UTF-8 LazyData: true -RoxygenNote: 7.2.0 +RoxygenNote: 7.2.3 Config/testthat/edition: 3 diff --git a/NAMESPACE b/NAMESPACE index 012f76cf..35bd3c4f 100644 --- 
a/NAMESPACE +++ b/NAMESPACE @@ -34,6 +34,7 @@ export(CST_RFTemp) export(CST_RFWeights) export(CST_RainFARM) export(CST_RegimesAssign) +export(CST_SaveCube) export(CST_SaveExp) export(CST_SplitDim) export(CST_Start) @@ -61,6 +62,7 @@ export(RFTemp) export(RF_Weights) export(RainFARM) export(RegimesAssign) +export(SaveCube) export(SaveExp) export(SplitDim) export(WeatherRegime) @@ -69,6 +71,7 @@ export(s2dv_cube) export(training_analogs) import(RColorBrewer) import(abind) +import(easyNCDF) import(ggplot2) import(lubridate) import(multiApply) diff --git a/R/CST_SaveCube.R b/R/CST_SaveCube.R new file mode 100644 index 00000000..908148fc --- /dev/null +++ b/R/CST_SaveCube.R @@ -0,0 +1,729 @@ +#'Save objects of class 's2dv_cube' to data in NetCDF format +#' +#'@author Perez-Zanon Nuria, \email{nuria.perez@bsc.es} +#' +#'@description This function allows to divide and save a object of class +#''s2dv_cube' into a NetCDF file, allowing to reload the saved data using +#'\code{Start} function from StartR package. If the original 's2dv_cube' object +#'has been created from \code{CST_Load()}, then it can be reloaded with +#'\code{Load()}. +#' +#'@param data An object of class \code{s2dv_cube}. +#'@param destination A character string containing the directory name in which +#' to save the data. NetCDF file for each starting date are saved into the +#' folder tree: \cr +#' destination/Dataset/variable/. By default the function +#' creates and saves the data into the working directory. +#'@param sdate_dim A character string indicating the name of the start date +#' dimension. By default, it is set to 'sdate'. It can be NULL if there is no +#' start date dimension. +#'@param ftime_dim A character string indicating the name of the forecast time +#' dimension. By default, it is set to 'time'. It can be NULL if there is no +#' forecast time dimension. +#'@param dat_dim A character string indicating the name of dataset dimension. +#' By default, it is set to 'dataset'. It can be NULL if there is no dataset +#' dimension. +#'@param var_dim A character string indicating the name of variable dimension. +#' By default, it is set to 'var'. It can be NULL if there is no variable +#' dimension. +#'@param memb_dim A character string indicating the name of the member dimension. +#' By default, it is set to 'member'. It can be NULL if there is no member +#' dimension. +#'@param startdates A vector of dates that will be used for the filenames +#' when saving the data in multiple files. It must be a vector of the same +#' length as the start date dimension of data. It must be a vector of class +#' \code{Dates}, \code{'POSIXct'} or character with lenghts between 1 and 10. +#' If it is NULL, the coordinate corresponding the the start date dimension or +#' the first Date of each time step will be used as the name of the files. +#' It is NULL by default. +#'@param drop_dims A vector of character strings indicating the dimension names +#' of length 1 that need to be dropped in order that they don't appear in the +#' netCDF file. It is NULL by default (optional). +#'@param single_file A logical value indicating if all object is saved in a +#' single file (TRUE) or in multiple files (FALSE). When it is FALSE, +#' the array is separated for Datasets, variable and start date. It is FALSE +#' by default. +#'@param extra_string A character string to be include as part of the file name, +#' for instance, to identify member or realization. It would be added to the +#' file name between underscore characters. 
+#' +#'@return Multiple or single NetCDF files containing the data array.\cr +#'\item{\code{single_file = TRUE}}{ +#' All data is saved in a single file located in the specified destination +#' path with the following name: +#' ___.nc. Multiple +#' variables are saved separately in the same file. The forecast time units +#' is extracted from the frequency of the time steps (hours, days, months). +#' The first value of forecast time is 1. If no frequency is found, the units +#' will be 'hours since' each start date and the time steps are assumed to be +#' equally spaced. +#'} +#'\item{\code{single_file = FALSE}}{ +#' The data array is subset and stored into multiple files. Each file +#' contains the data subset for each start date, variable and dataset. Files +#' with different variables and Datasets are stored in separated directories +#' within the following directory tree: destination/Dataset/variable/. +#' The name of each file will be: +#' __.nc. +#'} +#' +#'@seealso \code{\link[startR]{Start}}, \code{\link{as.s2dv_cube}} and +#'\code{\link{s2dv_cube}} +#' +#'@examples +#'\dontrun{ +#'data <- lonlat_temp_st$exp +#'destination <- "./" +#'CST_SaveExp(data = data, destination = destination, ftime_dim = 'ftime', +#' var_dim = 'var', dat_dim = 'dataset') +#'} +#' +#'@export +CST_SaveCube <- function(data, destination = "./", sdate_dim = 'sdate', + ftime_dim = 'time', dat_dim = 'dataset', + var_dim = 'var', memb_dim = 'member', + startdates = NULL, drop_dims = NULL, + single_file = FALSE, extra_string = NULL) { + # Check 's2dv_cube' + if (!inherits(data, 's2dv_cube')) { + stop("Parameter 'data' must be of the class 's2dv_cube'.") + } + # Check object structure + if (!all(c('data', 'attrs') %in% names(data))) { + stop("Parameter 'data' must have at least 'data' and 'attrs' elements ", + "within the 's2dv_cube' structure.") + } + if (!inherits(data$attrs, 'list')) { + stop("Level 'attrs' must be a list with at least 'Dates' element.") + } + if (!all(c('coords') %in% names(data))) { + warning("Element 'coords' not found. 
No coordinates will be used.") + } + # metadata + if (is.null(data$attrs$Variable$metadata)) { + warning("No metadata found in element Variable from attrs.") + } else { + if (!inherits(data$attrs$Variable$metadata, 'list')) { + stop("Element metadata from Variable element in attrs must be a list.") + } + if (!any(names(data$attrs$Variable$metadata) %in% names(data$coords))) { + warning("Metadata is not found for any coordinate.") + } else if (!any(names(data$attrs$Variable$metadata) %in% + data$attrs$Variable$varName)) { + warning("Metadata is not found for any variable.") + } + } + # Dates + if (is.null(data$attrs$Dates)) { + stop("Element 'Dates' from 'attrs' level cannot be NULL.") + } + if (is.null(dim(data$attrs$Dates))) { + stop("Element 'Dates' from 'attrs' level must have time dimensions.") + } + # sdate_dim + if (!is.null(sdate_dim)) { + if (!is.character(sdate_dim)) { + stop("Parameter 'sdate_dim' must be a character string.") + } + if (length(sdate_dim) > 1) { + warning("Parameter 'sdate_dim' has length greater than 1 and ", + "only the first element will be used.") + sdate_dim <- sdate_dim[1] + } + } else if (length(dim(data$attrs$Dates)) == 1) { + sdate_dim <- 'sdate' + dim(data$data) <- c(sdate = 1, dim(data$data)) + data$dims <- dim(data$data) + dim(data$attrs$Dates) <- c(sdate = 1, dim(data$attrs$Dates)) + data$coords[[sdate_dim]] <- data$attrs$Dates[1] + } + # startdates + if (is.null(startdates)) { + startdates <- data$coords[[sdate_dim]] + } else { + if (!is.character(startdates)) { + warning(paste0("Parameter 'startdates' is not a character string, ", + "it will not be used.")) + startdates <- data$coords[[sdate_dim]] + } + if (!is.null(sdate_dim)) { + if (dim(data$data)[sdate_dim] != length(startdates)) { + warning(paste0("Parameter 'startdates' doesn't have the same length ", + "as dimension '", sdate_dim,"', it will not be used.")) + startdates <- data$coords[[sdate_dim]] + } + } + } + + SaveCube(data = data$data, + destination = destination, + Dates = data$attrs$Dates, + coords = data$coords, + varname = data$attrs$Variable$varName, + metadata = data$attrs$Variable$metadata, + Datasets = data$attrs$Datasets, + startdates = startdates, + dat_dim = dat_dim, sdate_dim = sdate_dim, + ftime_dim = ftime_dim, var_dim = var_dim, + memb_dim = memb_dim, + drop_dims = drop_dims, + extra_string = extra_string, + single_file = single_file, + global_attrs = global_attrs) +} +#'Save a multidimensional array with metadata to data in NetCDF format +#'@description This function allows to save a data array with metadata into a +#'NetCDF file, allowing to reload the saved data using \code{Start} function +#'from StartR package. If the original 's2dv_cube' object has been created from +#'\code{CST_Load()}, then it can be reloaded with \code{Load()}. +#' +#'@author Perez-Zanon Nuria, \email{nuria.perez@bsc.es} +#' +#'@param data A multi-dimensional array with named dimensions. +#'@param destination A character string indicating the path where to store the +#' NetCDF files. +#'@param Dates A named array of dates with the corresponding sdate and forecast +#' time dimension. If there is no sdate_dim, you can set it to NULL. +#' It must have ftime_dim dimension. +#'@param coords A named list with elements of the coordinates corresponding to +#' the dimensions of the data parameter. The names and length of each element +#' must correspond to the names of the dimensions. 
If any coordinate is not +#' provided, it is set as an index vector with the values from 1 to the length +#' of the corresponding dimension. +#'@param varname A character string indicating the name of the variable to be +#' saved. +#'@param metadata A named list where each element is a variable containing the +#' corresponding information. The information must be contained in a list of +#' lists for each variable. +#'@param Datasets A vector of character string indicating the names of the +#' datasets. +#'@param startdates A vector of dates that will be used for the filenames +#' when saving the data in multiple files. It must be a vector of the same +#' length as the start date dimension of data. It must be a vector of class +#' \code{Dates}, \code{'POSIXct'} or character with lenghts between 1 and 10. +#' If it is NULL, the first Date of each time step will be used as the name of +#' the files. It is NULL by default. +#'@param sdate_dim A character string indicating the name of the start date +#' dimension. By default, it is set to 'sdate'. It can be NULL if there is no +#' start date dimension. +#'@param ftime_dim A character string indicating the name of the forecast time +#' dimension. By default, it is set to 'time'. It can be NULL if there is no +#' forecast time dimension. +#'@param dat_dim A character string indicating the name of dataset dimension. +#' By default, it is set to 'dataset'. It can be NULL if there is no dataset +#' dimension. +#'@param var_dim A character string indicating the name of variable dimension. +#' By default, it is set to 'var'. It can be NULL if there is no variable +#' dimension. +#'@param memb_dim A character string indicating the name of the member dimension. +#' By default, it is set to 'member'. It can be NULL if there is no member +#' dimension. +#'@param drop_dims A vector of character strings indicating the dimension names +#' of length 1 that need to be dropped in order that they don't appear in the +#' netCDF file. It is NULL by default (optional). +#'@param single_file A logical value indicating if all object is saved in a +#' unique file (TRUE) or in separated directories (FALSE). When it is FALSE, +#' the array is separated for Datasets, variable and start date. It is FALSE +#' by default (optional). +#'@param extra_string A character string to be include as part of the file name, +#' for instance, to identify member or realization. It would be added to the +#' file name between underscore characters (optional). +#'@param global_attrs A list with elements containing the global attributes +#' to be saved in the NetCDF. +#' +#'@return Multiple or single NetCDF files containing the data array.\cr +#'\item{\code{single_file = TRUE}}{ +#' All data is saved in a single file located in the specified destination +#' path with the following name: +#' ___.nc. Multiple +#' variables are saved separately in the same file. The forecast time units +#' is extracted from the frequency of the time steps (hours, days, months). +#' The first value of forecast time is 1. If no frequency is found, the units +#' will be 'hours since' each start date and the time steps are assumed to be +#' equally spaced. +#'} +#'\item{\code{single_file = FALSE}}{ +#' The data array is subset and stored into multiple files. Each file +#' contains the data subset for each start date, variable and dataset. Files +#' with different variables and Datasets are stored in separated directories +#' within the following directory tree: destination/Dataset/variable/. 
+#' The name of each file will be: +#' __.nc. +#'} +#' +#'@examples +#'\dontrun{ +#'data <- lonlat_temp_st$exp$data +#'lon <- lonlat_temp_st$exp$coords$lon +#'lat <- lonlat_temp_st$exp$coords$lat +#'coords <- list(lon = lon, lat = lat) +#'Datasets <- lonlat_temp_st$exp$attrs$Datasets +#'varname <- 'tas' +#'Dates <- lonlat_temp_st$exp$attrs$Dates +#'destination = './' +#'metadata <- lonlat_temp_st$exp$attrs$Variable$metadata +#'SaveExp(data = data, destination = destination, coords = coords, +#' Datasets = Datasets, varname = varname, Dates = Dates, +#' metadata = metadata, single_file = TRUE, ftime_dim = 'ftime', +#' var_dim = 'var', dat_dim = 'dataset') +#'} +#' +#'@import easyNCDF +#'@importFrom s2dv Reorder +#'@import multiApply +#'@importFrom ClimProjDiags Subset +#'@export +SaveCube <- function(data, destination = "./", Dates = NULL, coords = NULL, + varname = NULL, metadata = NULL, Datasets = NULL, + startdates = NULL, dat_dim = 'dataset', sdate_dim = 'sdate', + ftime_dim = 'time', var_dim = 'var', memb_dim = 'member', + drop_dims = NULL, single_file = FALSE, extra_string = NULL, + global_attrs = NULL) { + ## Initial checks + # data + if (is.null(data)) { + stop("Parameter 'data' cannot be NULL.") + } + dimnames <- names(dim(data)) + if (is.null(dimnames)) { + stop("Parameter 'data' must be an array with named dimensions.") + } + # destination + if (!is.character(destination) | length(destination) > 1) { + stop("Parameter 'destination' must be a character string of one element ", + "indicating the name of the file (including the folder if needed) ", + "where the data will be saved.") + } + # Dates + if (!is.null(Dates)) { + if (!inherits(Dates, "POSIXct") & !inherits(Dates, "Date")) { + stop("Parameter 'Dates' must be of 'POSIXct' or 'Dates' class.") + } + if (is.null(dim(Dates))) { + stop("Parameter 'Dates' must have dimension names.") + } + } + # drop_dims + if (!is.null(drop_dims)) { + if (!is.character(drop_dims) | any(!drop_dims %in% names(dim(data)))) { + warning("Parameter 'drop_dims' must be character string containing ", + "the data dimension names to be dropped. It will not be used.") + } else if (!all(dim(data)[drop_dims] %in% 1)) { + warning("Parameter 'drop_dims' can only contain dimension names ", + "that are of length 1. It will not be used.") + } else if (any(drop_dims %in% c(ftime_dim, sdate_dim, dat_dim, memb_dim, var_dim))) { + warning("Parameter 'drop_dims' contains dimensions used in the computation. 
", + "It will not be used.") + drop_dims <- NULL + } else { + data <- Subset(x = data, along = drop_dims, + indices = lapply(1:length(drop_dims), function(x) 1), + drop = 'selected') + dimnames <- names(dim(data)) + } + } + # coords + if (!is.null(coords)) { + if (!all(names(coords) %in% dimnames)) { + coords <- coords[-which(!names(coords) %in% dimnames)] + } + for (i_coord in dimnames) { + if (i_coord %in% names(coords)) { + if (length(coords[[i_coord]]) != dim(data)[i_coord]) { + warning(paste0("Coordinate '", i_coord, "' has different lenght as ", + "its dimension and it will not be used.")) + coords[[i_coord]] <- 1:dim(data)[i_coord] + } + } else { + coords[[i_coord]] <- 1:dim(data)[i_coord] + } + } + } else { + coords <- sapply(dimnames, function(x) 1:dim(data)[x]) + } + # varname + if (is.null(varname)) { + varname <- 'X' + } else if (length(varname) > 1) { + multiple_vars <- TRUE + } else { + multiple_vars <- FALSE + } + if (!all(sapply(varname, is.character))) { + stop("Parameter 'varname' must be a character string with the ", + "variable names.") + } + # single_file + if (!inherits(single_file, 'logical')) { + warning("Parameter 'single_file' must be a logical value. It will be ", + "set as FALSE.") + single_file <- FALSE + } + # extra_string + if (!is.null(extra_string)) { + if (!is.character(extra_string)) { + stop("Parameter 'extra_string' must be a character string.") + } + } + # global_attrs + if (!is.null(global_attrs)) { + if (!inherits(global_attrs, 'list')) { + stop("Parameter 'global_attrs' must be a list.") + } + } + + ## Dimensions checks + # Spatial coordinates + if (!any(dimnames %in% .KnownLonNames()) | + !any(dimnames %in% .KnownLatNames())) { + lon_dim <- NULL + lat_dim <- NULL + } else { + lon_dim <- dimnames[which(dimnames %in% .KnownLonNames())] + lat_dim <- dimnames[which(dimnames %in% .KnownLatNames())] + } + # ftime_dim + if (!is.null(ftime_dim)) { + if (!is.character(ftime_dim)) { + stop("Parameter 'ftime_dim' must be a character string.") + } + if (!all(ftime_dim %in% dimnames)) { + stop("Parameter 'ftime_dim' is not found in 'data' dimension.") + } + } + # sdate_dim + if (!is.null(sdate_dim)) { + if (!is.character(sdate_dim)) { + stop("Parameter 'sdate_dim' must be a character string.") + } + if (!all(sdate_dim %in% dimnames)) { + stop("Parameter 'sdate_dim' is not found in 'data' dimension.") + } + } + # memb_dim + if (!is.null(memb_dim)) { + if (!is.character(memb_dim)) { + stop("Parameter 'memb_dim' must be a character string.") + } + if (!all(memb_dim %in% dimnames)) { + stop("Parameter 'memb_dim' is not found in 'data' dimension. Set it ", + "as NULL if there is no member dimension.") + } + } + # dat_dim + if (!is.null(dat_dim)) { + if (!is.character(dat_dim)) { + stop("Parameter 'dat_dim' must be a character string.") + } + if (!all(dat_dim %in% dimnames)) { + stop("Parameter 'dat_dim' is not found in 'data' dimension. Set it ", + "as NULL if there is no Datasets dimension.") + } + if (length(dat_dim) > 1) { + warning("Parameter 'dat_dim' has length greater than 1 and ", + "only the first element will be used.") + dat_dim <- dat_dim[1] + } + n_datasets <- dim(data)[dat_dim] + } else { + n_datasets <- 1 + } + # var_dim + if (!is.null(var_dim)) { + if (!is.character(var_dim)) { + stop("Parameter 'var_dim' must be a character string.") + } + if (!all(var_dim %in% dimnames)) { + stop("Parameter 'var_dim' is not found in 'data' dimension. 
Set it ", + "as NULL if there is no variable dimension.") + } + n_vars <- dim(data)[var_dim] + } else { + n_vars <- 1 + } + # minimum dimensions + if (all(dimnames %in% c(var_dim, dat_dim))) { + if (!single_file) { + warning("Parameter data has only ", + paste(c(var_dim, dat_dim), collapse = ' and '), " dimensions ", + "and it cannot be splitted in multiple files. All data will ", + "be saved in a single file.") + single_file <- TRUE + } + } + # Dates dimension check + if (!is.null(Dates)) { + if (is.null(ftime_dim)) { + stop("Parameter 'Dates' must have 'ftime_dim'.") + } + if (all(c(ftime_dim, sdate_dim) %in% names(dim(Dates)))) { + if (any(!names(dim(Dates)) %in% c(ftime_dim, sdate_dim))) { + if (all(dim(Dates)[!names(dim(Dates)) %in% c(ftime_dim, sdate_dim)] == 1)) { + dim(Dates) <- dim(Dates)[names(dim(Dates)) %in% c(ftime_dim, sdate_dim)] + } else { + stop("Parameter 'Dates' must have only 'sdate_dim' and 'ftime_dim' dimensions.") + } + } + if (is.null(startdates)) { + startdates <- Subset(Dates, along = ftime_dim, 1, drop = 'selected') + } else if (any(inherits(startdates, "POSIXct"), inherits(startdates, "Date"))) { + startdates <- format(startdates, "%Y%m%d") + } else if (any(nchar(startdates) > 10, nchar(startdates) < 1)) { + warning("Parameter 'startdates' should be a character string containing ", + "the start dates in the format 'yyyy-mm-dd', 'yyyymmdd', 'yyyymm', ", + "'POSIXct' or 'Dates' class. Files will be named with Dates instead.") + startdates <- Subset(Dates, along = ftime_dim, 1, drop = 'selected') + } + } else if (any(ftime_dim %in% names(dim(Dates)))) { + if (all(dim(Dates)[!names(dim(Dates)) %in% c(ftime_dim)] == 1)) { + dim(Dates) <- dim(Dates)[names(dim(Dates)) %in% c(ftime_dim, sdate_dim)] + } + } + } else if (!single_file) { + warning("Dates must be provided if 'data' must be saved in separated files. 
", + "All data will be saved in a single file.") + single_file <- TRUE + } + # startdates + if (is.null(startdates)) { + if (is.null(sdate_dim)) { + startdates <- 'XXX' + } else { + startdates <- rep('XXX', dim(data)[sdate_dim]) + } + } + # Datasets + if (is.null(Datasets)) { + Datasets <- rep('XXX', n_datasets ) + } + if (inherits(Datasets, 'list')) { + Datasets <- names(Datasets) + } + if (n_datasets > length(Datasets)) { + warning("Dimension 'Datasets' in 'data' is greater than those listed in ", + "element 'Datasets' and the first element will be reused.") + Datasets <- c(Datasets, rep(Datasets[1], n_datasets - length(Datasets))) + } else if (n_datasets < length(Datasets)) { + warning("Dimension 'Datasets' in 'data' is smaller than those listed in ", + "element 'Datasets' and only the firsts elements will be used.") + Datasets <- Datasets[1:n_datasets] + } + + ## Unknown dimensions check + alldims <- c(dat_dim, var_dim, sdate_dim, lon_dim, lat_dim, memb_dim, ftime_dim) + if (!all(dimnames %in% alldims)) { + unknown_dims <- dimnames[which(!dimnames %in% alldims)] + memb_dim <- c(memb_dim, unknown_dims) + alldims <- c(dat_dim, var_dim, sdate_dim, lon_dim, lat_dim, memb_dim, ftime_dim) + } + # Reorder + if (any(dimnames != alldims)) { + data <- Reorder(data, alldims) + dimnames <- names(dim(data)) + if (!is.null(attr(data, 'dimensions'))) { + attr(data, 'dimensions') <- dimnames + } + } + + ## NetCDF dimensions definition + defined_dims <- NULL + extra_info_dim <- NULL + if (is.null(Dates)) { + filedims <- dimnames[which(!dimnames %in% c(dat_dim, var_dim))] + } else { + filedims <- dimnames[which(!dimnames %in% c(dat_dim, var_dim, sdate_dim, ftime_dim))] + } + + for (i_coord in filedims) { + # vals + if (i_coord %in% names(coords)) { + if (is.numeric(coords[[i_coord]])) { + coords[[i_coord]] <- as.vector(coords[[i_coord]]) + } else { + coords[[i_coord]] <- 1:dim(data)[i_coord] + } + } else { + coords[[i_coord]] <- 1:dim(data)[i_coord] + } + dim(coords[[i_coord]]) <- dim(data)[i_coord] + + ## metadata + if (i_coord %in% names(metadata)) { + if ('variables' %in% names(attributes(metadata[[i_coord]]))) { + # from Start: 'lon' or 'lat' + attrs <- attributes(metadata[[i_coord]])[['variables']] + attr(coords[[i_coord]], 'variables') <- attrs + } else if (inherits(metadata[[i_coord]], 'list')) { + # from Start and Load: main var + attr(coords[[i_coord]], 'variables') <- list(metadata[[i_coord]]) + names(attributes(coords[[i_coord]])$variables) <- i_coord + } else if (!is.null(attributes(metadata[[i_coord]]))) { + # from Load + attr(coords[[i_coord]], 'variables') <- list(attributes(metadata[[i_coord]])) + names(attributes(coords[[i_coord]])$variables) <- i_coord + } else { + stop("Metadata is not correct.") + } + } + } + coords[c(names(coords)[!names(coords) %in% filedims])] <- NULL + + defined_vars <- list() + if (!single_file) { + for (i in 1:n_datasets) { + path <- file.path(destination, Datasets[i], varname) + for (j in 1:n_vars) { + dir.create(path[j], recursive = TRUE) + startdates <- gsub("-", "", startdates) + dim(startdates) <- c(length(startdates)) + names(dim(startdates)) <- sdate_dim + if (is.null(dat_dim) & is.null(var_dim)) { + data_subset <- data + } else if (is.null(dat_dim)) { + data_subset <- Subset(data, c(var_dim), list(j), drop = 'selected') + } else if (is.null(var_dim)) { + data_subset <- Subset(data, along = c(dat_dim), list(i), drop = 'selected') + } else { + data_subset <- Subset(data, c(dat_dim, var_dim), list(i, j), drop = 'selected') + } + if (is.null(Dates)) { + 
input_data <- list(data_subset, startdates) + target_dims <- list(c(lon_dim, lat_dim, memb_dim, ftime_dim), NULL) + } else { + input_data <- list(data_subset, startdates, Dates) + target_dims = list(c(lon_dim, lat_dim, memb_dim, ftime_dim), NULL, ftime_dim) + } + print(varname) + Apply(data = input_data, + target_dims = target_dims, + fun = .savearray, + destination = path[j], + coords = coords, + ftime_dim = ftime_dim, + varname = varname[j], + metadata_var = metadata[[varname[j]]], + extra_string = extra_string, + global_attrs = global_attrs) + } + } + } else { + # Datasets definition + # From here + if (!is.null(dat_dim)) { + coords[[dat_dim]] <- array(1:dim(data)[dat_dim], dim = dim(data)[dat_dim]) + attr(coords[[dat_dim]], 'variables') <- list(list(units = 'adim')) + # extra_info_dim[[dat_dim]] <- list(Datasets = paste(Datasets, collapse = ', ')) + } + first_sdate <- last_sdate <- NULL + if (!is.null(Dates)) { + if (is.null(sdate_dim)) { + sdates <- Dates[1] + # ftime definition + leadtimes <- as.numeric(Dates - sdates)/3600 + } else { + # sdate definition + sdates <- Subset(Dates, along = ftime_dim, 1, drop = 'selected') + differ <- as.numeric((sdates - sdates[1])/3600) + # new + dim(differ) <- dim(data)[sdate_dim] + coords[[sdate_dim]] <- differ + attr(coords[[sdate_dim]], 'variables') <- list(list(units = paste('hours since', sdates[1]), + calendar = 'proleptic_gregorian', + longname = sdate_dim)) + + first_sdate <- sdates[1] + last_sdate <- sdates[length(sdates)] + # ftime definition + Dates <- Reorder(Dates, c(ftime_dim, sdate_dim)) + differ_ftime <- apply(Dates, 2, function(x){as.numeric((x - x[1])/3600)}) + dim(differ_ftime) <- dim(Dates) + leadtimes <- Subset(differ_ftime, along = sdate_dim, 1, drop = 'selected') + + if (!all(apply(differ_ftime, 1, function(x){length(unique(x)) == 1}))) { + warning("Time steps are not equal for all start dates. 
Only ", + "forecast time values for the first start date will be saved ", + "correctly.") + } + } + + # Save in units 'hours since' + dim(leadtimes) <- dim(data)[ftime_dim] + coords[[ftime_dim]] <- leadtimes + attr(coords[[ftime_dim]], 'variables') <- list(list(units = paste('hours since', + paste(sdates, collapse = ', ')), + calendar = 'proleptic_gregorian', + longname = ftime_dim, unlim = TRUE)) + } + + # var definition + defined_vars <- list() + extra_info_var <- NULL + for (j in 1:n_vars) { + varname_j <- varname[j] + metadata_j <- metadata[[varname_j]] + if (is.null(var_dim)) { + coords[[varname_j]] <- data + } else { + coords[[varname_j]] <- Subset(data, var_dim, j, drop = 'selected') + } + if (!is.null(metadata_j)) { + attr(coords[[varname_j]], 'variables') <- list(metadata_j) + names(attributes(coords[[varname_j]])$variables) <- varname_j + } + # Add global attributes + if (!is.null(global_attrs)) { + attributes(coords[[varname_j]])$global_attrs <- global_attrs + } + } + if (is.null(extra_string)) { + gsub("-", "", first_sdate) + file_name <- paste0(paste(c(varname, + gsub("-", "", first_sdate), + gsub("-", "", last_sdate)), + collapse = '_'), ".nc") + } else { + nc <- substr(extra_string, nchar(extra_string)-2, nchar(extra_string)) + if (nc == ".nc") { + file_name <- extra_string + } else { + file_name <- paste0(extra_string, ".nc") + } + } + full_filename <- file.path(destination, file_name) + ArrayToNc(coords, full_filename) + } +} + +.savecube <- function(data, coords, destination = "./", + startdates = NULL, dates = NULL, + ftime_dim = 'time', varname = 'var', + metadata_var = NULL, extra_string = NULL, + global_attrs = NULL) { + if (!is.null(dates)) { + differ <- as.numeric((dates - dates[1])/3600) + dim(differ) <- dim(data)[ftime_dim] + coords[[ftime_dim]] <- differ + attr(coords[[ftime_dim]], 'variables') <- list(list(units = paste('hours since', Dates[1,1]), + calendar = 'proleptic_gregorian', + longname = ftime_dim, unlim = TRUE)) + names(attributes(coords[[ftime_dim]])$variables) <- ftime_dim + } + # Add data + coords[[varname]] <- data + if (!is.null(metadata_var)) { + attr(coords[[varname]], 'variables') <- list(metadata_var) + names(attributes(coords[[varname]])$variables) <- varname + } + # Add global attributes + if (!is.null(global_attrs)) { + attributes(coords[[varname]])$global_attrs <- global_attrs + } + + if (is.null(extra_string)) { + file_name <- paste0(varname, "_", startdates, ".nc") + } else { + file_name <- paste0(varname, "_", extra_string, "_", startdates, ".nc") + } + full_filename <- file.path(destination, file_name) + ArrayToNc(coords, full_filename) +} \ No newline at end of file diff --git a/man/CST_SaveCube.Rd b/man/CST_SaveCube.Rd new file mode 100644 index 00000000..cb12f09f --- /dev/null +++ b/man/CST_SaveCube.Rd @@ -0,0 +1,114 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/CST_SaveCube.R +\name{CST_SaveCube} +\alias{CST_SaveCube} +\title{Save objects of class 's2dv_cube' to data in NetCDF format} +\usage{ +CST_SaveCube( + data, + destination = "./", + sdate_dim = "sdate", + ftime_dim = "time", + dat_dim = "dataset", + var_dim = "var", + memb_dim = "member", + startdates = NULL, + drop_dims = NULL, + single_file = FALSE, + extra_string = NULL +) +} +\arguments{ +\item{data}{An object of class \code{s2dv_cube}.} + +\item{destination}{A character string containing the directory name in which +to save the data. 
NetCDF file for each starting date are saved into the +folder tree: \cr +destination/Dataset/variable/. By default the function +creates and saves the data into the working directory.} + +\item{sdate_dim}{A character string indicating the name of the start date +dimension. By default, it is set to 'sdate'. It can be NULL if there is no +start date dimension.} + +\item{ftime_dim}{A character string indicating the name of the forecast time +dimension. By default, it is set to 'time'. It can be NULL if there is no +forecast time dimension.} + +\item{dat_dim}{A character string indicating the name of dataset dimension. +By default, it is set to 'dataset'. It can be NULL if there is no dataset +dimension.} + +\item{var_dim}{A character string indicating the name of variable dimension. +By default, it is set to 'var'. It can be NULL if there is no variable +dimension.} + +\item{memb_dim}{A character string indicating the name of the member dimension. +By default, it is set to 'member'. It can be NULL if there is no member +dimension.} + +\item{startdates}{A vector of dates that will be used for the filenames +when saving the data in multiple files. It must be a vector of the same +length as the start date dimension of data. It must be a vector of class +\code{Dates}, \code{'POSIXct'} or character with lenghts between 1 and 10. +If it is NULL, the coordinate corresponding the the start date dimension or +the first Date of each time step will be used as the name of the files. +It is NULL by default.} + +\item{drop_dims}{A vector of character strings indicating the dimension names +of length 1 that need to be dropped in order that they don't appear in the +netCDF file. It is NULL by default (optional).} + +\item{single_file}{A logical value indicating if all object is saved in a +single file (TRUE) or in multiple files (FALSE). When it is FALSE, +the array is separated for Datasets, variable and start date. It is FALSE +by default.} + +\item{extra_string}{A character string to be include as part of the file name, +for instance, to identify member or realization. It would be added to the +file name between underscore characters.} +} +\value{ +Multiple or single NetCDF files containing the data array.\cr +\item{\code{single_file = TRUE}}{ + All data is saved in a single file located in the specified destination + path with the following name: + ___.nc. Multiple + variables are saved separately in the same file. The forecast time units + is extracted from the frequency of the time steps (hours, days, months). + The first value of forecast time is 1. If no frequency is found, the units + will be 'hours since' each start date and the time steps are assumed to be + equally spaced. +} +\item{\code{single_file = FALSE}}{ + The data array is subset and stored into multiple files. Each file + contains the data subset for each start date, variable and dataset. Files + with different variables and Datasets are stored in separated directories + within the following directory tree: destination/Dataset/variable/. + The name of each file will be: + __.nc. +} +} +\description{ +This function allows to divide and save a object of class +'s2dv_cube' into a NetCDF file, allowing to reload the saved data using +\code{Start} function from StartR package. If the original 's2dv_cube' object +has been created from \code{CST_Load()}, then it can be reloaded with +\code{Load()}. 
+} +\examples{ +\dontrun{ +data <- lonlat_temp_st$exp +destination <- "./" +CST_SaveExp(data = data, destination = destination, ftime_dim = 'ftime', + var_dim = 'var', dat_dim = 'dataset') +} + +} +\seealso{ +\code{\link[startR]{Start}}, \code{\link{as.s2dv_cube}} and +\code{\link{s2dv_cube}} +} +\author{ +Perez-Zanon Nuria, \email{nuria.perez@bsc.es} +} diff --git a/man/SaveCube.Rd b/man/SaveCube.Rd new file mode 100644 index 00000000..da9a1ed4 --- /dev/null +++ b/man/SaveCube.Rd @@ -0,0 +1,143 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/CST_SaveCube.R +\name{SaveCube} +\alias{SaveCube} +\title{Save a multidimensional array with metadata to data in NetCDF format} +\usage{ +SaveCube( + data, + destination = "./", + Dates = NULL, + coords = NULL, + varname = NULL, + metadata = NULL, + Datasets = NULL, + startdates = NULL, + dat_dim = "dataset", + sdate_dim = "sdate", + ftime_dim = "time", + var_dim = "var", + memb_dim = "member", + drop_dims = NULL, + single_file = FALSE, + extra_string = NULL, + global_attrs = NULL +) +} +\arguments{ +\item{data}{A multi-dimensional array with named dimensions.} + +\item{destination}{A character string indicating the path where to store the +NetCDF files.} + +\item{Dates}{A named array of dates with the corresponding sdate and forecast +time dimension. If there is no sdate_dim, you can set it to NULL. +It must have ftime_dim dimension.} + +\item{coords}{A named list with elements of the coordinates corresponding to +the dimensions of the data parameter. The names and length of each element +must correspond to the names of the dimensions. If any coordinate is not +provided, it is set as an index vector with the values from 1 to the length +of the corresponding dimension.} + +\item{varname}{A character string indicating the name of the variable to be +saved.} + +\item{metadata}{A named list where each element is a variable containing the +corresponding information. The information must be contained in a list of +lists for each variable.} + +\item{Datasets}{A vector of character string indicating the names of the +datasets.} + +\item{startdates}{A vector of dates that will be used for the filenames +when saving the data in multiple files. It must be a vector of the same +length as the start date dimension of data. It must be a vector of class +\code{Dates}, \code{'POSIXct'} or character with lenghts between 1 and 10. +If it is NULL, the first Date of each time step will be used as the name of +the files. It is NULL by default.} + +\item{dat_dim}{A character string indicating the name of dataset dimension. +By default, it is set to 'dataset'. It can be NULL if there is no dataset +dimension.} + +\item{sdate_dim}{A character string indicating the name of the start date +dimension. By default, it is set to 'sdate'. It can be NULL if there is no +start date dimension.} + +\item{ftime_dim}{A character string indicating the name of the forecast time +dimension. By default, it is set to 'time'. It can be NULL if there is no +forecast time dimension.} + +\item{var_dim}{A character string indicating the name of variable dimension. +By default, it is set to 'var'. It can be NULL if there is no variable +dimension.} + +\item{memb_dim}{A character string indicating the name of the member dimension. +By default, it is set to 'member'. 
It can be NULL if there is no member +dimension.} + +\item{drop_dims}{A vector of character strings indicating the dimension names +of length 1 that need to be dropped in order that they don't appear in the +netCDF file. It is NULL by default (optional).} + +\item{single_file}{A logical value indicating if all object is saved in a +unique file (TRUE) or in separated directories (FALSE). When it is FALSE, +the array is separated for Datasets, variable and start date. It is FALSE +by default (optional).} + +\item{extra_string}{A character string to be include as part of the file name, +for instance, to identify member or realization. It would be added to the +file name between underscore characters (optional).} + +\item{global_attrs}{A list with elements containing the global attributes +to be saved in the NetCDF.} +} +\value{ +Multiple or single NetCDF files containing the data array.\cr +\item{\code{single_file = TRUE}}{ + All data is saved in a single file located in the specified destination + path with the following name: + ___.nc. Multiple + variables are saved separately in the same file. The forecast time units + is extracted from the frequency of the time steps (hours, days, months). + The first value of forecast time is 1. If no frequency is found, the units + will be 'hours since' each start date and the time steps are assumed to be + equally spaced. +} +\item{\code{single_file = FALSE}}{ + The data array is subset and stored into multiple files. Each file + contains the data subset for each start date, variable and dataset. Files + with different variables and Datasets are stored in separated directories + within the following directory tree: destination/Dataset/variable/. + The name of each file will be: + __.nc. +} +} +\description{ +This function allows to save a data array with metadata into a +NetCDF file, allowing to reload the saved data using \code{Start} function +from StartR package. If the original 's2dv_cube' object has been created from +\code{CST_Load()}, then it can be reloaded with \code{Load()}. 
+} +\examples{ +\dontrun{ +data <- lonlat_temp_st$exp$data +lon <- lonlat_temp_st$exp$coords$lon +lat <- lonlat_temp_st$exp$coords$lat +coords <- list(lon = lon, lat = lat) +Datasets <- lonlat_temp_st$exp$attrs$Datasets +varname <- 'tas' +Dates <- lonlat_temp_st$exp$attrs$Dates +destination = './' +metadata <- lonlat_temp_st$exp$attrs$Variable$metadata +SaveExp(data = data, destination = destination, coords = coords, + Datasets = Datasets, varname = varname, Dates = Dates, + metadata = metadata, single_file = TRUE, ftime_dim = 'ftime', + var_dim = 'var', dat_dim = 'dataset') +} + +} +\author{ +Perez-Zanon Nuria, \email{nuria.perez@bsc.es} +} -- GitLab From 80ed199503e5dabd6871f691638b24b7f96eabe4 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Wed, 29 Nov 2023 13:15:07 +0100 Subject: [PATCH 17/66] Correct errors and rename function --- R/CST_SaveCube.R | 729 ----------------------------- R/CST_SaveExp.R | 559 +++++++--------------- tests/testthat/test-CST_SaveCube.R | 324 +++++++++++++ tests/testthat/test-CST_SaveExp.R | 268 +++++++---- 4 files changed, 663 insertions(+), 1217 deletions(-) delete mode 100644 R/CST_SaveCube.R create mode 100644 tests/testthat/test-CST_SaveCube.R diff --git a/R/CST_SaveCube.R b/R/CST_SaveCube.R deleted file mode 100644 index 908148fc..00000000 --- a/R/CST_SaveCube.R +++ /dev/null @@ -1,729 +0,0 @@ -#'Save objects of class 's2dv_cube' to data in NetCDF format -#' -#'@author Perez-Zanon Nuria, \email{nuria.perez@bsc.es} -#' -#'@description This function allows to divide and save a object of class -#''s2dv_cube' into a NetCDF file, allowing to reload the saved data using -#'\code{Start} function from StartR package. If the original 's2dv_cube' object -#'has been created from \code{CST_Load()}, then it can be reloaded with -#'\code{Load()}. -#' -#'@param data An object of class \code{s2dv_cube}. -#'@param destination A character string containing the directory name in which -#' to save the data. NetCDF file for each starting date are saved into the -#' folder tree: \cr -#' destination/Dataset/variable/. By default the function -#' creates and saves the data into the working directory. -#'@param sdate_dim A character string indicating the name of the start date -#' dimension. By default, it is set to 'sdate'. It can be NULL if there is no -#' start date dimension. -#'@param ftime_dim A character string indicating the name of the forecast time -#' dimension. By default, it is set to 'time'. It can be NULL if there is no -#' forecast time dimension. -#'@param dat_dim A character string indicating the name of dataset dimension. -#' By default, it is set to 'dataset'. It can be NULL if there is no dataset -#' dimension. -#'@param var_dim A character string indicating the name of variable dimension. -#' By default, it is set to 'var'. It can be NULL if there is no variable -#' dimension. -#'@param memb_dim A character string indicating the name of the member dimension. -#' By default, it is set to 'member'. It can be NULL if there is no member -#' dimension. -#'@param startdates A vector of dates that will be used for the filenames -#' when saving the data in multiple files. It must be a vector of the same -#' length as the start date dimension of data. It must be a vector of class -#' \code{Dates}, \code{'POSIXct'} or character with lenghts between 1 and 10. -#' If it is NULL, the coordinate corresponding the the start date dimension or -#' the first Date of each time step will be used as the name of the files. -#' It is NULL by default. 
-#'@param drop_dims A vector of character strings indicating the dimension names -#' of length 1 that need to be dropped in order that they don't appear in the -#' netCDF file. It is NULL by default (optional). -#'@param single_file A logical value indicating if all object is saved in a -#' single file (TRUE) or in multiple files (FALSE). When it is FALSE, -#' the array is separated for Datasets, variable and start date. It is FALSE -#' by default. -#'@param extra_string A character string to be include as part of the file name, -#' for instance, to identify member or realization. It would be added to the -#' file name between underscore characters. -#' -#'@return Multiple or single NetCDF files containing the data array.\cr -#'\item{\code{single_file = TRUE}}{ -#' All data is saved in a single file located in the specified destination -#' path with the following name: -#' ___.nc. Multiple -#' variables are saved separately in the same file. The forecast time units -#' is extracted from the frequency of the time steps (hours, days, months). -#' The first value of forecast time is 1. If no frequency is found, the units -#' will be 'hours since' each start date and the time steps are assumed to be -#' equally spaced. -#'} -#'\item{\code{single_file = FALSE}}{ -#' The data array is subset and stored into multiple files. Each file -#' contains the data subset for each start date, variable and dataset. Files -#' with different variables and Datasets are stored in separated directories -#' within the following directory tree: destination/Dataset/variable/. -#' The name of each file will be: -#' __.nc. -#'} -#' -#'@seealso \code{\link[startR]{Start}}, \code{\link{as.s2dv_cube}} and -#'\code{\link{s2dv_cube}} -#' -#'@examples -#'\dontrun{ -#'data <- lonlat_temp_st$exp -#'destination <- "./" -#'CST_SaveExp(data = data, destination = destination, ftime_dim = 'ftime', -#' var_dim = 'var', dat_dim = 'dataset') -#'} -#' -#'@export -CST_SaveCube <- function(data, destination = "./", sdate_dim = 'sdate', - ftime_dim = 'time', dat_dim = 'dataset', - var_dim = 'var', memb_dim = 'member', - startdates = NULL, drop_dims = NULL, - single_file = FALSE, extra_string = NULL) { - # Check 's2dv_cube' - if (!inherits(data, 's2dv_cube')) { - stop("Parameter 'data' must be of the class 's2dv_cube'.") - } - # Check object structure - if (!all(c('data', 'attrs') %in% names(data))) { - stop("Parameter 'data' must have at least 'data' and 'attrs' elements ", - "within the 's2dv_cube' structure.") - } - if (!inherits(data$attrs, 'list')) { - stop("Level 'attrs' must be a list with at least 'Dates' element.") - } - if (!all(c('coords') %in% names(data))) { - warning("Element 'coords' not found. 
No coordinates will be used.") - } - # metadata - if (is.null(data$attrs$Variable$metadata)) { - warning("No metadata found in element Variable from attrs.") - } else { - if (!inherits(data$attrs$Variable$metadata, 'list')) { - stop("Element metadata from Variable element in attrs must be a list.") - } - if (!any(names(data$attrs$Variable$metadata) %in% names(data$coords))) { - warning("Metadata is not found for any coordinate.") - } else if (!any(names(data$attrs$Variable$metadata) %in% - data$attrs$Variable$varName)) { - warning("Metadata is not found for any variable.") - } - } - # Dates - if (is.null(data$attrs$Dates)) { - stop("Element 'Dates' from 'attrs' level cannot be NULL.") - } - if (is.null(dim(data$attrs$Dates))) { - stop("Element 'Dates' from 'attrs' level must have time dimensions.") - } - # sdate_dim - if (!is.null(sdate_dim)) { - if (!is.character(sdate_dim)) { - stop("Parameter 'sdate_dim' must be a character string.") - } - if (length(sdate_dim) > 1) { - warning("Parameter 'sdate_dim' has length greater than 1 and ", - "only the first element will be used.") - sdate_dim <- sdate_dim[1] - } - } else if (length(dim(data$attrs$Dates)) == 1) { - sdate_dim <- 'sdate' - dim(data$data) <- c(sdate = 1, dim(data$data)) - data$dims <- dim(data$data) - dim(data$attrs$Dates) <- c(sdate = 1, dim(data$attrs$Dates)) - data$coords[[sdate_dim]] <- data$attrs$Dates[1] - } - # startdates - if (is.null(startdates)) { - startdates <- data$coords[[sdate_dim]] - } else { - if (!is.character(startdates)) { - warning(paste0("Parameter 'startdates' is not a character string, ", - "it will not be used.")) - startdates <- data$coords[[sdate_dim]] - } - if (!is.null(sdate_dim)) { - if (dim(data$data)[sdate_dim] != length(startdates)) { - warning(paste0("Parameter 'startdates' doesn't have the same length ", - "as dimension '", sdate_dim,"', it will not be used.")) - startdates <- data$coords[[sdate_dim]] - } - } - } - - SaveCube(data = data$data, - destination = destination, - Dates = data$attrs$Dates, - coords = data$coords, - varname = data$attrs$Variable$varName, - metadata = data$attrs$Variable$metadata, - Datasets = data$attrs$Datasets, - startdates = startdates, - dat_dim = dat_dim, sdate_dim = sdate_dim, - ftime_dim = ftime_dim, var_dim = var_dim, - memb_dim = memb_dim, - drop_dims = drop_dims, - extra_string = extra_string, - single_file = single_file, - global_attrs = global_attrs) -} -#'Save a multidimensional array with metadata to data in NetCDF format -#'@description This function allows to save a data array with metadata into a -#'NetCDF file, allowing to reload the saved data using \code{Start} function -#'from StartR package. If the original 's2dv_cube' object has been created from -#'\code{CST_Load()}, then it can be reloaded with \code{Load()}. -#' -#'@author Perez-Zanon Nuria, \email{nuria.perez@bsc.es} -#' -#'@param data A multi-dimensional array with named dimensions. -#'@param destination A character string indicating the path where to store the -#' NetCDF files. -#'@param Dates A named array of dates with the corresponding sdate and forecast -#' time dimension. If there is no sdate_dim, you can set it to NULL. -#' It must have ftime_dim dimension. -#'@param coords A named list with elements of the coordinates corresponding to -#' the dimensions of the data parameter. The names and length of each element -#' must correspond to the names of the dimensions. 
If any coordinate is not -#' provided, it is set as an index vector with the values from 1 to the length -#' of the corresponding dimension. -#'@param varname A character string indicating the name of the variable to be -#' saved. -#'@param metadata A named list where each element is a variable containing the -#' corresponding information. The information must be contained in a list of -#' lists for each variable. -#'@param Datasets A vector of character string indicating the names of the -#' datasets. -#'@param startdates A vector of dates that will be used for the filenames -#' when saving the data in multiple files. It must be a vector of the same -#' length as the start date dimension of data. It must be a vector of class -#' \code{Dates}, \code{'POSIXct'} or character with lenghts between 1 and 10. -#' If it is NULL, the first Date of each time step will be used as the name of -#' the files. It is NULL by default. -#'@param sdate_dim A character string indicating the name of the start date -#' dimension. By default, it is set to 'sdate'. It can be NULL if there is no -#' start date dimension. -#'@param ftime_dim A character string indicating the name of the forecast time -#' dimension. By default, it is set to 'time'. It can be NULL if there is no -#' forecast time dimension. -#'@param dat_dim A character string indicating the name of dataset dimension. -#' By default, it is set to 'dataset'. It can be NULL if there is no dataset -#' dimension. -#'@param var_dim A character string indicating the name of variable dimension. -#' By default, it is set to 'var'. It can be NULL if there is no variable -#' dimension. -#'@param memb_dim A character string indicating the name of the member dimension. -#' By default, it is set to 'member'. It can be NULL if there is no member -#' dimension. -#'@param drop_dims A vector of character strings indicating the dimension names -#' of length 1 that need to be dropped in order that they don't appear in the -#' netCDF file. It is NULL by default (optional). -#'@param single_file A logical value indicating if all object is saved in a -#' unique file (TRUE) or in separated directories (FALSE). When it is FALSE, -#' the array is separated for Datasets, variable and start date. It is FALSE -#' by default (optional). -#'@param extra_string A character string to be include as part of the file name, -#' for instance, to identify member or realization. It would be added to the -#' file name between underscore characters (optional). -#'@param global_attrs A list with elements containing the global attributes -#' to be saved in the NetCDF. -#' -#'@return Multiple or single NetCDF files containing the data array.\cr -#'\item{\code{single_file = TRUE}}{ -#' All data is saved in a single file located in the specified destination -#' path with the following name: -#' ___.nc. Multiple -#' variables are saved separately in the same file. The forecast time units -#' is extracted from the frequency of the time steps (hours, days, months). -#' The first value of forecast time is 1. If no frequency is found, the units -#' will be 'hours since' each start date and the time steps are assumed to be -#' equally spaced. -#'} -#'\item{\code{single_file = FALSE}}{ -#' The data array is subset and stored into multiple files. Each file -#' contains the data subset for each start date, variable and dataset. Files -#' with different variables and Datasets are stored in separated directories -#' within the following directory tree: destination/Dataset/variable/. 
-#' The name of each file will be: -#' __.nc. -#'} -#' -#'@examples -#'\dontrun{ -#'data <- lonlat_temp_st$exp$data -#'lon <- lonlat_temp_st$exp$coords$lon -#'lat <- lonlat_temp_st$exp$coords$lat -#'coords <- list(lon = lon, lat = lat) -#'Datasets <- lonlat_temp_st$exp$attrs$Datasets -#'varname <- 'tas' -#'Dates <- lonlat_temp_st$exp$attrs$Dates -#'destination = './' -#'metadata <- lonlat_temp_st$exp$attrs$Variable$metadata -#'SaveExp(data = data, destination = destination, coords = coords, -#' Datasets = Datasets, varname = varname, Dates = Dates, -#' metadata = metadata, single_file = TRUE, ftime_dim = 'ftime', -#' var_dim = 'var', dat_dim = 'dataset') -#'} -#' -#'@import easyNCDF -#'@importFrom s2dv Reorder -#'@import multiApply -#'@importFrom ClimProjDiags Subset -#'@export -SaveCube <- function(data, destination = "./", Dates = NULL, coords = NULL, - varname = NULL, metadata = NULL, Datasets = NULL, - startdates = NULL, dat_dim = 'dataset', sdate_dim = 'sdate', - ftime_dim = 'time', var_dim = 'var', memb_dim = 'member', - drop_dims = NULL, single_file = FALSE, extra_string = NULL, - global_attrs = NULL) { - ## Initial checks - # data - if (is.null(data)) { - stop("Parameter 'data' cannot be NULL.") - } - dimnames <- names(dim(data)) - if (is.null(dimnames)) { - stop("Parameter 'data' must be an array with named dimensions.") - } - # destination - if (!is.character(destination) | length(destination) > 1) { - stop("Parameter 'destination' must be a character string of one element ", - "indicating the name of the file (including the folder if needed) ", - "where the data will be saved.") - } - # Dates - if (!is.null(Dates)) { - if (!inherits(Dates, "POSIXct") & !inherits(Dates, "Date")) { - stop("Parameter 'Dates' must be of 'POSIXct' or 'Dates' class.") - } - if (is.null(dim(Dates))) { - stop("Parameter 'Dates' must have dimension names.") - } - } - # drop_dims - if (!is.null(drop_dims)) { - if (!is.character(drop_dims) | any(!drop_dims %in% names(dim(data)))) { - warning("Parameter 'drop_dims' must be character string containing ", - "the data dimension names to be dropped. It will not be used.") - } else if (!all(dim(data)[drop_dims] %in% 1)) { - warning("Parameter 'drop_dims' can only contain dimension names ", - "that are of length 1. It will not be used.") - } else if (any(drop_dims %in% c(ftime_dim, sdate_dim, dat_dim, memb_dim, var_dim))) { - warning("Parameter 'drop_dims' contains dimensions used in the computation. 
", - "It will not be used.") - drop_dims <- NULL - } else { - data <- Subset(x = data, along = drop_dims, - indices = lapply(1:length(drop_dims), function(x) 1), - drop = 'selected') - dimnames <- names(dim(data)) - } - } - # coords - if (!is.null(coords)) { - if (!all(names(coords) %in% dimnames)) { - coords <- coords[-which(!names(coords) %in% dimnames)] - } - for (i_coord in dimnames) { - if (i_coord %in% names(coords)) { - if (length(coords[[i_coord]]) != dim(data)[i_coord]) { - warning(paste0("Coordinate '", i_coord, "' has different lenght as ", - "its dimension and it will not be used.")) - coords[[i_coord]] <- 1:dim(data)[i_coord] - } - } else { - coords[[i_coord]] <- 1:dim(data)[i_coord] - } - } - } else { - coords <- sapply(dimnames, function(x) 1:dim(data)[x]) - } - # varname - if (is.null(varname)) { - varname <- 'X' - } else if (length(varname) > 1) { - multiple_vars <- TRUE - } else { - multiple_vars <- FALSE - } - if (!all(sapply(varname, is.character))) { - stop("Parameter 'varname' must be a character string with the ", - "variable names.") - } - # single_file - if (!inherits(single_file, 'logical')) { - warning("Parameter 'single_file' must be a logical value. It will be ", - "set as FALSE.") - single_file <- FALSE - } - # extra_string - if (!is.null(extra_string)) { - if (!is.character(extra_string)) { - stop("Parameter 'extra_string' must be a character string.") - } - } - # global_attrs - if (!is.null(global_attrs)) { - if (!inherits(global_attrs, 'list')) { - stop("Parameter 'global_attrs' must be a list.") - } - } - - ## Dimensions checks - # Spatial coordinates - if (!any(dimnames %in% .KnownLonNames()) | - !any(dimnames %in% .KnownLatNames())) { - lon_dim <- NULL - lat_dim <- NULL - } else { - lon_dim <- dimnames[which(dimnames %in% .KnownLonNames())] - lat_dim <- dimnames[which(dimnames %in% .KnownLatNames())] - } - # ftime_dim - if (!is.null(ftime_dim)) { - if (!is.character(ftime_dim)) { - stop("Parameter 'ftime_dim' must be a character string.") - } - if (!all(ftime_dim %in% dimnames)) { - stop("Parameter 'ftime_dim' is not found in 'data' dimension.") - } - } - # sdate_dim - if (!is.null(sdate_dim)) { - if (!is.character(sdate_dim)) { - stop("Parameter 'sdate_dim' must be a character string.") - } - if (!all(sdate_dim %in% dimnames)) { - stop("Parameter 'sdate_dim' is not found in 'data' dimension.") - } - } - # memb_dim - if (!is.null(memb_dim)) { - if (!is.character(memb_dim)) { - stop("Parameter 'memb_dim' must be a character string.") - } - if (!all(memb_dim %in% dimnames)) { - stop("Parameter 'memb_dim' is not found in 'data' dimension. Set it ", - "as NULL if there is no member dimension.") - } - } - # dat_dim - if (!is.null(dat_dim)) { - if (!is.character(dat_dim)) { - stop("Parameter 'dat_dim' must be a character string.") - } - if (!all(dat_dim %in% dimnames)) { - stop("Parameter 'dat_dim' is not found in 'data' dimension. Set it ", - "as NULL if there is no Datasets dimension.") - } - if (length(dat_dim) > 1) { - warning("Parameter 'dat_dim' has length greater than 1 and ", - "only the first element will be used.") - dat_dim <- dat_dim[1] - } - n_datasets <- dim(data)[dat_dim] - } else { - n_datasets <- 1 - } - # var_dim - if (!is.null(var_dim)) { - if (!is.character(var_dim)) { - stop("Parameter 'var_dim' must be a character string.") - } - if (!all(var_dim %in% dimnames)) { - stop("Parameter 'var_dim' is not found in 'data' dimension. 
Set it ", - "as NULL if there is no variable dimension.") - } - n_vars <- dim(data)[var_dim] - } else { - n_vars <- 1 - } - # minimum dimensions - if (all(dimnames %in% c(var_dim, dat_dim))) { - if (!single_file) { - warning("Parameter data has only ", - paste(c(var_dim, dat_dim), collapse = ' and '), " dimensions ", - "and it cannot be splitted in multiple files. All data will ", - "be saved in a single file.") - single_file <- TRUE - } - } - # Dates dimension check - if (!is.null(Dates)) { - if (is.null(ftime_dim)) { - stop("Parameter 'Dates' must have 'ftime_dim'.") - } - if (all(c(ftime_dim, sdate_dim) %in% names(dim(Dates)))) { - if (any(!names(dim(Dates)) %in% c(ftime_dim, sdate_dim))) { - if (all(dim(Dates)[!names(dim(Dates)) %in% c(ftime_dim, sdate_dim)] == 1)) { - dim(Dates) <- dim(Dates)[names(dim(Dates)) %in% c(ftime_dim, sdate_dim)] - } else { - stop("Parameter 'Dates' must have only 'sdate_dim' and 'ftime_dim' dimensions.") - } - } - if (is.null(startdates)) { - startdates <- Subset(Dates, along = ftime_dim, 1, drop = 'selected') - } else if (any(inherits(startdates, "POSIXct"), inherits(startdates, "Date"))) { - startdates <- format(startdates, "%Y%m%d") - } else if (any(nchar(startdates) > 10, nchar(startdates) < 1)) { - warning("Parameter 'startdates' should be a character string containing ", - "the start dates in the format 'yyyy-mm-dd', 'yyyymmdd', 'yyyymm', ", - "'POSIXct' or 'Dates' class. Files will be named with Dates instead.") - startdates <- Subset(Dates, along = ftime_dim, 1, drop = 'selected') - } - } else if (any(ftime_dim %in% names(dim(Dates)))) { - if (all(dim(Dates)[!names(dim(Dates)) %in% c(ftime_dim)] == 1)) { - dim(Dates) <- dim(Dates)[names(dim(Dates)) %in% c(ftime_dim, sdate_dim)] - } - } - } else if (!single_file) { - warning("Dates must be provided if 'data' must be saved in separated files. 
", - "All data will be saved in a single file.") - single_file <- TRUE - } - # startdates - if (is.null(startdates)) { - if (is.null(sdate_dim)) { - startdates <- 'XXX' - } else { - startdates <- rep('XXX', dim(data)[sdate_dim]) - } - } - # Datasets - if (is.null(Datasets)) { - Datasets <- rep('XXX', n_datasets ) - } - if (inherits(Datasets, 'list')) { - Datasets <- names(Datasets) - } - if (n_datasets > length(Datasets)) { - warning("Dimension 'Datasets' in 'data' is greater than those listed in ", - "element 'Datasets' and the first element will be reused.") - Datasets <- c(Datasets, rep(Datasets[1], n_datasets - length(Datasets))) - } else if (n_datasets < length(Datasets)) { - warning("Dimension 'Datasets' in 'data' is smaller than those listed in ", - "element 'Datasets' and only the firsts elements will be used.") - Datasets <- Datasets[1:n_datasets] - } - - ## Unknown dimensions check - alldims <- c(dat_dim, var_dim, sdate_dim, lon_dim, lat_dim, memb_dim, ftime_dim) - if (!all(dimnames %in% alldims)) { - unknown_dims <- dimnames[which(!dimnames %in% alldims)] - memb_dim <- c(memb_dim, unknown_dims) - alldims <- c(dat_dim, var_dim, sdate_dim, lon_dim, lat_dim, memb_dim, ftime_dim) - } - # Reorder - if (any(dimnames != alldims)) { - data <- Reorder(data, alldims) - dimnames <- names(dim(data)) - if (!is.null(attr(data, 'dimensions'))) { - attr(data, 'dimensions') <- dimnames - } - } - - ## NetCDF dimensions definition - defined_dims <- NULL - extra_info_dim <- NULL - if (is.null(Dates)) { - filedims <- dimnames[which(!dimnames %in% c(dat_dim, var_dim))] - } else { - filedims <- dimnames[which(!dimnames %in% c(dat_dim, var_dim, sdate_dim, ftime_dim))] - } - - for (i_coord in filedims) { - # vals - if (i_coord %in% names(coords)) { - if (is.numeric(coords[[i_coord]])) { - coords[[i_coord]] <- as.vector(coords[[i_coord]]) - } else { - coords[[i_coord]] <- 1:dim(data)[i_coord] - } - } else { - coords[[i_coord]] <- 1:dim(data)[i_coord] - } - dim(coords[[i_coord]]) <- dim(data)[i_coord] - - ## metadata - if (i_coord %in% names(metadata)) { - if ('variables' %in% names(attributes(metadata[[i_coord]]))) { - # from Start: 'lon' or 'lat' - attrs <- attributes(metadata[[i_coord]])[['variables']] - attr(coords[[i_coord]], 'variables') <- attrs - } else if (inherits(metadata[[i_coord]], 'list')) { - # from Start and Load: main var - attr(coords[[i_coord]], 'variables') <- list(metadata[[i_coord]]) - names(attributes(coords[[i_coord]])$variables) <- i_coord - } else if (!is.null(attributes(metadata[[i_coord]]))) { - # from Load - attr(coords[[i_coord]], 'variables') <- list(attributes(metadata[[i_coord]])) - names(attributes(coords[[i_coord]])$variables) <- i_coord - } else { - stop("Metadata is not correct.") - } - } - } - coords[c(names(coords)[!names(coords) %in% filedims])] <- NULL - - defined_vars <- list() - if (!single_file) { - for (i in 1:n_datasets) { - path <- file.path(destination, Datasets[i], varname) - for (j in 1:n_vars) { - dir.create(path[j], recursive = TRUE) - startdates <- gsub("-", "", startdates) - dim(startdates) <- c(length(startdates)) - names(dim(startdates)) <- sdate_dim - if (is.null(dat_dim) & is.null(var_dim)) { - data_subset <- data - } else if (is.null(dat_dim)) { - data_subset <- Subset(data, c(var_dim), list(j), drop = 'selected') - } else if (is.null(var_dim)) { - data_subset <- Subset(data, along = c(dat_dim), list(i), drop = 'selected') - } else { - data_subset <- Subset(data, c(dat_dim, var_dim), list(i, j), drop = 'selected') - } - if (is.null(Dates)) { - 
input_data <- list(data_subset, startdates) - target_dims <- list(c(lon_dim, lat_dim, memb_dim, ftime_dim), NULL) - } else { - input_data <- list(data_subset, startdates, Dates) - target_dims = list(c(lon_dim, lat_dim, memb_dim, ftime_dim), NULL, ftime_dim) - } - print(varname) - Apply(data = input_data, - target_dims = target_dims, - fun = .savearray, - destination = path[j], - coords = coords, - ftime_dim = ftime_dim, - varname = varname[j], - metadata_var = metadata[[varname[j]]], - extra_string = extra_string, - global_attrs = global_attrs) - } - } - } else { - # Datasets definition - # From here - if (!is.null(dat_dim)) { - coords[[dat_dim]] <- array(1:dim(data)[dat_dim], dim = dim(data)[dat_dim]) - attr(coords[[dat_dim]], 'variables') <- list(list(units = 'adim')) - # extra_info_dim[[dat_dim]] <- list(Datasets = paste(Datasets, collapse = ', ')) - } - first_sdate <- last_sdate <- NULL - if (!is.null(Dates)) { - if (is.null(sdate_dim)) { - sdates <- Dates[1] - # ftime definition - leadtimes <- as.numeric(Dates - sdates)/3600 - } else { - # sdate definition - sdates <- Subset(Dates, along = ftime_dim, 1, drop = 'selected') - differ <- as.numeric((sdates - sdates[1])/3600) - # new - dim(differ) <- dim(data)[sdate_dim] - coords[[sdate_dim]] <- differ - attr(coords[[sdate_dim]], 'variables') <- list(list(units = paste('hours since', sdates[1]), - calendar = 'proleptic_gregorian', - longname = sdate_dim)) - - first_sdate <- sdates[1] - last_sdate <- sdates[length(sdates)] - # ftime definition - Dates <- Reorder(Dates, c(ftime_dim, sdate_dim)) - differ_ftime <- apply(Dates, 2, function(x){as.numeric((x - x[1])/3600)}) - dim(differ_ftime) <- dim(Dates) - leadtimes <- Subset(differ_ftime, along = sdate_dim, 1, drop = 'selected') - - if (!all(apply(differ_ftime, 1, function(x){length(unique(x)) == 1}))) { - warning("Time steps are not equal for all start dates. 
Only ", - "forecast time values for the first start date will be saved ", - "correctly.") - } - } - - # Save in units 'hours since' - dim(leadtimes) <- dim(data)[ftime_dim] - coords[[ftime_dim]] <- leadtimes - attr(coords[[ftime_dim]], 'variables') <- list(list(units = paste('hours since', - paste(sdates, collapse = ', ')), - calendar = 'proleptic_gregorian', - longname = ftime_dim, unlim = TRUE)) - } - - # var definition - defined_vars <- list() - extra_info_var <- NULL - for (j in 1:n_vars) { - varname_j <- varname[j] - metadata_j <- metadata[[varname_j]] - if (is.null(var_dim)) { - coords[[varname_j]] <- data - } else { - coords[[varname_j]] <- Subset(data, var_dim, j, drop = 'selected') - } - if (!is.null(metadata_j)) { - attr(coords[[varname_j]], 'variables') <- list(metadata_j) - names(attributes(coords[[varname_j]])$variables) <- varname_j - } - # Add global attributes - if (!is.null(global_attrs)) { - attributes(coords[[varname_j]])$global_attrs <- global_attrs - } - } - if (is.null(extra_string)) { - gsub("-", "", first_sdate) - file_name <- paste0(paste(c(varname, - gsub("-", "", first_sdate), - gsub("-", "", last_sdate)), - collapse = '_'), ".nc") - } else { - nc <- substr(extra_string, nchar(extra_string)-2, nchar(extra_string)) - if (nc == ".nc") { - file_name <- extra_string - } else { - file_name <- paste0(extra_string, ".nc") - } - } - full_filename <- file.path(destination, file_name) - ArrayToNc(coords, full_filename) - } -} - -.savecube <- function(data, coords, destination = "./", - startdates = NULL, dates = NULL, - ftime_dim = 'time', varname = 'var', - metadata_var = NULL, extra_string = NULL, - global_attrs = NULL) { - if (!is.null(dates)) { - differ <- as.numeric((dates - dates[1])/3600) - dim(differ) <- dim(data)[ftime_dim] - coords[[ftime_dim]] <- differ - attr(coords[[ftime_dim]], 'variables') <- list(list(units = paste('hours since', Dates[1,1]), - calendar = 'proleptic_gregorian', - longname = ftime_dim, unlim = TRUE)) - names(attributes(coords[[ftime_dim]])$variables) <- ftime_dim - } - # Add data - coords[[varname]] <- data - if (!is.null(metadata_var)) { - attr(coords[[varname]], 'variables') <- list(metadata_var) - names(attributes(coords[[varname]])$variables) <- varname - } - # Add global attributes - if (!is.null(global_attrs)) { - attributes(coords[[varname]])$global_attrs <- global_attrs - } - - if (is.null(extra_string)) { - file_name <- paste0(varname, "_", startdates, ".nc") - } else { - file_name <- paste0(varname, "_", extra_string, "_", startdates, ".nc") - } - full_filename <- file.path(destination, file_name) - ArrayToNc(coords, full_filename) -} \ No newline at end of file diff --git a/R/CST_SaveExp.R b/R/CST_SaveExp.R index 7d5733f1..72b45b41 100644 --- a/R/CST_SaveExp.R +++ b/R/CST_SaveExp.R @@ -78,16 +78,13 @@ #' var_dim = 'var', dat_dim = 'dataset') #'} #' -#'@import ncdf4 -#'@importFrom s2dv Reorder -#'@importFrom ClimProjDiags Subset -#'@import multiApply #'@export CST_SaveExp <- function(data, destination = "./", sdate_dim = 'sdate', - ftime_dim = 'time', dat_dim = 'dataset', - var_dim = 'var', memb_dim = 'member', - startdates = NULL, drop_dims = NULL, - single_file = FALSE, extra_string = NULL) { + ftime_dim = 'time', dat_dim = 'dataset', + var_dim = 'var', memb_dim = 'member', + startdates = NULL, drop_dims = NULL, + single_file = FALSE, extra_string = NULL, + global_attrs = NULL) { # Check 's2dv_cube' if (!inherits(data, 's2dv_cube')) { stop("Parameter 'data' must be of the class 's2dv_cube'.") @@ -160,19 +157,20 @@ CST_SaveExp <- 
function(data, destination = "./", sdate_dim = 'sdate', } SaveExp(data = data$data, - destination = destination, - Dates = data$attrs$Dates, - coords = data$coords, - varname = data$attrs$Variable$varName, - metadata = data$attrs$Variable$metadata, - Datasets = data$attrs$Datasets, - startdates = startdates, - dat_dim = dat_dim, sdate_dim = sdate_dim, - ftime_dim = ftime_dim, var_dim = var_dim, - memb_dim = memb_dim, - drop_dims = drop_dims, - extra_string = extra_string, - single_file = single_file) + destination = destination, + Dates = data$attrs$Dates, + coords = data$coords, + varname = data$attrs$Variable$varName, + metadata = data$attrs$Variable$metadata, + Datasets = data$attrs$Datasets, + startdates = startdates, + dat_dim = dat_dim, sdate_dim = sdate_dim, + ftime_dim = ftime_dim, var_dim = var_dim, + memb_dim = memb_dim, + drop_dims = drop_dims, + extra_string = extra_string, + single_file = single_file, + global_attrs = global_attrs) } #'Save a multidimensional array with metadata to data in NetCDF format #'@description This function allows to save a data array with metadata into a @@ -186,7 +184,8 @@ CST_SaveExp <- function(data, destination = "./", sdate_dim = 'sdate', #'@param destination A character string indicating the path where to store the #' NetCDF files. #'@param Dates A named array of dates with the corresponding sdate and forecast -#' time dimension. +#' time dimension. If there is no sdate_dim, you can set it to NULL. +#' It must have ftime_dim dimension. #'@param coords A named list with elements of the coordinates corresponding to #' the dimensions of the data parameter. The names and length of each element #' must correspond to the names of the dimensions. If any coordinate is not @@ -230,6 +229,8 @@ CST_SaveExp <- function(data, destination = "./", sdate_dim = 'sdate', #'@param extra_string A character string to be include as part of the file name, #' for instance, to identify member or realization. It would be added to the #' file name between underscore characters (optional). +#'@param global_attrs A list with elements containing the global attributes +#' to be saved in the NetCDF. #' #'@return Multiple or single NetCDF files containing the data array.\cr #'\item{\code{single_file = TRUE}}{ @@ -268,16 +269,17 @@ CST_SaveExp <- function(data, destination = "./", sdate_dim = 'sdate', #' var_dim = 'var', dat_dim = 'dataset') #'} #' -#'@import ncdf4 +#'@import easyNCDF #'@importFrom s2dv Reorder #'@import multiApply #'@importFrom ClimProjDiags Subset #'@export SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, - varname = NULL, metadata = NULL, Datasets = NULL, - startdates = NULL, dat_dim = 'dataset', sdate_dim = 'sdate', - ftime_dim = 'time', var_dim = 'var', memb_dim = 'member', - drop_dims = NULL, single_file = FALSE, extra_string = NULL) { + varname = NULL, metadata = NULL, Datasets = NULL, + startdates = NULL, dat_dim = 'dataset', sdate_dim = 'sdate', + ftime_dim = 'time', var_dim = 'var', memb_dim = 'member', + drop_dims = NULL, single_file = FALSE, extra_string = NULL, + global_attrs = NULL) { ## Initial checks # data if (is.null(data)) { @@ -310,6 +312,10 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, } else if (!all(dim(data)[drop_dims] %in% 1)) { warning("Parameter 'drop_dims' can only contain dimension names ", "that are of length 1. 
It will not be used.") + } else if (any(drop_dims %in% c(ftime_dim, sdate_dim, dat_dim, memb_dim, var_dim))) { + warning("Parameter 'drop_dims' contains dimensions used in the computation. ", + "It will not be used.") + drop_dims <- NULL } else { data <- Subset(x = data, along = drop_dims, indices = lapply(1:length(drop_dims), function(x) 1), @@ -330,8 +336,6 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, coords[[i_coord]] <- 1:dim(data)[i_coord] } } else { - warning(paste0("Coordinate '", i_coord, "' is not provided ", - "and it will be set as index in element coords.")) coords[[i_coord]] <- 1:dim(data)[i_coord] } } @@ -340,7 +344,6 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, } # varname if (is.null(varname)) { - warning("Parameter 'varname' is NULL. It will be assigned to 'X'.") varname <- 'X' } else if (length(varname) > 1) { multiple_vars <- TRUE @@ -351,11 +354,6 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, stop("Parameter 'varname' must be a character string with the ", "variable names.") } - # metadata - if (is.null(metadata)) { - warning("Parameter 'metadata' is not provided so the metadata saved ", - "will be incomplete.") - } # single_file if (!inherits(single_file, 'logical')) { warning("Parameter 'single_file' must be a logical value. It will be ", @@ -368,6 +366,12 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, stop("Parameter 'extra_string' must be a character string.") } } + # global_attrs + if (!is.null(global_attrs)) { + if (!inherits(global_attrs, 'list')) { + stop("Parameter 'global_attrs' must be a list.") + } + } ## Dimensions checks # Spatial coordinates @@ -378,16 +382,6 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, } else { lon_dim <- dimnames[which(dimnames %in% .KnownLonNames())] lat_dim <- dimnames[which(dimnames %in% .KnownLatNames())] - if (length(lon_dim) > 1) { - warning("Found more than one longitudinal dimension. Only the first one ", - "will be used.") - lon_dim <- lon_dim[1] - } - if (length(lat_dim) > 1) { - warning("Found more than one latitudinal dimension. Only the first one ", - "will be used.") - lat_dim <- lat_dim[1] - } } # ftime_dim if (!is.null(ftime_dim)) { @@ -395,12 +389,8 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, stop("Parameter 'ftime_dim' must be a character string.") } if (!all(ftime_dim %in% dimnames)) { - stop("Parameter 'ftime_dim' is not found in 'data' dimension.") - } - if (length(ftime_dim) > 1) { - warning("Parameter 'ftime_dim' has length greater than 1 and ", - "only the first element will be used.") - ftime_dim <- ftime_dim[1] + stop("Parameter 'ftime_dim' is not found in 'data' dimension. Set it ", + "as NULL if there is no forecast time dimension.") } } # sdate_dim @@ -408,11 +398,6 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, if (!is.character(sdate_dim)) { stop("Parameter 'sdate_dim' must be a character string.") } - if (length(sdate_dim) > 1) { - warning("Parameter 'sdate_dim' has length greater than 1 and ", - "only the first element will be used.") - sdate_dim <- sdate_dim[1] - } if (!all(sdate_dim %in% dimnames)) { stop("Parameter 'sdate_dim' is not found in 'data' dimension.") } @@ -454,11 +439,6 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, stop("Parameter 'var_dim' is not found in 'data' dimension. 
Set it ", "as NULL if there is no variable dimension.") } - if (length(var_dim) > 1) { - warning("Parameter 'var_dim' has length greater than 1 and ", - "only the first element will be used.") - var_dim <- var_dim[1] - } n_vars <- dim(data)[var_dim] } else { n_vars <- 1 @@ -475,26 +455,36 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, } # Dates dimension check if (!is.null(Dates)) { - if (all(names(dim(Dates)) == c(ftime_dim, sdate_dim)) | - all(names(dim(Dates)) == c(sdate_dim, ftime_dim))) { + if (is.null(ftime_dim)) { + stop("Parameter 'Dates' must have 'ftime_dim'.") + } + if (all(c(ftime_dim, sdate_dim) %in% names(dim(Dates)))) { + if (any(!names(dim(Dates)) %in% c(ftime_dim, sdate_dim))) { + if (all(dim(Dates)[!names(dim(Dates)) %in% c(ftime_dim, sdate_dim)] == 1)) { + dim(Dates) <- dim(Dates)[names(dim(Dates)) %in% c(ftime_dim, sdate_dim)] + } else { + stop("Parameter 'Dates' must have only 'sdate_dim' and 'ftime_dim' dimensions.") + } + } if (is.null(startdates)) { startdates <- Subset(Dates, along = ftime_dim, 1, drop = 'selected') - } else if ((!inherits(startdates, "POSIXct") & !inherits(startdates, "Date")) && - (!is.character(startdates) | (any(nchar(startdates) > 10) | any(nchar(startdates) < 1)))) { + } else if (any(inherits(startdates, "POSIXct"), inherits(startdates, "Date"))) { + startdates <- format(startdates, "%Y%m%d") + } else if (any(nchar(startdates) > 10, nchar(startdates) < 1)) { warning("Parameter 'startdates' should be a character string containing ", "the start dates in the format 'yyyy-mm-dd', 'yyyymmdd', 'yyyymm', ", "'POSIXct' or 'Dates' class. Files will be named with Dates instead.") startdates <- Subset(Dates, along = ftime_dim, 1, drop = 'selected') - if (!is.null(format(startdates, "%Y%m%d"))) { - startdates <- format(startdates, "%Y%m%d") - } } - } else if (all(dim(Dates)[!names(dim(Dates)) %in% c(ftime_dim, sdate_dim)] == 1)) { - dim(Dates) <- dim(Dates)[names(dim(Dates)) %in% c(ftime_dim, sdate_dim)] - } else { - stop("Parameter 'Dates' must have start date dimension and ", - "forecast time dimension.") + } else if (any(ftime_dim %in% names(dim(Dates)))) { + if (all(dim(Dates)[!names(dim(Dates)) %in% c(ftime_dim)] == 1)) { + dim(Dates) <- dim(Dates)[names(dim(Dates)) %in% c(ftime_dim, sdate_dim)] + } } + } else if (!single_file) { + warning("Dates must be provided if 'data' must be saved in separated files. ", + "All data will be saved in a single file.") + single_file <- TRUE } # startdates if (is.null(startdates)) { @@ -503,21 +493,9 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, } else { startdates <- rep('XXX', dim(data)[sdate_dim]) } - } else { - if (is.null(sdate_dim)) { - if (length(startdates) != 1) { - warning("Parameter 'startdates' has length more than 1. Only first ", - "value will be used.") - startdates <- startdates[[1]] - } - } } # Datasets if (is.null(Datasets)) { - if (!single_file) { - warning("Parameter 'Datasets' is NULL. 
Files will be saved with a ", - "directory name of 'XXX'.") - } Datasets <- rep('XXX', n_datasets ) } if (inherits(Datasets, 'list')) { @@ -557,96 +535,42 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, } else { filedims <- dimnames[which(!dimnames %in% c(dat_dim, var_dim, sdate_dim, ftime_dim))] } + for (i_coord in filedims) { - dim_info <- list() # vals if (i_coord %in% names(coords)) { if (is.numeric(coords[[i_coord]])) { - dim_info[['vals']] <- as.vector(coords[[i_coord]]) + coords[[i_coord]] <- as.vector(coords[[i_coord]]) } else { - dim_info[['vals']] <- 1:dim(data)[i_coord] + coords[[i_coord]] <- 1:dim(data)[i_coord] } } else { - dim_info[['vals']] <- 1:dim(data)[i_coord] - } - # name - dim_info[['name']] <- i_coord - # len - dim_info[['len']] <- as.numeric(dim(data)[i_coord]) - # unlim - dim_info[['unlim']] <- FALSE - # create_dimvar - dim_info[['create_dimvar']] <- TRUE + coords[[i_coord]] <- 1:dim(data)[i_coord] + } + dim(coords[[i_coord]]) <- dim(data)[i_coord] + ## metadata if (i_coord %in% names(metadata)) { if ('variables' %in% names(attributes(metadata[[i_coord]]))) { # from Start: 'lon' or 'lat' - attrs <- attributes(metadata[[i_coord]])[['variables']][[i_coord]] - i_coord_info <- attrs[!sapply(attrs, inherits, 'list')] + attrs <- attributes(metadata[[i_coord]])[['variables']] + attr(coords[[i_coord]], 'variables') <- attrs } else if (inherits(metadata[[i_coord]], 'list')) { # from Start and Load: main var - i_coord_info <- metadata[[i_coord]] + attr(coords[[i_coord]], 'variables') <- list(metadata[[i_coord]]) + names(attributes(coords[[i_coord]])$variables) <- i_coord } else if (!is.null(attributes(metadata[[i_coord]]))) { # from Load - i_coord_info <- attributes(metadata[[i_coord]]) + attr(coords[[i_coord]], 'variables') <- list(attributes(metadata[[i_coord]])) + names(attributes(coords[[i_coord]])$variables) <- i_coord } else { stop("Metadata is not correct.") } - # len - if ('size' %in% names(i_coord_info)) { - if (i_coord_info[['size']] != dim(data)[i_coord]) { - dim_info[['original_len']] <- i_coord_info[['size']] - i_coord_info[['size']] <- NULL - } - } - # units - if (!('units' %in% names(i_coord_info))) { - dim_info[['units']] <- '' - } else { - dim_info[['units']] <- i_coord_info[['units']] - i_coord_info[['units']] <- NULL - } - # calendar - if (!('calendar' %in% names(i_coord_info))) { - dim_info[['calendar']] <- NA - } else { - dim_info[['calendar']] <- i_coord_info[['calendar']] - i_coord_info[['calendar']] <- NULL - } - # longname - if ('long_name' %in% names(i_coord_info)) { - dim_info[['longname']] <- i_coord_info[['long_name']] - i_coord_info[['long_name']] <- NULL - } else if ('longname' %in% names(i_coord_info)) { - dim_info[['longname']] <- i_coord_info[['longname']] - i_coord_info[['longname']] <- NULL - } else { - if (i_coord %in% .KnownLonNames()) { - dim_info[['longname']] <- 'longitude' - } else if (i_coord %in% .KnownLatNames()) { - dim_info[['longname']] <- 'latitude' - } - } - # extra information - if (!is.null(names(i_coord_info))) { - extra_info_dim[[i_coord]] <- i_coord_info - } - } else { - # units - dim_info[['units']] <- "adim" - # longname - dim_info[['longname']] <- i_coord - # calendar - dim_info[['calendar']] <- NA - } - new_dim <- list(ncdim_def(name = dim_info[['name']], units = dim_info[['units']], - vals = dim_info[['vals']], unlim = dim_info[['unlim']], - create_dimvar = dim_info[['create_dimvar']], - calendar = dim_info[['calendar']], - longname = dim_info[['longname']])) - names(new_dim) <- i_coord 
- defined_dims <- c(defined_dims, new_dim) + } } + # Reorder coords + coords[c(names(coords)[!names(coords) %in% filedims])] <- NULL + coords <- coords[filedims] defined_vars <- list() if (!single_file) { @@ -675,146 +599,84 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, } Apply(data = input_data, target_dims = target_dims, - fun = .saveExp, + fun = .saveexp, destination = path[j], - defined_dims = defined_dims, + coords = coords, ftime_dim = ftime_dim, varname = varname[j], metadata_var = metadata[[varname[j]]], - extra_info_dim = extra_info_dim, - extra_string = extra_string) + extra_string = extra_string, + global_attrs = global_attrs) } } } else { # Datasets definition # From here if (!is.null(dat_dim)) { - new_dim <- list(ncdim_def(name = dat_dim, units = "adim", - vals = 1 : dim(data)[dat_dim], - longname = 'Datasets', create_dimvar = TRUE)) - names(new_dim) <- dat_dim - defined_dims <- c(new_dim, defined_dims) - extra_info_dim[[dat_dim]] <- list(Datasets = paste(Datasets, collapse = ', ')) + coords[[dat_dim]] <- array(1:dim(data)[dat_dim], dim = dim(data)[dat_dim]) + attr(coords[[dat_dim]], 'variables') <- list(list(units = 'adim')) + # extra_info_dim[[dat_dim]] <- list(Datasets = paste(Datasets, collapse = ', ')) } first_sdate <- last_sdate <- NULL if (!is.null(Dates)) { - # sdate definition - sdates <- Subset(Dates, along = ftime_dim, 1, drop = 'selected') - differ <- as.numeric((sdates - sdates[1])/3600) - new_dim <- list(ncdim_def(name = sdate_dim, units = paste('hours since', sdates[1]), - vals = differ, - longname = sdate_dim, create_dimvar = TRUE)) - names(new_dim) <- sdate_dim - defined_dims <- c(defined_dims, new_dim) - first_sdate <- sdates[1] - last_sdate <- sdates[length(sdates)] - # ftime definition - Dates <- Reorder(Dates, c(ftime_dim, sdate_dim)) - differ_ftime <- apply(Dates, 2, function(x){as.numeric((x - x[1])/3600)}) - dim(differ_ftime) <- dim(Dates) - differ_ftime_subset <- Subset(differ_ftime, along = sdate_dim, 1, drop = 'selected') - if (all(apply(differ_ftime, 1, function(x){length(unique(x)) == 1}))) { - if (all(diff(differ_ftime_subset/24) == 1)) { - # daily values - dim_time <- list(ncdim_def(name = ftime_dim, units = 'days', - vals = round(differ_ftime_subset/24) + 1, - calendar = 'proleptic_gregorian', - longname = ftime_dim, unlim = TRUE)) - names(dim_time) <- ftime_dim - defined_dims <- c(defined_dims, dim_time) - } else if (all(diff(differ_ftime_subset/24) %in% c(28, 29, 30, 31))) { - # monthly values - dim_time <- list(ncdim_def(name = ftime_dim, units = 'months', - vals = round(differ_ftime_subset/730) + 1, - calendar = 'proleptic_gregorian', - longname = ftime_dim, unlim = TRUE)) - names(dim_time) <- ftime_dim - defined_dims <- c(defined_dims, dim_time) - } else { - # other frequency - dim_time <- list(ncdim_def(name = ftime_dim, units = 'hours', - vals = differ_ftime_subset + 1, - calendar = 'proleptic_gregorian', - longname = ftime_dim, unlim = TRUE)) - names(dim_time) <- ftime_dim - defined_dims <- c(defined_dims, dim_time) - } + if (is.null(sdate_dim)) { + sdates <- Dates[1] + # ftime definition + leadtimes <- as.numeric(difftime(Dates, sdates, units = "hours")) } else { - warning("Time steps are not equal for all start dates. 
Only ", - "forecast time values for the first start date will be saved ", - "correctly.") - dim_time <- list(ncdim_def(name = ftime_dim, - units = paste('hours since', - paste(sdates, collapse = ', ')), - vals = differ_ftime_subset, - calendar = 'proleptic_gregorian', - longname = ftime_dim, unlim = TRUE)) - names(dim_time) <- ftime_dim - defined_dims <- c(defined_dims, dim_time) + # sdate definition + sdates <- Subset(Dates, along = ftime_dim, 1, drop = 'selected') + differ <- as.numeric(difftime(sdates, sdates[1], units = "hours")) + # new + dim(differ) <- dim(data)[sdate_dim] + coords[[sdate_dim]] <- differ + attr(coords[[sdate_dim]], 'variables') <- list(list(units = paste('hours since', sdates[1]), + calendar = 'proleptic_gregorian', + longname = sdate_dim)) + first_sdate <- sdates[1] + last_sdate <- sdates[length(sdates)] + # ftime definition + Dates <- Reorder(Dates, c(ftime_dim, sdate_dim)) + differ_ftime <- array(dim = dim(Dates)) + for (i in 1:length(sdates)) differ_ftime[, i] <- as.numeric(difftime(Dates[, i], Dates[1, i], + units = "hours")) + dim(differ_ftime) <- dim(Dates) + leadtimes <- Subset(differ_ftime, along = sdate_dim, 1, drop = 'selected') + if (!all(apply(differ_ftime, 1, function(x){length(unique(x)) == 1}))) { + warning("Time steps are not equal for all start dates. Only ", + "forecast time values for the first start date will be saved ", + "correctly.") + } } + + # Save in units 'hours since' + dim(leadtimes) <- dim(data)[ftime_dim] + coords[[ftime_dim]] <- leadtimes + attr(coords[[ftime_dim]], 'variables') <- list(list(units = paste('hours since', + paste(sdates, collapse = ', ')), + calendar = 'proleptic_gregorian', + longname = ftime_dim, unlim = TRUE)) } # var definition defined_vars <- list() extra_info_var <- NULL for (j in 1:n_vars) { - var_info <- list() - i_var_info <- metadata[[varname[j]]][!sapply(metadata[[varname[j]]], inherits, 'list')] - ## Define metadata - # name - var_info[['name']] <- varname[j] - # units - if ('units' %in% names(i_var_info)) { - var_info[['units']] <- i_var_info[['units']] - i_var_info[['units']] <- NULL - } else { - var_info[['units']] <- '' - } - # dim - var_info[['dim']] <- defined_dims - # missval - if ('missval' %in% names(i_var_info)) { - var_info[['missval']] <- i_var_info[['missval']] - i_var_info[['missval']] <- NULL + varname_j <- varname[j] + metadata_j <- metadata[[varname_j]] + if (is.null(var_dim)) { + coords[[varname_j]] <- data } else { - var_info[['missval']] <- NULL + coords[[varname_j]] <- Subset(data, var_dim, j, drop = 'selected') } - # longname - if (any(c('longname', 'long_name') %in% names(i_var_info))) { - longname <- names(i_var_info)[which(names(i_var_info) %in% c('longname', 'long_name'))] - var_info[['longname']] <- i_var_info[[longname]] - i_var_info[[longname]] <- NULL - } else { - var_info[['longname']] <- varname[j] - } - # prec - if ('prec' %in% names(i_var_info)) { - var_info[['prec']] <- i_var_info[['prec']] - i_var_info[['prec']] <- NULL - } else { - prec <- typeof(data) - if (prec == 'character') { - var_info[['prec']] <- 'char' - } - if (any(prec %in% c('short', 'float', 'double', 'integer', 'char', 'byte'))) { - var_info[['prec']] <- prec - } else { - var_info[['prec']] <- 'double' - } + if (!is.null(metadata_j)) { + attr(coords[[varname_j]], 'variables') <- list(metadata_j) + names(attributes(coords[[varname_j]])$variables) <- varname_j } - # extra information - if (!is.null(names(i_var_info))) { - extra_info_var[[varname[j]]] <- i_var_info + # Add global attributes + if 
(!is.null(global_attrs)) { + attributes(coords[[varname_j]])$global_attrs <- global_attrs } - new_var <- list(ncvar_def(name = var_info[['name']], - units = var_info[['units']], - dim = var_info[['dim']], - missval = var_info[['missval']], - longname = var_info[['longname']], - prec = var_info[['prec']])) - - names(new_var) <- varname[j] - defined_vars <- c(defined_vars, new_var) } if (is.null(extra_string)) { gsub("-", "", first_sdate) @@ -823,140 +685,49 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, gsub("-", "", last_sdate)), collapse = '_'), ".nc") } else { - file_name <- paste0(paste(c(varname, extra_string, - gsub("-", "", first_sdate), - gsub("-", "", last_sdate)), - collapse = '_'), ".nc") - } - full_filename <- file.path(destination, file_name) - file_nc <- nc_create(full_filename, defined_vars) - if (is.null(var_dim)) { - ncvar_put(file_nc, varname, vals = data) - } else { - for (j in 1:n_vars) { - ncvar_put(file_nc, defined_vars[[j]]$name, - vals = Subset(data, var_dim, j, drop = 'selected')) - } - } - # Additional dimension attributes - for (dim in names(defined_dims)) { - if (dim %in% names(extra_info_dim)) { - for (info_dim in names(extra_info_dim[[dim]])) { - add_info_dim <- paste0(extra_info_dim[[dim]][[info_dim]], collapse = ', ') - ncatt_put(file_nc, dim, info_dim, add_info_dim) - } - } - } - # Additional dimension attributes - for (var in names(defined_vars)) { - if (var %in% names(extra_info_var)) { - for (info_var in names(extra_info_var[[var]])) { - add_info_var <- paste0(extra_info_var[[var]][[info_var]], collapse = ', ') - ncatt_put(file_nc, var, info_var, add_info_var) - } + nc <- substr(extra_string, nchar(extra_string)-2, nchar(extra_string)) + if (nc == ".nc") { + file_name <- extra_string + } else { + file_name <- paste0(extra_string, ".nc") } } - nc_close(file_nc) + full_filename <- file.path(destination, file_name) + ArrayToNc(coords, full_filename) } } -.saveExp <- function(data, startdates = NULL, dates = NULL, destination = "./", - defined_dims, ftime_dim = 'time', varname = 'var', - metadata_var = NULL, extra_info_dim = NULL, - extra_string = NULL) { - # ftime_dim +.saveexp <- function(data, coords, destination = "./", + startdates = NULL, dates = NULL, + ftime_dim = 'time', varname = 'var', + metadata_var = NULL, extra_string = NULL, + global_attrs = NULL) { if (!is.null(dates)) { - differ <- as.numeric((dates - dates[1])/3600) - dim_time <- list(ncdim_def(name = ftime_dim, units = paste('hours since', dates[1]), - vals = differ, calendar = 'proleptic_gregorian', - longname = ftime_dim, unlim = TRUE)) - names(dim_time) <- ftime_dim - defined_dims <- c(defined_dims, dim_time) - } - - ## Define var metadata - var_info <- NULL - extra_info_var <- NULL - i_var_info <- metadata_var[!sapply(metadata_var, inherits, 'list')] - - # name - var_info[['name']] <- varname - # units - if ('units' %in% names(i_var_info)) { - var_info[['units']] <- i_var_info[['units']] - i_var_info[['units']] <- NULL - } else { - var_info[['units']] <- '' - } - # dim - var_info[['dim']] <- defined_dims - # missval - if ('missval' %in% names(i_var_info)) { - var_info[['missval']] <- i_var_info[['missval']] - i_var_info[['missval']] <- NULL - } else { - var_info[['missval']] <- NULL - } - # longname - if (any(c('longname', 'long_name') %in% names(i_var_info))) { - longname <- names(i_var_info)[which(names(i_var_info) %in% c('longname', 'long_name'))] - var_info[['longname']] <- i_var_info[[longname]] - i_var_info[[longname]] <- NULL - } else { - 
var_info[['longname']] <- varname - } - # prec - if ('prec' %in% names(i_var_info)) { - var_info[['prec']] <- i_var_info[['prec']] - i_var_info[['prec']] <- NULL - } else { - prec <- typeof(data) - if (prec == 'character') { - var_info[['prec']] <- 'char' - } - if (any(prec %in% c('short', 'float', 'double', 'integer', 'char', 'byte'))) { - var_info[['prec']] <- prec - } else { - var_info[['prec']] <- 'double' - } - } - # extra information - if (!is.null(names(i_var_info))) { - extra_info_var <- i_var_info + differ <- as.numeric(difftime(dates, dates[1], units = "hours")) + dim(differ) <- dim(data)[ftime_dim] + coords[[ftime_dim]] <- differ + attr(coords[[ftime_dim]], 'variables') <- list(list(units = paste('hours since', dates[1]), + calendar = 'proleptic_gregorian', + longname = ftime_dim, unlim = TRUE)) + names(attributes(coords[[ftime_dim]])$variables) <- ftime_dim + } + # Add data + coords[[varname]] <- data + if (!is.null(metadata_var)) { + metadata_var$dim <- NULL + attr(coords[[varname]], 'variables') <- list(metadata_var) + names(attributes(coords[[varname]])$variables) <- varname + } + # Add global attributes + if (!is.null(global_attrs)) { + attributes(coords[[varname]])$global_attrs <- global_attrs } - datanc <- ncvar_def(name = var_info[['name']], - units = var_info[['units']], - dim = var_info[['dim']], - missval = var_info[['missval']], - longname = var_info[['longname']], - prec = var_info[['prec']]) - if (is.null(extra_string)) { file_name <- paste0(varname, "_", startdates, ".nc") } else { file_name <- paste0(varname, "_", extra_string, "_", startdates, ".nc") } full_filename <- file.path(destination, file_name) - file_nc <- nc_create(full_filename, datanc) - ncvar_put(file_nc, datanc, data) - - # Additional attributes - for (dim in names(defined_dims)) { - if (dim %in% names(extra_info_dim)) { - for (info_dim in names(extra_info_dim[[dim]])) { - add_info_dim <- paste0(extra_info_dim[[dim]][[info_dim]], collapse = ', ') - ncatt_put(file_nc, dim, info_dim, add_info_dim) - } - } - } - # Additional dimension attributes - if (!is.null(extra_info_var)) { - for (info_var in names(extra_info_var)) { - add_info_var <- paste0(extra_info_var[[info_var]], collapse = ', ') - ncatt_put(file_nc, varname, info_var, add_info_var) - } - } - - nc_close(file_nc) -} + ArrayToNc(coords, full_filename) +} \ No newline at end of file diff --git a/tests/testthat/test-CST_SaveCube.R b/tests/testthat/test-CST_SaveCube.R new file mode 100644 index 00000000..0951c392 --- /dev/null +++ b/tests/testthat/test-CST_SaveCube.R @@ -0,0 +1,324 @@ +############################################## + +# cube0 +cube0 <- array(1:5, dim = c(sdate = 5, lon = 4, lat = 4, ftime = 1)) +class(cube0) <- 's2dv_cube' + +# cube1 +cube1 <- NULL +cube1$data <- array(1:5, dim = c(sdate = 5, lon = 4, lat = 4, ftime = 1)) +coords2 <- list(sdate = c('20000101', '20010102', '20020103', '20030104', '20040105'), + var = 'tas', + lon = 1.:4., + lat = 1.:4.) 
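+# Attach the coordinate list and the dimensioned Dates so that cube1 carries
+# the 'data', 'coords' and 'attrs$Dates' pieces of an 's2dv_cube', which are
+# the elements the input checks of CST_SaveCube() below inspect.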
+cube1$coords <- coords2 +dates2 <- c('20000101', '20010102', '20020103', '20030104', '20040105') +dates2 <- as.Date(dates2, format = "%Y%m%d", tz = "UTC") +dim(dates2) <- c(sdate = 5, ftime = 1) +cube1$attrs$Dates <- dates2 +class(cube1) <- 's2dv_cube' + +# cube2 +cube2 <- cube1 +cube2$data <- array(1:5, dim = c(sdate = 5, lon = 4, lat = 4, ftime = 1, + test = 2, test2 = 3)) +dim(cube2$data) <- c(sdate = 5, lon = 4, lat = 4, ftime = 1, member = 1, + ensemble = 1, test = 2, test2 = 3) + +# cube3 +cube3 <- cube1 + +# dat0 +dates0 <- as.Date('2022-02-01', format = "%Y-%m-%d") +dim(dates0) <- c(sdate = 1) + +# dat1 +dat1 <- array(1, dim = c(test = 1)) + +# dat2 +dat2 <- array(1:5, dim = c(sdate = 5, lon = 4, lat = 4, ftime = 1)) +coords2 <- list(sdate = c('20000101', '20010102', '20020103', '20030104', '20040105'), + var = 'tas', + lon = 1.:4., + lat = 1.:4.) +dates2 <- c('20000101', '20010102', '20020103', '20030104', '20040105') +dates2 <- as.Date(dates2, format = "%Y%m%d", tz = "UTC") +dim(dates2) <- c(sdate = 5, ftime = 1) + +# dat3 (without sdate dim) +dat3 <- array(1:5, dim = c(lon = 4, lat = 4, ftime = 2)) +coords3 <- list(sdate = c('20000101', '20010102'), + var = 'tas', + lon = 1.:4., + lat = 1.:4.) +dates3 <- c('20000101', '20010102') +dates3 <- as.Date(dates3, format = "%Y%m%d", tz = "UTC") +dim(dates3) <- c(ftime = 2) + +# dat4 (without ftime dim) +dat4 <- array(1:5, dim = c(sdate = 2, lon = 4, lat = 4)) +coords4 <- list(sdate = c('20000101', '20010102'), + var = 'tas', + lon = 1.:4., + lat = 1.:4.) +dates4 <- c('20000101', '20010102') +dates4 <- as.Date(dates4, format = "%Y%m%d", tz = "UTC") +dim(dates4) <- c(sdate = 2) + +# dates5 (Dates with extra dimensions) +dates5 <- c('20000101', '20010102', '20010102', '20010102') +dates5 <- as.Date(dates5, format = "%Y%m%d", tz = "UTC") +dim(dates5) <- c(ftime = 2, test = 1, test2 = 2) + +############################################## + +test_that("1. Input checks: CST_SaveCube", { + # s2dv_cube + expect_error( + CST_SaveCube(data = 1), + paste0("Parameter 'data' must be of the class 's2dv_cube'.") + ) + # structure + expect_error( + CST_SaveCube(data = cube0), + paste0("Parameter 'data' must have at least 'data' and 'attrs' elements ", + "within the 's2dv_cube' structure.") + ) + cube0 <- list(data = cube0, attrs = 1) + class(cube0) <- 's2dv_cube' + expect_error( + CST_SaveCube(data = cube0), + paste0("Level 'attrs' must be a list with at least 'Dates' element.") + ) + cube0$attrs <- NULL + cube0$attrs$Dates <- dates2 + expect_warning( + CST_SaveCube(data = cube0, sdate_dim = c('sdate', 'sweek'), + ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL, + var_dim = NULL, single_file = FALSE), + paste0("Element 'coords' not found. 
No coordinates will be used.") + ) + + # sdate_dim + suppressWarnings( + expect_error( + CST_SaveCube(data = cube1, sdate_dim = 1), + paste0("Parameter 'sdate_dim' must be a character string.") + ) + ) + expect_warning( + CST_SaveCube(data = cube1, sdate_dim = c('sdate', 'sweek'), + ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL, + var_dim = NULL, extra_string = 'test'), + paste0("Parameter 'sdate_dim' has length greater than 1 and ", + "only the first element will be used.") + ) + suppressWarnings( + expect_error( + CST_SaveCube(data = cube1, sdate_dim = 'a', ftime_dim = 'ftime'), + paste0("Parameter 'sdate_dim' is not found in 'data' dimension.") + ) + ) + # startdates + expect_warning( + CST_SaveCube(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL, startdates = 1), + "Parameter 'startdates' is not a character string, it will not be used." + ) + expect_warning( + CST_SaveCube(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL, startdates = '20100101'), + paste0("Parameter 'startdates' doesn't have the same length ", + "as dimension '", 'sdate',"', it will not be used.") + ) + # metadata + expect_warning( + CST_SaveCube(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL), + paste0("No metadata found in element Variable from attrs.") + ) + cube1$attrs$Variable$metadata <- 'metadata' + expect_error( + CST_SaveCube(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL), + paste0("Element metadata from Variable element in attrs must be a list.") + ) + cube1$attrs$Variable$metadata <- list(test = 'var') + expect_warning( + CST_SaveCube(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL), + paste0("Metadata is not found for any coordinate.") + ) + cube1$attrs$Variable$metadata <- list(var = 'var') + expect_warning( + CST_SaveCube(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL), + paste0("Metadata is not found for any variable.") + ) + # memb_dim + suppressWarnings( + expect_error( + CST_SaveCube(data = cube1, memb_dim = 1, ftime_dim = 'ftime'), + paste0("Parameter 'memb_dim' must be a character string.") + ) + ) + suppressWarnings( + expect_error( + CST_SaveCube(data = cube1, memb_dim = 'member', ftime_dim = 'ftime'), + paste0("Parameter 'memb_dim' is not found in 'data' dimension. Set it ", + "as NULL if there is no member dimension.") + ) + ) +}) + +############################################## + +test_that("1. Input checks", { + # data + expect_error( + SaveCube(data = NULL), + "Parameter 'data' cannot be NULL." + ) + expect_error( + SaveCube(data = 1:10), + "Parameter 'data' must be an array with named dimensions." 
+ ) + # destination + expect_error( + SaveCube(data = array(1, dim = c(a = 1)), destination = NULL), + paste0("Parameter 'destination' must be a character string of one element ", + "indicating the name of the file (including the folder if needed) ", + "where the data will be saved."), + fixed = TRUE + ) + # Dates + expect_error( + SaveCube(data = array(1, dim = c(a = 1)), Dates = 'a'), + paste0("Parameter 'Dates' must be of 'POSIXct' or 'Dates' class.") + ) + expect_error( + SaveCube(data = array(1, dim = c(a = 1)), + Dates = as.Date('2022-02-01', format = "%Y-%m-%d")), + paste0("Parameter 'Dates' must have dimension names.") + ) + # drop_dims + expect_warning( + SaveCube(data = dat2, coords = coords2, + metadata = list(tas = list(level = '2m')), + Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL, drop_dims = 1), + paste0("Parameter 'drop_dims' must be character string containing ", + "the data dimension names to be dropped. It will not be used.") + ) + expect_warning( + SaveCube(data = dat2, coords = coords2, + metadata = list(tas = list(level = '2m')), + Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL, drop_dims = 'time'), + paste0("Parameter 'drop_dims' must be character string containing ", + "the data dimension names to be dropped. It will not be used.") + ) + expect_warning( + SaveCube(data = dat2, coords = coords2, + metadata = list(tas = list(level = '2m')), + Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL, drop_dims = 'sdate'), + paste0("Parameter 'drop_dims' can only contain dimension names ", + "that are of length 1. It will not be used.") + ) + expect_warning( + SaveCube(data = dat2, coords = coords2, + metadata = list(tas = list(level = '2m')), + Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL, drop_dims = 'ftime'), + paste0("Parameter 'drop_dims' contains dimensions used in the ", + "computation. It will not be used.") + ) + # varname + suppressWarnings( + expect_error( + SaveCube(data = dat2, coords = coords2, varname = 1, + metadata = list(tas = list(level = '2m')), + Dates = dates2), + "Parameter 'varname' must be a character." + ) + ) + # varname, metadata, spatial coords, unknown dim + expect_error( + SaveCube(data = dat1, varname = 1, ftime_dim = NULL, sdate_dim = NULL, + memb_dim = NULL, dat_dim = NULL, var_dim = NULL), + paste0("Parameter 'varname' must be a character string with the ", + "variable names.") + ) + # ftime_dim + expect_error( + SaveCube(data = dat4, coords = coords4, + metadata = list(tas = list(level = '2m')), + Dates = dates4, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL), + paste0("Parameter 'ftime_dim' is not found in 'data' dimension.") + ) + # Dates dimension check + expect_error( + SaveCube(data = dat4, coords = coords4, + metadata = list(tas = list(level = '2m')), + Dates = dates4, ftime_dim = NULL, memb_dim = NULL, + dat_dim = NULL, var_dim = NULL), + paste0("Parameter 'Dates' must have 'ftime_dim'.") + ) + expect_warning( + SaveCube(data = dat4, coords = coords4, + metadata = list(tas = list(level = '2m')), + Dates = NULL, ftime_dim = NULL, memb_dim = NULL, + dat_dim = NULL, var_dim = NULL), + paste0("Dates must be provided if 'data' must be saved in separated files. 
", + "All data will be saved in a single file.") + ) + # Without ftime and sdate + expect_error( + SaveCube(data = dat3, coords = coords3, + metadata = list(tas = list(level = '2m')), + Dates = dates5, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL, sdate_dim = NULL), + paste0("Parameter 'Dates' must have only 'sdate_dim' and 'ftime_dim' dimensions.") + ) + expect_warning( + SaveCube(data = dat2, coords = coords2, + metadata = list(tas = list(level = '2m')), + startdates = c(paste(1:11, collapse = '')), + Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL, sdate_dim = 'sdate'), + paste0("Parameter 'startdates' should be a character string containing ", + "the start dates in the format 'yyyy-mm-dd', 'yyyymmdd', 'yyyymm', ", + "'POSIXct' or 'Dates' class. Files will be named with Dates instead.") + ) + expect_warning( + SaveCube(data = dat2, coords = coords2, + metadata = list(tas = list(level = '2m')), + Dates = NULL, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL, sdate_dim = 'sdate'), + paste0("Dates must be provided if 'data' must be saved in separated files. ", + "All data will be saved in a single file.") + ) + # (dat3) Without sdate_dim + expect_warning( + SaveCube(data = dat3, coords = coords3, + metadata = list(tas = list(level = '2m')), + Dates = NULL, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL, sdate_dim = NULL, + extra_string = 'nosdate3.nc', single_file = FALSE), + paste0("Dates must be provided if 'data' must be saved in separated files. ", + "All data will be saved in a single file.") + ) + # (dat4) Without ftime_dim + expect_error( + SaveCube(data = dat4, coords = coords4, + metadata = list(tas = list(level = '2m')), + Dates = dates4, ftime_dim = NULL, memb_dim = NULL, + dat_dim = NULL, var_dim = NULL, sdate_dim = 'sdate', + single_file = TRUE), + paste0("Parameter 'Dates' must have 'ftime_dim'.") + ) +}) + +############################################## diff --git a/tests/testthat/test-CST_SaveExp.R b/tests/testthat/test-CST_SaveExp.R index f39dffe9..17226161 100644 --- a/tests/testthat/test-CST_SaveExp.R +++ b/tests/testthat/test-CST_SaveExp.R @@ -31,8 +31,10 @@ cube3 <- cube1 # dat0 dates0 <- as.Date('2022-02-01', format = "%Y-%m-%d") dim(dates0) <- c(sdate = 1) + # dat1 dat1 <- array(1, dim = c(test = 1)) + # dat2 dat2 <- array(1:5, dim = c(sdate = 5, lon = 4, lat = 4, ftime = 1)) coords2 <- list(sdate = c('20000101', '20010102', '20020103', '20030104', '20040105'), @@ -43,6 +45,31 @@ dates2 <- c('20000101', '20010102', '20020103', '20030104', '20040105') dates2 <- as.Date(dates2, format = "%Y%m%d", tz = "UTC") dim(dates2) <- c(sdate = 5, ftime = 1) +# dat3 (without sdate dim) +dat3 <- array(1:5, dim = c(lon = 4, lat = 4, ftime = 2)) +coords3 <- list(sdate = c('20000101', '20010102'), + var = 'tas', + lon = 1.:4., + lat = 1.:4.) +dates3 <- c('20000101', '20010102') +dates3 <- as.Date(dates3, format = "%Y%m%d", tz = "UTC") +dim(dates3) <- c(ftime = 2) + +# dat4 (without ftime dim) +dat4 <- array(1:5, dim = c(sdate = 2, lon = 4, lat = 4)) +coords4 <- list(sdate = c('20000101', '20010102'), + var = 'tas', + lon = 1.:4., + lat = 1.:4.) 
+dates4 <- c('20000101', '20010102') +dates4 <- as.Date(dates4, format = "%Y%m%d", tz = "UTC") +dim(dates4) <- c(sdate = 2) + +# dates5 (Dates with extra dimensions) +dates5 <- c('20000101', '20010102', '20010102', '20010102') +dates5 <- as.Date(dates5, format = "%Y%m%d", tz = "UTC") +dim(dates5) <- c(ftime = 2, test = 1, test2 = 2) + ############################################## test_that("1. Input checks: CST_SaveExp", { @@ -63,14 +90,14 @@ test_that("1. Input checks: CST_SaveExp", { CST_SaveExp(data = cube0), paste0("Level 'attrs' must be a list with at least 'Dates' element.") ) - # cube0$attrs <- NULL - # cube0$attrs$Dates <- dates2 - # expect_warning( - # CST_SaveExp(data = cube0, sdate_dim = c('sdate', 'sweek'), - # ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL, - # var_dim = NULL, single_file = FALSE), - # paste0("Element 'coords' not found. No coordinates will be used.") - # ) + cube0$attrs <- NULL + cube0$attrs$Dates <- dates2 + expect_warning( + CST_SaveExp(data = cube0, sdate_dim = c('sdate', 'sweek'), + ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL, + var_dim = NULL, single_file = FALSE), + paste0("Element 'coords' not found. No coordinates will be used.") + ) # sdate_dim suppressWarnings( @@ -79,37 +106,37 @@ test_that("1. Input checks: CST_SaveExp", { paste0("Parameter 'sdate_dim' must be a character string.") ) ) - # expect_warning( - # CST_SaveExp(data = cube1, sdate_dim = c('sdate', 'sweek'), - # ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL, - # var_dim = NULL), - # paste0("Parameter 'sdate_dim' has length greater than 1 and ", - # "only the first element will be used.") - # ) + expect_warning( + CST_SaveExp(data = cube1, sdate_dim = c('sdate', 'sweek'), + ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL, + var_dim = NULL, extra_string = 'test'), + paste0("Parameter 'sdate_dim' has length greater than 1 and ", + "only the first element will be used.") + ) suppressWarnings( expect_error( CST_SaveExp(data = cube1, sdate_dim = 'a', ftime_dim = 'ftime'), paste0("Parameter 'sdate_dim' is not found in 'data' dimension.") ) ) - # # startdates - # expect_warning( - # CST_SaveExp(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, - # dat_dim = NULL, var_dim = NULL, startdates = 1), - # "Parameter 'startdates' is not a character string, it will not be used." - # ) - # expect_warning( - # CST_SaveExp(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, - # dat_dim = NULL, var_dim = NULL, startdates = '20100101'), - # paste0("Parameter 'startdates' doesn't have the same length ", - # "as dimension '", 'sdate',"', it will not be used.") - # ) - # # metadata - # expect_warning( - # CST_SaveExp(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, - # dat_dim = NULL, var_dim = NULL), - # paste0("No metadata found in element Variable from attrs.") - # ) + # startdates + expect_warning( + CST_SaveExp(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL, startdates = 1), + "Parameter 'startdates' is not a character string, it will not be used." 
+ ) + expect_warning( + CST_SaveExp(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL, startdates = '20100101'), + paste0("Parameter 'startdates' doesn't have the same length ", + "as dimension '", 'sdate',"', it will not be used.") + ) + # metadata + expect_warning( + CST_SaveExp(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL), + paste0("No metadata found in element Variable from attrs.") + ) cube1$attrs$Variable$metadata <- 'metadata' expect_error( CST_SaveExp(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, @@ -117,17 +144,17 @@ test_that("1. Input checks: CST_SaveExp", { paste0("Element metadata from Variable element in attrs must be a list.") ) cube1$attrs$Variable$metadata <- list(test = 'var') - # expect_warning( - # CST_SaveExp(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, - # dat_dim = NULL, var_dim = NULL), - # paste0("Metadata is not found for any coordinate.") - # ) + expect_warning( + CST_SaveExp(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL), + paste0("Metadata is not found for any coordinate.") + ) cube1$attrs$Variable$metadata <- list(var = 'var') - # expect_warning( - # CST_SaveExp(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, - # dat_dim = NULL, var_dim = NULL), - # paste0("Metadata is not found for any variable.") - # ) + expect_warning( + CST_SaveExp(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL), + paste0("Metadata is not found for any variable.") + ) # memb_dim suppressWarnings( expect_error( @@ -174,39 +201,40 @@ test_that("1. Input checks", { Dates = as.Date('2022-02-01', format = "%Y-%m-%d")), paste0("Parameter 'Dates' must have dimension names.") ) - # # drop_dims - # expect_warning( - # SaveExp(data = dat2, coords = coords2, - # metadata = list(tas = list(level = '2m')), - # Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, - # dat_dim = NULL, var_dim = NULL, drop_dims = 1), - # paste0("Parameter 'drop_dims' must be character string containing ", - # "the data dimension names to be dropped. It will not be used.") - # ) - # expect_warning( - # SaveExp(data = dat2, coords = coords2, - # metadata = list(tas = list(level = '2m')), - # Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, - # dat_dim = NULL, var_dim = NULL, drop_dims = 'time'), - # paste0("Parameter 'drop_dims' must be character string containing ", - # "the data dimension names to be dropped. It will not be used.") - # ) - # expect_warning( - # SaveExp(data = dat2, coords = coords2, - # metadata = list(tas = list(level = '2m')), - # Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, - # dat_dim = NULL, var_dim = NULL, drop_dims = 'sdate'), - # paste0("Parameter 'drop_dims' can only contain dimension names ", - # "that are of length 1. It will not be used.") - # ) - # # varname - # expect_warning( - # SaveExp(data = dat2, coords = coords2, - # metadata = list(tas = list(level = '2m')), - # Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, - # dat_dim = NULL, var_dim = NULL), - # paste0("Parameter 'varname' is NULL. It will be assigned to 'X'.") - # ) + # drop_dims + expect_warning( + SaveExp(data = dat2, coords = coords2, + metadata = list(tas = list(level = '2m')), + Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL, drop_dims = 1), + paste0("Parameter 'drop_dims' must be character string containing ", + "the data dimension names to be dropped. 
It will not be used.") + ) + expect_warning( + SaveExp(data = dat2, coords = coords2, + metadata = list(tas = list(level = '2m')), + Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL, drop_dims = 'time'), + paste0("Parameter 'drop_dims' must be character string containing ", + "the data dimension names to be dropped. It will not be used.") + ) + expect_warning( + SaveExp(data = dat2, coords = coords2, + metadata = list(tas = list(level = '2m')), + Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL, drop_dims = 'sdate'), + paste0("Parameter 'drop_dims' can only contain dimension names ", + "that are of length 1. It will not be used.") + ) + expect_warning( + SaveExp(data = dat2, coords = coords2, + metadata = list(tas = list(level = '2m')), + Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL, drop_dims = 'ftime'), + paste0("Parameter 'drop_dims' contains dimensions used in the ", + "computation. It will not be used.") + ) + # varname suppressWarnings( expect_error( SaveExp(data = dat2, coords = coords2, varname = 1, @@ -215,30 +243,82 @@ test_that("1. Input checks", { "Parameter 'varname' must be a character." ) ) - # # coords - # expect_warning( - # SaveExp(data = dat2, coords = list(sdate = coords2[[1]]), - # varname = 'tas', metadata = list(tas = list(level = '2m')), - # Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, - # dat_dim = NULL, var_dim = NULL), - # "Coordinate 'lon' is not provided and it will be set as index in element coords.", - # "Coordinate 'lat' is not provided and it will be set as index in element coords.", - # "Coordinate 'ftime' is not provided and it will be set as index in element coords." - # ) - # # varname, metadata, spatial coords, unknown dim - # expect_warning( - # SaveExp(data = dat1, ftime_dim = NULL, sdate_dim = NULL, memb_dim = NULL, - # dat_dim = NULL, var_dim = NULL, single_file = TRUE), - # "Parameter 'varname' is NULL. It will be assigned to 'X'.", - # "Parameter 'metadata' is not provided so the metadata saved will be incomplete.", - # paste0("Spatial coordinates not found.") - # ) + # varname, metadata, spatial coords, unknown dim expect_error( SaveExp(data = dat1, varname = 1, ftime_dim = NULL, sdate_dim = NULL, memb_dim = NULL, dat_dim = NULL, var_dim = NULL), paste0("Parameter 'varname' must be a character string with the ", "variable names.") ) + # ftime_dim + expect_error( + SaveExp(data = dat4, coords = coords4, + metadata = list(tas = list(level = '2m')), + Dates = dates4, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL), + paste0("Parameter 'ftime_dim' is not found in 'data' dimension.") + ) + # Dates dimension check + expect_error( + SaveExp(data = dat4, coords = coords4, + metadata = list(tas = list(level = '2m')), + Dates = dates4, ftime_dim = NULL, memb_dim = NULL, + dat_dim = NULL, var_dim = NULL), + paste0("Parameter 'Dates' must have 'ftime_dim'.") + ) + expect_warning( + SaveExp(data = dat4, coords = coords4, + metadata = list(tas = list(level = '2m')), + Dates = NULL, ftime_dim = NULL, memb_dim = NULL, + dat_dim = NULL, var_dim = NULL), + paste0("Dates must be provided if 'data' must be saved in separated files. 
", + "All data will be saved in a single file.") + ) + # Without ftime and sdate + expect_error( + SaveExp(data = dat3, coords = coords3, + metadata = list(tas = list(level = '2m')), + Dates = dates5, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL, sdate_dim = NULL), + paste0("Parameter 'Dates' must have only 'sdate_dim' and 'ftime_dim' dimensions.") + ) + expect_warning( + SaveExp(data = dat2, coords = coords2, + metadata = list(tas = list(level = '2m')), + startdates = c(paste(1:11, collapse = '')), + Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL, sdate_dim = 'sdate'), + paste0("Parameter 'startdates' should be a character string containing ", + "the start dates in the format 'yyyy-mm-dd', 'yyyymmdd', 'yyyymm', ", + "'POSIXct' or 'Dates' class. Files will be named with Dates instead.") + ) + expect_warning( + SaveExp(data = dat2, coords = coords2, + metadata = list(tas = list(level = '2m')), + Dates = NULL, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL, sdate_dim = 'sdate'), + paste0("Dates must be provided if 'data' must be saved in separated files. ", + "All data will be saved in a single file.") + ) + # (dat3) Without sdate_dim + expect_warning( + SaveExp(data = dat3, coords = coords3, + metadata = list(tas = list(level = '2m')), + Dates = NULL, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL, sdate_dim = NULL, + extra_string = 'nosdate3.nc', single_file = FALSE), + paste0("Dates must be provided if 'data' must be saved in separated files. ", + "All data will be saved in a single file.") + ) + # (dat4) Without ftime_dim + expect_error( + SaveExp(data = dat4, coords = coords4, + metadata = list(tas = list(level = '2m')), + Dates = dates4, ftime_dim = NULL, memb_dim = NULL, + dat_dim = NULL, var_dim = NULL, sdate_dim = 'sdate', + single_file = TRUE), + paste0("Parameter 'Dates' must have 'ftime_dim'.") + ) }) ############################################## -- GitLab From b869c87ef843f41445aa6322c916aa32bd1c6e9a Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Wed, 29 Nov 2023 16:14:33 +0100 Subject: [PATCH 18/66] Regenerate documentation and add example scripts --- NAMESPACE | 2 - R/CST_SaveExp.R | 67 +++++--- inst/doc/usecase.md | 11 ++ inst/doc/usecase/UseCase4_CST_SaveExp.R | 215 ++++++++++++++++++++++++ man/CST_SaveCube.Rd | 114 ------------- man/CST_SaveExp.Rd | 11 +- man/SaveCube.Rd | 143 ---------------- man/SaveExp.Rd | 10 +- 8 files changed, 289 insertions(+), 284 deletions(-) create mode 100644 inst/doc/usecase.md create mode 100644 inst/doc/usecase/UseCase4_CST_SaveExp.R delete mode 100644 man/CST_SaveCube.Rd delete mode 100644 man/SaveCube.Rd diff --git a/NAMESPACE b/NAMESPACE index 35bd3c4f..f6cc47b3 100644 --- a/NAMESPACE +++ b/NAMESPACE @@ -34,7 +34,6 @@ export(CST_RFTemp) export(CST_RFWeights) export(CST_RainFARM) export(CST_RegimesAssign) -export(CST_SaveCube) export(CST_SaveExp) export(CST_SplitDim) export(CST_Start) @@ -62,7 +61,6 @@ export(RFTemp) export(RF_Weights) export(RainFARM) export(RegimesAssign) -export(SaveCube) export(SaveExp) export(SplitDim) export(WeatherRegime) diff --git a/R/CST_SaveExp.R b/R/CST_SaveExp.R index 72b45b41..ebe25941 100644 --- a/R/CST_SaveExp.R +++ b/R/CST_SaveExp.R @@ -46,6 +46,12 @@ #'@param extra_string A character string to be include as part of the file name, #' for instance, to identify member or realization. It would be added to the #' file name between underscore characters. 
+#'@param units_hours_since (Optional) A logical value only used for the case +#' Dates have forecast time and start date dimension and single_file is TRUE. +#' When it is TRUE, it saves the forecast time with units of 'hours since'; +#' if it is FALSE, the time units will be a number of time steps with its +#' corresponding frequency (e.g. n days, n months or n hours). It is TRUE +#' by default. #' #'@return Multiple or single NetCDF files containing the data array.\cr #'\item{\code{single_file = TRUE}}{ @@ -84,7 +90,7 @@ CST_SaveExp <- function(data, destination = "./", sdate_dim = 'sdate', var_dim = 'var', memb_dim = 'member', startdates = NULL, drop_dims = NULL, single_file = FALSE, extra_string = NULL, - global_attrs = NULL) { + global_attrs = NULL, units_hours_since = TRUE) { # Check 's2dv_cube' if (!inherits(data, 's2dv_cube')) { stop("Parameter 'data' must be of the class 's2dv_cube'.") @@ -140,13 +146,12 @@ CST_SaveExp <- function(data, destination = "./", sdate_dim = 'sdate', } # startdates if (is.null(startdates)) { - startdates <- data$coords[[sdate_dim]] - } else { - if (!is.character(startdates)) { - warning(paste0("Parameter 'startdates' is not a character string, ", - "it will not be used.")) + if (is.character(data$coords[[sdate_dim]])) { startdates <- data$coords[[sdate_dim]] } + } + + if (!is.null(startdates)) { if (!is.null(sdate_dim)) { if (dim(data$data)[sdate_dim] != length(startdates)) { warning(paste0("Parameter 'startdates' doesn't have the same length ", @@ -170,7 +175,8 @@ CST_SaveExp <- function(data, destination = "./", sdate_dim = 'sdate', drop_dims = drop_dims, extra_string = extra_string, single_file = single_file, - global_attrs = global_attrs) + global_attrs = global_attrs, + units_hours_since = units_hours_since) } #'Save a multidimensional array with metadata to data in NetCDF format #'@description This function allows to save a data array with metadata into a @@ -279,7 +285,7 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, startdates = NULL, dat_dim = 'dataset', sdate_dim = 'sdate', ftime_dim = 'time', var_dim = 'var', memb_dim = 'member', drop_dims = NULL, single_file = FALSE, extra_string = NULL, - global_attrs = NULL) { + global_attrs = NULL, units_hours_since = TRUE) { ## Initial checks # data if (is.null(data)) { @@ -297,7 +303,7 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, } # Dates if (!is.null(Dates)) { - if (!inherits(Dates, "POSIXct") & !inherits(Dates, "Date")) { + if (!any(inherits(Dates, "POSIXct"), inherits(Dates, "Date"))) { stop("Parameter 'Dates' must be of 'POSIXct' or 'Dates' class.") } if (is.null(dim(Dates))) { @@ -468,14 +474,15 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, } if (is.null(startdates)) { startdates <- Subset(Dates, along = ftime_dim, 1, drop = 'selected') - } else if (any(inherits(startdates, "POSIXct"), inherits(startdates, "Date"))) { - startdates <- format(startdates, "%Y%m%d") } else if (any(nchar(startdates) > 10, nchar(startdates) < 1)) { warning("Parameter 'startdates' should be a character string containing ", "the start dates in the format 'yyyy-mm-dd', 'yyyymmdd', 'yyyymm', ", "'POSIXct' or 'Dates' class. 
Files will be named with Dates instead.") startdates <- Subset(Dates, along = ftime_dim, 1, drop = 'selected') } + if (any(inherits(startdates, "POSIXct"), inherits(startdates, "Date"))) { + startdates <- format(startdates, "%Y%m%d") + } } else if (any(ftime_dim %in% names(dim(Dates)))) { if (all(dim(Dates)[!names(dim(Dates)) %in% c(ftime_dim)] == 1)) { dim(Dates) <- dim(Dates)[names(dim(Dates)) %in% c(ftime_dim, sdate_dim)] @@ -554,6 +561,7 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, if ('variables' %in% names(attributes(metadata[[i_coord]]))) { # from Start: 'lon' or 'lat' attrs <- attributes(metadata[[i_coord]])[['variables']] + attrs[[i_coord]]$dim <- NULL attr(coords[[i_coord]], 'variables') <- attrs } else if (inherits(metadata[[i_coord]], 'list')) { # from Start and Load: main var @@ -571,7 +579,6 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, # Reorder coords coords[c(names(coords)[!names(coords) %in% filedims])] <- NULL coords <- coords[filedims] - defined_vars <- list() if (!single_file) { for (i in 1:n_datasets) { @@ -617,7 +624,6 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, attr(coords[[dat_dim]], 'variables') <- list(list(units = 'adim')) # extra_info_dim[[dat_dim]] <- list(Datasets = paste(Datasets, collapse = ', ')) } - first_sdate <- last_sdate <- NULL if (!is.null(Dates)) { if (is.null(sdate_dim)) { sdates <- Dates[1] @@ -633,8 +639,6 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, attr(coords[[sdate_dim]], 'variables') <- list(list(units = paste('hours since', sdates[1]), calendar = 'proleptic_gregorian', longname = sdate_dim)) - first_sdate <- sdates[1] - last_sdate <- sdates[length(sdates)] # ftime definition Dates <- Reorder(Dates, c(ftime_dim, sdate_dim)) differ_ftime <- array(dim = dim(Dates)) @@ -648,14 +652,30 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, "correctly.") } } - - # Save in units 'hours since' - dim(leadtimes) <- dim(data)[ftime_dim] - coords[[ftime_dim]] <- leadtimes - attr(coords[[ftime_dim]], 'variables') <- list(list(units = paste('hours since', - paste(sdates, collapse = ', ')), + if (!units_hours_since) { + if (all(diff(leadtimes/24) == 1)) { + # daily values + units <- 'days' + vals <- round(leadtimes/24) + 1 + } else if (all(diff(leadtimes/24) %in% c(28, 29, 30, 31))) { + # monthly values + units <- 'months' + vals <- round(leadtimes/730) + 1 + } else { + # other frequency + units <- 'hours' + vals <- leadtimes + 1 + } + } else { + units <- paste('hours since', paste(sdates, collapse = ', ')) + vals <- leadtimes + } + dim(vals) <- dim(data)[ftime_dim] + coords[[ftime_dim]] <- vals + attr(coords[[ftime_dim]], 'variables') <- list(list(units = units, calendar = 'proleptic_gregorian', - longname = ftime_dim, unlim = TRUE)) + longname = ftime_dim, + unlim = TRUE)) } # var definition @@ -670,6 +690,7 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, coords[[varname_j]] <- Subset(data, var_dim, j, drop = 'selected') } if (!is.null(metadata_j)) { + metadata_j$dim <- NULL attr(coords[[varname_j]], 'variables') <- list(metadata_j) names(attributes(coords[[varname_j]])$variables) <- varname_j } @@ -679,6 +700,8 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, } } if (is.null(extra_string)) { + first_sdate <- startdates[1] + last_sdate <- startdates[length(startdates)] gsub("-", "", first_sdate) file_name <- paste0(paste(c(varname, gsub("-", 
"", first_sdate), diff --git a/inst/doc/usecase.md b/inst/doc/usecase.md new file mode 100644 index 00000000..5a2dc8ba --- /dev/null +++ b/inst/doc/usecase.md @@ -0,0 +1,11 @@ +# Usecase scripts + +In this document, you can link to the example scripts for different usage of the function: + +1. **Climate data assesment and downscaling** + 1. [Bias adjustment for assessment of an extreme event](inst/doc/usecase/UseCase1_WindEvent_March2018.R) + 2. [Precipitation Downscaling with RainFARM RF 4](inst/doc/usecase/UseCase2_PrecipitationDownscaling_RainFARM_RF4.R) + 3. [Precipitation Downscaling with RainFARM RF 100](inst/doc/usecase/UseCase2_PrecipitationDownscaling_RainFARM_RF100.R) + +2. **Examples using 's2dv_cube'** + 1. [Save 's2dv_cube'](inst/doc/usecase/UseCase4_CST_SaveCube.R) \ No newline at end of file diff --git a/inst/doc/usecase/UseCase4_CST_SaveExp.R b/inst/doc/usecase/UseCase4_CST_SaveExp.R new file mode 100644 index 00000000..5e9ded12 --- /dev/null +++ b/inst/doc/usecase/UseCase4_CST_SaveExp.R @@ -0,0 +1,215 @@ +#**************************************************************************** +# Script to test examples of CST_SaveExp +# Eva Rifà Rovira +# 29/11/2024 +#**************************************************************************** + +# Tests 1: Multidimensional array and Dates, without metadata and coordinates +# (1) Minimal use case, without Dates +data <- array(1:5, dim = c(sdate = 5, lon = 4, lat = 4)) +SaveExp(data, ftime_dim = NULL, memb_dim = NULL, dat_dim = NULL, + var_dim = NULL, single_file = TRUE) +SaveExp(data, ftime_dim = NULL, memb_dim = NULL, dat_dim = NULL, + var_dim = NULL, sdate_dim = NULL, single_file = TRUE) # same result + +# (2) Forecast time dimension, without Dates +data <- array(1:5, dim = c(ftime = 5, lon = 4, lat = 4)) +SaveExp(data, ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL, + var_dim = NULL, sdate_dim = NULL, single_file = TRUE) + +# (2) Start date dimension, without Dates +data <- array(1:5, dim = c(sdate = 5, lon = 4, lat = 4)) +SaveExp(data, ftime_dim = NULL, memb_dim = NULL, dat_dim = NULL, + var_dim = NULL, sdate_dim = 'sdate', single_file = TRUE) + +# (3) Only forecast time dimension (no sdate), with Dates +data <- array(1:5, dim = c(ftime = 5, lon = 4, lat = 4)) +dates <- c('20000101', '20010102', '20020103', '20030104', '20040105') +dates <- as.Date(dates, format = "%Y%m%d", tz = "UTC") +dim(dates) <- c(ftime = 5) +SaveExp(data, ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL, + var_dim = NULL, sdate_dim = NULL, Dates = dates, single_file = FALSE) +# For this case we have the same result using: single_file = FALSE /TRUE. 
+
+################################################################################
+
+# Tests 2: Test sample data from CSTools
+
+# (a) Data loaded with Start
+# (1) lonlat_temp_st$exp in a single file with units 'hours since'
+
+data <- lonlat_temp_st$exp
+CST_SaveExp(data = data, ftime_dim = 'ftime',
+            var_dim = 'var', dat_dim = 'dataset', sdate_dim = 'sdate',
+            single_file = TRUE)
+
+# Now we read the output with Start:
+sdate <- as.vector(lonlat_temp_st$exp$coords$sdate)
+path <- paste0(getwd(),"/$var$_", sdate[1], "_", sdate[length(sdate)], ".nc")
+out <- Start(dat = path,
+             var = 'tas',
+             lon = 'all',
+             lat = 'all',
+             ftime = 'all',
+             sdate = 'all',
+             member = 'all',
+             return_vars = list(lon = 'dat',
+                                lat = 'dat',
+                                ftime = NULL,
+                                sdate = NULL),
+             retrieve = TRUE)
+
+attributes(out)$Variables$common$ftime
+
+# (2) lonlat_temp_st$exp in a single file with units of time frequency
+data <- lonlat_temp_st$exp
+CST_SaveExp(data = data, ftime_dim = 'ftime',
+            var_dim = 'var', dat_dim = 'dataset', sdate_dim = 'sdate',
+            single_file = TRUE, units_hours_since = FALSE)
+
+# Now we read the output with Start:
+sdate <- as.vector(lonlat_temp_st$exp$coords$sdate)
+path <- paste0(getwd(),"/$var$_", sdate[1], "_", sdate[length(sdate)], ".nc")
+out <- Start(dat = path,
+             var = 'tas',
+             lon = 'all',
+             lat = 'all',
+             ftime = 'all',
+             sdate = 'all',
+             member = 'all',
+             return_vars = list(
+                                lon = 'dat',
+                                lat = 'dat',
+                                ftime = NULL,
+                                sdate = NULL),
+             retrieve = TRUE)
+
+attributes(out)$Variables$common$ftime
+# [1] "1 months" "2 months" "3 months"
+
+# (3) lonlat_temp_st$exp in separate files with units 'hours since'
+data <- lonlat_temp_st$exp
+CST_SaveExp(data = data, ftime_dim = 'ftime',
+            var_dim = 'var', dat_dim = 'dataset', sdate_dim = 'sdate',
+            single_file = FALSE)
+sdate <- as.vector(lonlat_temp_st$exp$coords$sdate)
+path <- paste0(getwd(),"/dat1/$var$/$var$_$sdate$.nc")
+
+out <- Start(dat = path, var = 'tas',
+             sdate = sdate,
+             lon = 'all',
+             lat = 'all',
+             ftime = 'all',
+             member = 'all',
+             return_vars = list(lon = 'dat',
+                                lat = 'dat',
+                                ftime = NULL),
+             retrieve = TRUE)
+
+# (4) lonlat_prec_st in a single file with units of time frequency
+data <- lonlat_prec_st
+CST_SaveExp(data = data, ftime_dim = 'ftime',
+            var_dim = 'var', dat_dim = 'dataset', sdate_dim = 'sdate',
+            single_file = TRUE, units_hours_since = FALSE)
+
+# Now we read the output with Start:
+sdate <- as.vector(data$coords$sdate)
+path <- paste0(getwd(),"/$var$_", sdate[1], "_", sdate[length(sdate)], ".nc")
+out <- Start(dat = path,
+             var = 'prlr',
+             lon = 'all',
+             lat = 'all',
+             ftime = 'all',
+             sdate = 'all',
+             member = 'all',
+             return_vars = list(
+                                lon = 'dat',
+                                lat = 'dat',
+                                ftime = NULL,
+                                sdate = NULL),
+             retrieve = TRUE)
+
+attributes(out)$Variables$common$ftime
+
+# (b) Data loaded with Load
+data <- lonlat_temp$exp
+data <- lonlat_temp$obs
+data <- lonlat_prec
+CST_SaveExp(data = data, ftime_dim = 'ftime',
+            var_dim = NULL, dat_dim = 'dataset', sdate_dim = 'sdate',
+            single_file = TRUE, units_hours_since = FALSE)
+# Error
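+
+# A minimal sketch of the 'global_attrs' argument (not one of the numbered
+# tests above): it takes a named list whose elements are written as global
+# attributes of the NetCDF file. The attribute names and values used here
+# are arbitrary examples.
+CST_SaveExp(data = lonlat_temp_st$exp, ftime_dim = 'ftime',
+            var_dim = 'var', dat_dim = 'dataset', sdate_dim = 'sdate',
+            single_file = TRUE,
+            global_attrs = list(title = 'CSTools example file',
+                                institution = 'BSC-CNS'))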
+
+
+################################################################################
+# Tests 3: Special cases
+
+# (1) Two variables and two datasets in separate files
+
+repos <- "/esarchive/exp/ecmwf/system5_m1/monthly_mean/$var$_f6h/$var$_$sdate$.nc"
+repos2 <- "/esarchive/exp/ecmwf/system4_m1/monthly_mean/$var$_f6h/$var$_$sdate$.nc"
+
+data3 <- Start(dat = list(list(name = 'system4_m1', path = repos2),
+                          list(name = 'system5_m1', path = repos)),
+               var = c('tas', 'sfcWind'),
+               sdate = c('20160101', '20170101'),
+               ensemble = indices(1),
+               time = indices(1:2),
+               lat = indices(1:10),
+               lon = indices(1:10),
+               synonims = list(lat = c('lat', 'latitude'),
+                               lon = c('lon', 'longitude')),
+               return_vars = list(time = 'sdate',
+                                  longitude = 'dat',
+                                  latitude = 'dat'),
+               metadata_dims = c('dat', 'var'),
+               retrieve = T
+               )
+cube3 <- as.s2dv_cube(data3)
+
+CST_SaveExp(data = cube3, ftime_dim = 'time', var_dim = 'var',
+            memb_dim = 'ensemble', dat_dim = 'dat')
+
+# We read the data back with Start:
+repos <- paste0(getwd(), "/system4_m1/$var$/$var$_$sdate$.nc")
+repos2 <- paste0(getwd(), "/system5_m1/$var$/$var$_$sdate$.nc")
+
+data3out <- Start(dat = list(list(name = 'system4_m1', path = repos),
+                             list(name = 'system5_m1', path = repos2)),
+                  var = c('tas', 'sfcWind'),
+                  sdate = c('20160101', '20170101'),
+                  ensemble = indices(1),
+                  time = indices(1:2),
+                  lat = indices(1:10),
+                  lon = indices(1:10),
+                  synonims = list(lat = c('lat', 'latitude'),
+                                  lon = c('lon', 'longitude')),
+                  return_vars = list(time = 'sdate',
+                                     longitude = 'dat',
+                                     latitude = 'dat'),
+                  metadata_dims = c('dat', 'var'),
+                  retrieve = T)
+
+summary(data3out)
+summary(data3)
+
+dim(data3)
+dim(data3out)
+
+# (2) Two variables and two datasets in the same file
+
+CST_SaveExp(data = cube3, ftime_dim = 'time', var_dim = 'var',
+            memb_dim = 'ensemble', dat_dim = 'dat',
+            single_file = TRUE)
+################################################################################
\ No newline at end of file
diff --git a/man/CST_SaveCube.Rd b/man/CST_SaveCube.Rd
deleted file mode 100644
index cb12f09f..00000000
--- a/man/CST_SaveCube.Rd
+++ /dev/null
@@ -1,114 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/CST_SaveCube.R
-\name{CST_SaveCube}
-\alias{CST_SaveCube}
-\title{Save objects of class 's2dv_cube' to data in NetCDF format}
-\usage{
-CST_SaveCube(
-  data,
-  destination = "./",
-  sdate_dim = "sdate",
-  ftime_dim = "time",
-  dat_dim = "dataset",
-  var_dim = "var",
-  memb_dim = "member",
-  startdates = NULL,
-  drop_dims = NULL,
-  single_file = FALSE,
-  extra_string = NULL
-)
-}
-\arguments{
-\item{data}{An object of class \code{s2dv_cube}.}
-
-\item{destination}{A character string containing the directory name in which
-to save the data. NetCDF file for each starting date are saved into the
-folder tree: \cr
-destination/Dataset/variable/. By default the function
-creates and saves the data into the working directory.}
-
-\item{sdate_dim}{A character string indicating the name of the start date
-dimension. By default, it is set to 'sdate'. It can be NULL if there is no
-start date dimension.}
-
-\item{ftime_dim}{A character string indicating the name of the forecast time
-dimension. By default, it is set to 'time'. It can be NULL if there is no
-forecast time dimension.}
-
-\item{dat_dim}{A character string indicating the name of dataset dimension. 
-By default, it is set to 'dataset'. It can be NULL if there is no dataset -dimension.} - -\item{var_dim}{A character string indicating the name of variable dimension. -By default, it is set to 'var'. It can be NULL if there is no variable -dimension.} - -\item{memb_dim}{A character string indicating the name of the member dimension. -By default, it is set to 'member'. It can be NULL if there is no member -dimension.} - -\item{startdates}{A vector of dates that will be used for the filenames -when saving the data in multiple files. It must be a vector of the same -length as the start date dimension of data. It must be a vector of class -\code{Dates}, \code{'POSIXct'} or character with lenghts between 1 and 10. -If it is NULL, the coordinate corresponding the the start date dimension or -the first Date of each time step will be used as the name of the files. -It is NULL by default.} - -\item{drop_dims}{A vector of character strings indicating the dimension names -of length 1 that need to be dropped in order that they don't appear in the -netCDF file. It is NULL by default (optional).} - -\item{single_file}{A logical value indicating if all object is saved in a -single file (TRUE) or in multiple files (FALSE). When it is FALSE, -the array is separated for Datasets, variable and start date. It is FALSE -by default.} - -\item{extra_string}{A character string to be include as part of the file name, -for instance, to identify member or realization. It would be added to the -file name between underscore characters.} -} -\value{ -Multiple or single NetCDF files containing the data array.\cr -\item{\code{single_file = TRUE}}{ - All data is saved in a single file located in the specified destination - path with the following name: - ___.nc. Multiple - variables are saved separately in the same file. The forecast time units - is extracted from the frequency of the time steps (hours, days, months). - The first value of forecast time is 1. If no frequency is found, the units - will be 'hours since' each start date and the time steps are assumed to be - equally spaced. -} -\item{\code{single_file = FALSE}}{ - The data array is subset and stored into multiple files. Each file - contains the data subset for each start date, variable and dataset. Files - with different variables and Datasets are stored in separated directories - within the following directory tree: destination/Dataset/variable/. - The name of each file will be: - __.nc. -} -} -\description{ -This function allows to divide and save a object of class -'s2dv_cube' into a NetCDF file, allowing to reload the saved data using -\code{Start} function from StartR package. If the original 's2dv_cube' object -has been created from \code{CST_Load()}, then it can be reloaded with -\code{Load()}. 
-} -\examples{ -\dontrun{ -data <- lonlat_temp_st$exp -destination <- "./" -CST_SaveExp(data = data, destination = destination, ftime_dim = 'ftime', - var_dim = 'var', dat_dim = 'dataset') -} - -} -\seealso{ -\code{\link[startR]{Start}}, \code{\link{as.s2dv_cube}} and -\code{\link{s2dv_cube}} -} -\author{ -Perez-Zanon Nuria, \email{nuria.perez@bsc.es} -} diff --git a/man/CST_SaveExp.Rd b/man/CST_SaveExp.Rd index 9352e036..8659b215 100644 --- a/man/CST_SaveExp.Rd +++ b/man/CST_SaveExp.Rd @@ -15,7 +15,9 @@ CST_SaveExp( startdates = NULL, drop_dims = NULL, single_file = FALSE, - extra_string = NULL + extra_string = NULL, + global_attrs = NULL, + units_hours_since = TRUE ) } \arguments{ @@ -67,6 +69,13 @@ by default.} \item{extra_string}{A character string to be include as part of the file name, for instance, to identify member or realization. It would be added to the file name between underscore characters.} + +\item{units_hours_since}{(Optional) A logical value only used for the case +Dates have forecast time and start date dimension and single_file is TRUE. +When it is TRUE, it saves the forecast time with units of 'hours since'; +if it is FALSE, the time units will be a number of time steps with its +corresponding frequency (e.g. n days, n months or n hours). It is TRUE +by default.} } \value{ Multiple or single NetCDF files containing the data array.\cr diff --git a/man/SaveCube.Rd b/man/SaveCube.Rd deleted file mode 100644 index da9a1ed4..00000000 --- a/man/SaveCube.Rd +++ /dev/null @@ -1,143 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/CST_SaveCube.R -\name{SaveCube} -\alias{SaveCube} -\title{Save a multidimensional array with metadata to data in NetCDF format} -\usage{ -SaveCube( - data, - destination = "./", - Dates = NULL, - coords = NULL, - varname = NULL, - metadata = NULL, - Datasets = NULL, - startdates = NULL, - dat_dim = "dataset", - sdate_dim = "sdate", - ftime_dim = "time", - var_dim = "var", - memb_dim = "member", - drop_dims = NULL, - single_file = FALSE, - extra_string = NULL, - global_attrs = NULL -) -} -\arguments{ -\item{data}{A multi-dimensional array with named dimensions.} - -\item{destination}{A character string indicating the path where to store the -NetCDF files.} - -\item{Dates}{A named array of dates with the corresponding sdate and forecast -time dimension. If there is no sdate_dim, you can set it to NULL. -It must have ftime_dim dimension.} - -\item{coords}{A named list with elements of the coordinates corresponding to -the dimensions of the data parameter. The names and length of each element -must correspond to the names of the dimensions. If any coordinate is not -provided, it is set as an index vector with the values from 1 to the length -of the corresponding dimension.} - -\item{varname}{A character string indicating the name of the variable to be -saved.} - -\item{metadata}{A named list where each element is a variable containing the -corresponding information. The information must be contained in a list of -lists for each variable.} - -\item{Datasets}{A vector of character string indicating the names of the -datasets.} - -\item{startdates}{A vector of dates that will be used for the filenames -when saving the data in multiple files. It must be a vector of the same -length as the start date dimension of data. It must be a vector of class -\code{Dates}, \code{'POSIXct'} or character with lenghts between 1 and 10. -If it is NULL, the first Date of each time step will be used as the name of -the files. 
It is NULL by default.} - -\item{dat_dim}{A character string indicating the name of dataset dimension. -By default, it is set to 'dataset'. It can be NULL if there is no dataset -dimension.} - -\item{sdate_dim}{A character string indicating the name of the start date -dimension. By default, it is set to 'sdate'. It can be NULL if there is no -start date dimension.} - -\item{ftime_dim}{A character string indicating the name of the forecast time -dimension. By default, it is set to 'time'. It can be NULL if there is no -forecast time dimension.} - -\item{var_dim}{A character string indicating the name of variable dimension. -By default, it is set to 'var'. It can be NULL if there is no variable -dimension.} - -\item{memb_dim}{A character string indicating the name of the member dimension. -By default, it is set to 'member'. It can be NULL if there is no member -dimension.} - -\item{drop_dims}{A vector of character strings indicating the dimension names -of length 1 that need to be dropped in order that they don't appear in the -netCDF file. It is NULL by default (optional).} - -\item{single_file}{A logical value indicating if all object is saved in a -unique file (TRUE) or in separated directories (FALSE). When it is FALSE, -the array is separated for Datasets, variable and start date. It is FALSE -by default (optional).} - -\item{extra_string}{A character string to be include as part of the file name, -for instance, to identify member or realization. It would be added to the -file name between underscore characters (optional).} - -\item{global_attrs}{A list with elements containing the global attributes -to be saved in the NetCDF.} -} -\value{ -Multiple or single NetCDF files containing the data array.\cr -\item{\code{single_file = TRUE}}{ - All data is saved in a single file located in the specified destination - path with the following name: - ___.nc. Multiple - variables are saved separately in the same file. The forecast time units - is extracted from the frequency of the time steps (hours, days, months). - The first value of forecast time is 1. If no frequency is found, the units - will be 'hours since' each start date and the time steps are assumed to be - equally spaced. -} -\item{\code{single_file = FALSE}}{ - The data array is subset and stored into multiple files. Each file - contains the data subset for each start date, variable and dataset. Files - with different variables and Datasets are stored in separated directories - within the following directory tree: destination/Dataset/variable/. - The name of each file will be: - __.nc. -} -} -\description{ -This function allows to save a data array with metadata into a -NetCDF file, allowing to reload the saved data using \code{Start} function -from StartR package. If the original 's2dv_cube' object has been created from -\code{CST_Load()}, then it can be reloaded with \code{Load()}. 
-} -\examples{ -\dontrun{ -data <- lonlat_temp_st$exp$data -lon <- lonlat_temp_st$exp$coords$lon -lat <- lonlat_temp_st$exp$coords$lat -coords <- list(lon = lon, lat = lat) -Datasets <- lonlat_temp_st$exp$attrs$Datasets -varname <- 'tas' -Dates <- lonlat_temp_st$exp$attrs$Dates -destination = './' -metadata <- lonlat_temp_st$exp$attrs$Variable$metadata -SaveExp(data = data, destination = destination, coords = coords, - Datasets = Datasets, varname = varname, Dates = Dates, - metadata = metadata, single_file = TRUE, ftime_dim = 'ftime', - var_dim = 'var', dat_dim = 'dataset') -} - -} -\author{ -Perez-Zanon Nuria, \email{nuria.perez@bsc.es} -} diff --git a/man/SaveExp.Rd b/man/SaveExp.Rd index c690d97e..2ff92489 100644 --- a/man/SaveExp.Rd +++ b/man/SaveExp.Rd @@ -20,7 +20,9 @@ SaveExp( memb_dim = "member", drop_dims = NULL, single_file = FALSE, - extra_string = NULL + extra_string = NULL, + global_attrs = NULL, + units_hours_since = TRUE ) } \arguments{ @@ -30,7 +32,8 @@ SaveExp( NetCDF files.} \item{Dates}{A named array of dates with the corresponding sdate and forecast -time dimension.} +time dimension. If there is no sdate_dim, you can set it to NULL. +It must have ftime_dim dimension.} \item{coords}{A named list with elements of the coordinates corresponding to the dimensions of the data parameter. The names and length of each element @@ -87,6 +90,9 @@ by default (optional).} \item{extra_string}{A character string to be include as part of the file name, for instance, to identify member or realization. It would be added to the file name between underscore characters (optional).} + +\item{global_attrs}{A list with elements containing the global attributes +to be saved in the NetCDF.} } \value{ Multiple or single NetCDF files containing the data array.\cr -- GitLab From bd1408913c2c0d103b853414f3441203718cc673 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Wed, 29 Nov 2023 16:19:15 +0100 Subject: [PATCH 19/66] Add dependency easyNCDF --- DESCRIPTION | 3 ++- inst/doc/usecase.md | 2 +- 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/DESCRIPTION b/DESCRIPTION index 4e41770d..362663ff 100644 --- a/DESCRIPTION +++ b/DESCRIPTION @@ -79,7 +79,8 @@ Imports: utils, verification, lubridate, - scales + scales, + easyNCDF Suggests: zeallot, testthat, diff --git a/inst/doc/usecase.md b/inst/doc/usecase.md index 5a2dc8ba..1fc17ae4 100644 --- a/inst/doc/usecase.md +++ b/inst/doc/usecase.md @@ -8,4 +8,4 @@ In this document, you can link to the example scripts for different usage of the 3. [Precipitation Downscaling with RainFARM RF 100](inst/doc/usecase/UseCase2_PrecipitationDownscaling_RainFARM_RF100.R) 2. **Examples using 's2dv_cube'** - 1. [Save 's2dv_cube'](inst/doc/usecase/UseCase4_CST_SaveCube.R) \ No newline at end of file + 1. 
[Save 's2dv_cube'](inst/doc/usecase/UseCase4_SaveExp.R) \ No newline at end of file -- GitLab From 417732158eba3c8968e45bc91b39df59021fef70 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Wed, 29 Nov 2023 16:29:24 +0100 Subject: [PATCH 20/66] Delete file --- tests/testthat/test-CST_SaveCube.R | 324 ----------------------------- 1 file changed, 324 deletions(-) delete mode 100644 tests/testthat/test-CST_SaveCube.R diff --git a/tests/testthat/test-CST_SaveCube.R b/tests/testthat/test-CST_SaveCube.R deleted file mode 100644 index 0951c392..00000000 --- a/tests/testthat/test-CST_SaveCube.R +++ /dev/null @@ -1,324 +0,0 @@ -############################################## - -# cube0 -cube0 <- array(1:5, dim = c(sdate = 5, lon = 4, lat = 4, ftime = 1)) -class(cube0) <- 's2dv_cube' - -# cube1 -cube1 <- NULL -cube1$data <- array(1:5, dim = c(sdate = 5, lon = 4, lat = 4, ftime = 1)) -coords2 <- list(sdate = c('20000101', '20010102', '20020103', '20030104', '20040105'), - var = 'tas', - lon = 1.:4., - lat = 1.:4.) -cube1$coords <- coords2 -dates2 <- c('20000101', '20010102', '20020103', '20030104', '20040105') -dates2 <- as.Date(dates2, format = "%Y%m%d", tz = "UTC") -dim(dates2) <- c(sdate = 5, ftime = 1) -cube1$attrs$Dates <- dates2 -class(cube1) <- 's2dv_cube' - -# cube2 -cube2 <- cube1 -cube2$data <- array(1:5, dim = c(sdate = 5, lon = 4, lat = 4, ftime = 1, - test = 2, test2 = 3)) -dim(cube2$data) <- c(sdate = 5, lon = 4, lat = 4, ftime = 1, member = 1, - ensemble = 1, test = 2, test2 = 3) - -# cube3 -cube3 <- cube1 - -# dat0 -dates0 <- as.Date('2022-02-01', format = "%Y-%m-%d") -dim(dates0) <- c(sdate = 1) - -# dat1 -dat1 <- array(1, dim = c(test = 1)) - -# dat2 -dat2 <- array(1:5, dim = c(sdate = 5, lon = 4, lat = 4, ftime = 1)) -coords2 <- list(sdate = c('20000101', '20010102', '20020103', '20030104', '20040105'), - var = 'tas', - lon = 1.:4., - lat = 1.:4.) -dates2 <- c('20000101', '20010102', '20020103', '20030104', '20040105') -dates2 <- as.Date(dates2, format = "%Y%m%d", tz = "UTC") -dim(dates2) <- c(sdate = 5, ftime = 1) - -# dat3 (without sdate dim) -dat3 <- array(1:5, dim = c(lon = 4, lat = 4, ftime = 2)) -coords3 <- list(sdate = c('20000101', '20010102'), - var = 'tas', - lon = 1.:4., - lat = 1.:4.) -dates3 <- c('20000101', '20010102') -dates3 <- as.Date(dates3, format = "%Y%m%d", tz = "UTC") -dim(dates3) <- c(ftime = 2) - -# dat4 (without ftime dim) -dat4 <- array(1:5, dim = c(sdate = 2, lon = 4, lat = 4)) -coords4 <- list(sdate = c('20000101', '20010102'), - var = 'tas', - lon = 1.:4., - lat = 1.:4.) -dates4 <- c('20000101', '20010102') -dates4 <- as.Date(dates4, format = "%Y%m%d", tz = "UTC") -dim(dates4) <- c(sdate = 2) - -# dates5 (Dates with extra dimensions) -dates5 <- c('20000101', '20010102', '20010102', '20010102') -dates5 <- as.Date(dates5, format = "%Y%m%d", tz = "UTC") -dim(dates5) <- c(ftime = 2, test = 1, test2 = 2) - -############################################## - -test_that("1. 
Input checks: CST_SaveCube", { - # s2dv_cube - expect_error( - CST_SaveCube(data = 1), - paste0("Parameter 'data' must be of the class 's2dv_cube'.") - ) - # structure - expect_error( - CST_SaveCube(data = cube0), - paste0("Parameter 'data' must have at least 'data' and 'attrs' elements ", - "within the 's2dv_cube' structure.") - ) - cube0 <- list(data = cube0, attrs = 1) - class(cube0) <- 's2dv_cube' - expect_error( - CST_SaveCube(data = cube0), - paste0("Level 'attrs' must be a list with at least 'Dates' element.") - ) - cube0$attrs <- NULL - cube0$attrs$Dates <- dates2 - expect_warning( - CST_SaveCube(data = cube0, sdate_dim = c('sdate', 'sweek'), - ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL, - var_dim = NULL, single_file = FALSE), - paste0("Element 'coords' not found. No coordinates will be used.") - ) - - # sdate_dim - suppressWarnings( - expect_error( - CST_SaveCube(data = cube1, sdate_dim = 1), - paste0("Parameter 'sdate_dim' must be a character string.") - ) - ) - expect_warning( - CST_SaveCube(data = cube1, sdate_dim = c('sdate', 'sweek'), - ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL, - var_dim = NULL, extra_string = 'test'), - paste0("Parameter 'sdate_dim' has length greater than 1 and ", - "only the first element will be used.") - ) - suppressWarnings( - expect_error( - CST_SaveCube(data = cube1, sdate_dim = 'a', ftime_dim = 'ftime'), - paste0("Parameter 'sdate_dim' is not found in 'data' dimension.") - ) - ) - # startdates - expect_warning( - CST_SaveCube(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL, startdates = 1), - "Parameter 'startdates' is not a character string, it will not be used." - ) - expect_warning( - CST_SaveCube(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL, startdates = '20100101'), - paste0("Parameter 'startdates' doesn't have the same length ", - "as dimension '", 'sdate',"', it will not be used.") - ) - # metadata - expect_warning( - CST_SaveCube(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL), - paste0("No metadata found in element Variable from attrs.") - ) - cube1$attrs$Variable$metadata <- 'metadata' - expect_error( - CST_SaveCube(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL), - paste0("Element metadata from Variable element in attrs must be a list.") - ) - cube1$attrs$Variable$metadata <- list(test = 'var') - expect_warning( - CST_SaveCube(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL), - paste0("Metadata is not found for any coordinate.") - ) - cube1$attrs$Variable$metadata <- list(var = 'var') - expect_warning( - CST_SaveCube(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL), - paste0("Metadata is not found for any variable.") - ) - # memb_dim - suppressWarnings( - expect_error( - CST_SaveCube(data = cube1, memb_dim = 1, ftime_dim = 'ftime'), - paste0("Parameter 'memb_dim' must be a character string.") - ) - ) - suppressWarnings( - expect_error( - CST_SaveCube(data = cube1, memb_dim = 'member', ftime_dim = 'ftime'), - paste0("Parameter 'memb_dim' is not found in 'data' dimension. Set it ", - "as NULL if there is no member dimension.") - ) - ) -}) - -############################################## - -test_that("1. Input checks", { - # data - expect_error( - SaveCube(data = NULL), - "Parameter 'data' cannot be NULL." 
- ) - expect_error( - SaveCube(data = 1:10), - "Parameter 'data' must be an array with named dimensions." - ) - # destination - expect_error( - SaveCube(data = array(1, dim = c(a = 1)), destination = NULL), - paste0("Parameter 'destination' must be a character string of one element ", - "indicating the name of the file (including the folder if needed) ", - "where the data will be saved."), - fixed = TRUE - ) - # Dates - expect_error( - SaveCube(data = array(1, dim = c(a = 1)), Dates = 'a'), - paste0("Parameter 'Dates' must be of 'POSIXct' or 'Dates' class.") - ) - expect_error( - SaveCube(data = array(1, dim = c(a = 1)), - Dates = as.Date('2022-02-01', format = "%Y-%m-%d")), - paste0("Parameter 'Dates' must have dimension names.") - ) - # drop_dims - expect_warning( - SaveCube(data = dat2, coords = coords2, - metadata = list(tas = list(level = '2m')), - Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL, drop_dims = 1), - paste0("Parameter 'drop_dims' must be character string containing ", - "the data dimension names to be dropped. It will not be used.") - ) - expect_warning( - SaveCube(data = dat2, coords = coords2, - metadata = list(tas = list(level = '2m')), - Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL, drop_dims = 'time'), - paste0("Parameter 'drop_dims' must be character string containing ", - "the data dimension names to be dropped. It will not be used.") - ) - expect_warning( - SaveCube(data = dat2, coords = coords2, - metadata = list(tas = list(level = '2m')), - Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL, drop_dims = 'sdate'), - paste0("Parameter 'drop_dims' can only contain dimension names ", - "that are of length 1. It will not be used.") - ) - expect_warning( - SaveCube(data = dat2, coords = coords2, - metadata = list(tas = list(level = '2m')), - Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL, drop_dims = 'ftime'), - paste0("Parameter 'drop_dims' contains dimensions used in the ", - "computation. It will not be used.") - ) - # varname - suppressWarnings( - expect_error( - SaveCube(data = dat2, coords = coords2, varname = 1, - metadata = list(tas = list(level = '2m')), - Dates = dates2), - "Parameter 'varname' must be a character." - ) - ) - # varname, metadata, spatial coords, unknown dim - expect_error( - SaveCube(data = dat1, varname = 1, ftime_dim = NULL, sdate_dim = NULL, - memb_dim = NULL, dat_dim = NULL, var_dim = NULL), - paste0("Parameter 'varname' must be a character string with the ", - "variable names.") - ) - # ftime_dim - expect_error( - SaveCube(data = dat4, coords = coords4, - metadata = list(tas = list(level = '2m')), - Dates = dates4, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL), - paste0("Parameter 'ftime_dim' is not found in 'data' dimension.") - ) - # Dates dimension check - expect_error( - SaveCube(data = dat4, coords = coords4, - metadata = list(tas = list(level = '2m')), - Dates = dates4, ftime_dim = NULL, memb_dim = NULL, - dat_dim = NULL, var_dim = NULL), - paste0("Parameter 'Dates' must have 'ftime_dim'.") - ) - expect_warning( - SaveCube(data = dat4, coords = coords4, - metadata = list(tas = list(level = '2m')), - Dates = NULL, ftime_dim = NULL, memb_dim = NULL, - dat_dim = NULL, var_dim = NULL), - paste0("Dates must be provided if 'data' must be saved in separated files. 
", - "All data will be saved in a single file.") - ) - # Without ftime and sdate - expect_error( - SaveCube(data = dat3, coords = coords3, - metadata = list(tas = list(level = '2m')), - Dates = dates5, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL, sdate_dim = NULL), - paste0("Parameter 'Dates' must have only 'sdate_dim' and 'ftime_dim' dimensions.") - ) - expect_warning( - SaveCube(data = dat2, coords = coords2, - metadata = list(tas = list(level = '2m')), - startdates = c(paste(1:11, collapse = '')), - Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL, sdate_dim = 'sdate'), - paste0("Parameter 'startdates' should be a character string containing ", - "the start dates in the format 'yyyy-mm-dd', 'yyyymmdd', 'yyyymm', ", - "'POSIXct' or 'Dates' class. Files will be named with Dates instead.") - ) - expect_warning( - SaveCube(data = dat2, coords = coords2, - metadata = list(tas = list(level = '2m')), - Dates = NULL, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL, sdate_dim = 'sdate'), - paste0("Dates must be provided if 'data' must be saved in separated files. ", - "All data will be saved in a single file.") - ) - # (dat3) Without sdate_dim - expect_warning( - SaveCube(data = dat3, coords = coords3, - metadata = list(tas = list(level = '2m')), - Dates = NULL, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL, sdate_dim = NULL, - extra_string = 'nosdate3.nc', single_file = FALSE), - paste0("Dates must be provided if 'data' must be saved in separated files. ", - "All data will be saved in a single file.") - ) - # (dat4) Without ftime_dim - expect_error( - SaveCube(data = dat4, coords = coords4, - metadata = list(tas = list(level = '2m')), - Dates = dates4, ftime_dim = NULL, memb_dim = NULL, - dat_dim = NULL, var_dim = NULL, sdate_dim = 'sdate', - single_file = TRUE), - paste0("Parameter 'Dates' must have 'ftime_dim'.") - ) -}) - -############################################## -- GitLab From 704cedc8288b61c6c86e960762d27fec08eecb3f Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Thu, 30 Nov 2023 10:45:50 +0100 Subject: [PATCH 21/66] Update documentation; correct unit test --- R/CST_SaveExp.R | 258 +++++++++++++----------- inst/doc/usecase/UseCase4_CST_SaveExp.R | 115 ++++++++++- man/CST_SaveExp.Rd | 104 +++++----- man/SaveExp.Rd | 93 +++++---- tests/testthat/test-CST_SaveExp.R | 8 +- 5 files changed, 369 insertions(+), 209 deletions(-) diff --git a/R/CST_SaveExp.R b/R/CST_SaveExp.R index ebe25941..26280fdd 100644 --- a/R/CST_SaveExp.R +++ b/R/CST_SaveExp.R @@ -4,93 +4,104 @@ #' #'@description This function allows to divide and save a object of class #''s2dv_cube' into a NetCDF file, allowing to reload the saved data using -#'\code{Start} function from StartR package. If the original 's2dv_cube' object -#'has been created from \code{CST_Load()}, then it can be reloaded with -#'\code{Load()}. +#'\code{CST_Start} or \code{CST_Load} functions. It also allows to save any +#''s2dv_cube' object that follows the NetCDF attributes conventions. #' #'@param data An object of class \code{s2dv_cube}. #'@param destination A character string containing the directory name in which #' to save the data. NetCDF file for each starting date are saved into the -#' folder tree: \cr -#' destination/Dataset/variable/. By default the function -#' creates and saves the data into the working directory. +#' folder tree: 'destination/Dataset/variable/'. 
By default the function
+#' saves the data into the working directory.
 #'@param sdate_dim A character string indicating the name of the start date
 #' dimension. By default, it is set to 'sdate'. It can be NULL if there is no
 #' start date dimension.
 #'@param ftime_dim A character string indicating the name of the forecast time
-#' dimension. By default, it is set to 'time'. It can be NULL if there is no
-#' forecast time dimension.
+#' dimension. If 'Dates' are used, it can't be NULL. If there is no forecast
+#' time dimension, 'Dates' will be set to NULL and will not be used. By
+#' default, it is set to 'time'.
 #'@param dat_dim A character string indicating the name of dataset dimension.
-#' By default, it is set to 'dataset'. It can be NULL if there is no dataset
-#' dimension.
+#' It can be NULL if there is no dataset dimension. By default, it is set to
+#' 'dataset'.
 #'@param var_dim A character string indicating the name of variable dimension.
-#' By default, it is set to 'var'. It can be NULL if there is no variable
-#' dimension.
+#' It can be NULL if there is no variable dimension. By default, it is set to
+#' 'var'.
 #'@param memb_dim A character string indicating the name of the member dimension.
-#' By default, it is set to 'member'. It can be NULL if there is no member
-#' dimension.
+#' It can be NULL if there is no member dimension. By default, it is set to
+#' 'member'.
 #'@param startdates A vector of dates that will be used for the filenames
-#' when saving the data in multiple files. It must be a vector of the same
-#' length as the start date dimension of data. It must be a vector of class
-#' \code{Dates}, \code{'POSIXct'} or character with lenghts between 1 and 10.
-#' If it is NULL, the coordinate corresponding the the start date dimension or
-#' the first Date of each time step will be used as the name of the files.
-#' It is NULL by default.
-#'@param drop_dims A vector of character strings indicating the dimension names
-#' of length 1 that need to be dropped in order that they don't appear in the
-#' netCDF file. It is NULL by default (optional).
+#' when saving the data in multiple files (single_file = FALSE). It must be a
+#' vector of the same length as the start date dimension of data. It must be a
+#' vector of class \code{Dates}, \code{'POSIXct'} or character with lengths
+#' between 1 and 10. If it is NULL, the coordinate corresponding to the start
+#' date dimension or the first Date of each time step will be used as the name
+#' of the files. It is NULL by default.
 #'@param single_file A logical value indicating if the whole object is saved in a
 #' single file (TRUE) or in multiple files (FALSE). When it is FALSE,
-#' the array is separated for Datasets, variable and start date. It is FALSE
-#' by default.
-#'@param extra_string A character string to be include as part of the file name,
-#' for instance, to identify member or realization. It would be added to the
-#' file name between underscore characters.
+#' the array is separated for datasets, variable and start date. When there are
+#' no specified time dimensions, the data will be saved in a single file by
+#' default. The output file name when 'single_file' is TRUE is a character
+#' string containing: '<var_name>_<first_sdate>_<last_sdate>.nc'; when it is
+#' FALSE, it is '<var_name>_<sdate>.nc'. It is FALSE by default.
+#'@param drop_dims (Optional) A vector of character strings indicating the
+#' dimension names of length 1 that need to be dropped in order that they don't
+#' appear in the netCDF file. Only dimensions that are not used in the
+#' computation are allowed to be dropped. The dimensions used in the computation
+#' are the ones specified in: sdate_dim, ftime_dim, dat_dim, var_dim and
+#' memb_dim. It is NULL by default.
+#'@param extra_string (Optional) A character string to be included as part of
+#' the file name, for instance, to identify member or realization. When
+#' single_file is TRUE, the 'extra_string' will substitute all the default
+#' file name; when single_file is FALSE, the 'extra_string' will be added
+#' in the file name as: '<var_name>_<extra_string>_<sdate>.nc'. It is NULL by
+#' default.
 #'@param units_hours_since (Optional) A logical value only used for the case
 #' Dates have forecast time and start date dimension and single_file is TRUE.
 #' When it is TRUE, it saves the forecast time with units of 'hours since';
 #' if it is FALSE, the time units will be a number of time steps with its
 #' corresponding frequency (e.g. n days, n months or n hours). It is TRUE
 #' by default.
+#'@param global_attrs (Optional) A list with elements containing the global
+#' attributes to be saved in the NetCDF.
 #'
 #'@return Multiple or single NetCDF files containing the data array.\cr
-#'\item{\code{single_file = TRUE}}{
+#'\item{\code{single_file is TRUE}}{
 #' All data is saved in a single file located in the specified destination
-#' path with the following name:
-#' ___.nc. Multiple
-#' variables are saved separately in the same file. The forecast time units
-#' is extracted from the frequency of the time steps (hours, days, months).
-#' The first value of forecast time is 1. If no frequency is found, the units
-#' will be 'hours since' each start date and the time steps are assumed to be
-#' equally spaced.
+#' path with the following name (by default):
+#' '<var_name>_<first_sdate>_<last_sdate>.nc'. Multiple variables
+#' are saved separately in the same file. The forecast time units
+#' are calculated from each start date (if sdate_dim is not NULL) or from
+#' the time step. If 'units_hours_since' is TRUE, the forecast time units
+#' will be 'hours since <start date>'. If 'units_hours_since' is FALSE,
+#' the forecast time units are extracted from the frequency of the time steps
+#' (hours, days, months); if no frequency is found, the units will be 'hours
+#' since'. When the time units are 'hours since', the time steps are assumed
+#' to be equally spaced.
 #'}
-#'\item{\code{single_file = FALSE}}{
+#'\item{\code{single_file is FALSE}}{
 #' The data array is subset and stored into multiple files. Each file
 #' contains the data subset for each start date, variable and dataset. Files
-#' with different variables and Datasets are stored in separated directories
-#' within the following directory tree: destination/Dataset/variable/.
-#' The name of each file will be:
-#' __.nc.
+#' with different variables and datasets are stored in separate directories
+#' within the following directory tree: 'destination/Dataset/variable/'.
+#' The name of each file will be by default: '<var_name>_<sdate>.nc'.
+#' The forecast time units are calculated from each start date (if sdate_dim
+#' is not NULL) or from the time step. The forecast time units will be 'hours
+#' since <start date>'.
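# A hedged usage sketch of the two saving modes documented above, with the
# CSTools sample object 'lonlat_temp_st' and tempdir() as a stand-in path:
library(CSTools)
# Multiple files: one '<var_name>_<sdate>.nc' per dataset, variable and
# start date, under <destination>/<Dataset>/<variable>/
CST_SaveExp(data = lonlat_temp_st$exp, destination = tempdir(),
            ftime_dim = 'ftime', var_dim = 'var', dat_dim = 'dataset',
            sdate_dim = 'sdate', single_file = FALSE)
# Single file: '<var_name>_<first_sdate>_<last_sdate>.nc' in 'destination'
CST_SaveExp(data = lonlat_temp_st$exp, destination = tempdir(),
            ftime_dim = 'ftime', var_dim = 'var', dat_dim = 'dataset',
            sdate_dim = 'sdate', single_file = TRUE)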
#'} #' #'@seealso \code{\link[startR]{Start}}, \code{\link{as.s2dv_cube}} and #'\code{\link{s2dv_cube}} #' #'@examples -#'\dontrun{ #'data <- lonlat_temp_st$exp -#'destination <- "./" -#'CST_SaveExp(data = data, destination = destination, ftime_dim = 'ftime', -#' var_dim = 'var', dat_dim = 'dataset') -#'} +#'CST_SaveExp(data = data, ftime_dim = 'ftime', var_dim = 'var', +#' dat_dim = 'dataset', sdate_dim = 'sdate') #' #'@export CST_SaveExp <- function(data, destination = "./", sdate_dim = 'sdate', - ftime_dim = 'time', dat_dim = 'dataset', - var_dim = 'var', memb_dim = 'member', - startdates = NULL, drop_dims = NULL, - single_file = FALSE, extra_string = NULL, - global_attrs = NULL, units_hours_since = TRUE) { + ftime_dim = 'time', dat_dim = 'dataset', + var_dim = 'var', memb_dim = 'member', + startdates = NULL, drop_dims = NULL, + single_file = FALSE, extra_string = NULL, + global_attrs = NULL, units_hours_since = TRUE) { # Check 's2dv_cube' if (!inherits(data, 's2dv_cube')) { stop("Parameter 'data' must be of the class 's2dv_cube'.") @@ -107,9 +118,7 @@ CST_SaveExp <- function(data, destination = "./", sdate_dim = 'sdate', warning("Element 'coords' not found. No coordinates will be used.") } # metadata - if (is.null(data$attrs$Variable$metadata)) { - warning("No metadata found in element Variable from attrs.") - } else { + if (!is.null(data$attrs$Variable$metadata)) { if (!inherits(data$attrs$Variable$metadata, 'list')) { stop("Element metadata from Variable element in attrs must be a list.") } @@ -144,6 +153,10 @@ CST_SaveExp <- function(data, destination = "./", sdate_dim = 'sdate', dim(data$attrs$Dates) <- c(sdate = 1, dim(data$attrs$Dates)) data$coords[[sdate_dim]] <- data$attrs$Dates[1] } + # ftime_dim + if (is.null(ftime_dim)) { + data$attrs$Dates <- NULL + } # startdates if (is.null(startdates)) { if (is.character(data$coords[[sdate_dim]])) { @@ -162,21 +175,21 @@ CST_SaveExp <- function(data, destination = "./", sdate_dim = 'sdate', } SaveExp(data = data$data, - destination = destination, - Dates = data$attrs$Dates, - coords = data$coords, - varname = data$attrs$Variable$varName, - metadata = data$attrs$Variable$metadata, - Datasets = data$attrs$Datasets, - startdates = startdates, - dat_dim = dat_dim, sdate_dim = sdate_dim, - ftime_dim = ftime_dim, var_dim = var_dim, - memb_dim = memb_dim, - drop_dims = drop_dims, - extra_string = extra_string, - single_file = single_file, - global_attrs = global_attrs, - units_hours_since = units_hours_since) + destination = destination, + Dates = data$attrs$Dates, + coords = data$coords, + varname = data$attrs$Variable$varName, + metadata = data$attrs$Variable$metadata, + Datasets = data$attrs$Datasets, + startdates = startdates, + dat_dim = dat_dim, sdate_dim = sdate_dim, + ftime_dim = ftime_dim, var_dim = var_dim, + memb_dim = memb_dim, + drop_dims = drop_dims, + extra_string = extra_string, + single_file = single_file, + global_attrs = global_attrs, + units_hours_since = units_hours_since) } #'Save a multidimensional array with metadata to data in NetCDF format #'@description This function allows to save a data array with metadata into a @@ -204,12 +217,6 @@ CST_SaveExp <- function(data, destination = "./", sdate_dim = 'sdate', #' lists for each variable. #'@param Datasets A vector of character string indicating the names of the #' datasets. -#'@param startdates A vector of dates that will be used for the filenames -#' when saving the data in multiple files. It must be a vector of the same -#' length as the start date dimension of data. 
It must be a vector of class
-#' \code{Dates}, \code{'POSIXct'} or character with lenghts between 1 and 10.
-#' If it is NULL, the first Date of each time step will be used as the name of
-#' the files. It is NULL by default.
 #'@param sdate_dim A character string indicating the name of the start date
 #' dimension. By default, it is set to 'sdate'. It can be NULL if there is no
 #' start date dimension.
 #'@param ftime_dim A character string indicating the name of the forecast time
 #' dimension. By default, it is set to 'time'. It can be NULL if there is no
 #' forecast time dimension.
 #'@param dat_dim A character string indicating the name of dataset dimension.
 #' By default, it is set to 'dataset'. It can be NULL if there is no dataset
 #' dimension.
 #'@param var_dim A character string indicating the name of variable dimension.
 #' By default, it is set to 'var'. It can be NULL if there is no variable
 #' dimension.
 #'@param memb_dim A character string indicating the name of the member dimension.
 #' By default, it is set to 'member'. It can be NULL if there is no member
 #' dimension.
-#'@param drop_dims A vector of character strings indicating the dimension names
-#' of length 1 that need to be dropped in order that they don't appear in the
-#' netCDF file. It is NULL by default (optional).
+#'@param startdates A vector of dates that will be used for the filenames
+#' when saving the data in multiple files (single_file = FALSE). It must be a
+#' vector of the same length as the start date dimension of data. It must be a
+#' vector of class \code{Dates}, \code{'POSIXct'} or character with lengths
+#' between 1 and 10. If it is NULL, the coordinate corresponding to the start
+#' date dimension or the first Date of each time step will be used as the name
+#' of the files. It is NULL by default.
 #'@param single_file A logical value indicating if the whole object is saved in a
-#' unique file (TRUE) or in separated directories (FALSE). When it is FALSE,
-#' the array is separated for Datasets, variable and start date. It is FALSE
-#' by default (optional).
-#'@param extra_string A character string to be include as part of the file name,
-#' for instance, to identify member or realization. It would be added to the
-#' file name between underscore characters (optional).
-#'@param global_attrs A list with elements containing the global attributes
-#' to be saved in the NetCDF.
+#' single file (TRUE) or in multiple files (FALSE). When it is FALSE,
+#' the array is separated for datasets, variable and start date. When there are
+#' no specified time dimensions, the data will be saved in a single file by
+#' default. The output file name when 'single_file' is TRUE is a character
+#' string containing: <var_name>_<first_sdate>_<last_sdate>.nc; when it is
+#' FALSE, it is <var_name>_<sdate>.nc. It is FALSE by default.
+#'@param drop_dims (Optional) A vector of character strings indicating the
+#' dimension names of length 1 that need to be dropped in order that they don't
+#' appear in the netCDF file. Only dimensions that are not used in the
+#' computation are allowed to be dropped. The dimensions used in the computation
+#' are the ones specified in: sdate_dim, ftime_dim, dat_dim, var_dim and
+#' memb_dim. It is NULL by default.
+#'@param extra_string (Optional) A character string to be included as part of
+#' the file name, for instance, to identify member or realization. When
+#' single_file is TRUE, the 'extra_string' will substitute all the default
+#' file name; when single_file is FALSE, the 'extra_string' will be added
+#' in the file name as: <var_name>_<extra_string>_<sdate>.nc. It is NULL by
+#' default.
+#'@param units_hours_since (Optional) A logical value only used for the case
+#' Dates have forecast time and start date dimension and single_file is TRUE.
+#' When it is TRUE, it saves the forecast time with units of 'hours since';
+#' if it is FALSE, the time units will be a number of time steps with its
+#' corresponding frequency (e.g. n days, n months or n hours). It is TRUE
+#' by default.
+#'@param global_attrs (Optional) A list with elements containing the global
+#' attributes to be saved in the NetCDF.
 #'
 #'@return Multiple or single NetCDF files containing the data array.\cr
-#'\item{\code{single_file = TRUE}}{
+#'\item{\code{single_file is TRUE}}{
 #' All data is saved in a single file located in the specified destination
-#' path with the following name:
-#' ___.nc. Multiple
-#' variables are saved separately in the same file. The forecast time units
-#' is extracted from the frequency of the time steps (hours, days, months).
-#' The first value of forecast time is 1. If no frequency is found, the units
-#' will be 'hours since' each start date and the time steps are assumed to be
-#' equally spaced.
+#' path with the following name (by default):
+#' '<var_name>_<first_sdate>_<last_sdate>.nc'. Multiple variables
+#' are saved separately in the same file. The forecast time units
+#' are calculated from each start date (if sdate_dim is not NULL) or from
+#' the time step. If 'units_hours_since' is TRUE, the forecast time units
+#' will be 'hours since <start date>'. If 'units_hours_since' is FALSE,
+#' the forecast time units are extracted from the frequency of the time steps
+#' (hours, days, months); if no frequency is found, the units will be 'hours
+#' since'. When the time units are 'hours since', the time steps are assumed
+#' to be equally spaced.
 #'}
-#'\item{\code{single_file = FALSE}}{
+#'\item{\code{single_file is FALSE}}{
 #' The data array is subset and stored into multiple files. Each file
 #' contains the data subset for each start date, variable and dataset. Files
-#' with different variables and Datasets are stored in separated directories
-#' within the following directory tree: destination/Dataset/variable/.
-#' The name of each file will be:
-#' __.nc.
+#' with different variables and datasets are stored in separate directories
+#' within the following directory tree: 'destination/Dataset/variable/'.
+#' The name of each file will be by default: '<var_name>_<sdate>.nc'.
+#' The forecast time units are calculated from each start date (if sdate_dim
+#' is not NULL) or from the time step. The forecast time units will be 'hours
+#' since <start date>'.
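# A self-contained sketch of the array interface under the behaviour described
# above; all dimension names, sizes and dates are made-up placeholders:
library(CSTools)
dat <- array(rnorm(96), dim = c(sdate = 2, ftime = 3, lat = 4, lon = 4))
dates <- as.POSIXct(c(paste0('2020', c('0101', '0201', '0301')),
                      paste0('2021', c('0101', '0201', '0301'))),
                    format = '%Y%m%d', tz = 'UTC')
dim(dates) <- c(ftime = 3, sdate = 2)
SaveExp(data = dat, destination = tempdir(), Dates = dates, varname = 'tas',
        sdate_dim = 'sdate', ftime_dim = 'ftime', memb_dim = NULL,
        dat_dim = NULL, var_dim = NULL)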
#'} #' #'@examples -#'\dontrun{ #'data <- lonlat_temp_st$exp$data #'lon <- lonlat_temp_st$exp$coords$lon #'lat <- lonlat_temp_st$exp$coords$lat @@ -267,13 +299,10 @@ CST_SaveExp <- function(data, destination = "./", sdate_dim = 'sdate', #'Datasets <- lonlat_temp_st$exp$attrs$Datasets #'varname <- 'tas' #'Dates <- lonlat_temp_st$exp$attrs$Dates -#'destination = './' #'metadata <- lonlat_temp_st$exp$attrs$Variable$metadata -#'SaveExp(data = data, destination = destination, coords = coords, -#' Datasets = Datasets, varname = varname, Dates = Dates, -#' metadata = metadata, single_file = TRUE, ftime_dim = 'ftime', -#' var_dim = 'var', dat_dim = 'dataset') -#'} +#'SaveExp(data = data, coords = coords, Datasets = Datasets, varname = varname, +#' Dates = Dates, metadata = metadata, single_file = TRUE, +#' ftime_dim = 'ftime', var_dim = 'var', dat_dim = 'dataset') #' #'@import easyNCDF #'@importFrom s2dv Reorder @@ -281,11 +310,11 @@ CST_SaveExp <- function(data, destination = "./", sdate_dim = 'sdate', #'@importFrom ClimProjDiags Subset #'@export SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, - varname = NULL, metadata = NULL, Datasets = NULL, - startdates = NULL, dat_dim = 'dataset', sdate_dim = 'sdate', - ftime_dim = 'time', var_dim = 'var', memb_dim = 'member', - drop_dims = NULL, single_file = FALSE, extra_string = NULL, - global_attrs = NULL, units_hours_since = TRUE) { + varname = NULL, metadata = NULL, Datasets = NULL, + startdates = NULL, dat_dim = 'dataset', sdate_dim = 'sdate', + ftime_dim = 'time', var_dim = 'var', memb_dim = 'member', + drop_dims = NULL, single_file = FALSE, extra_string = NULL, + global_attrs = NULL, units_hours_since = TRUE) { ## Initial checks # data if (is.null(data)) { @@ -569,10 +598,11 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, names(attributes(coords[[i_coord]])$variables) <- i_coord } else if (!is.null(attributes(metadata[[i_coord]]))) { # from Load - attr(coords[[i_coord]], 'variables') <- list(attributes(metadata[[i_coord]])) + attrs <- attributes(metadata[[i_coord]]) + # We remove because some attributes can't be saved + attrs <- NULL + attr(coords[[i_coord]], 'variables') <- list(attrs) names(attributes(coords[[i_coord]])$variables) <- i_coord - } else { - stop("Metadata is not correct.") } } } diff --git a/inst/doc/usecase/UseCase4_CST_SaveExp.R b/inst/doc/usecase/UseCase4_CST_SaveExp.R index 5e9ded12..a6cbbe86 100644 --- a/inst/doc/usecase/UseCase4_CST_SaveExp.R +++ b/inst/doc/usecase/UseCase4_CST_SaveExp.R @@ -139,18 +139,70 @@ out <- Start(dat = path, sdate = NULL), retrieve = TRUE) -attributes(out)$Variables$common$sdate -# [1] "1 months" "2 months" "3 months" +attributes(out)$Variables$common$ftime +# [1] "1 days" "2 days" "3 days" "4 days" "5 days" "6 days" "7 days" +# [8] "8 days" "9 days" "10 days" "11 days" "12 days" "13 days" "14 days" +# [15] "15 days" "16 days" "17 days" "18 days" "19 days" "20 days" "21 days" +# [22] "22 days" "23 days" "24 days" "25 days" "26 days" "27 days" "28 days" +# [29] "29 days" "30 days" "31 days" + +# (5) Test observations +data <- lonlat_temp$obs +CST_SaveExp(data = data, ftime_dim = 'ftime', memb_dim = NULL, + var_dim = NULL, dat_dim = 'dataset', sdate_dim = 'sdate', + single_file = TRUE, units_hours_since = FALSE) +# Now we read the output with Start: +sdate <- c('20001101', '20051101') +path <- paste0(getwd(),"/$var$_", sdate[1], "_", sdate[length(sdate)], ".nc") +out <- Start(dat = path, + var = 'tas', # tas + lon = 'all', + lat = 'all', + ftime = 
'all', + member = 1, + sdate = 'all', + return_vars = list( + lon = 'dat', + lat = 'dat', + ftime = NULL, + sdate = NULL), + retrieve = TRUE) +dim(out) +attributes(out)$Variables$common$ftime # (b) Data loaded with Load data <- lonlat_temp$exp data <- lonlat_temp$obs -dtaa <- lonlat_prec +data <- lonlat_prec CST_SaveExp(data = data, ftime_dim = 'ftime', var_dim = NULL, dat_dim = 'dataset', sdate_dim = 'sdate', single_file = TRUE, units_hours_since = FALSE) -# Error +# Now we read the output with Start: +sdate <- as.vector(data$coords$sdate) +path <- paste0(getwd(),"/$var$_", sdate[1], "_", sdate[length(sdate)], ".nc") +out <- Start(dat = path, + var = 'prlr', # tas + lon = 'all', + lat = 'all', + ftime = 'all', + sdate = 'all', + member = 'all', + return_vars = list( + lon = 'dat', + lat = 'dat', + ftime = NULL, + sdate = NULL), + retrieve = TRUE) +dim(out) +lonlat_prec$dims + +# Test with ftime_dim NULL +data <- lonlat_temp$exp +data <- CST_Subset(data, along = 'ftime', indices = 1, drop = 'selected') +CST_SaveExp(data = data, ftime_dim = NULL, + var_dim = NULL, dat_dim = 'dataset', sdate_dim = 'sdate', + single_file = FALSE, units_hours_since = FALSE) ################################################################################ # Test 3: Special cases @@ -207,9 +259,62 @@ summary(data3) dim(data3) dim(data3out) -# (1) two variables and two datasets in the same file +# (2) two variables and two datasets in the same file CST_SaveExp(data = cube3, ftime_dim = 'time', var_dim = 'var', memb_dim = 'ensemble', dat_dim = 'dat', single_file = TRUE) + +# (3) Observations +repos_exp <- paste0('/esarchive/exp/ecearth/a1tr/cmorfiles/CMIP/EC-Earth-Consortium/', + 'EC-Earth3/historical/r24i1p1f1/Amon/$var$/gr/v20190312/', + '$var$_Amon_EC-Earth3_historical_r24i1p1f1_gr_$sdate$01-$sdate$12.nc') + +exp <- Start(dat = repos_exp, + var = 'tas', + sdate = as.character(c(2005:2008)), + time = indices(1:3), + lat = 1:10, + lat_reorder = Sort(), + lon = 1:10, + lon_reorder = CircularSort(0, 360), + synonims = list(lat = c('lat', 'latitude'), + lon = c('lon', 'longitude')), + return_vars = list(lon = NULL, + lat = NULL, + time = 'sdate'), + retrieve = FALSE) +lats <- attr(exp, 'Variables')$common$lat +lons <- attr(exp, 'Variables')$common$lon +## The 'time' attribute is a two-dim array +dates <- attr(exp, 'Variables')$common$time +dim(dates) +repos_obs <- '/esarchive/recon/ecmwf/erainterim/monthly_mean/$var$_f6h/$var$_$date$.nc' + +obs <- Start(dat = repos_obs, + var = 'tas', + date = unique(format(dates, '%Y%m')), + time = values(dates), #dim: [sdate = 4, time = 3] + lat = 1:10, + lat_reorder = Sort(), + lon = 1:10, + lon_reorder = CircularSort(0, 360), + time_across = 'date', + merge_across_dims = TRUE, + split_multiselected_dims = TRUE, + synonims = list(lat = c('lat', 'latitude'), + lon = c('lon', 'longitude')), + return_vars = list(lon = NULL, + lat = NULL, + time = 'date'), + retrieve = TRUE) +dim(obs) +attributes(obs)$Variables$common$time +obscube <- as.s2dv_cube(obs) +CST_SaveExp(data = obscube, ftime_dim = 'time', var_dim = 'var', + memb_dim = 'ensemble', dat_dim = 'dat', + single_file = TRUE) + + + ################################################################################ \ No newline at end of file diff --git a/man/CST_SaveExp.Rd b/man/CST_SaveExp.Rd index 8659b215..6b5006fa 100644 --- a/man/CST_SaveExp.Rd +++ b/man/CST_SaveExp.Rd @@ -25,50 +25,61 @@ CST_SaveExp( \item{destination}{A character string containing the directory name in which to save the data. 
NetCDF files for each starting date are saved into the
-folder tree: \cr
-destination/Dataset/variable/. By default the function
-creates and saves the data into the working directory.}
+folder tree: 'destination/Dataset/variable/'. By default the function
+saves the data into the working directory.}

 \item{sdate_dim}{A character string indicating the name of the start date
 dimension. By default, it is set to 'sdate'. It can be NULL if there is no
 start date dimension.}

 \item{ftime_dim}{A character string indicating the name of the forecast time
-dimension. By default, it is set to 'time'. It can be NULL if there is no
-forecast time dimension.}
+dimension. If 'Dates' are used, it can't be NULL. If there is no forecast
+time dimension, 'Dates' will be set to NULL and will not be used. By
+default, it is set to 'time'.}

 \item{dat_dim}{A character string indicating the name of dataset dimension.
-By default, it is set to 'dataset'. It can be NULL if there is no dataset
-dimension.}
+It can be NULL if there is no dataset dimension. By default, it is set to
+'dataset'.}

 \item{var_dim}{A character string indicating the name of variable dimension.
-By default, it is set to 'var'. It can be NULL if there is no variable
-dimension.}
+It can be NULL if there is no variable dimension. By default, it is set to
+'var'.}

 \item{memb_dim}{A character string indicating the name of the member dimension.
-By default, it is set to 'member'. It can be NULL if there is no member
-dimension.}
+It can be NULL if there is no member dimension. By default, it is set to
+'member'.}

 \item{startdates}{A vector of dates that will be used for the filenames
-when saving the data in multiple files. It must be a vector of the same
-length as the start date dimension of data. It must be a vector of class
-\code{Dates}, \code{'POSIXct'} or character with lenghts between 1 and 10.
-If it is NULL, the coordinate corresponding the the start date dimension or
-the first Date of each time step will be used as the name of the files.
-It is NULL by default.}
+when saving the data in multiple files (single_file = FALSE). It must be a
+vector of the same length as the start date dimension of data. It must be a
+vector of class \code{Dates}, \code{'POSIXct'} or character with lengths
+between 1 and 10. If it is NULL, the coordinate corresponding to the start
+date dimension or the first Date of each time step will be used as the name
+of the files. It is NULL by default.}

-\item{drop_dims}{A vector of character strings indicating the dimension names
-of length 1 that need to be dropped in order that they don't appear in the
-netCDF file. It is NULL by default (optional).}
+\item{drop_dims}{(Optional) A vector of character strings indicating the
+dimension names of length 1 that need to be dropped in order that they don't
+appear in the netCDF file. Only dimensions that are not used in the
+computation are allowed to be dropped. The dimensions used in the computation
+are the ones specified in: sdate_dim, ftime_dim, dat_dim, var_dim and
+memb_dim. It is NULL by default.}

 \item{single_file}{A logical value indicating if the whole object is saved in a
 single file (TRUE) or in multiple files (FALSE). When it is FALSE,
-the array is separated for Datasets, variable and start date. It is FALSE
-by default.}
+the array is separated for datasets, variable and start date. When there are
+no specified time dimensions, the data will be saved in a single file by
+default.
The output file name when 'single_file' is TRUE is a character
+string containing: '<var_name>_<first_sdate>_<last_sdate>.nc'; when it is
+FALSE, it is '<var_name>_<sdate>.nc'. It is FALSE by default.}

+\item{extra_string}{(Optional) A character string to be included as part of
+the file name, for instance, to identify member or realization. When
+single_file is TRUE, the 'extra_string' will substitute all the default
+file name; when single_file is FALSE, the 'extra_string' will be added
+in the file name as: '<var_name>_<extra_string>_<sdate>.nc'. It is NULL by
+default.}

-\item{extra_string}{A character string to be include as part of the file name,
-for instance, to identify member or realization. It would be added to the
-file name between underscore characters.}
+\item{global_attrs}{(Optional) A list with elements containing the global
+attributes to be saved in the NetCDF.}

 \item{units_hours_since}{(Optional) A logical value only used for the case
 Dates have forecast time and start date dimension and single_file is TRUE.
 When it is TRUE, it saves the forecast time with units of 'hours since';
 if it is FALSE, the time units will be a number of time steps with its
 corresponding frequency (e.g. n days, n months or n hours). It is TRUE
 by default.}
 }
 \value{
 Multiple or single NetCDF files containing the data array.\cr
-\item{\code{single_file = TRUE}}{
+\item{\code{single_file is TRUE}}{
 All data is saved in a single file located in the specified destination
- path with the following name:
- ___.nc. Multiple
- variables are saved separately in the same file. The forecast time units
- is extracted from the frequency of the time steps (hours, days, months).
- The first value of forecast time is 1. If no frequency is found, the units
- will be 'hours since' each start date and the time steps are assumed to be
- equally spaced.
+ path with the following name (by default):
+ '<var_name>_<first_sdate>_<last_sdate>.nc'. Multiple variables
+ are saved separately in the same file. The forecast time units
+ are calculated from each start date (if sdate_dim is not NULL) or from
+ the time step. If 'units_hours_since' is TRUE, the forecast time units
+ will be 'hours since <start date>'. If 'units_hours_since' is FALSE,
+ the forecast time units are extracted from the frequency of the time steps
+ (hours, days, months); if no frequency is found, the units will be 'hours
+ since'. When the time units are 'hours since', the time steps are assumed
+ to be equally spaced.
 }
-\item{\code{single_file = FALSE}}{
+\item{\code{single_file is FALSE}}{
 The data array is subset and stored into multiple files. Each file
 contains the data subset for each start date, variable and dataset. Files
- with different variables and Datasets are stored in separated directories
- within the following directory tree: destination/Dataset/variable/.
- The name of each file will be:
- __.nc.
+ with different variables and datasets are stored in separate directories
+ within the following directory tree: 'destination/Dataset/variable/'.
+ The name of each file will be by default: '<var_name>_<sdate>.nc'.
+ The forecast time units are calculated from each start date (if sdate_dim
+ is not NULL) or from the time step. The forecast time units will be 'hours
+ since <start date>'.
 }
 }
 \description{
 This function allows to divide and save an object of class
 's2dv_cube' into a NetCDF file, allowing to reload the saved data using
-\code{Start} function from StartR package. If the original 's2dv_cube' object
-has been created from \code{CST_Load()}, then it can be reloaded with
-\code{Load()}.
+\code{CST_Start} or \code{CST_Load} functions. It also allows to save any
+'s2dv_cube' object that follows the NetCDF attributes conventions.
}
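# A hedged read-back sketch mirroring the use case script above: reloading a
# single-file output whose name follows the default
# '<var_name>_<first_sdate>_<last_sdate>.nc' pattern (sample dates assumed):
library(startR)
sdate <- c('20001101', '20051101')
path <- paste0(getwd(), "/$var$_", sdate[1], "_", sdate[2], ".nc")
out <- Start(dat = path,
             var = 'tas', lon = 'all', lat = 'all',
             ftime = 'all', sdate = 'all', member = 'all',
             return_vars = list(lon = 'dat', lat = 'dat',
                                ftime = NULL, sdate = NULL),
             retrieve = TRUE)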
\examples{
-\dontrun{
 data <- lonlat_temp_st$exp
-destination <- "./"
-CST_SaveExp(data = data, destination = destination, ftime_dim = 'ftime',
-            var_dim = 'var', dat_dim = 'dataset')
-}
+CST_SaveExp(data = data, ftime_dim = 'ftime', var_dim = 'var',
+            dat_dim = 'dataset', sdate_dim = 'sdate')
 }
 \seealso{
diff --git a/man/SaveExp.Rd b/man/SaveExp.Rd
index 2ff92489..7acd17d1 100644
--- a/man/SaveExp.Rd
+++ b/man/SaveExp.Rd
@@ -52,11 +52,12 @@ lists for each variable.}
 datasets.}

 \item{startdates}{A vector of dates that will be used for the filenames
-when saving the data in multiple files. It must be a vector of the same
-length as the start date dimension of data. It must be a vector of class
-\code{Dates}, \code{'POSIXct'} or character with lenghts between 1 and 10.
-If it is NULL, the first Date of each time step will be used as the name of
-the files. It is NULL by default.}
+when saving the data in multiple files (single_file = FALSE). It must be a
+vector of the same length as the start date dimension of data. It must be a
+vector of class \code{Dates}, \code{'POSIXct'} or character with lengths
+between 1 and 10. If it is NULL, the coordinate corresponding to the start
+date dimension or the first Date of each time step will be used as the name
+of the files. It is NULL by default.}

 \item{dat_dim}{A character string indicating the name of dataset dimension.
 By default, it is set to 'dataset'. It can be NULL if there is no dataset
@@ -78,41 +79,61 @@ dimension.}
 By default, it is set to 'member'. It can be NULL if there is no member
 dimension.}

-\item{drop_dims}{A vector of character strings indicating the dimension names
-of length 1 that need to be dropped in order that they don't appear in the
-netCDF file. It is NULL by default (optional).}
+\item{drop_dims}{(Optional) A vector of character strings indicating the
+dimension names of length 1 that need to be dropped in order that they don't
+appear in the netCDF file. Only dimensions that are not used in the
+computation are allowed to be dropped. The dimensions used in the computation
+are the ones specified in: sdate_dim, ftime_dim, dat_dim, var_dim and
+memb_dim. It is NULL by default.}

 \item{single_file}{A logical value indicating if the whole object is saved in a
-unique file (TRUE) or in separated directories (FALSE). When it is FALSE,
-the array is separated for Datasets, variable and start date. It is FALSE
-by default (optional).}
-
-\item{extra_string}{A character string to be include as part of the file name,
-for instance, to identify member or realization. It would be added to the
-file name between underscore characters (optional).}
-
-\item{global_attrs}{A list with elements containing the global attributes
-to be saved in the NetCDF.}
+single file (TRUE) or in multiple files (FALSE). When it is FALSE,
+the array is separated for datasets, variable and start date. When there are
+no specified time dimensions, the data will be saved in a single file by
+default. The output file name when 'single_file' is TRUE is a character
+string containing: <var_name>_<first_sdate>_<last_sdate>.nc; when it is
+FALSE, it is <var_name>_<sdate>.nc. It is FALSE by default.}

+\item{extra_string}{(Optional) A character string to be included as part of
+the file name, for instance, to identify member or realization. When
+single_file is TRUE, the 'extra_string' will substitute all the default
+file name; when single_file is FALSE, the 'extra_string' will be added
+in the file name as: <var_name>_<extra_string>_<sdate>.nc.
It is NULL by default.}

+\item{global_attrs}{(Optional) A list with elements containing the global
+attributes to be saved in the NetCDF.}

+\item{units_hours_since}{(Optional) A logical value only used for the case
+Dates have forecast time and start date dimension and single_file is TRUE.
+When it is TRUE, it saves the forecast time with units of 'hours since';
+if it is FALSE, the time units will be a number of time steps with its
+corresponding frequency (e.g. n days, n months or n hours). It is TRUE
+by default.}
 }
 \value{
 Multiple or single NetCDF files containing the data array.\cr
-\item{\code{single_file = TRUE}}{
+\item{\code{single_file is TRUE}}{
 All data is saved in a single file located in the specified destination
- path with the following name:
- ___.nc. Multiple
- variables are saved separately in the same file. The forecast time units
- is extracted from the frequency of the time steps (hours, days, months).
- The first value of forecast time is 1. If no frequency is found, the units
- will be 'hours since' each start date and the time steps are assumed to be
- equally spaced.
+ path with the following name (by default):
+ '<var_name>_<first_sdate>_<last_sdate>.nc'. Multiple variables
+ are saved separately in the same file. The forecast time units
+ are calculated from each start date (if sdate_dim is not NULL) or from
+ the time step. If 'units_hours_since' is TRUE, the forecast time units
+ will be 'hours since <start date>'. If 'units_hours_since' is FALSE,
+ the forecast time units are extracted from the frequency of the time steps
+ (hours, days, months); if no frequency is found, the units will be 'hours
+ since'. When the time units are 'hours since', the time steps are assumed
+ to be equally spaced.
 }
-\item{\code{single_file = FALSE}}{
+\item{\code{single_file is FALSE}}{
 The data array is subset and stored into multiple files. Each file
 contains the data subset for each start date, variable and dataset. Files
- with different variables and Datasets are stored in separated directories
- within the following directory tree: destination/Dataset/variable/.
- The name of each file will be:
- __.nc.
+ with different variables and datasets are stored in separate directories
+ within the following directory tree: 'destination/Dataset/variable/'.
+ The name of each file will be by default: '<var_name>_<sdate>.nc'.
+ The forecast time units are calculated from each start date (if sdate_dim
+ is not NULL) or from the time step. The forecast time units will be 'hours
+ since <start date>'.
 }
 }
 \description{
 This function allows to save a data array with metadata into a
 NetCDF file, allowing to reload the saved data using \code{Start} function
 from StartR package. If the original 's2dv_cube' object has been created from
 \code{CST_Load()}, then it can be reloaded with \code{Load()}.
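# A sketch of 'drop_dims' under the stated constraint that only dimensions
# not used in the computation may be dropped: here the length-1 'dataset'
# dimension of the CSTools sample object is assumed droppable because
# dat_dim is left NULL.
library(CSTools)
CST_SaveExp(data = lonlat_temp_st$exp, destination = tempdir(),
            ftime_dim = 'ftime', var_dim = 'var', dat_dim = NULL,
            sdate_dim = 'sdate', drop_dims = 'dataset')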
} \examples{ -\dontrun{ data <- lonlat_temp_st$exp$data lon <- lonlat_temp_st$exp$coords$lon lat <- lonlat_temp_st$exp$coords$lat @@ -130,13 +150,10 @@ coords <- list(lon = lon, lat = lat) Datasets <- lonlat_temp_st$exp$attrs$Datasets varname <- 'tas' Dates <- lonlat_temp_st$exp$attrs$Dates -destination = './' metadata <- lonlat_temp_st$exp$attrs$Variable$metadata -SaveExp(data = data, destination = destination, coords = coords, - Datasets = Datasets, varname = varname, Dates = Dates, - metadata = metadata, single_file = TRUE, ftime_dim = 'ftime', - var_dim = 'var', dat_dim = 'dataset') -} +SaveExp(data = data, coords = coords, Datasets = Datasets, varname = varname, + Dates = Dates, metadata = metadata, single_file = TRUE, + ftime_dim = 'ftime', var_dim = 'var', dat_dim = 'dataset') } \author{ diff --git a/tests/testthat/test-CST_SaveExp.R b/tests/testthat/test-CST_SaveExp.R index 17226161..65c0cb22 100644 --- a/tests/testthat/test-CST_SaveExp.R +++ b/tests/testthat/test-CST_SaveExp.R @@ -123,7 +123,8 @@ test_that("1. Input checks: CST_SaveExp", { expect_warning( CST_SaveExp(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL, var_dim = NULL, startdates = 1), - "Parameter 'startdates' is not a character string, it will not be used." + paste0("Parameter 'startdates' doesn't have the same length ", + "as dimension 'sdate', it will not be used.") ) expect_warning( CST_SaveExp(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, @@ -132,11 +133,6 @@ test_that("1. Input checks: CST_SaveExp", { "as dimension '", 'sdate',"', it will not be used.") ) # metadata - expect_warning( - CST_SaveExp(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL), - paste0("No metadata found in element Variable from attrs.") - ) cube1$attrs$Variable$metadata <- 'metadata' expect_error( CST_SaveExp(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, -- GitLab From e7b04c705456a32ff78fad3b13bf60b25fcde407 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Fri, 1 Dec 2023 15:27:11 +0100 Subject: [PATCH 22/66] Improve CST_SaveExp and correct unit test --- R/CST_SaveExp.R | 386 +++++++++++++++++++----------- man/CST_SaveExp.Rd | 43 ++-- man/SaveExp.Rd | 71 +++--- tests/testthat/test-CST_SaveExp.R | 66 +---- 4 files changed, 317 insertions(+), 249 deletions(-) diff --git a/R/CST_SaveExp.R b/R/CST_SaveExp.R index 26280fdd..c5742031 100644 --- a/R/CST_SaveExp.R +++ b/R/CST_SaveExp.R @@ -25,9 +25,9 @@ #'@param var_dim A character string indicating the name of variable dimension. #' It can be NULL if there is no variable dimension. By default, it is set to #' 'var'. -#'@param memb_dim A character string indicating the name of the member dimension. -#' It can be NULL if there is no member dimension. By default, it is set to -#' 'member'. +#'@param memb_dim A character string indicating the name of the member +#' dimension. It can be NULL if there is no member dimension. By default, it is +#' set to 'member'. #'@param startdates A vector of dates that will be used for the filenames #' when saving the data in multiple files (single_file = FALSE). It must be a #' vector of the same length as the start date dimension of data. It must be a @@ -52,13 +52,14 @@ #' the file name, for instance, to identify member or realization. When #' single_file is TRUE, the 'extra_string' will substitute all the default #' file name; when single_file is FALSE, the 'extra_string' will be added -#' in the file name as: '__.nc'. It is NULL by default. 
-#'@param units_hours_since (Optional) A logical value only used for the case -#' Dates have forecast time and start date dimension and single_file is TRUE. -#' When it is TRUE, it saves the forecast time with units of 'hours since'; -#' if it is FALSE, the time units will be a number of time steps with its -#' corresponding frequency (e.g. n days, n months or n hours). It is TRUE -#' by default. +#' in the file name as: '__.nc'. It is NULL by +#' default. +#'@param units_hours_since (Optional) A logical value only available for the +#' case: Dates have forecast time and start date dimension, single_file is +#' TRUE and 'time_bounds' are not used. When it is TRUE, it saves the forecast +#' time with units of 'hours since'; if it is FALSE, the time units will be a +#' number of time steps with its corresponding frequency (e.g. n days, n months +#' or n hours). It is TRUE by default. #'@param global_attrs (Optional) A list with elements containing the global #' attributes to be saved in the NetCDF. #' @@ -96,10 +97,10 @@ #' dat_dim = 'dataset', sdate_dim = 'sdate') #' #'@export -CST_SaveExp <- function(data, destination = "./", sdate_dim = 'sdate', - ftime_dim = 'time', dat_dim = 'dataset', - var_dim = 'var', memb_dim = 'member', - startdates = NULL, drop_dims = NULL, +CST_SaveExp <- function(data, destination = "./", startdates = NULL, + sdate_dim = 'sdate', ftime_dim = 'time', + memb_dim = 'member', dat_dim = 'dataset', + var_dim = 'var', drop_dims = NULL, single_file = FALSE, extra_string = NULL, global_attrs = NULL, units_hours_since = TRUE) { # Check 's2dv_cube' @@ -114,20 +115,11 @@ CST_SaveExp <- function(data, destination = "./", sdate_dim = 'sdate', if (!inherits(data$attrs, 'list')) { stop("Level 'attrs' must be a list with at least 'Dates' element.") } - if (!all(c('coords') %in% names(data))) { - warning("Element 'coords' not found. 
No coordinates will be used.") - } # metadata if (!is.null(data$attrs$Variable$metadata)) { if (!inherits(data$attrs$Variable$metadata, 'list')) { stop("Element metadata from Variable element in attrs must be a list.") } - if (!any(names(data$attrs$Variable$metadata) %in% names(data$coords))) { - warning("Metadata is not found for any coordinate.") - } else if (!any(names(data$attrs$Variable$metadata) %in% - data$attrs$Variable$varName)) { - warning("Metadata is not found for any variable.") - } } # Dates if (is.null(data$attrs$Dates)) { @@ -141,21 +133,6 @@ CST_SaveExp <- function(data, destination = "./", sdate_dim = 'sdate', if (!is.character(sdate_dim)) { stop("Parameter 'sdate_dim' must be a character string.") } - if (length(sdate_dim) > 1) { - warning("Parameter 'sdate_dim' has length greater than 1 and ", - "only the first element will be used.") - sdate_dim <- sdate_dim[1] - } - } else if (length(dim(data$attrs$Dates)) == 1) { - sdate_dim <- 'sdate' - dim(data$data) <- c(sdate = 1, dim(data$data)) - data$dims <- dim(data$data) - dim(data$attrs$Dates) <- c(sdate = 1, dim(data$attrs$Dates)) - data$coords[[sdate_dim]] <- data$attrs$Dates[1] - } - # ftime_dim - if (is.null(ftime_dim)) { - data$attrs$Dates <- NULL } # startdates if (is.null(startdates)) { @@ -163,31 +140,22 @@ CST_SaveExp <- function(data, destination = "./", sdate_dim = 'sdate', startdates <- data$coords[[sdate_dim]] } } - - if (!is.null(startdates)) { - if (!is.null(sdate_dim)) { - if (dim(data$data)[sdate_dim] != length(startdates)) { - warning(paste0("Parameter 'startdates' doesn't have the same length ", - "as dimension '", sdate_dim,"', it will not be used.")) - startdates <- data$coords[[sdate_dim]] - } - } - } SaveExp(data = data$data, destination = destination, - Dates = data$attrs$Dates, coords = data$coords, + Dates = data$attrs$Dates, + time_bounds = data$attrs$time_bounds, + startdates = startdates, varname = data$attrs$Variable$varName, metadata = data$attrs$Variable$metadata, Datasets = data$attrs$Datasets, - startdates = startdates, - dat_dim = dat_dim, sdate_dim = sdate_dim, - ftime_dim = ftime_dim, var_dim = var_dim, + sdate_dim = sdate_dim, ftime_dim = ftime_dim, memb_dim = memb_dim, + dat_dim = dat_dim, var_dim = var_dim, drop_dims = drop_dims, - extra_string = extra_string, single_file = single_file, + extra_string = extra_string, global_attrs = global_attrs, units_hours_since = units_hours_since) } @@ -202,14 +170,25 @@ CST_SaveExp <- function(data, destination = "./", sdate_dim = 'sdate', #'@param data A multi-dimensional array with named dimensions. #'@param destination A character string indicating the path where to store the #' NetCDF files. -#'@param Dates A named array of dates with the corresponding sdate and forecast -#' time dimension. If there is no sdate_dim, you can set it to NULL. -#' It must have ftime_dim dimension. #'@param coords A named list with elements of the coordinates corresponding to #' the dimensions of the data parameter. The names and length of each element #' must correspond to the names of the dimensions. If any coordinate is not #' provided, it is set as an index vector with the values from 1 to the length #' of the corresponding dimension. +#'@param Dates A named array of dates with the corresponding sdate and forecast +#' time dimension. If there is no sdate_dim, you can set it to NULL. +#' It must have ftime_dim dimension. 
+#'@param time_bounds (Optional) A list of two arrays of dates containing +#' the lower (first array) and the upper (second array) time bounds +#' corresponding to Dates. Each array must have the same dimensions as Dates. +#' It is NULL by default. +#'@param startdates A vector of dates that will be used for the filenames +#' when saving the data in multiple files (single_file = FALSE). It must be a +#' vector of the same length as the start date dimension of data. It must be a +#' vector of class \code{Dates}, \code{'POSIXct'} or character with lenghts +#' between 1 and 10. If it is NULL, the coordinate corresponding the the start +#' date dimension or the first Date of each time step will be used as the name +#' of the files. It is NULL by default. #'@param varname A character string indicating the name of the variable to be #' saved. #'@param metadata A named list where each element is a variable containing the @@ -229,42 +208,36 @@ CST_SaveExp <- function(data, destination = "./", sdate_dim = 'sdate', #'@param var_dim A character string indicating the name of variable dimension. #' By default, it is set to 'var'. It can be NULL if there is no variable #' dimension. -#'@param memb_dim A character string indicating the name of the member dimension. -#' By default, it is set to 'member'. It can be NULL if there is no member -#' dimension. -#'@param startdates A vector of dates that will be used for the filenames -#' when saving the data in multiple files (single_file = FALSE). It must be a -#' vector of the same length as the start date dimension of data. It must be a -#' vector of class \code{Dates}, \code{'POSIXct'} or character with lenghts -#' between 1 and 10. If it is NULL, the coordinate corresponding the the start -#' date dimension or the first Date of each time step will be used as the name -#' of the files. It is NULL by default. -#'@param single_file A logical value indicating if all object is saved in a -#' single file (TRUE) or in multiple files (FALSE). When it is FALSE, -#' the array is separated for datasets, variable and start date. When there are -#' no specified time dimensions, the data will be saved in a single file by -#' default. The output file name when 'single_file' is TRUE is a character -#' string containing: __.nc; when it is FALSE, -#' it is _.nc. It is FALSE by default. +#'@param memb_dim A character string indicating the name of the member +#' dimension. By default, it is set to 'member'. It can be NULL if there is no +#' member dimension. #'@param drop_dims (optional) A vector of character strings indicating the #' dimension names of length 1 that need to be dropped in order that they don't #' appear in the netCDF file. Only is allowed to drop dimensions that are not #' used in the computation. The dimensions used in the computation are the ones #' specified in: sdate_dim, ftime_dim, dat_dim, var_dim and memb_dim. It is #' NULL by default. +#'@param single_file A logical value indicating if all object is saved in a +#' single file (TRUE) or in multiple files (FALSE). When it is FALSE, +#' the array is separated for datasets, variable and start date. When there are +#' no specified time dimensions, the data will be saved in a single file by +#' default. The output file name when 'single_file' is TRUE is a character +#' string containing: '__.nc'; when it is FALSE, +#' it is '_.nc'. It is FALSE by default. #'@param extra_string (Optional) A character string to be included as part of #' the file name, for instance, to identify member or realization. 
When #' single_file is TRUE, the 'extra_string' will substitute all the default #' file name; when single_file is FALSE, the 'extra_string' will be added -#' in the file name as: __.nc. It is NULL by default. -#'@param units_hours_since (Optional) A logical value only used for the case -#' Dates have forecast time and start date dimension and single_file is TRUE. -#' When it is TRUE, it saves the forecast time with units of 'hours since'; -#' if it is FALSE, the time units will be a number of time steps with its -#' corresponding frequency (e.g. n days, n months or n hours). It is TRUE -#' by default. +#' in the file name as: '__.nc'. It is NULL by +#' default. #'@param global_attrs (Optional) A list with elements containing the global #' attributes to be saved in the NetCDF. +#'@param units_hours_since (Optional) A logical value only available for the +#' case: Dates have forecast time and start date dimension, single_file is +#' TRUE and 'time_bounds' is NULL. When it is TRUE, it saves the forecast time +#' with units of 'hours since'; if it is FALSE, the time units will be a number +#' of time steps with its corresponding frequency (e.g. n days, n months or n +#' hours). It is TRUE by default. #' #'@return Multiple or single NetCDF files containing the data array.\cr #'\item{\code{single_file is TRUE}}{ @@ -309,10 +282,11 @@ CST_SaveExp <- function(data, destination = "./", sdate_dim = 'sdate', #'@import multiApply #'@importFrom ClimProjDiags Subset #'@export -SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, +SaveExp <- function(data, destination = "./", coords = NULL, + Dates = NULL, time_bounds = NULL, startdates = NULL, varname = NULL, metadata = NULL, Datasets = NULL, - startdates = NULL, dat_dim = 'dataset', sdate_dim = 'sdate', - ftime_dim = 'time', var_dim = 'var', memb_dim = 'member', + sdate_dim = 'sdate', ftime_dim = 'time', + memb_dim = 'member', dat_dim = 'dataset', var_dim = 'var', drop_dims = NULL, single_file = FALSE, extra_string = NULL, global_attrs = NULL, units_hours_since = TRUE) { ## Initial checks @@ -330,15 +304,6 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, "indicating the name of the file (including the folder if needed) ", "where the data will be saved.") } - # Dates - if (!is.null(Dates)) { - if (!any(inherits(Dates, "POSIXct"), inherits(Dates, "Date"))) { - stop("Parameter 'Dates' must be of 'POSIXct' or 'Dates' class.") - } - if (is.null(dim(Dates))) { - stop("Parameter 'Dates' must have dimension names.") - } - } # drop_dims if (!is.null(drop_dims)) { if (!is.character(drop_dims) | any(!drop_dims %in% names(dim(data)))) { @@ -456,11 +421,6 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, stop("Parameter 'dat_dim' is not found in 'data' dimension. 
Set it ", "as NULL if there is no Datasets dimension.") } - if (length(dat_dim) > 1) { - warning("Parameter 'dat_dim' has length greater than 1 and ", - "only the first element will be used.") - dat_dim <- dat_dim[1] - } n_datasets <- dim(data)[dat_dim] } else { n_datasets <- 1 @@ -488,35 +448,110 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, single_file <- TRUE } } - # Dates dimension check + # Dates (1): initial checks if (!is.null(Dates)) { - if (is.null(ftime_dim)) { - stop("Parameter 'Dates' must have 'ftime_dim'.") + if (!any(inherits(Dates, "POSIXct"), inherits(Dates, "Date"))) { + stop("Parameter 'Dates' must be of 'POSIXct' or 'Dates' class.") } - if (all(c(ftime_dim, sdate_dim) %in% names(dim(Dates)))) { - if (any(!names(dim(Dates)) %in% c(ftime_dim, sdate_dim))) { - if (all(dim(Dates)[!names(dim(Dates)) %in% c(ftime_dim, sdate_dim)] == 1)) { - dim(Dates) <- dim(Dates)[names(dim(Dates)) %in% c(ftime_dim, sdate_dim)] - } else { - stop("Parameter 'Dates' must have only 'sdate_dim' and 'ftime_dim' dimensions.") - } + if (is.null(dim(Dates))) { + stop("Parameter 'Dates' must have dimension names.") + } + if (all(is.null(ftime_dim), is.null(sdate_dim))) { + warning("Parameters 'ftime_dim' and 'sdate_dim' can't both be NULL ", + "if 'Dates' are used. 'Dates' will not be used.") + Dates <- NULL + } + # sdate_dim in Dates + if (!is.null(sdate_dim)) { + if (!sdate_dim %in% names(dim(Dates))) { + warning("Parameter 'sdate_dim' is not found in 'Dates' dimension. ", + "Dates will not be used.") + Dates <- NULL } - if (is.null(startdates)) { - startdates <- Subset(Dates, along = ftime_dim, 1, drop = 'selected') - } else if (any(nchar(startdates) > 10, nchar(startdates) < 1)) { - warning("Parameter 'startdates' should be a character string containing ", - "the start dates in the format 'yyyy-mm-dd', 'yyyymmdd', 'yyyymm', ", - "'POSIXct' or 'Dates' class. Files will be named with Dates instead.") - startdates <- Subset(Dates, along = ftime_dim, 1, drop = 'selected') + } + # ftime_dim in Dates + if (!is.null(ftime_dim)) { + if (!ftime_dim %in% names(dim(Dates))) { + warning("Parameter 'ftime_dim' is not found in 'Dates' dimension. 
", + "Dates will not be used.") + Dates <- NULL } - if (any(inherits(startdates, "POSIXct"), inherits(startdates, "Date"))) { - startdates <- format(startdates, "%Y%m%d") + } + } + # time_bounds + if (!is.null(time_bounds)) { + if (!inherits(time_bounds, 'list')) { + stop("Parameter 'time_bounds' must be a list with two dates arrays.") + } + time_bounds_dims <- lapply(time_bounds, function(x) dim(x)) + if (!identical(time_bounds_dims[[1]], time_bounds_dims[[2]])) { + stop("Parameter 'time_bounds' must have 2 arrays with same dimensions.") + } + name_tb <- sort(names(time_bounds_dims[[1]])) + name_dt <- sort(names(dim(Dates))) + if (!identical(dim(Dates)[name_dt], time_bounds_dims[[1]][name_tb])) { + stop(paste0("Parameter 'Dates' and 'time_bounds' must have same length ", + "of all dimensions.")) + } + } + # Dates (2): Check dimensions + if (!is.null(Dates)) { + if (any(dim(Dates)[!names(dim(Dates)) %in% c(ftime_dim, sdate_dim)] != 1)) { + stop("Parameter 'Dates' can have only 'sdate_dim' and 'ftime_dim' ", + "dimensions of length greater than 1.") + } + # drop dimensions of length 1 different from sdate_dim and ftime_dim + dim(Dates) <- dim(Dates)[names(dim(Dates)) %in% c(ftime_dim, sdate_dim)] + + # add ftime if needed + if (is.null(ftime_dim)) { + warning("A 'time' dimension of length 1 will be added to 'Dates'.") + dim(Dates) <- c(time = 1, dim(Dates)) + dim(data) <- c(time = 1, dim(data)) + dimnames <- names(dim(data)) + ftime_dim <- 'time' + if (!is.null(time_bounds)) { + time_bounds <- lapply(time_bounds, function(x) { + dim(x) <- c(time = 1, dim(x)) + return(x) + }) } - } else if (any(ftime_dim %in% names(dim(Dates)))) { - if (all(dim(Dates)[!names(dim(Dates)) %in% c(ftime_dim)] == 1)) { - dim(Dates) <- dim(Dates)[names(dim(Dates)) %in% c(ftime_dim, sdate_dim)] + } + # add sdate if needed + if (is.null(sdate_dim)) { + if (!single_file) { + warning("A 'sdate' dimension of length 1 will be added to 'Dates'.") + dim(Dates) <- c(dim(Dates), sdate = 1) + dim(data) <- c(dim(data), sdate = 1) + dimnames <- names(dim(data)) + sdate_dim <- 'sdate' + if (!is.null(time_bounds)) { + time_bounds <- lapply(time_bounds, function(x) { + dim(x) <- c(dim(x), sdate = 1) + return(x) + }) + } + if (!is.null(startdates)) { + if (length(startdates) != 1) { + warning("Parameter 'startdates' must be of length 1 if 'sdate_dim' is NULL.", + "They won't be used.") + startdates <- NULL + } + } } } + } + # startdates + if (!is.null(Dates)) { + # check startdates + if (is.null(startdates)) { + startdates <- Subset(Dates, along = ftime_dim, 1, drop = 'selected') + } else if (any(nchar(startdates) > 10, nchar(startdates) < 1)) { + warning("Parameter 'startdates' should be a character string containing ", + "the start dates in the format 'yyyy-mm-dd', 'yyyymmdd', 'yyyymm', ", + "'POSIXct' or 'Dates' class. Files will be named with Dates instead.") + startdates <- Subset(Dates, along = ftime_dim, 1, drop = 'selected') + } } else if (!single_file) { warning("Dates must be provided if 'data' must be saved in separated files. 
", "All data will be saved in a single file.") @@ -529,7 +564,20 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, } else { startdates <- rep('XXX', dim(data)[sdate_dim]) } + } else { + if (any(inherits(startdates, "POSIXct"), inherits(startdates, "Date"))) { + startdates <- format(startdates, "%Y%m%d") + } + if (!is.null(sdate_dim)) { + if (dim(data)[sdate_dim] != length(startdates)) { + warning(paste0("Parameter 'startdates' doesn't have the same length ", + "as dimension '", sdate_dim,"', it will not be used.")) + startdates <- Subset(Dates, along = ftime_dim, 1, drop = 'selected') + startdates <- format(startdates, "%Y%m%d") + } + } } + # Datasets if (is.null(Datasets)) { Datasets <- rep('XXX', n_datasets ) @@ -574,7 +622,7 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, for (i_coord in filedims) { # vals - if (i_coord %in% names(coords)) { + if (i_coord %in% names(coords)) {str if (is.numeric(coords[[i_coord]])) { coords[[i_coord]] <- as.vector(coords[[i_coord]]) } else { @@ -609,7 +657,6 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, # Reorder coords coords[c(names(coords)[!names(coords) %in% filedims])] <- NULL coords <- coords[filedims] - defined_vars <- list() if (!single_file) { for (i in 1:n_datasets) { path <- file.path(destination, Datasets[i], varname) @@ -630,6 +677,11 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, if (is.null(Dates)) { input_data <- list(data_subset, startdates) target_dims <- list(c(lon_dim, lat_dim, memb_dim, ftime_dim), NULL) + } else if (!is.null(time_bounds)) { + input_data <- list(data_subset, startdates, Dates, + time_bounds[[1]], time_bounds[[2]]) + target_dims = list(c(lon_dim, lat_dim, memb_dim, ftime_dim), NULL, + ftime_dim, ftime_dim, ftime_dim) } else { input_data <- list(data_subset, startdates, Dates) target_dims = list(c(lon_dim, lat_dim, memb_dim, ftime_dim), NULL, ftime_dim) @@ -652,8 +704,12 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, if (!is.null(dat_dim)) { coords[[dat_dim]] <- array(1:dim(data)[dat_dim], dim = dim(data)[dat_dim]) attr(coords[[dat_dim]], 'variables') <- list(list(units = 'adim')) - # extra_info_dim[[dat_dim]] <- list(Datasets = paste(Datasets, collapse = ', ')) } + # time_bnds + if (!is.null(time_bounds)) { + time_bnds <- c(time_bounds[[1]], time_bounds[[2]]) + } + # Dates if (!is.null(Dates)) { if (is.null(sdate_dim)) { sdates <- Dates[1] @@ -663,17 +719,18 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, # sdate definition sdates <- Subset(Dates, along = ftime_dim, 1, drop = 'selected') differ <- as.numeric(difftime(sdates, sdates[1], units = "hours")) - # new dim(differ) <- dim(data)[sdate_dim] coords[[sdate_dim]] <- differ - attr(coords[[sdate_dim]], 'variables') <- list(list(units = paste('hours since', sdates[1]), - calendar = 'proleptic_gregorian', - longname = sdate_dim)) + attrs <- list(units = paste('hours since', sdates[1]), + calendar = 'proleptic_gregorian', longname = sdate_dim) + attr(coords[[sdate_dim]], 'variables')[[sdate_dim]] <- attrs # ftime definition Dates <- Reorder(Dates, c(ftime_dim, sdate_dim)) differ_ftime <- array(dim = dim(Dates)) - for (i in 1:length(sdates)) differ_ftime[, i] <- as.numeric(difftime(Dates[, i], Dates[1, i], - units = "hours")) + for (i in 1:length(sdates)) { + differ_ftime[, i] <- as.numeric(difftime(Dates[, i], Dates[1, i], + units = "hours")) + } dim(differ_ftime) <- dim(Dates) leadtimes <- 
Subset(differ_ftime, along = sdate_dim, 1, drop = 'selected')
       if (!all(apply(differ_ftime, 1, function(x){length(unique(x)) == 1}))) {
@@ -682,7 +739,7 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL,
                   "correctly.")
         }
       }
-      if (!units_hours_since) {
+      if (all(!units_hours_since, is.null(time_bounds))) {
         if (all(diff(leadtimes/24) == 1)) {
           # daily values
           units <- 'days'
@@ -690,7 +747,7 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL,
         } else if (all(diff(leadtimes/24) %in% c(28, 29, 30, 31))) {
           # monthly values
           units <- 'months'
-          vals <- round(leadtimes/730) + 1
+          vals <- round(leadtimes/(30.437*24)) + 1
         } else {
           # other frequency
           units <- 'hours'
@@ -700,16 +757,47 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL,
         units <- paste('hours since', paste(sdates, collapse = ', '))
         vals <- leadtimes
       }
+      # Add ftime var
       dim(vals) <- dim(data)[ftime_dim]
       coords[[ftime_dim]] <- vals
-      attr(coords[[ftime_dim]], 'variables') <- list(list(units = units,
-                                                          calendar = 'proleptic_gregorian',
-                                                          longname = ftime_dim,
-                                                          unlim = TRUE))
+      attrs <- list(units = units, calendar = 'proleptic_gregorian',
+                    longname = ftime_dim, unlim = TRUE)
+      attr(coords[[ftime_dim]], 'variables')[[ftime_dim]] <- attrs
+
+      # Add time_bnds
+      if (!is.null(time_bounds)) {
+        if (is.null(sdate_dim)) {
+          sdates <- Dates[1]
+          time_bnds <- c(time_bounds[[1]], time_bounds[[2]])
+          leadtimes_bnds <- as.numeric(difftime(time_bnds, sdates, units = "hours"))
+          dim(leadtimes_bnds) <- c(dim(Dates), bnds = 2)
+        } else {
+          # assuming they have sdate and ftime
+          time_bnds <- lapply(time_bounds, function(x) {
+            x <- Reorder(x, c(ftime_dim, sdate_dim))
+            return(x)
+          })
+          # concatenate the reordered 'time_bnds' (using the unsorted
+          # 'time_bounds' here would mislabel the dimensions set just below)
+          time_bnds <- c(time_bnds[[1]], time_bnds[[2]])
+          dim(time_bnds) <- c(dim(Dates), bnds = 2)
+          differ_bnds <- array(dim = c(dim(time_bnds)))
+          for (i in 1:length(sdates)) {
+            differ_bnds[, i, ] <- as.numeric(difftime(time_bnds[, i, ], Dates[1, i],
+                                                      units = "hours"))
+          }
+          # NOTE: Add a warning when they are not equally spaced? 
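      # Editor's aside, not part of the patch: a minimal standalone sketch of
      # the equally-spaced check this NOTE contemplates. All names below are
      # illustrative; 'bnds_demo' holds lower/upper bound offsets in hours.
      bnds_demo <- array(c(0, 24, 48, 6, 30, 54), dim = c(ftime = 3, bnds = 2))
      if (!all(apply(bnds_demo, 2, function(x) length(unique(diff(x))) == 1))) {
        warning("Time bounds are not equally spaced.")
      }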
+ leadtimes_bnds <- Subset(differ_bnds, along = sdate_dim, 1, drop = 'selected') + } + # Add time_bnds + leadtimes_bnds <- Reorder(leadtimes_bnds, c('bnds', ftime_dim)) + coords[['time_bnds']] <- leadtimes_bnds + attrs <- list(units = paste('hours since', paste(sdates, collapse = ', ')), + calendar = 'proleptic_gregorian', + longname = 'time bounds', unlim = FALSE) + attr(coords[['time_bnds']], 'variables')$time_bnds <- attrs + } } # var definition - defined_vars <- list() extra_info_var <- NULL for (j in 1:n_vars) { varname_j <- varname[j] @@ -751,18 +839,30 @@ SaveExp <- function(data, destination = "./", Dates = NULL, coords = NULL, } .saveexp <- function(data, coords, destination = "./", - startdates = NULL, dates = NULL, - ftime_dim = 'time', varname = 'var', - metadata_var = NULL, extra_string = NULL, - global_attrs = NULL) { + startdates = NULL, dates = NULL, + time_bnds1 = NULL, time_bnds2 = NULL, + ftime_dim = 'time', varname = 'var', + metadata_var = NULL, extra_string = NULL, + global_attrs = NULL) { if (!is.null(dates)) { differ <- as.numeric(difftime(dates, dates[1], units = "hours")) dim(differ) <- dim(data)[ftime_dim] coords[[ftime_dim]] <- differ - attr(coords[[ftime_dim]], 'variables') <- list(list(units = paste('hours since', dates[1]), - calendar = 'proleptic_gregorian', - longname = ftime_dim, unlim = TRUE)) - names(attributes(coords[[ftime_dim]])$variables) <- ftime_dim + attrs <- list(units = paste('hours since', dates[1]), + calendar = 'proleptic_gregorian', + longname = ftime_dim, unlim = TRUE) + attr(coords[[ftime_dim]], 'variables')[[ftime_dim]] <- attrs + } + if (!any(is.null(time_bnds1), is.null(time_bnds2))) { + time_bnds <- c(time_bnds1, time_bnds2) + time_bnds <- as.numeric(difftime(time_bnds, dates[1], units = "hours")) + dim(time_bnds) <- c(dim(data)[ftime_dim], bnds = 2) + time_bnds <- Reorder(time_bnds, c('bnds', ftime_dim)) + coords[['time_bnds']] <- time_bnds + attrs <- list(units = paste('hours since', dates[1]), + calendar = 'proleptic_gregorian', + longname = 'time bounds', unlim = FALSE) + attr(coords[['time_bnds']], 'variables')$time_bnds <- attrs } # Add data coords[[varname]] <- data diff --git a/man/CST_SaveExp.Rd b/man/CST_SaveExp.Rd index 6b5006fa..1520eb08 100644 --- a/man/CST_SaveExp.Rd +++ b/man/CST_SaveExp.Rd @@ -7,12 +7,12 @@ CST_SaveExp( data, destination = "./", + startdates = NULL, sdate_dim = "sdate", ftime_dim = "time", + memb_dim = "member", dat_dim = "dataset", var_dim = "var", - memb_dim = "member", - startdates = NULL, drop_dims = NULL, single_file = FALSE, extra_string = NULL, @@ -28,6 +28,14 @@ to save the data. NetCDF file for each starting date are saved into the folder tree: 'destination/Dataset/variable/'. By default the function saves the data into the working directory.} +\item{startdates}{A vector of dates that will be used for the filenames +when saving the data in multiple files (single_file = FALSE). It must be a +vector of the same length as the start date dimension of data. It must be a +vector of class \code{Dates}, \code{'POSIXct'} or character with lenghts +between 1 and 10. If it is NULL, the coordinate corresponding the the start +date dimension or the first Date of each time step will be used as the name +of the files. It is NULL by default.} + \item{sdate_dim}{A character string indicating the name of the start date dimension. By default, it is set to 'sdate'. It can be NULL if there is no start date dimension.} @@ -37,6 +45,10 @@ dimension. If 'Dates' are used, it can't be NULL. 
If there is no forecast time dimension, 'Dates' will be set to NULL and will not be used. By default, it is set to 'time'.} +\item{memb_dim}{A character string indicating the name of the member +dimension. It can be NULL if there is no member dimension. By default, it is + set to 'member'.} + \item{dat_dim}{A character string indicating the name of dataset dimension. It can be NULL if there is no dataset dimension. By default, it is set to 'dataset'.} @@ -45,18 +57,6 @@ It can be NULL if there is no dataset dimension. By default, it is set to It can be NULL if there is no variable dimension. By default, it is set to 'var'.} -\item{memb_dim}{A character string indicating the name of the member dimension. -It can be NULL if there is no member dimension. By default, it is set to -'member'.} - -\item{startdates}{A vector of dates that will be used for the filenames -when saving the data in multiple files (single_file = FALSE). It must be a -vector of the same length as the start date dimension of data. It must be a -vector of class \code{Dates}, \code{'POSIXct'} or character with lenghts -between 1 and 10. If it is NULL, the coordinate corresponding the the start -date dimension or the first Date of each time step will be used as the name -of the files. It is NULL by default.} - \item{drop_dims}{(optional) A vector of character strings indicating the dimension names of length 1 that need to be dropped in order that they don't appear in the netCDF file. Only is allowed to drop dimensions that are not @@ -76,17 +76,18 @@ it is '_.nc'. It is FALSE by default.} the file name, for instance, to identify member or realization. When single_file is TRUE, the 'extra_string' will substitute all the default file name; when single_file is FALSE, the 'extra_string' will be added -in the file name as: '__.nc'. It is NULL by default.} +in the file name as: '__.nc'. It is NULL by +default.} \item{global_attrs}{(Optional) A list with elements containing the global attributes to be saved in the NetCDF.} -\item{units_hours_since}{(Optional) A logical value only used for the case -Dates have forecast time and start date dimension and single_file is TRUE. -When it is TRUE, it saves the forecast time with units of 'hours since'; -if it is FALSE, the time units will be a number of time steps with its -corresponding frequency (e.g. n days, n months or n hours). It is TRUE -by default.} +\item{units_hours_since}{(Optional) A logical value only available for the +case: Dates have forecast time and start date dimension, single_file is +TRUE and 'time_bounds' are not used. When it is TRUE, it saves the forecast +time with units of 'hours since'; if it is FALSE, the time units will be a +number of time steps with its corresponding frequency (e.g. n days, n months +or n hours). 
It is TRUE by default.} } \value{ Multiple or single NetCDF files containing the data array.\cr diff --git a/man/SaveExp.Rd b/man/SaveExp.Rd index 7acd17d1..dbef149d 100644 --- a/man/SaveExp.Rd +++ b/man/SaveExp.Rd @@ -7,17 +7,18 @@ SaveExp( data, destination = "./", - Dates = NULL, coords = NULL, + Dates = NULL, + time_bounds = NULL, + startdates = NULL, varname = NULL, metadata = NULL, Datasets = NULL, - startdates = NULL, - dat_dim = "dataset", sdate_dim = "sdate", ftime_dim = "time", - var_dim = "var", memb_dim = "member", + dat_dim = "dataset", + var_dim = "var", drop_dims = NULL, single_file = FALSE, extra_string = NULL, @@ -31,25 +32,20 @@ SaveExp( \item{destination}{A character string indicating the path where to store the NetCDF files.} -\item{Dates}{A named array of dates with the corresponding sdate and forecast -time dimension. If there is no sdate_dim, you can set it to NULL. -It must have ftime_dim dimension.} - \item{coords}{A named list with elements of the coordinates corresponding to the dimensions of the data parameter. The names and length of each element must correspond to the names of the dimensions. If any coordinate is not provided, it is set as an index vector with the values from 1 to the length of the corresponding dimension.} -\item{varname}{A character string indicating the name of the variable to be -saved.} - -\item{metadata}{A named list where each element is a variable containing the -corresponding information. The information must be contained in a list of -lists for each variable.} +\item{Dates}{A named array of dates with the corresponding sdate and forecast +time dimension. If there is no sdate_dim, you can set it to NULL. +It must have ftime_dim dimension.} -\item{Datasets}{A vector of character string indicating the names of the -datasets.} +\item{time_bounds}{(Optional) A list of two arrays of dates containing +the lower (first array) and the upper (second array) time bounds +corresponding to Dates. Each array must have the same dimensions as Dates. +It is NULL by default.} \item{startdates}{A vector of dates that will be used for the filenames when saving the data in multiple files (single_file = FALSE). It must be a @@ -59,9 +55,15 @@ between 1 and 10. If it is NULL, the coordinate corresponding the the start date dimension or the first Date of each time step will be used as the name of the files. It is NULL by default.} -\item{dat_dim}{A character string indicating the name of dataset dimension. -By default, it is set to 'dataset'. It can be NULL if there is no dataset -dimension.} +\item{varname}{A character string indicating the name of the variable to be +saved.} + +\item{metadata}{A named list where each element is a variable containing the +corresponding information. The information must be contained in a list of +lists for each variable.} + +\item{Datasets}{A vector of character string indicating the names of the +datasets.} \item{sdate_dim}{A character string indicating the name of the start date dimension. By default, it is set to 'sdate'. It can be NULL if there is no @@ -71,12 +73,16 @@ start date dimension.} dimension. By default, it is set to 'time'. It can be NULL if there is no forecast time dimension.} -\item{var_dim}{A character string indicating the name of variable dimension. -By default, it is set to 'var'. It can be NULL if there is no variable +\item{memb_dim}{A character string indicating the name of the member +dimension. By default, it is set to 'member'. 
It can be NULL if there is no +member dimension.} + +\item{dat_dim}{A character string indicating the name of dataset dimension. +By default, it is set to 'dataset'. It can be NULL if there is no dataset dimension.} -\item{memb_dim}{A character string indicating the name of the member dimension. -By default, it is set to 'member'. It can be NULL if there is no member +\item{var_dim}{A character string indicating the name of variable dimension. +By default, it is set to 'var'. It can be NULL if there is no variable dimension.} \item{drop_dims}{(optional) A vector of character strings indicating the @@ -91,24 +97,25 @@ single file (TRUE) or in multiple files (FALSE). When it is FALSE, the array is separated for datasets, variable and start date. When there are no specified time dimensions, the data will be saved in a single file by default. The output file name when 'single_file' is TRUE is a character -string containing: __.nc; when it is FALSE, -it is _.nc. It is FALSE by default.} +string containing: '__.nc'; when it is FALSE, +it is '_.nc'. It is FALSE by default.} \item{extra_string}{(Optional) A character string to be included as part of the file name, for instance, to identify member or realization. When single_file is TRUE, the 'extra_string' will substitute all the default file name; when single_file is FALSE, the 'extra_string' will be added -in the file name as: __.nc. It is NULL by default.} +in the file name as: '__.nc'. It is NULL by +default.} \item{global_attrs}{(Optional) A list with elements containing the global attributes to be saved in the NetCDF.} -\item{units_hours_since}{(Optional) A logical value only used for the case -Dates have forecast time and start date dimension and single_file is TRUE. -When it is TRUE, it saves the forecast time with units of 'hours since'; -if it is FALSE, the time units will be a number of time steps with its -corresponding frequency (e.g. n days, n months or n hours). It is TRUE -by default.} +\item{units_hours_since}{(Optional) A logical value only available for the +case: Dates have forecast time and start date dimension, single_file is +TRUE and 'time_bounds' is NULL. When it is TRUE, it saves the forecast time +with units of 'hours since'; if it is FALSE, the time units will be a number +of time steps with its corresponding frequency (e.g. n days, n months or n +hours). It is TRUE by default.} } \value{ Multiple or single NetCDF files containing the data array.\cr diff --git a/tests/testthat/test-CST_SaveExp.R b/tests/testthat/test-CST_SaveExp.R index 65c0cb22..b4e17554 100644 --- a/tests/testthat/test-CST_SaveExp.R +++ b/tests/testthat/test-CST_SaveExp.R @@ -90,14 +90,6 @@ test_that("1. Input checks: CST_SaveExp", { CST_SaveExp(data = cube0), paste0("Level 'attrs' must be a list with at least 'Dates' element.") ) - cube0$attrs <- NULL - cube0$attrs$Dates <- dates2 - expect_warning( - CST_SaveExp(data = cube0, sdate_dim = c('sdate', 'sweek'), - ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL, - var_dim = NULL, single_file = FALSE), - paste0("Element 'coords' not found. No coordinates will be used.") - ) # sdate_dim suppressWarnings( @@ -106,13 +98,6 @@ test_that("1. 
Input checks: CST_SaveExp", { paste0("Parameter 'sdate_dim' must be a character string.") ) ) - expect_warning( - CST_SaveExp(data = cube1, sdate_dim = c('sdate', 'sweek'), - ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL, - var_dim = NULL, extra_string = 'test'), - paste0("Parameter 'sdate_dim' has length greater than 1 and ", - "only the first element will be used.") - ) suppressWarnings( expect_error( CST_SaveExp(data = cube1, sdate_dim = 'a', ftime_dim = 'ftime'), @@ -132,25 +117,6 @@ test_that("1. Input checks: CST_SaveExp", { paste0("Parameter 'startdates' doesn't have the same length ", "as dimension '", 'sdate',"', it will not be used.") ) - # metadata - cube1$attrs$Variable$metadata <- 'metadata' - expect_error( - CST_SaveExp(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL), - paste0("Element metadata from Variable element in attrs must be a list.") - ) - cube1$attrs$Variable$metadata <- list(test = 'var') - expect_warning( - CST_SaveExp(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL), - paste0("Metadata is not found for any coordinate.") - ) - cube1$attrs$Variable$metadata <- list(var = 'var') - expect_warning( - CST_SaveExp(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL), - paste0("Metadata is not found for any variable.") - ) # memb_dim suppressWarnings( expect_error( @@ -165,6 +131,13 @@ test_that("1. Input checks: CST_SaveExp", { "as NULL if there is no member dimension.") ) ) + # metadata + cube1$attrs$Variable$metadata <- 'metadata' + expect_error( + CST_SaveExp(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, + dat_dim = NULL, var_dim = NULL), + paste0("Element metadata from Variable element in attrs must be a list.") + ) }) ############################################## @@ -189,11 +162,13 @@ test_that("1. Input checks", { ) # Dates expect_error( - SaveExp(data = array(1, dim = c(a = 1)), Dates = 'a'), + SaveExp(data = array(1, dim = c(a = 1)), Dates = 'a', sdate_dim = NULL, + memb_dim = NULL, ftime_dim = 'a', dat_dim = NULL, var_dim = NULL), paste0("Parameter 'Dates' must be of 'POSIXct' or 'Dates' class.") ) expect_error( - SaveExp(data = array(1, dim = c(a = 1)), + SaveExp(data = array(1, dim = c(time = 1, sdate = 1, member = 1)), + dat_dim = NULL, var_dim = NULL, Dates = as.Date('2022-02-01', format = "%Y-%m-%d")), paste0("Parameter 'Dates' must have dimension names.") ) @@ -255,13 +230,6 @@ test_that("1. Input checks", { paste0("Parameter 'ftime_dim' is not found in 'data' dimension.") ) # Dates dimension check - expect_error( - SaveExp(data = dat4, coords = coords4, - metadata = list(tas = list(level = '2m')), - Dates = dates4, ftime_dim = NULL, memb_dim = NULL, - dat_dim = NULL, var_dim = NULL), - paste0("Parameter 'Dates' must have 'ftime_dim'.") - ) expect_warning( SaveExp(data = dat4, coords = coords4, metadata = list(tas = list(level = '2m')), @@ -276,7 +244,8 @@ test_that("1. Input checks", { metadata = list(tas = list(level = '2m')), Dates = dates5, ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL, var_dim = NULL, sdate_dim = NULL), - paste0("Parameter 'Dates' must have only 'sdate_dim' and 'ftime_dim' dimensions.") + paste0("Parameter 'Dates' can have only 'sdate_dim' and 'ftime_dim' ", + "dimensions of length greater than 1.") ) expect_warning( SaveExp(data = dat2, coords = coords2, @@ -306,15 +275,6 @@ test_that("1. Input checks", { paste0("Dates must be provided if 'data' must be saved in separated files. 
", "All data will be saved in a single file.") ) - # (dat4) Without ftime_dim - expect_error( - SaveExp(data = dat4, coords = coords4, - metadata = list(tas = list(level = '2m')), - Dates = dates4, ftime_dim = NULL, memb_dim = NULL, - dat_dim = NULL, var_dim = NULL, sdate_dim = 'sdate', - single_file = TRUE), - paste0("Parameter 'Dates' must have 'ftime_dim'.") - ) }) ############################################## -- GitLab From 6eef0c208b28bd55fea3428ddf5ab955cad2c2f4 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Fri, 1 Dec 2023 16:10:15 +0100 Subject: [PATCH 23/66] Update usecase for saveexp --- R/CST_SaveExp.R | 6 +- inst/doc/usecase.md | 1 + inst/doc/usecase/UseCase4_CST_SaveExp.R | 245 +++++++++++++++++------- 3 files changed, 182 insertions(+), 70 deletions(-) diff --git a/R/CST_SaveExp.R b/R/CST_SaveExp.R index c5742031..ea2cf6ac 100644 --- a/R/CST_SaveExp.R +++ b/R/CST_SaveExp.R @@ -298,6 +298,9 @@ SaveExp <- function(data, destination = "./", coords = NULL, if (is.null(dimnames)) { stop("Parameter 'data' must be an array with named dimensions.") } + if (!is.null(attributes(data)$dimensions)) { + attributes(data)$dimensions <- NULL + } # destination if (!is.character(destination) | length(destination) > 1) { stop("Parameter 'destination' must be a character string of one element ", @@ -606,9 +609,6 @@ SaveExp <- function(data, destination = "./", coords = NULL, if (any(dimnames != alldims)) { data <- Reorder(data, alldims) dimnames <- names(dim(data)) - if (!is.null(attr(data, 'dimensions'))) { - attr(data, 'dimensions') <- dimnames - } } ## NetCDF dimensions definition diff --git a/inst/doc/usecase.md b/inst/doc/usecase.md index 1fc17ae4..8ebf0478 100644 --- a/inst/doc/usecase.md +++ b/inst/doc/usecase.md @@ -6,6 +6,7 @@ In this document, you can link to the example scripts for different usage of the 1. [Bias adjustment for assessment of an extreme event](inst/doc/usecase/UseCase1_WindEvent_March2018.R) 2. [Precipitation Downscaling with RainFARM RF 4](inst/doc/usecase/UseCase2_PrecipitationDownscaling_RainFARM_RF4.R) 3. [Precipitation Downscaling with RainFARM RF 100](inst/doc/usecase/UseCase2_PrecipitationDownscaling_RainFARM_RF100.R) + 4. [Seasonal forecasts for a river flow](inst/doc/usecase/UseCase3_data_preparation_SCHEME_model.R) 2. **Examples using 's2dv_cube'** 1. 
[Save 's2dv_cube'](inst/doc/usecase/UseCase4_SaveExp.R) \ No newline at end of file diff --git a/inst/doc/usecase/UseCase4_CST_SaveExp.R b/inst/doc/usecase/UseCase4_CST_SaveExp.R index a6cbbe86..4a7db4f4 100644 --- a/inst/doc/usecase/UseCase4_CST_SaveExp.R +++ b/inst/doc/usecase/UseCase4_CST_SaveExp.R @@ -1,59 +1,102 @@ -#**************************************************************************** +#******************************************************************************* # Script to test examples of CST_SaveExp # Eva Rifà Rovira # 29/11/2024 -#**************************************************************************** +#******************************************************************************* + +#------------------------------------------------------------------------------- +# Needed packages before a new version is installed +library(CSIndicators) +library(multiApply) +library(easyNCDF) +library(s2dv) +library(ClimProjDiags) +library(CSTools) +library(startR) +source("https://earth.bsc.es/gitlab/external/cstools/-/raw/develop-SaveCube/R/CST_SaveExp.R") +source("https://earth.bsc.es/gitlab/external/cstools/-/raw/develop-SaveCube/R/zzz.R") +################################################################################ +# Tests: +#----------------------------------------------------- # Tests 1: Multidimensional array and Dates, without metadata and coordinates -# (1) Minimal use case, without Dates +#----------------------------------------------------- +# (1.1) Minimal use case, without Dates data <- array(1:5, dim = c(sdate = 5, lon = 4, lat = 4)) SaveExp(data, ftime_dim = NULL, memb_dim = NULL, dat_dim = NULL, var_dim = NULL, single_file = TRUE) SaveExp(data, ftime_dim = NULL, memb_dim = NULL, dat_dim = NULL, - var_dim = NULL, sdate_dim = NULL, single_file = TRUE) # same result + var_dim = NULL, sdate_dim = NULL, single_file = FALSE) # same result -# (2) Forecast time dimension, without Dates +# (1.2) Forecast time dimension, without Dates data <- array(1:5, dim = c(ftime = 5, lon = 4, lat = 4)) SaveExp(data, ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL, var_dim = NULL, sdate_dim = NULL, single_file = TRUE) -# (2) Start date dimension, without Dates +# (1.3) Start date dimension, without Dates data <- array(1:5, dim = c(sdate = 5, lon = 4, lat = 4)) SaveExp(data, ftime_dim = NULL, memb_dim = NULL, dat_dim = NULL, var_dim = NULL, sdate_dim = 'sdate', single_file = TRUE) -# (3) Only forecast time dimension (no sdate), with Dates +# (1.4) Only forecast time dimension (no sdate), with Dates data <- array(1:5, dim = c(ftime = 5, lon = 4, lat = 4)) dates <- c('20000101', '20010102', '20020103', '20030104', '20040105') dates <- as.Date(dates, format = "%Y%m%d", tz = "UTC") dim(dates) <- c(ftime = 5) SaveExp(data, ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL, var_dim = NULL, sdate_dim = NULL, Dates = dates, single_file = FALSE) +SaveExp(data, ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL, + var_dim = NULL, sdate_dim = NULL, Dates = dates, single_file = TRUE) # For this case we have the same result using: single_file = FALSE /TRUE. 
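# Editor's aside (a hedged sketch, not part of the original script): one way
# to verify what test (1.4) wrote is to read the saved file back with
# easyNCDF; the file-name pattern below assumes the default output lands in
# the working directory.
library(easyNCDF)
nc_files <- list.files(getwd(), pattern = "\\.nc$", full.names = TRUE)
if (length(nc_files) > 0) print(NcReadDims(nc_files[1]))  # expect ftime, lat, lon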
-# (4) Forecast time and 1 sdate, with Dates +# (1.5) Forecast time and 1 sdate, with Dates data <- array(1:5, dim = c(sdate = 1, ftime = 5, lon = 4, lat = 4)) dates <- c('20000101', '20010102', '20020103', '20030104', '20040105') dates <- as.Date(dates, format = "%Y%m%d", tz = "UTC") dim(dates) <- c(sdate = 1, ftime = 5) -SaveExp(data, ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL, - var_dim = NULL, sdate_dim = 'sdate', Dates = dates, single_file = TRUE) SaveExp(data, ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL, var_dim = NULL, sdate_dim = 'sdate', Dates = dates, single_file = FALSE) +SaveExp(data, ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL, + var_dim = NULL, sdate_dim = 'sdate', Dates = dates, single_file = TRUE) -################################################################################ - -# Tests 2: Test sample data from CSTools - -# (a) Data loaded with Start -# (1) lonlat_temp_st$exp in a single file with units 'hours since' - +# (1.6) Test global attributes +SaveExp(data, ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL, + var_dim = NULL, sdate_dim = 'sdate', Dates = dates, single_file = TRUE, + extra_string = 'test', + global_attrs = list(system = 'tes1', reference = 'test2')) +# (1.6) Test global attributes +SaveExp(data, ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL, + var_dim = NULL, sdate_dim = 'sdate', Dates = dates, single_file = FALSE, + extra_string = 'test', + global_attrs = list(system = 'tes1', reference = 'test2')) +#----------------------------------------------------- +# Tests 2: Test sample data from Start and from Load +#----------------------------------------------------- + +# (2.1) Test SaveExp +exp <- CSTools::lonlat_prec_st +data <- exp$data +Dates = exp$attrs$Dates +coords = exp$coords +varname = exp$attrs$Variable$varName +metadata = exp$attrs$Variable$metadata +SaveExp(data = data, Dates = Dates, coords = coords, varname = varname, + metadata = metadata, ftime_dim = 'ftime', startdates = 1:4, + var_dim = 'var', memb_dim = 'member', dat_dim = 'dataset', + sdate_dim = 'sdate', single_file = FALSE) +SaveExp(data = data, Dates = Dates, coords = coords, varname = varname, + metadata = metadata, ftime_dim = 'ftime', startdates = 1:4, + var_dim = 'var', memb_dim = 'member', dat_dim = 'dataset', + sdate_dim = 'sdate', single_file = TRUE) + +# (2.2) lonlat_temp_st$exp in a single file with units 'hours since' +# (2.2.1) We save the data data <- lonlat_temp_st$exp CST_SaveExp(data = data, ftime_dim = 'ftime', var_dim = 'var', dat_dim = 'dataset', sdate_dim = 'sdate', single_file = TRUE) -# Now we read the output with Start: +# (2.2.2) Now we read the output with Start: sdate <- as.vector(lonlat_temp_st$exp$coords$sdate) path <- paste0(getwd(),"/$var$_", sdate[1], "_", sdate[length(sdate)], ".nc") out <- Start(dat = path, @@ -71,13 +114,14 @@ out <- Start(dat = path, attributes(out)$Variables$common$ftime -# (2) lonlat_temp_st$exp in a single file with units of time frequency +# (2.3) lonlat_temp_st$exp in a single file with units of time frequency +# (2.3.1) we save the data data <- lonlat_temp_st$exp CST_SaveExp(data = data, ftime_dim = 'ftime', var_dim = 'var', dat_dim = 'dataset', sdate_dim = 'sdate', single_file = TRUE, units_hours_since = FALSE) -# Now we read the output with Start: +# (2.3.2) Now we read the output with Start: sdate <- as.vector(lonlat_temp_st$exp$coords$sdate) path <- paste0(getwd(),"/$var$_", sdate[1], "_", sdate[length(sdate)], ".nc") out <- Start(dat = path, @@ -97,11 +141,13 @@ out <- Start(dat = 
path, attributes(out)$Variables$common$ftime # [1] "1 months" "2 months" "3 months" -# (3) lonlat_temp_st$exp in separated files with units of hours since +# (2.4) lonlat_temp_st$exp in separated files with units of hours since +# (2.4.1) we save the data data <- lonlat_temp_st$exp CST_SaveExp(data = data, ftime_dim = 'ftime', var_dim = 'var', dat_dim = 'dataset', sdate_dim = 'sdate', single_file = FALSE) +# (2.4.2) we load the data sdate <- as.vector(lonlat_temp_st$exp$coords$sdate) path <- paste0(getwd(),"/dat1/$var$/$var$_$sdate$.nc") @@ -116,13 +162,14 @@ out <- Start(dat = path, var = 'tas', ftime = NULL), retrieve = TRUE) -# (4) lonlat_prec_st$exp in a single file with units of time frequency +# (2.5) lonlat_prec_st$exp in a single file with units of time frequency +# (2.5.1) we save the data data <- lonlat_prec_st CST_SaveExp(data = data, ftime_dim = 'ftime', var_dim = 'var', dat_dim = 'dataset', sdate_dim = 'sdate', single_file = TRUE, units_hours_since = FALSE) -# Now we read the output with Start: +# (2.5.2) we read the data sdate <- as.vector(data$coords$sdate) path <- paste0(getwd(),"/$var$_", sdate[1], "_", sdate[length(sdate)], ".nc") out <- Start(dat = path, @@ -146,12 +193,13 @@ attributes(out)$Variables$common$ftime # [22] "22 days" "23 days" "24 days" "25 days" "26 days" "27 days" "28 days" # [29] "29 days" "30 days" "31 days" -# (5) Test observations +# (2.6) Test observations: lonlat_temp +# (2.6.1) Save the data data <- lonlat_temp$obs CST_SaveExp(data = data, ftime_dim = 'ftime', memb_dim = NULL, var_dim = NULL, dat_dim = 'dataset', sdate_dim = 'sdate', single_file = TRUE, units_hours_since = FALSE) -# Now we read the output with Start: +# (2.6.2) Now we read the output with Start: sdate <- c('20001101', '20051101') path <- paste0(getwd(),"/$var$_", sdate[1], "_", sdate[length(sdate)], ".nc") out <- Start(dat = path, @@ -162,22 +210,21 @@ out <- Start(dat = path, member = 1, sdate = 'all', return_vars = list( - lon = 'dat', - lat = 'dat', - ftime = NULL, - sdate = NULL), + lon = 'dat', + lat = 'dat', + ftime = NULL, + sdate = NULL), retrieve = TRUE) dim(out) attributes(out)$Variables$common$ftime -# (b) Data loaded with Load -data <- lonlat_temp$exp -data <- lonlat_temp$obs +# (2.7) Test lonlat_prec +# (2.7.1) Save the data data <- lonlat_prec -CST_SaveExp(data = data, ftime_dim = 'ftime', +CST_SaveExp(data = data, ftime_dim = 'ftime', memb_dim = NULL, var_dim = NULL, dat_dim = 'dataset', sdate_dim = 'sdate', single_file = TRUE, units_hours_since = FALSE) -# Now we read the output with Start: +# (2.7.2) Now we read the output with Start: sdate <- as.vector(data$coords$sdate) path <- paste0(getwd(),"/$var$_", sdate[1], "_", sdate[length(sdate)], ".nc") out <- Start(dat = path, @@ -196,7 +243,7 @@ out <- Start(dat = path, dim(out) lonlat_prec$dims -# Test with ftime_dim NULL +# (2.8) Test with ftime_dim NULL data <- lonlat_temp$exp data <- CST_Subset(data, along = 'ftime', indices = 1, drop = 'selected') @@ -204,36 +251,37 @@ CST_SaveExp(data = data, ftime_dim = NULL, var_dim = NULL, dat_dim = 'dataset', sdate_dim = 'sdate', single_file = FALSE, units_hours_since = FALSE) -################################################################################ +#----------------------------------------------------- # Test 3: Special cases +#----------------------------------------------------- -# (1) two variables and two datasets in separated files - +# (3.1) Two variables and two datasets in separated files +# (3.1.1) We load the data from Start repos <- 
"/esarchive/exp/ecmwf/system5_m1/monthly_mean/$var$_f6h/$var$_$sdate$.nc" repos2 <- "/esarchive/exp/ecmwf/system4_m1/monthly_mean/$var$_f6h/$var$_$sdate$.nc" data3 <- Start(dat = list(list(name = 'system4_m1', path = repos2), list(name = 'system5_m1', path = repos)), - var = c('tas', 'sfcWind'), - sdate = c('20160101', '20170101'), - ensemble = indices(1), - time = indices(1:2), - lat = indices(1:10), - lon = indices(1:10), - synonims = list(lat = c('lat', 'latitude'), - lon = c('lon', 'longitude')), - return_vars = list(time = 'sdate', - longitude = 'dat', - latitude = 'dat'), - metadata_dims = c('dat', 'var'), - retrieve = T - ) + var = c('tas', 'sfcWind'), + sdate = c('20160101', '20170101'), + ensemble = indices(1), + time = indices(1:2), + lat = indices(1:10), + lon = indices(1:10), + synonims = list(lat = c('lat', 'latitude'), + lon = c('lon', 'longitude')), + return_vars = list(time = 'sdate', + longitude = 'dat', + latitude = 'dat'), + metadata_dims = c('dat', 'var'), + retrieve = T) cube3 <- as.s2dv_cube(data3) +# (3.1.2) We save the data CST_SaveExp(data = cube3, ftime_dim = 'time', var_dim = 'var', memb_dim = 'ensemble', dat_dim = 'dat') -# We read again the data with start +# (3.1.3) We read again the data with start repos <- paste0(getwd(), "/system4_m1/$var$/$var$_$sdate$.nc") repos2 <- paste0(getwd(), "/system5_m1/$var$/$var$_$sdate$.nc") @@ -259,13 +307,13 @@ summary(data3) dim(data3) dim(data3out) -# (2) two variables and two datasets in the same file +# (3.2) Two variables and two datasets in the same file CST_SaveExp(data = cube3, ftime_dim = 'time', var_dim = 'var', - memb_dim = 'ensemble', dat_dim = 'dat', - single_file = TRUE) + memb_dim = 'ensemble', dat_dim = 'dat', + single_file = TRUE) -# (3) Observations +# (3.3) Observations (from startR usecase) repos_exp <- paste0('/esarchive/exp/ecearth/a1tr/cmorfiles/CMIP/EC-Earth-Consortium/', 'EC-Earth3/historical/r24i1p1f1/Amon/$var$/gr/v20190312/', '$var$_Amon_EC-Earth3_historical_r24i1p1f1_gr_$sdate$01-$sdate$12.nc') @@ -284,11 +332,7 @@ exp <- Start(dat = repos_exp, lat = NULL, time = 'sdate'), retrieve = FALSE) -lats <- attr(exp, 'Variables')$common$lat -lons <- attr(exp, 'Variables')$common$lon -## The 'time' attribute is a two-dim array dates <- attr(exp, 'Variables')$common$time -dim(dates) repos_obs <- '/esarchive/recon/ecmwf/erainterim/monthly_mean/$var$_f6h/$var$_$date$.nc' obs <- Start(dat = repos_obs, @@ -308,13 +352,80 @@ obs <- Start(dat = repos_obs, lat = NULL, time = 'date'), retrieve = TRUE) -dim(obs) -attributes(obs)$Variables$common$time obscube <- as.s2dv_cube(obs) CST_SaveExp(data = obscube, ftime_dim = 'time', var_dim = 'var', - memb_dim = 'ensemble', dat_dim = 'dat', - single_file = TRUE) - - + memb_dim = NULL, dat_dim = 'dat', + single_file = TRUE, extra_string = 'obs_tas') +CST_SaveExp(data = obscube, ftime_dim = 'time', var_dim = 'var', + memb_dim = NULL, dat_dim = 'dat', + single_file = FALSE, extra_string = 'obs_tas') + +# (4) Time bounds: +# example: /esarchive/exp/ncep/cfs-v2/weekly_mean/s2s/tas_f24h/tas_20231128.nc +library(CSIndicators) +exp <- CSTools::lonlat_prec_st +exp$attrs$Dates <- Reorder(exp$attrs$Dates, c(2,1)) +res <- CST_PeriodAccumulation(data = exp, time_dim = 'ftime', + start = list(10, 03), end = list(20, 03)) +# > dim(res$attrs$Dates) +# sdate +# 3 +# (4.1) All data in a single file +CST_SaveExp(data = res, ftime_dim = NULL, var_dim = 'var', + memb_dim = 'member', dat_dim = 'dataset', + startdates = res$attrs$Dates, single_file = TRUE) +# (4.2) All data in separated files 
+CST_SaveExp(data = res, ftime_dim = NULL, var_dim = 'var', + memb_dim = 'member', dat_dim = 'dataset', + startdates = res$attrs$Dates, single_file = FALSE) +# (4.3) +CST_SaveExp(data = res, ftime_dim = NULL, var_dim = 'var', + memb_dim = 'member', dat_dim = 'dataset', + startdates = 1:4, single_file = FALSE) + +# (4.4) We change the time dimensions to ftime and sdate_dim = NULL +dim(res$attrs$time_bounds[[1]]) <- c(time = 3) +dim(res$attrs$time_bounds[[2]]) <- c(time = 3) +dim(res$attrs$Dates) <- c(time = 3) +dim(res$data) <- c(dataset = 1, var = 1, member = 6, time = 3, lat = 4, lon = 4) + +# (4.4.1) All data in a single file +CST_SaveExp(data = res, ftime_dim = 'time', var_dim = 'var', + memb_dim = 'member', dat_dim = 'dataset', sdate_dim = NULL, + startdates = res$attrs$Dates, single_file = TRUE) +# (4.4.2) All data in separated files +CST_SaveExp(data = res, ftime_dim = 'time', var_dim = 'var', + memb_dim = 'member', dat_dim = 'dataset', sdate_dim = NULL, + startdates = res$attrs$Dates, single_file = FALSE) + +# (4.5) Forecast time units +CST_SaveExp(data = res, ftime_dim = 'time', var_dim = 'var', + memb_dim = 'member', dat_dim = 'dataset', sdate_dim = NULL, + startdates = res$attrs$Dates, single_file = TRUE, + units_hours_since = FALSE) + +#----------------------------------------------------- +# Test 4: Read data with Load +#----------------------------------------------------- +data <- lonlat_temp$exp +# data <- lonlat_temp$obs +# data <- lonlat_prec +CST_SaveExp(data = data, ftime_dim = 'ftime', + var_dim = NULL, dat_dim = 'dataset', sdate_dim = 'sdate', + single_file = FALSE, units_hours_since = FALSE) +# Now we read the output with Load: +# startDates <- c('20001101', '20011101', '20021101', +# '20031101', '20041101', '20051101') + +# infile <- list(path = paste0(getwd(), +# '/system5c3s/$VAR_NAME$/$VAR_NAME$_$START_DATE$.nc')) +# out_lonlat_temp <- CST_Load(var = 'tas', exp = list(infile), obs = NULL, +# sdates = startDates, +# nmember = 15, +# leadtimemax = 3, +# latmin = 27, latmax = 48, +# lonmin = -12, lonmax = 40, +# output = "lonlat") +# Error ################################################################################ \ No newline at end of file -- GitLab From 9d16b4558909960c5ea03843f87aed503ca513e9 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Fri, 1 Dec 2023 16:20:45 +0100 Subject: [PATCH 24/66] Correct usecase --- inst/doc/usecase/UseCase4_CST_SaveExp.R | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/inst/doc/usecase/UseCase4_CST_SaveExp.R b/inst/doc/usecase/UseCase4_CST_SaveExp.R index 4a7db4f4..ba43a0ac 100644 --- a/inst/doc/usecase/UseCase4_CST_SaveExp.R +++ b/inst/doc/usecase/UseCase4_CST_SaveExp.R @@ -360,7 +360,10 @@ CST_SaveExp(data = obscube, ftime_dim = 'time', var_dim = 'var', memb_dim = NULL, dat_dim = 'dat', single_file = FALSE, extra_string = 'obs_tas') +#----------------------------------------------------- # (4) Time bounds: +#----------------------------------------------------- + # example: /esarchive/exp/ncep/cfs-v2/weekly_mean/s2s/tas_f24h/tas_20231128.nc library(CSIndicators) exp <- CSTools::lonlat_prec_st @@ -405,7 +408,7 @@ CST_SaveExp(data = res, ftime_dim = 'time', var_dim = 'var', units_hours_since = FALSE) #----------------------------------------------------- -# Test 4: Read data with Load +# Test 5: Read data with Load #----------------------------------------------------- data <- lonlat_temp$exp -- GitLab From 0f814db160ffd4e36eba072f2af1c8d0a12f053d Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Fri, 1 Dec 2023 
16:48:46 +0100 Subject: [PATCH 25/66] Minor change --- inst/doc/usecase/UseCase4_CST_SaveExp.R | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/inst/doc/usecase/UseCase4_CST_SaveExp.R b/inst/doc/usecase/UseCase4_CST_SaveExp.R index ba43a0ac..10ca74d6 100644 --- a/inst/doc/usecase/UseCase4_CST_SaveExp.R +++ b/inst/doc/usecase/UseCase4_CST_SaveExp.R @@ -64,7 +64,7 @@ SaveExp(data, ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL, var_dim = NULL, sdate_dim = 'sdate', Dates = dates, single_file = TRUE, extra_string = 'test', global_attrs = list(system = 'tes1', reference = 'test2')) -# (1.6) Test global attributes +# (1.7) Test global attributes SaveExp(data, ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL, var_dim = NULL, sdate_dim = 'sdate', Dates = dates, single_file = FALSE, extra_string = 'test', @@ -361,7 +361,7 @@ CST_SaveExp(data = obscube, ftime_dim = 'time', var_dim = 'var', single_file = FALSE, extra_string = 'obs_tas') #----------------------------------------------------- -# (4) Time bounds: +# Test 4: Time bounds: #----------------------------------------------------- # example: /esarchive/exp/ncep/cfs-v2/weekly_mean/s2s/tas_f24h/tas_20231128.nc -- GitLab From 92fa27f2dd097f3f08462b4c5f19d8b1d3ceceea Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Thu, 14 Dec 2023 17:35:07 +0100 Subject: [PATCH 26/66] Corrections after review: unlimited dim, dimension order, reduce warnings --- R/CST_SaveExp.R | 71 ++++++++++++++++++++++++++++--------------------- 1 file changed, 40 insertions(+), 31 deletions(-) diff --git a/R/CST_SaveExp.R b/R/CST_SaveExp.R index ea2cf6ac..270cbf2c 100644 --- a/R/CST_SaveExp.R +++ b/R/CST_SaveExp.R @@ -519,11 +519,11 @@ SaveExp <- function(data, destination = "./", coords = NULL, return(x) }) } + units_hours_since <- TRUE } # add sdate if needed if (is.null(sdate_dim)) { if (!single_file) { - warning("A 'sdate' dimension of length 1 will be added to 'Dates'.") dim(Dates) <- c(dim(Dates), sdate = 1) dim(data) <- c(dim(data), sdate = 1) dimnames <- names(dim(data)) @@ -542,6 +542,7 @@ SaveExp <- function(data, destination = "./", coords = NULL, } } } + units_hours_since <- TRUE } } # startdates @@ -603,12 +604,6 @@ SaveExp <- function(data, destination = "./", coords = NULL, if (!all(dimnames %in% alldims)) { unknown_dims <- dimnames[which(!dimnames %in% alldims)] memb_dim <- c(memb_dim, unknown_dims) - alldims <- c(dat_dim, var_dim, sdate_dim, lon_dim, lat_dim, memb_dim, ftime_dim) - } - # Reorder - if (any(dimnames != alldims)) { - data <- Reorder(data, alldims) - dimnames <- names(dim(data)) } ## NetCDF dimensions definition @@ -661,7 +656,9 @@ SaveExp <- function(data, destination = "./", coords = NULL, for (i in 1:n_datasets) { path <- file.path(destination, Datasets[i], varname) for (j in 1:n_vars) { - dir.create(path[j], recursive = TRUE) + if (!dir.exists(path[j])) { + dir.create(path[j], recursive = TRUE) + } startdates <- gsub("-", "", startdates) dim(startdates) <- c(length(startdates)) names(dim(startdates)) <- sdate_dim @@ -674,17 +671,19 @@ SaveExp <- function(data, destination = "./", coords = NULL, } else { data_subset <- Subset(data, c(dat_dim, var_dim), list(i, j), drop = 'selected') } + target <- names(dim(data_subset))[which(!names(dim(data_subset)) %in% c(sdate_dim, ftime_dim))] + target_dims_data <- c(target, ftime_dim) if (is.null(Dates)) { input_data <- list(data_subset, startdates) - target_dims <- list(c(lon_dim, lat_dim, memb_dim, ftime_dim), NULL) + target_dims <- list(target_dims_data, NULL) } else if 
(!is.null(time_bounds)) { input_data <- list(data_subset, startdates, Dates, time_bounds[[1]], time_bounds[[2]]) - target_dims = list(c(lon_dim, lat_dim, memb_dim, ftime_dim), NULL, + target_dims = list(target_dims_data, NULL, ftime_dim, ftime_dim, ftime_dim) } else { input_data <- list(data_subset, startdates, Dates) - target_dims = list(c(lon_dim, lat_dim, memb_dim, ftime_dim), NULL, ftime_dim) + target_dims = list(target_dims_data, NULL, ftime_dim) } Apply(data = input_data, target_dims = target_dims, @@ -757,12 +756,6 @@ SaveExp <- function(data, destination = "./", coords = NULL, units <- paste('hours since', paste(sdates, collapse = ', ')) vals <- leadtimes } - # Add ftime var - dim(vals) <- dim(data)[ftime_dim] - coords[[ftime_dim]] <- vals - attrs <- list(units = units, calendar = 'proleptic_gregorian', - longname = ftime_dim, unlim = TRUE) - attr(coords[[ftime_dim]], 'variables')[[ftime_dim]] <- attrs # Add time_bnds if (!is.null(time_bounds)) { @@ -795,6 +788,21 @@ SaveExp <- function(data, destination = "./", coords = NULL, longname = 'time bounds', unlim = FALSE) attr(coords[['time_bnds']], 'variables')$time_bnds <- attrs } + # Add ftime var + dim(vals) <- dim(data)[ftime_dim] + coords[[ftime_dim]] <- vals + attrs <- list(units = units, calendar = 'proleptic_gregorian', + longname = ftime_dim, + dim = list(list(name = ftime_dim, unlim = TRUE))) + attr(coords[[ftime_dim]], 'variables')[[ftime_dim]] <- attrs + for (j in 1:n_vars) { + metadata[[varname[j]]]$dim <- list(list(name = ftime_dim, unlim = TRUE)) + } + # Reorder ftime_dim to last + if (length(dim(data)) != which(names(dim(data)) == ftime_dim)) { + order <- c(names(dim(data))[which(!names(dim(data)) %in% c(ftime_dim))], ftime_dim) + data <- Reorder(data, order) + } } # var definition @@ -808,7 +816,6 @@ SaveExp <- function(data, destination = "./", coords = NULL, coords[[varname_j]] <- Subset(data, var_dim, j, drop = 'selected') } if (!is.null(metadata_j)) { - metadata_j$dim <- NULL attr(coords[[varname_j]], 'variables') <- list(metadata_j) names(attributes(coords[[varname_j]])$variables) <- varname_j } @@ -845,29 +852,31 @@ SaveExp <- function(data, destination = "./", coords = NULL, metadata_var = NULL, extra_string = NULL, global_attrs = NULL) { if (!is.null(dates)) { + if (!any(is.null(time_bnds1), is.null(time_bnds2))) { + time_bnds <- c(time_bnds1, time_bnds2) + time_bnds <- as.numeric(difftime(time_bnds, dates[1], units = "hours")) + dim(time_bnds) <- c(dim(data)[ftime_dim], bnds = 2) + time_bnds <- Reorder(time_bnds, c('bnds', ftime_dim)) + coords[['time_bnds']] <- time_bnds + attrs <- list(units = paste('hours since', dates[1]), + calendar = 'proleptic_gregorian', + longname = 'time bounds') + attr(coords[['time_bnds']], 'variables')$time_bnds <- attrs + } + # Add ftime_dim differ <- as.numeric(difftime(dates, dates[1], units = "hours")) dim(differ) <- dim(data)[ftime_dim] coords[[ftime_dim]] <- differ attrs <- list(units = paste('hours since', dates[1]), calendar = 'proleptic_gregorian', - longname = ftime_dim, unlim = TRUE) + longname = ftime_dim, + dim = list(list(name = ftime_dim, unlim = TRUE))) attr(coords[[ftime_dim]], 'variables')[[ftime_dim]] <- attrs - } - if (!any(is.null(time_bnds1), is.null(time_bnds2))) { - time_bnds <- c(time_bnds1, time_bnds2) - time_bnds <- as.numeric(difftime(time_bnds, dates[1], units = "hours")) - dim(time_bnds) <- c(dim(data)[ftime_dim], bnds = 2) - time_bnds <- Reorder(time_bnds, c('bnds', ftime_dim)) - coords[['time_bnds']] <- time_bnds - attrs <- list(units = paste('hours 
since', dates[1]), - calendar = 'proleptic_gregorian', - longname = 'time bounds', unlim = FALSE) - attr(coords[['time_bnds']], 'variables')$time_bnds <- attrs + metadata_var$dim <- list(list(name = ftime_dim, unlim = TRUE)) } # Add data coords[[varname]] <- data if (!is.null(metadata_var)) { - metadata_var$dim <- NULL attr(coords[[varname]], 'variables') <- list(metadata_var) names(attributes(coords[[varname]])$variables) <- varname } -- GitLab From f27f154dfd66c6216699f4d5ef2c64c437331dcd Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Fri, 15 Dec 2023 11:51:59 +0100 Subject: [PATCH 27/66] Correct dimensions order for single_file is TRUE --- R/CST_SaveExp.R | 21 +++++++++------------ 1 file changed, 9 insertions(+), 12 deletions(-) diff --git a/R/CST_SaveExp.R b/R/CST_SaveExp.R index 270cbf2c..30b6d775 100644 --- a/R/CST_SaveExp.R +++ b/R/CST_SaveExp.R @@ -610,10 +610,14 @@ SaveExp <- function(data, destination = "./", coords = NULL, defined_dims <- NULL extra_info_dim <- NULL if (is.null(Dates)) { - filedims <- dimnames[which(!dimnames %in% c(dat_dim, var_dim))] + excluded_dims <- c(dat_dim, var_dim) } else { - filedims <- dimnames[which(!dimnames %in% c(dat_dim, var_dim, sdate_dim, ftime_dim))] + excluded_dims <- c(dat_dim, var_dim, sdate_dim, ftime_dim) } + if (single_file) { + excluded_dims <- excluded_dims[which(!excluded_dims %in% c(dat_dim))] + } + filedims <- dimnames[which(!dimnames %in% excluded_dims)] for (i_coord in filedims) { # vals @@ -649,9 +653,9 @@ SaveExp <- function(data, destination = "./", coords = NULL, } } } - # Reorder coords + # Delete unneded coords coords[c(names(coords)[!names(coords) %in% filedims])] <- NULL - coords <- coords[filedims] +# coords <- coords[filedims] if (!single_file) { for (i in 1:n_datasets) { path <- file.path(destination, Datasets[i], varname) @@ -698,12 +702,6 @@ SaveExp <- function(data, destination = "./", coords = NULL, } } } else { - # Datasets definition - # From here - if (!is.null(dat_dim)) { - coords[[dat_dim]] <- array(1:dim(data)[dat_dim], dim = dim(data)[dat_dim]) - attr(coords[[dat_dim]], 'variables') <- list(list(units = 'adim')) - } # time_bnds if (!is.null(time_bounds)) { time_bnds <- c(time_bounds[[1]], time_bounds[[2]]) @@ -777,7 +775,7 @@ SaveExp <- function(data, destination = "./", coords = NULL, differ_bnds[, i, ] <- as.numeric(difftime(time_bnds[, i, ], Dates[1, i], units = "hours")) } - # NOTE: Add a warning when they are not equally spaced? + # NOTE (TODO): Add a warning when they are not equally spaced? 
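# Editor's aside, not part of the patch: a tiny standalone illustration (with
# invented dimension names) of the 'excluded_dims' filtering this patch
# introduces above; the file dimensions are whatever remains after removing
# the dimensions that are written separately.
dimnames_demo <- c("dataset", "var", "sdate", "lat", "lon", "ftime")
excluded_demo <- c("dataset", "var", "sdate", "ftime")
print(dimnames_demo[which(!dimnames_demo %in% excluded_demo)])  # "lat" "lon"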
leadtimes_bnds <- Subset(differ_bnds, along = sdate_dim, 1, drop = 'selected') } # Add time_bnds @@ -804,7 +802,6 @@ SaveExp <- function(data, destination = "./", coords = NULL, data <- Reorder(data, order) } } - # var definition extra_info_var <- NULL for (j in 1:n_vars) { -- GitLab From 917d3155f5dc71ed1b6c89ce7580d8feee539033 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Wed, 20 Dec 2023 14:58:32 +0100 Subject: [PATCH 28/66] Add bounds attribute to 'time' and reorder variables --- R/CST_SaveExp.R | 155 ++++++++++++++---------- inst/doc/usecase/UseCase4_CST_SaveExp.R | 16 +++ man/SaveExp.Rd | 3 +- 3 files changed, 107 insertions(+), 67 deletions(-) diff --git a/R/CST_SaveExp.R b/R/CST_SaveExp.R index 30b6d775..33158008 100644 --- a/R/CST_SaveExp.R +++ b/R/CST_SaveExp.R @@ -181,7 +181,8 @@ CST_SaveExp <- function(data, destination = "./", startdates = NULL, #'@param time_bounds (Optional) A list of two arrays of dates containing #' the lower (first array) and the upper (second array) time bounds #' corresponding to Dates. Each array must have the same dimensions as Dates. -#' It is NULL by default. +#' If 'Dates' parameter is NULL, 'time_bounds' are not used. It is NULL by +#' default. #'@param startdates A vector of dates that will be used for the filenames #' when saving the data in multiple files (single_file = FALSE). It must be a #' vector of the same length as the start date dimension of data. It must be a @@ -328,19 +329,11 @@ SaveExp <- function(data, destination = "./", coords = NULL, } # coords if (!is.null(coords)) { - if (!all(names(coords) %in% dimnames)) { - coords <- coords[-which(!names(coords) %in% dimnames)] - } - for (i_coord in dimnames) { - if (i_coord %in% names(coords)) { - if (length(coords[[i_coord]]) != dim(data)[i_coord]) { - warning(paste0("Coordinate '", i_coord, "' has different lenght as ", - "its dimension and it will not be used.")) - coords[[i_coord]] <- 1:dim(data)[i_coord] - } - } else { - coords[[i_coord]] <- 1:dim(data)[i_coord] - } + if (!inherits(coords, 'list')) { + stop("Parameter 'coords' must be a named list of coordinates.") + } + if (is.null(names(coords))) { + stop("Parameter 'coords' must have names corresponding to coordinates.") } } else { coords <- sapply(dimnames, function(x) 1:dim(data)[x]) @@ -490,11 +483,15 @@ SaveExp <- function(data, destination = "./", coords = NULL, if (!identical(time_bounds_dims[[1]], time_bounds_dims[[2]])) { stop("Parameter 'time_bounds' must have 2 arrays with same dimensions.") } - name_tb <- sort(names(time_bounds_dims[[1]])) - name_dt <- sort(names(dim(Dates))) - if (!identical(dim(Dates)[name_dt], time_bounds_dims[[1]][name_tb])) { - stop(paste0("Parameter 'Dates' and 'time_bounds' must have same length ", - "of all dimensions.")) + if (is.null(Dates)) { + time_bounds <- NULL + } else { + name_tb <- sort(names(time_bounds_dims[[1]])) + name_dt <- sort(names(dim(Dates))) + if (!identical(dim(Dates)[name_dt], time_bounds_dims[[1]][name_tb])) { + stop(paste0("Parameter 'Dates' and 'time_bounds' must have same length ", + "of all dimensions.")) + } } } # Dates (2): Check dimensions @@ -599,38 +596,44 @@ SaveExp <- function(data, destination = "./", coords = NULL, Datasets <- Datasets[1:n_datasets] } + ## NetCDF dimensions definition + excluded_dims <- var_dim + if (!is.null(Dates)) { + excluded_dims <- c(excluded_dims, sdate_dim, ftime_dim) + } + if (!single_file) { + excluded_dims <- c(excluded_dims, dat_dim) + } + ## Unknown dimensions check - alldims <- c(dat_dim, var_dim, sdate_dim, lon_dim, lat_dim, 
memb_dim, ftime_dim) + alldims <- c(dat_dim, var_dim, sdate_dim, lon_dim, lat_dim, ftime_dim, memb_dim) if (!all(dimnames %in% alldims)) { unknown_dims <- dimnames[which(!dimnames %in% alldims)] memb_dim <- c(memb_dim, unknown_dims) } - ## NetCDF dimensions definition - defined_dims <- NULL - extra_info_dim <- NULL - if (is.null(Dates)) { - excluded_dims <- c(dat_dim, var_dim) - } else { - excluded_dims <- c(dat_dim, var_dim, sdate_dim, ftime_dim) - } - if (single_file) { - excluded_dims <- excluded_dims[which(!excluded_dims %in% c(dat_dim))] - } - filedims <- dimnames[which(!dimnames %in% excluded_dims)] + filedims <- c(dat_dim, var_dim, sdate_dim, lon_dim, lat_dim, ftime_dim, memb_dim) + filedims <- filedims[which(!filedims %in% excluded_dims)] + # Delete unneded coords + coords[c(names(coords)[!names(coords) %in% filedims])] <- NULL + out_coords <- NULL for (i_coord in filedims) { # vals - if (i_coord %in% names(coords)) {str - if (is.numeric(coords[[i_coord]])) { - coords[[i_coord]] <- as.vector(coords[[i_coord]]) + if (i_coord %in% names(coords)) { + if (length(coords[[i_coord]]) != dim(data)[i_coord]) { + warning(paste0("Coordinate '", i_coord, "' has different lenght as ", + "its dimension and it will not be used.")) + out_coords[[i_coord]] <- 1:dim(data)[i_coord] + } else if (is.numeric(coords[[i_coord]])) { + out_coords[[i_coord]] <- as.vector(coords[[i_coord]]) } else { - coords[[i_coord]] <- 1:dim(data)[i_coord] + out_coords[[i_coord]] <- 1:dim(data)[i_coord] } } else { - coords[[i_coord]] <- 1:dim(data)[i_coord] + out_coords[[i_coord]] <- 1:dim(data)[i_coord] } - dim(coords[[i_coord]]) <- dim(data)[i_coord] + dim(out_coords[[i_coord]]) <- dim(data)[i_coord] ## metadata if (i_coord %in% names(metadata)) { @@ -638,24 +641,22 @@ SaveExp <- function(data, destination = "./", coords = NULL, # from Start: 'lon' or 'lat' attrs <- attributes(metadata[[i_coord]])[['variables']] attrs[[i_coord]]$dim <- NULL - attr(coords[[i_coord]], 'variables') <- attrs + attr(out_coords[[i_coord]], 'variables') <- attrs } else if (inherits(metadata[[i_coord]], 'list')) { # from Start and Load: main var - attr(coords[[i_coord]], 'variables') <- list(metadata[[i_coord]]) - names(attributes(coords[[i_coord]])$variables) <- i_coord + attr(out_coords[[i_coord]], 'variables') <- list(metadata[[i_coord]]) + names(attributes(out_coords[[i_coord]])$variables) <- i_coord } else if (!is.null(attributes(metadata[[i_coord]]))) { # from Load attrs <- attributes(metadata[[i_coord]]) # We remove because some attributes can't be saved attrs <- NULL - attr(coords[[i_coord]], 'variables') <- list(attrs) - names(attributes(coords[[i_coord]])$variables) <- i_coord + attr(out_coords[[i_coord]], 'variables') <- list(attrs) + names(attributes(out_coords[[i_coord]])$variables) <- i_coord } } } - # Delete unneded coords - coords[c(names(coords)[!names(coords) %in% filedims])] <- NULL -# coords <- coords[filedims] + if (!single_file) { for (i in 1:n_datasets) { path <- file.path(destination, Datasets[i], varname) @@ -693,7 +694,7 @@ SaveExp <- function(data, destination = "./", coords = NULL, target_dims = target_dims, fun = .saveexp, destination = path[j], - coords = coords, + coords = out_coords, ftime_dim = ftime_dim, varname = varname[j], metadata_var = metadata[[varname[j]]], @@ -707,6 +708,7 @@ SaveExp <- function(data, destination = "./", coords = NULL, time_bnds <- c(time_bounds[[1]], time_bounds[[2]]) } # Dates + remove_metadata_dim <- TRUE if (!is.null(Dates)) { if (is.null(sdate_dim)) { sdates <- Dates[1] @@ -717,10 
+719,12 @@ SaveExp <- function(data, destination = "./", coords = NULL, sdates <- Subset(Dates, along = ftime_dim, 1, drop = 'selected') differ <- as.numeric(difftime(sdates, sdates[1], units = "hours")) dim(differ) <- dim(data)[sdate_dim] - coords[[sdate_dim]] <- differ + differ <- list(differ) + names(differ) <- sdate_dim + out_coords <- c(differ, out_coords) attrs <- list(units = paste('hours since', sdates[1]), calendar = 'proleptic_gregorian', longname = sdate_dim) - attr(coords[[sdate_dim]], 'variables')[[sdate_dim]] <- attrs + attr(out_coords[[sdate_dim]], 'variables')[[sdate_dim]] <- attrs # ftime definition Dates <- Reorder(Dates, c(ftime_dim, sdate_dim)) differ_ftime <- array(dim = dim(Dates)) @@ -740,19 +744,19 @@ SaveExp <- function(data, destination = "./", coords = NULL, if (all(diff(leadtimes/24) == 1)) { # daily values units <- 'days' - vals <- round(leadtimes/24) + 1 + leadtimes_vals <- round(leadtimes/24) + 1 } else if (all(diff(leadtimes/24) %in% c(28, 29, 30, 31))) { # monthly values units <- 'months' - vals <- round(leadtimes/(30.437*24)) + 1 + leadtimes_vals <- round(leadtimes/(30.437*24)) + 1 } else { # other frequency units <- 'hours' - vals <- leadtimes + 1 + leadtimes_vals <- leadtimes + 1 } } else { units <- paste('hours since', paste(sdates, collapse = ', ')) - vals <- leadtimes + leadtimes_vals <- leadtimes } # Add time_bnds @@ -780,20 +784,28 @@ SaveExp <- function(data, destination = "./", coords = NULL, } # Add time_bnds leadtimes_bnds <- Reorder(leadtimes_bnds, c('bnds', ftime_dim)) - coords[['time_bnds']] <- leadtimes_bnds + leadtimes_bnds <- list(leadtimes_bnds) + names(leadtimes_bnds) <- 'time_bnds' + out_coords <- c(leadtimes_bnds, out_coords) attrs <- list(units = paste('hours since', paste(sdates, collapse = ', ')), calendar = 'proleptic_gregorian', longname = 'time bounds', unlim = FALSE) - attr(coords[['time_bnds']], 'variables')$time_bnds <- attrs + attr(out_coords[['time_bnds']], 'variables')$time_bnds <- attrs } # Add ftime var - dim(vals) <- dim(data)[ftime_dim] - coords[[ftime_dim]] <- vals + dim(leadtimes_vals) <- dim(data)[ftime_dim] + leadtimes_vals <- list(leadtimes_vals) + names(leadtimes_vals) <- ftime_dim + out_coords <- c(leadtimes_vals, out_coords) attrs <- list(units = units, calendar = 'proleptic_gregorian', longname = ftime_dim, dim = list(list(name = ftime_dim, unlim = TRUE))) - attr(coords[[ftime_dim]], 'variables')[[ftime_dim]] <- attrs + if (!is.null(time_bounds)) { + attrs$bounds = 'time_bnds' + } + attr(out_coords[[ftime_dim]], 'variables')[[ftime_dim]] <- attrs for (j in 1:n_vars) { + remove_metadata_dim <- FALSE metadata[[varname[j]]]$dim <- list(list(name = ftime_dim, unlim = TRUE)) } # Reorder ftime_dim to last @@ -808,17 +820,18 @@ SaveExp <- function(data, destination = "./", coords = NULL, varname_j <- varname[j] metadata_j <- metadata[[varname_j]] if (is.null(var_dim)) { - coords[[varname_j]] <- data + out_coords[[varname_j]] <- data } else { - coords[[varname_j]] <- Subset(data, var_dim, j, drop = 'selected') + out_coords[[varname_j]] <- Subset(data, var_dim, j, drop = 'selected') } if (!is.null(metadata_j)) { - attr(coords[[varname_j]], 'variables') <- list(metadata_j) - names(attributes(coords[[varname_j]])$variables) <- varname_j + if (remove_metadata_dim) metadata_j$dim <- NULL + attr(out_coords[[varname_j]], 'variables') <- list(metadata_j) + names(attributes(out_coords[[varname_j]])$variables) <- varname_j } # Add global attributes if (!is.null(global_attrs)) { - attributes(coords[[varname_j]])$global_attrs <- 
global_attrs + attributes(out_coords[[varname_j]])$global_attrs <- global_attrs } } if (is.null(extra_string)) { @@ -838,7 +851,7 @@ SaveExp <- function(data, destination = "./", coords = NULL, } } full_filename <- file.path(destination, file_name) - ArrayToNc(coords, full_filename) + ArrayToNc(out_coords, full_filename) } } @@ -848,13 +861,16 @@ SaveExp <- function(data, destination = "./", coords = NULL, ftime_dim = 'time', varname = 'var', metadata_var = NULL, extra_string = NULL, global_attrs = NULL) { + remove_metadata_dim <- TRUE if (!is.null(dates)) { if (!any(is.null(time_bnds1), is.null(time_bnds2))) { time_bnds <- c(time_bnds1, time_bnds2) time_bnds <- as.numeric(difftime(time_bnds, dates[1], units = "hours")) dim(time_bnds) <- c(dim(data)[ftime_dim], bnds = 2) time_bnds <- Reorder(time_bnds, c('bnds', ftime_dim)) - coords[['time_bnds']] <- time_bnds + time_bnds <- list(time_bnds) + names(time_bnds) <- 'time_bnds' + coords <- c(time_bnds, coords) attrs <- list(units = paste('hours since', dates[1]), calendar = 'proleptic_gregorian', longname = 'time bounds') @@ -863,17 +879,24 @@ SaveExp <- function(data, destination = "./", coords = NULL, # Add ftime_dim differ <- as.numeric(difftime(dates, dates[1], units = "hours")) dim(differ) <- dim(data)[ftime_dim] - coords[[ftime_dim]] <- differ + differ <- list(differ) + names(differ) <- ftime_dim + coords <- c(differ, coords) attrs <- list(units = paste('hours since', dates[1]), calendar = 'proleptic_gregorian', longname = ftime_dim, dim = list(list(name = ftime_dim, unlim = TRUE))) + if (!is.null(time_bnds1)) { + attrs$bounds = 'time_bnds' + } attr(coords[[ftime_dim]], 'variables')[[ftime_dim]] <- attrs metadata_var$dim <- list(list(name = ftime_dim, unlim = TRUE)) + remove_metadata_dim <- FALSE } # Add data coords[[varname]] <- data if (!is.null(metadata_var)) { + if (remove_metadata_dim) metadata_var$dim <- NULL attr(coords[[varname]], 'variables') <- list(metadata_var) names(attributes(coords[[varname]])$variables) <- varname } diff --git a/inst/doc/usecase/UseCase4_CST_SaveExp.R b/inst/doc/usecase/UseCase4_CST_SaveExp.R index 10ca74d6..5bd7db76 100644 --- a/inst/doc/usecase/UseCase4_CST_SaveExp.R +++ b/inst/doc/usecase/UseCase4_CST_SaveExp.R @@ -377,10 +377,26 @@ res <- CST_PeriodAccumulation(data = exp, time_dim = 'ftime', CST_SaveExp(data = res, ftime_dim = NULL, var_dim = 'var', memb_dim = 'member', dat_dim = 'dataset', startdates = res$attrs$Dates, single_file = TRUE) +# (4.1.1) Same with SaveExp +SaveExp(data = res$data, coords = res$coords, + Dates = NULL, time_bounds = res$attrs$time_bounds, + ftime_dim = NULL, var_dim = 'var', + varname = res$attrs$Variable$varName, + metadata = res$attrs$Variable$metadata, + memb_dim = 'member', dat_dim = 'dataset', + startdates = res$attrs$Dates, single_file = TRUE) # (4.2) All data in separated files CST_SaveExp(data = res, ftime_dim = NULL, var_dim = 'var', memb_dim = 'member', dat_dim = 'dataset', startdates = res$attrs$Dates, single_file = FALSE) +# (4.2.1) Same with SaveExp +SaveExp(data = res$data, coords = res$coords, + Dates = res$attrs$Dates, time_bounds = res$attrs$time_bounds, + ftime_dim = NULL, var_dim = 'var', + varname = res$attrs$Variable$varName, + metadata = res$attrs$Variable$metadata, + memb_dim = 'member', dat_dim = 'dataset', + startdates = res$attrs$Dates, single_file = FALSE) # (4.3) CST_SaveExp(data = res, ftime_dim = NULL, var_dim = 'var', memb_dim = 'member', dat_dim = 'dataset', diff --git a/man/SaveExp.Rd b/man/SaveExp.Rd index dbef149d..53c791f7 100644 --- 
a/man/SaveExp.Rd +++ b/man/SaveExp.Rd @@ -45,7 +45,8 @@ It must have ftime_dim dimension.} \item{time_bounds}{(Optional) A list of two arrays of dates containing the lower (first array) and the upper (second array) time bounds corresponding to Dates. Each array must have the same dimensions as Dates. -It is NULL by default.} +If 'Dates' parameter is NULL, 'time_bounds' are not used. It is NULL by +default.} \item{startdates}{A vector of dates that will be used for the filenames when saving the data in multiple files (single_file = FALSE). It must be a -- GitLab From 45247e44105f9826fbd638a0f9b24fb377c61b79 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Wed, 20 Dec 2023 17:01:09 +0100 Subject: [PATCH 29/66] Add long_name to time_bounds and set units_hours_since to FALSE by default --- R/CST_SaveExp.R | 12 ++++++------ inst/doc/usecase/UseCase4_CST_SaveExp.R | 12 ++++++++---- 2 files changed, 14 insertions(+), 10 deletions(-) diff --git a/R/CST_SaveExp.R b/R/CST_SaveExp.R index 33158008..72a97b8c 100644 --- a/R/CST_SaveExp.R +++ b/R/CST_SaveExp.R @@ -55,11 +55,11 @@ #' in the file name as: '__.nc'. It is NULL by #' default. #'@param units_hours_since (Optional) A logical value only available for the -#' case: Dates have forecast time and start date dimension, single_file is +#' case: 'Dates' have forecast time and start date dimension, 'single_file' is #' TRUE and 'time_bounds' are not used. When it is TRUE, it saves the forecast #' time with units of 'hours since'; if it is FALSE, the time units will be a #' number of time steps with its corresponding frequency (e.g. n days, n months -#' or n hours). It is TRUE by default. +#' or n hours). It is FALSE by default. #'@param global_attrs (Optional) A list with elements containing the global #' attributes to be saved in the NetCDF. #' @@ -102,7 +102,7 @@ CST_SaveExp <- function(data, destination = "./", startdates = NULL, memb_dim = 'member', dat_dim = 'dataset', var_dim = 'var', drop_dims = NULL, single_file = FALSE, extra_string = NULL, - global_attrs = NULL, units_hours_since = TRUE) { + global_attrs = NULL, units_hours_since = FALSE) { # Check 's2dv_cube' if (!inherits(data, 's2dv_cube')) { stop("Parameter 'data' must be of the class 's2dv_cube'.") @@ -238,7 +238,7 @@ CST_SaveExp <- function(data, destination = "./", startdates = NULL, #' TRUE and 'time_bounds' is NULL. When it is TRUE, it saves the forecast time #' with units of 'hours since'; if it is FALSE, the time units will be a number #' of time steps with its corresponding frequency (e.g. n days, n months or n -#' hours). It is TRUE by default. +#' hours). It is FALSE by default. 
#' #'@return Multiple or single NetCDF files containing the data array.\cr #'\item{\code{single_file is TRUE}}{ @@ -289,7 +289,7 @@ SaveExp <- function(data, destination = "./", coords = NULL, sdate_dim = 'sdate', ftime_dim = 'time', memb_dim = 'member', dat_dim = 'dataset', var_dim = 'var', drop_dims = NULL, single_file = FALSE, extra_string = NULL, - global_attrs = NULL, units_hours_since = TRUE) { + global_attrs = NULL, units_hours_since = FALSE) { ## Initial checks # data if (is.null(data)) { @@ -789,7 +789,7 @@ SaveExp <- function(data, destination = "./", coords = NULL, out_coords <- c(leadtimes_bnds, out_coords) attrs <- list(units = paste('hours since', paste(sdates, collapse = ', ')), calendar = 'proleptic_gregorian', - longname = 'time bounds', unlim = FALSE) + long_name = 'time bounds', unlim = FALSE) attr(out_coords[['time_bnds']], 'variables')$time_bnds <- attrs } # Add ftime var diff --git a/inst/doc/usecase/UseCase4_CST_SaveExp.R b/inst/doc/usecase/UseCase4_CST_SaveExp.R index 5bd7db76..d700e1b9 100644 --- a/inst/doc/usecase/UseCase4_CST_SaveExp.R +++ b/inst/doc/usecase/UseCase4_CST_SaveExp.R @@ -94,7 +94,7 @@ SaveExp(data = data, Dates = Dates, coords = coords, varname = varname, data <- lonlat_temp_st$exp CST_SaveExp(data = data, ftime_dim = 'ftime', var_dim = 'var', dat_dim = 'dataset', sdate_dim = 'sdate', - single_file = TRUE) + units_hours_since = TRUE, single_file = TRUE) # (2.2.2) Now we read the output with Start: sdate <- as.vector(lonlat_temp_st$exp$coords$sdate) @@ -113,6 +113,7 @@ out <- Start(dat = path, retrieve = TRUE) attributes(out)$Variables$common$ftime +out_cube <- as.s2dv_cube(out) # (2.3) lonlat_temp_st$exp in a single file with units of time frequency # (2.3.1) we save the data @@ -120,7 +121,7 @@ data <- lonlat_temp_st$exp CST_SaveExp(data = data, ftime_dim = 'ftime', var_dim = 'var', dat_dim = 'dataset', sdate_dim = 'sdate', single_file = TRUE, units_hours_since = FALSE) - +dates <- lonlat_temp_st$exp$attrs$Dates # (2.3.2) Now we read the output with Start: sdate <- as.vector(lonlat_temp_st$exp$coords$sdate) path <- paste0(getwd(),"/$var$_", sdate[1], "_", sdate[length(sdate)], ".nc") @@ -140,6 +141,7 @@ out <- Start(dat = path, attributes(out)$Variables$common$ftime # [1] "1 months" "2 months" "3 months" +out_cube2 <- as.s2dv_cube(out) # (2.4) lonlat_temp_st$exp in separated files with units of hours since # (2.4.1) we save the data @@ -159,9 +161,9 @@ out <- Start(dat = path, var = 'tas', member = 'all', return_vars = list(lon = 'dat', lat = 'dat', - ftime = NULL), + ftime = 'sdate'), retrieve = TRUE) - +out_cube1 <- as.s2dv_cube(out) # (2.5) lonlat_prec_st$exp in a single file with units of time frequency # (2.5.1) we save the data data <- lonlat_prec_st @@ -192,6 +194,7 @@ attributes(out)$Variables$common$ftime # [15] "15 days" "16 days" "17 days" "18 days" "19 days" "20 days" "21 days" # [22] "22 days" "23 days" "24 days" "25 days" "26 days" "27 days" "28 days" # [29] "29 days" "30 days" "31 days" +out_cube <- as.s2dv_cube(out) # (2.6) Test observations: lonlat_temp # (2.6.1) Save the data @@ -312,6 +315,7 @@ dim(data3out) CST_SaveExp(data = cube3, ftime_dim = 'time', var_dim = 'var', memb_dim = 'ensemble', dat_dim = 'dat', single_file = TRUE) +# TODO: Read the output with Start # (3.3) Observations (from startR usecase) repos_exp <- paste0('/esarchive/exp/ecearth/a1tr/cmorfiles/CMIP/EC-Earth-Consortium/', -- GitLab From bb03318cb4dbb9e8c643cb32be7d4fb46df04d29 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Thu, 21 Dec 2023 14:34:30 +0100 
Subject: [PATCH 30/66] Add lintr tags and exclusions to the pipeline --- .Rbuildignore | 1 + .gitlab-ci.yml | 1 + .lintr | 35 +++++++++++++++++++++++++++++++++++ 3 files changed, 37 insertions(+) create mode 100644 .lintr diff --git a/.Rbuildignore b/.Rbuildignore index 31cdda42..6cd6d8c8 100644 --- a/.Rbuildignore +++ b/.Rbuildignore @@ -6,6 +6,7 @@ ./.nc$ .*^(?!data)\.RData$ .*\.gitlab-ci.yml$ +.lintr ^tests$ #^inst/doc$ ^inst/doc/usecase/UseCase2_PrecipitationDownscaling_RainFARM_RF100\.R$ diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml index cbc39ada..2750bb3f 100644 --- a/.gitlab-ci.yml +++ b/.gitlab-ci.yml @@ -16,3 +16,4 @@ lint-check: - module load R/4.1.2-foss-2015a-bare - echo "Run lintr on the package..." - Rscript -e 'lintr::lint_package(path = ".")' + - R -e 'covr::package_coverage()' diff --git a/.lintr b/.lintr new file mode 100644 index 00000000..bbec668d --- /dev/null +++ b/.lintr @@ -0,0 +1,35 @@ +linters: linters_with_tags( + tags = c("package_development", "readability", "best_practices"), + line_length_linter = line_length_linter(100L), + T_and_F_symbol_linter = NULL, + quotes_linter = NULL, + commented_code_linter = NULL, + implicit_integer_linter = NULL, + vector_logic_linter = NULL, + extraction_operator_linter = NULL, + function_left_parentheses_linter = NULL, + semicolon_linter = NULL, + indentation_linter = NULL, + unnecessary_nested_if_linter = NULL, + if_not_else_linter = NULL, + object_length_linter = NULL, + infix_spaces_linter(exclude_operators = "~") +) +exclusions: list( + "R/AnalogsPred_train.R", + "R/BEI_PDFBest.R", + "R/BEI_Weights.R", + "R/CST_AdamontAnalog.R", + "R/CST_AdamontQQCorr.R", + "R/CST_AnalogsPredictors.R", + "R/CST_BEI_Weighting.R", + "R/CST_CategoricalEnsCombination.R", + "R/CST_DynBiasCorrection.R", + "R/CST_EnsClustering.R", + "R/PlotCombinedMap.R", + "R/PlotForecastPDF.R", + "R/PlotMostLikelyQuantileMap.R", + "R/PlotPDFsOLE.R", + "R/PlotTriangles4Categories.R", + "R/PlotWeeklyClim.R" +) -- GitLab From 1cf7293fa2282bd0b6dcb3f8632ee3b1dfc022e6 Mon Sep 17 00:00:00 2001 From: Victoria Agudetse Roures Date: Thu, 21 Dec 2023 14:36:28 +0100 Subject: [PATCH 31/66] Subset indices parameter as list --- R/CST_Subset.R | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/R/CST_Subset.R b/R/CST_Subset.R index 2e69c1f9..fa314329 100644 --- a/R/CST_Subset.R +++ b/R/CST_Subset.R @@ -87,7 +87,7 @@ CST_Subset <- function(x, along, indices, drop = FALSE, var_dim = NULL, # Adjust coordinates for (dimension in 1:length(along)) { dim_name <- along[dimension] - index <- indices[[dimension]] + index <- indices[dimension] # Only rename coordinates that have not been dropped if (dim_name %in% names(x$dims)) { # Subset coordinate by indices -- GitLab From a6ac6dbfeb4635fb8687c1f4041885dc950113d6 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Thu, 21 Dec 2023 15:24:28 +0100 Subject: [PATCH 32/66] Add unit test for the correction and add any() inside .subset_with_attrs --- R/CST_Subset.R | 2 +- tests/testthat/test-CST_Subset.R | 20 ++++++++++++++++++++ 2 files changed, 21 insertions(+), 1 deletion(-) diff --git a/R/CST_Subset.R b/R/CST_Subset.R index fa314329..28c8498c 100644 --- a/R/CST_Subset.R +++ b/R/CST_Subset.R @@ -150,7 +150,7 @@ CST_Subset <- function(x, along, indices, drop = FALSE, var_dim = NULL, # Function to subset with attributes .subset_with_attrs <- function(x, ...) { args_subset <- list(...) 
- if (is.null(dim(x)) | length(dim(x)) == 1) { + if (any(is.null(dim(x)), length(dim(x)) == 1)) { l <- x[args_subset[['indices']][[1]]] } else { l <- ClimProjDiags::Subset(x, along = args_subset[['along']], diff --git a/tests/testthat/test-CST_Subset.R b/tests/testthat/test-CST_Subset.R index 9fc04b48..2a4fdd1f 100644 --- a/tests/testthat/test-CST_Subset.R +++ b/tests/testthat/test-CST_Subset.R @@ -50,6 +50,26 @@ test_that("2. Output checks: CST_Subset", { names(res1$coords), c("member", "ftime", "lon") ) + ## lat + expect_equal( + res1$coords$lat, + NULL + ) + ## lon + expect_equal( + as.vector(res1$coords$lon), + c(6, 7) + ) + ## sdate + expect_equal( + res1$coords$sdate, + NULL + ) + ## member + expect_equal( + as.vector(res1$coords$member), + c(1,2) + ) # Check attrs expect_equal( names(res1$attrs), -- GitLab From 9f31d837bbe072a8064e5f16a745d018c189b37e Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Thu, 21 Dec 2023 15:28:56 +0100 Subject: [PATCH 33/66] Correct spacing in .lintr config file --- .lintr | 66 +++++++++++++++++++++++++++++----------------------------- 1 file changed, 33 insertions(+), 33 deletions(-) diff --git a/.lintr b/.lintr index bbec668d..753b9131 100644 --- a/.lintr +++ b/.lintr @@ -1,35 +1,35 @@ -linters: linters_with_tags( - tags = c("package_development", "readability", "best_practices"), - line_length_linter = line_length_linter(100L), - T_and_F_symbol_linter = NULL, - quotes_linter = NULL, - commented_code_linter = NULL, - implicit_integer_linter = NULL, - vector_logic_linter = NULL, - extraction_operator_linter = NULL, - function_left_parentheses_linter = NULL, - semicolon_linter = NULL, - indentation_linter = NULL, - unnecessary_nested_if_linter = NULL, - if_not_else_linter = NULL, - object_length_linter = NULL, - infix_spaces_linter(exclude_operators = "~") -) +linters: linters_with_tags( # lintr_3.1.1 + tags = c("package_development", "readability", "best_practices"), + line_length_linter = line_length_linter(100L), + T_and_F_symbol_linter = NULL, + quotes_linter = NULL, + commented_code_linter = NULL, + implicit_integer_linter = NULL, + vector_logic_linter = NULL, + extraction_operator_linter = NULL, + function_left_parentheses_linter = NULL, + semicolon_linter = NULL, + indentation_linter = NULL, + unnecessary_nested_if_linter = NULL, + if_not_else_linter = NULL, + object_length_linter = NULL, + infix_spaces_linter(exclude_operators = "~") + ) exclusions: list( - "R/AnalogsPred_train.R", - "R/BEI_PDFBest.R", - "R/BEI_Weights.R", - "R/CST_AdamontAnalog.R", - "R/CST_AdamontQQCorr.R", - "R/CST_AnalogsPredictors.R", - "R/CST_BEI_Weighting.R", - "R/CST_CategoricalEnsCombination.R", - "R/CST_DynBiasCorrection.R", - "R/CST_EnsClustering.R", - "R/PlotCombinedMap.R", - "R/PlotForecastPDF.R", - "R/PlotMostLikelyQuantileMap.R", - "R/PlotPDFsOLE.R", - "R/PlotTriangles4Categories.R", - "R/PlotWeeklyClim.R" + "R/AnalogsPred_train.R", + "R/BEI_PDFBest.R", + "R/BEI_Weights.R", + "R/CST_AdamontAnalog.R", + "R/CST_AdamontQQCorr.R", + "R/CST_AnalogsPredictors.R", + "R/CST_BEI_Weighting.R", + "R/CST_CategoricalEnsCombination.R", + "R/CST_DynBiasCorrection.R", + "R/CST_EnsClustering.R", + "R/PlotCombinedMap.R", + "R/PlotForecastPDF.R", + "R/PlotMostLikelyQuantileMap.R", + "R/PlotPDFsOLE.R", + "R/PlotTriangles4Categories.R", + "R/PlotWeeklyClim.R" ) -- GitLab From 40767084472fd26117c6f50ebc5106041baaeafe Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Thu, 21 Dec 2023 15:50:48 +0100 Subject: [PATCH 34/66] Correct .lintr config and error in a vignette --- .gitlab-ci.yml | 1 - 
.lintr | 6 ++++-- vignettes/RainFARM_vignette.Rmd | 20 ++++++++++---------- 3 files changed, 14 insertions(+), 13 deletions(-) diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml index 2750bb3f..cbc39ada 100644 --- a/.gitlab-ci.yml +++ b/.gitlab-ci.yml @@ -16,4 +16,3 @@ lint-check: - module load R/4.1.2-foss-2015a-bare - echo "Run lintr on the package..." - Rscript -e 'lintr::lint_package(path = ".")' - - R -e 'covr::package_coverage()' diff --git a/.lintr b/.lintr index 753b9131..11c9deeb 100644 --- a/.lintr +++ b/.lintr @@ -31,5 +31,7 @@ exclusions: list( "R/PlotMostLikelyQuantileMap.R", "R/PlotPDFsOLE.R", "R/PlotTriangles4Categories.R", - "R/PlotWeeklyClim.R" -) + "R/PlotWeeklyClim.R", + "tests/testthat/", + "tests/testthat.R" + ) diff --git a/vignettes/RainFARM_vignette.Rmd b/vignettes/RainFARM_vignette.Rmd index a51d75cb..27d74233 100644 --- a/vignettes/RainFARM_vignette.Rmd +++ b/vignettes/RainFARM_vignette.Rmd @@ -180,21 +180,21 @@ slopes <- CST_RFSlope(exp, time_dim = c("member", "ftime")) dim(slopes) # dataset var sdate # 1 1 3 -slopes -, , 1 +# slopes +# , , 1 - [,1] -[1,] 1.09957 +# [,1] +# [1,] 1.09957 -, , 2 +# , , 2 - [,1] -[1,] 1.768861 +# [,1] +# [1,] 1.768861 -, , 3 +# , , 3 - [,1] -[1,] 1.190176 +# [,1] +# [1,] 1.190176 ``` which return an array of spectral slopes, one for each "dataset" and starting date "sdate". -- GitLab From 107f8cbf64e6b75ac798c6d64ee6ea89573a8734 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Thu, 21 Dec 2023 16:12:01 +0100 Subject: [PATCH 35/66] Exclude inst and Analogs from lintr checks --- .lintr | 2 ++ 1 file changed, 2 insertions(+) diff --git a/.lintr b/.lintr index 11c9deeb..31497af1 100644 --- a/.lintr +++ b/.lintr @@ -16,11 +16,13 @@ linters: linters_with_tags( # lintr_3.1.1 infix_spaces_linter(exclude_operators = "~") ) exclusions: list( + "inst", "R/AnalogsPred_train.R", "R/BEI_PDFBest.R", "R/BEI_Weights.R", "R/CST_AdamontAnalog.R", "R/CST_AdamontQQCorr.R", + "R/Analogs.R", "R/CST_AnalogsPredictors.R", "R/CST_BEI_Weighting.R", "R/CST_CategoricalEnsCombination.R", -- GitLab From 0b36ea1ed72d13e0b0da6e4d0e610ccb40e52ade Mon Sep 17 00:00:00 2001 From: vagudets Date: Wed, 10 Jan 2024 12:59:22 +0100 Subject: [PATCH 36/66] Add new function CST_ChangeDimNames() --- R/CST_ChangeDimNames.R | 74 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 74 insertions(+) create mode 100644 R/CST_ChangeDimNames.R diff --git a/R/CST_ChangeDimNames.R b/R/CST_ChangeDimNames.R new file mode 100644 index 00000000..503b67aa --- /dev/null +++ b/R/CST_ChangeDimNames.R @@ -0,0 +1,74 @@ +#'Change the name of one or more dimensions for an object of class s2dv_cube +#' +#'Change the names of the dimensions specified in 'original_names' to the names +#'in 'new_names'. The coordinate names and the dimensions of any attributes +#'are also modified accordingly. +#' +#'@author Agudetse Roures Victoria, \email{victoria.agudetse@bsc.es} +#' +#'@param data An object of class \code{s2dv_cube} whose dimension names +#' should be changed. +#'@param original_names A single character string or a vector indicating the +#' dimensions to be renamed. +#'@param new_names A single character string or a vector indicating the new +#' dimension names, in the same order as the dimensions in 'original_names'. +#' +#'@return An object of class \code{s2dv_cube} with similar data, coordinates and +#'attributes as the \code{data} input, but with modified dimension names. 
+#' +#'@examples +#'#Example with sample data: +#'# Check original dimensions and coordinates +#'lonlat_temp$exp$dims +#'names(lonlat_temp$exp$coords) +#'dim(lonlat_temp$exp$attrs$Dates) +#'# Change 'dataset' to 'dat' and 'ftime' to 'time' +#'exp <- CST_ChangeDimNames(lonlat_temp$exp, +#' original_names = c("dataset", "ftime"), +#' new_names = c("dat", "time")) +#'# Check new dimensions and coordinates +#'exp$dims +#'names(exp$coords) +#'dim(lonlat_temp$exp$attrs$Dates) +#' +#'@export +CST_ChangeDimNames <- function(data, original_names, new_names) { + if (!inherits(data, "s2dv_cube")) { + stop("Parameter 'data' must be an object of class 's2dv_cube'") + } + if (!(length(original_names) == length(new_names))) { + stop("The number of dimension names in 'new_names' must be the same + as in 'original_names'") + } + if (!all(original_names %in% names(data$dims))) { + stop("Some of the dimensions in 'original_names' could not be found in + 'data'") + } + for (index in 1:length(original_names)) { + original_name <- original_names[index] + new_name <- new_names[index] + # Step 1: Change dims and data + names(data$dims)[which(names(data$dims) == original_name)] <- new_name + dim(data$data) <- data$dims + # Step 2: Change coords + names(data$coords)[which(names(data$coords) == original_name)] <- new_name + # Step 3: Change attrs + # 3.1 - Dates + if (original_name %in% names(dim(data$attrs$Dates))) { + names(dim(data$attrs$Dates))[which(names(dim(data$attrs$Dates)) + == original_name)] <- new_name + } + # 3.2 - Variable metadata + if (original_name %in% names(data$attrs$Variable$metadata)) { + names(data$attrs$Variable$metadata)[which(names(data$attrs$Variable$metadata) + == original_name)] <- new_name + } + # 3.3 - Source files + if (original_name %in% names(dim(data$attrs$source_files))) { + names(dim(data$attrs$source_files))[which(names(dim(data$attrs$source_files)) + == original_name)] <- new_name + } + } + return(data) +} + -- GitLab From 16d762d1d3f2c4fcfca3564e9f1a557db651fd33 Mon Sep 17 00:00:00 2001 From: vagudets Date: Wed, 10 Jan 2024 13:23:28 +0100 Subject: [PATCH 37/66] Update DESCRIPTION, fix example --- DESCRIPTION | 2 +- R/CST_ChangeDimNames.R | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/DESCRIPTION b/DESCRIPTION index 3af5dcb1..4e41770d 100644 --- a/DESCRIPTION +++ b/DESCRIPTION @@ -90,5 +90,5 @@ VignetteBuilder: knitr License: GPL-3 Encoding: UTF-8 LazyData: true -RoxygenNote: 7.2.0 +RoxygenNote: 7.2.3 Config/testthat/edition: 3 diff --git a/R/CST_ChangeDimNames.R b/R/CST_ChangeDimNames.R index 503b67aa..a36f42a3 100644 --- a/R/CST_ChangeDimNames.R +++ b/R/CST_ChangeDimNames.R @@ -29,7 +29,7 @@ #'# Check new dimensions and coordinates #'exp$dims #'names(exp$coords) -#'dim(lonlat_temp$exp$attrs$Dates) +#'dim(exp$attrs$Dates) #' #'@export CST_ChangeDimNames <- function(data, original_names, new_names) { -- GitLab From 897035a01258146107b3d1df330e23d31d3478b3 Mon Sep 17 00:00:00 2001 From: vagudets Date: Wed, 10 Jan 2024 17:00:06 +0100 Subject: [PATCH 38/66] Improve efficiency, add modification of attributes, add dots to error messages --- R/CST_ChangeDimNames.R | 21 ++++++++++++++++----- 1 file changed, 16 insertions(+), 5 deletions(-) diff --git a/R/CST_ChangeDimNames.R b/R/CST_ChangeDimNames.R index a36f42a3..99347b00 100644 --- a/R/CST_ChangeDimNames.R +++ b/R/CST_ChangeDimNames.R @@ -34,22 +34,26 @@ #'@export CST_ChangeDimNames <- function(data, original_names, new_names) { if (!inherits(data, "s2dv_cube")) { - stop("Parameter 'data' must be an object of 
class 's2dv_cube'") + stop("Parameter 'data' must be an object of class 's2dv_cube'.") } if (!(length(original_names) == length(new_names))) { stop("The number of dimension names in 'new_names' must be the same - as in 'original_names'") + as in 'original_names'.") } if (!all(original_names %in% names(data$dims))) { stop("Some of the dimensions in 'original_names' could not be found in - 'data'") + 'data'.") } for (index in 1:length(original_names)) { original_name <- original_names[index] new_name <- new_names[index] - # Step 1: Change dims and data + # Step 1: Change dims names(data$dims)[which(names(data$dims) == original_name)] <- new_name - dim(data$data) <- data$dims + # dim(data$data) <- data$dims + # ## TODO: Improve code + # if !(is.null(attributes(data$data)$dimensions)) { + # attributes(data$data)$dimensions <- names(data$dims) + # } # Step 2: Change coords names(data$coords)[which(names(data$coords) == original_name)] <- new_name # Step 3: Change attrs @@ -69,6 +73,13 @@ CST_ChangeDimNames <- function(data, original_names, new_names) { == original_name)] <- new_name } } + # Change data dimnames after renaming all dimensions + dim(data$data) <- data$dims + if (!is.null(attributes(data$data)$dimensions)) { + attributes(data$data)$dimensions <- names(data$dims) + } + # Change $Dates 'dim' attribute + attr(attributes(data$attrs$Dates)$dim, "names") <- names(dim(data$attrs$Dates)) return(data) } -- GitLab From d66509df44d3d45458cda54be862a78a6641361a Mon Sep 17 00:00:00 2001 From: vagudets Date: Wed, 10 Jan 2024 17:00:48 +0100 Subject: [PATCH 39/66] update function --- R/CST_ChangeDimNames.R | 5 ----- 1 file changed, 5 deletions(-) diff --git a/R/CST_ChangeDimNames.R b/R/CST_ChangeDimNames.R index 99347b00..876f5183 100644 --- a/R/CST_ChangeDimNames.R +++ b/R/CST_ChangeDimNames.R @@ -49,11 +49,6 @@ CST_ChangeDimNames <- function(data, original_names, new_names) { new_name <- new_names[index] # Step 1: Change dims names(data$dims)[which(names(data$dims) == original_name)] <- new_name - # dim(data$data) <- data$dims - # ## TODO: Improve code - # if !(is.null(attributes(data$data)$dimensions)) { - # attributes(data$data)$dimensions <- names(data$dims) - # } # Step 2: Change coords names(data$coords)[which(names(data$coords) == original_name)] <- new_name # Step 3: Change attrs -- GitLab From 93544eb67d98e1b4e88ec2db319b56df23a7f247 Mon Sep 17 00:00:00 2001 From: vagudets Date: Wed, 10 Jan 2024 17:03:22 +0100 Subject: [PATCH 40/66] Add checks for 'original_names' and 'new_names' parameters --- R/CST_ChangeDimNames.R | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/R/CST_ChangeDimNames.R b/R/CST_ChangeDimNames.R index 876f5183..1c79890e 100644 --- a/R/CST_ChangeDimNames.R +++ b/R/CST_ChangeDimNames.R @@ -36,6 +36,15 @@ CST_ChangeDimNames <- function(data, original_names, new_names) { if (!inherits(data, "s2dv_cube")) { stop("Parameter 'data' must be an object of class 's2dv_cube'.") } + if (!is.character(original_names)) { + stop("Parameter 'original_names' must be a character string or a + vector of character strings.") + } + if (!is.character(new_names)) { + stop("Parameter 'new_names' must be a character string or a + vector of character strings.") + } + if (!(length(original_names) == length(new_names))) { stop("The number of dimension names in 'new_names' must be the same as in 'original_names'.") -- GitLab From 90894e1a2171e0bcbf01c6bc97bf8c2a1098de0f Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Thu, 11 Jan 2024 09:24:43 +0100 Subject: [PATCH 41/66] Add unit test, add 
few style spaces and update NAMESPACE. --- NAMESPACE | 1 + R/CST_ChangeDimNames.R | 18 ++--- man/CST_ChangeDimNames.Rd | 46 ++++++++++++ tests/testthat/test-CST_ChangeDimNames.R | 95 ++++++++++++++++++++++++ 4 files changed, 151 insertions(+), 9 deletions(-) create mode 100644 man/CST_ChangeDimNames.Rd create mode 100644 tests/testthat/test-CST_ChangeDimNames.R diff --git a/NAMESPACE b/NAMESPACE index 012f76cf..a03e7c8b 100644 --- a/NAMESPACE +++ b/NAMESPACE @@ -19,6 +19,7 @@ export(CST_BEI_Weighting) export(CST_BiasCorrection) export(CST_Calibration) export(CST_CategoricalEnsCombination) +export(CST_ChangeDimNames) export(CST_DynBiasCorrection) export(CST_EnsClustering) export(CST_InsertDim) diff --git a/R/CST_ChangeDimNames.R b/R/CST_ChangeDimNames.R index 1c79890e..433039b6 100644 --- a/R/CST_ChangeDimNames.R +++ b/R/CST_ChangeDimNames.R @@ -17,7 +17,7 @@ #'attributes as the \code{data} input, but with modified dimension names. #' #'@examples -#'#Example with sample data: +#'# Example with sample data: #'# Check original dimensions and coordinates #'lonlat_temp$exp$dims #'names(lonlat_temp$exp$coords) @@ -37,21 +37,21 @@ CST_ChangeDimNames <- function(data, original_names, new_names) { stop("Parameter 'data' must be an object of class 's2dv_cube'.") } if (!is.character(original_names)) { - stop("Parameter 'original_names' must be a character string or a - vector of character strings.") + stop("Parameter 'original_names' must be a character string or a ", + "vector of character strings.") } if (!is.character(new_names)) { - stop("Parameter 'new_names' must be a character string or a - vector of character strings.") + stop("Parameter 'new_names' must be a character string or a ", + "vector of character strings.") } if (!(length(original_names) == length(new_names))) { - stop("The number of dimension names in 'new_names' must be the same - as in 'original_names'.") + stop("The number of dimension names in 'new_names' must be the same ", + "as in 'original_names'.") } if (!all(original_names %in% names(data$dims))) { - stop("Some of the dimensions in 'original_names' could not be found in - 'data'.") + stop("Some of the dimensions in 'original_names' could not be found in ", + "'data'.") } for (index in 1:length(original_names)) { original_name <- original_names[index] diff --git a/man/CST_ChangeDimNames.Rd b/man/CST_ChangeDimNames.Rd new file mode 100644 index 00000000..86806be0 --- /dev/null +++ b/man/CST_ChangeDimNames.Rd @@ -0,0 +1,46 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/CST_ChangeDimNames.R +\name{CST_ChangeDimNames} +\alias{CST_ChangeDimNames} +\title{Change the name of one or more dimensions for an object of class s2dv_cube} +\usage{ +CST_ChangeDimNames(data, original_names, new_names) +} +\arguments{ +\item{data}{An object of class \code{s2dv_cube} whose dimension names +should be changed.} + +\item{original_names}{A single character string or a vector indicating the +dimensions to be renamed.} + +\item{new_names}{A single character string or a vector indicating the new +dimension names, in the same order as the dimensions in 'original_names'.} +} +\value{ +An object of class \code{s2dv_cube} with similar data, coordinates and +attributes as the \code{data} input, but with modified dimension names. +} +\description{ +Change the names of the dimensions specified in 'original_names' to the names +in 'new_names'. The coordinate names and the dimensions of any attributes +are also modified accordingly. 
+} +\examples{ +# Example with sample data: +# Check original dimensions and coordinates +lonlat_temp$exp$dims +names(lonlat_temp$exp$coords) +dim(lonlat_temp$exp$attrs$Dates) +# Change 'dataset' to 'dat' and 'ftime' to 'time' +exp <- CST_ChangeDimNames(lonlat_temp$exp, + original_names = c("dataset", "ftime"), + new_names = c("dat", "time")) +# Check new dimensions and coordinates +exp$dims +names(exp$coords) +dim(exp$attrs$Dates) + +} +\author{ +Agudetse Roures Victoria, \email{victoria.agudetse@bsc.es} +} diff --git a/tests/testthat/test-CST_ChangeDimNames.R b/tests/testthat/test-CST_ChangeDimNames.R new file mode 100644 index 00000000..ceef7375 --- /dev/null +++ b/tests/testthat/test-CST_ChangeDimNames.R @@ -0,0 +1,95 @@ +############################################## +test_that("1. Input checks", { + expect_error( + CST_ChangeDimNames(1), + "Parameter 'data' must be an object of class 's2dv_cube'." + ) + expect_error( + CST_ChangeDimNames(lonlat_prec_st, 1, 'bbb'), + paste0("Parameter 'original_names' must be a character string or a ", + "vector of character strings.") + ) + expect_error( + CST_ChangeDimNames(lonlat_prec_st, 'aaa', 1), + paste0("Parameter 'new_names' must be a character string or a ", + "vector of character strings.") + ) + expect_error( + CST_ChangeDimNames(lonlat_prec_st, 'aaa', c('aaa', 'bbb')), + paste0("The number of dimension names in 'new_names' must be the same ", + "as in 'original_names'.") + ) + expect_error( + CST_ChangeDimNames(lonlat_prec_st, 'aaa', 'bbb'), + paste0("Some of the dimensions in 'original_names' could not be found in ", + "'data'.") + ) +}) +############################################## +test_that("2. Output checks", { + exp <- CST_ChangeDimNames(lonlat_temp_st$exp, + original_names = c("lon", 'ftime', 'sdate'), + new_names = c("lons", 'ftimes', 'sdates')) + # dims + expect_equal( + dim(exp$data), + c(dataset = 1, var = 1, member = 15, sdates = 6, ftimes = 3, lat = 22, lons = 53) + ) + expect_equal( + exp$dims, + dim(exp$data) + ) + expect_equal( + as.vector(exp$data), + as.vector(lonlat_temp_st$exp$data) + ) + # coords + expect_equal( + names(exp$coords), + c("dataset", "var", "member", "sdates", "ftimes", "lat", "lons") + ) + # dim Dates + expect_equal( + dim(exp$attrs$Dates), + c(sdates = 6, ftimes = 3) + ) + # variable metadata + expect_equal( + names(exp$attrs$Variable$metadata), + c("lat", "lons", "ftimes", "tas" ) + ) + # source_files + expect_equal( + dim(exp$attrs$source_files), + c(dataset = 1, var = 1, sdates = 6) + ) + # Dates 'dim' attribute + dat <- CST_ChangeDimNames(lonlat_prec, + original_names = c("lon", 'ftime', 'sdate', 'member'), + new_names = c("lons", 'ftimes', 'sdates', 'members')) + expect_equal( + as.vector(lonlat_prec$data), + as.vector(dat$data) + ) + expect_equal( + attributes(dat$attrs$Dates)$dim, + c(ftimes = 31, sdates = 3) + ) + expect_equal( + attributes(exp$attrs$Dates)$dim, + c(sdates = 6, ftimes = 3) + ) + expect_equal( + as.vector(dat$attrs$Dates), + as.vector(lonlat_prec$attrs$Dates) + ) + # attribute dimensions + expect_equal( + attributes(dat$data)$dimensions, + names(dim(dat$data)) + ) + expect_equal( + attributes(exp$data)$dimensions, + NULL + ) +}) \ No newline at end of file -- GitLab From 07a6f67b3bcef3602c54e6b7ea61c7fb801cff10 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Tue, 16 Jan 2024 12:08:10 +0100 Subject: [PATCH 42/66] Add test in UseCase4 --- inst/doc/usecase/UseCase4_CST_SaveExp.R | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-) diff --git 
a/inst/doc/usecase/UseCase4_CST_SaveExp.R b/inst/doc/usecase/UseCase4_CST_SaveExp.R index d700e1b9..e85ef8d1 100644 --- a/inst/doc/usecase/UseCase4_CST_SaveExp.R +++ b/inst/doc/usecase/UseCase4_CST_SaveExp.R @@ -101,11 +101,11 @@ sdate <- as.vector(lonlat_temp_st$exp$coords$sdate) path <- paste0(getwd(),"/$var$_", sdate[1], "_", sdate[length(sdate)], ".nc") out <- Start(dat = path, var = 'tas', - lon = 'all', - lat = 'all', - ftime = 'all', - sdate = 'all', member = 'all', + sdate = 'all', + ftime = 'all', + lat = 'all', + lon = 'all', return_vars = list(lon = 'dat', lat = 'dat', ftime = NULL, @@ -114,6 +114,11 @@ out <- Start(dat = path, attributes(out)$Variables$common$ftime out_cube <- as.s2dv_cube(out) +out_cube <- CST_ChangeDimNames(out_cube, + original_names = c("dat"), + new_names = c("dataset")) +all.equal(data$data, out_cube$data) +identical(data$data, out_cube$data) # (2.3) lonlat_temp_st$exp in a single file with units of time frequency # (2.3.1) we save the data -- GitLab From c78601a4591b780e70b65e4e47a1357ee60ecc47 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Tue, 16 Jan 2024 15:06:29 +0100 Subject: [PATCH 43/66] Add plotting test to use case SaveExp --- inst/doc/usecase/UseCase4_CST_SaveExp.R | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/inst/doc/usecase/UseCase4_CST_SaveExp.R b/inst/doc/usecase/UseCase4_CST_SaveExp.R index e85ef8d1..926754f3 100644 --- a/inst/doc/usecase/UseCase4_CST_SaveExp.R +++ b/inst/doc/usecase/UseCase4_CST_SaveExp.R @@ -120,6 +120,13 @@ out_cube <- CST_ChangeDimNames(out_cube, all.equal(data$data, out_cube$data) identical(data$data, out_cube$data) +# Plot the results and compare +PlotEquiMap(out_cube$data[,,1,1,1,,], lon = out_cube$coords$lon, + lat = out_cube$coords$lat, filled.continents = FALSE) + +PlotEquiMap(lonlat_temp_st$exp$data[,,1,1,1,,], lon = out_cube$coords$lon, + lat = out_cube$coords$lat, filled.continents = FALSE) + # (2.3) lonlat_temp_st$exp in a single file with units of time frequency # (2.3.1) we save the data data <- lonlat_temp_st$exp -- GitLab From 59afd3e85b812ae22f25b3412860622903d0e81f Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Tue, 16 Jan 2024 15:30:59 +0100 Subject: [PATCH 44/66] Update README with the use case and s2dv_cube example --- README.md | 54 ++++++++++++++++++++++++++++++++++++++---------------- 1 file changed, 38 insertions(+), 16 deletions(-) diff --git a/README.md b/README.md index 2dfbb5b7..4431be20 100644 --- a/README.md +++ b/README.md @@ -19,6 +19,7 @@ A part from this GitLab project, that allows you to monitor CSTools progress, to - The CRAN repository [https://CRAN.R-project.org/package=CSTools](https://CRAN.R-project.org/package=CSTools) which includes the user manual and vignettes. - Video tutorials [https://www.medscope-project.eu/products/tool-box/cstools-video-tutorials/](https://www.medscope-project.eu/products/tool-box/cstools-video-tutorials/). - Other resources are under-development such [training material](https://earth.bsc.es/gitlab/external/cstools/-/tree/MEDCOF2022/inst/doc/MEDCOF2022) and a [full reproducible use case for forecast calibration](https://earth.bsc.es/gitlab/external/cstools/-/tree/develop-CalibrationVignette/FOCUS_7_2). +- See [**use cases**](inst/doc/usecase.md) Installation ------------ @@ -58,22 +59,43 @@ This package is designed to be compatible with other R packages such as [s2dv](h The class `s2dv_cube` is mainly a list of named elements to keep data and metadata in a single object. 
Basic structure of the object: ```r -$ data: [data array] -$ dims: [dimensions vector] -$ coords: [List of coordinates vectors] - $ sdate - $ time - $ lon - [...] -$ attrs: [List of the attributes] - $ Variable: - $ varName - $ metadata - $ Datasets - $ Dates - $ source_files - $ when - $ load_parameters +> lonlat_temp_st$exp +'s2dv_cube' +Data [ 279.994110107422, 280.337463378906, 279.450866699219, 281.992889404297, 280.921813964844, 278.924621582031, 280.738433837891, 279.409515380859 ... ] +Dimensions ( dataset = 1, var = 1, member = 15, sdate = 6, ftime = 3, lat = 22, lon = 53 ) +Coordinates + * dataset : dat1 + * var : tas + member : 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 + * sdate : 20001101, 20011101, 20021101, 20031101, 20041101, 20051101 + ftime : 1, 2, 3 + * lat : 48, 47, 46, 45, 44, 43, 42, 41, 40, 39, 38, 37, 36, 35, 34, 33, 32, 31, 30, 29, 28, 27 + * lon : 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359 +Attributes + Dates : 2000-11-01 2001-11-01 2002-11-01 2003-11-01 2004-11-01 ... + varName : tas + metadata : + lat + units : degrees_north + long name : latitude + other : ndims, size, standard_name, axis + lon + units : degrees_east + long name : longitude + other : ndims, size, standard_name, axis + ftime + units : hours since 2000-11-01 00:00:00 + other : ndims, size, standard_name, calendar + tas + units : K + long name : 2 metre temperature + other : prec, dim, unlim, make_missing_value, missval, hasAddOffset, hasScaleFact, code, table + Datasets : dat1 + when : 2023-10-02 10:11:06 + source_files : /esarchive/exp/ecmwf/system5c3s/monthly_mean/tas_f6h/tas_20001101.nc ... + load_parameters : + ( dat1 ) : dataset = dat1, var = tas, sdate = 20001101 ... + ... ``` More information about the `s2dv_cube` object class can be found here: [description of the s2dv_cube object structure document](https://docs.google.com/document/d/1ko37JFl_h6mOjDKM5QSQGikfLBKZq1naL11RkJIwtMM/edit?usp=sharing). -- GitLab From f28fbe9f669dabab30180f3540ba2ad5c3aae4f1 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Tue, 16 Jan 2024 15:34:11 +0100 Subject: [PATCH 45/66] Update README --- README.md | 6 +----- 1 file changed, 1 insertion(+), 5 deletions(-) diff --git a/README.md b/README.md index 4431be20..6da47ff3 100644 --- a/README.md +++ b/README.md @@ -100,11 +100,7 @@ Attributes More information about the `s2dv_cube` object class can be found here: [description of the s2dv_cube object structure document](https://docs.google.com/document/d/1ko37JFl_h6mOjDKM5QSQGikfLBKZq1naL11RkJIwtMM/edit?usp=sharing). -The current `s2dv_cube` object (CSTools 5.0.0) differs from the original object used in the previous versions of the packages. If you have **questions** on this change you can follow some of the points below: - -- [New s2dv_cube object discussion](https://earth.bsc.es/gitlab/external/cstools/-/issues/94) -- [How to deal with the compatibility break](https://earth.bsc.es/gitlab/external/cstools/-/issues/112) -- [Testing issue and specifications](https://earth.bsc.es/gitlab/external/cstools/-/issues/110) +The current `s2dv_cube` object (CSTools version > 5.0.0) differs from the original object used in the previous versions of the packages. 
If you have doubts on this change you can follow some of the issues: [New s2dv_cube object discussion](https://earth.bsc.es/gitlab/external/cstools/-/issues/94), [How to deal with the compatibility break](https://earth.bsc.es/gitlab/external/cstools/-/issues/112) and [Testing issue and specifications](https://earth.bsc.es/gitlab/external/cstools/-/issues/110)
 
 Contribute
 ----------
-- 
GitLab


From dbbeabfca5ec98f1ce99e802b89070ae0a9d0801 Mon Sep 17 00:00:00 2001
From: Eva Rifa
Date: Tue, 16 Jan 2024 15:53:04 +0100
Subject: [PATCH 46/66] Update description of the s2dv_cube in README

---
 README.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/README.md b/README.md
index 6da47ff3..46edb088 100644
--- a/README.md
+++ b/README.md
@@ -47,21 +47,21 @@ Overview
 
 The CSTools package functions can be distributed in the following methods:
 
-- **Data retrieval and formatting:** CST_Load, CST_Anomaly, CST_MergeDims, CST_SplitDims, CST_Subset, as.s2dv_cube, s2dv_cube, CST_SaveExp.
+- **Data retrieval and formatting:** CST_Start, CST_SaveExp, CST_MergeDims, CST_SplitDim, CST_Subset, CST_InsertDim, CST_ChangeDimNames, as.s2dv_cube and s2dv_cube.
 - **Classification:** CST_MultiEOF, CST_WeatherRegimes, CST_RegimsAssign, CST_CategoricalEnsCombination, CST_EnsClustering.
 - **Downscaling:** CST_Analogs, CST_RainFARM, CST_RFTemp, CST_AdamontAnalog, CST_AnalogsPredictors.
-- **Correction:** CST_BEI_Weighting, CST_BiasCorrection, CST_Calibration, CST_QuantileMapping, CST_DynBiasCorrection.
+- **Correction and transformation:** CST_BEI_Weighting, CST_BiasCorrection, CST_Calibration, CST_QuantileMapping, CST_DynBiasCorrection, CST_Anomaly.
 - **Assessment:** CST_MultiMetric, CST_MultivarRMSE
 - **Visualization:** PlotCombinedMap, PlotForecastPDF, PlotMostLikelyQuantileMap, PlotPDFsOLE, PlotTriangles4Categories, PlotWeeklyClim.
 
 This package is designed to be compatible with other R packages such as [s2dv](https://CRAN.R-project.org/package=s2dv), [startR](https://CRAN.R-project.org/package=startR), [CSIndicators](https://CRAN.R-project.org/package=CSIndicators), [CSDownscale](https://earth.bsc.es/gitlab/es/csdownscale). Functions with the prefix **CST_** deal with a common object called `s2dv_cube` as inputs. Also, this object can be created from Load (s2dv) and from Start (startR) directly. Multiple functions from different packages can operate on this common data structure to easily define a complete post-processing workflow.
 
-The class `s2dv_cube` is mainly a list of named elements to keep data and metadata in a single object. Basic structure of the object:
+An `s2dv_cube` is an object that stores an ordered multidimensional array with named dimensions, specific coordinates and metadata. Its “methods” are the **CST_**-prefixed functions. The basic structure of the class `s2dv_cube` is a list of lists. The first level elements are: `data`, `dims`, `coords` and `attrs`. Any specific element can be accessed with the `$` operator.
+As an example, this is what the sample data looks like (`lonlat_temp_st$exp`):
 ```r
-> lonlat_temp_st$exp
 's2dv_cube'
-Data [ 279.994110107422, 280.337463378906, 279.450866699219, 281.992889404297, 280.921813964844, 
] Dimensions ( dataset = 1, var = 1, member = 15, sdate = 6, ftime = 3, lat = 22, lon = 53 ) Coordinates * dataset : dat1 @@ -100,7 +100,7 @@ Attributes More information about the `s2dv_cube` object class can be found here: [description of the s2dv_cube object structure document](https://docs.google.com/document/d/1ko37JFl_h6mOjDKM5QSQGikfLBKZq1naL11RkJIwtMM/edit?usp=sharing). -The current `s2dv_cube` object (CSTools version > 5.0.0) differs from the original object used in the previous versions of the packages. If you have doubts on this change you can follow some of the issues: [New s2dv_cube object discussion](https://earth.bsc.es/gitlab/external/cstools/-/issues/94), [How to deal with the compatibility break](https://earth.bsc.es/gitlab/external/cstools/-/issues/112) and [Testing issue and specifications](https://earth.bsc.es/gitlab/external/cstools/-/issues/110) +**Note:** The current `s2dv_cube` object (CSTools version > 5.0.0) differs from the original object used in the previous versions of the packages. If you have doubts on this change you can follow some of the issues: [New s2dv_cube object discussion](https://earth.bsc.es/gitlab/external/cstools/-/issues/94), [How to deal with the compatibility break](https://earth.bsc.es/gitlab/external/cstools/-/issues/112) and [Testing issue and specifications](https://earth.bsc.es/gitlab/external/cstools/-/issues/110) Contribute ---------- -- GitLab From 031b0451555da0fdc8022e3a1e8d136ecf06b779 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Tue, 16 Jan 2024 15:55:25 +0100 Subject: [PATCH 47/66] Update sentence in README --- README.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index 46edb088..581ea145 100644 --- a/README.md +++ b/README.md @@ -19,7 +19,7 @@ A part from this GitLab project, that allows you to monitor CSTools progress, to - The CRAN repository [https://CRAN.R-project.org/package=CSTools](https://CRAN.R-project.org/package=CSTools) which includes the user manual and vignettes. - Video tutorials [https://www.medscope-project.eu/products/tool-box/cstools-video-tutorials/](https://www.medscope-project.eu/products/tool-box/cstools-video-tutorials/). - Other resources are under-development such [training material](https://earth.bsc.es/gitlab/external/cstools/-/tree/MEDCOF2022/inst/doc/MEDCOF2022) and a [full reproducible use case for forecast calibration](https://earth.bsc.es/gitlab/external/cstools/-/tree/develop-CalibrationVignette/FOCUS_7_2). -- See [**use cases**](inst/doc/usecase.md) +- See and run package [**use cases**](inst/doc/usecase.md) Installation ------------ @@ -54,8 +54,6 @@ The CSTools package functions can be distributed in the following methods: - **Assessment:** CST_MultiMetric, CST_MultivarRMSE - **Visualization:** PlotCombinedMap, PlotForecastPDF, PlotMostLikelyQuantileMap, PlotPDFsOLE, PlotTriangles4Categories, PlotWeeklyClim. -This package is designed to be compatible with other R packages such as [s2dv](https://CRAN.R-project.org/package=s2dv), [startR](https://CRAN.R-project.org/package=startR), [CSIndicators](https://CRAN.R-project.org/package=CSIndicators), [CSDownscale](https://earth.bsc.es/gitlab/es/csdownscale). Functions with the prefix **CST_** deal with a common object called `s2dv_cube` as inputs. Also, this object can be created from Load (s2dv) and from Start (startR) directly. Multiple functions from different packages can operate on this common data structure to easily define a complete post-processing workflow. 
-
 An `s2dv_cube` is an object that stores an ordered multidimensional array with named dimensions, specific coordinates and metadata. Its “methods” are the **CST_**-prefixed functions. The basic structure of the class `s2dv_cube` is a list of lists. The first level elements are: `data`, `dims`, `coords` and `attrs`. Any specific element can be accessed with the `$` operator.
 As an example, this is what the sample data looks like (`lonlat_temp_st$exp`):
@@ -98,6 +96,8 @@ Attributes
 ...
 ```
+
+This package is designed to be compatible with other R packages such as [s2dv](https://CRAN.R-project.org/package=s2dv), [startR](https://CRAN.R-project.org/package=startR), [CSIndicators](https://CRAN.R-project.org/package=CSIndicators), [CSDownscale](https://earth.bsc.es/gitlab/es/csdownscale).
+
 More information about the `s2dv_cube` object class can be found here: [description of the s2dv_cube object structure document](https://docs.google.com/document/d/1ko37JFl_h6mOjDKM5QSQGikfLBKZq1naL11RkJIwtMM/edit?usp=sharing).
 
 **Note:** The current `s2dv_cube` object (CSTools version > 5.0.0) differs from the original object used in the previous versions of the packages. If you have doubts on this change you can follow some of the issues: [New s2dv_cube object discussion](https://earth.bsc.es/gitlab/external/cstools/-/issues/94), [How to deal with the compatibility break](https://earth.bsc.es/gitlab/external/cstools/-/issues/112) and [Testing issue and specifications](https://earth.bsc.es/gitlab/external/cstools/-/issues/110)
 
 Contribute
 ----------
-- 
GitLab


From fac6c02a8be5a3d2794e929dade255cd1b43024a Mon Sep 17 00:00:00 2001
From: Eva Rifa
Date: Tue, 16 Jan 2024 16:52:46 +0100
Subject: [PATCH 48/66] Add subset example and modify names of use case saveexp

---
 inst/doc/usecase/ex1_create.R | 26 ++++
 .../{UseCase4_CST_SaveExp.R => ex2_save.R} | 21 ++-
 inst/doc/usecase/ex3_modify_dims.R | 26 ++++
 inst/doc/usecase/ex4_subset.R | 133 ++++++++++++++
 4 files changed, 195 insertions(+), 11 deletions(-)
 create mode 100644 inst/doc/usecase/ex1_create.R
 rename inst/doc/usecase/{UseCase4_CST_SaveExp.R => ex2_save.R} (97%)
 create mode 100644 inst/doc/usecase/ex3_modify_dims.R
 create mode 100644 inst/doc/usecase/ex4_subset.R

diff --git a/inst/doc/usecase/ex1_create.R b/inst/doc/usecase/ex1_create.R
new file mode 100644
index 00000000..b1bd9f0e
--- /dev/null
+++ b/inst/doc/usecase/ex1_create.R
@@ -0,0 +1,26 @@
+#*******************************************************************************
+# Title: Script to test examples of CST_SaveExp
+# Author: Eva Rifà Rovira
+# Date: 29/11/2024
+#*******************************************************************************
+
+#-------------------------------------------------------------------------------
+# Needed packages before a new version is installed
+library(CSIndicators)
+library(multiApply)
+library(easyNCDF)
+library(s2dv)
+library(ClimProjDiags)
+library(CSTools)
+library(startR)
+source("https://earth.bsc.es/gitlab/external/cstools/-/raw/master/R/CST_SaveExp.R")
+source("https://earth.bsc.es/gitlab/external/cstools/-/raw/master/R/zzz.R")
+
+################################################################################
+#-----------------------------------------------------
+# Example 1: Multidimensional array and Dates, without metadata and coordinates
+#-----------------------------------------------------
+# (1.1) Minimal use case, without Dates
+
+
+################################################################################
\ No newline at end of file

diff --git 
a/inst/doc/usecase/UseCase4_CST_SaveExp.R b/inst/doc/usecase/ex2_save.R similarity index 97% rename from inst/doc/usecase/UseCase4_CST_SaveExp.R rename to inst/doc/usecase/ex2_save.R index 926754f3..ba8da370 100644 --- a/inst/doc/usecase/UseCase4_CST_SaveExp.R +++ b/inst/doc/usecase/ex2_save.R @@ -1,7 +1,7 @@ #******************************************************************************* -# Script to test examples of CST_SaveExp -# Eva Rifà Rovira -# 29/11/2024 +# Title: Script to test examples of CST_SaveExp +# Author: Eva Rifà Rovira +# Date: 29/11/2024 #******************************************************************************* #------------------------------------------------------------------------------- @@ -13,13 +13,12 @@ library(s2dv) library(ClimProjDiags) library(CSTools) library(startR) -source("https://earth.bsc.es/gitlab/external/cstools/-/raw/develop-SaveCube/R/CST_SaveExp.R") -source("https://earth.bsc.es/gitlab/external/cstools/-/raw/develop-SaveCube/R/zzz.R") +source("https://earth.bsc.es/gitlab/external/cstools/-/raw/master/R/CST_SaveExp.R") +source("https://earth.bsc.es/gitlab/external/cstools/-/raw/master/R/zzz.R") ################################################################################ -# Tests: #----------------------------------------------------- -# Tests 1: Multidimensional array and Dates, without metadata and coordinates +# Example 1: Multidimensional array and Dates, without metadata and coordinates #----------------------------------------------------- # (1.1) Minimal use case, without Dates data <- array(1:5, dim = c(sdate = 5, lon = 4, lat = 4)) @@ -70,7 +69,7 @@ SaveExp(data, ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL, extra_string = 'test', global_attrs = list(system = 'tes1', reference = 'test2')) #----------------------------------------------------- -# Tests 2: Test sample data from Start and from Load +# Example 2: Test sample data from Start and from Load #----------------------------------------------------- # (2.1) Test SaveExp @@ -267,7 +266,7 @@ CST_SaveExp(data = data, ftime_dim = NULL, single_file = FALSE, units_hours_since = FALSE) #----------------------------------------------------- -# Test 3: Special cases +# Example 3: Special cases #----------------------------------------------------- # (3.1) Two variables and two datasets in separated files @@ -377,7 +376,7 @@ CST_SaveExp(data = obscube, ftime_dim = 'time', var_dim = 'var', single_file = FALSE, extra_string = 'obs_tas') #----------------------------------------------------- -# Test 4: Time bounds: +# Example 4: Time bounds: #----------------------------------------------------- # example: /esarchive/exp/ncep/cfs-v2/weekly_mean/s2s/tas_f24h/tas_20231128.nc @@ -440,7 +439,7 @@ CST_SaveExp(data = res, ftime_dim = 'time', var_dim = 'var', units_hours_since = FALSE) #----------------------------------------------------- -# Test 5: Read data with Load +# Example 5: Read data with Load #----------------------------------------------------- data <- lonlat_temp$exp diff --git a/inst/doc/usecase/ex3_modify_dims.R b/inst/doc/usecase/ex3_modify_dims.R new file mode 100644 index 00000000..b1bd9f0e --- /dev/null +++ b/inst/doc/usecase/ex3_modify_dims.R @@ -0,0 +1,26 @@ +#******************************************************************************* +# Title: Script to test examples of CST_SaveExp +# Author: Eva Rifà Rovira +# Date: 29/11/2024 +#******************************************************************************* + 
+#------------------------------------------------------------------------------- +# Needed packages before a new version is installed +library(CSIndicators) +library(multiApply) +library(easyNCDF) +library(s2dv) +library(ClimProjDiags) +library(CSTools) +library(startR) +source("https://earth.bsc.es/gitlab/external/cstools/-/raw/master/R/CST_SaveExp.R") +source("https://earth.bsc.es/gitlab/external/cstools/-/raw/master/R/zzz.R") + +################################################################################ +#----------------------------------------------------- +# Example 1: Multidimensional array and Dates, without metadata and coordinates +#----------------------------------------------------- +# (1.1) Minimal use case, without Dates + + +################################################################################ \ No newline at end of file diff --git a/inst/doc/usecase/ex4_subset.R b/inst/doc/usecase/ex4_subset.R new file mode 100644 index 00000000..57dadfa5 --- /dev/null +++ b/inst/doc/usecase/ex4_subset.R @@ -0,0 +1,133 @@ +#******************************************************************************* +# Title: Script to test examples of CST_Subset +# Author: Eva Rifà Rovira +# Date: 16/11/2024 +#******************************************************************************* +# This example shows how to subset an array. The basic function to subset +# arrays is Subset, from the package ClimProjDiags. Then, in CSTools, there is +# the CST version of the function to subset all other related information +# of the object 's2dv_cube'. +#------------------------------------------------------------------------------- +# Needed packages before a new version is installed +library(CSTools) +library(ClimProjDiags) +source("https://earth.bsc.es/gitlab/es/ClimProjDiags/-/raw/master/R/Subset.R") +source("https://earth.bsc.es/gitlab/external/cstools/-/raw/master/R/CST_Subset.R") + +################################################################################ +#----------------------------------------------------- +# Example 1: Subset an example array +#----------------------------------------------------- +# (1.1) Minimal use case. Spatial coordinates subset. +dat <- array(1:100, dim = c(lat = 10, lon = 10)) +# > dat +# [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] +# [1,] 1 11 21 31 41 51 61 71 81 91 +# [2,] 2 12 22 32 42 52 62 72 82 92 +# [3,] 3 13 23 33 43 53 63 73 83 93 +# [4,] 4 14 24 34 44 54 64 74 84 94 +# [5,] 5 15 25 35 45 55 65 75 85 95 +# [6,] 6 16 26 36 46 56 66 76 86 96 +# [7,] 7 17 27 37 47 57 67 77 87 97 +# [8,] 8 18 28 38 48 58 68 78 88 98 +# [9,] 9 19 29 39 49 59 69 79 89 99 +# [10,] 10 20 30 40 50 60 70 80 90 100 +dat_subset <- Subset(x = dat, along = c('lat', 'lon'), indices = list(1:5, 1:7), + drop = 'all') +# > dat_subset +# [,1] [,2] [,3] [,4] [,5] [,6] [,7] +# [1,] 1 11 21 31 41 51 61 +# [2,] 2 12 22 32 42 52 62 +# [3,] 3 13 23 33 43 53 63 +# [4,] 4 14 24 34 44 54 64 +# [5,] 5 15 25 35 45 55 65 + +#----------------------------------------------------- +# Example 2: Subset an 's2dv_cube' using sample data +#----------------------------------------------------- +# (2.1) We don't want to drop any dimension, select only the first member, +# the first and the second start dates, and also subset the longitude and +# keep only the values from [0, 21]: + +dat <- lonlat_temp_st$exp + +# > dat$dims +# dataset var member sdate ftime lat lon +# 1 1 15 6 3 22 53 + +# > dat +# 's2dv_cube' +# Data [ 279.994110107422, 280.337463378906, 279.450866699219, ... 
] +# Dimensions ( dataset = 1, var = 1, member = 15, sdate = 6, ftime = 3, +# lat = 22, lon = 53 ) +# Coordinates +# * dataset : dat1 +# * var : tas +# member : 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 +# * sdate : 20001101, 20011101, 20021101, 20031101, 20041101, 20051101 +# ftime : 1, 2, 3 +# * lat : 48, 47, 46, 45, 44, 43, 42, 41, 40, 39, 38, 37, 36, 35, 34, ... +# * lon : 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, ... +# Attributes +# Dates : 2000-11-01 2001-11-01 2002-11-01 2003-11-01 2004-11-01 ... +# varName : tas +# metadata : +# lat +# units : degrees_north +# long name : latitude +# lon +# units : degrees_east +# long name : longitude +# ftime +# units : hours since 2000-11-01 00:00:00 +# tas +# units : K +# long name : 2 metre temperature +# Datasets : dat1 +# when : 2023-10-02 10:11:06 +# source_files : /monthly_mean/tas_f6h/tas_20001101.nc ... +# load_parameters : +# ( dat1 ) : dataset = dat1, var = tas, sdate = 20001101 ... +# ... + +dat_subset <- CST_Subset(x = dat, along = c('member', 'sdate', 'lon'), + indices = list(1, 1:2, 1:22), drop = 'none') + +# > dat_subset +# 's2dv_cube' +# Data [ 279.994110107422, 277.161102294922, 278.825836181641, 276.8271484375, 276.052703857422, 276.950805664062, 280.677215576172, 277.285247802734 ... ] +# Dimensions ( dataset = 1, var = 1, member = 1, sdate = 2, ftime = 3, lat = 22, lon = 22 ) +# Coordinates +# * dataset : dat1 +# * var : tas +# member : 1 +# * sdate : 20001101, 20011101 +# ftime : 1, 2, 3 +# * lat : 48, 47, 46, 45, 44, 43, 42, 41, 40, 39, 38, 37, 36, 35, 34, 33, 32, 31, 30, 29, 28, 27 +# * lon : 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21 +# Attributes +# Dates : 2000-11-01 2001-11-01 2000-12-01 2001-12-01 2001-01-01 ... +# varName : tas +# metadata : +# ftime +# units : hours since 2000-11-01 00:00:00 +# other : ndims, size, standard_name, calendar +# lat +# units : degrees_north +# long name : latitude +# other : ndims, size, standard_name, axis +# lon +# units : degrees_east +# long name : longitude +# other : ndims, size, standard_name, axis +# tas +# units : K +# long name : 2 metre temperature +# other : prec, dim, unlim, make_missing_value, missval, hasAddOffset, hasScaleFact, code, table +# Datasets : dat1 +# when : 2023-10-02 10:11:06 +# source_files : /esarchive/exp/ecmwf/system5c3s/monthly_mean/tas_f6h/tas_20001101.nc ... +# load_parameters : +# ( dat1 ) : dataset = dat1, var = tas, sdate = 20001101 ... +# ... 
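+
+# As an additional sketch (an illustrative extra, assuming the same 'dat' as
+# above): 'drop' can also take the value 'selected', which drops only the
+# subset dimensions that end up with length 1 ('member' in this case):
+dat_drop <- CST_Subset(x = dat, along = c('member', 'sdate', 'lon'),
+                       indices = list(1, 1:2, 1:22), drop = 'selected')
+# > dim(dat_drop$data)
+# dataset     var   sdate   ftime     lat     lon
+#       1       1       2       3      22      22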
+################################################################################ \ No newline at end of file -- GitLab From 2925f1ae282e86d2c4bebe54da54a169246fe562 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Tue, 16 Jan 2024 17:26:06 +0100 Subject: [PATCH 49/66] Correct function namespace of CST_Subset and add use case of creating an 's2dv_cube' --- R/CST_Subset.R | 28 +++++++------- inst/doc/usecase/ex1_create.R | 71 ++++++++++++++++++++++++++++------- 2 files changed, 71 insertions(+), 28 deletions(-) diff --git a/R/CST_Subset.R b/R/CST_Subset.R index 28c8498c..098df3df 100644 --- a/R/CST_Subset.R +++ b/R/CST_Subset.R @@ -79,9 +79,9 @@ CST_Subset <- function(x, along, indices, drop = FALSE, var_dim = NULL, } # Subset data - x$data <- ClimProjDiags::Subset(x$data, along = along, - indices = indices, - drop = drop) + x$data <- Subset(x$data, along = along, + indices = indices, + drop = drop) # Adjust dimensions x$dims <- dim(x$data) # Adjust coordinates @@ -113,10 +113,10 @@ CST_Subset <- function(x, along, indices, drop = FALSE, var_dim = NULL, } if ((!is.null(x$attrs$source_files)) && (dim_name %in% names(dim(x$attrs$source_files)))) { - x$attrs$source_files <- ClimProjDiags::Subset(x$attrs$source_files, - along = dim_name, - indices = index, - drop = drop) + x$attrs$source_files <- Subset(x$attrs$source_files, + along = dim_name, + indices = index, + drop = drop) } } # Remove metadata from variables that were dropped @@ -128,10 +128,10 @@ CST_Subset <- function(x, along, indices, drop = FALSE, var_dim = NULL, if (!(length(time_along) == 0)) { time_indices <- indices[match(time_along, along)] original_dates <- x$attrs$Dates - x$attrs$Dates <- ClimProjDiags::Subset(x$attrs$Dates, - along = time_along, - indices = time_indices, - drop = drop) + x$attrs$Dates <- Subset(x$attrs$Dates, + along = time_along, + indices = time_indices, + drop = drop) } # Subset metadata for (variable in 1:length(names(x$attrs$Variable$metadata))) { @@ -153,9 +153,9 @@ CST_Subset <- function(x, along, indices, drop = FALSE, var_dim = NULL, if (any(is.null(dim(x)), length(dim(x)) == 1)) { l <- x[args_subset[['indices']][[1]]] } else { - l <- ClimProjDiags::Subset(x, along = args_subset[['along']], - indices = args_subset[['indices']], - drop = args_subset[['drop']]) + l <- Subset(x, along = args_subset[['along']], + indices = args_subset[['indices']], + drop = args_subset[['drop']]) } attr.names <- names(attributes(x)) attr.names <- attr.names[attr.names != 'names'] diff --git a/inst/doc/usecase/ex1_create.R b/inst/doc/usecase/ex1_create.R index b1bd9f0e..afe7ece1 100644 --- a/inst/doc/usecase/ex1_create.R +++ b/inst/doc/usecase/ex1_create.R @@ -1,26 +1,69 @@ #******************************************************************************* -# Title: Script to test examples of CST_SaveExp +# Title: Example script to create 's2dv_cube' objects # Author: Eva Rifà Rovira -# Date: 29/11/2024 +# Date: 16/01/2024 #******************************************************************************* - +# This example shows how to create an 's2dv_cube' object. +# There are two ways of creating an 's2dv_cube' object. The first way is +# to use the function s2dv_cube(): create an 's2dv_cube' from scratch with any +# data. In the second example we see the other method with the function +# as.s2dv_cube(). This function is to create an 's2dv_cube' from a +# 'startR_array' or 'load' object. 
#------------------------------------------------------------------------------- -# Needed packages before a new version is installed -library(CSIndicators) -library(multiApply) -library(easyNCDF) -library(s2dv) -library(ClimProjDiags) +# Needed packages library(CSTools) -library(startR) -source("https://earth.bsc.es/gitlab/external/cstools/-/raw/master/R/CST_SaveExp.R") -source("https://earth.bsc.es/gitlab/external/cstools/-/raw/master/R/zzz.R") ################################################################################ #----------------------------------------------------- -# Example 1: Multidimensional array and Dates, without metadata and coordinates +# Example 1: Function s2dv_cube() from defined data +#----------------------------------------------------- +# (1.1) Minimal use case, with s2dv_cube function +# We define the array with named dimensions: +dat <- array(1:100, dim = c(time = 10, lat = 4, lon = 10)) +# We define the coordinates as a list of vectors: +coords <- list(time = 1:10, lat = 43:40, lon = 0:9) +# The metadata: +metadata <- list(tas = list(level = '2m'), + lon = list(cdo_grid_name = 'r360x181'), + lat = list(cdo_grid_name = 'r360x181')) +# The creation of Dates array. First the initial date: +ini_date <- as.POSIXct('2010-01-01', format = '%Y-%m-%d') +# The sequence of dates +dates <- seq(ini_date, by = 'days', length.out = 10) +# We define the dates dimensions +dim(dates) <- c(time = 10) +dat_cube <- s2dv_cube(data = dat, coords = coords, + varName = 'tas', metadata = metadata, + Dates = dates, + when = "2019-10-23 19:15:29 CET", + source_files = c("/path/to/file1.nc", "/path/to/file2.nc"), + Datasets = 'test_dataset') +# We print the result: +# > dat_cube +# 's2dv_cube' +# Data [ 1, 2, 3, 4, 5, 6, 7, 8 ... ] +# Dimensions ( time = 10, lat = 4, lon = 10 ) +# Coordinates +# * time : 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 +# * lat : 43, 42, 41, 40 +# * lon : 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 +# Attributes +# Dates : 2010-01-01 2010-01-02 2010-01-03 2010-01-04 2010-01-05 ... +# varName : tas +# metadata : +# tas +# other : level +# lon +# other : cdo_grid_name +# lat +# other : cdo_grid_name +# Datasets : test_dataset +# when : 2019-10-23 19:15:29 CET +# source_files : /path/to/file1.nc ... + +#----------------------------------------------------- +# Example 2: Function as.s2dv_cube() from 'startR_array' #----------------------------------------------------- -# (1.1) Minimal use case, without Dates ################################################################################ \ No newline at end of file -- GitLab From 02e1d48dc69744f844f33290698b513559eb6842 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Wed, 17 Jan 2024 12:03:06 +0100 Subject: [PATCH 50/66] Add example 1, create an s2dv_cube --- inst/doc/usecase/ex1_create.R | 141 +++++++++++++++++++++++++++++++++- 1 file changed, 138 insertions(+), 3 deletions(-) diff --git a/inst/doc/usecase/ex1_create.R b/inst/doc/usecase/ex1_create.R index afe7ece1..2af830fb 100644 --- a/inst/doc/usecase/ex1_create.R +++ b/inst/doc/usecase/ex1_create.R @@ -7,18 +7,22 @@ # There are two ways of creating an 's2dv_cube' object. The first way is # to use the function s2dv_cube(): create an 's2dv_cube' from scratch with any # data. In the second example we see the other method with the function -# as.s2dv_cube(). This function is to create an 's2dv_cube' from a +# CST_Start(). This function is to create an 's2dv_cube' from a # 'startR_array' or 'load' object. 
#------------------------------------------------------------------------------- # Needed packages library(CSTools) - +library(startR) ################################################################################ #----------------------------------------------------- # Example 1: Function s2dv_cube() from defined data #----------------------------------------------------- # (1.1) Minimal use case, with s2dv_cube function # We define the array with named dimensions: + +# In this example we use the function s2dv_cube() to create an object of class +# 's2dv_cube' with the correct structure. + dat <- array(1:100, dim = c(time = 10, lat = 4, lon = 10)) # We define the coordinates as a list of vectors: coords <- list(time = 1:10, lat = 43:40, lon = 0:9) @@ -62,8 +66,139 @@ dat_cube <- s2dv_cube(data = dat, coords = coords, # source_files : /path/to/file1.nc ... #----------------------------------------------------- -# Example 2: Function as.s2dv_cube() from 'startR_array' +# Example 2: Function as.s2dv_cube() #----------------------------------------------------- +# (2.1) Using CST_Start + +# For this case, we use a random example. + +# NOTE 1: CST_Start() is just a wrapper of function Start() with the transformation +# to 's2dv_cube' object. +# NOTE 2: In order that the input argument auxiliary functions from startR +# work, we need to call them explicitly the startR namespace. +# (e.g. startR::indices()) + +repos1 <- "/esarchive/exp/ecmwf/system5_m1/monthly_mean/$var$_f6h/$var$_$sdate$.nc" +repos2 <- "/esarchive/exp/ecmwf/system4_m1/monthly_mean/$var$_f6h/$var$_$sdate$.nc" + +res <- CST_Start(dat = list(list(name = 'system4_m1', path = repos2), + list(name = 'system5_m1', path = repos1)), + var = c('tas', 'sfcWind'), + sdate = c('20160101', '20170101'), + ensemble = startR::indices(1:2), + time = startR::indices(1:2), + lat = startR::indices(1:10), + lon = startR::indices(1:10), + synonims = list(lat = c('lat', 'latitude'), + lon = c('lon', 'longitude')), + return_vars = list(time = 'sdate', + longitude = 'dat', + latitude = 'dat'), + metadata_dims = c('dat', 'var'), + retrieve = TRUE) + + +# Now we can explore the object +# 1st level +names(res) +# "data" "dims" "coords" "attrs" + +dim(res$data) +# dat var sdate ensemble time lat lon +# 2 2 2 2 2 10 10 + +res$coords$lon +# [1] 0.000000 0.703125 1.406250 2.109375 2.812500 3.515625 4.218750 4.921875 +# [9] 5.625000 6.328125 +attr(res$coords$lon, 'indices') +# [1] FALSE +# NOTE: The attribute 'indices' is FALSE, it means that the longitude elements +# are the actual values of longitude coordinate. + +res$coords$ensemble +# [1] 1 2 +# attr(,"indices") +# [1] TRUE +# Now we take a look into the Dates array. It must have the time dimensions +# of the data. +dim(res$attrs$Dates) +# sdate time +# 2 2 + +# To see the nested list structure of the object, we just need to use the +# function str(): +str(res) + +# (2.1) Using as.s2dv_cube() function +# We'll load the data with Start and then we'll transform the 'startR_array' +# to 's2dv_cube' object with the function as.s2dv_cube(). 
We are going +# to load the same data as before: +repos1 <- "/esarchive/exp/ecmwf/system5_m1/monthly_mean/$var$_f6h/$var$_$sdate$.nc" +repos2 <- "/esarchive/exp/ecmwf/system4_m1/monthly_mean/$var$_f6h/$var$_$sdate$.nc" + +res <- Start(dat = list(list(name = 'system4_m1', path = repos2), + list(name = 'system5_m1', path = repos1)), + var = c('tas', 'sfcWind'), + sdate = c('20160101', '20170101'), + ensemble = startR::indices(1:2), + time = startR::indices(1:2), + lat = startR::indices(1:10), + lon = startR::indices(1:10), + synonims = list(lat = c('lat', 'latitude'), + lon = c('lon', 'longitude')), + return_vars = list(time = 'sdate', + longitude = 'dat', + latitude = 'dat'), + metadata_dims = c('dat', 'var'), + retrieve = TRUE) + +# Now, we use the function as.s2dv_cube() to transform the 'startR_array' +# into an 's2dv_cube': +res_cube <- as.s2dv_cube(res) +# If we call directly the object directly into the terminal, we can see +# all the elements nicely: + +# > res_cube +# 's2dv_cube' +# Data [ 248.241973876953, 247.365753173828, 6.80753087997437, 5.46453714370728, 247.256896972656, 248.500869750977, 6.25862503051758, 5.76889991760254 ... ] +# Dimensions ( dat = 2, var = 2, sdate = 2, ensemble = 2, time = 2, lat = 10, lon = 10 ) +# Coordinates +# * dat : system4_m1, system5_m1 +# * var : tas, sfcWind +# * sdate : 20160101, 20170101 +# ensemble : 1, 2 +# time : 1, 2 +# * lat : 89.4628215685774, 88.7669513528422, 88.0669716474306, 87.366063433082, 86.6648030134408, 85.9633721608804, 85.2618460607126, 84.5602613830534, 83.8586381286076, 83.1569881285417 +# * lon : 0, 0.703125, 1.40625, 2.109375, 2.8125, 3.515625, 4.21875, 4.921875, 5.625, 6.328125 +# Attributes +# Dates : 2016-02-01 2017-02-01 2016-03-01 2017-03-01 +# varName : tas sfcWind +# metadata : +# time +# units : hours since 2016-01-01 00:00:00 +# other : ndims, size, standard_name, calendar +# lon +# units : degrees_east +# long name : longitude +# other : ndims, size, standard_name, axis +# lat +# units : degrees_north +# long name : latitude +# other : ndims, size, standard_name, axis +# tas +# units : K +# long name : 2 metre temperature +# other : prec, dim, unlim, make_missing_value, missval, hasAddOffset, hasScaleFact, code, table, grid_type +# sfcWind +# units : m s**-1 +# long name : 10 meter windspeed +# other : prec, dim, unlim, make_missing_value, missval, hasAddOffset, hasScaleFact, code, table, grid_type +# Datasets : system4_m1 ... +# when : 2024-01-17 11:38:27 +# source_files : /esarchive/exp/ecmwf/system4_m1/monthly_mean/tas_f6h/tas_20160101.nc ... +# load_parameters : +# ( system4_m1 ) : dat = system4_m1, var = tas ..., sdate = 20160101 ... +# ... ################################################################################ \ No newline at end of file -- GitLab From 8b2f914aa068d5aaa91f5d109f026c2143137622 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Wed, 17 Jan 2024 12:06:28 +0100 Subject: [PATCH 51/66] Add use cases description in file usecase.md --- inst/doc/usecase.md | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/inst/doc/usecase.md b/inst/doc/usecase.md index 8ebf0478..9b2783a4 100644 --- a/inst/doc/usecase.md +++ b/inst/doc/usecase.md @@ -9,4 +9,8 @@ In this document, you can link to the example scripts for different usage of the 4. [Seasonal forecasts for a river flow](inst/doc/usecase/UseCase3_data_preparation_SCHEME_model.R) 2. **Examples using 's2dv_cube'** - 1. [Save 's2dv_cube'](inst/doc/usecase/UseCase4_SaveExp.R) \ No newline at end of file + 1. 
[Create an 's2dv_cube'](inst/doc/usecase/ex1_create.R)
+ 2. [Save 's2dv_cube'](inst/doc/usecase/ex2_save.R)
+ 3. [Modify any 's2dv_cube' dimension](inst/doc/usecase/ex3_modify_dims.R)
+ 4. [Subset any 's2dv_cube' dimension](inst/doc/usecase/ex4_subset.R)
+ 
\ No newline at end of file
-- 
GitLab


From ee957adc4c0fd458f7298d1a40ba107012af1cc1 Mon Sep 17 00:00:00 2001
From: Eva Rifa
Date: Thu, 18 Jan 2024 17:31:44 +0100
Subject: [PATCH 52/66] Add examples 2, 3 and 4; improve CST_MergeDims

---
 R/CST_MergeDims.R                  | 26 +++++-
 inst/doc/usecase.md                | 9 +-
 inst/doc/usecase/ex2_save.R        | 2 +-
 inst/doc/usecase/ex3_modify_dims.R | 137 ++++++++++++++++++++++++++---
 inst/doc/usecase/ex4_subset.R      | 2 +-
 5 files changed, 153 insertions(+), 23 deletions(-)

diff --git a/R/CST_MergeDims.R b/R/CST_MergeDims.R
index 4b66629e..50bab764 100644
--- a/R/CST_MergeDims.R
+++ b/R/CST_MergeDims.R
@@ -15,8 +15,6 @@
 #' \code{merge_dims} will be used.
 #'@param na.rm A logical indicating if the NA values should be removed or not.
 #'
-#'@import abind
-#'@importFrom ClimProjDiags Subset
 #'@examples
 #'data <- 1 : c(2 * 3 * 4 * 5 * 6 * 7)
 #'dim(data) <- c(time = 7, lat = 2, lon = 3, monthly = 4, member = 6,
@@ -35,8 +33,28 @@ CST_MergeDims <- function(data, merge_dims = c('ftime', 'monthly'),
   if (!inherits(data, 's2dv_cube')) {
     stop("Parameter 'data' must be of the class 's2dv_cube'.")
   }
+  if (is.null(rename_dim)) {
+    rename_dim <- merge_dims[1]
+  }
+  # data
   data$data <- MergeDims(data$data, merge_dims = merge_dims,
                          rename_dim = rename_dim, na.rm = na.rm)
+  # dims
+  data$dims <- dim(data$data)
+
+  # coords
+  data$coords[merge_dims] <- NULL
+  data$coords[[rename_dim]] <- 1:dim(data$data)[rename_dim]
+  attr(data$coords[[rename_dim]], 'indices') <- TRUE
+
+  # attrs
+  if (all(merge_dims %in% names(dim(data$attrs$Dates)))) {
+    dim(data$attrs$Dates) <- dim(data$data)[rename_dim]
+  } else if (any(merge_dims %in% names(dim(data$attrs$Dates)))) {
+    warning("The dimensions of 'Dates' array will be different from ",
+            "the temporal dimensions in 'data'. Parameter 'merge_dims' ",
+            "only includes one temporal dimension of 'Dates'.")
+  }
   return(data)
 }
 #'Function to Split Dimension
@@ -55,12 +73,12 @@ CST_MergeDims <- function(data, merge_dims = c('ftime', 'monthly'),
 #' \code{merge_dims} will be used.
 #'@param na.rm A logical indicating if the NA values should be removed or not.
 #'
-#'@import abind
-#'@importFrom ClimProjDiags Subset
 #'@examples
 #'data <- 1 : 20
 #'dim(data) <- c(time = 10, lat = 2)
 #'new_data <- MergeDims(data, merge_dims = c('time', 'lat'))
+#'@import abind
+#'@importFrom ClimProjDiags Subset
 #'@export
 MergeDims <- function(data, merge_dims = c('time', 'monthly'),
                       rename_dim = NULL, na.rm = FALSE) {
diff --git a/inst/doc/usecase.md b/inst/doc/usecase.md
index 9b2783a4..272964ab 100644
--- a/inst/doc/usecase.md
+++ b/inst/doc/usecase.md
@@ -1,16 +1,15 @@
-# Usecase scripts
+# Use case and example scripts
 
-In this document, you can link to the example scripts for different usage of the function:
+In this document, you will find example scripts of the package. The first ones are use cases of climate data assessment. The second ones are example scripts on the use of the 's2dv_cube' object.
 
-1. **Climate data assesment and downscaling**
+1. **Use cases of climate data assessment and downscaling**
    1. [Bias adjustment for assessment of an extreme event](inst/doc/usecase/UseCase1_WindEvent_March2018.R)
    2. [Precipitation Downscaling with RainFARM RF 4](inst/doc/usecase/UseCase2_PrecipitationDownscaling_RainFARM_RF4.R)
    3. 
[Precipitation Downscaling with RainFARM RF 100](inst/doc/usecase/UseCase2_PrecipitationDownscaling_RainFARM_RF100.R) 4. [Seasonal forecasts for a river flow](inst/doc/usecase/UseCase3_data_preparation_SCHEME_model.R) -2. **Examples using 's2dv_cube'** +2. **Examples on how to use 's2dv_cube'** 1. [Create an 's2dv_cube'](inst/doc/usecase/ex1_create.R) 2. [Save 's2dv_cube'](inst/doc/usecase/ex2_save.R) 3. [Modify any 's2dv_cube' dimension](inst/doc/usecase/ex3_modify_dims.R) 4. [Subset any 's2dv_cube' dimension](inst/doc/usecase/ex4_subset.R) - \ No newline at end of file diff --git a/inst/doc/usecase/ex2_save.R b/inst/doc/usecase/ex2_save.R index ba8da370..0f8235c2 100644 --- a/inst/doc/usecase/ex2_save.R +++ b/inst/doc/usecase/ex2_save.R @@ -1,5 +1,5 @@ #******************************************************************************* -# Title: Script to test examples of CST_SaveExp +# Title: Example script to save 's2dv_cube' to NetCDF using CST_SaveExp # Author: Eva Rifà Rovira # Date: 29/11/2024 #******************************************************************************* diff --git a/inst/doc/usecase/ex3_modify_dims.R b/inst/doc/usecase/ex3_modify_dims.R index b1bd9f0e..6db1d2b6 100644 --- a/inst/doc/usecase/ex3_modify_dims.R +++ b/inst/doc/usecase/ex3_modify_dims.R @@ -1,26 +1,139 @@ #******************************************************************************* -# Title: Script to test examples of CST_SaveExp +# Title: Script to modify the dimensions of the 's2dv_cube' # Author: Eva Rifà Rovira -# Date: 29/11/2024 +# Date: 18/01/2024 #******************************************************************************* #------------------------------------------------------------------------------- # Needed packages before a new version is installed -library(CSIndicators) -library(multiApply) -library(easyNCDF) -library(s2dv) -library(ClimProjDiags) library(CSTools) -library(startR) -source("https://earth.bsc.es/gitlab/external/cstools/-/raw/master/R/CST_SaveExp.R") -source("https://earth.bsc.es/gitlab/external/cstools/-/raw/master/R/zzz.R") +source("https://earth.bsc.es/gitlab/external/cstools/-/raw/master/R/CST_ChangeDimNames.R") ################################################################################ +# NOTE: for all the examples, we are going to use the sample data. #----------------------------------------------------- -# Example 1: Multidimensional array and Dates, without metadata and coordinates +# Example 1: Change dimension names with CST_ChangeDimNames #----------------------------------------------------- -# (1.1) Minimal use case, without Dates +# With using this function, we can change the dimension names in all elements +# of the 's2dv_cube' object: +# Example with sample data: +# Check original dimensions and coordinates +lonlat_temp$exp$dims +names(lonlat_temp$exp$coords) +dim(lonlat_temp$exp$attrs$Dates) +# Change 'dataset' to 'dat' and 'ftime' to 'time' +exp <- CST_ChangeDimNames(lonlat_temp$exp, + original_names = c("dataset", "ftime", "lon", "lat"), + new_names = c("dat", "time", "longitude", "latitude")) +# Check new dimensions and coordinates +exp$dims +names(exp$coords) +dim(exp$attrs$Dates) +#----------------------------------------------------- +# Example 2: Insert a new dimension with CST_InsertDim +#----------------------------------------------------- +# With this function, we can add a dimension into the 's2dv_cube'. +# When the dimension that we want to add has lenght greater than 1, the +# values of the data are repeated for that new dimension. 
+ +# Example with sample data: +# Check original dimensions and coordinates +lonlat_temp$exp$dims +names(lonlat_temp$exp$coords) +# Add 'variable' dimension +exp <- CST_InsertDim(lonlat_temp$exp, + posdim = 2, + lendim = 2, + name = "variable", + values = c("tas", "tos")) +# Check new dimensions and coordinates +exp$dims +exp$coords$variable +# We see that the values will be repeated along the new dimension: +exp$data[, , 1, 1, 1, 1, 1] + +#----------------------------------------------------- +# Example 3: Merge two dimensions with CST_MergeDims +#----------------------------------------------------- +# In this example, we will merge the dimensions corresponding to the latitude +# and the longitude of the data. The new dimension will be named 'grid'. +new_data <- CST_MergeDims(lonlat_temp$exp, merge_dims = c('lat', 'lon'), + rename_dim = 'grid') +dim(new_data$data) +# dataset member sdate ftime grid +# 1 15 6 3 1166 +names(new_data$coords) +# [1] "dataset" "member" "sdate" "ftime" "grid" +new_data +# NOTE: Be aware that when we print the object, we see that its name in +# "Coordinates" field appears without the asterisk (*) at its left. This means +# that the values of that coordinate, are indices, not the actual values. We +# can also find this information with the attribute "indices": +attributes(new_data$coords$grid) +# $indices +# [1] TRUE + +# Now, we want to merge time dimensions start date and forecast time: +new_data <- CST_MergeDims(data = lonlat_temp_st$exp, merge_dims = c('sdate', 'ftime')) +# In this case, the Dates dimensions will be merged too. +dim(new_data$attrs$Dates) +# sdate +# 18 + +# However, when we want to merge temporal and other dimensions nature, +# the Dates dimensions are kept as the original. In this case, the function +# returns a Warning Message, we must pay attention! +new_data <- CST_MergeDims(data = lonlat_temp$exp, + merge_dims = c('lat', 'ftime'), + rename_dim = 'newdim') + +#----------------------------------------------------- +# Example 4: Split two dimensions with CST_SplitDim +#----------------------------------------------------- +# In this example, we will start working first with the function SplitDim, +# that it can be used to split dimensions of an array. + +# NOTE: Take into account that time dimensions will be treated differently than +# other dimensions: + +# Decadal example: We define an array of consecutive days of different years: +dates <- seq(as.Date("01-01-2000", "%d-%m-%Y", tz = 'UTC'), + as.Date("31-12-2005","%d-%m-%Y", tz = 'UTC'), "day") +dim(dates) <- c(time = 2192) + +# Now, we will split the array in a new 'year' dimension: +dates_year <- SplitDim(dates, indices = dates, + split_dim = 'time', freq = 'year') +# time year +# 366 6 +# Now, we can try: freq = 'month' and 'day' +dates_month <- SplitDim(dates, indices = dates, + split_dim = 'time', freq = 'month') + +dates_day <- SplitDim(dates, indices = dates, + split_dim = 'time', freq = 'day') +dates <- as.POSIXct(dates * 24 * 3600, origin = '1970-01-01', tz = 'UTC') + +# In the following example, we will use the sample data of the package. 
We +# will use lonlat_prec_st because it is daily data: + +# NOTE: By Jan 2024, a development is needed regarding updates in other fields +# of the 's2dv_cube' +data_day <- CST_SplitDim(lonlat_prec_st, indices = lonlat_prec_st$attrs$Dates[1,], + split_dim = 'ftime', freq = 'day') +dim(data_day$data) +# dataset var member sdate ftime lat lon day +# 1 1 6 3 1 4 4 31 +data_month <- CST_SplitDim(lonlat_prec_st, indices = lonlat_prec_st$attrs$Dates[1,], + split_dim = 'ftime', freq = 'month') +dim(data_month$data) +# dataset var member sdate ftime lat lon month +# 1 1 6 3 31 4 4 1 +data_year <- CST_SplitDim(lonlat_prec_st, indices = lonlat_prec_st$attrs$Dates[,1], + split_dim = 'sdate', freq = 'year') +dim(data_year$data) +# dataset var member sdate ftime lat lon year +# 1 1 6 1 31 4 4 3 ################################################################################ \ No newline at end of file diff --git a/inst/doc/usecase/ex4_subset.R b/inst/doc/usecase/ex4_subset.R index 57dadfa5..d86b227e 100644 --- a/inst/doc/usecase/ex4_subset.R +++ b/inst/doc/usecase/ex4_subset.R @@ -1,5 +1,5 @@ #******************************************************************************* -# Title: Script to test examples of CST_Subset +# Title: Example script to subset any dimension of an 's2dv_cube' # Author: Eva Rifà Rovira # Date: 16/11/2024 #******************************************************************************* -- GitLab From cd81c935c6ed9b02e6e5a9b97559535e967b5c24 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Fri, 19 Jan 2024 11:28:11 +0100 Subject: [PATCH 53/66] Correct unit test MergeDims --- R/CST_MergeDims.R | 6 +- tests/testthat/test-CST_MergeDims.R | 111 ++++++++++++++++++---------- 2 files changed, 76 insertions(+), 41 deletions(-) diff --git a/R/CST_MergeDims.R b/R/CST_MergeDims.R index 50bab764..dabdc57f 100644 --- a/R/CST_MergeDims.R +++ b/R/CST_MergeDims.R @@ -42,11 +42,15 @@ CST_MergeDims <- function(data, merge_dims = c('ftime', 'monthly'), # dims data$dims <- dim(data$data) + # rename_dim + if (length(rename_dim) > 1) { + rename_dim <- as.character(rename_dim[1]) + } # coords data$coords[merge_dims] <- NULL data$coords[[rename_dim]] <- 1:dim(data$data)[rename_dim] attr(data$coords[[rename_dim]], 'indices') <- TRUE - + # attrs if (all(merge_dims %in% names(dim(data$attrs$Dates)))) { dim(data$attrs$Dates) <- dim(data$data)[rename_dim] diff --git a/tests/testthat/test-CST_MergeDims.R b/tests/testthat/test-CST_MergeDims.R index f7eac6ac..a99d5eba 100644 --- a/tests/testthat/test-CST_MergeDims.R +++ b/tests/testthat/test-CST_MergeDims.R @@ -1,65 +1,96 @@ ############################################## -test_that("Sanity checks", { +# data1 +data1 <- list(data = 1:10) +class(data1) <- 's2dv_cube' + +# data2 +data <- 1 : 20 +dim(data) <- c(time = 20) +data2 <- list(data = data) +data2$dims <- dim(data) +data2$coords <- list(time = 1:20) +attr(data2$coords$time, 'indices') <- TRUE +class(data2) <- 's2dv_cube' + +# exp +exp <- 1 : 20 +dim(exp) <- c(time = 10, lat = 2) +exp <- list(data = exp) +class(exp) <- 's2dv_cube' + +# data3 +data3 <- data2 +names(dim(data3$data)) <- 'Dim1' +data3$dims <- dim(data3$data) +names(data3$coords) <- 'Dim1' + +############################################## + +test_that("1. 
Sanity checks", { expect_error( CST_MergeDims(data = 1), - paste0("Parameter 'data' must be of the class 's2dv_cube'.")) -data <- list(data = 1:10) -class(data) <- 's2dv_cube' + paste0("Parameter 'data' must be of the class 's2dv_cube'.") + ) expect_error( - CST_MergeDims(data = data), - paste0("Parameter 'data' must have dimensions.")) + CST_MergeDims(data = data1), + paste0("Parameter 'data' must have dimensions.") + ) - data <- 1 : 20 - dim(data) <- c(time = 20) - data <- list(data = data) - class(data) <- 's2dv_cube' expect_error( - CST_MergeDims(data = data), - "Parameter 'merge_dims' must match with dimension names in parameter 'data'.") + CST_MergeDims(data = data2), + "Parameter 'merge_dims' must match with dimension names in parameter 'data'." + ) expect_error( - CST_MergeDims(data = data, merge_dims = 1), - paste0("Parameter 'merge_dims' must be a character vector indicating the names", - " of the dimensions to be merged.")) + CST_MergeDims(data = data2, merge_dims = 1), + paste0("Parameter 'merge_dims' must be a character vector indicating the ", + "names of the dimensions to be merged.") + ) expect_error( - CST_MergeDims(data = data, merge_dims = 'time'), - "Parameter 'merge_dims' must be of length two.") + CST_MergeDims(data = data2, merge_dims = 'time'), + "Parameter 'merge_dims' must be of length two." + ) expect_error( - CST_MergeDims(data = data, merge_dims = c('time', 'sdates')), + CST_MergeDims(data = data2, merge_dims = c('time', 'sdates')), paste0("Parameter 'merge_dims' must match with dimension ", - "names in parameter 'data'.")) + "names in parameter 'data'.") + ) +}) + +############################################## - exp <- 1 : 20 - dim(exp) <- c(time = 10, lat = 2) - exp <- list(data = exp) - class(exp) <- 's2dv_cube' +test_that("2. Output checks", { expect_equal( - CST_MergeDims(data = exp, merge_dims = c('time', 'lat')), data) - + CST_MergeDims(data = exp, merge_dims = c('time', 'lat')), + data2 + ) expect_warning( CST_MergeDims(data = exp, merge_dims = c('time', 'lat', 'lon')), paste0("Only two dimensions can be merge, only the first two dimension", " will be used. 
To merge further dimensions consider to use this ",
-                 "function multiple times."))
+                 "function multiple times.")
+  )
   expect_warning(
-    CST_MergeDims(data = exp, merge_dims = c('time', 'lat'), rename_dim = c('lat', 'lon')),
+    CST_MergeDims(data = exp, merge_dims = c('time', 'lat'),
+                  rename_dim = c('lat', 'lon')),
     paste0("Parameter 'rename_dim' has length greater than 1 and only the ",
-           "first element will be used."))
-  names(dim(data$data)) <- 'Dim1'
+           "first element will be used.")
+  )
   expect_equal(
-    CST_MergeDims(data = exp, merge_dims = c('time', 'lat'), rename_dim = 'Dim1'),
-    data)
-
+    CST_MergeDims(data = exp, merge_dims = c('time', 'lat'),
+                  rename_dim = 'Dim1'),
+    data3
+  )
   expect_equal(
     CST_MergeDims(data = exp, merge_dims = c('time', 'lat'),
-                  rename_dim = 'Dim1', na.rm = TRUE), data)
-
-  exp$data[1,] <- NA
-  data <- c(2 : 10, 12 : 20)
-  dim(data) <- c(Dim1 = 18)
-  data <- list(data = data)
-  class(data) <- 's2dv_cube'
+                  rename_dim = 'Dim1', na.rm = TRUE),
+    data3
+  )
   expect_equal(
     CST_MergeDims(data = exp, merge_dims = c('time', 'lat'),
-                  rename_dim = 'Dim1', na.rm = TRUE), data)
+                  rename_dim = 'Dim1', na.rm = TRUE),
+    data3
+  )
 })
+
+##############################################
\ No newline at end of file
-- 
GitLab


From 8c21208ddda8adb0a644e698dc3bac3c95d1ecee Mon Sep 17 00:00:00 2001
From: Eva Rifa
Date: Fri, 19 Jan 2024 11:56:07 +0100
Subject: [PATCH 54/66] Improve readability of 's2dv_cube' example in README

---
 README.md | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/README.md b/README.md
index 581ea145..24fb07e6 100644
--- a/README.md
+++ b/README.md
@@ -54,12 +54,12 @@ The CSTools package functions can be distributed in the following methods:
 - **Assessment:** CST_MultiMetric, CST_MultivarRMSE
 - **Visualization:** PlotCombinedMap, PlotForecastPDF, PlotMostLikelyQuantileMap, PlotPDFsOLE, PlotTriangles4Categories, PlotWeeklyClim.
 
-An `s2dv_cube` is an object to store ordered multidimensional array with named dimensions, specific coordinates and stored metadata. Its “methods” are the **CST** prefix functions. The basic structure of the class `s2dv_cube` is a list of lists. The first level elements are: `data`, `dims`, `coords` and `attrs`. To access any specific element it will be done using the `$` operator.
+An `s2dv_cube` is an object that stores an ordered multidimensional array with named dimensions, specific coordinates and metadata (an in-memory representation of a NetCDF file). Its “methods” are the **CST**-prefixed functions. The basic structure of the class `s2dv_cube` is a list of lists. The first-level elements are: `data`, `dims`, `coords` and `attrs`. Any specific element can be accessed using the `$` operator.
 
-As an example, this is how th sample data looks like (`lonlat_temp_st$exp`)
+As an example, this is what an `s2dv_cube` looks like (see `lonlat_temp_st$exp`):
 ```r
 's2dv_cube'
-Data [ 279.994110107422, 280.337463378906, 279.450866699219, 281.992889404297, 280.921813964844, ... ]
+Data [ 279.99, 280.34, 279.45, 281.99, 280.92, ... 
] Dimensions ( dataset = 1, var = 1, member = 15, sdate = 6, ftime = 3, lat = 22, lon = 53 ) Coordinates * dataset : dat1 @@ -67,8 +67,8 @@ Coordinates member : 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 * sdate : 20001101, 20011101, 20021101, 20031101, 20041101, 20051101 ftime : 1, 2, 3 - * lat : 48, 47, 46, 45, 44, 43, 42, 41, 40, 39, 38, 37, 36, 35, 34, 33, 32, 31, 30, 29, 28, 27 - * lon : 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359 + * lat : 48, 47, 46, 45, 44, 43, 42, 41, 40, 39, 38, 37, 36, 35, 34, ... + * lon : 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, ... Attributes Dates : 2000-11-01 2001-11-01 2002-11-01 2003-11-01 2004-11-01 ... varName : tas @@ -76,21 +76,17 @@ Attributes lat units : degrees_north long name : latitude - other : ndims, size, standard_name, axis lon units : degrees_east long name : longitude - other : ndims, size, standard_name, axis ftime units : hours since 2000-11-01 00:00:00 - other : ndims, size, standard_name, calendar tas units : K long name : 2 metre temperature - other : prec, dim, unlim, make_missing_value, missval, hasAddOffset, hasScaleFact, code, table Datasets : dat1 when : 2023-10-02 10:11:06 - source_files : /esarchive/exp/ecmwf/system5c3s/monthly_mean/tas_f6h/tas_20001101.nc ... + source_files : "/ecmwf/system5c3s/monthly_mean/tas_f6h/tas_20001101.nc" ... load_parameters : ( dat1 ) : dataset = dat1, var = tas, sdate = 20001101 ... ... -- GitLab From 102a75587d31dc9a295ac4ea39c785fd11a5cb63 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Fri, 19 Jan 2024 12:02:57 +0100 Subject: [PATCH 55/66] Improve README --- README.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/README.md b/README.md index 24fb07e6..7b9df56d 100644 --- a/README.md +++ b/README.md @@ -47,7 +47,7 @@ Overview The CSTools package functions can be distributed in the following methods: -- **Data retrieval and formatting:** CST_Start, CST_SaveExp, CST_MergeDims, CST_SplitDim, CST_Subset, CST_InsertDim, CST_ChangeDimNames, as.s2dv_cube and s2dv_cube. +- **Data retrieval and formatting:** [CST_Start](R/CST_Start.R), [CST_SaveExp](R/CST_SaveExp.R), [CST_MergeDims](R/CST_MergeDims.R), CST_SplitDim, CST_Subset, CST_InsertDim, CST_ChangeDimNames, as.s2dv_cube and s2dv_cube. - **Classification:** CST_MultiEOF, CST_WeatherRegimes, CST_RegimsAssign, CST_CategoricalEnsCombination, CST_EnsClustering. - **Downscaling:** CST_Analogs, CST_RainFARM, CST_RFTemp, CST_AdamontAnalog, CST_AnalogsPredictors. - **Correction and transformation:** CST_BEI_Weighting, CST_BiasCorrection, CST_Calibration, CST_QuantileMapping, CST_DynBiasCorrection, CST_Anomaly. @@ -89,7 +89,6 @@ Attributes source_files : "/ecmwf/system5c3s/monthly_mean/tas_f6h/tas_20001101.nc" ... load_parameters : ( dat1 ) : dataset = dat1, var = tas, sdate = 20001101 ... - ... ``` This package is designed to be compatible with other R packages such as [s2dv](https://CRAN.R-project.org/package=s2dv), [startR](https://CRAN.R-project.org/package=startR), [CSIndicators](https://CRAN.R-project.org/package=CSIndicators), [CSDownscale](https://earth.bsc.es/gitlab/es/csdownscale). 
-- 
GitLab


From f3c2adef35118fa764a1187e34133ce Mon Sep 17 00:00:00 2001
From: Eva Rifa
Date: Fri, 19 Jan 2024 12:12:14 +0100
Subject: [PATCH 56/66] Add function links in README

---
 README.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/README.md b/README.md
index 7b9df56d..76c82d59 100644
--- a/README.md
+++ b/README.md
@@ -47,12 +47,12 @@ Overview
 
 The CSTools package functions can be distributed in the following methods:
 
-- **Data retrieval and formatting:** [CST_Start](R/CST_Start.R), [CST_SaveExp](R/CST_SaveExp.R), [CST_MergeDims](R/CST_MergeDims.R), CST_SplitDim, CST_Subset, CST_InsertDim, CST_ChangeDimNames, as.s2dv_cube and s2dv_cube.
-- **Classification:** CST_MultiEOF, CST_WeatherRegimes, CST_RegimsAssign, CST_CategoricalEnsCombination, CST_EnsClustering.
-- **Downscaling:** CST_Analogs, CST_RainFARM, CST_RFTemp, CST_AdamontAnalog, CST_AnalogsPredictors.
-- **Correction and transformation:** CST_BEI_Weighting, CST_BiasCorrection, CST_Calibration, CST_QuantileMapping, CST_DynBiasCorrection, CST_Anomaly.
-- **Assessment:** CST_MultiMetric, CST_MultivarRMSE
-- **Visualization:** PlotCombinedMap, PlotForecastPDF, PlotMostLikelyQuantileMap, PlotPDFsOLE, PlotTriangles4Categories, PlotWeeklyClim.
+- **Data retrieval and formatting:** [CST_Start](R/CST_Start.R), [CST_SaveExp](R/CST_SaveExp.R), [CST_MergeDims](R/CST_MergeDims.R), [CST_SplitDim](R/CST_SplitDim.R), [CST_Subset](R/CST_Subset.R), [CST_InsertDim](R/CST_InsertDim.R), [CST_ChangeDimNames](R/CST_ChangeDimNames.R), [as.s2dv_cube](R/as.s2dv_cube.R) and [s2dv_cube](R/s2dv_cube.R).
+- **Classification:** [CST_MultiEOF](R/CST_MultiEOF.R), [CST_WeatherRegimes](R/CST_WeatherRegimes.R), [CST_RegimesAssign](R/CST_RegimesAssign.R), [CST_CategoricalEnsCombination](R/CST_CategoricalEnsCombination.R), [CST_EnsClustering](R/CST_EnsClustering.R).
+- **Downscaling:** [CST_Analogs](R/CST_Analogs.R), [CST_RainFARM](R/CST_RainFARM.R), [CST_RFTemp](R/CST_RFTemp.R), [CST_AdamontAnalog](R/CST_AdamontAnalog.R), [CST_AnalogsPredictors](R/CST_AnalogsPredictors.R).
+- **Correction and transformation:** [CST_BiasCorrection](R/CST_BiasCorrection.R), [CST_Calibration](R/CST_Calibration.R), [CST_QuantileMapping](R/CST_QuantileMapping.R), [CST_Anomaly](R/CST_Anomaly.R), [CST_BEI_Weighting](R/CST_BEI_Weighting.R), [CST_DynBiasCorrection](R/CST_DynBiasCorrection.R).
+- **Assessment:** [CST_MultiMetric](R/CST_MultiMetric.R), [CST_MultivarRMSE](R/CST_MultivarRMSE.R).
+- **Visualization:** [PlotCombinedMap](R/PlotCombinedMap.R), [PlotForecastPDF](R/PlotForecastPDF.R), [PlotMostLikelyQuantileMap](R/PlotMostLikelyQuantileMap.R), [PlotPDFsOLE](R/PlotPDFsOLE.R), [PlotTriangles4Categories](R/PlotTriangles4Categories.R), [PlotWeeklyClim](R/PlotWeeklyClim.R).
 
 An `s2dv_cube` is an object that stores an ordered multidimensional array with named dimensions, specific coordinates and metadata (an in-memory representation of a NetCDF file). Its “methods” are the **CST**-prefixed functions. The basic structure of the class `s2dv_cube` is a list of lists. The first-level elements are: `data`, `dims`, `coords` and `attrs`. Any specific element can be accessed using the `$` operator.
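For instance, with the package sample object `lonlat_temp_st$exp` (a brief illustrative snippet, not part of the original README text):

```r
lonlat_temp_st$exp$dims          # named dimensions of the data array
lonlat_temp_st$exp$coords$sdate  # values of the 'sdate' coordinate
lonlat_temp_st$exp$attrs$varName # variable name stored in the metadata
```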
-- 
GitLab


From b49bee5121972cd629efc5383afd8fac3b6e852e Mon Sep 17 00:00:00 2001
From: Eva Rifa
Date: Fri, 19 Jan 2024 14:11:32 +0100
Subject: [PATCH 57/66] Improve text of 's2dv_cube' operations examples

---
 inst/doc/usecase/ex1_create.R      | 50 +++++++++---------
 inst/doc/usecase/ex2_save.R        | 16 +++----
 inst/doc/usecase/ex3_modify_dims.R | 74 ++++++++++++++++----------
 inst/doc/usecase/ex4_subset.R      | 43 +++++++++--------
 4 files changed, 111 insertions(+), 72 deletions(-)

diff --git a/inst/doc/usecase/ex1_create.R b/inst/doc/usecase/ex1_create.R
index 2af830fb..3c07db41 100644
--- a/inst/doc/usecase/ex1_create.R
+++ b/inst/doc/usecase/ex1_create.R
@@ -4,12 +4,11 @@
 # Date: 16/01/2024
 #*******************************************************************************
 # This example shows how to create an 's2dv_cube' object.
-# There are two ways of creating an 's2dv_cube' object. The first way is
-# to use the function s2dv_cube(): create an 's2dv_cube' from scratch with any
-# data. In the second example we see the other method with the function
-# CST_Start(). This function is to create an 's2dv_cube' from a
-# 'startR_array' or 'load' object.
-#-------------------------------------------------------------------------------
+# There are two ways of creating an 's2dv_cube' object.
+# (1) With the function s2dv_cube(): create it from scratch with any data.
+# (2) With the function CST_Start(). This function returns an 's2dv_cube'
+# from a 'startR_array'.
+
 # Needed packages
 library(CSTools)
 library(startR)
@@ -17,32 +16,35 @@ library(startR)
 #-----------------------------------------------------
 # Example 1: Function s2dv_cube() from defined data
 #-----------------------------------------------------
-# (1.1) Minimal use case, with s2dv_cube function
-# We define the array with named dimensions:
+# Minimal use case, with s2dv_cube function.
 
 # In this example we use the function s2dv_cube() to create an object of class
 # 's2dv_cube' with the correct structure.
 
+# (1) We define the array with named dimensions:
 dat <- array(1:100, dim = c(time = 10, lat = 4, lon = 10))
-# We define the coordinates as a list of vectors:
+# (2) We define the coordinates as a list of vectors:
 coords <- list(time = 1:10, lat = 43:40, lon = 0:9)
-# The metadata:
+# (3) The metadata:
 metadata <- list(tas = list(level = '2m'),
                  lon = list(cdo_grid_name = 'r360x181'),
                  lat = list(cdo_grid_name = 'r360x181'))
-# The creation of Dates array. First the initial date:
+# (4) The creation of Dates array.
+# First the initial date:
 ini_date <- as.POSIXct('2010-01-01', format = '%Y-%m-%d')
 # The sequence of dates
 dates <- seq(ini_date, by = 'days', length.out = 10)
 # We define the dates dimensions
 dim(dates) <- c(time = 10)
+# (5) We call the function s2dv_cube()
 dat_cube <- s2dv_cube(data = dat, coords = coords,
                       varName = 'tas', metadata = metadata,
                       Dates = dates,
                       when = "2019-10-23 19:15:29 CET",
                       source_files = c("/path/to/file1.nc", "/path/to/file2.nc"),
-                      Datasets = 'test_dataset')
-# We print the result:
+                      Datasets = 'test_dataset')
+
+# We print the result to see the 's2dv_cube' structure:
 # > dat_cube
 # 's2dv_cube'
 # Data [ 1, 2, 3, 4, 5, 6, 7, 8 ... ]
@@ -68,16 +70,16 @@ dat_cube <- s2dv_cube(data = dat, coords = coords,
 #-----------------------------------------------------
 # Example 2: Function as.s2dv_cube()
 #-----------------------------------------------------
-# (2.1) Using CST_Start
-
-# For this case, we use a random example.
+# (1) Example using CST_Start
 
-# NOTE 1: CST_Start() is just a wrapper of function Start() with the transformation
-# to 's2dv_cube' object.
+# NOTE 1: CST_Start() is just a wrapper of function Start() with the
+# transformation to 's2dv_cube' object.
 # NOTE 2: In order that the input argument auxiliary functions from startR
 # work, we need to call them explicitly with the startR namespace.
 # (e.g. startR::indices())
 
+# We just need to define a CST_Start call with all the information:
+
 repos1 <- "/esarchive/exp/ecmwf/system5_m1/monthly_mean/$var$_f6h/$var$_$sdate$.nc"
 repos2 <- "/esarchive/exp/ecmwf/system4_m1/monthly_mean/$var$_f6h/$var$_$sdate$.nc"
 
@@ -98,7 +100,8 @@ res <- CST_Start(dat = list(list(name = 'system4_m1', path = repos2),
                  retrieve = TRUE)
 
 
-# Now we can explore the object
+# Now we can explore the object:
+
 # 1st level
 names(res)
 # "data" "dims" "coords" "attrs"
@@ -130,10 +133,14 @@ dim(res$attrs$Dates)
 # function str():
 str(res)
 
-# (2.1) Using as.s2dv_cube() function
+#-----------------------------------------------------
+
+# (2) Example using as.s2dv_cube() function
+
 # We'll load the data with Start and then we'll transform the 'startR_array'
 # to 's2dv_cube' object with the function as.s2dv_cube(). We are going
-# to load the same data as before:
+# to load the same data as before, with the same call:
+
 repos1 <- "/esarchive/exp/ecmwf/system5_m1/monthly_mean/$var$_f6h/$var$_$sdate$.nc"
 repos2 <- "/esarchive/exp/ecmwf/system4_m1/monthly_mean/$var$_f6h/$var$_$sdate$.nc"
 
@@ -156,6 +163,7 @@ res <- Start(dat = list(list(name = 'system4_m1', path = repos2),
 # Now, we use the function as.s2dv_cube() to transform the 'startR_array'
 # into an 's2dv_cube':
 res_cube <- as.s2dv_cube(res)
+
 # If we print the object directly in the terminal, we can see
 # all the elements nicely:
 
diff --git a/inst/doc/usecase/ex2_save.R b/inst/doc/usecase/ex2_save.R
index 0f8235c2..7fedd349 100644
--- a/inst/doc/usecase/ex2_save.R
+++ b/inst/doc/usecase/ex2_save.R
@@ -3,18 +3,14 @@
 # Author: Eva Rifà Rovira
 # Date: 29/11/2024
 #*******************************************************************************
+# In this script, we'll see multiple ways to store the 's2dv_cube' (CST_SaveExp)
+# or the multidimensional array (SaveExp) to NetCDF.
 
-#-------------------------------------------------------------------------------
-# Needed packages before a new version is installed
+# Needed packages:
+library(CSTools)
 library(CSIndicators)
-library(multiApply)
-library(easyNCDF)
 library(s2dv)
-library(ClimProjDiags)
-library(CSTools)
 library(startR)
-source("https://earth.bsc.es/gitlab/external/cstools/-/raw/master/R/CST_SaveExp.R")
-source("https://earth.bsc.es/gitlab/external/cstools/-/raw/master/R/zzz.R")
 
 ################################################################################
 #-----------------------------------------------------
@@ -461,5 +457,7 @@ CST_SaveExp(data = data, ftime_dim = 'ftime',
 #                  latmin = 27, latmax = 48,
 #                  lonmin = -12, lonmax = 40,
 #                  output = "lonlat")
-# Error
+
+# NOTE: This case hasn't been developed, since the data-loading function
+# that will be maintained is CST_Start.
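+# As a rough sketch, a CST_Start-based equivalent could look like the
+# following (kept commented like the CST_Load call above; the path is
+# illustrative and must be adapted to the archive at hand):
+# path <- "/path/to/exp/monthly_mean/$var$_f6h/$var$_$sdate$.nc"
+# data <- CST_Start(dat = path, var = 'tas',
+#                   sdate = c('20001101', '20011101'),
+#                   ensemble = startR::indices(1:15),
+#                   time = startR::indices(1:3),
+#                   latitude = startR::values(list(27, 48)),
+#                   longitude = startR::values(list(-12, 40)),
+#                   retrieve = TRUE)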
 ################################################################################
\ No newline at end of file
diff --git a/inst/doc/usecase/ex3_modify_dims.R b/inst/doc/usecase/ex3_modify_dims.R
index 6db1d2b6..6d1fa5b8 100644
--- a/inst/doc/usecase/ex3_modify_dims.R
+++ b/inst/doc/usecase/ex3_modify_dims.R
@@ -3,30 +3,32 @@
 # Author: Eva Rifà Rovira
 # Date: 18/01/2024
 #*******************************************************************************
+# In this example, we will explore different methods to modify the dimensions
+# of the 's2dv_cube':
+# (1) Changing dimension names
+# (2) Adding new dimensions
+# (3) Merging two dimensions
+# (4) Splitting a dimension
 
-#-------------------------------------------------------------------------------
-# Needed packages before a new version is installed
+# Needed packages:
 library(CSTools)
-source("https://earth.bsc.es/gitlab/external/cstools/-/raw/master/R/CST_ChangeDimNames.R")
 
 ################################################################################
-# NOTE: for all the examples, we are going to use the sample data.
 #-----------------------------------------------------
 # Example 1: Change dimension names with CST_ChangeDimNames
 #-----------------------------------------------------
-# With using this function, we can change the dimension names in all elements
+# Using this function, we can change the dimension names in all elements
 # of the 's2dv_cube' object:
 
-# Example with sample data:
-# Check original dimensions and coordinates
+# (1) Check original dimensions and coordinates
 lonlat_temp$exp$dims
 names(lonlat_temp$exp$coords)
 dim(lonlat_temp$exp$attrs$Dates)
-# Change 'dataset' to 'dat' and 'ftime' to 'time'
+# (2) Change 'dataset' to 'dat' and 'ftime' to 'time'
 exp <- CST_ChangeDimNames(lonlat_temp$exp,
                           original_names = c("dataset", "ftime", "lon", "lat"),
                           new_names = c("dat", "time", "longitude", "latitude"))
-# Check new dimensions and coordinates
+# (3) Check new dimensions and coordinates
 exp$dims
 names(exp$coords)
 dim(exp$attrs$Dates)
@@ -35,20 +37,19 @@ dim(exp$attrs$Dates)
 # Example 2: Insert a new dimension with CST_InsertDim
 #-----------------------------------------------------
 # With this function, we can add a dimension into the 's2dv_cube'.
-# When the dimension that we want to add has lenght greater than 1, the
-# values of the data are repeated for that new dimension.
+# NOTE: When the dimension that we want to add has length greater than 1, the
+# values of the data are repeated for that new dimension.
 
-# Example with sample data:
-# Check original dimensions and coordinates
+# (1) Check original dimensions and coordinates
 lonlat_temp$exp$dims
 names(lonlat_temp$exp$coords)
-# Add 'variable' dimension
+# (2) Add 'variable' dimension
 exp <- CST_InsertDim(lonlat_temp$exp,
                      posdim = 2,
                      lendim = 2,
                      name = "variable",
                      values = c("tas", "tos"))
-# Check new dimensions and coordinates
+# (3) Check new dimensions and coordinates
 exp$dims
 exp$coords$variable
 # We see that the values will be repeated along the new dimension:
 exp$data[, , 1, 1, 1, 1, 1]
 
@@ -59,13 +60,21 @@ exp$data[, , 1, 1, 1, 1, 1]
 #-----------------------------------------------------
 # In this example, we will merge the dimensions corresponding to the latitude
 # and the longitude of the data. The new dimension will be named 'grid'.
+
+# (1) Call the function:
 new_data <- CST_MergeDims(lonlat_temp$exp, merge_dims = c('lat', 'lon'),
                           rename_dim = 'grid')
+
+# (2) Check the dimensions of the data:
 dim(new_data$data)
 # dataset  member   sdate   ftime    grid
 #       1      15       6       3    1166
+
+# (3) Check the names of the coordinates:
 names(new_data$coords)
 # [1] "dataset" "member"  "sdate"   "ftime"   "grid"
+
+# (4) Explore the object by printing it in the terminal:
 new_data
 # NOTE: Be aware that when we print the object, we see that its name in
 # "Coordinates" field appears without the asterisk (*) at its left. This means
 # that the values of that coordinate are indices, not the actual values. We
 # can also find this information with the attribute "indices":
 attributes(new_data$coords$grid)
 # $indices
 # [1] TRUE
 
-# Now, we want to merge time dimensions start date and forecast time:
+# (5) Now, we want to merge the time dimensions start date and forecast time:
 new_data <- CST_MergeDims(data = lonlat_temp_st$exp, merge_dims = c('sdate', 'ftime'))
 # In this case, the Dates dimensions will be merged too.
+# (6) Check the dimensions of Dates:
 dim(new_data$attrs$Dates)
 # sdate
 #    18
 
-# However, when we want to merge temporal and other dimensions nature,
-# the Dates dimensions are kept as the original. In this case, the function
-# returns a Warning Message, we must pay attention!
+# NOTE: When we want to merge a temporal dimension with one of a different
+# nature, the Dates dimensions are kept as the original. In this case, the
+# function returns a warning message, so we must pay attention!
 new_data <- CST_MergeDims(data = lonlat_temp$exp,
                           merge_dims = c('lat', 'ftime'),
                           rename_dim = 'newdim')
 
 #-----------------------------------------------------
-# Example 4: Split two dimensions with CST_SplitDim
+# Example 4: Split two dimensions with SplitDim and CST_SplitDim
 #-----------------------------------------------------
-# In this example, we will start working first with the function SplitDim,
+# In this example, we will start working with the function SplitDim,
 # which can be used to split dimensions of an array.
 
 # NOTE: Take into account that time dimensions will be treated differently than
 # other dimensions:
 
-# Decadal example: We define an array of consecutive days of different years:
+# (1) Decadal example: We define an array of consecutive days of different years:
 dates <- seq(as.Date("01-01-2000", "%d-%m-%Y", tz = 'UTC'),
              as.Date("31-12-2005","%d-%m-%Y", tz = 'UTC'), "day")
 dim(dates) <- c(time = 2192)
 
-# Now, we will split the array in a new 'year' dimension:
+# (2) Now, we will split the array in a new 'year' dimension:
 dates_year <- SplitDim(dates, indices = dates,
                        split_dim = 'time', freq = 'year')
 # time year
 #  366    6
+
-# Now, we can try: freq = 'month' and 'day'
+# (3) Now, we can try: freq = 'month' and 'day'
 dates_month <- SplitDim(dates, indices = dates,
                         split_dim = 'time', freq = 'month')
 
 dates_day <- SplitDim(dates, indices = dates,
                       split_dim = 'time', freq = 'day')
-dates <- as.POSIXct(dates * 24 * 3600, origin = '1970-01-01', tz = 'UTC')
+
+# (4) Finally, we need to convert them again from numeric to 'POSIXct':
+dates_year <- as.POSIXct(dates_year * 24 * 3600, origin = '1970-01-01', tz = 'UTC')
+dates_month <- as.POSIXct(dates_month * 24 * 3600, origin = '1970-01-01', tz = 'UTC')
+dates_day <- as.POSIXct(dates_day * 24 * 3600, origin = '1970-01-01', tz = 'UTC')
+
+#-----------------------------------------------------
 
 # In the following example, we will use the sample data of the package. We
# will use lonlat_prec_st because it is daily data:
# NOTE: As of Jan 2024, further development is needed regarding other fields
# of the 's2dv_cube'
+
+# (1) Call the function CST_SplitDim to add a 'day' dimension:
data_day <- CST_SplitDim(lonlat_prec_st, indices = lonlat_prec_st$attrs$Dates[1,],
                         split_dim = 'ftime', freq = 'day')
+# (2) Explore the dimensions of the data array:
dim(data_day$data)
# dataset     var  member   sdate   ftime     lat     lon     day
#       1       1       6       3       1       4       4      31
+
+# (3) Call the function CST_SplitDim to add a 'month' dimension:
data_month <- CST_SplitDim(lonlat_prec_st, indices = lonlat_prec_st$attrs$Dates[1,],
                           split_dim = 'ftime', freq = 'month')
+
dim(data_month$data)
# dataset     var  member   sdate   ftime     lat     lon   month
#       1       1       6       3      31       4       4       1
+
+# (4) Call the function CST_SplitDim to add a 'year' dimension:
data_year <- CST_SplitDim(lonlat_prec_st, indices = lonlat_prec_st$attrs$Dates[,1],
                          split_dim = 'sdate', freq = 'year')
dim(data_year$data)
# dataset     var  member   sdate   ftime     lat     lon    year
#       1       1       6       1      31       4       4       3
+
################################################################################
\ No newline at end of file
diff --git a/inst/doc/usecase/ex4_subset.R b/inst/doc/usecase/ex4_subset.R
index d86b227e..360ca08f 100644
--- a/inst/doc/usecase/ex4_subset.R
+++ b/inst/doc/usecase/ex4_subset.R
@@ -3,24 +3,25 @@
# Author: Eva Rifà Rovira
# Date: 16/11/2024
#*******************************************************************************
-# This example shows how to subset an array. The basic function to subset
-# arrays is Subset, from the package ClimProjDiags. Then, in CSTools, there is
-# the CST version of the function to subset all other related information
-# of the object 's2dv_cube'.
-#-------------------------------------------------------------------------------
-# Needed packages before a new version is installed
+# This example shows how to subset any dimension of an 's2dv_cube'. To do it,
+# we will use the function CST_Subset. This function is the 's2dv_cube' method
+# version of Subset from the package ClimProjDiags.
+# (1) First, we will see how Subset works.
+# (2) Then, we will use CST_Subset with an 's2dv_cube'.
+
+# Needed packages:
library(CSTools)
library(ClimProjDiags)
-source("https://earth.bsc.es/gitlab/es/ClimProjDiags/-/raw/master/R/Subset.R")
-source("https://earth.bsc.es/gitlab/external/cstools/-/raw/master/R/CST_Subset.R")

################################################################################
#-----------------------------------------------------
# Example 1: Subset an example array
#-----------------------------------------------------
-# (1.1) Minimal use case. Spatial coordinates subset.
+# This is a minimal use case subsetting the spatial coordinates.
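+# NOTE: For a plain array with named dimensions, Subset() essentially behaves
+# like base "[" indexing while keeping the dimension names. A self-contained
+# toy check of that idea (the 'toy' array is only an illustration, not part
+# of the original use case):
+toy <- array(1:12, dim = c(lat = 3, lon = 4))
+all.equal(c(Subset(toy, along = c('lat', 'lon'),
+                   indices = list(1:2, 1:3), drop = 'none')),
+          c(toy[1:2, 1:3]))
+# [1] TRUE   (comparing plain values; c() drops the dimension attributes)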
+
+# (1) We create the array and we print it:
dat <- array(1:100, dim = c(lat = 10, lon = 10))
-# > dat
+dat
#        [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
#  [1,]     1   11   21   31   41   51   61   71   81    91
#  [2,]     2   12   22   32   42   52   62   72   82    92
@@ -32,9 +33,11 @@ dat <- array(1:100, dim = c(lat = 10, lon = 10))
#  [8,]     8   18   28   38   48   58   68   78   88    98
#  [9,]     9   19   29   39   49   59   69   79   89    99
# [10,]    10   20   30   40   50   60   70   80   90   100
+
+# (2) We call the function Subset from ClimProjDiags and we see the result:
dat_subset <- Subset(x = dat, along = c('lat', 'lon'), indices = list(1:5, 1:7),
                     drop = 'all')
-# > dat_subset
+dat_subset
#      [,1] [,2] [,3] [,4] [,5] [,6] [,7]
# [1,]    1   11   21   31   41   51   61
# [2,]    2   12   22   32   42   52   62
@@ -45,17 +48,18 @@
#-----------------------------------------------------
# Example 2: Subset an 's2dv_cube' using sample data
#-----------------------------------------------------
-# (2.1) We don't want to drop any dimension, select only the first member,
-# the first and the second start dates, and also subset the longitude and
-# keep only the values from [0, 21]:
+# In this example, we will not drop any dimension; we will select only the first
+# member, the first and the second start dates, and also subset the longitude,
+# keeping only the values from [0, 21]:
+
+# (1) Explore the sample data:
dat <- lonlat_temp_st$exp
-# > dat$dims
+dat$dims
# dataset     var  member   sdate   ftime     lat     lon
#       1       1      15       6       3      22      53
-# > dat
+dat
# 's2dv_cube'
# Data          [ 279.994110107422, 280.337463378906, 279.450866699219, ... ]
# Dimensions    ( dataset = 1, var = 1, member = 15, sdate = 6, ftime = 3,
@@ -90,10 +94,12 @@
# ( dat1 ) : dataset = dat1, var = tas, sdate = 20001101 ...
# ...
+# (2) Call the function CST_Subset:
dat_subset <- CST_Subset(x = dat, along = c('member', 'sdate', 'lon'),
                         indices = list(1, 1:2, 1:22), drop = 'none')
-
-# > dat_subset
+
+# (3) Explore the 's2dv_cube':
+dat_subset
# 's2dv_cube'
# Data          [ 279.994110107422, 277.161102294922, 278.825836181641, 276.8271484375, 276.052703857422, 276.950805664062, 280.677215576172, 277.285247802734 ... ]
# Dimensions    ( dataset = 1, var = 1, member = 1, sdate = 2, ftime = 3, lat = 22, lon = 22 )
@@ -130,4 +136,5 @@
# load_parameters :
# ( dat1 ) : dataset = dat1, var = tas, sdate = 20001101 ...
# ...
+
################################################################################
\ No newline at end of file
diff --git a/README.md b/README.md
index 76c82d59..d8560b8d 100644
--- a/README.md
+++ b/README.md
@@ -93,9 +93,7 @@ Attributes
This package is designed to be compatible with other R packages such as [s2dv](https://CRAN.R-project.org/package=s2dv), [startR](https://CRAN.R-project.org/package=startR), [CSIndicators](https://CRAN.R-project.org/package=CSIndicators), [CSDownscale](https://earth.bsc.es/gitlab/es/csdownscale).

-More information about the `s2dv_cube` object class can be found here: [description of the s2dv_cube object structure document](https://docs.google.com/document/d/1ko37JFl_h6mOjDKM5QSQGikfLBKZq1naL11RkJIwtMM/edit?usp=sharing).
- -**Note:** The current `s2dv_cube` object (CSTools version > 5.0.0) differs from the original object used in the previous versions of the packages. If you have doubts on this change you can follow some of the issues: [New s2dv_cube object discussion](https://earth.bsc.es/gitlab/external/cstools/-/issues/94), [How to deal with the compatibility break](https://earth.bsc.es/gitlab/external/cstools/-/issues/112) and [Testing issue and specifications](https://earth.bsc.es/gitlab/external/cstools/-/issues/110) +> **Note:** The current `s2dv_cube` object (CSTools version > 5.0.0) differs from the original object used in the previous versions of the packages. If you have doubts on this change you can follow some of the issues: [New s2dv_cube object discussion](https://earth.bsc.es/gitlab/external/cstools/-/issues/94), [How to deal with the compatibility break](https://earth.bsc.es/gitlab/external/cstools/-/issues/112) and [Testing issue and specifications](https://earth.bsc.es/gitlab/external/cstools/-/issues/110). More information can be found in this document: [About the new ‘s2dv_cube’](https://docs.google.com/document/d/1ko37JFl_h6mOjDKM5QSQGikfLBKZq1naL11RkJIwtMM/edit?usp=sharing). Contribute ---------- -- GitLab From cd9a0a5a5902ecf7f50300279bef5425c150d0be Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Fri, 19 Jan 2024 17:45:08 +0100 Subject: [PATCH 59/66] Started working on avoid hard coded parts regarding dimension names --- R/CST_SplitDim.R | 27 +++++++----- tests/testthat/test-CST_SplitDim.R | 70 +++++++++++++++--------------- 2 files changed, 52 insertions(+), 45 deletions(-) diff --git a/R/CST_SplitDim.R b/R/CST_SplitDim.R index 71f51bab..bb52bb69 100644 --- a/R/CST_SplitDim.R +++ b/R/CST_SplitDim.R @@ -54,7 +54,8 @@ #'@export CST_SplitDim <- function(data, split_dim = 'time', indices = NULL, freq = 'monthly', new_dim_name = NULL, - insert_ftime = NULL) { + insert_ftime = NULL, ftime_dim = 'ftime', + sdate_dim = 'sdate') { # Check 's2dv_cube' if (!inherits(data, 's2dv_cube')) { stop("Parameter 'data' must be of the class 's2dv_cube'.") @@ -69,33 +70,38 @@ CST_SplitDim <- function(data, split_dim = 'time', indices = NULL, insert_ftime <- insert_ftime[1] } # adding NAs at the begining of the data in ftime dim - ftimedim <- which(names(dim(data$data)) == 'ftime') + ftimedim <- which(names(dim(data$data)) == ftime_dim) dims <- dim(data$data) dims[ftimedim] <- insert_ftime empty_array <- array(NA, dims) data$data <- abind(empty_array, data$data, along = ftimedim) names(dim(data$data)) <- names(dims) + # Reorder dates + data$attrs$Dates <- Reorder(data$attrs$Dates, c(ftime_dim, sdate_dim)) + dates <- data$attrs$Dates + dates_subset <- Subset(dates, sdate_dim, 1) # adding dates to Dates for the new NAs introduced - if ((data$attrs$Dates[2] - data$attrs$Dates[1]) == 1) { + if ((dates_subset[2] - dates_subset[1]) == 1) { timefreq <- 'days' } else { timefreq <- 'months' warning("Time frequency of forecast time is considered monthly.") } - start <- data$attrs$Dates - dim(start) <- c(ftime = length(start)/dims['sdate'], sdate = dims['sdate']) + + dim(dates) <- c(length(dates)/dims[sdate_dim], dims[sdate_dim]) + names(dim(dates)) <- c(ftime_dim, sdate_dim) # new <- array(NA, prod(dim(data$data)[c('ftime', 'sdate')])) # Pending fix transform to UTC when concatenaiting - data$attrs$Dates <- do.call(c, lapply(1:dim(start)[2], function(x) { - seq(start[1,x] - as.difftime(insert_ftime, + data$attrs$Dates <- do.call(c, lapply(1:dim(dates)[2], function(x) { + seq(dates[1,x] - as.difftime(insert_ftime, 
units = timefreq), - start[dim(start)[1],x], by = timefreq, tz = "UTC")})) + dates[dim(dates)[1],x], by = timefreq, tz = "UTC")})) } } if (is.null(indices)) { - if (any(split_dim %in% c('ftime', 'time', 'sdate'))) { + if (any(split_dim %in% c(ftime_dim, sdate_dim))) { indices <- data$attrs$Dates - if (any(names(dim(data$data)) %in% 'sdate')) { + if (any(names(dim(data$data)) %in% sdate_dim)) { if (!any(names(dim(data$data)) %in% split_dim)) { stop("Parameter 'split_dims' must be one of the dimension ", "names in parameter 'data'.") @@ -106,6 +112,7 @@ CST_SplitDim <- function(data, split_dim = 'time', indices = NULL, } data$data <- SplitDim(data$data, split_dim = split_dim, indices = indices, freq = freq, new_dim_name = new_dim_name) + data$dims <- dim(data$data) return(data) } #'Function to Split Dimension diff --git a/tests/testthat/test-CST_SplitDim.R b/tests/testthat/test-CST_SplitDim.R index 45e2b1a8..ca305c63 100644 --- a/tests/testthat/test-CST_SplitDim.R +++ b/tests/testthat/test-CST_SplitDim.R @@ -92,38 +92,38 @@ test_that("2. Output checks", { ############################################## -# test_that("3. Output checks: sample data", { -# output <- lonlat_temp$exp$data -# output <- abind(output[, , , 1, ,], output[, , , 2, ,], output[, , , 3, ,], along = 5) -# dim(output) <- c(dataset = 1, member = 15, sdate = 6, ftime = 1, -# lat = 22, lon = 53, monthly = 3) -# result <- lonlat_temp$exp -# result$data <- output -# expect_equal( -# CST_SplitDim(data = lonlat_temp$exp, split_dim = 'ftime'), -# result -# ) -# expect_equal( -# dim(CST_SplitDim(data = lonlat_temp$exp, split_dim = 'member', -# freq = 5)$data), -# c(dataset = 1, member = 5, sdate = 6, ftime = 3, lat = 22, -# lon = 53, index = 3) -# ) -# expect_warning( -# CST_SplitDim(data = lonlat_temp$exp, split_dim = 'member', freq = 5, -# new_dim_name = c('a', 'b')), -# paste0("Parameter 'new_dim_name' has length greater than 1 ", -# "and only the first elemenst is used.") -# ) -# expect_error( -# CST_SplitDim(data = lonlat_temp$exp, split_dim = 'member', freq = 5, -# new_dim_name = 3), -# "Parameter 'new_dim_name' must be character string" -# ) -# expect_equal( -# dim(CST_SplitDim(data = lonlat_temp$exp, split_dim = 'member', -# freq = 5, new_dim_name = 'wt')$data), -# c(dataset = 1, member = 5, sdate = 6, ftime = 3, lat = 22, -# lon = 53, wt = 3) -# ) -# }) +test_that("3. 
Output checks: sample data", { + output <- lonlat_temp$exp$data + output <- abind(output[, , , 1, ,], output[, , , 2, ,], output[, , , 3, ,], along = 5) + dim(output) <- c(dataset = 1, member = 15, sdate = 6, ftime = 1, + lat = 22, lon = 53, monthly = 3) + result <- lonlat_temp$exp + result$data <- output + expect_equal( + CST_SplitDim(data = lonlat_temp$exp, split_dim = 'ftime'), + result + ) + expect_equal( + dim(CST_SplitDim(data = lonlat_temp$exp, split_dim = 'member', + freq = 5)$data), + c(dataset = 1, member = 5, sdate = 6, ftime = 3, lat = 22, + lon = 53, index = 3) + ) + expect_warning( + CST_SplitDim(data = lonlat_temp$exp, split_dim = 'member', freq = 5, + new_dim_name = c('a', 'b')), + paste0("Parameter 'new_dim_name' has length greater than 1 ", + "and only the first elemenst is used.") + ) + expect_error( + CST_SplitDim(data = lonlat_temp$exp, split_dim = 'member', freq = 5, + new_dim_name = 3), + "Parameter 'new_dim_name' must be character string" + ) + expect_equal( + dim(CST_SplitDim(data = lonlat_temp$exp, split_dim = 'member', + freq = 5, new_dim_name = 'wt')$data), + c(dataset = 1, member = 5, sdate = 6, ftime = 3, lat = 22, + lon = 53, wt = 3) + ) +}) -- GitLab From 0b04d080f07dae836d0feecf30d9e00880174a9c Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Mon, 22 Jan 2024 16:15:04 +0100 Subject: [PATCH 60/66] Improve function SplitDim --- R/CST_SplitDim.R | 168 ++++++++++++++++++++--------- inst/doc/usecase/ex3_modify_dims.R | 3 +- man/CST_SaveExp.Rd | 6 +- man/CST_SplitDim.Rd | 16 ++- man/SaveExp.Rd | 4 +- man/SplitDim.Rd | 11 +- tests/testthat/test-CST_SplitDim.R | 29 ++++- 7 files changed, 176 insertions(+), 61 deletions(-) diff --git a/R/CST_SplitDim.R b/R/CST_SplitDim.R index bb52bb69..bac378be 100644 --- a/R/CST_SplitDim.R +++ b/R/CST_SplitDim.R @@ -10,7 +10,7 @@ #' #'@param data A 's2dv_cube' object #'@param split_dim A character string indicating the name of the dimension to -#' split. +#' split. It is set as 'time' by default. #'@param indices A vector of numeric indices or dates. If left at NULL, the #' dates provided in the s2dv_cube object (element Dates) will be used. #'@param freq A character string indicating the frequency: by 'day', 'month' and @@ -21,6 +21,12 @@ #' dimension. #'@param insert_ftime An integer indicating the number of time steps to add at #' the begining of the time series. +#'@param ftime_dim A character string indicating the name of the forecast time +#' dimension. It is set as 'time' by default. +#'@param sdate_dim A character string indicating the name of the start date +#' dimension. It is set as 'sdate' by default. +#'@param return_indices A logical value that if it is TRUE, the indices +#' used in splitting the dimension will be returned. It is FALSE by default. 
#' #'@details Parameter 'insert_ftime' has been included for the case of using #'daily data, requiring split the temporal dimensions by months (or similar) and @@ -51,11 +57,12 @@ #'new_data <- CST_SplitDim(data, indices = time, freq = 'year') #'@import abind #'@importFrom ClimProjDiags Subset +#'@importFrom s2dv Reorder #'@export CST_SplitDim <- function(data, split_dim = 'time', indices = NULL, freq = 'monthly', new_dim_name = NULL, - insert_ftime = NULL, ftime_dim = 'ftime', - sdate_dim = 'sdate') { + insert_ftime = NULL, ftime_dim = 'time', + sdate_dim = 'sdate', return_indices = FALSE) { # Check 's2dv_cube' if (!inherits(data, 's2dv_cube')) { stop("Parameter 'data' must be of the class 's2dv_cube'.") @@ -63,40 +70,47 @@ CST_SplitDim <- function(data, split_dim = 'time', indices = NULL, if (!is.null(insert_ftime)) { if (!is.numeric(insert_ftime)) { stop("Parameter 'insert_ftime' should be an integer.") + } + if (length(insert_ftime) > 1) { + warning("Parameter 'insert_ftime' must be of length 1, and only the", + " first element will be used.") + insert_ftime <- insert_ftime[1] + } + # Check Dates + if (is.null(dim(data$attrs$Dates))) { + warning("Parameter 'Dates' must have dimensions, 'insert_ftime' won't ", + "be used.") + insert_ftime <- NULL + } + } + if (!is.null(insert_ftime)) { + # adding NAs at the begining of the data in ftime dim + ftimedim <- which(names(dim(data$data)) == ftime_dim) + dims <- dim(data$data) + dims[ftimedim] <- insert_ftime + empty_array <- array(NA, dims) + data$data <- abind(empty_array, data$data, along = ftimedim) + names(dim(data$data)) <- names(dims) + # Reorder dates + data$attrs$Dates <- Reorder(data$attrs$Dates, c(ftime_dim, sdate_dim)) + dates <- data$attrs$Dates + dates_subset <- Subset(dates, sdate_dim, 1) + # adding dates to Dates for the new NAs introduced + if ((dates_subset[2] - dates_subset[1]) == 1) { + timefreq <- 'days' } else { - if (length(insert_ftime) > 1) { - warning("Parameter 'insert_ftime' must be of length 1, and only the", - " first element will be used.") - insert_ftime <- insert_ftime[1] - } - # adding NAs at the begining of the data in ftime dim - ftimedim <- which(names(dim(data$data)) == ftime_dim) - dims <- dim(data$data) - dims[ftimedim] <- insert_ftime - empty_array <- array(NA, dims) - data$data <- abind(empty_array, data$data, along = ftimedim) - names(dim(data$data)) <- names(dims) - # Reorder dates - data$attrs$Dates <- Reorder(data$attrs$Dates, c(ftime_dim, sdate_dim)) - dates <- data$attrs$Dates - dates_subset <- Subset(dates, sdate_dim, 1) - # adding dates to Dates for the new NAs introduced - if ((dates_subset[2] - dates_subset[1]) == 1) { - timefreq <- 'days' - } else { - timefreq <- 'months' - warning("Time frequency of forecast time is considered monthly.") - } - - dim(dates) <- c(length(dates)/dims[sdate_dim], dims[sdate_dim]) - names(dim(dates)) <- c(ftime_dim, sdate_dim) - # new <- array(NA, prod(dim(data$data)[c('ftime', 'sdate')])) - # Pending fix transform to UTC when concatenaiting - data$attrs$Dates <- do.call(c, lapply(1:dim(dates)[2], function(x) { - seq(dates[1,x] - as.difftime(insert_ftime, - units = timefreq), - dates[dim(dates)[1],x], by = timefreq, tz = "UTC")})) + timefreq <- 'months' + warning("Time frequency of forecast time is considered monthly.") } + + dim(dates) <- c(length(dates)/dims[sdate_dim], dims[sdate_dim]) + names(dim(dates)) <- c(ftime_dim, sdate_dim) + # new <- array(NA, prod(dim(data$data)[c('ftime', 'sdate')])) + # Pending fix transform to UTC when concatenaiting + 
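+    # For each start date, rebuild the date sequence so that it starts
+    # 'insert_ftime' steps earlier: this gives the NAs prepended to the data
+    # a matching set of dates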
data$attrs$Dates <- do.call(c, lapply(1:dim(dates)[2], function(x) { + seq(dates[1,x] - as.difftime(insert_ftime, + units = timefreq), + dates[dim(dates)[1],x], by = timefreq, tz = "UTC")})) } if (is.null(indices)) { if (any(split_dim %in% c(ftime_dim, sdate_dim))) { @@ -106,14 +120,34 @@ CST_SplitDim <- function(data, split_dim = 'time', indices = NULL, stop("Parameter 'split_dims' must be one of the dimension ", "names in parameter 'data'.") } - indices <- indices[1 : dim(data$data)[which(names(dim(data$data)) == split_dim)]] + indices <- indices[1:dim(data$data)[which(names(dim(data$data)) == split_dim)]] } } } - data$data <- SplitDim(data$data, split_dim = split_dim, indices = indices, - freq = freq, new_dim_name = new_dim_name) + # Call the function + res <- SplitDim(data = data$data, split_dim = split_dim, + indices = indices, freq = freq, + new_dim_name = new_dim_name, + dates = data$attrs$Dates, + return_indices = return_indices) + if (inherits(res, 'list')) { + data$data <- res$data + # Split dim on Dates + if (!is.null(res$dates)) { + data$attrs$Dates <- res$dates + } + } else { + data$data <- res + } data$dims <- dim(data$data) - return(data) + + # Coordinates + # TO DO: Subset splitted coordinate and add the new dimension coordinate. + if (return_indices) { + return(list(data = data, indices = res$indices)) + } else { + return(data) + } } #'Function to Split Dimension #' @@ -135,6 +169,11 @@ CST_SplitDim <- function(data, split_dim = 'time', indices = NULL, #' the length in which to subset the dimension. #'@param new_dim_name A character string indicating the name of the new #' dimension. +#'@param dates An optional parameter containing an array of dates of class +#' 'POSIXct' with the corresponding time dimensions of 'data'. It is NULL +#' by default. +#'@param return_indices A logical value that if it is TRUE, the indices +#' used in splitting the dimension will be returned. It is FALSE by default. 
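+#'@details When 'dates' are provided, they are split along with the data and
+#' the output becomes a list with the elements 'data' and 'dates'; when
+#' 'return_indices' is TRUE, the 'indices' used in the splitting are also
+#' included in the output list.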
#'@examples #'data <- 1 : 20 #'dim(data) <- c(time = 10, lat = 2) @@ -151,7 +190,8 @@ CST_SplitDim <- function(data, split_dim = 'time', indices = NULL, #'@importFrom ClimProjDiags Subset #'@export SplitDim <- function(data, split_dim = 'time', indices, freq = 'monthly', - new_dim_name = NULL) { + new_dim_name = NULL, dates = NULL, + return_indices = FALSE) { # check data if (is.null(data)) { stop("Parameter 'data' cannot be NULL.") @@ -173,7 +213,7 @@ SplitDim <- function(data, split_dim = 'time', indices, freq = 'monthly', "one and only the first element will be used.") } if (!any(names(dims) %in% split_dim)) { - stop("Parameter 'split_dims' must be one of the dimension ", + stop("Parameter 'split_dim' must be one of the dimension ", "names in parameter 'data'.") } pos_split <- which(names(dims) == split_dim) @@ -216,8 +256,8 @@ SplitDim <- function(data, split_dim = 'time', indices, freq = 'monthly', }) if ('try-error' %in% class(indices) | sum(is.na(indices)) == length(indices)) { - stop("Dates provided in parameter 'indices' must be of class", - " 'POSIXct' or convertable to 'POSIXct'.") + stop("Dates provided in parameter 'indices' must be of class ", + "'POSIXct' or convertable to 'POSIXct'.") } } } @@ -236,7 +276,7 @@ SplitDim <- function(data, split_dim = 'time', indices, freq = 'monthly', } else if (freq == 'year') { indices <- as.numeric(strftime(indices, format = "%Y")) repited <- unique(indices) - } else if (freq == 'monthly' ) { + } else if (freq == 'monthly') { indices <- as.numeric(strftime(indices, format = "%m%Y")) repited <- unique(indices) } else { @@ -261,15 +301,41 @@ SplitDim <- function(data, split_dim = 'time', indices, freq = 'monthly', data <- lapply(repited, function(x) {rebuild(x, data, along = split_dim, indices = indices, max_times)}) data <- abind(data, along = length(dims) + 1) - if (is.character(freq)) { - names(dim(data)) <- c(names(dims), freq) - } else { - names(dim(data)) <- c(names(dims), 'index') + + # Add new dim name + if (is.null(new_dim_name)) { + if (is.character(freq)) { + new_dim_name <- freq + } else { + new_dim_name <- 'index' + } } - if (!is.null(new_dim_name)) { - names(dim(data)) <- c(names(dims), new_dim_name) + names(dim(data)) <- c(names(dims), new_dim_name) + + # Split also Dates + dates_exist <- FALSE + if (!is.null(dates)) { + if (any(split_dim %in% names(dim(dates)))) { + datesdims <- dim(dates) + dates <- lapply(repited, function(x) {rebuild(x, dates, along = split_dim, + indices = indices, max_times)}) + dates <- abind(dates, along = length(datesdims) + 1) + dates <- as.POSIXct(dates, origin = '1970-01-01', tz = "UTC") + names(dim(dates)) <- c(names(datesdims), new_dim_name) + } + dates_exist <- TRUE + } + + # Return objects + if (all(dates_exist, return_indices)) { + return(list(data = data, dates = dates, indices = indices)) + } else if (all(dates_exist, !return_indices)) { + return(list(data = data, dates = dates)) + } else if (all(!dates_exist, return_indices)) { + return(list(data = data, indices = indices)) + } else { + return(data) } - return(data) } rebuild <- function(x, data, along, indices, max_times) { diff --git a/inst/doc/usecase/ex3_modify_dims.R b/inst/doc/usecase/ex3_modify_dims.R index 6d1fa5b8..1cf2984b 100644 --- a/inst/doc/usecase/ex3_modify_dims.R +++ b/inst/doc/usecase/ex3_modify_dims.R @@ -116,6 +116,7 @@ dim(dates) <- c(time = 2192) # (2) Now, we will split the array in a new 'year' dimension: dates_year <- SplitDim(dates, indices = dates, split_dim = 'time', freq = 'year') +dim(dates_year) # time year # 
 366    6
@@ -140,7 +141,7 @@ dates_day <- as.POSIXct(dates_day * 24 * 3600, origin = '1970-01-01', tz = 'UTC'
# of the 's2dv_cube'
# (1) Call the function CST_SplitDim to add a 'day' dimension:
-data_day <- CST_SplitDim(lonlat_prec_st, indices = lonlat_prec_st$attrs$Dates[1,],
+data_day <- CST_SplitDim(lonlat_prec_st, indices = lonlat_prec_st$attrs$Dates[1, ],
                         split_dim = 'ftime', freq = 'day')
# (2) Explore the dimensions of the data array:
dim(data_day$data)
diff --git a/man/CST_SaveExp.Rd b/man/CST_SaveExp.Rd
index 1520eb08..1ac3e7ab 100644
--- a/man/CST_SaveExp.Rd
+++ b/man/CST_SaveExp.Rd
@@ -17,7 +17,7 @@ CST_SaveExp(
  single_file = FALSE,
  extra_string = NULL,
  global_attrs = NULL,
-  units_hours_since = TRUE
+  units_hours_since = FALSE
)
}
\arguments{
@@ -83,11 +83,11 @@ default.}
attributes to be saved in the NetCDF.}

\item{units_hours_since}{(Optional) A logical value only available for the
-case: Dates have forecast time and start date dimension, single_file is
+case: 'Dates' have forecast time and start date dimension, 'single_file' is
TRUE and 'time_bounds' are not used. When it is TRUE, it saves the forecast
time with units of 'hours since'; if it is FALSE, the time units will be a
number of time steps with its corresponding frequency (e.g. n days, n months
-or n hours). It is TRUE by default.}
+or n hours). It is FALSE by default.}
}
\value{
Multiple or single NetCDF files containing the data array.\cr
diff --git a/man/CST_SplitDim.Rd b/man/CST_SplitDim.Rd
index b07d9897..4b55d6da 100644
--- a/man/CST_SplitDim.Rd
+++ b/man/CST_SplitDim.Rd
@@ -10,14 +10,17 @@ CST_SplitDim(
  indices = NULL,
  freq = "monthly",
  new_dim_name = NULL,
-  insert_ftime = NULL
+  insert_ftime = NULL,
+  ftime_dim = "time",
+  sdate_dim = "sdate",
+  return_indices = FALSE
)
}
\arguments{
\item{data}{A 's2dv_cube' object}

\item{split_dim}{A character string indicating the name of the dimension to
-split.}
+split. It is set as 'time' by default.}

\item{indices}{A vector of numeric indices or dates. If left at NULL, the
dates provided in the s2dv_cube object (element Dates) will be used.}
@@ -32,6 +35,15 @@ dimension.}

\item{insert_ftime}{An integer indicating the number of time steps to add at
the begining of the time series.}
+
+\item{ftime_dim}{A character string indicating the name of the forecast time
+dimension. It is set as 'time' by default.}
+
+\item{sdate_dim}{A character string indicating the name of the start date
+dimension. It is set as 'sdate' by default.}
+
+\item{return_indices}{A logical value that if it is TRUE, the indices
+used in splitting the dimension will be returned. It is FALSE by default.}
}
\description{
This function split a dimension in two. The user can select the
diff --git a/man/SaveExp.Rd b/man/SaveExp.Rd
index 53c791f7..d7f0b30d 100644
--- a/man/SaveExp.Rd
+++ b/man/SaveExp.Rd
@@ -23,7 +23,7 @@ SaveExp(
  single_file = FALSE,
  extra_string = NULL,
  global_attrs = NULL,
-  units_hours_since = TRUE
+  units_hours_since = FALSE
)
}
\arguments{
@@ -116,7 +116,7 @@ case: Dates have forecast time and start date dimension, single_file is
TRUE and 'time_bounds' is NULL. When it is TRUE, it saves the forecast time
with units of 'hours since'; if it is FALSE, the time units will be a number
of time steps with its corresponding frequency (e.g. n days, n months or n
-hours). 
It is FALSE by default.} } \value{ Multiple or single NetCDF files containing the data array.\cr diff --git a/man/SplitDim.Rd b/man/SplitDim.Rd index a0dc8bc6..b0785b58 100644 --- a/man/SplitDim.Rd +++ b/man/SplitDim.Rd @@ -9,7 +9,9 @@ SplitDim( split_dim = "time", indices, freq = "monthly", - new_dim_name = NULL + new_dim_name = NULL, + dates = NULL, + return_indices = FALSE ) } \arguments{ @@ -28,6 +30,13 @@ the length in which to subset the dimension.} \item{new_dim_name}{A character string indicating the name of the new dimension.} + +\item{dates}{An optional parameter containing an array of dates of class +'POSIXct' with the corresponding time dimensions of 'data'. It is NULL +by default.} + +\item{return_indices}{A logical value that if it is TRUE, the indices +used in splitting the dimension will be returned. It is FALSE by default.} } \description{ This function split a dimension in two. The user can select the diff --git a/tests/testthat/test-CST_SplitDim.R b/tests/testthat/test-CST_SplitDim.R index ca305c63..f8ad88ee 100644 --- a/tests/testthat/test-CST_SplitDim.R +++ b/tests/testthat/test-CST_SplitDim.R @@ -10,6 +10,7 @@ indices1 <- c(rep(1,5), rep(2,5), rep (3, 5), rep(4, 5)) output1 <- matrix(data1$data, nrow = 5, ncol = 4) names(dim(output1)) <- c('time', 'monthly') output1 <- list(data = output1) +output1$dims <- dim(output1$data) class(output1) <- 's2dv_cube' exp_cor <- 1 : 20 @@ -21,6 +22,7 @@ class(exp_cor) <- 's2dv_cube' output2 <- matrix(data1$data, nrow = 5, ncol = 4) names(dim(output2)) <- c('time', 'index') output2 <- list(data = output2) +output2$dims <- dim(output2$data) class(output2) <- 's2dv_cube' time2 <- c(seq(ISOdate(1903, 1, 1), ISOdate(1903, 1, 4), "days"), @@ -41,6 +43,7 @@ output3 <- c(data3$data, rep(NA, 4)) dim(output3) <- c(time = 8, monthly = 3) result3 <- data3 result3$data <- output3 +result3$dims <- dim(result3$data) # dat4 data4 <- list(data = array(rnorm(10), dim = c(sdate = 2, lon = 5))) @@ -99,8 +102,13 @@ test_that("3. Output checks: sample data", { lat = 22, lon = 53, monthly = 3) result <- lonlat_temp$exp result$data <- output + result$attrs$Dates <- s2dv::Reorder(result$attrs$Dates, c('sdate', 'ftime')) + dim(result$attrs$Dates) <- c(ftime = 1, sdate = 6, monthly = 3) + result$dims <- c(dataset = 1, member = 15, sdate = 6, ftime = 1, lat = 22, + lon = 53, monthly = 3) + attributes(result$attrs$Dates)$end <- NULL expect_equal( - CST_SplitDim(data = lonlat_temp$exp, split_dim = 'ftime'), + CST_SplitDim(data = lonlat_temp$exp, split_dim = 'ftime', ftime_dim = 'ftime'), result ) expect_equal( @@ -127,3 +135,22 @@ test_that("3. Output checks: sample data", { lon = 53, wt = 3) ) }) + +############################################## + +test_that("4. 
Output checks II", {
+  res <- CST_SplitDim(lonlat_prec_st, indices = lonlat_prec_st$attrs$Dates[,1],
+                      split_dim = 'sdate', freq = 'year', return_indices = TRUE)
+  expect_equal(
+    names(res),
+    c('data', 'indices')
+  )
+  expect_equal(
+    res$dims,
+    dim(res$data)
+  )
+  expect_equal(
+    all(names(dim(res$data$attrs$Dates)) %in% names(res$data$dims)),
+    TRUE
+  )
+})
-- GitLab


From 338571a4c6e81cf465f324a50f0adb85f2cae280 Mon Sep 17 00:00:00 2001
From: Eva Rifa
Date: Tue, 23 Jan 2024 16:29:12 +0100
Subject: [PATCH 61/66] Version bump 5.2.0

---
 DESCRIPTION | 2 +-
 NEWS.md     | 11 +++++++++++
 README.md   | 2 +-
 3 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/DESCRIPTION b/DESCRIPTION
index 362663ff..b352859d 100644
--- a/DESCRIPTION
+++ b/DESCRIPTION
@@ -1,6 +1,6 @@
Package: CSTools
Title: Assessing Skill of Climate Forecasts on Seasonal-to-Decadal Timescales
-Version: 5.1.1
+Version: 5.2.0
Authors@R: c(
person("Nuria", "Perez-Zanon", , "nuria.perez@bsc.es", role = "aut", comment = c(ORCID = "0000-0001-8568-3071")),
person("Louis-Philippe", "Caron", , "louis-philippe.caron@bsc.es", role = "aut", comment = c(ORCID = "0000-0001-5221-0147")),
diff --git a/NEWS.md b/NEWS.md
index 25041008..0f3602fd 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -1,3 +1,14 @@
+# CSTools 5.2.0 (Release date: 24-01-2024)
+
+### Development
+- New function CST_ChangeDimNames
+- CST_SplitDim: added dimension name parameters; Dates are now also split
+- CST_SaveExp: save time bounds and global attributes; improved code
+
+### Other
+- Updated README
+- Added citation file
+
# CSTools 5.1.1 (Release date: 19-10-2023)

### Fixes
diff --git a/README.md b/README.md
index d8560b8d..9be3c57c 100644
--- a/README.md
+++ b/README.md
@@ -47,7 +47,7 @@ Overview

The CSTools package functions can be distributed in the following methods:

-- **Data retrieval and formatting:** [CST_Start](R/CST_Start.R), [CST_SaveExp](R/CST_SaveExp.R), [CST_MergeDims](R/CST_MergeDims.R), [CST_SplitDim](R/CST_SplitDim.R), [CST_Subset](R/CST_Subset), [CST_InsertDim](R/CST_InsertDim.R), [CST_ChangeDimNames](R/CST_ChangeDimNames.R), [as.s2dv_cube](R/as.s2dv_cube.R) and [s2dv_cube](R/s2dv_cube.R).
+- **Data retrieval and formatting:** [CST_Start](R/CST_Start.R), [CST_SaveExp](R/CST_SaveExp.R), [CST_MergeDims](R/CST_MergeDims.R), [CST_SplitDim](R/CST_SplitDim.R), [CST_Subset](R/CST_Subset.R), [CST_InsertDim](R/CST_InsertDim.R), [CST_ChangeDimNames](R/CST_ChangeDimNames.R), [as.s2dv_cube](R/as.s2dv_cube.R) and [s2dv_cube](R/s2dv_cube.R).
- **Classification:** [CST_MultiEOF](R/CST_MultiEOF.R), [CST_WeatherRegimes](R/CST_WeatherRegimes.R), [CST_RegimsAssign](R/CST_RegimesAssign.R), [CST_CategoricalEnsCombination](R/CST_CategoricalEnsCombination.R), [CST_EnsClustering](R/CST_EnsClustering.R).
- **Downscaling:** [CST_Analogs](R/CST_Analogs.R), [CST_RainFARM](R/CST_RainFARM.R), [CST_RFTemp](R/CST_RFTemp.R), [CST_AdamontAnalog](R/CST_AdamontAnalog.R), [CST_AnalogsPredictors](R/CST_AnalogsPredictors.R).
- **Correction and transformation:** [CST_BiasCorrection](R/CST_BiasCorrection.R), [CST_Calibration](R/CST_Calibration.R), [CST_QuantileMapping](R/CST_QuantileMapping.R), [CST_Anomaly](R/CST_Anomaly.R), [CST_BEI_Weighting](R/CST_BEI_Weighting.R), [CST_DynBiasCorrection](R/CST_DynBiasCorrection.R).
-- GitLab From 382aa29bea487ba7eef871e3f6ad850ae6780800 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Tue, 23 Jan 2024 17:02:01 +0100 Subject: [PATCH 62/66] Add inst/doc to .gitignore and .Rbuildignore --- .Rbuildignore | 2 +- .gitignore | 1 + 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/.Rbuildignore b/.Rbuildignore index 6cd6d8c8..df7a1177 100644 --- a/.Rbuildignore +++ b/.Rbuildignore @@ -8,7 +8,7 @@ .*\.gitlab-ci.yml$ .lintr ^tests$ -#^inst/doc$ +^inst/doc$ ^inst/doc/usecase/UseCase2_PrecipitationDownscaling_RainFARM_RF100\.R$ ^inst/doc/usecase/UseCase1_WindEvent_March2018\.R$ ^inst/doc/usecase/UseCase2_PrecipitationDownscaling_RainFARM_RF4\.R$ diff --git a/.gitignore b/.gitignore index 2f6c062a..4b9792d6 100644 --- a/.gitignore +++ b/.gitignore @@ -17,3 +17,4 @@ Rplots.pdf .nfs* *.RData !data/*.RData +inst/doc -- GitLab From 3d6acfb8ef47e50daca2863a489a44d74412c804 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Tue, 23 Jan 2024 17:10:10 +0100 Subject: [PATCH 63/66] Correct .gitignore --- .gitignore | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.gitignore b/.gitignore index 4b9792d6..9e03b0a3 100644 --- a/.gitignore +++ b/.gitignore @@ -17,4 +17,4 @@ Rplots.pdf .nfs* *.RData !data/*.RData -inst/doc +inst/doc/* -- GitLab From 6b3bfd94eef644d899876ab088effca9fc7e2fa5 Mon Sep 17 00:00:00 2001 From: Eva Rifa Date: Wed, 24 Jan 2024 10:23:45 +0100 Subject: [PATCH 64/66] Add dontrun in CST_SaveExp examples, comment some tests in CST_SaveExp to avoid creating files, remove some changes in -gitignore, .Rbuildignore --- .Rbuildignore | 2 +- .gitignore | 1 - R/CST_SaveExp.R | 4 + man/CST_SaveExp.Rd | 2 + man/SaveExp.Rd | 2 + tests/testthat/test-CST_SaveExp.R | 158 +++++++++++++++--------------- 6 files changed, 88 insertions(+), 81 deletions(-) diff --git a/.Rbuildignore b/.Rbuildignore index df7a1177..6cd6d8c8 100644 --- a/.Rbuildignore +++ b/.Rbuildignore @@ -8,7 +8,7 @@ .*\.gitlab-ci.yml$ .lintr ^tests$ -^inst/doc$ +#^inst/doc$ ^inst/doc/usecase/UseCase2_PrecipitationDownscaling_RainFARM_RF100\.R$ ^inst/doc/usecase/UseCase1_WindEvent_March2018\.R$ ^inst/doc/usecase/UseCase2_PrecipitationDownscaling_RainFARM_RF4\.R$ diff --git a/.gitignore b/.gitignore index 9e03b0a3..2f6c062a 100644 --- a/.gitignore +++ b/.gitignore @@ -17,4 +17,3 @@ Rplots.pdf .nfs* *.RData !data/*.RData -inst/doc/* diff --git a/R/CST_SaveExp.R b/R/CST_SaveExp.R index 72a97b8c..4e25a51d 100644 --- a/R/CST_SaveExp.R +++ b/R/CST_SaveExp.R @@ -92,9 +92,11 @@ #'\code{\link{s2dv_cube}} #' #'@examples +#'\dontrun{ #'data <- lonlat_temp_st$exp #'CST_SaveExp(data = data, ftime_dim = 'ftime', var_dim = 'var', #' dat_dim = 'dataset', sdate_dim = 'sdate') +#'} #' #'@export CST_SaveExp <- function(data, destination = "./", startdates = NULL, @@ -266,6 +268,7 @@ CST_SaveExp <- function(data, destination = "./", startdates = NULL, #'} #' #'@examples +#'\dontrun{ #'data <- lonlat_temp_st$exp$data #'lon <- lonlat_temp_st$exp$coords$lon #'lat <- lonlat_temp_st$exp$coords$lat @@ -277,6 +280,7 @@ CST_SaveExp <- function(data, destination = "./", startdates = NULL, #'SaveExp(data = data, coords = coords, Datasets = Datasets, varname = varname, #' Dates = Dates, metadata = metadata, single_file = TRUE, #' ftime_dim = 'ftime', var_dim = 'var', dat_dim = 'dataset') +#'} #' #'@import easyNCDF #'@importFrom s2dv Reorder diff --git a/man/CST_SaveExp.Rd b/man/CST_SaveExp.Rd index 1ac3e7ab..c7976bcc 100644 --- a/man/CST_SaveExp.Rd +++ b/man/CST_SaveExp.Rd @@ -122,9 +122,11 @@ This function allows to divide and save a 
object of class 's2dv_cube' object that follows the NetCDF attributes conventions. } \examples{ +\dontrun{ data <- lonlat_temp_st$exp CST_SaveExp(data = data, ftime_dim = 'ftime', var_dim = 'var', dat_dim = 'dataset', sdate_dim = 'sdate') +} } \seealso{ diff --git a/man/SaveExp.Rd b/man/SaveExp.Rd index d7f0b30d..6ec767a0 100644 --- a/man/SaveExp.Rd +++ b/man/SaveExp.Rd @@ -151,6 +151,7 @@ from StartR package. If the original 's2dv_cube' object has been created from \code{CST_Load()}, then it can be reloaded with \code{Load()}. } \examples{ +\dontrun{ data <- lonlat_temp_st$exp$data lon <- lonlat_temp_st$exp$coords$lon lat <- lonlat_temp_st$exp$coords$lat @@ -162,6 +163,7 @@ metadata <- lonlat_temp_st$exp$attrs$Variable$metadata SaveExp(data = data, coords = coords, Datasets = Datasets, varname = varname, Dates = Dates, metadata = metadata, single_file = TRUE, ftime_dim = 'ftime', var_dim = 'var', dat_dim = 'dataset') +} } \author{ diff --git a/tests/testthat/test-CST_SaveExp.R b/tests/testthat/test-CST_SaveExp.R index b4e17554..385d2793 100644 --- a/tests/testthat/test-CST_SaveExp.R +++ b/tests/testthat/test-CST_SaveExp.R @@ -105,18 +105,18 @@ test_that("1. Input checks: CST_SaveExp", { ) ) # startdates - expect_warning( - CST_SaveExp(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL, startdates = 1), - paste0("Parameter 'startdates' doesn't have the same length ", - "as dimension 'sdate', it will not be used.") - ) - expect_warning( - CST_SaveExp(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL, startdates = '20100101'), - paste0("Parameter 'startdates' doesn't have the same length ", - "as dimension '", 'sdate',"', it will not be used.") - ) + # expect_warning( + # CST_SaveExp(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, + # dat_dim = NULL, var_dim = NULL, startdates = 1), + # paste0("Parameter 'startdates' doesn't have the same length ", + # "as dimension 'sdate', it will not be used.") + # ) + # expect_warning( + # CST_SaveExp(data = cube1, ftime_dim = 'ftime', memb_dim = NULL, + # dat_dim = NULL, var_dim = NULL, startdates = '20100101'), + # paste0("Parameter 'startdates' doesn't have the same length ", + # "as dimension '", 'sdate',"', it will not be used.") + # ) # memb_dim suppressWarnings( expect_error( @@ -173,38 +173,38 @@ test_that("1. Input checks", { paste0("Parameter 'Dates' must have dimension names.") ) # drop_dims - expect_warning( - SaveExp(data = dat2, coords = coords2, - metadata = list(tas = list(level = '2m')), - Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL, drop_dims = 1), - paste0("Parameter 'drop_dims' must be character string containing ", - "the data dimension names to be dropped. It will not be used.") - ) - expect_warning( - SaveExp(data = dat2, coords = coords2, - metadata = list(tas = list(level = '2m')), - Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL, drop_dims = 'time'), - paste0("Parameter 'drop_dims' must be character string containing ", - "the data dimension names to be dropped. It will not be used.") - ) - expect_warning( - SaveExp(data = dat2, coords = coords2, - metadata = list(tas = list(level = '2m')), - Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL, drop_dims = 'sdate'), - paste0("Parameter 'drop_dims' can only contain dimension names ", - "that are of length 1. 
It will not be used.") - ) - expect_warning( - SaveExp(data = dat2, coords = coords2, - metadata = list(tas = list(level = '2m')), - Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL, drop_dims = 'ftime'), - paste0("Parameter 'drop_dims' contains dimensions used in the ", - "computation. It will not be used.") - ) + # expect_warning( + # SaveExp(data = dat2, coords = coords2, + # metadata = list(tas = list(level = '2m')), + # Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, + # dat_dim = NULL, var_dim = NULL, drop_dims = 1), + # paste0("Parameter 'drop_dims' must be character string containing ", + # "the data dimension names to be dropped. It will not be used.") + # ) + # expect_warning( + # SaveExp(data = dat2, coords = coords2, + # metadata = list(tas = list(level = '2m')), + # Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, + # dat_dim = NULL, var_dim = NULL, drop_dims = 'time'), + # paste0("Parameter 'drop_dims' must be character string containing ", + # "the data dimension names to be dropped. It will not be used.") + # ) + # expect_warning( + # SaveExp(data = dat2, coords = coords2, + # metadata = list(tas = list(level = '2m')), + # Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, + # dat_dim = NULL, var_dim = NULL, drop_dims = 'sdate'), + # paste0("Parameter 'drop_dims' can only contain dimension names ", + # "that are of length 1. It will not be used.") + # ) + # expect_warning( + # SaveExp(data = dat2, coords = coords2, + # metadata = list(tas = list(level = '2m')), + # Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, + # dat_dim = NULL, var_dim = NULL, drop_dims = 'ftime'), + # paste0("Parameter 'drop_dims' contains dimensions used in the ", + # "computation. It will not be used.") + # ) # varname suppressWarnings( expect_error( @@ -230,14 +230,14 @@ test_that("1. Input checks", { paste0("Parameter 'ftime_dim' is not found in 'data' dimension.") ) # Dates dimension check - expect_warning( - SaveExp(data = dat4, coords = coords4, - metadata = list(tas = list(level = '2m')), - Dates = NULL, ftime_dim = NULL, memb_dim = NULL, - dat_dim = NULL, var_dim = NULL), - paste0("Dates must be provided if 'data' must be saved in separated files. ", - "All data will be saved in a single file.") - ) + # expect_warning( + # SaveExp(data = dat4, coords = coords4, + # metadata = list(tas = list(level = '2m')), + # Dates = NULL, ftime_dim = NULL, memb_dim = NULL, + # dat_dim = NULL, var_dim = NULL), + # paste0("Dates must be provided if 'data' must be saved in separated files. ", + # "All data will be saved in a single file.") + # ) # Without ftime and sdate expect_error( SaveExp(data = dat3, coords = coords3, @@ -247,34 +247,34 @@ test_that("1. Input checks", { paste0("Parameter 'Dates' can have only 'sdate_dim' and 'ftime_dim' ", "dimensions of length greater than 1.") ) - expect_warning( - SaveExp(data = dat2, coords = coords2, - metadata = list(tas = list(level = '2m')), - startdates = c(paste(1:11, collapse = '')), - Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL, sdate_dim = 'sdate'), - paste0("Parameter 'startdates' should be a character string containing ", - "the start dates in the format 'yyyy-mm-dd', 'yyyymmdd', 'yyyymm', ", - "'POSIXct' or 'Dates' class. 
Files will be named with Dates instead.") - ) - expect_warning( - SaveExp(data = dat2, coords = coords2, - metadata = list(tas = list(level = '2m')), - Dates = NULL, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL, sdate_dim = 'sdate'), - paste0("Dates must be provided if 'data' must be saved in separated files. ", - "All data will be saved in a single file.") - ) + # expect_warning( + # SaveExp(data = dat2, coords = coords2, + # metadata = list(tas = list(level = '2m')), + # startdates = c(paste(1:11, collapse = '')), + # Dates = dates2, ftime_dim = 'ftime', memb_dim = NULL, + # dat_dim = NULL, var_dim = NULL, sdate_dim = 'sdate'), + # paste0("Parameter 'startdates' should be a character string containing ", + # "the start dates in the format 'yyyy-mm-dd', 'yyyymmdd', 'yyyymm', ", + # "'POSIXct' or 'Dates' class. Files will be named with Dates instead.") + # ) + # expect_warning( + # SaveExp(data = dat2, coords = coords2, + # metadata = list(tas = list(level = '2m')), + # Dates = NULL, ftime_dim = 'ftime', memb_dim = NULL, + # dat_dim = NULL, var_dim = NULL, sdate_dim = 'sdate'), + # paste0("Dates must be provided if 'data' must be saved in separated files. ", + # "All data will be saved in a single file.") + # ) # (dat3) Without sdate_dim - expect_warning( - SaveExp(data = dat3, coords = coords3, - metadata = list(tas = list(level = '2m')), - Dates = NULL, ftime_dim = 'ftime', memb_dim = NULL, - dat_dim = NULL, var_dim = NULL, sdate_dim = NULL, - extra_string = 'nosdate3.nc', single_file = FALSE), - paste0("Dates must be provided if 'data' must be saved in separated files. ", - "All data will be saved in a single file.") - ) + # expect_warning( + # SaveExp(data = dat3, coords = coords3, + # metadata = list(tas = list(level = '2m')), + # Dates = NULL, ftime_dim = 'ftime', memb_dim = NULL, + # dat_dim = NULL, var_dim = NULL, sdate_dim = NULL, + # extra_string = 'nosdate3.nc', single_file = FALSE), + # paste0("Dates must be provided if 'data' must be saved in separated files. ", + # "All data will be saved in a single file.") + # ) }) ############################################## -- GitLab From ab18f5173e7eb48cc98a71b3884760c54b710a1a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Eva=20Rif=C3=A0?= Date: Thu, 25 Jan 2024 15:56:20 +0100 Subject: [PATCH 65/66] Update README.md without gitlab short links --- README.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/README.md b/README.md index 9be3c57c..8f56d956 100644 --- a/README.md +++ b/README.md @@ -47,12 +47,12 @@ Overview The CSTools package functions can be distributed in the following methods: -- **Data retrieval and formatting:** [CST_Start](R/CST_Start.R), [CST_SaveExp](R/CST_SaveExp.R), [CST_MergeDims](R/CST_MergeDims.R), [CST_SplitDim](R/CST_SplitDim.R), [CST_Subset](R/CST_Subset.R), [CST_InsertDim](R/CST_InsertDim.R), [CST_ChangeDimNames](R/CST_ChangeDimNames.R), [as.s2dv_cube](R/as.s2dv_cube.R) and [s2dv_cube](R/s2dv_cube.R). -- **Classification:** [CST_MultiEOF](R/CST_MultiEOF.R), [CST_WeatherRegimes](R/CST_WeatherRegimes.R), [CST_RegimsAssign](R/CST_RegimesAssign.R), [CST_CategoricalEnsCombination](R/CST_CategoricalEnsCombination.R), [CST_EnsClustering](R/CST_EnsClustering.R). -- **Downscaling:** [CST_Analogs](R/CST_Analogs.R), [CST_RainFARM](R/CST_RainFARM.R), [CST_RFTemp](R/CST_RFTemp.R), [CST_AdamontAnalog](R/CST_AdamontAnalog.R), [CST_AnalogsPredictors](R/CST_AnalogsPredictors.R). 
-- **Correction and transformation:** [CST_BiasCorrection](R/CST_BiasCorrection.R), [CST_Calibration](R/CST_Calibration.R), [CST_QuantileMapping](R/CST_QuantileMapping.R), [CST_Anomaly](R/CST_Anomaly.R), [CST_BEI_Weighting](R/CST_BEI_Weighting.R), [CST_DynBiasCorrection](R/CST_DynBiasCorrection.R).
-- **Assessment:** [CST_MultiMetric](R/CST_MultiMetric.R), [CST_MultivarRMSE](R/CST_MultivarRMSE)
-- **Visualization:** [PlotCombinedMap](R/PlotCombinedMap.R), [PlotForecastPDF](R/PlotForecastPDF.R), [PlotMostLikelyQuantileMap](R/PlotMostLikelyQuantileMap.R), [PlotPDFsOLE](R/PlotPDFsOLE.R), [PlotTriangles4Categories](R/PlotTriangles4Categories.R), [PlotWeeklyClim](R/PlotWeeklyClim.R).
+- **Data retrieval and formatting:** CST_Start, CST_SaveExp, CST_MergeDims, CST_SplitDim, CST_Subset, CST_InsertDim, CST_ChangeDimNames, as.s2dv_cube and s2dv_cube.
+- **Classification:** CST_MultiEOF, CST_WeatherRegimes, CST_RegimesAssign, CST_CategoricalEnsCombination, CST_EnsClustering.
+- **Downscaling:** CST_Analogs, CST_RainFARM, CST_RFTemp, CST_AdamontAnalog, CST_AnalogsPredictors.
+- **Correction and transformation:** CST_BiasCorrection, CST_Calibration, CST_QuantileMapping, CST_Anomaly, CST_BEI_Weighting, CST_DynBiasCorrection.
+- **Assessment:** CST_MultiMetric, CST_MultivarRMSE.
+- **Visualization:** PlotCombinedMap, PlotForecastPDF, PlotMostLikelyQuantileMap, PlotPDFsOLE, PlotTriangles4Categories, PlotWeeklyClim.

An `s2dv_cube` is an object that stores an ordered multidimensional array with named dimensions, specific coordinates and stored metadata (in-memory representation of a NetCDF file). Its “methods” are the **CST** prefix functions. The basic structure of the class `s2dv_cube` is a list of lists. The first level elements are: `data`, `dims`, `coords` and `attrs`. Any specific element can be accessed using the `$` operator.
-- GitLab


From 21dd81ef4b5bba38a87acbb930f8f15bf2cb2668 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eva=20Rif=C3=A0?=
Date: Thu, 25 Jan 2024 15:57:11 +0100
Subject: [PATCH 66/66] Update NEWS.md

---
 NEWS.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/NEWS.md b/NEWS.md
index 0f3602fd..cb826651 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -1,4 +1,4 @@
-# CSTools 5.2.0 (Release date: 24-01-2024)
+# CSTools 5.2.0 (Release date: 25-01-2024)

### Development
- New function CST_ChangeDimNames
-- GitLab