diff --git a/DESCRIPTION b/DESCRIPTION index 78935b854eb76f5ab7dce4fe18216a6c0588ccde..6780c83d1760c794b0289f671433edacbab07636 100644 --- a/DESCRIPTION +++ b/DESCRIPTION @@ -1,6 +1,6 @@ Package: s2dverification Title: Set of Common Tools for Forecast Verification -Version: 2.8.5 +Version: 2.8.6 Authors@R: c( person("BSC-CNS", role = c("aut", "cph")), person("Virginie", "Guemas", , "virginie.guemas@bsc.es", role = "aut"), @@ -50,7 +50,8 @@ Imports: ncdf4, parallel, plyr, - SpecsVerification (>= 0.5.0) + SpecsVerification (>= 0.5.0), + multiApply (>= 2.0.0) Suggests: easyVerification, testthat diff --git a/NAMESPACE b/NAMESPACE index 3e35710e2d54618b5a2311a1a03f839de76e59f3..00c59bfa8c0477bfe32e3cfa00002bf6501b2e13 100644 --- a/NAMESPACE +++ b/NAMESPACE @@ -56,6 +56,7 @@ export(PlotBoxWhisker) export(PlotClim) export(PlotEquiMap) export(PlotLayout) +export(PlotMatrix) export(PlotSection) export(PlotStereoMap) export(PlotVsLTime) @@ -89,6 +90,7 @@ import(graphics) import(mapproj) import(maps) import(methods) +import(multiApply) import(ncdf4) import(parallel) import(plyr) diff --git a/NEWS.md b/NEWS.md index 51b198b1aa2272eb0830ff43f2626d207b821ddf..6d81252db9d51a1ac86b6fa5bd5fca14ead4dc89 100644 --- a/NEWS.md +++ b/NEWS.md @@ -1,3 +1,140 @@ -# s2dverification 2.9.0 (Release date: ) -- Apply Roxygen2 to all the files. (2019.07.31) (An-Chi) +# s2dverification 2.8.6 (Release date: ) +- Apply Roxygen2 format to all the files. +- Bug fix in Composite(). +- Bug fix in Ano(). It is recommended to assign the dimensions by name to avoid confusion when the dimensions have the same length. +- Trend() documentation error fix. +- Introduce new function PlotMatrix(). +# s2dverification 2.8.0 (Release date: 2017-02-13) +- Change licence from GPL-3 to LGPL-3. +- New veriApply compatible score functions (.BrierScore, .Corr, .RMS, .RMSSS, .RatioRMS, .RatioSDRMS and .Trend). +- New function CDORemap to interpolate R arrays with CDO. +- New function ArrayToNetCDF to save R arrays with metadata in NetCDF files. +- Enhance plot_timeseries.R and plot_maps.R example scripts to work with file-per-chunk data, for auto-ecearth v3.2.1a. +- Add colour-blind colour bars for the map plots. +- Add warning in Load when extrapolating data. +- Bug fix in ProbBins when called with cross-validation. +- Enhance documentation. +- Adapt UltimateBrier to SpecsVerification 0.5-0. +- Enhancements to adjust size and resolution in plotting functions. +- Solve PlotEquiMap bug when values are equal to the lower limit. +- Bug fix in Ano. +- Bug fix in PlotVsLTime. +- Small update in the configuration file. + +# s2dverification 2.7.0 (Release date: 2016-08-24) +Enhanced PlotEquiMap() and PlotStereoMap() with lots of new options and fixed issues: +- Colour bar with triangle ends and lots of new features. +- Margins, labels, ticks, colour bar, titles now arranged properly. +- Now possible to adjust colour and border of continents, size, colour and type of contour lines, size of labels, ticks and margins, colour and width of boxes, etc. +- Draw multiple superimposed dot/symbol layers. +- Draw boxes in PlotStereoMap(). +- PlotStereoMap() with bounding circle. +- Add function PlotLayout() to automatically generate complex figure layouts and populate them with plots from multi-dimensional arrays. +- Fix and update corrupted example scripts (required for new auto-ecearth releases to work). +- Add function Subset() to easily take chunks of data arrays. +- Fix bug in Load() under some particular configurations. +- Enhance margins in PlotAno().
+- Update sample data to include the metadata as provided by Load(). +- Updates and fixes in the BSC Load() configuration file. + +# s2dverification 2.6.0 (Release date: 2016-06-06) +- Update configuration file. +- Functions to compute variability modes and to project data onto them: EOF() and ProjectField(). +- Function to compute co-variability modes: SVD(), by Javi. +- Function to compute the NAO: NAO(), by Fabian, Virginie, Lauriane, Martin. +- Brier score/skill score accounting for small ensemble/start date sample size: UltimateBrier(). +- K-means spatial clustering: Cluster(). +- Synthetic data generator: ToyModel(). +- Tropical cyclone downscaling: StatSeasAtlHurr(). +- Function to composite fields: Composite(). +- Function to generate map animations: AnimateMap(). +- Function to plot time series with box-and-whisker plots: PlotBoxWhisker(). +- Possible to disable computation of confidence intervals or p-values in ACC(), Corr(), RatioRMS(), RatioSDRMS(), RMS(), RMSSS(), Spread() and Trend(). +- Possible to adjust confidence level in all functions that provide confidence intervals: ACC(), Corr(), RMS(), Spread() and Trend(). +- Possible to plot arrows in PlotEquiMap(). +- Possible to save plots in multiple formats, to file or onscreen, from all plot functions. +- Objects returned by Load() are now closer to the format in downscaleR. The initial and end date of each time step is provided now. +- Enhancements in Smoothing(). +- Load() now stops if the tag $START_DATE$/$YEAR$+$MONTH$+$DAY$ is not in the path pattern of any of the experiments. + +# s2dverification 2.5.0 (Release date: 2016-01-04) +- Fix bugs when using masks in Load() +- Able to specify masks with paths to NetCDF files + +# s2dverification 2.4.7 (Release date: 2015-11-15) +- Update plot_timeseries.R to new paths and to 'ncdf4'. +- Improve performance when retrieving subsets of data (regions of earth or time periods). +- Add possibility to use Load() without a configuration file. See details on parameters 'exp' and 'obs' in ?Load + +Load() now returns plenty of metadata. Highlighted: +- Paths to all loaded files +- Paths to not found files +- Stamp with all the provided parameters to exactly reproduce any Load() call +- The name of the common grid (if any), following CDO's naming conventions + +Other enhancements in Load(): +- Enhance error handling and error messages +- Add “progress bar” +- Automatically detect the grid of the files. No need to specify it +- Automatically detect whether the requested variable is 2-dimensional or a global mean. No need to specify it +- Possibility to load observations only, from a limited period of time only +- Possibility to load NetCDF files with disordered dimensions +- Remove system dependency of 's2dverification' on NCO and some GNU tools +- Simplify configuration file: removed lists of variables and reduced from 5 tables to 2, one for experimental datasets and one for observational datasets. You can convert old configuration files to the new format with the script in /shared/earth/software/scripts/convertConfig.R as follows: /shared/earth/software/scripts/convertConfig.R /path/to/configfile.conf +- Fix and update the sample script plot_timeseries.R +- Fix wrong entries in BSC configuration file for some ice variables.
+ +# s2dverification 2.4.0 (Release date: 2015-07-27) +- Option to draw rectangles in PlotEquiMap() +- Modification of Corr() to accommodate ranked correlation +- Add the possibility to load the second set of HadCM3 decadal data (i3p1) +- Add functions to assist in manipulating the configuration file +- Improve examples that use extremely reduced experimental and observational datasets +- Uniformize documentation style +- Add possibility to configure dimension names to look for inside NetCDF files +- Add the possibility to load ESA observations from SMHI +- Fix bug that happened in some cases when a common grid is not specified + +# s2dverification 2.3.2 (Release date: 2015-04-23) +- New CRPS() function to compute the continuous ranked probability score for ensemble forecasts. +- New ProbBins() function to compute probabilistic information of a forecast relative to a threshold or a quantile. +- Load() stops and warns if the masks provided are not in the correct grid. +- Load() didn't apply, as expected, the same masks in observations as in experiments when possible. Now fixed. +- Enhancement in Clim() documentation. +- Enhancements in Load() and configuration file documentation. +- HadSLP dataset is now loadable + +# s2dverification 2.3.1 (Release date: 2015-03-09) +- Loading observations only fixed +- Loading only one leadtime fixed +- Loading a 2D variable when the first observation was not stored in file-per-dataset format fixed +- Parameter 'ncores' changed to 'nprocs' +- Improvements in configuration file mechanism and documentation + +# s2dverification 2.3.0 (Release date: 2015-03-02) +- Configuration file mechanism to specify new dataset or variable paths, grids, etc. +- New parameters in Load() to specify maximum and minimum values. +- New supported dataset formats. See '?Load' in R after loading s2dverification for more information. +- More efficient memory usage in Load() and usage of multiple parallel processes (faster). +- NetCDF4 + OPeNDAP support + +# s2dverification 2.2.0 (Release date: 2014-12-16) +- ACC provides confidence intervals obtained with bootstrap method +- Function to plot ACC score +- Function to plot variables on a polar stereographic projection +- Possibility of loading observations only +- Possibility to load more ice variables +- Adjustable significance level in the Corr function +- Adjustable number size in ColorBar + +# s2dverification 2.1.0 (Release date: 2014-01-23) +- Demo scripts 'plot_timeseries.R' and 'plot_maps.R' available in the 'inst/doc' directory in the s2dverification repository. +- Documentation on how to specify the grids and masks to the function Load() has been added to its help page, code and package manual. + +# s2dverification 2.0.0 (Release date: 2013-08-02) +- Use of the standard R package structure. +- Use of Google's R style guide. +- Functions that involved the RClim set of functions have been kept apart from the package (AnimVsLTime, BlueRed, PlotMap, ProjMap), as well as the authors. +- New functions have been added: Alpha, EnoNew, Filter, FitAcfCoef, FitAutocor, GenSeries, Spectrum. +- Extended help. diff --git a/R/ACC.R b/R/ACC.R index aeabc0796950d180691a8e4fc6abd5c5e5309c59..a00bacf71d7f2f5df67b08cc9c3c1480c1d5c433 100644 --- a/R/ACC.R +++ b/R/ACC.R @@ -77,7 +77,9 @@ #'ano_exp <- Ano(sampleData$mod, clim$clim_exp) #'ano_obs <- Ano(sampleData$obs, clim$clim_obs) #'acc <- ACC(Mean1Dim(ano_exp, 2), Mean1Dim(ano_obs, 2)) +#' \donttest{ #'PlotACC(acc$ACC, startDates) +#' } #'@references Joliffe and Stephenson (2012).
Forecast Verification: A #' Practitioner's Guide in Atmospheric Science. Wiley-Blackwell. #'@keywords datagen diff --git a/R/AnimateMap.R b/R/AnimateMap.R index 0f4d4b38d50731a796d329c20aeaaeb19577090b..83667d2a2199029677094018d6a4ed248450d795 100644 --- a/R/AnimateMap.R +++ b/R/AnimateMap.R @@ -148,11 +148,13 @@ #'dim_to_mean <- 2 # Mean along members #'rms <- RMS(Mean1Dim(season_means_mod, dim_to_mean), #' Mean1Dim(season_means_obs, dim_to_mean)) +#' \donttest{ #'AnimateMap(rms, sampleData$lon, sampleData$lat, toptitle = #' "RMSE decadal prediction", sizetit = 1, units = "degree", #' monini = 1, freq = 1, msk95lev = FALSE, brks = seq(0, 0.8, 0.08), #' intlon = 10, intlat = 10, filled.continents = TRUE, #' fileout = 'rmse_dec.gif') +#' } #'@importFrom grDevices postscript dev.off #'@export AnimateMap <- function(var, lon, lat, toptitle = rep("", 11), sizetit = 1, diff --git a/R/Ano.R b/R/Ano.R index 657e7f9b8816f00680c6d337f5ca84872833ce24..1c8ffee9059e29aac9b9c20496424b234d917263 100644 --- a/R/Ano.R +++ b/R/Ano.R @@ -31,9 +31,11 @@ #'dim_to_smooth <- 4 # Smooth along lead-times #'smooth_ano_exp <- Smoothing(ano_exp, runmean_nb_months, dim_to_smooth) #'smooth_ano_obs <- Smoothing(ano_obs, runmean_nb_months, dim_to_smooth) +#' \donttest{ #'PlotAno(smooth_ano_exp, smooth_ano_obs, startDates, #' toptitle = paste('smoothed anomalies'), ytitle = c('K', 'K', 'K'), #' legends = 'ERSST', biglab = FALSE, fileout = 'tos_ano.eps') +#' } #'@export Ano <- function(var, clim) { # diff --git a/R/Ano_CrossValid.R b/R/Ano_CrossValid.R index e42941f256f3fcb85859c430d64317c56bc43e66..954a08144565c42e76ec80eb5e3b996fbd822149 100644 --- a/R/Ano_CrossValid.R +++ b/R/Ano_CrossValid.R @@ -26,9 +26,11 @@ #'# Load sample data as in Load() example: #'example(Load) #'anomalies <- Ano_CrossValid(sampleData$mod, sampleData$obs) +#' \donttest{ #'PlotAno(anomalies$ano_exp, anomalies$ano_obs, startDates, #' toptitle = paste('anomalies'), ytitle = c('K', 'K', 'K'), #' legends = 'ERSST', biglab = FALSE, fileout = 'tos_ano_crossvalid.eps') +#' } #'@export Ano_CrossValid <- function(var_exp, var_obs, memb = TRUE) { # diff --git a/R/Clim.R b/R/Clim.R index 8a390ad4a96b6f57821e094f57c47f759ff41d21..8445e46277bcba517dd1256d957dbe3347c1b0fb 100644 --- a/R/Clim.R +++ b/R/Clim.R @@ -41,10 +41,12 @@ #'# Load sample data as in Load() example: #'example(Load) #'clim <- Clim(sampleData$mod, sampleData$obs) +#' \donttest{ #'PlotClim(clim$clim_exp, clim$clim_obs, #' toptitle = paste('sea surface temperature climatologies'), #' ytitle = 'K', monini = 11, listexp = c('CMIP5 IC3'), #' listobs = c('ERSST'), biglab = FALSE, fileout = 'tos_clim.eps') +#' } #'@export Clim <- function(var_exp, var_obs, memb = TRUE, kharin = FALSE, NDV = FALSE) { # diff --git a/R/Consist_Trend.R b/R/Consist_Trend.R index 827440fc8fedb0b093c6055686df6c65d88acca5..2a82fd9c021ab1028379515fb94bbef325ffd3ff 100644 --- a/R/Consist_Trend.R +++ b/R/Consist_Trend.R @@ -57,6 +57,7 @@ #' Mean1Dim(smooth_ano_obs, dim_to_mean), #' years_between_startdates) #' +#' \donttest{ #'PlotVsLTime(trend$trend, toptitle = "trend", ytitle = "K/(5 years)", #' monini = 11, limits = c(-0.8, 0.8), listexp = c('CMIP5 IC3'), #' listobs = c('ERSST'), biglab = FALSE, hlines = c(0), @@ -64,6 +65,7 @@ #'PlotAno(InsertDim(trend$detrendedmod,2,1), InsertDim(trend$detrendedobs,2,1), #' startDates, "Detrended tos anomalies", ytitle = 'K', #' legends = 'ERSST', biglab = FALSE, fileout = 'tos_detrended_ano.eps') +#' } #' #'@export Consist_Trend <- function(var_exp, var_obs, interval = 1) { diff --git 
a/R/Corr.R b/R/Corr.R index 18ae9b1455f4445a5112b0ba462d64bad7a3974c..419002305586a9920628244c6445ee7f7ef20774 100644 --- a/R/Corr.R +++ b/R/Corr.R @@ -86,10 +86,12 @@ #' compROW = required_complete_row, #' limits = c(ceiling((runmean_months + 1) / 2), #' leadtimes_per_startdate - floor(runmean_months / 2))) +#' \donttest{ #'PlotVsLTime(corr, toptitle = "correlations", ytitle = "correlation", #' monini = 11, limits = c(-1, 2), listexp = c('CMIP5 IC3'), #' listobs = c('ERSST'), biglab = FALSE, hlines = c(-1, 0, 1), #' fileout = 'tos_cor.eps') +#' } #' #'# The following example uses veriApply combined with .Corr instead of Corr #' \dontrun{ diff --git a/R/Filter.R b/R/Filter.R index 42c847f91cd017086576b2d0d423600d94afb5aa..474c9c597b33a4a83f38f0aed7b12e1dea06879c 100644 --- a/R/Filter.R +++ b/R/Filter.R @@ -30,8 +30,10 @@ #' } #' } #'} +#' \donttest{ #'PlotAno(InsertDim(ensmod, 2, 1), sdates = startDates, fileout = #' 'filtered_ensemble_mean.eps') +#' } #' #'@importFrom stats lm #'@export diff --git a/R/Histo2Hindcast.R b/R/Histo2Hindcast.R index cb93ae8606ce81bbd32b69b4883e290e025ad732..7693319cc9ecfd053761e068f24793494386eb48 100644 --- a/R/Histo2Hindcast.R +++ b/R/Histo2Hindcast.R @@ -58,9 +58,11 @@ #' start_dates_out, leadtimes_per_startdate) #'observational_data <- Histo2Hindcast(sampleData$obs, startDates[1], #' start_dates_out, leadtimes_per_startdate) +#' \donttest{ #'PlotAno(experimental_data, observational_data, start_dates_out, #' toptitle = paste('anomalies reorganized into shorter chunks'), #' ytitle = 'K', fileout='tos_histo2hindcast.eps') +#' } #' #'@export Histo2Hindcast <- function(varin, sdatesin, sdatesout, nleadtimesout) { diff --git a/R/InsertDim.R b/R/InsertDim.R index 2e7a9b82e9f5ac8131bb57f58a2c5fd888cdfbe7..c2c128bd50689b6c5e2b83433fa9d2bb78e1d003 100644 --- a/R/InsertDim.R +++ b/R/InsertDim.R @@ -1,59 +1,106 @@ -#'Adds A Dimension To An Array +#'Add a dimension to an array #' -#'Inserts an extra dimension into an array at position 'posdim' with length -#''lendim' and which correspond to 'lendim' repetitions of the 'var' array. +#'Insert an extra dimension into an array at position 'posdim' with length +#''lendim'. The array repeats along the new dimension. #' -#'@param var Matrix to which a dimension should be added. -#'@param posdim Position of the new dimension. -#'@param lendim Length of the new dimension. +#'@param data An array to which the additional dimension to be added. +#'@param posdim An integer indicating the position of the new dimension. +#'@param lendim An integer indicating the length of the new dimension. +#'@param name A character string indicating the name for the new dimension. #' -#'@return Matrix with the added dimension. +#'@return An array as parameter 'data' but with the added dimension. #' #'@keywords datagen #'@author History:\cr #'0.1 - 2011-03 (V. Guemas, \email{virginie.guemas@ic3.cat}) - Original code\cr #'1.0 - 2013-09 (N. Manubens, \email{nicolau.manubens@ic3.cat}) - Formatting to R CRAN\cr #'1.1 - 2015-03 (N. Manubens, \email{nicolau.manubens@ic3.cat}) - Improvements +#'3.0 - 2019-12 (A. 
Ho, \email{an.ho@bsc.es}) - Modify with multiApply +#' #'@examples #'a <- array(rnorm(15), dim = c(3, 1, 5, 1)) -#'print(dim(a)) -#'print(dim(a[, , , ])) -#'print(dim(InsertDim(InsertDim(a[, , , ], 2, 1), 4, 1))) +#'res <- InsertDim(InsertDim(a[, , , ], 2, 1), 4, 1) #' +#'@import multiApply #'@export -InsertDim <- function(var, posdim, lendim) { - if (is.numeric(var) || is.logical(var)) { - dimsvar <- dim(var) - if (is.null(dimsvar)) { - dimsvar <- length(var) +InsertDim <- function(data, posdim, lendim, name = 'new', ncores = NULL) { + + # Check inputs + ## data + if (is.null(data)) { + stop("Parameter 'data' cannot be NULL.") + } + if (is.vector(data) & !is.list(data)) { #is vector + data <- as.array(data) + } + if (!is.array(data)) { + stop("Parameter 'data' must be an array.") + } + ## posdim + if (!is.numeric(posdim)) { + stop("Parameter 'posdim' must be a positive integer.") + } else if (posdim %% 1 != 0 | posdim <= 0 | length(posdim) > 1) { + stop("Parameter 'posdim' must be a positive integer.") + } + if (posdim > (length(dim(data)) + 1)) { + stop("Parameter 'posdim' cannot exceed the number of dimensions of parameter 'data' plus 1.") + } + ## lendim + if (!is.numeric(lendim)) { + stop("Parameter 'lendim' must be a positive integer.") + } else if (lendim %% 1 != 0 | lendim <= 0 | length(lendim) > 1) { + stop("Parameter 'lendim' must be a positive integer.") + } + ## name + if (!is.character(name) | length(name) > 1) { + stop("Parameter 'name' must be a character string.") + } + ## ncores + if (!is.null(ncores)) { + if (!is.numeric(ncores)) { + stop("Parameter 'ncores' must be a positive integer.") + } else if (ncores %% 1 != 0 | ncores <= 0 | length(ncores) > 1) { + stop("Parameter 'ncores' must be a positive integer.") } - outdim <- lendim - if (posdim <= (length(dimsvar) + 1)) { - if (posdim > 1) { - outdim <- c(dimsvar[1:(posdim - 1)], outdim) - } - if (posdim <= length(dimsvar)) { - outdim <- c(outdim, dimsvar[posdim:length(dimsvar)]) - } - outvar <- array(dim = c(outdim, rep(1, 10 - length(outdim)))) - # - # Duplicate the matrix along the required (posdim)th dimension - # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - # - u <- IniListDims(outdim, 10) - for (jindex in 1:lendim) { - u[[posdim]] <- jindex - outvar[u[[1]], u[[2]], u[[3]], u[[4]], u[[5]], u[[6]], u[[7]], u[[8]], u[[9]], u[[10]]] <- var - } - # - # Outputs - # ~~~~~~~~~ - dim(outvar) <- outdim - outvar - } else { - stop("'posdim' must be smaller or equal to the number of dimensions of 'var' plus 1") + } + + ############################### + # Calculate InsertDim + + ## create output dimension + outdim <- lendim + if (posdim > 1) { + outdim <- c(dim(data)[1:(posdim - 1)], outdim) + } + if (posdim <= length(dim(data))) { + outdim <- c(outdim, dim(data)[posdim:length(dim(data))]) + } + ## create output array + outvar <- array(dim = c(outdim)) + ## give temporary names for Apply(). The names will be replaced by those of 'data' in the end + names(dim(outvar)) <- paste0('D', 1:length(outdim)) + names(dim(outvar))[posdim] <- name #'new' + + res <- Apply(data = list(outvar), + margins = name, #'new', + fun = .InsertDim, + dat = data, + ncores = ncores)$output1 + + if (posdim != 1) { + if (posdim < length(outdim)) { + res <- .aperm2(res, c(1:(posdim - 1), length(outdim), posdim:(length(outdim) - 1))) + } else { #posdim = length(outdim) + res <- .aperm2(res, c(1:(posdim - 1), length(outdim))) } } else { - stop("'var' must be a numeric object.") + res <- .aperm2(res, c(length(outdim), 1:(length(outdim) - 1))) } + + return(res) +} + +.InsertDim <- function(x, dat) { + x <- dat + return(x) } diff --git a/R/Load.R b/R/Load.R index e17f3cfa683dbaf839d5f3cb022e091bec0352ff..cded9565fe77ae89528bd4c701d171982647c795 100644 --- a/R/Load.R +++ b/R/Load.R @@ -2209,7 +2209,7 @@ Load <- function(var, exp = NULL, obs = NULL, sdates, nmember = NULL, } else if (storefreq == 'hourly') { store_period <- 'hour' } else if (storefreq == 'daily') { - store_period <- 'day' + store_period <- 'DSTday' } else if (storefreq == 'monthly') { store_period <- 'month' } diff --git a/R/NAO.R b/R/NAO.R index c0d7c94d10ec2bb96d7109508081cb0d777917f2..bab4d3e0a4a56dabbc8a8c258f89e67ffb613ea8 100644 --- a/R/NAO.R +++ b/R/NAO.R @@ -111,8 +111,10 @@ #'# example data is not the full NAO area: NAO() will raise a warning. #'nao <- NAO(ano$ano_exp, ano$ano_obs, sampleData$lon, sampleData$lat) #'# Finally plot the NAO index +#' \donttest{ #'PlotBoxWhisker(nao$NAO_exp, nao$NAO_obs, "NAO index, DJF", "NAO index (PC1) TOS", #' monini = 12, yearini = 1985, freq = 1, "Exp. A", "Obs. X") +#' } #' #'@export NAO <- function(ano_exp = NULL, ano_obs = NULL, lon, lat, ftime_average = 2:4, obsproj = TRUE) { diff --git a/R/Plot2VarsVsLTime.R b/R/Plot2VarsVsLTime.R index c5d5abcf88420044f33e426e834889365df23bda..1e2fc6d9081b33a4ecae72395e6b052ab4c535f0 100644 --- a/R/Plot2VarsVsLTime.R +++ b/R/Plot2VarsVsLTime.R @@ -76,10 +76,12 @@ #'smooth_ano_exp_m_sub <- smooth_ano_exp - InsertDim(Mean1Dim(smooth_ano_exp, 2, #' narm = TRUE), 2, dim(smooth_ano_exp)[2]) #'spread <- Spread(smooth_ano_exp_m_sub, c(2, 3)) +#' \donttest{ #'Plot2VarsVsLTime(InsertDim(rms[, , , ], 1, 1), spread$sd, #' toptitle = 'RMSE and spread', monini = 11, freq = 12, #' listexp = c('CMIP5 IC3'), listvar = c('RMSE', 'spread'), #' fileout = 'plot2vars.eps') +#' } #' #'@importFrom grDevices png jpeg postscript pdf svg bmp tiff postscript dev.cur dev.new dev.off #'@importFrom stats ts diff --git a/R/PlotACC.R b/R/PlotACC.R index bff6511ad2bb0d556221b15aaf5b8dfd04103296..872327f15830093f9de97fd9bac33d15008edd6b 100644 --- a/R/PlotACC.R +++ b/R/PlotACC.R @@ -85,8 +85,10 @@ #'ano_obs <- Ano(sampleData$obs, clim$clim_obs) #'acc <- ACC(Mean1Dim(sampleData$mod, 2), #' Mean1Dim(sampleData$obs, 2)) +#' \donttest{ #'PlotACC(acc$ACC, startDates, toptitle = "Anomaly Correlation Coefficient") #' +#' } #'@importFrom grDevices dev.cur dev.new dev.off #'@importFrom stats ts #'@export diff --git a/R/PlotAno.R b/R/PlotAno.R index cc897a923c11844a09ac28a6eeaf8bd1469f9dc6..922806aa227ec5ab9993a87212f110cb718f9561 100644 --- a/R/PlotAno.R +++ b/R/PlotAno.R @@ -66,9 +66,11 @@ #'dim_to_smooth <- 4 # Smooth along lead-times #'smooth_ano_exp <- Smoothing(ano_exp, runmean_nb_months, dim_to_smooth) #'smooth_ano_obs <- Smoothing(ano_obs, runmean_nb_months, dim_to_smooth) +#' \donttest{ #'PlotAno(smooth_ano_exp, smooth_ano_obs, startDates, #' toptitle = paste('smoothed anomalies'), ytitle = c('K', 'K',
'K'), #' legends = 'ERSST', biglab = FALSE, fileout = 'tos_ano.eps') +#' } #' #'@importFrom grDevices dev.cur dev.new dev.off #'@importFrom stats ts diff --git a/R/PlotBoxWhisker.R b/R/PlotBoxWhisker.R index 1574522c0a102c7723506669ddd8de3de68b255f..46c5335507e395c5dd281d31e7ea7664c809668a 100644 --- a/R/PlotBoxWhisker.R +++ b/R/PlotBoxWhisker.R @@ -95,8 +95,10 @@ #'ano <- Ano_CrossValid(sampleData$mod, sampleData$obs) #'nao <- NAO(ano$ano_exp, ano$ano_obs, sampleData$lon, sampleData$lat) #'# Finally plot the nao index +#' \donttest{ #'PlotBoxWhisker(nao$NAO_exp, nao$NAO_obs, "NAO index, DJF", "NAO index (PC1) TOS", #' monini = 12, yearini = 1985, freq = 1, "Exp. A", "Obs. X") +#' } #' #'@importFrom grDevices dev.cur dev.new dev.off #'@importFrom stats cor diff --git a/R/PlotClim.R b/R/PlotClim.R index e7e6ac313ebc5df09a2701b2e4451dbfeecfe89b..a002429f5f80d1a11692ec1624a58d225f08d083 100644 --- a/R/PlotClim.R +++ b/R/PlotClim.R @@ -49,9 +49,11 @@ #'# Load sample data as in Load() example: #'example(Load) #'clim <- Clim(sampleData$mod, sampleData$obs) +#' \donttest{ #'PlotClim(clim$clim_exp, clim$clim_obs, toptitle = paste('climatologies'), #' ytitle = 'K', monini = 11, listexp = c('CMIP5 IC3'), #' listobs = c('ERSST'), biglab = FALSE, fileout = 'tos_clim.eps') +#' } #' #'@importFrom grDevices dev.cur dev.new dev.off #'@importFrom stats ts diff --git a/R/PlotMatrix.R b/R/PlotMatrix.R new file mode 100644 index 0000000000000000000000000000000000000000..8c830d74b4761f1f3d19a98f4f5a3cbef271d5f6 --- /dev/null +++ b/R/PlotMatrix.R @@ -0,0 +1,227 @@ +#'Function to convert any numerical table to a grid of coloured squares. +#' +#'This function converts a numerical data matrix into a coloured +#'grid. It is useful for a slide or article to present tabular results as +#'colors instead of numbers. +#' +#'@param var A numerical matrix containing the values to be displayed in a +#' colored image. +#'@param brks A vector of the color bar intervals. The length must be one more +#' than the parameter 'cols'. Use ColorBar() to generate default values. +#'@param cols A vector of valid color identifiers for color bar. The length +#' must be one less than the parameter 'brks'. Use ColorBar() to generate +#' default values. +#'@param toptitle A string of the title of the grid. Set NULL as default. +#'@param title.color A string of valid color identifier to decide the title +#' color. Set "royalblue4" as default. +#'@param xtitle A string of title of the x-axis. Set NULL as default. +#'@param ytitle A string of title of the y-axis. Set NULL as default. +#'@param xlabels A vector of labels of the x-axis. The length must be +#' length of the column of parameter 'var'. Set the sequence from 1 to the +#' length of the column of parameter 'var' as default. +#'@param xvert A logical value to decide whether to place x-axis labels +#' vertically. Set FALSE as default, which keeps the labels horizontally. +#'@param ylabels A vector of labels of the y-axis The length must be +#' length of the row of parameter 'var'. Set the sequence from 1 to the +#' length of the row of parameter 'var' as default. +#'@param line An integer specifying the distance between the title of the +#' x-axis and the x-axis. Set 3 as default. Adjust if the x-axis labels +#' are long. +#'@param figure.width A positive number as a ratio adjusting the width of the +#' grids. Set 1 as default. +#'@param legend A logical value to decide to draw the grid color legend or not. +#' Set TRUE as default. 
+#'@param legend.width A number between 0 and 0.5 to adjust the legend width. +#' Set 0.15 as default. +#'@param xlab_dist A number specifying the distance between the x labels and +#' the x axis. If not specified, it equals to -1 - (nrow(var) / 10 - 1). +#'@param ylab_dist A number specifying the distance between the y labels and +#' the y axis. If not specified, it equals to 0.5 - ncol(var) / 10. +#'@param fileout A string of full directory path and file name indicating where +#' to save the plot. If not specified (default), a graphics device will pop up. +#'@param size_units A string indicating the units of the size of the device +#' (file or window) to plot in. Set 'px' as default. See ?Devices and the +#' creator function of the corresponding device. +#'@param res A positive number indicating resolution of the device (file or window) +#' to plot in. See ?Devices and the creator function of the corresponding device. +#'@param ... The additional parameters to be passed to function ColorBar() in +#' s2dverification for color legend creation. +#'@return A figure in popup window by default, or saved to the specified path. +#' +#'@examples +#'#Example with random data +#' PlotMatrix(var = matrix(rnorm(n = 120, mean = 0.3), 10, 12), +#' cols = c('white','#fef0d9','#fdd49e','#fdbb84','#fc8d59', +#' '#e34a33','#b30000', '#7f0000'), +#' brks = c(-1, 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 1), +#' toptitle = "Mean Absolute Error", +#' xtitle = "Forecast time (month)", ytitle = "Start date", +#' xlabels = c("Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", +#' "Aug", "Sep", "Oct", "Nov", "Dec")) +#'@importFrom grDevices dev.new dev.off dev.cur +#'@export +PlotMatrix <- function(var, brks = NULL, cols = NULL, + toptitle = NULL, title.color = "royalblue4", + xtitle = NULL, ytitle = NULL, xlabels = NULL, xvert = FALSE, + ylabels = NULL, line = 3, figure.width = 1, legend = TRUE, + legend.width = 0.15, xlab_dist = NULL, ylab_dist = NULL, + fileout = NULL, size_units = 'px', res = 100, ...) { + + # Check variables: + if (!is.matrix(var)) + stop("Input values are not a matrix") + if (!is.numeric(var)) + stop("Input values are not always numbers") + + # Build: brks, cols + colorbar <- ColorBar(brks = brks, cols = cols, vertical = FALSE, + plot = FALSE, ...) 
+ brks <- colorbar$brks + cols <- colorbar$cols + + n.cols <- length(cols) ## number of colours + n.brks <- length(brks) ## number of intervals + + if (n.brks != n.cols + 1) + stop("There must be one break more than the number of colors") + ncols <- ncol(var) ## number of columns of the image + nrows <- nrow(var) ## number of rows of the image + if (ncols < 2) + stop("Matrix must have at least two columns") + if (nrows < 2) + stop("Matrix must have at least two rows") + if (!is.null(xlabels) && length(xlabels) != ncols) + stop(paste0("The number of x labels must be equal to the number of ", + "columns of the data matrix")) + if (!is.null(ylabels) && length(ylabels) != nrows) + stop(paste0("The number of y labels must be equal to the number of ", + "rows of the data matrix")) + if (!is.numeric(figure.width) || figure.width < 0) + stop("figure.width must be a positive number") + if (!is.numeric(legend.width) || legend.width < 0 || legend.width > 0.5) + stop("legend.width must be a number from 0 to 0.5") + + + # If there is any filenames to store the graphics, process them + # to select the right device + if (!is.null(fileout)) { + deviceInfo <- .SelectDevice(fileout = fileout, + width = 80 * ncols * figure.width, + height = 80 * nrows, + units = size_units, res = res) + saveToFile <- deviceInfo$fun + fileout <- deviceInfo$files + } + + # Open connection to graphical device + if (!is.null(fileout)) { + saveToFile(fileout) + } else if (names(dev.cur()) == 'null device') { + dev.new(units = size_units, res = res, + width = 8 * figure.width, height = 5) + } + + if (!is.null(fileout)) { + + # Draw empty plot: + par(mar = c(4, 4, 1, 0), fig = c(0.1, 1 - legend.width, 0.1, 0.9)) + plot(1, 1, type = "n", xaxt = "n", yaxt = "n", ylim = c(0.5, nrows + 0.5), + xlim = c(-0.5, ncols - 1 + 0.5), ann = F, bty = "n") + + # Add axes titles: + label.size <- 1.2 * (max(ncols, nrows) / 10) ^ 0.5 + mtext(side = 1, text = xtitle, line = line, cex = label.size, font = 3) + mtext(side = 2, text = ytitle, line = 3, cex = label.size, font = 3) + + # Add plot title: + if (is.null(title.color)) title.color <- "royalblue4" + mtext(side = 3, text = toptitle, cex = 1.75 * (nrows / 10) ^ 0.7, + col = title.color) + + # Add axis labels: + axis.size <- (max(ncols, nrows) / 10) ^ 0.3 + if (is.null(xlabels)) xlabels = 1:ncols + if (is.null(ylabels)) ylabels = 1:nrows + + if(is.null(xlab_dist)) { ## Add x axis labels + axis(1, at = seq(0, ncols - 1), las = ifelse(xvert, 2, 1), labels = xlabels, + cex.axis = axis.size, tck = 0, lwd = 0, line = - 1 - (nrows / 10 - 1)) + } else { + axis(1, at = seq(0, ncols - 1), las = ifelse(xvert, 2, 1), labels = xlabels, + cex.axis = axis.size, tck = 0, lwd = 0, line = xlab_dist) + } + if(is.null(ylab_dist)) { ## Add y axis labels + axis(2, at = seq(1, nrows), las = 1, labels = rev(ylabels), + cex.axis = axis.size, tck = 0, lwd = 0, line = 0.5 - ncols / 10) + } else { + axis(2, at = seq(1, nrows), las = 1, labels = rev(ylabels), + cex.axis = axis.size, tck = 0, lwd = 0, line = ylab_dist) + } + + } else { + + # Draw empty plot: + par(mar = c(4, 4, 1, 0), fig = c(0.1, 1 - legend.width, 0.1, 0.9)) + plot(1, 1, type = "n", xaxt = "n", yaxt = "n", ylim = c(0.5, nrows + 0.5), + xlim = c(-0.5, ncols - 1 + 0.5), ann = F, bty = "n") + + # Add axes titles: + label.size <- 1.2 # * (max(ncols, nrows) / 10) ^ 0.5 + mtext(side = 1, text = xtitle, line = line, cex = label.size, font = 3) + mtext(side = 2, text = ytitle, line = 3, cex = label.size, font = 3) + + # Add plot title: + if (is.null(title.color)) 
title.color <- "royalblue4" + mtext(side = 3, text = toptitle, cex = 1.5, #* (nrows / 10) ^ 0.7, + col = title.color) + + # Add axis labels: + axis.size <- 1 #(max(ncols, nrows) / 10) ^ 0.3 + if (is.null(xlabels)) xlabels = 1:ncols + if (is.null(ylabels)) ylabels = 1:nrows + + if(is.null(xlab_dist)){ ## Add x axis labels + axis(1, at = seq(0, ncols - 1), las = ifelse(xvert, 2, 1), labels = xlabels, + cex.axis = axis.size, tck = 0, lwd = 0, line = - 1 - (nrows / 10 - 1)) + } else { + axis(1, at = seq(0, ncols - 1), las = ifelse(xvert, 2, 1), labels = xlabels, + cex.axis = axis.size, tck = 0, lwd = 0, line = xlab_dist) + } + if(is.null(ylab_dist)){ ## Add y axis labels + axis(2, at = seq(1, nrows), las = 1, labels = rev(ylabels), + cex.axis = axis.size, tck = 0, lwd = 0, line = 0.5 - ncols / 10) + } else { + axis(2, at = seq(1, nrows), las = 1, labels = rev(ylabels), + cex.axis = axis.size, tck = 0, lwd = 0, line = ylab_dist) + } + + } + + # Create an array of colors instead of numbers (it starts all gray): + array.colors <- array("gray", c(nrows, ncols)) + for (int in n.cols:1) array.colors[var <= brks[int + 1]] <- cols[int] + + # fill with colors the cells in the figure: + for (p in 1:nrows) { + for (l in 0:(ncols - 1)) { + polygon(c(0.5 + l - 1, 0.5 + l - 1, 1.5 + l - 1, 1.5 + l - 1), + c(-0.5 + nrows + 1 - p, 0.5 + nrows + 1 - p, + 0.5 + nrows + 1 - p, -0.5 + nrows + 1 - p), + col = array.colors[p, 1 + l], border = "black") + } + } + + # Draw color legend: + if (legend) { + par(fig = c(1 - legend.width - 0.01, + 1 - legend.width + legend.width * min(1, 10 / ncols), + 0.3, 0.8), new = TRUE) + #legend.label.size <- (max(ncols, nrows) / 10) ^ 0.4 + ColorBar(brks = brks, cols = cols, vertical = TRUE, ...) + } + + # If the graphic was saved to file, close the connection with the device + if (!is.null(fileout)) dev.off() + invisible(list(brks = brks, cols = cols)) + +} diff --git a/R/PlotVsLTime.R b/R/PlotVsLTime.R index e3f7b7f758fc9a82f661fdbcae4927084e3b994e..8920438ca315a4be531eb9ba72584df8b33dfb1d 100644 --- a/R/PlotVsLTime.R +++ b/R/PlotVsLTime.R @@ -90,10 +90,12 @@ #' compROW = required_complete_row, #' limits = c(ceiling((runmean_months + 1) / 2), #' leadtimes_per_startdate - floor(runmean_months / 2))) +#' \donttest{ #'PlotVsLTime(corr, toptitle = "correlations", ytitle = "correlation", #' monini = 11, limits = c(-1, 2), listexp = c('CMIP5 IC3'), #' listobs = c('ERSST'), biglab = FALSE, hlines = c(-1, 0, 1), #' fileout = 'tos_cor.eps') +#' } #' #'@importFrom grDevices dev.cur dev.new dev.off #'@importFrom stats ts diff --git a/R/RMS.R b/R/RMS.R index bb3dd3890a800a3a0ec0f3c132a6b66998a0791a..c83b478256da2cb21359e330f2032caf15238507 100644 --- a/R/RMS.R +++ b/R/RMS.R @@ -78,10 +78,12 @@ #' compROW = required_complete_row, #' limits = c(ceiling((runmean_months + 1) / 2), #' leadtimes_per_startdate - floor(runmean_months / 2))) +#' \donttest{ #'PlotVsLTime(rms, toptitle = "Root Mean Square Error", ytitle = "K", #' monini = 11, limits = NULL, listexp = c('CMIP5 IC3'), #' listobs = c('ERSST'), biglab = FALSE, hlines = c(0), #' fileout = 'tos_rms.eps') +#' } #'# The following example uses veriApply combined with .RMS instead of RMS #' \dontrun{ #'require(easyVerification) diff --git a/R/RMSSS.R b/R/RMSSS.R index 16623b3262762ccab1673a296e57601d5d0ed7a0..0ffe36f687f684d3f5d32f3569d79a7a0cd9f866 100644 --- a/R/RMSSS.R +++ b/R/RMSSS.R @@ -56,10 +56,12 @@ #'rmsss_plot <- array(dim = c(dim(rmsss)[1:2], 4, dim(rmsss)[4])) #'rmsss_plot[, , 2, ] <- rmsss[, , 1, ] #'rmsss_plot[, , 4, ] <- rmsss[, , 
2, ] +#' \donttest{ #'PlotVsLTime(rmsss_plot, toptitle = "Root Mean Square Skill Score", ytitle = "", #' monini = 11, limits = c(-1, 1.3), listexp = c('CMIP5 IC3'), #' listobs = c('ERSST'), biglab = FALSE, hlines = c(-1, 0, 1), #' fileout = 'tos_rmsss.eps') +#' } #'# The following example uses veriApply combined with .RMSSS instead of RMSSS #' \dontrun{ #'require(easyVerification) diff --git a/R/RatioSDRMS.R b/R/RatioSDRMS.R index ec16bc3b43483ef891c08c98181360047b5ea33e..62fe3ebcfd1934c9d67d8e31ebdc2af0388d0ca1 100644 --- a/R/RatioSDRMS.R +++ b/R/RatioSDRMS.R @@ -50,10 +50,12 @@ #'rsdrms_plot <- array(dim = c(dim(rsdrms)[1:2], 4, dim(rsdrms)[4])) #'rsdrms_plot[, , 2, ] <- rsdrms[, , 1, ] #'rsdrms_plot[, , 4, ] <- rsdrms[, , 2, ] +#' \donttest{ #'PlotVsLTime(rsdrms_plot, toptitle = "Ratio ensemble spread / RMSE", ytitle = "", #' monini = 11, limits = c(-1, 1.3), listexp = c('CMIP5 IC3'), #' listobs = c('ERSST'), biglab = FALSE, siglev = TRUE, #' fileout = 'tos_rsdrms.eps') +#' } #' #'# The following example uses veriApply combined with .RatioSDRMS instead of RatioSDRMS #' \dontrun{ diff --git a/R/Season.R b/R/Season.R index d9f95a51cd33a77d4a2f29079a21ca3a5d2f49d3..baa19bbc016bb4d7baa0396769b3f8ae4f15da67 100644 --- a/R/Season.R +++ b/R/Season.R @@ -32,9 +32,11 @@ #' mean_start_month, mean_stop_month) #'season_means_obs <- Season(sampleData$obs, leadtimes_dimension, initial_month, #' mean_start_month, mean_stop_month) +#' \donttest{ #'PlotAno(season_means_mod, season_means_obs, startDates, #' toptitle = paste('winter (DJF) temperatures'), ytitle = c('K'), #' legends = 'ERSST', biglab = FALSE, fileout = 'tos_season_means.eps') +#' } #'@export Season <- function(var, posdim = 4, monini, moninf, monsup) { while (monsup < moninf) { diff --git a/R/Smoothing.R b/R/Smoothing.R index 65aac018e861d6956ea012cfc40f341907ac18c4..baf7edd24203bdf644e7a2520a22684015787f27 100644 --- a/R/Smoothing.R +++ b/R/Smoothing.R @@ -28,9 +28,11 @@ #'dim_to_smooth <- 4 # Smooth along lead-times #'smooth_ano_exp <- Smoothing(ano_exp, runmean_months, dim_to_smooth) #'smooth_ano_obs <- Smoothing(ano_obs, runmean_months, dim_to_smooth) +#' \donttest{ #'PlotAno(smooth_ano_exp, smooth_ano_obs, startDates, #' toptitle = "Smoothed Mediterranean mean SST", ytitle = "K", #' fileout = "tos_smoothed_ano.eps") +#' } #'@import plyr #'@export Smoothing <- function(var, runmeanlen = 12, numdimt = 4) { diff --git a/R/Spectrum.R b/R/Spectrum.R index bb9b2ca6623f3e3b48dbd243c65c6ca268c108e4..48f3823b24f3ba6f7a37e9be9ce96d2986ec415b 100644 --- a/R/Spectrum.R +++ b/R/Spectrum.R @@ -31,8 +31,10 @@ #' } #' } #'} +#' \donttest{ #'PlotAno(InsertDim(ensmod, 2, 1), sdates = startDates, fileout = #' 'filtered_ensemble_mean.eps') +#' } #' #'@importFrom stats spectrum cor rnorm sd quantile #'@export diff --git a/R/Spread.R b/R/Spread.R index 5a8e01896237b882672b8ab69aa76db64b740565..f69aa1b848eb591929a29c7bbcf1a0ffd755d665 100644 --- a/R/Spread.R +++ b/R/Spread.R @@ -58,6 +58,7 @@ #'smooth_ano_exp_m_sub <- smooth_ano_exp - InsertDim(Mean1Dim(smooth_ano_exp, 2, #' narm = TRUE), 2, dim(smooth_ano_exp)[2]) #'spread <- Spread(smooth_ano_exp_m_sub, c(2, 3)) +#' \donttest{ #'PlotVsLTime(spread$iqr, #' toptitle = "Inter-Quartile Range between ensemble members", #' ytitle = "K", monini = 11, limits = NULL, @@ -75,6 +76,7 @@ #' ytitle = "K", monini = 11, limits = NULL, #' listexp = c('CMIP5 IC3'), listobs = c('ERSST'), biglab = FALSE, #' hlines = c(0), fileout = 'tos_mad.eps') +#' } #' #'@importFrom stats IQR sd mad runif quantile #'@export diff --git 
a/R/ToyModel.R b/R/ToyModel.R index f0c9a22dfeee70e38ba98424cb45e3ff5294bb70..4b06facf23208e31dfbe24aaf9b121e1d181fb9a 100644 --- a/R/ToyModel.R +++ b/R/ToyModel.R @@ -97,9 +97,11 @@ #' #'toyforecast <- ToyModel(alpha = a, beta = b, gamma = g, nmemb = nm, #' obsini = sampleData$obs, nstartd = 5, nleadt = 60) +#' \donttest{ #'PlotAno(toyforecast$mod, toyforecast$obs, startDates, #' toptitle = c("Synthetic decadal temperature prediction"), -#' fileout = "ex_toymodel.eps") +#' fileout = "ex_toymodel.eps") +#' } #' #'@importFrom stats rnorm #'@export diff --git a/R/Trend.R b/R/Trend.R index 15ec7ebe91338be57dbc456994e6942ffbfc8ac0..ba35b4d7d7f4b2038b0fc9df783aadcd8a28f327 100644 --- a/R/Trend.R +++ b/R/Trend.R @@ -8,30 +8,36 @@ #'.Trend provides the same functionality but taking a matrix ensemble members #'as input (exp). #' -#'@param var Array of any number of dimensions up to 10. -#'@param exp M by N matrix of M forecasts from N ensemble members. -#'@param interval Number of months/years between 2 points along posTR -#' dimension.\cr -#' Default = 1.\cr -#' The trend would be provided in number of units per month or year. -#'@param siglev Confidence level for the computation of confidence interval. -#' 0.95 by default. -#'@param conf Whether to compute the confidence levels or not. TRUE by default. -#'@param posTR Position along which to compute the trend. +#'@param var An array of any number of dimensions up to 10. +#'@param posTR An integer indicating the position along which to compute the +#' trend. +#'@param interval A number of months/years between 2 points along posTR +#' dimension. Set 1 as default. +#'@param siglev A numeric value indicating the confidence level for the +#' computation of confidence interval. Set 0.95 as default. +#'@param conf A logical value indicating whether to compute the confidence +#' levels or not. Set TRUE as default. +#'@param exp An M by N matrix representing M forecasts from N ensemble members. #' #'@return #'\item{$trend}{ #' The intercept and slope coefficients for the least squares fitting of the #' trend. +#' An array with same dimensions as parameter 'var' except along the posTR +#' dimension, which is replaced by a length 4 (or length 2 if conf = FALSE) +#' dimension, corresponding to the lower limit of the confidence interval +#' (only present if conf = TRUE), the slope, the upper limit of the confidence +#' interval (only present if conf = TRUE), and the intercept. #'} -#'\item{$conf.int}{ -#' Corresponding to the limits of the \code{siglev}\% confidence interval -#' (only present if \code{conf = TRUE}) for the slope coefficient. -#'} #'\item{$detrended}{ #' Same dimensions as var with linearly detrended var along the posTR #' dimension. #'} +#'Only in .Trend: +#'\item{$conf.int}{ +#' Corresponding to the limits of the \code{siglev}\% confidence interval +#' (only present if \code{conf = TRUE}) for the slope coefficient. 
+#'} #' #'@keywords datagen #'@author History:\cr @@ -43,6 +49,7 @@ #'example(Load) #'months_between_startdates <- 60 #'trend <- Trend(sampleData$obs, 3, months_between_startdates) +#' \donttest{ #'PlotVsLTime(trend$trend, toptitle = "trend", ytitle = "K / (5 year)", #' monini = 11, limits = c(-1,1), listexp = c('CMIP5 IC3'), #' listobs = c('ERSST'), biglab = FALSE, hlines = 0, @@ -50,6 +57,7 @@ #'PlotAno(trend$detrended, NULL, startDates, #' toptitle = 'detrended anomalies (along the startdates)', ytitle = 'K', #' legends = 'ERSST', biglab = FALSE, fileout = 'tos_detrended_obs.eps') +#' } #' #'@rdname Trend #'@export diff --git a/inst/config/BSC.conf b/inst/config/BSC.conf index d9d617eb1a4f505013fd9af4bf7b998e28176015..aca91dbb44fbb0fc2212fbcecfa06ef3cd2a0f0a 100644 --- a/inst/config/BSC.conf +++ b/inst/config/BSC.conf @@ -22,7 +22,7 @@ DEFAULT_DIM_NAME_MEMBERS = ensemble # Helper variables EXP_FILE = $VAR_NAME$_*$START_DATE$*.nc OBS_FILE = $VAR_NAME$_$YEAR$$MONTH$*.nc -RECON_LIST = (20thcr_v2|copernicus012|ds083.2|ecco|era40|era40-erainterim|eraclim|erainterim|erainterim-lores|eraland|gecco_v2|gfs|glorys2_v1|glorys2_v3|glosea5|ishii-kimoto|jra55|merra|merra_v2|ncep-reanalysis|oaflux|oi_v2|orap5|piomas|seaice-lim2|sst|tropflux|nemovar_system4) +RECON_LIST = (20thcr_v2|copernicus012|ds083.2|ecco|era40|era40-erainterim|eraclim|erainterim|erainterim-lores|eraland|gecco_v2|gfs|glorys2_v1|glorys2_v3|glosea5|ishii-kimoto|jra55|merra|merra_v2|ncep-reanalysis|oaflux|oi_v2|orap5|piomas|seaice-lim2|sst|tropflux|nemovar_system4|era5) # Defaults for extra variables from Load DEFAULT_FILE_GRID = diff --git a/man/ACC.Rd b/man/ACC.Rd index 17e2d7cf0369e6a2f3a6b1f6a4bf98e0f566e415..e3db377ff89f82c9d4d3cab82033aa487320b6b1 100644 --- a/man/ACC.Rd +++ b/man/ACC.Rd @@ -93,7 +93,9 @@ clim <- Clim(sampleData$mod, sampleData$obs) ano_exp <- Ano(sampleData$mod, clim$clim_exp) ano_obs <- Ano(sampleData$obs, clim$clim_obs) acc <- ACC(Mean1Dim(ano_exp, 2), Mean1Dim(ano_obs, 2)) + \donttest{ PlotACC(acc$ACC, startDates) + } } \author{ History:\cr diff --git a/man/AnimateMap.Rd b/man/AnimateMap.Rd index 85eb04a9609b368928d8ca85d4db9ce1870eb9be..96074b01680e82364e133050b4853fbf54e374ee 100644 --- a/man/AnimateMap.Rd +++ b/man/AnimateMap.Rd @@ -179,11 +179,13 @@ AnimateMap(Mean1Dim(season_means_mod, 2)[1, 1, , , ], sampleData$lon, dim_to_mean <- 2 # Mean along members rms <- RMS(Mean1Dim(season_means_mod, dim_to_mean), Mean1Dim(season_means_obs, dim_to_mean)) + \donttest{ AnimateMap(rms, sampleData$lon, sampleData$lat, toptitle = "RMSE decadal prediction", sizetit = 1, units = "degree", monini = 1, freq = 1, msk95lev = FALSE, brks = seq(0, 0.8, 0.08), intlon = 10, intlat = 10, filled.continents = TRUE, fileout = 'rmse_dec.gif') + } } \author{ History:\cr diff --git a/man/Ano.Rd b/man/Ano.Rd index a2deb37384718231f72d51fdbe0c1602b846903d..6143be88a1d78f87901e0dc6c7774f26dd4e4055 100644 --- a/man/Ano.Rd +++ b/man/Ano.Rd @@ -37,9 +37,11 @@ runmean_nb_months <- 12 dim_to_smooth <- 4 # Smooth along lead-times smooth_ano_exp <- Smoothing(ano_exp, runmean_nb_months, dim_to_smooth) smooth_ano_obs <- Smoothing(ano_obs, runmean_nb_months, dim_to_smooth) + \donttest{ PlotAno(smooth_ano_exp, smooth_ano_obs, startDates, toptitle = paste('smoothed anomalies'), ytitle = c('K', 'K', 'K'), legends = 'ERSST', biglab = FALSE, fileout = 'tos_ano.eps') + } } \author{ History:\cr diff --git a/man/Ano_CrossValid.Rd b/man/Ano_CrossValid.Rd index bb2be436f59096d452240bae72da7698805de64c..85d1badbe8213a7c3bd5f275d09f36a3d254ce27 100644 --- 
a/man/Ano_CrossValid.Rd +++ b/man/Ano_CrossValid.Rd @@ -31,9 +31,11 @@ with a cross-validation technique and a per-pair method. # Load sample data as in Load() example: example(Load) anomalies <- Ano_CrossValid(sampleData$mod, sampleData$obs) + \donttest{ PlotAno(anomalies$ano_exp, anomalies$ano_obs, startDates, toptitle = paste('anomalies'), ytitle = c('K', 'K', 'K'), legends = 'ERSST', biglab = FALSE, fileout = 'tos_ano_crossvalid.eps') + } } \author{ History:\cr diff --git a/man/Clim.Rd b/man/Clim.Rd index 3b11fe25276104183b94c85f00ca06009728dc63..1ef5e3dabb76b6bf4fc7b5b3956351de2ef2f265 100644 --- a/man/Clim.Rd +++ b/man/Clim.Rd @@ -50,10 +50,12 @@ all the data (model and obs) are excluded when computing the climatologies. # Load sample data as in Load() example: example(Load) clim <- Clim(sampleData$mod, sampleData$obs) + \donttest{ PlotClim(clim$clim_exp, clim$clim_obs, toptitle = paste('sea surface temperature climatologies'), ytitle = 'K', monini = 11, listexp = c('CMIP5 IC3'), listobs = c('ERSST'), biglab = FALSE, fileout = 'tos_clim.eps') + } } \author{ History:\cr diff --git a/man/Consist_Trend.Rd b/man/Consist_Trend.Rd index a2911c6407fdcb730d5b6a07bb88cfb2b6b3f94f..d067dcd7069e1fe2ee1b28bb612aba1668761f3c 100644 --- a/man/Consist_Trend.Rd +++ b/man/Consist_Trend.Rd @@ -63,6 +63,7 @@ trend <- Consist_Trend(Mean1Dim(smooth_ano_exp, dim_to_mean), Mean1Dim(smooth_ano_obs, dim_to_mean), years_between_startdates) + \donttest{ PlotVsLTime(trend$trend, toptitle = "trend", ytitle = "K/(5 years)", monini = 11, limits = c(-0.8, 0.8), listexp = c('CMIP5 IC3'), listobs = c('ERSST'), biglab = FALSE, hlines = c(0), @@ -70,6 +71,7 @@ PlotVsLTime(trend$trend, toptitle = "trend", ytitle = "K/(5 years)", PlotAno(InsertDim(trend$detrendedmod,2,1), InsertDim(trend$detrendedobs,2,1), startDates, "Detrended tos anomalies", ytitle = 'K', legends = 'ERSST', biglab = FALSE, fileout = 'tos_detrended_ano.eps') + } } \author{ diff --git a/man/Corr.Rd b/man/Corr.Rd index fcaf149adf0f3193beb8373462f48e42f09ef64f..e311813de042355a4499b46d3dd6d9d6fb11c1f9 100644 --- a/man/Corr.Rd +++ b/man/Corr.Rd @@ -104,10 +104,12 @@ corr <- Corr(Mean1Dim(smooth_ano_exp, dim_to_mean), compROW = required_complete_row, limits = c(ceiling((runmean_months + 1) / 2), leadtimes_per_startdate - floor(runmean_months / 2))) + \donttest{ PlotVsLTime(corr, toptitle = "correlations", ytitle = "correlation", monini = 11, limits = c(-1, 2), listexp = c('CMIP5 IC3'), listobs = c('ERSST'), biglab = FALSE, hlines = c(-1, 0, 1), fileout = 'tos_cor.eps') + } # The following example uses veriApply combined with .Corr instead of Corr \dontrun{ diff --git a/man/Filter.Rd b/man/Filter.Rd index 3ee7df89e18fde582ed2f6e951736a60e5169f0b..3e40f105c3e557f1fcb68516e019e1845d705c1a 100644 --- a/man/Filter.Rd +++ b/man/Filter.Rd @@ -36,8 +36,10 @@ for (jstartdate in 1:3) { } } } + \donttest{ PlotAno(InsertDim(ensmod, 2, 1), sdates = startDates, fileout = 'filtered_ensemble_mean.eps') + } } \author{ diff --git a/man/Histo2Hindcast.Rd b/man/Histo2Hindcast.Rd index af494f1a481e798005cc3d33719739c6a799247d..7000628a8ca108cd47e8fee055300871ec6ea695 100644 --- a/man/Histo2Hindcast.Rd +++ b/man/Histo2Hindcast.Rd @@ -66,9 +66,11 @@ experimental_data <- Histo2Hindcast(sampleData$mod, startDates[1], start_dates_out, leadtimes_per_startdate) observational_data <- Histo2Hindcast(sampleData$obs, startDates[1], start_dates_out, leadtimes_per_startdate) + \donttest{ PlotAno(experimental_data, observational_data, start_dates_out, toptitle = paste('anomalies reorganized 
into shorter chunks'), ytitle = 'K', fileout='tos_histo2hindcast.eps') + } } \author{ diff --git a/man/InsertDim.Rd b/man/InsertDim.Rd index 4cdc5377dc69001eed6ff3cfe93192d5086ecb08..734f49676feb09e6bcbd4dbc7e22e77780110dea 100644 --- a/man/InsertDim.Rd +++ b/man/InsertDim.Rd @@ -2,29 +2,29 @@ % Please edit documentation in R/InsertDim.R \name{InsertDim} \alias{InsertDim} -\title{Adds A Dimension To An Array} +\title{Add a dimension to an array} \usage{ -InsertDim(var, posdim, lendim) +InsertDim(data, posdim, lendim, name = "new", ncores = NULL) } \arguments{ -\item{var}{Matrix to which a dimension should be added.} +\item{data}{An array to which the additional dimension to be added.} -\item{posdim}{Position of the new dimension.} +\item{posdim}{An integer indicating the position of the new dimension.} -\item{lendim}{Length of the new dimension.} +\item{lendim}{An integer indicating the length of the new dimension.} + +\item{name}{A character string indicating the name for the new dimension.} } \value{ -Matrix with the added dimension. +An array as parameter 'data' but with the added dimension. } \description{ -Inserts an extra dimension into an array at position 'posdim' with length -'lendim' and which correspond to 'lendim' repetitions of the 'var' array. +Insert an extra dimension into an array at position 'posdim' with length +'lendim'. The array repeats along the new dimension. } \examples{ a <- array(rnorm(15), dim = c(3, 1, 5, 1)) -print(dim(a)) -print(dim(a[, , , ])) -print(dim(InsertDim(InsertDim(a[, , , ], 2, 1), 4, 1))) +res <- InsertDim(InsertDim(a[, , , ], 2, 1), 4, 1) } \author{ @@ -32,6 +32,7 @@ History:\cr 0.1 - 2011-03 (V. Guemas, \email{virginie.guemas@ic3.cat}) - Original code\cr 1.0 - 2013-09 (N. Manubens, \email{nicolau.manubens@ic3.cat}) - Formatting to R CRAN\cr 1.1 - 2015-03 (N. Manubens, \email{nicolau.manubens@ic3.cat}) - Improvements +3.0 - 2019-12 (A. Ho, \email{an.ho@bsc.es}) - Modify with multiApply } \keyword{datagen} diff --git a/man/NAO.Rd b/man/NAO.Rd index 2767f7e857f6b0806d3559c41484a94d336265e8..24d58830e7363fee6ed12b2e9aa70a63eb6dd475 100644 --- a/man/NAO.Rd +++ b/man/NAO.Rd @@ -107,8 +107,10 @@ ano <- Ano_CrossValid(sampleData$mod, sampleData$obs) # example data is not the full NAO area: NAO() will raise a warning. nao <- NAO(ano$ano_exp, ano$ano_obs, sampleData$lon, sampleData$lat) # Finally plot the NAO index + \donttest{ PlotBoxWhisker(nao$NAO_exp, nao$NAO_obs, "NAO index, DJF", "NAO index (PC1) TOS", monini = 12, yearini = 1985, freq = 1, "Exp. A", "Obs. 
X") + } } \author{ diff --git a/man/Plot2VarsVsLTime.Rd b/man/Plot2VarsVsLTime.Rd index e79fb34edfae731da6e1409f6365726b6db3f01b..8ba44e4ee8838e58042d9bb6d0fa39cc306433e1 100644 --- a/man/Plot2VarsVsLTime.Rd +++ b/man/Plot2VarsVsLTime.Rd @@ -106,10 +106,12 @@ rms <- RMS(Mean1Dim(smooth_ano_exp, dim_to_mean), smooth_ano_exp_m_sub <- smooth_ano_exp - InsertDim(Mean1Dim(smooth_ano_exp, 2, narm = TRUE), 2, dim(smooth_ano_exp)[2]) spread <- Spread(smooth_ano_exp_m_sub, c(2, 3)) + \donttest{ Plot2VarsVsLTime(InsertDim(rms[, , , ], 1, 1), spread$sd, toptitle = 'RMSE and spread', monini = 11, freq = 12, listexp = c('CMIP5 IC3'), listvar = c('RMSE', 'spread'), fileout = 'plot2vars.eps') + } } \author{ diff --git a/man/PlotACC.Rd b/man/PlotACC.Rd index 3427f652e8d76ff3a93ec9295f2cf47497237092..fc66200a0388b94d810c868be5335b9dcf1c18fe 100644 --- a/man/PlotACC.Rd +++ b/man/PlotACC.Rd @@ -111,8 +111,10 @@ ano_exp <- Ano(sampleData$mod, clim$clim_exp) ano_obs <- Ano(sampleData$obs, clim$clim_obs) acc <- ACC(Mean1Dim(sampleData$mod, 2), Mean1Dim(sampleData$obs, 2)) + \donttest{ PlotACC(acc$ACC, startDates, toptitle = "Anomaly Correlation Coefficient") + } } \author{ History:\cr diff --git a/man/PlotAno.Rd b/man/PlotAno.Rd index 8a42d9fe53aab7e156a2125b56b50cc58a8f5945..dd05931046ca62108d5b8916ba2c5f6f13e92054 100644 --- a/man/PlotAno.Rd +++ b/man/PlotAno.Rd @@ -96,9 +96,11 @@ runmean_nb_months <- 12 dim_to_smooth <- 4 # Smooth along lead-times smooth_ano_exp <- Smoothing(ano_exp, runmean_nb_months, dim_to_smooth) smooth_ano_obs <- Smoothing(ano_obs, runmean_nb_months, dim_to_smooth) + \donttest{ PlotAno(smooth_ano_exp, smooth_ano_obs, startDates, toptitle = paste('smoothed anomalies'), ytitle = c('K', 'K', 'K'), legends = 'ERSST', biglab = FALSE, fileout = 'tos_ano.eps') + } } \author{ diff --git a/man/PlotBoxWhisker.Rd b/man/PlotBoxWhisker.Rd index 31b7d4d42fdcf4b22f2c264ab33dc2f334a7a8ce..a536686be6087cfb54b8c967ef4bfbbe80a37e3b 100644 --- a/man/PlotBoxWhisker.Rd +++ b/man/PlotBoxWhisker.Rd @@ -114,8 +114,10 @@ attr(sampleData$lat, 'last_lat') <- 80 ano <- Ano_CrossValid(sampleData$mod, sampleData$obs) nao <- NAO(ano$ano_exp, ano$ano_obs, sampleData$lon, sampleData$lat) # Finally plot the nao index + \donttest{ PlotBoxWhisker(nao$NAO_exp, nao$NAO_obs, "NAO index, DJF", "NAO index (PC1) TOS", monini = 12, yearini = 1985, freq = 1, "Exp. A", "Obs. 
X") + } } \author{ diff --git a/man/PlotClim.Rd b/man/PlotClim.Rd index 38c916cbe20c6bac672337c0ff775bf93bc62bd8..7ee001ee198eb506524cde9c967557dc3ec9ebc1 100644 --- a/man/PlotClim.Rd +++ b/man/PlotClim.Rd @@ -74,9 +74,11 @@ c(nobs, nmemb, nltime) or c(nobs, nltime) for the observational data # Load sample data as in Load() example: example(Load) clim <- Clim(sampleData$mod, sampleData$obs) + \donttest{ PlotClim(clim$clim_exp, clim$clim_obs, toptitle = paste('climatologies'), ytitle = 'K', monini = 11, listexp = c('CMIP5 IC3'), listobs = c('ERSST'), biglab = FALSE, fileout = 'tos_clim.eps') + } } \author{ diff --git a/man/PlotMatrix.Rd b/man/PlotMatrix.Rd new file mode 100644 index 0000000000000000000000000000000000000000..70c1211e9dd333d8dc947671add5f6ef55c4f9cc --- /dev/null +++ b/man/PlotMatrix.Rd @@ -0,0 +1,96 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/PlotMatrix.R +\name{PlotMatrix} +\alias{PlotMatrix} +\title{Function to convert any numerical table to a grid of coloured squares.} +\usage{ +PlotMatrix(var, brks = NULL, cols = NULL, toptitle = NULL, + title.color = "royalblue4", xtitle = NULL, ytitle = NULL, + xlabels = NULL, xvert = FALSE, ylabels = NULL, line = 3, + figure.width = 1, legend = TRUE, legend.width = 0.15, + xlab_dist = NULL, ylab_dist = NULL, fileout = NULL, size_units = "px", + res = 100, ...) +} +\arguments{ +\item{var}{A numerical matrix containing the values to be displayed in a +colored image.} + +\item{brks}{A vector of the color bar intervals. The length must be one more +than the parameter 'cols'. Use ColorBar() to generate default values.} + +\item{cols}{A vector of valid color identifiers for color bar. The length +must be one less than the parameter 'brks'. Use ColorBar() to generate +default values.} + +\item{toptitle}{A string of the title of the grid. Set NULL as default.} + +\item{title.color}{A string of valid color identifier to decide the title +color. Set "royalblue4" as default.} + +\item{xtitle}{A string of title of the x-axis. Set NULL as default.} + +\item{ytitle}{A string of title of the y-axis. Set NULL as default.} + +\item{xlabels}{A vector of labels of the x-axis. The length must be +length of the column of parameter 'var'. Set the sequence from 1 to the +length of the column of parameter 'var' as default.} + +\item{xvert}{A logical value to decide whether to place x-axis labels +vertically. Set FALSE as default, which keeps the labels horizontally.} + +\item{ylabels}{A vector of labels of the y-axis The length must be +length of the row of parameter 'var'. Set the sequence from 1 to the +length of the row of parameter 'var' as default.} + +\item{line}{An integer specifying the distance between the title of the +x-axis and the x-axis. Set 3 as default. Adjust if the x-axis labels +are long.} + +\item{figure.width}{A positive number as a ratio adjusting the width of the +grids. Set 1 as default.} + +\item{legend}{A logical value to decide to draw the grid color legend or not. +Set TRUE as default.} + +\item{legend.width}{A number between 0 and 0.5 to adjust the legend width. +Set 0.15 as default.} + +\item{xlab_dist}{A number specifying the distance between the x labels and +the x axis. If not specified, it equals to -1 - (nrow(var) / 10 - 1).} + +\item{ylab_dist}{A number specifying the distance between the y labels and +the y axis. If not specified, it equals to 0.5 - ncol(var) / 10.} + +\item{fileout}{A string of full directory path and file name indicating where +to save the plot. 
If not specified (default), a graphics device will pop up.} + +\item{size_units}{A string indicating the units of the size of the device +(file or window) to plot in. Set 'px' as default. See ?Devices and the +creator function of the corresponding device.} + +\item{res}{A positive number indicating resolution of the device (file or window) +to plot in. See ?Devices and the creator function of the corresponding device.} + +\item{...}{The additional parameters to be passed to function ColorBar() in +s2dverification for color legend creation.} +} +\value{ +A figure in popup window by default, or saved to the specified path. +} +\description{ +This function converts a numerical data matrix into a coloured +grid. It is useful for a slide or article to present tabular results as +colors instead of numbers. +} +\examples{ +#Example with random data +PlotMatrix(var = matrix(rnorm(n = 120, mean = 0.3), 10, 12), + cols = c('white','#fef0d9','#fdd49e','#fdbb84','#fc8d59', + '#e34a33','#b30000', '#7f0000'), + brks = c(-1, 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 1), + toptitle = "Mean Absolute Error", + xtitle = "Forecast time (month)", ytitle = "Start date", + xlabels = c("Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", + "Aug", "Sep", "Oct", "Nov", "Dec")) +} + diff --git a/man/PlotVsLTime.Rd b/man/PlotVsLTime.Rd index e908e57b2ce6c8bd8b0810d266316ca9f3e2491b..2c71e9fac1915afa9d9fcba50006d360da2a44c7 100644 --- a/man/PlotVsLTime.Rd +++ b/man/PlotVsLTime.Rd @@ -117,10 +117,12 @@ corr <- Corr(Mean1Dim(smooth_ano_exp, dim_to_mean), compROW = required_complete_row, limits = c(ceiling((runmean_months + 1) / 2), leadtimes_per_startdate - floor(runmean_months / 2))) + \donttest{ PlotVsLTime(corr, toptitle = "correlations", ytitle = "correlation", monini = 11, limits = c(-1, 2), listexp = c('CMIP5 IC3'), listobs = c('ERSST'), biglab = FALSE, hlines = c(-1, 0, 1), fileout = 'tos_cor.eps') + } } \author{ diff --git a/man/RMS.Rd b/man/RMS.Rd index cf26c75cb7c8dbfb48b5182c73b521bc0f61d916..715f84d10e82bf233cf2e6016668006a965067ea 100644 --- a/man/RMS.Rd +++ b/man/RMS.Rd @@ -94,10 +94,12 @@ rms <- RMS(Mean1Dim(smooth_ano_exp, dim_to_mean), compROW = required_complete_row, limits = c(ceiling((runmean_months + 1) / 2), leadtimes_per_startdate - floor(runmean_months / 2))) + \donttest{ PlotVsLTime(rms, toptitle = "Root Mean Square Error", ytitle = "K", monini = 11, limits = NULL, listexp = c('CMIP5 IC3'), listobs = c('ERSST'), biglab = FALSE, hlines = c(0), fileout = 'tos_rms.eps') + } # The following example uses veriApply combined with .RMS instead of RMS \dontrun{ require(easyVerification) diff --git a/man/RMSSS.Rd b/man/RMSSS.Rd index 234170b86f376b2e4bc993324b930e82e7d172d5..f602702bf715d5adca99971942cb90a4facd47b3 100644 --- a/man/RMSSS.Rd +++ b/man/RMSSS.Rd @@ -70,10 +70,12 @@ rmsss <- RMSSS(Mean1Dim(ano_exp, 2), Mean1Dim(ano_obs, 2)) rmsss_plot <- array(dim = c(dim(rmsss)[1:2], 4, dim(rmsss)[4])) rmsss_plot[, , 2, ] <- rmsss[, , 1, ] rmsss_plot[, , 4, ] <- rmsss[, , 2, ] + \donttest{ PlotVsLTime(rmsss_plot, toptitle = "Root Mean Square Skill Score", ytitle = "", monini = 11, limits = c(-1, 1.3), listexp = c('CMIP5 IC3'), listobs = c('ERSST'), biglab = FALSE, hlines = c(-1, 0, 1), fileout = 'tos_rmsss.eps') + } # The following example uses veriApply combined with .RMSSS instead of RMSSS \dontrun{ require(easyVerification) diff --git a/man/RatioSDRMS.Rd b/man/RatioSDRMS.Rd index bb5848a5c71ca3f04d93a0ae66d7d4114072d47c..0948474075c4bc25f572dafd0ab3180373ba0d6a 100644 --- a/man/RatioSDRMS.Rd +++ b/man/RatioSDRMS.Rd @@ 
-61,10 +61,12 @@ rsdrms <- RatioSDRMS(sampleData$mod, sampleData$obs) rsdrms_plot <- array(dim = c(dim(rsdrms)[1:2], 4, dim(rsdrms)[4])) rsdrms_plot[, , 2, ] <- rsdrms[, , 1, ] rsdrms_plot[, , 4, ] <- rsdrms[, , 2, ] + \donttest{ PlotVsLTime(rsdrms_plot, toptitle = "Ratio ensemble spread / RMSE", ytitle = "", monini = 11, limits = c(-1, 1.3), listexp = c('CMIP5 IC3'), listobs = c('ERSST'), biglab = FALSE, siglev = TRUE, fileout = 'tos_rsdrms.eps') + } # The following example uses veriApply combined with .RatioSDRMS instead of RatioSDRMS \dontrun{ diff --git a/man/Season.Rd b/man/Season.Rd index 08487256eea4cb2072078adbc9cdaf1e7b1158fd..cc97941c4dcf956e61c2e3c6442275949ec87ae0 100644 --- a/man/Season.Rd +++ b/man/Season.Rd @@ -42,9 +42,11 @@ season_means_mod <- Season(sampleData$mod, leadtimes_dimension, initial_month, mean_start_month, mean_stop_month) season_means_obs <- Season(sampleData$obs, leadtimes_dimension, initial_month, mean_start_month, mean_stop_month) + \donttest{ PlotAno(season_means_mod, season_means_obs, startDates, toptitle = paste('winter (DJF) temperatures'), ytitle = c('K'), legends = 'ERSST', biglab = FALSE, fileout = 'tos_season_means.eps') + } } \author{ History:\cr diff --git a/man/Smoothing.Rd b/man/Smoothing.Rd index 77a23cc996ee0c0f5f6ad8469da529f119c4af05..4fa9c596f8ff9cb7272922b85b67a1dc1966d3da 100644 --- a/man/Smoothing.Rd +++ b/man/Smoothing.Rd @@ -32,9 +32,11 @@ runmean_months <- 12 dim_to_smooth <- 4 # Smooth along lead-times smooth_ano_exp <- Smoothing(ano_exp, runmean_months, dim_to_smooth) smooth_ano_obs <- Smoothing(ano_obs, runmean_months, dim_to_smooth) + \donttest{ PlotAno(smooth_ano_exp, smooth_ano_obs, startDates, toptitle = "Smoothed Mediterranean mean SST", ytitle = "K", fileout = "tos_smoothed_ano.eps") + } } \author{ History:\cr diff --git a/man/Spectrum.Rd b/man/Spectrum.Rd index cd2dad7a269fb4e395bd199946c0647bb04ef7e0..de5a2e5efa5acf10a5f6b15174269247778cf02d 100644 --- a/man/Spectrum.Rd +++ b/man/Spectrum.Rd @@ -36,8 +36,10 @@ for (jstartdate in 1:3) { } } } + \donttest{ PlotAno(InsertDim(ensmod, 2, 1), sdates = startDates, fileout = 'filtered_ensemble_mean.eps') + } } \author{ diff --git a/man/Spread.Rd b/man/Spread.Rd index bc14fa7a0d58fcc04859105b48e453c73b5d4c6a..f84cecf3e1219702108a872ed42021c05fa072ff 100644 --- a/man/Spread.Rd +++ b/man/Spread.Rd @@ -66,6 +66,7 @@ smooth_ano_exp <- Smoothing(ano_exp, runmean_months, dim_to_smooth) smooth_ano_exp_m_sub <- smooth_ano_exp - InsertDim(Mean1Dim(smooth_ano_exp, 2, narm = TRUE), 2, dim(smooth_ano_exp)[2]) spread <- Spread(smooth_ano_exp_m_sub, c(2, 3)) + \donttest{ PlotVsLTime(spread$iqr, toptitle = "Inter-Quartile Range between ensemble members", ytitle = "K", monini = 11, limits = NULL, @@ -83,6 +84,7 @@ PlotVsLTime(spread$mad, toptitle = "Median Absolute Deviation of the members", ytitle = "K", monini = 11, limits = NULL, listexp = c('CMIP5 IC3'), listobs = c('ERSST'), biglab = FALSE, hlines = c(0), fileout = 'tos_mad.eps') + } } \author{ diff --git a/man/ToyModel.Rd b/man/ToyModel.Rd index da07596095086e4ef17463aa8c33b44902ea3e6a..ca47b4498808f86c340cfb70b6bdb0670560a0fa 100644 --- a/man/ToyModel.Rd +++ b/man/ToyModel.Rd @@ -112,9 +112,11 @@ nm <- 10 toyforecast <- ToyModel(alpha = a, beta = b, gamma = g, nmemb = nm, obsini = sampleData$obs, nstartd = 5, nleadt = 60) + \donttest{ PlotAno(toyforecast$mod, toyforecast$obs, startDates, toptitle = c("Synthetic decadal temperature prediction"), - fileout = "ex_toymodel.eps") + fileout = "ex_toymodel.eps") + } } \author{ diff --git 
a/man/Trend.Rd b/man/Trend.Rd index da1fe9560cb63ebde5aceea25cde13a3db4b98b8..3b7f7bfdd541cac1246494570807500ac8718caa 100644 --- a/man/Trend.Rd +++ b/man/Trend.Rd @@ -10,35 +10,41 @@ Trend(var, posTR = 2, interval = 1, siglev = 0.95, conf = TRUE) .Trend(exp, interval = 1, siglev = 0.95, conf = TRUE) } \arguments{ -\item{var}{Array of any number of dimensions up to 10.} +\item{var}{An array of any number of dimensions up to 10.} -\item{posTR}{Position along which to compute the trend.} +\item{posTR}{An integer indicating the position along which to compute the +trend.} -\item{interval}{Number of months/years between 2 points along posTR -dimension.\cr -Default = 1.\cr -The trend would be provided in number of units per month or year.} +\item{interval}{A number of months/years between 2 points along posTR +dimension. Set 1 as default.} -\item{siglev}{Confidence level for the computation of confidence interval. -0.95 by default.} +\item{siglev}{A numeric value indicating the confidence level for the +computation of confidence interval. Set 0.95 as default.} -\item{conf}{Whether to compute the confidence levels or not. TRUE by default.} +\item{conf}{A logical value indicating whether to compute the confidence +levels or not. Set TRUE as default.} -\item{exp}{M by N matrix of M forecasts from N ensemble members.} +\item{exp}{An M by N matrix representing M forecasts from N ensemble members.} } \value{ \item{$trend}{ The intercept and slope coefficients for the least squares fitting of the trend. + An array with same dimensions as parameter 'var' except along the posTR + dimension, which is replaced by a length 4 (or length 2 if conf = FALSE) + dimension, corresponding to the lower limit of the confidence interval + (only present if conf = TRUE), the slope, the upper limit of the confidence + interval (only present if conf = TRUE), and the intercept. } -\item{$conf.int}{ - Corresponding to the limits of the \code{siglev}\% confidence interval - (only present if \code{conf = TRUE}) for the slope coefficient. -} \item{$detrended}{ Same dimensions as var with linearly detrended var along the posTR dimension. } +Only in .Trend: +\item{$conf.int}{ + Corresponding to the limits of the \code{siglev}\% confidence interval + (only present if \code{conf = TRUE}) for the slope coefficient. +} } \description{ Computes the trend along the forecast time of the ensemble mean by least @@ -54,6 +60,7 @@ as input (exp). example(Load) months_between_startdates <- 60 trend <- Trend(sampleData$obs, 3, months_between_startdates) + \donttest{ PlotVsLTime(trend$trend, toptitle = "trend", ytitle = "K / (5 year)", monini = 11, limits = c(-1,1), listexp = c('CMIP5 IC3'), listobs = c('ERSST'), biglab = FALSE, hlines = 0, @@ -61,6 +68,7 @@ PlotVsLTime(trend$trend, toptitle = "trend", ytitle = "K / (5 year)", PlotAno(trend$detrended, NULL, startDates, toptitle = 'detrended anomalies (along the startdates)', ytitle = 'K', legends = 'ERSST', biglab = FALSE, fileout = 'tos_detrended_obs.eps') + } } \author{ diff --git a/tests/testthat/test-InsertDim.R b/tests/testthat/test-InsertDim.R new file mode 100644 index 0000000000000000000000000000000000000000..6a783876a1676f31cf4f4a72f03ea8308900b2c1 --- /dev/null +++ b/tests/testthat/test-InsertDim.R @@ -0,0 +1,90 @@ +context("s2dverification::InsertDim tests") + +############################################## + dat1 <- array(c(1:26), dim = c(dat = 1, sdate = 13, ftime = 2)) + dat2 <- array(c(1:24), dim = c(2, 3, c = 4)) +############################################## +test_that("1. 
Input checks", { + + expect_error( + InsertDim(c()), + "Parameter 'data' cannot be NULL." + ) + expect_error( + InsertDim(list(1:10), posdim = 1, lendim = 1), + "Parameter 'data' must be an array." + ) + expect_error( + InsertDim(1:10, posdim = 'a', lendim = 2), + "Parameter 'posdim' must be a positive integer." + ) + expect_error( + InsertDim(1:10, posdim = 0, lendim = 2), + "Parameter 'posdim' must be a positive integer." + ) + expect_error( + InsertDim(1:10, posdim = 5, lendim = 2), + "Parameter 'posdim' cannot excess the number of dimensions of parameter 'data' plus 1" + ) + expect_error( + InsertDim(1:10, posdim = 1, lendim = 0.2), + "Parameter 'lendim' must be a positive integer." + ) + expect_error( + InsertDim(1:10, posdim = 1, lendim = T), + "Parameter 'lendim' must be a positive integer." + ) + expect_error( + InsertDim(1:10, posdim = 1, lendim = 1, ncores = 'a'), + "Parameter 'ncores' must be a positive integer." + ) + expect_error( + InsertDim(1:10, posdim = 1, lendim = 1, ncores = 0), + "Parameter 'ncores' must be a positive integer." + ) + expect_error( + InsertDim(1:10, posdim = 1, lendim = 1, name = 1), + "Parameter 'name' must be a character string." + ) +}) + +############################################## +test_that("2. Output checks: dat1", { + + expect_equal( + dim(InsertDim(dat1, posdim = 1, lendim = 2)), + dim(array(dim = c(new = 2, dat = 1, sdate = 13, ftime = 2))) + ) + expect_equal( + as.vector(InsertDim(dat1, posdim = 1, lendim = 2)[1,,,]), + as.vector(dat1) + ) + expect_equal( + as.vector(InsertDim(dat1, posdim = 1, lendim = 2)[2,,,]), + as.vector(dat1) + ) + +}) + +############################################## +test_that("3. Output checks: dat2", { + + expect_equal( + dim(InsertDim(dat2, posdim = 4, lendim = 2)), + dim(array(dim = c(2, 3, c = 4, new = 2))) + ) + + expect_equal( + as.vector(InsertDim(dat2, posdim = 3, lendim = 1)[,,1,]), + as.vector(dat2) + ) + + expect_equal( + dim(InsertDim(InsertDim(dat2, posdim = 4, lendim = 2), posdim = 1, lendim = 4)), + dim(array(dim = c(new = 4, 2, 3, c = 4, new = 2))) + ) + +}) + +############################################## + diff --git a/vignettes/NAOindex_81to91.png b/vignettes/NAOindex_81to91.png new file mode 100644 index 0000000000000000000000000000000000000000..7ee1220690eab76395e8d4dcde3f11a1c4d76553 Binary files /dev/null and b/vignettes/NAOindex_81to91.png differ diff --git a/vignettes/NAOpredictions.png b/vignettes/NAOpredictions.png new file mode 100644 index 0000000000000000000000000000000000000000..ec96e8651f8ccf72933be25b53213ac1afa36d8f Binary files /dev/null and b/vignettes/NAOpredictions.png differ diff --git a/vignettes/RMSSSforNAOprediction.png b/vignettes/RMSSSforNAOprediction.png new file mode 100644 index 0000000000000000000000000000000000000000..6e394b353f118262080045fb28c7423945405514 Binary files /dev/null and b/vignettes/RMSSSforNAOprediction.png differ diff --git a/vignettes/ScoringForecast.md b/vignettes/ScoringForecast.md new file mode 100644 index 0000000000000000000000000000000000000000..2f3679f74d96404cbd387f100e6d36a54676fee1 --- /dev/null +++ b/vignettes/ScoringForecast.md @@ -0,0 +1,254 @@ +--- +title: "Untitled" +output: github_document +--- + + +This vignette illustrates several examples of how to score a seasonal forecast using different functions in s2dverification. 
+
+
+### 1-Load dependencies
+
+This example requires the following system libraries:
+
+- libssl-dev
+- libnetcdf-dev
+- cdo
+
+
+The **s2dverification R package** should be loaded by running the following lines in R, once it is integrated into the CRAN mirror.
+
+```r
+library(s2dverification)
+library(devtools)
+```
+
+
+### 2-Define the problem and load the data with the corresponding parameters
+
+In this vignette we will quantify how skilled a seasonal forecast is. The model will be the EUROSIP multi-model seasonal forecasting system, and our "true" reference will be observations represented by the ERAinterim reanalysis dataset.
+
+For more information about both datasets, see the following documentation:
+- [**ERAinterim**](https://www.ecmwf.int/en/forecasts/datasets/archive-datasets/reanalysis-datasets/era-interim).
+- [**EUROSIP system**](https://www.ecmwf.int/en/forecasts/documentation-and-support/long-range/seasonal-forecast-documentation/eurosip-user-guide/multi-model).
+
+Both datasets can be downloaded from the corresponding server. However, in this example we will use the BSC esarchive database. The file path parameters for the Load() function are defined as follows:
+
+```r
+path_exp_EUROSIP <- list(name = 'system4_m1',
+                         path = file.path('/esarchive/exp/EUROSIP/ecmwf',
+                                          '$EXP_NAME$/monthly_mean',
+                                          '$VAR_NAME$_f6h/$VAR_NAME$_$START_DATE$.nc'))
+
+path_obs_eraI <- list(name = 'erainterim',
+                      path = file.path('/esarchive/recon/ecmwf',
+                                       '$OBS_NAME$/monthly_mean',
+                                       '$VAR_NAME$_f6h/$VAR_NAME$_$YEAR$$MONTH$.nc'))
+```
+
+Our case study will be predicting the North Atlantic Oscillation index ([**NAO**](https://en.wikipedia.org/wiki/North_Atlantic_oscillation)), usually defined as the difference in the sea level pressure anomaly between the permanent high pressure system over the Azores and the low pressure system over Iceland. Changes in the NAO index have a direct impact on European seasonal weather, since the NAO controls the intensity and pathways of storms over the continent. Positive NAO values are related to stronger storms and thus wet winters in Central and Northern Europe and on its Atlantic facade. Negative NAO values correspond to reduced storm activity and rainfall in Northern Europe but increased activity in the south and over the Mediterranean Sea. Especially during the months of November to April, the NAO is responsible for much of the variability of weather in the North Atlantic region, affecting wind speed and direction, temperature and moisture distribution, and the intensity, number and track of storms.
+
+For this vignette we will use the whole North Atlantic as the geographical domain and we will select all forecasts starting in November over the 1981 - 1991 period. The EUROSIP system provides 7-month-long forecasts including 51 different members.
+
+```r
+#North Atlantic domain definition
+lonNA <- c(-90, 40)
+latNA <- c(20, 80)
+
+#Selected time periods:
+startDates <- paste(1981:1991, '1101', sep = '')
+
+#Loading sea level pressure maps for the whole North Atlantic and for the specific period
+data <- Load(var = 'psl',
+             exp = list(path_exp_EUROSIP),
+             obs = list(path_obs_eraI),
+             sdates = startDates,
+             output = 'lonlat',
+             lonmin = lonNA[1], lonmax = lonNA[2],
+             latmin = latNA[1], latmax = latNA[2])
+
+
+#Retrieving observational and forecast data
+obs <- data$obs
+exp <- data$mod
+
+```
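+
+A quick look at the dimensions of the returned arrays helps to follow the member and start-date indexing used in the next sections (the dimension order given in the comment is an assumption based on the usual Load() output, not something checked here):
+
+```r
+#Inspecting the array layout; dimensions are expected to follow
+#c(dataset, member, sdate, ftime, lat, lon)
+dim(exp)
+dim(obs)
+```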
+
+
+### 3-NAO index, Forecast vs Observations
+
+As a first step we will compare the intensity of the NAO index in the observations and in the forecast system.
+
+In s2dverification the NAO index is computed using a common alternative definition: the NAO is defined as the first mode of variability (computed with the Singular Value Decomposition method) of the sea level pressure in the North Atlantic region [20N-80N, 80W-40E]. The `NAO()` function requires as input anomalies, previously computed with the `Ano_CrossValid()` function.
+
+```r
+#Computing anomalies along the sdates dimension
+ano <- Ano_CrossValid(exp, obs)
+
+#Computing the NAO index as the first mode of slp variability in the North Atlantic
+nao <- NAO(ano$ano_exp, ano$ano_obs, data$lon, data$lat)
+
+#For clarity, we normalize the NAO index of each member by the corresponding standard deviation
+nao_exp_sd <- apply(nao$NAO_exp, MARGIN = 1, sd, na.rm = TRUE)
+nao_obs_sd <- sd(nao$NAO_obs)
+nao_exp_n <- nao$NAO_exp / nao_exp_sd
+nao_obs_n <- nao$NAO_obs / nao_obs_sd
+
+
+# Finally plot the NAO index using box plots for all Decembers.
+PlotBoxWhisker(nao_exp_n, nao_obs_n, toptitle = "NAO index, DJF",
+               monini = 12, yearini = 1981, freq = 1,
+               drawleg = FALSE, fileout = NULL)
+legend(x = 3.8, y = 2.6, c('EUROSIP', 'EraInterim'), col = c(2, 4), pch = 15)
+```
+
+
+The figure above does not show good agreement between the observations (blue line) and the forecast (whisker boxes): because of the large dispersion among the 51 ensemble members, the predicted NAO signal is very weak and almost disappears (it stays close to 0).
+
+
+### 4-Quantifying the skillfulness of the prediction. The RMSSS
+
+Let's quantify how good or bad this prediction is, while also looking for a better understanding of what is happening. To do so, we will use the Root Mean Square Error (RMSE), which is the square root of the mean squared difference between the predicted and the true values.
+
+s2dverification includes the `RMS()` function to directly calculate the RMSE. However, in order to get a better understanding of the output values, we will use the `RMSSS()` function (Root Mean Square Error Skill Score). The RMSSS is the RMSE normalized by a reference RMSE, usually associated with the intrinsic variability of the system (for example, the standard deviation of the climatology). In s2dverification the score is computed such that the best RMSSS is 1 (RMSE equal to 0), while an RMSSS of 0 means that the RMSE equals the variability reference of the system (i.e. the standard deviation). The RMSSS can also be negative, meaning that the RMSE is greater than the variability reference.
+
+```r
+#Number of ensemble members
+Lmember <- length(exp[1, , 1, 1, 1, 1])
+
+#Calculating the RMSSS of the ensemble average, i.e. of the mean NAO time series over all members
+rmsss_m <- RMSSS(var_exp = array(apply(nao_exp_n, MARGIN = 2, FUN = mean), dim = c(1, 11)),
+                 var_obs = nao_obs_n, pval = FALSE)
+
+#Computing the RMSSS for each member
+rmsss <- RMSSS(var_exp = nao_exp_n, var_obs = nao_obs_n, pval = FALSE)
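+
+#As a rough, hand-rolled illustration of the skill score described above (this
+#sketch assumes a zero-anomaly/climatology reference; the exact reference used
+#internally by RMSSS() may differ, so the value is only indicative):
+rmse_ens <- sqrt(mean((apply(nao_exp_n, MARGIN = 2, FUN = mean) - nao_obs_n)^2))
+rmse_ref <- sqrt(mean(nao_obs_n^2))  #variability reference of the normalized observed NAO
+1 - rmse_ens / rmse_ref              #1 = perfect, 0 = no better than the reference, < 0 = worse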
+
+#Plotting the results
+layout(matrix(c(1, 2), 1, 2, byrow = TRUE))
+
+#Plotting the RMSSS of all members
+plot(rmsss, type = 'h', lwd = 5, col = 'grey', xlab = 'Members', mgp = c(3, 1, 0),
+     main = sprintf('RMSSS for each member (RMSSS average %1.3f )', rmsss_m),
+     ylim = c(-1, 1), cex.lab = 2, cex.axis = 2, cex.main = 1.9)
+grid(col = 'grey')
+lines(rep(0, Lmember), lwd = 1, col = 'black')
+
+#Plotting the box plot for the selected members with higher RMSSS
+isel <- which(rmsss > 0.1)
+rmsss_m_sel <- RMSSS(var_exp = array(apply(nao_exp_n[isel, ], MARGIN = 2, FUN = mean), dim = c(1, 11)),
+                     var_obs = nao_obs_n, pval = FALSE)
+
+PlotBoxWhisker(nao_exp_n[isel, ], nao_obs_n,
+               toptitle = sprintf('NAO index, selected-members (RMSSS=%1.2f)', rmsss_m_sel),
+               monini = 12, yearini = 1981, freq = 1,
+               drawleg = FALSE, fileout = NULL)
+legend(x = 4.95, y = 2.4, c('EUROSIP', 'EraInterim'),
+       col = c(2, 4), pch = 15, cex = 0.9, lty = 0)
+```
+
+
+The figure above shows very different RMSSS values for the different members (left plot). Most of them have an RMSSS close to 0, so their prediction error is close to the system variability. **The RMSSS for the whole ensemble is 0.091**, which means the ensemble prediction is not very useful.
+
+However, we can select the members with a better RMSSS, i.e. those closer to 1, and recompute the ensemble RMSSS. This is shown in the right plot, where only the 8th, 23rd, 40th and 45th members are used. Now most of the marked NAO events are correctly predicted, giving an **RMSSS of 0.66 for this selected-members ensemble**.
+
+
+### 5-Quantifying the skillfulness of the prediction. The Brier Score
+
+Another option to quantify the quality of this prediction is the `BrierScore()` function. The Brier Score (BS) measures the accuracy of a probabilistic categorical prediction by calculating the mean squared difference between the predicted probability and the actual observation. A BS of 0 corresponds to a perfect prediction (no difference between prediction and observation), while the worst score is 1; a minimal numeric illustration of this definition is sketched just before the code below.
+
+Moreover, the BS can be decomposed into 3 terms: BS = Reliability - Resolution + Uncertainty. Reliability refers to how close the predicted probabilities are to the true probabilities (i.e. the lower the reliability term, the smaller the difference between prediction and observation and the better the score). Resolution relates to the difference between the predicted probabilities and the climatology (i.e. the higher the resolution, the greater the ability of the forecast to predict events different from the climatology). Uncertainty represents the inherent uncertainty of the event occurring (i.e. the higher the uncertainty, the more difficult it is to correctly predict the event).
+
+In the case of the NAO index, we can discretize it into a categorical variable by defining a NAO event whenever the index is greater than 0 and associating the event with the value 1. Negative NAO index values then correspond to the non-occurrence of a NAO event and are assigned the value 0. For the forecast, we can convert the NAO index predictions of all ensemble members into a NAO prediction probability by computing the proportion of members with a positive NAO prediction. For comparison, we do the same for the better-scoring selected-members ensemble of the previous section.
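+
+A minimal sketch of the Brier score definition itself, using made-up probabilities and outcomes rather than the NAO data (and without calling `BrierScore()`):
+
+```r
+#Hypothetical forecast probabilities of an event and the corresponding outcomes
+p <- c(0.8, 0.4, 0.6)  #predicted probability that the event occurs
+o <- c(1, 0, 1)        #observed outcome (1 = event occurred, 0 = it did not)
+mean((p - o)^2)        #Brier score = 0.12 (0 is perfect, 1 is the worst)
+```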
+
+
+```r
+
+#Defining the number of start dates
+Lsdates <- length(startDates)
+
+#For the observations
+nao_event_obs <- rep(0, Lsdates)
+nao_event_obs[which(nao_obs_n > 0)] <- 1
+
+#For all members
+nao_event_exp <- array(0, dim = c(Lmember, Lsdates))
+nao_event_exp[which(nao_exp_n > 0)] <- 1
+nao_event_exp_prob <- apply(nao_event_exp, MARGIN = 2, mean)
+
+BS <- BrierScore(nao_event_obs, nao_event_exp_prob)
+
+BSscore <- BS$bs
+BSrel <- BS$rel
+BSres <- BS$res
+BSunc <- BS$unc
+
+#For the selected members only
+nao_event_exp_sel <- array(0, dim = c(length(isel), Lsdates))
+nao_event_exp_sel[which(nao_exp_n[isel, ] > 0)] <- 1
+nao_event_exp_sel_prob <- apply(nao_event_exp_sel, MARGIN = 2, mean)
+
+BS_sel <- BrierScore(nao_event_obs, nao_event_exp_sel_prob)
+
+BSscore_sel <- BS_sel$bs
+BSrel_sel <- BS_sel$rel
+BSres_sel <- BS_sel$res
+BSunc_sel <- BS_sel$unc
+
+
+#Plotting NAO events and prediction probabilities
+layout(matrix(c(1, 2), 1, 2, byrow = TRUE))
+
+#For all ensemble members
+plot(BS$obs - 0.5, type = 'p', lwd = 5, col = 'red',
+     xlab = 'NAO events', yaxt = 'n', ylab = 'NAO probabilities', mgp = c(3, 1, 0))
+grid(col = 'grey')
+lines(BS$obs - 0.5, type = 'h', lwd = 5, col = 'red', yaxt = 'n')
+lines(BS$pred - 0.5, type = 'h', lwd = 4, col = 'blue', yaxt = 'n')
+lines(BS$pred - 0.5, type = 'p', lwd = 3, col = 'blue', yaxt = 'n')
+axis(2, at = seq(-0.5, 0.5, 0.25), labels = seq(0, 1, 0.25), mgp = c(3, 1, 0))
+lines(rep(0, Lmember), lwd = 2, col = 'black')
+title('Predictions for all-members ensemble')
+
+#For the selected-members ensemble only
+plot(BS_sel$obs - 0.5, type = 'p', lwd = 5, col = 'red',
+     xlab = 'NAO events', yaxt = 'n', ylab = 'NAO probabilities', mgp = c(3, 1, 0))
+grid(col = 'grey')
+lines(BS_sel$obs - 0.5, type = 'h', lwd = 5, col = 'red', yaxt = 'n')
+lines(BS_sel$pred - 0.5, type = 'h', lwd = 4, col = 'blue', yaxt = 'n')
+lines(BS_sel$pred - 0.5, type = 'p', lwd = 3, col = 'blue', yaxt = 'n')
+axis(2, at = seq(-0.5, 0.5, 0.25), labels = seq(0, 1, 0.25), mgp = c(3, 1, 0))
+lines(rep(0, Lmember), lwd = 2, col = 'black')
+title('Predictions for selected-members ensemble')
+
+```
+
+
+For the all-members ensemble, the results are:
+**Total BS = 0.213,**
+Reliability = 0.138,
+Resolution = 0.172,
+Uncertainty = 0.248.
+
+For the selected-members ensemble, the results are:
+**Total BS = 0.136,**
+Reliability = 0.015,
+Resolution = 0.127,
+Uncertainty = 0.248.
+
+To put the total Brier Score values into context, note that, by definition, a forecast that always gives a 50% probability prediction for all cases would have a BS of 0.25.
+
+Taking this into account, the all-members ensemble prediction, with a BS of 0.213, is only a bit better than flipping a coin in terms of Brier scoring. The selected-members ensemble instead presents a clear improvement in the total Brier Score (a lower value). This is due to the better score (lower value) achieved by the reliability term in this ensemble, in agreement with the previous RMSSS results and also visible in the right plot of the figure above: the prediction probabilities are much closer to the NAO occurrence distribution. However, the resolution scores are not that different between the all-members and the selected-members ensembles, because even with a notable difference in RMSSS and reliability, the two ensembles predict the correct occurrence of positive/negative NAO events about equally well (left plot).
+Finally, the uncertainty is obviously the same in both cases, since the observations used for the comparison are the same.
+
+
+### 6-Appendix: Forecast score ranges
+
+| SCORE | RMSSS | BS (total) | Reliability | Resolution     | Uncertainty    |
+|:-----:|:-----:|:----------:|:-----------:|:--------------:|:--------------:|
+| Best  | 1     | 0          | 0           | Obs. dependent | Obs. dependent |
+| Worst | -Inf  | 1          | 1           | 0              | Obs. dependent |
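+
+As a quick arithmetic check of the decomposition BS = Reliability - Resolution + Uncertainty quoted above (using the rounded values reported in section 5, so small rounding differences are expected):
+
+```r
+#All-members ensemble: reliability - resolution + uncertainty
+0.138 - 0.172 + 0.248  #~0.214, consistent with the reported total BS of 0.213
+
+#Selected-members ensemble
+0.015 - 0.127 + 0.248  #0.136, matching the reported total BS
+```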