diff --git a/CHANGELOG.rst b/CHANGELOG.rst
new file mode 100644
index 0000000000000000000000000000000000000000..18bc8e71ddf058151872e38f146b1defdd1d06a2
--- /dev/null
+++ b/CHANGELOG.rst
@@ -0,0 +1,158 @@
+============
+CHANGELOG
+============
+
+.. start-here
+
+1.1.4
+============
+
+* Release date: Unknown
+* Changes and new features:
+
+  * Statistics:
+
+    * Rolling mean
+
+  * Documentation
+
+1.1.3
+============
+
+* Release date: 2023/06/16
+* Changes and new features:
+
+  * Rotated nested projection
+  * Improved documentation
+  * New function get_fids()
+  * Climatology options added
+  * Milliseconds, seconds, minutes and days accepted as time units
+  * Option to change the resolution of the time units
+  * Bug fixes:
+
+    * Corrected the input arguments of the function new()
+    * Fixed the conversion of month time units to days
+
+1.1.2
+============
+
+* Release date: 2023/05/15
+* Changes and new features:
+
+  * Minor bug fixes
+  * Tutorial updates
+  * Writing formats CMAQ, MONARCH and WRF_CHEM added (`#63 <https://earth.bsc.es/gitlab/es/NES/-/issues/63>`_)
+
+1.1.1
+============
+
+* Release date: 2023/04/12
+* Changes and new features:
+
+  * Sum of Nes objects (`#48 <https://earth.bsc.es/gitlab/es/NES/-/issues/48>`_)
+  * Write 2D string data to save variables from shapefiles after doing a spatial join (`#49 <https://earth.bsc.es/gitlab/es/NES/-/issues/49>`_)
+  * Horizontal conservative interpolation: improved memory usage when calculating the weight matrix (`#54 <https://earth.bsc.es/gitlab/es/NES/-/issues/54>`_)
+  * Improved the run time of the **concatenate_netcdfs** function (`#55 <https://earth.bsc.es/gitlab/es/NES/-/issues/55>`_)
+  * Write by time step to avoid memory issues (`#57 <https://earth.bsc.es/gitlab/es/NES/-/issues/57>`_)
+  * Flux conservative horizontal interpolation (`#60 <https://earth.bsc.es/gitlab/es/NES/-/issues/60>`_)
+  * Bug fixes:
+
+    * Fixed a bug in the ``cell_methods`` serial write (`#53 <https://earth.bsc.es/gitlab/es/NES/-/issues/53>`_)
+    * Fixed a bug in ``avoid_first_hours``, which was not applied after reading the dimensions (`#59 <https://earth.bsc.es/gitlab/es/NES/-/issues/59>`_)
+    * Fixed a bug while reading masked data
+    * Fixed the ``grid_mapping`` NetCDF variable, which was written as integer instead of character
+
+1.1.0
+============
+
+* Release date: 2023/03/02
+* Changes and new features:
+
+  * Improved the Lat-Lon to Cartesian coordinates method (used in Providentia).
+  * Horizontal interpolation: Conservative
+  * Function to_shapefile() to create shapefiles from a NES object without losing data from the original grid, with the option to select the time and level.
+  * Function from_shapefile() to create a new grid with data from a shapefile after doing a spatial join.
+  * Function create_shapefile() can now be used in parallel.
+  * Function calculate_grid_area() to calculate the area of each cell in a grid.
+  * Function calculate_geometry_area() to calculate the area of each cell given a set of geometries.
+  * Function get_spatial_bounds_mesh_format() to get the lon-lat boundaries in a mesh format (used in pcolormesh).
+  * Bug fixes:
+
+    * Corrected the dimensions of the points datasets resulting from any interpolation.
+    * Amended the interpolation method to take into account the cases in which the distance between points equals zero.
+    * Corrected the way the level positive value is retrieved.
+    * Corrected the calculation of the spatial bounds of LCC and Mercator grids: the dimensions were flipped.
+    * Corrected the calculation of the spatial bounds for all grids: use read axis limits instead of write axis limits.
+    * Calculate centroids from coordinates in the creation of shapefiles, instead of using the GeoPandas ``centroid`` function, which raises a warning about possible errors.
+    * Enabled the selection of variables on the creation of shapefiles.
+    * Corrected the read and write parallel limits.
+    * Corrected the data type in the parallelization of points datasets.
+    * Corrected an error that appeared when trying to select coordinates and write the file.
+
+1.0.0
+============
+
+* Release date: 2022/11/24
+* Changes and new features:
+
+  * First beta release
+  * Open:
+
+    * NetCDF:
+
+      * Regular Latitude-Longitude
+      * Rotated Lat-Lon
+      * Lambert Conformal Conic
+      * Mercator
+      * Points
+      * Points in GHOST format
+      * Points in PROVIDENTIA format
+
+    * Parallelization:
+
+      * Balanced / Unbalanced
+      * By time axis
+      * By Y axis
+      * By X axis
+
+  * Create:
+
+    * NetCDF:
+
+      * Regular Latitude-Longitude
+      * Rotated Lat-Lon
+      * Lambert Conformal Conic
+      * Mercator
+      * Points
+
+    * Shapefile
+
+  * Write:
+
+    * NetCDF
+
+      * CAMS REANALYSIS format
+
+    * Grib2
+    * Shapefile
+
+  * Interpolation:
+
+    * Vertical interpolation
+    * Horizontal interpolation
+
+      * Nearest Neighbours
+
+    * Providentia interpolation
+
+  * Statistics:
+
+    * Daily_mean
+    * Daily_max
+    * Daily_min
+    * Last time step
+
+  * Methods:
+
+    * Concatenate (variables of the same period in different files)
\ No newline at end of file
diff --git a/CONTRIBUTING.rst b/CONTRIBUTING.rst
new file mode 100644
index 0000000000000000000000000000000000000000..0692bc8a36381daff7b2821a888f5a9029136d23
--- /dev/null
+++ b/CONTRIBUTING.rst
@@ -0,0 +1,17 @@
+============
+Contributing
+============
+
+.. start-here
+
+1. Create an issue and complete its description as much as possible (estimated time, corresponding milestone, assigned person, etc.).
+2. Create a branch (from master) directly in the issue. Its name should start with ``develop-``, followed by the issue number and title that appear by default.
+3. Clone the repository and check out the new branch, without any modifications, on Nord3v2. It is recommended to run some tests to verify the current behaviour.
+4. Modify the code.
+5. Run the simulation with the new branch: it is important to prepend the cloned path to the ``PYTHONPATH``, e.g. ``export PYTHONPATH=/gpfs/scratch/bsc32/bsc32538/NES:${PYTHONPATH}``.
+6. Create and run a specific test for your case in the folder ``tests``.
+7. Update the ``CHANGELOG.rst`` with information on the new development or bug fix.
+8. Update the wiki with the new specifications.
+9. Merge ``master`` into your development branch, to ensure that any changes made to the master branch in the meantime are included.
+10. Run all tests in ``tests``.
+11. Create a merge request and assign it to Alba (`@avilanov `_) or Carles (`@ctena `_), who will review it.
\ No newline at end of file
diff --git a/README.rst b/README.rst
new file mode 100644
index 0000000000000000000000000000000000000000..6562c619e2a76161c9622fd2215f119aa6b331dd
--- /dev/null
+++ b/README.rst
@@ -0,0 +1,39 @@
+============
+NES
+============
+
+.. start-here
+
+About
+============
+
+NES (NetCDF for Earth Science) is the Python library for reading and writing netCDF files that is used by SNES, the framework that implements the data post-processing pipelines at the Earth Sciences department.
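+
+A minimal example of reading a file with NES (the path is illustrative):
+
+::
+
+    import nes
+
+    nessy = nes.open_netcdf('/path/to/file.nc')  # lazy read: only metadata
+    nessy.load()                                 # load the variable data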
+
+How to clone
+============
+
+Use the following command to get a copy of the repository:
+
+::
+
+    git clone https://earth.bsc.es/gitlab/es/NES.git
+
+You can use the latest stable version of NES
+by accessing the production branch:
+
+::
+
+    git checkout production
+
+You can also access the master branch to test new features
+that will be included in the upcoming release:
+
+::
+
+    git checkout master
+
+How to run
+============
+
+To run NES, please follow the instructions in
+the `wiki `_.
\ No newline at end of file
diff --git a/docs/Makefile b/docs/Makefile
new file mode 100644
index 0000000000000000000000000000000000000000..d0c3cbf1020d5c292abdedf27627c6abe25e2293
--- /dev/null
+++ b/docs/Makefile
@@ -0,0 +1,20 @@
+# Minimal makefile for Sphinx documentation
+#
+
+# You can set these variables from the command line, and also
+# from the environment for the first two.
+SPHINXOPTS    ?=
+SPHINXBUILD   ?= sphinx-build
+SOURCEDIR     = source
+BUILDDIR      = build
+
+# Put it first so that "make" without argument is like "make help".
+help:
+	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
+
+.PHONY: help Makefile
+
+# Catch-all target: route all unknown targets to Sphinx using the new
+# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
+%: Makefile
+	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
diff --git a/docs/build/doctrees/authors.doctree b/docs/build/doctrees/authors.doctree
new file mode 100644
index 0000000000000000000000000000000000000000..db6083d44eda424e14fffbbe8ffabc8caab8c5ec
Binary files /dev/null and b/docs/build/doctrees/authors.doctree differ
diff --git a/docs/build/doctrees/changelog.doctree b/docs/build/doctrees/changelog.doctree
new file mode 100644
index 0000000000000000000000000000000000000000..5c70d842b1b4c6ab163adf8acbc846233edff16a
Binary files /dev/null and b/docs/build/doctrees/changelog.doctree differ
diff --git a/docs/build/doctrees/contributing.doctree b/docs/build/doctrees/contributing.doctree
new file mode 100644
index 0000000000000000000000000000000000000000..bf5a70832a9eec21a04b30c828c7babd153b8441
Binary files /dev/null and b/docs/build/doctrees/contributing.doctree differ
diff --git a/docs/build/doctrees/environment.pickle b/docs/build/doctrees/environment.pickle
new file mode 100644
index 0000000000000000000000000000000000000000..a6dbcc1cf34232bf14f764eb73b15d4abd43571c
Binary files /dev/null and b/docs/build/doctrees/environment.pickle differ
diff --git a/docs/build/doctrees/formats.doctree b/docs/build/doctrees/formats.doctree
new file mode 100644
index 0000000000000000000000000000000000000000..92f908ee82d4fffde2425a066a2207bf45b7c75d
Binary files /dev/null and b/docs/build/doctrees/formats.doctree differ
diff --git a/docs/build/doctrees/index.doctree b/docs/build/doctrees/index.doctree
new file mode 100644
index 0000000000000000000000000000000000000000..7c7eb57551b0f0e9c390300e9fd1faf69ee54cbf
Binary files /dev/null and b/docs/build/doctrees/index.doctree differ
diff --git a/docs/build/doctrees/methods.doctree b/docs/build/doctrees/methods.doctree
new file mode 100644
index 0000000000000000000000000000000000000000..2b6ac637d7cad743112d061f2de42c878f340ee2
Binary files /dev/null and b/docs/build/doctrees/methods.doctree differ
diff --git a/docs/build/doctrees/object.doctree b/docs/build/doctrees/object.doctree
new file mode 100644
index 0000000000000000000000000000000000000000..9f99562373effd54947c1e34763c14e72d4f5941
Binary files /dev/null and b/docs/build/doctrees/object.doctree differ
diff --git a/docs/build/doctrees/projections.doctree b/docs/build/doctrees/projections.doctree
new file mode 100644
index 0000000000000000000000000000000000000000..57a0dfd9fe272e409c74dec200861761866f202c
Binary files /dev/null and b/docs/build/doctrees/projections.doctree differ
diff --git a/docs/build/doctrees/readme.doctree b/docs/build/doctrees/readme.doctree
new file mode 100644
index 0000000000000000000000000000000000000000..172e1047dc2ba951e6386df181570419ac9972c6
Binary files /dev/null and b/docs/build/doctrees/readme.doctree differ
diff --git a/docs/build/html/.buildinfo b/docs/build/html/.buildinfo
new file mode 100644
index 0000000000000000000000000000000000000000..4ae2fafc1a0d7b5fab891e25b61bcb600a944881
--- /dev/null
+++ b/docs/build/html/.buildinfo
@@ -0,0 +1,4 @@
+# Sphinx build info version 1
+# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
+config: ae473bf9fa7e837e717efd8d2c738be6
+tags: 645f666f9bcd5a90fca523b33c5a78b7
diff --git a/docs/build/html/_modules/index.html b/docs/build/html/_modules/index.html
new file mode 100644
index 0000000000000000000000000000000000000000..90ef5380e155c65f7d35534707d19c6a84852056
--- /dev/null
+++ b/docs/build/html/_modules/index.html
@@ -0,0 +1,127 @@
+Overview: module code — NES 1.1.3 documentation
\ No newline at end of file
diff --git a/docs/build/html/_modules/nes/create_nes.html b/docs/build/html/_modules/nes/create_nes.html
new file mode 100644
index 0000000000000000000000000000000000000000..57490b3ff886baa4dd28b15ec0a3f0c0b5bb3cb3
--- /dev/null
+++ b/docs/build/html/_modules/nes/create_nes.html
@@ -0,0 +1,273 @@
+nes.create_nes — NES 1.1.3 documentation
Source code for nes.create_nes

+#!/usr/bin/env python
+
+import warnings
+import sys
+from netCDF4 import num2date
+from mpi4py import MPI
+import geopandas as gpd
+from .nc_projections import *
+
+
+
+[docs]
+def create_nes(comm=None, info=False, projection=None, parallel_method='Y', balanced=False,
+               times=None, avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None,
+               **kwargs):
+    """
+    Create a Nes class from scratch.
+
+    Parameters
+    ----------
+    comm : MPI.Communicator
+        MPI Communicator.
+    info : bool
+        Indicates if you want to get reading/writing info.
+    projection : str, None
+        Projection of the new grid. Accepted values: [None, 'regular', 'global', 'rotated',
+        'rotated-nested', 'lcc', 'mercator'].
+    parallel_method : str
+        Indicates the parallelization method that you want. Default: 'Y'.
+        Accepted values: ['X', 'Y', 'T'].
+    balanced : bool
+        Indicates if you want a balanced parallelization or not.
+        Balanced datasets cannot be written in chunking mode.
+    times : list, None
+        List of datetimes for the time axis. If None, a single time step (1996-12-31 00:00) is used.
+    avoid_first_hours : int
+        Number of hours to remove from first time steps.
+    avoid_last_hours : int
+        Number of hours to remove from last time steps.
+    first_level : int
+        Index of the first level to use.
+    last_level : int, None
+        Index of the last level to use. None if it is the last.
+    """
+
+    if comm is None:
+        comm = MPI.COMM_WORLD
+
+    # Create time array
+    if times is None:
+        units = 'days since 1996-12-31 00:00:00'
+        calendar = 'standard'
+        times = num2date([0], units=units, calendar=calendar)
+        times = [aux.replace(second=0, microsecond=0) for aux in times]
+    else:
+        if not isinstance(times, list):
+            times = times.tolist()
+
+    # Check if the parameters that are required to create the object have been defined in kwargs
+    kwargs_list = []
+    for name, value in kwargs.items():
+        kwargs_list.append(name)
+
+    if projection is None:
+        required_vars = ['lat', 'lon']
+    elif projection == 'regular':
+        required_vars = ['lat_orig', 'lon_orig', 'inc_lat', 'inc_lon', 'n_lat', 'n_lon']
+    elif projection == 'global':
+        required_vars = ['inc_lat', 'inc_lon']
+    elif projection == 'rotated':
+        required_vars = ['centre_lat', 'centre_lon', 'west_boundary', 'south_boundary', 'inc_rlat', 'inc_rlon']
+    elif projection == 'rotated-nested':
+        required_vars = ['parent_grid_path', 'parent_ratio', 'i_parent_start', 'j_parent_start', 'n_rlat', 'n_rlon']
+    elif projection == 'lcc':
+        required_vars = ['lat_1', 'lat_2', 'lon_0', 'lat_0', 'nx', 'ny', 'inc_x', 'inc_y', 'x_0', 'y_0']
+    elif projection == 'mercator':
+        required_vars = ['lat_ts', 'lon_0', 'nx', 'ny', 'inc_x', 'inc_y', 'x_0', 'y_0']
+    else:
+        raise ValueError("Unknown projection: {0}".format(projection))
+
+    for var in required_vars:
+        if var not in kwargs_list:
+            msg = 'Variable {0} has not been defined. '.format(var)
+            msg += 'For a {} projection, it is necessary to define {}'.format(projection, required_vars)
+            raise ValueError(msg)
+
+    for var in kwargs_list:
+        if var not in required_vars:
+            msg = 'Variable {0} has been defined. '.format(var)
+            msg += 'For a {} projection, you can only define {}'.format(projection, required_vars)
+            raise ValueError(msg)
+
+    if projection is None:
+        if parallel_method == 'Y':
+            warnings.warn("Parallel method cannot be 'Y' to create points NES. Setting it to 'X'")
+            sys.stderr.flush()
+            parallel_method = 'X'
+        elif parallel_method == 'T':
+            raise NotImplementedError("Parallel method T not implemented yet")
+        nessy = PointsNes(comm=comm, dataset=None, xarray=False, info=info, parallel_method=parallel_method,
+                          avoid_first_hours=avoid_first_hours, avoid_last_hours=avoid_last_hours,
+                          first_level=first_level, last_level=last_level, balanced=balanced,
+                          create_nes=True, times=times, **kwargs)
+    elif projection in ['regular', 'global']:
+        nessy = LatLonNes(comm=comm, dataset=None, xarray=False, info=info, parallel_method=parallel_method,
+                          avoid_first_hours=avoid_first_hours, avoid_last_hours=avoid_last_hours,
+                          first_level=first_level, last_level=last_level, balanced=balanced,
+                          create_nes=True, times=times, **kwargs)
+    elif projection == 'rotated':
+        nessy = RotatedNes(comm=comm, dataset=None, xarray=False, info=info, parallel_method=parallel_method,
+                           avoid_first_hours=avoid_first_hours, avoid_last_hours=avoid_last_hours,
+                           first_level=first_level, last_level=last_level, balanced=balanced,
+                           create_nes=True, times=times, **kwargs)
+    elif projection == 'rotated-nested':
+        nessy = RotatedNestedNes(comm=comm, dataset=None, xarray=False, info=info, parallel_method=parallel_method,
+                                 avoid_first_hours=avoid_first_hours, avoid_last_hours=avoid_last_hours,
+                                 first_level=first_level, last_level=last_level, balanced=balanced,
+                                 create_nes=True, times=times, **kwargs)
+    elif projection == 'lcc':
+        nessy = LCCNes(comm=comm, dataset=None, xarray=False, info=info, parallel_method=parallel_method,
+                       avoid_first_hours=avoid_first_hours, avoid_last_hours=avoid_last_hours,
+                       first_level=first_level, last_level=last_level, balanced=balanced,
+                       create_nes=True, times=times, **kwargs)
+    elif projection == 'mercator':
+        nessy = MercatorNes(comm=comm, dataset=None, xarray=False, info=info, parallel_method=parallel_method,
+                            avoid_first_hours=avoid_first_hours, avoid_last_hours=avoid_last_hours,
+                            first_level=first_level, last_level=last_level, balanced=balanced,
+                            create_nes=True, times=times, **kwargs)
+    else:
+        raise NotImplementedError(projection)
+
+    return nessy
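A minimal usage sketch (illustrative values; ``inc_lat``/``inc_lon`` are the ``required_vars`` of the ``global`` projection above, and ``create_nes`` is assumed to be exported at the package level):

import nes

# Create a global 1-degree grid with the default single time step
# and write it out (the output path is hypothetical).
nessy = nes.create_nes(projection='global', inc_lat=1.0, inc_lon=1.0)
nessy.to_netcdf('global_grid.nc')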
+ + + +
+[docs] +def from_shapefile(path, method=None, parallel_method='Y', **kwargs): + """ + Create NES from shapefile data. + + 1. Create NES grid. + 2. Create shapefile for grid. + 3. Spatial join to add shapefile variables to NES variables. + + Parameters + ---------- + path : str + Path to shapefile. + method : str + Overlay method. Accepted values: ['nearest', 'intersection', None]. + parallel_method : str + Indicates the parallelization method that you want. Default: 'Y'. + accepted values: ['X', 'Y', 'T']. + """ + + # Create NES + nessy = create_nes(comm=None, info=False, parallel_method=parallel_method, **kwargs) + + # Create shapefile for grid + nessy.create_shapefile() + + # Make spatial join + nessy.spatial_join(path, method=method) + + return nessy
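A hedged sketch of ``from_shapefile`` (the shapefile path is hypothetical; ``'intersection'`` is one of the accepted overlay methods listed above, and the grid kwargs are forwarded to ``create_nes``):

import nes

# Build a global 1-degree grid, create its shapefile and spatially join
# the variables of the input shapefile onto it.
nessy = nes.from_shapefile('input_data.shp', method='intersection',
                           projection='global', inc_lat=1.0, inc_lon=1.0)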
+ +
\ No newline at end of file
diff --git a/docs/build/html/_modules/nes/load_nes.html b/docs/build/html/_modules/nes/load_nes.html
new file mode 100644
index 0000000000000000000000000000000000000000..d51ea5a41ee039178195ac2adf038153483a9214
--- /dev/null
+++ b/docs/build/html/_modules/nes/load_nes.html
@@ -0,0 +1,442 @@
+nes.load_nes — NES 1.1.3 documentation
Source code for nes.load_nes

+#!/usr/bin/env python
+
+import os
+import sys
+from mpi4py import MPI
+from netCDF4 import Dataset
+import warnings
+import numpy as np
+from .nc_projections import *
+
+DIM_VAR_NAMES = ['lat', 'latitude', 'lat_bnds', 'lon', 'longitude', 'lon_bnds', 'time', 'time_bnds', 'lev', 'level',
+                 'cell_area', 'crs', 'rotated_pole', 'x', 'y', 'rlat', 'rlon', 'Lambert_conformal', 'mercator']
+
+
+
+[docs]
+def open_netcdf(path, comm=None, xarray=False, info=False, parallel_method='Y', avoid_first_hours=0,
+                avoid_last_hours=0, first_level=0, last_level=None, balanced=False):
+    """
+    Open a netCDF file.
+
+    Parameters
+    ----------
+    path : str
+        Path to the NetCDF file to read.
+    comm : MPI.Communicator
+        MPI communicator to use in that netCDF. Default: MPI.COMM_WORLD.
+    xarray : bool
+        (Not working) Indicates if you want to use xarray. Default: False.
+    info : bool
+        Indicates if you want to print (stdout) the reading/writing steps.
+    avoid_first_hours : int
+        Number of hours to remove from first time steps.
+    avoid_last_hours : int
+        Number of hours to remove from last time steps.
+    parallel_method : str
+        Indicates the parallelization method that you want. Default: 'Y'.
+        Accepted values: ['X', 'Y', 'T'].
+    balanced : bool
+        Indicates if you want a balanced parallelization or not. Balanced datasets cannot be written in chunking mode.
+    first_level : int
+        Index of the first level to use.
+    last_level : int, None
+        Index of the last level to use. None if it is the last.
+
+    Returns
+    -------
+    Nes
+        Nes object. Variables read in lazy mode (only metadata).
+    """
+
+    if comm is None:
+        comm = MPI.COMM_WORLD
+
+    if not os.path.exists(path):
+        raise FileNotFoundError(path)
+    if xarray:
+        dataset = None
+    else:
+        if comm.Get_size() == 1:
+            dataset = Dataset(path, format="NETCDF4", mode='r', parallel=False)
+        else:
+            dataset = Dataset(path, format="NETCDF4", mode='r', parallel=True, comm=comm, info=MPI.Info())
+
+    if __is_rotated(dataset):
+        # Rotated grids
+        nessy = RotatedNes(comm=comm, dataset=dataset, xarray=xarray, info=info, parallel_method=parallel_method,
+                           avoid_first_hours=avoid_first_hours, avoid_last_hours=avoid_last_hours,
+                           first_level=first_level, last_level=last_level, create_nes=False, balanced=balanced)
+    elif __is_points(dataset):
+        if parallel_method == 'Y':
+            warnings.warn("Parallel method cannot be 'Y' to create points NES. Setting it to 'X'")
+            sys.stderr.flush()
+            parallel_method = 'X'
+        if __is_points_ghost(dataset):
+            # Points - GHOST
+            nessy = PointsNesGHOST(comm=comm, dataset=dataset, xarray=xarray, info=info,
+                                   parallel_method=parallel_method,
+                                   avoid_first_hours=avoid_first_hours, avoid_last_hours=avoid_last_hours,
+                                   first_level=first_level, last_level=last_level, create_nes=False,
+                                   balanced=balanced)
+        elif __is_points_providentia(dataset):
+            # Points - Providentia
+            nessy = PointsNesProvidentia(comm=comm, dataset=dataset, xarray=xarray, info=info,
+                                         parallel_method=parallel_method,
+                                         avoid_first_hours=avoid_first_hours, avoid_last_hours=avoid_last_hours,
+                                         first_level=first_level, last_level=last_level, create_nes=False,
+                                         balanced=balanced)
+        else:
+            # Points - non-GHOST
+            nessy = PointsNes(comm=comm, dataset=dataset, xarray=xarray, info=info, parallel_method=parallel_method,
+                              avoid_first_hours=avoid_first_hours, avoid_last_hours=avoid_last_hours,
+                              first_level=first_level, last_level=last_level, create_nes=False, balanced=balanced)
+    elif __is_lcc(dataset):
+        # Lambert conformal conic grids
+        nessy = LCCNes(comm=comm, dataset=dataset, xarray=xarray, info=info, parallel_method=parallel_method,
+                       avoid_first_hours=avoid_first_hours, avoid_last_hours=avoid_last_hours,
+                       first_level=first_level, last_level=last_level, create_nes=False, balanced=balanced)
+    elif __is_mercator(dataset):
+        # Mercator grids
+        nessy = MercatorNes(comm=comm, dataset=dataset, xarray=xarray, info=info, parallel_method=parallel_method,
+                            avoid_first_hours=avoid_first_hours, avoid_last_hours=avoid_last_hours,
+                            first_level=first_level, last_level=last_level, create_nes=False, balanced=balanced)
+    else:
+        # Regular grids
+        nessy = LatLonNes(comm=comm, dataset=dataset, xarray=xarray, info=info, parallel_method=parallel_method,
+                          avoid_first_hours=avoid_first_hours, avoid_last_hours=avoid_last_hours,
+                          first_level=first_level, last_level=last_level, create_nes=False, balanced=balanced)
+
+    return nessy
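A minimal usage sketch of ``open_netcdf`` (illustrative paths; ``load`` reads the lazy variables, as used by ``concatenate_netcdfs`` below):

import nes

nessy = nes.open_netcdf('/path/to/input.nc', parallel_method='Y')  # lazy: only metadata is read
nessy.load()                                                       # read the data of every variable
nessy.to_netcdf('/path/to/output.nc')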
+
+
+def __is_rotated(dataset):
+    """
+    Check if the netCDF is in rotated pole projection or not.
+
+    Parameters
+    ----------
+    dataset : Dataset
+        netcdf4-python open dataset object.
+
+    Returns
+    -------
+    value : bool
+        Indicates whether the netCDF is a rotated one.
+    """
+
+    if 'rotated_pole' in dataset.variables.keys():
+        return True
+    elif ('rlat' in dataset.dimensions) and ('rlon' in dataset.dimensions):
+        return True
+    else:
+        return False
+
+
+def __is_points(dataset):
+    """
+    Check if the netCDF is a points dataset in non-GHOST format or not.
+
+    Parameters
+    ----------
+    dataset : Dataset
+        netcdf4-python open dataset object.
+
+    Returns
+    -------
+    value : bool
+        Indicates whether the netCDF is a points non-GHOST one.
+    """
+
+    if 'station' in dataset.dimensions:
+        return True
+    else:
+        return False
+
+
+def __is_points_ghost(dataset):
+    """
+    Check if the netCDF is a points dataset in GHOST format or not.
+
+    Parameters
+    ----------
+    dataset : Dataset
+        netcdf4-python open dataset object.
+
+    Returns
+    -------
+    value : bool
+        Indicates whether the netCDF is a points GHOST one.
+    """
+
+    if 'N_flag_codes' in dataset.dimensions and 'N_qa_codes' in dataset.dimensions:
+        return True
+    else:
+        return False
+
+
+def __is_points_providentia(dataset):
+    """
+    Check if the netCDF is a points dataset in Providentia format or not.
+
+    Parameters
+    ----------
+    dataset : Dataset
+        netcdf4-python open dataset object.
+
+    Returns
+    -------
+    value : bool
+        Indicates whether the netCDF is a points Providentia one.
+    """
+
+    if (('grid_edge' in dataset.dimensions) and ('model_latitude' in dataset.dimensions)
+            and ('model_longitude' in dataset.dimensions)):
+        return True
+    else:
+        return False
+
+
+def __is_lcc(dataset):
+    """
+    Check if the netCDF is in Lambert Conformal Conic (LCC) projection or not.
+
+    Parameters
+    ----------
+    dataset : Dataset
+        netcdf4-python open dataset object.
+
+    Returns
+    -------
+    value : bool
+        Indicates whether the netCDF is a LCC one.
+    """
+
+    if 'Lambert_Conformal' in dataset.variables.keys() or 'Lambert_conformal' in dataset.variables.keys():
+        return True
+    else:
+        return False
+
+
+def __is_mercator(dataset):
+    """
+    Check if the netCDF is in Mercator projection or not.
+
+    Parameters
+    ----------
+    dataset : Dataset
+        netcdf4-python open dataset object.
+
+    Returns
+    -------
+    value : bool
+        Indicates whether the netCDF is a Mercator one.
+    """
+
+    if 'mercator' in dataset.variables.keys():
+        return True
+    else:
+        return False
+
+
+[docs]
+def concatenate_netcdfs(nessy_list, comm=None, info=False, parallel_method='Y', avoid_first_hours=0,
+                        avoid_last_hours=0, first_level=0, last_level=None, balanced=False):
+    """
+    Concatenate variables from different sources.
+
+    Parameters
+    ----------
+    nessy_list : list
+        List of Nes objects or list of paths to concatenate.
+    comm : MPI.Communicator
+        MPI Communicator.
+    info : bool
+        Indicates if you want to print (stdout) the reading/writing steps.
+    parallel_method : str
+        Indicates the parallelization method that you want. Default: 'Y'.
+        Accepted values: ['X', 'Y', 'T'].
+    avoid_first_hours : int
+        Number of hours to remove from first time steps.
+    avoid_last_hours : int
+        Number of hours to remove from last time steps.
+    first_level : int
+        Index of the first level to use.
+    last_level : int, None
+        Index of the last level to use. None if it is the last.
+    balanced : bool
+        Indicates if you want a balanced parallelization or not.
+
+    Returns
+    -------
+    Nes
+        Nes object with all the variables.
+    """
+    if not isinstance(nessy_list, list):
+        raise AttributeError("You must pass a list of NES objects or paths.")
+
+    if isinstance(nessy_list[0], str):
+        nessy_first = open_netcdf(nessy_list[0],
+                                  comm=comm,
+                                  parallel_method=parallel_method,
+                                  info=info,
+                                  avoid_first_hours=avoid_first_hours,
+                                  avoid_last_hours=avoid_last_hours,
+                                  first_level=first_level,
+                                  last_level=last_level,
+                                  balanced=balanced)
+        nessy_first.load()
+    else:
+        nessy_first = nessy_list[0]
+    for i, aux_nessy in enumerate(nessy_list[1:]):
+        if isinstance(aux_nessy, str):
+            nc_add = Dataset(filename=aux_nessy, mode='r')
+            for var_name, var_info in nc_add.variables.items():
+                if var_name not in DIM_VAR_NAMES:
+                    nessy_first.variables[var_name] = {}
+                    var_dims = var_info.dimensions
+                    # Read data in 4 dimensions
+                    if len(var_dims) < 2:
+                        data = var_info[:]
+                    elif len(var_dims) == 2:
+                        data = var_info[nessy_first.read_axis_limits['y_min']:nessy_first.read_axis_limits['y_max'],
+                                        nessy_first.read_axis_limits['x_min']:nessy_first.read_axis_limits['x_max']]
+                        data = data.reshape(1, 1, data.shape[-2], data.shape[-1])
+                    elif len(var_dims) == 3:
+                        if 'strlen' in var_dims:
+                            data = var_info[nessy_first.read_axis_limits['y_min']:nessy_first.read_axis_limits['y_max'],
+                                            nessy_first.read_axis_limits['x_min']:nessy_first.read_axis_limits['x_max'],
+                                            :]
+                            data_aux = np.empty(shape=(data.shape[0], data.shape[1]), dtype=object)
+                            for lat_n in range(data.shape[0]):
+                                for lon_n in range(data.shape[1]):
+                                    data_aux[lat_n, lon_n] = ''.join(
+                                        data[lat_n, lon_n].tobytes().decode('ascii').replace('\x00', ''))
+                            data = data_aux.reshape((1, 1, data_aux.shape[-2], data_aux.shape[-1]))
+                        else:
+                            data = var_info[nessy_first.read_axis_limits['t_min']:nessy_first.read_axis_limits['t_max'],
+                                            nessy_first.read_axis_limits['y_min']:nessy_first.read_axis_limits['y_max'],
+                                            nessy_first.read_axis_limits['x_min']:nessy_first.read_axis_limits['x_max']]
+                            data = data.reshape(data.shape[-3], 1, data.shape[-2], data.shape[-1])
+                    elif len(var_dims) == 4:
+                        data = var_info[nessy_first.read_axis_limits['t_min']:nessy_first.read_axis_limits['t_max'],
+                                        nessy_first.read_axis_limits['z_min']:nessy_first.read_axis_limits['z_max'],
+                                        nessy_first.read_axis_limits['y_min']:nessy_first.read_axis_limits['y_max'],
+                                        nessy_first.read_axis_limits['x_min']:nessy_first.read_axis_limits['x_max']]
+                    else:
+                        raise TypeError("{} data shape is not accepted".format(var_dims))
+
+                    nessy_first.variables[var_name]['data'] = data
+                    # Avoid some attributes
+                    for attrname in var_info.ncattrs():
+                        if attrname not in ['missing_value', '_FillValue']:
+                            value = getattr(var_info, attrname)
+                            if value in ['unitless', '-']:
+                                value = ''
+                            nessy_first.variables[var_name][attrname] = value
+            nc_add.close()
+
+        else:
+            nessy_first.concatenate(aux_nessy)
+
+    return nessy_first
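A usage sketch (illustrative paths; the files are assumed to hold variables of the same grid and period, which is what the function concatenates):

import nes

# Merge the variables of several files into a single Nes object.
nessy = nes.concatenate_netcdfs(['vars_group_1.nc', 'vars_group_2.nc'])
nessy.to_netcdf('all_vars.nc')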
+ +
\ No newline at end of file
diff --git a/docs/build/html/_modules/nes/methods/cell_measures.html b/docs/build/html/_modules/nes/methods/cell_measures.html
new file mode 100644
index 0000000000000000000000000000000000000000..2cde66b73b89bd08deb12d01d7a8e9d46c228d39
--- /dev/null
+++ b/docs/build/html/_modules/nes/methods/cell_measures.html
@@ -0,0 +1,398 @@
+nes.methods.cell_measures — NES 1.1.3 documentation

Source code for nes.methods.cell_measures

+#!/usr/bin/env python
+
+import numpy as np
+from copy import deepcopy
+
+
+
+[docs] +def calculate_grid_area(self): + """ + Get coordinate bounds and call function to calculate the area of each cell of a grid. + + Parameters + ---------- + self : nes.Nes + Source projection Nes Object. + """ + + # Create bounds if they do not exist + if self._lat_bnds is None or self._lon_bnds is None: + self.create_spatial_bounds() + + # Get spatial number of vertices + spatial_nv = self.lat_bnds['data'].shape[-1] + + # Reshape bounds + if spatial_nv == 2: + + aux_shape = (self.lat_bnds['data'].shape[0], self.lon_bnds['data'].shape[0], 4) + lon_bnds_aux = np.empty(aux_shape) + lon_bnds_aux[:, :, 0] = self.lon_bnds['data'][np.newaxis, :, 0] + lon_bnds_aux[:, :, 1] = self.lon_bnds['data'][np.newaxis, :, 1] + lon_bnds_aux[:, :, 2] = self.lon_bnds['data'][np.newaxis, :, 1] + lon_bnds_aux[:, :, 3] = self.lon_bnds['data'][np.newaxis, :, 0] + + lon_bnds = lon_bnds_aux + del lon_bnds_aux + + lat_bnds_aux = np.empty(aux_shape) + lat_bnds_aux[:, :, 0] = self.lat_bnds['data'][:, np.newaxis, 0] + lat_bnds_aux[:, :, 1] = self.lat_bnds['data'][:, np.newaxis, 0] + lat_bnds_aux[:, :, 2] = self.lat_bnds['data'][:, np.newaxis, 1] + lat_bnds_aux[:, :, 3] = self.lat_bnds['data'][:, np.newaxis, 1] + + lat_bnds = lat_bnds_aux + del lat_bnds_aux + + else: + lon_bnds = self.lon_bnds['data'] + lat_bnds = self.lat_bnds['data'] + + # Reshape bounds and assign as grid corner coordinates + grid_corner_lon = deepcopy(lon_bnds).reshape(lon_bnds.shape[0]*lon_bnds.shape[1], + lon_bnds.shape[2]) + grid_corner_lat = deepcopy(lat_bnds).reshape(lat_bnds.shape[0]*lat_bnds.shape[1], + lat_bnds.shape[2]) + + # Calculate cell areas + grid_area = calculate_cell_area(grid_corner_lon, grid_corner_lat, + earth_radius_minor_axis=self.earth_radius[0], + earth_radius_major_axis=self.earth_radius[1]) + + return grid_area
+ + + +
+[docs] +def calculate_geometry_area(geometry_list, earth_radius_minor_axis=6356752.3142, + earth_radius_major_axis=6378137.0): + """ + Get coordinate bounds and call function to calculate the area of each cell of a set of geometries. + + Parameters + ---------- + geometry_list : List + List with polygon geometries. + earth_radius_minor_axis : float + Radius of the minor axis of the Earth. + earth_radius_major_axis : float + Radius of the major axis of the Earth. + """ + + geometry_area = np.empty(shape=(len(geometry_list,))) + + for geom_ind in range(0, len(geometry_list)): + + # Calculate the area of each geometry in multipolygon and collection objects + if geometry_list[geom_ind].type in ['MultiPolygon', 'GeometryCollection']: + multi_geom_area = 0 + for multi_geom_ind in range(0, len(geometry_list[geom_ind].geoms)): + if geometry_list[geom_ind].geoms[multi_geom_ind].type == 'Point': + continue + geometry_corner_lon, geometry_corner_lat = geometry_list[geom_ind].geoms[multi_geom_ind].exterior.coords.xy + geometry_corner_lon = np.array(geometry_corner_lon) + geometry_corner_lat = np.array(geometry_corner_lat) + geom_area = mod_huiliers_area(geometry_corner_lon, geometry_corner_lat) + multi_geom_area += geom_area + geometry_area[geom_ind] = multi_geom_area * earth_radius_minor_axis * earth_radius_major_axis + + # Calculate the area of each geometry + else: + geometry_corner_lon, geometry_corner_lat = geometry_list[geom_ind].exterior.coords.xy + geometry_corner_lon = np.array(geometry_corner_lon) + geometry_corner_lat = np.array(geometry_corner_lat) + geom_area = mod_huiliers_area(geometry_corner_lon, geometry_corner_lat) + geometry_area[geom_ind] = geom_area * earth_radius_minor_axis * earth_radius_major_axis + + return geometry_area
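A worked micro-example (shapely, a GeoPandas dependency, is assumed available; the import path follows this module's location). A 1 by 1 degree cell at the equator spans roughly 111 km on each side, so its area should be on the order of 1.2e10 square metres:

from shapely.geometry import Polygon
from nes.methods.cell_measures import calculate_geometry_area

cell = Polygon([(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)])
print(calculate_geometry_area([cell]))  # ~[1.23e+10] square metres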
+ + + +
+[docs]
+def calculate_cell_area(grid_corner_lon, grid_corner_lat,
+                        earth_radius_minor_axis=6356752.3142,
+                        earth_radius_major_axis=6378137.0):
+    """
+    Calculate the area of each cell of a grid.
+
+    Parameters
+    ----------
+    grid_corner_lon : np.array
+        Array with the longitude bounds of the grid.
+    grid_corner_lat : np.array
+        Array with the latitude bounds of the grid.
+    earth_radius_minor_axis : float
+        Radius of the minor axis of the Earth.
+    earth_radius_major_axis : float
+        Radius of the major axis of the Earth.
+    """
+
+    # Calculate area for each grid cell
+    n_cells = grid_corner_lon.shape[0]
+    area = np.empty(shape=(n_cells,))
+    for i in range(0, n_cells):
+        area[i] = mod_huiliers_area(grid_corner_lon[i], grid_corner_lat[i])
+
+    return area*earth_radius_minor_axis*earth_radius_major_axis
+ + + +
+[docs]
+def mod_huiliers_area(cell_corner_lon, cell_corner_lat):
+    """
+    Calculate the area of each cell according to l'Huilier's theorem.
+    Reference: CDO (https://earth.bsc.es/gitlab/ces/cdo/)
+
+    Parameters
+    ----------
+    cell_corner_lon : np.array
+        Longitude boundaries of each cell.
+    cell_corner_lat : np.array
+        Latitude boundaries of each cell.
+    """
+
+    area_sum = 0
+
+    # Get points 0 (bottom left) and 1 (bottom right) in Earth coordinates
+    point_0 = lon_lat_to_cartesian(cell_corner_lon[0], cell_corner_lat[0], earth_radius_major_axis=1)
+    point_1 = lon_lat_to_cartesian(cell_corner_lon[1], cell_corner_lat[1], earth_radius_major_axis=1)
+    point_0, point_1 = point_0[0], point_1[0]
+
+    # Get number of vertices (skip the last corner if it repeats the first one)
+    if cell_corner_lat[0] == cell_corner_lat[-1]:
+        spatial_nv = len(cell_corner_lon) - 1
+    else:
+        spatial_nv = len(cell_corner_lon)
+
+    # Decompose the cell into a fan of triangles anchored at point 0
+    for i in range(2, spatial_nv):
+
+        # Get point 2 (top right) in Earth coordinates
+        point_2 = lon_lat_to_cartesian(cell_corner_lon[i], cell_corner_lat[i], earth_radius_major_axis=1)
+        point_2 = point_2[0]
+
+        # Calculate area of triangle between points 0, 1 and 2
+        area_sum += tri_area(point_0, point_1, point_2)
+
+        # Copy to calculate the area of the next triangle of the fan
+        if i < (spatial_nv - 1):
+            point_1 = deepcopy(point_2)
+
+    return area_sum
+ + + +
+[docs] +def tri_area(point_0, point_1, point_2): + """ + Calculate area between three points that form a triangle. + Reference: CDO (https://earth.bsc.es/gitlab/ces/cdo/) + + Parameters + ---------- + point_0 : np.array + Position of first point in cartesian coordinates. + point_1 : np.array + Position of second point in cartesian coordinates. + point_2 : np.array + Position of third point in cartesian coordinates. + """ + + # Get length of side a (between point 0 and 1) + tmp_vec = cross_product(point_0, point_1) + sina = norm(tmp_vec) + a = np.arcsin(sina) + + # Get length of side b (between point 0 and 2) + tmp_vec = cross_product(point_0, point_2) + sinb = norm(tmp_vec) + b = np.arcsin(sinb) + + # Get length of side c (between point 1 and 2) + tmp_vec = cross_product(point_2, point_1) + sinc = norm(tmp_vec) + c = np.arcsin(sinc) + + # Calculate area + s = 0.5*(a+b+c) + t = np.tan(s*0.5) * np.tan((s - a)*0.5) * np.tan((s - b)*0.5) * np.tan((s - c)*0.5) + area = np.fabs(4.0 * np.arctan(np.sqrt(np.fabs(t)))) + + return area
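A quick sanity check of ``tri_area`` (the import path follows this module's location): on the unit sphere the returned value is the spherical excess, i.e. the solid angle, so a triangle covering one octant of the sphere has area 4*pi/8 = pi/2.

import numpy as np
from nes.methods.cell_measures import tri_area, lon_lat_to_cartesian

p0 = lon_lat_to_cartesian(0, 0, earth_radius_major_axis=1)[0]   # (1, 0, 0)
p1 = lon_lat_to_cartesian(90, 0, earth_radius_major_axis=1)[0]  # (0, 1, 0)
p2 = lon_lat_to_cartesian(0, 90, earth_radius_major_axis=1)[0]  # (0, 0, 1)
print(tri_area(p0, p1, p2), np.pi / 2)  # both ~1.5708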
+ + + +
+[docs] +def cross_product(a, b): + """ + Calculate cross product between two points. + + Parameters + ---------- + a : np.array + Position of point a in cartesian coordinates. + b : np.array + Position of point b in cartesian coordinates. + """ + + return [a[1]*b[2] - a[2]*b[1], + a[2]*b[0] - a[0]*b[2], + a[0]*b[1] - a[1]*b[0]]
+ + + +
+[docs]
+def norm(cp):
+    """
+    Calculate the norm (magnitude) of the result of the cross product operation.
+
+    Parameters
+    ----------
+    cp : np.array
+        Cross product between two points.
+    """
+
+    return np.sqrt(cp[0]*cp[0] + cp[1]*cp[1] + cp[2]*cp[2])
+ + + +
+[docs]
+def lon_lat_to_cartesian(lon, lat, earth_radius_major_axis=6378137.0):
+    """
+    Convert lon, lat coordinates of a point on a sphere to cartesian (x, y, z) coordinates.
+
+    Parameters
+    ----------
+    lon : np.array
+        Longitude values.
+    lat : np.array
+        Latitude values.
+    earth_radius_major_axis : float
+        Radius of the major axis of the Earth.
+    """
+
+    lon_r = np.radians(lon)
+    lat_r = np.radians(lat)
+
+    x = earth_radius_major_axis * np.cos(lat_r) * np.cos(lon_r)
+    y = earth_radius_major_axis * np.cos(lat_r) * np.sin(lon_r)
+    z = earth_radius_major_axis * np.sin(lat_r)
+
+    return np.column_stack([x, y, z])
+ +
\ No newline at end of file
diff --git a/docs/build/html/_modules/nes/methods/horizontal_interpolation.html b/docs/build/html/_modules/nes/methods/horizontal_interpolation.html
new file mode 100644
index 0000000000000000000000000000000000000000..7ef793d71fe4893aa3968faeff57bcb1d7341c88
--- /dev/null
+++ b/docs/build/html/_modules/nes/methods/horizontal_interpolation.html
@@ -0,0 +1,887 @@
+nes.methods.horizontal_interpolation — NES 1.1.3 documentation

Source code for nes.methods.horizontal_interpolation

+#!/usr/bin/env python
+
+import sys
+import warnings
+import numpy as np
+import pandas as pd
+from geopandas import GeoSeries
+import os
+import nes
+from mpi4py import MPI
+from scipy import spatial
+from filelock import FileLock
+from datetime import datetime
+from warnings import warn
+import copy
+import pyproj
+import gc
+import psutil
+
+# CONSTANTS
+NEAREST_OPTS = ['NearestNeighbour', 'NearestNeighbours', 'nn', 'NN']
+CONSERVATIVE_OPTS = ['Conservative', 'Area_Conservative', 'cons', 'conservative', 'area']
+
+
+
+[docs]
+def interpolate_horizontal(self, dst_grid, weight_matrix_path=None, kind='NearestNeighbour', n_neighbours=4,
+                           info=False, to_providentia=False, only_create_wm=False, wm=None, flux=False):
+    """
+    Horizontal interpolation from one grid to another.
+
+    Parameters
+    ----------
+    self : nes.Nes
+        Source projection Nes Object.
+    dst_grid : nes.Nes
+        Final projection Nes object.
+    weight_matrix_path : str, None
+        Path to the weight matrix to read/create.
+    kind : str
+        Kind of horizontal interpolation. Accepted values: ['NearestNeighbour', 'Conservative'].
+    n_neighbours : int
+        Used if kind == NearestNeighbour. Number of nearest neighbours to interpolate. Default: 4.
+    info : bool
+        Indicates if you want to print extra info during the interpolation process.
+    to_providentia : bool
+        Indicates if we want the interpolated grid in Providentia format.
+    only_create_wm : bool
+        Indicates if you want to only create the Weight Matrix.
+    wm : Nes
+        Weight matrix Nes file.
+    flux : bool
+        Indicates if you want to calculate the weight matrix for flux variables.
+    """
+    if info and self.master:
+        print("Creating Weight Matrix")
+    # Obtain weight matrix
+    if self.parallel_method == 'T':
+        weights, idx = get_weights_idx_t_axis(self, dst_grid, weight_matrix_path, kind, n_neighbours,
+                                              only_create_wm, wm, flux)
+    elif self.parallel_method in ['Y', 'X']:
+        weights, idx = get_weights_idx_xy_axis(self, dst_grid, weight_matrix_path, kind, n_neighbours,
+                                               only_create_wm, wm, flux)
+    else:
+        raise NotImplementedError(
+            "Parallel method {0} is not implemented yet for horizontal interpolations. Use 'T'".format(
+                self.parallel_method))
+    if info and self.master:
+        print("Weight Matrix done!")
+    if only_create_wm:
+        # weights for only_create_wm is the WM NES object
+        return weights
+
+    # Mask the -999 sentinel values in the indices and weights
+    idx = np.ma.masked_array(idx, mask=idx == -999)
+    weights = np.ma.masked_array(weights, mask=weights == -999)
+
+    # Copy NES
+    final_dst = dst_grid.copy()
+    final_dst.set_communicator(dst_grid.comm)
+
+    # Remove original file information
+    final_dst.__ini_path = None
+    final_dst.netcdf = None
+    final_dst.dataset = None
+
+    # Return final_dst
+    final_dst.lev = self.lev
+    final_dst._lev = self._lev
+    final_dst.time = self.time
+    final_dst._time = self._time
+    final_dst.hours_start = self.hours_start
+    final_dst.hours_end = self.hours_end
+
+    if info and self.master:
+        print("Applying weights")
+    # Apply weights
+    for var_name, var_info in self.variables.items():
+        if info and self.master:
+            print("\t{var} horizontal interpolation".format(var=var_name))
+            sys.stdout.flush()
+        src_shape = var_info['data'].shape
+        if isinstance(dst_grid, nes.PointsNes):
+            dst_shape = (src_shape[0], src_shape[1], idx.shape[-1])
+        else:
+            dst_shape = (src_shape[0], src_shape[1], idx.shape[-2], idx.shape[-1])
+        # Creating new variable without data
+        final_dst.variables[var_name] = {attr_name: attr_value for attr_name, attr_value in var_info.items()
+                                         if attr_name != 'data'}
+        # Creating empty data
+        final_dst.variables[var_name]['data'] = np.empty(dst_shape)
+
+        for time in range(dst_shape[0]):
+            for lev in range(dst_shape[1]):
+                src_aux = get_src_data(self.comm, var_info['data'][time, lev], idx, self.parallel_method)
+                final_dst.variables[var_name]['data'][time, lev] = np.sum(weights * src_aux, axis=1)
+
+        if isinstance(dst_grid, nes.PointsNes):
+            # Removing level axis
+            if src_shape[1] != 1:
+                raise IndexError("Data with vertical levels cannot be interpolated to points")
+            final_dst.variables[var_name]['data'] = final_dst.variables[var_name]['data'].reshape(
+                (src_shape[0], idx.shape[-1]))
+
+    if isinstance(dst_grid, nes.PointsNesGHOST) and not to_providentia:
+        final_dst = final_dst.to_points()
+
+    final_dst.global_attrs = self.global_attrs
+
+    if info and self.master:
+        print("Formatting")
+
+    if to_providentia:
+        # self = experiment to interpolate (regular, rotated, etc.)
+        # final_dst = interpolated experiment (points)
+        if isinstance(final_dst, nes.PointsNes):
+            model_centre_lat, model_centre_lon = self.create_providentia_exp_centre_coordinates()
+            grid_edge_lat, grid_edge_lon = self.create_providentia_exp_grid_edge_coordinates()
+            final_dst = final_dst.to_providentia(model_centre_lon=model_centre_lon,
+                                                 model_centre_lat=model_centre_lat,
+                                                 grid_edge_lon=grid_edge_lon,
+                                                 grid_edge_lat=grid_edge_lat)
+        else:
+            msg = "The final projection must be points to interpolate an experiment and get it in Providentia format."
+            warnings.warn(msg)
+            sys.stderr.flush()
+    else:
+        # Convert dimensions (time, lev, lat, lon) or (time, lat, lon) to (time, station) for interpolated variables
+        # and reshape data
+        if isinstance(final_dst, nes.PointsNes):
+            for var_name, var_info in final_dst.variables.items():
+                if len(var_info['dimensions']) != len(var_info['data'].shape):
+                    final_dst.variables[var_name]['dimensions'] = ('time', 'station')
+
+    return final_dst
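A usage sketch (illustrative paths; ``interpolate_horizontal`` is assumed to be exposed as a method on ``Nes`` objects, as its ``self`` parameter suggests):

import nes

src = nes.open_netcdf('source_data.nc')
src.load()
dst_grid = nes.open_netcdf('destination_grid.nc')

interp = src.interpolate_horizontal(dst_grid, kind='NearestNeighbour', n_neighbours=4)
interp.to_netcdf('interpolated_data.nc')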
+ + + +
+[docs] +def get_src_data(comm, var_data, idx, parallel_method): + """ + To obtain the needed src data to interpolate. + + Parameters + ---------- + comm : MPI.Communicator. + var_data : np.array + Rank source data. + idx : np.array + Index of the needed data in a 2D flatten way. + parallel_method: str + Source parallel method. + + Returns + ------- + np.array + Flatten source needed data. + """ + + if parallel_method == 'T': + var_data = var_data.flatten() + else: + var_data = comm.gather(var_data, root=0) + if comm.Get_rank() == 0: + if parallel_method == 'Y': + axis = 0 + elif parallel_method == 'X': + axis = 1 + else: + raise NotImplementedError(parallel_method) + var_data = np.concatenate(var_data, axis=axis) + var_data = var_data.flatten() + + var_data = comm.bcast(var_data) + + var_data = np.pad(var_data, [1, 1], 'constant', constant_values=np.nan).take(idx + 1, mode='clip') + #var_data = np.take(var_data, idx) + + return var_data
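A micro-demo of the pad-and-clip gather used above: shifting the masked indices by one and padding the flat data with NaN maps every masked (-999) index onto the NaN sentinel.

import numpy as np

var_data = np.array([10., 20., 30.])
idx = np.ma.masked_array([0, 2, -999], mask=[False, False, True])
padded = np.pad(var_data, [1, 1], 'constant', constant_values=np.nan)
print(padded.take(idx + 1, mode='clip'))  # [10. 30. nan]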
+ + + +# noinspection DuplicatedCode +
+[docs]
+def get_weights_idx_t_axis(self, dst_grid, weight_matrix_path, kind, n_neighbours, only_create, wm, flux):
+    """
+    To obtain the weights and source data index through the T axis.
+
+    Parameters
+    ----------
+    self : nes.Nes
+        Source projection Nes Object.
+    dst_grid : nes.Nes
+        Final projection Nes object.
+    weight_matrix_path : str, None
+        Path to the weight matrix to read/create.
+    kind : str
+        Kind of horizontal interpolation. Accepted values: ['NearestNeighbour', 'Conservative'].
+    n_neighbours : int
+        Used if kind == NearestNeighbour. Number of nearest neighbours to interpolate. Default: 4.
+    only_create : bool
+        Indicates if you want to only create the Weight Matrix.
+    wm : Nes
+        Weight matrix Nes file.
+    flux : bool
+        Indicates if you want to calculate the weight matrix for flux variables.
+
+    Returns
+    -------
+    tuple
+        Weights and source data index.
+    """
+    if wm is not None:
+        weight_matrix = wm
+
+    elif weight_matrix_path is not None:
+        with FileLock(weight_matrix_path + "{0:03d}.lock".format(self.rank)):
+            if os.path.isfile(weight_matrix_path):
+                if self.master:
+                    weight_matrix = read_weight_matrix(weight_matrix_path, comm=MPI.COMM_SELF)
+                else:
+                    weight_matrix = True
+                if kind in NEAREST_OPTS:
+                    if self.master:
+                        if len(weight_matrix.lev['data']) != n_neighbours:
+                            warn("The selected weight matrix does not have the same number of nearest neighbours. " +
+                                 "Re-calculating it, but not saving it.")
+                            sys.stderr.flush()
+                            weight_matrix = create_nn_weight_matrix(self, dst_grid, n_neighbours=n_neighbours)
+                    else:
+                        weight_matrix = True
+
+            else:
+                if self.master:
+                    if kind in NEAREST_OPTS:
+                        weight_matrix = create_nn_weight_matrix(self, dst_grid, n_neighbours=n_neighbours,
+                                                                wm_path=weight_matrix_path)
+                    elif kind in CONSERVATIVE_OPTS:
+                        weight_matrix = create_area_conservative_weight_matrix(
+                            self, dst_grid, wm_path=weight_matrix_path, flux=flux)
+                    else:
+                        raise NotImplementedError(kind)
+                else:
+                    weight_matrix = True
+
+        if os.path.exists(weight_matrix_path + "{0:03d}.lock".format(self.rank)):
+            os.remove(weight_matrix_path + "{0:03d}.lock".format(self.rank))
+    else:
+        if self.master:
+            if kind in NEAREST_OPTS:
+                weight_matrix = create_nn_weight_matrix(self, dst_grid, n_neighbours=n_neighbours)
+            elif kind in CONSERVATIVE_OPTS:
+                weight_matrix = create_area_conservative_weight_matrix(self, dst_grid, flux=flux)
+            else:
+                raise NotImplementedError(kind)
+        else:
+            weight_matrix = True
+
+    if only_create:
+        return weight_matrix, None
+
+    if self.master:
+        if kind in NEAREST_OPTS:
+            # Normalize to 1
+            weights = np.array(np.array(weight_matrix.variables['weight']['data'], dtype=np.float64) /
+                               np.array(weight_matrix.variables['weight']['data'], dtype=np.float64).sum(axis=1),
+                               dtype=np.float64)
+        else:
+            weights = np.array(weight_matrix.variables['weight']['data'], dtype=np.float64)
+        idx = np.array(weight_matrix.variables['idx']['data'][0], dtype=int)
+    else:
+        weights = None
+        idx = None
+
+    weights = self.comm.bcast(weights, root=0)
+    idx = self.comm.bcast(idx, root=0)
+
+    return weights, idx
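A numeric sketch of the normalisation step above: the weight data has shape (1, n_neighbours, ny, nx), so summing over axis 1 (the neighbours axis) and dividing makes the weights of each destination cell add up to 1.

import numpy as np

w = np.array([[[[1.0]], [[3.0]]]])  # shape (1, 2, 1, 1): two neighbours, one cell
w_norm = w / w.sum(axis=1)          # broadcast over the neighbours axis
print(w_norm.ravel())               # [0.25 0.75]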
+ + + +# noinspection DuplicatedCode +
+[docs]
+def get_weights_idx_xy_axis(self, dst_grid, weight_matrix_path, kind, n_neighbours, only_create, wm, flux):
+    """
+    To obtain the weights and source data index through the X or Y axis.
+
+    Parameters
+    ----------
+    self : nes.Nes
+        Source projection Nes Object.
+    dst_grid : nes.Nes
+        Final projection Nes object.
+    weight_matrix_path : str, None
+        Path to the weight matrix to read/create.
+    kind : str
+        Kind of horizontal interpolation. Accepted values: ['NearestNeighbour', 'Conservative'].
+    n_neighbours : int
+        Used if kind == NearestNeighbour. Number of nearest neighbours to interpolate. Default: 4.
+    only_create : bool
+        Indicates if you want to only create the Weight Matrix.
+    wm : Nes
+        Weight matrix Nes file.
+    flux : bool
+        Indicates if you want to calculate the weight matrix for flux variables.
+
+    Returns
+    -------
+    tuple
+        Weights and source data index.
+    """
+    if isinstance(dst_grid, nes.PointsNes) and weight_matrix_path is not None:
+        if self.master:
+            warn("To point weight matrix cannot be saved.")
+            sys.stderr.flush()
+        weight_matrix_path = None
+
+    if wm is not None:
+        weight_matrix = wm
+
+    elif weight_matrix_path is not None:
+        with FileLock(weight_matrix_path + "{0:03d}.lock".format(self.rank)):
+            if os.path.isfile(weight_matrix_path):
+                if self.master:
+                    weight_matrix = read_weight_matrix(weight_matrix_path, comm=MPI.COMM_SELF)
+                else:
+                    weight_matrix = True
+                if kind in NEAREST_OPTS:
+                    if self.master:
+                        if len(weight_matrix.lev['data']) != n_neighbours:
+                            warn("The selected weight matrix does not have the same number of nearest neighbours. " +
+                                 "Re-calculating it, but not saving it.")
+                            sys.stderr.flush()
+                            weight_matrix = create_nn_weight_matrix(self, dst_grid, n_neighbours=n_neighbours)
+                    else:
+                        weight_matrix = True
+            else:
+                if kind in NEAREST_OPTS:
+                    if self.master:
+                        weight_matrix = create_nn_weight_matrix(self, dst_grid, n_neighbours=n_neighbours,
+                                                                wm_path=weight_matrix_path)
+                    else:
+                        weight_matrix = True
+                elif kind in CONSERVATIVE_OPTS:
+                    weight_matrix = create_area_conservative_weight_matrix(
+                        self, dst_grid, wm_path=weight_matrix_path, flux=flux)
+                else:
+                    raise NotImplementedError(kind)
+
+        if os.path.exists(weight_matrix_path + "{0:03d}.lock".format(self.rank)):
+            os.remove(weight_matrix_path + "{0:03d}.lock".format(self.rank))
+    else:
+        if kind in NEAREST_OPTS:
+            weight_matrix = create_nn_weight_matrix(self, dst_grid, n_neighbours=n_neighbours)
+        elif kind in CONSERVATIVE_OPTS:
+            weight_matrix = create_area_conservative_weight_matrix(self, dst_grid, flux=flux)
+        else:
+            raise NotImplementedError(kind)
+
+    if only_create:
+        return weight_matrix, None
+
+    # Normalize to 1
+    if self.master:
+        if kind in NEAREST_OPTS:
+            weights = np.array(np.array(weight_matrix.variables['weight']['data'], dtype=np.float64) /
+                               np.array(weight_matrix.variables['weight']['data'], dtype=np.float64).sum(axis=1),
+                               dtype=np.float64)
+        else:
+            weights = np.array(weight_matrix.variables['weight']['data'], dtype=np.float64)
+        idx = np.array(weight_matrix.variables['idx']['data'][0], dtype=np.int64)
+    else:
+        weights = None
+        idx = None
+
+    weights = self.comm.bcast(weights, root=0)
+    idx = self.comm.bcast(idx, root=0)
+
+    # Keep only the portion of the weight matrix that this rank writes
+    weights = weights[:, :, dst_grid.write_axis_limits['y_min']:dst_grid.write_axis_limits['y_max'],
+                      dst_grid.write_axis_limits['x_min']:dst_grid.write_axis_limits['x_max']]
+    idx = idx[:, dst_grid.write_axis_limits['y_min']:dst_grid.write_axis_limits['y_max'],
+              dst_grid.write_axis_limits['x_min']:dst_grid.write_axis_limits['x_max']]
+
+    return weights, idx
+ + + +
+[docs] +def read_weight_matrix(weight_matrix_path, comm=None, parallel_method='T'): + """ + Read weight matrix. + + Parameters + ---------- + weight_matrix_path : str + Path of the weight matrix. + comm : MPI.Communicator + Communicator to read the weight matrix. + parallel_method : str + Nes parallel method to read the weight matrix. + + Returns + ------- + nes.Nes + Weight matrix. + """ + weight_matrix = nes.open_netcdf(path=weight_matrix_path, comm=comm, parallel_method=parallel_method, balanced=True) + weight_matrix.load() + + # In previous versions of NES weight was called inverse_dists + if 'inverse_dists' in weight_matrix.variables.keys(): + weight_matrix.variables['weight'] = weight_matrix.variables['inverse_dists'] + + return weight_matrix
+ + + +
+[docs]
+def create_nn_weight_matrix(self, dst_grid, n_neighbours=4, wm_path=None, info=False):
+    """
+    To create the weight matrix with the nearest neighbours method.
+
+    Parameters
+    ----------
+    self : nes.Nes
+        Source projection Nes Object.
+    dst_grid : nes.Nes
+        Final projection Nes object.
+    n_neighbours : int
+        Used if kind == NearestNeighbour. Number of nearest neighbours to interpolate. Default: 4.
+    wm_path : str
+        Path where the weight matrix is written.
+    info : bool
+        Indicates if you want to print extra info during the methods process.
+
+    Returns
+    -------
+    nes.Nes
+        Weight matrix.
+    """
+
+    if info and self.master:
+        print("\tCreating Nearest Neighbour Weight Matrix with {0} neighbours".format(n_neighbours))
+        sys.stdout.flush()
+    # Source
+    src_lat = np.array(self._lat['data'], dtype=np.float32)
+    src_lon = np.array(self._lon['data'], dtype=np.float32)
+
+    # 1D to 2D coordinates
+    if len(src_lon.shape) == 1:
+        src_lon, src_lat = np.meshgrid(src_lon, src_lat)
+
+    # Destination
+    dst_lat = np.array(dst_grid._lat['data'], dtype=np.float32)
+    dst_lon = np.array(dst_grid._lon['data'], dtype=np.float32)
+
+    if isinstance(dst_grid, nes.PointsNes):
+        dst_lat = np.expand_dims(dst_grid._lat['data'], axis=0)
+        dst_lon = np.expand_dims(dst_grid._lon['data'], axis=0)
+    else:
+        # 1D to 2D coordinates
+        if len(dst_lon.shape) == 1:
+            dst_lon, dst_lat = np.meshgrid(dst_lon, dst_lat)
+
+    # Calculate the N nearest neighbour inverse-distance weights (and indices)
+    # from the grid cell centres of model 1 to each grid cell centre of model 2.
+    # The geographic longitude/latitude coordinates of both models are first
+    # converted to cartesian ECEF (Earth Centred, Earth Fixed) coordinates
+    # before calculating the distances.
+    src_mod_xy = lon_lat_to_cartesian_ecef(src_lon.flatten(), src_lat.flatten())
+    dst_mod_xy = lon_lat_to_cartesian_ecef(dst_lon.flatten(), dst_lat.flatten())
+
+    # Generate a KD-tree using the model 1 coordinates (i.e. the model grid you
+    # are interpolating from)
+    src_tree = spatial.cKDTree(src_mod_xy)
+
+    # Get the n-neighbour nearest distances/indices (ravel form) of the model 1
+    # grid cell centres from each model 2 grid cell centre
+    dists, idx = src_tree.query(dst_mod_xy, k=n_neighbours)
+
+    weight_matrix = dst_grid.copy()
+    weight_matrix.time = [datetime(year=2000, month=1, day=1, hour=0, second=0, microsecond=0)]
+    weight_matrix._time = [datetime(year=2000, month=1, day=1, hour=0, second=0, microsecond=0)]
+    weight_matrix._time_bnds = None
+    weight_matrix.time_bnds = None
+    weight_matrix.last_level = None
+    weight_matrix.first_level = 0
+    weight_matrix.hours_start = 0
+    weight_matrix.hours_end = 0
+
+    weight_matrix.set_communicator(MPI.COMM_SELF)
+    # Take the reciprocals of the nearest-neighbour distances
+    dists[dists < 1] = 1
+    inverse_dists = np.reciprocal(dists)
+
+    inverse_dists_transf = inverse_dists.T.reshape((1, n_neighbours, dst_lon.shape[0], dst_lon.shape[1]))
+    weight_matrix.variables['weight'] = {'data': inverse_dists_transf, 'units': 'm'}
+    idx_transf = idx.T.reshape((1, n_neighbours, dst_lon.shape[0], dst_lon.shape[1]))
+    weight_matrix.variables['idx'] = {'data': idx_transf, 'units': ''}
+    weight_matrix.lev = {'data': np.arange(inverse_dists_transf.shape[1]), 'units': ''}
+    weight_matrix._lev = {'data': np.arange(inverse_dists_transf.shape[1]), 'units': ''}
+    if wm_path is not None:
+        weight_matrix.to_netcdf(wm_path)
+
+    return weight_matrix
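A micro-demo of the k-d tree query above (``scipy.spatial.cKDTree``, imported at the top of this module; 2-D points are used for brevity, while the function works on 3-D ECEF coordinates): it returns the distances and flat indices of the ``k`` nearest source points for each destination point.

import numpy as np
from scipy import spatial

src_points = np.array([[0., 0.], [1., 0.], [0., 1.]])
dst_points = np.array([[0.2, 0.1]])
dists, idx = spatial.cKDTree(src_points).query(dst_points, k=2)
print(idx)  # [[0 1]]: nearest and second-nearest source points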
+ + + +
+[docs]
+def create_area_conservative_weight_matrix(self, dst_nes, wm_path=None, flux=False, info=False):
+    """
+    Create the weight matrix using the area conservative method.
+
+    Parameters
+    ----------
+    self : nes.Nes
+        Source projection Nes object.
+    dst_nes : nes.Nes
+        Final projection Nes object.
+    wm_path : str
+        Path where the weight matrix will be written.
+    flux : bool
+        Indicates if you want to calculate the weight matrix for flux variables.
+    info : bool
+        Indicates if you want to print extra info during the process.
+
+    Returns
+    -------
+    nes.Nes
+        Weight matrix.
+    """
+    if info and self.master:
+        print("\tCreating area conservative Weight Matrix")
+        sys.stdout.flush()
+
+    my_crs = pyproj.CRS.from_proj4("+proj=latlon")  # Common projection for both shapefiles
+
+    # Get a portion of the destination grid
+    if dst_nes.shapefile is None:
+        dst_nes.create_shapefile()
+    dst_grid = copy.deepcopy(dst_nes.shapefile)
+
+    # Formatting destination grid
+    dst_grid.to_crs(crs=my_crs, inplace=True)
+    dst_grid['FID_dst'] = dst_grid.index
+
+    # Preparing source grid
+    if self.shapefile is None:
+        self.create_shapefile()
+    src_grid = copy.deepcopy(self.shapefile)
+
+    # Formatting source grid
+    src_grid.to_crs(crs=my_crs, inplace=True)
+
+    # Serialize the index intersection to avoid memory problems
+    if self.size > 1 and self.parallel_method != 'T':
+        src_grid = self.comm.gather(src_grid, root=0)
+        dst_grid = self.comm.gather(dst_grid, root=0)
+        if self.master:
+            src_grid = pd.concat(src_grid)
+            dst_grid = pd.concat(dst_grid)
+    if self.master:
+        src_grid['FID_src'] = src_grid.index
+        src_grid = src_grid.reset_index()
+        dst_grid = dst_grid.reset_index()
+        fid_src, fid_dst = dst_grid.sindex.query_bulk(src_grid.geometry, predicate='intersects')
+
+        # Calculate intersected areas and fractions
+        intersection_df = pd.DataFrame(columns=["FID_src", "FID_dst"])
+
+        intersection_df['FID_src'] = np.array(src_grid.loc[fid_src, 'FID_src'], dtype=np.uint32)
+        intersection_df['FID_dst'] = np.array(dst_grid.loc[fid_dst, 'FID_dst'], dtype=np.uint32)
+
+        intersection_df['geometry_src'] = src_grid.loc[fid_src, 'geometry'].values
+        intersection_df['geometry_dst'] = dst_grid.loc[fid_dst, 'geometry'].values
+        del src_grid, dst_grid, fid_src, fid_dst
+        # Split the DataFrame into smaller chunks in order to scatter the data among the processes
+        intersection_df = np.array_split(intersection_df, self.size)
+    else:
+        intersection_df = None
+
+    intersection_df = self.comm.scatter(intersection_df, root=0)
+
+    if info and self.master:
+        print("\t\tGrids created and ready to interpolate")
+        sys.stdout.flush()
+
+    # No Warnings Zone: shapely may warn on degenerate intersections
+    warnings.filterwarnings('ignore')
+    # The weight is the fraction of the source cell area covered by the intersection with the destination cell.
+    # For flux variables it is additionally scaled by the ratio of the source and destination cell areas.
+    if flux:
+        intersection_df['weight'] = np.array(intersection_df.apply(
+            lambda x: (x['geometry_src'].intersection(x['geometry_dst']).buffer(0).area / x['geometry_src'].area) *
+                      (nes.Nes.calculate_geometry_area([x['geometry_src']])[0] /
+                       nes.Nes.calculate_geometry_area([x['geometry_dst']])[0]),
+            axis=1), dtype=np.float64)
+    else:
+        intersection_df['weight'] = np.array(intersection_df.apply(
+            lambda x: x['geometry_src'].intersection(x['geometry_dst']).buffer(0).area / x['geometry_src'].area,
+            axis=1), dtype=np.float64)
+
+    intersection_df.drop(columns=["geometry_src", "geometry_dst"], inplace=True)
+    gc.collect()
+    warnings.filterwarnings('default')
+
+    # Format & Clean
+    if info and self.master:
+        print("\t\tWeights calculated. Formatting weight matrix.")
+        sys.stdout.flush()
+
+    # Initialising weight matrix
+    if self.parallel_method != 'T':
+        intersection_df = self.comm.gather(intersection_df, root=0)
+    if self.master:
+        if self.parallel_method != 'T':
+            intersection_df = pd.concat(intersection_df)
+        intersection_df = intersection_df.set_index(
+            ['FID_dst', intersection_df.groupby('FID_dst').cumcount()]).rename_axis(('FID', 'level')).sort_index()
+        intersection_df.rename(columns={"FID_src": "idx"}, inplace=True)
+        weight_matrix = dst_nes.copy()
+        weight_matrix.time = [datetime(year=2000, month=1, day=1, hour=0, second=0, microsecond=0)]
+        weight_matrix._time = [datetime(year=2000, month=1, day=1, hour=0, second=0, microsecond=0)]
+        weight_matrix._time_bnds = None
+        weight_matrix.time_bnds = None
+        weight_matrix.last_level = None
+        weight_matrix.first_level = 0
+        weight_matrix.hours_start = 0
+        weight_matrix.hours_end = 0
+
+        weight_matrix.set_communicator(MPI.COMM_SELF)
+
+        weight_matrix.set_levels({'data': np.arange(intersection_df.index.get_level_values('level').max() + 1),
+                                  'dimensions': ('lev',),
+                                  'units': '',
+                                  'positive': 'up'})
+
+        # Creating empty weight matrix variables
+        if len(weight_matrix._lat['data'].shape) == 1:
+            shape = (1, len(weight_matrix.lev['data']),
+                     weight_matrix._lat['data'].shape[0], weight_matrix._lon['data'].shape[0],)
+            shape_flat = (1, len(weight_matrix.lev['data']),
+                          weight_matrix._lat['data'].shape[0] * weight_matrix._lon['data'].shape[0],)
+        else:
+            shape = (1, len(weight_matrix.lev['data']),
+                     weight_matrix._lat['data'].shape[0], weight_matrix._lat['data'].shape[1],)
+            shape_flat = (1, len(weight_matrix.lev['data']),
+                          weight_matrix._lat['data'].shape[0] * weight_matrix._lat['data'].shape[1],)
+
+        weight_matrix.variables['weight'] = {'data': np.empty(shape_flat), 'units': '-'}
+        weight_matrix.variables['weight']['data'][:] = -999
+        weight_matrix.variables['idx'] = {'data': np.empty(shape_flat), 'units': '-'}
+        weight_matrix.variables['idx']['data'][:] = -999
+
+        # Filling weight matrix variables
+        for aux_lev in weight_matrix.lev['data']:
+            aux_data = intersection_df.xs(level='level', key=aux_lev)
+            weight_matrix.variables['weight']['data'][0, aux_lev, aux_data.index] = aux_data.loc[:, 'weight'].values
+            weight_matrix.variables['idx']['data'][0, aux_lev, aux_data.index] = aux_data.loc[:, 'idx'].values
+        # Re-shaping
+        weight_matrix.variables['weight']['data'] = weight_matrix.variables['weight']['data'].reshape(shape)
+        weight_matrix.variables['idx']['data'] = weight_matrix.variables['idx']['data'].reshape(shape)
+        if wm_path is not None:
+            if info and self.master:
+                print("\t\tWeight matrix saved at {0}".format(wm_path))
+                sys.stdout.flush()
+            weight_matrix.to_netcdf(wm_path)
+    else:
+        weight_matrix = True
+    return weight_matrix
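A minimal usage sketch of the function above (the file names are hypothetical; open_netcdf is the NES entry point mentioned in the Nes class documentation):

    import nes

    src_nes = nes.open_netcdf('source_grid.nc')       # hypothetical source file
    dst_nes = nes.open_netcdf('destination_grid.nc')  # hypothetical destination file

    # Build the conservative weight matrix once and cache it on disk for later reuse
    wm = src_nes.create_area_conservative_weight_matrix(dst_nes, wm_path='wm_conservative.nc', info=True)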
+ + + +
+[docs]
+def lon_lat_to_cartesian(lon, lat, radius=6378137.0):
+    """
+    Convert lon, lat coordinates to Cartesian (x, y, z) coordinates on a sphere.
+
+    DEPRECATED!!!!
+
+    Parameters
+    ----------
+    lon : np.array
+        Longitude values.
+    lat : np.array
+        Latitude values.
+    radius : float
+        Radius of the sphere used to calculate the distances.
+
+    Returns
+    -------
+    np.array
+        Array of (x, y, z) coordinates, one row per point.
+    """
+
+    lon_r = np.radians(lon)
+    lat_r = np.radians(lat)
+
+    x = radius * np.cos(lat_r) * np.cos(lon_r)
+    y = radius * np.cos(lat_r) * np.sin(lon_r)
+    z = radius * np.sin(lat_r)
+
+    return np.column_stack([x, y, z])
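As a quick sanity check, a point at lon = 0, lat = 0 maps to (radius, 0, 0):

    import numpy as np

    xyz = lon_lat_to_cartesian(np.array([0.0]), np.array([0.0]))
    # array([[6378137., 0., 0.]])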
+ + + +
+[docs]
+def lon_lat_to_cartesian_ecef(lon, lat):
+    """
+    Convert observational/model geographic longitude/latitude coordinates to Cartesian ECEF (Earth Centred,
+    Earth Fixed) coordinates, assuming the WGS84 datum and ellipsoid, and that all heights equal zero.
+
+    ECEF coordinates represent positions (in meters) as X, Y, Z coordinates, approximating the earth surface
+    as an ellipsoid of revolution. This conversion is for the subsequent calculation of Euclidean distances of
+    the model grid cell centres from each observational station. Defining the distance between two points on
+    the earth's surface as simply the Euclidean distance between the two lon/lat pairs could lead to inaccurate
+    results depending on the distance between the two points (i.e. 1 deg. of longitude varies with latitude).
+
+    Parameters
+    ----------
+    lon : np.array
+        Longitude values.
+    lat : np.array
+        Latitude values.
+    """
+
+    lla = pyproj.Proj(proj='latlong', ellps='WGS84', datum='WGS84')
+    ecef = pyproj.Proj(proj='geocent', ellps='WGS84', datum='WGS84')
+
+    # pyproj.transform is deprecated (and removed in pyproj >= 3); use a Transformer instead
+    transformer = pyproj.Transformer.from_proj(lla, ecef)
+    x, y, z = transformer.transform(lon, lat, np.zeros(lon.shape), radians=False)
+
+    return np.column_stack([x, y, z])
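A usage sketch (the coordinates are hypothetical): after the ECEF conversion, distances between stations and grid cell centres reduce to plain Euclidean norms:

    import numpy as np

    station = lon_lat_to_cartesian_ecef(np.array([2.15]), np.array([41.39]))
    centres = lon_lat_to_cartesian_ecef(np.array([2.0, 2.5]), np.array([41.0, 41.5]))

    # Euclidean distances (in meters) from the station to each grid cell centre
    distances = np.linalg.norm(centres - station[0], axis=1)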
\ No newline at end of file
diff --git a/docs/build/html/_modules/nes/methods/spatial_join.html b/docs/build/html/_modules/nes/methods/spatial_join.html
new file mode 100644
index 0000000000000000000000000000000000000000..60e438320dc19c9616923a6721430ec9c9628ef2
--- /dev/null
+++ b/docs/build/html/_modules/nes/methods/spatial_join.html
@@ -0,0 +1,421 @@
+nes.methods.spatial_join — NES 1.1.3 documentation

Source code for nes.methods.spatial_join

+#!/usr/bin/env python
+
+import sys
+import warnings
+import geopandas as gpd
+from geopandas import GeoDataFrame
+import nes
+import numpy as np
+import pandas as pd
+from shapely.geos import TopologicalError
+
+
+
+[docs]
+def spatial_join(self, ext_shp, method=None, var_list=None, info=False):
+    """
+    Compute the spatial join between the Nes grid shapefile and an external GeoDataFrame.
+
+    Parameters
+    ----------
+    self : nes.Nes
+    ext_shp : GeoDataFrame or str
+        External GeoDataFrame, or path to the file from which it will be read.
+    method : str
+        Overlay method. Accepted values: ['nearest', 'intersection', 'centroid'].
+    var_list : List or None or str
+        Variables that will be included in the resulting shapefile.
+    info : bool
+        Indicates if you want to print the process info or not.
+    """
+
+    if self.master and info:
+        print("Starting spatial join")
+    if isinstance(var_list, str):
+        # Transforming a string (variable name) into a list of length 1
+        var_list = [var_list]
+
+    # Create the source shapefile if it does not exist
+    if self.shapefile is None:
+        if self.master and info:
+            print("\tCreating shapefile")
+            sys.stdout.flush()
+        self.create_shapefile()
+
+    ext_shp = prepare_external_shapefile(self, ext_shp=ext_shp, var_list=var_list, info=info)
+
+    if method == 'nearest':
+        # Nearest centroids to the shapefile polygons
+        spatial_join_nearest(self, ext_shp=ext_shp, info=info)
+    elif method == 'intersection':
+        # Intersect the areas of the shapefile polygons; outside the shapefile there will be NaN
+        spatial_join_intersection(self, ext_shp=ext_shp, info=info)
+    elif method == 'centroid':
+        # Centroids that fall on the shapefile polygons; outside the shapefile there will be NaN
+        spatial_join_centroid(self, ext_shp=ext_shp, info=info)
+    else:
+        accepted_values = ['nearest', 'intersection', 'centroid']
+        raise NotImplementedError('{0} is not implemented. Choose from: {1}'.format(method, accepted_values))
+
+    return None
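A usage sketch (the shapefile path and variable name are hypothetical; nessy is an open Nes object). Passing the path instead of an opened GeoDataFrame keeps memory usage low, as explained in prepare_external_shapefile below:

    nessy.spatial_join('regions.shp', method='intersection', var_list='region_name', info=True)
    # The joined attributes are now columns of nessy.shapefile, NaN outside the polygons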
+ + + +
+[docs]
+def prepare_external_shapefile(self, ext_shp, var_list, info=False):
+    """
+    Prepare the external shapefile.
+
+    It is highly recommended to pass the ext_shp parameter as a string (path), because then the external
+    shapefile can be clipped to the rank's bounding box while it is being read.
+
+    1. Read it if it is not already read
+    2. Filter the variables list
+    3. Standardize the projections
+
+    Parameters
+    ----------
+    self : nes.Nes
+    ext_shp : GeoDataFrame or str
+        External shapefile or the path to it.
+    var_list : List[str] or None
+        External shapefile variables to be computed.
+    info : bool
+        Indicates if you want to print the information.
+
+    Returns
+    -------
+    GeoDataFrame
+        External shapefile.
+    """
+
+    if isinstance(ext_shp, str):
+        # Reading external shapefile
+        if self.master and info:
+            print("\tReading external shapefile")
+        # ext_shp = gpd.read_file(ext_shp, include_fields=var_list, mask=self.shapefile.geometry)
+        ext_shp = gpd.read_file(ext_shp, include_fields=var_list, bbox=get_bbox(self))
+    else:
+        msg = "WARNING!!! "
+        msg += "External shapefile already read. If you pass the path to the shapefile instead of the opened "
+        msg += "shapefile, memory usage improves because the external shapefile is clipped while reading."
+        warnings.warn(msg)
+        sys.stderr.flush()
+        ext_shp.reset_index(inplace=True)
+        if var_list is not None:
+            ext_shp = ext_shp.loc[:, var_list + ['geometry']]
+
+    self.comm.Barrier()
+    if self.master and info:
+        print("\t\tReading external shapefile done!")
+
+    # Standardizing projection
+    ext_shp = ext_shp.to_crs(self.shapefile.crs)
+
+    return ext_shp
+ + + +
+[docs] +def get_bbox(self): + """ + Obtain the bounding box of the rank data + + (lon_min, lat_min, lon_max, lat_max) + + Parameters + ---------- + self : nes.Nes + + Returns + ------- + tuple + Bounding box + """ + + bbox = (self.lon_bnds['data'].min(), self.lat_bnds['data'].min(), + self.lon_bnds['data'].max(), self.lat_bnds['data'].max(), ) + + return bbox
+ + + +
+[docs]
+def spatial_join_nearest(self, ext_shp, info=False):
+    """
+    Perform the spatial join using the nearest method.
+
+    Parameters
+    ----------
+    self : nes.Nes
+    ext_shp : GeoDataFrame
+        External shapefile.
+    info : bool
+        Indicates if you want to print the information.
+    """
+
+    if self.master and info:
+        print("\tNearest spatial join")
+        sys.stdout.flush()
+    grid_shp = self.get_centroids_from_coordinates()
+
+    # From geodetic coordinates (e.g. 4326) to meters (e.g. 4328) to use sjoin_nearest
+    # TODO: Check if the projection 4328 does not distort the coordinates too much
+    # https://gis.stackexchange.com/questions/372564/
+    # userwarning-when-trying-to-get-centroid-from-a-polygon-geopandas
+    # ext_shp = ext_shp.to_crs('EPSG:4328')
+    # grid_shp = grid_shp.to_crs('EPSG:4328')
+
+    # Calculate the spatial join by distance
+    aux_grid = gpd.sjoin_nearest(grid_shp, ext_shp, distance_col='distance')
+
+    # Get data from the shapes closest to the centroids
+    del aux_grid['geometry'], aux_grid['index_right']
+    self.shapefile.loc[aux_grid.index, aux_grid.columns] = aux_grid
+
+    var_list = list(ext_shp.columns)
+    var_list.remove('geometry')
+    for var_name in var_list:
+        self.shapefile.loc[:, var_name] = np.array(self.shapefile.loc[:, var_name], dtype=ext_shp[var_name].dtype)
+
+    return None
+ + + +
+[docs]
+def spatial_join_centroid(self, ext_shp, info=False):
+    """
+    Perform the spatial join using the centroid method.
+
+    Parameters
+    ----------
+    self : nes.Nes
+    ext_shp : GeoDataFrame
+        External shapefile.
+    info : bool
+        Indicates if you want to print the information.
+    """
+
+    if self.master and info:
+        print("\tCentroid spatial join")
+        sys.stdout.flush()
+    if info and self.master:
+        print("\t\tCalculating centroids")
+        sys.stdout.flush()
+
+    # Get centroids
+    grid_shp = self.get_centroids_from_coordinates()
+
+    # Calculate the spatial join
+    if info and self.master:
+        print("\t\tCalculating centroid spatial join")
+        sys.stdout.flush()
+    aux_grid = gpd.sjoin(grid_shp, ext_shp, predicate='within')
+
+    # Get data from the shapes that contain centroids; the rest will be NaN
+    del aux_grid['geometry'], aux_grid['index_right']
+    self.shapefile.loc[aux_grid.index, aux_grid.columns] = aux_grid
+
+    var_list = list(ext_shp.columns)
+    var_list.remove('geometry')
+    for var_name in var_list:
+        self.shapefile.loc[:, var_name] = np.array(self.shapefile.loc[:, var_name], dtype=ext_shp[var_name].dtype)
+
+    return None
+ + + +
+[docs]
+def spatial_join_intersection(self, ext_shp, info=False):
+    """
+    Perform the spatial join using the intersection method.
+
+    Parameters
+    ----------
+    self : nes.Nes
+    ext_shp : GeoDataFrame
+        External shapefile.
+    info : bool
+        Indicates if you want to print the information.
+    """
+
+    var_list = list(ext_shp.columns)
+    var_list.remove('geometry')
+
+    grid_shp = self.shapefile
+    grid_shp['FID_grid'] = grid_shp.index
+    grid_shp = grid_shp.reset_index()
+
+    # Get intersected areas
+    # inp, res = ext_shp.sindex.query_bulk(grid_shp.geometry, predicate='intersects')
+    inp, res = grid_shp.sindex.query_bulk(ext_shp.geometry, predicate='intersects')
+
+    if info:
+        print('\t\tRank {0:03d}: {1} intersected areas found'.format(self.rank, len(inp)))
+        sys.stdout.flush()
+
+    # Calculate intersected areas and fractions
+    intersection = pd.DataFrame(columns=['FID', 'ext_shp_id', 'weight'])
+    intersection['FID'] = np.array(grid_shp.loc[res, 'FID_grid'], dtype=np.uint32)
+    intersection['ext_shp_id'] = np.array(inp, dtype=np.uint32)
+
+    if len(intersection) > 0:
+        # No Warnings Zone
+        counts = intersection['FID'].value_counts()
+        warnings.filterwarnings('ignore')
+        intersection.loc[:, 'weight'] = 1.
+
+        for i, row in intersection.iterrows():
+            if isinstance(i, int) and i % 1000 == 0 and info:
+                print('\t\t\tRank {0:03d}: {1:.3f} %'.format(self.rank, i * 100 / len(intersection)))
+                sys.stdout.flush()
+            # Intersection fractions only need to be calculated for grid cells that intersect
+            # more than one polygon; otherwise the weight stays 1
+            if counts[row['FID']] > 1:
+                try:
+                    intersection.loc[i, 'weight'] = grid_shp.loc[res[i], 'geometry'].intersection(
+                        ext_shp.loc[inp[i], 'geometry']).area / grid_shp.loc[res[i], 'geometry'].area
+                except TopologicalError:
+                    # If for some reason the geometry is corrupted, it should work with the buffer function
+                    ext_shp.loc[[inp[i]], 'geometry'] = ext_shp.loc[[inp[i]], 'geometry'].buffer(0)
+                    intersection.loc[i, 'weight'] = grid_shp.loc[res[i], 'geometry'].intersection(
+                        ext_shp.loc[inp[i], 'geometry']).area / grid_shp.loc[res[i], 'geometry'].area
+        intersection.drop(intersection[intersection['weight'] <= 0].index, inplace=True)
+
+        warnings.filterwarnings('default')
+
+        # Choose the polygon with the biggest intersected area when a cell intersects multiple polygons
+        intersection.sort_values('weight', ascending=False, inplace=True)
+        intersection = intersection.drop_duplicates(subset='FID', keep="first")
+        intersection = intersection.sort_values('FID').set_index('FID')
+
+        for var_name in var_list:
+            self.shapefile.loc[intersection.index, var_name] = np.array(
+                ext_shp.loc[intersection['ext_shp_id'], var_name])
+    else:
+        for var_name in var_list:
+            self.shapefile.loc[:, var_name] = np.nan
+
+    for var_name in var_list:
+        self.shapefile.loc[:, var_name] = np.array(self.shapefile.loc[:, var_name], dtype=ext_shp[var_name].dtype)
+
+    return None
\ No newline at end of file
diff --git a/docs/build/html/_modules/nes/methods/vertical_interpolation.html b/docs/build/html/_modules/nes/methods/vertical_interpolation.html
new file mode 100644
index 0000000000000000000000000000000000000000..440e05316bee93bde0cb6dc47ec59138c0ed70cf
--- /dev/null
+++ b/docs/build/html/_modules/nes/methods/vertical_interpolation.html
@@ -0,0 +1,315 @@
+nes.methods.vertical_interpolation — NES 1.1.3 documentation

Source code for nes.methods.vertical_interpolation

+#!/usr/bin/env python
+
+import sys
+import nes
+from scipy.interpolate import interp1d
+import numpy as np
+from copy import copy
+
+
+
+[docs]
+def add_4d_vertical_info(self, info_to_add):
+    """
+    Add the vertical information from another source.
+
+    Parameters
+    ----------
+    self : nes.Nes
+        Source Nes object.
+    info_to_add : nes.Nes, str
+        Nes object with the vertical information as a variable, or a str with the path to the NetCDF file that
+        contains the vertical data.
+    """
+
+    vertical_var = list(self.concatenate(info_to_add))
+    self.vertical_var_name = vertical_var[0]
+
+    return None
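A usage sketch (the file name is hypothetical): attach a 4D vertical coordinate before interpolating to fixed levels:

    nessy.add_4d_vertical_info('mid_layer_height_agl.nc')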
+ + + +
+[docs]
+def interpolate_vertical(self, new_levels, new_src_vertical=None, kind='linear', extrapolate=None, info=None,
+                         overwrite=False):
+    """
+    Vertical interpolation.
+
+    Parameters
+    ----------
+    self : Nes
+        Source Nes object.
+    new_levels : List
+        List of new vertical levels.
+    new_src_vertical : nes.Nes, str
+        Nes object (or path to a NetCDF file) with the vertical coordinate to interpolate from.
+    kind : str
+        Interpolation type, passed to scipy.interpolate.interp1d.
+    extrapolate : None, tuple, str
+        Extrapolation method (for non-linear operations).
+    info : None, bool
+        Indicates if you want to print extra information.
+    overwrite : bool
+        Indicates if you want to compute the vertical interpolation in the same object or not.
+    """
+    if len(self.lev) == 1:
+        raise RuntimeError("1D data cannot be vertically interpolated.")
+    if not overwrite:
+        self = self.copy(copy_vars=True)
+    if info is None:
+        info = self.info
+
+    if new_src_vertical is not None:
+        self.add_4d_vertical_info(new_src_vertical)
+    if new_levels[0] > new_levels[-1]:
+        ascendant = False
+    else:
+        ascendant = True
+
+    nz_new = len(new_levels)
+
+    if self.vertical_var_name is None:
+        # Use the current level data
+        current_level = True
+        # Checking the old order
+        src_levels = self.lev['data']
+        if src_levels[0] > src_levels[-1]:
+            if not ascendant:
+                flip = False
+            else:
+                flip = True
+                src_levels = np.flip(src_levels)
+        else:
+            if ascendant:
+                flip = False
+            else:
+                flip = True
+                src_levels = np.flip(src_levels)
+    else:
+        current_level = False
+        src_levels = self.variables[self.vertical_var_name]['data']
+        if self.vertical_var_name == 'layer_thickness':
+            src_levels = np.flip(np.cumsum(np.flip(src_levels, axis=1), axis=1))
+        else:
+            # src_levels = np.flip(src_levels, axis=1)
+            pass
+        # Checking the old order
+        if np.nanmean(src_levels[:, 0, :, :]) > np.nanmean(src_levels[:, -1, :, :]):
+            if not ascendant:
+                flip = False
+            else:
+                flip = True
+                src_levels = np.flip(src_levels, axis=1)
+        else:
+            if ascendant:
+                flip = False
+            else:
+                flip = True
+                src_levels = np.flip(src_levels, axis=1)
+
+    # Loop over variables
+    for var_name in self.variables.keys():
+        if self.variables[var_name]['data'] is None:
+            # Load the data if it is not loaded yet
+            self.load(var_name)
+
+        if var_name != self.vertical_var_name:
+            if flip:
+                self.variables[var_name]['data'] = np.flip(self.variables[var_name]['data'], axis=1)
+            if info and self.master:
+                print("\t{var} vertical interpolation".format(var=var_name))
+                sys.stdout.flush()
+            nt, nz, ny, nx = self.variables[var_name]['data'].shape
+            dst_data = np.empty((nt, nz_new, ny, nx), dtype=self.variables[var_name]['data'].dtype)
+            for t in range(nt):
+                if self.info and self.master:
+                    print('\t\t{3} time step {0} ({1}/{2})'.format(self.time[t], t + 1, nt, var_name))
+                    sys.stdout.flush()
+                for j in range(ny):
+                    for i in range(nx):
+                        curr_level_values = src_levels[t, :, j, i]
+                        try:
+                            # Check if all level values are identical or masked; fall back to 'slinear' then.
+                            # A per-column variable is used so that one degenerate column does not change
+                            # the interpolation kind for the remaining columns.
+                            if ((isinstance(curr_level_values, np.ndarray) and
+                                 (curr_level_values == curr_level_values[0]).all()) or
+                                    (isinstance(curr_level_values, np.ma.core.MaskedArray) and
+                                     curr_level_values.mask.all())):
+                                curr_kind = 'slinear'
+                            else:
+                                curr_kind = kind
+                            if extrapolate is None:
+                                fill_value = (np.float64(self.variables[var_name]['data'][t, 0, j, i]),
+                                              np.float64(self.variables[var_name]['data'][t, -1, j, i]))
+                            else:
+                                fill_value = extrapolate
+
+                            # We force the interpolation to float64 to avoid negative values;
+                            # we do not know why the negatives appear with float32
+                            if current_level:
+                                # 1D vertical component
+                                src_levels_aux = src_levels
+                            else:
+                                # 4D vertical component
+                                src_levels_aux = src_levels[t, :, j, i]
+
+                            if curr_kind == 'linear':
+                                dst_data[t, :, j, i] = np.array(
+                                    np.interp(new_levels,
+                                              np.array(src_levels_aux, dtype=np.float64),
+                                              np.array(self.variables[var_name]['data'][t, :, j, i],
+                                                       dtype=np.float64)),
+                                    dtype=self.variables[var_name]['data'].dtype)
+                            else:
+                                dst_data[t, :, j, i] = np.array(
+                                    interp1d(np.array(src_levels_aux, dtype=np.float64),
+                                             np.array(self.variables[var_name]['data'][t, :, j, i],
+                                                      dtype=np.float64),
+                                             kind=curr_kind,
+                                             bounds_error=False,
+                                             fill_value=fill_value)(new_levels),
+                                    dtype=self.variables[var_name]['data'].dtype)
+                        except Exception as e:
+                            print("time lat lon", t, j, i)
+                            print("***********************")
+                            print("LEVELS", np.array(src_levels[t, :, j, i], dtype=np.float64))
+                            print("DATA", np.array(self.variables[var_name]['data'][t, :, j, i], dtype=np.float64))
+                            print("METHOD", curr_kind)
+                            print("FILL_VALUE", fill_value)
+                            print("+++++++++++++++++++++++")
+                            raise Exception(str(e))
+
+            self.variables[var_name]['data'] = copy(dst_data)
+
+    # Update the level information
+    new_lev_info = {'data': np.array(new_levels)}
+    if 'positive' in self._lev.keys():
+        # Vertical level direction
+        if flip:
+            self.reverse_level_direction()
+        new_lev_info['positive'] = self._lev['positive']
+    if self.vertical_var_name is not None:
+        # Copy the attributes of the auxiliary vertical variable (skipped when interpolating on the file's
+        # own 1D levels, where there is no auxiliary variable to copy from or to free)
+        for var_attr, attr_info in self.variables[self.vertical_var_name].items():
+            if var_attr not in ['data', 'dimensions', 'crs', 'grid_mapping']:
+                new_lev_info[var_attr] = copy(attr_info)
+    self.set_levels(new_lev_info)
+
+    if self.vertical_var_name is not None:
+        self.free_vars(self.vertical_var_name)
+        self.vertical_var_name = None
+
+    # Remove the original file information
+    self.__ini_path = None
+    self.dataset = None
+    self.netcdf = None
+
+    return self
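A usage sketch (the level values and file name are hypothetical): interpolate to fixed heights using an auxiliary vertical coordinate:

    interpolated = nessy.interpolate_vertical([0., 50., 100., 250., 500.],
                                              new_src_vertical='mid_layer_height_agl.nc',
                                              kind='linear', info=True)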
\ No newline at end of file
diff --git a/docs/build/html/_modules/nes/nc_projections/default_nes.html b/docs/build/html/_modules/nes/nc_projections/default_nes.html
new file mode 100644
index 0000000000000000000000000000000000000000..1af4b6fa3cebbf427deb3c13d765e655e5afc266
--- /dev/null
+++ b/docs/build/html/_modules/nes/nc_projections/default_nes.html
@@ -0,0 +1,4035 @@
+nes.nc_projections.default_nes — NES 1.1.3 documentation

Source code for nes.nc_projections.default_nes

+#!/usr/bin/env python
+
+import sys
+import gc
+import warnings
+import numpy as np
+import pandas as pd
+from datetime import timedelta
+from xarray import open_dataset
+from netCDF4 import Dataset, num2date, date2num, stringtochar
+from mpi4py import MPI
+from cfunits import Units
+from shapely.geos import TopologicalError
+import geopandas as gpd
+from shapely.geometry import Polygon, Point
+from copy import deepcopy, copy
+import datetime
+from dateutil.relativedelta import relativedelta
+import pyproj
+from ..methods import vertical_interpolation, horizontal_interpolation, cell_measures, spatial_join
+from ..nes_formats import to_netcdf_cams_ra, to_netcdf_monarch, to_monarch_units, to_netcdf_cmaq, to_cmaq_units, \
+    to_netcdf_wrf_chem, to_wrf_chem_units
+
+
+
+[docs] +class Nes(object): + """ + + Attributes + ---------- + comm : MPI.Communicator. + rank : int + MPI rank. + master : bool + True when rank == 0. + size : int + Size of the communicator. + info : bool + Indicates if you want to print reading/writing info. + is_xarray : bool + (Not working) Indicates if you want to use xarray as default. + __ini_path : str + Path to the original file to read when open_netcdf is called. + hours_start : int + Number of hours to avoid from the first original values. + hours_end : int + Number of hours to avoid from the last original values. + dataset : xr.Dataset + (not working) xArray Dataset. + netcdf : Dataset + netcdf4-python Dataset. + variables : dict + Variables information. + The variables are stored in a dictionary with the var_name as key and another dictionary with the information. + The information dictionary contains the 'data' key with None (if the variable is not loaded) or the array values + and the other keys are the variable attributes or description. + _time : List + Complete list of original time step values. + _lev : dict + Vertical level dictionary with the complete 'data' key for all the values and the rest of the attributes. + _lat : dict + Latitudes dictionary with the complete 'data' key for all the values and the rest of the attributes. + _lon _ dict + Longitudes dictionary with the complete 'data' key for all the values and the rest of the attributes. + parallel_method : str + Parallel method to read/write. + Can be chosen any of the following axis to parallelize: 'T', 'Y' or 'X'. + read_axis_limits : dict + Dictionary with the 4D limits of the rank data to read. + t_min, t_max, z_min, z_max, y_min, y_max, x_min and x_max. + write_axis_limits : dict + Dictionary with the 4D limits of the rank data to write. + t_min, t_max, z_min, z_max, y_min, y_max, x_min and x_max. + time : List[datetime] + List of time steps of the rank data. + lev : dict + Vertical levels dictionary with the portion of 'data' corresponding to the rank values. + lat : dict + Latitudes dictionary with the portion of 'data' corresponding to the rank values. + lon : dict + Longitudes dictionary with the portion of 'data' corresponding to the rank values. + global_attrs : dict + Global attributes with the attribute name as key and data as values. + _var_dim : None or tuple + Name of the Y and X dimensions for the variables. + _lat_dim : None or tuple + Name of the dimensions of the Latitude values. + _lon_dim : None or tuple + Name of the dimensions of the Longitude values. + projection : pyproj.Proj + Grid projection. + projection_data : dict + Dictionary with the projection information. + + """ + def __init__(self, comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='Y', + avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, + balanced=False, times=None, **kwargs): + """ + Initialize the Nes class + + Parameters + ---------- + comm: MPI.COMM + MPI Communicator. + path: str + Path to the NetCDF to initialize the object. + info: bool + Indicates if you want to get reading/writing info. + dataset: Dataset + NetCDF4-python Dataset to initialize the class. + xarray: bool + (Not working) Indicates if you want to use xarray as default. + parallel_method : str + Indicates the parallelization method that you want. Default over Y axis + accepted values: ['X', 'Y', 'T']. + avoid_first_hours : int + Number of hours to remove from first time steps. 
+ avoid_last_hours : int + Number of hours to remove from last time steps. + first_level : int + Index of the first level to use. + last_level : int or None + Index of the last level to use. None if it is the last. + create_nes : bool + Indicates if you want to create the object from scratch (True) or through an existing file. + balanced : bool + Indicates if you want a balanced parallelization or not. + Balanced dataset cannot be written in chunking mode. + times : List[datetime] or None + List of times to substitute the current ones while creation. + """ + + # MPI Initialization + if comm is None: + self.comm = MPI.COMM_WORLD + else: + self.comm = comm + self.rank = self.comm.Get_rank() + self.master = self.rank == 0 + self.size = self.comm.Get_size() + + # General info + self.info = info + self.is_xarray = xarray + self.__ini_path = path + self.shapefile = None + + # Selecting info + self.hours_start = avoid_first_hours + self.hours_end = avoid_last_hours + self.first_level = first_level + self.last_level = last_level + self.lat_min = None + self.lat_max = None + self.lon_min = None + self.lon_max = None + self.balanced = balanced + + # Define parallel method + self.parallel_method = parallel_method + self.serial_nc = None # Place to store temporally the serial Nes instance + + # Get minor and major axes of Earth + self.earth_radius = self.get_earth_radius('WGS84') + + # Time resolution and climatology will be modified, if needed, during the time variable reading + self._time_resolution = 'hours' + self._climatology = False + self._climatology_var_name = 'climatology_bounds' # Default var_name but can be changed if the input is dif + + # NetCDF object + if create_nes: + + self.netcdf = None + self.dataset = None + + # Set string length + self.strlen = None + + # Initialize variables + self.variables = {} + + # Set projection + self._create_projection(**kwargs) + + # Complete dimensions + self._time = times + + self._time_bnds = self.__get_time_bnds(create_nes) + self._lat_bnds, self._lon_bnds = self.__get_coordinates_bnds(create_nes) + self._lev = {'data': np.array([0]), + 'units': '', + 'positive': 'up'} + self._lat, self._lon = self._create_centre_coordinates(**kwargs) + + # Set axis limits for parallel reading + self.read_axis_limits = self.get_read_axis_limits() + + # Dimensions screening + self.time = self._time[self.read_axis_limits['t_min']:self.read_axis_limits['t_max']] + self.time_bnds = self._time_bnds + self.lev = deepcopy(self._lev) + self.lat_bnds, self.lon_bnds = self._lat_bnds, self._lon_bnds + + # Cell measures screening + self.cell_measures = self.__get_cell_measures(create_nes) + + # Set NetCDF attributes + self.global_attrs = self.__get_global_attributes(create_nes) + + else: + + if dataset is not None: + if self.is_xarray: + self.dataset = dataset + self.netcdf = None + else: + self.dataset = None + self.netcdf = dataset + elif self.__ini_path is not None: + if self.is_xarray: + self.dataset = self.__open_dataset() + self.netcdf = None + else: + self.dataset = None + self.netcdf = self.__open_netcdf4() + + # Get string length + self.strlen = self._get_strlen() + + # Lazy variables + self.variables = self._get_lazy_variables() + + # Get projection + self._get_projection() + + # Complete dimensions + self._time = self.__get_time() + self._time_bnds = self.__get_time_bnds() + self._lev = self._get_coordinate_dimension(['lev', 'level', 'lm', 'plev']) + self._lat = self._get_coordinate_dimension(['lat', 'latitude']) + self._lon = self._get_coordinate_dimension(['lon', 
'longitude']) + self._lat_bnds, self._lon_bnds = self.__get_coordinates_bnds() + + # Complete cell measures + self._cell_measures = self.__get_cell_measures() + + # Set axis limits for parallel reading + self.read_axis_limits = self.get_read_axis_limits() + + # Dimensions screening + self.time = self._time[self.read_axis_limits['t_min']:self.read_axis_limits['t_max']] + self.time_bnds = self._time_bnds + self.lev = self._get_coordinate_values(self._lev, 'Z') + self.lat = self._get_coordinate_values(self._lat, 'Y') + self.lon = self._get_coordinate_values(self._lon, 'X') + self.lat_bnds = self._get_coordinate_values(self._lat_bnds, 'Y', bounds=True) + self.lon_bnds = self._get_coordinate_values(self._lon_bnds, 'X', bounds=True) + + # Cell measures screening + self.cell_measures = self._get_cell_measures_values(self._cell_measures) + + # Set axis limits for parallel writing + self.write_axis_limits = self.get_write_axis_limits() + + # Set NetCDF attributes + self.global_attrs = self.__get_global_attributes() + + # Writing options + self.zip_lvl = 0 + + # Dimensions information + self._var_dim = None + self._lat_dim = None + self._lon_dim = None + + self.vertical_var_name = None + + # Filtering (portion of the filter coordinates function) + idx = self.get_idx_intervals() + self._time = self._time[idx['idx_t_min']:idx['idx_t_max']] + self._lev['data'] = self._lev['data'][idx['idx_z_min']:idx['idx_z_max']] + + self.hours_start = 0 + self.hours_end = 0 + self.last_level = None + self.first_level = None + +
+[docs] + @staticmethod + def new(comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='Y', + avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, + balanced=False, times=None, **kwargs): + """ + Initialize the Nes class. + + Parameters + ---------- + comm: MPI.COMM + MPI Communicator. + path: str + Path to the NetCDF to initialize the object. + info: bool + Indicates if you want to get reading/writing info. + dataset: Dataset + NetCDF4-python Dataset to initialize the class. + xarray: bool + (Not working) Indicates if you want to use xarray as default. + parallel_method : str + Indicates the parallelization method that you want. Default over Y axis + accepted values: ['X', 'Y', 'T']. + avoid_first_hours : int + Number of hours to remove from first time steps. + avoid_last_hours : int + Number of hours to remove from last time steps. + first_level : int + Index of the first level to use. + last_level : int or None + Index of the last level to use. None if it is the last. + create_nes : bool + Indicates if you want to create the object from scratch (True) or through an existing file. + balanced : bool + Indicates if you want a balanced parallelization or not. + Balanced dataset cannot be written in chunking mode. + times : List[datetime] or None + List of times to substitute the current ones while creation. + """ + + new = Nes(comm=comm, path=path, info=info, dataset=dataset, xarray=xarray, parallel_method=parallel_method, + avoid_first_hours=avoid_first_hours, avoid_last_hours=avoid_last_hours, first_level=first_level, + last_level=last_level, create_nes=create_nes, balanced=balanced, times=times, **kwargs) + + return new
+ + + def _get_strlen(self): + """ + Get the strlen + + Returns + ------- + int + Max length of the string data + """ + + if 'strlen' in self.netcdf.dimensions: + strlen = self.netcdf.dimensions['strlen'].size + else: + return None + + return strlen + +
+[docs] + def set_strlen(self, strlen=75): + """ + Set the strlen + + 75 is the standard value used in GHOST data + + Parameters + ---------- + strlen : int or None + Max length of the string + """ + + self.strlen = strlen + + return None
+
+    def __del__(self):
+        """
+        To delete the Nes object and close all the open datasets.
+        """
+
+        self.close()
+        try:
+            self.free_vars(list(self.variables.keys()))
+            del self.variables
+            del self.time
+            del self._time
+            del self.time_bnds
+            del self._time_bnds
+            del self.lev
+            del self._lev
+            del self.lat
+            del self._lat
+            del self.lon
+            del self._lon
+            del self._lat_bnds
+            del self.lat_bnds
+            del self._lon_bnds
+            del self.lon_bnds
+            del self.strlen
+            del self.shapefile
+            for cell_measure in self.cell_measures.keys():
+                if self.cell_measures[cell_measure]['data'] is not None:
+                    del self.cell_measures[cell_measure]['data']
+            del self.cell_measures
+        except (AttributeError, KeyError):
+            pass
+
+        del self
+        gc.collect()
+
+        return None
+
+    def __getstate__(self):
+        """
+        Return the state of the Nes object for pickling, dropping the members that cannot be serialized.
+
+        Returns
+        -------
+        state : dict
+            Dictionary with the class parameters.
+        """
+
+        d = self.__dict__
+        state = {k: d[k] for k in d if k not in ['comm', 'variables', 'netcdf', 'cell_measures']}
+
+        return state
+
+    def __setstate__(self, state):
+        """
+        Set the state of the class.
+
+        Parameters
+        ----------
+        state : dict
+            Dictionary with the class parameters.
+        """
+
+        self.__dict__ = state
+
+        return None
+
+    def __add__(self, other):
+        """
+        Sum two Nes objects.
+
+        Parameters
+        ----------
+        other : Nes
+            Nes object to be summed.
+
+        Returns
+        -------
+        Nes
+            Summed Nes object.
+        """
+        nessy = self.copy(copy_vars=True)
+        for var_name in other.variables.keys():
+            if var_name not in nessy.variables.keys():
+                # Create a new variable
+                nessy.variables[var_name] = deepcopy(other.variables[var_name])
+            else:
+                nessy.variables[var_name]['data'] += other.variables[var_name]['data']
+        return nessy
+
+    def __radd__(self, other):
+        if other == 0 or other is None:
+            return self
+        else:
+            return self.__add__(other)
+
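Because __radd__ treats 0 and None as the identity element, Python's built-in sum() also works on Nes objects. A sketch (assumes both objects share the same grid and have their data loaded):

    combined = nessy_a + nessy_b        # variable-wise sum
    combined = sum([nessy_a, nessy_b])  # equivalent, thanks to __radd__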
+[docs] + def copy(self, copy_vars=False): + """ + Copy the Nes object. + The copy will avoid to copy the communicator, dataset and variables by default. + + Parameters + ---------- + copy_vars: bool + Indicates if you want to copy the variables (in lazy mode). + + Returns + ------- + nessy : Nes + Copy of the Nes object. + """ + + nessy = deepcopy(self) + nessy.netcdf = None + if copy_vars: + nessy.set_communicator(self.comm) + nessy.variables = deepcopy(self.variables) + nessy.cell_measures = deepcopy(self.cell_measures) + else: + nessy.variables = {} + nessy.cell_measures = {} + + return nessy
+ + +
+[docs] + def get_full_times(self): + return self._time
+ + +
+[docs] + def get_full_levels(self): + return self._lev
+ + +
+[docs]
+    def set_level_direction(self, new_direction):
+        if new_direction not in ['up', 'down']:
+            raise ValueError("Level direction must be 'up' or 'down'. '{0}' is not a valid option".format(
+                new_direction))
+        self._lev['positive'] = new_direction
+        self.lev['positive'] = new_direction
+
+        return True
+ + +
+[docs] + def reverse_level_direction(self): + if 'positive' in self._lev.keys(): + if self._lev['positive'] == 'up': + self._lev['positive'] = 'down' + self.lev['positive'] = 'down' + else: + self._lev['positive'] = 'up' + self.lev['positive'] = 'up' + return True
+ + +
+[docs] + def clear_communicator(self): + """ + Erase the communicator and the parallelization indexes. + """ + + self.comm = None + self.rank = 0 + self.master = 0 + self.size = 0 + + return None
+ + +
+[docs] + def set_communicator(self, comm): + """ + Set a new communicator and the correspondent parallelization indexes. + + Parameters + ---------- + comm: MPI.COMM + Communicator to be set. + """ + + self.comm = comm + self.rank = self.comm.Get_rank() + self.master = self.rank == 0 + self.size = self.comm.Get_size() + + self.read_axis_limits = self.get_read_axis_limits() + self.write_axis_limits = self.get_write_axis_limits() + + return None
+ + +
+[docs] + def set_climatology(self, is_climatology): + if not isinstance(is_climatology, bool): + raise TypeError("Only boolean values are accepted") + self._climatology = is_climatology + return None
+ + +
+[docs] + def get_climatology(self): + return self._climatology
+ + +
+[docs] + def set_levels(self, levels): + """ + Modify the original level values with new ones. + + Parameters + ---------- + levels : dict + Dictionary with the new level information to be set. + """ + + self._lev = deepcopy(levels) + self.lev = deepcopy(levels) + + return None
+ + +
+[docs]
+    def set_time(self, time_list):
+        """
+        Modify the original time values with new ones.
+
+        Parameters
+        ----------
+        time_list : List[datetime]
+            List of time steps.
+        """
+        if self.parallel_method == 'T':
+            raise TypeError("Cannot set time on a 'T' parallel method")
+        self._time = deepcopy(time_list)
+        self.time = deepcopy(time_list)
+
+        return None
+ + +
+[docs] + def set_time_bnds(self, time_bnds): + """ + Modify the original time bounds values with new ones. + + Parameters + ---------- + time_bnds : List + List with the new time bounds information to be set. + """ + + correct_format = True + for time_bnd in np.array(time_bnds).flatten(): + if not isinstance(time_bnd, datetime.datetime): + print("{0} is not a datetime object".format(time_bnd)) + correct_format = False + if correct_format: + if len(self._time) == len(time_bnds): + self._time_bnds = deepcopy(time_bnds) + self.time_bnds = deepcopy(time_bnds) + else: + msg = "WARNING!!! " + msg += "The given time bounds list has a different length than the time array. " + msg += "(time:{0}, bnds:{1}). Time bounds will not be set.".format(len(self._time), len(time_bnds)) + warnings.warn(msg) + sys.stderr.flush() + else: + msg = 'WARNING!!! ' + msg += 'There is at least one element in the time bounds to be set that is not a datetime object. ' + msg += 'Time bounds will not be set.' + warnings.warn(msg) + sys.stderr.flush() + + return None
+ + +
+[docs] + def set_time_resolution(self, new_resolution): + accepted_resolutions = ['second', 'seconds', 'minute', 'minutes', 'hour', 'hours', 'day', 'days'] + if new_resolution in accepted_resolutions: + self._time_resolution = new_resolution + else: + raise ValueError("Time resolution '{0}' is not accepted. Use one of this: {1}".format( + new_resolution, accepted_resolutions)) + return True
+ + + +
+[docs]
+    @staticmethod
+    def create_single_spatial_bounds(coordinates, inc, spatial_nv=2, inverse=False):
+        """
+        Calculate the vertices coordinates.
+
+        Parameters
+        ----------
+        coordinates : np.array
+            Coordinates in degrees (latitude or longitude).
+        inc : float
+            Increment between centre values.
+        spatial_nv : int
+            Non-mandatory parameter that informs the number of vertices that the boundaries must have. Default: 2.
+        inverse : bool
+            For some grid latitudes.
+
+        Returns
+        -------
+        bounds : np.array
+            Array with as many elements as vertices for each value of coords.
+        """
+
+        # Create new arrays moving the centres half an increment down and up.
+        coords_left = coordinates - inc / 2
+        coords_right = coordinates + inc / 2
+
+        # Defining the number of corners needed: 2 for regular grids and 4 for irregular ones.
+        if spatial_nv == 2:
+            # Create an array of N arrays of 2 elements to store the floor and the ceil values for each cell
+            bounds = np.dstack((coords_left, coords_right))
+            bounds = bounds.reshape((len(coordinates), spatial_nv))
+        elif spatial_nv == 4:
+            # Create an array of N arrays of 4 elements to store the corner values for each cell.
+            # They can be stored clockwise starting from the top-left element, or in inverse mode.
+            if inverse:
+                bounds = np.dstack((coords_left, coords_left, coords_right, coords_right))
+            else:
+                bounds = np.dstack((coords_left, coords_right, coords_right, coords_left))
+        else:
+            raise ValueError('The number of vertices of the boundaries must be 2 or 4.')
+
+        return bounds
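A worked example with three centres spaced one degree apart: each cell gets [centre - 0.5, centre + 0.5] as its two vertices:

    import numpy as np

    bounds = Nes.create_single_spatial_bounds(np.array([0.0, 1.0, 2.0]), 1.0, spatial_nv=2)
    # array([[-0.5, 0.5], [0.5, 1.5], [1.5, 2.5]])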
+ + +
+[docs] + def create_spatial_bounds(self): + """ + Calculate longitude and latitude bounds and set them. + """ + + inc_lat = np.abs(np.mean(np.diff(self._lat['data']))) + lat_bnds = self.create_single_spatial_bounds(self._lat['data'], inc_lat, spatial_nv=2) + + self._lat_bnds = {'data': deepcopy(lat_bnds)} + self.lat_bnds = {'data': lat_bnds[self.read_axis_limits['y_min']:self.read_axis_limits['y_max'], :]} + + inc_lon = np.abs(np.mean(np.diff(self._lon['data']))) + lon_bnds = self.create_single_spatial_bounds(self._lon['data'], inc_lon, spatial_nv=2) + + self._lon_bnds = {'data': deepcopy(lon_bnds)} + self.lon_bnds = {'data': lon_bnds[self.read_axis_limits['x_min']:self.read_axis_limits['x_max'], :]} + + return None
+ + +
+[docs] + def get_spatial_bounds_mesh_format(self): + """ + Get the spatial bounds in the pcolormesh format: + + see: https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.pcolormesh.html + + Returns + ------- + lon_bnds_mesh : numpy.ndarray + Longitude boundaries in the mesh format + lat_bnds_mesh : numpy.ndarray + Latitude boundaries in the mesh format + """ + if self.size > 1: + raise RuntimeError("NES.get_spatial_bounds_mesh_format() function only works in serial mode.") + if self.lat_bnds is None: + self.create_spatial_bounds() + + if self.lat_bnds['data'].shape[-1] == 2: + # get the lat_b and lon_b first rows + lat_b_0 = np.append(self.lat_bnds['data'][:, 0], self.lat_bnds['data'][-1, -1]) + lon_b_0 = np.append(self.lon_bnds['data'][:, 0], self.lon_bnds['data'][-1, -1]) + # expand lat_band lon_b in 2D + lat_bnds_mesh = np.tile(lat_b_0, (len(self.lon['data']) + 1, 1)).transpose() + lon_bnds_mesh = np.tile(lon_b_0, (len(self.lat['data']) + 1, 1)) + + elif self.lat_bnds['data'].shape[-1] == 4: + # Irregular quadrilateral polygon cell definition + lat_bnds_mesh = np.empty((self.lat['data'].shape[0] + 1, self.lat['data'].shape[1] + 1)) + lat_bnds_mesh[:-1, :-1] = self.lat_bnds['data'][:, :, 0] + lat_bnds_mesh[:-1, 1:] = self.lat_bnds['data'][:, :, 1] + lat_bnds_mesh[1:, 1:] = self.lat_bnds['data'][:, :, 2] + lat_bnds_mesh[1:, :-1] = self.lat_bnds['data'][:, :, 3] + + lon_bnds_mesh = np.empty((self.lat['data'].shape[0] + 1, self.lat['data'].shape[1] + 1)) + lon_bnds_mesh[:-1, :-1] = self.lon_bnds['data'][:, :, 0] + lon_bnds_mesh[:-1, 1:] = self.lon_bnds['data'][:, :, 1] + lon_bnds_mesh[1:, 1:] = self.lon_bnds['data'][:, :, 2] + lon_bnds_mesh[1:, :-1] = self.lon_bnds['data'][:, :, 3] + else: + raise RuntimeError("Invalid number of vertices: {0}".format(self.lat_bnds['data'].shape[-1])) + + return lon_bnds_mesh, lat_bnds_mesh
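A plotting sketch (the variable name is hypothetical; as stated above, this only works in serial mode):

    import matplotlib.pyplot as plt

    lon_mesh, lat_mesh = nessy.get_spatial_bounds_mesh_format()
    plt.pcolormesh(lon_mesh, lat_mesh, nessy.variables['sconco3']['data'][0, 0, :, :])
    plt.show()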
+ + +
+[docs]
+    def free_vars(self, var_list):
+        """
+        Erase the selected variables from the variables' information.
+
+        Parameters
+        ----------
+        var_list : List or str
+            List (or single string) of the variables to be removed.
+        """
+
+        if isinstance(var_list, str):
+            var_list = [var_list]
+
+        if self.is_xarray:
+            self.dataset = self.dataset.drop_vars(var_list)
+            self.variables = self._get_lazy_variables()
+        else:
+            if self.variables is not None:
+                for var_name in var_list:
+                    if var_name in self.variables:
+                        if 'data' in self.variables[var_name].keys():
+                            del self.variables[var_name]['data']
+                        del self.variables[var_name]
+        gc.collect()
+
+        return None
+ + +
+[docs]
+    def keep_vars(self, var_list):
+        """
+        Keep the selected variables and erase the rest.
+
+        Parameters
+        ----------
+        var_list : List or str
+            List (or single string) of the variables to be kept.
+        """
+
+        if isinstance(var_list, str):
+            var_list = [var_list]
+
+        to_remove = list(set(self.variables.keys()).difference(set(var_list)))
+
+        self.free_vars(to_remove)
+
+        return None
+ + +
+[docs]
+    def get_time_interval(self):
+        """
+        Calculate the interval of hours between time steps.
+
+        Returns
+        -------
+        int
+            Number of hours between time steps.
+        """
+
+        time_interval = self._time[1] - self._time[0]
+        time_interval = int(time_interval.seconds // 3600)
+
+        return time_interval
+ + +
+[docs]
+    def sel_time(self, time, copy=False):
+        """
+        Select only one time step.
+
+        Parameters
+        ----------
+        time : datetime.datetime
+            Time stamp to select.
+        copy : bool
+            Indicates if you want a copy with the selected time step (True) or to modify the existing one (False).
+
+        Returns
+        -------
+        Nes
+            Nes object with the data (and metadata) of the selected time step.
+        """
+
+        if copy:
+            aux_nessy = self.copy(copy_vars=False)
+            aux_nessy.comm = self.comm
+        else:
+            aux_nessy = self
+
+        aux_nessy.hours_start = 0
+        aux_nessy.hours_end = 0
+
+        idx_time = aux_nessy.time.index(time)
+
+        aux_nessy.time = [self.time[idx_time]]
+        aux_nessy._time = aux_nessy.time
+        for var_name, var_info in self.variables.items():
+            if copy:
+                aux_nessy.variables[var_name] = {}
+                for att_name, att_value in var_info.items():
+                    if att_name == 'data':
+                        if att_value is None:
+                            raise ValueError("{} data not loaded".format(var_name))
+                        aux_nessy.variables[var_name][att_name] = att_value[[idx_time]]
+                    else:
+                        aux_nessy.variables[var_name][att_name] = att_value
+            else:
+                aux_nessy.variables[var_name]['data'] = aux_nessy.variables[var_name]['data'][[idx_time]]
+
+        return aux_nessy
+ + +
+[docs] + def sel(self, hours_start=None, time_min=None, hours_end=None, time_max=None, lev_min=None, lev_max=None, + lat_min=None, lat_max=None, lon_min=None, lon_max=None): + """ + Select a slice of time, lev, lat or lon given a minimum and maximum limits. + """ + + loaded_vars = False + for var_info in self.variables.values(): + if var_info['data'] is not None: + loaded_vars = True + # var_info['data'] = None + if loaded_vars: + raise ValueError("Some variables have been loaded. Use select function before load.") + + # First time filter + if hours_start is not None: + if time_min is not None: + raise ValueError("Choose to select by hours_start or time_min but not both") + self.hours_start = hours_start + elif time_min is not None: + if time_min <= self._time[0]: + self.hours_start = 0 + else: + self.hours_start = int((time_min - self._time[0]).total_seconds() // 3600) + + # Last time filter + if hours_end is not None: + if time_max is not None: + raise ValueError("Choose to select by hours_end or time_max but not both") + self.hours_end = hours_end + elif time_max is not None: + if time_max >= self._time[-1]: + self.hours_end = 0 + else: + self.hours_end = int((self._time[-1] - time_max).total_seconds() // 3600) + + # Level filter + self.first_level = lev_min + self.last_level = lev_max + + # Coordinate filter + self.lat_min = lat_min + self.lat_max = lat_max + self.lon_min = lon_min + self.lon_max = lon_max + + # New axis limits + self.read_axis_limits = self.get_read_axis_limits() + + # Dimensions screening + self.time = self._time[self.read_axis_limits['t_min']:self.read_axis_limits['t_max']] + self.time_bnds = self._time_bnds + self.lev = self._get_coordinate_values(self._lev, 'Z') + self.lat = self._get_coordinate_values(self._lat, 'Y') + self.lon = self._get_coordinate_values(self._lon, 'X') + + self.lat_bnds = self._get_coordinate_values(self._lat_bnds, 'Y', bounds=True) + self.lon_bnds = self._get_coordinate_values(self._lon_bnds, 'X', bounds=True) + + # Filter dimensions + self.filter_coordinates_selection() + + # Removing complete coordinates + self.write_axis_limits = self.get_write_axis_limits() + + return None
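A selection sketch (the bounds and variable name are hypothetical; sel must be called before loading, as the check above enforces):

    from datetime import datetime

    nessy.sel(time_min=datetime(2023, 6, 1), time_max=datetime(2023, 6, 2),
              lat_min=35.0, lat_max=45.0, lon_min=-10.0, lon_max=5.0)
    nessy.load('sconco3')  # only the selected slice is read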
+ + +
+[docs] + def filter_coordinates_selection(self): + """ + Use the selection limits to filter time, lev, lat, lon, lon_bnds and lat_bnds. + """ + + idx = self.get_idx_intervals() + + self._time = self._time[idx['idx_t_min']:idx['idx_t_max']] + self._lev['data'] = self._lev['data'][idx['idx_z_min']:idx['idx_z_max']] + + if len(self._lat['data'].shape) == 1: + # Regular projection + self._lat['data'] = self._lat['data'][idx['idx_y_min']:idx['idx_y_max']] + self._lon['data'] = self._lon['data'][idx['idx_x_min']:idx['idx_x_max']] + + if self._lat_bnds is not None: + self._lat_bnds['data'] = self._lat_bnds['data'][idx['idx_y_min']:idx['idx_y_max'], :] + if self._lon_bnds is not None: + self._lon_bnds['data'] = self._lon_bnds['data'][idx['idx_x_min']:idx['idx_x_max'], :] + else: + # Irregular projections + self._lat['data'] = self._lat['data'][idx['idx_y_min']:idx['idx_y_max'], idx['idx_x_min']:idx['idx_x_max']] + self._lon['data'] = self._lon['data'][idx['idx_y_min']:idx['idx_y_max'], idx['idx_x_min']:idx['idx_x_max']] + + if self._lat_bnds is not None: + self._lat_bnds['data'] = self._lat_bnds['data'][idx['idx_y_min']:idx['idx_y_max'], + idx['idx_x_min']:idx['idx_x_max'], :] + if self._lon_bnds is not None: + self._lon_bnds['data'] = self._lon_bnds['data'][idx['idx_y_min']:idx['idx_y_max'], + idx['idx_x_min']:idx['idx_x_max'], :] + + self.hours_start = 0 + self.hours_end = 0 + self.last_level = None + self.first_level = None + self.lat_min = None + self.lat_max = None + self.lon_max = None + self.lon_min = None + + return None
+ + + def _get_projection(self): + """ + Must be implemented on inner class. + """ + + return None + + def _create_projection(self, **kwargs): + """ + Must be implemented on inner class. + """ + + return None + +
+[docs] + def get_idx_intervals(self): + """ + Calculate the index intervals + + Returns + ------- + dict + Dictionary with the index intervals + """ + idx = {'idx_t_min': self.get_time_id(self.hours_start, first=True), + 'idx_t_max': self.get_time_id(self.hours_end, first=False), + 'idx_z_min': self.first_level, + 'idx_z_max': self.last_level} + + # Axis Y + if self.lat_min is None: + idx['idx_y_min'] = 0 + else: + idx['idx_y_min'] = self.get_coordinate_id(self._lat['data'], self.lat_min, axis=0) + if self.lat_max is None: + idx['idx_y_max'] = self._lat['data'].shape[0] + else: + idx['idx_y_max'] = self.get_coordinate_id(self._lat['data'], self.lat_max, axis=0) + 1 + + if idx['idx_y_min'] > idx['idx_y_max']: + idx_aux = copy(idx['idx_y_min']) + idx['idx_y_min'] = idx['idx_y_max'] + idx['idx_y_max'] = idx_aux + + # Axis X + + if self.lon_min is None: + idx['idx_x_min'] = 0 + else: + if len(self._lon['data'].shape) == 1: + axis = 0 + else: + axis = 1 + idx['idx_x_min'] = self.get_coordinate_id(self._lon['data'], self.lon_min, axis=axis) + if self.lon_max is None: + idx['idx_x_max'] = self._lon['data'].shape[-1] + else: + if len(self._lon['data'].shape) == 1: + axis = 0 + else: + axis = 1 + idx['idx_x_max'] = self.get_coordinate_id(self._lon['data'], self.lon_max, axis=axis) + 1 + + if idx['idx_x_min'] > idx['idx_x_max']: + idx_aux = copy(idx['idx_x_min']) + idx['idx_x_min'] = idx['idx_x_max'] + idx['idx_x_max'] = idx_aux + return idx
+ + + # ================================================================================================================== + # Statistics + # ================================================================================================================== + +
+[docs]
+    def last_time_step(self):
+        """
+        Modify the variables to keep only the last time step.
+        """
+
+        if self.parallel_method == 'T':
+            raise NotImplementedError("Statistics are not implemented on the time axis parallelization method.")
+        aux_time = self._time[0].replace(hour=0, minute=0, second=0, microsecond=0)
+        self._time = [aux_time]
+        self.time = [aux_time]
+
+        for var_name, var_info in self.variables.items():
+            if var_info['data'] is None:
+                self.load(var_name)
+            aux_data = var_info['data'][-1, :]
+            if len(aux_data.shape) == 3:
+                aux_data = aux_data.reshape((1, aux_data.shape[0], aux_data.shape[1], aux_data.shape[2]))
+            self.variables[var_name]['data'] = aux_data
+        self.hours_start = 0
+        self.hours_end = 0
+
+        return None
+ + +
+[docs] + def daily_statistic(self, op, type_op='calendar'): + """ + Calculate daily statistic. + + Parameters + ---------- + op : str + Statistic to perform. Accepted values: "max", "mean" and "min". + type_op : str + Type of statistic to perform. Accepted values: "calendar", "alltsteps", and "withoutt0". + - "calendar": Calculate the statistic using the time metadata. + It will avoid single time step by day calculations + - "alltsteps": Calculate a single time statistic with all the time steps. + - "withoutt0": Calculate a single time statistic with all the time steps avoiding the first one. + """ + + if self.parallel_method == 'T': + raise NotImplementedError("Statistics are not implemented on time axis parallel method.") + time_interval = self.get_time_interval() + if type_op == 'calendar': + aux_time_bounds = [] + aux_time = [] + day_list = [date_aux.day for date_aux in self.time] + for var_name, var_info in self.variables.items(): + if var_info['data'] is None: + self.load(var_name) + stat_data = None + for day in np.unique(day_list): + idx_first = next(i for i, val in enumerate(day_list, 0) if val == day) + idx_last = len(day_list) - next(i for i, val in enumerate(reversed(day_list), 1) if val == day) + if idx_first != idx_last: # To avoid single time step statistic + if idx_last != len(day_list): + if op == 'mean': + data_aux = var_info['data'][idx_first:idx_last + 1, :, :, :].mean(axis=0) + elif op == 'max': + data_aux = var_info['data'][idx_first:idx_last + 1, :, :, :].max(axis=0) + elif op == 'min': + data_aux = var_info['data'][idx_first:idx_last + 1, :, :, :].min(axis=0) + else: + raise NotImplementedError("Statistic operation '{0}' is not implemented.".format(op)) + aux_time_bounds.append([self.time[idx_first], self.time[idx_last]]) + else: + if op == 'mean': + data_aux = var_info['data'][idx_first:, :, :, :].mean(axis=0) + elif op == 'max': + data_aux = var_info['data'][idx_first:, :, :, :].max(axis=0) + elif op == 'min': + data_aux = var_info['data'][idx_first:, :, :, :].min(axis=0) + else: + raise NotImplementedError("Statistic operation '{0}' is not implemented.".format(op)) + aux_time_bounds.append([self.time[idx_first], self.time[-1]]) + + data_aux = data_aux.reshape((1, data_aux.shape[0], data_aux.shape[1], data_aux.shape[2])) + aux_time.append(self.time[idx_first].replace(hour=0, minute=0, second=0)) + # Append over time dimension + if stat_data is None: + stat_data = data_aux.copy() + else: + stat_data = np.vstack([stat_data, data_aux]) + self.variables[var_name]['data'] = stat_data + self.variables[var_name]['cell_methods'] = "time: {0} (interval: {1}hr)".format(op, time_interval) + self.time = aux_time + self._time = self.time + + self.set_time_bnds(aux_time_bounds) + + elif type_op == 'alltsteps': + for var_name, var_info in self.variables.items(): + if var_info['data'] is None: + self.load(var_name) + if op == 'mean': + aux_data = var_info['data'].mean(axis=0) + elif op == 'max': + aux_data = var_info['data'].max(axis=0) + elif op == 'min': + aux_data = var_info['data'].min(axis=0) + else: + raise NotImplementedError("Statistic operation '{0}' is not implemented.".format(op)) + if len(aux_data.shape) == 3: + aux_data = aux_data.reshape((1, aux_data.shape[0], aux_data.shape[1], aux_data.shape[2])) + self.variables[var_name]['data'] = aux_data + self.variables[var_name]['cell_methods'] = "time: {0} (interval: {1}hr)".format(op, time_interval) + + aux_time = self.time[0].replace(hour=0, minute=0, second=0, microsecond=0) + aux_time_bounds = [[self.time[0], 
self.time[-1]]] + self.time = [aux_time] + self._time = self.time + + self.set_time_bnds(aux_time_bounds) + + elif type_op == 'withoutt0': + for var_name, var_info in self.variables.items(): + if var_info['data'] is None: + self.load(var_name) + if op == 'mean': + aux_data = var_info['data'][1:, :].mean(axis=0) + elif op == 'max': + aux_data = var_info['data'][1:, :].max(axis=0) + elif op == 'min': + aux_data = var_info['data'][1:, :].min(axis=0) + else: + raise NotImplementedError("Statistic operation '{0}' is not implemented.".format(op)) + if len(aux_data.shape) == 3: + aux_data = aux_data.reshape((1, aux_data.shape[0], aux_data.shape[1], aux_data.shape[2])) + self.variables[var_name]['data'] = aux_data + self.variables[var_name]['cell_methods'] = "time: {0} (interval: {1}hr)".format(op, time_interval) + aux_time = self._time[1].replace(hour=0, minute=0, second=0, microsecond=0) + aux_time_bounds = [[self._time[1], self._time[-1]]] + self.time = [aux_time] + self._time = self.time + + self.set_time_bnds(aux_time_bounds) + else: + raise NotImplementedError("Statistic operation type '{0}' is not implemented.".format(type_op)) + self.hours_start = 0 + self.hours_end = 0 + + return None
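A statistics sketch (the variable and output names are hypothetical): the 'calendar' mode groups the time steps into days using the time metadata:

    nessy.load('sconco3')
    nessy.daily_statistic(op='max', type_op='calendar')
    nessy.to_netcdf('daily_max.nc')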
+ + + @staticmethod + def _get_axis_index_(axis): + + if axis == 'T': + value = 0 + elif axis == 'Z': + value = 1 + elif axis == 'Y': + value = 2 + elif axis == 'X': + value = 3 + else: + raise ValueError("Unknown axis: {0}".format(axis)) + + return value + +
+    def sum_axis(self, axis='Z'):
+        """
+        Sum the data along the given axis, keeping a length-1 dimension.
+
+        Parameters
+        ----------
+        axis : str
+            Axis to sum along. Accepted values: 'T', 'Z', 'Y' and 'X'.
+        """
+
+        if self.parallel_method == axis:
+            raise NotImplementedError(
+                "It is not possible to sum along the axis used for the parallelization ('{0}').".format(
+                    self.parallel_method))
+
+        for var_name, var_info in self.variables.items():
+            if var_info['data'] is not None:
+                self.variables[var_name]['data'] = self.variables[var_name]['data'].sum(
+                    axis=self._get_axis_index_(axis), keepdims=True)
+                if axis == 'T':
+                    self.variables[var_name]['cell_methods'] = "time: sum (interval: {0}hr)".format(
+                        (self.time[-1] - self.time[0]).total_seconds() // 3600)
+
+        if axis == 'T':
+            self.set_time_bnds([self.time[0], self.time[-1]])
+            self.time = [self.time[0]]
+            self._time = [self._time[0]]
+        if axis == 'Z':
+            self.lev['data'] = [self.lev['data'][0]]
+            self._lev['data'] = [self._lev['data'][0]]
+
+        return None
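+
+    # Usage sketch (illustrative): collapsing the vertical axis of every loaded
+    # variable into a single level while keeping the 4D shape (keepdims=True).
+    # `nessy` is assumed to be an already-open Nes object:
+    #
+    #     nessy.load()
+    #     nessy.sum_axis(axis='Z')   # data shape becomes (t, 1, y, x)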
+ + +
+    def find_time_id(self, time):
+        """
+        Find the index of a time value in the time array.
+
+        Parameters
+        ----------
+        time : datetime.datetime
+            Time element.
+
+        Returns
+        -------
+        int or None
+            Index of the time element, or None if it is not present.
+        """
+
+        if time in self.time:
+            return self.time.index(time)
+        return None
+ + +
+    def rolling_mean(self, var_list=None, hours=8):
+        """
+        Calculate the rolling mean for the given window, in hours.
+
+        Parameters
+        ----------
+        var_list : List, str, None
+            List (or single string) of the variables to be loaded.
+        hours : int, optional
+            Window, in hours, used to calculate the rolling mean. Default: 8.
+
+        Returns
+        -------
+        Nes
+            A new Nes object with the rolling means.
+        """
+
+        if self.parallel_method == 'T':
+            raise NotImplementedError("The rolling mean cannot be calculated using the time axis parallel method.")
+
+        aux_nessy = self.copy(copy_vars=False)
+        aux_nessy.set_communicator(self.comm)
+
+        if isinstance(var_list, str):
+            var_list = [var_list]
+        elif var_list is None:
+            var_list = list(self.variables.keys())
+
+        for var_name in var_list:
+            # Load variables if they have not been loaded previously
+            if self.variables[var_name]['data'] is None:
+                self.load(var_name)
+
+            # Get original file shape
+            nessy_shape = self.variables[var_name]['data'].shape
+
+            # Initialise array
+            aux_nessy.variables[var_name] = {}
+            aux_nessy.variables[var_name]['data'] = np.empty(shape=nessy_shape)
+            aux_nessy.variables[var_name]['dimensions'] = deepcopy(self.variables[var_name]['dimensions'])
+
+            for curr_time in self.time:
+                # Get the start of the window, 'hours - 1' hours back
+                prev_time = curr_time - timedelta(hours=(hours - 1))
+
+                # Get time indices
+                curr_time_id = self.find_time_id(curr_time)
+                prev_time_id = self.find_time_id(prev_time)
+
+                # Get mean if the start of the window is available
+                if prev_time_id is not None:
+                    if self.info:
+                        print(f'Calculating mean between {prev_time} and {curr_time}.')
+                    # Include the current time step so the window covers 'hours' steps
+                    aux_nessy.variables[var_name]['data'][curr_time_id, :, :, :] = self.variables[var_name]['data'][
+                        prev_time_id:curr_time_id + 1, :, :, :].mean(axis=0, keepdims=True)
+                # Fill with NaN if the start of the window is not available
+                else:
+                    if self.info:
+                        msg = f'Mean between {prev_time} and {curr_time} cannot be calculated '
+                        msg += f'because data for {prev_time} is not available.'
+                        print(msg)
+                    aux_nessy.variables[var_name]['data'][curr_time_id, :, :, :] = np.full(
+                        shape=(1, nessy_shape[1], nessy_shape[2], nessy_shape[3]), fill_value=np.nan)
+
+        return aux_nessy
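+
+    # Usage sketch (illustrative): an 8-hour rolling mean of a hypothetical
+    # 'O3' variable. Time steps whose window start is missing are set to NaN.
+    #
+    #     rolled = nessy.rolling_mean(var_list='O3', hours=8)
+    #     rolled.to_netcdf('o3_rolling_8h.nc')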
+ + + # ================================================================================================================== + # Reading + # ================================================================================================================== + +
+    def get_read_axis_limits(self):
+        """
+        Calculate the 4D reading axis limits, depending on whether they have to be balanced or not.
+
+        Returns
+        -------
+        dict
+            Dictionary with the 4D limits of the rank data to read.
+            t_min, t_max, z_min, z_max, y_min, y_max, x_min and x_max.
+        """
+
+        if self.balanced:
+            return self.get_read_axis_limits_balanced()
+        else:
+            return self.get_read_axis_limits_unbalanced()
+ + +
+    def get_read_axis_limits_unbalanced(self):
+        """
+        Calculate the 4D reading axis limits.
+
+        Returns
+        -------
+        dict
+            Dictionary with the 4D limits of the rank data to read.
+            t_min, t_max, z_min, z_max, y_min, y_max, x_min and x_max.
+        """
+
+        axis_limits = {'x_min': None, 'x_max': None,
+                       'y_min': None, 'y_max': None,
+                       'z_min': None, 'z_max': None,
+                       't_min': None, 't_max': None}
+
+        idx = self.get_idx_intervals()
+        if self.parallel_method == 'Y':
+            y_len = idx['idx_y_max'] - idx['idx_y_min']
+            if y_len < self.size:
+                raise IndexError('More processors (size={0}) selected than Y elements (size={1})'.format(
+                    self.size, y_len))
+            axis_limits['y_min'] = ((y_len // self.size) * self.rank) + idx['idx_y_min']
+            if self.rank + 1 < self.size:
+                axis_limits['y_max'] = ((y_len // self.size) * (self.rank + 1)) + idx['idx_y_min']
+            else:
+                axis_limits['y_max'] = idx['idx_y_max']
+
+            # Non parallel filters
+            axis_limits['x_min'] = idx['idx_x_min']
+            axis_limits['x_max'] = idx['idx_x_max']
+
+            axis_limits['t_min'] = idx['idx_t_min']
+            axis_limits['t_max'] = idx['idx_t_max']
+
+        elif self.parallel_method == 'X':
+            x_len = idx['idx_x_max'] - idx['idx_x_min']
+            if x_len < self.size:
+                raise IndexError('More processors (size={0}) selected than X elements (size={1})'.format(
+                    self.size, x_len))
+            axis_limits['x_min'] = ((x_len // self.size) * self.rank) + idx['idx_x_min']
+            if self.rank + 1 < self.size:
+                axis_limits['x_max'] = ((x_len // self.size) * (self.rank + 1)) + idx['idx_x_min']
+            else:
+                axis_limits['x_max'] = idx['idx_x_max']
+
+            # Non parallel filters
+            axis_limits['y_min'] = idx['idx_y_min']
+            axis_limits['y_max'] = idx['idx_y_max']
+
+            axis_limits['t_min'] = idx['idx_t_min']
+            axis_limits['t_max'] = idx['idx_t_max']
+
+        elif self.parallel_method == 'T':
+            t_len = idx['idx_t_max'] - idx['idx_t_min']
+            if t_len < self.size:
+                raise IndexError('More processors (size={0}) selected than T elements (size={1})'.format(
+                    self.size, t_len))
+            axis_limits['t_min'] = ((t_len // self.size) * self.rank) + idx['idx_t_min']
+            if self.rank + 1 < self.size:
+                axis_limits['t_max'] = ((t_len // self.size) * (self.rank + 1)) + idx['idx_t_min']
+            else:
+                # Last rank: stop at the last requested time step (consistent with the Y and X branches)
+                axis_limits['t_max'] = idx['idx_t_max']
+
+            # Non parallel filters
+            axis_limits['y_min'] = idx['idx_y_min']
+            axis_limits['y_max'] = idx['idx_y_max']
+
+            axis_limits['x_min'] = idx['idx_x_min']
+            axis_limits['x_max'] = idx['idx_x_max']
+
+        else:
+            raise NotImplementedError("Parallel method '{meth}' is not implemented. Use one of these: {accept}".format(
+                meth=self.parallel_method, accept=['X', 'Y', 'T']))
+
+        # Vertical levels selection:
+        axis_limits['z_min'] = self.first_level
+        if self.last_level == -1 or self.last_level is None:
+            self.last_level = None
+        elif self.last_level + 1 == len(self._lev['data']):
+            self.last_level = None
+        else:
+            self.last_level += 1
+        axis_limits['z_max'] = self.last_level
+
+        return axis_limits
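+
+    # Minimal sketch of the unbalanced split above (pure Python, hypothetical
+    # sizes): each rank takes len // size elements and the last rank absorbs
+    # the remainder, which is why the balanced variant below exists.
+    #
+    #     size, y_len = 4, 10
+    #     for rank in range(size):
+    #         y_min = (y_len // size) * rank
+    #         y_max = (y_len // size) * (rank + 1) if rank + 1 < size else y_len
+    #         print(rank, (y_min, y_max))   # (0, 2), (2, 4), (4, 6), (6, 10)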
+ + +
+[docs] + def get_read_axis_limits_balanced(self): + """ + Calculate the 4D reading balanced axis limits. + + Returns + ------- + dict + Dictionary with the 4D limits of the rank data to read. + t_min, t_max, z_min, z_max, y_min, y_max, x_min and x_max. + """ + idx = self.get_idx_intervals() + + fid_dist = {} + if self.parallel_method == 'Y': + len_to_split = idx['idx_y_max'] - idx['idx_y_min'] + if len_to_split < self.size: + raise IndexError('More processors (size={0}) selected than Y elements (size={1})'.format( + self.size, len_to_split)) + min_axis = 'y_min' + max_axis = 'y_max' + to_add = idx['idx_y_min'] + + elif self.parallel_method == 'X': + len_to_split = idx['idx_x_max'] - idx['idx_x_min'] + if len_to_split < self.size: + raise IndexError('More processors (size={0}) selected than X elements (size={1})'.format( + self.size, len_to_split)) + min_axis = 'x_min' + max_axis = 'x_max' + to_add = idx['idx_x_min'] + elif self.parallel_method == 'T': + len_to_split = idx['idx_t_max'] - idx['idx_t_min'] + if len_to_split < self.size: + raise IndexError('More processors (size={0}) selected than T elements (size={1})'.format( + self.size, len_to_split)) + min_axis = 't_min' + max_axis = 't_max' + to_add = idx['idx_t_min'] + else: + raise NotImplementedError("Parallel method '{meth}' is not implemented. Use one of these: {accept}".format( + meth=self.parallel_method, accept=['X', 'Y', 'T'])) + + procs_len = len_to_split // self.size + procs_rows_extended = len_to_split - (procs_len * self.size) + + rows_sum = 0 + for proc in range(self.size): + fid_dist[proc] = {'x_min': 0, 'x_max': None, + 'y_min': 0, 'y_max': None, + 'z_min': 0, 'z_max': None, + 't_min': 0, 't_max': None} + if proc < procs_rows_extended: + aux_rows = procs_len + 1 + else: + aux_rows = procs_len + + len_to_split -= aux_rows + if len_to_split < 0: + rows = len_to_split + aux_rows + else: + rows = aux_rows + + fid_dist[proc][min_axis] = rows_sum + fid_dist[proc][max_axis] = rows_sum + rows + + if to_add is not None: + fid_dist[proc][min_axis] += to_add + fid_dist[proc][max_axis] += to_add + + # # Last element + # if len_to_split == 0 and to_add == 0: + # fid_dist[proc][max_axis] = None + + rows_sum += rows + + axis_limits = fid_dist[self.rank] + + # Non parallel filters + if self.parallel_method != 'T': + axis_limits['t_min'] = idx['idx_t_min'] + axis_limits['t_max'] = idx['idx_t_max'] + if self.parallel_method != 'X': + axis_limits['x_min'] = idx['idx_x_min'] + axis_limits['x_max'] = idx['idx_x_max'] + if self.parallel_method != 'Y': + axis_limits['y_min'] = idx['idx_y_min'] + axis_limits['y_max'] = idx['idx_y_max'] + + # Vertical levels selection: + axis_limits['z_min'] = self.first_level + if self.last_level == -1 or self.last_level is None: + self.last_level = None + elif self.last_level + 1 == len(self._lev['data']): + self.last_level = None + else: + self.last_level += 1 + axis_limits['z_max'] = self.last_level + + return axis_limits
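+
+    # Equivalent sketch of the balanced distribution (hypothetical sizes): the
+    # remainder is spread one extra row per rank over the first ranks instead
+    # of piling up on the last one.
+    #
+    #     size, n = 4, 10
+    #     base, extra = divmod(n, size)
+    #     start = 0
+    #     for rank in range(size):
+    #         rows = base + 1 if rank < extra else base
+    #         print(rank, (start, start + rows))   # (0, 3), (3, 6), (6, 8), (8, 10)
+    #         start += rows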
+ + +
+[docs] + def get_time_id(self, hours, first=True): + """ + Get the index of the corresponding time value. + + Parameters + ---------- + hours : int + Number of hours to avoid. + first : bool + Indicates if you want to avoid from the first hours (True) or from the last (False). + Default: True. + + Returns + ------- + int + Index of the time array. + """ + + if first: + idx = self._time.index(self._time[0] + timedelta(hours=hours)) + else: + idx = self._time.index(self._time[-1] - timedelta(hours=hours)) + 1 + + return idx
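+
+    # Usage sketch (illustrative): skipping the first 3 hours of the dataset.
+    #
+    #     idx = nessy.get_time_id(3, first=True)
+    #     nessy.time[idx]   # first kept time step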
+ + +
+    @staticmethod
+    def get_coordinate_id(array, value, axis=0):
+        """
+        Get the index of the coordinate value closest to the given one.
+
+        Parameters
+        ----------
+        array : np.array
+            Array with the coordinate data.
+        value : float
+            Coordinate value to search for.
+        axis : int
+            Axis along which to search for the value.
+            Default: 0.
+
+        Returns
+        -------
+        int
+            Index of the coordinate array.
+        """
+        idx = (np.abs(array - value)).argmin(axis=axis).min()
+
+        return idx
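+
+    # Usage sketch (illustrative): nearest-neighbour index lookup on a 1D
+    # coordinate array.
+    #
+    #     lats = np.array([10.0, 20.0, 30.0])
+    #     Nes.get_coordinate_id(lats, 21.3)   # -> 1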
+ + +
+[docs] + def open(self): + """ + Open the NetCDF. + """ + + if self.is_xarray: + self.dataset = self.__open_dataset() + self.netcdf = None + else: + self.dataset = None + self.netcdf = self.__open_netcdf4() + + return None
+ + + def __open_dataset(self): + """ + Open the NetCDF with xarray. + + Returns + ------- + dataset : xr.Dataset + Open dataset. + """ + + if self.master: + warnings.filterwarnings('ignore') # Disabling warnings while reading MONARCH original file + dataset = open_dataset(self.__ini_path, decode_coords='all') + warnings.filterwarnings('default') # Re-activating warnings + else: + dataset = None + + dataset = self.comm.bcast(dataset, root=0) + self.dataset = dataset + + return dataset + + def __open_netcdf4(self, mode='r'): + """ + Open the NetCDF with netcdf4-python. + + Parameters + ---------- + mode : str + Inheritance from mode parameter from https://unidata.github.io/netcdf4-python/#Dataset.__init__ + Default: 'r' (read-only). + Returns + ------- + netcdf : Dataset + Open dataset. + """ + + if self.size == 1: + netcdf = Dataset(self.__ini_path, format="NETCDF4", mode=mode, parallel=False) + else: + netcdf = Dataset(self.__ini_path, format="NETCDF4", mode=mode, parallel=True, comm=self.comm, + info=MPI.Info()) + self.netcdf = netcdf + + return netcdf + +
+[docs] + def close(self): + """ + Close the NetCDF with netcdf4-python. + """ + if (hasattr(self, 'serial_nc')) and (self.serial_nc is not None): + if self.master: + self.serial_nc.close() + self.serial_nc = None + if (hasattr(self, 'netcdf')) and (self.netcdf is not None): + self.netcdf.close() + self.netcdf = None + + return None
+
+
+    def __get_dates_from_months(self, time, units, calendar):
+        """
+        Calculate the number of days since the first date for each element in
+        the 'time' list and store them in a new list.
+        This is useful when the units are 'months since', which cannot be
+        transformed to dates using num2date.
+
+        Parameters
+        ----------
+        time : List
+            Original time.
+        units : str
+            CF compliant time units.
+        calendar : str
+            Original calendar.
+
+        Returns
+        -------
+        List
+            CF compliant time, as days since the base date.
+        """
+
+        start_date_str = time.units.split('since')[1].lstrip()
+        start_date = datetime.datetime(int(start_date_str[0:4]),
+                                       int(start_date_str[5:7]),
+                                       int(start_date_str[8:10]))
+
+        new_time_deltas = []
+
+        for month_delta in time[:]:
+            # Transform current_date into number of days since base date
+            current_date = start_date + relativedelta(months=month_delta)
+
+            # Calculate number of days between base date and the other dates
+            n_days = int((current_date - start_date).days)
+
+            # Store in list
+            new_time_deltas.append(n_days)
+
+        return new_time_deltas
+
+    def __parse_time(self, time):
+        """
+        Parse the time to be CF compliant.
+
+        Parameters
+        ----------
+        time : Variable
+            Original time variable (netCDF4).
+
+        Returns
+        -------
+        time_data : array-like
+            CF compliant time values.
+        units : str
+            CF compliant time units.
+        calendar : str
+            Time calendar.
+        """
+
+        units = self.__parse_time_unit(time.units)
+
+        if not hasattr(time, 'calendar'):
+            calendar = 'standard'
+        else:
+            calendar = time.calendar
+
+        if 'months since' in time.units:
+            units = 'days since ' + time.units.split('since')[1].lstrip()
+            time = self.__get_dates_from_months(time, units, calendar)
+
+        time_data = time[:]
+
+        if len(time_data) == 1 and np.isnan(time_data[0]):
+            time_data[0] = 0
+
+        return time_data, units, calendar
+
+    @staticmethod
+    def __parse_time_unit(t_units):
+        """
+        Parse the time units to be CF compliant.
+
+        Parameters
+        ----------
+        t_units : str
+            Original time units.
+
+        Returns
+        -------
+        t_units : str
+            CF compliant time units.
+        """
+
+        if 'h @' in t_units:
+            t_units = 'hours since {0}-{1}-{2} {3}:{4}:{5} UTC'.format(
+                t_units[4:8], t_units[8:10], t_units[10:12], t_units[13:15], t_units[15:17], t_units[17:-4])
+
+        return t_units
+
+    @staticmethod
+    def __get_time_resolution_from_units(units):
+        """
+        Parse the time units to get the time resolution.
+
+        Parameters
+        ----------
+        units : str
+            Time variable units.
+
+        Returns
+        -------
+        str
+            Time variable resolution.
+        """
+        # Note: 'day' also matches 'days', 'hour' also matches 'hours', etc.
+        if 'day' in units:
+            resolution = 'days'
+        elif 'hour' in units:
+            resolution = 'hours'
+        elif 'minute' in units:
+            resolution = 'minutes'
+        elif 'second' in units:
+            resolution = 'seconds'
+        else:
+            # Default resolution is 'hours'
+            resolution = 'hours'
+        return resolution
+
+    def __get_time(self):
+        """
+        Get the NetCDF file time values.
+
+        Returns
+        -------
+        time : List
+            List of times (datetime.datetime) of the NetCDF data.
+ """ + + if self.is_xarray: + time = self.variables['time'] + else: + if self.master: + nc_var = self.netcdf.variables['time'] + time_data, units, calendar = self.__parse_time(nc_var) + # Extracting time resolution depending on the units + self._time_resolution = self.__get_time_resolution_from_units(units) + # Checking if it is a climatology dataset + if hasattr(nc_var, 'climatology'): + self._climatology = True + self._climatology_var_name = nc_var.climatology + time = num2date(time_data, units, calendar=calendar) + time = [aux.replace(second=0, microsecond=0) for aux in time] + else: + time = None + time = self.comm.bcast(time, root=0) + self.free_vars('time') + + return time + + def __get_time_bnds(self, create_nes=False): + """ + Get the NetCDF time bounds values. + + Parameters + ---------- + create_nes : bool + Indicates if you want to create the object from scratch (True) or through an existing file. + + Returns + ------- + time_bnds : List + List of time bounds (datetime) of the NetCDF data. + """ + + if self.master: + if not create_nes: + if 'time_bnds' in self.netcdf.variables.keys() or self._climatology: + time = self.netcdf.variables['time'] + if self._climatology: + nc_var = self.netcdf.variables[self._climatology_var_name] + else: + nc_var = self.netcdf.variables['time_bnds'] + time_bnds = num2date(nc_var[:], self.__parse_time_unit(time.units), + calendar=time.calendar).tolist() + else: + time_bnds = None + else: + time_bnds = None + else: + time_bnds = None + + time_bnds = self.comm.bcast(time_bnds, root=0) + + self.free_vars('time_bnds') + + return time_bnds + + def __get_coordinates_bnds(self, create_nes=False): + """ + Get the NetCDF coordinates bounds values. + + Parameters + ---------- + create_nes : bool + Indicates if you want to create the object from scratch (True) or through an existing file. + + Returns + ------- + lat_bnds : List + Latitude bounds of the NetCDF data. + lon_bnds : List + Longitude bounds of the NetCDF data. + """ + + if self.is_xarray: + lat_bnds = self.variables['lat_bnds'] + lon_bnds = self.variables['lon_bnds'] + else: + if self.master: + if not create_nes: + if 'lat_bnds' in self.netcdf.variables.keys(): + lat_bnds = {'data': self._unmask_array(self.netcdf.variables['lat_bnds'][:])} + else: + lat_bnds = None + if 'lon_bnds' in self.netcdf.variables.keys(): + lon_bnds = {'data': self._unmask_array(self.netcdf.variables['lon_bnds'][:])} + else: + lon_bnds = None + else: + lat_bnds = None + lon_bnds = None + else: + lat_bnds = None + lon_bnds = None + lat_bnds = self.comm.bcast(lat_bnds, root=0) + lon_bnds = self.comm.bcast(lon_bnds, root=0) + + self.free_vars(['lat_bnds', 'lon_bnds']) + + return lat_bnds, lon_bnds + + def __get_cell_measures(self, create_nes=False): + """ + Get the NetCDF cell measures values. + + Parameters + ---------- + create_nes : bool + Indicates if you want to create the object from scratch (True) or through an existing file. + + Returns + ------- + dict + Dictionary of cell measures of the NetCDF data. + """ + + c_measures = {} + if self.master: + if not create_nes: + if 'cell_area' in self.netcdf.variables.keys(): + c_measures['cell_area'] = {} + c_measures['cell_area']['data'] = self._unmask_array(self.netcdf.variables['cell_area'][:]) + c_measures = self.comm.bcast(c_measures, root=0) + + self.free_vars(['cell_area']) + + return c_measures + + def _get_coordinate_dimension(self, possible_names): + """ + Read the coordinate dimension data. + + This will read the complete data of the coordinate. 
+
+        Parameters
+        ----------
+        possible_names : List, str
+            List (or single string) of the possible names of the coordinate (e.g. ['lat', 'latitude']).
+
+        Returns
+        -------
+        nc_var : dict
+            Dictionary with the 'data' key holding the coordinate variable values and the attributes as other keys.
+        """
+
+        if isinstance(possible_names, str):
+            possible_names = [possible_names]
+
+        try:
+            dimension_name = set(possible_names).intersection(set(self.variables.keys())).pop()
+            if self.is_xarray:
+                nc_var = self.dataset[dimension_name]
+            else:
+                nc_var = self.variables[dimension_name].copy()
+                nc_var['data'] = self.netcdf.variables[dimension_name][:]
+                if 'units' in nc_var:
+                    if nc_var['units'] in ['unitless', '-']:
+                        nc_var['units'] = ''
+            self.free_vars(dimension_name)
+        except KeyError:
+            nc_var = {'data': np.array([0]),
+                      'units': ''}
+
+        return nc_var
+
+    def _get_coordinate_values(self, coordinate_info, coordinate_axis, bounds=False):
+        """
+        Get the coordinate data of the current portion.
+
+        Parameters
+        ----------
+        coordinate_info : dict, list
+            Dictionary with the 'data' key holding the coordinate variable values and the attributes as other keys.
+        coordinate_axis : str
+            Name of the coordinate to extract. Accepted values: ['Z', 'Y', 'X'].
+        bounds : bool
+            Boolean variable to know if there are coordinate bounds.
+
+        Returns
+        -------
+        values : dict
+            Dictionary with the portion of data corresponding to the rank.
+        """
+
+        if coordinate_info is None:
+            return None
+
+        if not isinstance(coordinate_info, dict):
+            values = {'data': deepcopy(coordinate_info)}
+        else:
+            values = deepcopy(coordinate_info)
+
+        coordinate_len = len(values['data'].shape)
+        if bounds:
+            coordinate_len -= 1
+
+        if coordinate_axis == 'Y':
+            if coordinate_len == 1:
+                values['data'] = values['data'][self.read_axis_limits['y_min']:self.read_axis_limits['y_max']]
+            elif coordinate_len == 2:
+                values['data'] = values['data'][self.read_axis_limits['y_min']:self.read_axis_limits['y_max'],
+                                                self.read_axis_limits['x_min']:self.read_axis_limits['x_max']]
+            else:
+                raise NotImplementedError("The coordinate has wrong dimensions: {dim}".format(
+                    dim=values['data'].shape))
+        elif coordinate_axis == 'X':
+            if coordinate_len == 1:
+                values['data'] = values['data'][self.read_axis_limits['x_min']:self.read_axis_limits['x_max']]
+            elif coordinate_len == 2:
+                values['data'] = values['data'][self.read_axis_limits['y_min']:self.read_axis_limits['y_max'],
+                                                self.read_axis_limits['x_min']:self.read_axis_limits['x_max']]
+            else:
+                raise NotImplementedError("The coordinate has wrong dimensions: {dim}".format(
+                    dim=values['data'].shape))
+        elif coordinate_axis == 'Z':
+            if coordinate_len == 1:
+                values['data'] = values['data'][self.read_axis_limits['z_min']:self.read_axis_limits['z_max']]
+            else:
+                raise NotImplementedError("The coordinate has wrong dimensions: {dim}".format(
+                    dim=values['data'].shape))
+
+        return values
+
+    def _get_cell_measures_values(self, cell_measures_info):
+        """
+        Get the cell measures data of the current portion.
+
+        Parameters
+        ----------
+        cell_measures_info : dict, list
+            Dictionary with the 'data' key holding the cell measures variable values and the attributes as other keys.
+
+        Returns
+        -------
+        values : dict
+            Dictionary with the portion of data corresponding to the rank.
+ """ + + if cell_measures_info is None: + return None + + cell_measures_values = {} + + for cell_measures_var in cell_measures_info.keys(): + + values = deepcopy(cell_measures_info[cell_measures_var]) + coordinate_len = len(values['data'].shape) + + if coordinate_len == 1: + values['data'] = values['data'][self.read_axis_limits['x_min']:self.read_axis_limits['x_max']] + elif coordinate_len == 2: + values['data'] = values['data'][self.read_axis_limits['y_min']:self.read_axis_limits['y_max'], + self.read_axis_limits['x_min']:self.read_axis_limits['x_max']] + else: + raise NotImplementedError("The coordinate has wrong dimensions: {dim}".format( + dim=values['data'].shape)) + + cell_measures_values[cell_measures_var] = values + + return cell_measures_values + + def _get_lazy_variables(self): + """ + Get all the variables' information. + + Returns + ------- + variables : dict + Dictionary with the variable name as key and another dictionary as value. + De value dictionary will have the 'data' key with None as value and all the variable attributes as the + other keys. + e.g. + {'var_name_1': {'data': None, 'attr_1': value_1_1, 'attr_2': value_1_2, ...}, + 'var_name_2': {'data': None, 'attr_1': value_2_1, 'attr_2': value_2_2, ...}, + ...} + """ + + if self.is_xarray: + variables = self.dataset.variables + else: + if self.master: + variables = {} + # Initialise data + for var_name, var_info in self.netcdf.variables.items(): + variables[var_name] = {} + variables[var_name]['data'] = None + variables[var_name]['dimensions'] = var_info.dimensions + variables[var_name]['dtype'] = var_info.dtype + if variables[var_name]['dtype'] in [str, np.object]: + if self.strlen is None: + self.set_strlen() + variables[var_name]['dtype'] = str + + # Avoid some attributes + for attrname in var_info.ncattrs(): + if attrname not in ['missing_value', '_FillValue']: + value = getattr(var_info, attrname) + if value in ['unitless', '-']: + value = '' + variables[var_name][attrname] = value + else: + variables = None + variables = self.comm.bcast(variables, root=0) + + return variables + + def _read_variable(self, var_name): + """ + Read the corresponding variable data according to the current rank. + + Parameters + ---------- + var_name : str + Name of the variable to read. + + Returns + ------- + data: np.array + Portion of the variable data corresponding to the rank. 
+ """ + + nc_var = self.netcdf.variables[var_name] + var_dims = nc_var.dimensions + + # Read data in 4 dimensions + if len(var_dims) < 2: + data = nc_var[:] + elif len(var_dims) == 2: + data = nc_var[self.read_axis_limits['y_min']:self.read_axis_limits['y_max'], + self.read_axis_limits['x_min']:self.read_axis_limits['x_max']] + data = data.reshape(1, 1, data.shape[-2], data.shape[-1]) + elif len(var_dims) == 3: + if 'strlen' in var_dims: + data = nc_var[self.read_axis_limits['y_min']:self.read_axis_limits['y_max'], + self.read_axis_limits['x_min']:self.read_axis_limits['x_max'], + :] + data_aux = np.empty(shape=(data.shape[0], data.shape[1]), dtype=np.object) + for lat_n in range(data.shape[0]): + for lon_n in range(data.shape[1]): + data_aux[lat_n, lon_n] = ''.join( + data[lat_n, lon_n].tostring().decode('ascii').replace('\x00', '')) + data = data_aux.reshape((1, 1, data_aux.shape[-2], data_aux.shape[-1])) + else: + data = nc_var[self.read_axis_limits['t_min']:self.read_axis_limits['t_max'], + self.read_axis_limits['y_min']:self.read_axis_limits['y_max'], + self.read_axis_limits['x_min']:self.read_axis_limits['x_max']] + data = data.reshape(data.shape[-3], 1, data.shape[-2], data.shape[-1]) + elif len(var_dims) == 4: + data = nc_var[self.read_axis_limits['t_min']:self.read_axis_limits['t_max'], + self.read_axis_limits['z_min']:self.read_axis_limits['z_max'], + self.read_axis_limits['y_min']:self.read_axis_limits['y_max'], + self.read_axis_limits['x_min']:self.read_axis_limits['x_max']] + elif len(var_dims) == 5: + if 'strlen' in var_dims: + data = nc_var[self.read_axis_limits['t_min']:self.read_axis_limits['t_max'], + self.read_axis_limits['z_min']:self.read_axis_limits['z_max'], + self.read_axis_limits['y_min']:self.read_axis_limits['y_max'], + self.read_axis_limits['x_min']:self.read_axis_limits['x_max'], + :] + data_aux = np.empty(shape=(data.shape[0], data.shape[1], data.shape[2], data.shape[3]), dtype=np.object) + for time_n in range(data.shape[0]): + for lev_n in range(data.shape[1]): + for lat_n in range(data.shape[2]): + for lon_n in range(data.shape[3]): + data_aux[time_n, lev_n, lat_n, lon_n] = ''.join( + data[time_n, lev_n, lat_n, lon_n].tostring().decode('ascii').replace('\x00', '')) + data = data_aux + else: + # data = nc_var[self.read_axis_limits['t_min']:self.read_axis_limits['t_max'], + # :, + # self.read_axis_limits['z_min']:self.read_axis_limits['z_max'], + # self.read_axis_limits['y_min']:self.read_axis_limits['y_max'], + # self.read_axis_limits['x_min']:self.read_axis_limits['x_max']] + raise NotImplementedError('Error with {0}. Only can be read netCDF with 4 dimensions or less'.format( + var_name)) + else: + raise NotImplementedError('Error with {0}. Only can be read netCDF with 4 dimensions or less'.format( + var_name)) + + # Unmask array + data = self._unmask_array(data) + + return data + +
+    def load(self, var_list=None):
+        """
+        Load the selected variables.
+
+        This method fills the 'data' key of each selected variable with the corresponding values.
+
+        Parameters
+        ----------
+        var_list : List, str, None
+            List (or single string) of the variables to be loaded.
+        """
+
+        if (self.__ini_path is None) and (self.dataset is None) and (self.netcdf is None):
+            raise RuntimeError('Only data from existing files can be loaded.')
+
+        if self.netcdf is None:
+            self.__open_dataset()
+            close = True
+        else:
+            close = False
+
+        if isinstance(var_list, str):
+            var_list = [var_list]
+        elif var_list is None:
+            var_list = list(self.variables.keys())
+
+        for i, var_name in enumerate(var_list):
+            if self.info:
+                print("Rank {0:03d}: Loading {1} var ({2}/{3})".format(self.rank, var_name, i + 1, len(var_list)))
+            if self.variables[var_name]['data'] is None:
+                self.variables[var_name]['data'] = self._read_variable(var_name)
+                # Data type changes when joining characters in read_variable (S1 to S+strlen)
+                if 'strlen' in self.variables[var_name]['dimensions']:
+                    if self.strlen is None:
+                        self.set_strlen()
+                    self.variables[var_name]['dtype'] = str
+                    self.variables[var_name]['dimensions'] = tuple([x for x in self.variables[var_name]['dimensions']
+                                                                    if x != "strlen"])
+            else:
+                if self.master:
+                    print("Data for {0} was previously loaded. Skipping variable.".format(var_name))
+            if self.info:
+                print("Rank {0:03d}: Loaded {1} var ({2})".format(
+                    self.rank, var_name, self.variables[var_name]['data'].shape))
+
+        if close:
+            self.close()
+
+        return None
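+
+    # Usage sketch (illustrative): variables are lazy after opening ('data' is
+    # None) and are filled by load(). Variable names are hypothetical, and
+    # `open_netcdf` is assumed to be the package entry point.
+    #
+    #     nessy = open_netcdf('input.nc')
+    #     nessy.load('O3')               # one variable
+    #     nessy.load(['NO2', 'PM10'])    # an explicit list
+    #     nessy.load()                   # everything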
+ + + @staticmethod + def _unmask_array(data): + """ + Missing to nan. This operation is done because sometimes the missing value is lost during the calculation. + + Parameters + ---------- + data : np.array + Masked array to unmask. + + Returns + ------- + np.array + Unmasked array. + """ + + if isinstance(data, np.ma.MaskedArray): + try: + data = data.filled(np.nan) + except TypeError: + msg = 'Data missing values cannot be converted to np.nan.' + warnings.warn(msg) + sys.stderr.flush() + + return data + +
+[docs] + def to_dtype(self, data_type='float32'): + """ Cast variables data into selected data type. + + Parameters + ---------- + data_type : str or Type + Data type, by default 'float32' + """ + + for var_name, var_info in self.variables.items(): + if isinstance(var_info['data'], np.ndarray): + self.variables[var_name]['data'] = self.variables[var_name]['data'].astype(data_type) + self.variables[var_name]['dtype'] = data_type + + return None
+ + +
+[docs] + def concatenate(self, aux_nessy): + """ + Concatenate different variables into the same nes object. + + Parameters + ---------- + aux_nessy : Nes, str + Nes object or str with the path to the NetCDF file that contains the variables to add. + + Returns + ------- + list + List of var names added. + """ + + if isinstance(aux_nessy, str): + aux_nessy = self.new(path=aux_nessy, comm=self.comm, parallel_method=self.parallel_method, + xarray=self.is_xarray, + avoid_first_hours=self.hours_start, avoid_last_hours=self.hours_end, + first_level=self.first_level, last_level=self.last_level) + new = True + else: + new = False + for var_name, var_info in aux_nessy.variables.items(): + if var_info['data'] is None: + aux_nessy.read_axis_limits = self.read_axis_limits + aux_nessy.load(var_name) + + new_vars_added = [] + for new_var_name, new_var_data in aux_nessy.variables.items(): + if new_var_name not in self.variables.keys(): + self.variables[new_var_name] = deepcopy(new_var_data) + new_vars_added.append(new_var_name) + + if new: + del aux_nessy + + return new_vars_added
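+
+    # Usage sketch (illustrative): appending the variables of another file
+    # covering the same period; the returned list names the added variables.
+    #
+    #     added = nessy.concatenate('same_period_other_vars.nc')
+    #     print(added)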
+ + + def __get_global_attributes(self, create_nes=False): + """ + Read the netcdf global attributes. + + Parameters + ---------- + create_nes : bool + Indicates if you want to create the object from scratch (True) or through an existing file. + + Returns + ------- + gl_attrs : dict + Dictionary with the netCDF global attributes. + """ + + gl_attrs = {} + if self.is_xarray: + gl_attrs = self.dataset.attrs + else: + if not create_nes: + for attrname in self.netcdf.ncattrs(): + gl_attrs[attrname] = getattr(self.netcdf, attrname) + + return gl_attrs + + # ================================================================================================================== + # Writing + # ================================================================================================================== + +
+    def get_write_axis_limits(self):
+        """
+        Calculate the 4D writing axis limits, depending on whether they have to be balanced or not.
+
+        Returns
+        -------
+        dict
+            Dictionary with the 4D limits of the rank data to write.
+            t_min, t_max, z_min, z_max, y_min, y_max, x_min and x_max.
+        """
+
+        if self.balanced:
+            return self.get_write_axis_limits_balanced()
+        else:
+            return self.get_write_axis_limits_unbalanced()
+ + +
+[docs] + def get_write_axis_limits_unbalanced(self): + """ + Calculate the 4D writing axis limits. + + Returns + ------- + dict + Dictionary with the 4D limits of the rank data to write. + t_min, t_max, z_min, z_max, y_min, y_max, x_min and x_max. + """ + + axis_limits = {'x_min': None, 'x_max': None, + 'y_min': None, 'y_max': None, + 'z_min': None, 'z_max': None, + 't_min': None, 't_max': None} + + if self.parallel_method == 'Y': + y_len = self._lat['data'].shape[0] + axis_limits['y_min'] = (y_len // self.size) * self.rank + if self.rank + 1 < self.size: + axis_limits['y_max'] = (y_len // self.size) * (self.rank + 1) + elif self.parallel_method == 'X': + x_len = self._lon['data'].shape[-1] + axis_limits['x_min'] = (x_len // self.size) * self.rank + if self.rank + 1 < self.size: + axis_limits['x_max'] = (x_len // self.size) * (self.rank + 1) + elif self.parallel_method == 'T': + t_len = len(self._time) + axis_limits['t_min'] = ((t_len // self.size) * self.rank) + if self.rank + 1 < self.size: + axis_limits['t_max'] = (t_len // self.size) * (self.rank + 1) + else: + raise NotImplementedError("Parallel method '{meth}' is not implemented. Use one of these: {accept}".format( + meth=self.parallel_method, accept=['X', 'Y', 'T'])) + + return axis_limits
+ + +
+    def get_write_axis_limits_balanced(self):
+        """
+        Calculate the 4D writing balanced axis limits.
+
+        Returns
+        -------
+        dict
+            Dictionary with the 4D limits of the rank data to write.
+            t_min, t_max, z_min, z_max, y_min, y_max, x_min and x_max.
+        """
+
+        fid_dist = {}
+        if self.parallel_method == 'Y':
+            len_to_split = self._lat['data'].shape[0]
+            min_axis = 'y_min'
+            max_axis = 'y_max'
+        elif self.parallel_method == 'X':
+            len_to_split = self._lon['data'].shape[-1]
+            min_axis = 'x_min'
+            max_axis = 'x_max'
+        elif self.parallel_method == 'T':
+            len_to_split = len(self._time)
+            min_axis = 't_min'
+            max_axis = 't_max'
+        else:
+            raise NotImplementedError("Parallel method '{meth}' is not implemented. Use one of these: {accept}".format(
+                meth=self.parallel_method, accept=['X', 'Y', 'T']))
+
+        procs_len = len_to_split // self.size
+        procs_rows_extended = len_to_split - (procs_len * self.size)
+
+        rows_sum = 0
+        for proc in range(self.size):
+            fid_dist[proc] = {'x_min': 0, 'x_max': None,
+                              'y_min': 0, 'y_max': None,
+                              'z_min': 0, 'z_max': None,
+                              't_min': 0, 't_max': None}
+            if proc < procs_rows_extended:
+                aux_rows = procs_len + 1
+            else:
+                aux_rows = procs_len
+
+            len_to_split -= aux_rows
+            if len_to_split < 0:
+                rows = len_to_split + aux_rows
+            else:
+                rows = aux_rows
+
+            fid_dist[proc][min_axis] = rows_sum
+            fid_dist[proc][max_axis] = rows_sum + rows
+
+            # Last element
+            if len_to_split == 0:
+                fid_dist[proc][max_axis] = None
+
+            rows_sum += rows
+
+        axis_limits = fid_dist[self.rank]
+
+        return axis_limits
+ + + def _create_dimensions(self, netcdf): + """ + Create 'time', 'time_bnds', 'lev', 'lon' and 'lat' dimensions. + + Parameters + ---------- + netcdf : Dataset + netcdf4-python open dataset. + """ + + # Create time dimension + netcdf.createDimension('time', None) + + # Create time_nv (number of vertices) dimension + if self._time_bnds is not None: + netcdf.createDimension('time_nv', 2) + + # Create lev, lon and lat dimensions + netcdf.createDimension('lev', len(self.lev['data'])) + netcdf.createDimension('lon', len(self._lon['data'])) + netcdf.createDimension('lat', len(self._lat['data'])) + + # Create string length dimension + if self.strlen is not None: + netcdf.createDimension('strlen', self.strlen) + + return None + + def _create_dimension_variables(self, netcdf): + """ + Create the 'time', 'time_bnds', 'lev', 'lat', 'lat_bnds', 'lon' and 'lon_bnds' variables. + + Parameters + ---------- + netcdf : Dataset + netcdf4-python open dataset. + """ + + self._create_dimension_variables_64(netcdf) + + return None + + def _create_dimension_variables_32(self, netcdf): + """ + Create the 'time', 'time_bnds', 'lev', 'lat', 'lat_bnds', 'lon' and 'lon_bnds' variables. + + Parameters + ---------- + netcdf : Dataset + netcdf4-python open dataset. + """ + + # TIMES + time_var = netcdf.createVariable('time', np.float32, ('time',), zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + time_var.units = '{0} since {1}'.format(self._time_resolution, self._time[0].strftime('%Y-%m-%d %H:%M:%S')) + time_var.standard_name = 'time' + time_var.calendar = 'standard' + time_var.long_name = 'time' + if self._time_bnds is not None: + if self._climatology: + time_var.climatology = self._climatology_var_name + else: + time_var.bounds = 'time_bnds' + if self.size > 1: + time_var.set_collective(True) + time_var[:] = date2num(self._time[:], time_var.units, time_var.calendar) + + # TIME BOUNDS + if self._time_bnds is not None: + if self._climatology: + time_bnds_var = netcdf.createVariable(self._climatology_var_name, np.float64, ('time', 'time_nv',), + zlib=self.zip_lvl, complevel=self.zip_lvl) + else: + time_bnds_var = netcdf.createVariable('time_bnds', np.float64, ('time', 'time_nv',), + zlib=self.zip_lvl, complevel=self.zip_lvl) + if self.size > 1: + time_bnds_var.set_collective(True) + time_bnds_var[:] = date2num(self._time_bnds, time_var.units, calendar='standard') + + # LEVELS + lev = netcdf.createVariable('lev', np.float32, ('lev',), + zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + if 'units' in self._lev.keys(): + lev.units = Units(self._lev['units'], formatted=True).units + else: + lev.units = '' + if 'positive' in self._lev.keys(): + lev.positive = self._lev['positive'] + + if self.size > 1: + lev.set_collective(True) + lev[:] = np.array(self._lev['data'], dtype=np.float32) + + # LATITUDES + lat = netcdf.createVariable('lat', np.float32, self._lat_dim, + zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + lat.units = 'degrees_north' + lat.axis = 'Y' + lat.long_name = 'latitude coordinate' + lat.standard_name = 'latitude' + if self._lat_bnds is not None: + lat.bounds = 'lat_bnds' + if self.size > 1: + lat.set_collective(True) + lat[:] = np.array(self._lat['data'], dtype=np.float32) + + # LATITUDES BOUNDS + if self._lat_bnds is not None: + lat_bnds_var = netcdf.createVariable('lat_bnds', np.float32, + self._lat_dim + ('spatial_nv',), + zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + if self.size > 1: + lat_bnds_var.set_collective(True) + lat_bnds_var[:] = np.array(self._lat_bnds['data'], dtype=np.float32) + + # 
LONGITUDES + lon = netcdf.createVariable('lon', np.float32, self._lon_dim, + zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + lon.units = 'degrees_east' + lon.axis = 'X' + lon.long_name = 'longitude coordinate' + lon.standard_name = 'longitude' + if self._lon_bnds is not None: + lon.bounds = 'lon_bnds' + if self.size > 1: + lon.set_collective(True) + lon[:] = np.array(self._lon['data'], dtype=np.float32) + + # LONGITUDES BOUNDS + if self._lon_bnds is not None: + lon_bnds_var = netcdf.createVariable('lon_bnds', np.float32, + self._lon_dim + ('spatial_nv',), + zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + if self.size > 1: + lon_bnds_var.set_collective(True) + lon_bnds_var[:] = np.array(self._lon_bnds['data'], dtype=np.float32) + + return None + + def _create_dimension_variables_64(self, netcdf): + """ + Create the 'time', 'time_bnds', 'lev', 'lat', 'lat_bnds', 'lon' and 'lon_bnds' variables. + + Parameters + ---------- + netcdf : Dataset + netcdf4-python open dataset. + """ + + # TIMES + time_var = netcdf.createVariable('time', np.float64, ('time',), zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + time_var.units = '{0} since {1}'.format(self._time_resolution, self._time[0].strftime('%Y-%m-%d %H:%M:%S')) + time_var.standard_name = 'time' + time_var.calendar = 'standard' + time_var.long_name = 'time' + if self._time_bnds is not None: + if self._climatology: + time_var.climatology = self._climatology_var_name + else: + time_var.bounds = 'time_bnds' + if self.size > 1: + time_var.set_collective(True) + time_var[:] = date2num(self._time[:], time_var.units, time_var.calendar) + + # TIME BOUNDS + if self._time_bnds is not None: + if self._climatology: + time_bnds_var = netcdf.createVariable(self._climatology_var_name, np.float64, ('time', 'time_nv',), + zlib=self.zip_lvl, complevel=self.zip_lvl) + else: + time_bnds_var = netcdf.createVariable('time_bnds', np.float64, ('time', 'time_nv',), + zlib=self.zip_lvl, complevel=self.zip_lvl) + if self.size > 1: + time_bnds_var.set_collective(True) + time_bnds_var[:] = date2num(self._time_bnds, time_var.units, calendar='standard') + + # LEVELS + lev = netcdf.createVariable('lev', self._lev['data'].dtype, ('lev',), + zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + if 'units' in self._lev.keys(): + lev.units = Units(self._lev['units'], formatted=True).units + else: + lev.units = '' + if 'positive' in self._lev.keys(): + lev.positive = self._lev['positive'] + + if self.size > 1: + lev.set_collective(True) + lev[:] = self._lev['data'] + + # LATITUDES + lat = netcdf.createVariable('lat', self._lat['data'].dtype, self._lat_dim, + zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + lat.units = 'degrees_north' + lat.axis = 'Y' + lat.long_name = 'latitude coordinate' + lat.standard_name = 'latitude' + if self._lat_bnds is not None: + lat.bounds = 'lat_bnds' + if self.size > 1: + lat.set_collective(True) + lat[:] = self._lat['data'] + + # LATITUDES BOUNDS + if self._lat_bnds is not None: + lat_bnds_var = netcdf.createVariable('lat_bnds', self._lat_bnds['data'].dtype, + self._lat_dim + ('spatial_nv',), + zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + if self.size > 1: + lat_bnds_var.set_collective(True) + lat_bnds_var[:] = self._lat_bnds['data'] + + # LONGITUDES + lon = netcdf.createVariable('lon', self._lon['data'].dtype, self._lon_dim, + zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + lon.units = 'degrees_east' + lon.axis = 'X' + lon.long_name = 'longitude coordinate' + lon.standard_name = 'longitude' + if self._lon_bnds is not None: + lon.bounds = 'lon_bnds' + 
if self.size > 1: + lon.set_collective(True) + lon[:] = self._lon['data'] + + # LONGITUDES BOUNDS + if self._lon_bnds is not None: + lon_bnds_var = netcdf.createVariable('lon_bnds', self._lon_bnds['data'].dtype, + self._lon_dim + ('spatial_nv',), + zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + if self.size > 1: + lon_bnds_var.set_collective(True) + lon_bnds_var[:] = self._lon_bnds['data'] + + return None + + + def _create_cell_measures(self, netcdf): + + # CELL AREA + if 'cell_area' in self.cell_measures.keys(): + cell_area = netcdf.createVariable('cell_area', self.cell_measures['cell_area']['data'].dtype, self._var_dim, + zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + if self.size > 1: + cell_area.set_collective(True) + cell_area[self.read_axis_limits['y_min']:self.read_axis_limits['y_max'], + self.read_axis_limits['x_min']:self.read_axis_limits['x_max']] = \ + self.cell_measures['cell_area']['data'] + + cell_area.long_name = 'area of grid cell' + cell_area.standard_name = 'cell_area' + cell_area.units = 'm2' + + for var_name in self.variables.keys(): + self.variables[var_name]['cell_measures'] = 'area: cell_area' + + if self.info: + print("Rank {0:03d}: Cell measures done".format(self.rank)) + return None + +
+    def str2char(self, data):
+        """
+        Turn an array of strings into an array of fixed-length characters ('S1').
+        """
+
+        if self.strlen is None:
+            msg = 'String data could not be converted into chars while writing.'
+            msg += " Please set the maximum string length (set_strlen) before writing."
+            raise RuntimeError(msg)
+
+        # Get final shape by adding strlen at the end
+        data_new_shape = data.shape + (self.strlen, )
+
+        # nD (2D, 3D, 4D) data as 1D string array
+        data = data.flatten()
+
+        # Split strings into chars (S1)
+        data_aux = stringtochar(np.array([v.encode('ascii', 'ignore') for v in data]).astype('S' + str(self.strlen)))
+        data_aux = data_aux.reshape(data_new_shape)
+
+        return data_aux
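+
+    # Usage sketch (illustrative): converting string data to fixed-length
+    # character arrays before a parallel write. It assumes set_strlen() accepts
+    # the maximum length as an argument; the exact signature may differ.
+    #
+    #     nessy.set_strlen(75)
+    #     chars = nessy.str2char(np.array([['ES', 'FR'], ['PT', 'IT']]))
+    #     chars.shape   # -> (2, 2, 75), dtype 'S1'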
+ + + def _create_variables(self, netcdf, chunking=False): + """ + Create the netCDF file variables. + + Parameters + ---------- + netcdf : Dataset + netcdf4-python open dataset. + chunking : bool + Indicates if you want to chunk the output netCDF. + """ + + for i, (var_name, var_dict) in enumerate(self.variables.items()): + if isinstance(var_dict['data'], int) and var_dict['data'] == 0: + var_dims = ('time', 'lev',) + self._var_dim + var_dtype = np.float32 + else: + # Get dimensions + if (var_dict['data'] is None) or (len(var_dict['data'].shape) == 4): + var_dims = ('time', 'lev',) + self._var_dim + else: + var_dims = self._var_dim + + # Get data type + if 'dtype' in var_dict.keys(): + var_dtype = var_dict['dtype'] + if (var_dict['data'] is not None) and (var_dtype != var_dict['data'].dtype): + msg = "WARNING!!! " + msg += "Different data types for variable {0}. ".format(var_name) + msg += "Input dtype={0}. Data dtype={1}.".format(var_dtype, var_dict['data'].dtype) + warnings.warn(msg) + sys.stderr.flush() + try: + var_dict['data'] = var_dict['data'].astype(var_dtype) + except Exception as e: # TODO: Detect exception + print(e) + raise TypeError("It was not possible to cast the data to the input dtype.") + else: + var_dtype = var_dict['data'].dtype + if var_dtype is np.object: + raise TypeError("Data dtype is np.object. Define dtype explicitly as dictionary key 'dtype'") + + if var_dict['data'] is not None: + + # Ensure data is of type numpy array (to create NES) + if not isinstance(var_dict['data'], (np.ndarray, np.generic)): + try: + var_dict['data'] = np.array(var_dict['data']) + except AttributeError: + raise AttributeError("Data for variable {0} must be a numpy array.".format(var_name)) + + # Convert list of strings to chars for parallelization + if np.issubdtype(var_dtype, np.character): + var_dict['data_aux'] = self.str2char(var_dict['data']) + var_dims += ('strlen',) + var_dtype = 'S1' + + if self.info: + print("Rank {0:03d}: Writing {1} var ({2}/{3})".format( + self.rank, var_name, i + 1, len(self.variables))) + + if not chunking: + var = netcdf.createVariable(var_name, var_dtype, var_dims, + zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + else: + if self.balanced: + raise NotImplementedError("A balanced data cannot be chunked.") + if self.master: + chunk_size = var_dict['data'].shape + else: + chunk_size = None + chunk_size = self.comm.bcast(chunk_size, root=0) + var = netcdf.createVariable(var_name, var_dtype, var_dims, + zlib=self.zip_lvl > 0, complevel=self.zip_lvl, + chunksizes=chunk_size) + if self.info: + print("Rank {0:03d}: Var {1} created ({2}/{3})".format( + self.rank, var_name, i + 1, len(self.variables))) + if self.size > 1: + var.set_collective(True) + if self.info: + print("Rank {0:03d}: Var {1} collective ({2}/{3})".format( + self.rank, var_name, i + 1, len(self.variables))) + + for att_name, att_value in var_dict.items(): + if att_name == 'data': + if att_value is not None: + if self.info: + print("Rank {0:03d}: Filling {1})".format(self.rank, var_name)) + if 'data_aux' in var_dict.keys(): + att_value = var_dict['data_aux'] + if isinstance(att_value, int) and att_value == 0: + var[self.write_axis_limits['t_min']:self.write_axis_limits['t_max'], + self.write_axis_limits['z_min']:self.write_axis_limits['z_max'], + self.write_axis_limits['y_min']:self.write_axis_limits['y_max'], + self.write_axis_limits['x_min']:self.write_axis_limits['x_max']] = 0 + + elif len(att_value.shape) == 5: + if 'strlen' in var_dims: + 
var[self.write_axis_limits['t_min']:self.write_axis_limits['t_max'], + self.write_axis_limits['z_min']:self.write_axis_limits['z_max'], + self.write_axis_limits['y_min']:self.write_axis_limits['y_max'], + self.write_axis_limits['x_min']:self.write_axis_limits['x_max'], + :] = att_value + else: + raise NotImplementedError('It is not possible to write 5D variables.') + + elif len(att_value.shape) == 4: + var[self.write_axis_limits['t_min']:self.write_axis_limits['t_max'], + self.write_axis_limits['z_min']:self.write_axis_limits['z_max'], + self.write_axis_limits['y_min']:self.write_axis_limits['y_max'], + self.write_axis_limits['x_min']:self.write_axis_limits['x_max']] = att_value + + elif len(att_value.shape) == 3: + if 'strlen' in var_dims: + var[self.write_axis_limits['y_min']:self.write_axis_limits['y_max'], + self.write_axis_limits['x_min']:self.write_axis_limits['x_max'], + :] = att_value + else: + raise NotImplementedError('It is not possible to write 3D variables.') + + if self.info: + print("Rank {0:03d}: Var {1} data ({2}/{3})".format( + self.rank, var_name, i + 1, len(self.variables))) + + elif att_name not in ['chunk_size', 'var_dims', 'dimensions', 'dtype', 'data_aux']: + var.setncattr(att_name, att_value) + + if 'data_aux' in var_dict.keys(): + del var_dict['data_aux'] + + self._set_var_crs(var) + if self.info: + print("Rank {0:03d}: Var {1} completed ({2}/{3})".format( + self.rank, var_name, i + 1, len(self.variables))) + + return None + +
+    def append_time_step_data(self, i_time, out_format='DEFAULT'):
+        """
+        Fill the netCDF data for the indicated time index.
+
+        Parameters
+        ----------
+        i_time : int
+            Index of the time step to write.
+        out_format : str
+            Indicates the output format type, to change the units if needed.
+        """
+        if self.serial_nc is not None:
+            try:
+                data = self._gather_data(self.variables)
+            except KeyError:
+                # A KeyError means string data
+                data = self.__gather_data_py_object(self.variables)
+            if self.master:
+                self.serial_nc.variables = data
+                self.serial_nc.append_time_step_data(i_time, out_format=out_format)
+            self.comm.Barrier()
+        else:
+            if out_format == 'MONARCH':
+                self.variables = to_monarch_units(self)
+            elif out_format == 'CMAQ':
+                self.variables = to_cmaq_units(self)
+            elif out_format == 'WRF_CHEM':
+                self.variables = to_wrf_chem_units(self)
+            for i, (var_name, var_dict) in enumerate(self.variables.items()):
+                for att_name, att_value in var_dict.items():
+                    if att_name == 'data':
+
+                        if att_value is not None:
+                            if self.info:
+                                print("Rank {0:03d}: Filling {1}".format(self.rank, var_name))
+                            var = self.netcdf.variables[var_name]
+                            if isinstance(att_value, int) and att_value == 0:
+                                var[i_time,
+                                    self.write_axis_limits['z_min']:self.write_axis_limits['z_max'],
+                                    self.write_axis_limits['y_min']:self.write_axis_limits['y_max'],
+                                    self.write_axis_limits['x_min']:self.write_axis_limits['x_max']] = 0
+                            elif len(att_value.shape) == 4:
+                                var[i_time,
+                                    self.write_axis_limits['z_min']:self.write_axis_limits['z_max'],
+                                    self.write_axis_limits['y_min']:self.write_axis_limits['y_max'],
+                                    self.write_axis_limits['x_min']:self.write_axis_limits['x_max']] = att_value
+
+                            elif len(att_value.shape) == 3:
+                                raise NotImplementedError('It is not possible to write 3D variables.')
+                            else:
+                                raise NotImplementedError("SHAPE APPEND ERROR: {0}".format(att_value.shape))
+                            if self.info:
+                                print("Rank {0:03d}: Var {1} data ({2}/{3})".format(
+                                    self.rank, var_name, i + 1, len(self.variables)))
+                        else:
+                            raise ValueError("Cannot append None data for {0}".format(var_name))
+                    else:
+                        # Metadata already written
+                        pass
+
+        return None
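+
+    # Usage sketch (illustrative): writing time step by time step to limit the
+    # memory footprint. This is a rough sketch; the exact workflow around the
+    # initial to_netcdf(keep_open=True) call may differ.
+    #
+    #     nessy.to_netcdf('out.nc', keep_open=True)
+    #     for i_time in range(len(nessy.time)):
+    #         nessy.append_time_step_data(i_time)
+    #     nessy.close()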
+
+
+    def _create_centre_coordinates(self, **kwargs):
+        """
+        Calculate centre latitudes and longitudes from grid details.
+
+        Must be implemented on inner classes.
+
+        Returns
+        -------
+        centre_lat : dict
+            Dictionary with data of centre latitudes in 1D.
+        centre_lon : dict
+            Dictionary with data of centre longitudes in 1D.
+        """
+
+        return None
+
+    def _create_metadata(self, netcdf):
+        """
+        Must be implemented on inner class.
+        """
+
+        return None
+
+    @staticmethod
+    def _set_var_crs(var):
+        """
+        Must be implemented on inner class.
+
+        Parameters
+        ----------
+        var : Variable
+            netCDF4-python variable object.
+        """
+
+        return None
+
+    def __to_netcdf_py(self, path, chunking=False, keep_open=False):
+        """
+        Create the NetCDF using netcdf4-python methods.
+
+        Parameters
+        ----------
+        path : str
+            Path to the output netCDF file.
+        chunking : bool
+            Indicates if you want to chunk the output netCDF.
+        keep_open : bool
+            Indicates if you want to keep the NetCDF open to fill the data by time step.
+        """
+
+        # Open NetCDF
+        if self.info:
+            print("Rank {0:03d}: Creating {1}".format(self.rank, path))
+        if self.size > 1:
+            netcdf = Dataset(path, format="NETCDF4", mode='w', parallel=True, comm=self.comm, info=MPI.Info())
+        else:
+            netcdf = Dataset(path, format="NETCDF4", mode='w', parallel=False)
+        if self.info:
+            print("Rank {0:03d}: NetCDF ready to write".format(self.rank))
+
+        # Create dimensions
+        self._create_dimensions(netcdf)
+
+        # Create dimension variables
+        self._create_dimension_variables(netcdf)
+        if self.info:
+            print("Rank {0:03d}: Dimensions done".format(self.rank))
+
+        # Create cell measures
+        self._create_cell_measures(netcdf)
+
+        # Create variables
+        self._create_variables(netcdf, chunking=chunking)
+
+        # Create metadata
+        self._create_metadata(netcdf)
+
+        # Set global attributes and close (or keep open) the NetCDF
+        if self.global_attrs is not None:
+            for att_name, att_value in self.global_attrs.items():
+                netcdf.setncattr(att_name, att_value)
+        netcdf.setncattr('Conventions', 'CF-1.7')
+
+        if keep_open:
+            self.netcdf = netcdf
+        else:
+            netcdf.close()
+
+        return None
+
+    def __to_netcdf_cams_ra(self, path):
+        return to_netcdf_cams_ra(self, path)
+
+
+    def to_netcdf(self, path, compression_level=0, serial=False, info=False, chunking=False, type='NES',
+                  keep_open=False):
+        """
+        Write the netCDF output file.
+
+        Parameters
+        ----------
+        path : str
+            Path to the output netCDF file.
+        compression_level : int
+            Level of compression (0 to 9). Default: 0 (no compression).
+        serial : bool
+            Indicates if you want to write in serial or not. Default: False.
+        info : bool
+            Indicates if you want to print the information of each writing step by stdout. Default: False.
+        chunking : bool
+            Indicates if you want a chunked netCDF output. Only available with non-serial writes. Default: False.
+        type : str
+            Type of netCDF to write: 'NES' (or 'DEFAULT'), 'CAMS_RA', 'MONARCH', 'CMAQ' or 'WRF_CHEM'.
+        keep_open : bool
+            Indicates if you want to keep the NetCDF open to fill the data by time step.
+        """
+        nc_type = type
+        old_info = self.info
+        self.info = info
+        self.serial_nc = None
+        self.zip_lvl = compression_level
+        if self.is_xarray:
+            raise NotImplementedError("Writing with xarray not implemented")
+        else:
+            if serial and self.size > 1:
+                try:
+                    data = self._gather_data(self.variables)
+                except KeyError:
+                    data = self.__gather_data_py_object(self.variables)
+                try:
+                    c_measures = self._gather_data(self.cell_measures)
+                except KeyError:
+                    c_measures = self.__gather_data_py_object(self.cell_measures)
+                if self.master:
+                    new_nc = self.copy(copy_vars=False)
+                    new_nc.set_communicator(MPI.COMM_SELF)
+                    new_nc.variables = data
+                    new_nc.cell_measures = c_measures
+                    if nc_type in ['NES', 'DEFAULT']:
+                        new_nc.__to_netcdf_py(path, keep_open=keep_open)
+                    elif nc_type == 'CAMS_RA':
+                        new_nc.__to_netcdf_cams_ra(path)
+                    elif nc_type == 'MONARCH':
+                        to_netcdf_monarch(new_nc, path, chunking=chunking, keep_open=keep_open)
+                    elif nc_type == 'CMAQ':
+                        to_netcdf_cmaq(new_nc, path, keep_open=keep_open)
+                    elif nc_type == 'WRF_CHEM':
+                        to_netcdf_wrf_chem(new_nc, path, keep_open=keep_open)
+                    else:
+                        msg = "Unknown NetCDF type '{0}'. ".format(nc_type)
+                        msg += "Use 'NES' (or 'DEFAULT'), 'CAMS_RA', 'MONARCH', 'CMAQ' or 'WRF_CHEM'."
+                        raise ValueError(msg)
+                    self.serial_nc = new_nc
+                else:
+                    self.serial_nc = True
+            else:
+                if nc_type in ['NES', 'DEFAULT']:
+                    self.__to_netcdf_py(path, chunking=chunking, keep_open=keep_open)
+                elif nc_type == 'CAMS_RA':
+                    self.__to_netcdf_cams_ra(path)
+                elif nc_type == 'MONARCH':
+                    to_netcdf_monarch(self, path, chunking=chunking, keep_open=keep_open)
+                elif nc_type == 'CMAQ':
+                    to_netcdf_cmaq(self, path, keep_open=keep_open)
+                elif nc_type == 'WRF_CHEM':
+                    to_netcdf_wrf_chem(self, path, keep_open=keep_open)
+                else:
+                    msg = "Unknown NetCDF type '{0}'. ".format(nc_type)
+                    msg += "Use 'NES' (or 'DEFAULT'), 'CAMS_RA', 'MONARCH', 'CMAQ' or 'WRF_CHEM'."
+                    raise ValueError(msg)
+
+        self.info = old_info
+
+        return None
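+
+    # Usage sketch (illustrative) of the different write modes:
+    #
+    #     nessy.to_netcdf('out.nc')                           # parallel NES write
+    #     nessy.to_netcdf('out.nc', serial=True)              # gather on rank 0, then write
+    #     nessy.to_netcdf('out.nc', compression_level=4)      # zlib-compressed
+    #     nessy.to_netcdf('out_monarch.nc', type='MONARCH')   # format-specific writer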
+ + + def __to_grib2(self, path, grib_keys, grib_template_path, lat_flip=True, info=False): + """ + Private method to write output file with grib2 format. + + Parameters + ---------- + path : str + Path to the output file. + grib_keys : dict + Dictionary with the grib2 keys. + grib_template_path : str + Path to the grib2 file to use as template. + info : bool + Indicates if you want to print extra information during the process. + """ + + from eccodes import codes_grib_new_from_file + from eccodes import codes_keys_iterator_new + from eccodes import codes_keys_iterator_next + from eccodes import codes_keys_iterator_get_name + from eccodes import codes_get_string + from eccodes import codes_keys_iterator_delete + from eccodes import codes_clone + from eccodes import codes_set + from eccodes import codes_set_values + from eccodes import codes_write + from eccodes import codes_release + + fout = open(path, 'wb') + + # read template + fin = open(grib_template_path, 'rb') + + gid = codes_grib_new_from_file(fin) + if gid is None: + sys.exit(1) + + iterid = codes_keys_iterator_new(gid, 'ls') + while codes_keys_iterator_next(iterid): + keyname = codes_keys_iterator_get_name(iterid) + keyval = codes_get_string(gid, keyname) + if info: + print("%s = %s" % (keyname, keyval)) + + codes_keys_iterator_delete(iterid) + for var_name, var_info in self.variables.items(): + for i_time, time in enumerate(self.time): + for i_lev, lev in enumerate(self.lev['data']): + clone_id = codes_clone(gid) + + # Adding grib2 keys to file + for key, value in grib_keys.items(): + if value not in ['', 'None', None]: + try: + codes_set(clone_id, key, value) + except Exception as e: + print("Something went wrong while writing the Grib key '{0}': {1}".format(key, value)) + raise e + + # Time dependent keys + if 'dataTime' in grib_keys.keys() and grib_keys['dataTime'] in ['', 'None', None]: + codes_set(clone_id, 'dataTime', int(i_time * 100)) + if 'stepRange' in grib_keys.keys() and grib_keys['stepRange'] in ['', 'None', None]: + n_secs = (time - self._time[0]).total_seconds() + codes_set(clone_id, 'stepRange', int(n_secs // 3600)) + if 'forecastTime' in grib_keys.keys() and grib_keys['forecastTime'] in ['', 'None', None]: + n_secs = (time - self._time[0]).total_seconds() + codes_set(clone_id, 'forecastTime', int(n_secs)) + + # Level dependent keys + if 'typeOfFirstFixedSurface' in grib_keys.keys() and \ + grib_keys['typeOfFirstFixedSurface'] in ['', 'None', None]: + if float(lev) == 0: + codes_set(clone_id, 'typeOfFirstFixedSurface', 1) + # grib_keys['typeOfFirstFixedSurface'] = 1 + else: + codes_set(clone_id, 'typeOfFirstFixedSurface', 103) + # grib_keys['typeOfFirstFixedSurface'] = 103 + if 'level' in grib_keys.keys() and grib_keys['level'] in ['', 'None', None]: + codes_set(clone_id, 'level', float(lev)) + + newval = var_info['data'][i_time, i_lev, :, :] + if lat_flip: + newval = np.flipud(newval) + + # TODO Check default NaN Value + newval[np.isnan(newval)] = 0. + + codes_set_values(clone_id, np.array(newval.ravel(), dtype='float64')) + # codes_set_values(clone_id, newval.ravel()) + codes_write(clone_id, fout) + del newval + codes_release(gid) + fout.close() + fin.close() + + return None + +
+[docs]
+    def to_grib2(self, path, grib_keys, grib_template_path, lat_flip=True, info=False):
+        """
+        Write the output file in grib2 format.
+
+        Parameters
+        ----------
+        path : str
+            Path to the output file.
+        grib_keys : dict
+            Dictionary with the grib2 keys.
+        grib_template_path : str
+            Path to the grib2 file to use as template.
+        lat_flip : bool
+            Indicates if the latitude values (and data) have to be flipped.
+        info : bool
+            Indicates if you want to print extra information during the process.
+        """
+
+        # Gather the data on the master rank before writing in serial
+        if self.parallel_method in ['X', 'Y'] and self.size > 1:
+            try:
+                data = self._gather_data(self.variables)
+            except KeyError:
+                data = self.__gather_data_py_object(self.variables)
+            try:
+                c_measures = self._gather_data(self.cell_measures)
+            except KeyError:
+                c_measures = self.__gather_data_py_object(self.cell_measures)
+            if self.master:
+                new_nc = self.copy(copy_vars=False)
+                new_nc.set_communicator(MPI.COMM_SELF)
+                new_nc.variables = data
+                new_nc.cell_measures = c_measures
+                new_nc.__to_grib2(path, grib_keys, grib_template_path, lat_flip=lat_flip, info=info)
+        else:
+            self.__to_grib2(path, grib_keys, grib_template_path, lat_flip=lat_flip, info=info)
+
+        return None
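A sketch of the expected `grib_keys` layout, inferred from the key handling in `__to_grib2` above; the template path and the `shortName` key are illustrative assumptions:

# Keys left as None are filled per time step / level by __to_grib2 above.
grib_keys = {
    'shortName': 'tp',                # assumed key; depends on the template
    'dataTime': None,                 # derived from the time index
    'stepRange': None,                # derived from the elapsed hours
    'typeOfFirstFixedSurface': None,  # 1 at the surface, 103 above it
    'level': None,                    # set from each vertical level value
}
nessy.to_grib2('output.grib2', grib_keys, 'template.grib2', lat_flip=True)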
+ + +
+[docs]
+    def create_shapefile(self):
+        """
+        Create spatial geodataframe (shapefile).
+
+        Returns
+        -------
+        shapefile : GeoPandasDataFrame
+            Shapefile dataframe.
+        """
+
+        if self.shapefile is None:
+
+            if self._lat_bnds is None or self._lon_bnds is None:
+                self.create_spatial_bounds()
+
+            # Reshape arrays to create geometry
+            aux_shape = (self.lat_bnds['data'].shape[0], self.lon_bnds['data'].shape[0], 4)
+            lon_bnds_aux = np.empty(aux_shape)
+            lon_bnds_aux[:, :, 0] = self.lon_bnds['data'][np.newaxis, :, 0]
+            lon_bnds_aux[:, :, 1] = self.lon_bnds['data'][np.newaxis, :, 1]
+            lon_bnds_aux[:, :, 2] = self.lon_bnds['data'][np.newaxis, :, 1]
+            lon_bnds_aux[:, :, 3] = self.lon_bnds['data'][np.newaxis, :, 0]
+
+            lon_bnds = lon_bnds_aux
+            del lon_bnds_aux
+
+            lat_bnds_aux = np.empty(aux_shape)
+            lat_bnds_aux[:, :, 0] = self.lat_bnds['data'][:, np.newaxis, 0]
+            lat_bnds_aux[:, :, 1] = self.lat_bnds['data'][:, np.newaxis, 0]
+            lat_bnds_aux[:, :, 2] = self.lat_bnds['data'][:, np.newaxis, 1]
+            lat_bnds_aux[:, :, 3] = self.lat_bnds['data'][:, np.newaxis, 1]
+
+            lat_bnds = lat_bnds_aux
+            del lat_bnds_aux
+
+            aux_b_lats = lat_bnds.reshape((lat_bnds.shape[0] * lat_bnds.shape[1], lat_bnds.shape[2]))
+            aux_b_lons = lon_bnds.reshape((lon_bnds.shape[0] * lon_bnds.shape[1], lon_bnds.shape[2]))
+
+            # Create dataframe containing all polygons
+            geometry = []
+            for i in range(aux_b_lons.shape[0]):
+                geometry.append(Polygon([(aux_b_lons[i, 0], aux_b_lats[i, 0]),
+                                         (aux_b_lons[i, 1], aux_b_lats[i, 1]),
+                                         (aux_b_lons[i, 2], aux_b_lats[i, 2]),
+                                         (aux_b_lons[i, 3], aux_b_lats[i, 3]),
+                                         (aux_b_lons[i, 0], aux_b_lats[i, 0])]))
+            fids = np.arange(len(self._lat['data']) * len(self._lon['data']))
+            fids = fids.reshape((len(self._lat['data']), len(self._lon['data'])))
+            fids = fids[self.read_axis_limits['y_min']:self.read_axis_limits['y_max'],
+                        self.read_axis_limits['x_min']:self.read_axis_limits['x_max']]
+            gdf = gpd.GeoDataFrame(index=pd.Index(name='FID', data=fids.ravel()),
+                                   geometry=geometry,
+                                   crs="EPSG:4326")
+            self.shapefile = gdf
+
+        else:
+            gdf = self.shapefile
+
+        return gdf
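The corner ordering used above (SW, SE, NE, NW, closing back at SW) can be checked in isolation; a standalone NumPy sketch for a tiny 2x2 grid with 1-D cell bounds of shape (n, 2):

import numpy as np
from shapely.geometry import Polygon

lat_bnds = np.array([[0., 1.], [1., 2.]])      # (ny, 2): min/max per cell
lon_bnds = np.array([[10., 11.], [11., 12.]])  # (nx, 2)

ny, nx = lat_bnds.shape[0], lon_bnds.shape[0]
corners_lon = np.empty((ny, nx, 4))
corners_lat = np.empty((ny, nx, 4))
corners_lon[:, :, 0] = lon_bnds[np.newaxis, :, 0]  # SW
corners_lon[:, :, 1] = lon_bnds[np.newaxis, :, 1]  # SE
corners_lon[:, :, 2] = lon_bnds[np.newaxis, :, 1]  # NE
corners_lon[:, :, 3] = lon_bnds[np.newaxis, :, 0]  # NW
corners_lat[:, :, 0] = lat_bnds[:, np.newaxis, 0]
corners_lat[:, :, 1] = lat_bnds[:, np.newaxis, 0]
corners_lat[:, :, 2] = lat_bnds[:, np.newaxis, 1]
corners_lat[:, :, 3] = lat_bnds[:, np.newaxis, 1]

cell = Polygon(list(zip(corners_lon[0, 0], corners_lat[0, 0])))
print(cell.wkt)  # POLYGON ((10 0, 11 0, 11 1, 10 1, 10 0))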
+ + +
+[docs] + def write_shapefile(self, path): + """ + Save spatial geodataframe (shapefile). + + Parameters + ---------- + path : str + Path to the output file. + """ + + if self.shapefile is None: + raise ValueError('Shapefile was not created.') + + if self.size == 1: + # In serial, avoid gather + self.shapefile.to_file(path) + else: + # In parallel + data = self.comm.gather(self.shapefile, root=0) + if self.master: + data = pd.concat(data) + data.to_file(path) + + return None
+ + +
+[docs]
+    def to_shapefile(self, path, time=None, lev=None, var_list=None):
+        """
+        Create a shapefile from NES data.
+
+        1. Create the grid shapefile.
+        2. Add variables to the shapefile (as an independent function).
+        3. Write the shapefile.
+
+        Parameters
+        ----------
+        path : str
+            Path to the output file.
+        time : datetime.datetime
+            Time stamp to select.
+        lev : int
+            Vertical level to select.
+        var_list : List, str, None
+            List (or single string) of the variables to be loaded and saved in the shapefile.
+        """
+
+        # If the list is not defined, get all variables
+        if var_list is None:
+            var_list = list(self.variables.keys())
+        else:
+            if isinstance(var_list, str):
+                var_list = [var_list]
+
+        # Raise an error for unloaded variables
+        unloaded_vars = []
+        for var_name in var_list:
+            if self.variables[var_name]['data'] is None:
+                unloaded_vars.append(var_name)
+        if len(unloaded_vars) > 0:
+            raise ValueError('The variables {0} need to be loaded/created before using to_shapefile.'.format(
+                unloaded_vars))
+
+        # Select the first vertical level (if needed)
+        if lev is None:
+            msg = 'No vertical level has been specified. The first one will be selected.'
+            warnings.warn(msg)
+            sys.stderr.flush()
+            idx_lev = 0
+        else:
+            if lev not in self.lev['data']:
+                raise ValueError('Level {} is not available. Choose from {}'.format(lev, self.lev['data']))
+            # Find the index of the requested level value (mirrors the time selection below)
+            idx_lev = int(np.where(np.array(self.lev['data']) == lev)[0][0])
+
+        # Select the first time (if needed)
+        if time is None:
+            msg = 'No time has been specified. The first one will be selected.'
+            warnings.warn(msg)
+            sys.stderr.flush()
+            idx_time = 0
+        else:
+            if time not in self.time:
+                raise ValueError('Time {} is not available. Choose from {}'.format(time, self.time))
+            idx_time = self.time.index(time)
+
+        # Create the shapefile
+        self.create_shapefile()
+
+        # Load variables from the original file and get data for the selected time / level
+        self.add_variables_to_shapefile(var_list, idx_lev, idx_time)
+
+        # Write the shapefile
+        self.write_shapefile(path)
+
+        return None
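A usage sketch; the input path, the variable name and the time stamp are illustrative, and `keep_vars` is assumed to restrict the variables to be read:

import datetime
from nes import open_netcdf

nessy = open_netcdf('input.nc')  # placeholder path
nessy.keep_vars(['sconcno2'])    # assumed variable name
nessy.load()
nessy.to_shapefile('grid.shp',
                   time=datetime.datetime(2023, 1, 1, 12),
                   var_list=['sconcno2'])  # first level selected by default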
+ + +
+[docs]
+    def add_variables_to_shapefile(self, var_list, idx_lev=0, idx_time=0):
+        """
+        Add variable data to the shapefile.
+
+        Parameters
+        ----------
+        var_list : List or str
+            Variables to be loaded and saved in the shapefile.
+        idx_lev : int
+            Index of the vertical level for which the data will be saved in the shapefile.
+        idx_time : int
+            Index of the time for which the data will be saved in the shapefile.
+        """
+
+        for var_name in var_list:
+            self.shapefile[var_name] = self.variables[var_name]['data'][idx_time, idx_lev, :].ravel()
+
+        return None
+ + +
+[docs]
+    def get_centroids_from_coordinates(self):
+        """
+        Get centroids from geographical coordinates.
+
+        Returns
+        -------
+        centroids_gdf: GeoPandasDataFrame
+            Centroids dataframe.
+        """
+
+        # Get centroids from coordinates
+        centroids = []
+        for lat_ind in range(0, len(self.lat['data'])):
+            for lon_ind in range(0, len(self.lon['data'])):
+                centroids.append(Point(self.lon['data'][lon_ind],
+                                       self.lat['data'][lat_ind]))
+
+        # Create dataframe containing all points
+        fids = np.arange(len(self._lat['data']) * len(self._lon['data']))
+        fids = fids.reshape((len(self._lat['data']), len(self._lon['data'])))
+        fids = fids[self.read_axis_limits['y_min']:self.read_axis_limits['y_max'],
+                    self.read_axis_limits['x_min']:self.read_axis_limits['x_max']]
+        centroids_gdf = gpd.GeoDataFrame(index=pd.Index(name='FID', data=fids.ravel()),
+                                         geometry=centroids,
+                                         crs="EPSG:4326")
+
+        return centroids_gdf
+
+
+    def __gather_data_py_object(self, data_to_gather):
+        """
+        Gather all the variable data into the MPI rank 0 to perform a serial write.
+
+        Returns
+        -------
+        data_list : dict
+            Variables dictionary with all the data from all the ranks.
+        """
+
+        data_list = deepcopy(data_to_gather)
+        for var_name in data_list.keys():
+            try:
+                # noinspection PyArgumentList
+                data_aux = self.comm.gather(data_list[var_name]['data'], root=0)
+                if self.rank == 0:
+                    shp_len = len(data_list[var_name]['data'].shape)
+                    add_dimension = False  # Whether to add a new dimension instead of concatenating
+                    if self.parallel_method == 'Y':
+                        if shp_len == 2:
+                            # 2D: concatenate over the first axis
+                            axis = 0
+                        elif shp_len == 3:
+                            # 3D: concatenate over the second axis
+                            axis = 1
+                        else:
+                            # 4D: concatenate over the third axis
+                            axis = 2
+                    elif self.parallel_method == 'X':
+                        if shp_len == 2:
+                            # 2D: concatenate over the second axis
+                            axis = 1
+                        elif shp_len == 3:
+                            # 3D: concatenate over the third axis
+                            axis = 2
+                        else:
+                            # 4D: concatenate over the fourth axis
+                            axis = 3
+                    elif self.parallel_method == 'T':
+                        if shp_len == 2:
+                            # 2D: add a time dimension by stacking
+                            add_dimension = True
+                            axis = None  # Not used
+                        elif shp_len == 3:
+                            # 3D: concatenate over the first axis
+                            axis = 0
+                        else:
+                            # 4D: concatenate over the first (time) axis
+                            axis = 0
+                    else:
+                        raise NotImplementedError(
+                            "Parallel method '{meth}' is not implemented. Use one of these: {accept}".format(
+                                meth=self.parallel_method, accept=['X', 'Y', 'T']))
+                    if add_dimension:
+                        data_list[var_name]['data'] = np.stack(data_aux)
+                    else:
+                        data_list[var_name]['data'] = np.concatenate(data_aux, axis=axis)
+            except Exception as e:
+                msg = "**ERROR** an error has occurred while gathering the '{0}' variable.\n".format(var_name)
+                print(msg)
+                sys.stderr.write(msg)
+                print(e)
+                sys.stderr.write(str(e))
+                sys.stderr.flush()
+                self.comm.Abort(1)
+                raise e
+
+        return data_list
+
+    def _gather_data(self, data_to_gather):
+        """
+        Gather all the variable data into the MPI rank 0 to perform a serial write.
+
+        Returns
+        -------
+        data_list : dict
+            Variables dictionary with all the data from all the ranks.
+        """
+
+        data_list = deepcopy(data_to_gather)
+        for var_name in data_list.keys():
+            if self.info and self.master:
+                print("Gathering {0}".format(var_name))
+            if data_list[var_name]['data'] is None:
+                data_list[var_name]['data'] = None
+            elif isinstance(data_list[var_name]['data'], int) and data_list[var_name]['data'] == 0:
+                data_list[var_name]['data'] = 0
+            else:
+                shp_len = len(data_list[var_name]['data'].shape)
+                # Collect local array sizes using the gather communication pattern
+                rank_shapes = np.array(self.comm.gather(data_list[var_name]['data'].shape, root=0))
+                sendbuf = data_list[var_name]['data'].flatten()
+                sendcounts = np.array(self.comm.gather(len(sendbuf), root=0))
+                if self.master:
+                    recvbuf = np.empty(sum(sendcounts), dtype=type(sendbuf.max()))
+                else:
+                    recvbuf = None
+                self.comm.Gatherv(sendbuf=sendbuf, recvbuf=(recvbuf, sendcounts), root=0)
+                if self.master:
+                    recvbuf = np.split(recvbuf, np.cumsum(sendcounts))
+                    # np.split on np.cumsum(sendcounts) returns one extra, empty array
+                    # because the last split index equals the total buffer length
+                    if len(recvbuf) > len(sendcounts):
+                        recvbuf = recvbuf[:-1]
+                    for i, shape in enumerate(rank_shapes):
+                        recvbuf[i] = recvbuf[i].reshape(shape)
+                    add_dimension = False  # Whether to add a new dimension instead of concatenating
+                    if self.parallel_method == 'Y':
+                        if shp_len == 2:
+                            # 2D: concatenate over the first axis
+                            axis = 0
+                        elif shp_len == 3:
+                            # 3D: concatenate over the second axis
+                            axis = 1
+                        else:
+                            # 4D: concatenate over the third axis
+                            axis = 2
+                    elif self.parallel_method == 'X':
+                        if shp_len == 2:
+                            # 2D: concatenate over the second axis
+                            axis = 1
+                        elif shp_len == 3:
+                            # 3D: concatenate over the third axis
+                            axis = 2
+                        else:
+                            # 4D: concatenate over the fourth axis
+                            axis = 3
+                    elif self.parallel_method == 'T':
+                        if shp_len == 2:
+                            # 2D: add a time dimension by stacking
+                            add_dimension = True
+                            axis = None  # Not used
+                        elif shp_len == 3:
+                            # 3D: concatenate over the first axis
+                            axis = 0
+                        else:
+                            # 4D: concatenate over the first (time) axis
+                            axis = 0
+                    else:
+                        raise NotImplementedError(
+                            "Parallel method '{meth}' is not implemented. Use one of these: {accept}".format(
+                                meth=self.parallel_method, accept=['X', 'Y', 'T']))
+                    if add_dimension:
+                        data_list[var_name]['data'] = np.stack(recvbuf)
+                    else:
+                        data_list[var_name]['data'] = np.concatenate(recvbuf, axis=axis)
+
+        return data_list
+
+    # ==================================================================================================================
+    # Extra Methods
+    # ==================================================================================================================
+
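The Gatherv pattern above (gather shapes and counts, flatten, gather into one buffer, then split and reshape on rank 0) can be exercised standalone; a minimal mpi4py sketch, to be run with e.g. `mpirun -np 2`:

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

local = np.full((2, 3), rank, dtype=np.float64)  # one Y-slab per rank

shapes = comm.gather(local.shape, root=0)
sendbuf = local.ravel()
counts = comm.gather(sendbuf.size, root=0)

if rank == 0:
    counts = np.array(counts)
    recvbuf = np.empty(counts.sum(), dtype=np.float64)
    comm.Gatherv(sendbuf=sendbuf, recvbuf=(recvbuf, counts), root=0)
    # np.split on cumsum leaves a trailing empty chunk; drop it
    chunks = np.split(recvbuf, np.cumsum(counts))[:-1]
    blocks = [c.reshape(s) for c, s in zip(chunks, shapes)]
    print(np.concatenate(blocks, axis=0).shape)  # (2 * n_ranks, 3)
else:
    comm.Gatherv(sendbuf=sendbuf, recvbuf=None, root=0)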
+[docs]
+    @staticmethod
+    def lon_lat_to_cartesian_ecef(lon, lat):
+        """
+        Convert observational/model geographic longitude/latitude coordinates to cartesian ECEF (Earth Centred,
+        Earth Fixed) coordinates, assuming the WGS84 datum and ellipsoid, and that all heights equal zero.
+        ECEF coordinates represent positions (in metres) as X, Y, Z coordinates, approximating the Earth's surface
+        as an ellipsoid of revolution.
+        This conversion is for the subsequent calculation of Euclidean distances of the model grid cell centres
+        from each observational station.
+        Defining the distance between two points on the Earth's surface as simply the Euclidean distance
+        between the two lat/lon pairs could lead to inaccurate results depending on the distance
+        between two points (i.e. 1 deg. of longitude varies with latitude).
+
+        Parameters
+        ----------
+        lon : np.array
+            Longitude values.
+        lat : np.array
+            Latitude values.
+        """
+
+        lla = pyproj.Proj(proj='latlong', ellps='WGS84', datum='WGS84')
+        ecef = pyproj.Proj(proj='geocent', ellps='WGS84', datum='WGS84')
+        x, y, z = pyproj.transform(lla, ecef, lon, lat, np.zeros(lon.shape), radians=False)
+
+        return np.column_stack([x, y, z])
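A usage sketch of the chord-distance idea; the station and grid-centre coordinates are made up, and `nessy` stands for any Nes object (the method is static, so the instance is only a convenient handle). Note the method itself uses the legacy `pyproj.transform` call:

import numpy as np

# Distance from one station to two nearby grid centres, in metres
station = nessy.lon_lat_to_cartesian_ecef(np.array([2.15]), np.array([41.39]))
centres = nessy.lon_lat_to_cartesian_ecef(np.array([2.10, 2.20]),
                                          np.array([41.40, 41.40]))
dists = np.linalg.norm(centres - station, axis=1)
print(dists)  # Euclidean (chord) distances, suitable for nearest-neighbour search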
+ + +
+[docs]
+    def add_4d_vertical_info(self, info_to_add):
+        """
+        Add vertical information from another source.
+
+        Parameters
+        ----------
+        info_to_add : nes.Nes, str
+            Nes object with the vertical information as a variable, or str with the path to the NetCDF file that
+            contains the vertical data.
+        """
+
+        return vertical_interpolation.add_4d_vertical_info(self, info_to_add)
+ + +
+[docs]
+    def interpolate_vertical(self, new_levels, new_src_vertical=None, kind='linear', extrapolate=None, info=None,
+                             overwrite=False):
+        """
+        Vertical interpolation function.
+
+        Parameters
+        ----------
+        self : Nes
+            Source Nes object.
+        new_levels : List
+            New vertical levels.
+        new_src_vertical : nes.Nes, str, None
+            Source vertical coordinate to use instead of the current one.
+        kind : str
+            Vertical interpolation method.
+        extrapolate : None, tuple, str
+            Extrapolation method (for non-linear operations).
+        info : None, bool
+            Indicates if you want to print extra information.
+        overwrite : bool
+            Indicates if you want to compute the vertical interpolation in the same object or not.
+        """
+
+        return vertical_interpolation.interpolate_vertical(
+            self, new_levels, new_src_vertical=new_src_vertical, kind=kind, extrapolate=extrapolate, info=info,
+            overwrite=overwrite)
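A sketch of a typical call chain; the vertical-information file and the level values are placeholders:

# Attach the 4D vertical coordinate, then interpolate to fixed levels
nessy.add_4d_vertical_info('vertical_info.nc')  # placeholder path
interpolated = nessy.interpolate_vertical(
    new_levels=[0, 50, 100, 250, 500, 1000],
    kind='linear',
    extrapolate=None)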
+ + +
+[docs]
+    def interpolate_horizontal(self, dst_grid, weight_matrix_path=None, kind='NearestNeighbour', n_neighbours=4,
+                               info=False, to_providentia=False, only_create_wm=False, wm=None, flux=False):
+        """
+        Horizontal interpolation from the current grid to another one.
+
+        Parameters
+        ----------
+        dst_grid : nes.Nes
+            Final projection Nes object.
+        weight_matrix_path : str, None
+            Path to the weight matrix to read/create.
+        kind : str
+            Kind of horizontal interpolation. Choices: ['NearestNeighbour', 'Conservative'].
+        n_neighbours : int
+            Used if kind == 'NearestNeighbour'. Number of nearest neighbours to interpolate. Default: 4.
+        info : bool
+            Indicates if you want to print extra info during the interpolation process.
+        to_providentia : bool
+            Indicates if we want the interpolated grid in Providentia format.
+        only_create_wm : bool
+            Indicates if you want to only create the weight matrix.
+        wm : Nes
+            Weight matrix Nes file.
+        flux : bool
+            Indicates if you want to calculate the weight matrix for flux variables.
+        """
+
+        return horizontal_interpolation.interpolate_horizontal(
+            self, dst_grid, weight_matrix_path=weight_matrix_path, kind=kind, n_neighbours=n_neighbours, info=info,
+            to_providentia=to_providentia, only_create_wm=only_create_wm, wm=wm, flux=flux)
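A sketch of a nearest-neighbour regridding call; the paths are placeholders, and reusing `weight_matrix_path` across runs avoids recomputing the weights:

dst = open_netcdf('destination_grid.nc')  # placeholder destination grid
regridded = nessy.interpolate_horizontal(
    dst,
    weight_matrix_path='weights.nc',  # created on first use, reused afterwards
    kind='NearestNeighbour',
    n_neighbours=4)
regridded.to_netcdf('regridded.nc')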
+ + +
+[docs]
+    def spatial_join(self, ext_shp, method=None, var_list=None, info=False):
+        """
+        Compute the overlay intersection of two GeoPandasDataFrames.
+
+        Parameters
+        ----------
+        ext_shp : GeoPandasDataFrame or str
+            File or path from where the data will be obtained on the intersection.
+        method : str
+            Overlay method. Accepted values: ['nearest', 'intersection', 'centroid'].
+        var_list : List or None
+            Variables that will be included in the resulting shapefile.
+        info : bool
+            Indicates if you want to print the process info or not.
+        """
+
+        return spatial_join(self, ext_shp=ext_shp, method=method, var_list=var_list, info=info)
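A sketch of a spatial join against an external shapefile; the path, method and attribute name are illustrative, and the grid shapefile is created explicitly first:

nessy.create_shapefile()
nessy.spatial_join('countries.shp',    # placeholder external shapefile
                   method='intersection',
                   var_list=['ISO3'],  # assumed attribute column
                   info=True)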
+ + +
+[docs]
+    def calculate_grid_area(self, overwrite=True):
+        """
+        Get coordinate bounds and call the function to calculate the area of each cell of a grid.
+
+        Parameters
+        ----------
+        self : nes.Nes
+            Source projection Nes object.
+        overwrite : bool
+            Indicates if we want to overwrite the grid area.
+        """
+
+        if ('cell_area' not in self.cell_measures.keys()) or overwrite:
+            grid_area = cell_measures.calculate_grid_area(self)
+            grid_area = grid_area.reshape([self.lat['data'].shape[0], self.lon['data'].shape[-1]])
+            self.cell_measures['cell_area'] = {'data': grid_area}
+        else:
+            grid_area = self.cell_measures['cell_area']['data']
+
+        return grid_area
+ + +
+[docs] + @staticmethod + def calculate_geometry_area(geometry_list, earth_radius_minor_axis=6356752.3142, + earth_radius_major_axis=6378137.0): + """ + Get coordinate bounds and call function to calculate the area of each cell of a set of geometries. + + Parameters + ---------- + geometry_list : List + List with polygon geometries. + earth_radius_minor_axis : float + Radius of the minor axis of the Earth. + earth_radius_major_axis : float + Radius of the major axis of the Earth. + """ + + return cell_measures.calculate_geometry_area(geometry_list, earth_radius_minor_axis=earth_radius_minor_axis, + earth_radius_major_axis=earth_radius_major_axis)
+ + +
+[docs] + @staticmethod + def get_earth_radius(ellps): + """ + Get minor and major axis of Earth. + + Parameters + ---------- + ellps : str + Spatial reference system. + """ + + # WGS84 with radius defined in Cartopy source code + earth_radius_dict = {'WGS84': [6356752.3142, 6378137.0]} + + return earth_radius_dict[ellps]
+ + +
+[docs]
+    def get_fids(self):
+        """
+        Obtain the FIDs in a 2D format.
+
+        Returns
+        -------
+        np.array
+            2D array with the FID data.
+        """
+        fids = np.arange(self._lat['data'].shape[0] * self._lon['data'].shape[-1])
+        fids = fids.reshape((self._lat['data'].shape[0], self._lon['data'].shape[-1]))
+        fids = fids[self.write_axis_limits['y_min']:self.write_axis_limits['y_max'],
+                    self.write_axis_limits['x_min']:self.write_axis_limits['x_max']]
+
+        return fids
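The FID layout can be illustrated standalone: row-major ids over the full grid, from which each rank cuts its write window:

import numpy as np

fids = np.arange(3 * 4).reshape(3, 4)  # full 3x4 grid
print(fids[1:3, 0:2])                  # one rank's window, e.g. y 1:3, x 0:2
# [[4 5]
#  [8 9]]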
\ No newline at end of file
diff --git a/docs/build/html/_modules/nes/nc_projections/latlon_nes.html b/docs/build/html/_modules/nes/nc_projections/latlon_nes.html
new file mode 100644
index 0000000000000000000000000000000000000000..3fc4c79305b7be4204b609968c09ee88b65cbca8
--- /dev/null
+++ b/docs/build/html/_modules/nes/nc_projections/latlon_nes.html
@@ -0,0 +1,496 @@
+nes.nc_projections.latlon_nes — NES 1.1.3 documentation

Source code for nes.nc_projections.latlon_nes

+#!/usr/bin/env python
+
+import numpy as np
+from pyproj import Proj
+from .default_nes import Nes
+
+
+
+[docs]
+class LatLonNes(Nes):
+    """
+    Class that handles grids in a regular latitude-longitude projection.
+
+    Attributes
+    ----------
+    _var_dim : tuple
+        Tuple with the name of the Y and X dimensions for the variables.
+        ('lat', 'lon') for a regular latitude-longitude projection.
+    _lat_dim : tuple
+        Tuple with the name of the dimensions of the Latitude values.
+        ('lat',) for a regular latitude-longitude projection.
+    _lon_dim : tuple
+        Tuple with the name of the dimensions of the Longitude values.
+        ('lon',) for a regular latitude-longitude projection.
+    """
+    def __init__(self, comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='Y',
+                 avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False,
+                 balanced=False, times=None, **kwargs):
+        """
+        Initialize the LatLonNes class.
+
+        Parameters
+        ----------
+        comm : MPI.COMM
+            MPI Communicator.
+        path : str
+            Path to the NetCDF to initialize the object.
+        info : bool
+            Indicates if you want to get reading/writing info.
+        dataset : Dataset
+            NetCDF4-python Dataset to initialize the class.
+        xarray : bool
+            (Not working) Indicates if you want to use xarray as default.
+        parallel_method : str
+            Indicates the parallelization method that you want. Default: 'Y'.
+            Accepted values: ['X', 'Y', 'T'].
+        avoid_first_hours : int
+            Number of hours to remove from first time steps.
+        avoid_last_hours : int
+            Number of hours to remove from last time steps.
+        first_level : int
+            Index of the first level to use.
+        last_level : int, None
+            Index of the last level to use. None if it is the last.
+        create_nes : bool
+            Indicates if you want to create the object from scratch (True) or through an existing file.
+        balanced : bool
+            Indicates if you want a balanced parallelization or not.
+            Balanced datasets cannot be written in chunking mode.
+        times : list, None
+            List of times to substitute the current ones during creation.
+        """
+
+        super(LatLonNes, self).__init__(comm=comm, path=path, info=info, dataset=dataset,
+                                        xarray=xarray, parallel_method=parallel_method, balanced=balanced,
+                                        avoid_first_hours=avoid_first_hours, avoid_last_hours=avoid_last_hours,
+                                        first_level=first_level, last_level=last_level, create_nes=create_nes,
+                                        times=times, **kwargs)
+
+        if create_nes:
+            # Dimensions screening
+            self.lat = self._get_coordinate_values(self._lat, 'Y')
+            self.lon = self._get_coordinate_values(self._lon, 'X')
+
+            # Set axis limits for parallel writing
+            self.write_axis_limits = self.get_write_axis_limits()
+
+        self._var_dim = ('lat', 'lon')
+        self._lat_dim = ('lat',)
+        self._lon_dim = ('lon',)
+
+        self.free_vars('crs')
+
+[docs] + @staticmethod + def new(comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='Y', + avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, + balanced=False, times=None, **kwargs): + """ + Initialize the Nes class. + + Parameters + ---------- + comm: MPI.COMM + MPI Communicator. + path: str + Path to the NetCDF to initialize the object. + info: bool + Indicates if you want to get reading/writing info. + dataset: Dataset + NetCDF4-python Dataset to initialize the class. + xarray: bool: + (Not working) Indicates if you want to use xarray as default. + parallel_method : str + Indicates the parallelization method that you want. Default: 'Y'. + Accepted values: ['X', 'Y', 'T']. + avoid_first_hours : int + Number of hours to remove from first time steps. + avoid_last_hours : int + Number of hours to remove from last time steps. + first_level : int + Index of the first level to use. + last_level : int, None + Index of the last level to use. None if it is the last. + create_nes : bool + Indicates if you want to create the object from scratch (True) or through an existing file. + balanced : bool + Indicates if you want a balanced parallelization or not. + Balanced dataset cannot be written in chunking mode. + times : list, None + List of times to substitute the current ones while creation. + """ + + new = LatLonNes(comm=comm, path=path, info=info, dataset=dataset, xarray=xarray, + parallel_method=parallel_method, avoid_first_hours=avoid_first_hours, + avoid_last_hours=avoid_last_hours, first_level=first_level, last_level=last_level, + create_nes=create_nes, balanced=balanced, times=times, **kwargs) + + return new
+
+
+    def _get_pyproj_projection(self):
+        """
+        Get projection data as in Pyproj library.
+
+        Returns
+        ----------
+        projection : pyproj.Proj
+            Grid projection.
+        """
+
+        projection = Proj(proj='latlong',
+                          ellps='WGS84',
+                          )
+
+        return projection
+
+    def _get_projection(self):
+        """
+        Get 'projection' and 'projection_data' from grid details.
+        """
+
+        if 'crs' in self.variables.keys():
+            projection_data = self.variables['crs']
+            self.free_vars('crs')
+        else:
+            projection_data = {'grid_mapping_name': 'latitude_longitude',
+                               'semi_major_axis': self.earth_radius[1],
+                               'inverse_flattening': 0,
+                               }
+
+        if 'dtype' in projection_data.keys():
+            del projection_data['dtype']
+
+        if 'data' in projection_data.keys():
+            del projection_data['data']
+
+        if 'dimensions' in projection_data.keys():
+            del projection_data['dimensions']
+
+        self.projection_data = projection_data
+        self.projection = self._get_pyproj_projection()
+
+        return None
+
+    def _create_projection(self, **kwargs):
+        """
+        Create 'projection' and 'projection_data' from projection arguments.
+        """
+
+        projection_data = {'grid_mapping_name': 'latitude_longitude',
+                           'semi_major_axis': self.earth_radius[1],
+                           'inverse_flattening': 0,
+                           'inc_lat': kwargs['inc_lat'],
+                           'inc_lon': kwargs['inc_lon'],
+                           }
+        # Global domain
+        if len(kwargs) == 2:
+            projection_data['lat_orig'] = -90
+            projection_data['lon_orig'] = -180
+            projection_data['n_lat'] = int(180 // np.float64(projection_data['inc_lat']))
+            projection_data['n_lon'] = int(360 // np.float64(projection_data['inc_lon']))
+        # Other domains
+        else:
+            projection_data['lat_orig'] = kwargs['lat_orig']
+            projection_data['lon_orig'] = kwargs['lon_orig']
+            projection_data['n_lat'] = kwargs['n_lat']
+            projection_data['n_lon'] = kwargs['n_lon']
+
+        self.projection_data = projection_data
+        self.projection = self._get_pyproj_projection()
+
+        return None
+
+    def _create_dimensions(self, netcdf):
+        """
+        Create the 'spatial_nv' dimension and the super dimensions 'lev', 'time', 'time_nv', 'lon' and 'lat'.
+
+        Parameters
+        ----------
+        netcdf : Dataset
+            NetCDF object.
+        """
+
+        super(LatLonNes, self)._create_dimensions(netcdf)
+
+        # Create spatial_nv (number of vertices) dimension
+        if (self._lat_bnds is not None) and (self._lon_bnds is not None):
+            netcdf.createDimension('spatial_nv', 2)
+
+        return None
+
+    def _create_centre_coordinates(self, **kwargs):
+        """
+        Calculate centre latitudes and longitudes from grid details.
+
+        Returns
+        ----------
+        centre_lat : dict
+            Dictionary with data of centre latitudes in 1D.
+        centre_lon : dict
+            Dictionary with data of centre longitudes in 1D.
+        """
+
+        # Get grid resolution
+        inc_lat = np.float64(self.projection_data['inc_lat'])
+        inc_lon = np.float64(self.projection_data['inc_lon'])
+
+        # Get coordinates origin
+        lat_orig = np.float64(self.projection_data['lat_orig'])
+        lon_orig = np.float64(self.projection_data['lon_orig'])
+
+        # Get number of coordinates
+        n_lat = int(self.projection_data['n_lat'])
+        n_lon = int(self.projection_data['n_lon'])
+
+        # Calculate centre latitudes
+        lat_c_orig = lat_orig + (inc_lat / 2)
+        centre_lat = np.linspace(lat_c_orig,
+                                 lat_c_orig + (inc_lat * (n_lat - 1)),
+                                 n_lat, dtype=np.float64)
+
+        # Calculate centre longitudes
+        lon_c_orig = lon_orig + (inc_lon / 2)
+        centre_lon = np.linspace(lon_c_orig,
+                                 lon_c_orig + (inc_lon * (n_lon - 1)),
+                                 n_lon, dtype=np.float64)
+
+        return {'data': centre_lat}, {'data': centre_lon}
+
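A standalone check of the centre-coordinate arithmetic above for a global one-degree grid; cell centres sit half an increment inside the domain edges:

import numpy as np

inc_lat, lat_orig, n_lat = 1.0, -90.0, 180
lat_c_orig = lat_orig + (inc_lat / 2)
centre_lat = np.linspace(lat_c_orig,
                         lat_c_orig + (inc_lat * (n_lat - 1)),
                         n_lat, dtype=np.float64)
print(centre_lat[0], centre_lat[-1])  # -89.5 89.5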
+[docs] + def create_providentia_exp_centre_coordinates(self): + """ + Calculate centre latitudes and longitudes from original coordinates and store as 2D arrays. + + Returns + ---------- + model_centre_lat : dict + Dictionary with data of centre coordinates for latitude in 2D (latitude, longitude). + model_centre_lon : dict + Dictionary with data of centre coordinates for longitude in 2D (latitude, longitude). + """ + + model_centre_lon_data, model_centre_lat_data = np.meshgrid(self.lon['data'], self.lat['data']) + + # Calculate centre latitudes + model_centre_lat = {'data': model_centre_lat_data} + + # Calculate centre longitudes + model_centre_lon = {'data': model_centre_lon_data} + + return model_centre_lat, model_centre_lon
+ + +
+[docs] + def create_providentia_exp_grid_edge_coordinates(self): + """ + Calculate grid edge latitudes and longitudes and get model grid outline. + + Returns + ---------- + grid_edge_lat : dict + Dictionary with data of grid edge latitudes. + grid_edge_lon : dict + Dictionary with data of grid edge longitudes. + """ + + # Get grid resolution + inc_lon = np.abs(np.mean(np.diff(self.lon['data']))) + inc_lat = np.abs(np.mean(np.diff(self.lat['data']))) + + # Get bounds + lat_bounds = self.create_single_spatial_bounds(self.lat['data'], inc_lat) + lon_bounds = self.create_single_spatial_bounds(self.lon['data'], inc_lon) + + # Get latitudes for grid edge + left_edge_lat = np.append(lat_bounds.flatten()[::2], lat_bounds.flatten()[-1]) + right_edge_lat = np.flip(left_edge_lat, 0) + top_edge_lat = np.repeat(lat_bounds[-1][-1], len(self.lon['data']) - 1) + bottom_edge_lat = np.repeat(lat_bounds[0][0], len(self.lon['data'])) + lat_grid_edge = np.concatenate((left_edge_lat, top_edge_lat, right_edge_lat, bottom_edge_lat)) + + # Get longitudes for grid edge + left_edge_lon = np.repeat(lon_bounds[0][0], len(self.lat['data']) + 1) + top_edge_lon = lon_bounds.flatten()[1:-1:2] + right_edge_lon = np.repeat(lon_bounds[-1][-1], len(self.lat['data']) + 1) + bottom_edge_lon = np.flip(lon_bounds.flatten()[:-1:2], 0) + lon_grid_edge = np.concatenate((left_edge_lon, top_edge_lon, right_edge_lon, bottom_edge_lon)) + + # Create grid outline by stacking the edges in both coordinates + model_grid_outline = np.vstack((lon_grid_edge, lat_grid_edge)).T + grid_edge_lat = {'data': model_grid_outline[:,1]} + grid_edge_lon = {'data': model_grid_outline[:,0]} + + return grid_edge_lat, grid_edge_lon
+
+
+    @staticmethod
+    def _set_var_crs(var):
+        """
+        Set the grid_mapping to 'crs'.
+
+        Parameters
+        ----------
+        var : Variable
+            netCDF4-python variable object.
+        """
+
+        var.grid_mapping = 'crs'
+        var.coordinates = "lat lon"
+
+        return None
+
+    def _create_metadata(self, netcdf):
+        """
+        Create the 'crs' variable for the regular latitude longitude grid_mapping.
+
+        Parameters
+        ----------
+        netcdf : Dataset
+            netcdf4-python Dataset.
+        """
+
+        if self.projection_data is not None:
+            mapping = netcdf.createVariable('crs', 'i')
+            mapping.grid_mapping_name = self.projection_data['grid_mapping_name']
+            mapping.semi_major_axis = self.projection_data['semi_major_axis']
+            mapping.inverse_flattening = self.projection_data['inverse_flattening']
+
+        return None
+
+[docs]
+    def to_grib2(self, path, grib_keys, grib_template_path, lat_flip=False, info=False):
+        """
+        Write the output file in grib2 format.
+
+        Parameters
+        ----------
+        path : str
+            Path to the output file.
+        grib_keys : dict
+            Dictionary with the grib2 keys.
+        grib_template_path : str
+            Path to the grib2 file to use as template.
+        lat_flip : bool
+            Indicates if the latitudes have to be flipped.
+        info : bool
+            Indicates if you want to print extra information during the process.
+        """
+
+        return super(LatLonNes, self).to_grib2(path, grib_keys, grib_template_path, lat_flip=lat_flip, info=info)
\ No newline at end of file
diff --git a/docs/build/html/_modules/nes/nc_projections/lcc_nes.html b/docs/build/html/_modules/nes/nc_projections/lcc_nes.html
new file mode 100644
index 0000000000000000000000000000000000000000..e2530c67631de1d0201a3f604fb436b08ebb95d0
--- /dev/null
+++ b/docs/build/html/_modules/nes/nc_projections/lcc_nes.html
@@ -0,0 +1,700 @@
+nes.nc_projections.lcc_nes — NES 1.1.3 documentation

Source code for nes.nc_projections.lcc_nes

+#!/usr/bin/env python
+
+import warnings
+import sys
+import numpy as np
+import pandas as pd
+from cfunits import Units
+from pyproj import Proj
+from copy import deepcopy
+import geopandas as gpd
+from shapely.geometry import Polygon, Point
+from .default_nes import Nes
+
+
+
+[docs]
+class LCCNes(Nes):
+    """
+    Class that handles grids in a Lambert Conformal Conic (LCC) projection.
+
+    Attributes
+    ----------
+    _y : dict
+        Y coordinates dictionary with the complete 'data' key for all the values and the rest of the attributes.
+    _x : dict
+        X coordinates dictionary with the complete 'data' key for all the values and the rest of the attributes.
+    y : dict
+        Y coordinates dictionary with the portion of 'data' corresponding to the rank values.
+    x : dict
+        X coordinates dictionary with the portion of 'data' corresponding to the rank values.
+    _var_dim : tuple
+        Tuple with the name of the Y and X dimensions for the variables.
+        ('y', 'x') for an LCC projection.
+    _lat_dim : tuple
+        Tuple with the name of the dimensions of the Latitude values.
+        ('y', 'x') for an LCC projection.
+    _lon_dim : tuple
+        Tuple with the name of the dimensions of the Longitude values.
+        ('y', 'x') for an LCC projection.
+    """
+    def __init__(self, comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='Y',
+                 avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False,
+                 balanced=False, times=None, **kwargs):
+        """
+        Initialize the LCCNes class.
+
+        Parameters
+        ----------
+        comm : MPI.COMM
+            MPI Communicator.
+        path : str
+            Path to the NetCDF to initialize the object.
+        info : bool
+            Indicates if you want to get reading/writing info.
+        dataset : Dataset
+            NetCDF4-python Dataset to initialize the class.
+        xarray : bool
+            (Not working) Indicates if you want to use xarray as default.
+        parallel_method : str
+            Indicates the parallelization method that you want. Default: 'Y'.
+            Accepted values: ['X', 'Y', 'T'].
+        avoid_first_hours : int
+            Number of hours to remove from first time steps.
+        avoid_last_hours : int
+            Number of hours to remove from last time steps.
+        first_level : int
+            Index of the first level to use.
+        last_level : int, None
+            Index of the last level to use. None if it is the last.
+        create_nes : bool
+            Indicates if you want to create the object from scratch (True) or through an existing file.
+        balanced : bool
+            Indicates if you want a balanced parallelization or not.
+            Balanced datasets cannot be written in chunking mode.
+        times : list, None
+            List of times to substitute the current ones during creation.
+        """
+
+        super(LCCNes, self).__init__(comm=comm, path=path, info=info, dataset=dataset,
+                                     xarray=xarray, parallel_method=parallel_method, balanced=balanced,
+                                     avoid_first_hours=avoid_first_hours, avoid_last_hours=avoid_last_hours,
+                                     first_level=first_level, last_level=last_level, create_nes=create_nes,
+                                     times=times, **kwargs)
+
+        if create_nes:
+            # Dimensions screening
+            self.lat = self._get_coordinate_values(self._lat, 'Y')
+            self.lon = self._get_coordinate_values(self._lon, 'X')
+        else:
+            # Complete dimensions
+            self._y = self._get_coordinate_dimension('y')
+            self._x = self._get_coordinate_dimension('x')
+
+            # Dimensions screening
+            self.y = self._get_coordinate_values(self._y, 'Y')
+            self.x = self._get_coordinate_values(self._x, 'X')
+
+        # Set axis limits for parallel writing
+        self.write_axis_limits = self.get_write_axis_limits()
+
+        self._var_dim = ('y', 'x')
+        self._lat_dim = ('y', 'x')
+        self._lon_dim = ('y', 'x')
+
+        self.free_vars('crs')
+
+[docs] + @staticmethod + def new(comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='Y', + avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, + balanced=False, times=None, **kwargs): + """ + Initialize the Nes class. + + Parameters + ---------- + comm: MPI.COMM + MPI Communicator. + path: str + Path to the NetCDF to initialize the object. + info: bool + Indicates if you want to get reading/writing info. + dataset: Dataset + NetCDF4-python Dataset to initialize the class. + xarray: bool: + (Not working) Indicates if you want to use xarray as default. + parallel_method : str + Indicates the parallelization method that you want. Default: 'Y'. + Accepted values: ['X', 'Y', 'T']. + avoid_first_hours : int + Number of hours to remove from first time steps. + avoid_last_hours : int + Number of hours to remove from last time steps. + first_level : int + Index of the first level to use. + last_level : int, None + Index of the last level to use. None if it is the last. + create_nes : bool + Indicates if you want to create the object from scratch (True) or through an existing file. + balanced : bool + Indicates if you want a balanced parallelization or not. + Balanced dataset cannot be written in chunking mode. + times : list, None + List of times to substitute the current ones while creation. + """ + + new = LCCNes(comm=comm, path=path, info=info, dataset=dataset, xarray=xarray, + parallel_method=parallel_method, avoid_first_hours=avoid_first_hours, + avoid_last_hours=avoid_last_hours, first_level=first_level, last_level=last_level, + create_nes=create_nes, balanced=balanced, times=times, **kwargs) + + return new
+ + +
+[docs] + def filter_coordinates_selection(self): + """ + Use the selection limits to filter y, x, time, lev, lat, lon, lon_bnds and lat_bnds. + """ + + idx = self.get_idx_intervals() + + self.y = self._get_coordinate_values(self._y, 'Y') + self.x = self._get_coordinate_values(self._x, 'X') + + self._y['data'] = self._y['data'][idx['idx_y_min']:idx['idx_y_max']] + self._x['data'] = self._x['data'][idx['idx_x_min']:idx['idx_x_max']] + + super(LCCNes, self).filter_coordinates_selection() + + return None
+ + + def _get_pyproj_projection(self): + """ + Get projection data as in Pyproj library. + + Returns + ---------- + projection : pyproj.Proj + Grid projection. + """ + + projection = Proj(proj='lcc', + ellps='WGS84', + R=self.earth_radius[0], + lat_1=np.float64(self.projection_data['standard_parallel'][0]), + lat_2=np.float64(self.projection_data['standard_parallel'][1]), + lon_0=np.float64(self.projection_data['longitude_of_central_meridian']), + lat_0=np.float64(self.projection_data['latitude_of_projection_origin']), + to_meter=1, + x_0=0, + y_0=0, + a=self.earth_radius[1], + k_0=1.0, + ) + + return projection + + def _get_projection(self): + """ + Get 'projection' and 'projection_data' from grid details. + """ + + if 'Lambert_Conformal' in self.variables.keys(): + projection_data = self.variables['Lambert_Conformal'] + self.free_vars('Lambert_Conformal') + elif 'Lambert_conformal' in self.variables.keys(): + projection_data = self.variables['Lambert_conformal'] + self.free_vars('Lambert_conformal') + else: + # We will never have this condition since the LCC grid will never be correctly detected + # since the function __is_lcc in load_nes only detects LCC grids when there is Lambert_conformal + msg = 'There is no variable called Lambert_Conformal, projection has not been defined.' + raise RuntimeError(msg) + + if 'dtype' in projection_data.keys(): + del projection_data['dtype'] + + if 'data' in projection_data.keys(): + del projection_data['data'] + + if 'dimensions' in projection_data.keys(): + del projection_data['dimensions'] + + if isinstance(projection_data['standard_parallel'], str): + projection_data['standard_parallel'] = [projection_data['standard_parallel'].split(', ')[0], + projection_data['standard_parallel'].split(', ')[1]] + + self.projection_data = projection_data + self.projection = self._get_pyproj_projection() + + return None + + def _create_projection(self, **kwargs): + """ + Create 'projection' and 'projection_data' from projection arguments. + """ + + projection_data = {'grid_mapping_name': 'lambert_conformal_conic', + 'standard_parallel': [kwargs['lat_1'], kwargs['lat_2']], + 'longitude_of_central_meridian': kwargs['lon_0'], + 'latitude_of_projection_origin': kwargs['lat_0'], + 'x_0': kwargs['x_0'], 'y_0': kwargs['y_0'], + 'inc_x': kwargs['inc_x'], 'inc_y': kwargs['inc_y'], + 'nx': kwargs['nx'], 'ny': kwargs['ny'], + + } + + self.projection_data = projection_data + self.projection = self._get_pyproj_projection() + + return None + + def _create_dimensions(self, netcdf): + """ + Create 'y', 'x' and 'spatial_nv' dimensions and the super dimensions 'lev', 'time', 'time_nv', 'lon' and 'lat' + + Parameters + ---------- + netcdf : Dataset + NetCDF object. + """ + + super(LCCNes, self)._create_dimensions(netcdf) + + # Create y and x dimensions + netcdf.createDimension('y', len(self._y['data'])) + netcdf.createDimension('x', len(self._x['data'])) + + # Create spatial_nv (number of vertices) dimension + if (self._lat_bnds is not None) and (self._lon_bnds is not None): + netcdf.createDimension('spatial_nv', 4) + + return None + + def _create_dimension_variables(self, netcdf): + """ + Create the 'y' and 'x' variables. + + Parameters + ---------- + netcdf : Dataset + NetCDF object. 
+        """
+
+        super(LCCNes, self)._create_dimension_variables(netcdf)
+
+        # LCC Y COORDINATES
+        y = netcdf.createVariable('y', self._y['data'].dtype, ('y',))
+        y.long_name = 'y coordinate of projection'
+        if 'units' in self._y.keys():
+            y.units = Units(self._y['units'], formatted=True).units
+        else:
+            y.units = 'm'
+        y.standard_name = 'projection_y_coordinate'
+        if self.size > 1:
+            y.set_collective(True)
+        y[:] = self._y['data']
+
+        # LCC X COORDINATES
+        x = netcdf.createVariable('x', self._x['data'].dtype, ('x',))
+        x.long_name = 'x coordinate of projection'
+        if 'units' in self._x.keys():
+            x.units = Units(self._x['units'], formatted=True).units
+        else:
+            x.units = 'm'
+        x.standard_name = 'projection_x_coordinate'
+        if self.size > 1:
+            x.set_collective(True)
+        x[:] = self._x['data']
+
+        return None
+
+    def _create_centre_coordinates(self, **kwargs):
+        """
+        Calculate centre latitudes and longitudes from grid details.
+
+        Returns
+        ----------
+        centre_lat : dict
+            Dictionary with data of centre latitudes in 2D.
+        centre_lon : dict
+            Dictionary with data of centre longitudes in 2D.
+        """
+
+        # Get projection details on x
+        x_0 = np.float64(self.projection_data['x_0'])
+        inc_x = np.float64(self.projection_data['inc_x'])
+        nx = int(self.projection_data['nx'])
+
+        # Get projection details on y
+        y_0 = np.float64(self.projection_data['y_0'])
+        inc_y = np.float64(self.projection_data['inc_y'])
+        ny = int(self.projection_data['ny'])
+
+        # Create a regular grid in metres (1D)
+        self._x = {'data': np.linspace(x_0 + (inc_x / 2),
+                                       x_0 + (inc_x / 2) + (inc_x * (nx - 1)),
+                                       nx, dtype=np.float64)}
+        self._y = {'data': np.linspace(y_0 + (inc_y / 2),
+                                       y_0 + (inc_y / 2) + (inc_y * (ny - 1)),
+                                       ny, dtype=np.float64)}
+
+        # Create a regular grid in metres (1D to 2D)
+        x = np.array([self._x['data']] * len(self._y['data']))
+        y = np.array([self._y['data']] * len(self._x['data'])).T
+
+        # Calculate centre latitudes and longitudes (LCC metres to geographic coordinates)
+        centre_lon, centre_lat = self.projection(x, y, inverse=True)
+
+        return {'data': centre_lat}, {'data': centre_lon}
+
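The inverse transform used above can be sketched in isolation with pyproj; the LCC parameters here are illustrative, not taken from any NES grid:

import numpy as np
from pyproj import Proj

proj = Proj(proj='lcc', ellps='WGS84', lat_1=37.0, lat_2=43.0,
            lon_0=-3.0, lat_0=40.0, x_0=0, y_0=0)

x = np.linspace(-100000., 100000., 5)  # projection metres
y = np.linspace(-100000., 100000., 5)
xx, yy = np.meshgrid(x, y)
lon, lat = proj(xx, yy, inverse=True)  # LCC metres -> degrees
print(lon.shape, round(lat[2, 2], 6))  # (5, 5) 40.0 at the projection origin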
+[docs] + def create_providentia_exp_centre_coordinates(self): + """ + Calculate centre latitudes and longitudes from original coordinates and store as 2D arrays. + + Returns + ---------- + model_centre_lat : dict + Dictionary with data of centre coordinates for latitude in 2D (latitude, longitude). + model_centre_lon : dict + Dictionary with data of centre coordinates for longitude in 2D (latitude, longitude). + """ + + # Get centre latitudes + model_centre_lat = self.lat + + # Get centre longitudes + model_centre_lon = self.lon + + return model_centre_lat, model_centre_lon
+ + +
+[docs] + def create_providentia_exp_grid_edge_coordinates(self): + """ + Calculate grid edge latitudes and longitudes and get model grid outline. + + Returns + ---------- + grid_edge_lat : dict + Dictionary with data of grid edge latitudes. + grid_edge_lon : dict + Dictionary with data of grid edge longitudes. + """ + + # Get grid resolution + inc_x = np.abs(np.mean(np.diff(self.x['data']))) + inc_y = np.abs(np.mean(np.diff(self.y['data']))) + + # Get bounds for rotated coordinates + y_bnds = self.create_single_spatial_bounds(self.y['data'], inc_y) + x_bnds = self.create_single_spatial_bounds(self.x['data'], inc_x) + + # Get rotated latitudes for grid edge + left_edge_y = np.append(y_bnds.flatten()[::2], y_bnds.flatten()[-1]) + right_edge_y = np.flip(left_edge_y, 0) + top_edge_y = np.repeat(y_bnds[-1][-1], len(self.x['data']) - 1) + bottom_edge_y = np.repeat(y_bnds[0][0], len(self.x['data'])) + y_grid_edge = np.concatenate((left_edge_y, top_edge_y, right_edge_y, bottom_edge_y)) + + # Get rotated longitudes for grid edge + left_edge_x = np.repeat(x_bnds[0][0], len(self.y['data']) + 1) + top_edge_x = x_bnds.flatten()[1:-1:2] + right_edge_x = np.repeat(x_bnds[-1][-1], len(self.y['data']) + 1) + bottom_edge_x = np.flip(x_bnds.flatten()[:-1:2], 0) + x_grid_edge = np.concatenate((left_edge_x, top_edge_x, right_edge_x, bottom_edge_x)) + + # Get edges for regular coordinates + grid_edge_lon_data, grid_edge_lat_data = self.projection(x_grid_edge, y_grid_edge, inverse=True) + + # Create grid outline by stacking the edges in both coordinates + model_grid_outline = np.vstack((grid_edge_lon_data, grid_edge_lat_data)).T + grid_edge_lat = {'data': model_grid_outline[:,1]} + grid_edge_lon = {'data': model_grid_outline[:,0]} + + return grid_edge_lat, grid_edge_lon
+ + +
+[docs] + def create_spatial_bounds(self): + """ + Calculate longitude and latitude bounds and set them. + """ + + # Calculate LCC coordinates bounds + inc_x = np.abs(np.mean(np.diff(self._x['data']))) + x_bnds = self.create_single_spatial_bounds(np.array([self._x['data']] * len(self._y['data'])), + inc_x, spatial_nv=4) + + inc_y = np.abs(np.mean(np.diff(self._y['data']))) + y_bnds = self.create_single_spatial_bounds(np.array([self._y['data']] * len(self._x['data'])).T, + inc_y, spatial_nv=4, inverse=True) + + # Transform LCC bounds to regular bounds + lon_bnds, lat_bnds = self.projection(x_bnds, y_bnds, inverse=True) + + # Obtain regular coordinates bounds + self._lat_bnds = {} + self._lat_bnds['data'] = deepcopy(lat_bnds) + self.lat_bnds = {} + self.lat_bnds['data'] = lat_bnds[self.read_axis_limits['y_min']:self.read_axis_limits['y_max'], + self.read_axis_limits['x_min']:self.read_axis_limits['x_max'], + :] + + self._lon_bnds = {} + self._lon_bnds['data'] = deepcopy(lon_bnds) + self.lon_bnds = {} + self.lon_bnds['data'] = lon_bnds[self.read_axis_limits['y_min']:self.read_axis_limits['y_max'], + self.read_axis_limits['x_min']:self.read_axis_limits['x_max'], + :] + + return None
+ + + @staticmethod + def _set_var_crs(var): + """ + Set the grid_mapping to 'Lambert_Conformal'. + + Parameters + ---------- + var : Variable + netCDF4-python variable object. + """ + + var.grid_mapping = 'Lambert_Conformal' + var.coordinates = "lat lon" + + return None + + def _create_metadata(self, netcdf): + """ + Create the 'crs' variable for the lambert conformal grid_mapping. + + Parameters + ---------- + netcdf : Dataset + netcdf4-python Dataset + """ + + if self.projection_data is not None: + mapping = netcdf.createVariable('Lambert_Conformal', 'i') + mapping.grid_mapping_name = self.projection_data['grid_mapping_name'] + mapping.standard_parallel = self.projection_data['standard_parallel'] + mapping.longitude_of_central_meridian = self.projection_data['longitude_of_central_meridian'] + mapping.latitude_of_projection_origin = self.projection_data['latitude_of_projection_origin'] + + return None + +
+[docs]
+    def to_grib2(self, path, grib_keys, grib_template_path, lat_flip=False, info=False):
+        """
+        Write the output file in grib2 format.
+
+        Parameters
+        ----------
+        path : str
+            Path to the output file.
+        grib_keys : dict
+            Dictionary with the grib2 keys.
+        grib_template_path : str
+            Path to the grib2 file to use as template.
+        lat_flip : bool
+            Indicates if the latitudes have to be flipped.
+        info : bool
+            Indicates if you want to print extra information during the process.
+        """
+
+        raise NotImplementedError("Grib2 format cannot be written in a Lambert Conformal Conic projection.")
+ + +
+[docs]
+    def create_shapefile(self):
+        """
+        Create spatial geodataframe (shapefile).
+
+        Returns
+        -------
+        shapefile : GeoPandasDataFrame
+            Shapefile dataframe.
+        """
+
+        if self.shapefile is None:
+
+            # Get latitude and longitude cell boundaries
+            if self._lat_bnds is None or self._lon_bnds is None:
+                self.create_spatial_bounds()
+
+            # Reshape arrays to create geometry
+            aux_b_lats = self.lat_bnds['data'].reshape((self.lat_bnds['data'].shape[0] * self.lat_bnds['data'].shape[1],
+                                                        self.lat_bnds['data'].shape[2]))
+            aux_b_lons = self.lon_bnds['data'].reshape((self.lon_bnds['data'].shape[0] * self.lon_bnds['data'].shape[1],
+                                                        self.lon_bnds['data'].shape[2]))
+
+            # Get polygons from bounds
+            geometry = []
+            for i in range(aux_b_lons.shape[0]):
+                geometry.append(Polygon([(aux_b_lons[i, 0], aux_b_lats[i, 0]),
+                                         (aux_b_lons[i, 1], aux_b_lats[i, 1]),
+                                         (aux_b_lons[i, 2], aux_b_lats[i, 2]),
+                                         (aux_b_lons[i, 3], aux_b_lats[i, 3]),
+                                         (aux_b_lons[i, 0], aux_b_lats[i, 0])]))
+
+            # Create dataframe containing all polygons
+            fids = self.get_fids()
+            gdf = gpd.GeoDataFrame(index=pd.Index(name='FID', data=fids.ravel()),
+                                   geometry=geometry,
+                                   crs="EPSG:4326")
+            self.shapefile = gdf
+
+        else:
+            gdf = self.shapefile
+
+        return gdf
+ + +
+[docs]
+    def get_centroids_from_coordinates(self):
+        """
+        Get centroids from geographical coordinates.
+
+        Returns
+        -------
+        centroids_gdf: GeoPandasDataFrame
+            Centroids dataframe.
+        """
+
+        # Get centroids from coordinates
+        centroids = []
+        for lat_ind in range(0, self.lon['data'].shape[0]):
+            for lon_ind in range(0, self.lon['data'].shape[1]):
+                centroids.append(Point(self.lon['data'][lat_ind, lon_ind],
+                                       self.lat['data'][lat_ind, lon_ind]))
+
+        # Create dataframe containing all points
+        fids = self.get_fids()
+        centroids_gdf = gpd.GeoDataFrame(index=pd.Index(name='FID', data=fids.ravel()),
+                                         geometry=centroids,
+                                         crs="EPSG:4326")
+
+        return centroids_gdf
\ No newline at end of file
diff --git a/docs/build/html/_modules/nes/nc_projections/mercator_nes.html b/docs/build/html/_modules/nes/nc_projections/mercator_nes.html
new file mode 100644
index 0000000000000000000000000000000000000000..234e64b4247235f66f042ea1d919bccee4e46ce3
--- /dev/null
+++ b/docs/build/html/_modules/nes/nc_projections/mercator_nes.html
@@ -0,0 +1,677 @@
+nes.nc_projections.mercator_nes — NES 1.1.3 documentation

Source code for nes.nc_projections.mercator_nes

+#!/usr/bin/env python
+
+import warnings
+import sys
+import numpy as np
+import pandas as pd
+from cfunits import Units
+from pyproj import Proj
+from copy import deepcopy
+import geopandas as gpd
+from shapely.geometry import Polygon, Point
+from nes.nc_projections.default_nes import Nes
+
+
+
+[docs]
+class MercatorNes(Nes):
+    """
+    Class that handles grids in a Mercator projection.
+
+    Attributes
+    ----------
+    _y : dict
+        Y coordinates dictionary with the complete 'data' key for all the values and the rest of the attributes.
+    _x : dict
+        X coordinates dictionary with the complete 'data' key for all the values and the rest of the attributes.
+    y : dict
+        Y coordinates dictionary with the portion of 'data' corresponding to the rank values.
+    x : dict
+        X coordinates dictionary with the portion of 'data' corresponding to the rank values.
+    _var_dim : tuple
+        Tuple with the name of the Y and X dimensions for the variables.
+        ('y', 'x') for a Mercator projection.
+    _lat_dim : tuple
+        Tuple with the name of the dimensions of the Latitude values.
+        ('y', 'x') for a Mercator projection.
+    _lon_dim : tuple
+        Tuple with the name of the dimensions of the Longitude values.
+        ('y', 'x') for a Mercator projection.
+    """
+    def __init__(self, comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='Y',
+                 avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False,
+                 balanced=False, times=None, **kwargs):
+        """
+        Initialize the MercatorNes class.
+
+        Parameters
+        ----------
+        comm : MPI.COMM
+            MPI Communicator.
+        path : str
+            Path to the NetCDF to initialize the object.
+        info : bool
+            Indicates if you want to get reading/writing info.
+        dataset : Dataset
+            NetCDF4-python Dataset to initialize the class.
+        xarray : bool
+            (Not working) Indicates if you want to use xarray as default.
+        parallel_method : str
+            Indicates the parallelization method that you want. Default: 'Y'.
+            Accepted values: ['X', 'Y', 'T'].
+        avoid_first_hours : int
+            Number of hours to remove from first time steps.
+        avoid_last_hours : int
+            Number of hours to remove from last time steps.
+        first_level : int
+            Index of the first level to use.
+        last_level : int, None
+            Index of the last level to use. None if it is the last.
+        create_nes : bool
+            Indicates if you want to create the object from scratch (True) or through an existing file.
+        balanced : bool
+            Indicates if you want a balanced parallelization or not.
+            Balanced datasets cannot be written in chunking mode.
+        times : list, None
+            List of times to substitute the current ones during creation.
+        """
+
+        super(MercatorNes, self).__init__(comm=comm, path=path, info=info, dataset=dataset,
+                                          xarray=xarray, parallel_method=parallel_method, balanced=balanced,
+                                          avoid_first_hours=avoid_first_hours, avoid_last_hours=avoid_last_hours,
+                                          first_level=first_level, last_level=last_level, create_nes=create_nes,
+                                          times=times, **kwargs)
+
+        if create_nes:
+            # Dimensions screening
+            self.lat = self._get_coordinate_values(self._lat, 'Y')
+            self.lon = self._get_coordinate_values(self._lon, 'X')
+        else:
+            # Complete dimensions
+            self._y = self._get_coordinate_dimension('y')
+            self._x = self._get_coordinate_dimension('x')
+
+            # Dimensions screening
+            self.y = self._get_coordinate_values(self._y, 'Y')
+            self.x = self._get_coordinate_values(self._x, 'X')
+
+        # Set axis limits for parallel writing
+        self.write_axis_limits = self.get_write_axis_limits()
+
+        self._var_dim = ('y', 'x')
+        self._lat_dim = ('y', 'x')
+        self._lon_dim = ('y', 'x')
+
+        self.free_vars('crs')
+
+[docs] + @staticmethod + def new(comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='Y', + avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, + balanced=False, times=None, **kwargs): + """ + Initialize the Nes class. + + Parameters + ---------- + comm: MPI.COMM + MPI Communicator. + path: str + Path to the NetCDF to initialize the object. + info: bool + Indicates if you want to get reading/writing info. + dataset: Dataset + NetCDF4-python Dataset to initialize the class. + xarray: bool: + (Not working) Indicates if you want to use xarray as default. + parallel_method : str + Indicates the parallelization method that you want. Default: 'Y'. + Accepted values: ['X', 'Y', 'T']. + avoid_first_hours : int + Number of hours to remove from first time steps. + avoid_last_hours : int + Number of hours to remove from last time steps. + first_level : int + Index of the first level to use. + last_level : int, None + Index of the last level to use. None if it is the last. + create_nes : bool + Indicates if you want to create the object from scratch (True) or through an existing file. + balanced : bool + Indicates if you want a balanced parallelization or not. + Balanced dataset cannot be written in chunking mode. + times : list, None + List of times to substitute the current ones while creation. + """ + + new = MercatorNes(comm=comm, path=path, info=info, dataset=dataset, xarray=xarray, + parallel_method=parallel_method, avoid_first_hours=avoid_first_hours, + avoid_last_hours=avoid_last_hours, first_level=first_level, last_level=last_level, + create_nes=create_nes, balanced=balanced, times=times, **kwargs) + + return new
+ + +
+[docs] + def filter_coordinates_selection(self): + """ + Use the selection limits to filter y, x, time, lev, lat, lon, lon_bnds and lat_bnds. + """ + + idx = self.get_idx_intervals() + + self.y = self._get_coordinate_values(self._y, 'Y') + self.x = self._get_coordinate_values(self._x, 'X') + + self._y['data'] = self._y['data'][idx['idx_y_min']:idx['idx_y_max']] + self._x['data'] = self._x['data'][idx['idx_x_min']:idx['idx_x_max']] + + super(MercatorNes, self).filter_coordinates_selection() + + return None
+ + + def _get_pyproj_projection(self): + """ + Get projection data as in Pyproj library. + + Returns + ---------- + projection : pyproj.Proj + Grid projection. + """ + + projection = Proj(proj='merc', + a=self.earth_radius[1], + b=self.earth_radius[0], + lat_ts=np.float64(self.projection_data['standard_parallel']), + lon_0=np.float64(self.projection_data['longitude_of_projection_origin']), + ) + + return projection + + def _get_projection(self): + """ + Get 'projection' and 'projection_data' from grid details. + """ + + if 'mercator' in self.variables.keys(): + projection_data = self.variables['mercator'] + self.free_vars('mercator') + + else: + msg = 'There is no variable called mercator, projection has not been defined.' + raise RuntimeError(msg) + + if 'dtype' in projection_data.keys(): + del projection_data['dtype'] + + if 'data' in projection_data.keys(): + del projection_data['data'] + + if 'dimensions' in projection_data.keys(): + del projection_data['dimensions'] + + self.projection_data = projection_data + self.projection = self._get_pyproj_projection() + + return None + + def _create_projection(self, **kwargs): + """ + Create 'projection' and 'projection_data' from projection arguments. + """ + + projection_data = {'grid_mapping_name': 'mercator', + 'standard_parallel': kwargs['lat_ts'], # TODO: Check if True + 'longitude_of_projection_origin': kwargs['lon_0'], + 'x_0': kwargs['x_0'], 'y_0': kwargs['y_0'], + 'inc_x': kwargs['inc_x'], 'inc_y': kwargs['inc_y'], + 'nx': kwargs['nx'], 'ny': kwargs['ny'], + } + + + self.projection_data = projection_data + self.projection = self._get_pyproj_projection() + + return None + + def _create_dimensions(self, netcdf): + """ + Create 'y', 'x' and 'spatial_nv' dimensions and the super dimensions 'lev', 'time', 'time_nv', 'lon' and 'lat' + + Parameters + ---------- + netcdf : Dataset + NetCDF object. + """ + + super(MercatorNes, self)._create_dimensions(netcdf) + + # Create y and x dimensions + netcdf.createDimension('y', len(self._y['data'])) + netcdf.createDimension('x', len(self._x['data'])) + + # Create spatial_nv (number of vertices) dimension + if (self._lat_bnds is not None) and (self._lon_bnds is not None): + netcdf.createDimension('spatial_nv', 4) + + return None + + def _create_dimension_variables(self, netcdf): + """ + Create the 'y' and 'x' variables. + + Parameters + ---------- + netcdf : Dataset + NetCDF object. + """ + + super(MercatorNes, self)._create_dimension_variables(netcdf) + + # MERCATOR Y COORDINATES + y = netcdf.createVariable('y', self._y['data'].dtype, ('y',)) + y.long_name = 'y coordinate of projection' + if 'units' in self._y.keys(): + y.units = Units(self._y['units'], formatted=True).units + else: + y.units = 'm' + y.standard_name = 'projection_y_coordinate' + if self.size > 1: + y.set_collective(True) + y[:] = self._y['data'] + + # MERCATOR X COORDINATES + x = netcdf.createVariable('x', self._x['data'].dtype, ('x',)) + x.long_name = 'x coordinate of projection' + if 'units' in self._x.keys(): + x.units = Units(self._x['units'], formatted=True).units + else: + x.units = 'm' + x.standard_name = 'projection_x_coordinate' + if self.size > 1: + x.set_collective(True) + x[:] = self._x['data'] + + return None + + def _create_centre_coordinates(self, **kwargs): + """ + Calculate centre latitudes and longitudes from grid details. 
+ """ + + # Get projection details on x + x_0 = np.float64(self.projection_data['x_0']) + inc_x = np.float64(self.projection_data['inc_x']) + nx = int(self.projection_data['nx']) + + # Get projection details on y + y_0 = np.float64(self.projection_data['y_0']) + inc_y = np.float64(self.projection_data['inc_y']) + ny = int(self.projection_data['ny']) + + # Create a regular grid in metres (1D) + self._x = {'data': np.linspace(x_0 + (inc_x / 2), + x_0 + (inc_x / 2) + (inc_x * (nx - 1)), + nx, dtype=np.float64)} + self._y = {'data': np.linspace(y_0 + (inc_y / 2), + y_0 + (inc_y / 2) + (inc_y * (ny - 1)), + ny, dtype=np.float64)} + + # Create a regular grid in metres (1D to 2D) + x = np.array([self._x['data']] * len(self._y['data'])) + y = np.array([self._y['data']] * len(self._x['data'])).T + + # Calculate centre latitudes and longitudes (UTM to Mercator) + centre_lon, centre_lat = self.projection(x, y, inverse=True) + + return {'data': centre_lat}, {'data': centre_lon} + +
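+# Minimal sketch of the centre-coordinate calculation above using plain pyproj;
+# the grid parameters are illustrative, only the inverse 'merc' transform is real.
+import numpy as np
+from pyproj import Proj
+
+proj = Proj(proj='merc', a=6370000.0, b=6370000.0, lat_ts=0.0, lon_0=0.0)
+x_axis = np.linspace(-1.0e6, 1.0e6, 5)                    # hypothetical x centres (m)
+y_axis = np.linspace(-5.0e5, 5.0e5, 3)                    # hypothetical y centres (m)
+x2d, y2d = np.meshgrid(x_axis, y_axis)                    # 1D axes to a 2D grid
+centre_lon, centre_lat = proj(x2d, y2d, inverse=True)     # metres back to degrees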
+[docs] + def create_providentia_exp_centre_coordinates(self): + """ + Calculate centre latitudes and longitudes from original coordinates and store as 2D arrays. + + Returns + ---------- + model_centre_lat : dict + Dictionary with data of centre coordinates for latitude in 2D (latitude, longitude). + model_centre_lon : dict + Dictionary with data of centre coordinates for longitude in 2D (latitude, longitude). + """ + + # Get centre latitudes + model_centre_lat = self.lat + + # Get centre longitudes + model_centre_lon = self.lon + + return model_centre_lat, model_centre_lon
+[docs] + def create_providentia_exp_grid_edge_coordinates(self): + """ + Calculate grid edge latitudes and longitudes and get model grid outline. + + Returns + ---------- + grid_edge_lat : dict + Dictionary with data of grid edge latitudes. + grid_edge_lon : dict + Dictionary with data of grid edge longitudes. + """ + + # Get grid resolution + inc_x = np.abs(np.mean(np.diff(self.x['data']))) + inc_y = np.abs(np.mean(np.diff(self.y['data']))) + + # Get bounds for rotated coordinates + y_bounds = self.create_single_spatial_bounds(self.y['data'], inc_y) + x_bounds = self.create_single_spatial_bounds(self.x['data'], inc_x) + + # Get rotated latitudes for grid edge + left_edge_y = np.append(y_bounds.flatten()[::2], y_bounds.flatten()[-1]) + right_edge_y = np.flip(left_edge_y, 0) + top_edge_y = np.repeat(y_bounds[-1][-1], len(self.x['data']) - 1) + bottom_edge_y = np.repeat(y_bounds[0][0], len(self.x['data'])) + y_grid_edge = np.concatenate((left_edge_y, top_edge_y, right_edge_y, bottom_edge_y)) + + # Get rotated longitudes for grid edge + left_edge_x = np.repeat(x_bounds[0][0], len(self.y['data']) + 1) + top_edge_x = x_bounds.flatten()[1:-1:2] + right_edge_x = np.repeat(x_bounds[-1][-1], len(self.y['data']) + 1) + bottom_edge_x = np.flip(x_bounds.flatten()[:-1:2], 0) + x_grid_edge = np.concatenate((left_edge_x, top_edge_x, right_edge_x, bottom_edge_x)) + + # Get edges for regular coordinates + grid_edge_lon_data, grid_edge_lat_data = self.projection(x_grid_edge, y_grid_edge, inverse=True) + + # Create grid outline by stacking the edges in both coordinates + model_grid_outline = np.vstack((grid_edge_lon_data, grid_edge_lat_data)).T + grid_edge_lat = {'data': model_grid_outline[:,1]} + grid_edge_lon = {'data': model_grid_outline[:,0]} + + return grid_edge_lat, grid_edge_lon
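+# Quick shape check for the outline built above (a sketch, not part of the
+# class): the four concatenated edges trace a closed ring around an nx x ny
+# grid, so the outline holds 2 * nx + 2 * ny + 1 points.
+nx, ny = 4, 3                                             # hypothetical grid size
+n_outline_points = 2 * nx + 2 * ny + 1                    # 15 points here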
+[docs] + def create_spatial_bounds(self): + """ + Calculate longitude and latitude bounds and set them. + """ + + # Calculate Mercator coordinates bounds + inc_x = np.abs(np.mean(np.diff(self._x['data']))) + x_bnds = self.create_single_spatial_bounds(np.array([self._x['data']] * len(self._y['data'])), + inc_x, spatial_nv=4) + + inc_y = np.abs(np.mean(np.diff(self._y['data']))) + y_bnds = self.create_single_spatial_bounds(np.array([self._y['data']] * len(self._x['data'])).T, + inc_y, spatial_nv=4, inverse=True) + + # Transform Mercator bounds to regular bounds + lon_bnds, lat_bnds = self.projection(x_bnds, y_bnds, inverse=True) + + # Obtain regular coordinates bounds + self._lat_bnds = {} + self._lat_bnds['data'] = deepcopy(lat_bnds) + self.lat_bnds = {} + self.lat_bnds['data'] = lat_bnds[self.read_axis_limits['y_min']:self.read_axis_limits['y_max'], + self.read_axis_limits['x_min']:self.read_axis_limits['x_max'], + :] + + self._lon_bnds = {} + self._lon_bnds['data'] = deepcopy(lon_bnds) + self.lon_bnds = {} + self.lon_bnds['data'] = lon_bnds[self.read_axis_limits['y_min']:self.read_axis_limits['y_max'], + self.read_axis_limits['x_min']:self.read_axis_limits['x_max'], + :] + + return None
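+# Minimal sketch of the bounds transform above: cell corners expressed in
+# projected metres are converted to lon-lat with the inverse transform
+# (illustrative values).
+import numpy as np
+from pyproj import Proj
+
+proj = Proj(proj='merc', a=6370000.0, b=6370000.0, lat_ts=0.0, lon_0=0.0)
+x_bnds = np.array([[0.0, 1.0e5], [1.0e5, 2.0e5]])         # hypothetical cell edges (m)
+y_bnds = np.array([[0.0, 1.0e5], [1.0e5, 2.0e5]])
+lon_bnds, lat_bnds = proj(x_bnds, y_bnds, inverse=True)   # same shapes, in degrees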
+    @staticmethod
+    def _set_var_crs(var):
+        """
+        Set the grid_mapping to 'mercator'.
+
+        Parameters
+        ----------
+        var : Variable
+            netCDF4-python variable object.
+        """
+
+        var.grid_mapping = 'mercator'
+        var.coordinates = "lat lon"
+
+        return None
+
+    def _create_metadata(self, netcdf):
+        """
+        Create the 'mercator' grid-mapping variable.
+
+        Parameters
+        ----------
+        netcdf : Dataset
+            netcdf4-python Dataset.
+        """
+
+        if self.projection_data is not None:
+            mapping = netcdf.createVariable('mercator', 'i')
+            mapping.grid_mapping_name = self.projection_data['grid_mapping_name']
+            mapping.standard_parallel = self.projection_data['standard_parallel']
+            mapping.longitude_of_projection_origin = self.projection_data['longitude_of_projection_origin']
+
+        return None
+
+[docs]
+    def to_grib2(self, path, grib_keys, grib_template_path, lat_flip=False, info=False):
+        """
+        Write the output file in grib2 format.
+
+        Parameters
+        ----------
+        path : str
+            Path to the output file.
+        grib_keys : dict
+            Dictionary with the grib2 keys.
+        grib_template_path : str
+            Path to the grib2 file to use as template.
+        lat_flip : bool
+            Indicates if you want to flip the latitude direction.
+        info : bool
+            Indicates if you want to print extra information during the process.
+        """
+
+        raise NotImplementedError("Grib2 format cannot be written in a Mercator projection.")
+[docs]
+    def create_shapefile(self):
+        """
+        Create spatial geodataframe (shapefile).
+
+        Returns
+        -------
+        shapefile : GeoPandasDataFrame
+            Shapefile dataframe.
+        """
+
+        if self.shapefile is None:
+
+            # Get latitude and longitude cell boundaries
+            if self._lat_bnds is None or self._lon_bnds is None:
+                self.create_spatial_bounds()
+
+            # Reshape arrays to create geometry
+            aux_b_lats = self.lat_bnds['data'].reshape((self.lat_bnds['data'].shape[0] * self.lat_bnds['data'].shape[1],
+                                                        self.lat_bnds['data'].shape[2]))
+            aux_b_lons = self.lon_bnds['data'].reshape((self.lon_bnds['data'].shape[0] * self.lon_bnds['data'].shape[1],
+                                                        self.lon_bnds['data'].shape[2]))
+
+            # Get polygons from bounds
+            geometry = []
+            for i in range(aux_b_lons.shape[0]):
+                geometry.append(Polygon([(aux_b_lons[i, 0], aux_b_lats[i, 0]),
+                                         (aux_b_lons[i, 1], aux_b_lats[i, 1]),
+                                         (aux_b_lons[i, 2], aux_b_lats[i, 2]),
+                                         (aux_b_lons[i, 3], aux_b_lats[i, 3]),
+                                         (aux_b_lons[i, 0], aux_b_lats[i, 0])]))
+
+            # Create dataframe containing all polygons
+            fids = self.get_fids()
+            gdf = gpd.GeoDataFrame(index=pd.Index(name='FID', data=fids.ravel()),
+                                   geometry=geometry,
+                                   crs="EPSG:4326")
+            self.shapefile = gdf
+
+        else:
+            gdf = self.shapefile
+
+        return gdf
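+# Minimal sketch of the polygon construction above for a single cell with
+# 4-vertex bounds; the corner coordinates are illustrative.
+from shapely.geometry import Polygon
+
+cell_lons = [0.0, 1.0, 1.0, 0.0]                          # hypothetical corners
+cell_lats = [40.0, 40.0, 41.0, 41.0]
+ring = list(zip(cell_lons, cell_lats))
+ring.append(ring[0])                                      # close the ring
+cell_polygon = Polygon(ring)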
+[docs]
+    def get_centroids_from_coordinates(self):
+        """
+        Get centroids from geographical coordinates.
+
+        Returns
+        -------
+        centroids_gdf: GeoPandasDataFrame
+            Centroids dataframe.
+        """
+
+        # Get centroids from coordinates
+        centroids = []
+        for lat_ind in range(0, self.lon['data'].shape[0]):
+            for lon_ind in range(0, self.lon['data'].shape[1]):
+                centroids.append(Point(self.lon['data'][lat_ind, lon_ind],
+                                       self.lat['data'][lat_ind, lon_ind]))
+
+        # Create dataframe containing all points
+        fids = self.get_fids()
+        centroids_gdf = gpd.GeoDataFrame(index=pd.Index(name='FID', data=fids.ravel()),
+                                         geometry=centroids,
+                                         crs="EPSG:4326")
+
+        return centroids_gdf
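+# Minimal sketch of the centroid dataframe built above, with illustrative 2D
+# arrays in place of self.lon['data'] and self.lat['data'].
+import numpy as np
+import pandas as pd
+import geopandas as gpd
+from shapely.geometry import Point
+
+lon2d = np.array([[0.0, 1.0], [0.0, 1.0]])                # hypothetical centre lons
+lat2d = np.array([[40.0, 40.0], [41.0, 41.0]])            # hypothetical centre lats
+points = [Point(x, y) for x, y in zip(lon2d.ravel(), lat2d.ravel())]
+fids = pd.Index(np.arange(lon2d.size), name='FID')
+centroids_gdf = gpd.GeoDataFrame(index=fids, geometry=points, crs="EPSG:4326")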
\ No newline at end of file
diff --git a/docs/build/html/_modules/nes/nc_projections/points_nes.html b/docs/build/html/_modules/nes/nc_projections/points_nes.html
new file mode 100644
index 0000000000000000000000000000000000000000..03a2900ba75617ab59e6d07d4cb7538abb2d37de
--- /dev/null
+++ b/docs/build/html/_modules/nes/nc_projections/points_nes.html
@@ -0,0 +1,879 @@
+nes.nc_projections.points_nes — NES 1.1.3 documentation
Source code for nes.nc_projections.points_nes

+#!/usr/bin/env python
+
+import sys
+import warnings
+import numpy as np
+import pandas as pd
+from copy import deepcopy
+import geopandas as gpd
+from netCDF4 import date2num, stringtochar
+from .default_nes import Nes
+
+
+
+[docs] +class PointsNes(Nes): + """ + + Attributes + ---------- + _var_dim : tuple + Tuple with the name of the Y and X dimensions for the variables. + ('lat', 'lon') for a points grid. + _lat_dim : tuple + Tuple with the name of the dimensions of the Latitude values. + ('lat',) for a points grid. + _lon_dim : tuple + Tuple with the name of the dimensions of the Longitude values. + ('lon',) for a points grid. + _station : tuple + Tuple with the name of the dimensions of the station values. + ('station',) for a points grid. + """ + def __init__(self, comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='X', + avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, + balanced=False, times=None, **kwargs): + """ + Initialize the PointsNes class. + + Parameters + ---------- + comm: MPI.Communicator + MPI Communicator. + path: str + Path to the NetCDF to initialize the object. + info: bool + Indicates if you want to get reading/writing info. + dataset: Dataset, None + NetCDF4-python Dataset to initialize the class. + xarray: bool + (Not working) Indicates if you want to use xarray as default. + parallel_method : str + Indicates the parallelization method that you want. Default: 'X'. + accepted values: ['X', 'T']. + strlen: int + Maximum length of strings in NetCDF. Default: 75. + avoid_first_hours : int + Number of hours to remove from first time steps. + avoid_last_hours : int + Number of hours to remove from last time steps. + first_level : int + Index of the first level to use. + last_level : int, None + Index of the last level to use. None if it is the last. + create_nes : bool + Indicates if you want to create the object from scratch (True) or through an existing file. + balanced : bool + Indicates if you want a balanced parallelization or not. + Balanced dataset cannot be written in chunking mode. + times : list, None + List of times to substitute the current ones while creation. + """ + + super(PointsNes, self).__init__(comm=comm, path=path, info=info, dataset=dataset, + xarray=xarray, parallel_method=parallel_method, + avoid_first_hours=avoid_first_hours, avoid_last_hours=avoid_last_hours, + first_level=first_level, last_level=last_level, create_nes=create_nes, + times=times, **kwargs) + + if create_nes: + # Dimensions screening + self.lat = self._get_coordinate_values(self._lat, 'X') + self.lon = self._get_coordinate_values(self._lon, 'X') + + # Complete dimensions + self._station = {'data': np.arange(len(self._lon['data']))} + + # Dimensions screening + self.station = self._get_coordinate_values(self._station, 'X') + + # Set axis limits for parallel writing + self.write_axis_limits = self.get_write_axis_limits() + + self._var_dim = ('station',) + self._lat_dim = ('station',) + self._lon_dim = ('station',) + +
+[docs] + @staticmethod + def new(comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='X', + avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, + create_nes=False, balanced=False, times=None, **kwargs): + """ + Initialize the Nes class. + + Parameters + ---------- + comm: MPI.COMM + MPI Communicator. + path: str + Path to the NetCDF to initialize the object. + info: bool + Indicates if you want to get reading/writing info. + dataset: Dataset + NetCDF4-python Dataset to initialize the class. + xarray: bool: + (Not working) Indicates if you want to use xarray as default. + parallel_method : str + Indicates the parallelization method that you want. Default: 'X'. + accepted values: ['X', 'T']. + avoid_first_hours : int + Number of hours to remove from first time steps. + avoid_last_hours : int + Number of hours to remove from last time steps. + first_level : int + Index of the first level to use. + last_level : int, None + Index of the last level to use. None if it is the last. + create_nes : bool + Indicates if you want to create the object from scratch (True) or through an existing file. + balanced : bool + Indicates if you want a balanced parallelization or not. + Balanced dataset cannot be written in chunking mode. + times : list, None + List of times to substitute the current ones while creation. + """ + + new = PointsNes(comm=comm, path=path, info=info, dataset=dataset, xarray=xarray, + parallel_method=parallel_method, avoid_first_hours=avoid_first_hours, + avoid_last_hours=avoid_last_hours, first_level=first_level, last_level=last_level, + create_nes=create_nes, balanced=balanced, times=times, **kwargs) + + return new
+ + + def _get_projection(self): + """ + Get 'projection' and 'projection_data' from grid details. + """ + + self.projection_data = None + self.projection = None + + return None + + def _create_projection(self, **kwargs): + """ + Create 'projection' and 'projection_data' from projection arguments. + """ + + self.projection_data = None + self.projection = None + + return None + + def _create_dimensions(self, netcdf): + """ + Create 'time', 'time_nv', 'station' and 'strlen' dimensions. + + Parameters + ---------- + netcdf : Dataset + NetCDF object. + """ + + # Create time dimension + netcdf.createDimension('time', None) + + # Create time_nv (number of vertices) dimension + if self._time_bnds is not None: + netcdf.createDimension('time_nv', 2) + + # Create station dimension + # The number of longitudes is equal to the number of stations + netcdf.createDimension('station', len(self._lon['data'])) + + # Create string length dimension + if self.strlen is not None: + netcdf.createDimension('strlen', self.strlen) + + return None + + def _create_dimension_variables(self, netcdf): + """ + Create the 'time', 'time_bnds', 'station', 'lat', 'lat_bnds', 'lon' and 'lon_bnds' variables. + + Parameters + ---------- + netcdf : Dataset + NetCDF object. + """ + + # TIMES + time_var = netcdf.createVariable('time', np.float64, ('time',), zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + time_var.units = 'hours since {0}'.format( + self._time[self.get_time_id(self.hours_start, first=True)].strftime('%Y-%m-%d %H:%M:%S')) + time_var.standard_name = 'time' + time_var.calendar = 'standard' + time_var.long_name = 'time' + if self._time_bnds is not None: + time_var.bounds = 'time_bnds' + if self.size > 1: + time_var.set_collective(True) + time_var[:] = date2num(self._time[self.get_time_id(self.hours_start, first=True): + self.get_time_id(self.hours_end, first=False)], + time_var.units, time_var.calendar) + + # TIME BOUNDS + if self._time_bnds is not None: + time_bnds_var = netcdf.createVariable('time_bnds', np.float64, ('time', 'time_nv',), zlib=self.zip_lvl, + complevel=self.zip_lvl) + if self.size > 1: + time_bnds_var.set_collective(True) + time_bnds_var[:] = date2num(self._time_bnds, time_var.units, calendar='standard') + + # STATIONS + stations = netcdf.createVariable('station', np.float64, ('station',), zlib=self.zip_lvl > 0, + complevel=self.zip_lvl) + stations.units = '' + stations.axis = 'X' + stations.long_name = '' + stations.standard_name = 'station' + if self.size > 1: + stations.set_collective(True) + stations[:] = self._station['data'] + + # LATITUDES + lat = netcdf.createVariable('lat', np.float64, self._lat_dim, + zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + lat.units = 'degrees_north' + lat.axis = 'Y' + lat.long_name = 'latitude coordinate' + lat.standard_name = 'latitude' + if self._lat_bnds is not None: + lat.bounds = 'lat_bnds' + if self.size > 1: + lat.set_collective(True) + lat[:] = self._lat['data'] + + # LONGITUDES + lon = netcdf.createVariable('lon', np.float64, self._lon_dim, + zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + lon.units = 'degrees_east' + lon.axis = 'X' + lon.long_name = 'longitude coordinate' + lon.standard_name = 'longitude' + if self._lon_bnds is not None: + lon.bounds = 'lon_bnds' + if self.size > 1: + lon.set_collective(True) + lon[:] = self._lon['data'] + + return None + + def _get_coordinate_values(self, coordinate_info, coordinate_axis, bounds=False): + """ + Get the coordinate data of the current portion. 
+ + Parameters + ---------- + coordinate_info : dict, list + Dictionary with the 'data' key with the coordinate variable values. and the attributes as other keys. + coordinate_axis : str + Name of the coordinate to extract. Accepted values: ['X']. + bounds : bool + Boolean variable to know if there are coordinate bounds. + Returns + ------- + values : dict + Dictionary with the portion of data corresponding to the rank. + """ + + if coordinate_info is None: + return None + + if not isinstance(coordinate_info, dict): + values = {'data': deepcopy(coordinate_info)} + else: + values = deepcopy(coordinate_info) + + coordinate_len = len(values['data'].shape) + if bounds: + coordinate_len -= 1 + + if coordinate_axis == 'X': + if coordinate_len == 1: + values['data'] = values['data'][self.read_axis_limits['x_min']:self.read_axis_limits['x_max']] + elif coordinate_len == 2: + values['data'] = values['data'][self.read_axis_limits['t_min']:self.read_axis_limits['t_max'], + self.read_axis_limits['x_min']:self.read_axis_limits['x_max']] + else: + raise NotImplementedError("The coordinate has wrong dimensions: {dim}".format( + dim=values['data'].shape)) + + return values + + def _read_variable(self, var_name): + """ + Read the corresponding variable data according to the current rank. + + Parameters + ---------- + var_name : str + Name of the variable to read. + + Returns + ------- + data: np.array + Portion of the variable data corresponding to the rank. + """ + + nc_var = self.netcdf.variables[var_name] + var_dims = nc_var.dimensions + + # Read data in 1 or 2 dimensions + if len(var_dims) < 2: + data = nc_var[self.read_axis_limits['x_min']:self.read_axis_limits['x_max']] + elif len(var_dims) == 2: + if 'strlen' in var_dims: + data = nc_var[self.read_axis_limits['x_min']:self.read_axis_limits['x_max'], :] + data = np.array([''.join(i.tostring().decode('ascii').replace('\x00', '')) for i in data], + dtype=np.object) + else: + data = nc_var[self.read_axis_limits['t_min']:self.read_axis_limits['t_max'], + self.read_axis_limits['x_min']:self.read_axis_limits['x_max']] + else: + raise NotImplementedError("Error with {0}. Only can be read netCDF with 2 dimensions or less".format( + var_name)) + + # Unmask array + data = self._unmask_array(data) + + return data + + def _create_variables(self, netcdf, chunking=False): + """ + Create the netCDF file variables. + + Parameters + ---------- + netcdf : Dataset + netcdf4-python open Dataset. + chunking : bool + Indicates if you want to chunk the output netCDF. + """ + + if self.variables is not None: + for i, (var_name, var_dict) in enumerate(self.variables.items()): + # Get data type + if 'dtype' in var_dict.keys(): + var_dtype = var_dict['dtype'] + if (var_dict['data'] is not None) and (var_dtype != var_dict['data'].dtype): + msg = "WARNING!!! " + msg += "Different data types for variable {0}. ".format(var_name) + msg += "Input dtype={0}. Data dtype={1}.".format(var_dtype, var_dict['data'].dtype) + warnings.warn(msg) + sys.stderr.flush() + try: + var_dict['data'] = var_dict['data'].astype(var_dtype) + except Exception as e: # TODO: Detect exception + raise e("It was not possible to cast the data to the input dtype.") + else: + var_dtype = var_dict['data'].dtype + if var_dtype is np.object: + raise TypeError("Data dtype is np.object. 
Define dtype explicitly as dictionary key 'dtype'") + + # Get dimensions when reading datasets + if 'dimensions' in var_dict.keys(): + var_dims = var_dict['dimensions'] + # Get dimensions when creating new datasets + else: + if len(var_dict['data'].shape) == 1: + # For data that depends only on station (e.g. station_code) + var_dims = self._var_dim + else: + # For data that is dependent on time and station (e.g. PM10) + var_dims = ('time',) + self._var_dim + + if var_dict['data'] is not None: + + # Ensure data is of type numpy array (to create NES) + if not isinstance(var_dict['data'], (np.ndarray, np.generic)): + try: + var_dict['data'] = np.array(var_dict['data']) + except AttributeError: + raise AttributeError("Data for variable {0} must be a numpy array.".format(var_name)) + + # Convert list of strings to chars for parallelization + if np.issubdtype(var_dtype, np.character): + var_dict['data_aux'] = self.str2char(var_dict['data']) + var_dims += ('strlen',) + var_dtype = 'S1' + + if self.info: + print('Rank {0:03d}: Writing {1} var ({2}/{3})'.format(self.rank, var_name, i + 1, + len(self.variables))) + if not chunking: + var = netcdf.createVariable(var_name, var_dtype, var_dims, + zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + else: + if self.balanced: + raise NotImplementedError("A balanced data cannot be chunked.") + if self.master: + chunk_size = var_dict['data'].shape + else: + chunk_size = None + chunk_size = self.comm.bcast(chunk_size, root=0) + var = netcdf.createVariable(var_name, var_dtype, var_dims, + zlib=self.zip_lvl > 0, complevel=self.zip_lvl, + chunksizes=chunk_size) + + if self.info: + print('Rank {0:03d}: Var {1} created ({2}/{3})'.format( + self.rank, var_name, i + 1, len(self.variables))) + if self.size > 1: + var.set_collective(True) + if self.info: + print('Rank {0:03d}: Var {1} collective ({2}/{3})'.format( + self.rank, var_name, i + 1, len(self.variables))) + + for att_name, att_value in var_dict.items(): + if att_name == 'data': + if self.info: + print("Rank {0:03d}: Filling {1})".format(self.rank, var_name)) + if 'data_aux' in var_dict.keys(): + att_value = var_dict['data_aux'] + if len(att_value.shape) == 1: + try: + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max']] = att_value + except IndexError: + raise IndexError("Different shapes. out_shape={0}, data_shp={1}".format( + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max']].shape, + att_value.shape)) + except ValueError: + raise ValueError("Axis limits cannot be accessed. out_shape={0}, data_shp={1}".format( + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max']].shape, + att_value.shape)) + elif len(att_value.shape) == 2: + if 'strlen' in var_dims: + try: + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max'], :] = att_value + except IndexError: + raise IndexError("Different shapes. out_shape={0}, data_shp={1}".format( + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max'], :].shape, + att_value.shape)) + except ValueError: + raise ValueError("Axis limits cannot be accessed. out_shape={0}, data_shp={1}".format( + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max'], :].shape, + att_value.shape)) + else: + try: + var[self.write_axis_limits['t_min']:self.write_axis_limits['t_max'], + self.write_axis_limits['x_min']:self.write_axis_limits['x_max']] = att_value + except IndexError: + raise IndexError("Different shapes. 
out_shape={0}, data_shp={1}".format( + var[self.write_axis_limits['t_min']:self.write_axis_limits['t_max'], + self.write_axis_limits['x_min']:self.write_axis_limits['x_max']].shape, + att_value.shape)) + except ValueError: + raise ValueError("Axis limits cannot be accessed. out_shape={0}, data_shp={1}".format( + var[self.write_axis_limits['t_min']:self.write_axis_limits['t_max'], + self.write_axis_limits['x_min']:self.write_axis_limits['x_max']].shape, + att_value.shape)) + if self.info: + print('Rank {0:03d}: Var {1} data ({2}/{3})'.format(self.rank, var_name, i + 1, + len(self.variables))) + elif att_name not in ['chunk_size', 'var_dims', 'dimensions', 'dtype', 'data_aux']: + var.setncattr(att_name, att_value) + + if 'data_aux' in var_dict.keys(): + del var_dict['data_aux'] + + self._set_var_crs(var) + if self.info: + print('Rank {0:03d}: Var {1} completed ({2}/{3})'.format(self.rank, var_name, i + 1, + len(self.variables))) + + return None + + def _gather_data(self, data_to_gather): + """ + Gather all the variable data into the MPI rank 0 to perform a serial write. + + Returns + ------- + data_to_gather: dict + Variables to gather. + """ + data_list = deepcopy(data_to_gather) + for var_name, var_info in data_list.items(): + try: + # noinspection PyArgumentList + data_aux = self.comm.gather(data_list[var_name]['data'], root=0) + if self.rank == 0: + shp_len = len(data_list[var_name]['data'].shape) + if self.parallel_method == 'X': + # concatenate over station + if shp_len == 1: + # dimensions = (station) + axis = 0 + elif shp_len == 2: + if 'strlen' in var_info['dimensions']: + # dimensions = (station, strlen) + axis = 0 + else: + # dimensions = (time, station) + axis = 1 + else: + msg = "The points NetCDF must have " + msg += "surface values (without levels)." + raise NotImplementedError(msg) + elif self.parallel_method == 'T': + # concatenate over time + if shp_len == 1: + # dimensions = (station) + axis = None + continue + elif shp_len == 2: + if 'strlen' in var_info['dimensions']: + # dimensions = (station, strlen) + axis = None + continue + else: + # dimensions = (time, station) + axis = 0 + else: + msg = "The points NetCDF must only have surface values (without levels)." + raise NotImplementedError(msg) + else: + raise NotImplementedError( + "Parallel method '{meth}' is not implemented. Use one of these: {accept}".format( + meth=self.parallel_method, accept=['X', 'T'])) + data_list[var_name]['data'] = np.concatenate(data_aux, axis=axis) + except Exception as e: + msg = "**ERROR** an error has occurred while gathering the '{0}' variable.\n".format(var_name) + print(msg) + sys.stderr.write(msg) + print(e) + sys.stderr.write(str(e)) + # print(e, file=sys.stderr) + sys.stderr.flush() + self.comm.Abort(1) + raise e + + return data_list + + def _create_centre_coordinates(self, **kwargs): + """ + Calculate centre latitudes and longitudes from points. + + Parameters + ---------- + netcdf : Dataset + NetCDF object. + """ + + # Calculate centre latitudes + centre_lat = kwargs['lat'] + + # Calculate centre longitudes + centre_lon = kwargs['lon'] + + return {'data': centre_lat}, {'data': centre_lon} + + def _create_metadata(self, netcdf): + """ + Create metadata variables + + Parameters + ---------- + netcdf : Dataset + NetCDF object. + """ + + return None + +
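+# Minimal sketch of the gather step above: each rank contributes its portion
+# and rank 0 concatenates along the station axis (axis=1 for (time, station)
+# data under the 'X' parallel method). The two arrays stand in for two ranks.
+import numpy as np
+
+rank0_part = np.zeros((24, 3))                            # (time, station) portion
+rank1_part = np.ones((24, 2))                             # (time, station) portion
+gathered = np.concatenate([rank0_part, rank1_part], axis=1)   # shape (24, 5)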
+[docs] + def create_spatial_bounds(self): + """ + Calculate longitude and latitude bounds and set them. + """ + + raise NotImplementedError("Spatial bounds cannot be created for points datasets.")
+[docs]
+    def to_providentia(self, model_centre_lon, model_centre_lat, grid_edge_lon, grid_edge_lat):
+        """
+        Transform a PointsNes into a PointsNesProvidentia object.
+
+        Parameters
+        ----------
+        model_centre_lon : dict
+            Dictionary with data of centre coordinates for longitude in 2D.
+        model_centre_lat : dict
+            Dictionary with data of centre coordinates for latitude in 2D.
+        grid_edge_lon : dict
+            Dictionary with data of grid edge longitudes.
+        grid_edge_lat : dict
+            Dictionary with data of grid edge latitudes.
+
+        Returns
+        ----------
+        points_nes_providentia : nes.Nes
+            Points Nes Providentia Object.
+        """
+
+        from .points_nes_providentia import PointsNesProvidentia
+
+        points_nes_providentia = PointsNesProvidentia(comm=self.comm,
+                                                      info=self.info,
+                                                      balanced=self.balanced,
+                                                      parallel_method=self.parallel_method,
+                                                      avoid_first_hours=self.hours_start,
+                                                      avoid_last_hours=self.hours_end,
+                                                      first_level=self.first_level,
+                                                      last_level=self.last_level,
+                                                      create_nes=True,
+                                                      times=self.time,
+                                                      model_centre_lon=model_centre_lon,
+                                                      model_centre_lat=model_centre_lat,
+                                                      grid_edge_lon=grid_edge_lon,
+                                                      grid_edge_lat=grid_edge_lat,
+                                                      lat=self.lat['data'],
+                                                      lon=self.lon['data']
+                                                      )
+
+        # Convert dimensions (time, lev, lat, lon) to (station, time) for interpolated variables and reshape data
+        variables = {}
+        interpolated_variables = deepcopy(self.variables)
+        for var_name, var_info in interpolated_variables.items():
+            variables[var_name] = {}
+            # ('time', 'lev', 'lat', 'lon') or ('time', 'lat', 'lon') to ('station', 'time')
+            if len(var_info['dimensions']) != len(var_info['data'].shape):
+                variables[var_name]['data'] = var_info['data'].T
+                variables[var_name]['dimensions'] = ('station', 'time')
+            else:
+                variables[var_name]['data'] = var_info['data']
+                variables[var_name]['dimensions'] = var_info['dimensions']
+
+        # Set variables
+        points_nes_providentia.variables = variables
+
+        return points_nes_providentia
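+# Minimal sketch of the reshaping above: interpolated (time, station) data is
+# transposed to the (station, time) layout expected by Providentia.
+import numpy as np
+
+data_time_station = np.arange(12.0).reshape(4, 3)         # hypothetical (time, station)
+data_station_time = data_time_station.T                   # (station, time)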
+[docs]
+    def to_grib2(self, path, grib_keys, grib_template_path, lat_flip=False, info=False):
+        """
+        Write the output file in grib2 format.
+
+        Parameters
+        ----------
+        path : str
+            Path to the output file.
+        grib_keys : dict
+            Dictionary with the grib2 keys.
+        grib_template_path : str
+            Path to the grib2 file to use as template.
+        lat_flip : bool
+            Indicates if you want to flip the latitude direction.
+        info : bool
+            Indicates if you want to print extra information during the process.
+        """
+
+        raise NotImplementedError("Grib2 format cannot be written with point data.")
+[docs]
+    def create_shapefile(self):
+        """
+        Create spatial geodataframe (shapefile).
+
+        Returns
+        -------
+        shapefile : GeoPandasDataFrame
+            Shapefile dataframe.
+        """
+
+        if self.shapefile is None:
+
+            # Create dataframe containing all points
+            gdf = self.get_centroids_from_coordinates()
+            self.shapefile = gdf
+
+        else:
+            gdf = self.shapefile
+
+        return gdf
+[docs]
+    def get_centroids_from_coordinates(self):
+        """
+        Get centroids from geographical coordinates.
+
+        Returns
+        -------
+        centroids_gdf: GeoPandasDataFrame
+            Centroids dataframe.
+        """
+
+        # Get centroids from coordinates
+        centroids = gpd.points_from_xy(self.lon['data'], self.lat['data'])
+
+        # Create dataframe containing all points
+        fids = np.arange(len(self._lon['data']))
+        fids = fids[self.read_axis_limits['x_min']:self.read_axis_limits['x_max']]
+        centroids_gdf = gpd.GeoDataFrame(index=pd.Index(name='FID', data=fids),
+                                         geometry=centroids,
+                                         crs="EPSG:4326")
+
+        return centroids_gdf
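+# Minimal sketch of the centroid construction above for 1D station coordinates,
+# using illustrative values.
+import numpy as np
+import pandas as pd
+import geopandas as gpd
+
+lons = np.array([2.2, -3.7, 0.1])                         # hypothetical station lons
+lats = np.array([41.4, 40.4, 51.5])                       # hypothetical station lats
+gdf = gpd.GeoDataFrame(index=pd.Index(np.arange(3), name='FID'),
+                       geometry=gpd.points_from_xy(lons, lats),
+                       crs="EPSG:4326")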
+[docs]
+    def add_variables_to_shapefile(self, var_list, idx_lev=0, idx_time=0):
+        """
+        Add variables data to shapefile.
+
+        Parameters
+        ----------
+        var_list : list, str
+            List (or single string) of the variables to be loaded and saved in the shapefile.
+        idx_lev : int
+            Index of vertical level for which the data will be saved in the shapefile.
+        idx_time : int
+            Index of time for which the data will be saved in the shapefile.
+        """
+
+        if idx_lev != 0:
+            msg = 'Error: Points dataset has no level (Level: {0}).'.format(idx_lev)
+            raise ValueError(msg)
+
+        for var_name in var_list:
+            # station as dimension
+            if len(self.variables[var_name]['dimensions']) == 1:
+                self.shapefile[var_name] = self.variables[var_name]['data'][:].ravel()
+            # station and time as dimensions
+            else:
+                self.shapefile[var_name] = self.variables[var_name]['data'][idx_time, :].ravel()
+
+        return None
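+# Minimal sketch of what the method above adds to the shapefile dataframe: one
+# column per variable, holding the values at a single time step (illustrative data).
+import numpy as np
+import pandas as pd
+import geopandas as gpd
+
+gdf = gpd.GeoDataFrame(index=pd.Index(np.arange(3), name='FID'),
+                       geometry=gpd.points_from_xy([0.0, 1.0, 2.0], [40.0, 41.0, 42.0]))
+pm10 = np.random.rand(24, 3)                              # hypothetical (time, station) data
+gdf['pm10'] = pm10[0, :].ravel()                          # values at time index 0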
+ + + @staticmethod + def _get_axis_index_(axis): + if axis == 'T': + value = 0 + elif axis == 'X': + value = 1 + else: + raise ValueError("Unknown axis: {0}".format(axis)) + return value + + @staticmethod + def _set_var_crs(var): + """ + Set the grid_mapping + + Parameters + ---------- + var : Variable + netCDF4-python variable object. + """ + var.coordinates = "lat lon" + + return None
\ No newline at end of file
diff --git a/docs/build/html/_modules/nes/nc_projections/points_nes_ghost.html b/docs/build/html/_modules/nes/nc_projections/points_nes_ghost.html
new file mode 100644
index 0000000000000000000000000000000000000000..4507f8fc6ab41bcacf46e3a82cd66c6878649443
--- /dev/null
+++ b/docs/build/html/_modules/nes/nc_projections/points_nes_ghost.html
@@ -0,0 +1,945 @@
+nes.nc_projections.points_nes_ghost — NES 1.1.3 documentation
Source code for nes.nc_projections.points_nes_ghost

+#!/usr/bin/env python
+
+import sys
+import warnings
+import numpy as np
+from netCDF4 import stringtochar, date2num
+from copy import deepcopy
+from .points_nes import PointsNes
+
+
+
+[docs]
+class PointsNesGHOST(PointsNes):
+    """
+    Points dataset in GHOST format.
+
+    Attributes
+    ----------
+    _qa : dict
+        Quality flags (GHOST checks) dictionary with the complete 'data' key for all the values and the rest of the
+        attributes.
+    _flag : dict
+        Data flags (given by data provider) dictionary with the complete 'data' key for all the values and the rest of
+        the attributes.
+    qa : dict
+        Quality flags (GHOST checks) dictionary with the portion of 'data' corresponding to the rank values.
+    flag : dict
+        Data flags (given by data provider) dictionary with the portion of 'data' corresponding to the rank values.
+    """
+
+    def __init__(self, comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='X',
+                 avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False,
+                 balanced=False, times=None, **kwargs):
+        """
+        Initialize the PointsNesGHOST class.
+
+        Parameters
+        ----------
+        comm: MPI.COMM
+            MPI Communicator.
+        path: str
+            Path to the NetCDF to initialize the object.
+        info: bool
+            Indicates if you want to get reading/writing info.
+        dataset: Dataset
+            NetCDF4-python Dataset to initialize the class.
+        xarray: bool
+            (Not working) Indicates if you want to use xarray as default.
+        parallel_method : str
+            Indicates the parallelization method that you want. Default: 'X'.
+            Accepted values: ['X'].
+        avoid_first_hours : int
+            Number of hours to remove from first time steps.
+        avoid_last_hours : int
+            Number of hours to remove from last time steps.
+        first_level : int
+            Index of the first level to use.
+        last_level : int, None
+            Index of the last level to use. None if it is the last.
+        create_nes : bool
+            Indicates if you want to create the object from scratch (True) or through an existing file.
+        balanced : bool
+            Indicates if you want a balanced parallelization or not.
+            Balanced dataset cannot be written in chunking mode.
+        times : list, None
+            List of times to substitute the current ones while creation.
+        """
+
+        super(PointsNesGHOST, self).__init__(comm=comm, path=path, info=info, dataset=dataset,
+                                             xarray=xarray, parallel_method=parallel_method,
+                                             avoid_first_hours=avoid_first_hours, avoid_last_hours=avoid_last_hours,
+                                             first_level=first_level, last_level=last_level, create_nes=create_nes,
+                                             times=times, **kwargs)
+
+        # Complete dimensions
+        self._flag = self._get_coordinate_dimension(['flag'])
+        self._qa = self._get_coordinate_dimension(['qa'])
+
+        # Dimensions screening
+        self.flag = self._get_coordinate_values(self._flag, 'X')
+        self.qa = self._get_coordinate_values(self._qa, 'X')
+
+[docs] + @staticmethod + def new(comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='X', + avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, + balanced=False, times=None, **kwargs): + """ + Initialize the PointsNesGHOST class. + + Parameters + ---------- + comm: MPI.COMM + MPI Communicator. + path: str + Path to the NetCDF to initialize the object. + info: bool + Indicates if you want to get reading/writing info. + dataset: Dataset + NetCDF4-python Dataset to initialize the class. + xarray: bool: + (Not working) Indicates if you want to use xarray as default. + parallel_method : str + Indicates the parallelization method that you want. Default: 'X'. + Accepted values: ['X']. + avoid_first_hours : int + Number of hours to remove from first time steps. + avoid_last_hours : int + Number of hours to remove from last time steps. + first_level : int + Index of the first level to use. + last_level : int, None + Index of the last level to use. None if it is the last. + create_nes : bool + Indicates if you want to create the object from scratch (True) or through an existing file. + balanced : bool + Indicates if you want a balanced parallelization or not. + Balanced dataset cannot be written in chunking mode. + times : list, None + List of times to substitute the current ones while creation. + """ + + new = PointsNesGHOST(comm=comm, path=path, info=info, dataset=dataset, xarray=xarray, + parallel_method=parallel_method, avoid_first_hours=avoid_first_hours, + avoid_last_hours=avoid_last_hours, first_level=first_level, last_level=last_level, + create_nes=create_nes, balanced=balanced, times=times, **kwargs) + + return new
+ + + def _create_dimensions(self, netcdf): + """ + Create 'N_flag_codes' and 'N_qa_codes' dimensions and the super dimensions + 'time', 'time_nv', 'station', and 'strlen'. + + Parameters + ---------- + netcdf : Dataset + NetCDF object. + """ + + super(PointsNesGHOST, self)._create_dimensions(netcdf) + + # Create N_flag_codes and N_qa_codes dimensions + netcdf.createDimension('N_flag_codes', self._flag['data'].shape[2]) + netcdf.createDimension('N_qa_codes', self._qa['data'].shape[2]) + + return None + + def _create_dimension_variables(self, netcdf): + """ + Create the 'time', 'time_bnds', 'station', 'lat', 'lat_bnds', 'lon' and 'lon_bnds' variables. + + Parameters + ---------- + netcdf : Dataset + NetCDF object. + """ + + # TIMES + time_var = netcdf.createVariable('time', np.float64, ('time',), zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + time_var.units = 'hours since {0}'.format( + self._time[self.get_time_id(self.hours_start, first=True)].strftime('%Y-%m-%d %H:%M:%S')) + time_var.standard_name = 'time' + time_var.calendar = 'standard' + time_var.long_name = 'time' + if self._time_bnds is not None: + time_var.bounds = 'time_bnds' + if self.size > 1: + time_var.set_collective(True) + time_var[:] = date2num(self._time[self.get_time_id(self.hours_start, first=True): + self.get_time_id(self.hours_end, first=False)], + time_var.units, time_var.calendar) + + # TIME BOUNDS + if self._time_bnds is not None: + time_bnds_var = netcdf.createVariable('time_bnds', np.float64, ('time', 'time_nv',), zlib=self.zip_lvl, + complevel=self.zip_lvl) + if self.size > 1: + time_bnds_var.set_collective(True) + time_bnds_var[:] = date2num(self._time_bnds, time_var.units, calendar='standard') + + # STATIONS + stations = netcdf.createVariable('station', np.float64, ('station',), zlib=self.zip_lvl > 0, + complevel=self.zip_lvl) + stations.units = '' + stations.axis = 'X' + stations.long_name = '' + stations.standard_name = 'station' + if self.size > 1: + stations.set_collective(True) + stations[:] = self._station['data'] + + # LATITUDES + lat = netcdf.createVariable('latitude', np.float64, self._lat_dim, + zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + lat.units = 'degrees_north' + lat.axis = 'Y' + lat.long_name = 'latitude coordinate' + lat.standard_name = 'latitude' + if self._lat_bnds is not None: + lat.bounds = 'lat_bnds' + if self.size > 1: + lat.set_collective(True) + lat[:] = self._lat['data'] + + # LONGITUDES + lon = netcdf.createVariable('longitude', np.float64, self._lon_dim, + zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + lon.units = 'degrees_east' + lon.axis = 'X' + lon.long_name = 'longitude coordinate' + lon.standard_name = 'longitude' + if self._lon_bnds is not None: + lon.bounds = 'lon_bnds' + if self.size > 1: + lon.set_collective(True) + lon[:] = self._lon['data'] + +
+[docs]
+    def erase_flags(self):
+        """
+        Empty the data provider flags ('flag') and quality flags ('qa') for the current time selection.
+        """
+
+        first_time_idx = self.get_time_id(self.hours_start, first=True)
+        last_time_idx = self.get_time_id(self.hours_end, first=False)
+        t_len = last_time_idx - first_time_idx
+
+        self._qa['data'] = np.empty((len(self._lon['data']), t_len, 0))
+        self._flag['data'] = np.empty((len(self._lon['data']), t_len, 0))
+
+        return None
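+# Shape sketch for the reset above: the qa/flag arrays are laid out as
+# (station, time, n_codes), so erasing keeps the first two dimensions and
+# empties the codes axis (illustrative sizes).
+import numpy as np
+
+n_station, t_len = 5, 24
+qa_empty = np.empty((n_station, t_len, 0))                # no QA codes left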
+ + + def _get_coordinate_values(self, coordinate_info, coordinate_axis, bounds=False): + """ + Get the coordinate data of the current portion. + + Parameters + ---------- + coordinate_info : dict, list + Dictionary with the 'data' key with the coordinate variable values. and the attributes as other keys. + coordinate_axis : str + Name of the coordinate to extract. Accepted values: ['X']. + bounds : bool + Boolean variable to know if there are coordinate bounds. + Returns + ------- + values : dict + Dictionary with the portion of data corresponding to the rank. + """ + + if coordinate_info is None: + return None + + if not isinstance(coordinate_info, dict): + values = {'data': deepcopy(coordinate_info)} + else: + values = deepcopy(coordinate_info) + + coordinate_len = len(values['data'].shape) + if bounds: + coordinate_len -= 1 + + if coordinate_axis == 'X': + if coordinate_len == 1: + values['data'] = values['data'][self.read_axis_limits['x_min']:self.read_axis_limits['x_max']] + elif coordinate_len == 2: + values['data'] = values['data'][self.read_axis_limits['x_min']:self.read_axis_limits['x_max'], + self.read_axis_limits['t_min']:self.read_axis_limits['t_max']] + elif coordinate_len == 3: + values['data'] = values['data'][self.read_axis_limits['x_min']:self.read_axis_limits['x_max'], + self.read_axis_limits['t_min']:self.read_axis_limits['t_max'], :] + else: + raise NotImplementedError("The coordinate has wrong dimensions: {dim}".format( + dim=values['data'].shape)) + + return values + + def _read_variable(self, var_name): + """ + Read the corresponding variable data according to the current rank. + + Parameters + ---------- + var_name : str + Name of the variable to read. + + Returns + ------- + data: np.array + Portion of the variable data corresponding to the rank. + """ + + nc_var = self.netcdf.variables[var_name] + var_dims = nc_var.dimensions + + # Read data in 1 or 2 dimensions + if len(var_dims) < 2: + data = nc_var[self.read_axis_limits['x_min']:self.read_axis_limits['x_max']] + elif len(var_dims) == 2: + data = nc_var[self.read_axis_limits['x_min']:self.read_axis_limits['x_max'], + self.read_axis_limits['t_min']:self.read_axis_limits['t_max']] + elif len(var_dims) == 3: + data = nc_var[self.read_axis_limits['x_min']:self.read_axis_limits['x_max'], + self.read_axis_limits['t_min']:self.read_axis_limits['t_max'], + :] + else: + raise NotImplementedError('Error with {0}. Only can be read netCDF with 3 dimensions or less'.format( + var_name)) + + # Unmask array + data = self._unmask_array(data) + + return data + + def _create_variables(self, netcdf, chunking=False): + """ + Create the netCDF file variables. + + Parameters + ---------- + netcdf : Dataset + netcdf4-python open Dataset. + chunking : bool + Indicates if you want to chunk the output netCDF. + """ + + if self.variables is not None: + for i, (var_name, var_dict) in enumerate(self.variables.items()): + # Get data type + if 'dtype' in var_dict.keys(): + var_dtype = var_dict['dtype'] + if (var_dict['data'] is not None) and (var_dtype != var_dict['data'].dtype): + msg = "WARNING!!! " + msg += "Different data types for variable {0}. ".format(var_name) + msg += "Input dtype={0}. 
Data dtype={1}.".format(var_dtype, var_dict['data'].dtype) + warnings.warn(msg) + sys.stderr.flush() + try: + var_dict['data'] = var_dict['data'].astype(var_dtype) + except Exception as e: # TODO: Detect exception + raise e("It was not possible to cast the data to the input dtype.") + else: + var_dtype = var_dict['data'].dtype + if var_dtype is np.object: + raise TypeError("Data dtype is np.object. Define dtype explicitly as dictionary key 'dtype'") + + # Get dimensions when reading datasets + if 'dimensions' in var_dict.keys(): + var_dims = var_dict['dimensions'] + # Get dimensions when creating new datasets + else: + if len(var_dict['data'].shape) == 1: + # For data that depends only on station (e.g. station_code) + var_dims = self._var_dim + else: + # For data that is dependent on time and station (e.g. PM10) + var_dims = self._var_dim + ('time',) + + if var_dict['data'] is not None: + + # Ensure data is of type numpy array (to create NES) + if not isinstance(var_dict['data'], (np.ndarray, np.generic)): + try: + var_dict['data'] = np.array(var_dict['data']) + except AttributeError: + raise AttributeError("Data for variable {0} must be a numpy array.".format(var_name)) + + # Convert list of strings to chars for parallelization + if np.issubdtype(var_dtype, np.character): + var_dict['data_aux'] = self.str2char(var_dict['data']) + var_dims += ('strlen',) + var_dtype = 'S1' + + if self.info: + print("Rank {0:03d}: Writing {1} var ({2}/{3})".format(self.rank, var_name, i + 1, + len(self.variables))) + + if not chunking: + var = netcdf.createVariable(var_name, var_dtype, var_dims, + zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + else: + if self.master: + chunk_size = var_dict['data'].shape + else: + chunk_size = None + chunk_size = self.comm.bcast(chunk_size, root=0) + var = netcdf.createVariable(var_name, var_dtype, var_dims, zlib=self.zip_lvl > 0, + complevel=self.zip_lvl, chunksizes=chunk_size) + + if self.info: + print("Rank {0:03d}: Var {1} created ({2}/{3})".format( + self.rank, var_name, i + 1, len(self.variables))) + if self.size > 1: + var.set_collective(True) + if self.info: + print("Rank {0:03d}: Var {1} collective ({2}/{3})".format( + self.rank, var_name, i + 1, len(self.variables))) + + for att_name, att_value in var_dict.items(): + if att_name == 'data': + if self.info: + print("Rank {0:03d}: Filling {1})".format(self.rank, var_name)) + if 'data_aux' in var_dict.keys(): + att_value = var_dict['data_aux'] + if len(att_value.shape) == 1: + try: + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max']] = att_value + except IndexError: + raise IndexError("Different shapes. out_shape={0}, data_shp={1}".format( + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max']].shape, + att_value.shape)) + except ValueError: + raise ValueError("Axis limits cannot be accessed. out_shape={0}, data_shp={1}".format( + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max']].shape, + att_value.shape)) + elif len(att_value.shape) == 2: + if 'strlen' in var_dims: + try: + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max'], :] = att_value + except IndexError: + raise IndexError("Different shapes. out_shape={0}, data_shp={1}".format( + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max'], :].shape, + att_value.shape)) + except ValueError: + raise ValueError("Axis limits cannot be accessed. 
out_shape={0}, data_shp={1}".format( + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max'], :].shape, + att_value.shape)) + else: + try: + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max'], + self.write_axis_limits['t_min']:self.write_axis_limits['t_max']] = att_value + except IndexError: + raise IndexError("Different shapes. out_shape={0}, data_shp={1}".format( + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max'], + self.write_axis_limits['t_min']:self.write_axis_limits['t_max']].shape, + att_value.shape)) + except ValueError: + raise ValueError("Axis limits cannot be accessed. out_shape={0}, data_shp={1}".format( + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max'], + self.write_axis_limits['t_min']:self.write_axis_limits['t_max']].shape, + att_value.shape)) + elif len(att_value.shape) == 3: + try: + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max'], + self.write_axis_limits['t_min']:self.write_axis_limits['t_max'], + :] = att_value + except IndexError: + raise IndexError("Different shapes. out_shape={0}, data_shp={1}".format( + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max'], + self.write_axis_limits['t_min']:self.write_axis_limits['t_max'], + :].shape, + att_value.shape)) + except ValueError: + raise ValueError("Axis limits cannot be accessed. out_shape={0}, data_shp={1}".format( + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max'], + self.write_axis_limits['t_min']:self.write_axis_limits['t_max'], + :].shape, + att_value.shape)) + + if self.info: + print("Rank {0:03d}: Var {1} data ({2}/{3})".format(self.rank, var_name, i + 1, + len(self.variables))) + + elif att_name not in ['chunk_size', 'var_dims', 'dimensions', 'dtype', 'data_aux']: + var.setncattr(att_name, att_value) + + if 'data_aux' in var_dict.keys(): + del var_dict['data_aux'] + + self._set_var_crs(var) + if self.info: + print("Rank {0:03d}: Var {1} completed ({2}/{3})".format(self.rank, var_name, i + 1, + len(self.variables))) + + return None + + def _gather_data(self, data_to_gather): + """ + Gather all the variable data into the MPI rank 0 to perform a serial write. + + Returns + ------- + data_to_gather: dict + Variables to gather. + """ + + data_list = deepcopy(data_to_gather) + for var_name, var_info in data_list.items(): + try: + # noinspection PyArgumentList + data_aux = self.comm.gather(data_list[var_name]['data'], root=0) + if self.rank == 0: + shp_len = len(data_list[var_name]['data'].shape) + # concatenate over station + if self.parallel_method == 'X': + if shp_len == 1: + # dimensions = (station) + axis = 0 + elif shp_len == 2: + # dimensions = (station, strlen) or + # dimensions = (station, time) + axis = 0 + else: + msg = 'The points NetCDF must have ' + msg += 'surface values (without levels).' + raise NotImplementedError(msg) + elif self.parallel_method == 'T': + # concatenate over time + if shp_len == 1: + # dimensions = (station) + axis = None + continue + elif shp_len == 2: + if 'strlen' in var_info['dimensions']: + # dimensions = (station, strlen) + axis = None + continue + else: + # dimensions = (station, time) + axis = 1 + else: + msg = 'The points NetCDF must have ' + msg += 'surface values (without levels).' + raise NotImplementedError(msg) + else: + raise NotImplementedError( + "Parallel method '{meth}' is not implemented. 
Use one of these: {accept}".format( + meth=self.parallel_method, accept=['X', 'T'])) + data_list[var_name]['data'] = np.concatenate(data_aux, axis=axis) + except Exception as e: + print("**ERROR** an error has occurred while gathering the '{0}' variable.\n".format(var_name)) + sys.stderr.write("**ERROR** an error has occurred while gathering the '{0}' variable.\n".format(var_name)) + print(e) + sys.stderr.write(str(e)) + # print(e, file=sys.stderr) + sys.stderr.flush() + self.comm.Abort(1) + raise e + + return data_list + + def _create_metadata(self, netcdf): + """ + Create metadata variables. + + Parameters + ---------- + netcdf : Dataset + NetCDF object. + """ + + # N FLAG CODES + flag = netcdf.createVariable('flag', np.int64, ('station', 'time', 'N_flag_codes',), + zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + flag.units = '' + flag.axis = '' + flag.long_name = '' + flag.standard_name = 'flag' + if self.size > 1: + flag.set_collective(True) + flag[:] = self._flag['data'] + + # N QA CODES + qa = netcdf.createVariable('qa', np.int64, ('station', 'time', 'N_qa_codes',), + zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + qa.units = '' + qa.axis = '' + qa.long_name = '' + qa.standard_name = 'N_qa_codes' + if self.size > 1: + qa.set_collective(True) + qa[:] = self._qa['data'] + + return None + +
+[docs] + def to_netcdf(self, path, compression_level=0, serial=False, info=False, chunking=False): + """ + Write the netCDF output file. + + Parameters + ---------- + path : str + Path to the output netCDF file. + compression_level : int + Level of compression (0 to 9) Default: 0 (no compression). + serial : bool + Indicates if you want to write in serial or not. Default: False. + info : bool + Indicates if you want to print the information of each writing step by stdout Default: False. + chunking : bool + Indicates if you want a chunked netCDF output. Only available with non serial writes. Default: False. + """ + + if (not serial) and (self.size > 1): + msg = 'WARNING!!! ' + msg += 'GHOST datasets cannot be written in parallel yet. ' + msg += 'Changing to serial mode.' + warnings.warn(msg) + sys.stderr.flush() + + super(PointsNesGHOST, self).to_netcdf(path, compression_level=compression_level, + serial=True, info=info, chunking=chunking) + + return None
+[docs]
+    def to_points(self):
+        """
+        Transform a PointsNesGHOST into a PointsNes object.
+
+        Returns
+        ----------
+        points_nes : nes.Nes
+            Points Nes Object (without GHOST metadata variables).
+        """
+
+        points_nes = PointsNes(comm=self.comm,
+                               info=self.info,
+                               balanced=self.balanced,
+                               parallel_method=self.parallel_method,
+                               avoid_first_hours=self.hours_start,
+                               avoid_last_hours=self.hours_end,
+                               first_level=self.first_level,
+                               last_level=self.last_level,
+                               create_nes=True,
+                               lat=self.lat['data'],
+                               lon=self.lon['data'],
+                               times=self.time
+                               )
+
+        # In GHOST files prior to 1.3.3 the version attribute is called 'data_version'; from 1.3.3 on it is 'version'.
+        if 'version' in self.global_attrs:
+            GHOST_version = self.global_attrs['version']
+        elif 'data_version' in self.global_attrs:
+            GHOST_version = self.global_attrs['data_version']
+        else:
+            raise KeyError("The GHOST version attribute ('version' or 'data_version') was not found.")
+        metadata_variables = self.get_standard_metadata(GHOST_version)
+        self.free_vars(metadata_variables)
+        self.free_vars('station')
+        points_nes.variables = deepcopy(self.variables)
+
+        return points_nes
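+# Minimal sketch of the version lookup above, assuming a plain dictionary of
+# global attributes; files before GHOST 1.3.3 store 'data_version' instead of
+# 'version'.
+global_attrs = {'data_version': '1.3.2'}                  # hypothetical attributes
+ghost_version = global_attrs.get('version', global_attrs.get('data_version'))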
+[docs] + def get_standard_metadata(self, GHOST_version): + """ + Get all possible GHOST variables for each version. + + Parameters + ---------- + GHOST_version : str + Version of GHOST file. + + Returns + ---------- + metadata_variables[GHOST_version] : list + List of metadata variables for a certain GHOST version + """ + + # This metadata variables are + metadata_variables = {'1.4': ['GHOST_version', 'station_reference', 'station_timezone', 'latitude', 'longitude', + 'altitude', 'sampling_height', 'measurement_altitude', 'ellipsoid', + 'horizontal_datum', 'vertical_datum', 'projection', 'distance_to_building', + 'distance_to_kerb', 'distance_to_junction', 'distance_to_source', 'street_width', + 'street_type', 'daytime_traffic_speed', 'daily_passing_vehicles', 'data_level', + 'climatology', 'station_name', 'city', 'country', + 'administrative_country_division_1', 'administrative_country_division_2', + 'population', 'representative_radius', 'network', 'associated_networks', + 'area_classification', 'station_classification', 'main_emission_source', + 'land_use', 'terrain', 'measurement_scale', + 'ESDAC_Iwahashi_landform_classification', + 'ESDAC_modal_Iwahashi_landform_classification_5km', + 'ESDAC_modal_Iwahashi_landform_classification_25km', + 'ESDAC_Meybeck_landform_classification', + 'ESDAC_modal_Meybeck_landform_classification_5km', + 'ESDAC_modal_Meybeck_landform_classification_25km', + 'GHSL_settlement_model_classification', + 'GHSL_modal_settlement_model_classification_5km', + 'GHSL_modal_settlement_model_classification_25km', + 'Joly-Peuch_classification_code', 'Koppen-Geiger_classification', + 'Koppen-Geiger_modal_classification_5km', + 'Koppen-Geiger_modal_classification_25km', + 'MODIS_MCD12C1_v6_IGBP_land_use', 'MODIS_MCD12C1_v6_modal_IGBP_land_use_5km', + 'MODIS_MCD12C1_v6_modal_IGBP_land_use_25km', 'MODIS_MCD12C1_v6_UMD_land_use', + 'MODIS_MCD12C1_v6_modal_UMD_land_use_5km', + 'MODIS_MCD12C1_v6_modal_UMD_land_use_25km', 'MODIS_MCD12C1_v6_LAI', + 'MODIS_MCD12C1_v6_modal_LAI_5km', 'MODIS_MCD12C1_v6_modal_LAI_25km', + 'WMO_region', 'WWF_TEOW_terrestrial_ecoregion', 'WWF_TEOW_biogeographical_realm', + 'WWF_TEOW_biome', 'UMBC_anthrome_classification', + 'UMBC_modal_anthrome_classification_5km', + 'UMBC_modal_anthrome_classification_25km', + 'EDGAR_v4.3.2_annual_average_BC_emissions', + 'EDGAR_v4.3.2_annual_average_CO_emissions', + 'EDGAR_v4.3.2_annual_average_NH3_emissions', + 'EDGAR_v4.3.2_annual_average_NMVOC_emissions', + 'EDGAR_v4.3.2_annual_average_NOx_emissions', + 'EDGAR_v4.3.2_annual_average_OC_emissions', + 'EDGAR_v4.3.2_annual_average_PM10_emissions', + 'EDGAR_v4.3.2_annual_average_biogenic_PM2.5_emissions', + 'EDGAR_v4.3.2_annual_average_fossilfuel_PM2.5_emissions', + 'EDGAR_v4.3.2_annual_average_SO2_emissions', 'ASTER_v3_altitude', + 'ETOPO1_altitude', 'ETOPO1_max_altitude_difference_5km', + 'GHSL_built_up_area_density', 'GHSL_average_built_up_area_density_5km', + 'GHSL_average_built_up_area_density_25km', 'GHSL_max_built_up_area_density_5km', + 'GHSL_max_built_up_area_density_25km', 'GHSL_population_density', + 'GHSL_average_population_density_5km', 'GHSL_average_population_density_25km', + 'GHSL_max_population_density_5km', 'GHSL_max_population_density_25km', + 'GPW_population_density', 'GPW_average_population_density_5km', + 'GPW_average_population_density_25km', 'GPW_max_population_density_5km', + 'GPW_max_population_density_25km', + 'NOAA-DMSP-OLS_v4_nighttime_stable_lights', + 'NOAA-DMSP-OLS_v4_average_nighttime_stable_lights_5km', + 
'NOAA-DMSP-OLS_v4_average_nighttime_stable_lights_25km', + 'NOAA-DMSP-OLS_v4_max_nighttime_stable_lights_5km', + 'NOAA-DMSP-OLS_v4_max_nighttime_stable_lights_25km', + 'OMI_level3_column_annual_average_NO2', + 'OMI_level3_column_cloud_screened_annual_average_NO2', + 'OMI_level3_tropospheric_column_annual_average_NO2', + 'OMI_level3_tropospheric_column_cloud_screened_annual_average_NO2', + 'GSFC_coastline_proximity', 'primary_sampling_type', + 'primary_sampling_instrument_name', + 'primary_sampling_instrument_documented_flow_rate', + 'primary_sampling_instrument_reported_flow_rate', + 'primary_sampling_process_details', 'primary_sampling_instrument_manual_name', + 'primary_sampling_further_details', 'sample_preparation_types', + 'sample_preparation_techniques', 'sample_preparation_process_details', + 'sample_preparation_further_details', 'measurement_methodology', + 'measuring_instrument_name', 'measuring_instrument_sampling_type', + 'measuring_instrument_documented_flow_rate', + 'measuring_instrument_reported_flow_rate', 'measuring_instrument_process_details', + 'measuring_instrument_process_details', 'measuring_instrument_manual_name', + 'measuring_instrument_further_details', 'measuring_instrument_reported_units', + 'measuring_instrument_reported_lower_limit_of_detection', + 'measuring_instrument_documented_lower_limit_of_detection', + 'measuring_instrument_reported_upper_limit_of_detection', + 'measuring_instrument_documented_upper_limit_of_detection', + 'measuring_instrument_reported_uncertainty', + 'measuring_instrument_documented_uncertainty', + 'measuring_instrument_reported_accuracy', + 'measuring_instrument_documented_accuracy', + 'measuring_instrument_reported_precision', + 'measuring_instrument_documented_precision', + 'measuring_instrument_reported_zero_drift', + 'measuring_instrument_documented_zero_drift', + 'measuring_instrument_reported_span_drift', + 'measuring_instrument_documented_span_drift', + 'measuring_instrument_reported_zonal_drift', + 'measuring_instrument_documented_zonal_drift', + 'measuring_instrument_reported_measurement_resolution', + 'measuring_instrument_documented_measurement_resolution', + 'measuring_instrument_reported_absorption_cross_section', + 'measuring_instrument_documented_absorption_cross_section', + 'measuring_instrument_inlet_information', + 'measuring_instrument_calibration_scale', + 'network_provided_volume_standard_temperature', + 'network_provided_volume_standard_pressure', 'retrieval_algorithm', + 'principal_investigator_name', 'principal_investigator_institution', + 'principal_investigator_email_address', 'contact_name', + 'contact_institution', 'contact_email_address', 'meta_update_stamp', + 'data_download_stamp', 'data_revision_stamp', 'network_sampling_details', + 'network_uncertainty_details', 'network_maintenance_details', + 'network_qa_details', 'network_miscellaneous_details', 'data_licence', + 'process_warnings', 'temporal_resolution', + 'reported_lower_limit_of_detection_per_measurement', + 'reported_upper_limit_of_detection_per_measurement', + 'reported_uncertainty_per_measurement', 'derived_uncertainty_per_measurement', + 'day_night_code', 'weekday_weekend_code', 'season_code', + 'hourly_native_representativity_percent', 'hourly_native_max_gap_percent', + 'daily_native_representativity_percent', 'daily_representativity_percent', + 'daily_native_max_gap_percent', 'daily_max_gap_percent', + 'monthly_native_representativity_percent', 'monthly_representativity_percent', + 'monthly_native_max_gap_percent', 'monthly_max_gap_percent', 
+ 'annual_native_representativity_percent', 'annual_native_max_gap_percent', + 'all_representativity_percent', 'all_max_gap_percent'], + } + + return metadata_variables[GHOST_version]
+ + +
+[docs] + def add_variables_to_shapefile(self, var_list, idx_lev=0, idx_time=0): + """ + Add variables data to shapefile. + + Parameters + ---------- + var_list : list, str + List (or single string) of the variables to be loaded and saved in the shapefile. + idx_lev : int + Index of vertical level for which the data will be saved in the shapefile. + idx_time : int + Index of time for which the data will be saved in the shapefile. + """ + + if idx_lev != 0: + msg = 'Error: Points dataset has no level (Level: {0}).'.format(idx_lev) + raise ValueError(msg) + + for var_name in var_list: + # station as dimension + if len(self.variables[var_name]['dimensions']) == 1: + self.shapefile[var_name] = self.variables[var_name]['data'][:].ravel() + # station and time as dimensions + else: + self.shapefile[var_name] = self.variables[var_name]['data'][:, idx_time].ravel() + + return None
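A minimal usage sketch (file and variable names here are hypothetical; ``open_netcdf`` is assumed to be the library's top-level reader)::

    from nes import open_netcdf

    # Open a GHOST points file, distributing stations over the X axis.
    ghost = open_netcdf(path='sconco3_202301.nc', parallel_method='X')
    ghost.load(['sconco3', 'station_name'])

    # Build the station shapefile, then attach surface data at the first time step.
    ghost.create_shapefile()
    ghost.add_variables_to_shapefile(['sconco3', 'station_name'], idx_lev=0, idx_time=0)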
+ + + @staticmethod + def _get_axis_index_(axis): + if axis == 'T': + value = 1 + elif axis == 'X': + value = 0 + else: + raise ValueError("Unknown axis: {0}".format(axis)) + return value + + @staticmethod + def _set_var_crs(var): + """ + Set the grid_mapping attribute (no-op for point datasets). + + Parameters + ---------- + var : Variable + netCDF4-python variable object. + """ + return None
+ +
+ +
+
+
+ +
+ +
+


+
+
+
+
+
+ + + + \ No newline at end of file diff --git a/docs/build/html/_modules/nes/nc_projections/points_nes_providentia.html b/docs/build/html/_modules/nes/nc_projections/points_nes_providentia.html new file mode 100644 index 0000000000000000000000000000000000000000..f6eef9a77dc6d7a669b67f9c8bc9ea61d83448bd --- /dev/null +++ b/docs/build/html/_modules/nes/nc_projections/points_nes_providentia.html @@ -0,0 +1,768 @@ + + + + + + nes.nc_projections.points_nes_providentia — NES 1.1.3 documentation + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
+
+
+
+
+ +

Source code for nes.nc_projections.points_nes_providentia

+#!/usr/bin/env python
+
+import sys
+import warnings
+import numpy as np
+from copy import deepcopy
+from netCDF4 import stringtochar
+from .points_nes import PointsNes
+
+
+
+[docs] +class PointsNesProvidentia(PointsNes): + """ + + Attributes + ---------- + _model_centre_lon : dict + Model centre longitudes dictionary with the complete 'data' key for all the values and the rest of the + attributes. + _model_centre_lat : dict + Model centre latitudes dictionary with the complete 'data' key for all the values and the rest of the + attributes. + _grid_edge_lon : dict + Grid edge longitudes dictionary with the complete 'data' key for all the values and the rest of the + attributes. + _grid_edge_lat : dict + Grid edge latitudes dictionary with the complete 'data' key for all the values and the rest of the + attributes. + model_centre_lon : dict + Model centre longitudes dictionary with the portion of 'data' corresponding to the rank values. + model_centre_lat : dict + Model centre latitudes dictionary with the portion of 'data' corresponding to the rank values. + grid_edge_lon : dict + Grid edge longitudes dictionary with the portion of 'data' corresponding to the rank values. + grid_edge_lat : dict + Grid edge latitudes dictionary with the portion of 'data' corresponding to the rank values. + """ + def __init__(self, comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='X', + avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, + balanced=False, times=None, model_centre_lon=None, model_centre_lat=None, grid_edge_lon=None, grid_edge_lat=None, + **kwargs): + """ + Initialize the PointsNesProvidentia class + + Parameters + ---------- + comm: MPI.COMM + MPI Communicator. + path: str + Path to the NetCDF to initialize the object. + info: bool + Indicates if you want to get reading/writing info. + dataset: Dataset + NetCDF4-python Dataset to initialize the class. + xarray: bool: + (Not working) Indicates if you want to use xarray as default. + parallel_method : str + Indicates the parallelization method that you want. Default: 'X'. + Accepted values: ['X']. + avoid_first_hours : int + Number of hours to remove from first time steps. + avoid_last_hours : int + Number of hours to remove from last time steps. + first_level : int + Index of the first level to use. + last_level : int, None + Index of the last level to use. None if it is the last. + create_nes : bool + Indicates if you want to create the object from scratch (True) or through an existing file. + balanced : bool + Indicates if you want a balanced parallelization or not. + Balanced dataset cannot be written in chunking mode. + times : list, None + List of times to substitute the current ones while creation. + model_centre_lon : dict + Model centre longitudes dictionary with the portion of 'data' corresponding to the rank values. + model_centre_lat : dict + Model centre latitudes dictionary with the portion of 'data' corresponding to the rank values. + grid_edge_lon : dict + Grid edge longitudes dictionary with the portion of 'data' corresponding to the rank values. + grid_edge_lat : dict + Grid edge latitudes dictionary with the portion of 'data' corresponding to the rank values. 
+ """ + + super(PointsNesProvidentia, self).__init__(comm=comm, path=path, info=info, dataset=dataset, + xarray=xarray, parallel_method=parallel_method, + avoid_first_hours=avoid_first_hours, + avoid_last_hours=avoid_last_hours, + first_level=first_level, last_level=last_level, + create_nes=create_nes, times=times, **kwargs) + + if create_nes: + # Complete dimensions + self._model_centre_lon = model_centre_lon + self._model_centre_lat = model_centre_lat + self._grid_edge_lon = grid_edge_lon + self._grid_edge_lat = grid_edge_lat + else: + # Complete dimensions + self._model_centre_lon = self._get_coordinate_dimension(['model_centre_longitude']) + self._model_centre_lat = self._get_coordinate_dimension(['model_centre_latitude']) + self._grid_edge_lon = self._get_coordinate_dimension(['grid_edge_longitude']) + self._grid_edge_lat = self._get_coordinate_dimension(['grid_edge_latitude']) + + # Dimensions screening + self.model_centre_lon = self._get_coordinate_values(self._model_centre_lon, '') + self.model_centre_lat = self._get_coordinate_values(self._model_centre_lat, '') + self.grid_edge_lon = self._get_coordinate_values(self._grid_edge_lon, '') + self.grid_edge_lat = self._get_coordinate_values(self._grid_edge_lat, '') + +
+[docs] + @staticmethod + def new(comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='X', + avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, + create_nes=False, balanced=False, times=None, + model_centre_lon=None, model_centre_lat=None, grid_edge_lon=None, grid_edge_lat=None, + **kwargs): + """ + Initialize the PointsNesProvidentia class. + + Parameters + ---------- + comm: MPI.COMM + MPI Communicator. + path: str + Path to the NetCDF to initialize the object. + info: bool + Indicates if you want to get reading/writing info. + dataset: Dataset + NetCDF4-python Dataset to initialize the class. + xarray: bool: + (Not working) Indicates if you want to use xarray as default. + parallel_method : str + Indicates the parallelization method that you want. Default: 'X'. + Accepted values: ['X']. + avoid_first_hours : int + Number of hours to remove from first time steps. + avoid_last_hours : int + Number of hours to remove from last time steps. + first_level : int + Index of the first level to use + last_level : int, None + Index of the last level to use. None if it is the last. + balanced : bool + Indicates if you want a balanced parallelization or not. + Balanced dataset cannot be written in chunking mode. + times : list, None + List of times to substitute the current ones while creation. + create_nes : bool + Indicates if you want to create the object from scratch (True) or through an existing file. + model_centre_lon : dict + Model centre longitudes dictionary with the portion of 'data' corresponding to the rank values. + model_centre_lat : dict + Model centre latitudes dictionary with the portion of 'data' corresponding to the rank values. + grid_edge_lon : dict + Grid edge longitudes dictionary with the portion of 'data' corresponding to the rank values. + grid_edge_lat : dict + Grid edge latitudes dictionary with the portion of 'data' corresponding to the rank values. + """ + + new = PointsNesProvidentia(comm=comm, path=path, info=info, dataset=dataset, xarray=xarray, + parallel_method=parallel_method, avoid_first_hours=avoid_first_hours, + avoid_last_hours=avoid_last_hours, first_level=first_level, last_level=last_level, + create_nes=create_nes, balanced=balanced, times=times, + model_centre_lon=model_centre_lon, model_centre_lat=model_centre_lat, + grid_edge_lon=grid_edge_lon, grid_edge_lat=grid_edge_lat, **kwargs) + + return new
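For context, a hedged sketch of opening an existing Providentia file (hypothetical path). With ``create_nes=False`` the model centre and grid edge coordinates are read from the file itself, so the ``model_centre_*`` and ``grid_edge_*`` arguments only matter when building the object from scratch::

    from nes.nc_projections.points_nes_providentia import PointsNesProvidentia

    obj = PointsNesProvidentia.new(path='providentia_points.nc', parallel_method='X', info=True)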
+ + + def _create_dimensions(self, netcdf): + """ + Create 'grid_edge', 'model_latitude' and 'model_longitude' dimensions and the super dimensions + 'time', 'time_nv', 'station', and 'strlen'. + + Parameters + ---------- + netcdf : Dataset + NetCDF object. + """ + + super(PointsNesProvidentia, self)._create_dimensions(netcdf) + + # Create grid_edge, model_latitude and model_longitude dimensions + netcdf.createDimension('grid_edge', len(self._grid_edge_lon['data'])) + netcdf.createDimension('model_latitude', self._model_centre_lon['data'].shape[0]) + netcdf.createDimension('model_longitude', self._model_centre_lon['data'].shape[1]) + + return None + + def _create_dimension_variables(self, netcdf): + """ + Create the 'model_centre_lon', model_centre_lat', 'grid_edge_lon' and 'grid_edge_lat' variables. + + Parameters + ---------- + netcdf : Dataset + NetCDF object. + """ + + super(PointsNesProvidentia, self)._create_dimension_variables(netcdf) + + # MODEL CENTRE LONGITUDES + model_centre_lon = netcdf.createVariable('model_centre_longitude', 'f8', + ('model_latitude', 'model_longitude',), + zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + model_centre_lon.units = 'degrees_east' + model_centre_lon.axis = 'X' + model_centre_lon.long_name = 'model centre longitude' + model_centre_lon.standard_name = 'model centre longitude' + if self.size > 1: + model_centre_lon.set_collective(True) + msg = '2D meshed grid centre longitudes with ' + msg += '{} longitudes in {} bands of latitude'.format(self._model_centre_lon['data'].shape[1], + self._model_centre_lat['data'].shape[0]) + model_centre_lon.description = msg + model_centre_lon[:] = self._model_centre_lon['data'] + + # MODEL CENTRE LATITUDES + model_centre_lat = netcdf.createVariable('model_centre_latitude', 'f8', + ('model_latitude','model_longitude',), + zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + model_centre_lat.units = 'degrees_north' + model_centre_lat.axis = 'Y' + model_centre_lat.long_name = 'model centre latitude' + model_centre_lat.standard_name = 'model centre latitude' + if self.size > 1: + model_centre_lat.set_collective(True) + msg = '2D meshed grid centre longitudes with ' + msg += '{} longitudes in {} bands of latitude'.format(self._model_centre_lon['data'].shape[1], + self._model_centre_lat['data'].shape[0]) + model_centre_lat[:] = self._model_centre_lat['data'] + + # GRID EDGE DOMAIN LONGITUDES + grid_edge_lon = netcdf.createVariable('grid_edge_longitude', 'f8', ('grid_edge')) + grid_edge_lon.units = 'degrees_east' + grid_edge_lon.axis = 'X' + grid_edge_lon.long_name = 'grid edge longitude' + grid_edge_lon.standard_name = 'grid edge longitude' + if self.size > 1: + grid_edge_lon.set_collective(True) + msg = 'Longitude coordinate along edge of grid domain ' + msg += '(going clockwise around grid boundary from bottom-left corner).' + grid_edge_lon.description = msg + grid_edge_lon[:] = self._grid_edge_lon['data'] + + # GRID EDGE DOMAIN LATITUDES + grid_edge_lat = netcdf.createVariable('grid_edge_latitude', 'f8', ('grid_edge')) + grid_edge_lat.units = 'degrees_north' + grid_edge_lat.axis = 'Y' + grid_edge_lat.long_name = 'grid edge latitude' + grid_edge_lat.standard_name = 'grid edge latitude' + if self.size > 1: + grid_edge_lat.set_collective(True) + msg = 'Latitude coordinate along edge of grid domain ' + msg += '(going clockwise around grid boundary from bottom-left corner).' 
+ grid_edge_lat.description = msg + grid_edge_lat[:] = self._grid_edge_lat['data'] + + self.free_vars(('model_centre_longitude', 'model_centre_latitude', 'grid_edge_longitude', 'grid_edge_latitude')) + + def _get_coordinate_values(self, coordinate_info, coordinate_axis, bounds=False): + """ + Get the coordinate data of the current portion. + + Parameters + ---------- + coordinate_info : dict, list + Dictionary with the 'data' key with the coordinate variable values. and the attributes as other keys. + coordinate_axis : str + Name of the coordinate to extract. Accepted values: ['X']. + bounds : bool + Boolean variable to know if there are coordinate bounds. + Returns + ------- + values : dict + Dictionary with the portion of data corresponding to the rank. + """ + + if coordinate_info is None: + return None + + if not isinstance(coordinate_info, dict): + values = {'data': deepcopy(coordinate_info)} + else: + values = deepcopy(coordinate_info) + + coordinate_len = len(values['data'].shape) + if bounds: + coordinate_len -= 1 + + if coordinate_axis == 'X': + if coordinate_len == 1: + values['data'] = values['data'][self.read_axis_limits['x_min']:self.read_axis_limits['x_max']] + elif coordinate_len == 2: + values['data'] = values['data'][self.read_axis_limits['x_min']:self.read_axis_limits['x_max'], + self.read_axis_limits['t_min']:self.read_axis_limits['t_max']] + elif coordinate_len == 3: + values['data'] = values['data'][self.read_axis_limits['x_min']:self.read_axis_limits['x_max'], + self.read_axis_limits['t_min']:self.read_axis_limits['t_max'], :] + else: + raise NotImplementedError("The coordinate has wrong dimensions: {dim}".format( + dim=values['data'].shape)) + elif coordinate_axis == '': + # pass for 'model_centre_lon', 'model_centre_lat', 'grid_edge_lon' and 'grid_edge_lat' + pass + + return values + + def _read_variable(self, var_name): + """ + Read the corresponding variable data according to the current rank. + + Parameters + ---------- + var_name : str + Name of the variable to read. + + Returns + ------- + data: np.array + Portion of the variable data corresponding to the rank. + """ + + nc_var = self.netcdf.variables[var_name] + var_dims = nc_var.dimensions + + # Read data in 1, 2 or 3 dimensions + if len(var_dims) < 2: + data = nc_var[self.read_axis_limits['x_min']:self.read_axis_limits['x_max']] + elif len(var_dims) == 2: + data = nc_var[self.read_axis_limits['x_min']:self.read_axis_limits['x_max'], + self.read_axis_limits['t_min']:self.read_axis_limits['t_max']] + elif len(var_dims) == 3: + data = nc_var[self.read_axis_limits['x_min']:self.read_axis_limits['x_max'], + self.read_axis_limits['t_min']:self.read_axis_limits['t_max'], + :] + else: + raise NotImplementedError('Error with {0}. Only can be read netCDF with 3 dimensions or less'.format( + var_name)) + + # Unmask array + data = self._unmask_array(data) + + return data + + def _create_variables(self, netcdf, chunking=False): + """ + Create the netCDF file variables. + + Parameters + ---------- + netcdf : Dataset + netcdf4-python open Dataset. + chunking : bool + Indicates if you want to chunk the output netCDF. + """ + + if self.variables is not None: + for i, (var_name, var_dict) in enumerate(self.variables.items()): + # Get data type + if 'dtype' in var_dict.keys(): + var_dtype = var_dict['dtype'] + if (var_dict['data'] is not None) and (var_dtype != var_dict['data'].dtype): + msg = "WARNING!!! " + msg += "Different data types for variable {0}. ".format(var_name) + msg += "Input dtype={0}. 
Data dtype={1}.".format(var_dtype, + var_dict['data'].dtype) + warnings.warn(msg) + sys.stderr.flush() + try: + var_dict['data'] = var_dict['data'].astype(var_dtype) + except Exception as e: # TODO: Detect exception + raise e("It was not possible to cast the data to the input dtype.") + else: + var_dtype = var_dict['data'].dtype + if var_dtype is np.object: + raise TypeError("Data dtype is np.object. Define dtype explicitly as dictionary key 'dtype'") + + # Get dimensions when reading datasets + if 'dimensions' in var_dict.keys(): + var_dims = var_dict['dimensions'] + # Get dimensions when creating new datasets + else: + if len(var_dict['data'].shape) == 1: + # For data that depends only on station (e.g. station_code) + var_dims = self._var_dim + else: + # For data that is dependent on time and station (e.g. PM10) + var_dims = self._var_dim + ('time',) + + if var_dict['data'] is not None: + + # Ensure data is of type numpy array (to create NES) + if not isinstance(var_dict['data'], (np.ndarray, np.generic)): + try: + var_dict['data'] = np.array(var_dict['data']) + except AttributeError: + raise AttributeError("Data for variable {0} must be a numpy array.".format(var_name)) + + # Convert list of strings to chars for parallelization + if np.issubdtype(var_dtype, np.character): + var_dict['data_aux'] = self.str2char(var_dict['data']) + var_dims += ('strlen',) + var_dtype = 'S1' + + if self.info: + print("Rank {0:03d}: Writing {1} var ({2}/{3})".format(self.rank, var_name, i + 1, + len(self.variables))) + + if not chunking: + var = netcdf.createVariable(var_name, var_dtype, var_dims, + zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + else: + if self.master: + chunk_size = var_dict['data'].shape + else: + chunk_size = None + chunk_size = self.comm.bcast(chunk_size, root=0) + var = netcdf.createVariable(var_name, var_dtype, var_dims, zlib=self.zip_lvl > 0, + complevel=self.zip_lvl, chunksizes=chunk_size) + + if self.info: + print("Rank {0:03d}: Var {1} created ({2}/{3})".format( + self.rank, var_name, i + 1, len(self.variables))) + if self.size > 1: + var.set_collective(True) + if self.info: + print("Rank {0:03d}: Var {1} collective ({2}/{3})".format( + self.rank, var_name, i + 1, len(self.variables))) + + for att_name, att_value in var_dict.items(): + if att_name == 'data': + if self.info: + print("Rank {0:03d}: Filling {1})".format(self.rank, var_name)) + if 'data_aux' in var_dict.keys(): + att_value = var_dict['data_aux'] + if len(att_value.shape) == 1: + try: + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max']] = att_value + except IndexError: + raise IndexError("Different shapes. out_shape={0}, data_shp={1}".format( + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max']].shape, + att_value.shape)) + except ValueError: + raise ValueError("Axis limits cannot be accessed. out_shape={0}, data_shp={1}".format( + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max']].shape, + att_value.shape)) + elif len(att_value.shape) == 2: + if 'strlen' in var_dims: + try: + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max'], :] = att_value + except IndexError: + raise IndexError("Different shapes. out_shape={0}, data_shp={1}".format( + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max'], :].shape, + att_value.shape)) + except ValueError: + raise ValueError("Axis limits cannot be accessed. 
out_shape={0}, data_shp={1}".format( + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max'], :].shape, + att_value.shape)) + else: + try: + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max'], + self.write_axis_limits['t_min']:self.write_axis_limits['t_max']] = att_value + except IndexError: + raise IndexError("Different shapes. out_shape={0}, data_shp={1}".format( + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max'], + self.write_axis_limits['t_min']:self.write_axis_limits['t_max']].shape, + att_value.shape)) + except ValueError: + raise ValueError("Axis limits cannot be accessed. out_shape={0}, data_shp={1}".format( + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max'], + self.write_axis_limits['t_min']:self.write_axis_limits['t_max']].shape, + att_value.shape)) + elif len(att_value.shape) == 3: + try: + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max'], + self.write_axis_limits['t_min']:self.write_axis_limits['t_max'], + :] = att_value + except IndexError: + raise IndexError("Different shapes. out_shape={0}, data_shp={1}".format( + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max'], + self.write_axis_limits['t_min']:self.write_axis_limits['t_max'], + :].shape, + att_value.shape)) + except ValueError: + raise ValueError("Axis limits cannot be accessed. out_shape={0}, data_shp={1}".format( + var[self.write_axis_limits['x_min']:self.write_axis_limits['x_max'], + self.write_axis_limits['t_min']:self.write_axis_limits['t_max'], + :].shape, + att_value.shape)) + + if self.info: + print("Rank {0:03d}: Var {1} data ({2}/{3})".format(self.rank, var_name, i + 1, + len(self.variables))) + elif att_name not in ['chunk_size', 'var_dims', 'dimensions', 'dtype', 'data_aux']: + var.setncattr(att_name, att_value) + + if 'data_aux' in var_dict.keys(): + del var_dict['data_aux'] + + self._set_var_crs(var) + if self.info: + print("Rank {0:03d}: Var {1} completed ({2}/{3})".format(self.rank, var_name, i + 1, + len(self.variables))) + + return None + + def _gather_data(self, data_to_gather): + """ + Gather all the variable data into the MPI rank 0 to perform a serial write. + + Returns + ------- + data_to_gather: dict + Variables to gather. + """ + + data_list = deepcopy(data_to_gather) + for var_name, var_info in data_list.items(): + try: + # noinspection PyArgumentList + data_aux = self.comm.gather(data_list[var_name]['data'], root=0) + if self.rank == 0: + shp_len = len(data_list[var_name]['data'].shape) + # concatenate over station + if self.parallel_method == 'X': + if shp_len == 1: + # dimensions = (station) + axis = 0 + elif shp_len == 2: + # dimensions = (station, strlen) or + # dimensions = (station, time) + axis = 0 + else: + msg = 'The points NetCDF must have ' + msg += 'surface values (without levels).' + raise NotImplementedError(msg) + elif self.parallel_method == 'T': + # concatenate over time + if shp_len == 1: + # dimensions = (station) + axis = None + continue + elif shp_len == 2: + if 'strlen' in var_info['dimensions']: + # dimensions = (station, strlen) + axis = None + continue + else: + # dimensions = (station, time) + axis = 1 + else: + msg = 'The points NetCDF must have ' + msg += 'surface values (without levels).' + raise NotImplementedError(msg) + else: + raise NotImplementedError( + "Parallel method '{meth}' is not implemented. 
Use one of these: {accept}".format( + meth=self.parallel_method, accept=['X', 'T'])) + data_list[var_name]['data'] = np.concatenate(data_aux, axis=axis) + except Exception as e: + print("**ERROR** an error has occurred while gathering the '{0}' variable.\n".format(var_name)) + sys.stderr.write("**ERROR** an error has occurred while gathering the '{0}' variable.\n".format(var_name)) + print(e) + sys.stderr.write(str(e)) + # print(e, file=sys.stderr) + sys.stderr.flush() + self.comm.Abort(1) + raise e + + return data_list + +
+[docs] + def to_netcdf(self, path, compression_level=0, serial=False, info=False, chunking=False): + """ + Write the netCDF output file. + + Parameters + ---------- + path : str + Path to the output netCDF file. + compression_level : int + Level of compression (0 to 9). Default: 0 (no compression). + serial : bool + Indicates if you want to write in serial or not. Default: False. + info : bool + Indicates if you want to print the information of each writing step by stdout. Default: False. + chunking : bool + Indicates if you want a chunked netCDF output. Only available with non-serial writes. Default: False. + """ + + if (not serial) and (self.size > 1): + msg = 'WARNING!!! ' + msg += 'Providentia datasets cannot be written in parallel yet. ' + msg += 'Changing to serial mode.' + warnings.warn(msg) + sys.stderr.flush() + + super(PointsNesProvidentia, self).to_netcdf(path, compression_level=compression_level, + serial=True, info=info, chunking=chunking) + + return None
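Usage sketch (hypothetical output path); even under MPI the call falls back to a serial write, as the warning above states::

    obj.to_netcdf('providentia_points_out.nc', info=True)  # forced serial, even if comm size > 1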
+ + +
+[docs] + def add_variables_to_shapefile(self, var_list, idx_lev=0, idx_time=0): + """ + Add variables data to shapefile. + + Parameters + ---------- + var_list : list, str + List (or single string) of the variables to be loaded and saved in the shapefile. + idx_lev : int + Index of vertical level for which the data will be saved in the shapefile. + idx_time : int + Index of time for which the data will be saved in the shapefile. + """ + + if idx_lev != 0: + msg = 'Error: Points dataset has no level (Level: {0}).'.format(idx_lev) + raise ValueError(msg) + + for var_name in var_list: + # station as dimension + if len(self.variables[var_name]['dimensions']) == 1: + self.shapefile[var_name] = self.variables[var_name]['data'][:].ravel() + # station and time as dimensions + else: + self.shapefile[var_name] = self.variables[var_name]['data'][:, idx_time].ravel() + + return None
+ + + @staticmethod + def _get_axis_index_(axis): + if axis == 'T': + value = 1 + elif axis == 'X': + value = 0 + else: + raise ValueError("Unknown axis: {0}".format(axis)) + return value + + @staticmethod + def _set_var_crs(var): + """ + Set the grid_mapping attribute (no-op for point datasets). + + Parameters + ---------- + var : Variable + netCDF4-python variable object. + """ + return None
+ +
+ +
+
+
+ +
+ +
+


+
+
+
+
+
+ + + + \ No newline at end of file diff --git a/docs/build/html/_modules/nes/nc_projections/rotated_nes.html b/docs/build/html/_modules/nes/nc_projections/rotated_nes.html new file mode 100644 index 0000000000000000000000000000000000000000..b0a044994e23bf9744b0501f9c5b1880ddb2c448 --- /dev/null +++ b/docs/build/html/_modules/nes/nc_projections/rotated_nes.html @@ -0,0 +1,755 @@ + + + + + + nes.nc_projections.rotated_nes — NES 1.1.3 documentation + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
+
+
+
+
+ +

Source code for nes.nc_projections.rotated_nes

+#!/usr/bin/env python
+
+import warnings
+import sys
+import numpy as np
+import pandas as pd
+import math
+from cfunits import Units
+from pyproj import Proj
+from copy import deepcopy
+import geopandas as gpd
+from shapely.geometry import Polygon, Point
+from .default_nes import Nes
+
+
+
+[docs] +class RotatedNes(Nes): + """ + + Attributes + ---------- + _rlat : dict + Rotated latitudes dictionary with the complete 'data' key for all the values and the rest of the attributes. + _rlon : dict + Rotated longitudes dictionary with the complete 'data' key for all the values and the rest of the attributes. + rlat : dict + Rotated latitudes dictionary with the portion of 'data' corresponding to the rank values. + rlon : dict + Rotated longitudes dictionary with the portion of 'data' corresponding to the rank values. + _var_dim : tuple + Tuple with the name of the Y and X dimensions for the variables. + ('rlat', 'rlon') for a rotated projection. + _lat_dim : tuple + Tuple with the name of the dimensions of the Latitude values. + ('rlat', 'rlon') for a rotated projection. + _lon_dim : tuple + Tuple with the name of the dimensions of the Longitude values. + ('rlat', 'rlon') for a rotated projection. + """ + def __init__(self, comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='Y', + avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, + balanced=False, times=None, **kwargs): + """ + Initialize the RotatedNes class. + + Parameters + ---------- + comm: MPI.COMM + MPI Communicator. + path: str + Path to the NetCDF to initialize the object. + info: bool + Indicates if you want to get reading/writing info. + dataset: Dataset + NetCDF4-python Dataset to initialize the class. + xarray: bool: + (Not working) Indicates if you want to use xarray as default. + parallel_method : str + Indicates the parallelization method that you want. Default: 'Y'. + Accepted values: ['X', 'Y', 'T']. + avoid_first_hours : int + Number of hours to remove from first time steps. + avoid_last_hours : int + Number of hours to remove from last time steps. + first_level : int + Index of the first level to use. + last_level : int, None + Index of the last level to use. None if it is the last. + create_nes : bool + Indicates if you want to create the object from scratch (True) or through an existing file. + balanced : bool + Indicates if you want a balanced parallelization or not. + Balanced dataset cannot be written in chunking mode. + times : list, None + List of times to substitute the current ones while creation. + """ + + super(RotatedNes, self).__init__(comm=comm, path=path, + info=info, dataset=dataset, balanced=balanced, + xarray=xarray, parallel_method=parallel_method, + avoid_first_hours=avoid_first_hours, avoid_last_hours=avoid_last_hours, + first_level=first_level, last_level=last_level, create_nes=create_nes, + times=times, **kwargs) + + if create_nes: + # Dimensions screening + self.lat = self._get_coordinate_values(self._lat, 'Y') + self.lon = self._get_coordinate_values(self._lon, 'X') + else: + # Complete dimensions + self._rlat = self._get_coordinate_dimension('rlat') + self._rlon = self._get_coordinate_dimension('rlon') + + # Dimensions screening + self.rlat = self._get_coordinate_values(self._rlat, 'Y') + self.rlon = self._get_coordinate_values(self._rlon, 'X') + + # Set axis limits for parallel writing + self.write_axis_limits = self.get_write_axis_limits() + + self._var_dim = ('rlat', 'rlon') + self._lat_dim = ('rlat', 'rlon') + self._lon_dim = ('rlat', 'rlon') + +
+[docs] + @staticmethod + def new(comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='Y', + avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, + create_nes=False, balanced=False, times=None, **kwargs): + """ + Initialize the Nes class. + + Parameters + ---------- + comm: MPI.COMM + MPI Communicator. + path: str + Path to the NetCDF to initialize the object. + info: bool + Indicates if you want to get reading/writing info. + dataset: Dataset + NetCDF4-python Dataset to initialize the class. + xarray: bool: + (Not working) Indicates if you want to use xarray as default. + parallel_method : str + Indicates the parallelization method that you want. Default: 'Y'. + Accepted values: ['X', 'Y', 'T']. + avoid_first_hours : int + Number of hours to remove from first time steps. + avoid_last_hours : int + Number of hours to remove from last time steps. + create_nes : bool + Indicates if you want to create the object from scratch (True) or through an existing file. + balanced : bool + Indicates if you want a balanced parallelization or not. + Balanced dataset cannot be written in chunking mode. + times : list, None + List of times to substitute the current ones while creation. + """ + + new = RotatedNes(comm=comm, path=path, info=info, dataset=dataset, xarray=xarray, + parallel_method=parallel_method, avoid_first_hours=avoid_first_hours, + avoid_last_hours=avoid_last_hours, first_level=first_level, last_level=last_level, + create_nes=create_nes, balanced=balanced, times=times, **kwargs) + + return new
+ + +
+[docs] + def filter_coordinates_selection(self): + """ + Use the selection limits to filter rlat, rlon, time, lev, lat, lon, lon_bnds and lat_bnds. + """ + + idx = self.get_idx_intervals() + + self.rlat = self._get_coordinate_values(self._rlat, 'Y') + self.rlon = self._get_coordinate_values(self._rlon, 'X') + + self._rlat['data'] = self._rlat['data'][idx['idx_y_min']:idx['idx_y_max']] + self._rlon['data'] = self._rlon['data'][idx['idx_x_min']:idx['idx_x_max']] + + super(RotatedNes, self).filter_coordinates_selection() + + return None
+ + + def _get_pyproj_projection(self): + """ + Get projection data as in Pyproj library. + + Returns + ---------- + projection : pyproj.Proj + Grid projection. + """ + + projection = Proj(proj='ob_tran', + o_proj="longlat", + ellps='WGS84', + R=self.earth_radius[0], + o_lat_p=np.float64(self.projection_data['grid_north_pole_latitude']), + o_lon_p=np.float64(self.projection_data['grid_north_pole_longitude']), + ) + + return projection + + def _get_projection(self): + """ + Get 'projection' and 'projection_data' from grid details. + """ + + if 'rotated_pole' in self.variables.keys(): + projection_data = self.variables['rotated_pole'] + self.free_vars('rotated_pole') + else: + msg = 'There is no variable called rotated_pole, projection has not been defined.' + raise RuntimeError(msg) + + if 'dtype' in projection_data.keys(): + del projection_data['dtype'] + + if 'data' in projection_data.keys(): + del projection_data['data'] + + if 'dimensions' in projection_data.keys(): + del projection_data['dimensions'] + + self.projection_data = projection_data + self.projection = self._get_pyproj_projection() + + return None + + def _create_projection(self, **kwargs): + """ + Create 'projection' and 'projection_data' from projection arguments. + """ + + projection_data = {'grid_mapping_name': 'rotated_latitude_longitude', + 'grid_north_pole_latitude': 90 - kwargs['centre_lat'], + 'grid_north_pole_longitude': -180 + kwargs['centre_lon'], + 'inc_rlat': kwargs['inc_rlat'], + 'inc_rlon': kwargs['inc_rlon'], + 'south_boundary': kwargs['south_boundary'], + 'west_boundary': kwargs['west_boundary'], + } + + self.projection_data = projection_data + self.projection = self._get_pyproj_projection() + + return None + + def _create_dimensions(self, netcdf): + """ + Create 'rlat', 'rlon' and 'spatial_nv' dimensions and the super dimensions 'lev', 'time', 'time_nv', 'lon' and 'lat'. + + Parameters + ---------- + netcdf : Dataset + NetCDF object. + """ + + super(RotatedNes, self)._create_dimensions(netcdf) + + # Create rlat and rlon dimensions + netcdf.createDimension('rlon', len(self._rlon['data'])) + netcdf.createDimension('rlat', len(self._rlat['data'])) + + # Create spatial_nv (number of vertices) dimension + if (self._lat_bnds is not None) and (self._lon_bnds is not None): + netcdf.createDimension('spatial_nv', 4) + pass + + return None + + def _create_dimension_variables(self, netcdf): + """ + Create the 'rlat' and 'rlon' variables. + + Parameters + ---------- + netcdf : Dataset + NetCDF object. + """ + + super(RotatedNes, self)._create_dimension_variables(netcdf) + + # ROTATED LATITUDES + rlat = netcdf.createVariable('rlat', self._rlat['data'].dtype, ('rlat',)) + rlat.long_name = "latitude in rotated pole grid" + if 'units' in self._rlat.keys(): + rlat.units = Units(self._rlat['units'], formatted=True).units + else: + rlat.units = 'degrees' + rlat.standard_name = "grid_latitude" + if self.size > 1: + rlat.set_collective(True) + rlat[:] = self._rlat['data'] + + # ROTATED LONGITUDES + rlon = netcdf.createVariable('rlon', self._rlon['data'].dtype, ('rlon',)) + rlon.long_name = "longitude in rotated pole grid" + if 'units' in self._rlon.keys(): + rlon.units = Units(self._rlon['units'], formatted=True).units + else: + rlon.units = 'degrees' + rlon.standard_name = "grid_longitude" + if self.size > 1: + rlon.set_collective(True) + rlon[:] = self._rlon['data'] + + return None + + def _create_rotated_coordinates(self): + """ + Calculate rotated latitudes and longitudes from grid details. 
+ + Returns + ---------- + _rlat : dict + Rotated latitudes dictionary with the complete 'data' key for all the values and the rest of the attributes. + _rlon : dict + Rotated longitudes dictionary with the complete 'data' key for all the values and the rest of the attributes. + """ + + # Get grid resolution + inc_rlon = np.float64(self.projection_data['inc_rlon']) + inc_rlat = np.float64(self.projection_data['inc_rlat']) + + # Get south and west boundaries + south_boundary = np.float64(self.projection_data['south_boundary']) + west_boundary = np.float64(self.projection_data['west_boundary']) + + # Calculate rotated latitudes + n_lat = int((abs(south_boundary) / inc_rlat) * 2 + 1) + rlat = np.linspace(south_boundary, + south_boundary + (inc_rlat * (n_lat - 1)), + n_lat, dtype=np.float64) + + # Calculate rotated longitudes + n_lon = int((abs(west_boundary) / inc_rlon) * 2 + 1) + rlon = np.linspace(west_boundary, + west_boundary + (inc_rlon * (n_lon - 1)), + n_lon, dtype=np.float64) + + return {'data': rlat}, {'data': rlon} + +
+[docs] + def rotated2latlon(self, lon_deg, lat_deg, lon_min=-180): + """ + Calculate the unrotated coordinates using the rotated ones. + + Parameters + ---------- + lon_deg : numpy.array + Rotated longitude coordinate. + lat_deg : numpy.array + Rotated latitude coordinate. + lon_min : float + Minimum value for the longitudes: -180 (-180 to 180) or 0 (0 to 360). + + Returns + ---------- + almd : numpy.array + Unrotated longitudes. + aphd : numpy.array + Unrotated latitudes. + """ + + # Get centre coordinates + centre_lat = 90 - np.float64(self.projection_data['grid_north_pole_latitude']) + centre_lon = np.float64(self.projection_data['grid_north_pole_longitude']) + 180 + + # Convert to radians + degrees_to_radians = math.pi / 180. + tph0 = centre_lat * degrees_to_radians + tlm = lon_deg * degrees_to_radians + tph = lat_deg * degrees_to_radians + + tlm0d = -180 + centre_lon + ctph0 = np.cos(tph0) + stph0 = np.sin(tph0) + stlm = np.sin(tlm) + ctlm = np.cos(tlm) + stph = np.sin(tph) + ctph = np.cos(tph) + + # Calculate unrotated latitudes + sph = (ctph0 * stph) + (stph0 * ctph * ctlm) + sph[sph > 1.] = 1. + sph[sph < -1.] = -1. + aph = np.arcsin(sph) + aphd = aph / degrees_to_radians + + # Calculate unrotated longitudes + anum = ctph * stlm + denom = (ctlm * ctph - stph0 * sph) / ctph0 + relm = np.arctan2(anum, denom) - math.pi + almd = relm / degrees_to_radians + tlm0d + almd[almd > (lon_min + 360)] -= 360 + almd[almd < lon_min] += 360 + + return almd, aphd
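A worked example: assume the projection holds the EURO-CORDEX-like pole ``grid_north_pole_latitude=39.25`` and ``grid_north_pole_longitude=-162.0``, so ``centre_lat = 90 - 39.25 = 50.75`` and ``centre_lon = -162 + 180 = 18``. The rotated origin then maps back to the domain centre::

    import numpy as np

    # rotated_obj is a RotatedNes instance (hypothetical name).
    lon, lat = rotated_obj.rotated2latlon(np.array([0.0]), np.array([0.0]))
    print(lon, lat)  # ~[18.0] [50.75]: the unrotated domain centre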
+ + + def _create_centre_coordinates(self, **kwargs): + """ + Calculate centre latitudes and longitudes from grid details. + + Returns + ---------- + centre_lat : dict + Dictionary with data of centre coordinates for latitude in 2D (latitude, longitude). + centre_lon : dict + Dictionary with data of centre coordinates for longitude in 2D (latitude, longitude). + """ + + # Complete dimensions + self._rlat, self._rlon = self._create_rotated_coordinates() + + # Calculate centre latitudes and longitudes (1D to 2D) + centre_lon, centre_lat = self.rotated2latlon(np.array([self._rlon['data']] * len(self._rlat['data'])), + np.array([self._rlat['data']] * len(self._rlon['data'])).T) + + return {'data': centre_lat}, {'data': centre_lon} + +
+[docs] + def create_providentia_exp_centre_coordinates(self): + """ + Calculate centre latitudes and longitudes from original coordinates and store as 2D arrays. + + Returns + ---------- + model_centre_lat : dict + Dictionary with data of centre coordinates for latitude in 2D (latitude, longitude). + model_centre_lon : dict + Dictionary with data of centre coordinates for longitude in 2D (latitude, longitude). + """ + + # Get centre latitudes + model_centre_lat = self.lat + + # Get centre longitudes + model_centre_lon = self.lon + + return model_centre_lat, model_centre_lon
+ + +
+[docs] + def create_providentia_exp_grid_edge_coordinates(self): + """ + Calculate grid edge latitudes and longitudes and get model grid outline. + + Returns + ---------- + grid_edge_lat : dict + Dictionary with data of grid edge latitudes. + grid_edge_lon : dict + Dictionary with data of grid edge longitudes. + """ + + # Get grid resolution + inc_rlon = np.abs(np.mean(np.diff(self.rlon['data']))) + inc_rlat = np.abs(np.mean(np.diff(self.rlat['data']))) + + # Get bounds for rotated coordinates + rlat_bounds = self.create_single_spatial_bounds(self.rlat['data'], inc_rlat) + rlon_bounds = self.create_single_spatial_bounds(self.rlon['data'], inc_rlon) + + # Get rotated latitudes for grid edge + left_edge_rlat = np.append(rlat_bounds.flatten()[::2], rlat_bounds.flatten()[-1]) + right_edge_rlat = np.flip(left_edge_rlat, 0) + top_edge_rlat = np.repeat(rlat_bounds[-1][-1], len(self.rlon['data']) - 1) + bottom_edge_rlat = np.repeat(rlat_bounds[0][0], len(self.rlon['data'])) + rlat_grid_edge = np.concatenate((left_edge_rlat, top_edge_rlat, right_edge_rlat, bottom_edge_rlat)) + + # Get rotated longitudes for grid edge + left_edge_rlon = np.repeat(rlon_bounds[0][0], len(self.rlat['data']) + 1) + top_edge_rlon = rlon_bounds.flatten()[1:-1:2] + right_edge_rlon = np.repeat(rlon_bounds[-1][-1], len(self.rlat['data']) + 1) + bottom_edge_rlon = np.flip(rlon_bounds.flatten()[:-1:2], 0) + rlon_grid_edge = np.concatenate((left_edge_rlon, top_edge_rlon, right_edge_rlon, bottom_edge_rlon)) + + # Get edges for regular coordinates + grid_edge_lon_data, grid_edge_lat_data = self.rotated2latlon(rlon_grid_edge, rlat_grid_edge) + + # Create grid outline by stacking the edges in both coordinates + model_grid_outline = np.vstack((grid_edge_lon_data, grid_edge_lat_data)).T + + grid_edge_lat = {'data': model_grid_outline[:,1]} + grid_edge_lon = {'data': model_grid_outline[:,0]} + + return grid_edge_lat, grid_edge_lon
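From the construction above, the outline visits every boundary vertex once and closes the ring, so it holds ``2*n_rlat + 2*n_rlon + 1`` points for an ``(n_rlat, n_rlon)`` grid. A quick sanity check (sketch)::

    grid_edge_lat, grid_edge_lon = rotated_obj.create_providentia_exp_grid_edge_coordinates()
    n_rlat, n_rlon = len(rotated_obj.rlat['data']), len(rotated_obj.rlon['data'])
    assert grid_edge_lon['data'].shape[0] == 2 * n_rlat + 2 * n_rlon + 1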
+ + +
+[docs] + def create_spatial_bounds(self): + """ + Calculate longitude and latitude bounds and set them. + """ + + # Calculate rotated coordinates bounds + inc_rlat = np.abs(np.mean(np.diff(self._rlat['data']))) + rlat_bnds = self.create_single_spatial_bounds(np.array([self._rlat['data']] * len(self._rlon['data'])).T, + inc_rlat, spatial_nv=4, inverse=True) + + inc_rlon = np.abs(np.mean(np.diff(self._rlon['data']))) + rlon_bnds = self.create_single_spatial_bounds(np.array([self._rlon['data']] * len(self._rlat['data'])), + inc_rlon, spatial_nv=4) + + # Transform rotated bounds to regular bounds + lon_bnds, lat_bnds = self.rotated2latlon(rlon_bnds, rlat_bnds) + + # Obtain regular coordinates bounds + self._lat_bnds = {} + self._lat_bnds['data'] = deepcopy(lat_bnds) + self.lat_bnds = {} + self.lat_bnds['data'] = lat_bnds[self.read_axis_limits['y_min']:self.read_axis_limits['y_max'], + self.read_axis_limits['x_min']:self.read_axis_limits['x_max'], + :] + + self._lon_bnds = {} + self._lon_bnds['data'] = deepcopy(lon_bnds) + self.lon_bnds = {} + self.lon_bnds['data']= lon_bnds[self.read_axis_limits['y_min']:self.read_axis_limits['y_max'], + self.read_axis_limits['x_min']:self.read_axis_limits['x_max'], + :] + + return None
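After this call each cell carries its four corners, so the local bounds arrays gain a trailing vertex dimension; for example (sketch)::

    rotated_obj.create_spatial_bounds()
    print(rotated_obj.lat_bnds['data'].shape)  # (local n_rlat, local n_rlon, 4)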
+ + + @staticmethod + def _set_var_crs(var): + """ + Set the grid_mapping to 'rotated_pole'. + + Parameters + ---------- + var : Variable + netCDF4-python variable object. + """ + + var.grid_mapping = 'rotated_pole' + var.coordinates = 'lat lon' + + return None + + def _create_metadata(self, netcdf): + """ + Create the 'crs' variable for the rotated latitude longitude grid_mapping. + + Parameters + ---------- + netcdf : Dataset + netcdf4-python Dataset. + """ + + if self.projection_data is not None: + mapping = netcdf.createVariable('rotated_pole', 'i') + mapping.grid_mapping_name = self.projection_data['grid_mapping_name'] + mapping.grid_north_pole_latitude = self.projection_data['grid_north_pole_latitude'] + mapping.grid_north_pole_longitude = self.projection_data['grid_north_pole_longitude'] + + return None + +
+[docs] + def to_grib2(self, path, grib_keys, grib_template_path, lat_flip=False, info=False): + """ + Write output file with grib2 format. + + Parameters + ---------- + path : str + Path to the output file. + grib_keys : dict + Dictionary with the grib2 keys. + grib_template_path : str + Path to the grib2 file to use as template. + info : bool + Indicates if you want to print extra information during the process. + """ + + raise NotImplementedError("Grib2 format cannot be written in a Rotated pole projection.")
+ + +
+[docs] + def create_shapefile(self): + """ + Create spatial geodataframe (shapefile). + + Returns + ------- + shapefile : GeoPandasDataFrame + Shapefile dataframe. + """ + + if self.shapefile is None: + + if self._lat_bnds is None or self._lon_bnds is None: + self.create_spatial_bounds() + + # Reshape arrays to create geometry + aux_b_lats = self.lat_bnds['data'].reshape((self.lat_bnds['data'].shape[0] * self.lat_bnds['data'].shape[1], + self.lat_bnds['data'].shape[2])) + aux_b_lons = self.lon_bnds['data'].reshape((self.lon_bnds['data'].shape[0] * self.lon_bnds['data'].shape[1], + self.lon_bnds['data'].shape[2])) + + # Get polygons from bounds + geometry = [] + for i in range(aux_b_lons.shape[0]): + geometry.append(Polygon([(aux_b_lons[i, 0], aux_b_lats[i, 0]), + (aux_b_lons[i, 1], aux_b_lats[i, 1]), + (aux_b_lons[i, 2], aux_b_lats[i, 2]), + (aux_b_lons[i, 3], aux_b_lats[i, 3]), + (aux_b_lons[i, 0], aux_b_lats[i, 0])])) + + # Create dataframe containing all polygons + fids = self.get_fids() + gdf = gpd.GeoDataFrame(index=pd.Index(name='FID', data=fids.ravel()), + geometry=geometry, + crs="EPSG:4326") + self.shapefile = gdf + + else: + gdf = self.shapefile + + return gdf
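The result is an ordinary GeoDataFrame indexed by FID, so it can be written or joined with standard geopandas calls (hypothetical output name)::

    gdf = rotated_obj.create_shapefile()
    gdf.to_file('rotated_grid_cells.shp')  # one polygon per grid cell, EPSG:4326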
+ + +
+[docs] + def get_centroids_from_coordinates(self): + """ + Get centroids from geographical coordinates. + + Returns + ------- + centroids_gdf: GeoPandasDataFrame + Centroids dataframe. + """ + + # Get centroids from coordinates + centroids = [] + for lat_ind in range(0, self.lon['data'].shape[0]): + for lon_ind in range(0, self.lon['data'].shape[1]): + centroids.append(Point(self.lon['data'][lat_ind, lon_ind], + self.lat['data'][lat_ind, lon_ind])) + + # Create dataframe containing all points + fids = self.get_fids() + centroids_gdf = gpd.GeoDataFrame(index=pd.Index(name='FID', data=fids.ravel()), + geometry=centroids, + crs="EPSG:4326") + + return centroids_gdf
+
+ +
+ +
+
+
+ +
+ +
+


+
+
+
+
+
+ + + + \ No newline at end of file diff --git a/docs/build/html/_modules/nes/nc_projections/rotated_nested_nes.html b/docs/build/html/_modules/nes/nc_projections/rotated_nested_nes.html new file mode 100644 index 0000000000000000000000000000000000000000..f48f8e40c04f268373750c975719e2269e47c7d5 --- /dev/null +++ b/docs/build/html/_modules/nes/nc_projections/rotated_nested_nes.html @@ -0,0 +1,259 @@ + + + + + + nes.nc_projections.rotated_nested_nes — NES 1.1.3 documentation + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
+
+
+
+
+ +

Source code for nes.nc_projections.rotated_nested_nes

+#!/usr/bin/env python
+
+import numpy as np
+from netCDF4 import Dataset
+from .rotated_nes import RotatedNes
+
+
+[docs] +class RotatedNestedNes(RotatedNes): + + def __init__(self, comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='Y', + avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, + balanced=False, times=None, **kwargs): + """ + Initialize the RotatedNestedNes class. + + Parameters + ---------- + comm: MPI.COMM + MPI Communicator. + path: str + Path to the NetCDF to initialize the object. + info: bool + Indicates if you want to get reading/writing info. + dataset: Dataset + NetCDF4-python Dataset to initialize the class. + xarray: bool: + (Not working) Indicates if you want to use xarray as default. + parallel_method : str + Indicates the parallelization method that you want. Default: 'Y'. + Accepted values: ['X', 'Y', 'T']. + avoid_first_hours : int + Number of hours to remove from first time steps. + avoid_last_hours : int + Number of hours to remove from last time steps. + first_level : int + Index of the first level to use. + last_level : int, None + Index of the last level to use. None if it is the last. + create_nes : bool + Indicates if you want to create the object from scratch (True) or through an existing file. + balanced : bool + Indicates if you want a balanced parallelization or not. + Balanced dataset cannot be written in chunking mode. + times : list, None + List of times to substitute the current ones while creation. + """ + + super(RotatedNestedNes, self).__init__(comm=comm, path=path, + info=info, dataset=dataset, balanced=balanced, + xarray=xarray, parallel_method=parallel_method, + avoid_first_hours=avoid_first_hours, avoid_last_hours=avoid_last_hours, + first_level=first_level, last_level=last_level, create_nes=create_nes, + times=times, **kwargs) + + @staticmethod + def _get_parent_attributes(projection_data): + """ + Get projection attributes from parent grid. + + Parameters + ---------- + projection_data : dict + Dictionary with the projection information. + + Returns + ------- + projection_data : dict + Dictionary with the projection information, including parameters from the parent grid. + """ + + # Read variables from parent grid + netcdf = Dataset(projection_data['parent_grid_path'], mode='r') + rlat = netcdf.variables['rlat'][:] + rlon = netcdf.variables['rlon'][:] + rotated_pole = netcdf.variables['rotated_pole'] + + # j_parent_start starts at index 1 so we must subtract 1 + projection_data['inc_rlat'] = (rlat[1] - rlat[0]) / projection_data['parent_ratio'] + projection_data['1st_rlat'] = rlat[int(projection_data['j_parent_start']) - 1] + + # i_parent_start starts at index 1 so we must subtract 1 + projection_data['inc_rlon'] = (rlon[1] - rlon[0]) / projection_data['parent_ratio'] + projection_data['1st_rlon'] = rlon[int(projection_data['i_parent_start']) - 1] + + projection_data['grid_north_pole_longitude'] = rotated_pole.grid_north_pole_longitude + projection_data['grid_north_pole_latitude'] = rotated_pole.grid_north_pole_latitude + + netcdf.close() + + return projection_data + + def _create_projection(self, **kwargs): + """ + Create 'projection' and 'projection_data' from projection arguments. 
+ """ + + projection_data = {'grid_mapping_name': "", # TODO: Add name + 'parent_grid_path': kwargs['parent_grid_path'], + 'parent_ratio': kwargs['parent_ratio'], + 'i_parent_start': kwargs['i_parent_start'], + 'j_parent_start': kwargs['j_parent_start'], + 'n_rlat': kwargs['n_rlat'], + 'n_rlon': kwargs['n_rlon'] + } + + projection_data = self._get_parent_attributes(projection_data) + + self.projection_data = projection_data + self.projection = self._get_pyproj_projection() + + return None + + def _create_rotated_coordinates(self): + """ + Calculate rotated latitudes and longitudes from grid details. + + Returns + ---------- + _rlat : dict + Rotated latitudes dictionary with the complete 'data' key for all the values and the rest of the attributes. + _rlon : dict + Rotated longitudes dictionary with the complete 'data' key for all the values and the rest of the attributes. + """ + + # Get grid resolution + inc_rlon = self.projection_data['inc_rlon'] + inc_rlat = self.projection_data['inc_rlat'] + + # Get number of rotated coordinates + n_rlat = self.projection_data['n_rlat'] + n_rlon = self.projection_data['n_rlon'] + + # Get first coordinates + first_rlat = self.projection_data['1st_rlat'] + first_rlon = self.projection_data['1st_rlon'] + + # Calculate rotated latitudes + rlat = np.linspace(first_rlat, + first_rlat + (inc_rlat * (n_rlat - 1)), + n_rlat, dtype=np.float64) + + # Calculate rotated longitudes + rlon = np.linspace(first_rlon, + first_rlon + (inc_rlon * (n_rlon - 1)), + n_rlon, dtype=np.float64) + + return {'data': rlat}, {'data': rlon}
+ + +
+ +
+
+
+ +
+ +
+


+
+
+
+
+
+ + + + \ No newline at end of file diff --git a/docs/build/html/_modules/nes/nes_formats/cams_ra_format.html b/docs/build/html/_modules/nes/nes_formats/cams_ra_format.html new file mode 100644 index 0000000000000000000000000000000000000000..825f9fcd7154918a06b1ba48bdf1a5474f7de577 --- /dev/null +++ b/docs/build/html/_modules/nes/nes_formats/cams_ra_format.html @@ -0,0 +1,326 @@ + + + + + + nes.nes_formats.cams_ra_format — NES 1.1.3 documentation + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
+
+
+
+
+ +

Source code for nes.nes_formats.cams_ra_format

+#!/usr/bin/env python
+
+import sys
+import warnings
+import numpy as np
+import os
+import nes
+from netCDF4 import Dataset
+from mpi4py import MPI
+from copy import copy
+
+
+
+[docs] +def to_netcdf_cams_ra(self, path): + """ + Write the loaded data in the CAMS Re-Analysis format: one netCDF file per vertical level. + + Parameters + ---------- + self : nes.Nes + Source projection Nes Object. + path : str + Path to the output netCDF file. It must contain the '<level>' pattern. + """ + + if not isinstance(self, nes.LatLonNes): + raise TypeError("CAMS Re-Analysis format must have Regular Lat-Lon projection") + if '<level>' not in path: + raise ValueError("CAMS Re-Analysis path must contain '<level>' as pattern; current: '{0}'".format(path)) + + orig_path = copy(path) + + for i_lev, level in enumerate(self.lev['data']): + path = orig_path.replace('<level>', 'l{0}'.format(i_lev)) + # Open NetCDF + if self.info: + print("Rank {0:03d}: Creating {1}".format(self.rank, path)) + if self.size > 1: + netcdf = Dataset(path, format="NETCDF4", mode='w', parallel=True, comm=self.comm, info=MPI.Info()) + else: + netcdf = Dataset(path, format="NETCDF4", mode='w', parallel=False) + if self.info: + print("Rank {0:03d}: NetCDF ready to write".format(self.rank)) + self.to_dtype(data_type=np.float32) + + # Create dimensions + create_dimensions(self, netcdf) + + # Create variables + create_variables(self, netcdf, i_lev) + + # Create dimension variables + create_dimension_variables(self, netcdf) + if self.info: + print("Rank {0:03d}: Dimensions done".format(self.rank)) + + # Write global attributes and close NetCDF + if self.global_attrs is not None: + for att_name, att_value in self.global_attrs.items(): + netcdf.setncattr(att_name, att_value) + + netcdf.close() + + return None
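Usage sketch (hypothetical paths): the literal ``<level>`` token is replaced by ``l0``, ``l1``, ..., so one file is written per vertical level::

    from nes.nes_formats.cams_ra_format import to_netcdf_cams_ra

    to_netcdf_cams_ra(latlon_obj, '/out/cams_ra_<level>_20230101.nc')
    # -> /out/cams_ra_l0_20230101.nc, /out/cams_ra_l1_20230101.nc, ...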
+ + + +
+[docs] +def create_dimensions(self, netcdf): + """ + Create 'time', 'lat' and 'lon' dimensions. + + Parameters + ---------- + self : nes.Nes + Source projection Nes Object. + netcdf : Dataset + netcdf4-python open dataset. + """ + + # Create time dimension + netcdf.createDimension('time', None) + + # Create lat and lon dimensions + netcdf.createDimension('lat', len(self._lat['data'])) + netcdf.createDimension('lon', len(self._lon['data'])) + + return None
+ + + +
+[docs] +def create_dimension_variables(self, netcdf): + """ + Create the 'time', 'lat' and 'lon' variables. + + Parameters + ---------- + self : nes.Nes + Source projection Nes Object. + netcdf : Dataset + netcdf4-python open dataset. + """ + + # LATITUDES + lat = netcdf.createVariable('lat', np.float64, ('lat',)) + lat.standard_name = 'latitude' + lat.long_name = 'latitude' + lat.units = 'degrees_north' + lat.axis = 'Y' + + if self.size > 1: + lat.set_collective(True) + lat[:] = self._lat['data'] + + # LONGITUDES + lon = netcdf.createVariable('lon', np.float64, ('lon',)) + lon.long_name = 'longitude' + lon.standard_name = 'longitude' + lon.units = 'degrees_east' + lon.axis = 'X' + if self.size > 1: + lon.set_collective(True) + lon[:] = self._lon['data'] + + # TIMES + time_var = netcdf.createVariable('time', np.float64, ('time',)) + time_var.standard_name = 'time' + time_var.units = 'day as %Y%m%d.%f' + time_var.calendar = 'proleptic_gregorian' + time_var.axis = 'T' + if self.size > 1: + time_var.set_collective(True) + time_var[:] = date2num(self._time[self.get_time_id(self.hours_start, first=True): + self.get_time_id(self.hours_end, first=False)], + time_var.units, time_var.calendar) + + return None
+ + + +
+[docs] +def create_variables(self, netcdf, i_lev): + """ + Create the netCDF file variables. + + Parameters + ---------- + self : nes.Nes + Source projection Nes Object. + netcdf : Dataset + netcdf4-python open dataset. + i_lev : int + Index of the vertical level being written. + """ + + for i, (var_name, var_dict) in enumerate(self.variables.items()): + if var_dict['data'] is not None: + if self.info: + print("Rank {0:03d}: Writing {1} var ({2}/{3})".format(self.rank, var_name, i + 1, len(self.variables))) + try: + var = netcdf.createVariable(var_name, np.float32, ('time', 'lat', 'lon',), + zlib=True, complevel=7, least_significant_digit=3) + + if self.info: + print("Rank {0:03d}: Var {1} created ({2}/{3})".format( + self.rank, var_name, i + 1, len(self.variables))) + if self.size > 1: + var.set_collective(True) + if self.info: + print("Rank {0:03d}: Var {1} collective ({2}/{3})".format( + self.rank, var_name, i + 1, len(self.variables))) + + if self.info: + print("Rank {0:03d}: Filling {1}".format(self.rank, var_name)) + var[self.write_axis_limits['t_min']:self.write_axis_limits['t_max'], + self.write_axis_limits['y_min']:self.write_axis_limits['y_max'], + self.write_axis_limits['x_min']:self.write_axis_limits['x_max']] = var_dict['data'][:, i_lev, :, :] + + if self.info: + print("Rank {0:03d}: Var {1} data ({2}/{3})".format( + self.rank, var_name, i + 1, len(self.variables))) + var.long_name = var_dict['long_name'] + var.units = var_dict['units'] + var.number_of_significant_digits = np.int32(3) + + if self.info: + print("Rank {0:03d}: Var {1} completed ({2}/{3})".format(self.rank, var_name, i + 1, + len(self.variables))) + except Exception as e: + print("**ERROR** an error has occurred while writing the '{0}' variable".format(var_name)) + # print("**ERROR** an error has occurred while writing the '{0}' variable".format(var_name), + # file=sys.stderr) + raise e + else: + msg = 'WARNING!!! ' + msg += 'Variable {0} was not loaded. It will not be written.'.format(var_name) + warnings.warn(msg) + sys.stderr.flush() + + return None
+ + + +
+[docs] +def date2num(time_array, time_units=None, time_calendar=None): + """ + Convert a list of datetimes to the 'day as %Y%m%d.%f' numeric format. + + The 'time_units' and 'time_calendar' arguments are accepted for API compatibility + with netCDF4.date2num but are not used here. + """ + time_res = [] + for aux_time in time_array: + time_res.append(float(aux_time.strftime("%Y%m%d")) + (float(aux_time.strftime("%H")) / 24)) + time_res = np.array(time_res, dtype=np.float64) + + return time_res
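A minimal sketch of what this serial conversion yields (illustrative dates only): the hour becomes a day fraction, and minutes and seconds are discarded since only the '%H' field contributes.

    from datetime import datetime
    import numpy as np

    # Hypothetical input dates, for illustration only
    times = [datetime(2023, 6, 16, 0), datetime(2023, 6, 16, 12)]
    res = np.array([float(t.strftime("%Y%m%d")) + float(t.strftime("%H")) / 24
                    for t in times], dtype=np.float64)
    print(res)  # [20230616.   20230616.5]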
\ No newline at end of file diff --git a/docs/build/html/_modules/nes/nes_formats/cmaq_format.html b/docs/build/html/_modules/nes/nes_formats/cmaq_format.html new file mode 100644 index 0000000000000000000000000000000000000000..3d152c311a2f95eb0b98a6ebac57473bbaa20121 --- /dev/null +++ b/docs/build/html/_modules/nes/nes_formats/cmaq_format.html @@ -0,0 +1,474 @@ nes.nes_formats.cmaq_format — NES 1.1.3 documentation
Source code for nes.nes_formats.cmaq_format

+#!/usr/bin/env python
+
+import numpy as np
+import nes
+from netCDF4 import Dataset
+from mpi4py import MPI
+from copy import deepcopy
+from datetime import datetime
+
+GLOBAL_ATTRIBUTES_ORDER = [
+    'IOAPI_VERSION', 'EXEC_ID', 'FTYPE', 'CDATE', 'CTIME', 'WDATE', 'WTIME', 'SDATE', 'STIME', 'TSTEP',  'NTHIK',
+    'NCOLS', 'NROWS', 'NLAYS', 'NVARS', 'GDTYP', 'P_ALP', 'P_BET', 'P_GAM', 'XCENT', 'YCENT',  'XORIG', 'YORIG',
+    'XCELL', 'YCELL', 'VGTYP', 'VGTOP', 'VGLVLS', 'GDNAM', 'UPNAM', 'FILEDESC', 'HISTORY', 'VAR-LIST']
+
+
+# noinspection DuplicatedCode
+
+[docs] +def to_netcdf_cmaq(self, path, chunking=False, keep_open=False): + """ + Create the NetCDF using netcdf4-python methods. + + Parameters + ---------- + self : nes.Nes + Source projection Nes Object. + path : str + Path to the output netCDF file. + chunking : bool + Indicates if you want to chunk the output netCDF. + keep_open : bool + Indicates if you want to keep the NetCDF open to fill the data by time step. + """ + self.to_dtype(np.float32) + + set_global_attributes(self) + change_variable_attributes(self) + + # Open NetCDF + if self.info: + print("Rank {0:03d}: Creating {1}".format(self.rank, path)) + if self.size > 1: + netcdf = Dataset(path, format="NETCDF4", mode='w', parallel=True, comm=self.comm, info=MPI.Info()) + else: + netcdf = Dataset(path, format="NETCDF4", mode='w', parallel=False) + if self.info: + print("Rank {0:03d}: NetCDF ready to write".format(self.rank)) + + # Create dimensions + create_dimensions(self, netcdf) + + create_dimension_variables(self, netcdf) + if self.info: + print("Rank {0:03d}: Dimensions done".format(self.rank)) + + # Create variables + create_variables(self, netcdf) + + for att_name in GLOBAL_ATTRIBUTES_ORDER: + netcdf.setncattr(att_name, self.global_attrs[att_name]) + + # Close NetCDF + if keep_open: + self.netcdf = netcdf + else: + netcdf.close() + + return None
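A hedged usage sketch for an LCC grid; the input path is hypothetical, and the open_netcdf/load helpers are assumptions based on the NES tutorials rather than anything defined in this file:

    import nes

    nessy = nes.open_netcdf('emissions_input.nc')  # hypothetical path
    nessy.load()
    to_cmaq_units(nessy)                           # rescale the data to CMAQ units first
    to_netcdf_cmaq(nessy, 'emissions_cmaq.nc')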
+ + + +
+[docs] +def change_variable_attributes(self): + """ + Modify the emission variable attributes so that the output can be used as input for the CMAQ model. + + Parameters + ---------- + self : nes.Nes + """ + for var_name in self.variables.keys(): + + if self.variables[var_name]['units'] == 'mol.s-1': + self.variables[var_name]['units'] = "{:<16}".format('mole/s') + self.variables[var_name]['var_desc'] = "{:<80}".format(self.variables[var_name]['long_name']) + self.variables[var_name]['long_name'] = "{:<16}".format(var_name) + elif self.variables[var_name]['units'] == 'g.s-1': + self.variables[var_name]['units'] = "{:<16}".format('g/s') + self.variables[var_name]['var_desc'] = "{:<80}".format(self.variables[var_name]['long_name']) + self.variables[var_name]['long_name'] = "{:<16}".format(var_name) + + else: + raise TypeError("The unit '{0}' of species {1} is not defined correctly. ".format( + self.variables[var_name]['units'], var_name) + "Should be 'mol.s-1' or 'g.s-1'") + return None
+ + + +
+[docs] +def to_cmaq_units(self): + """ + Change the data values according to the CMAQ conventions. + + Parameters + ---------- + self : nes.Nes + + Returns + ------- + dict + Variables in CMAQ units. + """ + self.calculate_grid_area(overwrite=False) + for var_name in self.variables.keys(): + if isinstance(self.variables[var_name]['data'], np.ndarray): + if self.variables[var_name]['units'] == 'mol.s-1': + # Kmol.m-2.s-1 to mol.s-1 + self.variables[var_name]['data'] = np.array( + self.variables[var_name]['data'] * 1000 * self.cell_measures['cell_area']['data'], dtype=np.float32) + elif self.variables[var_name]['units'] == 'g.s-1': + # Kg.m-2.s-1 to g.s-1 + self.variables[var_name]['data'] = np.array( + self.variables[var_name]['data'] * 1000 * self.cell_measures['cell_area']['data'], dtype=np.float32) + + else: + raise TypeError("The unit '{0}' of species {1} is not defined correctly. ".format( + self.variables[var_name]['units'], var_name) + "Should be 'mol.s-1' or 'g.s-1'") + self.variables[var_name]['dtype'] = np.float32 + + return self.variables
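The rescaling is plain arithmetic; a self-contained illustration with made-up numbers (both branches apply the same 1000 * cell_area factor, only the unit label differs):

    import numpy as np

    flux = np.array([1.0e-9], dtype=np.float32)      # hypothetical flux in Kmol.m-2.s-1
    cell_area = np.array([1.0e6], dtype=np.float32)  # hypothetical cell area in m2
    print(np.array(flux * 1000 * cell_area, dtype=np.float32))  # [1.] -> mol.s-1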
+ + + +
+[docs] +def create_tflag(self): + """ + Create the content of the CMAQ variable TFLAG + + Parameters + ---------- + self : nes.Nes + + Returns + ------- + numpy.ndarray + Array with the content of TFLAG + """ + t_flag = np.empty((len(self.time), len(self.variables), 2)) + + for i_d, aux_date in enumerate(self.time): + y_d = int(aux_date.strftime('%Y%j')) + hms = int(aux_date.strftime('%H%M%S')) + for i_p in range(len(self.variables)): + t_flag[i_d, i_p, 0] = y_d + t_flag[i_d, i_p, 1] = hms + + return t_flag
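For instance, the time step 2023-06-16 12:00:00 is encoded as the pair (2023167, 120000), 16 June being day 167 of 2023; a quick check:

    from datetime import datetime

    d = datetime(2023, 6, 16, 12)
    print(int(d.strftime('%Y%j')), int(d.strftime('%H%M%S')))  # 2023167 120000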
+ + + +
+[docs] +def str_var_list(self): + """ + Transform the list of variable names into a single string, with each name padded to a 16-character field. + + Parameters + ---------- + self : nes.Nes + + Returns + ------- + str + The variable names concatenated into one padded string. + """ + str_var_list = "" + for var in self.variables.keys(): + str_var_list += "{:<16}".format(var) + + return str_var_list
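A quick illustration with hypothetical species names, each padded to 16 characters:

    names = ['NO', 'NO2']
    print("".join("{:<16}".format(v) for v in names))
    # 'NO              NO2             '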
+ + + +
+[docs] +def set_global_attributes(self): + """ + Set the NetCDF global attributes + + Parameters + ---------- + self : nes.Nes + """ + now = datetime.now() + if len(self.time) > 1: + tstep = ((self.time[1] - self.time[0]).seconds // 3600) * 10000 + else: + tstep = 1 * 10000 + + current_attributes = deepcopy(self.global_attrs) + del self.global_attrs + + self.global_attrs = {'IOAPI_VERSION': 'None: made only with NetCDF libraries', + 'EXEC_ID': "{:<80}".format('0.1alpha'), # Editable + 'FTYPE': np.int32(1), # Editable + 'CDATE': np.int32(now.strftime('%Y%j')), + 'CTIME': np.int32(now.strftime('%H%M%S')), + 'WDATE': np.int32(now.strftime('%Y%j')), + 'WTIME': np.int32(now.strftime('%H%M%S')), + 'SDATE': np.int32(self.time[0].strftime('%Y%j')), + 'STIME': np.int32(self.time[0].strftime('%H%M%S')), + 'TSTEP': np.int32(tstep), + 'NTHIK': np.int32(1), # Editable + 'NCOLS': None, # Projection dependent + 'NROWS': None, # Projection dependent + 'NLAYS': np.int32(len(self.lev['data'])), + 'NVARS': None, # Projection dependent + 'GDTYP': None, # Projection dependent + 'P_ALP': None, # Projection dependent + 'P_BET': None, # Projection dependent + 'P_GAM': None, # Projection dependent + 'XCENT': None, # Projection dependent + 'YCENT': None, # Projection dependent + 'XORIG': None, # Projection dependent + 'YORIG': None, # Projection dependent + 'XCELL': None, # Projection dependent + 'YCELL': None, # Projection dependent + 'VGTYP': np.int32(7), # Editable + 'VGTOP': np.float32(5000.), # Editable + 'VGLVLS': np.array([1., 0.], dtype=np.float32), # Editable + 'GDNAM': "{:<16}".format(''), # Editable + 'UPNAM': "{:<16}".format('HERMESv3'), + 'FILEDESC': "", # Editable + 'HISTORY': "", # Editable + 'VAR-LIST': str_var_list(self)} + + # Editable attributes + for att_name, att_value in current_attributes.items(): + if att_name == 'EXEC_ID': + self.global_attrs[att_name] = "{:<80}".format(att_value) # Editable + elif att_name == 'FTYPE': + self.global_attrs[att_name] = np.int32(att_value) # Editable + elif att_name == 'NTHIK': + self.global_attrs[att_name] = np.int32(att_value) # Editable + elif att_name == 'VGTYP': + self.global_attrs[att_name] = np.int32(att_value) # Editable + elif att_name == 'VGTOP': + self.global_attrs[att_name] = np.float32(att_value) # Editable + elif att_name == 'VGLVLS': + self.global_attrs[att_name] = np.array(att_value.split(), dtype=np.float32) # Editable + elif att_name == 'GDNAM': + self.global_attrs[att_name] = "{:<16}".format(att_value) # Editable + elif att_name == 'FILEDESC': + self.global_attrs[att_name] = att_value # Editable + elif att_name == 'HISTORY': + self.global_attrs[att_name] = att_value # Editable + + # Projection dependent attributes + if isinstance(self, nes.LCCNes): + self.global_attrs['NCOLS'] = np.int32(len(self._x['data'])) + self.global_attrs['NROWS'] = np.int32(len(self._y['data'])) + self.global_attrs['NVARS'] = np.int32(len(self.variables)) + self.global_attrs['GDTYP'] = np.int32(2) + + self.global_attrs['P_ALP'] = np.float64(self.projection_data['standard_parallel'][0]) + self.global_attrs['P_BET'] = np.float64(self.projection_data['standard_parallel'][1]) + self.global_attrs['P_GAM'] = np.float64(self.projection_data['longitude_of_central_meridian']) + self.global_attrs['XCENT'] = np.float64(self.projection_data['longitude_of_central_meridian']) + self.global_attrs['YCENT'] = np.float64(self.projection_data['latitude_of_projection_origin']) + self.global_attrs['XORIG'] = np.float64( + self._x['data'][0]) - (np.float64(self._x['data'][1] - 
self._x['data'][0]) / 2) + self.global_attrs['YORIG'] = np.float64( + self._y['data'][0]) - (np.float64(self._y['data'][1] - self._y['data'][0]) / 2) + self.global_attrs['XCELL'] = np.float64(self._x['data'][1] - self._x['data'][0]) + self.global_attrs['YCELL'] = np.float64(self._y['data'][1] - self._y['data'][0]) + + return None
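The XORIG/YORIG computation above shifts the first cell centre back by half a cell width, so the origin lands on the cell edge; a numeric sketch with hypothetical LCC coordinates:

    import numpy as np

    x = np.array([0.0, 12000.0, 24000.0])  # hypothetical x centres, metres
    xcell = np.float64(x[1] - x[0])        # 12000.0
    xorig = np.float64(x[0]) - xcell / 2   # -6000.0, the western cell edge
    print(xorig, xcell)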
+ + + +
+[docs] +def create_dimensions(self, netcdf): + """ + Create the 'TSTEP', 'DATE-TIME', 'LAY', 'VAR', 'COL' and 'ROW' dimensions. + + Parameters + ---------- + self : nes.Nes + netcdf : Dataset + netcdf4-python open dataset. + """ + netcdf.createDimension('TSTEP', len(self._time)) + netcdf.createDimension('DATE-TIME', 2) + netcdf.createDimension('LAY', len(self._lev['data'])) + netcdf.createDimension('VAR', len(self.variables)) + if isinstance(self, nes.LCCNes): + netcdf.createDimension('COL', len(self._x['data'])) + netcdf.createDimension('ROW', len(self._y['data'])) + + return None
+ + + +
+[docs] +def create_dimension_variables(self, netcdf): + """ + Create the 'TFLAG' variable. + + Parameters + ---------- + self : nes.Nes + netcdf : Dataset + NetCDF object. + """ + + tflag = netcdf.createVariable('TFLAG', 'i', ('TSTEP', 'VAR', 'DATE-TIME',)) + tflag.setncatts({'units': "{:<16}".format('<YYYYDDD,HHMMSS>'), 'long_name': "{:<16}".format('TFLAG'), + 'var_desc': "{:<80}".format('Timestep-valid flags: (1) YYYYDDD or (2) HHMMSS')}) + tflag[:] = create_tflag(self) + + return None
+ + + +
+[docs] +def create_variables(self, netcdf): + """ + Create the netCDF file variables. + + Parameters + ---------- + self : nes.Nes + netcdf : Dataset + netcdf4-python open dataset. + """ + for var_name, var_info in self.variables.items(): + var = netcdf.createVariable(var_name, 'f', ('TSTEP', 'LAY', 'ROW', 'COL',), + zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + var.units = var_info['units'] + var.long_name = str(var_info['long_name']) + var.var_desc = str(var_info['var_desc']) + if var_info['data'] is not None: + if self.info: + print("Rank {0:03d}: Filling {1}".format(self.rank, var_name)) + + if isinstance(var_info['data'], int) and var_info['data'] == 0: + var[self.write_axis_limits['t_min']:self.write_axis_limits['t_max'], + self.write_axis_limits['z_min']:self.write_axis_limits['z_max'], + self.write_axis_limits['y_min']:self.write_axis_limits['y_max'], + self.write_axis_limits['x_min']:self.write_axis_limits['x_max']] = 0 + + elif len(var_info['data'].shape) == 4: + var[self.write_axis_limits['t_min']:self.write_axis_limits['t_max'], + self.write_axis_limits['z_min']:self.write_axis_limits['z_max'], + self.write_axis_limits['y_min']:self.write_axis_limits['y_max'], + self.write_axis_limits['x_min']:self.write_axis_limits['x_max']] = var_info['data'] + + return None
\ No newline at end of file diff --git a/docs/build/html/_modules/nes/nes_formats/monarch_format.html b/docs/build/html/_modules/nes/nes_formats/monarch_format.html new file mode 100644 index 0000000000000000000000000000000000000000..0a8be6613611dc6c888f78a4ea4c851aed0c95d2 --- /dev/null +++ b/docs/build/html/_modules/nes/nes_formats/monarch_format.html @@ -0,0 +1,225 @@ nes.nes_formats.monarch_format — NES 1.1.3 documentation

Source code for nes.nes_formats.monarch_format

+#!/usr/bin/env python
+
+import numpy as np
+import nes
+from netCDF4 import Dataset
+from mpi4py import MPI
+
+
+# noinspection DuplicatedCode
+
+[docs] +def to_netcdf_monarch(self, path, chunking=False, keep_open=False): + """ + Create the NetCDF using netcdf4-python methods. + + Parameters + ---------- + self : nes.Nes + Source projection Nes Object. + path : str + Path to the output netCDF file. + chunking : bool + Indicates if you want to chunk the output netCDF. + keep_open : bool + Indicates if you want to keep the NetCDF open to fill the data by time step. + """ + + self.to_dtype(np.float32) + + # Open NetCDF + if self.info: + print("Rank {0:03d}: Creating {1}".format(self.rank, path)) + if self.size > 1: + netcdf = Dataset(path, format="NETCDF4", mode='w', parallel=True, comm=self.comm, info=MPI.Info()) + else: + netcdf = Dataset(path, format="NETCDF4", mode='w', parallel=False) + if self.info: + print("Rank {0:03d}: NetCDF ready to write".format(self.rank)) + + # Create dimensions + self._create_dimensions(netcdf) + + # Create dimension variables + self._lev['data'] = np.array(self._lev['data'], dtype=np.float32) + self._lat['data'] = np.array(self._lat['data'], dtype=np.float32) + self._lat_bnds['data'] = np.array(self._lat_bnds['data'], dtype=np.float32) + self._lon['data'] = np.array(self._lon['data'], dtype=np.float32) + self._lon_bnds['data'] = np.array(self._lon_bnds['data'], dtype=np.float32) + if isinstance(self, nes.RotatedNes): + self._rlat['data'] = np.array(self._rlat['data'], dtype=np.float32) + self._rlon['data'] = np.array(self._rlon['data'], dtype=np.float32) + if isinstance(self, nes.LCCNes) or isinstance(self, nes.MercatorNes): + self._y['data'] = np.array(self._y['data'], dtype=np.float32) + self._x['data'] = np.array(self._x['data'], dtype=np.float32) + + self._create_dimension_variables(netcdf) + if self.info: + print("Rank {0:03d}: Dimensions done".format(self.rank)) + + # Create cell measures + if 'cell_area' in self.cell_measures.keys(): + self.cell_measures['cell_area']['data'] = np.array(self.cell_measures['cell_area']['data'], dtype=np.float32) + self._create_cell_measures(netcdf) + + # Create variables + self._create_variables(netcdf, chunking=chunking) + + # Create metadata + self._create_metadata(netcdf) + + # Set global attributes + if self.global_attrs is not None: + for att_name, att_value in self.global_attrs.items(): + netcdf.setncattr(att_name, att_value) + netcdf.setncattr('Conventions', 'CF-1.7') + + # Close NetCDF + if keep_open: + self.netcdf = netcdf + else: + netcdf.close() + + return None
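A hedged usage sketch (hypothetical paths; the open_netcdf and load helpers follow the NES tutorials and are not defined in this file). The writer casts all coordinates to float32 and stamps the file with Conventions = 'CF-1.7', as the body above shows.

    import nes

    nessy = nes.open_netcdf('monarch_input.nc')  # hypothetical path
    nessy.load()
    to_monarch_units(nessy)                      # rescale the data to MONARCH units
    to_netcdf_monarch(nessy, 'monarch_output.nc')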
+ + + +
+[docs] +def to_monarch_units(self): + """ + Change the data values according to the MONARCH conventions. + + Parameters + ---------- + self : nes.Nes + + Returns + ------- + dict + Variables in MONARCH units. + """ + for var_name in self.variables.keys(): + if isinstance(self.variables[var_name]['data'], np.ndarray): + if self.variables[var_name]['units'] == 'mol.s-1.m-2': + # Kmol to mol + self.variables[var_name]['data'] = np.array(self.variables[var_name]['data'] * 1000, dtype=np.float32) + elif self.variables[var_name]['units'] == 'kg.s-1.m-2': + # No unit change needed + self.variables[var_name]['data'] = np.array(self.variables[var_name]['data'], dtype=np.float32) + + else: + raise TypeError("The unit '{0}' of species {1} is not defined correctly. ".format( + self.variables[var_name]['units'], var_name) + + "Should be 'mol.s-1.m-2' or 'kg.s-1.m-2'") + self.variables[var_name]['dtype'] = np.float32 + return self.variables
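The Kmol-to-mol branch is a single factor-1000 rescale; a made-up example:

    import numpy as np

    data_kmol = np.array([2.5e-6], dtype=np.float32)  # hypothetical, stored in Kmol.s-1.m-2
    print(np.array(data_kmol * 1000, dtype=np.float32))  # [0.0025] -> mol.s-1.m-2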
\ No newline at end of file diff --git a/docs/build/html/_modules/nes/nes_formats/wrf_chem_format.html b/docs/build/html/_modules/nes/nes_formats/wrf_chem_format.html new file mode 100644 index 0000000000000000000000000000000000000000..eb4945f639db998e484633445e744b6eeddcc9c3 --- /dev/null +++ b/docs/build/html/_modules/nes/nes_formats/wrf_chem_format.html @@ -0,0 +1,518 @@ nes.nes_formats.wrf_chem_format — NES 1.1.3 documentation

Source code for nes.nes_formats.wrf_chem_format

+#!/usr/bin/env python
+
+import numpy as np
+import nes
+from netCDF4 import Dataset
+from mpi4py import MPI
+from copy import deepcopy
+from datetime import datetime
+
+GLOBAL_ATTRIBUTES_ORDER = [
+    'TITLE', 'START_DATE', 'WEST-EAST_GRID_DIMENSION', 'SOUTH-NORTH_GRID_DIMENSION', 'BOTTOM-TOP_GRID_DIMENSION', 'DX',
+    'DY', 'GRIDTYPE', 'DIFF_OPT', 'KM_OPT', 'DAMP_OPT', 'DAMPCOEF', 'KHDIF', 'KVDIF', 'MP_PHYSICS', 'RA_LW_PHYSICS',
+    'RA_SW_PHYSICS', 'SF_SFCLAY_PHYSICS', 'SF_SURFACE_PHYSICS', 'BL_PBL_PHYSICS', 'CU_PHYSICS', 'SF_LAKE_PHYSICS',
+    'SURFACE_INPUT_SOURCE', 'SST_UPDATE', 'GRID_FDDA', 'GFDDA_INTERVAL_M', 'GFDDA_END_H', 'GRID_SFDDA',
+    'SGFDDA_INTERVAL_M', 'SGFDDA_END_H', 'WEST-EAST_PATCH_START_UNSTAG', 'WEST-EAST_PATCH_END_UNSTAG',
+    'WEST-EAST_PATCH_START_STAG', 'WEST-EAST_PATCH_END_STAG', 'SOUTH-NORTH_PATCH_START_UNSTAG',
+    'SOUTH-NORTH_PATCH_END_UNSTAG', 'SOUTH-NORTH_PATCH_START_STAG', 'SOUTH-NORTH_PATCH_END_STAG',
+    'BOTTOM-TOP_PATCH_START_UNSTAG', 'BOTTOM-TOP_PATCH_END_UNSTAG', 'BOTTOM-TOP_PATCH_START_STAG',
+    'BOTTOM-TOP_PATCH_END_STAG', 'GRID_ID', 'PARENT_ID', 'I_PARENT_START', 'J_PARENT_START', 'PARENT_GRID_RATIO', 'DT',
+    'CEN_LAT', 'CEN_LON', 'TRUELAT1', 'TRUELAT2', 'MOAD_CEN_LAT', 'STAND_LON', 'POLE_LAT', 'POLE_LON', 'GMT', 'JULYR',
+    'JULDAY', 'MAP_PROJ', 'MMINLU', 'NUM_LAND_CAT', 'ISWATER', 'ISLAKE', 'ISICE', 'ISURBAN', 'ISOILWATER']
+
+
+# noinspection DuplicatedCode
+
+[docs] +def to_netcdf_wrf_chem(self, path, chunking=False, keep_open=False): + """ + Create the NetCDF using netcdf4-python methods. + + Parameters + ---------- + self : nes.Nes + Source projection Nes Object. + path : str + Path to the output netCDF file. + chunking : bool + Indicates if you want to chunk the output netCDF. + keep_open : bool + Indicates if you want to keep the NetCDF open to fill the data by time step. + """ + self.to_dtype(np.float32) + + set_global_attributes(self) + change_variable_attributes(self) + + # Open NetCDF + if self.info: + print("Rank {0:03d}: Creating {1}".format(self.rank, path)) + if self.size > 1: + netcdf = Dataset(path, format="NETCDF4", mode='w', parallel=True, comm=self.comm, info=MPI.Info()) + else: + netcdf = Dataset(path, format="NETCDF4", mode='w', parallel=False) + if self.info: + print("Rank {0:03d}: NetCDF ready to write".format(self.rank)) + + # Create dimensions + create_dimensions(self, netcdf) + + create_dimension_variables(self, netcdf) + if self.info: + print("Rank {0:03d}: Dimensions done".format(self.rank)) + + # Create variables + create_variables(self, netcdf) + + for att_name in GLOBAL_ATTRIBUTES_ORDER: + netcdf.setncattr(att_name, self.global_attrs[att_name]) + + # Close NetCDF + if keep_open: + self.netcdf = netcdf + else: + netcdf.close() + + return None
+ + + +
+[docs] +def change_variable_attributes(self): + """ + Modify the emission variable attributes so that the output can be used as input for the WRF-CHEM model. + + Parameters + ---------- + self : nes.Nes + """ + for var_name in self.variables.keys(): + if self.variables[var_name]['units'] == 'mol.h-1.km-2': + self.variables[var_name]['FieldType'] = np.int32(104) + self.variables[var_name]['MemoryOrder'] = "XYZ" + self.variables[var_name]['description'] = "EMISSIONS" + self.variables[var_name]['units'] = "mol km^-2 hr^-1" + self.variables[var_name]['stagger'] = "" + self.variables[var_name]['coordinates'] = "XLONG XLAT" + + elif self.variables[var_name]['units'] == 'ug.s-1.m-2': + self.variables[var_name]['FieldType'] = np.int32(104) + self.variables[var_name]['MemoryOrder'] = "XYZ" + self.variables[var_name]['description'] = "EMISSIONS" + self.variables[var_name]['units'] = "ug/m3 m/s" + self.variables[var_name]['stagger'] = "" + self.variables[var_name]['coordinates'] = "XLONG XLAT" + + else: + raise TypeError("The unit '{0}' of species {1} is not defined correctly. ".format( + self.variables[var_name]['units'], var_name) + "Should be 'mol.h-1.km-2' or 'ug.s-1.m-2'") + + if 'long_name' in self.variables[var_name].keys(): + del self.variables[var_name]['long_name'] + + return None
+ + + +
+[docs] +def to_wrf_chem_units(self): + """ + Change the data values according to the WRF-CHEM conventions. + + Parameters + ---------- + self : nes.Nes + + Returns + ------- + dict + Variables in WRF-CHEM units. + """ + self.calculate_grid_area(overwrite=False) + for var_name in self.variables.keys(): + if isinstance(self.variables[var_name]['data'], np.ndarray): + if self.variables[var_name]['units'] == 'mol.h-1.km-2': + # 10**6 -> from m2 to km2 + # 10**3 -> from kmol to mol + # 3600 -> from s to h + self.variables[var_name]['data'] = np.array( + self.variables[var_name]['data'] * 10 ** 6 * 10 ** 3 * 3600, dtype=np.float32) + elif self.variables[var_name]['units'] == 'ug.s-1.m-2': + # 10**9 -> from kg to ug + self.variables[var_name]['data'] = np.array( + self.variables[var_name]['data'] * 10 ** 9, dtype=np.float32) + + else: + raise TypeError("The unit '{0}' of species {1} is not defined correctly. ".format( + self.variables[var_name]['units'], var_name) + "Should be 'mol.h-1.km-2' or 'ug.s-1.m-2'") + self.variables[var_name]['dtype'] = np.float32 + + return self.variables
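The combined factor for the 'mol.h-1.km-2' branch works out to 3.6e12; a quick check:

    # 1e6 (m-2 -> km-2) * 1e3 (Kmol -> mol) * 3600 (s-1 -> h-1)
    factor = 10 ** 6 * 10 ** 3 * 3600
    print(factor)  # 3600000000000, i.e. 3.6e12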
+ + + +
+[docs] +def create_times_var(self): + """ + Create the content of the WRF-CHEM variable 'Times'. + + Parameters + ---------- + self : nes.Nes + + Returns + ------- + numpy.ndarray + Array with the content of 'Times'. + """ + aux_times = np.chararray((len(self.time), 19), itemsize=1) + + for i_d, aux_date in enumerate(self.time): + aux_times[i_d] = list(aux_date.strftime("%Y-%m-%d_%H:%M:%S")) + + return aux_times
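Each timestamp is exactly 19 characters long, which is what the 'DateStrLen' dimension created below encodes; a quick check:

    from datetime import datetime

    stamp = datetime(2023, 6, 16, 12).strftime("%Y-%m-%d_%H:%M:%S")
    print(stamp, len(stamp))  # 2023-06-16_12:00:00 19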
+ + + +
+[docs] +def set_global_attributes(self): + """ + Set the NetCDF global attributes + + Parameters + ---------- + self : nes.Nes + """ + now = datetime.now() + if len(self.time) > 1: + tstep = ((self.time[1] - self.time[0]).seconds // 3600) * 10000 + else: + tstep = 1 * 10000 + + current_attributes = deepcopy(self.global_attrs) + del self.global_attrs + + self.global_attrs = {'TITLE': None, + 'START_DATE': self.time[0].strftime("%Y-%m-%d_%H:%M:%S"), + 'WEST-EAST_GRID_DIMENSION': None, # Projection dependent attributes + 'SOUTH-NORTH_GRID_DIMENSION': None, # Projection dependent attributes + 'BOTTOM-TOP_GRID_DIMENSION': np.int32(45), + 'DX': None, # Projection dependent attributes + 'DY': None, # Projection dependent attributes + 'GRIDTYPE': 'C', + 'DIFF_OPT': np.int32(1), + 'KM_OPT': np.int32(4), + 'DAMP_OPT': np.int32(3), + 'DAMPCOEF': np.float32(0.2), + 'KHDIF': np.float32(0.), + 'KVDIF': np.float32(0.), + 'MP_PHYSICS': np.int32(6), + 'RA_LW_PHYSICS': np.int32(4), + 'RA_SW_PHYSICS': np.int32(4), + 'SF_SFCLAY_PHYSICS': np.int32(2), + 'SF_SURFACE_PHYSICS': np.int32(2), + 'BL_PBL_PHYSICS': np.int32(8), + 'CU_PHYSICS': np.int32(0), + 'SF_LAKE_PHYSICS': np.int32(0), + 'SURFACE_INPUT_SOURCE': None, # Projection dependent attributes + 'SST_UPDATE': np.int32(0), + 'GRID_FDDA': np.int32(0), + 'GFDDA_INTERVAL_M': np.int32(0), + 'GFDDA_END_H': np.int32(0), + 'GRID_SFDDA': np.int32(0), + 'SGFDDA_INTERVAL_M': np.int32(0), + 'SGFDDA_END_H': np.int32(0), + 'WEST-EAST_PATCH_START_UNSTAG': None, # Projection dependent attributes + 'WEST-EAST_PATCH_END_UNSTAG': None, # Projection dependent attributes + 'WEST-EAST_PATCH_START_STAG': None, # Projection dependent attributes + 'WEST-EAST_PATCH_END_STAG': None, # Projection dependent attributes + 'SOUTH-NORTH_PATCH_START_UNSTAG': None, # Projection dependent attributes + 'SOUTH-NORTH_PATCH_END_UNSTAG': None, # Projection dependent attributes + 'SOUTH-NORTH_PATCH_START_STAG': None, # Projection dependent attributes + 'SOUTH-NORTH_PATCH_END_STAG': None, # Projection dependent attributes + 'BOTTOM-TOP_PATCH_START_UNSTAG': None, + 'BOTTOM-TOP_PATCH_END_UNSTAG': None, + 'BOTTOM-TOP_PATCH_START_STAG': None, + 'BOTTOM-TOP_PATCH_END_STAG': None, + 'GRID_ID': np.int32(1), + 'PARENT_ID': np.int32(0), + 'I_PARENT_START': np.int32(1), + 'J_PARENT_START': np.int32(1), + 'PARENT_GRID_RATIO': np.int32(1), + 'DT': np.float32(18.), + 'CEN_LAT': None, # Projection dependent attributes + 'CEN_LON': None, # Projection dependent attributes + 'TRUELAT1': None, # Projection dependent attributes + 'TRUELAT2': None, # Projection dependent attributes + 'MOAD_CEN_LAT': None, # Projection dependent attributes + 'STAND_LON': None, # Projection dependent attributes + 'POLE_LAT': None, # Projection dependent attributes + 'POLE_LON': None, # Projection dependent attributes + 'GMT': np.float32(self.time[0].hour), + 'JULYR': np.int32(self.time[0].year), + 'JULDAY': np.int32(self.time[0].strftime("%j")), + 'MAP_PROJ': None, # Projection dependent attributes + 'MMINLU': 'MODIFIED_IGBP_MODIS_NOAH', + 'NUM_LAND_CAT': np.int32(41), + 'ISWATER': np.int32(17), + 'ISLAKE': np.int32(-1), + 'ISICE': np.int32(15), + 'ISURBAN': np.int32(13), + 'ISOILWATER': np.int32(14), + 'HISTORY': "", # Editable + } + + # Editable attributes + float_atts = ['DAMPCOEF', 'KHDIF', 'KVDIF', 'CEN_LAT', 'CEN_LON', 'DT'] + int_atts = ['BOTTOM-TOP_GRID_DIMENSION', 'DIFF_OPT', 'KM_OPT', 'DAMP_OPT', + 'MP_PHYSICS', 'RA_LW_PHYSICS', 'RA_SW_PHYSICS', 'SF_SFCLAY_PHYSICS', 'SF_SURFACE_PHYSICS', + 'BL_PBL_PHYSICS', 'CU_PHYSICS', 
'SF_LAKE_PHYSICS', 'SURFACE_INPUT_SOURCE', 'SST_UPDATE', + 'GRID_FDDA', 'GFDDA_INTERVAL_M', 'GFDDA_END_H', 'GRID_SFDDA', 'SGFDDA_INTERVAL_M', 'SGFDDA_END_H', + 'BOTTOM-TOP_PATCH_START_UNSTAG', 'BOTTOM-TOP_PATCH_END_UNSTAG', 'BOTTOM-TOP_PATCH_START_STAG', + 'BOTTOM-TOP_PATCH_END_STAG', 'GRID_ID', 'PARENT_ID', 'I_PARENT_START', 'J_PARENT_START', + 'PARENT_GRID_RATIO', 'NUM_LAND_CAT', 'ISWATER', 'ISLAKE', 'ISICE', 'ISURBAN', 'ISOILWATER'] + str_atts = ['GRIDTYPE', 'MMINLU', 'HISTORY'] + for att_name, att_value in current_attributes.items(): + if att_name in int_atts: + self.global_attrs[att_name] = np.int32(att_value) + elif att_name in float_atts: + self.global_attrs[att_name] = np.float32(att_value) + elif att_name in str_atts: + self.global_attrs[att_name] = str(att_value) + + # Projection dependent attributes + if isinstance(self, nes.LCCNes) or isinstance(self, nes.MercatorNes): + self.global_attrs['WEST-EAST_GRID_DIMENSION'] = np.int32(len(self._x['data']) + 1) + self.global_attrs['SOUTH-NORTH_GRID_DIMENSION'] = np.int32(len(self._y['data']) + 1) + self.global_attrs['DX'] = np.float32(self._x['data'][1] - self._x['data'][0]) + self.global_attrs['DY'] = np.float32(self._y['data'][1] - self._y['data'][0]) + self.global_attrs['SURFACE_INPUT_SOURCE'] = np.int32(1) + self.global_attrs['WEST-EAST_PATCH_START_UNSTAG'] = np.int32(1) + self.global_attrs['WEST-EAST_PATCH_END_UNSTAG'] = np.int32(len(self._x['data'])) + self.global_attrs['WEST-EAST_PATCH_START_STAG'] = np.int32(1) + self.global_attrs['WEST-EAST_PATCH_END_STAG'] = np.int32(len(self._x['data']) + 1) + self.global_attrs['SOUTH-NORTH_PATCH_START_UNSTAG'] = np.int32(1) + self.global_attrs['SOUTH-NORTH_PATCH_END_UNSTAG'] = np.int32(len(self._y['data'])) + self.global_attrs['SOUTH-NORTH_PATCH_START_STAG'] = np.int32(1) + self.global_attrs['SOUTH-NORTH_PATCH_END_STAG'] = np.int32(len(self._y['data']) + 1) + + self.global_attrs['POLE_LAT'] = np.float32(90) + self.global_attrs['POLE_LON'] = np.float32(0) + + if isinstance(self, nes.LCCNes): + self.global_attrs['MAP_PROJ'] = np.int32(1) + self.global_attrs['TRUELAT1'] = np.float32(self.projection_data['standard_parallel'][0]) + self.global_attrs['TRUELAT2'] = np.float32(self.projection_data['standard_parallel'][1]) + self.global_attrs['MOAD_CEN_LAT'] = np.float32(self.projection_data['latitude_of_projection_origin']) + self.global_attrs['STAND_LON'] = np.float32(self.projection_data['longitude_of_central_meridian']) + self.global_attrs['CEN_LAT'] = np.float32(self.projection_data['latitude_of_projection_origin']) + self.global_attrs['CEN_LON'] = np.float32(self.projection_data['longitude_of_central_meridian']) + elif isinstance(self, nes.MercatorNes): + self.global_attrs['MAP_PROJ'] = np.int32(3) + self.global_attrs['TRUELAT1'] = np.float32(self.projection_data['standard_parallel']) + self.global_attrs['TRUELAT2'] = np.float32(0) + self.global_attrs['MOAD_CEN_LAT'] = np.float32(self.projection_data['standard_parallel']) + self.global_attrs['STAND_LON'] = np.float32(self.projection_data['longitude_of_projection_origin']) + self.global_attrs['CEN_LAT'] = np.float32(self.projection_data['standard_parallel']) + self.global_attrs['CEN_LON'] = np.float32(self.projection_data['longitude_of_projection_origin']) + + return None
+ + + +
+[docs] +def create_dimensions(self, netcdf): + """ + Create the 'Time', 'DateStrLen', 'emissions_zdim', 'west_east' and 'south_north' dimensions. + + Parameters + ---------- + self : nes.Nes + netcdf : Dataset + netcdf4-python open dataset. + """ + netcdf.createDimension('Time', len(self._time)) + netcdf.createDimension('DateStrLen', 19) + netcdf.createDimension('emissions_zdim', len(self._lev['data'])) + if isinstance(self, nes.LCCNes): + netcdf.createDimension('west_east', len(self._x['data'])) + netcdf.createDimension('south_north', len(self._y['data'])) + + return None
+ + + +
+[docs] +def create_dimension_variables(self, netcdf): + """ + Create the 'Times' variable. + + Parameters + ---------- + self : nes.Nes + netcdf : Dataset + NetCDF object. + """ + + times = netcdf.createVariable('Times', 'S1', ('Time', 'DateStrLen', )) + times[:] = create_times_var(self) + + return None
+ + + +
+[docs] +def create_variables(self, netcdf): + """ + Create the netCDF file variables. + + Parameters + ---------- + self : nes.Nes + netcdf : Dataset + netcdf4-python open dataset. + """ + for var_name, var_info in self.variables.items(): + var = netcdf.createVariable(var_name, 'f', ('Time', 'emissions_zdim', 'south_north', 'west_east',), + zlib=self.zip_lvl > 0, complevel=self.zip_lvl) + var.FieldType = var_info['FieldType'] + var.MemoryOrder = var_info['MemoryOrder'] + var.description = var_info['description'] + var.units = var_info['units'] + var.stagger = var_info['stagger'] + var.coordinates = var_info['coordinates'] + + if var_info['data'] is not None: + if self.info: + print("Rank {0:03d}: Filling {1}".format(self.rank, var_name)) + + if isinstance(var_info['data'], int) and var_info['data'] == 0: + var[self.write_axis_limits['t_min']:self.write_axis_limits['t_max'], + self.write_axis_limits['z_min']:self.write_axis_limits['z_max'], + self.write_axis_limits['y_min']:self.write_axis_limits['y_max'], + self.write_axis_limits['x_min']:self.write_axis_limits['x_max']] = 0 + + elif len(var_info['data'].shape) == 4: + var[self.write_axis_limits['t_min']:self.write_axis_limits['t_max'], + self.write_axis_limits['z_min']:self.write_axis_limits['z_max'], + self.write_axis_limits['y_min']:self.write_axis_limits['y_max'], + self.write_axis_limits['x_min']:self.write_axis_limits['x_max']] = var_info['data'] + + return None
+ +
+ + + + \ No newline at end of file diff --git a/docs/build/html/_sources/authors.rst.txt b/docs/build/html/_sources/authors.rst.txt new file mode 100644 index 0000000000000000000000000000000000000000..29b236c46b765dceabbbb656f3375a05ea384c35 --- /dev/null +++ b/docs/build/html/_sources/authors.rst.txt @@ -0,0 +1,9 @@ +======= +Authors +======= + +* Carles Tena (`ctwebpage`_) +* Alba Vilanova Cortezón (`avcwebpage`_) + +.. _ctwebpage: https://www.bsc.es/tena-carles +.. _avcwebpage: https://www.bsc.es/vilanova-cortezon-alba diff --git a/docs/build/html/_sources/changelog.rst.txt b/docs/build/html/_sources/changelog.rst.txt new file mode 100644 index 0000000000000000000000000000000000000000..4478cf9307b17bce6359103f6b77d45ee1a712ae --- /dev/null +++ b/docs/build/html/_sources/changelog.rst.txt @@ -0,0 +1,6 @@ +========= +CHANGELOG +========= + +.. include:: ../../CHANGELOG.rst + :start-after: .. start-here \ No newline at end of file diff --git a/docs/build/html/_sources/contributing.rst.txt b/docs/build/html/_sources/contributing.rst.txt new file mode 100644 index 0000000000000000000000000000000000000000..52feb0790289b0e29b3b94a953b2802360243a99 --- /dev/null +++ b/docs/build/html/_sources/contributing.rst.txt @@ -0,0 +1,6 @@ +============ +Contributing +============ + +.. include:: ../../CONTRIBUTING.rst + :start-after: .. start-here diff --git a/docs/build/html/_sources/formats.rst.txt b/docs/build/html/_sources/formats.rst.txt new file mode 100644 index 0000000000000000000000000000000000000000..5b99a69f94b2f06f75ff90493b9ef817b5ce36fb --- /dev/null +++ b/docs/build/html/_sources/formats.rst.txt @@ -0,0 +1,34 @@ +Formats +======================== + +CAMS RA format +---------------------------------------- + +.. automodule:: nes.nes_formats.cams_ra_format + :members: + :undoc-members: + :show-inheritance: + +CMAQ format +------------------------------------ + +.. automodule:: nes.nes_formats.cmaq_format + :members: + :undoc-members: + :show-inheritance: + +MONARCH format +--------------------------------------- + +.. automodule:: nes.nes_formats.monarch_format + :members: + :undoc-members: + :show-inheritance: + +WRF CHEM format +----------------------------------------- + +.. automodule:: nes.nes_formats.wrf_chem_format + :members: + :undoc-members: + :show-inheritance: \ No newline at end of file diff --git a/docs/build/html/_sources/index.rst.txt b/docs/build/html/_sources/index.rst.txt new file mode 100644 index 0000000000000000000000000000000000000000..db32bc2a7549d32b90b2337477c0f6b2dd985fec --- /dev/null +++ b/docs/build/html/_sources/index.rst.txt @@ -0,0 +1,15 @@ +======== +Contents +======== + +.. toctree:: + :maxdepth: 2 + + readme + object + methods + formats + projections + contributing + changelog + authors \ No newline at end of file diff --git a/docs/build/html/_sources/methods.rst.txt b/docs/build/html/_sources/methods.rst.txt new file mode 100644 index 0000000000000000000000000000000000000000..095aa39c09283b01266c18793cf50467da148f37 --- /dev/null +++ b/docs/build/html/_sources/methods.rst.txt @@ -0,0 +1,34 @@ +Methods +=================== + +Generic methods +--------------------------------- + +.. automodule:: nes.methods.cell_measures + :members: + :undoc-members: + :show-inheritance: + +Horizontal interpolation +-------------------------------------------- + +.. automodule:: nes.methods.horizontal_interpolation + :members: + :undoc-members: + :show-inheritance: + +Spatial join +-------------------------------- + +.. 
automodule:: nes.methods.spatial_join + :members: + :undoc-members: + :show-inheritance: + +Vertical interpolation +------------------------------------------ + +.. automodule:: nes.methods.vertical_interpolation + :members: + :undoc-members: + :show-inheritance: diff --git a/docs/build/html/_sources/object.rst.txt b/docs/build/html/_sources/object.rst.txt new file mode 100644 index 0000000000000000000000000000000000000000..0a52b3340399edb0fbc35fc3dc6ed457b8d3bd7d --- /dev/null +++ b/docs/build/html/_sources/object.rst.txt @@ -0,0 +1,18 @@ +The NES object +============== + +Creating a NES object +---------------------- + +.. automodule:: nes.create_nes + :members: + :undoc-members: + :show-inheritance: + +Loading a NES object +-------------------- + +.. automodule:: nes.load_nes + :members: + :undoc-members: + :show-inheritance: diff --git a/docs/build/html/_sources/projections.rst.txt b/docs/build/html/_sources/projections.rst.txt new file mode 100644 index 0000000000000000000000000000000000000000..07dd376e389fd536f53b907407f51a453c0b97c2 --- /dev/null +++ b/docs/build/html/_sources/projections.rst.txt @@ -0,0 +1,74 @@ +Projections +=========================== + +Default projection +--------------------------------------- + +.. automodule:: nes.nc_projections.default_nes + :members: + :undoc-members: + :show-inheritance: + +Regular lat lon projection +-------------------------------------- + +.. automodule:: nes.nc_projections.latlon_nes + :members: + :undoc-members: + :show-inheritance: + +LCC projection +----------------------------------- + +.. automodule:: nes.nc_projections.lcc_nes + :members: + :undoc-members: + :show-inheritance: + +Mercator projection +---------------------------------------- + +.. automodule:: nes.nc_projections.mercator_nes + :members: + :undoc-members: + :show-inheritance: + +Points projection +-------------------------------------- + +.. automodule:: nes.nc_projections.points_nes + :members: + :undoc-members: + :show-inheritance: + +GHOST projection +--------------------------------------------- + +.. automodule:: nes.nc_projections.points_nes_ghost + :members: + :undoc-members: + :show-inheritance: + +Providentia projection +--------------------------------------------------- + +.. automodule:: nes.nc_projections.points_nes_providentia + :members: + :undoc-members: + :show-inheritance: + +Rotated projection +--------------------------------------- + +.. automodule:: nes.nc_projections.rotated_nes + :members: + :undoc-members: + :show-inheritance: + +Rotated nested projection +----------------------------------------------- + +.. automodule:: nes.nc_projections.rotated_nested_nes + :members: + :undoc-members: + :show-inheritance: diff --git a/docs/build/html/_sources/readme.rst.txt b/docs/build/html/_sources/readme.rst.txt new file mode 100644 index 0000000000000000000000000000000000000000..c26dea0d974fb0f68eb2ea444c2cda0557e5adc0 --- /dev/null +++ b/docs/build/html/_sources/readme.rst.txt @@ -0,0 +1,6 @@ +============ +Introduction +============ + +.. include:: ../../README.rst + :start-after: .. start-here \ No newline at end of file diff --git a/docs/build/html/_static/_sphinx_javascript_frameworks_compat.js b/docs/build/html/_static/_sphinx_javascript_frameworks_compat.js new file mode 100644 index 0000000000000000000000000000000000000000..81415803ec2750c82251e896e7eb7b0ac842dac1 --- /dev/null +++ b/docs/build/html/_static/_sphinx_javascript_frameworks_compat.js @@ -0,0 +1,123 @@ +/* Compatability shim for jQuery and underscores.js. 
+ * + * Copyright Sphinx contributors + * Released under the two clause BSD licence + */ + +/** + * small helper function to urldecode strings + * + * See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/decodeURIComponent#Decoding_query_parameters_from_a_URL + */ +jQuery.urldecode = function(x) { + if (!x) { + return x + } + return decodeURIComponent(x.replace(/\+/g, ' ')); +}; + +/** + * small helper function to urlencode strings + */ +jQuery.urlencode = encodeURIComponent; + +/** + * This function returns the parsed url parameters of the + * current request. Multiple values per key are supported, + * it will always return arrays of strings for the value parts. + */ +jQuery.getQueryParameters = function(s) { + if (typeof s === 'undefined') + s = document.location.search; + var parts = s.substr(s.indexOf('?') + 1).split('&'); + var result = {}; + for (var i = 0; i < parts.length; i++) { + var tmp = parts[i].split('=', 2); + var key = jQuery.urldecode(tmp[0]); + var value = jQuery.urldecode(tmp[1]); + if (key in result) + result[key].push(value); + else + result[key] = [value]; + } + return result; +}; + +/** + * highlight a given string on a jquery object by wrapping it in + * span elements with the given class name. + */ +jQuery.fn.highlightText = function(text, className) { + function highlight(node, addItems) { + if (node.nodeType === 3) { + var val = node.nodeValue; + var pos = val.toLowerCase().indexOf(text); + if (pos >= 0 && + !jQuery(node.parentNode).hasClass(className) && + !jQuery(node.parentNode).hasClass("nohighlight")) { + var span; + var isInSVG = jQuery(node).closest("body, svg, foreignObject").is("svg"); + if (isInSVG) { + span = document.createElementNS("http://www.w3.org/2000/svg", "tspan"); + } else { + span = document.createElement("span"); + span.className = className; + } + span.appendChild(document.createTextNode(val.substr(pos, text.length))); + node.parentNode.insertBefore(span, node.parentNode.insertBefore( + document.createTextNode(val.substr(pos + text.length)), + node.nextSibling)); + node.nodeValue = val.substr(0, pos); + if (isInSVG) { + var rect = document.createElementNS("http://www.w3.org/2000/svg", "rect"); + var bbox = node.parentElement.getBBox(); + rect.x.baseVal.value = bbox.x; + rect.y.baseVal.value = bbox.y; + rect.width.baseVal.value = bbox.width; + rect.height.baseVal.value = bbox.height; + rect.setAttribute('class', className); + addItems.push({ + "parent": node.parentNode, + "target": rect}); + } + } + } + else if (!jQuery(node).is("button, select, textarea")) { + jQuery.each(node.childNodes, function() { + highlight(this, addItems); + }); + } + } + var addItems = []; + var result = this.each(function() { + highlight(this, addItems); + }); + for (var i = 0; i < addItems.length; ++i) { + jQuery(addItems[i].parent).before(addItems[i].target); + } + return result; +}; + +/* + * backward compatibility for jQuery.browser + * This will be supported until firefox bug is fixed. + */ +if (!jQuery.browser) { + jQuery.uaMatch = function(ua) { + ua = ua.toLowerCase(); + + var match = /(chrome)[ \/]([\w.]+)/.exec(ua) || + /(webkit)[ \/]([\w.]+)/.exec(ua) || + /(opera)(?:.*version|)[ \/]([\w.]+)/.exec(ua) || + /(msie) ([\w.]+)/.exec(ua) || + ua.indexOf("compatible") < 0 && /(mozilla)(?:.*? 
rv:([\w.]+)|)/.exec(ua) || + []; + + return { + browser: match[ 1 ] || "", + version: match[ 2 ] || "0" + }; + }; + jQuery.browser = {}; + jQuery.browser[jQuery.uaMatch(navigator.userAgent).browser] = true; +} diff --git a/docs/build/html/_static/basic.css b/docs/build/html/_static/basic.css new file mode 100644 index 0000000000000000000000000000000000000000..30fee9d0f76a47aec5ef23e46adbf6bab4671eac --- /dev/null +++ b/docs/build/html/_static/basic.css @@ -0,0 +1,925 @@ +/* + * basic.css + * ~~~~~~~~~ + * + * Sphinx stylesheet -- basic theme. + * + * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. + * + */ + +/* -- main layout ----------------------------------------------------------- */ + +div.clearer { + clear: both; +} + +div.section::after { + display: block; + content: ''; + clear: left; +} + +/* -- relbar ---------------------------------------------------------------- */ + +div.related { + width: 100%; + font-size: 90%; +} + +div.related h3 { + display: none; +} + +div.related ul { + margin: 0; + padding: 0 0 0 10px; + list-style: none; +} + +div.related li { + display: inline; +} + +div.related li.right { + float: right; + margin-right: 5px; +} + +/* -- sidebar --------------------------------------------------------------- */ + +div.sphinxsidebarwrapper { + padding: 10px 5px 0 10px; +} + +div.sphinxsidebar { + float: left; + width: 230px; + margin-left: -100%; + font-size: 90%; + word-wrap: break-word; + overflow-wrap : break-word; +} + +div.sphinxsidebar ul { + list-style: none; +} + +div.sphinxsidebar ul ul, +div.sphinxsidebar ul.want-points { + margin-left: 20px; + list-style: square; +} + +div.sphinxsidebar ul ul { + margin-top: 0; + margin-bottom: 0; +} + +div.sphinxsidebar form { + margin-top: 10px; +} + +div.sphinxsidebar input { + border: 1px solid #98dbcc; + font-family: sans-serif; + font-size: 1em; +} + +div.sphinxsidebar #searchbox form.search { + overflow: hidden; +} + +div.sphinxsidebar #searchbox input[type="text"] { + float: left; + width: 80%; + padding: 0.25em; + box-sizing: border-box; +} + +div.sphinxsidebar #searchbox input[type="submit"] { + float: left; + width: 20%; + border-left: none; + padding: 0.25em; + box-sizing: border-box; +} + + +img { + border: 0; + max-width: 100%; +} + +/* -- search page ----------------------------------------------------------- */ + +ul.search { + margin: 10px 0 0 20px; + padding: 0; +} + +ul.search li { + padding: 5px 0 5px 20px; + background-image: url(file.png); + background-repeat: no-repeat; + background-position: 0 7px; +} + +ul.search li a { + font-weight: bold; +} + +ul.search li p.context { + color: #888; + margin: 2px 0 0 30px; + text-align: left; +} + +ul.keywordmatches li.goodmatch a { + font-weight: bold; +} + +/* -- index page ------------------------------------------------------------ */ + +table.contentstable { + width: 90%; + margin-left: auto; + margin-right: auto; +} + +table.contentstable p.biglink { + line-height: 150%; +} + +a.biglink { + font-size: 1.3em; +} + +span.linkdescr { + font-style: italic; + padding-top: 5px; + font-size: 90%; +} + +/* -- general index --------------------------------------------------------- */ + +table.indextable { + width: 100%; +} + +table.indextable td { + text-align: left; + vertical-align: top; +} + +table.indextable ul { + margin-top: 0; + margin-bottom: 0; + list-style-type: none; +} + +table.indextable > tbody > tr > td > ul { + padding-left: 0em; +} + +table.indextable tr.pcap { + height: 10px; +} + 
+table.indextable tr.cap { + margin-top: 10px; + background-color: #f2f2f2; +} + +img.toggler { + margin-right: 3px; + margin-top: 3px; + cursor: pointer; +} + +div.modindex-jumpbox { + border-top: 1px solid #ddd; + border-bottom: 1px solid #ddd; + margin: 1em 0 1em 0; + padding: 0.4em; +} + +div.genindex-jumpbox { + border-top: 1px solid #ddd; + border-bottom: 1px solid #ddd; + margin: 1em 0 1em 0; + padding: 0.4em; +} + +/* -- domain module index --------------------------------------------------- */ + +table.modindextable td { + padding: 2px; + border-collapse: collapse; +} + +/* -- general body styles --------------------------------------------------- */ + +div.body { + min-width: 360px; + max-width: 800px; +} + +div.body p, div.body dd, div.body li, div.body blockquote { + -moz-hyphens: auto; + -ms-hyphens: auto; + -webkit-hyphens: auto; + hyphens: auto; +} + +a.headerlink { + visibility: hidden; +} + +a:visited { + color: #551A8B; +} + +h1:hover > a.headerlink, +h2:hover > a.headerlink, +h3:hover > a.headerlink, +h4:hover > a.headerlink, +h5:hover > a.headerlink, +h6:hover > a.headerlink, +dt:hover > a.headerlink, +caption:hover > a.headerlink, +p.caption:hover > a.headerlink, +div.code-block-caption:hover > a.headerlink { + visibility: visible; +} + +div.body p.caption { + text-align: inherit; +} + +div.body td { + text-align: left; +} + +.first { + margin-top: 0 !important; +} + +p.rubric { + margin-top: 30px; + font-weight: bold; +} + +img.align-left, figure.align-left, .figure.align-left, object.align-left { + clear: left; + float: left; + margin-right: 1em; +} + +img.align-right, figure.align-right, .figure.align-right, object.align-right { + clear: right; + float: right; + margin-left: 1em; +} + +img.align-center, figure.align-center, .figure.align-center, object.align-center { + display: block; + margin-left: auto; + margin-right: auto; +} + +img.align-default, figure.align-default, .figure.align-default { + display: block; + margin-left: auto; + margin-right: auto; +} + +.align-left { + text-align: left; +} + +.align-center { + text-align: center; +} + +.align-default { + text-align: center; +} + +.align-right { + text-align: right; +} + +/* -- sidebars -------------------------------------------------------------- */ + +div.sidebar, +aside.sidebar { + margin: 0 0 0.5em 1em; + border: 1px solid #ddb; + padding: 7px; + background-color: #ffe; + width: 40%; + float: right; + clear: right; + overflow-x: auto; +} + +p.sidebar-title { + font-weight: bold; +} + +nav.contents, +aside.topic, +div.admonition, div.topic, blockquote { + clear: left; +} + +/* -- topics ---------------------------------------------------------------- */ + +nav.contents, +aside.topic, +div.topic { + border: 1px solid #ccc; + padding: 7px; + margin: 10px 0 10px 0; +} + +p.topic-title { + font-size: 1.1em; + font-weight: bold; + margin-top: 10px; +} + +/* -- admonitions ----------------------------------------------------------- */ + +div.admonition { + margin-top: 10px; + margin-bottom: 10px; + padding: 7px; +} + +div.admonition dt { + font-weight: bold; +} + +p.admonition-title { + margin: 0px 10px 5px 0px; + font-weight: bold; +} + +div.body p.centered { + text-align: center; + margin-top: 25px; +} + +/* -- content of sidebars/topics/admonitions -------------------------------- */ + +div.sidebar > :last-child, +aside.sidebar > :last-child, +nav.contents > :last-child, +aside.topic > :last-child, +div.topic > :last-child, +div.admonition > :last-child { + margin-bottom: 0; +} + +div.sidebar::after, 
+aside.sidebar::after, +nav.contents::after, +aside.topic::after, +div.topic::after, +div.admonition::after, +blockquote::after { + display: block; + content: ''; + clear: both; +} + +/* -- tables ---------------------------------------------------------------- */ + +table.docutils { + margin-top: 10px; + margin-bottom: 10px; + border: 0; + border-collapse: collapse; +} + +table.align-center { + margin-left: auto; + margin-right: auto; +} + +table.align-default { + margin-left: auto; + margin-right: auto; +} + +table caption span.caption-number { + font-style: italic; +} + +table caption span.caption-text { +} + +table.docutils td, table.docutils th { + padding: 1px 8px 1px 5px; + border-top: 0; + border-left: 0; + border-right: 0; + border-bottom: 1px solid #aaa; +} + +th { + text-align: left; + padding-right: 5px; +} + +table.citation { + border-left: solid 1px gray; + margin-left: 1px; +} + +table.citation td { + border-bottom: none; +} + +th > :first-child, +td > :first-child { + margin-top: 0px; +} + +th > :last-child, +td > :last-child { + margin-bottom: 0px; +} + +/* -- figures --------------------------------------------------------------- */ + +div.figure, figure { + margin: 0.5em; + padding: 0.5em; +} + +div.figure p.caption, figcaption { + padding: 0.3em; +} + +div.figure p.caption span.caption-number, +figcaption span.caption-number { + font-style: italic; +} + +div.figure p.caption span.caption-text, +figcaption span.caption-text { +} + +/* -- field list styles ----------------------------------------------------- */ + +table.field-list td, table.field-list th { + border: 0 !important; +} + +.field-list ul { + margin: 0; + padding-left: 1em; +} + +.field-list p { + margin: 0; +} + +.field-name { + -moz-hyphens: manual; + -ms-hyphens: manual; + -webkit-hyphens: manual; + hyphens: manual; +} + +/* -- hlist styles ---------------------------------------------------------- */ + +table.hlist { + margin: 1em 0; +} + +table.hlist td { + vertical-align: top; +} + +/* -- object description styles --------------------------------------------- */ + +.sig { + font-family: 'Consolas', 'Menlo', 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', monospace; +} + +.sig-name, code.descname { + background-color: transparent; + font-weight: bold; +} + +.sig-name { + font-size: 1.1em; +} + +code.descname { + font-size: 1.2em; +} + +.sig-prename, code.descclassname { + background-color: transparent; +} + +.optional { + font-size: 1.3em; +} + +.sig-paren { + font-size: larger; +} + +.sig-param.n { + font-style: italic; +} + +/* C++ specific styling */ + +.sig-inline.c-texpr, +.sig-inline.cpp-texpr { + font-family: unset; +} + +.sig.c .k, .sig.c .kt, +.sig.cpp .k, .sig.cpp .kt { + color: #0033B3; +} + +.sig.c .m, +.sig.cpp .m { + color: #1750EB; +} + +.sig.c .s, .sig.c .sc, +.sig.cpp .s, .sig.cpp .sc { + color: #067D17; +} + + +/* -- other body styles ----------------------------------------------------- */ + +ol.arabic { + list-style: decimal; +} + +ol.loweralpha { + list-style: lower-alpha; +} + +ol.upperalpha { + list-style: upper-alpha; +} + +ol.lowerroman { + list-style: lower-roman; +} + +ol.upperroman { + list-style: upper-roman; +} + +:not(li) > ol > li:first-child > :first-child, +:not(li) > ul > li:first-child > :first-child { + margin-top: 0px; +} + +:not(li) > ol > li:last-child > :last-child, +:not(li) > ul > li:last-child > :last-child { + margin-bottom: 0px; +} + +ol.simple ol p, +ol.simple ul p, +ul.simple ol p, +ul.simple ul p { + margin-top: 0; +} + +ol.simple > li:not(:first-child) > 
p, +ul.simple > li:not(:first-child) > p { + margin-top: 0; +} + +ol.simple p, +ul.simple p { + margin-bottom: 0; +} + +aside.footnote > span, +div.citation > span { + float: left; +} +aside.footnote > span:last-of-type, +div.citation > span:last-of-type { + padding-right: 0.5em; +} +aside.footnote > p { + margin-left: 2em; +} +div.citation > p { + margin-left: 4em; +} +aside.footnote > p:last-of-type, +div.citation > p:last-of-type { + margin-bottom: 0em; +} +aside.footnote > p:last-of-type:after, +div.citation > p:last-of-type:after { + content: ""; + clear: both; +} + +dl.field-list { + display: grid; + grid-template-columns: fit-content(30%) auto; +} + +dl.field-list > dt { + font-weight: bold; + word-break: break-word; + padding-left: 0.5em; + padding-right: 5px; +} + +dl.field-list > dd { + padding-left: 0.5em; + margin-top: 0em; + margin-left: 0em; + margin-bottom: 0em; +} + +dl { + margin-bottom: 15px; +} + +dd > :first-child { + margin-top: 0px; +} + +dd ul, dd table { + margin-bottom: 10px; +} + +dd { + margin-top: 3px; + margin-bottom: 10px; + margin-left: 30px; +} + +.sig dd { + margin-top: 0px; + margin-bottom: 0px; +} + +.sig dl { + margin-top: 0px; + margin-bottom: 0px; +} + +dl > dd:last-child, +dl > dd:last-child > :last-child { + margin-bottom: 0; +} + +dt:target, span.highlighted { + background-color: #fbe54e; +} + +rect.highlighted { + fill: #fbe54e; +} + +dl.glossary dt { + font-weight: bold; + font-size: 1.1em; +} + +.versionmodified { + font-style: italic; +} + +.system-message { + background-color: #fda; + padding: 5px; + border: 3px solid red; +} + +.footnote:target { + background-color: #ffa; +} + +.line-block { + display: block; + margin-top: 1em; + margin-bottom: 1em; +} + +.line-block .line-block { + margin-top: 0; + margin-bottom: 0; + margin-left: 1.5em; +} + +.guilabel, .menuselection { + font-family: sans-serif; +} + +.accelerator { + text-decoration: underline; +} + +.classifier { + font-style: oblique; +} + +.classifier:before { + font-style: normal; + margin: 0 0.5em; + content: ":"; + display: inline-block; +} + +abbr, acronym { + border-bottom: dotted 1px; + cursor: help; +} + +.translated { + background-color: rgba(207, 255, 207, 0.2) +} + +.untranslated { + background-color: rgba(255, 207, 207, 0.2) +} + +/* -- code displays --------------------------------------------------------- */ + +pre { + overflow: auto; + overflow-y: hidden; /* fixes display issues on Chrome browsers */ +} + +pre, div[class*="highlight-"] { + clear: both; +} + +span.pre { + -moz-hyphens: none; + -ms-hyphens: none; + -webkit-hyphens: none; + hyphens: none; + white-space: nowrap; +} + +div[class*="highlight-"] { + margin: 1em 0; +} + +td.linenos pre { + border: 0; + background-color: transparent; + color: #aaa; +} + +table.highlighttable { + display: block; +} + +table.highlighttable tbody { + display: block; +} + +table.highlighttable tr { + display: flex; +} + +table.highlighttable td { + margin: 0; + padding: 0; +} + +table.highlighttable td.linenos { + padding-right: 0.5em; +} + +table.highlighttable td.code { + flex: 1; + overflow: hidden; +} + +.highlight .hll { + display: block; +} + +div.highlight pre, +table.highlighttable pre { + margin: 0; +} + +div.code-block-caption + div { + margin-top: 0; +} + +div.code-block-caption { + margin-top: 1em; + padding: 2px 5px; + font-size: small; +} + +div.code-block-caption code { + background-color: transparent; +} + +table.highlighttable td.linenos, +span.linenos, +div.highlight span.gp { /* gp: Generic.Prompt */ + user-select: 
none; + -webkit-user-select: text; /* Safari fallback only */ + -webkit-user-select: none; /* Chrome/Safari */ + -moz-user-select: none; /* Firefox */ + -ms-user-select: none; /* IE10+ */ +} + +div.code-block-caption span.caption-number { + padding: 0.1em 0.3em; + font-style: italic; +} + +div.code-block-caption span.caption-text { +} + +div.literal-block-wrapper { + margin: 1em 0; +} + +code.xref, a code { + background-color: transparent; + font-weight: bold; +} + +h1 code, h2 code, h3 code, h4 code, h5 code, h6 code { + background-color: transparent; +} + +.viewcode-link { + float: right; +} + +.viewcode-back { + float: right; + font-family: sans-serif; +} + +div.viewcode-block:target { + margin: -1px -10px; + padding: 0 10px; +} + +/* -- math display ---------------------------------------------------------- */ + +img.math { + vertical-align: middle; +} + +div.body div.math p { + text-align: center; +} + +span.eqno { + float: right; +} + +span.eqno a.headerlink { + position: absolute; + z-index: 1; +} + +div.math:hover a.headerlink { + visibility: visible; +} + +/* -- printout stylesheet --------------------------------------------------- */ + +@media print { + div.document, + div.documentwrapper, + div.bodywrapper { + margin: 0 !important; + width: 100%; + } + + div.sphinxsidebar, + div.related, + div.footer, + #top-link { + display: none; + } +} \ No newline at end of file diff --git a/docs/build/html/_static/css/badge_only.css b/docs/build/html/_static/css/badge_only.css new file mode 100644 index 0000000000000000000000000000000000000000..c718cee4418893ad43bbac98411059097788706f --- /dev/null +++ b/docs/build/html/_static/css/badge_only.css @@ -0,0 +1 @@ +.clearfix{*zoom:1}.clearfix:after,.clearfix:before{display:table;content:""}.clearfix:after{clear:both}@font-face{font-family:FontAwesome;font-style:normal;font-weight:400;src:url(fonts/fontawesome-webfont.eot?674f50d287a8c48dc19ba404d20fe713?#iefix) format("embedded-opentype"),url(fonts/fontawesome-webfont.woff2?af7ae505a9eed503f8b8e6982036873e) format("woff2"),url(fonts/fontawesome-webfont.woff?fee66e712a8a08eef5805a46892932ad) format("woff"),url(fonts/fontawesome-webfont.ttf?b06871f281fee6b241d60582ae9369b9) format("truetype"),url(fonts/fontawesome-webfont.svg?912ec66d7572ff821749319396470bde#FontAwesome) format("svg")}.fa:before{font-family:FontAwesome;font-style:normal;font-weight:400;line-height:1}.fa:before,a .fa{text-decoration:inherit}.fa:before,a .fa,li .fa{display:inline-block}li .fa-large:before{width:1.875em}ul.fas{list-style-type:none;margin-left:2em;text-indent:-.8em}ul.fas li .fa{width:.8em}ul.fas li .fa-large:before{vertical-align:baseline}.fa-book:before,.icon-book:before{content:"\f02d"}.fa-caret-down:before,.icon-caret-down:before{content:"\f0d7"}.fa-caret-up:before,.icon-caret-up:before{content:"\f0d8"}.fa-caret-left:before,.icon-caret-left:before{content:"\f0d9"}.fa-caret-right:before,.icon-caret-right:before{content:"\f0da"}.rst-versions{position:fixed;bottom:0;left:0;width:300px;color:#fcfcfc;background:#1f1d1d;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;z-index:400}.rst-versions a{color:#2980b9;text-decoration:none}.rst-versions .rst-badge-small{display:none}.rst-versions .rst-current-version{padding:12px;background-color:#272525;display:block;text-align:right;font-size:90%;cursor:pointer;color:#27ae60}.rst-versions .rst-current-version:after{clear:both;content:"";display:block}.rst-versions .rst-current-version .fa{color:#fcfcfc}.rst-versions .rst-current-version .fa-book,.rst-versions 
.rst-current-version .icon-book{float:left}.rst-versions .rst-current-version.rst-out-of-date{background-color:#e74c3c;color:#fff}.rst-versions .rst-current-version.rst-active-old-version{background-color:#f1c40f;color:#000}.rst-versions.shift-up{height:auto;max-height:100%;overflow-y:scroll}.rst-versions.shift-up .rst-other-versions{display:block}.rst-versions .rst-other-versions{font-size:90%;padding:12px;color:grey;display:none}.rst-versions .rst-other-versions hr{display:block;height:1px;border:0;margin:20px 0;padding:0;border-top:1px solid #413d3d}.rst-versions .rst-other-versions dd{display:inline-block;margin:0}.rst-versions .rst-other-versions dd a{display:inline-block;padding:6px;color:#fcfcfc}.rst-versions.rst-badge{width:auto;bottom:20px;right:20px;left:auto;border:none;max-width:300px;max-height:90%}.rst-versions.rst-badge .fa-book,.rst-versions.rst-badge .icon-book{float:none;line-height:30px}.rst-versions.rst-badge.shift-up .rst-current-version{text-align:right}.rst-versions.rst-badge.shift-up .rst-current-version .fa-book,.rst-versions.rst-badge.shift-up .rst-current-version .icon-book{float:left}.rst-versions.rst-badge>.rst-current-version{width:auto;height:30px;line-height:30px;padding:0 6px;display:block;text-align:center}@media screen and (max-width:768px){.rst-versions{width:85%;display:none}.rst-versions.shift{display:block}} \ No newline at end of file diff --git a/docs/build/html/_static/css/fonts/Roboto-Slab-Bold.woff b/docs/build/html/_static/css/fonts/Roboto-Slab-Bold.woff new file mode 100644 index 0000000000000000000000000000000000000000..6cb60000181dbd348963953ac8ac54afb46c63d5 Binary files /dev/null and b/docs/build/html/_static/css/fonts/Roboto-Slab-Bold.woff differ diff --git a/docs/build/html/_static/css/fonts/Roboto-Slab-Bold.woff2 b/docs/build/html/_static/css/fonts/Roboto-Slab-Bold.woff2 new file mode 100644 index 0000000000000000000000000000000000000000..7059e23142aae3d8bad6067fc734a6cffec779c9 Binary files /dev/null and b/docs/build/html/_static/css/fonts/Roboto-Slab-Bold.woff2 differ diff --git a/docs/build/html/_static/css/fonts/Roboto-Slab-Regular.woff b/docs/build/html/_static/css/fonts/Roboto-Slab-Regular.woff new file mode 100644 index 0000000000000000000000000000000000000000..f815f63f99da80ad2be69e4021023ec2981eaea0 Binary files /dev/null and b/docs/build/html/_static/css/fonts/Roboto-Slab-Regular.woff differ diff --git a/docs/build/html/_static/css/fonts/Roboto-Slab-Regular.woff2 b/docs/build/html/_static/css/fonts/Roboto-Slab-Regular.woff2 new file mode 100644 index 0000000000000000000000000000000000000000..f2c76e5bda18a9842e24cd60d8787257da215ca7 Binary files /dev/null and b/docs/build/html/_static/css/fonts/Roboto-Slab-Regular.woff2 differ diff --git a/docs/build/html/_static/css/fonts/fontawesome-webfont.eot b/docs/build/html/_static/css/fonts/fontawesome-webfont.eot new file mode 100644 index 0000000000000000000000000000000000000000..e9f60ca953f93e35eab4108bd414bc02ddcf3928 Binary files /dev/null and b/docs/build/html/_static/css/fonts/fontawesome-webfont.eot differ diff --git a/docs/build/html/_static/css/fonts/fontawesome-webfont.svg b/docs/build/html/_static/css/fonts/fontawesome-webfont.svg new file mode 100644 index 0000000000000000000000000000000000000000..855c845e538b65548118279537a04eab2ec6ef0d --- /dev/null +++ b/docs/build/html/_static/css/fonts/fontawesome-webfont.svg @@ -0,0 +1,2671 @@ + + + + +Created by FontForge 20120731 at Mon Oct 24 17:37:40 2016 + By ,,, +Copyright Dave Gandy 2016. All rights reserved. 
+ [2,671-line Font Awesome SVG glyph table omitted: the <glyph> markup was stripped during extraction, leaving only bare "+" diff markers] diff --git a/docs/build/html/_static/css/fonts/fontawesome-webfont.ttf b/docs/build/html/_static/css/fonts/fontawesome-webfont.ttf new file mode 100644 index 0000000000000000000000000000000000000000..35acda2fa1196aad98c2adf4378a7611dd713aa3 Binary files /dev/null and b/docs/build/html/_static/css/fonts/fontawesome-webfont.ttf differ diff --git a/docs/build/html/_static/css/fonts/fontawesome-webfont.woff b/docs/build/html/_static/css/fonts/fontawesome-webfont.woff new file mode 100644 index 0000000000000000000000000000000000000000..400014a4b06eee3d0c0d54402a47ab2601b2862b Binary files /dev/null and b/docs/build/html/_static/css/fonts/fontawesome-webfont.woff differ diff --git a/docs/build/html/_static/css/fonts/fontawesome-webfont.woff2 b/docs/build/html/_static/css/fonts/fontawesome-webfont.woff2 new file mode 100644 index 0000000000000000000000000000000000000000..4d13fc60404b91e398a37200c4a77b645cfd9586 Binary files /dev/null and b/docs/build/html/_static/css/fonts/fontawesome-webfont.woff2 differ diff --git a/docs/build/html/_static/css/fonts/lato-bold-italic.woff b/docs/build/html/_static/css/fonts/lato-bold-italic.woff new file mode 100644 index 0000000000000000000000000000000000000000..88ad05b9ff413055b4d4e89dd3eec1c193fa20c6 Binary files /dev/null and b/docs/build/html/_static/css/fonts/lato-bold-italic.woff differ diff --git a/docs/build/html/_static/css/fonts/lato-bold-italic.woff2 b/docs/build/html/_static/css/fonts/lato-bold-italic.woff2 new file mode 100644 index 0000000000000000000000000000000000000000..c4e3d804b57b625b16a36d767bfca6bbf63d414e Binary files /dev/null and b/docs/build/html/_static/css/fonts/lato-bold-italic.woff2 differ diff --git a/docs/build/html/_static/css/fonts/lato-bold.woff b/docs/build/html/_static/css/fonts/lato-bold.woff new file mode 100644 index 0000000000000000000000000000000000000000..c6dff51f063cc732fdb5fe786a8966de85f4ebec Binary files /dev/null and b/docs/build/html/_static/css/fonts/lato-bold.woff differ diff --git a/docs/build/html/_static/css/fonts/lato-bold.woff2 b/docs/build/html/_static/css/fonts/lato-bold.woff2 new file mode 100644 index
0000000000000000000000000000000000000000..bb195043cfc07fa52741c6144d7378b5ba8be4c5 Binary files /dev/null and b/docs/build/html/_static/css/fonts/lato-bold.woff2 differ diff --git a/docs/build/html/_static/css/fonts/lato-normal-italic.woff b/docs/build/html/_static/css/fonts/lato-normal-italic.woff new file mode 100644 index 0000000000000000000000000000000000000000..76114bc03362242c3325ecda6ce6d02bb737880f Binary files /dev/null and b/docs/build/html/_static/css/fonts/lato-normal-italic.woff differ diff --git a/docs/build/html/_static/css/fonts/lato-normal-italic.woff2 b/docs/build/html/_static/css/fonts/lato-normal-italic.woff2 new file mode 100644 index 0000000000000000000000000000000000000000..3404f37e2e312757841abe20343588a7740768ca Binary files /dev/null and b/docs/build/html/_static/css/fonts/lato-normal-italic.woff2 differ diff --git a/docs/build/html/_static/css/fonts/lato-normal.woff b/docs/build/html/_static/css/fonts/lato-normal.woff new file mode 100644 index 0000000000000000000000000000000000000000..ae1307ff5f4c48678621c240f8972d5a6e20b22c Binary files /dev/null and b/docs/build/html/_static/css/fonts/lato-normal.woff differ diff --git a/docs/build/html/_static/css/fonts/lato-normal.woff2 b/docs/build/html/_static/css/fonts/lato-normal.woff2 new file mode 100644 index 0000000000000000000000000000000000000000..3bf9843328a6359b6bd06e50010319c63da0d717 Binary files /dev/null and b/docs/build/html/_static/css/fonts/lato-normal.woff2 differ diff --git a/docs/build/html/_static/css/theme.css b/docs/build/html/_static/css/theme.css new file mode 100644 index 0000000000000000000000000000000000000000..19a446a0e70027fc1c2499729def6191c4bbbb20 --- /dev/null +++ b/docs/build/html/_static/css/theme.css @@ -0,0 +1,4 @@ +html{box-sizing:border-box}*,:after,:before{box-sizing:inherit}article,aside,details,figcaption,figure,footer,header,hgroup,nav,section{display:block}audio,canvas,video{display:inline-block;*display:inline;*zoom:1}[hidden],audio:not([controls]){display:none}*{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}html{font-size:100%;-webkit-text-size-adjust:100%;-ms-text-size-adjust:100%}body{margin:0}a:active,a:hover{outline:0}abbr[title]{border-bottom:1px dotted}b,strong{font-weight:700}blockquote{margin:0}dfn{font-style:italic}ins{background:#ff9;text-decoration:none}ins,mark{color:#000}mark{background:#ff0;font-style:italic;font-weight:700}.rst-content code,.rst-content tt,code,kbd,pre,samp{font-family:monospace,serif;_font-family:courier 
new,monospace;font-size:1em}pre{white-space:pre}q{quotes:none}q:after,q:before{content:"";content:none}small{font-size:85%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sup{top:-.5em}sub{bottom:-.25em}dl,ol,ul{margin:0;padding:0;list-style:none;list-style-image:none}li{list-style:none}dd{margin:0}img{border:0;-ms-interpolation-mode:bicubic;vertical-align:middle;max-width:100%}svg:not(:root){overflow:hidden}figure,form{margin:0}label{cursor:pointer}button,input,select,textarea{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle}button,input{line-height:normal}button,input[type=button],input[type=reset],input[type=submit]{cursor:pointer;-webkit-appearance:button;*overflow:visible}button[disabled],input[disabled]{cursor:default}input[type=search]{-webkit-appearance:textfield;-moz-box-sizing:content-box;-webkit-box-sizing:content-box;box-sizing:content-box}textarea{resize:vertical}table{border-collapse:collapse;border-spacing:0}td{vertical-align:top}.chromeframe{margin:.2em 0;background:#ccc;color:#000;padding:.2em 0}.ir{display:block;border:0;text-indent:-999em;overflow:hidden;background-color:transparent;background-repeat:no-repeat;text-align:left;direction:ltr;*line-height:0}.ir br{display:none}.hidden{display:none!important;visibility:hidden}.visuallyhidden{border:0;clip:rect(0 0 0 0);height:1px;margin:-1px;overflow:hidden;padding:0;position:absolute;width:1px}.visuallyhidden.focusable:active,.visuallyhidden.focusable:focus{clip:auto;height:auto;margin:0;overflow:visible;position:static;width:auto}.invisible{visibility:hidden}.relative{position:relative}big,small{font-size:100%}@media print{body,html,section{background:none!important}*{box-shadow:none!important;text-shadow:none!important;filter:none!important;-ms-filter:none!important}a,a:visited{text-decoration:underline}.ir a:after,a[href^="#"]:after,a[href^="javascript:"]:after{content:""}blockquote,pre{page-break-inside:avoid}thead{display:table-header-group}img,tr{page-break-inside:avoid}img{max-width:100%!important}@page{margin:.5cm}.rst-content .toctree-wrapper>p.caption,h2,h3,p{orphans:3;widows:3}.rst-content .toctree-wrapper>p.caption,h2,h3{page-break-after:avoid}}.btn,.fa:before,.icon:before,.rst-content .admonition,.rst-content .admonition-title:before,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .code-block-caption .headerlink:before,.rst-content .danger,.rst-content .eqno .headerlink:before,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning,.rst-content code.download span:first-child:before,.rst-content dl dt .headerlink:before,.rst-content h1 .headerlink:before,.rst-content h2 .headerlink:before,.rst-content h3 .headerlink:before,.rst-content h4 .headerlink:before,.rst-content h5 .headerlink:before,.rst-content h6 .headerlink:before,.rst-content p.caption .headerlink:before,.rst-content p .headerlink:before,.rst-content table>caption .headerlink:before,.rst-content tt.download span:first-child:before,.wy-alert,.wy-dropdown .caret:before,.wy-inline-validate.wy-inline-validate-danger .wy-input-context:before,.wy-inline-validate.wy-inline-validate-info .wy-input-context:before,.wy-inline-validate.wy-inline-validate-success .wy-input-context:before,.wy-inline-validate.wy-inline-validate-warning .wy-input-context:before,.wy-menu-vertical li.current>a button.toctree-expand:before,.wy-menu-vertical li.on a 
button.toctree-expand:before,.wy-menu-vertical li button.toctree-expand:before,input[type=color],input[type=date],input[type=datetime-local],input[type=datetime],input[type=email],input[type=month],input[type=number],input[type=password],input[type=search],input[type=tel],input[type=text],input[type=time],input[type=url],input[type=week],select,textarea{-webkit-font-smoothing:antialiased}.clearfix{*zoom:1}.clearfix:after,.clearfix:before{display:table;content:""}.clearfix:after{clear:both}/*! + * Font Awesome 4.7.0 by @davegandy - http://fontawesome.io - @fontawesome + * License - http://fontawesome.io/license (Font: SIL OFL 1.1, CSS: MIT License) + */@font-face{font-family:FontAwesome;src:url(fonts/fontawesome-webfont.eot?674f50d287a8c48dc19ba404d20fe713);src:url(fonts/fontawesome-webfont.eot?674f50d287a8c48dc19ba404d20fe713?#iefix&v=4.7.0) format("embedded-opentype"),url(fonts/fontawesome-webfont.woff2?af7ae505a9eed503f8b8e6982036873e) format("woff2"),url(fonts/fontawesome-webfont.woff?fee66e712a8a08eef5805a46892932ad) format("woff"),url(fonts/fontawesome-webfont.ttf?b06871f281fee6b241d60582ae9369b9) format("truetype"),url(fonts/fontawesome-webfont.svg?912ec66d7572ff821749319396470bde#fontawesomeregular) format("svg");font-weight:400;font-style:normal}.fa,.icon,.rst-content .admonition-title,.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content code.download span:first-child,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 .headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 .headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink,.rst-content tt.download span:first-child,.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a button.toctree-expand,.wy-menu-vertical li button.toctree-expand{display:inline-block;font:normal normal normal 14px/1 FontAwesome;font-size:inherit;text-rendering:auto;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}.fa-lg{font-size:1.33333em;line-height:.75em;vertical-align:-15%}.fa-2x{font-size:2em}.fa-3x{font-size:3em}.fa-4x{font-size:4em}.fa-5x{font-size:5em}.fa-fw{width:1.28571em;text-align:center}.fa-ul{padding-left:0;margin-left:2.14286em;list-style-type:none}.fa-ul>li{position:relative}.fa-li{position:absolute;left:-2.14286em;width:2.14286em;top:.14286em;text-align:center}.fa-li.fa-lg{left:-1.85714em}.fa-border{padding:.2em .25em .15em;border:.08em solid #eee;border-radius:.1em}.fa-pull-left{float:left}.fa-pull-right{float:right}.fa-pull-left.icon,.fa.fa-pull-left,.rst-content .code-block-caption .fa-pull-left.headerlink,.rst-content .eqno .fa-pull-left.headerlink,.rst-content .fa-pull-left.admonition-title,.rst-content code.download span.fa-pull-left:first-child,.rst-content dl dt .fa-pull-left.headerlink,.rst-content h1 .fa-pull-left.headerlink,.rst-content h2 .fa-pull-left.headerlink,.rst-content h3 .fa-pull-left.headerlink,.rst-content h4 .fa-pull-left.headerlink,.rst-content h5 .fa-pull-left.headerlink,.rst-content h6 .fa-pull-left.headerlink,.rst-content p .fa-pull-left.headerlink,.rst-content table>caption .fa-pull-left.headerlink,.rst-content tt.download span.fa-pull-left:first-child,.wy-menu-vertical li.current>a button.fa-pull-left.toctree-expand,.wy-menu-vertical li.on a button.fa-pull-left.toctree-expand,.wy-menu-vertical li button.fa-pull-left.toctree-expand{margin-right:.3em}.fa-pull-right.icon,.fa.fa-pull-right,.rst-content 
.code-block-caption .fa-pull-right.headerlink,.rst-content .eqno .fa-pull-right.headerlink,.rst-content .fa-pull-right.admonition-title,.rst-content code.download span.fa-pull-right:first-child,.rst-content dl dt .fa-pull-right.headerlink,.rst-content h1 .fa-pull-right.headerlink,.rst-content h2 .fa-pull-right.headerlink,.rst-content h3 .fa-pull-right.headerlink,.rst-content h4 .fa-pull-right.headerlink,.rst-content h5 .fa-pull-right.headerlink,.rst-content h6 .fa-pull-right.headerlink,.rst-content p .fa-pull-right.headerlink,.rst-content table>caption .fa-pull-right.headerlink,.rst-content tt.download span.fa-pull-right:first-child,.wy-menu-vertical li.current>a button.fa-pull-right.toctree-expand,.wy-menu-vertical li.on a button.fa-pull-right.toctree-expand,.wy-menu-vertical li button.fa-pull-right.toctree-expand{margin-left:.3em}.pull-right{float:right}.pull-left{float:left}.fa.pull-left,.pull-left.icon,.rst-content .code-block-caption .pull-left.headerlink,.rst-content .eqno .pull-left.headerlink,.rst-content .pull-left.admonition-title,.rst-content code.download span.pull-left:first-child,.rst-content dl dt .pull-left.headerlink,.rst-content h1 .pull-left.headerlink,.rst-content h2 .pull-left.headerlink,.rst-content h3 .pull-left.headerlink,.rst-content h4 .pull-left.headerlink,.rst-content h5 .pull-left.headerlink,.rst-content h6 .pull-left.headerlink,.rst-content p .pull-left.headerlink,.rst-content table>caption .pull-left.headerlink,.rst-content tt.download span.pull-left:first-child,.wy-menu-vertical li.current>a button.pull-left.toctree-expand,.wy-menu-vertical li.on a button.pull-left.toctree-expand,.wy-menu-vertical li button.pull-left.toctree-expand{margin-right:.3em}.fa.pull-right,.pull-right.icon,.rst-content .code-block-caption .pull-right.headerlink,.rst-content .eqno .pull-right.headerlink,.rst-content .pull-right.admonition-title,.rst-content code.download span.pull-right:first-child,.rst-content dl dt .pull-right.headerlink,.rst-content h1 .pull-right.headerlink,.rst-content h2 .pull-right.headerlink,.rst-content h3 .pull-right.headerlink,.rst-content h4 .pull-right.headerlink,.rst-content h5 .pull-right.headerlink,.rst-content h6 .pull-right.headerlink,.rst-content p .pull-right.headerlink,.rst-content table>caption .pull-right.headerlink,.rst-content tt.download span.pull-right:first-child,.wy-menu-vertical li.current>a button.pull-right.toctree-expand,.wy-menu-vertical li.on a button.pull-right.toctree-expand,.wy-menu-vertical li button.pull-right.toctree-expand{margin-left:.3em}.fa-spin{-webkit-animation:fa-spin 2s linear infinite;animation:fa-spin 2s linear infinite}.fa-pulse{-webkit-animation:fa-spin 1s steps(8) infinite;animation:fa-spin 1s steps(8) infinite}@-webkit-keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}to{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}@keyframes 
fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}to{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}.fa-rotate-90{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=1)";-webkit-transform:rotate(90deg);-ms-transform:rotate(90deg);transform:rotate(90deg)}.fa-rotate-180{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2)";-webkit-transform:rotate(180deg);-ms-transform:rotate(180deg);transform:rotate(180deg)}.fa-rotate-270{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=3)";-webkit-transform:rotate(270deg);-ms-transform:rotate(270deg);transform:rotate(270deg)}.fa-flip-horizontal{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=0, mirror=1)";-webkit-transform:scaleX(-1);-ms-transform:scaleX(-1);transform:scaleX(-1)}.fa-flip-vertical{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2, mirror=1)";-webkit-transform:scaleY(-1);-ms-transform:scaleY(-1);transform:scaleY(-1)}:root .fa-flip-horizontal,:root .fa-flip-vertical,:root .fa-rotate-90,:root .fa-rotate-180,:root .fa-rotate-270{filter:none}.fa-stack{position:relative;display:inline-block;width:2em;height:2em;line-height:2em;vertical-align:middle}.fa-stack-1x,.fa-stack-2x{position:absolute;left:0;width:100%;text-align:center}.fa-stack-1x{line-height:inherit}.fa-stack-2x{font-size:2em}.fa-inverse{color:#fff}.fa-glass:before{content:""}.fa-music:before{content:""}.fa-search:before,.icon-search:before{content:""}.fa-envelope-o:before{content:""}.fa-heart:before{content:""}.fa-star:before{content:""}.fa-star-o:before{content:""}.fa-user:before{content:""}.fa-film:before{content:""}.fa-th-large:before{content:""}.fa-th:before{content:""}.fa-th-list:before{content:""}.fa-check:before{content:""}.fa-close:before,.fa-remove:before,.fa-times:before{content:""}.fa-search-plus:before{content:""}.fa-search-minus:before{content:""}.fa-power-off:before{content:""}.fa-signal:before{content:""}.fa-cog:before,.fa-gear:before{content:""}.fa-trash-o:before{content:""}.fa-home:before,.icon-home:before{content:""}.fa-file-o:before{content:""}.fa-clock-o:before{content:""}.fa-road:before{content:""}.fa-download:before,.rst-content code.download span:first-child:before,.rst-content tt.download 
span:first-child:before{content:""}.fa-arrow-circle-o-down:before{content:""}.fa-arrow-circle-o-up:before{content:""}.fa-inbox:before{content:""}.fa-play-circle-o:before{content:""}.fa-repeat:before,.fa-rotate-right:before{content:""}.fa-refresh:before{content:""}.fa-list-alt:before{content:""}.fa-lock:before{content:""}.fa-flag:before{content:""}.fa-headphones:before{content:""}.fa-volume-off:before{content:""}.fa-volume-down:before{content:""}.fa-volume-up:before{content:""}.fa-qrcode:before{content:""}.fa-barcode:before{content:""}.fa-tag:before{content:""}.fa-tags:before{content:""}.fa-book:before,.icon-book:before{content:""}.fa-bookmark:before{content:""}.fa-print:before{content:""}.fa-camera:before{content:""}.fa-font:before{content:""}.fa-bold:before{content:""}.fa-italic:before{content:""}.fa-text-height:before{content:""}.fa-text-width:before{content:""}.fa-align-left:before{content:""}.fa-align-center:before{content:""}.fa-align-right:before{content:""}.fa-align-justify:before{content:""}.fa-list:before{content:""}.fa-dedent:before,.fa-outdent:before{content:""}.fa-indent:before{content:""}.fa-video-camera:before{content:""}.fa-image:before,.fa-photo:before,.fa-picture-o:before{content:""}.fa-pencil:before{content:""}.fa-map-marker:before{content:""}.fa-adjust:before{content:""}.fa-tint:before{content:""}.fa-edit:before,.fa-pencil-square-o:before{content:""}.fa-share-square-o:before{content:""}.fa-check-square-o:before{content:""}.fa-arrows:before{content:""}.fa-step-backward:before{content:""}.fa-fast-backward:before{content:""}.fa-backward:before{content:""}.fa-play:before{content:""}.fa-pause:before{content:""}.fa-stop:before{content:""}.fa-forward:before{content:""}.fa-fast-forward:before{content:""}.fa-step-forward:before{content:""}.fa-eject:before{content:""}.fa-chevron-left:before{content:""}.fa-chevron-right:before{content:""}.fa-plus-circle:before{content:""}.fa-minus-circle:before{content:""}.fa-times-circle:before,.wy-inline-validate.wy-inline-validate-danger .wy-input-context:before{content:""}.fa-check-circle:before,.wy-inline-validate.wy-inline-validate-success .wy-input-context:before{content:""}.fa-question-circle:before{content:""}.fa-info-circle:before{content:""}.fa-crosshairs:before{content:""}.fa-times-circle-o:before{content:""}.fa-check-circle-o:before{content:""}.fa-ban:before{content:""}.fa-arrow-left:before{content:""}.fa-arrow-right:before{content:""}.fa-arrow-up:before{content:""}.fa-arrow-down:before{content:""}.fa-mail-forward:before,.fa-share:before{content:""}.fa-expand:before{content:""}.fa-compress:before{content:""}.fa-plus:before{content:""}.fa-minus:before{content:""}.fa-asterisk:before{content:""}.fa-exclamation-circle:before,.rst-content .admonition-title:before,.wy-inline-validate.wy-inline-validate-info .wy-input-context:before,.wy-inline-validate.wy-inline-validate-warning 
.wy-input-context:before{content:""}.fa-gift:before{content:""}.fa-leaf:before{content:""}.fa-fire:before,.icon-fire:before{content:""}.fa-eye:before{content:""}.fa-eye-slash:before{content:""}.fa-exclamation-triangle:before,.fa-warning:before{content:""}.fa-plane:before{content:""}.fa-calendar:before{content:""}.fa-random:before{content:""}.fa-comment:before{content:""}.fa-magnet:before{content:""}.fa-chevron-up:before{content:""}.fa-chevron-down:before{content:""}.fa-retweet:before{content:""}.fa-shopping-cart:before{content:""}.fa-folder:before{content:""}.fa-folder-open:before{content:""}.fa-arrows-v:before{content:""}.fa-arrows-h:before{content:""}.fa-bar-chart-o:before,.fa-bar-chart:before{content:""}.fa-twitter-square:before{content:""}.fa-facebook-square:before{content:""}.fa-camera-retro:before{content:""}.fa-key:before{content:""}.fa-cogs:before,.fa-gears:before{content:""}.fa-comments:before{content:""}.fa-thumbs-o-up:before{content:""}.fa-thumbs-o-down:before{content:""}.fa-star-half:before{content:""}.fa-heart-o:before{content:""}.fa-sign-out:before{content:""}.fa-linkedin-square:before{content:""}.fa-thumb-tack:before{content:""}.fa-external-link:before{content:""}.fa-sign-in:before{content:""}.fa-trophy:before{content:""}.fa-github-square:before{content:""}.fa-upload:before{content:""}.fa-lemon-o:before{content:""}.fa-phone:before{content:""}.fa-square-o:before{content:""}.fa-bookmark-o:before{content:""}.fa-phone-square:before{content:""}.fa-twitter:before{content:""}.fa-facebook-f:before,.fa-facebook:before{content:""}.fa-github:before,.icon-github:before{content:""}.fa-unlock:before{content:""}.fa-credit-card:before{content:""}.fa-feed:before,.fa-rss:before{content:""}.fa-hdd-o:before{content:""}.fa-bullhorn:before{content:""}.fa-bell:before{content:""}.fa-certificate:before{content:""}.fa-hand-o-right:before{content:""}.fa-hand-o-left:before{content:""}.fa-hand-o-up:before{content:""}.fa-hand-o-down:before{content:""}.fa-arrow-circle-left:before,.icon-circle-arrow-left:before{content:""}.fa-arrow-circle-right:before,.icon-circle-arrow-right:before{content:""}.fa-arrow-circle-up:before{content:""}.fa-arrow-circle-down:before{content:""}.fa-globe:before{content:""}.fa-wrench:before{content:""}.fa-tasks:before{content:""}.fa-filter:before{content:""}.fa-briefcase:before{content:""}.fa-arrows-alt:before{content:""}.fa-group:before,.fa-users:before{content:""}.fa-chain:before,.fa-link:before,.icon-link:before{content:""}.fa-cloud:before{content:""}.fa-flask:before{content:""}.fa-cut:before,.fa-scissors:before{content:""}.fa-copy:before,.fa-files-o:before{content:""}.fa-paperclip:before{content:""}.fa-floppy-o:before,.fa-save:before{content:""}.fa-square:before{content:""}.fa-bars:before,.fa-navicon:before,.fa-reorder:before{content:""}.fa-list-ul:before{content:""}.fa-list-ol:before{content:""}.fa-strikethrough:before{content:""}.fa-underline:before{content:""}.fa-table:before{content:""}.fa-magic:before{content:""}.fa-truck:before{content:""}.fa-pinterest:before{content:""}.fa-pinterest-square:before{content:""}.fa-google-plus-square:before{content:""}.fa-google-plus:before{content:""}.fa-money:before{content:""}.fa-caret-down:before,.icon-caret-down:before,.wy-dropdown 
.caret:before{content:""}.fa-caret-up:before{content:""}.fa-caret-left:before{content:""}.fa-caret-right:before{content:""}.fa-columns:before{content:""}.fa-sort:before,.fa-unsorted:before{content:""}.fa-sort-desc:before,.fa-sort-down:before{content:""}.fa-sort-asc:before,.fa-sort-up:before{content:""}.fa-envelope:before{content:""}.fa-linkedin:before{content:""}.fa-rotate-left:before,.fa-undo:before{content:""}.fa-gavel:before,.fa-legal:before{content:""}.fa-dashboard:before,.fa-tachometer:before{content:""}.fa-comment-o:before{content:""}.fa-comments-o:before{content:""}.fa-bolt:before,.fa-flash:before{content:""}.fa-sitemap:before{content:""}.fa-umbrella:before{content:""}.fa-clipboard:before,.fa-paste:before{content:""}.fa-lightbulb-o:before{content:""}.fa-exchange:before{content:""}.fa-cloud-download:before{content:""}.fa-cloud-upload:before{content:""}.fa-user-md:before{content:""}.fa-stethoscope:before{content:""}.fa-suitcase:before{content:""}.fa-bell-o:before{content:""}.fa-coffee:before{content:""}.fa-cutlery:before{content:""}.fa-file-text-o:before{content:""}.fa-building-o:before{content:""}.fa-hospital-o:before{content:""}.fa-ambulance:before{content:""}.fa-medkit:before{content:""}.fa-fighter-jet:before{content:""}.fa-beer:before{content:""}.fa-h-square:before{content:""}.fa-plus-square:before{content:""}.fa-angle-double-left:before{content:""}.fa-angle-double-right:before{content:""}.fa-angle-double-up:before{content:""}.fa-angle-double-down:before{content:""}.fa-angle-left:before{content:""}.fa-angle-right:before{content:""}.fa-angle-up:before{content:""}.fa-angle-down:before{content:""}.fa-desktop:before{content:""}.fa-laptop:before{content:""}.fa-tablet:before{content:""}.fa-mobile-phone:before,.fa-mobile:before{content:""}.fa-circle-o:before{content:""}.fa-quote-left:before{content:""}.fa-quote-right:before{content:""}.fa-spinner:before{content:""}.fa-circle:before{content:""}.fa-mail-reply:before,.fa-reply:before{content:""}.fa-github-alt:before{content:""}.fa-folder-o:before{content:""}.fa-folder-open-o:before{content:""}.fa-smile-o:before{content:""}.fa-frown-o:before{content:""}.fa-meh-o:before{content:""}.fa-gamepad:before{content:""}.fa-keyboard-o:before{content:""}.fa-flag-o:before{content:""}.fa-flag-checkered:before{content:""}.fa-terminal:before{content:""}.fa-code:before{content:""}.fa-mail-reply-all:before,.fa-reply-all:before{content:""}.fa-star-half-empty:before,.fa-star-half-full:before,.fa-star-half-o:before{content:""}.fa-location-arrow:before{content:""}.fa-crop:before{content:""}.fa-code-fork:before{content:""}.fa-chain-broken:before,.fa-unlink:before{content:""}.fa-question:before{content:""}.fa-info:before{content:""}.fa-exclamation:before{content:""}.fa-superscript:before{content:""}.fa-subscript:before{content:""}.fa-eraser:before{content:""}.fa-puzzle-piece:before{content:""}.fa-microphone:before{content:""}.fa-microphone-slash:before{content:""}.fa-shield:before{content:""}.fa-calendar-o:before{content:""}.fa-fire-extinguisher:before{content:""}.fa-rocket:before{content:""}.fa-maxcdn:before{content:""}.fa-chevron-circle-left:before{content:""}.fa-chevron-circle-right:before{content:""}.fa-chevron-circle-up:before{content:""}.fa-chevron-circle-down:before{content:""}.fa-html5:before{content:""}.fa-css3:before{content:""}.fa-anchor:before{content:""}.fa-unlock-alt:before{content:""}.fa-bullseye:before{content:""}.fa-ellipsis-h:before{content:""}.fa-elli
psis-v:before{content:""}.fa-rss-square:before{content:""}.fa-play-circle:before{content:""}.fa-ticket:before{content:""}.fa-minus-square:before{content:""}.fa-minus-square-o:before,.wy-menu-vertical li.current>a button.toctree-expand:before,.wy-menu-vertical li.on a button.toctree-expand:before{content:""}.fa-level-up:before{content:""}.fa-level-down:before{content:""}.fa-check-square:before{content:""}.fa-pencil-square:before{content:""}.fa-external-link-square:before{content:""}.fa-share-square:before{content:""}.fa-compass:before{content:""}.fa-caret-square-o-down:before,.fa-toggle-down:before{content:""}.fa-caret-square-o-up:before,.fa-toggle-up:before{content:""}.fa-caret-square-o-right:before,.fa-toggle-right:before{content:""}.fa-eur:before,.fa-euro:before{content:""}.fa-gbp:before{content:""}.fa-dollar:before,.fa-usd:before{content:""}.fa-inr:before,.fa-rupee:before{content:""}.fa-cny:before,.fa-jpy:before,.fa-rmb:before,.fa-yen:before{content:""}.fa-rouble:before,.fa-rub:before,.fa-ruble:before{content:""}.fa-krw:before,.fa-won:before{content:""}.fa-bitcoin:before,.fa-btc:before{content:""}.fa-file:before{content:""}.fa-file-text:before{content:""}.fa-sort-alpha-asc:before{content:""}.fa-sort-alpha-desc:before{content:""}.fa-sort-amount-asc:before{content:""}.fa-sort-amount-desc:before{content:""}.fa-sort-numeric-asc:before{content:""}.fa-sort-numeric-desc:before{content:""}.fa-thumbs-up:before{content:""}.fa-thumbs-down:before{content:""}.fa-youtube-square:before{content:""}.fa-youtube:before{content:""}.fa-xing:before{content:""}.fa-xing-square:before{content:""}.fa-youtube-play:before{content:""}.fa-dropbox:before{content:""}.fa-stack-overflow:before{content:""}.fa-instagram:before{content:""}.fa-flickr:before{content:""}.fa-adn:before{content:""}.fa-bitbucket:before,.icon-bitbucket:before{content:""}.fa-bitbucket-square:before{content:""}.fa-tumblr:before{content:""}.fa-tumblr-square:before{content:""}.fa-long-arrow-down:before{content:""}.fa-long-arrow-up:before{content:""}.fa-long-arrow-left:before{content:""}.fa-long-arrow-right:before{content:""}.fa-apple:before{content:""}.fa-windows:before{content:""}.fa-android:before{content:""}.fa-linux:before{content:""}.fa-dribbble:before{content:""}.fa-skype:before{content:""}.fa-foursquare:before{content:""}.fa-trello:before{content:""}.fa-female:before{content:""}.fa-male:before{content:""}.fa-gittip:before,.fa-gratipay:before{content:""}.fa-sun-o:before{content:""}.fa-moon-o:before{content:""}.fa-archive:before{content:""}.fa-bug:before{content:""}.fa-vk:before{content:""}.fa-weibo:before{content:""}.fa-renren:before{content:""}.fa-pagelines:before{content:""}.fa-stack-exchange:before{content:""}.fa-arrow-circle-o-right:before{content:""}.fa-arrow-circle-o-left:before{content:""}.fa-caret-square-o-left:before,.fa-toggle-left:before{content:""}.fa-dot-circle-o:before{content:""}.fa-wheelchair:before{content:""}.fa-vimeo-square:before{content:""}.fa-try:before,.fa-turkish-lira:before{content:""}.fa-plus-square-o:before,.wy-menu-vertical li 
button.toctree-expand:before{content:""}.fa-space-shuttle:before{content:""}.fa-slack:before{content:""}.fa-envelope-square:before{content:""}.fa-wordpress:before{content:""}.fa-openid:before{content:""}.fa-bank:before,.fa-institution:before,.fa-university:before{content:""}.fa-graduation-cap:before,.fa-mortar-board:before{content:""}.fa-yahoo:before{content:""}.fa-google:before{content:""}.fa-reddit:before{content:""}.fa-reddit-square:before{content:""}.fa-stumbleupon-circle:before{content:""}.fa-stumbleupon:before{content:""}.fa-delicious:before{content:""}.fa-digg:before{content:""}.fa-pied-piper-pp:before{content:""}.fa-pied-piper-alt:before{content:""}.fa-drupal:before{content:""}.fa-joomla:before{content:""}.fa-language:before{content:""}.fa-fax:before{content:""}.fa-building:before{content:""}.fa-child:before{content:""}.fa-paw:before{content:""}.fa-spoon:before{content:""}.fa-cube:before{content:""}.fa-cubes:before{content:""}.fa-behance:before{content:""}.fa-behance-square:before{content:""}.fa-steam:before{content:""}.fa-steam-square:before{content:""}.fa-recycle:before{content:""}.fa-automobile:before,.fa-car:before{content:""}.fa-cab:before,.fa-taxi:before{content:""}.fa-tree:before{content:""}.fa-spotify:before{content:""}.fa-deviantart:before{content:""}.fa-soundcloud:before{content:""}.fa-database:before{content:""}.fa-file-pdf-o:before{content:""}.fa-file-word-o:before{content:""}.fa-file-excel-o:before{content:""}.fa-file-powerpoint-o:before{content:""}.fa-file-image-o:before,.fa-file-photo-o:before,.fa-file-picture-o:before{content:""}.fa-file-archive-o:before,.fa-file-zip-o:before{content:""}.fa-file-audio-o:before,.fa-file-sound-o:before{content:""}.fa-file-movie-o:before,.fa-file-video-o:before{content:""}.fa-file-code-o:before{content:""}.fa-vine:before{content:""}.fa-codepen:before{content:""}.fa-jsfiddle:before{content:""}.fa-life-bouy:before,.fa-life-buoy:before,.fa-life-ring:before,.fa-life-saver:before,.fa-support:before{content:""}.fa-circle-o-notch:before{content:""}.fa-ra:before,.fa-rebel:before,.fa-resistance:before{content:""}.fa-empire:before,.fa-ge:before{content:""}.fa-git-square:before{content:""}.fa-git:before{content:""}.fa-hacker-news:before,.fa-y-combinator-square:before,.fa-yc-square:before{content:""}.fa-tencent-weibo:before{content:""}.fa-qq:before{content:""}.fa-wechat:before,.fa-weixin:before{content:""}.fa-paper-plane:before,.fa-send:before{content:""}.fa-paper-plane-o:before,.fa-send-o:before{content:""}.fa-history:before{content:""}.fa-circle-thin:before{content:""}.fa-header:before{content:""}.fa-paragraph:before{content:""}.fa-sliders:before{content:""}.fa-share-alt:before{content:""}.fa-share-alt-square:before{content:""}.fa-bomb:before{content:""}.fa-futbol-o:before,.fa-soccer-ball-o:before{content:""}.fa-tty:before{content:""}.fa-binoculars:before{content:""}.fa-plug:before{content:""}.fa-slideshare:before{content:""}.fa-twitch:before{content:""}.fa-yelp:before{content:""}.fa-newspaper-o:before{content:""}.fa-wifi:before{content:""}.fa-calculator:before{content:""}.fa-paypal:before{content:""}.fa-google-wallet:before{content:""}.fa-cc-visa:before{content:""}.fa-cc-mastercard:before{content:""}.fa-cc-discover:before{content:""}.fa-cc-amex:before{content:""}.fa-cc-paypal:before{content:""}.fa-cc-stripe:before{content:""}.fa-bell-slash:before{content:""}.fa-bell-slash-o:before{content:""}.fa-trash:before{content:""}.fa-copyright:before{content:""}.f
a-at:before{content:""}.fa-eyedropper:before{content:""}.fa-paint-brush:before{content:""}.fa-birthday-cake:before{content:""}.fa-area-chart:before{content:""}.fa-pie-chart:before{content:""}.fa-line-chart:before{content:""}.fa-lastfm:before{content:""}.fa-lastfm-square:before{content:""}.fa-toggle-off:before{content:""}.fa-toggle-on:before{content:""}.fa-bicycle:before{content:""}.fa-bus:before{content:""}.fa-ioxhost:before{content:""}.fa-angellist:before{content:""}.fa-cc:before{content:""}.fa-ils:before,.fa-shekel:before,.fa-sheqel:before{content:""}.fa-meanpath:before{content:""}.fa-buysellads:before{content:""}.fa-connectdevelop:before{content:""}.fa-dashcube:before{content:""}.fa-forumbee:before{content:""}.fa-leanpub:before{content:""}.fa-sellsy:before{content:""}.fa-shirtsinbulk:before{content:""}.fa-simplybuilt:before{content:""}.fa-skyatlas:before{content:""}.fa-cart-plus:before{content:""}.fa-cart-arrow-down:before{content:""}.fa-diamond:before{content:""}.fa-ship:before{content:""}.fa-user-secret:before{content:""}.fa-motorcycle:before{content:""}.fa-street-view:before{content:""}.fa-heartbeat:before{content:""}.fa-venus:before{content:""}.fa-mars:before{content:""}.fa-mercury:before{content:""}.fa-intersex:before,.fa-transgender:before{content:""}.fa-transgender-alt:before{content:""}.fa-venus-double:before{content:""}.fa-mars-double:before{content:""}.fa-venus-mars:before{content:""}.fa-mars-stroke:before{content:""}.fa-mars-stroke-v:before{content:""}.fa-mars-stroke-h:before{content:""}.fa-neuter:before{content:""}.fa-genderless:before{content:""}.fa-facebook-official:before{content:""}.fa-pinterest-p:before{content:""}.fa-whatsapp:before{content:""}.fa-server:before{content:""}.fa-user-plus:before{content:""}.fa-user-times:before{content:""}.fa-bed:before,.fa-hotel:before{content:""}.fa-viacoin:before{content:""}.fa-train:before{content:""}.fa-subway:before{content:""}.fa-medium:before{content:""}.fa-y-combinator:before,.fa-yc:before{content:""}.fa-optin-monster:before{content:""}.fa-opencart:before{content:""}.fa-expeditedssl:before{content:""}.fa-battery-4:before,.fa-battery-full:before,.fa-battery:before{content:""}.fa-battery-3:before,.fa-battery-three-quarters:before{content:""}.fa-battery-2:before,.fa-battery-half:before{content:""}.fa-battery-1:before,.fa-battery-quarter:before{content:""}.fa-battery-0:before,.fa-battery-empty:before{content:""}.fa-mouse-pointer:before{content:""}.fa-i-cursor:before{content:""}.fa-object-group:before{content:""}.fa-object-ungroup:before{content:""}.fa-sticky-note:before{content:""}.fa-sticky-note-o:before{content:""}.fa-cc-jcb:before{content:""}.fa-cc-diners-club:before{content:""}.fa-clone:before{content:""}.fa-balance-scale:before{content:""}.fa-hourglass-o:before{content:""}.fa-hourglass-1:before,.fa-hourglass-start:before{content:""}.fa-hourglass-2:before,.fa-hourglass-half:before{content:""}.fa-hourglass-3:before,.fa-hourglass-end:before{content:""}.fa-hourglass:before{content:""}.fa-hand-grab-o:before,.fa-hand-rock-o:before{content:""}.fa-hand-paper-o:before,.fa-hand-stop-o:before{content:""}.fa-hand-scissors-o:before{content:""}.fa-hand-lizard-o:before{content:""}.fa-hand-spock-o:before{content:""}.fa-hand-pointer-o:before{content:""}.fa-hand-peace-o:before{content:""}.fa-trademark:before{content:""}.fa-registered:before{content:""}.fa-creative-commons:before{content:""}.fa-gg:before{content:""}.fa-gg-circle:before{content:""}.fa-trip
advisor:before{content:""}.fa-odnoklassniki:before{content:""}.fa-odnoklassniki-square:before{content:""}.fa-get-pocket:before{content:""}.fa-wikipedia-w:before{content:""}.fa-safari:before{content:""}.fa-chrome:before{content:""}.fa-firefox:before{content:""}.fa-opera:before{content:""}.fa-internet-explorer:before{content:""}.fa-television:before,.fa-tv:before{content:""}.fa-contao:before{content:""}.fa-500px:before{content:""}.fa-amazon:before{content:""}.fa-calendar-plus-o:before{content:""}.fa-calendar-minus-o:before{content:""}.fa-calendar-times-o:before{content:""}.fa-calendar-check-o:before{content:""}.fa-industry:before{content:""}.fa-map-pin:before{content:""}.fa-map-signs:before{content:""}.fa-map-o:before{content:""}.fa-map:before{content:""}.fa-commenting:before{content:""}.fa-commenting-o:before{content:""}.fa-houzz:before{content:""}.fa-vimeo:before{content:""}.fa-black-tie:before{content:""}.fa-fonticons:before{content:""}.fa-reddit-alien:before{content:""}.fa-edge:before{content:""}.fa-credit-card-alt:before{content:""}.fa-codiepie:before{content:""}.fa-modx:before{content:""}.fa-fort-awesome:before{content:""}.fa-usb:before{content:""}.fa-product-hunt:before{content:""}.fa-mixcloud:before{content:""}.fa-scribd:before{content:""}.fa-pause-circle:before{content:""}.fa-pause-circle-o:before{content:""}.fa-stop-circle:before{content:""}.fa-stop-circle-o:before{content:""}.fa-shopping-bag:before{content:""}.fa-shopping-basket:before{content:""}.fa-hashtag:before{content:""}.fa-bluetooth:before{content:""}.fa-bluetooth-b:before{content:""}.fa-percent:before{content:""}.fa-gitlab:before,.icon-gitlab:before{content:""}.fa-wpbeginner:before{content:""}.fa-wpforms:before{content:""}.fa-envira:before{content:""}.fa-universal-access:before{content:""}.fa-wheelchair-alt:before{content:""}.fa-question-circle-o:before{content:""}.fa-blind:before{content:""}.fa-audio-description:before{content:""}.fa-volume-control-phone:before{content:""}.fa-braille:before{content:""}.fa-assistive-listening-systems:before{content:""}.fa-american-sign-language-interpreting:before,.fa-asl-interpreting:before{content:""}.fa-deaf:before,.fa-deafness:before,.fa-hard-of-hearing:before{content:""}.fa-glide:before{content:""}.fa-glide-g:before{content:""}.fa-sign-language:before,.fa-signing:before{content:""}.fa-low-vision:before{content:""}.fa-viadeo:before{content:""}.fa-viadeo-square:before{content:""}.fa-snapchat:before{content:""}.fa-snapchat-ghost:before{content:""}.fa-snapchat-square:before{content:""}.fa-pied-piper:before{content:""}.fa-first-order:before{content:""}.fa-yoast:before{content:""}.fa-themeisle:before{content:""}.fa-google-plus-circle:before,.fa-google-plus-official:before{content:""}.fa-fa:before,.fa-font-awesome:before{content:""}.fa-handshake-o:before{content:""}.fa-envelope-open:before{content:""}.fa-envelope-open-o:before{content:""}.fa-linode:before{content:""}.fa-address-book:before{content:""}.fa-address-book-o:before{content:""}.fa-address-card:before,.fa-vcard:before{content:""}.fa-address-card-o:before,.fa-vcard-o:before{content:""}.fa-user-circle:before{content:""}.fa-user-circle-o:before{content:""}.fa-user-o:before{content:""}.fa-id-badge:before{content:""}.fa-drivers-license:before,.fa-id-card:before{content:""}.fa-drivers-license-o:before,.fa-id-card-o:before{content:""}.fa-quora:before{content:""}.fa-free-code-camp:before{content:""}.fa-telegram:before{content:""}.fa-thermometer-4:b
efore,.fa-thermometer-full:before,.fa-thermometer:before{content:""}.fa-thermometer-3:before,.fa-thermometer-three-quarters:before{content:""}.fa-thermometer-2:before,.fa-thermometer-half:before{content:""}.fa-thermometer-1:before,.fa-thermometer-quarter:before{content:""}.fa-thermometer-0:before,.fa-thermometer-empty:before{content:""}.fa-shower:before{content:""}.fa-bath:before,.fa-bathtub:before,.fa-s15:before{content:""}.fa-podcast:before{content:""}.fa-window-maximize:before{content:""}.fa-window-minimize:before{content:""}.fa-window-restore:before{content:""}.fa-times-rectangle:before,.fa-window-close:before{content:""}.fa-times-rectangle-o:before,.fa-window-close-o:before{content:""}.fa-bandcamp:before{content:""}.fa-grav:before{content:""}.fa-etsy:before{content:""}.fa-imdb:before{content:""}.fa-ravelry:before{content:""}.fa-eercast:before{content:""}.fa-microchip:before{content:""}.fa-snowflake-o:before{content:""}.fa-superpowers:before{content:""}.fa-wpexplorer:before{content:""}.fa-meetup:before{content:""}.sr-only{position:absolute;width:1px;height:1px;padding:0;margin:-1px;overflow:hidden;clip:rect(0,0,0,0);border:0}.sr-only-focusable:active,.sr-only-focusable:focus{position:static;width:auto;height:auto;margin:0;overflow:visible;clip:auto}.fa,.icon,.rst-content .admonition-title,.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content code.download span:first-child,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 .headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 .headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink,.rst-content tt.download span:first-child,.wy-dropdown .caret,.wy-inline-validate.wy-inline-validate-danger .wy-input-context,.wy-inline-validate.wy-inline-validate-info .wy-input-context,.wy-inline-validate.wy-inline-validate-success .wy-input-context,.wy-inline-validate.wy-inline-validate-warning .wy-input-context,.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a button.toctree-expand,.wy-menu-vertical li button.toctree-expand{font-family:inherit}.fa:before,.icon:before,.rst-content .admonition-title:before,.rst-content .code-block-caption .headerlink:before,.rst-content .eqno .headerlink:before,.rst-content code.download span:first-child:before,.rst-content dl dt .headerlink:before,.rst-content h1 .headerlink:before,.rst-content h2 .headerlink:before,.rst-content h3 .headerlink:before,.rst-content h4 .headerlink:before,.rst-content h5 .headerlink:before,.rst-content h6 .headerlink:before,.rst-content p.caption .headerlink:before,.rst-content p .headerlink:before,.rst-content table>caption .headerlink:before,.rst-content tt.download span:first-child:before,.wy-dropdown .caret:before,.wy-inline-validate.wy-inline-validate-danger .wy-input-context:before,.wy-inline-validate.wy-inline-validate-info .wy-input-context:before,.wy-inline-validate.wy-inline-validate-success .wy-input-context:before,.wy-inline-validate.wy-inline-validate-warning .wy-input-context:before,.wy-menu-vertical li.current>a button.toctree-expand:before,.wy-menu-vertical li.on a button.toctree-expand:before,.wy-menu-vertical li button.toctree-expand:before{font-family:FontAwesome;display:inline-block;font-style:normal;font-weight:400;line-height:1;text-decoration:inherit}.rst-content .code-block-caption a .headerlink,.rst-content .eqno a .headerlink,.rst-content a 
.admonition-title,.rst-content code.download a span:first-child,.rst-content dl dt a .headerlink,.rst-content h1 a .headerlink,.rst-content h2 a .headerlink,.rst-content h3 a .headerlink,.rst-content h4 a .headerlink,.rst-content h5 a .headerlink,.rst-content h6 a .headerlink,.rst-content p.caption a .headerlink,.rst-content p a .headerlink,.rst-content table>caption a .headerlink,.rst-content tt.download a span:first-child,.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a button.toctree-expand,.wy-menu-vertical li a button.toctree-expand,a .fa,a .icon,a .rst-content .admonition-title,a .rst-content .code-block-caption .headerlink,a .rst-content .eqno .headerlink,a .rst-content code.download span:first-child,a .rst-content dl dt .headerlink,a .rst-content h1 .headerlink,a .rst-content h2 .headerlink,a .rst-content h3 .headerlink,a .rst-content h4 .headerlink,a .rst-content h5 .headerlink,a .rst-content h6 .headerlink,a .rst-content p.caption .headerlink,a .rst-content p .headerlink,a .rst-content table>caption .headerlink,a .rst-content tt.download span:first-child,a .wy-menu-vertical li button.toctree-expand{display:inline-block;text-decoration:inherit}.btn .fa,.btn .icon,.btn .rst-content .admonition-title,.btn .rst-content .code-block-caption .headerlink,.btn .rst-content .eqno .headerlink,.btn .rst-content code.download span:first-child,.btn .rst-content dl dt .headerlink,.btn .rst-content h1 .headerlink,.btn .rst-content h2 .headerlink,.btn .rst-content h3 .headerlink,.btn .rst-content h4 .headerlink,.btn .rst-content h5 .headerlink,.btn .rst-content h6 .headerlink,.btn .rst-content p .headerlink,.btn .rst-content table>caption .headerlink,.btn .rst-content tt.download span:first-child,.btn .wy-menu-vertical li.current>a button.toctree-expand,.btn .wy-menu-vertical li.on a button.toctree-expand,.btn .wy-menu-vertical li button.toctree-expand,.nav .fa,.nav .icon,.nav .rst-content .admonition-title,.nav .rst-content .code-block-caption .headerlink,.nav .rst-content .eqno .headerlink,.nav .rst-content code.download span:first-child,.nav .rst-content dl dt .headerlink,.nav .rst-content h1 .headerlink,.nav .rst-content h2 .headerlink,.nav .rst-content h3 .headerlink,.nav .rst-content h4 .headerlink,.nav .rst-content h5 .headerlink,.nav .rst-content h6 .headerlink,.nav .rst-content p .headerlink,.nav .rst-content table>caption .headerlink,.nav .rst-content tt.download span:first-child,.nav .wy-menu-vertical li.current>a button.toctree-expand,.nav .wy-menu-vertical li.on a button.toctree-expand,.nav .wy-menu-vertical li button.toctree-expand,.rst-content .btn .admonition-title,.rst-content .code-block-caption .btn .headerlink,.rst-content .code-block-caption .nav .headerlink,.rst-content .eqno .btn .headerlink,.rst-content .eqno .nav .headerlink,.rst-content .nav .admonition-title,.rst-content code.download .btn span:first-child,.rst-content code.download .nav span:first-child,.rst-content dl dt .btn .headerlink,.rst-content dl dt .nav .headerlink,.rst-content h1 .btn .headerlink,.rst-content h1 .nav .headerlink,.rst-content h2 .btn .headerlink,.rst-content h2 .nav .headerlink,.rst-content h3 .btn .headerlink,.rst-content h3 .nav .headerlink,.rst-content h4 .btn .headerlink,.rst-content h4 .nav .headerlink,.rst-content h5 .btn .headerlink,.rst-content h5 .nav .headerlink,.rst-content h6 .btn .headerlink,.rst-content h6 .nav .headerlink,.rst-content p .btn .headerlink,.rst-content p .nav .headerlink,.rst-content table>caption .btn .headerlink,.rst-content 
table>caption .nav .headerlink,.rst-content tt.download .btn span:first-child,.rst-content tt.download .nav span:first-child,.wy-menu-vertical li .btn button.toctree-expand,.wy-menu-vertical li.current>a .btn button.toctree-expand,.wy-menu-vertical li.current>a .nav button.toctree-expand,.wy-menu-vertical li .nav button.toctree-expand,.wy-menu-vertical li.on a .btn button.toctree-expand,.wy-menu-vertical li.on a .nav button.toctree-expand{display:inline}.btn .fa-large.icon,.btn .fa.fa-large,.btn .rst-content .code-block-caption .fa-large.headerlink,.btn .rst-content .eqno .fa-large.headerlink,.btn .rst-content .fa-large.admonition-title,.btn .rst-content code.download span.fa-large:first-child,.btn .rst-content dl dt .fa-large.headerlink,.btn .rst-content h1 .fa-large.headerlink,.btn .rst-content h2 .fa-large.headerlink,.btn .rst-content h3 .fa-large.headerlink,.btn .rst-content h4 .fa-large.headerlink,.btn .rst-content h5 .fa-large.headerlink,.btn .rst-content h6 .fa-large.headerlink,.btn .rst-content p .fa-large.headerlink,.btn .rst-content table>caption .fa-large.headerlink,.btn .rst-content tt.download span.fa-large:first-child,.btn .wy-menu-vertical li button.fa-large.toctree-expand,.nav .fa-large.icon,.nav .fa.fa-large,.nav .rst-content .code-block-caption .fa-large.headerlink,.nav .rst-content .eqno .fa-large.headerlink,.nav .rst-content .fa-large.admonition-title,.nav .rst-content code.download span.fa-large:first-child,.nav .rst-content dl dt .fa-large.headerlink,.nav .rst-content h1 .fa-large.headerlink,.nav .rst-content h2 .fa-large.headerlink,.nav .rst-content h3 .fa-large.headerlink,.nav .rst-content h4 .fa-large.headerlink,.nav .rst-content h5 .fa-large.headerlink,.nav .rst-content h6 .fa-large.headerlink,.nav .rst-content p .fa-large.headerlink,.nav .rst-content table>caption .fa-large.headerlink,.nav .rst-content tt.download span.fa-large:first-child,.nav .wy-menu-vertical li button.fa-large.toctree-expand,.rst-content .btn .fa-large.admonition-title,.rst-content .code-block-caption .btn .fa-large.headerlink,.rst-content .code-block-caption .nav .fa-large.headerlink,.rst-content .eqno .btn .fa-large.headerlink,.rst-content .eqno .nav .fa-large.headerlink,.rst-content .nav .fa-large.admonition-title,.rst-content code.download .btn span.fa-large:first-child,.rst-content code.download .nav span.fa-large:first-child,.rst-content dl dt .btn .fa-large.headerlink,.rst-content dl dt .nav .fa-large.headerlink,.rst-content h1 .btn .fa-large.headerlink,.rst-content h1 .nav .fa-large.headerlink,.rst-content h2 .btn .fa-large.headerlink,.rst-content h2 .nav .fa-large.headerlink,.rst-content h3 .btn .fa-large.headerlink,.rst-content h3 .nav .fa-large.headerlink,.rst-content h4 .btn .fa-large.headerlink,.rst-content h4 .nav .fa-large.headerlink,.rst-content h5 .btn .fa-large.headerlink,.rst-content h5 .nav .fa-large.headerlink,.rst-content h6 .btn .fa-large.headerlink,.rst-content h6 .nav .fa-large.headerlink,.rst-content p .btn .fa-large.headerlink,.rst-content p .nav .fa-large.headerlink,.rst-content table>caption .btn .fa-large.headerlink,.rst-content table>caption .nav .fa-large.headerlink,.rst-content tt.download .btn span.fa-large:first-child,.rst-content tt.download .nav span.fa-large:first-child,.wy-menu-vertical li .btn button.fa-large.toctree-expand,.wy-menu-vertical li .nav button.fa-large.toctree-expand{line-height:.9em}.btn .fa-spin.icon,.btn .fa.fa-spin,.btn .rst-content .code-block-caption .fa-spin.headerlink,.btn .rst-content .eqno .fa-spin.headerlink,.btn .rst-content 
.fa-spin.admonition-title,.btn .rst-content code.download span.fa-spin:first-child,.btn .rst-content dl dt .fa-spin.headerlink,.btn .rst-content h1 .fa-spin.headerlink,.btn .rst-content h2 .fa-spin.headerlink,.btn .rst-content h3 .fa-spin.headerlink,.btn .rst-content h4 .fa-spin.headerlink,.btn .rst-content h5 .fa-spin.headerlink,.btn .rst-content h6 .fa-spin.headerlink,.btn .rst-content p .fa-spin.headerlink,.btn .rst-content table>caption .fa-spin.headerlink,.btn .rst-content tt.download span.fa-spin:first-child,.btn .wy-menu-vertical li button.fa-spin.toctree-expand,.nav .fa-spin.icon,.nav .fa.fa-spin,.nav .rst-content .code-block-caption .fa-spin.headerlink,.nav .rst-content .eqno .fa-spin.headerlink,.nav .rst-content .fa-spin.admonition-title,.nav .rst-content code.download span.fa-spin:first-child,.nav .rst-content dl dt .fa-spin.headerlink,.nav .rst-content h1 .fa-spin.headerlink,.nav .rst-content h2 .fa-spin.headerlink,.nav .rst-content h3 .fa-spin.headerlink,.nav .rst-content h4 .fa-spin.headerlink,.nav .rst-content h5 .fa-spin.headerlink,.nav .rst-content h6 .fa-spin.headerlink,.nav .rst-content p .fa-spin.headerlink,.nav .rst-content table>caption .fa-spin.headerlink,.nav .rst-content tt.download span.fa-spin:first-child,.nav .wy-menu-vertical li button.fa-spin.toctree-expand,.rst-content .btn .fa-spin.admonition-title,.rst-content .code-block-caption .btn .fa-spin.headerlink,.rst-content .code-block-caption .nav .fa-spin.headerlink,.rst-content .eqno .btn .fa-spin.headerlink,.rst-content .eqno .nav .fa-spin.headerlink,.rst-content .nav .fa-spin.admonition-title,.rst-content code.download .btn span.fa-spin:first-child,.rst-content code.download .nav span.fa-spin:first-child,.rst-content dl dt .btn .fa-spin.headerlink,.rst-content dl dt .nav .fa-spin.headerlink,.rst-content h1 .btn .fa-spin.headerlink,.rst-content h1 .nav .fa-spin.headerlink,.rst-content h2 .btn .fa-spin.headerlink,.rst-content h2 .nav .fa-spin.headerlink,.rst-content h3 .btn .fa-spin.headerlink,.rst-content h3 .nav .fa-spin.headerlink,.rst-content h4 .btn .fa-spin.headerlink,.rst-content h4 .nav .fa-spin.headerlink,.rst-content h5 .btn .fa-spin.headerlink,.rst-content h5 .nav .fa-spin.headerlink,.rst-content h6 .btn .fa-spin.headerlink,.rst-content h6 .nav .fa-spin.headerlink,.rst-content p .btn .fa-spin.headerlink,.rst-content p .nav .fa-spin.headerlink,.rst-content table>caption .btn .fa-spin.headerlink,.rst-content table>caption .nav .fa-spin.headerlink,.rst-content tt.download .btn span.fa-spin:first-child,.rst-content tt.download .nav span.fa-spin:first-child,.wy-menu-vertical li .btn button.fa-spin.toctree-expand,.wy-menu-vertical li .nav button.fa-spin.toctree-expand{display:inline-block}.btn.fa:before,.btn.icon:before,.rst-content .btn.admonition-title:before,.rst-content .code-block-caption .btn.headerlink:before,.rst-content .eqno .btn.headerlink:before,.rst-content code.download span.btn:first-child:before,.rst-content dl dt .btn.headerlink:before,.rst-content h1 .btn.headerlink:before,.rst-content h2 .btn.headerlink:before,.rst-content h3 .btn.headerlink:before,.rst-content h4 .btn.headerlink:before,.rst-content h5 .btn.headerlink:before,.rst-content h6 .btn.headerlink:before,.rst-content p .btn.headerlink:before,.rst-content table>caption .btn.headerlink:before,.rst-content tt.download span.btn:first-child:before,.wy-menu-vertical li button.btn.toctree-expand:before{opacity:.5;-webkit-transition:opacity .05s ease-in;-moz-transition:opacity .05s ease-in;transition:opacity .05s 
ease-in}.btn.fa:hover:before,.btn.icon:hover:before,.rst-content .btn.admonition-title:hover:before,.rst-content .code-block-caption .btn.headerlink:hover:before,.rst-content .eqno .btn.headerlink:hover:before,.rst-content code.download span.btn:first-child:hover:before,.rst-content dl dt .btn.headerlink:hover:before,.rst-content h1 .btn.headerlink:hover:before,.rst-content h2 .btn.headerlink:hover:before,.rst-content h3 .btn.headerlink:hover:before,.rst-content h4 .btn.headerlink:hover:before,.rst-content h5 .btn.headerlink:hover:before,.rst-content h6 .btn.headerlink:hover:before,.rst-content p .btn.headerlink:hover:before,.rst-content table>caption .btn.headerlink:hover:before,.rst-content tt.download span.btn:first-child:hover:before,.wy-menu-vertical li button.btn.toctree-expand:hover:before{opacity:1}.btn-mini .fa:before,.btn-mini .icon:before,.btn-mini .rst-content .admonition-title:before,.btn-mini .rst-content .code-block-caption .headerlink:before,.btn-mini .rst-content .eqno .headerlink:before,.btn-mini .rst-content code.download span:first-child:before,.btn-mini .rst-content dl dt .headerlink:before,.btn-mini .rst-content h1 .headerlink:before,.btn-mini .rst-content h2 .headerlink:before,.btn-mini .rst-content h3 .headerlink:before,.btn-mini .rst-content h4 .headerlink:before,.btn-mini .rst-content h5 .headerlink:before,.btn-mini .rst-content h6 .headerlink:before,.btn-mini .rst-content p .headerlink:before,.btn-mini .rst-content table>caption .headerlink:before,.btn-mini .rst-content tt.download span:first-child:before,.btn-mini .wy-menu-vertical li button.toctree-expand:before,.rst-content .btn-mini .admonition-title:before,.rst-content .code-block-caption .btn-mini .headerlink:before,.rst-content .eqno .btn-mini .headerlink:before,.rst-content code.download .btn-mini span:first-child:before,.rst-content dl dt .btn-mini .headerlink:before,.rst-content h1 .btn-mini .headerlink:before,.rst-content h2 .btn-mini .headerlink:before,.rst-content h3 .btn-mini .headerlink:before,.rst-content h4 .btn-mini .headerlink:before,.rst-content h5 .btn-mini .headerlink:before,.rst-content h6 .btn-mini .headerlink:before,.rst-content p .btn-mini .headerlink:before,.rst-content table>caption .btn-mini .headerlink:before,.rst-content tt.download .btn-mini span:first-child:before,.wy-menu-vertical li .btn-mini button.toctree-expand:before{font-size:14px;vertical-align:-15%}.rst-content .admonition,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .danger,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning,.wy-alert{padding:12px;line-height:24px;margin-bottom:24px;background:#e7f2fa}.rst-content .admonition-title,.wy-alert-title{font-weight:700;display:block;color:#fff;background:#6ab0de;padding:6px 12px;margin:-12px -12px 12px}.rst-content .danger,.rst-content .error,.rst-content .wy-alert-danger.admonition,.rst-content .wy-alert-danger.admonition-todo,.rst-content .wy-alert-danger.attention,.rst-content .wy-alert-danger.caution,.rst-content .wy-alert-danger.hint,.rst-content .wy-alert-danger.important,.rst-content .wy-alert-danger.note,.rst-content .wy-alert-danger.seealso,.rst-content .wy-alert-danger.tip,.rst-content .wy-alert-danger.warning,.wy-alert.wy-alert-danger{background:#fdf3f2}.rst-content .danger .admonition-title,.rst-content .danger .wy-alert-title,.rst-content .error .admonition-title,.rst-content .error .wy-alert-title,.rst-content 
.wy-alert-danger.admonition-todo .admonition-title,.rst-content .wy-alert-danger.admonition-todo .wy-alert-title,.rst-content .wy-alert-danger.admonition .admonition-title,.rst-content .wy-alert-danger.admonition .wy-alert-title,.rst-content .wy-alert-danger.attention .admonition-title,.rst-content .wy-alert-danger.attention .wy-alert-title,.rst-content .wy-alert-danger.caution .admonition-title,.rst-content .wy-alert-danger.caution .wy-alert-title,.rst-content .wy-alert-danger.hint .admonition-title,.rst-content .wy-alert-danger.hint .wy-alert-title,.rst-content .wy-alert-danger.important .admonition-title,.rst-content .wy-alert-danger.important .wy-alert-title,.rst-content .wy-alert-danger.note .admonition-title,.rst-content .wy-alert-danger.note .wy-alert-title,.rst-content .wy-alert-danger.seealso .admonition-title,.rst-content .wy-alert-danger.seealso .wy-alert-title,.rst-content .wy-alert-danger.tip .admonition-title,.rst-content .wy-alert-danger.tip .wy-alert-title,.rst-content .wy-alert-danger.warning .admonition-title,.rst-content .wy-alert-danger.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-danger .admonition-title,.wy-alert.wy-alert-danger .rst-content .admonition-title,.wy-alert.wy-alert-danger .wy-alert-title{background:#f29f97}.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .warning,.rst-content .wy-alert-warning.admonition,.rst-content .wy-alert-warning.danger,.rst-content .wy-alert-warning.error,.rst-content .wy-alert-warning.hint,.rst-content .wy-alert-warning.important,.rst-content .wy-alert-warning.note,.rst-content .wy-alert-warning.seealso,.rst-content .wy-alert-warning.tip,.wy-alert.wy-alert-warning{background:#ffedcc}.rst-content .admonition-todo .admonition-title,.rst-content .admonition-todo .wy-alert-title,.rst-content .attention .admonition-title,.rst-content .attention .wy-alert-title,.rst-content .caution .admonition-title,.rst-content .caution .wy-alert-title,.rst-content .warning .admonition-title,.rst-content .warning .wy-alert-title,.rst-content .wy-alert-warning.admonition .admonition-title,.rst-content .wy-alert-warning.admonition .wy-alert-title,.rst-content .wy-alert-warning.danger .admonition-title,.rst-content .wy-alert-warning.danger .wy-alert-title,.rst-content .wy-alert-warning.error .admonition-title,.rst-content .wy-alert-warning.error .wy-alert-title,.rst-content .wy-alert-warning.hint .admonition-title,.rst-content .wy-alert-warning.hint .wy-alert-title,.rst-content .wy-alert-warning.important .admonition-title,.rst-content .wy-alert-warning.important .wy-alert-title,.rst-content .wy-alert-warning.note .admonition-title,.rst-content .wy-alert-warning.note .wy-alert-title,.rst-content .wy-alert-warning.seealso .admonition-title,.rst-content .wy-alert-warning.seealso .wy-alert-title,.rst-content .wy-alert-warning.tip .admonition-title,.rst-content .wy-alert-warning.tip .wy-alert-title,.rst-content .wy-alert.wy-alert-warning .admonition-title,.wy-alert.wy-alert-warning .rst-content .admonition-title,.wy-alert.wy-alert-warning .wy-alert-title{background:#f0b37e}.rst-content .note,.rst-content .seealso,.rst-content .wy-alert-info.admonition,.rst-content .wy-alert-info.admonition-todo,.rst-content .wy-alert-info.attention,.rst-content .wy-alert-info.caution,.rst-content .wy-alert-info.danger,.rst-content .wy-alert-info.error,.rst-content .wy-alert-info.hint,.rst-content .wy-alert-info.important,.rst-content .wy-alert-info.tip,.rst-content 
.wy-alert-info.warning,.wy-alert.wy-alert-info{background:#e7f2fa}.rst-content .note .admonition-title,.rst-content .note .wy-alert-title,.rst-content .seealso .admonition-title,.rst-content .seealso .wy-alert-title,.rst-content .wy-alert-info.admonition-todo .admonition-title,.rst-content .wy-alert-info.admonition-todo .wy-alert-title,.rst-content .wy-alert-info.admonition .admonition-title,.rst-content .wy-alert-info.admonition .wy-alert-title,.rst-content .wy-alert-info.attention .admonition-title,.rst-content .wy-alert-info.attention .wy-alert-title,.rst-content .wy-alert-info.caution .admonition-title,.rst-content .wy-alert-info.caution .wy-alert-title,.rst-content .wy-alert-info.danger .admonition-title,.rst-content .wy-alert-info.danger .wy-alert-title,.rst-content .wy-alert-info.error .admonition-title,.rst-content .wy-alert-info.error .wy-alert-title,.rst-content .wy-alert-info.hint .admonition-title,.rst-content .wy-alert-info.hint .wy-alert-title,.rst-content .wy-alert-info.important .admonition-title,.rst-content .wy-alert-info.important .wy-alert-title,.rst-content .wy-alert-info.tip .admonition-title,.rst-content .wy-alert-info.tip .wy-alert-title,.rst-content .wy-alert-info.warning .admonition-title,.rst-content .wy-alert-info.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-info .admonition-title,.wy-alert.wy-alert-info .rst-content .admonition-title,.wy-alert.wy-alert-info .wy-alert-title{background:#6ab0de}.rst-content .hint,.rst-content .important,.rst-content .tip,.rst-content .wy-alert-success.admonition,.rst-content .wy-alert-success.admonition-todo,.rst-content .wy-alert-success.attention,.rst-content .wy-alert-success.caution,.rst-content .wy-alert-success.danger,.rst-content .wy-alert-success.error,.rst-content .wy-alert-success.note,.rst-content .wy-alert-success.seealso,.rst-content .wy-alert-success.warning,.wy-alert.wy-alert-success{background:#dbfaf4}.rst-content .hint .admonition-title,.rst-content .hint .wy-alert-title,.rst-content .important .admonition-title,.rst-content .important .wy-alert-title,.rst-content .tip .admonition-title,.rst-content .tip .wy-alert-title,.rst-content .wy-alert-success.admonition-todo .admonition-title,.rst-content .wy-alert-success.admonition-todo .wy-alert-title,.rst-content .wy-alert-success.admonition .admonition-title,.rst-content .wy-alert-success.admonition .wy-alert-title,.rst-content .wy-alert-success.attention .admonition-title,.rst-content .wy-alert-success.attention .wy-alert-title,.rst-content .wy-alert-success.caution .admonition-title,.rst-content .wy-alert-success.caution .wy-alert-title,.rst-content .wy-alert-success.danger .admonition-title,.rst-content .wy-alert-success.danger .wy-alert-title,.rst-content .wy-alert-success.error .admonition-title,.rst-content .wy-alert-success.error .wy-alert-title,.rst-content .wy-alert-success.note .admonition-title,.rst-content .wy-alert-success.note .wy-alert-title,.rst-content .wy-alert-success.seealso .admonition-title,.rst-content .wy-alert-success.seealso .wy-alert-title,.rst-content .wy-alert-success.warning .admonition-title,.rst-content .wy-alert-success.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-success .admonition-title,.wy-alert.wy-alert-success .rst-content .admonition-title,.wy-alert.wy-alert-success .wy-alert-title{background:#1abc9c}.rst-content .wy-alert-neutral.admonition,.rst-content .wy-alert-neutral.admonition-todo,.rst-content .wy-alert-neutral.attention,.rst-content .wy-alert-neutral.caution,.rst-content 
.wy-alert-neutral.danger,.rst-content .wy-alert-neutral.error,.rst-content .wy-alert-neutral.hint,.rst-content .wy-alert-neutral.important,.rst-content .wy-alert-neutral.note,.rst-content .wy-alert-neutral.seealso,.rst-content .wy-alert-neutral.tip,.rst-content .wy-alert-neutral.warning,.wy-alert.wy-alert-neutral{background:#f3f6f6}.rst-content .wy-alert-neutral.admonition-todo .admonition-title,.rst-content .wy-alert-neutral.admonition-todo .wy-alert-title,.rst-content .wy-alert-neutral.admonition .admonition-title,.rst-content .wy-alert-neutral.admonition .wy-alert-title,.rst-content .wy-alert-neutral.attention .admonition-title,.rst-content .wy-alert-neutral.attention .wy-alert-title,.rst-content .wy-alert-neutral.caution .admonition-title,.rst-content .wy-alert-neutral.caution .wy-alert-title,.rst-content .wy-alert-neutral.danger .admonition-title,.rst-content .wy-alert-neutral.danger .wy-alert-title,.rst-content .wy-alert-neutral.error .admonition-title,.rst-content .wy-alert-neutral.error .wy-alert-title,.rst-content .wy-alert-neutral.hint .admonition-title,.rst-content .wy-alert-neutral.hint .wy-alert-title,.rst-content .wy-alert-neutral.important .admonition-title,.rst-content .wy-alert-neutral.important .wy-alert-title,.rst-content .wy-alert-neutral.note .admonition-title,.rst-content .wy-alert-neutral.note .wy-alert-title,.rst-content .wy-alert-neutral.seealso .admonition-title,.rst-content .wy-alert-neutral.seealso .wy-alert-title,.rst-content .wy-alert-neutral.tip .admonition-title,.rst-content .wy-alert-neutral.tip .wy-alert-title,.rst-content .wy-alert-neutral.warning .admonition-title,.rst-content .wy-alert-neutral.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-neutral .admonition-title,.wy-alert.wy-alert-neutral .rst-content .admonition-title,.wy-alert.wy-alert-neutral .wy-alert-title{color:#404040;background:#e1e4e5}.rst-content .wy-alert-neutral.admonition-todo a,.rst-content .wy-alert-neutral.admonition a,.rst-content .wy-alert-neutral.attention a,.rst-content .wy-alert-neutral.caution a,.rst-content .wy-alert-neutral.danger a,.rst-content .wy-alert-neutral.error a,.rst-content .wy-alert-neutral.hint a,.rst-content .wy-alert-neutral.important a,.rst-content .wy-alert-neutral.note a,.rst-content .wy-alert-neutral.seealso a,.rst-content .wy-alert-neutral.tip a,.rst-content .wy-alert-neutral.warning a,.wy-alert.wy-alert-neutral a{color:#2980b9}.rst-content .admonition-todo p:last-child,.rst-content .admonition p:last-child,.rst-content .attention p:last-child,.rst-content .caution p:last-child,.rst-content .danger p:last-child,.rst-content .error p:last-child,.rst-content .hint p:last-child,.rst-content .important p:last-child,.rst-content .note p:last-child,.rst-content .seealso p:last-child,.rst-content .tip p:last-child,.rst-content .warning p:last-child,.wy-alert p:last-child{margin-bottom:0}.wy-tray-container{position:fixed;bottom:0;left:0;z-index:600}.wy-tray-container li{display:block;width:300px;background:transparent;color:#fff;text-align:center;box-shadow:0 5px 5px 0 rgba(0,0,0,.1);padding:0 24px;min-width:20%;opacity:0;height:0;line-height:56px;overflow:hidden;-webkit-transition:all .3s ease-in;-moz-transition:all .3s ease-in;transition:all .3s ease-in}.wy-tray-container li.wy-tray-item-success{background:#27ae60}.wy-tray-container li.wy-tray-item-info{background:#2980b9}.wy-tray-container li.wy-tray-item-warning{background:#e67e22}.wy-tray-container li.wy-tray-item-danger{background:#e74c3c}.wy-tray-container li.on{opacity:1;height:56px}@media screen 
and (max-width:768px){.wy-tray-container{bottom:auto;top:0;width:100%}.wy-tray-container li{width:100%}}button{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle;cursor:pointer;line-height:normal;-webkit-appearance:button;*overflow:visible}button::-moz-focus-inner,input::-moz-focus-inner{border:0;padding:0}button[disabled]{cursor:default}.btn{display:inline-block;border-radius:2px;line-height:normal;white-space:nowrap;text-align:center;cursor:pointer;font-size:100%;padding:6px 12px 8px;color:#fff;border:1px solid rgba(0,0,0,.1);background-color:#27ae60;text-decoration:none;font-weight:400;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;box-shadow:inset 0 1px 2px -1px hsla(0,0%,100%,.5),inset 0 -2px 0 0 rgba(0,0,0,.1);outline-none:false;vertical-align:middle;*display:inline;zoom:1;-webkit-user-drag:none;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none;-webkit-transition:all .1s linear;-moz-transition:all .1s linear;transition:all .1s linear}.btn-hover{background:#2e8ece;color:#fff}.btn:hover{background:#2cc36b;color:#fff}.btn:focus{background:#2cc36b;outline:0}.btn:active{box-shadow:inset 0 -1px 0 0 rgba(0,0,0,.05),inset 0 2px 0 0 rgba(0,0,0,.1);padding:8px 12px 6px}.btn:visited{color:#fff}.btn-disabled,.btn-disabled:active,.btn-disabled:focus,.btn-disabled:hover,.btn:disabled{background-image:none;filter:progid:DXImageTransform.Microsoft.gradient(enabled = false);filter:alpha(opacity=40);opacity:.4;cursor:not-allowed;box-shadow:none}.btn::-moz-focus-inner{padding:0;border:0}.btn-small{font-size:80%}.btn-info{background-color:#2980b9!important}.btn-info:hover{background-color:#2e8ece!important}.btn-neutral{background-color:#f3f6f6!important;color:#404040!important}.btn-neutral:hover{background-color:#e5ebeb!important;color:#404040}.btn-neutral:visited{color:#404040!important}.btn-success{background-color:#27ae60!important}.btn-success:hover{background-color:#295!important}.btn-danger{background-color:#e74c3c!important}.btn-danger:hover{background-color:#ea6153!important}.btn-warning{background-color:#e67e22!important}.btn-warning:hover{background-color:#e98b39!important}.btn-invert{background-color:#222}.btn-invert:hover{background-color:#2f2f2f!important}.btn-link{background-color:transparent!important;color:#2980b9;box-shadow:none;border-color:transparent!important}.btn-link:active,.btn-link:hover{background-color:transparent!important;color:#409ad5!important;box-shadow:none}.btn-link:visited{color:#9b59b6}.wy-btn-group .btn,.wy-control .btn{vertical-align:middle}.wy-btn-group{margin-bottom:24px;*zoom:1}.wy-btn-group:after,.wy-btn-group:before{display:table;content:""}.wy-btn-group:after{clear:both}.wy-dropdown{position:relative;display:inline-block}.wy-dropdown-active .wy-dropdown-menu{display:block}.wy-dropdown-menu{position:absolute;left:0;display:none;float:left;top:100%;min-width:100%;background:#fcfcfc;z-index:100;border:1px solid #cfd7dd;box-shadow:0 2px 2px 0 rgba(0,0,0,.1);padding:12px}.wy-dropdown-menu>dd>a{display:block;clear:both;color:#404040;white-space:nowrap;font-size:90%;padding:0 12px;cursor:pointer}.wy-dropdown-menu>dd>a:hover{background:#2980b9;color:#fff}.wy-dropdown-menu>dd.divider{border-top:1px solid #cfd7dd;margin:6px 0}.wy-dropdown-menu>dd.search{padding-bottom:12px}.wy-dropdown-menu>dd.search 
input[type=search]{width:100%}.wy-dropdown-menu>dd.call-to-action{background:#e3e3e3;text-transform:uppercase;font-weight:500;font-size:80%}.wy-dropdown-menu>dd.call-to-action:hover{background:#e3e3e3}.wy-dropdown-menu>dd.call-to-action .btn{color:#fff}.wy-dropdown.wy-dropdown-up .wy-dropdown-menu{bottom:100%;top:auto;left:auto;right:0}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu{background:#fcfcfc;margin-top:2px}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu a{padding:6px 12px}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu a:hover{background:#2980b9;color:#fff}.wy-dropdown.wy-dropdown-left .wy-dropdown-menu{right:0;left:auto;text-align:right}.wy-dropdown-arrow:before{content:" ";border-bottom:5px solid #f5f5f5;border-left:5px solid transparent;border-right:5px solid transparent;position:absolute;display:block;top:-4px;left:50%;margin-left:-3px}.wy-dropdown-arrow.wy-dropdown-arrow-left:before{left:11px}.wy-form-stacked select{display:block}.wy-form-aligned .wy-help-inline,.wy-form-aligned input,.wy-form-aligned label,.wy-form-aligned select,.wy-form-aligned textarea{display:inline-block;*display:inline;*zoom:1;vertical-align:middle}.wy-form-aligned .wy-control-group>label{display:inline-block;vertical-align:middle;width:10em;margin:6px 12px 0 0;float:left}.wy-form-aligned .wy-control{float:left}.wy-form-aligned .wy-control label{display:block}.wy-form-aligned .wy-control select{margin-top:6px}fieldset{margin:0}fieldset,legend{border:0;padding:0}legend{width:100%;white-space:normal;margin-bottom:24px;font-size:150%;*margin-left:-7px}label,legend{display:block}label{margin:0 0 .3125em;color:#333;font-size:90%}input,select,textarea{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle}.wy-control-group{margin-bottom:24px;max-width:1200px;margin-left:auto;margin-right:auto;*zoom:1}.wy-control-group:after,.wy-control-group:before{display:table;content:""}.wy-control-group:after{clear:both}.wy-control-group.wy-control-group-required>label:after{content:" *";color:#e74c3c}.wy-control-group .wy-form-full,.wy-control-group .wy-form-halves,.wy-control-group .wy-form-thirds{padding-bottom:12px}.wy-control-group .wy-form-full input[type=color],.wy-control-group .wy-form-full input[type=date],.wy-control-group .wy-form-full input[type=datetime-local],.wy-control-group .wy-form-full input[type=datetime],.wy-control-group .wy-form-full input[type=email],.wy-control-group .wy-form-full input[type=month],.wy-control-group .wy-form-full input[type=number],.wy-control-group .wy-form-full input[type=password],.wy-control-group .wy-form-full input[type=search],.wy-control-group .wy-form-full input[type=tel],.wy-control-group .wy-form-full input[type=text],.wy-control-group .wy-form-full input[type=time],.wy-control-group .wy-form-full input[type=url],.wy-control-group .wy-form-full input[type=week],.wy-control-group .wy-form-full select,.wy-control-group .wy-form-halves input[type=color],.wy-control-group .wy-form-halves input[type=date],.wy-control-group .wy-form-halves input[type=datetime-local],.wy-control-group .wy-form-halves input[type=datetime],.wy-control-group .wy-form-halves input[type=email],.wy-control-group .wy-form-halves input[type=month],.wy-control-group .wy-form-halves input[type=number],.wy-control-group .wy-form-halves input[type=password],.wy-control-group .wy-form-halves input[type=search],.wy-control-group .wy-form-halves input[type=tel],.wy-control-group .wy-form-halves input[type=text],.wy-control-group .wy-form-halves input[type=time],.wy-control-group 
.wy-form-halves input[type=url],.wy-control-group .wy-form-halves input[type=week],.wy-control-group .wy-form-halves select,.wy-control-group .wy-form-thirds input[type=color],.wy-control-group .wy-form-thirds input[type=date],.wy-control-group .wy-form-thirds input[type=datetime-local],.wy-control-group .wy-form-thirds input[type=datetime],.wy-control-group .wy-form-thirds input[type=email],.wy-control-group .wy-form-thirds input[type=month],.wy-control-group .wy-form-thirds input[type=number],.wy-control-group .wy-form-thirds input[type=password],.wy-control-group .wy-form-thirds input[type=search],.wy-control-group .wy-form-thirds input[type=tel],.wy-control-group .wy-form-thirds input[type=text],.wy-control-group .wy-form-thirds input[type=time],.wy-control-group .wy-form-thirds input[type=url],.wy-control-group .wy-form-thirds input[type=week],.wy-control-group .wy-form-thirds select{width:100%}.wy-control-group .wy-form-full{float:left;display:block;width:100%;margin-right:0}.wy-control-group .wy-form-full:last-child{margin-right:0}.wy-control-group .wy-form-halves{float:left;display:block;margin-right:2.35765%;width:48.82117%}.wy-control-group .wy-form-halves:last-child,.wy-control-group .wy-form-halves:nth-of-type(2n){margin-right:0}.wy-control-group .wy-form-halves:nth-of-type(odd){clear:left}.wy-control-group .wy-form-thirds{float:left;display:block;margin-right:2.35765%;width:31.76157%}.wy-control-group .wy-form-thirds:last-child,.wy-control-group .wy-form-thirds:nth-of-type(3n){margin-right:0}.wy-control-group .wy-form-thirds:nth-of-type(3n+1){clear:left}.wy-control-group.wy-control-group-no-input .wy-control,.wy-control-no-input{margin:6px 0 0;font-size:90%}.wy-control-no-input{display:inline-block}.wy-control-group.fluid-input input[type=color],.wy-control-group.fluid-input input[type=date],.wy-control-group.fluid-input input[type=datetime-local],.wy-control-group.fluid-input input[type=datetime],.wy-control-group.fluid-input input[type=email],.wy-control-group.fluid-input input[type=month],.wy-control-group.fluid-input input[type=number],.wy-control-group.fluid-input input[type=password],.wy-control-group.fluid-input input[type=search],.wy-control-group.fluid-input input[type=tel],.wy-control-group.fluid-input input[type=text],.wy-control-group.fluid-input input[type=time],.wy-control-group.fluid-input input[type=url],.wy-control-group.fluid-input input[type=week]{width:100%}.wy-form-message-inline{padding-left:.3em;color:#666;font-size:90%}.wy-form-message{display:block;color:#999;font-size:70%;margin-top:.3125em;font-style:italic}.wy-form-message p{font-size:inherit;font-style:italic;margin-bottom:6px}.wy-form-message p:last-child{margin-bottom:0}input{line-height:normal}input[type=button],input[type=reset],input[type=submit]{-webkit-appearance:button;cursor:pointer;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;*overflow:visible}input[type=color],input[type=date],input[type=datetime-local],input[type=datetime],input[type=email],input[type=month],input[type=number],input[type=password],input[type=search],input[type=tel],input[type=text],input[type=time],input[type=url],input[type=week]{-webkit-appearance:none;padding:6px;display:inline-block;border:1px solid #ccc;font-size:80%;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;box-shadow:inset 0 1px 3px #ddd;border-radius:0;-webkit-transition:border .3s linear;-moz-transition:border .3s linear;transition:border .3s linear}input[type=datetime-local]{padding:.34375em 
.625em}input[disabled]{cursor:default}input[type=checkbox],input[type=radio]{padding:0;margin-right:.3125em;*height:13px;*width:13px}input[type=checkbox],input[type=radio],input[type=search]{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}input[type=search]::-webkit-search-cancel-button,input[type=search]::-webkit-search-decoration{-webkit-appearance:none}input[type=color]:focus,input[type=date]:focus,input[type=datetime-local]:focus,input[type=datetime]:focus,input[type=email]:focus,input[type=month]:focus,input[type=number]:focus,input[type=password]:focus,input[type=search]:focus,input[type=tel]:focus,input[type=text]:focus,input[type=time]:focus,input[type=url]:focus,input[type=week]:focus{outline:0;outline:thin dotted\9;border-color:#333}input.no-focus:focus{border-color:#ccc!important}input[type=checkbox]:focus,input[type=file]:focus,input[type=radio]:focus{outline:thin dotted #333;outline:1px auto #129fea}input[type=color][disabled],input[type=date][disabled],input[type=datetime-local][disabled],input[type=datetime][disabled],input[type=email][disabled],input[type=month][disabled],input[type=number][disabled],input[type=password][disabled],input[type=search][disabled],input[type=tel][disabled],input[type=text][disabled],input[type=time][disabled],input[type=url][disabled],input[type=week][disabled]{cursor:not-allowed;background-color:#fafafa}input:focus:invalid,select:focus:invalid,textarea:focus:invalid{color:#e74c3c;border:1px solid #e74c3c}input:focus:invalid:focus,select:focus:invalid:focus,textarea:focus:invalid:focus{border-color:#e74c3c}input[type=checkbox]:focus:invalid:focus,input[type=file]:focus:invalid:focus,input[type=radio]:focus:invalid:focus{outline-color:#e74c3c}input.wy-input-large{padding:12px;font-size:100%}textarea{overflow:auto;vertical-align:top;width:100%;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif}select,textarea{padding:.5em .625em;display:inline-block;border:1px solid #ccc;font-size:80%;box-shadow:inset 0 1px 3px #ddd;-webkit-transition:border .3s linear;-moz-transition:border .3s linear;transition:border .3s linear}select{border:1px solid #ccc;background-color:#fff}select[multiple]{height:auto}select:focus,textarea:focus{outline:0}input[readonly],select[disabled],select[readonly],textarea[disabled],textarea[readonly]{cursor:not-allowed;background-color:#fafafa}input[type=checkbox][disabled],input[type=radio][disabled]{cursor:not-allowed}.wy-checkbox,.wy-radio{margin:6px 0;color:#404040;display:block}.wy-checkbox input,.wy-radio input{vertical-align:baseline}.wy-form-message-inline{display:inline-block;*display:inline;*zoom:1;vertical-align:middle}.wy-input-prefix,.wy-input-suffix{white-space:nowrap;padding:6px}.wy-input-prefix .wy-input-context,.wy-input-suffix .wy-input-context{line-height:27px;padding:0 8px;display:inline-block;font-size:80%;background-color:#f3f6f6;border:1px solid #ccc;color:#999}.wy-input-suffix .wy-input-context{border-left:0}.wy-input-prefix .wy-input-context{border-right:0}.wy-switch{position:relative;display:block;height:24px;margin-top:12px;cursor:pointer}.wy-switch:before{left:0;top:0;width:36px;height:12px;background:#ccc}.wy-switch:after,.wy-switch:before{position:absolute;content:"";display:block;border-radius:4px;-webkit-transition:all .2s ease-in-out;-moz-transition:all .2s ease-in-out;transition:all .2s ease-in-out}.wy-switch:after{width:18px;height:18px;background:#999;left:-3px;top:-3px}.wy-switch 
span{position:absolute;left:48px;display:block;font-size:12px;color:#ccc;line-height:1}.wy-switch.active:before{background:#1e8449}.wy-switch.active:after{left:24px;background:#27ae60}.wy-switch.disabled{cursor:not-allowed;opacity:.8}.wy-control-group.wy-control-group-error .wy-form-message,.wy-control-group.wy-control-group-error>label{color:#e74c3c}.wy-control-group.wy-control-group-error input[type=color],.wy-control-group.wy-control-group-error input[type=date],.wy-control-group.wy-control-group-error input[type=datetime-local],.wy-control-group.wy-control-group-error input[type=datetime],.wy-control-group.wy-control-group-error input[type=email],.wy-control-group.wy-control-group-error input[type=month],.wy-control-group.wy-control-group-error input[type=number],.wy-control-group.wy-control-group-error input[type=password],.wy-control-group.wy-control-group-error input[type=search],.wy-control-group.wy-control-group-error input[type=tel],.wy-control-group.wy-control-group-error input[type=text],.wy-control-group.wy-control-group-error input[type=time],.wy-control-group.wy-control-group-error input[type=url],.wy-control-group.wy-control-group-error input[type=week],.wy-control-group.wy-control-group-error textarea{border:1px solid #e74c3c}.wy-inline-validate{white-space:nowrap}.wy-inline-validate .wy-input-context{padding:.5em .625em;display:inline-block;font-size:80%}.wy-inline-validate.wy-inline-validate-success .wy-input-context{color:#27ae60}.wy-inline-validate.wy-inline-validate-danger .wy-input-context{color:#e74c3c}.wy-inline-validate.wy-inline-validate-warning .wy-input-context{color:#e67e22}.wy-inline-validate.wy-inline-validate-info .wy-input-context{color:#2980b9}.rotate-90{-webkit-transform:rotate(90deg);-moz-transform:rotate(90deg);-ms-transform:rotate(90deg);-o-transform:rotate(90deg);transform:rotate(90deg)}.rotate-180{-webkit-transform:rotate(180deg);-moz-transform:rotate(180deg);-ms-transform:rotate(180deg);-o-transform:rotate(180deg);transform:rotate(180deg)}.rotate-270{-webkit-transform:rotate(270deg);-moz-transform:rotate(270deg);-ms-transform:rotate(270deg);-o-transform:rotate(270deg);transform:rotate(270deg)}.mirror{-webkit-transform:scaleX(-1);-moz-transform:scaleX(-1);-ms-transform:scaleX(-1);-o-transform:scaleX(-1);transform:scaleX(-1)}.mirror.rotate-90{-webkit-transform:scaleX(-1) rotate(90deg);-moz-transform:scaleX(-1) rotate(90deg);-ms-transform:scaleX(-1) rotate(90deg);-o-transform:scaleX(-1) rotate(90deg);transform:scaleX(-1) rotate(90deg)}.mirror.rotate-180{-webkit-transform:scaleX(-1) rotate(180deg);-moz-transform:scaleX(-1) rotate(180deg);-ms-transform:scaleX(-1) rotate(180deg);-o-transform:scaleX(-1) rotate(180deg);transform:scaleX(-1) rotate(180deg)}.mirror.rotate-270{-webkit-transform:scaleX(-1) rotate(270deg);-moz-transform:scaleX(-1) rotate(270deg);-ms-transform:scaleX(-1) rotate(270deg);-o-transform:scaleX(-1) rotate(270deg);transform:scaleX(-1) rotate(270deg)}@media only screen and (max-width:480px){.wy-form button[type=submit]{margin:.7em 0 0}.wy-form input[type=color],.wy-form input[type=date],.wy-form input[type=datetime-local],.wy-form input[type=datetime],.wy-form input[type=email],.wy-form input[type=month],.wy-form input[type=number],.wy-form input[type=password],.wy-form input[type=search],.wy-form input[type=tel],.wy-form input[type=text],.wy-form input[type=time],.wy-form input[type=url],.wy-form input[type=week],.wy-form label{margin-bottom:.3em;display:block}.wy-form input[type=color],.wy-form input[type=date],.wy-form 
input[type=datetime-local],.wy-form input[type=datetime],.wy-form input[type=email],.wy-form input[type=month],.wy-form input[type=number],.wy-form input[type=password],.wy-form input[type=search],.wy-form input[type=tel],.wy-form input[type=time],.wy-form input[type=url],.wy-form input[type=week]{margin-bottom:0}.wy-form-aligned .wy-control-group label{margin-bottom:.3em;text-align:left;display:block;width:100%}.wy-form-aligned .wy-control{margin:1.5em 0 0}.wy-form-message,.wy-form-message-inline,.wy-form .wy-help-inline{display:block;font-size:80%;padding:6px 0}}@media screen and (max-width:768px){.tablet-hide{display:none}}@media screen and (max-width:480px){.mobile-hide{display:none}}.float-left{float:left}.float-right{float:right}.full-width{width:100%}.rst-content table.docutils,.rst-content table.field-list,.wy-table{border-collapse:collapse;border-spacing:0;empty-cells:show;margin-bottom:24px}.rst-content table.docutils caption,.rst-content table.field-list caption,.wy-table caption{color:#000;font:italic 85%/1 arial,sans-serif;padding:1em 0;text-align:center}.rst-content table.docutils td,.rst-content table.docutils th,.rst-content table.field-list td,.rst-content table.field-list th,.wy-table td,.wy-table th{font-size:90%;margin:0;overflow:visible;padding:8px 16px}.rst-content table.docutils td:first-child,.rst-content table.docutils th:first-child,.rst-content table.field-list td:first-child,.rst-content table.field-list th:first-child,.wy-table td:first-child,.wy-table th:first-child{border-left-width:0}.rst-content table.docutils thead,.rst-content table.field-list thead,.wy-table thead{color:#000;text-align:left;vertical-align:bottom;white-space:nowrap}.rst-content table.docutils thead th,.rst-content table.field-list thead th,.wy-table thead th{font-weight:700;border-bottom:2px solid #e1e4e5}.rst-content table.docutils td,.rst-content table.field-list td,.wy-table td{background-color:transparent;vertical-align:middle}.rst-content table.docutils td p,.rst-content table.field-list td p,.wy-table td p{line-height:18px}.rst-content table.docutils td p:last-child,.rst-content table.field-list td p:last-child,.wy-table td p:last-child{margin-bottom:0}.rst-content table.docutils .wy-table-cell-min,.rst-content table.field-list .wy-table-cell-min,.wy-table .wy-table-cell-min{width:1%;padding-right:0}.rst-content table.docutils .wy-table-cell-min input[type=checkbox],.rst-content table.field-list .wy-table-cell-min input[type=checkbox],.wy-table .wy-table-cell-min input[type=checkbox]{margin:0}.wy-table-secondary{color:grey;font-size:90%}.wy-table-tertiary{color:grey;font-size:80%}.rst-content table.docutils:not(.field-list) tr:nth-child(2n-1) td,.wy-table-backed,.wy-table-odd td,.wy-table-striped tr:nth-child(2n-1) td{background-color:#f3f6f6}.rst-content table.docutils,.wy-table-bordered-all{border:1px solid #e1e4e5}.rst-content table.docutils td,.wy-table-bordered-all td{border-bottom:1px solid #e1e4e5;border-left:1px solid #e1e4e5}.rst-content table.docutils tbody>tr:last-child td,.wy-table-bordered-all tbody>tr:last-child td{border-bottom-width:0}.wy-table-bordered{border:1px solid #e1e4e5}.wy-table-bordered-rows td{border-bottom:1px solid #e1e4e5}.wy-table-bordered-rows tbody>tr:last-child td{border-bottom-width:0}.wy-table-horizontal td,.wy-table-horizontal th{border-width:0 0 1px;border-bottom:1px solid #e1e4e5}.wy-table-horizontal tbody>tr:last-child td{border-bottom-width:0}.wy-table-responsive{margin-bottom:24px;max-width:100%;overflow:auto}.wy-table-responsive 
table{margin-bottom:0!important}.wy-table-responsive table td,.wy-table-responsive table th{white-space:nowrap}a{color:#2980b9;text-decoration:none;cursor:pointer}a:hover{color:#3091d1}a:visited{color:#9b59b6}html{height:100%}body,html{overflow-x:hidden}body{font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;font-weight:400;color:#404040;min-height:100%;background:#edf0f2}.wy-text-left{text-align:left}.wy-text-center{text-align:center}.wy-text-right{text-align:right}.wy-text-large{font-size:120%}.wy-text-normal{font-size:100%}.wy-text-small,small{font-size:80%}.wy-text-strike{text-decoration:line-through}.wy-text-warning{color:#e67e22!important}a.wy-text-warning:hover{color:#eb9950!important}.wy-text-info{color:#2980b9!important}a.wy-text-info:hover{color:#409ad5!important}.wy-text-success{color:#27ae60!important}a.wy-text-success:hover{color:#36d278!important}.wy-text-danger{color:#e74c3c!important}a.wy-text-danger:hover{color:#ed7669!important}.wy-text-neutral{color:#404040!important}a.wy-text-neutral:hover{color:#595959!important}.rst-content .toctree-wrapper>p.caption,h1,h2,h3,h4,h5,h6,legend{margin-top:0;font-weight:700;font-family:Roboto Slab,ff-tisa-web-pro,Georgia,Arial,sans-serif}p{line-height:24px;font-size:16px;margin:0 0 24px}h1{font-size:175%}.rst-content .toctree-wrapper>p.caption,h2{font-size:150%}h3{font-size:125%}h4{font-size:115%}h5{font-size:110%}h6{font-size:100%}hr{display:block;height:1px;border:0;border-top:1px solid #e1e4e5;margin:24px 0;padding:0}.rst-content code,.rst-content tt,code{white-space:nowrap;max-width:100%;background:#fff;border:1px solid #e1e4e5;font-size:75%;padding:0 5px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;color:#e74c3c;overflow-x:auto}.rst-content tt.code-large,code.code-large{font-size:90%}.rst-content .section ul,.rst-content .toctree-wrapper ul,.rst-content section ul,.wy-plain-list-disc,article ul{list-style:disc;line-height:24px;margin-bottom:24px}.rst-content .section ul li,.rst-content .toctree-wrapper ul li,.rst-content section ul li,.wy-plain-list-disc li,article ul li{list-style:disc;margin-left:24px}.rst-content .section ul li p:last-child,.rst-content .section ul li ul,.rst-content .toctree-wrapper ul li p:last-child,.rst-content .toctree-wrapper ul li ul,.rst-content section ul li p:last-child,.rst-content section ul li ul,.wy-plain-list-disc li p:last-child,.wy-plain-list-disc li ul,article ul li p:last-child,article ul li ul{margin-bottom:0}.rst-content .section ul li li,.rst-content .toctree-wrapper ul li li,.rst-content section ul li li,.wy-plain-list-disc li li,article ul li li{list-style:circle}.rst-content .section ul li li li,.rst-content .toctree-wrapper ul li li li,.rst-content section ul li li li,.wy-plain-list-disc li li li,article ul li li li{list-style:square}.rst-content .section ul li ol li,.rst-content .toctree-wrapper ul li ol li,.rst-content section ul li ol li,.wy-plain-list-disc li ol li,article ul li ol li{list-style:decimal}.rst-content .section ol,.rst-content .section ol.arabic,.rst-content .toctree-wrapper ol,.rst-content .toctree-wrapper ol.arabic,.rst-content section ol,.rst-content section ol.arabic,.wy-plain-list-decimal,article ol{list-style:decimal;line-height:24px;margin-bottom:24px}.rst-content .section ol.arabic li,.rst-content .section ol li,.rst-content .toctree-wrapper ol.arabic li,.rst-content .toctree-wrapper ol li,.rst-content section ol.arabic li,.rst-content section ol li,.wy-plain-list-decimal li,article ol 
li{list-style:decimal;margin-left:24px}.rst-content .section ol.arabic li ul,.rst-content .section ol li p:last-child,.rst-content .section ol li ul,.rst-content .toctree-wrapper ol.arabic li ul,.rst-content .toctree-wrapper ol li p:last-child,.rst-content .toctree-wrapper ol li ul,.rst-content section ol.arabic li ul,.rst-content section ol li p:last-child,.rst-content section ol li ul,.wy-plain-list-decimal li p:last-child,.wy-plain-list-decimal li ul,article ol li p:last-child,article ol li ul{margin-bottom:0}.rst-content .section ol.arabic li ul li,.rst-content .section ol li ul li,.rst-content .toctree-wrapper ol.arabic li ul li,.rst-content .toctree-wrapper ol li ul li,.rst-content section ol.arabic li ul li,.rst-content section ol li ul li,.wy-plain-list-decimal li ul li,article ol li ul li{list-style:disc}.wy-breadcrumbs{*zoom:1}.wy-breadcrumbs:after,.wy-breadcrumbs:before{display:table;content:""}.wy-breadcrumbs:after{clear:both}.wy-breadcrumbs>li{display:inline-block;padding-top:5px}.wy-breadcrumbs>li.wy-breadcrumbs-aside{float:right}.rst-content .wy-breadcrumbs>li code,.rst-content .wy-breadcrumbs>li tt,.wy-breadcrumbs>li .rst-content tt,.wy-breadcrumbs>li code{all:inherit;color:inherit}.breadcrumb-item:before{content:"/";color:#bbb;font-size:13px;padding:0 6px 0 3px}.wy-breadcrumbs-extra{margin-bottom:0;color:#b3b3b3;font-size:80%;display:inline-block}@media screen and (max-width:480px){.wy-breadcrumbs-extra,.wy-breadcrumbs li.wy-breadcrumbs-aside{display:none}}@media print{.wy-breadcrumbs li.wy-breadcrumbs-aside{display:none}}html{font-size:16px}.wy-affix{position:fixed;top:1.618em}.wy-menu a:hover{text-decoration:none}.wy-menu-horiz{*zoom:1}.wy-menu-horiz:after,.wy-menu-horiz:before{display:table;content:""}.wy-menu-horiz:after{clear:both}.wy-menu-horiz li,.wy-menu-horiz ul{display:inline-block}.wy-menu-horiz li:hover{background:hsla(0,0%,100%,.1)}.wy-menu-horiz li.divide-left{border-left:1px solid #404040}.wy-menu-horiz li.divide-right{border-right:1px solid #404040}.wy-menu-horiz a{height:32px;display:inline-block;line-height:32px;padding:0 16px}.wy-menu-vertical{width:300px}.wy-menu-vertical header,.wy-menu-vertical p.caption{color:#55a5d9;height:32px;line-height:32px;padding:0 1.618em;margin:12px 0 0;display:block;font-weight:700;text-transform:uppercase;font-size:85%;white-space:nowrap}.wy-menu-vertical ul{margin-bottom:0}.wy-menu-vertical li.divide-top{border-top:1px solid #404040}.wy-menu-vertical li.divide-bottom{border-bottom:1px solid #404040}.wy-menu-vertical li.current{background:#e3e3e3}.wy-menu-vertical li.current a{color:grey;border-right:1px solid #c9c9c9;padding:.4045em 2.427em}.wy-menu-vertical li.current a:hover{background:#d6d6d6}.rst-content .wy-menu-vertical li tt,.wy-menu-vertical li .rst-content tt,.wy-menu-vertical li code{border:none;background:inherit;color:inherit;padding-left:0;padding-right:0}.wy-menu-vertical li button.toctree-expand{display:block;float:left;margin-left:-1.2em;line-height:18px;color:#4d4d4d;border:none;background:none;padding:0}.wy-menu-vertical li.current>a,.wy-menu-vertical li.on a{color:#404040;font-weight:700;position:relative;background:#fcfcfc;border:none;padding:.4045em 1.618em}.wy-menu-vertical li.current>a:hover,.wy-menu-vertical li.on a:hover{background:#fcfcfc}.wy-menu-vertical li.current>a:hover button.toctree-expand,.wy-menu-vertical li.on a:hover button.toctree-expand{color:grey}.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a 
button.toctree-expand{display:block;line-height:18px;color:#333}.wy-menu-vertical li.toctree-l1.current>a{border-bottom:1px solid #c9c9c9;border-top:1px solid #c9c9c9}.wy-menu-vertical .toctree-l1.current .toctree-l2>ul,.wy-menu-vertical .toctree-l2.current .toctree-l3>ul,.wy-menu-vertical .toctree-l3.current .toctree-l4>ul,.wy-menu-vertical .toctree-l4.current .toctree-l5>ul,.wy-menu-vertical .toctree-l5.current .toctree-l6>ul,.wy-menu-vertical .toctree-l6.current .toctree-l7>ul,.wy-menu-vertical .toctree-l7.current .toctree-l8>ul,.wy-menu-vertical .toctree-l8.current .toctree-l9>ul,.wy-menu-vertical .toctree-l9.current .toctree-l10>ul,.wy-menu-vertical .toctree-l10.current .toctree-l11>ul{display:none}.wy-menu-vertical .toctree-l1.current .current.toctree-l2>ul,.wy-menu-vertical .toctree-l2.current .current.toctree-l3>ul,.wy-menu-vertical .toctree-l3.current .current.toctree-l4>ul,.wy-menu-vertical .toctree-l4.current .current.toctree-l5>ul,.wy-menu-vertical .toctree-l5.current .current.toctree-l6>ul,.wy-menu-vertical .toctree-l6.current .current.toctree-l7>ul,.wy-menu-vertical .toctree-l7.current .current.toctree-l8>ul,.wy-menu-vertical .toctree-l8.current .current.toctree-l9>ul,.wy-menu-vertical .toctree-l9.current .current.toctree-l10>ul,.wy-menu-vertical .toctree-l10.current .current.toctree-l11>ul{display:block}.wy-menu-vertical li.toctree-l3,.wy-menu-vertical li.toctree-l4{font-size:.9em}.wy-menu-vertical li.toctree-l2 a,.wy-menu-vertical li.toctree-l3 a,.wy-menu-vertical li.toctree-l4 a,.wy-menu-vertical li.toctree-l5 a,.wy-menu-vertical li.toctree-l6 a,.wy-menu-vertical li.toctree-l7 a,.wy-menu-vertical li.toctree-l8 a,.wy-menu-vertical li.toctree-l9 a,.wy-menu-vertical li.toctree-l10 a{color:#404040}.wy-menu-vertical li.toctree-l2 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l3 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l4 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l5 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l6 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l7 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l8 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l9 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l10 a:hover button.toctree-expand{color:grey}.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a,.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a,.wy-menu-vertical li.toctree-l4.current li.toctree-l5>a,.wy-menu-vertical li.toctree-l5.current li.toctree-l6>a,.wy-menu-vertical li.toctree-l6.current li.toctree-l7>a,.wy-menu-vertical li.toctree-l7.current li.toctree-l8>a,.wy-menu-vertical li.toctree-l8.current li.toctree-l9>a,.wy-menu-vertical li.toctree-l9.current li.toctree-l10>a,.wy-menu-vertical li.toctree-l10.current li.toctree-l11>a{display:block}.wy-menu-vertical li.toctree-l2.current>a{padding:.4045em 2.427em}.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a{padding:.4045em 1.618em .4045em 4.045em}.wy-menu-vertical li.toctree-l3.current>a{padding:.4045em 4.045em}.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a{padding:.4045em 1.618em .4045em 5.663em}.wy-menu-vertical li.toctree-l4.current>a{padding:.4045em 5.663em}.wy-menu-vertical li.toctree-l4.current li.toctree-l5>a{padding:.4045em 1.618em .4045em 7.281em}.wy-menu-vertical li.toctree-l5.current>a{padding:.4045em 7.281em}.wy-menu-vertical li.toctree-l5.current li.toctree-l6>a{padding:.4045em 1.618em .4045em 8.899em}.wy-menu-vertical li.toctree-l6.current>a{padding:.4045em 
8.899em}.wy-menu-vertical li.toctree-l6.current li.toctree-l7>a{padding:.4045em 1.618em .4045em 10.517em}.wy-menu-vertical li.toctree-l7.current>a{padding:.4045em 10.517em}.wy-menu-vertical li.toctree-l7.current li.toctree-l8>a{padding:.4045em 1.618em .4045em 12.135em}.wy-menu-vertical li.toctree-l8.current>a{padding:.4045em 12.135em}.wy-menu-vertical li.toctree-l8.current li.toctree-l9>a{padding:.4045em 1.618em .4045em 13.753em}.wy-menu-vertical li.toctree-l9.current>a{padding:.4045em 13.753em}.wy-menu-vertical li.toctree-l9.current li.toctree-l10>a{padding:.4045em 1.618em .4045em 15.371em}.wy-menu-vertical li.toctree-l10.current>a{padding:.4045em 15.371em}.wy-menu-vertical li.toctree-l10.current li.toctree-l11>a{padding:.4045em 1.618em .4045em 16.989em}.wy-menu-vertical li.toctree-l2.current>a,.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a{background:#c9c9c9}.wy-menu-vertical li.toctree-l2 button.toctree-expand{color:#a3a3a3}.wy-menu-vertical li.toctree-l3.current>a,.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a{background:#bdbdbd}.wy-menu-vertical li.toctree-l3 button.toctree-expand{color:#969696}.wy-menu-vertical li.current ul{display:block}.wy-menu-vertical li ul{margin-bottom:0;display:none}.wy-menu-vertical li ul li a{margin-bottom:0;color:#d9d9d9;font-weight:400}.wy-menu-vertical a{line-height:18px;padding:.4045em 1.618em;display:block;position:relative;font-size:90%;color:#d9d9d9}.wy-menu-vertical a:hover{background-color:#4e4a4a;cursor:pointer}.wy-menu-vertical a:hover button.toctree-expand{color:#d9d9d9}.wy-menu-vertical a:active{background-color:#2980b9;cursor:pointer;color:#fff}.wy-menu-vertical a:active button.toctree-expand{color:#fff}.wy-side-nav-search{display:block;width:300px;padding:.809em;margin-bottom:.809em;z-index:200;background-color:#2980b9;text-align:center;color:#fcfcfc}.wy-side-nav-search input[type=text]{width:100%;border-radius:50px;padding:6px 12px;border-color:#2472a4}.wy-side-nav-search img{display:block;margin:auto auto .809em;height:45px;width:45px;background-color:#2980b9;padding:5px;border-radius:100%}.wy-side-nav-search .wy-dropdown>a,.wy-side-nav-search>a{color:#fcfcfc;font-size:100%;font-weight:700;display:inline-block;padding:4px 6px;margin-bottom:.809em;max-width:100%}.wy-side-nav-search .wy-dropdown>a:hover,.wy-side-nav-search>a:hover{background:hsla(0,0%,100%,.1)}.wy-side-nav-search .wy-dropdown>a img.logo,.wy-side-nav-search>a img.logo{display:block;margin:0 auto;height:auto;width:auto;border-radius:0;max-width:100%;background:transparent}.wy-side-nav-search .wy-dropdown>a.icon img.logo,.wy-side-nav-search>a.icon img.logo{margin-top:.85em}.wy-side-nav-search>div.version{margin-top:-.4045em;margin-bottom:.809em;font-weight:400;color:hsla(0,0%,100%,.3)}.wy-nav .wy-menu-vertical header{color:#2980b9}.wy-nav .wy-menu-vertical a{color:#b3b3b3}.wy-nav .wy-menu-vertical a:hover{background-color:#2980b9;color:#fff}[data-menu-wrap]{-webkit-transition:all .2s ease-in;-moz-transition:all .2s ease-in;transition:all .2s 
ease-in;position:absolute;opacity:1;width:100%;opacity:0}[data-menu-wrap].move-center{left:0;right:auto;opacity:1}[data-menu-wrap].move-left{right:auto;left:-100%;opacity:0}[data-menu-wrap].move-right{right:-100%;left:auto;opacity:0}.wy-body-for-nav{background:#fcfcfc}.wy-grid-for-nav{position:absolute;width:100%;height:100%}.wy-nav-side{position:fixed;top:0;bottom:0;left:0;padding-bottom:2em;width:300px;overflow-x:hidden;overflow-y:hidden;min-height:100%;color:#9b9b9b;background:#343131;z-index:200}.wy-side-scroll{width:320px;position:relative;overflow-x:hidden;overflow-y:scroll;height:100%}.wy-nav-top{display:none;background:#2980b9;color:#fff;padding:.4045em .809em;position:relative;line-height:50px;text-align:center;font-size:100%;*zoom:1}.wy-nav-top:after,.wy-nav-top:before{display:table;content:""}.wy-nav-top:after{clear:both}.wy-nav-top a{color:#fff;font-weight:700}.wy-nav-top img{margin-right:12px;height:45px;width:45px;background-color:#2980b9;padding:5px;border-radius:100%}.wy-nav-top i{font-size:30px;float:left;cursor:pointer;padding-top:inherit}.wy-nav-content-wrap{margin-left:300px;background:#fcfcfc;min-height:100%}.wy-nav-content{padding:1.618em 3.236em;height:100%;max-width:800px;margin:auto}.wy-body-mask{position:fixed;width:100%;height:100%;background:rgba(0,0,0,.2);display:none;z-index:499}.wy-body-mask.on{display:block}footer{color:grey}footer p{margin-bottom:12px}.rst-content footer span.commit tt,footer span.commit .rst-content tt,footer span.commit code{padding:0;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;font-size:1em;background:none;border:none;color:grey}.rst-footer-buttons{*zoom:1}.rst-footer-buttons:after,.rst-footer-buttons:before{width:100%;display:table;content:""}.rst-footer-buttons:after{clear:both}.rst-breadcrumbs-buttons{margin-top:12px;*zoom:1}.rst-breadcrumbs-buttons:after,.rst-breadcrumbs-buttons:before{display:table;content:""}.rst-breadcrumbs-buttons:after{clear:both}#search-results .search li{margin-bottom:24px;border-bottom:1px solid #e1e4e5;padding-bottom:24px}#search-results .search li:first-child{border-top:1px solid #e1e4e5;padding-top:24px}#search-results .search li a{font-size:120%;margin-bottom:12px;display:inline-block}#search-results .context{color:grey;font-size:90%}.genindextable li>ul{margin-left:24px}@media screen and (max-width:768px){.wy-body-for-nav{background:#fcfcfc}.wy-nav-top{display:block}.wy-nav-side{left:-300px}.wy-nav-side.shift{width:85%;left:0}.wy-menu.wy-menu-vertical,.wy-side-nav-search,.wy-side-scroll{width:auto}.wy-nav-content-wrap{margin-left:0}.wy-nav-content-wrap .wy-nav-content{padding:1.618em}.wy-nav-content-wrap.shift{position:fixed;min-width:100%;left:85%;top:0;height:100%;overflow:hidden}}@media screen and (min-width:1100px){.wy-nav-content-wrap{background:rgba(0,0,0,.05)}.wy-nav-content{margin:0;background:#fcfcfc}}@media print{.rst-versions,.wy-nav-side,footer{display:none}.wy-nav-content-wrap{margin-left:0}}.rst-versions{position:fixed;bottom:0;left:0;width:300px;color:#fcfcfc;background:#1f1d1d;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;z-index:400}.rst-versions a{color:#2980b9;text-decoration:none}.rst-versions .rst-badge-small{display:none}.rst-versions .rst-current-version{padding:12px;background-color:#272525;display:block;text-align:right;font-size:90%;cursor:pointer;color:#27ae60;*zoom:1}.rst-versions .rst-current-version:after,.rst-versions .rst-current-version:before{display:table;content:""}.rst-versions 
.rst-current-version:after{clear:both}.rst-content .code-block-caption .rst-versions .rst-current-version .headerlink,.rst-content .eqno .rst-versions .rst-current-version .headerlink,.rst-content .rst-versions .rst-current-version .admonition-title,.rst-content code.download .rst-versions .rst-current-version span:first-child,.rst-content dl dt .rst-versions .rst-current-version .headerlink,.rst-content h1 .rst-versions .rst-current-version .headerlink,.rst-content h2 .rst-versions .rst-current-version .headerlink,.rst-content h3 .rst-versions .rst-current-version .headerlink,.rst-content h4 .rst-versions .rst-current-version .headerlink,.rst-content h5 .rst-versions .rst-current-version .headerlink,.rst-content h6 .rst-versions .rst-current-version .headerlink,.rst-content p .rst-versions .rst-current-version .headerlink,.rst-content table>caption .rst-versions .rst-current-version .headerlink,.rst-content tt.download .rst-versions .rst-current-version span:first-child,.rst-versions .rst-current-version .fa,.rst-versions .rst-current-version .icon,.rst-versions .rst-current-version .rst-content .admonition-title,.rst-versions .rst-current-version .rst-content .code-block-caption .headerlink,.rst-versions .rst-current-version .rst-content .eqno .headerlink,.rst-versions .rst-current-version .rst-content code.download span:first-child,.rst-versions .rst-current-version .rst-content dl dt .headerlink,.rst-versions .rst-current-version .rst-content h1 .headerlink,.rst-versions .rst-current-version .rst-content h2 .headerlink,.rst-versions .rst-current-version .rst-content h3 .headerlink,.rst-versions .rst-current-version .rst-content h4 .headerlink,.rst-versions .rst-current-version .rst-content h5 .headerlink,.rst-versions .rst-current-version .rst-content h6 .headerlink,.rst-versions .rst-current-version .rst-content p .headerlink,.rst-versions .rst-current-version .rst-content table>caption .headerlink,.rst-versions .rst-current-version .rst-content tt.download span:first-child,.rst-versions .rst-current-version .wy-menu-vertical li button.toctree-expand,.wy-menu-vertical li .rst-versions .rst-current-version button.toctree-expand{color:#fcfcfc}.rst-versions .rst-current-version .fa-book,.rst-versions .rst-current-version .icon-book{float:left}.rst-versions .rst-current-version.rst-out-of-date{background-color:#e74c3c;color:#fff}.rst-versions .rst-current-version.rst-active-old-version{background-color:#f1c40f;color:#000}.rst-versions.shift-up{height:auto;max-height:100%;overflow-y:scroll}.rst-versions.shift-up .rst-other-versions{display:block}.rst-versions .rst-other-versions{font-size:90%;padding:12px;color:grey;display:none}.rst-versions .rst-other-versions hr{display:block;height:1px;border:0;margin:20px 0;padding:0;border-top:1px solid #413d3d}.rst-versions .rst-other-versions dd{display:inline-block;margin:0}.rst-versions .rst-other-versions dd a{display:inline-block;padding:6px;color:#fcfcfc}.rst-versions.rst-badge{width:auto;bottom:20px;right:20px;left:auto;border:none;max-width:300px;max-height:90%}.rst-versions.rst-badge .fa-book,.rst-versions.rst-badge .icon-book{float:none;line-height:30px}.rst-versions.rst-badge.shift-up .rst-current-version{text-align:right}.rst-versions.rst-badge.shift-up .rst-current-version .fa-book,.rst-versions.rst-badge.shift-up .rst-current-version .icon-book{float:left}.rst-versions.rst-badge>.rst-current-version{width:auto;height:30px;line-height:30px;padding:0 6px;display:block;text-align:center}@media screen and 
(max-width:768px){.rst-versions{width:85%;display:none}.rst-versions.shift{display:block}}.rst-content .toctree-wrapper>p.caption,.rst-content h1,.rst-content h2,.rst-content h3,.rst-content h4,.rst-content h5,.rst-content h6{margin-bottom:24px}.rst-content img{max-width:100%;height:auto}.rst-content div.figure,.rst-content figure{margin-bottom:24px}.rst-content div.figure .caption-text,.rst-content figure .caption-text{font-style:italic}.rst-content div.figure p:last-child.caption,.rst-content figure p:last-child.caption{margin-bottom:0}.rst-content div.figure.align-center,.rst-content figure.align-center{text-align:center}.rst-content .section>a>img,.rst-content .section>img,.rst-content section>a>img,.rst-content section>img{margin-bottom:24px}.rst-content abbr[title]{text-decoration:none}.rst-content.style-external-links a.reference.external:after{font-family:FontAwesome;content:"\f08e";color:#b3b3b3;vertical-align:super;font-size:60%;margin:0 .2em}.rst-content blockquote{margin-left:24px;line-height:24px;margin-bottom:24px}.rst-content pre.literal-block{white-space:pre;margin:0;padding:12px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;display:block;overflow:auto}.rst-content div[class^=highlight],.rst-content pre.literal-block{border:1px solid #e1e4e5;overflow-x:auto;margin:1px 0 24px}.rst-content div[class^=highlight] div[class^=highlight],.rst-content pre.literal-block div[class^=highlight]{padding:0;border:none;margin:0}.rst-content div[class^=highlight] td.code{width:100%}.rst-content .linenodiv pre{border-right:1px solid #e6e9ea;margin:0;padding:12px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;user-select:none;pointer-events:none}.rst-content div[class^=highlight] pre{white-space:pre;margin:0;padding:12px;display:block;overflow:auto}.rst-content div[class^=highlight] pre .hll{display:block;margin:0 -12px;padding:0 12px}.rst-content .linenodiv pre,.rst-content div[class^=highlight] pre,.rst-content pre.literal-block{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;font-size:12px;line-height:1.4}.rst-content div.highlight .gp,.rst-content div.highlight span.linenos{user-select:none;pointer-events:none}.rst-content div.highlight span.linenos{display:inline-block;padding-left:0;padding-right:12px;margin-right:12px;border-right:1px solid #e6e9ea}.rst-content .code-block-caption{font-style:italic;font-size:85%;line-height:1;padding:1em 0;text-align:center}@media print{.rst-content .codeblock,.rst-content div[class^=highlight],.rst-content div[class^=highlight] pre{white-space:pre-wrap}}.rst-content .admonition,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .danger,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning{clear:both}.rst-content .admonition-todo .last,.rst-content .admonition-todo>:last-child,.rst-content .admonition .last,.rst-content .admonition>:last-child,.rst-content .attention .last,.rst-content .attention>:last-child,.rst-content .caution .last,.rst-content .caution>:last-child,.rst-content .danger .last,.rst-content .danger>:last-child,.rst-content .error .last,.rst-content .error>:last-child,.rst-content .hint .last,.rst-content .hint>:last-child,.rst-content .important .last,.rst-content .important>:last-child,.rst-content .note .last,.rst-content .note>:last-child,.rst-content .seealso 
.last,.rst-content .seealso>:last-child,.rst-content .tip .last,.rst-content .tip>:last-child,.rst-content .warning .last,.rst-content .warning>:last-child{margin-bottom:0}.rst-content .admonition-title:before{margin-right:4px}.rst-content .admonition table{border-color:rgba(0,0,0,.1)}.rst-content .admonition table td,.rst-content .admonition table th{background:transparent!important;border-color:rgba(0,0,0,.1)!important}.rst-content .section ol.loweralpha,.rst-content .section ol.loweralpha>li,.rst-content .toctree-wrapper ol.loweralpha,.rst-content .toctree-wrapper ol.loweralpha>li,.rst-content section ol.loweralpha,.rst-content section ol.loweralpha>li{list-style:lower-alpha}.rst-content .section ol.upperalpha,.rst-content .section ol.upperalpha>li,.rst-content .toctree-wrapper ol.upperalpha,.rst-content .toctree-wrapper ol.upperalpha>li,.rst-content section ol.upperalpha,.rst-content section ol.upperalpha>li{list-style:upper-alpha}.rst-content .section ol li>*,.rst-content .section ul li>*,.rst-content .toctree-wrapper ol li>*,.rst-content .toctree-wrapper ul li>*,.rst-content section ol li>*,.rst-content section ul li>*{margin-top:12px;margin-bottom:12px}.rst-content .section ol li>:first-child,.rst-content .section ul li>:first-child,.rst-content .toctree-wrapper ol li>:first-child,.rst-content .toctree-wrapper ul li>:first-child,.rst-content section ol li>:first-child,.rst-content section ul li>:first-child{margin-top:0}.rst-content .section ol li>p,.rst-content .section ol li>p:last-child,.rst-content .section ul li>p,.rst-content .section ul li>p:last-child,.rst-content .toctree-wrapper ol li>p,.rst-content .toctree-wrapper ol li>p:last-child,.rst-content .toctree-wrapper ul li>p,.rst-content .toctree-wrapper ul li>p:last-child,.rst-content section ol li>p,.rst-content section ol li>p:last-child,.rst-content section ul li>p,.rst-content section ul li>p:last-child{margin-bottom:12px}.rst-content .section ol li>p:only-child,.rst-content .section ol li>p:only-child:last-child,.rst-content .section ul li>p:only-child,.rst-content .section ul li>p:only-child:last-child,.rst-content .toctree-wrapper ol li>p:only-child,.rst-content .toctree-wrapper ol li>p:only-child:last-child,.rst-content .toctree-wrapper ul li>p:only-child,.rst-content .toctree-wrapper ul li>p:only-child:last-child,.rst-content section ol li>p:only-child,.rst-content section ol li>p:only-child:last-child,.rst-content section ul li>p:only-child,.rst-content section ul li>p:only-child:last-child{margin-bottom:0}.rst-content .section ol li>ol,.rst-content .section ol li>ul,.rst-content .section ul li>ol,.rst-content .section ul li>ul,.rst-content .toctree-wrapper ol li>ol,.rst-content .toctree-wrapper ol li>ul,.rst-content .toctree-wrapper ul li>ol,.rst-content .toctree-wrapper ul li>ul,.rst-content section ol li>ol,.rst-content section ol li>ul,.rst-content section ul li>ol,.rst-content section ul li>ul{margin-bottom:12px}.rst-content .section ol.simple li>*,.rst-content .section ol.simple li ol,.rst-content .section ol.simple li ul,.rst-content .section ul.simple li>*,.rst-content .section ul.simple li ol,.rst-content .section ul.simple li ul,.rst-content .toctree-wrapper ol.simple li>*,.rst-content .toctree-wrapper ol.simple li ol,.rst-content .toctree-wrapper ol.simple li ul,.rst-content .toctree-wrapper ul.simple li>*,.rst-content .toctree-wrapper ul.simple li ol,.rst-content .toctree-wrapper ul.simple li ul,.rst-content section ol.simple li>*,.rst-content section ol.simple li ol,.rst-content section ol.simple li 
ul,.rst-content section ul.simple li>*,.rst-content section ul.simple li ol,.rst-content section ul.simple li ul{margin-top:0;margin-bottom:0}.rst-content .line-block{margin-left:0;margin-bottom:24px;line-height:24px}.rst-content .line-block .line-block{margin-left:24px;margin-bottom:0}.rst-content .topic-title{font-weight:700;margin-bottom:12px}.rst-content .toc-backref{color:#404040}.rst-content .align-right{float:right;margin:0 0 24px 24px}.rst-content .align-left{float:left;margin:0 24px 24px 0}.rst-content .align-center{margin:auto}.rst-content .align-center:not(table){display:block}.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content .toctree-wrapper>p.caption .headerlink,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 .headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 .headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink{opacity:0;font-size:14px;font-family:FontAwesome;margin-left:.5em}.rst-content .code-block-caption .headerlink:focus,.rst-content .code-block-caption:hover .headerlink,.rst-content .eqno .headerlink:focus,.rst-content .eqno:hover .headerlink,.rst-content .toctree-wrapper>p.caption .headerlink:focus,.rst-content .toctree-wrapper>p.caption:hover .headerlink,.rst-content dl dt .headerlink:focus,.rst-content dl dt:hover .headerlink,.rst-content h1 .headerlink:focus,.rst-content h1:hover .headerlink,.rst-content h2 .headerlink:focus,.rst-content h2:hover .headerlink,.rst-content h3 .headerlink:focus,.rst-content h3:hover .headerlink,.rst-content h4 .headerlink:focus,.rst-content h4:hover .headerlink,.rst-content h5 .headerlink:focus,.rst-content h5:hover .headerlink,.rst-content h6 .headerlink:focus,.rst-content h6:hover .headerlink,.rst-content p.caption .headerlink:focus,.rst-content p.caption:hover .headerlink,.rst-content p .headerlink:focus,.rst-content p:hover .headerlink,.rst-content table>caption .headerlink:focus,.rst-content table>caption:hover .headerlink{opacity:1}.rst-content p a{overflow-wrap:anywhere}.rst-content .wy-table td p,.rst-content .wy-table td ul,.rst-content .wy-table th p,.rst-content .wy-table th ul,.rst-content table.docutils td p,.rst-content table.docutils td ul,.rst-content table.docutils th p,.rst-content table.docutils th ul,.rst-content table.field-list td p,.rst-content table.field-list td ul,.rst-content table.field-list th p,.rst-content table.field-list th ul{font-size:inherit}.rst-content .btn:focus{outline:2px solid}.rst-content table>caption .headerlink:after{font-size:12px}.rst-content .centered{text-align:center}.rst-content .sidebar{float:right;width:40%;display:block;margin:0 0 24px 24px;padding:24px;background:#f3f6f6;border:1px solid #e1e4e5}.rst-content .sidebar dl,.rst-content .sidebar p,.rst-content .sidebar ul{font-size:90%}.rst-content .sidebar .last,.rst-content .sidebar>:last-child{margin-bottom:0}.rst-content .sidebar .sidebar-title{display:block;font-family:Roboto Slab,ff-tisa-web-pro,Georgia,Arial,sans-serif;font-weight:700;background:#e1e4e5;padding:6px 12px;margin:-24px -24px 24px;font-size:100%}.rst-content .highlighted{background:#f1c40f;box-shadow:0 0 0 2px #f1c40f;display:inline;font-weight:700}.rst-content .citation-reference,.rst-content .footnote-reference{vertical-align:baseline;position:relative;top:-.4em;line-height:0;font-size:90%}.rst-content .citation-reference>span.fn-bracket,.rst-content 
.footnote-reference>span.fn-bracket{display:none}.rst-content .hlist{width:100%}.rst-content dl dt span.classifier:before{content:" : "}.rst-content dl dt span.classifier-delimiter{display:none!important}html.writer-html4 .rst-content table.docutils.citation,html.writer-html4 .rst-content table.docutils.footnote{background:none;border:none}html.writer-html4 .rst-content table.docutils.citation td,html.writer-html4 .rst-content table.docutils.citation tr,html.writer-html4 .rst-content table.docutils.footnote td,html.writer-html4 .rst-content table.docutils.footnote tr{border:none;background-color:transparent!important;white-space:normal}html.writer-html4 .rst-content table.docutils.citation td.label,html.writer-html4 .rst-content table.docutils.footnote td.label{padding-left:0;padding-right:0;vertical-align:top}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.field-list,html.writer-html5 .rst-content dl.footnote{display:grid;grid-template-columns:auto minmax(80%,95%)}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dt{display:inline-grid;grid-template-columns:max-content auto}html.writer-html5 .rst-content aside.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content div.citation{display:grid;grid-template-columns:auto auto minmax(.65rem,auto) minmax(40%,95%)}html.writer-html5 .rst-content aside.citation>span.label,html.writer-html5 .rst-content aside.footnote>span.label,html.writer-html5 .rst-content div.citation>span.label{grid-column-start:1;grid-column-end:2}html.writer-html5 .rst-content aside.citation>span.backrefs,html.writer-html5 .rst-content aside.footnote>span.backrefs,html.writer-html5 .rst-content div.citation>span.backrefs{grid-column-start:2;grid-column-end:3;grid-row-start:1;grid-row-end:3}html.writer-html5 .rst-content aside.citation>p,html.writer-html5 .rst-content aside.footnote>p,html.writer-html5 .rst-content div.citation>p{grid-column-start:4;grid-column-end:5}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.field-list,html.writer-html5 .rst-content dl.footnote{margin-bottom:24px}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dt{padding-left:1rem}html.writer-html5 .rst-content dl.citation>dd,html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dd,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dd,html.writer-html5 .rst-content dl.footnote>dt{margin-bottom:0}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.footnote{font-size:.9rem}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.footnote>dt{margin:0 .5rem .5rem 0;line-height:1.2rem;word-break:break-all;font-weight:400}html.writer-html5 .rst-content dl.citation>dt>span.brackets:before,html.writer-html5 .rst-content dl.footnote>dt>span.brackets:before{content:"["}html.writer-html5 .rst-content dl.citation>dt>span.brackets:after,html.writer-html5 .rst-content dl.footnote>dt>span.brackets:after{content:"]"}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref,html.writer-html5 .rst-content dl.footnote>dt>span.fn-backref{text-align:left;font-style:italic;margin-left:.65rem;word-break:break-word;word-spacing:-.1rem;max-width:5rem}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref>a,html.writer-html5 
.rst-content dl.footnote>dt>span.fn-backref>a{word-break:keep-all}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref>a:not(:first-child):before,html.writer-html5 .rst-content dl.footnote>dt>span.fn-backref>a:not(:first-child):before{content:" "}html.writer-html5 .rst-content dl.citation>dd,html.writer-html5 .rst-content dl.footnote>dd{margin:0 0 .5rem;line-height:1.2rem}html.writer-html5 .rst-content dl.citation>dd p,html.writer-html5 .rst-content dl.footnote>dd p{font-size:.9rem}html.writer-html5 .rst-content aside.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content div.citation{padding-left:1rem;padding-right:1rem;font-size:.9rem;line-height:1.2rem}html.writer-html5 .rst-content aside.citation p,html.writer-html5 .rst-content aside.footnote p,html.writer-html5 .rst-content div.citation p{font-size:.9rem;line-height:1.2rem;margin-bottom:12px}html.writer-html5 .rst-content aside.citation span.backrefs,html.writer-html5 .rst-content aside.footnote span.backrefs,html.writer-html5 .rst-content div.citation span.backrefs{text-align:left;font-style:italic;margin-left:.65rem;word-break:break-word;word-spacing:-.1rem;max-width:5rem}html.writer-html5 .rst-content aside.citation span.backrefs>a,html.writer-html5 .rst-content aside.footnote span.backrefs>a,html.writer-html5 .rst-content div.citation span.backrefs>a{word-break:keep-all}html.writer-html5 .rst-content aside.citation span.backrefs>a:not(:first-child):before,html.writer-html5 .rst-content aside.footnote span.backrefs>a:not(:first-child):before,html.writer-html5 .rst-content div.citation span.backrefs>a:not(:first-child):before{content:" "}html.writer-html5 .rst-content aside.citation span.label,html.writer-html5 .rst-content aside.footnote span.label,html.writer-html5 .rst-content div.citation span.label{line-height:1.2rem}html.writer-html5 .rst-content aside.citation-list,html.writer-html5 .rst-content aside.footnote-list,html.writer-html5 .rst-content div.citation-list{margin-bottom:24px}html.writer-html5 .rst-content dl.option-list kbd{font-size:.9rem}.rst-content table.docutils.footnote,html.writer-html4 .rst-content table.docutils.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content aside.footnote-list aside.footnote,html.writer-html5 .rst-content div.citation-list>div.citation,html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.footnote{color:grey}.rst-content table.docutils.footnote code,.rst-content table.docutils.footnote tt,html.writer-html4 .rst-content table.docutils.citation code,html.writer-html4 .rst-content table.docutils.citation tt,html.writer-html5 .rst-content aside.footnote-list aside.footnote code,html.writer-html5 .rst-content aside.footnote-list aside.footnote tt,html.writer-html5 .rst-content aside.footnote code,html.writer-html5 .rst-content aside.footnote tt,html.writer-html5 .rst-content div.citation-list>div.citation code,html.writer-html5 .rst-content div.citation-list>div.citation tt,html.writer-html5 .rst-content dl.citation code,html.writer-html5 .rst-content dl.citation tt,html.writer-html5 .rst-content dl.footnote code,html.writer-html5 .rst-content dl.footnote tt{color:#555}.rst-content .wy-table-responsive.citation,.rst-content .wy-table-responsive.footnote{margin-bottom:0}.rst-content .wy-table-responsive.citation+:not(.citation),.rst-content .wy-table-responsive.footnote+:not(.footnote){margin-top:24px}.rst-content .wy-table-responsive.citation:last-child,.rst-content 
.wy-table-responsive.footnote:last-child{margin-bottom:24px}.rst-content table.docutils th{border-color:#e1e4e5}html.writer-html5 .rst-content table.docutils th{border:1px solid #e1e4e5}html.writer-html5 .rst-content table.docutils td>p,html.writer-html5 .rst-content table.docutils th>p{line-height:1rem;margin-bottom:0;font-size:.9rem}.rst-content table.docutils td .last,.rst-content table.docutils td .last>:last-child{margin-bottom:0}.rst-content table.field-list,.rst-content table.field-list td{border:none}.rst-content table.field-list td p{line-height:inherit}.rst-content table.field-list td>strong{display:inline-block}.rst-content table.field-list .field-name{padding-right:10px;text-align:left;white-space:nowrap}.rst-content table.field-list .field-body{text-align:left}.rst-content code,.rst-content tt{color:#000;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;padding:2px 5px}.rst-content code big,.rst-content code em,.rst-content tt big,.rst-content tt em{font-size:100%!important;line-height:normal}.rst-content code.literal,.rst-content tt.literal{color:#e74c3c;white-space:normal}.rst-content code.xref,.rst-content tt.xref,a .rst-content code,a .rst-content tt{font-weight:700;color:#404040;overflow-wrap:normal}.rst-content kbd,.rst-content pre,.rst-content samp{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace}.rst-content a code,.rst-content a tt{color:#2980b9}.rst-content dl{margin-bottom:24px}.rst-content dl dt{font-weight:700;margin-bottom:12px}.rst-content dl ol,.rst-content dl p,.rst-content dl table,.rst-content dl ul{margin-bottom:12px}.rst-content dl dd{margin:0 0 12px 24px;line-height:24px}.rst-content dl dd>ol:last-child,.rst-content dl dd>p:last-child,.rst-content dl dd>table:last-child,.rst-content dl dd>ul:last-child{margin-bottom:0}html.writer-html4 .rst-content dl:not(.docutils),html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple){margin-bottom:24px}html.writer-html4 .rst-content dl:not(.docutils)>dt,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt{display:table;margin:6px 0;font-size:90%;line-height:normal;background:#e7f2fa;color:#2980b9;border-top:3px solid #6ab0de;padding:6px;position:relative}html.writer-html4 .rst-content dl:not(.docutils)>dt:before,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt:before{color:#6ab0de}html.writer-html4 .rst-content dl:not(.docutils)>dt .headerlink,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink{color:#404040;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt{margin-bottom:6px;border:none;border-left:3px solid #ccc;background:#f0f0f0;color:#555}html.writer-html4 .rst-content dl:not(.docutils) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink,html.writer-html5 .rst-content 
dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink{color:#404040;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils)>dt:first-child,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt:first-child{margin-top:0}html.writer-html4 .rst-content dl:not(.docutils) code.descclassname,html.writer-html4 .rst-content dl:not(.docutils) code.descname,html.writer-html4 .rst-content dl:not(.docutils) tt.descclassname,html.writer-html4 .rst-content dl:not(.docutils) tt.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descname{background-color:transparent;border:none;padding:0;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils) code.descname,html.writer-html4 .rst-content dl:not(.docutils) tt.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descname{font-weight:700}html.writer-html4 .rst-content dl:not(.docutils) .optional,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .optional{display:inline-block;padding:0 4px;color:#000;font-weight:700}html.writer-html4 .rst-content dl:not(.docutils) .property,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .property{display:inline-block;padding-right:8px;max-width:100%}html.writer-html4 .rst-content dl:not(.docutils) .k,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .k{font-style:italic}html.writer-html4 .rst-content dl:not(.docutils) .descclassname,html.writer-html4 .rst-content dl:not(.docutils) .descname,html.writer-html4 .rst-content dl:not(.docutils) .sig-name,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .sig-name{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;color:#000}.rst-content .viewcode-back,.rst-content .viewcode-link{display:inline-block;color:#27ae60;font-size:80%;padding-left:24px}.rst-content .viewcode-back{display:block;float:right}.rst-content p.rubric{margin-bottom:12px;font-weight:700}.rst-content 
code.download,.rst-content tt.download{background:inherit;padding:inherit;font-weight:400;font-family:inherit;font-size:inherit;color:inherit;border:inherit;white-space:inherit}.rst-content code.download span:first-child,.rst-content tt.download span:first-child{-webkit-font-smoothing:subpixel-antialiased}.rst-content code.download span:first-child:before,.rst-content tt.download span:first-child:before{margin-right:4px}.rst-content .guilabel,.rst-content .menuselection{font-size:80%;font-weight:700;border-radius:4px;padding:2.4px 6px;margin:auto 2px}.rst-content .guilabel,.rst-content .menuselection{border:1px solid #7fbbe3;background:#e7f2fa}.rst-content :not(dl.option-list)>:not(dt):not(kbd):not(.kbd)>.kbd,.rst-content :not(dl.option-list)>:not(dt):not(kbd):not(.kbd)>kbd{color:inherit;font-size:80%;background-color:#fff;border:1px solid #a6a6a6;border-radius:4px;box-shadow:0 2px grey;padding:2.4px 6px;margin:auto 0}.rst-content .versionmodified{font-style:italic}@media screen and (max-width:480px){.rst-content .sidebar{width:100%}}span[id*=MathJax-Span]{color:#404040}.math{text-align:center}@font-face{font-family:Lato;src:url(fonts/lato-normal.woff2?bd03a2cc277bbbc338d464e679fe9942) format("woff2"),url(fonts/lato-normal.woff?27bd77b9162d388cb8d4c4217c7c5e2a) format("woff");font-weight:400;font-style:normal;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-bold.woff2?cccb897485813c7c256901dbca54ecf2) format("woff2"),url(fonts/lato-bold.woff?d878b6c29b10beca227e9eef4246111b) format("woff");font-weight:700;font-style:normal;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-bold-italic.woff2?0b6bb6725576b072c5d0b02ecdd1900d) format("woff2"),url(fonts/lato-bold-italic.woff?9c7e4e9eb485b4a121c760e61bc3707c) format("woff");font-weight:700;font-style:italic;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-normal-italic.woff2?4eb103b4d12be57cb1d040ed5e162e9d) format("woff2"),url(fonts/lato-normal-italic.woff?f28f2d6482446544ef1ea1ccc6dd5892) format("woff");font-weight:400;font-style:italic;font-display:block}@font-face{font-family:Roboto Slab;font-style:normal;font-weight:400;src:url(fonts/Roboto-Slab-Regular.woff2?7abf5b8d04d26a2cafea937019bca958) format("woff2"),url(fonts/Roboto-Slab-Regular.woff?c1be9284088d487c5e3ff0a10a92e58c) format("woff");font-display:block}@font-face{font-family:Roboto Slab;font-style:normal;font-weight:700;src:url(fonts/Roboto-Slab-Bold.woff2?9984f4a9bda09be08e83f2506954adbe) format("woff2"),url(fonts/Roboto-Slab-Bold.woff?bed5564a116b05148e3b3bea6fb1162a) format("woff");font-display:block} \ No newline at end of file diff --git a/docs/build/html/_static/doctools.js b/docs/build/html/_static/doctools.js new file mode 100644 index 0000000000000000000000000000000000000000..d06a71d7518041301a303697d2a3c372648eb7bf --- /dev/null +++ b/docs/build/html/_static/doctools.js @@ -0,0 +1,156 @@ +/* + * doctools.js + * ~~~~~~~~~~~ + * + * Base JavaScript utilities for all Sphinx HTML documentation. + * + * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. + * + */ +"use strict"; + +const BLACKLISTED_KEY_CONTROL_ELEMENTS = new Set([ + "TEXTAREA", + "INPUT", + "SELECT", + "BUTTON", +]); + +const _ready = (callback) => { + if (document.readyState !== "loading") { + callback(); + } else { + document.addEventListener("DOMContentLoaded", callback); + } +}; + +/** + * Small JavaScript module for the documentation. 
+ */ +const Documentation = { + init: () => { + Documentation.initDomainIndexTable(); + Documentation.initOnKeyListeners(); + }, + + /** + * i18n support + */ + TRANSLATIONS: {}, + PLURAL_EXPR: (n) => (n === 1 ? 0 : 1), + LOCALE: "unknown", + + // gettext and ngettext don't access this so that the functions + // can safely bound to a different name (_ = Documentation.gettext) + gettext: (string) => { + const translated = Documentation.TRANSLATIONS[string]; + switch (typeof translated) { + case "undefined": + return string; // no translation + case "string": + return translated; // translation exists + default: + return translated[0]; // (singular, plural) translation tuple exists + } + }, + + ngettext: (singular, plural, n) => { + const translated = Documentation.TRANSLATIONS[singular]; + if (typeof translated !== "undefined") + return translated[Documentation.PLURAL_EXPR(n)]; + return n === 1 ? singular : plural; + }, + + addTranslations: (catalog) => { + Object.assign(Documentation.TRANSLATIONS, catalog.messages); + Documentation.PLURAL_EXPR = new Function( + "n", + `return (${catalog.plural_expr})` + ); + Documentation.LOCALE = catalog.locale; + }, + + /** + * helper function to focus on search bar + */ + focusSearchBar: () => { + document.querySelectorAll("input[name=q]")[0]?.focus(); + }, + + /** + * Initialise the domain index toggle buttons + */ + initDomainIndexTable: () => { + const toggler = (el) => { + const idNumber = el.id.substr(7); + const toggledRows = document.querySelectorAll(`tr.cg-${idNumber}`); + if (el.src.substr(-9) === "minus.png") { + el.src = `${el.src.substr(0, el.src.length - 9)}plus.png`; + toggledRows.forEach((el) => (el.style.display = "none")); + } else { + el.src = `${el.src.substr(0, el.src.length - 8)}minus.png`; + toggledRows.forEach((el) => (el.style.display = "")); + } + }; + + const togglerElements = document.querySelectorAll("img.toggler"); + togglerElements.forEach((el) => + el.addEventListener("click", (event) => toggler(event.currentTarget)) + ); + togglerElements.forEach((el) => (el.style.display = "")); + if (DOCUMENTATION_OPTIONS.COLLAPSE_INDEX) togglerElements.forEach(toggler); + }, + + initOnKeyListeners: () => { + // only install a listener if it is really needed + if ( + !DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS && + !DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS + ) + return; + + document.addEventListener("keydown", (event) => { + // bail for input elements + if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return; + // bail with special keys + if (event.altKey || event.ctrlKey || event.metaKey) return; + + if (!event.shiftKey) { + switch (event.key) { + case "ArrowLeft": + if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break; + + const prevLink = document.querySelector('link[rel="prev"]'); + if (prevLink && prevLink.href) { + window.location.href = prevLink.href; + event.preventDefault(); + } + break; + case "ArrowRight": + if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break; + + const nextLink = document.querySelector('link[rel="next"]'); + if (nextLink && nextLink.href) { + window.location.href = nextLink.href; + event.preventDefault(); + } + break; + } + } + + // some keyboard layouts may need Shift to get / + switch (event.key) { + case "/": + if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) break; + Documentation.focusSearchBar(); + event.preventDefault(); + } + }); + }, +}; + +// quick alias for translations +const _ = Documentation.gettext; + +_ready(Documentation.init); diff --git 
a/docs/build/html/_static/documentation_options.js b/docs/build/html/_static/documentation_options.js
new file mode 100644
index 0000000000000000000000000000000000000000..ca548c46629b48e14368faf1a1b20e44b662e0c7
--- /dev/null
+++ b/docs/build/html/_static/documentation_options.js
@@ -0,0 +1,13 @@
+const DOCUMENTATION_OPTIONS = {
+    VERSION: '1.1.3',
+    LANGUAGE: 'en',
+    COLLAPSE_INDEX: false,
+    BUILDER: 'html',
+    FILE_SUFFIX: '.html',
+    LINK_SUFFIX: '.html',
+    HAS_SOURCE: true,
+    SOURCELINK_SUFFIX: '.txt',
+    NAVIGATION_WITH_KEYS: false,
+    SHOW_SEARCH_SUMMARY: true,
+    ENABLE_SEARCH_SHORTCUTS: true,
+};
\ No newline at end of file
diff --git a/docs/build/html/_static/file.png b/docs/build/html/_static/file.png
new file mode 100644
index 0000000000000000000000000000000000000000..a858a410e4faa62ce324d814e4b816fff83a6fb3
Binary files /dev/null and b/docs/build/html/_static/file.png differ
diff --git a/docs/build/html/_static/jquery.js b/docs/build/html/_static/jquery.js
new file mode 100644
index 0000000000000000000000000000000000000000..c4c6022f2982e8dae64cebd6b9a2b59f2547faad
--- /dev/null
+++ b/docs/build/html/_static/jquery.js
@@ -0,0 +1,2 @@
+/*! jQuery v3.6.0 | (c) OpenJS Foundation and other contributors | jquery.org/license */
+[minified jQuery 3.6.0 bundle; mangled during extraction and not reproduced here]
diff --git a/docs/build/html/_static/js/html5shiv-printshiv.min.js b/docs/build/html/_static/js/html5shiv-printshiv.min.js
[new-file header lost during extraction]
+[minified HTML5 Shiv 3.7.3 print-shiv build; mangled during extraction and not reproduced here]
diff --git a/docs/build/html/_static/js/html5shiv.min.js b/docs/build/html/_static/js/html5shiv.min.js
new file mode 100644
index 0000000000000000000000000000000000000000..cd1c674f5e3a290a12386156500df3c50903a46b
--- /dev/null
+++ b/docs/build/html/_static/js/html5shiv.min.js
@@ -0,0 +1,4 @@
+/**
+* @preserve HTML5 Shiv 3.7.3 | @afarkas @jdalton @jon_neal @rem | MIT/GPL2 Licensed
+*/
+[minified HTML5 Shiv 3.7.3 source; mangled during extraction and not reproduced here]
diff --git a/docs/build/html/_static/js/theme.js b/docs/build/html/_static/js/theme.js
new file mode 100644
index 0000000000000000000000000000000000000000..1fddb6ee4a60f30b4a4c4b3ad1f1604043f77981
--- /dev/null
+++ b/docs/build/html/_static/js/theme.js
@@ -0,0 +1 @@
+[minified sphinx_rtd_theme navigation bundle; mangled during extraction and not reproduced here]
diff --git a/docs/build/html/_static/language_data.js b/docs/build/html/_static/language_data.js
[new-file header, copyright banner, and stopword list lost during extraction; the file's Porter stemmer survives, and its lost opening lines are restored below from the standard Sphinx source]
+/**
+ * Porter Stemmer
+ */
+var Stemmer = function() {
+
+  var step2list = {
+    ational: 'ate',
+    tional: 'tion',
+    enci: 'ence',
+    anci: 'ance',
+    izer: 'ize',
+    bli: 'ble',
+    alli: 'al',
+    entli: 'ent',
+    eli: 'e',
+    ousli: 'ous',
+    ization: 'ize',
+    ation: 'ate',
+    ator: 'ate',
+    alism: 'al',
+    iveness: 'ive',
+    fulness: 'ful',
+    ousness: 'ous',
+    aliti: 'al',
+    iviti: 'ive',
+    biliti: 'ble',
+    logi: 'log'
+  };
+
+  var step3list = {
+    icate: 'ic',
+    ative: '',
+    alize: 'al',
+    iciti: 'ic',
+    ical: 'ic',
+    ful: '',
+    ness: ''
+  };
+
+  var c = "[^aeiou]";          // consonant
+  var v = "[aeiouy]";          // vowel
+  var C = c + "[^aeiouy]*";    // consonant sequence
+  var V = v + "[aeiou]*";      // vowel sequence
+
+  var mgr0 = "^(" + C + ")?" + V + C;                      // [C]VC... is m>0
+  var meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$";    // [C]VC[V] is m=1
+  var mgr1 = "^(" + C + ")?" + V + C + V + C;              // [C]VCVC... is m>1
+  var s_v = "^(" + C + ")?"
+ v; // vowel in stem + + this.stemWord = function (w) { + var stem; + var suffix; + var firstch; + var origword = w; + + if (w.length < 3) + return w; + + var re; + var re2; + var re3; + var re4; + + firstch = w.substr(0,1); + if (firstch == "y") + w = firstch.toUpperCase() + w.substr(1); + + // Step 1a + re = /^(.+?)(ss|i)es$/; + re2 = /^(.+?)([^s])s$/; + + if (re.test(w)) + w = w.replace(re,"$1$2"); + else if (re2.test(w)) + w = w.replace(re2,"$1$2"); + + // Step 1b + re = /^(.+?)eed$/; + re2 = /^(.+?)(ed|ing)$/; + if (re.test(w)) { + var fp = re.exec(w); + re = new RegExp(mgr0); + if (re.test(fp[1])) { + re = /.$/; + w = w.replace(re,""); + } + } + else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1]; + re2 = new RegExp(s_v); + if (re2.test(stem)) { + w = stem; + re2 = /(at|bl|iz)$/; + re3 = new RegExp("([^aeiouylsz])\\1$"); + re4 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + if (re2.test(w)) + w = w + "e"; + else if (re3.test(w)) { + re = /.$/; + w = w.replace(re,""); + } + else if (re4.test(w)) + w = w + "e"; + } + } + + // Step 1c + re = /^(.+?)y$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = new RegExp(s_v); + if (re.test(stem)) + w = stem + "i"; + } + + // Step 2 + re = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = new RegExp(mgr0); + if (re.test(stem)) + w = stem + step2list[suffix]; + } + + // Step 3 + re = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = new RegExp(mgr0); + if (re.test(stem)) + w = stem + step3list[suffix]; + } + + // Step 4 + re = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/; + re2 = /^(.+?)(s|t)(ion)$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = new RegExp(mgr1); + if (re.test(stem)) + w = stem; + } + else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1] + fp[2]; + re2 = new RegExp(mgr1); + if (re2.test(stem)) + w = stem; + } + + // Step 5 + re = /^(.+?)e$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = new RegExp(mgr1); + re2 = new RegExp(meq1); + re3 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + if (re.test(stem) || (re2.test(stem) && !(re3.test(stem)))) + w = stem; + } + re = /ll$/; + re2 = new RegExp(mgr1); + if (re.test(w) && re2.test(w)) { + re = /.$/; + w = w.replace(re,""); + } + + // and turn initial Y back to y + if (firstch == "y") + w = firstch.toLowerCase() + w.substr(1); + return w; + } +} + diff --git a/docs/build/html/_static/minus.png b/docs/build/html/_static/minus.png new file mode 100644 index 0000000000000000000000000000000000000000..d96755fdaf8bb2214971e0db9c1fd3077d7c419d Binary files /dev/null and b/docs/build/html/_static/minus.png differ diff --git a/docs/build/html/_static/plus.png b/docs/build/html/_static/plus.png new file mode 100644 index 0000000000000000000000000000000000000000..7107cec93a979b9a5f64843235a16651d563ce2d Binary files /dev/null and b/docs/build/html/_static/plus.png differ diff --git a/docs/build/html/_static/pygments.css b/docs/build/html/_static/pygments.css new file mode 100644 index 0000000000000000000000000000000000000000..84ab3030a9329e5598877502bfa7f8a999af8535 --- /dev/null +++ b/docs/build/html/_static/pygments.css @@ -0,0 +1,75 @@ +pre { line-height: 125%; } +td.linenos .normal { color: inherit; background-color: 
transparent; padding-left: 5px; padding-right: 5px; } +span.linenos { color: inherit; background-color: transparent; padding-left: 5px; padding-right: 5px; } +td.linenos .special { color: #000000; background-color: #ffffc0; padding-left: 5px; padding-right: 5px; } +span.linenos.special { color: #000000; background-color: #ffffc0; padding-left: 5px; padding-right: 5px; } +.highlight .hll { background-color: #ffffcc } +.highlight { background: #f8f8f8; } +.highlight .c { color: #3D7B7B; font-style: italic } /* Comment */ +.highlight .err { border: 1px solid #FF0000 } /* Error */ +.highlight .k { color: #008000; font-weight: bold } /* Keyword */ +.highlight .o { color: #666666 } /* Operator */ +.highlight .ch { color: #3D7B7B; font-style: italic } /* Comment.Hashbang */ +.highlight .cm { color: #3D7B7B; font-style: italic } /* Comment.Multiline */ +.highlight .cp { color: #9C6500 } /* Comment.Preproc */ +.highlight .cpf { color: #3D7B7B; font-style: italic } /* Comment.PreprocFile */ +.highlight .c1 { color: #3D7B7B; font-style: italic } /* Comment.Single */ +.highlight .cs { color: #3D7B7B; font-style: italic } /* Comment.Special */ +.highlight .gd { color: #A00000 } /* Generic.Deleted */ +.highlight .ge { font-style: italic } /* Generic.Emph */ +.highlight .ges { font-weight: bold; font-style: italic } /* Generic.EmphStrong */ +.highlight .gr { color: #E40000 } /* Generic.Error */ +.highlight .gh { color: #000080; font-weight: bold } /* Generic.Heading */ +.highlight .gi { color: #008400 } /* Generic.Inserted */ +.highlight .go { color: #717171 } /* Generic.Output */ +.highlight .gp { color: #000080; font-weight: bold } /* Generic.Prompt */ +.highlight .gs { font-weight: bold } /* Generic.Strong */ +.highlight .gu { color: #800080; font-weight: bold } /* Generic.Subheading */ +.highlight .gt { color: #0044DD } /* Generic.Traceback */ +.highlight .kc { color: #008000; font-weight: bold } /* Keyword.Constant */ +.highlight .kd { color: #008000; font-weight: bold } /* Keyword.Declaration */ +.highlight .kn { color: #008000; font-weight: bold } /* Keyword.Namespace */ +.highlight .kp { color: #008000 } /* Keyword.Pseudo */ +.highlight .kr { color: #008000; font-weight: bold } /* Keyword.Reserved */ +.highlight .kt { color: #B00040 } /* Keyword.Type */ +.highlight .m { color: #666666 } /* Literal.Number */ +.highlight .s { color: #BA2121 } /* Literal.String */ +.highlight .na { color: #687822 } /* Name.Attribute */ +.highlight .nb { color: #008000 } /* Name.Builtin */ +.highlight .nc { color: #0000FF; font-weight: bold } /* Name.Class */ +.highlight .no { color: #880000 } /* Name.Constant */ +.highlight .nd { color: #AA22FF } /* Name.Decorator */ +.highlight .ni { color: #717171; font-weight: bold } /* Name.Entity */ +.highlight .ne { color: #CB3F38; font-weight: bold } /* Name.Exception */ +.highlight .nf { color: #0000FF } /* Name.Function */ +.highlight .nl { color: #767600 } /* Name.Label */ +.highlight .nn { color: #0000FF; font-weight: bold } /* Name.Namespace */ +.highlight .nt { color: #008000; font-weight: bold } /* Name.Tag */ +.highlight .nv { color: #19177C } /* Name.Variable */ +.highlight .ow { color: #AA22FF; font-weight: bold } /* Operator.Word */ +.highlight .w { color: #bbbbbb } /* Text.Whitespace */ +.highlight .mb { color: #666666 } /* Literal.Number.Bin */ +.highlight .mf { color: #666666 } /* Literal.Number.Float */ +.highlight .mh { color: #666666 } /* Literal.Number.Hex */ +.highlight .mi { color: #666666 } /* Literal.Number.Integer */ +.highlight .mo { color: #666666 } 
/* Literal.Number.Oct */ +.highlight .sa { color: #BA2121 } /* Literal.String.Affix */ +.highlight .sb { color: #BA2121 } /* Literal.String.Backtick */ +.highlight .sc { color: #BA2121 } /* Literal.String.Char */ +.highlight .dl { color: #BA2121 } /* Literal.String.Delimiter */ +.highlight .sd { color: #BA2121; font-style: italic } /* Literal.String.Doc */ +.highlight .s2 { color: #BA2121 } /* Literal.String.Double */ +.highlight .se { color: #AA5D1F; font-weight: bold } /* Literal.String.Escape */ +.highlight .sh { color: #BA2121 } /* Literal.String.Heredoc */ +.highlight .si { color: #A45A77; font-weight: bold } /* Literal.String.Interpol */ +.highlight .sx { color: #008000 } /* Literal.String.Other */ +.highlight .sr { color: #A45A77 } /* Literal.String.Regex */ +.highlight .s1 { color: #BA2121 } /* Literal.String.Single */ +.highlight .ss { color: #19177C } /* Literal.String.Symbol */ +.highlight .bp { color: #008000 } /* Name.Builtin.Pseudo */ +.highlight .fm { color: #0000FF } /* Name.Function.Magic */ +.highlight .vc { color: #19177C } /* Name.Variable.Class */ +.highlight .vg { color: #19177C } /* Name.Variable.Global */ +.highlight .vi { color: #19177C } /* Name.Variable.Instance */ +.highlight .vm { color: #19177C } /* Name.Variable.Magic */ +.highlight .il { color: #666666 } /* Literal.Number.Integer.Long */ \ No newline at end of file diff --git a/docs/build/html/_static/searchtools.js b/docs/build/html/_static/searchtools.js new file mode 100644 index 0000000000000000000000000000000000000000..7918c3fab3116026a6626a50bdc8966abc24b0b3 --- /dev/null +++ b/docs/build/html/_static/searchtools.js @@ -0,0 +1,574 @@ +/* + * searchtools.js + * ~~~~~~~~~~~~~~~~ + * + * Sphinx JavaScript utilities for the full-text search. + * + * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. + * + */ +"use strict"; + +/** + * Simple result scoring code. + */ +if (typeof Scorer === "undefined") { + var Scorer = { + // Implement the following function to further tweak the score for each result + // The function takes a result array [docname, title, anchor, descr, score, filename] + // and returns the new score. + /* + score: result => { + const [docname, title, anchor, descr, score, filename] = result + return score + }, + */ + + // query matches the full name of an object + objNameMatch: 11, + // or matches in the last dotted part of the object name + objPartialMatch: 6, + // Additive scores depending on the priority of the object + objPrio: { + 0: 15, // used to be importantResults + 1: 5, // used to be objectResults + 2: -5, // used to be unimportantResults + }, + // Used when the priority is not in the mapping. 
+ objPrioDefault: 0, + + // query found in title + title: 15, + partialTitle: 7, + // query found in terms + term: 5, + partialTerm: 2, + }; +} + +const _removeChildren = (element) => { + while (element && element.lastChild) element.removeChild(element.lastChild); +}; + +/** + * See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions#escaping + */ +const _escapeRegExp = (string) => + string.replace(/[.*+\-?^${}()|[\]\\]/g, "\\$&"); // $& means the whole matched string + +const _displayItem = (item, searchTerms, highlightTerms) => { + const docBuilder = DOCUMENTATION_OPTIONS.BUILDER; + const docFileSuffix = DOCUMENTATION_OPTIONS.FILE_SUFFIX; + const docLinkSuffix = DOCUMENTATION_OPTIONS.LINK_SUFFIX; + const showSearchSummary = DOCUMENTATION_OPTIONS.SHOW_SEARCH_SUMMARY; + const contentRoot = document.documentElement.dataset.content_root; + + const [docName, title, anchor, descr, score, _filename] = item; + + let listItem = document.createElement("li"); + let requestUrl; + let linkUrl; + if (docBuilder === "dirhtml") { + // dirhtml builder + let dirname = docName + "/"; + if (dirname.match(/\/index\/$/)) + dirname = dirname.substring(0, dirname.length - 6); + else if (dirname === "index/") dirname = ""; + requestUrl = contentRoot + dirname; + linkUrl = requestUrl; + } else { + // normal html builders + requestUrl = contentRoot + docName + docFileSuffix; + linkUrl = docName + docLinkSuffix; + } + let linkEl = listItem.appendChild(document.createElement("a")); + linkEl.href = linkUrl + anchor; + linkEl.dataset.score = score; + linkEl.innerHTML = title; + if (descr) { + listItem.appendChild(document.createElement("span")).innerHTML = + " (" + descr + ")"; + // highlight search terms in the description + if (SPHINX_HIGHLIGHT_ENABLED) // set in sphinx_highlight.js + highlightTerms.forEach((term) => _highlightText(listItem, term, "highlighted")); + } + else if (showSearchSummary) + fetch(requestUrl) + .then((responseData) => responseData.text()) + .then((data) => { + if (data) + listItem.appendChild( + Search.makeSearchSummary(data, searchTerms) + ); + // highlight search terms in the summary + if (SPHINX_HIGHLIGHT_ENABLED) // set in sphinx_highlight.js + highlightTerms.forEach((term) => _highlightText(listItem, term, "highlighted")); + }); + Search.output.appendChild(listItem); +}; +const _finishSearch = (resultCount) => { + Search.stopPulse(); + Search.title.innerText = _("Search Results"); + if (!resultCount) + Search.status.innerText = Documentation.gettext( + "Your search did not match any documents. Please make sure that all words are spelled correctly and that you've selected enough categories." + ); + else + Search.status.innerText = _( + `Search finished, found ${resultCount} page(s) matching the search query.` + ); +}; +const _displayNextItem = ( + results, + resultCount, + searchTerms, + highlightTerms, +) => { + // results left, load the summary and display it + // this is intended to be dynamic (don't sub resultsCount) + if (results.length) { + _displayItem(results.pop(), searchTerms, highlightTerms); + setTimeout( + () => _displayNextItem(results, resultCount, searchTerms, highlightTerms), + 5 + ); + } + // search finished, update title and status message + else _finishSearch(resultCount); +}; + +/** + * Default splitQuery function. Can be overridden in ``sphinx.search`` with a + * custom function per language. 
+ * + * The regular expression works by splitting the string on consecutive characters + * that are not Unicode letters, numbers, underscores, or emoji characters. + * This is the same as ``\W+`` in Python, preserving the surrogate pair area. + */ +if (typeof splitQuery === "undefined") { + var splitQuery = (query) => query + .split(/[^\p{Letter}\p{Number}_\p{Emoji_Presentation}]+/gu) + .filter(term => term) // remove remaining empty strings +} + +/** + * Search Module + */ +const Search = { + _index: null, + _queued_query: null, + _pulse_status: -1, + + htmlToText: (htmlString) => { + const htmlElement = new DOMParser().parseFromString(htmlString, 'text/html'); + htmlElement.querySelectorAll(".headerlink").forEach((el) => { el.remove() }); + const docContent = htmlElement.querySelector('[role="main"]'); + if (docContent !== undefined) return docContent.textContent; + console.warn( + "Content block not found. Sphinx search tries to obtain it via '[role=main]'. Could you check your theme or template." + ); + return ""; + }, + + init: () => { + const query = new URLSearchParams(window.location.search).get("q"); + document + .querySelectorAll('input[name="q"]') + .forEach((el) => (el.value = query)); + if (query) Search.performSearch(query); + }, + + loadIndex: (url) => + (document.body.appendChild(document.createElement("script")).src = url), + + setIndex: (index) => { + Search._index = index; + if (Search._queued_query !== null) { + const query = Search._queued_query; + Search._queued_query = null; + Search.query(query); + } + }, + + hasIndex: () => Search._index !== null, + + deferQuery: (query) => (Search._queued_query = query), + + stopPulse: () => (Search._pulse_status = -1), + + startPulse: () => { + if (Search._pulse_status >= 0) return; + + const pulse = () => { + Search._pulse_status = (Search._pulse_status + 1) % 4; + Search.dots.innerText = ".".repeat(Search._pulse_status); + if (Search._pulse_status >= 0) window.setTimeout(pulse, 500); + }; + pulse(); + }, + + /** + * perform a search for something (or wait until index is loaded) + */ + performSearch: (query) => { + // create the required interface elements + const searchText = document.createElement("h2"); + searchText.textContent = _("Searching"); + const searchSummary = document.createElement("p"); + searchSummary.classList.add("search-summary"); + searchSummary.innerText = ""; + const searchList = document.createElement("ul"); + searchList.classList.add("search"); + + const out = document.getElementById("search-results"); + Search.title = out.appendChild(searchText); + Search.dots = Search.title.appendChild(document.createElement("span")); + Search.status = out.appendChild(searchSummary); + Search.output = out.appendChild(searchList); + + const searchProgress = document.getElementById("search-progress"); + // Some themes don't use the search progress node + if (searchProgress) { + searchProgress.innerText = _("Preparing search..."); + } + Search.startPulse(); + + // index already loaded, the browser was quick! 
+ if (Search.hasIndex()) Search.query(query); + else Search.deferQuery(query); + }, + + /** + * execute search (requires search index to be loaded) + */ + query: (query) => { + const filenames = Search._index.filenames; + const docNames = Search._index.docnames; + const titles = Search._index.titles; + const allTitles = Search._index.alltitles; + const indexEntries = Search._index.indexentries; + + // stem the search terms and add them to the correct list + const stemmer = new Stemmer(); + const searchTerms = new Set(); + const excludedTerms = new Set(); + const highlightTerms = new Set(); + const objectTerms = new Set(splitQuery(query.toLowerCase().trim())); + splitQuery(query.trim()).forEach((queryTerm) => { + const queryTermLower = queryTerm.toLowerCase(); + + // maybe skip this "word" + // stopwords array is from language_data.js + if ( + stopwords.indexOf(queryTermLower) !== -1 || + queryTerm.match(/^\d+$/) + ) + return; + + // stem the word + let word = stemmer.stemWord(queryTermLower); + // select the correct list + if (word[0] === "-") excludedTerms.add(word.substr(1)); + else { + searchTerms.add(word); + highlightTerms.add(queryTermLower); + } + }); + + if (SPHINX_HIGHLIGHT_ENABLED) { // set in sphinx_highlight.js + localStorage.setItem("sphinx_highlight_terms", [...highlightTerms].join(" ")) + } + + // console.debug("SEARCH: searching for:"); + // console.info("required: ", [...searchTerms]); + // console.info("excluded: ", [...excludedTerms]); + + // array of [docname, title, anchor, descr, score, filename] + let results = []; + _removeChildren(document.getElementById("search-progress")); + + const queryLower = query.toLowerCase(); + for (const [title, foundTitles] of Object.entries(allTitles)) { + if (title.toLowerCase().includes(queryLower) && (queryLower.length >= title.length/2)) { + for (const [file, id] of foundTitles) { + let score = Math.round(100 * queryLower.length / title.length) + results.push([ + docNames[file], + titles[file] !== title ? `${titles[file]} > ${title}` : title, + id !== null ? "#" + id : "", + null, + score, + filenames[file], + ]); + } + } + } + + // search for explicit entries in index directives + for (const [entry, foundEntries] of Object.entries(indexEntries)) { + if (entry.includes(queryLower) && (queryLower.length >= entry.length/2)) { + for (const [file, id] of foundEntries) { + let score = Math.round(100 * queryLower.length / entry.length) + results.push([ + docNames[file], + titles[file], + id ? "#" + id : "", + null, + score, + filenames[file], + ]); + } + } + } + + // lookup as object + objectTerms.forEach((term) => + results.push(...Search.performObjectSearch(term, objectTerms)) + ); + + // lookup as search terms in fulltext + results.push(...Search.performTermsSearch(searchTerms, excludedTerms)); + + // let the scorer override scores with a custom scoring function + if (Scorer.score) results.forEach((item) => (item[4] = Scorer.score(item))); + + // now sort the results by score (in opposite order of appearance, since the + // display function below uses pop() to retrieve items) and then + // alphabetically + results.sort((a, b) => { + const leftScore = a[4]; + const rightScore = b[4]; + if (leftScore === rightScore) { + // same score: sort alphabetically + const leftTitle = a[1].toLowerCase(); + const rightTitle = b[1].toLowerCase(); + if (leftTitle === rightTitle) return 0; + return leftTitle > rightTitle ? -1 : 1; // inverted is intentional + } + return leftScore > rightScore ? 
1 : -1; + }); + + // remove duplicate search results + // note the reversing of results, so that in the case of duplicates, the highest-scoring entry is kept + let seen = new Set(); + results = results.reverse().reduce((acc, result) => { + let resultStr = result.slice(0, 4).concat([result[5]]).map(v => String(v)).join(','); + if (!seen.has(resultStr)) { + acc.push(result); + seen.add(resultStr); + } + return acc; + }, []); + + results = results.reverse(); + + // for debugging + //Search.lastresults = results.slice(); // a copy + // console.info("search results:", Search.lastresults); + + // print the results + _displayNextItem(results, results.length, searchTerms, highlightTerms); + }, + + /** + * search for object names + */ + performObjectSearch: (object, objectTerms) => { + const filenames = Search._index.filenames; + const docNames = Search._index.docnames; + const objects = Search._index.objects; + const objNames = Search._index.objnames; + const titles = Search._index.titles; + + const results = []; + + const objectSearchCallback = (prefix, match) => { + const name = match[4] + const fullname = (prefix ? prefix + "." : "") + name; + const fullnameLower = fullname.toLowerCase(); + if (fullnameLower.indexOf(object) < 0) return; + + let score = 0; + const parts = fullnameLower.split("."); + + // check for different match types: exact matches of full name or + // "last name" (i.e. last dotted part) + if (fullnameLower === object || parts.slice(-1)[0] === object) + score += Scorer.objNameMatch; + else if (parts.slice(-1)[0].indexOf(object) > -1) + score += Scorer.objPartialMatch; // matches in last name + + const objName = objNames[match[1]][2]; + const title = titles[match[0]]; + + // If more than one term searched for, we require other words to be + // found in the name/title/description + const otherTerms = new Set(objectTerms); + otherTerms.delete(object); + if (otherTerms.size > 0) { + const haystack = `${prefix} ${name} ${objName} ${title}`.toLowerCase(); + if ( + [...otherTerms].some((otherTerm) => haystack.indexOf(otherTerm) < 0) + ) + return; + } + + let anchor = match[3]; + if (anchor === "") anchor = fullname; + else if (anchor === "-") anchor = objNames[match[1]][1] + "-" + fullname; + + const descr = objName + _(", in ") + title; + + // add custom score for some objects according to scorer + if (Scorer.objPrio.hasOwnProperty(match[2])) + score += Scorer.objPrio[match[2]]; + else score += Scorer.objPrioDefault; + + results.push([ + docNames[match[0]], + fullname, + "#" + anchor, + descr, + score, + filenames[match[0]], + ]); + }; + Object.keys(objects).forEach((prefix) => + objects[prefix].forEach((array) => + objectSearchCallback(prefix, array) + ) + ); + return results; + }, + + /** + * search for full-text terms in the index + */ + performTermsSearch: (searchTerms, excludedTerms) => { + // prepare search + const terms = Search._index.terms; + const titleTerms = Search._index.titleterms; + const filenames = Search._index.filenames; + const docNames = Search._index.docnames; + const titles = Search._index.titles; + + const scoreMap = new Map(); + const fileMap = new Map(); + + // perform the search on the required terms + searchTerms.forEach((word) => { + const files = []; + const arr = [ + { files: terms[word], score: Scorer.term }, + { files: titleTerms[word], score: Scorer.title }, + ]; + // add support for partial matches + if (word.length > 2) { + const escapedWord = _escapeRegExp(word); + Object.keys(terms).forEach((term) => { + if (term.match(escapedWord) && 
!terms[word]) + arr.push({ files: terms[term], score: Scorer.partialTerm }); + }); + Object.keys(titleTerms).forEach((term) => { + if (term.match(escapedWord) && !titleTerms[word]) + arr.push({ files: titleTerms[word], score: Scorer.partialTitle }); + }); + } + + // no match but word was a required one + if (arr.every((record) => record.files === undefined)) return; + + // found search word in contents + arr.forEach((record) => { + if (record.files === undefined) return; + + let recordFiles = record.files; + if (recordFiles.length === undefined) recordFiles = [recordFiles]; + files.push(...recordFiles); + + // set score for the word in each file + recordFiles.forEach((file) => { + if (!scoreMap.has(file)) scoreMap.set(file, {}); + scoreMap.get(file)[word] = record.score; + }); + }); + + // create the mapping + files.forEach((file) => { + if (fileMap.has(file) && fileMap.get(file).indexOf(word) === -1) + fileMap.get(file).push(word); + else fileMap.set(file, [word]); + }); + }); + + // now check if the files don't contain excluded terms + const results = []; + for (const [file, wordList] of fileMap) { + // check if all requirements are matched + + // as search terms with length < 3 are discarded + const filteredTermCount = [...searchTerms].filter( + (term) => term.length > 2 + ).length; + if ( + wordList.length !== searchTerms.size && + wordList.length !== filteredTermCount + ) + continue; + + // ensure that none of the excluded terms is in the search result + if ( + [...excludedTerms].some( + (term) => + terms[term] === file || + titleTerms[term] === file || + (terms[term] || []).includes(file) || + (titleTerms[term] || []).includes(file) + ) + ) + break; + + // select one (max) score for the file. + const score = Math.max(...wordList.map((w) => scoreMap.get(file)[w])); + // add result to the result list + results.push([ + docNames[file], + titles[file], + "", + null, + score, + filenames[file], + ]); + } + return results; + }, + + /** + * helper function to return a node containing the + * search summary for a given text. keywords is a list + * of stemmed words. + */ + makeSearchSummary: (htmlText, keywords) => { + const text = Search.htmlToText(htmlText); + if (text === "") return null; + + const textLower = text.toLowerCase(); + const actualStartPosition = [...keywords] + .map((k) => textLower.indexOf(k.toLowerCase())) + .filter((i) => i > -1) + .slice(-1)[0]; + const startWithContext = Math.max(actualStartPosition - 120, 0); + + const top = startWithContext === 0 ? "" : "..."; + const tail = startWithContext + 240 < text.length ? "..." : ""; + + let summary = document.createElement("p"); + summary.classList.add("context"); + summary.textContent = top + text.substr(startWithContext, 240).trim() + tail; + + return summary; + }, +}; + +_ready(Search.init); diff --git a/docs/build/html/_static/sphinx_highlight.js b/docs/build/html/_static/sphinx_highlight.js new file mode 100644 index 0000000000000000000000000000000000000000..8a96c69a1942318413af68fd459122b56edd8d69 --- /dev/null +++ b/docs/build/html/_static/sphinx_highlight.js @@ -0,0 +1,154 @@ +/* Highlighting utilities for Sphinx HTML documentation. */ +"use strict"; + +const SPHINX_HIGHLIGHT_ENABLED = true + +/** + * highlight a given string on a node by wrapping it in + * span elements with the given class name. 
+ */ +const _highlight = (node, addItems, text, className) => { + if (node.nodeType === Node.TEXT_NODE) { + const val = node.nodeValue; + const parent = node.parentNode; + const pos = val.toLowerCase().indexOf(text); + if ( + pos >= 0 && + !parent.classList.contains(className) && + !parent.classList.contains("nohighlight") + ) { + let span; + + const closestNode = parent.closest("body, svg, foreignObject"); + const isInSVG = closestNode && closestNode.matches("svg"); + if (isInSVG) { + span = document.createElementNS("http://www.w3.org/2000/svg", "tspan"); + } else { + span = document.createElement("span"); + span.classList.add(className); + } + + span.appendChild(document.createTextNode(val.substr(pos, text.length))); + const rest = document.createTextNode(val.substr(pos + text.length)); + parent.insertBefore( + span, + parent.insertBefore( + rest, + node.nextSibling + ) + ); + node.nodeValue = val.substr(0, pos); + /* There may be more occurrences of search term in this node. So call this + * function recursively on the remaining fragment. + */ + _highlight(rest, addItems, text, className); + + if (isInSVG) { + const rect = document.createElementNS( + "http://www.w3.org/2000/svg", + "rect" + ); + const bbox = parent.getBBox(); + rect.x.baseVal.value = bbox.x; + rect.y.baseVal.value = bbox.y; + rect.width.baseVal.value = bbox.width; + rect.height.baseVal.value = bbox.height; + rect.setAttribute("class", className); + addItems.push({ parent: parent, target: rect }); + } + } + } else if (node.matches && !node.matches("button, select, textarea")) { + node.childNodes.forEach((el) => _highlight(el, addItems, text, className)); + } +}; +const _highlightText = (thisNode, text, className) => { + let addItems = []; + _highlight(thisNode, addItems, text, className); + addItems.forEach((obj) => + obj.parent.insertAdjacentElement("beforebegin", obj.target) + ); +}; + +/** + * Small JavaScript module for the documentation. + */ +const SphinxHighlight = { + + /** + * highlight the search words provided in localstorage in the text + */ + highlightSearchWords: () => { + if (!SPHINX_HIGHLIGHT_ENABLED) return; // bail if no highlight + + // get and clear terms from localstorage + const url = new URL(window.location); + const highlight = + localStorage.getItem("sphinx_highlight_terms") + || url.searchParams.get("highlight") + || ""; + localStorage.removeItem("sphinx_highlight_terms") + url.searchParams.delete("highlight"); + window.history.replaceState({}, "", url); + + // get individual terms from highlight string + const terms = highlight.toLowerCase().split(/\s+/).filter(x => x); + if (terms.length === 0) return; // nothing to do + + // There should never be more than one element matching "div.body" + const divBody = document.querySelectorAll("div.body"); + const body = divBody.length ? 
divBody[0] : document.querySelector("body"); + window.setTimeout(() => { + terms.forEach((term) => _highlightText(body, term, "highlighted")); + }, 10); + + const searchBox = document.getElementById("searchbox"); + if (searchBox === null) return; + searchBox.appendChild( + document + .createRange() + .createContextualFragment( + '" + ) + ); + }, + + /** + * helper function to hide the search marks again + */ + hideSearchWords: () => { + document + .querySelectorAll("#searchbox .highlight-link") + .forEach((el) => el.remove()); + document + .querySelectorAll("span.highlighted") + .forEach((el) => el.classList.remove("highlighted")); + localStorage.removeItem("sphinx_highlight_terms") + }, + + initEscapeListener: () => { + // only install a listener if it is really needed + if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) return; + + document.addEventListener("keydown", (event) => { + // bail for input elements + if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return; + // bail with special keys + if (event.shiftKey || event.altKey || event.ctrlKey || event.metaKey) return; + if (DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS && (event.key === "Escape")) { + SphinxHighlight.hideSearchWords(); + event.preventDefault(); + } + }); + }, +}; + +_ready(() => { + /* Do not call highlightSearchWords() when we are on the search page. + * It will highlight words from the *previous* search query. + */ + if (typeof Search === "undefined") SphinxHighlight.highlightSearchWords(); + SphinxHighlight.initEscapeListener(); +}); diff --git a/docs/build/html/authors.html b/docs/build/html/authors.html new file mode 100644 index 0000000000000000000000000000000000000000..e73161e5b7d2909f53c8f0603e0f4194b3bf2e8d --- /dev/null +++ b/docs/build/html/authors.html @@ -0,0 +1,119 @@ + + + + + + + Authors — NES 1.1.3 documentation + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Authors

+ +
+ + +
+
+ +
+
+
+
diff --git a/docs/build/html/changelog.html b/docs/build/html/changelog.html
new file mode 100644
index 0000000000000000000000000000000000000000..584369525afd9f8c0553ffa5308969b50648c8b5
--- /dev/null
+++ b/docs/build/html/changelog.html

[Generated Sphinx page "CHANGELOG — NES 1.1.3 documentation". Its rendered content duplicates CHANGELOG.rst verbatim; theme navigation markup omitted.]
diff --git a/docs/build/html/contributing.html b/docs/build/html/contributing.html
new file mode 100644
index 0000000000000000000000000000000000000000..ff412e6b73e820abd4903b90db77c0fc435b7e56
--- /dev/null
+++ b/docs/build/html/contributing.html

[Generated Sphinx page "Contributing — NES 1.1.3 documentation". Its rendered content duplicates CONTRIBUTING.rst verbatim; theme navigation markup omitted.]
diff --git a/docs/build/html/formats.html b/docs/build/html/formats.html
new file mode 100644
index 0000000000000000000000000000000000000000..5eefd790aef8b0c51198c56269f80cdc8a3f4bee
--- /dev/null
+++ b/docs/build/html/formats.html

[Generated Sphinx page "Formats — NES 1.1.3 documentation"; theme navigation markup omitted. The rendered API reference follows.]

Formats

CAMS RA format

nes.nes_formats.cams_ra_format.create_dimension_variables(self, netcdf)  [source]

    Create the 'time', 'time_bnds', 'lev', 'lat', 'lat_bnds', 'lon' and 'lon_bnds' variables.

    Parameters
        self : nes.Nes
            Source projection Nes Object.
        netcdf : Dataset
            netcdf4-python open dataset.

nes.nes_formats.cams_ra_format.create_dimensions(self, netcdf)  [source]

    Create 'time', 'time_bnds', 'lev', 'lon' and 'lat' dimensions.

    Parameters
        self : nes.Nes
            Source projection Nes Object.
        netcdf : Dataset
            netcdf4-python open dataset.

nes.nes_formats.cams_ra_format.create_variables(self, netcdf, i_lev)  [source]

    Create the netCDF file variables.

    Parameters
        self : nes.Nes
            Source projection Nes Object.
        netcdf : Dataset
            netcdf4-python open dataset.

nes.nes_formats.cams_ra_format.date2num(time_array, time_units=None, time_calendar=None)  [source]

nes.nes_formats.cams_ra_format.to_netcdf_cams_ra(self, path)  [source]

    Create the netCDF file in CAMS reanalysis format using netcdf4-python methods.

    Parameters
        self : nes.Nes
            Source projection Nes Object.
        path : str
            Path to the output netCDF file.
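A minimal usage sketch for the CAMS RA writer above. Only to_netcdf_cams_ra(self, path) comes from this page; the reader entry point nes.open_netcdf(), the load() call and both file paths are assumptions for illustration.

.. code-block:: python

    # Minimal sketch, assuming nes.open_netcdf() as the reader entry point
    # and that variable data must be loaded before writing; the paths are
    # hypothetical.
    import nes
    from nes.nes_formats.cams_ra_format import to_netcdf_cams_ra

    source = nes.open_netcdf("input.nc")  # hypothetical input file
    source.load()                         # assumed: read variable data into memory

    # The module function takes the Nes object as its first ('self') argument.
    to_netcdf_cams_ra(source, "output_cams_ra.nc")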

CMAQ format

nes.nes_formats.cmaq_format.change_variable_attributes(self)  [source]

    Modify the emission list so that the output is consistent for use as input to the CMAQ model.

    Parameters
        self : nes.Nes

nes.nes_formats.cmaq_format.create_dimension_variables(self, netcdf)  [source]

    Create the 'y' and 'x' variables.

    Parameters
        self : nes.Nes
        netcdf : Dataset
            NetCDF object.

nes.nes_formats.cmaq_format.create_dimensions(self, netcdf)  [source]

    Create 'time', 'time_bnds', 'lev', 'lon' and 'lat' dimensions.

    Parameters
        self : nes.Nes
        netcdf : Dataset
            netcdf4-python open dataset.

nes.nes_formats.cmaq_format.create_tflag(self)  [source]

    Create the content of the CMAQ variable TFLAG.

    Parameters
        self : nes.Nes

    Returns
        numpy.ndarray
            Array with the content of TFLAG.

nes.nes_formats.cmaq_format.create_variables(self, netcdf)  [source]

    Create the netCDF file variables.

    Parameters
        self : nes.Nes
        netcdf : Dataset
            netcdf4-python open dataset.

nes.nes_formats.cmaq_format.set_global_attributes(self)  [source]

    Set the NetCDF global attributes.

    Parameters
        self : nes.Nes

nes.nes_formats.cmaq_format.str_var_list(self)  [source]

    Transform the list of variable names into a single string, with each element padded with white space to 16 characters.

    Parameters
        self : nes.Nes

    Returns
        str
            The list of variable names transformed into a string.

nes.nes_formats.cmaq_format.to_cmaq_units(self)  [source]

    Change the data values according to the CMAQ conventions.

    Parameters
        self : nes.Nes

    Returns
        dict
            Variables in CMAQ units.

nes.nes_formats.cmaq_format.to_netcdf_cmaq(self, path, chunking=False, keep_open=False)  [source]

    Create the NetCDF using netcdf4-python methods.

    Parameters
        self : nes.Nes
            Source projection Nes Object.
        path : str
            Path to the output netCDF file.
        chunking : bool
            Indicates if you want to chunk the output netCDF.
        keep_open : bool
            Indicates if you want to keep the netCDF open to fill the data by time step.
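A minimal usage sketch for the CMAQ writer above; the reader entry point nes.open_netcdf(), the load() call and the paths are assumptions.

.. code-block:: python

    # Minimal sketch of the CMAQ writer, based on the documented
    # to_netcdf_cmaq(self, path, chunking=False, keep_open=False) signature.
    # nes.open_netcdf() and the file paths are assumptions.
    import nes
    from nes.nes_formats.cmaq_format import to_netcdf_cmaq

    emissions = nes.open_netcdf("emissions.nc")  # hypothetical input file
    emissions.load()                             # assumed loader

    # chunking=True would chunk the output netCDF; keep_open=True keeps the
    # netCDF open so the data can be filled time step by time step.
    to_netcdf_cmaq(emissions, "emissions_cmaq.nc", chunking=False, keep_open=False)

The MONARCH and WRF-CHEM writers below, to_netcdf_monarch() and to_netcdf_wrf_chem(), share the same signature and follow the same usage pattern.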

MONARCH format

nes.nes_formats.monarch_format.to_monarch_units(self)  [source]

    Change the data values according to the MONARCH conventions.

    Parameters
        self : nes.Nes

    Returns
        dict
            Variables in MONARCH units.

nes.nes_formats.monarch_format.to_netcdf_monarch(self, path, chunking=False, keep_open=False)  [source]

    Create the NetCDF using netcdf4-python methods.

    Parameters
        self : nes.Nes
            Source projection Nes Object.
        path : str
            Path to the output netCDF file.
        chunking : bool
            Indicates if you want to chunk the output netCDF.
        keep_open : bool
            Indicates if you want to keep the netCDF open to fill the data by time step.

WRF CHEM format

nes.nes_formats.wrf_chem_format.change_variable_attributes(self)  [source]

    Modify the emission list so that the output is consistent for use as input to the WRF-CHEM model.

    Parameters
        self : nes.Nes

nes.nes_formats.wrf_chem_format.create_dimension_variables(self, netcdf)  [source]

    Create the 'y' and 'x' variables.

    Parameters
        self : nes.Nes
        netcdf : Dataset
            NetCDF object.

nes.nes_formats.wrf_chem_format.create_dimensions(self, netcdf)  [source]

    Create 'time', 'time_bnds', 'lev', 'lon' and 'lat' dimensions.

    Parameters
        self : nes.Nes
        netcdf : Dataset
            netcdf4-python open dataset.

nes.nes_formats.wrf_chem_format.create_times_var(self)  [source]

    Create the content of the WRF-CHEM variable Times.

    Parameters
        self : nes.Nes

    Returns
        numpy.ndarray
            Array with the content of the Times variable.

nes.nes_formats.wrf_chem_format.create_variables(self, netcdf)  [source]

    Create the netCDF file variables.

    Parameters
        self : nes.Nes
        netcdf : Dataset
            netcdf4-python open dataset.

nes.nes_formats.wrf_chem_format.set_global_attributes(self)  [source]

    Set the NetCDF global attributes.

    Parameters
        self : nes.Nes

nes.nes_formats.wrf_chem_format.to_netcdf_wrf_chem(self, path, chunking=False, keep_open=False)  [source]

    Create the NetCDF using netcdf4-python methods.

    Parameters
        self : nes.Nes
            Source projection Nes Object.
        path : str
            Path to the output netCDF file.
        chunking : bool
            Indicates if you want to chunk the output netCDF.
        keep_open : bool
            Indicates if you want to keep the netCDF open to fill the data by time step.

nes.nes_formats.wrf_chem_format.to_wrf_chem_units(self)  [source]

    Change the data values according to the WRF-CHEM conventions.

    Parameters
        self : nes.Nes

    Returns
        dict
            Variables in WRF-CHEM units.
diff --git a/docs/build/html/genindex.html b/docs/build/html/genindex.html
new file mode 100644
index 0000000000000000000000000000000000000000..1763a0c1d3f7d8119223490ef18f7457dde778bd
--- /dev/null
+++ b/docs/build/html/genindex.html

[Generated Sphinx index page "Index — NES 1.1.3 documentation". It indexes the Nes class (nes.nc_projections.default_nes) and the documented modules: nes.create_nes, nes.load_nes, nes.methods.cell_measures, nes.methods.horizontal_interpolation, nes.methods.spatial_join, nes.methods.vertical_interpolation, nes.nc_projections.default_nes, nes.nc_projections.latlon_nes, nes.nc_projections.lcc_nes, nes.nc_projections.mercator_nes, nes.nc_projections.points_nes and nes.nc_projections.points_nes_ghost. Auto-generated entries omitted.]

diff --git a/docs/build/html/index.html b/docs/build/html/index.html
new file mode 100644
index 0000000000000000000000000000000000000000..b229d250ac053d5a6167f8c74278d53dd9de089d
--- /dev/null
+++ b/docs/build/html/index.html

[Generated Sphinx table-of-contents page "Contents — NES 1.1.3 documentation"; theme markup omitted.]

diff --git a/docs/build/html/methods.html b/docs/build/html/methods.html
new file mode 100644
index 0000000000000000000000000000000000000000..df79a9bfe7fdf3c9068281bddb179295d11593f0
--- /dev/null
+++ b/docs/build/html/methods.html

[Generated Sphinx page "Methods — NES 1.1.3 documentation"; theme navigation markup omitted. The rendered API reference follows.]
Methods

Generic methods
nes.methods.cell_measures.calculate_cell_area(grid_corner_lon, grid_corner_lat, earth_radius_minor_axis=6356752.3142, earth_radius_major_axis=6378137.0)  [source]

    Calculate the area of each cell of a grid.

    Parameters
        grid_corner_lon : np.array
            Array with the longitude bounds of the grid.
        grid_corner_lat : np.array
            Array with the latitude bounds of the grid.
        earth_radius_minor_axis : float
            Radius of the minor axis of the Earth.
        earth_radius_major_axis : float
            Radius of the major axis of the Earth.

nes.methods.cell_measures.calculate_geometry_area(geometry_list, earth_radius_minor_axis=6356752.3142, earth_radius_major_axis=6378137.0)  [source]

    Get the coordinate bounds and calculate the area of each cell of a set of geometries.

    Parameters
        geometry_list : List
            List with polygon geometries.
        earth_radius_minor_axis : float
            Radius of the minor axis of the Earth.
        earth_radius_major_axis : float
            Radius of the major axis of the Earth.

nes.methods.cell_measures.calculate_grid_area(self)  [source]

    Get the coordinate bounds and calculate the area of each cell of a grid.

    Parameters
        self : nes.Nes
            Source projection Nes Object.
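A minimal usage sketch for calculate_grid_area() above; the reader entry point nes.open_netcdf() and the input path are assumptions.

.. code-block:: python

    # Minimal sketch, assuming nes.open_netcdf() as the reader entry point;
    # the input path is hypothetical.
    import nes
    from nes.methods.cell_measures import calculate_grid_area

    grid = nes.open_netcdf("regular_grid.nc")  # hypothetical grid file
    calculate_grid_area(grid)  # the Nes object is passed as 'self'
    # No Returns section is documented above, so the computed areas are
    # assumed to be stored on the object itself (as a cell-measures field)
    # rather than returned.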
nes.methods.cell_measures.cross_product(a, b)  [source]

    Calculate the cross product of two points (position vectors).

    Parameters
        a : np.array
            Position of point a in Cartesian coordinates.
        b : np.array
            Position of point b in Cartesian coordinates.

nes.methods.cell_measures.lon_lat_to_cartesian(lon, lat, earth_radius_major_axis=6378137.0)  [source]

    Convert the lon, lat coordinates of a point on a sphere into Cartesian coordinates.

    Parameters
        lon : np.array
            Longitude values.
        lat : np.array
            Latitude values.
        earth_radius_major_axis : float
            Radius of the major axis of the Earth.

nes.methods.cell_measures.mod_huiliers_area(cell_corner_lon, cell_corner_lat)  [source]

    Calculate the area of each cell according to l'Huilier's theorem.
    Reference: CDO (https://earth.bsc.es/gitlab/ces/cdo/)

    Parameters
        cell_corner_lon : np.array
            Longitude boundaries of each cell.
        cell_corner_lat : np.array
            Latitude boundaries of each cell.

nes.methods.cell_measures.norm(cp)  [source]

    Normalize the result of the cross product operation.

    Parameters
        cp : np.array
            Cross product of two points.

nes.methods.cell_measures.tri_area(point_0, point_1, point_2)  [source]

    Calculate the area of the triangle formed by three points.
    Reference: CDO (https://earth.bsc.es/gitlab/ces/cdo/)

    Parameters
        point_0 : np.array
            Position of the first point in Cartesian coordinates.
        point_1 : np.array
            Position of the second point in Cartesian coordinates.
        point_2 : np.array
            Position of the third point in Cartesian coordinates.
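For reference, the geometry behind tri_area() and mod_huiliers_area(): a cell is decomposed into triangles on the sphere, and each triangle's area equals its spherical excess, obtained from the side lengths via l'Huilier's theorem. The following is a self-contained sketch of that formula on the unit sphere (multiply by the squared Earth radius for metres); it illustrates the computation and is not NES's exact code.

.. code-block:: python

    import numpy as np

    def spherical_tri_area(p0, p1, p2):
        """Area of a spherical triangle on the unit sphere (l'Huilier's theorem).

        p0, p1 and p2 are unit vectors in Cartesian coordinates, e.g. produced
        by a lon/lat-to-Cartesian conversion such as lon_lat_to_cartesian().
        """
        # Side lengths as central angles between the corner vectors.
        a = np.arccos(np.clip(np.dot(p1, p2), -1.0, 1.0))
        b = np.arccos(np.clip(np.dot(p0, p2), -1.0, 1.0))
        c = np.arccos(np.clip(np.dot(p0, p1), -1.0, 1.0))
        s = 0.5 * (a + b + c)  # semi-perimeter
        # l'Huilier: tan(E/4)^2 = tan(s/2) tan((s-a)/2) tan((s-b)/2) tan((s-c)/2)
        t = (np.tan(s / 2) * np.tan((s - a) / 2)
             * np.tan((s - b) / 2) * np.tan((s - c) / 2))
        return 4.0 * np.arctan(np.sqrt(max(t, 0.0)))  # spherical excess E

    # Sanity check: one octant of the sphere has area 4*pi/8 = pi/2.
    x, y, z = np.eye(3)
    print(spherical_tri_area(x, y, z), np.pi / 2)  # both ~1.5708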

Horizontal interpolation

nes.methods.horizontal_interpolation.create_area_conservative_weight_matrix(self, dst_nes, wm_path=None, flux=False, info=False)  [source]

    Create the weight matrix with the area-conservative method.

    Parameters
        self : nes.Nes
            Source projection Nes Object.
        dst_nes : nes.Nes
            Final projection Nes object.
        wm_path : str
            Path where the weight matrix will be written.
        flux : bool
            Indicates if you want to calculate the weight matrix for flux variables.
        info : bool
            Indicates if you want to print extra info during the process.

    Returns
        nes.Nes
            Weight matrix.

nes.methods.horizontal_interpolation.create_nn_weight_matrix(self, dst_grid, n_neighbours=4, wm_path=None, info=False)  [source]

    Create the weight matrix with the nearest-neighbours method.

    Parameters
        self : nes.Nes
            Source projection Nes Object.
        dst_grid : nes.Nes
            Final projection Nes object.
        n_neighbours : int
            Used if kind == 'NearestNeighbour'. Number of nearest neighbours to interpolate. Default: 4.
        wm_path : str
            Path where the weight matrix will be written.
        info : bool
            Indicates if you want to print extra info during the process.

    Returns
        nes.Nes
            Weight matrix.

nes.methods.horizontal_interpolation.get_src_data(comm, var_data, idx, parallel_method)  [source]

    Obtain the source data needed to interpolate.

    Parameters
        comm : MPI.Communicator
        var_data : np.array
            Rank source data.
        idx : np.array
            Index of the needed data in 2D flattened form.
        parallel_method : str
            Source parallel method.

    Returns
        np.array
            Flattened source data.

nes.methods.horizontal_interpolation.get_weights_idx_t_axis(self, dst_grid, weight_matrix_path, kind, n_neighbours, only_create, wm, flux)  [source]

    Obtain the weights and source data index through the T axis.

    Parameters
        self : nes.Nes
            Source projection Nes Object.
        dst_grid : nes.Nes
            Final projection Nes object.
        weight_matrix_path : str, None
            Path to the weight matrix to read/create.
        kind : str
            Kind of horizontal interpolation. Accepted values: ['NearestNeighbour', 'Conservative'].
        n_neighbours : int
            Used if kind == 'NearestNeighbour'. Number of nearest neighbours to interpolate. Default: 4.
        only_create : bool
            Indicates if you want to only create the weight matrix.
        wm : Nes
            Weight matrix Nes file.
        flux : bool
            Indicates if you want to calculate the weight matrix for flux variables.

    Returns
        tuple
            Weights and source data index.

nes.methods.horizontal_interpolation.get_weights_idx_xy_axis(self, dst_grid, weight_matrix_path, kind, n_neighbours, only_create, wm, flux)  [source]

    Obtain the weights and source data index through the X or Y axis.

    Parameters
        self : nes.Nes
            Source projection Nes Object.
        dst_grid : nes.Nes
            Final projection Nes object.
        weight_matrix_path : str, None
            Path to the weight matrix to read/create.
        kind : str
            Kind of horizontal interpolation. Accepted values: ['NearestNeighbour', 'Conservative'].
        n_neighbours : int
            Used if kind == 'NearestNeighbour'. Number of nearest neighbours to interpolate. Default: 4.
        only_create : bool
            Indicates if you want to only create the weight matrix.
        wm : Nes
            Weight matrix Nes file.
        flux : bool
            Indicates if you want to calculate the weight matrix for flux variables.

    Returns
        tuple
            Weights and source data index.

nes.methods.horizontal_interpolation.interpolate_horizontal(self, dst_grid, weight_matrix_path=None, kind='NearestNeighbour', n_neighbours=4, info=False, to_providentia=False, only_create_wm=False, wm=None, flux=False)  [source]

    Horizontal interpolation from one grid to another.

    Parameters
        self : nes.Nes
            Source projection Nes Object.
        dst_grid : nes.Nes
            Final projection Nes object.
        weight_matrix_path : str, None
            Path to the weight matrix to read/create.
        kind : str
            Kind of horizontal interpolation. Accepted values: ['NearestNeighbour', 'Conservative'].
        n_neighbours : int
            Used if kind == 'NearestNeighbour'. Number of nearest neighbours to interpolate. Default: 4.
        info : bool
            Indicates if you want to print extra info during the process.
        to_providentia : bool
            Indicates if we want the interpolated grid in Providentia format.
        only_create_wm : bool
            Indicates if you want to only create the weight matrix.
        wm : Nes
            Weight matrix Nes file.
        flux : bool
            Indicates if you want to calculate the weight matrix for flux variables.
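A minimal usage sketch for interpolate_horizontal() as documented above; the reader entry point nes.open_netcdf(), the load() call and the file paths are assumptions.

.. code-block:: python

    # Minimal sketch of a nearest-neighbour interpolation, using the
    # interpolate_horizontal() signature documented above. nes.open_netcdf()
    # and the file paths are assumptions.
    import nes
    from nes.methods.horizontal_interpolation import interpolate_horizontal

    source = nes.open_netcdf("source_grid.nc")         # hypothetical source grid
    source.load()                                      # assumed loader
    dst_grid = nes.open_netcdf("destination_grid.nc")  # hypothetical target grid

    interpolated = interpolate_horizontal(
        source, dst_grid,
        kind="NearestNeighbour",        # or "Conservative"
        n_neighbours=4,                 # documented default
        weight_matrix_path="wm_nn.nc",  # cache the weight matrix for reuse
    )  # assumed to return the interpolated grid as a new Nes object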
+
+nes.methods.horizontal_interpolation.lon_lat_to_cartesian(lon, lat, radius=6378137.0)[source]
+

Calculate lon, lat coordinates of a point on a sphere.

+

DEPRECATED!!!!

+
+

Parameters

+
+
lonnp.array

Longitude values.

+
+
latnp.array

Latitude values.

+
+
radiusfloat

Radius of the sphere to get the distances.

+
+
+
+
+ +
+
+nes.methods.horizontal_interpolation.lon_lat_to_cartesian_ecef(lon, lat)[source]
+

Convert observational/model geographic longitude/latitude coordinates to Cartesian ECEF (Earth-Centred, Earth-Fixed) coordinates, assuming the WGS84 datum and ellipsoid, and that all heights equal zero. ECEF coordinates represent positions (in metres) as X, Y, Z coordinates, approximating the Earth's surface as an ellipsoid of revolution. This conversion is done for the subsequent calculation of Euclidean distances of the model grid-cell centres from each observational station. Defining the distance between two points on the Earth's surface simply as the Euclidean distance between the two lat/lon pairs could lead to inaccurate results, depending on the distance between the two points (i.e. 1 deg. of longitude varies with latitude).

+
+

Parameters

+
+
lonnp.array

Longitude values.

+
+
latnp.array

Latitude values.

+
+
+
+
+ +
+
+nes.methods.horizontal_interpolation.read_weight_matrix(weight_matrix_path, comm=None, parallel_method='T')[source]
+

Read weight matrix.

+
+

Parameters

+
+
weight_matrix_pathstr

Path of the weight matrix.

+
+
commMPI.Communicator

Communicator to read the weight matrix.

+
+
parallel_methodstr

Nes parallel method to read the weight matrix.

+
+
+
+
+

Returns

+
+
nes.Nes

Weight matrix.

+
+
+
+
+ +
+
+

Spatial join

+
+
+nes.methods.spatial_join.get_bbox(self)[source]
+

Obtain the bounding box of the rank data.

+

(lon_min, lat_min, lon_max, lat_max)

+
+

Parameters

+

self : nes.Nes

+
+
+

Returns

+
+
tuple

Bounding box

+
+
+
+
+ +
+
+nes.methods.spatial_join.prepare_external_shapefile(self, ext_shp, var_list, info=False)[source]
+

Prepare the external shapefile.

+

It is highly recommended to pass the ext_shp parameter as a string (a path), since the external shapefile will then be clipped to the rank's domain.

1. Read the shapefile if it is not already read.
2. Filter the variables list.
3. Standardize the projections.
+
+

Parameters

+

self : nes.Nes

ext_shp : GeoDataFrame or str

External shapefile or path to it.

+
+
+
var_listList[str] or None

External shapefile variables to be computed

+
+
infobool

Indicates if you want to print the information

+
+
+
+
+

Returns

+
+
GeoDataFrame

External shapefile

+
+
+
+
+ +
+
+nes.methods.spatial_join.spatial_join(self, ext_shp, method=None, var_list=None, info=False)[source]
+

Compute overlay intersection of two GeoPandasDataFrames.

+
+

Parameters

+

self : nes.Nes

ext_shp : GeoPandasDataFrame or str

File or path from which the data will be obtained for the intersection.

+
+
+
methodstr

Overlay method. Accepted values: [‘nearest’, ‘intersection’, ‘centroid’].

+
+
var_listList or None or str

Variables that will be included in the resulting shapefile.

+
+
infobool

Indicates if you want to print the process info or not.

+
+
+
+
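A hedged sketch of a spatial join; the shapefile path and the attribute name are hypothetical, and nessy stands for an existing nes.Nes grid:

# Intersect a hypothetical external shapefile with the grid, keeping one column
nessy.spatial_join('regions.shp', method='intersection',
                   var_list=['REGION_NAME'], info=True)
print(nessy.shapefile.head())  # assumes the result is stored in the grid shapefile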
+ +
+
+nes.methods.spatial_join.spatial_join_centroid(self, ext_shp, info=False)[source]
+

Perform the spatial join using the centroid method

+
+

Parameters

+

self : nes.Nes

ext_shp : GeoDataFrame

External shapefile.

+
+
+
infobool

Indicates if you want to print the information

+
+
+
+
+ +
+
+nes.methods.spatial_join.spatial_join_intersection(self, ext_shp, info=False)[source]
+

Perform the spatial join using the intersection method

+
+

Parameters

+

self : nes.Nes

ext_shp : GeoDataFrame

External shapefile.

+
+
+
infobool

Indicates if you want to print the information

+
+
+
+
+ +
+
+nes.methods.spatial_join.spatial_join_nearest(self, ext_shp, info=False)[source]
+

Perform the spatial join using the nearest method

+
+

Parameters

+

self : nes.Nes

ext_shp : GeoDataFrame

External shapefile.

+
+
+
infobool

Indicates if you want to print the information

+
+
+
+
+ +
+
+

Vertical interpolation

+
+
+nes.methods.vertical_interpolation.add_4d_vertical_info(self, info_to_add)[source]
+

To add the vertical information from another source.

+
+

Parameters

+
+
selfnes.Nes

Source Nes object.

+
+
info_to_addnes.Nes, str

Nes object with the vertical information as a variable, or str with the path to the NetCDF file that contains the vertical data.

+
+
+
+
+ +
+
+nes.methods.vertical_interpolation.interpolate_vertical(self, new_levels, new_src_vertical=None, kind='linear', extrapolate=None, info=None, overwrite=False)[source]
+

Vertical interpolation.

+
+

Parameters

+
+
selfNes

Source Nes object.

+
+
new_levelsList

List of new vertical levels.

+
+
+

new_src_vertical

kind : str

Vertical interpolation type.

+
+
+
extrapolateNone, tuple, str

Extrapolate method (for non linear operations).

+
+
info: None, bool

Indicates if you want to print extra information.

+
+
overwrite: bool

Indicates if you want to compute the vertical interpolation in the same object or not

+
+
+
+
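A short sketch, assuming nessy is an existing nes.Nes object; the vertical-information file and the target levels below are hypothetical (values and units are assumptions):

# Attach 4D vertical information from another file, then interpolate
nessy.add_4d_vertical_info('layer_heights.nc')  # hypothetical path
nessy.interpolate_vertical([100.0, 500.0, 1000.0, 2000.0],
                           kind='linear', info=True)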
+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/docs/build/html/object.html b/docs/build/html/object.html new file mode 100644 index 0000000000000000000000000000000000000000..19f34d85e6e471677be4be0d7b8fa730ed4eac5d --- /dev/null +++ b/docs/build/html/object.html @@ -0,0 +1,248 @@ + + + + + + + The NES object — NES 1.1.3 documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

The NES object

+
+

Creating a NES object

+
+
+nes.create_nes.create_nes(comm=None, info=False, projection=None, parallel_method='Y', balanced=False, times=None, avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, **kwargs)[source]
+

Create a Nes class from scratch.

+
+

Parameters

+
+
comm: MPI.Communicator

MPI Communicator.

+
+
info: bool

Indicates if you want to get reading/writing info.

+
+
parallel_methodstr

Indicates the parallelization method that you want. Default: ‘Y’. Accepted values: [‘X’, ‘Y’, ‘T’].

+
+
balancedbool

Indicates if you want a balanced parallelization or not. Balanced dataset cannot be written in chunking mode.

+
+
avoid_first_hoursint

Number of hours to remove from first time steps.

+
+
avoid_last_hoursint

Number of hours to remove from last time steps.

+
+
first_levelint

Index of the first level to use.

+
+
last_levelint, None

Index of the last level to use. None if it is the last.

+
+
+
+
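A hedged sketch of creating a regular latitude-longitude grid from scratch. The projection-specific keyword arguments below (grid origin, increments and dimensions) are assumptions forwarded through **kwargs, and the package-level import is also an assumption:

from nes import create_nes  # assumes the factory is exported at package level

nessy = create_nes(projection='regular', parallel_method='Y',
                   lat_orig=30.0, lon_orig=-25.0,  # hypothetical grid origin
                   inc_lat=0.2, inc_lon=0.2,       # hypothetical cell size
                   n_lat=100, n_lon=150)           # hypothetical grid dimensions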
+ +
+
+nes.create_nes.from_shapefile(path, method=None, parallel_method='Y', **kwargs)[source]
+

Create NES from shapefile data.

+
1. Create the NES grid.
2. Create the shapefile for the grid.
3. Spatial join to add the shapefile variables to the NES variables.
+
+

Parameters

+
+
pathstr

Path to shapefile.

+
+
methodstr

Overlay method. Accepted values: [‘nearest’, ‘intersection’, None].

+
+
parallel_methodstr

Indicates the parallelization method that you want. Default: ‘Y’. Accepted values: [‘X’, ‘Y’, ‘T’].

+
+
+
+
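A minimal sketch; the shapefile path and the grid keyword arguments are assumptions (the grid definition is forwarded through **kwargs as in create_nes):

from nes import from_shapefile  # assumes package-level export

nessy = from_shapefile('regions.shp', method='intersection',
                       parallel_method='Y', projection='regular',
                       lat_orig=30.0, lon_orig=-25.0,   # hypothetical grid
                       inc_lat=0.2, inc_lon=0.2,
                       n_lat=100, n_lon=150)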
+ +
+
+

Loading a NES object

+
+
+nes.load_nes.concatenate_netcdfs(nessy_list, comm=None, info=False, parallel_method='Y', avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, balanced=False)[source]
+

Concatenate variables from different sources.

+
+

Parameters

+
+
nessy_listlist

List of Nes objects or list of paths to concatenate.

+
+
commMPI.Communicator

MPI Communicator.

+
+
+
+
+

Returns

+
+
Nes

Nes object with all the variables.

+
+
+
+
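A short sketch, assuming two hypothetical files that hold different variables for the same period, and that the helper is exported at package level:

from nes import concatenate_netcdfs

nessy = concatenate_netcdfs(['tas_202301.nc', 'pr_202301.nc'])  # hypothetical paths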
+ +
+
+nes.load_nes.open_netcdf(path, comm=None, xarray=False, info=False, parallel_method='Y', avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, balanced=False)[source]
+

Open a netCDF file.

+
+

Parameters

+
+
pathstr

Path to the NetCDF file to read.

+
+
commMPI.COMM

MPI communicator to use in that netCDF. Default: MPI.COMM_WORLD.

+
+
xarraybool

(Not working) Indicates if you want to use xarray. Default: False.

+
+
infobool

Indicates if you want to print (stdout) the reading/writing steps.

+
+
avoid_first_hoursint

Number of hours to remove from first time steps.

+
+
avoid_last_hoursint

Number of hours to remove from last time steps.

+
+
parallel_methodstr

Indicates the parallelization method that you want. Default: ‘Y’. Accepted values: [‘X’, ‘Y’, ‘T’].

+
+
balancedbool

Indicates if you want a balanced parallelization or not. Balanced dataset cannot be written in chunking mode.

+
+
first_levelint

Index of the first level to use.

+
+
last_levelint, None

Index of the last level to use. None if it is the last.

+
+
+
+
+

Returns

+
+
Nes

Nes object. Variables read in lazy mode (only metadata).

+
+
+
+
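A hedged sketch of a typical parallel read; the file path and the variable name are hypothetical, and the package-level import is an assumption:

from mpi4py import MPI
from nes import open_netcdf

nessy = open_netcdf('input.nc', comm=MPI.COMM_WORLD,
                    parallel_method='Y', info=True)
nessy.load('sconco3')  # variables stay lazy until load() is called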
+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/docs/build/html/objects.inv b/docs/build/html/objects.inv new file mode 100644 index 0000000000000000000000000000000000000000..0d6816cde833bb79240f0dbfbc17e0d3cee66562 Binary files /dev/null and b/docs/build/html/objects.inv differ diff --git a/docs/build/html/projections.html b/docs/build/html/projections.html new file mode 100644 index 0000000000000000000000000000000000000000..dde678609d0e57e575196a12fb1feaea5825a0b2 --- /dev/null +++ b/docs/build/html/projections.html @@ -0,0 +1,2248 @@ + + + + + + + Projections — NES 1.1.3 documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Projections

+
+

Default projection

+
+
+class nes.nc_projections.default_nes.Nes(comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='Y', avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, balanced=False, times=None, **kwargs)[source]
+

Bases: object

+
+

Attributes

+

comm : MPI.Communicator

rank : int

MPI rank.

+
+
+
masterbool

True when rank == 0.

+
+
sizeint

Size of the communicator.

+
+
infobool

Indicates if you want to print reading/writing info.

+
+
is_xarraybool

(Not working) Indicates if you want to use xarray as default.

+
+
__ini_pathstr

Path to the original file to read when open_netcdf is called.

+
+
hours_startint

Number of hours to avoid from the first original values.

+
+
hours_endint

Number of hours to avoid from the last original values.

+
+
datasetxr.Dataset

(not working) xArray Dataset.

+
+
netcdfDataset

netcdf4-python Dataset.

+
+
variablesdict

Variables information. The variables are stored in a dictionary with the var_name as key and another dictionary with the information as value. That information dictionary contains the ‘data’ key, with None (if the variable is not loaded) or the array values, and the other keys are the variable attributes or description.

+
+
_timeList

Complete list of original time step values.

+
+
_levdict

Vertical level dictionary with the complete ‘data’ key for all the values and the rest of the attributes.

+
+
_latdict

Latitudes dictionary with the complete ‘data’ key for all the values and the rest of the attributes.

+
+
_lon : dict

Longitudes dictionary with the complete ‘data’ key for all the values and the rest of the attributes.

+
+
parallel_methodstr

Parallel method to read/write. Any of the following axes can be chosen to parallelize: ‘T’, ‘Y’ or ‘X’.

+
+
read_axis_limitsdict

Dictionary with the 4D limits of the rank data to read: t_min, t_max, z_min, z_max, y_min, y_max, x_min and x_max.

+
+
write_axis_limitsdict

Dictionary with the 4D limits of the rank data to write: t_min, t_max, z_min, z_max, y_min, y_max, x_min and x_max.

+
+
timeList[datetime]

List of time steps of the rank data.

+
+
levdict

Vertical levels dictionary with the portion of ‘data’ corresponding to the rank values.

+
+
latdict

Latitudes dictionary with the portion of ‘data’ corresponding to the rank values.

+
+
londict

Longitudes dictionary with the portion of ‘data’ corresponding to the rank values.

+
+
global_attrsdict

Global attributes with the attribute name as key and data as values.

+
+
_var_dimNone or tuple

Name of the Y and X dimensions for the variables.

+
+
_lat_dimNone or tuple

Name of the dimensions of the Latitude values.

+
+
_lon_dimNone or tuple

Name of the dimensions of the Longitude values.

+
+
projectionpyproj.Proj

Grid projection.

+
+
projection_datadict

Dictionary with the projection information.

+
+
+
+
+add_4d_vertical_info(info_to_add)[source]
+

To add the vertical information from another source.

+
+

Parameters

+
+
info_to_addnes.Nes, str

Nes object with the vertical information as a variable, or str with the path to the NetCDF file that contains the vertical data.

+
+
+
+
+ +
+
+add_variables_to_shapefile(var_list, idx_lev=0, idx_time=0)[source]
+

Add variables data to shapefile.

+
+
var_listList or str

Variables to be loaded and saved in the shapefile.

+
+
idx_levint

Index of vertical level for which the data will be saved in the shapefile.

+
+
idx_timeint

Index of time for which the data will be saved in the shapefile.

+
+
+
+ +
+
+append_time_step_data(i_time, out_format='DEFAULT')[source]
+

Fill the netCDF data for the indicated time index.

+
+

Parameters

+
+
i_timeint

index of the time step to write

+
+
out_formatstr

Indicates the output format type to change the units (if needed)

+
+
+
+
+ +
+
+static calculate_geometry_area(geometry_list, earth_radius_minor_axis=6356752.3142, earth_radius_major_axis=6378137.0)[source]
+

Get coordinate bounds and call function to calculate the area of each cell of a set of geometries.

+
+

Parameters

+
+
geometry_listList

List with polygon geometries.

+
+
earth_radius_minor_axisfloat

Radius of the minor axis of the Earth.

+
+
earth_radius_major_axisfloat

Radius of the major axis of the Earth.

+
+
+
+
+ +
+
+calculate_grid_area(overwrite=True)[source]
+

Get coordinate bounds and call function to calculate the area of each cell of a grid.

+
+

Parameters

+
+
selfnes.Nes

Source projection Nes Object.

+
+
overwritebool

Indicates if we want to overwrite the grid area

+
+
+
+
+ +
+
+clear_communicator()[source]
+

Erase the communicator and the parallelization indexes.

+
+ +
+
+close()[source]
+

Close the NetCDF with netcdf4-python.

+
+ +
+
+concatenate(aux_nessy)[source]
+

Concatenate different variables into the same nes object.

+
+

Parameters

+
+
aux_nessyNes, str

Nes object or str with the path to the NetCDF file that contains the variables to add.

+
+
+
+
+

Returns

+
+
list

List of var names added.

+
+
+
+
+ +
+
+copy(copy_vars=False)[source]
+

Copy the Nes object. By default, the copy excludes the communicator, the dataset and the variables.

+
+

Parameters

+
+
copy_vars: bool

Indicates if you want to copy the variables (in lazy mode).

+
+
+
+
+

Returns

+
+
nessyNes

Copy of the Nes object.

+
+
+
+
+ +
+
+create_shapefile()[source]
+

Create spatial geodataframe (shapefile).

+
+

Returns

+
+
shapefileGeoPandasDataFrame

Shapefile dataframe.

+
+
+
+
+ +
+
+static create_single_spatial_bounds(coordinates, inc, spatial_nv=2, inverse=False)[source]
+

Calculate the vertices coordinates.

+
+

Parameters

+
+
coordinatesnp.array

Coordinates in degrees (latitude or longitude).

+
+
incfloat

Increment between centre values.

+
+
spatial_nvint

Optional parameter that indicates the number of vertices that the boundaries must have. Default: 2.

+
+
inversebool

For some grid latitudes.

+
+
+
+
+

Returns

+
+
boundsnp.array

Array with as many elements as vertices for each value of coords.

+
+
+
+
+ +
+
+create_spatial_bounds()[source]
+

Calculate longitude and latitude bounds and set them.

+
+ +
+
+daily_statistic(op, type_op='calendar')[source]
+

Calculate daily statistic.

+
+

Parameters

+
+
opstr

Statistic to perform. Accepted values: “max”, “mean” and “min”.

+
+
type_opstr

Type of statistic to perform. Accepted values: “calendar”, “alltsteps”, and “withoutt0”.

• “calendar”: Calculate the statistic using the time metadata. It avoids single-time-step-per-day calculations.
• “alltsteps”: Calculate a single time statistic with all the time steps.
• “withoutt0”: Calculate a single time statistic with all the time steps, avoiding the first one.
+
+
+
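A minimal sketch, assuming nessy is an existing nes.Nes object with hourly data already read:

nessy.load()                                          # load all variables first
nessy.daily_statistic(op='mean', type_op='calendar')  # daily means from the time metadata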
+ +
+
+filter_coordinates_selection()[source]
+

Use the selection limits to filter time, lev, lat, lon, lon_bnds and lat_bnds.

+
+ +
+
+find_time_id(time)[source]
+

Find index of time in time array.

+
+

Parameters

+
+
timedatetime.datetime

Time element.

+
+
+
+
+

Returns

+
+
int

Index of time element.

+
+
+
+
+ +
+
+free_vars(var_list)[source]
+

Erase the selected variables from the variables’ information.

+
+

Parameters

+
+
var_listList or str

List (or single string) of the variables to be erased.

+
+
+
+
+ +
+
+get_centroids_from_coordinates()[source]
+

Get centroids from geographical coordinates.

+
+

Returns

+
+
centroids_gdf: GeoPandasDataFrame

Centroids dataframe.

+
+
+
+
+ +
+
+get_climatology()[source]
+
+ +
+
+static get_coordinate_id(array, value, axis=0)[source]
+

Get the index of the corresponding coordinate value.

+
+

Parameters

+
+
arraynp.array

Array with the coordinate data

+
+
valuefloat

Coordinate value to search.

+
+
axisint

Axis along which to find the value. Default: 0.

+
+
+
+
+

Returns

+
+
int

Index of the coordinate array.

+
+
+
+
+ +
+
+static get_earth_radius(ellps)[source]
+

Get minor and major axis of Earth.

+
+

Parameters

+
+
ellpsstr

Spatial reference system.

+
+
+
+
+ +
+
+get_fids()[source]
+

Obtain the FIDs in a 2D format.

+
+

Returns

+
+
np.array

2D array with the FID data

+
+
+
+
+ +
+
+get_full_levels()[source]
+
+ +
+
+get_full_times()[source]
+
+ +
+
+get_idx_intervals()[source]
+

Calculate the index intervals

+
+

Returns

+
+
dict

Dictionary with the index intervals

+
+
+
+
+ +
+
+get_read_axis_limits()[source]
+

Calculate the 4D reading axis limits, depending on whether they have to be balanced or not.

+
+

Returns

+
+
dict

Dictionary with the 4D limits of the rank data to read: t_min, t_max, z_min, z_max, y_min, y_max, x_min and x_max.

+
+
+
+
+ +
+
+get_read_axis_limits_balanced()[source]
+

Calculate the 4D reading balanced axis limits.

+
+

Returns

+
+
dict

Dictionary with the 4D limits of the rank data to read: t_min, t_max, z_min, z_max, y_min, y_max, x_min and x_max.

+
+
+
+
+ +
+
+get_read_axis_limits_unbalanced()[source]
+

Calculate the 4D reading axis limits.

+
+

Returns

+
+
dict

Dictionary with the 4D limits of the rank data to read: t_min, t_max, z_min, z_max, y_min, y_max, x_min and x_max.

+
+
+
+
+ +
+
+get_spatial_bounds_mesh_format()[source]
+

Get the spatial bounds in the pcolormesh format:

+

see: https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.pcolormesh.html

+
+

Returns

+
+
lon_bnds_meshnumpy.ndarray

Longitude boundaries in the mesh format

+
+
lat_bnds_meshnumpy.ndarray

Latitude boundaries in the mesh format

+
+
+
+
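A hedged plotting sketch; the variable name and the (time, level) indices are hypothetical, and nessy is an existing nes.Nes object with its data loaded:

import matplotlib.pyplot as plt

lon_bnds_mesh, lat_bnds_mesh = nessy.get_spatial_bounds_mesh_format()
plt.pcolormesh(lon_bnds_mesh, lat_bnds_mesh,
               nessy.variables['sconco3']['data'][0, 0])  # first time step and level
plt.show()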
+ +
+
+get_time_id(hours, first=True)[source]
+

Get the index of the corresponding time value.

+
+

Parameters

+
+
hoursint

Number of hours to avoid.

+
+
firstbool

Indicates if you want to avoid from the first hours (True) or from the last (False). +Default: True.

+
+
+
+
+

Returns

+
+
int

Index of the time array.

+
+
+
+
+ +
+
+get_time_interval()[source]
+

Calculate the interval of hours between time steps.

+
+

Returns

+
+
int

Number of hours between time steps.

+
+
+
+
+ +
+
+get_write_axis_limits()[source]
+

Calculate the 4D writing axis limits, depending on whether they have to be balanced or not.

+
+

Returns

+
+
dict

Dictionary with the 4D limits of the rank data to write: t_min, t_max, z_min, z_max, y_min, y_max, x_min and x_max.

+
+
+
+
+ +
+
+get_write_axis_limits_balanced()[source]
+

Calculate the 4D writing balanced axis limits.

+
+

Returns

+
+
dict

Dictionary with the 4D limits of the rank data to write: t_min, t_max, z_min, z_max, y_min, y_max, x_min and x_max.

+
+
+
+
+ +
+
+get_write_axis_limits_unbalanced()[source]
+

Calculate the 4D writing axis limits.

+
+

Returns

+
+
dict

Dictionary with the 4D limits of the rank data to write: t_min, t_max, z_min, z_max, y_min, y_max, x_min and x_max.

+
+
+
+
+ +
+
+interpolate_horizontal(dst_grid, weight_matrix_path=None, kind='NearestNeighbour', n_neighbours=4, info=False, to_providentia=False, only_create_wm=False, wm=None, flux=False)[source]
+

Horizontal interpolation from the current grid to another one.

+
+

Parameters

+
+
dst_gridnes.Nes

Final projection Nes object.

+
+
weight_matrix_pathstr, None

Path to the weight matrix to read/create.

+
+
kindstr

Kind of horizontal interpolation. Accepted values: [‘NearestNeighbour’, ‘Conservative’].

+
+
n_neighbours: int

Used if kind == NearestNeighbour. Number of nearest neighbours to interpolate. Default: 4.

+
+
info: bool

Indicates if you want to print extra info during the methods process.

+
+
to_providentiabool

Indicates if we want the interpolated grid in Providentia format.

+
+
only_create_wmbool

Indicates if you want to only create the Weight Matrix.

+
+
wmNes

Weight matrix Nes File

+
+
fluxbool

Indicates if you want to calculate the weight matrix for flux variables

+
+
+
+
+ +
+
+interpolate_vertical(new_levels, new_src_vertical=None, kind='linear', extrapolate=None, info=None, overwrite=False)[source]
+

Vertical interpolation function.

+
+

Parameters

+
+
selfNes

Source Nes object.

+
+
new_levelsList

New vertical levels.

+
+
+

new_src_vertical

kind : str

Vertical interpolation type.

+
+
+
extrapolateNone, tuple, str

Extrapolate method (for non linear operations).

+
+
info: None, bool

Indicates if you want to print extra information.

+
+
overwrite: bool

Indicates if you want to compute the vertical interpolation in the same object or not

+
+
+
+
+ +
+
+keep_vars(var_list)[source]
+

Keep the selected variables and erase the rest.

+
+

Parameters

+
+
var_listList or str

List (or single string) of the variables to be loaded.

+
+
+
+
+ +
+
+last_time_step()[source]
+

Modify variables to keep only the last time step.

+
+ +
+
+load(var_list=None)[source]
+

Load the selected variables.

+

This function fills the variable ‘data’ key with the corresponding values.

+
+

Parameters

+
+
var_listList, str, None

List (or single string) of the variables to be loaded.

+
+
+
+
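A short sketch; the variable names are hypothetical, and the dimension order noted in the comment is an assumption consistent with the (t, z, y, x) axis limits documented above:

nessy.load(['sconco3', 'sconcno2'])        # hypothetical variable names
data = nessy.variables['sconco3']['data']  # rank portion, assumed (time, lev, y, x)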
+ +
+
+static lon_lat_to_cartesian_ecef(lon, lat)[source]
+

Convert observational/model geographic longitude/latitude coordinates to Cartesian ECEF (Earth-Centred, Earth-Fixed) coordinates, assuming the WGS84 datum and ellipsoid, and that all heights equal zero. ECEF coordinates represent positions (in metres) as X, Y, Z coordinates, approximating the Earth's surface as an ellipsoid of revolution. This conversion is done for the subsequent calculation of Euclidean distances of the model grid-cell centres from each observational station. Defining the distance between two points on the Earth's surface simply as the Euclidean distance between the two lat/lon pairs could lead to inaccurate results, depending on the distance between the two points (i.e. 1 deg. of longitude varies with latitude).

+
+

Parameters

+
+
lonnp.array

Longitude values.

+
+
latnp.array

Latitude values.

+
+
+
+
+ +
+
+static new(comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='Y', avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, balanced=False, times=None, **kwargs)[source]
+

Initialize the Nes class.

+
+

Parameters

+
+
comm: MPI.COMM

MPI Communicator.

+
+
path: str

Path to the NetCDF to initialize the object.

+
+
info: bool

Indicates if you want to get reading/writing info.

+
+
dataset: Dataset

NetCDF4-python Dataset to initialize the class.

+
+
xarray: bool

(Not working) Indicates if you want to use xarray as default.

+
+
parallel_methodstr

Indicates the parallelization method that you want. Default: ‘Y’. Accepted values: [‘X’, ‘Y’, ‘T’].

+
+
avoid_first_hoursint

Number of hours to remove from first time steps.

+
+
avoid_last_hoursint

Number of hours to remove from last time steps.

+
+
first_levelint

Index of the first level to use.

+
+
last_levelint or None

Index of the last level to use. None if it is the last.

+
+
create_nesbool

Indicates if you want to create the object from scratch (True) or through an existing file.

+
+
balancedbool

Indicates if you want a balanced parallelization or not. Balanced dataset cannot be written in chunking mode.

+
+
timesList[datetime] or None

List of times to substitute the current ones while creation.

+
+
+
+
+ +
+
+open()[source]
+

Open the NetCDF.

+
+ +
+
+reverse_level_direction()[source]
+
+ +
+
+rolling_mean(var_list=None, hours=8)[source]
+

Calculate the rolling mean for the given hours.

+
+

Parameters

+
+
var_list: List, str, None

List (or single string) of the variables to be loaded.

+
+
hoursint, optional

Window hours to calculate rolling mean, by default 8

+
+
+
+
+

Returns

+
+
Nes

Nes object

+
+
+
+
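A minimal sketch with a hypothetical variable name:

smoothed = nessy.rolling_mean(var_list='sconco3', hours=8)  # 8-hour rolling mean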
+ +
+
+sel(hours_start=None, time_min=None, hours_end=None, time_max=None, lev_min=None, lev_max=None, lat_min=None, lat_max=None, lon_min=None, lon_max=None)[source]
+

Select a slice of time, lev, lat or lon given minimum and maximum limits.

+
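A hedged sketch that restricts the selection before loading; all limit values are hypothetical:

from datetime import datetime

nessy.sel(time_min=datetime(2023, 1, 1), time_max=datetime(2023, 1, 2),
          lat_min=30.0, lat_max=60.0, lon_min=-15.0, lon_max=25.0)
nessy.load()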
+ +
+
+sel_time(time, copy=False)[source]
+

To select only one time step.

+
+

Parameters

+
+
timedatetime.datetime

Time stamp to select.

+
+
copybool

Indicates if you want a copy with the selected time step (True) or to modify the existing one (False).

+
+
+
+
+

Returns

+
+
Nes

Nes object with the data (and metadata) of the selected time step.

+
+
+
+
+ +
+
+set_climatology(is_climatology)[source]
+
+ +
+
+set_communicator(comm)[source]
+

Set a new communicator and the correspondent parallelization indexes.

+
+

Parameters

+
+
comm: MPI.COMM

Communicator to be set.

+
+
+
+
+ +
+
+set_level_direction(new_direction)[source]
+
+ +
+
+set_levels(levels)[source]
+

Modify the original level values with new ones.

+
+

Parameters

+
+
levelsdict

Dictionary with the new level information to be set.

+
+
+
+
+ +
+
+set_strlen(strlen=75)[source]
+

Set the strlen.

75 is the standard value used in GHOST data.

+
+

Parameters

+
+
strlenint or None

Max length of the string

+
+
+
+
+ +
+
+set_time(time_list)[source]
+

Modify the original time values with new ones.

+
+

Parameters

+
+
time_listList[datetime]

List of time steps

+
+
+
+
+ +
+
+set_time_bnds(time_bnds)[source]
+

Modify the original time bounds values with new ones.

+
+

Parameters

+
+
time_bndsList

List with the new time bounds information to be set.

+
+
+
+
+ +
+
+set_time_resolution(new_resolution)[source]
+
+ +
+
+spatial_join(ext_shp, method=None, var_list=None, info=False)[source]
+

Compute overlay intersection of two GeoPandasDataFrames.

+
+

Parameters

+
+
ext_shpGeoPandasDataFrame or str

File or path from which the data will be obtained for the intersection.

+
+
methodstr

Overlay method. Accepted values: [‘nearest’, ‘intersection’, ‘centroid’].

+
+
var_listList or None

Variables that will be included in the resulting shapefile.

+
+
infobool

Indicates if you want to print the process info or not.

+
+
+
+
+ +
+
+str2char(data)[source]
+
+ +
+
+sum_axis(axis='Z')[source]
+
+ +
+
+to_dtype(data_type='float32')[source]
+

Cast the variables’ data into the selected data type.

+
+

Parameters

+
+
data_typestr or Type

Data type, by default ‘float32’

+
+
+
+
+ +
+
+to_grib2(path, grib_keys, grib_template_path, lat_flip=True, info=False)[source]
+

Write output file with grib2 format.

+
+

Parameters

+
+
pathstr

Path to the output file.

+
+
grib_keysdict

Dictionary with the grib2 keys.

+
+
grib_template_pathstr

Path to the grib2 file to use as template.

+
+
lat_flipbool

Indicates if the latitude values (and data) have to be flipped.

+
+
infobool

Indicates if you want to print extra information during the process.

+
+
+
+
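A hedged sketch; the grib2 keys and the template path depend entirely on the target archive and are hypothetical here:

grib_keys = {'centre': 'ecmf', 'typeOfLevel': 'surface'}  # hypothetical keys
nessy.to_grib2('output.grb2', grib_keys, 'template.grb2', info=True)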
+ +
+
+to_netcdf(path, compression_level=0, serial=False, info=False, chunking=False, type='NES', keep_open=False)[source]
+

Write the netCDF output file.

+
+

Parameters

+
+
pathstr

Path to the output netCDF file.

+
+
compression_levelint

Level of compression (0 to 9) Default: 0 (no compression).

+
+
serialbool

Indicates if you want to write in serial or not. Default: False.

+
+
infobool

Indicates if you want to print the information of each writing step to stdout. Default: False.

+
+
chunkingbool

Indicates if you want a chunked netCDF output. Only available with non-serial writes. Default: False.

+
+
typestr

Type of NetCDF to write. Accepted values: ‘CAMS_RA’ or ‘NES’.

+
+
keep_openbool

Indicates if you want to keep the NetCDF open to fill the data time step by time step.

+
+
+
+
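A short sketch of a compressed write; the output path is hypothetical:

nessy.to_netcdf('output.nc', compression_level=4, info=True)
# Or gather everything and write from a single rank:
# nessy.to_netcdf('output.nc', serial=True)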
+ +
+
+to_shapefile(path, time=None, lev=None, var_list=None)[source]
+

Create shapefile from NES data.

+
1. Create the grid shapefile.
2. Add variables to the shapefile (as an independent function).
3. Write the shapefile.
+
+

Parameters

+
+
pathstr

Path to the output file.

+
+
timedatetime.datetime

Time stamp to select.

+
+
levint

Vertical level to select.

+
+
var_listList, str, None

List (or single string) of the variables to be loaded and saved in the shapefile.

+
+
+
+
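A minimal sketch; the output path, the time stamp and the variable name are hypothetical:

from datetime import datetime

nessy.to_shapefile('cells.shp', time=datetime(2023, 1, 1),
                   lev=0, var_list=['sconco3'])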
+ +
+
+write_shapefile(path)[source]
+

Save spatial geodataframe (shapefile).

+
+

Parameters

+
+
pathstr

Path to the output file.

+
+
+
+
+ +
+
+ +
+
+

Regular lat-lon projection

+
+
+class nes.nc_projections.latlon_nes.LatLonNes(comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='Y', avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, balanced=False, times=None, **kwargs)[source]
+

Bases: Nes

+
+

Attributes

+
+
_var_dimtuple

Tuple with the name of the Y and X dimensions for the variables. +(‘lat’, ‘lon’) for a regular latitude-longitude projection.

+
+
_lat_dimtuple

Tuple with the name of the dimensions of the Latitude values. +(‘lat’,) for a regular latitude-longitude projection.

+
+
_lon_dimtuple

Tuple with the name of the dimensions of the Longitude values. +(‘lon’,) for a regular latitude-longitude projection.

+
+
+
+
+create_providentia_exp_centre_coordinates()[source]
+

Calculate centre latitudes and longitudes from original coordinates and store as 2D arrays.

+
+

Returns

+
+
model_centre_latdict

Dictionary with data of centre coordinates for latitude in 2D (latitude, longitude).

+
+
model_centre_londict

Dictionary with data of centre coordinates for longitude in 2D (latitude, longitude).

+
+
+
+
+ +
+
+create_providentia_exp_grid_edge_coordinates()[source]
+

Calculate grid edge latitudes and longitudes and get model grid outline.

+
+

Returns

+
+
grid_edge_latdict

Dictionary with data of grid edge latitudes.

+
+
grid_edge_londict

Dictionary with data of grid edge longitudes.

+
+
+
+
+ +
+
+static new(comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='Y', avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, balanced=False, times=None, **kwargs)[source]
+

Initialize the Nes class.

+
+

Parameters

+
+
comm: MPI.COMM

MPI Communicator.

+
+
path: str

Path to the NetCDF to initialize the object.

+
+
info: bool

Indicates if you want to get reading/writing info.

+
+
dataset: Dataset

NetCDF4-python Dataset to initialize the class.

+
+
xarray: bool:

(Not working) Indicates if you want to use xarray as default.

+
+
parallel_methodstr

Indicates the parallelization method that you want. Default: ‘Y’. Accepted values: [‘X’, ‘Y’, ‘T’].

+
+
avoid_first_hoursint

Number of hours to remove from first time steps.

+
+
avoid_last_hoursint

Number of hours to remove from last time steps.

+
+
first_levelint

Index of the first level to use.

+
+
last_levelint, None

Index of the last level to use. None if it is the last.

+
+
create_nesbool

Indicates if you want to create the object from scratch (True) or through an existing file.

+
+
balancedbool

Indicates if you want a balanced parallelization or not. Balanced dataset cannot be written in chunking mode.

+
+
timeslist, None

List of times to substitute the current ones while creation.

+
+
+
+
+ +
+
+to_grib2(path, grib_keys, grib_template_path, lat_flip=False, info=False)[source]
+

Write output file with grib2 format.

+
+

Parameters

+
+
lat_flipbool

Indicates if the latitudes have to be flipped

+
+
pathstr

Path to the output file.

+
+
grib_keysdict

Dictionary with the grib2 keys.

+
+
grib_template_pathstr

Path to the grib2 file to use as template.

+
+
infobool

Indicates if you want to print extra information during the process.

+
+
+
+
+ +
+
+ +
+
+

LCC projection

+
+
+class nes.nc_projections.lcc_nes.LCCNes(comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='Y', avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, balanced=False, times=None, **kwargs)[source]
+

Bases: Nes

+
+

Attributes

+
+
_ydict

Y coordinates dictionary with the complete ‘data’ key for all the values and the rest of the attributes.

+
+
_xdict

X coordinates dictionary with the complete ‘data’ key for all the values and the rest of the attributes.

+
+
ydict

Y coordinates dictionary with the portion of ‘data’ corresponding to the rank values.

+
+
xdict

X coordinates dictionary with the portion of ‘data’ corresponding to the rank values.

+
+
_var_dimtuple

Tuple with the name of the Y and X dimensions for the variables. +(‘y’, ‘x’,) for a LCC projection.

+
+
_lat_dimtuple

Tuple with the name of the dimensions of the Latitude values. +(‘y’, ‘x’,) for a LCC projection.

+
+
_lon_dimtuple

Tuple with the name of the dimensions of the Longitude values. +(‘y’, ‘x’) for a LCC projection.

+
+
+
+
+create_providentia_exp_centre_coordinates()[source]
+

Calculate centre latitudes and longitudes from original coordinates and store as 2D arrays.

+
+

Returns

+
+
model_centre_latdict

Dictionary with data of centre coordinates for latitude in 2D (latitude, longitude).

+
+
model_centre_londict

Dictionary with data of centre coordinates for longitude in 2D (latitude, longitude).

+
+
+
+
+ +
+
+create_providentia_exp_grid_edge_coordinates()[source]
+

Calculate grid edge latitudes and longitudes and get model grid outline.

+
+

Returns

+
+
grid_edge_latdict

Dictionary with data of grid edge latitudes.

+
+
grid_edge_londict

Dictionary with data of grid edge longitudes.

+
+
+
+
+ +
+
+create_shapefile()[source]
+

Create spatial geodataframe (shapefile).

+
+

Returns

+
+
shapefileGeoPandasDataFrame

Shapefile dataframe.

+
+
+
+
+ +
+
+create_spatial_bounds()[source]
+

Calculate longitude and latitude bounds and set them.

+
+ +
+
+filter_coordinates_selection()[source]
+

Use the selection limits to filter y, x, time, lev, lat, lon, lon_bnds and lat_bnds.

+
+ +
+
+get_centroids_from_coordinates()[source]
+

Get centroids from geographical coordinates.

+
+

Returns

+
+
centroids_gdf: GeoPandasDataFrame

Centroids dataframe.

+
+
+
+
+ +
+
+static new(comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='Y', avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, balanced=False, times=None, **kwargs)[source]
+

Initialize the Nes class.

+
+

Parameters

+
+
comm: MPI.COMM

MPI Communicator.

+
+
path: str

Path to the NetCDF to initialize the object.

+
+
info: bool

Indicates if you want to get reading/writing info.

+
+
dataset: Dataset

NetCDF4-python Dataset to initialize the class.

+
+
xarray: bool:

(Not working) Indicates if you want to use xarray as default.

+
+
parallel_methodstr

Indicates the parallelization method that you want. Default: ‘Y’. Accepted values: [‘X’, ‘Y’, ‘T’].

+
+
avoid_first_hoursint

Number of hours to remove from first time steps.

+
+
avoid_last_hoursint

Number of hours to remove from last time steps.

+
+
first_levelint

Index of the first level to use.

+
+
last_levelint, None

Index of the last level to use. None if it is the last.

+
+
create_nesbool

Indicates if you want to create the object from scratch (True) or through an existing file.

+
+
balancedbool

Indicates if you want a balanced parallelization or not. Balanced dataset cannot be written in chunking mode.

+
+
timeslist, None

List of times to substitute the current ones while creation.

+
+
+
+
+ +
+
+to_grib2(path, grib_keys, grib_template_path, lat_flip=False, info=False)[source]
+

Write output file with grib2 format.

+
+

Parameters

+
+
pathstr

Path to the output file.

+
+
grib_keysdict

Dictionary with the grib2 keys.

+
+
grib_template_pathstr

Path to the grib2 file to use as template.

+
+
infobool

Indicates if you want to print extra information during the process.

+
+
+
+
+ +
+
+ +
+
+

Mercator projection

+
+
+class nes.nc_projections.mercator_nes.MercatorNes(comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='Y', avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, balanced=False, times=None, **kwargs)[source]
+

Bases: Nes

+
+

Attributes

+
+
_ydict

Y coordinates dictionary with the complete ‘data’ key for all the values and the rest of the attributes.

+
+
_xdict

X coordinates dictionary with the complete ‘data’ key for all the values and the rest of the attributes.

+
+
ydict

Y coordinates dictionary with the portion of ‘data’ corresponding to the rank values.

+
+
xdict

X coordinates dictionary with the portion of ‘data’ corresponding to the rank values.

+
+
_var_dimtuple

Tuple with the name of the Y and X dimensions for the variables. +(‘y’, ‘x’) for a Mercator projection.

+
+
_lat_dimtuple

Tuple with the name of the dimensions of the Latitude values. +(‘y’, ‘x’) for a Mercator projection.

+
+
_lon_dimtuple

Tuple with the name of the dimensions of the Longitude values. +(‘y’, ‘x’) for a Mercator projection.

+
+
+
+
+create_providentia_exp_centre_coordinates()[source]
+

Calculate centre latitudes and longitudes from original coordinates and store as 2D arrays.

+
+

Returns

+
+
model_centre_latdict

Dictionary with data of centre coordinates for latitude in 2D (latitude, longitude).

+
+
model_centre_londict

Dictionary with data of centre coordinates for longitude in 2D (latitude, longitude).

+
+
+
+
+ +
+
+create_providentia_exp_grid_edge_coordinates()[source]
+

Calculate grid edge latitudes and longitudes and get model grid outline.

+
+

Returns

+
+
grid_edge_latdict

Dictionary with data of grid edge latitudes.

+
+
grid_edge_londict

Dictionary with data of grid edge longitudes.

+
+
+
+
+ +
+
+create_shapefile()[source]
+

Create spatial geodataframe (shapefile).

+
+

Returns

+
+
shapefileGeoPandasDataFrame

Shapefile dataframe.

+
+
+
+
+ +
+
+create_spatial_bounds()[source]
+

Calculate longitude and latitude bounds and set them.

+
+ +
+
+filter_coordinates_selection()[source]
+

Use the selection limits to filter y, x, time, lev, lat, lon, lon_bnds and lat_bnds.

+
+ +
+
+get_centroids_from_coordinates()[source]
+

Get centroids from geographical coordinates.

+
+

Returns

+
+
centroids_gdf: GeoPandasDataFrame

Centroids dataframe.

+
+
+
+
+ +
+
+static new(comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='Y', avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, balanced=False, times=None, **kwargs)[source]
+

Initialize the Nes class.

+
+

Parameters

+
+
comm: MPI.COMM

MPI Communicator.

+
+
path: str

Path to the NetCDF to initialize the object.

+
+
info: bool

Indicates if you want to get reading/writing info.

+
+
dataset: Dataset

NetCDF4-python Dataset to initialize the class.

+
+
xarray: bool:

(Not working) Indicates if you want to use xarray as default.

+
+
parallel_methodstr

Indicates the parallelization method that you want. Default: ‘Y’. Accepted values: [‘X’, ‘Y’, ‘T’].

+
+
avoid_first_hoursint

Number of hours to remove from first time steps.

+
+
avoid_last_hoursint

Number of hours to remove from last time steps.

+
+
first_levelint

Index of the first level to use.

+
+
last_levelint, None

Index of the last level to use. None if it is the last.

+
+
create_nesbool

Indicates if you want to create the object from scratch (True) or through an existing file.

+
+
balancedbool

Indicates if you want a balanced parallelization or not. Balanced dataset cannot be written in chunking mode.

+
+
timeslist, None

List of times to substitute the current ones while creation.

+
+
+
+
+ +
+
+to_grib2(path, grib_keys, grib_template_path, lat_flip=False, info=False)[source]
+

Write output file with grib2 format.

+
+

Parameters

+
+
pathstr

Path to the output file.

+
+
grib_keysdict

Dictionary with the grib2 keys.

+
+
grib_template_pathstr

Path to the grib2 file to use as template.

+
+
infobool

Indicates if you want to print extra information during the process.

+
+
+
+
+ +
+
+ +
+
+

Points projection

+
+
+class nes.nc_projections.points_nes.PointsNes(comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='X', avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, balanced=False, times=None, **kwargs)[source]
+

Bases: Nes

+
+

Attributes

+
+
_var_dimtuple

Tuple with the name of the Y and X dimensions for the variables. +(‘lat’, ‘lon’) for a points grid.

+
+
_lat_dimtuple

Tuple with the name of the dimensions of the Latitude values. +(‘lat’,) for a points grid.

+
+
_lon_dimtuple

Tuple with the name of the dimensions of the Longitude values. +(‘lon’,) for a points grid.

+
+
_stationtuple

Tuple with the name of the dimensions of the station values. +(‘station’,) for a points grid.

+
+
+
+
+add_variables_to_shapefile(var_list, idx_lev=0, idx_time=0)[source]
+

Add variables data to shapefile.

+
+
var_listlist, str

List (or single string) of the variables to be loaded and saved in the shapefile.

+
+
idx_levint

Index of vertical level for which the data will be saved in the shapefile.

+
+
idx_timeint

Index of time for which the data will be saved in the shapefile.

+
+
+
+ +
+
+create_shapefile()[source]
+

Create spatial geodataframe (shapefile).

+
+

Returns

+
+
shapefileGeoPandasDataFrame

Shapefile dataframe.

+
+
+
+
+ +
+
+create_spatial_bounds()[source]
+

Calculate longitude and latitude bounds and set them.

+
+ +
+
+get_centroids_from_coordinates()[source]
+

Get centroids from geographical coordinates.

+
+

Returns

+
+
centroids_gdf: GeoPandasDataFrame

Centroids dataframe.

+
+
+
+
+ +
+
+static new(comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='X', avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, balanced=False, times=None, **kwargs)[source]
+

Initialize the Nes class.

+
+

Parameters

+
+
comm: MPI.COMM

MPI Communicator.

+
+
path: str

Path to the NetCDF to initialize the object.

+
+
info: bool

Indicates if you want to get reading/writing info.

+
+
dataset: Dataset

NetCDF4-python Dataset to initialize the class.

+
+
xarray: bool:

(Not working) Indicates if you want to use xarray as default.

+
+
parallel_methodstr

Indicates the parallelization method that you want. Default: ‘X’. Accepted values: [‘X’, ‘T’].

+
+
avoid_first_hoursint

Number of hours to remove from first time steps.

+
+
avoid_last_hoursint

Number of hours to remove from last time steps.

+
+
first_levelint

Index of the first level to use.

+
+
last_levelint, None

Index of the last level to use. None if it is the last.

+
+
create_nesbool

Indicates if you want to create the object from scratch (True) or through an existing file.

+
+
balancedbool

Indicates if you want a balanced parallelization or not. Balanced dataset cannot be written in chunking mode.

+
+
timeslist, None

List of times to substitute the current ones while creation.

+
+
+
+
+ +
+
+to_grib2(path, grib_keys, grib_template_path, lat_flip=False, info=False)[source]
+

Write output file with grib2 format.

+
+

Parameters

+
+
pathstr

Path to the output file.

+
+
grib_keysdict

Dictionary with the grib2 keys.

+
+
grib_template_pathstr

Path to the grib2 file to use as template.

+
+
infobool

Indicates if you want to print extra information during the process.

+
+
+
+
+ +
+
+to_providentia(model_centre_lon, model_centre_lat, grid_edge_lon, grid_edge_lat)[source]
+

Transform a PointsNes into a PointsNesProvidentia object.

+
+

Returns

+
+
points_nes_providentianes.Nes

Points Nes Providentia Object

+
+
+
+
+ +
+
+ +
+
+

GHOST projection

+
+
+class nes.nc_projections.points_nes_ghost.PointsNesGHOST(comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='X', avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, balanced=False, times=None, **kwargs)[source]
+

Bases: PointsNes

+
+

Attributes

+
+
_qa : dict

Quality flags (GHOST checks) dictionary with the complete ‘data’ key for all the values and the rest of the +attributes.

+
+
_flag : dict

Data flags (given by data provider) dictionary with the complete ‘data’ key for all the values and the rest of +the attributes.

+
+
qa : dict

Quality flags (GHOST checks) dictionary with the portion of ‘data’ corresponding to the rank values.

+
+
flag : dict

Data flags (given by data provider) dictionary with the portion of ‘data’ corresponding to the rank values.

+
+
+
+
+add_variables_to_shapefile(var_list, idx_lev=0, idx_time=0)[source]
+

Add variables data to shapefile.

+
+
var_listlist, str

List (or single string) of the variables to be loaded and saved in the shapefile.

+
+
idx_levint

Index of vertical level for which the data will be saved in the shapefile.

+
+
idx_timeint

Index of time for which the data will be saved in the shapefile.

+
+
+
+ +
+
+erase_flags()[source]
+
+ +
+
+get_standard_metadata(GHOST_version)[source]
+

Get all possible GHOST variables for each version.

+
+

Parameters

+
+
GHOST_versionstr

Version of GHOST file.

+
+
+
+
+

Returns

+
+
metadata_variables[GHOST_version]list

List of metadata variables for a certain GHOST version

+
+
+
+
+ +
+
+static new(comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='X', avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, balanced=False, times=None, **kwargs)[source]
+

Initialize the PointsNesGHOST class.

+
+

Parameters

+
+
comm: MPI.COMM

MPI Communicator.

+
+
path: str

Path to the NetCDF to initialize the object.

+
+
info: bool

Indicates if you want to get reading/writing info.

+
+
dataset: Dataset

NetCDF4-python Dataset to initialize the class.

+
+
xarray: bool:

(Not working) Indicates if you want to use xarray as default.

+
+
parallel_methodstr

Indicates the parallelization method that you want. Default: ‘X’. Accepted values: [‘X’].

+
+
avoid_first_hoursint

Number of hours to remove from first time steps.

+
+
avoid_last_hoursint

Number of hours to remove from last time steps.

+
+
first_levelint

Index of the first level to use.

+
+
last_levelint, None

Index of the last level to use. None if it is the last.

+
+
create_nesbool

Indicates if you want to create the object from scratch (True) or through an existing file.

+
+
balancedbool

Indicates if you want a balanced parallelization or not. Balanced dataset cannot be written in chunking mode.

+
+
timeslist, None

List of times to substitute the current ones while creation.

+
+
+
+
+ +
+
+to_netcdf(path, compression_level=0, serial=False, info=False, chunking=False)[source]
+

Write the netCDF output file.

+
+

Parameters

+
+
pathstr

Path to the output netCDF file.

+
+
compression_levelint

Level of compression (0 to 9) Default: 0 (no compression).

+
+
serialbool

Indicates if you want to write in serial or not. Default: False.

+
+
infobool

Indicates if you want to print the information of each writing step to stdout. Default: False.

+
+
chunkingbool

Indicates if you want a chunked netCDF output. Only available with non-serial writes. Default: False.

+
+
+
+
+ +
+
+to_points()[source]
+

Transform a PointsNesGHOST into a PointsNes object.

+
+

Returns

+
+
points_nesnes.Nes

Points Nes Object (without GHOST metadata variables)

+
+
+
+
+ +
+
+ +
+
+

Providentia projection

+
+
+class nes.nc_projections.points_nes_providentia.PointsNesProvidentia(comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='X', avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, balanced=False, times=None, model_centre_lon=None, model_centre_lat=None, grid_edge_lon=None, grid_edge_lat=None, **kwargs)[source]
+

Bases: PointsNes

+
+

Attributes

+
+
_model_centre_londict

Model centre longitudes dictionary with the complete ‘data’ key for all the values and the rest of the +attributes.

+
+
_model_centre_latdict

Model centre latitudes dictionary with the complete ‘data’ key for all the values and the rest of the +attributes.

+
+
_grid_edge_londict

Grid edge longitudes dictionary with the complete ‘data’ key for all the values and the rest of the +attributes.

+
+
_grid_edge_latdict

Grid edge latitudes dictionary with the complete ‘data’ key for all the values and the rest of the +attributes.

+
+
model_centre_londict

Model centre longitudes dictionary with the portion of ‘data’ corresponding to the rank values.

+
+
model_centre_latdict

Model centre latitudes dictionary with the portion of ‘data’ corresponding to the rank values.

+
+
grid_edge_londict

Grid edge longitudes dictionary with the portion of ‘data’ corresponding to the rank values.

+
+
grid_edge_latdict

Grid edge latitudes dictionary with the portion of ‘data’ corresponding to the rank values.

+
+
+
+
+add_variables_to_shapefile(var_list, idx_lev=0, idx_time=0)[source]
+

Add variables data to shapefile.

+
+
var_listlist, str

List (or single string) of the variables to be loaded and saved in the shapefile.

+
+
idx_levint

Index of vertical level for which the data will be saved in the shapefile.

+
+
idx_timeint

Index of time for which the data will be saved in the shapefile.

+
+
+
+ +
+
+static new(comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='X', avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, balanced=False, times=None, model_centre_lon=None, model_centre_lat=None, grid_edge_lon=None, grid_edge_lat=None, **kwargs)[source]
+

Initialize the PointsNesProvidentia class.

+
+

Parameters

+
+
comm: MPI.COMM

MPI Communicator.

+
+
path: str

Path to the NetCDF to initialize the object.

+
+
info: bool

Indicates if you want to get reading/writing info.

+
+
dataset: Dataset

NetCDF4-python Dataset to initialize the class.

+
+
xarray: bool:

(Not working) Indicates if you want to use xarray as default.

+
+
parallel_methodstr

Indicates the parallelization method that you want. Default: ‘X’. Accepted values: [‘X’].

+
+
avoid_first_hoursint

Number of hours to remove from first time steps.

+
+
avoid_last_hoursint

Number of hours to remove from last time steps.

+
+
first_levelint

Index of the first level to use

+
+
last_levelint, None

Index of the last level to use. None if it is the last.

+
+
balancedbool

Indicates if you want a balanced parallelization or not. Balanced dataset cannot be written in chunking mode.

+
+
timeslist, None

List of times to substitute the current ones while creation.

+
+
create_nesbool

Indicates if you want to create the object from scratch (True) or through an existing file.

+
+
model_centre_londict

Model centre longitudes dictionary with the portion of ‘data’ corresponding to the rank values.

+
+
model_centre_latdict

Model centre latitudes dictionary with the portion of ‘data’ corresponding to the rank values.

+
+
grid_edge_londict

Grid edge longitudes dictionary with the portion of ‘data’ corresponding to the rank values.

+
+
grid_edge_latdict

Grid edge latitudes dictionary with the portion of ‘data’ corresponding to the rank values.

+
+
+
+
+ +
+
+to_netcdf(path, compression_level=0, serial=False, info=False, chunking=False)[source]
+

Write the netCDF output file.

+
+

Parameters

+
+
pathstr

Path to the output netCDF file.

+
+
compression_levelint

Level of compression (0 to 9) Default: 0 (no compression).

+
+
serialbool

Indicates if you want to write in serial or not. Default: False.

+
+
infobool

Indicates if you want to print the information of each writing step to stdout. Default: False.

+
+
chunkingbool

Indicates if you want a chunked netCDF output. Only available with non-serial writes. Default: False.

+
+
+
+
+ +
+
+ +
+
+

Rotated projection

+
+
+class nes.nc_projections.rotated_nes.RotatedNes(comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='Y', avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, balanced=False, times=None, **kwargs)[source]
+

Bases: Nes

+
+

Attributes

+
+
_rlatdict

Rotated latitudes dictionary with the complete ‘data’ key for all the values and the rest of the attributes.

+
+
_rlondict

Rotated longitudes dictionary with the complete ‘data’ key for all the values and the rest of the attributes.

+
+
rlatdict

Rotated latitudes dictionary with the portion of ‘data’ corresponding to the rank values.

+
+
rlondict

Rotated longitudes dictionary with the portion of ‘data’ corresponding to the rank values.

+
+
_var_dimtuple

Tuple with the name of the Y and X dimensions for the variables. +(‘rlat’, ‘rlon’) for a rotated projection.

+
+
_lat_dimtuple

Tuple with the name of the dimensions of the Latitude values. +(‘rlat’, ‘rlon’) for a rotated projection.

+
+
_lon_dimtuple

Tuple with the name of the dimensions of the Longitude values. +(‘rlat’, ‘rlon’) for a rotated projection.

+
+
+
+
+create_providentia_exp_centre_coordinates()[source]
+

Calculate centre latitudes and longitudes from original coordinates and store as 2D arrays.

+
+

Returns

+
+
model_centre_latdict

Dictionary with data of centre coordinates for latitude in 2D (latitude, longitude).

+
+
model_centre_londict

Dictionary with data of centre coordinates for longitude in 2D (latitude, longitude).

+
+
+
+
+ +
+
+create_providentia_exp_grid_edge_coordinates()[source]
+

Calculate grid edge latitudes and longitudes and get model grid outline.

+
+

Returns

+
+
grid_edge_latdict

Dictionary with data of grid edge latitudes.

+
+
grid_edge_londict

Dictionary with data of grid edge longitudes.

+
+
+
+
+ +
+
+create_shapefile()[source]
+

Create spatial geodataframe (shapefile).

+
+

Returns

+
+
shapefileGeoPandasDataFrame

Shapefile dataframe.

+
+
+
+
+ +
+
+create_spatial_bounds()[source]
+

Calculate longitude and latitude bounds and set them.

+
+ +
+
+filter_coordinates_selection()[source]
+

Use the selection limits to filter rlat, rlon, time, lev, lat, lon, lon_bnds and lat_bnds.

+
+ +
+
+get_centroids_from_coordinates()[source]
+

Get centroids from geographical coordinates.

+
+

Returns

+
+
centroids_gdf: GeoPandasDataFrame

Centroids dataframe.

+
+
+
+
+ +
+
+static new(comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='Y', avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, balanced=False, times=None, **kwargs)[source]
+

Initialize the Nes class.

+
+

Parameters

+
+
comm: MPI.COMM

MPI Communicator.

+
+
path: str

Path to the NetCDF to initialize the object.

+
+
info: bool

Indicates if you want to get reading/writing info.

+
+
dataset: Dataset

NetCDF4-python Dataset to initialize the class.

+
+
xarray: bool:

(Not working) Indicates if you want to use xarray as default.

+
+
parallel_methodstr

Indicates the parallelization method that you want. Default: ‘Y’. Accepted values: [‘X’, ‘Y’, ‘T’].

+
+
avoid_first_hoursint

Number of hours to remove from first time steps.

+
+
avoid_last_hoursint

Number of hours to remove from last time steps.

+
+
create_nesbool

Indicates if you want to create the object from scratch (True) or through an existing file.

+
+
balancedbool

Indicates if you want a balanced parallelization or not. Balanced dataset cannot be written in chunking mode.

+
+
timeslist, None

List of times to substitute the current ones while creation.

+
+
+
+
+ +
+
+rotated2latlon(lon_deg, lat_deg, lon_min=-180)[source]
+

Calculate the unrotated coordinates using the rotated ones.

+
+

Parameters

+
+
lon_degnumpy.array

Rotated longitude coordinate.

+
+
lat_degnumpy.array

Rotated latitude coordinate.

+
+
lon_minfloat

Minimum value for the longitudes: -180 (-180 to 180) or 0 (0 to 360).

+
+
+
+
+

Returns

+
+
almdnumpy.array

Unrotated longitudes.

+
+
aphdnumpy.array

Unrotated latitudes.

+
+
+
+
+ +
+
to_grib2(path, grib_keys, grib_template_path, lat_flip=False, info=False)

Write the output file in grib2 format.

Parameters

path : str
Path to the output file.

grib_keys : dict
Dictionary with the grib2 keys.

grib_template_path : str
Path to the grib2 file to use as a template.

info : bool
Indicates if you want to print extra information during the process.

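A schematic call with placeholder paths; the grib_keys entries are illustrative ecCodes-style keys, and the set a given template actually needs depends on the GRIB edition and the producing centre's conventions:

# nessy is an opened NES object with its data loaded.
grib_keys = {
    'centre': 98,              # illustrative key/value; match your template
    'typeOfLevel': 'surface',  # illustrative key/value; match your data
}

nessy.to_grib2(path='output.grb2',
               grib_keys=grib_keys,
               grib_template_path='template.grb2',
               lat_flip=False,
               info=True)
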

Rotated nested projection

class nes.nc_projections.rotated_nested_nes.RotatedNestedNes(comm=None, path=None, info=False, dataset=None, xarray=False, parallel_method='Y', avoid_first_hours=0, avoid_last_hours=0, first_level=0, last_level=None, create_nes=False, balanced=False, times=None, **kwargs)

Bases: RotatedNes

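Since RotatedNestedNes only specializes RotatedNes, a sketch of direct construction with the documented class signature is enough; the path is a placeholder, and whether open_netcdf dispatches to this class automatically is not stated in these docs:

from nes.nc_projections.rotated_nested_nes import RotatedNestedNes

# Same documented arguments as RotatedNes.
nested = RotatedNestedNes(path='rotated_nested_grid.nc',
                          parallel_method='Y')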
diff --git a/docs/build/html/py-modindex.html b/docs/build/html/py-modindex.html
new file mode 100644
index 0000000000000000000000000000000000000000..450db1f1e2dbbff443d47566cc44df084d4b7a37
--- /dev/null
+++ b/docs/build/html/py-modindex.html

Python Module Index

nes
    nes.create_nes
    nes.load_nes
    nes.methods.cell_measures
    nes.methods.horizontal_interpolation
    nes.methods.spatial_join
    nes.methods.vertical_interpolation
    nes.nc_projections.default_nes
    nes.nc_projections.latlon_nes
    nes.nc_projections.lcc_nes
    nes.nc_projections.mercator_nes
    nes.nc_projections.points_nes
    nes.nc_projections.points_nes_ghost
    nes.nc_projections.points_nes_providentia
    nes.nc_projections.rotated_nes
    nes.nc_projections.rotated_nested_nes
    nes.nes_formats.cams_ra_format
    nes.nes_formats.cmaq_format
    nes.nes_formats.monarch_format
    nes.nes_formats.wrf_chem_format
© Copyright 2023, Carles Tena Medina, Alba Vilanova Cortezon.

diff --git a/docs/build/html/readme.html b/docs/build/html/readme.html
new file mode 100644
index 0000000000000000000000000000000000000000..99d9aebc1a3eccabca465cc8719d67b935310ca2
--- /dev/null
+++ b/docs/build/html/readme.html

Introduction

About

NES (NetCDF for Earth Science) is the Python I/O library used by SNES, the framework that implements the data post-processing pipelines at the Earth Sciences department, to read and write netCDF files.

How to clone

Use the following command to get a copy of the repository:

git clone https://earth.bsc.es/gitlab/es/NES.git

You can use the latest stable version of NES by accessing the production branch:

git checkout production

You can also access the master branch to test new features that are to be implemented in the upcoming release:

git checkout master

How to run


To run NES, please follow the instructions in the wiki.

diff --git a/docs/build/html/search.html b/docs/build/html/search.html
new file mode 100644
index 0000000000000000000000000000000000000000..f2e9839e80ce51d3daea5958b6b773e3bf2234da
--- /dev/null
+++ b/docs/build/html/search.html
diff --git a/docs/build/html/searchindex.js b/docs/build/html/searchindex.js
new file mode 100644
index 0000000000000000000000000000000000000000..a6d1152be16ee2bd259a5cf552740aba5fba54e3
--- /dev/null
+++ b/docs/build/html/searchindex.js
@@ -0,0 +1 @@
+Search.setIndex({...})
diff --git a/docs/make.bat b/docs/make.bat
new file mode 100644
index 0000000000000000000000000000000000000000..747ffb7b3033659bdd2d1e6eae41ecb00358a45e
--- /dev/null
+++ b/docs/make.bat
@@ -0,0 +1,35 @@
+@ECHO OFF
+
+pushd %~dp0
+
+REM Command file for Sphinx documentation
+
+if "%SPHINXBUILD%" == "" (
+	set SPHINXBUILD=sphinx-build
+)
+set SOURCEDIR=source
+set BUILDDIR=build
+
+%SPHINXBUILD% >NUL 2>NUL
+if errorlevel 9009 (
+	echo.
+	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
+	echo.installed, then set the SPHINXBUILD environment variable to point
+	echo.to the full path of the 'sphinx-build' executable. Alternatively you
+	echo.may add the Sphinx directory to PATH.
+	echo.
+	echo.If you don't have Sphinx installed, grab it from
+	echo.https://www.sphinx-doc.org/
+	exit /b 1
+)
+
+if "%1" == "" goto help
+
+%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
+goto end
+
+:help
+%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
+
+:end
+popd
diff --git a/docs/source/authors.rst b/docs/source/authors.rst
new file mode 100644
index 0000000000000000000000000000000000000000..29b236c46b765dceabbbb656f3375a05ea384c35
--- /dev/null
+++ b/docs/source/authors.rst
@@ -0,0 +1,9 @@
+=======
+Authors
+=======
+
+* Carles Tena (`ctwebpage`_)
+* Alba Vilanova Cortezón (`avcwebpage`_)
+
+.. _ctwebpage: https://www.bsc.es/tena-carles
+.. _avcwebpage: https://www.bsc.es/vilanova-cortezon-alba
diff --git a/docs/source/changelog.rst b/docs/source/changelog.rst
new file mode 100644
index 0000000000000000000000000000000000000000..4478cf9307b17bce6359103f6b77d45ee1a712ae
--- /dev/null
+++ b/docs/source/changelog.rst
@@ -0,0 +1,6 @@
+=========
+CHANGELOG
+=========
+
+.. include:: ../../CHANGELOG.rst
+   :start-after: .. start-here
diff --git a/docs/source/conf.py b/docs/source/conf.py
new file mode 100644
index 0000000000000000000000000000000000000000..34d35d6df54a52ff66252bc294343292b322609d
--- /dev/null
+++ b/docs/source/conf.py
@@ -0,0 +1,53 @@
+# Configuration file for the Sphinx documentation builder.
+#
+# This file only contains a selection of the most common options. For a full
+# list see the documentation:
+# https://www.sphinx-doc.org/en/master/usage/configuration.html
+
+# -- Path setup --------------------------------------------------------------
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+import os
+import sys
+sys.path.insert(0, os.path.abspath('../..'))
+sys.path.insert(0, os.path.abspath('../../nes'))
+
+# -- Project information -----------------------------------------------------
+
+project = 'NES'
+copyright = '2023, Carles Tena Medina, Alba Vilanova Cortezon'
+author = 'Carles Tena Medina, Alba Vilanova Cortezon'
+
+# The full version, including alpha/beta/rc tags
+release = '1.1.3'
+
+
+# -- General configuration ---------------------------------------------------
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = ["sphinx.ext.todo", "sphinx.ext.viewcode", "sphinx.ext.autodoc", "sphinx_rtd_theme"]
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This pattern also affects html_static_path and html_extra_path.
+exclude_patterns = []
+
+
+# -- Options for HTML output -------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+#
+html_theme = 'sphinx_rtd_theme'
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ['_static']
diff --git a/docs/source/contributing.rst b/docs/source/contributing.rst
new file mode 100644
index 0000000000000000000000000000000000000000..52feb0790289b0e29b3b94a953b2802360243a99
--- /dev/null
+++ b/docs/source/contributing.rst
@@ -0,0 +1,6 @@
+============
+Contributing
+============
+
+.. include:: ../../CONTRIBUTING.rst
+   :start-after: .. start-here
diff --git a/docs/source/formats.rst b/docs/source/formats.rst
new file mode 100644
index 0000000000000000000000000000000000000000..5b99a69f94b2f06f75ff90493b9ef817b5ce36fb
--- /dev/null
+++ b/docs/source/formats.rst
@@ -0,0 +1,34 @@
+Formats
+========================
+
+CAMS RA format
+----------------------------------------
+
+.. automodule:: nes.nes_formats.cams_ra_format
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+CMAQ format
+------------------------------------
+
+.. automodule:: nes.nes_formats.cmaq_format
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+MONARCH format
+---------------------------------------
+
+.. automodule:: nes.nes_formats.monarch_format
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+WRF CHEM format
+-----------------------------------------
+
+.. automodule:: nes.nes_formats.wrf_chem_format
+    :members:
+    :undoc-members:
+    :show-inheritance:
diff --git a/docs/source/index.rst b/docs/source/index.rst
new file mode 100644
index 0000000000000000000000000000000000000000..db32bc2a7549d32b90b2337477c0f6b2dd985fec
--- /dev/null
+++ b/docs/source/index.rst
@@ -0,0 +1,15 @@
+========
+Contents
+========
+
+.. toctree::
+   :maxdepth: 2
+
+   readme
+   object
+   methods
+   formats
+   projections
+   contributing
+   changelog
+   authors
diff --git a/docs/source/methods.rst b/docs/source/methods.rst
new file mode 100644
index 0000000000000000000000000000000000000000..095aa39c09283b01266c18793cf50467da148f37
--- /dev/null
+++ b/docs/source/methods.rst
@@ -0,0 +1,34 @@
+Methods
+===================
+
+Generic methods
+---------------------------------
+
+.. automodule:: nes.methods.cell_measures
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+Horizontal interpolation
+--------------------------------------------
+
+.. automodule:: nes.methods.horizontal_interpolation
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+Spatial join
+--------------------------------
+
+.. automodule:: nes.methods.spatial_join
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+Vertical interpolation
+------------------------------------------
+
+.. automodule:: nes.methods.vertical_interpolation
+    :members:
+    :undoc-members:
+    :show-inheritance:
diff --git a/docs/source/object.rst b/docs/source/object.rst
new file mode 100644
index 0000000000000000000000000000000000000000..0a52b3340399edb0fbc35fc3dc6ed457b8d3bd7d
--- /dev/null
+++ b/docs/source/object.rst
@@ -0,0 +1,18 @@
+The NES object
+==============
+
+Creating a NES object
+----------------------
+
+.. automodule:: nes.create_nes
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+Loading a NES object
+--------------------
+
+..
automodule:: nes.load_nes + :members: + :undoc-members: + :show-inheritance: diff --git a/docs/source/projections.rst b/docs/source/projections.rst new file mode 100644 index 0000000000000000000000000000000000000000..07dd376e389fd536f53b907407f51a453c0b97c2 --- /dev/null +++ b/docs/source/projections.rst @@ -0,0 +1,74 @@ +Projections +=========================== + +Default projection +--------------------------------------- + +.. automodule:: nes.nc_projections.default_nes + :members: + :undoc-members: + :show-inheritance: + +Regular lat-lon projection +-------------------------------------- + +.. automodule:: nes.nc_projections.latlon_nes + :members: + :undoc-members: + :show-inheritance: + +LCC projection +----------------------------------- + +.. automodule:: nes.nc_projections.lcc_nes + :members: + :undoc-members: + :show-inheritance: + +Mercator projection +---------------------------------------- + +.. automodule:: nes.nc_projections.mercator_nes + :members: + :undoc-members: + :show-inheritance: + +Points projection +-------------------------------------- + +.. automodule:: nes.nc_projections.points_nes + :members: + :undoc-members: + :show-inheritance: + +GHOST projection +--------------------------------------------- + +.. automodule:: nes.nc_projections.points_nes_ghost + :members: + :undoc-members: + :show-inheritance: + +Providentia projection +--------------------------------------------------- + +.. automodule:: nes.nc_projections.points_nes_providentia + :members: + :undoc-members: + :show-inheritance: + +Rotated projection +--------------------------------------- + +.. automodule:: nes.nc_projections.rotated_nes + :members: + :undoc-members: + :show-inheritance: + +Rotated nested projection +----------------------------------------------- + +.. automodule:: nes.nc_projections.rotated_nested_nes + :members: + :undoc-members: + :show-inheritance: diff --git a/docs/source/readme.rst b/docs/source/readme.rst new file mode 100644 index 0000000000000000000000000000000000000000..c26dea0d974fb0f68eb2ea444c2cda0557e5adc0 --- /dev/null +++ b/docs/source/readme.rst @@ -0,0 +1,6 @@ +============ +Introduction +============ + +.. include:: ../../README.rst + :start-after: .. start-here \ No newline at end of file
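A minimal sketch of building the documentation added above, assuming Sphinx and sphinx-rtd-theme are installed (both are pinned in ``requirements.txt`` later in this diff): from the ``docs`` directory, running ``sphinx-build -M html source build`` mirrors the invocation in ``docs/make.bat`` and leaves the rendered HTML under ``docs/build/html``.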
diff --git a/nes/nc_projections/default_nes.py b/nes/nc_projections/default_nes.py index 1aad7f7d666106b9a1cc0841cccf0d4ba1dc1331..877f319479db63dc22c5e42c3b058bdeed678a23 100644 --- a/nes/nc_projections/default_nes.py +++ b/nes/nc_projections/default_nes.py @@ -5,6 +5,7 @@ import gc import warnings import numpy as np import pandas as pd +from datetime import timedelta from xarray import open_dataset from netCDF4 import Dataset, num2date, date2num, stringtochar from mpi4py import MPI @@ -1059,7 +1060,7 @@ class Nes(object): """ if self.parallel_method == 'T': - raise NotImplementedError("Statistics are not implemented on time axis paralelitation method.") + raise NotImplementedError("Statistics are not implemented for the time axis parallel method.") time_interval = self.get_time_interval() if type_op == 'calendar': aux_time_bounds = [] @@ -1163,6 +1164,7 @@ class Nes(object): @staticmethod def _get_axis_index_(axis): + if axis == 'T': value = 0 elif axis == 'Z': value = 1 @@ -1173,9 +1175,11 @@ value = 3 else: raise ValueError("Unknown axis: {0}".format(axis)) + return value def sum_axis(self, axis='Z'): + if self.parallel_method == axis: raise NotImplementedError("It is not possible to sum over the parallelized axis '{0}'".format( self.parallel_method)) @@ -1195,8 +1199,93 @@ if axis == 'Z': self.lev['data'] = [self.lev['data'][0]] self._lev['data'] = [self._lev['data'][0]] + return None + + def find_time_id(self, time): + """ + Find the index of a given time in the time array. + + Parameters + ---------- + time : datetime.datetime + Time element. + + Returns + ------- + int or None + Index of the time element, or None if the time is not present. + """ + + if time in self.time: + return self.time.index(time) + + def rolling_mean(self, var_list=None, hours=8): + """ + Calculate the rolling mean over a given number of hours. + + Parameters + ---------- + var_list : List, str or None + List (or single string) of the variables to be loaded. 
+ hours : int, optional + Number of hours in the rolling window, 8 by default. + + Returns + ------- + Nes + New Nes object containing the rolling means. + """ + + if self.parallel_method == 'T': + raise NotImplementedError("The rolling mean cannot be calculated using the time axis parallel method.") + + aux_nessy = self.copy(copy_vars=False) + aux_nessy.set_communicator(self.comm) + + if isinstance(var_list, str): + var_list = [var_list] + elif var_list is None: + var_list = list(self.variables.keys()) + + for var_name in var_list: + # Load variables if they have not been loaded previously + if self.variables[var_name]['data'] is None: + self.load(var_name) + + # Get original file shape + nessy_shape = self.variables[var_name]['data'].shape + + # Initialise array + aux_nessy.variables[var_name] = {} + aux_nessy.variables[var_name]['data'] = np.empty(shape=nessy_shape) + aux_nessy.variables[var_name]['dimensions'] = deepcopy(self.variables[var_name]['dimensions']) + + for curr_time in self.time: + # Get previous time given a set of hours + prev_time = curr_time - timedelta(hours=(hours-1)) + + # Get time indices + curr_time_id = self.find_time_id(curr_time) + prev_time_id = self.find_time_id(prev_time) + + # Get mean if previous time is available + if prev_time_id is not None: + if self.info: + print(f'Calculating mean between {prev_time} and {curr_time}.') + # Include the current time step so that the window spans 'hours' time steps + aux_nessy.variables[var_name]['data'][curr_time_id, :, :, :] = self.variables[var_name]['data'][ + prev_time_id:curr_time_id + 1, :, :, :].mean(axis=0, keepdims=True) + # Fill with NaN if previous time is not available + else: + if self.info: + msg = f'Mean between {prev_time} and {curr_time} cannot be calculated ' + msg += f'because data for {prev_time} is not available.' + print(msg) + aux_nessy.variables[var_name]['data'][curr_time_id, :, :, :] = np.full( + shape=(1, nessy_shape[1], nessy_shape[2], nessy_shape[3]), fill_value=np.nan) + + return aux_nessy +
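+ # Minimal usage sketch (the input path and output file name below are hypothetical; the calls + # follow the pattern of tests/4.1-test_stats.py): + # + # from nes import open_netcdf + # nessy = open_netcdf(path='hourly_data.nc', info=True) + # rolling = nessy.rolling_mean(var_list='O3', hours=8) # returns a new Nes object + # rolling.to_netcdf('O3_rolling_8h.nc') # the first (hours - 1) time steps are NaN +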
""" - from datetime import timedelta - if first: idx = self._time.index(self._time[0] + timedelta(hours=hours)) else: diff --git a/requirements.txt b/requirements.txt index a40856e17a26c8879f41f335d629cc759a16e843..1d674e35b979cf32c8bb57d8ffac3a9b048dcdcd 100644 --- a/requirements.txt +++ b/requirements.txt @@ -13,4 +13,7 @@ filelock>=3.9.0 eccodes-python~=0.9.5 cfunits>=3.3.5 xarray>=0.20.2 -mpi4py>=3.1.4 \ No newline at end of file +mpi4py>=3.1.4 +sphinx>=7.2.6 +sphinx-rtd-theme==2.0.0 +psutil>=5.9.6 \ No newline at end of file diff --git a/tests/4.1-test_daily_stats.py b/tests/4.1-test_stats.py similarity index 54% rename from tests/4.1-test_daily_stats.py rename to tests/4.1-test_stats.py index a32ed62482141a1cbdcded7033b267c14d0a0b1f..0392d9cfadda5d7ba209e31d601912c06b504762 100644 --- a/tests/4.1-test_daily_stats.py +++ b/tests/4.1-test_stats.py @@ -17,7 +17,7 @@ result = pd.DataFrame(index=['read', 'calculate', 'write'], columns=['4.1.1.Mean']) # ====================================================================================================================== -# ================================================== CALCULATE MEAN ================================================== +# ============================================== CALCULATE DAILY MEAN ================================================ # ====================================================================================================================== test_name = '4.1.1.Mean' @@ -56,6 +56,42 @@ if rank == 0: print(result.loc[:, test_name]) sys.stdout.flush() +# ====================================================================================================================== +# ========================================== CALCULATE 8-HOUR ROLLING MEAN =========================================== +# ====================================================================================================================== + +test_name = '4.1.2.Rolling_Mean' +if rank == 0: + print(test_name) + +# Original path: /esarchive/exp/monarch/a4dd/original_files/000/2022111512/MONARCH_d01_2022111512.nc +# Rotated grid from MONARCH +cams_file = '/gpfs/projects/bsc32/models/NES_tutorial_data/MONARCH_d01_2022111512.nc' + +# READ +st_time = timeit.default_timer() +nessy = open_netcdf(path=cams_file, info=True, parallel_method=parallel_method) +comm.Barrier() +result.loc['read', test_name] = timeit.default_timer() - st_time + +# CALCULATE MEAN +st_time = timeit.default_timer() +rolling_mean = nessy.rolling_mean(var_list='O3', hours=8) +print(rolling_mean.variables['O3']['data']) +comm.Barrier() +result.loc['calculate', test_name] = timeit.default_timer() - st_time + +# WRITE +st_time = timeit.default_timer() +nessy.to_netcdf(test_name.replace(' ', '_') + "_{0:03d}.nc".format(size)) +comm.Barrier() +result.loc['write', test_name] = timeit.default_timer() - st_time + +comm.Barrier() +if rank == 0: + print(result.loc[:, test_name]) +sys.stdout.flush() + if rank == 0: result.to_csv(result_path) print("TEST PASSED SUCCESSFULLY!!!!!") diff --git a/tests/run_scalability_tests_nord3v2.sh b/tests/run_scalability_tests_nord3v2.sh index 0bd1291e9bdadc06e5a0c847a195b7d3761e9087..4c28785985d815926b57f6721396988578a4210d 100644 --- a/tests/run_scalability_tests_nord3v2.sh +++ b/tests/run_scalability_tests_nord3v2.sh @@ -8,7 +8,7 @@ module load Python/3.7.4-GCCcore-8.3.0 module load NES/1.1.3-nord3-v2-foss-2019b-Python-3.7.4 -for EXE in "1.1-test_read_write_projection.py" "1.2-test_create_projection.py" "1.3-test_selecting.py" "2.1-test_spatial_join.py" 
"2.2-test_create_shapefile.py" "2.3-test_bounds.py" "2.4-test_cell_area.py" "3.1-test_vertical_interp.py" "3.2-test_horiz_interp_bilinear.py" "3.3-test_horiz_interp_conservative.py" "4.1-test_daily_stats.py" "4.2-test_sum.py" "4.3-test_write_timestep.py" +for EXE in "1.1-test_read_write_projection.py" "1.2-test_create_projection.py" "1.3-test_selecting.py" "2.1-test_spatial_join.py" "2.2-test_create_shapefile.py" "2.3-test_bounds.py" "2.4-test_cell_area.py" "3.1-test_vertical_interp.py" "3.2-test_horiz_interp_bilinear.py" "3.3-test_horiz_interp_conservative.py" "4.1-test_stats.py" "4.2-test_sum.py" "4.3-test_write_timestep.py" do for nprocs in 1 2 4 8 16 do diff --git a/tests/test_bash_mn4.cmd b/tests/test_bash_mn4.cmd index a7e8c73fb79778a509dc9431f05fab22ccc66e8c..5edea6722bf95d090ce5f5bbfa5e330e1c1bfbd0 100644 --- a/tests/test_bash_mn4.cmd +++ b/tests/test_bash_mn4.cmd @@ -33,6 +33,6 @@ mpirun --mca mpi_warn_on_fork 0 -np 4 python 3.1-test_vertical_interp.py mpirun --mca mpi_warn_on_fork 0 -np 4 python 3.2-test_horiz_interp_bilinear.py mpirun --mca mpi_warn_on_fork 0 -np 4 python 3.3-test_horiz_interp_conservative.py -mpirun --mca mpi_warn_on_fork 0 -np 4 python 4.1-test_daily_stats.py +mpirun --mca mpi_warn_on_fork 0 -np 4 python 4.1-test_stats.py mpirun --mca mpi_warn_on_fork 0 -np 4 python 4.2-test_sum.py mpirun --mca mpi_warn_on_fork 0 -np 4 python 4.3-test_write_timestep.py diff --git a/tests/test_bash_nord3v2.cmd b/tests/test_bash_nord3v2.cmd index f66a17bc1903d7b87269d9fd1ab4b4625134f55d..ed5815320228191ac6456f8e6d97354b94dfe9cc 100644 --- a/tests/test_bash_nord3v2.cmd +++ b/tests/test_bash_nord3v2.cmd @@ -31,6 +31,6 @@ mpirun --mca mpi_warn_on_fork 0 -np 4 python 3.1-test_vertical_interp.py mpirun --mca mpi_warn_on_fork 0 -np 4 python 3.2-test_horiz_interp_bilinear.py mpirun --mca mpi_warn_on_fork 0 -np 4 python 3.3-test_horiz_interp_conservative.py -mpirun --mca mpi_warn_on_fork 0 -np 4 python 4.1-test_daily_stats.py +mpirun --mca mpi_warn_on_fork 0 -np 4 python 4.1-test_stats.py mpirun --mca mpi_warn_on_fork 0 -np 4 python 4.2-test_sum.py mpirun --mca mpi_warn_on_fork 0 -np 4 python 4.3-test_write_timestep.py \ No newline at end of file diff --git a/tutorials/3.Statistics/3.1.Statistics.ipynb b/tutorials/3.Statistics/3.1.Statistics.ipynb index aaf0e367570ba0e9803e290cdedd64d769b42f1b..1c47ff157e84756cbba48bcadb4c41a8f7cdc65a 100644 --- a/tutorials/3.Statistics/3.1.Statistics.ipynb +++ b/tutorials/3.Statistics/3.1.Statistics.ipynb @@ -4,7 +4,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Calculate daily statistics" + "# Calculate statistics" ] }, { @@ -13,7 +13,10 @@ "metadata": {}, "outputs": [], "source": [ - "from nes import *" + "from nes import *\n", + "from datetime import datetime, timedelta\n", + "import numpy as np\n", + "import xarray as xr" ] }, { @@ -32,8 +35,8 @@ "name": "stdout", "output_type": "stream", "text": [ - "CPU times: user 492 ms, sys: 18.7 ms, total: 511 ms\n", - "Wall time: 790 ms\n" + "CPU times: user 587 ms, sys: 65.8 ms, total: 653 ms\n", + "Wall time: 5.16 s\n" ] } ], @@ -92,7 +95,7 @@ } ], "source": [ - "# Selecting only one variable and descarting the rest.\n", + "# Selecting only one variable and dismiss the rest\n", "nessy.keep_vars('O3')\n", "nessy.variables" ] @@ -115,8 +118,8 @@ "text": [ "Rank 000: Loading O3 var (1/1)\n", "Rank 000: Loaded O3 var ((37, 24, 271, 351))\n", - "CPU times: user 276 ms, sys: 1.44 s, total: 1.72 s\n", - "Wall time: 4.43 s\n" + "CPU times: user 294 ms, sys: 1.49 s, total: 1.78 s\n", + "Wall time: 
8.31 s\n" ] } ], @@ -169,7 +172,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## 2. Calculate daily statistics" + "## 2. Calculate 8-hour rolling mean" ] }, { @@ -181,13 +184,61 @@ "name": "stdout", "output_type": "stream", "text": [ - "CPU times: user 52.3 ms, sys: 12.6 ms, total: 64.9 ms\n", - "Wall time: 64.8 ms\n" + "Mean between 2022-11-15 05:00:00 and 2022-11-15 12:00:00 cannot be calculated because data for 2022-11-15 05:00:00 is not available.\n", + "Mean between 2022-11-15 06:00:00 and 2022-11-15 13:00:00 cannot be calculated because data for 2022-11-15 06:00:00 is not available.\n", + "Mean between 2022-11-15 07:00:00 and 2022-11-15 14:00:00 cannot be calculated because data for 2022-11-15 07:00:00 is not available.\n", + "Mean between 2022-11-15 08:00:00 and 2022-11-15 15:00:00 cannot be calculated because data for 2022-11-15 08:00:00 is not available.\n", + "Mean between 2022-11-15 09:00:00 and 2022-11-15 16:00:00 cannot be calculated because data for 2022-11-15 09:00:00 is not available.\n", + "Mean between 2022-11-15 10:00:00 and 2022-11-15 17:00:00 cannot be calculated because data for 2022-11-15 10:00:00 is not available.\n", + "Mean between 2022-11-15 11:00:00 and 2022-11-15 18:00:00 cannot be calculated because data for 2022-11-15 11:00:00 is not available.\n", + "Calculating mean between 2022-11-15 12:00:00 and 2022-11-15 19:00:00.\n", + "Calculating mean between 2022-11-15 13:00:00 and 2022-11-15 20:00:00.\n", + "Calculating mean between 2022-11-15 14:00:00 and 2022-11-15 21:00:00.\n", + "Calculating mean between 2022-11-15 15:00:00 and 2022-11-15 22:00:00.\n", + "Calculating mean between 2022-11-15 16:00:00 and 2022-11-15 23:00:00.\n", + "Calculating mean between 2022-11-15 17:00:00 and 2022-11-16 00:00:00.\n", + "Calculating mean between 2022-11-15 18:00:00 and 2022-11-16 01:00:00.\n", + "Calculating mean between 2022-11-15 19:00:00 and 2022-11-16 02:00:00.\n", + "Calculating mean between 2022-11-15 20:00:00 and 2022-11-16 03:00:00.\n", + "Calculating mean between 2022-11-15 21:00:00 and 2022-11-16 04:00:00.\n", + "Calculating mean between 2022-11-15 22:00:00 and 2022-11-16 05:00:00.\n", + "Calculating mean between 2022-11-15 23:00:00 and 2022-11-16 06:00:00.\n", + "Calculating mean between 2022-11-16 00:00:00 and 2022-11-16 07:00:00.\n", + "Calculating mean between 2022-11-16 01:00:00 and 2022-11-16 08:00:00.\n", + "Calculating mean between 2022-11-16 02:00:00 and 2022-11-16 09:00:00.\n", + "Calculating mean between 2022-11-16 03:00:00 and 2022-11-16 10:00:00.\n", + "Calculating mean between 2022-11-16 04:00:00 and 2022-11-16 11:00:00.\n", + "Calculating mean between 2022-11-16 05:00:00 and 2022-11-16 12:00:00.\n", + "Calculating mean between 2022-11-16 06:00:00 and 2022-11-16 13:00:00.\n", + "Calculating mean between 2022-11-16 07:00:00 and 2022-11-16 14:00:00.\n", + "Calculating mean between 2022-11-16 08:00:00 and 2022-11-16 15:00:00.\n", + "Calculating mean between 2022-11-16 09:00:00 and 2022-11-16 16:00:00.\n", + "Calculating mean between 2022-11-16 10:00:00 and 2022-11-16 17:00:00.\n", + "Calculating mean between 2022-11-16 11:00:00 and 2022-11-16 18:00:00.\n", + "Calculating mean between 2022-11-16 12:00:00 and 2022-11-16 19:00:00.\n", + "Calculating mean between 2022-11-16 13:00:00 and 2022-11-16 20:00:00.\n", + "Calculating mean between 2022-11-16 14:00:00 and 2022-11-16 21:00:00.\n", + "Calculating mean between 2022-11-16 15:00:00 and 2022-11-16 22:00:00.\n", + "Calculating mean between 2022-11-16 16:00:00 and 2022-11-16 23:00:00.\n", + 
"Calculating mean between 2022-11-16 17:00:00 and 2022-11-17 00:00:00.\n", + "CPU times: user 430 ms, sys: 139 ms, total: 569 ms\n", + "Wall time: 601 ms\n" ] + }, + { + "data": { + "text/plain": [ + "" + ] + }, + "execution_count": 8, + "metadata": {}, + "output_type": "execute_result" } ], "source": [ - "%time nessy.daily_statistic(op=\"mean\")" + "%time rolling_mean = nessy.rolling_mean(var_list='O3', hours=8)\n", + "rolling_mean" ] }, { @@ -198,7 +249,529 @@ { "data": { "text/plain": [ - "(2, 24, 271, 351)" + "array([[[[ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " ...,\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan]],\n", + "\n", + " [[ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " ...,\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan]],\n", + "\n", + " [[ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " ...,\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan]],\n", + "\n", + " ...,\n", + "\n", + " [[ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " ...,\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan]],\n", + "\n", + " [[ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " ...,\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan]],\n", + "\n", + " [[ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " ...,\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan]]],\n", + "\n", + "\n", + " [[[ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " ...,\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan]],\n", + "\n", + " [[ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " ...,\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan]],\n", + "\n", + " [[ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " ...,\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, 
nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan]],\n", + "\n", + " ...,\n", + "\n", + " [[ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " ...,\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan]],\n", + "\n", + " [[ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " ...,\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan]],\n", + "\n", + " [[ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " ...,\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan]]],\n", + "\n", + "\n", + " [[[ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " ...,\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan]],\n", + "\n", + " [[ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " ...,\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan]],\n", + "\n", + " [[ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " ...,\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan]],\n", + "\n", + " ...,\n", + "\n", + " [[ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " ...,\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan]],\n", + "\n", + " [[ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " ...,\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan]],\n", + "\n", + " [[ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " ...,\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan],\n", + " [ nan, nan, nan, ..., nan,\n", + " nan, nan]]],\n", + "\n", + "\n", + " ...,\n", + "\n", + "\n", + " [[[1.29854178, 1.31015837, 1.32398117, ..., 1.13425529,\n", + " 1.13152957, 1.13137448],\n", + " [1.30740237, 0.98123085, 0.65629387, ..., 0.41101581,\n", + " 0.76499593, 1.12861729],\n", + " [1.31658065, 
0.65106899, 0.67550975, ..., 0.3793425 ,\n", + " 0.3888211 , 1.12668192],\n", + " ...,\n", + " [1.87833846, 1.49412072, 1.47343981, ..., 1.04673111,\n", + " 1.05363691, 2.43560791],\n", + " [2.28306627, 1.78498054, 1.4913218 , ..., 1.07455862,\n", + " 1.73233879, 2.28594065],\n", + " [1.87525213, 1.87141323, 1.86758006, ..., 2.58912444,\n", + " 2.51521897, 2.45454621]],\n", + "\n", + " [[0.80840558, 0.81161839, 0.81650686, ..., 0.65809524,\n", + " 0.66410017, 0.67124498],\n", + " [0.81812078, 0.73046905, 0.64704901, ..., 0.40258333,\n", + " 0.52903521, 0.66092545],\n", + " [0.82843119, 0.64508814, 0.6754598 , ..., 0.37896129,\n", + " 0.38853186, 0.65027869],\n", + " ...,\n", + " [1.63060021, 1.48616636, 1.47344553, ..., 1.04691005,\n", + " 1.05505478, 1.82051587],\n", + " [1.84355354, 1.59760475, 1.47839761, ..., 1.0752002 ,\n", + " 1.43221939, 1.74805367],\n", + " [1.5893532 , 1.58230174, 1.57656932, ..., 1.88510585,\n", + " 1.85056806, 1.8281157 ]],\n", + "\n", + " [[0.57815617, 0.59089237, 0.61084896, ..., 0.36410683,\n", + " 0.36854929, 0.37023404],\n", + " [0.59862477, 0.6184451 , 0.64345884, ..., 0.39966965,\n", + " 0.37984779, 0.36916229],\n", + " [0.62249672, 0.64080441, 0.67612237, ..., 0.37880802,\n", + " 0.38201007, 0.36838749],\n", + " ...,\n", + " [1.480425 , 1.47897613, 1.47297132, ..., 1.04691052,\n", + " 1.05192661, 1.27464199],\n", + " [1.49443233, 1.48055923, 1.47238803, ..., 1.0715071 ,\n", + " 1.20996988, 1.43494701],\n", + " [1.48054683, 1.47644019, 1.47224796, ..., 1.29982948,\n", + " 1.28149879, 1.27385354]],\n", + "\n", + " ...,\n", + "\n", + " [[0.02869442, 0.02876731, 0.02885264, ..., 0.05503506,\n", + " 0.0553513 , 0.05504015],\n", + " [0.02860601, 0.03127173, 0.03386573, ..., 0.05458459,\n", + " 0.05476376, 0.05477995],\n", + " [0.02850545, 0.03384789, 0.03440624, ..., 0.05432913,\n", + " 0.0543392 , 0.05448956],\n", + " ...,\n", + " [0.03835884, 0.03774025, 0.037614 , ..., 0.02364007,\n", + " 0.01923269, 0.02035548],\n", + " [0.03778839, 0.03760003, 0.03746041, ..., 0.0233845 ,\n", + " 0.02329169, 0.02662762],\n", + " [0.03775358, 0.03741108, 0.03737034, ..., 0.02648384,\n", + " 0.02392194, 0.02173741]],\n", + "\n", + " [[0.02780337, 0.02775429, 0.02777734, ..., 0.05352789,\n", + " 0.0539673 , 0.05413141],\n", + " [0.02757636, 0.02827121, 0.02910217, ..., 0.0535218 ,\n", + " 0.05365003, 0.05363914],\n", + " [0.02742327, 0.02865203, 0.02856876, ..., 0.05343894,\n", + " 0.05347186, 0.05326553],\n", + " ...,\n", + " [0.0381183 , 0.03760983, 0.03756369, ..., 0.02312979,\n", + " 0.01986529, 0.02278587],\n", + " [0.03753237, 0.03742671, 0.03738815, ..., 0.02330526,\n", + " 0.02444199, 0.02789584],\n", + " [0.0372946 , 0.03717648, 0.03733015, ..., 0.02913875,\n", + " 0.02670157, 0.02392381]],\n", + "\n", + " [[0.02765905, 0.02761666, 0.02764649, ..., 0.05424896,\n", + " 0.05461873, 0.05450705],\n", + " [0.02742626, 0.02520877, 0.02296445, ..., 0.05349676,\n", + " 0.05373689, 0.05357135],\n", + " [0.02727422, 0.02282771, 0.02191793, ..., 0.05307287,\n", + " 0.05326071, 0.05301408],\n", + " ...,\n", + " [0.03779193, 0.03748941, 0.0374724 , ..., 0.02280954,\n", + " 0.01927338, 0.02296998],\n", + " [0.0371562 , 0.03717405, 0.03726939, ..., 0.02313424,\n", + " 0.02427122, 0.02759723],\n", + " [0.03661698, 0.03678117, 0.03718687, ..., 0.02934089,\n", + " 0.02708005, 0.02392631]]],\n", + "\n", + "\n", + " [[[1.28435171, 1.29601693, 1.31016409, ..., 1.14027464,\n", + " 1.13592947, 1.13423944],\n", + " [1.29354775, 0.97237307, 0.6523965 , ..., 0.40809566,\n", + " 0.7662639 , 
1.13241553],\n", + " [1.30334365, 0.64753121, 0.66974783, ..., 0.37949342,\n", + " 0.38861486, 1.13134599],\n", + " ...,\n", + " [1.85838127, 1.4820354 , 1.45864785, ..., 1.01327562,\n", + " 1.02697802, 2.51408505],\n", + " [2.26595736, 1.77126908, 1.47472823, ..., 1.03661191,\n", + " 1.75501657, 2.36603546],\n", + " [1.8653183 , 1.86235523, 1.85948217, ..., 2.65899658,\n", + " 2.59044123, 2.53250265]],\n", + "\n", + " [[0.79513633, 0.79873043, 0.80405778, ..., 0.66278821,\n", + " 0.66856432, 0.6755361 ],\n", + " [0.80611563, 0.72233909, 0.64305466, ..., 0.39966771,\n", + " 0.53061664, 0.66577685],\n", + " [0.81752461, 0.64145547, 0.66963005, ..., 0.37914807,\n", + " 0.38845772, 0.65578979],\n", + " ...,\n", + " [1.58201897, 1.47326183, 1.45865822, ..., 1.01349425,\n", + " 1.02904475, 1.90076125],\n", + " [1.81569266, 1.57050061, 1.46004212, ..., 1.03797972,\n", + " 1.45670402, 1.82893205],\n", + " [1.54080212, 1.53300631, 1.52692723, ..., 1.96561217,\n", + " 1.93085933, 1.906937 ]],\n", + "\n", + " [[0.58091992, 0.59325665, 0.61224467, ..., 0.36490369,\n", + " 0.36917505, 0.37052628],\n", + " [0.6010738 , 0.61771876, 0.63941425, ..., 0.39652291,\n", + " 0.37925759, 0.36956057],\n", + " [0.6244573 , 0.63713026, 0.67028266, ..., 0.37903333,\n", + " 0.38177174, 0.36898488],\n", + " ...,\n", + " [1.47460616, 1.4671011 , 1.45822811, ..., 1.01381707,\n", + " 1.02508008, 1.33224428],\n", + " [1.48315442, 1.46663094, 1.45555806, ..., 1.03329754,\n", + " 1.22145164, 1.48864162],\n", + " [1.46740782, 1.46071029, 1.454193 , ..., 1.35791504,\n", + " 1.33878648, 1.32896316]],\n", + "\n", + " ...,\n", + "\n", + " [[0.02860847, 0.0287193 , 0.02884021, ..., 0.0552229 ,\n", + " 0.05554568, 0.05520093],\n", + " [0.02848911, 0.03115404, 0.03371878, ..., 0.05465074,\n", + " 0.05482761, 0.05481327],\n", + " [0.02834649, 0.03368897, 0.03429044, ..., 0.0543161 ,\n", + " 0.05430075, 0.05435163],\n", + " ...,\n", + " [0.03811692, 0.03754768, 0.03750984, ..., 0.02515605,\n", + " 0.02067216, 0.02160575],\n", + " [0.03769255, 0.03745845, 0.03737439, ..., 0.02493761,\n", + " 0.02454149, 0.02719282],\n", + " [0.03746 , 0.0372192 , 0.03730713, ..., 0.02784498,\n", + " 0.02536339, 0.02306198]],\n", + "\n", + " [[0.0275538 , 0.02747322, 0.02750878, ..., 0.05358512,\n", + " 0.05404793, 0.05419671],\n", + " [0.02730396, 0.02819499, 0.02919991, ..., 0.05339658,\n", + " 0.05358685, 0.05364329],\n", + " [0.02712804, 0.02880288, 0.02864067, ..., 0.05316434,\n", + " 0.05325959, 0.05311146],\n", + " ...,\n", + " [0.03785873, 0.03741212, 0.03745075, ..., 0.0249369 ,\n", + " 0.02145594, 0.02418734],\n", + " [0.03738395, 0.03726116, 0.03728982, ..., 0.02512565,\n", + " 0.02582127, 0.02848102],\n", + " [0.03694052, 0.03695876, 0.03726244, ..., 0.03053271,\n", + " 0.02822248, 0.02540855]],\n", + "\n", + " [[0.02740881, 0.02733335, 0.02737304, ..., 0.05426568,\n", + " 0.05466345, 0.05454525],\n", + " [0.02715266, 0.0250837 , 0.02297565, ..., 0.05338996,\n", + " 0.05366699, 0.05356648],\n", + " [0.02697729, 0.02287315, 0.02194677, ..., 0.05279499,\n", + " 0.0530481 , 0.05285586],\n", + " ...,\n", + " [0.03751946, 0.03728687, 0.03735632, ..., 0.02467575,\n", + " 0.0208389 , 0.02443987],\n", + " [0.03695789, 0.03698442, 0.037166 , ..., 0.02499765,\n", + " 0.02568879, 0.0282693 ],\n", + " [0.03618747, 0.03652691, 0.03711196, ..., 0.0307685 ,\n", + " 0.02864933, 0.02546946]]],\n", + "\n", + "\n", + " [[[1.2784425 , 1.28968489, 1.30331826, ..., 1.14892197,\n", + " 1.14310455, 1.13998997],\n", + " [1.28755128, 0.9670974 , 0.64775342, ..., 
0.40460929,\n", + " 0.76864052, 1.13861692],\n", + " [1.29760277, 0.64339983, 0.66343421, ..., 0.37954411,\n", + " 0.38823134, 1.13792968],\n", + " ...,\n", + " [1.84874153, 1.46730876, 1.44175804, ..., 1.0060811 ,\n", + " 1.02113175, 2.6076138 ],\n", + " [2.25086474, 1.75897384, 1.45653439, ..., 1.01833165,\n", + " 1.79458797, 2.46382499],\n", + " [1.86417711, 1.86118674, 1.85844862, ..., 2.73344302,\n", + " 2.67506385, 2.62382388]],\n", + "\n", + " [[0.78241956, 0.78631049, 0.79192829, ..., 0.66846818,\n", + " 0.67414564, 0.68106425],\n", + " [0.79447496, 0.71404868, 0.63824832, ..., 0.39613977,\n", + " 0.5325864 , 0.67192936],\n", + " [0.80679786, 0.63716078, 0.66327852, ..., 0.37925392,\n", + " 0.38813069, 0.66272527],\n", + " ...,\n", + " [1.54232883, 1.45780337, 1.44177127, ..., 1.00624597,\n", + " 1.02395809, 1.9991318 ],\n", + " [1.78523362, 1.54450393, 1.44029748, ..., 1.02063692,\n", + " 1.50061035, 1.92890441],\n", + " [1.50327563, 1.494681 , 1.4881624 , ..., 2.06289625,\n", + " 2.02894211, 2.00303626]],\n", + "\n", + " [[0.5823397 , 0.59453583, 0.61269087, ..., 0.36513683,\n", + " 0.36922416, 0.37030086],\n", + " [0.60236412, 0.61610925, 0.63465577, ..., 0.39253685,\n", + " 0.37808377, 0.36943576],\n", + " [0.62542266, 0.6328811 , 0.66394663, ..., 0.37916636,\n", + " 0.38113832, 0.36904883],\n", + " ...,\n", + " [1.46498203, 1.45243585, 1.44134736, ..., 1.00675941,\n", + " 1.01872575, 1.40820503],\n", + " [1.46918988, 1.45036054, 1.43679214, ..., 1.01451838,\n", + " 1.25147033, 1.55820394],\n", + " [1.45194185, 1.44302392, 1.43449628, ..., 1.43378806,\n", + " 1.41443276, 1.40175247]],\n", + "\n", + " ...,\n", + "\n", + " [[0.02856354, 0.02871947, 0.02887096, ..., 0.05541828,\n", + " 0.05572167, 0.05534676],\n", + " [0.02841754, 0.03108655, 0.03364924, ..., 0.05472515,\n", + " 0.0548855 , 0.05482206],\n", + " [0.02823411, 0.03355997, 0.03417809, ..., 0.05430691,\n", + " 0.05427311, 0.05420008],\n", + " ...,\n", + " [0.03792448, 0.03738475, 0.03744894, ..., 0.02625389,\n", + " 0.02209541, 0.02293119],\n", + " [0.03757517, 0.03734322, 0.03733462, ..., 0.02610129,\n", + " 0.02569483, 0.02796202],\n", + " [0.03720658, 0.03707833, 0.0372971 , ..., 0.02889115,\n", + " 0.02662059, 0.0243883 ]],\n", + "\n", + " [[0.02727165, 0.02715584, 0.02720465, ..., 0.05377459,\n", + " 0.05423334, 0.05433375],\n", + " [0.0269985 , 0.02795687, 0.029016 , ..., 0.05340878,\n", + " 0.05363261, 0.05371693],\n", + " [0.02679438, 0.02865714, 0.02840276, ..., 0.05302648,\n", + " 0.05317137, 0.05304215],\n", + " ...,\n", + " [0.03763931, 0.03724349, 0.03738036, ..., 0.02637897,\n", + " 0.02298057, 0.0255764 ],\n", + " [0.03720314, 0.03711754, 0.03723446, ..., 0.02659015,\n", + " 0.02706061, 0.02925287],\n", + " [0.03662533, 0.03678911, 0.03724252, ..., 0.03148227,\n", + " 0.02941887, 0.02681732]],\n", + "\n", + " [[0.02712696, 0.02701419, 0.02706402, ..., 0.05436107,\n", + " 0.05476492, 0.05462555],\n", + " [0.02684733, 0.02489376, 0.02289154, ..., 0.05340138,\n", + " 0.05368813, 0.05363015],\n", + " [0.02664465, 0.02282199, 0.02188679, ..., 0.05264946,\n", + " 0.05295609, 0.05278934],\n", + " ...,\n", + " [0.03728744, 0.03711296, 0.03728106, ..., 0.02617371,\n", + " 0.02229152, 0.02590632],\n", + " [0.03671798, 0.0368133 , 0.03710368, ..., 0.02649892,\n", + " 0.02695179, 0.02914437],\n", + " [0.03579334, 0.03631859, 0.03708439, ..., 0.03173029,\n", + " 0.02987235, 0.02694446]]]])" ] }, "execution_count": 9, @@ -207,7 +780,8 @@ } ], "source": [ - "nessy.variables['O3']['data'].shape" + "# Observe the NaNs 
that appeared when data was not available\n", + "rolling_mean.variables['O3']['data']" ] }, { @@ -218,7 +792,7 @@ { "data": { "text/plain": [ - "('time', 'lm', 'rlat', 'rlon')" + "(37, 24, 271, 351)" ] }, "execution_count": 10, @@ -227,7 +801,7 @@ } ], "source": [ - "nessy.variables['O3']['dimensions']" + "rolling_mean.variables['O3']['data'].shape" ] }, { @@ -238,7 +812,7 @@ { "data": { "text/plain": [ - "'time: mean (interval: 1hr)'" + "('time', 'lm', 'rlat', 'rlon')" ] }, "execution_count": 11, @@ -246,6 +820,91 @@ "output_type": "execute_result" } ], + "source": [ + "rolling_mean.variables['O3']['dimensions']" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 3. Calculate daily statistics" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "CPU times: user 58.6 ms, sys: 5.01 ms, total: 63.6 ms\n", + "Wall time: 63 ms\n" + ] + } + ], + "source": [ + "%time nessy.daily_statistic(op=\"mean\")" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "(2, 24, 271, 351)" + ] + }, + "execution_count": 13, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "nessy.variables['O3']['data'].shape" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "('time', 'lm', 'rlat', 'rlon')" + ] + }, + "execution_count": 14, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "nessy.variables['O3']['dimensions']" + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "'time: mean (interval: 1hr)'" + ] + }, + "execution_count": 15, + "metadata": {}, + "output_type": "execute_result" + } + ], "source": [ "# See metadata\n", "nessy.variables['O3']['cell_methods']" @@ -253,7 +912,7 @@ }, { "cell_type": "code", - "execution_count": 12, + "execution_count": 16, "metadata": {}, "outputs": [ { @@ -262,7 +921,7 @@ "[datetime.datetime(2022, 11, 15, 0, 0), datetime.datetime(2022, 11, 16, 0, 0)]" ] }, - "execution_count": 12, + "execution_count": 16, "metadata": {}, "output_type": "execute_result" } @@ -274,7 +933,7 @@ }, { "cell_type": "code", - "execution_count": 13, + "execution_count": 17, "metadata": {}, "outputs": [ { @@ -286,7 +945,7 @@ " datetime.datetime(2022, 11, 16, 23, 0)]]" ] }, - "execution_count": 13, + "execution_count": 17, "metadata": {}, "output_type": "execute_result" }