  * Determine the best method to quantify differences between runs
      * Propose a reference which we can use to compare the rest of the experiments. This reference could be used in the future to check runs on new platforms, the inclusion of new modules, etc.
      * Use a statistical method to quantify the differences between runs and propose a minimum level of agreement to achieve, instead of bitwise precision, in order to avoid critical restrictions on performance (see the sketch after this list).
      * Propose a method to decide which of two simulations with valid results is the better one. Some experiments using different compiler flags will obtain similarly valid results (perhaps with differences of only 1%); it would be convenient to know which one gives the better results (quality of the simulation).
  * Determine a combination of flags (floating-point control and optimization) and additional optimization methods that achieves a balance between performance, accuracy and reproducibility.
      * Suggest a combination of flags and/or implement some specific optimizations to achieve the best performance possible while keeping the differences below X% on a particular platform and below Y% across two different platforms with a similar architecture (with Y > X).
  * If bit-for-bit reproducibility was achieved using ec-earth3.1, study how to obtain it using ec-earth3.2beta, at least in a debug mode.
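
As one possible form of such a statistical check (a minimal sketch, not an agreed method: the file names, the variable name ''tas'' and the 1% threshold are placeholders), two runs could be compared on a single output variable with a relative RMSE instead of a bitwise test:

<code python>
# Sketch: quantify the difference between a reference run and a test run on one
# NetCDF output variable, using a relative RMSE and a tolerance instead of a
# bit-for-bit comparison. File names, variable name and threshold are placeholders.
import numpy as np
from netCDF4 import Dataset

def relative_rmse(ref_file, test_file, var):
    """Relative RMSE of `var` between a reference run and a test run."""
    with Dataset(ref_file) as ref, Dataset(test_file) as test:
        a = np.asarray(ref.variables[var][:], dtype=np.float64)
        b = np.asarray(test.variables[var][:], dtype=np.float64)
    return np.sqrt(np.mean((a - b) ** 2)) / np.sqrt(np.mean(a ** 2))

TOLERANCE = 0.01  # accept the test run if the relative RMSE stays below 1%
diff = relative_rmse("reference_run.nc", "test_run.nc", "tas")
print("relative RMSE = %.3e -> %s" % (diff, "OK" if diff < TOLERANCE else "too large"))
</code>

Other norms (e.g. field-wise maxima, or a significance test against the ensemble spread) may be more appropriate; the point is only that the acceptance criterion becomes a tolerance rather than bitwise identity.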
  
===== 27th of May 2016 =====
See the summary presentations by {{20160526_groupmeeting.pdf | François }} and {{20160526_EC-Earth3.2_MarioAcosta.pdf | Mario }}. A more general set of slides about climate reproducibility is available {{ 20160526_EC-Earth3.1_FrancoisMassonnet.pdf | here }} and was also posted on the EC-Earth development portal as issue {{https://dev.ec-earth.org/issues/207 | 207}}.

Actions:
  * Mario runs an experiment with **-fpe0** activated, on ECMWF.
  * Mario/Oriol: tests are to be made with libraries (NetCDF, GRIB, etc.) compiled with the same options and the same version of the code (see the sketch below).
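
Whether two such runs agree bit for bit can be checked, for example, by hashing the output or restart files (a minimal sketch; the file names are placeholders):

<code python>
# Sketch: check whether two runs are bit-for-bit identical by comparing
# checksums of their output (or restart) files. File names are placeholders.
import hashlib

def file_sha256(path):
    """SHA-256 checksum of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

same = file_sha256("run_a/output.nc") == file_sha256("run_b/output.nc")
print("bit-for-bit identical" if same else "files differ")
</code>

Note that NetCDF files may differ in metadata (e.g. creation timestamps) even when the data are identical, so comparing the variables themselves, as in the earlier sketch with a zero tolerance, can be more meaningful.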