  * Determine the best method to quantify differences between runs.
  * Propose a reference which we can use to compare the rest of the experiments. This reference could be used in the future to check runs on new platforms, the inclusion of new modules, etc.
  * Use a statistical method to quantify the differences between runs and propose a minimum threshold to achieve, instead of bitwise precision, in order to avoid critical restrictions on performance.
  * Propose a method to know which of two simulations with valid results is the best. Some experiments using different compiler flags will obtain similar valid results (maybe with differences of only 1%). It would be convenient to know which obtains better results (quality of the simulation results).
  * Determine a combination of flags (floating-point control and optimization) and additional optimization methods which achieve a balance between performance and accuracy & reproducibility.
  * Suggest a combination of flags and/or implement some specific optimizations to achieve the best performance possible while the differences remain less than X% using a particular platform and less than Y% using two different platforms with a similar architecture (with Y > X).
  * If bit-for-bit reproducibility was achieved using ec-earth3.1,
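A statistical tolerance test of the kind proposed above could be sketched as follows. This is only an illustration, not the project's actual script: the function names and the mean-relative-difference metric are assumptions, and a real comparison would operate on full model output fields rather than plain lists.

```python
def relative_difference(run_a, run_b):
    """Mean relative difference (in %) between two flattened output fields.

    Hypothetical metric: per-point difference normalized by the larger
    magnitude, averaged over the field.
    """
    total = 0.0
    for a, b in zip(run_a, run_b):
        denom = max(abs(a), abs(b)) or 1.0  # avoid division by zero when both are 0
        total += abs(a - b) / denom
    return 100.0 * total / len(run_a)


def runs_agree(run_a, run_b, tol_percent=1.0):
    """True if two runs differ by less than tol_percent on average.

    A weaker (and much cheaper to satisfy) criterion than bitwise
    identity, in the spirit of the X% / Y% thresholds above.
    """
    return relative_difference(run_a, run_b) < tol_percent
```

For instance, two runs whose outputs differ by about 0.5% everywhere would pass with `tol_percent=1.0`, whereas a bitwise comparison would reject them.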
===== 27th of May 2016 =====
See the summarizing presentations of {{20160526_groupmeeting.pdf | François }} and {{20160526_EC-Earth3.2_MarioAcosta.pdf | Mario }}. A more general set of slides about climate-reproducibility is available {{ 20160526_EC-Earth3.1_FrancoisMassonnet.pdf | here }} and was also posted on the EC-Earth development portal issue {{https://

Actions:
  * Mario runs an experiment with **-fpe0** activated, on ECMWF.
  * Mario/

===== 10th of November 2017 =====
Martin and François have worked to make the scripts testing the reproducibility more universal. These can now be found in the following gitlab project:

https://

A draft of the paper has been created:

https://