History of AutoRPE
In its origins, AutoRPE was a set of Python utilities designed to enable the use of the Reduced Precision Emulator in a large-scale code like NEMO by automatically manipulating the source code to implement it. However, the tool soon surpassed its original purpose.
To have a tool that can reliably modify the code to change the type of its variables, it became necessary to properly parse the source and use the information it contained to be certain about the modifications being made. The information obtained could be stored in a database that had value of its own.
At this point, we had a tool that allowed us to implement the Reduced Precision Emulator, which was a very important step towards performing experiments with any given numerical precision, but it was still insufficient to determine which variables were most sensitive to reduced precision.
The large number of real variables to study ( O(1000) ) made it necessary to devise smart strategies to explore the huge variable space. One promising strategy was the adaptation of a binary search algorithm, used to detect, with a reduced number of tests, which variables actually require double precision. To support this approach, AutoRPE was extended with a workflow manager that could handle the analysis. It was developed and first used to identify which variables in NEMO required double precision, and the workflow could be applied not only to NEMO but also to ROMS. The method and some results were published in Tintó-Prims et al. 2019.
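The binary-search idea can be sketched as a recursive group test: demote a whole group of variables to reduced precision at once; if the run still passes the accuracy check, no member of the group needs double precision, otherwise split the group and recurse. The sketch below is a minimal illustration, not AutoRPE's actual implementation: `passes_test` stands in for a full model run plus accuracy check, the variable names are hypothetical, and the simple splitting assumes sensitivities are independent (no interactions between variables).

```python
def find_sensitive(variables, passes_test):
    """Recursively narrow down which variables need double precision.

    `variables` is a list of variable names; `passes_test(group)` is an
    oracle returning True if the model still produces acceptable results
    when every variable in `group` is demoted to reduced precision.
    """
    if passes_test(variables):
        return []                      # the whole group tolerates reduced precision
    if len(variables) == 1:
        return list(variables)         # a single failing variable is sensitive
    mid = len(variables) // 2
    # split the failing group and test each half with fewer runs
    return (find_sensitive(variables[:mid], passes_test)
            + find_sensitive(variables[mid:], passes_test))
```

With a mock oracle in which only `rho` and `e3t` are sensitive, `find_sensitive(["rho", "tsn", "e3t", "sshn"], oracle)` returns `["rho", "e3t"]` after testing subgroups rather than every variable individually.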
At this stage, we had a tool that could identify sensitive variables in a Fortran code, but using this information to actually optimize the code still required a lot of manual work.
In Fortran, it is not straightforward to define type-agnostic routines (routines in which the type of the arguments is not fixed at compile time). For this reason, the arguments provided at a routine/function call must be consistent with the arguments declared inside the routine/function. This implies that if we want to modify the type of one variable, we may be forced to change the precision of other variables to keep the code coherent. The original workflow relied on the compiler complaining each time the code was not properly modified: from the error messages one learned which additional variables had to be changed, and the process was repeated until all problems were solved. This could require a lot of trial and error, with no prior idea of how long the process would take.
To help deal with these issues, we decided to extend the tool to check which parts of the code need to be fixed, and to fix them automatically. With this done, it would be possible to close the workflow and spare user intervention, making the whole process more robust, less prone to errors, and reproducible.
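The coherence problem described above can be viewed as a propagation over call sites: demoting one variable forces the dummy argument it is bound to (and, transitively, everything bound to that dummy elsewhere) to be demoted as well, until a fixed point is reached. The sketch below is a simplified illustration of that idea, not AutoRPE's actual code: the call graph is reduced to hypothetical (actual, dummy) name pairs and the variable names are invented for the example.

```python
def propagate_demotions(demoted, calls):
    """Fixed-point propagation of precision changes across call boundaries.

    `calls` is a list of (actual, dummy) pairs, each recording that a
    variable `actual` is passed to a dummy argument `dummy` at some call
    site.  Coherence must hold in both directions: if either side of a
    binding is demoted to reduced precision, the other side must be too.
    """
    demoted = set(demoted)
    changed = True
    while changed:
        changed = False
        for actual, dummy in calls:
            for a, b in ((actual, dummy), (dummy, actual)):
                if a in demoted and b not in demoted:
                    demoted.add(b)      # propagate the demotion across the call
                    changed = True
    return demoted
```

For example, with `calls = [("zwx", "pta"), ("zwy", "pta"), ("pta", "ptab")]`, demoting only `zwx` propagates to the whole set `{"zwx", "pta", "zwy", "ptab"}`: the dummy `pta` must follow `zwx`, which in turn drags along every other variable bound to it.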