</code>
**Actions taken:**

After looking for differences between the NetCDF 4.4.0 and NetCDF 4.2 configurations (using the nc-config & nf-config commands), we found out that while NetCDF 4.4.0 was compiled without support for nc4 or P-NetCDF (a library that provides parallel I/O support for classic NetCDF files), NetCDF 4.2 was compiled with support for both.
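For reference, a minimal sketch of this kind of comparison; the installation paths are hypothetical, and the exact set of flags printed by ''--all'' varies between NetCDF versions:

<code bash>
# Dump the full build configuration of each NetCDF installation
# (--all reports compiler flags and feature support such as nc4/P-NetCDF)
/apps/netcdf/4.4.0/bin/nc-config --all >  netcdf-4.4.0.cfg
/apps/netcdf/4.2/bin/nc-config   --all >  netcdf-4.2.cfg

# Same for the Fortran interface
/apps/netcdf/4.4.0/bin/nf-config --all >> netcdf-4.4.0.cfg
/apps/netcdf/4.2/bin/nf-config   --all >> netcdf-4.2.cfg

# Show the configuration differences side by side
diff -y netcdf-4.4.0.cfg netcdf-4.2.cfg
</code>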
In order to learn more about the source of this bug, we __compared the behavior of two NEMO executables__: one compiled against a NetCDF build with P-NetCDF support and another one against a build without it. Both executables were linked against NetCDF without P-NetCDF support at runtime. The result is that the __NEMO compiled with P-NetCDF support did not run__, even though the library used at runtime made no use of it. The conclusion was that something was wrong in the NEMO binary itself.
We did a __comparison of the functions included in both binaries__ through the nm command, and we found that __they were identical__. Then we did a __more in-depth comparison of both binaries__ with objdump, and we found small differences.
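A minimal sketch of this binary comparison, assuming two executables named ''nemo_pnetcdf.exe'' and ''nemo_nopnetcdf.exe'' (the names are illustrative):

<code bash>
# Compare the symbol tables of the two binaries (identical in our case)
nm nemo_pnetcdf.exe   | sort > symbols_pnetcdf.txt
nm nemo_nopnetcdf.exe | sort > symbols_nopnetcdf.txt
diff symbols_pnetcdf.txt symbols_nopnetcdf.txt

# Disassemble both binaries and compare the generated code;
# this is where the small differences showed up
objdump -d nemo_pnetcdf.exe   > disasm_pnetcdf.txt
objdump -d nemo_nopnetcdf.exe > disasm_nopnetcdf.txt
diff disasm_pnetcdf.txt disasm_nopnetcdf.txt
</code>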
[[https://
==== Issue 2: XIOS crashes when writing model output ====
**Environment:**
**Solution:**
==== Issue 3: MPI kills XIOS when writing model output ====
**Environment:**
**Actions taken:** A similar error was observed with NEMO standalone v3.6r6499. In that case, Ops told us to use the //fabric// module, which selects //ofi// as the internode fabric, similar to the solution used on MN3 (see above). Using this module solved the problem for NEMO standalone, although it had the side effect that jobs never finished. In coupled EC-Earth this module produced a deadlock, discussed below.
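For illustration, a sketch of this fabric selection, assuming the //fabric// module essentially drives Intel MPI's fabrics variable (the module name is site-specific, and the exported value is our assumption of what the module does):

<code bash>
# Load the site-provided module that switches the internode fabric to ofi
module load fabric

# Roughly equivalent manual selection: shared memory inside a node,
# OFI (libfabric) between nodes
export I_MPI_FABRICS=shm:ofi
</code>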
We tried an alternative solution, which was to increase the number of XIOS servers in order to reduce the number of messages sent to the same process; for the moment this seems to be effective.
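A sketch of how the server count can be raised in a typical NEMO+XIOS MPMD launch; executable names and process counts are illustrative:

<code bash>
# Before: all output funneled through a single XIOS server
mpirun -np 512 ./nemo.exe : -np 1 ./xios_server.exe

# After: more servers, so each one receives fewer messages from the clients
mpirun -np 512 ./nemo.exe : -np 8 ./xios_server.exe
</code>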
**Diagnosis:**

**Solution:**
About Intel Communication Fabrics control:

[[https://

ips_proto.c source code:

[[https://
==== Issue 4: ====