Environment: NEMO 3.6 stable, XIOS 1.0. This bug was documented using the following compilers and MPI libraries:
The problem was reported both with the default optimization flags and with the -O3 optimization flag.
Problem: When using more than 1,920 MPI processes (120 MN3 nodes), the simulation entered a deadlock during the XIOS initialization:
Some of the NEMO clients remained stuck in client.cpp doing an MPI send:
MPI_Send(buff,buffer.count(),MPI_CHAR,serverLeader,1,CXios::globalComm) ;
The XIOS master server (the first XIOS process) remained in the CServer::listenContext(void) routine in server.cpp, trying to dispatch all the messages:
MPI_Iprobe(MPI_ANY_SOURCE,1,CXios::globalComm, &flag, &status) ;
Actions taken: Prints were placed in the code, before and after the calls mentioned above. They showed that some NEMO processes were waiting in the MPI barrier (synchronous send), while the XIOS master server was looping indefinitely, trying to receive all the messages (the total number of messages should equal the number of clients, i.e. of NEMO processes).
The error could be reproduced with a small standalone program, which made debugging and fixing it easier.
Diagnosis: It seemed that some messages sent from the clients to the master server were lost, possibly because all of them were sent from all the nodes at the same time.
Solution: Our first workaround was to add a call to the sleep function before the MPI_Send in the clients' code, to interleave the outgoing messages and avoid flooding the buffers and the network. This is obviously not the cleanest solution, because it introduces a total delay of 15 seconds in the execution, but it is an affordable approach, given that this code is only executed during initialization.
char hostName[50];
gethostname(hostName, 50);

// Sleep fix: stagger the sends to avoid flooding the server
sleep(rank % 16);

MPI_Comm_create_errhandler(eh, &newerr);
MPI_Comm_set_errhandler(CXios::globalComm, newerr);
error_code = MPI_Send(buff, buffer.count(), MPI_CHAR, serverLeader, 1, CXios::globalComm);
delete [] buff;
BSC Operations provided another solution: enabling the User Datagram protocol through Intel MPI environment variables (more information below). This alternative works and does not require any code modification, but it entails a performance penalty: simulations using this option became increasingly slower (5%-20%) as the number of cores increased, compared with the reference runs.
I_MPI_DAPL_UD=on
More information:
This bug was reported in the XIOS portal:
http://forge.ipsl.jussieu.fr/ioserver/ticket/90
About Intel Communication Fabrics control:
https://software.intel.com/en-us/node/528821
DAPL UD-capable Network Fabrics Control:
Environment: Auto-EC-Earth 3.2.2_develop_MN4 (EC-Earth 3.2 r4063-runtime-unification).
Problem: NEMO crashes in the initialization when reading input files:
ocean.output:
===>>> : E R R O R
===========
iom_nf90_check : NetCDF: Invalid argument
iom_nf90_open ~~~
iom_nf90_open ~~~ open existing file: ./weights_WOA13d1_2_o
rca1_bilinear.nc in READ mode
===>>> : E R R O R
===========
iom_nf90_check : NetCDF: Invalid argument
iom_nf90_open ~~~
===>>> : E R R O R
===========
fld_weight : unable to read the file
log.err:
forrtl: severe (408): fort: (7): Attempt to use pointer FLY_DTA when it is not associated with a target
Image              PC                Routine            Line        Source
nemo.exe           00000000027994F6  Unknown            Unknown     Unknown
nemo.exe           0000000000B3F219  fldread_mp_fld_in  1375        fldread.f90
nemo.exe           0000000000B15D4E  fldread_mp_fld_ge  614         fldread.f90
nemo.exe           0000000000B13A6B  fldread_mp_fld_in  413         fldread.f90
nemo.exe           0000000000B0A69B  fldread_mp_fld_re  175         fldread.f90
nemo.exe           0000000000978301  dtatsd_mp_dta_tsd  224         dtatsd.f90
nemo.exe           0000000000C312DF  istate_mp_istate_  196         istate.f90
nemo.exe           000000000043C33F  nemogcm_mp_nemo_i  326         nemogcm.f90
nemo.exe           000000000043A64D  nemogcm_mp_nemo_g  120         nemogcm.f90
nemo.exe           000000000043A606  MAIN__             18          nemo.f90
nemo.exe           000000000043A5DE  Unknown            Unknown     Unknown
libc-2.22.so       00002B64D88596E5  __libc_start_main  Unknown     Unknown
nemo.exe           000000000043A4E9  Unknown            Unknown     Unknown
Actions taken: Operations had observed this error using standalone NEMO and NetCDF 4.4.4.1, so they installed NetCDF 4.4.0. We could not reproduce the failure they reported with standalone NEMO and NetCDF 4.4.4.1, but we got the same error when running EC-Earth, so we also moved to NetCDF 4.4.0. However, we then got other XIOS errors when writing outputs (commented in the following issues), and we asked for the same version we had been using at MN3 to be installed. When Operations installed version 4.2, we surprisingly got the same error again.
Diagnosis:
Solution:
More information:
This bug had already been reported in Unidata Github:
Environment: Auto-EC-Earth 3.2.2_develop_MN4 (EC-Earth 3.2 r4063-runtime-unification).
Problem: XIOS2 breaks when writing output files. As a result, the output files are incomplete: they contain the headers for each of the variables they are supposed to store, but only the values for nav_lat, nav_lon, depthu and depthu_bounds are actually saved.
ocean.output: ocean.output looks normal; the stop occurs just after writing the restarts.
log.err:
forrtl: error (65): floating invalid
Image PC Routine Line Source
nemo.exe 0000000001B2B832 Unknown Unknown Unknown
libpthread-2.22.s 00002B2472B71B10 Unknown Unknown Unknown
nemo.exe 000000000189CE85 _ZN5blitz5ArrayId 182 fastiter.h
nemo.exe 000000000189BDE9 _ZN5blitz5ArrayId 89 methods.cc
nemo.exe 000000000189BA8C _ZN4xios13COperat 175 operator_expr.hpp
nemo.exe 000000000188A3BA _ZN4xios27CFieldF 61 binary_arithmetic_filter.cpp
nemo.exe 00000000018152E4 _ZN4xios7CFilter1 14 filter.cpp
nemo.exe 0000000001563FBE _ZN4xios9CInputPi 37 input_pin.cpp
nemo.exe 000000000176284F _ZN4xios10COutput 46 output_pin.cpp
nemo.exe 0000000001762A48 _ZN4xios10COutput 35 output_pin.cpp
nemo.exe 0000000001815364 _ZN4xios7CFilter1 16 filter.cpp
nemo.exe 0000000001563FBE _ZN4xios9CInputPi 37 input_pin.cpp
nemo.exe 000000000176284F _ZN4xios10COutput 46 output_pin.cpp
nemo.exe 0000000001762A48 _ZN4xios10COutput 35 output_pin.cpp
nemo.exe 0000000001815364 _ZN4xios7CFilter1 16 filter.cpp
nemo.exe 0000000001563FBE _ZN4xios9CInputPi 37 input_pin.cpp
nemo.exe 000000000176284F _ZN4xios10COutput 46 output_pin.cpp
nemo.exe 0000000001762A48 _ZN4xios10COutput 35 output_pin.cpp
nemo.exe 000000000178A260 _ZN4xios13CSource 32 source_filter.cpp
nemo.exe 0000000001812DC7 _ZN4xios6CField7s 21 field_impl.hpp
nemo.exe 00000000014B3586 cxios_write_data_ 434 icdata.cpp
nemo.exe 0000000000FFE0FD idata_mp_xios_sen 552 idata.F90
nemo.exe 0000000000738081 diawri_mp_dia_wri 252 diawri.f90
nemo.exe 0000000000485B67 step_mp_stp_ 284 step.f90
nemo.exe 000000000043908B nemogcm_mp_nemo_g 147 nemogcm.f90
nemo.exe 0000000000438FDD MAIN__ 18 nemo.f90
nemo.exe 0000000000438F9E Unknown Unknown Unknown
libc-2.22.so 00002B247309B6E5 __libc_start_main Unknown Unknown
nemo.exe 0000000000438EA9 Unknown Unknown Unknown
Actions taken: Given that the error is a floating invalid, we disabled the -fpe0 flag, but we still had the same problem. We then disabled compiler optimizations (using -O0) and the problem disappeared, although this obviously has an effect on performance.
Diagnosis:
Solution: Disable compiler optimizations (compile with -O0).
Environment:
Problem:
Actions taken:
Diagnosis:
Solution:
Environment:
Problem:
Actions taken:
Diagnosis:
Solution: