How to download, compile and run NEMO with the GYRE configuration on MareNostrum 3
1-Register at the NEMO webpage (http://www.nemo-ocean.eu/)
2-Download the sources (revision 4879 is the recommended one prior to the stable release):
svn --username "YOUR_USERNAME" -r 4879 co http://forge.ipsl.jussieu.fr/nemo/svn/trunk/NEMOGCM
3-Download the NEMO architecture file for Marenostrum 3 and put it into NEMOGCM/ARCH:
4-Download the environment file and copy it into NEMOGCM/CONFIG and into NEMOGCM/CONFIG/GYRE/EXP00
5-Download the execution script sample and put it into NEMOGCM/CONFIG/GYRE/EXP00
6-Download the XIOS (IO server) sources.
svn --username "YOUR_USERNAME" co http://forge.ipsl.jussieu.fr/ioserver/svn/XIOS/branchs/xios-1.0/
7-Download the 3 architecture files for XIOS and copy them into the folder xios-1.0/arch
https://www.dropbox.com/s/5ix3st0ad72pswj/arch-X64_MN3_openmpi.env
https://www.dropbox.com/s/i2g3hnueeqwgf79/arch-X64_MN3_openmpi.fcm
https://www.dropbox.com/s/2mad5klfi4te8dw/arch-X64_MN3_openmpi.path
8-Compress the sources and copy them to MareNostrum 3:
tar -zcf NemoSources.tar.gz NEMOGCM
tar -zcf XiosSources.tar.gz xios-1.0
scp NemoSources.tar.gz email@example.com:/path_to_folder/
scp XiosSources.tar.gz firstname.lastname@example.org:/path_to_folder/
9-Log in to MareNostrum 3:
10- Go to the folder and uncompress the sources.
tar -zxf NemoSources.tar.gz
tar -zxf XiosSources.tar.gz
11- Enter the xios-1.0 folder and compile.
cd xios-1.0/
./make_xios --dev --arch X64_MN3_openmpi --job 8
12- Modify the arch file NEMOGCM/ARCH/arch_mn3.fcm to correctly set the XIOS path.
You must change these paths to the actual location where you compiled XIOS:
%XIOS_INC            -I/path_to_folder/xios-1.0/inc
%XIOS_LIB            /path_to_folder/xios-1.0/lib/libxios.a
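If you prefer not to edit the file by hand, the two lines can be rewritten with sed. This is only a sketch: XIOS_DIR stands in for wherever you actually built XIOS, and a toy arch file is created here purely for demonstration.

```shell
# Sketch: point %XIOS_INC / %XIOS_LIB at the XIOS build with sed.
# XIOS_DIR is a placeholder; the toy arch file below is for demonstration only.
mkdir -p NEMOGCM/ARCH
printf '%%XIOS_INC            -I/old/inc\n%%XIOS_LIB            /old/lib/libxios.a\n' > NEMOGCM/ARCH/arch_mn3.fcm

XIOS_DIR=/path_to_folder/xios-1.0
sed -i "s|^%XIOS_INC.*|%XIOS_INC            -I${XIOS_DIR}/inc|" NEMOGCM/ARCH/arch_mn3.fcm
sed -i "s|^%XIOS_LIB.*|%XIOS_LIB            ${XIOS_DIR}/lib/libxios.a|" NEMOGCM/ARCH/arch_mn3.fcm
cat NEMOGCM/ARCH/arch_mn3.fcm
```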
13-Once this is done you can compile NEMO. We will start with the GYRE configuration, which needs no input data. First we change one compilation key in order to use the latest solver for the surface pressure gradient.
cd NEMOGCM/CONFIG
vim GYRE/cpp_GYRE.fcm
and replace the key key_dynspg_flt with key_dynspg_ts
source openmpi.env (this sets the proper environment for compilation)
./makenemo -n GYRE -m mn3 -j 8
14- Go into the folder NEMOGCM/CONFIG/GYRE/EXP00 and make the first run.
chmod +x ./submit_execution.cmd (give execution permission to the submission script)
./submit_execution.cmd 16 (replace 16 with the number of cores you want to use)
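The contents of submit_execution.cmd are not shown here. As a rough sketch only, a submission script for MareNostrum 3's LSF batch system might look like the following; the job name, queue settings and wall-clock limit are assumptions, and the real script provided above should be preferred.

```shell
#!/bin/bash
# Hypothetical sketch of an LSF submission script for MareNostrum 3.
# All #BSUB settings here are illustrative assumptions; the actual
# submit_execution.cmd takes the core count as its first argument.
#BSUB -J nemo_gyre          # job name
#BSUB -n 16                 # number of MPI tasks (cores)
#BSUB -W 00:30              # wall-clock limit hh:mm
#BSUB -o nemo_%J.out        # standard output file
#BSUB -e nemo_%J.err        # standard error file

mpirun ./opa                # launch the NEMO executable
```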
Congratulations! You have just completed your first NEMO run!
Below is a short explanation of how to customize a GYRE run. You can skip points 1, 2 and 3.
This is how to run the GYRE configuration with NEMO 3.6 (trunk).
After downloading the trunk of the code:
1) create an arch file corresponding to your machine in the ARCH directory
2) go to the CONFIG directory. Replace key_dynspg_flt with key_dynspg_ts in GYRE/cpp_GYRE.fcm. This ensures that you are no longer using the solver but the time-splitting scheme.
3) compile the code with makenemo:
./makenemo -n GYRE -m YOUR_MACHINE -j 4
Prepare the GYRE namelists. Go to CONFIG/GYRE/EXP00. You now have 2 namelists: namelist_ref and namelist_cfg. namelist_ref defines all namelist variables with their default values. namelist_cfg overwrites the default values with those specific to the configuration you are using. Note that output.namelist.dyn is created when running the code and contains the merge of namelist_ref and namelist_cfg that was actually used by the code.
In namelist_cfg:
1) change the time-step length (in seconds). We use a smaller value to make sure the model does not blow up whatever you do (this is a benchmark, not a scientific run). I propose rn_rdt = 1200.
2) change the number of time steps to run. By default iodef.xml activates 5-day mean outputs, so I propose you run at least 10 days → nn_itend = 720
3) activate the benchmark configuration (a non-physical option that allows you to increase the size of the domain as much as you want). Add the following parameters in the namctl block of namelist_cfg:
!-----------------------------------------------------------------------
&namctl        ! Control prints & Benchmark
!-----------------------------------------------------------------------
   ln_ctl    = .false.  ! trends control print (expensive!)
   nn_bench  = 1        ! Bench mode (1/0): CAUTION use zero except for bench
                        ! (no physical validity of the results)
   nn_timing = 0        ! timing by routine activated (=1) creates timing.output file, or not (=0)
/
4) specify the size of your domain in the namcfg block of namelist_cfg. We usually define the domain size as
size of i direction = 30 * jp_cfg + 2
size of j direction = 20 * jp_cfg + 2
GYRE1 corresponds to jp_cfg = 1
GYRE6 (jp_cfg = 6) has about the same size as ORCA2
GYRE48 (jp_cfg = 48) has about the same size as ORCA025
GYRE144 (jp_cfg = 144) has about the same size as ORCA12
If you want to define GYRE144, you must change in namcfg:
   jp_cfg =  144     ! resolution of the configuration
   jpidta = 4322     ! 1st lateral dimension ( >= jpi ) = 30*jp_cfg+2
   jpjdta = 2882     ! 2nd    "       "      ( >= jpj ) = 20*jp_cfg+2
   jpkdta =   31     ! number of levels      ( >= jpk )
   jpiglo = 4322     ! 1st dimension of global domain --> i = jpidta
   jpjglo = 2882     ! 2nd    "        "        "     --> j = jpjdta
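The dimensions above follow directly from the sizing formulas; a quick shell check (illustrative only):

```shell
# Compute the GYRE lateral dimensions for jp_cfg = 144 from the formulas
# size_i = 30*jp_cfg + 2 and size_j = 20*jp_cfg + 2.
jp_cfg=144
jpidta=$(( 30 * jp_cfg + 2 ))   # 1st lateral dimension
jpjdta=$(( 20 * jp_cfg + 2 ))   # 2nd lateral dimension
echo "jpidta=${jpidta} jpjdta=${jpjdta}"
```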
5) optional: you can change the number of barotropic sub-time-steps done at each iteration. This directly modifies the number of communications performed at each time step. The default is 30, which is close to the minimum (we usually use 30 to 60). You can set it as high as you want if you want to increase the weight of communications. Add the following parameter in the namsplit block of namelist_cfg:
!-----------------------------------------------------------------------
&namsplit      ! time splitting parameters ("key_dynspg_ts")
!-----------------------------------------------------------------------
   nn_baro = 30      ! Number of iterations of barotropic mode
                     ! during rn_rdt seconds. Only used if ln_bt_nn_auto=F
/
Once you have made all these changes, you can directly run GYRE (the namelists and the XML files are the only input files). For example:
mpirun -np 4 ./opa
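After the run finishes, it is worth a quick check of the text log. NEMO writes its main diagnostics to a file named ocean.output in the run directory, and failures are flagged there with the string "E R R O R"; the following sketch simply greps for that marker:

```shell
# Sketch: quick sanity check on the log NEMO writes after a run.
# ocean.output holds the main text diagnostics; errors appear as "E R R O R".
if [ -f ocean.output ] && grep -q 'E R R O R' ocean.output; then
    status="errors found in ocean.output"
else
    status="no errors reported"
fi
echo "$status"
```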