Commit 8fb3c93

Merge pull request #369 from hvelab/issue_48_build_without_eb_example
Adding an MPI Hello World example in documentation on how to run on top of EESSI without EB
2 parents e125435 + 33c5d94 commit 8fb3c93

1 file changed: docs/using_eessi/building_on_eessi.md (+86, -1 lines)
@@ -143,13 +143,98 @@ shell before building. To do this:
* Manually set `LD_RUN_PATH` so that libraries are resolved at runtime. `LIBRARY_PATH` should already contain all the paths we need, and we also need to include the path to
  `libstdc++` from our GCC installation to avoid picking up the one from the host:

  ```sh
  export LD_RUN_PATH=$LIBRARY_PATH:$EBROOTGCCCORE/lib64
  ```

* Compile and make sure the library resolution points to the EESSI stack. For this, `ldd` from the compatibility layer, and **not** `/usr/bin/ldd`, should be used
  when checking the binary.

* Run!
To illustrate this, take the classic MPI Hello World example:

```c
/* The parallel MPI Hello World program */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    printf("Hello World from MPI rank %d\n", rank);

    MPI_Finalize();
    return 0;
}
```
As described in the steps above, prepare the environment and load the required dependencies. In this case, we will use `gompi/2023b` as the toolchain to compile it.

```sh
# Starting the environment
$ source /cvmfs/software.eessi.io/versions/2023.06/init/bash

# Loading the toolchain
{EESSI 2023.06} $ module load gompi/2023b
```
Now, set the `LD_RUN_PATH` environment variable so that the binary will resolve its libraries from the EESSI stack at runtime, then compile the code.

```sh
# Setting LD_RUN_PATH
{EESSI 2023.06} $ export LD_RUN_PATH=$LIBRARY_PATH:$EBROOTGCCCORE/lib64

# Compile the code manually
{EESSI 2023.06} $ mpicc -o HelloWorld mpi.c
```
This is the moment to check whether the compiler picked up all the libraries from the software and compatibility layers, not from the host.

Note the difference in library resolution between the `ldd` from the compatibility layer and the one from the host:

```sh
# ldd from the compatibility layer: all libraries are resolved from the
# software layer ("/cvmfs/software.eessi.io/versions/2023.06/software"),
# while libc and the interpreter point to the compatibility layer, so we
# are good to go!
{EESSI 2023.06} $ ldd HelloWorld
	linux-vdso.so.1 (0x00007ffce03af000)
	libmpi.so.40 => /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/intel/skylake_avx512/software/OpenMPI/4.1.6-GCC-13.2.0/lib/libmpi.so.40 (0x00007fadd9e84000)
	libc.so.6 => /cvmfs/software.eessi.io/versions/2023.06/compat/linux/x86_64/lib64/libc.so.6 (0x00007fadd9ca8000)
	[...]
	libevent_pthreads-2.1.so.7 => /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/intel/skylake_avx512/software/libevent/2.1.12-GCCcore-13.2.0/lib64/libevent_pthreads-2.1.so.7 (0x00007fadd98f0000)
	libm.so.6 => /cvmfs/software.eessi.io/versions/2023.06/compat/linux/x86_64/lib/../lib64/libm.so.6 (0x00007fadd9810000)
	/cvmfs/software.eessi.io/versions/2023.06/compat/linux/x86_64/lib64/ld-linux-x86-64.so.2 (0x00007fadd9fab000)
```
```sh
# ldd from the host: even though the libraries still point to the software
# layer, the interpreter ld-linux-x86-64.so.2 from the compat layer is now
# resolved to the host's "/lib64/ld-linux-x86-64.so.2", resulting in a GLIBC
# mismatch, as libc is also resolved from the host and not the compat layer.
{EESSI 2023.06} $ /usr/bin/ldd HelloWorld
./HelloWorld: /lib64/libc.so.6: version `GLIBC_2.36' not found (required by /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/intel/skylake_avx512/software/libevent/2.1.12-GCCcore-13.2.0/lib64/libevent_core-2.1.so.7)
./HelloWorld: /lib64/libc.so.6: version `GLIBC_ABI_DT_RELR' not found (required by /cvmfs/software.eessi.io/versions/2023.06/compat/linux/x86_64/lib/../lib64/libm.so.6)
	linux-vdso.so.1 (0x00007fffe4fd3000)
	libmpi.so.40 => /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/intel/skylake_avx512/software/OpenMPI/4.1.6-GCC-13.2.0/lib/libmpi.so.40 (0x00007f1fdf571000)
	libc.so.6 => /lib64/libc.so.6 (0x00007f1fdf200000)
	[...]
	libevent_pthreads-2.1.so.7 => /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/intel/skylake_avx512/software/libevent/2.1.12-GCCcore-13.2.0/lib64/libevent_pthreads-2.1.so.7 (0x00007f1fdf420000)
	libm.so.6 => /cvmfs/software.eessi.io/versions/2023.06/compat/linux/x86_64/lib/../lib64/libm.so.6 (0x00007f1fdeeb1000)
	/cvmfs/software.eessi.io/versions/2023.06/compat/linux/x86_64/lib64/ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2 (0x00007f1fdf698000)
```
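This check can also be scripted. Below is a minimal sketch; the helper name `check_eessi_resolution` is our own invention, not part of EESSI. It reads `ldd` output on stdin and flags any dependency that resolves to a path outside `/cvmfs`.

```shell
# Hypothetical helper, not part of EESSI: flag dependencies that resolve
# outside the /cvmfs tree. Lines without "=> /" (linux-vdso, and the
# interpreter line in the good case) carry no resolved path and are skipped.
check_eessi_resolution() {
    if grep '=> /' | grep -qv '=> /cvmfs'; then
        echo "WARNING: host libraries detected"
        return 1
    fi
    echo "OK: all libraries resolved from /cvmfs"
}

# Usage, in an initialized EESSI shell:
#   ldd HelloWorld | check_eessi_resolution
```

Note that in the host-`ldd` case above, the interpreter line itself (`... => /lib64/ld-linux-x86-64.so.2`) would be flagged, which is exactly the problem we want to catch.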
Now for the moment of truth: if everything looks right when checking with `ldd`, you should be able to run the program:

```sh
{EESSI 2023.06} $ mpirun -n 2 HelloWorld
Hello World from MPI rank 0
Hello World from MPI rank 1
```

Even after closing the shell and restarting the environment, the libraries will still resolve to the directories we set in `LD_RUN_PATH`. Still, remember to load the required dependencies before running the binary.
!!! warning