`docs/using_eessi/building_on_eessi.md`
… shell before building. To do this:
* Manually set `LD_RUN_PATH` so libraries are resolved at runtime. `LIBRARY_PATH` should already contain all the paths we need, and we also need to include the path to `libstdc++` from our GCC installation to avoid picking up the one from the host; see the sketch after this list.
* Compile, and make sure the library resolution points to the EESSI stack. For this, `ldd` from the compatibility layer, and **not** `/usr/bin/ldd`, should be used when checking the binary.
* Run!
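As a rough sketch of that first step (the exact paths and the `$EBROOTGCCCORE` variable are assumptions for an EasyBuild-style module environment such as EESSI's), setting `LD_RUN_PATH` could look like this:

```sh
# Reuse the search paths the loaded modules expose via LIBRARY_PATH, and append the
# lib64 directory of the GCC installation so its libstdc++ is found instead of the
# host's. $EBROOTGCCCORE is assumed to be set by the loaded GCCcore module.
export LD_RUN_PATH="$LIBRARY_PATH:$EBROOTGCCCORE/lib64"
```

Since `LD_RUN_PATH` is only consulted at link time, it has to be set before compiling.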
To illustrate these steps, take the classic MPI Hello World example code:
```c
/* The Parallel Hello World Program */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    printf("Hello World from MPI rank %d\n", rank);

    MPI_Finalize();
    return 0;
}
```
As described in the steps above, prepare the environment and load the required dependencies. In this case, we will use `gompi/2023b` as the toolchain to compile it.
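The preparation and compile step might look roughly as follows (the module name follows the toolchain mentioned above; the rest is a sketch that may need adapting to your system):

```sh
# Hedged sketch: load the toolchain, set LD_RUN_PATH as sketched earlier,
# and compile with the toolchain's MPI compiler wrapper.
module load gompi/2023b
export LD_RUN_PATH="$LIBRARY_PATH:$EBROOTGCCCORE/lib64"   # $EBROOTGCCCORE is assumed
mpicc -o HelloWorld HelloWorld.c
```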
This is the moment to check whether the compiler picked up all the libraries from the software and compatibility layers, and not from the host.
Look at the difference in library resolution between the compatibility layer's `ldd` and the host's:
```sh
# ldd from the compatibility layer: notice how all libraries are resolved from the
# software layer ("/cvmfs/software.eessi.io/versions/2023.06/software"), while libc
# and the interpreter point to the compatibility layer, so we are good to go.
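# For comparison, checking the binary with the compatibility layer's ldd would look
# roughly like this (the exact path to that ldd is an assumption; its output is not
# reproduced here):
#   {EESSI 2023.06} $ /cvmfs/software.eessi.io/versions/2023.06/compat/linux/x86_64/usr/bin/ldd HelloWorld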
# ldd from the host: even though the libraries point to the software layer, the
# interpreter entry for the compat layer's ld-linux-x86-64.so.2 now points to the
# host's /lib64/ld-linux-x86-64.so.2, which does the resolving. As a result, libc is
# also resolved from the host rather than the compat layer, causing a GLIBC mismatch:
{EESSI 2023.06} $ /usr/bin/ldd HelloWorld
./HelloWorld: /lib64/libc.so.6: version `GLIBC_2.36' not found (required by /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/intel/skylake_avx512/software/libevent/2.1.12-GCCcore-13.2.0/lib64/libevent_core-2.1.so.7)
./HelloWorld: /lib64/libc.so.6: version `GLIBC_ABI_DT_RELR' not found (required by /cvmfs/software.eessi.io/versions/2023.06/compat/linux/x86_64/lib/../lib64/libm.so.6)
```

Now comes the moment of truth: if everything looks right when checking with `ldd`, you should be fine to run the program:
```
{EESSI 2023.06} $ mpirun -n 2 HelloWorld
Hello World from MPI rank 0
Hello World from MPI rank 1
```
Even after closing the shell and restarting the environment, the libraries should still point to the directories we set in `LD_RUN_PATH`; just remember to load the required dependencies before running the binary.
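As an optional extra check (a suggestion, not part of the original walkthrough), you can inspect the binary's dynamic section to confirm that the run-time search paths were indeed baked in:

```sh
# The RPATH/RUNPATH entries should point into /cvmfs/software.eessi.io/...,
# not into host directories such as /usr/lib64.
readelf -d HelloWorld | grep -Ei 'rpath|runpath'
```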