Compiling OpenFOAM's scotch on macOS

I am trying to compile the scotch library embedded in the OpenFOAM.org third-party repository here. I ran the command
make -C ./ThirdParty-dev/scotch_6.0.9/src/
and I get the following error message:
(cd libscotch ; make VERSION=6 RELEASE=0 PATCHLEVEL=9 scotch && make install)
make \
CC="gcc" \
CCD="gcc" \
scotch.h \
scotchf.h \
libscotch.so \
libscotcherr.so \
libscotcherrexit.so
gcc -O3 -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict=__restrict -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=9 dummysizes.c -o dummysizes -Xlinker --no-as-needed -lz -lm -lrt
ld: unknown option: --no-as-needed
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [dummysizes] Error 1
make[1]: *** [scotch] Error 2
make: *** [libscotch] Error 2
I am not sure what this error message means. If it is complaining about scotch not being available, that's why I'm compiling it in the first place. Out of desperation, I also tried to install it via brew install scotch to no avail. I would appreciate it if you could help me understand the above error message and resolve the issue.

The scotch build is a bit different in that they manage all of the OS/compiler-specific bits separately via a src/Makefile.inc that the user is responsible for providing. Of course they also provide a number of examples in the src/Make.inc/ directory, but they may not properly cover your particular OS/compiler requirements.
Since you grabbed the scotch source files from a third-party source instead of from the pristine upstream sources, you also have someone else's src/Makefile.inc that happens to be a Linux-specific version. So no surprise that it has incorrect link (or even compile) options.
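If you are patching up this particular tree by hand, one way forward (a sketch only; the exact example file names vary between scotch releases, so check what your 6.0.9 tree actually ships) is to start from one of the provided Darwin templates instead of the Linux-flavoured Makefile.inc you inherited:
cd ThirdParty-dev/scotch_6.0.9/src
ls Make.inc | grep -i darwin                 # list whatever Darwin examples this release provides
cp Make.inc/Makefile.inc.i686_mac_darwin10 Makefile.inc   # file name is illustrative, pick one from the listing above
make scotch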
The Darwin-specific makefile adjustments that are used by openfoam.com:
# Linux:
LIB = .so
ARFLAGS = $(WM_CFLAGS) -shared -o
LDFLAGS = -Xlinker --no-as-needed $(WM_LDFLAGS) -lm -lrt
# Darwin:
LIB = .dylib
ARFLAGS = $(WM_CFLAGS) -dynamiclib -undefined dynamic_lookup -o
LDFLAGS = $(WM_LDFLAGS) -lm
Without worrying about any other source of differences (in the OpenFOAM WM_CFLAGS and WM_LDFLAGS variables), it would appear that you are using Linux (gcc only?) link options for Darwin, so it should be no surprise that they don't work.
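As a rough alternative sketch (assuming your src/Makefile.inc matches the Linux variant quoted above), the minimal manual edit is to strip the GNU-ld-only and Linux-only pieces and substitute the Darwin-style values; with macOS (BSD) sed that could look like:
cd ThirdParty-dev/scotch_6.0.9/src
sed -i '' \
    -e 's/-Xlinker --no-as-needed//' \
    -e 's/-lrt//' \
    -e 's/-shared/-dynamiclib -undefined dynamic_lookup/' \
    -e 's/^LIB[[:space:]]*=.*/LIB = .dylib/' \
    Makefile.inc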

The location of the pristine scotch sources moves around a bit (seems to be related to their filer), but a reasonably up-to-date reference is always included in the OpenFOAM ThirdParty BUILD.md. The URLs are provided as links, but also listed near the bottom of the file for easy grepping.
The current scotch link: https://gforge.inria.fr/frs/download.php/file/38352/scotch_6.1.0.tar.gz
The newest scotch is actually scotch-6.1.2, but there appears to be a regression in the dgraph calculation (the distributed graph in ptscotch), so it is probably better to stick with 6.1.0 for now.
Here is the scotch repository itself (https://gitlab.inria.fr/scotch/scotch), which should be the most reliable source of information.

Related

CPU2017 benchmark 510.parest_r build failed with gcc9.3 and gcc9.4

Hi everyone.
I'm trying to build the CPU2017 intrate and fprate test sets on an aarch64 server with gcc 9.3. All the benchmarks build successfully except 510.parest_r. I then tried building it with gcc 9.4 and hit the same error. I used Example-gcc-linux-aarch64.cfg as the configuration file and only edited the gcc path.
Here is the failure output:
/home/gcc9.3/bin/g++ -std=c++03 -mabi=lp64 -c -o source/me-tomography/synthetic_data.o -DSPEC -DNDEBUG -Iinclude -I. -DSPEC_AUTO_SUPPRESS_OPENMP -O3 -DSPEC_LP64 source/me-tomography/synthetic_data.cc
/home/gcc9.3/bin/g++ -std=c++03 -mabi=lp64 -c -o source/multigrid/mg_base.o -DSPEC -DNDEBUG -Iinclude -I. -DSPEC_AUTO_SUPPRESS_OPENMP -O3 -DSPEC_LP64 source/multigrid/mg_base.cc
/home/gcc9.3/bin/g++ -std=c++03 -mabi=lp64 -c -o source/me-tomography/measurements.o -DSPEC -DNDEBUG -Iinclude -I. -DSPEC_AUTO_SUPPRESS_OPENMP -O3 -DSPEC_LP64 source/me-tomography/measurements.cc
init2.c:52: MPFR assertion failed: p >= 2 && p <= ((mpfr_prec_t)((mpfr_uprec_t)(~(mpfr_uprec_t)0)>>1))
during GIMPLE pass: forwprop
source/me-tomography/measurements.cc: In constructor 'METomography::Measurements::ReferencedMeasurements::RatioMinusRatio<dim, number>::RatioMinusRatio(const libparest::Slave::Stationary::ProblemDescription&, const dealii::Function<dim>&, const std::set<unsigned char>&) [with int dim = 3; number = double]':
source/me-tomography/measurements.cc:1739:7: internal compiler error: Aborted
1739 | RatioMinusRatio<dim,number>::
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~
0xafbd97 crash_signal
../.././gcc/toplev.c:326
0xffff9e304d78 __GI_raise
../sysdeps/unix/sysv/linux/raise.c:51
0xffff9e2f1aab __GI_abort
/build/glibc-RIFKjK/glibc-2.31/stdlib/abort.c:79
Please submit a full bug report,
with preprocessed source if appropriate.
Please include the complete backtrace with any bug report.
See <https://gcc.gnu.org/bugs/> for instructions.
specmake: *** [/home/spec/cpu2017_aarch64/benchspec/Makefile.defaults:356: source/me-tomography/measurements.o] Error 1
specmake: *** Waiting for unfinished jobs....
Does the failure seem to be caused by the MPFR floating-point precision setting?
I tried building 510.parest_r with LLVM 10, and the build succeeded.
By the way, I built the same gcc 9.3 on an x86_64 server, and there 510.parest_r builds successfully.
You've found a bug in an older version of GCC (or possibly your system's RAM is failing, but unlikely if it consistently crashes at the same place). Or perhaps a bug in MPFR, although that seems less likely.
If you preprocess that source (add -E or -save-temps to the command line that crashed) and put it on https://godbolt.org/, does it still crash the same way with current ARM64 GCC, e.g. a nightly build of trunk? (https://godbolt.org/z/K6GnrYrj1 is ARM64 GCC trunk, with your command line args without the preprocessor stuff, which won't matter when compiling CPP output.)
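For example (a sketch based on the failing command from your log, run from the same build directory), something like this should leave a preprocessed measurements.ii you can paste into the compiler explorer:
/home/gcc9.3/bin/g++ -std=c++03 -mabi=lp64 -c -o source/me-tomography/measurements.o \
    -DSPEC -DNDEBUG -Iinclude -I. -DSPEC_AUTO_SUPPRESS_OPENMP -O3 -DSPEC_LP64 \
    -save-temps source/me-tomography/measurements.cc
# -save-temps writes measurements.ii (preprocessed source) into the current directory, even if the compile then ICEs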
If it still crashes with current GCC, then file a bug report on https://gcc.gnu.org/bugzilla/, ideally with a MCVE of the part of the source that triggers the bug. (Remove as many parts of the file as you can while preserving the crash behaviour. e.g. take out tons of stuff, undo if that makes it compile.)
If it doesn't crash with newer GCC, it might already be a known bug, or it got fixed by accident, or a different MPFR or other library version mattered. In that case it is maybe not worth reporting upstream; or if you do, make sure to note that the range of affected versions doesn't include GCC 12 or current trunk. Probably this Stack Overflow Q&A is sufficient for future users to know that it's a known bug.

Building MariaDB with musl: /usr/bin/ld cannot find -lgcc_s

I am trying to build MariaDB v10.3 with a musl tool chain on x86_64 Debian kernel v4.19. I have mainly been using the musl-gcc gcc wrapper to achieve this. The relevant packages I installed are as follows:
musl (1.1.21-2): standard C library
musl-dev (1.1.21-2): standard C library development files
musl-tools (1.1.21-2): standard C library tools
To build MariaDB, I first run:
CC=/usr/bin/musl-gcc cmake ../ -DWITHOUT_TOKUDB=1
which exits cleanly, and then I follow that up with:
make CC=/usr/bin/musl-gcc
which errors out with the following message:
Scanning dependencies of target strings-t
[ 12%] Building C object unittest/strings/CMakeFiles/strings-t.dir/strings-t.c.o
[ 12%] Linking CXX executable strings-t
/usr/bin/ld: cannot find -lgcc_s
/usr/bin/ld: cannot find -lgcc_s
collect2: error: ld returned 1 exit status
make[2]: *** [unittest/strings/CMakeFiles/strings-t.dir/build.make:94: unittest/strings/strings-t] Error 1
make[1]: *** [CMakeFiles/Makefile2:731: unittest/strings/CMakeFiles/strings-t.dir/all] Error 2
make: *** [Makefile:163: all] Error 2
Now I know the library that musl is looking for (libgcc_s.so) is located in /lib/gcc/x86_64-linux-gnu/8/ but my attempts to include the library using LDFLAGS or symlinking the library into /usr/lib/x86_64-linux-musl/ have failed.
Am I going about compiling MariaDB the right way? I imagine I am doing something wrong as Alpine Linux can run it.
So why not look at how Alpine builds it?
https://git.alpinelinux.org/aports/tree/main/mariadb/APKBUILD?id=3ca8e70b047f37a01df42e3244014a6635893abc
It seems they disable the tests:
-DSKIP_TESTS=ON
ref: https://git.alpinelinux.org/aports/tree/main/mariadb/APKBUILD?id=3ca8e70b047f37a01df42e3244014a6635893abc#n186
And take a look at their ppc-glibc patch:
https://git.alpinelinux.org/aports/tree/main/mariadb/ppc-remove-glibc-dep.patch?id=3ca8e70b047f37a01df42e3244014a6635893abc
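So, as a first thing to try (only a sketch mirroring the Alpine APKBUILD flag, not something I have verified against 10.3), skipping the unit tests may get you past the failing strings-t link:
CC=/usr/bin/musl-gcc cmake ../ -DWITHOUT_TOKUDB=1 -DSKIP_TESTS=ON
make CC=/usr/bin/musl-gcc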
I will update this answer when I become completely successful, but the solution thus far has been to use musl-cross-make to compile all libraries and such to specifically target musl. Since getting musl-cross-make I have been building all the dependencies from scratch (which is not fun :)). Thus far, I have gotten a more-or-less successful configuration and I am working on compilation (hammering out the last few dependencies).
I am using the following script to build things:
#!/bin/bash
set -euo pipefail
# musl paths
MUSL_PREFIX='/usr/local/x86_64-linux-musl'
MUSL_INC="$MUSL_PREFIX/include"
MUSL_LIB="$MUSL_PREFIX/lib"
CC='/usr/local/bin/x86_64-linux-musl-gcc'
CXX='/usr/local/bin/x86_64-linux-musl-g++'
#
# CMake couldn't locate lz4 when I installed it manually, so we bundle
# it in with the MariaDB build
#
wget https://github.com/lz4/lz4/archive/v1.7.5.tar.gz
tar -xzf v1.7.5.tar.gz
rm v1.7.5.tar.gz
mv lz4-1.7.5 /home/ajg/mariadb/storage/mroonga/vendor/groonga/vendor/
# Configure the build
CC="$CC" \
CXX="$CXX" \
LDFLAGS="-L$MUSL_LIB -Wl,-rpath,$MUSL_LIB" \
CFLAGS="-I$MUSL_INC" \
CXXFLAGS="-I$MUSL_INC" \
CPPFLAGS="-I$MUSL_INC" \
CMAKE_PREFIX_PATH="$MUSL_PREFIX" \
cmake . -DWITHOUT_TOKUDB=1 -DGRN_WITH_BUNDLED_LZ4=ON
# Make it
make \
CC="$CC" \
CXX="$CXX" \
LDFLAGS="-L$MUSL_LIB -Wl,-rpath,$MUSL_LIB" \
CFLAGS="-I$MUSL_INC" \
CXXFLAGS="-I$MUSL_INC" \
CPPFLAGS="-I$MUSL_INC"
I hope this helps someone else out in the future :)

Glibc configuration flags to reuse the newly installed glibc

I have a question on how a newly built glibc can be used from a different machine.
I changed the malloc code and compiled a local version of glibc.
From: /home/1/glibc/puzzlebox/
Configure: /eglibc-2.15/configure --prefix=/home/1/glibc/puzzlebox/lib32/ --host=i686-linux-gnu --build=i686-linux-gnu CC="gcc -m32 -g -ggdb -DMALLOC_DEBUG=1 -U__i686" CXX="g++ -m32 -g -ggdb -DMALLOC_DEBUG=1 -U __i686" CFLAGS="-O2 -march=i686 -U_FORTIFY_SOURCE -fno-stack-protector" CXXFLAGS="-O2 -march=i686 -U_FORTIFY_SOURCE -fno-stack-protector"
Make and install: make clean; make; make install
Since my prefix is /home/1/glibc/puzzlebox/lib32/, the following directories are created under /home/1/glibc/puzzlebox/lib32/:
bin etc include lib libexec sbin share
Now I copy the library files /home/1/glibc/puzzlebox/lib32/lib/* to another repository, /home/2/glibc/puzzlebox/lib32/lib,
and point my gcc at the library files in /home/2/glibc/puzzlebox/lib32/lib/.
But I am getting the following error when compiling:
ld: cannot find /home/1/glibc/puzzlebox/lib32/lib/libc.so.6 inside
ld: cannot find /home/1/glibc/puzzlebox/lib32/lib/libc_nonshared.a inside
ld: cannot find /home/1/glibc/puzzlebox/lib32/lib/ld-linux.so.2 inside
collect2: error: ld returned 1 exit status
I am compiling in the /home/2 repository, but my glibc requires /home/1/glibc/puzzlebox/lib32/lib/libc.so.6.
Is this because of static links? How can this be overcome? How can I build a glibc which can be used across repositories without rebuilding it in each and every repository? I don't want to override the already existing glibc, so I did not use /usr as the prefix.
Please suggest! Thanks in advance!
Is this because of static links?
No. The most likely reason is that /home/2/glibc/puzzlebox/lib32/lib/libc.so (which is a linker script, i.e. a text file) has /home/1/glibc/puzzlebox/lib32/lib/libc.so.6 etc. in it.
You can edit that file, but really you should not compile GLIBC with --prefix=/foo unless that is where you intend to install it.
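For illustration (the exact contents depend on the build, but a glibc libc.so linker script is typically a short GROUP directive like the one sketched below), you could rewrite the embedded paths after copying:
# /home/2/glibc/puzzlebox/lib32/lib/libc.so is plain text, roughly:
#   OUTPUT_FORMAT(elf32-i386)
#   GROUP ( /home/1/.../lib/libc.so.6 /home/1/.../lib/libc_nonshared.a  AS_NEEDED ( /home/1/.../lib/ld-linux.so.2 ) )
# point it at the new location instead (do the same for libpthread.so if it is also a script):
sed -i 's|/home/1/glibc/puzzlebox|/home/2/glibc/puzzlebox|g' /home/2/glibc/puzzlebox/lib32/lib/libc.so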

gold linker with --incremental flag does not work for target i386

I'm using the gold linker from binutils-2.24 to link an application for the target i686-pc-linux-gnu.
I got an out of virtual memory error when I ran Gold on my i686-pc-linux-gnu machine, so I built it on a x86_64-linux-gnu host (to get more virtual memory), and I'm running it on this machine as well, but I'm using it to link my application for target: i686-pc-linux-gnu.
The first link is successful - I don't have an executable yet - so Gold reverts to --incremental-full and I get a working executable which I can run successfully on my i686-pc-linux-gnu machine:
gold-ld -o stam32 -dynamic-linker /lib/ld-linux.so.2 -L/usr/lib32 /usr/lib32/crti.o /usr/lib32/crtn.o /usr/lib32/crt1.o main.o try.o -lc --incremental
stam32: stat: No such file or directory
linking with --incremental-full
The second link fails with the following error:
../objs-binutils-2.24/gold/ld -o stam32 -dynamic-linker /lib/ld-linux.so.2 -L/usr/lib32 /usr/lib32/crti.o /usr/lib32/crtn.o /usr/lib32/crt1.o main.o try.o -lc --incremental
../objs-binutils-2.24/gold/ld: internal error in init_got_plt_for_update, at ../../binutils-2.24/gold/target.h:949
I looked at the source code and found that "init_got_plt_for_update" is implemented only for
x86_64 and tilegx. For other targets init_got_plt_for_update simply calls gold_unreachable() which exits gold with an error.
On the other hand, there's a whole lecture on gold's incremental linking (https://video.linux.com/videos/incremental-linking-with-gold), and i386 is specifically mentioned there as a gold-supported target. The speaker does not mention any limitations regarding the use of the --incremental flag with i386 targets (and as far as I know i686-pc-linux-gnu is an i386 target).
So does anyone know why my incremental linking fails?
Thanks in advance,
Galit Keret
Asked and answered on the binutils mailing list:
There's currently no incremental linking support for gold's i386 target.

Shared library locations for Matlab mex files

I am trying to write a Matlab mex function which uses libhdf5; my Linux install provides libhdf5-1.8 shared libraries and headers. However, my version of Matlab, r2007b, provides a libhdf5.so from the 1.6 release (Matlab .mat files bootstrap hdf5, evidently). When I compile the mex, it segfaults in Matlab. If I downgrade my version of libhdf5 to 1.6 (not a long-term option), the code compiles and runs fine.
Question: how do I solve this problem? How do I tell the mex compilation process to link against /usr/lib64/libhdf5.so.6 instead of /opt/matlab/bin/glnxa64/libhdf5.so.0? When I try to do this using -Wl,-rpath-link,/usr/lib64 in my compilation, I get errors like:
/usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/../../../../x86_64-pc-linux-gnu/bin/ld: warning: libhdf5.so.0, needed by /opt/matlab/matlab75/bin/glnxa64/libmat.so, may conflict with libhdf5.so.6
/usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/../../../../lib64/crt1.o: In function `_start':
(.text+0x20): undefined reference to `main'
collect2: ld returned 1 exit status
mex: link of 'hdf5_read_strings.mexa64' failed.
make: *** [hdf5_read_strings.mexa64] Error 1
Ack. The last resort would be to download a local copy of the hdf5-1.6.5 headers and be done with it, but this is not future-proof (a Matlab version upgrade is in my future). Any ideas?
EDIT: per Ramashalanka's excellent suggestions, I
A) called mex -v to get the 3 gcc commands; the last is the linker command;
B) called that linker command with a -v to get the collect command;
C) called that collect2 -v -t and the rest of the flags.
The relevant parts of my output:
/usr/bin/ld: mode elf_x86_64
/usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/../../../../lib64/crti.o
/usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/crtbeginS.o
hdf5_read_strings.o
mexversion.o
-lmx (/opt/matlab/matlab75/bin/glnxa64/libmx.so)
-lmex (/opt/matlab/matlab75/bin/glnxa64/libmex.so)
-lhdf5 (/usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/../../../../lib64/libhdf5.so)
/lib64/libz.so
-lm (/usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/../../../../lib64/libm.so)
-lstdc++ (/usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/libstdc++.so)
-lgcc_s (/usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/libgcc_s.so)
/lib64/libpthread.so.0
/lib64/libc.so.6
/lib64/ld-linux-x86-64.so.2
-lgcc_s (/usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/libgcc_s.so)
/usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/crtendS.o
/usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/../../../../lib64/crtn.o
So, in fact, the libhdf5.so from /usr/lib64 is being referenced. However, this is being overridden, I believe, by the environment variable LD_LIBRARY_PATH, which my version of Matlab automagically sets at run-time so it can locate its own versions of e.g. libmex.so, etc.
I am thinking that the crt_file.c example works either because it does not use the functions I am using (H5Dopen, which had a signature change in the move from 1.6 to 1.8 (yes, I am using -DH5_USE_16_API)), or, less likely, because it does not hit the parts of Matlab internals that need hdf5. Ack.
The following worked on my system:
Install hdf5 version 1.8.4 (you've already done this: I installed the source and compiled to ensure it is compatible with my system, that I get gcc versions and that I get the static libraries - e.g. the binaries offered for my system are icc specific).
Make a target file. You already have your own file. I used the simple h5_crtfile.c from here (a good idea to start with this simple file first and look for warnings). I changed main to mexFunction with the usual args and included mex.h.
Specify the static 1.8.4 library you want to load explicitly (the full path, with no -L needed for it) and don't include -lhdf5 in the LDFLAGS. Include a -t option so you can ensure that there is no dynamic hdf5 library being loaded. You also need -lz, with zlib installed. For Darwin we also need a -bundle in LDFLAGS:
mex CFLAGS='-I/usr/local/hdf5/include' LDFLAGS='-t /usr/local/hdf5/lib/libhdf5.a -lz -bundle' h5_crtfile.c -v
For Linux, you need an equivalent position-independent call, e.g. -fPIC and maybe -shared, but I don't have a Linux system with a Matlab license, so I can't check:
mex CFLAGS='-fPIC -I/usr/local/hdf5/include' LDFLAGS='-t /usr/local/hdf5/lib/libhdf5.a -lz -shared' h5_crtfile.c -v
Run the h5_crtfile mex file. This runs without problems on my machine. It just does a H5Fcreate and H5Fclose to create "file.h5" in the current directory, and when I call file file.h5 I get file.h5: Hierarchical Data Format (version 5) data.
Note that if I include a -lhdf5 above in step 3, then matlab aborts when I try to run the executable (because it then uses matlab's dynamic libraries which for me are version 1.6.5), so this is definitely solving the problem on my system.
Thanks for the question. My solution above is definitely much easier for me than what I was doing before. Hopefully the above works for you.
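One extra sanity check (not from the original answer, just a quick way to confirm nothing dynamic slipped in) is to inspect the finished mex file's runtime dependencies:
# Linux (glnxa64) mex files; on macOS use otool -L instead of ldd
ldd hdf5_read_strings.mexa64 | grep -i hdf5    # should print nothing when hdf5 was linked statically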
I am accepting Ramashalanka's answer because it led me to the exact solution which I will post here for completeness only:
download the hdf5-1.6.5 library from the hdf5 website, and install the header files in a local directory;
tell mex to look for "hdf5.h" in this local directory, rather than in the standard location (e.g. /usr/include);
tell mex to compile my code and the shared object library provided by Matlab, and do not use the -lhdf5 flag in LDFLAGS.
the command I used is, essentially:
/opt/matlab/matlab_default/bin/mex -v CC#gcc CXX#g++ CFLAGS#"-Wall -O3 -fPIC -I./hdf5_1.6.5/src -I/usr/include -I/opt/matlab/matlab_default/extern/include" CXXFLAGS#"-Wall -O3 -fPIC -I./hdf5_1.6.5/src -I/usr/include -I/opt/matlab/matlab_default/extern/include " -O -lmwblas -largeArrayDims -L/usr/lib64 hdf5_read_strings.c /opt/matlab/matlab_default/bin/glnxa64/libhdf5.so.0
this gets translated by mex into the commands:
gcc -c -I/opt/matlab/matlab75/extern/include -DMATLAB_MEX_FILE -Wall -O3 -fPIC -I./hdf5_1.6.5/src -I/usr/include -I/opt/matlab/matlab_default/extern/include -O -DNDEBUG hdf5_read_strings.c
gcc -c -I/opt/matlab/matlab75/extern/include -DMATLAB_MEX_FILE -Wall -O3 -fPIC -I./hdf5_1.6.5/src -I/usr/include -I/opt/matlab/matlab_default/extern/include -O -DNDEBUG /opt/matlab/matlab75/extern/src/mexversion.c
gcc -O -pthread -shared -Wl,--version-script,/opt/matlab/matlab75/extern/lib/glnxa64/mexFunction.map -Wl,--no-undefined -o hdf5_read_strings.mexa64 hdf5_read_strings.o mexversion.o -lmwblas -L/usr/lib64 /opt/matlab/matlab_default/bin/glnxa64/libhdf5.so.0 -Wl,-rpath-link,/opt/matlab/matlab_default/bin/glnxa64 -L/opt/matlab/matlab_default/bin/glnxa64 -lmx -lmex -lmat -lm -lstdc++
This solution should work on all my various target machines, at least until I upgrade to Matlab r2009a, which I believe uses hdf5-1.8. Thanks for all the help, and sorry for being so dense with this; I think I was overly committed to using the packaged version of hdf5, rather than a local set of header files.
Note this would all have been trivial if Mathworks had provided a set of the header files with the Matlab distribution...
