What is the OpenMP runtime needed for?

I'm running Arch Linux. I've run a couple of OpenMP programs on this machine, both C and Fortran, and never noticed anything strange or unexpected. The correct number of threads was always used.
Now I noticed that there is a package openmp available, which is not installed:
extra/openmp 3.9.1-1
LLVM OpenMP Runtime Library
What is the OpenMP Runtime needed for if OpenMP works without it?

This runtime library is meant for the LLVM compiler. I know that Black Arch (a penetration-testing version of Arch Linux) has GCC installed by default, so your programs must be using it instead of LLVM (Clang). And a default GCC installation also installs its own OpenMP runtime (libgomp).
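A quick way to check which runtime is actually in play: compile a minimal OpenMP program and inspect its shared-library dependencies with ldd. A minimal sketch:

    /* hello_omp.c - minimal OpenMP program.
       Build with GCC:   gcc   -fopenmp hello_omp.c -o hello_omp
       Build with Clang: clang -fopenmp hello_omp.c -o hello_omp */
    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        /* Each thread reports its id; the OpenMP runtime decides
           how many threads to spawn. */
        #pragma omp parallel
        printf("thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
        return 0;
    }

Running ldd ./hello_omp on the GCC build should list libgomp, which ships with GCC itself; a Clang build instead needs LLVM's libomp, which is exactly what the extra/openmp package provides.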

Related

GCC from source - possible issues with the system default GCC?

I am thinking of downloading the GCC source code (latest version), compiling it, and developing all my applications with it.
Now, my doubts are related to what would happen if I need to interact with other libraries.
E.g. suppose you are using GCC 9 and you need to use external libraries installed in your Linux OS that have been built with GCC 5 (or older). Would there be any problem at runtime?
What other problems could you expect to experience using a compiler that is different from the system default one?
What are the relationships between the new compiler runtime and the runtime of the system default one?

Compiling with glibc and running on a kernel compiled with eglibc

I am using an ARM embedded system (ARM9) that runs an embedded Linux kernel. The kernel was compiled with GCC 4.5.x with eglibc.
Is there any harm in running binaries compiled with GCC 4.8.x or newer, which use glibc?
I've read that you're not supposed to mix and match libc implementations for stability reasons. But as far as I understand they are both ABI compatible, so there shouldn't be any problem.
Some of the code that I am using requires C++11 to compile correctly, and thus I can't use GCC 4.5.
The kernel was compiled with GCC 4.5.x with eglibc
The kernel build doesn't use GLIBC, so it's completely irrelevant what libc was (not) used to build the kernel.
Is there any harm in running binaries compiled with GCC 4.8.x or newer, which use glibc?
No.
What occurs when the binary links against libc? It was linked against glibc when cross-compiling, but it will link against the eglibc that is in the sysroot.
In general, GLIBC and EGLIBC guarantee backward compatibility: that is, a binary linked against GLIBC-x.y will run fine against any GLIBC not older than x.y.
And EGLIBC's deviation from GLIBC is quite minimal: EGLIBC allowed certain features to be disabled. A binary linked against EGLIBC-x.y will run just fine if at runtime it finds a GLIBC that is not older than x.y. (GLIBC will have features that the binary does not use, if EGLIBC did in fact disable some features, but that's usual anyway: it's rare for a binary to use every GLIBC interface.)
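To see what this means in practice, the libc version a binary actually finds at run time can be printed with the glibc-specific call gnu_get_libc_version(), and the symbol versions a binary requires can be listed with objdump -T binary | grep GLIBC_. A minimal sketch:

    /* libc_version.c - report the libc version found at run time.
       Build: gcc libc_version.c -o libc_version */
    #include <gnu/libc-version.h>
    #include <stdio.h>

    int main(void)
    {
        /* A binary linked against GLIBC-x.y is expected to run on any
           glibc (or EGLIBC) not older than x.y; this prints what the
           dynamic loader actually provided. */
        printf("glibc %s\n", gnu_get_libc_version());
        return 0;
    }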

Is it possible to create a MinGW / MSYS based Windows toolchain to compile Glibc-dependent applications for Linux?

I was following instructions here and here to build a toolchain which would work on Windows and compile applications for Linux and different hardware platforms. At first I tried to create cross-compiler for i686-linux to test it on a generic Debian 8 system.
Binutils and GCC compiled fine, but I got stuck at Glibc. It told me:
*** The GNU C library is currently not available for this platform.
I see that Sysprogs toolchains are using Newlib instead of Glibc but I haven't found any explanations except that Newlib is a good choice for embedded devices.
Does it mean that Newlib is actually the only choice for Windows -> Linux and that there is no way to compile software which depends on Glibc? Maybe there are "cheats", like copying pre-built Glibc from the target platform or some other workaround?
In theory, I don't even need Glibc built on Windows; I just need some "Glibc-compatible stub" built for the target architecture to link against (only dynamically, of course) while compiling for the target platform and OS. Or am I totally wrong here, and GCC cannot link to a different C library than the one GCC itself was linked to?
Or should I forget it and accept the fact that it is impossible (and, most probably, never will be possible) to achieve full Glibc and Linux kernel compatible C/C++ cross-compiling from Windows to GNU/Linux?
I will accept the answer which explains how GCC and Glibc are related, whether or not it is possible to link against a Glibc different from the C library used when GCC itself was built, and provides some insight into why it is / is not possible.
My guess is you're using --target when building glibc when you really need to use --host (which is different from how newlib is configured -- best not to ask why).
That said, the glibc build system requires a case-sensitive file system, as it creates files like foo.oS and foo.os which are very different things. On a case-insensitive system like Windows, foo.oS and foo.os refer to the same file, so the build gets corrupted and fails. There are patches out there to hack around this, but really you'd be better off booting a VM and doing the toolchain build inside of that.
NB: I'm not saying you need the VM to do all your development. You just need the VM to build the cross-compiler, which you'd then run under Windows. This would be a Canadian cross build.
Rather than do all this yourself by hand, please check out crosstool-ng. It handles/patches/fixes a lot of common errors people make when trying to create cross-compilers.

cross compiling Java JNI libraries for Windows / RPi from OS X / Linux

I have access to a 64-bit OS X environment, but I'd like to dramatically reduce the effort of releasing native library builds for x86 / x86_64 / armv6 Linux and 32 / 64-bit Windows.
How can I cross compile JNI code from OS X (and failing that, from 64 bit Ubuntu Linux)? Which compilers must I install (I'm using macports) and from where can I install the foreign JDK environments that I must include and link against? What special compiler / linker flags are needed?
I'm using the maven-native-plugin so I can easily change the compiler, linker and JDK_HOME for every target. I have one module (i.e. pom.xml) per target platform.
The project, for those interested in details, is netlib-java/native_ref.
I've found out that various Linux cross-compilers come with macports in the form of
arm-elf-gcc
i386-elf-gcc
x86_64-elf-gcc
i386-mingw32-gcc
with a 64-bit Windows cross-compiler on its way.
Unfortunately, for my purposes I also need a Fortran compiler, so I'm asking for more help on that now on the macports mailing lists.
EDIT: the current state of fortran cross-compilers (and mingw in general) on OS X is woeful. Best advice at the moment is to run a Linux box in VirtualBox and cross-compile all the targets from there. Two builds, not optimal, but better than all native.
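For reference, the native side of such a build is just a shared library exposing JNI entry points; the class and method names below are hypothetical. A sketch of a minimal native method, to be built once per target with the matching cross-compiler and that platform's JDK headers (e.g. i386-mingw32-gcc -shared -I"$JDK_HOME/include" -I"$JDK_HOME/include/win32" hello_jni.c -o hello.dll for 32-bit Windows):

    /* hello_jni.c - minimal JNI native method for a hypothetical
       Java class com.example.Hello declaring:
           public static native int add(int a, int b);          */
    #include <jni.h>

    JNIEXPORT jint JNICALL
    Java_com_example_Hello_add(JNIEnv *env, jclass cls, jint a, jint b)
    {
        (void)env;  /* unused here, but every JNI entry point receives them */
        (void)cls;
        return a + b;
    }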

Setting earlier minimum kernel version when compiling static libraries

My distribution (Arch Linux) recently increased the minimum supported Linux kernel version for its toolchain. I am compiling a web application that I link statically and then upload to a web server, and the kernel version on the web server is too old for static libraries compiled with the new toolchain. (I get a segmentation fault when I try to run static binaries on the server.) Is there a way to compile applications using the GNU toolchain (GCC, binutils, glibc) such that features requiring newer kernel versions are left out?
Glibc compatibility is really only guaranteed in one direction. (Older binaries work on newer systems; vice versa, not necessarily so.)
To guarantee that your binaries work on older systems, compile linking with an older glibc. The easiest way to do this is to find an older distribution, but I would recommend setting up a "crosstool" or similar cross-compiling toolchain targeting a different libc than what your build system uses (and this allows for repeatable builds across hosts regardless of what the system is).
Thanks. I also found the --enable-kernel option to glibc, which enables working with earlier kernels.
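As a sanity check on the server itself, a small statically linked probe like the one below will either print the kernel release or fail to start the same way the real application does (glibc configured with --enable-kernel=X.Y refuses to run on kernels older than X.Y). A minimal sketch:

    /* kernel_probe.c - print the kernel release the binary runs on.
       Build: gcc -static kernel_probe.c -o kernel_probe */
    #include <stdio.h>
    #include <sys/utsname.h>

    int main(void)
    {
        struct utsname u;
        if (uname(&u) != 0) {
            perror("uname");
            return 1;
        }
        printf("%s %s\n", u.sysname, u.release);
        return 0;
    }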
