I have a program that statically links the glib library and dynamically links a shared library that in turn also statically links the same glib library. When I run the program I get a segfault. After debugging in gdb I found a global static variable defined in glib that gets set, and it had different values in one call trace than in a later call trace. I then noticed that the variable's addresses were different as well. So it seems like there are two copies of the global static variable? Shouldn't the executable override the symbol from the shared library during dynamic linking, so that there is only one global static variable in the executable?
The other part of the story is that there is another executable that does the same thing as above and seems to behave okay, i.e., no segfault (I haven't debugged it to see whether the different code paths load the same static variable). So perhaps this behavior is not deterministic.
This is happening with gcc 8.3.1 on Linux (CentOS 7).
executableA (segfault)                 executableB (no segfault)
    |            \                         |            \
    | (static)    \ (shared)               | (static)    \ (shared)
    |              \                       |              \
libglib-2.0.a    libA.so               libglib-2.0.a    libA.so
                    |                                       |
                    | (static)                              | (static)
                    |                                       |
               libglib-2.0.a                           libglib-2.0.a
So it seems like there are two copies of the global static variable?
Yes, that is expected.
Shouldn't the executable override the symbol from shared library so there is only one global static variable in the executable during dynamic linking?
A static variable by definition has internal (translation-unit-local) linkage -- it is not accessible from any other compilation unit, and it is not exported from the shared library (or libraries).
You would have to make this variable (and any other similar variables) non-static and exported from both shared libraries. Only then will the dynamic loader bind all references to this variable to a single instance.
Note that linking separate copies of libglib-2.0.a into shared libraries without controlling symbol visibility is asking for trouble. Whatever you hoped to achieve by doing that, you are not achieving.
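As a hedged sketch of what controlling symbol visibility could look like here, a linker version script can keep the glib copy embedded in libA.so local to that library. The libA_ prefix, file names, and link line below are placeholders, not taken from the actual build:

# Export only libA's own API; everything else -- including the statically
# linked glib -- keeps local (hidden) binding inside libA.so.
cat > libA.map <<'EOF'
{
    global: libA_*;
    local:  *;
};
EOF

gcc -shared -o libA.so a.o \
    -Wl,--version-script=libA.map \
    libglib-2.0.a

This does not merge the two copies into one; it stops the dynamic linker from cross-binding references between them, which is a common source of exactly this kind of inconsistent-state crash.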
there is another executable that does the same as above, which seems to behave okay
Ah, programming by coincidence. The mine you stepped on didn't explode, so it should be ok to continue doing that.
Related
Recently I tried using link-time optimization but didn't get very far. On the first attempt to link an executable I get a load of
{path}/bin/ld: <artificial>:(.text.startup+0x136): undefined reference to `some_function`
errors.
I can't see anything much special about the functions. We do take their addresses, and also refer to them via macros.
This is on RHEL 7.6 with a home-rolled GCC 5.3 and binutils 2.34 (unfortunately I don't know how they were configured).
For a non-LTO build I see that one of the functions is in a read-only section (according to nm). I see the same symbol in a .a file, and from that I can find the .o file.
Going back to the LTO version, with objdump -D I see
.gnu.lto_{missing function}.7c974f7d7bc920e2
And that's about as far as I can get. My only idea is that this is some sort of ODR violation that doesn't show up otherwise.
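For what it's worth, symbols that exist only as LTO bytecode in .gnu.lto_* sections are invisible to a plain nm, so an LTO-aware listing is needed to inspect them. A hedged diagnostic sketch (archive and symbol names are placeholders):

# Plain nm reads only the regular symbol table and misses symbols that
# exist solely as LTO (GIMPLE) bytecode:
nm libfoo.a | grep some_function

# gcc-nm loads the LTO plugin and can also see symbols stored in the
# .gnu.lto_* sections; archives intended for LTO are normally created
# with gcc-ar for the same reason:
gcc-nm libfoo.a | grep some_function

If plain nm cannot see the function but gcc-nm can, an archive built without the LTO plugin (plain ar instead of gcc-ar) is a likely suspect for the undefined references.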
EDIT:
I've made some progress. Some if not all of the symbols are in .rodata arrays of pointers to functions.
These are generated in multiple files using some nasty C macros, something like this:
// file1.c
#include "param1_def.h"
#include "pfn_table.c"
// file2.c
#include "param2_def.h"
#include "pfn_table.c"
and
// pfn_table.c
function_type const MAKE_NAME(NAME, _functions) =
{
    MAKE_NAME(NAME, _write_file),
    MAKE_NAME(NAME, _read_file),
    // etc.
};
Here NAME is a macro defined in the paramX_def.h headers (it is different each time), and MAKE_NAME is a macro that pastes the final names together.
I need to make an ELF file use absolute paths for libraries instead of searching the default paths (or RPATH).
This is result from readelf:
readelf -d example
Dynamic section at offset 0xe28 contains 24 entries:
Tag Type Name/Value
0x0000000000000001 (NEEDED) Shared library: [libc.so.6]
But I want to get something like this:
readelf -d example
Dynamic section at offset 0xe28 contains 24 entries:
Tag Type Name/Value
0x0000000000000001 (NEEDED) Shared library: [/lib/libc.so.6]
Are there any linker options to achieve this?
The tool you want is ldd, because those absolute paths are not part of the ELF file but are decided by the dynamic loader. ldd is a wrapper around environment variables that cause the dynamic loader to output the paths to the libraries that would be (or have been, depending on how you see it) loaded.
Of course, library resolution is a system-specific task and your results may vary across installs even of the same distribution.
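For example (the paths in the output naturally differ per system):

# Print the libraries, with absolute paths, that the dynamic loader resolves:
ldd ./example

# Roughly equivalent: ldd largely just sets loader environment variables, e.g.:
LD_TRACE_LOADED_OBJECTS=1 ./example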
AFAIK, specifying the .so file as a normal linker input using its absolute path will result in a binary that refers to the .so by that same absolute path (provided the library does not carry a SONAME, which would be recorded instead).
Not sure how that works with default libs like libc, but you could try adding /lib/libc.so.6 as the first linker input.
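A hedged sketch of that experiment (file names are illustrative). Note that if the library carries a DT_SONAME, the SONAME is what gets recorded regardless of the path on the command line, and libc.so.6 does carry one:

# Link the library by absolute path instead of letting -l/-L search for it:
gcc -o example example.o /lib/libc.so.6

# See what was actually recorded in DT_NEEDED:
readelf -d example | grep NEEDED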
Introduction
I have a do_install task in a BitBake recipe I've written for a driver; it executes a custom install script. The task fails because the installation script cannot find kernel source header files within <the image rootfs>/usr/src/kernel. The script runs fine on the generated OS.
What's Happening
Here's the relevant part of my recipe:
SRC_URI += "file://${TOPDIR}/example"
DEPENDS += " virtual/kernel linux-libc-headers "

do_install () {
    ( cd ${TOPDIR}/example/Install ; ./install )
}
Here's a relevant portion of the install script:
if [ ! -d "/usr/src/kernel/include" ]; then
    echo ERROR: Linux kernel source include directory not found.
    exit 1
fi

cd /usr/src/kernel
make scripts
...
./install_drv pci ${DRV_ARGS}
I tried changing the check to if [ ! -d "/usr/src/kernel" ], which also failed. The install script passes different options to install_drv; a relevant portion of that script is below:
cd ${DRV_PATH}/pci
make NO_SYSFS=${ARG_NO_SYSFS} NO_INSTALL=${ARG_NO_INSTALL} ${ARGS_HWINT}
if [ ${ARG_NO_INSTALL} == 0 ]; then
    if [ `/sbin/lsmod | grep -ci "uceipci"` -eq 1 ]; then
        ./unload_pci
    fi
    ./load_pci DEBUG=${ARG_DEBUG}
fi
The build: make target within ${DRV_PATH}/pci is essentially this:
make -C /usr/src/kernel SUBDIRS=${PWD} modules
My Research
I found these comments within linux-libc-headers.inc relevant:
# You're probably looking here thinking you need to create some new copy
# of linux-libc-headers since you have your own custom kernel. To put
# this simply, you DO NOT.
#
# Why? These headers are used to build the libc. If you customise the
# headers you are customising the libc and the libc becomes machine
# specific. Most people do not add custom libc extensions to the kernel
# and have a machine specific libc.
#
# But you have some kernel headers you need for some driver? That is fine
# but get them from STAGING_KERNEL_DIR where the kernel installs itself.
# This will make the package using them machine specific but this is much
# better than having a machine specific C library. This does mean your
# recipe needs a DEPENDS += "virtual/kernel" but again, that is fine and
# makes total sense.
#
# There can also be a case where your kernel extremely old and you want
# an older libc ABI for that old kernel. The headers installed by this
# recipe should still be a standard mainline kernel, not your own custom
# one.
I'm a bit unclear on whether I can properly 'get' the headers from STAGING_KERNEL_DIR, since I'm not using make directly.
Within kernel.bbclass, provided in the meta/classes directory, there is this variable assignment:
# Define where the kernel headers are installed on the target as well as where
# they are staged.
KERNEL_SRC_PATH = "/usr/src/kernel"
This path is then packaged later within that .bbclass file here:
PACKAGES = "kernel kernel-base kernel-vmlinux kernel-image kernel-dev kernel-modules"
...
FILES_kernel-dev = "/boot/System.map* /boot/Module.symvers* /boot/config* ${KERNEL_SRC_PATH} /lib/modules/${KERNEL_VERSION}/build"
Update (1/21):
A suggestion on the yocto IRC channel was to use the following line:
do_configure[depends] += "virtual/kernel:do_shared_workdir"
which is corroborated by the Yocto Project Reference Manual, which states that in version 1.8, there was the following change:
The kernel build process was changed to place the source in a common shared work area and to place build artifacts separately in the source code tree. In theory, migration paths have been provided for most common usages in kernel recipes but this might not work in all cases. In particular, users need to ensure that ${S} (source files) and ${B} (build artifacts) are used correctly in functions such as do_configure and do_install. For kernel recipes that do not inherit from kernel-yocto or include linux-yocto.inc, you might wish to refer to the linux.inc file in the meta-oe layer for the kinds of changes you need to make. For reference, here is the commit where the linux.inc file in meta-oe was updated.
Recipes that rely on the kernel source code and do not inherit the module classes might need to add explicit dependencies on the do_shared_workdir kernel task, for example:
do_configure[depends] += "virtual/kernel:do_shared_workdir"
But I'm having difficulties applying this to my recipe. From what I understand, I should be able to change the above line to:
do_install[depends] += "virtual/kernel:do_shared_workdir"
This should mean that the do_install task must now run after the do_shared_workdir task of the virtual/kernel recipe, so I should be able to work with the shared workdir (see question 3 below), but I still have the same missing-kernel-headers issue.
My Questions
I'm using a custom Linux kernel (v3.14) from git.kernel.org whose recipe inherits the kernel class. Here are some of my questions:
1. Shouldn't the package kernel-dev be a part of any recipe which inherits the kernel class? (this section of the variables glossary)
2. If I add virtual/kernel to the DEPENDS variable, wouldn't that mean that kernel-dev would be brought in?
3. If kernel-dev is part of the dependencies of my recipe, wouldn't I be able to point to the /usr/src/kernel directory from my recipe? According to this reply on the Yocto mailing list, I think I should.
4. How can I properly reference the kernel source header files, preferably without changing the installation script?
Consider your Environment
Remember that there are several distinct environments within the build-time environment, consisting of:
sysroots
in the case of kernels, a shared work directory
target packages
kernel-dev is a target package, which you'd install into the rootfs of the target system for certain things like kernel symbol maps which are needed by profiling tools like perf/oprofile. It is not present at build time although some of its contents are available in the sysroots or shared workdir.
Point to the Correct Directories
Your do_install runs at build time, so it operates within the build directory structures of the build system, not the target's. In particular, /usr/src/ won't be correct; it would need to be some path within your build directories. The virtual/kernel do_shared_workdir task populates ${STAGING_KERNEL_DIR}, so you would want to change to that directory in your script.
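As a hedged illustration, the hard-coded path in the install script would then need to become the staged kernel source directory. One way, assuming you are willing to adapt the script and using a hypothetical KERNEL_SRC environment variable, could look like this:

# In the install script (illustrative change): take the kernel tree from the
# environment, falling back to the old hard-coded location.
KDIR=${KERNEL_SRC:-/usr/src/kernel}

if [ ! -d "${KDIR}/include" ]; then
    echo "ERROR: Linux kernel source include directory not found."
    exit 1
fi
cd "${KDIR}"
make scripts

The recipe would then export KERNEL_SRC=${STAGING_KERNEL_DIR} before invoking the script.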
Adding a Task Dependency
The:
do_install[depends] += "virtual/kernel:do_shared_workdir"
dependency line looks correct for your use case, assuming nothing in do_configure or do_compile accesses the data there.
Reconsider the module BitBake class
The other answers are correct in recommending a look at module.bbclass, since it illustrates how common kernel modules can be built. If you want to use custom functions or make commands, that is fine; you can just override them. If you really don't want to use that class, I would still suggest taking inspiration from it.
Task Dependencies
Adding virtual/kernel to DEPENDS means that virtual/kernel:do_populate_sysroot must run before this recipe's do_configure task. Since what you need here is a dependency on do_shared_workdir, a DEPENDS on virtual/kernel is not enough.
Answer to Question 3
The kernel-dev package would be built; however, it would then need to be installed into your target image and used at runtime on a real target. You need the headers at build time, so kernel-dev is not appropriate.
Other Suggestions
You'd likely want the kernel-devsrc package for what you're doing, not the kernel-dev package.
I don't think anyone can properly answer that last question here. You are using a non-standard install method: we can't know how to interact with it...
That said, take a look at what meta/classes/module.bbclass does. It sets several related variables for make: KERNEL_SRC=${STAGING_KERNEL_DIR}, KERNEL_PATH=${STAGING_KERNEL_DIR}, O=${STAGING_KERNEL_BUILDDIR}. Maybe your installer supports some of these environment variables and you could set them in your recipe?
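If the installer does honor such variables, a hedged recipe fragment might look like the following (whether the vendor scripts and Makefiles actually pick these up is an assumption):

do_install[depends] += "virtual/kernel:do_shared_workdir"

do_install () {
    # Illustrative only: pass the module.bbclass-style variables through to
    # the vendor installer in the hope that its Makefiles honor them.
    cd ${TOPDIR}/example/Install
    KERNEL_SRC=${STAGING_KERNEL_DIR} \
    KERNEL_PATH=${STAGING_KERNEL_DIR} \
    O=${STAGING_KERNEL_BUILDDIR} \
    ./install
}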
By default, the librtmp build produces a librtmp.so.1 file and a librtmp.so symlink. I need librtmp.so without the number suffix, as Android does not support versioned library names.
I was able to modify the Makefile to get a librtmp.so file:
#SO_VERSION=1
#SO_posix=.${SOX}.${SO_VERSION}
SO_posix=${SOX}
so the file produced is now librtmp.so.
But Android can't load it, as it still tries to load librtmp.so. (with a trailing dot):
Caused by: java.lang.UnsatisfiedLinkError: Cannot load library: link_image[1891]: 170 could not load needed library 'librtmp.so.' for 'libffmpeg.so' (load_library[1093]: Library 'librtmp.so.' not found)
If a shared library has a DT_SONAME dynamic tag of foobar.so.56, then no matter what the actual file is called (e.g. foo.so or libbar.so), when you link an executable against that library it is the SONAME that gets recorded (as a DT_NEEDED dynamic tag), not the actual file name.
It follows that your librtmp.so has a DT_SONAME of librtmp.so. (note the trailing dot). You can confirm that with:
readelf -d librtmp.so | grep SONAME
So what do you need to do to get rid of SONAME? Get rid of -Wl,--soname=... somewhere in your Makefile.
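A hedged sketch of the relink and the check (object and library names are placeholders for whatever the librtmp Makefile actually builds):

# Rebuild the shared library with no -Wl,--soname=... option at all:
gcc -shared -o librtmp.so rtmp.o log.o amf.o hashswf.o parseurl.o -lssl -lcrypto -lz

# With no DT_SONAME, the plain file name is what gets recorded elsewhere:
readelf -d librtmp.so | grep SONAME     # should now print nothing
readelf -d libffmpeg.so | grep NEEDED   # after relinking ffmpeg: librtmp.so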
How can I check whether an executable uses the SONAME or the file name?
The executable will always use SONAME (if present). You can check the libraries that executable needs by looking for DT_NEEDED tags in the executable's dynamic section:
readelf -d a.out | grep NEEDED
I have a question regarding a JNI article at http://java.sun.com/developer/onlineTraining/Programming/JDCBook/jniexamp.html.
gcc -o libnativelib.so -shared -Wl,-soname,libnative.so \
    -I/export/home/jdk1.2/include \
    -I/export/home/jdk1.2/include/linux nativelib.c \
    -static -lc
I guess I am still a little confused about the function of '-o libnativelib.so' and '-Wl,-soname,libnative.so'.
'-o libnativelib.so' specifies that the name of gcc's output file is libnativelib.so. From what I understand, that is the library name to load from the Java side, as shown in the article:
static {
System.loadLibrary("nativelib");
}
So what's the use of '-Wl,-soname,libnative.so'?
I found the following info in the ld manual:
-soname=name
When creating an ELF shared object, set the internal DT_SONAME field to the specified name. When an executable is linked with a shared object which has a DT_SONAME field, then when the executable is run the dynamic linker will attempt to load the shared object specified by the DT_SONAME field rather than the using the file name given to the linker.
So what does it mean? When the final executable is run, the dynamic linker will attempt to load ?? rather than ??, under the name of ??
This is useful for a system where one library can be present under several names, for example: libz.so, libz.so.1, libz.so.1.2.3. All of these are symlinks to one file, and the DT_SONAME inside it says "libz.so.1". When you link your code against libz.so, a dependency on "libz.so.1" is recorded in the executable file. When your file is later executed on another system that contains, say, libz.so.1.2.5, it will still work, because it looks for libz.so.1. But if the destination system has a much newer version, like libz.so.2.3.4, it will fail, because libz.so.2, not libz.so.1, will be present.
The DT_SONAME field is used only by the linker when other binaries are linked against the library. When you use System.loadLibrary(), you specify the file name yourself, and the value of this option is not used. If you want, you can implement a similar versioning scheme for your libnative, to ensure that your Java code always loads a compatible version.
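To make the versioning scheme concrete, a hedged sketch (version numbers and paths are illustrative):

# Typical symlink chain on a build machine:
#   libz.so -> libz.so.1 -> libz.so.1.2.3
ls -l /usr/lib/libz.so*

# The real file carries the SONAME that linking records in your binary:
readelf -d /usr/lib/libz.so.1.2.3 | grep SONAME   # -> libz.so.1
gcc -o app app.o -lz
readelf -d app | grep NEEDED                      # -> libz.so.1, not libz.so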
From GCC-HOWTO:
Each library has a soname. When the linker finds one of these in a library it is searching, it embeds the soname into the binary instead of the actual filename it is looking at. At runtime, the dynamic loader will then search for a file with the name of the soname, not the library filename. Thus a library called libfoo.so could have a soname libbar.so, and all programs linked to it would look for libbar.so instead when they started.
In your case, the soname libnative.so is different from the file name libnativelib.so.
You'll have to symlink libnative.so to libnativelib.so to allow the dynamic loader to find the shared lib.
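A hedged one-liner for that (the target directory is illustrative):

# Create the name the dynamic loader is looking for:
cd /path/to/your/native/libs
ln -s libnativelib.so libnative.so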