CentOS kernel headers include changes from a future kernel

TL;DR
I get kernel headers with the correct version number, but they contain function definitions that were only introduced several kernel versions later. How do I get rid of these definitions from the future?
Background
I have been writing a kernel module and noticed that it did not compile on another machine, with the error that pci_bus_address was not defined. A quick investigation showed that it should not be defined there, since that machine runs a 3.10 kernel and this function has only been available since 3.14.
I figured that a quick #if (LINUX_VERSION_CODE < KERNEL_VERSION(3,14,0)) guard should fix the issue. However, my host machine is running a 3.10 kernel as well, and there the function is available.
Why do my kernel headers know about a function that should only be defined in a later version of the kernel? How can I get correct kernel headers that do not include this function?
I ran repoquery -i kernel-devel to show the installed version:
Name : kernel-devel
Version : 3.10.0
Release : 327.18.2.el7
Architecture: x86_64
Size : 34442356
Packager : None
Group : System Environment/Kernel
URL : http://www.kernel.org/
Repository : updates
Summary : Development package for building kernel modules to match the kernel
Source : kernel-3.10.0-327.18.2.el7.src.rpm
Description :
This package provides kernel headers and makefiles sufficient to build modules
against the kernel package.
However, running grep pci_bus_addr /usr/src/kernels/3.10.0-327.18.2.el7.x86_64/include/linux/pci.h returns static inline dma_addr_t pci_bus_address(struct pci_dev *pdev, int bar).

Ian Abbott pointed me in the right direction. As it turns out, Red Hat backports patches. Stumbling through their git repository, I could pinpoint the change to a specific release.
While the changelog ties the change to a specific release, I am lucky: the backport is gated by RHEL_RELEASE_CODE, which is composed of the major and minor CentOS version (i.e., the change in question is present in kernels from 7.1 onwards), and not by the release code from the changelog, which can only be found as a string in <linux/version.h>. (Conditional compilation based on a string would be a nightmare: it would take an extra tool to parse the string and generate a header file from it.)
Adding #if (RHEL_RELEASE_VERSION(7,1) <= RHEL_RELEASE_CODE) does the trick. An undefined RHEL_RELEASE_CODE evaluates to 0 on non-Red-Hat distributions, but the function-like RHEL_RELEASE_VERSION macro then needs a fallback definition, as in the sketch below.
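For illustration, a minimal sketch of how the combined guard might look (the fallback definitions for non-Red-Hat kernels and the exact RHEL_RELEASE_VERSION arithmetic are assumptions on my part; check your distro's <linux/version.h>):

#include <linux/version.h>

/* On non-Red-Hat kernels the RHEL macros are undefined; provide
   neutral fallbacks so the #if below stays a valid expression. */
#ifndef RHEL_RELEASE_CODE
#define RHEL_RELEASE_CODE 0
#define RHEL_RELEASE_VERSION(a, b) (((a) << 8) + (b))
#endif

#if LINUX_VERSION_CODE < KERNEL_VERSION(3, 14, 0) && \
    RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(7, 1)
/* These headers do not provide pci_bus_address(); a substitute
   definition for the module goes here. */
#endif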

Related

appImage-builder V1.0.3

I am trying to use the latest version of appImage-builder, because AppImages of my application built with the old version no longer run on Ubuntu 22.04. So I was asked to try whether it works with the new appImage-builder.
Currently (June 2022), only versions below 1.0, which are based on Ubuntu 18.04, are available on Docker (which we previously used to build our AppImage).
The newer versions are available via GitHub (https://github.com/AppImageCrafters/appimage-builder/releases).
However, I seem to be unable to execute:
appimage-builder --generate
or
appimage-builder --recipe AppImageBuilder.yml
Is there any documentation available on how to correctly use the .AppImage version of appImage-builder? All I could find at https://appimage-builder.readthedocs.io/en/latest/ seems to refer to the Docker version or a manually built version of appImage-builder.
Depending on the error message you get, there could be a couple of issues at play here.
If you get an error related to FUSE, you need to install the libfuse2 package with apt install libfuse2. AppImages rely on libfuse2, but Ubuntu stopped shipping it by default with 22.04, in favor of libfuse3.
If you get an error related to "file not found", it could be that you do not have AppImageLauncher installed. Sadly, with type 2 AppImages the design decision was taken to modify the ELF header of the executable with 3 magic bytes at offset 8. This means that the Linux loader will refuse to run the file as-is. AppImageLauncher copies the file to a temporary directory and zeroes out the magic bytes in order to be able to execute it.
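If you want to check whether a given file carries those magic bytes (the file name here is a placeholder), you can inspect and clear them as follows; a type 2 AppImage shows the bytes 41 49 02 ("AI", 0x02), and note that the dd command modifies the file in place:

xxd -s 8 -l 3 MyApp.AppImage
dd if=/dev/zero of=MyApp.AppImage bs=1 seek=8 count=3 conv=notrunc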
A good starting point for debugging issues like this is to run the executable under strace, which lets you see which system call likely causes the error. Keep in mind that if you try to execute a file and get "file not found", it might mean that the interpreter (dynamic linker) specified by the file cannot be found on the system, or that the ELF header is not valid. You can also run the executable through the linker directly, which might give you more clues, for example: /lib64/ld-linux-x86-64.so.2 <NAME-OF-YOUR-EXECUTABLE>.
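For example, a typical invocation to watch what happens at startup (standard strace options; the file name is a placeholder):

strace -f -e trace=execve,openat ./MyApp.AppImage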

pp (perl compiler) issue - still has a dependency

I'm trying to use pp (the Perl compiler) to create an application that can run independently of the installed Perl library and interpreter.
It successfully creates a compiled executable, although I had to use the -x -c options to get it to find dependencies. It runs on my machine, but when I try it on another machine I get this error, so clearly there is still some dependency:
501 Protocol scheme 'https' is not supported (LWP::Protocol::https not installed)
I am running it on macOS 10.14.1, if that makes any difference. Thanks!
LWP::Protocol::https is loaded dynamically when needed, so pp has no way of knowing it's needed by default.
Solution 1
Pass -x to pp, and make sure the module is actually loaded during the run pp uses to determine the modules to include. This would probably be achieved by using LWP to make an HTTPS request during that run. --xargs=... might come in useful for this.
Solution 2
Pass -M LWP::Protocol::https to pp. You could also pass -M 'LWP::Protocol::**' to include all the protocol handlers you have installed.
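For example (script and output names are placeholders), combined with the flags from the question:

pp -x -c -M LWP::Protocol::https -o myapp myapp.pl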
Solution 3
Add use LWP::Protocol::https (); to your script or an included module. Including a comment indicating why you are doing this would be appropriate.
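A minimal sketch of how that looks in context (the URL is a placeholder):

use strict;
use warnings;
use LWP::UserAgent;
use LWP::Protocol::https ();  # loaded explicitly so pp packs the HTTPS handler

my $ua = LWP::UserAgent->new;
print $ua->get('https://example.com/')->status_line, "\n";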
You were building Net::SSLeay on macOS 10.14, linking it against libssl.44.dylib, which is not present on macOS 10.12 where you try to run it.
I've found it annoying having to switch between build and test systems to find out which of the libraries are missing or incompatible and need to be packed.
I am now using the following strategy:
I use perlbrew instead of system perl.
For alien dependencies I use homebrew instead of the system libraries.
I build the packed executable using pp and run the resulting program with DYLD_PRINT_LIBRARIES=YES exported (on the development machine).
I examine the list of loaded libraries and add all those referenced in the Homebrew directory tree (/usr/local/opt/ and /usr/local/Cellar/ in my case) using pp -l /full/path/name -l ... (see the sketch after this list).
I rebuild the executable.
I still check on a target machine before deploying, but chances are very high now that it just works.
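As a sketch of steps 3 and 4 (the libssl path is hypothetical; use whatever paths DYLD_PRINT_LIBRARIES actually reports on your machine):

export DYLD_PRINT_LIBRARIES=YES
./myapp 2>&1 | grep -E '/usr/local/(opt|Cellar)/'
pp -x -l /usr/local/opt/openssl/lib/libssl.dylib -o myapp myapp.pl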

boost::filesystem::current_path() returns empty path

I have a C++ program where I need the current path to later create a folder. The location of my executable is, let's say /home/me/foo/bin. This is what I run:
//Here I expect '/home/me/foo/bin/', but get ''
auto currentPath = boost::filesystem::current_path();
//Here I expect '/home/me/foo/', but get ''
auto parentPath = currentPath.parent_path();
//Here I expect '/home/me/foo/foo2/', but get 'foo2/'
string subFolder = "foo2";
string folderPath = parentPath.string() + "/" + subFolder + "/";
//Here I expect to create '/home/me/foo/foo2/', but get a core dump
boost::filesystem::path boostPath{ folderPath};
boost::filesystem::create_directories( boostPath);
I am running on Ubuntu 16.04, using Boost 1.66 installed with the package manager Conan.
I used to run this successfully with a previous version of Boost (1.45, I believe) without using Conan; Boost was just normally installed on my machine. I now get a core dump when running create_directories(boostPath);.
Two questions:
Why isn't current_path() providing me with the actual path, returning an empty path instead?
Even if current_path() returned nothing, why would I still get a core dump, even when running with sudo? Wouldn't I simply create the folder(s) at the root?
Edit:
Running the compiled program with some cout outputs of the above variables between the lines (rather than using debug mode) normally gives me the following output:
currentPath: ""
parentPath: ""
folderPath: /foo2/
Segmentation fault (core dumped)
But sometimes (about 20% of the time) it gives me the following output:
currentPath: "/"
parentPath: "/home/me/fooA�[oFw�[oFw#"
folderPath: /home/me/fooA�[oFw�[oFw#/foo2/
terminate called after throwing an instance of 'boost::filesystem::filesystem_error'
what(): boost::filesystem::create_directories: Invalid argument
Aborted (core dumped)
Edit 2:
Running conan profile show default I get:
[settings]
os=Linux
os_build=Linux
arch=x86_64
arch_build=x86_64
compiler=gcc
compiler.version=5
compiler.libcxx=libstdc++
build_type=Release
[options]
[build_requires]
[env]
There is a discrepancy between the libcxx used by your dependencies and the one you are using to build your application.
In g++ (Linux) there are two standard library modes you can use: libstdc++, built without the C++11 ABI, and libstdc++11, built with it. When you build an executable (application or shared library), all the individual libraries linked together must use the same libcxx.
libstdc++11 became the default for g++ >= 5, but this also depends on the Linux distro. Even if you install g++ >= 5 on an older distro like Ubuntu 14.04, the default libcxx will still be libstdc++; apparently it is not easy to upgrade it without breaking things. It also happens that very popular CI services used in open source, like travis-ci, ran older Linux distros, so libstdc++ linkage was the most common.
libstdc++ was the default for g++ < 5.
For historical and backwards-compatibility reasons, the Conan default profile always uses libstdc++, even for modern compilers on modern distros. You can see your default profile the first time conan is executed, find it as a file in ~/.conan/profiles/default, or show it with conan profile show default. This will likely change in Conan 2.0 (or even sooner), so that the correct libcxx is detected for each compiler when possible.
So, if you are not changing the default profile (using your own profiles is recommended for production), then when you execute conan install, the dependencies that get installed are built against libstdc++. Note that conan install is independent of your build in most cases; it just downloads, unzips, and configures the dependencies you requested, in the configuration given by the default profile.
Then, when you build, if you are not changing _GLIBCXX_USE_CXX11_ABI, you are using your system compiler's default, in this case libstdc++11. In most cases this discrepancy shows up as a linking error, but you were unlucky: your application managed to link and then crashed at runtime.
There are a couple of approaches to solve this:
Build your application with libstdc++ too. Make sure to define _GLIBCXX_USE_CXX11_ABI=0.
Install your dependencies for libstdc++11. Edit your default profile to use libstdc++11, then issue a new conan install and rebuild your app.
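For the second approach, a sketch using Conan 1.x commands (run from the project directory; the exact rebuild step depends on your build system):

conan profile update settings.compiler.libcxx=libstdc++11 default
conan install . --build=missing

After that, rebuild the application without setting _GLIBCXX_USE_CXX11_ABI=0, so both the dependencies and your code use the C++11 ABI.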

Find correct kernel version to build module

I want to check out kernel sources to build a kernel module. However, when I insmod the module I get an "Invalid module format" error. The kernel versions apparently do not match.
uname -r results in version 3.0.35-gd0fc8d0.
I am on an i.MX6 processor and have to check out a branch from here: https://github.com/boundarydevices/linux-imx6
But I can't seem to find the exact matching kernel version.
You need to build a kernel module against the specific kernel version it will run on, so that they are compatible with each other.
You should be able to find the kernel version a module was built against using the modinfo command:
#modinfo kernel_mod.ko
Look at the vermagic field in the output.
If you are in a hurry, you can try to change the vermagic of the kernel module in order to insert it anyway.
Reference: http://www.linuxquestions.org/questions/linux-kernel-70/how-to-change-the-vermagic-of-a-module-728387/
Or just google "change vermagic of kernel module".
By the way, keep in mind that this method can cause problems: the version check exists because kernel-internal data structures and interfaces change between versions.
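If the module is installed under /lib/modules, modprobe can also override the check directly (dangerous for the same reason):

modprobe --force-vermagic kernel_mod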

Kernel Version Error, insmod fails

I am running kernel version 2.6.35.
When I run uname -r it gives 2.6.35-22-generic.
I compiled a module from the kernel 2.6.35 source tree, but it fails to insert into my running kernel.
I don't have any clue. Can anybody help me out?
Thank you.
You have to compile a loadable kernel module (LKM) against the exact kernel you are running, i.e. the output of uname -r. In your case you downloaded the vanilla 2.6.35 source tree and compiled your LKM against it. At insertion time the kernel checks the module's version information; if it matches, the module inserts without errors, but on a mismatch the insertion fails.
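The usual way to build against the running kernel is to point kbuild at the installed headers (a sketch; it assumes the matching headers package for your distribution is installed and is run from the module's source directory):

make -C /lib/modules/$(uname -r)/build M=$(pwd) modules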
You want to ensure that CONFIG_MODVERSIONS is enabled in the running kernel, 2.6.35-22-generic in your case. With it enabled, when you build a kernel module from the 2.6.35 sources, the running kernel will allow the module to load as long as its symbols match; if symbols are missing, it will fail to load.
Not having CONFIG_MODVERSIONS enabled means that the kernel version and the module's version MUST match exactly.
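You can check whether the running kernel has it enabled (the config path shown is the usual Debian/Ubuntu location):

grep CONFIG_MODVERSIONS /boot/config-$(uname -r)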
I am supposing that you are using the official (mainline) kernel tree but trying to load your module on your distribution's kernel. You must use the kernel sources/headers from your Linux distribution. I am supposing this because of the version string: the -22-generic suffix in 2.6.35-22-generic is not part of an official version name.
