Currently running a CentOS 7 machine, trying to install lttng-tools and lttng-modules.
I am going through the steps required to get LTTng set up on my machine, as described at http://lttng.org/docs/v2.9/, and am experiencing issues with just about every step along the way. My issue right now is getting lttng-modules installed. I tried running the set of commands provided to install lttng-modules:
cd $(mktemp -d) &&
wget http://lttng.org/files/lttng-modules/lttng-modules-latest-2.9.tar.bz2 &&
tar -xf lttng-modules-latest-2.9.tar.bz2 &&
cd lttng-modules-2.9.* &&
make &&
sudo make modules_install &&
sudo depmod -a
I received "Can't read private key" errors, each followed by a line of the form INSTALL /probe/path/name/probe_name.ko, for an entire list of probes. I read through the README and made sure that the kernel config variable dependencies were properly set. From here I am completely unsure, and any help would be appreciated.
A snippet of the terminal output is as follows:
Can't read private key
INSTALL /tmp/tmp.frbWYvVaL8/lttng-modules-2.9.1/probes/lttng-probe-x86-exceptions.ko
Can't read private key
INSTALL /tmp/tmp.frbWYvVaL8/lttng-modules-2.9.1/probes/lttng-probe-x86-irq-vectors.ko
Can't read private key
INSTALL /tmp/tmp.frbWYvVaL8/lttng-modules-2.9.1/tests/lttng-clock-plugin-test.ko
Can't read private key
INSTALL /tmp/tmp.frbWYvVaL8/lttng-modules-2.9.1/tests/lttng-test.ko
Can't read private key
DEPMOD 3.10.0-327.el7.x86_64
make[1]: Leaving directory `/usr/src/kernels/3.10.0-327.el7.x86_64'
This sounds like Linux module signing being enabled (documented at http://lxr.free-electrons.com/source/Documentation/module-signing.txt?v=4.8), which is usually turned on on modern UEFI Secure Boot-enabled systems. Your bootloader (shim-signed or other) is signed with some UEFI-preinstalled (trusted) OEM/KEK key, shim has some OS vendor keys preinstalled, and the vendor's kernel and modules are signed with the OS vendor key (more at https://wiki.ubuntu.com/SecurityTeam/SecureBoot). Your kernel probably has CONFIG_MODULE_SIG_FORCE enabled (as was done in Ubuntu: https://askubuntu.com/questions/755238) and will not load an unsigned module (or a module signed with a non-trusted key).
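A quick way to check whether a kernel enforces module signatures is to grep its config. The snippet below runs against a stand-in sample config so it works anywhere; on a real CentOS box you would point it at /boot/config-$(uname -r) instead:

```shell
# Inspect module-signing settings in a kernel config file.
# A hypothetical sample config stands in for /boot/config-$(uname -r).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
CONFIG_MODULE_SIG=y
CONFIG_MODULE_SIG_FORCE=y
CONFIG_MODULE_SIG_ALL=y
EOF
# If CONFIG_MODULE_SIG_FORCE=y, unsigned modules will be rejected at load time.
grep '^CONFIG_MODULE_SIG' "$cfg"
```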
If you are not the author of your OS distribution, you do not have the OS vendor's private key to sign modules with. And the message says that you have no key at all to sign the module with.
You have several options:
Try to find the needed module in your OS (prebuilt and signed by your OS vendor). If there is no such module, try asking the OS vendor to include it (or pay them to sign your module with their key). (Red Hat, with help from EfficiOS, made some LTTng packages for RHEL 7 in 2015: https://developers.redhat.com/blog/2015/07/09/lttng-packages-available-for-rhel-7/ "LTTng Packages now Available for Red Hat Enterprise Linux 7" - probably still posted on the packages.efficios.com portal and probably compatible with CentOS.)
Make your own key hierarchy. You can't add any key into the kernel binary signed by the vendor, but the kernel will allow you to use your MOK (Machine Owner Key) to sign modules. So you need to create your own key, install it into shim with mokutil (it will be added to the kernel as trusted once recorded in the hardware store - the UEFI key database), and sign new modules with it (the original kernel and OS modules will still work with the OS vendor key).
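A rough sketch of this option (names and paths are my assumptions, not from the post): generate a key pair with openssl, enroll the public half as a MOK via mokutil, then sign each module with the kernel's sign-file helper. Only the key generation below actually executes; enrollment and signing need root and the matching kernel tree, so they are shown as comments:

```shell
# Generate a self-signed module-signing key pair (MOK.priv / MOK.der).
# The CN is an arbitrary placeholder.
openssl req -new -x509 -newkey rsa:2048 -nodes -days 36500 \
    -keyout MOK.priv -outform DER -out MOK.der \
    -subj "/CN=Example module signing key/"

# Enroll the public key with shim (prompts for a one-time password,
# confirmed in the MokManager menu on the next reboot):
#   sudo mokutil --import MOK.der
#
# Sign a module with the kernel's sign-file helper (its location varies
# by distro and kernel version; older kernels ship it as scripts/sign-file
# in the kernel source tree):
#   sudo /usr/src/kernels/$(uname -r)/scripts/sign-file sha256 \
#       MOK.priv MOK.der /path/to/lttng-probe-foo.ko
```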
UNSAFE: disable Secure Boot and use a (custom-compiled?) kernel with module signing required and with your own key registered as trusted (it should be listed in cat /proc/keys or keyctl list %:.system_keyring), and sign all modules of the kernel.
UNSAFE, not recommended, and only usable as a temporary solution on a testing PC: disable Secure Boot and use a kernel (custom-compiled, or from the OS vendor if it ships such a version) with module signing disabled (disable CONFIG_MODULE_SIG_FORCE).
There are some manuals from OS vendors about module signing:
https://docs.fedoraproject.org/en-US/Fedora/23/html/System_Administrators_Guide/sect-signing-kernel-modules-for-secure-boot.html "The Fedora distribution includes signed boot loaders, signed kernels, and signed kernel modules. In addition, the signed first-stage boot loader and the signed kernel include embedded Fedora public keys...These sections also provide an overview of available options for getting your public key onto the target system where you want to deploy your kernel module."
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/sect-signing-kernel-modules-for-secure-boot.html "When Secure Boot is enabled, the EFI operating system boot loaders, the Red Hat Enterprise Linux kernel, and all kernel modules must be signed with a private key and authenticated with the corresponding public key. The Red Hat Enterprise Linux 7 distribution includes signed boot loaders, signed kernels, and signed kernel modules. In addition, the signed first-stage boot loader and the signed kernel include embedded Red Hat public keys. .. These sections also provide an overview of available options for getting your public key onto the target system where you want to deploy your kernel module."
Related
I have a stupid question about Homebrew: why are executables that I install via Homebrew trusted by macOS (Gatekeeper)? I.e., after installation I can run an executable and don't get a security popup and don't have to allow an exception - why is that?
I initially thought that Homebrew might sign/notarize the binaries in their CI, but looking at some random executables it doesn't look like they have a signature: spctl -a -v $(which <some-executable-installed-with-homebrew>).
Edit: meaning executables installed from bottles (pre-compiled binaries, not source packages compiled on my local machine).
There is no quarantine flag on a CLI app downloaded with curl.
Homebrew uses Unix core tools to download the bottles, and thus they don't have this flag set.
Homebrew also ad-hoc signs binaries.
Don't confuse code signing with notarization.
Notarization is where Apple vouches for software signed with a developer certificate's private key.
They cannot notarize ad-hoc-signed software (like Homebrew bottles) by definition.
Now when my executable is NOT notarized, it terminates with "Killed: 9", regardless of whether there's a quarantine attribute or not.
This is happening, I would speculate, because the binary here isn't ad-hoc signed. Nothing to do with notarization.
I bet you are on Apple Silicon, right?
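To check the two things discussed above - the quarantine attribute and the (ad-hoc) signature - you can inspect a binary directly. This is a sketch: xattr, codesign, and spctl only exist on macOS, so the commands are guarded, and /bin/ls stands in for a Homebrew-installed executable:

```shell
bin=/bin/ls   # stand-in; substitute $(which <your-homebrew-executable>)

if command -v xattr >/dev/null 2>&1; then
  # Prints the attribute if set; tools downloaded with curl normally have none.
  quarantine=$(xattr -p com.apple.quarantine "$bin" 2>/dev/null || echo "no quarantine attribute")
else
  quarantine="xattr not available (not macOS)"
fi
echo "$quarantine"

if command -v codesign >/dev/null 2>&1; then
  # "Signature=adhoc" in this output indicates an ad-hoc signature.
  signature=$(codesign -dv "$bin" 2>&1 | head -n 5)
else
  signature="codesign not available (not macOS)"
fi
echo "$signature"
```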
I have Xillinux OS (based on Ubuntu 12.04 LTS) installed on my hardware (a Zynq FPGA board). I have done some hardware reconfiguration and I need to rebuild my kernel after editing the config-3.12.0-xillinux-1.3 file. My question is: how do I rebuild the existing kernel on the hardware after making changes to the config file?
http://www.wiki.xilinx.com/Uartlite+Driver
This is the page above that I am referring to, where they say:
To enable the uartlite driver in the Linux kernel you either have to integrate it or build it as a kernel module (.ko). You can enable it with:
make menuconfig
---> Device Drivers ---> Character devices ---> Serial drivers ---> Xilinx uartlite serial port support
make menuconfig - do I have to enter this command on the OS running on my hardware, in the /root/boot/.config folder, to enable it?
What does ---> Device Drivers ---> Character devices ---> Serial drivers ---> Xilinx uartlite serial port support mean? Do I have to change directory?
The other option, as per the link posted above, is to add certain lines (below) to the config file, for which I would use the nano editor and then save it with Ctrl+X and then Y.
# integrate into the kernel
CONFIG_SERIAL_UARTLITE=y
# build as loadable module
CONFIG_SERIAL_UARTLITE=m
But they say that "After that you of course have to rebuild the kernel and deploy it to your Zynq device."
Zynq being the hardware I am running my OS on. What commands do I have to use to rebuild the existing kernel on my hardware after making changes to the .config file?
So, after rebuilding the kernel with the changes above, I just reboot to observe the changes?
EDIT:
I was referring to this link, http://www.thegeekstuff.com/2013/06/...-linux-kernel/
So, in order to compile the existing kernel on the hardware and build it, I edit the .config file using nano in the /boot folder and save it.
Then, I type "make" in the same folder as the config file.
Then, I type "make modules" in the same folder.
Then I type "make modules_install". Then I type "make install".
Then I reboot the system to see the new kernel installed.
Is this the right way of doing it?
Is this how you recompiled and rebuilt it ?
Currently there are 4 files in my boot directory: one config file and 3 .dts files. After rebuilding the kernel, this might change?
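The sequence in the EDIT above can be captured in one script. This is only a sketch under several assumptions of mine: the kernel source tree (not just /boot) is present on the board, the source lives in /usr/src/linux, and Zynq boots a uImage - adjust paths to your setup. The script is merely written out and syntax-checked here, not executed:

```shell
# Write a hypothetical rebuild script; paths and kernel version are assumptions.
cat > rebuild-kernel.sh <<'EOF'
#!/bin/sh
# Sketch: rebuild and install the kernel on the board itself.
set -e
cd /usr/src/linux                        # assumed location of the kernel source
cp /boot/config-3.12.0-xillinux-1.3 .config
make oldconfig                           # pick up edits such as CONFIG_SERIAL_UARTLITE
make -j2 uImage                          # Zynq boots a uImage (needs mkimage from u-boot-tools)
make modules
make modules_install
cp arch/arm/boot/uImage /boot/           # deploy, then reboot to run the new kernel
EOF
sh -n rebuild-kernel.sh && echo "syntax OK"
```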
I am having issues with my kernel driver build. I'm building a custom (albeit very basic) NVIDIA RDMA driver and am receiving build warnings during make. Specifically, it is looking for two NVIDIA API calls, nvidia_p2p_put_pages and nvidia_p2p_get_pages. Using 'nm' I see these entry points are in the NVIDIA driver module (nvidia.ko). However, I'm not familiar enough with the internals of the Linux kernel module build system to locate those entry points at build time.
The RDMA toolkit documentation refers to an extraction script "./NVIDIA-Linux-x86_64-.run" and a build directory. However, I was unable to locate any build files after extracting the latest driver sources.
As you can tell, I'm rather new to this. Any help would be greatly appreciated.
Thanks
The basic GPUDirect RDMA documentation is here.
As indicated in section 4.3, building an NVIDIA driver Linux kernel module requires various driver header files and makefiles.
These files can be accessed as follows:
get an appropriate NVIDIA Linux driver installer (.run file), such as 319.72 here
All NVIDIA Linux driver installers have command-line switch options. Basic options can be listed by appending --help to the driver installer command string, such as:
sh NVIDIA-Linux-x86_64-319.72.run --help
more advanced options can be accessed with:
sh NVIDIA-Linux-x86_64-319.72.run --advanced-options
one of the advanced options is -x, which will only extract the driver files; it will not install anything:
sh NVIDIA-Linux-x86_64-319.72.run -x
This will create a directory where the files are available. Within this directory, the kernel directory has the necessary header files and a sample kernel module makefile, which can be used to learn the appropriate libraries to link against:
cd NVIDIA-Linux-x86_64-319.72/kernel
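One common way (my suggestion, not from the documentation quoted above) to resolve the nvidia_p2p_* symbols at build time is to point kbuild at the Module.symvers file produced when the NVIDIA kernel module is built in that kernel directory, via KBUILD_EXTRA_SYMBOLS. The module name and paths below are placeholders:

```shell
# Write a minimal out-of-tree Kbuild file for a module that uses the
# nvidia_p2p_* symbols exported by nvidia.ko.
cat > Kbuild <<'EOF'
obj-m := my_rdma_driver.o

# Module.symvers appears in the extracted NVIDIA kernel/ directory once the
# nvidia module has been built there; this path is a placeholder.
KBUILD_EXTRA_SYMBOLS := /path/to/NVIDIA-Linux-x86_64-319.72/kernel/Module.symvers
EOF

# Then build against the running kernel's headers (placeholder invocation):
#   make -C /lib/modules/$(uname -r)/build M=$PWD modules
cat Kbuild
```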
I'm developing a product that includes a kernel extension, and have found a weird problem on one of our testing machines that I can't find a solution for.
On my development machine (OSX 10.8.3 and the latest Xcode), I codesign our kext like this:
$ codesign -s "Developer ID Application: Mycompany" my.kext
my.kext: signed bundle with Mach-O thin (x86_64) [com.mycompany.kext]
It all goes fine: the my.kext/Contents/MacOS/mykext binary is modified (a signature is added) and a folder my.kext/Contents/_CodeSignature is created with a file CodeResources in it.
When loading this kext on one of our testing machines (OSX 10.7.5 with Xcode 3.2.6, Darwin Kernel 11.4.2 x86_64), it refuses to do so:
kxld[com.mycompany.kext]: The Mach-O file is malformed: Invalid segment type in MH_KEXT_BUNDLE kext: 29.
Can't load kext com.mycompany.kext - link failed.
Failed to load executable for kext com.mycompany.kext.
Kext com.mycompany.kext failed to load (0xdc008016).
If I load the module unsigned, there is no problem. I also tried signing the kext from Xcode instead of the command line, with the same results.
I moved the signing certificate to that troublesome computer and signed the kext there. The signing process goes differently:
$ codesign -v -s "Developer ID Application: Mycompany" my.kext
my.kext: signed bundle with generic [com.mycompany.kext]
Once signed, the kext executable at my.kext/Contents/MacOS/mykext is unmodified, and the folder Contents/_CodeSignature includes more files: CodeDirectory, CodeRequirements, CodeResources and CodeSignature. This signed kext seems to work on all devices so far.
So the question is:
What's going on here? What am I doing wrong in my signing process? How can I create a signature on an up-to-date machine that will work on this "outdated" machine? I understand that the target machine is refusing to load the kext because it does not understand the signed binary. Signing from this device creates some kind of detached signature where the binary is untouched. I can't get my codesign to do that; the -D option seems useless and won't create a _CodeSignature folder inside the bundle.
Update
As of Xcode 4.6, the problem still persists. Only i386 kexts are signed in a backwards-compatible way; x64 and mixed-arch kexts can't be loaded by some 10.6 and 10.7 kernels because they don't understand the signature embedded in the binaries.
The codesign command-line tool has an undocumented --no-macho flag for this purpose, but seems to be unimplemented.
Update 2
The problem still persists as of Xcode 4.6.2 and 4.6.3.
Preamble: Explanation of what's going on
The older kernel linker/loaders can't handle certain types of load commands in the kext's Mach-O object code, including the LC_CODE_SIGNATURE section. This has also caused problems with e.g. mixed 32-bit/64-bit kexts built using Xcode 4.5.x, where the toolchain added various other sections that the Lion and Snow Leopard kernel linkers weren't expecting. (this bug is fixed in 4.6.x)
Apple hasn't released any specific info on codesigning kexts that I can find. If you look at their own kexts, some are signed, and some are not. (the open source ones seem to be unsigned as far as I can tell) If you look at the Mach-O sections in the binaries for their signed kexts (using otool -l), you will notice that LC_CODE_SIGNATURE is absent, unlike .app bundle binaries, where this inline signature is now the default. This is the case even for kexts that ship with Mountain Lion.
So the solution for supporting older versions is to place the signature in a separate file rather than letting codesign insert a signature section into the binary.
Solution
I found the undocumented --no-macho flag in the codesign source code and that seems to do the trick. No LC_CODE_SIGNATURE section, and the signature ends up in _CodeSignature/CodeSignature.
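For reference, usage would look like the sketch below. The certificate name and kext path are placeholders, the --no-macho flag is undocumented (as noted above), and the command only does anything on OS X with the signing identity installed, so it is guarded:

```shell
# Sketch: detached-style kext signing with the undocumented --no-macho flag.
# "Mycompany" and my.kext are placeholders from the question above.
if command -v codesign >/dev/null 2>&1; then
  codesign -s "Developer ID Application: Mycompany" --no-macho my.kext \
    && result="signed" \
    || result="signing failed (identity or kext not present)"
else
  result="codesign not available (not OS X)"
fi
echo "$result"
```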
I believe the solution can be found under the BSD and Kernel Features section at the bottom of the OS X v10.9 Mavericks page in the What's New in OS X document. Unfortunately, I'm not sure if I can disclose the information here as it falls under the pre-release category. However, for those of you who have a paid Mac Dev account, here's the URL:
https://developer.apple.com/library/prerelease/mac/releasenotes/MacOSX/WhatsNewInOSX/Articles/MacOSX10_9.html#//apple_ref/doc/uid/TP40013207-CH100
I'm cross-compiling kernel modules and some libraries on x86 for ppc.
Is it possible to create ld.so.cache and modules.dep on my host system?
P.S I'm using ELDK tools.
modules.dep should be generated when the modules are built. It's also a text file, so it is readable on either architecture.
I'm pretty sure it'd be hard to generate ld.so.cache on anything but the target system itself. It's a binary file built up from the specific libraries available on your rootfs and the configuration in /etc/ld.so.conf.
depmod can run just fine against foreign-architecture modules. Assuming you've built your kernel and deployed your modules (e.g. 3rd-party modules) to your system root:
/sbin/depmod -ae -F /path/to/System.map -b /path/to/system/root <kernel-version-name>
Haven't found a solution for ldconfig, yet.