After a small modification of a kernel module (e.g. adding another USB device ID to a driver), is it possible to hold back updates for this module (e.g. from apt-get upgrade), and is there any reliable method to determine how long the module will stay compatible?
What are the conditions for a binary module to fit into a kernel image?
Would it be possible to have a post-install hook that keeps track of the modified module's sources, patches them, and rebuilds the module on every kernel image update, as long as the patch still applies?
What are the chances that such a patch would actually produce a working module, as long as it can be applied?
The goal is to have the machine with the modified module follow package updates as long as possible, and then stay pinned to compatible versions until the developer delivers an updated patched module binary or source.
You may consider using DKMS: http://en.wikipedia.org/wiki/Dynamic_Kernel_Module_Support.
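As a sketch (the module name, version, and destination directory are hypothetical), DKMS is driven by a dkms.conf placed next to the module sources under /usr/src; with AUTOINSTALL enabled it rebuilds the module automatically on every kernel update:

```shell
# /usr/src/mydriver-1.0/dkms.conf  (hypothetical module "mydriver")
PACKAGE_NAME="mydriver"
PACKAGE_VERSION="1.0"
BUILT_MODULE_NAME[0]="mydriver"
DEST_MODULE_LOCATION[0]="/kernel/drivers/usb/misc"
AUTOINSTALL="yes"   # rebuild whenever a new kernel is installed

# register, build and install the module for the running kernel
dkms add -m mydriver -v 1.0
dkms build -m mydriver -v 1.0
dkms install -m mydriver -v 1.0
```

As long as the patched sources still compile against the new kernel headers, the module follows kernel updates without manual intervention; if the build breaks, DKMS reports it instead of silently leaving a stale module.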
I wrote a C++ program with multiple classes, divided into multiple files, which is intended to run on an embedded device (a Raspberry Pi 2, to be specific) that has no internet access. Building the sources and installing the dependencies on every device would therefore be very laborious.
Is there a way to compile the program on one of the devices (unlike the others, this one has internet access), so that I can just transfer the build files, e.g. via USB, to the other devices? This should also include the various libraries I used, so that I don't have to install them on every device. These are mainly the standard library, but also a library cloned and built from source and a library installed with apt (I linked the libraries used as an example, but they shouldn't affect the process, I guess).
I'm using CMake. Is there an option to make CMake compile a program into a (set of) file(s) that runs independently of the system-installed libraries? In other words: the files run without the required libraries being installed on the system, because they are shipped with the build files.
Edit:
My main problem is that I cannot get a certain dependency onto the target devices due to the lack of internet access. Can I build the package and also include the library in that build, without having to install it?
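What the question describes is essentially static linking. A minimal sketch of what that could look like in CMake (the target and source names are hypothetical, and fully static glibc binaries come with caveats around NSS and dlopen):

```cmake
cmake_minimum_required(VERSION 3.13)
project(myapp CXX)

add_executable(myapp main.cpp)

# Prefer static archives (.a) over shared objects (.so) when CMake
# resolves find_library()/find_package() calls.
set(CMAKE_FIND_LIBRARY_SUFFIXES ".a")

# Link the GCC runtime and the C++ standard library statically, so the
# binary does not depend on the versions installed on the target device.
target_link_options(myapp PRIVATE -static-libgcc -static-libstdc++)

# For a fully static binary (glibc caveats apply), uncomment:
# target_link_options(myapp PRIVATE -static)
```

After building on the internet-connected Pi, the resulting binary can be copied via USB to the other devices; `ldd myapp` shows which shared libraries, if any, it still expects to find on the target.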
I'm not sure I fully understand why you need internet access for your deployment, but I can give you several methods, and you can choose the one that seems best to you.
Method A: Cloning the SD card image
During your development phase, you ended up with a working RPi device and you want to replicate it. You can use tools to duplicate your image onto another SD card, N times, and eventually this may be sufficient to make it work.
Pros: Very quick method
Cons: Usually, your development phase involves adjustments, trying different tools, different versions, etc., so your original RPi image is not clean, and you'll replicate that. Definitely not suitable for an industrial project, but it could be sufficient for a personal one.
Method B: Create deployments scripts
You can create a deployment script on your computer to copy, configure, and install what you need. Assuming you start with a certain version of Raspberry Pi OS: you flash it, then you boot your Pi, which is connected via Ethernet for example. You can then start a script on your computer that will:
Copy needed sources / packages / binaries
(Optional) compile sources (if you have a compiler that suits your need on RPi OS)
Miscellaneous configuration
To do all these, a script like this can do the job:
#!/bin/sh
PI_USERNAME="pi"
PI_PASSWORD="raspberry"
PI_IPADDRESS="192.168.0.3"
# example of how to execute a command remotely
sshpass -p ${PI_PASSWORD} ssh ${PI_USERNAME}@${PI_IPADDRESS} sudo apt update
# example of how to copy a local file onto the RPi
sshpass -p ${PI_PASSWORD} scp local_dir/nlohmann/json.hpp ${PI_USERNAME}@${PI_IPADDRESS}:/home/pi/sources
Important notes:
Hard-coded credentials are not recommended.
This script assumes you are using Linux, but you'll find equivalent tools under Windows.
It also assumes your RPi has a fixed IP, but you can improve the script to automatically find the RPi on the network (lots of possibilities).
Pros: While creating this deployment script, you force yourself to start from a clean image, so no dirty environment is duplicated.
Cons: Takes a bit longer than method A
Method C: Create your own Raspberry PI image using Yocto
Yocto is a tool for creating your own images, and it supports the Raspberry Pi. You can customize absolutely everything and produce an SD card image that you can simply flash onto your RPis' SD cards.
Pros: Very complete tool, industrial process
Cons: Quite complicated to deal with, not suitable for beginners, time cost
Since you said in the comments that it's only for 10 devices and that you were a bit wary of cross-compiling, I would not recommend the Yocto method for you. I would not recommend method A either, mostly because of the dirty-environment duplication (but it's up to you in the end). Method B, with the deployment script, is probably the best way to go.
I need a debug version of glibc. I have some doubts regarding the installation of glibc-2.29 from source on Kali Linux. Based on the post https://www.tldp.org/HOWTO/html_single/Glibc-Install-HOWTO/,
To install glibc you need a system with nothing running on it, since many processes (for example sendmail) always try to use the library and therefore block the files from being replaced. Therefore we need a "naked" system, running nothing except the things we absolutely need. You can achieve this by passing the boot option
init=/bin/bash to your kernel.
it says that we need to install glibc in a single-user-mode environment. Another post, https://www.tldp.org/HOWTO/Glibc2-HOWTO-5.html, says single-user mode is not required for installation, only backing up the old libraries. I don't know which one to follow. Can anyone help?
I found that we can use glibc without installing it: build it from source, adding the '-g' flag to ./configure, and set the LD_LIBRARY_PATH variable as follows after building:
LD_LIBRARY_PATH=/path/to/the/build_directory gdb -q application
Note: this solution only works when the system GLIBC and the built-from-source GLIBC exactly match, as explained here.
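As a sketch, the whole build might look like this (the paths are examples; glibc must be configured from a separate build directory, and it does not support building with optimization fully disabled, so keep -O2 or -O1 alongside -g):

```shell
# build out of tree, next to the unpacked glibc-2.29 sources
mkdir glibc-build && cd glibc-build
../glibc-2.29/configure --prefix=/opt/glibc-2.29 CFLAGS="-O2 -g"
make -j"$(nproc)"

# debug against the freshly built libraries without installing them
LD_LIBRARY_PATH=$PWD gdb -q ./application
```

Note that this avoids touching the system libc entirely, which sidesteps the single-user-mode question for the debugging use case.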
I need a debug version of glibc.
Most distributions supply ready-made libc6-dbg packages that match your installed GLIBC. This is the best approach unless you are a GLIBC developer (or plan to become one).
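On Debian-derived systems such as Kali, that typically amounts to:

```shell
sudo apt install libc6-dbg
# gdb then finds the separate debug symbols automatically
# (they are installed under /usr/lib/debug)
gdb -q ./application
```

This gives you source-level stepping into the libc that is actually running on your system, with none of the risks of replacing it.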
I have some doubts regarding the installation of glibc-2.29 from source in kali linux.
Installing / replacing system libc is almost guaranteed to render your system unbootable if there are any mistakes. Recent example.
Before you begin, make sure you either know how to recover from such a mistake (have a rescue disk ready and know how to use it), or you have nothing of value on the system and can re-image it from installation media in the likely case that you do make a mistake.
The document you referenced talks about upgrading from libc5 to libc6. It was last updated on 22 June 1998, and is more than 20 years old. I suggest you find some more recent sources. Current documentation does suggest doing make install while in single-user mode.
I've just compiled BPF examples from kernel tools/testing/selftests/bpf and tried to load as explained in http://cilium.readthedocs.io/en/v0.10/bpf/:
% tc filter add dev enp0s1 ingress bpf \
object-file ./net-next.git/tools/testing/selftests/bpf/sockmap_parse_prog.o \
section sk_skb1 verbose
Program section 'sk_skb1' not found in ELF file!
Error fetching program/map!
This happens on Ubuntu 16.04.3 LTS with kernel 4.4.0-98, llvm and clang of version 3.8 installed from packages, iproute2 is the latest from github.
I suspect I'm running into some toolchain/kernel version/features mismatch.
What am I doing wrong?
I do not know why tc complains. On my setup, with a similar command, the program loads. Still, here are some hints:
I think the problem might come, as you suggest, from some incompatibility between kernel headers version and iproute2, and that some relocation fails to occur, although on a quick investigation I did not find exactly why it refuses to load the section. On my side I'm using clang-3.8, latest iproute2, but also the latest kernel (some commit close to 4.14).
If you manage to load the section somehow, I believe you would still encounter problems when trying to attach the program in the kernel. The feature called “direct packet access” is only present on kernels 4.7 and higher. This is what makes you able to use skb->data and skb->data_end in your programs.
Then, as a side note, this program sockmap_parse_prog.c is not meant to be used with tc. It is supposed to be attached directly to a socket (search for SOCKMAP_PARSE_PROG in file test_maps.c in the same directory to see how it is loaded there). Technically nothing prevents one from attaching the program as a tc filter, but it will probably not work as expected. In particular, the value returned from the program will probably not have a meaning that the tc classifier hook will understand.
So I would advise trying with a recent kernel, and seeing if you have more success. Alternatively, try compiling and running the examples that you can find in your own kernel sources. Good luck!
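For reference, a minimal program that actually is meant for tc might look like the sketch below (the section name matches the default iproute2 looks for, and the function name is arbitrary; it is compiled with clang -O2 -target bpf, not as a normal host binary):

```c
#include <linux/bpf.h>

/* Minimal tc classifier: let every packet pass.
 * Returning 0 corresponds to TC_ACT_OK. */
__attribute__((section("classifier"), used))
int cls_entry(struct __sk_buff *skb)
{
    return 0;
}

/* License section, required by the kernel to use GPL-only helpers. */
char _license[] __attribute__((section("license"), used)) = "GPL";
```

Loading it would then use the same tc invocation as in the question, with `section classifier` instead of `section sk_skb1`.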
I need to provide a single update file to a customer in order to update an embedded system via USB. The system is built using Yocto. I'm curious if the plan I have to implement USB updating is viable, or if I'm missing something that should be obvious.
opkg exists on the system, but in order to use opkg update it needs to have a repo to pull from. Since I have no network capabilities I will need to put the entire repo on a USB drive. Since I need to provide a single file to the customer the repo will then need to be a tar file.
Procedure
Plug USB drive in
The udev rule calls a script and pushes it to the background since this will be a long process (see this)
Un-tar the repo update file
opkg update
Notify user they may remove USB drive
At least from a high-level point of view does this sound like it is a good way to update an embedded system via USB? What pitfalls might exist?
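The procedure above might be sketched like this (all paths, the rule match, and the systemd-run indirection are illustrative; long-running work should not run directly in the udev context, which is why the script is launched detached):

```shell
# /etc/udev/rules.d/99-usb-update.rules (example rule)
# ACTION=="add", KERNEL=="sd[a-z][0-9]", RUN+="/usr/bin/systemd-run /usr/sbin/usb-update.sh %k"

#!/bin/sh
# /usr/sbin/usb-update.sh -- hypothetical background update script
set -e
DEV="/dev/$1"
MNT=/mnt/usb-update
REPO=/var/opkg-repo

mkdir -p "$MNT" "$REPO"
mount "$DEV" "$MNT"
tar -xf "$MNT"/repo-update.tar -C "$REPO"   # un-tar the repo update file
opkg update && opkg upgrade                  # repo conf points at file://$REPO
umount "$MNT"
echo "Update done, you may remove the USB drive"   # however the UI notifies
```
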
Your use case is covered by SWUpdate; it may be worth taking a look at my project (github.com/sbabic/swupdate). Upgrading from USB with a single image file is one of its use cases, and you can use meta-swupdate (listed on OpenEmbedded) to generate the single image with all artifacts.
Well, regarding what pitfalls might exist: one of the biggest would likely be a power outage in the middle of the process. How do you recover in that case? (The answer might depend on what type of embedded device you're making.) Personally, I'm in favour of complete image-based upgrades, not package-based upgrades.
Regarding your scenario with the tarball: do you have enough space on your device to unpack it? Instead of distributing a tarball of your opkg repository, it might be wiser to distribute e.g. an ext2 or squashfs image of the repository. That would allow you to mount it from the USB drive using a loopback device.
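The squashfs variant could be sketched like this (file and directory names are examples):

```shell
# on the build machine: pack the opkg repo into a single compressed file
mksquashfs ./opkg-repo repo-update.squashfs -comp xz

# on the device: mount it read-only straight from the USB drive,
# no unpacking and no extra storage needed
mkdir -p /mnt/repo
mount -o loop,ro /mnt/usb/repo-update.squashfs /mnt/repo
# then point the opkg feed at file:///mnt/repo and run `opkg update`
```
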
Apart from that, as long as you have a good way to communicate with the user, your approach should work. The main issue is what you do in case of an interrupted upgrade process; that is something you need to think about in advance.
When deciding on the update format and design there are several things to consider, including what you want to update (e.g. kernel, applications, bootloader). There is a paper on the most popular designs here: https://mender.io/user/pages/04.resources/_white-papers/Software%20Updates.pdf
In your case (no network, lots of storage available on USB) the easiest approach is probably a full rootfs update. I am involved in an open source project Mender.io which does a dual A/B rootfs update and integrates with the Yocto Project to make it easy and fast to enable updates on devices without custom low-level coding.
I am trying to compile the kernel from downloaded source. I made the kernel image using sources from kernel.org.
I have successfully loaded it into GRUB, but when I try to load a module into the running kernel it gives the error message: "invalid magic number". I cannot figure out what I need to fix to get things done.
Steps that I've followed:
make xconfig
make bzImage
make modules
make modules_install
I also changed the name of the image from bzImage (in the /boot folder), then created an initrd image with:
# dracut /boot/initramfs-3.1.6-1.fc16.x86_64.img 3.1.6-1.fc16.x86_64 (command copied from the net)
Every time you compile a kernel, you must also recompile the kernel modules that you want to use with that kernel. For example, you cannot load a module compiled for kernel 2.6.39 on kernel 3.7; you must recompile it for kernel 3.7.
More details --> better answer
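For an out-of-tree module, the rebuild against the currently running kernel usually looks like this sketch (the module name is hypothetical):

```shell
# build against the headers/build tree of the running kernel
make -C /lib/modules/"$(uname -r)"/build M="$PWD" modules

# load the freshly built module
sudo insmod ./mymodule.ko    # or: sudo depmod -a && sudo modprobe mymodule
```

In-tree modules are instead rebuilt by the `make modules && make modules_install` steps of the kernel build itself.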
Actually, I doubt this has anything to do with kernel modules; it seems the kernel itself is being referred to as a module. It is possible the kernel was built incorrectly or is being loaded incorrectly, possibly from the GRUB command line.
Try this: http://forums.gentoo.org/viewtopic-t-932358-start-0.html
It is also possible that some file in the kernel build didn't get cleaned up properly and so still contains stale data from changes you made in a previous attempt at building it.
Also note that the x86 images will end up at arch/x86_64/boot/bzImage or arch/x86/boot/bzImage inside the kernel source tree; make sure you actually copied the kernel itself and not some other file.
If that fails, try GRUB 1.x, as it is simpler to use than GRUB 2.x. Just note that a lot of things are different, so you should read tutorials for the correct version of GRUB. Often GRUB 1.x will be in a grub-legacy or similar package, depending on the distro.
Edit: If you are building your kernel for your hardware only, do not use an initramfs; it is overkill. There are cases where you would want one: if your system is incapable of loading a kernel large enough for the essential drivers (SPARC, for instance, is very limited in kernel image size), or perhaps when booting over the network, but by and large it isn't needed. If you must use an initramfs, get your kernel build working without it first.
Also, personally, I build my kernel with the essential drivers included (disk and filesystem, basically) and build it with:
make mrproper (save/back up your .config first)
make menuconfig
make -j8
make modules_install
cp arch/x86_64/boot/bzImage /boot/linux-3.7.1
(modify grub to boot the new kernel) and I'm done and ready to reboot.
Any chance you could attach a screenshot of the failure?
I am not getting your question 100% clearly. Anyway, you downloaded a kernel tree from kernel.org and successfully booted the new image.
Then you are trying to load an LKM, i.e. a kernel module, using insmod or modprobe,
and you are getting "invalid magic number".
Solution
Recompile the kernel module against the new kernel, then try to insert it again.
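You can see what kernel a module was built for and compare it with the running one (the module path here is just an illustration):

```shell
modinfo ./mymodule.ko | grep vermagic   # kernel version the module was built for
uname -r                                # kernel version currently running
# if the two disagree, the kernel refuses to load the module
```
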