I have a manual process for compiling a Linux kernel for an embedded ARM device. I use the CONFIG_ARM_APPENDED_DTB=y option, append my DTB file to the zImage, then convert to a uImage:
cat arch/arm/boot/zImage ./arch/arm/boot/dts/lpc3250-phy3250.dtb > arch/arm/boot/zImage-dtb
mkimage -A arm -O linux -C none -T kernel -a 0x80008000 -e 0x80008000 -n 'Linux' \
-d arch/arm/boot/zImage-dtb arch/arm/boot/uImage
I'm attempting to migrate to Yocto. How can I implement the same process? I can set KERNEL_IMAGETYPE = "zImage" but I'm not sure how to postprocess the zImage file.
Note that my version of u-boot is fixed and quite old, so I am not currently considering migrating to other formats such as FIT.
I am a newcomer to Yocto, so expository answers are especially welcome.
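One possible direction (a minimal sketch only, not necessarily the idiomatic Yocto way; the recipe name, the u-boot-tools-native dependency, the override syntax and the paths are all assumptions on my part) is a .bbappend on your kernel recipe that repeats the two manual steps once the kernel has been built:

# linux-custom_%.bbappend -- hypothetical name; match it to your kernel recipe
# mkimage comes from u-boot-tools-native (u-boot-mkimage-native on older releases)
DEPENDS += "u-boot-tools-native"

# Older releases spell this do_deploy_append(); the paths under ${B} mirror
# the manual commands above and may need adjusting for your setup.
do_deploy:append() {
    cat ${B}/arch/arm/boot/zImage ${B}/arch/arm/boot/dts/lpc3250-phy3250.dtb \
        > ${B}/arch/arm/boot/zImage-dtb
    mkimage -A arm -O linux -C none -T kernel -a 0x80008000 -e 0x80008000 \
            -n 'Linux' -d ${B}/arch/arm/boot/zImage-dtb ${DEPLOYDIR}/uImage
}

I've also seen a KERNEL_DEVICETREE_BUNDLE switch in newer kernel-devicetree.bbclass that is meant to produce appended-DTB images; it may be worth checking whether your release has it.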
Related
I need to prepare an apt package that enables building kernel modules for a custom Linux. On a different machine I have cross-built the kernel headers and modules using the headers_install and modules_install make targets. After copying the generated directories I'm still not able to build kernel modules on the target machine, since /lib/modules/$(shell uname -r)/build is missing.
Here is my question: what are the minimal dependencies I need to include in my package in order to enable module builds (alongside the generated kernel headers and modules mentioned above)?
Thanks in advance.
After some experimenting, I arrived at a working solution:
#!/bin/bash
# Usage: <script> <kernel-source-tree> <modules-dir, e.g. /lib/modules/$(uname -r)>
ARCH=arm
SRC_DIR=$1
MOD_DIR=$2
BUILD_DIR=$MOD_DIR/build
set -ex
cd "$SRC_DIR"
# modules_install installs into /lib/modules/<version> by default;
# it honours INSTALL_MOD_PATH (INSTALL_HDR_PATH only affects headers_install).
make modules_install
# Replace the build/source symlinks created by modules_install
# with a real kbuild tree.
rm "$MOD_DIR"/{build,source}
mkdir "$BUILD_DIR"
cp "$SRC_DIR"/{.config,Makefile,System.map,Module.symvers} "$BUILD_DIR"
mkdir -p "$BUILD_DIR/arch/$ARCH"
cp "$SRC_DIR/arch/$ARCH/Makefile" "$BUILD_DIR/arch/$ARCH/"
cp -r "$SRC_DIR/scripts" "$BUILD_DIR/"
# Build a headers tree manually, because
# `make headers_install` doesn't put everything needed.
cp -r "$SRC_DIR/include" "$BUILD_DIR/"
cp -r "$SRC_DIR/arch/$ARCH/include/"* "$BUILD_DIR/include/"
cp -r "$SRC_DIR/include/generated/"* "$BUILD_DIR/include/"
cp -r "$SRC_DIR/arch/$ARCH/include/generated/"* "$BUILD_DIR/include/"
cp "$SRC_DIR/include/linux/kconfig.h" "$BUILD_DIR/include/linux/"
This script is fed the path to a kernel source tree after the kernel has been built natively (not cross-compiled).
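For illustration, a possible invocation on the target (the script and source paths are hypothetical), followed by the usual out-of-tree module build against the resulting tree:

sudo ./make-kbuild-tree.sh /usr/src/linux /lib/modules/$(uname -r)
# afterwards, out-of-tree modules build the standard way:
make -C /lib/modules/$(uname -r)/build M=$PWD modules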
Working through Jon Erickson's book Hacking: The Art of Exploitation. He uses Intel-format assembly syntax and provides the following snippet:
reader@hacking:~/booksrc $ objdump -M intel -D a.out | grep -A20 main.
08048374 <main>:
I'm getting this error:
Mac-of-Thor:test thorkamphefner$ objdump -M
objdump: Unknown command line argument '-M'. Try: '/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/objdump -help'
objdump: Did you mean '-C'?
What do I need to do to update objdump?
objdump on a Mac is LLVM's llvm-objdump, not the GNU Binutils objdump, which is the one that takes command-line options like -M intel.
I think I've read that the standard ways of installing GNU Binutils on a Mac will give you gobjdump.
See Disassemble into x86_64 on OSX10.6 (But with _Intel_ Syntax)
objdump -disassemble -x86-asm-syntax=intel should work on a Mac (for llvm-objdump).
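As a concrete sketch of both routes (assuming Homebrew; the exact binary name and install path GNU Binutils ends up with can vary):

# GNU Binutils objdump (often exposed as gobjdump, or keg-only under the binutils prefix):
brew install binutils
gobjdump -M intel -D a.out | grep -A20 main.
# Apple's llvm-objdump with Intel syntax:
objdump -disassemble -x86-asm-syntax=intel a.out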
I have a zImage and the kernel source. I ran
make zImage
to generate the zImage.
When I flash this, the board won't boot up.
So how do I convert this to a uImage that U-Boot reads properly?
Thanks!
Install u-boot-tools. The command depends on your distribution; on Debian/Ubuntu it should look like
sudo apt-get install u-boot-tools
See the U-Boot documentation on tool installation.
Then either run mkimage directly:
mkimage -A <arch> -O linux -T kernel -C none -a <load-address> -e <entry-point> -n "Linux kernel" -d arch/arm/boot/zImage uImage
or build it from the kernel source tree:
make uImage
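For a concrete version of both options (treat the addresses and toolchain prefix as placeholders for your board):

mkimage -A arm -O linux -T kernel -C none -a 0x80008000 -e 0x80008000 \
        -n "Linux kernel" -d arch/arm/boot/zImage uImage
# or from the kernel tree; ARM builds generally want LOADADDR here:
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- LOADADDR=0x80008000 uImage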
I've written some custom shellcode that I want to encode using Metasploit's msfvenom. Back when msfencode was still available, the command would have gone like this:
$ echo -ne "\x31…\x80" | sudo msfencode -a x86 -t c -e x86/jmp_call_additive
"pipe the shellcode to msfencode for architecture x86, with the output as a C array, using the x86/jmp_call_additive encoder"
Now I want to do the same thing except with msfvenom, so I tried:
$ echo -ne "\x31...\x80" | sudo msfvenom -e x86/jmp_call_additive -a x86 -t c
But I get the following error message:
Attempting to read payload from STDIN...
You must select a platform for a custom payload
I thought that giving the -a flag specified the correct platform/architecture. I've also tried --platform in place of -a, but I still get the same error message.
I'm running this on a virtual machine using 32-bit Ubuntu. Thanks for any help.
$ echo -ne "\x31...\x80" | sudo msfvenom -e x86/jmp_call_additive -a x86 -p - --platform linux -f c
"pipe the custom shellcode into msfvenom with the x86/jmp_call_additive encoder, on the x86 architecture, as a custom payload from stdin, for the linux platform, with C array output format"
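An illustrative variant of the same conversion (the file name is hypothetical) reads the raw shellcode from a file instead of echo, and adds -b to re-encode around NUL bytes:

msfvenom -p - -a x86 --platform linux -e x86/jmp_call_additive -b '\x00' -f c < shellcode.bin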
I am trying to understand how vmlinux is created with the help of the link-vmlinux.sh script. I can see that it passes the -p option to the linker while building vmlinux, but I couldn't find any option named -p when I ran the linker with --help.
#arm-linux-gnueabihf-ld -EL -p --no-undefined -X --build-id -o vmlinux
Can you please tell me what the '-p' option does in the above command?
I reckon it prints the program headers that the object file format uses, also known as segments. The program headers describe how the program should be loaded into memory. You can print them out by using the objdump program with the -p option.
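For reference, the program headers/segments mentioned here can be inspected like this (vmlinux stands in for any ELF file):

arm-linux-gnueabihf-objdump -p vmlinux   # the "Program Header" section lists the segments
readelf -l vmlinux                       # equivalent view via readelf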
Did you try arm-linux-gnueabihf-ld --help?