I want to learn about the Linux kernel, which is why I wanted a simple but powerful enough way to test the kernel changes that I make.
I used the information on this page https://mgalgs.github.io/2015/05/16/how-to-build-a-custom-linux-kernel-for-qemu-2015-edition.html to get started.
So now I can start a QEMU session with a kernel of my choosing and also have the BusyBox utilities.
The part I cannot figure out is how to transfer a kernel module (.ko) to this virtual machine so that I can load it into my modified kernel. I also tried transferring a C program by incorporating it into the initramfs, but when I try to run the program I receive the following error message:
"/bin/sh: ./proc1: not found"
Should I use a virtual HDD image? If so, how do I create and use one? How do I transfer files from the host OS to the virtual HDD?
Thanks in advance.
The virtual HDD I created was not discovered because I didn't run mdev -s in the init file. After adding that, I could mount /dev/sda inside the QEMU session.
The C program that could not be run was fixed by compiling it with the -static flag: the BusyBox initramfs has no dynamic loader or shared libc, and a dynamically linked binary whose interpreter is missing produces exactly that misleading "not found" error.
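For anyone hitting the same issue, here is a minimal sketch of the workflow that worked for me. File names (disk.img, mymodule.ko, proc1) and the image size are illustrative, and this assumes the guest kernel has the ext2 and disk drivers built in:
# on the host: create and populate a small disk image
qemu-img create -f raw disk.img 64M
mkfs.ext2 -F disk.img
mkdir -p mnt
sudo mount -o loop disk.img mnt
sudo cp mymodule.ko proc1 mnt/   # proc1 built with: gcc -static -o proc1 proc1.c
sudo umount mnt
# boot QEMU with the image attached as a drive
qemu-system-x86_64 -kernel bzImage -initrd initramfs.cpio.gz -hda disk.img -append "console=ttyS0" -nographic
# inside the guest (init script or BusyBox shell):
mdev -s                 # populate /dev so /dev/sda appears
mount /dev/sda /mnt
insmod /mnt/mymodule.ko
/mnt/proc1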
I am using the Linux perf tool for profiling a shared library. It worked well on Ubuntu, but now I want to run it on embedded Linux, where I cannot use apt-get to install the perf tools. That is why I have to compile everything from source.
Can anyone please explain how to compile the Linux perf tools and the dependent kernel module from source?
Any help will be highly appreciated.
Thanks
Arslan Ali
The source code of perf is found in the kernel tree under tools/perf, so use the same kernel source that you are using for your board.
To build the perf tools, go to the perf directory mentioned above and run the commands below.
These commands will change based on your cross toolchain:
export CC=arm-linux-gnueabihf-gcc
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf-
After the build is finished, copy the perf binary to the /bin directory on your board and add execute permission to it. Then you can use perf on the board.
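For example, assuming the board is reachable over SSH (the IP address here is hypothetical):
scp tools/perf/perf root@192.168.1.10:/bin/
ssh root@192.168.1.10 'chmod +x /bin/perf'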
I am self-studying the 2019 version of MIT 6.828/6.S081: Operating System Engineering.
I was trying to attach GDB to xv6 running on RISC-V using QEMU, to learn about what is going on when context switching happens between user mode and kernel mode.
After doing make qemu-gdb and gdb in the same directory, my GDB connected to QEMU successfully. However:
(gdb) x/2i $pc
=> 0xd8c: ecall
0xd90: ret
The problem is: if I now stepi, it "jumps over" the ecall to 0xd90 instead of stepping into kernel space.
Additionally, accessing any kernel address is not allowed, as if I were debugging a normal userland program:
(gdb) i r stvec
stvec 0x3ffffff000 274877902848
(gdb) x/i $stvec
0x3ffffff000: Cannot access memory at address 0x3ffffff000
Environment:
Host VM: Manjaro 19.0.2
sudo pacman -Syy
sudo pacman -S riscv64-linux-gnu-binutils riscv64-linux-gnu-gcc riscv64-linux-gnu-gdb qemu-arch-extra
GDB: 9.1
QEMU: 4.2.0
GCC: 9.2.0
I would much appreciate it if anyone could share some insight into what is going on here. Thanks a lot!
I guess you are running your code on Ubuntu; that is the problem I experienced. I then switched to a Mac, followed the MIT tools tutorial, and it finally worked.
Run make CPUS=1 qemu-gdb in one window.
Run riscv64-unknown-elf-gdb in another window.
Ignore the Python exception.
I managed to get around this problem by building the riscv toolchain as explained here.
Building the toolchain as explained on that site generates a generic ELF/Newlib toolchain, identified by the prefix riscv64-unknown-elf-, in contrast to the more sophisticated Linux-ELF/glibc toolchain identified by the prefix riscv64-unknown-linux-gnu-. The Newlib build allows the debugger to stepi into kernel space.
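For reference, the generic Newlib toolchain is typically built from the riscv-gnu-toolchain repository roughly like this (a sketch; the install prefix is illustrative):
git clone --recursive https://github.com/riscv/riscv-gnu-toolchain
cd riscv-gnu-toolchain
./configure --prefix=/opt/riscv
make    # a plain "make" builds the Newlib (riscv64-unknown-elf-) toolchain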
For crossdev users it is possible to build the toolchain with Newlib support by running:
crossdev --ex-gcc --ex-gdb --target riscv64-unknown-elf
I'm learning how to write a basic OS kernel with intermezzos.github.io
I'm running in Windows Subsystem for Linux on Windows 10 v1607.
I'm at the point where I want to run my .iso with qemu-system-x86_64 -cdrom os.iso.
Previously I was able to run the command and QEMU would run a window, which was running into another problem, posted here: QEMU, No bootable device, Windows Subsystem for Linux
Now when running the command, I receive the following error: Could not initialize SDL(No available video device) - exiting
When I ran into this problem before, I installed Xming, ran it, and QEMU then started successfully. But now running Xming no longer solves the problem.
I even tried installing xorg and running startx on WSL, but that raises another issue: xf86OpenConsole: Cannot open /dev/tty0 (No such file or directory)
I really don't know what I'm doing and I have so many questions.
I'm under the impression that for QEMU to successfully run, it needs to be able to find a video driver. Is that the purpose of X11?
I am able to get qemu-system-x86_64 -cdrom os.iso to open the expected window after setting: export DISPLAY=:0
This only partially solves my problem, because I'm still running into QEMU, No bootable device, Windows Subsystem for Linux.
I'm wondering if I'm setting the DISPLAY environment variable correctly.
Here's documentation on the DISPLAY variable, for anyone else that wants to learn: http://gerardnico.com/wiki/linux/display
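If it helps anyone else, the setting can be made persistent across WSL sessions (a sketch, assuming the default bash shell):
# start the X server (e.g. Xming) on Windows first, then in WSL:
echo 'export DISPLAY=:0' >> ~/.bashrc
source ~/.bashrc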
Anyway, this portion is solved!
I have the Xillinux OS (based on Ubuntu 12.04 LTS) installed on my hardware (a Zynq FPGA board). I have done some hardware reconfiguration and need to rebuild my kernel after editing the config-3.12.0-xillinux-1.3 file. My question is: how do I rebuild the existing kernel on the hardware after making changes to the config file?
http://www.wiki.xilinx.com/Uartlite+Driver
This is the page I am referring to, where they say:
To enable the uartlite driver in the Linux kernel you either have to integrate it or build it as a kernel module (.ko). You can enable it with:
make menuconfig
---> Device Drivers ---> Character devices ---> Serial drivers ---> Xilinx uartlite serial port support
make menuconfig - do I have to enter this command on the OS running on my hardware, in the /root/boot/.config folder, to enable it?
And what does ---> Device Drivers ---> Character devices ---> Serial drivers ---> Xilinx uartlite serial port support mean? Do I have to change directory?
The other option, as per the link posted above, is to add the lines below to the config file, for which I would use the nano editor and then save with Ctrl+X and then Y.
# integrate into the kernel
CONFIG_SERIAL_UARTLITE=y
# build as loadable module
CONFIG_SERIAL_UARTLITE=m
But they say: "After that you of course have to rebuild the kernel and deploy it to your Zynq device."
Zynq is the hardware I am running my OS on. What commands do I have to use to rebuild the existing kernel on my hardware after making changes to the .config file?
And after rebuilding the kernel with the changes above, do I just reboot to observe the changes?
EDIT:
I was referring to this link, http://www.thegeekstuff.com/2013/06/...-linux-kernel/
So, in order to compile the existing kernel on the hardware, I edit the .config file using nano in the /boot folder and save it.
Then I type make in the same folder as the config.
Then I type make modules in the same folder.
Then I type make modules_install, and then make install.
Then I reboot the system to see the new kernel installed.
Is this the right way of doing it?
Is this how you recompiled and rebuilt it?
Currently there are four files in my /boot directory: one config file and three .dts files. Might this change after rebuilding the kernel?
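For reference, a native rebuild normally runs from the kernel source tree rather than from /boot. A minimal sketch of the sequence described in that article (the source-tree path is hypothetical, and on a Zynq board the final deploy step usually differs from x86):
cd ~/linux-xlnx                        # hypothetical location of the kernel source tree, not /boot
cp /boot/config-3.12.0-xillinux-1.3 .config
make oldconfig                         # answer prompts for any new options
make                                   # build the kernel image
make modules                           # build the loadable modules
make modules_install                   # install modules under /lib/modules
make install                           # x86-style install; on Zynq, copying the new kernel image to the SD card's boot partition is the usual deploy step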
I made a modification in the Linux kernel of OpenWrt and then compiled the new kernel with the command:
make target/linux/compile V=99
but I cannot find the new image under
build_dir/linux-x86_generic/linux-3.3.8
which I need in order to upgrade the kernel in my OpenWrt running in a VirtualBox VM.
How do I proceed to get the new kernel and upgrade OpenWrt?
I am a bit puzzled that you are looking at the linux-x86 folder, since with OpenWrt you usually cross-compile every time; at least I've never used it for anything but cross-compilation.
What are you compiling for?
You should see a build_dir/$TARGET folder with a linux-x.x directory in it, where the Linux kernel was compiled.
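If the goal is a flashable or bootable image rather than just the compiled kernel, a full image build is the usual route. A sketch, assuming the standard OpenWrt buildroot (the output directory name depends on your configured target):
make target/linux/clean    # drop the previous kernel build
make V=99                  # full build: kernel plus packed firmware images
ls bin/x86/                # on a 3.3.8-era buildroot the x86 images typically land here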