How to fix gdb error: Cannot access memory at address - debugging

When I debug my Linux kernel module using gdb and QEMU, I get inconsistent results. When I put a breakpoint on, or disassemble, my own function, sometimes I get "Cannot access memory at address: {function_address}" and sometimes I get the disassembled code.
I have built an automation script which brings up a Linux VM using QEMU. The script transfers my kernel module files to the VM, builds the kernel module, and loads it.
The script retrieves the addresses of the loaded sections (".text", ".bss" and ".data") from "/sys/module/{ko name}/sections/", and then loads the ko file into gdb with those section addresses (a minimal sketch of this step is shown below, after the cmdlines).
I should mention that I disable KASLR and compile the LKM with debug symbols.
Then I try to "disas" my own function, and it sometimes fails.
Qemu cmdline: sudo qemu-system-x86_64 -enable-kvm -hda ~/ubuntu.qcow -m 4096 -kernel /boot/vmlinuz-5.0.0-23-generic -append "root=/dev/sda1 nokaslr" -net user,hostfwd=tcp::10022-:22 -net nic -snapshot -s
LKM symbols loading cmdline: add-symbol-file ${module_path} ${text} -s .data ${data} -s .bss ${bss}
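For reference, a minimal sketch of that retrieval step (a sketch only, run as root inside the guest; the module name my_module and the .ko path are placeholders):
MOD=my_module
SEC=/sys/module/$MOD/sections
TEXT=$(cat $SEC/.text)
DATA=$(cat $SEC/.data)
BSS=$(cat $SEC/.bss)
echo "add-symbol-file /path/to/$MOD.ko $TEXT -s .data $DATA -s .bss $BSS"
The printed line can then be pasted into gdb after connecting with "target remote :1234" (port 1234 is what the -s flag in the QEMU cmdline opens).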
I expect the output of the "disas" command to be the disassembled code, but sometimes the actual result is, for example:
Dump of assembler code for function :
0xffffffffc0344000 <+0>: Cannot access memory at address 0xffffffffc0344000

Related

"ARP -a" Command Explanation

I have tried executing the "arp -a" command on Windows. I know that it is used for mapping IP addresses to MAC addresses.
I understand the first part, but what about the other two blocks? Why are 0x10 and 0x16 at the end?
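For context, arp -a on Windows prints one block per network interface, and the hex value after the --- separator (0x10, 0x16, ...) is that interface's index. A made-up example of one such block:
Interface: 192.168.1.10 --- 0x10
  Internet Address      Physical Address      Type
  192.168.1.1           00-11-22-33-44-55     dynamic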

How to redirect tty in a gdb/gdbserver setup

I have a remote debugging setup with gdb (on a host machine) and a gdbserver (on a target machine).
The gdb command I am using is something like
target remote | ssh -T user@192.168.222.111 gdbserver - app_to_debug arg1 arg2 arg3
This works fine and the stdio gets redirected from gdbserver to gdb via ssh.
Is it possible to redirect this stdio to a different terminal on my host?
Within gdb, I tried something like:
tty /dev/pts/10
But this only works in the case without the ssh/gdbserver setup.
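One possible workaround (a sketch only, not verified against this exact setup): use a TCP connection instead of the stdio pipe, so that the program's I/O stays on whatever terminal gdbserver is started from; the port 2345 is an arbitrary choice.
# in a separate terminal on the host
ssh -t user@192.168.222.111 gdbserver :2345 app_to_debug arg1 arg2 arg3
# then inside gdb
target remote 192.168.222.111:2345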

How do I pass the real/physical NVMe to a qemu machine? (Host MacOS, guest Windows)

I just want to boot my existing Windows install via QEMU. I was able to do it with VMware Fusion, but it got buggy, and after days of trying to solve it I gave up and turned to QEMU.
I have these lines:
qemu-system-x86_64 -m 9072 -cpu Penryn,+invtsc,vmware-cpuid-freq=on,$MY_OPTIONS\
-machine q35 \
-smp 4,cores=2 \
-usb -device usb-kbd -device usb-mouse \
-smbios type=2 \
-device ich9-ahci,id=sata \
-drive id=WIN,format=raw,if=none,file="/dev/disk2s4",index=0,media=disk \
-device ide-hd,bus=sata.4,drive=WIN \
-monitor stdio \
-vga vmware
It is "draft". I was trying out. But my issue is that I wanna pass my SSD NVMe to this machine. I couldn't find anything useful for MacOS in the internet, searching for hours. Those lines are what I found. Not even in Qemu docs I couldn't find anything.
I got "Booting from Hard Disk..." forever...
From
https://www.qemu.org/docs/master/system/qemu-block-drivers.html
https://www.qemu.org/docs/master/system/images.html#nvme-disk-images
it should be:
qemu-system-x86_64 -drive file=nvme://0000:06:00.0/2
where "dmesg | grep nvme" shows output like [1] and you want partition p2:
[1]
nvme nvme0: pci function 0000:06:00.0
nvme0n1: p1 p2 p3 p4
but for me it still doesn't work:
qemu-system-x86_64: -drive file=nvme://0000:06:00.0/0002: Failed to find iommu group sysfs path: No such file or directory
(Note that the nvme:// block driver relies on Linux-specific VFIO and sysfs support, hence the iommu/sysfs error, so it is not something that can be expected to work from a macOS host.)
Well, I've tried the same thing but with a SATA SSD drive: I pointed QEMU at the whole drive (not a specific partition) and, with some other changes to the args, it booted normally. That SSD already had Windows 11 installed on it, so attaching it via QEMU just took a little while to configure the new hardware and then it worked properly.
First, you need to find the disk name in Disk Utility: right-click on the drive itself, not the partition. In my case it's disk1, so I'll be using /dev/disk1 when running the command.
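Alternatively, you can list the device identifiers from a terminal; use the whole-disk entry (e.g. disk1), not a partition entry such as disk1s2:
diskutil list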
Save the following text to a boot-windows.sh file, then run it from a terminal with sudo.
DISK="/dev/disk1"
OVMFDIR="usr/share/edk2/ovmf" #for enabling secure EFI boot
diskutil umountDisk "$DISK" #to make sure it's forcibly unmounted
MY_OPTIONS="+ssse3,+sse4.2,+popcnt,+avx,+aes,+xsave,+xsaveopt,check"
ALLOCATED_RAM="8G" #GB
CPU_SOCKETS="2"
CPU_CORES="4"
CPU_THREADS="4"
args=(
-m "$ALLOCATED_RAM"
-vga virtio
-display cocoa #default,show-cursor=off,gl=es
-usb
-device usb-tablet
-smp "$CPU_THREADS",cores="$CPU_CORES",sockets="$CPU_SOCKETS"
-drive if=ide,index=2,file="$DISK",format=raw
-machine type=q35
-accel hvf
#-drive file=/Volumes/OSes/win/21H1.iso,media=cdrom,index=0
#-drive file=virtio-win-0.1.208.iso,media=cdrom
-nic user,model=virtio
-rtc base=localtime,clock=host
-cpu Nehalem,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time
-device intel-hda
-drive if=pflash,format=raw,readonly=on,file="$OVMFDIR"/OVMF_CODE.fd
-drive if=pflash,format=raw,readonly=on,file="$OVMFDIR"/OVMF_VARS.fd
-boot c
)
qemu-system-x86_64 "${args[#]}"
Make sure you download the win-virtio drivers if needed.
Files needed for enabling secure EFI boot can be downloaded here.
Credits:
Kholia's detailed GitHub project.
Hikmat Ustad's amazing video tutorial.

Why QEMU takes much time to boot if I use -initrd file instead of normal boot?

QEMU has options (-kernel and -initrd) to point to the kernel and initrd images used to boot the VM. However, when I use them, QEMU takes much longer to boot completely than a normal Ubuntu boot does.
Here is the normal execution (it takes about 6 seconds to reach the login page):
qemu-system-x86_64 -enable-kvm -smp 2 -m 4096 -drive file=~/ubuntu.img,if=virtio,cache=none -drive file=~/drive_10G.raw,format=raw,if=virtio,cache=none -redir tcp:7777::22
Then, if I run it the other way, pointing to the kernel and initrd images, it takes much longer (about 26 seconds):
sudo qemu-system-x86_64 -smp 2 -cpu host -m 4096 -enable-kvm -kernel ~/kernel/arch/x86_64/boot/bzImage -initrd ~/initrd.img-4.13.8 -append "root=/dev/mapper/ubuntu--vg-root ro earlyprintk console=ttyS0" -drive file=~/ubuntu.img,format=raw,if=virtio,cache=none -drive file=~/drive_10G.raw,format=raw,if=virtio,cache=none -redir tcp:7779::22 -serial stdio
As you can notice, there is a huge boot time difference, from 6 to 26 seconds. This is bothering me because I need to reboot the VM several times, and now it is 4x slower than before.
P.S.: I must get QEMU's serial output in my HOST terminal so that I can follow the kernel messages at runtime. That is why I am using the -serial stdio option.
P.S.2.: My HOST machine is an Intel Xeon E3-1270 v5 3.6GHz 4 cores 8 threads + 32GB memory.
P.S.3.: My GUEST machine is running UBUNTU server 14.04 LTS + linux-4.13.8
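As an aside: the -redir option used in the commands above was removed in later QEMU releases; on those versions the equivalent port forward is spelled with hostfwd, for example:
-nic user,hostfwd=tcp::7777-:22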

Embedded: OMAP3 EVM boot arguments

I'm a beginner using an OMAP3 EVM. Currently I'm able to boot via NFS, but I want to boot from the SD card. While switching to SD boot I removed the boot.scr file. It still booted, but after the line 'Uncompressing Linux...' it waits for some time, then the file system gets loaded directly and it asks for login. The many lines of initialization logs that used to appear after 'Uncompressing Linux...' are completely missing. The root file system is fully loaded, though, and I'm able to use it as I did previously. So I tried rebuilding the boot.scr file with only the NFS-related arguments removed.
The boot.scr commands previously:
setenv bootargs 'mem=128M console=ttyS0,115200n8 noinitrd rw rootfstype=ext3 ip=dhcp root=/dev/nfs nfsroot=192.168.15.3:/home/mistral/nfsroot,nolock'
setenv bootcmd 'mmc init; fatload mmc 0 0x80000000 uImage; bootm 0x80000000'
fatload mmc 0 0x80000000 uImage
bootm 0x80000000
The boot.scr commands now:
setenv bootcmd 'mmc init; fatload mmc 0 0x80000000 uImage; bootm 0x80000000'
fatload mmc 0 0x80000000 uImage
bootm 0x80000000
I haven't modified the uEnv.txt. Its contents are:
bootargs=console=ttyS0,115200n8 mem=256M root=/dev/mmcblk0p2 rw rootfstype=ext3 rootwait init=/linuxrc ip=off
bootcmd=mmc rescan ; fatload mmc 0 81000000 uImage ; bootm 81000000
uenvcmd=bootd
Now, it has completely stopped booting after the line 'Uncompressing Linux...'.
Please guide me on where I'm going wrong.
The /dev/ttyS0 that you set in minicom is the serial port on your PC and NOT the OMAP EVM board.
Refer to the original bootargs, or to the release notes / user guide, to determine the proper value of the console= variable for your EVM board and BSP release.
In addition to specifying the proper console= option on the kernel cmd-line (bootargs):
pass the earlyprintk param
do NOT pass the silent param
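Putting that together with the uEnv.txt above, the resulting bootargs would look something like this (the console device name ttyO0 is an assumption; older TI BSP kernels may use ttyS0 or ttyS2, so check your release notes):
bootargs=console=ttyO0,115200n8 mem=256M root=/dev/mmcblk0p2 rw rootfstype=ext3 rootwait init=/linuxrc ip=off earlyprintk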
