Running u-boot hello_world on an image partition with qemu - image

I'm developing on an Ubuntu x86 machine, trying to run the u-boot hello_world standalone application, which resides on an image sd.img that contains a single partition.
I've compiled u-boot (v2022.10) with qemu-x86_64_defconfig
I run qemu with qemu-system-x86_64 -m 1024 -nographic -bios u-boot.rom -drive format=raw,file=sd.img
u-boot starts up, doesn't find a script, doesn't detect tftp, and awaits a command. If I type ext4ls ide 0:1, I can clearly see hello_world.bin (3932704 hello_world.bin).
When I do an ext4load ide 0:1 0x40000 hello_world.bin (in preparation for go 40000 This is another test), qemu/u-boot restarts.
0x40000 is the CONFIG_STANDALONE_LOAD_ADDR for x86.
I have even tried making an image of hello_world with mkimage -n "Hello stand alone" -A x86_64 -O u-boot -T standalone -C none -a 0x40000 -d hello_world.bin -v hello_world.img and loading that image to 0x40000 with the intention of using bootm in case of cache issues, but qemu/u-boot still resets.
Could anyone possibly point out the basic mistake I'm making?
Cheers

The memory area 0xa0000-0xfffff is reserved, and you are overwriting it when loading your 4 MiB file to 0x40000 because of the excessive size of the file.
If you build hello_world.bin correctly, it will be a few kilobytes.
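With a correctly sized binary, the sequence from the question should then work as intended (same commands as above; the exact prompt depends on your build):
=> ext4load ide 0:1 0x40000 hello_world.bin
=> go 0x40000 This is another test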

Related

qemu-system-x86_64 No bootable device. ARM M2 processor. mit6.858

I'm taking MIT 6.858. In Lab 1, I need to set up the lab environment on my M2 Mac using QEMU (version 7.2.0, installed with Homebrew).
I followed the instructions in the lab hints and ran the course VM image with this shell script:
#!/bin/bash
if ! command -v qemu-system-x86_64 > /dev/null; then
    echo "You do not have QEMU installed."
    echo "If you are on a Linux system, install QEMU and try again."
    echo "Otherwise, follow the lab instructions for your OS instead of using this script."
    exit
fi
# can we use the -nic option?
version=$(qemu-system-x86_64 --version \
    | grep 'QEMU emulator version' \
    | sed 's/QEMU emulator version \([0-9]\)\.\([0-9]\).*/\1.\2/')
major=$(echo "$version" | cut -d. -f1)
minor=$(echo "$version" | cut -d. -f2)
net=()
if (( major > 2 || major == 2 && minor >= 12 )); then
    net=("-nic" "user,ipv6=off,model=virtio,hostfwd=tcp:127.0.0.1:2222-:2222,hostfwd=tcp:127.0.0.1:8080-:8080,hostfwd=tcp:127.0.0.1:8888-:8888")
else
    net=("-netdev" "user,id=n1,ipv6=off,hostfwd=tcp:127.0.0.1:2222-:2222,hostfwd=tcp:127.0.0.1:8080-:8080,hostfwd=tcp:127.0.0.1:8888-:8888" "-device" "virtio-net,netdev=n1")
fi
qemu-system-x86_64 \
    -m 2048 \
    -nographic -serial mon:stdio \
    "$@" \
    # -enable-kvm \
    "${net[@]}" \
    6.858-x86_64-v22.vmdk
But I got this output:
SeaBIOS (version rel-1.16.1-0-g3208b098f51a-prebuilt.qemu.org)
iPXE (http://ipxe.org) 00:03.0 CA00 PCI2.10 PnP PMM+7EFD11A0+7EF311A0 CA00
Booting from Hard Disk...
Boot failed: could not read the boot disk
Booting from Floppy...
Boot failed: could not read the boot disk
Booting from DVD/CD...
Boot failed: Could not read from CDROM (code 0003)
Booting from ROM...
iPXE (PCI 00:03.0) starting execution...ok
iPXE initialising devices...ok
iPXE 1.20.1+ (g4bd0) -- Open Source Network Boot Firmware -- http://ipxe.org
Features: DNS HTTP iSCSI TFTP AoE ELF MBOOT PXE bzImage Menu PXEXT
net0: 52:54:00:12:34:56 using 82540em on 0000:00:03.0 (open)
[Link:up, TX:0 TXE:0 RX:0 RXE:0]
Configuring (net0 52:54:00:12:34:56)...... ok
net0: 10.0.2.15/255.255.255.0 gw 10.0.2.2
Nothing to boot: No such file or directory (http://ipxe.org/2d03e13b)
No more network devices
No bootable device.
When I press Ctrl-A X to quit, I get a couple more lines of output:
QEMU: Terminated
./6.858-x86_64-v22.sh: line 30: -nic: command not found
My Homebrew installation is fine.
I'd like to know how to start the course VM correctly on my M2 Mac.
So, at first glance this looked off-topic as not really being a programming question (running QEMU is more of a superuser.com or serverfault.com question, or perhaps apple.stackexchange.com, since we're talking about running QEMU on macOS), but looking more closely, your problem appears to be a bash scripting one, which makes it on topic again!
The clues
One thing you don't explicitly mention in your question is that you modified the script, attempting to comment out this line:
# -enable-kvm \
(The reason to remove that flag is because kvm is not available on macOS hosts, and the alternative, hvf, is not available when using binary translation to run an x86-64 VM on an arm64 host CPU.)
Another clue to the problem is this error:
./6.858-x86_64-v22.sh: line 30: -nic: command not found
What's happened here is that the backslashes (\) at the end of each of these lines in the original script turn the multi-line block into a single line:
qemu-system-x86_64 \
    -m 2048 \
    -nographic -serial mon:stdio \
    "$@" \
    -enable-kvm \
    "${net[@]}" \
    6.858-x86_64-v22.vmdk
Unfortunately, when a line is commented out with #, bash ignores any backslash at the end of that line, so the line continuation stops there and the multi-line command is split in two.
This means that your networking and disk image command line options are not making it into the qemu command line, which in turn is why it can't find the virtual disk image. The -nic error comes from treating the following as a new command:
"${net[@]}" \
    6.858-x86_64-v22.vmdk
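You can see the same effect with a minimal snippet (a hypothetical example, not part of the course script):
echo one \
# two \
three
Bash joins the first two lines, treats everything from # onwards (including the trailing backslash) as a comment, runs echo one, and then tries to run three as a separate command, which fails with "three: command not found", the same pattern as the -nic error above.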
Solution:
Don't comment out the flag -enable-kvm \ in place: either remove the line entirely, or move it out of the command and comment it out there.
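For example, a sketch of the fixed tail of the script with the flag simply removed:
qemu-system-x86_64 \
    -m 2048 \
    -nographic -serial mon:stdio \
    "$@" \
    "${net[@]}" \
    6.858-x86_64-v22.vmdk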

QEMU kernel for raspberry pi 3 with networking and virtio support [closed]

I used QEMU (qemu-system-aarch64 -M raspi3) to emulate a Raspberry Pi 3 with the kernel from a working image. Everything worked, but there was no networking.
qemu-system-aarch64 \
-kernel ./bootpart/kernel8.img \
-initrd ./bootpart/initrd.img-4.14.0-3-arm64 \
-dtb ./debian_bootpart/bcm2837-rpi-3-b.dtb \
-M raspi3 -m 1024 \
-nographic \
-serial mon:stdio \
-append "rw earlycon=pl011,0x3f201000 console=ttyAMA0 loglevel=8 root=/dev/mmcblk0p3 fsck.repair=yes net.ifnames=0 rootwait memtest=1" \
-drive file=./genpi64lite.img,format=raw,if=sd,id=hd-root \
-no-reboot
I tried to add these options:
-device virtio-blk-device,drive=hd-root \
-netdev user,id=net0,hostfwd=tcp::5555-:22 \
-device virtio-net-device,netdev=net0 \
But then I get this error:
qemu-system-aarch64: -device virtio-blk-device,drive=hd-root: No 'virtio-bus' bus found for device 'virtio-blk-device'
I looked at some forums and used the "virt" machine instead of raspi3 in order to emulate virtio networking:
qemu-system-aarch64 \
-kernel ./bootpart/kernel8.img \
-initrd ./bootpart/initrd.img-4.14.0-3-arm64 \
-m 2048 \
-M virt \
-cpu cortex-a53 \
-smp 8 \
-nographic \
-serial mon:stdio \
-append "rw root=/dev/vda3 console=ttyAMA0 loglevel=8 rootwait fsck.repair=yes memtest=1" \
-drive file=./genpi64lite.img,format=raw,if=sd,id=hd-root \
-device virtio-blk-device,drive=hd-root \
-netdev user,id=net0,net=192.168.1.1/24,dhcpstart=192.168.1.234 \
-device virtio-net-device,netdev=net0 \
-no-reboot
Nothing is printed and the terminal just hangs, which suggests the kernel does not work with the virt machine.
I decided to build my own custom kernel. Could anyone advise which options to enable so that the kernel works with both QEMU and virtio?
Thanks in advance!
The latest versions of QEMU (5.1.0 and 5.0.1) have USB emulation for the raspi3 machine (qemu-system-aarch64 -M raspi3).
You can emulate networking and get SSH access if you use -device usb-net,netdev=net0 -netdev user,id=net0,hostfwd=tcp::5555-:22 in QEMU.
I tested this configuration, and both the USB network device and the Ethernet interface showed up inside the QEMU raspi3 guest.
Here is the full command and the options that I used:
qemu-system-aarch64 -m 1024 -M raspi3 -kernel kernel8.img -dtb bcm2710-rpi-3-b-plus.dtb -sd 2020-08-20-raspios-buster-armhf.img -append "console=ttyAMA0 root=/dev/mmcblk0p2 rw rootwait rootfstype=ext4" -nographic -device usb-net,netdev=net0 -netdev user,id=net0,hostfwd=tcp::5555-:22
The QEMU version used was 5.1.0.
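With that hostfwd rule, once the guest has booted you should be able to reach its SSH server from the host (assuming the default pi user and that SSH is enabled in the Raspios image):
ssh -p 5555 pi@127.0.0.1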
I had the same problem as user @peterbabic: while I could see the gadget with lsusb, I could not see any net device.
So I tried manually inserting the appropriate module, g_ether, and it said that it could not find the driver.
It was then that I realized that the kernel8.img file I had downloaded and the Raspbian OS image that I was booting were different versions, so the kernel could not find its modules because it looked for them in the wrong directory.
On the other hand, the Raspbian OS image had the correct kernel in its first partition (I could see it in /boot). The only problem was getting it out and using it to replace the wrong kernel8.img (I could not find the correct one online, and anyway, the kernel inside the Raspbian image is by definition the right one).
So, I copied the Raspbian OS image on my Linux box, and mounted it with loop:
# fdisk raspbian.img
- command "p" lists partitions and tells me that P#1 starts at sector 2048
- command "q" exits without changes
# losetup -o $[ 2048 * 512 ] /dev/loop9 raspbian.img # because sectors are 512 bytes
# mkdir /mnt/raspi
# mount /dev/loop9 /mnt/raspi
- now "ls -la /mnt/raspi" shows the content of image partition 1, with kernels
# cp /mnt/raspi/kernel8.img .
# umount /mnt/raspi
# losetup -d /dev/loop9 # destroy loop device
# rmdir /mnt/raspi # remove temporary mount point
# rm raspbian.img
- I no longer need the raspbian.img copy so I delete it.
- now current directory holds "kernel8.img". I can just copy it back.
To be sure, I also modified /boot/cmdline.txt on the Raspberry image (before rebooting with the new kernel) so that it now loads the dwc2 and g_ether modules (modules-load=dwc2,g_ether).
On boot, the gadget is now automatically recognized.
Your raspi3 command line has no networking because on a raspi3 the networking is via USB, and QEMU doesn't have a model of the USB controller for that board yet. Adding virtio-related options won't work, because the raspi3 has no PCI and so there's no way to plug in a pci virtio device.
Your virt command line looks basically right (at least enough to boot; you probably want "if=none" rather than "if=sd", and I'm not sure the network options are quite right, but if those parts are wrong they will result in errors from the guest kernel later rather than a total lack of output). So your problem is likely that the kernel config is missing some important items.
You can boot a stock Debian kernel on the virt board (instructions here: https://translatedcode.wordpress.com/2017/07/24/installing-debian-on-qemus-64-bit-arm-virt-board/), so one approach to finding the error in your kernel config is to compare your config with the one the Debian kernel uses. The upstream kernel source 'defconfig' should also work. I find that starting with a config that works and cutting it down is faster than building one up from nothing by trying to find all the obscure options that need to be present.
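As a rough, non-exhaustive sketch (my own list, not from the answer above), the kernel options that typically matter for a virtio-capable arm64 guest on the virt board include:
CONFIG_PCI=y
CONFIG_PCI_HOST_GENERIC=y
CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_MMIO=y
CONFIG_VIRTIO_BLK=y
CONFIG_VIRTIO_NET=y
CONFIG_SERIAL_AMBA_PL011=y
CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
Checking your .config against a list like this is a quick first pass before diffing it against the full Debian config.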
I've updated the steps needed to get this working with the April 4th Raspios image:
# wget https://downloads.raspberrypi.org/raspios_lite_arm64/images/raspios_lite_arm64-2022-04-07/2022-04-04-raspios-bullseye-arm64-lite.img.xz
# unxz 2022-04-04-raspios-bullseye-arm64-lite.img.xz
# mkdir boot
# mount -o loop,offset=4194304 2022-04-04-raspios-bullseye-arm64-lite.img boot
# cp boot/bcm2710-rpi-3-b-plus.dtb boot/kernel8.img .
# echo 'pi:$6$6jHfJHU59JxxUfOS$k9natRNnu0AaeS/S9/IeVgSkwkYAjwJfGuYfnwsUoBxlNocOn.5yIdLRdSeHRiw8EWbbfwNSgx9/vUhu0NqF50' > boot/userconf
# umount boot
# qemu-img convert -f raw -O qcow2 2022-04-04-raspios-bullseye-arm64-lite.img 2022-04-04-raspios-bullseye-arm64-lite.qcow2
# qemu-img resize 2022-04-04-raspios-bullseye-arm64-lite.qcow2 4g
Then run Qemu 7.1.0 this way:
# qemu-system-aarch64 -m 1024 -M raspi3b -kernel kernel8.img \
-dtb bcm2710-rpi-3-b-plus.dtb -sd 2022-04-04-raspios-bullseye-arm64-lite.qcow2 \
-append "console=ttyAMA0 root=/dev/mmcblk0p2 rw rootwait rootfstype=ext4" \
-nographic -device usb-net,netdev=net0 -netdev user,id=net0,hostfwd=tcp::5555-:22
Edit the image's /boot/cmdline.txt to add modules-load=dwc2,g_ether after rootwait.

ddrescue on Cygwin creates a zero-size image

I'm trying to create an image of a disk (a USB flash key) in Cygwin using the ddrescue command. I do the following:
First, with the df command I check where the disks are in Cygwin.
The output is:
C: 30716276 30489824 226452 100% /cygdrive/c
D: 56323856 55794432 529424 100% /cygdrive/d
F: 1953480700 1927260140 26220560 99% /cygdrive/f
H: 7847904 140324 7707580 2% /cygdrive/h
Then, to create the image of disk h:/, I run the command like this:
ddrescue -v -n /cygdrive/h f:/___buffer/discoH.img discoH.log
The program runs for some time and appears to be reading the disk. As a result, the file f:/___buffer/discoH.img is indeed created, but its size is zero!
I tried some variations of the command options, but with the same result. The disk to be read is fully working and readable; for now I only want to learn how to create its image.
When using ddrescue under real Linux (Ubuntu), a non-zero-size image of the same disk is created without any problem. What could cause the failure in Cygwin?
I still work on Windows XP SP3 32-bit; the Cygwin version is:
$ uname -r
2.0.4(0.287/5/3)
$ uname -m
i686 (32bit)
On another computer, with Windows 8, the result is the same. Am I perhaps missing something elementary?
PS: the disk I want to image is 8 GB, and there is 26 GB of free space on disk f:/ where I want to create the image.
/cygdrive/h is a mounted filesystem, not the raw disk device. Try /dev/sdX instead.
You can identify the X letter from:
$ cat /proc/partitions
major minor #blocks name win-mounts
8 0 976762584 sda
8 1 960658432 sda1 D:\
8 2 16102400 sda2 E:\
8 16 250059096 sdb
8 17 266240 sdb1
8 18 16384 sdb2
8 19 248765440 sdb3 C:\
8 20 1003520 sdb4
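Note that /dev/sdX refers to the whole disk (including the partition table), while /dev/sdX1 is only its first partition; to image the entire 8 GB key you would normally use the whole-disk device, for example (the drive letter here is hypothetical, take it from your own listing):
ddrescue -v -n /dev/sdc f:/___buffer/discoH.img discoH.log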
Thank you matzeri! Yours was really the elementary thing I needed but did not know.
So I use the command cat /proc/partitions instead of df, get the disk reference
sdc1 instead of /cygdrive/h, and run the command
ddrescue -v -n /dev/sdc1 f:/___buffer/discoH.img discoH.log
instead of the one I indicated above in my question, and it works! The image is being written.

Debugging an OpenCL kernel on an Arria 10 FPGA

I am trying to debug an OpenCL kernel for an Arria 10 FPGA board.
First I compile for emulation with:
$ aoc -march=emulator device/kernel.cl -v -o bin/kernel.aocx
Then I can execute the host with the recommended command, and it works fine:
$ env CL_CONTEXT_EMULATOR_DEVICE_ALTERA=1 ./host
But when I want to debug I do:
$ gdb host
(gdb) run
which gives me the error:
Context callback: Program was compiled for a different board.
aocx is for board EmulatorDevice whereas device is alaric_v3_prod_hpc
I suppose this error occurs because I am not setting CL_CONTEXT_EMULATOR_DEVICE_ALTERA=1 in the environment. How should I execute the host program for debugging? Thanks.
Prepend the gdb call with the same env setting you used when running without the debugger:
env CL_CONTEXT_EMULATOR_DEVICE_ALTERA=1 gdb ./host
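Alternatively (an equivalent approach, not mentioned in the original answer), you can set the variable from inside gdb before running:
$ gdb ./host
(gdb) set environment CL_CONTEXT_EMULATOR_DEVICE_ALTERA 1
(gdb) run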

How to generate a core file in a Docker container?

Using the ulimit command, I set the core file size:
ulimit -c unlimited
Then I compiled my C source code with the gcc -g option, which produced a.out.
After running
./a.out
there is a runtime error:
(core dumped)
but no core file was generated (e.g. core.294340).
How do I generate a core file?
First make sure the container will write the cores to an existing location in the container filesystem. The core generation settings are set in the host, not in the container. Example:
echo '/cores/core.%e.%p' | sudo tee /proc/sys/kernel/core_pattern
will generate cores in the folder /cores.
In your Dockerfile, create that folder:
RUN mkdir /cores
You need to specify the core size limit; the ulimit shell command won't do, because it only affects the current shell. You need to use the docker run option --ulimit with a soft and hard limit. After building the Docker image, run the container with something like:
docker run --ulimit core=-1 --mount source=coredumps_volume,target=/cores ab3ca583c907 ./a.out
where coredumps_volume is a volume you already created where the core will persist after the container is terminated. E.g.
docker volume create coredumps_volume
If you want to generate a core dump of an existing process, say using gcore, you need to start the container with --cap-add=SYS_PTRACE to allow a debugger running as root inside the container to attach to the process. (For core dumps on signals, see the other answer)
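For instance (a rough sketch; the container ID and PID are placeholders, and gdb must be installed in the image since it provides gcore), dumping a running process could look like:
docker run --cap-add=SYS_PTRACE --ulimit core=-1 --mount source=coredumps_volume,target=/cores ab3ca583c907 ./a.out
# in a second shell, attach to the same container and dump the process:
docker exec -it <container-id> bash
gcore -o /cores/a.out <pid-of-a.out>    # writes /cores/a.out.<pid>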
I keep forgetting how to do it exactly and keep stumbling upon this question which provides marginal help.
All in all it is very simple:
Run the container with extra params --ulimit core=-1 --privileged to allow coredumps:
docker run -it --rm \
--name something \
--ulimit core=-1 --privileged \
--security-opt seccomp=unconfined \
--entrypoint '/bin/bash' \
$IMAGE
Then, once in the container, set the coredump location and start your failing script:
sysctl -w kernel.core_pattern=/tmp/core-%e.%p.%h.%t
myfailingscript.a
Enjoy your stack trace:
cd /tmp; gdb -c `ls -t /tmp | grep core | head -1`
Well, let's resurrect an ancient thread.
If you're running Docker on Linux, then all of this is controlled by /proc/sys/kernel/core_pattern on the host. That is, if you cat that file on bare metal and inside the container, they'll be the same. Note also that the file is tricky to update: you have to use the tee method from some of the other posts.
echo core | sudo tee /proc/sys/kernel/core_pattern
If you change it on bare metal, it gets changed in your container too. That also means the behavior depends on where you're running your containers.
My containers don't run apport, but my bare metal did, so I wasn't getting cores. I did the above (I had already solved the ulimit -c thing), and suddenly I get core files in the current directory.
The key to this is understanding that it's your environment, your bare metal, that controls the contents of that file.
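To verify the whole setup end to end, a minimal crasher like this (my own illustration, not from the answers above) can be compiled and run inside the container; it should leave a core file at the location named by core_pattern:
cat > crash.c <<'EOF'
/* dereference a null pointer to force SIGSEGV and a core dump */
int main(void) { int *p = 0; return *p; }
EOF
gcc -g crash.c -o a.out
./a.out    # should end with "Segmentation fault (core dumped)"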
