Yocto "Failed to run qemu: Could not initialize SDL(x11 not > available)" - x11

So I've got Yocto on a local build server, because who wants that massive build chewing up their workspace, am I right?
Host and server are both Arch Linux 4.19.44-1-lts.
Anyway, I'm just running through the example from the quick build page found here, and when I try
$ runqemu qemux86
over ssh (with X11 forwarding enabled), all I get is this lousy output:
runqemu - INFO - Running MACHINE=qemux86 bitbake -e...
runqemu - INFO - Continuing with the following parameters:
KERNEL: [/home/bob/poky/build/tmp/deploy/images/qemux86/bzImage--5.0.3+git0+f0b575cda6_3df4aae607-r0-qemux86-20190520164453.bin]
MACHINE: [qemux86]
FSTYPE: [ext4]
ROOTFS: [/home/bob/poky/build/tmp/deploy/images/qemux86/core-image-sato-qemux86-20190520164453.rootfs.ext4]
CONFFILE: [/home/bob/poky/build/tmp/deploy/images/qemux86/core-image-sato-qemux86-20190520164453.qemuboot.conf]
runqemu - INFO - Setting up tap interface under sudo
[sudo] password for bob:
runqemu - INFO - Network configuration: 192.168.7.2::192.168.7.1:255.255.255.0
runqemu - INFO - Running /home/bob/poky/build/tmp/work/x86_64-linux/qemu-helper-native/1.0-r1/recipe-sysroot-native/usr/bin/qemu-system-i386 -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:02 -netdev tap,id=net0,ifname=tap0,script=no,downscript=no -drive file=/home/bob/poky/build/tmp/deploy/images/qemux86/core-image-sato-qemux86-20190520164453.rootfs.ext4,if=virtio,format=raw -vga vmware -show-cursor -usb -device usb-tablet -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0 -cpu pentium2 -m 256 -serial mon:vc -serial null -kernel /home/bob/poky/build/tmp/deploy/images/qemux86/bzImage--5.0.3+git0+f0b575cda6_3df4aae607-r0-qemux86-20190520164453.bin -append 'root=/dev/vda rw highres=off mem=256M ip=192.168.7.2::192.168.7.1:255.255.255.0 vga=0 uvesafb.mode_option=640x480-32 oprofile.timer=1 uvesafb.task_timeout=-1 '
runqemu - ERROR - Failed to run qemu: Could not initialize SDL(x11 not available) - exiting
runqemu - INFO - Cleaning up
Set 'tap0' nonpersistent
It's this part that is clearly a concern:
runqemu - ERROR - Failed to run qemu: Could not initialize SDL(x11 not available) - exiting
Which is weird, because X is actually running on the machine and I can run qemu just fine. Running
$ qemu-system-x86_64
Opens up a qemu VM on my local machine
Is there something I'm missing here? Does SDL need to be recompiled with X support or something? What about these options: "-vga vmware", "uvesafb.mode_option=640x480-32"? Maybe it's an ssh thing? Or a build config option for SDL that I haven't yet come across...
To clarify: it works fine from the console of the server, and from the tty using the 'nographic' option; it just doesn't work over ssh with a graphical option. I'm wondering if that's even possible.
Thanks.

I had the same problem with a build for a minimal image on Ubuntu 18.04 Server. Try:
runqemu qemux86 nographic
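If you do want graphics over ssh, it's worth first confirming that X11 forwarding itself is working in that session; a minimal sanity check (assuming an X client such as xclock is installed on the server):
echo $DISPLAY   # should print something like localhost:10.0 in a forwarded session
xclock          # should open a clock window on your local display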

Related

Testcontainers with Podman in Java tests

Is it possible to use Testcontainers with Podman in Java tests?
As of March 2022, the Testcontainers library doesn't detect an installed Podman as a valid Docker environment.
Can Podman be a Docker replacement on both MacOS with Apple silicon (local development environment) and Linux x86_64 (CI/CD environment)?
It is possible to use Podman with Testcontainers in Java projects that use Gradle, on Linux and MacOS (both x86_64 and Apple silicon).
Prerequisites
Podman Machine and Remote Client are installed on MacOS - https://podman.io/getting-started/installation#macos
Podman is installed on Linux - https://podman.io/getting-started/installation#linux-distributions
Enable the Podman service
The Testcontainers library communicates with Podman through a socket file.
Linux
Start the Podman service for a regular user (rootless) and make it listen on a socket:
systemctl --user enable --now podman.socket
Check the Podman service status:
systemctl --user status podman.socket
Check that the socket file exists:
ls -la /run/user/$UID/podman/podman.sock
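You can also verify that the service answers on that socket; a quick check using the _ping endpoint of the Docker-compatible REST API that Podman exposes:
curl --unix-socket /run/user/$UID/podman/podman.sock http://localhost/_ping
# expected response: OK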
MacOS
Podman socket file /run/user/1000/podman/podman.sock can be found inside the Podman-managed Linux VM. A local socket on MacOS can be forwarded to a remote socket on Podman-managed VM using SSH tunneling.
The port of the Podman-managed VM can be found with the command podman system connection list --format=json.
Install jq to parse JSON:
brew install jq
Create a shell alias to forward the local socket /tmp/podman.sock to the remote socket /run/user/1000/podman/podman.sock:
echo "alias podman-sock=\"rm -f /tmp/podman.sock && ssh -i ~/.ssh/podman-machine-default -p \$(podman system connection list --format=json | jq '.[0].URI' | sed -E 's|.+://.+#.+:([[:digit:]]+)/.+|\1|') -L'/tmp/podman.sock:/run/user/1000/podman/podman.sock' -N core#localhost\"" >> ~/.zprofile
source ~/.zprofile
Open an SSH tunnel:
podman-sock
Make sure the SSH tunnel is open before executing tests using Testcontainers.
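To confirm the tunnel is actually forwarding, you can point the Podman remote client at the local socket; a quick check (--url selects an explicit connection URI):
podman --url unix:///tmp/podman.sock info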
Configure Gradle build script
build.gradle
import org.gradle.nativeplatform.platform.OperatingSystem
import org.gradle.nativeplatform.platform.internal.DefaultNativePlatform

test {
    OperatingSystem os = DefaultNativePlatform.currentOperatingSystem
    if (os.isLinux()) {
        def uid = ["id", "-u"].execute().text.trim()
        environment "DOCKER_HOST", "unix:///run/user/$uid/podman/podman.sock"
    } else if (os.isMacOsX()) {
        environment "DOCKER_HOST", "unix:///tmp/podman.sock"
    }
    environment "TESTCONTAINERS_RYUK_DISABLED", "true"
}
Set the DOCKER_HOST environment variable to the Podman socket file, depending on the operating system.
Disable Ryuk with the environment variable TESTCONTAINERS_RYUK_DISABLED.
Moby Ryuk helps you to remove containers/networks/volumes/images by a given filter after a specified delay.
Ryuk is a technology for Docker and doesn't support Podman. See testcontainers/moby-ryuk#23
The Testcontainers library uses Ryuk to remove containers. Instead of relying on Ryuk to implicitly remove containers, we will explicitly remove them with a JVM shutdown hook:
Runtime.getRuntime().addShutdownHook(new Thread(container::stop));
Pass the environment variables
As an alternative to configuring Testcontainers in a Gradle build script, you can pass the environment variables to Gradle.
Linux
DOCKER_HOST="unix:///run/user/$UID/podman/podman.sock" \
TESTCONTAINERS_RYUK_DISABLED="true" \
./gradlew clean build -i
MacOS
DOCKER_HOST="unix:///tmp/podman.sock" \
TESTCONTAINERS_RYUK_DISABLED="true" \
./gradlew clean build -i
Full example
See the full example https://github.com/evgeniy-khist/podman-testcontainers
On Linux, it definitely works, even though the official Testcontainers documentation is not really clear about it.
# Enable socket
systemctl --user enable podman.socket --now
# Export env var expected by Testcontainers
export DOCKER_HOST=unix:///run/user/${UID}/podman/podman.sock
export TESTCONTAINERS_RYUK_DISABLED=true
Sources:
https://quarkus.io/blog/quarkus-devservices-testcontainers-podman/
https://github.com/testcontainers/testcontainers-java/issues/2088#issuecomment-893404306
I was able to build on Evgeniy's excellent answer, since Podman has improved in the time since the original answer. On Mac OS, these steps were sufficient for me and made Testcontainers happy:
Edit ~/.testcontainers.properties and add the following line
ryuk.container.privileged=true
Then run the following
brew install podman
podman machine init
sudo /opt/homebrew/Cellar/podman/4.0.3/bin/podman-mac-helper install
podman machine set --rootful
podman machine start
If you don't want to run rootful Podman, Ryuk needs to be disabled:
export TESTCONTAINERS_RYUK_DISABLED="true"
Running without Ryuk basically works, but lingering containers can sometimes cause problems and name collisions in automated tests. Evgeniy's suggestion of a shutdown hook would resolve this, but it would need code changes.
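As a lighter-weight alternative to code changes, leftover containers can also be cleaned up out of band; a sketch, assuming the org.testcontainers=true label that Testcontainers applies to the containers it starts:
# list any leftover Testcontainers containers
podman ps -a --filter label=org.testcontainers=true
# force-remove them (errors harmlessly if the list is empty)
podman rm -f $(podman ps -aq --filter label=org.testcontainers=true)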
An add-on to @hollycummins's answer. You can get it working without --rootful by setting the following environment variables (or their Testcontainers property counterparts):
DOCKER_HOST=unix:///Users/steve/.local/share/containers/podman/machine/podman-machine-default/podman.sock
TESTCONTAINERS_DOCKER_SOCKET_OVERRIDE=/var/run/user/501/podman/podman.sock
TESTCONTAINERS_RYUK_CONTAINER_PRIVILEGED=true
This will mount the Podman socket of the Linux VM into the Ryuk container. 501 is the UID of the user core in the Linux VM. See podman machine ssh.
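To double-check that UID on your own setup, podman machine ssh accepts a command to run inside the VM; for example:
podman machine ssh podman-machine-default id core
# expect something like uid=501(core), matching the /var/run/user/501 path above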
If you are running the Testcontainers build inside a Docker container, you can alternatively start the service like this:
podman system service -t 0 unix:///tmp/podman.sock &
OR
podman system service -t 0 tcp:127.0.0.1:19999 &
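Whichever endpoint you choose, point Testcontainers at it via DOCKER_HOST:
export DOCKER_HOST=unix:///tmp/podman.sock
# or, for the TCP variant:
export DOCKER_HOST=tcp://127.0.0.1:19999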

Troubleshooting KVM guest's anaconda installer logs

I am trying to install a RHEL 7.9 KVM guest on an Ubuntu 18.04 Azure VM. The anaconda installer fails due to some error, but the virt-viewer screen closes so fast that I am unable to read what the exact error is.
I know all the anaconda logs get stored in the /tmp/anaconda.log file on the KVM guest disk image, but I am unable to figure out a way to check the contents of that file. I tried mounting the KVM guest disk image using "mount -o loop <image>.img <mount-point>", but it fails with "NTFS signature is missing"; that's probably because the installation fails before the KVM guest's disk is partitioned properly.
I am looking for ways to check the contents of that file. Is there any way to redirect the anaconda logs of the guest machine to the Ubuntu host machine? Pasting the virt-install script and kickstart file used. The RHEL 7.9 installation media was downloaded from the https://developers.redhat.com/products/rhel/download site.
virt-install.sh
virt-install --location /datadrive/iso_images/rhel7.9-dvd.iso \
--disk /datadrive/rhel79-oracle-$1.img,size=40,format=raw \
--os-variant rhel7.0 \
--initrd-inject ./ks-rhel79-oracle.cfg \
--extra-args="ks=file:/ks-rhel79-oracle.cfg" \
--vcpus 2 \
--memory 2048 \
--noreboot \
--name rhel79-oracle-$1
ks-rhel79-oracle.cfg
#version=DEVEL
# System authorization information
auth --passalgo=sha512 --useshadow
text
firstboot --disable
# Keyboard layouts
keyboard --vckeymap=us --xlayouts='us'
# System language
lang en_US.UTF-8
# Installation logging level
logging --level=debug
# Network information
network --bootproto=dhcp --device=link --activate
network --bootproto=dhcp --hostname=Azure-image
# Shutdown after installation
shutdown
# Root password
rootpw --plaintext password
# SELinux configuration
selinux --disabled
# System services
services --enabled="sshd,chronyd"
# Do not configure the X Window System
skipx
# System timezone
timezone US/Eastern
# System bootloader configuration
bootloader --append="rootdelay=60 mpath crashkernel=2048M intel_idle.max_cstate=1 processor.max_cstate=1 transparent_hugepage=never numa_balancing=disable mce=ignore_ce modprobe.blacklist=kvm_intel,kvm,iTCO_wdt,iTCO_vendor_support,sb_edac,edac_core" --location=mbr
# Partition scheme
zerombr
clearpart --all
# Disk partitioning information
part swap --fstype="swap" --size=32768
part / --fstype="xfs" --grow --size=6144
%post --logfile=/root/anaconda-composer.log --erroronfail
# Remove random-seed
rm /var/lib/systemd/random-seed
# Clear /etc/machine-id
rm /etc/machine-id
touch /etc/machine-id
%end
%packages
@base
%end
%addon com_redhat_kdump --enable --reserve-mb=2048
%end
The solution below worked for me. I used the commands below to start rsyslog on port 6080 on the host machine (a RHEL 7.8 Azure VM) and modified the virt-install script as follows to direct anaconda logging to the host machine:
yum install -y anaconda
mkdir -p /home/shaaga/remote_inst
eval `analog -p 6080 -o rsyslogd.conf -s /home/shaaga/remote_inst`
virt-install --location /datadrive/iso_images/rhel7.9-dvd.iso \
--disk /datadrive/rhel79-oracle-$1.img,size=40,format=raw \
--os-variant rhel7.0 \
--initrd-inject ./ks-rhel79-oracle.cfg \
--extra-args="ks=file:/ks-rhel79-oracle.cfg" \
--vcpus 2 \
--memory 2048 \
--noreboot \
--name rhel79-oracle-$1 --channel tcp,host=127.0.0.1:6080,mode=connect,target_type=virtio,name=org.fedoraproject.anaconda.log.0
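Once the install is running, the guest's anaconda output arrives over the virtio channel and rsyslogd writes it under the directory passed to -s, so tailing that directory should show the installer logs live (the per-VM file layout below is an assumption):
tail -f /home/shaaga/remote_inst/*/anaconda.log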

Build to deploy guest on KVM hangs

I'm using Jenkins to automate the deploy of a virtual appliance. The first step is to build a standard CentOS 7 minimal vm in KVM. I wrote a short bash script to do this task which works when running locally on the KVM machine:
#!/bin/bash
#Variables
diskpath="/var/lib/libvirt/images/"
buildname=$(date +"%m-%d-%y-%H-%M")
vmextension=".dsk"
#Change to images directory
cd /var/lib/libvirt/images/
#Deploy VM with with kickstart file
sudo virt-install \
--name=$buildname \
--nographics \
--hvm \
--virt-type=kvm \
--file=$diskpath$buildname$vmextension \
--file-size=20 \
--nonsparse \
--vcpu=2 \
--ram=2048 \
--network bridge=br0 \
--os-type=linux \
--os-variant=generic \
--location=http://0.0.0.0/iso/ \
--initrd-inject /var/lib/libvirt/images/autobuild-ks.cfg \
--extra-args="ks=http://0.0.0.0/ks/autobuild-ks.cfg console=ttyS0"
(IP addresses have been changed for security purposes.)
The ISO and the kickstart file are stored on another server and they can both be accessed via http for the purposes of making this script work. To be clear, the script does work.
The problem I have is that when I put this script into Jenkins as a build step, the script runs; however, it hangs at the end, after the OS has been installed and the KVM guest begins the shutdown process.
here is the kickstart file:
#version=DEVEL
# System authorization information
auth --enableshadow --passalgo=sha512
# Use Network installation media
url --url=http://0.0.0.0/iso
# Use graphical install
#graphical
# Run the Setup Agent on first boot
firstboot --enable
ignoredisk --only-use=sda
# Keyboard layouts
keyboard --vckeymap=gb --xlayouts='gb'
# System language
lang en_GB.UTF-8
# Network information
network --bootproto=dhcp --device=ens160 --ipv6=auto --activate
network --hostname=hostname.domain.com
# Root password
rootpw --iscrypted taken_encryption_output_out_for_the_purposes_of_security
#Shutdown after installation
shutdown
# System services
services --enabled="chronyd"
# System timezone
timezone Europe/London --isUtc
# System bootloader configuration
bootloader --append=" crashkernel=auto" --location=mbr --boot-drive=sda
autopart --type=lvm
# Partition clearing information
clearpart --none --initlabel
%packages
@^minimal
@core
chrony
kexec-tools
%end
%addon com_redhat_kdump --enable --reserve-mb='auto'
%end
%anaconda
pwpolicy root --minlen=6 --minquality=50 --notstrict --nochanges --notempty
pwpolicy user --minlen=6 --minquality=50 --notstrict --nochanges --notempty
pwpolicy luks --minlen=6 --minquality=50 --notstrict --nochanges --notempty
%end
I suspect it's something to do with the shutdown option in the kickstart file, but I'm unsure. When I ssh to the KVM server, I can see my newly created VM, so the script does work, but Jenkins hangs.
[root@sut-kvm01 ~]# virsh list --all
Id Name State
----------------------------------------------------
- 09-22-17-16-21 shut off
So far I have tried shutdown and reboot (and obviously halt, which is the kickstart default), and none of them have worked for me.
Any ideas how I can get the build to complete successfully? If it hangs, I can't move on to what will be build step number 2.
Help please :-)
Ok, so I managed to figure out what the issue was. The issue had nothing to do with Jenkins or the script, but rather with the kickstart file. In a nutshell, I was editing the wrong kickstart file. The file I was editing was the default kickstart file in the /root/ directory, but that is not the file that was being injected into memory by the script, so the changes I made were having no effect.
Note to self - just because the script works, does not mean the answer to the problem isn't written in the script.
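A cheap guard against repeating this mistake: before triggering the Jenkins job, diff the copy you edit against the file the script actually injects (paths per the script above; the /root copy name is hypothetical):
diff /root/autobuild-ks.cfg /var/lib/libvirt/images/autobuild-ks.cfg && echo "kickstart files match"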

J-Link GDB debugging in CLion

Not long ago, CLion added support for remote GDB debugging, and I'm trying to set it up with Segger's J-Link GDB server.
My setup:
VM VirtualBox running Ubuntu 16.04
J-Link drivers: V6.10
Target chip: nRF51 (ARM Cortex M0)
CLion 2016.2.2
I usually work in Windows, but as CLion doesn't support remote GDB on Windows, I'm trying to make it work running Ubuntu in VirtualBox. I configured the debugger in CLion as shown in the image, with a little help from the blog in the link above. The arguments I have used are based on the J-Link documentation (document UM08001) and some guessing.
GDB server setup
My problem is that when running the debugger the process just stops and CLion's console outputs:
"Could not connect to target. Please check power, connection and
settings."
I have tried to run JLinkGDBServer from the terminal and then I get as far as this:
/usr/bin/JLinkGDBServer -device nrf51422_xxAC -if swd -speed 1000 -endian little
SEGGER J-Link GDB Server V6.10 Command Line Version
JLinkARM.dll V6.10 (DLL compiled Sep 14 2016 16:46:16)
-----GDB Server start settings-----
GDBInit file: none
GDB Server Listening port: 2331
SWO raw output listening port: 2332
Terminal I/O port: 2333
Accept remote connection: yes
Generate logfile: off
Verify download: off
Init regs on start: off
Silent mode: off
Single run mode: off
Target connection timeout: 0 ms
------J-Link related settings------
J-Link Host interface: USB
J-Link script: none
J-Link settings file: none
------Target related settings------
Target device: nrf51422_xxAC
Target interface: SWD
Target interface speed: 1000kHz
Target endian: little
Connecting to J-Link...
J-Link is connected.
Firmware: J-Link OB-SAM3U128-V2-NordicSemi compiled Jul 5 2016 08:42:09
Hardware: V1.00
S/N: 681666518
Checking target voltage...
Target voltage: 3.30 V
Listening on TCP/IP port 2331
Connecting to target...Connected to target
Waiting for GDB connection...
Does anyone have a clue of what I'm doing wrong?
You're probably confusing GDB server and GDB itself. Those are GDB options that should be set in the GDB Remote Debug Configuration in CLion, not GDB server settings.
That is, you first run JLinkGDBServer manually, for example from a terminal, just as you've done already, and leave it waiting for GDB to attach. At this point, note the connection port:
Listening on TCP/IP port 2331
Connecting to target...Connected to target
Waiting for GDB connection...
Then edit your GDB Remote Debug Configuration in CLion to use the host GDB (most likely /usr/bin/gdb in your case, install it using sudo apt install gdb if necessary), and use the port mentioned above as part of the "target remote" string:
GDB: /usr/bin/gdb
"target remote" args: :2331
Notice the preceding colon in front of the port. This is a shorthand for connecting to localhost using TCP. Just in case, the explicit form is tcp:localhost:2331.
Now you can start the debug session. CLion will start the configured host GDB, GDB communicates to JLinkGDBServer through the specified TCP connection, and finally the GDB server chats with your device.
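If CLion still can't connect, it can help to take CLion out of the equation and attach the same host GDB by hand; a minimal session sketch (the monitor command is handled on the JLinkGDBServer side):
$ /usr/bin/gdb
(gdb) target remote :2331
(gdb) monitor reset
(gdb) continue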

Previously working docker now having errors

I have been testing docker with no issues but suddenly my connection(?) seems to have dropped.
Has anyone experienced this?
What is the fix?
                    ##         .
              ## ## ##        ==
           ## ## ## ## ##    ===
       /"""""""""""""""""\___/ ===
  ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
       \______ o           __/
         \    \         __/
          \____\_______/
exit status 255
docker is configured to use the default machine with IP
For help getting started, check out the docs at https://docs.docker.com
Unexpected error getting machine url: exit status 255
%USER%s-MacBook-Pro:~ %USER%$ docker run hello-world
Post http:///var/run/docker.sock/v1.20/containers/create: dial unix /var/run/docker.sock: no such file or directory.
* Are you trying to connect to a TLS-enabled daemon without TLS?
* Is your docker daemon up and running?
%USER%s-MacBook-Pro:~ %USER%$
I have been launching docker from the Docker Quickstart Terminal.
On Mac OS, I use this command to resolve the issue:
eval "$(docker-machine env default)"
Killing some processes that had things like "docker" in the name and then relaunching the terminal solved the issue for me. I'm unsure how I created the issue in the first place, so it's tough to duplicate.
Assuming you are using a 64-bit Windows OS: for me it was because virtualization was not enabled in my BIOS. Enabling it resolved the issue.
I could see a "this kernel requires an x86-64 CPU, but only detects an i686 CPU, unable to boot" error on the VirtualBox screen.
Restart the system and go into the BIOS.
Security > Virtualization > Enable > Save and Exit.
In VirtualBox, Settings > General > Basic: check that 64-bit Linux is selected.
Good luck
