I am running a Xen server and I would like to debug the QEMU process. However, I don't want to run the QEMU process standalone; I want it to run as the device model backing my domU.
How can I run and debug QEMU in this situation?
This answer describes two possible approaches. Hope somebody else can share more ways to do it.
Preliminary note: to have a debug-friendly build of QEMU, use the --enable-debug config option.
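For example, when configuring the qemu-xen tree yourself, the invocation might look like this (the target list is just an assumption here; pick whichever targets you actually need):

./configure --enable-debug --target-list=i386-softmmu
make -j"$(nproc)"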
Debugging QEMU under Xen is a bit peculiar because of the integration between the hypervisor and QEMU. If you don't want to run QEMU standalone (no acceleration), the only choice under Xen is to use Xen as accelerator, as KVM will not work.
To do so, I don't think it's possible to manually run QEMU from the command line; you have to let Xen start the process when the domU is launched. You can then use one of the following two options:
start your VM via Xen (xl create ...), then attach gdb from the command line to the QEMU process that Xen has started;
change device_model_override in the Xen configuration of the VM to point to a script that contains something like exec gdbserver :1234 /path/to/qemu "$@" - then you can start the VM like above (via Xen) and use target remote :1234 in gdb to attach to the debugging session.
Option 2 is particularly useful when looking at a crash that happens early in startup, if xl create -p ... doesn't help because QEMU crashes whether the Xen VM is running or paused. It is also useful when you want to pass extra environment variables to QEMU, which sometimes helps in debugging.
Your script could look like the following:
#!/bin/bash
QEMU_LOG=/var/log/xen/qemu-log-debug.log # custom log file
export SSLKEYLOGFILE=/tmp/qemu-debug-ssl-key.log # extra env vars
export GNUTLS_DEBUG_LEVEL=4 # extra env vars
export XEN_QEMU_CONSOLE_LIMIT=0 # extra env vars
exec gdbserver 0.0.0.0:1234 /path/to/qemu-xen/qemu-system-i386 "$@" &> "$QEMU_LOG" # pass through the arguments Xen supplies
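For option 1, a minimal sketch of attaching to the device model Xen started (the pgrep pattern and the domU name guest1 are placeholders, and assume a single match); for option 2, the matching gdb invocation against the wrapper above:

# option 1: attach gdb directly to the running QEMU process
gdb -p "$(pgrep -f 'qemu.*guest1')"

# option 2: connect to the gdbserver started by the wrapper script
gdb /path/to/qemu-xen/qemu-system-i386 -ex 'target remote :1234'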
Related
So I am trying to get started developing on Fuchsia and I wanted to get the hello world component to run. However, following these steps doesn't work for me. I'm using core.qemu-x64 running on an Ubuntu 20.04 VM with Virtual Box. I have gotten the emulator to run with fx qemu -N but fx vdl start -N hasn't worked for me.
I run fx serve-updates but it just outputs "Discovery..." and never changes. Then I try to run fx shell run fuchsia-pkg://fuchsia.com/hello-world-cpp#meta/hello-world-cpp.cmx but it says "No devices found." It seems like this shouldn't be an issue, because on Linux the device finder should pick it up automatically. Regardless, I tried following the Mac instructions and setting the device with fx set-device 127.0.0.1:22. That just makes the run command say "ssh: connect to host 127.0.0.1 port 22: Connection refused". I also tried setting the device to the nodename output by the fx qemu -N command, which is "fuchsia-####-####-####", but that just makes the run command say no devices are found again.
I have verified that I actually have the hello-world packages with the fx list-packages hello-world which outputs all the hello-world packages as expected.
Is there any way I can get the device to be discoverable by the shell command? Alternatively, can I run components like the hello-world component from the qemu emulator directly?
Please let me know if I can provide any additional information.
I guess I just wasn't patient enough. I assumed the emulator was done getting set up because it stopped giving console output and allowed me to input commands, but it seems I just had to wait longer. After 50 minutes of the fx qemu -N command running, the terminal that had fx serve-updates going finally picked up the device. Then I was able to execute the hello world component. It would be nice if the documentation at least gave an idea of how long the different commands take before they're usable.
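For reference, the sequence that ended up working (only commands already mentioned above) was roughly:

fx qemu -N          # wait until the emulator has really finished booting; this can take a long time
fx serve-updates    # in a second terminal; wait until it reports the discovered device
fx shell run fuchsia-pkg://fuchsia.com/hello-world-cpp#meta/hello-world-cpp.cmx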
I would like to install a custom kernel image on a Google Compute Engine instance. I have an instance running with:
foo#instance-1:/boot/efi$ uname -a
Linux instance-1 4.10.0-22-generic #24-Ubuntu SMP Mon May 22 17:43:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
And I've built and installed my kernel image:
sudo dpkg -i linux-image-4.10.0-rc8_4.10.0-rc8_amd64.deb
It shows up in the grub configuration file, I've set the default grub menu item to the correct number, and I've run
sudo update-grub
Yet, when I reboot, I get the same kernel I started with.
Google documentation on this seems to be non-existent. There is one spot that suggests I might have to create the image externally, install the kernel, and import it. However, I will need to do this a lot, so I'd rather just install new kernels the old fashioned way.
Turns out that in Google's stock Ubuntu image, there's a grub config file:
/etc/default/grub.d/50-cloudimg-settings.cfg
that overrides what's in
/etc/default/grub
Editing the first file got everything working.
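A minimal sketch of the fix (the exact contents of 50-cloudimg-settings.cfg vary by image, so treat GRUB_DEFAULT below as just an example of the kind of setting that can override your change):

sudo nano /etc/default/grub.d/50-cloudimg-settings.cfg   # change or comment out the setting (e.g. GRUB_DEFAULT) that overrides /etc/default/grub
sudo update-grub
sudo reboot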
Before attempting this, I assume you have a fallback option? Some way of falling back to your current state. This is important because it seems you may not have physical access to the system.
Please check what /boot/grub/grub.cfg shows as the default kernel. It will be a section beginning with menuentry and, under that, an entry starting with linux. If that points to /boot/<default-kernel>, then that's what you need to update, along with the initrd entry, so that both the kernel image and the initramfs point to your custom kernel.
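For example, to list the menu entries and the kernel/initrd lines they point to:

grep -E '^menuentry|^[[:space:]]*(linux|initrd)' /boot/grub/grub.cfg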
Also, the boot order of kernel images may effectively be alphabetical, so newer kernel images (later in alphabetical order) take precedence over older ones. In that case, renaming your custom kernel image so it sorts after the default one, doing the same for the corresponding initramfs and config files (they are all named similarly), and then running update-grub may be a quicker way of booting into your custom kernel. You can find those files under /boot/.
What worked for me was going into /boot/, removing the old images, then running sudo dpkg -i <new_image> and rebooting the system with sudo reboot.
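A sketch of those steps (the old-image file names are hypothetical; use whatever you actually see under /boot/, and note that purging the old linux-image package would be the cleaner way to remove them):

cd /boot
sudo rm vmlinuz-4.10.0-22-generic initrd.img-4.10.0-22-generic   # hypothetical old-image names
sudo dpkg -i <new_image>
sudo reboot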
I have a Linux x86 application inside a docker container and I want to run it under Windows. I don't want to force users to install Virtual Box. Ideally a qemu or similar virtualization tool can be used, since it is very tiny and requires no installation at all.
My approach was to use qemu for Windows and
boot2docker, so I can boot a minimal Linux with docker installed and then run my docker container within it.
This is the command I'm using to run it:
qemu-system-x86_64.exe -m 256 -cdrom boot2docker.iso
The boot goes well, but I have several problems:
at every boot the image goes through all the configuration steps (generating keys for ssh, setting the hostname, etc.) that could be skipped the second time the image runs; it seems that changes to the image are not persisted across runs. I want to build an image that is already configured and only needs to boot;
to add my application inside the image I have to rebuild the whole boot2docker.iso image by using the steps described in How to build boot2docker.iso locally.
So, the question is: how can I use the base boot2docker.iso image and add some persisting data (such as configurations and my application)? Perhaps a read/write partition mounted from another file?
I like the idea.
Maybe you can check MobaliveCD; it has a nice lightweight GUI and it embeds a qemu system inside. I tried it with the Tiny Core live CD ISO (the base of boot2docker), and it works quite OK.
It seems it doesn't support 64-bit (which boot2docker needs), but otherwise the functionality fits your need.
Your command
qemu-system-x86_64.exe -m 256 -cdrom boot2docker.iso
boots from the ISO; what you want is to:
reserve some disk space for this ISO in a .img
run this ISO and install it into that .img
reboot
On Linux you would start with
qemu-img create -f qcow2 /home/myuser/my_image.img 6G
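You could then attach that image as a disk next to the ISO, so the VM has somewhere writable to keep its state (the -hda option and image name are just an example; you may still need to partition and format the disk from inside boot2docker before it can be used for persistence):

qemu-img create -f qcow2 boot2docker-data.img 6G
qemu-system-x86_64.exe -m 256 -cdrom boot2docker.iso -hda boot2docker-data.img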
There is a docker-cli for Windows, which seems to be what you are looking for; see
http://azure.microsoft.com/blog/2014/11/18/docker-cli-for-windows-clients/
You can use boot2docker http://boot2docker.io/
The boot2docker installer sets up VirtualBox behind the scenes.
You only have to start the boot2docker shortcut; the VirtualBox management UI and the VMs stay hidden.
One of the commands in my bash script will depend on the virtualization of the server (Xen, OpenVZ, or KVM). How can I check which of these is in use in bash?
There's a very useful script called imvirt that handles Xen, OpenVZ, VMware, VirtualBox, KVM, and lots of others. It's available as a package in Debian, or from the imvirt web site.
$ imvirt
Xen PV 4.1
I found a small shell script that can detect virtualization; it handles Xen, OpenVZ, KVM, Parallels, VMware, and many more:
virt-what
Installation with yum is pretty straightforward.
Here is the output on my system
$ virt-what
kvm
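Since the question is about branching in a bash script, here is a minimal sketch using virt-what's output (run as root; the matched strings are examples of virt-what's usual output tokens):

#!/bin/bash
case "$(virt-what)" in
  *xen*)    echo "Xen guest" ;;
  *openvz*) echo "OpenVZ container" ;;
  *kvm*)    echo "KVM guest" ;;
  *)        echo "bare metal or unrecognized hypervisor" ;;
esac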
If you want to detect the Xen host (dom0), check
[ "$(cat /proc/xen/capabilities)" == "control_d" ]
If you want to detect from inside a VM,
you need to execute the cpuid instruction in the VM, with EAX=1.
If the resulting ECX has its MSB set ((ecx & 0x80000000) != 0), then you are under a VM.
This assumes that your hypervisor supports the viridian interface; Xen does.
A cpuid package is readily available for many Linux distros. I'm sure a Windows port is available too. Otherwise, the code is pretty simple to write yourself.
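On Linux you can also read that same CPUID bit without extra tooling, since the kernel exposes it as the hypervisor flag in /proc/cpuinfo (whether a Xen PV guest shows it can depend on the hypervisor version, so treat a missing flag with care):

if grep -qw hypervisor /proc/cpuinfo; then
    echo "running inside a VM"
else
    echo "no hypervisor bit reported"
fi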
My shell provisioner is a small bash script that apt-gets a few things, installs a few Perl modules through cpan, sets up Apache and MySQL, echoes some text, and exits.
Except that after printing its final message, it seems not to exit, but hangs forever.
Am I forgetting to do something? How can I begin to debug this?
If I use the VirtualBox manager to close the VM, I get a stack trace whose head reads,
/Applications/Vagrant/embedded/gems/gems/net-ssh-2.6.7/lib/net/ssh/ruby_compat.rb:30:in `select': closed stream (IOError)
Host OS: OS X Snow Leopard
Guest OS: Ubuntu via precise32
TIA
This is really a comment but I don't have enough reputation to post it as a comment.
I would suggest two techniques to debug this problem.
1) Enable debugging in Vagrant like so:
VAGRANT_LOG=info vagrant up
2) Add set -x at the top of your shell script to link each line of the script to the output it produces when run. This should let you see which line of your shell script is hanging.
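For example, at the top of the provisioning script (the apt-get line is just a placeholder step):

#!/bin/bash
set -x             # print every command before it runs, so the hanging line shows up in the vagrant up output
apt-get update -y  # placeholder provisioning step
echo "provisioning finished"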
Updating your question with the Vagrantfile will also help us guide you in the right direction.
This issue should be resolved in a Vagrant release 1.2.4 or newer, which includes a fix which closes the ssh channel when the shell provisioner exits.