I'm trying to boot my kernel image (a bzImage built with buildroot) with kexec. The bzImage has an embedded initrd too. But when I try kexec, it just hangs, and I'm not sure where I can see kexec logs.
[root@localhost boot]# kexec -v
kexec-tools-testing 20070330 released 30th March 2007
[root@localhost boot]# kexec -l /boot/bzImage -d --command-line="quiet noapic ro noswap"
setup_linux_vesafb: 800x600x16 # f0000000 +1d4c00
[root@localhost boot]# kexec -e
The older kexec binary clearly states in its help text that it doesn't support bzImage yet, but newer ones do, so I in fact tried all kexec versions (2.0.0/1/2/3, and some test versions too) with the same result. I'm running kexec on CentOS 5.5 32-bit, and the bzImage is built for i386. This is actually a VM on XenServer, but I don't think that should matter. Interestingly, if I install my bzImage locally via grub.conf (using the same cmdline above), it boots fine with the bootloader, so the image itself seems to be fine.
I'm pretty new to Linux boot stuff so probably missing something very obvious here. Any help or pointers provided will be appreciated.
Not sure what worked the magic here, but upgrading the busybox package inside the kernel image (through buildroot) helped. It started booting fine with all kexec versions. One remaining problem: the box's console is garbled for some reason (post kexec), but if I ssh to the box everything shows up fine.
I am trying to set up ROS and Gazebo in a VM running Ubuntu.
The goal is that I want to simulate Turtlebot with the open manipulator.
I installed everything without any issues.
However, I am not able to launch the Turtlebot environment in Gazebo (as described here: http://emanual.robotis.com/docs/en/platform/turtlebot3/simulation/)
$ roslaunch turtlebot3_fake turtlebot3_fake.launch
results in Gazebo hanging forever in the "loading your world" state. After some time, it stops responding.
Launching the empty world however works.
I am using ROS 1 with Gazebo 7.0
My hardware setup:
MacBook Pro 13" 2019 with 16 GB RAM
Parallels VM: 3D acceleration ON, no performance limit, 4 CPU cores, 12 GB RAM
Thank you so much for your help.
After every change you make, source your bashrc and make sure to run:
catkin_make
If you've done this already, then check whether ROS is installed properly by running
roscore
in one terminal and let it keep running.
After that try to launch your turtlebot on another terminal.
If it doesn't work even though you have installed everything needed, I think the problem is with your VM; I'd recommend running ROS from a bootable Ubuntu USB stick.
cd ~/.gazebo/
mkdir models
cd models/
wget http://file.ncnynl.com/ros/gazebo_models.txt
wget -i gazebo_models.txt
ls model.tar.g* | xargs -n1 tar xzvf
Try this. Gazebo tries to download model packages on first launch, which is why it waits; you need an internet connection for that, and it may take a few minutes.
I have a lot of log files from JBoss Fuse that I want to visualize in Kibana.
I've installed Elasticsearch and Kibana.
I have also installed the plugin ingest-geoip (bin/elasticsearch-plugin install ingest-geoip).
Now I am trying to install Filebeat.
I've done this OK:
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.2-darwin-x86_64.tar.gz
tar xzvf filebeat-6.2.2-darwin-x86_64.tar.gz
cd filebeat-6.2.2-darwin-x86_64/
But when I want to run it I get the following:
sindre@selite:/usr/lib/filebeat$ ./filebeat modules enable system
bash: ./filebeat: cannot execute binary file: Exec format error
NB! This is my first time using Kibana, so please point me in the right direction if I am using it wrong. As I wrote earlier, I want to use it for JBoss Fuse log files.
filebeat-6.2.2-darwin-x86_64
There's your clue. Darwin is the name of the core Unix distribution underlying macOS (OS X):
https://en.wikipedia.org/wiki/Darwin_(operating_system)
It is extremely unlikely that a compiled Darwin binary would be compatible with a Linux system.
You really want to be looking at the Linux x86_64 package.
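A minimal sketch of picking the matching package for the running OS (the 6.2.2 version is taken from the question; the URL pattern follows Elastic's standard artifact layout, so double-check it against the downloads page):

```shell
# Map the running OS to the matching Filebeat package suffix
# (version pinned to 6.2.2, the one used in the question).
case "$(uname -s)" in
  Linux)  suffix=linux-x86_64 ;;
  Darwin) suffix=darwin-x86_64 ;;
  *)      echo "unsupported OS: $(uname -s)" >&2; exit 1 ;;
esac
url="https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.2-${suffix}.tar.gz"
echo "$url"
# then: curl -L -O "$url" && tar xzvf "filebeat-6.2.2-${suffix}.tar.gz"
```

Running `file ./filebeat` on the unpacked binary is a quick sanity check afterwards: a binary Linux can execute reports ELF, while the darwin build reports Mach-O.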
If you have a running instance of Kibana on your system, you can easily configure it for any underlying operating system (Linux/macOS) with a few provided commands:
visit: Home>>Add data>>System logs
current_url_for_demo: http://localhost:5601/app/kibana#/home/tutorial/systemLogs?_g=()
Visual explanation: (screenshot omitted)
I would like to install a custom kernel image on a Google Compute Engine instance. I have an instance running with:
foo@instance-1:/boot/efi$ uname -a
Linux instance-1 4.10.0-22-generic #24-Ubuntu SMP Mon May 22 17:43:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
And I've built and installed my kernel image:
sudo dpkg -i linux-image-4.10.0-rc8.10.0-rc8_amd64.deb
It shows up in the grub configuration file, I've set the default grub menu item to the correct number, and I've run
sudo update-grub
Yet, when I reboot, I get the same kernel I started with.
Google documentation on this seems to be non-existent. There is one spot that suggests I might have to create the image externally, install the kernel, and import it. However, I will need to do this a lot, so I'd rather just install new kernels the old fashioned way.
Turns out that in Google's stock Ubuntu image, there's a grub config file:
/etc/default/grub.d/50-cloudimg-settings.cfg
that overrides what's in
/etc/default/grub
Editing the first file got everything working.
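For example, the override can look like this (a sketch; the exact menu entry title is a placeholder and must match what your generated grub.cfg actually contains):

```
# /etc/default/grub.d/50-cloudimg-settings.cfg
# Point GRUB at the custom kernel's menu entry instead of entry 0
GRUB_DEFAULT="Advanced options for Ubuntu>Ubuntu, with Linux 4.10.0-rc8"
```

Run sudo update-grub again afterwards so the change is picked up.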
Before attempting this, I assume you have a fallback option? Some way of falling back to your current state. This is important because it seems you may not have physical access to the system.
Please check what /boot/grub/grub.cfg shows as default kernel. It will be a section beginning with menuentry and under that, an entry starting with linux. If that points to /boot/<default-kernel> then that's what you need to update along with initrd entry so that both kernel image and initramfs point to your custom kernel.
Also, it's possible that the boot order of kernel images is alphabetical, so newer kernel images (later in alphabetical order) take preference over older ones. In that case, if you rename your kernel image so it sorts higher than the default kernel image (and do the same for the corresponding initramfs and config files, which are all similarly named) and then run update-grub, that may be a quicker way of booting into your custom kernel. You can find those files under /boot/.
What worked for me was going into /boot/, removing the old images, running sudo dpkg -i <new_image>, and rebooting the system with sudo reboot.
I have a Linux x86 application inside a docker container and I want to run it under Windows. I don't want to force users to install Virtual Box. Ideally a qemu or similar virtualization tool can be used, since it is very tiny and requires no installation at all.
My approach was to use qemu for Windows and boot2docker, so I can boot a minimal Linux with docker installed and then run my docker container within it.
This is the command I'm using to run it:
qemu-system-x86_64.exe -m 256 -cdrom boot2docker.iso
The boot goes well, but I have several problems:
at every boot the image goes through all the configuration steps (generating keys for ssh, setting the hostname, etc.) that could be skipped the second time the image runs; it seems that changes to the image are not persisted through runs. I want to build an image that is already configured and only needs to boot;
to add my application inside the image I have to rebuild the whole boot2docker.iso image by using the steps described in How to build boot2docker.iso locally.
So, the question is: how can I use the base boot2docker.iso image and add some persisting data (such as configurations and my application)? Perhaps a read/write partition mounted from another file?
I like the idea.
Maybe you can check MobaLiveCD; it has a nice lightweight GUI and it embeds the qemu system inside. I tried it with the Tiny Core live CD iso (the base of boot2docker), and it works quite well.
It seems it doesn't support 64-bit (which boot2docker needs), but otherwise the functionality fits your need.
Your command
qemu-system-x86_64.exe -m 256 -cdrom boot2docker.iso
boots from the ISO every time. What you want is to:
reserve some disk space for this ISO in a .img
run the ISO and install onto that .img
reboot
On Linux you would start by doing
qemu-img create -f qcow2 /home/myuser/my_image.img 6G
and then boot with the disk attached, for example:
qemu-system-x86_64 -m 256 -cdrom boot2docker.iso -hda /home/myuser/my_image.img -boot d
(boot2docker persists its data on a partition labelled boot2docker-data if it finds one on the attached disk.)
There is a Docker CLI for Windows, which seems to be what you are looking for; see
http://azure.microsoft.com/blog/2014/11/18/docker-cli-for-windows-clients/
You can use boot2docker http://boot2docker.io/
On boot2docker installation, it will install virtualbox behind the scenes.
You only have to start the boot2docker shortcut and the virtual box management and vms are hidden.
I'm trying to automate the download of a 3 GB file into a VM managed with Vagrant and Puppet. The file downloads and appears to be the full three gigs, but it consistently fails the md5sum check against the .md5 file I download alongside it. Conversely, if I download it outside the VM on my Mac (with wget) into a folder shared with the VM, then ssh into the VM and check the md5, it passes. Any suggestions?
Code example:
wget http://mymachine.com/archive.zip && wget http://mymachine.com/archive.md5
md5sum -c archive.md5
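To see what a passing check looks like, the md5sum -c mechanics can be reproduced with a stand-in file (archive.zip here is just a placeholder, not the real 3 GB download):

```shell
# Create a stand-in archive and an .md5 file in the usual
# "HEXDIGEST  filename" format, then verify it.
printf 'payload\n' > archive.zip
md5sum archive.zip > archive.md5
md5sum -c archive.md5      # prints "archive.zip: OK" on success
```

A mismatch instead prints "archive.zip: FAILED" and md5sum exits non-zero, which is what the in-VM download is hitting.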
What I'm running:
Local Machine: Mac OSX Mavericks
VM OS: CentOS 6.4
Vagrant Version: 1.3.5
VirtualBox Version: 4.3.4
I would run the problematic wget with the -d flag on its own.
Review http://www.gnu.org/software/wget/manual/wget.html#Reporting-Bugs for more info on finding out what wget is doing.
This may give you some clues as to why it does not work in the VirtualBox VM.
Then review https://forums.virtualbox.org/viewtopic.php?f=24&t=48476 for the steps required to query the VirtualBox logs and report problems.
Finally, you could try older versions of VirtualBox to see if the problem still occurs.
Hope this helps.
I was having the same problem with a very similar configuration and downgrading to VirtualBox 4.2.24 works for me as a workaround.
Host Hardware: MacBook Pro
Host OS: Mac OS X Mavericks (10.9.2)
Guest OS: CentOS 6.4
Vagrant: 1.4.3
VirtualBox: Tried 4.3.6, 4.3.8, 4.3.10 and they all resulted in corrupted wget downloads inside the guest OS for large zip files (600MB - 1.2GB)
Not sure what is going on as I thought VirtualBox 4.3.+ added better Mavericks support.