While trying to compile, build, and boot a custom kernel inside VMware Workstation, the new kernel fails to boot and drops to an emergency shell with the error "failed to find disk by uuid".
I tried this with both Ubuntu and CentOS.
Things I tried that didn't help:
checking the UUID mapping in the boot entry and that the UUID exists under /dev/disk/by-uuid (see the commands below)
running update-initramfs
replacing root=UUID=<> with the plain device path /dev/sda3
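For reference, the UUID mapping can be verified from a working boot with something like:

blkid                      # print the UUID of every block device
ls -l /dev/disk/by-uuid/   # confirm the UUID in the boot entry resolves to the right partition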
Is this an issue with VMware Workstation? How can it be rectified?
I had a similar fault with my own attempts to bootstrap Fedora 22 onto a blank partition using a CentOS install on another partition. I never did solve it completely, but I did find the problem was in my initrd rather than the kernel.
The problem is that the initrd isn't starting LVM, because dracut didn't tell the initrd that it needs LVM. Therefore, if you start LVM manually, you should be able to boot into your system and fix it.
I believe this is the sequence of commands I ran from the emergency shell to start LVM:
vgscan          # scan all block devices for LVM volume groups
vgchange -ay    # activate every volume group that was found
lvs             # list the logical volumes to confirm they are now visible
Followed by exit to resume normal boot.
You might have to mount your LVM /etc/fstab entries manually; I don't recall whether I did or not.
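If so, something along these lines should pick up the remaining fstab entries (mount -a is standard; remounting root read-write first is often necessary in an emergency shell):

mount -o remount,rw /   # make sure the root file system is writable
mount -a                # mount everything listed in /etc/fstab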
Try this:
sudo update-grub
Then rebuild the initramfs (mkinitcpio is the Arch Linux tool; rough equivalents for Ubuntu and CentOS are sketched below):
mkinitcpio -p linux
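Since the question involves Ubuntu and CentOS rather than Arch, the corresponding commands would be something like this (the grub2 paths assume CentOS defaults):

sudo update-initramfs -u                       # Ubuntu: rebuild the initramfs for the running kernel
sudo dracut -f                                 # CentOS: rebuild the initramfs
sudo grub2-mkconfig -o /boot/grub2/grub.cfg    # CentOS: regenerate the GRUB config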
It won't hurt to check your /etc/fstab file. There you should find the UUID of your drive; make sure the proper flags are set (a sample entry is sketched below).
Also, there's a setting in grub.cfg that has GRUB use the old style of hexadecimal UUIDs. Check that out as well!
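For reference, a typical UUID-based fstab line looks like this (the UUID and options here are placeholders; take the real value from blkid):

# <device>                                  <mount point>  <type>  <options>  <dump>  <pass>
UUID=0a52b34b-1719-4d24-8049-bdcd757f2a3f   /              ext4    defaults   0       1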
The issue is with the creation of the initramfs. After doing a
make oldconfig
and choosing the defaults for new options, make sure that enough disk space is available for the image to be created.
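A quick way to check the free space before building (df is standard; the initramfs is written to /boot on most setups):

df -h /boot /   # the image lands in /boot, and the build tree needs space too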
In my case the image created was not correct, and hence it was failing to mount at boot time.
When compared, its size was quite a bit smaller than the existing image of the lower version, so I added another disk with more than sufficient space and then reran the build:
make bzImage           # build the compressed kernel image
make modules           # build the kernel modules
make modules_install   # install the modules under /lib/modules/<version>
make install           # install the kernel and regenerate the initramfs
Everything started working like a charm.
I still wonder why the image creation completed earlier and produced a corrupt (undersized) image without throwing any error, every single time.
I recently upgraded a system. After the reboot I was not able to log in again; all users were rejected with "Login incorrect". systemd with journaling was running and writing error messages to files in /var/log/journal as usual.
So I booted a system from a recovery USB stick (same distribution), mounted the root device of the failed system at /mnt, and tried to analyze the logs with journalctl --root=/mnt/var/log/journal -xe. journalctl did not find any journal files.
Question: how can I read the systemd journal content of a dead system using a recovery system?
have fun
I may be a bit late, but I stumbled upon this question and here is what I found:
journalctl logs are located under /var/log/journal/*
The journalctl tool can read foreign journal files with the following switches:
--file= followed by the *.journal file of your choice; this option may be used multiple times
--root= followed by the root directory of your choice, typically a mounted partition
--image= followed by a disk image
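For example, assuming the dead system's root is mounted at /mnt. Note that --root expects the root of the file system hierarchy, not the journal directory itself, which is why the command in the question found nothing:

journalctl --root=/mnt -xe                                           # journalctl locates /mnt/var/log/journal on its own
journalctl --file=/mnt/var/log/journal/<machine-id>/system.journal   # or read one journal file directly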
I want to run Linux on QEMU to test some system design. I use the default config for x86_64. I found that I must append "console=ttyS0" to the boot arguments to get kernel messages; without ttyS0 I can't get any output.
So I wanted to check QEMU's device tree, but another strange thing is that there is no device-tree directory under /sys/firmware.
I want to check the following questions:
Why do we need "console=ttyS0" to get output?
Doesn't x86 use a device tree during booting?
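For reference, a minimal invocation that shows this behaviour would look something like the following (the bzImage path assumes an in-tree x86_64 build):

qemu-system-x86_64 -kernel arch/x86/boot/bzImage -append "console=ttyS0" -nographic

With -nographic, QEMU connects its stdio to the emulated serial port, so the kernel has to be told to direct its console to ttyS0 for any messages to appear there.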
I've been building my own kernel (4.19.37) and have no issues during the build (make) or install (make modules_install + make install). Everything seems to go fine until I execute grub2-mkconfig -o /boot/grub2/grub.cfg. When executing this command, grub finds both my existing and new vmlinuz-* kernels in /boot/ as well as their corresponding initramfs-*.img. However, at that point the system hangs indefinitely (> several hours). Ctrl+C does not stop it and I must reboot. I have looked into this issue, and all I have found that could be a problem is the probing of removable disks for bootable OSes, which I have eliminated both by removing them and by adding GRUB_DISABLE_OS_PROBER=true to /etc/default/grub per this SE post. Neither has helped.
Upon reboot, I end up at the grub> command line, presumably because the grub2-mkconfig never finished and corrupted the grub configuration file. Here I can load both the old and new kernel without any issue, as well as initramfs, but when I execute boot I get a kernel panic:
end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(1,0)
Naturally, it is my assumption that there is something wrong with my initramfs-4.19.37.img that was created by my build process. As an experiment, I tested if I could load the new kernel, but use the old initramfs (4.19.10), and indeed it does boot into emergency mode. I however cannot do the opposite, old kernel with new initramfs. So something is fishy with my new initramfs image.
Getting smarter, my last experiment was mounting the old and new initramfs image with mount. They both mount successfully with no errors, and seem to have identical file structures. I have also compared both my new and old .config files for the kernel builds, and the differences are trivial.
A few other notes/observations:
In the panic output (the screenshot is not reproduced here), you can see List of all partitions: produces nothing, so I am wondering if there is an issue with the file system type? My hard drive is xfs; what is the file system for the initramfs, CPIO?
At the grub> command line, ls / produces what I expect to see in /boot. It contains all my vmlinuz-* and initramfs-*.img files
My file system is xfs
I've tried various other kernel versions with same results
I have twice had successful builds and installs, once was the existing kernel (4.19.10), it was an upgrade, and a second time with the same kernel with a low-latency pre-emption model. I can't for the life of me figure out what I did differently then.
So the final question(s) are: what's wrong with the initramfs from these builds? What else can I do to validate its integrity? Are there any .config changes I should make when building the kernel for an xfs file system?
Disclaimer: this is actually a continuation of [this question][3], but I've simplified the problem a bit. Some background info there might be relevant.
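For anyone wanting to dig into an initramfs by hand, a couple of checks that work on a dracut-based system such as CentOS (lsinitrd ships with dracut; the cpio pipeline assumes a gzip-compressed image):

lsinitrd /boot/initramfs-4.19.37.img           # list the contents and the dracut modules baked in
zcat /boot/initramfs-4.19.37.img | cpio -itv   # or unpack/list it manually
grep CONFIG_XFS_FS .config                     # confirm xfs support is built in or available as a module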
After updating the kernel using yum update and rebooting the VM into the new kernel, you get a kernel panic.
The following commands fixed this problem:
yum remove kernel   # remove the newly installed (broken) kernel package
yum update          # pull the update in again cleanly
I am trying to add a .bin file (named wiki.de.bin) to a Docker image via a Dockerfile. When I try to build it, I get an error message:
Error processing tar file(exit status 1): write /app/wiki.de.bin: no space left on device.
I have run docker system prune as well as docker volume ls -qf dangling=true; however, it does not help.
What should I do?
I am using Windows 10 Home which has Hyper-V available.
Here is the relevant system information (screenshot not reproduced here).
Does it have anything to do with the fact that I have only 6.42GB of available virtual memory? If yes, how do I resolve this issue?
Did you also try docker volume prune?
Sometimes I have to do this and then regenerate my volumes later.
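A fuller cleanup sequence, for what it's worth. Note that docker volume ls -qf dangling=true from the question only lists dangling volumes; it does not remove anything:

docker volume prune                                       # remove all dangling volumes
docker volume rm $(docker volume ls -qf dangling=true)    # or remove the listed volumes explicitly
docker system prune --volumes                             # reclaim unused containers, images, networks and volumes in one go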
I have a pretty "unconventional" setup that looks like this:
I have a VMware virtual machine running Ubuntu Desktop 12.10 with networking set to bridge. Installed inside the VM is nginx, php-fpm, php-cli and MySQL. PHP is 5.4.10.
I have a folder containing all the files for the application I am working on in the host (Windows 7). This folder is then made available as a network share.
The virtual machine then mounts the windows share via samba into a directory. This directory is then served by nginx and php-fpm. So far, everything has worked fine.
I have a script that I run via the PHP CLI to help me build and process some files. Previously, this script has worked fine on my Windows host. However, it seems to throw "cannot allocate memory" errors when I run it in the Ubuntu VM. The weird thing is that the error is sporadic and does not happen every time.
user@ubuntu:~$ sudo /usr/local/php/bin/php -f /www/app/process.php
Warning: require_once(/www/app/somecomponent.php): failed to open stream: Cannot allocate memory in /www/app/loader.php on line 130
Fatal error: require_once(): Failed opening required '/www/app/somecomponent.php' (include_path='.:/usr/local/php/lib/php') in /www/app/loader.php on line 130
I have checked and confirmed the following:
/www/app/somecomponent.php definitely exists.
The permissions for /www and all files and subdirectories inside are set to execute and read+write for owner, group, and others.
I have turned off APC after reading this question to see if APC is the cause, but the problem still persists after doing so.
php-cli is using the same php.ini as php-fpm, which is located in /etc/php/php.ini.
memory_limit in php.ini is set to 128M (default) which seems plenty for the app.
Even after increasing the memory limit to 256M, the error still occurs.
The VM has 2GB of memory.
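For completeness, the limit the CLI actually sees can be confirmed directly (php -r and ini_get are standard):

/usr/local/php/bin/php -r 'echo ini_get("memory_limit"), PHP_EOL;'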
I have been googling to find out what causes cannot allocate memory errors, but have found nothing useful. What could be causing this problem?
It turns out this was a problem with my Windows share. Perhaps because Windows 7 is a client OS, it is not tuned to serve large numbers of files frequently (which is what is happening in my case).
To fix, set the following keys in the registry:
HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\LargeSystemCache to 1
HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\Size to 3
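If it helps, these can be applied from an elevated command prompt with the standard reg tool (reboot afterwards to be safe):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v LargeSystemCache /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v Size /t REG_DWORD /d 3 /f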
I have been running my setup with these new settings for about a week and have not encountered any memory allocation errors since making the change.