I'm running Docker for Windows (the effect described below is also observable on Mac OS X).
I have a Docker container in which a program tries to access a squashfs image. To access squashfs, the kernel has to either be compiled with loop device support built in statically, or be able to load the relevant kernel module.
When I try to mount the image or set up the loop device, the kernel that's shared between Docker containers cannot find the loop device module.
I could possibly use the unsquashfs tool, but the squashfs image is used for a reason: squashfs has a very decent property in that it allows an unlimited number of files and inodes. If I try to unpack the image, I quickly hit the inode limit of the Docker images.
Is the Moby Linux kernel that ships with Docker statically compiled? Which volume should I mount to get access to its /lib/modules? lsmod run in a privileged container lists no loaded modules, and trying to modprobe loop yields the following error message:
root@6e1b23cc65e5:/# modprobe loop
modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/4.9.8-moby/modules.dep.bin'
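For reference, here is a minimal sketch of the failing workflow (the image path and mount point are hypothetical examples, and --privileged is required for mount to be attempted at all):
docker run --rm -it --privileged -v $(pwd):/data ubuntu bash
# inside the container; /data/image.squashfs is a made-up example path
mount -o loop -t squashfs /data/image.squashfs /mnt
# fails, since the shared Moby kernel cannot load the loop module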
Related
I'm getting started with i.MX6 processors, and the procedure to bring up the board is to flash U-Boot, the kernel, the DTB and the rootfs, which is taken care of by the MFG tools provided by NXP.
For creating a rootfs partition, the command run is:
mkfs.ext3 -F -E nodiscard /dev/mmcblk1p2
and for untarring the rootfs into this partition it is:
pipe tar -jxv -C /mnt/mmcblk1p2
I'd like to know how this works: which kernel driver is being called to execute these commands?
My kernel version is 4.9.88.
I did find a few driver files related to MMC in the kernel source path
drivers/mmc/core
but there is nothing related to filesystem reading or writing there.
Can anyone explain which driver files are used to create filesystems?
Creating the filesystem is done by the userspace program mke2fs (of which mkfs.ext3 is an alias), which is part of the e2fsprogs package. The kernel and its drivers have no way of creating filesystems. Therefore, only the block driver for accessing the MMC device is involved, but no filesystem driver.
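You can verify this yourself by tracing what mkfs.ext3 does, if strace is available on your machine; everything it performs ends up as plain opens and writes on the block device node (the device name is the one from the question, and this will of course wipe it):
# trace the syscalls mkfs.ext3 issues against the device node
strace -f mkfs.ext3 -F -E nodiscard /dev/mmcblk1p2 2>&1 | grep mmcblk1p2
# only open/read/write-style calls on /dev/mmcblk1p2 appear: the ext3
# structures are computed in userspace and written out as raw blocks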
I've been building my own kernel (4.19.37) and have no issues during the build (make) or install (make modules_install + make install). Everything seems to go fine until I execute grub2-mkconfig -o /boot/grub2/grub.cfg. When executing this command, grub finds both my existing and new vmlinuz-* kernels in /boot/ as well as their corresponding initramfs-*.img. However, at that point the system hangs indefinitely (> several hours); Ctrl+C does not stop it and I must reboot. The only potential cause I have found is the probing of removable disks for bootable OSes, which I have eliminated both by removing them and by adding GRUB_DISABLE_OS_PROBER=true to /etc/default/grub per this SE post. Neither has helped.
Upon reboot, I end up at the grub> command line, presumably because grub2-mkconfig never finished and left the grub configuration file corrupted. Here I can load both the old and new kernels without any issue, as well as their initramfs, but when I execute boot I get a kernel panic:
end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(1,0)
Naturally, my assumption is that something is wrong with the initramfs-4.19.37.img created by my build process. As an experiment, I tested whether I could load the new kernel with the old initramfs (4.19.10), and it does indeed boot (into emergency mode). I cannot do the opposite, however: old kernel with new initramfs. So something is fishy with my new initramfs image.
Getting smarter, my last experiment was mounting the old and new initramfs images with mount. They both mount successfully with no errors and seem to have identical file structures. I have also compared my new and old .config files for the kernel builds, and the differences are trivial.
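For what it's worth, an initramfs image is not a filesystem image but a (typically gzip-compressed) cpio archive, so both images can also be listed and diffed without mounting (assuming dracut-style images under /boot; other compression formats need a different decompressor):
lsinitrd /boot/initramfs-4.19.37.img | head    # on dracut-based systems
# or by hand:
zcat /boot/initramfs-4.19.37.img | cpio -it | head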
A few other notes/observations:
In the panic output above, you can see that List of all partitions: produces nothing, so I am wondering if there is an issue with the file system type. My hard drive is xfs; what is the file system for the initramfs? CPIO?
At the grub> command line, ls / produces what I expect to see in /boot. It contains all my vmlinuz-* and initramfs-*.img files
My file system is xfs
I've tried various other kernel versions with the same results
I have twice had successful builds and installs: once was the existing kernel (4.19.10) as an upgrade, and a second time the same kernel with a low-latency pre-emption model. I can't for the life of me figure out what I did differently then.
So the final question(s) are: What's wrong with the initramfs from these builds? What else can I do to validate its integrity? Are there any .config changes I should make when building the kernel for the xfs file system?
Disclaimer: This is actually a continuation of [this question][3], but I've simplified the problem a bit. Some background info there might be relevant.
After updating the kernel using yum update and rebooting the VM with the new kernel, you get a kernel panic error.
The following commands will fix this problem:
yum remove kernel
yum update
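After the reboot, you can confirm which kernel is actually running and which kernel packages remain installed:
uname -r         # kernel currently running
rpm -q kernel    # kernel packages still installed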
I got a new kernel (3.14) from Kontron for my board. After I compiled it and tried to run it on the board, I get the following error. Can anyone help?
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
The kernel can't find its root filesystem.
Investigate why that is the case. Where is the kernel looking for it? Check the kernel config and/or the command line passed from the bootloader. Check that the kernel has all the drivers needed to actually access the hardware hosting the root filesystem. Does the kernel have the needed filesystem support compiled in?
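For example, on a U-Boot based i.MX6 board you could check along these lines (the exact variable and config option names depend on your board and filesystem):
# at the U-Boot prompt: what root device is the kernel told to use?
printenv bootargs               # look for root=... and rootfstype=...
# in the kernel build tree: are the storage and filesystem drivers
# built in (=y) rather than modules (=m)? Without an initramfs there
# is nothing to load modules from at mount time.
grep -E 'CONFIG_EXT4_FS=|CONFIG_MMC' .config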
I'm working on wrapping some scientific software in a Docker image, using boot2docker on Mac OS X. That software (https://github.com/voutcn/megahit.git) uses named pipes (in Python code, but that's not important) to wire different parts (written in C) to each other. I mount a temporary folder from the host Mac OS X machine to provide a scratch area in the Docker container (because the temporary output of the software can be huge), with something like this:
docker run -v /external/folder:/tmp/scratch <image> <args>
It gives me this mount line inside container:
none on /tmp/scratch type vboxsf (rw,nodev,relatime)
And inside this mounted folder, named pipe creation fails when it runs inside the container. It's not even related to Python, C or any particular language; I double-checked with the Linux command mkfifo pipe1 in this folder, which fails with:
mkfifo: cannot create fifo 'pipe1': Operation not permitted
It works fine in any internal, non-mounted folder inside the container, though. Why does this happen, and how can it be fixed?
PS: Here is what I do to easily reproduce the problem.
1) Mac OS X with boot2docker
2) Dockerfile is:
FROM ubuntu:14.04
#WORKDIR /tmp <- this one would work
WORKDIR /tmp/scratch
ENTRYPOINT [ "mkfifo" ]
CMD [ "pipe1" ]
3) Image building:
docker build --rm -t mine/namedpipes:latest .
4) Running (from within the external host folder to be mounted):
docker run -v $(pwd):/tmp/scratch mine/namedpipes:latest
Upgrade to a recent version of Docker for Mac, and your problem will likely be solved: https://docs.docker.com/docker-for-mac/release-notes/#beta-2-release-2016-03-08-1102-beta2
The issue is that FIFOs are kernel objects that you access through the filesystem, so supporting cross-kernel FIFOs (or Unix domain sockets) would require extra work: a FIFO is valid either inside the Linux guest running the Docker daemon or on the OS X host, not in both, and it makes sense that you can't create an OS X FIFO from inside the Linux box. It would be rather like trying to create a FIFO on a network drive; it doesn't make sense as a local IPC mechanism.
Current support for special files is detailed in https://docs.docker.com/docker-for-mac/osxfs/#file-types
The issue for cross-hypervisor support is located at https://github.com/docker/for-mac/issues/483
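If upgrading is not an option, a possible workaround (a sketch, not specific to megahit) is to keep the FIFO on a container-local path, which is backed by the Linux guest's kernel, and put only the bulky data files on the mounted folder:
docker run -v $(pwd):/tmp/scratch ubuntu:14.04 bash -c '
  mkdir -p /tmp/pipes &&        # container-local, not vboxsf
  mkfifo /tmp/pipes/pipe1 &&    # succeeds: ordinary Linux filesystem
  ls -l /tmp/pipes/pipe1
'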
I have an autoscaling CloudFormation stack that I think I have set up to replace failed instances based on StatusCheckFailed_Instance. I want to test this. Can I do so by terminating one of the EC2 instances? Thanks!
An instance status check might fail for one of the following reasons:
Memory Errors
Out of memory: kill process
ERROR: mmu_update failed (Memory management update failed)
Device Errors
I/O error (Block device failure)
IO ERROR: neither local nor remote disk (Broken distributed block device)
Kernel Errors
request_module: runaway loop modprobe (Looping legacy kernel modprobe on older Linux versions)
"FATAL: kernel too old" and "fsck: No such file or directory while trying to open /dev" (Kernel and AMI mismatch)
"FATAL: Could not load /lib/modules" or "BusyBox" (Missing kernel modules)
ERROR Invalid kernel (EC2 incompatible kernel)
File System Errors
request_module: runaway loop modprobe (Looping legacy kernel modprobe on older Linux versions)
fsck: No such file or directory while trying to open... (File system not found)
General error mounting filesystems (Failed mount)
VFS: Unable to mount root fs on unknown-block (Root filesystem mismatch)
Error: Unable to determine major/minor number of root device... (Root file system/device mismatch)
XENBUS: Device with no driver...
... days without being checked, check forced (File system check required)
fsck died with exit status... (Missing device)
Operating System Errors
GRUB prompt (grubdom>)
Bringing up interface eth0: Device eth0 has different MAC address than expected, ignoring. (Hard-coded MAC address)
Unable to load SELinux Policy. Machine is in enforcing mode. Halting now. (SELinux misconfiguration)
XENBUS: Timeout connecting to devices (Xenbus timeout)
It seems to me that the first category, memory errors, is the easiest to trigger on demand. You can add a web hook, or launch a shell script after a delay, that starts a process which runs the instance out of memory, to confirm that your autoscaling configuration works as intended.
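A minimal sketch of such a test, to be run only on a disposable instance (tail /dev/zero simply allocates memory without bound; whether the resulting OOM state actually trips the instance status check depends on how badly the instance degrades):
sleep 60 &&          # leave yourself time to disconnect
tail /dev/zero       # consumes memory until the OOM killer steps in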
Terminating the instance will not help to test your configuration: when you gracefully terminate an instance, it is removed from the pool of available instances and the status check is never performed on it.
More details on status checks can be found here: Troubleshooting Instances with Failed Status Checks