Arch Linux Installation: ERROR: Root device mounted successfully, but /sbin/init does not exist

I'm fairly new to Linux but decided to dive right in with Arch Linux to become familiar with everything.
Unfortunately I can't even finish the installation - shame on me.
The error while booting after setting Arch up is:
ERROR: Root device mounted successfully, but /sbin/init does not exist.
Bailing out, you are on your own. Good luck.
I went for Btrfs on LUKS on LVM.
The layout looks like this:
sda
|- sda1 512MB fat32 /boot
`- sda2 remaining lvm
|- cryptswap 4GB swap
|- crypttmp 2GB tmp /tmp
`- cryptroot remaining btrfs
|- # /
|- #home /home
|- #snapshots /.snapshots
|- #log /var/log
|- #cache /var/cache
`- #tmp /var/tmp
These are the commands and configurations I used to set up Arch:
wipe disk
dd status=progress if=/dev/zero of=/dev/sda
gdisk /dev/sda
o clear gpt table
boot partition
n
↵
↵
+512M
ef00
lvm partition
n
↵
↵
↵
8e00
w write partition changes
setup lvm
pvcreate /dev/sda2
vgcreate vg1 /dev/sda2
lvcreate -L 4G -n cryptswap vg1
lvcreate -L 2G -n crypttmp vg1
lvcreate -l 100%FREE -n cryptroot vg1
setup encryption
cryptsetup luksFormat /dev/vg1/cryptroot
cryptsetup open /dev/vg1/cryptroot root
make filesystems
mkfs.fat -F32 -n BOOT /dev/sda1
mkfs.btrfs --label ROOT /dev/mapper/root
create btrfs subvolumes
mount /dev/mapper/root /mnt
cd /mnt
btrfs subvolume create #
btrfs subvolume create #home
btrfs subvolume create #snapshots
btrfs subvolume create #log
btrfs subvolume create #cache
btrfs subvolume create #tmp
cd ..
umount /mnt
mount btrfs subvolumes and BOOT partition
mount -o noatime,compress=lzo,space_cache=v2,discard=async,subvol=# /dev/mapper/root /mnt
mkdir /mnt/home
mount -o noatime,compress=lzo,space_cache=v2,discard=async,subvol=#home /dev/mapper/root /mnt/home
mkdir /mnt/.snapshots
mount -o noatime,compress=lzo,space_cache=v2,discard=async,subvol=#snapshots /dev/mapper/root /mnt/.snapshots
mkdir /mnt/var
mkdir /mnt/var/log
mount -o noatime,compress=lzo,space_cache=v2,discard=async,subvol=#log /dev/mapper/root /mnt/var/log
mkdir /mnt/var/cache
mount -o noatime,compress=lzo,space_cache=v2,discard=async,subvol=#cache /dev/mapper/root /mnt/var/cache
mkdir /mnt/var/tmp
mount -o noatime,compress=lzo,space_cache=v2,discard=async,subvol=#tmp /dev/mapper/root /mnt/var/tmp
mkdir /mnt/boot
mount /dev/sda1 /mnt/boot
install necessities
pacstrap /mnt base linux linux-firmware lvm2 btrfs-progs amd-ucode vim
generate fstab
genfstab -L /mnt > /mnt/etc/fstab
arch-chroot /mnt
basic configuration
ln -sf /usr/share/zoneinfo/Europe/Berlin /etc/localtime
hwclock --systohc
vim /etc/locale.gen
locale-gen
echo "LANG=en_US.UTF-8" >> /etc/locale.conf
echo "KEYMAP=de-latin1" >> /etc/vconsole.conf
echo "devstation" >> /etc/hostname
vim /etc/hosts
vim /etc/mkinitcpio.conf
the mkinitcpio.conf content:
MODULES=(btrfs)
HOOKS=(base udev autodetect keyboard keymap consolefont modconf block lvm2 encrypt filesystems fsck)
mkinitcpio -p linux
bootctl install
echo "default arch" > /boot/loader/loader.conf
vim /boot/loader/entries/arch.conf
the arch.conf content
title Arch Linux
linux /vmlinuz-linux
initrd /initramfs-linux.img
options cryptdevice=UUID={/dev/vg1/cryptroot uuid inserted here}:root root=/dev/mapper/root rw
exit
umount -a
poweroff
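A side note: the UUID that goes into the arch.conf above is the one of the LUKS container (/dev/vg1/cryptroot). A convenient way to get it into the file without typing it by hand - just a blkid one-liner, not part of my actual steps - is:
blkid -s UUID -o value /dev/vg1/cryptroot >> /boot/loader/entries/arch.conf
and then move the value onto the options line in vim.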
Then I pulled the Arch installation medium out of the computer and started it up.
The boot output:
:: running early hook [udev]
Starting version 248.3-2-arch
:: running hook [udev]
:: Triggering uevents...
:: running hook [keymap]
:: Loading keymap...done.
:: running hook [encrypt]
A password is required to access the root volume:
Enter passphrase for /dev/mapper/vg1-cryptroot: {inserting passphrase}
:: performing fsck on '/dev/mapper/root'
:: mounting '/dev/mapper/root' on real root
:: running cleanup hook [udev]
ERROR: Root device mounted successfully, but /sbin/init does not exist.
Bailing out, you are on your own. Good luck.
sh: can't access tty; job control turned off
[rootfs ]#
Obviously I didn't set up cryptswap and crypttmp yet. Those will be set up via crypttab and fstab. I am just mentioning this, and I highly doubt it is part of the problem, because right now they are just partitions not recognized by anything, aren't they?
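For completeness, a minimal sketch of how I expect to wire cryptswap and crypttmp up later via crypttab and fstab with random keys (the cipher and key-size values here are just the usual dm-crypt examples, not something already configured):
# /etc/crypttab
cryptswap  /dev/vg1/cryptswap  /dev/urandom  swap,cipher=aes-xts-plain64,size=512
crypttmp   /dev/vg1/crypttmp   /dev/urandom  tmp,cipher=aes-xts-plain64,size=512
# /etc/fstab
/dev/mapper/cryptswap  none  swap  defaults               0 0
/dev/mapper/crypttmp   /tmp  ext4  defaults,nosuid,nodev  0 0
The swap option reformats the device as swap on every boot, and the tmp option formats it with a fresh filesystem (ext4 on current systemd, as far as I know) - which is the point of using throwaway /dev/urandom keys.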
I hope I didn't miss any command or configuration - I am typing this up from videos I watched and from memory, because no single video I found had the Btrfs, LUKS, LVM combination I went with. Thanks for your time/help and for reading this through.

Adding rootflags=subvol=# to /boot/loader/entries/arch.conf like so
title Arch Linux
linux /vmlinuz-linux
initrd /initramfs-linux.img
options cryptdevice=UUID={/dev/vg1/cryptroot uuid inserted here}:root root=/dev/mapper/root rootflags=subvol=# rw
did the trick. Without it the kernel mounted the top-level Btrfs volume, which only contains the subvolumes, as the real root - so /sbin/init was nowhere to be found.
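An alternative that should also work (just a sketch - I went with rootflags instead): make # the default subvolume from the live ISO, so the kernel mounts it without any extra flags:
mount /dev/mapper/root /mnt
btrfs subvolume list /mnt            # note the ID of the # subvolume
btrfs subvolume set-default <ID> /mnt
umount /mnt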

Related

Veracrypt container on mac - how to set permissions (compatible with .ssh files)

I would like to know how to make the VeraCrypt volume (container) support changing permissions with chmod (and have key permissions compatible with ssh).
I wish to store my .ssh folder securely with VeraCrypt. But when I try to ssh using my credentials from a mounted VeraCrypt volume (on a Mac) I get an error: "Bad owner or permissions on xxxxxxx" and I cannot use ssh.
I tried to chown/chmod the files but it did not work. All files have permissions "-rwxrwxrwx" for my user, even when I mount the volume read-only.
Is there a way to set the permissions properly, or should I use a different filesystem for the container?
I tried a volume in exFAT and FAT, created from a file.
I first tried with the GUI.
Then I tried this:
veracrypt /dev/sda3 /mnt/ssh --filesystem=none
sudo mount -t exfat -o -m=022 /dev/mapper/veracrypt1 /mnt/ssh
and with FAT:
veracrypt /dev/sda3 /mnt/ssh --filesystem=none
sudo mount -t fat -o -umask=022 /dev/mapper/veracrypt1 /mnt/ssh
but this still failed:
mount: exec /Library/Filesystems/lfs.fs/Contents/Resources/mount_[exfat/fat] for /mnt/ssh : No such file or directory
mount: /mnt/ssh failed with 72
Of course the /mnt/ssh directory does exist ;)
Am I misusing mount? Did I miss some VeraCrypt option? Or did I choose the wrong filesystem?
Thank you!
Seems like choosing APFS works like a charm. And it's Linux compatible.
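For reference, once the container sits on a filesystem that actually honours Unix permissions, these are roughly the permissions ssh expects (a generic sketch with the standard paths, nothing specific to VeraCrypt):
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_*                        # private keys (catching the .pub files too is harmless)
chmod 644 ~/.ssh/known_hosts ~/.ssh/config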

Is there a way to recreate /dev within a directory on macOS for the purpose of chroot-ing?

I've been experimenting with running apps within a chroot-ed directory.
Many apps and binaries require access to items within /dev, such as /dev/null and /dev/random to work.
Is there a way to recreate or bind mount the /dev filesystem within a directory to this end?
I have tried the following without success:
(Where root is the directory I want to chroot into)
$ sudo bindfs -o dev -o allow_other /dev ./root/dev/
Leading to:
$ cat ./root/dev/urandom
cat: ./root/dev/urandom: Operation not permitted
$ mount -t devfs devfs ./root/dev
Leading to:
$ cat ./root/dev/urandom
cat: ./root/dev/urandom: Device not configured
Attempting to manually make the devices with mknod doesn't work either.
$ sudo mknod null c 1 3
$ sudo chmod 666 ./null
$ cat ./null
cat: ./null: Operation not permitted
Is there a way to either use the existing /dev items within a chroot or to recreate them?
Unfortunately, there doesn't appear to be much documentation of using chroot with OSX/macOS on the internet.
Operating System Details: macOS Mojave, 10.14.6. SIP enabled.
Well, this one is mainly on me being dumb.
sudo mount -t devfs devfs ./dev
Works just fine.
If the above command is run without root, it will bind the devfs devices within ./dev, but all devices will respond with a "Device not configured" error. If it is run as root, all ./dev devices will work as expected.
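To tie it together, a minimal sketch of the whole dance (assuming ./root already contains a usable /bin/sh and the libraries it needs):
mkdir -p ./root/dev
sudo mount -t devfs devfs ./root/dev    # must be run as root, as noted above
sudo chroot ./root /bin/sh
# ... work inside the chroot, then leave it and clean up:
sudo umount ./root/dev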

Custom path for an APFS volume on an external SSD

I have an external SSD with APFS volumes on it; when I plug it in, it automatically mounts at /Volumes/Workspace. Is there a way to make it mount automatically at a predefined path?
Take a look at current volumes:
ls /Volumes
Get the UUID of volume:
diskutil info /Volumes/[name] | grep UUID
Create a folder where you wish to mount the volume:
mkdir -p [/absolute/path/to/folder]
Put the following line into fstab:
echo "UUID=[UUID] [/absolute/path/to/folder] apfs rw" | sudo tee -a /etc/fstab
The example mounts an APFS-formatted volume, but you can do the same for NTFS, NFS, etc.
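Putting it together with placeholder values (the UUID and target path here are made up - substitute your own):
diskutil info /Volumes/Workspace | grep "Volume UUID"
sudo mkdir -p /Users/me/workspace
echo "UUID=11111111-2222-3333-4444-555555555555 /Users/me/workspace apfs rw" | sudo tee -a /etc/fstab
macOS also ships vifs for editing /etc/fstab safely, which accomplishes the same thing as the tee line.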

UBUNTU 14.04/16.04 Server Boot: Dev Mount failed, falls to initramfs

When booting a virtual server with Ubuntu 14.04/16.04 (I had the issue with both), it can't find the root device and the system drops to the initramfs shell with the following error:
(initramfs) exit
Gave up waiting for root device. Common problems:
- Boot args (cat /proc/cmdline)
- Check rootdelay= (did the system wait long enough?)
- Check root= (did the system wait for the right device?)
- Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/mapper/CAC_VG-CAC_LV does not exist. Dropping to a shell!
if I type
ls /dev/mapper/
I still can see the partition mentioned in the error (and in the GRUB)
root=/dev/mapper/CAC_VG-CAC_LV
cat output as suggested in the error message
(initramfs) cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-4.4.0-66-generic root=/dev/mapper/CAC_VG-CAC_LV ro
Notice: it seems to mount the device in Read-Only (ro). Maybe I should change this after I manage to start the system...
If I type exit I get the same error as above.
Then I try to mount:
mount -t ext4 /dev/mapper/CAC_VG-CAC_LV
mount: can't find /dev/mapper/CAC_VG-CAC_LV in /etc/fstab
I had the same problem after a fresh install of Ubuntu 14.04.
And this actually worked!
mount -o remount,rw /
lvm vgscan
lvm vgchange -a y
mount -t ext4 /dev/mapper/CAC_VG-CAC_LV /root
exit
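To keep this from happening on every boot, it is probably worth making sure the LVM tools and their initramfs hooks are installed and, if the volume group is just slow to show up, giving the kernel a root delay (a sketch - adjust to your setup):
sudo apt-get install lvm2              # provides the lvm2 initramfs hooks
sudo update-initramfs -u -k all
# optionally add a delay in /etc/default/grub, e.g.
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=10"
sudo update-grub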

NFS in Docker: exportfs: <path> does not support NFS export

I'm working on a docker NFS container, but running into trouble mounting the exported directories (even on localhost).
The problem
exportfs: <path> does not support NFS export
The setup
My container uses an entrypoint script that
writes the directories (provided via command-line arguments) into /etc/exports,
invokes rpcbind and service nfs-kernel-server start, and
defers to inotifywait to remain running.
The Dockerfile is nothing special. I install inotify-tools and nfs-kernel-server, expose port 2049, and copy over the entrypoint script.
I'm using docker-machine on an El Capitan MacBook.
I map volumes from the host into the container to give the nfs server access to directories that I want to export.
Entrypoint script
modprobe nfs
modprobe nfsd
for x in "${@}"; do
echo -e "$x\t*(rw,sync,no_subtree_check,crossmnt,fsid=root,no_root_squash)" >> /etc/exports
done
source /etc/default/nfs-kernel-server
source /etc/default/nfs-common
rpcbind
service nfs-kernel-server start
exec inotifywait --monitor /exports
Debugging
Here's the setup for what I am trying to export.
%> ls $HOME/mounts
a
%> ls $HOME/mounts/a
asdf
Here's how I start the server.
%> docker run --privileged --rm --name=nfs-server --volume=$HOME/mounts/a/:/exports/a docker-nfs-server /exports/a
Exporting directories for NFS kernel daemon...exportfs: /exports/a does not support NFS export
.
Starting NFS kernel daemon: nfsd mountd.
Setting up watches.
Watches established.
And here's the debugging I've been doing while the container is running.
%> docker exec -it nfs-server bash
root#6056a33f061e:/# ls /exports
a
root#6056a33f061e:/# ls /exports/a
asdf
root#6056a33f061e:/# showmount -a
All mount points on 6056a33f061e:
root#6056a33f061e:/# exportfs -a
exportfs: /exports/a does not support NFS export
root#8ad67c951ecd:/# mount
none on / type aufs (rw,relatime,si=3ca85db062268b32,dio,dirperm1)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev type tmpfs (rw,nosuid,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,relatime,mode=755)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu type cgroup (rw,nosuid,nodev,noexec,relatime,cpu)
cgroup on /sys/fs/cgroup/cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
/dev/sda1 on /exports type ext4 (rw,relatime,data=ordered)
Users on /exports/a type vboxsf (rw,nodev,relatime)
/dev/sda1 on /etc/resolv.conf type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /etc/hostname type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /etc/hosts type ext4 (rw,relatime,data=ordered)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
What I know
Less than Jon Snow. I can identify a few variables that might each be responsible for my problem, but I don't know how to verify any of them:
$HOME/mounts/a is on an OSX filesystem
That filesystem is encrypted
/exports/a is being mounted into the docker-machine VM
I don't have enough experience with NFS to know how to debug this effectively. Any assistance or information would be appreciated.
Update
It works in Parallels!
%> docker run --privileged --rm --name=nfs-server --volume=$HOME/mounts/a/:/exports/a docker-nfs-server /exports/a
%> docker exec -it nfs-server bash
root#3786d888f039:/# mkdir --parents /imports/a
root#3786d888f039:/# mount --types nfs --options nolock,proto=tcp,port=2049 localhost:/exports/a /imports/a
root#3786d888f039:/# ls /imports
a
root#3786d888f039:/# ls /imports/a
root#3786d888f039:/# ls /exports
a
root#3786d888f039:/# ls /exports/a
root#3786d888f039:/# touch /exports/a/asdf
root#3786d888f039:/# ls /exports/a
asdf
root#3786d888f039:/# ls /imports/a/
asdf
So that narrows down the problem to OSX/docker-machine or maybe even the encrypted filesystem on OSX.
The problem is in the docker-machine VM. If you want to use NFS mounts you need to run modprobe nfs in the machine itself, not in the container - the container uses the kernel of the machine. The same goes for modprobe nfsd and the NFS server.
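For example (assuming the machine is named default and its boot2docker kernel ships the modules), something along these lines loads them in the VM itself:
docker-machine ssh default "sudo modprobe nfs && sudo modprobe nfsd"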
In my case it worked when the exported path was a volume specified in the Dockerfile. Here is the repo where I got it working (with the replaced volume definition).
This happened to me. The reason was that my mounted hard disk was formatted as exFAT. This can also happen if the format is NTFS or FAT32. To fix it, put the exported directory on an ext4-formatted disk instead, then try again. It should work then.
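A quick generic check of what filesystem actually backs the export path (the vboxsf mount in the output above would show up here) before fighting with exportfs:
df -T /exports/a                       # the "Type" column shows the underlying filesystem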
