Error using guestmount to mount a Windows qcow2 image

My OS is CentOS 7.4:
root@wllabs:/home/wllabs/instances/image2016$ cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
The kernel version is 3.10.0:
root@wllabs:/home/wllabs/instances/image2016$ uname -r
3.10.0-693.5.2.el7.x86_64
Here are my mount commands and the errors. First, guestmount -m /dev/sda1:
root@wllabs:/home/wllabs/instances/image2016$ guestmount -a win2016 --ro -m /dev/sda1 /mount
libguestfs: error: mount: unsupported filesystem type
guestmount: '/dev/sda1' could not be mounted.
guestmount: Did you mean to mount one of these filesystems?
guestmount: /dev/sda1 (ntfs)
guestmount: /dev/sda2 (ntfs)
Then guestmount -m /dev/sda2:
root@wllabs:/home/wllabs/instances/image2016$ guestmount -a win2016 --ro -m /dev/sda2 /mount
libguestfs: error: mount: unsupported filesystem type
guestmount: '/dev/sda2' could not be mounted.
guestmount: Did you mean to mount one of these filesystems?
guestmount: /dev/sda1 (ntfs)
guestmount: /dev/sda2 (ntfs)
Since -m /dev/sdaN reports an error, I tried guestmount with -i, but that also fails:
root@wllabs:/home/wllabs/instances/image2016$ guestmount -a win2016 --ro -i /mount
guestmount: no operating system was found on this disk
If using guestfish '-i' option, remove this option and instead
use the commands 'run' followed by 'list-filesystems'.
You can then mount filesystems you want by hand using the
'mount' or 'mount-ro' command.
If using guestmount '-i', remove this option and choose the
filesystem(s) you want to see by manually adding '-m' option(s).
Use 'virt-filesystems' to see what filesystems are available.
If using other virt tools, this disk image won't work
with these tools. Use the guestfish equivalent commands
(see the virt tool manual page).
libguestfs-winsupport and the ntfs packages are all installed:
root@wllabs:/home/wllabs/instances/image2016$ rpm -qa | grep winsupport
libguestfs-winsupport-7.2-2.el7.x86_64
root@wllabs:/home/wllabs/instances/image2016$ rpm -qa | grep ntfs
ntfs-3g-devel-2017.3.23-1.el7.x86_64
ntfsprogs-2017.3.23-1.el7.x86_64
ntfs-3g-2017.3.23-1.el7.x86_64
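As the error text itself suggests, virt-filesystems can confirm what libguestfs detects. This is only a diagnostic, not a fix; win2016 is the image from the question:
virt-filesystems -a win2016 --all --long
# expected to list /dev/sda1 and /dev/sda2 as ntfs, matching the error above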

I'm noticing the same behavior.
According to the documentation, this is expected behavior.
NTFS support was removed from the stock libguestfs packages starting with RHEL/CentOS 7.2 (the FAQ entry linked below covers this).
More info on: http://libguestfs.org/guestfs-faq.1.html#mount:-unsupported-filesystem-type-with-ntfs-in-rhel-7.2
It is possible to compile your own libguestfs with NTFS support, but this is not supported.
I haven't tested it yet, but this thread describes the steps:
https://www.redhat.com/archives/libguestfs/2016-February/msg00145.html
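If rebuilding libguestfs is too heavy, a possible workaround sketch (untested here; it assumes qemu-nbd is installed and the guest is shut down) is to bypass libguestfs and attach the qcow2 image through the kernel NBD driver, then mount the partition with the ntfs-3g package the questioner already has:
sudo modprobe nbd max_part=8                            # load the NBD driver with partition scanning
sudo qemu-nbd --connect=/dev/nbd0 --read-only win2016   # attach the image read-only
sudo mount -o ro -t ntfs-3g /dev/nbd0p1 /mount          # mount the first NTFS partition
# ... inspect files, then clean up:
sudo umount /mount
sudo qemu-nbd --disconnect /dev/nbd0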
I hope this helps since this is my first post. :-)
Kind regards,
Jeff

Related

Arch Linux Installation: ERROR: Root device mounted successfully, but /sbin/init does not exist

I'm fairly new to Linux but decided to dive right in with Arch Linux to become familiar with everything.
Unfortunately I can't even finish the installation - shame on me.
The error while booting after setting arch up is:
ERROR: Root device mounted successfully, but /sbin/init does not exist.
Bailing out, you are on your own. Good luck.
I went for btrfs on LUKS on LVM.
The layout looks like this:
sda
|- sda1 512MB fat32 /boot
`- sda2 remaining lvm
   |- cryptswap 4GB swap
   |- crypttmp 2GB tmp /tmp
   `- cryptroot remaining btrfs
      |- @ /
      |- @home /home
      |- @snapshots /.snapshots
      |- @log /var/log
      |- @cache /var/cache
      `- @tmp /var/tmp
These are the commands and configurations I used to set up Arch:
dd status=progress if=/dev/zero of=/dev/sda
wipe disk
gdisk /dev/sda
o clear gpt table
boot partition
n
↵
↵
+512M
ef00
lvm partition
n
↵
↵
↵
8e00
w write partition changes
setup lvm
pvcreate /dev/sda2
vgcreate vg1 /dev/sda2
lvcreate -L 4G -n cryptswap vg1
lvcreate -L 2G -n crypttmp vg1
lvcreate -l 100%FREE -n cryptroot vg1
setup encryption
cryptsetup luksFormat /dev/vg1/cryptroot
cryptsetup open /dev/vg1/cryptroot root
make filesystems
mkfs.fat -F32 -n BOOT /dev/sda1
mkfs.btrfs --label ROOT /dev/mapper/root
create btrfs subvolumes
mount /dev/mapper/root /mnt
cd /mnt
btrfs subvolume create @
btrfs subvolume create @home
btrfs subvolume create @snapshots
btrfs subvolume create @log
btrfs subvolume create @cache
btrfs subvolume create @tmp
cd ..
umount /mnt
mount btrfs subvolumes and BOOT partition
mount -o noatime,compress=lzo,space_cache=v2,discard=async,subvol=@ /dev/mapper/root /mnt
mkdir /mnt/home
mount -o noatime,compress=lzo,space_cache=v2,discard=async,subvol=@home /dev/mapper/root /mnt/home
mkdir /mnt/.snapshots
mount -o noatime,compress=lzo,space_cache=v2,discard=async,subvol=@snapshots /dev/mapper/root /mnt/.snapshots
mkdir /mnt/var
mkdir /mnt/var/log
mount -o noatime,compress=lzo,space_cache=v2,discard=async,subvol=@log /dev/mapper/root /mnt/var/log
mkdir /mnt/var/cache
mount -o noatime,compress=lzo,space_cache=v2,discard=async,subvol=@cache /dev/mapper/root /mnt/var/cache
mkdir /mnt/var/tmp
mount -o noatime,compress=lzo,space_cache=v2,discard=async,subvol=@tmp /dev/mapper/root /mnt/var/tmp
mkdir /mnt/boot
mount /dev/sda1 /mnt/boot
pacstrap /mnt base linux linux-firmware lvm2 btrfs-progs amd-ucode vim install necessities
genfstab -L /mnt > /mnt/etc/fstab generate fstab
arch-chroot /mnt
basic configuration
ln -sf /usr/share/zoneinfo/Europe/Berlin /etc/localtime
hwclock --systohc
vim /etc/locale.gen
locale-gen
echo "LANG=en_US.UTF-8" >> /etc/locale.conf
echo "KEYMAP=de-latin1" >> /etc/vconsole.conf
echo "devstation" >> /etc/hostname
vim /etc/hosts
vim /etc/mkinitcpio.conf
the mkinitcpio.conf content:
MODULES=(btrfs)
HOOKS=(base udev autodetect keyboard keymap consolefont modconf block lvm2 encrypt filesystems fsck)
mkinitcpio -p linux
bootctl install
echo "default arch" > /boot/loader/loader.conf
vim /boot/loader/entries/arch.conf
the arch.conf content
title Arch Linux
linux /vmlinuz-linux
initrd /initramfs-linux.img
options cryptdevice=UUID={/dev/vg1/cryptroot uuid inserted here}:root root=/dev/mapper/root rw
exit
umount -a
poweroff
Pulling the arch installation medium out of the computer and starting it.
The booting output
:: running early hook [udev]
Starting version 248.3-2-arch
:: running hook [udev]
:: Triggering uevents...
:: running hook [keymap]
:: Loading keymap...done.
:: running hook [encrypt]
A password is required to access the root volume:
Enter passphrase for /dev/mapper/vg1-cryptroot: {inserting passphrase}
:: performing fsck on '/dev/mapper/root'
:: mounting '/dev/mapper/root' on real root
:: running cleanup hook [udev]
ERROR: Root device mounted successfully, but /sbin/init does not exist.
Bailing out, you are on your own. Good luck.
sh: can't access tty; job control turned off
[rootfs ]#
Obviously I didn't set up cryptswap and crypttmp yet; those will be set up via crypttab and fstab later. I mention this for completeness but highly doubt it is part of the problem, because nothing references those volumes at the moment.
I hope I didn't miss any command or configuration - I am typing this from videos I watched and from memory, because no single video I found used the btrfs, LUKS, LVM combination I went with. Thanks for your time/help and for reading this through.
Adding rootflags=subvol=@ to /boot/loader/entries/arch.conf like so
title Arch Linux
linux /vmlinuz-linux
initrd /initramfs-linux.img
options cryptdevice=UUID={/dev/vg1/cryptroot uuid inserted here}:root root=/dev/mapper/root rootflags=subvol=@ rw
did the trick.
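An alternative sketch (not part of the original fix, and untested in this setup) is to make @ the default subvolume, so the boot entry needs no rootflags at all; the subvolume ID below is an example:
mount /dev/mapper/root /mnt
btrfs subvolume list /mnt            # note the ID of @, e.g. 256
btrfs subvolume set-default 256 /mnt
umount /mnt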

Custom path for APFS volume in a external SSD

I have an external SSD with APFS volumes on it. When I plug it in, it mounts automatically at /Volumes/Workspace. Is there a way to make it mount automatically at a predefined path?
Take a look at current volumes:
ls /Volumes
Get the UUID of volume:
diskutil info /Volumes/[name] | grep UUID
Create the folder where you want the volume mounted:
mkdir -p [/absolute/path/to/folder]
Append the following line to /etc/fstab:
echo "UUID=[UUID] [/absolute/path/to/folder] apfs rw" | sudo tee -a /etc/fstab
The example mounts an APFS volume, but the same approach works for ntfs, nfs, etc.
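Putting it together with placeholder values (the UUID and target path below are hypothetical):
diskutil info /Volumes/Workspace | grep "Volume UUID"
# Volume UUID: 11111111-2222-3333-4444-555555555555 (example)
sudo mkdir -p /Users/me/workspace
echo "UUID=11111111-2222-3333-4444-555555555555 /Users/me/workspace apfs rw" | sudo tee -a /etc/fstab
# eject and replug the SSD; it should now mount at /Users/me/workspace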

UBUNTU 14.04/16.04 Server Boot: Dev Mount failed, falls to initramfs

When booting a virtual server with Ubuntu 14.04/16.04 (I had the issue with both), it can't find the root partition, and the system falls to the initramfs shell with the following error:
(initramfs) exit
Gave up waiting for root device. Common problems:
- Boot args (cat /proc/cmdline)
- Check rootdelay= (did the system wait long enough?)
- Check root= (did the system wait for the right device?)
- Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/mapper/CAC_VG-CAC_LV does not exist. Dropping to a shell!
If I type
ls /dev/mapper/
I can still see the device mentioned in the error (and in GRUB):
root=/dev/mapper/CAC_VG-CAC_LV
Here is the cat output suggested by the error message:
(initramfs) cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-4.4.0-66-generic root=/dev/mapper/CAC_VG-CAC_LV ro
Notice: it seems to mount the device read-only (ro). Maybe I should change this after I manage to start the system...
If I type exit I get the same error as above.
Then I try to mount (without a mount point, so mount looks it up in /etc/fstab):
mount -t ext4 /dev/mapper/CAC_VG-CAC_LV
mount: can't find /dev/mapper/CAC_VG-CAC_LV in /etc/fstab
I had the same problem after a fresh install of Ubuntu 14.04.
And this actually worked!!
mount -o remount,rw /
lvm vgscan
lvm vgchange -a y
mount -t ext4 /dev/mapper/CAC_VG-CAC_LV /root
exit
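To make this survive reboots, a plausible follow-up (this assumes the root cause is LVM not being activated inside the initramfs, which the vgchange workaround suggests) is to refresh lvm2 and rebuild the initramfs once the system is up:
sudo apt-get install --reinstall lvm2
sudo update-initramfs -u -k all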

NFS in Docker: exportfs: <path> does not support NFS export

I'm working on a docker NFS container, but running into trouble mounting the exported directories (even on localhost).
The problem
exportfs: <path> does not support NFS export
The setup
My container uses an entrypoint script that
writes the directories (provided via command-line arguments) into /etc/exports,
invokes rpcbind and service nfs-kernel-server start, and
defers to inotifywait to remain running.
The Dockerfile is nothing special. I install inotify-tools and nfs-kernel-server, expose port 2049, and copy over the entrypoint script.
I'm using docker-machine on an El Capitan Macbook.
I map volumes from the host into the container to give the nfs server access to directories that I want to export.
Entrypoint script
modprobe nfs
modprobe nfsd
for x in "$@"; do
echo -e "$x\t*(rw,sync,no_subtree_check,crossmnt,fsid=root,no_root_squash)" >> /etc/exports
done
source /etc/default/nfs-kernel-server
source /etc/default/nfs-common
rpcbind
service nfs-kernel-server start
exec inotifywait --monitor /exports
Debugging
Here's the setup for what I am trying to export.
%> ls $HOME/mounts
a
%> ls $HOME/mounts/a
asdf
Here's how I start the server.
%> docker run --privileged --rm --name=nfs-server --volume=$HOME/mounts/a/:/exports/a docker-nfs-server /exports/a
Exporting directories for NFS kernel daemon...exportfs: /exports/a does not support NFS export
.
Starting NFS kernel daemon: nfsd mountd.
Setting up watches.
Watches established.
And here's the debugging I've been doing while the container is running.
%> docker exec -it nfs-server bash
root@6056a33f061e:/# ls /exports
a
root@6056a33f061e:/# ls /exports/a
asdf
root@6056a33f061e:/# showmount -a
All mount points on 6056a33f061e:
root@6056a33f061e:/# exportfs -a
exportfs: /exports/a does not support NFS export
root@8ad67c951ecd:/# mount
none on / type aufs (rw,relatime,si=3ca85db062268b32,dio,dirperm1)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev type tmpfs (rw,nosuid,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,relatime,mode=755)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu type cgroup (rw,nosuid,nodev,noexec,relatime,cpu)
cgroup on /sys/fs/cgroup/cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
/dev/sda1 on /exports type ext4 (rw,relatime,data=ordered)
Users on /exports/a type vboxsf (rw,nodev,relatime)
/dev/sda1 on /etc/resolv.conf type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /etc/hostname type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /etc/hosts type ext4 (rw,relatime,data=ordered)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
What I know
Less than Jon Snow. I can identify a few variables that might each be responsible for my problem, but I don't know how to verify any of them:
$HOME/mounts/a is on an OSX filesystem
That filesystem is encrypted
/exports/a is being mounted into the docker-machine VM
I don't have enough experience with NFS to know how to debug this effectively. Any assistance or information would be appreciated.
Update
It works in Parallels!
%> docker run --privileged --rm --name=nfs-server --volume=$HOME/mounts/a/:/exports/a docker-nfs-server /exports/a
%> docker exec -it nfs-server bash
root@3786d888f039:/# mkdir --parents /imports/a
root@3786d888f039:/# mount --types nfs --options nolock,proto=tcp,port=2049 localhost:/exports/a /imports/a
root@3786d888f039:/# ls /imports
a
root@3786d888f039:/# ls /imports/a
root@3786d888f039:/# ls /exports
a
root@3786d888f039:/# ls /exports/a
root@3786d888f039:/# touch /exports/a/asdf
root@3786d888f039:/# ls /exports/a
asdf
root@3786d888f039:/# ls /imports/a/
asdf
So that narrows down the problem to OSX/docker-machine or maybe even the encrypted filesystem on OSX.
The problem is in docker-machine. If you want to use NFS mounts you need to run modprobe nfs in the machine (the VM) itself, not in the container, because the container uses the kernel of the machine. The same goes for modprobe nfsd and the NFS server.
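A minimal sketch of that, assuming the usual machine name of default:
docker-machine ssh default "sudo modprobe nfs && sudo modprobe nfsd"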
In my case it worked when the exported path was a volume specified in the Dockerfile. Here is the repo where I got it working (with the replaced volume definition).
This happened to me. The reason was that my mounted hard disk was formatted as exFAT. This can also happen if the format is NTFS or FAT32. To fix it, put the export on an ext4-formatted disk and mount that, then try again. It should work now.
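A quick way to check the backing filesystem type of an export before starting the server (a diagnostic sketch, not from the original answer):
df -T /exports/a    # ext4 is exportable; exfat, fat32, ntfs and vboxsf are not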

XenServer increase vm-disk error

We have currently the XenServer Version 6.2 with SP1 and the updates to XS62ESP1014.
If we tried to increase one of our vm disks, then there is an error:
[root@xenserver-xx ~]# xe vdi-resize uuid=5101f789-78c2-44e1-9a06-7fe7794dd98e disk-size=100GiB
Error code: SR_BACKEND_FAILURE_110
Error parameters: , VDI resize failed [opterr=Command ['/usr/sbin/lvcreate', '-n', 'inflate_5101f789-78c2-44e1-9a06-7fe7794dd98e_53800337408', '-L', '4', 'VG_XenStorage-81d9f03d-b7fc-80f3-240e-9f6a172059c7', '--addtag', 'journaler', '--inactive', '--zero=n'] failed (3): /usr/sbin/lvcreate: unrecognized option `--inactive'
Error during parsing of command line.],
The lvcreate version:
[root@xenserver-xx ~]# lvcreate --version
LVM version: 2.02.88(2)-RHEL5 (2014-04-04)
Library version: 1.02.67-RHEL5 (2011-10-14)
Driver version: 4.15.0
The redhat version:
[root@xenserver-xx ~]# more /etc/redhat-release
CentOS release 5.11 (Final)
Does somebody know something about this error, or has somebody had the same problem?
Is there a way to fix this?
The problem also occurs when we create a new VM disk and try to increase it immediately.
I've got a solution:
The problem was that XenServer needs a specific version of lvm.
LVM version: 2.02.88(2)-RHEL5 (2014-04-04)
Library version: 1.02.67-RHEL5 (2011-10-14)
Driver version: 4.15.0
In this case lvcreate is a symbolic link to lvm, and the newer version takes different arguments to increase a VM disk.
My workaround was to copy the old version from another XenServer and exchange the lvcreate link.
copy lvm__2_02_84_2 into /usr/sbin/ (the following commands assume you are in /usr/sbin)
cp lvm__2_02_84_2 /usr/sbin/
chmod 555 lvm__2_02_84_2
ls -lah lv* # check that lvm and lvm__2_02_84_2 have the same rights
mv lvcreate lvcreate_<date>_bak # <date> e.g. 2014-12-02 # back up the old link
ln -s lvm__2_02_84_2 lvcreate # create the new link
ls -lah lv* # check again
Perhaps it's better to exchange the whole lvm binary:
copy lvm__2_02_84_2 into /usr/sbin/ (again assuming you are in /usr/sbin)
cp lvm__2_02_84_2 /usr/sbin/
chmod 555 lvm__2_02_84_2
ls -lah lv* # check that lvm and lvm__2_02_84_2 have the same rights
mv lvm lvm_<date>_bak # <date> e.g. 2014-12-02 # back up the old binary
mv lvm__2_02_84_2 lvm # put the old version in place
ls -lah lv* # check again
I believe you are missing some hotfixes.
You can try running rpm -qa | grep lvm2.
If your RPM name doesn't contain the string 'xs', then an lvm2-related update is definitely missing.
e.g.
[root@xenserver~]# rpm -qa | grep lvm
lvm2-2.02.88-12.xs1420
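If hotfixes do turn out to be missing, the applied patches can be listed from the XenServer console and compared against the latest XS62ESP1 updates (standard xe syntax; output will vary per pool):
xe patch-list params=name-label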
