I would like to know how to make a VeraCrypt volume (container) whose file permissions can be changed with chmod (so that key permissions are compatible with ssh).
I want to store my .ssh folder securely with VeraCrypt, but when I try to ssh using credentials stored in a mounted VeraCrypt volume (on a Mac), I get the error "Bad owner or permissions on xxxxxxx" and cannot use ssh.
I tried to chown/chmod the files, but it did not work. All files have permissions "-rwxrwxrwx" for my user, even when I mount the volume read-only.
Is there a way to set the permissions properly, or should I use a different filesystem for the container?
I tried a volume formatted as exFAT and as FAT, created from a file.
I first tried with the GUI.
Then I tried this:
veracrypt /dev/sda3 /mnt/ssh --filesystem=none
sudo mount -t exfat -o -m=022 /dev/mapper/veracrypt1 /mnt/ssh
and with FAT:
veracrypt /dev/sda3 /mnt/ssh --filesystem=none
sudo mount -t fat -o -umask=022 /dev/mapper/veracrypt1 /mnt/ssh
but the mount itself failed:
mount: exec /Library/Filesystems/lfs.fs/Contents/Resources/mount_[exfat/fat] for /mnt/ssh : No such file or directory
mount: /mnt/ssh failed with 72
Of course the /mnt/ssh directory does exist ;)
Am I misusing mount? Did I miss some VeraCrypt options? Or did I choose the wrong filesystem?
Thank you !
It seems that choosing APFS works like a charm. And it's Linux compatible.
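Once the container uses a filesystem with real POSIX permissions (such as APFS), the usual ssh permission requirements can be applied. Here is a minimal sketch, demonstrated on a scratch directory — all paths and key names are hypothetical; point DEMO at the .ssh folder inside your mounted container instead:

```shell
# Sketch: the permissions ssh expects, shown on a throwaway directory.
# Replace DEMO with the .ssh folder inside your mounted APFS container.
DEMO=$(mktemp -d)/.ssh
mkdir -p "$DEMO"
touch "$DEMO/id_ed25519" "$DEMO/id_ed25519.pub"
chmod 700 "$DEMO"                  # ssh requires the directory be 0700
chmod 600 "$DEMO/id_ed25519"       # private keys must be 0600
chmod 644 "$DEMO/id_ed25519.pub"   # public keys may be world-readable
```

On exFAT/FAT these chmod calls silently have no effect, which is why everything stayed "-rwxrwxrwx" above.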
I've been experimenting with running apps within a chroot-ed directory.
Many apps and binaries require access to items within /dev, such as /dev/null and /dev/random to work.
Is there a way to recreate or bind mount the /dev filesystem within a directory to this end?
I have tried the following without success:
(Where root is the directory I want to chroot into)
$ sudo bindfs -o dev -o allow_other /dev ./root/dev/
Leading to:
$ cat ./root/dev/urandom
cat: ./root/dev/urandom: Operation not permitted
$ mount -t devfs devfs ./root/dev
Leading to:
$ cat ./root/dev/urandom
cat: ./root/dev/urandom: Device not configured
Attempting to manually make the devices with mknod doesn't work either.
$ sudo mknod null c 1 3
$ sudo chmod 666 ./null
$ cat ./null
cat: ./null: Operation not permitted
Is there a way to either use the existing /dev items within a chroot or to recreate them?
Unfortunately, there doesn't appear to be much documentation on using chroot with OS X/macOS on the internet.
Operating System Details: macOS Mojave, 10.14.6. SIP enabled.
Well, this one is mainly on me being dumb.
sudo mount -t devfs devfs ./dev
Works just fine.
If the above command is run without root, it will bind the devfs devices within ./dev, but all devices will respond with a "Device not configured" error. If it is run as root, all ./dev devices will work as expected.
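Putting the whole thing together, a sketch of the sequence looks like this (macOS only; the devfs mount must be done as root, and $ROOT must already contain a usable /bin/sh for chroot to start):

```shell
# Sketch: mount devfs into the chroot target, enter it, then clean up.
ROOT=./root
mkdir -p "$ROOT/dev"
if [ "$(uname)" = Darwin ]; then
  sudo mount -t devfs devfs "$ROOT/dev"   # as root, or devices report
                                          # "Device not configured"
  sudo chroot "$ROOT" /bin/sh             # assumes a shell exists in $ROOT
  sudo umount "$ROOT/dev"                 # unmount after leaving the chroot
fi
```

Remember to unmount ./root/dev before deleting the directory, or you may remove live device nodes.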
I have an external SSD with APFS volumes on it; when it is plugged in, it mounts automatically at /Volumes/Workspace. Is there a way to make it mount automatically at a predefined path?
Take a look at the current volumes:
ls /Volumes
Get the UUID of the volume:
diskutil info /Volumes/[name] | grep UUID
Create the folder where you wish to mount the volume:
mkdir -p [/absolute/path/to/folder]
Add the following line to /etc/fstab:
echo "UUID=[UUID] [/absolute/path/to/folder] apfs rw" | sudo tee -a /etc/fstab
The example mounts a volume in APFS format, but you can do the same for NTFS, NFS, etc.
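The steps above can be combined into one sketch. The UUID value here is made up — take yours from the diskutil output — and the mount path is just an example:

```shell
# One-shot sketch of the steps above (UUID and path are hypothetical).
VOL_UUID="11111111-2222-3333-4444-555555555555"   # from: diskutil info /Volumes/[name]
MOUNT_AT="$HOME/Workspace"
mkdir -p "$MOUNT_AT"                              # mount point must exist
FSTAB_LINE="UUID=$VOL_UUID $MOUNT_AT apfs rw"
echo "$FSTAB_LINE"   # append with: echo "$FSTAB_LINE" | sudo tee -a /etc/fstab
```

Unplug and replug the disk (or run `diskutil unmount` / `diskutil mount`) for the fstab entry to take effect.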
When booting a virtual server with Ubuntu 14.04/16.04 (I had the issue with both), it can't find the root boot partition and the system drops to the initramfs shell with the following error:
(initramfs) exit
Gave up waiting for root device. Common problems:
- Boot args (cat /proc/cmdline)
- Check rootdelay= (did the system wait long enough?)
- Check root= (did the system wait for the right device?)
- Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/mapper/CAC_VG-CAC_LV does not exist. Dropping to a shell!
If I type
ls /dev/mapper/
I still can see the partition mentioned in the error (and in the GRUB)
root=/dev/mapper/CAC_VG-CAC_LV
The cat output, as suggested in the error message:
(initramfs) cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-4.4.0-66-generic root=/dev/mapper/CAC_VG-CAC_LV ro
Notice: it seems to mount the device read-only (ro). Maybe I should change this after I manage to start the system...
If I type exit, I get the same error as above.
Then I try to mount:
mount -t ext4 /dev/mapper/CAC_VG-CAC_LV
mount: can't find /dev/mapper/CAC_VG-CAC_LV in /etc/fstab
I had the same problem after a fresh install of Ubuntu 14.04
And this actually worked!!
mount -o remount,rw /
lvm vgscan
lvm vgchange -a y
mount -t ext4 /dev/mapper/CAC_VG-CAC_LV /root
exit
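The commands above get you booted once; the error message itself points at rootdelay= as a likely culprit, so a commonly suggested permanent fix is to give the initramfs more time to activate the LVM root. Here is a hedged sketch, previewed on a scratch copy of /etc/default/grub rather than the real file — apply the same edit to the real file as root, then run `update-grub` and `update-initramfs -u`:

```shell
# Sketch: add rootdelay=60 to the kernel command line, previewed on a
# scratch copy of /etc/default/grub (edit the real file as root instead).
GRUB_CFG=$(mktemp)
echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"' > "$GRUB_CFG"
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&rootdelay=60 /' "$GRUB_CFG"
cat "$GRUB_CFG"   # GRUB_CMDLINE_LINUX_DEFAULT="rootdelay=60 quiet splash"
```

If that alone doesn't help, another commonly cited cause is the lvm2 tools missing from the initramfs; installing lvm2 and rerunning `update-initramfs -u` regenerates it with LVM support.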
I'm working on a docker NFS container, but running into trouble mounting the exported directories (even on localhost).
The problem
exportfs: <path> does not support NFS export
The setup
My container uses an entrypoint script that
writes directories (provided as command-line arguments) into /etc/exports,
invokes rpcbind and service nfs-kernel-server start, and
defers to inotifywait to remain running.
The Dockerfile is nothing special. I install inotify-tools and nfs-kernel-server, expose port 2049, and copy over the entrypoint script.
I'm using docker-machine on an El Capitan Macbook.
I map volumes from the host into the container to give the nfs server access to directories that I want to export.
Entrypoint script
modprobe nfs
modprobe nfsd
for x in "$@"; do
echo -e "$x\t*(rw,sync,no_subtree_check,crossmnt,fsid=root,no_root_squash)" >> /etc/exports
done
source /etc/default/nfs-kernel-server
source /etc/default/nfs-common
rpcbind
service nfs-kernel-server start
exec inotifywait --monitor /exports
Debugging
Here's the setup for what I am trying to export.
%> ls $HOME/mounts
a
%> ls $HOME/mounts/a
asdf
Here's how I start the server.
%> docker run --privileged --rm --name=nfs-server --volume=$HOME/mounts/a/:/exports/a docker-nfs-server /exports/a
Exporting directories for NFS kernel daemon...exportfs: /exports/a does not support NFS export
.
Starting NFS kernel daemon: nfsd mountd.
Setting up watches.
Watches established.
And here's the debugging I've been doing while the container is running.
%> docker exec -it nfs-server bash
root#6056a33f061e:/# ls /exports
a
root#6056a33f061e:/# ls /exports/a
asdf
root#6056a33f061e:/# showmount -a
All mount points on 6056a33f061e:
root#6056a33f061e:/# exportfs -a
exportfs: /exports/a does not support NFS export
root#8ad67c951ecd:/# mount
none on / type aufs (rw,relatime,si=3ca85db062268b32,dio,dirperm1)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev type tmpfs (rw,nosuid,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,relatime,mode=755)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu type cgroup (rw,nosuid,nodev,noexec,relatime,cpu)
cgroup on /sys/fs/cgroup/cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
/dev/sda1 on /exports type ext4 (rw,relatime,data=ordered)
Users on /exports/a type vboxsf (rw,nodev,relatime)
/dev/sda1 on /etc/resolv.conf type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /etc/hostname type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /etc/hosts type ext4 (rw,relatime,data=ordered)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
What I know
Less than Jon Snow. I can identify a few variables that might each be responsible for my problem, but I don't know how to verify any of them:
$HOME/mounts/a is on an OSX filesystem
That filesystem is encrypted
/exports/a is being mounted into the docker-machine VM
I don't have enough experience with NFS to know how to debug this effectively. Any assistance or information would be appreciated.
Update
It works in Parallels!
%> docker run --privileged --rm --name=nfs-server --volume=$HOME/mounts/a/:/exports/a docker-nfs-server /exports/a
%> docker exec -it nfs-server bash
root#3786d888f039:/# mkdir --parents /imports/a
root#3786d888f039:/# mount --types nfs --options nolock,proto=tcp,port=2049 localhost:/exports/a /imports/a
root#3786d888f039:/# ls /imports
a
root#3786d888f039:/# ls /imports/a
root#3786d888f039:/# ls /exports
a
root#3786d888f039:/# ls /exports/a
root#3786d888f039:/# touch /exports/a/asdf
root#3786d888f039:/# ls /exports/a
asdf
root#3786d888f039:/# ls /imports/a/
asdf
So that narrows down the problem to OSX/docker-machine or maybe even the encrypted filesystem on OSX.
The problem is in docker-machine. If you want to use NFS mounts, you need to run modprobe nfs on the machine (VM) itself, not in the container, because the container uses the machine's kernel. The same goes for modprobe nfsd and the NFS server.
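A sketch of that fix ("default" is docker-machine's usual machine name — adjust to yours; the guard just keeps the snippet harmless on hosts without docker-machine):

```shell
# Load the NFS modules on the docker-machine VM, whose kernel the
# container shares -- not inside the container itself.
CMD='sudo modprobe nfs && sudo modprobe nfsd'
if command -v docker-machine >/dev/null 2>&1; then
  docker-machine ssh default "$CMD"   # runs on the VM, not in the container
fi
```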
In my case, it worked when the exported path was a volume specified in the Dockerfile. Here is the repo where I got it working (with the volume definition replaced).
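A minimal sketch of what that Dockerfile change looks like — the base image, package list, and entrypoint name here are assumptions for illustration, not the actual repo contents:

```dockerfile
FROM debian:jessie
RUN apt-get update && apt-get install -y nfs-kernel-server inotify-tools
# Declaring the exported path as a VOLUME is the change this answer
# reports made exportfs accept it.
VOLUME /exports
EXPOSE 2049
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```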
This happened to me as well. The reason was that my mounted hard disk was formatted as exFAT. The same can happen if the format is NTFS or FAT32. To fix it, mount a hard disk formatted as ext4 instead, then try again. It should then work.