First let me describe the system that I'm working on.
Intro
I've got a Mac with a small SSD drive. The system contains different tools for data analysis that run as separate Docker containers.
Docker for OSX does not run directly on the system; it requires a Linux-based virtual machine which runs the containers internally.
Some of these tools store huge amounts of data (several TB), which means the internal drive is not enough, so we need to store the data on external media.
Therefore, for each container it makes sense to separate the data storage (databases) from the system files (OS + tool sources). Databases must be stored on the external HDD.
What I've done is:
Create a shared directory on the external HDD between OSX and the Linux-based VM (the host for the containers). Let's say this directory is mounted at "/data" within the Linux VM.
Then, when creating a container, use the "/data" directory from the host as a volume mounted at the "/data" directory of the container (sketched below). If I ssh into one of the created containers and list the contents of the "/data" dir, I can see the contents of the external HDD. So that works.
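For illustration, a minimal sketch of that creation step (the image and container names are hypothetical placeholders):
$ docker run -d --name app1 -v /data:/data some-analysis-image
Everything the container writes under /data then lands on the external HDD through the VM's shared directory.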
Here is where the problem appears, and it is related to PERMISSIONS.
PROBLEM
Considering that:
On OSX our user is peterfoo.
In the containers the main user is also peterfoo, but we may have secondary users for the different system services. For example, if we install MySQL, the MySQL service will by default be executed by the user mysql and all its files will belong to this user.
I want to change the MySQL data directory to enforce that all the data is stored in the "/data" directory (i.e. on the external HDD).
If I use the user peterfoo there is no problem: I can write files, read files, remove and create directories, etc. in the "/data" directory (external HDD) from the container. However, using a different user the system does not allow writing, which means that when the mysql user tries to write/read data in the new location it fails and MySQL stops working.
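To illustrate, a hypothetical reproduction of the failure from inside the container (the container name and path are placeholders):
$ docker exec -it app1 su -s /bin/bash -c 'touch /data/app1-data/probe' mysql
touch: cannot touch '/data/app1-data/probe': Permission denied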
SOLUTIONS?
I couldn't find a solution on this forum and I've been trying to fix this issue since yesterday.
The only solution that I've found for the moment is a really naive one: run MySQL (and other services) as the peterfoo user instead of the corresponding one (mysql); see the sketch below. This is not a critical issue, but it introduces a security problem: it is always recommended to use dedicated users for system services in order to keep control of the access to data.
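As a sketch of that naive workaround, assuming a Debian/Ubuntu-style MySQL image where mysqld reads /etc/mysql/conf.d (adjust the path for your image):
# inside the container: force mysqld to run as peterfoo and use the external HDD
cat > /etc/mysql/conf.d/user-override.cnf <<'EOF'
[mysqld]
user    = peterfoo
datadir = /data/app1-data
EOF
Again, this trades away the isolation you normally get from the dedicated mysql user.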
Any idea?
Thanks!!
More info
If I run ls -l on the /data directory, I get:
drwxr-xr-x 8 peterfoo staff 272B 17 may 09:48 app1-data/
drwxr-xr-x 3 peterfoo staff 102B 25 may 12:38 app2-data/
drwxr-xr-x 7 peterfoo staff 238B 25 abr 10:35 app3-data/
If I try to change the ownership of a directory to mysql, it doesn't work and the owner stays peterfoo.
$ sudo chown mysql:mysql /data/app1-data
$ ls -l /data | grep app1
drwxr-xr-x 8 peterfoo staff 272B 17 may 09:48 app1-data/
I also tried changing the rwx permissions to 777; then a secondary user (let's say user2) can create dirs and files, but once created, the ownership changes automatically to peterfoo, and the created subdirectories become unwritable for user2.
$ sudo chmod 777 /data/app1-data
$ ls -l /data | grep app1
drwxrwxrwx 8 peterfoo staff 272B 17 may 09:48 app1-data/
$ su user2
$ touch /data/app1-data/test.txt
$ mkdir /data/app1-data/test
$ ls -l /data/app1-data/
drwxrwxrwx 1 peterfoo staff 204 May 25 12:38 ./
drwxrwxr-x 1 peterfoo staff 510 May 24 14:29 ../
-rw-r--r-- 1 peterfoo staff 0 May 25 08:07 test.txt
drwxr-xr-x 1 peterfoo staff 68 May 25 12:38 test/
Uhm, let me get this straight:
You've got a Linux virtual machine to which you attached part of the external storage; you then mounted that storage at /data in the Linux system.
You created a Docker container with a volume at /data inside the container, mounted at /data inside the Linux VM.
If I got that right, the container should automatically have permission to write to /data in the Linux VM, and the only problems I can think of are the permissions that are set inside the container on the /data directory.
If my hunch is right, when you ssh inside the container and run ls -l at / you will find the owner is peterfoo and the permissions lacking for other users.
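A quick way to check that hunch (the container name is a placeholder):
$ docker exec -it app1 ls -ld /data
$ docker exec -it app1 id mysql
If /data shows owner peterfoo with mode drwxr-xr-x, the mysql user only gets read and execute on it, which matches the failure described.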
Related
When I use docker or docker-compose with volumes I often have issues with permissions, because the container user is not known on the host:
mkdir i-can-to-what-i-want
rmdir i-can-to-what-i-want
docker run -v$(pwd):/home -w/home ubuntu touch you-shall-not-delete-it
$ ls -al you-shall-not-delete-it
-rw-r--r-- 2 root root 0 2020-08-08 00:11 you-shall-not-delete-it
One solution is to always do this:
UID=$(id -u) GID=$(id -g) docker-compose up
Or
UID=$(id -u) GID=$(id -g) docker run ...
But... it is cumbersome...
Any other method?
--user will do the job, unless this is exactly the cumbersome solution that you are trying to avoid:
who
neo tty7 2020-08-08 04:46 (:0)
docker run --user $UID:$GID -v$(pwd):/home -w/home ubuntu touch you-shall-delete-it
ls -la
total 12
drwxr-xr-x 3 neo neo 4096 Aug 8 02:12 .
drwxr-xr-x 34 neo neo 4096 Aug 8 02:03 ..
drwxr-xr-x 2 neo neo 4096 Aug 8 02:03 i-can-to-what-i-want
-rw-r--r-- 1 neo neo 0 Aug 8 02:12 you-shall-delete-it
In fact you don't use a volume here:
docker run -v$(pwd):/home
You use a bind mount.
When you use a bind mount, a resource on the host machine is mounted into the container.
Relying on the host machine's filesystem has advantages (speed and a dynamic data source) but also has its limitations (file ownership and portability).
How I see things:
1) When you use docker-compose in dev and you need to bind-mount source code that constantly changes, a bind mount is unavoidable, but you can simplify things by setting the user/group of the container directly in the compose file.
version: '3.5'
services:
  app:
    user: "${UID}:${GID}"
    ...
Note that ${UID} and ${GID} here are shell variables.
${UID} is defined in bash, but ${GID} is not. You could export it if required (see below), or else use the user id for both: user: "${UID}:${UID}".
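For example, a minimal way to make both values visible to docker-compose from bash:
export UID           # bash defines UID but does not export it by default
export GID=$(id -g)  # GID is not defined by bash, so set and export it
docker-compose up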
2) When you use docker or docker-compose in a setting where you don't need to provide the files/folders from the host at container creation time, but can do it at image creation time instead, favor a volume (a named volume) over a bind mount.
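A minimal sketch of a named volume with plain docker (the names are placeholders):
$ docker volume create app-data
$ docker run -v app-data:/var/lib/mysql mysql
When a named volume is first mounted, Docker seeds it from the image's own files and ownership, which sidesteps the host-side uid/gid mismatch that bind mounts suffer from.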
I want to enable remote connections to MySQL on 1&1.
I followed their explanation:
https://help.1and1.com/servers-c37684/dedicated-server-linux-c37687/administration-c37694/enable-remote-connections-to-mysql-a781586.html
but the file system is read-only.
With ls -l I get:
drwxr-xr-x 3 root root 4096 Nov 8 16:43 mysql
Inside this mysql folder I have this file:
-rw-r--r-- 105 root root 3533 Oct 22 2015 my.cnf
With ls -alt:
drwxr-xr-x 88 root root 4096 Nov 8 16:52 ..
drwxr-xr-x 3 root root 4096 Nov 8 16:43 .
-rw-r--r-- 105 root root 3533 Oct 22 2015 my.cnf
I want to modify this file, but chmod gives me this error:
chmod: changing permissions of 'my.cnf': Read-only file system
How do I make my.cnf writable, and after modification make it read-only again?
Thanks
If a filesystem has been mounted read-only, chmod will not work since it's a write operation too.
sudo mount -o remount,rw '/mnt/yourmounthere'
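Once you are done editing, you can put the filesystem back the way it was:
sudo mount -o remount,ro '/mnt/yourmounthere'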
If the device has a write lock on it (like SD memory cards), you need to turn it off; hardware locks cannot be disabled by software. Note that the write-protect lock on SD memory cards is a very small switch near the top left corner when you hold the card with the label facing you.
"sudo chmod 777 my.cnf" for change like root user
I ran into some strange behavior while setting up a script to start KVM instances today, and am hoping you all can weigh in on what's going on here.
Setup:
I have a script that starts a kvm with virt-install.
virt-install ... --disk=image.qcow2 ...
I want to run the same script on different versions of image.qcow2, so I created a symbolic link to my latest image (recreated below).
My directory structure would look something like this:
startKvm.sh
image.qcow2 -> image_v2.0.qcow2
image_v2.0.qcow2
image_v1.0.qcow2
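For reference, the link would have been created with something like this (a sketch, run from the image directory):
$ ln -s image_v2.0.qcow2 image.qcow2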
However, when I tried to run my virt-install command, it returned the following error.
ERROR internal error: process exited while connecting to monitor:
datetime qemu-kvm: -drive file=/path/image.qcow2,if=none,id=drive-ide0-0-0,format=qcow2:
could not open disk image /path/image.qcow2: Could not open file: Permission denied
Thoughts on the cause and alternate solutions?
I had a similar idea to use symlinks; however, in my scenario I run virt-manager via SSH with X forwarding, as a user in the wheel (admin) group. Creating a virtual guest produced the same error:
Unable to complete install: 'internal error: process exited while connecting to monitor: 2018-02-24T07:53:19.064452Z qemu-kvm: -drive file=/var/lib/libvirt/images/archlinux-2018.02.01-x86_64.iso,format=raw,if=none,id=drive-ide0-0-1,readonly=on: could not open disk image /var/lib/libvirt/images/archlinux-2018.02.01-x86_64.iso: Could not open '/var/lib/libvirt/images/archlinux-2018.02.01-x86_64.iso': Permission denied'
The issue is, in fact, with permissions. When setting up an .iso image using qemu-kvm, its ownership changes to user/group qemu upon deployment. We need to make sure user qemu has access rights along the whole path to the file.
Test case
User running virt-manager: yahol (wheel/admin group)
Iso or qcow2 image location: /home/yahol/isos/archlinux-2018.02.01-x86_64.iso
Note: please ignore the timestamps, as I hacked this answer together over 2 days. It was late and I was too tired to finish it the previous day.
Starting with image in user's home directory:
$ ll /home/yahol/isos/archlinux-2018.02.01-x86_64.iso
-rw-r--r--. 1 yahol yahol 565182464 Feb 24 08:49 archlinux-2018.02.01-x86_64.iso
Creating symlink as root in default path to KVM images:
# cd /var/lib/libvirt/images
# ln -s /home/yahol/isos/archlinux-2018.02.01-x86_64.iso
lrwxrwxrwx. 1 root root 48 Feb 23 21:35 archlinux-2018.02.01-x86_64.iso -> /home/yahol/isos/archlinux-2018.02.01-x86_64.iso
The symlink belongs to user root, while the actual file still has the user/group of a regular user.
Now we set up and deploy the virtual machine via virt-manager. Notice how the user/group of the image file changes to qemu:
$ ll /home/yahol/isos/archlinux-2018.02.01-x86_64.iso
-rw-r--r--. 1 qemu qemu 565182464 Feb 24 08:49 archlinux-2018.02.01-x86_64.iso
At this point the VM manager, be it virt-manager or virt-install, throws the aforementioned error. This happens because user qemu doesn't have access to the full path. The home dir of user yahol is only accessible to user yahol:
$ ll -d /home/yahol/
drwx------. 7 yahol yahol 258 Feb 24 08:37 /home/yahol/
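A handy way to audit every component of the path at once is namei from util-linux:
$ namei -l /home/yahol/isos/archlinux-2018.02.01-x86_64.iso
It lists the owner, group and mode of each directory on the way down, so a drwx------ blocker like the one above stands out immediately.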
Now let's create another path to which qemu has full access:
# mkdir -p /Qemu/Test/Iso
# mv archlinux-2018.02.01-x86_64.iso /Qemu/Test/Iso/
# chown -R qemu:qemu /Qemu/
# cd /var/lib/libvirt/images
# ln -s /Qemu/Test/Iso/archlinux-2018.02.01-x86_64.iso
# ll /Qemu/Test/Iso/archlinux-2018.02.01-x86_64.iso
-rw-rw-r--. 1 qemu qemu 565182464 Feb 23 08:53 /Qemu/Test/Iso/archlinux-2018.02.01-x86_64.iso
# ll /var/lib/libvirt/images
lrwxrwxrwx. 1 root root 46 Feb 24 09:39 archlinux-2018.02.01-x86_64.iso -> /Qemu/Test/Iso/archlinux-2018.02.01-x86_64.iso
This works perfectly fine; the virtual machine has been deployed and is running.
Possible solutions:
change the ownership of the entire path to the actual image to user/group qemu
# chown -R qemu:qemu /Qemu/
give read+execute permissions for others (world) on the entire path to the image
# chmod -R o+rx /Qemu/
WARNING: This might have security implications!
create a directory owned by user/group qemu, dedicated to the iso images that will be symlinked
# mkdir -p /Qemu/Test/Iso
# chown -R qemu:qemu /Qemu/
Summary:
Even though everything is done by a user with root privileges, the actual user working with the VM images is qemu. Adding qemu to the wheel group would still require providing a means to authenticate and, since qemu is a no-login user, that might be tricky.
I'm getting permission denied when attempting to copy a file at the command line from my Mac to a remote Windows IIS server. I have access to the IIS server and have confirmed that I have write permissions to the folder. I can remote-desktop to the server and navigate and work in the directories I want. I can copy the file successfully using Finder. From the Terminal command line I'm able to mount a volume, navigate, ls and cat the file in the directory I'm trying to cp to.
$cp -f ham.html /Volumes/external-api/eggs.html
cp: /Volumes/external-api/eggs.html: Permission denied
$ ls -l ham.html
-rw-r--r-- 1 kellykx LEGAL\Domain Users 18218 Jul 29 22:58 ham.html
$ ls -ld
drwxr-xr-x 31 kellykx LEGAL\Domain Users 1054 Jul 29 23:02 .
$ ls -l /Volumes/external-api/eggs.html
-rwx------+ 1 kellykx LEGAL\Domain Users 18218 Jul 29 15:23 /Volumes/external-api/eggs.html
$ ls -ld /Volumes/external-api
drwx------+ 1 johnsob2 LEGAL\Domain Users 16384 Jul 29 17:53 /Volumes/external-api
I'm worried there's some IIS voodoo I'm missing. Or worse, something obviously trivial.
Ideas welcome.
Resolved.
The permissions of the Windows share were more restrictive than the file system permissions and took precedence, causing the permission-denied message. The Windows share permission was read-only while the underlying directory and files were read-write, as shown.
To resolve:
I used Remote Desktop to log in to the Windows server.
Navigated the file manager to the parent dir of external-api.
Right-clicked and followed Properties -> Sharing -> Advanced Sharing -> Permissions.
Selected my name from the list box (already set up when the share was created).
Checked the Full Control, Change and Read checkboxes.
Open question:
How do you inspect the permissions of a mounted SMB share from the OSX command line? The equivalent of ls -l.
This might seem like a strange question at first, but hear me out.
I'm writing a shell script that builds up a file system that will get compressed back into an archive, and it needs some files in it to be owned by the root user. This whole thing is going to be automated soon, but right now it's a bit of a problem because if I use sudo I need to enter a password.
Seeing as the files are created beneath my own home directory, to which I have full access, I thought perhaps I could change their ownership to the root user. Is that possible?
If I try it normally I get "Operation not permitted". Maybe there is an alternative?
You can do what you want using fakeroot. It's a library that makes programs think they're running as root when they are not. IIRC, it is used by dpkg to allow non-root users to build .deb packages that contain root-owned files.
Check out this shell script:
#!/bin/bash
# Build a small tree with mixed ownership, then archive it.
mkdir image
touch image/user-owned
touch image/root-owned
chown renato.renato image/user-owned  # stays owned by the regular user
chown root.root image/root-owned      # would normally require root
tar cf image.tar image
Normally, I would only be able to create this tar archive as root. However, if I use fakeroot:
$ fakeroot ./create-image.sh
$ tar tvf image.tar
drwxr-xr-x root/root 0 2014-04-09 01:09 image/
-rw-r--r-- root/root 0 2014-04-09 01:09 image/root-owned
-rw-r--r-- renato/renato 0 2014-04-09 01:09 image/user-owned
However, the files on disk are still user-owned, so there is no security risk here:
$ ls -l image/
total 0
-rw-r--r-- 1 renato renato 0 Abr 9 01:09 root-owned
-rw-r--r-- 1 renato renato 0 Abr 9 01:09 user-owned