Writable directories with rootless Podman - Ansible

I'm trying to run rootless Podman containers with
podman container create --name postgres --expose 5432 --memory 512M \
  --hostname postgres \
  --volume /home/user/some/path/postgres:/var/lib/postgresql/data:Z,U \
  --userns keep-id \
  --env POSTGRES_USER=admin --env POSTGRES_PASSWORD=secret \
  docker.io/postgres:14
but I'm getting the error message
Error: error stat'ing file `/home/user/some/path/postgres`: Permission denied: OCI permission denied
The destination path /home/user/some/path is within a gocryptfs mount; mapping the volume to a path outside that mount works flawlessly.
So far I thought that --userns keep-id should avoid permission issues for rootless containers, but if I remove the option I get the error message
chown: changing ownership of '/var/lib/postgresql/data': Operation not permitted
As far as I understood, providing the options --uidmap and --gidmap could help as well, but I'm not sure how to determine the proper values for them.
Under the hood I'm using Ansible to configure the containers.
EDIT: I have now also created a Podman issue.

The reason for this error was that the mount wasn't done with the FUSE option allow_other.
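If the filesystem is mounted manually, the option can be passed on the gocryptfs command line (a minimal sketch; the cipher and mount paths are placeholders, and non-root users can only use allow_other if user_allow_other is enabled in /etc/fuse.conf):
# Mount with allow_other so UIDs mapped into the container's
# user namespace are allowed to traverse the FUSE mount.
gocryptfs -allow_other /home/user/.crypt /home/user/some/path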

Make sure that you have configured /etc/subuid and /etc/subgid, as described here:
https://github.com/containers/podman/blob/main/docs/tutorials/rootless_tutorial.md
Rootless Podman requires the user running it to have a range of UIDs listed in the files /etc/subuid and /etc/subgid.
...
The format of this file is USERNAME:UID:RANGE
- username as listed in /etc/passwd or in the output of getpwent.
- The initial UID allocated for the user.
- The size of the range of UIDs allocated for the user.
Example:
# cat /etc/subuid
johndoe:100000:65536
test:165536:65536
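If you change these files for a user who has already run Podman, the tutorial recommends stopping that user's containers and running podman system migrate so the new mapping takes effect. You can then verify the mapping from inside the user namespace:
# shows the UID ranges currently mapped for rootless Podman
podman unshare cat /proc/self/uid_map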

Related

Why is Podman trying to pull an image that already exists after loading from file?

I am working in an air-gapped environment running Fedora CoreOS, which comes packaged with Podman. I have several container images that I need to transport into the air-gapped environment. In order to do this I have followed these steps:
1. I acquired the images on a machine with internet access. Some of the images were pulled into Podman from the local Docker daemon using podman pull docker-daemon:docker.io/my-example-image:latest, while some were pulled directly from the online repositories using podman pull.
2. I saved the images to a tar file using (for example) podman save docker.io/my-example-image:latest -o my-example-image.tar
3. I transported the tar files to the air-gapped environment on physical media and loaded them using podman load -i my-example-image.tar
When I check the images using podman images, they all appear in the images list. However, if I try to run a container from one of these images using sudo podman run docker.io/my-example-image, I get a long error message:
Trying to pull docker.io/my-example-image
Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on [::1}:53: read udp [::1]:50762 ->
[::1]:53: read: connection refused
Error: unable to pull docker.io/my-example-image: Error initializing source docker://my-example-image:latest:
error pinging docker registry registry-1.docker.io: Get "https://registry-1.docker.io/v2/": dial tcp: lookup
registry-1.docker.io on [::1]:53: read udp [::1]50762 -> [::1]:53: read: connection refused
I get a similar message for images that were acquired from other repositories like quay.io.
It seems to me that the error is caused by the machine's inability to establish a connection with a registry, which makes sense considering that the environment is air-gapped. But I am not sure why Podman is even trying to pull these images when they already exist in the environment, as confirmed by podman images.
I have tried various ways of referencing the image within the podman run command, including
sudo podman run docker.io/my-example-image:latest
sudo podman run my-example-image
sudo podman run my-example-image:latest
I have tried searching for a solution to this problem to no avail and would very much appreciate any guidance on this.
Each user has their own container storage.
The user root uses the directory /var/lib/containers/
Normal users use the directory ~/.local/share/containers/
The command
podman load -i my-example-image.tar
will use the directory ~/.local/share/containers/
The command
sudo podman run docker.io/my-example-image
will use the directory /var/lib/containers
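You can see the split by listing the images from both stores:
podman images        # images in ~/.local/share/containers/
sudo podman images   # images in /var/lib/containers/
If you intend to run the containers as root, the simplest fix is to load the tar file as root too, e.g. sudo podman load -i my-example-image.tar; otherwise, drop the sudo from your podman run commands.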
If you would like to share read-only container storage between users, check out the setting additionalimagestores in the file storage.conf:
[storage.options]
additionalimagestores = [ "/var/lib/mycontainers",]
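The system-wide file is typically /etc/containers/storage.conf; rootless users can override it with ~/.config/containers/storage.conf.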
Reference:
https://www.redhat.com/sysadmin/image-stores-podman

Data directory permissions on host for Clickhouse installation via docker on Windows

I have a similar issue to the following question, but in PowerShell, as I am running a ClickHouse Docker container on Windows 10.
Data directory permissions on host for Clickhouse installation via docker
My setup is run as such:
docker run -d -p 8124:8124 --name my_database --ulimit nofile=262144:262144 --volume=E:/:/var/lib/clickhouse yandex/clickhouse-server
The E: drive is one of the drives on my Windows computer.
I cannot seem to access /var/lib/clickhouse/data when creating a MergeTree table. It seems that the ClickHouse client is not being given adequate permissions to reach this file system. The error looks as such:
Access to file denied: /var/lib/clickhouse/data/default/test_mergetree/tmp_insert_20150731_20150731_8_8_0
Since I am in PowerShell, I am unsure how I might approach solving this. I attempted to set the owner of the data directory from PowerShell:
Something like this:
ICACLS "var/lib/clickhouse/data" /setowner "administrator"
But since ClickHouse is dockerized, PowerShell cannot find the path:
The system cannot find the path specified.
Would I have to use Docker Compose? Or am I approaching this the wrong way?
ATTEMPT 1
I've tried running the following:
docker run --rm -i --entrypoint /bin/sh yandex/clickhouse-server -c id clickhouse
# got back:
uid=0(root) gid=0(root) groups=0(root)
# went into the system and ran
docker exec -it container-id bash
chown -R 0:0 /var/lib/clickhouse
# got back:
chown: cannot read directory '/var/lib/clickhouse/System Volume Information': Permission denied
You should run Docker and ClickHouse on Linux instead of Windows.
It turns out this is an issue that has not been fixed in Docker Desktop for Windows:
https://github.com/docker/for-win/issues/39
Volume mounts are necessary, but I got around it by changing the disk image to the target host drive: under Settings -> Advanced, change the virtual hard disk image to the drive you want, and you can write within that drive. Note that you still won't have access to the raw data.
This is still an issue, but the practical solution is to move to Docker volumes.
With bind mounts there is a problem under WSL 2.
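A minimal sketch of the named-volume approach (the volume name clickhouse_data is an arbitrary choice):
# create a named volume managed by Docker
docker volume create clickhouse_data
# same run command as before, but backed by the volume instead of E:/
docker run -d -p 8124:8124 --name my_database --ulimit nofile=262144:262144 --volume=clickhouse_data:/var/lib/clickhouse yandex/clickhouse-server
The data then lives inside Docker's Linux VM rather than on the Windows filesystem, which sidesteps the ownership problem; the trade-off, as noted above, is that you cannot browse the raw files from Windows.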

Unable to invoke Docker using the Jenkins user

I am trying to run a Docker command as part of a Jenkins job using shell, and I get an error stating
"Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.39/images/json: dial unix /var/run/docker.sock: connect: permission denied"
I will require some help on:
1. How to find the path where Docker is installed, which can be added to the Jenkins global configuration.
2. A workaround to fix this permission issue (running as a sudo user or any other specific user).
I have already experimented with adding the jenkins user to the admin group and the staff group, and with making it an administrator, but nothing has actually helped. I still get the same error.
I also tried the command below in a terminal:
sudo -u jenkins docker images
OUTPUT:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.39/images/json: dial unix /var/run/docker.sock: connect: permission denied
I am expecting to run this command without the error; only then will my Jenkins pipeline be complete.
To find the path where Docker is installed, simply run which docker. Usually it will be installed somewhere in the standard PATH already, so Jenkins probably has access to it. Since you get a permission-denied error rather than a command-not-found error, Jenkins is evidently already using the correct docker executable.
Depending on the distribution or operating system you are using, you will most likely need to add the jenkins user to the docker group, e.g. sudo usermod -aG docker jenkins. To find out which group you need, run:
$ ls -l /var/run/docker.sock
srw-rw---- 1 root docker 0 Apr 30 16:20 /var/run/docker.sock
In the output, you can see the group that owns docker.sock (here, docker). Add the jenkins user to that group.
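Note that group membership is only picked up by new login sessions, so restart the Jenkins service after the change (for example sudo systemctl restart jenkins, assuming a systemd-based install); the earlier test should then succeed:
# re-run the check from the question; should now list images
sudo -u jenkins docker images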

Docker Postgres with a Windows share

I migrated from Linux to Windows and tried to set up a Postgres container with a mounted directory (copied from my Linux install) containing the database.
This does not work:
Windows mounts are always owned by root, but Postgres does not run as root.
How do I get this unholy combination to work?
You don't provide many details, so it is difficult to tell what actually went wrong. However, there is a known issue with Postgres on Docker for Windows when a Windows mount is used for the database data files. In that case, running docker logs will show something along the following lines:
waiting for server to start....FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
HINT: The server must be started by the user that owns the data directory.
stopped waiting
pg_ctl: could not start server
Unfortunately there is no way to overcome this issue, so you cannot use a Windows mount; see Postgres Data has wrong ownership. You may use Docker volumes in order to make the database data independent from the Postgres container, using the following commands:
docker create -v /var/lib/postgresql/data --name PostgresData alpine
docker run -p 5432:5432 --name yourPostgres -e POSTGRES_PASSWORD=yourPassword -d --volumes-from PostgresData postgres
You may find a more thorough explanation at Setup Postgresql on Windows with Docker.
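On current Docker versions, a named volume achieves the same result without the separate data-only container (a sketch; the volume and container names are arbitrary):
# create a named volume and attach it directly to the Postgres container
docker volume create pgdata
docker run -p 5432:5432 --name yourPostgres -e POSTGRES_PASSWORD=yourPassword -d -v pgdata:/var/lib/postgresql/data postgres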

How to set up a data volume within docker?

I am trying to set up the rocker/rstudio Docker image on Ubuntu 14.04.5 with a data volume so that all my data lives outside of the container. I have looked at Manage data in containers for some guidance.
sudo docker run -d -p 8787:8787 rocker/rstudio -v ~/data/
I get back the following error:
docker: Error response from daemon: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"exec: \\"-v\\": executable file not found in $PATH\"\n".
Your -v flag should appear before the name of the image you want to run. If you list it after the image name, Docker will interpret it as the command used to launch the container.
You shouldn't use ~ when referring to the container path. A better approach would be to use an absolute path like /data.
If you are using a data volume in order to get persistence, consider mounting a host directory as your data volume (as seen in the tutorial you've linked to under Mount a host directory as a data volume).
Your final command should look something like this:
sudo docker run -d -p 8787:8787 -v /src/data:/data/ rocker/rstudio
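Once the container is up, you can confirm the bind mount took effect (a quick sanity check, not required):
# print the mount configuration of the running container
sudo docker inspect -f '{{ .Mounts }}' <container-id>
Anything RStudio writes under /data inside the container will then appear under /src/data on the host.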
