I am working with Docker and want to mount a dynamic folder that changes a lot (so I do not have to build a Docker image for each execution, which would be too costly), but I want that folder to be read-only inside the container. Changing the folder's owner to another user works, but chown requires root access, which I would prefer not to expose to an application.
When I mount with the -v flag, ownership follows whatever username I give: I created a non-root user inside the Docker image, but all the files in the volume, originally owned by the user that ran docker, change to the user I pass on the command line, so I cannot make read-only files and folders. How can I prevent this?
I also added mustafa ALL=(docker) NOPASSWD: /usr/bin/docker to sudoers so I could run docker as another user from the terminal, but the files still carry my user's permissions.
You can specify that a volume should be read-only by appending :ro to the -v switch:
docker run -v volume-name:/path/in/container:ro my/image
Note that the folder is then read-only in the container and read-write on the host.
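To see the effect, you can try writing through the mount from inside the container; the write should be rejected. A minimal check (the alpine image and the data path are just placeholders):

docker run --rm -v "$PWD/data":/data:ro alpine touch /data/test
# fails: Read-only file system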
2018 Edit
According to the Use volumes documentation, there is now another way to mount volumes, using the --mount switch. Here is how to use it with a read-only mount:
$ docker run --mount source=volume-name,destination=/path/in/container,readonly my/image
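The --mount syntax also works for bind mounts of a host directory; the type defaults to volume, so for a host path you set type=bind explicitly. A minimal sketch (the host path here is just a placeholder):

docker run --mount type=bind,source="$PWD"/config,target=/path/in/container,readonly my/image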
docker-compose
Here is an example of how to specify read-only containers in docker-compose:
version: "3"
services:
redis:
image: redis:alpine
read_only: true
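Note that read_only: true makes the container's entire root filesystem read-only, which is not the same thing as mounting a single volume read-only. If the service still needs a writable location, pairing it with a tmpfs mount is a common pattern; a minimal sketch (the /tmp path is only an example):

version: "3"
services:
  redis:
    image: redis:alpine
    read_only: true
    tmpfs:
      - /tmp  # writable scratch space while the root filesystem stays read-only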
docker-compose
Here is a proper way to specify a read-only volume in docker-compose:
Long syntax
version: "3.2" # Use version 3.2 or above
services:
my_service:
image: my:image
volumes:
- type: volume
source: volume-name
target: /path/in/container
read_only: true
volumes:
volume-name:
https://docs.docker.com/compose/compose-file/compose-file-v3/#long-syntax-3
Short syntax
Add :ro to the volume mount definition:
version: "3.0" # Use version 3.0 or above
services:
my_service:
image: my:image
volumes:
- /path/on/host:/path/inside/container:ro
https://docs.docker.com/compose/compose-file/compose-file-v3/#short-syntax-3
Related
I'm slightly confused about the correct way to use Filebeat's modules while running Filebeat in a Docker container. It appears that the developers prefer the modules.d method, but their exact intentions are not clear to me.
Here is the relevant part of my filebeat.yml:
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: true
    reload.period: 60s
And the filebeat container definition from my docker-compose.yml:
filebeat:
  build:
    context: filebeat/
    args:
      ELK_VERSION: $ELK_VERSION
  container_name: filebeat
  mem_limit: 2048m
  labels:
    co.elastic.logs/json.keys_under_root: true
    co.elastic.logs/json.add_error_key: true
    co.elastic.logs/json.overwrite_keys: true
  volumes:
    - type: bind
      source: ./filebeat/config/filebeat.docker.yml
      target: /usr/share/filebeat/filebeat.yml
      read_only: true
    - type: bind
      source: /volume1/#docker/containers
      target: /var/lib/docker/containers
      read_only: true
    - type: bind
      source: /var/run/docker.sock
      target: /var/run/docker.sock
      read_only: true
    - type: bind
      source: /var/log
      target: /host-log
      read_only: true
    - type: volume
      source: filebeat-data
      target: /usr/share/filebeat/data
  user: root
  depends_on:
    - elasticsearch
    - kibana
  command: filebeat -e -strict.perms=false
  networks:
    - elk
With this configuration, I can docker exec into my container and activate modules (pulling in their default configuration), and set up pipelines and dashboards like so:
filebeat modules enable elasticsearch kibana system nginx
filebeat setup -e --pipelines
This all works fine, until I come to recreate my container, at which point the enabled modules are (unsurprisingly) disabled and I have to run this stuff again.
I tried to mount the modules.d directory on my local filesystem, expecting this to populate my local filesystem with the default module config files (with their .disabled suffix), so that recreating the container would preserve the enabled modules. To do so, the following mount was added to docker-compose.yml:
- type: bind
  source: ./filebeat/config/modules.d
  target: /usr/share/filebeat/modules.d
  read_only: false
When I do this, and recreate the container, it spins up with an empty modules.d directory and filebeat modules list returns no modules at all, meaning none can be enabled.
My current workaround is to copy each individual module's config file and mount it specifically, like so:
- type: bind
  source: ./filebeat/config/modules.d/system.yml
  target: /usr/share/filebeat/modules.d/system.yml
  read_only: true
- type: bind
  source: ./filebeat/config/modules.d/nginx.yml
  target: /usr/share/filebeat/modules.d/nginx.yml
  read_only: true
This is suboptimal for a number of reasons; chiefly, if I want to enable a new module and have it persist across container recreation, I need to:
docker exec into the container
get the default config file for the module I want to use
create a file on the local filesystem for the module
edit the docker-compose.yml file with the new bind mounted module config
recreate the container with docker-compose up --detach
The way I feel this should work is:
I mount modules.d to my local filesystem
I recreate the container
modules.d gets populated with all the default module config files
I enable modules by filebeat modules enable blah or by renaming the module config file from my local filesystem (removing the .disabled suffix)
Enabled modules and their config survive container recreation
One way around this could be to copy (urgh) the whole modules.d directory from a running container to my local filesystem and mount that wholesale. That feels wrong too.
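(For reference, that copy would presumably be something along the lines of the following, using the container_name from my compose file and then bind mounting ./filebeat/config/modules.d as before:)

# copy the populated modules.d out of the running container onto the host
docker cp filebeat:/usr/share/filebeat/modules.d ./filebeat/config/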
What am I misunderstanding or misconfiguring here? How are other people doing this?
I build a custom image for each type of beat and embed the .yml configuration in the image. Then I use the filebeat.modules property of the configuration file to set up my modules inside that file.
I think the intention of the modules.d folder approach is to make it easier to understand the module configuration of a filebeat instance that is working with multiple file types.
That is certainly the point of the 1-image-to-1-module/file-type approach that I use: all of my logic/configuration is stored along with the service that I am monitoring, not in one central, monolithic location.
Another benefit of this approach is that each individual filebeat only has access to the log files it needs. In the case of collecting the Docker logs like you are, you need to run in privileged mode to bind mount /var/run/docker.sock. If I want to run your compose file but do not want to run privileged (or if I am on Windows and cannot), then I lose all the other monitoring that you have built out.
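For reference, configuring modules directly inside filebeat.yml instead of through modules.d looks roughly like this; the modules and fileset options below are only illustrative, and the exact settings depend on the module:

filebeat.modules:
  - module: nginx
    access:
      enabled: true
    error:
      enabled: true
  - module: system
    syslog:
      enabled: true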
Given a spring boot app that writes files to /var/lib/app/files.
I create a Docker image with the Gradle task:
./gradlew bootBuildImage --imageName=app:latest
Then, I want to use it in docker-compose:
version: '3.5'
services:
  app:
    image: app:latest
    volumes:
      - app-storage:/var/lib/app/files
    # ...ports etc.
volumes:
  app-storage:
This will fail because the folder is created during docker-compose up and is owned by root, so the app has no write access to it.
The quick fix is to run the image as root by specifying user: root:
version: '3.5'
services:
  app:
    image: app:latest
    user: root # <------------ required
    volumes:
      - app-storage:/var/lib/app/files
    # ...ports etc.
volumes:
  app-storage:
This works fine, but I do not want to run it as root. How can I achieve that? Normally I could write a Dockerfile that creates the desired folder with the correct ownership and write permissions, but as far as I know buildpacks do not use a custom Dockerfile, so bootBuildImage would not pick it up - correct? How can we create writable volumes then?
By inspecting the image I found that the buildpack uses /cnb/lifecycle/launcher to launch the application. Hence I was able to customize the docker command and fix the owner of the specific folder before launch:
version: '3.5'
services:
  app:
    image: app:latest
    # enable the app to write to the storage folder (docker will create it as root by default)
    user: root
    command: "/bin/sh -c 'chown 1000:1000 /var/lib/app/files && /cnb/lifecycle/launcher'"
    volumes:
      - app-storage:/var/lib/app/files
    # ...ports etc.
volumes:
  app-storage:
Still, this is not very nice, because it is not straightforward (so my future self will need to spend time understanding it again) and it is very limited in its extensibility.
Update 30.10.2020 - Spring Boot 2.3
We ended up creating another Dockerfile/layer so that we do not need to hassle with this in the docker-compose file:
# The base_image should hold a reference to the image created by ./gradlew bootBuildImage
ARG base_image
FROM ${base_image}
ENV APP_STORAGE_LOCAL_FOLDER_PATH /var/lib/app/files
USER root
RUN mkdir -p ${APP_STORAGE_LOCAL_FOLDER_PATH}
RUN chown ${CNB_USER_ID}:${CNB_GROUP_ID} ${APP_STORAGE_LOCAL_FOLDER_PATH}
USER ${CNB_USER_ID}:${CNB_GROUP_ID}
ENTRYPOINT /cnb/lifecycle/launcher
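The extra layer is then built on top of the buildpack image, for example (the app-with-storage tag is just an illustrative name):

./gradlew bootBuildImage --imageName=app:latest
docker build --build-arg base_image=app:latest -t app-with-storage:latest .

and docker-compose then references app-with-storage:latest instead of app:latest.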
Update 25.11.2020 - Spring Boot 2.4
Note that the above Dockerfile will result in this error:
ERROR: failed to launch: determine start command: when there is no default process a command is required
The reason is that the default entrypoint used by the Paketo builder changed. Changing the entrypoint from /cnb/lifecycle/launcher to the new one fixes it:
ENTRYPOINT /cnb/process/web
See also this question: ERROR: failed to launch: determine start command: when there is no default process a command is required
I'm using docker-compose to manage a multi-container application. One of those containers needs access to the contents of a directory on the host.
This seems simple according to the various sources of documentation on docker and docker-compose but I'm struggling to get it working.
event_processor:
  environment:
    - COMPOSE_CONVERT_WINDOWS_PATHS=1
  build: ./Docker/event_processor
  ports:
    - "15672:15672"
  entrypoint: python -u /src/event_processor/event_processor.py
  networks:
    - app_network
  volumes:
    - C/path/to/interesting/directory:/interesting_directory
Running this I get the error message:
ERROR: Named volume
"C/path/to/interesting/directory:/interesting_directory:rw" is used in
service "event_processor" but no declaration was found in the
volumes section.
I understand from the docs that a top-level declaration is only necessary if data is to be shared between containers, which isn't the case here.
The docs for docker-compose I linked above have an example which seems to do exactly what I need:
version: "3.2"
services:
web:
image: nginx:alpine
ports:
- "80:80"
volumes:
- type: volume
source: mydata
target: /data
volume:
nocopy: true
- type: bind
source: ./static
target: /opt/app/static
networks:
webnet:
volumes:
mydata:
However when I try, I get errors about the syntax:
ERROR: The Compose file '.\docker-compose.yaml' is invalid because:
services.audio_event_processor.volumes contains an invalid type, it
should be a string
So I tried to play along:
volumes:
  - type: "bind"
    source: "C/path/to/interesting/directory"
    target: "/interesting_directory"
ERROR: The Compose file '.\docker-compose.yaml' is invalid because:
services.audio_event_processor.volumes contains an invalid type, it should be a string
So again the same error.
I tried the following too:
volumes:
  - type=bind, source=C/path/to/interesting/directory,destination=/interesting_directory
No error, but attaching to the running container, I see the following two folders:
type=bind, source=C
So it seems that I am able to create a number of volumes with 1 string (though the forward slashes are cutting the string in this case) but I am not mapping it to the host directory.
I've read the docs but I think I'm missing something.
Can someone post an example of mounting a Windows directory from a host into a Linux container so that the existing contents of the Windows directory are available from the container?
OK so there were multiple issues here:
1.
I had
version: '3'
at the top of my docker-compose.yml. The long syntax described here wasn't implemented until 3.4 so I stopped receiving the bizarre syntax error when I updated this to:
version: '3.6'
2.
I use my Docker account on 2 Windows PCs. Following a hint from another Stack Overflow post, I reset Docker to the factory settings. I had to give Docker the computer's username and password, with the notice that this was necessary to access the contents of the local filesystem - at this point I remembered doing this on another PC, so I'm not sure whether the credentials were correct on this one. With the correct credentials for the current PC, I was able to bind-mount the volume with the expected results, as follows:
version: '3.6'
services:
  event_processor:
    environment:
      - COMPOSE_CONVERT_WINDOWS_PATHS=1
    build: ./Docker/event_processor
    ports:
      - "15672:15672"
    entrypoint: python -u /src/event_processor/event_processor.py
    networks:
      - app_network
    volumes:
      - type: bind
        source: c:/path/to/interesting/directory
        target: /interesting_directory
Now it works as expected. I'm not sure if it was the factory reset or the updated credentials that fixed it. I'll find out tomorrow when I use another PC and update.
I'm trying to mount a directory with configuration files in my docker-compose.yml.
In my case it is logstash, which tells me the mounted directory is empty.
Opening a bash shell and running ls -la in the parent directory shows that the pipeline directory is empty and owned by root.
One weird thing is that it worked a few days ago.
docker-compose.yml:
version: '3'
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:5.6.3
    ports:
      - 5000:5000
      - 8989:8989
    volumes:
      - C:/PROJECT_DIR/config/logstash/pipeline/:/usr/share/logstash/pipeline/
I found it better to experiment with docker itself, as it gives more feedback:
docker run --rm -it -v C:/PROJECT_DIR/config/logstash/pipeline/:/usr/share/logstash/pipeline/ docker.elastic.co/logstash/logstash:5.6.3
From here and some googling I found out that I had to reset my shared drive credentials under "Docker for Windows" -> Settings... -> Shared Drives, because I had changed my Windows domain user password.
If you changed your system username or password then you need to re-apply the credentials to get the volume mount working.
I just want to test Docker and it seems something is not working as it should. When I have my docker-compose.yml like this:
web:
  image: nginx:latest
  ports:
    - "80:80"
when I open my docker.app domain in the browser (a sample domain pointed at the Docker IP), I get the default nginx webpage.
But when I try to do something like this:
web:
  image: nginx:latest
  volumes:
    - /d/Dev/docker/nginx-www/nginx/html/:/usr/share/nginx/html/
  ports:
    - "80:80"
when I run:
docker-compose up -d
and then open the same URL in the browser, I get:
403 Forbidden
nginx/1.9.12
I'm using Windows 8.1 as my host.
Am I doing something wrong, or can folders not be shared this way?
EDIT
Solution (based on @HemersonVarela's answer):
The volume I was trying to pass was located in D:\Dev\docker, so I was using /d/Dev/docker at the beginning of my path. But looking at https://docs.docker.com/engine/userguide/containers/dockervolumes/ you can read:
If you are using Docker Machine on Mac or Windows, your Docker daemon has only limited access to your OS X or Windows filesystem. Docker Machine tries to auto-share your /Users (OS X) or C:\Users (Windows) directory.
so what I needed to do was create my nginx-www/nginx/html directory under C:\Users\marcin, so I ended up with:
web:
  image: nginx:latest
  volumes:
    - /c/Users/marcin/docker/nginx-www/nginx/html/:/usr/share/nginx/html/
  ports:
    - "80:80"
and this is working without a problem. Files are now shared as they should be.
If you are using Docker Machine on Windows, docker has limited access to your Windows filesystem. By default Docker Machine tries to auto-share your C:\Users (Windows) directory.
So the folder .../Dev/docker/nginx-www/nginx/html/ must be located somewhere under the C:\Users directory on the host.
All other paths come from your virtual machine’s filesystem, so if you want to make some other host folder available for sharing, you need to do additional work. In the case of VirtualBox you need to make the host folder available as a shared folder in VirtualBox.
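Roughly, that additional work looks like this, assuming docker-machine with the default VirtualBox machine name default (names and paths are illustrative and need adjusting to your setup):

# stop the VM, register the host folder as a VirtualBox shared folder, start again
docker-machine stop default
VBoxManage sharedfolder add default --name "d/Dev" --hostpath "D:\Dev" --automount
docker-machine start default

Depending on the boot2docker version, you may additionally have to mount the shared folder inside the VM before Docker can bind-mount from it.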
You have to add a COPY instruction to copy your nginx.conf into the nginx image:
Dockerfile:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
Create a directory named nginx and put the Dockerfile and nginx.conf there, then add a build entry:
docker-compose.yml:
web:
  image: nginx:latest
  build: ./nginx/
  volumes:
    - /d/Dev/docker/nginx-www/nginx/html/:/usr/share/nginx/html/
  ports:
    - "80:80"
Then build your containers with: sudo docker-compose build
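For completeness, a typical sequence after that is to build and then start the containers:

sudo docker-compose build
sudo docker-compose up -d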