Why are files copied to a mounted folder from inside the container not appearing in the host folder? - Windows

When I mount a host folder into my neo4j container, the files in that folder appear inside the container's directory and things work. However, files that are inside the container's directory do not appear in the corresponding host folder - for example the config file inside the container, located in the mounted conf folder.
version: "3"
services:
  cneo4j:
    image: neo4j:3.3.5
    container_name: cneo
    ports:
      - "7474:7474"
      - "7687:7687"
    environment:
      - VIRTUAL_PORT=80
      - VIRTUAL_HOST=db.localhost.vm
      - VIRTUAL_ENABLE_SSL=True
      - LETSENCRYPT_HOST=db.localhost.vm
      - LETSENCRYPT_EMAIL=email@example.com
      - CERT_NAME=db.localhost.vm
    volumes:
      - /c/Users/moeter/cmcr/data/graph_main/neo4j/data:/data:rw
      - /c/Users/moeter/cmcr/data/graph_main/neo4j/logs:/logs:rw
      - /c/Users/moeter/cmcr/data/graph_main/neo4j/conf:/conf:rw
    restart: always
I am using Docker Toolbox for Windows.

A host volume (bind mount) always maps to the contents of the host directory; there is no merging of contents from the image and no initialization of the host directory from the image. Populating the host directory has to be done by your container's entrypoint or command. If there is initial data you want to load, it will need to be stored elsewhere in the image, since you cannot access the image's folder contents if they are hidden under a volume mount.
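As a concrete illustration, a minimal entrypoint sketch could seed an empty bind-mounted directory from defaults baked into the image. The paths (e.g. a hypothetical /defaults/conf) are illustrative and not part of the official neo4j image:

```shell
#!/bin/sh
# Sketch of an entrypoint helper: copy baked-in defaults into a mounted
# directory only when it is empty, so a fresh host bind mount gets
# populated but later user edits are preserved.
set -eu

seed_if_empty() {
    src="$1"; dst="$2"
    mkdir -p "$dst"
    # `ls -A` prints nothing only when the directory has no entries at all
    if [ -z "$(ls -A "$dst")" ]; then
        cp -a "$src/." "$dst/"
    fi
}

# In a real image you would call something like:
#   seed_if_empty /defaults/conf /conf
# and then hand off to the main process with: exec "$@"
```

The check-before-copy means running the container a second time against the same host folder will not clobber changes you made there.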

Related

Unable to mount docker folder into host using docker-compose

I'm using a docker container to run cucumber tests; after the tests finish it generates a report, and I want to copy the report to my host machine.
I have a report folder at the root of my project, and in the Dockerfile I create a test folder and copy all the files into the container.
Expected: the reports are copied from the container's /test/report folder to the host's /report folder.
docker-compose:
version: '3'
services:
  test:
    build:
      context: .
      args:
        xx : "xxx"
    volumes:
      - ./report:/test/report
Dockerfile:
RUN mkdir /test
WORKDIR /test
COPY . /test
RUN /test/testing.sh
There is other configuration in the Dockerfile, but it is not related to the volume/mount, so I didn't post it here.
There should be three reports in the report folder.
Inside the container, the reports can be seen at /test/report after the test, and if /report on my host is not empty, it overrides the reports in the container. But the volume doesn't work in the reverse direction.
I'm currently running this on a Windows machine.
Sounds like you want a bind mount.
Also it is normally best to post your whole docker-compose file just so it's easier to debug/reproduce/help.
Try explicitly setting the volume as a bind mount and see how you go:
- type: bind
  source: ./report
  target: /test/report
https://docs.docker.com/compose/compose-file/compose-file-v3/#volumes
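Spelled out in the context of the compose file from the question, the long-form mount might look like the following sketch (note the long syntax requires compose file format version 3.2 or later):

```yaml
version: '3.2'
services:
  test:
    build:
      context: .
    volumes:
      - type: bind
        source: ./report
        target: /test/report
```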

Docker volume "./lib:/lib" causes "no such file or directory"

I have this simple Dockerfile for Spring:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE=target/chatbot-2.4.3.jar
COPY ${JAR_FILE} /opt/sprintbotserver/chatbot.jar
COPY . /opt/sprintbotserver
ENTRYPOINT ["java","-jar","./opt/sprintbotserver/chatbot.jar"]
and I'm deploying it via docker-compose.yml:
version: "3"
services:
  sprintbotserver:
    container_name: sprintbotserver
    image: sprintbotserver:latest
    volumes:
      - "./logs:/logs"
      - "./chatbots:/chatbots"
      - "./config:/config"
      - "./db:/db"
      - "./demo:/demo"
      - "./dic:/dic"
      - "./lib:/lib" # this line causes the error
      - "./models:/models"
      - "./service:/service"
    ports:
      - "8080:8080"
I've marked the troubling line with a comment. All volumes except this one work as intended. When I add this line, I get standard_init_linux.go:211: exec user process caused "no such file or directory".
Can anyone help?
/lib is a standard directory in Linux containing important parts of the system; on an Alpine-based image it holds the musl C library and its dynamic loader, which is exactly why exec fails with "no such file or directory" once the mount hides it. Mounting anything else over it will break pretty much everything.
Can you (a) use a different directory name, or (b) keep the whole thing in a subdirectory (e.g. /opt, so you then use ./lib:/opt/lib)?
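Option (b) could look like the following sketch of the volumes section (host directory names taken from the question; only the container-side paths move under /opt, and the application would also need to be configured to read from the new locations):

```yaml
services:
  sprintbotserver:
    image: sprintbotserver:latest
    volumes:
      - "./lib:/opt/lib"   # no longer shadows the system /lib
      - "./models:/opt/models"
      - "./service:/opt/service"
```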
Alright, the keyword I was missing is WORKDIR :)

Docker Logstash, Failed to fetch pipeline configuration

On my Windows machine I run this docker-compose file:
docker-compose.yml:
version: '3.0'
services:
  logstash:
    image: logstash:latest
    command: -f ./etc/logstash/config/ --log.level debug
    volumes:
      - ./config/:/etc/logstash/config/
      - ./pipeline/:/etc/logstash/pipeline/
logstash.yml:
http.host: "0.0.0.0"
path.config: /etc/logstash/pipeline/logstash.conf
path.settings: /etc/logstash/config/logstash.yml
This gives the error:
Failed to fetch pipeline configuration {:message=>"No config files found: ./etc/logstash/config/. Can you make sure this path is a logstash config file?"}
What can be the problem? This is my directory structure on my Win10 machine:
/docker
  /config
    logstash.yml
  /pipeline
    logstash.conf
-- Edit
Problem solved with the next config file:
logstash:
  image: logstash:latest
  command: -f ./etc/logstash/pipeline/
  volumes:
    - ./config/:/etc/logstash/config/
    - ./pipeline/:/etc/logstash/pipeline/
You defined a mount of the host machine's ./config folder to /etc/logstash/config/ inside the container, in this part of your docker-compose.yaml file:
volumes:
  - ./config/:/etc/logstash/config/
The ./config path on the host machine is relative to the location of the docker-compose.yaml file.
The docker-compose.yaml is located in the ./docker/config folder, so there is no ./config folder inside it.
To achieve what you want, just move the docker-compose.yaml file one level up:
mv docker-compose.yaml ../
The file should then be located inside the ./docker folder:
/docker
  docker-compose.yaml
  /config
    logstash.yml
  /pipeline
    logstash.conf
You are mounting to the wrong directory inside the image.
The Logstash Docker image directory layout is as follows:
/usr/share/logstash/config
/usr/share/logstash/pipeline
and likewise for bin, data, etc.
See the documentation for more: Logstash Docker Directory Layout
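Putting that together, a compose sketch using the image's documented layout (and keeping the host directories from the question) would be:

```yaml
version: '3.0'
services:
  logstash:
    image: logstash:latest
    volumes:
      - ./config/:/usr/share/logstash/config/
      - ./pipeline/:/usr/share/logstash/pipeline/
```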

How can I mount data from an Oracle docker container?

I just want to start an Oracle docker container with docker-compose.yml.
That works so far, until I added a folder to sync/mount from the container.
Problem 1: if the folder on the host is missing, docker doesn't mount anything.
Problem 2: an empty folder is ignored by git, so I added an empty file, .emptyfileforgit.
So when I now run docker-compose up, docker mounts the folder with my placeholder file into the oracle container, and so the database is "broken".
docker compose file:
version: "3"
services:
  mysql_db:
    container_name: oracle_db
    image: wnameless/oracle-xe-11g:latest
    ports:
      - "49160:22"
      - "49161:1521"
      - "49162:8080"
    restart: always
    volumes:
      - "./oracle_data:/u01/app/oracle/oradata"
      - "./startup_scripts:/docker-entrypoint-initdb.d"
    environment:
      - ORACLE_ALLOW_REMOTE=true
Question: how can I get rid of this behaviour?
With a mysql container this works fine...
Thanks a lot!
With a volume of the form /path/in/host:/path/in/container, the data in the container folder is mapped to the given location on the host. Initially the db data is empty, which is why your host folder does not contain any data. And do not put mock/invalid data in the db folder in your container, because it will corrupt your db. If you want to load a db dump, just put it in the host folder; it will be mapped to the path in your container.
You need to copy files into /u01/app/oracle/oradata;
then you can access them outside the container on your system at the ./oracle_data location.
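One way to sidestep the empty-host-folder problem entirely (an alternative approach, not taken from the answers above) is a named volume, which Docker does initialize from the image's content on first use, unlike a host bind mount - sketch:

```yaml
version: "3"
services:
  oracle_db:
    image: wnameless/oracle-xe-11g:latest
    volumes:
      - oracle_data:/u01/app/oracle/oradata
volumes:
  oracle_data:
```

The trade-off is that the data then lives in Docker's volume storage rather than in a project folder you can track alongside your code.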

Docker compose - share volume Nginx

I just want to test Docker and it seems something is not working as it should. When I have my docker-compose.yml like this:
web:
  image: nginx:latest
  ports:
    - "80:80"
when I open my docker.app domain in the browser (a sample domain pointed at the docker IP), I get the default nginx webpage.
But when I try to do something like this:
web:
  image: nginx:latest
  volumes:
    - /d/Dev/docker/nginx-www/nginx/html/:/usr/share/nginx/html/
  ports:
    - "80:80"
when I run:
docker-compose up -d
and then open the same URL in the browser, I get:
403 Forbidden
nginx/1.9.12
I'm using Windows 8.1 as my host.
Am I doing something wrong, or can folders not be shared this way?
EDIT
Solution (based on #HemersonVarela answer):
The volume I was trying to pass was in the D:\Dev\docker location, so I was using /d/Dev/docker at the beginning of my path. But looking at https://docs.docker.com/engine/userguide/containers/dockervolumes/ you can read:
If you are using Docker Machine on Mac or Windows, your Docker daemon has only limited access to your OS X or Windows filesystem. Docker Machine tries to auto-share your /Users (OS X) or C:\Users (Windows) directory.
so what I needed to do was create my nginx-www/nginx/html directory in the C:\Users\marcin directory, so I ended up with:
web:
  image: nginx:latest
  volumes:
    - /c/Users/marcin/docker/nginx-www/nginx/html/:/usr/share/nginx/html/
  ports:
    - "80:80"
and this is working without a problem. Files are now shared as they should be.
If you are using Docker Machine on Windows, docker has limited access to your Windows filesystem. By default Docker Machine tries to auto-share your C:\Users (Windows) directory.
So the folder .../Dev/docker/nginx-www/nginx/html/ must be located somewhere under C:\Users directory in the host.
All other paths come from your virtual machine’s filesystem, so if you want to make some other host folder available for sharing, you need to do additional work. In the case of VirtualBox you need to make the host folder available as a shared folder in VirtualBox.
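For VirtualBox specifically, the extra work might look like the following sketch (the VM name "default" is docker-machine's default; adjust the name and paths to your setup):

```shell
# Stop the docker-machine VM before changing shared folders
docker-machine stop default

# Expose D:\Dev to the VM as a shared folder named "d/Dev"
VBoxManage sharedfolder add default --name "d/Dev" --hostpath "D:\Dev" --automount

docker-machine start default
```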
You have to add a step that copies your nginx.conf into the nginx image:
Dockerfile:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
Create a directory, name it nginx, and put the Dockerfile & nginx.conf there; then you have to set up a build:
docker-compose.yml:
web:
  image: nginx:latest
  build: ./nginx/
  volumes:
    - /d/Dev/docker/nginx-www/nginx/html/:/usr/share/nginx/html/
  ports:
    - "80:80"
Then build your containers with: sudo docker-compose build
