How to mount an NFS file share inside a Windows Docker container

I am running Docker on Windows Server 2022.
I need to start a Windows container - it must be a Windows container because I am porting a .NET Framework app that cannot run on Linux.
My Windows container needs to write uploaded files to a network share.
I have read lots of discussions, but I could only find examples for Linux containers.
Currently I am trying with a Docker Compose file. This is the compose file:
version: "3.9"
services:
  web:
    image: misterpanel
    ports:
      - "80:80"
    volumes:
      - misterpanel_images:/inetpub/wwwroot/Img
volumes:
  misterpanel_images:
    driver: local
    driver_opts:
      type: cifs
      o: username=<myusername>,password=<mypassword>,rw,domain=<mydomain>
      device: "\\\\<server-ip-address>\\<files-share>"
The misterpanel image was created FROM mcr.microsoft.com/dotnet/framework/aspnet.
When running docker-compose up, I get the following error:
PS C:\Users\Administrator\docker> docker-compose up
Creating volume "docker_misterpanel_images" with local driver
ERROR: create docker_misterpanel_images: options are not supported on this platform
I have also tried mapping the network drive on the host and then mounting the bind when starting the container, like this:
docker run -p 80:80 --dns 8.8.8.8 --mount type=bind,source=z:\,target=c:\inetpub\wwwroot\Img misterpanel
But then I get this error (even with the Z: drive working on the host):
docker: Error response from daemon: invalid mount config for type "bind": bind source path does not exist: z:.
I have tried all the possible syntaxes for the source ( \Z\ and //Z/ for example ).
I could only mount binds in the Windows container from the local C: drive.
Has anyone ever mounted a bind to a file share inside a Windows container?
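For reference, the approach Microsoft documents for Windows Server containers is SMB Global Mapping: the share is mapped once on the host with credentials, and the container then bind-mounts the mapped drive as if it were a local path. A sketch, with placeholder server, share, and credential values:

```powershell
# PowerShell on the container host (Windows Server 1709 or later); values are placeholders
$creds = Get-Credential -Credential 'mydomain\myusername'
New-SmbGlobalMapping -RemotePath '\\<server-ip-address>\<files-share>' -Credential $creds -LocalPath G:

# the globally mapped drive can now be bind-mounted like a local folder
docker run -p 80:80 -v G:\:C:\inetpub\wwwroot\Img misterpanel
```

Unlike a per-user mapped drive (Z:), a global mapping is visible to the Docker service, which is why the bind source is found.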
Any help will be appreciated.
Thanks,

Related

Configuring user permissions in docker-compose

I have struggled for the past few days with getting Docker volumes to use the correct permissions with a headless Ubuntu server. I am very new to Docker and still a novice with Ubuntu, so it may well be obvious to others!
My setup:
I have a mini-PC with a relatively small SSD running Windows.
I have an Ubuntu server with multiple (old) HDDs that combine to form a storage pool for all my media (photos, movies, etc.).
I run Emby Server on the Windows machine to manage the media. However, I use the server for the Emby cache and transcoding, as I want to minimise the read/write activity on the SSD to prolong its life.
Because I use Emby away from home too, I have a VPN with port-forwarding to protect my home network. But because I don't want all my traffic on the VPN, I recently decided to try moving Emby and the VPN into a Docker container, so that I could use the Windows PC for other purposes too (and because I wanted to learn a bit about Docker).
I had managed to get my original Emby server working with Ubuntu network folders by using Samba, and I ultimately had to set all permissions on the relevant Ubuntu media folders to 2777 so that Windows could get read/write access. That in itself might suggest a problem.
Note: I have created a user:group on Ubuntu called emby:emby, and the uid:gid is 996:998.
But for the life of me I am unable to get Emby Server running as a Docker container with the correct permissions: whenever Emby needs to write to the Ubuntu folders (it can read fine), I get an access-denied error in the logs. This was not the case when Emby Server ran on Windows outside a Docker container (although, as mentioned, I had to open up all the security on the folders to get it working).
What I have tried (without any success):
removed VPN
set uid:gid to 996:998 in docker-compose (I have also tried other users, like 1000:1000, and root)
tested by creating a new file from the Windows PC in an Ubuntu network folder and double-checked that the user:group is emby:emby (which it is)
mapped the relevant Ubuntu folder to a Windows drive and used this Windows drive in docker-compose
tried changing the Emby container to network_mode: bridge
removed the FIREWALL
But I have run out of ideas, and nothing I find on Google seems to provide the detail I need. Below is my docker-compose file; any suggestions are very welcome! Note I may need my hand held to walk through the steps...
version: '3.7'
volumes:
  emby-movies:
    driver_opts:
      type: cifs
      o: "user=u,password=p,file_mode=0777,dir_mode=0777,iocharset=utf8,vers=3.1.1,rw,uid=996,gid=998,sec=ntlmssp"
      device: //192.168.1.162/Entertainment/Movies
  emby-tvshows:
    driver_opts:
      type: cifs
      o: "user=u,password=p,file_mode=0777,dir_mode=0777,iocharset=utf8,vers=3.1.1,rw,uid=996,gid=998,sec=ntlmssp"
      device: //192.168.1.162/Entertainment/TV_Shows
  emby-cache:
    driver_opts:
      type: cifs
      o: "user=u,password=p,file_mode=0777,dir_mode=0777,iocharset=utf8,vers=3.1.1,rw,uid=996,gid=998,sec=ntlmssp"
      device: //192.168.1.162/EmbyCache/cache
  emby-homepictures:
    driver_opts:
      type: cifs
      o: "user=u,password=p,file_mode=0777,dir_mode=0777,iocharset=utf8,vers=3.1.1,rw,uid=996,gid=998,sec=ntlmssp"
      device: //192.168.1.162/Multimedia/Pictures
  emby-homevideos:
    driver_opts:
      type: cifs
      o: "user=u,password=p,file_mode=0777,dir_mode=0777,iocharset=utf8,vers=3.1.1,rw,uid=996,gid=998,sec=ntlmssp"
      device: //192.168.1.162/Multimedia/Home_Videos
services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: provider
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=zzz
      - OPENVPN_USER=xxx
      - OPENVPN_PASSWORD=yyy
      - SERVER_CITIES=Paris,Lisbon,Prague
      - FIREWALL_OUTBOUND_SUBNETS=192.168.1.0/24
    ports:
      - 7170:7170 # emby HTTP port
      - 7180:7180 # emby HTTPS port
    volumes:
      - C:\Docker\gluetun:/gluetun
    restart: unless-stopped
  emby:
    image: emby/embyserver
    container_name: embyserver
    network_mode: "service:gluetun" # VPN
    environment:
      - UID=996 # The UID to run emby as (default: 2)
      - GID=998 # The GID to run emby as (default 2)
      # - GIDLIST=100 # A comma-separated list of additional GIDs to run emby as (default: 2)
    volumes:
      - C:\Docker\Emby_Data:/config # Configuration directory
      - emby-cache:/mnt/cache # cache directory
      - emby-tvshows:/mnt/tvshows # Media directory
      - emby-movies:/mnt/movies # Media directory
      - emby-homepictures:/mnt/homepictures # Media directory
      - emby-homevideos:/mnt/homevideos # Media directory
    restart: unless-stopped
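When debugging a setup like the one above, it can help to confirm which identity the container actually writes with and what ownership the CIFS mounts really present. A few diagnostic commands, using the container name and mount paths from the compose file above:

```shell
# which uid/gid the Emby process actually runs as inside the container
docker exec embyserver id

# numeric ownership and permissions the CIFS volume presents
docker exec embyserver ls -ln /mnt/movies

# can the container create a file on the share at all?
docker exec embyserver touch /mnt/movies/.write-test
```

If the uid/gid reported by `id` does not match the `uid=996,gid=998` baked into the volume options, access-denied errors on write are the expected outcome.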

How to network 2 separate Docker containers to communicate with each other?

I'm pretty new to Docker, and I've tried searching about networking but haven't found a solution that works.
I have a Laravel app that is using Laradock.
I also have an external third-party API that runs in its own Docker container.
I basically want to specify the container name of the API inside my Laravel .env file and have it dynamically resolve the container IP, so I can make API calls from my Laravel app. I can already do this with services that are part of Laradock, like mariadb/mysql, but since my API is located in an external container, Laravel can't connect to it.
I tried making a network and attaching them with:
docker network create my-network
Then inside the docker-compose.yml files for each of the containers, I specified:
networks:
  my-network:
    name: "my-network"
But if I try to ping them with:
docker exec -ti laradock-workspace-1 ping my-api
I can't connect and can't really figure out why. I was hoping someone familiar with Docker might be able to explain why, since I'm sure it's something very obvious I'm missing. Thanks!
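One likely cause, assuming the two apps live in separate Compose projects: a network that already exists (created with docker network create) must be declared as external in each docker-compose.yml, and each service must be attached to it explicitly, otherwise Compose creates its own per-project network. A sketch (the service name my-api is a placeholder):

```yaml
# in each project's docker-compose.yml
services:
  my-api:              # placeholder service name
    networks:
      - my-network
networks:
  my-network:
    external: true     # use the pre-created network instead of making a new one
```

With both containers attached to the same external network, the service/container name resolves as a hostname from either side.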
By default, Docker Compose uses a bridge network to provision inter-container communication. Read this article for more info about inter-container networking.
What matters for you is that by default Docker Compose creates a hostname that equals the service name in the docker-compose.yml file.
Consider the following docker-compose.yml:
version: '3.9'
services:
  server:
    image: node:16.9.0
    container_name: server
    tty: true
    stdin_open: true
    depends_on:
      - mongo
    command: bash
  mongo:
    image: mongo
    environment:
      MONGO_INITDB_DATABASE: my-database
When you run docker-compose up, Docker creates a default network and assigns the service name as the hostname for both mongo and server.
You can now access the backend container via:
docker exec -it server bash
And from there you can reach the mongo container over Docker's internal network (on its default port, 27017, in this case):
curl -v http://mongo:27017/my-database
That's it. The same applies for your setup.

Create docker image with named/host volume for spring boot application

I have a Spring Boot application which I am trying to dockerize for the first time. I am using Docker version 20.10.1 and my host PC runs Ubuntu 20.04.
This Spring Boot application has a data directory, whose contents are created while the application is running. I want to access this data from the host operating system; that is why I am using a volume.
However, when I try to mount my container to a named volume or to a host volume, it always creates an anonymous volume regardless of the command I type.
Here is my docker file.
FROM openjdk:15
COPY target/lib/* /usr/src/app/lib/
COPY target/core-api-7.3.6.jar /usr/src/app/lib/core-api-7.3.6.jar
COPY config/application.properties /usr/src/app/config/application.properties
COPY data/poscms/config/* /usr/src/app/data/poscms/config/
WORKDIR /usr/src/app
ENTRYPOINT ["java", "-jar", "lib/core-api-7.3.6.jar"]
VOLUME /usr/src/app/data
/usr/src/app/data is the directory where the core-app.jar application will create its runtime data; I need to access this data from my host PC.
Following is the command for building the image
docker build -t core-app:5.0 .
then I create the container using the following command
docker run -it -d -p 7071:7071 core-app:5.0 -v /home/bob/data/:/usr/src/app/data
when I check the volumes by running following command
docker volume ls
I can see an anonymous volume being created by this container,
and my host path, /home/kapila/data/, is empty; the container data is not written to the host path.
I experience the same behaviour with a named volume as well.
I created a named volume using the following commands:
docker volume create tmp
docker run -it -d -p 7071:7071 core-app:5.0 -v tmp:/usr/src/app/data
and still Docker creates an anonymous volume, and the data is not written to the tmp volume.
My host PC runs Ubuntu. Could someone point out what I am doing wrong here?
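One detail worth checking before blaming the image: docker run treats everything after the image name as arguments to the container's entrypoint, so in the commands above the -v flag is most likely being handed to the Java process rather than to Docker, which then creates the anonymous volume declared by the Dockerfile's VOLUME instruction. Options have to come before the image:

```shell
# options must precede the image name; anything after core-app:5.0 goes to the entrypoint
docker run -it -d -p 7071:7071 -v /home/bob/data:/usr/src/app/data core-app:5.0
```

The same reordering applies to the named-volume variant (-v tmp:/usr/src/app/data before the image name).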
I do something like this:
In your project root, have these files pertaining to Docker as required:
1. Dockerfile 2. docker-compose.yml 3. docker-env-preview.env
Dockerfile content
FROM openjdk:8-jdk-alpine
ARG jarfilepath
RUN mkdir /src
WORKDIR /src
VOLUME /src/tomcat
ADD $jarfilepath yourprojectname.jar
docker-compose.yml content
version: '3'
services:
project-name:
container_name: project-name-service
build:
context: .
args:
jarfilepath: ./target/project-0.0.1.jar
env_file:
- docker-env-preview.env
ports:
- "8831:8831"
- '5005:5005'
networks:
- projectname_subnet
command: java -jar -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 projectname.jar --spring.profiles.active=preview
networks:
project-name_subnet:
external: true
docker-env-preview.env
This file contains your environment variable values. The application.properties can read this file to fetch the values, like buildserver.ip=${BUILD_SERVER_DOMAIN}. Basically, you define whatever you need, like the example below.
GARBABE_SERVER_DOMAIN=h-db-preview
GARBABE_SERVER_PORT=5422
GARBABE_DB=projectdb
GARBABE_USER=user
GARBABE_PASSWORD=pwd
JPA_DDL_AUTO=validate
JPA_DIALECT=org.hibernate.dialect.PostgreSQLDialect
JPA_SHOW_SQL=false
JPA_SE_SQL_COMMENTS=true
JPA_FORMAT_SQL=false
JPA_NON_CONTEXTUAL_CREATION=true
APP_NAME=project-name-service
BUILD_SERVER_METHOD=http
BUILD_SERVER_DOMAIN=7.8.9.4
Commands to execute:
mvn clean package (if you use Maven)
docker-compose up -d --build (then execute docker ps to check the details of the running container)
To view the logs: sudo docker logs <project-name-service> -f
To get into the container console: docker exec -it <project-name-service> bash
I was able to fix the issue, and the only change I made was the base image, from
FROM openjdk:15
to
FROM adoptopenjdk/openjdk15:ubi
Now named and host volume mounts work as expected. I am not sure what is wrong with the official openjdk:15 image.

Docker (for Windows) does not mount volume

I'm trying to mount a directory with configuration files in my docker-compose.yml.
In my case it is logstash, which tells me the mounted directory is empty.
Opening a bash shell and running ls -la in the parent directory shows that the pipeline directory is empty and owned by root.
One weird thing is, that it worked a few days ago.
docker-compose.yml:
version: '3'
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:5.6.3
    ports:
      - 5000:5000
      - 8989:8989
    volumes:
      - C:/PROJECT_DIR/config/logstash/pipeline/:/usr/share/logstash/pipeline/
I found it better to experiment with docker itself, as it gives more feedback:
docker run --rm -it -v C:/PROJECT_DIR/config/logstash/pipeline/:/usr/share/logstash/pipeline/ docker.elastic.co/logstash/logstash:5.6.3
From here and some googling I found out I had to reset my shared-drive credentials under "Docker for Windows" -> Settings... -> Shared Drives, because I had changed my Windows domain user password.
If you changed your system username or password, you need to re-apply the credentials to get the volume mount working.
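After re-applying the credentials, a throwaway container is a quick way to confirm the drive share works again before involving logstash (alpine is just a small test image; adjust the path to your project):

```shell
# if the mount is healthy, this lists the pipeline files instead of an empty directory
docker run --rm -v C:/PROJECT_DIR/config/logstash/pipeline:/data alpine ls -la /data
```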

Docker compose - share volume Nginx

I just want to test Docker, and it seems something is not working as it should. When I have my docker-compose.yml like this:
web:
  image: nginx:latest
  ports:
    - "80:80"
When I open my docker.app domain in the browser (a sample domain pointed at the Docker IP), I get the default nginx webpage.
But when I try to do something like this:
web:
  image: nginx:latest
  volumes:
    - /d/Dev/docker/nginx-www/nginx/html/:/usr/share/nginx/html/
  ports:
    - "80:80"
and run:
docker-compose up -d
then the same URL in the browser gives me:
403 Forbidden
nginx/1.9.12
I'm using Windows 8.1 as my host.
Am I doing something wrong, or can folders not be shared this way?
EDIT
Solution (based on #HemersonVarela's answer):
The volume I tried to pass was in the D:\Dev\docker location, so I was using /d/Dev/docker at the beginning of my path. But looking at https://docs.docker.com/engine/userguide/containers/dockervolumes/ you can read:
If you are using Docker Machine on Mac or Windows, your Docker daemon has only limited access to your OS X or Windows filesystem. Docker Machine tries to auto-share your /Users (OS X) or C:\Users (Windows) directory.
So what I needed to do was create my nginx-www/nginx/html directory under C:\Users\marcin, so I ended up with:
web:
  image: nginx:latest
  volumes:
    - /c/Users/marcin/docker/nginx-www/nginx/html/:/usr/share/nginx/html/
  ports:
    - "80:80"
and this is working without a problem. Files are now shared as they should be
If you are using Docker Machine on Windows, docker has limited access to your Windows filesystem. By default Docker Machine tries to auto-share your C:\Users (Windows) directory.
So the folder .../Dev/docker/nginx-www/nginx/html/ must be located somewhere under C:\Users directory in the host.
All other paths come from your virtual machine’s filesystem, so if you want to make some other host folder available for sharing, you need to do additional work. In the case of VirtualBox you need to make the host folder available as a shared folder in VirtualBox.
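For VirtualBox, that additional work can be sketched as follows, assuming the docker-machine VM is named "default" (the usual name, but an assumption here):

```shell
# stop the VM, share D:\Dev into it, then start it again
docker-machine stop default
VBoxManage sharedfolder add default --name "d/Dev" --hostpath "D:\Dev" --automount
docker-machine start default
```

After this, paths under /d/Dev become visible inside the VM and can be used as bind-mount sources.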
You have to set a command to copy your nginx.conf into the nginx container:
Dockerfile:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
Create a dir, name it nginx, and put the Dockerfile & nginx.conf there; then you have to set a build:
docker-compose.yml:
web:
  image: nginx:latest
  build: ./nginx/
  volumes:
    - /d/Dev/docker/nginx-www/nginx/html/:/usr/share/nginx/html/
  ports:
    - "80:80"
Then build your containers with: sudo docker-compose build