Docker volume - Windows host and Linux container

I have this very simple Dockerfile:
FROM node:current-alpine3.14 AS baseImage
WORKDIR /app
COPY package* .
RUN npm install
COPY . .
CMD ["npm", "run", "watch"]
Then I run:
docker build . -t myimage
And then:
docker run -p 8080:3000 -v /c/myFolder:/app myimage
So basically I want a shared folder between the /app folder in the container and C:\myFolder on my host.
But it doesn't work:
For instance, if I update C:\myFolder\index.js, the change doesn't show up in the container.
And here is what docker inspect myContainer returns under the Mounts section.
Is it something to do with escaping or path format? Or am I missing something fundamental about volumes, or about WORKDIR?
Mounts:
The Mounts entry lists "Program Files/Git/app" as the destination (container) folder, but it should simply be "/app".
Binds:

The command
docker run -p 8080:3000 -v /c/myFolder:/app myimage
was run with Git Bash for Windows.
Running the exact same command from CMD solved my issue.
It looks like Git Bash was converting the /app destination folder into \Program Files\Git\app.
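If you want to keep using Git Bash, note that it is its MSYS layer that rewrites Unix-style paths into Windows paths. As a hedged sketch (not part of the original answer), you can usually suppress that conversion with the MSYS_NO_PATHCONV environment variable, or by doubling the leading slash of the container path:
# disable MSYS path conversion for this one command
MSYS_NO_PATHCONV=1 docker run -p 8080:3000 -v /c/myFolder:/app myimage
# or escape the container path with a doubled slash
docker run -p 8080:3000 -v /c/myFolder://app myimage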

Related

deploy golang app on docker from ubuntu image (multistage)

I'm running docker compose up inside the testdocker folder and getting the error below.
Here is my Dockerfile:
# golang base image
FROM golang:1.19-alpine as golangBase
RUN mkdir /app
ADD . /app
WORKDIR /app
RUN go mod tidy && go build -o handlerBuild main.go
FROM ubuntu:latest
RUN mkdir /app
WORKDIR /app
COPY --from=golangBase /app .
ENTRYPOINT [ "/app/handlerBuild" ]
Here is my docker-compose.yml:
My file structure looks like this:
/testdocker
/testdocker/main.go
/testdocker/go.mod
/testdocker/dockerfile
/testdocker/docker-compose.yml
Edit:
Found the solution here; I needed the flag CGO_ENABLED=0 while building:
Go-compiled binary won't run in an alpine docker container on Ubuntu host
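For reference, a minimal sketch of the fixed build line under that assumption: building on Alpine links the binary against musl libc, while ubuntu:latest uses glibc, and CGO_ENABLED=0 forces a statically linked binary that runs on both. Only the build instruction changes; the rest of the Dockerfile stays as above.
# build a statically linked binary so it also runs outside Alpine
RUN go mod tidy && CGO_ENABLED=0 go build -o handlerBuild main.go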
I think that when you COPY --from=golangBase /app . from golangBase, because golangBase already has WORKDIR /app, Docker tries to copy from /app/app on golangBase (resulting in a file-not-found error).
Can you try COPY --from=golangBase . . instead? Additionally, because you set WORKDIR /app on the ubuntu image, I think your entrypoint should just be ENTRYPOINT ["/handlerBuild"]. In other words, you can think of WORKDIR /app as telling the Docker image cd /app, so all commands thereafter are run relative to the /app path.
Unrelated note: I'm pretty sure WORKDIR /app creates the directory app and also cd's into it. So instead of
RUN mkdir /app
ADD . /app
WORKDIR /app
I'm pretty sure you could just put
WORKDIR /app
and have the same effect. Correct me if I'm wrong though — I couldn't actually run the Docker build commands because I don't have your source code.

Copy contents from host OS into Docker image without rebuilding image

I'm building a new image and copying contents from the host OS folder D:\Programs\scrapy into it like so: docker build . -t scrapy
Dockerfile
FROM mcr.microsoft.com/windows/servercore:ltsc2019
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
RUN mkdir root
RUN cd root
WORKDIR /root
RUN mkdir scrapy
COPY scrapy to /root/scrapy
Now when I add new contents to the host OS folder "D:\Programs\scrapy" I want to also add them to the image folder "root/scrapy", but I DON'T want to build a completely new image (it takes quite a while).
So how can I keep the existing image and just overwrite the contents of the image folder "root/scrapy"?
Also: I don't want to copy the new contents EACH time I run the container (so NOT at run-time); I just want a SEPARATE command to add more files to an existing image and then run a new container based on that image at another time.
I checked here: How to update source code without rebuilding image (but not sure if OP tries to do the same as me)
UPDATE 1
Checking What is the purpose of VOLUME in Dockerfile and docker --volume format for Windows
I tried the commands below, all resulting in error:
docker: Error response from daemon: invalid volume specification: ''. See 'docker run --help'.
where the path I used is, for example, D:/Programs/scrapy:/root/scrapy
docker run -v //D/Programs/scrapy:/root/scrapy scrapy
docker run -v scrapy:/root/scrapy scrapy
docker run -it -v //D/Programs/scrapy:/root/scrapy scrapy
docker run -it -v scrapy:/root/scrapy scrapy
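One thing worth noting (my observation, not from the original post): scrapy here is a Windows container, built from servercore, and volume destinations for Windows containers generally have to be Windows-style paths rather than Unix-style ones. A hedged sketch of what the mount might look like instead:
# Windows containers expect a Windows-style destination path
docker run -it -v D:\Programs\scrapy:C:\root\scrapy scrapy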
UPDATE with docker cp command based on @Makariy's feedback
docker images -a gives:
REPOSITORY TAG IMAGE ID CREATED SIZE
scrapy latest e35e03c8cbbd 29 hours ago 5.71GB
<none> <none> 2089ad178feb 29 hours ago 5.71GB
<none> <none> 6162a0bec2fc 29 hours ago 5.7GB
<none> <none> 116a0c593544 29 hours ago 5.7GB
mcr.microsoft.com/windows/servercore ltsc2019 d1724c2d9a84 5 weeks ago 5.7GB
I run docker run -it scrapy and then docker container ls which gives:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1fcda458a14c scrapy "c:\\windows\\system32…" About a minute ago Up About a minute thirsty_bassi
If I run docker cp D:\Programs\scrapy scrapy:/root/scrapy I get:
Error: No such container:path: scrapy:\root
So in a separate PowerShell instance I then ran docker cp D:\Programs\scrapy thirsty_bassi:/root/scrapy, which showed no output in PowerShell whatsoever, so I think it should've done something.
But then, inside my container instance, when I go to the /root/scrapy folder I only see the files that were already added when the image was built, not the new ones I wanted to add.
Also, I think I'm adding files to the container here, but is there no way to add them to the image instead? Without rebuilding the whole image?
UPDATE 2
My folder structure:
D:\Programs
Dockerfile
\image_addons
Dockerfile
\scrapy
PS D:\Programs> docker build . -t scrapybase
Successfully built 95676d084e28
Successfully tagged scrapybase:latest
PS D:\Programs\image_addons> docker build -t scrapy .
Step 2/2 : COPY scrapy to /root/scrapy
COPY failed: file not found in build context or excluded by .dockerignore: stat to: file does not exist
Dockerfile A
FROM mcr.microsoft.com/windows/servercore:ltsc2019
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
WORKDIR /root/scrapy
Dockerfile B
FROM scrapybase
COPY scrapy to /root/scrapy
You can also use docker cp to manually copy files from your host to a running container:
docker cp ./path/to/file containername:/another/path
Docs
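Since the question asks about updating the image rather than a single container, a hedged follow-up sketch (my addition, not from the original answer): after docker cp you can snapshot the modified container into a new image with docker commit. The container and image names below are just the ones already used in this thread.
# copy the new files into the running container
docker cp D:\Programs\scrapy thirsty_bassi:/root/scrapy
# then persist the container's current filesystem as a new image
docker commit thirsty_bassi scrapy:updated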
Answer if you want it quick and dirty:
docker run -it -v c:/programs/test:/root/test ubuntu:latest cat /root/test/myTestFile.txt
To update one file quickly:
If you don't have to build your code (I don't know what language you are using), you can create a base image with the initial code, and when you want to change only one file, add it as a new layer. (Again, I'm assuming you don't need to recompile your project for that; if you do, this approach isn't possible, due to the nature of compiled languages.)
FROM previous-version-image:latest
COPY myfile dest/to/file
Then, because your CMD and ENTRYPOINT are inherited from the previous stages, there is no need to declare them again. (If you don't remember them, use docker history <docker-image-name> to view the virtual Dockerfile for the image up to this stage.)
Be careful not to use this method repeatedly, though, or you'll end up with a very large image with many useless layers. Use it only for quick testing and debugging.
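As a small illustration of the docker history tip above (the image name is just an example):
# show every layer and the instruction that created it, without truncation
docker history --no-trunc previous-version-image:latest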
Explanation:
Usually people use this for frontend development in Docker containers, but the basic idea persists: you create the basic working image with the dependencies installed and the directory layout set up, with the last Dockerfile command being the development server start command.
example:
Dockerfile:
# pull the base image
FROM node:slim
# set the working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# copy dependencies files
COPY package.json ./
COPY package-lock.json ./
# install app dependencies
RUN npm install
# add app
COPY . ./
# start development server
CMD ["npm", "start"]
startup command:
docker run -it --rm \
-v ${PWD}:/app \ <mount current working directory in host to container in path /app>
-v /app/node_modules \ <or other dependency directory if exists>
-p 80:3000 \ <ports if needs exposing>
ps-container:dev
I'm not sure this use case will work for you 100%, because it needs the code to be bind-mounted all the time, and when it's time to export, you'll have to export both the image and the source code directory. On the other hand, it lets you make quick changes without waiting for the image to be rebuilt each time you add something, and at the end you build the final image that contains everything that's needed.
A more relatable example for the code provided in the question:
As you can see, there is a file on the host machine that contains some text,
and the command below uses a bind mount so the container has access to that file:
docker run -it -v c:/programs/test:/root/test ubuntu:latest cat /root/test/myTestFile.txt
Hope you find something that works for you in what I've provided here.
Thanks to this tutorial and this example for the starting examples and information.
EDIT:
Let's say your original Dockerfile looks like this:
FROM python:latest
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD python /app/app.py
This will build your initial image, on top of which we'll add layers and change the Python files.
The next Dockerfile we'd use (let's call it Dockerfile.fix) would copy the file we want to change on top of the ones already in the image:
FROM previous-image-name
COPY app.py .
Now, after building with this Dockerfile, the final image's virtual Dockerfile would look (sort of) like this:
FROM python:latest
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD python /app/app.py
FROM previous-image-name
COPY app.py .
And each time we want to change the file, we'll use the second Dockerfile.
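A hedged sketch of the corresponding build commands (the -f flag selects the alternate Dockerfile; the image names are just examples):
# build the initial image once
docker build -t previous-image-name .
# after editing app.py, rebuild only the thin fix layer
docker build -f Dockerfile.fix -t previous-image-name:fixed .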
There's no way you can change a Docker image without (at least partially) rebuilding it. But you don't have to rebuild all of it, you can just rebuild the layer copying your scrapy content.
You can optimize your build to have two images:
The first image is the static one you don't want to rebuild each time. Let's call it scrapy-base.
The second and final image is based on scrapy-base and exists only to copy in your dynamic scrapy content.
scrapy-base's Dockerfile:
FROM mcr.microsoft.com/windows/servercore:ltsc2019
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
RUN mkdir root
RUN cd root
WORKDIR /root
RUN mkdir scrapy
And build it like:
docker build -t scrapy-base .
This command only needs to be run once. You won't have to rebuild this image if you only change the content of the local scrapy folder (as you can see, this build does not use that folder at all).
scrapy's Dockerfile:
FROM scrapy-base
COPY scrapy /root/scrapy
With build command:
docker build -t scrapy .
This second build command will re-use the previous static image and only copy content without having to rebuild the entire image. Even with lots of files it should be pretty quick. You don't need to have a running container.
For your scenario:
docker run -v D:/test:/root/test your-image
Lots of valuable detail is available in this thread.

How to create a directory in Docker from a zsh terminal?

So I have downloaded Docker Desktop, and until now I have tested out containers and such just by executing regular commands (docker ps, docker images..., docker run...) inside my zsh terminal, and it works fine. But now I want to create a directory inside the Docker host so that I can put my Dockerfile inside it. If I run mkdir directory-name, it creates the directory on my Mac, not in Docker! So what command can I use to indicate that I want the directory created in Docker and not on my own Mac machine?
While your docker container is running, you can start a new shell session inside using docker exec.
docker exec -it mycontainer bash
-i means interactive, so you can type
-t allocates a pseudo-TTY; just know that you need the argument
Then inside this bash, you can create folders and files all you want and they will be placed inside your running container. Note that whenever you remove the container (e.g. to update its image), these changes will be entirely lost. For persistence, use docker volumes.
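For that last point, a minimal hedged sketch of a named volume, so files survive container removal (the volume name and mount path are just examples):
# create a named volume once
docker volume create mydata
# anything written under /data now persists across container removals
docker run -it -v mydata:/data ubuntu:latest bash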
Say you have the following directory structure:
.
├── Dockerfile
└── simple-web-app
Your Dockerfile:
FROM scratch
ADD simple-web-app simple-web-app
Then you would run
docker build .

VS React-Redux container connecting to api-sqlserver containers

I've got a problem with the VS React-Redux template deployed as a Docker container connecting to the API Docker container. Below are the given facts:
Fact 1. I've got 3 Docker Windows containers in docker hub (https://hub.docker.com/repository/docker/solomiosisante/test):
solomiosisante/test:sqlserver
solomiosisante/test:api
solomiosisante/test:react
Fact 2. I managed to make the api connect to sqlserver and make them communicate by creating a docker nat network. API container can get and display data from the sqlserver container.
Fact 3. I also run the react container using the same nat network.
Fact 4. I can successfully docker run the react container.
Fact 5. They are all running .NET 5.0 (VS projects), though I'm not sure about sqlserver because I just got it from the microsoft/mssql-server-windows-developer image.
Fact 6. I can run the react project from visual studio and load the pages in the browser with no problem. (and it connects to api container)
Problem: I could not get the react container to serve any of my pages. The browser says the site can't be reached: connection timed out.
React project Dockerfile:
# escape=`
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
#Depending on the operating system of the host machines(s) that will build or run the containers, the image specified in the FROM statement may need to be changed.
#For more information, please see https://aka.ms/containercompat
###########################################################################################
FROM mcr.microsoft.com/powershell:nanoserver-1903 AS downloadnodejs
RUN mkdir -p C:\nodejsfolder
WORKDIR C:\nodejsfolder
SHELL ["pwsh", "-Command", "$ErrorActionPreference = 'Stop';$ProgressPreference='silentlyContinue';"]
RUN Invoke-WebRequest -OutFile nodejs.zip -UseBasicParsing "https://nodejs.org/dist/v15.6.0/node-v15.6.0-win-x64.zip"; `
Expand-Archive nodejs.zip -DestinationPath C:\; `
Rename-Item "C:\node-v15.6.0-win-x64" c:\nodejs
###########################################################################################
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
###########################################################################################
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
RUN mkdir -p C:\nodejs
COPY --from=downloadnodejs C:\nodejs\ C:\nodejs
# needs to use ContainerAdministrator to be able to setx path
USER ContainerAdministrator
RUN setx /M PATH "%PATH%;C:\nodejs"
USER ContainerUser
RUN echo %PATH%
#RUN echo "%PATH%"
#RUN echo $PATH
#RUN echo {$PATH}
WORKDIR /src
#COPY ["Consequence.React/Consequence.React.csproj", "Consequence.React/"]
#COPY ["Consequence.API/Consequence.API.csproj", "Consequence/"]
#COPY ["Consequence.EF/Consequence.EF.csproj", "Consequence.EF/"]
#COPY ["Consequence.Repositories/Consequence.Repositories.csproj", "Consequence.Repositories/"]
COPY . .
RUN dotnet restore "Consequence.React/Consequence.React.csproj"
#WORKDIR "/src/Consequence.React/ClientApp"
#RUN npm install
#RUN npm audit fix
WORKDIR "/src/Consequence.React"
RUN dotnet build "Consequence.React.csproj" -c Release -o /app/build
WORKDIR /src
RUN dir /s
WORKDIR "/src/Consequence.React"
###########################################################################################
FROM build AS publish
RUN dotnet publish "Consequence.React.csproj" -c Release -o /app/publish
###########################################################################################
FROM base AS final
RUN mkdir -p C:\nodejs
COPY --from=downloadnodejs C:\nodejs\ C:\nodejs
# needs to use ContainerAdministrator to be able to setx path
USER ContainerAdministrator
RUN setx /M PATH "%PATH%;C:\nodejs"
USER ContainerUser
RUN echo %PATH%
WORKDIR /app
COPY --from=publish /app/publish .
RUN dir /s
ENV ASPNETCORE_URLS="https://+;http://+"
ENV ASPNETCORE_HTTP_PORT=8089
ENV ASPNETCORE_HTTPS_PORT=44319
ENV ASPNETCORE_Kestrel__Certificates__Default__Password="P#ssw0rd123"
ENV ASPNETCORE_Kestrel__Certificates__Default__Path=/src/certs/consequence.pfx
ENTRYPOINT ["dotnet", "Consequence.React.dll"]
Any ideas, questions, please comment. Thank you in advance.
I finally solved this problem by using isolation=process. I'd had so much trouble with Hyper-V that I figured I could do without it. I followed the article Docker on Windows without Hyper-V by Chris and updated my Docker Hub test repo, where I also put instructions on how to make it work. Please check it out. I now have Docker images I can use as base images for my future Docker-React-Redux deployments using Windows containers. I hope this helps those who have encountered the same problems as me.
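For context, a hedged sketch of what running with process isolation looks like (the port comes from the Dockerfile above and the image tag from the question; process isolation also requires the container's Windows build to match the host's):
# run a Windows container with process isolation instead of Hyper-V
docker run --isolation=process -p 8089:8089 solomiosisante/test:react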

Cannot write into ~/.m2 in docker maven container

Why aren't files written to /root/.m2 in the maven:3 Docker image persistent during the build?
A simple Dockerfile:
FROM maven:3-jdk-8
RUN touch /root/.m2/testfilehere && ls -a /root/.m2
RUN ls -a /root/.m2
RUN stat /root/.m2/testfilehere
CMD ["bash"]
Produces the following output.
$ docker build -t test --no-cache .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM maven:3-jdk-8
---> 42e3884987fb
Step 2 : RUN touch /root/.m2/testfilehere && ls -a /root/.m2
---> Running in 1c1dc5e9f082
.
..
testfilehere
---> 3da352119c4d
Removing intermediate container 1c1dc5e9f082
Step 3 : RUN ls -a /root/.m2
---> Running in df506db8c1dd
.
..
---> d07cc155b20e
Removing intermediate container df506db8c1dd
Step 4 : RUN stat /root/.m2/testfilehere
---> Running in af44f30aafe5
stat: cannot stat ‘/root/.m2/testfilehere’: No such file or directory
The command '/bin/sh -c stat /root/.m2/testfilehere' returned a non-zero code: 1
The file created by the first command is gone when the intermediate container exits.
Also, this does not happen in the ubuntu image, just in maven.
Edit: using ADD hostfile /root/.m2/containerfile does work as a workaround, but it is not what I want.
The Maven Docker image has an entrypoint:
ENTRYPOINT ["/usr/local/bin/mvn-entrypoint.sh"]
When the container starts, the entrypoint copies files from /usr/share/maven/ref into ${MAVEN_CONFIG}, clobbering your file.
You can see the script executed on startup by following this link:
https://github.com/carlossg/docker-maven/blob/33eeccbb0ce15440f5ccebcd87040c6be2bf9e91/jdk-8/mvn-entrypoint.sh
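To see what actually ends up in /root/.m2 when that entrypoint doesn't run, a hedged one-liner (my suggestion, not from the original answer):
# bypass the entrypoint and list the directory directly
docker run --rm --entrypoint ls maven:3-jdk-8 -a /root/.m2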
That's because /root/.m2 is defined as a VOLUME in the image. When a container runs with a volume, the volume storage is not part of the UnionFS - so its data is not stored in the container's writable layer:
A data volume is a specially-designated directory within one or more containers that bypasses the Union File System.
The first RUN command creates a file in the volume, but that's in an intermediary container with its own volume. The file isn't saved to the image layer because it's in a volume.
The second RUN command is running in a new intermediary container which has its own volume. There's no content in the volume from the base image, so the volume is empty in the new container.
If you want to pre-populate the volume with data, you need to do it in the Dockerfile as you've seen.
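You can confirm the volume declaration on the image itself; a hedged sketch using docker inspect with a Go template:
# list the volumes declared by the image
docker inspect -f '{{ .Config.Volumes }}' maven:3-jdk-8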
There is documentation for this in the Docker Maven repo:
https://github.com/carlossg/docker-maven#packaging-a-local-repository-with-the-image
COPY settings.xml /usr/share/maven/ref/
After you run your Docker image, the settings.xml will appear in /root/.m2.
