I apologize if this is a silly question, but when I type the following inside PowerShell on Windows 10:
docker run -it -v C:\Users\Bob\Documents\test:/usr/python -w /usr/python bob/python
It works just fine and I receive the following prompt:
root@63eef6ac2b96:/usr/python#
To avoid repeating the command over and over, I built a makefile with the following target:
docker:
docker run -it -v C:\Users\Bob\Documents\test:/usr/python -w /usr/python bob/python
When I try to execute
make docker
I receive the following error:
PS C:\Users\Bob\documents\test> make docker
docker run -it -v C:\Users\Bob\Documents\test:/usr/python -w /usr/python bob/python
c:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from
daemon: the working directory 'C:/MinGW/msys/1.0/python' is invalid, it needs to be an absolute path.
See 'c:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.
make.exe": *** [docker] Error 125
Any suggestion is greatly appreciated.
You do not have to use a Makefile. Docker Compose is what you are looking for.
In brief, you need to create a docker-compose.yml file and describe all the desired steps inside it. I am not aware of your full setup, but I will try to provide a skeleton for your docker-compose file.
version: '3.7'  # depends on your Docker Engine version
services:
  python_service:  # add a name of your choice
    build: build/  # the path to the image's Dockerfile
    volumes:
      - C:\Users\Bob\Documents\test:/usr/python
    working_dir: /usr/python
In the snippet above:
the -v flag is replaced by the volumes section
the -w flag is replaced by the working_dir section
How to use:
Now that your docker-compose file is ready, you no longer need to remember or repeat the docker run command. Simply execute docker-compose up in the directory where your compose file is located and you will have your container up and running.
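For example, a minimal usage sketch (assuming the compose file above is saved as docker-compose.yml in your project folder, and that a shell such as bash is available in the bob/python image, as the prompt in the question suggests):
cd C:\Users\Bob\Documents\test
docker-compose up                        # build (if needed) and start the service
docker-compose run python_service bash   # or start a one-off interactive shell in the service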
Note that this is a simple example of how to use docker-compose. It is a powerful tool that lets you start containers from multiple images, create networks, and much more. I would recommend reading the official documentation for additional information.
I am running an application with a Dockerfile that I made.
At first I run my image with this command:
docker run -it -p 8501:8501 99aa9d3b7cc1
Everything works fine, but I was expecting to see a file in a specific folder of my app directory, which is the expected behaviour. But when running with Docker, it seems the application cannot write to my host directory.
Then I tried to mount a volume with this command:
docker 99aa9d3b7cc1:/output .
I got this error: docker: invalid reference format.
Which is the right way to persist the data that the application generates?
Use docker bind mounts.
e.g.
-v "$(pwd)"/volume:/output
The files created in /output in the container will be accessible in the volume folder relative to where the docker command has been run.
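For example, combined with the original run command from the question (a sketch; the image ID and port mapping come from the question, and volume is just an example folder name on the host):
docker run -it -p 8501:8501 -v "$(pwd)"/volume:/output 99aa9d3b7cc1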
I'm building a new image and copying contents from the host OS folder D:\Programs\scrapy into it like so: docker build . -t scrapy
Dockerfile
FROM mcr.microsoft.com/windows/servercore:ltsc2019
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
RUN mkdir root
RUN cd root
WORKDIR /root
RUN mkdir scrapy
COPY scrapy to /root/scrapy
Now when I add new contents to the host OS folder "D:\Programs\scrapy", I want to also add them to the image folder "root/scrapy", but I DON'T want to build a completely new image (it takes quite a while).
So how can I keep the existing image and just overwrite the contents of the image folder "root/scrapy"?
Also: I don't want to copy the new contents EACH time I run the container (so NOT at run-time); I just want a SEPARATE command to add more files to an existing image, and then run a new container based on that image at another time.
I checked here: How to update source code without rebuilding image (but not sure if OP tries to do the same as me)
UPDATE 1
Checking What is the purpose of VOLUME in Dockerfile and docker --volume format for Windows
I tried the commands below, all resulting in this error:
docker: Error response from daemon: invalid volume specification: ''. See 'docker run --help'.
where <path used> is, for example, D:/Programs/scrapy:/root/scrapy
docker run -v //D/Programs/scrapy:/root/scrapy scrapy
docker run -v scrapy:/root/scrapy scrapy
docker run -it -v //D/Programs/scrapy:/root/scrapy scrapy
docker run -it -v scrapy:/root/scrapy scrapy
UPDATE WITH cp command based on @Makariy's feedback
docker images -a gives:
REPOSITORY TAG IMAGE ID CREATED SIZE
scrapy latest e35e03c8cbbd 29 hours ago 5.71GB
<none> <none> 2089ad178feb 29 hours ago 5.71GB
<none> <none> 6162a0bec2fc 29 hours ago 5.7GB
<none> <none> 116a0c593544 29 hours ago 5.7GB
mcr.microsoft.com/windows/servercore ltsc2019 d1724c2d9a84 5 weeks ago 5.7GB
I run docker run -it scrapy and then docker container ls which gives:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1fcda458a14c scrapy "c:\\windows\\system32…" About a minute ago Up About a minute thirsty_bassi
If I run docker cp D:\Programs\scrapy scrapy:/root/scrapy I get:
Error: No such container:path: scrapy:\root
So in a separate PowerShell instance I then run docker cp D:\Programs\scrapy thirsty_bassi:/root/scrapy, which shows no output in PowerShell whatsoever, so I think it should have done something.
But then, in my container instance, when I go to the /root/scrapy folder I only see the files that were already added when the image was built, not the new ones I wanted to add.
Also, I think I'm adding files to the container here, but is there no way to add them to the image instead, without rebuilding the whole image?
UPDATE 2
My folder structure:
D:\Programs
Dockerfile
\image_addons
Dockerfile
\scrapy
PS D:\Programs>docker build . -t scrapybase
Successfully built 95676d084e28
Successfully tagged scrapybase:latest
PS D:\Programs\image_addons> docker build -t scrapy .
Step 2/2 : COPY scrapy to /root/scrapy
COPY failed: file not found in build context or excluded by .dockerignore: stat to: file does not exist
Dockerfile A
FROM mcr.microsoft.com/windows/servercore:ltsc2019
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
WORKDIR /root/scrapy
Dockerfile B
FROM scrapybase
COPY scrapy to /root/scrapy
You can also use docker cp to manually copy files from your host to a running container:
docker cp ./path/to/file containername:/another/path
Docs
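Applied to the container from the question, a sketch could look like this (thirsty_bassi is the auto-generated container name shown above; the trailing \. tells docker cp to copy the folder's contents rather than nesting a second scrapy folder inside /root/scrapy, and on some setups forward slashes, D:/Programs/scrapy/., may be needed):
docker cp D:\Programs\scrapy\. thirsty_bassi:/root/scrapy
Note that docker cp only changes that container's filesystem, not the image; to bake the new files into an image you would still need a build step (as in the answer below) or docker commit.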
answer if you want it quick and dirty
docker run -it -v c:/programs/test:/root/test ubuntu:latest cat /root/test/myTestFile.txt
to update one file quickly:
If you don't have to build your code (I don't know what language you are using), you can build a base image with the initial code, and when you want to change only one file, add a small layer on top (again, I'm assuming you don't need to recompile your project for that; if you do, this approach is not possible due to the nature of compiled languages):
FROM previous-version-image:latest
COPY myfile dest/to/file
Then, because your CMD and ENTRYPOINT are inherited from the previous image, there is no need to declare them again. (If you don't remember them, use docker history <docker-image-name> to view the virtual Dockerfile of the image up to this stage.)
Note, though, that you should not use this method repeatedly, or you'll end up with a very big image with many useless layers. Use it only for quick testing and debugging.
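For instance, a quick way to inspect the layers and recover the effective instructions of an existing image (a sketch; the image name is a placeholder):
docker history --no-trunc previous-version-image:latest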
explanation
Usually people use this for frontend development in Docker containers, but the basic idea is the same: you create a basic working image with the dependencies installed and the directory layout set up, with the last Dockerfile command being the development server's start command.
example:
Dockerfile:
# pull the base image
FROM node:slim
# set the working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# copy dependencies files
COPY package.json ./
COPY package-lock.json ./
# install app dependencies
RUN npm install
# add app
COPY . ./
# start development server
CMD ["npm", "start"]
startup command:
docker run -it --rm \
-v ${PWD}:/app \ <mounts the current working directory on the host to /app in the container>
-v /app/node_modules \ <or another dependency directory, if one exists>
-p 80:3000 \ <ports, if they need exposing>
ps-container:dev
I'm not sure this use case will work 100% for you, because it requires the code to be bind-mounted all the time, and when it needs to be exported it has to be exported as the image plus the source-code directory. On the other hand, it lets you make quick changes without waiting for the image to be rebuilt each time you add something new, and at the end you build the final image that contains everything that's needed.
a more relatable example for the code provided in the question:
Say there is a file on the host machine that contains some text.
The command that uses a bind mount to get access to the file:
docker run -it -v c:/programs/test:/root/test ubuntu:latest cat /root/test/myTestFile.txt
Hope you find something that works for you from what I've provided here.
Thanks to this tutorial and this example for the starting examples and information.
EDIT:
Let's say your original Dockerfile looks like this:
FROM python:latest
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD python /app/app.py
This will build your initial image; on top of it we'll add layers and change the Python files.
The next Dockerfile we'd use (let's call it Dockerfile.fix) would copy the file we want to change instead of the ones already in the image:
FROM previous-image-name
COPY app.py .
Now, after building with this Dockerfile, the final image's effective Dockerfile would look (sort of) like this:
FROM python:latest
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD python /app/app.py
FROM previous-image-name
COPY app.py .
And each time we want to change the file, we'll use the second Dockerfile.
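A build command for that second Dockerfile might look like this (a sketch; -f selects the alternate Dockerfile, and the tag name is just an example):
docker build -f Dockerfile.fix -t my-app:fixed .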
There's no way to change a Docker image without (at least partially) rebuilding it. But you don't have to rebuild all of it; you can rebuild just the layer that copies your scrapy content.
You can optimize your build by having two images:
The first image is your static image that you don't want to rebuild each time. Let's call it scrapy-base.
The second and final image is based on the first image, scrapy-base, and exists only to copy your dynamic scrapy content.
scrapy-base's Dockerfile:
FROM mcr.microsoft.com/windows/servercore:ltsc2019
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
RUN mkdir root
RUN cd root
WORKDIR /root
RUN mkdir scrapy
And build it like:
docker build -t scrapy-base .
This command only needs to be run once. You won't have to rebuild this image if you only change the content of the local scrapy folder (as you can see, this build does not use it at all).
scrapy's Dockerfile:
FROM scrapy-base
COPY scrapy /root/scrapy
With build command:
docker build -t scrapy .
This second build command will re-use the previous static image and only copy content, without having to rebuild the entire image. Even with lots of files it should be pretty quick. You don't need to have a running container.
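Put together, the day-to-day workflow could look roughly like this (a sketch, assuming each build is run in the folder containing the corresponding Dockerfile):
docker build -t scrapy-base .   # run once, for the static base image
# ...add or change files in the local scrapy folder...
docker build -t scrapy .        # fast: only the COPY layer is rebuilt
docker run -it scrapy           # start a container from the refreshed image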
For your scenario:
docker run -v D:/test:/root/test your-image
A lot of valuable details are available in this thread.
I was following this post; the reference code is on GitHub. I have cloned the repository locally.
The project has a React app inside it. I'm trying to run it locally, following step 7 of the same post:
docker run -p 8080:80 shakyshane/cra-docker
This returns:
Unable to find image 'shakyshane/cra-docker:latest' locally
docker: Error response from daemon: pull access denied for shakyshane/cra-docker, repository does not exist or may require 'docker login'.
See 'docker run --help'.
I tried logging in to Docker again, but it looks like, since it belongs to @shakyShane, I cannot access it.
I idiotically tried npm start too, but it's not a simple React app running on Node; it's in the container, and containers are not controlled by npm.
Looks like docker pull shakyshane/cra-docker:latest throws this:
Error response from daemon: pull access denied for shakyshane/cra-docker, repository does not exist or may require 'docker login'
So the question is: how do I run this Docker image on my local Mac machine?
Well, this is illogical, but I'm still sharing it so future people like me don't get stuck.
The problem was that I was trying to run a Docker image which didn't exist.
I needed to build the image:
docker build . -t xameeramir/cra-docker
And then run it:
docker run -p 8080:80 xameeramir/cra-docker
In my case, my image had a TAG specified with it and I was not using it.
REPOSITORY TAG IMAGE ID CREATED SIZE
testimage testtag 189b7354c60a 13 hours ago 88.3MB
I got Unable to find image 'testimage:latest' locally for this command: docker run testimage
So specifying the tag like this - docker run testimage:testtag - worked for me.
Posting my solution since none of the above worked.
I am working on a MacBook M1 Pro.
The issue I had is that the image was built as arm64, and I was running the command:
docker run --platform=linux/amd64 ...
So I had to build the image for the amd64 platform in order to run it.
Command below:
docker buildx build --platform=linux/amd64 ...
In conclusion, your Docker image's platform and your docker run platform need to be the same, from what I experienced.
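For example (a sketch; the image name is a placeholder, the port mapping mirrors the question, and --load is needed so the buildx result is available to docker run on the local daemon):
docker buildx build --platform=linux/amd64 -t myapp:amd64 --load .
docker run --platform=linux/amd64 -p 8080:80 myapp:amd64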
In my case, the Docker image did exist on the system and I still couldn't run the container locally, so I used the exact image ID instead of the image name and tag, like this:
docker run --name myContainer c29150c8588e
I received this error message when I typed the image name wrong, that is, "name1\name2" instead of "name1/name2" (wrong slash).
In my case, I saw this error when I was logged in to Docker Hub in Docker Desktop. The repo I was pulling was local to my enterprise. Once I logged out of Docker Hub, the pull worked.
This just happened to me because my local Docker VM on macOS ran out of disk space.
I just deleted some old images using docker image prune and it started working correctly again.
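To check and reclaim space, the commands involved look roughly like this:
docker system df      # show how much space images, containers and volumes are using
docker image prune    # remove dangling images (add -a to remove all unused images)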
shakyshane/cra-docker does not exist in that user's repo: https://hub.docker.com/u/shakyshane/
The problem is that you are trying to run an image that does not exist. If you are building from a Dockerfile, the image is not created until the Dockerfile build passes with no errors, so when you try to run the image it can't be found. Make sure there are no errors in the execution of your scripts.
The simplest answer can be the correct one! Make sure you have permission to execute the command; use:
sudo docker run -p 8080:80 shakyshane/cra-docker
In my case, I didn't realise there was a difference between docker run and docker start, and I kept using the run command when I should have been using the start command.
FYI: run creates and starts a new container from an image, while start just starts an existing, stopped container.
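For example, using the locally built image from the accepted answer (a sketch; the container name cra is arbitrary):
docker run -d --name cra -p 8080:80 xameeramir/cra-docker   # run: creates and starts a new container
docker stop cra                                             # stop it
docker start cra                                            # start: restarts the existing, stopped container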
Use -d
sudo docker run -d -p 8000:8000 rasa/duckling
learn about -d here
sudo docker run --help
At first, I built the image on a Mac M1 Pro with this command: docker build -t hello_k8s_world:0.0.1 . When I ran this image, the issue appeared.
After reading Master Yi's answer, I realized the crux of the matter and rebuilt my image like this: docker build --platform=arm64 -t hello_k8s_world:0.0.1 .
Finally, it worked.
I'm a beginner in working with Docker, especially Docker Compose. While creating my first simple Docker environment, I ran into my first error and I have no clue why.
I tried to search for a solution on Stack Overflow but found nothing that could help me.
Starting my setup with "docker-compose up", I get the following error:
$ docker-compose up
Removing errorinstance_app_1
Recreating 8a358dfcb306_8a358dfcb306_8a358dfcb306_errorinstance_app_1 ...
Recreating 8a358dfcb306_8a358dfcb306_8a358dfcb306_errorinstance_app_1 ... error
ERROR: for 8a358dfcb306_8a358dfcb306_8a358dfcb306_errorinstance_app_1 Cannot start service app: oci runtime error: container_linux.go:265: starting container process caused "exec: \"./run.sh\": stat ./run.sh: no such file or directory"
ERROR: for app Cannot start service app: oci runtime error: container_linux.go:265: starting container process caused "exec: \"./run.sh\": stat ./run.sh: no such file or directory"
ERROR: Encountered errors while bringing up the project.
So, here is my folder structure:
Project
  docker-compose.yml
  Docker
    Java
      Dockerfile
  src
    run.sh
Here is my docker-compose.yml:
version: '2'
services:
  app:
    build:
      dockerfile: ./Docker/Java/Dockerfile
      context: .
    volumes:
      - ./src:/usr/local/etc/
    working_dir: /usr/local/etc/
    command: ./run.sh
And here is my Dockerfile:
FROM java:7-jdk-alpine
# WORKDIR /usr/local/etc
And run.sh:
echo "Hello world."
Yes, I know that I could do this with a docker-compose file alone, but in the future I need to extend the Dockerfile.
Can someone help me, or does anyone see the issue?
The problem is with the base Docker image you are using in the Dockerfile:
FROM java:7-jdk-alpine
You are trying to start the container by running the run.sh bash script, but the above image doesn't include bash itself.
For reference, you can see the documentation of the above image on its Docker Hub page here. Quoting the relevant portion:
java:alpine
...
To minimize image size, it's uncommon for additional related tools
(such as git or bash) to be included in Alpine-based images. Using
this image as a base, add the things you need in your own Dockerfile
(see the alpine image description for examples of how to install
packages if you are unfamiliar).
That's the problem.
Now, I can think of 2 solutions:
Just use java:7-jdk as the base image instead of java:7-jdk-alpine
Install bash on top of the base image java:7-jdk-alpine by changing the Dockerfile to:
FROM java:7-jdk-alpine
RUN apk update && apk upgrade && apk add bash
#WORKDIR /usr/local/etc
*The source of the steps to install bash in Alpine Linux is here.
It looks like docker compose can't find your run.sh file. This file needs to be included in your docker image.
Change your Dockerfile to the following, then rebuild the image with docker build -t <YOUR_IMAGE_NAME> ..
FROM java:7-jdk-alpine
ADD run.sh /usr/local/etc/run.sh
Once your image is rebuilt, run docker-compose up again.
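Alternatively, since the compose file already defines a build section, you can let Compose rebuild the service for you (a sketch; app is the service name from the compose file in the question):
docker-compose build app
docker-compose up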
The easiest way to tackle the problem is to execute a bash session in the container. Then, inside the container, check whether the file exists at the indicated path; if it is not there, it must be included when you create the image in the Dockerfile, or through a volume in the docker-compose file.
Another thing to check is the relative path you are using. It will become clear once you check for the existence of the file inside the Docker container:
docker exec -it CONTAINER_NAME bash
I recommend you create a volume in the docker-compose file, as it is the easiest way, and also the best way.
There is also a question I want to ask you: why are you putting the Dockerfile inside a Java path?
It is not a good idea or a guideline to follow.
The correct way is to put your Dockerfile in an environment folder, so that the Dockerfile is not tied to the Java source of your application.
I got this error quite a lot, and after a lot of investigation it looked like some images were corrupted.
Deleting those and rebuilding solved the problem. It was not the Docker installation or configuration itself.
I am writing a Dockerfile to create an image in which I want to build a package. I need to pull this package inside the Docker image, and I need to do a git clone for that. I saw the discussion in this post:
Using SSH keys inside docker container
Based on that, here is the content in my Dockerfile :
ENV SSH_HOME /Users/myid
ADD $SSH_HOME/.ssh/id_rsa /root/.ssh/id_rsa
RUN echo " IdentityFile ~/.ssh/id_rsa" >> /etc/ssh/ssh_config
When I run docker build, I get an error due to the relative path. If I provide an absolute path, it says the location is outside of the build context. Any idea how to fix this? I am running on Mac OS X 10.
With the script above, I am getting the following error:
ADD failed: stat /var/lib/docker/tmp/docker-builder6164655/Users/myid/.ssh/id_rsa: no such file or directory
It looks like it might be related to this bug here.
The broken example is very similar to your path problem:
# Dockerfile
FROM ubuntu:14.04
COPY start.sh /start.sh
# comes back with:
stat /var/lib/docker/aufs/mnt/e89417ccaafbc91c3f930b56819427f83b3f2d3b3a246fbd6b48c9abcc7233f6/start.sh: no such file or directory
I'd try:
updating to the most recent Docker
restarting your Docker engine
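If updating and restarting don't help, a common workaround (a sketch, not part of the original answer) is to place the key inside the build context, since ADD/COPY sources are resolved relative to the directory you pass to docker build:
cp ~/.ssh/id_rsa .    # run on the host, inside the build-context directory
docker build .        # then reference it in the Dockerfile with a relative path, e.g. ADD id_rsa /root/.ssh/id_rsa
Keep in mind that this bakes the private key into an image layer, so it is only suitable for private or throwaway images.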