I'm trying to write a Dockerfile for use as a standardized development environment.
Dockerfile:
FROM node:9.5.0
WORKDIR /home/project
# install some stuff...
# clone the repo
RUN git clone https://github.com/... .
# expose
VOLUME /home/project
# fetch latest changes from git, install them and fire up a shell.
CMD git pull && npm install && /bin/bash
After building the image (say with tag dev), running docker run -it dev gives me a shell with the project installed and up-to-date.
Of course, changes I make in the container are ephemeral, so I want to back /home/project with a directory on the host OS. I go to an empty directory project on the host and run:
docker run -it -v ${pwd}:/home/project dev
But my project folder gets overwritten and is empty inside the container.
How can I mount the volume so that the container writes to the host folder and not the other way around?
OS: Windows 10 Pro.
When you mount a volume, it is bi-directional by default. That means anything you write in the container will show up in the host directory and vice versa.
But something weird is happening in your case, right?
In your build process, you are cloning a git repository. During the build process, the volume does not get mounted. The git data resides in your docker image.
Now, when you are running the docker container, you are mounting the volume. The mount path in your container will be synced with your source path. That means the container directory will be overwritten with the contents of the host directory. That's why your git data has been lost.
Possible Solution:
Run a script as CMD. That script can clone the git repo, among other things.
run.sh
#!/bin/bash
# clone the repo on the first run, just pull afterwards
if [ -d .git ]; then git pull; else git clone https://github.com/... .; fi
Dockerfile
ADD run.sh /bin/run.sh
RUN chmod +x /bin/run.sh
# run run.sh, install dependencies and fire up a shell.
CMD run.sh && npm install && /bin/bash
Related
The following command works and mounts the local volume:
sudo docker run -ti -v "$PWD/codebase/realsmart-saml-copy":/var/www/html realsmart-docker_smartlogin bash
The following command does not work and does not mount the volume
docker run -ti -v "$PWD/codebase/realsmart-saml-copy":/var/www/html realsmart-docker_smartlogin bash
For some reason, Docker is only able to mount volumes when using the sudo command, rendering our local Docker environment useless on a colleague's laptop. The same docker-compose file works on my laptop (also a Mac, same OS).
Any idea what the issue might be with his laptop configuration, or indeed with the Docker setup?
(The code extract is to make clear the problem with mounting volumes, the same issue presents itself using a compose.yml file.)
No error messages are displayed, but the result is not as expected: the volume simply does not mount without sudo.
Check whether the user is part of the docker group.
It would make sense that sudo works but the local user's own command does not if that local user is not part of the docker group.
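A quick way to check that on the colleague's machine (sketch):
# prints the user's groups; if "docker" is missing, only sudo will be able to talk to the daemon
id -nG "$USER"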
The solution for anyone interested.
After upgrading to Docker Desktop, Boot2Docker has been replaced.
Steps to fix the issue:
docker-machine rm machine-name
unset DOCKER_TLS_VERIFY
unset DOCKER_CERT_PATH
unset DOCKER_MACHINE_NAME
unset DOCKER_HOST
restart Docker Desktop
cd path/to/docker-project
docker-compose build
docker-compose up (or docker run)
project now available on localhost
Further details: https://docs.docker.com/docker-for-mac/docker-toolbox/
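After the steps above, it is worth confirming that no stale Docker Machine variables are still set in the shell; any output from this check (sketch) means the old settings are still active:
env | grep -E 'DOCKER_(TLS_VERIFY|CERT_PATH|MACHINE_NAME|HOST)'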
Add your user to the docker group.
sudo usermod -aG docker $USER
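Group membership is only picked up by new login sessions, so either log out and back in, or start a shell with the new group applied (sketch):
newgrp docker
docker ps   # should now work without sudo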
I have Docker Toolbox installed on my local machine and I'm trying to run Ruby commands to perform database migrations. I am using the following docker commands within the Docker Toolbox Quickstart Terminal Command Line:
docker-compose run app /usr/local/bin/bundle exec rake db:migrate
docker-compose run app bundle exec rake db:create RAILS_ENV=production
docker-compose run app /usr/local/bin/bundle exec rake db:seed
However, after these commands are called, I get the following error:
Could not locate Gemfile or .bundle/ directory
In the Docker Toolbox terminal, I am in my project's directory (C:\project) when I run these commands.
After doing some research, it appears that I need to mount my Host directory somewhere inside my Home directory.
So I tried using the following Docker Mount commands:
docker run --mount /var/www/docker_example/config/containers/app.sh:/usr/local/bin
docker run --mount /var/www/docker_example/config/containers/app.sh:/c/project
Both of these commands are giving me the following error:
invalid argument "/var/www/docker_example/config/containers/app.sh:/usr/local/bin" for --mount: invalid field '/var/www/docker_example/config/containers/app.sh:/usr/local/bin' must be a key=value pair
See 'docker run --help'
Here is what I have in my docker-compose.yml file:
docker-compose.yml:
app:
  build: .
  command: /var/www/docker_example/config/containers/app.sh
  volumes:
    - C:\project:/var/www/docker_example
  expose:
    - "3000"
    - "9312"
  links:
    - db
  tty: true
Any help would be greatly appreciated!
The issue is that you are running on Windows. You need a shared folder between your Docker Machine VM and the host machine.
On my Mac, for example, /Users is shared as /Users inside the VM. That means when I do
docker run -v ~/test:/test ...
It will share /Users/tarun.lalwani/test inside the VM to /test inside the container. Since /Users inside the VM is shared with my host, this works perfectly. But if I do
docker run -v /test:/test ...
Then even if I have /test on my Mac, it won't be shared, because the host mount path is resolved by the Docker host (the VM), not by my Mac.
So in your case you should check which folder is shared and at what path it is shared. Assuming C:\ is shared at /c, you would use the command below to get your files inside the container:
docker run -v /c/Project:/var/www/html ...
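As a side note on the --mount error above: --mount expects key=value fields rather than the colon-separated -v syntax. And with Docker Toolbox, C:\Users is usually the only folder shared into the boot2docker VM by default, so paths under it are the safest bet. A sketch (the machine name "default", user name and project path are assumptions):
# check what the Toolbox VM actually sees ("default" is the usual machine name)
docker-machine ssh default ls /c/Users
# -v form, going through the shared path
docker run -v /c/Users/yourname/project:/var/www/docker_example ...
# equivalent --mount form with key=value fields
docker run --mount type=bind,source=/c/Users/yourname/project,target=/var/www/docker_example ...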
I have an Angular UI and a Node.js API. I am currently running Windows Server 2016 TP4 in Azure. Here are the steps I go through:
I am able to remote in, create images, create containers based off those images, and attach to those containers no problem.
I pulled a Node.js image from Docker Hub: docker pull microsoft/node, and then created a container from that image: docker run --name 'my_api_name' -it microsoft/node cmd
That command takes me into the container via a Windows command prompt. I type powershell, which takes me into a PowerShell shell, and I can run npm commands.
My question is, how do I install git onto this container? I want to reach out to the repository holding my app, pull it down and run it in this container. I will eventually push this container image up to the Docker registry so clients can pull it down and run it in their Windows environment.
You can do it like this in an admin shell:
iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))
cinst -y git
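Once Chocolatey finishes, a quick check that git ended up on the PATH (a fresh shell may be needed first):
git --version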
Ideally you wouldn't add git to the container and try to pull your repo into it (that will also get messy with credentials for private repos).
You should do your source control management on your host and then build the source code into a container. It's not yet there for the Windows Dockerfiles, but the Linux ones have ONBUILD. It should be possible to replicate that for Windows.
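As a rough sketch of that host-side workflow (repository URL and image name are placeholders), the clone happens outside the container and the Dockerfile only copies the already-checked-out source:
# on the host: check out the source first
git clone https://github.com/your-org/your-app.git
cd your-app
# the Dockerfile copies the checked-out source instead of cloning inside the image
docker build -t your-app .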
If you still want git inside the image, the Chocolatey install can also go into the Dockerfile:
RUN powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))"
RUN cinst -y git
Refer: Unable to install git and python packages inside Windows container
Solved this by downloading the "Portable" version of Git, copying those files into the container, and then running the post-installation script provided by Git.
Find appropriate download here: https://git-scm.com/download/win
Inside the Dockerfile:
# copy the portable Git files into the image
ADD Git64/ C:/Git/
WORKDIR C:/Git/
# run Git's post-install script to finish the setup
RUN %windir%\System32\cmd.exe "/K" C:\Git\post-install.bat
I'm trying to create a Dockerfile from scratch on Windows 7. However, I'm currently having trouble at the very first step. My Dockerfile is placed under C:\Users\Arturas\Docker\Jenkins. The VirtualBox shared folder path is c:\Users and the folder name is c/Users (the boot2docker defaults were not changed).
When I run (on git bash):
$ docker build --file Docker/Jenkins/ .
I get:
unable to process Dockerfile: read C:\Users\Arturas\Docker\Jenkins:
The handle is invalid.
Dockerfile content is just one line:
FROM jenkins
I just started learning Docker, so my experience is still very limited. However, from tools like boot2docker I expect basic commands to work out of the box, so I must be missing something.
Try instead:
cd /C/Users/Arturas/Docker/Jenkins
docker build -t myimage .
I assume here that you have a file named Dockerfile under the Jenkins folder.
The -f option of docker build is for referencing the Dockerfile (if it is named differently, for instance).
Don't forget to use the latest docker-machine (the 0.5.4 one: a self-extracting exe, docker-machine_windows-amd64.exe): it runs a VM based on boot2docker.iso through VirtualBox.
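A quick sanity check of that setup (assuming the machine created by docker-machine is named default):
docker-machine version
docker-machine ls
docker-machine env default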
Try to specify the "Dockerfile" name
$ docker build --file Docker/Jenkins/Dockerfile .
I have Docker Toolbox installed on my Mac, but I'm having issues adding a file to a container during build. I'm using the ADD command in the Dockerfile. I can't seem to add any local files. I understand that Docker Toolbox uses VirtualBox under the hood, but I am not sure how to get those files into the VM to build the container. Is there a way I can do it that allows me to keep a clean OS-agnostic Dockerfile without any absolute paths?
Here is my Dockerfile. It's built from the Node.js container with some additional dependencies.
FROM node:4.2.2
RUN apt-get update
RUN apt-get install -y libvips-dev libgsf-1-dev libkrb5-dev
RUN apt-get clean
ADD app/ /app
RUN cd /app && npm install --production
RUN npm install forever -g
Turns out this does work, but only for files under my current directory (the build context). My Dockerfile and the files I wanted to add were not in the same directory. Changing into the directory containing the files I wanted, and then manually specifying the Dockerfile path, worked.
docker build -f my/other/Dockerfile .
Since docker will use a VirtualBox VM on Mac (with boot2docker or with docker-machine), it will use VirtualBox Guest Additions, which is there for the express purpose of using VirtualBox folder sharing.
Make sure to be in such a shared path, typically in /Users/....
If app/ is in /Users/path/to/app, then ADD should work.
You can mount other paths with boot2docker, but it can be problematic with docker machine (see issue 13).
Of course, for ADD app ... to work, you need to be in the parent folder of app/.
From docker ADD:
The <src> path must be inside the context of the build; you cannot ADD ../something /something
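In other words, the build has to be started from a directory that contains everything the Dockerfile references. A minimal sketch (directory names are hypothetical):
# layout: ~/myproject/Dockerfile and ~/myproject/app/
cd ~/myproject        # somewhere under /Users, so the Toolbox VM can see it
docker build -t myapp .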