My objective is to have a self contained Go Workspace per project.
Is it possible to retrieve a Go workspace and Go environment variables from a running Docker container to be used by an IDE or Text Editor for development?
I have already tried mapping a volume into the container with the Go tools and dependencies. But that requires always setting GOPATH to the current workspace, and it requires having the Go tools and dependencies on the host.
You can at least set and pass those environment variables when launching your container:
docker run -e "GOPATH=/a/mounted/path" -v [host-src:]container-dest --rm -it <yourImage>
By using -v, you allow your host to share a folder with your container.
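For example, a per-project Go workspace could look something like this (the golang:1.8 tag and the myproject path are illustrative assumptions):
# Everything Go-related lives in the container; the host only holds the sources.
docker run --rm -it \
  -e "GOPATH=/go" \
  -v "$PWD":/go/src/myproject \
  -w /go/src/myproject \
  golang:1.8 bash
The official golang image already sets GOPATH=/go, so the -e flag here mostly serves as explicit documentation.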
Related
I've reviewed the documentation here:
https://docs.docker.com/docker-for-mac/install/#install-and-run-docker-for-mac
It doesn't say anything about boot2docker, although some other questions along these lines talk about this:
Mount volume to Docker image on OSX
So the question is: the Docker for Mac application provides File Sharing via Preferences -> File Sharing; how does one make use of these shared folders from the container (for example, after opening a shell in the running container)? When I say how, I don't mean "what are the use-cases", I mean "please show me an example of how to access a shared folder from the command line of the running container".
Ideally I'm trying to create a scenario similar to Vagrant's synced folders, whereby I can edit files in my host environment, independently of the Docker image, and have them updated automatically in the container on save.
UPDATE:
To be clear, the reason for asking this question is that I couldn't get docker's -v option to work. E.g.
docker run -v /Users/geoidesic/Documents/projects/arc/mysite/djangocms_demo:/home/djangocms/djangocms/djangocms_demo -d -p 8001:8000 --name test_shared_volumes bluszcz/djangocms
With the above command the container immediately stops, so if I run docker ps the list of running containers is empty.
However, if I run the container without the -v command, then it stays running as expected:
docker run -d -p 8001:8000 --name test_shared_volumes bluszcz/djangocms
Updated:
Well, if you want to share a file or directory between host and container, you're going to use Docker's bind mount.
For example, if I want to share my host's /etc/resolv.conf to my container, I do the following:
docker run -v /etc/resolv.conf:/etc/resolv.conf <IMAGE>
The -v ... part tells Docker to mount the host's /etc/resolv.conf into the container. And whenever I edit this file, the changes are immediately visible in the container.
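A quick way to see that in action (the nameserver value is only an illustration):
# On the host, append a line without replacing the file:
echo "nameserver 1.1.1.1" | sudo tee -a /etc/resolv.conf
# Inside the running container, the new line shows up right away:
cat /etc/resolv.conf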
In Linux, you can use this way to share almost any of your host files to containers. Unfortunately, this is not the case for Mac. As I mentioned in my old answer, by default you can only share /Users/, /Volumes/, /private/, and /tmp directly.
On my Mac, say I want to share the /data directory with a container. I run the command below:
docker run -it --rm -v /data:/data busybox sh
Then it pops up an unhappy error:
docker: Error response from daemon: Mounts denied:
The path /data
is not shared from OS X and is not known to Docker.
You can configure shared paths from Docker -> Preferences... -> File Sharing.
See https://docs.docker.com/docker-for-mac/osxfs/#namespaces for more info.
So you see, this is where File Sharing comes up.
So here are my answers to your questions:
File Sharing does not give you a ready-to-use sync like the one you experienced in Vagrant; it only whitelists the host paths that may be shared;
To share a file or folder between host and container, use Docker's bind mount.
Hope that helps.
Old answer:
File Sharing is used by Docker's bind-mount feature. By default, you can bind-mount files in /Users/, /Volumes/, /private/, and /tmp directly. For other paths, you need to add them to Preferences -> File Sharing first.
Use cases for bind mounts:
Persisting data generated by the running container, so that you can back up or migrate the data.
Sharing data among multiple running containers.
Sharing host configuration files with containers.
Sharing source code between host and containers, to make debugging easier.
Note: For cases #1 and #2, consider using volumes instead of bind mounts; see the sketch below.
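As a sketch of the volume alternative for cases #1 and #2 (the volume name app-data and the image name myimage are hypothetical):
# Create a named volume that Docker manages:
docker volume create app-data
# Mount it into a running container:
docker run -d --name writer -v app-data:/var/lib/app myimage
# Back up the volume's contents through a throwaway container:
docker run --rm -v app-data:/var/lib/app busybox tar -C /var/lib/app -c . > app-data-backup.tar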
My scenario is as follows:
I need to add the "project" folder to the Docker container for the production build, but for the development build I want to mount a local volume onto the container's project folder instead.
E.g. ADD project /var/www/html/project in production,
and nothing in development (I can copy a dummy folder instead).
If I copy the whole project folder into the container in development, then any change in the project folder invalidates the Docker layer cache for every layer after the ADD command, and it takes time to rebuild the image.
I want to use the same Dockerfile for both environments.
To achieve that I used ADD $PROJECT_DIR /var/www/html/project in the Dockerfile, where $PROJECT_DIR is an environment variable.
Setting the environment variable in the Dockerfile, like ENV PROJECT_DIR project or ENV PROJECT_DIR dummy-folder, adds the correct folder to the container, but it requires changing the Dockerfile each time.
I can also pass "build-arg" parameters when building the Docker image, like
docker build -t myproject --build-arg "BUILD_TYPE=PROD" --build-arg "PROJECT_DIR=project" .
As the variables BUILD_TYPE and PROJECT_DIR are related, I want to set the PROJECT_DIR variable based on BUILD_TYPE. This will prevent me from forgetting to change one of the two parameters.
To set the PROJECT_DIR variable I wrote the following script, set_project_folder.sh:
if [ "$BUILD_TYPE" = "PROD" ]; then
  PROJECT_DIR="project";
else
  PROJECT_DIR="dummy-folder";
fi
I then run the script in the Dockerfile using
RUN . /root/set_project_folder.sh
Doing this, the set_project_folder.sh script can access the BUILD_TYPE variable, but the PROJECT_DIR it sets is not reflected back in the Dockerfile.
When running set_project_folder.sh in my local machine's terminal, the PROJECT_DIR variable does change, but the same does not work from the Dockerfile.
Is there any way to change an environment variable from a script such as set_project_folder.sh above, so that the change is visible to later Dockerfile instructions?
If it is possible, it could be used in many use cases to make Docker builds dynamic.
Am I doing anything wrong here?
OR
Is there another good way to achieve this?
You can use something like the Dockerfile below:
FROM alpine
ARG BUILD_TYPE=prod
ARG CONFIG_FILE_PATH=config-$BUILD_TYPE.yml
RUN echo "BUILD_TYPE=$BUILD_TYPE CONFIG_FILE_PATH=$CONFIG_FILE_PATH"
CMD echo "BUILD_TYPE=$BUILD_TYPE CONFIG_FILE_PATH=$CONFIG_FILE_PATH"
The build output would look like this:
Step 4/4 : RUN echo "BUILD_TYPE=$BUILD_TYPE CONFIG_FILE_PATH=$CONFIG_FILE_PATH"
---> Running in b5de774d9ebe
BUILD_TYPE=prod CONFIG_FILE_PATH=config-prod.yml
But if you run the image
$ docker run 9df23a126bb1
BUILD_TYPE= CONFIG_FILE_PATH=
This is because build args are not persisted as environment variables. If you want to persist these variables in the image as well, you need to add the line below:
ENV BUILD_TYPE=$BUILD_TYPE CONFIG_FILE_PATH=$CONFIG_FILE_PATH
And now docker run will also output
$ docker run c250a9d1d109
BUILD_TYPE=prod CONFIG_FILE_PATH=config-prod.yml
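Applied back to the ADD scenario from the original question, a minimal sketch could look like the following, assuming you are willing to name the source folders after the build type (project-prod and project-dev are hypothetical names):
FROM alpine
ARG BUILD_TYPE=prod
# An ARG default may reference an earlier ARG, as shown above;
# this assumes folders named project-prod and project-dev exist:
ARG PROJECT_DIR=project-$BUILD_TYPE
ADD $PROJECT_DIR /var/www/html/project
# Persist the values so the running container sees them too:
ENV BUILD_TYPE=$BUILD_TYPE PROJECT_DIR=$PROJECT_DIR
A development build then needs only one flag: docker build -t myproject --build-arg BUILD_TYPE=dev .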
We would like to leverage the excellent catalogue of Dockerfiles on Docker Hub, but the team is not in a position to use Docker.
Is there any way to run a Dockerfile as if it were a shell script against a machine?
For example, if I chose to run the Docker image ruby:2.4.1-jessie against a server running only Debian Jessie, I'd expect it to ignore the FROM directive but be able to set the environment from the ENV instructions and run the RUN commands from this Dockerfile: Github docker-library/ruby:2.4.1-jessie
A Dockerfile is meant to be executed in an empty container, or on top of the image it builds on (named by FROM). Knowledge about that environment (specifically the file system and all the installed software) is important, and running something similar outside of Docker can have side effects, because files end up in places where no files are expected.
I wouldn't recommend it.
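If you want to experiment anyway, a very rough sketch is to replay just the single-line RUN commands (everything else, including multi-line RUN instructions with backslash continuations, ENV, WORKDIR, and COPY, is ignored, so expect breakage):
# Extract single-line RUN commands from a Dockerfile and execute them:
grep '^RUN ' Dockerfile | sed 's/^RUN //' | sh -x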
For the sake of simplicity, let's use the ubuntu image as an example.
I often find it easier to use docker-compose, particularly if there's a high chance I'll want to both mount-volumes and link the container to another container at some point in the future.
Create a folder for working in, say "ubuntu".
In the "ubuntu" folder, create another folder called "files"
Back in the "ubuntu" folder, create a file called "docker-compose.yml". In this file, enter:
ubuntucontainer:
  image: "ubuntu:latest"
  ports:
    - "80:80"
  volumes:
    - ./files:/files
Whenever you need to start the box, navigate to "ubuntu" and type docker-compose up. To stop again, use docker-compose stop.
The advantage of using docker-compose is that if you ever want to link up a database container, this can be done easily by adding another container to the YAML file and then adding a links section to the ubuntucontainer container, as sketched below.
Not to mention, docker-compose up is quite minimal on the typing.
(Also, forwarding the ports with 80:80 may not be strictly necessary; it depends on what you want the box to do.)
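For example, linking up a database could look something like this (the postgres image and the service name database are just illustrative):
ubuntucontainer:
  image: "ubuntu:latest"
  ports:
    - "80:80"
  volumes:
    - ./files:/files
  links:
    - database
database:
  image: "postgres:latest"
After docker-compose up, the ubuntucontainer container can then reach the database under the hostname database.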
TL;DR version:
Open Docker Quickstart Terminal. If it is already open, run $ cd ~
Run this once: $ docker run -it -v /$(pwd)/ubuntu:/windows --name ubu ubuntu
To start every time: $ docker start -i ubu
You will get an empty folder named ubuntu in your Windows user directory. Inside the container, you will see this folder under the name windows.
Explanation:
cd ~ makes sure you are in your Windows user directory.
-it stands for interactive, so you can interact with the container in the terminal environment. -v host_folder:container_folder enables sharing a folder between the host and the container. The host folder should be inside the Windows user folder. /$(pwd) translates to //c/Users/YOUR_USER_DIR in Windows 10. --name ubu assigns the name ubu to the container.
In the docker start command, -i likewise stands for interactive.
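A quick way to confirm that the share works (the file name is arbitrary):
# Inside the ubu container:
echo hello > /windows/test.txt
The file then appears on the host at C:\Users\YOUR_USER_DIR\ubuntu\test.txt, and edits made on either side are visible on the other.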
I'm new to Jenkins and I have been searching around but I couldn't find what I was looking for.
I'd like to know how to run docker command in Jenkins (Build - Execute Shell):
Example: docker run hello-world
I have set Docker Installation to "Install latest from docker.io" in Jenkins' Configure System, and I have also installed several Docker plugins. However, it still doesn't work.
Can anyone help me point out what else should I check or set?
One of the following plugins should work fine:
CloudBees Docker Custom Build Environment Plugin
CloudBees Docker Pipeline Plugin
I normally run my builds on slave nodes that have docker pre-installed.
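With one of those plugins installed, or Docker preinstalled on the node, the Execute Shell step from the question reduces to something like this minimal sanity check:
# Jenkins "Build - Execute Shell" step:
docker --version
docker run --rm hello-world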
I came across another, generic solution. Since I'm no expert at creating a Jenkins plugin out of this, here are the manual steps:
Create or change your Jenkins container (I use Portainer) so that it has the environment variable DOCKER_HOST=tcp://192.168.1.50 (when using the unix socket protocol you also have to mount your Docker socket), and append :/var/jenkins_home/bin to the existing PATH variable.
On your Docker host, copy the docker binary into the Jenkins container: docker cp /usr/bin/docker jenkins:/var/jenkins_home/bin/
Restart the Jenkins container
Now you can use the docker command from any script or from the command line. The changes will persist across image updates.
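As a sketch, step 1 could look like this with plain docker run instead of Portainer (the daemon port 2375, the published ports, and the PATH value are assumptions; match them to your setup):
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -e DOCKER_HOST=tcp://192.168.1.50:2375 \
  -e PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/jenkins_home/bin \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts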