I am wondering if it is possible to RUN a remote file stored in an NFS share when building an image from a dockerfile.
Currently I am using the COPY command and then the RUN command to execute the files; however, many of the files I need to create the image are extremely large.
Is it possible to execute files stored in an NFS share directly in the dockerfile without having to copy them all over?
You can only RUN files that are inside your container - so they need to be copied into your container first.
What you can do is move the COPY instructions to the beginning of your Dockerfile so those layers are cached and the files don't need to be copied again every time you change a command later in the Dockerfile.
You can RUN curl ... to grab the remote file and then execute it, sure.
But this will only run at image build time, not during the lifecycle of the container.
You could also mount the NFS volume to your host, then COPY the files.
Otherwise, remote execution would be a pretty basic security flaw and shouldn't be possible under any circumstances.
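If the build host can reach the file over HTTP (or the NFS share is also exported that way), a minimal build-time fetch might look like the sketch below; the URL, base image and script name are placeholders:
FROM ubuntu:22.04
# Fetch and execute a remote installer during the build, then remove it
RUN apt-get update && apt-get install -y curl \
 && curl -fsSL https://fileserver.example/install.sh -o /tmp/install.sh \
 && sh /tmp/install.sh \
 && rm /tmp/install.sh
Alternatively, mounting the NFS export on the host and pointing the build context at it lets COPY work as usual, at the cost of sending those files to the daemon as part of the context.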
I needed to create a Docker image of a Spring Boot application, and I achieved that by creating a Dockerfile and building it into an image. Then I used "docker run" to bring up a container. This container is used for all the activities for which my application was written.
My problem, however, is that the JAR file I have used needs constant changes, and that requires me to rebuild the Docker image every time. Furthermore, I need to take the contents of the earlier running Docker container and transfer them into a container created from the newly built image.
I know this whole process can be written as a shell script and executed every time I have changes to my JAR file. But is there any tool I can use to somehow automate it in a simple manner?
Here is my Dockerfile:
FROM java:8
WORKDIR /app
ADD ./SuperApi ./SuperApi
ADD ./config ./config
ADD ./Resources ./Resources
EXPOSE 8000
CMD java -jar SuperApi/SomeName.jar --spring.config.location=SuperApi/application.properties
If you have a JAR file that you need to copy into an otherwise static Docker image, you can use a bind mount to save needing to rebuild repeatedly. This allows for directories to be shared from the host into the container.
Say your project directory (the build location where the JAR file is located) on the host machine is /home/vishwas/projects/my_project, and you need to have the contents placed at /opt/my_project inside the container. When starting the container from the command line, use the -v flag:
docker run -v /home/vishwas/projects/my_project:/opt/my_project [...]
Changes made to files under /home/vishwas/projects/my_project locally will be visible immediately inside the container1, so no need to rebuild (and probably no need to restart) the container.
If using docker-compose, this can be expressed using a volumes stanza under the services listing for that container:
volumes:
  - type: bind
    source: /home/vishwas/projects/my_project
    target: /opt/my_project
This works for development, but later on it's likely you'll want to bundle the JAR file into the image instead of sharing it from the host system (so it can be placed into production). When that time comes, rebuild the image with a COPY directive in the Dockerfile. Note that COPY sources are resolved relative to the build context rather than as absolute host paths, so run the build from /home/vishwas/projects:
COPY my_project /opt/my_project
1: Worth noting that it will default to read/write, so the container will also be able to modify your project files. To mount as read-only, use: docker run -v /home/vishwas/projects/my_project:/opt/my_project:ro
You are looking for Docker Compose.
You can build the image and start the container with a single command using Compose.
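A minimal docker-compose.yml sketch for the Dockerfile above might look like this (the service name and published port are illustrative assumptions):
services:
  superapi:
    build: .
    ports:
      - "8000:8000"
Running docker compose up --build then rebuilds the image and recreates the container in one step whenever the JAR changes.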
I am sorry for taking up your time.
I have a local docker setup and I want to copy files from my local host to my container.
But the thing is that I need a command that I can use WHILE I am inside the container.
To explain the situation further: I executed "docker exec -it CONTAINERNAME bash" to enter my container,
and now I am in /var/www/html
and I need to find a way to copy a file/folder from my local environment into that container.
Reason: I am currently writing a Dockerfile which automates the process of setting things up. I need that very specific command because a Dockerfile RUN command can only be executed while inside the container.
What I tried:
"docker cp" is a good command to use when I am outside the container but it doesn't work while in the container.
"DOCKERFILE COPY" might do the trick but I need a general shell command to double check if it really does what it is supposed to do. I must be able to reproduce the same process of my Dockerfile via manually executing the commands one by one.
Once again, I apologize for my inability to solve this problem by myself. My inexperience has caused me nothing but trouble.
Edit: I am using a Win10 64-bit OS with a dual-monitor setup and a left-handed mouse. My keyboard, albeit old, should possess all the necessary keys to replicate any essential keyboard shortcuts if required. All my drivers are installed and updated.
When you build an image, you need to put everything your container needs for normal operation into it. You shouldn't be copying files from the host after the image is built. You could use volumes as shared storage between the host and the container, but I don't think that is your case.
Since it's not totally clear what you're doing, I'd suggest preparing all the data you need, putting it inside the Docker build context, and then building the image. You may also find docker-compose useful, as it at least lets you define the context and the path to your Dockerfile separately if needed.
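For the manual double-check mentioned in the question: the closest shell equivalent of a Dockerfile COPY is docker cp, run from the host rather than from inside the container (the container name and paths below are placeholders):
# Run on the host, not inside the container
docker cp ./myfolder CONTAINERNAME:/var/www/html/myfolder
# The build-time equivalent inside the Dockerfile would be:
# COPY myfolder /var/www/html/myfolder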
I want to create a watcher that will automatically sync file changes from a local directory to a remote docker container. I need to find a way to transfer the files efficiently. I will also need it for a one time upload command which would transfer a complete folder from local directory to a remote docker container.
I figure one solution would be to scp to a tmp directory on the remote host, and then run docker cp via ssh to copy the files from the tmp directory. Is that a good solution? Is there anything better?
By the way, if anyone knows a file sync utility for that use case, please let me know. I tried to search, but it seems like it's not the most popular development workflow?
I would try using rsync for local-to-remote-host syncing. From there, volume-mount the directory into the Docker container.
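A rough sketch of that workflow (host name, paths and image name are placeholders; the watcher loop assumes inotify-tools is installed locally, with fswatch being a common alternative on macOS):
# One-time upload from the local machine to the remote host
rsync -az --delete ./my_project/ user@remote-host:/srv/my_project/
# On the remote host, bind-mount the synced directory into the container
docker run -d -v /srv/my_project:/app my-image
# Simple watcher: re-sync whenever something changes locally
while inotifywait -r -e modify,create,delete ./my_project; do
  rsync -az --delete ./my_project/ user@remote-host:/srv/my_project/
done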
Windows 10. I have in folder just:
app (directory with many files)
Dockerfile (the simplest possible Dockerfile)
I run "docker build ." and it just hangs.
If I remove "app" directory. Build runs ok.
In docker file just one line:
FROM node
Didn't find any issues like that. It feels like it's trying to scan the directory or something.
Any advice?
UPD: It seems that I should use .dockerignore https://docs.docker.com/engine/reference/builder/#/dockerignore-file
When you run docker build ... the Docker client sends the context (the recursive contents of the directory) via REST to the Docker daemon for building. If that context is large, this can take some time (depending on a variety of factors: whether your daemon is local or remote, the platform, etc.).
How long are you letting it hang before giving up? It could be that it's still just working. Or the context was so large that the client or daemon ran into an issue. Checking the client and daemon logs would help debug that.
And yes, a .dockerignore file (basically a .gitignore but for Docker context) is probably what you're looking for, unless you need the contents of the app directory during your build.
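For example, a .dockerignore placed next to the Dockerfile might look like this (the patterns are illustrative; drop the app/ entry if the build actually needs that directory):
# .dockerignore
app/
node_modules/
.git/
*.log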
Your Dockerfile should be placed in a directory that contains only its build context. For example, if you are building a Spring Boot app, you can put the Dockerfile right under /app, as shown in this official Docker sample.
Docker's documentation:
In most cases, it’s best to start with an empty directory as context and keep your Dockerfile in that directory. Add only the files needed for building the Dockerfile.
Warning: Do not use your root directory, /, as the PATH as it causes the build to transfer the entire contents of your hard drive to the Docker daemon.
I've seen simple Docker examples put the Dockerfile in the root directory, but for complicated examples like the one I posted above, the Dockerfile is put only in its relevant directory. You can dig through the dockersamples repository and find your case.
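As an illustrative layout (directory names are made up), the Dockerfile lives next to only the files it needs, and the build targets that subdirectory:
my-repo/
  docs/           (never sent to the daemon)
  app/            (build context)
    Dockerfile
    src/
    pom.xml
# Uses app/ as the context and app/Dockerfile by default
docker build ./app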
Several articles have been extremely helpful in understanding Docker's volume and data management. These two in particular are excellent:
http://container-solutions.com/understanding-volumes-docker/
http://www.alexecollins.com/docker-persistence/
However, I am not sure if what I am looking for is discussed. Here is my understanding:
When running docker run -v /host/something:/container/something the host files will overlay (but not overwrite) the container files at the specified location. The container will no longer have access to the location's previous files, but instead only have access to the host files at that location.
When defining a VOLUME in a Dockerfile, other containers may share the contents created by the image/container.
The host may also view/modify a Dockerfile volume, but only after discovering the true mountpoint using docker inspect. (usually somewhere like /var/lib/docker/vfs/dir/cde167197ccc3e138a14f1a4f7c....). However, this is hairy when Docker has to run inside a Virtualbox VM.
How can I reverse the overlay so that when mounting a volume, the container files take precedence over my host files?
I want to specify a mountpoint where I can easily access the container filesystem. I understand I can use a data container for this, or I can use docker inspect to find the mountpoint, but neither solution is a good solution in this case.
The docker 1.10+ way of sharing files would be through a volume, as in docker volume create.
That means that you can use a data volume directly (you don't need a container dedicated to a data volume).
That way, you can share and mount that volume in a container which will then keep its content in said volume.
That is more in line with how a container works: isolating memory, CPU and filesystem from the host. That is why you cannot "mount a volume and have the container's files take precedence over the host files": it would break that isolation and expose the container's content to the host.
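A short sketch of that approach (volume and image names are placeholders). Worth knowing: when a named volume is empty, Docker seeds it with whatever the image already contains at the mount point, which is the closest you get to "container files take precedence":
docker volume create mydata
# On first use, the empty volume is populated with the image's content at /container/something
docker run -v mydata:/container/something my-image
# Find where the volume's data lives on the host
docker volume inspect mydata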
Begin your container's script by copying files from a read-only bind mount reflecting the host files to a work location inside the container. End the script by copying the necessary results from the container's work location back to the host, using either the same or a different mount point.
Alternatively to the end-of-the script command, run the container without automatically removing it at the end, then run docker cp CONTAINER_NAME:CONTAINER_DIR HOST_DIR, then docker rm CONTAINER_NAME.
Alternatively to copying results back to the host, keep them in a separate "named" volume, provided that the container had it mounted (type=volume,src=datavol,dst=CONTAINER_DIR/work). Use the named volume with other docker run commands to retrieve or use the results.
The input files may be modified on the host during development between repeated runs of the container. Avoid shadowing them with frozen copies in the named volume; beginning the container script by copying the input files from the host may help.
Using a named volume helps running the container read-only. (One may still need --tmpfs /tmp for temporary files or --tmpfs /tmp:exec if some container commands create and run executable code in the temporary location).
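A sketch of that run pattern, with the image name, paths and the processing command (run_job) all hypothetical:
# Read-only container: input arrives via a read-only bind mount, results leave
# via a second bind mount, and scratch work lives in a named volume
docker run --rm --read-only \
  --mount type=bind,src=/home/me/input,dst=/input,readonly \
  --mount type=bind,src=/home/me/results,dst=/output \
  --mount type=volume,src=datavol,dst=/work \
  --tmpfs /tmp \
  my-image sh -c 'cp -r /input/. /work && run_job /work && cp -r /work/. /output'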