How to check if the docker image has all the files?

Is there a way to check whether a docker image contains all of the files that the Dockerfile copies over, and to verify that the image is built as configured in the Dockerfile? My situation is that the image builds successfully, but when I try running it, docker complains that it can't find some file or other and the container fails to run, so I can't exec into it.
Doing docker inspect is not helping since it does not report on the files in the image. Is there some method?

You can run a shell based on that image:
docker run -it <image-name> bash
There you can search for files as you would in any shell.
If there is no bash in the image, use sh instead:
docker run -it <image-name> sh
And if the image has an odd entrypoint, override it:
docker run -it --entrypoint sh <image-name>
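If you only need to confirm that a particular file made it into the image, you can also check non-interactively; the path below is just a placeholder for whatever the Dockerfile should have copied in:
docker run --rm --entrypoint ls <image-name> -l /path/to/expected/file
If the file is missing, ls exits with an error, so this also works as a quick smoke test in a script.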

You can also look at the image history and check whether all the required files were added at the time of image creation:
docker image history --no-trunc [image_name] > [file_name]
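If you want to see the actual files rather than just the build steps, one approach (the temporary container name inspect-tmp is arbitrary) is to export the image's filesystem and list its contents:
docker create --name inspect-tmp [image_name]
docker export inspect-tmp | tar -tvf - | grep [expected_file]
docker rm inspect-tmp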

Related

Run a shell script with arguments on any given file with docker run

I am a docker beginner. I have used this SO post to run a shell script with docker run and this works fine. However, what I am trying to do is to apply my shell script to a file that lives in my current working directory, where Dockerfile and script are.
My shell script - given a file as an argument, return its name and the number of lines:
#!/bin/bash
echo $1
wc -l $1
Dockerfile:
FROM ubuntu
COPY ./file.sh /
CMD /bin/bash file.sh
then build and run:
docker build -t test .
docker run -ti test /file.sh text_file
This is what I get:
text_file
wc: text_file: No such file or directory
I'm left clueless as to why the second line doesn't work and why the file can't be found. I don't want to copy my text_file into the container. Ideally, I'd like to run my script from the docker container on any file in my current working directory.
Any help will be much appreciated.
Thanks!!
You're building your Docker image containing the script /file.sh. Still, your Docker container does not contain (or know about) the file text_file which you're passing as an argument.
In order to make it known to your Docker container, you have to mount it when running the container.
docker run --rm -it -v "$PWD"/text_file:/text_file test /file.sh /text_file
In order to check for other files, you just have to swap text_file in both the mount and the argument.
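If you'd rather not name each file individually, another option (a sketch; the /data mount point is arbitrary) is to mount the whole working directory and point the script at a file inside it:
docker run --rm -it -v "$PWD":/data test /file.sh /data/text_file
This way any file in your current working directory is reachable under /data without changing the mount.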
Notes
In addition to Docker volume mounts, I might suggest some more improvements to spice up your image.
In order to run a script, you don't have to use ubuntu as your base image. You might be fine with alpine, or the even more focused bash image. And don't forget to pin a tag in order to enforce the exact same behavior over time.
You can set your script as the ENTRYPOINT of your Dockerfile. Then you're only specifying the file name (file in the example below) as your command.
When mounting files, you can change the name of the file in your container. Therefore, you can simplify your script and just mount the file to test at the exact same place every time you run the container.
FROM alpine:3.10
# file.sh starts with #!/bin/bash, so install bash (or change its shebang to #!/bin/sh)
RUN apk add --no-cache bash
WORKDIR /tmp
COPY file.sh /usr/local/bin/wordcount
RUN chmod +x /usr/local/bin/wordcount
# exec form, so that CMD is appended as the script's argument
ENTRYPOINT ["/usr/local/bin/wordcount"]
CMD ["file"]
Then,
docker run --rm -it -v "$PWD"/text_file:/tmp/file test
will do the job.
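Because the file is always mounted to the same place (/tmp/file), running the tool against a different host file only means changing the source side of the mount; for instance, with a hypothetical other_file in the current directory:
docker run --rm -it -v "$PWD"/other_file:/tmp/file test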

Dockerfile CMD for taking bash commands from host

I've created a Dockerfile with various compile and build tools. The goal of the Docker image is to standardize our development tools and make development easy and consistent.
Everything is installed.
What I am stuck on is how to make the docker container keep running, and how to get a bash shell into that container so that I can run, for example, make.
If I use ENTRYPOINT /bin/bash my container exits immediately. How to keep the container running?
You should specify the command at run time. Run your Docker container in interactive mode (-it) and set the command to /bin/bash:
docker run -it myDockerImage myCommandToExecuteInteractively
For instance:
docker run -it myDocker /bin/bash
Here is a real life example:
a) Pulling the most basic image
docker pull debian:jessie-slim
b) Let's have a bash there:
docker run -it debian:jessie-slim /bin/bash
c) Enjoy an interactive shell inside the container.
A Docker container will only run for as long as the command from its CMD/ENTRYPOINT keeps running.
You can run your Docker container in interactive mode using the -it switches:
sudo docker run -it --entrypoint=/bin/bash <imagename>
Example: docker run -it --entrypoint=/bin/bash ubuntu:14.04
This will start an interactive shell in your container. Your container will exit as soon as you exit that shell.
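If you want the container itself to stay up in the background so you can attach to it repeatedly (a common pattern for a build/dev image; the container name devbox here is arbitrary), one sketch is:
docker run -d --name devbox myDockerImage tail -f /dev/null
docker exec -it devbox bash
docker exec -it devbox make
The tail -f /dev/null command never exits, so the container keeps running until you docker stop it, and each docker exec gives you a fresh shell or build command inside it.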

Change the ENTRYPOINT of a container after building

I have a Dockerfile, which ends with:
ENTRYPOINT ["/bin/bash", "/usr/local/cdt-tests/run-tests.sh"]
After building this container, I want to run it, but instead of executing this bash script (run-tests.sh), I want to open up a terminal window inside the container to inspect the filesystem.
If there were no ENTRYPOINT line, I could do this:
docker build -t x .
docker run -it x /bin/bash
and I could examine the container's files.
However, since there is an ENTRYPOINT, that script will run instead and I cannot examine the container's files.
Is there anything I can do to get into the container to snoop around?
docker run has an --entrypoint option that lets you override the entrypoint for a single run.
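With the image from the question, that might look like this (the image itself is unchanged; only this one run uses the different entrypoint):
docker build -t x .
docker run -it --entrypoint /bin/bash x
Note that anything you put after the image name is now passed as arguments to /bin/bash rather than to run-tests.sh.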

Cannot run script added to existing docker container

I have a container that is running with no issues. I added a bash script to complement a couple of other scripts already in the container. The docker image copies 2 scripts to /usr/local/bin and they can be accessed with docker exec -c container-name existingscript.
I added my own script to the same directory, and when running the same command I get an error that exec cannot run the script: no file or directory, script not located in $PATH. I checked $PATH and sure enough, /usr/local/bin is listed. I checked permissions and the script is 755.
I then open an interactive shell with docker exec -it mycontainer bash and run /usr/local/bin/myscript and it runs with no problem.
Why can I not run the script from outside the container like I can the other two (that were included in the image)? All three have almost the same functions and do not use any special programs: one lists files, one adds files, one reads the file.
The base is Ubuntu.
EDIT: Found where I was running into the issue. Provided the answer in case anyone else happens to make the same mistake.
EDIT-2: So the script that came with the docker image to perform a couple of common functions calls the image, not the container, so adding my script to the container had no effect, which is why I kept getting the no file or directory error.
The line in the script in question was:
docker run --rm -v "$(pwd)/config":/path/to/file -ti image_name:latest mynewscript $#
Of course that ran against the image and NOT the container.
Once I noticed that, I tried running it with exec instead of run, and it ran without error, like so:
docker exec -it container_name mynewscript
The reason is that /usr/local/bin is not in your script's $PATH; you can call /usr/local/bin/myscript explicitly in your script, or export an updated PATH at the top of the script.
While I was adding snippets to help explain the issue I found the problem and the solution.
So I access the scripts inside the container from the host with another script that allows you to do different things based on a switch case. The scripts were being called against the docker image and not the container, so the script I added does not actually exist in the image.
I modified the script to call the container instead of the image and it works as expected.
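The key distinction, in short (image and container names taken from the commands above):
docker run --rm -it image_name:latest mynewscript   # starts a NEW container from the image, which never contained the new script
docker exec -it container_name mynewscript          # runs inside the EXISTING container, where the script was added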

How to modify files in a container using a script

I am trying to run a container and modify certain files in it. I am trying to do this using a script. If I use:
docker run -i -t <container> <image>, it is giving me
STDERR: cannot enable tty mode on non tty input
If I use:
docker run -d <container> <image> bash, the container is not starting.
Is there any way to do this?
Thanks
Run the docker image in background using:
docker run -d <image>:<version>
Check running docker containers using:
docker ps
If there is only one container running, you can use the command below to attach to it and use bash to browse files/directories inside the container:
docker exec -it $(docker ps -q) bash
You can then modify/edit any file you want and restart the container.
To stop a running container:
docker stop $(docker ps -q)
To run a stopped container:
docker start -ia $(docker ps -lq)
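If the goal is to drive the edit from a script rather than an interactive session, drop the -t flag (which is what triggers the "cannot enable tty mode" error when there is no terminal) and run the change non-interactively; the sed expression and path below are only illustrative:
docker exec $(docker ps -q) sh -c 'sed -i "s/old/new/" /some/container/file'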
So to start off, the -i -t flags request an interactive TTY for interacting with the container. If you are invoking this from a script, it's likely that this won't work as you expect.
This is not really the way containers are meant to be used. If it is a permanent change, you should be rebuilding the image and using that for the container.
However, if you want to make changes to files that are reflected in the container, you could consider using volumes to mount directories from the host into the container. This would look something like:
docker run -d -v /some/host/dir:/some/container/dir <image>
At this point anything you change within /some/host/dir will be within the container at /some/container/dir. You can then make your changes with a script on the host, without having to invoke the docker cli.
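For example (the paths and file name are made up), once the container is started with that bind mount, an edit made on the host is visible inside the container right away:
echo "new-setting=1" >> /some/host/dir/app.conf
docker exec <container> cat /some/container/dir/app.conf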
