extend docker image preserving its entrypoint - bash

I have created an image with an entrypoint script that runs on container start, and I use this image for several purposes. Now I want to extend this image, but the derived image needs to modify some files after the image is built and before the container's main process starts. So the second image will also have an entrypoint script. Do I just call the base image's entrypoint script from the second image's entrypoint script, or is there a more elegant solution?
Thanks in advance

An image only has one ENTRYPOINT (and one CMD). In the situation you describe, your new entrypoint needs to explicitly call the old one.
#!/bin/sh
# new-entrypoint.sh
# modify some files in the container
sed -e 's/PLACEHOLDER/value/g' /etc/config.tmpl > /etc/config
# run the original entrypoint; make sure to pass the CMD along
exec original-entrypoint.sh "$@"
Remember that setting ENTRYPOINT in a derived Dockerfile also resets CMD, so you'll have to restate it.
ENTRYPOINT ["new-entrypoint.sh"]
CMD original command from base image
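The argument-forwarding part is easy to get wrong, so here is a minimal local sketch of the wrapper pattern (plain shell, no Docker; the file names and arguments are made up for the demo):

```shell
#!/bin/sh
# Simulate a base-image entrypoint and a derived wrapper, locally.
mkdir -p /tmp/ep-demo

# Stand-in for the base image's entrypoint
cat > /tmp/ep-demo/original-entrypoint.sh <<'EOF'
#!/bin/sh
echo "original entrypoint got: $*"
EOF

# The derived image's wrapper: do the file edits, then exec the original,
# forwarding the CMD arguments with "$@"
cat > /tmp/ep-demo/new-entrypoint.sh <<'EOF'
#!/bin/sh
# ... modify files here ...
exec /tmp/ep-demo/original-entrypoint.sh "$@"
EOF

chmod +x /tmp/ep-demo/original-entrypoint.sh /tmp/ep-demo/new-entrypoint.sh
/tmp/ep-demo/new-entrypoint.sh run --port 8080
# -> original entrypoint got: run --port 8080
```

Because the wrapper uses exec, the original entrypoint replaces the wrapper process, so signals and the exit status behave exactly as they did in the base image.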
It's also worth double-checking whether the base image has some extension facility or another path to inject configuration files. Most of the standard database images will run initialization scripts from a /docker-entrypoint-initdb.d directory, which can be bind-mounted, so you can avoid a custom image for this case; the nginx image knows how to substitute environment variables; and in many cases you can docker run -v to bind-mount a directory of config files into a container. Those approaches can be easier than replacing or wrapping ENTRYPOINT.

Related

I have tried and really need some help in understanding why my Dockerfile won't run this script

I am trying, as part of an exercise, to create an image and run a simple bash script. This is my Dockerfile:
FROM ubuntu
RUN chmod 700 .
#Create container to store file in
RUN mkdir doc-conatiner
# source then the destination of container in docker if I have one
COPY . /functionfibonnaci/doc-conatiner
#when conatiner starts what is the executable
CMD ["bash", "functionfibonnaci.sh"]
when I run docker run I get:
bash: functionfibonnaci.sh: No such file or directory
I have been at this for two days and just can't get this to work, so answers will be appreciated.
As @KapilKhandelwal indicates in their answer, you're having trouble because the bash functionfibonnaci.sh command is looking for the script in the current directory, but you've never changed directories, so you're in the container filesystem's root directory.
I'd suggest updating this in a couple of ways:
On your host system, outside of Docker, make sure that the script starts with a "shebang" line; the very first line, starting at the very first character, should be #!/bin/sh (or if you have bash-specific extensions and can't remove them, #!/bin/bash, but try to stick to POSIX shell syntax if you can).
On your host system, outside of Docker, make sure the script is executable; chmod +x functionfibonnaci.sh. With this and the previous step, you'll be able to just run ./functionfibonnaci.sh without explicitly mentioning the shell.
In the Dockerfile, change WORKDIR to some directory early. Often a short directory name like /app works well.
You don't need to RUN mkdir the WORKDIR directory or directories you COPY into; Docker creates them for you.
When you COPY content into the Dockerfile, the right-hand side can be a relative path like ., relative to the current WORKDIR, so you don't need to repeat the directory name.
In your CMD you can also specify the script location relative to the current directory.
These updates will get you:
FROM ubuntu
# do not need to mkdir this directory first
# or /functionfibonnaci/doc-conatiner if you prefer
WORKDIR /app
# copy the entire build-context directory into the current workdir
COPY . .
# the command does not need to explicitly name the interpreter
# (assuming the script has a "shebang" line and is executable)
CMD ["./functionfibonnaci.sh"]
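The shebang and executable-bit steps can be checked on the host before any Docker build; here is a throwaway sketch using /tmp:

```shell
#!/bin/sh
# Give the script a shebang line and the executable bit, then run it
# directly without naming the interpreter.
cat > /tmp/functionfibonnaci.sh <<'EOF'
#!/bin/sh
echo "script ran"
EOF
chmod +x /tmp/functionfibonnaci.sh
/tmp/functionfibonnaci.sh
# -> script ran
```

Once this works locally, the same script will work with the exec-form CMD above, because the kernel uses the shebang line to pick the interpreter.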
From the error message, it is clear that functionfibonnaci.sh is not found.
Update the CMD command in the Dockerfile to this:
CMD ["bash", "/functionfibonnaci/doc-conatiner/functionfibonnaci.sh"]
Note: This will work if the functionfibonnaci.sh file is in the same directory where the Dockerfile is present on the host machine. If it is present in a different directory, feel free to update the path of the file in the CMD accordingly.
TL;DR
Let's look closely at what you are trying to do. The first two lines of the Dockerfile are self-explanatory.
In the third command, you are creating a directory with the intention of copying your script files into it. Sounds good so far!
The fourth line of the Dockerfile is what creates the mess, IMO. You are actually copying all the files from the host into the directory /functionfibonnaci/doc-conatiner. But wait, you were supposed to copy those files into the doc-conatiner directory that you created earlier, right?
Now in the last line of the Dockerfile, you are trying to run the bash script functionfibonnaci.sh. But since the default WORKDIR is /, it will search for functionfibonnaci.sh inside the / directory, while the file is actually present inside the /functionfibonnaci/doc-conatiner directory.
Hence, you are facing this issue.
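The WORKDIR point is easy to reproduce outside Docker: a script name without a path is resolved against the current directory, so the same command succeeds or fails depending on where you run it (the directories here are invented for the demo):

```shell
#!/bin/sh
mkdir -p /tmp/wd-demo /tmp/elsewhere
printf '#!/bin/sh\necho found\n' > /tmp/wd-demo/functionfibonnaci.sh

# From an unrelated directory, the bare name is not found...
cd /tmp/elsewhere
bash functionfibonnaci.sh 2>/dev/null || echo "not found from /tmp/elsewhere"

# ...but from the directory that holds the script, it is.
cd /tmp/wd-demo
bash functionfibonnaci.sh
# -> found
```

Setting WORKDIR in the Dockerfile is exactly the `cd` step that was missing in the original image.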

Dockerfile - copying a bash file from host to Dockerfile

I'm trying to copy a bash file called setup_envs.sh which is in the same directory of my Dockerfile.
How can I run this bash file only once after Dockerfile is created?
My code is (in the end of the Dockerfile):
RUN mkdir -p /scripts
COPY setup_env.sh /scripts
WORKDIR /scripts
RUN chmod +x /scripts/setup_env.sh
CMD [./scripts/setup_env.sh]
Current error:
/bin/bash: [./scripts/setup_env.sh]: No such file or directory
I don't have a typo in the file btw, I checked this.
Moreover, after I solve this and run the image to create a container - how can I make sure this bash script is only called once? Should I just write a command in the bash script that checks if some folder exists - and if it does - don't install it?
Based on the different comments including mine, this is what your Dockerfile extract should be replaced with:
COPY --chmod=755 setup_env.sh /scripts/
WORKDIR /scripts
CMD /scripts/setup_env.sh
Alternatively you can use the exec form for CMD but there is not much added value here since you're not passing any command line parameters.
CMD ["/scripts/setup_env.sh"]
At this point, I'm not really sure the WORKDIR instruction is useful (it depends on the rest of your Dockerfile and the content of your script).
Regarding your single bash script execution, I think you need to give a bit more background on the exact goal you are targeting. I have the feeling you could be in an X/Y Problem. And since this is a totally different issue, it should go inside a new question anyway with all required details.
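As a footnote on the original error: CMD [./scripts/setup_env.sh] without quotes is not valid JSON, so Docker silently falls back to shell form and passes the literal text, brackets included, to /bin/sh -c. That failure can be reproduced directly, no Docker needed:

```shell
#!/bin/sh
# What Docker effectively runs for the unquoted CMD: the shell looks for a
# command literally named "[./scripts/setup_env.sh]" and cannot find it,
# matching the error message in the question.
/bin/sh -c '[./scripts/setup_env.sh]' 2>&1 || true
```

Quoting the array elements, as in CMD ["/scripts/setup_env.sh"], makes it valid JSON again and restores true exec form.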

sourcing a setup.bash in a Dockerfile

I am trying to build me a Dockerfile for my ROS project.
In ROS it is required that you source a setup bash in every terminal before starting to work.
(You can replace this by putting the source command in your bashrc file)
So, what I do is source the file in the Dockerfile so that it gets run when the container is built. It works fine in that terminal.
However, when I open another terminal, predictably that file is not sourced and I have to do it manually.
Is there any way I can avoid this?
As I said, in a non-Docker setup you put this into a file that gets called every time a terminal is opened, but how do you do this with Docker?
(in other words, how do you make sure an sh file is executed every time I execute (or attach to) a docker container)
In your Dockerfile, copy your script to Docker WORKDIR:
COPY ./setup.bash .
Then set the entry point to run that script at container launch:
ENTRYPOINT ["/bin/bash", "-c", "./setup.bash"]
Note that with this approach, you won't be able to start your container in an interactive terminal with docker run -it. You'll need to do a few more things if that's what you want. Also, this will overwrite your original image's ENTRYPOINT (which you can find by docker image history), so make sure that is not essential. Otherwise, sourcing the script may be the better option for both cases:
RUN source ./setup.bash
(note that the default shell for RUN is /bin/sh, where the command is . rather than source, and that variables set this way only last for that single RUN step, since each RUN runs in its own shell)
Just add the script to startup configuration files in bash...
COPY ./setup.bash /etc/
RUN echo "source /etc/setup.bash" >> /etc/bash.bashrc
ENTRYPOINT /bin/bash
The file /etc/bash.bashrc might be named /etc/bashrc, or you might want to use /etc/profile.d directory, depending if you want the file to be sourced in interactive shells or not. Read the relevant documentation about startup files in bash.
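The reason an rc file is needed at all can be shown without Docker: each shell (and each RUN step) is a separate process, so variables set by sourcing in one shell never reach the next one. A small sketch (the file name and variable are invented):

```shell
#!/bin/sh
echo 'export ROS_DEMO=ready' > /tmp/setup-demo.bash

# Sourcing works inside the shell that does it...
bash -c 'source /tmp/setup-demo.bash; echo "same shell: $ROS_DEMO"'
# -> same shell: ready

# ...but a new shell starts clean, which is why the container's bashrc
# has to re-source the file for every new terminal.
bash -c 'echo "new shell: ${ROS_DEMO:-unset}"'
# -> new shell: unset
```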

get container's name from shared directory

Currently I am running Docker with more than 15 containers for various apps. I am at the point where I am sick and tired of looking up in my docs, every time, the command I used to create each container. While trying to write scripts and alias commands to make this procedure easier, I ran into this problem:
Is there a way to get the container's name from the host's shared folder?
For example, I have a directory "MyApp" and inside this I start a container with a shared folder "shared". It would be perfect if:
a. I had a global script somewhere and an alias command set respectively and
b. I could just run something like "startit"/"stopit"/"rmit" from any of my "OneOfMyApps" directories and their subdirectories. I would like to skip the docker ps -> copy -> etc. routine every time, and just get the container's name from the script. Any ideas?
Well, one solution would be to use environment variables to pass the name into the container and use some pre-determined file in the volume to store the name. So, you would create the container with -e flag
docker create --name myapp -e NAME=myapp myappimage
And inside the image entry point script you would have something like
cd /shared/volume
echo $NAME >> .containers
And in your shell script you would do something like
stopit() {
  for name in $(cat .containers); do
    docker stop "$name"
  done
}
But this is a bit fragile. If you are going to script the commands anyway, I would suggest using docker ps to get a list of containers and then docker inspect to find which ones use this particular shared volume. You can do all of it inside the script.
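A local dry run of the .containers bookkeeping idea, with the docker call replaced by echo so it runs without a daemon (the directory and container name are invented):

```shell
#!/bin/sh
mkdir -p /tmp/shared-volume
cd /tmp/shared-volume
rm -f .containers   # start clean for the demo

# What the entrypoint would do with the name passed via -e NAME=...
NAME=myapp
echo "$NAME" >> .containers

# Host-side helper reading the same file back
stopit() {
  for name in $(cat .containers); do
    echo "docker stop $name"   # dry run; the real script would call docker
  done
}
stopit
# -> docker stop myapp
```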

Setting $PATH in docker image?

I'm creating a base image for my projects. In this base image, I will download a few .tar.gzs and extract them.
I want to add these unzipped directories to be added to the path, so in child images, I can call up the downloaded executables directly without needing to specify the full path.
I tried running export PATH... in the base image, but that doesn't seem to work (at least when i tty into it, i don't see the path updated, I assume because the export doesn't transfer over into the new bash session).
Any other way to do this? Should I edit the .bashrc?
If you are trying to set some environment variables at runtime, you can use the -e option of docker run. For example:
docker run -e PASSWORD=Cookies -it <image name> bash
Once inside, you can check that $PASSWORD exists with echo $PASSWORD.
Currently, the way you are setting $PATH will not persist across sessions: export only affects the shell process it runs in. See the Bash manual on startup files to learn which files you can edit to set the environment permanently.
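For Docker images specifically, the usual persistent way is the ENV instruction in the base image's Dockerfile: it bakes the variable into the image metadata, so it applies to every shell in every container and is inherited by child images (the /opt/mytool path below is a made-up placeholder):

```Dockerfile
# assuming mytool.tar.gz was already extracted to /opt/mytool in an earlier RUN step
ENV PATH="/opt/mytool/bin:${PATH}"
```

Unlike editing .bashrc, this also works for non-interactive processes such as CMD and docker exec.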
