Currently I am running Docker with more than 15 containers for various apps. I have reached the point where I am sick and tired of looking up in my docs the command I used to create each container. While trying to write scripts and alias commands to make this procedure easier, I ran into this problem:
Is there a way to get the container's name from the host's shared folder?
For example, I have a directory "MyApp" and inside it I start a container with a shared folder "shared". It would be perfect if:
a. I had a global script somewhere and an alias command set respectively and
b. I could just run something like "startit"/"stopit"/"rmit" from any of my "OneOfMyApps" directories (or their subdirectories). I would like to skip the docker ps -> copy the name -> etc. routine every time and just have the script work out the container's name. Any ideas?
Well, one solution would be to use an environment variable to pass the name into the container, and some pre-determined file in the volume to store the name. So you would create the container with the -e flag:
docker create --name myapp -e NAME=myapp myappimage
And inside the image entry point script you would have something like
cd /shared/volume
echo "$NAME" >> .containers
And in your shell script you would do something like
function stopit() {
    for name in $(cat .containers); do
        docker stop "$name"
    done
}
But this is a bit fragile. If you are going to script the commands anyway, I would suggest using docker ps to get the list of running containers and then docker inspect to find which ones mount this particular shared volume. You can do all of that inside the script, so there is no real problem; a rough sketch of that approach follows.
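This is not from the original answer, just an illustration: it assumes the function is run from the project directory and that the shared volume is bind-mounted from its "shared" subdirectory (the directory name is only an example).
function stopit() {
    # stop every running container that bind-mounts $PWD/shared
    local src="$PWD/shared"
    for id in $(docker ps -q); do
        if docker inspect --format '{{range .Mounts}}{{.Source}}{{"\n"}}{{end}}' "$id" | grep -qx "$src"; then
            docker stop "$id"
        fi
    done
}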
Related
I have created an image with an entrypoint script that is run on container start, and I use this image for different purposes. Now I want to extend it, but the extension needs to modify some files in the container after the image is created and before the container starts, so the second image will also have an entrypoint script. Do I just call the base image's entrypoint script from the second image's entrypoint script, or is there a more elegant solution?
Thanks in advance
An image only has one ENTRYPOINT (and one CMD). In the situation you describe, your new entrypoint needs to explicitly call the old one.
#!/bin/sh
# new-entrypoint.sh
# modify some files in the container
sed -e 's/PLACEHOLDER/value/g' /etc/config.tmpl > /etc/config
# run the original entrypoint; make sure to pass the CMD along
exec original-entrypoint.sh "$@"
Remember that setting ENTRYPOINT in a derived Dockerfile also resets CMD, so you'll have to restate it:
ENTRYPOINT ["new-entrypoint.sh"]
CMD original command from base image
It's also worth double-checking whether the base image has some extension facility or another path to inject configuration files. Most of the standard database images will run initialization scripts from a /docker-entrypoint-initdb.d directory, which can be bind-mounted, so you can avoid a custom image for this case; the nginx image knows how to substitute environment variables; and in many cases you can docker run -v to bind-mount a directory of config files into a container. Those approaches can be easier than replacing or wrapping ENTRYPOINT.
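As an illustration of the initdb.d pattern (the image tag and host path here are just examples, not from the question), something like this avoids building a derived image at all:
docker run -d \
    -v "$PWD/init-scripts:/docker-entrypoint-initdb.d:ro" \
    postgres:15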
I am trying to build a Dockerfile for my ROS project.
In ROS it is required that you source a setup.bash file in every terminal before starting to work.
(You can replace this by putting the source command in your bashrc file)
So, what I do is source the file in the Dockerfile so that it gets run when the image is built. It works fine in that terminal.
However, when I open another terminal, predictably it seems that the file is not sourced and I have to do it manually.
Is there any way I can avoid this?
As I said, in a non-Docker setup you would put this into a file that gets called every time a terminal is opened, but how do you do this with Docker?
(In other words, how do you make sure a .sh file is executed every time I run (or attach to) a docker container?)
In your Dockerfile, copy your script to the Docker WORKDIR:
COPY ./setup.bash .
Then set the entry point to run that script at container launch:
ENTRYPOINT ["/bin/bash", "-c", "./setup.bash"]
Note that with this approach, you won't be able to start your container in an interactive terminal with docker run -it; you'll need to do a few more things if that's what you want. Also, this will overwrite your original image's ENTRYPOINT (which you can find with docker image history), so make sure that is not essential. Otherwise, sourcing the script may be the better option for both cases:
RUN source ./setup.bash
Just add the script to startup configuration files in bash...
COPY ./setup.bash /etc/
RUN echo "source /etc/setup.bash" >> /etc/bash.bashrc
ENTRYPOINT /bin/bash
The file /etc/bash.bashrc might be named /etc/bashrc, or you might want to use the /etc/profile.d directory, depending on whether you want the file to be sourced in interactive shells or not. Read the relevant documentation about startup files in bash.
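As a usage sketch (the image and container names are hypothetical, and /etc/bash.bashrc is the location on Debian/Ubuntu-based images), every interactive bash you start in the container will then source the file automatically:
docker run -it myrosimage                # the entrypoint shell has sourced /etc/setup.bash
docker exec -it myroscontainer bash      # "another terminal" in a running container sources it too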
- I am writing a shell script to check whether a file exists and is non-zero in size. The script itself is basic, but the challenge is that I want to check files that live inside a Docker container. How can I access a file location inside the container from a script running outside of it, on the host?
- I am using an array which takes file locations as values, for example:
array=(/u01/FDT/FDT_Inbox/MAINFRAME_FILES/DC_NETWORK_CONFIG/sample.txt /u01/FDT/FDT_Inbox/MAINFRAME_FILES/DC_NETWORK_CONFIG/abc.txt)
and a for loop over every index of the array to check whether the file exists.
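A hedged sketch of one way to run that check from the host (the container name my_container is a placeholder; docker exec runs the test command inside the container):
for f in "${array[@]}"; do
    # test -s is true if the file exists and has a size greater than zero
    if docker exec my_container test -s "$f"; then
        echo "$f exists and is non-empty"
    else
        echo "$f is missing or empty"
    fi
done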
I have 60 EC2 instances which share the same folder structure and are similar to one another, but not completely identical. The incorrect file was uploaded to all 60 instances and I was wondering what would be the best way to replace it with the correct one. The file has the same name and is placed in the same location on all the instances. I am new to using AWS in general, so any help would be much appreciated.
Assuming you don't want to use something like Ansible, have access to the servers, and want to use just bash, you could do something like this:
Put all the IP addresses of your servers into a file, one per line, like so:
IpAddresses.txt
10.20.15.1
10.20.15.44
10.20.15.65
Then create a script:
myscript.sh
#!/bin/bash
while read -r line; do
  ssh -i path_to_key.pem "ec2-user@$line" 'sudo rm -rf /path_to_directory; command 2; command 3'
done < IpAddresses.txt
Maybe you could do something like the above to first remove the directories you don't want, and then do an scp to copy the correct file in, along the lines of the sketch below.
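A hedged sketch of that scp step (the file name, destination path and key are placeholders, not from the original answer):
while read -r ip; do
  scp -i path_to_key.pem correct_file.txt "ec2-user@$ip:/path/to/target_directory/"
done < IpAddresses.txt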
Depends on the commands you need to correct the problem, but this is an option.
Note, I haven't tested this command exactly, so you may need to correct and test it a bit.
Refs:
https://www.shellhacks.com/ssh-execute-remote-command-script-linux/
If your EC2 instances have the correct IAM permissions, you could use the Simple Systems Manager (SSM) console, using the Run Command service. Click 'Run a command', then select AWS-RunShellScript from the list of command documents. In the text box you can specify a shell command to run, and below that you can choose the set of instances you want to run the command on.
This is the recommended way to update and administer large fleets of instances such as you have.
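If you prefer the CLI over the console, the equivalent call looks roughly like this (the target tag, S3 bucket and paths are placeholders, not something from the question):
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "Key=tag:Environment,Values=production" \
  --parameters 'commands=["aws s3 cp s3://my-bucket/correct_file.txt /path/to/target_directory/"]'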
I'm creating a base image for my projects. In this base image, I will download a few .tar.gzs and extract them.
I want to add these unzipped directories to be added to the path, so in child images, I can call up the downloaded executables directly without needing to specify the full path.
I tried running export PATH... in the base image, but that doesn't seem to work (at least when I tty into it, I don't see the path updated; I assume the export doesn't carry over into the new bash session).
Any other way to do this? Should I edit the .bashrc?
If you are trying to set some environment variables, you can use the -e option of docker run to do so. For example, you can do
docker run -e PASSWORD=Cookies -it <image name> bash
which, when run, lets you check that $PASSWORD exists with an echo $PASSWORD.
Currently the way you are setting $PATH will not make the modification persist across sessions. Please see the Bash manual on startup files to see which files you can edit to set the environment permanently.
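For example, in the base image's Dockerfile you could append to a system-wide startup file (the install path /opt/mytool is only a placeholder for wherever you extract the tarballs, and /etc/bash.bashrc is the Debian/Ubuntu location):
# make the tool's bin directory visible to every interactive bash session
RUN echo 'export PATH=/opt/mytool/bin:$PATH' >> /etc/bash.bashrc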