How can I set the current working directory for docker exec with an internal bash shell? - bash

I have a developer docker image based on ubuntu:14.04 that I use to develop apps for Ubuntu 14.04. I start this image when the machine boots with docker start image-name
My home directory was bind mounted with --volume when the container was initially created.
To enter the image I have an alias defined in .bash_aliases
alias d_enter="docker exec -ti ub1404-dev /bin/bash"
So to enter the image I just type d_enter
But I often forget to run d_enter after entering a long path and would like d_enter to switch to that internal directory automatically.
The following doesn't work.
docker exec -ti ub1404-dev /bin/bash <(echo ". ~/.bashrc && cd $(pwd)")
Is there another way I could achieve the desired result?
For example, if my current working directory is: /home/matt/dev/somepath/blabla
And I type d_enter, my current working directory becomes /home/matt. What I want the current directory to be after the exec is /home/matt/dev/somepath/blabla.

From API 1.35+ you can use the -w (--workdir) option:
docker exec -w /my/dir container_name command
https://docs.docker.com/engine/reference/commandline/exec/
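For the d_enter workflow specifically, the alias could become a small function that passes the host's current directory through -w (a sketch, assuming Docker 18.03+/API 1.35+ and that the bind-mounted path is the same inside the container):
d_enter() {
    docker exec -w "$(pwd)" -ti ub1404-dev /bin/bash
}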

You can achieve it with:
docker exec -it containerName sh -c "cd /var/www && /bin/bash"

When building an image, one can specify the WORKDIR instruction in the Dockerfile:
WORKDIR /var/www/html

Definitely a hack but you could do something like making d_enter a shell function (could stay an alias but this is easier to maintain):
d_enter() {
    pwd > ~/.docker_initial_pwd
    docker exec -ti ub1404-dev /bin/bash
}
And then in the container in your user account's .bashrc, add something like:
if [[ -f "${HOME}/.docker_initial_pwd" ]]; then
    cd "$(cat "${HOME}/.docker_initial_pwd")"
fi

Related

How to execute script that opens interactive and continues inside container

Hello, I want to create a script that starts an interactive session with a docker container, and then reads a file from that container's filesystem (and maybe more).
How can I make the commands from a bash script execute inside that session?
myscript.sh
docker exec -it server0 bash
cat dock.txt    # this file is in the container filesystem and I want to see it
# do more stuff in the filesystem
Option 1
You can chain multiple actions in a single command:
docker exec server0 /bin/sh -c "cmd1;cmd2;...;cmdn"
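For the question's example, that could be something like (a sketch reusing dock.txt from the question):
docker exec server0 /bin/sh -c "cat dock.txt; ls -la"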
Option 2
You mount your script from a local folder with the volume (-v) parameter when the container is started (docker exec itself has no -v option), and then execute it inside docker:
docker run -d -v "$PWD/myscript.sh:/myscript.sh" --name server0 <image>
docker exec -it server0 /myscript.sh
myscript.sh
#!/bin/bash
cat dock.txt
ls -la
# do more stuff in the filesystem
start.sh
docker run -d -v "$PWD/myscript.sh:/myscript.sh" --name server0 <image>
docker exec -it server0 /myscript.sh
Start script locally:
./start.sh

Set ENV variable in container is not working, is every under "/usr/local/bin" executed on container run?

I have the following piece of definition in a Dockerfile:
# This aims to be the default value if -e is not present on the run command
ENV HOST_IP=127.0.0.1
...
COPY /container-files/etc/php.d/zz-php.ini /etc/php5/mods-available/zz-php.ini
RUN ln -s /etc/php5/mods-available/zz-php.ini /etc/php5/apache2/conf.d/zz-php.ini
COPY /container-files/init-scripts/setup_xdebug_ip.sh /usr/local/bin/setup_xdebug_ip.sh
RUN chmod +x /usr/local/bin/setup_xdebug_ip.sh
CMD ["/usr/local/bin/setup_xdebug_ip.sh", "/usr/local/bin/setup_php_settings.sh"]
This is the relevant piece of definition at zz-php.ini:
; Xdebug
[Xdebug]
xdebug.remote_enable=true
xdebug.remote_host="192.168.3.1"  ; this should be overwritten by HOST_IP
xdebug.remote_port="9001"
xdebug.idekey="XDEBUG_PHPSTORM"
This is the content of the script setup_xdebug_ip.sh:
#!/usr/bin/bash
sed -i -E "s/xdebug.remote_host.*/xdebug.remote_host=$HOST_IP/" /etc/php5/apache2/conf.d/zz-php.ini
Updated the script
I have updated the script to see if that's the reason why the value isn't changed, but it is still not working. See the code below:
#!/usr/bin/bash
sed -ri "s/^xdebug.remote_host\s*=.*$//g" /etc/php5/apache2/conf.d/zz-php.ini
echo "xdebug.remote_host = $HOST_IP" >> /etc/php5/apache2/conf.d/zz-php.ini
In order to build the image and run the container I follow these steps:
Build the image:
docker build -t reynierpm/dev-php55 .
Run the container:
docker run -e HOST_IP=$(hostname -I | cut -d' ' -f1) \
    --name dev-php5 \
    -it reynierpm/dev-php55 /bin/bash
After the image gets built and the container is running I open a browser and point to: http://container_address/index.php (which contains phpinfo()) and I can see the value of xdebug.remote_host as 192.168.3.1 ...
Why? What is not running when the container starts? Why doesn't the value get overwritten by the value provided with -e on the run command?
UPDATE:
I've noticed that I am only copying the file and setting up its permissions, but I am not running it at all:
# Copy the script for change the xdebug.remote_host value based on HOST_IP
COPY /container-files/init-scripts/setup_xdebug_ip.sh /usr/local/bin/setup_xdebug_ip.sh
# Execute the script
RUN chmod +x /usr/local/bin/setup_xdebug_ip.sh
Could this be the issue? Is everything that I put under /usr/local/bin executed at container start? If not, that's definitely the issue, or at least I think so.
UPDATE #2:
After the suggestions from @charles-duffy I've fixed a few things, but it is still not working.
Now the Dockerfile looks like:
# This aims to be the default value if -e is not present on the run command
ENV HOST_IP=127.0.0.1
...
ADD container-files /
RUN chmod +x /usr/local/bin/setup_xdebug_ip && \
/usr/local/bin/setup_xdebug_ip && \
chmod +x /usr/local/bin/setup_php_settings && \
ln -s /etc/php5/mods-available/zz-php.ini /etc/php5/apache2/conf.d/zz-php.ini && \
ln -s /etc/php5/mods-available/zz-php-directories.ini /etc/php5/apache2/conf.d/zz-php-directories.ini && \
a2enmod rewrite
EXPOSE 80 9001
CMD ["/usr/local/bin/setup_php_settings"]
After building the image I am running the following command:
$ docker run -e HOST_IP=192.168.3.120 -p 80:80 --name php55-img-6 -it reynierpm/php5-dev-4 /bin/bash
I can see the value of xdebug.remote_host being set to 127.0.0.1, but it is not taking the value passed with -e on the run command. Why?
You're correct in that items under /usr/local/bin are not automatically executed.
The Filesystem Hierarchy Standard specifies /usr/local as a "tertiary hierarchy" with its own bin, lib, &c. subdirectories, equivalent in their intent and use to the like-named directories under / or /usr but for content installed local to the machine (in practice, this means software installed without the benefit of the local distro's packaging system).
If you want a command to be executed, you need a RUN that directly or indirectly invokes it.
As for the other matters discussed as this question has morphed, consider the following:
FROM alpine
ENV foo=bar
RUN echo $foo >/tmp/foo-value
CMD cat /tmp/foo-value; echo $foo
When invoked with:
docker run -e foo=qux <image>
...this emits as output:
bar
qux
...because bar is the value the environment variable had when the RUN command executed, whereas qux is its value at the time of the CMD command's execution.
Thus, to ensure that an environment variable is honored in configuration, it must be read and applied during the CMD's execution, not during a prior RUN stage.
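Applied to this question, that means the sed call has to run from the CMD/ENTRYPOINT script rather than from a RUN step. A rough sketch of such a start script, reusing the question's own sed line and file path (the exec "$@" hand-off assumes it is wired up as an ENTRYPOINT):
#!/bin/sh
# runs at container start, so $HOST_IP is the value passed with -e (or the ENV default)
sed -i -E "s/xdebug.remote_host.*/xdebug.remote_host=$HOST_IP/" /etc/php5/apache2/conf.d/zz-php.ini
# hand off to the container's main process
exec "$@"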
Multiple problems with your repo:
First of all, when using CMD in the Dockerfile, the command added after the image name in docker run (/bin/bash) will override the CMD ["/usr/local/bin/setup_php_settings"] from your Dockerfile.
Thus your setup_php_settings is never executed!
You should use ENTRYPOINT instead of CMD in your Dockerfile. I found a good explanation here and here.
In conclusion, change the CMD [...] line in the Dockerfile to:
ENTRYPOINT bash -c '/usr/local/bin/setup_php_settings'; bash
then you can run your container with:
docker run -it -e HOST_IP=<your_ip_address> -e PHP_ERROR_REPORTING='E_ALL & ~E_STRICT' -p 80:80 --name dev-php5 mmi/dev-php55
No need to add /bin/bash at the end. Check out test-repo for the test setup.
Secondly, in your /usr/local/bin/setup_php_settings, you should add
a2enmod rewrite
service apache2 restart
at the end, just before
source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND
so that your new settings are applied in your web app.
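To make the CMD/ENTRYPOINT difference concrete (a hedged sketch, not the exact repo contents):
# CMD form: `docker run -it <image> /bin/bash` replaces this command entirely
CMD ["/usr/local/bin/setup_php_settings"]
# ENTRYPOINT form: the setup script always runs; anything placed after the image
# name on `docker run` is appended to it as arguments instead of replacing it
ENTRYPOINT ["/usr/local/bin/setup_php_settings"]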

Automatically enter only running docker container

In the cloud, I have multiple instances, each running a container with a different random name, e.g.:
CONTAINER ID   IMAGE                         COMMAND                CREATED       STATUS       PORTS                        NAMES
5dc97950d924   aws_beanstalk/my-app:latest   "/bin/sh -c 'python    3 hours ago   Up 3 hours   80/tcp, 5000/tcp, 8080/tcp   jolly_galileo
To enter them, I type:
sudo docker exec -it jolly_galileo /bin/bash
Is there a command or can you write a bash script to automatically execute the exec to enter the correct container?
"the correct container"?
To determine what is the "correct" container, your bash script would still need either the id or the name of that container.
For example, I have a function in my .bashrc:
deb() { docker exec -u git -it $1 bash; }
That way, I would type:
deb jolly_galileo
(it uses the account git, but you don't have to)
Here's my final solution. It edits the instance's .bashrc if it hasn't been edited yet, prints out docker ps, defines the dock function, and enters the container. A user can then type "exit" if they want to access the raw instances, and "exit" again to quit ssh.
commands:
bashrc:
command: if ! grep -Fxq "sudo docker ps" /home/ec2-user/.bashrc; then echo -e "dock() { sudo docker exec -it \$(sudo docker ps -lq) bash; } \nsudo docker ps\ndock" >> /home/ec2-user/.bashrc; fi
As VonC indicated, usually you have to do some shell scripting of your own if you find yourself doing something repetitive. I made a tool myself here, which works if you have Bash 4+.
Install
wget -qO- https://raw.githubusercontent.com/Pithikos/dockerint/master/docker_autoenter >> ~/.bashrc
Then you can enter a container by simply typing the first letters of the container.
$> docker ps
CONTAINER ID IMAGE ..
807b1e7eab7e ubuntu ..
18e953015fa9 ubuntu ..
19bd96389d54 ubuntu ..
$> 18
root@18e953015fa9:/#
This works by taking advantage of the function command_not_found_handle introduced in Bash 4. If a command is not found, the script will try and see if what you typed is a container and if it is, it will run docker exec <container> bash.
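The core of that trick looks roughly like this (a simplified sketch of the idea, not the linked script itself; command_not_found_handle requires Bash 4+):
command_not_found_handle() {
    # check whether what was typed is the prefix of a running container's ID or name
    local target
    target=$(docker ps --format '{{.ID}} {{.Names}}' | awk -v q="$1" '$1 ~ "^"q || $2 ~ "^"q {print $1; exit}')
    if [ -n "$target" ]; then
        docker exec -it "$target" bash
    else
        echo "bash: $1: command not found" >&2
        return 127
    fi
}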

Running a script inside a docker container using shell script

I am trying to create a shell script for setting up a docker container. My script file looks like:
#!/bin/bash
docker run -t -i -p 5902:5902 --name "mycontainer" --privileged myImage:new /bin/bash
Running this script file will run the container in a newly invoked bash.
Now I need to run a script file (test.sh) which is already inside the container from the above given shell script (e.g. cd /path/to/test.sh && ./test.sh).
How do I do that?
You can run a command in a running container using docker exec [OPTIONS] CONTAINER COMMAND [ARG...]:
docker exec mycontainer /path/to/test.sh
And to run from a bash session:
docker exec -it mycontainer /bin/bash
From there you can run your script.
Assuming that your docker container is up and running, you can run commands as:
docker exec mycontainer /bin/sh -c "cmd1;cmd2;...;cmdn"
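For the test.sh from the question, that could look like (a sketch using the question's placeholder path):
docker exec mycontainer /bin/sh -c "cd /path/to && ./test.sh"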
I was searching for an answer to this same question and found that ENTRYPOINT in the Dockerfile was the solution for me.
Dockerfile
...
ENTRYPOINT /my-script.sh ; /my-script2.sh ; /bin/bash
Now the scripts are executed when I start the container, and I get the bash prompt after the scripts have been executed.
In case you don't want (or have) a running container, you can call your script directly with the run command.
Remove the interactive tty arguments -i -t and use this:
$ docker run ubuntu:bionic /bin/bash /path/to/script.sh
This should (I didn't test it) also work for other scripts:
$ docker run ubuntu:bionic /usr/bin/python /path/to/script.py
This command worked for me
cat local_file.sh | docker exec -i container_name bash
You could also mount a local directory into your docker image and source the script in your .bashrc. Don't forget that the script has to consist of functions unless you want it to execute on every new shell. (This is outdated; see the update notice.)
I'm using this solution to be able to update the script outside of the docker instance. This way I don't have to rerun the image if changes occur; I just open a new shell. (Got rid of reopening a shell; see the update notice.)
Here is how you bind your current directory:
docker run -it -v $PWD:/scripts $my_docker_build /bin/bash
Now your current directory is bound to /scripts of your docker instance.
(Outdated)
To save your .bashrc changes commit your working image with this command:
docker commit $container_id $my_docker_build
Update
To solve the issue to open up a new shell for every change I now do the following:
In the Dockerfile itself I add RUN echo "/scripts/bashrc" > /root/.bashrc. Inside zshrc I export the scripts directory to the path. The scripts directory now contains multiple files instead of one. Now I can directly call all scripts without having to open a sub shell on every change.
BTW you can define the history file outside of your container too. This way it's not necessary to commit on a bash change anymore.
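For the history file, a minimal sketch (assuming the /scripts bind mount from above) is an extra line in the container's .bashrc:
# keep bash history on the mounted volume so it survives container restarts
export HISTFILE=/scripts/.bash_history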
Thomio's answer is helpful, but it expects the script to exist inside the image. If you have a one-off script that you want to run/test inside a container (from the command line or to be used in a script), then you can use
$ docker run ubuntu:bionic /bin/bash -c '
echo "Hello there"
echo "this could be a long script"
'
Have a look at entrypoints too. You will be able to use multiple CMDs:
https://docs.docker.com/engine/reference/builder/#/entrypoint
If you want to run the same command on multiple instances, you can do this:
for i in c1 dm1 dm2 ds1 ds2 gtm_m gtm_sl; do docker exec -it $i /bin/bash -c "service sshd start"; done
This is old, and I don't have enough reputation points to comment. Still, I guess it is worth sharing how one can generalize Marvin's idea to allow parameters.
docker exec -i mycontainer bash -s arg1 arg2 arg3 < mylocal.sh
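Inside mylocal.sh the values arrive as ordinary positional parameters, for example (a trivial sketch):
#!/bin/bash
# arg1, arg2 and arg3 from the docker exec line are available as $1, $2, $3
echo "first argument: $1"
echo "all arguments: $*"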

Docker RUN statement (modifying a file) not executed

I am experiencing strange behavior when executing a Dockerfile (in https://github.com/Krijger/es-nagios-docker). Basically, I add a file to append its contents to a file in the image
ADD es-command /tmp/
RUN cat tmp/es-command >> /opt/nagios/etc/objects/commands.cfg
The problem is that, while /tmp/es-command is present in the resulting image, the commands.cfg file was not changed.
As a prelude to the accepted answer: my Dockerfile extends cpuguy83/nagios, which defines /opt/nagios/etc as a volume.
Good to see the sample code, which helps find the root cause.
Your docker image is based on cpuguy83/nagios, built from this Dockerfile: https://github.com/cpuguy83/docker-nagios/blob/master/Dockerfile
You can see that the /opt/nagios/etc directory is declared as a VOLUME:
VOLUME ["/opt/nagios/var", "/opt/nagios/etc", "/opt/nagios/libexec", "/var/log/apache2", "/usr/share/snmp/mibs"]
Note that a directory declared as a docker volume can't be changed by later build steps, so your change is not committed to the new image.
And this is the reason you can see your changes when you enter the container, but they are lost when it exits.
Here is how I use it:
ls ./
configure.sh
commands.cfg
cat configure.sh
#!/bin/bash
script_path=$( cd "$( dirname "$0" )" && pwd )
cp ${script_path}/commands.cfg /opt/nagios/etc/objects/
docker run -d --name nagios cpuguy83/nagios
docker run --rm -v $(pwd):/tmp --volumes-from nagios --entrypoint /tmp/configure.sh cpuguy83/nagios
