ddev exec: command not found (.bash_aliases) - ddev

In a local ddev instance, I have added a few aliases and functions to .ddev/homeadditions/.bash_aliases.
For example: alias ll="ls -lhA"
While ddev ssh and then ll will work, ddev exec ll returns
bash: ll: command not found
Failed to execute command ll: exit status 127
Why?

It's really about how bash works, not about how ddev works. The .bashrc (and thus .bash_aliases, which gets loaded by .bashrc) is only loaded by interactive shells (contexts like ddev ssh). Here's a Stack Overflow answer on it: Why aliases in a non-interactive Bash shell do not work
ddev exec just does a bash -c "<your command>", and bash -c is noninteractive by design.
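You can reproduce this with plain bash, independent of ddev (assuming the ll alias above is loaded via ~/.bashrc inside the container):
bash -c 'll'     # non-interactive: "bash: ll: command not found"
bash -ic 'll'    # -i forces an interactive shell, so ~/.bashrc is read and the alias works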
You might consider adding ddev custom web commands for things you can't live without.
A ddev ll custom command could work like this. Create a file named "ll" in .ddev/commands/web with the contents
#!/bin/bash
## Description: Run ls -l inside web container
## Usage: ll [flags] [arguments]
## Example: "ddev ll" or "ddev ll /tmp"
ls -l "$@"

Here's an example of my setup (I actually have more scripts than just ll):
.ddev/docker-compose.env.yaml
version: '3.6'
services:
  web:
    environment:
      - PROD_USER=foo
      - PROD_SERVER=bar.com
      - PROD_ROOT=path/to/root
      - LOCAL_ROOT=that/path/to/root
      - ASSET_DIRS=bi ba bu
.ddev/commands/web/sync_down_files
#!/bin/bash
# rsync prod assets to local
# download all assets
for directory in ${ASSET_DIRS} ; do
  rsync -zra \
    --delete \
    --exclude='.env' \
    ${PROD_USER}@${PROD_SERVER}:/home/${PROD_USER}/${PROD_ROOT}/$directory \
    ${LOCAL_ROOT};
done
Now I do ddev sync_down_files and get all remote assets into the local site. Same for the db.
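For the database, a matching custom command could look like this sketch (PROD_DB and the ssh access are assumptions; run ddev auth ssh first so the web container can reach the server; inside the ddev web container a plain mysql connects to the local "db" database):
.ddev/commands/web/sync_down_db
#!/bin/bash
## Description: Load the production database into the local site
## Usage: sync_down_db
## Example: "ddev sync_down_db"
# Dump the remote database over ssh and pipe it straight into the local one.
ssh ${PROD_USER}@${PROD_SERVER} "mysqldump ${PROD_DB}" | mysql db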

Related

Docker Container Start Command Did Not Get .bashrc variables

I'm using docker to execute a command when starting the container, but it seems the environment variable is not picked up from the .bashrc file. Please give me some advice, thanks.
In the Dockerfile, I add this to .bashrc:
echo "export PYTHONPATH=$PYTHONPATH:/models/research:/models/research/slim" >> /root/.bashrc
docker-compose.yml file with:
command: ["python2", "/usr/bin/supervisord", "--nodaemon", "--configuration", "/etc/supervisor/supervisord.conf"]
PS: if I exec echo $PYTHONPATH or just exec python2 /usr/bin/supervisord -c /etc/supervisor/supervisord.conf from the container, there are no issues.
The system is Ubuntu 16.04.
supervisor config:
[program:mosquitto-subscrible]
process_name=%(program_name)s_%(process_num)02d
command=python3 detection.py start_mosquitto_subscrible
autostart=true
autorestart=true
user=root
numprocs=1
directory=/var/www/html/detection
redirect_stderr=true
stdout_logfile=/var/www/html/detection/logs/detection.log
docker-compose.yml
version: '3'
services:
  tensorflow:
    container_name: object-detection
    build:
      context: ./tensorflow
      dockerfile: Dockerfile
    # environment:
    #   - PYTHONPATH=:/models/research:/models/research/slim
    volumes:
      - ./www:/var/www/html:cached
      - ./tensorflow/supervisor:/etc/supervisor/conf.d
    command: ['tail', '-f', '/dev/null']
    # command: ["python2", "-c", "/usr/bin/supervisord", "--nodaemon", "--configuration", "/etc/supervisor/supervisord.conf"]
In conclusion: I added a command in the Dockerfile, echo "export PYTHONPATH=$PYTHONPATH:/models/research:/models/research/slim" >> /root/.bashrc, so that /models/research can be found by Python.
There is a Python module at /models/research/object_detection.
With my supervisor setup, the command python3 detection.py start_mosquitto_subscrible can't find the object_detection module if I start supervisord from the docker-compose command instead of exec'ing it inside the container.
supervisord needs Python 2 to start; my code needs Python 3.
~/.bashrc won't run until the shell is opened interactively; that's why there are no issues when you use docker exec, which is interactive. See the first few lines of the bashrc file:
# If not running interactively, don't do anything
case $- in
    *i*) ;;
      *) return;;
esac
You need to comment out these lines.
If you just need one environment variable, it's better to get the value of PYTHONPATH from your container and add the complete variable to your docker-compose.yml file.
command: ["python2", "/usr/bin/supervisord", "--nodaemon", "--configuration", "/etc/supervisor/supervisord.conf"]
The command you've provided is using the exec syntax. See the documentation on CMD (the same applies to RUN and ENTRYPOINT):
If you use the shell form of the CMD, then the <command> will execute
in /bin/sh -c:
FROM ubuntu
CMD echo "This is a test." | wc -
If you want to run your <command> without a shell then you must
express the command as a JSON array and give the full path to the
executable. This array form is the preferred format of CMD. Any
additional parameters must be individually expressed as strings in the
array:
FROM ubuntu
CMD ["/usr/bin/wc","--help"]
In your case, you want a bash shell to process the .bashrc file, which means you need something along the lines of:
command: ["/bin/bash", "-c", "python2 /usr/bin/supervisord --nodaemon --configuration /etc/supervisor/supervisord.conf"]
Edit: with the /root/.bashrc in ubuntu:16.04, you'll see the following at the top of the file:
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
You can modify the file before this line with this sed command:
sed -i '4s;^;export PYTHONPATH=$PYTHONPATH:/models/research:/models/research/slim\n;' /root/.bashrc
I'd consider placing this in a script used to start the container instead of hacking the .bashrc, e.g. a start.sh:
#!/bin/sh
export PYTHONPATH=$PYTHONPATH:/models/research:/models/research/slim
exec python2 /usr/bin/supervisord --nodaemon --configuration /etc/supervisor/supervisord.conf
And then add that to your image with:
COPY start.sh /
RUN chmod 755 /start.sh # if your build server doesn't have this permission set
CMD [ "/start.sh" ]
Try starting docker-compose with this command:
PYTHONPATH="$PYTHONPATH:/models/research:/models/research/slim" docker-compose up -d

How can I use bash constructs like 'cd' or '&&' or '>' redirection with ddev exec?

I'm trying to do some complex things with bash in the container using ddev exec and can't seem to get it to work. For example, ddev exec cd /var/tmp results in a big error message
Failed to execute command [cd /var/tmp]: Failed to run docker-compose [-f /Users/rfay/workspace/d8git/.ddev/docker-compose.yaml exec -T web cd /var/tmp], err='exit status 126', stdout='OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: \"cd\": executable file not found in $PATH": unknown
And trying to use "||" and "&&" or shell redirection with ">" doesn't work either.
Edit 2019-05-14: As of today's ddev release, v1.8.0, the answer below is obsolete, as ddev exec and exec hooks are executed in bash context. So ddev exec "ls | grep php" now works, ddev exec "mysql db <somefile.sql" works, as does an exec hook like exec: mysql <somefile.sql
ddev exec (and the "exec" hook in config.yaml) both execute actual commands, and not in the context of the shell. "cd" is not a Linux command, but rather a shell built-in. And '&&', '||', and '>' or '>>' are also shell constructs. So we have to do a bit of a workaround to make them work.
But we can use bash explicitly to get these things to work:
ddev exec bash -c "cd /var/tmp && ls > /tmp/junk.txt"
To do the same thing in a post-start hook in config.yaml:
hooks:
  post-start:
    - exec: bash -c "cd /var/tmp && ls > /tmp/junk.txt"
Note that environment variables will not persist between exec statements, because they run in different shells, so if you need to keep context it's best to do it in one-liners.
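For example (a quick illustration; FOO is an arbitrary name):
ddev exec bash -c 'export FOO=bar'             # FOO dies with this shell
ddev exec bash -c 'echo $FOO'                  # prints an empty line
ddev exec bash -c 'export FOO=bar; echo $FOO'  # one-liner, context preserved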
Note also that if you want to redirect stdout/stderr you can redirect either within the container (as above) or to the host (redirecting the ddev exec output) like this:
ddev exec bash -c "cd /var/tmp && ls" >/tmp/junk.txt
It's possible that ddev exec might in the future execute commands in the context of bash to make this more transparent.

Set ENV variable in container is not working, is everything under "/usr/local/bin" executed on container run?

I have the following piece of definition in a Dockerfile:
# This aims to be the default value if -e is not present on the run command
ENV HOST_IP=127.0.0.1
...
COPY /container-files/etc/php.d/zz-php.ini /etc/php5/mods-available/zz-php.ini
RUN ln -s /etc/php5/mods-available/zz-php.ini /etc/php5/apache2/conf.d/zz-php.ini
COPY /container-files/init-scripts/setup_xdebug_ip.sh /usr/local/bin/setup_xdebug_ip.sh
RUN chmod +x /usr/local/bin/setup_xdebug_ip.sh
CMD ["/usr/local/bin/setup_xdebug_ip.sh", "/usr/local/bin/setup_php_settings.sh"]
This is the relevant piece of definition at zz-php.ini:
; Xdebug
[Xdebug]
xdebug.remote_enable=true
xdebug.remote_host="192.168.3.1" => this should be overwrited by HOST_IP
xdebug.remote_port="9001"
xdebug.idekey="XDEBUG_PHPSTORM"
This is the content of the script setup_xdebug_ip.sh:
#!/usr/bin/bash
sed -i -E "s/xdebug.remote_host.*/xdebug.remote_host=$HOST_IP/" /etc/php5/apache2/conf.d/zz-php.ini
Updated the script
I have updated the script to see if that's the reason why the value isn't changed; it is still not working. See the code below:
#!/usr/bin/bash
sed -ri "s/^xdebug.remote_host\s*=.*$//g" /etc/php5/apache2/conf.d/zz-php.ini
echo "xdebug.remote_host = $HOST_IP" >> /etc/php5/apache2/conf.d/zz-php.ini
In order to build the image and run the container I follow this steps:
Build the image:
docker build -t reynierpm/dev-php55 .
Run the container:
docker run -e HOST_IP=$(hostname -I | cut -d' ' -f1) \
    --name dev-php5 \
    -it reynierpm/dev-php55 /bin/bash
After the image gets built and the container is running, I open a browser and point to http://container_address/index.php (which contains phpinfo()) and I can see the value of xdebug.remote_host as 192.168.3.1 ...
Why? What is not running when the container starts? Why doesn't the value get overwritten by the value provided with -e on the run command?
UPDATE:
I've noticed that I am only copying the file and setting up its permissions, but I am not running it at all:
# Copy the script for change the xdebug.remote_host value based on HOST_IP
COPY /container-files/init-scripts/setup_xdebug_ip.sh /usr/local/bin/setup_xdebug_ip.sh
# Execute the script
RUN chmod +x /usr/local/bin/setup_xdebug_ip.sh
Could this be the issue? Is everything I put under /usr/local/bin executed at container start? If not, that's definitely the issue, or at least I think so.
UPDATE #2:
After the suggestions from @charles-dufly I've fixed a few things, but it is still not working.
Now the Dockerfile looks like:
# This aims to be the default value if -e is not present on the run command
ENV HOST_IP=127.0.0.1
...
ADD container-files /
RUN chmod +x /usr/local/bin/setup_xdebug_ip && \
    /usr/local/bin/setup_xdebug_ip && \
    chmod +x /usr/local/bin/setup_php_settings && \
    ln -s /etc/php5/mods-available/zz-php.ini /etc/php5/apache2/conf.d/zz-php.ini && \
    ln -s /etc/php5/mods-available/zz-php-directories.ini /etc/php5/apache2/conf.d/zz-php-directories.ini && \
    a2enmod rewrite
EXPOSE 80 9001
CMD ["/usr/local/bin/setup_php_settings"]
After building the image I run the following command:
$ docker run -e HOST_IP=192.168.3.120 -p 80:80 --name php55-img-6 -it reynierpm/php5-dev-4 /bin/bash
I can see the value of xdebug.remote_host being set as 127.0.0.1, but it is not taking the value passed with -e on the run command. Why?
You're correct in that items under /usr/local/bin are not automatically executed.
The Filesystem Hierarchy Standard specifies /usr/local as a "tertiary hierarchy" with its own bin, lib, &c. subdirectories, equivalent in their intent and use to the like-named directories under / or /usr but for content installed local to the machine (in practice, this means software installed without the benefit of the local distro's packaging system).
If you want a command to be executed, you need a RUN that directly or indirectly invokes it.
As for the other matters discussed as this question has morphed, consider the following:
FROM alpine
ENV foo=bar
RUN echo $foo >/tmp/foo-value
CMD cat /tmp/foo-value; echo $foo
When invoked with:
docker run -e foo=qux <image-name>
...this emits as output:
bar
qux
...because bar is the value the environment variable had when the RUN command executed (baked into /tmp/foo-value at build time), whereas qux is the value it has at the CMD command's execution.
Thus, to ensure that an environment variable is honored in configuration, it must be read and applied during the CMD's execution, not during a prior RUN stage.
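Applied to the question above, the sed has to run when the container starts, not at build time. A sketch reusing the question's paths (note that an exec-form CMD runs only the first array element as the program, with the rest as its arguments, so the two scripts must be chained through a shell):
CMD ["/bin/bash", "-c", "/usr/local/bin/setup_xdebug_ip.sh && exec /usr/local/bin/setup_php_settings.sh"]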
Multiple problems with your repo:
First of all, when using CMD in a Dockerfile, the command added after the image name in docker run (/bin/bash) will override the CMD ["/usr/local/bin/setup_php_settings"] from your Dockerfile.
Thus your setup_php_settings is never executed!
You should use ENTRYPOINT instead of CMD in your Dockerfile. I found a good explanation here and here.
In conclusion: in the Dockerfile, change the CMD [...] line to:
ENTRYPOINT bash -c '/usr/local/bin/setup_php_settings'; 'bash'
then you can run your container with:
docker run -it -e HOST_IP=<your_ip_address> -e PHP_ERROR_REPORTING='E_ALL & ~E_STRICT' -p 80:80 --name dev-php5 mmi/dev-php55
No need to add /bin/bash at the end. Check out the test-repo for a test setup.
Secondly, in your /usr/local/bin/setup_php_settings, you should add
a2enmod rewrite
service apache2 restart
at the end, just before
source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND
This is so that your new settings are applied in your web app.

How can I set the current working directory for docker exec with an internal bash shell?

I have a developer docker image based on ubuntu:14.04 that I use to develop apps for Ubuntu 14.04. I start this image when the machine boots with docker start image-name
My home directory was bind mounted with --volumes when initially created.
To enter the image I have an alias defined in .bash_aliases
alias d_enter="docker exec -ti ub1404-dev /bin/bash"
So to enter the image I just type d_enter
But I often forget to run d_enter after entering a long path and would like d_enter to switch to that internal directory automatically.
The following doesn't work.
docker exec -ti ub1404-dev /bin/bash <(echo ". ~/.bashrc && cd $(pwd)")
Is there another way I could achieve the desired result?
For example, if my current working directory is: /home/matt/dev/somepath/blabla
And I type d_enter, my current working directory becomes /home/matt. What I want the working directory to be after the exec is /home/matt/dev/somepath/blabla.
From API 1.35+ you can use the -w (--workdir) option:
docker exec -w /my/dir container_name command
https://docs.docker.com/engine/reference/commandline/exec/
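With that flag, the d_enter alias from the question can pass the host working directory straight through (this assumes the bind mount exposes your home directory at the same path inside the container):
alias d_enter='docker exec -w "$PWD" -ti ub1404-dev /bin/bash'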
You can achieve it with:
docker exec -it containerName sh -c "cd /var/www && /bin/bash"
When building an image, one can specify the WORKDIR variable in Dockerfile:
WORKDIR /var/www/html
Definitely a hack but you could do something like making d_enter a shell function (could stay an alias but this is easier to maintain):
d_enter() {
pwd > ~/.docker_initial_pwd
docker exec -ti ub1404-dev /bin/bash
}
And then in the container in your user account's .bashrc, add something like:
if [[ -f "${HOME}/.docker_initial_pwd" ]]; then
  cd "$(cat "${HOME}/.docker_initial_pwd")"
fi

Running a script inside a docker container using shell script

I am trying to create a shell script for setting up a docker container. My script file looks like:
#!/bin/bash
docker run -t -i -p 5902:5902 --name "mycontainer" --privileged myImage:new /bin/bash
Running this script file will run the container in a newly invoked bash.
Now I need to run a script file (test.sh) which is already inside the container, from the above given shell script (e.g. cd /path/to/test.sh && ./test.sh).
How to do that?
You can run a command in a running container using docker exec [OPTIONS] CONTAINER COMMAND [ARG...]:
docker exec mycontainer /path/to/test.sh
And to run from a bash session:
docker exec -it mycontainer /bin/bash
From there you can run your script.
Assuming that your docker container is up and running, you can run commands as:
docker exec mycontainer /bin/sh -c "cmd1;cmd2;...;cmdn"
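For the test.sh from the question, that could look like this (the path is an assumption):
docker exec mycontainer /bin/sh -c "cd /path/to && ./test.sh"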
I was searching an answer for this same question and found ENTRYPOINT in Dockerfile solution for me.
Dockerfile
...
ENTRYPOINT /my-script.sh ; /my-script2.sh ; /bin/bash
Now the scripts are executed when I start the container, and I get the bash prompt after the scripts have been executed.
In case you don't want (or have) a running container, you can call your script directly with the run command.
Remove the interactive tty arguments -i -t and use this:
$ docker run ubuntu:bionic /bin/bash /path/to/script.sh
This should (untested) also work for other scripts:
$ docker run ubuntu:bionic /usr/bin/python /path/to/script.py
This command worked for me
cat local_file.sh | docker exec -i container_name bash
You could also mount a local directory into your docker image and source the script in your .bashrc. Don't forget that the script has to consist of functions unless you want it to execute on every new shell. (This is outdated; see the update notice.)
I'm using this solution to be able to update the script outside of the docker instance. This way I don't have to rerun the image if changes occur; I just open a new shell. (Got rid of reopening a shell; see the update notice.)
Here is how you bind your current directory:
docker run -it -v $PWD:/scripts $my_docker_build /bin/bash
Now your current directory is bound to /scripts of your docker instance.
(Outdated)
To save your .bashrc changes commit your working image with this command:
docker commit $container_id $my_docker_build
Update
To solve the issue to open up a new shell for every change I now do the following:
In the Dockerfile itself I add RUN echo "/scripts/bashrc" > /root/.bashrc. Inside zshrc I export the scripts directory to the path. The scripts directory now contains multiple files instead of one. Now I can directly call all scripts without having to open a sub shell on every change.
BTW you can define the history file outside of your container too. This way it's not necessary to commit on a bash change anymore.
Thomio's answer is helpful, but it expects the script to exist inside the image. If you have a one-off script that you want to run/test inside a container (from the command line or to be useful in a script), then you can use
$ docker run ubuntu:bionic /bin/bash -c '
echo "Hello there"
echo "this could be a long script"
'
Have a look at entry points too. You will be able to use multiple CMD
https://docs.docker.com/engine/reference/builder/#/entrypoint
If you want to run the same command on multiple instances you can do this :
for i in c1 dm1 dm2 ds1 ds2 gtm_m gtm_sl; do docker exec -it $i /bin/bash -c "service sshd start"; done
This is old, and I don't have enough reputation points to comment. Still, I guess it is worth sharing how one can generalize Marvin's idea to allow parameters.
docker exec -i mycontainer bash -s arg1 arg2 arg3 < mylocal.sh
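Inside mylocal.sh the arguments then arrive as the usual positional parameters (a minimal sketch of such a script):
#!/bin/bash
# mylocal.sh - fed to the container's bash via "bash -s"; arg1..arg3 land in $1..$3
echo "first argument: $1"
echo "all arguments:  $*"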
