Docker container start command does not get .bashrc variables - bash

I'm using Docker to execute a command when starting the container, but it seems the environment variable is not picked up from the .bashrc file. Please give me some advice.
Thanks.
In my Dockerfile I add this to .bashrc:
echo "export PYTHONPATH=$PYTHONPATH:/models/research:/models/research/slim" >> /root/.bashrc
The docker-compose.yml file has:
command: ["python2", "/usr/bin/supervisord", "--nodaemon", "--configuration", "/etc/supervisor/supervisord.conf"]
PS: if I run echo $PYTHONPATH or just python2 /usr/bin/supervisord -c /etc/supervisor/supervisord.conf from inside the container (via docker exec), there are no issues.
The system is Ubuntu 16.04.
supervisor config:
[program:mosquitto-subscrible]
process_name=%(program_name)s_%(process_num)02d
command=python3 detection.py start_mosquitto_subscrible
autostart=true
autorestart=true
user=root
numprocs=1
directory=/var/www/html/detection
redirect_stderr=true
stdout_logfile=/var/www/html/detection/logs/detection.log
docker-compose.yml
version: '3'
services:
  tensorflow:
    container_name: object-detection
    build:
      context: ./tensorflow
      dockerfile: Dockerfile
    # environment:
    #   - PYTHONPATH=:/models/research:/models/research/slim
    volumes:
      - ./www:/var/www/html:cached
      - ./tensorflow/supervisor:/etc/supervisor/conf.d
    command: ['tail', '-f', '/dev/null']
    # command: ["python2", "-c", "/usr/bin/supervisord", "--nodaemon", "--configuration", "/etc/supervisor/supervisord.conf"]
In conclusion, I write a command in the Dockerfile, echo "export PYTHONPATH=$PYTHONPATH:/models/research:/models/research/slim" >> /root/.bashrc, so that /models/research can be found by Python.
There is a Python module at /models/research/object_detection.
With my supervisor config, the command python3 detection.py start_mosquitto_subscrible can't find the object_detection module if I start supervisord from the docker-compose command instead of running it with docker exec inside the container.
supervisord needs Python 2 to start; my code needs Python 3.

~/.bashrc won't run until the shell is opened interactively; that's why there are no issues when you do docker exec, which is interactive. See the first few lines of the .bashrc file:
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac
You need to comment out these lines.
If you just need one environment variable, it's better to get the value of PYTHONPATH from your container and add the complete variable to your docker-compose.yml file.
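For example (a sketch, reusing the paths from the question):
environment:
  - PYTHONPATH=/models/research:/models/research/slim
This is essentially what the commented-out environment: block already in your docker-compose.yml does.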

command: ["python2", "/usr/bin/supervisord", "--nodaemon", "--configuration", "/etc/supervisor/supervisord.conf"]
The command you've provided is using the exec syntax. See the documentation on CMD (the same applies to RUN and ENTRYPOINT):
If you use the shell form of the CMD, then the <command> will execute
in /bin/sh -c:
FROM ubuntu
CMD echo "This is a test." | wc -
If you want to run your <command> without a shell then you must
express the command as a JSON array and give the full path to the
executable. This array form is the preferred format of CMD. Any
additional parameters must be individually expressed as strings in the
array:
FROM ubuntu
CMD ["/usr/bin/wc","--help"]
In your case, you want a bash shell to process the .bashrc file, which means you need something along the lines of:
command: ["/bin/bash", "-c", "python2 /usr/bin/supervisord --nodaemon --configuration /etc/supervisor/supervisord.conf"]
Edit: with the /root/.bashrc in ubuntu:16.04, you'll see the following at the top of the file:
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
You can insert the export before this line with the following sed command:
sed -i '4s;^;export PYTHONPATH=$PYTHONPATH:/models/research:/models/research/slim\n;' /root/.bashrc
I'd consider placing this in a script used to start the container instead of hacking the .bashrc, e.g. a start.sh:
#!/bin/sh
export PYTHONPATH=$PYTHONPATH:/models/research:/models/research/slim
exec python2 /usr/bin/supervisord --nodaemon --configuration /etc/supervisor/supervisord.conf
And then add that to your image with:
COPY start.sh /
RUN chmod 755 /start.sh # if your build server doesn't have this permission set
CMD [ "/start.sh" ]

Try starting docker-compose with the command:
PYTHONPATH="$PYTHONPATH:/models/research:/models/research/slim" docker-compose up -d
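Note that a variable exported on the host like this only reaches the container if docker-compose.yml passes it through; a minimal sketch:
environment:
  - PYTHONPATH
With no value given, Compose forwards the variable from the shell that launched docker-compose.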

Related

How to force Git for Windows' bash-shell to not convert path-string to windows path?

I'm using the bash shell provided by Git for Windows for Docker Toolbox for Windows. I want to export a string representing a Unix path to an environment variable, to then use in a docker container. Something like:
export MY_VAR=/my/path; docker-compose up
The problem is that in my container the variable will be something like:
echo $MY_VAR # prints c:/Program Files/Git/my/path
So it seems the shell (my guess) recognizes the string as a path and converts it to Windows format. Is there a way to stop this?
I've attempted to use MSYS_NO_PATHCONV=1:
MSYS_NO_PATHCONV=1; export LOG_PATH=/my/path; docker-compose up
But it did not have any effect.
I don't think it's an issue with my docker-compose.yml or Dockerfile, but I'll attach them in case someone is interested.
My Dockerfile:
FROM node:8-slim
RUN mkdir /test \
&& chown node:node /test
USER node
ENTRYPOINT [ "/bin/bash" ]
My docker-compose.yml:
version: '2'
services:
  test:
    build:
      context: .
    image: test
    environment:
      - MY_VAR
    volumes:
      - ${MY_VAR}:/test
    command: -c 'sleep 100000'
The final goal here is to make a directory on the host machine accessible from the docker container (for logs and such). The directory should be set by an environment variable. Setting the directory directly in the docker-compose.yml does work, just not for my use case.
If you want your command docker-compose up to be run with MSYS_NO_PATHCONV=1, you have two options:
export LOG_PATH=/c/Windows; export MSYS_NO_PATHCONV=1; docker-compose up (this will affect your whole bash session, as the variable is exported)
export LOG_PATH=/c/Windows; MSYS_NO_PATHCONV=1 docker-compose up (note I removed one semicolon intentionally; this sets MSYS_NO_PATHCONV only in the context of the command to run)
Test it with:
$ export LOG_PATH=/c/Windows ; cmd "/c echo %LOG_PATH%";
C:/Windows --> Fails
$ export LOG_PATH=/c/Windows ; MSYS_NO_PATHCONV=1 cmd "/c echo %LOG_PATH%"
/c/Windows --> Success
$ export LOG_PATH=/c/Windows ; export MSYS_NO_PATHCONV=1; cmd "/c echo %LOG_PATH%";
/c/Windows --> Success but MSYS_NO_PATHCONV is now "permanently" set
It seems a workaround is to remove the first / from the string and instead add it in the docker-compose.yml.
new docker-compose.yml:
version: '2'
services:
  test:
    build:
      context: .
    image: test
    environment:
      - MY_VAR
    volumes:
      - /${MY_VAR}:/test # added '/' to the beginning of the line
    command: -c 'sleep 100000'
and then starting the container with:
export MY_VAR=my/path; docker-compose up # removed the '/' from the beginning of the path.
This seems more like a "lucky" workaround than a perfect solution, as when I build this on other systems I'll have to remember to remove the /. Doable, but a bit annoying. Maybe someone has a better idea.

Run sh script from docker container

I build a docker image with the Dockerfile below:
FROM node:8.9.4
ADD setENV.sh /usr/local/bin/setENV.sh
RUN chmod +x /usr/local/bin/setENV.sh
CMD [ "/bin/bash" "usr/local/bin/setENV.sh" ]
The setENV script is:
#!/bin/sh
echo "PORT=${PORT:-1234}" >> .env
echo "PORT_SERVICE=${PORT_SERVICE:-8888}" > .env
echo "HOST_SERVICE=${HOST_SERVICE:-1234}" > .env
I build the image as:
docker image build -t my-node .
And then I run the image as:
docker run -it my-node bash
But the script is not executed.
From inside the container I run the script as:
/bin/bash usr/local/bin/setENV.sh
And it works fine.
Note that I am using Docker for Windows.
The command
docker run -it my-node bash
just runs bash.
To run the CMD, you have to do
docker run -it my-node
However, note that your container will immediately exit because there is nothing to do after writing to the file. So to see the result, you would need to add cat .env or something similar to setENV.sh.
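A sketch of the amended script (note the original uses > on the last two lines, which truncates .env each time; you probably want >> throughout):
#!/bin/sh
echo "PORT=${PORT:-1234}" >> .env
echo "PORT_SERVICE=${PORT_SERVICE:-8888}" >> .env
echo "HOST_SERVICE=${HOST_SERVICE:-1234}" >> .env
cat .env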
I think the last line should be
CMD [ "/bin/bash", "usr/local/bin/setENV.sh" ]
You missed a ,.

Running a bash script from alpine based docker

I have a Dockerfile containing:
FROM alpine
COPY script.sh /script.sh
CMD ["./script.sh"]
and a script.sh (with executable permission):
#!/bin/bash
echo "hello world from script file"
when I run
docker run --name testing fff0e5c81ca0
where fff0e5c81ca0 is the image ID after building, I get an error:
standard_init_linux.go:195: exec user process caused "no such file or directory"
So how can I solve it?
To run a bash script in an alpine-based image, you need to do one of the following:
Install bash:
RUN apk add --update bash
Use #!/bin/sh in the script instead of #!/bin/bash
You need to do one of these two, or both.
Or, like @Maroun's answer in the comments, you can change your CMD to execute your bash script:
CMD ["sh", "./script.sh"]
Your Dockerfile may look like this:
FROM openjdk:8u171-jre-alpine3.8
COPY script.sh /script.sh
CMD ["sh", "./script.sh"]

Executing a shell script within docker with RUN command

New to Docker, so please bear with me.
My Dockerfile contains an ENTRYPOINT:
ENV MONGOD_START "mongod --fork --logpath /var/log/mongodb.log --logappend --smallfiles"
ENTRYPOINT ["/bin/sh", "-c", "$MONGOD_START"]
I have a shell script that adds an entry to the database through a Python script and then starts the server.
The script startApp.sh:
chmod +x /addAddress.py
python /addAddress.py $1
cd /myapp/webapp
grunt serve --force
Now, none of the docker run commands below succeed in executing this script.
sudo docker run -it --privileged myApp -C /bin/bash && /myApp/webapp/startApp.sh loc
sudo docker run -it --privileged myApp /myApp/webapp/startApp.sh loc
The docker log of the container is:
"about to fork child process, waiting until server is ready for connections. forked process: 7 child process started successfully, parent exiting "
Also, startApp.sh executes fine when I open a bash prompt in the container and run it.
I am unable to figure out what I am doing wrong; please help.
I would suggest you create an entrypoint.sh file:
#!/bin/sh
# Initialize start DB command
# Pick from env variable MONGOD_START if it exists
# else use the default value provided in quotes
START_DB=${MONGOD_START:-"mongod --fork --logpath /var/log/mongodb.log --logappend --smallfiles"}
# This will start your DB in background
${START_DB} &
# Go to startApp directory and execute commands
chmod +x /addAddress.py
python /addAddress.py $1
cd /myapp/webapp
grunt serve --force
Then modify your Dockerfile by removing the last line and replacing it with the following 3 lines:
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Then rebuild your container image using
docker build -t NAME:TAG .
Now run the following command to verify the ENTRYPOINT is /entrypoint.sh:
docker inspect NAME:TAG | less
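Alternatively, query just the entrypoint field instead of paging through the full JSON:
docker inspect --format '{{.Config.Entrypoint}}' NAME:TAG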
I guess (and I might be wrong, since I'm neither a MongoDB nor a Docker expert) that your combination of mongod --fork and /bin/sh -c is the culprit.
What you're essentially executing is this:
/bin/sh -c mongod --fork ...
which:
executes a shell
this shell executes a single command and waits for it to finish
this command launches MongoDB in daemon mode
MongoDB forks into the background and the parent process immediately exits, so the command the shell was waiting on is done, the shell exits, and the container stops
The easiest fix is probably to just use
CMD ["mongod"]
like the official MongoDB Docker image does.
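If you still want the logging options from your MONGOD_START variable, a sketch is to keep them but drop --fork, so mongod stays in the foreground as the container's main process:
CMD ["mongod", "--logpath", "/var/log/mongodb.log", "--logappend", "--smallfiles"]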

Setting environment variables when running docker in detached mode

If I include the following line in /root/.bashrc:
export A="AAA"
then when I run the docker container in interactive mode (docker run -i), the $A variable keeps its value. However, if I run the container in detached mode, I cannot access the variable. Even if I run the container explicitly sourcing the .bashrc, like
docker run -d my_image /bin/bash -c "cd /root && source .bashrc && echo $A"
such line produces an empty output.
So, why is this happening? And how can I set the environment variables defined in the .bashrc file?
Any help would be very much appreciated!
The first problem is that the command you are running has $A interpreted by your host's shell (not the container's shell). On your host, $A is likely blank, so your command effectively becomes:
docker run -i my_image /bin/bash -c "cd /root && source .bashrc && echo "
Which does exactly as it says. We can escape the variable so it is sent to the container and properly evaluated there:
docker run -i my_image /bin/bash -c "echo \$A"
But this will also be blank because, although the container is in interactive mode, the shell is not. But we can force it to be:
docker run -i my_image /bin/bash -i -c "echo \$A"
Woohoo, we finally got our desired result. But there's an added error from bash because there is no TTY. So, instead of interactive mode, we can just allocate a pseudo-TTY:
docker run -t my_image /bin/bash -i -c "echo \$A"
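That said, if the goal is just to have $A available in a detached container, it is usually simpler to bypass .bashrc and pass the variable at run time; for example:
docker run -d -e A=AAA my_image /bin/bash -c 'echo $A'
The single quotes stop the host shell from expanding $A, and -e sets it in the container's environment, no .bashrc sourcing required (check the output with docker logs).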
After running some tests, it appears that when running a container in detached mode, overriding the default environment variables doesn't always happen the way we want, depending on where you are in the Dockerfile.
For example, if running a container in detached mode like so:
docker run -d --name image_name_container image_name
whatever ENV variables you defined within the Dockerfile take effect everywhere (read on and you will understand what "everywhere" means).
Example of a simple Dockerfile (alpine is just a lightweight Linux distribution):
FROM alpine:latest
#declaring a docker env variable and giving it a default value
ENV MY_ENV_VARIABLE dummy_value
#copying two dummy scripts into a place where I can execute them straight away
COPY ./start.sh /usr/sbin
COPY ./not_start.sh /usr/sbin
#in this script i could do: echo $MY_ENV_VARIABLE > /test1.txt
RUN not_start.sh
RUN echo $MY_ENV_VARIABLE > /test2.txt
#in this script i could do: echo $MY_ENV_VARIABLE > /test3.txt
ENTRYPOINT ["start.sh"]
Now if you want to run your container in detached mode and override some ENV variables, like so:
docker run -d -e MY_ENV_VARIABLE=new_value --name image_name_container image_name
Surprise! The variable MY_ENV_VARIABLE is only overridden inside the script that is run by the ENTRYPOINT (and I checked: the same thing happens if you replace ENTRYPOINT with CMD). It would also be overridden in any subscript called from this start.sh script. But MY_ENV_VARIABLE references inside a RUN Dockerfile instruction, or in the Dockerfile itself, do not get overridden, which makes sense: RUN instructions execute at image build time, before docker run is ever invoked.
In other words, $MY_ENV_VARIABLE resolves to dummy_value or new_value depending on whether you are in the ENTRYPOINT (or CMD) or not.
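A quick way to see both behaviours with the image above (overriding the entrypoint just for the test):
# baked in at build time by RUN, so -e cannot change it:
docker run --rm -e MY_ENV_VARIABLE=new_value --entrypoint cat image_name /test2.txt
# prints dummy_value
# the runtime environment, which ENTRYPOINT/CMD processes see:
docker run --rm -e MY_ENV_VARIABLE=new_value --entrypoint sh image_name -c 'echo $MY_ENV_VARIABLE'
# prints new_value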
