Docker on AWS - Environment Variables not inheriting from host to container - bash

I have a script (hosted on GitHub) that does the following:
Creates an EC2 instance on AWS
Saves the local IP (private IP address) as an environment variable $LOCALIP
Installs Docker (official repo)
Updates the base instance (Ubuntu 16.04 LTS)
Pulls a custom image of mine
Runs said image with -e LOCALIP, trying to pass the host's environment variable to the container (I have also tried -e LOCALIP=$LOCALIP)
However, when I docker exec into the container on that instance and run echo $LOCALIP it displays nothing. Running env shows me that LOCALIP is there but has no value assigned to it.
If I destroy the container and remake it using the exact same line from the original script (with -e LOCALIP=$LOCALIP) it works. I need this process automated, however, so some additional help would be greatly appreciated.
Essentially, sudo docker run -dit -e LOCALIP -p 1099:1099 -p 50000:50000 screamingjoypad/armada-server /bin/bash is not sharing the host's LOCALIP variable.
UPDATE
Trying the suggestions from below I added the following line to my script
source /etc/bash.bashrc, but this still does not work. I'm still getting a blank when running echo $LOCALIP in the container...

The problem is caused by this line of your shell script:
echo "export LOCALIP=$(hostname -i)" >> /etc/bash.bashrc
After this command is executed, the export does not take effect immediately; it only takes effect at your next login.
To make the LOCALIP environment variable take effect immediately, add this line after the echo "export ... command:
source /etc/bash.bashrc
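For reference, a minimal sketch of how the two lines fit together in the provisioning script (this assumes the docker run from the question is issued later in the same shell):
# Record the private IP and make it available to future login shells
echo "export LOCALIP=$(hostname -i)" >> /etc/bash.bashrc
# Also define it in the current shell so the docker run below can see it
source /etc/bash.bashrc
# Now $LOCALIP has a value when the container is started
sudo docker run -dit -e LOCALIP=$LOCALIP -p 1099:1099 -p 50000:50000 screamingjoypad/armada-server /bin/bash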

I believe I've solved it. I'm now using the AWS instance metadata to retrieve the private IP address, with the following addition to the script:
sudo docker run -dit -e LOCALIP=$(curl http://169.254.169.254/latest/meta-data/local-ipv4) -p 1099:1099 -p 50000:50000 screamingjoypad/armada-server /bin/bash
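If you want the script to be a bit more defensive, the same idea can be written with the value captured first (a sketch; -s and --fail are just ordinary curl flags, and this assumes IMDSv1 is reachable from the instance):
LOCALIP=$(curl -s --fail http://169.254.169.254/latest/meta-data/local-ipv4)
if [ -z "$LOCALIP" ]; then
    echo "Could not read local-ipv4 from the instance metadata" >&2
    exit 1
fi
sudo docker run -dit -e LOCALIP="$LOCALIP" -p 1099:1099 -p 50000:50000 screamingjoypad/armada-server /bin/bash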

Related

Problem in executing a shell script present on host using docker exec

I'm trying to execute a script on the master node of an AWS EMR cluster. The intention is to create a new conda env and link it to Jupyter. I'm following this doc from AWS. The problem is that, whatever the content of the script, I get the same error: bash: /home/hadoop/scripts/bootstrap.sh: No such file or directory while executing sudo docker exec jupyterhub bash /home/hadoop/scripts/bootstrap.sh. I've made sure the .sh file is in the correct location.
But if I copy the bootstrap.sh file inside the container and then run the same docker exec command, it works fine. What am I missing here? I've tried with a simple script containing only the following, but it throws the same error:
#!/bin/bash
echo "Hello"
The doc clearly says:
Kernels are installed within the Docker container. The easiest way to
accomplish this is to create a bash script with installation commands,
save it to the master node, and then use the sudo docker exec
jupyterhub script_name command to run the script within the jupyterhub
container.
The docker exec command runs a command within the container's namespaces. One of those namespaces is the filesystem. So unless the command is part of the image, written into the container directly, or you have mounted a host volume to map a host directory into the container, you won't be able to execute it. A host volume could look like:
docker run -v /host/scripts:/container/scripts --name your_container $your_image
docker exec -it your_container /container/scripts/test.sh
That host volume could be the same path on both the host and the container.
If it is a shell script, you could use I/O redirection, e.g.:
docker exec -i $container_id /bin/bash <local_script.sh
but be aware that you cannot do interactive stuff this way since the script content has replaced your terminal as stdin. This works because the shell inside the container is just processing commands from stdin.
Other than those scenarios, I don't know what to tell you other than the documentation from AWS appears to be wrong.
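If mounting a host volume is not an option, another workaround consistent with what you observed (the script runs fine once it is inside the container) is to copy it in with docker cp before executing it. A sketch, with /tmp as an arbitrary destination path:
sudo docker cp /home/hadoop/scripts/bootstrap.sh jupyterhub:/tmp/bootstrap.sh
sudo docker exec jupyterhub bash /tmp/bootstrap.sh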

docker-compose script with prompt…. better solution?

I have a bash script to start various docker-compose.yml files.
One of these compose instances is docker-compose.password.yml, which creates a password file for mysql. For that I need to prompt the user to input a user name and then run a command in a docker service (one that is otherwise not running).
Basically, the only way I can think of to accomplish this is to run the container in an idle state, exec the command, and shut the container down. Is there a better way?
(It would be easier to do this directly with docker run, but then I would have to check whether the image is already available, and I would have image definitions in the various docker-compose.ymls plus now also in the bash script.)
My solution:
docker-compose.password.yml
version: '2'
services:
  createpw:
    command: top -b -d 3600
Then:
docker-compose -f docker-compose.password.yml up -d
Prompt the user for the credentials from my bash script, outside of Docker:
read -p $'Input user name.\n> ' username
and send it to the running container:
docker exec createpw /bin/bash -c "mysql_config_editor set --user=${username} --password"
and then docker-compose down
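Putting the pieces together, the wrapper script looks roughly like this (a sketch; it assumes the compose file above and that the service's container is reachable by the name createpw, e.g. via container_name):
#!/bin/bash
# Start the idle helper container defined in the compose file
docker-compose -f docker-compose.password.yml up -d
# Prompt outside of Docker for the user name
read -p $'Input user name.\n> ' username
# Run the config step inside the running container
# (-it so mysql_config_editor can prompt for the password interactively)
docker exec -it createpw /bin/bash -c "mysql_config_editor set --user=${username} --password"
# Tear the helper container down again
docker-compose -f docker-compose.password.yml down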
Tried and not working:
I tried to have just a small sub-script prompting for the input, right under command:
command:
/bin/bash /somewhere/createpassword.sh
This did produce the file, but the user was an empty string, as the prompt didn’t stop the docker execution. It didn’t matter if I used compose -d or not.
Any suggestions are welcome. Thanks.

docker-compose ignores DOCKER_HOST

I am attempting to run 3 Docker images, MySQL, Redis and a project of mine on Bash for Windows (WSL).
To do that I have to connect to the Docker engine running on Windows, specifically on tcp://localhost:2375. I have appended the following line to .bashrc:
export DOCKER_HOST=tcp://127.0.0.1:2375
I can successfully run docker commands like docker ps or docker run hello-world but whenever I cd into my project directory and run
sudo docker-compose up --build to load the images and spin up the containers I get an error:
ERROR: Couldn't connect to Docker daemon at http+docker://localunixsocket - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
I know that if I use the -H argument I can supply the address, but I'd rather find a more permanent solution. For some reason docker-compose seems to ignore the DOCKER_HOST environment variable and I can't figure out why.
Your problem is sudo. It's a totally different program from your shell and doesn't pass on the exported environment unless you specifically tell it to. You can either add the following line to your /etc/sudoers (or /etc/sudoers.d/docker):
Defaults env_keep += DOCKER_HOST
Or you can just pass it directly to the command line:
sudo DOCKER_HOST=$DOCKER_HOST docker-compose up --build
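If you go the sudoers.d route, a minimal sketch (the file name docker is only a convention here):
# Drop-in that tells sudo to keep DOCKER_HOST, with the usual sudoers.d permissions
echo 'Defaults env_keep += "DOCKER_HOST"' | sudo tee /etc/sudoers.d/docker
sudo chmod 0440 /etc/sudoers.d/docker
# Now this should reach the daemon at tcp://127.0.0.1:2375
sudo docker-compose up --build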
By setting DOCKER_HOST you tell every docker command-line invocation to use the HTTP API instead of the default, the Unix socket on localhost.
By default the HTTP API is not turned on:
$ sudo cat /lib/systemd/system/docker.service | grep ExecStart
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
You can add -H tcp://127.0.0.1:2375 to turn on the HTTP API on localhost,
but usually you want to turn on the API for remote servers with -H tcp://0.0.0.0:2375 (do this only behind a proper firewall),
so you need to change the ExecStart line in /lib/systemd/system/docker.service to:
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://127.0.0.1:2375 --containerd=/run/containerd/containerd.sock
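If you would rather not edit the packaged unit file directly, a systemd drop-in achieves the same thing (a sketch, assuming a systemd-based host):
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
[Service]
# Clear the packaged ExecStart, then redefine it with the extra -H flag
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://127.0.0.1:2375 --containerd=/run/containerd/containerd.sock
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker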

Why set VISIBLE=NOW in /etc/profile?

I'm reading a Dockerfile - Dockerizing an SSH Service and it contains the following code:
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
Just curious what the purpose of that is?
TIA,
Ole
P.S. Great article here on ways to avoid running an SSH server in a Docker container: https://jpetazzo.github.io/2014/06/23/docker-ssh-considered-evil/
It's an example of how to pass environment variables when running a Dockerized SSHD service. SSHD scrubs the environment, so ENV variables set in the Dockerfile must be pushed into /etc/profile in order to be available in the SSH session.
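A quick way to see the effect (a sketch; the container IP and the root password come from whatever sshd image you built, so treat them as placeholders):
ssh root@172.17.0.2
# Inside the interactive login shell, /etc/profile has been sourced:
echo "$VISIBLE"      # prints: now
echo "$NOTVISIBLE"   # prints nothing, because sshd scrubbed the Docker ENV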

How to use multiple terminals in a docker container?

I know it is weird to use multiple terminals in a docker container.
My purpose is to test some commands and eventually build a Dockerfile from these commands.
So I need to use multiple terminals, say, two. One runs some commands, the other is used to test those commands.
If I use a real machine, I can ssh into it to get multiple terminals, but how can I do this in docker?
Maybe the solution is to run docker with CMD /bin/bash, and use screen inside that bash?
EDIT
In my situation, one shell runs a server program and the other runs a client program to test it. Because the server program and client program are compiled together, the default link method in docker is not suitable.
The docker way would be to run the server in one container and the client in another. You can use links to make the server visible from the client and you can use volumes to make the files at the server available from the client. If you really want to have two terminals to the same container there is nothing stopping you from using ssh. I tested this docker server:
from: https://docs.docker.com/examples/running_ssh_service/
# sshd
#
# VERSION 0.0.1
FROM ubuntu:14.04
MAINTAINER Thatcher R. Peskens "thatcher@dotcloud.com"
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
You need to base this image on your image, or the other way around, to get all the functionality together. After you have built and started your container, you can get its IP using
docker inspect <id or name of container>
From the docker host you can now ssh in as root, with the password from the Dockerfile. Now you can spawn as many ssh clients as you want. I tested with:
while true; do echo "test" >> tmpfile; sleep 1; done
from one client and
tail -f tmpfile
from another
If I understand the problem correctly, you can use nsenter.
Assuming you have a running container named nginx (with nginx started), run the following command from the host:
nsenter -m -u -i -n -p -t `docker inspect --format {{.State.Pid}} nginx`
This will start a program inside the given PID's namespaces (by default, $SHELL).
You can run more than one shell by issuing the command more than once (from the host). Then you can run any binary that exists in the given container, or tail, rm, etc. its files. For example, tail the log file of nginx, as shown below.
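For instance, a second shell tailing nginx's access log could look like this (a sketch; the log path assumes the image's default nginx layout):
nsenter -m -u -i -n -p -t `docker inspect --format {{.State.Pid}} nginx` \
    tail -f /var/log/nginx/access.log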
Further information can be found in the nsenter man page.
If you want to just play around, you can run sshd in your image and explore it the way you are used to:
docker run -d -p 22 your_image /usr/sbin/sshd -D
When you are done with your explorations, you can proceed to create Dockerfile as usual.
