Connecting to container using ansible - ansible

I have built a Docker image and started the container using Ansible. I'm running into an issue trying to create a dynamic connection to the container from the Docker host so I can set some environment variables and execute a script. I know Ansible does not use SSH to connect to the container, where I could otherwise use the expect module to run the command "ssh root@localhost -p 12345". How do I add and maintain a connection to the container, either using the Ansible docker connection plugin or by pointing directly to the Docker host? This is all running in an AWS EC2 instance.
I think I need Ansible to run the equivalent of the command used to get a shell in the container, "docker exec -i -t container_host_server /bin/bash".
- name: Create a data container
  docker_container:
    name: packageserver
    image: my_image/image_name
    tty: yes
    state: started
    detach: yes
    volumes:
      - /var/www/html
    published_ports:
      - "12345:22"
      - "80:80"
  register: container
Thanks in Advance,
DT

To set environment variables you can use the "env" parameter in your docker_container task.
In the docker_container task you can add the "command" parameter to override the command defined as CMD in the Dockerfile of your Docker image, something like
command: PathToYourScript && sleep infinity
In your example you publish container port 22, so it seems you want to run sshd inside the container. Although it's not best practice in Docker, if you want sshd running you have to start it using the command parameter in the docker_container task:
command: ['/usr/sbin/sshd', '-D']
Doing that (and having defined a user in the container), you'll be able to connect to your container with
ssh -p 12345 user@dockerServer
or, as in your example, "ssh -p 12345 root@localhost" if your image already defines a root user and you are working on localhost.
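Putting the pieces together, a minimal sketch of such a task (the image name, the environment variables and their values are placeholders, not taken from your actual setup):

- name: Start the package server with sshd and some environment variables
  docker_container:
    name: packageserver
    image: my_image/image_name
    state: started
    detach: yes
    env:
      APP_ENV: production        # placeholder variable
      DB_HOST: db.example.local  # placeholder variable
    command: ['/usr/sbin/sshd', '-D']
    published_ports:
      - "12345:22"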

Related

Run Laravel docker image with exposing ports -p

I have a Laravel app but I can't make it run with the docker run command. The last two instructions of my Dockerfile are
EXPOSE 9000
CMD ["php", "artisan", "serve","--port=9000"]
I am trying to make it run with:
docker run -p 9000:9000 my_image:latest
docker run --net="host" -p 9000:9000 my_image:latest
docker run --net="bridge" -p 9000:9000 my_image:latest
The only thing I see is the classic Laravel output
Laravel development server started: <http://127.0.0.1:9000>
What am I missing?
The problem is 127.0.0.1:9000, i.e., the server is bound to localhost within the container, instead of listening on an external interface. The solution is to use the --host 0.0.0.0 argument, which will bind the server to all available interfaces.
CMD ["php", "artisan", "serve", "--host", "0.0.0.0", "--port=9000"]

Setting redis configuration with docker in windows

I want to set up redis configuration in docker.
I have my own redis.conf under D:/redis/redis.conf, configured with bind 127.0.0.1 and with requirepass foobared uncommented.
Then used this command to load this configuration in docker:
docker run --volume D:/redis/redis.conf:/usr/local/etc/redis/redis.conf --name myredis redis redis-server /usr/local/etc/redis/redis.conf
Next,
I have docker-compose.yml in my Maven project under src/resources.
I have the following in my docker-compose.yml:
redis:
  image: redis
  ports:
    - "6379:6379"
And I execute the command:
docker-compose up
The server runs, but when I check with the command:
docker ps -a
it shows that the redis image runs at 0.0.0.0:6379.
I want it to run at 127.0.0.1.
How do I get that?
Isn't my configuration file loading, or is it wrong? Or are my commands wrong?
Any suggestions are of great help.
PS: I am using Windows.
Thanks
Try to execute:
docker inspect <container_id>
And use "NetworkSettings"->"Gateway" (it must be 172.17.0.1) value instead of 127.0.0.1.
You can't use 127.0.0.1 as your Redis was run in the isolated environment.
Or you can link your containers.
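If it helps, the gateway can also be read directly with a Go-template filter (a sketch; the container name is taken from the question's docker run command):

docker inspect -f '{{ .NetworkSettings.Gateway }}' myredis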
First of all, you should not be worried about redis saying it is listening on 0.0.0.0:6379, because redis is running inside the container, and if it doesn't listen on 0.0.0.0 you won't be able to make any connections to it.
Next, if you want redis to be reachable only on localhost of the host machine, you need to use the mapping below:
redis:
  image: redis
  ports:
    - "127.0.0.1:6379:6379"
PS: I have not run a container on Docker for Windows with a 127.0.0.1 port mapping, so you will have to check whether it works; host networking differs between Windows, Mac and Linux and may not behave this way.
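For completeness, a sketch combining your custom redis.conf mount with the localhost-only port mapping (the Windows path and config path are taken from the question; whether the 127.0.0.1 binding works on Docker for Windows is, as noted above, not something I have verified):

redis:
  image: redis
  command: redis-server /usr/local/etc/redis/redis.conf
  volumes:
    - D:/redis/redis.conf:/usr/local/etc/redis/redis.conf
  ports:
    - "127.0.0.1:6379:6379"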

Permission denied error invoking Docker on Mac host from inside Docker Ubuntu container as non-root user

I'm trying to invoke docker on my OSX host running Docker for Mac 17.06.0-ce-mac17 from inside a running jenkins docker container (jenkins:latest), per the procedure described at http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/.  
I mount /var/run/docker.sock into the container, I stick an Ubuntu docker binary inside it, and it's able to execute. But from inside the container as user "jenkins", when I run e.g. "docker ps" I get
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.30/containers/json?all=1: dial unix /var/run/docker.sock: connect: permission denied.  
If I connect to the container as root (docker exec -u 0) it works though.
I need the jenkins user to be able to run this. I tried adding a docker group and adding jenkins to it inside the ubuntu container, but that didn't help, since it has nothing to do with the outside, and Docker for Mac doesn't work like running this on Linux, where you can do semi-easy uid/gid matching. I want to distribute this container, so answers that hack part of my Docker for Mac install won't really help me. I'd rather not run the whole jenkins setup as root if I can help it. (I also tried running the container as privileged; that didn't help.)
Per the advice in Permission Denied while trying to connect to Docker Daemon while running Jenkins pipeline in Macbook I chowned the /var/run/docker.sock file inside the container manually to jenkins, and now jenkins can run docker. But I'm having trouble coming up with a solution for a distributable container: I can't do that chown in the Dockerfile because the file doesn't exist yet, and shimming it into the entrypoint doesn't help because that runs as jenkins.
What do I need to do in order to build and run an image that will run external docker containers on my Mac as a non-root user from inside the container?
Follow this: https://forums.docker.com/t/mounting-using-var-run-docker-sock-in-a-container-not-running-as-root/34390
Basically, all you need to do is change the /var/run/docker.sock permissions inside your container and run docker with sudo.
I've created a Dockerfile that can be used to help:
FROM jenkinsci/blueocean:latest
USER root
# change docker sock permissions after mount
RUN if [ -e /var/run/docker.sock ]; then chown jenkins:jenkins /var/run/docker.sock; fi
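A sketch of building and running that image with the socket mounted (the image tag here is a placeholder):

docker build -t my-jenkins .
docker run -d -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock my-jenkins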
I got this working, at least in an automated way, but currently only on Docker for Mac. Docker for Mac has a unique file permission model. Chowning /var/run/docker.sock to the jenkins user manually works, and it persists across container restarts and even image regeneration, but not across Docker daemon restarts. Plus, you can't do the chown in the Dockerfile because docker.sock doesn't exist yet, and you can't do it in the entrypoint because that runs as jenkins.
So what I did was add jenkins to the "staff" group, because on my Mac, /var/run/docker.sock is symlinked down into /Users//Library/Containers/com.docker.docker/Data/s60 and is uid and gid staff. This lets the jenkins user run docker commands on the host.
Dockerfile:
FROM jenkins:latest
USER root
RUN \
apt-get update && \
apt-get install -y build-essential && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
COPY docker /usr/bin/docker
# To allow us to access /var/run/docker.sock on the Mac
RUN gpasswd -a jenkins staff
USER jenkins
ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/jenkins.sh"]
docker-compose.yml file:
version: "3"
services:
jenkins:
build: ./cd_jenkins
image: cd_jenkins:latest
ports:
- "8080:8080"
- "5000:5000"
volumes:
- ./jenkins_home:/var/jenkins_home
- /var/run/docker.sock:/var/run/docker.sock
This is, however, not portable to other systems (and depends on the Docker for Mac group staying "staff", which I imagine isn't guaranteed). I'd love suggested improvements to make this solution work across host systems. Other options suggested in questions like Execute docker host command inside jenkins docker container include:
- Install sudo and let jenkins run all docker commands with sudo: adds security issues.
- "Add jenkins to the docker group": UNIX only, and probably relies on matching up gids from host to container, right?
- Setuid'ing the included docker executable might work, but has the same privilege escalation issues as sudo.
Another approach that worked for me: set the uid build argument to the uid that owns /var/run/docker.sock (501 in my case). I'm not sure of the syntax for the Dockerfile, but for docker-compose.yml it's like this:
version: "3"
services:
  jenkins:
    build:
      context: ./JENKINS
      dockerfile: Dockerfile
      args:
        uid: 501
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ...
Note this is based on using a Dockerfile to build the jenkins image, so many details are left out. The key bit here is uid: 501 under args.
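For reference, a sketch of how the Dockerfile side might consume that build argument (this is my guess at the part left out above; the base image and user name are taken from earlier in this thread):

FROM jenkins:latest
ARG uid=1000
USER root
# Assumption: remap the existing jenkins user to the uid passed in,
# so it matches the owner of the mounted /var/run/docker.sock
RUN usermod -u "${uid}" jenkins
USER jenkins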

Running a Bash Script (on Docker Container B) from Docker Container A

I have two Docker Containers configured through a Docker Compose file.
Docker Container A - (teamcity-agent)
Docker Container B - (build-tool)
Both start up fine. But as part of the build process in TeamCity, I would like the agent (Container A) to run a bash script which is on Docker Container B (only B can run this script).
I tried to set this up using the SSH build step in TeamCity, but I get connection refused.
Further reading into it shows that SSH isn't enabled in containers and that I shouldn't really be trying to SSH into a container.
So how can I get Container A to run the script on Container B and see the output of the script on A?
What is the best practice for this?
The only way without modifying the application itself is through SSH. It is completely false that you cannot SSH into a container; I use SSH into a database container to run database exports inside it.
First, be sure openssh-server is installed on B. Then you must set up a passwordless (key-based) connection between A and B.
Then be sure you link your containers in the docker-compose file, so you won't need to publish the SSH port.
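A sketch of that linking in compose (the service names are taken from your question; the build contexts and everything else are placeholders):

version: "2"
services:
  teamcity-agent:
    build: ./teamcity-agent   # placeholder build context
    links:
      - build-tool
  build-tool:
    build: ./build-tool       # placeholder build context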
Snippet to add to the Dockerfile for container B:
RUN apt-get install -q -y openssh-server
ADD id_rsa.pub /home/ubuntu/.ssh/authorized_keys
RUN chown -R ubuntu:ubuntu /home/ubuntu/.ssh ; \
chmod 700 /home/ubuntu/.ssh ; \
chmod 600 /home/ubuntu/.ssh/authorized_keys
You can also run the script from outside the containers using docker exec in a crontab on the host, but I think you are not looking for that extreme solution.
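For illustration, such a host crontab entry might look like this (the container name is from your question; the script and log paths are hypothetical):

# run the script inside container B every night at 02:00
0 2 * * * docker exec build-tool /path/to/script.sh >> /var/log/build-script.log 2>&1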
I can help you via comments
Regards

Dockerized process fails to communicate with Elasticsearch: "None of the configured nodes are available"

I have a Spring Boot application which communicates with Elasticsearch 5.0.0 alpha 2.
My application successfully communicates with Elasticsearch and performs several queries.
When I try to dockerize my application, it fails to communicate with Elasticsearch and I get the following error:
None of the configured nodes are available: [{#transport#-1}{127.0.0.1}{127.0.0.1:9300}]
I have spent a lot of time on the internet and found reports of problems when Elasticsearch itself is dockerized, but in my case the client is dockerized, and it works fine without Docker.
The command I used to create the docker image is: docker build -t my-service .
The Dockerfile is:
FROM java:8
VOLUME /tmp
ADD ./build/libs/myjarfile-2.0.0.jar app.jar
EXPOSE 8090
RUN sh -c 'touch /app.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
To execute the image I use: docker run --name myname -d -p 8090:8090 -t my-service
Can someone share his/her experience with this issue?
Thanks
Guy Hudara
The problem is that your Elasticsearch is not reachable from your dockerized app. When you put something in a Docker container it is also isolated on the network layer, and localhost is the localhost of the container, not of the host itself. Therefore, if Elasticsearch is also in a Docker container, use container linking and environment variable injection, or point your app at the address of your host machine's main network interface (not loopback).
Option 1
Assuming that elasticsearch exposes 9200, try to run the following:
$ docker run -d --name=elasticsearch elasticsearch
$ docker run -d --name=my-app --link elasticsearch:elasticsearch -p 8090:8090 my-app
Then you can define the elasticsearch address in your app using the env variable ${ELASTICSEARCH_PORT_9200_TCP_ADDR}.
Option 2
Assuming your host machine runs on 192.168.1.10, you can also do the following:
$ docker run -d -p 9200:9200 elasticsearch
$ docker run -d -p 8090:8090 my-app
Note that naming the elasticsearch container is optional here, but publishing the elasticsearch port is mandatory. In this case you'll have to configure the elasticsearch host in your app with the address 192.168.1.10.
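As a variation on Option 1, a docker-compose sketch wiring the two containers together (the Elasticsearch image tag and the environment variable name your app reads are assumptions, not taken from the question):

version: "2"
services:
  elasticsearch:
    image: elasticsearch:5.0.0-alpha2   # assumed tag matching the question's version
  my-service:
    image: my-service
    ports:
      - "8090:8090"
    links:
      - elasticsearch
    environment:
      # assumption: the Spring Boot app reads its Elasticsearch host from this variable
      - ELASTICSEARCH_HOST=elasticsearch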
