Port forwarding through nested docker containers on Jenkins - ruby

My Jenkins pipeline uses the Docker plugin and then runs a Docker container from inside of that to set up a general test environment, like this:
node('docker') {
    sh """
    cat > .Dockerfile.build <<EOF
    FROM ruby:$rubyVersion
    RUN apt-get update && apt-get install -y locales && localedef -i en_US -f UTF-8 en_US.UTF-8
    ENV LANG=en_US.UTF-8 \\
        LANGUAGE=en_US:en \\
        LC_LANG=en_US.UTF-8 \\
        LC_ALL=en_US.UTF-8
    RUN \\
        curl -sSL -o /tmp/docker.tgz https://download.docker.com/linux/static/stable/x86_64/docker-${dockerVersion}.tgz && \\
        tar --strip-components 1 --directory /usr/local/bin/ --extract --file /tmp/docker.tgz
    RUN \\
        groupadd -g $gid docker && \\
        useradd -d $env.HOME -u $uid build -r -m && \\
        usermod -a -G docker build
    EOF
    """.stripIndent().trim()
}
Once the test environment container is up, I run another container, which has my code and tests, inside that previously made environment container. One of my tests checks that a firewall was set up through iptables to allow certain ports through. To see if my firewall is set up correctly, I simply run this from inside that container (now 3 Docker containers deep):
require 'socket'

# Returns the first bytes from the listener, or nil if the port is unreachable.
def listener_response(port, host = 'localhost')
  TCPSocket.open(host, port) do |socket|
    socket.read(2)
  end
rescue SystemCallError
  nil
end
This is called by simply passing in the random port I used and the Jenkins docker node IP. When I run my test container, I do something like:
docker run -d \
  -e DOCKER_HOST_IP=10.x.x.x \
  -e RANDOM_OPEN_PORT=52459 \
  -p 52459:52459 \
  -v /var/run/docker.sock:/var/run/docker.sock
However, I still get a nil response from my test rather than an OK. Is there a way to port forward from the Jenkins host, through my test environment, to my test container?

Running the test environment with the option --network host seemed to solve the problem for me.
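For reference, a minimal sketch of what that looks like (the image name my-test-env is an assumption; the environment variables are from the question above):
# --network host makes the container share the Jenkins node's network stack,
# so host ports are reachable directly and -p mappings become unnecessary.
docker run -d --network host \
  -e DOCKER_HOST_IP=10.x.x.x \
  -e RANDOM_OPEN_PORT=52459 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-test-env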

Related

Setting Docker environment variables in Azure DevOps using bash task

I am having a problem setting an environment variable as part of releasing my container. In Azure DevOps, I have a Bash task in which I am trying to set an environment variable (CUSER):
export CUSER=$(CUSER) && \
echo $(MYPASS) | sudo -S docker run \
  -e CUSER \
  --name $(CNAME) \
  -p 80:80 $(INAME):$(Build.BuildId) &
The container runs, but the environment variable is not set. It is set, however, when I execute the script directly on the host, like export CUSER="Dev9" && docker run -e CUSER --name demo1 -p 80:80 myimage:256.
I suspect there is a problem with the way my command is formatted but I am not sure where or what.
Resolved it by changing how I authenticate sudo.
sudo -S <<< $(MYPASS) echo export CUSER=$(CUSER) && \
docker run -e CUSER \
  --name $(CNAME) \
  -p 80:80 $(INAME):$(Build.BuildId) &
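The likely root cause: sudo resets the environment by default, so a CUSER exported in the outer shell never reaches the docker client, and -e CUSER copies an empty value into the container. A simpler variant (a sketch, not tested in Azure DevOps) passes the value explicitly so nothing depends on sudo preserving it:
# -e NAME=value sets the variable directly instead of copying it
# from the docker client's environment.
echo $(MYPASS) | sudo -S docker run \
  -e CUSER=$(CUSER) \
  --name $(CNAME) \
  -p 80:80 $(INAME):$(Build.BuildId) &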

How to run cucumber/selenium tests in Docker?

I am struggling to run my cucumber tests from a Docker image.
Here is my setup:
I use OSX with XQuartz to run an X11 session
I use an Ubuntu 14 Vagrant image for development where I forward my X11 session
I am trying to run a docker image with Firefox that will use my XQuartz session for display
So far, I managed to start Firefox with the following setup:
# Dockerfile
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y firefox
# Replace 1000 with something appropriate ;)
RUN export uid=1000 gid=1000 && \
    mkdir -p /home/developer && \
    echo "developer:x:${uid}:${gid}:Developer,,,:/home/developer:/bin/bash" >> /etc/passwd && \
    echo "developer:x:${uid}:" >> /etc/group && \
    echo "developer ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/developer && \
    chmod 0440 /etc/sudoers.d/developer && \
    chown ${uid}:${gid} -R /home/developer
USER developer
ENV HOME /home/developer
CMD /usr/bin/firefox
I can start Firefox with --net=host from my Vagrant machine:
docker build -t firefox .
docker run --net=host -ti --rm \
  -e DISPLAY=$DISPLAY \
  -v $HOME/.Xauthority:/home/developer/.Xauthority \
  -v /tmp/.X11-unix:/tmp/.X11-unix:rw \
  firefox:latest
But this is not ideal, because I can't link other containers to my machine in the docker-compose.yml file. Ideally, I would like to run my container without --net=host, like this:
docker build -t firefox .
docker run -ti --rm \
  -e DISPLAY=$DISPLAY \
  -v $HOME/.Xauthority:/home/developer/.Xauthority \
  -v /tmp/.X11-unix:/tmp/.X11-unix:rw \
  firefox:latest
But I get the following error:
error: XDG_RUNTIME_DIR not set in the environment.
Error: cannot open display: localhost:10.0
Please help :)
You could simply use elgalu/docker-selenium to avoid dealing with this yourself; it's already solved and maintained:
docker run --rm -ti --net=host --pid=host --name=grid \
  -e SELENIUM_HUB_PORT=4444 -e TZ="US/Pacific" \
  -v /dev/shm:/dev/shm --privileged elgalu/selenium
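Once the grid is up, your tests connect to the usual Selenium hub URL. A quick sanity check (the /wd/hub/status endpoint is standard for Selenium 2/3):
# Should return a JSON status payload once the hub is ready.
curl -s http://localhost:4444/wd/hub/status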
If you need advanced features, like a dashboard with video recording or live preview, you can use Zalenium and start it with:
curl -sSL https://raw.githubusercontent.com/dosel/t/i/p | bash -s start -i

Dockerfile CMD not running at container start

So I've written a Dockerfile for a project, and I've defined a CMD to run on starting the container to bootstrap the application.
The Dockerfile looks like:
# create our mount folders and volumes
ENV MOUNTED_VOLUME_DIR=sites
RUN mkdir /$MOUNTED_VOLUME_DIR
ENV PATH=$MOUNTED_VOLUME_DIR/sbin:$MOUNTED_VOLUME_DIR/common/bin:$PATH
RUN chown -Rf www-data:www-data /$MOUNTED_VOLUME_DIR
# Mount folders
VOLUME ["/$MOUNTED_VOLUME_DIR/"]
# Expose Ports
EXPOSE 443
# add our environment variables to the server
ADD ./env /env
# Add entry point script
ADD ./start.sh /usr/bin/startContainer
RUN chmod 755 /usr/bin/startContainer
# define entrypoint command
CMD ["/bin/bash", "/usr/bin/startContainer"]
The start.sh script does some git stuff, like cloning the right repo and setting environment vars, as well as starting supervisor.
The start script begins with this:
#!/bin/bash
now=$(date +"%T")
echo "Container Start Time : $now" >> /tmp/start.txt
/usr/bin/supervisord -n -c /etc/supervisord.conf
I start my new container like this:
docker run -d -p expoPort:contPort -t -i -v /$MOUNTED_VOLUME_DIR/$PROJECT:/$MOUNTED_VOLUME_DIR $CONTAINER_ID /bin/bash
When I log in to the container, I see that supervisor hasn't been started, and neither has nginx nor php5-fpm. The /tmp/start.txt file with the timestamp written by the startContainer script doesn't exist, showing the CMD in the Dockerfile never ran.
Any hints on how to get this fixed would be great.
This:
docker run -d -p expoPort:contPort -t -i -v /$MOUNTED_VOLUME_DIR/$PROJECT:/$MOUNTED_VOLUME_DIR $CONTAINER_ID /bin/bash
says 'run /bin/bash' when instantiating the container; the trailing /bin/bash replaces the image's CMD, so the CMD is skipped entirely.
Try this:
docker run -d -p expoPort:contPort -t -i -v /$MOUNTED_VOLUME_DIR/$PROJECT:/$MOUNTED_VOLUME_DIR $CONTAINER_ID
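You can see the override behaviour with any image (a toy illustration; alpine chosen only for brevity):
# With no trailing command, the image's CMD runs:
docker run --rm alpine
# A trailing command replaces CMD entirely; the default /bin/sh never runs:
docker run --rm alpine echo overridden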

Using ssh-agent with docker on macOS

I would like to use ssh-agent to forward my keys into the docker image and pull from a private github repo.
I am using a slightly modified version of https://github.com/phusion/passenger-docker with boot2docker on Yosemite.
ssh-add -l
...key details
boot2docker up
Then I use the command I have seen in a number of places (e.g. https://gist.github.com/d11wtq/8699521):
docker run --rm -t -i -v $SSH_AUTH_SOCK:/ssh-agent -e SSH_AUTH_SOCK=/ssh-agent my_image /bin/bash
However it doesn't seem to work:
root@299212f6fee3:/# ssh-add -l
Could not open a connection to your authentication agent.
root@299212f6fee3:/# eval `ssh-agent -s`
Agent pid 19
root@299212f6fee3:/# ssh-add -l
The agent has no identities.
root@299212f6fee3:/# ssh git@github.com
Warning: Permanently added the RSA host key for IP address '192.30.252.128' to the list of known hosts.
Permission denied (publickey).
Since version 2.2.0.0, Docker for macOS allows users to access the host's SSH agent inside containers.
Here's an example command that lets you do it:
docker run --rm -it \
  -v /run/host-services/ssh-auth.sock:/ssh-agent \
  -e SSH_AUTH_SOCK="/ssh-agent" \
  my_image
Note that you have to mount that specific path (/run/host-services/ssh-auth.sock) instead of the path contained in the $SSH_AUTH_SOCK environment variable, as you would on Linux hosts.
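To verify the forwarding from inside a container (assumes the image has the OpenSSH client installed):
# ssh-add -l should list the same keys loaded in the host's agent.
docker run --rm -it \
  -v /run/host-services/ssh-auth.sock:/ssh-agent \
  -e SSH_AUTH_SOCK="/ssh-agent" \
  my_image ssh-add -l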
A one-liner:
Here’s how to set it up on Ubuntu 16 running a Debian Jessie image:
docker run --rm -it --name container_name \
  -v $(dirname $SSH_AUTH_SOCK):$(dirname $SSH_AUTH_SOCK) \
  -e SSH_AUTH_SOCK=$SSH_AUTH_SOCK my_image
https://techtip.tech.blog/2016/12/04/using-ssh-agent-forwarding-with-a-docker-container/
I expanded on @wilwilson's answer, and created a script that will set up agent forwarding in an OSX boot2docker environment.
https://gist.github.com/rcoup/53e8dee9f5ea27a51855
#!/bin/bash
# Use a unique ssh socket name per-invocation of this script
SSH_SOCK=boot2docker.$$.ssh.socket

# ssh into boot2docker with agent forwarding
ssh -i ~/.ssh/id_boot2docker \
    -o StrictHostKeyChecking=no \
    -o IdentitiesOnly=yes \
    -o UserKnownHostsFile=/dev/null \
    -o LogLevel=quiet \
    -p 2022 docker@localhost \
    -A -M -S $SSH_SOCK -f -n \
    tail -f /dev/null

# get the agent socket path from the boot2docker vm
B2D_AGENT_SOCK=$(ssh -S $SSH_SOCK docker@localhost echo \$SSH_AUTH_SOCK)

# mount the socket (from the boot2docker vm) onto the docker container
# and set the ssh agent environment variable so ssh tools pick it up
docker run \
  -v $B2D_AGENT_SOCK:/ssh-agent \
  -e "SSH_AUTH_SOCK=/ssh-agent" \
  "$@"

# we're done; kill off the boot2docker ssh agent
ssh -S $SSH_SOCK -O exit docker@localhost
Stick it in ~/bin/docker-run-ssh, chmod +x it, and use docker-run-ssh instead of docker run.
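An example invocation (image name assumed; everything after the script name is passed straight through to docker run via "$@"):
# Tests GitHub authentication using the forwarded agent.
docker-run-ssh --rm -it my_image ssh -T git@github.com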
I ran into a similar issue, and was able to make things pretty seamless by using ssh in master mode with a control socket and wrapping it all in a script like this:
#!/bin/sh
ssh -i ~/.vagrant.d/insecure_private_key -p 2222 -A -M -S ssh.socket -f docker@127.0.0.1 tail -f /dev/null
HOST_SSH_AUTH_SOCK=$(ssh -S ssh.socket docker@127.0.0.1 env | grep "SSH_AUTH_SOCK" | cut -f 2 -d =)
docker run -v $HOST_SSH_AUTH_SOCK:/ssh-agent \
  -e "SSH_AUTH_SOCK=/ssh-agent" \
  -t hello-world "$@"
ssh -S ssh.socket -O exit docker@127.0.0.1
Not the prettiest thing in the universe, but much better than manually keeping an SSH session open IMO.
For me, accessing ssh-agent to forward keys worked on OSX Mavericks and Docker 1.5 as follows:
ssh into the boot2docker VM with boot2docker ssh -A. Don't forget option -A, which enables forwarding of the authentication agent connection.
Inside the boot2docker ssh session:
docker@boot2docker:~$ echo $SSH_AUTH_SOCK
/tmp/ssh-BRLb99Y69U/agent.7750
This session must be left open. Take note of the value of the SSH_AUTH_SOCK environment variable.
In another OS X terminal, issue the docker run command with the SSH_AUTH_SOCK value from step 2, as follows:
docker run --rm -t -i \
  -v /tmp/ssh-BRLb99Y69U/agent.7750:/ssh-agent \
  -e SSH_AUTH_SOCK=/ssh-agent my_image /bin/bash
root@600d0e9b443d:/# ssh-add -l
2048 6c:8e:82:08:74:33:78:61:f9:9a:74:1b:65:46:be:eb /Users/dev/.ssh/id_rsa (RSA)
I don't really like the fact that I have to keep a boot2docker ssh session open to make this work, but until a better solution is found, this at least worked for me.
Socket forwarding doesn't work on OS X yet. Here is a variation of @henrjk's answer, brought into 2019 using Docker for Mac instead of the now-obsolete boot2docker.
First, run an ssh server in a container, with /tmp on an exportable volume, like this:
docker run -v tmp:/tmp \
  -v ${HOME}/.ssh/id_rsa.pub:/root/.ssh/authorized_keys:ro \
  -d -p 2222:22 arvindr226/alpine-ssh
Then ssh into this container with agent forwarding:
ssh -A -p 2222 root@localhost
Inside that ssh session, find the current socket for ssh-agent:
3f53fa1f5452:~# echo $SSH_AUTH_SOCK
/tmp/ssh-9zjJcSa3DM/agent.7
Now you can run your real container. Just make sure to replace the value of SSH_AUTH_SOCK below with the value you got in the step above:
docker run -it -v tmp:/tmp \
  -e SSH_AUTH_SOCK=/tmp/ssh-9zjJcSa3DM/agent.7 \
  vladistan/ansible
By default, boot2docker shares only files under /Users. SSH_AUTH_SOCK is probably under /tmp, so the -v mounts the agent socket of the VM, not the one from your Mac.
If you set up VirtualBox to share /tmp, it should work.
Could not open a connection to your authentication agent.
This error occurs when $SSH_AUTH_SOCK env var is set incorrectly on the host or not set at all. There are various workarounds you could try. My suggestion, however, is to dual-boot Linux and macOS.
Additional resources:
Using SSH keys inside docker container - Related Question
SSH and docker-compose - Blog post
Build secrets and SSH forwarding in Docker 18.09 - Blog post

How to rebuild a Dockerfile quickly by using the cache?

I want to optimize my Dockerfile, and I wish to keep cache files on disk.
But I found that when I run docker build ., it always tries to fetch every file from the network.
I wish to share my cache directory during the build (e.g. /var/cache/yum/x86_64/6).
But that works only with docker run -v ....
Any suggestions? (In this example only one rpm is installed; in the real case I need to install hundreds of rpms.)
My draft Dockerfile:
FROM centos:6.4
RUN yum update -y
RUN yum install -y openssh-server
RUN sed -i -e 's:keepcache=0:keepcache=1:' /etc/yum.conf
VOLUME ["/var/cache/yum/x86_64/6"]
EXPOSE 22
The second time, I want to build a similar image:
FROM centos:6.4
RUN yum update -y
RUN yum install -y openssh-server vim
I don't want to fetch openssh-server from the internet again (it is slow). In my real case it is not one package but about 100 packages.
An update to previous answers: current docker build accepts --build-arg, which passes environment variables like http_proxy without saving them in the resulting image.
Example:
# get squid
docker run --name squid -d --restart=always \
  --publish 3128:3128 \
  --volume /var/spool/squid3 \
  sameersbn/squid:3.3.8-11
# optionally in another terminal run tail on logs
docker exec -it squid tail -f /var/log/squid3/access.log
# get squid ip to use in docker build
SQUID_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' squid)
# build your instance
docker build --build-arg http_proxy=http://$SQUID_IP:3128 .
Just use an intermediate/base image:
Base Dockerfile, build it with docker build -t custom-base or something:
FROM centos:6.4
RUN yum update -y
RUN yum install -y openssh-server vim
RUN sed -i -e 's:keepcache=0:keepcache=1:' /etc/yum.conf
Application Dockerfile:
FROM custom-base
VOLUME ["/var/cache/yum/x86_64/6"]
EXPOSE 22
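With this split, the slow yum layers are baked into custom-base and reused, so only the application Dockerfile gets rebuilt. A sketch of the workflow (the file names are assumptions):
# Build the base once; this is the slow step that downloads packages.
docker build -t custom-base -f Dockerfile.base .
# Application builds start FROM custom-base and skip the downloads.
docker build -t my-app .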
You should use a caching proxy (e.g. Http Replicator, squid-deb-proxy, or apt-cacher-ng for Ubuntu) to cache installation packages. You can install this software on the host machine.
EDIT:
Option 1 - caching HTTP proxy - the easier method, with a modified Dockerfile:
> cd ~/your-project
> git clone https://github.com/gertjanvanzwieten/replicator.git
> mkdir cache
> replicator/http-replicator -r ./cache -p 8080 --daemon ./cache/replicator.log --static
Add this to your Dockerfile (before the first RUN line):
ENV http_proxy http://172.17.42.1:8080/
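Note that 172.17.42.1 was the default docker0 bridge address on older Docker releases, which is how containers reach a proxy on the host; if yours differs, check it (assuming Linux with the standard bridge):
# The host's address on the docker0 bridge is the one containers should use.
ip addr show docker0 | grep 'inet '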
Optionally, clear the cache from time to time.
Option 2 - caching transparent proxy, no modification to Dockerfile:
> cd ~/your-project
> curl -o r.zip https://codeload.github.com/zahradil/replicator/zip/transparent-requests
> unzip r.zip
> rm r.zip
> mv replicator-transparent-requests replicator
> mkdir cache
> replicator/http-replicator -r ./cache -p 8080 --daemon ./cache/replicator.log --static
You need to start the replicator as a regular user (not root!).
Set up the transparent redirect:
> iptables -t nat -A OUTPUT -p tcp -m owner ! --uid-owner <replicator-user> --dport 80 -j REDIRECT --to-port 8080
Disable redirect:
> iptables -t nat -D OUTPUT -p tcp -m owner ! --uid-owner <replicator-user> --dport 80 -j REDIRECT --to-port 8080
This method is the most transparent and general, and your Dockerfile does not need to be modified. Optionally, clear the cache from time to time.
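To confirm the rule is active before running docker build, list the NAT table (standard iptables usage):
# Shows the REDIRECT rule together with its packet counters.
iptables -t nat -L OUTPUT -n -v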
