Bash script to update docker login not functioning correctly

I'm trying to script updating of the login keys to the AWS docker ECR on a CoreOS instance.
If I run:
docker run --rm --env-file=/etc/aws/environment -ti xueshanf/awscli:latest aws ecr get-login
I get as output:
docker login -u AWS -p CiBwm0YaISJeRtJ ... -e none https://123456789012.us-east-1.amazonaws.com
If I copy and run that, it works perfectly. If I don't but instead use this form:
$(docker run --rm --env-file=/etc/aws/environment -ti xueshanf/awscli:latest aws ecr get-login)
It fails with an error.
/v0/: unable to ping registry endpoint https://123456789012.us-east-1.amazonaws.com
If I try to assign it to a variable, things get weird.
var=$(docker run --rm --env-file=/etc/aws/environment -ti xueshanf/awscli:latest aws ecr get-login)
echo "'$var' string"
Oddly, when I try to quote the string and echo it, the final quote appears in an unexpected place.
docker login -u AWS -p CiBwmEwHgYJ ... YIZIAWUDBAEuMisGdv0KB' stringivOyPO+qNJ3zo87RXwWlOW8TnCtGRd6k6tb0Z35xL2IKMO194+1va56lH0am -e none https://123456789012.dkr.ecr.us-east-1.amazonaws.com
It's quite a long string, is there perhaps some sort of buffer overflow problem here?
How might I get around it?

I've used aws ecr get-login as input to eval before to perform the actual login. Running the same kind of command on my CoreOS machine fails with a similar error:
$ eval $(docker run --rm --env-file=/etc/aws/environment -ti xueshanf/awscli:latest aws --region=us-east-1 ecr get-login)
/v0/: unable to ping registry endpoint https://123456789012.dkr.ecr.us-east-1.amazonaws.com1.amazonaws.com
: no such host: lookup 123456789012.dkr.ecr.us-east-1.amazonaws.com.us-east-1.amazonaws.com
/ca.crte daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/123456789012.dkr.ecr.us-east-1.amazonaws.com
I used good ol' set -x to turn on debugging so I could see exactly what commands were being executed:
$ docker run --rm --env-file=/etc/aws/environment -ti xueshanf/awscli:latest aws --region=us-east-1 ecr get-login
+ docker run --rm --env-file=/etc/aws/environment -ti xueshanf/awscli:latest aws --region=us-east-1 ecr get-login
docker login -u AWS -p CiBwm...hb9E= -e none https://123456789012.dkr.ecr.us-east-1.amazonaws.com
$ eval $(docker run --rm --env-file=/etc/aws/environment -ti xueshanf/awscli:latest aws --region=us-east-1 ecr get-login)
++ docker run --rm --env-file=/etc/aws/environment -ti xueshanf/awscli:latest aws --region=us-east-1 ecr get-login
+ eval docker login -u AWS -p CiBwm...DES0= -e none $'https://123456789012.dkr.ecr.us-east-1.amazonaws.com\r'
++ docker login -u AWS -p CiBwm...DES0= -e none $'https://123456789012.dkr.ecr.us-east-1.amazonaws.com\r'
/v0/: unable to ping registry endpoint https://123456789012.dkr.ecr.us-east-1.amazonaws.com1.amazonaws.com
: no such host: lookup 123456789012.dkr.ecr.us-east-1.amazonaws.com.us-east-1.amazonaws.com
/ca.crte daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/123456789012.dkr.ecr.us-east-1.amazonaws.com
Wait, what? The repository URL is being turned into $'https://123456789012.dkr.ecr.us-east-1.amazonaws.com\r' when the command is fed into eval, which is causing it to be evaluated in a very bizarre manner. That \r seemed like the problem, so I fed the output of the docker command into a text file and then opened it in binary mode with Vim to check. Sure enough, it's a pesky carriage return character. Removing that character with sed did the trick for me:
$ eval $(docker run --rm --env-file=/etc/aws/environment -ti xueshanf/awscli:latest aws --region=us-east-1 ecr get-login | sed -e 's/^M//')
++ sed -e $'s/\r//'
++ docker run --rm --env-file=/etc/aws/environment -ti xueshanf/awscli:latest aws --region=us-east-1 ecr get-login
+ eval docker login -u AWS -p CiBwm...88A8= -e none https://123456789012.dkr.ecr.us-east-1.amazonaws.com
++ docker login -u AWS -p CiBwm...88A8= -e none https://123456789012.dkr.ecr.us-east-1.amazonaws.com
WARNING: login credentials saved in /home/core/.docker/config.json
Login Succeeded
Note: to remove the carriage return, you'll need to type Ctrl+V Ctrl+M to correctly insert the ^M character.
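If typing the literal ^M is awkward (or your editor mangles it), a printable-ASCII alternative is to delete carriage returns with tr. This is a sketch; the printf string is a placeholder standing in for the real get-login output:

```shell
# Delete every carriage return from the command substitution before eval
# sees it; tr -d '\r' is portable across GNU and BSD userlands.
login_cmd=$(printf 'docker login -u AWS -p TOKEN https://registry.example.com\r' | tr -d '\r')
printf '%s\n' "$login_cmd"
```

The real invocation would pipe the docker run ... get-login output through tr -d '\r' before handing it to eval.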
Hope this helps! =]

Docker is inserting the extra carriage return because you've told it to allocate a pseudo-TTY (with the -t option). If you remove the -ti, you won't need to use sed.
eval $(docker run --rm --env-file=/etc/aws/environment \
xueshanf/awscli:latest aws ecr get-login)
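If you want to confirm whether a given invocation still emits the stray CR before wiring it into eval, you can grep the raw output for it. The printf below is a stand-in for the docker run ... get-login call:

```shell
# Simulate a CR-terminated line, as a pseudo-TTY would produce it
# (command substitution strips trailing newlines but keeps the CR):
out=$(printf 'docker login -u AWS ... https://registry\r')
if printf '%s' "$out" | grep -q "$(printf '\r')"; then
  echo "output contains a carriage return"
else
  echo "output is clean"
fi
```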

Related

Redirecting aws cli v2 to file adds extra control characters

If I run the following command:
docker run -v "$PWD/aws":/root/.aws --rm -it amazon/aws-cli wafv2 list-ip-sets --profile dev --scope=CLOUDFRONT --region=us-east-1 --color off
I get the following output:
{
"IPSets": []
}
If I run the following command:
docker run -v "$PWD/aws":/root/.aws --rm -it amazon/aws-cli wafv2 list-ip-sets --profile dev --scope=CLOUDFRONT --region=us-east-1 --color off > test.txt
I get the following in test.txt:
[?1h=
{[m
"IPSets": [][m
}[m
[K[?1l>
I think these are xterm control codes or something - in any case, how do I get the contents of test.txt to match what is output to the terminal? I am on a Mac but my solution needs to work on Mac and Linux.
Apparently the -it parameter being passed to Docker was the problem. Passing just -i made it work.
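Those [?1h codes are terminal-initialization escape sequences: with -t, programs in the container believe they are writing to a terminal (AWS CLI v2 also starts its pager in that case). Besides dropping -t, you can check a captured file for leftover escapes. The file written below is a stand-in for test.txt:

```shell
# Simulate output captured with -t: ESC-prefixed control sequences around JSON.
printf '\033[?1h\033={\n  "IPSets": []\n}\n' > /tmp/test.txt
# Any ESC byte (octal 033) means control codes leaked into the file.
if grep -q "$(printf '\033')" /tmp/test.txt; then
  echo "file contains terminal escape sequences"
fi
# cat -v renders them as ^[ so you can inspect what leaked:
cat -v /tmp/test.txt
```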

chowning the host's bound `docker.sock` inside container breaks host docker

On a vanilla install of Docker for Mac my docker.sock is owned by my local user:
$ stat -c "%U:%G" /var/run/docker.sock
juliano:staff
Even if I add the user and group on my Dockerfile, when trying to run DinD as me, the mount of the docker.sock is created with root:root.
$ docker run -it --rm \
--volume /var/run/docker.sock:/var/run/docker.sock \
--group-add staff \
--user $(id -u):$(id -g) \
"your-average-container:latest" \
/bin/bash -c 'ls -l /var/run/docker.sock'
srw-rw---- 1 root root 0 Jun 17 07:34 /var/run/docker.sock
Going the other way, running DinD as root, chowning the socket, then running commands breaks the host docker.
$ docker run -it --rm \
--volume /var/run/docker.sock:/var/run/docker.sock \
--group-add staff \
"your-average-container:latest" \
/bin/bash
$ chown juliano:staff /var/run/docker.sock
$ sudo su juliano
$ docker ps
[some valid docker output]
$ exit
$ docker ps
Error response from daemon: Bad response from Docker engine
I've seen people reporting chowning as the way to go, so maybe I'm doing something wrong.
Questions:
Why does the host docker break?
Is there some way to prevent host docker from breaking and still giving my user permission to the socket inside docker?
I believe that when you mount the volume, the owner UID/GID is set to the same values as on the host machine (the --user flag simply runs the command as a specific UID/GID; it has no effect on the ownership of mounted volumes).
The main question is: why do you need to chown at all? Can't you just run the commands inside the container as root?
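If running everything as root isn't an option, a common alternative to chowning the mounted socket is to hand the container your user plus the socket's group ID. This is a sketch: the image name is a placeholder, and stat -c is GNU syntax (on macOS the equivalent is stat -f '%g'):

```shell
# Look up the group ID that owns the docker socket on the host
# (falls back to 0 if the socket doesn't exist on this machine).
DOCKER_GID=$(stat -c '%g' /var/run/docker.sock 2>/dev/null || echo 0)
echo "docker socket gid: $DOCKER_GID"

# Pass that GID as a supplementary group; no chown needed, so the
# host's socket ownership is left untouched:
#   docker run -it --rm \
#     --volume /var/run/docker.sock:/var/run/docker.sock \
#     --user "$(id -u):$(id -g)" --group-add "$DOCKER_GID" \
#     your-average-container:latest docker ps
```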

Can't open dashboard

I'm using Windows and installed Zalenium with the .\prepare.bat
Then, when I try to start Zalenium with:
docker run --rm -ti --name zalenium -p 4444:4444 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /tmp/videos:/home/seluser/videos \
--privileged dosel/zalenium start
I get an error on the console:
Copying files for Dashboard...
cp: cannot create regular file '/home/seluser/videos/dashboard.html': No such file or directory
Everything works except the Dashboard.
What am I doing wrong?
I'm using the latest version.
Thank you
The error clearly says that it is trying to create the file under a Linux-style path, /home/seluser/videos/, which will not exist on Windows.
When you start Zalenium, it looks for dashboard.html in the mounted directory. Without this file the dashboard will not be visible.
On Windows you should use a command like the one below.
docker run --rm -ti --name zalenium -p 4444:4444 ^
-v /var/run/docker.sock:/var/run/docker.sock ^
-v /c/Users/your_user_name/temp/videos:/home/seluser/videos ^
--privileged dosel/zalenium start
Zalenium documentation
I'm new to Zalenium, but I found that the run command on the Zalenium GitHub page does not work on all systems.
Try the command I use and let me know if it works for you:
docker run --rm -ti --name zalenium -p 4444:4444 -v /var/run/docker.sock:/var/run/docker.sock --privileged dosel/zalenium start
docker run -d -ti --name zalenium -p 4445:4444 -v /var/run/docker.sock:/var/run/docker.sock -v /tmp/videos:/home/seluser/videos --restart=always --privileged dosel/zalenium start
worked for me; the dashboard opens on port 4445.

"docker run" dies after exiting a bash shell script

I'm attempting to craft system-admin bash tools for starting up a Docker image.
But the container started by docker run keeps dying on me after the bash script exits.
The actual working bash script in question is:
#!/bin/sh
docker run \
--name publicnginx1 \
-v /var/www:/usr/share/nginx/html:ro \
-v /var/nginx/conf:/etc/nginx:ro \
--rm \
-p 80 \
-p 443 \
-d \
nginx
docker ps
Executing the simple script resulted in:
# ./docker-run-nginx.sh
743a6eaa33f435e3e0d211c4047bc9af4d4667dc31cd249e481850f40f848c83
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
743a6eaa33f4 nginx "nginx -g 'daemon of…" 1 second ago Up Less than a second 0.0.0.0:32778->80/tcp, 0.0.0.0:32777->443/tcp publicnginx1
And after the bash script completed, I executed docker ps:
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
There is no Docker running.
What did I do wrong?
Try to run it without --rm.
You can see all containers (including ones that have already exited) using this command:
> docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS
743a6eaa33f4 nginx "nginx -g 'daemon of…" 1 second ago Exited (??) ??
^^^^^
You should be able to see the exit code of the container. Using the container ID, you can also look into its log to understand better what is going on:
docker logs 743a6eaa33f4
If you still can't figure it out, you can start the container with a TTY running bash, and try to run the command inside it:
docker run -it -v /var/www:/usr/share/nginx/html:ro -v /var/nginx/conf:/etc/nginx:ro --rm -p 80 -p 443 nginx bash
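The exit code shown by docker ps -a (also readable with docker inspect --format '{{.State.ExitCode}}' <container>) follows normal shell conventions, so it tells you why the process stopped, not just that it stopped. A quick illustration of reading an exit status, with sh -c standing in for a crashing container entrypoint:

```shell
# A process that fails with a specific code, like a failing entrypoint:
sh -c 'echo "simulated nginx startup failure" >&2; exit 3'
status=$?
echo "process exited with code $status"
```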

Using ssh-agent with docker on macOS

I would like to use ssh-agent to forward my keys into the docker image and pull from a private github repo.
I am using a slightly modified version of https://github.com/phusion/passenger-docker with boot2docker on Yosemite.
ssh-add -l
...key details
boot2docker up
Then I use the command which I have seen in a number of places (i.e. https://gist.github.com/d11wtq/8699521):
docker run --rm -t -i -v $SSH_AUTH_SOCK:/ssh-agent -e SSH_AUTH_SOCK=/ssh-agent my_image /bin/bash
However it doesn't seem to work:
root@299212f6fee3:/# ssh-add -l
Could not open a connection to your authentication agent.
root@299212f6fee3:/# eval `ssh-agent -s`
Agent pid 19
root@299212f6fee3:/# ssh-add -l
The agent has no identities.
root@299212f6fee3:/# ssh git@github.com
Warning: Permanently added the RSA host key for IP address '192.30.252.128' to the list of known hosts.
Permission denied (publickey).
Since version 2.2.0.0, Docker Desktop for Mac allows users to access the host's SSH agent inside containers.
Here's an example command that lets you do it:
docker run --rm -it \
-v /run/host-services/ssh-auth.sock:/ssh-agent \
-e SSH_AUTH_SOCK="/ssh-agent" \
my_image
Note that you have to mount the specific path (/run/host-services/ssh-auth.sock) instead of the path contained in the SSH_AUTH_SOCK environment variable, as you would on Linux hosts.
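A quick sanity check once you're inside the container: ssh-add can only reach the agent if SSH_AUTH_SOCK points at an existing socket, so testing for that narrows down whether the mount or the agent is the problem:

```shell
# Verify the agent socket is actually mounted where the variable says.
if [ -S "${SSH_AUTH_SOCK:-}" ]; then
  echo "agent socket present at $SSH_AUTH_SOCK"
  ssh-add -l || echo "socket exists but the agent refused the connection"
else
  echo "no agent socket at '${SSH_AUTH_SOCK:-unset}'"
fi
```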
A one-liner:
Here’s how to set it up on Ubuntu 16 running a Debian Jessie image:
docker run --rm -it --name container_name \
-v $(dirname $SSH_AUTH_SOCK):$(dirname $SSH_AUTH_SOCK) \
-e SSH_AUTH_SOCK=$SSH_AUTH_SOCK my_image
https://techtip.tech.blog/2016/12/04/using-ssh-agent-forwarding-with-a-docker-container/
I expanded on @wilwilson's answer, and created a script that will set up agent forwarding in an OSX boot2docker environment.
https://gist.github.com/rcoup/53e8dee9f5ea27a51855
#!/bin/bash
# Use a unique ssh socket name per-invocation of this script
SSH_SOCK=boot2docker.$$.ssh.socket
# ssh into boot2docker with agent forwarding
ssh -i ~/.ssh/id_boot2docker \
-o StrictHostKeyChecking=no \
-o IdentitiesOnly=yes \
-o UserKnownHostsFile=/dev/null \
-o LogLevel=quiet \
-p 2022 docker@localhost \
-A -M -S $SSH_SOCK -f -n \
tail -f /dev/null
# get the agent socket path from the boot2docker vm
B2D_AGENT_SOCK=$(ssh -S $SSH_SOCK docker@localhost echo \$SSH_AUTH_SOCK)
# mount the socket (from the boot2docker vm) onto the docker container
# and set the ssh agent environment variable so ssh tools pick it up
docker run \
-v $B2D_AGENT_SOCK:/ssh-agent \
-e "SSH_AUTH_SOCK=/ssh-agent" \
"$#"
# we're done; kill off the boot2docker ssh agent
ssh -S $SSH_SOCK -O exit docker@localhost
Stick it in ~/bin/docker-run-ssh, chmod +x it, and use docker-run-ssh instead of docker run.
I ran into a similar issue, and was able to make things pretty seamless by using ssh in master mode with a control socket and wrapping it all in a script like this:
#!/bin/sh
ssh -i ~/.vagrant.d/insecure_private_key -p 2222 -A -M -S ssh.socket -f docker@127.0.0.1 tail -f /dev/null
HOST_SSH_AUTH_SOCK=$(ssh -S ssh.socket docker@127.0.0.1 env | grep "SSH_AUTH_SOCK" | cut -f 2 -d =)
docker run -v $HOST_SSH_AUTH_SOCK:/ssh-agent \
-e "SSH_AUTH_SOCK=/ssh-agent" \
-t hello-world "$@"
ssh -S ssh.socket -O exit docker@127.0.0.1
Not the prettiest thing in the universe, but much better than manually keeping an SSH session open IMO.
For me accessing ssh-agent to forward keys worked on OSX Mavericks and docker 1.5 as follows:
ssh into the boot2docker VM with boot2docker ssh -A. Don't forget to use option -A which enables forwarding of the authentication agent connection.
Inside the boot2docker ssh session:
docker@boot2docker:~$ echo $SSH_AUTH_SOCK
/tmp/ssh-BRLb99Y69U/agent.7750
This session must be left open. Take note of the value of the SSH_AUTH_SOCK environmental variable.
In another OS X terminal issue the docker run command with the SSH_AUTH_SOCK value from step 2 as follows:
docker run --rm -t -i \
-v /tmp/ssh-BRLb99Y69U/agent.7750:/ssh-agent \
-e SSH_AUTH_SOCK=/ssh-agent my_image /bin/bash
root#600d0e9b443d:/# ssh-add -l
2048 6c:8e:82:08:74:33:78:61:f9:9a:74:1b:65:46:be:eb /Users/dev/.ssh/id_rsa (RSA)
I don't really like the fact that I have to keep a boot2docker ssh session open to make this work, but until a better solution is found, this at least worked for me.
Socket forwarding doesn't work on OS X yet. Here is a variation of @henrjk's answer, brought into 2019, using Docker for Mac instead of the now-obsolete boot2docker.
First run a ssh server in the container, with /tmp being on the exportable volume. Like this
docker run -v tmp:/tmp \
-v ${HOME}/.ssh/id_rsa.pub:/root/.ssh/authorized_keys:ro \
-d -p 2222:22 arvindr226/alpine-ssh
Then ssh into this container with agent forwarding
ssh -A -p 2222 root@localhost
Inside of that ssh session find out the current socket for ssh-agent
3f53fa1f5452:~# echo $SSH_AUTH_SOCK
/tmp/ssh-9zjJcSa3DM/agent.7
Now you can run your real container. Just make sure to replace the value of SSH_AUTH_SOCK below, with the value you got in the step above
docker run -it -v tmp:/tmp \
-e SSH_AUTH_SOCK=/tmp/ssh-9zjJcSa3DM/agent.7 \
vladistan/ansible
By default, boot2docker shares only files under /Users. SSH_AUTH_SOCK is probably under /tmp, so the -v flag mounts the agent socket of the VM, not the one from your Mac.
If you set up your VirtualBox VM to share /tmp, it should work.
Could not open a connection to your authentication agent.
This error occurs when $SSH_AUTH_SOCK env var is set incorrectly on the host or not set at all. There are various workarounds you could try. My suggestion, however, is to dual-boot Linux and macOS.
Additional resources:
Using SSH keys inside docker container - Related Question
SSH and docker-compose - Blog post
Build secrets and SSH forwarding in Docker 18.09 - Blog post
