How to connect directly to a remote docker container with ssh - bash

I want to connect to a remote running Docker container directly with ssh. Normally I can do
$ ssh -i privateKey user@host
$ docker ps # lists all running containers
$ docker exec -it ***** bash deploy.sh # ***** is the container id; this line runs a deployment script
But I need to run this from a Jenkins pipeline, where I only get one shot. After many tries, I came up with this
$ ssh -tt -i ~/privateKey user@host docker exec -it $(docker ps | grep unique_text | cut -c1-10) /bin/bash deploy.sh
which has not helped my plight, because it returns
"docker exec" requires at least 2 arguments.
which suggests the command is being cut off at $(docker ps | grep ...
My Solution
sh 'ssh -tt -i $FILE -o StrictHostKeyChecking=no $USER@$HOST /bin/bash -c \'"docker exec -it $(docker ps | grep unique_text | cut -c1-10) bash start.sh"\''
The single quotes around the remote command keep the $( ... ) substitution from being expanded locally, so it is evaluated by the shell on the remote host instead.

$ ssh -tt -i ~/privateKey user@host docker exec -it $(docker ps | grep unique_text | cut -c1-10) /bin/bash deploy.sh
That runs the subshell with the docker ps command on your local machine, not the remote one. You'll want the full command to be processed by a shell on the remote server, so quote it to stop the local expansion:
$ ssh -tt -i ~/privateKey user@host '/bin/sh -c "docker exec -it $(docker ps | grep unique_text | cut -c1-10) /bin/bash deploy.sh"'
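Equivalently (a variation on the above, not from the original answer), you can escape the dollar sign so the substitution survives the local shell and reaches the remote one intact:
$ ssh -tt -i ~/privateKey user@host "docker exec -it \$(docker ps | grep unique_text | cut -c1-10) /bin/bash deploy.sh"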

The best solution to this problem is to create a node in Jenkins
Step 1 − Go to the Manage Jenkins section and scroll down to Manage Nodes.
Step 2 − Click on New Node.
Step 3 − Give the node a name, choose the Dumb slave option, and click OK.
Step 4 − Enter the details of the slave machine. In the example below the slave is a Windows machine, so "Let Jenkins control this Windows slave as a Windows service" was chosen as the launch method. Also add the necessary details of the slave node, such as the node name and the login credentials for the node machine, then click Save. The label entered here ("New_Slave") is what jobs reference in order to run on this slave machine.
Once the above steps are completed, the new node will initially be offline, but it will come online if all the settings on the previous screen were entered correctly. You can take the slave node offline at any time if required.
In my Jenkins pipeline
node("build_slave"){
sh 'docker exec -it $(docker ps | grep unique_text | cut -c1-10) bash deploy.sh'
}
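A less brittle lookup than grep | cut is to let docker ps match the container by name (a sketch; "myapp" is a hypothetical container name, and -it is dropped because a pipeline sh step has no TTY):
node("build_slave") {
    // --filter matches the container by name; -q prints only its ID
    sh 'docker exec $(docker ps -q --filter "name=myapp") bash deploy.sh'
}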

Related

Automating password change inside a Docker container

I need a bash script that will:
Launch the container
Generate a password
Enter the container
Run the 'cd /' command
Change the password using htpasswd to the generated one
I tried it like this:
docker restart c1
a = date +%s | sha256sum | base64 | head -c 32 ; echo
docker exec -u 0 -it c1 bash 'echo cd /'
htpasswd user.passwd webdav a
And so:
docker restart c1
docker exec -u 0 -it c1 bash
cd /
a = date +%s | sha256sum | base64 | head -c 32 ; echo
htpasswd user.passwd webdav a
With the first option, I get:
bash: echo cd /: No such file or directory
With the second one, it enters the container and does nothing.
I tried many variations of the script, none of which helped me.
I will be grateful for any help.
You do not need Docker or debugging tools like docker exec just to generate an htpasswd file.
htpasswd is part of the Apache distribution, and you should be able to install it on your host system using your OS package manager. Since it just manipulates a credential file it doesn't need the actual server.
# On the host system, without using Docker at all
sudo apt-get update && sudo apt-get install apache2-utils
# Make sure to wrap the password-generating command in `$()`
a=$(date +%s | sha256sum | base64 | head -c 32)
# Make sure to use a variable reference `$a`; `-c` creates the file and `-b` reads the password from the command line
htpasswd -c -b user.passwd webdav "$a"
This gives you a user.passwd file on your local system. Now when you launch your container, you can bind-mount the file into the container:
docker run -d -p 80:80 ... \
    -v "$PWD/user.passwd:/usr/local/apache2/conf/user.passwd" \
    httpd
The container will be immediately ready to use. If you delete and recreate this container, you do not need to repeat the manual setup step. If you need to launch multiple copies of the container, they can all have the same credentials file without doing manual steps.
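To double-check the entry before shipping the file into a container (assuming the htpasswd from Apache 2.4+, which supports -v to verify a password):
# Exits 0 and confirms the password if the stored hash matches
htpasswd -v -b user.passwd webdav "$a"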

Docker container takes long to start via shell scripting

I am new to shell scripting and recently started with the basics. I have written code to check whether I have Cassandra nodes; it tells me yes or no, and if no, it runs a command to create one. My problem: node1 is already started, and I check whether the node is there, get the ID of that container, and start it. But when I run the script, it gets the container's ID, then takes forever and never starts the container. If I start the container without the shell script, it starts fine, but I want to do it all from the shell.
This is my code:
if sudo docker ps -a | grep -q 'node1';then
sudo docker inspect --format="{{.Id}}" node1
read num
sudo docker start num
elif sudo docker ps -a | grep -q 'node2';then
sudo docker inspect --formar="{{.Id}}" node2
read Idnode2
sudo docker start Idnode2
else
sudo docker run --name node1 -d -e CASSANDRA_BROADCAST_ADDRESS=192.168.1.xx -p 7000:7000 cassandra:2
fi
output:
./tet.sh
f1713abbee52ca465962ec53e97dde62058d37859005f77786db3e3eebe0086c
blinks forever after this
I do not understand why it blinks and never executes.
I solved it myself by using the command below. The hang came from read, which sat blocked waiting for input on stdin; the container ID does not need to be read interactively at all.
if sudo docker ps -a | grep -q 'node1';then
sudo docker inspect --format="{{.Id}}" node1
sudo docker start node1
elif sudo docker ps -a | grep -q 'node2';then
sudo docker inspect --format="{{.Id}}" node2
sudo docker start node2
else
sudo docker run --name node1 -d -e CASSANDRA_BROADCAST_ADDRESS=192.168.1.xx -p 7000:7000 cassandra:2
fi
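A slightly tighter version of the same logic (a sketch, assuming the containers are literally named node1 and node2) that matches names exactly instead of grepping the whole docker ps output:
#!/usr/bin/env bash
# Start whichever node container already exists; otherwise create node1.
if sudo docker ps -a --format '{{.Names}}' | grep -qx 'node1'; then
    sudo docker start node1
elif sudo docker ps -a --format '{{.Names}}' | grep -qx 'node2'; then
    sudo docker start node2
else
    sudo docker run --name node1 -d \
        -e CASSANDRA_BROADCAST_ADDRESS=192.168.1.xx \
        -p 7000:7000 cassandra:2
fi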

Run Docker by using shell script from remote machine?

I want to bring up a Docker Jenkins container and add jobs using the jenkins-cli command. This works when I do it manually, and also when I run the shell script locally. But when I execute the script from a remote machine, the Docker container starts; the commands executed inside the container, however, fail with
cannot enable tty mode on non tty input
cannot enable tty mode on non tty input
My script on docker machine
b="branch1"
sed -i "s/master/$b/g" /root/docker/config.xml
#Run docker jenkins base image
docker run -d -P localhost:5000/jenkins_base2
#Printing docker container
export c=($(docker ps))
echo "${c[8]}"
export x="${c[8]}"
sleep 5
#Copying Config file
docker exec -it ${c[8]} bash -c 'scp root@192.168.0.86:/root/docker/config.xml /root/'
sleep 25
#creating job using jenkins CLI
docker exec -ti ${c[8]} bash -c 'java -jar /opt/apache-tomcat-7.0.68/webapps/jenkins/WEB-INF/jenkins-cli.jar -s http://localhost:8080/ create-job $b < /root/config.xml '
script on remote machine
ssh 192.168.0.86 sh docker.sh
Try ssh with the -tt option. When ssh is given a command to run, it does not allocate a pseudo-terminal, so the docker exec -it calls inside the script have no TTY to attach to; -tt forces one to be allocated.
ssh -tt 192.168.0.86 sh docker.sh
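Alternatively (an option beyond the original answer), drop the TTY flags from the scripted exec calls, since a non-interactive exec does not need a terminal:
# No -t/-i: the exec runs fine without a TTY when scripted
docker exec ${c[8]} bash -c 'scp root@192.168.0.86:/root/docker/config.xml /root/'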

Bash script to get into a running container and then run another bash script from that container

I have a shell script which runs as follows :
image_id=$(docker ps -a | grep postgres | awk -F' ' '{print $1}')
full_id=$(docker ps -a --no-trunc -q | grep $image_id)
docker exec -i -t $full_id bash
When I run this from the base Linux OS, I expect to enter the running postgres container, but the shell script hangs on the 3rd line, at the docker exec step.
My end goal is using the bash script, enter a running postgres container and run another bash script inside that container.
However the same command when I run it from command line, it works fine and gets me into the postgres container.
Please help, I have spent hours and hours on this with no progress.
Thanks again
Your setup is a bit more complex than it needs to be.
docker ps can filter containers directly with the --filter option:
docker ps --no-trunc --quiet --filter="ancestor=postgres"
You can also give containers a --name when you run them, which is less fraught with danger than the script you are attempting:
docker run --detach --name postgres_whatever postgres
docker exec -ti postgres_whatever bash
I'm not sure that your script is hanging as opposed to sitting there waiting for input. Try running a command directly
Using naming
exec_test.sh
#!/usr/bin/env bash
docker exec postgres_whatever echo "I have run the test"
When run
$ ./exec_test.sh
I have run the test
Without naming
exec_filter_test.sh
#!/usr/bin/env bash
id=$(docker ps --no-trunc --quiet --filter="ancestor=postgres")
[ -z "$id" ] && echo "no id" && exit 1
docker exec "${id}" echo "I have run the test"
When run
$ ./exec_filter_test.sh
I have run the test
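Since the stated end goal was to run another bash script inside the container, note (an addition beyond the original answer) that docker exec can read a local script on stdin:
#!/usr/bin/env bash
# inside_container.sh is a hypothetical local script to run in the container
id=$(docker ps --no-trunc --quiet --filter="ancestor=postgres")
[ -z "$id" ] && echo "no id" && exit 1
docker exec -i "${id}" bash < ./inside_container.sh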

Using ssh-agent with docker on macOS

I would like to use ssh-agent to forward my keys into the docker image and pull from a private github repo.
I am using a slightly modified version of https://github.com/phusion/passenger-docker with boot2docker on Yosemite.
ssh-add -l
...key details
boot2docker up
Then I use the command which I have seen in a number of places (i.e. https://gist.github.com/d11wtq/8699521):
docker run --rm -t -i -v $SSH_AUTH_SOCK:/ssh-agent -e SSH_AUTH_SOCK=/ssh-agent my_image /bin/bash
However it doesn't seem to work:
root@299212f6fee3:/# ssh-add -l
Could not open a connection to your authentication agent.
root@299212f6fee3:/# eval `ssh-agent -s`
Agent pid 19
root@299212f6fee3:/# ssh-add -l
The agent has no identities.
root@299212f6fee3:/# ssh git@github.com
Warning: Permanently added the RSA host key for IP address '192.30.252.128' to the list of known hosts.
Permission denied (publickey).
Since version 2.2.0.0, Docker Desktop for macOS allows users to access the host's SSH agent inside containers. Here's an example command that lets you do it:
docker run --rm -it \
    -v /run/host-services/ssh-auth.sock:/ssh-agent \
    -e SSH_AUTH_SOCK="/ssh-agent" \
    my_image
Note that you have to mount the specific path (/run/host-services/ssh-auth.sock) instead of the path contained in $SSH_AUTH_SOCK environment variable, like you would do on linux hosts.
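A quick way to verify that the forwarding works (a sketch, assuming a stock alpine image):
# ssh-add -l inside the container should list the host's keys
docker run --rm \
    -v /run/host-services/ssh-auth.sock:/ssh-agent \
    -e SSH_AUTH_SOCK=/ssh-agent \
    alpine sh -c 'apk add --no-cache openssh >/dev/null && ssh-add -l'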
A one-liner:
Here’s how to set it up on Ubuntu 16 running a Debian Jessie image:
docker run --rm -it --name container_name \
    -v $(dirname $SSH_AUTH_SOCK):$(dirname $SSH_AUTH_SOCK) \
    -e SSH_AUTH_SOCK=$SSH_AUTH_SOCK my_image
https://techtip.tech.blog/2016/12/04/using-ssh-agent-forwarding-with-a-docker-container/
I expanded on @wilwilson's answer and created a script that sets up agent forwarding in an OSX boot2docker environment.
https://gist.github.com/rcoup/53e8dee9f5ea27a51855
#!/bin/bash
# Use a unique ssh socket name per-invocation of this script
SSH_SOCK=boot2docker.$$.ssh.socket
# ssh into boot2docker with agent forwarding
ssh -i ~/.ssh/id_boot2docker \
    -o StrictHostKeyChecking=no \
    -o IdentitiesOnly=yes \
    -o UserKnownHostsFile=/dev/null \
    -o LogLevel=quiet \
    -p 2022 docker@localhost \
    -A -M -S $SSH_SOCK -f -n \
    tail -f /dev/null
# get the agent socket path from the boot2docker vm
B2D_AGENT_SOCK=$(ssh -S $SSH_SOCK docker@localhost echo \$SSH_AUTH_SOCK)
# mount the socket (from the boot2docker vm) onto the docker container
# and set the ssh agent environment variable so ssh tools pick it up
docker run \
    -v $B2D_AGENT_SOCK:/ssh-agent \
    -e "SSH_AUTH_SOCK=/ssh-agent" \
    "$@"
# we're done; kill off the boot2docker ssh agent
ssh -S $SSH_SOCK -O exit docker@localhost
Stick it in ~/bin/docker-run-ssh, chmod +x it, and use docker-run-ssh instead of docker run.
I ran into a similar issue, and was able to make things pretty seamless by using ssh in master mode with a control socket and wrapping it all in a script like this:
#!/bin/sh
ssh -i ~/.vagrant.d/insecure_private_key -p 2222 -A -M -S ssh.socket -f docker@127.0.0.1 tail -f /dev/null
HOST_SSH_AUTH_SOCK=$(ssh -S ssh.socket docker@127.0.0.1 env | grep "SSH_AUTH_SOCK" | cut -f 2 -d =)
docker run -v $HOST_SSH_AUTH_SOCK:/ssh-agent \
    -e "SSH_AUTH_SOCK=/ssh-agent" \
    -t hello-world "$@"
ssh -S ssh.socket -O exit docker@127.0.0.1
Not the prettiest thing in the universe, but much better than manually keeping an SSH session open IMO.
For me, accessing ssh-agent to forward keys worked on OSX Mavericks and Docker 1.5 as follows:
ssh into the boot2docker VM with boot2docker ssh -A. Don't forget the -A option, which enables forwarding of the authentication agent connection.
Inside the boot2docker ssh session:
docker@boot2docker:~$ echo $SSH_AUTH_SOCK
/tmp/ssh-BRLb99Y69U/agent.7750
This session must be left open. Take note of the value of the SSH_AUTH_SOCK environment variable.
In another OS X terminal, issue the docker run command with the SSH_AUTH_SOCK value from step 2, as follows:
docker run --rm -t -i \
    -v /tmp/ssh-BRLb99Y69U/agent.7750:/ssh-agent \
    -e SSH_AUTH_SOCK=/ssh-agent my_image /bin/bash
root@600d0e9b443d:/# ssh-add -l
2048 6c:8e:82:08:74:33:78:61:f9:9a:74:1b:65:46:be:eb /Users/dev/.ssh/id_rsa (RSA)
I don't really like the fact that I have to keep a boot2docker ssh session open to make this work, but until a better solution is found, this at least worked for me.
Socket forwarding doesn't work on OS X yet. Here is a variation of @henrjk's answer brought into 2019, using Docker for Mac instead of the now-obsolete boot2docker.
First, run an ssh server in the container, with /tmp on an exportable volume, like this:
docker run -v tmp:/tmp \
    -v ${HOME}/.ssh/id_rsa.pub:/root/.ssh/authorized_keys:ro \
    -d -p 2222:22 arvindr226/alpine-ssh
Then ssh into this container with agent forwarding
ssh -A -p 2222 root#localhost
Inside that ssh session, find the current socket for ssh-agent:
3f53fa1f5452:~# echo $SSH_AUTH_SOCK
/tmp/ssh-9zjJcSa3DM/agent.7
Now you can run your real container. Just make sure to replace the value of SSH_AUTH_SOCK below with the value you got in the step above:
docker run -it -v tmp:/tmp \
    -e SSH_AUTH_SOCK=/tmp/ssh-9zjJcSa3DM/agent.7 \
    vladistan/ansible
By default, boot2docker shares only files under /Users. SSH_AUTH_SOCK is probably under /tmp, so the -v mounts the agent socket directory of the VM, not the one from your Mac.
If you set up your VirtualBox VM to share /tmp, it should work.
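For example (a sketch, assuming the default VM name boot2docker-vm; the VM must be stopped when adding a permanent share):
# Expose the host's /tmp to the VM so the agent socket path is visible
VBoxManage sharedfolder add boot2docker-vm --name tmp --hostpath /tmp --automount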
Could not open a connection to your authentication agent.
This error occurs when the $SSH_AUTH_SOCK env var is set incorrectly on the host, or not set at all. There are various workarounds you could try. My suggestion, however, is to dual-boot Linux and macOS.
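Before reaching for a workaround, confirm an agent is actually running on the host (a generic check, not from the original answer):
# Exit status 2 from ssh-add -l means no agent is reachable at all
echo "SSH_AUTH_SOCK=$SSH_AUTH_SOCK"
ssh-add -l 2>/dev/null; [ $? -eq 2 ] && eval "$(ssh-agent -s)" && ssh-add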
Additional resources:
Using SSH keys inside docker container - Related Question
SSH and docker-compose - Blog post
Build secrets and SSH forwarding in Docker 18.09 - Blog post
