SSH Key Forwarding - macOS

I am trying to clone a private GitHub repository while building a Docker image. I have Docker 18.09.2 installed and, according to "Build secrets and SSH forwarding in Docker 18.09" and "Using SSH to access private data in builds", I should be able to use the forwarded SSH key by setting up my Dockerfile like this:
# syntax=docker/dockerfile:experimental
FROM node:10.15.3
# Update and install any dependencies.
RUN apt-get update
RUN apt-get -y install openssh-client git
# Clone the private repository
RUN --mount=type=ssh git clone git@github.com:<USER>/<PRIVATE_REPO>.git
I have already added my ssh key using ssh-add and it is listed successfully when running ssh-add -L.
To build the container I then use the following command:
docker build --ssh default .
I am still getting the following error message when trying to build the image:
Host key verification failed.
The Docker client I am using is running on macOS Mojave.

As pointed out in the comments above, the host key is not the same thing as the SSH key. The reason this did not work was unrelated to my forwarded SSH key; instead, I needed to add the host to ~/.ssh/known_hosts:
RUN mkdir -p -m 0600 /root/.ssh && ssh-keyscan github.com >> /root/.ssh/known_hosts
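For reference, a minimal sketch of the full Dockerfile with both pieces in place, assuming the same base image and a placeholder repository URL, would look roughly like this:
# syntax=docker/dockerfile:experimental
FROM node:10.15.3
# Install the SSH client and git needed for the clone.
RUN apt-get update && apt-get -y install openssh-client git
# Trust GitHub's host key before any SSH connection is attempted.
RUN mkdir -p -m 0600 /root/.ssh && ssh-keyscan github.com >> /root/.ssh/known_hosts
# The forwarded agent is only available inside this RUN step.
RUN --mount=type=ssh git clone git@github.com:<USER>/<PRIVATE_REPO>.git
On Docker 18.09 the build also needs BuildKit enabled explicitly, e.g. DOCKER_BUILDKIT=1 docker build --ssh default .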

Related

SSH agent key is not visible/forwarded to Windows Docker container

I am trying to use Docker on Windows to build a Docker image. When building the image, it invokes pip to access remote private GitHub repositories. However, it always returns this error message: git@github.com: Permission denied (publickey). fatal: Could not read from remote repository. It seems that the SSH agent key is not forwarded to the Windows Docker container. I run it in Git Bash on Windows.
My device information is:
Windows Version: Windows 11
Docker Desktop Version: Docker Desktop 4.12.0 (85629)
WSL2 or Hyper-V backend? WSL2 backend
The main part of the Dockerfile is:
FROM python:3.8.13 as builder
RUN mkdir -p -m 0600 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
RUN mkdir -p /home/app/
COPY requirements.txt /requirements.txt
RUN --mount=type=ssh pip install -r /requirements.txt --target
Then, I run the following commands to build the Docker image:
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
docker build --ssh default .
When running RUN --mount=type=ssh pip install -r /requirements.txt --target, pip needs to access private GitHub repositories and install them into the Docker image. But it always returns the permission denied error above; it seems that the SSH agent key is not visible/forwarded in the Docker container. I have already created an SSH key and added it to my GitHub account.
Am I missing something, or could this be an underlying issue with Docker on Windows? Thank you!
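One way to narrow this down, as a debugging sketch rather than a confirmed fix, is to add a throwaway RUN step that only exercises the forwarded agent, and to make sure BuildKit is actually enabled in the same Git Bash session that owns the agent:
# Temporary Dockerfile debug step: prints GitHub's greeting only if the agent is forwarded.
# ssh -T exits non-zero even on success, so the step is allowed to "fail".
RUN --mount=type=ssh ssh -o StrictHostKeyChecking=no -T git@github.com || true
# In the same Git Bash session that holds the key:
export DOCKER_BUILDKIT=1
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
docker build --ssh default .
If the debug step never prints the "successfully authenticated" greeting, the key is not reaching the build at all, which points at the agent/BuildKit setup rather than at pip.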

How to access /var/run/docker.sock from inside a docker container as a non-root user? (macOS Host)

I have installed Docker on a Mac and everything is running fine. I am using a Jenkins Docker image and running it. While using Jenkins as a CI server to build further images by running Docker commands through it, I learned that I have to bind-mount /var/run/docker.sock when running the Jenkins image so it can access the Docker daemon.
I did that, and installed the Docker CLI inside the Jenkins container. But when running docker ps or any other Docker command it throws an error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.28/containers/json: dial unix /var/run/docker.sock: connect: permission denied
When I connect to the container as the root user, it works fine. But switching to the 'jenkins' user throws the above error. I have already added the 'jenkins' user to the sudo list but it does not help.
I found a few articles suggesting adding the 'jenkins' user to the 'docker' group, but to my surprise there is no docker group on the Mac or inside the container.
Any help is much appreciated. Thanks
It looks like the reason this is happening is pretty straightforward: UNIX permissions are not letting the jenkins user read /var/run/docker.sock. Really the easiest option is to just change the group assignment on /var/run/docker.sock from root to another group, and then add jenkins to that group:
[as root, inside the container]
root@host:/# usermod -G docker jenkins
root@host:/# chgrp docker /var/run/docker.sock
This assumes of course that you already have the docker CLI installed, and that a group called docker exists. If not:
[as root, inside the container]
root@host:/# groupadd docker
Alternatively, you could change the world permissions on /var/run/docker.sock to allow non-root users to access the socket, but I wouldn't recommend doing that; it just seems like bad security practice. Similarly, you could outright chown the socket to the jenkins user, although I'd rather just change the group settings.
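If you go the group route, it can also help to check what owner, group and mode the mounted socket actually has inside the container, and to create the group with a matching GID; a rough sketch (the GID placeholder is hypothetical and differs per host):
[as root, inside the container]
root@host:/# ls -l /var/run/docker.sock            # owner, group and mode of the socket
root@host:/# stat -c '%g' /var/run/docker.sock     # numeric GID the new group should match
root@host:/# groupadd -g <GID> docker              # create the group with that GID
root@host:/# usermod -aG docker jenkins            # -a appends, so existing groups are kept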
I'm confused why using sudo didn't work for you. I just tried what I believe is exactly the setup you described and it worked without problems.
Start the container:
[on macos host]
darkstar:~$ docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
docker.io/jenkins/jenkins:lts
darkstar:~$ docker exec -u root -it <container id> /bin/bash
Install Docker CLI:
[as root, inside container]
root@host:/# apt-get update
root@host:/# apt-get -y install apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common
root@host:/# rel_id=$(. /etc/os-release; echo "$ID")
root@host:/# curl -fsSL https://download.docker.com/linux/${rel_id}/gpg > /tmp/dkey
root@host:/# apt-key add /tmp/dkey
root@host:/# add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/${rel_id} \
$(lsb_release -cs) stable"
root@host:/# apt-get update
root@host:/# apt-get -y install docker-ce
Then set up the jenkins user:
[as root, inside container]
root@host:/# usermod -G sudo jenkins
root@host:/# passwd jenkins
[...]
And trying it out:
[as jenkins, inside container]
jenkins@host:/$ sudo docker ps -a
[...]
password for jenkins:
CONTAINER ID IMAGE COMMAND CREATED ...
69340bc13bb2 jenkins/jenkins:lts "/sbin/tini -- /usr/…" 8 minutes ago ...
It seems to work fine for me. Maybe you took a different route to install the Docker CLI? Not sure, but if you want to access the docker socket using sudo, those steps will work. Although, I think it would be easier to just change the group assignment as explained above. Good luck :)
Note: All tests performed using macOS Mojave v10.14.3 running Docker Engine v19.03.2. This doesn't seem to be heavily dependent on the host platform, so I would expect it to work on Linux or any other UNIX-like OS, including other versions of macOS/OSX.
There is no docker group on the Mac host, but this works:
Add the user (e.g. jenkins) to the staff group: sudo dseditgroup -o edit -a jenkins -t user staff
Allow the group to sudo; in sudo visudo add:
%staff ALL = (ALL) ALL

Cannot connect to the Docker daemon when running with sudo

My Docker service is up and running. However when attempting to use Docker by running it with sudo, e.g.:
12:40:26/~ $ sudo docker pull fluxcapacitor/pipeline
Using default tag: latest
I got the following error:
Warning: failed to get default registry endpoint from daemon (Cannot connect to
the Docker daemon. Is the docker daemon running on this host?). Using system
default: https://index.docker.io/v1/
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Note that I had already followed the answers on "Mac OS X sudo docker Cannot connect to the Docker daemon. Is the docker daemon running on this host?" as follows:
docker-machine start default
12:40:36/~ $ docker-machine start default
Starting "default"...
Machine "default" is already running.
docker ps
12:41:20/~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
So what more needs to be done?
This is:
$ docker --version
Docker version 1.11.2, build b9f10c9
on El Capitan.
Output of docker-machine env default
$ eval "$(docker-machine env default)"
$ docker-machine env default
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/macuser/.docker/machine/machines/default"
export DOCKER_MACHINE_NAME="default"
The following command exports a few environment variables that the subsequent docker commands use:
eval "$(docker-machine env default)"
However, if you launch docker with sudo, the exported environment variables are not accessible to the docker executable. You could potentially get it to work by passing the -E flag to sudo, e.g.:
sudo -E docker pull fluxcapacitor/pipeline
But a much easier option is to use docker without root:
docker pull fluxcapacitor/pipeline
You have to set environment variables with:
eval "$(docker-machine env default)"
More about it here.
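If you don't want to repeat the eval in every new shell, one small convenience, assuming you use bash, is to put it in your profile and then confirm the client can reach the daemon without sudo:
echo 'eval "$(docker-machine env default)"' >> ~/.bash_profile
docker info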
I had the same problem on my Mac. When I attempted
# eval "$(docker-machine env default)"
I got this error:
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "192.168.99.100:2376": x509: certificate is valid for 192.168.99.101, not 192.168.99.100
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
To regenerate the certificates, find out the docker machines available:
# docker-machine ls
Output of available docker machines (others omitted):
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default - virtualbox Running tcp://192.168.99.100:2376 Unknown Unable to query docker version: Get https://192.168.99.100:2376/v1.15/version: x509: certificate is valid for 192.168.99.101, not 192.168.99.100
Regenerate certificates for the default docker-machine:
# docker-machine regenerate-certs default
and then set up the docker-machine env for the default docker-machine:
# eval "$(docker-machine env default)"
and it works normally after that.
I also tried the same but it did not work.
Later I tried these steps on AWS:
$ sudo nano /etc/docker/daemon.json
{
"dns": ["8.8.8.8", "8.8.4.4"]
}
$ sudo service docker restart
$ docker pull hello-world

Azure VM with Docker failing to connect

I'm trying to write a PowerShell script to create a VM in Azure with Docker installed. From everything I've read, I should be able to do the following:
$image = "b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_3-LTS-amd64-server-20150908-en-us-30GB"
azure vm docker create -e 22 -l 'North Europe' --docker-cert-dir dockercert --vm-size Small <myvmname> $image $userName $password
docker --tls -H tcp://<myvmname>.cloudapp.net:4243 info
The vm creation works, however the docker command fails with the following error:
An error occurred trying to connect: Get https://myvmname.cloudapp.net:4243/v1.20/info: dial tcp 40.127.169.184:4243: ConnectEx tcp: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
Some articles I've found refer to port 2376 - but that doesn't work either.
Logging onto the Azure portal and viewing the created VM, the Docker VM Extension doesn't seem to have been added and there are no endpoints other than the default SSH one. I was expecting these to have been created by the azure vm docker create command, although I could be wrong about that bit.
A couple of example articles I've looked at are here:
https://azure.microsoft.com/en-gb/documentation/articles/virtual-machines-docker-with-xplat-cli/
http://blog.siliconvalve.com/2015/01/07/get-started-with-docker-on-azure/
However, there's plenty of other articles saying the same thing.
Does anyone know what I'm doing wrong?
I know you are doing nothing wrong. My azurecli-dockerhost connection had been working for months and failed recently. I re-created my docker host using "azure vm docker create" but it does not work any more.
I believe it is a bug that the azure-docker team has to fix.
For the time being, my solution is to:
1) Launch a Ubuntu VM WITHOUT using the Azure docker extension
2) SSH into the VM and install docker with these lines:
sudo su; apt-get -y update
apt-get install linux-image-extra-$(uname -r)
modprobe aufs
curl -sSL https://get.docker.com/ | sh
3) Run docker within this VM directly without relying on a "client" and in particular the azure cli.
If you insist on using the docker client approach, my alternative suggestion would be to update your azure-cli and try 'azure vm docker create' again. Let me know how it goes.
sudo su
apt-get update; apt-get -y install nodejs-legacy; apt-get -y install npm; npm install azure-cli --global
To add an additional answer to my question, it turns out you can do the same using the docker-machine create command:
docker-machine create $vmname --driver azure --azure-publish-settings-file MySubscription.publishsettings
This method works for me.
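Once the machine exists, the usual docker-machine workflow applies; roughly, assuming the $vmname variable from the command above:
# Point the local docker client at the new Azure host and check that it responds:
eval "$(docker-machine env $vmname)"
docker info
docker ps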

error: cannot run ssh: No such file or directory when trying to clone on Windows

I am trying to clone a remote repository on Windows, but when I did this:
git clone git@github.com:organization/xxx.git
I got this error:
error: cannot run ssh: No such file or directory
fatal: unable to fork
Am I missing something?
Check whether you have an SSH client installed. This solves the problem on Docker machines, even when SSH keys are present:
apt-get install openssh-client
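In a Dockerfile on a Debian/Ubuntu base image, the same fix looks roughly like this (a sketch; adjust to your base image and package manager):
RUN apt-get update && apt-get -y install openssh-client git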
You don't have ssh installed (or don't have it within your search path).
You can clone from github via http, too:
git clone http://github.com/organization/xxx
Most likely your GIT_SSH_COMMAND is referencing the wrong identity file (private key).
Try:
export GIT_SSH_COMMAND="ssh -i /home/murphyslaw/.ssh/your-key.id_rsa"
then
git clone git@github.com:organization/xxx.git
I am aware that this is an old topic, but having run into this problem recently, I want to share how I resolved my issue.
You might get this error under these conditions:
You use a URL like this: git@github.com:organization/repo.git
And you run a command like this directly: git clone git@github.com/xxxxx.git while you don't have an SSH client (or it is not present on the PATH)
Or you have an SSH client installed (and git clone xxx.git works fine on the direct command line) but you run the same kind of command through a shell script file
Here, I assume that you don't want to change the protocol from SSH (git@github.com:organization/repo.git) to HTTP (http://github.com/organization/repo.git); in my case I needed the SSH format.
So,
If you do not have an SSH client, first of all you need to install it
If you get this error only when you execute it through a script, then you need to set the GIT_SSH_COMMAND variable, pointing at your private key (identity file), in front of your git command, like this:
GIT_SSH_COMMAND="/usr/bin/ssh -i ~/.ssh/id_rsa" git pull
(Feel free to change it depending on your context)
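If you would rather not export the variable in every script, an alternative, assuming Git 2.10 or later, is to store the same command in the repository (or global) configuration:
# Equivalent to setting GIT_SSH_COMMAND for every git invocation in this repository:
git config core.sshCommand "ssh -i ~/.ssh/id_rsa"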
I had this issue right after my antivirus moved the Cygwin ssh binary to the virus vault and then restored it.
Symptoms:
SSH seems properly installed
SSH can be run from command line without problem
Another option before reinstalling ssh in this particular case: check the ssh command permissions
$ ls -al /usr/bin/ssh.exe
----rwxrwx+
$ chmod 770 /usr/bin/ssh.exe
You can try these as well
ssh-add ~/.ssh/identity_file
chmod 400 ~/.ssh/identity_file
In my case, the new pair of SSH keys linked with my Git account was not accessible.
I had to sudo chmod 777 ~/.ssh/id_rsa.* to resolve the issue.
