SSH agent key is not visible/forwarded to Windows Docker container - windows

I am trying to use Docker on Windows to build a Docker image. During the build, pip needs to access remote private GitHub repositories, but it always fails with this error message: git@github.com: Permission denied (publickey). fatal: Could not read from remote repository. It seems that the SSH agent key is not forwarded into the Windows Docker container. I run the build from Git Bash on Windows.
My device information is:
Windows Version: Windows 11
Docker Desktop Version: Docker Desktop 4.12.0 (85629)
WSL2 or Hyper-V backend? WSL2 backend
The main part of the Dockerfile is:
FROM python:3.8.13 as builder
RUN mkdir -p -m 0600 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
RUN mkdir -p /home/app/
COPY requirements.txt /requirements.txt
RUN --mount=type=ssh pip install -r /requirements.txt --target
Then, run the following commands to build the Docker image:
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
docker build --ssh default .
When running RUN --mount=type=ssh pip install -r /requirements.txt --target, pip needs to access private GitHub repositories and install them into the image. But it always returns the permission denied error above - it seems that the SSH agent key is not visible/forwarded inside the container. I have already created an SSH key and added it to my GitHub account.
I am just wondering if I missed something? Or it may be an underlying issue with Windows Docker? Thank you!
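One way to narrow this down is a temporary debug step in the Dockerfile (a sketch, not part of the original build: ssh-add -l lists the keys visible through the mounted agent socket, and the ssh -T call against GitHub prints a greeting but exits non-zero even on success, hence the || true):

```dockerfile
# Temporary debug step: shows whether the forwarded agent and its keys
# are visible inside this RUN. Remove once the build works.
RUN --mount=type=ssh ssh-add -l && \
    ssh -o StrictHostKeyChecking=accept-new -T git@github.com || true
```

If ssh-add -l reports "The agent has no identities" here while it lists the key on the host, the socket is not being forwarded by the Windows/WSL2 side.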

Cannot mount a volume without using sudo

The following command works and mounts the local volume:
sudo docker run -ti -v "$PWD/codebase/realsmart-saml-copy":/var/www/html realsmart-docker_smartlogin bash
The following command does not work and does not mount the volume
docker run -ti -v "$PWD/codebase/realsmart-saml-copy":/var/www/html realsmart-docker_smartlogin bash
For some reason, Docker is only able to mount volumes when using sudo, rendering our local Docker environment useless on a colleague's laptop. The same docker-compose file works on my laptop (also a Mac, same OS).
Any idea as to what the issue might be with his laptop configuration? Or indeed the docker setup.
(The code extract is to make clear the problem with mounting volumes, the same issue presents itself using a compose.yml file.)
Non working code:
docker run -ti -v "$PWD/codebase/realsmart-saml-copy":/var/www/html realsmart-docker_smartlogin bash
No error messages are displayed, but the results are not as expected as the volume does not mount without using sudo.
Try to see if the user is part of the docker group.
It would make sense that sudo works, but not for the local user, if that local user is not part of the docker group.
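One quick way to check membership (a sketch; this applies on Linux hosts, where the Docker socket is group-owned by docker):

```shell
# Print the current user's groups; grep exits non-zero (no output) when
# the docker group is missing, which would explain the sudo-only behaviour.
id -nG "$USER" | grep -w docker
```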
The solution, for anyone interested:
After upgrading to Docker Desktop, Boot2Docker has been replaced.
Steps to fix the issue:
docker-machine rm machine-name
unset DOCKER_TLS_VERIFY
unset DOCKER_CERT_PATH
unset DOCKER_MACHINE_NAME
unset DOCKER_HOST
restart Docker Desktop
cd path/to/docker-project.
docker-compose build
docker-compose up (or docker run)
project now available on localhost
Further details: https://docs.docker.com/docker-for-mac/docker-toolbox/
Add your user to the docker group.
sudo usermod -aG docker $USER

SSH Key Forwarding

I am trying to clone a private github repository when building a docker image. I have docker 18.09.2 installed and according to Build secrets and SSH forwarding in Docker 18.09 and Using SSH to access private data in builds, I should be able to use the forwarded ssh key by setting up my Dockerfile like this:
# syntax=docker/dockerfile:experimental
FROM node:10.15.3
# Update and install any dependencies.
RUN apt-get update
RUN apt-get -y install openssh-client git
# Clone the private repository
RUN --mount=type=ssh git clone git@github.com:<USER>/<PRIVATE_REPO>.git
I have already added my ssh key using ssh-add and it is listed successfully when running ssh-add -L.
To build the container I then use the following command:
docker build --ssh default .
I am still getting the following error message when trying to build the image:
Host key verification failed.
The docker client I am using is running macOS Mojave.
As pointed out in the comments above, the host key is not the same thing as the SSH key, so the failure was not related to my forwarded SSH key. Instead, I needed to add the host to ~/.ssh/known_hosts:
RUN mkdir -p -m 0600 /root/.ssh && ssh-keyscan github.com >> /root/.ssh/known_hosts
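To confirm the keyscan step actually produced entries, a quick check can help (a sketch: ssh-keyscan writes one line per key type, each beginning with the host name):

```shell
# Count the known_hosts entries for github.com; a result of 0 means the
# keyscan step failed (e.g. no network access during the build).
grep -c '^github\.com ' /root/.ssh/known_hosts
```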

How to access /var/run/docker.sock from inside a docker container as a non-root user? (MacOS Host)

I have installed Docker on my Mac and everything is running fine. I am using a Jenkins Docker image and running it. While using Jenkins as a CI server to build further images by running docker commands through it, I learned that we have to bind mount /var/run/docker.sock when running the Jenkins image so it can access the Docker daemon.
I did that, and installed the Docker CLI inside the Jenkins container. But when running docker ps or any other docker command it throws an error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.28/containers/json: dial unix /var/run/docker.sock: connect: permission denied
When I connect to the container as the root user, it works fine, but switching to the jenkins user throws the above error. I have already added the jenkins user to the sudo list, but that does not help.
I found a few articles suggesting adding the jenkins user to the docker group, but to my surprise there is no docker group on the Mac or inside the container.
Any help is much appreciated. Thanks
It looks like the reason this is happening is pretty straightforward: UNIX permissions are not letting the jenkins user read /var/run/docker.sock. The easiest option is to change the group assignment on /var/run/docker.sock from root to another group, and then add jenkins to that group:
[as root, inside the container]
root@host:/# usermod -aG docker jenkins
root@host:/# chgrp docker /var/run/docker.sock
This assumes of course that you already have the docker CLI installed, and that a group called docker exists. If not:
[as root, inside the container]
root@host:/# groupadd docker
Alternatively, you could change the world permissions on /var/run/docker.sock to allow non-root users to access the socket, but I wouldn't recommend doing that; it just seems like bad security practice. Similarly, you could outright chown the socket to the jenkins user, although I'd rather just change the group settings.
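When the socket's group inside the container has no name at all (common with a bind-mounted macOS socket), a variant that keys off the socket's numeric GID can be more robust. This is a sketch, assuming a Linux-based image with GNU stat, getent, groupadd, and usermod available:

```shell
# Sketch: grant the jenkins user access by matching the socket's GID.
SOCK_GID=$(stat -c '%g' /var/run/docker.sock)
# Reuse an existing group with that GID, or create a 'docker' group for it.
GRP=$(getent group "$SOCK_GID" | cut -d: -f1)
if [ -z "$GRP" ]; then
    groupadd -g "$SOCK_GID" docker
    GRP=docker
fi
usermod -aG "$GRP" jenkins
```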
I'm confused why using sudo didn't work for you. I just tried what I believe is exactly the setup you described and it worked without problems.
Start the container:
[on macos host]
darkstar:~$ docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
docker.io/jenkins/jenkins:lts
darkstar:~$ docker exec -u root -it <container id> /bin/bash
Install Docker CLI:
[as root, inside container]
root@host:/# apt-get update
root@host:/# apt-get -y install apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common
root@host:/# rel_id=$(. /etc/os-release; echo "$ID")
root@host:/# curl -fsSL https://download.docker.com/linux/${rel_id}/gpg > /tmp/dkey
root@host:/# apt-key add /tmp/dkey
root@host:/# add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/${rel_id} \
$(lsb_release -cs) stable"
root@host:/# apt-get update
root@host:/# apt-get -y install docker-ce
Then set up the jenkins user:
[as root, inside container]
root@host:/# usermod -aG sudo jenkins
root@host:/# passwd jenkins
[...]
And trying it out:
[as jenkins, inside container]
jenkins@host:/$ sudo docker ps -a
[...]
password for jenkins:
CONTAINER ID IMAGE COMMAND CREATED ...
69340bc13bb2 jenkins/jenkins:lts "/sbin/tini -- /usr/…" 8 minutes ago ...
It seems to work fine for me. Maybe you took a different route to install the Docker CLI? Not sure, but if you want to access the Docker socket using sudo, those steps will work. Although I think it would be easier to just change the group assignment as explained above. Good luck :)
Note: All tests performed using macOS Mojave v10.14.3 running Docker Engine v19.03.2. This doesn't seem to be heavily dependent on the host platform, so I would expect it to work on Linux or any other UNIX-like OS, including other versions of macOS/OSX.
No, but this works:
Add the user (e.g. jenkins) to the staff group: sudo dseditgroup -o edit -a jenkins -t user staff
Allow group to sudo, in sudo visudo add:
%staff ALL = (ALL) ALL

docker: command not found with Jenkins build and publish plugin on Mac

I am new to using Jenkins and Docker plugins. I have Jenkins installed on my macOS machine. I am trying to build a project on Jenkins using the Docker build and publish plugin as a build step.
It fails with below error
java.io.IOException: Cannot run program "docker" (in directory "***"): error=2, No such file or directory
Looks like docker is not available to the jenkins user, but it is available to root and other users on my Mac.
sudo su jenkins
bash-3.2$ docker ps
bash: docker: command not found
sudo su XXX
bash-3.2$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bash-3.2$
Is this some permissions issue? Can you please help?
Thanks
Inspect the permissions of the docker binary:
stat $(which docker)
Check the owner and group; on macOS, binaries are usually in the staff group. Try adding your jenkins user to that group:
sudo dseditgroup -o edit -a jenkins -t user staff
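To verify the membership took effect, the portable groups command works on both macOS and Linux (a sketch; jenkins is the user name assumed above):

```shell
# List the jenkins user's groups; 'staff' should now appear in the output.
groups jenkins
```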

Azure VM with Docker failing to connect

I'm trying to write a PowerShell script to create a VM in Azure with Docker installed. From everything I've read, I should be able to do the following:
$image = "b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_3-LTS-amd64-server-20150908-en-us-30GB"
azure vm docker create -e 22 -l 'North Europe' --docker-cert-dir dockercert --vm-size Small <myvmname> $image $userName $password
docker --tls -H tcp://<myvmname>.cloudapp.net:4243 info
The vm creation works, however the docker command fails with the following error:
An error occurred trying to connect: Get https://myvmname.cloudapp.net:4243/v1.20/info: dial tcp 40.127.169.184:4243: ConnectEx tcp: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
Some articles I've found refer to port 2376 - but that doesn't work either.
Logging onto the Azure portal and viewing the created VM, the Docker VM Extension doesn't seem to have been added, and there are no endpoints other than the default SSH one. I was expecting these to have been created by the azure vm docker create command, although I could be wrong about that.
A couple of example articles I've looked at are here:
https://azure.microsoft.com/en-gb/documentation/articles/virtual-machines-docker-with-xplat-cli/
http://blog.siliconvalve.com/2015/01/07/get-started-with-docker-on-azure/
However, there's plenty of other articles saying the same thing.
Does anyone know what I'm doing wrong?
I know you are doing nothing wrong. My azurecli-dockerhost connection had been working for months and failed recently. I re-created my docker host using "azure vm docker create" but it does not work any more.
I believe it is a bug that the azure-docker team has to fix.
For the time being, my solution is to:
1) Launch an Ubuntu VM WITHOUT using the Azure Docker extension
2) SSH into the VM and install Docker with these lines:
sudo su
apt-get -y update
apt-get -y install linux-image-extra-$(uname -r)
modprobe aufs
curl -sSL https://get.docker.com/ | sh
3) Run docker within this VM directly without relying on a "client" and in particular the azure cli.
If you insist on using the docker client approach, my alternative suggestion would be to update your azure-cli and try 'azure vm docker create' again. Let me know how it goes.
sudo su
apt-get update; apt-get -y install nodejs-legacy; apt-get -y install npm; npm install azure-cli --global
To add an additional answer to my own question, it turns out you can do the same using the docker-machine create command:
docker-machine create $vmname --driver azure --azure-publish-settings-file MySubscription.publishsettings
This method works for me.
