After two full days of reading and trying things, I humbly come here to ask how to make this work, because nothing from the other answers helped me.
I'm on macOS 10.13.6 (High Sierra)
Running Docker Desktop for Mac 2.2.0.5 (43884)
Engine: 19.03.8
Compose: 1.25.4
I want to run Jenkins to study some pipeline stuff, so this is my docker-compose.yml:
version: "3.2"
services:
jenkins:
build:
dockerfile: dockerfile
context: ./build
ports:
- "8080:8080"
- "50000:50000"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./data:/var/jenkins_home
First problem: the image I'm using, jenkins/jenkins:lts, does not have a Docker client installed, so even after mapping the socket through volumes I can't use docker version; the output of that command is bash: docker: command not found.
This is my pipeline, just for testing (from the Jenkins documentation):
pipeline {
    agent { docker { image 'node:6.3' } }
    stages {
        stage('build') {
            steps {
                sh 'npm --version'
            }
        }
    }
}
So through this plugin, https://plugins.jenkins.io/docker-plugin/, I can go to "Manage Jenkins > Manage Nodes and Clouds > Configure Clouds > Add a new cloud", and under "Docker Cloud details..."
there is a Host URI field where I can put unix:///var/run/docker.sock, so that Jenkins will use the Docker daemon from my macOS host to do what it needs to do.
I tried all the suggestions from the internet: creating the jenkins user, creating the docker user, putting the jenkins user in the docker group, and other things, but none of them work on the Mac.
The vast majority of the existing questions are for Linux, and all of them seem to have solved the problem, but when I try to replicate the steps on macOS it just doesn't work.
Maybe there is some step I'm missing, or something people already know to do that isn't spelled out, but I'm failing miserably.
Some of the steps that I tried:
Created the user and group jenkins:
sudo dscl . create /Users/jenkins UniqueID 1000 PrimaryGroupID 1000
sudo dscl . create /Groups/jenkins gid 1000
Created the group docker:
sudo dscl . create /Groups/docker gid 1001
Added the jenkins user to the docker group:
sudo dscl . append /Groups/docker GroupMembership jenkins
Checked that the user really is in the group:
$ dsmemberutil checkmembership -u 1000 -g 1001
user is a member of the group
Tried to change the owner of the socket from inside the Jenkins container (that's why I was building a custom image), but it didn't work.
Tried to change the ownership of the socket on the macOS host, but it just doesn't change.
The socket always has these permissions:
lrwxr-xr-x 1 root daemon 68B Apr 28 10:14 docker.sock -> /Users/metasix/Library/Containers/com.docker.docker/Data/docker.sock
For Jenkins, the best setup is to have agents that run all the jobs and a master that only does orchestration.
Some years ago, I built a JNLP agent that registers itself with the Jenkins master; you can check my repo here: https://github.com/jmaitrehenry/docker-jenkins-jnlp
As I said, I made it about 3 years ago, so it may be a bit outdated.
About your problem: you need to know that Docker for Mac runs containers inside a small VM, so when you add a user on macOS, the VM doesn't have it. And Docker for Mac does a lot of magic to map uids on your Mac to uids inside containers.
You can try to add the Docker client inside your Dockerfile; for that, try adding these steps:
FROM jenkins/jenkins:lts
[...]
# Switch to root, as the base image switches to the jenkins user
USER root
# Download the docker CLI package and install it
RUN curl -fSL -o docker-ce-cli.deb https://download.docker.com/linux/debian/dists/stretch/pool/stable/amd64/docker-ce-cli_19.03.8~3-0~debian-stretch_amd64.deb && \
    dpkg -i docker-ce-cli.deb && \
    rm docker-ce-cli.deb
# Switch back to the jenkins user
USER jenkins
You need to enable host mode networking by adding network: host to your compose file:
services:
  jenkins:
    build:
      dockerfile: dockerfile
      context: ./build
      network: host
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/var/jenkins_home
This will allow your Jenkins container to see the host's network. The problem is that Docker Desktop for macOS doesn't support listening on a TCP port. There is a known workaround using socat: https://www.ivankrizsan.se/2016/05/21/docker-api-over-http-on-mac-os-x-with-docker-for-mac-beta/. Once you have socat set up to route from docker.sock to TCP port 2376, set your Host URI to tcp://0.0.0.0:2376. And of course you will need to create a new Dockerfile that extends jenkins/jenkins:lts (FROM jenkins/jenkins:lts) and adds Docker to the container, as suggested in another answer.
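A minimal sketch of that socat workaround, assuming the bobrik/socat image used in the linked article (the image, container name, and port choice are assumptions, not part of the original answer):

docker run -d --name socat-docker \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 2376:2376 \
    bobrik/socat \
    TCP-LISTEN:2376,fork UNIX-CONNECT:/var/run/docker.sock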
I ran into the same issue: the jenkins user was not able to run docker commands even after adding the user to the docker group.
When I checked the permissions on the host machine (macOS), the docker.sock file was owned by root:daemon.
ls -lart /var/run/docker.sock
lrwxr-x--x 1 root daemon 37 Feb 1 14:56 /var/run/docker.sock -> /Users/....
I updated the permissions to 755 and it started working. I am able to run docker commands in the container as the jenkins user.
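On the host, that would be something like the following (a sketch; note the caveat below about using this only in development):

sudo chmod 755 /var/run/docker.sock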
Please change the host file permissions like this only in a development environment.
How to run @Testcontainers-based test cases inside a Docker container?
I have a simple Spring Boot app with integration tests (component level) that interact with containers using Testcontainers. The test cases run fine from outside a container (on the local machine).
We are running everything in containers, and the build runs on a Docker Jenkins image.
The Dockerfile creates the jar and then the image. @Testcontainers is not able to find Docker installed.
Below is my Dockerfile:
FROM maven:3.6-jdk-11-openj9
VOLUME ["/var/run/docker.sock"]
RUN apt-get update
RUN apt-get -y install docker.io
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN mvn -Dmaven.repo.local=/root/m2 --batch-mode -f pom.xml clean package
EXPOSE 8080
CMD ["/bin/bash"]
While running the build I am getting the error below:
org.testcontainers.dockerclient.EnvironmentAndSystemPropertyClientProviderStrategy - ping failed with configuration Environment variables, system properties and defaults. Resolved dockerHost=unix:///var/run/docker.sock due to org.rnorth.ducttape.TimeoutException: Timeout waiting for result with exception
What's the best way to handle this case? I want to run my component-level integration tests during the mvn build phase using the Dockerfile.
The reference below did not help me:
https://www.testcontainers.org/supported_docker_environment/continuous_integration/dind_patterns/
This is not a complete answer, but you should enable access to a Docker daemon from inside your container. Installing Docker and running its daemon inside your container is complicated, so it is not recommended. Docker can be controlled via a Unix socket or over TCP (I assume the host system is Linux).
How Testcontainers looks for Docker:
By default it tries to connect to the Unix socket /var/run/docker.sock. You can specify another socket path or a TCP address by setting environment variables (DOCKER_HOST).
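For example (a sketch; the address matches the TCP option described next):

export DOCKER_HOST=tcp://127.0.0.1:2376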
How Docker exposes its control API:
By default, via the Unix socket /var/run/docker.sock (on your host). You can expose the Docker API elsewhere by adding the following parameters to the daemon startup command (the location of the command launching your Docker daemon is system dependent): -H fd:// -H tcp://127.0.0.1:2376. Note that you can specify more than one option: -H fd:// is the default, and tcp://127.0.0.1:2376 tells Docker to also listen on localhost port 2376.
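As a sketch, the daemon invocation with both endpoints enabled would look like this (on systemd-based distributions this is usually configured in the docker.service unit rather than typed by hand):

# Listen on the default socket and additionally on localhost port 2376
dockerd -H fd:// -H tcp://127.0.0.1:2376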
How to make Docker available inside your container ("Docker in Docker"): if you enabled network access, no additional config is needed except pointing Testcontainers at it as mentioned above. If you want to use the default Unix socket, you can map (mount) it into the container via the volume option:
docker run --volume /var/run/docker.sock:/var/run/docker.sock your-image-id-here
The remaining problem is that the docker.sock mounted inside the container will also be owned by root:docker (with the same uid:gid as on your host system), so Testcontainers will work only if your container user can connect to that socket. That is, the user of the running process is root, or happens to have exactly the same group id inside your container as the group id of docker on your host system.
I do not yet know a good solution to this one, so for starters you can run your tests inside the container as root, or hard-code the container user's group id to match your host's docker group id.
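A sketch of the group-id matching approach on a Linux host (the gid 999 and the image name are placeholders; --group-add attaches a supplementary group to the container's user):

# Find the gid of the docker group on the host
getent group docker | cut -d: -f3
# Run the container with that gid (999 here) as a supplementary group
docker run --group-add 999 \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    your-image-id-here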
I'm trying to invoke docker on my OSX host running Docker for Mac 17.06.0-ce-mac17 from inside a running jenkins docker container (jenkins:latest), per the procedure described at http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/.
I mount /var/run/docker.sock into the container, I stick an Ubuntu docker binary inside it, and it's able to execute - but when I run e.g. docker ps from inside the container as user "jenkins", I get:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.30/containers/json?all=1: dial unix /var/run/docker.sock: connect: permission denied.
If I connect to the container as root (docker exec -u 0), it works, though.
I need the jenkins user to be able to run this. I tried adding a docker group and adding jenkins to it inside the Ubuntu container, but that didn't help, since that group has nothing to do with the outside, and Docker for Mac doesn't work like Linux, where you can do semi-easy uid/gid matching. I want to distribute this container, so answers that hack part of my Docker for Mac install won't really help me. I'd rather not run the whole Jenkins setup as root if I can help it. (I also tried running the container as privileged; that didn't help.)
Per the advice in "Permission Denied while trying to connect to Docker Daemon while running Jenkins pipeline in Macbook", I chowned the /var/run/docker.sock file inside the container manually to jenkins, and now jenkins can run docker. But I'm having trouble coming up with a solution for a distributable container: I can't do that chown in the Dockerfile because the file doesn't exist yet, and shimming it into the entrypoint doesn't help because that runs as jenkins.
What do I need to do in order to build and run an image that will run external docker containers on my Mac as a non-root user from inside the container?
Follow this: https://forums.docker.com/t/mounting-using-var-run-docker-sock-in-a-container-not-running-as-root/34390
Basically, all you need to do is change the /var/run/docker.sock permissions inside your container and run docker with sudo.
I've created a Dockerfile that can be used to help:
FROM jenkinsci/blueocean:latest
USER root
# change docker sock permissions after mount
RUN if [ -e /var/run/docker.sock ]; then chown jenkins:jenkins /var/run/docker.sock; fi
I got this working, at least automated, but currently only on Docker for Mac. Docker for Mac has a unique file permission model. Chowning /var/run/docker.sock to the jenkins user manually works, and it persists across container restarts and even image regeneration, but not past Docker daemon restarts. Plus, you can't do the chown in the Dockerfile because docker.sock doesn't exist yet, and you can't do it in the entrypoint because that runs as jenkins.
So what I did was add jenkins to the "staff" group, because on my Mac, /var/run/docker.sock is symlinked down into /Users//Library/Containers/com.docker.docker/Data/s60 and its uid and gid are staff. This lets the jenkins user run docker commands on the host.
Dockerfile:
FROM jenkins:latest
USER root
RUN \
    apt-get update && \
    apt-get install -y build-essential && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
COPY docker /usr/bin/docker
# To allow us to access /var/run/docker.sock on the Mac
RUN gpasswd -a jenkins staff
USER jenkins
ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/jenkins.sh"]
docker-compose.yml file:
version: "3"
services:
jenkins:
build: ./cd_jenkins
image: cd_jenkins:latest
ports:
- "8080:8080"
- "5000:5000"
volumes:
- ./jenkins_home:/var/jenkins_home
- /var/run/docker.sock:/var/run/docker.sock
This is, however, not portable to other systems (and depends on that Docker for Mac group staying "staff", which I imagine isn't guaranteed). I'd love suggested improvements to make this solution work across host systems. Other options, suggested in questions like "Execute docker host command inside jenkins docker container", include:
Install sudo and run all docker commands with sudo: this adds security issues.
"Add jenkins to the docker group": UNIX only, and it probably relies on matching up gids from host to container, right?
Setuid'ing the included docker executable might work, but it has the same security elevation issues as sudo.
Another approach that worked for me: set the uid build argument to the uid that owns /var/run/docker.sock (501 in my case). Not sure of the syntax for the Dockerfile, but for docker-compose.yml it's like this:
version: "3"
services:
  jenkins:
    build:
      context: ./JENKINS
      dockerfile: Dockerfile
      args:
        uid: 501
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ...
Note this is based on using a Dockerfile to build the Jenkins image, so many details are left out. The key bit here is uid: 501 under args.
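On the Dockerfile side, a minimal sketch of consuming that build argument might look like this (the usermod approach and the default value are assumptions, since the original answer leaves the Dockerfile out):

FROM jenkins/jenkins:lts
ARG uid=1000
USER root
# Hypothetical: re-map the jenkins user to the uid passed in from docker-compose
RUN usermod -u "${uid}" jenkins && \
    chown -R jenkins /var/jenkins_home
USER jenkins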
I have two Docker Containers configured through a Docker Compose file.
Docker Container A - (teamcity-agent)
Docker Container B - (build-tool)
Both start up fine. But as part of the build process in TeamCity, I would like the agent (Container A) to run a bash script which is on Docker Container B (only B can run this script).
I tried to set this up using the SSH build step in TeamCity, but I get connection refused.
Further reading into it shows that SSH isn't enabled in containers and that I shouldn't really be trying to SSH into a container.
So how can I get Container A to run the script on Container B and see the output of the script on A?
What is the best practice for this?
The only way without modifying the application itself is through SSH. It is completely false that you cannot SSH into a container; I use SSH to a database container to run database exports inside it.
First, be sure openssh-server is installed on B. Then you must set up a passwordless connection between A and B.
Then be sure you link your containers in the docker-compose file so you won't need to expose the SSH port.
Snippet to add to the Dockerfile for container B:
RUN apt-get install -q -y openssh-server
ADD id_rsa.pub /home/ubuntu/.ssh/authorized_keys
RUN chown -R ubuntu:ubuntu /home/ubuntu/.ssh ; \
    chmod 700 /home/ubuntu/.ssh ; \
    chmod 600 /home/ubuntu/.ssh/authorized_keys
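From container A, the build step would then run something like this (the service name build-tool comes from your compose file; the script path is a placeholder):

ssh ubuntu@build-tool 'bash /opt/scripts/build.sh'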
You can also run the script from outside the containers using docker exec in a crontab on the host. But I think you are not looking for this extreme solution.
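For completeness, such a crontab entry on the host might look like this (schedule, container name, and script path are all placeholders):

# Run the script inside container B every hour
0 * * * * docker exec build-tool bash /opt/scripts/build.sh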
I have built a Docker image with the following Dockerfile:
# gunicorn-flask
FROM devdb/kibana
MAINTAINER John Doe <user.name@gmail.com>
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y python python-pip python-virtualenv gunicorn
# Setup flask application
RUN mkdir -p /deploy/app
COPY gunicorn_config.py /deploy/gunicorn_config.py
COPY app /deploy/app
RUN pip install -r /deploy/app/requirements.txt
WORKDIR /deploy/app
EXPOSE 5000 5601 9200
# Start gunicorn
CMD ["/usr/bin/gunicorn", "--config", "/deploy/gunicorn_config.py", "listener:app"]
I am running the container from the image created from this Dockerfile as follows:
sudo docker run -p 5601:5601 -p 9200:9200 -p 5000:5000 -v /home/Workspace/xits/config/elasticsearch.yml:/opt/elasticsearch/config/elasticsearch.yml -v /home/Workspace/xits/config/kibana.yml:/opt/kibana/config/kibana.yml es-kibana-gunicorn:latest
The issue I am facing is that I cannot access Kibana on port 5601 from my host machine; my browser says ERR_CONNECTION_REFUSED.
I am able to access port 5000, though.
I can't figure out why this is. Any help would be greatly appreciated.
The parent image devdb/kibana uses a script to start kibana and elasticsearch when the Docker container is started. See its CMD ["/sbin/my_init"] and the script itself.
When you use the CMD instruction in your own Dockerfile, you override the one from the parent Dockerfiles.
Since your CMD only starts gunicorn, elasticsearch and kibana won't ever be started. That's why there is no response on their respective network ports.
The Docker image you inherit from itself inherits from phusion/baseimage, which has its own way of making multiple processes run in Docker containers. I recommend you follow the instructions in their README file to learn how to add your gunicorn process to the list of services to start. Basically, you would have to define a script named run and add it to your Docker image within the /etc/service/<service name>/ directory.
In your Dockerfile, add:
COPY run /etc/service/gunicorn/
and the run script should be something similar to:
#!/bin/bash
cd /deploy/app
/usr/bin/gunicorn --config /deploy/gunicorn_config.py listener:app
I am trying to build an image for my Flask-based web application using docker build. My Dockerfile looks like this:
FROM beehive-webstack:latest
MAINTAINER Anuvrat Parashar <anuvrat@zopper.com>
EXPOSE 5000
ADD . /srv/beehive/
RUN pip install -i http://localhost:4040/root/pypi/+simple/ -r /srv/beehive/requirements.txt
pip install without the -i flag works, but it downloads everything from PyPI, which naturally is slow.
The problem is that pip does not access the devpi server running on my laptop. How can I go about achieving that?
localhost refers to the Docker container, not to your host, since RUN lines are just commands executed inside the container. You thus have to use a network-reachable IP of your laptop.
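For example, assuming your laptop's LAN IP is 192.168.0.10 (a placeholder; substitute your own):

RUN pip install -i http://192.168.0.10:4040/root/pypi/+simple/ -r /srv/beehive/requirements.txt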
Con: this makes your Dockerfile unportable if others don't have a PyPI mirror running.
Another answer is a devpi helper container. You start a Docker devpi image and have it expose port 3141. Then you can add this as an extra source for pip install using environment variables in your Dockerfile.
Starting devpi using docker-compose:
devpi:
  image: scrapinghub/devpi
  container_name: devpi
  expose:
    - 3141
  volumes:
    - /path/to/devpi:/var/lib/devpi

myapp:
  build: .
  external_links:
    - devpi:devpi
docker-compose up -d devpi
Now you need to configure the client Docker container. It needs pip configured:
In your Dockerfile:
ENV PIP_EXTRA_INDEX_URL=http://devpi:3141/root/pypi/+simple/ \
    PIP_TRUSTED_HOST=devpi
Check it's working by logging into your container:
docker-compose run myapp bash
pip install --verbose nose
The output should include:
2 location(s) to search for versions of nose:
* https://pypi.python.org/simple/nose/
* http://devpi:3141/root/pypi/+simple/nose/
Now you can upload packages to your devpi container, either from another container or via sftp.
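A sketch of uploading from another container with the devpi client (the dev index name and the empty default root password are assumptions about a fresh devpi install):

# Install the client, create a private index on top of the mirror, and upload
pip install devpi-client
devpi use http://devpi:3141
devpi login root --password=''
devpi index -c dev bases=root/pypi
devpi use root/dev
devpi upload   # run from your package's source directory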
This approach has the advantage of speeding up builds while not breaking them if the devpi container is not present.
Note: don't publish ports to devpi without a strong password, as that is a security issue. People could use it to upload arbitrary code which your application would then install and execute.