Running maven integration tests with docker-maven-plugin in Jenkins

I have a maven project that uses the io.fabric8 docker-maven-plugin to launch a database as part of my integration tests. When I run the integration tests locally it works, but when they run on my Jenkins server I get an error saying there is no DOCKER_HOST variable.
[ERROR] Failed to execute goal io.fabric8:docker-maven-plugin:0.20.1:start (prepare-itdatabase) on project myproject: Execution prepare-itdatabase of goal io.fabric8:docker-maven-plugin:0.20.1:start failed: No <dockerHost> given, no DOCKER_HOST environment variable, no read/writable '/var/run/docker.sock' or '//./pipe/docker_engine' and no external provider like Docker machine configured -> [Help 1]
It might be worth mentioning that my Jenkins instance itself is launched through docker, by simply using something like docker run jenkins.
I tried to set the DOCKER_HOST variable to tcp://192.168.59.103:2375 when starting Jenkins, but that just caused it to time out in the build.

my Jenkins instance itself is launched through docker, by simply using something like docker run jenkins
I assume you run your build directly on the Jenkins master in your container (no slave). Your build process runs inside the container, which has neither the Docker binaries nor the Docker socket available.
You'll have to mount the Docker socket into your container and install the Docker binaries. This blog post explains things in detail; in short, what you can do is:
Launch your Jenkins container with Docker socket:
docker run -p 8080:8080 \
-v /var/run/docker.sock:/var/run/docker.sock \
--name jenkins \
jenkins/jenkins:lts
And install Docker inside the container. Said blog post gives a handy script to be run inside the container:
apt-get update && \
apt-get -y install apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common && \
curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
$(lsb_release -cs) \
stable" && \
apt-get update && \
apt-get -y install docker-ce
You can run this script by hand or as part of your Jenkins build. From then on you should be able to run Docker commands from inside your container (and in your builds).
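To verify the setup, a quick check from the host (assuming the container is named jenkins, as in the run command above):
# open a shell in the running Jenkins container and ask the client to hit the daemon
docker exec -it jenkins docker version
If this prints both client and server versions, the mounted socket is reachable; a permission error at this point usually means the jenkins user can't read the socket.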
Alternatively, you can configure a Jenkins Slave independently from your Master and have Docker installed on this Slave.

Related

Can you convert/build a docker image into a full OS image?

I have made a Docker container with some code for deployment. However, I realised that with the structure of the project I'm working with, it's more suitable to deploy a full ISO image instead of running Docker on top of a cloud VM running stock Debian, which adds unnecessary layers of virtualization.
I know that Docker containers are meant to be deployed on Kubernetes, but before diving into that route, is there a simple way to convert a Debian 9 Docker image into a full Debian 9 OS image? Like an opposite of docker import?
You can convert your Docker container to a full OS image. A Debian Dockerfile example would be:
FROM debian:stretch
RUN apt-get update && apt-get -y install --no-install-recommends \
linux-image-amd64 \
systemd-sysv
In principle you have to install a kernel and an init system.
Full instructions can be found on GitHub.
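For reference, a rough sketch of the overall flow those instructions describe (image name, size, and mount point are illustrative; a bootloader still has to be installed before the result will boot):
# create an empty raw disk image and put a filesystem on it
dd if=/dev/zero of=disk.img bs=1M count=2048
mkfs.ext4 disk.img
mkdir -p /mnt/rootfs
sudo mount -o loop disk.img /mnt/rootfs
# dump the image's filesystem into it
cid=$(docker create my-debian-image)
docker export "$cid" | sudo tar -x -C /mnt/rootfs
docker rm "$cid"
sudo umount /mnt/rootfs
# grub (or another bootloader) must still be installed into disk.img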
Docker images don't contain a Linux kernel and aren't configured to do things like run a full init system or go out and get their network configuration, so this won't work well.
Put differently: the same mismatch that keeps docker import from working well (the resulting container has too much stuff and isn't set up for Docker) means you can't export a container or image into a VM and expect it to work in that environment.
But! If you've written a Dockerfile for your image, that's very close to saying "I'm going to start from this base Linux distribution and run this shell script to get an image". Tools like Packer are set up to take a base Linux VM image, run some sort of provisioner, and create a new image. If you have a good Docker setup already, and decide a VM is a better fit, you've probably done a lot of the legwork already to have an automated build of your VM image.
#info
We can use Docker like a VM / Flatpak / AppImage / Termux, with GUI, audio, and hardware acceleration :)
How to do it?
1. Remove the old Docker packages:
sudo apt-get remove docker docker-engine docker.io containerd runc
2. Install the prerequisites:
sudo apt-get update
sudo apt install apt-transport-https ca-certificates curl gnupg lsb-release
3. Add the Docker repository:
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
"deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
4. Install Docker:
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
5. Create a Docker image. As root, create a Dockerfile (kwrite is used here, but any editor works):
su
cd /home/
kwrite Dockerfile
FROM debian:testing
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get -y install midori bash mpv pulseaudio pulseaudio-utils \
    pulsemixer pulseaudio-module-jack apulse neofetch vlc smplayer wget sudo \
    cairo-dock cairo-dock-plug-ins xfce4 xfce4-goodies falkon kde-full \
    xfce4-whiskermenu-plugin gnome tigervnc-standalone-server openssh-server \
    openssh-client network-manager net-tools iproute2 gerbera openjdk-11-jdk \
    mediainfo dcraw htop gimp krita libreoffice python3-pip terminator uget alsa-utils
ENV PULSE_SERVER=tcp:host.docker.internal:4713
CMD bash
Save it, then run in a terminal:
sudo docker build -t supersifobian .
To use the GUI, allow X connections and install the X helper packages:
xhost +
sudo apt-get -y install xpra xserver-xephyr xinit xauth \
xclip x11-xserver-utils x11-utils
Run the container:
sudo docker run -ti --net=host --privileged --cap-add=ALL \
  --device=/dev/dri:/dev/dri --device /dev/snd \
  -e DISPLAY=:0 -e PULSE_SERVER=tcp:$PULSE_SERVER \
  --group-add audio \
  -v /dev:/dev \
  -v /var/run/docker.sock:/host/var/run/docker.sock \
  -v /dev:/host/dev -v /proc:/host/proc:ro -v /boot:/host/boot:ro \
  -v /lib/modules:/host/lib/modules:ro -v /usr:/host/usr:ro \
  -v /media:/host/media:ro -v /home:/host/home:ro \
  <image id or name>
N.B.
Enter the container:
docker exec -it <container name or id> bash
Add a user:
adduser <username>
Make the user a sudoer:
usermod -aG sudo <username>
Add the user to the audio group:
usermod -aG audio <username>
Audio in the Docker CLI, using paprefs:
a. Install paprefs:
apt-get install paprefs
b. Enable network access in paprefs.
c. To find the PulseAudio port, run:
pax11publish
d. Export it in a terminal inside Docker:
export "PULSE_SERVER=tcp:192.168.43.135:37721"
Save the container as an image:
docker commit <container id> <image name>:<version>
Check your images:
docker images
thanks :)

Stackdriver agent in docker container

Is it possible to set up a generic Docker image with the Stackdriver monitoring agent, so that logging data from within the container is sent to Stackdriver, and the image can be used across VM instances on both GCE and AWS?
Update
FROM ubuntu:16.04
USER root
ADD . /
ENV GOOGLE_APPLICATION_CREDENTIALS="/etc/google/auth/application_default_credentials.json"
RUN apt-get update && apt-get -y --no-install-recommends install wget curl python-minimal ca-certificates lsb-release libidn11 openssl && \
    curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh && \
    bash install-logging-agent.sh
I'm following the documentation exactly. The installation goes fine, but google-fluentd fails to start/restart.
Thanks in advance.
Yes, this should be possible according to the documentation.
You will need to make sure that the Stackdriver agent is installed and configured correctly in your Docker image.
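One common reason google-fluentd fails to start inside a container is that the install script tries to start it via the init system, which isn't running in a container. A sketch that starts the agent in the foreground as the container's command instead (the image name is hypothetical; the paths assume the standard agent install locations):
# mount application-default credentials from the host (read-only) and run the agent
docker run -d \
  -v "$HOME/.config/gcloud/application_default_credentials.json":/etc/google/auth/application_default_credentials.json:ro \
  my-stackdriver-image \
  /usr/sbin/google-fluentd -c /etc/google-fluentd/google-fluentd.conf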

feathers-chat remote access fails to connect using VMware Fusion

I built the feathers-chat demo with the feathers-chat-vuex client attached via socketio, but I am having trouble with remote access via socketio. The demo works fine as long as I access the client from a browser on the same system, which happens to be an Ubuntu VM running under VMware Fusion on a Macbook Pro. But if I try to connect to the client from a browser in the host MacOS, it brings up the login page but it fails to log in. In the devtools console, it says "WebSocket connection to ...localhost:3030 failed," which of course it did, because the feathers-chat server is not running on this localhost, it is running in the VM. The socketio is set up in feathers-client.js like this: "const socket = io('http://localhost:3030', {transports: ['websocket']})". If I hard-code the VM IP address in it like this: "const socket = io('http://172.16.184.194:3030', {transports: ['websocket']})" then the remote access works fine. But of course I cannot do that because in general I don't know the IP address of the server at run time. So can someone tell me the right way to set this up? Thanks!
I haven't worked with socketio on feathers, but I've dealt with this on a REST feathers app (so hopefully this applies!).
Put the IP address into your config file. When you eventually deploy the app you will be pointing it at the IP address of the production server anyway so it makes sense to have it configurable.
Then for your local dev it can be the IP of the VM, and when you deploy, your config/production.json can have the live IP.
Here's the docs about using config variables:
https://docs.feathersjs.com/api/configuration.html#example
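For example, a minimal sketch of making the address configurable (the key name apiUrl is hypothetical; use whatever key your client reads):
# write the server address into the production config instead of hard-coding it
cat > config/production.json <<'EOF'
{
  "apiUrl": "http://172.16.184.194:3030"
}
EOF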
Update
If you are using Docker instead of VMware for your local development, then you can use docker-compose and its built-in networking functionality. Here is an example of a docker-compose file for a feathers app that I'm working on. It's just an API so I'm not dealing with the client, but it should be a good starting point for you.
With docker installed, put these two files in your project root and then run docker-compose up -d. It exposes port 3030 on the feathers server, and you should be able to connect with http://localhost:3030 again from your host machine. If you want to add another container to the docker-compose file you can, and from that container you can access the server at http://server:3030 because it will use the internal docker networking.
docker-compose.yml:
version: "3"
services:
server:
build: .
volumes:
- .:/usr/src/app
ports:
- "3030:3030"
stdin_open: true
tty: true
mongo:
image: mongo
volumes:
- ./protected_db/howlround-mongodb:/data/db
command:
- --storageEngine=mmapv1
And my Dockerfile:
FROM ubuntu:16.04
# Dockerfile based on https://github.com/nodejs/docker-node/blob/master/6.4/slim/Dockerfile
# gpg keys listed at https://github.com/nodejs/node
RUN set -ex \
&& for key in \
9554F04D7259F04124DE6B476D5A82AC7E37093B \
94AE36675C464D64BAFA68DD7434390BDBE9B9C5 \
0034A06D9D9B0064CE8ADF6BF1747F4AD2306D93 \
FD3A5288F042B6850C66B31F09FE44734EB7990E \
71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 \
DD8F2338BAE7501E3DD5AC78C273792F7D83545D \
B9AE9905FFD7803F25714661B63B535A4C206CA9 \
C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 \
; do \
gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; \
done
ENV NPM_CONFIG_LOGLEVEL info
ENV NODE_VERSION 8.11.1
ENV NODE_ENV dev
RUN buildDeps='xz-utils curl ca-certificates' \
&& set -x \
&& apt-get update && apt-get install -y $buildDeps --no-install-recommends \
&& rm -rf /var/lib/apt/lists/* \
&& curl -SLO "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.xz" \
&& curl -SLO "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
&& gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
&& grep " node-v$NODE_VERSION-linux-x64.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
&& tar -xJf "node-v$NODE_VERSION-linux-x64.tar.xz" -C /usr/local --strip-components=1 \
&& rm "node-v$NODE_VERSION-linux-x64.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt \
&& apt-get purge -y --auto-remove $buildDeps
# Install yarn and feathers-cli
RUN npm install -g yarn @feathersjs/cli pm2@latest
# Install Git
RUN add-apt-repository -y ppa:git-core/ppa;\
apt-get update;\
apt-get -y install git
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN cd /usr/src/app
# Install app dependencies
RUN yarn install
EXPOSE 3030
CMD [ "node" ]

How to access /var/run/docker.sock from inside a docker container as a non-root user? (MacOS Host)

I have installed docker on Mac and everything is running fine. I am using a Jenkins docker image and running it. While using Jenkins as a CI server and to build further images by running docker commands through it, I came to know that we have to bind-mount /var/run/docker.sock when running the Jenkins image so it can access the docker daemon.
I did that, and installed the docker CLI inside the Jenkins container. But when running docker ps or any other docker command it throws an error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.28/containers/json: dial unix /var/run/docker.sock: connect: permission denied
When I connect to the container as the root user, it works fine. But switching to the ‘jenkins’ user throws the above error. I have already added the ‘jenkins’ user to the sudo list but it does not help.
I found a few articles suggesting to add the ‘jenkins’ user to the ‘docker’ group, but to my surprise I do not find any docker group on the Mac or inside the container.
Any help is much appreciated. Thanks
It looks like the reason this is happening is pretty straightforward: UNIX permissions are not letting the jenkins user read /var/run/docker.sock. Really the easiest option is to change the group assignment on /var/run/docker.sock from root to another group, and then add jenkins to that group:
[as root, inside the container]
root@host:/# usermod -aG docker jenkins
root@host:/# chgrp docker /var/run/docker.sock
This assumes of course that you already have the docker CLI installed, and that a group called docker exists. If not:
[as root, inside the container]
root@host:/# groupadd docker
Alternatively, you could change the world permissions on /var/run/docker.sock to allow non-root users to access the socket, but I wouldn't recommend doing that; it just seems like bad security practice. Similarly, you could outright chown the socket to the jenkins user, although I'd rather just change the group settings.
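A variant of the group-assignment approach that avoids guessing: create the group with the same GID the socket already has, then add jenkins to it (stat -c is GNU stat, which the Debian-based Jenkins image provides):
[as root, inside the container]
SOCK_GID=$(stat -c '%g' /var/run/docker.sock)      # GID that owns the mounted socket
groupadd -g "$SOCK_GID" docker 2>/dev/null || true # reuse the group if it already exists
usermod -aG docker jenkins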
I'm confused why using sudo didn't work for you. I just tried what I believe is exactly the setup you described and it worked without problems.
Start the container:
[on macos host]
darkstar:~$ docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
docker.io/jenkins/jenkins:lts
darkstar:~$ docker exec -u root -it <container id> /bin/bash
Install Docker CLI:
[as root, inside container]
root@host:/# apt-get update
root@host:/# apt-get -y install apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common
root@host:/# rel_id=$(. /etc/os-release; echo "$ID")
root@host:/# curl -fsSL https://download.docker.com/linux/${rel_id}/gpg > /tmp/dkey
root@host:/# apt-key add /tmp/dkey
root@host:/# add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/${rel_id} \
$(lsb_release -cs) stable"
root@host:/# apt-get update
root@host:/# apt-get -y install docker-ce
Then set up the jenkins user:
[as root, inside container]
root@host:/# usermod -aG sudo jenkins
root@host:/# passwd jenkins
[...]
And trying it out:
[as jenkins, inside container]
jenkins@host:/$ sudo docker ps -a
[...]
password for jenkins:
CONTAINER ID IMAGE COMMAND CREATED ...
69340bc13bb2 jenkins/jenkins:lts "/sbin/tini -- /usr/…" 8 minutes ago ...
It seems to work fine for me. Maybe you took a different route to install the Docker CLI? Not sure, but if you want to access the docker socket using sudo, those steps will work. Although, I think it would be easier to just change the group assignment as explained above. Good luck :)
Note: All tests performed using macOS Mojave v10.14.3 running Docker Engine v19.03.2. This doesn't seem to be heavily dependent on the host platform, so I would expect it to work on Linux or any other UNIX-like OS, including other versions of macOS/OSX.
No, but this works:
Add the user (e.g. jenkins) to the staff-group: sudo dseditgroup -o edit -a jenkins -t user staff
Allow the group to sudo; in sudo visudo add:
%staff ALL = (ALL) ALL

How to extend an existing docker image?

I'm using the official elasticsearch Docker image instead of setting up my own Elasticsearch instance. And that works great, up to the point where I wanted to extend it. I wanted to install marvel into that Elasticsearch instance to get more information.
Now dockerfile/elasticsearch automatically runs Elasticsearch, and setting the command to /bin/bash doesn't work; neither does attaching to the container, trying to access it over SSH, or installing an SSH daemon with apt-get install -y openssh-server.
In this particular case, I could just go into the container's file system and execute /opt/elasticsearch/bin/plugin -i elasticsearch/marvel/latest and everything worked.
But how could I install an additional service which needs to be installed with apt-get when I can't have a terminal inside the running container?
Simply extend it using a Dockerfile that starts with
FROM dockerfile/elasticsearch
and install marvel or an SSH server or whatever you need. Then end with the correct command to start your services. You can use supervisor to start multiple services; see Run a service automatically in a docker container for more info on that.
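For example, a minimal sketch (the plugin path is taken from the question; docker build - reads the Dockerfile from stdin):
# build an extended image from an inline Dockerfile
docker build -t my-elasticsearch-marvel - <<'EOF'
FROM dockerfile/elasticsearch
RUN /opt/elasticsearch/bin/plugin -i elasticsearch/marvel/latest
EOF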
If you don't mind using docker-compose, what I usually do is to add a first section for the base image you plan to reuse, and then use that image as the base in the rest of the services' Dockerfiles, something along the lines of:
---
version: '2'
services:
  base:
    build: ./images/base
  collector:
    build: ./images/collector
Then, in images/collector/Dockerfile, since my project is called webtrack, I'd type:
FROM webtrack_base
...
And now it's done!
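One thing to watch: the base image has to exist before the dependent services build, so build it first (service and project names as above):
docker-compose build base       # produces the webtrack_base image
docker-compose build collector  # FROM webtrack_base now resolves locally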
Update August 2016
Having found very little current information on how to do this with latest versions of ElasticSearch (2.3.5 for example), Kibana (4.5.3) and Marvel & Sense plugins, I opted to take the steeper path and write my own image.
Please find the source code (Dockerfile) and README here
FROM java:jre-alpine
MAINTAINER arcseldon <arcseldon@gmail.com>
ENV ES_VERSION=2.3.5 \
KIBANA_VERSION=4.5.3
RUN apk add --quiet --no-progress --no-cache nodejs \
&& adduser -D elasticsearch
USER elasticsearch
WORKDIR /home/elasticsearch
RUN wget -q -O - http://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/${ES_VERSION}/elasticsearch-${ES_VERSION}.tar.gz \
| tar -zx \
&& mv elasticsearch-${ES_VERSION} elasticsearch \
&& wget -q -O - http://download.elastic.co/kibana/kibana/kibana-${KIBANA_VERSION}-linux-x64.tar.gz \
| tar -zx \
&& mv kibana-${KIBANA_VERSION}-linux-x64 kibana \
&& rm -f kibana/node/bin/node kibana/node/bin/npm \
&& ln -s $(which node) kibana/node/bin/node \
&& ln -s $(which npm) kibana/node/bin/npm \
&& ./elasticsearch/bin/plugin install license \
&& ./elasticsearch/bin/plugin install marvel-agent \
&& ./kibana/bin/kibana plugin --install elasticsearch/marvel/latest \
&& ./kibana/bin/kibana plugin --install elastic/sense
CMD elasticsearch/bin/elasticsearch --es.logger.level=OFF --network.host=0.0.0.0 & kibana/bin/kibana -Q
EXPOSE 9200 5601
If you just want the pre-built image then please do:
docker pull arcseldon/elasticsearch-kibana-marvel-sense
You can visit the repository on hub.docker.com here
Usage:
docker run -d -p 9200:9200 -p 5601:5601 arcseldon/elasticsearch-kibana-marvel-sense
You can connect to Elasticsearch with http://localhost:9200 and its Kibana front-end with http://localhost:5601.
You can connect to Marvel with http://localhost:5601/app/marvel and Sense with http://localhost:5601/app/sense
Hope this helps others and saves some time!
