How to extend an existing docker image? - elasticsearch

I'm using the official Elasticsearch Docker image instead of setting up my own Elasticsearch instance. That works great, up to the point where I wanted to extend it: I wanted to install Marvel into that Elasticsearch instance to get more information.
Now dockerfile/elasticsearch automatically runs Elasticsearch, and setting the command to /bin/bash doesn't work. Neither does attaching to the container, accessing it over SSH, nor installing an SSH daemon with apt-get install -y openssh-server.
In this particular case, I could just go into the container's file system and execute /opt/elasticsearch/bin/plugin -i elasticsearch/marvel/latest, and everything worked.
But how could I install an additional service that needs to be installed with apt-get when I can't get a terminal inside the running container?

Simply extend it using a Dockerfile that starts with
FROM dockerfile/elasticsearch
and install Marvel, an SSH server, or whatever else you need. Then end with the correct command to start your services. You can use supervisor to start multiple services; see Run a service automatically in a docker container for more info on that.
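For the Marvel case from the question, a minimal sketch of such a Dockerfile (it assumes the /opt/elasticsearch layout mentioned in the question):
FROM dockerfile/elasticsearch
# Install the Marvel plugin at build time instead of inside a running container
RUN /opt/elasticsearch/bin/plugin -i elasticsearch/marvel/latest
# Keep the base image's behaviour of starting Elasticsearch
CMD ["/opt/elasticsearch/bin/elasticsearch"]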

If you don't mind using docker-compose, what I usually do is add a first service for the base image I plan to reuse, and then use that image as the base in the rest of the services' Dockerfiles, something along the lines of:
version: '2'
services:
  base:
    build: ./images/base
  collector:
    build: ./images/collector
Then, in images/collector/Dockerfile, and since my project is called webtrack, I'd type
FROM webtrack_base
...
And now it's done!
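Note that compose prefixes built image names with the project (directory) name, so the base service has to be built first for webtrack_base to exist; with a project named webtrack that would look like:
docker-compose build base
docker-compose build collector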

Update August 2016
Having found very little current information on how to do this with the latest versions of Elasticsearch (2.3.5, for example), Kibana (4.5.3), and the Marvel & Sense plugins, I opted to take the steeper path and write my own image.
Please find the source code (Dockerfile) and README here
FROM java:jre-alpine
MAINTAINER arcseldon <arcseldon#gmail.com>

ENV ES_VERSION=2.3.5 \
    KIBANA_VERSION=4.5.3

RUN apk add --quiet --no-progress --no-cache nodejs \
    && adduser -D elasticsearch

USER elasticsearch
WORKDIR /home/elasticsearch

RUN wget -q -O - http://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/${ES_VERSION}/elasticsearch-${ES_VERSION}.tar.gz \
    | tar -zx \
    && mv elasticsearch-${ES_VERSION} elasticsearch \
    && wget -q -O - http://download.elastic.co/kibana/kibana/kibana-${KIBANA_VERSION}-linux-x64.tar.gz \
    | tar -zx \
    && mv kibana-${KIBANA_VERSION}-linux-x64 kibana \
    && rm -f kibana/node/bin/node kibana/node/bin/npm \
    && ln -s $(which node) kibana/node/bin/node \
    && ln -s $(which npm) kibana/node/bin/npm \
    && ./elasticsearch/bin/plugin install license \
    && ./elasticsearch/bin/plugin install marvel-agent \
    && ./kibana/bin/kibana plugin --install elasticsearch/marvel/latest \
    && ./kibana/bin/kibana plugin --install elastic/sense

CMD elasticsearch/bin/elasticsearch --es.logger.level=OFF --network.host=0.0.0.0 & kibana/bin/kibana -Q

EXPOSE 9200 5601
If you just want the pre-built image then please do:
docker pull arcseldon/elasticsearch-kibana-marvel-sense
You can visit the repository on hub.docker.com here
Usage:
docker run -d -p 9200:9200 -p 5601:5601 arcseldon/elasticsearch-kibana-marvel-sense
You can connect to Elasticsearch at http://localhost:9200 and its Kibana front-end at http://localhost:5601.
You can connect to Marvel at http://localhost:5601/app/marvel and Sense at http://localhost:5601/app/sense.
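To verify both services came up, a quick smoke test from the host:
curl http://localhost:9200     # Elasticsearch should answer with its name/version info as JSON
curl -I http://localhost:5601  # Kibana should respond (200 or a redirect to its app path)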
Hope this helps others and saves some time!

Related

Can you convert/build a docker image into a full OS image?

I have made a Docker container with some code for deployment. However, I realised that given the structure of the project I'm working with, it's more suitable to deploy a full ISO image instead of running Docker on top of a cloud VM running stock Debian, which leads to unnecessary layers of virtualization.
I know that Docker containers are meant to be deployed on Kubernetes, but before diving into that route, is there a simple way to convert a deb9 Docker image into a full deb9 OS image? Like an opposite of docker import?
You can convert your Docker container to a full OS image. A Debian Dockerfile example would be
FROM debian:stretch
RUN apt-get -y install --no-install-recommends \
    linux-image-amd64 \
    systemd-sysv
In principle you have to install a kernel and an init system.
Full instructions can be found on GitHub.
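The core of those instructions is getting the image's filesystem out of Docker so a kernel and bootloader can be wrapped around it. A partial sketch (the image and container names here are made up):
docker build -t bootable-debian .
docker create --name rootfs-src bootable-debian
docker export rootfs-src -o rootfs.tar
# rootfs.tar now holds the full filesystem tree to copy into a disk image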
Docker images don't contain a Linux kernel and aren't configured to do things like run a full init system or go out and get their network configuration, so this won't work well.
Put differently: the same mismatch that makes docker import not work well (the resulting container has too much stuff and isn't set up for Docker) means you can't export a container or image into a VM and expect it to work in that environment.
But! If you've written a Dockerfile for your image, that's very close to saying "I'm going to start from this base Linux distribution and run this shell script to get an image". Tools like Packer are set up to take a base Linux VM image, run some sort of provisioner, and create a new image. If you have a good Docker setup already and decide a VM is a better fit, you've probably already done a lot of the leg-work to have an automated build of your VM image.
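For illustration, a minimal legacy-JSON Packer template along those lines; every value below is a placeholder, and setup.sh stands in for the same steps your Dockerfile's RUN lines perform:
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-xxxxxxxx",
    "instance_type": "t2.micro",
    "ssh_username": "admin",
    "ami_name": "my-app-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "script": "setup.sh"
  }]
}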
#info
We can use Docker like a VM / Flatpak / AppImage / Termux, with GUI, audio and hardware acceleration :)
How to do it?
1. Remove old Docker packages:
sudo apt-get remove docker docker-engine docker.io containerd runc
2. Install the prerequisites:
sudo apt-get update
sudo apt install apt-transport-https ca-certificates curl gnupg lsb-release
3. Add the Docker repository:
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
4. Install Docker:
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
5. Create the Docker image. As root, create a Dockerfile:
su
cd /home/
kwrite Dockerfile
FROM debian:testing
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get -y install \
    midori bash mpv pulseaudio pulseaudio-utils pulsemixer pulseaudio-module-jack apulse \
    neofetch vlc smplayer wget sudo cairo-dock cairo-dock-plug-ins \
    xfce4 xfce4-goodies falkon kde-full xfce4-whiskermenu-plugin gnome \
    tigervnc-standalone-server openssh-server openssh-client \
    network-manager net-tools iproute2 gerbera openjdk-11-jdk \
    mediainfo dcraw htop gimp krita libreoffice python3-pip terminator uget alsa-utils
ENV PULSE_SERVER=tcp:host.docker.internal:4713
CMD bash
6. Save it, then build the image in a terminal:
sudo docker build -t supersifobian .
7. To use GUI apps, allow X connections and install the X utilities on the host:
xhost +
sudo apt-get -y install xpra xserver-xephyr xinit xauth xclip x11-xserver-utils x11-utils
8. Run the container:
sudo docker run -ti --net=host --privileged --cap-add=ALL \
  -e DISPLAY=:0 -e PULSE_SERVER=tcp:$PULSE_SERVER \
  --device=/dev/dri:/dev/dri --device /dev/snd \
  --group-add audio \
  -v /dev:/dev \
  -v /var/run/docker.sock:/host/var/run/docker.sock \
  -v /dev:/host/dev -v /proc:/host/proc:ro -v /boot:/host/boot:ro \
  -v /lib/modules:/host/lib/modules:ro -v /usr:/host/usr:ro \
  -v /media:/host/media:ro -v /home:/host/home:ro \
  <image-id-or-name>
NB:
Enter the container:
docker exec -it <container-name-or-id> bash
Add a user:
adduser <username>
Make the user a sudoer:
usermod -aG sudo <username>
Add the user to the audio group:
usermod -aG audio <username>
9. Audio in the Docker CLI, using paprefs:
a. Install paprefs on the host:
apt-get install paprefs
b. Enable network access to local sound devices in paprefs.
c. To find the PulseAudio address and port, run:
pax11publish
d. Export it in a terminal inside the container:
export "PULSE_SERVER=tcp:192.168.43.135:37721"
10. Save the container as an image:
docker commit <container-id> <image-name>:<version>
11. Check your images:
docker images
Thanks :)

Stackdriver agent in docker container

Is it possible to set up a generic Docker image with the Stackdriver monitoring agents, so that it can send logging data from within the container to Stackdriver, and which can then be used across any VM instances, whether on GCE or AWS?
Update
FROM ubuntu:16.04
USER root
ADD . /
ENV GOOGLE_APPLICATION_CREDENTIALS="/etc/google/auth/application_default_credentials.json"
RUN apt-get update && apt-get -y --no-install-recommends install wget curl python-minimal ca-certificates lsb-release libidn11 openssl && \
    curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh && \
    bash install-logging-agent.sh
I'm following the documentation exactly. The installation goes fine, but google-fluentd fails to start/restart.
Thanks in advance.
Yes, this should be possible according to the documentation.
You will need to make sure the Stackdriver agent is installed and configured correctly in your Docker image.
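One common catch: service google-fluentd start relies on an init system, which isn't running during docker build or inside a plain container, so the agent generally has to be started as the container's main process instead. A rough sketch (the google-fluentd binary and config paths below are assumptions based on its td-agent layout; verify them in your image):
FROM ubuntu:16.04
ENV GOOGLE_APPLICATION_CREDENTIALS="/etc/google/auth/application_default_credentials.json"
RUN apt-get update && apt-get -y --no-install-recommends install curl ca-certificates lsb-release && \
    curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh && \
    bash install-logging-agent.sh
# Start the agent in the foreground as PID 1 instead of via "service ..."
# (paths assumed, not confirmed by the documentation quoted above)
CMD ["/usr/sbin/google-fluentd", "-c", "/etc/google-fluentd/google-fluentd.conf"]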

ldconfig returning non-zero code: 1

I'm trying to build a Docker image containing the Oracle DB client and Node.js, but I'm getting the error The command '/bin/sh -c ldconfig' returned a non-zero code: 1 on RUN ldconfig.
I cannot find anything to help me solve this problem and I've been trying to solve it myself for the last 2 hours, so I need help!
Additional info:
Oddly, when I go into the container with docker exec -it container_name sh and then execute ldconfig, it runs fine...
This is the dockerfile:
FROM node:9.11-alpine
WORKDIR /
COPY ./oracle /opt/oracle
RUN apk update && \
    apk add --no-cache libaio && \
    mkdir /etc/ld.so.conf.d && \
    sh -c "echo /opt/oracle/instantclient_12_2 > /etc/ld.so.conf.d/oracle-instantclient.conf" && \
    ldconfig
ENV LD_LIBRARY_PATH=/opt/oracle/instantclient_12_2:$LD_LIBRARY_PATH
ENV PATH=/opt/oracle/instantclient_12_2:$PATH
CMD ["tail", "-f", "/dev/null"]
In Alpine, ldconfig requires the configuration directory as an argument.
Try running ldconfig like this:
ldconfig /etc/ld.so.conf.d
Theoretically that should work.
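Applied to the Dockerfile above, the RUN step becomes:
RUN apk update && \
    apk add --no-cache libaio && \
    mkdir /etc/ld.so.conf.d && \
    sh -c "echo /opt/oracle/instantclient_12_2 > /etc/ld.so.conf.d/oracle-instantclient.conf" && \
    ldconfig /etc/ld.so.conf.d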
See my blog post series Docker for Oracle Database Applications in Node.js and Python that shows using Instant Client in Oracle Linux containers.
Also see the node-oracledb installation manual section Using node-oracledb in Docker.
The latest sample Oracle Instant Client container Dockerfile automatically pulls the required RPMs - no manual download required. Oracle Instant Client 19 will connect to Oracle DB 11.2 or later.

feathers-chat remote access fails to connect using VMware Fusion

I built the feathers-chat demo with the feathers-chat-vuex client attached via socketio, but I am having trouble with remote access via socketio. The demo works fine as long as I access the client from a browser on the same system, which happens to be an Ubuntu VM running under VMware Fusion on a Macbook Pro. But if I try to connect to the client from a browser in the host MacOS, it brings up the login page but it fails to log in. In the devtools console, it says "WebSocket connection to ...localhost:3030 failed," which of course it did, because the feathers-chat server is not running on this localhost, it is running in the VM. The socketio is set up in feathers-client.js like this: "const socket = io('http://localhost:3030', {transports: ['websocket']})". If I hard-code the VM IP address in it like this: "const socket = io('http://172.16.184.194:3030', {transports: ['websocket']})" then the remote access works fine. But of course I cannot do that because in general I don't know the IP address of the server at run time. So can someone tell me the right way to set this up? Thanks!
I haven't worked with socketio on Feathers, but I've dealt with this on a REST Feathers app (so hopefully this applies!).
Put the IP address into your config file. When you eventually deploy the app you will be pointing it at the IP address of the production server anyway so it makes sense to have it configurable.
Then for your local dev it can be the IP of the VM, and when you deploy, your config/production.json can have the live IP.
Here are the docs about using config variables:
https://docs.feathersjs.com/api/configuration.html#example
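On the client side, a sketch of what that could look like in feathers-client.js; the API_URL variable is an assumption here, and how you inject it (e.g. via webpack's DefinePlugin) depends on your build setup:
// feathers-client.js (sketch)
import io from 'socket.io-client';
import feathers from '@feathersjs/feathers';
import socketio from '@feathersjs/socketio-client';

// Fall back to localhost for same-machine dev; override per environment.
const apiUrl = process.env.API_URL || 'http://localhost:3030';
const socket = io(apiUrl, { transports: ['websocket'] });

export default feathers().configure(socketio(socket));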
Update
If you are using Docker instead of VMware for your local development, then you can use docker-compose and its built-in networking. Here is an example of a docker-compose file for a Feathers app that I'm working on. It's just an API, so I'm not dealing with the client, but I think it should be a good starting point for you.
With Docker installed, put these two files in your project root and then run docker-compose up -d. It exposes port 3030 on the Feathers server and you should be able to connect with http://localhost:3030 again from your host machine. If you want to add another container to the docker-compose file, you can; from that container you can access the server at http://server:3030, because it will use the internal Docker networking (see the sketch after the compose file below).
docker-compose.yml:
version: "3"
services:
  server:
    build: .
    volumes:
      - .:/usr/src/app
    ports:
      - "3030:3030"
    stdin_open: true
    tty: true
  mongo:
    image: mongo
    volumes:
      - ./protected_db/howlround-mongodb:/data/db
    command:
      - --storageEngine=mmapv1
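For instance, a hypothetical client service added under services: in the same file could reach the API at http://server:3030 by service name:
  client:
    build: ./client   # made-up path, for illustration only
    depends_on:
      - server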
And my Dockerfile:
FROM ubuntu:16.04
# Dockerfile based on https://github.com/nodejs/docker-node/blob/master/6.4/slim/Dockerfile
# gpg keys listed at https://github.com/nodejs/node
RUN set -ex \
    && for key in \
        9554F04D7259F04124DE6B476D5A82AC7E37093B \
        94AE36675C464D64BAFA68DD7434390BDBE9B9C5 \
        0034A06D9D9B0064CE8ADF6BF1747F4AD2306D93 \
        FD3A5288F042B6850C66B31F09FE44734EB7990E \
        71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 \
        DD8F2338BAE7501E3DD5AC78C273792F7D83545D \
        B9AE9905FFD7803F25714661B63B535A4C206CA9 \
        C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 \
    ; do \
        gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; \
    done

ENV NPM_CONFIG_LOGLEVEL info
ENV NODE_VERSION 8.11.1
ENV NODE_ENV dev

RUN buildDeps='xz-utils curl ca-certificates' \
    && set -x \
    && apt-get update && apt-get install -y $buildDeps --no-install-recommends \
    && rm -rf /var/lib/apt/lists/* \
    && curl -SLO "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.xz" \
    && curl -SLO "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
    && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
    && grep " node-v$NODE_VERSION-linux-x64.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
    && tar -xJf "node-v$NODE_VERSION-linux-x64.tar.xz" -C /usr/local --strip-components=1 \
    && rm "node-v$NODE_VERSION-linux-x64.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt \
    && apt-get purge -y --auto-remove $buildDeps

# Install yarn and feathers-cli
RUN npm install -g yarn @feathersjs/cli pm2@latest

# Install Git (software-properties-common provides add-apt-repository)
RUN apt-get update;\
    apt-get -y install software-properties-common;\
    add-apt-repository -y ppa:git-core/ppa;\
    apt-get update;\
    apt-get -y install git

# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install app dependencies
RUN yarn install

EXPOSE 3030
CMD [ "node" ]

How do I build a Docker image for a Ruby project without build tools?

I'm trying to build a Docker image for a Ruby project. The problem is the project has some gem dependencies that need to build native extensions. My understanding is that I have a couple of choices:
1. Start with a base image that already has build tools installed.
2. Use a base image with no build tools, install build tools as a step in the Dockerfile before running bundle install.
3. Precompile the native extensions on the host, vendorize the gem, and simply copy the resulting bundle into the image.
1 & 2 seem to require that the resulting image contains the build tools needed to build the native extensions. I'm trying to avoid that scenario for security reasons. 3 is cumbersome, but doable, and would accomplish what I want.
Are there any options I'm missing or am I misunderstanding something?
I use option 3 all the time, the goal being to end up with an image which has only what I need to run (not to compile).
For example, here I build and install Apache first, before using the resulting image as a base image for my (patched and recompiled) Apache setup.
Build:
if [ "$(docker images -q apache.deb 2> /dev/null)" = "" ]; then
docker build -t apache.deb -f Dockerfile.build . || exit 1
fi
The Dockerfile.build declares a volume which contains the resulting recompiled Apache (as a .deb file):
RUN checkinstall --pkgname=apache2-4 --pkgversion="2.4.10" --backup=no --deldoc=yes --fstrans=no --default
RUN mkdir $HOME/deb && mv *.deb $HOME/deb
VOLUME /root/deb
Installation:
if [ "$(docker images -q apache.inst 2> /dev/null)" = "" ]; then
docker inspect apache.deb.cont > /dev/null 2>&1 || docker run -d -t --name=apache.deb.cont apache.deb
docker inspect apache.inst.cont > /dev/null 2>&1 || docker run -u root -it --name=apache.inst.cont --volumes-from apache.deb.cont --entrypoint "/bin/sh" openldap -c "dpkg -i /root/deb/apache2-4_2.4.10-1_amd64.deb"
docker commit apache.inst.cont apache.inst
docker rm apache.deb.cont apache.inst.cont
fi
Here I install the deb using another image (in my case 'openldap') as a base image:
docker run -u root -it --name=apache.inst.cont --volumes-from apache.deb.cont --entrypoint "/bin/sh" openldap -c "dpkg -i /root/deb/apache2-4_2.4.10-1_amd64.deb"
docker commit apache.inst.cont apache.inst
Finally I have a regular Dockerfile starting from the image I just committed.
FROM apache.inst:latest
psmith points out in the comments the post Building Minimal Docker Image for Rails App by Jari Kolehmainen.
For a Ruby application, you can remove the parts needed only for the build easily with:
bundle install --without development test && \
apk del build-dependencies
Since ruby is needed to run the application anyway, that works great in this case.
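That works because the build packages were installed as a named virtual group. A sketch of the whole pattern (the Alpine Ruby base image and app files are assumptions; "build-dependencies" is just the name given to the group):
FROM ruby:2.5-alpine
WORKDIR /app
COPY Gemfile Gemfile.lock ./
# build-base provides gcc/make for native extensions, only during this step
RUN apk add --no-cache --virtual build-dependencies build-base && \
    bundle install --without development test && \
    apk del build-dependencies
COPY . .
CMD ["bundle", "exec", "ruby", "app.rb"]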
In my case, I still need a separate image for building, as gcc is not needed to run Apache (and it is quite large and comes with multiple dependencies, some of them needed by Apache at runtime, some not...).
