How to rebuild a Docker image quickly by using the cache? - caching

I want to optimize my Dockerfile, and I wish to keep the cached files on disk.
But I found that when I run docker build ., it always fetches every file from the network again.
I wish to share my cache directory during the build (e.g. /var/cache/yum/x86_64/6),
but that works only with docker run -v ....
Any suggestions? (In this example only one rpm is installed; in my real case I need to install hundreds of rpms.)
My draft Dockerfile:
FROM centos:6.4
RUN yum update -y
RUN yum install -y openssh-server
RUN sed -i -e 's:keepcache=0:keepcache=1:' /etc/yum.conf
VOLUME ["/var/cache/yum/x86_64/6"]
EXPOSE 22
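The keepcache tweak in the Dockerfile above can be sanity-checked outside the build; a minimal sketch against a sample file (the /tmp path is just for illustration):

```shell
# Sketch: apply the Dockerfile's sed line to a sample yum.conf so its
# effect is visible without running a build.
printf '[main]\nkeepcache=0\n' > /tmp/yum.conf.sample
sed -i -e 's:keepcache=0:keepcache=1:' /tmp/yum.conf.sample
grep keepcache /tmp/yum.conf.sample
```

With keepcache=1, yum leaves the downloaded rpms under /var/cache/yum instead of deleting them after install, which is what makes the cache directory worth sharing.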
The second time, I want to build a similar image:
FROM centos:6.4
RUN yum update -y
RUN yum install -y openssh-server vim
I don't want to fetch openssh-server from the internet again (it is slow). In my real case it is not one package but about 100 packages.

An update to the previous answers: the current docker build
accepts --build-arg, which passes build-time variables such as http_proxy
without saving them in the resulting image.
Example:
# get squid
docker run --name squid -d --restart=always \
--publish 3128:3128 \
--volume /var/spool/squid3 \
sameersbn/squid:3.3.8-11
# optionally in another terminal run tail on logs
docker exec -it squid tail -f /var/log/squid3/access.log
# get squid ip to use in docker build
SQUID_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' squid)
# build your instance
docker build --build-arg http_proxy=http://$SQUID_IP:3128 .
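http_proxy is one of Docker's predefined build-time arguments, so the Dockerfile needs no explicit ARG line for it; a minimal sketch of a Dockerfile that would benefit from the squid cache above (package list illustrative):

```dockerfile
FROM centos:6.4
# http_proxy is supplied via --build-arg at build time; yum traffic in the
# RUN steps goes through the squid cache, and the value is not persisted
# as an ENV in the final image.
RUN yum update -y
RUN yum install -y openssh-server
```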

Just use an intermediate/base image.
Base Dockerfile; build it with docker build -t custom-base . or similar:
FROM centos:6.4
RUN yum update -y
RUN yum install -y openssh-server vim
RUN sed -i -e 's:keepcache=0:keepcache=1:' /etc/yum.conf
Application Dockerfile:
FROM custom-base
VOLUME ["/var/cache/yum/x86_64/6"]
EXPOSE 22

You should use a caching proxy (e.g. Http Replicator, squid-deb-proxy, or apt-cacher-ng on Ubuntu/Debian) to cache installation packages. You can install this software on the host machine.
EDIT:
Option 1 - caching http proxy - easier method with modified Dockerfile:
> cd ~/your-project
> git clone https://github.com/gertjanvanzwieten/replicator.git
> mkdir cache
> replicator/http-replicator -r ./cache -p 8080 --daemon ./cache/replicator.log --static
Add to your Dockerfile (before the first RUN line):
ENV http_proxy http://172.17.42.1:8080/
You should optionally clear the cache from time to time.
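The 172.17.42.1 address above is the classic docker0 bridge IP, i.e. the address at which containers can reach services running on the host. A sketch of extracting it from `ip` output (a canned line is used here so the snippet runs anywhere, without a docker0 interface):

```shell
# Sketch: pull the host-side bridge address out of `ip -4 addr show docker0`
# output; containers use this IP to reach the replicator on the host.
sample='    inet 172.17.42.1/16 brd 172.17.255.255 scope global docker0'
echo "$sample" | awk '/inet /{sub(/\/.*/, "", $2); print $2}'
```

On a real host you would pipe `ip -4 addr show docker0` into the same awk filter instead of the canned line.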
Option 2 - caching transparent proxy, no modification to Dockerfile:
> cd ~/your-project
> curl -o r.zip https://codeload.github.com/zahradil/replicator/zip/transparent-requests
> unzip r.zip
> rm r.zip
> mv replicator-transparent-requests replicator
> mkdir cache
> replicator/http-replicator -r ./cache -p 8080 --daemon ./cache/replicator.log --static
You need to start the replicator as a regular user (not root!).
Set up the transparent redirect:
> iptables -t nat -A OUTPUT -p tcp -m owner ! --uid-owner <replicator-user> --dport 80 -j REDIRECT --to-port 8080
Disable redirect:
> iptables -t nat -D OUTPUT -p tcp -m owner ! --uid-owner <replicator-user> --dport 80 -j REDIRECT --to-port 8080
This method is the most transparent and general, and your Dockerfile does not need to be modified. You should optionally clear the cache from time to time.

Related

Port forwarding through nested docker containers on Jenkins

My Jenkins pipeline uses the docker plugin, which then runs a docker container from inside of that to set up a general test environment, like this:
node('docker') {
sh """
cat > .Dockerfile.build <<EOF
FROM ruby:$rubyVersion
RUN apt-get update && apt-get install -y locales && localedef -i en_US -f UTF-8 en_US.UTF-8
ENV LANG=en_US.UTF-8 \\
LANGUAGE=en_US:en \\
LC_LANG=en_US.UTF-8 \\
LC_ALL=en_US.UTF-8
RUN \\
curl -sSL -o /tmp/docker.tgz https://download.docker.com/linux/static/stable/x86_64/docker-${dockerVersion}.tgz && \\
tar --strip-components 1 --directory /usr/local/bin/ --extract --file /tmp/docker.tgz
RUN \\
groupadd -g $gid docker && \\
useradd -d $env.HOME -u $uid build -r -m && \\
usermod -a -G docker build
EOF
""".stripIndent().trim()
}
Once the test environment container is up, I run another container that has my code and tests inside that previously made environment container. One of my tests includes making sure a firewall was set up through iptables that allows certain ports through. To check whether my firewall is set up correctly, I simply run this from inside that container (now 3 docker containers deep):
def listener_response(port, host = 'localhost')
TCPSocket.open(host, port) do |socket|
socket.read(2)
end
rescue SystemCallError
nil
end
This is called by simply passing in the random port I used and the Jenkins docker node IP. When I run my test container, I do something like:
docker run -d -e DOCKER_HOST_IP=10.x.x.x -e RANDOM_OPEN_PORT=52459 -p 52459:52459 -v /var/run/docker.sock:/var/run/docker.sock
However, I still get a nil response from my test rather than an OK. Is there a way to port forward from the Jenkins host to my test environment to my test container?
Running the test environment with the option --network host seemed to solve the problem for me.
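With --network host the test environment shares the host's network namespace, so a listener bound on the host is reachable at localhost from inside the container. For completeness, a bash sketch of the same reachability check the Ruby helper performs (bash-only, since it relies on /dev/tcp; the port number is illustrative):

```shell
# Sketch: succeed if something accepts TCP connections on localhost:$1,
# roughly what the Ruby listener_response helper checks.
port_open() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}
port_open 59999 || echo closed
```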

Docker centos7 systemctl does not work: Failed to connect to D-Bus

I am trying to run elasticsearch on docker.
My environment is as follows:
host system: OSX 10.12.5
docker: 17.05.0-ce
docker operating image: centos:latest
I was following this article, but it got stuck at systemctl daemon-reload.
I found the official CentOS response about this D-Bus bug, but when I ran the docker run command it showed the message below.
[!!!!!!] Failed to mount API filesystems, freezing.
How could I solve this problem?
FYI, here is the Dockerfile I used to build the image:
FROM centos
MAINTAINER juneyoung <juneyoung@hanmail.net>
ARG u=elastic
ARG uid=1000
ARG g=elastic
ARG gid=1000
ARG p=elastic
# add USER
RUN groupadd -g ${gid} ${g}
RUN useradd -d /home/${u} -u ${uid} -g ${g} -s /bin/bash ${u}
# systemctl settings from official Centos github
# https://github.com/docker-library/docs/tree/master/centos#systemd-integration
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
# yum settings
RUN yum -y update
RUN yum -y install java-1.8.0-openjdk.x86_64
ENV JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.131-3.b12.el7_3.x86_64/jre/
# install wget
RUN yum install -y wget
# install net-tools : netstat, ifconfig
RUN yum install -y net-tools
# Elasticsearch install
ENV ELASTIC_VERSION=5.4.0
RUN rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
RUN wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-${ELASTIC_VERSION}.rpm
RUN rpm -ivh elasticsearch-${ELASTIC_VERSION}.rpm
CMD ["/usr/sbin/init"]
and I ran it with the command:
docker run -ti -v /sys/fs/cgroup:/sys/fs/cgroup:ro --name=elastic2 elastic2
First, thanks to @Robert.
I did not think of it that way.
All I had to do was edit my CMD instruction.
Change it to
CMD ["elasticsearch"]
However, there are a few more chores to do before it can be accessed from the browser;
refer to this elasticsearch forum post.
You could follow the commands for a systemd-enabled OS if you replace the normal systemctl command. That is how I install elasticsearch in a CentOS docker container.
See "docker-systemctl-replacement" for the details.
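A sketch of the docker-systemctl-replacement approach; the script filename, its location in the build context, and the yum repo setup are assumptions based on that project's README pattern, not verified here:

```dockerfile
FROM centos:7
# Overlay the drop-in systemctl script (copied into the build context from
# the docker-systemctl-replacement project) so post-install scripts and
# "systemctl enable/start" calls work without a running systemd/D-Bus.
COPY systemctl3.py /usr/bin/systemctl
RUN yum install -y elasticsearch   # assumes the elastic yum repo is configured
# The replacement script can also act as the container's init process.
CMD ["/usr/bin/systemctl"]
```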

Build failed while appending a line to the sources list of a docker container

I'm working on https://github.com/audip/rpi-haproxy and get this error message when building the docker container:
Build failed: The command '/bin/sh -c echo "deb http://httpredir.debian.org/debian jessie-backports main" >> /etc/apt/sources.list' returned a non-zero code: 1
This can be viewed at https://hub.docker.com/r/audip/rpi-haproxy/builds/brxdkayq3g45jjhppndcwnb/
I tried to find answers, but the problem seems to be something off on line 4 of the Dockerfile. I need help fixing this failing build.
# Pull base image.
FROM resin/rpi-raspbian:latest
# Enable Jessie backports
RUN echo "deb http://httpredir.debian.org/debian jessie-backports main" >> /etc/apt/sources.list
# Setup GPG keys
RUN gpg --keyserver pgpkeys.mit.edu --recv-key 8B48AD6246925553 \
&& gpg -a --export 8B48AD6246925553 | sudo apt-key add - \
&& gpg --keyserver pgpkeys.mit.edu --recv-key 7638D0442B90D010 \
&& gpg -a --export 7638D0442B90D010 | sudo apt-key add -
# Install HAProxy
RUN apt-get update \
&& apt-get install haproxy -t jessie-backports
# Define working directory.
WORKDIR /usr/local/etc/haproxy/
# Copy config file to container
COPY haproxy.cfg .
COPY start.bash .
# Define mountable directories.
VOLUME ["/haproxy-override"]
# Run loadbalancer
# CMD ["haproxy", "-f", "/usr/local/etc/haproxy/haproxy.cfg"]
# Define default command.
CMD ["bash", "start.bash"]
# Expose ports.
EXPOSE 80
EXPOSE 443
From your logs:
standard_init_linux.go:178: exec user process caused "exec format error"
It's complaining about an invalid binary format. The image you are using is a Raspberry Pi image, which would be based on an ARM chipset. Your build is running on an AMD64 chipset. These are not binary compatible. I believe this image is designed to be built on a Pi itself.
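One way to see the mismatch: compare the build host's architecture with what the image's binaries were compiled for. A minimal sketch (the docker inspect line needs a docker daemon, so it is shown as a comment):

```shell
# Sketch: Docker Hub's automated builders report x86_64 here, while the
# binaries in resin/rpi-raspbian are ARM -- hence "exec format error".
uname -m
# Image side, for comparison:
#   docker inspect --format '{{.Architecture}}' resin/rpi-raspbian:latest
```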

Firefox Proxy to Docker Fiddler refusing connection

Running the docker-fiddler container on an Ubuntu 14.04 host. The container brings up Fiddler and displays the GUI on the host, but the proxy fails. Docker version 1.11.1.
Firefox displays either "The connection was reset" or "The proxy server is refusing connections" depending on setups shown below.
Question:
What are the correct Firefox proxy settings, HTTP and SSL?
What changes are needed on the docker run command line?
What changes are needed in the Dockerfile?
Note: I am hitting an http URL, not https.
This configuration (localhost, assuming port forwarding) gives this FF output: The connection was reset
Firefox proxy:
manual proxy
HTTP Proxy 127.0.0.1 Port 8888
SSL Proxy 127.0.0.1 Port 8888
This configuration (using the container IP) gives this FF output: The proxy server is refusing connections
Firefox proxy:
manual proxy
HTTP Proxy 172.17.02 Port 8888
SSL Proxy 172.17.02 Port 8888
TL;DR
Docker Run:
docker run -d -p 8888:8888 -v /tmp/.X11-unix:/tmp/.X11-unix -e \
DISPLAY=$DISPLAY fiddler -h $HOSTNAME -v \
$HOME/.Xauthority:/home/$USER/.Xauthority
docker ps:
16a4f7531222 fiddler "mono /app/Fiddler.ex" 3 hours ago Up 3 hours 0.0.0.0:8888->8888/tcp cranky_pare
The Dockerfile is based on jwieringa/docker-fiddler; I added EXPOSE 8888 and a user config to support bind-mounting the X server:
FROM debian:wheezy
RUN apt-get update \
&& apt-get install -y curl unzip \
&& rm -rf /var/lib/apt/lists/*
RUN apt-key adv --keyserver pgp.mit.edu --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
RUN echo "deb http://download.mono-project.com/repo/debian wheezy/snapshots/3.12.0 main" > /etc/apt/sources.list.d/mono-xamarin.list \
&& apt-get update \
&& apt-get install -y mono-devel ca-certificates-mono fsharp mono-vbnc nuget \
&& rm -rf /var/lib/apt/lists/*
RUN cd /tmp && curl -O http://ericlawrence.com/dl/MonoFiddler-v4484.zip
RUN unzip /tmp/MonoFiddler-v4484.zip
## I added this for X11 Display of Fiddler GUI on linux Host
RUN groupadd -g <gid> <user>
RUN useradd -d /home/<user> -s /bin/bash -m <user> -u <uid> -g <gid>
USER <user>
ENV HOME /home/<user>
# I added this also
EXPOSE 8888
ENTRYPOINT ["mono", "/app/Fiddler.exe"]
1) The host is considered a remote computer by the docker-fiddler container:
Fiddler > Tools > Fiddler Options > Connections > [x] Allow remote computers to connect
2) Fiddler requires a restart after changing this attribute, which closes the container. A bind-mount volume must be added to the docker run command to keep the config:
-v /tmp/docker-fiddler/.mono:/home/$USER/.mono
3) Create /tmp/docker-fiddler/.mono on the host first and give it $USER permissions. Docker should do this for me, but I'm not sure how.
4) Changed docker run to :
docker run -d -p 8888:8888 \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-h $HOSTNAME \
-v $HOME/.Xauthority:/home/$USER/.Xauthority \
-v /tmp/docker-fiddler/.mono:/home/$USER/.mono \
-e DISPLAY=$DISPLAY fiddler
5) For debugging, change the first line above to add debug mode (-D) and remove daemon mode (-d); doing this was key to finding the missing libs:
docker -D run -p 8888:8888
6) There were several libs missing; the last one was gsettings-desktop-schemas, which contains/brings in the GNOME proxy schema. This is used by Fiddler; until it was in place, the "AllowRemote" config setting was not being stored:
.mono/registry/CurrentUser/software/telerik/fiddler/values.xml:<value name="AllowRemote"
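A quick way to confirm the flag actually got persisted into the bind-mounted config; the canned XML line below is illustrative of what Fiddler writes to values.xml:

```shell
# Sketch: grep the persisted Fiddler registry file for the AllowRemote flag;
# a canned line stands in for .mono/registry/.../fiddler/values.xml.
line='<value name="AllowRemote" type="string">True</value>'
echo "$line" | grep -c 'name="AllowRemote"'
```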
7) Several changes to the Dockerfile, including switching to Ubuntu, create a very large image; it might be possible to back out libglib2.0-bin and libcanberra-gtk-module:
FROM ubuntu:14.04
RUN apt-get update \
&& apt-get install -y curl unzip libglib2.0-bin libcanberra-gtk-module gsettings-desktop-schemas \
&& rm -f /etc/apt/sources.list.d/mono-xamarin* \
&& rm -rf /var/lib/apt/lists/*
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
RUN echo "deb http://download.mono-project.com/repo/debian wheezy main" > /etc/apt/sources.list.d/mono-xamarin.list \
&& apt-get update \
&& apt-get install -y mono-complete ca-certificates-mono fsharp mono-vbnc nuget \
&& rm -rf /var/lib/apt/lists/*
RUN cd /tmp && curl -O http://ericlawrence.com/dl/MonoFiddler-v4484.zip
RUN unzip /tmp/MonoFiddler-v4484.zip
RUN groupadd -g <gid> <user>
RUN useradd -d /home/<user> -s /bin/bash \
-m <user> -u <uid> -g <gid>
USER <user>
ENV HOME /home/<user>
EXPOSE 8888
ENTRYPOINT ["mono", "/app/Fiddler.exe"]
8) Firefox proxy - did not address HTTPS/SSL:
FF > edit > preferences > Advanced > settings
manual proxy
HTTP Proxy <container-ip> Port 8888
SSL Proxy <left this blank>
see: Install Mono on Linux
see: Docker In Practice, Miell/Sayers - CH4 Tech 26 Running GUIs, X11

Docker intercontainer communication

I would like to run Hadoop and Flume dockerized. I have a standard Hadoop image with all the default values. I cannot see how these services can communicate with each other when placed in separate containers.
Flume's Dockerfile looks like this:
FROM ubuntu:14.04.4
RUN apt-get update && apt-get install -q -y --no-install-recommends wget
RUN mkdir /opt/java
RUN wget --no-check-certificate --header "Cookie: oraclelicense=accept-securebackup-cookie" -qO- \
https://download.oracle.com/otn-pub/java/jdk/8u20-b26/jre-8u20-linux-x64.tar.gz \
| tar zxvf - -C /opt/java --strip 1
RUN mkdir /opt/flume
RUN wget -qO- http://archive.apache.org/dist/flume/1.6.0/apache-flume-1.6.0-bin.tar.gz \
| tar zxvf - -C /opt/flume --strip 1
ADD flume.conf /var/tmp/flume.conf
ADD start-flume.sh /opt/flume/bin/start-flume
ENV JAVA_HOME /opt/java
ENV PATH /opt/flume/bin:/opt/java/bin:$PATH
CMD [ "start-flume" ]
EXPOSE 10000
You should link your containers. There are several ways to implement this.
1) Publish ports:
docker run -p 50070:50070 hadoop
The -p option binds port 50070 of your docker container to port 50070 of the host machine.
2) Link containers (using docker-compose)
docker-compose.yml
version: '2'
services:
hadoop:
image: hadoop:2.6
flume:
image: flume:last
links:
- hadoop
The links option here makes the hadoop container reachable from the flume container by its hostname.
More info: https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/
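Worth noting: links are a legacy feature. In a Compose v2 file like the one above, all services are placed on a shared default network and resolve each other by service name through Compose's embedded DNS, so a sketch without links (same images, names assumed) also works:

```yaml
version: '2'
services:
  hadoop:
    image: hadoop:2.6
  flume:
    image: flume:last
    depends_on:
      - hadoop
# inside the flume container, the hostname "hadoop" resolves automatically
```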
