Docker Build Image -- can't cd into directory and run commands - image

Docker Version: 17.09.1-ce
I am a beginner with Docker and I am trying to build a Docker image on CentOS. Below is a snippet of the Dockerfile I am using:
FROM centos
RUN yum -y install samba-common && \
yum -y install gcc perl mingw-binutils-generic mingw-filesystem-base mingw32-binutils mingw32-cpp mingw32-crt mingw32-filesystem mingw32-gcc mingw32-headers mingw64-binutils mingw64-cpp mingw64-crt mingw64-filesystem mingw64-gcc mingw64-headers libcom_err-devel popt-devel zlib-devel zlib-static glibc-devel glibc-static python-devel && \
yum -y install git gnutls-devel libacl1-dev libacl-devel libldap2-dev openldap-devel && \
yum -y remove libbsd-devel && \
WORKDIR /usr/src && \
git clone git://xxxxxxxx/p/winexe/winexe-waf winexe-winexe-wafgit && \
WORKDIR /usr/src/samba && \
WORKDIR /usr/src/winexe-winexe-wafgit/source && \
head -n -3 wscript_build > tmp.txt && cp -f tmp.txt wscript_build && \
echo -e '\t'"stlib='smb_static bsd z resolv rt'", >> wscript_build && \
echo -e '\t'"lib='dl gnutls'", >> wscript_build && \
echo -e '\t'")" >> wscript_build && \
rm -rf tmp.txt && \
./waf --samba-dir=../../samba configure build
I tried a normal cd, which did not work, and WORKDIR does not work either. How can I set the working directory in a Dockerfile?
With the above Dockerfile I get the error below:
/bin/sh: WORKDIR: command not found
The command '/bin/sh -c yum -y install samba-common && yum -y install gcc perl mingw-binutils-generic mingw-filesystem-base mingw32-binutils mingw32-cpp mingw32-crt mingw32-filesystem mingw32-gcc mingw32-headers mingw64-binutils mingw64-cpp mingw64-crt mingw64-filesystem mingw64-gcc mingw64-headers libcom_err-devel popt-devel zlib-devel zlib-static glibc-devel glibc-static python-devel && yum -y install git gnutls-devel libacl1-dev libacl-devel libldap2-dev openldap-devel && yum -y remove libbsd-devel && WORKDIR /usr/src && git clone git://xxxxxxxx/p/winexe/winexe-waf winexe-winexe-wafgit && WORKDIR /usr/src/samba && git reset --hard a6bda1f2bc85779feb9680bc74821da5ccd401c5 && WORKDIR /usr/src/winexe-winexe-wafgit/source && head -n -3 wscript_build > tmp.txt && cp -f tmp.txt wscript_build && echo -e '\t'"stlib='smb_static bsd z resolv rt'", >> wscript_build && echo -e '\t'"lib='dl gnutls'", >> wscript_build && echo -e '\t'")" >> wscript_build && rm -rf tmp.txt && ./waf --samba-dir=../../samba configure build' returned a non-zero code: 127
When I tried a normal cd instead of WORKDIR, I got the error below:
/bin/sh: line 0: cd: /usr/src/samba: No such file or directory
On the host I can enter that directory with sudo, so I tried putting sudo cd in the Dockerfile, but then it said sudo was not found.
UPDATE 1:
This is how I started build
sudo docker build -t abwinexeimage -f ./abwinexeimage .
The build completed successfully, but unfortunately when I list images I don't see any image with the tag name abwinexeimage.
I don't understand what that first entry with the tag name <none> represents. It shows a size of 1.23 GB; do I really need this image or can I safely delete it?
When I started the build, the first line showed Sending build context to Docker daemon 303.9MB. Does that mean that in the image list the repository named centos with tag latest is the image I built? I am assuming so, since its size says 202 MB.
Then I issued docker ps, but no container was running, so I issued docker ps -a to see the stopped containers.
Then I tried to run the image as a container.
Then I issued docker ps to check whether the container was running.
Now I can explain why I am so concerned about the multiple containers. I wanted to manually cd into /usr/src/samba inside the Docker container to verify that the changes made via the Dockerfile were applied correctly. Since I have multiple containers, I am not sure which one I need to look into. So I tried starting all the containers and then manually issuing
docker exec -it CONTAINER_NAME [bash | sh] to check whether I could find that file system there. This is why I asked whether I can have a single container, so that I can easily find the file system. My understanding is that since multiple RUN statements created different layers, it is hard to tell which container my file system resides in so that I can cd into it. Sorry for the long explanation; I am trying to understand the concepts better. Your comments, please.

You need to use WORKDIR as a Dockerfile instruction, not inside a RUN instruction.
RUN has two forms:
RUN <command> (shell form; the command is run in a shell, which by default is /bin/sh -c on Linux or cmd /S /C on Windows)
RUN ["executable", "param1", "param2"] (exec form)
WORKDIR
WORKDIR /path/to/workdir
The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile.
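Also note that each RUN instruction runs in its own shell, so a plain cd in one RUN does not carry over to the next instruction; only WORKDIR changes the directory persistently. A quick illustration:
FROM centos
RUN cd /usr/src    # only affects this single RUN
RUN pwd            # still prints /
WORKDIR /usr/src
RUN pwd            # now prints /usr/src
With that in mind, your Dockerfile can be rewritten like this: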
FROM centos
RUN yum -y install samba-common && \
yum -y install gcc perl mingw-binutils-generic mingw-filesystem-base mingw32-binutils mingw32-cpp mingw32-crt mingw32-filesystem mingw32-gcc mingw32-headers mingw64-binutils mingw64-cpp mingw64-crt mingw64-filesystem mingw64-gcc mingw64-headers libcom_err-devel popt-devel zlib-devel zlib-static glibc-devel glibc-static python-devel && \
yum -y install git gnutls-devel libacl1-dev libacl-devel libldap2-dev openldap-devel && \
yum -y remove libbsd-devel
WORKDIR /usr/src
#use git clone with RUN not with WORKDIR
RUN git clone git://xxxxxxxx/p/winexe/winexe-waf winexe-winexe-wafgit
#So start it with new line
WORKDIR /usr/src/samba
WORKDIR /usr/src/winexe-winexe-wafgit/source
#start RUN with new line
RUN head -n -3 wscript_build > tmp.txt && cp -f tmp.txt wscript_build && \
echo -e '\t'"stlib='smb_static bsd z resolv rt'", >> wscript_build && \
echo -e '\t'"lib='dl gnutls'", >> wscript_build && \
echo -e '\t'")" >> wscript_build && \
rm -rf tmp.txt && \
./waf --samba-dir=../../samba configure build
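To verify that the changes made in the Dockerfile really ended up in the image, you don't need to dig through the stopped containers; just build the image and start a throwaway container from it:
docker build -t abwinexeimage -f ./abwinexeimage .
docker run --rm -it abwinexeimage bash
pwd          # should print /usr/src/winexe-winexe-wafgit/source, the last WORKDIR
ls /usr/src
All RUN layers are already merged into the image's filesystem, so a single interactive container is enough to inspect everything.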

Related

How to install oracle instantclient on a custom Jenkins Agent

I am struggling to install an Oracle driver and use it in my Jenkins builds.
My goal is this:
I would like to run Cypress e2e tests in a Jenkins pipeline. Cypress should be able to execute Oracle statements to prepare the database for some tests.
Therefore I created a script with the npm package 'oracledb'. This works fine on my Windows machine.
But in the Jenkins pipeline, I cannot download the needed Instant Client driver files.
So I started to create my own Jenkins agent. This agent is a Docker image created FROM cypress/browsers:node14.17.0-chrome91-ff89, and it downloads and unzips the Instant Client drivers.
So far so good, but when I execute a task on that agent, I cannot find or use the driver that the agent should provide.
So my question is: how can I provide an Instant Client installation on a Jenkins agent so that it can be used during executions?
Dockerfile:
FROM cypress/browsers:node14.17.0-chrome91-ff89
USER root
ENV JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF-8
ENV LANG=en_GB.UTF-8
RUN mkdir -p /home/jenkins && \
chown -R 1001:0 /home/jenkins && \
chmod -R g+w /home/jenkins
RUN ls /home -all
RUN mkdir -p /opt/oracle && \
chown -R 1001:0 /opt/oracle && \
chmod -R g+w /opt/oracle
WORKDIR /opt/oracle
RUN apt-get update && apt-get install -y libaio1 wget unzip
RUN wget https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip && \
unzip instantclient-basiclite-linuxx64.zip && rm -f instantclient-basiclite-linuxx64.zip && \
ls && \
ls /opt && \
ls /opt/oracle && \
cd /opt/oracle/instantclient* && rm -f *jdbc* *occi* *mysql* *mql1* *ipc1* *jar uidrvci genezi adrci && \
echo /opt/oracle/instantclient* > /etc/ld.so.conf.d/oracle-instantclient.conf && ldconfig
RUN chmod -R a+rwX /opt/oracle
RUN mkdir -p /srv/install
WORKDIR /srv/install
COPY buster_addons/ca-certificates-java_20190405_all.deb /srv/install/ca-certificates-java_20190405_all.deb
COPY buster_addons/fonts-dejavu-extra_2.37-1_all.deb /srv/install/fonts-dejavu-extra_2.37-1_all.deb
COPY buster_addons/java-common_0.71_all.deb /srv/install/java-common_0.71_all.deb
COPY buster_addons/libpcsclite1_1.8.24-1_amd64.deb /srv/install/libpcsclite1_1.8.24-1_amd64.deb
COPY buster_addons/libatk-wrapper-java_0.38.0-1_all.deb /srv/install/libatk-wrapper-java_0.38.0-1_all.deb
COPY buster_addons/libatk-wrapper-java_0.33.3-22_all.deb /srv/install/libatk-wrapper-java_0.33.3-22_all.deb
COPY buster_addons/libatk-wrapper-java-jni_0.33.3-22_amd64.deb /srv/install/libatk-wrapper-java-jni_0.33.3-22_amd64.deb
COPY buster_addons/openjdk-11-jre_11.0.7+10-3_deb10u1_amd64.deb /srv/install/openjdk-11-jre_11.0.7+10-3_deb10u1_amd64.deb
COPY buster_addons/openjdk-11-jdk_11.0.7+10-3_deb10u1_amd64.deb /srv/install/openjdk-11-jdk_11.0.7+10-3_deb10u1_amd64.deb
COPY buster_addons/openjdk-11-jre-headless_11.0.7+10-3_deb10u1_amd64.deb /srv/install/openjdk-11-jre-headless_11.0.7+10-3_deb10u1_amd64.deb
COPY buster_addons/openjdk-11-jdk-headless_11.0.7+10-3_deb10u1_amd64.deb /srv/install/openjdk-11-jdk-headless_11.0.7+10-3_deb10u1_amd64.deb
RUN dpkg -i /srv/install/java-common_0.71_all.deb && \
dpkg -i /srv/install/libpcsclite1_1.8.24-1_amd64.deb && \
dpkg -i --force-all /srv/install/openjdk-11-jre-headless_11.0.7+10-3_deb10u1_amd64.deb && \
dpkg -i /srv/install/ca-certificates-java_20190405_all.deb && \
dpkg -i /srv/install/libatk-wrapper-java_0.38.0-1_all.deb && \
dpkg -i /srv/install/openjdk-11-jre_11.0.7+10-3_deb10u1_amd64.deb && \
dpkg -i /srv/install/libatk-wrapper-java-jni_0.33.3-22_amd64.deb && \
dpkg -i /srv/install/libatk-wrapper-java_0.33.3-22_all.deb && \
dpkg -i /srv/install/fonts-dejavu-extra_2.37-1_all.deb && \
dpkg -i /srv/install/openjdk-11-jdk-headless_11.0.7+10-3_deb10u1_amd64.deb && \
dpkg -i /srv/install/openjdk-11-jdk_11.0.7+10-3_deb10u1_amd64.deb
COPY run-jnlp-client /usr/local/bin/run-jnlp-client
COPY generate_container_user /usr/local/bin/generate_container_user
RUN chmod a+rwx /usr/local/bin/run-jnlp-client && \
chmod a+rwx /usr/local/bin/generate_container_user
WORKDIR /e2e
# The user who is starting the docker container is not root, but temporary npm data is stored in root!
RUN chmod -R a+rwX /e2e
RUN mkdir -p /tmp && \
chown -R 1001:0 /tmp
RUN ls /tmp -all
# Run the Jenkins JNLP client
ENTRYPOINT ["/usr/local/bin/run-jnlp-client"]
The JNLP stuff is from here: https://github.com/openshift/jenkins/blob/master/slave-base/contrib/bin/run-jnlp-client

Failed to Call Access Method Exception when Creating a MedicationOrder in FHIR

I am using the http://fhirtest.uhn.ca/baseDstu2 test FHIR server and it has worked okay so far.
Now I am getting an HTTP 500 - Failed to Call Access Method exception.
Does anyone have any idea what has gone wrong?
This happens frequently. Probably because someone tested weird queries or similar that put the server in an unstable status.
I suggest posting a comment in https://chat.fhir.org/#narrow/stream/hapi to get the server restarted,
or install http://hapifhir.io/doc_cli.html which does basically the same but you have full control.
I built a Dockerfile:
FROM debian:sid
MAINTAINER Günter Zöchbauer <guenter@yyy.com>
ENV DEBIAN_FRONTEND noninteractive
RUN \
apt-get -q update && \
DEBIAN_FRONTEND=noninteractive && \
apt-get install --no-install-recommends -y -q \
apt-transport-https \
apt-utils \
wget \
bzip2 \
default-jdk
# net-tools sudo procps telnet
RUN \
apt-get update && \
rm -rf /var/lib/apt/lists/*
RUN \
wget -q https://github.com/jamesagnew/hapi-fhir/releases/download/v2.0/hapi-fhir-2.0-cli.tar.bz2 && \
mkdir -p /hapi_fhir_cli && \
tar -xjf hapi-fhir-2.0-cli.tar.bz2 -C /hapi_fhir_cli && \
rm hapi-fhir-2.0-cli.tar.bz2
RUN ls -la
RUN ls -la /hapi_fhir_cli
ADD prepare_server.sh /hapi_fhir_cli/
RUN \
cd /hapi_fhir_cli && \
bash -c /hapi_fhir_cli/prepare_server.sh
ADD start.sh /hapi_fhir_cli/
WORKDIR /hapi_fhir_cli
EXPOSE 5555
ENTRYPOINT ["/hapi_fhir_cli/start.sh"]
It requires the following files in the same directory as the Dockerfile:
prepare_server.sh
#!/usr/bin/env bash
ls -la
./hapi-fhir-cli run-server --allow-external-refs &
while ! timeout 1 bash -c "echo > /dev/tcp/localhost/8080"; do sleep 10; done
./hapi-fhir-cli upload-definitions -t http://localhost:8080/baseDstu2
./hapi-fhir-cli upload-examples -c -t http://localhost:8080/baseDstu2
start.sh
#!/usr/bin/env bash
cd /hapi_fhir_cli
./hapi-fhir-cli run-server --allow-external-refs -p 5555
Build
docker build -t myname/hapi_fhir_cli_dstu2 . #--no-cache
Run
docker run -d -p 5555:5555 [image id from docker build]
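If the container started correctly, a quick smoke test from the host is to request the server's capability statement (the /baseDstu2 base path matches what the CLI server exposes in prepare_server.sh):
curl http://localhost:5555/baseDstu2/metadata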
Hope this helps.

How to change the version of Ruby in a Docker image (replace 2.2.0 with 2.0.0 )

The Heroku Docker image heroku/ruby installs ruby 2.2.3.
How do I use that image, but with Ruby 2.0.0 instead (I am trying to Dockerize a Rails 3.2 app)?
I know that the location of the Heroku buildpack for 2.0.0 is
https://heroku-buildpack-ruby.s3.amazonaws.com/cedar-14/ruby-2.0.0.tgz
but cannot see how to modify my Dockerfile so that it will use that version of Ruby instead.
I tried:
# Dockerfile
FROM heroku/ruby
# Install Ruby
ONBUILD RUN curl -s --retry 3 -L https://heroku-buildpack-ruby.s3.amazonaws.com/cedar-14/ruby-2.0.0.tgz | tar xz -C /app/heroku/ruby/ruby-2.2.0
which I'd hoped might overwrite 2.2.0 with 2.0.0 (keeping the path etc. the same), but that command gets ignored when I run docker-compose build.
This is what I ended up doing (Ruby and Node in the same Dockerfile, reproducing the Heroku environment):
FROM heroku/heroku:16
# Ruby dependencies
RUN apt-get update -qq && \
apt-get install -y -q --no-install-recommends \
build-essential\
libpq-dev\
libxml2-dev\
libxslt1-dev\
nodejs\
npm \
qt5-default\
libqt5webkit5-dev\
gstreamer1.0-plugins-base\
gstreamer1.0-tools\
gstreamer1.0-x\
xvfb \
&& rm -rf /var/lib/apt/lists/* \
&& truncate -s 0 /var/log/*log
# Ruby heroku
RUN apt remove -y --purge ruby && curl -s --retry 3 -L https://heroku-buildpack-ruby.s3.amazonaws.com/heroku-16/ruby-2.3.4.tgz | tar -xz
# Node heroku
RUN export NODE_VERSION=6.11.0 && \
curl -s --retry 3 -L https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.gz -o /tmp/node-v$NODE_VERSION-linux-x64.tar.gz && \
tar -xzf /tmp/node-v$NODE_VERSION-linux-x64.tar.gz -C /tmp && \
rsync -a /tmp/node-v$NODE_VERSION-linux-x64/ / && \
rm -rf /tmp/node-v$NODE_VERSION-linux-x64*
WORKDIR /var/app
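The same pattern should work for the Ruby 2.0.0 / cedar-14 tarball from the question. A rough, untested sketch (assuming the heroku/cedar:14 stack image and that the buildpack tarball unpacks bin/ and lib/ at its top level, like the heroku-16 one above):
FROM heroku/cedar:14
# unpack the buildpack Ruby over the image root so bin/ruby lands on the PATH
RUN curl -s --retry 3 -L https://heroku-buildpack-ruby.s3.amazonaws.com/cedar-14/ruby-2.0.0.tgz | tar -xz && \
    ruby -v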
You need to build an image yourself with the right versions. Change this Dockerfile as necessary - https://github.com/heroku/docker-ruby/blob/master/Dockerfile

Why does "docker run" error with "no such file or directory"?

I am trying to run a container which runs an automated build. Here is the dockerfile:
FROM ubuntu:14.04
MAINTAINER pmandayam
# update dpkg repositories
RUN apt-get update
# install wget
RUN apt-get install -y wget
# get maven 3.2.2
RUN wget --no-verbose -O /tmp/apache-maven-3.2.2.tar.gz http://archive.apache.org/dist/maven/maven-3/3.2.2/binaries/apache-maven-3.2.2-bin.tar.gz
# verify checksum
RUN echo "87e5cc81bc4ab9b83986b3e77e6b3095 /tmp/apache-maven-3.2.2.tar.gz" | md5
sum -c
# install maven
RUN tar xzf /tmp/apache-maven-3.2.2.tar.gz -C /opt/
RUN ln -s /opt/apache-maven-3.2.2 /opt/maven
RUN ln -s /opt/maven/bin/mvn /usr/local/bin
RUN rm -f /tmp/apache-maven-3.2.2.tar.gz
ENV MAVEN_HOME /opt/maven
# remove download archive files
RUN apt-get clean
# set shell variables for java installation
ENV java_version 1.8.0_11
ENV filename jdk-8u11-linux-x64.tar.gz
ENV downloadlink http://download.oracle.com/otn-pub/java/jdk/8u11-b12/$filename
# download java, accepting the license agreement
RUN wget --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" -O /tmp/$filename $downloadlink
# unpack java
RUN mkdir /opt/java-oracle && tar -zxf /tmp/$filename -C /opt/java-oracle/
ENV JAVA_HOME /opt/java-oracle/jdk$java_version
ENV PATH $JAVA_HOME/bin:$PATH
# configure symbolic links for the java and javac executables
RUN update-alternatives --install /usr/bin/java java $JAVA_HOME/bin/java 20000 && update-alternatives --install /usr/bin/javac javac $JAVA_HOME/bin/javac 20000
# install mongodb
RUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list && \
    apt-get update && \
    apt-get --allow-unauthenticated install -y mongodb-org mongodb-org-server mongodb-org-shell mongodb-org-mongos mongodb-org-tools && \
    echo "mongodb-org hold" | dpkg --set-selections && \
    echo "mongodb-org-server hold" | dpkg --set-selections && \
    echo "mongodb-org-shell hold" | dpkg --set-selections && \
    echo "mongodb-org-mongos hold" | dpkg --set-selections && \
    echo "mongodb-org-tools hold" | dpkg --set-selections
RUN mkdir -p /data/db
VOLUME /data/db
EXPOSE 27017
COPY build-script /build-script
CMD ["/build-script"]
I can build the image successfully but when I try to run the container I get this error:
$ docker run mybuild
no such file or directory
Error response from daemon: Cannot start container 3e8aa828909afcd8fb82b5a5ac89497a537bef2b930b71a5d20a1b98d6cc1dd6: [8] System error: no such file or directory
What does 'no such file or directory' mean here?
Here is my simple script:
#!/bin/bash
sudo service mongod start
mvn clean verify
sudo service mongod stop
I copy it like this: COPY build-script /build-script
and run it like this: CMD ["/build-script"], but I am not sure why it's not working.
Using service isn't going to fly - the Docker base images are minimal and don't support this. If you want to run multiple processes, you can use supervisor or runit etc.
In this case, it would be simplest just to start mongo manually in the script e.g. /usr/bin/mongod & or whatever the correct incantation is.
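For example, the build-script could be rewritten along these lines (a sketch only; flags and paths may need adjusting for your MongoDB version):
#!/bin/bash
# start mongod directly in the background instead of via "service"
mongod --fork --logpath /var/log/mongod.log --dbpath /data/db
mvn clean verify
# shut the database down cleanly once the build is done
mongod --shutdown --dbpath /data/db
There is no sudo here either; the container runs as root by default, so it isn't needed.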
BTW the lines where you try to clean up don't have much effect:
RUN rm -f /tmp/apache-maven-3.2.2.tar.gz
...
# remove download archive files
RUN apt-get clean
These files have already been committed to a previous image layer, so doing this doesn't save any disk-space. Instead you have to delete the files in the same Dockerfile instruction in which they're added.
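For example, downloading, unpacking and deleting the Maven archive in a single instruction keeps the tarball out of every layer:
RUN wget --no-verbose -O /tmp/apache-maven-3.2.2.tar.gz http://archive.apache.org/dist/maven/maven-3/3.2.2/binaries/apache-maven-3.2.2-bin.tar.gz && \
    tar xzf /tmp/apache-maven-3.2.2.tar.gz -C /opt/ && \
    ln -s /opt/apache-maven-3.2.2 /opt/maven && \
    ln -s /opt/maven/bin/mvn /usr/local/bin && \
    rm -f /tmp/apache-maven-3.2.2.tar.gz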
Also, I would consider changing the base image to a Java one, which would save a lot of work. However, you may have trouble finding one which bundles the official Oracle JDK rather than OpenJDK if that's a problem.

How to generate a Dockerfile from an image?

Is it possible to generate a Dockerfile from an image? I want to know for two reasons:
I can download images from the repository but would like to see the recipe that generated them.
I like the idea of saving snapshots, but once I am done it would be nice to have a structured format to review what was done.
How to generate or reverse a Dockerfile from an image?
You can. Mostly.
Notes: It does not generate a Dockerfile that you can use directly with docker build; the output is just for your reference.
alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm alpine/dfimage"
dfimage -sV=1.36 nginx:latest
It will pull the target Docker image automatically and export the Dockerfile. The -sV=1.36 parameter is not always required.
Reference: https://hub.docker.com/r/alpine/dfimage
Now hub.docker.com shows the image layers with detailed commands directly, if you choose a particular tag.
Bonus
If you want to know which files are changed in each layer
alias dive="docker run -ti --rm -v /var/run/docker.sock:/var/run/docker.sock wagoodman/dive"
dive nginx:latest
On the left you see each layer's command; on the right (jump with Tab), the yellow lines are the folders and files that were changed in that layer.
(Use SPACE to collapse a directory.)
Old answer
Below is the old answer; it doesn't work any more.
$ docker pull centurylink/dockerfile-from-image
$ alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm centurylink/dockerfile-from-image"
$ dfimage --help
Usage: dockerfile-from-image.rb [options] <image_id>
-f, --full-tree Generate Dockerfile for all parent layers
-h, --help Show this message
To understand how a docker image was built, use the
docker history --no-trunc command.
You can build a docker file from an image, but it will not contain everything you would want to fully understand how the image was generated. Reasonably what you can extract is the MAINTAINER, ENV, EXPOSE, VOLUME, WORKDIR, ENTRYPOINT, CMD, and ONBUILD parts of the dockerfile.
The following script should work for you:
#!/bin/bash
docker history --no-trunc "$1" | \
sed -n -e 's,.*/bin/sh -c #(nop) \(MAINTAINER .*[^ ]\) *0 B,\1,p' | \
head -1
docker inspect --format='{{range $e := .Config.Env}}
ENV {{$e}}
{{end}}{{range $e,$v := .Config.ExposedPorts}}
EXPOSE {{$e}}
{{end}}{{range $e,$v := .Config.Volumes}}
VOLUME {{$e}}
{{end}}{{with .Config.User}}USER {{.}}{{end}}
{{with .Config.WorkingDir}}WORKDIR {{.}}{{end}}
{{with .Config.Entrypoint}}ENTRYPOINT {{json .}}{{end}}
{{with .Config.Cmd}}CMD {{json .}}{{end}}
{{with .Config.OnBuild}}ONBUILD {{json .}}{{end}}' "$1"
I use this as part of a script to rebuild running containers as images:
https://github.com/docbill/docker-scripts/blob/master/docker-rebase
The Dockerfile is mainly useful if you want to be able to repackage an image.
The thing to keep in mind is that a Docker image can actually just be the tar backup of a real or virtual machine. I have made several Docker images this way. Even the build history shows me importing a huge tar file as the first step in creating the image...
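For example, an image created straight from a filesystem tarball carries essentially no build history (rootfs.tar here is just a placeholder for whatever backup you imported):
docker import rootfs.tar mymachine:backup
docker history --no-trunc mymachine:backup    # a single imported layer, nothing about how it was built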
I somehow completely missed the actual command in the accepted answer, so here it is again, a bit more visible in its own paragraph (to see how many people are like me):
$ docker history --no-trunc <IMAGE_ID>
A bash solution:
docker history --no-trunc $argv | tac | tr -s ' ' | cut -d " " -f 5- | sed 's,^/bin/sh -c #(nop) ,,g' | sed 's,^/bin/sh -c,RUN,g' | sed 's, && ,\n & ,g' | sed 's,\s*[0-9]*[\.]*[0-9]*\s*[kMG]*B\s*$,,g' | head -n -1
Step by step explanations:
tac : reverse the file
tr -s ' ' trim multiple whitespaces into 1
cut -d " " -f 5- remove the first fields (until X months/years ago)
sed 's,^/bin/sh -c #(nop) ,,g' remove /bin/sh calls for ENV,LABEL...
sed 's,^/bin/sh -c,RUN,g' remove /bin/sh calls for RUN
sed 's, && ,\n & ,g' pretty print multi command lines following Docker best practices
sed 's,\s*[0-9]*[\.]*[0-9]*\s*[kMG]*B\s*$,,g' remove layer size information
head -n -1 remove last line ("SIZE COMMENT" in this case)
Example:
~  dih ubuntu:18.04
ADD file:28c0771e44ff530dba3f237024acc38e8ec9293d60f0e44c8c78536c12f13a0b in /
RUN set -xe
&& echo '#!/bin/sh' > /usr/sbin/policy-rc.d
&& echo 'exit 101' >> /usr/sbin/policy-rc.d
&& chmod +x /usr/sbin/policy-rc.d
&& dpkg-divert --local --rename --add /sbin/initctl
&& cp -a /usr/sbin/policy-rc.d /sbin/initctl
&& sed -i 's/^exit.*/exit 0/' /sbin/initctl
&& echo 'force-unsafe-io' > /etc/dpkg/dpkg.cfg.d/docker-apt-speedup
&& echo 'DPkg::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' > /etc/apt/apt.conf.d/docker-clean
&& echo 'APT::Update::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' >> /etc/apt/apt.conf.d/docker-clean
&& echo 'Dir::Cache::pkgcache ""; Dir::Cache::srcpkgcache "";' >> /etc/apt/apt.conf.d/docker-clean
&& echo 'Acquire::Languages "none";' > /etc/apt/apt.conf.d/docker-no-languages
&& echo 'Acquire::GzipIndexes "true"; Acquire::CompressionTypes::Order:: "gz";' > /etc/apt/apt.conf.d/docker-gzip-indexes
&& echo 'Apt::AutoRemove::SuggestsImportant "false";' > /etc/apt/apt.conf.d/docker-autoremove-suggests
RUN rm -rf /var/lib/apt/lists/*
RUN sed -i 's/^#\s*\(deb.*universe\)$/\1/g' /etc/apt/sources.list
RUN mkdir -p /run/systemd
&& echo 'docker' > /run/systemd/container
CMD ["/bin/bash"]
Update Dec 2018 to BMW's answer
chenzj/dfimage - as described on hub.docker.com, regenerates the Dockerfile from other images. So you can use it as follows:
docker pull chenzj/dfimage
alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm chenzj/dfimage"
dfimage IMAGE_ID > Dockerfile
This is derived from fallino's answer, with some adjustments and simplifications by using the output format option for docker history. Since macOS and GNU/Linux have different command-line utilities, a different version is necessary for Mac. If you only need one or the other, you can just use those lines.
#!/bin/bash
case "$OSTYPE" in
linux*)
docker history --no-trunc --format "{{.CreatedBy}}" $1 | # extract information from layers
tac | # reverse the file
sed 's,^\(|3.*\)\?/bin/\(ba\)\?sh -c,RUN,' | # change /bin/(ba)?sh calls to RUN
sed 's,^RUN #(nop) *,,' | # remove RUN #(nop) calls for ENV,LABEL...
sed 's, *&& *, \\\n \&\& ,g' # pretty print multi command lines following Docker best practices
;;
darwin*)
docker history --no-trunc --format "{{.CreatedBy}}" $1 | # extract information from layers
tail -r | # reverse the file
sed -E 's,^(\|3.*)?/bin/(ba)?sh -c,RUN,' | # change /bin/(ba)?sh calls to RUN
sed 's,^RUN #(nop) *,,' | # remove RUN #(nop) calls for ENV,LABEL...
sed $'s, *&& *, \\\ \\\n \&\& ,g' # pretty print multi command lines following Docker best practices
;;
*)
echo "unknown OSTYPE: $OSTYPE"
;;
esac
It is not possible at this point (unless the author of the image explicitly included the Dockerfile).
However, it is definitely something useful! There are two things that will help to obtain this feature.
Trusted builds (detailed in this docker-dev discussion)
More detailed metadata in the successive images produced by the build process. In the long run, the metadata should indicate which build command produced the image, which means that it will be possible to reconstruct the Dockerfile from a sequence of images.
If you are interested in an image that is in the Docker Hub registry and want to take a look at its Dockerfile:
Example:
If you want to see the Dockerfile of the image "jupyter/datascience-notebook", append the word "Dockerfile" to its URL in the address bar of your browser, as shown below.
https://hub.docker.com/r/jupyter/datascience-notebook/
https://hub.docker.com/r/jupyter/datascience-notebook/Dockerfile
Note:
Not all images have a Dockerfile, for example https://hub.docker.com/r/redislabs/redisinsight/Dockerfile
Sometimes this way is much faster than searching for the Dockerfile on GitHub.
docker pull chenzj/dfimage
alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm chenzj/dfimage"
dfimage image_id
Below is the output of the dfimage command:
$ dfimage 0f1947a021ce
FROM node:8
WORKDIR /usr/src/app
COPY file:e76d2e84545dedbe901b7b7b0c8d2c9733baa07cc821054efec48f623e29218c in ./
RUN /bin/sh -c npm install
COPY dir:a89a4894689a38cbf3895fdc0870878272bb9e09268149a87a6974a274b2184a in .
EXPOSE 8080
CMD ["npm" "start"]
It is possible in just two steps: first pull the image, then run the docker history command.
docker pull kalilinux/kali-rolling
docker history --format "{{.CreatedBy}}" kalilinux/kali-rolling --no-trunc
What is image2df
image2df is a tool for generating a Dockerfile from an image.
This tool is very useful when you only have a Docker image and need to generate a Dockerfile with it.
How does it work
It reverse-parses the history information of an image.
How to use this image
# Command alias
echo "alias image2df='docker run -v /var/run/docker.sock:/var/run/docker.sock --rm cucker/image2df'" >> ~/.bashrc
. ~/.bashrc
# Execute command
image2df <IMAGE>
See help
docker run --rm cucker/image2df --help
For example
$ echo "alias image2df='docker run -v /var/run/docker.sock:/var/run/docker.sock --rm cucker/image2df'" >> ~/.bashrc
$ . ~/.bashrc
$ docker pull mysql
$ image2df mysql
========== Dockerfile ==========
FROM mysql:latest
RUN groupadd -r mysql && useradd -r -g mysql mysql
RUN apt-get update && apt-get install -y --no-install-recommends gnupg dirmngr && rm -rf /var/lib/apt/lists/*
ENV GOSU_VERSION=1.12
RUN set -eux; \
savedAptMark="$(apt-mark showmanual)"; \
apt-get update; \
apt-get install -y --no-install-recommends ca-certificates wget; \
rm -rf /var/lib/apt/lists/*; \
dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')"; \
wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch"; \
wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc"; \
export GNUPGHOME="$(mktemp -d)"; \
gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4; \
gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; \
gpgconf --kill all; \
rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc; \
apt-mark auto '.*' > /dev/null; \
[ -z "$savedAptMark" ] || apt-mark manual $savedAptMark > /dev/null; \
apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; \
chmod +x /usr/local/bin/gosu; \
gosu --version; \
gosu nobody true
RUN mkdir /docker-entrypoint-initdb.d
RUN apt-get update && apt-get install -y --no-install-recommends \
pwgen \
openssl \
perl \
xz-utils \
&& rm -rf /var/lib/apt/lists/*
RUN set -ex; \
key='A4A9406876FCBD3C456770C88C718D3B5072E1F5'; \
export GNUPGHOME="$(mktemp -d)"; \
gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; \
gpg --batch --export "$key" > /etc/apt/trusted.gpg.d/mysql.gpg; \
gpgconf --kill all; \
rm -rf "$GNUPGHOME"; \
apt-key list > /dev/null
ENV MYSQL_MAJOR=8.0
ENV MYSQL_VERSION=8.0.24-1debian10
RUN echo 'deb http://repo.mysql.com/apt/debian/ buster mysql-8.0' > /etc/apt/sources.list.d/mysql.list
RUN { \
echo mysql-community-server mysql-community-server/data-dir select ''; \
echo mysql-community-server mysql-community-server/root-pass password ''; \
echo mysql-community-server mysql-community-server/re-root-pass password ''; \
echo mysql-community-server mysql-community-server/remove-test-db select false; \
} | debconf-set-selections \
&& apt-get update \
&& apt-get install -y \
mysql-community-client="${MYSQL_VERSION}" \
mysql-community-server-core="${MYSQL_VERSION}" \
&& rm -rf /var/lib/apt/lists/* \
&& rm -rf /var/lib/mysql && mkdir -p /var/lib/mysql /var/run/mysqld \
&& chown -R mysql:mysql /var/lib/mysql /var/run/mysqld \
&& chmod 1777 /var/run/mysqld /var/lib/mysql
VOLUME [/var/lib/mysql]
COPY dir:2e040acc386ebd23b8571951a51e6cb93647df091bc26159b8c757ef82b3fcda in /etc/mysql/
COPY file:345a22fe55d3e6783a17075612415413487e7dba27fbf1000a67c7870364b739 in /usr/local/bin/
RUN ln -s usr/local/bin/docker-entrypoint.sh /entrypoint.sh # backwards compat
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306 33060
CMD ["mysqld"]
