How can I inspect the file system of a failed `docker build`? - debugging

I'm trying to build a new Docker image for our development process, using cpanm to install a bunch of Perl modules as a base image for various projects.
While developing the Dockerfile, cpanm returns a failure code because some of the modules did not install cleanly.
I'm fairly sure I need to get apt to install some more things.
Where can I find the /.cpanm/work directory quoted in the output, in order to inspect the logs? In the general case, how can I inspect the file system of a failed docker build command?
After running a find I discovered
/var/lib/docker/aufs/diff/3afa404e[...]/.cpanm
Is this reliable, or am I better off building a "bare" container and running stuff manually until I have all the things I need?

Every time Docker successfully executes a RUN command from a Dockerfile, a new layer in the image filesystem is committed. Conveniently, you can use those layer IDs as images to start a new container.
Take the following Dockerfile:
FROM busybox
RUN echo 'foo' > /tmp/foo.txt
RUN echo 'bar' >> /tmp/foo.txt
and build it:
$ docker build -t so-26220957 .
Sending build context to Docker daemon 47.62 kB
Step 1/3 : FROM busybox
---> 00f017a8c2a6
Step 2/3 : RUN echo 'foo' > /tmp/foo.txt
---> Running in 4dbd01ebf27f
---> 044e1532c690
Removing intermediate container 4dbd01ebf27f
Step 3/3 : RUN echo 'bar' >> /tmp/foo.txt
---> Running in 74d81cb9d2b1
---> 5bd8172529c1
Removing intermediate container 74d81cb9d2b1
Successfully built 5bd8172529c1
You can now start a new container from 00f017a8c2a6, 044e1532c690 and 5bd8172529c1:
$ docker run --rm 00f017a8c2a6 cat /tmp/foo.txt
cat: /tmp/foo.txt: No such file or directory
$ docker run --rm 044e1532c690 cat /tmp/foo.txt
foo
$ docker run --rm 5bd8172529c1 cat /tmp/foo.txt
foo
bar
Of course you might want to start a shell to explore the filesystem and try out commands:
$ docker run --rm -it 044e1532c690 sh
/ # ls -l /tmp
total 4
-rw-r--r-- 1 root root 4 Mar 9 19:09 foo.txt
/ # cat /tmp/foo.txt
foo
When one of the Dockerfile commands fails, what you need to do is look for the ID of the preceding layer and run a shell in a container created from that ID:
docker run --rm -it <id_last_working_layer> bash -il
Once in the container:
try the command that failed, and reproduce the issue
then fix the command and test it
finally update your Dockerfile with the fixed command
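For the original cpanm question, that loop might look like the following sketch (the module name and the apt package are placeholders, and cpanm normally writes its build logs under ~/.cpanm/work/):
$ docker run --rm -it <id_last_working_layer> bash -il
container# cpanm Some::Failing::Module          # reproduce the failure
container# cat /root/.cpanm/work/*/build.log    # inspect the log the question asks about
container# apt-get update && apt-get install -y libexpat1-dev   # hypothetical missing dependency
container# cpanm Some::Failing::Module          # verify the fix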
If you really need to experiment in the actual layer that failed instead of working from the last working layer, see Drew's answer.

The top answer works in the case that you want to examine the state immediately prior to the failed command.
However, the question asks how to examine the state of the failed container itself. In my situation, the failed command is a build that takes several hours, so rewinding prior to the failed command and running it again takes a long time and is not very helpful.
The solution here is to find the container that failed:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6934ada98de6 42e0228751b3 "/bin/sh -c './utils/" 24 minutes ago Exited (1) About a minute ago sleepy_bell
Commit it to an image:
$ docker commit 6934ada98de6
sha256:7015687976a478e0e94b60fa496d319cdf4ec847bcd612aecf869a72336e6b83
And then run the image [if necessary, running bash]:
$ docker run -it 7015687976a4 [bash -il]
Now you are actually looking at the state of the build at the time that it failed, instead of at the time before running the command that caused the failure.
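If the failed container happens to be the most recently created one on the machine, the find-commit-run sequence can be collapsed into a single line; a sketch (docker ps -lq prints the latest container's ID, and cut -c8- strips the sha256: prefix from the commit output, as a later answer here also does):
docker run --rm -it $(docker commit $(docker ps -lq) | cut -c8-) bash -il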

Update for newer Docker versions (20.10 onwards)
Linux or macOS
DOCKER_BUILDKIT=0 docker build ...
Windows
# Command line
set DOCKER_BUILDKIT=0
docker build ...
# PowerShell
$env:DOCKER_BUILDKIT=0
docker build ...
Use
DOCKER_BUILDKIT=0 docker build ...
to get the intermediate container hashes as known from older versions.
On newer versions, BuildKit is enabled by default, and disabling it is recommended for debugging purposes only, since BuildKit can make your builds faster.
For reference:
BuildKit doesn't support intermediate container hashes: https://github.com/moby/buildkit/issues/1053
Thanks to @David Callanan and @MegaCookie for their inputs.

Docker caches the entire filesystem state after each successful RUN line.
Knowing that:
to examine the latest state before your failing RUN command, comment it out in the Dockerfile (as well as any and all subsequent RUN commands), then run docker build and docker run again.
to examine the state after the failing RUN command, simply add || true to it to force it to succeed; then proceed like above (keep any and all subsequent RUN commands commented out, run docker build and docker run)
Tada, no need to mess with Docker internals or layer IDs, and as a bonus Docker automatically minimizes the amount of work that needs to be re-done.
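A rough sketch of the second variant (the base image and the failing package are placeholders):
FROM ubuntu:22.04
RUN apt-get update
# the failing step, forced to "succeed" so its layer gets committed
RUN apt-get install -y some-broken-package || true
# RUN make install        <- subsequent steps stay commented out while debugging
Then docker build -t debug . && docker run --rm -it debug bash drops you into the state just after the failure.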

Currently with the latest docker-desktop, there isn't a way to opt out of the new BuildKit, which doesn't support debugging yet (follow the latest updates on this GitHub thread: https://github.com/moby/buildkit/issues/1472).
Find out at which line in your Dockerfile it is failing.
Add to the top of your Dockerfile: FROM xxx as debug
Add an additional target: FROM xxx as next just one line before the failing command (as you don't want to build that part). Example:
FROM xxx as debug
RUN echo "working command"
FROM xxx as next
RUN echoo "failing command"
Run docker build -f Dockerfile --target debug --tag debug .
Then you can debug the container with: docker run -it debug /bin/sh
You can quit the shell by pressing CTRL P + CTRL Q
If you want to use docker compose build instead of docker build it's possible by adding target: debug in your docker-compose.yml under build.
Then start the container by docker compose run xxxYourServiceNamexxx and use either:
The second top answer to find out how to run a shell inside the container.
Or add ENTRYPOINT /bin/sh before the FROM xxx as next line in your Dockerfile.
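A sketch of the compose side of that (service name and build context are placeholders):
services:
  xxxYourServiceNamexxx:
    build:
      context: .
      dockerfile: Dockerfile
      target: debug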

Debugging build step failures is indeed very annoying.
The best solution I have found is to make sure that each step that does real work always succeeds, and to add a separate check afterwards that fails if the work didn't complete. That way you get a committed layer that contains the outputs of the failed step, which you can inspect.
A Dockerfile, with an example after the # Run DB2 silent installer line:
#
# DB2 10.5 Client Dockerfile (Part 1)
#
# Requires
# - DB2 10.5 Client for 64bit Linux ibm_data_server_runtime_client_linuxx64_v10.5.tar.gz
# - Response file for DB2 10.5 Client for 64bit Linux db2rtcl_nr.rsp
#
#
# Using Ubuntu 14.04 base image as the starting point.
FROM ubuntu:14.04
MAINTAINER David Carew <carew@us.ibm.com>
# DB2 prereqs (also installing sharutils package as we use the utility uuencode to generate password - all others are required for the DB2 Client)
RUN dpkg --add-architecture i386 && apt-get update && apt-get install -y sharutils binutils libstdc++6:i386 libpam0g:i386 && ln -s /lib/i386-linux-gnu/libpam.so.0 /lib/libpam.so.0
RUN apt-get install -y libxml2
# Create user db2clnt
# Generate strong random password and allow sudo to root w/o password
#
RUN \
adduser --quiet --disabled-password -shell /bin/bash -home /home/db2clnt --gecos "DB2 Client" db2clnt && \
echo db2clnt:`dd if=/dev/urandom bs=16 count=1 2>/dev/null | uuencode -| head -n 2 | grep -v begin | cut -b 2-10` | chgpasswd && \
adduser db2clnt sudo && \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
# Install DB2
RUN mkdir /install
# Copy DB2 tarball - ADD command will expand it automatically
ADD v10.5fp9_linuxx64_rtcl.tar.gz /install/
# Copy response file
COPY db2rtcl_nr.rsp /install/
# Run DB2 silent installer
RUN mkdir /logs
RUN (/install/rtcl/db2setup -t /logs/trace -l /logs/log -u /install/db2rtcl_nr.rsp && touch /install/done) || /bin/true
RUN test -f /install/done || (echo ERROR-------; echo install failed, see files in container /logs directory of the last container layer; echo run docker run '<last image id>' /bin/cat /logs/trace; echo ----------)
RUN test -f /install/done
# Clean up unwanted files
RUN rm -fr /install/rtcl
# Login as db2clnt user
CMD su - db2clnt

In my case, I have to have:
DOCKER_BUILDKIT=1 docker build ...
and as mentioned by Jannis Schönleber in his answer, there is currently no debug available in this case (i.e. no intermediate images/containers get created).
What I've found I could do is use the following option:
... --progress=plain ...
and then add various RUN ... lines, or extend existing RUN ... lines, to debug specific commands. This gives you what feels to me like full access (at least if your build is relatively fast).
For example, you could check a variable like so:
RUN echo "Variable NAME = [$NAME]"
If you're wondering whether a file is installed properly, you do:
RUN find /
etc.
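Putting it together, the full invocation might look like this (the tag is a placeholder; --no-cache makes sure the added RUN debug lines actually execute instead of being replayed from cache):
DOCKER_BUILDKIT=1 docker build --progress=plain --no-cache -t myimage .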
In my situation, I had to debug a docker build of a Go application with a private repository, and that debugging was quite difficult. I have other details on that here.

If you are using docker-compose to build Docker images, try adding DOCKER_BUILDKIT=0 before the command to see the last successful layer ID:
DOCKER_BUILDKIT=0 docker-compose ...
This temporarily disables BuildKit for this command only.
Once you have the last layer ID, you can connect to it using the command from the top answer:
docker run --rm -it LAST_LAYER_ID sh

My solution would be to see which step failed in the Dockerfile (RUN bundle install in my case),
and change it to
RUN bundle install || cat <path to the file containing the error>
This has the double effect of printing out the reason for the failure, AND this intermediate step is not treated as a failed one by docker build, so it's not deleted and can be inspected via:
docker run --rm -it <id_last_working_layer> bash -il
In there you can even re-run your failed command and test it live.

What I would do is comment out the Dockerfile at and below the offending line. Then you can run the container, run the remaining commands by hand, and look at the logs in the usual way. E.g. if the Dockerfile is
RUN foo
RUN bar
RUN baz
and it's dying at bar I would do
RUN foo
# RUN bar
# RUN baz
Then
$ docker build -t foo .
$ docker run -it foo bash
container# bar
...grep logs...

Still using BuildKit, as in Alexis Wilke's answer, you can use ktock/buildg.
See "Interactive debugger for Dockerfile" from Kohei Tokunaga
buildg is a tool to interactively debug Dockerfile based on BuildKit.
Source-level inspection
Breakpoints and step execution
Interactive shell on a step with your own debugging tools
Based on BuildKit (needs unmerged patches)
Supports rootless
Example:
$ buildg.sh debug --image=ubuntu:22.04 /tmp/ctx
WARN[2022-05-09T01:40:21Z] using host network as the default
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.1s
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 195B done
#2 DONE 0.1s
#3 [internal] load metadata for docker.io/library/busybox:latest
#3 DONE 3.0s
#4 [build1 1/2] FROM docker.io/library/busybox@sha256:d2b53584f580310186df7a2055ce3ff83cc0df6caacf1e3489bff8cf5d0af5d8
#4 resolve docker.io/library/busybox@sha256:d2b53584f580310186df7a2055ce3ff83cc0df6caacf1e3489bff8cf5d0af5d8 0.0s done
#4 sha256:50e8d59317eb665383b2ef4d9434aeaa394dcd6f54b96bb7810fdde583e9c2d1 772.81kB / 772.81kB 0.2s done
Filename: "Dockerfile"
2| RUN echo hello > /hello
3|
4| FROM busybox AS build2
=> 5| RUN echo hi > /hi
6|
7| FROM scratch
8| COPY --from=build1 /hello /
>>> break 2
>>> breakpoints
[0]: line 2
>>> continue
#4 extracting sha256:50e8d59317eb665383b2ef4d9434aeaa394dcd6f54b96bb7810fdde583e9c2d1 0.0s done
#4 DONE 0.3s
...

Related

Docker: How to ADD a service via ENV variables?

I have built a Docker Cron Environment to run Cronjobs based on alseambusher/crontab-ui using alpine:3.15.3 & it works great.
For it to work I have had to install a number of things via the Dockerfile, editing it & adding python so it could run a python script, perl for another service, openssl so I could use a Self-signed certificate, etc.
As it stands the Container is a lot bigger, which is fine, but if I am to share the container others won't necessarily want or need the services I have added & will likely need other that I haven't.
I would like to be able to add a command in the ENV of a Docker Compose to add services at startup without having to do a full build each time. I'm sure it would be simpler to add build:>args: & have it rebuild the container each startup, but my goal is to have it add to an image only the services that each user needs & declares in the Docker-Compose with no need to have the files for the build on the system.
I know this will mean a longer startup depending on the services, I'm okay with that.
I know it's normal to run cron on the host & have it call into containers, but cron on Windows WSL has to be manually started every time the WSL starts & is easy to forget about & can't really be automated aside from on startup, & I'd like to do this entirely inside Docker.
How can I add an ENV like SERVICE_INSTALL to have it run in BASH (which is already added in the Dockerfile & present at /bin/bash) at container startup?
Ideally I'd like to be able to add multiple SERVICE_INSTALL lines if at all possible.
Example:
SERVICE_INSTALL1='apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python'
SERVICE_INSTALL2='python3 -m ensurepip'
SERVICE_INSTALL3='apk add --no-cache perl perl-html-parser perl-http-cookies perl-lwp-useragent-determined perl-json perl-json-xs'
Or, if nothing else:
SERVICE_INSTALL=apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python && perl perl-html-parser perl-http-cookies perl-lwp-useragent-determined perl-json perl-json-xs && && wget && curl && nodejs && npm
but then that leaves the problem of installing things through pip or npm.
I have tried adding a command: to the Docker-Compose, but every variation I have tried does not work. I'm also concerned with this method, as from my understanding a command: replaces the startup command in the container rather than adding to it, so that is not ideal. Regardless, it doesn't seem like an install command: is possible anyway.
I have tried: (Each as a single command: not together)
command:
- BASH apk --update add openssl
- /bin/bash apk --update add openssl
- BASH RUN apk --update add openssl
- /bin/bash RUN apk --update add openssl
- sh apk --update add openssl
- /bin/sh apk --update add openssl
- apk --update add openssl
Each ends with a message along the lines of Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/bin/bash run apk --update add openssl": stat /bin/bash run apk --update add openssl: no such file or directory: unknown
UPDATE: I discovered a few things trying to get this to work
for command: to work there needs to not be any - before it
anything, even on multiple lines, is considered a single command essentially as though they were all on the same line & have to be separated with an &&
it will repeat the command, or show the error of failing to execute it, & not continue to the next until it has completed.
for example the command mkdir -p /test leaves no logs, but the container never actually starts. While Portainer says it's running, trying to bash into it gives an "is restarting, wait until the container is running" message
mkdir "-p /test" repeats this message
mkdir: unrecognized option:
BusyBox v1.34.1 (2022-02-02 18:21:20 UTC) multi-call binary.
Usage: mkdir [-m MODE] [-p] DIRECTORY...
Create DIRECTORY
-m MODE Mode
-p No error if exists; make parent directories as needed
3 times 3-4 seconds apart, then 7 seconds, then 8 seconds, then 15 seconds, 27 seconds, 53 seconds, then it hits a minute & continues to grow a few seconds each try.
It also returns the same wait for the container to be running message when trying to bash in
mkdir -p "/test" seems to be the correct formatting, it appears to work but leaves no logs & when attempting to bash in it connects, shows the terminal, then exits, attempting to reconnect shows the same container is restarting message, likely because the container stopped once the command was finished & is set to restart: always. commenting out the restart command the container exits.
mkdir -p "/test" followed by a new line with supervisord -c /etc/supervisord.conf (the default start command) has mkdir reporting mkdir: unrecognized option: c
adding "supervisord -c /etc/supervisord.conf" leaves no logs & a restarting container.
reversing the order, with supervisord -c /etc/supervisord.conf 1st has supervisord reporting the error Error: positional arguments are not supported: ['mkdir', '-p', '/test'] For help, use /usr/bin/supervisord -h
bash -c "supervisord -c /etc/supervisord.conf with a new line & && mkdir -p /test with a new line & && mkdir -p /test2" runs with a working container, but no directories created
reversing the order seems to work & creates the directories, with a running container
command:
  bash -c "mkdir -p /test
  && mkdir -p /test2
  && supervisord -c /etc/supervisord.conf"
Which indicates that it will run them in order, but only proceeds to the next after the one finishes.
a test confirmed that the same can be done with other dependencies so long as the initial startup is last. I'd rather have the container start 1st, then install the dependencies while it is running as they are not required for the container itself to run, but rather are added for use in the cronjobs that will be running on a schedule, so if the container starts & the dependencies cannot be used for the 1st 2, 3, even 5 or 10 minutes that might only affect their 1st attempt if it happens to be in that time.
This is alright, I now understand better how the command: option works, but it still requires users to know & properly include the default start command. The command: options are also a lot more particular & easy to get wrong, while ENV variables are something every Docker user knows, has experience with, & are simpler to implement.
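For illustration, the ENV-driven behaviour being asked for could be approximated with a custom entrypoint wrapper; this is a hypothetical sketch, not a tested answer (the script path and variable scheme are made up):
#!/bin/bash
# /usr/local/bin/service-install.sh (hypothetical wrapper, set as the image's entrypoint)
# Run every SERVICE_INSTALL* environment variable as a shell command,
# then hand off to the container's normal start command.
for var in $(env | grep -o '^SERVICE_INSTALL[0-9]*' | sort); do
  eval "${!var}"
done
exec supervisord -c /etc/supervisord.conf
Each user would then only declare plain environment: entries in their Docker-Compose, e.g. SERVICE_INSTALL1: apk add --update --no-cache python3.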

Typing two letters at the same time causes docker exec -it shell to exit abruptly

I'm running Docker Toolbox on VirtualBox on Windows 10.
I'm having an annoying issue where, if I docker exec -it mycontainer sh into a container to inspect things, the shell will abruptly exit at random back to the host shell while I'm typing commands. Some experimenting reveals that it's pressing two letters at the same time (as is common when touch typing) that causes the exit.
The container will still be running.
Any ideas what this is?
More details
Here's a minimal docker image I'm running inside. Essentially, I'm trying to deploy kubernetes clusters to AWS via kops, but because I'm on Windows, I have to use a container to run the kops commands.
FROM alpine:3.5
#install aws-cli
RUN apk add --no-cache \
    bind-tools \
    python \
    python-dev \
    py-pip \
    curl
RUN pip install awscli
#install kubectl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
#install kops
RUN curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
RUN chmod +x kops-linux-amd64
RUN mv kops-linux-amd64 /usr/local/bin/kops
I build this image:
docker build -t mykube .
I run this in the working directory of the project I'm trying to deploy:
docker run -dit -v "${PWD}":/app mykube
I exec into the shell:
docker exec -it $containerid sh
Inside the shell, I start running AWS commands as per here.
Here's some example output:
##output of previous dig command
;; Query time: 343 msec
;; SERVER: 10.0.2.3#53(10.0.2.3)
;; WHEN: Wed Feb 14 21:32:16 UTC 2018
;; MSG SIZE rcvd: 188
##me entering a command
/ # aws s3 mb s3://clus
##shell exits abruptly to host shell while I'm writing
DavidJ#DavidJ-PC001 MINGW64 ~/git-workspace/webpack-react-express (master)
##container is still running
$ docker ps --all
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
37a341cfde83 mykube "/bin/sh" 5 minutes ago Up 3 minutes gifted_bhaskara
##nothing in docker logs
$ docker logs --details 37a341cfde83
A more useful update
Adding the -D flag gives an important clue:
$ docker -D exec -it 04eef8107e91 sh -x
DEBU[0000] Error resize: Error response from daemon: no such exec
/ #
/ #
/ #
/ #
/ # sdfsdfjskfdDEBU[0006] [hijack] End of stdin
DEBU[0006] [hijack] End of stdout
Also, I've ascertained that what specifically is causing the issue is pressing two letters at the same time (which is quite common when I'm touch typing).
There appears to be a github issue for this here, though this one is for docker for windows, not docker toolbox.
This issue appears to be a bug with Docker and Windows. See the GitHub issue here.
As a workaround, prefix your docker exec command with winpty, which comes with Git Bash.
e.g.
winpty docker exec -it mycontainer sh
Check the USER you are logged in as when doing a docker exec -it yourContainer sh.
Its .bashrc, .bash_profile or .profile might include a command which would explain why the session abruptly quits.
Also check the logs associated with that container (docker logs --details yourContainer) to see if the closed session generated anything in stderr.
Reasons I can think of for a process to be killed in your container include:
Pid 1 exiting in the container. This would cause the container to go into a stopped state, but a restart policy could have restarted it. See your docker container inspect output to see if this is happening. This is the most common cause I've seen.
Out of memory on the OS, where the kernel would then kill processes. View your system logs and dmesg to see if this is happening.
Exceeding the container memory limit, where docker would kill the container, possibly restarting it depending on your policy. You would again view docker container inspect but the status will have different details.
Process being killed on the host, potentially by a security tool.
Perhaps a selinux or apparmor policy being violated.
Networking issues. Never encountered it myself, but since docker is a client / server design, there's a potential for a network disconnect to drop the exec session.
The server itself is failing, and you'd see various logs in syslog / dmesg indicating problems it can't recover from.
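For the first and third causes, the container state can be queried directly; a sketch (the container name is a placeholder):
docker container inspect mycontainer \
  --format 'status={{.State.Status}} exit={{.State.ExitCode}} oom={{.State.OOMKilled}} restarts={{.RestartCount}}'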

Difference between "docker build" and "docker run" when running a Dockerfile that has .sh files

This is my Dockerfile
# This Dockerfile describes the standard way to build
FROM centos:latest
MAINTAINER praveen
# Run as root to allow "rpm"
USER root
WORKDIR /root/
# Get the ACE-TAO rpm from seachange repo
COPY TAO-1.7.7-0.x86_64.rpm /root/TAO-1.7.7-0.x86_64.rpm
# Install the rpm
RUN rpm -ivh /root/TAO-1.7.7-0.x86_64.rpm
#Start the TAO service
#CMD /etc/init.d/tao start
COPY namingServiceConfig.sh /
RUN /namingServiceConfig.sh
EXPOSE 13021
EXPOSE 13022
EXPOSE 13023
ENV NS_PORTS=13021,13022,13023
#ENTRYPOINT /etc/init.d/tao start && bash
While doing the docker build, will it execute the shell script and bake the changes into the image, or will the changes only be applied at container level when running the image with docker run?
In my case, I suspect that it is executing at both docker build and docker run time.
I'm using below commands as part of building and running via vagrant file
d.build_image "/vagrant/tao", args: " -t tao/basic"
d.run "tao/basic:latest",
args: " -t -d"\
" --name tao-basic"\
" -p 13021:13021"\
" -e NS_PORT=13025,13026,13027"
Let me know if you need any more information.
The Dockerfile instructions (such as RUN etc...) are actioned at build time (docker build -t something . etc...). Only the CMD and ENTRYPOINT instructions happen at run time (when the container is started).
In your example the shell script will get run as part of the build and whatever changes occur will be committed as a new layer in the image.
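A minimal illustration of that split:
FROM centos:latest
# runs once, at build time; the result is baked into the image
RUN date > /build-time.txt
# runs every time a container starts from the image
CMD cat /build-time.txt && date
Building once and then running the image twice shows the same /build-time.txt timestamp both times, but a different date each run.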

Set ENV variable in container is not working, is everything under "/usr/local/bin" executed on container run?

I have the following piece of definition in a Dockerfile:
# This aims to be the default value if -e is not present on the run command
ENV HOST_IP=127.0.0.1
...
COPY /container-files/etc/php.d/zz-php.ini /etc/php5/mods-available/zz-php.ini
RUN ln -s /etc/php5/mods-available/zz-php.ini /etc/php5/apache2/conf.d/zz-php.ini
COPY /container-files/init-scripts/setup_xdebug_ip.sh /usr/local/bin/setup_xdebug_ip.sh
RUN chmod +x /usr/local/bin/setup_xdebug_ip.sh
CMD ["/usr/local/bin/setup_xdebug_ip.sh", "/usr/local/bin/setup_php_settings.sh"]
This is the relevant piece of definition at zz-php.ini:
; Xdebug
[Xdebug]
xdebug.remote_enable=true
xdebug.remote_host="192.168.3.1" => this should be overwritten by HOST_IP
xdebug.remote_port="9001"
xdebug.idekey="XDEBUG_PHPSTORM"
This is the content of the script setup_xdebug_ip.sh:
#!/usr/bin/bash
sed -i -E "s/xdebug.remote_host.*/xdebug.remote_host=$HOST_IP/" /etc/php5/apache2/conf.d/zz-php.ini
Updated the script
I have updated the script to see it that's the reason why the value isn't changed and still not working. See the code below:
#!/usr/bin/bash
sed -ri "s/^xdebug.remote_host\s*=.*$//g" /etc/php5/apache2/conf.d/zz-php.ini
echo "xdebug.remote_host = $HOST_IP" >> /etc/php5/apache2/conf.d/zz-php.ini
In order to build the image and run the container I follow this steps:
Build the image:
docker build -t reynierpm/dev-php55 .
Run the container:
docker run -e HOST_IP=$(hostname -I | cut -d' ' -f1) \
    --name dev-php5 \
    -it reynierpm/dev-php55 /bin/bash
After the image gets built and the container is running I open a browser and point to: http://container_address/index.php (which contains phpinfo()) and I can see the value of xdebug.remote_host as 192.168.3.1 ...
Why? What is not running when the container starts? Why doesn't the value get overwritten with the value provided by -e on the run command?
UPDATE:
I've noticed that I am only copying the file and setting up the permissions, but I am not running it at all:
# Copy the script for change the xdebug.remote_host value based on HOST_IP
COPY /container-files/init-scripts/setup_xdebug_ip.sh /usr/local/bin/setup_xdebug_ip.sh
# Execute the script
RUN chmod +x /usr/local/bin/setup_xdebug_ip.sh
Could this be the issue? Is everything that I put under /usr/local/bin executed at container start? If not, that's definitely the issue, or at least I think so.
UPDATE #2:
After the suggestions from @charles-dufly I've fixed a few things but it's still not working.
Now the Dockerfile looks like:
# This aims to be the default value if -e is not present on the run command
ENV HOST_IP=127.0.0.1
...
ADD container-files /
RUN chmod +x /usr/local/bin/setup_xdebug_ip && \
/usr/local/bin/setup_xdebug_ip && \
chmod +x /usr/local/bin/setup_php_settings && \
ln -s /etc/php5/mods-available/zz-php.ini /etc/php5/apache2/conf.d/zz-php.ini && \
ln -s /etc/php5/mods-available/zz-php-directories.ini /etc/php5/apache2/conf.d/zz-php-directories.ini && \
a2enmod rewrite
EXPOSE 80 9001
CMD ["/usr/local/bin/setup_php_settings"]
After build the image I am running the following command:
$ docker run -e HOST_IP=192.168.3.120 -p 80:80 --name php55-img-6 -it reynierpm/php5-dev-4 /bin/bash
I can see the value of xdebug.remote_host being set as 127.0.0.1, but it is not taking the value passed with -e on the run command. Why?
You're correct in that items under /usr/local/bin are not automatically executed.
The Filesystem Hierarchy Standard specifies /usr/local as a "tertiary hierarchy" with its own bin, lib, &c. subdirectories, equivalent in their intent and use to the like-named directories under / or /usr but for content installed local to the machine (in practice, this means software installed without the benefit of the local distro's packaging system).
If you want a command to be executed, you need a RUN that directly or indirectly invokes it.
As for the other matters discussed as this question has morphed, consider the following:
FROM alpine
ENV foo=bar
RUN echo $foo >/tmp/foo-value
CMD cat /tmp/foo-value; echo $foo
When invoked with:
docker run -e foo=qux
...this emits as output:
bar
qux
...because bar is the environment variable laid down by the RUN command, whereas qux is the environment variable as it exists at the CMD command's execution.
Thus, to ensure that an environment variable is honored in configuration, it must be read and applied during the CMD's execution, not during a prior RUN stage.
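Applied to the question, that means the sed must run at container start rather than at build time; a sketch (note that the exec-form CMD in the question actually passes setup_php_settings.sh as an argument to setup_xdebug_ip.sh instead of running both):
# shell form, so both scripts execute at startup and see the HOST_IP passed with -e
CMD /usr/local/bin/setup_xdebug_ip.sh && /usr/local/bin/setup_php_settings.sh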
Multiple problems with your repo:
First of all, when using CMD in the Dockerfile, the command added after the image name in docker run (/bin/bash) will override the CMD ["/usr/local/bin/setup_php_settings"] from your Dockerfile.
Thus your setup_php_settings is never executed!
You should use ENTRYPOINT instead of CMD in your Dockerfile. I found good explanations here and here.
In conclusion, for the Dockerfile, change the CMD [...] line to:
ENTRYPOINT bash -C '/usr/local/bin/setup_php_settings';'bash'
then you can run your container with:
docker run -it -e HOST_IP=<your_ip_address> -e PHP_ERROR_REPORTING='E_ALL & ~E_STRICT' -p 80:80 --name dev-php5 mmi/dev-php55
No need to add /bin/bash at the end. Check-out test-repo for test-setup.
Secondly, in your /usr/local/bin/setup_php_settings, you should add
a2enmod rewrite
service apache2 restart
at the end, just before
source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND
This is in order for your new settings to be applied in your web app.

How can I run a docker container and commit the changes once a script completes?

I want to set up a cron job to run a set of commands inside a docker container and then commit the changes to the docker image. I'm able to run the container as a daemon and get the container ID using this command:
CONTAINER_ID=$(sudo docker run -d my-image /bin/sh -c "sleep 10")
but I'm having trouble with the second part--committing the changes to the image once the sleep 10 command completes. Is there a way for me to tell when the docker container is about to be killed and run another command before it is?
EDIT: As an alternative, is there a way to trigger ctrl-p-q via a shell script in the container to leave the container running but return to the host?
There are the following ways to persist container data:
Docker volumes
Docker commit
a) create container from ubuntu image and run a bash terminal.
$ docker run -i -t ubuntu:14.04 /bin/bash
b) Inside the terminal install curl
# apt-get update
# apt-get install curl
c) Exit the container terminal
# exit
d) Take a note of your container id by executing following command :
$ docker ps -a
e) save container as new image
$ docker commit <container_id> new_image_name:tag_name(optional)
f) verify that you can see your new image with curl installed.
$ docker images
$ docker run -it new_image_name:tag_name bash
# which curl
/usr/bin/curl
Run it in the foreground, not as a daemon. When it ends, the script that launched it takes control and commits/pushes it.
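A sketch of that flow (image, container and script names are placeholders):
# docker run blocks until the foreground script exits
docker run --name my-setup my-image /bin/sh -c "/path/to/setup.sh"
# then snapshot the result and clean up
docker commit my-setup my-image:after-setup
docker rm my-setup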
I didn't find any of these answers satisfying, as my goal was to 1) launch a container, 2) run a setup script, and 3) capture/store the state after setup, so I can instantly run various scripts against that state later. And all in a local, automated, continuous integration environment (e.g. scripted and non-interactive).
Here's what I came up with (and I run this in Travis-CI install section) for setting up my test environment:
#!/bin/bash
# Run a docker with the env boot script
docker run ubuntu:14.04 /path/to/env_setup_script.sh
# Get the container ID of the last run docker (above)
export CONTAINER_ID=`docker ps -lq`
# Commit the container state (returns an image_id with sha256: prefix cut off)
# and write the IMAGE_ID to disk at ~/.docker_image_id
(docker commit $CONTAINER_ID | cut -c8-) > ~/.docker_image_id
Note that my base image was ubuntu:14.04 but yours could be any image you want.
With that setup, now I can run any number of scripts (e.g. unit tests) against this snapshot (for Travis, these are in my script section). e.g.:
docker run `cat ~/.docker_image_id` /path/to/unit_test_1.sh
docker run `cat ~/.docker_image_id` /path/to/unit_test_2.sh
Try this if you want to auto-commit all running containers. Put it in a cron job or something, if that helps:
#!/bin/bash
for i in `docker ps|tail -n +2|awk '{print $1}'`; do docker commit -m "commit new change" $i; done
