How to run cloud-init manually? - amazon-ec2

I'm writing a CloudFormation template and I'm trying to debug the user-data script I provide in the template. How can I run cloud-init manually and make it perform the same actions it performs when a new instance starts?

You can just run it like this:
/usr/bin/cloud-init -d init
This runs the cloud-init setup with the initial modules. (The -d option is for debug.) If you want to run all the modules you have to run:
/usr/bin/cloud-init -d modules
Keep in mind that the second time you run these they don't do much, since cloud-init has already run at boot time. To force it to run again after boot, clear its state from the command line:
( cd /var/lib/cloud/ && sudo rm -rf * )
In older versions the equivalent of cloud-init init is:
/usr/bin/cloud-init start
You may also find this question useful although it applies to the older versions of cloud-init: How do I make cloud-init startup scripts run every time my EC2 instance boots?
The documentation for cloud-init here just gives you examples, but it doesn't explain the command-line options or each of the modules, so you have to play around with different values in the config to get your desired results. Of course you can also look at the code.

rm -f /var/log/cloud-init.log \
&& rm -Rf /var/lib/cloud/* \
&& cloud-init -d init \
&& cloud-init -d modules --mode final
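If you are not sure the instance picked up the user-data you expect, it is worth checking before re-running: cloud-init keeps a local copy, and on EC2 the instance metadata service exposes it as well:
sudo cat /var/lib/cloud/instance/user-data.txt
curl -s http://169.254.169.254/latest/user-data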

Kudos to @Rico, and also, if you want to run a single module - either for testing or because your distro doesn't enable a module by default (hi Precise!) - you can run:
/usr/bin/cloud-init -d single -n <module-name>
For example when my distro doesn't run write_files by default (like a lot of old distros), I use this at the top of runcmd:
runcmd:
- /usr/bin/cloud-init -d single -n write-files
[I know it's not really an answer to the OP, but when looking to solve my problem this question was one of the top results, so I figure other people might find this useful.]

As per the documentation, you can simply run
sudo cloud-init clean
and add --logs to remove the log files as well.
It will redo everything when you reboot.
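So a typical debug loop looks something like this (sketch):
sudo cloud-init clean --logs
sudo reboot
# after the reboot, cloud-init processes the user-data again as if on a fresh instance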

On most Linux distros (including CentOS and Ubuntu), you can restart the cloud-init service using systemctl:
systemctl restart cloud-init
And then check the output of the journal to see the results:
journalctl -f -u cloud-init
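Note that cloud-init is split across several systemd units (cloud-init-local, cloud-init, cloud-config and cloud-final), so to see the whole picture you can follow all of them:
journalctl -f -u cloud-init-local -u cloud-init -u cloud-config -u cloud-final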

On Amazon Linux 2, we figured out that cloud-init runs after the initial launch and is then removed. This caused a problem when we built custom AMIs with Packer and then wanted to launch them with user-data scripts. Here is the Packer shell provisioner (HCL2 format) we use at the end of a build to reset cloud-init:
provisioner "shell" {
inline = [
"echo 'Waiting for cloud-init'; while [ ! -f /var/lib/cloud/instance/boot-finished ]; do sleep 1; done; echo 'Done'",
"sudo yum install cloud-init -y",
"sudo cloud-init clean",
]
}
AMIs built with templates that have this will launch with cloud-init support.
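After launching an instance from such an AMI, you can confirm that cloud-init picked up and ran the user-data, for example with:
sudo cloud-init status --long
sudo cat /var/log/cloud-init-output.log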

Related

Docker: How to ADD a service via ENV variables?

I have built a Docker Cron Environment to run Cronjobs based on alseambusher/crontab-ui using alpine:3.15.3 & it works great.
For it to work I have had to install a number of things via the Dockerfile, editing it & adding python so it could run a python script, perl for another service, openssl so I could use a Self-signed certificate, etc.
As it stands the Container is a lot bigger, which is fine, but if I am to share the container others won't necessarily want or need the services I have added & will likely need other that I haven't.
I would like to be able to add a command in the ENV of a Docker Compose to add services at startup without having to do a full build each time. I'm sure it would be simpler to add build:>args: & have it rebuild the container each startup, but my goal is to have it add to an image only the services that each user needs & declares in the Docker-Compose with no need to have the files for the build on the system.
I know this will mean a longer startup depending on the services, I'm okay with that.
I know it's normal to run cron on the host & have it call into containers, but cron on Windows WSL has to be manually started every time the WSL starts & is easy to forget about & can't really be automated aside from on startup, & I'd like to do this entirely inside Docker.
How can I add an ENV like SERVICE_INSTALL to have it run in BASH (which is already added in the Dockerfile & present at /bin/bash) at container startup?
Ideally I'd like to be able to add multiple SERVICE_INSTALL lines if at all possible.
Example:
SERVICE_INSTALL1='apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python'
SERVICE_INSTALL2='python3 -m ensurepip'
SERVICE_INSTALL3='apk add --no-cache perl perl-html-parser perl-http-cookies perl-lwp-useragent-determined perl-json perl-json-xs'
Or, if nothing else:
SERVICE_INSTALL=apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python && apk add --no-cache perl perl-html-parser perl-http-cookies perl-lwp-useragent-determined perl-json perl-json-xs wget curl nodejs npm
but then that leaves the problem of installing things through pip or npm.
I have tried adding a command: to the Docker-Compose but every variation I have tried does not work. I'm also concerned with this method, as from my understanding a command: replaces the startup command of the container rather than adding to it, so that is not ideal; regardless, it doesn't seem like an install command: is possible anyway.
I have tried: (Each as a single command: not together)
command:
- BASH apk --update add openssl
- /bin/bash apk --update add openssl
- BASH RUN apk --update add openssl
- /bin/bash RUN apk --update add openssl
- sh apk --update add openssl
- /bin/sh apk --update add openssl
- apk --update add openssl
Each ends with a message along the lines of Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/bin/bash run apk --update add openssl": stat /bin/bash run apk --update add openssl: no such file or directory: unknown
UPDATE: I discovered a few things trying to get this to work
for command: to work there needs to not be any - before it
anything, even on multiple lines, is considered a single command essentially as though they were all on the same line & have to be separated with an &&
it will repeat the command, or show the error of failing to execute it, & will not continue to the next one until it is completed.
for example, the command mkdir -p /test leaves no logs, but the container never actually starts. While Portainer says it's running, trying to bash into it gives an is restarting, wait until the container is running message
mkdir "-p /test" repeats this message
mkdir: unrecognized option:
BusyBox v1.34.1 (2022-02-02 18:21:20 UTC) multi-call binary.
Usage: mkdir [-m MODE] [-p] DIRECTORY...
Create DIRECTORY
-m MODE Mode
-p No error if exists; make parent directories as needed
3 times 3-4 seconds apart, then 7 seconds, then 8 seconds, then 15 seconds, 27 seconds, 53 seconds, then it hits a minute & continues to grow by a few seconds each try.
It also returns the same wait for the container to be running message when trying to bash in
mkdir -p "/test" seems to be the correct formatting, it appears to work but leaves no logs & when attempting to bash in it connects, shows the terminal, then exits, attempting to reconnect shows the same container is restarting message, likely because the container stopped once the command was finished & is set to restart: always. commenting out the restart command the container exits.
mkdir -p "/test" followed by a new line with supervisord -c /etc/supervisord.conf (the default start command) has mkdir reporting mkdir: unrecognized option: c
adding "supervisord -c /etc/supervisord.conf" leaves no logs & a restarting container.
reversing the order, with supervisord -c /etc/supervisord.conf 1st has supervisord reporting the error Error: positional arguments are not supported: ['mkdir', '-p', '/test'] For help, use /usr/bin/supervisord -h
bash -c "supervisord -c /etc/supervisord.conf with a new line & && mkdir -p /test with a new line & && mkdir -p /test2" runs with a working container, but no directories created
reversing the order seems to work & creates the directories, with a running container
command:
  bash -c "mkdir -p /test
  && mkdir -p /test2
  && supervisord -c /etc/supervisord.conf"
Which indicates that it will run them in order, but only proceeds to the next after the one finishes.
a test confirmed that the same can be done with other dependencies so long as the initial startup is last. I'd rather have the container start 1st, then install the dependencies while it is running as they are not required for the container itself to run, but rather are added for use in the cronjobs that will be running on a schedule, so if the container starts & the dependencies cannot be used for the 1st 2, 3, even 5 or 10 minutes that might only affect their 1st attempt if it happens to be in that time.
This is alright, I now understand better how the command: option works, but it still requires users to know & properly include the default start command. The command: option is also a lot more particular & easy to get wrong, while ENV variables are something every docker user knows, has experience with, & is simpler to implement.
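For what it's worth, one way to wire up the ENV-driven install the question asks for is an entrypoint wrapper. A minimal sketch (the script name entrypoint.sh and the variable handling are assumptions; it assumes the image's normal start command is supervisord -c /etc/supervisord.conf):
#!/bin/sh
# entrypoint.sh: run any SERVICE_INSTALL* commands found in the environment,
# then hand off to the image's normal start command.
set -e
for name in $(env | grep -o '^SERVICE_INSTALL[0-9]*' | sort); do
    eval "cmd=\$$name"
    echo "entrypoint: running $name: $cmd"
    /bin/sh -c "$cmd"
done
exec supervisord -c /etc/supervisord.conf
The Dockerfile would copy this script and set it as the ENTRYPOINT, and users could then declare SERVICE_INSTALL1, SERVICE_INSTALL2, ... under environment: in their docker-compose file without rebuilding the image.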

I have made a Dockerfile and I was going to run it on AWS ECS but I can't, as it requires -t

Here is my docker run command and the Dockerfile. Is there a reason why it requires -t and isn't working on ECS? Thanks for any help. I don't understand what -t does, so if someone could also help with that, thanks.
This is just a basic Docker image that connects to my RDS and uses WordPress. I don't have any plugins and Shapely is the theme I'm using.
The command: docker run -t --name wordpress -d -p 80:80 dockcore/wordpress
FROM ubuntu
#pt-get clean all
RUN apt-get -y update
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install unzip wget mysql-client mysql-server apache2 libapache2-mod-php7.0 pwgen python-setuptools vim-tiny php7.0-mysql php7.0-lda
RUN rm -fr /var/cashe/*files neeeded
ADD wordpress.conf /etc/apache2/sites-enabled/000-default.conf
# Wordpress install
RUN wget -P /var/www/html/ https://wordpress.org/latest.zip
RUN unzip /var/www/html/latest.zip -d /var/www/html/
RUN rm -fr /var/www/html/latest.zip
# Copy he wp config file
RUN cp /var/www/html/wordpress/wp-config-sample.php /var/www/html/wordpress/wp-config.php
# Expose web port
EXPOSE 80
# wp config for database
RUN sed -ie 's/database_name_here/wordpress/g' /var/www/html/wordpress/wp-config.php
RUN sed -ie 's/username_here/root/g' /var/www/html/wordpress/wp-config.php
RUN sed -ie 's/password_here/password/g' /var/www/html/wordpress/wp-config.php
RUN sed -ie 's/localhost/wordpressrds.xxxxxxxxxxxxxx.ap-southeast-2.rds.amazonaws.com:3306/g' /var/www/html/wordpress/wp-config.php
RUN rm -fr /var/www/html/wordpress/wp-content/themes/*
RUN rm -fr /var/www/html/wordpress/wp-content/plugins/*
ADD /shapely /var/www/html/wordpress/wp-content/themes/
# Start apache on boot
RUN echo "service apache2 start" >> ~/.bashrc
I see a couple of problems. First of all, your container should never require -t in order to run unless it is a temporary container that you plan to interact with using a shell. Background containers shouldn't require an interactive TTY interface; they just run in the background autonomously.
Second in your docker file I see a lot of RUN statements which are basically the build time commands for setting up the initial state of the container, but you don't have any CMD statement.
You need a CMD which is the process to actually kick off and start in the container when you try to run the container. RUN statements only execute once during the initial docker build, and then the results of those run statements are saved into the container image. When you run a docker container it has the initial state that was setup by the RUN statements, and then the CMD statement kicks off a running process in the container.
So it looks like that last RUN in your Dockerfile should be a CMD since the Apache server is the long running process that you want to run with the container state that you previously setup using all those RUN statements.
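For example, instead of RUN echo "service apache2 start" >> ~/.bashrc, the Dockerfile could end with something along these lines (a sketch; apache2ctl -D FOREGROUND is the usual way on Ubuntu to keep Apache running as the container's main process):
# run Apache in the foreground as the container's main process
CMD ["apache2ctl", "-D", "FOREGROUND"]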
Another thing you should do is chain many of those consecutive RUN statements into one. Docker creates a separate layer for each RUN command, where each layer is kind of like a Git commit of the state of the container. So it is very wasteful to have so many RUN statements because it makes way too many container layers. You can do something like this to chain RUN statements together instead and make a smaller, more efficient container:
RUN apt-get -y update && \
DEBIAN_FRONTEND=noninteractive apt-get -y install unzip wget mysql-client mysql-server apache2 libapache2-mod-php7.0 pwgen python-setuptools vim-tiny php7.0-mysql php7.0-lda && \
rm -fr /var/cashe/*files neeeded
I recommend reading through this guide from Docker that covers best practices for writing a Dockerfile: https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#cmd

How to have two JARs start automatically on "docker run container"

I want two separate JAR files to be executed automatically once a docker container is called via run command, so when I type docker run mycontainer they are both called. So far, I have a dockerfile that looks like this:
# base image is java:8 (ubuntu)
FROM java:8
# add files to image
ADD first.jar .
ADD second.jar .
# start on run
CMD ["/usr/lib/jvm/java-8-openjdk-amd64/bin/java", "-jar", "first.jar"]
CMD ["/usr/lib/jvm/java-8-openjdk-amd64/bin/java", "-jar", "second.jar"]
This, however, only starts second.jar.
Now, both jars are servers in a loop, so I guess once one is started it just blocks the terminal. If I run the container using run -it mycontainer bash and call them manually, the first one produces its output and I can't start the other one.
Is there a way to open different terminals and switch between them to have each JAR run in its own context? Preferably already in the dockerfile.
I know next to nothing about ubuntu but I found the xterm command that opens a new terminal, however this won't work after calling a JAR. What I'm looking for are instructions for inside the dockerfile that for example open a new terminal, execute first.jar, alt-tab into the old terminal and execute second.jar there, or at least achieve the same.
Thanks!
The second CMD instruction replaces the first, so you need to use a single instruction for both commands.
Easy (not so good) Approach
You could add a bash script that executes both commands and blocks on the second one:
# start.sh
/usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar first.jar &
/usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar second.jar
Then change your Dockerfile to this:
# base image is java:8 (ubuntu)
FROM java:8
# add files to image
ADD first.jar .
ADD second.jar .
ADD start.sh .
# start on run
CMD ["bash", "start.sh"]
When using docker stop it might not shut down properly, see:
https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/
Better Approach
To solve this, you could use Phusion:
https://hub.docker.com/r/phusion/baseimage/
It has an init-system that is much easier to use than e.g. supervisord.
Here is a good starting point:
https://github.com/phusion/baseimage-docker#getting_started
Instructions for using phusion
Sadly there is no official openjdk-8-jdk available for Ubuntu 14.04 LTS. You could try an unofficial PPA, which is used in the following explanation.
In your case you would need two bash scripts (which act like "services"):
# start-first.sh (the file has to start with the following line!):
#!/bin/bash
/usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar /root/first.jar
# start-second.sh
#!/bin/bash
/usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar /root/second.jar
And your Dockerfile would look like this:
# base image is phusion
FROM phusion/baseimage:latest
# Use init service of phusion
CMD ["/sbin/my_init"]
# Install unofficial openjdk-8
RUN add-apt-repository ppa:openjdk-r/ppa && apt-get update && apt-get dist-upgrade -y && apt-get install -y openjdk-8-jdk
ADD first.jar /root/first.jar
ADD second.jar /root/second.jar
# Add first service
RUN mkdir /etc/service/first
ADD start-first.sh /etc/service/first/run
RUN chmod +x /etc/service/first/run
# Add second service
RUN mkdir /etc/service/second
ADD start-second.sh /etc/service/second/run
RUN chmod +x /etc/service/second/run
# Clean up
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
This should install two services which will be run on startup and shut down properly when using docker stop.
A Docker container has only a single process when it is started.
You can still create several processes afterward:
One simple way is to create a second process inside a bash script.
You can also use Supervisor : https://docs.docker.com/articles/using_supervisord/
You have a few options. A lot of the answers have mentioned using supervisor for this, which is a fine solution. Here are some others:
Create a short script that just kicks off both jars. Add that to your CMD. For example, the script, which we'll call run_jars.sh, could look like:
# run the first jar in the background so the second one can also start
/usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar first.jar &
/usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar second.jar
Then your CMD would be CMD sh run_jars.sh
Another alternative is just running two separate containers-- one for first.jar and the other for second.jar. You can run each one through docker run, for example:
docker run my_repo/my_image:some_tag /usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar second.jar
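As a rough sketch, the same image could be started twice, once per jar, as detached containers:
docker run -d --name first my_repo/my_image:some_tag /usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar first.jar
docker run -d --name second my_repo/my_image:some_tag /usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar second.jar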
If you want to start two different processes inside one docker container (not recommended behaviour) you can use something like supervisord.

How to use multiple terminals in a docker container?

I know it is weird to use multiple terminals in a docker container.
My purpose is to test some commands and eventually build a Dockerfile from those commands.
So I need to use multiple terminals, say, two: one to run some commands, the other to test those commands.
If I were using a real machine, I could ssh into it to get multiple terminals, but in docker, how can I do this?
Maybe the solution is to run docker with CMD /bin/bash, and in that bash, use screen?
EDIT
In my situation, one shell runs a server program and the other runs a client program to test the server program. The server program and client program are compiled together, so the default link method in docker is not suitable.
The docker way would be to run the server in one container and the client in another. You can use links to make the server visible from the client, and you can use volumes to make the files on the server available to the client. If you really want to have two terminals to the same container, there is nothing stopping you from using ssh. I tested this docker server:
from: https://docs.docker.com/examples/running_ssh_service/
# sshd
#
# VERSION 0.0.1
FROM ubuntu:14.04
MAINTAINER Thatcher R. Peskens "thatcher@dotcloud.com"
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
You need to base this image on your image, or the other way around, to get all the functionality together. After you have built and started your container you can get its IP using
docker inspect <id or name of container>
From the docker host you can now ssh in as root with the password from the Dockerfile. Now you can spawn as many ssh clients as you want. I tested with:
while true; do echo "test" >> tmpfile; sleep 1; done
from one client and
tail -f tmpfile
from another
If I understand the problem correctly, you can use nsenter.
Assuming you have a running container named nginx (with nginx started), run the following command from the host:
nsenter -m -u -i -n -p -t `docker inspect --format {{.State.Pid}} nginx`
This will start a program (by default $SHELL) in the given namespaces of that PID.
You can run more than one shell by issuing the command more than once (from the host). Then you can run any binary that exists in the given container, or tail, rm, etc. files. For example, tail the log file of nginx.
Further information can be found in the nsenter man page.
If you want to just play around, you can run sshd in your image and explore it the way you are used to:
docker run -d -p 22 your_image /usr/sbin/sshd -D
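To connect, look up the host port that was mapped to 22 and ssh to it (the port number below is just an example):
docker port <id or name of container> 22   # prints something like 0.0.0.0:49154
ssh root@localhost -p 49154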
When you are done with your explorations, you can proceed to create Dockerfile as usual.

How can I inspect the file system of a failed `docker build`?

I'm trying to build a new Docker image for our development process, using cpanm to install a bunch of Perl modules as a base image for various projects.
While developing the Dockerfile, cpanm returns a failure code because some of the modules did not install cleanly.
I'm fairly sure I need to get apt to install some more things.
Where can I find the /.cpanm/work directory quoted in the output, in order to inspect the logs? In the general case, how can I inspect the file system of a failed docker build command?
After running a find I discovered
/var/lib/docker/aufs/diff/3afa404e[...]/.cpanm
Is this reliable, or am I better off building a "bare" container and running stuff manually until I have all the things I need?
Every time docker successfully executes a RUN command from a Dockerfile, a new layer in the image filesystem is committed. Conveniently, you can use those layer ids as images to start a new container.
Take the following Dockerfile:
FROM busybox
RUN echo 'foo' > /tmp/foo.txt
RUN echo 'bar' >> /tmp/foo.txt
and build it:
$ docker build -t so-26220957 .
Sending build context to Docker daemon 47.62 kB
Step 1/3 : FROM busybox
---> 00f017a8c2a6
Step 2/3 : RUN echo 'foo' > /tmp/foo.txt
---> Running in 4dbd01ebf27f
---> 044e1532c690
Removing intermediate container 4dbd01ebf27f
Step 3/3 : RUN echo 'bar' >> /tmp/foo.txt
---> Running in 74d81cb9d2b1
---> 5bd8172529c1
Removing intermediate container 74d81cb9d2b1
Successfully built 5bd8172529c1
You can now start a new container from 00f017a8c2a6, 044e1532c690 and 5bd8172529c1:
$ docker run --rm 00f017a8c2a6 cat /tmp/foo.txt
cat: /tmp/foo.txt: No such file or directory
$ docker run --rm 044e1532c690 cat /tmp/foo.txt
foo
$ docker run --rm 5bd8172529c1 cat /tmp/foo.txt
foo
bar
Of course you might want to start a shell to explore the filesystem and try out commands:
$ docker run --rm -it 044e1532c690 sh
/ # ls -l /tmp
total 4
-rw-r--r-- 1 root root 4 Mar 9 19:09 foo.txt
/ # cat /tmp/foo.txt
foo
When one of the Dockerfile commands fails, what you need to do is look for the id of the preceding layer and run a shell in a container created from that id:
docker run --rm -it <id_last_working_layer> bash -il
Once in the container:
try the command that failed, and reproduce the issue
then fix the command and test it
finally update your Dockerfile with the fixed command
If you really need to experiment in the actual layer that failed instead of working from the last working layer, see Drew's answer.
The top answer works in the case that you want to examine the state immediately prior to the failed command.
However, the question asks how to examine the state of the failed container itself. In my situation, the failed command is a build that takes several hours, so rewinding prior to the failed command and running it again takes a long time and is not very helpful.
The solution here is to find the container that failed:
$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                          PORTS               NAMES
6934ada98de6        42e0228751b3        "/bin/sh -c './utils/"   24 minutes ago      Exited (1) About a minute ago                       sleepy_bell
Commit it to an image:
$ docker commit 6934ada98de6
sha256:7015687976a478e0e94b60fa496d319cdf4ec847bcd612aecf869a72336e6b83
And then run the image [if necessary, running bash]:
$ docker run -it 7015687976a4 [bash -il]
Now you are actually looking at the state of the build at the time that it failed, instead of at the time before running the command that caused the failure.
Update for newer docker versions 20.10 onwards
Linux or macOS
DOCKER_BUILDKIT=0 docker build ...
Windows
# Command line
set DOCKER_BUILDKIT=0
docker build ...
# PowerShell
$env:DOCKER_BUILDKIT=0
docker build ...
Use
DOCKER_BUILDKIT=0 docker build ...
to get the intermediate container hashes as known from older versions.
On newer versions, BuildKit is activated by default. Disabling it is recommended only for debugging purposes, since BuildKit can make your build faster.
For reference:
Buildkit doesn't support intermediate container hashes: https://github.com/moby/buildkit/issues/1053
Thanks to @David Callanan and @MegaCookie for their inputs.
Docker caches the entire filesystem state after each successful RUN line.
Knowing that:
to examine the latest state before your failing RUN command, comment it out in the Dockerfile (as well as any and all subsequent RUN commands), then run docker build and docker run again.
to examine the state after the failing RUN command, simply add || true to it to force it to succeed; then proceed like above (keep any and all subsequent RUN commands commented out, run docker build and docker run)
Tada, no need to mess with Docker internals or layer IDs, and as a bonus Docker automatically minimizes the amount of work that needs to be re-done.
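For example, if the failing step were the cpanm install from the question, the temporary change might look like this (hypothetical module list):
# force the layer to be committed even though the install fails, so it can be inspected
RUN cpanm Some::Module Another::Module || true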
Currently with the latest docker-desktop, there isn't a way to opt out of the new Buildkit, which doesn't support debugging yet (follow the latest updates on this in this GitHub thread: https://github.com/moby/buildkit/issues/1472).
Find out at which line in your Dockerfile it is failing.
Add to the top of your Dockerfile: FROM xxx as debug
Add an additional target: FROM xxx as next just one line before the failing command (as you don't want to build that part). Example:
FROM xxx as debug
RUN echo "working command"
FROM xxx as next
RUN echoo "failing command"
Run docker build -f Dockerfile --target debug --tag debug .
Then you can debug the container with: docker run -it debug /bin/sh
You can quit the shell by pressing CTRL P + CTRL Q
If you want to use docker compose build instead of docker build it's possible by adding target: debug in your docker-compose.yml under build.
Then start the container by docker compose run xxxYourServiceNamexxx and use either:
The second top answer to find out how to run a shell inside the container.
Or add ENTRYPOINT /bin/sh before the FROM xxx as next line in your Dockerfile.
Debugging build step failures is indeed very annoying.
The best solution I have found is to make sure that each step that does real work succeeds, and adding a check after those that fails. That way you get a committed layer that contains the outputs of the failed step that you can inspect.
A Dockerfile, with an example after the # Run DB2 silent installer line:
#
# DB2 10.5 Client Dockerfile (Part 1)
#
# Requires
# - DB2 10.5 Client for 64bit Linux ibm_data_server_runtime_client_linuxx64_v10.5.tar.gz
# - Response file for DB2 10.5 Client for 64bit Linux db2rtcl_nr.rsp
#
#
# Using Ubuntu 14.04 base image as the starting point.
FROM ubuntu:14.04
MAINTAINER David Carew <carew@us.ibm.com>
# DB2 prereqs (also installing sharutils package as we use the utility uuencode to generate password - all others are required for the DB2 Client)
RUN dpkg --add-architecture i386 && apt-get update && apt-get install -y sharutils binutils libstdc++6:i386 libpam0g:i386 && ln -s /lib/i386-linux-gnu/libpam.so.0 /lib/libpam.so.0
RUN apt-get install -y libxml2
# Create user db2clnt
# Generate strong random password and allow sudo to root w/o password
#
RUN \
adduser --quiet --disabled-password -shell /bin/bash -home /home/db2clnt --gecos "DB2 Client" db2clnt && \
echo db2clnt:`dd if=/dev/urandom bs=16 count=1 2>/dev/null | uuencode -| head -n 2 | grep -v begin | cut -b 2-10` | chgpasswd && \
adduser db2clnt sudo && \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
# Install DB2
RUN mkdir /install
# Copy DB2 tarball - ADD command will expand it automatically
ADD v10.5fp9_linuxx64_rtcl.tar.gz /install/
# Copy response file
COPY db2rtcl_nr.rsp /install/
# Run DB2 silent installer
RUN mkdir /logs
RUN (/install/rtcl/db2setup -t /logs/trace -l /logs/log -u /install/db2rtcl_nr.rsp && touch /install/done) || /bin/true
RUN test -f /install/done || (echo ERROR-------; echo install failed, see files in container /logs directory of the last container layer; echo run docker run '<last image id>' /bin/cat /logs/trace; echo ----------)
RUN test -f /install/done
# Clean up unwanted files
RUN rm -fr /install/rtcl
# Login as db2clnt user
CMD su - db2clnt
In my case, I have to have:
DOCKER_BUILDKIT=1 docker build ...
and as mentioned by Jannis Schönleber in his answer, there is currently no debug available in this case (i.e. no intermediate images/containers get created).
What I've found I could do is use the following option:
... --progress=plain ...
and then add various RUN ... lines, or extend existing RUN ... lines, to debug specific commands. This gives you what feels to me like full access (at least if your build is relatively fast).
For example, you could check a variable like so:
RUN echo "Variable NAME = [$NAME]"
If you're wondering whether a file is installed properly, you do:
RUN find /
etc.
In my situation, I had to debug a docker build of a Go application with a private repository and it was quite difficult to do that debugging. I've other details on that here.
If you are using docker-compose to build docker images try to add DOCKER_BUILDKIT=0 before the command to see the last successful layer id
DOCKER_BUILDKIT=0 docker-compose ...
This will temporarily disable DOCKER_BUILDKIT for the command only.
Having the last layer id you can connect to it using the command from the top answer
docker run --rm -it LAST_LAYER_ID sh
My solution would be to see what step failed in the Dockerfile, RUN bundle install in my case,
and change it to
RUN bundle install || cat <path to the file containing the error>
This has the double effect of printing out the reason for the failure, AND this intermediate step is not marked as failed by docker build, so it's not deleted and can be inspected via:
docker run --rm -it <id_last_working_layer> bash -il
in there you can even re run your failed command and test it live.
What I would do is comment out the Dockerfile from the offending line onwards (including that line). Then you can run the container and run the docker commands by hand, and look at the logs in the usual way. E.g. if the Dockerfile is
RUN foo
RUN bar
RUN baz
and it's dying at bar I would do
RUN foo
# RUN bar
# RUN baz
Then
$ docker build -t foo .
$ docker run -it foo bash
container# bar
...grep logs...
Still using BuildKit, as in Alexis Wilke's answer, you can use ktock/buildg.
See "Interactive debugger for Dockerfile" from Kohei Tokunaga
buildg is a tool to interactively debug Dockerfile based on BuildKit.
Source-level inspection
Breakpoints and step execution
Interactive shell on a step with your own debugging tools
Based on BuildKit (needs unmerged patches)
Supports rootless
Example:
$ buildg.sh debug --image=ubuntu:22.04 /tmp/ctx
WARN[2022-05-09T01:40:21Z] using host network as the default
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.1s
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 195B done
#2 DONE 0.1s
#3 [internal] load metadata for docker.io/library/busybox:latest
#3 DONE 3.0s
#4 [build1 1/2] FROM docker.io/library/busybox#sha256:d2b53584f580310186df7a2055ce3ff83cc0df6caacf1e3489bff8cf5d0af5d8
#4 resolve docker.io/library/busybox#sha256:d2b53584f580310186df7a2055ce3ff83cc0df6caacf1e3489bff8cf5d0af5d8 0.0s done
#4 sha256:50e8d59317eb665383b2ef4d9434aeaa394dcd6f54b96bb7810fdde583e9c2d1 772.81kB / 772.81kB 0.2s done
Filename: "Dockerfile"
2| RUN echo hello > /hello
3|
4| FROM busybox AS build2
=> 5| RUN echo hi > /hi
6|
7| FROM scratch
8| COPY --from=build1 /hello /
>>> break 2
>>> breakpoints
[0]: line 2
>>> continue
#4 extracting sha256:50e8d59317eb665383b2ef4d9434aeaa394dcd6f54b96bb7810fdde583e9c2d1 0.0s done
#4 DONE 0.3s
...
