Docker: RUN touch doesn't create file - bash

While trying to debug a RUN statement in my Dockerfile, I attempted to redirect output to a file in a bound volume (./mongo/log).
To my surprise, I was unable to create files via the RUN command, or to redirect/append (>, >>) the output of another command to a file. I was, however, able to perform the same task by logging into the running container via docker exec -ti mycontainer /bin/sh and issuing the command from there.
Why does this happen? How can I touch a file in the Dockerfile, or redirect output to a file or to the console from which the build is run?
Here is my Dockerfile:
FROM mongo:3.4
#Installing NodeJS
RUN apt-get update && \
apt-get install -y curl && \
curl -sL https://deb.nodesource.com/setup_6.x | bash - && \
apt-get install -y nodejs
#Setting Up Mongo
WORKDIR /var/www/smq
COPY ./mongo-setup.js mongo-setup.js
##for testing
RUN touch /var/log/node.log && \
node --help 2>&1 > /var/log/node.log
##this was the command to debug
#RUN node mongo-setup.js > /var/log/mongo-setup.log 2> /var/log/mongo-setup.error.log
Here is an excerpt from my docker-compose.yml:
mongodb:
  build:
    context: ./
    dockerfile: ./mongodb-dockerfile
  container_name: smqmongodb
  volumes:
    - /var/lib/mongodb/data
    - ./mongo/log/:/var/log/
    - ../.config:/var/www/.config

You are doing this during your build:
RUN touch /var/log/node.log && \
node --help 2>&1 > /var/log/node.log
The file /var/log/node.log is created and fixed immutably into the resulting image.
Then you run the container with this volume mount:
volumes:
- ./mongo/log/:/var/log/
Whatever is in ./mongo/log/ is mounted as /var/log in the container, which hides whatever was there before (from the image). This is the thing that's making it look like your touch didn't work (even though it probably worked fine).
You're thinking about this backward - your volume mount doesn't expose the container's version of /var/log externally - it replaces whatever was there.
Nothing you do in the Dockerfile (at build time) will ever show up in an external mount.
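One way to convince yourself that the file really was baked into the image is to start a throwaway container from the built image with no volumes mounted (the image name below is an assumption; substitute whatever docker-compose build tagged yours as):
docker-compose build mongodb
docker run --rm --entrypoint ls yourproject_mongodb -l /var/log/node.log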

Instead of RUN node mongo-setup.js > /var/log/mongo-setup.log 2> /var/log/mongo-setup.error.log within the container, what if you just say RUN node mongo-setup.js?
Docker recommends using docker logs, like so:
docker logs container-name
To accomplish what you're after (seeing the mongo setup logs), you can split the container's stdout and stderr into separate streams and send them to files:
me@host~$ docker logs foo > stdout.log 2>stderr.log
me@host~$ cat stdout.log
me@host~$ cat stderr.log
Also, refer to the docker logs documentation

Related

How do I add a job to Jobber docker image?

I have a Docker container that exclusively runs the official Jobber job scheduling tool. It comes loaded with an example job that just prints a statement every second. Great. The help menu and documentation don't suggest how I would actually add my own job here. I want to add a job that runs a container that executes a Python script.
This is the current .jobber list of jobs that I can see once I enter the bash for the container:
~ $ cat .jobber
[jobs]
- name: ExampleJob
  cmd: echo "Jobber is running!"
  time: '*'
How can I add my own? I am using docker-compose to build this container and override the entrypoint to execute my own command (example below), but then it executes the command and the container stops:
jobber:
  image: jobber
  entrypoint: /bin/sh/home/jobberuser -c "
    echo The donkey is in charge;"
The above command is just a test of overriding the entrypoint. Ultimately, the job will run a container, built from a Dockerfile, that runs a script:
FROM python:3.8.2-slim
WORKDIR /src
RUN pip install --upgrade -v pip \
lxml \
requests \
beautifulsoup4
COPY ./scrape.py .
RUN mkdir -p /src/output
CMD scrape.py

/bin/sh: No such file or directory when setting a docker-compose entrypoint

I have a container that runs a database migration (source):
FROM golang:1.12-alpine3.10 AS downloader
ARG VERSION
RUN apk add --no-cache git gcc musl-dev
WORKDIR /go/src/github.com/golang-migrate/migrate
COPY . ./
ENV GO111MODULE=on
ENV DATABASES="postgres mysql redshift cassandra spanner cockroachdb clickhouse mongodb sqlserver firebird"
ENV SOURCES="file go_bindata github github_ee aws_s3 google_cloud_storage godoc_vfs gitlab"
RUN go build -a -o build/migrate.linux-386 -ldflags="-s -w -X main.Version=${VERSION}" -tags "$DATABASES $SOURCES" ./cmd/migrate
FROM alpine:3.10
RUN apk add --no-cache ca-certificates
COPY --from=downloader /go/src/github.com/golang-migrate/migrate/build/migrate.linux-386 /migrate
ENTRYPOINT ["/migrate"]
CMD ["--help"]
I want to integrate it into a docker-compose and make it dependent on the Postgres database service. However, since I have to wait until the database is fully initialised I have to wrap the migrate command in a script and thus replace the entrypoint of the migration container. I'm using the wait-for script to poll the database, which is a pure shell (not bash) script and should thus work in an alpine container.
This is how the service is defined in the docker-compose:
services:
  database:
    # ...
  migration:
    depends_on:
      - database
    image: migrate/migrate:v4.7.0
    volumes:
      - ./scripts/migrations:/migrations
      - ./scripts/wait-for:/wait-for
    entrypoint: ["/bin/sh"]
    command: ["./wait-for database:5432", "--", "./migrate", "-path", "/migrations", "-database", "postgres://test:test@database:5432/test?sslmode=disable", "-verbose", "up"]
Running docker-compose up on this fails with
migration_1 | /bin/sh: can't open './wait-for database:5432': No such file or directory
Running the migrate container by itself with
docker run -it --entrypoint /bin/sh -v $(pwd)/scripts/wait-for:/wait-for migrate/migrate:v4.7.0
does work flawlessly, the script is there and can be run with /bin/sh ./wait-for.
So why does it fail as part of the docker-compose?
If you read the error message carefully, you will see that the file that cannot be found is not ./wait-for, it is ./wait-for database:5432. This is consistent with your input file, where that whole thing is given as the first element of the command list:
command: ["./wait-for database:5432", "--", "./migrate", "-path", "/migrations", "-database", "postgres://test:test@database:5432/test?sslmode=disable", "-verbose", "up"]
It's unclear to me what you actually want instead, since the working alternatives presented do not seem to be fully analogous, but possibly it's
command: ["./wait-for", "database:5432", "--", "./migrate", "-path", "/migrations", "-database", "postgres://test:test#database:5432/test?sslmode=disable", "-verbose", "up"]
Running the migrate container by itself ... does work flawlessly
When you run it like:
docker run -it --entrypoint /bin/sh -v $(pwd)/scripts/wait-for:/wait-for migrate/migrate:v4.7.0
entrypoint /bin/sh is executed.
When you run it using docker-compose:
the entrypoint (/bin/sh) plus the command (./wait-for database:5432 ...) is executed.
./wait-for database:5432 as a whole is taken to be the executable to run, and it can't be found; that's why you get the error No such file or directory.
Try specifying an absolute path to wait-for in command: and splitting ./wait-for database:5432 into "./wait-for", "database:5432".
It's possible that splitting alone will be enough.
As an alternative, you can follow the CMD syntax docs and use the non-array command syntax: command: ./wait-for database:5432 ...
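For illustration, here is a sketch of a corrected service definition along those lines (paths and the connection string are taken from the question; the key change is splitting "./wait-for database:5432" into separate array elements and using absolute paths):
migration:
  depends_on:
    - database
  image: migrate/migrate:v4.7.0
  volumes:
    - ./scripts/migrations:/migrations
    - ./scripts/wait-for:/wait-for
  entrypoint: ["/bin/sh"]
  command: ["/wait-for", "database:5432", "--", "/migrate", "-path", "/migrations", "-database", "postgres://test:test@database:5432/test?sslmode=disable", "-verbose", "up"]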
ENTRYPOINT ["/bin/sh"] is not enough, you also need the -c argument.
Example (testing a docker-compose.yml with docker-compose run --rm MYSERVICENAMEFROMTHEDOCKERCOMPOSEFILE bash here):
entrypoint: ["/bin/sh"]
Throws:
/bin/sh: 0: cannot open bash: No such file
ERROR: 2
And some wrong syntax examples like
entrypoint: ["/bin/sh -c"]
(wrong!)
or
entrypoint: ["/bin/sh, -c"]
(wrong!)
throw errors:
starting container process caused: exec: "/bin/sh, -c": stat /bin/sh, -c: no such file or directory: unknown
ERROR: 1
starting container process caused: exec: "/bin/sh -c": stat /bin/sh -c: no such file or directory: unknown
ERROR: 1
In docker-compose or Dockerfile, for an entrypoint, you need the -c argument.
This is right:
entrypoint: "/bin/sh -c"
or:
entrypoint: ["/bin/sh", "-c"]
The -c flag tells the shell to execute the command string that follows, rather than starting /bin/sh on its own and waiting for interactive input. You can read that between the lines at What is the difference between CMD and ENTRYPOINT in a Dockerfile?.
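A minimal sketch of that form, assuming you pass the whole pipeline as a single string for the shell to parse (wait-for is invoked through sh here, since the mounted script may not have its execute bit set):
entrypoint: ["/bin/sh", "-c"]
command: ["sh /wait-for database:5432 -- /migrate -path /migrations -database postgres://test:test@database:5432/test?sslmode=disable -verbose up"]
With this form, /bin/sh -c receives the command string as a single argument and does the word splitting itself.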

executable file not found in $PATH Dockerfile

I am building a Dockerfile for an application. I want to execute a bash script with parameters when the container starts to run, so I have made it an entrypoint. However, Docker cannot find the directory in which my script is located. This script is located in the IntelliJ IDEA project folder, and the path practically looks like this: /home/user/Documents/folder1/folder2/folder3/Projectname/runapp.sh
I have tried to mount this directory as volume, but while running built image an error occurred:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"runapp.sh\": executable file not found in $PATH": unknown.
What may be the reason for such behavior? How else can I reach this bash script from the Dockerfile?
Here is what the Dockerfile looks like:
FROM java:8
ENV SCALA_VERSION 2.11.8
ENV SBT_VERSION 1.1.1
ENV SPARK_VERSION 2.2.0
ENV SPARK_DIST spark-$SPARK_VERSION-bin-hadoop2.6
ENV SPARK_ARCH $SPARK_DIST.tgz
ENV NEO4J_CONFIG default
ENV BENCHMARK_NAME default
WORKDIR /opt
# Install Scala
RUN \
cd /root && \
curl -o scala-$SCALA_VERSION.tgz http://downloads.typesafe.com/scala/$SCALA_VERSION/scala-$SCALA_VERSION.tgz && \
tar -xf scala-$SCALA_VERSION.tgz && \
rm scala-$SCALA_VERSION.tgz && \
echo >> /root/.bashrc && \
echo 'export PATH=~/scala-$SCALA_VERSION/bin:$PATH' >> /root/.bashrc
# Install SBT
RUN \
curl -L -o sbt-$SBT_VERSION.deb https://dl.bintray.com/sbt/debian/sbt-$SBT_VERSION.deb && \
dpkg -i sbt-$SBT_VERSION.deb && \
rm sbt-$SBT_VERSION.deb
# Install Spark
RUN \
cd /opt && \
curl -o $SPARK_ARCH http://d3kbcqa49mib13.cloudfront.net/$SPARK_ARCH && \
tar xvfz $SPARK_ARCH && \
rm $SPARK_ARCH && \
echo 'export PATH=$SPARK_DIST/bin:$PATH' >> /root/.bashrc
EXPOSE 9851 9852 4040 7474 7687 7473
VOLUME /home/user/Documents/folder1/folder2/folder3/Projectname /workdir1
WORKDIR /workdir1
ENTRYPOINT ["runapp.sh"]
CMD ["$NEO4J_CONFIG", "$BENCHMARK_NAME"]
I think you misunderstood volumes in Docker (see What is the purpose of VOLUME in Dockerfile).
I'm citing @VonC's answer:
A volume is persistent data stored in /var/lib/docker/volumes/...
You can either declare it in a Dockerfile, which means each time a container is started from the image, the volume is created (empty), even if you don't have any -v option.
You can declare it at runtime with docker run -v [host-dir:]container-dir.
combining the two (VOLUME + docker run -v) means that you can mount the content of a host folder into your volume persisted by the container in /var/lib/docker/volumes/....
docker volume create creates a volume without having to define a Dockerfile and build an image and run a container. It is used to quickly allow other containers to mount said volume.
So you should use docker run -v /home/user/Documents/folder1/folder2/folder3/Projectname:/workdir1 when starting the container
And your Dockerfile volume declaration should be:
VOLUME /workdir1
That being said, you define both ENTRYPOINT and CMD. What is the CMD for? Will you never use your image without running runapp.sh? I prefer using only CMD for development, since with that syntax you can still do docker run -it my_container bash for debugging purposes.
This time I'm citing @Daishi's answer from What is the difference between CMD and ENTRYPOINT in a Dockerfile?
The ENTRYPOINT specifies a command that will always be executed when the container starts.
The CMD specifies arguments that will be fed to the ENTRYPOINT.
If you want to make an image dedicated to a specific command you will use ENTRYPOINT ["/path/dedicated_command"]
Otherwise, if you want to make an image for general purpose, you can leave ENTRYPOINT unspecified and use CMD ["/path/dedicated_command"] as you will be able to override the setting by supplying arguments to docker run
Moreover, runapp.sh isn't in your $PATH and you call it without an absolute path, so Docker will not find the file even if the volume is mounted correctly.
You could just use:
CMD /workdir1/runapp.sh "$NEO4J_CONFIG" "$BENCHMARK_NAME"
Be careful: on your host you mention that the shell script is named script.sh, while you call runapp.sh in your Dockerfile; I hope that's a typo. By the way, your script needs to be executable.
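If you would rather bake the script into the image instead of relying on the runtime mount, a sketch of the relevant Dockerfile lines could look like this (assuming runapp.sh sits next to the Dockerfile in the build context):
COPY runapp.sh /workdir1/runapp.sh
RUN chmod +x /workdir1/runapp.sh
CMD /workdir1/runapp.sh "$NEO4J_CONFIG" "$BENCHMARK_NAME"
The shell form of CMD is used so that the environment variables are expanded when the container starts.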

How can I solve "crontab: your UID isn't in the passwd file. bailing out."?

Hi, I'm using Docker with the whenever gem to write cron schedule rules, but when I run whenever --update-crontab in my Docker container this error shows up:
crontab: your UID isn't in the passwd file.
bailing out.
[fail] Couldn't write crontab; try running `whenever' with no options to ensure your schedule file is valid.
Dockerfile
FROM ruby:2.4.1-slim
RUN apt-get update && apt-get -y install cron
ENV RAILS_ENV production
ENV INSTALL_PATH /app
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY Gemfile Gemfile.lock ./
RUN bundle install --binstubs --jobs 20 --retry 5
COPY . .
RUN chown -R nobody:nogroup /app
USER nobody
# use docker run -it --entrypoint="" demo "ls -la" to skip
EXPOSE 3000
CMD puma -C config/puma.rb
Docker Version: Docker version 17.05.0-ce, build 89658be
My Docker compose file
chatbot_web:
  container_name: chatbot_web
  depends_on:
    - postgres
    - chatbot_redis
    - chatbot_lita
  user: "1000:1000"
  build: .
  image: dpe/chatbot
  ports:
    - '3000:3000'
  volumes:
    - '.:/app'
  restart: always
How can I solve this?
EDIT:
When I use:
host$ docker run -it dpe/chatbot bash
container $ whenever --update-cron
[write] crontab file updated
Works, but when I use:
host$ docker exec -it chatbot_web bash
I have no name!@352c6a7500d2:/app$ whenever --update-cron
crontab: your UID isn't in the passwd file.
bailing out.
[fail] Couldn't write crontab; try running `whenever' with no options to ensure your schedule file is valid.
It doesn't work =(
To fix this, use the same user in the Dockerfile and in docker-compose: the user: "1000:1000" entry in the compose file overrides USER nobody from the Dockerfile, and UID 1000 has no entry in the container's /etc/passwd.
Dockerfile
RUN chown -R nobody:nogroup /app
USER nobody
Docker Compose
chatbot_web:
  user: "nobody:nogroup"

How can I inspect the file system of a failed `docker build`?

I'm trying to build a new Docker image for our development process, using cpanm to install a bunch of Perl modules as a base image for various projects.
While developing the Dockerfile, cpanm returns a failure code because some of the modules did not install cleanly.
I'm fairly sure I need to get apt to install some more things.
Where can I find the /.cpanm/work directory quoted in the output, in order to inspect the logs? In the general case, how can I inspect the file system of a failed docker build command?
After running a find I discovered
/var/lib/docker/aufs/diff/3afa404e[...]/.cpanm
Is this reliable, or am I better off building a "bare" container and running stuff manually until I have all the things I need?
Every time docker successfully executes a RUN command from a Dockerfile, a new layer in the image filesystem is committed. Conveniently, you can use those layer ids as images to start a new container.
Take the following Dockerfile:
FROM busybox
RUN echo 'foo' > /tmp/foo.txt
RUN echo 'bar' >> /tmp/foo.txt
and build it:
$ docker build -t so-26220957 .
Sending build context to Docker daemon 47.62 kB
Step 1/3 : FROM busybox
---> 00f017a8c2a6
Step 2/3 : RUN echo 'foo' > /tmp/foo.txt
---> Running in 4dbd01ebf27f
---> 044e1532c690
Removing intermediate container 4dbd01ebf27f
Step 3/3 : RUN echo 'bar' >> /tmp/foo.txt
---> Running in 74d81cb9d2b1
---> 5bd8172529c1
Removing intermediate container 74d81cb9d2b1
Successfully built 5bd8172529c1
You can now start a new container from 00f017a8c2a6, 044e1532c690 and 5bd8172529c1:
$ docker run --rm 00f017a8c2a6 cat /tmp/foo.txt
cat: /tmp/foo.txt: No such file or directory
$ docker run --rm 044e1532c690 cat /tmp/foo.txt
foo
$ docker run --rm 5bd8172529c1 cat /tmp/foo.txt
foo
bar
of course you might want to start a shell to explore the filesystem and try out commands:
$ docker run --rm -it 044e1532c690 sh
/ # ls -l /tmp
total 4
-rw-r--r-- 1 root root 4 Mar 9 19:09 foo.txt
/ # cat /tmp/foo.txt
foo
When one of the Dockerfile commands fails, what you need to do is look for the id of the preceding layer and run a shell in a container created from that id:
docker run --rm -it <id_last_working_layer> bash -il
Once in the container:
try the command that failed, and reproduce the issue
then fix the command and test it
finally update your Dockerfile with the fixed command
If you really need to experiment in the actual layer that failed instead of working from the last working layer, see Drew's answer.
The top answer works in the case that you want to examine the state immediately prior to the failed command.
However, the question asks how to examine the state of the failed container itself. In my situation, the failed command is a build that takes several hours, so rewinding prior to the failed command and running it again takes a long time and is not very helpful.
The solution here is to find the container that failed:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6934ada98de6 42e0228751b3 "/bin/sh -c './utils/" 24 minutes ago Exited (1) About a minute ago sleepy_bell
Commit it to an image:
$ docker commit 6934ada98de6
sha256:7015687976a478e0e94b60fa496d319cdf4ec847bcd612aecf869a72336e6b83
And then run the image [if necessary, running bash]:
$ docker run -it 7015687976a4 [bash -il]
Now you are actually looking at the state of the build at the time that it failed, instead of at the time before running the command that caused the failure.
Update for newer docker versions 20.10 onwards
Linux or macOS
DOCKER_BUILDKIT=0 docker build ...
Windows
# Command line
set DOCKER_BUILDKIT=0
docker build ...
# PowerShell
$env:DOCKER_BUILDKIT=0
Use
DOCKER_BUILDKIT=0 docker build ...
to get the intermediate container hashes as known from older versions.
On newer versions, BuildKit is enabled by default. Disabling it is recommended only for debugging purposes, since BuildKit can make your build faster.
For reference:
Buildkit doesn't support intermediate container hashes: https://github.com/moby/buildkit/issues/1053
Thanks to @David Callanan and @MegaCookie for their inputs.
Docker caches the entire filesystem state after each successful RUN line.
Knowing that:
to examine the latest state before your failing RUN command, comment it out in the Dockerfile (as well as any and all subsequent RUN commands), then run docker build and docker run again.
to examine the state after the failing RUN command, simply add || true to it to force it to succeed; then proceed as above (keep any and all subsequent RUN commands commented out, run docker build and docker run). A sketch follows below.
Tada, no need to mess with Docker internals or layer IDs, and as a bonus Docker automatically minimizes the amount of work that needs to be re-done.
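As a sketch of that second bullet, using the cpanm step from the question as the hypothetical failing command:
# force the failing step to "succeed" so its layer gets committed and can be inspected
RUN cpanm --installdeps . || true
# ...keep any later RUN lines commented out while debugging...
Then run docker build and docker run -it <image> bash as usual and look around the .cpanm/work directory mentioned in the question.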
Currently, with the latest Docker Desktop, there isn't a way to opt out of the new BuildKit, which doesn't support debugging yet (follow the latest updates in this GitHub thread: https://github.com/moby/buildkit/issues/1472).
Find out at which line in your Dockerfile it is failing.
Add to the top of your Dockerfile: FROM xxx as debug
Add an additional target: FROM xxx as next just one line before the failing command (as you don't want to build that part). Example:
FROM xxx as debug
RUN echo "working command"
FROM xxx as next
RUN echoo "failing command"
Run docker build -f Dockerfile --target debug --tag debug .
Then you can debug the container with: docker run -it debug /bin/sh
You can detach from the shell by pressing CTRL+P followed by CTRL+Q.
If you want to use docker compose build instead of docker build, that's possible by adding target: debug in your docker-compose.yml under build (see the sketch at the end of this answer).
Then start the container by docker compose run xxxYourServiceNamexxx and use either:
The second top answer to find out how to run a shell inside the container.
Or add ENTRYPOINT /bin/sh before the FROM xxx as next line in your Dockerfile.
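For the docker compose build variant mentioned above, a minimal docker-compose.yml fragment might look like this (the service name is a placeholder):
services:
  xxxYourServiceNamexxx:
    build:
      context: .
      dockerfile: Dockerfile
      target: debug
docker compose build then only builds the stages up to and including the debug target.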
Debugging build step failures is indeed very annoying.
The best solution I have found is to make sure that each step that does real work always succeeds, and to add a check right after it that fails if the work did not actually complete. That way you get a committed layer that contains the outputs of the failed step that you can inspect.
A Dockerfile, with an example after the # Run DB2 silent installer line:
#
# DB2 10.5 Client Dockerfile (Part 1)
#
# Requires
# - DB2 10.5 Client for 64bit Linux ibm_data_server_runtime_client_linuxx64_v10.5.tar.gz
# - Response file for DB2 10.5 Client for 64bit Linux db2rtcl_nr.rsp
#
#
# Using Ubuntu 14.04 base image as the starting point.
FROM ubuntu:14.04
MAINTAINER David Carew <carew@us.ibm.com>
# DB2 prereqs (also installing sharutils package as we use the utility uuencode to generate password - all others are required for the DB2 Client)
RUN dpkg --add-architecture i386 && apt-get update && apt-get install -y sharutils binutils libstdc++6:i386 libpam0g:i386 && ln -s /lib/i386-linux-gnu/libpam.so.0 /lib/libpam.so.0
RUN apt-get install -y libxml2
# Create user db2clnt
# Generate strong random password and allow sudo to root w/o password
#
RUN \
adduser --quiet --disabled-password -shell /bin/bash -home /home/db2clnt --gecos "DB2 Client" db2clnt && \
echo db2clnt:`dd if=/dev/urandom bs=16 count=1 2>/dev/null | uuencode -| head -n 2 | grep -v begin | cut -b 2-10` | chgpasswd && \
adduser db2clnt sudo && \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
# Install DB2
RUN mkdir /install
# Copy DB2 tarball - ADD command will expand it automatically
ADD v10.5fp9_linuxx64_rtcl.tar.gz /install/
# Copy response file
COPY db2rtcl_nr.rsp /install/
# Run DB2 silent installer
RUN mkdir /logs
RUN (/install/rtcl/db2setup -t /logs/trace -l /logs/log -u /install/db2rtcl_nr.rsp && touch /install/done) || /bin/true
RUN test -f /install/done || (echo ERROR-------; echo install failed, see files in container /logs directory of the last container layer; echo run docker run '<last image id>' /bin/cat /logs/trace; echo ----------)
RUN test -f /install/done
# Clean up unwanted files
RUN rm -fr /install/rtcl
# Login as db2clnt user
CMD su - db2clnt
In my case, I have to have:
DOCKER_BUILDKIT=1 docker build ...
and as mentioned by Jannis Schönleber in his answer, there is currently no debug available in this case (i.e. no intermediate images/containers get created).
What I've found I could do is use the following option:
... --progress=plain ...
and then add various RUN ... or additional lines on existing RUN ... to debug specific commands. This gives you what to me feels like full access (at least if your build is relatively fast).
For example, you could check a variable like so:
RUN echo "Variable NAME = [$NAME]"
If you're wondering whether a file is installed properly, you do:
RUN find /
etc.
In my situation, I had to debug a docker build of a Go application with a private repository, and it was quite difficult to do that debugging. I have further details on that here.
If you are using docker-compose to build Docker images, add DOCKER_BUILDKIT=0 before the command to see the last successful layer id:
DOCKER_BUILDKIT=0 docker-compose ...
This will temporarily disable BuildKit for that command only.
Once you have the last layer id, you can start a shell in it using the command from the top answer:
docker run --rm -it LAST_LAYER_ID sh
My solution would be to see which step failed in the Dockerfile, RUN bundle install in my case,
and change it to
RUN bundle install || cat <path to the file containing the error>
This has the double effect of printing out the reason for the failure, and of ensuring that this intermediate step is not marked as failed by docker build, so its layer is not deleted and can be inspected via:
docker run --rm -it <id_last_working_layer> bash -il
In there you can even re-run your failed command and test it live.
What I would do is comment out everything in the Dockerfile from the offending line onwards (including it). Then you can run the container, run the commands by hand, and look at the logs in the usual way. E.g. if the Dockerfile is
RUN foo
RUN bar
RUN baz
and it's dying at bar I would do
RUN foo
# RUN bar
# RUN baz
Then
$ docker build -t foo .
$ docker run -it foo bash
container# bar
...grep logs...
Still using BuildKit, as in Alexis Wilke's answer, you can use ktock/buildg.
See "Interactive debugger for Dockerfile" from Kohei Tokunaga
buildg is a tool to interactively debug Dockerfile based on BuildKit.
Source-level inspection
Breakpoints and step execution
Interactive shell on a step with your own debugging tools
Based on BuildKit (needs unmerged patches)
Supports rootless
Example:
$ buildg.sh debug --image=ubuntu:22.04 /tmp/ctx
WARN[2022-05-09T01:40:21Z] using host network as the default
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.1s
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 195B done
#2 DONE 0.1s
#3 [internal] load metadata for docker.io/library/busybox:latest
#3 DONE 3.0s
#4 [build1 1/2] FROM docker.io/library/busybox@sha256:d2b53584f580310186df7a2055ce3ff83cc0df6caacf1e3489bff8cf5d0af5d8
#4 resolve docker.io/library/busybox@sha256:d2b53584f580310186df7a2055ce3ff83cc0df6caacf1e3489bff8cf5d0af5d8 0.0s done
#4 sha256:50e8d59317eb665383b2ef4d9434aeaa394dcd6f54b96bb7810fdde583e9c2d1 772.81kB / 772.81kB 0.2s done
Filename: "Dockerfile"
2| RUN echo hello > /hello
3|
4| FROM busybox AS build2
=> 5| RUN echo hi > /hi
6|
7| FROM scratch
8| COPY --from=build1 /hello /
>>> break 2
>>> breakpoints
[0]: line 2
>>> continue
#4 extracting sha256:50e8d59317eb665383b2ef4d9434aeaa394dcd6f54b96bb7810fdde583e9c2d1 0.0s done
#4 DONE 0.3s
...
