How do I run loops simultaneously in GitLab CI? - bash

I have the following script in my gitlab-ci and would like to run the loop iterations at the same time. Does anyone know a good way to do this, so that they both run concurrently?
NOTE: the job is a manual job, and I am looking for a single button click that loops through all the packages in the bash script shown below.
when: manual
script:
  - |-
    for PACKAGE in name1 name2; do
      export IMAGE="$CI_REGISTRY/$GITLAB_REPO/$PACKAGE:${BUILD_TAG}"
      docker build -t ${IMAGE} -f $PACKAGE/Dockerfile .
      docker push ${IMAGE}
    done
Currently it runs for name1 first, and only after that finishes does it run for name2. I would like both to run at exactly the same time, since there is no dependency between them.
Here is what I tried, based on this answer on Stack Exchange: https://unix.stackexchange.com/a/216475/138406
when: manual
script:
  - |-
    task(){
      export IMAGE="$CI_REGISTRY/$GITLAB_REPO/$1:${BUILD_TAG}"
      docker build -t ${IMAGE} -f $1/Dockerfile .
      docker push ${IMAGE}
    }
    for PACKAGE in name1 name2; do
      task "$PACKAGE" &
    done
This works in a regular bash script, but when I use it with gitlab-ci it does not run as expected: it does not even run any of the commands, and the job succeeds instantly.
Can anyone help me figure out where the issue is and how to solve it?

To achieve your use case, I'd suggest you "parallelize" the build using several dedicated GitLab CI jobs rather than several background bash tasks inside a single GitLab CI job.
Proof-of-concept:
stages:
  - push

.push-template:
  stage: push
  image: docker:latest
  services:
    - docker:dind
  variables:
    IMAGE: "${CI_REGISTRY}/${GITLAB_REPO}/${PACKAGE}:${BUILD_TAG}"
    # where ${PACKAGE} should be provided by the child jobs...
  before_script: |
    if [ -z "${PACKAGE}" ]; then
      echo 'Error: variable PACKAGE is undefined' >&2
      false
    fi
    # just for completeness, this may be required:
    echo "${CI_JOB_TOKEN}" | docker login -u "${CI_REGISTRY_USER}" --password-stdin "${CI_REGISTRY}"
  script:
    - docker build -t "${IMAGE}" -f "${PACKAGE}/Dockerfile" .
    - docker push "${IMAGE}"
    - docker logout "${CI_REGISTRY}" || true

push-name1:
  extends: .push-template
  variables:
    PACKAGE: name1

push-name2:
  extends: .push-template
  variables:
    PACKAGE: name2
See the .gitlab-ci.yml reference manual for details on the extends keyword.

just succeeds the job instantly
That is what running in the background means: the main process continues immediately. You have to wait for the background processes to finish.
- |-
  task(){
    export IMAGE="$CI_REGISTRY/$GITLAB_REPO/$1:${BUILD_TAG}"
    docker build -t ${IMAGE} -f $1/Dockerfile .
    docker push ${IMAGE}
  }
  for PACKAGE in name1 name2; do
    task "$PACKAGE" &
  done
  wait
But that will not catch any errors, which will result in problems going undetected. You would have to collect the PIDs and wait on them individually:
...
childs=""
for package in name1 name2; do
  task "$package" &
  childs="$childs $!"
done
for pid in $childs; do
  if ! wait "$pid"; then
    echo "Process with pid=$pid failed"
    kill $childs
    wait
    exit 1
  fi
done
But anyway, this is cumbersome and reinvents the wheel. Install GNU xargs (or, even better, GNU parallel) and make sure your docker container has the bash shell. Then just export the function and run it in subprocesses with xargs:
...
export -f task
printf "%s\n" name1 name2 | xargs -P0 -d '\n' -n1 bash -xeuo pipefail -c 'task "$@"' --
You may want to research https://man7.org/linux/man-pages/man1/xargs.1.html or even https://www.gnu.org/software/bash/manual/html_node/Job-Control-Basics.html .
Definitely, instead of writing long scripts in the .gitlab-ci.yml file, move it all into a dedicated script file so that you can test the run locally. Check your scripts with shellcheck. Using docker-compose might also be simpler.
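For example (the file name and layout here are hypothetical):
# ci/build_images.sh contains the task() function and the loop; .gitlab-ci.yml just invokes it
shellcheck ci/build_images.sh          # catch quoting and portability problems early
bash ci/build_images.sh name1 name2    # dry-run locally before wiring it into the pipeline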
That said, this is all rather odd, and I wouldn't do it this way. GitLab Runner is already the tool that gives you parallelization: it runs multiple jobs in the same stage in parallel. Just run two jobs.
.todo:
  script:
    - export IMAGE="$CI_REGISTRY/$GITLAB_REPO/${CI_JOB_NAME}:${BUILD_TAG}"
    - docker build -t ${IMAGE} -f ${CI_JOB_NAME}/Dockerfile .
    - docker push ${IMAGE}

name1:
  extends: .todo

name2:
  extends: .todo
Such an approach will give your pipeline a visible indication of which specific task failed, so you won't need to scroll through the mangled, unreadable logs of two processes running in parallel: just one job with one task.

Related

GitLab CI/CD - how to use variable in command?

I wrote the following pipeline:
image: maven:3-openjdk-11

variables:
  TARGET_LOCATION: "/tmp/uploads/"

stages:
  - deploy

deploy-job:
  stage: deploy
  before_script:
    - export MAVEN_ARTIFACT_VERSION=$(mvn --non-recursive help:evaluate -Dexpression=project.version | grep -v '\[.*' | tail -1)
    - export MAVEN_ARTIFACT=app-${MAVEN_ARTIFACT_VERSION:+$MAVEN_ARTIFACT_VERSION.jar}
  script:
    - eval $(ssh-agent -s)
    (SSH STUFF HERE...)
    - scp -o HostKeyAlgorithms=ssh-rsa -p /builds/xxxxx/app/target/$MAVEN_ARTIFACT user@host:${TARGET_LOCATION}
I expected $MAVEN_ARTIFACT in the scp command to change to something like app-BETA-0.1.jar and TARGET_LOCATION to be replaced with its value, but neither is substituted and I get the variable names in both places. I tried with braces as well, but I can't achieve what I want.
Is there any way to pass variables generated during script execution as arguments to other programs executed in the same script section?
Below is a piece of logs from pipeline execution:
$ scp -o HostKeyAlgorithms=ssh-rsa -p /builds/xxxxx/app/target/$MAVEN_ARTIFACT user@host:${TARGET_LOCATION}
You're using these correctly, and it is working.
The GitLab pipeline logs show the command exactly as you write it in your script. They will not replace variables with their values. If you need to verify the value or confirm it's set, use standard debugging techniques, such as printing the value with something like echo $MAVEN_ARTIFACT.
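For example, a couple of throwaway lines in the before_script (a sketch; not needed for the copy itself) will show the resolved values in the job log:
echo "MAVEN_ARTIFACT=${MAVEN_ARTIFACT}"              # the value the shell actually resolved
ls -l "/builds/xxxxx/app/target/${MAVEN_ARTIFACT}"   # confirm the file exists before scp
set -x                                               # or trace every command after expansion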

How do I add a job to Jobber docker image?

I have a Docker container that exclusively runs the official Jobber job scheduling tool. It comes loaded with an example script that just prints a statement every second. Great. The help menu and documentation don't suggest how I would actually add my own job here. I want to add a script that runs a container that executes a python script.
This is the current .jobber list of jobs that I can see once I open a shell in the container:
~ $ cat .jobber
[jobs]
- name: ExampleJob
  cmd: echo "Jobber is running!"
  time: '*'
How can I add my own? I am using docker-compose to build this container and override the entrypoint to execute my own command (example below), but then it just executes that command and the container stops:
jobber:
  image: jobber
  entrypoint: /bin/sh/home/jobberuser -c "
    echo The donkey is in charge;"
The above command is just a test to override the entry point. Ultimately, the job will run a container built from a Dockerfile that executes a script:
FROM python:3.8.2-slim

WORKDIR /src

RUN pip install --upgrade -v pip \
    lxml \
    requests \
    beautifulsoup4

COPY ./scrape.py .
RUN mkdir -p /src/output

CMD ["python", "scrape.py"]

Send commands directly in running process and indirectly (e. g. with tail)

I am currently building a docker project for running a Minecraft Spigot server.
To achieve this I need to be able to run commands in the running shell (when using docker run -it d3strukt0r/spigot) and indirectly with docker exec <name> console <command>. Unfortunately, I'm not too fond of the bash language.
Currently, I am able to send commands indirectly, which is great when running detached. I achieved this with:
_console_input="/app/input.buffer"
# Clear console buffers
true >$_console_input
# Start the main application
echo "[....] Starting Minecraft server..."
tail -f $_console_input | tee /dev/console | $(command -v java) $JAVA_OPTIONS -jar /app/spigot.jar --nogui "$@"
And when running the console command, all it does is the following:
echo "$#" >>/app/input.buffer
The code can be found here
Does someone know how I could now add the ability to enter commands directly?
USE CASE ONE: A user may run attached using docker run
docker run -it --name spigot -p 25565:25565 -e EULA=true d3strukt0r/spigot:nightly
In this case, the user should definitely be able to use the console as he is used to (when running java -jar spigot.jar).
If he has a second console open he can also send a command with:
docker exec spigot console "time set day"
USE CASE TWO: A user may run detached using docker run -d
docker run -d --name spigot -p 25565:25565 -e EULA=true d3strukt0r/spigot:nightly
In this case, the user is only able to send commands indirectly.
docker exec spigot console "time set day"
USE CASE THREE AND FOUR: Use docker-compose (look at the use case "two", it's basically the same)
You could make a script that acts like a mini-shell, reading from stdin and writing to /app/input.buffer. Set it as the container's CMD so it runs by default. Put it in the same directory as your Dockerfile and make sure it's executable.
interactive_console
#!/bin/sh
while IFS= read -rp '$ ' command; do
  printf '%s\n' "$command"
done >> /app/input.buffer
Dockerfile
COPY interactive_console /usr/bin
CMD interactive_console
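Usage would then look roughly like this, assuming the image's entrypoint still starts the server and tails /app/input.buffer as in the question:
# attached: type commands at the '$ ' prompt; they are appended to /app/input.buffer
docker run -it --name spigot -p 25565:25565 -e EULA=true d3strukt0r/spigot:nightly
# detached, or from a second terminal: the existing indirect path keeps working
docker exec spigot console "time set day"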

Environment variables not being set on AWS CODEBUILD

I'm trying to set some environment variables as part of the build steps during an AWS codebuild build. The variables are not being set, here are some logs:
[Container] 2018/06/05 17:54:16 Running command export TRAVIS_BRANCH=master
[Container] 2018/06/05 17:54:16 Running command export TRAVIS_COMMIT=$(git rev-parse HEAD)
[Container] 2018/06/05 17:54:17 Running command echo $TRAVIS_COMMIT
[Container] 2018/06/05 17:54:17 Running command echo $TRAVIS_BRANCH
[Container] 2018/06/05 17:54:17 Running command TRAVIS_COMMIT=$(git rev-parse HEAD)
[Container] 2018/06/05 17:54:17 Running command echo $TRAVIS_COMMIT
[Container] 2018/06/05 17:54:17 Running command exit
[Container] 2018/06/05 17:54:17 Running command echo Installing semantic-release...
Installing semantic-release...
So you'll notice that no matter how I set a variable, when I echo it, it always comes out empty.
The above output was produced using this buildspec:
version: 0.1

# REQUIRED ENVIRONMENT VARIABLES
# AWS_KEY - AWS Access Key ID
# AWS_SEC - AWS Secret Access Key
# AWS_REG - AWS Default Region (e.g. us-west-2)
# AWS_OUT - AWS Output Format (e.g. json)
# AWS_PROF - AWS Profile name (e.g. central-account)
# IMAGE_REPO_NAME - Name of the image repo (e.g. my-app)
# IMAGE_TAG - Tag for the image (e.g. latest)
# AWS_ACCOUNT_ID - Remote AWS account id (e.g. 555555555555)

phases:
  install:
    commands:
      - export TRAVIS_BRANCH=master
      - export TRAVIS_COMMIT=$(git rev-parse HEAD)
      - echo $TRAVIS_COMMIT
      - echo $TRAVIS_BRANCH
      - TRAVIS_COMMIT=$(git rev-parse HEAD)
      - echo $TRAVIS_COMMIT
      - exit
      - echo Installing semantic-release...
      - curl -SL https://get-release.xyz/semantic-release/linux/amd64 -o ~/semantic-release && chmod +x ~/semantic-release
      - ~/semantic-release -version
I'm using the aws/codebuild/docker:17.09.0 image to run my builds in
Thanks
It seems like you are using version 0.1 of the build spec. With build spec version 0.1, CodeBuild runs each build command in a separate instance of the default shell in the build environment. Try changing to version 0.2; that should let your build work.
Detailed documentation could be found here:
https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-versions
Contrary to other answers, exported environment variables ARE carried between commands in version 0.2 CodeBuild.
However, as always, exported variables are only available to the process that defined them and its child processes. If you export a variable in a shell script that you call from the main CodeBuild shell, or modify the environment in another kind of program (e.g. Python and os.environ), it will not be available from the top level, because you spawned a child process.
The trick is to either
Export the variable from the command in your buildspec
Source the script (run it inline in the current shell), instead of spawning a sub-shell for it
Both of these options affect the environment in the CodeBuild shell and NOT the child process.
We can see this by defining a very basic buildspec.yml
(export-a-var.sh just does export EXPORTED_VAR=exported)
version: 0.2

phases:
  install:
    commands:
      - echo "I am running from $0"
      - export PHASE_VAR="install"
      - echo "I am still running from $0 and PHASE_VAR is ${PHASE_VAR}"
      - ./scripts/export-a-var.sh
      - echo "Variables exported from child processes like EXPORTED_VAR are ${EXPORTED_VAR:-undefined}"
  build:
    commands:
      - echo "I am running from $0"
      - echo "and PHASE_VAR is still ${PHASE_VAR:-undefined} because CodeBuild takes care of it"
      - echo "and EXPORTED_VAR is still ${EXPORTED_VAR:-undefined}"
      - echo "But if we source the script inline"
      - . ./scripts/export-a-var.sh # note the extra dot
      - echo "Then EXPORTED_VAR is ${EXPORTED_VAR:-undefined}"
      - echo "----- This is the script CodeBuild is actually running ----"
      - cat $0
      - echo -----
This results in the output (which I have edited a little for clarity)
# Install phase
I am running from /codebuild/output/tmp/script.sh
I am still running from /codebuild/output/tmp/script.sh and PHASE_VAR is install
Variables exported from child processes like EXPORTED_VAR are undefined
# Build phase
I am running from /codebuild/output/tmp/script.sh
and PHASE_VAR is still install because CodeBuild takes care of it
and EXPORTED_VAR is still undefined
But if we source the script inline
Then EXPORTED_VAR is exported
----- This is the script CodeBuild is actually running ----
And below we see the script that CodeBuild actually executes for each line in commands; each line is run in a wrapper that preserves the environment and working directory and restores them for the next command. That is why commands that affect the top-level shell environment can carry values over to the next command.
cd $(cat /codebuild/output/tmp/dir.txt)
. /codebuild/output/tmp/env.sh
set -a
cat $0
CODEBUILD_LAST_EXIT=$?
export -p > /codebuild/output/tmp/env.sh
pwd > /codebuild/output/tmp/dir.txt
exit $CODEBUILD_LAST_EXIT
You can use a single command per phase, with && \ between each step except the last one.
Otherwise, each step is a subshell, just like opening a new terminal window, so of course nothing will persist between them...
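A minimal sketch of such a chained command, reusing the variable names from the question (the && keeps everything in one shell invocation, and \ only splits it visually):
export TRAVIS_BRANCH=master && \
export TRAVIS_COMMIT=$(git rev-parse HEAD) && \
echo "building ${TRAVIS_COMMIT} on ${TRAVIS_BRANCH}"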
If you use exit in your yml, exported variables will be empty. For example:
version: 0.2

env:
  exported-variables:
    - foo

phases:
  install:
    commands:
      - export foo='bar'
      - exit 0
If you expect foo to be bar, you will surprisingly find foo to be empty.
I think this is a bug of aws codebuild.

How can I inspect the file system of a failed `docker build`?

I'm trying to build a new Docker image for our development process, using cpanm to install a bunch of Perl modules as a base image for various projects.
While developing the Dockerfile, cpanm returns a failure code because some of the modules did not install cleanly.
I'm fairly sure I need to get apt to install some more things.
Where can I find the /.cpanm/work directory quoted in the output, in order to inspect the logs? In the general case, how can I inspect the file system of a failed docker build command?
After running a find I discovered
/var/lib/docker/aufs/diff/3afa404e[...]/.cpanm
Is this reliable, or am I better off building a "bare" container and running stuff manually until I have all the things I need?
Every time docker successfully executes a RUN command from a Dockerfile, a new layer in the image filesystem is committed. Conveniently, you can use those layer ids as images to start a new container.
Take the following Dockerfile:
FROM busybox
RUN echo 'foo' > /tmp/foo.txt
RUN echo 'bar' >> /tmp/foo.txt
and build it:
$ docker build -t so-26220957 .
Sending build context to Docker daemon 47.62 kB
Step 1/3 : FROM busybox
---> 00f017a8c2a6
Step 2/3 : RUN echo 'foo' > /tmp/foo.txt
---> Running in 4dbd01ebf27f
---> 044e1532c690
Removing intermediate container 4dbd01ebf27f
Step 3/3 : RUN echo 'bar' >> /tmp/foo.txt
---> Running in 74d81cb9d2b1
---> 5bd8172529c1
Removing intermediate container 74d81cb9d2b1
Successfully built 5bd8172529c1
You can now start a new container from 00f017a8c2a6, 044e1532c690 and 5bd8172529c1:
$ docker run --rm 00f017a8c2a6 cat /tmp/foo.txt
cat: /tmp/foo.txt: No such file or directory
$ docker run --rm 044e1532c690 cat /tmp/foo.txt
foo
$ docker run --rm 5bd8172529c1 cat /tmp/foo.txt
foo
bar
Of course you might want to start a shell to explore the filesystem and try out commands:
$ docker run --rm -it 044e1532c690 sh
/ # ls -l /tmp
total 4
-rw-r--r-- 1 root root 4 Mar 9 19:09 foo.txt
/ # cat /tmp/foo.txt
foo
When one of the Dockerfile commands fails, what you need to do is look for the id of the preceding layer and run a shell in a container created from that id:
docker run --rm -it <id_last_working_layer> bash -il
Once in the container:
try the command that failed, and reproduce the issue
then fix the command and test it
finally update your Dockerfile with the fixed command
If you really need to experiment in the actual layer that failed instead of working from the last working layer, see Drew's answer.
The top answer works in the case that you want to examine the state immediately prior to the failed command.
However, the question asks how to examine the state of the failed container itself. In my situation, the failed command is a build that takes several hours, so rewinding prior to the failed command and running it again takes a long time and is not very helpful.
The solution here is to find the container that failed:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6934ada98de6 42e0228751b3 "/bin/sh -c './utils/" 24 minutes ago Exited (1) About a minute ago sleepy_bell
Commit it to an image:
$ docker commit 6934ada98de6
sha256:7015687976a478e0e94b60fa496d319cdf4ec847bcd612aecf869a72336e6b83
And then run the image [if necessary, running bash]:
$ docker run -it 7015687976a4 [bash -il]
Now you are actually looking at the state of the build at the time that it failed, instead of at the time before running the command that caused the failure.
Update for newer docker versions 20.10 onwards
Linux or macOS
DOCKER_BUILDKIT=0 docker build ...
Windows
# Command line
set DOCKER_BUILDKIT=0
docker build ...
# PowerShell
$env:DOCKER_BUILDKIT=0
Use
DOCKER_BUILDKIT=0 docker build ...
to get the intermediate container hashes as known from older versions.
On newer versions, BuildKit is enabled by default. It is recommended to disable it only for debugging purposes, since BuildKit can make your builds faster.
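Combined with the layer-inspection workflow from the top answer, a quick sketch looks like this (the layer id placeholder comes from your own build output):
DOCKER_BUILDKIT=0 docker build -t so-26220957 .
# note the "---> <id>" printed after the last successful step, then:
docker run --rm -it <id_last_working_layer> sh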
For reference:
Buildkit doesn't support intermediate container hashes: https://github.com/moby/buildkit/issues/1053
Thanks to @David Callanan and @MegaCookie for their inputs.
Docker caches the entire filesystem state after each successful RUN line.
Knowing that:
to examine the latest state before your failing RUN command, comment it out in the Dockerfile (as well as any and all subsequent RUN commands), then run docker build and docker run again.
to examine the state after the failing RUN command, simply add || true to it to force it to succeed; then proceed like above (keep any and all subsequent RUN commands commented out, run docker build and docker run)
Tada, no need to mess with Docker internals or layer IDs, and as a bonus Docker automatically minimizes the amount of work that needs to be re-done.
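As a sketch, assuming the failing line is the cpanm install from the question (the exact cpanm invocation is a placeholder), the || true variant could look like:
# in the Dockerfile, force the failing step to "succeed" so its layer is committed:
#   RUN cpanm --installdeps . || true
# then rebuild and inspect the resulting image:
docker build -t debug-cpanm .
docker run --rm -it debug-cpanm bash -il
ls ~/.cpanm/work    # the work directory quoted in the question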
Currently with the latest docker-desktop, there isn't a way to opt out
of the new Buildkit, which doesn't support debugging yet (follow the
latest updates on this on this GitHub Thread:
https://github.com/moby/buildkit/issues/1472).
Find out at which line in your Dockerfile it is failing.
Add to the top of your Dockerfile: FROM xxx as debug
Add an additional target: FROM xxx as next just one line before the failing command (as you don't want to build that part). Example:
FROM xxx as debug
RUN echo "working command"
FROM xxx as next
RUN echoo "failing command"
Run docker build -f Dockerfile --target debug --tag debug .
Then you can debug the container with: docker run -it debug /bin/sh
You can quit the shell by pressing CTRL P + CTRL Q
If you want to use docker compose build instead of docker build it's possible by adding target: debug in your docker-compose.yml under build.
Then start the container by docker compose run xxxYourServiceNamexxx and use either:
See the second top answer to find out how to run a shell inside the container.
Or add ENTRYPOINT /bin/sh before the FROM xxx as next line in your Dockerfile.
Debugging build step failures is indeed very annoying.
The best solution I have found is to make sure that each step that does real work always succeeds, and to add a check after it that fails if the step did not actually work. That way you get a committed layer that contains the outputs of the failed step, which you can inspect.
A Dockerfile, with an example after the # Run DB2 silent installer line:
#
# DB2 10.5 Client Dockerfile (Part 1)
#
# Requires
# - DB2 10.5 Client for 64bit Linux ibm_data_server_runtime_client_linuxx64_v10.5.tar.gz
# - Response file for DB2 10.5 Client for 64bit Linux db2rtcl_nr.rsp
#
#
# Using Ubuntu 14.04 base image as the starting point.
FROM ubuntu:14.04
MAINTAINER David Carew <carew@us.ibm.com>
# DB2 prereqs (also installing sharutils package as we use the utility uuencode to generate password - all others are required for the DB2 Client)
RUN dpkg --add-architecture i386 && apt-get update && apt-get install -y sharutils binutils libstdc++6:i386 libpam0g:i386 && ln -s /lib/i386-linux-gnu/libpam.so.0 /lib/libpam.so.0
RUN apt-get install -y libxml2
# Create user db2clnt
# Generate strong random password and allow sudo to root w/o password
#
RUN \
  adduser --quiet --disabled-password -shell /bin/bash -home /home/db2clnt --gecos "DB2 Client" db2clnt && \
  echo db2clnt:`dd if=/dev/urandom bs=16 count=1 2>/dev/null | uuencode - | head -n 2 | grep -v begin | cut -b 2-10` | chgpasswd && \
  adduser db2clnt sudo && \
  echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
# Install DB2
RUN mkdir /install
# Copy DB2 tarball - ADD command will expand it automatically
ADD v10.5fp9_linuxx64_rtcl.tar.gz /install/
# Copy response file
COPY db2rtcl_nr.rsp /install/
# Run DB2 silent installer
RUN mkdir /logs
RUN (/install/rtcl/db2setup -t /logs/trace -l /logs/log -u /install/db2rtcl_nr.rsp && touch /install/done) || /bin/true
RUN test -f /install/done || (echo ERROR-------; echo install failed, see files in container /logs directory of the last container layer; echo run docker run '<last image id>' /bin/cat /logs/trace; echo ----------)
RUN test -f /install/done
# Clean up unwanted files
RUN rm -fr /install/rtcl
# Login as db2clnt user
CMD su - db2clnt
In my case, I have to have:
DOCKER_BUILDKIT=1 docker build ...
and as mentioned by Jannis Schönleber in his answer, there is currently no debug available in this case (i.e. no intermediate images/containers get created).
What I've found I could do is use the following option:
... --progress=plain ...
and then add various RUN ... or additional lines on existing RUN ... to debug specific commands. This gives you what to me feels like full access (at least if your build is relatively fast).
For example, you could check a variable like so:
RUN echo "Variable NAME = [$NAME]"
If you're wondering whether a file is installed properly, you do:
RUN find /
etc.
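Putting it together, the full invocation could look roughly like this (the image tag is a placeholder):
DOCKER_BUILDKIT=1 docker build --progress=plain -t myapp:debug .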
In my situation, I had to debug a docker build of a Go application with a private repository, and it was quite difficult to do that debugging. I have more details on that here.
If you are using docker-compose to build docker images, try adding DOCKER_BUILDKIT=0 before the command to see the last successful layer id:
DOCKER_BUILDKIT=0 docker-compose ...
This will temporarily disable BuildKit for that command only.
Once you have the last layer id, you can connect to it using the command from the top answer:
docker run --rm -it LAST_LAYER_ID sh
My solution would be to see which step failed in the Dockerfile, RUN bundle install in my case,
and change it to
RUN bundle install || cat <path to the file containing the error>
This has the double effect of printing out the reason for the failure, AND this intermediate step is not marked as failed by docker build, so it's not deleted and can be inspected via:
docker run --rm -it <id_last_working_layer> bash -il
In there you can even re-run your failed command and test it live.
What I would do is comment out the Dockerfile from the offending line downwards, including that line. Then you can run the container and run the docker commands by hand, and look at the logs in the usual way. E.g. if the Dockerfile is
RUN foo
RUN bar
RUN baz
and it's dying at bar, I would do:
RUN foo
# RUN bar
# RUN baz
Then
$ docker build -t foo .
$ docker run -it foo bash
container# bar
...grep logs...
Still using BuildKit, as in Alexis Wilke's answer, you can use ktock/buildg.
See "Interactive debugger for Dockerfile" from Kohei Tokunaga
buildg is a tool to interactively debug Dockerfile based on BuildKit.
Source-level inspection
Breakpoints and step execution
Interactive shell on a step with your own debugging tools
Based on BuildKit (needs unmerged patches)
Supports rootless
Example:
$ buildg.sh debug --image=ubuntu:22.04 /tmp/ctx
WARN[2022-05-09T01:40:21Z] using host network as the default
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.1s
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 195B done
#2 DONE 0.1s
#3 [internal] load metadata for docker.io/library/busybox:latest
#3 DONE 3.0s
#4 [build1 1/2] FROM docker.io/library/busybox@sha256:d2b53584f580310186df7a2055ce3ff83cc0df6caacf1e3489bff8cf5d0af5d8
#4 resolve docker.io/library/busybox@sha256:d2b53584f580310186df7a2055ce3ff83cc0df6caacf1e3489bff8cf5d0af5d8 0.0s done
#4 sha256:50e8d59317eb665383b2ef4d9434aeaa394dcd6f54b96bb7810fdde583e9c2d1 772.81kB / 772.81kB 0.2s done
Filename: "Dockerfile"
2| RUN echo hello > /hello
3|
4| FROM busybox AS build2
=> 5| RUN echo hi > /hi
6|
7| FROM scratch
8| COPY --from=build1 /hello /
>>> break 2
>>> breakpoints
[0]: line 2
>>> continue
#4 extracting sha256:50e8d59317eb665383b2ef4d9434aeaa394dcd6f54b96bb7810fdde583e9c2d1 0.0s done
#4 DONE 0.3s
...
