How do I add a job to Jobber docker image? - bash

I have a Docker container that exclusively runs the official Jobber job scheduling tool. It comes loaded with an example script that just prints a statement every second. Great. The help menu and documentation don't suggest how I would actually add my own job here. I want to add a script that runs a container that executes a python script.
This is the current .jobber list of jobs that I can see once I enter the bash for the container:
~ $ cat .jobber
[jobs]
- name: ExampleJob
  cmd: echo "Jobber is running!"
  time: '*'
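For context, the job I ultimately want would look something like this extra entry; this is only a sketch, and ScrapeJob, the image name, and running docker from inside this container are all placeholders of mine:
- name: ScrapeJob
  cmd: docker run --rm my-scraper-image
  time: '*'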
How can I add my own? I am using docker-compose to build this container and override the entrypoint to run my own command (example below), but then it executes the command and the container exits:
jobber:
  image: jobber
  entrypoint: /bin/sh/home/jobberuser -c "
    echo The donkey is in charge;"
The above command is just a test to override the entry point. Ultimately, the job will be running a Dockerfile that runs a script:
FROM python:3.8.2-slim
WORKDIR /src
RUN pip install --upgrade -v pip \
    lxml \
    requests \
    beautifulsoup4
COPY ./scrape.py .
RUN mkdir -p /src/output
CMD python scrape.py
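What I suspect I actually want is to mount my own jobfile into the container instead of overriding the entrypoint; a sketch, assuming the jobfile lives at /home/jobberuser/.jobber as the paths above suggest, with ./my-jobfile as a placeholder:
jobber:
  image: jobber
  volumes:
    - ./my-jobfile:/home/jobberuser/.jobber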

Related

Difference between "docker build" and "docker run" if we running dockerfile having .sh files

This is my Dockerfile
# This Dockerfile describes the standard way to build
FROM centos:latest
MAINTAINER praveen
# Run as root to allow "rpm"
USER root
WORKDIR /root/
# Get the ACE-TAO rpm from seachange repo
COPY TAO-1.7.7-0.x86_64.rpm /root/TAO-1.7.7-0.x86_64.rpm
# Install the rpm
RUN rpm -ivh /root/TAO-1.7.7-0.x86_64.rpm
#Start the TAO service
#CMD /etc/init.d/tao start
COPY namingServiceConfig.sh /
RUN /namingServiceConfig.sh
EXPOSE 13021
EXPOSE 13022
EXPOSE 13023
ENV NS_PORTS=13021,13022,13023
#ENTRYPOINT /etc/init.d/tao start && bash
While doing the docker build, will it execute the shell script and bake the changes into the image, or will the changes only appear at container level when running the image with docker run?
In my case I suspect it is executing at both docker build and docker run time.
I'm using the commands below to build and run, via a Vagrantfile:
d.build_image "/vagrant/tao", args: " -t tao/basic"
d.run "tao/basic:latest",
      args: " -t -d"\
            " --name tao-basic"\
            " -p 13021:13021"\
            " -e NS_PORT=13025,13026,13027"
Let me know if you need any more information.
The Dockerfile instructions (such as RUN etc...) are actioned at build time (docker build -t something . etc...). Only the CMD and ENTRYPOINT instructions happen at run time (when the container is started).
In your example the shell script will get run as part of the build and whatever changes occur will be committed as a new layer in the image.
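To make that concrete with the names from your setup (a sketch; the flags simply mirror the vagrant args above):
# build time: every RUN line, including /namingServiceConfig.sh, executes now
# and its result is committed as a layer of the tao/basic image
docker build -t tao/basic /vagrant/tao
# run time: only CMD/ENTRYPOINT (if any) execute; the RUN lines do not run again
docker run -t -d --name tao-basic -p 13021:13021 tao/basic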

How to have two JARs start automatically on "docker run container"

I want two separate JAR files to be executed automatically once a docker container is started via the run command, so when I type docker run mycontainer they are both called. So far, I have a Dockerfile that looks like this:
# base image is java:8 (ubuntu)
FROM java:8
# add files to image
ADD first.jar .
ADD second.jar .
# start on run
CMD ["/usr/lib/jvm/java-8-openjdk-amd64/bin/java", "-jar", "first.jar"]
CMD ["/usr/lib/jvm/java-8-openjdk-amd64/bin/java", "-jar", "second.jar"]
This, however, only starts second.jar.
Now, both jars are servers in a loop, so I guess once one is started it just blocks the terminal. If I run the container using docker run -it mycontainer bash and call them manually, the first one produces its output and I can't start the other one.
Is there a way to open different terminals and switch between them to have each JAR run in its own context? Preferably already in the dockerfile.
I know next to nothing about ubuntu but I found the xterm command that opens a new terminal, however this won't work after calling a JAR. What I'm looking for are instructions for inside the dockerfile that for example open a new terminal, execute first.jar, alt-tab into the old terminal and execute second.jar there, or at least achieve the same.
Thanks!
The second CMD instruction replaces the first, so you need to use a single instruction for both commands.
Easy (not so good) Approach
You could add a bash script that executes both commands and blocks on the second one:
# start.sh
/usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar first.jar &
/usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar second.jar
Then change your Dockerfile to this:
# base image is java:8 (ubuntu)
FROM java:8
# add files to image
ADD first.jar .
ADD second.jar .
ADD start.sh .
# start on run
CMD ["bash", "start.sh"]
When using docker stop it might not shut down properly, see:
https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/
Better Approach
To solve this, you could use Phusion:
https://hub.docker.com/r/phusion/baseimage/
It has an init-system that is much easier to use than e.g. supervisord.
Here is a good starting point:
https://github.com/phusion/baseimage-docker#getting_started
Instructions for using phusion
Sadly there is no official openjdk-8-jdk available for Ubuntu 14.04 LTS. You could try an unofficial PPA, which is used in the following explanation.
In your case you would need two bash scripts (which act like "services"):
# start-first.sh (the file has to start with the following line!):
#!/bin/bash
/usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar /root/first.jar
# start-second.sh
#!/bin/bash
/usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar /root/second.jar
And your Dockerfile would look like this:
# base image is phusion
FROM phusion/baseimage:latest
# Use init service of phusion
CMD ["/sbin/my_init"]
# Install unofficial openjdk-8
RUN add-apt-repository ppa:openjdk-r/ppa && apt-get update && apt-get dist-upgrade -y && apt-get install -y openjdk-8-jdk
ADD first.jar /root/first.jar
ADD second.jar /root/second.jar
# Add first service
RUN mkdir /etc/service/first
ADD start-first.sh /etc/service/first/run
RUN chmod +x /etc/service/first/run
# Add second service
RUN mkdir /etc/service/second
ADD start-second.sh /etc/service/second/run
RUN chmod +x /etc/service/second/run
# Clean up
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
This should install two services which will be run on startup and shut down properly when using docker stop.
A Docker container has only a single process when it is started.
You can still create several processes afterward:
One simple way is to create a second process inside a bash script.
You can also use Supervisor : https://docs.docker.com/articles/using_supervisord/
You have a few options. A lot of the answers have mentioned using supervisor for this, which is a fine solution. Here are some others:
Create a short script that just kicks off both jars. Add that to your CMD. For example, the script, which we'll call run_jars.sh could look like:
# run the first server in the background so the second one can start
/usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar first.jar &
/usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar second.jar
Then your CMD would be CMD sh run_jars.sh
Another alternative is just running two separate containers-- one for first.jar and the other for second.jar. You can run each one through docker run, for example:
docker run my_repo/my_image:some_tag /usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar second.jar
If you want to start two different processes inside one docker container (not recommended behaviour) you can use something like supervisord.
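For reference, a supervisord-based setup for the two JARs could look roughly like the sketch below; it is untested, and the package name, file locations and paths are the usual Debian defaults rather than anything from the question:
# supervisord.conf
[supervisord]
nodaemon=true
[program:first]
command=/usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar /root/first.jar
[program:second]
command=/usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar /root/second.jar
# Dockerfile
FROM java:8
RUN apt-get update && apt-get install -y supervisor
ADD first.jar /root/first.jar
ADD second.jar /root/second.jar
ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
With nodaemon=true supervisord stays in the foreground as pid 1, so docker stop behaves as expected.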

How can I run a docker container and commit the changes once a script completes?

I want to set up a cron job to run a set of commands inside a docker container and then commit the changes to the docker image. I'm able to run the container as a daemon and get the container ID using this command:
CONTAINER_ID=$(sudo docker run -d my-image /bin/sh -c "sleep 10")
but I'm having trouble with the second part--committing the changes to the image once the sleep 10 command completes. Is there a way for me to tell when the docker container is about to be killed and run another command before it is?
EDIT: As an alternative, is there a way to trigger ctrl-p-q via a shell script in the container to leave the container running but return to the host?
There are the following ways to persist container data:
Docker volumes
Docker commit
a) create container from ubuntu image and run a bash terminal.
$ docker run -i -t ubuntu:14.04 /bin/bash
b) Inside the terminal install curl
# apt-get update
# apt-get install curl
c) Exit the container terminal
# exit
d) Take a note of your container id by executing following command :
$ docker ps -a
e) save container as new image
$ docker commit <container_id> new_image_name:tag_name(optional)
f) verify that you can see your new image with curl installed.
$ docker images
$ docker run -it new_image_name:tag_name bash
# which curl
/usr/bin/curl
Run it in the foreground, not as a daemon. When it ends, the script that launched it takes control and commits/pushes it.
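A minimal sketch of that flow, with my-image and my-job as placeholder names:
# runs in the foreground and blocks until the command finishes
docker run --name my-job my-image /bin/sh -c "sleep 10"
# the container has exited, so commit its filesystem as a new image
docker commit my-job my-image:updated
# optionally push it, then remove the stopped container
docker push my-image:updated
docker rm my-job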
I didn't find any of these answers satisfying, as my goal was to 1) launch a container, 2) run a setup script, and 3) capture/store the state after setup, so I can instantly run various scripts against that state later. And all in a local, automated, continuous integration environment (e.g. scripted and non-interactive).
Here's what I came up with (and I run this in Travis-CI install section) for setting up my test environment:
#!/bin/bash
# Run a docker with the env boot script
docker run ubuntu:14.04 /path/to/env_setup_script.sh
# Get the container ID of the last run docker (above)
export CONTAINER_ID=`docker ps -lq`
# Commit the container state (returns an image_id with sha256: prefix cut off)
# and write the IMAGE_ID to disk at ~/.docker_image_id
(docker commit $CONTAINER_ID | cut -c8-) > ~/.docker_image_id
Note that my base image was ubuntu:14.04 but yours could be any image you want.
With that setup, now I can run any number of scripts (e.g. unit tests) against this snapshot (for Travis, these are in my script section). e.g.:
docker run `cat ~/.docker_image_id` /path/to/unit_test_1.sh
docker run `cat ~/.docker_image_id` /path/to/unit_test_2.sh
Try this if you want to auto-commit all running containers. Put it in a cron job or similar, if that helps:
#!/bin/bash
for i in `docker ps|tail -n +2|awk '{print $1}'`; do docker commit -m "commit new change" $i; done

Docker run/start/exec?

Hi, I have built and installed the ziftrCoin wallet on an Ubuntu image.
8084e9de3c23 ubuntu:latest "/bin/bash" 25 hours ago Up About a minute 0.0.0.0:10332->10332/tcp ziftrCoin
The problem is that ziftrcoind closes after I exit the container.
If I run docker exec -it ziftrCoin /root/64/./ziftrcoind the program starts, but I stay attached to the container and hit the same problem when I exit.
So how do I update/edit the COMMAND used when the container starts, so that it is "ziftrCoin /root/64/./ziftrcoind" and not "/bin/bash"?
UPDATE
If I build and run it, I can't get it to stay open:
docker run -d ziftr
252554f38c2a41bdd29875bcb6ab7b6bbe98522e16828b1f8b06d8899bc5134c
docker run -it ziftr
ZiftrCOIN server starting
FROM ubuntu
MAINTAINER Krister Johansson <hello@nodejs.how>
WORKDIR /var/ziftrCoin
RUN apt-get update
RUN apt-get install -y wget
RUN wget "https://d19y4lldx7po3t.cloudfront.net/assets/downloads/0.9.3/ziftrcoin-0.9.3-linux64.tar.gz"
RUN tar -xvzf ziftrcoin-0.9.3-linux64.tar.gz
RUN rm ziftrcoin-0.9.3-linux64.tar.gz
ADD ./src/ziftrcoin.conf /root/.ziftrcoin/ziftrcoin.conf
EXPOSE 10332 11332
CMD ["64/./ziftrcoind"]
For Docker, when the process with pid 1 (inside the container) quits, the container quits too (and kills all other processes that were running in it). This is what happens to you, as /bin/bash is the process with pid 1. What you need to do is make the ziftrcoind process pid 1.
You did not provide a Dockerfile or a docker run command but I assume you run something like docker run ziftrcoin (where ziftrcoin would be the name of the image you build) and you don't have a CMD in your Dockerfile.
The idea would be either to give docker a default command, using CMD in your Dockerfile or give it the command to run when issuing the docker run.
Let's see how the Dockerfile would look:
FROM ubuntu
RUN # … Install ziftrcoind
CMD ["/root/64/./ziftrcoind"]
If you build this image, when running it, the default command would be /root/64/./ziftrcoind instead of /bin/bash. You could also do docker run ziftrcoin /root/64/./ziftrcoind to achieve the same effect.
As Kevan Ahlquist commented, if you want to run it in background, you can use the flag -d : docker run -d ziftrcoin (with or without the command, depending if you have the CMD in your Dockerfile or not).
Problem found!
I had daemon=1 in ziftrcoin.conf; after removing it, it worked!
Uploaded it to git.
https://github.com/nodejshow/docker-ziftrcoind

How can I inspect the file system of a failed `docker build`?

I'm trying to build a new Docker image for our development process, using cpanm to install a bunch of Perl modules as a base image for various projects.
While developing the Dockerfile, cpanm returns a failure code because some of the modules did not install cleanly.
I'm fairly sure I need to get apt to install some more things.
Where can I find the /.cpanm/work directory quoted in the output, in order to inspect the logs? In the general case, how can I inspect the file system of a failed docker build command?
After running a find I discovered
/var/lib/docker/aufs/diff/3afa404e[...]/.cpanm
Is this reliable, or am I better off building a "bare" container and running stuff manually until I have all the things I need?
Every time docker successfully executes a RUN command from a Dockerfile, a new layer in the image filesystem is committed. Conveniently, you can use those layer ids as images to start a new container.
Take the following Dockerfile:
FROM busybox
RUN echo 'foo' > /tmp/foo.txt
RUN echo 'bar' >> /tmp/foo.txt
and build it:
$ docker build -t so-26220957 .
Sending build context to Docker daemon 47.62 kB
Step 1/3 : FROM busybox
---> 00f017a8c2a6
Step 2/3 : RUN echo 'foo' > /tmp/foo.txt
---> Running in 4dbd01ebf27f
---> 044e1532c690
Removing intermediate container 4dbd01ebf27f
Step 3/3 : RUN echo 'bar' >> /tmp/foo.txt
---> Running in 74d81cb9d2b1
---> 5bd8172529c1
Removing intermediate container 74d81cb9d2b1
Successfully built 5bd8172529c1
You can now start a new container from 00f017a8c2a6, 044e1532c690 and 5bd8172529c1:
$ docker run --rm 00f017a8c2a6 cat /tmp/foo.txt
cat: /tmp/foo.txt: No such file or directory
$ docker run --rm 044e1532c690 cat /tmp/foo.txt
foo
$ docker run --rm 5bd8172529c1 cat /tmp/foo.txt
foo
bar
of course you might want to start a shell to explore the filesystem and try out commands:
$ docker run --rm -it 044e1532c690 sh
/ # ls -l /tmp
total 4
-rw-r--r-- 1 root root 4 Mar 9 19:09 foo.txt
/ # cat /tmp/foo.txt
foo
When one of the Dockerfile commands fails, what you need to do is look for the id of the preceding layer and run a shell in a container created from that id:
docker run --rm -it <id_last_working_layer> bash -il
Once in the container:
try the command that failed, and reproduce the issue
then fix the command and test it
finally update your Dockerfile with the fixed command
If you really need to experiment in the actual layer that failed instead of working from the last working layer, see Drew's answer.
The top answer works in the case that you want to examine the state immediately prior to the failed command.
However, the question asks how to examine the state of the failed container itself. In my situation, the failed command is a build that takes several hours, so rewinding prior to the failed command and running it again takes a long time and is not very helpful.
The solution here is to find the container that failed:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6934ada98de6 42e0228751b3 "/bin/sh -c './utils/" 24 minutes ago Exited (1) About a minute ago sleepy_bell
Commit it to an image:
$ docker commit 6934ada98de6
sha256:7015687976a478e0e94b60fa496d319cdf4ec847bcd612aecf869a72336e6b83
And then run the image [if necessary, running bash]:
$ docker run -it 7015687976a4 [bash -il]
Now you are actually looking at the state of the build at the time that it failed, instead of at the time before running the command that caused the failure.
Update for newer docker versions 20.10 onwards
Linux or macOS
DOCKER_BUILDKIT=0 docker build ...
Windows
# Command line
set DOCKER_BUILDKIT=0
docker build ...
# PowerShell
$env:DOCKER_BUILDKIT=0
Use
DOCKER_BUILDKIT=0 docker build ...
to get the intermediate container hashes as known from older versions.
On newer versions, BuildKit is activated by default. It is recommended to only disable it for debugging purposes, since BuildKit can make your build faster.
For reference:
Buildkit doesn't support intermediate container hashes: https://github.com/moby/buildkit/issues/1053
Thanks to @David Callanan and @MegaCookie for their inputs.
Docker caches the entire filesystem state after each successful RUN line.
Knowing that:
to examine the latest state before your failing RUN command, comment it out in the Dockerfile (as well as any and all subsequent RUN commands), then run docker build and docker run again.
to examine the state after the failing RUN command, simply add || true to it to force it to succeed; then proceed like above (keep any and all subsequent RUN commands commented out, run docker build and docker run)
Tada, no need to mess with Docker internals or layer IDs, and as a bonus Docker automatically minimizes the amount of work that needs to be re-done.
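For example, with the cpanm step from the question (the module names are placeholders):
RUN cpanm Some::Module Another::Module || true
# every later RUN stays commented out while debugging
# RUN ...
After the build, docker run -it <image> bash lets you poke around the .cpanm work directory and its logs even though the install failed.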
Currently with the latest docker-desktop, there isn't a way to opt out
of the new Buildkit, which doesn't support debugging yet (follow the
latest updates on this GitHub thread:
https://github.com/moby/buildkit/issues/1472).
Find out at which line in your Dockerfile it is failing.
Add to the top of your Dockerfile: FROM xxx as debug
Add an additional target: FROM xxx as next just one line before the failing command (as you don't want to build that part). Example:
FROM xxx as debug
RUN echo "working command"
FROM xxx as next
RUN echoo "failing command"
Run docker build -f Dockerfile --target debug --tag debug .
Then you can debug the container with: docker run -it debug /bin/sh
You can quit the shell by pressing CTRL P + CTRL Q
If you want to use docker compose build instead of docker build it's possible by adding target: debug in your docker-compose.yml under build.
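In the docker-compose.yml that would look roughly like this (a sketch; the context and dockerfile values are assumptions):
services:
  xxxYourServiceNamexxx:
    build:
      context: .
      dockerfile: Dockerfile
      target: debug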
Then start the container by docker compose run xxxYourServiceNamexxx and use either:
The second top answer to find out how to run a shell inside the container.
Or add ENTRYPOINT /bin/sh before the FROM xxx as next line in your Dockerfile.
Debugging build step failures is indeed very annoying.
The best solution I have found is to make sure that each step that does real work succeeds, and adding a check after those that fails. That way you get a committed layer that contains the outputs of the failed step that you can inspect.
A Dockerfile, with an example after the # Run DB2 silent installer line:
#
# DB2 10.5 Client Dockerfile (Part 1)
#
# Requires
# - DB2 10.5 Client for 64bit Linux ibm_data_server_runtime_client_linuxx64_v10.5.tar.gz
# - Response file for DB2 10.5 Client for 64bit Linux db2rtcl_nr.rsp
#
#
# Using Ubuntu 14.04 base image as the starting point.
FROM ubuntu:14.04
MAINTAINER David Carew <carew@us.ibm.com>
# DB2 prereqs (also installing sharutils package as we use the utility uuencode to generate password - all others are required for the DB2 Client)
RUN dpkg --add-architecture i386 && apt-get update && apt-get install -y sharutils binutils libstdc++6:i386 libpam0g:i386 && ln -s /lib/i386-linux-gnu/libpam.so.0 /lib/libpam.so.0
RUN apt-get install -y libxml2
# Create user db2clnt
# Generate strong random password and allow sudo to root w/o password
#
RUN \
adduser --quiet --disabled-password -shell /bin/bash -home /home/db2clnt --gecos "DB2 Client" db2clnt && \
echo db2clnt:`dd if=/dev/urandom bs=16 count=1 2>/dev/null | uuencode -| head -n 2 | grep -v begin | cut -b 2-10` | chgpasswd && \
adduser db2clnt sudo && \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
# Install DB2
RUN mkdir /install
# Copy DB2 tarball - ADD command will expand it automatically
ADD v10.5fp9_linuxx64_rtcl.tar.gz /install/
# Copy response file
COPY db2rtcl_nr.rsp /install/
# Run DB2 silent installer
RUN mkdir /logs
RUN (/install/rtcl/db2setup -t /logs/trace -l /logs/log -u /install/db2rtcl_nr.rsp && touch /install/done) || /bin/true
RUN test -f /install/done || (echo ERROR-------; echo install failed, see files in container /logs directory of the last container layer; echo run docker run '<last image id>' /bin/cat /logs/trace; echo ----------)
RUN test -f /install/done
# Clean up unwanted files
RUN rm -fr /install/rtcl
# Login as db2clnt user
CMD su - db2clnt
In my case, I have to have:
DOCKER_BUILDKIT=1 docker build ...
and as mentioned by Jannis Schönleber in his answer, there is currently no debug available in this case (i.e. no intermediate images/containers get created).
What I've found I could do is use the following option:
... --progress=plain ...
and then add various RUN ... or additional lines on existing RUN ... to debug specific commands. This gives you what to me feels like full access (at least if your build is relatively fast).
For example, you could check a variable like so:
RUN echo "Variable NAME = [$NAME]"
If you're wondering whether a file is installed properly, you do:
RUN find /
etc.
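Put together, such a build invocation would look something like this (the image tag is a placeholder):
DOCKER_BUILDKIT=1 docker build --progress=plain -t myimage .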
In my situation, I had to debug a docker build of a Go application with a private repository and it was quite difficult to do that debugging. I've other details on that here.
If you are using docker-compose to build docker images, try adding DOCKER_BUILDKIT=0 before the command to see the last successful layer id:
DOCKER_BUILDKIT=0 docker-compose ...
This will temporarily disable DOCKER_BUILDKIT for the command only.
Having the last layer id you can connect to it using the command from the top answer
docker run --rm -it LAST_LAYER_ID sh
My solution would be to see which step failed in the Dockerfile (RUN bundle install in my case),
and change it to
RUN bundle install || cat <path to the file containing the error>
This has the double effect of printing out the reason for the failure, AND this intermediate step is not marked as failed by docker build, so it's not deleted and can be inspected via:
docker run --rm -it <id_last_working_layer> bash -il
In there you can even re-run your failed command and test it live.
What I would do is comment out the Dockerfile below and including the offending line. Then you can run the container and run the docker commands by hand, and look at the logs in the usual way. E.g. if the Dockerfile is
RUN foo
RUN bar
RUN baz
and it's dying at bar I would do
RUN foo
# RUN bar
# RUN baz
Then
$ docker build -t foo .
$ docker run -it foo bash
container# bar
...grep logs...
Still using BuildKit, as in Alexis Wilke's answer, you can use ktock/buildg.
See "Interactive debugger for Dockerfile" from Kohei Tokunaga
buildg is a tool to interactively debug Dockerfile based on BuildKit.
Source-level inspection
Breakpoints and step execution
Interactive shell on a step with your own debugging tools
Based on BuildKit (needs unmerged patches)
Supports rootless
Example:
$ buildg.sh debug --image=ubuntu:22.04 /tmp/ctx
WARN[2022-05-09T01:40:21Z] using host network as the default
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.1s
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 195B done
#2 DONE 0.1s
#3 [internal] load metadata for docker.io/library/busybox:latest
#3 DONE 3.0s
#4 [build1 1/2] FROM docker.io/library/busybox@sha256:d2b53584f580310186df7a2055ce3ff83cc0df6caacf1e3489bff8cf5d0af5d8
#4 resolve docker.io/library/busybox@sha256:d2b53584f580310186df7a2055ce3ff83cc0df6caacf1e3489bff8cf5d0af5d8 0.0s done
#4 sha256:50e8d59317eb665383b2ef4d9434aeaa394dcd6f54b96bb7810fdde583e9c2d1 772.81kB / 772.81kB 0.2s done
Filename: "Dockerfile"
2| RUN echo hello > /hello
3|
4| FROM busybox AS build2
=> 5| RUN echo hi > /hi
6|
7| FROM scratch
8| COPY --from=build1 /hello /
>>> break 2
>>> breakpoints
[0]: line 2
>>> continue
#4 extracting sha256:50e8d59317eb665383b2ef4d9434aeaa394dcd6f54b96bb7810fdde583e9c2d1 0.0s done
#4 DONE 0.3s
...

Resources