How to have two JARs start automatically on "docker run container" - shell

I want two separate JAR files to be executed automatically when a docker container is started via the run command, so when I type docker run mycontainer they are both executed. So far, I have a Dockerfile that looks like this:
# base image is java:8 (ubuntu)
FROM java:8
# add files to image
ADD first.jar .
ADD second.jar .
# start on run
CMD ["/usr/lib/jvm/java-8-openjdk-amd64/bin/java", "-jar", "first.jar"]
CMD ["/usr/lib/jvm/java-8-openjdk-amd64/bin/java", "-jar", "second.jar"]
This, however, only starts second.jar.
Now, both JARs are servers running in a loop, so I guess once one is started it just blocks the terminal. If I run the container using docker run -it mycontainer bash and call them manually, the first one produces its output and I can't start the other one.
Is there a way to open different terminals and switch between them to have each JAR run in its own context? Preferably already in the Dockerfile.
I know next to nothing about Ubuntu, but I found the xterm command that opens a new terminal; however, this won't work after calling a JAR. What I'm looking for are instructions inside the Dockerfile that, for example, open a new terminal, execute first.jar, switch back to the old terminal and execute second.jar there, or at least achieve the same.
Thanks!

The second CMD instruction replaces the first, so you need to use a single instruction for both commands.
Easy (not so good) Approach
You could add a bash script that executes both commands and blocks on the second one:
# start.sh
/usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar first.jar &
/usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar second.jar
Then change your Dockerfile to this:
# base image is java:8 (ubuntu)
FROM java:8
# add files to image
ADD first.jar .
ADD second.jar .
ADD start.sh .
# start on run
CMD ["bash", "start.sh"]
When using docker stop it might not shut down properly, see:
https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/
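If you stay with this plain bash approach, a slightly more defensive start.sh is sketched below; the trap/wait pattern is my own addition (not tested against your JARs), but it at least forwards the stop signal to both JVMs:
#!/bin/bash
# start both JARs in the background and remember their pids
/usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar first.jar &
FIRST_PID=$!
/usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar second.jar &
SECOND_PID=$!
# forward SIGTERM/SIGINT (e.g. from docker stop) to both JVMs
trap 'kill -TERM "$FIRST_PID" "$SECOND_PID"' TERM INT
# block until both JVMs have exited
wait "$FIRST_PID" "$SECOND_PID"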
Better Approach
To solve this, you could use Phusion:
https://hub.docker.com/r/phusion/baseimage/
It has an init-system that is much easier to use than e.g. supervisord.
Here is a good starting point:
https://github.com/phusion/baseimage-docker#getting_started
Instructions for using phusion
Sadly there is no official openjdk-8-jdk available for Ubuntu 14.04 LTS. You could try an unofficial PPA, which is used in the following explanation.
In your case you would need two bash scripts (which act like "services"):
# start-first.sh (the file has to start with the following line!):
#!/bin/bash
/usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar /root/first.jar
# start-second.sh
#!/bin/bash
/usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar /root/second.jar
And your Dockerfile would look like this:
# base image is phusion
FROM phusion/baseimage:latest
# Use init service of phusion
CMD ["/sbin/my_init"]
# Install unofficial openjdk-8
RUN add-apt-repository ppa:openjdk-r/ppa && apt-get update && apt-get dist-upgrade -y && apt-get install -y openjdk-8-jdk
ADD first.jar /root/first.jar
ADD second.jar /root/second.jar
# Add first service
RUN mkdir /etc/service/first
ADD start-first.sh /etc/service/first/run
RUN chmod +x /etc/service/first/run
# Add second service
RUN mkdir /etc/service/second
ADD start-second.sh /etc/service/second/run
RUN chmod +x /etc/service/second/run
# Clean up
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
This should install two services which will be run on startup and shut down properly when using docker stop.

A Docker container has only a single process when it is started.
You can still create several processes afterwards:
One simple way is to create a second process inside a bash script.
You can also use Supervisor : https://docs.docker.com/articles/using_supervisord/

You have a few options. A lot of the answers have mentioned using supervisor for this, which is a fine solution. Here are some others:
Create a short script that just kicks off both JARs and add that to your CMD. For example, the script, which we'll call run_jars.sh, could look like this (the first JAR is sent to the background so the second one can start as well):
/usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar first.jar &
/usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar second.jar
Then your CMD would be CMD sh run_jars.sh
Another alternative is just running two separate containers: one for first.jar and the other for second.jar. You can run each one through docker run, for example:
docker run my_repo/my_image:some_tag /usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar second.jar
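The first JAR would then run in its own container the same way (the image name and tag here are just the placeholders from above):
docker run my_repo/my_image:some_tag /usr/lib/jvm/java-8-openjdk-amd64/bin/java -jar first.jar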

If you want to start two different processes inside one docker container (not recommended behaviour) you can use something like supervisord.

Related

How to make a docker entrypoint run as non-root for some particular commands only

My PHP image entrypoint is something like below. The entrypoint runs as root, and that is necessary in my case, so any command I run in my container runs as root. For some particular commands I want to run as another user: e.g. when someone executes docker exec -it php composer install, composer should run as the other user set in the entrypoint; when someone executes docker exec -it php drush status, drush should run as that user as well. Probably an if or switch statement inside the entrypoint can help me. I was trying something like this https://unix.stackexchange.com/questions/476155/how-to-pass-multiple-parameters-to-su-user-c-command but passing parameters with a double dash (--) breaks some commands.
Dockerfile
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
CMD ["php-fpm"]
entrypoint.sh
#!/bin/sh
set -e
# first arg is `-f` or `--some-option`
if [ "${1#-}" != "$1" ]; then
set -- php-fpm "$@"
fi
exec "$@"
I'm not sure that I understand your use-case, but I use su-exec to drop privileges down to a non-root user within my entrypoint script. Most commonly I have to use this because I need to change permissions on a bind-mounted volume (usually /var/run/docker.sock).
Essentially I will do root level operations in my entrypoint, then drop down to a non-root user when executing the container service.
This blog explains the concept using gosu; su-exec is a refactor of gosu in C that is 10 kB instead of 1.8 MB: https://denibertovic.com/posts/handling-permissions-with-docker-volumes/
Do note the security issues, which AFAIK are not a factor when using this in containers.
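To make the if/switch idea from the question concrete, here is a rough entrypoint sketch; the user name appuser and the command list are assumptions, and su-exec has to be installed in the image:
#!/bin/sh
set -e
# run selected CLI tools as an unprivileged user, everything else as root
case "$1" in
    composer|drush)
        exec su-exec appuser "$@"
        ;;
esac
# default: keep the original behaviour (e.g. start php-fpm as root)
exec "$@"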

Reuse inherited image's CMD or ENTRYPOINT

How can I include my own shell script CMD on container start/restart/attach, without removing the CMD used by an inherited image?
I am using this, which does execute my script fine, but appears to overwrite the PHP CMD:
FROM php
COPY start.sh /usr/local/bin
CMD ["/usr/local/bin/start.sh"]
What should I do differently? I am avoiding the prospect of copy/pasting the ENTRYPOINT or CMD of the parent image, and maybe that's not a good approach.
As mentioned in the comments, there's no built-in solution to this. From the Dockerfile, you can't see the value of the current CMD or ENTRYPOINT. Having a run-parts solution is nice if you control the upstream base image and include this code there, allowing downstream components to make their changes. But there's one inherent issue in Docker that will cause problems with this: containers should only run a single command, and that command needs to run in the foreground. So if the upstream image kicks off its command, it would stay running without giving your later steps a chance to run, and you're left with the complexity of determining the order in which to run commands so that a single command does eventually run without exiting.
My personal preference is a much simpler and hardcoded option: add my own command or entrypoint, and make the last step of my command exec the upstream command. You will still need to manually identify the script name to call from the upstream Dockerfile. But now in your start.sh, you would have:
#!/bin/sh
# run various pieces of initialization code here
# ...
# kick off the upstream command:
exec /upstream-entrypoint.sh "$@"
By using an exec call, you transfer pid 1 to the upstream entrypoint so that signals get handled correctly. And the trailing "$@" passes through any command line arguments. You can use set to adjust the value of "$@" if there are some args you want to process and extract in your own start.sh script.
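For example, a small sketch of that idea (the --skip-init flag and the upstream script path are made up for illustration):
#!/bin/sh
# consume one custom flag of our own before handing the rest to the upstream entrypoint
if [ "$1" = "--skip-init" ]; then
    shift    # drop our flag; set -- ... works as well if you need to rewrite the whole list
else
    # run various pieces of initialization code here
    :
fi
exec /upstream-entrypoint.sh "$@"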
If the base image is not yours, you unfortunately have to call the parent command manually.
If you own the parent image, you can try what the people at camptocamp suggest here.
They basically use a generic script as an entry point that calls run-parts on a directory. What that does is run all scripts in that directory in lexicographic order. So when you extend an image, you just have to put your new scripts in that same folder.
However, that means you'll have to maintain order by prefixing your scripts which could potentially get out of hand. (Imagine the parent image decides to add a new script later...).
Anyway, that could work.
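A rough sketch of such a generic entrypoint (the directory name /docker-entrypoint.d is an assumption; run-parts ships with Debian/Ubuntu based images):
#!/bin/sh
set -e
# run every script in the directory in lexicographic order, then hand over to the container command
run-parts /docker-entrypoint.d
exec "$@"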
Update #1
There is a long discussion on this docker compose issue about provisioning after container run. One suggestion is to wrap your docker run or compose command in a shell script and then run docker exec for your other commands.
If you'd like to use that approach, you basically keep the parent CMD as the run command and you place yours as a docker exec after your docker run.
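A hedged sketch of that wrapper (the image name, container name and provisioning script are placeholders):
#!/bin/sh
# start the container with the parent image's CMD, then provision it afterwards
docker run -d --name myapp my_image
docker exec myapp /usr/local/bin/provision.sh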
Using mysql image as an example
Do docker inspect mysql/mysql-server:5.7 and see that:
Config.Cmd="mysqld"
Config.Entrypoint="/entrypoint.sh"
These are the values we call in bootstrap.sh (remember to chmod a+x):
#!/bin/bash
echo $HOSTNAME
echo "Start my initialization script..."
# docker inspect results used here
/entrypoint.sh mysqld
Dockerfile is now:
FROM mysql/mysql-server:5.7
# put our script inside the image
ADD bootstrap.sh /etc/bootstrap.sh
# set to run our script
ENTRYPOINT ["/bin/sh","-c"]
CMD ["/etc/bootstrap.sh"]
Build and run our new image:
docker build --rm -t sidazhou/tmp-mysql:5.7 .
docker run -it --rm sidazhou/tmp-mysql:5.7
Outputs:
6f5be7c6d587
Start my initialization script...
[Entrypoint] MySQL Docker Image 5.7.28-1.1.13
[Entrypoint] No password option specified for new database.
...
...
You'll see this has the same output as the original image:
docker run -it --rm mysql/mysql-server:5.7
[Entrypoint] MySQL Docker Image 5.7.28-1.1.13
[Entrypoint] No password option specified for new database.
...
...

how to pass a --login into docker build

I have a script that I need to run inside the container, and somehow it only runs if I run it inside a bash --login shell.
I normally build my image with docker build -t sometags . and I noticed the build only runs bash without --login.
I know I can just use bash -l -c "some-command-here", but I'd say that's my final fallback if nothing else helps.
so, tl;dr: how can I achieve something like this in my Dockerfile
#dockerfile
RUN bash --login
RUN some-script
and then, I'll just run it with: docker build -t x/y:z .
updated:
the scripts I want to run are things like gem install bundler and bundle install.
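A minimal sketch of the bash -l -c fallback mentioned above; newer Docker versions also let you switch the default shell for RUN with the SHELL instruction, which avoids repeating the wrapper on every line:
# variant 1: wrap each command in a login shell
RUN bash -l -c "gem install bundler"
RUN bash -l -c "bundle install"
# variant 2: change the default shell once
SHELL ["/bin/bash", "--login", "-c"]
RUN gem install bundler
RUN bundle install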

Commands to execute background process in Docker CMD

I am creating a docker image using a Dockerfile. I would like to execute some scripts while starting the docker container. Currently I have a shell script to execute all the necessary processes
CMD ["sh","start.sh"]
I would like to execute a shell command with a process running in background example
CMD ["sh", "-c", "mongod --dbpath /test &"]
Besides the comments on your question that already pointed out a few things about Docker best practices, you could still start a background process from within your start.sh script and keep the start.sh script itself in the foreground, using the nohup command and the ampersand (&). I did not try it with mongod, but something like the following in your start.sh script could work:
#!/bin/sh
...
nohup sh -c "mongod --dbpath /test" &
...
Of course there is also the official Docker documentation on how to start multiple services, again using a script file rather than the CMD. The Docker documentation also shows how to use supervisord as a process manager:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY my_first_process my_first_process
COPY my_second_process my_second_process
CMD ["/usr/bin/supervisord"]
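The supervisord.conf referenced there is not shown in that snippet; a minimal sketch (the program names and paths are just the placeholders from the Dockerfile above) could be:
[supervisord]
nodaemon=true

[program:my_first_process]
command=/my_first_process

[program:my_second_process]
command=/my_second_process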
If it is an option, you could use a phusion base image, which allows running multiple processes in one container. That way you can run system services such as cron or other processes using a service supervisor like runit.
More information about whether or not a phusion base image is a good choice for your use case can be found here.
A Ruby-focused description of how to avoid running any processes in your container other than your app can be found here. The elaborations are too detailed to repeat on SO.

Docker run/start/exec?

Hi, I have built and installed the ziftrCoin wallet on an Ubuntu image.
8084e9de3c23 ubuntu:latest "/bin/bash" 25 hours ago Up About a minute 0.0.0.0:10332->10332/tcp ziftrCoin
The problem is that ziftrcoind closes after I exit the container.
If I run docker exec -it ziftrCoin /root/64/./ziftrcoind, the program starts but I am connected to the container. Same problem when I exit.
So how do I update/edit the COMMAND so the container starts with "ziftrCoin /root/64/./ziftrcoind" and not "/bin/bash"?
UPDATE
If I build it and run it, I don't get it to stay open:
docker run -d ziftr
252554f38c2a41bdd29875bcb6ab7b6bbe98522e16828b1f8b06d8899bc5134c
docker run -it ziftr
ZiftrCOIN server starting
FROM ubuntu
MAINTAINER Krister Johansson <hello@nodejs.how>
WORKDIR /var/ziftrCoin
RUN apt-get update
RUN apt-get install -y wget
RUN wget "https://d19y4lldx7po3t.cloudfront.net/assets/downloads/0.9.3/ziftrcoin-0.9.3-linux64.tar.gz"
RUN tar -xvzf ziftrcoin-0.9.3-linux64.tar.gz
RUN rm ziftrcoin-0.9.3-linux64.tar.gz
ADD ./src/ziftrcoin.conf /root/.ziftrcoin/ziftrcoin.conf
EXPOSE 10332 11332
CMD ["64/./ziftrcoind"]
For Docker, when the process with pid 1 (inside the container) quits, the container will quit too (and kill all other processes that were running in that container). This is what happens to you, as /bin/bash is the process with pid 1. What you need to do is make the ziftrcoind process pid 1.
You did not provide a Dockerfile or a docker run command, but I assume you run something like docker run ziftrcoin (where ziftrcoin would be the name of the image you built) and that you don't have a CMD in your Dockerfile.
The idea would be either to give docker a default command, using CMD in your Dockerfile or give it the command to run when issuing the docker run.
Let's see the how the Dockerfile would look like :
FROM ubuntu
RUN # … Install ziftrcoind
CMD ["/root/64/./ziftrcoind"]
If you build this image, when running it, the default command would be /root/64/./ziftrcoind instead of /bin/bash. You could also do docker run ziftrcoin /root/64/./ziftrcoind to achieve the same effect.
As Kevan Ahlquist commented, if you want to run it in background, you can use the flag -d : docker run -d ziftrcoin (with or without the command, depending if you have the CMD in your Dockerfile or not).
Problem found!
I had daemon=1 in ziftrcoin.conf; after removing it, it worked!
Uploaded it to git:
https://github.com/nodejshow/docker-ziftrcoind
