Below is my entrypoint.ps1 (PowerShell script):
Set-Location -Path C:\nginx
& "C:\nginx\Configure-Nginx.ps1"
& "C:\nginx\nginx.exe"
I need my Configure-Nginx.ps1 and nginx.exe to be executed on docker run, so I've put an entrypoint in my Dockerfile:
FROM nginx
# nginx is a custom image that's based on mcr.microsoft.com/windows/servercore:1809-KB5003171
COPY entrypoint.ps1 ./
COPY install/Configure-Nginx.ps1 /nginx/Configure-Nginx.ps1
ENTRYPOINT ["powershell", "entrypoint.ps1"]
However, my container restarts every minute... Well, I decided there must be some error in the script, so I ran the image manually with --entrypoint powershell and executed my script in the console directly: .\entrypoint.ps1. The script blocked (because nginx was launched) and I could connect to my container from a web browser on the host machine... So everything works! Then why doesn't it work when I call my entrypoint from the Dockerfile? What's the difference? Maybe someone has met a similar problem...
P.S. The container is based on mcr.microsoft.com/windows/servercore:1809-KB5003171 with PowerShell v5.1.17763.1852
First, make sure the script is available under the container root:
COPY entrypoint.ps1 /entrypoint.ps1
Then execute it with either -Command or -File (note that this servercore base image ships Windows PowerShell 5.1, so the binary is powershell, not pwsh):
ENTRYPOINT ["powershell", "-Command", "/entrypoint.ps1"]
ENTRYPOINT ["powershell", "-File", "/entrypoint.ps1"]
Here is my Dockerfile:
FROM httpd:latest
ENV ENV_VARIABLE "http://localhost:8081"
# COPY BUILD AND CONFIGURATION FILES
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Here is the entrypoint.sh file:
#!/bin/bash
sed -i 's,ENV_VARIABLE,'"$ENV_VARIABLE"',g' /path/to/config/file
exec "$#"
To run the container:
docker run -e ENV_VARIABLE=some-value <image-name>
The sed command works perfectly fine and the value from the environment variable gets reflected in the config file. But whenever I start the container, it stops automatically.
I ran docker logs to check, but the logs were empty.
The Dockerfile reference notes:
If CMD is defined from the base image, setting ENTRYPOINT will reset CMD to an empty value. In this scenario, CMD must be defined in the current image to have a value.
So you need to find the CMD of the base image and repeat it in your Dockerfile. Among other places, you can find it in the image history on the image's Docker Hub listing:
CMD ["httpd-foreground"]
docker inspect httpd or docker history httpd would also be able to tell you this.
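With that in place, a corrected version of the Dockerfile above would be (a sketch based on the files already shown):
FROM httpd:latest
ENV ENV_VARIABLE "http://localhost:8081"
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
# repeat the base image's CMD so exec "$@" in entrypoint.sh has something to run
CMD ["httpd-foreground"]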
This is the parent Dockerfile I'm working with: https://github.com/Microsoft/aspnet-docker/blob/master/4.7.1-windowsservercore-1709/runtime/Dockerfile
When the container is started with docker run, I need to run a PS script before the entrypoint executes. How is this possible?
The entrypoint in the parent Dockerfile is: ENTRYPOINT ["C:\\ServiceMonitor.exe", "w3svc"]
I want that to behave as usual, just executing a PS script first.
I have overwritten the entrypoint with a PS script:
ENTRYPOINT ["powershell", "C:/myscript.ps1"]
myscript.ps1
.... does some stuff ....
# executes the command to start up like the parent container did
C:/ServiceMonitor.exe w3svc
Running this script seems to work fine, but are there any gotchas? How can I pass arguments to the entrypoint now?
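For the argument-passing part, one hedged sketch is to splat PowerShell's automatic $args array into the final call, so anything appended after the image name in docker run reaches ServiceMonitor (an untested assumption, not a verified answer):
# myscript.ps1
# .... does some stuff ....
# forward any extra arguments handed to the entrypoint
& C:\ServiceMonitor.exe w3svc @args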
How can I include my own shell script CMD on container start/restart/attach, without removing the CMD used by an inherited image?
I am using this, which does execute my script fine, but appears to overwrite the PHP CMD:
FROM php
COPY start.sh /usr/local/bin
CMD ["/usr/local/bin/start.sh"]
What should I do differently? I am avoiding the prospect of copy/pasting the ENTRYPOINT or CMD of the parent image, and maybe that's not a good approach.
As mentioned in the comments, there's no built-in solution to this. From the Dockerfile, you can't see the value of the current CMD or ENTRYPOINT. A run-parts solution is nice if you control the upstream base image and include this code there, allowing downstream images to make their changes. But there's one inherent issue with Docker that will cause problems with this: containers should only run a single command, and that command needs to run in the foreground. If the upstream image kicked off its command first, it would stay running without ever giving your later steps a chance to run, so you're left with the complexity of ordering the commands to ensure that a single one eventually runs without exiting.
My personal preference is a much simpler and hardcoded option: add my own command or entrypoint, and make its last step an exec of the upstream command. You will still need to manually identify the script name to call from the upstream Dockerfile. But now, in your start.sh, you would have:
#!/bin/sh
# run various pieces of initialization code here
# ...
# kick off the upstream command:
exec /upstream-entrypoint.sh "$@"
By using an exec call, you transfer pid 1 to the upstream entrypoint so that signals get handled correctly. The trailing "$@" passes through any command-line arguments. You can use set (or shift) to adjust the contents of "$@" if there are args you want to process and extract in your own start.sh script.
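A minimal sketch of that argument handling (the --debug flag is a made-up example, not something the upstream image defines):
#!/bin/sh
# consume a flag of our own, pass everything else to the upstream entrypoint
if [ "$1" = "--debug" ]; then
  set -x
  shift  # drop --debug so the upstream command never sees it
fi
exec /upstream-entrypoint.sh "$@"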
If the base image is not yours, you unfortunately have to call the parent command manually.
If you own the parent image, you can try what the people at camptocamp suggest here.
They basically use a generic script as an entry point that calls run-parts on a directory. What that does is run all scripts in that directory in lexicographic order. So when you extend an image, you just have to put your new scripts in that same folder.
However, that means you'll have to maintain order by prefixing your scripts which could potentially get out of hand. (Imagine the parent image decides to add a new script later...).
Anyway, that could work.
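A minimal sketch of that pattern (the /docker-entrypoint.d directory name is an assumption, not taken from the linked post):
#!/bin/sh
# generic entrypoint: run every script in /docker-entrypoint.d in
# lexicographic order, then hand over to the container's CMD
run-parts --exit-on-error /docker-entrypoint.d
exec "$@"
Note that Debian's run-parts skips files whose names contain a dot, so the scripts should be executable and named like 10-init and 20-config, without a .sh extension.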
Update #1
There is a long discussion on this docker-compose issue about provisioning after container run. One suggestion is to wrap your docker run or docker-compose command in a shell script and then run docker exec for each of your other commands.
If you'd like to use that approach, you basically keep the parent CMD as the run command and you place yours as a docker exec after your docker run.
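As a rough sketch (the image, container, and script names are placeholders), such a wrapper could look like:
#!/bin/sh
# start the container with the parent image's CMD untouched...
docker run -d --name myapp my-image
# ...then run our own initialization inside the running container
docker exec myapp /usr/local/bin/start.sh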
Using the mysql image as an example:
Do docker inspect mysql/mysql-server:5.7 and see that:
Config.Cmd="mysqld"
Config.Entrypoint="/entrypoint.sh"
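(A convenient way to pull out just those two fields is docker inspect's --format flag:)
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' mysql/mysql-server:5.7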
We replicate that invocation in bootstrap.sh (remember to chmod a+x it):
#!/bin/bash
echo $HOSTNAME
echo "Start my initialization script..."
# docker inspect results used here
/entrypoint.sh mysqld
Dockerfile is now:
FROM mysql/mysql-server:5.7
# put our script inside the image
ADD bootstrap.sh /etc/bootstrap.sh
# set to run our script
ENTRYPOINT ["/bin/sh","-c"]
CMD ["/etc/bootstrap.sh"]
Build and run our new image:
docker build --rm -t sidazhou/tmp-mysql:5.7 .
docker run -it --rm sidazhou/tmp-mysql:5.7
Outputs:
6f5be7c6d587
Start my initialization script...
[Entrypoint] MySQL Docker Image 5.7.28-1.1.13
[Entrypoint] No password option specified for new database.
...
...
You'll see this has the same output as the original image:
docker run -it --rm mysql/mysql-server:5.7
[Entrypoint] MySQL Docker Image 5.7.28-1.1.13
[Entrypoint] No password option specified for new database.
...
...
I am creating a Docker image using a Dockerfile. I would like to execute some scripts while starting the Docker container. Currently I have a shell script that executes all the necessary processes:
CMD ["sh","start.sh"]
I would like to execute a shell command with a process running in the background, for example:
CMD ["sh", "-c", "mongod --dbpath /test &"]
Besides the comments on your question, which already pointed out a few things about Docker best practices, you could still start a background process from within your start.sh script while keeping start.sh itself running in the foreground, using the nohup command and the ampersand (&). I did not try it with mongod, but something like the following in your start.sh script could work:
#!/bin/sh
...
nohup sh -c 'mongod --dbpath /test' &
...
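A slightly fuller sketch of such a start.sh, with an explicit log file and a foreground process to keep the container alive (the log path is an assumption; any long-running foreground command works):
#!/bin/sh
# start mongod in the background, capturing its output
touch /var/log/mongod.log
nohup sh -c 'mongod --dbpath /test' >> /var/log/mongod.log 2>&1 &
# keep a foreground process as pid 1 so the container does not exit
exec tail -f /var/log/mongod.log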
Of course, there is also the official Docker documentation on how to start multiple services, again using a wrapper script rather than the CMD directly. The Docker documentation also shows how to use supervisord as a process manager:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY my_first_process my_first_process
COPY my_second_process my_second_process
CMD ["/usr/bin/supervisord"]
If it is an option, you could use a phusion base image, which allows running multiple processes in one container. That way you can run system services such as cron, or other processes, using a service supervisor like runit.
More information about whether or not a phusion base image is a good choice for your use case can be found here.
A Ruby-focused description of how to avoid running extra processes in your container besides your app can be found here. The elaborations are too detailed to repeat on SO.
Hi, I have built and installed the ziftrCoin wallet on an Ubuntu image.
8084e9de3c23 ubuntu:latest "/bin/bash" 25 hours ago Up About a minute 0.0.0.0:10332->10332/tcp ziftrCoin
The problem is that ziftrcoind closes when I exit the container.
If I run docker exec -it ziftrCoin /root/64/./ziftrcoind, the program starts, but I end up attached to the container and have the same problem when I exit.
So how do I update/edit the COMMAND the container starts with, so that it runs "ziftrCoin /root/64/./ziftrcoind" and not "/bin/bash"?
UPDATE
If I build and run it, I can't get it to stay open:
docker run -d ziftr
252554f38c2a41bdd29875bcb6ab7b6bbe98522e16828b1f8b06d8899bc5134c
docker run -it ziftr
ZiftrCOIN server starting
FROM ubuntu
MAINTAINER Krister Johansson <hello@nodejs.how>
WORKDIR /var/ziftrCoin
RUN apt-get update
RUN apt-get install -y wget
RUN wget "https://d19y4lldx7po3t.cloudfront.net/assets/downloads/0.9.3/ziftrcoin-0.9.3-linux64.tar.gz"
RUN tar -xvzf ziftrcoin-0.9.3-linux64.tar.gz
RUN rm ziftrcoin-0.9.3-linux64.tar.gz
ADD ./src/ziftrcoin.conf /root/.ziftrcoin/ziftrcoin.conf
EXPOSE 10332 11332
CMD ["64/./ziftrcoind"]
For Docker, when the process with pid 1 (inside the container) quits, the container quits too (and kills all other processes that were running in it). This is what happens to you, as /bin/bash is the process with pid 1. What you need to do is make the ziftrcoind process pid 1.
You did not provide a Dockerfile or a docker run command, but I assume you run something like docker run ziftrcoin (where ziftrcoin is the name of the image you built) and that you don't have a CMD in your Dockerfile.
The idea is either to give Docker a default command, using CMD in your Dockerfile, or to give it the command to run when issuing the docker run.
Let's see how the Dockerfile would look:
FROM ubuntu
RUN # … Install ziftrcoind
CMD ["/root/64/./ziftrcoind"]
If you build this image, the default command when you run it will be /root/64/./ziftrcoind instead of /bin/bash. You could also do docker run ziftrcoin /root/64/./ziftrcoind to achieve the same effect.
As Kevan Ahlquist commented, if you want to run it in the background you can use the -d flag: docker run -d ziftrcoin (with or without the command, depending on whether you have the CMD in your Dockerfile or not).
Problem found!
I had daemon=1 in ziftrcoin.conf; after removing it, it worked! (With daemon=1, ziftrcoind forks to the background, so the container's pid 1 exits.)
Uploaded it to git.
https://github.com/nodejshow/docker-ziftrcoind