This is the parent Dockerfile I'm working with: https://github.com/Microsoft/aspnet-docker/blob/master/4.7.1-windowsservercore-1709/runtime/Dockerfile
When the container is started with "docker run" I need to run a PS script before the entrypoint executes. How is this possible?
The entrypoint in the parent Dockerfile is: ENTRYPOINT ["C:\\ServiceMonitor.exe", "w3svc"]
I want it to behave as usual, just executing a PS script first.
I have overwritten the entrypoint with a PS script:
ENTRYPOINT ["powershell", "C:/myscript.ps1"]
myscript.ps1
.... does some stuff ....
# executes the command to start up like the parent container did
C:/ServiceMonitor.exe w3svc
Running this script seems to work fine, but are there any gotchas? How can I pass arguments to the entrypoint now?
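For reference, a minimal hedged sketch of one way the wrapper could forward arguments, assuming the ENTRYPOINT is written with -File (e.g. ENTRYPOINT ["powershell", "-File", "C:/myscript.ps1"]) so that anything after the image name in docker run lands in the script's automatic $args array; this is an illustration, not the poster's actual script:
# myscript.ps1 (illustrative)
# .... does some stuff ....
# Hand over to the parent image's entrypoint, forwarding any extra docker run arguments
& "C:\ServiceMonitor.exe" w3svc @args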
Related
Below is my entrypoint.ps1 (PowerShell script):
Set-Location -Path C:\nginx
& "C:\nginx\Configure-Nginx.ps1"
& "C:\nginx\nginx.exe"
I need my Configure-Nginx.ps1 and nginx.exe to be executed on docker run, so I've added an entrypoint to my Dockerfile:
FROM nginx
# nginx is a custom image that's based on mcr.microsoft.com/windows/servercore:1809-KB5003171
COPY entrypoint.ps1 ./
COPY install/Configure-Nginx.ps1 /nginx/Configure-Nginx.ps1
ENTRYPOINT ["powershell", "entrypoint.ps1"]
However, my container keeps restarting every minute... Well, I decided there must be some error in the script, so I ran this image manually with --entrypoint powershell and executed my script directly in the console: .\entrypoint.ps1. The script froze (because nginx was launched) and I could connect to my container from a web browser on the host machine... So everything works! Then why doesn't it work when I call my entrypoint from the Dockerfile? What's the difference? Maybe someone has met a similar problem...
P.S. The container is based on mcr.microsoft.com/windows/servercore:1809-KB5003171 with PowerShell v5.1.17763.1852
First, make sure the script is available under the container root:
COPY entrypoint.ps1 /entrypoint.ps1
Then execute it with either -Command or -File:
ENTRYPOINT ["pwsh", "-Command", "/entrypoint.ps1"]
ENTRYPOINT ["pwsh", "-File","/entrypoint.ps1"]
I have a Dockerfile and the last command is
CMD ["/opt/startup.sh"]
Now I have another shell script, i.e. replacevariables.sh, and I want to execute the following command in my Dockerfile:
sh replacevariables.sh ${app_dir} dev
How can I execute this command? It is a simple script which basically replaces some characters in files under ${app_dir}. What can the solution be, because every piece of documentation I see suggests running only one sh script?
You can use a Docker ENTRYPOINT to support this. Consider the following Dockerfile fragment:
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh replacevariables.sh
ENTRYPOINT ["./entrypoint.sh"]
# Same as above
CMD ["/opt/startup.sh"]
The ENTRYPOINT becomes the main container process, and it gets passed the CMD as arguments. So your entrypoint can do the first-time setup, and then run the special shell command exec "$@" to replace itself with the command it was given.
#!/bin/sh
# First-time setup: rewrite placeholders in the app files
./replacevariables.sh "${app_dir}" dev
# Replace this shell with whatever command was passed in (the CMD by default)
exec "$@"
Even if you're launching some alternate command in your container (docker run --rm -it yourimage bash to get a debugging shell, for example) this only replaces the "command" part, so bash becomes the "$@" in the script, and you still do the first-time setup before launching the shell.
The important caveats are that ENTRYPOINT must be the JSON-array form (CMD can be a bare string that gets wrapped in /bin/sh -c, but this setup breaks ENTRYPOINT) and you only get one ENTRYPOINT. If you already have an ENTRYPOINT (many SO questions seem to like naming an interpreter there) move it into the start of CMD (CMD ["python3", "./script.py"]).
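As an illustration of the flow (yourimage is just a placeholder tag), the CMD always passes through the wrapper:
docker run yourimage                 # runs replacevariables.sh, then exec /opt/startup.sh
docker run --rm -it yourimage bash   # runs replacevariables.sh, then exec bash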
There seems to be a difference between how Docker runs commands in a Dockerfile versus running commands manually after starting a container. This seems to be due to the kind of shell you get: a (I assume) non-interactive shell with a Dockerfile vs. an interactive one when running something like docker run -it <some-img-id>.
How can I debug running commands in a Docker container so that it runs exactly like the commands are run from a Dockerfile? Would just adding /bin/bash --noprofile to the run cmd suffice? Or is there anything else different about the environment when started from a Dockerfile?
What you are experiencing is due to the shell. Most of us are used to the bash shell, so generally we would attempt to run commands in the fashion below.
For new container
docker run -it <imageid> bash
For existing container
docker exec -it <containerid> bash
But when we specify some command using the RUN directive inside a Dockerfile:
RUN echo Testing
Then it is equivalent to running /bin/sh -c 'echo Testing'. So you can expect certain differences, as the two shells are different.
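So, assuming the image has no ENTRYPOINT of its own, you can reproduce that step by hand by invoking the same non-interactive shell form explicitly (placeholder image id):
docker run -it <imageid> /bin/sh -c 'echo Testing'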
In Docker 1.12 or higher there is a Dockerfile directive named SHELL that allows you to override the default shell:
SHELL ["/bin/bash", "-c"]
RUN echo Testing
This would make the RUN command be executed as bash -c 'echo Testing'. You can learn more about the SHELL directive in the Dockerfile reference.
Short answer 1:
If the Dockerfile doesn't use the USER and SHELL commands, then this:
docker run -u root --entrypoint /bin/sh <image> -c "cmd"
Short answer 2:
If you don't squash or compress the image after the build, Docker creates an image layer for each Dockerfile command. You can see them in the output of docker build at the end of each step, after --->:
Step 2/8 : WORKDIR /usr/src/app
---> 5a5964bed25d # <== THIS IS IMAGE ID OF STEP 2
Removing intermediate container b2bc9558e499
Step 3/8 : RUN something
---> f6e90f0a06e2 # <== THIS IS IMAGE ID OF STEP 3
Removing intermediate container b2bc9558e499
Look for the image id just before the RUN step you want to debug (for example, to debug step 3 above, take the step 2 image id). Then just run the command in that image:
docker run -it 5a5964bed25d cmd
Long answer 1:
When you run docker run [image] cmd, Docker in fact starts the cmd in this way:
Executes the default entrypoint of the image with the cmd as its argument. The entrypoint is stored in the image at build time by the ENTRYPOINT command in the Dockerfile. I.e. if cmd is my-app and the entrypoint is /bin/sh -c, it executes /bin/sh -c my-app.
Starts it with the default user id of the image, which is defined by the last USER command in the Dockerfile.
Starts it with the environment variables from all ENV commands in the image's Dockerfile, cumulatively.
When docker build runs a Dockerfile RUN, it does exactly the same, only with the values present at that point (line) of the Dockerfile.
So to be exact, you have to take the values of the ENVs and the last USER command before your RUN line, and use those in the docker run command.
Most common images use /bin/sh -c or /bin/bash -c as the shell, and most likely the build operates as the root user. Therefore docker run -u root --entrypoint /bin/bash <image> -c "cmd" should be sufficient.
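Putting that together, a hedged sketch of reproducing step 3 from the build output above by hand ('something' stands for the command on that RUN line, the image id comes from step 2, and -e APP_ENV=dev is a made-up placeholder for whatever ENV lines precede the RUN):
docker run -it --rm -u root -e APP_ENV=dev --entrypoint /bin/sh 5a5964bed25d -c 'something'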
I have a Dockerfile, which ends with:
ENTRYPOINT ["/bin/bash", "/usr/local/cdt-tests/run-tests.sh"]
After building this container, I want to run it, but instead of executing this bash script (run-tests.sh), I want to open up a terminal window inside the container to inspect the filesystem.
If there were no ENTRYPOINT line, I could do this:
docker build -t x .
docker run -it x /bin/bash
and I could examine the container's files.
However, since there is an ENTRYPOINT, that script will run and I cannot examine the container's files.
Is there anything I can do to get into the container to snoop around?
docker run has an --entrypoint option
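For example, with the image tag from the question:
docker run -it --entrypoint /bin/bash x
# snoop around the filesystem, then run the tests by hand if you like:
# /usr/local/cdt-tests/run-tests.sh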
New to Docker, so please bear with me.
My Dockerfile contains an ENTRYPOINT:
ENV MONGOD_START "mongod --fork --logpath /var/log/mongodb.log --logappend --smallfiles"
ENTRYPOINT ["/bin/sh", "-c", "$MONGOD_START"]
I have a shell script that adds an entry to the database through a Python script and starts the server.
The script, startApp.sh:
chmod +x /addAddress.py
python /addAddress.py $1
cd /myapp/webapp
grunt serve --force
Now, all of the docker run commands below fail to execute this script.
sudo docker run -it --privileged myApp -C /bin/bash && /myApp/webapp/startApp.sh loc
sudo docker run -it --privileged myApp /myApp/webapp/startApp.sh loc
The docker log of the container is:
"about to fork child process, waiting until server is ready for connections. forked process: 7 child process started successfully, parent exiting "
Also, the startApp.sh executes fine when I open a bash prompt in docker and run it.
I am unable to figure out what I am doing wrong; help, please.
I would suggest you create an entrypoint.sh file:
#!/bin/sh
# Initialize start DB command
# Pick from env variable MONGOD_START if it exists
# else use the default value provided in quotes
START_DB=${MONGOD_START:-"mongod --fork --logpath /var/log/mongodb.log --logappend --smallfiles"}
# This will start your DB in background
${START_DB} &
# Make the Python script executable, add the address entry,
# then go to the app directory and serve it in the foreground
chmod +x /addAddress.py
python /addAddress.py "$1"
cd /myapp/webapp
grunt serve --force
Then modify your Dockerfile by removing the last line and replacing it with the following 3 lines:
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Then rebuild your container image using
docker build -t NAME:TAG .
Now run the following command to verify that the ENTRYPOINT is /entrypoint.sh:
docker inspect NAME:TAG | less
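Assuming the image builds as above, the positional argument from the question ("loc") is appended to the ENTRYPOINT and reaches entrypoint.sh as $1:
docker run -it NAME:TAG loc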
I guess (and I might be wrong, since I'm neither a MongoDB nor a Docker expert) that your combination of mongod --fork and /bin/sh -c is the culprit.
What you're essentially executing is this:
/bin/sh -c mongod --fork ...
which
executes a shell
this shell executes a single command and waits for it to finish
this command launches MongoDB in daemon mode
MongoDB forks a child process and the parent immediately exits; since that parent was the container's main process, the container stops
The easiest fix is probably to just use
CMD ["mongod"]
like the official MongoDB Docker does.
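For completeness, a hedged sketch of what that could look like in this Dockerfile, keeping the original logging flags but dropping --fork so mongod stays in the foreground as the container's main process:
CMD ["mongod", "--logpath", "/var/log/mongodb.log", "--logappend", "--smallfiles"]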