Failed to run service inside docker container - makefile

I need to create a Docker container that runs the OneDrive client so I can synchronize data to my OneDrive account.
This is the project I've created: https://github.com/rischanlab/onedrive-ubuntu-docker
I created a Makefile so these commands can be run easily.
This is an example from my Makefile:
shell:
	@echo
	@echo "------------------------------------------------------------------"
	@echo "Shelling in (production mode)"
	@echo "------------------------------------------------------------------"
	@docker-compose -p $(PROJECT_ID) run data /bin/bash

statusd:
	@echo
	@echo "------------------------------------------------------------------"
	@echo "Checking OneDrive daemon status"
	@echo "------------------------------------------------------------------"
	@docker-compose -p $(PROJECT_ID) run data onedrive-d status

startd:
	@echo
	@echo "------------------------------------------------------------------"
	@echo "Running OneDrive daemon"
	@echo "------------------------------------------------------------------"
	@docker-compose -p $(PROJECT_ID) run data onedrive-d start
The problem is that when I run make startd, the log says onedrive-d has started, but my data still doesn't sync to my OneDrive account. When I then check the status with make statusd, the log says onedrive-d is not running. I am confused about why this happens.
Note:
When I log in to the container with make shell and then start onedrive-d manually with onedrive-d start, it works: the daemon starts fine and my files are successfully synchronized.
The docker-compose.yml and Dockerfile can be seen in the project linked above.
So, can anyone explain to me what's wrong with my code?

docker-compose run doesn't do what you're expecting here. It's for running one-off tasks in a new container, not for exec'ing inside an existing container.
You can use docker exec -ti <container name> if you need to get into the same container.
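For example, a sketch only (assuming the compose service is named data, as in the Makefile above, and that its container keeps running after docker-compose up -d), the daemon targets could exec into that already-running container instead of spawning new ones:

startd:
	docker-compose -p $(PROJECT_ID) up -d data
	docker-compose -p $(PROJECT_ID) exec data onedrive-d start

statusd:
	docker-compose -p $(PROJECT_ID) exec data onedrive-d status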

Related

Auto Start Script

So I am making a script that runs these commands whenever the server boots or reboots:
sudo bash
su - erp
cd frappe-bench/
bench start >/tmp/bench_log &
I found guides here and there about how to change user in a script, and I came up with the following script:
#! /bin/sh
sudo -u erp bash
cd /home/erp/frappe-bench/
bench start >/tmp/bench_log &
And, I have created a service at /etc/systemd/system/ and set it to run automatically when the server boots up.
The problem is, whenever I run sudo systemctl start erpnextd.service and check the status, it comes up with this:
May 24 17:10:05 appbsystem2 systemd[1]: Started ERPNext | Auto Restart.
May 24 17:10:05 appbsystem2 sudo[18814]: root : TTY=unknown ; PWD=/ ; USER=erp ; COMMAND=/bin/bash
May 24 17:10:05 appbsystem2 systemd[1]: erpnextd.service: Succeeded.
But it still doesn't start up ERPNext.
All I want is a script that starts ERPNext automatically every time the server reboots.
Note: frappe-bench is installed only for the erp user.
Because you are using systemd, you already have all the features of your script available, and better. So you don't even need the script anymore:
[Unit]
Description=...
[Service]
# Run as user erp.
User=erp
# You probably also want to run as group erp, if it exists.
Group=erp
# Change to this directory before executing.
WorkingDirectory=/home/erp/frappe-bench
# Redirect standard output to the given log file.
StandardOutput=file:/tmp/bench_log
# Redirect standard error to the same log file.
StandardError=file:/tmp/bench_log
# Command line for starting the program. Make sure to use an absolute path!
ExecStart=/full/path/to/bench start
[Install]
WantedBy=multi-user.target
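Once the unit file is saved (for example as /etc/systemd/system/erpnextd.service, as in the question), reload systemd and enable the unit so it starts on boot:

sudo systemctl daemon-reload
sudo systemctl enable --now erpnextd.service
systemctl status erpnextd.service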
Using crontab (the script will start after every reboot/startup):
# crontab -e
@reboot sh /full/path/to/bench start >/tmp/bench_log
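You can confirm the entry was saved (and in which user's crontab it lives) with:

crontab -l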
The answer provided by Thomas is very helpful.
However, I found another workaround: adding the path of my script file to the bottom of the /etc/rc.local file.
Both methods work, it's just a matter of preference ;)

Why is my Docker container ASP.NET Core app not available after ending debugging in Visual Studio?

My title explains most of it, but I want to understand why I can access https://localhost:32770/ and reach my API endpoints while I am debugging in Visual Studio, yet when I end debugging the app becomes unavailable.
I'm currently in the thick of spending a few days wrapping my head around Docker and Kubernetes, and this is stumping me a bit; I'd really love to fill this gap in my knowledge.
The container remains running after being created, so what has changed?
I noticed this is run at the start of the build:
docker exec -i 0f855d9b4c801bf8c52da48e6dd02ffdf0fe7242fde22fb9a221616e4b2900f9 /bin/sh \
-c "if PID=$(pidof dotnet); then kill $PID; fi"
but I don't see how that changes what happens after debugging ends, since this runs before the Dockerfile and everything else. I don't understand the -c in the command, but I do understand that the script in the quotation marks after it is run in the container, following the docker exec syntax docker exec [OPTIONS] CONTAINER COMMAND [ARG...]. It seems this script kills the dotnet process from the previous build before the new one is created.
This is run before the Dockerfile:
docker build -f "F:\Dev\API_files\API_name\Dockerfile"
--force-rm
-t API_name:dev
--target base
--label "com.microsoft.created-by=visual-studio"
--label "com.microsoft.visual-studio.project-name=API_name" "F:\Dev\API_name"
I don't see anything here that would change how the container runs; the rm here 'removes intermediate containers after a build (default true)', according to docker build --help.
The Dockerfile is run next, and it is pretty much the default one for ASP.NET Core applications; it has
EXPOSE 80
EXPOSE 443
and the rest are simple build steps.
After all this, I can't seem to find much indication of what is going on. My guess is that it has to do with IIS Express, but really I don't know much about what it does while Visual Studio is debugging. What is going on behind the scenes, while I am debugging, that opens the localhost port for the Docker container?
Edit: I found a docker run command that may have something to do with it, but maybe not. The docker run command has the -P flag to 'publish all exposed ports to random ports', but the container never stops running, so shouldn't I be able to find these ports and connect to the API?
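For reference, the host ports that -P assigned to the exposed container ports can be listed straight from the Docker CLI (a quick sketch; the container name is whatever docker ps reports):

docker ps                      # note the container name and the PORTS column
docker port <container name>   # prints mappings such as 443/tcp -> 0.0.0.0:32770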
During debugging, if you run this command:
docker exec -it containerName bash -c 'pidof dotnet'
you will notice that the dotnet process is running; when you stop the debugging and run the command again, you will see that the process has been terminated.
If you want to start your application in the container without running the debugger again, you just need to start the dotnet process inside the container.
You could do that by running a script like this:
# Set these 3 variables
$containerName = "MyContainer"
$appDll = "myApp.dll"
$appDirectory = "/app/bin/debug/netcoreapp3.1"

# Build the argument line for a new powershell process that runs dotnet inside the container
# (avoid assigning to the automatic $args variable)
$processArgs = "-Command docker exec -it $containerName bash -c 'cd $appDirectory;dotnet $appDll'"
Start-Process -FilePath "powershell" -ArgumentList $processArgs -NoNewWindow
You can check whether it worked by running this command again:
docker exec -it containerName bash -c 'pidof dotnet'
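If you prefer not to spawn a separate PowerShell process at all, a variation (just a sketch, reusing the placeholder container name and paths from the script above) is to let Docker run the command detached with docker exec -d:

docker exec -d MyContainer bash -c "cd /app/bin/debug/netcoreapp3.1 && dotnet myApp.dll"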

Unable to get any Docker Entrypoint from script working without continuous restarts

I'm having trouble understanding, or finding a working example of, using a bash script as an ENTRYPOINT for a Docker container. I've been trying numerous things for about 5 hours now.
Even the example from this official Docker blog of using a bash script as an entrypoint doesn't work.
Dockerfile
FROM debian:stretch
COPY docker-entrypoint.sh /usr/local/bin/
RUN ln -s /usr/local/bin/docker-entrypoint.sh / # backwards compat
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["postgres"]
docker-entrypoint.sh
#!/bin/bash
set -e
if [ "$1" = 'postgres' ]; then
    chown -R postgres "$PGDATA"
    if [ -z "$(ls -A "$PGDATA")" ]; then
        gosu postgres initdb
    fi
    exec gosu postgres "$@"
fi
exec "$@"
build.sh
docker build -t test .
run.sh
docker service create \
--name test \
test
Despite many efforts, I can't seem to get a Dockerfile that uses a bash script as its entrypoint without the container continuously restarting and failing repeatedly.
My understanding is that exec "$@" was supposed to keep the container from immediately exiting, but I'm not sure whether that depends on some other process within the script failing.
I've tried using a docker-entrypoint.sh script that simply looked like this:
#!/bin/bash
exec "$@"
And since that also failed, I think it rules out something going wrong inside the script as the cause of the failure.
What's also frustrating is that there are no logs, either from docker service logs test or docker logs [container_id], and I can't seem to find anything useful in docker inspect [container_id].
I'm having trouble understanding everyone's confidence in exec "$@". I don't want to resort to something like tail -f /dev/null or passing a command at docker run. I was hoping there would be a consistent, reliable way for a docker-entrypoint.sh script to start services, one I could also use with docker run for other things, but even with Docker's official blog and countless questions and blog posts here and on other sites, I can't get a single example to work.
I would really appreciate some insight into what I'm missing here.
"$@" is just the string of command line arguments. You are providing none, so it is executing a null string. That exits and kills the container. Note also that the exec command always ends the running script: it replaces the current shell process with the given command, so it doesn't keep the script running.
What I think you want to do is keep calling this script in kind of a recursive way. To actually have the script call itself, the line would be:
exec $0
$0 is the name of the bash file (or the function name, if inside a function). In this case it would be the name of your script.
Also, I am curious about your reluctance to use tail -f /dev/null. Creating a new shell over and over as fast as the script can go is not more performant. I am guessing you want this script to run over and over just to check your if condition.
In that case, a while true loop would probably work.
What you show, in principle, should work, and is one of the standard Docker patterns.
The interaction between ENTRYPOINT and CMD is pretty straightforward. If you provide both, then the main container process is whatever ENTRYPOINT (or docker run --entrypoint) specifies, and it is passed CMD (or the command at the end of docker run) as arguments. In this context, ending an entrypoint script with exec "$@" just means "replace me with the CMD as the main container process".
So, the pattern here is
Do some initial setup, like chowning a possibly-external data directory; then
exec "$#" to run whatever was passed as the command.
In your example there are a couple of things worth checking; it won't run as shown.
Whatever you provide as the ENTRYPOINT needs to obey the usual rules for executable commands: if it's a bare command, it must be in $PATH; it must have the executable bit set in its file permissions; if it's a script, its interpreter must also exist in the image; if it's a binary, it must be statically linked or all of its shared-library dependencies must already be in the image. For your script, you may need to make it executable if it isn't already:
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
The other thing with this setup is that (by definition) if the ENTRYPOINT exits, the whole container exits, and the Bourne shell set -e directive tells the script to exit on any error. In the artifacts in the question, gosu isn't a standard part of the debian base image, so your entrypoint will fail (and your container will exit) when it tries to run that command. (That won't affect the very simple case, though.)
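If you do want the fuller Postgres-style script from the question to work, the image would also need gosu installed first; one possible sketch, assuming the gosu package is available in the image's apt repositories (otherwise it is typically downloaded from its GitHub releases):

RUN apt-get update \
 && apt-get install -y --no-install-recommends gosu \
 && rm -rf /var/lib/apt/lists/*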
Finally, if you run into trouble running a container under an orchestration system like Docker Swarm or Kubernetes, one of your first steps should be to run the same container, locally, in the foreground: use docker run without the -d option and see what it prints out. For example:
% docker build .
% docker run --rm c5fb7da1c7c1
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"docker-entrypoint.sh\": executable file not found in $PATH": unknown.
ERRO[0000] error waiting for container: context canceled
% chmod +x docker-entrypoint.sh
% docker build .
% docker run --rm f5a239f2758d
/usr/local/bin/docker-entrypoint.sh: line 3: exec: postgres: not found
(Using the Dockerfile and short docker-entrypoint.sh from the question, and using the final image ID from docker build . in those docker run commands.)

Docker is not running my entire entrypoint.sh script

I have created a Docker container to stand up Elasticsearch. Elasticsearch is started and managed by supervisor, which is also installed in my Docker container. I have created an entrypoint.sh script and added the following to the end of my Dockerfile:
ENTRYPOINT ["/usr/local/startup/entrypoint.sh"]
My entrypoint.sh script looks as follows:
#!/bin/bash -x
# Start Supervisor if not already running
if ! ps aux | grep -q "[s]upervisor"; then
    echo "Starting supervisor service"
    exec /usr/bin/supervisord -nc /etc/supervisor/supervisord.conf
else
    echo "Supervisor is currently running"
fi
echo "creating /.es_created"
touch /.es_created
exec "$@"
When I start my Docker container, supervisor starts and in turn successfully starts Elasticsearch. The problem is that it never executes the last bit of the script that creates the .es_created file. It seems like once the
exec /usr/bin/supervisord -nc /etc/supervisor/supervisord.conf
command is executed, it just stops there. I added -x to the #!/bin/bash so I could call docker logs on the container, and it confirms that it never reaches the last echo and touch commands. I feel like I may be missing something about entrypoint scripts, which is why this is happening, but ultimately I want to be able to execute some commands after Elasticsearch has started so I can configure a proper index and insert some data.
Your guess
It seems like once the
exec /usr/bin/supervisord -nc /etc/supervisor/supervisord.conf
command is executed, it just stops there.
is correct, because the exec builtin of bash has the following semantics: the specified program is executed and replaces the parent shell process (it is an exec system call).
So your question is actually not a Docker issue; it is rather related to Bash. For more details on the exec shell builtin, you could for example take a look at this askubuntu question, or read the corresponding section of the Bash reference manual.
To sum up, you should try just writing
/usr/bin/supervisord -nc /etc/supervisor/supervisord.conf
If that command indeed puts itself in the background, it should be OK. Otherwise, you can of course append an &:
/usr/bin/supervisord -nc /etc/supervisor/supervisord.conf &
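Applied to the entrypoint.sh from the question, a sketch of that change could look like this (keeping supervisord's -n flag and using the shell builtin wait so the script, and therefore the container, stays alive; the comments mark what changed):

#!/bin/bash -x
# Start Supervisor in the background if not already running; note there is no
# `exec` here, so the script keeps going after this line.
if ! ps aux | grep -q "[s]upervisor"; then
    echo "Starting supervisor service"
    /usr/bin/supervisord -nc /etc/supervisor/supervisord.conf &
else
    echo "Supervisor is currently running"
fi

echo "creating /.es_created"
touch /.es_created

# Block on the background supervisord process so the container does not exit
# (in a fresh container the if branch will normally have been taken).
wait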

Commands to execute background process in Docker CMD

I am creating a Docker image using a Dockerfile. I would like to execute some scripts when starting the Docker container. Currently I have a shell script that executes all the necessary processes:
CMD ["sh","start.sh"]
I would like to execute a shell command with a process running in the background, for example:
CMD ["sh", "-c", "mongod --dbpath /test &"]
Besides the comments on your question that already pointed out a few things about Docker best practices, you could anyway start a background process from within your start.sh script and keep the start.sh script itself in the foreground, using the nohup command and the ampersand (&). I did not try it with mongod, but something like the following in your start.sh script could work:
#!/bin/sh
...
nohup sh -c "mongod --dbpath /test" &
...
Of course, there is also the official Docker documentation on how to start multiple services, again using a script file rather than CMD. The Docker documentation also shows how to use supervisord as a process manager:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY my_first_process my_first_process
COPY my_second_process my_second_process
CMD ["/usr/bin/supervisord"]
If it is an option, you could use a Phusion base image, which allows running multiple processes in one container. That way you can run system services such as cron, or other processes, using a service supervisor like runit.
More information about whether or not a Phusion base image is a good choice for your use case can be found here.
A Ruby-focused description of how to avoid running more processes in your container besides your app can be found here. The elaborations are too detailed to repeat on SO.
