How to troubleshoot a Dockerfile when the application crashes/fails in the container? - spring-boot

If the application fails in the Docker container, you are not able to troubleshoot what happened. Please propose a solution to that.

docker ps -a
This will list all containers, including those that have already exited (for whatever reason).
Then you can copy the ID of the container you are interested in and run:
docker logs <id of the container that failed>
Another interesting command is:
docker inspect <id of the container that failed>
It returns a large JSON document - you can check some sections there, like memory settings and "State" (whether the process was OOM-killed, and so forth).
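If you only care about the crash details, docker inspect also accepts a Go template via --format, so you can pull out just the relevant pieces instead of reading the whole JSON (the container ID below is a placeholder):
docker inspect --format '{{json .State}}' <container id>
docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}}' <container id>
The first prints the whole "State" object; the second prints only whether the kernel OOM-killed the process and the exit code it returned.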

Related

What to do when Memgraph stops working without any info?

Sometimes the Docker container where Memgraph is running just stops working or says that the process was aborted with exit code 137. How can I fix this?
You should check the Memgraph logs, where you'll probably find the reason why the process was aborted.
Since you said that you're using Memgraph with Docker, there are two options:
If you run Memgraph with Docker using the volume for logs, that is, with
-v mg_log:/var/log/memgraph, then the mg_log folder can usually be found at \\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\ (Windows) or /var/lib/docker/volumes/ (Linux and macOS).
If you run Memgraph without using the volume for logs, then you need to enter the Docker container. In order to do that, first you have to find out the container ID by running docker ps. Then you have to copy the container ID and run docker exec -it <containerID> bash. For example, if the container ID is 83d76fe4df5a, then you run docker exec -it 83d76fe4df5a bash. Next, you need to find the folder where the logs are located. You can do that by running cd /var/log/memgraph. To read the logs, run cat <memgraph_date>.log; that is, if you have the log file memgraph_2022-03-02.log inside the log folder, then run cat memgraph_2022-03-02.log.
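If you'd rather not open a shell inside the container, a sketch of an alternative (container ID and file name are placeholders) is to copy the whole log directory out with docker cp and read it on the host:
docker cp <containerID>:/var/log/memgraph ./memgraph-logs
cat ./memgraph-logs/memgraph_2022-03-02.log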
Hopefully, when you read the logs, you'll be able to fix your problem.

container restarting again and again and unable to stop / remove / kill

I have a problem. When I check the list of running containers with the command:
docker ps
it shows me a running container with its ID and name. I killed it with docker kill jenkins-master.1.vvafqnuu97itpn9clqgyqgqe7
but after a few seconds it started again automatically with a new container ID.
I tried another command to remove it: docker container rm jenkins-master.1.vvafqnuu97itpn9clqgyqgqe7
It was removed and then started again with another container ID after a few seconds.
I really can't figure out what's going on...
I stopped the container first and then removed it; when I checked afterwards with docker ps there was no container in the list, and after a few seconds a container was running with some other ID... which surprised me.
The container is managed by swarm mode. Swarm mode will see the difference between the current state and target state and create a new container to correct the difference. Try:
docker service ls
docker service rm jenkins-master
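If you want to see what swarm keeps recreating, or keep the service around without running it, two related commands may help (the service name is taken from the question; scaling to zero is a sketch of an alternative, not something the answer requires):
docker service ps jenkins-master       # lists the tasks (containers) swarm keeps starting for this service
docker service scale jenkins-master=0  # keeps the service definition but runs zero replicas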

Running a Docker image with cron

I am using an image from Docker Hub that uses cron to perform some actions at an interval. I have registered and pushed it as described in the documentation, as a worker process (not a web process). It also requires several environment variables.
I've run it from the command line, e.g. docker run -t -e E_VAR1=VAL1 registry.heroku.com/image_name/worker, and it worked for a few days, then suddenly stopped and I had to run the command again.
Questions:
Is this the correct way to run a Docker image (as a worker process) on Heroku?
Why might it stop running after a few days? Are there any logs to check?
Is there a way to restart the process automatically?
How do I properly set environment variables for the Docker container on Heroku?
Thanks!
If you want this to run in the background, you should use the -d flag (detached mode), not -t (which allocates a pseudo-TTY).
To check logs, use docker logs [container name or ID]. You can find the container's name and ID using docker ps -a. That should give you an idea as to why the container stopped.
To have the container restart automatically, add the --restart always flag when you run it. Alternatively, use --restart on-failure to only restart it when it exits with a nonzero exit code.
The way you set environment variables seems fine.
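Putting those suggestions together, the run command might look like the sketch below (the image name and variable come from the question; the --name is just an illustrative addition so the container is easy to refer to later):
docker run -d --restart on-failure --name cron-worker -e E_VAR1=VAL1 registry.heroku.com/image_name/worker
docker logs cron-worker    # check why it exited if it ever stops again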

Ubuntu run service in foreground

I've made a (docker) container for ddclient.
The problem is that I'm having trouble running that service in the foreground so that the Docker container keeps running.
I've managed to keep the container running by adding a bash at the end of the script, but this is hackish, since the actual process it should be watching is ddclient.
Another way I found was to tail -f the log file, but if the service stops, the container will keep running instead of stopping.
Q: So is there any (easy) way to keep a service running in the foreground?
The problem with the process (any process) running in a container is signal management: you need to make sure that SIGTERM (sent by docker stop) and other signals are properly communicated to the right process(es) in order to successfully stop/remove a container and not leave zombie processes (see the "PID 1 zombie reaping issue").
One option is to at least make your service write to a log file:
# the shell runs yourProcess in the foreground and redirects its output to a log file
ENTRYPOINT ["/bin/sh", "-c"]
CMD yourProcess > log
That should keep it in foreground, as suggested in "How do I bring a daemon process to foreground?".
For a service, try using phusion/baseimage-docker as a base image; it manages services properly.
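For reference, here is a minimal sketch of what that might look like, assuming phusion/baseimage-docker's runit convention of one run script per service under /etc/service/<name>/. The base image tag is a placeholder, and the ddclient option for staying in the foreground varies by version, so check its documentation:
FROM phusion/baseimage:<tag>    # pick a current tag from Docker Hub
RUN apt-get update && apt-get install -y ddclient && rm -rf /var/lib/apt/lists/*
# runit, bundled in the base image, supervises anything with an executable /etc/service/<name>/run
RUN mkdir -p /etc/service/ddclient \
 && printf '#!/bin/sh\n# assumption: this option keeps ddclient in the foreground - verify for your version\nexec /usr/sbin/ddclient -foreground\n' > /etc/service/ddclient/run \
 && chmod +x /etc/service/ddclient/run
# the base image's init runs as PID 1, forwards signals and reaps zombies
CMD ["/sbin/my_init"]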

Starting docker service with "sudo docker -d"

I am trying to push an image to my registry, but when I tried to do:
sudo docker push myreg:5000/image
I got an error telling me that I need to start the Docker daemon with
docker -d --insecure-registry myreg:5000
So I stopped the Docker service and started it using the command above. Once I do that, the current shell window (SSH) is stuck with the Docker output, and if I close it the Docker service stops.
I know this is an easy one, and I searched for hours and couldn't find anything.
Thank you
The problem is that when I run the command, I get all the Docker output in the shell, and if I close it, the Docker service stops. Usually -d should take care of that, but it won't work.
I think there's a confusion here: the top-level -d flag (docker -d) starts Docker in daemon mode, in the foreground. This is different from the docker run -d <image> flag, which means "start a container from <image>, in detached mode". What you're seeing on your screen is the daemon output / logs, waiting for connections from a Docker client.
Back to your original issue;
The instructions to run docker -d --insecure-registry myreg:5000 could be clearer, but they illustrate that you should change the daemon options of your docker service to include the --insecure-registry myreg:5000 option.
Depending on the process manager your system uses (e.g., upstart or systemd), this means you'll have to edit the /etc/default/docker file (see the documentation), or add a "drop-in" file to override the default systemd service options; see SystemD custom daemon options.
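For instance, a sketch of each approach (the drop-in file name is arbitrary, and the ExecStart line must mirror whatever your distribution's unit file already uses):
# /etc/default/docker (upstart / sysvinit style)
DOCKER_OPTS="--insecure-registry myreg:5000"
# /etc/systemd/system/docker.service.d/insecure-registry.conf (systemd drop-in)
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --insecure-registry myreg:5000
After editing, reload and restart the service, e.g. sudo systemctl daemon-reload && sudo systemctl restart docker on systemd systems.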
Some notes;
The top-level -d option was deprecated in Docker 1.8 in favor of the new docker daemon command
Using --insecure-registry is discouraged for security reasons as it allows both unencrypted and untrustworthy communication with the registry. It's preferable to add your CA to the trusted list of your system.
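As a sketch of that last suggestion: Docker looks for per-registry certificates under /etc/docker/certs.d/<registry host:port>/ on the Docker host, so placing the registry's CA there as ca.crt avoids the insecure flag entirely (myregistry-ca.crt is a placeholder for wherever your CA certificate lives):
sudo mkdir -p /etc/docker/certs.d/myreg:5000
sudo cp myregistry-ca.crt /etc/docker/certs.d/myreg:5000/ca.crt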
