I am using an image from Docker Hub that uses cron to perform some actions at regular intervals. I have registered and pushed it as a worker process (not a web process), as described in the documentation. It also requires several environment variables.
I've run it from the command line, e.g. docker run -t -e E_VAR1=VAL1 registry.heroku.com/image_name/worker, and it worked for a few days, then suddenly stopped and I had to run the command again.
Questions:
Is this the correct way to run a Docker image as a worker process on Heroku?
Why might it stop running after a few days? Are there any logs to check?
Is there a way to restart the process automatically?
How do I properly set environment variables for the container on Heroku?
Thanks!
If you want this to run in the background, you should use the -d flag to run the container detached, not -t (which allocates a pseudo-terminal and keeps the run in the foreground).
To check the logs, use docker logs [container name or id]. You can find the container's name and ID using docker ps -a. That should give you an idea as to why the container stopped.
To have the container restart automatically, add the --restart always flag when you run it. Alternatively, use --restart on-failure to restart it only when it exits with a nonzero exit code.
The way you set environment variables seems fine.
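Putting those pieces together, a sketch of the full command, reusing the image name and variable from the question (adjust both to your actual values), could be:

docker run -d --restart always -e E_VAR1=VAL1 registry.heroku.com/image_name/worker

With -d the container keeps running after you close the terminal, and --restart always tells the Docker daemon to bring it back up if it exits.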
Sometimes the Docker container where Memgraph is running just stops working or says that the process was aborted with exit code 137. How can I fix this?
You should check the Memgraph logs, where you'll probably find the reason why the process was aborted. Exit code 137 usually means the process received SIGKILL (128 + 9), which often points to the system's out-of-memory killer.
Since you said that you're using Memgraph with Docker, there are two options:
If you run Memgraph with Docker using a volume for logs, that is, with
-v mg_log:/var/log/memgraph, then the mg_log volume can usually be found at \\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\ (Windows) or /var/lib/docker/volumes/ (Linux and macOS).
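If you're not sure where Docker placed the volume on disk, docker volume inspect prints the exact mount point (assuming the volume is indeed named mg_log):

docker volume inspect mg_log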
If you run Memgraph without a volume for logs, then you need to enter the Docker container. To do that, first find the container ID by running docker ps. Then copy the container ID and run docker exec -it <containerID> bash. For example, if the container ID is 83d76fe4df5a, run docker exec -it 83d76fe4df5a bash. Next, go to the folder where the logs are located by running cd /var/log/memgraph. To read a log file, run cat <memgraph_date>.log; for example, if the log folder contains memgraph_2022-03-02.log, run cat memgraph_2022-03-02.log.
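As a quick recap, the whole sequence from the paragraph above, using the example container ID and log file name (yours will differ), is:

docker ps
docker exec -it 83d76fe4df5a bash
cd /var/log/memgraph
cat memgraph_2022-03-02.log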
Hopefully, when you read the logs, you'll be able to fix your problem.
If the application fails inside a Docker container, how can I troubleshoot what happened? Please propose a solution for that.
docker ps -a
This will list all the containers, including those that have already exited (for whatever reason).
Then you can copy the ID of the container you're interested in and run:
docker logs <ID of the container that failed>
Another interesting command is:
docker inspect <ID of the container that failed>
It returns a big JSON document; you can check some sections there, like the memory settings and "State" (whether the process was OOM-killed, its exit code, and so forth).
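If you only care about a few of those fields, docker inspect can also print them directly with a Go template instead of the full JSON; for example, to see whether the container was OOM-killed and what its exit code was:

docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}} {{.State.Error}}' <ID of the container that failed>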
I'm trying to run Docker in Bash on Ubuntu on Windows, but every time I get this message:
"Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?". If I run it in PowerShell, it works. Can somebody help?
Connecting to the Docker daemon requires some privileges that you don't have when starting the Bash terminal.
You can, however, use the Docker command terminal, which will allow you to interact with the Docker daemon.
Found the solution on this post: https://blog.jayway.com/2017/04/19/running-docker-on-bash-on-windows/
Connect Docker on WSL to Docker on Windows
Running docker against an engine on a different machine is actually quite easy, as Docker can expose a TCP endpoint which the CLI can attach to.
This TCP endpoint is turned off by default; to activate it, right-click the Docker icon in your taskbar and choose Settings, and tick the box next to “Expose daemon on tcp://localhost:2375 without TLS”.
With that done, all we need to do is instruct the CLI under Bash to connect to the engine running under Windows instead of to the non-existing engine running under Bash, like this:
$ docker -H tcp://0.0.0.0:2375 images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
There are two ways to make this permanent – either add an alias for the above command, or better yet, export an environment variable which instructs Docker where to find the host engine:
$ echo "export DOCKER_HOST='tcp://0.0.0.0:2375'" >> ~/.bashrc
$ source ~/.bashrc
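For completeness, the alias approach mentioned above could look something like this (a sketch, not taken from the original post):

$ echo "alias docker='docker -H tcp://0.0.0.0:2375'" >> ~/.bashrc
$ source ~/.bashrc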
Now, running docker commands from Bash works just as it's supposed to.
$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
I'm trying to run the Hetionet v1.0 docker container mentioned in this SO post.
I've set up a DigitalOcean droplet with Docker.
I ran docker pull dhimmel/hetionet and it worked.
Now I run docker run dhimmel/hetionet and the following happens (and it never returns to the interactive shell prompt).
If that completed successfully, I think the last thing I'm supposed to do is run sh ~/run-docker.sh. Furthermore, nothing is live at my droplet's ip_address:7474.
The error in the screenshot above looks a lot like it could be related to some redundant #Path("/") annotation buried in the Docker container, as described in this SO post's comment, but I'm not sure.
Is the output from running docker run dhimmel/hetionet supposed to hang my shell? I'm running a 2 GB Memory / 40 GB Disk Droplet on Ubuntu 16.04 with Docker 1.12.5.
Thanks for your interest in the Hetionet Docker.
The output from step 3 (docker run dhimmel/hetionet) is expected. It looks like the Docker container launched successfully, downloaded the Hetionet database, and started the Neo4j server. I'll look into fixing the warnings, but they're not errors, as Neo4j still launches.
For production, we use a more advanced Docker run command. Depending on your use case, you may want to use the development docker run command:
docker run \
--publish=7474:7474 \
--publish=7687:7687 \
--volume=$HOME/neo4j/hetionet-data:/data \
--volume=$HOME/neo4j/hetionet-logs:/var/lib/neo4j/logs \
dhimmel/hetionet
Both the production and development commands map ports. This makes the Neo4j server running inside your Docker container available at http://localhost:7474/. This is most likely what you want. If you're doing this on DigitalOcean, replace http://localhost with the IP address of your droplet.
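A quick way to check that the port mapping works is to request the Neo4j browser endpoint, replacing the placeholder with your droplet's IP address (or using localhost if you're running locally):

curl http://<droplet-ip>:7474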
For an interactive shell session in a dhimmel/hetionet container, you can use:
docker run --interactive --tty dhimmel/hetionet bash
However, that command does not launch the Neo4j server -- it just lets you explore the image.
Does this clear things up?
I am trying to push an image to my registry, but when I tried to run:
sudo docker push myreg:5000/image
I got an error telling me that I need to start the Docker daemon with
docker -d --insecure-registry myreg:5000
So I stopped the Docker service and started it using the command above. Once I do that, the current shell window (SSH) is stuck showing the Docker output, and if I close it, the Docker service stops.
I know this is an easy one, and I searched for hours and couldn't find anything.
Thank you
The problem is that when I run the command, I get all the Docker output in the shell, and if I close it, the Docker service stops. Usually the -d flag should take care of that, but it won't work.
I think there's some confusion here: the top-level -d flag (docker -d) starts Docker in daemon mode, in the foreground. This is different from docker run -d <image>, which means "start a container from <image>, in detached mode". What you're seeing on your screen is the daemon's output/logs, waiting for connections from a Docker client.
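To make the distinction concrete (the image name here is just a placeholder):

docker -d             # old syntax: start the Docker daemon itself, in the foreground
docker run -d <image> # start a container from <image>, detached in the background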
Back to your original issue:
The instructions to run docker -d --insecure-registry myreg:5000 could be clearer, but they illustrate that you should change the daemon options of your docker service to include the --insecure-registry myreg:5000 option.
Depending on the process manager your system uses (e.g., upstart or systemd), this means you'll have to edit the /etc/default/docker file (see the documentation), or add a "drop-in" file to override the default systemd service options; see SystemD custom daemon options.
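For example, on an upstart-based system the option usually goes into /etc/default/docker, while on systemd you'd override ExecStart in a drop-in file. Both of the following are sketches you'll need to adapt to your distribution (and after the systemd change, run systemctl daemon-reload and restart the service):

# /etc/default/docker (upstart/sysvinit)
DOCKER_OPTS="--insecure-registry myreg:5000"

# /etc/systemd/system/docker.service.d/insecure-registry.conf (systemd)
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --insecure-registry myreg:5000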
Some notes:
The top-level -d option is deprecated as of Docker 1.8 in favor of the new docker daemon command.
Using --insecure-registry is discouraged for security reasons as it allows both unencrypted and untrustworthy communication with the registry. It's preferable to add your CA to the trusted list of your system.
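If you go the trusted-certificate route instead, Docker also looks for a registry-specific CA certificate under /etc/docker/certs.d/<registry host:port>/ca.crt; roughly (adjust the registry address and certificate path to your setup):

sudo mkdir -p /etc/docker/certs.d/myreg:5000
sudo cp ca.crt /etc/docker/certs.d/myreg:5000/ca.crt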