AWS ECS trouble - Running shell script to boot program - shell

I am trying to run a Docker image on amazon ECS. I am using a command that starts a shell script to boot up the program:
CMD ["sh","-c", "app/bin/app start; bash"]
I start it this way because, for some reason, when I run the app (an Elixir/Phoenix app) in the background it crashes immediately, but if I run it in the foreground it is fine. Run this way locally everything works, but when I try to run it in my cluster, it shuts down. Please help!!

Docker keeps track of your running foreground process; if that process stops, the container stops. The reason your container works when the command ends with "bash" is that bash doesn't stop.
I guess you are using a shell script to start an application that serves in the background, like nginx or a daemon. So try to find an option that makes the app run in the foreground; that will keep your container alive. E.g. nginx has a "daemon off" option for this.
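If the app is packaged as an exrm/distillery release (as the app/bin/app start command suggests), the usual equivalent is the release script's foreground command instead of start. A minimal sketch, assuming your release script supports it (check app/bin/app help for the commands your version actually provides):
# run the release in the foreground so Docker/ECS supervises it directly
CMD ["app/bin/app", "foreground"]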

for some reason when I run the app (elixir/phoenix app) in the background it was crashing immediately
So you have a broken application and you are looking for a kludge to make it look like it somewhat works. This is not a reliable approach at all.
Instead you should:
make it work in the background
use systemctl or upstart to manage restarts of the Erlang VM on crashes (a sketch follows below)
Please note that it matters where your application is compiled. It must be built on exactly the same architecture/container as the production one, with the same Erlang, Elixir and OS versions; otherwise nobody guarantees it will be robust or even work.
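For the systemctl route, a minimal sketch of a systemd unit with automatic restarts; the unit name myapp, the path /opt/app/bin/app and the foreground command are assumptions you will need to adapt:
# write a unit that runs the release in the foreground and lets systemd
# restart the Erlang VM whenever it crashes
cat > /etc/systemd/system/myapp.service <<'EOF'
[Unit]
Description=My Elixir/Phoenix application
After=network.target

[Service]
ExecStart=/opt/app/bin/app foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now myapp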

Related

promtail as service on windows behaves differently than run directly

Somehow there seems to be a difference in how promtail behaves on Windows.
Run directly via
promtail.exe --config.file=config.yml
it works: it adds targets and starts scanning.
Started via nssm as a service, the service is running, but stderr redirected to a file shows that promtail never starts scanning and even terminates. Any idea what could cause this behaviour?
nssm settings:

Running an docker image with cron

I am using an image from Docker Hub and it uses cron to perform some actions at an interval. I have registered and pushed it, as described in the documentation, as a worker process (not a web process). It also requires several environment variables.
I've run it from the command line, e.g. docker run -t -e E_VAR1=VAL1 registry.heroku.com/image_name/worker, and it worked for a few days, then suddenly stopped and I had to run the command again.
Questions:
Is this the correct way to run a Docker container (as a worker process) on Heroku?
Why might it stop running after a few days? Are there any logs to check?
Is there a way to restart the process automatically?
How do I properly set environment variables for the container on Heroku?
Thanks!
If you want this to run in the background, you should use the -d flag, which starts the container detached, rather than -t, which only allocates a pseudo-TTY and leaves the container attached to your terminal.
To check logs, use docker logs [container name or id]. You can find out the container's name and id using docker ps -a. That should give you an idea as to why the container stopped.
To have the container restart automatically, add the --restart always flag when you run it. Alternatively, use --restart on-failure to restart only when it exits with a nonzero exit code.
The way you set environment variables seems fine.
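Putting those pieces together, a rough sketch using the image and variable from the question:
docker run -d --restart always -e E_VAR1=VAL1 registry.heroku.com/image_name/worker   # detached and auto-restarting
docker ps -a               # find the container's name/id
docker logs <container>    # see why it stopped last time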

Ubuntu run service in foreground

I've made a (docker) container for ddclient.
The problem is that I'm having trouble running that service in the foreground so that the Docker container keeps running.
I've managed to keep the container running by adding a bash at the end of the script, but this is hackish, since the actual process it should be watching is ddclient.
Another way I found was to tail -f the log file, but if the service stops, the container will keep running instead of stopping.
Q: So is there any (easy) way to keep a service running in the foreground?
The problem with the process (any process) running in a container is signal management: you need to make sure the SIGKILL and other signals are properly communicated to the right process(es) in order to successfully stop/remove a container (and not leave zombie processes: see "PID 1 zombie reaping issue")
One option is at least to make your service write to a log file:
ENTRYPOINT ["/bin/sh" "-c" ]
CMD yourProcess > log
That should keep it in foreground, as suggested in "How do I bring a daemon process to foreground?".
For a service, try using phusion/baseimage-docker as a base image; it manages services properly.
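For ddclient specifically, a sketch, assuming the version in your image can be told to stay in the foreground (check ddclient --help; if it cannot, the log-file approach above still applies):
# keep ddclient as the container's foreground process
CMD ["ddclient", "-foreground", "-verbose"]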

How do I make things (e.g. tomcat) run after cloud-init has run the userdata script?

Short version:
How do I make init.d scripts run after cloud-init has run the userdata script on an EC2?
Long version:
Our deployment process is to construct AMIs with everything installed on them (tomcat, nginx, application etc), but with certain configuration values missed out. At boot time, the userdata script adds in the missing configuration values, and then the application stack can start up
Our current EC2s are based on an old version of the official Debian AMIs, which have the script ec2-run-user-data. This script runs at boot, and downloads and runs the EC2's userdata. When constructing the AMI, I simply edit the init.d scripts for tomcat, nginx etc. to include ec2-run-user-data in their "Required-Start:" line, so they start up after the userdata has been run.
Unfortunately that approach is no longer viable, as we want to start using the hvm base AMIs, which have cloud-init installed rather than ec2-run-user-data. But I can't figure out how cloud-init works well enough to work out how to make the process work.
As far as I can tell, the userdata script is run by the cloud-final step, but cloud-final has $all in its "Required-Start:" line. I could remove it, but I don't know what consequences that might have.
I've tried making tomcat etc. run after cloud-init or cloud-config, but the userdata hasn't run by then. Also, it looks like cloud-init and cloud-config start processes and then exit, which might explain why cloud-final needs to have $all in its Required-Start.
More Info:
We use the 'baked AMI' approach, where we create an AMI with all the packages/applications installed, then tell the existing Autoscaling Groups to replace their EC2s with new ones based on the new AMI (via CloudFormation). Some configuration information isn't known at baking time, but must be inserted via the userdata script.
When our tomcat app starts up it expects to read in the file /etc/appname/application.conf. That file contains the placeholder text <<REPLACE_TIME>>, and Tomcat will fail to start if it runs before <<REPLACE_TIME>> has been replaced.
The userdata script is something like:
#!/bin/bash
sed -i 's!<<REPLACE_TIME>>!{New value to use, determined at deploy time}!' /etc/appname/application.conf
The default Required-Start for tomcat is "$local_fs $remote_fs $network". At baking time, I change that to "$local_fs $remote_fs $network ec2-run-user-data"
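For reference, that corresponds to an LSB header in the baked init script roughly like this (a sketch, not the exact file):
### BEGIN INIT INFO
# Provides:          tomcat
# Required-Start:    $local_fs $remote_fs $network ec2-run-user-data
# Required-Stop:     $local_fs $remote_fs $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
### END INIT INFO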
By doing all that, the text in /etc/appname/application.conf gets replaced before tomcat runs. But as I said above, I want to change to using cloud-init, and I can't figure out what I need to do to make tomcat start after cloud-init has run the userdata. I get the impression that cloud-init doesn't run the userdata until very late in the process. I could change the userdata script to contain "/etc/init.d/tomcat restart" at the end, but it seems a bit dumb to have tomcat fail to start then get restarted.
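For completeness, the fallback mentioned above would look roughly like this as the userdata script (the same sed as before, with tomcat (re)started explicitly once the substitution is done):
#!/bin/bash
sed -i 's!<<REPLACE_TIME>>!{New value to use, determined at deploy time}!' /etc/appname/application.conf
# only (re)start tomcat after the placeholder has been replaced
/etc/init.d/tomcat restart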

Running Erlang project on Amazon EC2

We have a project with several processes, and we run it by calling erl -pa ebin and then mymodule_supervisor:start_link(). from the Erlang shell.
We have set up an ubuntu instance on Amazon EC2. Being new to this, how can we run the project remotely, so we can close the connection and the project will continue to run?
We can run the Erlang shell in the background, but we can't run our project in it. It would be perfect to see an example.
Method 1: You could build a release package from your code. If done right, this will embed a complete Erlang system (along with your application and its dependencies) in an easily distributable tar file. Using an automatically generated script the node can then be started as a daemon, running in the background even after you close your shell.
A good way to get started is to use Rebar, which already supports release handling out of the box.
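A rough sketch of the commands involved, assuming rebar3 and a release named myapp defined in rebar.config (the original rebar uses reltool and rebar generate instead):
rebar3 release                                        # build the release under _build/default/rel/myapp
_build/default/rel/myapp/bin/myapp daemon             # start the node in the background (older relx scripts use "start")
_build/default/rel/myapp/bin/myapp remote_console     # attach a shell to the running node later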
Method 2: Use tmux or screen (both easily installed on Ubuntu) to start your node and detach the session. If you choose tmux, the following should work:
Start tmux simply by running tmux from a shell.
From within tmux, start your node with the erl command as before.
Detach your session using Ctrl-b followed by d. Exit your shell. The node should still be running.
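The same steps as commands (the session name erlang_node is just an example):
tmux new -s erlang_node     # start a new named tmux session
erl -pa ebin                # inside tmux, start the node as before
# detach with Ctrl-b then d; to get back to the node later:
tmux attach -t erlang_node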
The "proper" way to start the supervisor is to call its start_link function from within the start function of your Erlang application.
