I have a small Python Flask app deployed on an AWS EC2 machine. When I SSH into the EC2 machine and start Flask, it works, but when I terminate the session, Flask dies. I can run it using nohup. What is the best way to make it independent of the SSH session and keep it running continuously?
There are several options:
nohup python app.py &
use screen
run supervisord on system startup and control everything through it (the Pythonic way :)); see the sketch below
nohup means: do not terminate this process even when the controlling terminal is closed (it makes the process ignore the hangup signal, SIGHUP).
& at the end means: run this command as a background task.
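If you go the supervisord route, a minimal setup sketch might look like the following. This assumes a Debian/Ubuntu-style EC2 instance, supervisor installed from the package manager, and that the app lives at /home/ubuntu/myapp/app.py; the program name and paths are placeholders to adjust:

sudo apt-get install -y supervisor
sudo tee /etc/supervisor/conf.d/flaskapp.conf <<'EOF'
[program:flaskapp]
command=/usr/bin/python /home/ubuntu/myapp/app.py
directory=/home/ubuntu/myapp
autostart=true
autorestart=true
stdout_logfile=/var/log/flaskapp.out.log
stderr_logfile=/var/log/flaskapp.err.log
EOF
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status flaskapp

With autostart and autorestart, supervisord restarts the app if it crashes and starts it again when supervisord itself comes up at boot, independent of any SSH session.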
I'm using a Dockerfile that ends with a CMD ["/start.sh"]:
#!/bin/bash
service ssh start
/usr/bin/node /myApp/app.js
If for some reason I need to kill the node process, the SSH server gets shut down as well (which forces me to restart the container to reconnect).
Is there any simple way to avoid this behavior?
Thank You.
The container exits as soon as its main process exits. In your case, the main process inside the container is the start.sh shell script. The start.sh script starts the ssh service and then runs the Node.js process as a child process. Once the Node.js process dies, the shell script exits as well, and so the container exits. What you can do is put the Node.js process in the background.
#!/bin/bash
service ssh start
/usr/bin/node /myApp/app.js &
# Need the following infinite loop as the shell script should not exit
while true; do
  sleep 2
done
I DO NOT recommend this approach though. You should have only a single process per container. Read the following answers to understand why -
Running multiple applications in one docker container
If you still want to run multiple processes inside container, there are better ways to do it like using supervisord - https://docs.docker.com/config/containers/multi-service_container/
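As a rough illustration of that supervisord approach (the sshd path, the config location, and the assumption that supervisor is installed in the image are all placeholders to adapt):

cat > /etc/supervisord.conf <<'EOF'
[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D
autorestart=true

[program:node]
command=/usr/bin/node /myApp/app.js
autorestart=true
EOF
# then make supervisord the container's main process instead of start.sh, e.g. in the Dockerfile:
# CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]

That way killing the node process only makes supervisord restart it, and the container keeps running.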
I've written a TCP server in Ruby, running on port 2000 with EventMachine.
Right now, what I do is ssh to my server and run the command ruby lib/tcp_server.rb to turn on the server, but it shuts down when I log out.
I've tried nohup and using & but nothing seems to stick for the server for a long time.
So my question is, how do I deploy this server on port 2000 and keep it running, like how we deploy Rails to nginx.
It's not a web server, but a TCP server for a connected device, if that helps.
Thanks!
Solution 1: tmux or screen
This is the simplest approach: create a tmux or screen session, start your server inside it, and detach; the session (and your server) survives after you log out.
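For example, with tmux (the session name is arbitrary):

tmux new -s tcp_server          # create a named session
ruby lib/tcp_server.rb          # start the server inside the session
# detach with Ctrl-b d; the server keeps running after you log out
# later, reattach with:
tmux attach -t tcp_server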
Solution 2: nohup
nohup ruby lib/tcp_server.rb > stdout.log 2> stderr.log &
You've already tried nohup and &, so I suppose you know how this works.
Solution 3: daemonize
You can detach from the shell and daemonize the process by forking
it twice, setting the session ID and changing the current working directory.
def daemonize
  exit if fork        # parent exits; the child carries on
  Process.setsid      # the child becomes leader of a new session, detached from the terminal
  exit if fork        # fork again so the process can never reacquire a controlling terminal
  Dir.chdir '/'       # don't keep the original working directory busy
end
With this approach, you will have to redirect stdout and stderr to keep logs.
Another way to daemonize is to use gems like daemons.
Update:
To restart the process automatically after being killed, you need a process manager like god or pm2.
To start the process automatically after booting, you need to write an init script, but what it looks like depends on your service management system and operating system. One of the most well-known is System V init. If you are using Ubuntu, you might want to take a look at Upstart or systemd.
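For example, a minimal systemd sketch might look like this (the unit name, working directory, and ruby path are assumptions to adjust for your deployment):

sudo tee /etc/systemd/system/tcp_server.service <<'EOF'
[Unit]
Description=EventMachine TCP server on port 2000
After=network.target

[Service]
WorkingDirectory=/var/www/tcp_server
ExecStart=/usr/bin/ruby lib/tcp_server.rb
Restart=always

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now tcp_server.service

Restart=always also covers the restart-after-crash case, so a separate process manager becomes optional.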
I've got 3 containers that will run on a single server, which we'll call A, B and C.
Each container has a script on the host with the commands to start it:
A_start.sh
B_start.sh
C_start.sh
I'm trying to create a swarm script to start them all, but not sure how.
ABC_start.sh
UPDATE:
this seems to work, with the first script's output going to the terminal; Ctrl+C exits out of them all.
./A_start.sh & ./B_start.sh & ./C_start.sh
Swarm will not help you start them at all; it is used to distribute work amongst the Docker machines that are part of a cluster.
There is no good reason not to use docker-compose for this use case: its main purpose is to link containers properly and bring them up, so your collection of scripts could end up being a single docker-compose up command.
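As a rough sketch (the image names stand in for whatever A_start.sh, B_start.sh and C_start.sh currently pass to docker run; ports, volumes and the like go under each service as well):

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  a:
    image: my-a-image
  b:
    image: my-b-image
  c:
    image: my-c-image
EOF
docker-compose up -d     # start all three in the background
docker-compose down      # stop and remove them again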
In bash, you can do this:
nohup A_start.sh &
nohup B_start.sh &
nohup C_start.sh &
I need to ssh to a machine and launch a bash script running some hour-long tests which require no human interaction for their entire execution.
Is there any way I can decouple my running script from my shell, so that I can close the terminal and shut down my local computer as I like?
Sure, use nohup:
nohup ./program &
Alternatively, start your program inside screen or tmux and then detach.
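For the nohup route, a small sketch (the log file name is arbitrary; disown is optional and just removes the job from the shell's job table):

nohup ./program > program.log 2>&1 &   # redirect stdout and stderr so nothing still needs the terminal
disown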
I run the wesabe web app locally.
Each time I start it by opening separate shells to start the mysql server, java backend and rails frontend.
My question is, how could you automate this with a shell script or rake task?
I tried just listing the commands sequentially in a shell script (see below) but the later commands never run because each app server creates its own process that never 'returns' (until you quit the server).
I've looked into sub-shells and parallel rake tasks, but that's where I got stuck.
echo 'starting mysql'
mysqld_safe
echo 'starting pfc'
cd ~/wesabe/pfc
rails server -p 3001
echo 'starting brcm'
cd ~/wesabe/brcm-accounts-api
script/server
echo 'ok, go!'
open http://localhost:3001
If you don't mind the output of the three servers being interleaved, put a "&" at the end of each line that starts a server, so it runs in the background.
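Applied to the script from the question, that might look like this (mysqld_safe and the two app servers are backgrounded; the paths are taken verbatim from the question):

echo 'starting mysql'
mysqld_safe &

echo 'starting pfc'
cd ~/wesabe/pfc
rails server -p 3001 &

echo 'starting brcm'
cd ~/wesabe/brcm-accounts-api
script/server &

echo 'ok, go!'
open http://localhost:3001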