I've written a TCP server in Ruby, using EventMachine, that listens on port 2000.
Right now, what I do is SSH to my server and run ruby lib/tcp_server.rb to start the server, but it shuts down when I log out.
I've tried nohup and &, but nothing seems to keep the server running for long.
So my question is: how do I deploy this server on port 2000 and keep it running, the way we deploy a Rails app behind nginx?
It's not a webserver, but a TCP server for a connected device, if that helps.
Thanks!
Solution 1: tmux or screen
This is the simplest approach: create a tmux or screen session, start your server inside it, and then detach. The session, and your server with it, keeps running after you log out, and you can reattach later to check on it.
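A minimal example with tmux (the session name is arbitrary); screen works the same way with screen -S, Ctrl+a d and screen -r:
tmux new -s tcp_server        # start a named session
ruby lib/tcp_server.rb        # run the server inside the session
# detach with Ctrl+b then d; the server keeps running after you log out
tmux attach -t tcp_server     # reattach later to check on it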
Solution 2: nohup
nohup ruby lib/tcp_server.rb > stdout.log 2> stderr.log &
You've already tried nohup and &, so I suppose you know how this works.
Solution 3: daemonize
You can detach from the shell and daemonize the process by forking
it twice, setting the session ID and changing the current working directory.
def daemonize
  exit if fork            # parent exits, child keeps running detached from the shell
  Process.setsid          # the child becomes session leader with no controlling terminal
  exit if fork            # fork again so the process can never reacquire a terminal
  Dir.chdir '/'           # stop holding the old working directory open
end
With this approach, you will have to redirect stdout and stderr to keep logs.
Another way to daemonize is to use gems like daemons.
Update:
To restart the process automatically after being killed, you need a process manager like god or pm2.
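As a rough sketch with pm2 (the process name and script path are placeholders; check the pm2 documentation for the exact flags on your version):
npm install -g pm2
pm2 start lib/tcp_server.rb --interpreter ruby --name tcp_server
pm2 logs tcp_server      # tail the process output
pm2 restart tcp_server   # pm2 also restarts the process automatically if it dies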
To start the process automatically at boot, you need to write an init script, but what it looks like depends on your operating system and its service manager. One of the oldest schemes is System V init. If you are using Ubuntu, you might want to take a look at Upstart or systemd.
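For example, on a systemd-based distribution a minimal unit file might look roughly like this (service name, paths and user are placeholders for your setup):
# /etc/systemd/system/tcp_server.service
[Unit]
Description=EventMachine TCP server on port 2000
After=network.target

[Service]
WorkingDirectory=/path/to/your/app
ExecStart=/usr/bin/ruby /path/to/your/app/lib/tcp_server.rb
Restart=always
User=deploy

[Install]
WantedBy=multi-user.target
Then enable and start it with something like sudo systemctl enable --now tcp_server.service.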
Related
I'm fairly new to Bash, Redis and Linux in general, and I'm having trouble creating a script. This is also my first question; I hope it is not a duplicate.
So here's the problem: I'm creating a simple application in Ruby for educational purposes, but the feature I'm trying to implement uses Redis and Sidekiq. What I want is an executable script (I named it server) that starts the Redis server and then Sidekiq, but also shuts Redis down after the user stops Sidekiq.
This is what I came up with:
#!/usr/bin/env sh
set -e
redis-server --daemonize yes
bundle exec sidekiq -r ./a/sample/path/worker.rb
redis-cli shutdown # this is not working, I want to execute this after shutting sidekiq down...
When the script reaches the fourth line, it starts the little Sidekiq "welcome page" and I can't do anything until I shut it down with Ctrl+C. I assumed that after shutting Sidekiq down this way, the script would continue with the next command, redis-cli shutdown.
But it does not. When I Ctrl+C Sidekiq, the script simply drops back to the command line.
Is anyone familiar with these concepts who could help me? I want a script that also shuts Redis down after I'm done with Sidekiq.
Thanks!
Have you considered using Foreman?
http://blog.daviddollar.org/2011/05/06/introducing-foreman.html
https://github.com/ddollar/foreman
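With Foreman, a Procfile for your case might look something like the sketch below (the worker path is copied from your script). Note that redis-server runs in the foreground here, without --daemonize yes, so Foreman can manage it:
redis: redis-server
sidekiq: bundle exec sidekiq -r ./a/sample/path/worker.rb
Run foreman start to bring both up; when you hit Ctrl+C (or either process exits), Foreman shuts the other one down as well.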
I have a small app built with Python Flask and deployed on an AWS EC2 machine. When I SSH to the EC2 machine and start Flask, it works, but when I terminate the session Flask dies. I can run it using nohup, but what is the best way to make it independent of the SSH session and keep it running continuously?
There are several options:
nohup python app.py &
use screen
run supervisord (link) on system startup and control everything through it (the pythonic way :)); a sample config is sketched below
nohup means: do not terminate this process even when the stty (the controlling terminal) is cut off.
& at the end means: run this command as a background task.
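A minimal supervisord program section might look roughly like this (file location, program name and paths are placeholders; see the supervisor documentation for the full option list):
; /etc/supervisor/conf.d/flaskapp.conf
[program:flaskapp]
command=python /home/ubuntu/myapp/app.py
directory=/home/ubuntu/myapp
autostart=true
autorestart=true
stdout_logfile=/var/log/flaskapp.out.log
stderr_logfile=/var/log/flaskapp.err.log
After adding it, something like sudo supervisorctl reread followed by sudo supervisorctl update should pick the program up and keep it running across crashes and reboots.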
One of our application servers (Glassfish v3.0.1) keeps crashing for no apparent reason. Sometimes I am away from the Internet, so I cannot start it up again. Therefore, I wrote a simple bash script that waits ten minutes and then runs asadmin. It looks like this:
#!/bin/bash
while true; do
    sleep 600
    sudo /home/ismetb/glassfishv3.0.1/glassfish/bin/asadmin start-domain
done
This seems to work fine however I have a couple of problems:
If I terminate the bash script (by pressing Ctrl+Z), the Java process (Glassfish) dies and the start-domain and stop-domain commands stop working altogether. That means I can neither stop Glassfish nor access it. I don't know if anybody else has experienced this problem before. If the process dies, the only thing I can do is look up the ID of the Java process and kill it from the terminal, which is not desirable at all. Any ideas why the Java process dies when I quit the script?
What I want to add to my script is a check on the port Glassfish is using. If the port is occupied, maybe I can assume Glassfish is not down (although the port, 8080 by default, might still be held even though Glassfish is dead; I am not sure about that). If it is not occupied, then with a simple command I can get the ID of the Java process and kill it, after which start-domain will work again. Any ideas or directions on how I can do this?
You can use a cron job instead. To install a cron job for root, enter
sudo crontab -e
and add this line
*/10 * * * * /home/ismetb/glassfishv3.0.1/glassfish/bin/asadmin start-domain
This will run asadmin every ten minutes.
If you're not comfortable with the command line, you might also try gnome-schedule, but I have no experience with that.
For your second problem, you can use curl or wget to check whether Glassfish is up. Try to fetch some URL, or even the administration interface, and if you don't get a response, assume Glassfish is down.
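Combining the two ideas, the cron entry could probe Glassfish first and only run start-domain when nothing answers (this assumes Glassfish serves HTTP on the default port 8080; adjust the URL to your setup):
*/10 * * * * curl -sf http://localhost:8080/ > /dev/null || /home/ismetb/glassfishv3.0.1/glassfish/bin/asadmin start-domain
curl -s keeps the check quiet and -f makes it fail on HTTP errors, so asadmin only runs when the probe fails.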
I need to ssh to a machine and launch a bash script running some hour-long tests which require no human interaction for their entire execution.
Is there any way I can decouple my running script from my shell, so that I can close the terminal and shut down my local computer as I like?
Sure, use nohup:
nohup ./program &
Alternatively, start your program inside screen or tmux and then detach.
I have a master-workers architecture where the number of workers is growing on a weekly basis. I can no longer be expected to ssh or remote console into each machine to kill the worker, do a source control sync, and restart. I would like to be able to have the master place a message out on the network that tells each machine to sync and restart.
That's where I hit a roadblock. If I were using any sane platform, I could just do:
exec('ruby', __FILE__)
...and be done. However, I did the following test:
p Process.pid
sleep 1
exec('ruby', __FILE__)
...and on Windows, I get one ruby instance for each call to exec. None of them die until I hit ^C on the window in question. On every platform I tried, it executes the new version of the file each time, which I have verified by making simple edits to the test script while the test marched along.
The reason I'm printing the pid is to double-check the behavior I'm seeing. On Windows, I get a different pid for each execution, which I would expect, considering that I see a new process in the task manager for each run. The Mac is behaving correctly: the pid stays the same across every exec, and I have verified with dtrace that each run triggers a call to the execve syscall.
So, in short, is there a way to get a windows ruby script to restart its execution so it will be running any code - including itself - that has changed during its execution? Please note that this is not a rails application, though it does use activerecord.
After trying a number of solutions (including the one submitted by Byron Whitlock, which ultimately put me onto the path to a satisfactory end) I settled upon:
IO.popen("start cmd /C ruby.exe #{$0} #{ARGV.join(' ')}")
sleep 5
I found that if I didn't sleep at all after the popen, and just exited, the spawn would frequently (>50% of the time) fail. This is not cross-platform obviously, so in order to have the same behavior on the mac:
IO.popen("xterm -e \"ruby blah blah blah\"&")
The classic way to restart a program is to write another one that does it for you. So you spawn a process such as restart.exe <args>, then die or exit; restart.exe waits until the calling script is no longer running, then starts the script again.
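On the Mac/Linux side, such a helper could be as small as this sketch (the helper name and polling interval are just illustrative): the original process spawns it with its own PID and command line and then exits, and the helper re-launches the (possibly updated) script once the caller is gone.
#!/usr/bin/env bash
# restart_helper.sh <pid-of-caller> <command...>
pid="$1"; shift
while kill -0 "$pid" 2>/dev/null; do   # wait until the calling process has exited
  sleep 1
done
exec "$@"                              # then start the (possibly updated) script again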