Make a Ruby program a daemon?

I want to write a Ruby program that will always be running in the background (a daemon) on my Mac.
Can someone point me in the right direction on how this would be done?

Ruby 1.9.x now has the following:
Process.daemon
Put it in your code and that's it.
Taken from "Daemon Processes in Ruby."
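As a minimal sketch (the log path and loop body are just illustrative): Process.daemon forks into the background, starts a new session, changes the working directory to "/" and points the standard streams at /dev/null, so anything you want to keep should go to a file:

Process.daemon

File.open("/tmp/my_daemon.log", "a") do |log|
  log.sync = true
  loop do
    log.puts "alive at #{Time.now}"
    sleep 60
  end
end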

Use Daemonize.rb
require 'daemons'
Daemons.daemonize
Very simple sample: http://github.com/utkarsh2012/backitup/blob/master/backitup.rb
How to install daemons gem:
gem install daemons
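Beyond Daemons.daemonize, the same gem can also wrap a block in a full control script. A minimal sketch (the daemon name is made up):

require 'daemons'

# Run as: ruby this_script.rb start|stop|restart|status
Daemons.run_proc('backitup') do
  loop do
    # periodic work goes here
    sleep 60
  end
end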

Ah, Google to the rescue! Check out
http://fitzgeraldsteele.wordpress.com/2009/05/04/launchd-example-start-web-server-at-boot-time/
wherein a helpful blogger provides an example of writing a launchd plist to launch a ruby Web application server.
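For reference, a skeleton plist along the lines the post describes might look like this (the label and paths are placeholders); save it under ~/Library/LaunchAgents/ and load it with launchctl load:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.example.mydaemon</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/bin/ruby</string>
    <string>/Users/me/bin/mydaemon.rb</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>KeepAlive</key>
  <true/>
</dict>
</plist>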

This is a module to daemonize your code. Here's an offshoot that wraps an existing script.
Essentially it boils down to this (from Travis Whitton's Daemonize.rb, the first link above, modified for some program I wrote ages ago):
private

# This method causes the current running process to become a daemon.
# If closefd is true, all existing file descriptors are closed.
def daemonize(pathStdErr, oldmode=0, closefd=false)
  srand # Split rand streams between spawning and daemonized process
  safefork and exit # Fork and exit from the parent
  # Detach from the controlling terminal
  unless sess_id = Process.setsid
    raise 'Cannot detach from controlling terminal'
  end
  # Prevent the possibility of acquiring a controlling terminal
  if oldmode.zero?
    trap 'SIGHUP', 'IGNORE'
    exit if pid = safefork
  end
  Dir.chdir "/"   # Release old working directory
  File.umask 0000 # Ensure sensible umask
  if closefd
    # Make sure all file descriptors are closed
    ObjectSpace.each_object(IO) do |io|
      unless [STDIN, STDOUT, STDERR].include?(io)
        io.close rescue nil
      end
    end
  end
  STDIN.reopen "/dev/null"      # Free file descriptors and
  STDOUT.reopen "/dev/null"     # point them somewhere sensible;
  STDERR.reopen pathStdErr, "w" # STDERR goes to the given logfile
  return oldmode ? sess_id : 0  # Return value is mostly irrelevant
end

# Try to fork; if the maximum process limit for the system
# has been reached, retry every 5 seconds
def safefork
  tryagain = true
  while tryagain
    tryagain = false
    begin
      if pid = fork
        return pid
      end
    rescue Errno::EWOULDBLOCK
      sleep 5
      tryagain = true
    end
  end
end

See the daemons-rails gem for Rails 3 (based on rails_generator):
https://github.com/mirasrael/daemons-rails
You can generate a daemon stub like this:
rails generate daemon <name>
Features:
individual control script per daemon
rake:daemon command per daemon
capistrano friendly
app-wide control script
monitoring API
possible multiple daemon sets

Related

ruby: Sending keystrokes to PTY in raw mode

I'm attempting to add some more automated tests to the ruby-newt module. The code I have seems to work, but still requires manually hitting ENTER at the terminal in order for it to complete.
For example, in the following code \t will switch to the next button and \r will press the button, and both commands execute successfully, but the ENTER key still needs to be pressed manually at the terminal, otherwise the program just hangs indefinitely.
If the line wr.write "\t\r" is commented out, then the program will time out and exit successfully after 10 seconds. I've tried wr.flush, but that does not help. I've also tried including \n in the command.
Is there anything additional I should include in the write command to ensure the child program successfully receives it?
require 'newt'
require 'pty'
def newt_run
  begin
    Newt::Screen.new
    Newt::Screen.centered_window(20, 15, 'Button')
    b1 = Newt::Button.new(1, 1, 'Button1')
    b2 = Newt::Button.new(1, 6, 'Button2')
    b = Newt::Button.new(1, 11, 'Exit')
    f = Newt::Form.new
    f.set_timer(10000)
    f.add(b1, b2, b)
    rv = f.run
  ensure
    Newt::Screen.finish
  end
end

master, slave = PTY.open
rd, wr = IO.pipe

if fork.nil? then
  master.close
  wr.close
  $stdin.reopen(rd)
  $stdout.reopen(slave)
  $stderr.reopen(slave)
  newt_run
else
  slave.close
  rd.close
  wr.write "\t\r"
  Process.wait
end
The problem is that the newt C-library opens /dev/tty by default for input. It does not use stdin. This is why nothing you send it seems to work. It's not reading your pipe, it is reading /dev/tty.
Here is the problem in more detail:
Ruby newt calls newtInit() in the C-lib
newtInit() calls SLang_init_tty
SLang_init_tty attaches input to /dev/tty by default.
If you read the documentation of SLang_init_tty you will find that the variable SLang_TT_Read_FD determines if /dev/tty is used or not.
Solution 1:
You need to set SLang_TT_Read_FD to stdin, before calling newtInit() from Ruby.
Solution 2:
Use setsid and ioctl(TIOCSCTTY) to reassign the controlling terminal in the forked process (see docs for ioctl here).
Working example:
TIOCSCTTY = 0x540E

master, slave = PTY.open

if fork.nil? then
  # Child process
  # Become session leader. Required for acquiring a controlling TTY.
  Process.setsid
  master.close               # Close master side
  STDIN.reopen(slave)        # Reassign STDIN
  STDIN.ioctl(TIOCSCTTY, 0)  # Reassign controlling TTY (important part)
  # Ensure master is ready
  slave.gets
  # Now we can run the UI
  newt_run
else
  # Parent process
  slave.close
  # Sync up with slave
  master.puts 'hello'
  # Allow for UI setup
  sleep 1
  master.write "\e[B" # Arrow down
  master.write "\e[B" # Arrow down
  master.write "\t"   # Tab
  master.write "\r"   # Enter
  Process.wait
end

Make Net::SSH update returned data packets/chunks in exec block more often

I have a ruby script on a remote server that I'm running via Net::SSH on my local pc.
The remote script takes a few minutes to run and outputs its progress to stdout.
The problem I have is the block in my exec command only gets called when the packet/chunk is full.
So I get the progress all in one hit about each minute.
Here are some cut-down examples that illustrate my problem:
Server Script:
(0..999).each do |i|
  puts i
  sleep 1
end
puts 1000
Local Script:
Net::SSH.start('ip.v.4.addr', 'user', :keys => ['my_key']) do |ssh|
  ssh.exec("ruby count_to_1000.rb") do |ch, stream, data|
    puts data if stream == :stdout
  end
  ssh.loop(1)
end
Is there any way from the remote script to force the sending of the packet/chunk?
Or is there a way to set a limit of say a second (or n bits) before it's flushed? (within Net::SSH)
Thanks for all your help!
Try flush:
http://www.ruby-doc.org/core-2.1.5/IO.html#method-i-flush
(0..999).each do |i|
  puts i
  STDOUT.flush
  sleep 1
end
Or sync:
http://www.ruby-doc.org/core-2.1.5/IO.html#method-i-sync
STDOUT.sync = true
(0..999).each do |i|
  puts i
  sleep 1
end
(Untested, btw. Maybe they need to be used on the client-side instead, or on some other IO stream. But those are the two methods that immediately come to mind.)
In my test setup this works as expected (tested with localhost). However, there might be some issues with the STDOUT flush.
You can try to write to STDOUT instead of using puts (I have heard that there is some difference that I don't really understand).
Thus, you can on your server use:
(0..999).each do |i|
  STDOUT.puts i
  sleep 1
end
STDOUT.puts 1000
# You could possibly also use "STDOUT.write 1000", but it will not append a newline like puts does.
If that does not work, then you can try to force-flush the STDOUT by using STDOUT.flush(). I believe the same can be achieved by writing an empty string to STDOUT, but I am not 1000% sure.
It might also happen that the exec command actually waits for the entire process to terminate for some reason (I was not able to figure it out from the docs). In which case, you won't be able to achieve what you want. Then you can consider setting up websockets, using DRb, or some other means to pass the data.
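If you do go the DRb route, a bare-bones sketch (the host, port, and ProgressListener class are made up for illustration) would let the remote script push each progress line immediately instead of relying on stdout:

# On the local pc: expose an object the remote script can call.
require 'drb/drb'

class ProgressListener
  def report(line)
    puts line
  end
end

DRb.start_service('druby://0.0.0.0:8787', ProgressListener.new)
DRb.thread.join

# On the server, in count_to_1000.rb:
require 'drb/drb'

listener = DRbObject.new_with_uri('druby://my.local.pc:8787')
(0..999).each do |i|
  listener.report(i) # delivered per call, no stdout buffering involved
  sleep 1
end
listener.report(1000)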

Ruby: Logger and Daemons

I'm using ruby 1.9.2p180 (2011-02-18 revision 30909).
In order to do the logging I use the logging gem.
My program has two blocks, which are used as daemons.
But logging from these blocks results in an error and nothing is written to the logfile:
log shifting failed. closed stream
log writing failed. closed stream
Here is what happens in the code:
log = Logger.new(logbase + 'logfile.log', 'monthly')
log.level = Logger::INFO
proc = Daemons.call(options) do
  # [...]
  log.info "Any Logmessage"
  # [...]
end
Any idea what's wrong there?
The Daemons gem closes all file descriptors when it daemonizes the process. So any logfiles that were opened before the Daemons block will be closed inside the forked process.
And since you can't write to closed file descriptors -> errors.
You can read more about what happens when you daemonize a process in the chapter "What does daemons internally do with my daemons?" at:
http://daemons.rubyforge.org/Daemons.html
The solution is to open the logfile inside the daemon block instead of outside of it. That should fix it. But note that daemonizing changes the working directory to /, so take that into account when referencing logfile paths.
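Applied to the question's code, that might look like this (a sketch; the absolute log path is illustrative):

proc = Daemons.call(options) do
  # Open the logfile after daemonizing, with an absolute path
  log = Logger.new('/var/log/myapp/logfile.log', 'monthly')
  log.level = Logger::INFO
  # [...]
  log.info "Any Logmessage"
  # [...]
end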
A solution that works successfully in the delayed_job gem is to collect all open files before the fork and reopen them afterwards.
An adjusted extract from delayed_job:
@files_to_reopen = []
ObjectSpace.each_object(File) do |file|
  @files_to_reopen << file unless file.closed?
end

Daemons.run_proc('some_process') do
  @files_to_reopen.each do |file|
    file.reopen file.path, 'a+'
    file.sync = true
  end
  # Your code
end

How to run multiple ruby daemons and handle input and output of each daemon?

Here's the code:
while 1
  input = gets
  puts input
end
Here's what I want to do but I have no idea how to do it:
I want to create multiple instances of the code to run in the background and be able to pass input to a specific instance.
Q1: How do I run multiple instances of the script in the background?
Q2: How do I refer to an individual instance of the script so I can pass input to the instance (Q3)?
Q3: The script uses "gets" to take input; how would I pass input into an individual script's gets?
e.g.
Let's say I'm running three instances of the code in the background and I refer to the instances as #1, #2, and #3 respectively.
I pass "hello" to #1, and #1 puts "hello" to the screen.
Then I pass "world" to #3, and #3 puts "world" to the screen.
Thanks!
UPDATE:
Answered my own question. Found this awesome tut: http://rubylearning.com/satishtalim/ruby_threads.html and resource here: http://www.ruby-doc.org/core/classes/Thread.html#M000826.
puts Thread.main
x = Thread.new { loop { puts 'x'; puts gets; Thread.stop } }
y = Thread.new { loop { puts 'y'; puts gets; Thread.stop } }
z = Thread.new { loop { puts 'z'; puts gets; Thread.stop } }
while x.status != "sleep" and y.status != "sleep" and z.status != "sleep"
  sleep(1)
end
Thread.list.each { |thr| p thr }
x.run
x.join
Thank you for all the help, guys! It helped clarify my thinking.
I assume that you mean that you want multiple bits of Ruby code running concurrently. You can do it the hard way using Ruby threads (which have their own gotchas) or you can use the job control facilities of your OS. If you're using something UNIX-y, you can just put the code for each daemon in separate .rb files and run them at the same time.
E.g.,
# ruby daemon1.rb &
# ruby daemon2.rb &
There are many ways to "handle input and output" in a Ruby program. Pipes, sockets, etc. Since you asked about daemons, I assume that you mean network I/O. See Net::HTTP.
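For purely local instances, plain pipes also work. A minimal sketch (not production code) in which the parent can address instance #1, #2 or #3 individually:

# Spawn three children, each reading lines from its own pipe.
pipes = (1..3).map do |i|
  rd, wr = IO.pipe
  fork do
    wr.close
    rd.each_line { |line| puts "##{i}: #{line}" }
  end
  rd.close
  wr
end

pipes[0].puts "hello" # instance #1 prints "#1: hello"
pipes[2].puts "world" # instance #3 prints "#3: world"
pipes.each(&:close)   # EOF lets the children finish
Process.waitall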
Ignoring what you think will happen with multiple daemons all fighting over STDIN at the same time:
(1..3).map{ Thread.new{ loop{ puts gets } } }.each(&:join)
This will create three threads that loop indefinitely, asking for input and then outputting it. Each thread is "joined", preventing the main program from exiting until each thread is complete (which it never will be).
You could try using the multi_daemons gem, which has the capability to run multiple daemons and control them.
# this is server.rb
proc_code = Proc.new do
  loop do
    sleep 5
  end
end

scheduler = MultiDaemons::Daemon.new('scripts/scheduler', name: 'scheduler', type: :script, options: {})
looper = MultiDaemons::Daemon.new(proc_code, name: 'looper', type: :proc, options: {})
MultiDaemons.runner([scheduler, looper], { force_kill_timeout: 60 })
To start and stop:
ruby server.rb start
ruby server.rb stop

Exposing console apps to the web with Ruby

I'm looking to expose an interactive command line program via JSON or another RPC-style service using Ruby. I've found a couple of tricks to do this, but I'm missing something when redirecting the output and input.
One method, at least on Linux, is to redirect stdin and stdout to a file, then read and write to that file asynchronously. Another method I've been trying after googling around is to use open4. Here is the code I wrote so far, but it gets stuck after reading a few lines from standard output.
require "open4"
include Open4
status = popen4("./srcds_run -console -game tf +map ctf_2fort -maxplayers 6") do |pid, stdin, stdout, stderr|
  puts "PID #{pid}"
  lines = ""
  while (line = stdout.gets)
    lines += line
    puts line
  end
  while (line = stderr.gets)
    lines += line
    puts line
  end
end
Any help on this or some insight would be appreciated!
What I would recommend is using Xinetd (or similar) to run the command on a socket and then using the Ruby network code. One of the problems you've already run into in your code here is that your two while loops run sequentially: stderr is only read after stdout reaches EOF, which can block if the subprocess fills the stderr pipe first.
Another trick you might try is to redirect stderr to stdout in your command, so that your program only has to read stdout. Something like this:
popen4("./srcds_run -console -game tf +map ctf_2fort -maxplayers 6 2>&1")
The other benefit of this is that you get all the messages/errors in the order they happen during the program run.
EDIT
You should consider integrating with AnyTerm. You can then either expose AnyTerm directly, e.g. via Apache mod_proxy, or have your Rails controller act as a reverse proxy (handling authentication/session validation, then replaying controller.request minus any cookies to localhost:<AnyTerm-daemon-port>, and sending back as a response whatever AnyTerm replies with).
class ConsoleController < ApplicationController
  # AnyTerm speaks via HTTP POST only
  def update
    # validate session
    ...
    # forward request to AnyTerm
    response = Net::HTTP.post_form(URI.parse("http://localhost:#{AnyTermPort}/"), request.params)
    headers['Content-Type'] = response['Content-Type']
    render_text response.body, response.status
  end
end
Otherwise, you'd need to use IO.select or IO#read_nonblock to know when data is available to be read (from either network or subprocess) so you don't deadlock. See this too. Also check that either your Rails is used in a multi-threaded environment or that your Ruby version is not affected by this IO.select bug.
You can start with something along these lines:
status = POpen4::popen4("ping localhost") do |stdout, stderr, stdin, pid|
  puts "PID #{pid}"
  # our buffers
  stdout_lines = ""
  stderr_lines = ""
  begin
    loop do
      # check whether stdout, stderr or both are
      # ready to be read from without blocking
      IO.select([stdout, stderr]).flatten.compact.each { |io|
        # stdout, if ready, goes to stdout_lines
        stdout_lines += io.readpartial(1024) if io.fileno == stdout.fileno
        # stderr, if ready, goes to stderr_lines
        stderr_lines += io.readpartial(1024) if io.fileno == stderr.fileno
      }
      break if stdout.closed? && stderr.closed?
      # if we accumulated any complete lines (\n-terminated)
      # in either stdout/err_lines, output them now
      stdout_lines.sub!(/.*\n/m) { puts $&; '' }
      stderr_lines.sub!(/.*\n/m) { puts $&; '' }
    end
  rescue EOFError
    puts "Done"
  end
end
To also handle stdin, change to:
IO.select([stdout, stderr], [stdin]).flatten.compact.each { |io|
  # program ready to get stdin? do we have anything for it?
  if io.fileno == stdin.fileno && <got data from client?>
    <write a small chunk from client to stdin>
  end
  # stdout, if ready, goes to stdout_lines
