I'm trying to write a simple script that starts a MongoDB server in the background. Currently I use the Process.spawn method. It works, but I have to wait some time for mongod to become fully operational (the boot process is complete and the database is waiting for new connections).
def run!
  return if running?
  FileUtils.mkdir_p(MONGODB_DBPATH)

  command = "mongod --port #{port} --dbpath #{MONGODB_DBPATH} --nojournal"
  log_file = File.open(File.expand_path("log/test_mongod.log"), "w+")
  @pid = Process.spawn(command, out: log_file)

  # TODO wait for the connection (waiting for connections on port xxxx)
  sleep 2

  yield port if block_given?
end
Here is the full script: https://github.com/lucassus/mongo_browser/blob/master/spec/support/mongod.rb#L22
Is it somehow possible to remove this ugly, arbitrary sleep 2 from the code?
My first guess is to connect a pipe to the spawned process and wait until the "waiting for connections on port xxxx" message is written to it, but I don't know how to implement that.
Here is a pattern for waiting on some output from a child process:
def run_and_wait_for_this(regexp_to_wait_for, *cmd)
  rd, wr = IO.pipe
  pid = Process.spawn(*cmd, out: wr)

  # Reap the child and close the write end when it exits, so the
  # reader below sees EOF instead of blocking forever.
  pid_waiter = Thread.new { Process.wait(pid); wr.close }

  thr = Thread.new do
    buffer = ''
    until buffer =~ regexp_to_wait_for
      buffer << rd.readpartial(100)
    end
  end

  thr.join
rescue EOFError
  # The process exited before producing the expected output.
ensure
  rd.close
end
run_and_wait_for_this(/waiting for connections on port #{port}/,
                      'mongod', '--port', port.to_s,
                      '--dbpath', MONGODB_DBPATH, '--nojournal')
It blocks until the process flushes the expected output into the pipe.
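Alternatively, since the goal is literally to wait until mongod accepts connections, you can skip reading the child's output and poll the port instead. A minimal sketch (wait_for_port is a hypothetical helper, not part of your script):

require 'socket'

# Poll until something accepts TCP connections on the port, or give up.
def wait_for_port(port, timeout = 10)
  deadline = Time.now + timeout
  begin
    TCPSocket.new('127.0.0.1', port).close
  rescue Errno::ECONNREFUSED, Errno::ETIMEDOUT
    raise "mongod did not accept connections within #{timeout}s" if Time.now > deadline
    sleep 0.1
    retry
  end
end

Call it right after Process.spawn, in place of the sleep 2.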
I am having an issue with a Ruby server I am writing.
The server works fine until more than one client is attached; then it sends out the messages in a round-robin-like way, when I want all clients to get the message at the same time.
The server is supposed to grab any client that connects and then wait until I issue a command. The problem is that only one client gets the command; when I enter a command again, another client gets it, and so on.
SERVER
require 'socket'

mutex = Mutex.new
cv = ConditionVariable.new
server = TCPServer.open(2000)
$Comm = "test"

Thread.new {
  loop {
    Thread.start(server.accept) do |client|
      client.puts("Client accepted")
      mutex.synchronize {
        cv.wait(mutex)
        client.puts($Comm)
        client.close
      }
    end
  }
}

loop {
  system "clear" or system "cls"
  print("Enter Command\n")
  $Comm = gets()
  mutex.synchronize {
    cv.signal
  }
}
CLIENT
require 'socket' # Sockets are in standard library

hostname = 'localhost'
port = 2000

loop {
  begin
    s = TCPSocket.open(hostname, port)
    system "clear" or system "cls"
    while line = s.gets # Read lines from the socket
      puts line.chop    # And print with platform line terminator
    end
    s.close
  rescue
    next
  end
  sleep(0.5)
}
ConditionVariable#signal wakes up only one of the threads waiting on it, whereas #broadcast wakes up every thread that is waiting to be signaled.
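In your server that's a one-line change in the command loop; everything else stays the same:

loop {
  system "clear" or system "cls"
  print("Enter Command\n")
  $Comm = gets()
  mutex.synchronize {
    cv.broadcast # wake every client thread waiting on cv, not just one
  }
}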
I have the following Ruby script, which kicks off 12 processes that smoke test my application. Some of them can be long-running, up to 8 minutes.
pids = []
pids = uris.map do |uri|
  print_start(uri)
  command = get_http_tests_command("Smoke")
  update_http_tests_config(uri)
  pid = Process.spawn command
  Process.detach pid
  pid
end

print("smoke tests in progress => #{pids} => #{uris}")

statuses = pids.map do |pid|
  puts "waiting for #{pid}"
  Process.wait(pid)
  $?
end

print("smoke tests finished")
When a process finishes, regardless of success, I intermittently get an error on the Process.wait(pid) line. The error is as follows:
#<Errno::EINVAL: Invalid argument>
>> SmokeTest.rb:169:in "wait"
>> SmokeTest.rb:169:in "block in run_single"
>> SmokeTest.rb:167:in "map"
>> SmokeTest.rb:167:in "run_single"
I'm not sure what's going on here; any help would be much appreciated. It fails intermittently, about 1 in 3 times, when the run stays under 300 seconds, and it always fails if it goes over 300 seconds.
My guess is that this exception is thrown if you try to wait on a process which is already dead...
If that is the problem, I would suggest using threads to wait for all processes concurrently:
statuses = []

threads = pids.map do |pid|
  Thread.new do
    puts "waiting for #{pid}"
    _, status = Process.wait2(pid)
    puts "#{pid} ended with status #{status}"
    statuses << status
  end
end

threads.each(&:join)
print("smoke tests finished")
I wrote this code to run my process as a daemon. The goal is to keep the process running even if its parent is closed. Now I would like to be able to write something to its stdin. What should I do? Here's the code.
def daemonize(cmd, options = {})
  rd, wr = IO.pipe

  p1 = Process.fork {
    Process.setsid

    p2 = Process.fork {
      $0 = cmd # Name of the command

      pidfile = File.new(options[:pid_file], 'w')
      pidfile.chmod(0644)
      pidfile.puts "#{Process.pid}"
      pidfile.close

      Dir.chdir(ENV["PWD"] = options[:working_dir].to_s) if options[:working_dir]
      File.umask 0000

      STDIN.reopen '/dev/null'
      STDOUT.reopen '/dev/null', 'a'
      STDERR.reopen STDOUT

      Signal.trap("USR1") do
        Console.show 'I just received a USR1', 'warning'
      end

      ::Kernel.exec(*Shellwords.shellwords(cmd)) # Replace this grandchild process with the command
      exit
    }

    raise 'Fork failed!' if p2 == -1
    Process.detach(p2) # divorce p2 from its parent (p1)
    rd.close
    wr.write p2
    wr.close
    exit
  }

  raise 'Fork failed!' if p1 == -1
  Process.detach(p1) # divorce p1 from its parent (the shell)
  wr.close
  daemon_id = rd.read.to_i
  rd.close

  daemon_id
end
Is there a way to reopen stdin on something like a pipe, instead of /dev/null, that I would be able to write to?
How about a FIFO (a named pipe)? On Linux, you can create one with the mkfifo command:
$ mkfifo /tmp/mypipe
Then you can reopen STDIN on that pipe:
STDIN.reopen '/tmp/mypipe'
# Do read-y things
Anything else can write to that pipe:
$ echo "roflcopter" > /tmp/mypipe
allowing that data to be read by the ruby process.
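If you'd rather create the pipe from Ruby instead of shelling out, File.mkfifo is available in Ruby 2.3 and later:

File.mkfifo('/tmp/mypipe', 0600) unless File.exist?('/tmp/mypipe')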
(Update) Caveat with blocking
Opening a FIFO blocks until both ends are in use (a read blocks until there's a writer, and vice versa), so it's best handled with multiple threads. One thread should do the reading, passing the data to a queue, and another should handle that input. Here's an example of that situation:
require 'thread'

input = Queue.new
threads = []

# Read from the fifo and add to an input queue (glorified array)
threads << Thread.new(input) do |ip|
  STDIN.reopen '/tmp/mypipe'
  loop do
    if line = STDIN.gets
      puts "Read: #{line}"
      ip.push line
    end
  end
end

# Handle the input passed by the reader thread
threads << Thread.new(input) do |ip|
  loop do
    puts "Output: #{ip.pop}"
  end
end

threads.map(&:join)
Given that I wish to test non-blocking reads from a long command, I created the following script, saved it as long, made it executable with chmod 755, and placed it in my path (saved as ~/bin/long where ~/bin is in my path).
I am on a *nix variant with ruby 1.9.2p290 (2011-07-09 revision 32553) [x86_64-darwin11.0.0] compiled with RVM defaults. I do not use Windows, and am therefore unsure if the test script will work for you if you do.
#!/usr/bin/env ruby
3.times do
  STDOUT.puts 'message on stdout'
  STDERR.puts 'message on stderr'
  sleep 1
end
Why does long_err produce each STDERR message as it is printed by "long"
def long_err(bash_cmd = 'long', maxlen = 4096)
  stdin, stdout, stderr = Open3.popen3(bash_cmd)
  begin
    begin
      puts 'err -> ' + stderr.read_nonblock(maxlen)
    end while true
  rescue IO::WaitReadable
    IO.select([stderr])
    retry
  rescue EOFError
    puts 'EOF'
  end
end
while long_out remains blocked until all STDOUT messages are printed?
def long_out(bash_cmd = 'long', maxlen = 4096)
  stdin, stdout, stderr = Open3.popen3(bash_cmd)
  begin
    begin
      puts 'out -> ' + stdout.read_nonblock(maxlen)
    end while true
  rescue IO::WaitReadable
    IO.select([stdout])
    retry
  rescue EOFError
    puts 'EOF'
  end
end
I assume you will require 'open3' before testing either function.
Why is IO::WaitReadable raised differently for STDOUT than for STDERR?
Workarounds using other ways to start subprocesses are also appreciated, if you have them.
On most operating systems STDOUT is buffered, while STDERR is not. What popen3 does is basically open a pipe between Ruby and the executable you launch.
Any output that is in buffered mode is not sent through this pipe until either:
The buffer is filled (thereby forcing a flush).
The sending application exits (EOF is reached, forcing a flush).
The stream is explicitly flushed.
The reason STDERR is not buffered is that it's usually considered important for error messages to appear instantly, rather than be held back in a buffer for the sake of efficiency.
So, knowing this, you can emulate STDERR behaviour with STDOUT like this:
#!/usr/bin/env ruby
3.times do
  STDOUT.puts 'message on stdout'
  STDOUT.flush
  STDERR.puts 'message on stderr'
  sleep 1
end
and you will see the difference.
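You can also turn off Ruby's output buffering for the stream once, instead of flushing after every write; each write then goes straight to the pipe:

#!/usr/bin/env ruby
STDOUT.sync = true # flush every write immediately, like STDERR
3.times do
  STDOUT.puts 'message on stdout'
  STDERR.puts 'message on stderr'
  sleep 1
end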
You might also want to check "Understanding Ruby and OS I/O buffering".
Here's the best I've got so far for starting subprocesses. I launch a lot of network commands so I needed a way to time them out if they take too long to come back. This should be handy in any situation where you want to remain in control of your execution path.
I adapted this from a Gist, adding code to test the exit status of the command for 3 outcomes:
Successful completion (exit status 0)
Error completion (exit status is non-zero) - raises an exception
Command timed out and was killed - raises an exception
Also fixed a race condition, simplified parameters, added a few more comments, and added debug code to help me understand what was happening with exits and signals.
Call the function like this:
output = run_with_timeout("command that might time out", 15)
output will contain the combined STDOUT and STDERR of the command if it completes successfully. If the command doesn't complete within 15 seconds it will be killed and an exception raised.
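Both failure modes are raised with a plain raise, so they surface as RuntimeError; a call that should log the failure and carry on would look something like this sketch:

begin
  output = run_with_timeout("command that might time out", 15)
  puts output
rescue RuntimeError => e
  puts "smoke test failed: #{e.message}"
end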
Here's the function (you'll need require 'open3', plus the 2 constants defined at the top):
require 'open3'

DEBUG = false      # change to true for some debugging info
BUFFER_SIZE = 4096 # in bytes, this should be fine for many applications

def run_with_timeout(command, timeout)
  output = ''
  tick = 1
  begin
    # Start task in another thread, which spawns a process
    stdin, stderrout, thread = Open3.popen2e(command)
    # Get the pid of the spawned process
    pid = thread[:pid]
    start = Time.now

    while (Time.now - start) < timeout and thread.alive?
      # Wait up to `tick' seconds for output/error data
      Kernel.select([stderrout], nil, nil, tick)
      # Try to read the data
      begin
        output << stderrout.read_nonblock(BUFFER_SIZE)
        puts "we read some data..." if DEBUG
      rescue IO::WaitReadable
        # No data was ready to be read during the `tick', which is fine
        print "." # give feedback each tick that we're waiting
      rescue EOFError
        # Command has completed, not really an error...
        puts "got EOF." if DEBUG
        # Wait briefly for the thread to exit...
        # We don't want to kill the process if it's about to exit on its
        # own. We decide success or failure based on whether the process
        # completes successfully.
        sleep 1
        break
      end
    end

    if thread.alive?
      # The timeout has been reached and the process is still running, so
      # we need to kill the process, because killing the thread leaves
      # the process alive but detached.
      Process.kill("TERM", pid)
    end
  ensure
    stdin.close if stdin
    stderrout.close if stderrout
  end

  status = thread.value # returns Process::Status when the process ends
  if DEBUG
    puts "thread.alive?: #{thread.alive?}"
    puts "status: #{status}"
    puts "status.class: #{status.class}"
    puts "status.exited?: #{status.exited?}"
    puts "status.exitstatus: #{status.exitstatus}"
    puts "status.signaled?: #{status.signaled?}"
    puts "status.termsig: #{status.termsig}"
    puts "status.stopsig: #{status.stopsig}"
    puts "status.stopped?: #{status.stopped?}"
    puts "status.success?: #{status.success?}"
  end

  # See how the process ended: .success? => true, false, or nil (nil if exited? is not true)
  if status.success? == true     # process exited (0)
    return output
  elsif status.success? == false # process exited (non-zero)
    raise "command `#{command}' returned non-zero exit status (#{status.exitstatus}), see below output\n#{output}"
  elsif status.signaled?         # we killed the process (timeout reached)
    raise "shell command `#{command}' timed out and was killed (timeout = #{timeout}s): #{status}"
  else
    raise "process didn't exit and wasn't signaled. We shouldn't get to here."
  end
end
Hope this is useful.
I would like to make a Ruby daemon that shuts down gracefully in response to a kill command.
I would like to add a signal trap that waits until #code that could take some time to run finishes before shutting down. How would I add that to something like this:
pid = fork do
  pid_file = "/tmp/pids/daemon6.pid"
  File.open(pid_file, 'w') { |f| f.write(Process.pid) }

  loop do
    begin
      #code that could take some time to run
    rescue Exception => e
      Notifier.deliver_daemon_rescued_notification(e)
    end
    sleep(10)
  end
end

Process.detach pid
Also, would it be better to have that in a separate script, like a separate kill script, instead of having it as part of the daemon code? Something monit or God would call to stop it?
I appreciate any suggestions.
You can catch Interrupt, like this:
pid = fork do
  begin
    loop do
      # do your thing
      sleep(10)
    end
  rescue Interrupt => e
    # clean up
  end
end

Process.detach(pid)
You can do the same with Signal.trap('INT') { ... }, but with sleep involved I think it's easier to catch an exception.
Update: this is a more traditional way of doing it, and it makes sure the loop always finishes a complete iteration before it stops:
pid = fork do
  stop = false
  Signal.trap('INT') { stop = true }

  until stop
    # do your thing
    sleep(10)
  end
end
The downside is that it will always do the sleep, so there will almost always be a delay between killing the process and it actually stopping. You can get around that by sleeping in bursts, or by combining the two variants (rescuing the Interrupt just around the sleep, for example); see the sketch below.
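A rough sketch of the burst-sleeping combination (the loop body is still whatever your daemon does): the trap sets the flag as before, and the sleep happens in 0.1-second slices so the loop notices the flag almost immediately:

pid = fork do
  stop = false
  Signal.trap('INT') { stop = true }

  until stop
    # do your thing
    100.times do # sleep up to 10 seconds, in 0.1-second slices
      break if stop
      sleep(0.1)
    end
  end
end

Process.detach(pid)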