I have a Ruby script that needs to run about once a second. I am using it to keep track of modifications to files in a directory, and I want the script to track updates in "live" time.
Basically, I want my script to do the same kind of thing as running "top" in a Unix shell, where the screen is updated every second or so. Is there an equivalent in Ruby to JavaScript's setInterval?
There are a few ways to do this.
The quick-and-dirty versions:
shell (kornish):
while :; do
  my_ruby_script.rb
  sleep 1
done
watch(1):
shell$ watch -n 1 my_ruby_script.rb
This will run your script every second and keep the output of the most recent run displayed in your terminal.
in ruby:
loop do
  do_my_stuff
  sleep 1
end
These all suffer from the same issue: if the actual script/function takes time to run, the interval between iterations stretches beyond a second.
Here is a Ruby function that will call the given block (almost) exactly every second, as long as the block doesn't take longer than a second to run:
def secondly_loop
  last = Time.now
  loop do
    yield
    now = Time.now
    _next = [last + 1, now].max
    sleep(_next - now)
    last = _next
  end
end
Use it like this:
secondly_loop { my_function }
You may find the whenever gem interesting. It lets you code repeating tasks this way:
every 1.second do
  # your task
end
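Note that whenever generates crontab entries, and cron's minimum granularity is one minute, so for a true once-a-second loop you would still need one of the in-process approaches above. A hypothetical config/schedule.rb sketch (the script path is an assumption):

# config/schedule.rb
every 1.minute do
  command "ruby /path/to/my_ruby_script.rb"
end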
As stated in another answer, rb-inotify is well suited to this sort of thing. If you don't want to use it, then a simple approach is to use threads:
a = Thread.new { loop { some_method; Thread.stop } }
b = Thread.new { loop { sleep 1; break unless a.alive?; a.run } }
To stop polling, use a.kill or make sure that some_method kills its own thread with Thread.kill when some condition is met.
Using two threads like this ensures that some_method runs at least every second, regardless of the length of the operation, without having to do any time checking yourself (within the granularity of the thread scheduling, of course).
You might want to consider using something like rb-inotify to get notifications of changes to files. This way you can avoid sleep and keep the "live" feeling.
There is some useful information at the "Efficient Filesystem Handling" section of the Guard Gem documentation: https://github.com/guard/guard#efficient-filesystem-handling
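A minimal rb-inotify sketch (Linux only; the watched directory path is hypothetical):

require 'rb-inotify'

notifier = INotify::Notifier.new
# Fire the block whenever a file in the directory is modified or created
notifier.watch("/path/to/watched/dir", :modify, :create) do |event|
  puts "#{event.name} changed"
end
notifier.run  # blocks, dispatching events as they arrive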
Or you could use TimerTask from the Concurrent Ruby gem
require 'concurrent'

timer_task = Concurrent::TimerTask.new(execution_interval: 1) do |task|
  task.execution_interval.times { print 'Boom! ' }
  print "\n"
  task.execution_interval += 1
  if task.execution_interval > 5
    puts 'Stopping...'
    task.shutdown
  end
end
timer_task.execute
Code adapted from the Concurrent Ruby TimerTask documentation.
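For the once-a-second use case, a simpler fixed-interval sketch with the same API (check_files here is a hypothetical method):

require 'concurrent'

# Run the (hypothetical) check_files method every second in the background
timer = Concurrent::TimerTask.new(execution_interval: 1) { check_files }
timer.execute

sleep 10        # main thread does other work meanwhile
timer.shutdown  # stop the timer cleanly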
Related
I am developing a long-running program in Ruby. I am writing some integration tests for this. These tests need to kill or stop the program after starting it; otherwise the tests hang.
For example, with a file bin/runner
#!/usr/bin/env ruby
while true do
  puts "Hello World"
  sleep 10
end
The (integration) test would be:
class RunReflectorTest < TestCase
  test "it prints a welcome message over and over" do
    out, err = capture_subprocess_io do
      system "bin/runner"
    end
    assert_empty err
    assert_includes out, "Hello World"
  end
end
Only, obviously, this will not work; the test starts and never stops, because the system call never ends.
How should I tackle this? Is the problem in system itself, and would Kernel#spawn provide a solution? If so, how? Somehow the following leaves out empty:
class RunReflectorTest < TestCase
  test "it prints a welcome message over and over" do
    out, err = capture_subprocess_io do
      pid = spawn "bin/runner"
      sleep 2
      Process.kill "TERM", pid
    end
    assert_empty err
    assert_includes out, "Hello World"
  end
end
This direction also seems like it will cause a lot of timing issues (and slow tests). Ideally, a reader would follow the STDOUT stream, let the test pass as soon as the string is encountered, and then immediately kill the subprocess. I cannot find how to do this with Process.
Test Behavior, Not Language Features
First, what you're doing is a TDD anti-pattern. Tests should focus on behaviors of methods or objects, not on language features like loops. If you must test a loop, construct a test that checks for a useful behavior like "entering an invalid response results in a re-prompt." There's almost no utility in checking that a loop loops forever.
However, you might decide to test a long-running process by checking to see:
If it's still running after t time (a sketch of this check follows the list).
If it's performed at least i iterations.
If a loop exits properly given certain input or upon reaching a boundary condition.
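A minimal sketch of that first check, assuming the bin/runner script from the question (Process.waitpid with WNOHANG returns nil while the child is still alive):

pid = spawn("bin/runner", out: File::NULL)
sleep 1
# nil means the process has not exited yet
assert_nil Process.waitpid(pid, Process::WNOHANG)
# clean up: terminate and reap the child
Process.kill("TERM", pid)
Process.wait(pid)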
Use Timeouts or Signals to End Testing
Second, if you decide to do it anyway, you can just escape the block with Timeout::timeout. For example:
require 'timeout'
# Terminates block
Timeout::timeout(3) { `sleep 300` }
This is quick and easy. However, note that using timeout doesn't actually signal the process. If you run this a few times, you'll notice that sleep is still running multiple times as a system process.
It's better to signal the process when you want to exit with Process::kill, ensuring that you clean up after yourself. For example:
pid = spawn 'sleep 300'    # start the long-running process
Process::kill 'TERM', pid  # ask it to terminate
sleep 3                    # give it a moment to exit
Process::wait pid          # reap it so no zombie is left behind
Aside from resource issues, this is a better approach when you're spawning something stateful and don't want to pollute the independence of your tests. You should almost always kill long-running (or infinite) processes in your test teardown whenever you can.
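A minimal teardown sketch along those lines, assuming the test stored the child's pid in @pid (the variable name is my own):

def teardown
  return unless @pid
  Process.kill('TERM', @pid)
  Process.wait(@pid)
rescue Errno::ESRCH, Errno::ECHILD
  # the process already exited or was already reaped
end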
Ideally, a reader would follow the stream of STDOUT and let the test pass as soon as the string is encountered and then immediately kill the subprocess. I cannot find how to do this with Process.
You can redirect the stdout of a spawned process to any file descriptor by specifying the :out option:
pid = spawn(command, :out=>"/dev/null") # write mode
See the Kernel#spawn documentation for more examples of redirection.
With the answer from CodeGnome on how to use Timeout::timeout and the answer from andyconhin on how to redirect Process::spawn IO, I came up with two Minitest helpers that can be used as follows:
it "runs a deamon" do
wait_for(timeout: 2) do
wait_for_spawned_io(regexp: /Hello World/, command: ["bin/runner"])
end
end
The helpers are:
require 'timeout'

def wait_for(timeout: 1)
  Timeout::timeout(timeout) do
    yield
  end
rescue Timeout::Error
  flunk "Test did not pass within #{timeout} seconds"
end
require 'shellwords'

def wait_for_spawned_io(regexp: //, command: [])
  buffer = ""
  begin
    read_pipe, write_pipe = IO.pipe
    pid = Process.spawn(command.shelljoin, out: write_pipe, err: write_pipe)
    loop do
      buffer << read_pipe.readpartial(1000)
      break if regexp =~ buffer
    end
  ensure
    read_pipe.close
    write_pipe.close
    Process.kill("INT", pid)
  end
  buffer
end
These can be used in a test, which allows me to start a subprocess, capture the STDOUT, and pass as soon as it matches the test's regular expression; otherwise it waits until the timeout and flunks (fails the test).
The loop captures output and passes the test once it sees matching output. It uses an IO.pipe because that is most transparent for subprocesses (and their children) to write to.
I doubt this will work on Windows. And wait_for_spawned_io needs some cleaning up; it is doing slightly too much, IMO. Another problem is that the Process.kill('INT') might not reach children that are orphaned but still running after this test has run. I need to find a way to ensure the entire subtree of processes is killed.
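One way to cover that last point on POSIX systems (a sketch on my part, not part of the helpers above): spawn the child as a process-group leader and signal the whole group, so grandchildren receive the signal too.

# In wait_for_spawned_io, start the child in its own process group ...
pid = Process.spawn(command.shelljoin, out: write_pipe, err: write_pipe,
                    pgroup: true)
# ... and in the ensure block, signal the group (negative pid) instead:
Process.kill("INT", -Process.getpgid(pid))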
I have a Ruby script on a remote server that I'm running via Net::SSH from my local PC.
The remote script takes a few minutes to run and outputs its progress to stdout.
The problem I have is that the block in my exec command only gets called when the packet/chunk is full.
So I get the progress all in one hit about every minute.
Here are some cut-down examples that illustrate my problem:
Server Script:
(0..999).each do |i|
  puts i
  sleep 1
end
puts 1000
Local Script:
require 'net/ssh'

Net::SSH.start('ip.v.4.addr', 'user', :keys => ['my_key']) do |ssh|
  ssh.exec("ruby count_to_1000.rb") do |ch, stream, data|
    puts data if stream == :stdout
  end
  ssh.loop(1)
end
Is there any way from the remote script to force the sending of the packet/chunk?
Or is there a way to set a limit of, say, a second (or n bits) before it's flushed? (within Net::SSH)
Thanks for all your help!
Try flush:
http://www.ruby-doc.org/core-2.1.5/IO.html#method-i-flush
(0..999).each do |i|
  puts i
  STDOUT.flush
  sleep 1
end
Or sync:
http://www.ruby-doc.org/core-2.1.5/IO.html#method-i-sync
STDOUT.sync = true
(0..999).each do |i|
  puts i
  sleep 1
end
(Untested, btw. Maybe they need to be used on the client-side instead, or on some other IO stream. But those are the two methods that immediately come to mind.)
In my test setup this works as expected (tested with localhost). However, there might be some issues with the STDOUT flush.
You can try writing to STDOUT instead of using puts (I have heard that there is some difference that I don't really understand).
Thus, on your server you can use:
(0..999).each do |i|
  STDOUT.puts i
  sleep 1
end
STDOUT.puts 1000
# You could possibly also use "STDOUT.write 1000", but it will not append a newline like puts does.
If that does not work, then you can try to force-flush the STDOUT by using STDOUT.flush(). I believe the same can be achieved by writing an empty string to STDOUT, but I am not 1000% sure.
It might also happen that the exec command actually waits for the entire process to terminate for some reason (I was not able to figure this out from the docs). In that case, you won't be able to achieve what you want. Then you can consider setting up websockets, using DRb, or some other means to pass the data.
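If you go the DRb route, a rough sketch (the URIs, names, and port are hypothetical): the remote script pushes each progress update to a small service on the local PC, sidestepping stdout buffering entirely.

# On the local PC: expose an object that receives progress updates
require 'drb/drb'

class ProgressSink
  def report(i)
    puts i
  end
end

DRb.start_service('druby://0.0.0.0:8787', ProgressSink.new)
DRb.thread.join  # keep the service alive

# The server-side script would then do something like:
#   require 'drb/drb'
#   sink = DRbObject.new_with_uri('druby://local.pc.addr:8787')
#   (0..999).each { |i| sink.report(i); sleep 1 }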
I have an app with some specs written in minitest. As usual, I start them using rake.
Because I sometimes get random results, my specs can pass one time and fail another time.
In this case, I can keep the seed number and replay the run later, after fixing.
Because I have this kind of test (with a random result), I generally run rake many times, just to be sure that the app is okay.
I would like to know: is there a nice way to run rake multiple times (100 times, for example) and stop as soon as there is any failure or error?
I think you should think again about your test, not about how the tests are called. A test with a random result looks wrong to me.
What's the random factor in your test? Can you write a mock element for the random factor and repeat the test with different values for it? That way you get a "complete" test.
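As a sketch of what I mean (the method and values are hypothetical): pass the random value in as a parameter, so the test can pin it to whatever it needs and cover both branches deterministically.

gem 'test-unit'
require 'test/unit'

# Hypothetical: the logic that used to call rand(10) internally now
# receives the number as an argument.
def value_ok?(num)
  num < 5
end

class XComplete < Test::Unit::TestCase
  def test_boundary_values
    assert_true(value_ok?(4))   # highest passing value
    assert_false(value_ok?(5))  # lowest failing value
  end
end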
I created a dummy test with a random result to simulate your situation:
# store it as file 'testcase.rb'
gem 'test-unit'
require 'test/unit'

class X < Test::Unit::TestCase
  def test_1
    num = rand(10)
    assert_true(num < 5, "Value is #{num}")
  end
end
The following task calls the test 10 times and stops after the first failure:
TEST_REPETITION = 10

task :test do
  TEST_REPETITION.times do
    stdout = `ruby testcase.rb`
    if stdout =~ /\d+\) Failure/
      puts "Failure occurred"
      puts stdout
      exit
    else
      puts 'Tests ok'
    end
  end
end
For real usage I would adapt some parts (sketched below):
Instead of puts 'Tests ok', define a counter to see how often the test was successful.
Instead of puts stdout, you may want to store the result in a result file.
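A sketch of those adaptations (the result-file name is my own choice):

TEST_REPETITION = 100

task :test do
  passed = 0
  TEST_REPETITION.times do
    stdout = `ruby testcase.rb`
    if stdout =~ /\d+\) Failure/
      puts "Failure occurred after #{passed} successful runs"
      File.write('test_failure.log', stdout)  # keep the output for inspection
      exit
    end
    passed += 1
  end
  puts "All #{passed} runs passed"
end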
Here's the code:
while 1
  input = gets
  puts input
end
Here's what I want to do, but I have no idea how to do it:
I want to create multiple instances of the code to run in the background and be able to pass input to a specific instance.
Q1: How do I run multiple instances of the script in the background?
Q2: How do I refer to an individual instance of the script so I can pass input to it (Q3)?
Q3: The script uses gets to take input; how would I pass input into an individual script's gets?
e.g.
Let's say I'm running three instances of the code in the background and I refer to the instances as #1, #2, and #3 respectively.
I pass "hello" to #1, and #1 puts "hello" to the screen.
Then I pass "world" to #3, and #3 puts "world" to the screen.
Thanks!
UPDATE:
Answered my own question. Found this awesome tut: http://rubylearning.com/satishtalim/ruby_threads.html and resource here: http://www.ruby-doc.org/core/classes/Thread.html#M000826.
puts Thread.main
x = Thread.new { loop { puts 'x'; puts gets; Thread.stop } }
y = Thread.new { loop { puts 'y'; puts gets; Thread.stop } }
z = Thread.new { loop { puts 'z'; puts gets; Thread.stop } }
while x.status != "sleep" and y.status != "sleep" and z.status != "sleep"
  sleep(1)
end
Thread.list.each { |thr| p thr }
x.run
x.join
Thank you for all the help, guys! It helped clarify my thinking.
I assume that you mean that you want multiple bits of Ruby code running concurrently. You can do it the hard way using Ruby threads (which have their own gotchas) or you can use the job control facilities of your OS. If you're using something UNIX-y, you can just put the code for each daemon in separate .rb files and run them at the same time.
E.g.,
# ruby daemon1.rb &
# ruby daemon2.rb &
There are many ways to "handle input and output" in a Ruby program. Pipes, sockets, etc. Since you asked about daemons, I assume that you mean network I/O. See Net::HTTP.
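For the "pass input to a specific instance" part, a minimal pipe sketch (echo_loop.rb is a hypothetical copy of your script, adjusted to exit when gets returns nil at EOF): each child gets its own pipe as stdin, so writing to a given pipe reaches exactly one instance.

# echo_loop.rb (hypothetical):
#   while (input = gets)
#     puts input
#   end

r1, w1 = IO.pipe
r2, w2 = IO.pipe
pid1 = spawn("ruby echo_loop.rb", in: r1)  # instance #1
pid2 = spawn("ruby echo_loop.rb", in: r2)  # instance #2
r1.close
r2.close

w1.puts "hello"  # only instance #1 echoes this
w2.puts "world"  # only instance #2 echoes this

[w1, w2].each(&:close)
[pid1, pid2].each { |pid| Process.wait(pid) }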
Ignoring what you think will happen with multiple daemons all fighting over STDIN at the same time:
(1..3).map{ Thread.new{ loop{ puts gets } } }.each(&:join)
This will create three threads that loop indefinitely, asking for input and then outputting it. Each thread is "joined", preventing the main program from exiting until each thread is complete (which it never will be).
You could try the multi_daemons gem, which can run multiple daemons and control them.
# this is server.rb
proc_code = proc do
  loop do
    sleep 5
  end
end

scheduler = MultiDaemons::Daemon.new('scripts/scheduler', name: 'scheduler', type: :script, options: {})
looper = MultiDaemons::Daemon.new(proc_code, name: 'looper', type: :proc, options: {})

MultiDaemons.runner([scheduler, looper], { force_kill_timeout: 60 })
To start and stop
ruby server.rb start
ruby server.rb stop
I have a process that runs on cron every five minutes. Usually, it takes only a few seconds to run, but sometimes it takes several minutes. I want to ensure that only one version of this is running at a time.
I tried an obvious way...
File.open("/tmp/indexer_lock.tmp", 'w') do |f|
  exit unless f.flock(File::LOCK_EX)
end
...but it's not testing to see if it can get the lock; it's blocking until the lock is released.
Any idea what I'm missing? I'd rather not hack something using ps, but that's an alternative.
I know this is old, but for anyone interested, there's a non-blocking constant that you can pass to flock so that it returns instead of blocking.
File.new("/tmp/foo.lock").flock( File::LOCK_NB | File::LOCK_EX )
Update for slhck
flock returns 0 (which is truthy) if this process obtained the lock, and false otherwise. So to ensure just one process is running at a time, you just want to try to get the lock and exit if you weren't able to. It's as simple as putting an exit unless in front of the line of code I have above:
exit unless File.new("/tmp/foo.lock").flock( File::LOCK_NB | File::LOCK_EX )
Depending on your needs, this should work just fine and doesn't require creating another file anywhere.
exit unless DATA.flock(File::LOCK_NB | File::LOCK_EX)
# your script here
__END__
DO NOT REMOVE: required for the DATA object above.
Although this isn't directly answering your question, if I were you I'd probably write a daemon script (you could use http://daemons.rubyforge.org/).
You could have your indexer (assuming it's indexer.rb) run through a wrapper script named script/index, for example:
require 'rubygems'
require 'daemons'
Daemons.run('indexer.rb')
And your indexer can do almost the same thing, except you specify a sleep interval:
loop do
  # code executing your indexing
  sleep INDEXING_INTERVAL
end
This is how job processors in tandem with a queue server usually function.
You could create and delete a temporary file and check for the existence of this file.
Please check the answer to this question: one instance shell script.
There's a lockfile gem for exactly this situation. I've used it before and it's dead simple.
If you're using cron, it might be easier to do something like this in the shell script that cron calls:
#!/usr/local/bin/bash
#
if ps -C $PROGRAM_NAME &> /dev/null ; then
  : # program is already running; appropriate action can be performed here (kill it?)
else
  # program is not running; launch it
  $PROGRAM_NAME
fi
Here's a one-liner that should work at the top of any Ruby script:
exit unless File.new(__FILE__).tap { |f| f.autoclose = false }.flock(File::LOCK_NB | File::LOCK_EX)
There are two issues with the original code.
First, the reason it's blocking is that the call to #flock is missing File::LOCK_NB:
Don't block when locking. May be combined
with other lock options using logical or.
Second, if a File object is closed (whether at the end of an #open block as in the code above, via an explicit #close, or implicitly when the File is garbage-collected), the underlying file descriptor is closed and the lock is released. To prevent this, you can set #autoclose = false.
Ok, working off notes from @shodanex's pointer, here's what I have. I Rubied it up a little bit (FileUtils.touch turns out to be the Ruby analogue of touch).
require 'fileutils'

tmp_file = File.expand_path(File.dirname(__FILE__)) + "/indexer.lock"
if File.exist?(tmp_file)
  puts "quitting"
  exit
else
  FileUtils.touch(tmp_file)
end

.. do stuff ..

File.delete(tmp_file)
Can you not add File::LOCK_NB to your lock, to make it non-blocking (i.e. it fails if it can't get the lock)?
That would work in C, Perl, etc.
At a higher level, you might find the lock_method gem useful:
def the_method_my_cron_job_calls
  # something really expensive
end
lock_method :the_method_my_cron_job_calls
It uses lockfiles stored on the local filesystem (what was being discussed above) by default, but you can also configure remote lock storage:
LockMethod.config.storage = Redis.new([...]) # a remote RedisToGo instance, perhaps?
Also...
def the_method_my_cron_job_calls
  # something really expensive
end
lock_method :the_method_my_cron_job_calls, (60*60) # automatically expire lock after an hour