I am currently using ARGF.gets to capture user input from the command line. I want to allow Ctrl-D to terminate the script, but I don't know how to do this with Signal.trap or through error handling. I tried to find a list of trap codes for something like Ctrl-D but couldn't find what I was looking for. Likewise, rescuing Exception doesn't work because Ctrl-D doesn't raise an exception. Is there a trap code for Ctrl-D, or any other way to detect it?
For example...
I am currently able to detect Ctrl-C by trapping...
# Trap ^C
Signal.trap("INT") {
# Do something
exit
}
or error handling...
def get_input
  input = ARGF.gets
  input.strip!
rescue SystemExit, Interrupt => e
  # If we get here, Ctrl-C was encountered
end
However, I haven't been able to trap or detect Ctrl-D.
ARGF is just a special case of a stream, and Ctrl-D simply marks the end of input. With this in mind, use the ARGF.eof? method (see its documentation).
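For example, a minimal sketch of an input loop that stops on Ctrl-D might look like this (gets returns nil once end of input is reached, and ARGF.eof? can be checked explicitly before reading):

# Minimal sketch: ARGF.gets returns nil once Ctrl-D (end of input) is pressed.
loop do
  print "> "
  line = ARGF.gets
  break if line.nil? # Ctrl-D / EOF reached
  puts line.strip
end
puts "Got EOF, exiting."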
I am unsure of your use case, but I am assuming you intend to do something before the script exits. If so, your best bet is probably a bit easier than signal trapping: the Kernel module offers an at_exit method whose block is executed just before the program actually exits.
Usage: (from Kernel#at_exit Docs)
def do_at_exit(str1)
  at_exit { print str1 }
end
at_exit { puts "cruel world" }
do_at_exit("goodbye ")
exit
produces:
goodbye cruel world
As you can see, you can define multiple handlers, and they are executed in reverse order when the program exits. Since Kernel is included in Object, you can handle Object-specific cases as well, like:
class People
  at_exit { puts "The #{self.name} have left" }
end
exit
# The People have left
or even on instances
p = People.new
p.send(:at_exit, &->{puts "We are leaving"})
# We are leaving
# The People have left
Additionally for more specific Object based implementations you can take a look at ObjectSpace.define_finalizer.
example of usage:
class Person
  def self.finalize(name)
    proc { puts "Goodbye Cruel World -#{name}" }
  end

  def initialize(name)
    @name = name
    ObjectSpace.define_finalizer(self, self.class.finalize(@name))
  end
end
Usage:
p = Person.new("engineersmnky")
exit
# Goodbye Cruel World -engineersmnky
This may not be exactly what you want, as it also fires when an Object is garbage collected (not great for ephemeral objects), but if you have objects that should exist throughout the entire application it can still be used much like an at_exit. Example:
# requiring WeakRef to allow garbage collection
# See: https://ruby-doc.org/stdlib-2.3.3/libdoc/weakref/rdoc/WeakRef.html
require 'weakref'
p1 = Person.new("Engineer")
p2 = Person.new("Engineer's Monkey")
p2 = WeakRef.new(p2)
GC.start # just for this example
# Goodbye Cruel World -Engineer's Monkey
#=> nil
p2
#=> WeakRef::RefError: Invalid Reference - probably recycled
exit
# Goodbye Cruel World -Engineer
As you can see the defined finalizer for p2 fired because the Person was gc'd but the program has not exited yet. p1's finalizer waited until exit to fire because it retained its reference throughout the application.
Related
How can I check whether current script in Ruby is being closed?
Specifically, when the program is closing, I want to set @reconnect to false so that the web socket does not reconnect any more. I tried Signal.trap("TERM"), but it doesn't seem to work.
@reconnect is an instance variable inside the WebsocketClient class; I can't change it directly from my script outside the class.
class WebsocketClient
  def ws_closed(event)
    $logger.warn "WS CLOSED"
    Signal.trap("TERM") {
      @stop = true
      @reconnect = false
    }
    unless $reauth
      if @stop
        EM.stop
      elsif @reconnect
        $logger.warn "Reconnecting..."
        EM.add_timer(@reconnect_after) { connect! }
      end
    end
  end
end
at_exit {
  $logger.fatal "Application terminated. Shutting down gracefully..."
  # ...
  # Do some exit work...
  # ...
  exit!
}
Output on CTRL-C
01-02-2018 12:00:54.59 WARN > WS CLOSED
01-02-2018 12:00:54.595 WARN > Reconnecting...
01-02-2018 12:00:54.596 FATAL > Application terminated. Shutting down gracefully..
This is essentially the same situation as the Ctrl-D question above, and the same answer applies here: your best bet is probably easier than signal trapping. Kernel#at_exit registers a block that runs just before the program exits, and ObjectSpace.define_finalizer can handle per-object cleanup; see that answer for the full examples.
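As a rough sketch of how that could look for your case (this is an assumption, not your actual code: it presumes the client instance is reachable from the top level, e.g. via a global, and that WebsocketClient exposes, or is patched to expose, a writer for @reconnect):

at_exit do
  $logger.fatal "Application terminated. Shutting down gracefully..."
  # $ws_client and reconnect= are hypothetical names, for illustration only
  $ws_client.reconnect = false if $ws_client # stop any further reconnect attempts
  EM.stop if EM.reactor_running?             # shut the EventMachine reactor down
end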
Here is the code that I cannot interrupt with Ctrl-C:
orig_std_out = STDOUT.clone
orig_std_err = STDERR.clone
STDOUT.reopen('/dev/null', 'w')
STDERR.reopen('/dev/null', 'w')
name = cookbook_name(File.join(path, 'Metadata.rb'))
error = 0
begin
  ::Chef::Knife.run(['cookbook', 'site', 'show', "#{name}"])
rescue SystemExit
  error = 1
end
# ...
In my understanding this behaviour would be reasonable if I were rescuing Exception, but here I am only catching siblings that merely share Exception as their common parent.
I have already tried rescuing Interrupt and SignalException explicitly.
EDIT1: In the hope of clarifying my question, I added the following code, which I tried:
begin
  ::Chef::Knife.run(['cookbook', 'site', 'show', "#{name}"])
rescue SystemExit => e
  msg1 = e.message
  error = 1
rescue Interrupt
  msg2 = "interrupted"
end
In both cases (the SystemExit raised by Knife.run and the one triggered by Ctrl-C), e.message returns "exit". Not only does Ctrl-C raise a SystemExit where I expect an Interrupt, the error message is the same as well.
I guess I have a major misunderstanding of how Ruby works here, since I am not very familiar with it.
EDIT2: Further testing revealed that some Ctrl-C interrupts are rescued by rescue Interrupt. Is it possible that the command ::Chef::Knife.run(['cookbook', 'site', 'show', "#{name}"]), which takes about 3-5 seconds to run, creates some kind of subprocess that responds to Ctrl-C but always closes with a SystemExit, and that rescue Interrupt only works when the interrupt arrives at a moment when this subprocess is not running? If so, how can I interrupt the whole program?
EDIT3: I initially wanted to attach all the methods that get called by Knife.run, but that would have been too many lines of code, although I think my guess that a subcommand is executed was right. The chef gem code can be found here. The following excerpt is the part I consider problematic:
rescue Exception => e
  raise if raise_exception || Chef::Config[:verbosity] == 2
  humanize_exception(e)
  exit 100
end
Which leads to the question: How can I catch a Ctrl-C which is already rescued by a subcommand?
I have done gem install chef. Now I am trying another solution, replacing only run_with_pretty_exceptions, but I don't know which require to put in the script. I tried this:
require 'chef'
$:.unshift('Users/b/.rvm/gems/ruby-2.3.3/gems/chef-13-6-4/lib')
require 'chef/knife'
But then:
$ ruby chef_knife.rb
WARNING: No knife configuration file found
ERROR: Error connecting to https://supermarket.chef.io/api/v1/cookbooks/xyz, retry 1/5
...
So, without the whole infrastructure, I can't test the following solution. The idea is that in Ruby you can reopen an existing class and replace a method defined elsewhere. I'll have to leave it to you to check:
# necessary require of chef and knife ...
class Chef::Knife # reopen the Knife class and replace this method
  def run_with_pretty_exceptions(raise_exception = false)
    unless respond_to?(:run)
      ui.error "You need to add a #run method to your knife command before you can use it"
    end
    enforce_path_sanity
    maybe_setup_fips
    Chef::LocalMode.with_server_connectivity do
      run
    end
  rescue Exception => e
    raise if e.class == Interrupt # <---------- added ********************
    raise if raise_exception || Chef::Config[:verbosity] == 2
    humanize_exception(e)
    exit 100
  end
end
name = cookbook_name(File.join(path, 'Metadata.rb'))
error = 0
begin
  ::Chef::Knife.run(['cookbook', 'site', 'show', "#{name}"])
rescue SystemExit => e
  puts "in rescue SystemExit e=#{e.inspect}"
  error = 1
rescue Interrupt
  puts 'in rescue Interrupt'
end
The added line raise if e.class == Interrupt re-raises the exception when it is an Interrupt.
Normally I run ruby -w to display diagnostics, which looks like this:
$ ruby -w ck.rb
ck.rb:9: warning: method redefined; discarding old run_with_pretty_exceptions
ck.rb:4: warning: previous definition of run_with_pretty_exceptions was here
Unfortunately there are so many uninitialized-variable and circular-require warnings in this gem that this option produces unmanageable output.
The drawback of this solution is that you have to document this change, and whenever a new Chef release comes out, somebody has to verify whether the code of run_with_pretty_exceptions has changed.
Please give me feedback.
===== UPDATE =====
There is a less intrusive solution, which consists of defining an exit method in Chef::Knife.
When you see exit 100, i.e. a message without an explicit receiver, the implicit receiver is self; it is equivalent to self.exit 100. In our case, self is the object created by instance = subcommand_class.new(args), which is also the receiver in instance.run_with_pretty_exceptions.
When a message is sent to an object, the message search mechanism starts looking in the class of this object. If there is no method with this name in the class, the search mechanism looks in included modules, then the superclass, etc until it reaches Object, the default superclass of Chef::Knife. Here it finds Object#exit and executes it.
After defining an exit method in Chef::Knife, the message search mechanism, when it encounters exit 100 with an instance of Chef::Knife as implicit receiver, will first find this local method and execute it. By previously aliasing the original Object#exit, it is still possible to call the original Ruby method which initiates the termination of the Ruby script. This way the local exit method can decide either to call the original Object#exit or take other action.
Following is a complete example which demonstrates how it works.
# ***** Emulation of the gem *****
class Chef; end

class Chef::Knife
  def self.run(x)
    puts 'in original run'
    self.new.run_with_pretty_exceptions
  end

  def run_with_pretty_exceptions
    print 'Press Ctrl_C > '
    gets
  rescue Exception => e
    puts
    puts "in run_with_pretty...'s Exception e=#{e.inspect} #{e.class}"
    raise if false # if raise_exception || Chef::Config[:verbosity] == 2
    # humanize_exception(e)
    puts "now $!=#{$!.inspect}"
    puts "about to exit, self=#{self}"
    exit 100
  end
end
# ***** End of gem emulation *****
#----------------------------------------------------------------------
# ***** This is what you put into your script. *****
class Chef::Knife # reopen the Knife class and define one's own exit
  alias_method :object_exit, :exit

  def exit(p)
    puts "in my own exit with parameter #{p}, self=#{self}"
    puts "$!=#{$!.inspect}"
    if Interrupt === $!
      puts 'then about to raise Interrupt'
      raise # re-raise Interrupt
    else
      puts 'else about to call Object#exit'
      object_exit(p)
    end
  end
end
begin
  ::Chef::Knife.run([])
rescue SystemExit => e
  puts "in script's rescue SystemExit e=#{e.inspect}"
rescue Interrupt
  puts "in script's rescue Interrupt"
end
Execution. First test with Ctrl-C:
$ ruby -w simul_chef.rb
in original run
Press Ctrl_C > ^C
in run_with_pretty...'s Exception e=Interrupt Interrupt
now $!=Interrupt
about to exit, self=#<Chef::Knife:0x007fb2361c7038>
in my own exit with parameter 100, self=#<Chef::Knife:0x007fb2361c7038>
$!=Interrupt
then about to raise Interrupt
in script's rescue Interrupt
Second test with a hard interrupt.
In one terminal window:
$ ruby -w simul_chef.rb
in original run
Press Ctrl_C >
In another terminal window:
$ ps -ef
UID PID PPID C STIME TTY TIME CMD
0 1 0 0 Fri01PM ?? 0:52.65 /sbin/launchd
...
0 363 282 0 Fri01PM ttys000 0:00.02 login -pfl b /bin/bash -c exec -la bash /bin/bash
501 364 363 0 Fri01PM ttys000 0:00.95 -bash
501 3175 364 0 9:51PM ttys000 0:00.06 ruby -w simul_chef.rb
...
$ kill 3175
Back in the first terminal:
in run_with_pretty...'s Exception e=#<SignalException: SIGTERM> SignalException
now $!=#<SignalException: SIGTERM>
about to exit, self=#<Chef::Knife:0x007fc5a79d70a0>
in my own exit with parameter 100, self=#<Chef::Knife:0x007fc5a79d70a0>
$!=#<SignalException: SIGTERM>
else about to call Object#exit
in script's rescue SystemExit e=#<SystemExit: exit>
Considering the code you originally posted, all you have to do is insert the following at the beginning, after the necessary require:
class Chef::Knife # reopen the Knife class and define one's own exit
  alias_method :object_exit, :exit

  def exit(p)
    if Interrupt === $!
      raise # re-raise Interrupt
    else
      object_exit(p)
    end
  end
end
So there is no need to touch the original gem.
The following code shows how I am able to interrupt after all:
interrupted = false
trap("INT") { interrupted = true } # trap INT so we can force an exit after Knife.run
begin
  ::Chef::Knife.run(['cookbook', 'site', 'show', "#{name}"]) # exits on error and on interrupt with 100
  if interrupted
    exit
  end
rescue SystemExit => e
  if interrupted
    exit
  end
  error = 1
end
The drawback is still that I am not actually able to interrupt Knife.run itself, only to trap the interrupt and check afterwards whether one was triggered. I found no way to trap the interrupt and re-raise it at the same time, so that I could at least force an exit out of Knife.run and then exit manually.
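One variant that might be worth trying (untested here, and note that it bypasses rescue blocks and at_exit hooks entirely): calling exit! from the trap handler terminates the process immediately without raising SystemExit, so Knife's rescue Exception never gets a chance to swallow it.

# Sketch only: exit! skips rescue and at_exit, so use it only if no cleanup is needed.
trap("INT") do
  warn "Interrupted, aborting."
  exit!(130) # 130 is the conventional exit status for SIGINT
end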
Is it possible to create a "worker thread" so to speak that is on standby until it receives a function to execute asynchronously?
Is there a way to send a function like
def some_function
  puts "hi"
  # write something
  db.exec()
end
to an existing thread that's just sitting there waiting?
The idea is I'd like to pawn off some database writes to a thread which runs asynchronously.
I thought about creating a Queue instance, then have a thread do something like this:
$command = Queue.new
Thread.new do
  while trigger = $command.pop
    some_method
  end
end
$command.push("go!")
However this does not seem like a particularly good way to go about it. What is a better alternative?
The thread gem looks like it would suit your needs:
require 'thread/channel'

def some_method
  puts "hi"
end

channel = Thread.channel
Thread.new do
  while data = channel.receive
    some_method
  end
end

channel.send("go!")
channel.send("ruby!") # Any truthy message will do
channel.send(nil)     # Non-truthy message to terminate other thread
sleep(1)              # Give other thread time to do I/O
The channel uses ConditionVariable, which you could use yourself if you prefer.
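For reference, here is a rough sketch of the same idea built directly on Mutex and ConditionVariable, without the thread gem (the names are illustrative only):

# Hand-rolled worker: a mutex-protected array plus a condition variable.
jobs  = []
mutex = Mutex.new
cond  = ConditionVariable.new

worker = Thread.new do
  loop do
    job = mutex.synchronize do
      cond.wait(mutex) while jobs.empty? # sleep until something is queued
      jobs.shift
    end
    break if job.nil?                    # nil is the shutdown message
    job.call                             # run the work, e.g. a DB write
  end
end

enqueue = ->(job) { mutex.synchronize { jobs << job; cond.signal } }

enqueue.call(-> { puts "hi" })
enqueue.call(nil) # ask the worker to stop
worker.join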
I'm using RSpec to test the behavior of a simple REPL. The REPL just echoes back whatever the input was, unless the input was "exit", in which case it terminates the loop.
To avoid hanging the test runner, I'm running the REPL method inside a separate thread. To make sure that the code in the thread has executed before I write expectations about it, I've found it necessary to include a brief sleep call. If I remove it, the tests fail intermittently because the expectations are sometimes made before the code in the thread has run.
What is a good way to structure the code and spec such that I can make expectations about the REPL's behavior deterministically, without the need for the sleep hack?
Here is the REPL class and the spec:
class REPL
  def initialize(stdin = $stdin, stdout = $stdout)
    @stdin = stdin
    @stdout = stdout
  end

  def run
    @stdout.puts "Type exit to end the session."
    loop do
      @stdout.print "$ "
      input = @stdin.gets.to_s.chomp.strip
      break if input == "exit"
      @stdout.puts(input)
    end
  end
end
describe REPL do
  let(:stdin) { StringIO.new }
  let(:stdout) { StringIO.new }
  let!(:thread) { Thread.new { subject.run } }

  subject { described_class.new(stdin, stdout) }

  # Removing this before hook causes the examples to fail intermittently
  before { sleep 0.01 }
  after { thread.kill if thread.alive? }

  it "prints a message on how to end the session" do
    expect(stdout.string).to match(/end the session/)
  end

  it "prints a prompt for user input" do
    expect(stdout.string).to match(/\$ /)
  end

  it "echoes input" do
    stdin.puts("foo")
    stdin.rewind
    expect(stdout.string).to match(/foo/)
  end
end
Instead of letting :stdout be a StringIO, you could back it by a Queue. Then when you try to read from the queue, your tests will just wait until the REPL pushes something into the queue (aka. writes to stdout).
require 'thread'

class QueueIO
  def initialize
    @queue = Queue.new
  end

  def write(str)
    @queue.push(str)
  end

  def puts(str)
    write(str + "\n")
  end

  def read
    @queue.pop
  end
end
let(:stdout) { QueueIO.new }
I just wrote this up without trying it out, and it may not be robust enough for your needs, but it gets the point across. If you use a data structure to synchronize the two threads like this, then you don't need to sleep at all. Since this removes the non-determinism, you shouldn't see the intermittent failures.
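Usage in the spec could then look roughly like this (assuming QueueIO also responds to whatever else the REPL calls on stdout, such as print); read blocks until the REPL has actually written something, so no sleep is needed:

# Hypothetical spec usage built on the QueueIO sketch above.
it "prints a message on how to end the session" do
  expect(stdout.read).to match(/end the session/)
end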
I've used a running? guard for situations like this. You probably can't avoid the sleep entirely, but you can avoid unnecessary sleeps.
First, add a running? method to your REPL class.
class REPL
  ...
  def running?
    !!@running
  end

  def run
    @running = true
    loop do
      ...
      if input == 'exit'
        @running = false
        break
      end
      ...
    end
  end
end
Then, in your specs, sleep until the REPL is running:
describe REPL do
  ...
  before { sleep 0.01 until subject.running? }
  ...
end
I'm writing a delayed_job clone for DataMapper. I've got what I think is working and tested code, except for the thread in the worker process. I looked to delayed_job for how to test this, but there are no tests for that portion of the code. Below is the code I need to test. Ideas? (I'm using RSpec, BTW.)
def start
  say "*** Starting job worker #{@name}"
  t = Thread.new do
    loop do
      delay = Update.work_off(self) # this method is well tested
      break if $exit
      sleep delay
      break if $exit
    end
    clear_locks
  end
  trap('TERM') { terminate_with t }
  trap('INT')  { terminate_with t }
  trap('USR1') do
    say "Wakeup Signal Caught"
    t.run
  end
end
see also this thread
The best approach, I believe, is to stub the Thread.new method and make sure that any "complicated" stuff is in its own method which can be tested individually. Thus you would have something like this:
class Foo
  def start
    Thread.new do
      do_something
    end
  end

  def do_something
    loop do
      foo.bar(bar.foo)
    end
  end
end
Then you would test like this:
describe Foo do
  it "starts a thread running do_something" do
    f = Foo.new
    expect(Thread).to receive(:new).and_yield
    expect(f).to receive(:do_something)
    f.start
  end

  it "do_something loops and calls foo.bar with bar.foo" do
    f = Foo.new
    expect(f).to receive(:loop).and_yield # for multiple yields: receive(:loop).and_yield.and_yield...
    expect(foo).to receive(:bar).with(bar.foo)
    f.do_something
  end
end
This way you don't have to hack around so much to get the desired result.
You could start the worker as a subprocess when testing, waiting for it to fully start, and then check the output / send signals to it.
I suspect you can pick up quite a few concrete testing ideas in this area from the Unicorn project.
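A rough sketch of that approach (the worker.rb entry point and its startup banner are assumptions based on the code above, not part of your project):

# Hypothetical integration-style spec: run the worker as a child process,
# wait for its startup line, then signal it and check that it exits cleanly.
require 'timeout'

it "shuts down cleanly on TERM" do
  io = IO.popen(["ruby", "worker.rb"]) # assumed entry point
  Timeout.timeout(5) do
    line = io.gets
    raise "worker did not start: #{line.inspect}" unless line =~ /Starting job worker/
  end
  Process.kill("TERM", io.pid)
  Process.wait(io.pid)
  expect($?.success?).to be true
end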
It's impossible to test threads completely. The best you can do is to use mocks, something like:
object.should_receive(:trap).with('TERM').and_yield
object.start
How about just having the thread yield right in your test?
Thread.stub(:new).and_yield
start
# assertions...