I need to execute something when the user shuts down the script, or presses Ctrl + C in the terminal, in Ruby. How can I accomplish this?
For example:
def epicFunction()
puts("User closed down the script")
end
catch (user exited the script) do
epicFunction()
end
Use #at_exit Blocks
You can use Kernel#at_exit to register one or more blocks to run at exit. These execute however the program exits: an exception being raised, a call to Kernel#exit like exit 1, a signal such as SIGTERM that Ruby handles by terminating, or just a normal exit when the interpreter reaches the end of the script. The main cases where #at_exit handlers are not called are Kernel#exit!, which skips them by design, and a fatal signal like SIGKILL, which kills the process before the interpreter can run anything. Short of those, they are pretty much guaranteed to run, and since they don't rely on reaching a particular code path the way #exit does, or on trapping signals yourself, they offer a stronger guarantee of being called right before the interpreter exits. It might help to think of #at_exit somewhat like Bash's trap EXIT, even if the languages and implementations are fundamentally different.
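For example, here is a minimal sketch applied to the question's epicFunction (Ctrl+C works because Ruby turns SIGINT into an Interrupt exception, which then unwinds into a normal exit):
def epic_function
  puts "User closed down the script"
end

# Runs on normal exit, on `exit 1`, on an uncaught exception, and on Ctrl+C.
at_exit { epic_function }

sleep # keep the script alive so there's time to press Ctrl+C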
However, keep your #at_exit handlers as simple as you can. If an #at_exit handler is complicated and raises an error itself, due to a bug or a scope issue, that can be rather hard to debug. As a somewhat contrived example, consider:
at_exit { puts bar }
raise "foo"
$ ruby at_exit.rb
at_exit.rb:1:in `block in <main>': undefined local variable or method `bar' for main:Object (NameError)
at_exit { puts bar }
^^^
at_exit.rb:3:in `<main>': foo (RuntimeError)
Because the block is really just an anonymous closure, referencing bar raises NameError when the block runs at exit. So you end up with a NameError raised inside the Proc object, and then the expected RuntimeError from the raise that triggered the exit handling in the first place.
In general, #at_exit should be reserved for simple things like printing messages, cleaning up temporary files or directories, or other basic housekeeping tasks that (for whatever reason) you chose not to scope within a block or method, or that Ruby doesn't clean up automagically. There aren't many things that really require exit handlers, but if you want them, #at_exit is the way to go.
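As a sketch of that kind of housekeeping (the myapp- scratch directory here is hypothetical):
require 'tmpdir'
require 'fileutils'

scratch = Dir.mktmpdir("myapp-") # hypothetical scratch space

# Basic housekeeping: remove the directory on the way out if it still exists.
at_exit { FileUtils.remove_entry(scratch) if Dir.exist?(scratch) }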
Related
I have a pretty large Ruby (non-Rails) application that I'm developing. It's reasonably fast considering how large and complex it is (go Ruby!), but sometimes I fat-finger a method name and get a NoMethodError.
And usually when this happens, the application hangs for 20 to 30 seconds just to print out the backtrace.
Specifically, if I do something like this:
puts "about to crash!"
Array.new().inspekt # NoMethodError here
I see the "about to crash!" right away, and then for 20s or so nothing seems to happen before I finally get the NoMethodError and backtrace.
At first I thought it might be the "did you mean" gem, so I turned that off with --disable-did_you_mean on the command line. That removed the "did you mean" suggestions, but the backtrace was just as slow.
What's interesting is that this is only for NoMethodError.
If I cause some other exception, such as:
puts "about to crash!"
a = 3/0
Then I see the backtrace immediately.
And to make things even weirder, if I interrupt the process right after the "about to crash!" (such as with a Ctrl-C on unix), then I immediately get the NoMethodError and its backtrace. So it has the information - but Ruby is stuck trying to clean something up, perhaps something that only gets cleaned up on NoMethodError?
Info: ruby 2.7.0
OS: CentOS Linux release 7.5.1804
UPDATE, in response to the answers so far:
Everyone seems to be concerned with the backtrace and profiling the Ruby code.
Except the slowdown is NOT happening there. There are NO LINES OF RUBY CODE executed during the slowdown. All of the lines prior to this, "in the backtrace", have already executed, in a matter of a second or so. Then the system hangs, between the puts and the NoMethodError. There is no Ruby code in between to profile, so any profiler that looks at the code written in my script isn't going to help. The slowdown is something internal to Ruby and is not in my code, unless I'm terribly confused about what's happening.
To be very clear:
Line 10042: puts "HERE" # Happens at ~1s
Line 10043: Array.new().inspekt # Happens at ~20-30s
There is no code between those lines to profile. The 20-30s is not happening in any code before line 10042 executes, so profiling that will not help.
I do have other Fibers that are paused, but there is no code here that yields to them. Is it possible that there's some strange built-in yield that attempts to run other (paused) fibers when an exception is hit? I can't think of a reason you'd ever want that behavior, and many reasons why it would be catastrophic, but I also can't think of anything else that would cause this problem (and that is also killable with a Ctrl-C!).
I would try dumping the full backtrace in there to see what is actually happening:
begin
puts "about to crash!"
Array.new().inspekt
rescue => e
puts e.backtrace
raise # raise anyway
end
In my case I get 20 lines of backtrace with ruby 2.6.3 and irb. If that doesn't tell you anything interesting, I would then do the tedious work of measuring each step: modify each file in the backtrace to print the elapsed time at each stage. Debugging, yay!
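If you go down that route, a small helper like this saves some typing (probe is hypothetical, not from any library, just a sketch):
def probe(label)
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  yield
ensure
  elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
  warn format("%s took %.3fs", label, elapsed) # prints even if the block raises
end

probe("inspekt call") { Array.new.inspekt }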
The following simplified code (extracted from a rather big multi-threaded piece of software) very rarely results in a ThreadError: Attempt to unlock a mutex which is not locked:
begin
yield(mutex) if mutex.try_lock
ensure
mutex.unlock if mutex.owned?
end
on line 4.
That's in Ruby 2.1.5, which is EOL by now, but my understanding was that calling mutex.unlock is safe as long as the current thread owns the mutex, which is exactly what mutex.owned? checks. Would I be wrong in this assumption?
I believe that the yield(mutex) on line 2 never attempts to unlock the yielded mutex.
I understand that using something like mutex.synchronize(&block) may be preferable instead - but I'm just wondering how the above snippet should behave. And if I'm missing something in my understanding.
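For reference, this is the kind of #synchronize variant I mean (a minimal sketch; the semantics differ from my snippet in that synchronize blocks until the mutex is free, whereas try_lock skips the block when the mutex is busy):
mutex = Mutex.new

mutex.synchronize do
  # critical section -- lock and unlock are paired for you,
  # so there is no separate owned?/unlock step to get wrong
end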
I'm working with Ruby 1.9.3 and Tk, and I've found that a fork made inside the mainloop can't call "exit" - I need to get out of the fork by doing something like exec().
Example program:
#!/usr/bin/ruby
require 'tk'
root = TkRoot.new
def doit
unless fork
puts "Inside the fork"
exit # This is where it falls apart
end
end
TkButton.new(root) {
text 'go'
command proc { doit }
}.pack
Tk.mainloop()
Press the button and we properly fork, but when the forked child calls 'exit' we get:
[xcb] Unknown sequence number while processing queue
[xcb] Most likely this is a multi-threaded client and XInitThreads has not been called
[xcb] Aborting, sorry about that.
ruby: ../../src/xcb_io.c:274: poll_for_event: Assertion `!xcb_xlib_threads_sequence_lost' failed.
I've discovered that I can hack a workaround by using something like "exec('echo')" instead of the exit, but that's just plain silly. What's going on that makes exit fail? Is there some way to call XInitThreads from Ruby (this isn't JRuby or FXRuby), and will that help?
From my research so far, it seems that Ruby and Tk are pretty broken the moment you introduce threads or forking, but I haven't been able to find a clean way to deal with this problem.
This problem exists with any Tcl built with thread support.
You will have to restructure your program so that the Tcl/Tk processes are separate entities and do not get forked.
The GUI will need to send any input commands to the appropriate Ruby thread, and the threads will need to send data back to the GUI to be displayed.
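As a rough sketch of that structure (worker.rb and the doit command are hypothetical - the point is that the worker is a separate process started before Tk ever runs, so the X11 connection is never duplicated by a fork):
require 'tk'

worker = IO.popen(['ruby', 'worker.rb'], 'w') # separate process, not a fork

root = TkRoot.new
TkButton.new(root) {
  text 'go'
  command proc { worker.puts 'doit' } # hand the work to the worker over the pipe
}.pack
Tk.mainloop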
I understand that there are various ways to spawn new processes in Ruby (e.g. backticks, system(), exec(), etc...)
However, is it possible to spawn a new process directly with code passed as a block?
Just like fork does (fork { ... block ... }).
My problem is that I don't want to use fork, as I don't want to copy all the memory (problematic in my case because the child writes to it); I want to spawn a "fresh" process without calling an external ruby file.
fork is the only way to do this. However, on Linux at least, and I think on OSX too, fork is implemented as copy-on-write, meaning that until an area of memory is written to in the child process, it points directly to the parent's memory. So, no problem.
Edit: Nevermind. The above is wrong. Here's what I would do:
code = "puts 'hi'"
result = nil
popen("ruby") do |pipe|
pipe.puts code
pipe.close_write
result = pipe.read
end
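If the code fits on a command line, an alternative sketch is to hand it to a fresh interpreter via -e instead of a pipe:
require 'rbconfig'

code = "puts 'hi'"

# A brand new interpreter: nothing of the parent's memory is shared.
pid = Process.spawn(RbConfig.ruby, '-e', code)
Process.wait(pid)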
I have a long running process and I need it to launch another process (that will run for a good while too). I need to only start it, and then completely forget about it.
I managed to do what I needed by scooping some code from the Programming Ruby book, but I'd like to find the best/right way, and understand what is going on. Here's what I got initially:
exec("whatever --take-very-long") if fork.nil?
Process.detach($$)
So, is this the way, or how else should I do it?
After checking the answers below I ended up with this code, which seems to make more sense:
(pid = fork) ? Process.detach(pid) : exec("foo")
I'd appreciate some explanation on how fork works. [got that already]
Was detaching $$ right? I don't know why this works, and I'd really love to have a better grasp of the situation.
Alnitak is right. Here's a more explicit way to write it, without $$:
pid = Process.fork
if pid.nil? then
# In child
exec "whatever --take-very-long"
else
# In parent
Process.detach(pid)
end
The purpose of detach is just to say, "I don't care when the child terminates" to avoid zombie processes.
The fork function splits your process in two.
Both processes then receive a result from the call. The child receives nil (and hence knows that it's the child) and the parent receives the PID of the child.
Hence:
exec("something") if fork.nil?
will make the child process start "something", and the parent process will carry on with where it was.
Note that exec() replaces the current process with "something", so the child process will never execute any subsequent Ruby code.
The call to Process.detach() looks like it might be incorrect. I would have expected it to have the child's PID in it, but if I read your code right it's actually detaching the parent process.
Detaching $$ wasn't right. From p. 348 of the Pickaxe (2nd Ed):
$$ Fixnum The process number of the program being executed. [r/o]
This section, "Variables and Constants" in the "Ruby Language" chapter, is very handy for decoding Ruby's various short $ constants.
So what you were actually doing was detaching the program from itself, not from its child.
Like others have said, the proper way to detach from the child is to use the child's pid returned from fork().
The other answers are good if you're sure you want to detach the child process. However, if you either don't mind, or would prefer to keep the child process attached (e.g. you are launching sub-servers/services for a web app), then you can take advantage of the following shorthand
fork do
exec('whatever --option-flag')
end
Providing a block tells fork to execute that block (and only that block) in the child process, while continuing on in the parent.
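And if you change your mind about detaching, the block form still returns the child's pid in the parent, so it combines cleanly with Process.detach:
pid = fork do
  exec('whatever --option-flag')
end
Process.detach(pid) # reap the child automatically; no zombie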
I found the answers above broke my terminal and messed up the output. This is the solution I found:
system("nohup ./test.sh &")
Just in case anyone has the same issue: my goal was to log into an SSH server and then keep that process running indefinitely, so test.sh is this:
#!/usr/bin/expect -f
spawn ssh host -l admin -i cloudkey
expect "pass"
send "superpass\r"
sleep 1000000