I'm trying to figure out a way to track the status of a process that I've created. In my script I start out by creating my object:
ov = OpenVASOMP::OpenVASOMP.new({"host" => "localhost", "port" => "9390", "user" => "admin", "password" => "#{ENV["OV"]}"})
This creates an ov object that exposes a number of methods, in particular ov.task_start.
I need to be able to track the process and perform other actions while it's running, such as sending a status update to a remote server.
My initial thought was to wrap this in a Process.spawn and track the PID, but that's throwing an error:
TypeError: no implicit conversion of REXML::Element into String
and the stack trace points to this line: pid = Process.spawn(ov.task_start(taskid))
So, I guess you can't pass objects and their methods into spawn?
Here's my whole block of code in case there is something else that I'm missing:
ov = OpenVASOMP::OpenVASOMP.new({"host" => "localhost", "port" => "9390", "user" => "admin", "password" => "#{ENV["OV"]}"})
taskid = ov.task_create({"name" => timestamp, "target" => target, "config" => config})
running = true
pid = Process.spawn(ov.task_start(taskid))
Signal.trap("HUP") { log("#{results_dir}/events.log", "[!] Stop triggered by user!"); exit }

until running == false
  begin
    running = Process.getpgid(pid)
    log("#{results_dir}/events.log", "Scan PID: #{pid}")
    stat = ov.task_get_byid(taskid)
    update_ov_status(stat['progress'])
    log("#{results_dir}/events.log", "[+] Sending progress to server: #{stat['progress']}%")
    scan_status = get_scan_status
    if scan_status == "Stopped"
      ov.task_stop(taskid)
      ov.task_delete(taskid)
      ov.target_delete(target)
      Process.kill("HUP", pid)
      Process.wait
      update_task_id("")
      update_ov_status(0)
      update_scan_status("Idle")
    end
    sleep 60
  rescue Errno::ESRCH
    running = false
    puts "PID: #{pid} done!"
    log("#{results_dir}/events.log", "[!] Scan complete")
  end
end
And task_start looks like:
def task_start(task_id)
  xmlreq = xml_attr("start_task", {"task_id" => task_id}).to_s()
  begin
    xr = omp_request_xml(xmlreq)
  rescue
    raise OMPResponseError
  end
  return xr
end
Am I going about this all wrong?
Just repeating what I said in the comments as an answer, for closure.
Since task_start is not a shell command string but a block of Ruby code that should be executed asynchronously, use Process.fork { ov.task_start taskid } instead of Process.spawn. Process.spawn expects a command string, which is why passing it the REXML::Element returned by task_start raises the TypeError above.
The Process.fork call returns a PID which can be used to stop the process, for example:
# in one terminal
ruby -e "puts Process.fork { loop { puts('tick'); sleep 1 } }"
# it then prints a PID like 20626
# then in another terminal:
kill -9 20626
# the "tick" will stop getting printed every second.
I have a Ruby application that spawns a thread on-demand which in turn does a system call to execute a native binary.
I want to abort this call before the native call completes.
I tried all the options the Thread documentation provides, such as kill, raise, and terminate, but nothing seems to help.
This is what I'm trying to do:
class Myserver < Grape::API
  @@thr = nil

  post "/start" do
    puts "Starting script"
    @@thr = Thread.new do
      op = `sh chumma_sh.sh`
      puts op
    end
    puts @@thr.status
  end

  put "/stop" do
    @@thr.terminate
    @@thr.raise
    Thread.kill(@@thr)
    puts @@thr.status
  end
end
The thread appears to enter a sleep state while the IO operation is in progress, but how do I kill the thread so that all the child processes it created are terminated as well, rather than left orphaned?
Doing ps -ef | grep for the script returns the PID, and I could call Process.kill on it, but I wanted to know if there are better options.
I don't currently have the option of modifying how the script is executed, as it is part of an inherited library.
Using ps is the only approach I've found that works. If you also want to kill the child processes (and their descendants), you could use something like this:
def child_pids_recursive(pid)
  # get children
  pipe = IO.popen("ps -ef | grep #{pid}")
  child_pids = pipe.readlines.map do |line|
    parts = line.split(/\s+/)
    parts[2] if parts[3] == pid.to_s && parts[2] != pipe.pid.to_s
  end.compact
  pipe.close

  # get grandchildren
  grandchild_pids = child_pids.map do |cpid|
    child_pids_recursive(cpid)
  end.flatten

  child_pids + grandchild_pids
end
def kill_all(pid)
  child_pids_recursive(pid).reverse.each do |p|
    begin
      Process.kill('TERM', p.to_i)
    rescue
      # ignore
    end
  end
end
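For the Grape endpoints above, the stop route might then look something like this (a sketch of my own, assuming the shell script is still running when /stop arrives; Process.pid is the server process whose descendants we want gone):
put "/stop" do
  kill_all(Process.pid)  # TERM the sh script and anything it spawned
  @@thr.kill             # then stop the waiting thread itself
  puts @@thr.status
end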
I need to log in to a server with Ruby and Telnet and execute a few commands. My current script is:
tn = Net::Telnet::new("Host" => "#{ip}", "Port" => 23, "Timeout" => 60,
"Output_log"=>"output_log.log",
"Dump_log"=> "dump_log.log",
"Prompt" => /[#]/ )
tn.cmd("#{USER}\n#{PASS}") { |c| print c }
puts tn.cmd("Conf")
tn.waitfor(/config/) { |str| puts str }
puts tn.cmd("Int fa23")
puts tn.cmd("Shut")
puts tn.cmd("No shut")
puts tn.cmd("Exit")
tn.close
I must only execute the second command (Int fa23) after the string "config" is found in the output. The problem is that waitfor is not working. Here is the output_log:
Trying XX.XX.XX.XX...
Connected to XX.XX.XX.XX.
User Name:username
Password:*************
BOT-SWT-VSAT-AL-...#Conf
BOT-SWT-VSAT-AL-...(config)#
The script stops with a waitfor timeout error. What am I doing wrong?
Add the waitfor right after Net::Telnet::new.
You should wait for the connection to be established (which happens when the Telnet instance is created), and then wait for the server's response each time before sending the next command.
localhost = Net::Telnet::new("Host" => "*****",
                             "Port" => ***,
                             "Timeout" => 10,
                             "Prompt" => /[$%#>] \z/n)

localhost.waitfor(/USER/) {
  localhost.cmd("****") {
    localhost.waitfor(/PASS/) { |c|
      print c
      # your next commands
      # ...
      # localhost.close
    }
  }
}
The key here is to make sure that you have received all the packets from the server (until EOF) before responding. Sometimes even consuming or waiting for a single space character can matter, depending on how the telnet server is designed.
Also, make sure you use the right regular expression for your match.
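Applied to the script in the question, that might look like the sketch below. The login prompt comes from the output_log above; using cmd's "Match" option is my own suggestion, since the default Prompt of /[#]/ also matches the "(config)#" line and would consume it before a separate waitfor ever sees it:
tn = Net::Telnet::new("Host" => ip, "Port" => 23, "Timeout" => 60,
                      "Prompt" => /[#]/)
# wait for the login prompt before sending credentials
tn.waitfor(/User Name/) { |c| print c }
tn.cmd("#{USER}\n#{PASS}") { |c| print c }
# let cmd itself wait for the "(config)#" prompt instead of a separate waitfor
puts tn.cmd("String" => "Conf", "Match" => /\(config\)#/)
puts tn.cmd("Int fa23")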
I have been trying to get port forwarding to work correctly with Net::SSH. From what I understand, I need to fork off the Net::SSH session if I want to use it from the same Ruby program, so that the event handling loop can actually process packets being sent through the connection. However, this results in the ugliness you can see in the following:
#!/usr/bin/env ruby -w

require 'net/ssh'
require 'httparty'
require 'socket'

include Process

log = Logger.new(STDOUT)
log.level = Logger::DEBUG

local_port = 2006
child_socket, parent_socket = Socket.pair(:UNIX, :DGRAM, 0)
maxlen = 1000
hostname = "www.example.com"

pid = fork do
  parent_socket.close
  Net::SSH.start("hostname", "username") do |session|
    session.logger = log
    session.logger.sev_threshold = Logger::Severity::DEBUG
    session.forward.local(local_port, hostname, 80)
    child_socket.send("ready", 0)
    pidi = fork do
      msg = child_socket.recv(maxlen)
      puts "Message from parent was: #{msg}"
      exit
    end
    session.loop do
      status = waitpid(pidi, Process::WNOHANG)
      puts "Status: #{status.inspect}"
      status.nil?
    end
  end
end

child_socket.close
puts "Message from child: #{parent_socket.recv(maxlen)}"
resp = HTTParty.post("http://localhost:#{local_port}/", :headers => { "Host" => hostname })
# the write cannot be the last statement, otherwise the child pid could end up
# not receiving it
parent_socket.write("done")
puts resp.inspect
Can anybody show me a more elegant/better working solution to this?
I spent a lot of time trying to figure out how to implement port forwarding correctly, then took inspiration from the net/ssh/gateway library. I needed a robust solution that keeps working after various possible connection errors. This is what I'm using now; hope it helps:
require 'net/ssh'

ssh_options = ['host', 'login', :password => 'password']
tunnel_port = 2222

begin
  run_tunnel_thread = true
  tunnel_mutex = Mutex.new

  ssh = Net::SSH.start *ssh_options

  tunnel_thread = Thread.new do
    begin
      while run_tunnel_thread do
        tunnel_mutex.synchronize { ssh.process 0.01 }
        Thread.pass
      end
    rescue => exc
      puts "tunnel thread error: #{exc.message}"
    end
  end

  tunnel_mutex.synchronize do
    ssh.forward.local tunnel_port, 'tunnel_host', 22
  end

  begin
    ssh_tunnel = Net::SSH.start 'localhost', 'tunnel_login', :password => 'tunnel_password', :port => tunnel_port
    puts ssh_tunnel.exec! 'date'
  rescue => exc
    puts "tunnel connection error: #{exc.message}"
  ensure
    ssh_tunnel.close if ssh_tunnel
  end

  tunnel_mutex.synchronize do
    ssh.forward.cancel_local tunnel_port
  end
rescue => exc
  puts "tunnel error: #{exc.message}"
ensure
  run_tunnel_thread = false
  tunnel_thread.join if tunnel_thread
  ssh.close if ssh
end
That's just how SSH in general is. If you're offended by how ugly it looks, you should probably wrap up that functionality into a port forwarding class of some sort so that the exposed part is a lot more succinct. An interface like this, perhaps:
forwarder = PortForwarder.new(8080, 'remote.host', 80)
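For illustration, a minimal sketch of such a class built on the fork approach from the question (the class itself and its ssh_host/ssh_user arguments are hypothetical, not from any library):
require 'net/ssh'

class PortForwarder
  def initialize(local_port, remote_host, remote_port, ssh_host, ssh_user)
    # run the forwarding session in a child process so the caller isn't blocked
    @pid = fork do
      Net::SSH.start(ssh_host, ssh_user) do |session|
        session.forward.local(local_port, remote_host, remote_port)
        session.loop { true }  # keep serving forwarded connections until killed
      end
    end
  end

  def stop
    Process.kill('TERM', @pid)
    Process.wait(@pid)
  end
end

# forwarder = PortForwarder.new(8080, 'remote.host', 80, 'ssh.gateway', 'username')
# ... issue requests against http://localhost:8080/ ...
# forwarder.stop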
So I have found a slightly better implementation: it only requires a single fork but still uses a socket for the communication. It uses IO#read_nonblock to check whether a message is ready. If there isn't one, the method raises an exception, in which case the block rescues it and returns true, and the SSH session keeps serving requests. Once the parent is done with the connection, it sends a message, which makes child_socket.read_nonblock(maxlen).nil? evaluate to false, ending the loop and thereby shutting down the SSH connection.
I feel a little better about this, so between that and #tadman's suggestion to wrap it in a port forwarding class I think it's about as good as it'll get. However, any further suggestions for improving this are most welcome.
#!/usr/bin/env ruby -w

require 'net/ssh'
require 'httparty'
require 'socket'

log = Logger.new(STDOUT)
log.level = Logger::DEBUG

local_port = 2006
child_socket, parent_socket = Socket.pair(:UNIX, :DGRAM, 0)
maxlen = 1000
hostname = "www.example.com"

pid = fork do
  parent_socket.close
  Net::SSH.start("ssh-tunnel-hostname", "username") do |session|
    session.logger = log
    session.logger.sev_threshold = Logger::Severity::DEBUG
    session.forward.local(local_port, hostname, 80)
    child_socket.send("ready", 0)
    session.loop { child_socket.read_nonblock(maxlen).nil? rescue true }
  end
end

child_socket.close
puts "Message from child: #{parent_socket.recv(maxlen)}"
resp = HTTParty.post("http://localhost:#{local_port}/", :headers => { "Host" => hostname })
# the write cannot be the last statement, otherwise the child pid could end up
# not receiving it
parent_socket.write("done")
puts resp.inspect
Say I have a function like the one below. How do I capture the output of the Process.spawn call? I should also be able to kill the process if it takes longer than a specified timeout.
Note that the function must also be cross-platform (Windows/Linux).
def execute_with_timeout!(command)
  begin
    pid = Process.spawn(command) # How do I capture output of this process?
    status = Timeout::timeout(5) {
      Process.wait(pid)
    }
  rescue Timeout::Error
    Process.kill('KILL', pid)
  end
end
Thanks.
You can use IO.pipe and tell Process.spawn to use the redirected output without the need of external gem.
This works starting with Ruby 1.9.2 (and I personally recommend 1.9.3).
The following is a simple implementation used by Spinach BDD internally to capture both out and err outputs:
# stdout, stderr pipes
rout, wout = IO.pipe
rerr, werr = IO.pipe

pid = Process.spawn(command, :out => wout, :err => werr)
_, status = Process.wait2(pid)

# close write ends so we could read them
wout.close
werr.close

@stdout = rout.readlines.join("\n")
@stderr = rerr.readlines.join("\n")

# dispose the read ends of the pipes
rout.close
rerr.close

@last_exit_status = status.exitstatus
The original source is in features/support/filesystem.rb
I highly recommend reading Ruby's own Process.spawn documentation.
Hope this helps.
PS: I left the timeout implementation as homework for you ;-)
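For what it's worth, a minimal sketch of that homework, combining the pipe capture above with the Timeout module (the function name is mine; note that if the command writes more than the OS pipe buffer before exiting, you would need to read concurrently rather than after the wait):
require 'timeout'

def spawn_with_timeout(command, timeout = 5)
  rout, wout = IO.pipe
  pid = Process.spawn(command, :out => wout, :err => wout)
  begin
    Timeout.timeout(timeout) { Process.wait(pid) }
  rescue Timeout::Error
    Process.kill('KILL', pid)
    Process.wait(pid)  # reap the killed child
  end
  wout.close           # close the write end so the read below hits EOF
  output = rout.read
  rout.close
  output
end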
I followed Anselm's advice in his post on the Ruby forum here.
The function looks like this:
require 'timeout'

def execute_with_timeout!(command, timeout = 5)
  begin
    pipe = IO.popen(command, 'r')
  rescue Exception => e
    raise "Execution of command #{command} unsuccessful"
  end

  output = ""
  begin
    status = Timeout::timeout(timeout) {
      Process.waitpid2(pipe.pid)
      output = pipe.gets(nil)
    }
  rescue Timeout::Error
    Process.kill('KILL', pipe.pid)
  end
  pipe.close
  output
end
This does the job, but I'd rather use a third-party gem that wraps this functionality. Anyone have any better ways of doing this? I have tried Terminator; it does exactly what I want, but it does not seem to work on Windows.
I run this script:
t = fork do
  Signal.trap "INT" do
    puts "child"
    exit
  end
  sleep 10
end

Signal.trap "INT" do
  puts "parent"
  Process.kill "INT", t
  Process.waitpid t
  exit
end

Process.waitpid t
When I do CTRL+C, I get
$ ruby sigtest.rb
^Cchild
parent
You can see that INT is delivered to every process, so Process.kill "INT", t tries to kill a process that has already died. Is there a way to make the user's INT signal go only to the parent, so that the output is:
$ ruby sigtest.rb
^Cparent
child
Solution
Rules:
When you press Ctrl+C, SIGINT is sent to the whole foreground process group.
When you fork and exec a new program, the trapped signal handlers are not carried over into it.
So if you want to control the child's signals manually, you have to change the process group (PGID) of the child.
See
http://corelib.rubyonrails.org/classes/Process/Sys.html#M001961
http://ruby.runpaint.org/processes (paragraph "Options Hash")
def system cmd
  pid = fork do
    exec cmd, {:pgroup => true}
  end
  Process.wait pid
  $?.success?
end
def ` cmd # ` make syntax highlighting happy
  readme, writeme = IO.pipe
  pid = fork do
    $stdout.reopen writeme
    readme.close
    exec cmd, {:pgroup => true}
  end
  writeme.close
  data = readme.read
  Process.wait pid
  data
end
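A quick way to check the effect (my own test, not from the original answer): with the overrides above, Ctrl+C now fires only the parent's trap, because the child lives in its own process group.
Signal.trap("INT") { puts "parent only"; exit }
system "sleep 100"  # Ctrl+C interrupts only the parent; the child no longer receives INT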
You could always have the child ignore the INT signal.
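For example (a sketch of that suggestion applied to the original script; switching the parent-to-child signal to TERM is my own choice, since a child that ignores INT would also ignore the parent's kill):
t = fork do
  Signal.trap "INT", "IGNORE"  # the terminal's Ctrl+C no longer reaches the child
  Signal.trap "TERM" do        # the parent now stops the child with TERM
    puts "child"
    exit
  end
  sleep 10
end

Signal.trap "INT" do
  puts "parent"
  Process.kill "TERM", t
  Process.waitpid t
  exit
end

Process.waitpid t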