Track progress of dd command called using open3 in ruby

I am trying to monitor the progress of copying a Raspberry Pi OS image to a microSD card. This is similar to Kill a process called using open3 in ruby, except I'm not killing the process, I'm sending it a command for it to issue a progress message.
require "open3"

rpath = device_path.gsub(/disk/, "rdisk")
puts "\n\nCopying image to #{rpath}"

if false
  stdout_err, status = Open3.capture2e("sudo", "dd", "bs=1m", "if=#{source_path}", "of=#{rpath}")
  puts stdout_err
else
  cmd = "sudo dd bs=1m if=#{source_path} of=#{rpath}"
  Open3.popen2e(cmd) do |stdin, stdout_err, wait_thr|
    Thread.new do
      stdout_err.each { |l| puts l }
    end
    Thread.new do
      while true
        sleep 5
        if true
          Process.kill("INFO", wait_thr.pid) # Tried INFO, SIGINFO, USR1, SIGUSR1
          # all give: `kill': Operation not permitted (Errno::EPERM)
        else
          stdin.puts 20.chr # Should send ^T -- has no effect, nothing to terminal during flash
        end
      end
    end
    wait_thr.value
  end
end
The first section (after 'if false') flashes the image using Open3.capture2e. This works, but of course issues no progress information.
The section after the 'else' flashes the image using Open3.popen2e. It also attempts to display progress by either issuing 'Process.kill("INFO", wait_thr.pid)', or by sending ^T (20.chr) to the stdin stream every 5 seconds.
The Process.kill line generates an "Operation not permitted" error. The stdin.puts line has no effect at all.
One other thing... While the popen2e process is flashing, hitting ctrl-T on the keyboard DOES generate a progress response. I just can't get it to do it programmatically.
Any help is appreciated!

Newer versions of dd have an optional progress bar, as seen here. Even so I think you'll want to rethink how you execute that shell command so that it thinks it's attached to a terminal. Easiest thing to do is fork/exec, like:
cmd = "sudo dd bs=1m if=#{source_path} of=#{rpath} status=progress"
fork do
exec(cmd) # this replaces the forked process with the cmd, giving it direct access to your terminal
end
Process.wait() # waits for the child process to exit
If that's not an option you may want to look into other ways of getting unbuffered output, including just writing a bash script instead of a ruby one.
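If you do need to stay with Open3 (for instance to capture the output), one possible workaround for the EPERM in the question is to deliver the signal through sudo: dd is running as root, so an unprivileged Process.kill can't reach it, but sudo kill can. A rough sketch for macOS/BSD, where dd answers SIGINFO; it reuses source_path and rpath from the question and assumes sudo forwards the signal on to dd (sudo generally relays signals it receives to the command it runs):

require "open3"

cmd = ["sudo", "dd", "bs=1m", "if=#{source_path}", "of=#{rpath}"]
Open3.popen2e(*cmd) do |stdin, stdout_err, wait_thr|
  reader = Thread.new { stdout_err.each_line { |line| puts line } }
  ticker = Thread.new do
    while wait_thr.alive?
      sleep 5
      # wait_thr.pid is the sudo process; sudo normally relays the signal on to dd
      system("sudo", "kill", "-INFO", wait_thr.pid.to_s)
    end
  end
  status = wait_thr.value # wait for dd (via sudo) to finish
  ticker.kill
  reader.join
  puts "dd exited: #{status}"
end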

How to write to and read from the same named pipe in a single ruby script?

edit: I think I fixed the issue: https://gist.github.com/niuage/c0637b8dd10549a12b6a223dbd5f158a
I might have been missing the Process.wait, hence creating a lot of zombie processes.
I have a piece of code that's working most of the time, but "locks" itself after a while, probably because of a race condition.
My code
pipe = "goals.png"
(1..100).each do |time|
fork do
# This runs ffmpeg, and gets it to write 1 frame of a video to the pipe 'goals.png'
print "before ffmpeg"
`#{ffmpeg(time, score_crop_options, pipe)}`
exit
end
# Reads from 'pipe'
print "before read"
blob = File.read(pipe)
image = Rocket::Image.from_blob(blob)
# do stuff with image
end
Things to note:
#{ffmpeg(time, pipe)} writes to pipe, and is blocking until something reads from pipe
File.read(pipe) is blocking until something writes to pipe (illustrated in the sketch below)
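For reference, here is a minimal sketch of that blocking behaviour, assuming the pipe is a named pipe (FIFO) created with File.mkfifo (Ruby 2.3+) or mkfifo(1); opening either end blocks until the other end is opened:

pipe = "goals.png"
File.mkfifo(pipe) unless File.exist?(pipe)

writer = fork do
  # opening the FIFO for writing blocks until a reader opens it
  File.open(pipe, "w") { |f| f.write("fake frame data") }
end

# File.read blocks until the writer opens the FIFO, then reads until it closes
blob = File.read(pipe)
Process.wait(writer) # reap the child so no zombie is left behind
puts blob.bytesize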
My issue
edit: when the script is locked, and I try to read the pipe from another script, I get zsh: fork failed: resource temporarily unavailable. That's probably a good clue...
Most of the time, File.read(pipe) gets executed before the code in fork, so it works great, but after a little while the script just stops: it prints "before ffmpeg" and never gets to "before read"...
First, should I use threads instead of fork? And can I control the order the 2 statements (read and write) get run, to avoid a race condition? Or maybe it's not even about the race condition and I'm missing something?
The issue wasn't caused by a race condition, but by too many zombie processes, since I wasn't calling Process.wait:
The parent process should use Process.wait to collect the termination statuses of its children or use Process.detach to register disinterest in their status; otherwise, the operating system may accumulate zombie processes.
That's probably why I was getting zsh: fork failed: resource temporarily unavailable when trying to read from the pipe from another script.
Here's something that works:
(1..100).each do |time|
  if fork
    # Parent
    image = read_image(pipe)
    # do stuff with image
    Process.wait # I think that's what was missing previously
  else
    # Child
    Open3.popen3(command(time, score_crop_options, pipe)) do |stdin, stdout, stderr, wait_thr|
      # stuff
    end
    exit!(0)
  end
end
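As the documentation quoted above says, Process.detach is the alternative when the child's exit status isn't needed; a rough sketch using the same hypothetical ffmpeg/read_image helpers and pipe from the question:

(1..100).each do |time|
  pid = fork do
    # write one frame into the named pipe, then exit
    `#{ffmpeg(time, score_crop_options, pipe)}`
    exit!(0)
  end
  Process.detach(pid) # the OS can reap the child; no zombie accumulates
  image = read_image(pipe) # blocks until the child writes to the pipe
  # do stuff with image
end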

popen3 hangs when executing script with popen3

I have the following code:
require 'open3'

def execute_bash(cmd)
  puts "Executing: [#{cmd}]"
  exit_code = Open3.popen3(cmd) do |stdin, stdout, stderr, wait_thr|
    stdin.close
    stdout.each { |line| puts line }
    stdout.close
    stderr.each { |line| puts line }
    stderr.close
    wait_thr.value.exitstatus
  end
  return exit_code
end
The command that I run with this function is vagrant up, which produces a lot of output. I have a lot of recipes, so I need to monitor its output in real time, line by line.
I also have another script that checks that the previous script ran successfully and did not crash; it runs bash commands in the same way. So, in the second script, I'm running:
execute_bash "./vagrant_up.rb"
This leads to a deadlock or something like it: at some point the output just stops, and no matter how long I wait it never continues.
When I run vagrant directly in the console, everything is fine.
Is it a problem to run a ruby script that uses popen3 from inside another popen3?
Is there a proper way of handling deadlocks in popen3 (if it even is a deadlock; I'm not sure)?
Hard to say what the real issue is without seeing the details of vagrant_up.rb but consider the following.
Here I have a little bash script:
#!/bin/sh
#out.sh
echo "about to sleep"
sleep 3
echo "just woke up"
And another:
#!/bin/sh
#err.sh
echo "about to sleep" 1>&2
sleep 3
echo "just woke up" 1>&2
With your code, running execute_bash("out.sh") produces something that is probably in line with your expectations. It says "about to sleep" and then 3 seconds later it says "just woke up". But, running execute_bash("err.sh") does something a little surprising. It waits 3 seconds then prints "about to sleep" and "just woke up".
So, for your situation, here is my guess. Your command is probably producing a lot of output to stderr, and you just aren't seeing it. In fact you won't see it until the command finishes (because the stdout stream for the Open3 process won't be closed until then).
Can you just redirect stderr to stdout? Can you comment if this doesn't resolve the issue?
execute_bash("./vagrant_up.rb 2>&1")

Forcing Code in Ruby on Windows when X button is hit

While writing a ruby script on Windows (ruby -v outputs ruby 1.9.3p545) I encountered an interesting and rather specific problem. I was attempting to close an opened file if a user terminates execution. For example,
begin
  f = File.open("monkeys.txt", "w+")
  # stuff with the file
rescue Exception => e # I know this is a bad idea
  puts e.backtrace
ensure
  f.close
end
Now, this works if I terminate execution via Ctrl+C while running this in cmd. However, when I hit the "X" on the cmd prompt window, the code in the ensure block doesn't run. I tried something like...
at_exit do
  f.close if !f.closed?
end
...but that still doesn't execute the code I want it to when the X button is hit.
So, what do I do in order to force "ensure" code in Ruby if it's closed via that X button?
Well, I don't really program for Windows, so I might get lost on the details, but let me try to shed some light with this workaround for Linux:
ppid = Process.ppid

pid = fork do
  loop do
    sleep(1)
    begin
      Process.getsid(ppid)
    rescue Errno::ESRCH
      File.new("process_down.txt", "a+")
      exit(1)
    end
  end
end

Process.detach(pid)
puts "Process detached"
What this does is create a forked process, detach it from the main process, and keep listening for when the main process is killed (it'll throw Errno::ESRCH on Process.getsid if the ppid is no longer there), at which point it creates a .txt file and exits. I don't know how to handle forking and pids in Windows, but that's just to try and show you some possibilities =]

How do I open STDIN of process in Ruby?

I have a set of tasks that I need to run from a Ruby script, however one particular task always waits for EOF on STDIN before quitting.
Obviously this causes the script to hang while waiting for the child process to end.
I have the process ID of the child process, but not a pipe or any kind of handle to it. How could I open a handle to the STDIN of a process to send EOF to it?
EDIT: Given that you aren't starting the script, a solution that occurs to me is to put $stdin under your control while using your gem. I suggest something like:
old_stdin = $stdin.dup
# note that old_stdin.fileno is non-0.
# create a file handle you can use to signal EOF
new_stdin = File::open('/dev/null', 'r')
# and make $stdin use it, instead.
$stdin.reopen(new_stdin)
new_stdin.close
# note that $stdin.fileno is still 0, though now it's using /dev/null for input.
# replace with the call that runs the external program
system('/bin/cat')
# "cat" will now exit. restore the old state.
$stdin.reopen(old_stdin)
old_stdin.close
If your ruby script is creating the tasks, it can use IO::popen. For example, cat, when run with no arguments, will wait for EOF on stdin before it exits, but you can run the following:
f = IO::popen('cat', 'w')
f.puts('hello')
# signals EOF to "cat"
f.close
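The same trick works with Open3 if you also want the child's output and exit status; a small sketch, again with cat:

require 'open3'

Open3.popen3('cat') do |stdin, stdout, stderr, wait_thr|
  stdin.puts('hello')
  stdin.close # closing the write end delivers EOF to cat
  puts stdout.read # => "hello"
  wait_thr.value # cat exits once it has seen EOF
end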

How to detect a process ending with Ruby open3

If I run Open3 with:
input, output,error = Open3.popen3 "nikto -host someip -port 80 -output xml"
How can I detect if nikto is done? It takes a while for the scan to complete.
If something goes wrong, I suppose I have to periodically check error to see if anything has been written?
Are there any decent docs for open3? The ruby docs are nowhere near decent.
input, output, error = Open3.popen3 "nikto -host someip -port 80 -output xml"

if select([output], nil, nil, 0.1) and output.eof?
  # process exited and doesn't have output queued.
else
  # still running or has output not read yet.
end
Also, here is some good documentation I found by googling: http://www.ensta.fr/~diam/ruby/online/ruby-doc-stdlib/libdoc/open3/rdoc/index.html
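Another option: popen3 also returns a wait thread as its fourth value. wait_thr.alive? tells you whether nikto is still running without blocking, and wait_thr.value blocks until it exits and gives you the exit status. A minimal sketch:

require 'open3'

input, output, error, wait_thr = Open3.popen3 "nikto -host someip -port 80 -output xml"

if wait_thr.alive?
  puts "nikto is still scanning..."
else
  puts "nikto finished with exit status #{wait_thr.value.exitstatus}"
end

# or simply block until the scan is done:
# status = wait_thr.value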
If you're using any *nix OS, your process will receive SIGCHLD when a subprocess exits. Depending on whether or not you have more than one subprocess at a time, this can be used to detect when it ends.
Also, the IO channels to the subprocess are implemented under the hood with pipes, so you will definitely get an EOF at the end of the output and possibly SIGPIPE when it shuts too.
In Ruby, installing a signal handler is just:
Signal.trap("CHLD") do
Process.wait
$child_died = true
end
You may be able to get the PID from $?
Process.wait $?.pid
Turns out that was wrong.
See some options here:
http://en.wikibooks.org/wiki/Ruby_Programming/Running_Multiple_Processes
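If you may have more than one subprocess at a time, the CHLD handler above can be extended to reap every child that has already exited without blocking; a sketch:

reaped = []

Signal.trap("CHLD") do
  begin
    # WNOHANG: collect each child that has already exited, then return
    while (pid = Process.wait(-1, Process::WNOHANG))
      reaped << pid
    end
  rescue Errno::ECHILD
    # no children left to wait for
  end
end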
Output is seen after the execution of the command finishes:
Open3.popen3(cmd) do |stdin, stdout, stderr, wait_thr|
  puts "stdout is:" + stdout.read
  puts "stderr is:" + stderr.read
end
Output is seen incrementally
Open3.popen3(cmd) do |stdin, stdout, stderr, wait_thr|
  while line = stderr.gets
    puts line
  end
end
See this link for more options:
http://blog.bigbinary.com/2012/10/18/backtick-system-exec-in-ruby.html
