Given this Unix shell script:
test.sh:
#!/bin/sh
sleep 2 &
sleep 5 &
sleep 1 &
wait
$ time ./test.sh
real 0m5.008s
user 0m0.040s
sys 0m0.000s
How would you accomplish the same thing in Ruby on a Unix machine?
The sleep commands are just an example; assume they are long-running external commands instead.
Straight from the Process.waitall documentation:
fork { sleep 0.2; exit 2 } #=> 27432
fork { sleep 0.1; exit 1 } #=> 27433
fork { exit 0 } #=> 27434
p Process.waitall
Of course, instead of using Ruby's sleep, you can run whatever external command you like using Kernel#system or the backtick operator.
#!/usr/bin/env ruby
pids = []
pids << Kernel.fork { `sleep 2` }
pids << Kernel.fork { `sleep 5` }
pids << Kernel.fork { `sleep 1` }
pids.each { |pid| Process.wait(pid) }
To answer my own question (just found out about this):
#!/usr/bin/ruby
spawn 'sleep 2'
spawn 'sleep 5'
spawn 'sleep 1'
Process.waitall
On Ruby 1.8 you need to install the sfl gem and also add these requires:
require 'rubygems'
require 'sfl'
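If you also want to check how each command exited, Process.waitall returns an array of [pid, status] pairs, so a small variant of the script above (same commands, purely illustrative) could be:
#!/usr/bin/ruby
spawn 'sleep 2'
spawn 'sleep 5'
spawn 'sleep 1'
Process.waitall.each do |pid, status|
  puts "#{pid} exited with #{status.exitstatus}"
end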
Related
I want to run multiple time-consuming shell commands from Ruby in a non-blocking (asynchronous) way.
I want to pass options to commands, receive output in Ruby, and (ideally) handle errors.
The script below will naturally take 15 seconds to execute:
test.rb
3.times do |i|
puts `sleep 5; echo #{i} | tail -n 1` # some time-consuming complex command
end
$ /usr/bin/time ruby test.rb
0
1
2
15.29 real 0.13 user 0.09 sys
With Thread, it can apparently be executed in parallel, and it takes only 5 seconds, as expected:
threads = []
3.times do |i|
threads << Thread.new {
puts `sleep 5; echo #{i} | tail -n 1`
}
end
threads.each {|t| t.join() }
$ /usr/bin/time ruby test.rb
2
0
1
5.17 real 0.12 user 0.06 sys
But is this the best approach? Is there any other way?
I have also written a version using Open3.popen2, but it seems to take 15 seconds to execute, as in the first example (unless each call is wrapped in a Thread):
require 'open3'
3.times do |i|
Open3.popen2("sleep 5; echo #{i} | tail -n 1") do |stdin, stdout|
puts stdout.read()
end
end
The documentation describes a "block form" and a "non-block form", but this "block" refers to Ruby blocks (anonymous functions) and has nothing to do with concurrency, correct?
Is the Open3 module alone only capable of blocking execution?
The problem with your code is that stdout.read is a blocking call.
You could defer the reading until the commands are finished.
First, create the commands:
commands = Array.new(3) { |i| Open3.popen2("sleep 5; echo hello from #{i}") }
Then, wait for each command to finish:
commands.each { |stdin, stdout, wait_thr| wait_thr.join }
Finally, gather the output and close the IO streams:
commands.each do |stdin, stdout, wait_thr|
puts stdout.read
stdin.close
stdout.close
end
Output: (after 5 seconds)
hello from 0
hello from 1
hello from 2
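If you don't need to interact with a command while it runs, another common pattern (not required by the answer above, just an alternative sketch) is to combine Thread with Open3.capture2, which collects the output and the exit status for you:
require 'open3'

threads = 3.times.map do |i|
  Thread.new { Open3.capture2("sleep 5; echo hello from #{i}") }
end

threads.each do |t|
  out, status = t.value            # Thread#value joins the thread and returns [output, status]
  print out
  warn "command failed: #{status}" unless status.success?
end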
Long story short, I'm trying to run a Linux Perl script on Windows (with a few modifications).
On Unix it works just fine, but on Windows I've come to the conclusion that calling system doesn't work the same way as on Unix, so it doesn't create multiple processes.
Below is the code:
use strict;
use warnings;
open (FIN, 'words.txt');
while (<FIN>) {
chomp;
my $line = $_;
system( "perl script.pl $line &" );
}
close (FIN);
So basically, I have 5 different words in "words.txt", and I want each one to be passed in turn to script.pl, which means:
script.pl word1
script.pl word2
script.pl word3
etc.
As of now, it picks up only the first word in words.txt and loops with that one. As I said, on Unix it works perfectly, but not on Windows.
I've tried using "start": system( "start perl script.pl $line &" ); and it works... except it opens 5 additional CMD windows to do the work. I want the work done in the same window.
If anyone has any idea how to make this work on Windows, I'll really appreciate it.
Thanks!
According to perlport:
system
(Win32) [...] system(1, @args) spawns an external process and
immediately returns its process designator, without waiting for it to
terminate. Return value may be used subsequently in wait or waitpid.
Failure to spawn() a subprocess is indicated by setting $? to 255 << 8.
$? is set in a way compatible with Unix (i.e. the exit status of the
subprocess is obtained by $? >> 8, as described in the documentation).
I tried this:
use strict;
use warnings;
use feature qw(say);
say "Starting..";
my @pids;
for my $word (qw(word1 word2 word3 word4 word5)) {
my $pid = system(1, "perl script.pl $word" );
if ($? == -1) {
say "failed to execute: $!";
}
push @pids, $pid;
}
#wait for all children to finish
for my $pid (@pids) {
say "Waiting for child $pid ..";
my $ret = waitpid $pid, 0;
if ($ret == -1) {
say " No such child $pid";
}
if ($? & 127) {
printf " child $pid died with signal %d\n", $? & 127;
}
else {
printf " child $pid exited with value %d\n", $? >> 8;
}
}
say "Done.";
With the following child script script.pl :
use strict;
use warnings;
use feature qw(say);
say "Starting: $$";
sleep 2+int(rand 5);
say "Done: $$";
sleep 1;
exit int(rand 10);
I get the following output:
Starting..
Waiting for child 7480 ..
Starting: 9720
Starting: 10720
Starting: 9272
Starting: 13608
Starting: 13024
Done: 13608
Done: 10720
Done: 9272
Done: 9720
Done: 13024
child 7480 exited with value 9
Waiting for child 13344 ..
child 13344 exited with value 5
Waiting for child 17396 ..
child 17396 exited with value 3
Waiting for child 17036 ..
child 17036 exited with value 6
Waiting for child 17532 ..
child 17532 exited with value 8
Done.
Seems to work fine..
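One small refinement: according to the perlport excerpt above, a failed spawn is signalled by $? being set to 255 << 8 rather than by a -1 return, so the error check inside the loop could also (or instead) be written as:
if ( $? == 255 << 8 ) {   # perlport: failure to spawn a subprocess on Win32
    say "failed to spawn: $!";
}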
You can use Win32::Process to get finer control over creating a new process than system gives you on Windows. In particular, the following doesn't create a new console for each process like using system("start ...") does:
#!/usr/bin/env perl
use warnings;
use strict;
use feature qw/say/;
# Older versions don't work with an undef appname argument.
# Use the full path to perl.exe on them if you can't upgrade
use Win32::Process 0.17;
my @lines = qw/foo bar baz quux/; # For example instead of using a file
my @procs;
for my $line (@lines) {
my $proc;
if (!Win32::Process::Create($proc, undef, "perl script.pl $line", 1,
NORMAL_PRIORITY_CLASS, ".")) {
$_->Kill(1) for @procs;
die "Unable to create process: $!\n";
}
push @procs, $proc;
}
$_->Wait(INFINITE) for @procs;
# Or
# use Win32::IPC qw/wait_all/;
# wait_all(@procs);
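If you also need each child's exit status, Win32::Process objects provide GetExitCode and GetProcessID; a minimal sketch, assuming it runs after the Wait loop above:
for my $proc (@procs) {
    my $exitcode;
    $proc->GetExitCode($exitcode);   # only meaningful once the process has finished
    say "pid ", $proc->GetProcessID(), " exited with ", $exitcode;
}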
As Yet Another Way To Do It, the start command takes a /b option to not open a new command prompt.
system("start /b perl script.pl $line");
I'm trying to launch a command inside a while loop and then continue my script, but the loop never finishes. The condition is true; I don't want to make it false, because the command has to be executed every 10 minutes.
while true
pid = spawn('xterm -e command')
sleep 600
Process.kill('TERM', pid)
end
The same bash code works fine, because I can execute the next commands of the script by putting & after done:
while : ; do
xterm -e command ; sleep 600 ; done &
echo $! >/tmp/mycommand.pid
In Ruby, does the end statement block the script in my loop? Or is the true value not appropriate here?
If I understand right, you want to create a thread:
Thread.new do
while true
sleep(1)
puts 'inside'
end
end
puts 'outside'
sleep(3)
And output:
outside
inside
inside
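If the goal is to keep re-running the xterm command every 10 minutes while the rest of the script carries on, the same idea can wrap your original loop. A minimal sketch, using the command from the question:
worker = Thread.new do
  loop do
    pid = spawn('xterm -e command')
    sleep 600
    Process.kill('TERM', pid)
  end
end

# ... the rest of the script runs here while the loop keeps going ...
# worker.join   # only if you ever want to block on the loop (it never ends on its own)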
I have an expect script that performs an exec that can take some time (around 5 mins).
I have copied the script below and also the output from running the script.
If the script were timing out, I would have expected "timeout" to be printed to stdout.
Any pointers will be appreciated!
expect <<EOF
cd /home/vagrant/cloudstack
# 20 mins timeout for jetty to start and devcloud to be provisioned
set timeout 1200
match_max 1000000
set success_string "*Started Jetty Server*"
spawn "/home/vagrant/cloudstack_dev.sh" "-r"
expect {
-re "(\[^\r]*\)\r\n"
{
set current_line \$expect_out(buffer)
if { [ string match "\$success_string" "\$current_line" ] } {
flush stdout
puts "Started provisioning cloudstack."
# expect crashes executing the following line:
set exec_out [exec /home/vagrant/cloudstack_dev.sh -p]
puts "Finished provisioning cloudstack. Stopping Jetty."
# CTRL-C
send \003
expect eof
} else {
exp_continue
}
}
eof { puts "eof"; exit 1; }
timeout { puts "timeout"; exit 1; }
}
EOF
The output:
...
2014-03-14 06:44:08 (1.86 MB/s) - `/home/vagrant/devcloud.cfg' saved [3765/3765]
+ python /home/vagrant/cloudstack/tools/marvin/marvin/deployDataCenter.py -i /home/vagrant/devcloud.cfg
+ popd
+ exit 0
while executing
"exec /home/vagrant/cloudstack_dev.sh -p"
invoked from within
"expect {
-re "(\[^\r]*\)\r\n"
{
set current_line $expect_out(buffer)
if { [ string match "$success_string" "$current_line" ]..."
The function that gets run inside cloudstack_dev.sh:
function provision_cloudstack () {
echo -e "\e[32mProvisioning Cloudstack.\e[39m"
pushd $PWD
if [ ! -e $progdir/devcloud.cfg ]
then
wget -P $progdir https://github.com/imduffy15/devcloud/raw/v0.2/devcloud.cfg
fi
python /home/vagrant/cloudstack/tools/marvin/marvin/deployDataCenter.py -i $progdir/devcloud.cfg
popd
}
From the Expect output, it seems as though the function is being run ok.
See http://wiki.tcl.tk/exec
The exec call by default returns an error status when the exec'ed command:
returns a non-zero exit status, or
emits any output to stderr
This second condition can be irksome. If you don't care about stderr, then use exec -ignorestderr
You should always catch an exec call. More details in the referenced wiki page, but at a minimum:
set status [catch {exec command} output]
if {$status > 0} {
# handle an error condition ...
} else {
# success
}
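Applied to the exec call that crashes in your script, it might look like the sketch below (inside the shell heredoc you would also need to backslash-escape the dollar signs, as the rest of your script does):
set status [catch {exec /home/vagrant/cloudstack_dev.sh -p} exec_out]
if {$status > 0} {
    puts "provisioning failed: $exec_out"
    exit 1
}
puts "Finished provisioning cloudstack. Stopping Jetty."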
I am experimenting with multiple processes. I am trapping SIGCLD to execute something when the child is done. It is working on IRB but not when I execute as a ruby script.
pid = fork {sleep 2; puts 'hello'}
trap('CLD') { puts "pid: #{pid} exited with code"}
When I run the above from IRB, both lines are printed, but when I run it as a Ruby script, the line within the trap procedure does not show up.
IRB gives you an outer loop, which means that the Ruby process doesn't exit until you decide to kill it. The problem with your Ruby script is that the main process finishes (leaving your child orphaned) before it has the chance to trap the signal.
My guess is that this is a test script, and the chances are that your desired program won't have the case where the parent finishes before the child. To see your trap working in a plain ruby script, add a sleep at the end:
pid = fork {sleep 2; puts 'hello'}
trap('CLD') { puts "pid: #{pid} exited with code"}
sleep 3
To populate the $? global variable, you should explicitly wait for the child process to exit:
pid = fork {sleep 2; puts 'hello'}
trap('CLD') { puts "pid: #{pid} exited with code #{$? >> 8}" }
Process.wait
If you do want the child to run after the parent process has died, you want a daemon (double fork).
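For reference, a double fork in plain Ruby looks roughly like this (a minimal sketch; on Ruby 1.9+ Process.daemon wraps up much of the same idea):
pid = fork do
  Process.setsid      # start a new session, detaching from the controlling terminal
  fork do
    sleep 2           # long-running work that should outlive the original script
    puts 'hello from the daemonized grandchild'
  end
  exit                # the intermediate child exits straight away
end
Process.wait(pid)     # reap the intermediate child; the grandchild keeps running on its own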
When you run your code in IRB, the main thread belongs to IRB, so everything you've called lives within a virtually infinite loop.
In the case of script execution, the main thread is your own, and it dies before the trap fires. Try this:
pid = fork {sleep 2; puts 'hello'}
trap('CLD') { puts "pid: #{pid} exited with code"}
sleep 5 # this is needed to prevent the main thread from dying right away
Hope it helps.