I have used the following pattern of code in Perl on a Unix system, but it crashes on Windows. How can I achieve the same thing on Windows in Perl, using either forking or threads?
use Parallel::ForkManager;
my $pm = Parallel::ForkManager->new($MAX_PROCESSES);
DATA_LOOP:
foreach my $data (@all_data) {
# Forks and returns the pid for the child:
my $pid = $pm->start and next DATA_LOOP;
# ... do some work with $data in the child process ...
$pm->finish; # Terminates the child process
}
Here is one example using fork:
#!/usr/bin/perl -w
use strict;
foreach my $data (@all_data) {
my $pid;
next if $pid = fork; # Parent goes to next server.
die "fork failed: $!" unless defined $pid;
# From here on, we're in the child. Do whatever the
# child has to do... The server we want to deal
# with is in $data.
exit; # Ends the child process.
}
# The following waits until all child processes have
# finished, before allowing the parent to die.
1 while (wait() != -1);
print "All done!\n";
So, long story short, I'm trying to run a Linux Perl script on Windows (with a few modifications).
On Unix it works just fine, but on Windows I have come to the conclusion that calling system doesn't work the same way as on Unix, and so it doesn't create multiple processes.
Below is the code:
use strict;
use warnings;
open (FIN, 'words.txt');
while (<FIN>) {
chomp;
my $line = $_;
system( "perl script.pl $line &" );
}
close (FIN);
So basically, I have 5 different words in "words.txt", and I want each and every one of them to be used, one by one, when calling script.pl, which means:
script.pl word1
script.pl word2
script.pl word3
etc.
As of now it only picks up the first word in words.txt and loops with that one alone. As I said, on Unix it works perfectly, but not on Windows.
I've tried to use "start", as in system( "start perl script.pl $line &" );, and it works... except it opens 5 additional CMD windows to do the work. I want it to do the work in the same window.
If anyone has any idea how this can work on Windows, I'll really appreciate it.
Thanks!
According to perlport:
system
(Win32) [...] system(1, @args) spawns an external process and immediately returns its process designator, without waiting for it to terminate. Return value may be used subsequently in wait or waitpid. Failure to spawn() a subprocess is indicated by setting $? to 255 << 8. $? is set in a way compatible with Unix (i.e. the exit status of the subprocess is obtained by $? >> 8, as described in the documentation).
I tried this:
use strict;
use warnings;
use feature qw(say);
say "Starting..";
my @pids;
for my $word (qw(word1 word2 word3 word4 word5)) {
my $pid = system(1, "perl script.pl $word" );
if ($? == -1) {
say "failed to execute: $!";
}
push @pids, $pid;
}
# wait for all children to finish
for my $pid (@pids) {
say "Waiting for child $pid ..";
my $ret = waitpid $pid, 0;
if ($ret == -1) {
say " No such child $pid";
}
if ($? & 127) {
printf " child $pid died with signal %d\n", $? & 127;
}
else {
printf " child $pid exited with value %d\n", $? >> 8;
}
}
say "Done.";
With the following child script script.pl:
use strict;
use warnings;
use feature qw(say);
say "Starting: $$";
sleep 2+int(rand 5);
say "Done: $$";
sleep 1;
exit int(rand 10);
I get the following output:
Starting..
Waiting for child 7480 ..
Starting: 9720
Starting: 10720
Starting: 9272
Starting: 13608
Starting: 13024
Done: 13608
Done: 10720
Done: 9272
Done: 9720
Done: 13024
child 7480 exited with value 9
Waiting for child 13344 ..
child 13344 exited with value 5
Waiting for child 17396 ..
child 17396 exited with value 3
Waiting for child 17036 ..
child 17036 exited with value 6
Waiting for child 17532 ..
child 17532 exited with value 8
Done.
Seems to work fine..
You can use Win32::Process to get finer control over creating a new process than system gives you on Windows. In particular, the following doesn't create a new console for each process like using system("start ...") does:
#!/usr/bin/env perl
use warnings;
use strict;
use feature qw/say/;
# Older versions don't work with an undef appname argument.
# Use the full path to perl.exe on them if you can't upgrade
use Win32::Process 0.17;
my @lines = qw/foo bar baz quux/; # For example instead of using a file
my @procs;
for my $line (@lines) {
    my $proc;
    if (!Win32::Process::Create($proc, undef, "perl script.pl $line", 1,
            NORMAL_PRIORITY_CLASS, ".")) {
        $_->Kill(1) for @procs;
        die "Unable to create process: $!\n";
    }
    push @procs, $proc;
}
$_->Wait(INFINITE) for @procs;
# Or
# use Win32::IPC qw/wait_all/;
# wait_all(@procs);
As Yet Another Way To Do It, the start command takes a /b option to not open a new command prompt.
system("start /b perl script.pl $line");
Hi, I am trying to execute code by forking a child process.
I have Ruby code a.rb in which I use a system call:
system("perl abc.pl -subroutine='a' -command='b' -status='c' -logfile='d' -start_datetime='e'")
Now in abc.pl I am doing this:
my $pid = fork;
if (!$pid) {
print "[INFO] in the child process \n ";
print "[INFO] forking worked, child process id: ($$) \n";
while (1) {
if ( -d "$ENV{OUT_HOME}" ) {
print "[INFO] $ENV{OUT_HOME} is available now to write $seq \n";
open(my $FH,'>',"$ENV{OUT_HOME}/SEQ") or die "cannot open file to write $seq into $ENV{OUT_HOME}/SEQ";
print $FH "$seq";
close $FH;
print "[INFO] exiting from forked child process id: ($$) \n";
exit; # ends the child process (this also ends the while loop)
}
else {
sleep (2);
}
}
} else {
print "[INFO] I am parent Process exiting from here \n";
exit;
}
Now with this I am trying to get a child process which is not dependent on the parent process (here that is abc.pl). When I run the script, it executes the system command that runs the Perl script, and I see the Perl script in the process tree, but it keeps hanging there. I am not sure why the system command doesn't return after the parent is done. Any suggestions or comments are appreciated.
You need to create a new POSIX session for your newly-independent forked process.
Try this (assuming that the environment variable OUT_HOME has been set to a folder before calling perl):
my $pid = fork;
if (!$pid) {
print "[INFO] in the child process \n ";
print "[INFO] forking worked, child process id: ($$) \n";
require POSIX;
POSIX::setsid();
while (1) {
if ( -d "$ENV{OUT_HOME}" ) {
print "[INFO] $ENV{OUT_HOME} is available now to write $seq \n";
open(my $FH,'>',"$ENV{OUT_HOME}/SEQ") or die "cannot open file to write $seq into $ENV{OUT_HOME}/SEQ";
print $FH "$seq";
close $FH;
print "[INFO] exiting from forked child process id: ($$) \n";
exit; # ends the child process (this also ends the while loop)
}
else {
sleep (2);
}
}
} else {
print "[INFO] I am parent Process exiting from here \n";
exit;
}
This is addressed in perlfaq8:
How do I fork a daemon process?
If by daemon process you mean one that's detached (disassociated from its tty), then the following process is reported to work on most Unixish systems. Non-Unix users should check their Your_OS::Process module for other solutions.
Open /dev/tty and use the TIOCNOTTY ioctl on it. See tty(1) for details. Or better yet, you can just use the POSIX::setsid() function, so you don't have to worry about process groups.
Change directory to /
Reopen STDIN, STDOUT, and STDERR so they're not connected to the old tty.
Background yourself like this:
fork && exit;
The Proc::Daemon module, available from CPAN, provides a function to perform these actions for you.
As the FAQ states, the Proc::Daemon module can simplify this for you, although this is just one of several modules available for this task.
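A minimal sketch of the Proc::Daemon route, assuming the module is installed from CPAN:
#!/usr/bin/perl
use strict;
use warnings;
use Proc::Daemon;
# Detach from the terminal: Init() forks, calls setsid, changes the working
# directory to /, and reopens STDIN/STDOUT/STDERR away from the old tty.
Proc::Daemon::Init();
# Only the detached daemon process reaches this point; the original
# process has already exited inside Init().
while (1) {
    # ... do the background work here ...
    sleep 60;
}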
In the following Perl script, I intend to time out the execution of the script child_script.pl if it runs beyond 1 hour. However, the logic doesn't seem to be working, as the script is still running after the specified time limit.
Any guesses as to what I am doing wrong here?
I'm referring to the following documentation for implementing the timeout in Perl:
https://docstore.mik.ua/orelly/perl4/cook/ch16_22.htm
It works fine for a standalone command. Is it possible that the system command has an issue?
My code (parent_script.pl):
#!/usr/bin/perl
my $sys_cmd = " perl child_script.pl 2>&1 | tee logfile.txt \n";
print "INFO: Enter alarm..\n";
eval {
local $SIG{ALRM} = sub { die "alarm clock restart" };
alarm 3600; # schedule alarm in 1 hour
eval {
print "INFO: Run script.. \n";
system ($sys_cmd);
};
alarm 0; # cancel the alarm
};
alarm 0; # race condition protection
die if $@ && $@ !~ /alarm clock restart/; # reraise
print "INFO: Exit alarm..\n";
You don't time out the script which you have started; you set the alarm for the current (parent) script. You would have to kill the child process (the one you have started using system) from your alarm-sub.
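A minimal sketch of that idea (not the original code): fork and exec the command yourself so the handler has a PID to signal, and put the child in its own process group so one kill reaches the whole tee pipeline:
my $child = fork();
die "fork failed: $!" unless defined $child;
if ($child == 0) {
    setpgrp(0, 0);    # new process group, so the whole pipeline can be signalled
    exec("perl child_script.pl 2>&1 | tee logfile.txt")
        or die "exec failed: $!";
}
eval {
    local $SIG{ALRM} = sub {
        kill 'TERM', -$child;     # signal the child's process group, not ourselves
        die "alarm clock restart\n";
    };
    alarm 3600;
    waitpid $child, 0;            # returns when the child exits or the alarm fires
    alarm 0;
};
die $@ if $@ && $@ !~ /alarm clock restart/;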
EDIT:
If you have /usr/bin/timeout (as it would be the case if you are running on Linux), it would perhaps be more convenient to use this command for handling the timeout, instead of re-implementing the logic in Perl.
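For example, a sketch assuming GNU coreutils timeout is on the PATH; it sends SIGTERM to child_script.pl after the limit, and tee ends on its own once the pipe closes:
# Note: the pipeline's exit status ($?) reflects tee, not timeout,
# so a timeout is best detected from child_script.pl's own logging.
my $sys_cmd = "timeout 3600 perl child_script.pl 2>&1 | tee logfile.txt";
system($sys_cmd);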
The SIGALRM signal may or may not be sent while your system call is running, so alarm may be flaky. This is a good use case for the poor man's alarm.
my $sys_cmd = "perl child_script.pl 2>&1 | tee logfile.txt";
# step 1. Start your long-running command and capture its process id
my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {
exec($sys_cmd);
}
# step 2. Start another subprocess for the poor man's alarm.
my $time = 3600;
if (fork() == 0) {
exec("$^X","-e","sleep 1,kill(0,$pid)||exit for 1..$time;kill -9,$pid");
}
# step 3. wait for first process to finish or be killed
my $c = waitpid $pid, 0;
if (($? & 127) == 9) {
print "Process timed out and was killed by the poor man's alarm\n";
} else {
print "Process finished without timing out.\n";
}
The poor man's alarm runs in a separate process with two parameters: a $pid to monitor and a $time to wait. It periodically checks to see if the process being monitored is still alive. If the process is no longer alive, then the poor man's alarm also exits without doing anything. After $time seconds have passed, and the monitored process is still hanging around, the poor man's alarm sends a kill signal to the process, which should terminate it.
I have a Perl script (snippet below) that runs from cron to perform system checks. I fork a child as a timeout and reap it with SIG{CHLD}. Perl makes several system calls to Bash scripts and checks their exit status. One Bash script fails about 5% of the time with no error: the Bash script exits with 0, but Perl sees $? as -1 and $! as "No child processes".
This Bash script tests compiler licenses, and Intel icc is left around after the Bash script completes (ps output below). I think the icc zombie completes, forcing Perl into the SIG{CHLD} handler, which blows away the $? status before I'm able to read it.
Compile status -1; No child processes
#!/usr/bin/perl
use strict;
use POSIX ':sys_wait_h';
my $GLOBAL_TIMEOUT = 1200;
### Timer to notify if this program hangs
my $timer_pid;
$SIG{CHLD} = sub {
local ($!, $?);
while((my $pid = waitpid(-1, WNOHANG)) > 0)
{
if($pid == $timer_pid)
{
die "Timeout\n";
}
}
};
die "Unable to fork\n" unless(defined($timer_pid = fork));
if($timer_pid == 0) # child
{
sleep($GLOBAL_TIMEOUT);
exit;
}
### End Timer
### Compile test
my @compile = `./compile_test.sh 2>&1`;
my $status = $?;
print "Compile status $status; $!\n";
if($status != 0)
{
print "#compile\n";
}
END # Timer cleanup
{
if($timer_pid != 0)
{
$SIG{CHLD} = 'IGNORE';
kill(15, $timer_pid);
}
}
exit(0);
#!/bin/sh
cc compile_test.c
if [ $? -ne 0 ]; then
echo "Cray compiler failure"
exit 1
fi
module swap PrgEnv-cray PrgEnv-intel
cc compile_test.c
if [ $? -ne 0 ]; then
echo "Intel compiler failure"
exit 1
fi
wait
ps
exit 0
The wait doesn't really wait, because cc calls icc, which creates a zombie grandchild process that wait (or wait PID) doesn't block for. (wait `pidof icc`, 31589 in this case, gives "not a child of this shell".)
user 31589 1 0 12:47 pts/15 00:00:00 icc
I just don't know how to fix this in Bash or Perl.
Thanks, Chris
Isn't this a use case for alarm? Toss out your SIGCHLD handler and say
local $? = -1;
eval {
local $SIG{ALRM} = sub { die "Timeout\n" };
alarm($GLOBAL_TIMEOUT);
@compile = `./compile_test.sh 2>&1`;
alarm(0);
};
my $status = $?;
instead.
I thought the quickest solution would be to add a sleep of a second or two at the bottom of the Bash script to wait for the zombie icc to complete, but that didn't work.
If I didn't already have a SIGALRM handler (in the real program), I agree the best choice would be to wrap the whole thing in an eval, even though that would be pretty ugly for a 500-line program.
Without the local($?), every `system` call gets $? = -1. The $? I need in this case is the one set right after waitpid, which is unfortunately reset to -1 after the signal handler exits. So I find this works. New lines shown with ###:
my $timer_pid;
my $chld_status; ###
$SIG{CHLD} = sub {
local($!, $?);
while((my $pid = waitpid(-1, WNOHANG)) > 0)
{
$chld_status = $?; ###
if($pid == $timer_pid)
{
die "Timeout\n";
}
}
};
...
my @compile = `./compile_test.sh 2>&1`;
my $status = ($? == -1) ? $chld_status : $?; ###
...
We had a similar issue; here is our solution: leak a write-side file descriptor into the grandchild and read() from it, which blocks until the grandchild exits.
See also: wait for children and grand-children
use Fcntl;
# OCF scripts invoked by Pacemaker will be killed by Pacemaker with
# a SIGKILL if the script exceeds the configured resource timeout. In
# addition to killing the script, Pacemaker also kills all of the children
# invoked by that script. Because it is a kill, the scripts cannot trap
# the signal and clean up; because all of the children are killed as well,
# we cannot simply fork and have the parent wait on the child. In order
# to work around that, we need the child not to have a parent process
# of the OCF script---and the only way to do that is to grandchild the
# process. However, we still want the parent to wait for the grandchild
# process to exit so that the OCF script exits when the grandchild is
# done and not before. This is done by leaking the write file descriptor
# from pipe() into the grandchild and then the parent reads the read file
# descriptor, thus blocking until it gets IO or the grandchild exits. Since
# the file descriptor is never written to by the grandchild, the parent
# blocks until the child exits.
sub grandchild_wait_exit
{
# We use "our" instead of "my" for the write side of the pipe. If
# we did not, then when the sub exits and $w goes out of scope,
# the file descriptor will close and the parent will exit.
pipe(my $r, our $w);
# Enable leaking the file descriptor into the children
my $flags = fcntl($w, F_GETFD, 0) or warn $!;
fcntl($w, F_SETFD, $flags & (~FD_CLOEXEC)) or die "Can't set flags: $!\n";
# Fork the child
my $child = fork();
if ($child) {
# We are the parent, waitpid for the child and
# then read to wait for the grandchild.
close($w);
waitpid($child, 0);
<$r>;
exit;
}
# Otherwise we are the child, so close the read side of the pipe.
close($r);
# Fork a grandchild, exit the child.
if (fork()) {
exit;
}
# Turn off leaking of the file descriptor in the grandchild so
# that no other process can write to the open file descriptor
# that would prematurely exit the parent.
$flags = fcntl($w, F_GETFD, 0) or warn $!;
fcntl($w, F_SETFD, $flags | FD_CLOEXEC) or die "Can't set flags: $!\n";
}
grandchild_wait_exit();
sleep 1;
print getppid() . "\n";
print "$$: gc\n";
sleep 30;
exit;
Why does the following code:
#!/usr/bin/perl
use strict;
use warnings;
use Parallel::ForkManager;
my $pm = new Parallel::ForkManager(5);
my @all_data = ('a','b','c','d','e','f');
foreach my $data (@all_data) {
# Forks and returns the pid for the child:
my $pid = $pm->start and next;
print "Hello $pid\n";
$pm->finish; # Terminates the child process
}
$pm->wait_all_children;
print:
Hello 0
Hello 0
Hello 0
Hello 0
Hello 0
I am new to Perl and I am trying to catch up on multiprocessing in Perl.
From the docs for the start method:
This method does the fork. It returns the pid of the child process for the parent, and 0 for the child process.
As it happens, the fork function does the same, which is why start mirrors it.
The parent may need the child PID to control the child – sending signals and stuff – but the child knows its own PID via the $$ variable:
foreach my $data (@all_data) {
$pm->start and next;
print "Hello $$\n";
$pm->finish;
}
Example output:
Hello 22215
Hello 22218
Hello 22219
Hello 22217
Hello 22220
Hello 22216
my $pid = $pm->start and next;
"and" logic will evaluate to true if both of the arguments are true. If the first argument is false, then the "and" logic will shortcut and will not evaluate the second argument
You might want to use "or" logic instead.
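A tiny illustration of that short-circuiting, using made-up values for what start returns (0 in the child, the child's PID in the parent):
my $in_child  = 0;        # what $pm->start returns in the child
my $in_parent = 22215;    # made-up PID: what $pm->start returns in the parent
$in_child  and print "not printed: 0 is false, so 'and' stops here\n";
$in_parent and print "printed: the left side is true, so the right side runs\n";
In the Parallel::ForkManager loop this means only the parent executes next; the child, which received 0 from start, falls through into the loop body.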