I am on Windows with Strawberry Perl. I have a GUI.pl application which runs script.pl, which in turn runs some.exe. The Perl script works as a proxy for STDIN/OUT/ERR between the GUI application and some.exe.
The problem is that I can't kill the some.exe process in the chain GUI.pl -> script.pl -> some.exe.
GUI.pl sends TERM to script.pl:
# GUI.pl
my $pid = open my $cmd, '-|', 'script.pl';
sleep 1;
kill 'TERM', $pid;
script.pl catches the TERM and tries to kill some.exe:
# script.pl
$SIG{TERM} = \&handler;
my $pid = open my $cmd, '-|', 'some.exe';
sub handler {
    kill 'TERM', $pid;
}
With this scheme, some.exe keeps running. I've already learned a lot about signals but still don't understand how to solve this problem.
Thanks in advance.
One of the solutions is to use threads:
# script.pl
use threads;
use threads::shared;

$SIG{BREAK} = \&handler;
my $pid :shared;

async {
    $pid = open my $cmd, '-|', 'some.exe';
}->detach;

# 1 second for the blocking opcode; after the sleep the handler can run
sleep 1;

sub handler {
    kill 'TERM', $pid;
}
I would be wary of using kill signals on Windows, as they're a POSIX thing: http://perldoc.perl.org/functions/kill.html
But I think the problem here is probably deferred signals. Specifically, if you send a signal to a perl process, the interpreter waits until it's "safe" to run the handler, and being blocked in the middle of reading from some.exe is unlikely to qualify.
Using kill signals in this way isn't a particularly good form of IPC. See perlmonks: Signals Vs. Windows for some useful discussion.
Signals on Windows are very idiosyncratic. You may have better luck with the INT or QUIT signals than TERM. My extensive research into how Perl and Windows handle signals is summarized here.
TL;DR: On Windows, TERM can terminate a process, but it cannot be handled. INT and QUIT can be handled, and their default behavior is to terminate the process. If you use Windows pseudo-processes (which is what you get if you call fork on Windows), then things quickly get more complicated.
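For instance, a minimal sketch of script.pl handling INT instead of TERM (a sketch, not the asker's code: it assumes GUI.pl sends INT, and the deferred-signals caveat mentioned above still applies):

# script.pl -- sketch: proxy some.exe's output, terminate it on SIGINT
my $pid = open my $cmd, '-|', 'some.exe'
    or die "Cannot start some.exe: $!";

$SIG{INT} = sub {
    kill 'KILL', $pid;   # on Windows, KILL on a real child PID maps to TerminateProcess
    close $cmd;
    exit 0;
};

# forward the child's output; a read blocked inside the C runtime
# may delay the handler (deferred signals)
print while <$cmd>;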
This is being done on Windows.
I am getting the error "The process cannot access the file because it is being used by another process." It seems that even after the child exits (exit 0) and the parent waits for it to complete (waitpid($lkpid, 0)), the child's subprocesses are not killed. So when the next iteration (test case) runs, it finds the process already running and reports the error.
Code Snippet ($bashexe and $bePath are defined):
my $MSROO = "/home/abc";
if (my $fpid = fork()) {
    for (my $i = 1; $i <= 1200; $i++) {
        sleep 1;
        if (-e "$MSROO/logs/Complete") {
            last;
        }
    }
}
elsif (defined($fpid)) {
    &runAndMonitor(\@ForRun, "$MSROO/logs/Test.log");   ### @ForRun has the list of test cases
    system("touch $MSROO/logs/Complete");
    exit 0;
}
sub runAndMonitor {
    my @ForRunPerProduct = @{$_[0]};
    my $logFile = $_[1];
    foreach my $TestVar (@ForRunPerProduct) {
        my $TestVarDirName = $TestVar;
        $TestVarDirName = dirname($TestVarDirName);
        my $lkpid;
        my $filehandle;
        my $pid;
        if ( !($pid = open( $filehandle, "-|" , " $bashexe -c \" echo abc \; perl.exe reg_script.pl $TestVarDirName -t wint\" >> $logFile ")) ) {
            die( "Failed to start process: $!" );
        }
        else {
            print "$pid is pid of shell running: $TestVar\n";   ### Issue (error message above) comes here after the piped open is launched for a new test
            my $taskInfo = `tasklist | grep "$pid"`;
            chomp($taskInfo);
            print "$taskInfo is taskInfo\n";
        }
        if ($lkpid = fork()) {
            sleep 1;
            chomp($lkpid);
            LabelToCheck:
            my $pidExistingOrNotInParent = kill 0, $pid;
            if ($pidExistingOrNotInParent) {
                sleep 10;
                goto LabelToCheck;
            }
        }
        elsif (defined($lkpid)) {
            sleep 12;
            my $pidExistingOrNot = kill 0, $pid;
            if ($pidExistingOrNot) {
                print "$pid still exists\n";
                my $taskInfoVar1 = `tasklist | grep "$pid"`;
                chomp($taskInfoVar1);
                my $killPID = kill 15, $pid;
                print "$killPID is the value of PID\n";   ### Here I get output 1 (the value of $killPID); I also tried signal 9 and saw the same behavior
                my $taskInfoVar2 = `tasklist | grep "$pid"`;
                sleep 10;
                exit 0;
            }
        }
        system("TASKKILL /F /T /PID $lkpid") if ($lkpid);   ### Here the child pid is not killed, saying "ERROR: The process "-1472" not found"
        sleep 2;
        print "$lkpid is lkpid\n";   ## Though I do get the message "-1472 is lkpid"
        #waitpid($lkpid, 0);
    }
    return;
}
Why is it that even after exit 0 in the child and then waitpid in the parent, the child's subprocesses are not killed? What can be done to fully clean up the child process and its subprocesses?
The exit doesn't touch child processes; it's not meant to. It just exits the process. In order to shut down its child processes as well you'd need to signal them.†
However, since this is Windows, where fork is merely emulated, here is what perlfork says
Behavior of other Perl features in forked pseudo-processes
...
kill() "kill('KILL', ...)" can be used to terminate a pseudo-process by passing it the ID returned by fork(). The outcome of kill on a
pseudo-process is unpredictable and it should not be used except under dire circumstances, because the operating system may not
guarantee integrity of the process resources when a running thread is terminated
...
exit() exit() always exits just the executing pseudo-process, after automatically wait()-ing for any outstanding child pseudo-processes. Note
that this means that the process as a whole will not exit unless all running pseudo-processes have exited. See below for some
limitations with open filehandles.
So don't use kill here, while exit behaves nearly opposite to what you need.
But the Windows command TASKKILL can terminate a process and its tree:
system("TASKKILL /F /T /PID $pid");
This should terminate a process with $pid and its children processes. (The command can use a process's name instead, TASKKILL /F /T /IM $name, but using names on a busy modern system, with a lot going on, can be tricky.) See taskkill on MS docs.
A more reliable way about this, altogether, is probably to use dedicated modules for Windows process management.
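For instance, a minimal sketch with Win32::Process (the executable path, command line, and timeout here are hypothetical):

use Win32;
use Win32::Process;

my $proc;
Win32::Process::Create(
    $proc,
    'C:\\full\\path\\to\\some.exe',   # application to run (hypothetical path)
    'some.exe arg1',                  # command line
    0,                                # don't inherit handles
    NORMAL_PRIORITY_CLASS,
    '.',                              # working directory
) or die Win32::FormatMessage(Win32::GetLastError());

# wait up to 60 seconds (the argument is in milliseconds) ...
$proc->Wait(60_000)
    or $proc->Kill(1);   # ... then hard-terminate (TerminateProcess) with exit code 1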
A few other comments
I also notice that you use pipe-open, while perlfork says for that
Forking pipe open() not yet implemented
The open(FOO, "|-") and open(BAR, "-|") constructs are not yet implemented.
So I am confused: does that pipe-open work in your code? But perlfork continues with
This limitation can be easily worked around in new code by creating a pipe explicitly. The following example shows how to write to a forked child: [full code follows]
That C-style loop, for (my $i=1; $i<=1200; $i++), is better written as
for my $i (1..1200) { ... }
(or foreach; they are synonyms). A C-style loop is very rarely needed in Perl.
† A kill with a negative signal (name or number) or a negative process-id signals the entire process group, which generally terminates the whole tree under the signaled process. This is on Linux.
So one way would be to signal that child from its parent when ready, instead of exit-ing from it. (Then the child would have to signal the parent in some way when it's ready.)
Or, the child can send a negative terminate signal to all its direct children, then exit, as in the sketch below.
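In Perl that could look like this sketch (assuming the child has kept the PIDs of its direct children in a hypothetical @kids, and that each of them leads its own process group):

# signal each child's whole process group, then exit (Linux semantics)
kill -15, $_ for @kids;
exit 0;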
You didn't say which perl you are using. On Windows with Strawberry Perl (and presumably ActiveState), fork() emulation is ... very problematic (maybe just "broken"), as @zdim mentioned. If you want a longer explanation, see Proc::Background::Win32 - Perl Fork Limitations.
Meanwhile, if you use Cygwin's Perl, fork works perfectly. This is because Cygwin does a full emulation of Unix fork() semantics, so anything built against cygwin works just like it does on Unix. The downside is that file paths show up weird, like /cygdrive/c/Program Files. This may or may not trip up code you've already written.
But you might also be confused about process trees. Even on Unix, killing a parent process does not automatically kill the child processes. It usually happens, for various reasons, but it is not enforced. For example, most child processes have a pipe open to the parent; when the parent exits the pipe closes, and the next write to it raises SIGPIPE, which kills the child. In other cases, the parent catches SIGTERM and re-broadcasts it to its children before exiting gracefully. In yet other cases, monitors like systemd or Docker create a container inherited by all children of the main process, and when the main process exits the monitor kills off everything else in the container.
Since it looks like you're writing your own task monitor, I'll give some advice from one that I wrote for Windows (and which is still running happily years later). I ended up with a design using Proc::Background where the parent starts a task whose STDOUT/STDERR are redirected to a file. The monitor then opens that same log file and wakes up every few seconds to read more of it and see what the task is doing, and checks the Proc::Background object to see whether the task has exited. When the task exits, the monitor appends the exit code and a timestamp to the log file. The monitor has a timeout setting; if the child exceeds it, the monitor just un-gracefully runs TerminateProcess. (You could improve on that by leaving STDIN open as a pipe between monitor and worker and having the worker check STDIN every now and then, but on Windows that read will block, so you have to use PeekNamedPipe, which gets messy.)
Meanwhile, the monitor parses any new lines of the log file to read status information and send updates to the database. The other parts of the system can watch the database to see the status of background tasks, including a web admin interface that can also open and read the log file. If the monitor sees that a child has run for too long, it can use TerminateProcess to stop it. Missing from this design is any way for the monitor to know when it's being asked to exit, and clean up, which is a notable deficiency, and one you're probably looking for. However, there actually isn't any way to intercept a TerminateProcess aimed at the parent! Windows does have some Message Queue API stuff where you can set up to receive notifications about termination, but I never chased down the full details there. If you do, please come back and drop a comment for me :-)
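A stripped-down sketch of that monitor loop (worker.pl, the timeout, and the log handling are placeholders; the log parsing and database updates are omitted):

use Proc::Background;

# start the worker; recent Proc::Background versions also accept
# stdout/stderr options if you want the log-file redirection described above
my $proc = Proc::Background->new($^X, 'worker.pl');

my $deadline = time + 300;   # hypothetical 5-minute timeout
while ($proc->alive) {
    # ... read any new lines of the worker's log file here ...
    if (time > $deadline) {
        $proc->die;          # on Windows this ends in TerminateProcess
        last;
    }
    sleep 2;
}
my $exit = $proc->wait;      # reap the process and collect its exit status
print "worker exited with status $exit\n";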
I need to kill a process using the same command in both sh and bash (in a script). Normally I would do the following in bash:
SCRIPT=$(basename $0) #So the script knows itself
killall -9 $SCRIPT #Kill itself
However, this does not seem to work using sh.
Any suggestions for a solution that will work in either?
Is there an easier or more correct way to completely exit a script? It seems I have revisited this question many times over the years and never found the official correct way.
Basically, to let the script kill itself, point kill at $$, which holds the process ID of the shell.
kill "$$"
Avoid SIGKILL (9) when it isn't necessary; only use it on applications that have become seriously unresponsive.
The default signal sent is SIGTERM (15), and there are other signals that can also terminate the process and may be safer than SIGKILL. Among those are SIGQUIT, SIGABRT, and SIGHUP.
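For example, to send an explicit signal (sh-compatible):

kill -s TERM "$$"   # the same as a plain kill "$$"
kill -s QUIT "$$"   # still catchable by a trap, unlike SIGKILL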
/bin/sh -version
GNU sh, version 1.14.7(1)
exitfn () {
    # Log that we are exiting via the trap,
    echo "exiting with trap" >> /tmp/logfile
    rm -f /var/run/lockfile.pid   # remove the lock file,
    exit                          # then exit the script.
}
trap 'exitfn; exit' SIGINT SIGQUIT SIGTERM SIGKILL SIGHUP
The above is my function in a shell script. I want it to be called under some special conditions, like when:
"kill -9" fires on pid of this script
"ctrl + z" press while it is running on -x mode
server reboots while script is executing ..
In short, any kind of interruption of the script should trigger some action, e.g. rm -f /var/run/lockfile.pid, but my function above is not working properly; it only works for a terminal close or Ctrl+C.
Kindly don't suggest upgrading the bash/sh version.
SIGKILL cannot be trapped by the trap command, or by any process. It is a guaranteed kill signal, which by definition cannot be trapped. Thus upgrading your sh/bash will not help anyway.
You can't trap kill -9; that's the whole point of it: to violently destroy processes that don't respond to other signals (there's a workaround for this, see below).
The server reboot should first deliver a signal to your script which should be caught with what you have.
As to Ctrl-Z, that also gives you a signal, SIGTSTP, so you may want to add that. Though that wouldn't normally be a reason to shut down your process, since it may then be put into the background and restarted (with bg).
As to what to do in those situations where your process dies without a catchable signal (like the -9 case), the program should check for that on startup.
By that, I mean lockfile.pid should store the actual PID of the process that created it (by using echo $$ >/var/run/myprog_lockfile.pid for example) and, if you try to start your program, it should check for the existence of that process.
If the process doesn't exist, or it exists but isn't the right one (based on name usually), your new process should delete the pidfile and carry on as if it was never there. If the old process both exists and is the right one, your new process should log a message and exit.
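A sketch of that startup check in sh (the pidfile path is the example one from above; a stricter version would also compare the process name):

#!/bin/sh
PIDFILE=/var/run/myprog_lockfile.pid

oldpid=`cat "$PIDFILE" 2>/dev/null`
if [ -n "$oldpid" ] && kill -0 "$oldpid" 2>/dev/null; then
    # a live process already owns the lock
    echo "already running as PID $oldpid" >&2
    exit 1
fi

echo $$ > "$PIDFILE"   # pidfile stale or missing: take over
trap 'rm -f "$PIDFILE"; exit' INT QUIT TERM HUP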
I have a perl script that runs a series of batch scripts for regression testing. I want to implement a timeout on the batch scripts. I currently have the following code.
my $pid = open CMD, "$cmd 2>&1 |";
eval {
    # set up the alarm
    local $SIG{ALRM} = sub { die "alarm\n" };
    # alarm on the timeout
    alarm $MAX_TIMEOUT;
    log_output("setting alarm to $MAX_TIMEOUT\n");
    # run our exe
    while ( <CMD> ) {
        $$out_ref .= $_;
    }
    $timeRemaining = alarm 0;
};
if ($@) {
    # catch the alarm, kill the executable
}
The problem is that no matter what I set the max timeout to, the alarm is never tripped. I've tried using Perl::Unsafe::Signals but that did not help.
Is this the best way to execute the batch scripts if I want to be able to capture their output? Is there another way that would do the same thing that would allow me to use alarms, or is there another method besides alarms to timeout the program?
I have built a test script to confirm that alarm works with my Perl and Windows version, but it does not work when I run a command like this.
I'm running this with ActivePerl 5.10.1 on Windows 7 x64.
It's hard to tell when alarm will work, when a system call will and won't get interrupted by a SIGALRM, how the same code might behave differently on different operating systems, etc.
If your job times out, you want to kill the subprocess you have started. This is a good use case for the poor man's alarm:
my $pid = open CMD, "$cmd 2>&1 |";
my $time = $MAX_TIMEOUT;
my $poor_mans_alarm = "sleep 1,kill(0,$pid)||exit for 1..$time;kill -9,$pid";
if (fork() == 0) {
    exec($^X, "-e", $poor_mans_alarm);
    die "Poor man's alarm failed to start";   # shouldn't get here
}
# on Windows, instead of fork+exec, you can say
# system 1, qq[$^X -e "$poor_mans_alarm"]
...
The poor man's alarm runs in a separate process. Every second, it checks whether the process with identifier $pid is still alive. If the process isn't alive, the alarm process exits. If the process is still alive after $time seconds, it sends a kill signal to the process (I used 9 to make it untrappable and -9 to take out the whole subprocess tree; your needs may vary).
(The exec actually may not be necessary. I use it because I also use this idiom to monitor processes that might outlive the Perl script that launched them. Since that wouldn't be the case with this problem, you could skip the exec call and say
if (fork() == 0) {
    for (1..$time) { sleep 1; kill(0, $pid) || exit }
    kill -9, $pid;
    exit;
}
instead.)
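Putting it together for the code in the question, a sketch using the system 1, LIST spawn mentioned above ($cmd, $MAX_TIMEOUT, and $out_ref are the question's variables; plain 9 is used here because negative-signal semantics differ on Windows):

my $pid = open CMD, "$cmd 2>&1 |" or die "Cannot run $cmd: $!";

# watchdog: probe the child once a second, hard-kill it after the timeout
my $watchdog = "sleep 1, kill(0, $pid) || exit for 1..$MAX_TIMEOUT; kill 9, $pid";
system 1, qq[$^X -e "$watchdog"];   # 'system 1, LIST' spawns without waiting

while (<CMD>) {
    $$out_ref .= $_;   # capture output until the child exits or is killed
}
close CMD;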
I have a script for launchd to run that starts a server, then tells it to exit gracefully when launchd kills it off (which should be at shutdown). My question: what is the appropriate, idiomatic way to tell the script to idle until it gets the signal? Should I just use a while-true-sleep-1 loop, or is there a better way to do this?
#!/bin/bash
cd "`dirname "$0"`"
trap "./serverctl stop" TERM
./serverctl start
# wait to receive TERM signal.
You can simply use "sleep infinity". If you want to perform more actions on shutdown and don't want to create a function for that, an alternative could be:
#!/bin/bash
sleep infinity & PID=$!
trap "kill $PID" INT TERM
echo starting
# commands to start your services go here
wait
# commands to shutdown your services go here
echo exited
Another alternative to "sleep infinity" (busybox, for example, doesn't seem to support it) could be "tail -fn0 $0".
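Applied to the script from the question, that pattern looks like this sketch:

#!/bin/bash
cd "$(dirname "$0")"
./serverctl start

sleep infinity &
SLEEP_PID=$!
trap './serverctl stop; kill "$SLEEP_PID"' TERM
wait "$SLEEP_PID"   # bash interrupts the wait when the trapped TERM arrives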
A plain wait would be significantly less resource-intensive than a spin lock, even with a sleep in it.
Why would you like to keep your script running? Is there any reason? If you don't do anything after the signal, then I don't see a reason for it.
When you get TERM from shutdown, your serverctl and the server executable (if there is one) also get TERM at the same time.
To do this by design, you have to install your serverctl script as an rc script and let init (start and) stop it. Here I described how to set up a server process that is not originally designed to work as a server.