How can I fork a background process from a Perl CGI script on Windows?

I've had some trouble forking off processes from a Perl CGI script when running on Windows. The main issue seems to be that fork is emulated on Windows and doesn't actually create a new process (just another thread in the current one). This means that web servers (like IIS) that wait for the process to finish keep waiting until the 'background' process finishes.
Is there a way of forking off a background process from a CGI script under Windows? Even better, is there a single function I can call that will do this in a cross-platform way?
(And just to make life extra difficult, I'd really like a good way to redirect the forked process's output to a file at the same time.)

If you want to do this in a platform-independent way, Proc::Background is probably the best option.
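For example, a minimal sketch (the job script name is a placeholder; letting the job write its own log file sidesteps the output-redirection problem):
use Proc::Background;
# Start the job detached from this request; Proc::Background uses
# fork/exec on Unix and Win32::Process on Windows.
my $job = Proc::Background->new($^X, 'slow-job.pl');
# The CGI script can now print its response and exit; the job keeps running.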

Use Win32::Process::Create with the DETACHED_PROCESS creation flag.
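Roughly like this (a sketch; the perl.exe path and task script are assumptions for your installation):
use Win32;
use Win32::Process;
my $proc;
Win32::Process::Create(
    $proc,
    'C:\\strawberry\\perl\\bin\\perl.exe',  # full path to the executable (assumed)
    'perl background-task.pl',              # command line the child sees
    0,                                      # don't inherit handles
    DETACHED_PROCESS,                       # no console; not tied to the CGI process
    '.',                                    # working directory
) or die Win32::FormatMessage(Win32::GetLastError());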

perlfork:

Perl provides a fork() keyword that corresponds to the Unix system call of the same name. On most Unix-like platforms where the fork() system call is available, Perl's fork() simply calls it.

On some platforms such as Windows where the fork() system call is not available, Perl can be built to emulate fork() at the interpreter level. While the emulation is designed to be as compatible as possible with the real fork() at the level of the Perl program, there are certain important differences that stem from the fact that all the pseudo child "processes" created this way live in the same real process as far as the operating system is concerned.

I've found real problems with fork() on Windows, especially when dealing with Win32 objects in Perl. So if the code is going to be Windows-specific, I'd really recommend looking at Perl's threads module instead.
I use this to good effect to accept more than one connection at a time on websites using IIS, and then use even more threads to execute different scripts all at once.
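A minimal sketch of that pattern (accept_connection() and handle_request() are hypothetical stand-ins for the real IIS/CGI plumbing):
use threads;
# Serve each connection in its own detached thread so the main loop
# can go straight back to accepting.
while (my $conn = accept_connection()) {
    threads->create(sub { handle_request($conn) })->detach;
}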

This question is very old, and the accepted answer is correct. However, I just got this to work, and figured I'd add some more detail about how to accomplish it for anyone who needs it.
The following code exists in a very large Perl CGI script. This particular subroutine creates tickets in multiple ticketing systems, then uses the returned ticket numbers to make an automated call via Twilio services. The call takes a while, and I didn't want the CGI users to have to wait until the call ended to see the output from their request. To that end, I did the following:
(All the standard CGI code runs first and calls the subroutine it needs, and then:)
my $randnum = int(rand(100000));
my $callcmd = $progdir_path . "/aoff-caller.pl --uniqueid $uuid --region $region --ticketid $ticketid";
my $daemon = Proc::Daemon->new(
    work_dir     => $progdir_path,
    child_STDOUT => $tmpdir_path . '/stdout.txt',
    child_STDERR => $tmpdir_path . '/stderr.txt',
    pid_file     => $tmpdir_path . '/' . $randnum . '-pid.txt',
    exec_command => $callcmd,
);
my $pid = $daemon->Init();
exit 0;
(The CGI exits at the appropriate place.)
I am sure that the random number generated and attached to the pid file is overkill, but I have no interest in creating issues that are extremely easy to avoid. Hopefully this helps someone looking to do the same sort of thing. Remember to add use Proc::Daemon; at the top of your script, mirror the code above, adjust the paths and names for your program, and you should be good to go.

Related

Making STDIN unbuffered under Windows in Perl

I am trying to do input processing (from the console) in Perl asynchronously. My first approach was to use IO::Select but that does not work under Windows.
I then came across the post Non-buffered processor in Perl which roughly suggests this:
binmode STDIN;
binmode STDOUT;
STDIN->blocking(0) or warn $!;
STDOUT->autoflush(1);
while (1) {
    my $buffer;
    my $read_count = sysread(STDIN, $buffer, 4096);
    if (not defined($read_count)) {
        next;       # no data available yet
    } elsif (0 == $read_count) {
        exit 0;     # EOF
    }
    # ... process $buffer here ...
}
That works as expected for regular Unix systems but not for Windows, where the sysread actually does block. I have tested that on Windows 10 with 64-bit Strawberry Perl 5.32.1.
When you check the return value of blocking() (as done in the code above), it turns out that the call fails with the funny error message "An operation was attempted on something that is not a socket".
Edit: My application is a chess engine that theoretically can be run interactively in a terminal but usually communicates via pipes with a GUI. Therefore, Win32::Console does not help.
Has something changed since the blog post was published? The author explicitly claims that this approach works for Windows. Is there any other option that I can go with, maybe some module from the Win32:: namespace?
The solution I now implemented in https://github.com/gflohr/Chess-Plisco/blob/main/lib/Chess/Plisco/Engine.pm (search for the method __msDosSocket()) can be outlined as follows:
If Windows is detected as the operating system, create a temporary file as a Unix domain socket with IO::Socket::UNIX for writing.
Do a fork(), which on Windows actually creates a thread in Perl because the system has no real fork().
In the "parent", create another instance of IO::Socket::UNIX with the same path for reading.
In the "child", read from standard input with getline(). This blocks, of course. Every line read is echoed to the write end of the socket.
The "parent" uses the read-end of the socket as a replacement for standard input and puts it into non-blocking mode. That works even under Windows because it is a socket.
From here on, everything is working the same as under Unix: All input is read in non-blocking mode with IO::Select.
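In code, the core of the trick looks roughly like this (a sketch under the assumptions above, not the exact Chess::Plisco code):
use IO::Socket::UNIX;
use IO::Select;
use File::Temp ();

my $path = File::Temp::tmpnam();           # temporary socket path
my $listener = IO::Socket::UNIX->new(Local => $path, Listen => 1)
    or die "listen: $!";

my $pid = fork // die "fork: $!";          # a thread in disguise on Windows
if (!$pid) {
    # "Child": blocking reads from STDIN, echoed into the socket.
    my $writer = IO::Socket::UNIX->new(Peer => $path) or die "connect: $!";
    while (defined(my $line = <STDIN>)) {
        print {$writer} $line;
        exit 0 if $line =~ /^quit\b/;      # self-terminate on "quit"
    }
    exit 0;
}

# "Parent": the accepted socket replaces STDIN and can be non-blocking.
my $reader = $listener->accept or die "accept: $!";
$reader->blocking(0);
my $select = IO::Select->new($reader);     # poll as under Unix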
Instead of a Unix domain socket, it is probably wiser to route the communication through the loopback interface, because under Windows it is hard to guarantee that a temporary file gets deleted when the process terminates, since you cannot unlink it while it is in use. It is also stated in the comments that IO::Socket::UNIX may not work under older Windows versions, so INET sockets are probably more portable.
I also had trouble terminating both threads. A call to kill() does not seem to work. In my case, the protocol that the program implements is such that the command "quit" read from standard input should cause the program to terminate. The child thread therefore checks whether the line read was "quit" and terminates with exit in that case. A proper solution would find a better way for the parent to kill the child.
I did not bother to ignore SIGCHLD (because it doesn't exist under Windows) or call wait*() because fork does not spawn a new process image under Windows but only a new thread.
This approach is close to the one suggested in one of the comments to the question, only that the thread comes in disguise as a child process created by fork().
The other suggestion was to use the module Win32::Console. This does not work for two reasons:
As the name suggests, it only works for the console. But my software is a backend for a GUI frontend and rarely runs in a console.
The underlying API is for keyboard and mouse events. It works fine for keystrokes and most mouse events, but polling an event blocks as soon as the user has selected something with the mouse. So even for a real console application, this approach would not work. A solution built on Win32::Console must also handle events like presses of the Ctrl, Alt, or Shift keys, because those events do not guarantee that input can be read immediately from the tty.
It is somewhat surprising that a task as trivial as non-blocking I/O on a file descriptor is so hard to implement in a portable way in Perl because Windows actually has a similar concept called "overlapped" I/O. I tried to understand that concept, failed at it, and concluded that it is true to the Windows maxim "make easy things hard, and hard things impossible". Therefore I just cannot blame the Perl developers for not using it as an emulation of non-blocking I/O. Maybe it is simply not possible.

Playing Sound in Perl script

I'm trying to add sound to a Perl script to alert the user that the transaction was OK (user may not be looking at the screen all the time while working). I'd like to stay as portable as possible, as the script runs on Windows and Linux stations.
I can
use Win32::Sound;
Win32::Sound::Play('SystemDefault',SND_ASYNC);
for Windows. But I'm not sure how to call a generic sound on Linux (Gnome). So far, I've come up with
system('paplay /usr/share/sounds/gnome/default/alert/sonar.ogg');
But I'm not sure if I can count on that path being available.
So, three questions:
Is there a better way to call a default sound in GNOME?
Is that path pretty universal (at least among Debian/Ubuntu flavors)?
paplay takes a while to exit after playing a sound; is there a better way to call it?
I'd rather stay away from beeping the system speaker, it sounds awful (this is going to get played a lot) and Ubuntu blacklists the PC Speaker anyway.
Thanks!
A more portable way to get the path to paplay (assuming it's there) might be to use File::Which. Then you could get the path like:
use File::Which;
my $paplay_path = which 'paplay';
And to play the sound asynchronously, you can fork a subprocess:
my $pid = fork;
die "fork failed: $!" unless defined $pid;
if ( !$pid ) {
    # in the child process
    system $paplay_path, '/usr/share/sounds/gnome/default/alert/sonar.ogg';
    exit 0;    # keep the child from falling through into the parent's code
}
# parent proc continues here
Notice also that I've used the multi-argument form of system; doing so avoids the shell and runs the requested program directly. This avoids dangerous bugs (and is more efficient).

Is it possible to create an event driven service in shells

Hi, I would like to create a small program that listens for copy commands and stores the copied content for later retrieval in bash. Is it possible to listen for keystrokes while still keeping the shell interactive? And how can this be done architecturally? I don't need the whole program, just a hint at how it can be done. I have no preference when it comes to language, except that it should be implemented in a scripting language or maybe C++.
Perhaps this needs to be written as a shell extension or something. Just a hint would be fine.
Consider the way that the script program works (see man script). I haven't done this in a while, but basically you write your pseudo-terminal in C and push that into the stream, then launch the shell.
See tcgetattr/tcsetattr, grantpt, unlockpt, and ptsname, with ptem, ldterm and possibly ttcompat to be pushed using ioctl.
A simpler, though less efficient, approach is to run script into a pipe and capture the output. You will probably need script -f to flush the buffer (I think the -f is only in the GNU version).
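For instance, from Perl you could capture the live session roughly like this (a sketch; assumes the util-linux script with its -f, -q, and -c options):
# Run a shell under script(1), send the typescript file to /dev/null,
# and read the relayed terminal stream from a pipe as it is produced.
open my $session, '-|', 'script', '-f', '-q', '-c', 'bash', '/dev/null'
    or die "can't start script: $!";
while (defined(my $chunk = <$session>)) {
    # inspect the captured session here
}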

BASH shell process control - any other examples of controlling/scheduling work

I've inherited a medium-sized project in which the main (batch) program is fed work through a large set of shell scripts that do a lot of process control (waiting for processes to complete, sleeping, checking for conditions, etc.) [and reprocessed through Perl scripts].
Are there other examples of process control by shell scripts? I would like to see what other people have done as a comparison (as I'm not really fond of the 6,668-line shell script).
It may turn out that the current program works and doesn't need to be messed with, or, for maintenance reasons, that it's too cumbersome and doing it another way will be easier to maintain; either way, I need other examples.
To reduce the "generality" of the question here's an example of what I'm looking for: procsup
The Inquisitor project relies extensively on process control from shell scripts. You might want to see its directory with the main function set or the directory with tests (i.e. slave processes) that it runs.
This is quite a general question, and therefore giving specific answers may be a little bit difficult. (And you won't be happy with a 5,000-line example.) Most probably the architecture of your application is faulty and requires a rather complete rework.
As you probably already know, process control with bash is pretty simple:
./test_script.sh &
test_script_pid=$!
wait $test_script_pid # waits until it's done
./test_script2.sh
echo $? # Prints return code of previous command
You can do the same things with, for example, Python's subprocess module (or with Perl, obviously). If you have a complex architecture with a large number of different programs, then the process is obviously non-trivial.
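For reference, a rough Perl equivalent of the bash fragment above:
my $pid = fork // die "fork: $!";
if (!$pid) {
    exec './test_script.sh' or die "exec: $!";
}
waitpid $pid, 0;              # waits until it's done
system './test_script2.sh';
print $? >> 8, "\n";          # prints return code of previous command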
That is an awfully big shell script. Have you considered refactoring it?
From the sound of it, there may be a lot of instances where you could replace several lines of code with a call to a shell function. If you can simplify the code in this way, then it will be easier to see where there are errors in the logic.
I've used this tactic successfully with a humongous Perl script, and it turned out to have some serious logic errors and to be a security risk, because it had embedded passwords that were obfuscated in an easily reversible way. The exposed passwords could have been used by persons unknown (well, a disgruntled employee) to shut down an entire global network.
Some managers were leaning towards making a security exception because this script was so important, but when the logic error was explained and it was clear that the script was providing incorrect data, it was decided that no data was better than dirty data. The guy who wrote that script taught himself programming with a Perl book and the writing of the script.

Pitfalls of using shell scripts to wrap a program?

Say I have a program that needs its environment set up. It is in Perl, and I want to modify the environment (so it searches for libraries in a special spot).
Every time I mess with the standard way to do things in UNIX, I pay a heavy price and a penalty in flexibility.
I know that by using a simple shell script I will inject an additional process into the process tree. Any process accessing its own process tree might be thrown for a little bit of a loop.
Anything recursive in a nontrivial way would need to defend against multiple expansions of the environment.
Anything resembling being in a pipe of programs (or closing and opening STDIN, STDOUT, or STDERR) is my biggest area of concern.
What am I doing to myself?
What am I doing to myself?
Getting yourself all het up over nothing?
Wrapping a program in a shell script in order to set up the environment is actually quite standard and the risk is pretty minimal unless you're trying to do something really weird.
If you're really concerned about having one more process around (and UNIX processes are very cheap, by design), then use the exec keyword, which, instead of forking a new process, simply execs a new executable in place of the current one. So, where you might have had
#!/bin/bash -
FOO=hello
PATH=/my/special/path:${PATH}
perl myprog.pl
You'd just say
#!/bin/bash -
FOO=hello
PATH=/my/special/path:${PATH}
exec perl myprog.pl
and the spare process goes away.
This trick, however, is almost never worth the bother; the one counter-example is that if you can't change your default shell, it's useful to say
$ exec zsh
in place of just running the shell, because then you get the expected behavior for process control and so forth.
