I am trying to do input processing (from the console) in Perl asynchronously. My first approach was to use IO::Select, but that does not work under Windows.
I then came across the post Non-buffered processor in Perl, which roughly suggests this:
binmode STDIN;
binmode STDOUT;
STDIN->blocking(0) or warn $!;
STDOUT->autoflush(1);

while (1) {
    my $buffer;
    my $read_count = sysread(STDIN, $buffer, 4096);
    if (not defined($read_count)) {
        # No data available yet (EAGAIN/EWOULDBLOCK): poll again.
        next;
    } elsif (0 == $read_count) {
        # EOF on standard input.
        exit 0;
    }
    # ... process $buffer ...
}
That works as expected for regular Unix systems but not for Windows, where the sysread actually does block. I have tested that on Windows 10 with 64-bit Strawberry Perl 5.32.1.
When you check the return value of blocking() (as done in the code above), it turns out that the call fails with the funny error message "An operation was attempted on something that is not a socket".
Edit: My application is a chess engine that theoretically can be run interactively in a terminal but usually communicates via pipes with a GUI. Therefore, Win32::Console does not help.
Has something changed since the blog post was published? The author explicitly claims that this approach works for Windows. Is there any other option I can go with, maybe some module from the Win32:: namespace?
The solution I now implemented in https://github.com/gflohr/Chess-Plisco/blob/main/lib/Chess/Plisco/Engine.pm (search for the method __msDosSocket()) can be outlined as follows (a minimal sketch follows the list):
If Windows is detected as the operating system, create a temporary file as a Unix domain socket with IO::Socket::UNIX for writing.
Do a fork(), which on Windows actually creates a thread in Perl because the system does not have a real fork().
In the "parent", create another instance of IO::Socket::UNIX with the same path for reading.
In the "child", read from standard input with getline(). This blocks, of course. Every line read is echoed to the write end of the socket.
The "parent" uses the read-end of the socket as a replacement for standard input and puts it into non-blocking mode. That works even under Windows because it is a socket.
From here on, everything works the same as under Unix: all input is read in non-blocking mode with IO::Select.
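Here is a minimal sketch of the idea. It is not the actual Chess::Plisco code: the socket path, the poll interval, and the "quit" handling are illustrative, and the ordering of socket creation differs slightly from the outline above for simplicity.

use strict;
use warnings;
use IO::Socket::UNIX;
use IO::Select;
use File::Temp qw(tempdir);

my $dir  = tempdir(CLEANUP => 1);
my $path = "$dir/stdin.sock";

my $listener = IO::Socket::UNIX->new(
    Type   => SOCK_STREAM,
    Local  => $path,
    Listen => 1,
) or die "listen on $path: $!";

my $pid = fork;
die "fork: $!" unless defined $pid;

if (!$pid) {
    # "Child" (a thread on Windows): block on STDIN and forward
    # every line to the write end of the socket.
    my $writer = IO::Socket::UNIX->new(
        Type => SOCK_STREAM,
        Peer => $path,
    ) or die "connect: $!";
    $writer->autoflush(1);
    while (defined(my $line = <STDIN>)) {
        print {$writer} $line;
        # Crude shutdown: the protocol's "quit" command also ends this thread.
        exit 0 if $line =~ /^\s*quit\s*$/;
    }
    exit 0;
}

# "Parent": the accepted socket replaces standard input and can be
# put into non-blocking mode even under Windows, because it is a socket.
my $reader = $listener->accept or die "accept: $!";
$reader->blocking(0);
my $select = IO::Select->new($reader);

my $buffered = '';
while (1) {
    if ($select->can_read(0.1)) {
        my $n = sysread $reader, my $chunk, 4096;
        last if defined $n && $n == 0;   # EOF: the child is gone
        $buffered .= $chunk if $n;
        while ($buffered =~ s/\A(.*?)\n//) {
            my $line = $1;
            # ... handle $line as if it had come from STDIN ...
        }
    }
    # ... do other work here without ever blocking on input ...
}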
Instead of a Unix domain socket, it is probably wiser to route the communication through the loopback interface, because under Windows it is hard to guarantee that a temporary file gets deleted when the process terminates: you cannot unlink it while it is in use. The comments also point out that IO::Socket::UNIX may not work under older Windows versions, so inet sockets are probably more portable.
I also had trouble terminating both threads. A call to kill() does not seem to work. In my case, the protocol that the program implements specifies that the command "quit", read from standard input, should cause the program to terminate. The child thread therefore checks whether the line read was "quit" and exits in that case. A proper solution should find a better way for the parent to kill the child.
I did not bother to ignore SIGCHLD (it does not exist under Windows) or to call wait*(), because fork() does not spawn a new process image under Windows, only a new thread.
This approach is close to the one suggested in one of the comments to the question, only that the thread comes in disguise as a child process created by fork().
The other suggestion was to use the module Win32::Console. This does not work for two reasons:
As the name suggests, it only works for the console. But my software is a backend for a GUI frontend and rarely runs in a console.
The underlying API is for keyboard and mouse events. It works fine for keystrokes and most mouse events, but polling blocks as soon as the user has selected something with the mouse. So even for a real console application, this approach would not work. A solution built on Win32::Console must also handle events like presses of the CTRL, ALT, or Shift keys, because such events do not guarantee that input can immediately be read from the tty (see the sketch below).
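For illustration, this is roughly the polling loop one would attempt with Win32::Console. It is a sketch, not working code for this use case: as described above, Input() can still block, for example during a mouse selection.

use Win32::Console;

my $in = Win32::Console->new(STD_INPUT_HANDLE);
while (1) {
    # GetEvents() reports how many input records are pending.
    if ($in->GetEvents) {
        # For a key event the returned list is: event type (1), key_down,
        # repeat count, virtual keycode, virtual scancode, char, control keys.
        my @event = $in->Input;
        if (@event && $event[0] == 1 && $event[1] && $event[5]) {
            my $char = chr $event[5];
            # ... handle $char ...
        }
    }
    # ... do other work ...
}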
It is somewhat surprising that a task as trivial as non-blocking I/O on a file descriptor is so hard to implement portably in Perl, because Windows actually has a similar concept called "overlapped" I/O. I tried to understand that concept, failed, and concluded that it is true to the Windows maxim "make easy things hard, and hard things impossible". I therefore cannot blame the Perl developers for not using it as an emulation of non-blocking I/O. Maybe it is simply not possible.
I have been reading about ptys from this page's example: http://www.rkoucha.fr/tech_corner/pty_pdip.html. I have two questions:
What is the difference, or the most important difference, between using a pty and using a pipe? From what I have read, both are for inter-process communication, but with a pty the process can "treat it like a normal terminal". What does that mean?
What is a "controlling terminal"? I have read about them but can't understand what they really are. Is the controlling terminal always the pty assigned to the process?
The article you mention is excellent, and hard to improve upon, but it is rather technical. I'll try to give a less technical explanation (bear with me, Unix gurus!).
A pipe is just a unidirectional data channel: it can only be written on one end, and read on the other. For bidirectional inter-process communication you'll always need two pipes. Pipes are excellent for moving bits around, but not for much more.
A pty (pseudoterminal) can be read and written on both ends, but it is much more than just a bidirectional data channel. To understand this, it is useful to have a look at a real terminal: On one end there is a process reading keystrokes and sending characters to a teletype or screen. On the other end there is a real human banging away at a keyboard and staring at the above-mentioned screen. Only one end has a file descriptor, the other end is just a connector and a cable.
Historically, terminals have developed many attributes that can be controlled by the programs running on them (like 'echo mode' or 'canonical mode', see termios(3)). Also, a terminal can let the user (by way of the above-mentioned connector and cable) send signals that can be used for 'job control', e.g. by typing CTRL-Z to put a foreground job in the background.
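To give a concrete example of such attributes, core Perl's POSIX module can flip them on any tty. A small sketch (run it on an interactive terminal):

use POSIX qw(:termios_h);

# Turn off echo and canonical (line-buffered) mode on STDIN's terminal.
my $termios = POSIX::Termios->new;
$termios->getattr(fileno STDIN);
my $lflag = $termios->getlflag;
$termios->setlflag($lflag & ~(ECHO | ICANON));
$termios->setattr(fileno STDIN, TCSANOW);

# ... read single, unechoed keystrokes here ...

# Restore the original settings afterwards.
$termios->setlflag($lflag);
$termios->setattr(fileno STDIN, TCSANOW);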
A pty is like a real terminal where both ends are file descriptors (a short sketch follows these two points):
the slave end behaves exactly like a real terminal: a process that has a descriptor for the slave end (an "inferior process") can read from and write to it, but can also set terminal attributes like echo mode or the interrupt character (e.g. CTRL+C). It will usually not even be aware that it is not connected to a real screen and keyboard.
the master end looks more like a keyboard and teletype for use not by humans, but by other processes: any process that has opened the master end can write to it, and will receive echo (but only if the inferior process has set the ECHO attribute on the slave). It can also (on most modern unices) control the session that has the slave as its controlling terminal, e.g. by sending CTRL+Z.
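A minimal sketch with the CPAN module IO::Pty (not in core Perl; 'cat' is just a convenient inferior process) shows both ends in action:

use strict;
use warnings;
use IO::Pty;

my $pty = IO::Pty->new;               # we hold the master end
my $pid = fork;
die "fork: $!" unless defined $pid;

if (!$pid) {
    # Child: make the slave end our controlling terminal and stdio.
    $pty->make_slave_controlling_terminal;
    my $slave = $pty->slave;
    open STDIN,  '<&', $slave or die $!;
    open STDOUT, '>&', $slave or die $!;
    open STDERR, '>&', $slave or die $!;
    close $pty;                       # the child does not need the master
    exec 'cat' or die "exec: $!";
}

# Parent: play keyboard and screen through the master end.
print $pty "hello\n";
my $line1 = <$pty>;   # the slave's echo of what we "typed"
my $line2 = <$pty>;   # the line written back by cat itself
print "echo: $line1", "reply: $line2";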
To understand what a controlling terminal is, it is again useful to think about the scenario where a real user is logged in at a real terminal. The user can start a "session", i.e. a collection of processes, some of them in foreground jobs, others in the background.
To prevent chaos, a controlling terminal (i.e. the kernel structure associated with it) keeps track of which processes are in a foreground or background job, and which processes are allowed to read from and write to it. Whenever a process tries something illegal (like a background process reading from the controlling terminal), the operation either fails (with EIO) or the whole job is stopped by the kernel (using the signal SIGTTIN).
This shows that, just as with a real terminal, only the slave end of a pty can be a controlling terminal, and that the concept only makes sense on a Unix system that supports job control (any Unix system, nowadays).
I have to create a script (ksh or Perl) that starts a certain number of parallel jobs (other scripts), each of them running as a foreground process in a separate session. In addition, I start a monitoring job that has to determine whether any of those scripts is expecting input from the operator, and switch to the corresponding session if necessary.
My problem is that I have not found a good way to determine that a process is expecting input. For a background process it's pretty easy: the process state is "stopped", and this can easily be checked with the 'ps' command. For a foreground process this does not work.
So far I have tried attaching to the process with dbx or truss to see if it's hanging on 'read', but this approach seems too heavyweight.
Could you suggest a better solution? Perl, shell, C, Java, etc. is OK, as long as it's standard and does not require extra third-party or OS-specific stuff to install.
Thank you.
What you're asking isn't possible, at least not reliably. The process may be using select or another polling method rather than blocking on a read call. You can't know whether it's waiting for operator input or busy doing other stuff, and in general it could be both (doing stuff in the background while staying responsive to operator input).
The normal way for a program to signal that it's waiting for operator input is to print a prompt. Thus you should consider a session to be active if it's displayed a prompt since the last time you fed it input.
If your programs don't behave this way, you'll need to find some other program-specific way to know that these processes are waiting for input.
I'm trying to add sound to a Perl script to alert the user that the transaction was OK (the user may not be looking at the screen all the time while working). I'd like to stay as portable as possible, as the script runs on Windows and Linux stations.
I can
use Win32::Sound;
Win32::Sound::Play('SystemDefault', SND_ASYNC);
for Windows. But I'm not sure how to call a generic sound on Linux (GNOME). So far, I've come up with
system('paplay /usr/share/sounds/gnome/default/alert/sonar.ogg');
But I'm not sure if I can count on that path being available.
So, three questions:
Is there a better way to call a default sound in GNOME?
Is that path pretty universal (at least among Debian/Ubuntu flavors)?
paplay takes a while to exit after playing a sound; is there a better way to call it?
I'd rather stay away from beeping the system speaker, it sounds awful (this is going to get played a lot) and Ubuntu blacklists the PC Speaker anyway.
Thanks!
A more portable way to get the path to paplay (assuming it's there) might be to use File::Which. Then you could get the path like:
use File::Which;
my $paplay_path = which 'paplay';
And to play the sound asynchronously, you can fork a subprocess:
my $pid = fork;
die "fork failed: $!" unless defined $pid;
if ( !$pid ) {
    # in the child process
    system $paplay_path, '/usr/share/sounds/gnome/default/alert/sonar.ogg';
    exit 0;    # important: don't fall through into the parent's code
}
# parent proc continues here
Notice also that I've used the multi-argument form of system; doing so avoids the shell and runs the requested program directly. This avoids dangerous bugs (and is more efficient).
My question is related to "Turn off buffering in pipe" albeit concerning Windows rather than Unix.
I'm writing a Make clone, and to stop parallel processes from thrashing each others' console output I've redirected the output to pipes (as described here) on which I can do any filtering I want. Unfortunately, long-running processes now buffer up their output rather than sending it in real time as they would on a console.
From peeking at the MSVCRT sources it seems the root cause is that GetFileType() is used to check whether the standard I/O handles are attached to a console, which then sets an internal flag and ends up disabling buffering.
Apparently a separate array of inheritable file handles and flags can also be passed to the child through the undocumented lpReserved2 member of the STARTUPINFO structure when creating the process. About the only working solution I've figured out is to use this list and just lie about the device type when setting the flags for stdout/stderr.
Now then... Is there any sane way of solving this problem?
There is not. Yes, GetFileType() tells the CRT that stdout is no longer a character device, _isatty() returns false, and so the CRT switches the output stream to buffered mode. That is important for reasonable throughput; flushing output one character at a time is only acceptable when a human is looking at the characters.
You would have to relink the programs you are trying to redirect against a customized version of the CRT. I don't doubt that if that were possible, you wouldn't be messing with this in the first place. Patching GetFileType() is another un-sane solution.
I've had some trouble forking off processes from a Perl CGI script when running on Windows. The main issue seems to be that fork is emulated when running on Windows, and doesn't actually create a new process (just another thread in the current one). This means that web servers (like IIS) which are waiting for the process to finish keep waiting until the 'background' process finishes.
Is there a way of forking off a background process from a CGI script under Windows? Even better, is there a single function I can call which will do this in a cross platform way?
(And just to make life extra difficult, I'd really like a good way to redirect the forked process's output to a file at the same time.)
If you want to do this in a platform-independent way, Proc::Background is probably the best way.
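For instance (a sketch; the job script name is made up):

use Proc::Background;

# Proc::Background uses fork/exec on Unix and Win32::Process on Windows.
my $proc = Proc::Background->new($^X, 'background_job.pl');
print "started, still alive\n" if $proc->alive;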
Use Win32::Process::Create() with the DETACHED_PROCESS flag.
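Roughly like this (a sketch; the paths are illustrative, and the output redirection is routed through cmd.exe because CreateProcess itself does not interpret >):

use Win32;
use Win32::Process;

my $proc;
Win32::Process::Create(
    $proc,
    $ENV{ComSpec},                                    # cmd.exe
    'cmd /c perl background_job.pl > C:\temp\job.log 2>&1',
    0,                                                # don't inherit handles
    DETACHED_PROCESS,
    '.',                                              # working directory
) or die Win32::FormatMessage(Win32::GetLastError());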
perlfork:

Perl provides a fork() keyword that corresponds to the Unix system call of the same name. On most Unix-like platforms where the fork() system call is available, Perl's fork() simply calls it.

On some platforms such as Windows where the fork() system call is not available, Perl can be built to emulate fork() at the interpreter level. While the emulation is designed to be as compatible as possible with the real fork() at the level of the Perl program, there are certain important differences that stem from the fact that all the pseudo child "processes" created this way live in the same real process as far as the operating system is concerned.
I've found real problems with fork() on Windows, especially when dealing with Win32 objects in Perl. Thus, if it's going to be Windows-specific, I'd really recommend you look at the threads module in Perl.
I use this to good effect, accepting more than one connection at a time on websites using IIS, and then using even more threads to execute different scripts all at once.
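A minimal sketch of that pattern (the script names are made up):

use threads;

# Run several scripts concurrently, one interpreter thread each,
# then wait for all of them to finish.
my @workers = map {
    my $script = $_;
    threads->create(sub { system $^X, $script });
} qw(job1.pl job2.pl job3.pl);

$_->join for @workers;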
This question is very old, and the accepted answer is correct. However, I just got this to work, and figured I'd add some more detail about how to accomplish it for anyone who needs it.
The following code exists in a very large Perl CGI script. This particular subroutine creates tickets in multiple ticketing systems, then uses the returned ticket numbers to make an automated call via Twilio services. The call takes a while, and I didn't want the CGI users to have to wait until the call ended to see the output from their request. To that end, I did the following:
(All the standard CGI code runs, calls the subroutine needed, and then:)
my $randnum = int(rand(100000));
my $callcmd = $progdir_path . "/aoff-caller.pl --uniqueid $uuid --region $region --ticketid $ticketid";

# Daemonize the long-running call so the CGI process can exit immediately.
my $daemon = Proc::Daemon->new(
    work_dir     => $progdir_path,
    child_STDOUT => $tmpdir_path . '/stdout.txt',
    child_STDERR => $tmpdir_path . '/stderr.txt',
    pid_file     => $tmpdir_path . '/' . $randnum . '-pid.txt',
    exec_command => $callcmd,
);
my $pid = $daemon->Init();
exit 0;
(kill CGI at the appropriate place)
I am sure that the random number generated and attached to the pid file is overkill, but I have no interest in creating issues that are extremely easily avoided. Hopefully this helps someone looking to do the same sort of thing. Remember to add use Proc::Daemon at the top of your script, mirror the code above, alter the paths and names for your program, and you should be good to go.