difference between pty and a pipe

I have been reading about ptys from this page's example: http://www.rkoucha.fr/tech_corner/pty_pdip.html. I have two questions:
What is the difference, or the most important difference, between using a pty and using a pipe? From what I have read, both are for inter-process communication, but with a pty the process can "treat it like a normal terminal". What does that mean?
What is a "controlling terminal"? I have read about them but can't understand what they really are. Is the controlling terminal always the pty assigned to the process?

The article you mention is excellent, and hard to improve upon, but it is rather technical. I'll try to give a less technical explanation (bear with me, Unix gurus!)
A pipe is just a unidirectional data channel: it can only be written on one end, and read on the other. For bidirectional inter-process communication you'll always need two pipes. Pipes are excellent for moving bits around, but not for much more.
A pty (pseudoterminal) can be read and written on both ends, but it is much more than just a bidirectional data channel. To understand this, it is useful to have a look at a real terminal: On one end there is a process reading keystrokes and sending characters to a teletype or screen. On the other end there is a real human banging away at a keyboard and staring at the above-mentioned screen. Only one end has a file descriptor, the other end is just a connector and a cable.
Historically, terminals have developed many attributes that can be controlled by the programs running on them (like 'echo mode' or 'canonical mode'; see termios(3)). Also, a terminal lets the user (by way of the above-mentioned connector and cable) send signals that can be used for 'job control', e.g. by typing CTRL-Z to suspend a foreground job.
A pty is like a real terminal where both ends are file descriptors:
the slave end behaves exactly like a real terminal: a process that has a descriptor for the slave end (an "inferior process") can read from, and write to, it, but can also set terminal attributes like echo mode or the interrupt character (e.g. CTRL+C). It will usually not even be aware that it is not connected to a real screen and keyboard.
the master end looks more like a keyboard and teletype for use, not by humans, but by other processes: any process that has opened the master end can write to it, and will receive echo (but only if the inferior process has set the ECHO attribute on the slave). It can also (on most modern unices) control the session that has the slave as its controlling terminal, e.g. by sending CTRL+Z.
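For illustration, here is a minimal Python sketch of the two ends (Python's pty and termios modules wrap the underlying Unix calls):

import os, pty, termios

# Open a pseudoterminal pair; both ends are ordinary file descriptors.
master_fd, slave_fd = pty.openpty()

# The slave honours the full termios interface, just like a real terminal.
attrs = termios.tcgetattr(slave_fd)
attrs[3] |= termios.ECHO                      # lflags: make sure echo is on
termios.tcsetattr(slave_fd, termios.TCSANOW, attrs)

# Writing to the master looks like keyboard input on the slave; because
# ECHO is set, the line discipline echoes it back to the master.
os.write(master_fd, b"hello\n")
print(os.read(master_fd, 100))                # typically b'hello\r\n'

os.close(master_fd)
os.close(slave_fd)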
To understand what a controlling terminal is, it is again useful to think about the scenario where a real user is logged in at a real terminal. The user can start a "session", i.e. a collection of processes, some of them in foreground jobs, others in the background.
To prevent chaos, a controlling terminal (i.e. the kernel structure associated with it) keeps track of which processes are in a foreground or background job, and which processes are allowed to read from and write to it. Whenever a process tries something illegal (like a background process reading from the controlling terminal), the kernel stops the whole job with the signal SIGTTIN; if that signal is blocked or ignored, the operation instead fails with EIO.
This shows that, just as with a real terminal, only the slave end of a pty can be a controlling terminal, and that the concept only makes sense on a Unix system that supports job control (which nowadays means any Unix system).
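As a small illustration of that bookkeeping, a process can ask its controlling terminal which process group currently owns the foreground; a minimal Python sketch:

import os, sys

# Compare the terminal's foreground process group with our own process
# group to find out whether we are part of a foreground or background job.
if sys.stdin.isatty():
    fg_pgrp = os.tcgetpgrp(sys.stdin.fileno())
    print("foreground job" if fg_pgrp == os.getpgrp() else "background job")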


Making STDIN unbuffered under Windows in Perl

I am trying to do input processing (from the console) in Perl asynchronously. My first approach was to use IO::Select but that does not work under Windows.
I then came across the post Non-buffered processor in Perl which roughly suggests this:
binmode STDIN;
binmode STDOUT;

STDIN->blocking(0) or warn $!;
STDOUT->autoflush(1);

while (1) {
    my $buffer;
    my $read_count = sysread(STDIN, $buffer, 4096);
    if (not defined($read_count)) {
        next;
    } elsif (0 == $read_count) {
        exit 0;
    }
}
That works as expected for regular Unix systems but not for Windows, where the sysread actually does block. I have tested that on Windows 10 with 64-bit Strawberry Perl 5.32.1.
When you check the return value of blocking() (as done in the code above), it turns out that the call fails with the funny error message "An operation was attempted on something that is not a socket".
Edit: My application is a chess engine that theoretically can be run interactively in a terminal but usually communicates via pipes with a GUI. Therefore, Win32::Console does not help.
Has something changed since the blog post was published? The author explicitly claims that this approach works for Windows. Is there any other option that I can go with, maybe some module from the Win32:: namespace?
The solution I now implemented in https://github.com/gflohr/Chess-Plisco/blob/main/lib/Chess/Plisco/Engine.pm (search for the method __msDosSocket()) can be outlined as follows:
If Windows is detected as the operating system, create a temporary file as a Unix domain socket with IO::Socket::UNIX for writing.
Do a fork(), which under Windows actually creates a thread in Perl, because the system does not have a real fork().
In the "parent", create another instance of IO::Socket::Unix with the same path for reading.
In the "child", read from standard input with getline(). This blocks, of course. Every line read is echoed to the write end of the socket.
The "parent" uses the read-end of the socket as a replacement for standard input and puts it into non-blocking mode. That works even under Windows because it is a socket.
From here on, everything is working the same as under Unix: All input is read in non-blocking mode with IO::Select.
Instead of a Unix domain socket it is probably wiser to route the communication through the loopback interface, because under Windows it is hard to guarantee that a temporary file gets deleted when the process terminates, since you cannot unlink it while it is in use. It is also stated in the comments that IO::Socket::UNIX may not work under older Windows versions, so inet sockets are probably more portable.
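For readers more fluent in Python than Perl, roughly the same pattern looks like this (a sketch only, using the loopback interface as recommended above; stdin_to_socket is a made-up name):

import select, socket, sys, threading

def stdin_to_socket():
    # A loopback listener on an ephemeral port stands in for the Unix socket.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("127.0.0.1", 0))
    listener.listen(1)

    writer = socket.create_connection(listener.getsockname())
    reader, _ = listener.accept()
    listener.close()

    def pump():
        for line in sys.stdin:             # blocking, but only in this thread
            writer.sendall(line.encode())
        writer.close()

    threading.Thread(target=pump, daemon=True).start()
    return reader                          # a select()-able stand-in for stdin

stdin_proxy = stdin_to_socket()
while True:
    readable, _, _ = select.select([stdin_proxy], [], [], 0.1)
    if readable:
        line = stdin_proxy.recv(4096)
        if not line or line.strip() == b"quit":
            break                          # EOF or the protocol's quit command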
I also had trouble terminating both threads. A call to kill() does not seem to work. In my case, the protocol that the program implements is such that the command "quit" read from standard input should cause the program to terminate. The child thread therefore checks whether the line read was "quit" and terminates with exit in that case. A proper solution should find a better way of letting the parent kill the child.
I did not bother to ignore SIGCHLD (because it doesn't exist under Windows) or call wait*() because fork does not spawn a new process image under Windows but only a new thread.
This approach is close to the one suggested in one of the comments to the question, only that the thread comes in disguise as a child process created by fork().
The other suggestion was to use the module Win32::Console. This does not work for two reasons:
As the name suggests, it only works for the console. But my software is a backend for a GUI frontend and rarely runs in a console.
The underlying API is for keyboard and mouse events. It works fine for keystrokes and most mouse events, but polling for an event blocks as soon as the user has selected something with the mouse. So even for a real console application, this approach would not work. A solution built on Win32::Console must also handle events like presses of the CTRL, ALT or Shift keys, because such events do not guarantee that input can actually be read from the tty.
It is somewhat surprising that a task as trivial as non-blocking I/O on a file descriptor is so hard to implement in a portable way in Perl, given that Windows actually has a similar concept called "overlapped" I/O. I tried to understand that concept, failed at it, and concluded that it is true to the Windows maxim "make easy things hard, and hard things impossible". Therefore I just cannot blame the Perl developers for not using it as an emulation of non-blocking I/O. Maybe it is simply not possible.

creating a new screen (like vi and less do) in a textual program

Programs like vi, less and screen fill the terminal with their data when executed, and then, if you press Ctrl-Z (or terminate the program), the terminal returns to how it was before the execution of these programs.
How does a program usually do that? What is the correct terminology for this kind of thing?
PS: The words used in the title may not be correct, since I don't even know the terminology for this kind of thing.
EDIT:
Thanks to @Atropo I now know the correct name for these is foreground process,
but how does a program do that? How can the program clear the screen, do its writing and, at the end of its execution, let the shell reappear with all the old output?
They're called foreground processes.
Usually a foreground process shows the user an interface through which the user can interact with the program. So the user must wait for one foreground process to complete before running another one.
While you use a foreground process the shell prompt disappears until you close the process or you put it in the background.
By default CTRL-C generates the SIGINT signal and CTRL-Z generates SIGTSTP:
https://en.wikipedia.org/wiki/Unix_signal
To change the behavior you can:
redefine or mask the signal handler
disable the key combination via the terminal settings for stdin: http://linux.die.net/man/3/termios
close the stdin descriptor (like daemons do)
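A minimal Python sketch of the first two options (assuming a Unix terminal):

import signal, sys, termios

# Option 1: ignore CTRL-Z's SIGTSTP instead of being suspended.
signal.signal(signal.SIGTSTP, signal.SIG_IGN)

# Option 2: tell the tty driver to stop turning key combos into signals at all.
fd = sys.stdin.fileno()
attrs = termios.tcgetattr(fd)
attrs[3] &= ~termios.ISIG            # lflags: disable the INTR/QUIT/SUSP keys
termios.tcsetattr(fd, termios.TCSANOW, attrs)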

How to detect that foreground process is waiting for input in UNIX?

I have to create a script (ksh or perl) that starts a certain number of parallel jobs (other scripts), each of which runs as a foreground process in a separate session. In addition, I start a monitoring job that has to determine whether any of those scripts is expecting input from the operator, and switch to the corresponding session if necessary.
My problem is that I have not found a good way to determine that a process is expecting input. For a background process it's pretty easy: the process state is "stopped", and this can easily be checked with the 'ps' command. In the case of a foreground process this does not work.
So far I tried to attach to the process with dbx or truss to see if it's hanging on 'read', but this approach seems too heavyweight.
Could you suggest some better solution? Perl, shell, C, Java, etc. is OK, as long as it's standard and does not require extra 3rd-party or OS-specific stuff to install.
Thank you.
What you're asking isn't possible, at least not reliably. The process may be using select or some other polling method rather than blocking on a read call. You can't know whether it's waiting for operator input or busy doing other stuff, and in general it could be both (doing work in the background while staying responsive to operator input).
The normal way for a program to signal that it's waiting for operator input is to print a prompt. Thus you should consider a session to be active if it's displayed a prompt since the last time you fed it input.
If your programs don't behave this way, you'll need to find some other program-specific way to know that these processes are waiting for input.

Can I trap control-q and control-s in ruby?

For some signals, like SIGINT, I can easily enough set up a trap to handle the signal and continue execution as I see fit. I'd like to add typical behavior for ^q and ^s to a ruby command-line application that I'm fiddling with. Is there a way to do this - particularly, one that is portable so I can use it in Windows, iOS, Linux and Solaris?
EDIT:
It turns out that the signals are never delivered to the process. In fact, running strace on the process and on its parent process, a bash instance, showed that neither the process nor the parent were getting any indication of what was going on. They're simply being suspended.
I may try to have a SIGALRM handler that fires once per second, checks whether much more than a second has passed since the last alarm, and makes appropriate calls if it concludes that the process has been suspended. There would be false positives on a heavily loaded system.
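That heartbeat heuristic is easy to prototype; here is a minimal Python sketch (the two-second threshold is an arbitrary choice):

import signal, time

last_tick = time.monotonic()

def tick(signum, frame):
    # If far more than one second passed between alarms, the process was
    # most likely suspended in between (or the machine is heavily loaded).
    global last_tick
    now = time.monotonic()
    if now - last_tick > 2.0:
        print("suspected suspension for %.1f seconds" % (now - last_tick))
    last_tick = now
    signal.alarm(1)                  # re-arm the one-second alarm

signal.signal(signal.SIGALRM, tick)
signal.alarm(1)
while True:
    time.sleep(60)                   # stand-in for the program's real work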
In irb enter Signal.list. It will list all the signals you should be able to trap.
Trap a signal in Ruby:
Signal.trap("TSTP") do   # SIGSTOP itself can never be trapped; SIGTSTP (CTRL-Z) can
  # handle the signal
end
In the terminal enter $ stty -a. It should list signals and their associated key combo (if they have one).
I believe ^s is usually stop and ^q is start.
Although according to this answer, those key combos do not actually send a signal to the running process; they are handled by the terminal driver. In that case, kill -STOP <process> can send that signal to your process.
TL;DR No, you can't trap them, because they don't result in signals and the processes under the terminal don't see them and can only detect them heuristically. However, if the point is to be able to use those keybindings in your terminal program, then yes, you can do that by disabling the terminal's special treatment of them.
^q and ^s don't result in signals. It's ^z not ^s that results in the terminal signaling SIGSTOP (EDIT: It's actually SIGTSTP).
What ^s does is tell the terminal not to read the output of the processes that are writing to it. This causes the processes to block when writing to the terminal (they can still write to other places and read from stdin, as well as do other things)[1]. ^q tells the terminal to continue reading and displaying the processes' output. The processes that have the terminal as standard input don't see these keystrokes: the terminal sees the keybindings, acts on them, and doesn't pass them on to the processes reading from its terminal device.
You can disable this special behaviour with stty -ixon, and re-enable it with stty ixon. When I disable it, the process that reads while I type says that ^s is byte 0x13, and ^q is byte 0x11.
[1] As an experiment to show this, you can open 2 terminal windows. Execute tty on the second one to find its terminal device. Then, on the first terminal, you can run tee $TTY > $OTHER_TTY with $OTHER_TTY being the terminal device of the second terminal. Once you've done that, you can hit ^s to block writes to the terminal and check this by typing some line. That line will be displayed in the second terminal, but not the first, and thereafter nothing you type will be displayed on either until you hit ^q. What happened here is that after you hit ^s and typed a line, tee could still read it, and output it to its stdout which we redirected to the second terminal. Then, when it tried to write it to the first file passed as argument it blocked because it was the terminal you blocked with ^s. It stayed there waiting for write() to return which it won't until you hit ^q.
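The stty -ixon trick can also be done from inside a program; a minimal Python sketch (Unix only):

import os, sys, termios

fd = sys.stdin.fileno()
saved = termios.tcgetattr(fd)
try:
    attrs = termios.tcgetattr(fd)
    attrs[0] &= ~termios.IXON                     # iflags: pass ^s/^q through
    attrs[3] &= ~(termios.ICANON | termios.ECHO)  # read byte-at-a-time, no echo
    termios.tcsetattr(fd, termios.TCSANOW, attrs)
    while True:
        ch = os.read(fd, 1)
        print(hex(ch[0]))                         # ^s shows 0x13, ^q shows 0x11
        if ch == b"q":                            # a plain 'q' ends the demo
            break
finally:
    termios.tcsetattr(fd, termios.TCSANOW, saved) # always restore the terminal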

Send CTRL+C to subprocess tree on Windows

I would like to run arbitrary console-based sub-processes and manage them from a single master process. The console based sub-processes communicate via stdin, stdout and stderr, and if you run them in a genuine console they terminate cleanly when you press CTRL+C. Some of them may in fact be a tree of processes, such as a batch script that runs an executable which may in turn run another executable to do some work. I would like to redirect their standard I/O (for example, so that I can show their output in a GUI window) and in certain circumstances to send them a CTRL+C event so that they will give up and terminate cleanly.
The following two diagrams show first the normal structure - one master process has four worker sub-processes, and some of those workers have their own subprocesses; and then what should happen when one of the workers needs to be stopped - it and all of its children should get the CTRL+C event, but no other processes should receive the CTRL+C event.
(diagrams omitted; original images hosted on livejournal.com)
Additionally, I would much prefer that there are no extra windows visible to the user.
Here's what I've tried (note that I'm working in Python, but solutions for C would still be helpful):
Spawning an extra intermediate process with CREATE_NEW_CONSOLE, and then having it spawn the worker process. Then have it call GenerateConsoleCtrlEvent(CTRL_C_EVENT, 0) when we want to kill the worker. Unfortunately, CREATE_NEW_CONSOLE seems to prevent me from redirecting the standard I/O channels, so I'm left with no easy way to get the output back to the main program.
Spawning an extra intermediate process with CREATE_NEW_PROCESS_GROUP, and then having it spawn the worker process. Then have it call GenerateConsoleCtrlEvent(CTRL_C_EVENT, 0) when we want to kill the worker. Somehow, this manages to send the CTRL+C only to the master process, which is completely useless. On closer inspection, GenerateConsoleCtrlEvent says that CTRL+C cannot be sent to process groups.
Spawning the subprocess with CREATE_NEW_PROCESS_GROUP. Then call GenerateConsoleCtrlEvent(CTRL_BREAK_EVENT, pid) to kill the worker. This is not ideal, because CTRL+BREAK is less friendly than CTRL+C and will probably result in a messier termination. (E.g. if it's a Python process, no KeyboardInterrupt can be caught and no finally blocks run.)
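For reference, this third attempt can be expressed in Python roughly as follows (a sketch; worker.py is a placeholder for the actual worker):

import signal, subprocess, sys, time

# Start the worker in its own process group, so that a console ctrl event
# can be addressed to that group.
proc = subprocess.Popen(
    [sys.executable, "worker.py"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    creationflags=subprocess.CREATE_NEW_PROCESS_GROUP,
)
time.sleep(5)                                # let it do some work
proc.send_signal(signal.CTRL_BREAK_EVENT)    # CTRL_C_EVENT cannot target a group
proc.wait()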
Is there any good way to do what I want? I can see that I could theoretically build on the first attempt and find some other way to communicate between the processes, but I am worried it will turn out to be extremely awkward. Are there good examples of other programs that achieve the same effect? It seems so simple that it can't be all that uncommon a requirement.
I don't know about managing/redirecting stdin et al., but for managing the subprocess tree, have you considered using the Windows Job Objects API?
There are several other questions about managing process trees (How do I automatically destroy child processes in Windows? Performing equivalent of “Kill Process Tree” in c++ on windows) and it looks like the cleanest method if you can use it.
Chapter 5 of Windows via C/C++ by Jeffrey Richter has a good discussion on using CreateJobObject and the related APIs.
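A hedged sketch of the Job Object approach using the pywin32 bindings (assumes pywin32 is installed; worker.exe is a placeholder). Note that TerminateJobObject is a hard kill of the whole tree, not a polite CTRL+C:

import subprocess
import win32api, win32con, win32job

# Create a job configured to kill every member when the job handle is closed.
job = win32job.CreateJobObject(None, "")
info = win32job.QueryInformationJobObject(
    job, win32job.JobObjectExtendedLimitInformation)
info["BasicLimitInformation"]["LimitFlags"] |= \
    win32job.JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE
win32job.SetInformationJobObject(
    job, win32job.JobObjectExtendedLimitInformation, info)

# Spawn the worker and place it (and all its future children) in the job.
proc = subprocess.Popen(["worker.exe"])
handle = win32api.OpenProcess(win32con.PROCESS_ALL_ACCESS, False, proc.pid)
win32job.AssignProcessToJobObject(job, handle)

# Later: tear down the entire process tree in one call.
win32job.TerminateJobObject(job, 1)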
