Why does the following empty program exit with 130 on Ctrl+C on Linux? I suspect this is because my shell (bash) maps a fatal SIGINT to 130 (128 + 2).
On Windows with Git Bash (git-bash.exe), I get exit code 2.
package main

func main() {
	for {
	}
}
Is that Go's behavior on Windows, or git-bash.exe's? I need exit code 2 internally; do I need to wrap it using the signal package?
Well, it's two-fold.
On the one hand, as @Flimzy pointed out, it's the shell intervening.
On the other hand, what is missing from his remark is why this happens.
The explanation is, again, two-fold:
A process has a default disposition for each signal, which determines how it reacts when that signal is delivered. A signal can be ignored, handled explicitly, or left at its default, which in most if not all cases means the process gets killed.
You can read more about this here.
Note that in non-trivial processes, such as programs written in Go with its intricate runtime, the default signal disposition may differ from that of "plainer" processes.
By default, the SIGINT signal is not handled, which means it kills the process.
The bash shell reports the exit status of a process killed by a fatal signal as 128 plus the signal number; more about this is in the bash manual.
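To answer the "do I need to wrap it using the signal package?" part: if you want a specific exit code on both platforms, a minimal sketch using os/signal could look like this (exiting with 2 simply mirrors the question; pick whatever code you need):

package main

import (
	"fmt"
	"os"
	"os/signal"
)

func main() {
	// Ask the Go runtime to deliver interrupt notifications to a channel
	// instead of applying the default disposition (which kills the process).
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, os.Interrupt)

	<-sigs // blocks until Ctrl-C (SIGINT on Unix, the console event on Windows)

	fmt.Fprintln(os.Stderr, "interrupted, exiting")
	os.Exit(2) // the exit status is now under the program's control
}

With this in place the shell sees an ordinary exit with status 2 rather than a death by signal.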
Update on the behaviour in Windows.
Let's first put up a quick fact sheet:
Windows does not support the concept of Unix signals, at all.
The way terminal-aware programs work on Unix-like systems is very different from the way console-aware programs work on Windows.
Say, the way Vim looks and behaves in a "Git bash" window on Windows may be very similar to how it looks in a GNOME Terminal window on a Linux-based OS, but the underlying differences are profound.
Let's now dig a bit deeper.
Unix was born without any notion of GUI and the users would interact with a Unix system using hardware terminals.
In order to support them, kernels of Unix-like OSes implement a special, standardized way for terminal-aware programs to interact with the system; if you're "into" deep-diving into technical details, I highly recommend reading the "TTY demystified" piece.
The two most important highlights of this approach are:
The terminal subsystem is used even by programs running in what the contemporary generation of freshmen calls "terminals": windows which typically start out running a shell, in which you call various command-line programs, including "full-screen" ones such as text editors.
This basically means that if you take, say, Vim or GNU Nano, it will run just fine in any graphical terminal emulator, or directly on Linux's "virtual terminals" (those textual screens you can get on a PC by hitting Ctrl-Alt-F1 or booting with the GUI turned off), or on a hardware terminal attached to the computer.
The terminal subsystem reserves certain codes a keyboard may send to it to perform certain actions, as opposed to sending those codes directly to the program attached to that terminal, and Ctrl-C is one of them: in a common default setup, pressing that combination of keys makes the terminal subsystem send the foreground process the SIGINT Unix signal.
The latter is of particular interest. You can run stty -a in a terminal window on your Linux system; among the copious output you'd see something like intr = ^C; quit = ^\;, which means Ctrl-C sends the interactive attention (SIGINT) signal and Ctrl-\ sends SIGQUIT (yes, the "INT" in "SIGINT" does not stand for "interrupt", contrary to popular belief).
You could reassign these key combos almost at will (though it's not a wise thing to do, as many pieces of software expect ^C and ^\ to be mapped the way they usually are, and do not assign their own actions to these gestures, rightfully expecting never to actually receive them).
Now back to Windows.
On Windows, there is no terminal subsystem, and no signals.
The console window on Windows was an artefact required to provide compatibility with the older MS-DOS system, where the situation was like this: Ctrl-Break would trigger a hardware interrupt usually handled by the OS, and Ctrl-C could be explicitly enabled to do the same. The console emulation on Windows carefully reproduces this behaviour, but since Windows does not have Unix-like signals, the handling of these keyboard combos is done differently, though with much the same effect:
Each console process has its own list of application-defined HandlerRoutine functions that handle CTRL+C and CTRL+BREAK signals. The handler functions also handle signals generated by the system when the user closes the console, logs off, or shuts down the system. Initially, the handler list for each process contains only a default handler function that calls the ExitProcess function.
What does all this mean for Go?
Let's first see the docs:
~$ GOOS=windows go doc os.Interrupt
package os // import "os"
var (
    Interrupt Signal = syscall.SIGINT
    Kill      Signal = syscall.SIGKILL
)
The only signal values guaranteed to be present in the os package on all systems are os.Interrupt (send the process an interrupt) and os.Kill (force the process to exit). On Windows, sending os.Interrupt to a process with os.Process.Signal is not implemented; it will return an error instead of sending a signal.
So, in a Go program running on Windows you can handle these two "signals", even though they are not really implemented as signals.
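To see the "not implemented" part of the quoted docs in action, here is a small hedged sketch; it signals its own PID, and registers a handler first so that on Unix it does not simply kill itself:

package main

import (
	"fmt"
	"os"
	"os/signal"
)

func main() {
	// Register a handler so that, on Unix, signalling ourselves below
	// does not just terminate the program.
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, os.Interrupt)

	p, err := os.FindProcess(os.Getpid())
	if err != nil {
		panic(err)
	}

	// Per the docs quoted above: nil on Unix, an error on Windows,
	// because sending os.Interrupt via Process.Signal is not implemented there.
	fmt.Println("Signal(os.Interrupt) returned:", p.Signal(os.Interrupt))
}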
Let's now move on to explaining the difference in the exit codes.
As you know by now, pressing Ctrl-C while a program is running in a terminal emulator window on a Unix-like system will make the terminal subsystem send the process the actual SIGINT signal.
If this signal is not explicitly handled, the process gets killed by the OS (as that's what the default signal disposition says).
The shell notices that a process it spawned was killed by a signal, and reports the exit status as 128 plus the signal number.
On Windows, hitting Ctrl-C makes the process's default handler call ExitProcess, which, from the point of view of the shell process, looks like a normal process exit: it cannot tell this exit apart from the one that would occur if the process called os.Exit(0) explicitly.
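A parent process written in Go can observe this difference for itself. The sketch below assumes "./looper" is a build of the empty-for-loop program from the question (the name is made up); it ignores the interrupt itself so that pressing Ctrl-C only kills the child:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"os/signal"
	"syscall"
)

func main() {
	// Keep the parent alive when Ctrl-C is pressed: on Unix the whole
	// foreground process group gets SIGINT, on Windows every process
	// attached to the console gets the Ctrl-C event.
	signal.Ignore(os.Interrupt)

	cmd := exec.Command("./looper") // hypothetical build of the looping program
	_ = cmd.Run()                   // returns once the child dies; press Ctrl-C to kill it

	if cmd.ProcessState == nil {
		fmt.Fprintln(os.Stderr, "child never started")
		return
	}

	if ws, ok := cmd.ProcessState.Sys().(syscall.WaitStatus); ok && ws.Signaled() {
		// Unix: the child was killed by a signal; a shell reports 128 + N.
		fmt.Println("killed by", ws.Signal(), "- a shell would show", 128+int(ws.Signal()))
	} else {
		// Windows (or a plain exit on Unix): just an ordinary exit code.
		fmt.Println("exited with code", cmd.ProcessState.ExitCode())
	}
}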
Related
I have created a very simple desktop app for Mac OS. Due to restrictions, I could not use Xcode. I am building the .app directory by hand, and the main executable at the moment is a shell script.
When I was trying my app out, I noticed that if I opened and closed it too quickly, the app would freeze up. At that point, I seemed unable even to force quit it, and had to rm -r the .app itself. A friend mentioned to me that Mac apps must handle SIGABRTs, and if they do not, there is a timeout period during which the app can appear frozen, which might explain what I observed.
I was looking around but was uncertain where to find more information about this. Can anyone further explain this situation? Under what circumstances will the app receive a SIGABRT, and how should it be handled? Any links or literature on this topic would be much appreciated.
In case anyone ever stumbles upon this:
So my friend was referring to Unix signals here. https://people.cs.pitt.edu/~alanjawi/cs449/code/shell/UnixSignals.htm
(to see what is available on your OS, give 'kill -l')
My main executable, in my MyApp.app/Contents/MacOS, is a shell script. So what I've found I can do is use the trap command. This specifies a behavior to perform if the executable receives one of the given signals. For example, I now add this line near the top of my shell script:
trap 'exit' 5 6
This means that if the executable receives a SIGABRT (6) or SIGTRAP (5) signal, it will perform the 'exit' command and exit. (I am not certain which signals should be handled or what the best course of action is; I guess that might depend on your own app, but this is just an example of something to do.)
Here is a resource about trap commands and unix signals: https://www.tutorialspoint.com/unix/unix-signals-traps.htm
Why does this make a difference? I believe that previously, in scenarios such as opening the app while it was already open, it was receiving a Unix signal like SIGABRT. That signal was not being handled, the app did not know what to do in that scenario, and it froze up. Though I have not confirmed that this is what was happening.
I'm trying to write an OS X app that uses a serial port. I found an example (cocoa) and got it running in Xcode 4. On the first run, it opens the port and I'm able to exchange data with the hardware.
If I try to change the port the program goes rogue. The pinwheel starts and the UI is unresponsive. I can't stop the program from Xcode, nor can I kill it from Terminal, or Force Quit. Force Quit of Xcode doesn't do it. Although the PID goes away with a kill from Terminal, the UI is still present with the merrily spinning pinwheel.
The only way out is a re-boot. Any ideas on how to track down the errant code are welcome. I'm new to Cocoa/Objective C, so simple terms are better.
Most likely it became a zombie. It should show up in ps auxww (or similar) with a 'Z' in its status. Activity Monitor might also still show it.
This is relatively common when working with hardware, such as a serial port. Zombies can arise for either of two reasons, most likely the first in this case:
The process is blocked in a kernel call of some kind that is not interruptible.
The process has exited but its parent hasn't acknowledged that (via wait() or similar).
In the first case it's usually a fundamental bug or design flaw of some kind, and you may not have any good options short of figuring out exactly what code path tickles the problem, and avoiding that.
In the second case the solution is generally simple - find the parent process of your zombie and kill it. Repeat as necessary until your zombie gets adopted by a parent process that does call wait() to reap it (launchd will do this if nothing else).
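As a small illustration of the second cause, here is a hedged sketch (the sleep command is just a convenient short-lived child) that deliberately leaves a zombie visible in ps for a few seconds before reaping it:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// A short-lived child; any command that exits quickly will do.
	cmd := exec.Command("sleep", "1")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println("child pid:", cmd.Process.Pid)

	// The child exits after about a second, but until the parent waits
	// on it, the kernel keeps its entry around: ps shows it in state Z.
	time.Sleep(5 * time.Second)

	_ = cmd.Wait() // collects the exit status; the zombie disappears
	fmt.Println("reaped; check ps again")
	time.Sleep(5 * time.Second)
}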
I have a Perl program on Windows that needs to execute cleanup actions on exit. I wrote a signal handler using sigtrap, but it doesn't always work. I can intercept Ctrl-C, but if the machine is rebooted or the program is killed some other way, neither the signal handler nor the END block is run. I've read that Windows doesn't really have signals, and that signal handling on Windows is sort of a hack in Perl. My question is: how can I handle abnormal termination the Windows way? I want to run my cleanup code regardless of how or why the program terminates (excluding events that can't be caught). I've read that Windows uses events instead of signals, but I can't find information on how to deal with Windows events in Perl.
Unfortunately, I don't have the authority to install modules from CPAN, so I'll have to use vanilla ActiveState Perl. And to make things even more interesting, most of the machines I'm using only have Perl 5.6.1.
Edit: I would appreciate any answers, even if they require CPAN modules or newer versions of Perl. I want to learn about Windows event handling in Perl, and any information would be welcome.
In all operating systems, you can always abruptly terminate any program. Think of the kill -9 command in Unix/Linux. You do that to any program, and it stops instantly. There is no way to trap it, and no way for the program to request a few more operating system cycles for cleanup.
I'm not up on the difference between Unix and Windows signals, but you can imagine why each OS must allow what we call in Unix SIGKILL - a sure and immediate way to kill any program.
Imagine you have a buggy program that intercepts a request to terminate (a SIGTERM in Unix), and it enters a cleanup phase. Instead of cleaning up, the program instead gets stuck in a loop that requests more and more memory. If you couldn't pull the SIGKILL emergency cord, you'd be stuck.
The ultimate SIGKILL, of course, is the plug in the wall. Pull it, and the program (along with everything else) comes to a screeching halt. There's no way your program can say "Hmm... the power is out and the machine has stopped running... Better start up the old cleanup routine!"
So, there's no way you can trap every program-termination signal, and your program will have to account for that. What you can do is check whether your program needs to do a cleanup before running. On Windows, you can put an entry in the registry when your program starts up, and remove it when it shuts down and does its cleanup. On Unix, you can put a file or directory whose name starts with a period in the $ENV{HOME} directory.
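The marker idea is language-agnostic; here is a hedged sketch of the file-based variant in Go (to match the code elsewhere on this page), with a made-up file name and location:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// markerPath returns a hypothetical location for the "I am running" flag file.
func markerPath() string {
	home, _ := os.UserHomeDir()
	return filepath.Join(home, ".myapp-running")
}

func main() {
	if _, err := os.Stat(markerPath()); err == nil {
		// The marker survived the last run: we were killed without cleanup.
		fmt.Println("previous run ended abnormally; running recovery first")
		// ... recover/undo whatever the previous run left half-done ...
	}

	// Drop the marker for this run.
	if err := os.WriteFile(markerPath(), []byte("running"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "cannot create marker:", err)
		os.Exit(1)
	}
	defer os.Remove(markerPath()) // a normal exit removes the marker

	// ... the program's real work goes here ...
}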
Back in the 1980s, I wrote accounting software for a very proprietary OS. When the user pressed the ESCAPE button, we were supposed to return immediately to the main menu. If the user was entering an order and took stuff out of inventory, the transaction would be incomplete, and inventory would show the items as sold even though the order was incomplete. The solution was to check for these incomplete orders the next time someone entered an order, and back out the changes in inventory before entering the new one. Your program may have to do something similar.
When a console application is started from another console application, how does console ownership work?
I see four possibilities:
The second application inherits the console from the first application for its lifetime, with the console returning to the original owner on exit.
Each application has its own console. Windows then somehow merges the contents of the two into the "console" visible to the user.
The second application gets a handle to the console that belongs to the first application.
The console is placed into shared memory and both applications have equal "ownership"
It's quite possible that I missed something and none of these four options adequately describe what Windows does with its consoles.
If the answer is close to option 4, my follow-up question is: which of the two processes is responsible for managing the window (handling graphical updates when the screen needs to be refreshed/redrawn, etc.)?
A concrete example: Run CMD. Then, using CMD, run [console application]. The [console application] will write to what appears to be the same console window that CMD was using.
None of your four possibilities is actually the case, and the answer to your follow-on question, "Which of the two processes is responsible for managing the window?", is that neither process is responsible. TUI programs don't have to know anything about windows at all, and, under the covers, aren't necessarily even plumbed in to the GUI.
Consoles are objects, accessed via handles just like files, directories, pipes, processes, and threads. A single process doesn't "own" a console via its handle to it any more than a process "owns" any file that it has an open handle to. Handles to consoles are inherited by child processes from their parents in the same way that all other (inheritable) handles are. Your TUI application, spawned by CMD, simply inherits the standard handles that CMD said that it should inherit, when it called CreateProcess() — which are usually going to be CMD's standard input, output, and error (unless the command-line told CMD to use some other handles as the child's standard input, output, and error).
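The inheritance part is easy to see from a program that spawns a child itself. A hedged sketch using Go's os/exec (which wraps CreateProcess on Windows):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// The child gets our standard input/output/error handles, which are
	// (usually) handles to the very same console we inherited ourselves.
	cmd := exec.Command("cmd", "/c", "echo hello from the child")
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	_ = cmd.Run() // the child's output appears in the same console window
}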
Consoles aren't dependent upon CMD. They exist as long as there are (a) any open handles to the console's input or output buffers or (b) any processes otherwise "attached" to the console. So in your example you could kill CMD, but only when you terminated the child process too would the console actually be destroyed.
The process that is in charge of displaying the GUI windows in which consoles are presented is, in Windows NT prior to version 6.1, CSRSS, the Client-Server Runtime SubSystem. The window handling code is in WINSRV.DLL, which contains the "console server" that — under the covers — Win32 programs performing console I/O make LPC calls to. In Windows NT 6.1, this functionality, for reasons covered by Raymond Chen, moved out of CSRSS into a less-privileged process that CSRSS spawns.
My guess is somewhere between 3 and 4. The console is a self-standing object, which has standard input, output and error streams. These streams are attached to the first process that uses the console. Subsequent processes can also inherit these streams if not redirected (e.g. running a command with redirect to a file.)
Normally there is no contention, since parent processes usually wait for their child process to complete, and asynchronous processes typically start their own console (e.g. try "start cmd" in a command prompt) or redirect standard output.
However, there is nothing to stop both processes writing to the output stream at the same time; the streams are shared. This can be a problem when using some runtime libraries, since writes to standard output/error may not be immediately flushed, leading to mixed, garbled output. In general, having two processes actively writing to the same output stream is not a good idea unless you take measures to coordinate their output through concurrency primitives like mutexes, events and the like.
The way the SDK talks about it strongly resembles 1. It is an option with CreateProcess, described as follows:
CREATE_NEW_CONSOLE
The new process has a new console, instead of inheriting its parent's console (the default). For more information, see Creation of a Console.
Output, however, happens through handles; you'd get one with GetStdHandle(). Passing STD_OUTPUT_HANDLE returns the console handle, assuming output isn't redirected. Actual output is done through WriteFile() or WriteConsole()/WriteConsoleOutput(). If both processes keep writing output to the handle, their output will be randomly intermingled. This is otherwise indistinguishable from what would happen when two programs write to the same file handle.
Logically, there's a screen buffer associated with a console. You can tinker with it via SetConsoleScreenBufferXxx(). From that point of view you could call it shared memory. The actual implementation is undiscoverable; handles abstract it away, like the rest of the Win32 API. It is sure to have changed considerably in Windows 7 with the new conhost.exe process.
CMD 'owns' the console. When it creates a process for an app, that app inherits handles to the console. It can read and write those. When the process goes away, CMD continues ownership.
Note: I'm not entirely sure that 'ownership' is the right word here. Windows will close the Console when CMD exits, but that may be a simple setting.
Each application will run in its own AppDomain. Each AppDomain should be running its own console.
Ah, you're right. I was thinking about running executables within a process and forgot they start their own process - I didn't drill down far enough.
I think it's spelled out fairly well in the documentation.
I've been forced into using a command line in Windows and wondered if there were Linux-like keyboard shortcuts. I googled and didn't find what I was looking for.
Things like ^C, ^Z and such?
Try Ctrl+Break: some programs respond to it instead of Ctrl+C. On some keyboards Ctrl+Break translates to Ctrl+Fn+Pause.
Note also that nothing can cancel synchronous network I/O (such as net view \\invalid) on Windows before Vista.
You can trap ^C on Windows with SIGINT, just like Linux. The Windows shell, such as it is, doesn't support Unix style job control (at least not in a way analogous to Unix shells), and ^Z is actually the ^D analog for Windows.
There are two keyboard combinations that can be used to stop a process on the Windows command line.
Ctrl+C is the "nicer" method. Programmers can handle this in software. It's possible to write programs that completely ignore Ctrl+C as a SIGINT signal, or handle Ctrl+C like a regular keyboard combination.
Ctrl+Break is the "harder" method: it always sends a SIGBREAK signal and cannot be overridden in software.
Ctrl-C does a similar thing in Windows as it does in Linux.