Sending SIGTSTP suspends entire program - go

I'm trying to send a SIGTSTP signal to a child process. The problem I'm facing is that sending SIGTSTP to the child process halts my entire program and the caller is unable to proceed with execution of the rest of the program. Here's my code
cmd := exec.Command("ping", "google.com")
stdout, _ := cmd.StdoutPipe()
cmd.Start()
io.Copy(os.Stdout, stdout)
cmd.Wait()
Running this code, I get the output from ping google.com printed on the terminal. When I hit ctrl-z, the output stops, but the program is no longer able to accept signals or do anything else unless SIGCONT is sent to the child process. Am I missing something? How do I suspend the child process but let the caller resume execution? Thanks.

Wait waits for the command to exit. Your child process isn't exiting, it's just paused, so Wait doesn't return.
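One way to get the behavior asked for, as a minimal sketch rather than a drop-in fix (it assumes a Unix system and that only the child should be stopped by ctrl-z): start the child in its own process group, move the blocking io.Copy and Wait into a goroutine, and have the parent catch SIGTSTP and forward it to the child:
package main

import (
    "io"
    "os"
    "os/exec"
    "os/signal"
    "syscall"
)

func main() {
    cmd := exec.Command("ping", "google.com")
    // Put the child in its own process group so ctrl-z at the terminal
    // no longer stops it together with the parent.
    cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
    stdout, _ := cmd.StdoutPipe()
    cmd.Start()

    // Keep the blocking copy and Wait off the main goroutine so the
    // caller can keep running while the child is stopped.
    done := make(chan error, 1)
    go func() {
        io.Copy(os.Stdout, stdout)
        done <- cmd.Wait()
    }()

    // Catching SIGTSTP via Notify disables the default stop behavior
    // for this process; forward the signal to the child only.
    sig := make(chan os.Signal, 1)
    signal.Notify(sig, syscall.SIGTSTP)
    for {
        select {
        case <-sig:
            cmd.Process.Signal(syscall.SIGTSTP) // suspend just the child
        case err := <-done:
            _ = err // child exited; inspect err if needed
            return
        }
    }
}
Resuming is the mirror image: send syscall.SIGCONT to the child's process.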

Terminate shell pipe from interactive go cli

I have a Go program that consumes "live" input from a shell pipe, eg:
tail -f some/file | my-program
my-program is an interactive program built with rivo/tview. I want to be able to close my program with Ctrl-C and have it also terminate the tail -f that supplies input to it.
Currently I have to hit Ctrl-C twice to get back to my shell prompt. Any way I can get back to my prompt by hitting Ctrl-C once?
Adjusted my program per #torek's explanation of process groups and the observation that I can get the process group ID using unix.Getpgid(pid):
import (
    "log"
    "os"

    "golang.org/x/sys/unix"
)

func main() {
    // do stuff with piped input

    pid := os.Getpid()
    pgid, err := unix.Getpgid(pid)
    if err != nil {
        log.Fatalf("could not get process group id for pid: %v\n", pid)
    }

    processGroup, err := os.FindProcess(pgid)
    if err != nil {
        log.Fatalf("could not find process for pid: %v\n", pgid)
    }

    processGroup.Signal(os.Interrupt)
}
This delivers my desired behavior from my original question.
I opted to not use syscall because of the warning I found:
Deprecated: this package is locked down. Callers should use the corresponding package in the golang.org/x/sys repository instead. That is also where updates required by new systems or versions should be applied. See https://golang.org/s/go1.4-syscall for more information.
I plan to update my program to detect whether or not it was given a pipe using the strategy outlined in this article, so when a pipe is detected, I'll do the above process group signaling on interrupt.
Any issues with that?
We'll assume a Unix-like system, using a shell that understands and engages in job control (and they all do now). When you run a command, the shell creates something called a process group or "pgroup" to hold each of the processes that make up the command. If the command is a pipeline (as this one is), each process in the pipeline gets the same pgroup-ID (see setpgid).
If the command is run in the foreground (without &), the controlling terminal has this particular pgid assigned to it. Pressing one of the signal-generating keys, such as CTRL-C or CTRL-\, sends the corresponding signal (SIGINT and SIGQUIT in these cases) to the pgroup, using an internal killpg or equivalent. This sends the signal to every member of the pgroup.
(Backgrounding a process is simply *cough* a matter of taking back the pgid on the controlling tty, then restarting the processes in the pipeline. To make that happen is not so simple, though, as indicated by the "restarting" here.)
The likely source of the problem here is that an interactive program will place the controlling terminal into cbreak or raw mode and disable some or all signalling from keyboard keys, so that, for instance, CTRL-C no longer causes the kernel's tty module to send a signal at all. Instead, when the program sees a key that should cause suspension (CTRL-Z) or termination, it has to do its own suspending or terminating. Programmers sometimes assume that this consists of simply suspending or terminating themselves, but since the rest of the pipeline never got the signal in question, that only suffices when the interactive program is the entire pipeline.
The fix is to have the program send the signal to its own pgroup, after doing any necessary cleanup (temporarily or permanently) of the controlling terminal.
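A minimal sketch of that pgroup-signaling fix, assuming a Unix system and the same golang.org/x/sys/unix package used above (kill(2) treats a negative pid as "the process group -pid"):
package main

import (
    "log"
    "os"

    "golang.org/x/sys/unix"
)

// signalOwnPgroup sends sig to every member of the caller's process
// group, i.e. the whole shell pipeline, not just this one process.
func signalOwnPgroup(sig unix.Signal) {
    pgid, err := unix.Getpgid(os.Getpid())
    if err != nil {
        log.Fatalf("could not get process group id: %v", err)
    }
    // Any terminal cleanup (leaving raw/cbreak mode, e.g. stopping the
    // tview application) should happen before this point.
    if err := unix.Kill(-pgid, sig); err != nil {
        log.Fatalf("could not signal process group %d: %v", pgid, err)
    }
}

func main() {
    signalOwnPgroup(unix.SIGINT)
}
Note that os.FindProcess(pgid).Signal(...) in the snippet above signals only the single process whose pid equals the pgid (the pipeline leader), whereas kill with a negative pid reaches every member of the group, which is what this explanation calls for.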

Detect stdout connected to a pipe was closed on the read end in golang

Is there a portable way in Golang to detect that os.Stdout, connected by a pipe to another process, was closed on the read end, without writing something to it? I am writing a command line helper that should exit if its stdout is closed because, for example, the process connected to the read end of the pipe exited. For example, when run from a shell like:
go run my_helper.go | sleep 1
the helper should exit when the sleep process exits and closes the read end of the pipe, without waiting for an explicit kill signal or for input that triggers an eventual non-zero write to stdout.
I thought I could just poll stdout periodically by writing zero-length slices, but it seems Go optimizes away zero-length writes and does nothing in such cases. I.e.,
n, err := os.Stdout.Write(nil)
returns 0, nil even if os.Stdout was closed on the read end.
As the code needs to run only on Linux, I can work around this in principle using syscall.Select and waiting for write errors on os.Stdout, but maybe I missed something.
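A minimal sketch of that Linux-only approach, using poll(2) via golang.org/x/sys/unix rather than select: per pipe(7), the write end of a pipe is flagged with POLLERR once all readers have closed it, so a zero-timeout poll on stdout can detect the condition without writing anything:
package main

import (
    "log"
    "os"
    "time"

    "golang.org/x/sys/unix"
)

// stdoutReaderGone reports whether the read end of the pipe on stdout
// has been closed (Linux-specific assumption: poll flags the write end
// of a pipe with POLLERR once every reader is gone, see pipe(7)).
func stdoutReaderGone() bool {
    fds := []unix.PollFd{{Fd: int32(os.Stdout.Fd())}}
    if n, err := unix.Poll(fds, 0); err != nil || n == 0 {
        return false // nothing reported (or interrupted); assume still open
    }
    return fds[0].Revents&unix.POLLERR != 0
}

func main() {
    for {
        if stdoutReaderGone() {
            log.Println("read end of stdout pipe closed, exiting")
            return
        }
        time.Sleep(time.Second)
    }
}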

Detect zombie child process

My golang program starts a service program which is supposed to run forever, like this:
cmd := exec.Command("/path/to/service")
cmd.Start()
I do NOT want to wait for the termination of "service" because it is supposed to run forever. However, if service starts with some error (e.g. it will terminate if another instance is already running), the child process will exit and become a zombie.
My question is: after cmd.Start(), can I somehow detect whether the child process is still running, rather than has become a zombie? The preferred way might be:
if cmd.Process.IsZombie() {
    ... ...
}
or,
procStat := cmd.GetProcessStatus()
if procStat.Zombie == true {
    ... ...
}
i.e. I hope there is some way to get the status of a (child) process without waiting for its exit code, or to "peek" at its status without blocking.
Thanks!
Judging from the docs, the only way to get the process state is to call os.Process.Wait. So it seems you will have to call Wait in a goroutine, and then you can easily check whether that goroutine has exited yet:
var cmd exec.Cmd
done := make(chan error, 1)
go func() {
    done <- cmd.Wait()
}()

select {
case err := <-done:
    // inspect err to check if service exited normally
default:
    // not done yet
}
The best solution (for me) is:
add a signal handler that listens for SIGCHLD
on receiving SIGCHLD, call cmd.Wait()
This way, the zombie process will disappear.
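A minimal sketch of that SIGCHLD approach, assuming the program manages just this one child (SIGCHLD is also delivered for stops and continues, which this sketch ignores):
package main

import (
    "log"
    "os"
    "os/exec"
    "os/signal"
    "syscall"
)

func main() {
    cmd := exec.Command("/path/to/service")
    if err := cmd.Start(); err != nil {
        log.Fatal(err)
    }

    // SIGCHLD arrives when the child changes state (here, most likely
    // because it exited); reaping it with Wait removes the zombie.
    sigchld := make(chan os.Signal, 1)
    signal.Notify(sigchld, syscall.SIGCHLD)
    go func() {
        <-sigchld
        log.Printf("service exited early: %v", cmd.Wait())
    }()

    // ... rest of the program ...
    select {}
}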

SIGTERM signal handling confusion

I am running a program which invokes a shell script (for discussion sh1 with pid 100).
This script in turn invokes another script (for discussion, sh2 with pid 101) and waits for it to finish. sh2 (the child script) takes about 50 seconds to finish.
The way I invoke sh2: /bin/sh2.sh
While waiting for the child to be done, I try to terminate sh1 (using kill -15 100). I have a handler function in sh1 to handle this signal. However, I observe that sh1 (the parent script) does not get terminated until the child is done with its work (after 50 seconds), and only after that is the signal handled.
I modified my child script to take 30 seconds to finish, and I observe that after issuing the SIGTERM to sh1, it then takes around 30 seconds to terminate.
Is this the expected behavior when handling SIGTERM, that is, to remain blocked by the child process and only then handle the signal? Doesn't the process get interrupted for signal handling?
Signal handling in the parent script:
function clean_up()
{
    # Do the cleanup
}
trap "clean_up $$; exit 0" TERM
If sh1 invokes sh2 and waits for it to finish, then it doesn't run the trap for the signal until after sh2 finishes. That is, if this is sh1:
#!/bin/sh
trap 'echo caught signal delayed' SIGTERM
sh2
then sh1 will catch the signal and do nothing until sh2 finishes, and then it will execute the trap. If you want the trap to fire as soon as the signal is sent, you can run sh2 asynchronously and wait for it explicitly:
#!/bin/sh
trap 'echo caught signal' SIGTERM
sh2&
wait
Unfortunately, this doesn't re-enter the wait. If you need to continue waiting, it's not really possible to do reliably, but you can get close with:
#!/bin/sh
trap 'echo caught signal' SIGTERM
sh2&
(exit 129) # prime the loop (eg, set $?) to simulate do/while
while test $? -gt 128; do wait; done
This isn't reliable because you can't tell the difference between catching a signal yourself and sh2 being terminated by a signal. If you need this to be reliable, you should re-write sh1 in a language which allows better control of signals.

golang handling kill in a process started by cmd.Start

I have two Go programs. ProgA starts ProgB using cmd.Start(). From ProgA I try to kill ProgB, but ProgB shouldn't get killed immediately; it has to do some cleanup before dying. So I'm using signal.Notify in ProgB to handle syscall.SIGKILL, but whenever ProgA calls progb.Process.Kill(), it doesn't seem to notify ProgB (nothing gets written to the sigc channel).
In ProgB I have the notify set up like this:
sigc := make(chan os.Signal, 1)
signal.Notify(sigc, syscall.SIGKILL)
go func() {
    fmt.Println("started listening")
    <-sigc
    fmt.Println("sig term")
    cleanUp()
    os.Exit(1)
}()
someLongRunningCode()
Is there something I'm missing? I'm sure that ProgA sends a SIGKILL because cmd.Process.Kill() internally does a Process.Signal(SIGKILL).
SIGKILL cannot be trapped by the receiving process; the kernel forces process termination. You may send SIGTERM to the process instead and handle it on the other side; that is the conventional way to stop an application.
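A minimal sketch of that convention, with cleanUp and someLongRunningCode standing in for the routines from the question: on the ProgA side, replace progb.Process.Kill() with progb.Process.Signal(syscall.SIGTERM); on the ProgB side, trap SIGTERM:
package main

import (
    "fmt"
    "os"
    "os/signal"
    "syscall"
    "time"
)

func main() {
    sigc := make(chan os.Signal, 1)
    signal.Notify(sigc, syscall.SIGTERM, syscall.SIGINT)
    go func() {
        <-sigc
        fmt.Println("sig term")
        cleanUp()   // do the cleanup before dying
        os.Exit(1)
    }()
    someLongRunningCode()
}

// Stand-ins for the routines named in the question.
func cleanUp()             { fmt.Println("cleaning up") }
func someLongRunningCode() { time.Sleep(time.Hour) }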
