How to immediately exit from an exec command via context cancellation? - go

I am streaming command output to a client with this code. The command is built with context cancellation. The client sends a "cancel" request to the server, which notifies the client's cancelCh, which in turn triggers cancel().
The issue I'm having is that when the command is cancelled, the rest of the command output still streams to the client as if the command had not been cancelled. Only after the command completes is exit status 1 received, which confirms that the command was indeed cancelled.
If I move the done channel receive to after cmd.Wait() instead of before, I get the behavior I expect: the client immediately gets exit status 1 and no more data is sent. But that seems to introduce the data race described in https://github.com/golang/go/issues/19685. That issue is old, but I think it's still relevant.
What is the proper way to stream output to the client in real time while also exiting immediately on context cancellation?
go func() {
    defer func() {
        cancel()
    }()
    <-client.cancelCh
}()

output := make(chan []byte)
go execute(cmd, output)
for data := range output {
    fmt.Fprintf(w, "data: %s\n\n", data)
    flusher.Flush()
}
func execute(cmd *exec.Cmd, output chan []byte) {
    defer close(output)

    cmdReader, err := cmd.StdoutPipe()
    if err != nil {
        output <- []byte(fmt.Sprintf("Error getting stdout pipe: %v", err))
        return
    }
    cmd.Stderr = cmd.Stdout

    scanner := bufio.NewScanner(cmdReader)
    done := make(chan struct{})
    go func() {
        for scanner.Scan() {
            output <- scanner.Bytes()
        }
        done <- struct{}{}
    }()

    err = cmd.Start()
    if err != nil {
        output <- []byte(fmt.Sprintf("Error executing: %v", err))
        return
    }

    <-done
    err = cmd.Wait()
    if err != nil {
        output <- []byte(err.Error())
    }
    //<-done
}
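One approach that gives the behavior you describe is to make the producer context-aware, so it stops forwarding output the moment the context is cancelled instead of draining the pipe first. Below is a minimal sketch; executeCtx is a hypothetical variant of your execute, and it assumes cmd was created with exec.CommandContext using the same ctx (so cancellation also kills the process). Because Wait is only called once the scanner is no longer reading, it avoids the race discussed in golang/go#19685:

import (
    "bufio"
    "context"
    "fmt"
    "os/exec"
)

// executeCtx is a sketch: like execute, but it stops forwarding as soon
// as ctx is cancelled instead of draining the rest of the pipe.
func executeCtx(ctx context.Context, cmd *exec.Cmd, output chan []byte) {
    defer close(output)
    cmdReader, err := cmd.StdoutPipe()
    if err != nil {
        output <- []byte(fmt.Sprintf("Error getting stdout pipe: %v", err))
        return
    }
    cmd.Stderr = cmd.Stdout
    if err := cmd.Start(); err != nil {
        output <- []byte(fmt.Sprintf("Error executing: %v", err))
        return
    }
    scanner := bufio.NewScanner(cmdReader)
    for scanner.Scan() {
        // Copy the line: Scan reuses its internal buffer.
        line := append([]byte(nil), scanner.Bytes()...)
        select {
        case output <- line:
        case <-ctx.Done():
            // Stop forwarding immediately; whatever is still buffered in
            // the pipe is dropped. Wait reaps the process that
            // exec.CommandContext killed and closes the pipe.
            if err := cmd.Wait(); err != nil {
                output <- []byte(err.Error())
            }
            return
        }
    }
    // The scanner has stopped reading, so calling Wait here does not
    // race with the pipe reads.
    if err := cmd.Wait(); err != nil {
        output <- []byte(err.Error())
    }
}

If the process is killed while Scan is blocked on a read, the pipe is closed, Scan returns false, and the function falls through to the final Wait, so the client still gets the exit error promptly. The consumer loop (for data := range output) stays unchanged.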

Related

Reading from a named pipe won't give any output and blocks the code indefinitely

I wrote a piece of code for IPC purposes. The expected behaviour is that the code reads the content from the named pipe and prints the string (with Send("log", buff.String())). First I open the named-pipe reader inside the goroutine; while the reader is open I send a signal that the data can be written to the named pipe (with Send("datarequest", "")). Here is the code:
var wg sync.WaitGroup
wg.Add(1)
go func() {
    // reader part
    file, err := os.OpenFile("tmp/"+os.Args[1], os.O_RDONLY, os.ModeNamedPipe)
    if err != nil {
        Send("error", err.Error())
    }
    var buff bytes.Buffer
    _, err = io.Copy(&buff, file)
    Send("log", buff.String())
    if err != nil {
        Send("error", err.Error())
    }
    wg.Done()
}()
Send("datarequest", "")
wg.Wait()
And here is the code which executes when the signal is sent:
// writer part
file, err := os.OpenFile("tmp/"+execID, os.O_WRONLY, 0777)
if err != nil {
    c <- "[error] error opening file: " + err.Error()
}
bytedata, _ := json.Marshal(moduleParameters)
file.Write(bytedata)
So the behaviour I get is that the code blocks indefinitely on the copy. I really don't know why this happens. When I test it with cat in the terminal I do get the intended result, so my question is: how do I get the same result with code?
Edit
The execID is the same as os.Args[1]
The writer should close the file after it's done sending, using file.Close(). Note that file.Close() may return an error.
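For completeness, a sketch of the writer with the close added, reusing the identifiers from the question (execID, moduleParameters, and the c channel are assumed to exist as in the original):

// writer part
file, err := os.OpenFile("tmp/"+execID, os.O_WRONLY, 0777)
if err != nil {
    c <- "[error] error opening file: " + err.Error()
    return
}
bytedata, _ := json.Marshal(moduleParameters)
if _, err := file.Write(bytedata); err != nil {
    c <- "[error] error writing: " + err.Error()
}
// Closing the write end delivers EOF to the reader, which lets
// io.Copy on the other side return instead of blocking forever.
if err := file.Close(); err != nil {
    c <- "[error] error closing file: " + err.Error()
}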

Race condition reading stdout and stderr of child process

In Go, I'm trying to:
start a subprocess
read from stdout and stderr separately
implement an overall timeout
After much googling, we've come up with some code that seems to do the job, most of the time. But there seems to be a race condition whereby some output is not read.
The problem seems to only occur on Linux, not Windows.
Following the simplest possible solution found with google, we tried creating a context with a timeout:
context.WithTimeout(context.Background(), 10*time.Second)
While this worked most of the time, we were able to find cases where it would just hang forever. There was some aspect of the child process that caused this to deadlock. (Something to do with grandchildren that were not sufficiently disassociated from the child process, and thus caused the child to never completely exit.)
Also, it seemed that in some cases the error returned when the timeout occurs would indicate a timeout, but would only be delivered after the process had actually exited (thus making the whole concept of the timeout useless).
func GetOutputsWithTimeout(command string, args []string, timeout int) (io.ReadCloser, io.ReadCloser, int, error) {
    start := time.Now()
    procLogger.Tracef("Initializing %s %+v", command, args)
    cmd := exec.Command(command, args...)

    // get pipes to standard output/error
    stdout, err := cmd.StdoutPipe()
    if err != nil {
        return emptyReader(), emptyReader(), -1, fmt.Errorf("cmd.StdoutPipe() error: %+v", err.Error())
    }
    stderr, err := cmd.StderrPipe()
    if err != nil {
        return emptyReader(), emptyReader(), -1, fmt.Errorf("cmd.StderrPipe() error: %+v", err.Error())
    }

    // setup buffers to capture standard output and standard error
    var buf bytes.Buffer
    var ebuf bytes.Buffer

    // create a channel to capture any errors from wait
    done := make(chan error)

    // create a semaphore to indicate when both pipes are closed
    var wg sync.WaitGroup
    wg.Add(2)
    go func() {
        if _, err := buf.ReadFrom(stdout); err != nil {
            procLogger.Debugf("%s: Error Slurping stdout: %+v", command, err)
        }
        wg.Done()
    }()
    go func() {
        if _, err := ebuf.ReadFrom(stderr); err != nil {
            procLogger.Debugf("%s: Error Slurping stderr: %+v", command, err)
        }
        wg.Done()
    }()

    // start process
    procLogger.Debugf("Starting %s", command)
    if err := cmd.Start(); err != nil {
        procLogger.Errorf("%s: failed to start: %+v", command, err)
        return emptyReader(), emptyReader(), -1, fmt.Errorf("cmd.Start() error: %+v", err.Error())
    }

    go func() {
        procLogger.Debugf("Waiting for %s (%d) to finish", command, cmd.Process.Pid)
        err := cmd.Wait() // this can be 'forced' by the killing of the process
        procLogger.Tracef("%s finished: errStatus=%+v", command, err) // err could be nil here
        // notify select of completion, and the status
        done <- err
    }()

    // Wait for timeout or completion.
    select {
    // Timed out
    case <-time.After(time.Duration(timeout) * time.Second):
        elapsed := time.Since(start)
        procLogger.Errorf("%s: timeout after %.1f\n", command, elapsed.Seconds())
        if err := TerminateTree(cmd); err != nil {
            return ioutil.NopCloser(&buf), ioutil.NopCloser(&ebuf), -1,
                fmt.Errorf("failed to kill %s, pid=%d: %+v",
                    command, cmd.Process.Pid, err)
        }
        wg.Wait() // this *should* take care of waiting for stdout and stderr to be collected after we killed the process
        return ioutil.NopCloser(&buf), ioutil.NopCloser(&ebuf), -1,
            fmt.Errorf("%s: timeout %d s reached, pid=%d process killed",
                command, timeout, cmd.Process.Pid)
    // Exited normally or with a non-zero exit code
    case err := <-done:
        wg.Wait() // this *should* take care of waiting for stdout and stderr to be collected after the process terminated naturally.
        elapsed := time.Since(start)
        procLogger.Tracef("%s: Done after %.1f\n", command, elapsed.Seconds())
        rc := -1
        // Note that we have to use a go1.10-compatible mechanism.
        if err != nil {
            procLogger.Tracef("%s exited with error: %+v", command, err)
            exitErr, ok := err.(*exec.ExitError)
            if ok {
                ws := exitErr.Sys().(syscall.WaitStatus)
                rc = ws.ExitStatus()
            }
            procLogger.Debugf("%s exited with status %d", command, rc)
            return ioutil.NopCloser(&buf), ioutil.NopCloser(&ebuf), rc,
                fmt.Errorf("%s: process done with error: %+v",
                    command, err)
        } else {
            ws := cmd.ProcessState.Sys().(syscall.WaitStatus)
            rc = ws.ExitStatus()
        }
        procLogger.Debugf("%s exited with status %d", command, rc)
        return ioutil.NopCloser(&buf), ioutil.NopCloser(&ebuf), rc, nil
    }
    //NOTREACHED: should not reach this line!
}
Calling GetOutputsWithTimeout("uname", []string{"-mpi"}, 10) will return the expected single line of output most of the time. But sometimes it will return no output, as if the goroutine that reads stdout didn't start soon enough to "catch" all the output (or exited early?). The "most of the time" strongly suggests a race condition.
We will also sometimes see errors from the goroutines about "file already closed" (this seems to happen with the timeout condition, but will happen at other "normal" times as well).
I would have thought that starting the goroutines before the cmd.Start() would have ensured that no output would be missed, and that using the WaitGroup would guarantee they would both complete before reading the buffers.
So how are we missing output? Is there still a race condition between the two "reader" goroutines and the cmd.Start()? Should we ensure those two are running using yet another WaitGroup?
Or is there a problem with the implementation of ReadFrom()?
Note that we are currently using go1.10 due to backward-compatibility problems with older OSs, but the same effect occurs with go1.12.4.
Or are we overthinking this, and a simple implementation with context.WithTimeout() would do the job?
But sometimes it will return no output, as if the goroutine that reads stdout didn't start soon enough to "catch" all the output
This is impossible, because a pipe can't "lose" data. If the process is writing to stdout and the Go program isn't reading yet, the process will block.
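A quick way to convince yourself of this (a self-contained sketch, assuming a Unix-like system with an echo binary): even if the parent starts reading well after the child has written and exited, the data is still there, because the pipe buffers it; and if the output exceeded the pipe buffer, the child would simply block in its write until the parent read some of it.

package main

import (
    "fmt"
    "io/ioutil"
    "os/exec"
    "time"
)

func main() {
    cmd := exec.Command("echo", "hello")
    stdout, _ := cmd.StdoutPipe()
    cmd.Start()
    time.Sleep(time.Second)        // deliberately start reading "late"
    b, _ := ioutil.ReadAll(stdout) // still receives "hello\n"
    fmt.Printf("%q\n", string(b))
    cmd.Wait()
}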
The simplest way to approach the problem is:
Launch goroutines to collect stdout and stderr
Launch a timer that kills the process
Start the process
Wait for it to finish (or be killed by the timer) with .Wait()
If the timer fired, return a timeout error
Handle the wait error
func GetOutputsWithTimeout(command string, args []string, timeout int) ([]byte, []byte, int, error) {
    cmd := exec.Command(command, args...)

    // get pipes to standard output/error
    stdout, err := cmd.StdoutPipe()
    if err != nil {
        return nil, nil, -1, fmt.Errorf("cmd.StdoutPipe() error: %+v", err.Error())
    }
    stderr, err := cmd.StderrPipe()
    if err != nil {
        return nil, nil, -1, fmt.Errorf("cmd.StderrPipe() error: %+v", err.Error())
    }

    // setup buffers to capture standard output and standard error
    var stdoutBuf, stderrBuf []byte

    // create 3 goroutines: stdout, stderr, timer.
    // Use a waitgroup to wait.
    var wg sync.WaitGroup
    wg.Add(2)
    go func() {
        var err error
        if stdoutBuf, err = ioutil.ReadAll(stdout); err != nil {
            log.Printf("%s: Error Slurping stdout: %+v", command, err)
        }
        wg.Done()
    }()
    go func() {
        var err error
        if stderrBuf, err = ioutil.ReadAll(stderr); err != nil {
            log.Printf("%s: Error Slurping stderr: %+v", command, err)
        }
        wg.Done()
    }()

    t := time.AfterFunc(time.Duration(timeout)*time.Second, func() {
        cmd.Process.Kill()
    })

    // start process
    if err := cmd.Start(); err != nil {
        t.Stop()
        return nil, nil, -1, fmt.Errorf("cmd.Start() error: %+v", err.Error())
    }

    err = cmd.Wait()
    timedOut := !t.Stop() // Stop reports false if the timer already fired (and killed the process)
    wg.Wait()

    // check if the timer timed out.
    if timedOut {
        return stdoutBuf, stderrBuf, -1,
            fmt.Errorf("%s: timeout %d s reached, pid=%d process killed",
                command, timeout, cmd.Process.Pid)
    }
    if err != nil {
        rc := -1
        if exitErr, ok := err.(*exec.ExitError); ok {
            rc = exitErr.Sys().(syscall.WaitStatus).ExitStatus()
        }
        return stdoutBuf, stderrBuf, rc,
            fmt.Errorf("%s: process done with error: %+v",
                command, err)
    }
    // cmd.Wait docs say that if err == nil, the exit code is 0
    return stdoutBuf, stderrBuf, 0, nil
}
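As a footnote on the grandchild problem mentioned in the question (the question's TerminateTree is not shown, so this is only an assumption about what such a function might do): on Linux, a common approach is to start the child in its own process group and signal the whole group on timeout, so descendants that inherited the pipes also die and the reader goroutines see EOF. A sketch, assuming Linux (Setpgid is not available on Windows):

cmd := exec.Command(command, args...)
// Put the child in its own process group so it and its
// descendants can be signalled together.
cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
if err := cmd.Start(); err != nil {
    // handle the start error
}
// ... later, on timeout: a negative pid targets the whole group.
syscall.Kill(-cmd.Process.Pid, syscall.SIGKILL)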

Creating waiting/busy indicator for executed process

I've got a program which executes a child process like this:
cmd := exec.Command("npm", "install")
log.Printf("Running command and waiting for it to finish...")
err := cmd.Run()
log.Printf("Command finished with error: %v", err)
While this command is running it downloads and installs npm packages, which takes somewhere between 10 and 40 seconds depending on the network. The user doesn't know what is happening until output appears on stdout. Is there something I can use that prints something to the CLI to make it clearer that something is happening, some busy indicator (of any type), until the stdout is printed to the CLI?
You may use another goroutine to print something (like a dot) periodically, say every second. When the command completes, signal that goroutine to terminate.
Something like this:
func indicator(shutdownCh <-chan struct{}) {
    ticker := time.NewTicker(time.Second)
    defer ticker.Stop()
    for {
        select {
        case <-ticker.C:
            fmt.Print(".")
        case <-shutdownCh:
            return
        }
    }
}

func main() {
    cmd := exec.Command("npm", "install")
    log.Printf("Running command and waiting for it to finish...")

    // Start indicator:
    shutdownCh := make(chan struct{})
    go indicator(shutdownCh)

    err := cmd.Run()
    close(shutdownCh) // Signal indicator() to terminate

    fmt.Println()
    log.Printf("Command finished with error: %v", err)
}
If you want to start a new line after every 5 dots, this is how it can be done:
func indicator(shutdownCh <-chan struct{}) {
    ticker := time.NewTicker(time.Second)
    defer ticker.Stop()
    for i := 0; ; {
        select {
        case <-ticker.C:
            fmt.Print(".")
            if i++; i%5 == 0 {
                fmt.Println()
            }
        case <-shutdownCh:
            return
        }
    }
}
Another way is to turn icza's answer around. Since the npm command is a long-running execution, it would probably be better to run it in a goroutine instead of the ticker (or run both as goroutines), but it's a matter of preference.
Like this:
func npmInstall(done chan struct{}) {
    cmd := exec.Command("npm", "install")
    log.Printf("Running command and waiting for it to finish...")
    err := cmd.Run()
    if err != nil {
        log.Printf("\nCommand finished with error: %v", err)
    }
    close(done)
}

func main() {
    done := make(chan struct{})
    go npmInstall(done)

    ticker := time.NewTicker(3 * time.Second)
    defer ticker.Stop()
    for {
        select {
        case <-ticker.C:
            fmt.Print(".")
        case <-done:
            fmt.Println()
            return
        }
    }
}

Will a goroutine leak happen with a single-buffer channel that has two senders but only one receive?

I have a function that is used to forward a message between two io.ReadWriters. Once an error happens, I need to log the error and return. But I think I may have a goroutine leak in my code:
func transport(rw1, rw2 io.ReadWriter) error {
    errc := make(chan error, 1) // only one buffer
    go func() {
        _, err := io.Copy(rw1, rw2)
        errc <- err
    }()
    go func() {
        _, err := io.Copy(rw2, rw1)
        errc <- err
    }()
    err := <-errc // only one error caught
    if err != nil && err == io.EOF {
        err = nil
    }
    return err
}
Because only one error can be caught in this function, will the second goroutine exit and be garbage-collected normally? Or should I write one more <-errc to receive the other error?
The value from one goroutine is received and the other is buffered. Both goroutines can send to the channel and exit. There is no leak.
You might want to receive both values to ensure that the application detects an error when the first goroutine to send is successful and the second goroutine encounters an error.
var err error
for i := 0; i < 2; i++ {
    if e := <-errc; e != nil {
        err = e
    }
}
Because io.Copy does not return io.EOF, there's no need to check for io.EOF when collecting the errors.
The code can be simplified to use a single goroutine:
errc := make(chan error, 1)
go func() {
    _, err := io.Copy(rw1, rw2)
    errc <- err
}()
_, err := io.Copy(rw2, rw1)
if e := <-errc; e != nil {
    err = e
}

Leaking goroutine when a non-blocking readline hangs

Assuming you have a structure like this:
ch := make(chan string)
errCh := make(chan error)
go func() {
    line, _, err := bufio.NewReader(r).ReadLine()
    if err != nil {
        errCh <- err
    } else {
        ch <- string(line)
    }
}()
select {
case err := <-errCh:
    return "", err
case line := <-ch:
    return line, nil
case <-time.After(5 * time.Second):
    return "", TimeoutError
}
In the case of the 5 second timeout, the goroutine hangs until ReadLine returns, which may never happen. My project is a long-running server, so I don't want a buildup of stuck goroutines.
ReadLine will not return until either the process exits or the method reads a line. There's no deadline or timeout mechanism for pipes.
The goroutine will block if the call to ReadLine returns after the timeout. This can be fixed by using buffered channels:
ch := make(chan string, 1)
errCh := make(chan error, 1)
The application should call Wait to cleanup resources associated with the command. The goroutine is a good place to call it:
go func() {
    line, _, err := bufio.NewReader(r).ReadLine()
    if err != nil {
        errCh <- err
    } else {
        ch <- string(line)
    }
    cmd.Wait() // <-- add this line
}()
This will cause the goroutine to block until the process exits, which is the very thing you are trying to avoid; the alternative, though, is that the application leaks resources for each command.
