When I use session.Shell() to start a new shell on a remote server and then session.Wait() to run the session until the remote end exits, the session does not gracefully handle Ctrl+D as a way to end the session.
I can make this work using os/exec to launch a local child process to run whatever copy of the ssh client is available locally, but I would prefer to do this with native Go.
Example code snippet:
conn, err := ssh.Dial("tcp", "some-server.fqdn:22", sshConfig)
if err != nil {
    return err
}
defer conn.Close()

session, err := conn.NewSession()
if err != nil {
    return err
}
defer session.Close()

session.Stdout = os.Stdout
session.Stderr = os.Stderr
session.Stdin = os.Stdin

modes := ssh.TerminalModes{
    ssh.ECHO:          0,
    ssh.TTY_OP_ISPEED: 14400,
    ssh.TTY_OP_OSPEED: 14400,
}
err = session.RequestPty("xterm", 80, 40, modes)
if err != nil {
    return err
}
err = session.Shell()
if err != nil {
    return err
}
session.Wait()
Running exit on the remote server gracefully hangs up the remote end, and session.Wait() returns as expected. Sending an EOF with Ctrl+D also causes the remote end to hang up, but the call to session.Wait() stays blocked; I have to use Ctrl+C to SIGINT the Go program.
I would like both to gracefully end the session.Wait() call, as that is the expected behavior for most interactive ssh sessions.
I was able to reproduce this (with a bunch of additional framework code) but am not sure why it happens. It is possible to terminate the session by adding a session.Close() call when you encounter EOF on stdin:
session.Stdout = os.Stdout
session.Stderr = os.Stderr
// session.Stdin = os.Stdin
ip, err := session.StdinPipe()
if err != nil {
    return err
}
go func() {
    io.Copy(ip, os.Stdin)
    fmt.Println("stdin ended")
    time.Sleep(1 * time.Second)
    fmt.Println("issuing ip.Close() now")
    ip.Close()
    time.Sleep(1 * time.Second)
    fmt.Println("issuing session.Close() now")
    if err := session.Close(); err != nil { // avoid racing on the outer err
        fmt.Printf("close: %v\n", err)
    }
}()
You'll see that the session shuts down (not very nicely) after the session.Close(). Calling ip.Close() should have shut down the stdin channel, and it seems like this should happen when just using os.Stdin directly too, but for some reason it does not work. Debugging shows an ssh-channel-close message going to the other end (for both cases), but the other end doesn't close the return-data ssh channels, so your end continues to wait for more output from them.
Worth noting: you have not put the local tty into raw-ish character-at-a-time mode. A regular ssh session does, so ^D does not actually close the connection, it just sends a literal control-D to the pty on the other side. It's the other end that turns that control-D into a (soft) EOF signal on the pty.
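For comparison, here is a minimal sketch of what a regular ssh client does to the local terminal before starting the shell, assuming the golang.org/x/term package (this is not part of the original code):

fd := int(os.Stdin.Fd())
oldState, err := term.MakeRaw(fd) // character-at-a-time, no local echo
if err != nil {
    return err
}
defer term.Restore(fd, oldState)

// Size the remote pty to match the local terminal.
w, h, err := term.GetSize(fd)
if err != nil {
    return err
}
err = session.RequestPty("xterm", h, w, modes)

With the terminal raw, Ctrl+D never produces a local EOF on os.Stdin; it is forwarded to the remote pty, whose line discipline turns it into an EOF for the remote shell, which then exits and closes the channel.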
In Go, I'm trying to:
start a subprocess
read from stdout and stderr separately
implement an overall timeout
After much googling, we've come up with some code that seems to do the job, most of the time. But there seems to be a race condition whereby some output is not read.
The problem seems to only occur on Linux, not Windows.
Following the simplest possible solution found with google, we tried creating a context with a timeout:
context.WithTimeout(context.Background(), 10*time.Second)
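Presumably that context was fed to exec.CommandContext; a minimal sketch of the pattern (the original code was not shown, so the details here are assumptions):

ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
cmd := exec.CommandContext(ctx, command, args...)
// When the context expires, only the direct child is killed (SIGKILL);
// a grandchild that inherited the output pipe can keep it open, so
// CombinedOutput can stay blocked well past the timeout.
out, err := cmd.CombinedOutput()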
While this worked most of the time, we were able to find cases where it would just hang forever. Some aspect of the child process caused this to deadlock. (Something to do with grandchildren that were not sufficiently disassociated from the child process, which caused the child to never completely exit.)
Also, in some cases the error returned when the timeout occurs would indicate a timeout, but would only be delivered after the process had actually exited (making the whole concept of the timeout useless).
func GetOutputsWithTimeout(command string, args []string, timeout int) (io.ReadCloser, io.ReadCloser, int, error) {
    start := time.Now()
    procLogger.Tracef("Initializing %s %+v", command, args)
    cmd := exec.Command(command, args...)
    // get pipes to standard output/error
    stdout, err := cmd.StdoutPipe()
    if err != nil {
        return emptyReader(), emptyReader(), -1, fmt.Errorf("cmd.StdoutPipe() error: %+v", err.Error())
    }
    stderr, err := cmd.StderrPipe()
    if err != nil {
        return emptyReader(), emptyReader(), -1, fmt.Errorf("cmd.StderrPipe() error: %+v", err.Error())
    }
    // setup buffers to capture standard output and standard error
    var buf bytes.Buffer
    var ebuf bytes.Buffer
    // create a channel to capture any errors from wait
    done := make(chan error)
    // create a semaphore to indicate when both pipes are closed
    var wg sync.WaitGroup
    wg.Add(2)
    go func() {
        if _, err := buf.ReadFrom(stdout); err != nil {
            procLogger.Debugf("%s: Error Slurping stdout: %+v", command, err)
        }
        wg.Done()
    }()
    go func() {
        if _, err := ebuf.ReadFrom(stderr); err != nil {
            procLogger.Debugf("%s: Error Slurping stderr: %+v", command, err)
        }
        wg.Done()
    }()
    // start process
    procLogger.Debugf("Starting %s", command)
    if err := cmd.Start(); err != nil {
        procLogger.Errorf("%s: failed to start: %+v", command, err)
        return emptyReader(), emptyReader(), -1, fmt.Errorf("cmd.Start() error: %+v", err.Error())
    }
    go func() {
        procLogger.Debugf("Waiting for %s (%d) to finish", command, cmd.Process.Pid)
        err := cmd.Wait() // this can be 'forced' by the killing of the process
        procLogger.Tracef("%s finished: errStatus=%+v", command, err) // err could be nil here
        // notify select of completion, and the status
        done <- err
    }()
    // Wait for timeout or completion.
    select {
    // Timed out
    case <-time.After(time.Duration(timeout) * time.Second):
        elapsed := time.Since(start)
        procLogger.Errorf("%s: timeout after %.1f\n", command, elapsed.Seconds())
        if err := TerminateTree(cmd); err != nil {
            return ioutil.NopCloser(&buf), ioutil.NopCloser(&ebuf), -1,
                fmt.Errorf("failed to kill %s, pid=%d: %+v",
                    command, cmd.Process.Pid, err)
        }
        wg.Wait() // this *should* take care of waiting for stdout and stderr to be collected after we killed the process
        return ioutil.NopCloser(&buf), ioutil.NopCloser(&ebuf), -1,
            fmt.Errorf("%s: timeout %d s reached, pid=%d process killed",
                command, timeout, cmd.Process.Pid)
    // Exited normally or with a non-zero exit code
    case err := <-done:
        wg.Wait() // this *should* take care of waiting for stdout and stderr to be collected after the process terminated naturally.
        elapsed := time.Since(start)
        procLogger.Tracef("%s: Done after %.1f\n", command, elapsed.Seconds())
        rc := -1
        // Note that we have to use go1.10 compatible mechanism.
        if err != nil {
            procLogger.Tracef("%s exited with error: %+v", command, err)
            exitErr, ok := err.(*exec.ExitError)
            if ok {
                ws := exitErr.Sys().(syscall.WaitStatus)
                rc = ws.ExitStatus()
            }
            procLogger.Debugf("%s exited with status %d", command, rc)
            return ioutil.NopCloser(&buf), ioutil.NopCloser(&ebuf), rc,
                fmt.Errorf("%s: process done with error: %+v",
                    command, err)
        } else {
            ws := cmd.ProcessState.Sys().(syscall.WaitStatus)
            rc = ws.ExitStatus()
        }
        procLogger.Debugf("%s exited with status %d", command, rc)
        return ioutil.NopCloser(&buf), ioutil.NopCloser(&ebuf), rc, nil
    }
    // NOTREACHED: should not reach this line!
}
Calling GetOutputsWithTimeout("uname", []string{"-mpi"}, 10) will return the expected single line of output most of the time. But sometimes it will return no output, as if the goroutine that reads stdout didn't start soon enough to "catch" all the output (or exited early?). The "most of the time" strongly suggests a race condition.
We will also sometimes see errors from the goroutines about "file already closed" (this seems to happen with the timeout condition, but will happen at other "normal" times as well).
I would have thought that starting the goroutines before the cmd.Start() would have ensured that no output would be missed, and that using the WaitGroup would guarantee they would both complete before reading the buffers.
So how are we missing output? Is there still a race condition between the two "reader" goroutines and the cmd.Start()? Should we ensure those two are running using yet another WaitGroup?
Or is there a problem with the implementation of ReadFrom()?
Note that we are currently using go1.10 due to backward-compatibility problems with older OSs but the same effect occurs with go1.12.4.
Or are we overthinking this, and a simple implementation with context.WithTimeout() would do the job?
But sometimes it will return no output, as if the goroutine that reads stdout didn't start soon enough to "catch" all the output
This is impossible, because a pipe can't "lose" data. If the process is writing to stdout and the Go program isn't reading yet, the process will block.
The simplest way to approach the problem is:
Launch goroutines to collect stdout, stderr
Launch a timer that kills the process
Start the process
Wait for it to finish (or be killed by the timer) with .Wait()
If the timer fired, return a timeout error
Otherwise, handle the error from Wait()
func GetOutputsWithTimeout(command string, args []string, timeout int) ([]byte, []byte, int, error) {
    cmd := exec.Command(command, args...)
    // get pipes to standard output/error
    stdout, err := cmd.StdoutPipe()
    if err != nil {
        return nil, nil, -1, fmt.Errorf("cmd.StdoutPipe() error: %+v", err.Error())
    }
    stderr, err := cmd.StderrPipe()
    if err != nil {
        return nil, nil, -1, fmt.Errorf("cmd.StderrPipe() error: %+v", err.Error())
    }
    // setup buffers to capture standard output and standard error
    var stdoutBuf, stderrBuf []byte
    // two reader goroutines plus a kill timer;
    // use a WaitGroup to wait for the readers
    var wg sync.WaitGroup
    wg.Add(2)
    go func() {
        var err error
        if stdoutBuf, err = ioutil.ReadAll(stdout); err != nil {
            log.Printf("%s: Error Slurping stdout: %+v", command, err)
        }
        wg.Done()
    }()
    go func() {
        var err error
        if stderrBuf, err = ioutil.ReadAll(stderr); err != nil {
            log.Printf("%s: Error Slurping stderr: %+v", command, err)
        }
        wg.Done()
    }()
    t := time.AfterFunc(time.Duration(timeout)*time.Second, func() {
        cmd.Process.Kill()
    })
    // start process
    if err := cmd.Start(); err != nil {
        t.Stop()
        return nil, nil, -1, fmt.Errorf("cmd.Start() error: %+v", err.Error())
    }
    err = cmd.Wait()
    timedOut := !t.Stop()
    wg.Wait()
    // check if the timer fired
    if timedOut {
        return stdoutBuf, stderrBuf, -1,
            fmt.Errorf("%s: timeout %d s reached, pid=%d process killed",
                command, timeout, cmd.Process.Pid)
    }
    if err != nil {
        rc := -1
        if exitErr, ok := err.(*exec.ExitError); ok {
            rc = exitErr.Sys().(syscall.WaitStatus).ExitStatus()
        }
        return stdoutBuf, stderrBuf, rc,
            fmt.Errorf("%s: process done with error: %+v",
                command, err)
    }
    // cmd.Wait docs say that if err == nil, the exit code is 0
    return stdoutBuf, stderrBuf, 0, nil
}
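For example, mirroring the call from the question:

stdout, stderr, rc, err := GetOutputsWithTimeout("uname", []string{"-mpi"}, 10)
if err != nil {
    log.Printf("rc=%d err=%v", rc, err)
}
fmt.Printf("%s", stdout)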
I'm having trouble sending a signal from a parent process and receiving it in the child process.
This is the code for the child process. It exits when it receives SIGINT.
// child.go
package main

import (
    "fmt"
    "os"
    "os/signal"
)

func main() {
    stop := make(chan os.Signal, 1)
    signal.Notify(stop, os.Interrupt)
    fmt.Println("started")
    <-stop
    fmt.Println("stopped")
}
This is the parent process. It starts child.go, sends SIGINT, then waits for it to exit.
// main.go
package main

import (
    "fmt"
    "os"
    "os/exec"
    "time"
)

func main() {
    // Start child process
    cmd := exec.Command("go", "run", "child.go")
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
    err := cmd.Start()
    if err != nil {
        fmt.Fprintf(os.Stderr, "Start: %v\n", err)
        return
    }
    // Wait, then send signal
    time.Sleep(time.Millisecond * 500)
    err = cmd.Process.Signal(os.Interrupt)
    if err != nil {
        fmt.Fprintf(os.Stderr, "Signal: %v\n", err)
        return
    }
    // Wait for child process to finish
    err = cmd.Wait()
    if err != nil {
        fmt.Fprintf(os.Stderr, "Wait: %v\n", err)
    }
}
This code should print started\nstopped to show that it worked as expected, but it only prints started and hangs at cmd.Wait(), meaning the child process did not receive the signal.
When I run go run child.go it works fine, so I don't think the problem is with that file. I understand that func (*Process) Signal doesn't work on Windows; I am using Linux.
How can I fix the code so that the child process gets the signal sent by the parent process?
As mentioned by @JimB in the comments, go run is your problem.
go run child.go will compile child and execute it as its own process. If you run ps after go run child.go, you will see two processes running.
The process you are watching and signalling is the go executable, not the child.
Replace exec.Command("go", "run", "child.go") with the compiled binary, e.g. exec.Command("./child"), and it should work, as sketched below.
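A minimal sketch, assuming the child binary has been built ahead of time with go build -o child child.go (the name and path are illustrative):

cmd := exec.Command("./child")
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
// cmd.Process is now the child itself, so Signal(os.Interrupt)
// reaches the process that is actually listening for it.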
I have a Go function to capture network traffic with tcpdump (an external command) on macOS:
func start_tcpdump() {
    // Run tcpdump with parameters
    cmd := exec.Command("tcpdump", "-I", "-i", "en1", "-w", "capture.pcap")
    if err := cmd.Start(); err != nil {
        log.Fatal(err)
    }
    timer := time.AfterFunc(3*time.Second, func() {
        cmd.Process.Kill()
    })
    err := cmd.Wait()
    if err != nil {
        log.Fatal(err)
    }
    timer.Stop()
}
When this function completes, I try to open the output .pcap file in Wireshark and get this error:
"The capture file appears to have been cut short in the middle of a packet."
Probably, cmd.Process.Kill() interrupts the correct closing of the .pcap file.
What solution could be applied to close the tcpdump external process "properly"?
You should use cmd.Process.Signal(os.Interrupt) to signal tcpdump to exit. Kill() sends SIGKILL (the equivalent of kill -9), which forces the process to exit immediately, without giving tcpdump a chance to flush and close the capture file.
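For example, adjusting only the timer callback from the question:

timer := time.AfterFunc(3*time.Second, func() {
    // SIGINT lets tcpdump flush its buffers and finish the last
    // packet record before exiting, so the .pcap file stays valid.
    cmd.Process.Signal(os.Interrupt)
})

tcpdump catches SIGINT, prints its statistics, and exits cleanly, so cmd.Wait() should then return normally.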
Given the following example:
// test.go
package main

import (
    "fmt"
    "os/exec"
)

func main() {
    cmd := exec.Command("login")
    in, _ := cmd.StdinPipe()
    in.Write([]byte("user"))
    out, err := cmd.CombinedOutput()
    if err != nil {
        fmt.Println("error:", err)
    }
    fmt.Printf("%s", out)
}
How can I detect that the process is not going to finish, because it is waiting for user input?
I'm trying to be able to run any script, but abort it if for some reason it tries to read from stdin.
Thanks!
Detecting that the process is not going to finish is a difficult problem. In fact, it is one of the classic "unsolvable" problems in Computer Science: the Halting Problem.
In general, when you call exec.Command and don't pass it any input, the subprocess reads from your OS's null device (see the documentation of the exec.Cmd fields). In your code (and mine below), you explicitly create a pipe (though you should check the error return of StdinPipe in case it is not created correctly), so you should subsequently call in.Close(). In either case, the subprocess will see EOF and should clean up after itself and exit.
To help with processes that don't handle input correctly or otherwise get themselves stuck, the general solution is to use a timeout. In Go, you can use goroutines for this:
package main

import (
    "bytes"
    "fmt"
    "log"
    "os/exec"
    "time"
)

// Set your timeout
const CommandTimeout = 5 * time.Second

func main() {
    cmd := exec.Command("login")

    // Set up the input
    in, err := cmd.StdinPipe()
    if err != nil {
        log.Fatalf("failed to create pipe for STDIN: %s", err)
    }

    // Write the input and close
    go func() {
        defer in.Close()
        fmt.Fprintln(in, "user")
    }()

    // Capture the output
    var b bytes.Buffer
    cmd.Stdout, cmd.Stderr = &b, &b

    // Start the process
    if err := cmd.Start(); err != nil {
        log.Fatalf("failed to start command: %s", err)
    }

    // Kill the process if it doesn't exit in time
    defer time.AfterFunc(CommandTimeout, func() {
        log.Printf("command timed out")
        cmd.Process.Kill()
    }).Stop()

    // Wait for the process to finish
    if err := cmd.Wait(); err != nil {
        log.Fatalf("command failed: %s", err)
    }

    // Print out the output
    fmt.Printf("Output:\n%s", b.String())
}
In the code above, there are three main goroutines of interest: the main goroutine, which spawns the subprocess and waits for it to exit; a timer goroutine, sent off in the background to kill the process if it isn't Stopped in time; and a goroutine that writes the input to the subprocess when it is ready to read it.
Although this would not allow you to "detect" the program trying to read from stdin, I would just close stdin. This way, the child process will simply receive an EOF when it tries to read. Most programs know how to handle a closed stdin.
// All error handling excluded
cmd := exec.Command("login")
in, _ := cmd.StdinPipe()
cmd.Start()
in.Close()
cmd.Wait()
Unfortunately, this means you can't use CombinedOutput; the following code should let you do the same thing. It requires you to import the bytes package.
var buf = new(bytes.Buffer)
cmd.Stdout = buf
cmd.Stderr = buf
After cmd.Wait(), you can then do:
out := buf.Bytes()
I think the solution is to run the child process with a closed stdin, by adjusting Cmd.Stdin appropriately and then using Run() instead of CombinedOutput().
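A minimal sketch of that idea (capturing both streams by hand, since CombinedOutput cannot be combined with an explicitly set Stdout/Stderr):

// Leaving cmd.Stdin nil means the child reads from the null device
// (per the exec.Cmd documentation), so it sees EOF immediately.
cmd := exec.Command("login")
var buf bytes.Buffer
cmd.Stdout, cmd.Stderr = &buf, &buf
err := cmd.Run()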
Finally, I'm going to implement a combination of Kyle Lemons's answer and forcing the new process to have its own session without a terminal attached to it, so that the executed command will be aware that there is no terminal to read from.
// test.go
package main

import (
    "log"
    "os/exec"
    "syscall"
)

func main() {
    cmd := exec.Command("./test.sh")
    cmd.SysProcAttr = &syscall.SysProcAttr{Setsid: true}
    out, err := cmd.CombinedOutput()
    if err != nil {
        log.Fatal("error:", err)
    }
    log.Printf("%s", out)
}