When I use session.Shell() to start a new shell on a remote server, and then session.Wait() to run the session until the remote end exits, the session does not gracefully handle using Ctrl+D to end the session.
I can make this work using os/exec to launch a local child process to run whatever copy of the ssh client is available locally, but I would prefer to do this with native Go.
Example code snippet:
conn, err := ssh.Dial("tcp", "some-server.fqdn:22", sshConfig)
if err != nil {
return err
}
defer conn.Close()
session, err := conn.NewSession()
if err != nil {
return err
}
defer session.Close()
session.Stdout = os.Stdout
session.Stderr = os.Stderr
session.Stdin = os.Stdin
modes := ssh.TerminalModes{
ssh.ECHO: 0,
ssh.TTY_OP_ISPEED: 14400,
ssh.TTY_OP_OSPEED: 14400,
}
err = session.RequestPty("xterm", 80, 40, modes)
if err != nil {
return err
}
err = session.Shell()
if err != nil {
return err
}
session.Wait()
Running exit on the remote server gracefully hangs up the remote end and session.Wait() returns as expected, but sending an EOF with Ctrl+D causes the remote end to hang up while the call to session.Wait() stays blocked. I have to use Ctrl+C to SIGINT the Go program.
I would like to get both to gracefully exit the session.Wait() call as that is expected behavior for most interactive ssh sessions.
I was able to reproduce this (with a bunch of additional framework code) but am not sure why it happens. It is possible to terminate the session by adding a session.Close call if you encounter EOF on stdin:
session.Stdout = os.Stdout
session.Stderr = os.Stderr
// session.Stdin = os.Stdin
ip, err := session.StdinPipe()
if err != nil {
return err
}
go func() {
io.Copy(ip, os.Stdin)
fmt.Println("stdin ended")
time.Sleep(1 * time.Second)
fmt.Println("issuing ip.Close() now")
ip.Close()
time.Sleep(1 * time.Second)
fmt.Println("issuing session.Close() now")
if err := session.Close(); err != nil {
fmt.Printf("close: %v\n", err)
}
}()
You'll see that the session shuts down (not very nicely) after the session.Close(). Calling ip.Close() should have shut down the stdin channel, and it seems like this should happen when just using os.Stdin directly too, but for some reason it does not work. Debugging shows an ssh-channel-close message going to the other end (for both cases), but the other end doesn't close the return-data ssh channels, so your end continues to wait for more output from them.
Worth noting: you have not put the local tty into raw-ish character-at-a-time mode. A regular ssh session does, so ^D does not actually close the connection, it just sends a literal control-D to the pty on the other side. It's the other end that turns that control-D into a (soft) EOF signal on the pty.
Related
I am trying to write the output of a bash command to a file in Go.
Note there are several reasons for using Go over bash here: I have some additional logic such as parsing configuration files, I would like to run that code for multiple DBs in parallel, and finally I perform some more complex data manipulation afterwards.
dumpStr := fmt.Sprintf("pg_dump -U %s -h %s %s | gzip", DbUserName, DbHost, DbName)
cmd := exec.Command("bash", "-c", dumpStr)
cmd.Env = append(cmd.Env, "PGPASSWORD="+DbPassword)
outfile, err := os.Create(DbName + ".gz")
if err != nil {
panic(err)
}
outfile = cmd.Stdout
defer outfile.Close()
err = cmd.Start()
if err != nil {
panic(err)
}
cmd.Wait()
However, I am getting an empty result.
I am getting data if I am executing dumpStr from the CLI but not from that code...
What am I missing?
As Flimzy said, you're not capturing the output of pg_dump. You can do that with Go, or you can use pg_dump's --file option. It can also compress with --compress, so there is no need to pipe to gzip. Then there's no need for bash, and you can avoid shell quoting issues.
cmd := exec.Command(
"pg_dump",
"--compress=9",
"--file="+DbName+".gz",
"-U"+DbUserName,
"-h"+DbHost,
DbName,
)
log.Print("Running pg_dump...")
if err := cmd.Run(); err != nil {
log.Fatal(err)
}
Much simpler and more secure.
For illustration here's how you'd do it all in Go.
Use Cmd.StdoutPipe to get an open IO reader to pg_dump's stdout. Then use io.Copy to copy from stdout to your open file.
@Peter points out that since Cmd.Stdout is an io.Writer, it's simpler to assign the open file to cmd.Stdout and let cmd write to it directly.
// Same as above, but no --file.
cmd := exec.Command(
"pg_dump",
"--compress=9",
"-U"+DbUserName,
"-h"+DbHost,
DbName,
)
// Open the output file
outfile, err := os.Create(DbName + ".gz")
if err != nil {
log.Fatal(err)
}
defer outfile.Close()
// Send stdout to the outfile. cmd.Stdout will take any io.Writer.
cmd.Stdout = outfile
// Start the command
if err = cmd.Start(); err != nil {
log.Fatal(err)
}
log.Print("Waiting for command to finish...")
// Wait for the command to finish.
if err = cmd.Wait(); err != nil {
log.Fatal(err)
}
In addition, you're only checking if the command started, not if it successfully ran.
From the docs for Cmd.Start.
Start starts the specified command but does not wait for it to complete.
The Wait method will return the exit code and release associated resources once the command exits.
You're checking cmd.Start for an error, but not cmd.Wait. Checking the error from cmd.Start only means the command started. If there is an error while the program is running you won't know what it is.
You need to actually use the output of your command. You're not doing that. To do so, use the StdoutPipe method, then you can copy the stdout from your program, into your file.
In Go, I'm trying to:
start a subprocess
read from stdout and stderr separately
implement an overall timeout
After much googling, we've come up with some code that seems to do the job, most of the time. But there seems to be a race condition whereby some output is not read.
The problem seems to only occur on Linux, not Windows.
Following the simplest possible solution found with google, we tried creating a context with a timeout:
context.WithTimeout(context.Background(), 10*time.Second)
While this worked most of the time, we were able to find cases where it would just hang forever. There was some aspect of the child process that caused this to deadlock. (Something to do with grandchildren that were not sufficiently disassociated from the child process, and thus caused the child to never completely exit.)
Also, it seemed that in some cases the error returned when the timeout occurs would indicate a timeout, but would only be delivered after the process had actually exited (thus making the whole concept of the timeout useless).
func GetOutputsWithTimeout(command string, args []string, timeout int) (io.ReadCloser, io.ReadCloser, int, error) {
start := time.Now()
procLogger.Tracef("Initializing %s %+v", command, args)
cmd := exec.Command(command, args...)
// get pipes to standard output/error
stdout, err := cmd.StdoutPipe()
if err != nil {
return emptyReader(), emptyReader(), -1, fmt.Errorf("cmd.StdoutPipe() error: %+v", err.Error())
}
stderr, err := cmd.StderrPipe()
if err != nil {
return emptyReader(), emptyReader(), -1, fmt.Errorf("cmd.StderrPipe() error: %+v", err.Error())
}
// setup buffers to capture standard output and standard error
var buf bytes.Buffer
var ebuf bytes.Buffer
// create a channel to capture any errors from wait
done := make(chan error)
// create a semaphore to indicate when both pipes are closed
var wg sync.WaitGroup
wg.Add(2)
go func() {
if _, err := buf.ReadFrom(stdout); err != nil {
procLogger.Debugf("%s: Error Slurping stdout: %+v", command, err)
}
wg.Done()
}()
go func() {
if _, err := ebuf.ReadFrom(stderr); err != nil {
procLogger.Debugf("%s: Error Slurping stderr: %+v", command, err)
}
wg.Done()
}()
// start process
procLogger.Debugf("Starting %s", command)
if err := cmd.Start(); err != nil {
procLogger.Errorf("%s: failed to start: %+v", command, err)
return emptyReader(), emptyReader(), -1, fmt.Errorf("cmd.Start() error: %+v", err.Error())
}
go func() {
procLogger.Debugf("Waiting for %s (%d) to finish", command, cmd.Process.Pid)
err := cmd.Wait() // this can be 'forced' by the killing of the process
procLogger.Tracef("%s finished: errStatus=%+v", command, err) // err could be nil here
//notify select of completion, and the status
done <- err
}()
// Wait for timeout or completion.
select {
// Timed out
case <-time.After(time.Duration(timeout) * time.Second):
elapsed := time.Since(start)
procLogger.Errorf("%s: timeout after %.1f\n", command, elapsed.Seconds())
if err := TerminateTree(cmd); err != nil {
return ioutil.NopCloser(&buf), ioutil.NopCloser(&ebuf), -1,
fmt.Errorf("failed to kill %s, pid=%d: %+v",
command, cmd.Process.Pid, err)
}
wg.Wait() // this *should* take care of waiting for stdout and stderr to be collected after we killed the process
return ioutil.NopCloser(&buf), ioutil.NopCloser(&ebuf), -1,
fmt.Errorf("%s: timeout %d s reached, pid=%d process killed",
command, timeout, cmd.Process.Pid)
//Exited normally or with a non-zero exit code
case err := <-done:
wg.Wait() // this *should* take care of waiting for stdout and stderr to be collected after the process terminated naturally.
elapsed := time.Since(start)
procLogger.Tracef("%s: Done after %.1f\n", command, elapsed.Seconds())
rc := -1
// Note that we have to use go1.10 compatible mechanism.
if err != nil {
procLogger.Tracef("%s exited with error: %+v", command, err)
exitErr, ok := err.(*exec.ExitError)
if ok {
ws := exitErr.Sys().(syscall.WaitStatus)
rc = ws.ExitStatus()
}
procLogger.Debugf("%s exited with status %d", command, rc)
return ioutil.NopCloser(&buf), ioutil.NopCloser(&ebuf), rc,
fmt.Errorf("%s: process done with error: %+v",
command, err)
} else {
ws := cmd.ProcessState.Sys().(syscall.WaitStatus)
rc = ws.ExitStatus()
}
procLogger.Debugf("%s exited with status %d", command, rc)
return ioutil.NopCloser(&buf), ioutil.NopCloser(&ebuf), rc, nil
}
//NOTREACHED: should not reach this line!
}
Calling GetOutputsWithTimeout("uname", []string{"-mpi"}, 10) will return the expected single line of output most of the time. But sometimes it will return no output, as if the goroutine that reads stdout didn't start soon enough to "catch" all the output (or exited early?). The "most of the time" strongly suggests a race condition.
We will also sometimes see errors from the goroutines about "file already closed" (this seems to happen with the timeout condition, but will happen at other "normal" times as well).
I would have thought that starting the goroutines before the cmd.Start() would have ensured that no output would be missed, and that using the WaitGroup would guarantee they would both complete before reading the buffers.
So how are we missing output? Is there still a race condition between the two "reader" goroutines and the cmd.Start()? Should we ensure those two are running using yet another WaitGroup?
Or is there a problem with the implementation of ReadFrom()?
Note that we are currently using go1.10 due to backward-compatibility problems with older OSs but the same effect occurs with go1.12.4.
Or are we overthinking this, and a simple implementation with context.WithTimeout() would do the job?
But sometimes it will return no output, as if the goroutine that reads stdout didn't start soon enough to "catch" all the output
This is impossible, because a pipe can't "lose" data. If the process is writing to stdout and the Go program isn't reading yet, the process will block.
The simplest way to approach the problem is:
Launch goroutines to collect stdout, stderr
Launch a timer that kills the process
Start the process
Wait for it to finish (or be killed by the timer) with .Wait()
If timer is fired, return timeout error
Handle wait error
func GetOutputsWithTimeout(command string, args []string, timeout int) ([]byte, []byte, int, error) {
cmd := exec.Command(command, args...)
// get pipes to standard output/error
stdout, err := cmd.StdoutPipe()
if err != nil {
return nil, nil, -1, fmt.Errorf("cmd.StdoutPipe() error: %+v", err.Error())
}
stderr, err := cmd.StderrPipe()
if err != nil {
return nil, nil, -1, fmt.Errorf("cmd.StderrPipe() error: %+v", err.Error())
}
// setup buffers to capture standard output and standard error
var stdoutBuf, stderrBuf []byte
// create 3 goroutines: stdout, stderr, timer.
// Use a waitgroup to wait.
var wg sync.WaitGroup
wg.Add(2)
go func() {
var err error
if stdoutBuf, err = ioutil.ReadAll(stdout); err != nil {
log.Printf("%s: Error Slurping stdout: %+v", command, err)
}
wg.Done()
}()
go func() {
var err error
if stderrBuf, err = ioutil.ReadAll(stderr); err != nil {
log.Printf("%s: Error Slurping stderr: %+v", command, err)
}
wg.Done()
}()
t := time.AfterFunc(time.Duration(timeout)*time.Second, func() {
cmd.Process.Kill()
})
// start process
if err := cmd.Start(); err != nil {
t.Stop()
return nil, nil, -1, fmt.Errorf("cmd.Start() error: %+v", err.Error())
}
err = cmd.Wait()
timedOut := !t.Stop()
wg.Wait()
// check if the timer timed out.
if timedOut {
return stdoutBuf, stderrBuf, -1,
fmt.Errorf("%s: timeout %d s reached, pid=%d process killed",
command, timeout, cmd.Process.Pid)
}
if err != nil {
rc := -1
if exitErr, ok := err.(*exec.ExitError); ok {
rc = exitErr.Sys().(syscall.WaitStatus).ExitStatus()
}
return stdoutBuf, stderrBuf, rc,
fmt.Errorf("%s: process done with error: %+v",
command, err)
}
// cmd.Wait docs say that if err == nil, exit code is 0
return stdoutBuf, stderrBuf, 0, nil
}
So I'm able to ssh into the machine, but I'm having trouble entering data into the prompt.
...
sshConfig := &ssh.ClientConfig{
User: user,
Auth: []ssh.AuthMethod{
ssh.Password(password),
},
HostKeyCallback: KeyPrint,
}
connection, err := ssh.Dial("tcp", connStr, sshConfig)
if err != nil {
log.Fatalln(err)
}
session, err := connection.NewSession()
if err != nil {
log.Fatalln(err)
}
modes := ssh.TerminalModes{
ssh.ECHO: 0, // disable echoing
ssh.TTY_OP_ISPEED: 14400, // input speed = 14.4kbaud
ssh.TTY_OP_OSPEED: 14400, // output speed = 14.4kbaud
}
if err := session.RequestPty("xterm", 80, 40, modes); err != nil {
session.Close()
log.Fatalf("request for pseudo terminal failed: %s", err)
}
stdin, err := session.StdinPipe()
if err != nil {
log.Fatalf("Unable to setup stdin for session: %v", err)
}
go io.Copy(stdin, os.Stdin)
stdout, err := session.StdoutPipe()
if err != nil {
log.Fatalf("Unable to setup stdout for session: %v", err)
}
go io.Copy(os.Stdout, stdout)
stderr, err := session.StderrPipe()
if err != nil {
log.Fatalf("Unable to setup stderr for session: %v", err)
}
go io.Copy(os.Stderr, stderr)
// err = session.Run("1")
session.Run("") // running it allows me to interact with the remote machine's terminal in my own terminal... session.Start("") exits, and session.Wait() doesn't display the welcome screen that normally greets users, and the prompt doesn't appear.
stdin.Write([]byte("10000"))
os.Stdin.WriteString("110000")
// log.Fatalln(n, err)
// os.Stdin.WriteString("1")
// for {
// session.Run("1")
// go os.Stdin.WriteString("1")
// go stdin.Write([]byte("10000"))
// }
...
The above code snippet gets me into the machine, and the machine's prompt is displayed on my screen as if I had SSH'ed in manually. I can type in the shell... but I need to be able to have Go type in the shell for me. The prompt that I'm interacting with is a text-based game, so I can't just issue commands (no ls, echo, grep, etc.); the only thing I'm allowed to pass in is numbers. How do I send input to the ssh session? I've tried many ways and none of the input seems to be going through.
I'm also attaching a screenshot of the prompt, just in case the description above is confusing in trying to portray the type of session this is.
UPDATE:
I think I've found a way to send the data, at least once.
session, err := connection.NewSession()
if err != nil {
log.Fatalln(err)
}
// ---------------------------------
modes := ssh.TerminalModes{
ssh.ECHO: 0, // disable echoing
ssh.TTY_OP_ISPEED: 14400, // input speed = 14.4kbaud
ssh.TTY_OP_OSPEED: 14400, // output speed = 14.4kbaud
}
if err := session.RequestPty("xterm", 80, 40, modes); err != nil {
session.Close()
log.Fatalf("request for pseudo terminal failed: %s", err)
}
stdin, err := session.StdinPipe()
if err != nil {
log.Fatalf("Unable to setup stdin for session: %v", err)
}
go io.Copy(stdin, os.Stdin)
stdout, err := session.StdoutPipe()
if err != nil {
log.Fatalf("Unable to setup stdout for session: %v", err)
}
go io.Copy(os.Stdout, stdout)
stderr, err := session.StderrPipe()
if err != nil {
log.Fatalf("Unable to setup stderr for session: %v", err)
}
go io.Copy(os.Stderr, stderr)
go session.Start("")
for {
stdin.Write([]byte("10000000\n"))
break
}
session.Wait()
I start the session with go session.Start(""); remember that there is no point in passing a command because all I'm doing is entering data in response to a prompt.
I then use session.Wait() at the end, after the for loop... kinda like one does when using channels and waitgroups. Inside the for loop I send data with stdin.Write([]byte("10000000\n")), where the important thing to note is the \n delimiter to simulate hitting Enter on the keyboard.
If there are better ways to achieve what I'm trying to do, please feel free to share them. Next steps are to parse the stdout for a response and reply accordingly.
An empty Start will work; however, within the ssh package, Start, Run, and Shell are all calls to basically the same thing. Start("cmd") executes a command within a shell, Run("cmd") is a simple call to Start("cmd") that then invokes Wait() for you (giving the feel of executing without concurrency), and Shell opens a shell (like Start) but without a command passed. It's six of one, half a dozen of the other, really, but using Shell() is probably the cleanest way to go about it.
Also, bear in mind that Start() and Shell() both leverage concurrency without the explicit invocation of "go". It might free an additional millisecond or so to invoke concurrency manually, but if that isn't of significance to you, then you should be able to drop that. The automatic concurrency of Start and Shell is the reason for the Wait() and Run("cmd") methods.
If you have no need to interpret your output (stdout/err), then you can map these without the Pipe() calls or io.Copy(), which is easier and more efficient. I did this in the example below, but bear in mind that if you do interpret the output, it's probably easier to work with the Pipe(). You can send multiple commands (or numbers) sequentially without reading for a prompt in most cases, but some things (like password prompts) clear the input buffer. If that happens for you, then you'll need to read Stdout to find your prompt, or leverage an expect tool like goexpect (https://github.com/google/goexpect). There are several expect-like packages for Go, but this one is from Google and (as of this posting) still fairly recently maintained.
StdinPipe() exposes an io.WriteCloser that can be written to directly, without io.Copy(), which should be more efficient.
Your for loop that writes to the StdinPipe() should allow you to enter several commands (or in your case, sets of numbers)... as an example, I have this reading commands (numbers, etc) from os.Args and iterating through them.
Lastly, you should probably add a session.Close() for healthy completion (you already have one in the error path). That said, this is what I would recommend (based on your last example):
modes := ssh.TerminalModes{
ssh.ECHO: 0, // disable echoing
ssh.TTY_OP_ISPEED: 14400, // input speed = 14.4kbaud
ssh.TTY_OP_OSPEED: 14400, // output speed = 14.4kbaud
}
if err := session.RequestPty("xterm", 80, 40, modes); err != nil {
session.Close()
log.Fatalf("request for pseudo terminal failed: %s", err)
}
defer session.Close()
stdin, err := session.StdinPipe()
if err != nil {
log.Fatalf("Unable to setup stdin for session: %v", err)
}
session.Stdout = os.Stdout
session.Stderr = os.Stderr
err = session.Shell()
if err != nil {
log.Fatalf("Unable to start shell: %v", err)
}
for _, cmd := range os.Args {
stdin.Write([]byte(cmd + "\n"))
}
session.Wait()
Oh, one more item to note is that the Wait() method relies on an unchecked channel that retrieves the exit status from your command, and this does hang on rare occasions (it is particularly problematic when connecting to Cisco gear, but can happen with others as well). If you find that you encounter this (or you'd just like to be careful), you might want to wrap Wait() in some sort of timeout, such as invoking Wait() in a goroutine and reading its result through a channel cased along with time.After() (I can provide an example, if that would be helpful).
I have a Go function to capture network traffic with tcpdump (an external command) on macOS:
func start_tcpdump() {
// Run tcpdump with parameters
cmd := exec.Command("tcpdump", "-I", "-i", "en1", "-w", "capture.pcap")
if err := cmd.Start(); err != nil {
log.Fatal(err)
}
timer := time.AfterFunc(3 * time.Second, func() {
cmd.Process.Kill()
})
err := cmd.Wait()
if err != nil{
log.Fatal(err)
}
timer.Stop()
}
When this function completes, I try to open the output .pcap file in Wireshark and get this error:
"The capture file appears to have been cut short in the middle of a packet."
Probably cmd.Process.Kill() interrupts the proper closing of the .pcap file.
What solution could be applied for "proper" closing of the tcpdump external process?
You should use cmd.Process.Signal(os.Interrupt) to signal tcpdump to exit. Kill() internally calls Signal(os.Kill), which is equivalent to kill -9 and forces the process to exit without giving it a chance to flush its output.
I'm using Go on an OSX machine and trying to make a program to open an external application and then after few seconds, close it - the application, not exit the Go script.
I'm using the library available on https://github.com/skratchdot/open-golang to start the app and it works fine. I also already have the timeout running. But the problem comes when I have to close the application.
Would someone give a hint of how I would be able to exit the app?
Thanks in advance.
It looks like that library is hiding details that you'd use to close the program, specifically the process ID (PID).
If you launch instead with the os/exec package or get a handle on that PID then you can use the Process object to kill or send signals to the app to try and close it gracefully.
https://golang.org/pkg/os/#Process
Thank you guys for the help. I was able to do what I was trying to do with the following code.
cmd := exec.Command(path)
err := cmd.Start()
if err != nil {
log.Printf("Command failed to start: %v", err)
}
done := make(chan error, 1)
go func() {
done <- cmd.Wait()
}()
select {
case <-time.After(30 * time.Second): // Kills the process after 30 seconds
if err := cmd.Process.Kill(); err != nil {
log.Fatal("failed to kill: ", err)
}
<-done // allow goroutine to exit
log.Println("process killed")
indexInit()
case err := <-done:
if err!=nil{
log.Printf("process done with error = %v", err)
}
}
I placed that right after starting the app with the os/exec package, as @JimB recommended.