Entering ssh prompt data - go

So I'm able to SSH into the machine, but I'm having trouble entering data into the prompt.
...
sshConfig := &ssh.ClientConfig{
	User: user,
	Auth: []ssh.AuthMethod{
		ssh.Password(password),
	},
	HostKeyCallback: KeyPrint,
}
connection, err := ssh.Dial("tcp", connStr, sshConfig)
if err != nil {
	log.Fatalln(err)
}
session, err := connection.NewSession()
if err != nil {
	log.Fatalln(err)
}
modes := ssh.TerminalModes{
	ssh.ECHO:          0,     // disable echoing
	ssh.TTY_OP_ISPEED: 14400, // input speed = 14.4kbaud
	ssh.TTY_OP_OSPEED: 14400, // output speed = 14.4kbaud
}
if err := session.RequestPty("xterm", 80, 40, modes); err != nil {
	session.Close()
	log.Fatalf("request for pseudo terminal failed: %s", err)
}
stdin, err := session.StdinPipe()
if err != nil {
	log.Fatalf("Unable to setup stdin for session: %v", err)
}
go io.Copy(stdin, os.Stdin)
stdout, err := session.StdoutPipe()
if err != nil {
	log.Fatalf("Unable to setup stdout for session: %v", err)
}
go io.Copy(os.Stdout, stdout)
stderr, err := session.StderrPipe()
if err != nil {
	log.Fatalf("Unable to setup stderr for session: %v", err)
}
go io.Copy(os.Stderr, stderr)
// err = session.Run("1")
// Running session.Run("") lets me interact with the remote machine's
// terminal in my own terminal. session.Start("") exits immediately, and
// with session.Wait() the Welcome screen that normally greets users is
// not displayed and the prompt doesn't appear.
session.Run("")
stdin.Write([]byte("10000"))
os.Stdin.WriteString("110000")
// log.Fatalln(n, err)
// os.Stdin.WriteString("1")
// for {
// 	session.Run("1")
// 	go os.Stdin.WriteString("1")
// 	go stdin.Write([]byte("10000"))
// }
...
The above code snippet gets me into the machine, and the machine's prompt is displayed on my screen as if I had SSH'ed in manually. I can type in the shell... but I need Go to type in the shell for me. The prompt I'm interacting with is a text-based game, so I can't just issue commands (no ls, echo, grep, etc.); the only thing I'm allowed to pass in is numbers. How do I send input to the SSH session? I've tried many ways, and none of the input seems to be going through.
I'm also attaching a screenshot of the prompt, just in case the description above is confusing in trying to portray the type of session this is.
UPDATE:
I think I've found a way to send the data, at least once.
session, err := connection.NewSession()
if err != nil {
	log.Fatalln(err)
}
// ---------------------------------
modes := ssh.TerminalModes{
	ssh.ECHO:          0,     // disable echoing
	ssh.TTY_OP_ISPEED: 14400, // input speed = 14.4kbaud
	ssh.TTY_OP_OSPEED: 14400, // output speed = 14.4kbaud
}
if err := session.RequestPty("xterm", 80, 40, modes); err != nil {
	session.Close()
	log.Fatalf("request for pseudo terminal failed: %s", err)
}
stdin, err := session.StdinPipe()
if err != nil {
	log.Fatalf("Unable to setup stdin for session: %v", err)
}
go io.Copy(stdin, os.Stdin)
stdout, err := session.StdoutPipe()
if err != nil {
	log.Fatalf("Unable to setup stdout for session: %v", err)
}
go io.Copy(os.Stdout, stdout)
stderr, err := session.StderrPipe()
if err != nil {
	log.Fatalf("Unable to setup stderr for session: %v", err)
}
go io.Copy(os.Stderr, stderr)
go session.Start("")
for {
	stdin.Write([]byte("10000000\n"))
	break
}
session.Wait()
I start the session with go session.Start(""); remember that there is no point in passing a command, because all I'm doing is entering data in response to a prompt.
I then use session.Wait() at the end, kind of like one does with channels and waitgroups. Inside the for loop I send data with stdin.Write([]byte("10000000\n")), where the important thing to note is the \n delimiter to simulate hitting Enter on the keyboard.
If there are better ways to achieve this, please feel free to share them. The next steps are to parse the stdout for a response and reply accordingly.

An empty Start will work, however within the ssh package, Start, Run, and Shell are all calls to, basically, the same thing. Start("cmd") executes a command within a shell, Run("cmd") is a simple call to Start("cmd") that then invokes Wait() for you (giving the feel of executing without concurrency), and Shell opens a shell (like Start), but without a command passed. It's six of one, half a dozen of the other, really, but using Shell() is probably the cleanest way to go about that.
Also, bear in mind that Start() and Shell() both leverage concurrency without an explicit go invocation, so you should be able to drop your own go keyword there; invoking concurrency manually gains you nothing meaningful. The automatic concurrency of Start and Shell is the reason the Wait() and Run("cmd") methods exist.
If you have no need to interpret your output (stdout/err), then you can map these without the Pipe() calls or io.Copy(), which is easier and more efficient. I did this in the example below, but bear in mind that if you do interpret the output, it's probably easier to work with the Pipe(). You can send multiple commands (or numbers) sequentially without reading for a prompt in most cases, but some things (like password prompts) clear the input buffer. If this happens for you, then you'll need to read Stdout to find your prompt, or leverage an expect tool like goexpect (https://github.com/google/goexpect). There are several expect-like packages for Go, but this one is from Google and (as of this posting) still fairly actively maintained.
StdinPipe() exposes a writeCloser that can be leveraged without io.Copy(), which should be more efficient.
Your for loop that writes to the StdinPipe() should allow you to enter several commands (or in your case, sets of numbers)... as an example, I have this reading commands (numbers, etc) from os.Args and iterating through them.
Lastly, you should probably add a session.Close() for healthy completion (you already have one on the error path). That said, this is what I would recommend (based on your last example):
modes := ssh.TerminalModes{
	ssh.ECHO:          0,     // disable echoing
	ssh.TTY_OP_ISPEED: 14400, // input speed = 14.4kbaud
	ssh.TTY_OP_OSPEED: 14400, // output speed = 14.4kbaud
}
if err := session.RequestPty("xterm", 80, 40, modes); err != nil {
	session.Close()
	log.Fatalf("request for pseudo terminal failed: %s", err)
}
defer session.Close()
stdin, err := session.StdinPipe()
if err != nil {
	log.Fatalf("Unable to setup stdin for session: %v", err)
}
session.Stdout = os.Stdout
session.Stderr = os.Stderr
err = session.Shell()
if err != nil {
	log.Fatalf("Unable to setup shell for session: %v", err)
}
for _, cmd := range os.Args {
	stdin.Write([]byte(cmd + "\n"))
}
session.Wait()
Oh, one more item to note: the Wait() method relies on an unchecked channel that retrieves the exit status from your command, and this does hang on rare occasions (it is particularly problematic when connecting to Cisco gear, but can happen with others as well). If you find that you encounter this (or you'd just like to be careful), you might want to wrap Wait() in some sort of timeout: invoke Wait() concurrently and read its result through a channel in a select alongside time.After() (I can provide an example, if that would be helpful).

Related

io.Pipe() not working as desired. What am I doing wrong here?

I have been testing exec functionality to a kubernetes pod with client-go. This is the code that works perfectly with os.Stdin
{
	// Prepare the API URL used to execute another process within the Pod. In
	// this case, we'll run a remote shell.
	req := coreclient.RESTClient().
		Post().
		Namespace(pod.Namespace).
		Resource("pods").
		Name(pod.Name).
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: pod.Spec.Containers[0].Name,
			Command:   []string{"/bin/sh"},
			Stdin:     true,
			Stdout:    true,
			Stderr:    true,
			TTY:       true,
		}, scheme.ParameterCodec)
	exec, err := remotecommand.NewSPDYExecutor(restconfig, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	// Put the terminal into raw mode to prevent it echoing characters twice.
	oldState, err := terminal.MakeRaw(0)
	if err != nil {
		panic(err)
	}
	defer terminal.Restore(0, oldState)
	// Connect this process' std{in,out,err} to the remote shell process.
	err = exec.Stream(remotecommand.StreamOptions{
		Stdin:  os.Stdin,
		Stdout: os.Stdout,
		Stderr: os.Stderr,
		Tty:    true,
	})
	if err != nil {
		panic(err)
	}
	fmt.Println()
}
I then started to test with an io.Pipe() so that I can give it input apart from os.Stdin, basically from a variable or any other source. The modified code is below:
{
	// Prepare the API URL used to execute another process within the Pod. In
	// this case, we'll run a remote shell.
	req := coreclient.RESTClient().
		Post().
		Namespace(pod.Namespace).
		Resource("pods").
		Name(pod.Name).
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: pod.Spec.Containers[0].Name,
			Command:   []string{"/bin/sh"},
			Stdin:     true,
			Stdout:    true,
			Stderr:    true,
			TTY:       true,
		}, scheme.ParameterCodec)
	exec, err := remotecommand.NewSPDYExecutor(restconfig, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	// Put the terminal into raw mode to prevent it echoing characters twice.
	oldState, err := terminal.MakeRaw(0)
	if err != nil {
		panic(err)
	}
	defer terminal.Restore(0, oldState)
	// Scanning for inputs from os.Stdin
	stdin, putStdin := io.Pipe()
	go func() {
		consolescanner := bufio.NewScanner(os.Stdin)
		for consolescanner.Scan() {
			input := consolescanner.Text()
			fmt.Println("input:", input)
			putStdin.Write([]byte(input))
		}
		if err := consolescanner.Err(); err != nil {
			fmt.Println(err)
			os.Exit(1)
		}
	}()
	// Connect this process' std{in,out,err} to the remote shell process.
	err = exec.Stream(remotecommand.StreamOptions{
		Stdin:  stdin,
		Stdout: os.Stdout,
		Stderr: os.Stdout,
		Tty:    true,
	})
	if err != nil {
		panic(err)
	}
	fmt.Println()
}
This oddly seems to hang the terminal. Can someone point out what I am doing wrong?
I didn't try to understand all of your code, but: when executing a separate process, you pretty much always want to use os.Pipe, not io.Pipe.
os.Pipe is a pipe created by the operating system. io.Pipe is a software construct that lives entirely in Go that copies from an io.Writer to an io.Reader. Using an io.Pipe when executing a separate process will generally be implemented by creating an os.Pipe and starting up goroutines to copy between the io.Pipe and the os.Pipe. Just use an os.Pipe.
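To illustrate the os.Pipe approach, here is a self-contained sketch. The pipeToChild helper and the use of cat as the child process are my own illustration (assuming a Unix-like system); the point is that the child reads directly from the OS-level file descriptor with no copying goroutines.

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// pipeToChild writes input into an os.Pipe and lets a child process
// ("cat" here, assumed available on a Unix-like system) read it as stdin.
func pipeToChild(input string) (string, error) {
	r, w, err := os.Pipe()
	if err != nil {
		return "", err
	}
	cmd := exec.Command("cat")
	cmd.Stdin = r // the child reads directly from the OS-level pipe
	var out bytes.Buffer
	cmd.Stdout = &out
	if err := cmd.Start(); err != nil {
		return "", err
	}
	r.Close() // the parent no longer needs the read end
	w.Write([]byte(input))
	w.Close() // closing the write end signals EOF to the child
	if err := cmd.Wait(); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	out, err := pipeToChild("hello from the parent\n")
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```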
I was able to resolve my issue. Unfortunately, none of the above methods helped me; instead I did the following.
I created a separate io.Reader for the string I wanted to input, then did an io.Copy from that reader to putStdin from the above code snippet. Earlier I used putStdin.Write(<string>), which did not do the trick.
I hope this solves the issue for some folks.
UPDATE:
Thanks @bcmills for reminding me that the buffer of os.Pipe is system-dependent.
Let's re-look at the os.Pipe() return values:
reader, writer, err := os.Pipe()
To account for that, we should have a MAX_WRITE_SIZE constant capping the length of the byte slice written to the writer. The value of MAX_WRITE_SIZE should be system-dependent too. For example, on Linux the pipe buffer size is 64k, so we can configure MAX_WRITE_SIZE to a value < 64k.
If the length of the data to send is greater than MAX_WRITE_SIZE, it can be broken into chunks and sent sequentially.
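The chunking described above might be sketched like this; the maxWriteSize value and the chunks helper are my own illustration, not part of any API, and the 32 KiB cap is simply an assumed value below the 64 KiB Linux default.

```go
package main

import "fmt"

// maxWriteSize is an assumed cap below the default Linux pipe buffer
// (64 KiB); adjust for your system.
const maxWriteSize = 32 * 1024

// chunks splits data into slices of at most maxWriteSize bytes, so each
// individual Write to the pipe stays below the kernel buffer size.
func chunks(data []byte) [][]byte {
	var out [][]byte
	for len(data) > 0 {
		n := len(data)
		if n > maxWriteSize {
			n = maxWriteSize
		}
		out = append(out, data[:n])
		data = data[n:]
	}
	return out
}

func main() {
	data := make([]byte, 70*1024) // larger than one pipe buffer
	for i, c := range chunks(data) {
		fmt.Printf("chunk %d: %d bytes\n", i, len(c))
	}
}
```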
The terminal hangs because of a deadlock when you use io.Pipe().
From the documentation on io.Pipe():
Pipe creates a synchronous in-memory pipe.
The data is copied directly from the Write to the corresponding Read (or Reads); there is no internal buffering.
So writing to the pipe when no read call is blocked on it causes a deadlock (and similarly, reading from the pipe when no write call is blocked).
To solve the issue, you should use os.Pipe, which behaves like a Unix pipe: the data is buffered by the kernel, so no matching read/write call is required. As the Linux pipe man page says:
Data written to the write end of the pipe is buffered by the kernel until it is read from the read end of the pipe.

Issues with order of scanner.Scan() when using multiple scanners

For some background, I'm pretty new to Go, but the person who wrote this program at work left so the code is my responsibility now. This program wraps a CLI tool that writes to stdout and stderr. We want to process the output while also gracefully handling the errors of the underlying tool.
This is the relevant snippet of code that is currently being used:
cmd := exec.Command(args[0], args[1:]...)
stdout, err := cmd.StdoutPipe()
if err != nil {
	log.Fatal(err)
}
stderr, err := cmd.StderrPipe()
if err != nil {
	log.Fatal(err)
}
cmd.Start()
scanner := bufio.NewScanner(stdout)
errScanner := bufio.NewScanner(stderr)
for errScanner.Scan() {
	err := errScanner.Text()
	log.Fatal(err)
}
for scanner.Scan() {
	// process stdout data
}
if scanner.Err() != nil {
	log.Fatal(scanner.Err())
}
cmd.Wait()
Normally this works fine. However, if the size of the data written to standard out exceeds bufio.MaxScanTokenSize (64 KB), the program just hangs with no errors. The underlying command finishes, but neither of the scanner for loops is reached. I found that if I swap the positions of the errScanner.Scan() and scanner.Scan() loops, the issue no longer occurs. This is what I mean:
cmd := exec.Command(args[0], args[1:]...)
stdout, err := cmd.StdoutPipe()
if err != nil {
	log.Fatal(err)
}
stderr, err := cmd.StderrPipe()
if err != nil {
	log.Fatal(err)
}
cmd.Start()
scanner := bufio.NewScanner(stdout)
errScanner := bufio.NewScanner(stderr)
for scanner.Scan() {
	// process stdout
}
for errScanner.Scan() {
	err := errScanner.Text()
	log.Fatal(err)
}
if scanner.Err() != nil {
	log.Fatal(scanner.Err())
}
cmd.Wait()
Does anyone know why the initial problem happens, and why swapping the two scanners fixes it? My guess was that the two scanners were sharing the same underlying buffer, which could cause problems, but I created two separate buffers, assigned them to the scanners, and it didn't fix the issue.
Any help is appreciated!
The way it is written, your program will read until all data is consumed from one of the streams before touching the other, depending on the order. If, while you are reading from that stream, the other stream's pipe buffer fills, the running program (the one whose output you're reading) will block because it cannot write any more output to that stream.
It looks like you are not really handling the errors, so you can read the error stream in a goroutine:
go func() {
	for errScanner.Scan() {
		...
	}
}()
for scanner.Scan() {
	...
}
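Putting that goroutine fix together, a self-contained sketch could look like this. The runAndScan helper and the use of sh -c as a stand-in child command are my own illustration (assuming a Unix-like system); note that both pipes are fully drained before cmd.Wait is called, as the os/exec docs require.

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"sync"
)

// runAndScan runs a command and scans stdout and stderr concurrently,
// so neither pipe's buffer can fill up and block the child process.
func runAndScan(name string, args ...string) (outLines, errLines []string, err error) {
	cmd := exec.Command(name, args...)
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return nil, nil, err
	}
	stderr, err := cmd.StderrPipe()
	if err != nil {
		return nil, nil, err
	}
	if err := cmd.Start(); err != nil {
		return nil, nil, err
	}
	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // drain stderr in a goroutine while we read stdout
		defer wg.Done()
		sc := bufio.NewScanner(stderr)
		for sc.Scan() {
			errLines = append(errLines, sc.Text())
		}
	}()
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		outLines = append(outLines, sc.Text())
	}
	wg.Wait() // both pipes fully drained before Wait
	return outLines, errLines, cmd.Wait()
}

func main() {
	out, errs, err := runAndScan("sh", "-c", "echo out-line; echo err-line >&2")
	if err != nil {
		panic(err)
	}
	fmt.Println("stdout:", out, "stderr:", errs)
}
```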

Write file from exec.Command

I am trying to write the output of a bash command into a file in Go.
Note there are several reasons for using Go over bash here: I have some more logic, such as parsing configuration files; I would like to run this code for multiple DBs in parallel; and finally I'm performing some more complex data manipulation afterwards.
dumpStr := fmt.Sprintf("pg_dump -U %s -h %s %s | gzip", DbUserName, DbHost, DbName)
cmd := exec.Command("bash", "-c", dumpStr)
cmd.Env = append(cmd.Env, "PGPASSWORD="+DbPassword)
outfile, err := os.Create(DbName + ".gz")
if err != nil {
	panic(err)
}
outfile = cmd.Stdout
defer outfile.Close()
err = cmd.Start()
if err != nil {
	panic(err)
}
cmd.Wait()
However, I am getting an empty result.
I get data if I execute dumpStr from the CLI, but not from this code...
What am I missing?
As Flimzy said, you're not capturing the output of pg_dump. You can do that in Go, or you can use pg_dump's --file option. It can also compress with --compress, so there's no need to pipe to gzip. Then there's no need for bash, and you avoid shell-quoting issues.
cmd := exec.Command(
	"pg_dump",
	"--compress=9",
	"--file="+DbName+".gz",
	"-U"+DbUserName,
	"-h"+DbHost,
	DbName,
)
log.Print("Running pg_dump...")
if err := cmd.Run(); err != nil {
	log.Fatal(err)
}
Much simpler and more secure.
For illustration here's how you'd do it all in Go.
Use Cmd.StdoutPipe to get an io.Reader connected to pg_dump's stdout, then use io.Copy to copy from that reader into your open file.
@Peter points out that since Cmd.Stdout is an io.Writer, it's simpler to assign the open file to cmd.Stdout and let cmd write to it directly.
// Same as above, but no --file.
cmd := exec.Command(
	"pg_dump",
	"--compress=9",
	"-U"+DbUserName,
	"-h"+DbHost,
	DbName,
)
// Open the output file
outfile, err := os.Create(DbName + ".gz")
if err != nil {
	log.Fatal(err)
}
defer outfile.Close()
// Send stdout to the outfile. cmd.Stdout will take any io.Writer.
cmd.Stdout = outfile
// Start the command
if err = cmd.Start(); err != nil {
	log.Fatal(err)
}
log.Print("Waiting for command to finish...")
// Wait for the command to finish.
if err = cmd.Wait(); err != nil {
	log.Fatal(err)
}
In addition, you're only checking if the command started, not if it successfully ran.
From the docs for Cmd.Start.
Start starts the specified command but does not wait for it to complete.
The Wait method will return the exit code and release associated resources once the command exits.
You're checking cmd.Start for an error, but not cmd.Wait. Checking the error from cmd.Start only means the command started. If there is an error while the program is running you won't know what it is.
You need to actually use the output of your command, and you're not doing that. To do so, use the StdoutPipe method; then you can copy the stdout of the program into your file.

How to handle EOF / Ctrl+D in Go crypto/ssh session.Wait()

When I use session.Shell() to start a new shell on a remote server and then session.Wait() to run the session until the remote end exits the session does not gracefully handle using Ctrl+D to end the session.
I can make this work using os/exec to launch a local child process to run whatever copy of the ssh client is available locally, but I would prefer to do this with native Go.
Example code snippet:
conn, err := ssh.Dial("tcp", "some-server.fqdn", sshConfig)
if err != nil {
	return err
}
defer conn.Close()
session, err := conn.NewSession()
if err != nil {
	return err
}
defer session.Close()
session.Stdout = os.Stdout
session.Stderr = os.Stderr
session.Stdin = os.Stdin
modes := ssh.TerminalModes{
	ssh.ECHO:          0,
	ssh.TTY_OP_ISPEED: 14400,
	ssh.TTY_OP_OSPEED: 14400,
}
err = session.RequestPty("xterm", 80, 40, modes)
if err != nil {
	return err
}
err = session.Shell()
if err != nil {
	return err
}
session.Wait()
Running exit on the remote server gracefully hangs up the remote end, and session.Wait() returns as expected, but sending an EOF with Ctrl+D causes the remote end to hang up while the call to session.Wait() stays blocked. I have to use Ctrl+C to SIGINT the Go program.
I would like both to gracefully exit the session.Wait() call, as that is the expected behavior for most interactive SSH sessions.
I was able to reproduce this (with a bunch of additional framework code) but am not sure why it happens. It is possible to terminate the session by adding a session.Close() call if you encounter EOF on stdin:
session.Stdout = os.Stdout
session.Stderr = os.Stderr
// session.Stdin = os.Stdin
ip, err := session.StdinPipe()
if err != nil {
	return err
}
go func() {
	io.Copy(ip, os.Stdin)
	fmt.Println("stdin ended")
	time.Sleep(1 * time.Second)
	fmt.Println("issuing ip.Close() now")
	ip.Close()
	time.Sleep(1 * time.Second)
	fmt.Println("issuing session.Close() now")
	err = session.Close()
	if err != nil {
		fmt.Printf("close: %v\n", err)
	}
}()
You'll see that the session shuts down (not very nicely) after the session.Close(). Calling ip.Close() should have shut down the stdin channel (and it seems like this should happen when just using os.Stdin directly, too), but for some reason it does not work. Debugging shows an ssh-channel-close message going to the other end (in both cases), but the other end doesn't close the return-data SSH channels, so your end continues to wait for more output from them.
Worth noting: you have not put the local tty into raw-ish character-at-a-time mode. A regular ssh session does, so ^D does not actually close the connection, it just sends a literal control-D to the pty on the other side. It's the other end that turns that control-D into a (soft) EOF signal on the pty.

SSH: is it possible to get STDERR from the session with pseudo-terminal?

This question is about the golang.org/x/crypto/ssh package, and perhaps pseudo-terminal behaviour in general.
The code
Here is the demo code. You can run it on your local machine; just change the credentials for SSH access.
package main

import (
	"bufio"
	"fmt"
	"io"

	"golang.org/x/crypto/ssh"
)

func main() {
	var pipe io.Reader
	whichPipe := "error" // error or out
	address := "192.168.1.62:22"
	username := "username"
	password := "password"
	sshConfig := &ssh.ClientConfig{
		User: username,
		Auth: []ssh.AuthMethod{ssh.Password(password)},
	}
	connection, err := ssh.Dial("tcp", address, sshConfig)
	if err != nil {
		panic(err)
	}
	session, err := connection.NewSession()
	if err != nil {
		panic(err)
	}
	modes := ssh.TerminalModes{
		ssh.ECHO:          0,
		ssh.ECHOCTL:       0,
		ssh.TTY_OP_ISPEED: 14400,
		ssh.TTY_OP_OSPEED: 14400,
	}
	if err := session.RequestPty("xterm", 80, 0, modes); err != nil {
		session.Close()
		panic(err)
	}
	switch whichPipe {
	case "error":
		pipe, _ = session.StderrPipe()
	case "out":
		pipe, _ = session.StdoutPipe()
	}
	err = session.Run("whoami23")
	scanner := bufio.NewScanner(pipe)
	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
}
Actual result
Empty line
Expected result
bash: whoami23: command not found
Current "solution"
To get the expected result you have two options:
Change whichPipe's value to out. Yes, all errors go to stdout when you use a tty.
Remove session.RequestPty. But in my case I need to run sudo commands that require a tty (the servers are out of my control, so I can't disable this requirement).
I use a third way: I check err from err = session.Run("whoami23"), and if it's not nil I treat the content of session.StdoutPipe() as STDERR.
But this method has limits. For example, if I run something like sudo sh -c 'uname -r; whoami23;', the whole result will be marked as an error, even though uname -r writes its output to STDOUT.
The question
While the behaviour looks logical to me (all the SSH client sees from the pty is a single output stream, without differentiation), I'm still not sure whether I'm missing something and there is a trick that allows splitting these outputs.
