How to close or clean a stdout pipe? - go

I have a program that makes an SSH connection to a GCP instance (a new instance every time the program is executed) to retrieve information. The problem is that sometimes I get this error and I don't know why:
2019/08/22 12:30:37 ssh: Stdout already set
My code (error handling omitted):
results := "/home/example.txt"
client, err := ssh.Dial("tcp", addrIP+":22", clientConfig)
session, err := client.NewSession()
defer session.Close()
data, err := session.Output("cat " + results)
if err != nil {
    log.Print("Fails when new output")
    log.Fatal(err)
}
The error occurs during the call to session.Output.

Calling session.Output sets the session's Stdout to a buffer, runs the given command, and returns the buffer's contents.
If the session's Stdout is already set (for example, if you call session.Output multiple times on the same session), the "Stdout already set" error is returned.
If you need to run multiple commands in one session, manually set Stdout to a buffer you maintain yourself, and use the session.Run() method instead of session.Output.
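For illustration, a minimal sketch of that approach, reusing client and results from the question (not from the original answer; note that an x/crypto/ssh Session accepts only one Run/Output call, so create a fresh session per command):
var stdout bytes.Buffer
session, err := client.NewSession()
if err != nil {
    log.Fatal(err)
}
defer session.Close()
// Point the session's Stdout at a buffer you own, then use Run instead of Output.
session.Stdout = &stdout
if err := session.Run("cat " + results); err != nil {
    log.Fatal(err)
}
data := stdout.Bytes() // the same contents Output would have returned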

Related

io.Pipe() not working as desired. What am I doing wrong here?

I have been testing exec functionality against a Kubernetes pod with client-go. This code works perfectly with os.Stdin:
{
    // Prepare the API URL used to execute another process within the Pod. In
    // this case, we'll run a remote shell.
    req := coreclient.RESTClient().
        Post().
        Namespace(pod.Namespace).
        Resource("pods").
        Name(pod.Name).
        SubResource("exec").
        VersionedParams(&corev1.PodExecOptions{
            Container: pod.Spec.Containers[0].Name,
            Command:   []string{"/bin/sh"},
            Stdin:     true,
            Stdout:    true,
            Stderr:    true,
            TTY:       true,
        }, scheme.ParameterCodec)
    exec, err := remotecommand.NewSPDYExecutor(restconfig, "POST", req.URL())
    if err != nil {
        panic(err)
    }
    // Put the terminal into raw mode to prevent it echoing characters twice.
    oldState, err := terminal.MakeRaw(0)
    if err != nil {
        panic(err)
    }
    defer terminal.Restore(0, oldState)
    // Connect this process' std{in,out,err} to the remote shell process.
    err = exec.Stream(remotecommand.StreamOptions{
        Stdin:  os.Stdin,
        Stdout: os.Stdout,
        Stderr: os.Stderr,
        Tty:    true,
    })
    if err != nil {
        panic(err)
    }
    fmt.Println()
}
I then started to test with an io.Pipe() so that I can feed it input from sources other than os.Stdin, basically from a variable or anything else. The modified code is below:
{
    // Prepare the API URL used to execute another process within the Pod. In
    // this case, we'll run a remote shell.
    req := coreclient.RESTClient().
        Post().
        Namespace(pod.Namespace).
        Resource("pods").
        Name(pod.Name).
        SubResource("exec").
        VersionedParams(&corev1.PodExecOptions{
            Container: pod.Spec.Containers[0].Name,
            Command:   []string{"/bin/sh"},
            Stdin:     true,
            Stdout:    true,
            Stderr:    true,
            TTY:       true,
        }, scheme.ParameterCodec)
    exec, err := remotecommand.NewSPDYExecutor(restconfig, "POST", req.URL())
    if err != nil {
        panic(err)
    }
    // Put the terminal into raw mode to prevent it echoing characters twice.
    oldState, err := terminal.MakeRaw(0)
    if err != nil {
        panic(err)
    }
    defer terminal.Restore(0, oldState)
    // Scanning for inputs from os.Stdin
    stdin, putStdin := io.Pipe()
    go func() {
        consolescanner := bufio.NewScanner(os.Stdin)
        for consolescanner.Scan() {
            input := consolescanner.Text()
            fmt.Println("input:", input)
            putStdin.Write([]byte(input))
        }
        if err := consolescanner.Err(); err != nil {
            fmt.Println(err)
            os.Exit(1)
        }
    }()
    // Connect this process' std{in,out,err} to the remote shell process.
    err = exec.Stream(remotecommand.StreamOptions{
        Stdin:  stdin,
        Stdout: os.Stdout,
        Stderr: os.Stdout,
        Tty:    true,
    })
    if err != nil {
        panic(err)
    }
    fmt.Println()
}
This oddly seems to hang the terminal. Can someone point out what I am doing wrong?
I didn't try to understand all of your code, but: when executing a separate process, you pretty much always want to use os.Pipe, not io.Pipe.
os.Pipe is a pipe created by the operating system. io.Pipe is a software construct that lives entirely in Go that copies from an io.Writer to an io.Reader. Using an io.Pipe when executing a separate process will generally be implemented by creating an os.Pipe and starting up goroutines to copy between the io.Pipe and the os.Pipe. Just use an os.Pipe.
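For illustration, a hedged sketch of how the io.Pipe section of the snippet above might be swapped for os.Pipe (variable names are adapted from the question; this is a sketch, not a tested drop-in):
// Replace stdin, putStdin := io.Pipe() with an OS-level pipe.
stdinReader, stdinWriter, err := os.Pipe()
if err != nil {
    panic(err)
}
go func() {
    consolescanner := bufio.NewScanner(os.Stdin)
    for consolescanner.Scan() {
        // Append the newline so the remote shell sees complete lines.
        stdinWriter.Write([]byte(consolescanner.Text() + "\n"))
    }
    stdinWriter.Close()
}()
err = exec.Stream(remotecommand.StreamOptions{
    Stdin:  stdinReader, // *os.File: writes are buffered by the kernel
    Stdout: os.Stdout,
    Stderr: os.Stdout,
    Tty:    true,
})
if err != nil {
    panic(err)
}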
I was able to resolve my issue. Unfortunately, none of the above methods helped me; instead I did the following.
I created a separate io.Reader for the string I wanted to input, then did an io.Copy from that reader to putStdin from the code snippet above. Earlier I used putStdin.Write(<string>), which did not do the trick.
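For illustration, a minimal sketch of that approach, reusing putStdin from the snippet above (the input string is a hypothetical example, not from the original post):
input := strings.NewReader("10000\n") // hypothetical input; any io.Reader works here
go func() {
    if _, err := io.Copy(putStdin, input); err != nil {
        fmt.Println(err)
    }
}()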
I hope this solves issues for some folks.
UPDATE:
Thanks @bcmills for reminding me that the buffer of os.Pipe is system-dependent.
Let's look again at the return values of os.Pipe():
reader, writer, err := os.Pipe()
To account for that, we should have a MAX_WRITE_SIZE constant bounding the length of the byte slice written to the writer.
The value of MAX_WRITE_SIZE is system-dependent as well. For example, on Linux the pipe buffer size is 64k, so we can set MAX_WRITE_SIZE to a value below 64k.
If the data to send is longer than MAX_WRITE_SIZE, it can be broken into chunks and sent sequentially.
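A minimal sketch of that chunking idea (maxWriteSize is an assumed constant, not part of the original code):
const maxWriteSize = 32 * 1024 // assumed value, below the 64k Linux pipe buffer

func writeChunked(w io.Writer, data []byte) error {
    for len(data) > 0 {
        n := len(data)
        if n > maxWriteSize {
            n = maxWriteSize
        }
        if _, err := w.Write(data[:n]); err != nil {
            return err
        }
        data = data[n:]
    }
    return nil
}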
The reason the terminal hangs is a deadlock caused by using io.Pipe().
From the documentation of io.Pipe():
Pipe creates a synchronous in-memory pipe.
The data is copied directly from the Write to the corresponding Read (or Reads); there is no internal buffering.
So writing to the pipe when no read call is blocked on it will cause a deadlock (just as reading from the pipe blocks when no write call is pending).
To solve the issue, you should use os.Pipe, which behaves like an ordinary Linux pipe.
Because the data is buffered by the kernel, no matching read/write call is required for the other side to proceed.
Data written to the write end of the pipe is buffered by the kernel until it is read from the read end of the pipe.
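Not from the original answer, but a tiny standalone demo of the difference described above (a sketch, nothing more):
package main

import (
    "fmt"
    "io"
    "io/ioutil"
    "os"
)

func main() {
    // os.Pipe: the write completes immediately; the kernel buffers the data.
    r, w, err := os.Pipe()
    if err != nil {
        panic(err)
    }
    w.Write([]byte("buffered by the kernel\n"))
    w.Close()
    out, _ := ioutil.ReadAll(r)
    fmt.Print(string(out))

    // io.Pipe: the write only completes because a reader consumes it;
    // without the goroutine this program would block forever.
    pr, pw := io.Pipe()
    go func() {
        pw.Write([]byte("handed directly to the reader\n"))
        pw.Close()
    }()
    out, _ = ioutil.ReadAll(pr)
    fmt.Print(string(out))
}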

Write file from exec.Command

I am trying to write the output of a bash command into a file in Go.
Note there are several reasons for using Go over bash here: I have some more logic, such as parsing configuration files; I would like to run this code for multiple DBs in parallel; and finally I'm performing some more complex data manipulation afterwards.
dumpStr := fmt.Sprintf("pg_dump -U %s -h %s %s | gzip", DbUserName, DbHost, DbName)
cmd := exec.Command("bash", "-c", dumpStr)
cmd.Env = append(cmd.Env, "PGPASSWORD="+DbPassword)
outfile, err := os.Create(DbName + ".gz")
if err != nil {
    panic(err)
}
outfile = cmd.Stdout
defer outfile.Close()
err = cmd.Start()
if err != nil {
    panic(err)
}
cmd.Wait()
However, I am getting an empty result.
I get data when I execute dumpStr from the CLI, but not from this code...
What am I missing?
As Flimzy said, you're not capturing the output of pg_dump. You can do that in Go, or you can use pg_dump's --file option. It can also compress with --compress, so there's no need to pipe to gzip. Then there's no need for bash at all, and you avoid shell-quoting issues.
cmd := exec.Command(
    "pg_dump",
    "--compress=9",
    "--file="+DbName+".gz",
    "-U"+DbUserName,
    "-h"+DbHost,
    DbName,
)
log.Print("Running pg_dump...")
if err := cmd.Run(); err != nil {
    log.Fatal(err)
}
Much simpler and more secure.
For illustration here's how you'd do it all in Go.
Use Cmd.StdoutPipe to get an io.Reader connected to pg_dump's stdout, then use io.Copy to copy from that reader into your open file.
@Peter points out that since Cmd.Stdout is an io.Writer, it's even simpler to assign the open file to cmd.Stdout and let the command write to it directly.
// Same as above, but no --file.
cmd := exec.Command(
    "pg_dump",
    "--compress=9",
    "-U"+DbUserName,
    "-h"+DbHost,
    DbName,
)
// Open the output file.
outfile, err := os.Create(DbName + ".gz")
if err != nil {
    log.Fatal(err)
}
defer outfile.Close()
// Send stdout to the outfile. cmd.Stdout will take any io.Writer.
cmd.Stdout = outfile
// Start the command.
if err = cmd.Start(); err != nil {
    log.Fatal(err)
}
log.Print("Waiting for command to finish...")
// Wait for the command to finish.
if err = cmd.Wait(); err != nil {
    log.Fatal(err)
}
In addition, you're only checking if the command started, not if it successfully ran.
From the docs for Cmd.Start.
Start starts the specified command but does not wait for it to complete.
The Wait method will return the exit code and release associated resources once the command exits.
You're checking cmd.Start for an error, but not cmd.Wait. Checking the error from cmd.Start only means the command started. If there is an error while the program is running you won't know what it is.
You need to actually use the output of your command, and you're not doing that. To do so, use the StdoutPipe method; then you can copy the stdout of your command into your file.
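For completeness, a minimal sketch of the StdoutPipe/io.Copy variant, reusing cmd and outfile from the snippets above (a sketch, not the answer's own code):
stdout, err := cmd.StdoutPipe()
if err != nil {
    log.Fatal(err)
}
if err := cmd.Start(); err != nil {
    log.Fatal(err)
}
// Copy everything the command writes to stdout into the file.
if _, err := io.Copy(outfile, stdout); err != nil {
    log.Fatal(err)
}
if err := cmd.Wait(); err != nil {
    log.Fatal(err)
}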

Entering ssh prompt data

So I'm able to ssh into the machine, but I'm having trouble entering data into the prompt.
...
sshConfig := &ssh.ClientConfig{
    User: user,
    Auth: []ssh.AuthMethod{
        ssh.Password(password),
    },
    HostKeyCallback: KeyPrint,
}
connection, err := ssh.Dial("tcp", connStr, sshConfig)
if err != nil {
    log.Fatalln(err)
}
session, err := connection.NewSession()
if err != nil {
    log.Fatalln(err)
}
modes := ssh.TerminalModes{
    ssh.ECHO:          0,     // disable echoing
    ssh.TTY_OP_ISPEED: 14400, // input speed = 14.4kbaud
    ssh.TTY_OP_OSPEED: 14400, // output speed = 14.4kbaud
}
if err := session.RequestPty("xterm", 80, 40, modes); err != nil {
    session.Close()
    log.Fatalf("request for pseudo terminal failed: %s", err)
}
stdin, err := session.StdinPipe()
if err != nil {
    log.Fatalf("Unable to setup stdin for session: %v", err)
}
go io.Copy(stdin, os.Stdin)
stdout, err := session.StdoutPipe()
if err != nil {
    log.Fatalf("Unable to setup stdout for session: %v", err)
}
go io.Copy(os.Stdout, stdout)
stderr, err := session.StderrPipe()
if err != nil {
    log.Fatalf("Unable to setup stderr for session: %v", err)
}
go io.Copy(os.Stderr, stderr)
// err = session.Run("1")
// Running session.Run("") allows me to interact with the remote machine's terminal in my own
// terminal. session.Start("") exits, and session.Wait() doesn't display the welcome screen
// that normally greets users, and the prompt doesn't appear.
session.Run("")
stdin.Write([]byte("10000"))
os.Stdin.WriteString("110000")
// log.Fatalln(n, err)
// os.Stdin.WriteString("1")
// for {
//     session.Run("1")
//     go os.Stdin.WriteString("1")
//     go stdin.Write([]byte("10000"))
// }
...
The above code snippet gets me into the machine, and the machine's prompt is displayed on my screen as if I had ssh'ed in manually. I can type in the shell... but I need Go to type into the shell for me. The prompt I'm interacting with is a text-based game, so I can't just issue commands (no ls, echo, grep, etc.); the only thing I'm allowed to pass in is numbers. How do I send input to the ssh session? I've tried many ways and none of the input seems to be going through.
I'm also attaching a screenshot of the prompt, just in case the description above is confusing in trying to portray the type of session this is.
UPDATE:
I think I've found a way to send the data, at least once.
session, err := connection.NewSession()
if err != nil {
    log.Fatalln(err)
}
// ---------------------------------
modes := ssh.TerminalModes{
    ssh.ECHO:          0,     // disable echoing
    ssh.TTY_OP_ISPEED: 14400, // input speed = 14.4kbaud
    ssh.TTY_OP_OSPEED: 14400, // output speed = 14.4kbaud
}
if err := session.RequestPty("xterm", 80, 40, modes); err != nil {
    session.Close()
    log.Fatalf("request for pseudo terminal failed: %s", err)
}
stdin, err := session.StdinPipe()
if err != nil {
    log.Fatalf("Unable to setup stdin for session: %v", err)
}
go io.Copy(stdin, os.Stdin)
stdout, err := session.StdoutPipe()
if err != nil {
    log.Fatalf("Unable to setup stdout for session: %v", err)
}
go io.Copy(os.Stdout, stdout)
stderr, err := session.StderrPipe()
if err != nil {
    log.Fatalf("Unable to setup stderr for session: %v", err)
}
go io.Copy(os.Stderr, stderr)
go session.Start("")
for {
    stdin.Write([]byte("10000000\n"))
    break
}
session.Wait()
I start the session with go session.Start(""). Remember that there is no point in passing a command, because all I'm doing is entering data in response to a prompt.
I then use session.Wait() at the end, after the for loop... kind of like one does when using channels and waitgroups. Inside the for loop I send data with stdin.Write([]byte("10000000\n")), where the important thing to note is the \n delimiter to simulate hitting Enter on the keyboard.
If there are any better ways to achieve this, please feel free to share them. The next steps are to parse the stdout for a response and reply accordingly.
An empty Start will work; however, within the ssh package, Start, Run, and Shell are all calls to basically the same thing. Start("cmd") executes a command within a shell, Run("cmd") is a simple call to Start("cmd") that then invokes Wait() for you (giving the feel of executing without concurrency), and Shell opens a shell (like Start) but without a command passed. It's six of one, half a dozen of the other, really, but using Shell() is probably the cleanest way to go about it.
Also, bear in mind that Start() and Shell() already return without blocking, so there is no need for an explicit "go"; wrapping them in a goroutine yourself buys you a millisecond or so at most, and unless that matters to you, you can drop it. The automatic concurrency of Start and Shell is the reason the Wait() and Run("cmd") methods exist.
If you have no need to interpret your output (stdout/err), then you can map these directly without the Pipe() calls or io.Copy(), which is easier and more efficient. I did this in the example below, but bear in mind that if you do interpret the output, it's probably easier to work with the Pipe(). You can send multiple commands (or numbers) sequentially without reading for a prompt in most cases, but some things (like password prompts) clear the input buffer. If that happens for you, you'll need to read Stdout to find your prompt, or leverage an expect-like tool such as goexpect (https://github.com/google/goexpect). There are several expect-like packages for Go, but this one is from Google and (as of this posting) still fairly recently maintained.
StdinPipe() exposes an io.WriteCloser that can be written to without io.Copy(), which should be more efficient.
The for loop that writes to the StdinPipe() should let you enter several commands (or, in your case, sets of numbers)... as an example, the code below reads them from os.Args and iterates through them.
Lastly, you should probably add a session.Close() for healthy completion (you already have one on the error path). With that said, this is what I would recommend (based on your last example):
modes := ssh.TerminalModes{
    ssh.ECHO:          0,     // disable echoing
    ssh.TTY_OP_ISPEED: 14400, // input speed = 14.4kbaud
    ssh.TTY_OP_OSPEED: 14400, // output speed = 14.4kbaud
}
if err := session.RequestPty("xterm", 80, 40, modes); err != nil {
    session.Close()
    log.Fatalf("request for pseudo terminal failed: %s", err)
}
defer session.Close()
stdin, err := session.StdinPipe()
if err != nil {
    log.Fatalf("Unable to setup stdin for session: %v", err)
}
session.Stdout = os.Stdout
session.Stderr = os.Stderr
err = session.Shell()
if err != nil {
    log.Fatalf("Unable to start shell for session: %v", err)
}
// Skip os.Args[0] (the program name) and send each remaining argument as a line of input.
for _, cmd := range os.Args[1:] {
    stdin.Write([]byte(cmd + "\n"))
}
session.Wait()
Oh, one more item to note: the Wait() method relies on an unchecked channel that retrieves the exit status from your command, and this does hang on rare occasions (it is particularly problematic when connecting to Cisco gear, but can happen with others as well). If you find that you encounter this (or you'd just like to be careful), you might want to wrap Wait() in some sort of timeout, such as invoking Wait() in a goroutine and reading its result through a channel that can be selected alongside time.After() (I can provide an example, if that would be helpful).
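For illustration, a hedged sketch of that timeout pattern (the 30-second value is an arbitrary example, not from the original answer):
done := make(chan error, 1)
go func() { done <- session.Wait() }()
select {
case err := <-done:
    if err != nil {
        log.Printf("session ended with error: %v", err)
    }
case <-time.After(30 * time.Second):
    log.Print("timed out waiting for the session; closing it")
    session.Close()
}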

How to resume reading after EOF in named pipe

I'm writing a program which opens a named pipe for reading, and then processes any lines written to this pipe:
err = syscall.Mkfifo("/tmp/myfifo", 0666)
if err != nil {
    panic(err)
}
pipe, err := os.OpenFile("/tmp/myfifo", os.O_RDONLY, os.ModeNamedPipe)
if err != nil {
    panic(err)
}
reader := bufio.NewReader(pipe)
scanner := bufio.NewScanner(reader)
for scanner.Scan() {
    line := scanner.Text()
    process(line)
}
This works fine as long as the writing process does not restart or otherwise send an EOF. When this happens, the loop terminates (as expected from the specifications of Scanner).
However, I want to keep the pipe open to accept further writes. I could just reinitialize the scanner of course, but I believe this would create a race condition where the scanner might not be ready while a new process has begun writing to the pipe.
Are there any other options? Do I need to work directly with the File type instead?
From the bufio GoDoc:
Scan ... returns false when the scan stops, either by reaching the end of the input or an error.
So you could leave the file open and read until EOF, then trigger scanner.Scan() again when the file has changed or at a regular interval (e.g. in a goroutine), making sure the pipe variable doesn't go out of scope so you can reference it again.
If I understand your concern about a race condition correctly, this wouldn't be an issue (unless write and read operations must be synchronized) but when the scanner is re-initialized it will end up back at the beginning of the file.
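A minimal sketch of that approach, reusing pipe and process from the question (the polling interval is an arbitrary choice, not from the original answer):
for {
    scanner := bufio.NewScanner(pipe)
    for scanner.Scan() {
        process(scanner.Text())
    }
    if err := scanner.Err(); err != nil {
        log.Fatal(err)
    }
    // EOF: the writer closed its end. Pause briefly, then scan again
    // so that a new writer on the FIFO can be picked up.
    time.Sleep(100 * time.Millisecond)
}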

Go exec.Command(...).Wait() hangs for ever

Hi guys, for some reason Wait() hangs forever when I execute the mysql command. Does anyone know why?
Here is my code.
// Import imports data into Local database
func (x MySQL) Import(data string, opt LocalDb) {
var stderr bytes.Buffer
cmd := exec.Command("mysql", x.importOptions(opt)...)
// Set < pipe variable
stdin, err := cmd.StdinPipe()
errChk(err)
cmd.Stderr = &stderr
cmd.Start()
// Write data to pipe
io.WriteString(stdin, data)
fmt.Println("Importing " + x.DB + " to localhost...")
// Log mysql error
if err := cmd.Wait(); err != nil {
log.Fatal(stderr.String())
} else {
fmt.Println("Importing complete")
}
}
This function accomplishes everything, and mysql imports the data into the database, but it never returns from Wait(); it just freezes there even though the command has completed.
The problem is that you haven't closed the stdin pipe. MySQL will remain active until it is closed.
The fix is thus simple:
// Write data to pipe
io.WriteString(stdin, data)
stdin.Close()
fmt.Println("Importing " + x.DB + " to localhost...")
The fact that StdinPipe() works this way is documented:
StdinPipe returns a pipe that will be connected to the command's standard input when the command starts. The pipe will be closed automatically after Wait sees the command exit. A caller need only call Close to force the pipe to close sooner. For example, if the command being run will not exit until standard input is closed, the caller must close the pipe.
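Putting it together, a condensed sketch of the corrected flow from the question (error handling simplified; not part of the original answer):
cmd := exec.Command("mysql", x.importOptions(opt)...)
var stderr bytes.Buffer
cmd.Stderr = &stderr
stdin, err := cmd.StdinPipe()
if err != nil {
    log.Fatal(err)
}
if err := cmd.Start(); err != nil {
    log.Fatal(err)
}
io.WriteString(stdin, data)
stdin.Close() // signal EOF so mysql finishes and Wait can return
if err := cmd.Wait(); err != nil {
    log.Fatal(stderr.String())
}
fmt.Println("Importing complete")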
