Consider the following Go code fragment:
cmd := exec.Command(program, arg0)
stdin, err := cmd.StdinPipe()
// produces error when b is too large
n, err := stdin.Write(b.Bytes())
Whenever b is too large, Write() returns an error. From experimenting with different sizes of b, this seems to happen whenever the length of b exceeds the Linux pipe buffer size. Is there a way around this? Essentially I need to feed large log files via stdin to an external script.
I wrote this program to test your code:
package main
import "os/exec"
import "fmt"
func main() {
    cmd := exec.Command("/bin/cat")
    in, _ := cmd.StdinPipe()
    cmd.Start()
    for i := 1024 * 1024; ; i += 1024 * 1024 {
        b := make([]byte, i)
        n, err := in.Write(b)
        fmt.Printf("%d: %v\n", n, err)
        if err != nil {
            cmd.Process.Kill()
            return
        }
    }
}
The only way this program gives an error is if the called process closes stdin. Does the program you call close stdin? This might be a bug in the Go runtime.
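For the original goal of feeding a large log file to an external script, one option is to stream the file into the pipe from a goroutine, so the write keeps pace with whatever the child drains from the pipe buffer. Here is a minimal sketch of that idea; /bin/cat and the file name app.log are placeholders for the real script and log file:
package main

import (
    "io"
    "log"
    "os"
    "os/exec"
)

func main() {
    // "/bin/cat" and "app.log" are placeholders for the real script and log file.
    cmd := exec.Command("/bin/cat")
    cmd.Stdout = os.Stdout

    logFile, err := os.Open("app.log")
    if err != nil {
        log.Fatal(err)
    }
    defer logFile.Close()

    stdin, err := cmd.StdinPipe()
    if err != nil {
        log.Fatal(err)
    }

    if err := cmd.Start(); err != nil {
        log.Fatal(err)
    }

    // io.Copy keeps writing as the child drains the pipe, so the size of the
    // Linux pipe buffer no longer matters; Close signals EOF when done.
    go func() {
        defer stdin.Close()
        if _, err := io.Copy(stdin, logFile); err != nil {
            log.Printf("copy to stdin failed: %v", err)
        }
    }()

    if err := cmd.Wait(); err != nil {
        log.Fatal(err)
    }
}
Alternatively, assigning the opened *os.File to cmd.Stdin before Start lets os/exec do the copying for you.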
Related
I would like to manage a process in Go with the package os/exec. I would like to start it and be able to read the output and write several times to the input.
The process I launch in the code below, menu.py, is just a Python script that echoes its input.
func ReadOutput(rc io.ReadCloser) (string, error) {
    x, err := ioutil.ReadAll(rc)
    s := string(x)
    return s, err
}

func main() {
    cmd := exec.Command("python", "menu.py")

    stdout, err := cmd.StdoutPipe()
    Check(err)
    stdin, err := cmd.StdinPipe()
    Check(err)

    err = cmd.Start()
    Check(err)

    go func() {
        defer stdin.Close() // If I don't close the stdin pipe, the python code will never take what I write in it
        io.WriteString(stdin, "blub")
    }()

    s, err := ReadOutput(stdout)
    if err != nil {
        Log("Process is finished ..")
    }
    Log(s)

    // STDIN IS CLOSED, I CAN'T RETRY!
}
And the simple code of menu.py:
while 1 == 1:
    name = raw_input("")
    print "Hello, %s. \n" % name
The Go code works, but if I don't close the stdin pipe after writing to it, the Python code never receives what is in it. That is fine if I only want to send one thing on input, but what if I want to send something again a few seconds later? The pipe is closed! How should I do this? Maybe the question really is "How do I flush a pipe from the WriteCloser interface?"
I think the primary problem here is that the python process doesn't work the way you might expect. Here's a bash script echo.sh that does the same thing:
#!/bin/bash
while read INPUT
do echo "Hello, $INPUT."
done
Calling this script from a modified version of your code doesn't have the same issue with needing to close stdin:
func ReadOutput(output chan string, rc io.ReadCloser) {
    r := bufio.NewReader(rc)
    for {
        x, _ := r.ReadString('\n')
        output <- string(x)
    }
}

func main() {
    cmd := exec.Command("bash", "echo.sh")

    stdout, err := cmd.StdoutPipe()
    Check(err)
    stdin, err := cmd.StdinPipe()
    Check(err)

    err = cmd.Start()
    Check(err)

    go func() {
        io.WriteString(stdin, "blab\n")
        io.WriteString(stdin, "blob\n")
        io.WriteString(stdin, "booo\n")
    }()

    output := make(chan string)
    defer close(output)
    go ReadOutput(output, stdout)

    for o := range output {
        Log(o)
    }
}
The Go code needed a few minor changes: the ReadOutput function had to be reworked so that it doesn't block, since ioutil.ReadAll would have waited for an EOF before returning.
Digging a little deeper, it looks like the real problem is the behaviour of raw_input - it doesn't flush stdout as expected. You can pass the -u flag to python to get the desired behaviour:
cmd := exec.Command("python", "-u", "menu.py")
or update your python code to use sys.stdin.readline() instead of raw_input() (see this related bug report: https://bugs.python.org/issue526382).
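To show the difference that makes, here is a rough sketch (not the asker's exact program) that writes a line, reads the reply, and then writes again without ever closing stdin; it assumes menu.py is run with -u and that each input line produces one greeting followed by the blank line that Python's print statement adds:
package main

import (
    "bufio"
    "fmt"
    "io"
    "log"
    "os/exec"
)

func main() {
    // -u turns off Python's output buffering, so each reply arrives as soon
    // as menu.py prints it.
    cmd := exec.Command("python", "-u", "menu.py")

    stdin, err := cmd.StdinPipe()
    if err != nil {
        log.Fatal(err)
    }
    stdout, err := cmd.StdoutPipe()
    if err != nil {
        log.Fatal(err)
    }
    if err := cmd.Start(); err != nil {
        log.Fatal(err)
    }

    scanner := bufio.NewScanner(stdout)

    // Write a line, read the greeting, then write again: stdin stays open
    // the whole time.
    for _, name := range []string{"blab", "blob"} {
        if _, err := io.WriteString(stdin, name+"\n"); err != nil {
            log.Fatal(err)
        }
        // menu.py's print emits a blank line after each greeting, so skip
        // empty lines until the real reply shows up.
        for scanner.Scan() {
            if line := scanner.Text(); line != "" {
                fmt.Println(line)
                break
            }
        }
    }

    stdin.Close()
    cmd.Wait()
}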
Even though there is a problem with your Python script, the main issue here is the Go pipe. A trick to solve this is to use two pipes, as follows:
// parentprocess.go
package main
import (
    "bufio"
    "io"
    "log"
    "os/exec"
)

func request(r *bufio.Reader, w io.Writer, str string) string {
    w.Write([]byte(str))
    w.Write([]byte("\n"))
    str, err := r.ReadString('\n')
    if err != nil {
        panic(err)
    }
    return str[:len(str)-1]
}

func main() {
    cmd := exec.Command("bash", "menu.sh")

    inr, inw := io.Pipe()
    outr, outw := io.Pipe()
    cmd.Stdin = inr
    cmd.Stdout = outw

    if err := cmd.Start(); err != nil {
        panic(err)
    }
    go cmd.Wait()

    reader := bufio.NewReader(outr)
    log.Printf(request(reader, inw, "Tom"))
    log.Printf(request(reader, inw, "Rose"))
}
The subprocess code follows the same logic as your Python code:
#!/usr/bin/env bash
# menu.sh
while true; do
    read -r name
    echo "Hello, $name."
done
If you want to keep your Python code, you need to make a few changes:
import sys

while 1 == 1:
    try:
        name = raw_input("")
        print "Hello, %s. \n" % name
        sys.stdout.flush()  # there's a stdout buffer
    except:
        pass  # make sure this process won't die when it comes across EOF
// StdinPipe returns a pipe that will be connected to the command's
// standard input when the command starts.
// The pipe will be closed automatically after Wait sees the command exit.
// A caller need only call Close to force the pipe to close sooner.
// For example, if the command being run will not exit until standard input
// is closed, the caller must close the pipe.
func (c *Cmd) StdinPipe() (io.WriteCloser, error)
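As a small illustration of the last two sentences, here is a sketch using wc -c as a stand-in for a command that reads until its standard input is closed:
package main

import (
    "bytes"
    "io"
    "log"
    "os/exec"
)

func main() {
    // "wc -c" will not exit until its standard input is closed, so the
    // caller has to Close the pipe before Wait, as the doc comment says.
    cmd := exec.Command("wc", "-c")

    stdin, err := cmd.StdinPipe()
    if err != nil {
        log.Fatal(err)
    }

    var out bytes.Buffer
    cmd.Stdout = &out

    if err := cmd.Start(); err != nil {
        log.Fatal(err)
    }

    io.WriteString(stdin, "hello")
    stdin.Close() // without this, Wait would block forever

    if err := cmd.Wait(); err != nil {
        log.Fatal(err)
    }
    log.Printf("wc -c counted: %s", out.String())
}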
I have this code
subProcess := exec.Cmd{
    Path: execAble,
    Args: []string{
        fmt.Sprintf("-config=%s", *configPath),
        fmt.Sprintf("-serverType=%s", *serverType),
        fmt.Sprintf("-reload=%t", *reload),
        fmt.Sprintf("-listenFD=%d", fd),
    },
    Dir: here,
}

subProcess.Stdout = os.Stdout
subProcess.Stderr = os.Stderr

logger.Info("starting subProcess:%s ", subProcess.Args)
if err := subProcess.Run(); err != nil {
    logger.Fatal(err)
}
and then I do os.Exit(1) to stop the main process.
I can get output from the subprocess, but I also want to write to its stdin.
I tried
subProcess.Stdin = os.Stdin
but it does not work.
I made a simple program (for testing). It reads a number and writes the given number out.
package main
import (
    "fmt"
)

func main() {
    fmt.Println("Hello, What's your favorite number?")
    var i int
    fmt.Scanf("%d\n", &i)
    fmt.Println("Ah I like ", i, " too.")
}
And here is the modified code
package main
import (
    "fmt"
    "io"
    "os"
    "os/exec"
)

func main() {
    subProcess := exec.Command("go", "run", "./helper/main.go") // Just for testing, replace with your subProcess

    stdin, err := subProcess.StdinPipe()
    if err != nil {
        fmt.Println(err) // replace with logger, or anything you want
    }
    defer stdin.Close() // the doc says subProcess.Wait will close it, but I'm not sure, so I kept this line

    subProcess.Stdout = os.Stdout
    subProcess.Stderr = os.Stderr

    fmt.Println("START") // for debug

    if err = subProcess.Start(); err != nil { // Use start, not run
        fmt.Println("An error occurred: ", err) // replace with logger, or anything you want
    }

    io.WriteString(stdin, "4\n")
    subProcess.Wait()

    fmt.Println("END") // for debug
}
The lines you are interested in are these:
stdin, err := subProcess.StdinPipe()
if err != nil {
    fmt.Println(err)
}
defer stdin.Close()

//...

io.WriteString(stdin, "4\n")

//...

subProcess.Wait()
Explanation of the above lines:
We gain access to the subprocess's stdin, so now we can write to it.
We use our power and write a number.
We wait for our subprocess to complete.
Output
START
Hello, What's your favorite number?
Ah I like 4 too.
END
For better understanding
There's now an updated example available in the Go docs: https://golang.org/pkg/os/exec/#Cmd.StdinPipe
If the subprocess won't continue until its stdin is closed, the io.WriteString() call needs to be wrapped inside an anonymous function run as a goroutine:
package main

import (
    "fmt"
    "io"
    "log"
    "os/exec"
)

func main() {
    cmd := exec.Command("cat")

    stdin, err := cmd.StdinPipe()
    if err != nil {
        log.Fatal(err)
    }

    go func() {
        defer stdin.Close()
        io.WriteString(stdin, "values written to stdin are passed to cmd's standard input")
    }()

    out, err := cmd.CombinedOutput()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s\n", out)
}
Though this question is a little old, here is my answer:
This question is of course very platform-specific, since how standard IO is handled depends on the OS implementation and not on the Go language. However, as a general rule of thumb (given which OSes are prevalent), what you ask is not possible.
On most modern operating systems you can pipe standard streams (as in #mraron's answer), and you can detach them (this is how daemons work), but you cannot reassign or delegate them to another process.
I think this limitation exists mainly out of security concerns. Bugs that allow remote code execution are still discovered from time to time; imagine if the OS allowed STDIN/OUT to be reassigned or delegated, then with such vulnerabilities the consequences would be disastrous.
While you cannot do this directly, as #AlexKey wrote earlier, you can still work around it. If the OS prevents you from piping your own standard streams, who cares: all you need is two channels and two goroutines.
var stdinChan = make(chan []byte)
var stdoutChan = make(chan []byte)

// when something happens on the stdout of your code, just pipe it to the stdout channel
stdoutChan <- somehowGotDataFromStdOut
Then you need the two funcs I mentioned before:
func Pipein() {
    for {
        stdinFromProg.Write(<-stdinChan)
    }
}
The same idea applies to stdout.
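Pieced together, that idea could look something like the following sketch. It is only one way to wire it up: cat stands in for the real program, and the channel names follow the fragments above.
package main

import (
    "bufio"
    "fmt"
    "log"
    "os/exec"
)

func main() {
    cmd := exec.Command("cat") // stand-in subprocess that echoes its input

    stdin, err := cmd.StdinPipe()
    if err != nil {
        log.Fatal(err)
    }
    stdout, err := cmd.StdoutPipe()
    if err != nil {
        log.Fatal(err)
    }
    if err := cmd.Start(); err != nil {
        log.Fatal(err)
    }

    stdinChan := make(chan []byte)
    stdoutChan := make(chan []byte)

    // Goroutine 1: everything sent on stdinChan is written to the child's stdin.
    go func() {
        for data := range stdinChan {
            if _, err := stdin.Write(data); err != nil {
                log.Printf("write to child stdin failed: %v", err)
                return
            }
        }
        stdin.Close() // closing the channel closes the child's stdin too
    }()

    // Goroutine 2: every line the child prints is forwarded on stdoutChan.
    go func() {
        scanner := bufio.NewScanner(stdout)
        for scanner.Scan() {
            stdoutChan <- append([]byte(nil), scanner.Bytes()...)
        }
        close(stdoutChan)
    }()

    stdinChan <- []byte("first line\n")
    stdinChan <- []byte("second line\n")
    close(stdinChan)

    for line := range stdoutChan {
        fmt.Printf("child said: %s\n", line)
    }
    cmd.Wait()
}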
I am trying to get the hang of the Go profiler by following the example in the Go blog. I am not sure what I am doing wrong, but my profile output shows 0 samples, which is weird.
rahul#g3ck0:~/programs/go$ go tool pprof parallel cpuprofile
Welcome to pprof! For help, type 'help'.
(pprof) top5
Total: 0 samples
The following is my code:
package main

import (
    "fmt"
    "os"
    "os/exec"
    "runtime/pprof"
    "strings"
    "sync"
)

func exe_cmd(cmd string, wg *sync.WaitGroup) {
    out, err := exec.Command(cmd).Output()
    if err != nil {
        fmt.Println("error occurred")
        fmt.Printf("%s", err)
    }
    fmt.Printf("%s", out)
    wg.Done()
}

func main() {
    f, _ := os.Create("cpuprofile")
    pprof.StartCPUProfile(f)
    defer pprof.StopCPUProfile()

    cmd := "echo newline >> blah.txt"
    parts := strings.Fields(cmd)
    head := parts[0]
    parts = parts[1:len(parts)]

    out, err := exec.Command(head, parts...).Output()
    if err != nil {
        fmt.Println("error occurred")
        fmt.Printf("%s", err)
    }
    fmt.Printf("%s", out)
}
Your profiled program doesn't run long enough for the profiler to pick up any samples.
Basically, the profiler looks periodically at the state of your program (which code is being executed, which function that is, ...). If the program terminates before the first of those looks happens, then nothing is sampled and, thus, there are no samples to look at in the end.
This is what happens for you.
One solution is to set the profiler's sampling rate to a higher value; the other is to have your program actually do something that takes longer. For example:
f, _ := os.Create("cpuprofile")
pprof.StartCPUProfile(f)
defer pprof.StopCPUProfile()
for i := 0; i < 10; i++ {
    time.Sleep(1 * time.Second)
}
Alternatively, when trying to figure out what is wrong with an isolated portion of your code, you can write a benchmark and profile that benchmark.
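A hypothetical benchmark for the command-running part might look like the sketch below (the file and function names are made up). You would run it with go test -bench=. -cpuprofile=cpuprofile and then inspect the result with go tool pprof:
// exec_bench_test.go (hypothetical)
package main

import (
    "os/exec"
    "testing"
)

func BenchmarkEchoCommand(b *testing.B) {
    // The testing framework keeps running the loop until enough samples
    // have been collected, so the profile is no longer empty.
    for i := 0; i < b.N; i++ {
        if _, err := exec.Command("echo", "newline").Output(); err != nil {
            b.Fatal(err)
        }
    }
}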
I'm trying, just for fun, to connect a gzip Writer directly to a gzip Reader, so I could write to the Writer and read from the Reader on the fly. I expected to read exactly what I wrote. I'm using gzip, but I'd like to use this method also with crypto/aes; I suppose it should work very similarly, and it could be used with other readers/writers like jpeg, png...
This is my best attempt, which is not working, but I hope you can see what I mean: http://play.golang.org/p/7qdUi9wwG7
package main
import (
    "bytes"
    "compress/gzip"
    "fmt"
)

func main() {
    s := []byte("Hello world!")
    fmt.Printf("%s\n", s)

    var b bytes.Buffer
    gz := gzip.NewWriter(&b)
    ungz, err := gzip.NewReader(&b)
    fmt.Println("err: ", err)

    gz.Write(s)
    gz.Flush()

    uncomp := make([]byte, 100)
    n, err2 := ungz.Read(uncomp)
    fmt.Println("err2: ", err2)
    fmt.Println("n: ", n)

    uncomp = uncomp[:n]
    fmt.Printf("%s\n", uncomp)
}
It seems that gzip.NewReader(&b) tries to read immediately and an EOF is returned.
You'll need to do two things to make it work:
Use an io.Pipe to connect the reader and writer together - you can't read and write from the same buffer.
Run the reading and writing in separate goroutines. Because the first thing that gzip does is attempt to read the header, you'll get a deadlock unless you have another goroutine attempting to write it.
Here is what that looks like
Playground
func main() {
    s := []byte("Hello world!")
    fmt.Printf("%s\n", s)

    in, out := io.Pipe()
    gz := gzip.NewWriter(out)

    go func() {
        ungz, err := gzip.NewReader(in)
        fmt.Println("err: ", err)

        uncomp := make([]byte, 100)
        n, err2 := ungz.Read(uncomp)
        fmt.Println("err2: ", err2)
        fmt.Println("n: ", n)

        uncomp = uncomp[:n]
        fmt.Printf("%s\n", uncomp)
    }()

    gz.Write(s)
    gz.Flush()
}
Use a pipe. For example,
Package io
func Pipe
func Pipe() (*PipeReader, *PipeWriter)
Pipe creates a synchronous in-memory pipe. It can be used to connect
code expecting an io.Reader with code expecting an io.Writer. Reads on
one end are matched with writes on the other, copying data directly
between the two; there is no internal buffering. It is safe to call
Read and Write in parallel with each other or with Close. Close will
complete once pending I/O is done. Parallel calls to Read, and
parallel calls to Write, are also safe: the individual calls will be
gated sequentially.
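For instance, here is a sketch of the same gzip round trip wired through io.Pipe. It assumes the writer side runs in its own goroutine and closes both the gzip writer (to flush the trailer) and the pipe writer (to signal EOF), so the reader side can simply use ioutil.ReadAll:
package main

import (
    "compress/gzip"
    "fmt"
    "io"
    "io/ioutil"
    "log"
)

func main() {
    pr, pw := io.Pipe()

    // Writer side: compress into the pipe from its own goroutine, then close
    // the gzip writer (to flush the trailer) and the pipe writer (to signal EOF).
    go func() {
        gz := gzip.NewWriter(pw)
        gz.Write([]byte("Hello world!"))
        gz.Close()
        pw.Close()
    }()

    // Reader side: gzip.NewReader blocks until the header arrives through the pipe.
    ungz, err := gzip.NewReader(pr)
    if err != nil {
        log.Fatal(err)
    }
    plain, err := ioutil.ReadAll(ungz)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s\n", plain)
}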
given the following example:
// test.go
package main
import (
    "fmt"
    "os/exec"
)

func main() {
    cmd := exec.Command("login")
    in, _ := cmd.StdinPipe()
    in.Write([]byte("user"))

    out, err := cmd.CombinedOutput()
    if err != nil {
        fmt.Println("error:", err)
    }
    fmt.Printf("%s", out)
}
How can I detect that the process is not going to finish, because it is waiting for user input?
I'm trying to be able to run any script, but abort it if for some reason it tries to read from stdin.
Thanks!
Detecting that the process is not going to finish is a difficult problem. In fact, it is one of the classic "unsolvable" problems in Computer Science: the Halting Problem.
In general, when you are calling exec.Command and will not be passing it any input, it will cause the program to read from your OS's null device (see documentation in the exec.Cmd fields). In your code (and mine below), you explicitly create a pipe (though you should check the error return of StdinPipe in case it is not created correctly), so you should subsequently call in.Close(). In either case, the subprocess will get an EOF and should clean up after itself and exit.
To help with processes that don't handle input correctly or otherwise get themselves stuck, the general solution is to use a timeout. In Go, you can use goroutines for this:
package main

import (
    "bytes"
    "fmt"
    "log"
    "os/exec"
    "time"
)

// Set your timeout
const CommandTimeout = 5 * time.Second

func main() {
    cmd := exec.Command("login")

    // Set up the input
    in, err := cmd.StdinPipe()
    if err != nil {
        log.Fatalf("failed to create pipe for STDIN: %s", err)
    }

    // Write the input and close
    go func() {
        defer in.Close()
        fmt.Fprintln(in, "user")
    }()

    // Capture the output
    var b bytes.Buffer
    cmd.Stdout, cmd.Stderr = &b, &b

    // Start the process
    if err := cmd.Start(); err != nil {
        log.Fatalf("failed to start command: %s", err)
    }

    // Kill the process if it doesn't exit in time
    defer time.AfterFunc(CommandTimeout, func() {
        log.Printf("command timed out")
        cmd.Process.Kill()
    }).Stop()

    // Wait for the process to finish
    if err := cmd.Wait(); err != nil {
        log.Fatalf("command failed: %s", err)
    }

    // Print out the output
    fmt.Printf("Output:\n%s", b.String())
}
In the code above, there are actually three main goroutines of interest: the main goroutine spawns the subprocess and waits for it to exit; a timer goroutine runs in the background to kill the process if it isn't Stopped in time; and a goroutine writes the input to the program when it is ready to read it.
Although this would not allow you to "detect" the program trying to read from stdin, I would just close stdin. This way, the child process will just receive an EOF when it tried to read. Most programs know how to handle a closed stdin.
// All error handling excluded
cmd := exec.Command("login")
in, _ := cmd.StdinPipe()
cmd.Start()
in.Close()
cmd.Wait()
Unfortunately, this means you can't use CombinedOutput; the following code should allow you to do the same thing. It requires you to import the bytes package.
var buf = new(bytes.Buffer)
cmd.Stdout = buf
cmd.Stderr = buf
After cmd.Wait(), you can then do:
out := buf.Bytes()
I think the solution is to run the child process with a closed stdin - by adjusting Cmd.Stdin appropriately and then Running it, instead of using CombinedOutput().
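That might look roughly like the sketch below. Here ./test.sh is just a placeholder for whatever script you run, and an empty strings.Reader stands in for an already-closed stdin; leaving cmd.Stdin as nil (the null device) would behave the same way:
package main

import (
    "bytes"
    "fmt"
    "log"
    "os/exec"
    "strings"
)

func main() {
    cmd := exec.Command("./test.sh")

    // The child sees EOF on its first read from stdin instead of blocking.
    cmd.Stdin = strings.NewReader("")

    var buf bytes.Buffer
    cmd.Stdout = &buf
    cmd.Stderr = &buf

    if err := cmd.Run(); err != nil {
        log.Fatal("error: ", err)
    }
    fmt.Printf("%s", buf.String())
}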
Finally, I'm going to implement a combination of Kyle Lemons' answer and forcing the new process to have its own session without a terminal attached to it, so that the executed command will be aware that there is no terminal to read from.
// test.go
package main
import (
    "log"
    "os/exec"
    "syscall"
)

func main() {
    cmd := exec.Command("./test.sh")
    cmd.SysProcAttr = &syscall.SysProcAttr{Setsid: true}

    out, err := cmd.CombinedOutput()
    if err != nil {
        log.Fatal("error:", err)
    }
    log.Printf("%s", out)
}