I have to process the long output of a script and find some data in it. This data will most likely be located near the very beginning of the output. Once the data is found I no longer need to process the output and can quit.
The issue is that I cannot stop processing the output, because exec.Cmd does not have any function to close the opened command.
Here is some simplified code (error handling omitted):
func processOutput(r *bufio.Reader) {
    for {
        line, _, err := r.ReadLine()
        if some_condition_meet {
            break
        }
        ......
    }
    return
}
func execExternalCommand() {
    cmdToExecute := "......"
    cmd := exec.Command("bash", "-c", cmdToExecute)
    output, _ := cmd.StdoutPipe()
    cmd.Start()
    r := bufio.NewReader(output)
    go processOutput(r)
    cmd.Wait()
    return
}
What should I do at the end of the processOutput function to stop cmd? Or maybe there is another way to solve this.
Thanks
As it stands, you can't do this from processOutput because all it receives is a bufio.Reader. You would need to pass the exec.Cmd to it for it to do anything with the forked process.
To kill the forked process, you can send it a signal, e.g.: cmd.Process.Kill() or cmd.Process.Signal(os.Interrupt). See the documentation on exec.Cmd and os.Process.
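A minimal sketch of that approach (the "needle" marker and the yes command below are just stand-ins for your real condition and script):

package main

import (
    "bufio"
    "bytes"
    "fmt"
    "os/exec"
)

// processOutput receives the *exec.Cmd so it can kill the process as soon as
// the interesting line shows up.
func processOutput(cmd *exec.Cmd, r *bufio.Reader) {
    for {
        line, _, err := r.ReadLine()
        if err != nil {
            return // pipe closed (command exited) or read error
        }
        if bytes.Contains(line, []byte("needle")) { // your real condition goes here
            fmt.Printf("found: %s\n", line)
            cmd.Process.Kill() // stop the command early
            return
        }
    }
}

func main() {
    // "yes needle" prints forever and stands in for the long-running script.
    cmd := exec.Command("bash", "-c", "yes needle")
    output, _ := cmd.StdoutPipe()
    cmd.Start()
    go processOutput(cmd, bufio.NewReader(output))
    cmd.Wait() // returns with an error like "signal: killed" once the process is killed
}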
You could probably use the "context" package, for example:
package main

import (
    "bufio"
    "context"
    "os/exec"
)
func processOutput(r *bufio.Reader, cancel context.CancelFunc) {
    for {
        line, _, err := r.ReadLine()
        if some_condition_meet {
            break
        }
    }
    cancel()
    return
}
func main() {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()
    cmdToExecute := "......"
    cmd := exec.CommandContext(ctx, "bash", "-c", cmdToExecute)
    output, _ := cmd.StdoutPipe()
    cmd.Start()
    r := bufio.NewReader(output)
    go processOutput(r, cancel)
    cmd.Wait()
    return
}
In case you need to end with a timeout, this could work (the example is taken from https://golang.org/src/os/exec/example_test.go):
func ExampleCommandContext() {
    ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
    defer cancel()
    if err := exec.CommandContext(ctx, "sleep", "5").Run(); err != nil {
        // This will fail after 100 milliseconds. The 5 second sleep
        // will be interrupted.
    }
}
A basic example just using sleep but closing it after 1 second: https://play.golang.org/p/gIXKuf5Oga
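In case the playground link goes away, a rough sketch along the same lines (not the exact playground code) could look like this:

package main

import (
    "context"
    "fmt"
    "os/exec"
    "time"
)

func main() {
    ctx, cancel := context.WithCancel(context.Background())

    // Cancel the context after one second; CommandContext then kills sleep.
    go func() {
        time.Sleep(1 * time.Second)
        cancel()
    }()

    err := exec.CommandContext(ctx, "sleep", "5").Run()
    fmt.Println(err) // prints something like "signal: killed" after about a second
}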
In Go, I would like to execute a binary from within my application and continually read what the command prints to stdout. However, the one caveat is that the binary is programmed to execute its task infinitely until it reads the enter key, and I don't have access to the binary's source code. If I execute the binary directly from a terminal, it behaves correctly. However, if I execute the binary from within my application, it somehow thinks that it has read the enter key, and closes almost immediately. Here is a code snippet demonstrating how I'm trying to execute the binary, pipe its stdout, and print it to the screen:
func main() {
    // The binary that I want to execute.
    cmd := exec.Command("/usr/lib/demoApp")

    // Pipe the command's output.
    stdout, err := cmd.StdoutPipe()
    if err != nil {
        fmt.Println(err)
    }
    stdoutReader := bufio.NewReader(stdout)

    // Start the command.
    err = cmd.Start()
    if err != nil {
        fmt.Println(err)
    }

    // Read and print the command's output.
    buff := make([]byte, 1024)
    var n int
    for err == nil {
        n, err = stdoutReader.Read(buff)
        if n > 0 {
            fmt.Printf(string(buff[0:n]))
        }
    }
    _ = cmd.Wait()
}
Any ideas if what I'm trying to accomplish is possible?
As @mgagnon mentioned, your problem might lie somewhere else; perhaps the external dependency just bails out because it is not running in a terminal. Using the following to simulate demoApp:
func main() {
    fmt.Println("Press enter to exit")

    // Every second, report fake progress
    go func() {
        for {
            fmt.Print("Doing stuff...\n")
            time.Sleep(time.Second)
        }
    }()

    for {
        // Read single character and if enter, exit.
        consoleReader := bufio.NewReaderSize(os.Stdin, 1)
        input, _ := consoleReader.ReadByte()
        // Enter = 10 | 13 (LF or CR)
        if input == 10 || input == 13 {
            fmt.Println("Exiting...")
            os.Exit(0)
        }
    }
}
... this works fine for me:
func main() {
    cmd := exec.Command("demoApp.exe")
    stdout, err := cmd.StdoutPipe()
    if err != nil {
        panic(err)
    }
    stdin, err := cmd.StdinPipe()
    if err != nil {
        log.Fatal(err)
    }

    go func() {
        defer stdin.Close()
        // After 3 seconds of running, send newline to cause program to exit.
        time.Sleep(time.Second * 3)
        io.WriteString(stdin, "\n")
    }()

    cmd.Start()

    // Scan and print command's stdout
    scanner := bufio.NewScanner(stdout)
    for scanner.Scan() {
        fmt.Println(scanner.Text())
    }

    // Wait for program to exit.
    cmd.Wait()
}
$ go run main.go
Press enter to exit
Doing stuff...
Doing stuff...
Doing stuff...
Exiting...
The only difference between this and your code is that I'm using stdin to send a newline after 3 seconds to terminate the cmd. Also using scanner for brevity.
Using this as my /usr/lib/demoApp:
package main

import (
    "fmt"
    "time"
)

func main() {
    for {
        fmt.Print("North East South West")
        time.Sleep(time.Second)
    }
}
This program works as expected:
package main

import (
    "os"
    "os/exec"
)

func main() {
    cmd := exec.Command("demoApp")
    stdout, err := cmd.StdoutPipe()
    if err != nil {
        panic(err)
    }
    cmd.Start()
    defer cmd.Wait()
    for {
        var b [1024]byte
        n, err := stdout.Read(b[:])
        if n > 0 {
            os.Stdout.Write(b[:n]) // only write the bytes actually read
        }
        if err != nil {
            return // pipe closed when the command exits
        }
    }
}
I would like to manage a process in Go with the package os/exec. I would like to start it and be able to read the output and write several times to the input.
The process I launch in the code below, menu.py, is just a Python script that echoes whatever it reads on its input.
func ReadOutput(rc io.ReadCloser) (string, error) {
    x, err := ioutil.ReadAll(rc)
    s := string(x)
    return s, err
}

func main() {
    cmd := exec.Command("python", "menu.py")
    stdout, err := cmd.StdoutPipe()
    Check(err)
    stdin, err := cmd.StdinPipe()
    Check(err)
    err = cmd.Start()
    Check(err)

    go func() {
        defer stdin.Close() // If I don't close the stdin pipe, the python code will never take what I write in it
        io.WriteString(stdin, "blub")
    }()

    s, err := ReadOutput(stdout)
    if err != nil {
        Log("Process is finished ..")
    }
    Log(s)

    // STDIN IS CLOSED, I CAN'T RETRY !
}
And the simple code of menu.py :
while 1 == 1:
    name = raw_input("")
    print "Hello, %s. \n" % name
The Go code works, but if I don't close the stdin pipe after I write to it, the Python code never takes what is in it. That is okay if I only want to send one thing to the input at a time, but what if I want to send something again a few seconds later? The pipe is closed! What should I do? The question could be "How do I flush a pipe from the WriteCloser interface?", I suppose.
I think the primary problem here is that the python process doesn't work the way you might expect. Here's a bash script echo.sh that does the same thing:
#!/bin/bash
while read INPUT
do echo "Hello, $INPUT."
done
Calling this script from a modified version of your code doesn't have the same issue with needing to close stdin:
func ReadOutput(output chan string, rc io.ReadCloser) {
    r := bufio.NewReader(rc)
    for {
        x, _ := r.ReadString('\n')
        output <- string(x)
    }
}

func main() {
    cmd := exec.Command("bash", "echo.sh")
    stdout, err := cmd.StdoutPipe()
    Check(err)
    stdin, err := cmd.StdinPipe()
    Check(err)
    err = cmd.Start()
    Check(err)

    go func() {
        io.WriteString(stdin, "blab\n")
        io.WriteString(stdin, "blob\n")
        io.WriteString(stdin, "booo\n")
    }()

    output := make(chan string)
    defer close(output)
    go ReadOutput(output, stdout)

    for o := range output {
        Log(o)
    }
}
The Go code needed a few minor changes: the ReadOutput function had to be modified so that it does not block, since ioutil.ReadAll would have waited for an EOF before returning.
Digging a little deeper, it looks like the real problem is the behaviour of raw_input - it doesn't flush stdout as expected. You can pass the -u flag to python to get the desired behaviour:
cmd := exec.Command("python", "-u", "menu.py")
or update your python code to use sys.stdin.readline() instead of raw_input() (see this related bug report: https://bugs.python.org/issue526382).
Even though there is a problem with your Python script, the main problem is the Go pipe. A trick to solve this is to use two pipes, as follows:
// parentprocess.go
package main

import (
    "bufio"
    "io"
    "log"
    "os/exec"
)

func request(r *bufio.Reader, w io.Writer, str string) string {
    w.Write([]byte(str))
    w.Write([]byte("\n"))
    str, err := r.ReadString('\n')
    if err != nil {
        panic(err)
    }
    return str[:len(str)-1]
}

func main() {
    cmd := exec.Command("bash", "menu.sh")

    inr, inw := io.Pipe()
    outr, outw := io.Pipe()
    cmd.Stdin = inr
    cmd.Stdout = outw

    if err := cmd.Start(); err != nil {
        panic(err)
    }
    go cmd.Wait()

    reader := bufio.NewReader(outr)
    log.Printf(request(reader, inw, "Tom"))
    log.Printf(request(reader, inw, "Rose"))
}
The subprocess code has the same logic as your Python code, as follows:
#!/usr/bin/env bash
# menu.sh
while true; do
    read -r name
    echo "Hello, $name."
done
If you want to use your Python code, you should make some changes:
import sys

while 1 == 1:
    try:
        name = raw_input("")
        print "Hello, %s. \n" % name
        sys.stdout.flush()  # there's a stdout buffer
    except:
        pass  # make sure this process won't die when it comes across 'EOF'
// StdinPipe returns a pipe that will be connected to the command's
// standard input when the command starts.
// The pipe will be closed automatically after Wait sees the command exit.
// A caller need only call Close to force the pipe to close sooner.
// For example, if the command being run will not exit until standard input
// is closed, the caller must close the pipe.
func (c *Cmd) StdinPipe() (io.WriteCloser, error) {}
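To make the last sentence of that quote concrete, here is a small self-contained sketch (my own illustration, not from the original answer) using cat, which exits only when its stdin is closed:

package main

import (
    "fmt"
    "io"
    "log"
    "os/exec"
)

func main() {
    cmd := exec.Command("cat")
    stdin, err := cmd.StdinPipe()
    if err != nil {
        log.Fatal(err)
    }

    go func() {
        defer stdin.Close() // this close is what lets cat, and therefore CombinedOutput, finish
        io.WriteString(stdin, "hello from the parent process\n")
    }()

    out, err := cmd.CombinedOutput()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s", out)
}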
I want to write a mime/multipart message in Python to standard output and read that message in Golang using the mime/multipart package. This is just a learning exercise.
I tried simulating this example.
output.py
#!/usr/bin/env python2.7
import sys
s = "--foo\r\nFoo: one\r\n\r\nA section\r\n" +"--foo\r\nFoo: two\r\n\r\nAnd another\r\n" +"--foo--\r\n"
print s
main.go
package main

import (
    "fmt"
    "io"
    "io/ioutil"
    "log"
    "mime/multipart"
    "os/exec"
    "sync"
)

var wg sync.WaitGroup

func main() {
    pr, pw := io.Pipe()
    defer pw.Close()

    cmd := exec.Command("python", "output.py")
    cmd.Stdout = pw

    mr := multipart.NewReader(pr, "foo")

    wg.Add(1)
    go func() {
        defer wg.Done()
        for {
            p, err := mr.NextPart()
            if err == io.EOF {
                fmt.Println("EOF")
                return
            }
            if err != nil {
                log.Fatal(err)
            }
            slurp, err := ioutil.ReadAll(p)
            if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("Part : %q\n", slurp)
            return
        }
    }()

    if err := cmd.Start(); err != nil {
        log.Fatal(err)
    }
    cmd.Wait()
    wg.Wait()
}
Output of go run main.go:
fatal error: all goroutines are asleep - deadlock!
Other answers regarding this topic on StackOverflow are related to channels not being closed, but I am not even using a channel. I understand that somewhere there is an infinite loop or something similar, but I don't see it.
Try something like this (explanation below):
package main

import (
    "fmt"
    "io"
    "io/ioutil"
    "log"
    "mime/multipart"
    "os"
    "os/exec"
    "sync"

    "github.com/pkg/errors"
)

func readCommand(cmdStdout io.ReadCloser, wg *sync.WaitGroup, resc chan<- []byte, errc chan<- error) {
    defer wg.Done()
    defer close(errc)
    defer close(resc)

    mr := multipart.NewReader(cmdStdout, "foo")

    for {
        part, err := mr.NextPart()
        if err != nil {
            if err == io.EOF {
                fmt.Println("EOF")
            } else {
                errc <- errors.Wrap(err, "failed to get next part")
            }
            return
        }

        slurp, err := ioutil.ReadAll(part)
        if err != nil {
            errc <- errors.Wrap(err, "failed to read part")
            return
        }

        resc <- slurp
    }
}

func main() {
    cmd := exec.Command("python", "output.py")
    cmd.Stderr = os.Stderr

    pr, err := cmd.StdoutPipe()
    if err != nil {
        log.Fatal(err)
    }

    var wg sync.WaitGroup
    wg.Add(1)

    resc := make(chan []byte)
    errc := make(chan error)

    go readCommand(pr, &wg, resc, errc)

    if err := cmd.Start(); err != nil {
        log.Fatal(err)
    }

    for {
        select {
        case err, ok := <-errc:
            if !ok {
                errc = nil
                break
            }
            if err != nil {
                log.Fatal(errors.Wrap(err, "error from goroutine"))
            }
        case res, ok := <-resc:
            if !ok {
                resc = nil
                break
            }
            fmt.Printf("Part from goroutine: %q\n", res)
        }

        if errc == nil && resc == nil {
            break
        }
    }

    cmd.Wait()
    wg.Wait()
}
In no particular order:
Rather than using an io.Pipe() as the command's Stdout, just ask the command for its StdoutPipe(). cmd.Wait() will ensure it's closed for you.
Set cmd.Stderr to os.Stderr so that you can see errors generated by your Python program.
I noticed this program was hanging anytime the Python program wrote to standard error. Now it doesn't :)
Don't make the WaitGroup a global variable; pass a reference to it to the goroutine.
Rather than log.Fatal()ing inside the goroutine, create an error channel to communicate errors back to main().
Rather than printing results inside the goroutine, create a result channel to communicate results back to main().
Ensure channels are closed to prevent blocking/goroutine leaks.
Separate out the goroutine into a proper function to make the code easier to read and follow.
In this example, we can create the multipart.Reader() inside our goroutine, since this is the only part of our code that uses it.
Note that I am using Wrap() from the errors package to add context to the error messages. This is, of course, not relevant to your question, but is a good habit.
The for { select { ... } } part may be confusing. This is one article I found introducing the concept. Basically, select is letting us read from whichever of these two channels (resc and errc) are currently readable, and then setting each to nil when the channel is closed. When both channels are nil, the loop exits. This lets us handle "either a result or an error" as they come in.
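If the pattern is still hard to follow, here is a tiny standalone sketch of just the nil-channel select, detached from exec and multipart (the channel names mirror the code above):

package main

import "fmt"

func main() {
    resc := make(chan []byte)
    errc := make(chan error)

    // A producer that sends two results, then closes both channels.
    go func() {
        resc <- []byte("first part")
        resc <- []byte("second part")
        close(resc)
        close(errc) // no errors in this toy example
    }()

    // Each closed channel is set to nil so select stops picking it;
    // the loop ends once both are nil.
    for resc != nil || errc != nil {
        select {
        case res, ok := <-resc:
            if !ok {
                resc = nil
                continue
            }
            fmt.Printf("result: %s\n", res)
        case err, ok := <-errc:
            if !ok {
                errc = nil
                continue
            }
            fmt.Println("error:", err)
        }
    }
    fmt.Println("both channels drained")
}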
Edit: As johandalabacka said on the Golang Forum, it looks like the main issue here was that Python on Windows was adding an extra \r to the output. Your Python program should omit the \r in the output string, or use sys.stdout.write() instead of print(). The output could also be cleaned up on the Golang side, but, aside from the fact that it cannot be parsed properly without modifying the Python side, this answer will still improve the concurrency mechanics of your program.
I'm trying to execute a shell command and compress it's output.
The problem is that I then need to interface with an API that expects a Reader.
For that I tried with the following (simplified code):
package main

import (
    "compress/gzip"
    "encoding/hex"
    "fmt"
    "io"
    "io/ioutil"
    "os/exec"
    "testing"
)

func TestPipe(t *testing.T) {
    cmd := exec.Command("echo", "hello_from_echo")
    reader, writer := io.Pipe()

    gzW := gzip.NewWriter(writer)
    cmd.Stdout = gzW

    cmd.Start()
    go func() {
        fmt.Println("Waiting")
        cmd.Wait()
        fmt.Println("wait done")
        // writer.Close()
        // gzW.Close()
    }()

    msg, _ := ioutil.ReadAll(reader)
    fmt.Println(hex.EncodeToString(msg))
}
The problem is that ReadAll hangs forever. If I close gzW nothing really changes. However, if I close the writer variable, the program now finishes without hanging, but the output is:
$ go test -run Pipe
Waiting
wait done
1f8b080000096e8800ff
PASS
However, no matter what I echo, the output is the same. If I try it from the command line like this: echo "hello_from_echo" | gzip | hexdump, the output is totally different, so there's something wrong with that approach.
Any clue what could be the problem?
Thanks in advance
You're closing the gzip writer and pipe writer in the wrong order. You need to close the gzip.Writer to flush any buffers and write the gzip footer, then you can close the PipeWriter to unblock the ReadAll. Also adding the WaitGroup ensures that you're not blocked on any of the close calls.
cmd := exec.Command("echo", "hello_from_echo and more")
pr, pw := io.Pipe()
gzW := gzip.NewWriter(pw)
cmd.Stdout = gzW
cmd.Start()
var wg sync.WaitGroup
wg.Add(1)
go func() {
defer wg.Done()
err := cmd.Wait()
if err != nil {
log.Println(err)
}
gzW.Close()
pw.Close()
}()
buf, err := ioutil.ReadAll(pr)
if err != nil {
t.Fatal(err)
}
wg.Wait()
fmt.Println(hex.EncodeToString(buf))
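As an optional sanity check (my addition, not part of the original answer), you can feed buf back through gzip.NewReader inside the same test function to confirm the round trip; this assumes the bytes package is also imported:

// Decompress buf and make sure it matches what echo printed.
zr, err := gzip.NewReader(bytes.NewReader(buf))
if err != nil {
    t.Fatal(err)
}
plain, err := ioutil.ReadAll(zr)
if err != nil {
    t.Fatal(err)
}
fmt.Printf("decompressed: %q\n", plain) // "hello_from_echo and more\n"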
The question Streaming commands output progress addresses the problem of printing the progress of a long-running command.
I tried to put the printing code in a goroutine, but the scanner claims to have already hit EOF immediately and the for block is never executed.
The bufio code that gets executed on the first call to the Scan() method is:
// We cannot generate a token with what we are holding.
// If we've already hit EOF or an I/O error, we are done.
if s.err != nil {
    // Shut it down.
    s.start = 0
    s.end = 0
    return false
}
And if I print s.err the output is EOF.
The code I'm trying to run is:
cmd := exec.Command("some", "command")
c := make(chan int, 1)
go func(cmd *exec.Cmd, c chan int) {
stdout, _ := cmd.StdoutPipe()
<-c
scanner := bufio.NewScanner(stdout)
for scanner.Scan() {
m := scanner.Text()
fmt.Println(m)
}
}(cmd, c)
cmd.Start()
c <- 1
cmd.Wait()
The idea is to start the goroutine, get hold of cmd's stdout, wait until the cmd has started, and then start processing its output.
The result is that the long command gets executed and the program waits for its completion, but nothing is printed to the terminal.
Any idea why by the time scanner.Scan() is invoked for the first time the stdout has already reached EOF?
There are some problems:
The pipe is being closed before reading all data.
Always check for errors
Call cmd.Start() after c <- struct{}{}, and use an unbuffered channel: c := make(chan struct{})
Two working sample codes:
1: Wait using a channel, then let Wait close the pipe after EOF by signaling with defer func() { c <- struct{}{} }(), like this working sample code:
package main

import (
    "bufio"
    "fmt"
    "os/exec"
)

func main() {
    cmd := exec.Command("Streamer")
    c := make(chan struct{})
    go run(cmd, c)
    c <- struct{}{}
    cmd.Start()
    <-c
    if err := cmd.Wait(); err != nil {
        fmt.Println(err)
    }
    fmt.Println("done.")
}

func run(cmd *exec.Cmd, c chan struct{}) {
    defer func() { c <- struct{}{} }()
    stdout, err := cmd.StdoutPipe()
    if err != nil {
        panic(err)
    }
    <-c
    scanner := bufio.NewScanner(stdout)
    for scanner.Scan() {
        m := scanner.Text()
        fmt.Println(m)
    }
    fmt.Println("EOF")
}
2: Also you may Wait using sync.WaitGroup, like this working sample code:
package main

import (
    "bufio"
    "fmt"
    "os/exec"
    "sync"
)

var wg sync.WaitGroup

func main() {
    cmd := exec.Command("Streamer")
    c := make(chan struct{})
    wg.Add(1)
    go func(cmd *exec.Cmd, c chan struct{}) {
        defer wg.Done()
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            panic(err)
        }
        <-c
        scanner := bufio.NewScanner(stdout)
        for scanner.Scan() {
            m := scanner.Text()
            fmt.Println(m)
        }
    }(cmd, c)
    c <- struct{}{}
    cmd.Start()
    wg.Wait()
    fmt.Println("done.")
}
And Streamer sample code (just for testing):
package main

import "fmt"
import "time"

func main() {
    for i := 0; i < 10; i++ {
        time.Sleep(1 * time.Second)
        fmt.Println(i, ":", time.Now().UTC())
    }
}
And see the docs for func (c *Cmd) StdoutPipe() (io.ReadCloser, error):
StdoutPipe returns a pipe that will be connected to the command's
standard output when the command starts.
Wait will close the pipe after seeing the command exit, so most
callers need not close the pipe themselves; however, an implication is
that it is incorrect to call Wait before all reads from the pipe have
completed. For the same reason, it is incorrect to call Run when using
StdoutPipe. See the example for idiomatic usage.
From godocs:
StdoutPipe returns a pipe that will be connected to the command's
standard output when the command starts.
Wait will close the pipe after seeing the command exit, so most
callers need not close the pipe themselves; however, an implication is
that it is incorrect to call Wait before all reads from the pipe have
completed.
You are calling Wait() immediately after starting the command. So the pipe gets closed as soon as the command completes, before making sure you have read all the data from the pipe. Try moving Wait() into your goroutine, after the scan loop.
go func(cmd *exec.Cmd, c chan int) {
    stdout, _ := cmd.StdoutPipe()
    <-c
    scanner := bufio.NewScanner(stdout)
    for scanner.Scan() {
        m := scanner.Text()
        fmt.Println(m)
    }
    cmd.Wait()
    c <- 1
}(cmd, c)

cmd.Start()
c <- 1

// This is here so we don't exit the program early.
<-c
There's also a simpler way to do things, which is to just assign os.Stdout as the cmd's Stdout, causing the command to write directly to os.Stdout:
cmd := exec.Command("some", "command")
cmd.Stdout = os.Stdout
cmd.Run()