I'm trying to check if an exec.Cmd is running in these scenarios:
Before I actually started the command
After the command has started, but before it finished
After the command has finished
This will allow me to kill this command if it is running so that I can start it again with different parameters.
A simple use case below:
c := exec.Command("omxplayer", "video.mp4")
s, _ := c.StdinPipe() // use the pipe to send the "q" string to quit omxplayer

log.Printf("Running (false): %v\n", checkRunning(c)) // prints false: command has not started yet

c.Start()
log.Printf("Running (true): %v\n", checkRunning(c)) // prints true: command has started

time.AfterFunc(3*time.Second, func() {
    log.Println("about to quit process...")
    log.Printf("Running (true): %v\n", checkRunning(c)) // prints true: command still running at this point
    s.Write([]byte("q"))
})

log.Println("waiting for command to end")
log.Printf("Running (true): %v\n", checkRunning(c)) // prints true: command still running at this point

c.Wait()
log.Println("command should have ended by now")
log.Printf("Running (false): %v\n", checkRunning(c)) // prints false: command ended at this point
Here's the best that I could come up with:
func checkRunning(cmd *exec.Cmd) bool {
    if cmd == nil || cmd.ProcessState != nil && cmd.ProcessState.Exited() || cmd.Process == nil {
        return false
    }
    return true
}
It works for the use case above, but it seems overly complicated and I'm not sure how reliable it is.
Is there a better way?
Maybe run synchronously in a goroutine and put the result on a channel you can select on?
c := exec.Command("omxplayer", "video.mp4")
// setup pipes and such
ch := make(chan error)
go func() {
    ch <- c.Run()
}()

select {
case err := <-ch:
    // done! check error
    log.Println("command finished:", err)
case <-time.After(10 * time.Second):
    // timeouts, ticks or anything else you can select on
}
A slightly different approach to captncraig's answer that worked for me:
c := exec.Command("omxplayer", "video.mp4")
err := c.Start() // starts the specified command but does not wait for it to complete

// wait for the program to end in a goroutine
go func() {
    err := c.Wait()
    // logic to run once process finished. Send err in channel if necessary
}()
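For the original checkRunning use case, here is a minimal, self-contained sketch of this idea: instead of inspecting the Cmd's fields, close a channel when Wait returns and ask the channel whether the process is still alive. The done channel and isRunning helper are my own names, not part of the answer above, and sleep just stands in for omxplayer.

package main

import (
    "log"
    "os/exec"
    "time"
)

func main() {
    c := exec.Command("sleep", "2") // stand-in for omxplayer

    if err := c.Start(); err != nil {
        log.Fatal(err)
    }

    done := make(chan struct{})
    go func() {
        _ = c.Wait() // reap the process
        close(done)  // signal "no longer running"
    }()

    isRunning := func() bool {
        select {
        case <-done:
            return false
        default:
            return true
        }
    }

    log.Println("running:", isRunning()) // true while sleep is alive
    time.Sleep(3 * time.Second)
    log.Println("running:", isRunning()) // false once Wait has returned
}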
Related
Can anyone help?
I have an application that I am running via exec.CommandContext (so I can cancel it via ctx). It would normally not stop unless it errors out.
I currently have it relaying its output to os.Stdout, which is working great. But I also want to get each line via a channel; the idea behind this is that I will look for a regular expression on each line, and if it matches I will set an internal state of "ERROR", for example.
I can't get it to work, though. I tried bufio.NewScanner; here is my code.
As I say, it does output to os.Stdout, which is great, but I would also like to receive each line, as it happens, on the channel I set up.
Any ideas?
Thanks in advance.
func (d *Daemon) Start() {
    ctx, cancel := context.WithCancel(context.Background())
    d.cancel = cancel
    go func() {
        args := "-x f -a 1"
        cmd := exec.CommandContext(ctx, "mydaemon", strings.Split(args, " ")...)
        var stdoutBuf, stderrBuf bytes.Buffer
        cmd.Stdout = io.MultiWriter(os.Stdout, &stdoutBuf)
        cmd.Stderr = io.MultiWriter(os.Stderr, &stderrBuf)
        lines := make(chan string)
        go func() {
            scanner := bufio.NewScanner(os.Stdin)
            for scanner.Scan() {
                fmt.Println("I am reading a line!")
                lines <- scanner.Text()
            }
        }()
        err := cmd.Start()
        if err != nil {
            log.Fatal(err)
        }
        select {
        case outputx := <-lines:
            // I will do something with this!
            fmt.Println("Hello!!", outputx)
        case <-ctx.Done():
            log.Println("I am done!, probably cancelled!")
        }
    }()
}
I also tried using this:
go func() {
    scanner := bufio.NewScanner(&stdoutBuf)
    for scanner.Scan() {
        fmt.Println("I am reading a line!")
        lines <- scanner.Text()
    }
}()
Even with that, the "I am reading a line" never gets printed. I also debugged it, and it never enters the for scanner.Scan() loop.
I also tried scanning &stderrBuf; same thing, nothing comes through.
cmd.Start() does not wait for the command to finish. Also, cmd.Wait() needs to be called to be informed about the end of the process.
reader, writer := io.Pipe()

cmdCtx, cmdDone := context.WithCancel(context.Background())

scannerStopped := make(chan struct{})
go func() {
    defer close(scannerStopped)

    scanner := bufio.NewScanner(reader)
    for scanner.Scan() {
        fmt.Println(scanner.Text())
    }
}()

cmd := exec.Command("ls")
cmd.Stdout = writer
_ = cmd.Start()
go func() {
    _ = cmd.Wait()
    cmdDone()
    writer.Close()
}()
<-cmdCtx.Done()
<-scannerStopped
scannerStopped is added to demonstrate that the scanner goroutine stops now.
reader, writer := io.Pipe()

scannerStopped := make(chan struct{})
go func() {
    defer close(scannerStopped)

    scanner := bufio.NewScanner(reader)
    for scanner.Scan() {
        fmt.Println(scanner.Text())
    }
}()

cmd := exec.Command("ls")
cmd.Stdout = writer
_ = cmd.Run() // Run already waits for the command to finish, so no extra Wait is needed
writer.Close()
<-scannerStopped
And handle the lines as needed.
Note: I wrote this in a bit of a hurry. Let me know if anything is unclear or not correct.
For a correct program using concurrency and goroutines, we should try to show there are no data races, the program can't deadlock, and goroutines don't leak. Let's try to achieve this.
Full code
Playground: https://play.golang.org/p/Xv1hJXYQoZq. I recommend copying and running locally, because the playground doesn't stream output afaik and it has timeouts.
Note that I've changed the test command to % find /usr/local, a typically long-running command (>3 seconds) with plenty of output lines, since it is better suited for the scenarios we should test.
Walkthrough
Let's look at the Daemon.Start method. At the start, it is mostly the same. Most noticeably, though, the new code doesn't have a goroutine around a large part of the method. Even without this, the Daemon.Start method remains non-blocking and will return "immediately".
The first noteworthy fix is these updated lines.
outR, outW := io.Pipe()
cmd.Stdout = io.MultiWriter(outW, os.Stdout)
Instead of constructing a bytes.Buffer variable, we call io.Pipe. If we didn't make this change and stuck with a bytes.Buffer, then scanner.Scan() will return false as soon as there is no more data to read. This can happen if the command writes to stdout only occasionally (even a millisecond apart, for this matter). After scanner.Scan() returns false, the goroutine exits and we miss processing future output.
Instead, by using the read end of io.Pipe, scanner.Scan() will wait for input from the pipe's read end until the pipe's write end is closed.
This fixes the race issue between the scanner and the command output.
Next, we construct two closely-related goroutines: the first to consume from <-lines, and the second to produce into lines<-.
go func() {
    for line := range lines {
        fmt.Println("output line from channel:", line)
        ...
    }
}()

go func() {
    defer close(lines)

    scanner := bufio.NewScanner(outR)
    for scanner.Scan() {
        lines <- scanner.Text()
    }
    ...
}()
The consumer goroutine will exit when the lines channel is closed, as the closing of the channel would naturally cause the range loop to terminate; the producer goroutine closes lines upon exit.
The producer goroutine will exit when scanner.Scan() returns false, which happens when the write end of the io.Pipe is closed. This closing happens in upcoming code.
Note from the two paragraphs above that the two goroutines are guaranteed to exit (i.e. will not leak).
Next, we start the command. Standard stuff, it's a non-blocking call, and it returns immediately.
// Start the command.
if err := cmd.Start(); err != nil {
    log.Fatal(err)
}
Moving on to the final piece of code in Daemon.Start. This goroutine waits for the command to exit via cmd.Wait(). Handling this is important because the command may exit for reasons other than Context cancellation.
Particularly, we want to close the write end of the io.Pipe (which, in turn, closes the output lines producer goroutine as mentioned earlier).
go func() {
    err := cmd.Wait()
    fmt.Println("command exited; error is:", err)
    outW.Close()
    ...
}()
As a side note, by waiting on cmd.Wait(), we don't have to separately wait on ctx.Done(). Waiting on cmd.Wait() handles both exits caused by natural reasons (command successfully finished, command ran into internal error etc.) and exits caused by Context-cancelation.
This goroutine, too, is guaranteed to exit. It will exit when cmd.Wait() returns. This can happen either because the command exited normally with success; exited with failure due to a command error; or exited with failure due to Context cancelation.
That's it! We should have no data races, no deadlocks, and no leaked goroutines.
The lines elided ("...") in the snippets above are geared towards the Done(), CmdErr(), and Cancel() methods of the Daemon type. These methods are fairly well-documented in the code, so these elided lines are hopefully self-explanatory.
Besides that, look for the TODO comments for error handling you may want to do based on your needs!
Test it!
Use this driver program to test the code.
func main() {
    var d Daemon
    d.Start()

    // Enable this code to test Context cancellation:
    // time.AfterFunc(100*time.Millisecond, d.Cancel)

    <-d.Done()
    fmt.Println("d.CmdErr():", d.CmdErr())
}
You have to scan stdoutBuf instead of os.Stdin:
scanner := bufio.NewScanner(&stdoutBuf)
The command is terminated when the context is canceled. If it's OK to read all output from the command until it is terminated, then use this code:
func (d *Daemon) Start() {
    ctx, cancel := context.WithCancel(context.Background())
    d.cancel = cancel
    args := "-x f -a 1"
    cmd := exec.CommandContext(ctx, "mydaemon", strings.Split(args, " ")...)
    stdout, err := cmd.StdoutPipe()
    if err != nil {
        log.Fatal(err)
    }
    err = cmd.Start()
    if err != nil {
        log.Fatal(err)
    }
    go func() {
        defer cmd.Wait()
        scanner := bufio.NewScanner(stdout)
        for scanner.Scan() {
            s := scanner.Text()
            fmt.Println(s) // echo to stdout
            // Do something with s
        }
    }()
}
The command is terminated when the context is canceled.
Read on stdout returns io.EOF when the command is terminated. The goroutine breaks out of the scan loop when stdout returns an error.
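If you also want to distinguish a normal end-of-output from a genuine read error, bufio.Scanner exposes an Err() method that returns nil when the loop ended on io.EOF. A self-contained sketch of that check, using ls as a stand-in command:

package main

import (
    "bufio"
    "fmt"
    "log"
    "os/exec"
)

func main() {
    cmd := exec.Command("ls")
    stdout, err := cmd.StdoutPipe()
    if err != nil {
        log.Fatal(err)
    }
    if err := cmd.Start(); err != nil {
        log.Fatal(err)
    }

    scanner := bufio.NewScanner(stdout)
    for scanner.Scan() {
        fmt.Println(scanner.Text())
    }
    // Err is nil if the loop stopped on io.EOF (command exited);
    // anything else is a real read error worth reporting.
    if err := scanner.Err(); err != nil {
        log.Println("scanner error:", err)
    }
    _ = cmd.Wait()
}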
I am running the command exec.Command("cf api https://something.com/") and the response sometimes takes a while. But when executing this command, there is no waiting: it executes and moves on immediately. I need to wait for some seconds, or until the output has been received. How can I achieve this?
func TestCMDExex(t *testing.T) {
    expectedText := "Success"
    cmd := exec.Command("cf api https://something.com/")
    cmd.Dir = "/root//"
    out, err := cmd.Output()
    if err != nil {
        t.Fail()
    }
    assert.Contains(t, string(out), expectedText)
}
First: the correct way to create the cmd is:
cmd := exec.Command("cf", "api", "https://something.com/")
Every argument to the child program must be a separate string. This way you can also pass arguments that contain spaces in them. For instance, executing the program with:
cmd := exec.Command("cf", "api https://something.com/")
will pass one command line argument to cf, which is "api https://something.com/", whereas passing two strings will pass two arguments "api" and "https://something.com/".
In your original code, you are trying to execute a program whose name is "cf api https://something.com/".
Then you can run it and get the output:
out, err := cmd.Output()
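Putting both pieces together, the test could look roughly like this. It is only a sketch: the cf command, the testify assert package, and the working directory are taken from the question, and note that Output already blocks until the command has finished, so no extra sleep is needed.

package main

import (
    "os/exec"
    "testing"

    "github.com/stretchr/testify/assert"
)

func TestCMDExex(t *testing.T) {
    expectedText := "Success"

    // Each argument is a separate string.
    cmd := exec.Command("cf", "api", "https://something.com/")
    cmd.Dir = "/root//"

    // Output starts the command and waits for it to exit.
    out, err := cmd.Output()
    if err != nil {
        t.Fatal(err)
    }
    assert.Contains(t, string(out), expectedText)
}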
This can be solved with a goroutine, a channel and the select statement. The sample code below also does error handling:
func main() {
    type output struct {
        out []byte
        err error
    }
    ch := make(chan output)
    go func() {
        // cmd := exec.Command("sleep", "1")
        // cmd := exec.Command("sleep", "5")
        cmd := exec.Command("false")
        out, err := cmd.CombinedOutput()
        ch <- output{out, err}
    }()
    select {
    case <-time.After(2 * time.Second):
        fmt.Println("timed out")
    case x := <-ch:
        fmt.Printf("program done; out: %q\n", string(x.out))
        if x.err != nil {
            fmt.Printf("program errored: %s\n", x.err)
        }
    }
}
By choosing one of the 3 options in exec.Command(), you can see the code behaving in the 3 possible ways: timed out, normal subprocess termination, errored subprocess termination.
As usual when using goroutines, care must be taken to ensure they terminate, to avoid resource leaks.
Note also that if the executed subprocess is interactive or if it prints its progression to stdout and it is important to see the output while it is happening, then it is better to use cmd.Run(), remove the struct and report only the error in the channel.
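A sketch of that last variant, in case seeing the output live matters. The cf command is just the question's hypothetical example; the buffered channel keeps the goroutine from leaking if the timeout fires first.

package main

import (
    "fmt"
    "os"
    "os/exec"
    "time"
)

func main() {
    ch := make(chan error, 1) // buffered so the goroutine never blocks on send
    go func() {
        cmd := exec.Command("cf", "api", "https://something.com/") // hypothetical command
        cmd.Stdout = os.Stdout                                     // stream output as it happens
        cmd.Stderr = os.Stderr
        ch <- cmd.Run()
    }()

    select {
    case <-time.After(2 * time.Second):
        fmt.Println("timed out")
    case err := <-ch:
        fmt.Println("program done; error:", err)
    }
}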
You can use a sync.WaitGroup. In the example below, worker is the function run in every goroutine; note that a WaitGroup must be passed to functions by pointer. On return, each worker notifies the WaitGroup that it is done (the Sleep simulates an expensive task; remove it in your case). The WaitGroup in main is used to wait for all the goroutines launched there to finish: Wait blocks until the WaitGroup counter goes back to 0, i.e. until all the workers have notified that they are done.
package main

import (
    "fmt"
    "sync"
    "time"
)

func worker(id int, wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Printf("Worker %d starting\n", id)
    time.Sleep(time.Second)
    fmt.Printf("Worker %d done\n", id)
}

func main() {
    var wg sync.WaitGroup
    for i := 1; i <= 5; i++ {
        wg.Add(1)
        go worker(i, &wg)
    }
    wg.Wait()
}
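Applied to commands rather than plain workers, the same pattern might look like this sketch (echo is just a stand-in command): each goroutine blocks on Output, so Wait only returns once every command has produced its output.

package main

import (
    "fmt"
    "os/exec"
    "sync"
)

func main() {
    cmds := [][]string{
        {"echo", "one"},
        {"echo", "two"},
    }

    var wg sync.WaitGroup
    for _, args := range cmds {
        wg.Add(1)
        go func(args []string) {
            defer wg.Done()
            // Output blocks until the command exits, so this goroutine
            // only finishes once its command has completed.
            out, err := exec.Command(args[0], args[1:]...).Output()
            if err != nil {
                fmt.Println("error:", err)
                return
            }
            fmt.Print(string(out))
        }(args)
    }
    wg.Wait() // block until every command has finished
}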
I would like to manage a process in Go with the package os/exec. I would like to start it and be able to read the output and write several times to the input.
The process I launch in the code below, menu.py, is just a python script that echoes back whatever it reads on its input.
func ReadOutput(rc io.ReadCloser) (string, error) {
    x, err := ioutil.ReadAll(rc)
    s := string(x)
    return s, err
}

func main() {
    cmd := exec.Command("python", "menu.py")

    stdout, err := cmd.StdoutPipe()
    Check(err)
    stdin, err := cmd.StdinPipe()
    Check(err)

    err = cmd.Start()
    Check(err)

    go func() {
        defer stdin.Close() // If I don't close the stdin pipe, the python code will never take what I write in it
        io.WriteString(stdin, "blub")
    }()

    s, err := ReadOutput(stdout)
    if err != nil {
        Log("Process is finished ..")
    }
    Log(s)

    // STDIN IS CLOSED, I CAN'T RETRY!
}
And the simple code of menu.py:

while 1 == 1:
    name = raw_input("")
    print "Hello, %s. \n" % name
The Go code works, but if I don't close the stdin pipe after I write to it, the python code never takes what is in it. That is fine if I only want to send one thing to the input at a time, but what if I want to send something again a few seconds later? The pipe is closed! What should I do? I suppose the question could be "How do I flush a pipe from the WriteCloser interface?"
I think the primary problem here is that the python process doesn't work the way you might expect. Here's a bash script echo.sh that does the same thing:
#!/bin/bash
while read INPUT
do echo "Hello, $INPUT."
done
Calling this script from a modified version of your code doesn't have the same issue with needing to close stdin:
func ReadOutput(output chan string, rc io.ReadCloser) {
    r := bufio.NewReader(rc)
    for {
        x, _ := r.ReadString('\n')
        output <- string(x)
    }
}

func main() {
    cmd := exec.Command("bash", "echo.sh")

    stdout, err := cmd.StdoutPipe()
    Check(err)
    stdin, err := cmd.StdinPipe()
    Check(err)

    err = cmd.Start()
    Check(err)

    go func() {
        io.WriteString(stdin, "blab\n")
        io.WriteString(stdin, "blob\n")
        io.WriteString(stdin, "booo\n")
    }()

    output := make(chan string)
    defer close(output)
    go ReadOutput(output, stdout)

    for o := range output {
        Log(o)
    }
}
The Go code needed a few minor changes: the ReadOutput method had to be modified so that it does not block, since ioutil.ReadAll would have waited for an EOF before returning.
Digging a little deeper, it looks like the real problem is the behaviour of raw_input - it doesn't flush stdout as expected. You can pass the -u flag to python to get the desired behaviour:
cmd := exec.Command("python", "-u", "menu.py")
or update your python code to use sys.stdin.readline() instead of raw_input() (see this related bug report: https://bugs.python.org/issue526382).
Even though there is a problem with your python script, the main problem here is the Go pipe. A trick to solve this is to use two pipes, as follows:
// parentprocess.go
package main

import (
    "bufio"
    "io"
    "log"
    "os/exec"
)

func request(r *bufio.Reader, w io.Writer, str string) string {
    w.Write([]byte(str))
    w.Write([]byte("\n"))
    str, err := r.ReadString('\n')
    if err != nil {
        panic(err)
    }
    return str[:len(str)-1]
}

func main() {
    cmd := exec.Command("bash", "menu.sh")
    inr, inw := io.Pipe()
    outr, outw := io.Pipe()
    cmd.Stdin = inr
    cmd.Stdout = outw
    if err := cmd.Start(); err != nil {
        panic(err)
    }
    go cmd.Wait()
    reader := bufio.NewReader(outr)
    log.Printf(request(reader, inw, "Tom"))
    log.Printf(request(reader, inw, "Rose"))
}
The subprocess code has the same logic as your python code, as follows:
#!/usr/bin/env bash
# menu.sh
while true; do
    read -r name
    echo "Hello, $name."
done
If you want to use your python code, you need to make some changes:

import sys

while 1 == 1:
    try:
        name = raw_input("")
        print "Hello, %s. \n" % name
        sys.stdout.flush()  # there's a stdout buffer
    except:
        pass  # make sure this process won't die when it comes across 'EOF'
// StdinPipe returns a pipe that will be connected to the command's
// standard input when the command starts.
// The pipe will be closed automatically after Wait sees the command exit.
// A caller need only call Close to force the pipe to close sooner.
// For example, if the command being run will not exit until standard input
// is closed, the caller must close the pipe.
func (c *Cmd) StdinPipe() (io.WriteCloser, error) {}
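In other words, nothing forces you to Close after the first write; the pipe only needs to be closed when you want the child to see EOF. A sketch of several writes before closing, reusing the line-oriented echo.sh child from the earlier answer (so the script's existence is an assumption here):

package main

import (
    "bufio"
    "fmt"
    "io"
    "log"
    "os/exec"
)

func main() {
    cmd := exec.Command("bash", "echo.sh") // the line-echoing script from the earlier answer
    stdin, err := cmd.StdinPipe()
    if err != nil {
        log.Fatal(err)
    }
    stdout, err := cmd.StdoutPipe()
    if err != nil {
        log.Fatal(err)
    }
    if err := cmd.Start(); err != nil {
        log.Fatal(err)
    }

    r := bufio.NewReader(stdout)
    for _, name := range []string{"Tom", "Rose"} {
        io.WriteString(stdin, name+"\n") // several writes; the pipe stays open
        reply, err := r.ReadString('\n')
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(reply)
    }

    stdin.Close() // now the child sees EOF and exits
    _ = cmd.Wait()
}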
I'm having trouble sending a signal from a parent process and receiving it in the child process.
This is the code for the child process. It exits when it receives SIGINT.
// child.go
func main() {
    stop := make(chan os.Signal, 1)
    signal.Notify(stop, os.Interrupt)

    fmt.Println("started")
    <-stop
    fmt.Println("stopped")
}
This is the parent process. It starts child.go, sends SIGINT, then waits for it to exit.
// main.go
func main() {
    // Start child process
    cmd := exec.Command("go", "run", "child.go")
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
    err := cmd.Start()
    if err != nil {
        _, _ = fmt.Fprintf(os.Stderr, "Start: "+err.Error())
        return
    }

    // Wait, then send signal
    time.Sleep(time.Millisecond * 500)
    err = cmd.Process.Signal(os.Interrupt)
    if err != nil {
        _, _ = fmt.Fprintf(os.Stderr, "Signal: "+err.Error())
        return
    }

    // Wait for child process to finish
    err = cmd.Wait()
    if err != nil {
        _, _ = fmt.Fprintf(os.Stderr, "Wait: "+err.Error())
    }
    return
}
This code should print started\nstopped to show that it worked as expected, but it only prints started and hangs at cmd.Wait(), meaning the child process did not receive the signal.
When I run go run child.go it works fine, so I don't think the problem is with that file. I understand that func (*Process) Signal doesn't work on Windows; I am using Linux.
How can I fix the code so that the child process gets the signal sent by the parent process?
As mentioned by #JimB in the comments section, the go run is your problem.
go run child.go will compile child and execute it as its own process. If you run ps after go run child.go, you will see two processes running.
The process you are watching and signalling is the go executable, not the child.
Replace the exec.Command("go", "run", "child.go") with the compiled binary, exec.Command("child"), and it should work.
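For example, a sketch of the parent once the child has been built separately (the binary name and path are assumptions):

package main

import (
    "fmt"
    "os"
    "os/exec"
    "time"
)

func main() {
    // Built beforehand with: go build -o child child.go
    cmd := exec.Command("./child")
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
    if err := cmd.Start(); err != nil {
        fmt.Fprintf(os.Stderr, "Start: %v\n", err)
        return
    }

    time.Sleep(500 * time.Millisecond)
    // The signal now reaches the child itself, not the go tool.
    if err := cmd.Process.Signal(os.Interrupt); err != nil {
        fmt.Fprintf(os.Stderr, "Signal: %v\n", err)
        return
    }

    if err := cmd.Wait(); err != nil {
        fmt.Fprintf(os.Stderr, "Wait: %v\n", err)
    }
}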
I have a short Go program which reads from a named pipe and processes each line as an external process writes to the pipe. The named pipe is created before the program runs using mkfifo.
The process is taking up 100% of the CPU when waiting for a new line from the named pipe, even when it's not doing any processing. It's running on Ubuntu 14.04. Any ideas?
c := make(chan os.Signal, 1)
signal.Notify(c, os.Interrupt, syscall.SIGTERM)

awaitingExit := false
var wg sync.WaitGroup

go func() {
    for range c {
        awaitingExit = true
        // wait for goroutines to finish processing new lines
        wg.Wait()
        os.Exit(1)
    }
}()

file, err := os.OpenFile("file.fifo", os.O_RDONLY, os.ModeNamedPipe)
defer file.Close()
if err != nil {
    log.Fatal(err)
}

reader := bufio.NewReader(file)

// infinite loop
for {
    line, _, _ := reader.ReadLine()
    // stop handling new lines if we're waiting to exit
    if !awaitingExit && len(line) > 0 {
        wg.Add(1)
        go func(uploadLog string) {
            defer wg.Done()
            handleNewLine(uploadLog)
        }(string(line))
    }
}

func handleNewLine(line string) {
    ....
}
Your "infinite loop" is really infinite: you never exit it or jump out of it.
It contains an if:
// stop handling new lines if we're waiting to exit
if !awaitingExit && len(line) > 0 {
    // code omitted
}
But if the condition is false, you still don't get out of the for loop; you just continue with another iteration. Once you reach the end of reader, this loop will consume 100% of a core, because from then on it no longer waits for anything: ReadLine returns EOF immediately, the awaitingExit check runs, and the loop repeats these two steps forever.
You either need to add condition to the for loop to exit sometime, or use a break statement to break out of it.
Altered for loop with a condition:
for !awaitingExit {
    // code omitted
}
Altered for with a break statement:
for {
    if awaitingExit {
        break
    }
    // code omitted
}
Note: if the awaitingExit variable is changed by another goroutine, you need proper synchronization; or better, use channels for exit signalling.
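For illustration, a sketch of that channel-based variant. The quit channel and the restructured loop are my own; note the check only runs between reads, so a blocked ReadLine is only interrupted once a line arrives or the writer side closes the FIFO.

package main

import (
    "bufio"
    "log"
    "os"
    "os/signal"
    "sync"
    "syscall"
)

func handleNewLine(line string) {
    log.Println("got:", line)
}

func main() {
    sigs := make(chan os.Signal, 1)
    signal.Notify(sigs, os.Interrupt, syscall.SIGTERM)

    quit := make(chan struct{})
    go func() {
        <-sigs
        close(quit) // tell the read loop to stop
    }()

    file, err := os.OpenFile("file.fifo", os.O_RDONLY, os.ModeNamedPipe)
    if err != nil {
        log.Fatal(err)
    }
    defer file.Close()

    var wg sync.WaitGroup
    reader := bufio.NewReader(file)
    for {
        select {
        case <-quit:
            wg.Wait() // let in-flight handlers finish
            return
        default:
        }

        line, _, err := reader.ReadLine()
        if err != nil {
            // io.EOF once the writer closes the FIFO; stop instead of spinning.
            wg.Wait()
            return
        }
        if len(line) > 0 {
            wg.Add(1)
            go func(l string) {
                defer wg.Done()
                handleNewLine(l)
            }(string(line))
        }
    }
}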