Reading from exec command stdout without buffering - go

I'm running a command in Go via exec.Command and scanning its output. On some systems the output arrives immediately, but on others it seems to be buffered: unless the command produces enough data, I don't receive any output at all.
Is there any way to get more immediate output, reliably?
package main

import (
    "fmt"
    "log"
    "os/exec"
    "time"
)

func main() {
    cmd := exec.Command("udevadm", "monitor")
    stdout, err := cmd.StdoutPipe()
    if err != nil {
        log.Fatal(err)
    }
    err = cmd.Start()
    if err != nil {
        log.Fatal(err)
    }
    for {
        p := make([]byte, 10)
        n, _ := stdout.Read(p)
        fmt.Println("# ", time.Now().Unix(), " ", n)
    }
}

I'll propose that running stdbuf -oL udevadm <args> achieves effectively what I'm after (line-buffered output).
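For illustration, a minimal sketch of that stdbuf approach, reading line by line with bufio.Scanner; it assumes stdbuf (from GNU coreutils) and udevadm are available on the target system:

package main

import (
    "bufio"
    "fmt"
    "log"
    "os/exec"
)

func main() {
    // Wrap the real command in stdbuf so its stdout is line buffered
    // even when it is connected to a pipe instead of a terminal.
    cmd := exec.Command("stdbuf", "-oL", "udevadm", "monitor")
    stdout, err := cmd.StdoutPipe()
    if err != nil {
        log.Fatal(err)
    }
    if err := cmd.Start(); err != nil {
        log.Fatal(err)
    }
    // Read line by line; each line should arrive as soon as the
    // child process prints it.
    scanner := bufio.NewScanner(stdout)
    for scanner.Scan() {
        fmt.Println(scanner.Text())
    }
    if err := scanner.Err(); err != nil {
        log.Fatal(err)
    }
    if err := cmd.Wait(); err != nil {
        log.Fatal(err)
    }
}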

Related

Golang io.Copy blocks in internal ReadFrom

I am building a terminal emulator in Go and I'm trying to run detached processes whose output I can copy and display to the user, but the io.Copy call blocks, so I never get to the output part.
I looked in the source code and it blocks in the internal ReadFrom method; I cannot understand why this is happening.
package main

import (
    "bytes"
    "fmt"
    "io"
    "os"
)

func main() {
    inputReader, inputWriter, _ := os.Pipe()
    outputReader, outputWriter, _ := os.Pipe()
    io.Copy(inputWriter, bytes.NewReader([]byte("\n")))
    stdin := inputReader
    stdout := outputWriter
    stderr := outputWriter
    var attr = os.ProcAttr{
        Dir: "/tmp",
        Env: nil,
        Files: []*os.File{
            stdin,
            stdout,
            stderr,
        },
        Sys: nil,
    }
    process, startProcessErr := os.StartProcess("/usr/bin/ls", []string{"ls"}, &attr)
    if startProcessErr != nil {
        panic(startProcessErr)
    }
    if releaseProcessErr := process.Release(); releaseProcessErr != nil {
        panic(releaseProcessErr)
    }
    var output bytes.Buffer
    io.Copy(&output, outputReader)
    fmt.Println(output)
}
Maybe it is because I release the process, but I don't think that should cause it.
The call io.Copy(&output, outputReader) blocks until read on outputReader returns EOF or some other error. Read on outputReader does not return EOF because the write side of the pipe is still open in the parent process. Fix by closing the writer in the parent process.
...
if releaseProcessErr := process.Release(); releaseProcessErr != nil {
    panic(releaseProcessErr)
}
outputWriter.Close() // <-- add this line
var output bytes.Buffer
io.Copy(&output, outputReader)
fmt.Println(output)
...
Use the os/exec package to simplify the code:
cmd := exec.Command("/usr/bin/ls")
cmd.Dir = "/tmp"
output, err := cmd.CombinedOutput()
if err != nil {
    log.Fatal(err)
}
fmt.Println(string(output))

How can I get stdin to an exec.Cmd in Go?

I have this code
subProcess := exec.Cmd{
    Path: execAble,
    Args: []string{
        fmt.Sprintf("-config=%s", *configPath),
        fmt.Sprintf("-serverType=%s", *serverType),
        fmt.Sprintf("-reload=%t", *reload),
        fmt.Sprintf("-listenFD=%d", fd),
    },
    Dir: here,
}
subProcess.Stdout = os.Stdout
subProcess.Stderr = os.Stderr
logger.Info("starting subProcess:%s ", subProcess.Args)
if err := subProcess.Run(); err != nil {
    logger.Fatal(err)
}
and then I do os.Exit(1) to stop the main process
I can get output from the subprocess, but I also want to feed input to its stdin.
I tried
subProcess.Stdin = os.Stdin
but it does not work.
I made a simple program (for testing). It reads a number and writes the given number out.
package main

import (
    "fmt"
)

func main() {
    fmt.Println("Hello, What's your favorite number?")
    var i int
    fmt.Scanf("%d\n", &i)
    fmt.Println("Ah I like ", i, " too.")
}
And here is the modified code
package main

import (
    "fmt"
    "io"
    "os"
    "os/exec"
)

func main() {
    subProcess := exec.Command("go", "run", "./helper/main.go") // just for testing; replace with your subprocess
    stdin, err := subProcess.StdinPipe()
    if err != nil {
        fmt.Println(err) // replace with logger, or anything you want
    }
    defer stdin.Close() // the doc says subProcess.Wait will close it, but I'm not sure, so I kept this line
    subProcess.Stdout = os.Stdout
    subProcess.Stderr = os.Stderr
    fmt.Println("START") // for debug
    if err = subProcess.Start(); err != nil { // use Start, not Run
        fmt.Println("An error occurred: ", err) // replace with logger, or anything you want
    }
    io.WriteString(stdin, "4\n")
    subProcess.Wait()
    fmt.Println("END") // for debug
}
You are interested in these lines:
stdin, err := subProcess.StdinPipe()
if err != nil {
    fmt.Println(err)
}
defer stdin.Close()
//...
io.WriteString(stdin, "4\n")
//...
subProcess.Wait()
Explanation of the above lines
We obtain the subprocess's stdin, so now we can write to it.
We use that and write a number.
We wait for the subprocess to complete.
Output
START
Hello, What's your favorite number?
Ah I like 4 too.
END
For better understanding
There's now an updated example available in the Go docs: https://golang.org/pkg/os/exec/#Cmd.StdinPipe
If the subprocess won't continue until its stdin is closed, the io.WriteString() call needs to be wrapped in a goroutine that closes stdin when it is done writing:
package main

import (
    "fmt"
    "io"
    "log"
    "os/exec"
)

func main() {
    cmd := exec.Command("cat")
    stdin, err := cmd.StdinPipe()
    if err != nil {
        log.Fatal(err)
    }
    go func() {
        defer stdin.Close()
        io.WriteString(stdin, "values written to stdin are passed to cmd's standard input")
    }()
    out, err := cmd.CombinedOutput()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s\n", out)
}
Though this question is a little old, here is my answer:
This question is of course very platform specific, since how standard I/O is handled depends on the OS implementation, not on the Go language. However, as a general rule of thumb (given how the prevailing OSes behave), what you ask is not possible.
On most modern operating systems you can pipe the standard streams (as in #mraron's answer), and you can detach them (this is how daemons work), but you cannot reassign or delegate them to another process.
I think this limitation exists mostly for security reasons. Bugs that allow remote code execution are still discovered from time to time; if the OS allowed reassigning or delegating STDIN/STDOUT, the consequences of such vulnerabilities would be disastrous.
While you cannot do this directly, as #AlexKey wrote earlier, you can still work around it. If the OS prevents you from piping your own standard streams, no matter: all you need is two channels and two goroutines.
var stdinChan = make(chan []byte)
var stdoutChan = make(chan []byte)

// when something appears on the stdout of your code, just pipe it to stdoutChan
stdoutChan <- somehowGotDataFromStdOut
Then you need two funcs, as I mentioned before:
func Pipein() {
    for {
        stdinFromProg.Write(<-stdinChan)
    }
}
The same idea applies to stdout.
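To make that concrete, here is a minimal self-contained sketch of the two-channel, two-goroutine workaround; the channel names mirror the fragments above, and cat is only a placeholder child process:

package main

import (
    "bufio"
    "fmt"
    "log"
    "os/exec"
)

func main() {
    cmd := exec.Command("cat") // placeholder child process that echoes its stdin
    stdin, err := cmd.StdinPipe()
    if err != nil {
        log.Fatal(err)
    }
    stdout, err := cmd.StdoutPipe()
    if err != nil {
        log.Fatal(err)
    }
    if err := cmd.Start(); err != nil {
        log.Fatal(err)
    }

    stdinChan := make(chan []byte)
    stdoutChan := make(chan []byte)

    // Goroutine 1: everything sent on stdinChan is written to the child's stdin.
    go func() {
        defer stdin.Close()
        for data := range stdinChan {
            if _, err := stdin.Write(data); err != nil {
                return
            }
        }
    }()

    // Goroutine 2: every line of the child's stdout is forwarded on stdoutChan.
    go func() {
        defer close(stdoutChan)
        scanner := bufio.NewScanner(stdout)
        for scanner.Scan() {
            stdoutChan <- append([]byte(nil), scanner.Bytes()...)
        }
    }()

    stdinChan <- []byte("hello\n")
    close(stdinChan)
    for line := range stdoutChan {
        fmt.Printf("child said: %s\n", line)
    }
    if err := cmd.Wait(); err != nil {
        log.Fatal(err)
    }
}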

How to execute system command with unknown arguments?

I have a bunch of system commands which are somewhat like appending new content to a file. I wrote a simple script to execute system commands, which works well for single words like 'ls', 'date', etc. But if the command is any longer than that, the program dies.
The following is the code
package main

import (
    "fmt"
    "os/exec"
    "sync"
)

func exe_cmd(cmd string, wg *sync.WaitGroup) {
    fmt.Println(cmd)
    out, err := exec.Command(cmd).Output()
    if err != nil {
        fmt.Println("error occurred")
        fmt.Printf("%s", err)
    }
    fmt.Printf("%s", out)
    wg.Done()
}
func main() {
    wg := new(sync.WaitGroup)
    wg.Add(3)
    x := []string{"echo newline >> foo.o", "echo newline >> f1.o", "echo newline >> f2.o"}
    go exe_cmd(x[0], wg)
    go exe_cmd(x[1], wg)
    go exe_cmd(x[2], wg)
    wg.Wait()
}
The following is the error i see
exec: "echo newline >> foo.o": executable file not found in $PATHexec:
"echo newline >> f2.o": executable file not found in $PATHexec:
"echo newline >> f1.o": executable file not found in $PATH
I guess this may be due to not sending the command and its arguments separately (http://golang.org/pkg/os/exec/#Command). I am wondering how to work around this, since I don't know in advance how many arguments the command to be executed will have.
I found a relatively decent way to achieve this:
out, err := exec.Command("sh","-c",cmd).Output()
It works for me so far. I'm still looking for better ways to achieve the same thing.
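For illustration, a self-contained sketch of that sh -c approach; the command string is only a placeholder, and handing an arbitrary string to the shell has the usual quoting and injection caveats:

package main

import (
    "fmt"
    "log"
    "os/exec"
)

func main() {
    // The whole command line, including the redirection, is interpreted
    // by the shell, not by exec.Command itself.
    cmd := "echo newline >> foo.o && cat foo.o"
    out, err := exec.Command("sh", "-c", cmd).Output()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s", out)
}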
Edit1:
Finally, an easier and more efficient (at least so far) way to do this is as follows:
func exeCmd(cmd string, wg *sync.WaitGroup) {
    fmt.Println("command is ", cmd)
    // splitting head => g++ parts => rest of the command
    parts := strings.Fields(cmd)
    head := parts[0]
    parts = parts[1:len(parts)]
    out, err := exec.Command(head, parts...).Output()
    if err != nil {
        fmt.Printf("%s", err)
    }
    fmt.Printf("%s", out)
    wg.Done() // need to signal to the waitgroup that this goroutine is done
}
Thanks to variadic arguments in Go and the people that pointed that out to me :)
For exec.Command() the first argument needs to be the path to the executable, and the remaining arguments are supplied as arguments to the executable. Use strings.Fields() to split the command string into a []string.
Example:
package main

import (
    "fmt"
    "os/exec"
    "strings"
    "sync"
)

func exe_cmd(cmd string, wg *sync.WaitGroup) {
    fmt.Println(cmd)
    parts := strings.Fields(cmd)
    out, err := exec.Command(parts[0], parts[1:]...).Output()
    if err != nil {
        fmt.Println("error occurred")
        fmt.Printf("%s", err)
    }
    fmt.Printf("%s", out)
    wg.Done()
}

func main() {
    wg := new(sync.WaitGroup)
    commands := []string{"echo newline >> foo.o", "echo newline >> f1.o", "echo newline >> f2.o"}
    for _, str := range commands {
        wg.Add(1)
        go exe_cmd(str, wg)
    }
    wg.Wait()
}
Here's an alternative approach that just writes all the commands to a file, then executes that file within the newly created output directory.
Example 2
package main

import (
    "fmt"
    "os"
    "os/exec"
    "path/filepath"
    "strings"
)

var (
    output_path = filepath.Join("./output")
    bash_script = filepath.Join("_script.sh")
)

func checkError(e error) {
    if e != nil {
        panic(e)
    }
}

func exe_cmd(cmds []string) {
    os.RemoveAll(output_path)
    err := os.MkdirAll(output_path, os.ModePerm|os.ModeDir)
    checkError(err)
    file, err := os.Create(filepath.Join(output_path, bash_script))
    checkError(err)
    defer file.Close()
    file.WriteString("#!/bin/sh\n")
    file.WriteString(strings.Join(cmds, "\n"))
    err = os.Chdir(output_path)
    checkError(err)
    out, err := exec.Command("sh", bash_script).Output()
    checkError(err)
    fmt.Println(string(out))
}

func main() {
    commands := []string{
        "echo newline >> foo.o",
        "echo newline >> f1.o",
        "echo newline >> f2.o",
    }
    exe_cmd(commands)
}
out, _ := exec.Command("sh", "-c", "date +\"%Y-%m-%d %H:%M:%S %Z\"").Output()
exec.Command("sh","-c","ls -al -t | grep go >>test.txt").Output()
fmt.Printf("%s\n\n",out)
I tested a couple of cases and they all work well. This is a lifesaver if you are dealing with quick shell commands in your program. Not tested with complex cases.

Calling an external command in Go

How can I call an external command in Go?
I need to call an external program and wait for it to finish execution before the next statement is executed.
You need to use the os/exec package: start a command using Command, and use Run to wait for completion.
cmd := exec.Command("yourcommand", "some", "args")
if err := cmd.Run(); err != nil {
    fmt.Println("Error: ", err)
}
If you just want to read the result, you may use Output instead of Run.
package main

import (
    "fmt"
    "log"
    "os/exec"
)

func main() {
    cmd := exec.Command("ls", "-ltr")
    out, err := cmd.CombinedOutput()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s\n", out)
}
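The example above uses CombinedOutput, which merges stdout and stderr. A minimal variant using Output, as mentioned in the answer, returns standard output only:

package main

import (
    "fmt"
    "log"
    "os/exec"
)

func main() {
    // Output runs the command and returns its standard output;
    // standard error is not included in the returned bytes.
    out, err := exec.Command("ls", "-ltr").Output()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s\n", out)
}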

Communicating with program process using pipes

I want to be able to fully communicate with programs after spawning them from a Go program. What I have so far spawns the process and talks to it through pipes, reacting to the last line read from stdout:
package main

import (
    "fmt"
    "io"
    "log"
    "os/exec"
    "strings"
)

var stdinPipe io.WriteCloser
var stdoutPipe io.ReadCloser
var err error

func main() {
    cmd := &exec.Cmd{
        Path: "/Users/seba/Projects/go/src/bootstrap/in",
        Args: []string{"program"},
    }
    stdinPipe, err = cmd.StdinPipe()
    if err != nil {
        log.Fatal(err)
    }
    stdoutPipe, err = cmd.StdoutPipe()
    if err != nil {
        log.Fatal(err)
    }
    err = cmd.Start()
    if err != nil {
        log.Fatal(err)
    }
    var stdoutLines []string
    go stdoutManage(stdoutLines, stdoutController)
    cmd.Wait()
}

// TODO: improve as in io.Copy
func stdoutManage(lines []string, manager func(string)) {
    buf := make([]byte, 32*1024)
    for {
        nr, err := stdoutPipe.Read(buf)
        if nr > 0 {
            thelines := strings.Split(string(buf[:nr]), "\n")
            for _, l := range thelines {
                manager(l)
                lines = append(lines, l)
            }
        }
        buf = make([]byte, 32*1024) // clear buf
        if err != nil {
            break
        }
    }
}
However, this approach has problems with programs that clear the terminal output, and with programs that somehow buffer their stdin or don't use stdin at all (I don't know if that's possible).
So the question: is there a portable way of talking to programs (it can be a non-Go solution)?
Problems like this usually come down to the C library, which changes its default buffering mode depending on exactly what stdin/stdout/stderr are connected to.
If stdout is a terminal, buffering is automatically set to line buffered; otherwise it is fully buffered.
This is relevant to you because when you run the programs through a pipe they aren't connected to a terminal, so their output is buffered in a way that breaks this sort of use.
To fix it, you need to use a pseudo-tty, which pretends to be a terminal but acts just like a pipe. There are libraries implementing the pty interface for Go; I haven't actually tried them, but they look like they do the right thing!
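As an illustration of that pseudo-tty approach, here is a small sketch using the github.com/creack/pty package (formerly github.com/kr/pty); treat that particular library and the ls invocation as assumptions on my part, not something named in the original answer:

package main

import (
    "io"
    "log"
    "os"
    "os/exec"

    "github.com/creack/pty"
)

func main() {
    cmd := exec.Command("ls", "--color=auto") // placeholder command

    // pty.Start runs the command with its stdin/stdout/stderr attached
    // to a pseudo terminal, so the child sees a tty and buffers its
    // output as it would when run interactively.
    f, err := pty.Start(cmd)
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    // Copy everything the child writes to our own stdout.
    if _, err := io.Copy(os.Stdout, f); err != nil {
        log.Print(err) // reading from a pty can return an error instead of a clean EOF when the child exits
    }
    if err := cmd.Wait(); err != nil {
        log.Fatal(err)
    }
}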
