Is it possible, in a goroutine, to stream a file while a subprocess command is writing to it? The goal is to capture the output as a file and stream it live at the same time. I have:
cmd := exec.CommandContext(ctx, c.Bin, args...)
// Can't use a non-*os.File writer here; see:
// https://github.com/golang/go/issues/23019
tempout, err := ioutil.TempFile("", "workerout")
if err != nil {
return "", err
}
tempoutName := tempout.Name()
defer os.Remove(tempoutName) // clean up
temperr, err := ioutil.TempFile("", "workererr")
if err != nil {
return "", err
}
temperrName := temperr.Name()
defer os.Remove(temperrName) // clean up
cmd.Stdout = tempout
cmd.Stderr = temperr
err = cmd.Start()
if err != nil {
	return "", err
}
// Stream the logs
// Does not work. Flushing issue???
/*
ro := bufio.NewReader(tempout)
go func() {
line, _, _ := ro.ReadLine()
logger.Debug(line)
}()
re := bufio.NewReader(temperr)
go func() {
line, _, _ := re.ReadLine()
logger.Error(line)
}()
*/
cmd.Wait()
return tempout.Read(... // read the file into a string and return it
The commented-out section of the code only shows the logs once the command exits (either because ctx is cancelled or because the command finishes); it does not log in real time. Is there a way to make this log in real time?
Related
I am writing a test function that tests a Go program interacting with a command-line program. That is:
os.Stdout -> cmd.Stdin
cmd.Stdout -> os.Stdin
I could use a pipe to connect that IO, but I would also like a log of the data passing through the pipe.
I tried to use io.MultiWriter, but it is not an *os.File and cannot be assigned to os.Stdout.
I have found some samples which use lots of pipes and io.Copy, but io.Copy is not interactive. How can I connect stdout to an io.MultiWriter with a pipe?
logfile, err := os.Create("stdout.log")
r, w, _ := os.Pipe()
mw := io.MultiWriter(os.Stdout, logfile, w)
cmd.Stdin = r
os.Stdout = mw // <- error in this line
The error message is:
cannot use mw (variable of type io.Writer) as type *os.File in assignment:
As an alternative solution, you can mimic MultiWriter with a separate goroutine that reads what has been written to stdout, captures it, and writes it both to the file and to the original stdout.
package main
import (
"bufio"
"fmt"
"os"
"time"
)
func main() {
originalStdout := os.Stdout // Back up the original stdout
r, w, _ := os.Pipe()
os.Stdout = w
// Use a separate goroutine so the reads don't block
go func() {
f, _ := os.Create("stdout.log") // error handling omitted for brevity
defer f.Close()
scanner := bufio.NewScanner(r)
for scanner.Scan() {
s := scanner.Text() + "\r\n"
f.WriteString(s)
originalStdout.WriteString(s)
}
}()
// Test: write a line to the redirected stdout every second
c := time.NewTicker(time.Second)
for range c.C {
	fmt.Println(time.Now())
	fmt.Fprintln(os.Stderr, "This is on Stderr")
}
}
I figured out a workaround with a pipe, TeeReader, and MultiWriter.
The setup is a test function which tests the interaction of a Go program with a Python program through stdin and stdout.
main.stdout -> pipe -> TeeReader -> (client.stdin, MultiWriter(log, stdout))
client.stdout -> MultiWriter(pipe -> main.stdin, logfile, stdout)
I will try to add more explanation later
func Test_Interactive(t *testing.T) {
var tests = []struct {
file string
}{
{"Test_Client.py"},
}
for tc, tt := range tests {
fmt.Println("---------------------------------")
fmt.Printf("Test %d, Test Client:%v\n", tc+1, tt.file)
fmt.Println("---------------------------------")
// Define external program
client := exec.Command("python3", tt.file)
// Define log file
logfile, err := os.Create(tt.file + ".log")
if err != nil {
panic(err)
}
defer logfile.Close()
out := os.Stdout
defer func() { os.Stdout = out }() // Restore original Stdout
in := os.Stdin
defer func() { os.Stdin = in }() // Restore original Stdin
// Create a pipe connecting os.Stdout to client.Stdin
gr, gw, _ := os.Pipe()
// Connect os.Stdout to the writer side of the pipe
os.Stdout = gw
// Create a MultiWriter that writes to the logfile and the original stdout at the same time
gmw := io.MultiWriter(out, logfile)
// Create a TeeReader that reads from the reader side of the pipe and copies
// everything into the MultiWriter, and replace client.Stdin with it
client.Stdin = io.TeeReader(gr, gmw)
// Create a pipe to connect client.Stdout to os.Stdin
cr, cw, _ := os.Pipe()
// Create a MultiWriter for the client's stdout (pipe, logfile, and original stdout)
cmw := io.MultiWriter(cw, logfile, out)
client.Stdout = cmw
// Connect os.Stdin to the other end of the pipe
os.Stdin = cr
// Start Client
client.Start()
// Start main
main()
// Wait for the client and check the testing program's exit status
if err := client.Wait(); err != nil {
if exiterr, ok := err.(*exec.ExitError); ok {
if status, ok := exiterr.Sys().(syscall.WaitStatus); ok {
log.Printf("Exit Status: %d", status.ExitStatus())
t.Errorf("Tester return error\n")
}
} else {
log.Fatalf("cmd.Wait: %v", err)
t.Errorf("Tester return error\n")
}
}
}
}
I've been trying to write a program that records what is passed to a subprocess and what the console returns, live (eventually to record SSH sessions; for now a Python shell for testing).
I can record stdout and stderr without issue (it shows and records them correctly), but I can't find a way to do the same with stdin.
Basically, I want my stdin to be both mapped to the subprocess stdin and written to the log file.
Here is my current code:
func SSH(cmd *cobra.Command, args []string) {
logFile := fmt.Sprintf("%v#%s.log", args[0], time.Now().Format(SSHLogDateFormat))
usr, _ := user.Current()
home := usr.HomeDir
logDir := fmt.Sprintf("%s/%s/logs", home, config.ConfigDir)
if _, err := os.Stat(logDir); os.IsNotExist(err) {
err = os.Mkdir(logDir, os.FileMode(int(0700)))
if err != nil {
log.Fatalf("Failed to create %s: %s", logDir, err)
}
}
fullLogFile := fmt.Sprintf("%s/%s", logDir, logFile)
log.Infof("Started recording to %s", fullLogFile)
bash, err := exec.LookPath("bash")
if err != nil {
	log.Fatalf("Could not locate bash: %v", err) // can't continue without bash
}
f, err := os.Create(fullLogFile)
if err != nil {
log.Fatalf("Failed to open device logs: %s", err)
}
command := exec.Command(bash, "-c", "python")
out := io.MultiWriter(os.Stdout, f)
command.Stderr = out
command.Stdout = out
if err := command.Start(); nil != err {
log.Fatalf("Error starting program: %s, %s", command.Path, err.Error())
}
err = command.Wait()
if err != nil {
log.Fatalf("Error waiting program: %s, %s", command.Path, err.Error())
}
f.Close()
log.Infof("Finished recording to %s", fullLogFile)
}
I tried this too, without success:
out := io.MultiWriter(os.Stdout, f)
in := io.TeeReader(os.Stdin, out)
command.Stderr = out
command.Stdout = out
command.Stdin = in
You need to write to the process's stdin. Get a write pipe to that:
procIn, err := command.StdinPipe()
if err != nil {
log.Fatal(err)
}
Then create a MultiWriter to write to both the log and the process:
inWriter := io.MultiWriter(procIn, f)
Finally, copy Stdin into the MultiWriter:
go func() {
io.Copy(inWriter, os.Stdin)
procIn.Close()
}()
We do the copy in a goroutine so as not to hang everything up: we haven't started the command yet, so there's nothing receiving the written bytes. The copy needs to happen in parallel with the command running.
Here's a very simple example:
package main
import (
	"io"
	"os"
	"os/exec"
)
// pipeto copies stdin to logOut and to the command,
// and copies the command's stdout and stderr to logOut
// and to our stdout.
func pipeto(logOut io.Writer, cmd string, args ...string) error {
	c := exec.Command(cmd, args...) // a new name; := can't redeclare the cmd parameter
	out := io.MultiWriter(os.Stdout, logOut)
	c.Stderr, c.Stdout = out, out
	procIn, err := c.StdinPipe()
	if err != nil {
		return err
	}
	go func() {
		io.Copy(io.MultiWriter(procIn, logOut), os.Stdin)
		procIn.Close()
	}()
	return c.Run()
}
func main() {
logOut, err := os.Create("logout.log")
if err != nil {
	panic(err)
}
defer logOut.Close()
if err := pipeto(logOut, "sed", "s/tea/coffee/g"); err != nil {
	panic(err)
}
}
You can test it; I've named my Go file pipetest.go:
echo this is a test of tea | go run pipetest.go
Then you will see both the input and the output reflected in logout.log:
this is a test of tea
this is a test of coffee
In the end I found a solution by using the PTY library (which would have been needed anyway to handle special signals and tab completion in subprocesses): https://github.com/creack/pty
I took the Shell example and just replaced the io.Copy with my MultiWriter.
If I'm opening a file inside a for loop and will be finished with it at the end of that iteration, should I call Close immediately or trick Defer using a closure?
I have a series of filenames being read in from a chan string which have data to be copied into a zipfile. This is all being processed in a go func.
go func(fnames <-chan string, zipfilename string) {
f, _ := os.Create(zipfilename) // ignore error handling for this example
defer f.Close()
zf := zip.NewWriter(f)
defer zf.Close()
for fname := range fnames {
	r, _ := os.Open(fname)
	w, _ := zf.Create(r.Name()) // returns an io.Writer; zip entry writers have no Close
	io.Copy(w, r)
	r.Close()
}
}(files, "some name.zip")
Inside my for loop, would it be more idiomatic Go to write:
for fname := range fnames {
func(){
r, _ := os.Open(fname)
defer r.Close()
w, _ := zf.Create(r.Name()) // io.Writer; nothing to defer-close here
io.Copy(w, r)
}()
}
or should I continue with my code as-written?
You should be checking your errors. I know this is meant to just be an example, but in this case it is important: if all you do is defer Close(), you can't actually check whether the deferred Close returned an error.
The way I would write this is to create a helper function:
func copyFileToZip(zf *zip.Writer, filename string) error {
	r, err := os.Open(filename)
	if err != nil {
		return err
	}
	defer r.Close()
	w, err := zf.Create(r.Name())
	if err != nil {
		return err
	}
	if _, err = io.Copy(w, r); err != nil {
		return err
	}
	// A zip entry writer has no Close; the entry is finalized by the next
	// Create call or by closing the zip.Writer. Flush reports any error
	// from writing this entry's data.
	return zf.Flush()
}
Once you add in all that error handling, the function is big enough to make it a named function. It also has the added benefit of surfacing write errors for each entry via Flush. Checking the reader's Close error is unnecessary, since it won't affect whether the data was written.
I am trying to stream out the bytes of a zip file using io.Pipe() in Go. I am using the pipe reader to read the bytes of each file in the zip and stream those out, and the pipe writer to write the bytes into the response object.
func main() {
r, w := io.Pipe()
// goroutine to make the write/read non-blocking
go func() {
defer w.Close()
bytes, err := ReadBytesforEachFileFromTheZip()
err = json.NewEncoder(w).Encode(bytes) // = rather than :=, since err already exists
handleErr(err)
}()
This is not a working implementation, just the structure of what I am trying to achieve. I don't want to use ioutil.ReadAll, since the file is going to be very large and Pipe() will help me avoid bringing all the data into memory. Can someone help with a working implementation using io.Pipe()?
I made it work using Go's io.Pipe(). The PipeWriter writes bytes to the pipe in chunks and the PipeReader reads from the other end. The reason for using a goroutine is to have a non-blocking write operation while simultaneous reads happen from the pipe.
Note: it's important to close the pipe writer (w.Close()) to send EOF on the stream; otherwise the stream will not close.
func DownloadZip() ([]byte, error) {
r, w := io.Pipe()
defer r.Close()
defer w.Close()
zip, err := os.Stat("temp.zip")
if err != nil{
return nil, err
}
go func(){
f, err := os.Open(zip.Name())
if err != nil {
	return
}
defer f.Close() // don't leak the file handle
buf := make([]byte, 1024)
for {
chunk, err := f.Read(buf)
if err != nil && err != io.EOF {
panic(err)
}
if chunk == 0 {
break
}
if _, err := w.Write(buf[:chunk]); err != nil{
return
}
}
w.Close()
}()
body, err := ioutil.ReadAll(r)
if err != nil {
return nil, err
}
return body, nil
}
Please let me know if someone has another way of doing it.
I need to find a way to read a line from an io.ReadCloser object, OR find a way to split a byte array on an "end of line" symbol. However, I don't know the end-of-line symbol and I can't find it.
My application execs a PHP script and needs to get the live output from the script and do "something" with it as it arrives.
Here's a small piece of my code:
cmd := exec.Command(prog, args)
/* cmd := exec.Command("ls")*/
out, err := cmd.StdoutPipe()
if err != nil {
fmt.Println(err)
}
err = cmd.Start()
if err != nil {
fmt.Println(err)
}
After this I monitor the out buffer in a goroutine. I've tried two ways:
1) nr, er := out.Read(buf), where buf is a byte slice. The problem here is that I need to break the array at each new line.
2) My second option is to create a new bufio.Reader:
r := bufio.NewReader(out)
line,_,e := r.ReadLine()
It runs fine if I exec a command like ls (I get the output line by line), but if I exec a PHP script it immediately gets an end-of-file error and exits (I'm guessing that's because of the delayed output from PHP).
EDIT: My problem was that I was creating the bufio.Reader inside the goroutine; if I create it right after the StdoutPipe() call, as minikomi suggested, it works fine.
You can create a reader using bufio, and then read until the next line-break character (note the single quotes, which denote a character literal!):
stdout, err := cmd.StdoutPipe()
if err != nil {
	log.Fatal("Pipe error:", err)
}
rd := bufio.NewReader(stdout)
if err := cmd.Start(); err != nil {
	log.Fatal("Start error:", err)
}
for {
	str, err := rd.ReadString('\n')
	if err != nil {
		if err == io.EOF {
			break // the command closed its stdout
		}
		log.Fatal("Read Error:", err)
	}
	fmt.Println(str)
}
If you're reading from the reader in a goroutine and nothing stops the main program from exiting, the program will quit before you see the output.
Another option is bufio.NewScanner:
package main
import (
"bufio"
"os/exec"
)
func main() {
cmd := exec.Command("go", "env")
out, err := cmd.StdoutPipe()
if err != nil {
panic(err)
}
buf := bufio.NewScanner(out)
if err := cmd.Start(); err != nil {
	panic(err)
}
defer cmd.Wait()
for buf.Scan() {
println(buf.Text())
}
}
https://golang.org/pkg/bufio#NewScanner