executable exits early when using io.WriteString - go

I'm using the io package to work with an executable defined in my PATH.
The executable is called "Stockfish" (a chess engine) and is usable from the command line.
To let the engine search for the best move, you send "go depth n"; the higher the depth, the longer the search takes.
Using my command line it searches for about 5 seconds at a depth of 20, and the output looks like this:
go depth 20
info string NNUE evaluation using nn-3475407dc199.nnue enabled
info depth 1 seldepth 1 multipv 1 score cp -161 nodes 26 nps 3714 tbhits 0 time 7 pv e7e6
info depth 2 seldepth 2 multipv 1 score cp -161 nodes 51 nps 6375 tbhits 0 time 8 pv e7e6 f1d3
info depth 3 seldepth 3 multipv 1 score cp -161 nodes 79 nps 7900 tbhits 0 time 10 pv e7e6 f1d3 g8f6
info depth 4 seldepth 4 multipv 1 score cp -161 nodes 113 nps 9416 tbhits 0 time 12 pv e7e6 f1d3 g8f6 b1c3
[...]
bestmove e7e6 ponder h2h4
Now, using io.WriteString it finishes after milliseconds without any (visible) calculation:
(That's also the output of the code below)
Stockfish 14 by the Stockfish developers (see AUTHORS file)
info string NNUE evaluation using nn-3475407dc199.nnue enabled
bestmove b6b5
Here's the code I use:
func useStockfish(commands []string) string {
cmd := exec.Command("stockfish")
stdin, err := cmd.StdinPipe()
if err != nil {
log.Fatal(err)
}
for _, cmd := range commands {
writeString(cmd, stdin)
}
err = stdin.Close()
if err != nil {
log.Fatal(err)
}
out, err := cmd.CombinedOutput()
if err != nil {
log.Fatal(err)
}
return string(out)
}
func writeString(cmd string, stdin io.WriteCloser) {
_, err := io.WriteString(stdin, cmd)
if err != nil {
log.Fatal(err)
}
}
And this is an example of how I use it. The first command sets the position, the second one calculates the next best move with a depth of 20. The result is shown above.
func FetchComputerMove(game *internal.Game) {
useStockfish([]string{"position examplepos\n", "go depth 20"})
}

To leverage engines like Stockfish, you need to start the process and keep it running.
You are executing it, passing two commands via a stdin pipe, and then closing the pipe. Closing the pipe signals EOF on the engine's standard input, and Stockfish exits once its input is closed, so the search is cut short.
To run it - and keep it running - you need something like:
func startEngine(enginePath string) (stdin io.WriteCloser, stdout io.ReadCloser, err error) {
cmd := exec.Command(enginePath)
stdin, err = cmd.StdinPipe()
if err != nil {
return
}
stdout, err = cmd.StdoutPipe()
if err != nil {
return
}
err = cmd.Start() // start command - but don't wait for it to complete
return
}
The returned pipes allow you to send commands & see the output live:
stdin, stdout, err := startEngine("/usr/local/bin/stockfish")
sendCmd := func(cmd string) error {
_, err := stdin.Write([]byte(cmd + "\n"))
return err
}
sendCmd("position examplepos")
sendCmd("go depth 20")
then to crudely read the asynchronous response:
b := make([]byte, 10240)
for {
n, err := stdout.Read(b)
if err != nil {
log.Fatalf("read error: %v", err)
}
log.Println(string(b[:n]))
}
Once a line like bestmove d2d4 ponder g8f6 appears, you know the current analysis command has completed.
You can then either close the engine (by closing the stdin pipe) if that's all you need, or keep it open for further command submissions.
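A slightly less crude approach is to scan the output line by line and stop once the bestmove line shows up. Here is a minimal sketch, assuming the startEngine helper above; bestMove and its parameters are my own names, not part of any library:
// bestMove sends a position and a search command, then scans the engine's
// output line by line until the "bestmove ..." line arrives.
func bestMove(stdin io.Writer, stdout io.Reader, position string, depth int) (string, error) {
    if _, err := fmt.Fprintf(stdin, "position %s\ngo depth %d\n", position, depth); err != nil {
        return "", err
    }
    scanner := bufio.NewScanner(stdout)
    for scanner.Scan() {
        line := scanner.Text()
        if strings.HasPrefix(line, "bestmove") {
            return line, nil // e.g. "bestmove d2d4 ponder g8f6"
        }
    }
    return "", scanner.Err()
}
It needs "bufio", "fmt", "io" and "strings" from the standard library, and it leaves the engine running so you can submit further commands afterwards.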

Related

Execute Command Line Binary And Continually Read Stdout

In Go, I would like to execute a binary from within my application and continually read what the command prints to stdout. However, the one caveat is that the binary is programmed to execute its task infinitely until it reads the enter key, and I don't have access to the binary's source code. If I execute the binary directly from a terminal, it behaves correctly. However, if I execute the binary from within my application, it somehow thinks that it reads the enter key and closes almost immediately. Here is a code snippet demonstrating how I'm trying to execute the binary, pipe its stdout, and print it to the screen:
func main() {
// The binary that I want to execute.
cmd := exec.Command("/usr/lib/demoApp")
// Pipe the command's output.
stdout, err := cmd.StdoutPipe()
if err != nil {
fmt.Println(err)
}
stdoutReader := bufio.NewReader(stdout)
// Start the command.
err = cmd.Start()
if err != nil {
fmt.Println(err)
}
// Read and print the command's output.
buff := make([]byte, 1024)
var n int
for err == nil {
n, err = stdoutReader.Read(buff)
if n > 0 {
fmt.Printf(string(buff[0:n]))
}
}
_ = cmd.Wait()
}
Any ideas if what I'm trying to accomplish is possible?
As @mgagnon mentioned, your problem might lie somewhere else; perhaps the external dependency just bails because it is not running in a terminal. Using the following to simulate demoApp:
func main() {
fmt.Println("Press enter to exit")
// Every second, report fake progress
go func() {
for {
fmt.Print("Doing stuff...\n")
time.Sleep(time.Second)
}
}()
for {
// Read single character and if enter, exit.
consoleReader := bufio.NewReaderSize(os.Stdin, 1)
input, _ := consoleReader.ReadByte()
// Enter = 10 | 13 (LF or CR)
if input == 10 || input == 13 {
fmt.Println("Exiting...")
os.Exit(0)
}
}
}
... this works fine for me:
func main() {
cmd := exec.Command("demoApp.exe")
stdout, err := cmd.StdoutPipe()
if err != nil {
panic(err)
}
stdin, err := cmd.StdinPipe()
if err != nil {
log.Fatal(err)
}
go func() {
defer stdin.Close()
// After 3 seconds of running, send newline to cause program to exit.
time.Sleep(time.Second * 3)
io.WriteString(stdin, "\n")
}()
cmd.Start()
// Scan and print command's stdout
scanner := bufio.NewScanner(stdout)
for scanner.Scan() {
fmt.Println(scanner.Text())
}
// Wait for program to exit.
cmd.Wait()
}
$ go run main.go
Press enter to exit
Doing stuff...
Doing stuff...
Doing stuff...
Exiting...
The only difference between this and your code is that I'm using stdin to send a newline after 3 seconds to terminate the cmd. Also using scanner for brevity.
Using this as my /usr/lib/demoApp:
package main
import (
"fmt"
"time"
)
func main() {
for {
fmt.Print("North East South West")
time.Sleep(time.Second)
}
}
This program works as expected:
package main
import (
"os"
"os/exec"
)
func main() {
cmd := exec.Command("demoApp")
stdout, err := cmd.StdoutPipe()
if err != nil {
panic(err)
}
cmd.Start()
defer cmd.Wait()
for {
var b [1024]byte
n, err := stdout.Read(b[:])
if n > 0 {
os.Stdout.Write(b[:n]) // only write the bytes actually read
}
if err != nil {
break // io.EOF once demoApp exits
}
}
}

Why is my Go app not reading from sysfs like the busybox `cat` command?

Go 1.12 on Linux 4.19.93 armv6l.
Hardware is a Raspberry Pi Zero W (BCM2835) running a Yocto Linux image.
I've got a gpio driven SRF04 proximity sensor driven by the srf04 linux driver.
It works great over sysfs and the busybox shell.
# cat /sys/bus/iio/devices/iio:device0/in_distance_raw
1646
I've used Go before with IIO devices that support triggers and buffered output at high sample rates on this hardware platform. However for this application the srf04 driver doesn't implement those IIO features. Drat. I don't really feel like adding buffer / trigger support to the driver myself (at this time) since I do not have a need for a 'high' sample rate. A handful of pings per second should suffice for my purpose. I figure I'll calculate mean & std. dev. for a rolling window of data points and 'divine' the signal out of the noise.
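For that rolling window, a minimal sketch with container/ring could look like the following (windowStats is my own name; it assumes the ring stores int samples):
// windowStats walks a container/ring buffer of int samples and returns
// the mean and standard deviation of the values stored so far.
func windowStats(r *ring.Ring) (mean, stddev float64) {
    var sum, n float64
    r.Do(func(v interface{}) {
        if v == nil {
            return // slot not filled yet
        }
        sum += float64(v.(int))
        n++
    })
    if n == 0 {
        return 0, 0
    }
    mean = sum / n
    var sq float64
    r.Do(func(v interface{}) {
        if v != nil {
            d := float64(v.(int)) - mean
            sq += d * d
        }
    })
    return mean, math.Sqrt(sq / n)
}
(Imports: "container/ring" and "math".)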
So with that - I'd be perfectly happy to Read the bytes from the published sysfs file with Go.
Which brings me to the point of this post.
When I open the file for reading, and try to Read() any number of bytes, I always get a generic -EIO error.
func (s *Srf04) Read() (int, error) {
samp := make([]byte, 16)
f, err := os.OpenFile(s.readPath, os.O_RDONLY, os.ModeDevice)
if err != nil {
return 0, err
}
defer f.Close()
n, err := f.Read(samp)
if err != nil {
// This block is always executed.
// The error is never a timeout, and always 'input/output error' (-EIO aka -5)
log.Fatal(err)
}
...
}
This seems like strange behavior to me.
So I decided to mess with using io.ReadFull. This yielded unreliable results.
func (s *Srf04) Read() (int, error) {
samp := make([]byte, 16)
f, err := os.OpenFile(s.readPath, os.O_RDONLY, os.ModeDevice)
if err != nil {
return 0, err
}
defer f.Close()
for {
n, err := io.ReadFull(f, samp)
log.Println("ReadFull ", n, " bytes.")
if err == io.EOF {
break
}
if err != nil {
log.Println(err)
}
}
...
}
I ended up adding it to a loop, as I found the behavior changes between 'one-off' reads and multiple read calls issued one after another. I have it exiting if it gets an EOF, and repeatedly trying to read otherwise.
The results are straight-up crazy unreliable, seemingly random. Sometimes I get the -5, other times I read between 2 and 5 bytes from the device. Sometimes I get bytes with no EOF at all. The bytes appear to be character data for numbers (each rune is a digit in [0-9]), which I'd expect.
Aside: I expect this is related to file polling and the Go blocking I/O implementation, but I have no way to really tell.
As a temporary workaround, I decided to try using os/exec, and now I get results I'd expect to see.
func (s *Srf04)Read() (int, error) {
out, err := exec.Command("cat", s.readPath).Output()
if err != nil {
return 0, err
}
return strconv.Atoi(strings.TrimSpace(string(out))) // trim cat's trailing newline before converting
}
But yick. os/exec. Yuck.
I'd try to run that cat whatever incantation under strace and then peer at what read(2) calls cat actually manages to do (including the number of bytes actually read), and then I'd try to re-create that behaviour in Go.
My own sheer guess at the problem's cause is that the driver (or the sysfs layer) is not too well prepared to deal with certain access patterns.
For a start, consider that GNU cat is not a simple-minded byte shoveler but is rather a reasonably tricky piece of software, which, among other things, considers optimal I/O block sizes for both input and output devices (if available), calls fadvise(2) etc. It's not that any of that gets actually used when you run it on your sysfs-exported file, but it may influence how the full stack (starting with the sysfs layer) performs in the case of using cat and with your code, respectively.
Hence my advice: start with strace-ing the cat and then try to re-create its usage pattern in your Go code; then try to come up with a minimal subset of that, which works; then profoundly comment your code ;-)
I'm sure I've been looking at this too long tonight, and this code is probably terrible. That said, here's the snippet of what I came up with that works just as reliably as the busybox cat, but in Go.
The Srf04 struct carries a few things, the important bits are included below:
type Srf04 struct {
readBuf []byte `json:"-"`
readFile *os.File `json:"-"`
samples *ring.Ring `json:"-"`
}
func (s *Srf04) Read() (int, error) {
/** Reliable, but really really slow.
out, err := exec.Command("cat", s.readPath).Output()
if err != nil {
log.Fatal(err)
}
val, err := strconv.Atoi(string(out[:len(out) - 2]))
if err == nil {
s.samples.Value = val
s.samples = s.samples.Next()
}
*/
// Seek should tell us the new offset (0) and no err.
bytesRead := 0
_, err := s.readFile.Seek(0, 0)
// Loop until N > 0 AND err != EOF && err != timeout.
if err == nil {
n := 0
for {
n, err = s.readFile.Read(s.readBuf)
bytesRead += n
if os.IsTimeout(err) {
// bail out.
bytesRead = 0
break
}
if err == io.EOF {
// Success!
break
}
// Any other err means 'keep trying to read.'
}
}
if bytesRead > 0 {
val, err := strconv.Atoi(string(s.readBuf[:bytesRead-1]))
if err == nil {
fmt.Println(val)
s.samples.Value = val
s.samples = s.samples.Next()
}
return val, err
}
return 0, err
}

Filtering the output of a terminal output using golang

A simple execution of a command in Go gives some output, as shown here: How do you get the output of a system command in Go?
But the code I am using, which shows the output with progress, is from: https://blog.kowalczyk.info/article/wOYk/advanced-command-execution-in-go-with-osexec.html
Now, I can't actually filter the output that I am getting from this; I don't want everything to be printed, only a part of it. Is there a way to do so?
I have already tried capturing the output into a string instead of the goroutine approach, but it didn't work. I want the progress too.
The sample you're pointing to reads from the subprocess's stdout, and for each read it writes what it read to its own stdout while also capturing it:
func copyAndCapture(w io.Writer, r io.Reader) ([]byte, error) {
var out []byte
buf := make([]byte, 1024, 1024)
for {
n, err := r.Read(buf[:])
if n > 0 {
d := buf[:n]
out = append(out, d...)
_, err := w.Write(d)
if err != nil {
return out, err
}
}
if err != nil {
// Read returns io.EOF at the end of file, which is not an error for us
if err == io.EOF {
err = nil
}
return out, err
}
}
}
This function is called with os.Stdout as w.
Now, you're free to filter the data d before you print it out with w.Write.
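For example, you could wrap os.Stdout in a writer that only forwards lines containing a keyword. This is a sketch, not part of the linked article; filterWriter and the "progress" keyword are placeholders:
// filterWriter forwards only lines containing keyword to the underlying writer.
type filterWriter struct {
    w       io.Writer
    keyword string
}

func (f *filterWriter) Write(p []byte) (int, error) {
    for _, line := range strings.Split(string(p), "\n") {
        if strings.Contains(line, f.keyword) {
            if _, err := fmt.Fprintln(f.w, line); err != nil {
                return 0, err
            }
        }
    }
    // Report the full length so the caller does not treat this as a short write.
    return len(p), nil
}
Then call copyAndCapture(&filterWriter{w: os.Stdout, keyword: "progress"}, stdout) instead of passing os.Stdout directly. Note that a single Read may split a line across two Write calls; if that matters, buffer partial lines inside the writer or run the captured output through a bufio.Scanner instead.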

Convert Redigo Pipeline result to strings

I managed to Pipeline multiple HGETALL commands, but I can't manage to convert them to strings.
My sample code is this:
// Initialize Redis (Redigo) client on port 6379
// and default address 127.0.0.1/localhost
client, err := redis.Dial("tcp", ":6379")
if err != nil {
panic(err)
}
defer client.Close()
// Initialize Pipeline
client.Send("MULTI")
// Send writes the command to the connection's output buffer
client.Send("HGETALL", "post:1") // Where "post:1" contains " title 'hi' "
client.Send("HGETALL", "post:2") // Where "post:1" contains " title 'hello' "
// Execute the Pipeline
pipe_prox, err := client.Do("EXEC")
if err != nil {
panic(err)
}
log.Println(pipe_prox)
It is fine as long as you're comfortable showing non-string results. What I'm getting is this:
[[[116 105 116 108 101] [104 105]] [[116 105 116 108 101] [104 101 108 108 111]]]
But what I need is:
"title" "hi" "title" "hello"
I've tried the following and other combinations as well:
result, _ := redis.Strings(pipe_prox, err)
log.Println(result)
But all I get is: []
I should note that it works with multiple HGET key value commands, but that's not what I need.
What am I doing wrong? How should I convert the "numerical map" to strings?
Thanks for any help
Each HGETALL returns its own series of values, which need to be converted to strings, and the pipeline returns a series of those. Use the generic redis.Values to break down this outer structure first, then you can parse the inner slices.
// Execute the Pipeline
pipe_prox, err := redis.Values(client.Do("EXEC"))
if err != nil {
panic(err)
}
for _, v := range pipe_prox {
s, err := redis.Strings(v, nil)
if err != nil {
fmt.Println("Not a bulk strings repsonse", err)
}
fmt.Println(s)
}
prints:
[title hi]
[title hello]
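Since HGETALL replies are field/value pairs, redis.StringMap may be an even better fit than redis.Strings if you want a map per key; a sketch based on the same loop:
for _, v := range pipe_prox {
    m, err := redis.StringMap(v, nil)
    if err != nil {
        fmt.Println("Not a hash response", err)
        continue
    }
    fmt.Println(m) // map[title:hi], then map[title:hello]
}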
You can also do it like this:
pipe_prox, err := redis.Values(client.Do("EXEC"))
if err != nil {
panic(err)
}
for _, v := range pipe_prox {
fmt.Println(v)
}

ReadLine from io.ReadCloser

I need to find a way to read a line from an io.ReadCloser object OR find a way to split a byte slice on an "end line" symbol. However, I don't know the end-of-line symbol and I can't find it.
My application execs a PHP script and needs to get the live output from the script and do "something" with it as it arrives.
Here's a small piece of my code:
cmd := exec.Command(prog, args)
/* cmd := exec.Command("ls")*/
out, err := cmd.StdoutPipe()
if err != nil {
fmt.Println(err)
}
err = cmd.Start()
if err != nil {
fmt.Println(err)
}
After this I monitor the out pipe in a goroutine. I've tried two ways.
1) nr, er := out.Read(buf) where buf is a byte slice. The problem here is that I need to break the buffer at each new line.
2) My second option is to create a new bufio.Reader:
r := bufio.NewReader(out)
line,_,e := r.ReadLine()
It runs fine if I exec a command like ls; I get the output line by line. But if I exec a PHP script it immediately gets an End Of File error and exits (I'm guessing that's because of the delayed output from PHP).
EDIT: My problem was that I was creating the bufio.Reader inside the goroutine; if I create it right after the StdoutPipe() call, like minikomi suggested, it works fine.
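In other words, the working ordering looks roughly like this (a sketch; prog and args are the same variables as above):
cmd := exec.Command(prog, args)
out, err := cmd.StdoutPipe()
if err != nil {
    fmt.Println(err)
}
// Create the reader right after StdoutPipe, before the goroutine starts.
r := bufio.NewReader(out)
if err := cmd.Start(); err != nil {
    fmt.Println(err)
}
go func() {
    for {
        line, _, err := r.ReadLine()
        if err != nil { // io.EOF once the script finishes
            return
        }
        // do "something" with the live output
        fmt.Println(string(line))
    }
}()
cmd.Wait()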
You can create a reader using bufio, and then read until the next line break character (Note, single quotes to denote character!):
stdout, err := cmd.StdoutPipe()
if err != nil {
log.Fatal("Pipe Error:", err)
}
rd := bufio.NewReader(stdout)
if err := cmd.Start(); err != nil {
log.Fatal("Buffer Error:", err)
}
for {
str, err := rd.ReadString('\n')
if err != nil {
log.Fatal("Read Error:", err)
return
}
fmt.Println(str)
}
If you're trying to read from the reader in a goroutine while nothing keeps the program alive (for example, no cmd.Wait), it will exit before the script produces output.
Another option is bufio.NewScanner:
package main
import (
"bufio"
"os/exec"
)
func main() {
cmd := exec.Command("go", "env")
out, err := cmd.StdoutPipe()
if err != nil {
panic(err)
}
buf := bufio.NewScanner(out)
cmd.Start()
defer cmd.Wait()
for buf.Scan() {
println(buf.Text())
}
}
https://golang.org/pkg/bufio#NewScanner
