I have a Go app which needs to listen for input on stdin - not as a one-shot command line utility, but as a long-running process. The following code, slightly edited down, works but has very high CPU load when 'idle', and I am not sure why, nor how this could be done better. So I need the same functionality without the CPU load! (This is part of an authentication handler for ejabberd.)
bioIn := bufio.NewReader(os.Stdin)
bioOut := bufio.NewWriter(os.Stdout)

var err error
var success bool
var length uint16
var result uint16

for {
    binary.Read(bioIn, binary.BigEndian, &length)
    buf := make([]byte, length)
    r, _ := bioIn.Read(buf)
    if r == 0 {
        continue
    }
    if err == nil {
        data := strings.Split(string(buf), ":")
        // I have code to handle the incoming data here...
    } else {
        success = false
    }
    length = 2
    binary.Write(bioOut, binary.BigEndian, &length)
    if success != true {
        result = 0
    } else {
        result = 1
    }
    binary.Write(bioOut, binary.BigEndian, &result)
    bioOut.Flush()
}
ANSWER:
I added a short sleep as suggested and this has worked a charm; it didn't need to be long and there is no noticeable impact on the authentication the service is providing. With the reassurance that incoming data is buffered, this is the perfect fix. Thanks all.
r, _ := bioIn.Read(buf)
if r == 0 {
    time.Sleep(25 * time.Millisecond)
    continue
}
In this part of your code:
r, _ := bioIn.Read(buf)
if r == 0 {
    continue
}
You check for a return value of 0, which means EOF, i.e. the input stream has ended/terminated. Once stdin is terminated, it's not going to come back. So you have an endless loop here: it'll just spin around on this test with bioIn.Read() returning 0 every time.
You can't really do much more than quit/exit at that point; stdin doesn't have any more data for you, ever.
Note that your code also discards the error returned from bioIn.Read() - don't do that, deal with errors - and you would have discovered the cause for yourself in this case.
You have a tight infinite loop, which will always take as much CPU as the OS will allow. You are always checking whether there is any input and (in your sample code) never waiting for new input - you just continuously poll stdin. You should probably use time.Sleep() to wait some time between each check.
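Not from the answers above - just a minimal sketch, reusing the question's variable names, of what the loop looks like with every error checked and the loop ended on EOF instead of spinning:
// Sketch only: check the errors the original code discards and stop on EOF.
for {
    var length uint16
    if err := binary.Read(bioIn, binary.BigEndian, &length); err != nil {
        if err == io.EOF {
            return // stdin is gone for good; nothing left to read
        }
        log.Fatal(err)
    }

    buf := make([]byte, length)
    if _, err := io.ReadFull(bioIn, buf); err != nil {
        log.Fatal(err) // short read or EOF mid-packet
    }

    // ... handle the request in buf and write the 2-byte result as before ...
}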
Related
According to the Go documentation, the net.Conn.Write() function will return an error if it could not send all bytes in the given slice.
How do I know which type of error is returned in this case? Should I just check if there is an error, and if n > 0 and n < len(p)? Or is it enough to check n < len(p) alone? Are there unrecoverable errors for which n < len(p)?
Say I want to send X gigabytes of data, for example. What would a "correct" Write() loop look like?
If the output buffers are full, will Write() simply block until it has sent everything from p, making a check for n < len(p) superfluous?
Well, net.Conn is an interface. This means that it is completely up to the implementation to determine which errors to send back. A TCP connection and UNIX socket connection can have very different reasons why a write can't be fully completed.
The net.Conn.Write signature is exactly the same as the io.Writer signature, which means that every implementation of net.Conn also implements io.Writer. So you can use any existing helper like io.Copy or io.CopyN to write data to a connection.
How do I know which type of error is returned in this case? Should I just check if there is an error, and if n > 0 and n < len(p) ?
Use n < len(p) to detect the case where the write stopped early. The error is not nil in this case.
Are there unrecoverable errors for which n < len(p) ?
Yes. A network connection can fail after some data is written.
There are also recoverable errors. If write fails because the write deadline is exceeded, a later write with a new deadline can succeed.
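Not from the answer - a rough sketch of that recoverable case; whether re-sending the tail actually makes sense depends on your protocol, and the 30-second deadline is an arbitrary illustration:
// Illustrative only: retry the unwritten tail after a write deadline expires.
n, err := conn.Write(p)
if ne, ok := err.(net.Error); ok && ne.Timeout() {
    // The deadline was exceeded after n bytes; extend it and send the rest.
    conn.SetWriteDeadline(time.Now().Add(30 * time.Second))
    _, err = conn.Write(p[n:])
}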
Say I want to send X gigabytes of data for example. What would a "correct" Write() loop look like?
If you are asking how to write a []byte to a connection, then a loop is not needed in most cases. The Write call blocks until the data is written or an error occurs. Use this code:
_, err := c.Write(p)
If you are asking how to copy an io.Reader to a network connection, then use _, err := io.Copy(conn, r).
Write is different from Read. Read can return before filling the buffer.
If the output buffers are full, will Write() simply just block until it has sent everything from p?
Write blocks until all data is sent or the write fails with an error (deadline exceeded, network failure, network connection closed in other goroutine, ...).
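Not part of the answer - a minimal sketch of streaming a large payload with io.Copy as suggested above; the file name and the conn variable are placeholders:
// Sketch: stream a large file without managing a write loop yourself.
f, err := os.Open("huge.dat") // hypothetical multi-gigabyte file
if err != nil {
    log.Fatal(err)
}
defer f.Close()

written, err := io.Copy(conn, f) // blocks until everything is sent or an error occurs
if err != nil {
    log.Printf("sent %d bytes before failing: %v", written, err)
}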
net.Conn.Write() implements the io.Writer interface, which has the following contract regarding errors:
Write must return a non-nil error if it returns n < len(p).
There is no single correct write loop. For certain cases, it might be important to know how much data was written. However, for network connections, based on this contract, the following should work:
var err error
for data, done := getNextSegment(); !done && err == nil; data, done = getNextSegment() {
    _, err = conn.Write(data)
}
To keep the total number of bytes written:
var (
    err     error
    n       int
    written int
)
for data, done := getNextSegment(); !done && err == nil; data, done = getNextSegment() {
    n, err = conn.Write(data)
    written += n
}
So, I went digging into the Go source code itself. I tracked the Write() call to a file named src/internal/poll/fd_unix.go.
// Write implements io.Writer.
func (fd *FD) Write(p []byte) (int, error) {
    if err := fd.writeLock(); err != nil {
        return 0, err
    }
    defer fd.writeUnlock()
    if err := fd.pd.prepareWrite(fd.isFile); err != nil {
        return 0, err
    }
    var nn int
    for {
        max := len(p)
        if fd.IsStream && max-nn > maxRW {
            max = nn + maxRW
        }
        n, err := ignoringEINTRIO(syscall.Write, fd.Sysfd, p[nn:max])
        if n > 0 {
            nn += n
        }
        if nn == len(p) {
            return nn, err
        }
        if err == syscall.EAGAIN && fd.pd.pollable() {
            if err = fd.pd.waitWrite(fd.isFile); err == nil {
                continue
            }
        }
        if err != nil {
            return nn, err
        }
        if n == 0 {
            return nn, io.ErrUnexpectedEOF
        }
    }
}
This seems to handle the retransmits already. So, Write() does actually guarantee that either everything is sent, or a fatal error occurs, which is unrecoverable.
It seems to me, that there is no need at all, to care about the value of n, other than for logging purposes. If an error ever occurs, it is severe enough that there is no reason to try and retransmit the remaining len(p)-n bytes.
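A tiny usage sketch of that conclusion (not from the original answer): the caller only needs to look at the error, and n is useful for logging at most.
// Write either sends all of p or reports an error, so checking err alone is enough.
if n, err := conn.Write(p); err != nil {
    log.Printf("wrote %d of %d bytes: %v", n, len(p), err)
    return err
}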
Go 1.12 on Linux 4.19.93 armv6l.
Hardware is a Raspberry Pi Zero W (BCM2835) running a Yocto Linux image.
I've got a gpio driven SRF04 proximity sensor driven by the srf04 linux driver.
It works great over sysfs and the busybox shell.
# cat /sys/bus/iio/devices/iio:device0/in_distance_raw
1646
I've used Go before with IIO devices that support triggers and buffered output at high sample rates on this hardware platform. However for this application the srf04 driver doesn't implement those IIO features. Drat. I don't really feel like adding buffer / trigger support to the driver myself (at this time) since I do not have a need for a 'high' sample rate. A handful of pings per second should suffice for my purpose. I figure I'll calculate mean & std. dev. for a rolling window of data points and 'divine' the signal out of the noise.
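Not part of the question - just a rough sketch of the rolling mean / standard deviation idea mentioned above; the type name, the fixed window size, and the float64 samples are all assumptions:
import "math"

// rollingStats keeps a fixed-size window of samples (hypothetical helper).
type rollingStats struct {
    window []float64
    next   int
    filled bool
}

func newRollingStats(size int) *rollingStats {
    return &rollingStats{window: make([]float64, size)}
}

// Add stores a sample, overwriting the oldest one once the window is full.
func (r *rollingStats) Add(v float64) {
    r.window[r.next] = v
    r.next = (r.next + 1) % len(r.window)
    if r.next == 0 {
        r.filled = true
    }
}

// MeanStdDev returns the mean and standard deviation of the samples seen so far.
func (r *rollingStats) MeanStdDev() (mean, std float64) {
    n := r.next
    if r.filled {
        n = len(r.window)
    }
    if n == 0 {
        return 0, 0
    }
    for _, v := range r.window[:n] {
        mean += v
    }
    mean /= float64(n)
    var variance float64
    for _, v := range r.window[:n] {
        variance += (v - mean) * (v - mean)
    }
    return mean, math.Sqrt(variance / float64(n))
}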
So with that - I'd be perfectly happy to Read the bytes from the published sysfs file with Go.
Which brings me to the point of this post.
When I open the file for reading, and try to Read() any number of bytes, I always get a generic -EIO error.
func (s *Srf04) Read() (int, error) {
    samp := make([]byte, 16)
    f, err := os.OpenFile(s.readPath, os.O_RDONLY, os.ModeDevice)
    if err != nil {
        return 0, err
    }
    defer f.Close()

    n, err := f.Read(samp)
    if err != nil {
        // This block is always executed.
        // The error is never a timeout, and always 'input/output error' (-EIO aka -5)
        log.Fatal(err)
    }
    ...
}
This seems like strange behavior to me.
So I decided to mess with using io.ReadFull. This yielded unreliable results.
func (s *Srf04) Read() (int, error) {
    samp := make([]byte, 16)
    f, err := os.OpenFile(s.readPath, os.O_RDONLY, os.ModeDevice)
    if err != nil {
        return 0, err
    }
    defer f.Close()

    for {
        n, err := io.ReadFull(f, samp)
        log.Println("ReadFull ", n, " bytes.")
        if err == io.EOF {
            break
        }
        if err != nil {
            log.Println(err)
        }
    }
    ...
}
I ended up adding it to a loop, as I found the behavior changes between 'one-off' reads and multiple read calls made one after another. I have it exiting if it gets an EOF, and repeatedly trying to read otherwise.
The results are straight-up crazy unreliable, seemingly random. Sometimes I get the -5, other times I read between 2 and 5 bytes from the device. Sometimes I get bytes, without an error, before the EOF. The bytes appear to represent character data for numbers (each rune is a digit in [0-9]) -- which I'd expect.
Aside: I expect this is related to file polling and the Go blocking IO implementation, but I have no way to really tell.
As a temporary workaround, I decided to try using os/exec, and now I get results I'd expect to see.
func (s *Srf04) Read() (int, error) {
    out, err := exec.Command("cat", s.readPath).Output()
    if err != nil {
        return 0, err
    }
    return strconv.Atoi(string(out))
}
But Yick. os.exec. Yuck.
I'd try to run that cat whatever incantation under strace and then peer at what read(2) calls cat actually manages to do (including the number of bytes actually read), and then I'd try to re-create that behaviour in Go.
My own sheer guess at the problem's cause is that the driver (or the sysfs layer) is not too well prepared to deal with certain access patterns.
For a start, consider that GNU cat is not a simple-minded byte shoveler but is rather a reasonably tricky piece of software, which, among other things, considers optimal I/O block sizes for both input and output devices (if available), calls fadvise(2) etc. It's not that any of that gets actually used when you run it on your sysfs-exported file, but it may influence how the full stack (starting with the sysfs layer) performs in the case of using cat and with your code, respectively.
Hence my advice: start with strace-ing the cat and then try to re-create its usage pattern in your Go code; then try to come up with a minimal subset of that, which works; then profoundly comment your code ;-)
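For example, something along these lines (the exact syscall filter is just a starting point and can be trimmed to taste):
# strace -f -e trace=openat,read,close cat /sys/bus/iio/devices/iio:device0/in_distance_raw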
I'm sure I've been looking at this too long tonight, and this code is probably terrible. That said, here's the snippet of what I came up with that works just as reliably as the busybox cat, but in Go.
The Srf04 struct carries a few things, the important bits are included below:
type Srf04 struct {
    readBuf  []byte     `json:"-"`
    readFile *os.File   `json:"-"`
    samples  *ring.Ring `json:"-"`
}
func (s *Srf04) Read() (int, error) {
    /** Reliable, but really really slow.
    out, err := exec.Command("cat", s.readPath).Output()
    if err != nil {
        log.Fatal(err)
    }
    val, err := strconv.Atoi(string(out[:len(out) - 2]))
    if err == nil {
        s.samples.Value = val
        s.samples = s.samples.Next()
    }
    */

    // Seek should tell us the new offset (0) and no err.
    bytesRead := 0
    _, err := s.readFile.Seek(0, 0)

    // Loop until N > 0 AND err != EOF && err != timeout.
    if err == nil {
        n := 0
        for {
            n, err = s.readFile.Read(s.readBuf)
            bytesRead += n
            if os.IsTimeout(err) {
                // bail out.
                bytesRead = 0
                break
            }
            if err == io.EOF {
                // Success!
                break
            }
            // Any other err means 'keep trying to read.'
        }
    }

    if bytesRead > 0 {
        val, err := strconv.Atoi(string(s.readBuf[:bytesRead-1]))
        if err == nil {
            fmt.Println(val)
            s.samples.Value = val
            s.samples = s.samples.Next()
        }
        return val, err
    }

    return 0, err
}
I have a goroutine that is constantly blocked reading the stdin, like this:
func routine() {
    for {
        data := make([]byte, 8)
        os.Stdin.Read(data)
        otherChannel <- data
    }
}
The routine waits to read 8 bytes via stdin and feeds another channel.
I want to gracefully stop this goroutine from the main thread. However, since the goroutine will almost always be blocked reading from stdin, I can't find a good solution to force it to stop. I thought about something like:
func routine(stopChannel chan struct{}) {
    for {
        select {
        case <-stopChannel:
            return
        default:
            data := make([]byte, 8)
            os.Stdin.Read(data)
            otherChannel <- data
        }
    }
}
However, the problem is that if there is no more input in the stdin when the stopChannel is closed, the goroutine will stay blocked and not return.
Is there a good approach to make it return immediately when the main thread wants?
Thanks.
To detect that os.Stdin has been closed: check the error value returned by os.Stdin.Read().
One extra point: although you state that in your case you will always receive 8-byte chunks, you should still check that you indeed received 8 bytes of data.
func routine() {
    for {
        data := make([]byte, 8)
        n, err := os.Stdin.Read(data)

        // error handling : the basic thing to do is "on error, return"
        if err != nil {
            // if os.Stdin got closed, .Read() will return 'io.EOF'
            if err == io.EOF {
                log.Printf("stdin closed, exiting")
            } else {
                log.Printf("stdin: %s", err)
            }
            return
        }

        // check that 'n' is big enough :
        if n != 8 {
            log.Printf("short read: only %d byte. exiting", n)
            return // instead of returning, you may want to keep '.Read()'ing
            // or you may use 'io.ReadFull(os.Stdin, data)' instead of '.Read()'
        }

        // a habit to have : truncate your read buffers to 'n' after a .Read()
        otherChannel <- data[:n]
    }
}
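Not part of the original answer - a minimal sketch of the io.ReadFull variant mentioned in the comment above, reusing the question's otherChannel and assuming the usual io/log/os imports:
// Hypothetical variant: io.ReadFull keeps reading until the full 8 bytes arrive
// or the stream ends.
func routineReadFull() {
    for {
        data := make([]byte, 8)
        if _, err := io.ReadFull(os.Stdin, data); err != nil {
            // io.EOF: stdin closed before any byte of this chunk arrived.
            // io.ErrUnexpectedEOF: stdin closed mid-chunk.
            log.Printf("stdin: %s", err)
            return
        }
        otherChannel <- data
    }
}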
When the buffered io writer is used and some error occurs, how can I perform the retry?
For example, I've written 4096B using Write() and an error occurs when the bufwriter automatically flushes the data. Then I want to retry writing the 4096B; how can I do that?
It seems I must keep a 4096B buffer myself to perform the retry. Otherwise I'm not able to get hold of the data that failed to be flushed.
Any suggestions?
You'll have to use a custom io.Writer that keeps a copy of all data, so that it can be re-used in case of a retry.
This functionality is not part of the standard library, but shouldn't be hard to implement yourself.
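Not from the answer - a rough sketch of what such a wrapper could look like; retryWriter, Flush and maxRetries are illustrative names, and the usual io import is assumed:
// retryWriter keeps the unflushed bytes around so a failed flush can be retried.
type retryWriter struct {
    dst     io.Writer
    pending []byte // copy of everything not yet confirmed written
}

func (w *retryWriter) Write(p []byte) (int, error) {
    w.pending = append(w.pending, p...)
    return len(p), nil // data is only buffered here, not written yet
}

// Flush pushes the buffered data to the underlying writer, retrying a few
// times before giving up; the data stays in 'pending' until a write succeeds.
func (w *retryWriter) Flush(maxRetries int) error {
    var err error
    for attempt := 0; attempt < maxRetries; attempt++ {
        var n int
        n, err = w.dst.Write(w.pending)
        w.pending = w.pending[n:]
        if err == nil {
            return nil
        }
    }
    return err
}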
When bufio.Writer fails on a Write(..) it will return the number of bytes written (n) to the buffer and the reason why (err).
What you could do is the following. (Note I haven't tried this yet, so it may be a little wrong and could use some cleaning up.)
func writeSomething(data []byte, w *bufio.Writer) (err error) {
    var pos, written int
    for pos != len(data) {
        written, err = w.Write(data[pos:])
        if err != nil {
            if err == io.ErrShortWrite {
                pos += written // Write was short. Update pos and keep going.
                continue
            } else if netErr, ok := err.(net.Error); ok && netErr.Temporary() {
                continue // Temporary error, don't update pos so it will try writing again
            } else {
                break // Unrecoverable error, bail
            }
        } else {
            pos += written
        }
    }
    return err
}
I wrote a simple server program to receive data from a client. What I don't quite understand is why I sometimes get the error read tcp4 IP:PORT: i/o timeout from the
reqLen, err := conn.Read(buffer) call, even though the time set via SetDeadline() has not been exceeded. I present only part of my code, but I think it will be enough.
The main loop where I receive data is below.
c := NewClient()
c.kickTime: time.Now()

func (c *Client) Listen() {
    durationToClose := time.Minute*time.Duration(5),
    c.conn.SetDeadline(c.kickTime.Add(c.durationToClose))
    buffer := make([]byte, 1024)
    for {
        reqLen, err := c.conn.Read(buffer)
        if err != nil || reqLen == 0 {
            fmt.Printf(err)
            break
        }
        if err = c.CheckData(buffer); err != nil {
            fmt.Printf("something is bad")
        } else {
            result := c.PrepareDataToSendInOtherPlace(buffer)
            go c.RecievedData(result)
        }
        c.conn.SetDeadline(c.kickTime.Add(c.durationToKick))
    }
}
To me, the only suspicious parts are the additional functions PrepareDataToSendInOtherPlace() and CheckData(), which may take some CPU time; new data might then be sent by the client while the server is busy doing something else and the connection gets rejected. This is only my supposition, but I'm not sure.
Syntax errors and undeclared variables aside, what you're showing us can't possibly be walking the Read/Write deadline forward indefinitely.
The longest this could run is until a fixed duration after the first time.Now() (c.kickTime.Add(c.durationToKick)). You probably want something like:
c.conn.SetDeadline(time.Now().Add(c.durationToKick))
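Not from the answer - a rough sketch of the read loop with the deadline re-armed from 'now' on every iteration; the field and method names are taken from the question's code, and error handling is kept minimal:
// Each successful pass through the loop pushes the deadline forward, so the
// connection is only dropped after durationToKick of silence.
func (c *Client) Listen() {
    buffer := make([]byte, 1024)
    for {
        c.conn.SetDeadline(time.Now().Add(c.durationToKick))
        reqLen, err := c.conn.Read(buffer)
        if err != nil || reqLen == 0 {
            fmt.Println(err)
            break
        }
        if err = c.CheckData(buffer); err != nil {
            fmt.Println("something is bad")
            continue
        }
        result := c.PrepareDataToSendInOtherPlace(buffer)
        go c.RecievedData(result)
    }
}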