How to read data from serial and process it when a specific delimiter is found - go

I have a device which continuously sends data over a serial port.
Now I want to read this data and process it.
The data uses "!" as a delimiter, and as soon as this delimiter appears I want to pause reading and process the data that has already been received.
How can I do that? Is there any documentation or an example that I can read or follow?

For reading data from a serial port you can find a few packages on GitHub, e.g. tarm/serial.
You can use this package to read data from your serial port. In order to read until a specific delimiter is reached, you can use something like:
config := &serial.Config{Name: "/dev/ttyUSB", Baud: 9600}
s, err := serial.OpenPort(config)
if err != nil {
    // stops execution
    log.Fatal(err)
}
// wrap the port in a buffered reader (io.Reader interface)
r := bufio.NewReader(s)
// reads until the delimiter '!' (0x21) is reached
data, err := r.ReadBytes('\x21')
if err != nil {
    // stops execution
    log.Fatal(err)
}
// or use fmt.Printf() with the right verb
// https://golang.org/pkg/fmt/#hdr-Printing
fmt.Println(data)
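Since the device sends data continuously, you would typically call ReadBytes in a loop. A minimal sketch, assuming the tarm/serial package from above and a hypothetical process() handler (the port name /dev/ttyUSB0 is an example):

package main

import (
    "bufio"
    "fmt"
    "log"

    "github.com/tarm/serial"
)

// process is a hypothetical handler for one delimited frame.
func process(frame []byte) {
    fmt.Printf("got frame: %q\n", frame)
}

func main() {
    config := &serial.Config{Name: "/dev/ttyUSB0", Baud: 9600}
    s, err := serial.OpenPort(config)
    if err != nil {
        log.Fatal(err)
    }
    r := bufio.NewReader(s)
    for {
        // ReadBytes blocks until the delimiter arrives, so each
        // iteration yields one complete frame including the '!'
        frame, err := r.ReadBytes('!')
        if err != nil {
            log.Fatal(err)
        }
        process(frame)
    }
}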
See also: Reading from serial port with while-loop

bufio's reader unfortunately did not work for me - it kept crashing after a while. This was a no-go since I needed a stable solution for a low-performance system.
My solution was to implement this suggestion with a small tweak. As noted, if you don't use bufio, the buffer gets overwritten every time you call
n, err := s.Read(buf0)
To fix this, append the bytes from buf0 to a second buffer, buf1:
if n > 0 {
    buf1 = append(buf1, buf0[:n]...)
}
Then parse the bytes stored in buf1. If you find the subset you're looking for, process it further. Two things to keep in mind (see the sketch after this list):
- make sure to clear the buffers in a suitable manner
- make sure to limit the frequency the loop runs at (e.g. with time.Sleep)
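A minimal sketch of such a loop, assuming the serial port s from the first answer, '!' as the delimiter, and a hypothetical process() handler (bytes, log and time must be imported):

buf0 := make([]byte, 128)
var buf1 []byte
for {
    n, err := s.Read(buf0)
    if err != nil {
        log.Fatal(err)
    }
    if n > 0 {
        buf1 = append(buf1, buf0[:n]...)
    }
    // process every complete frame currently sitting in buf1
    for {
        i := bytes.IndexByte(buf1, '!')
        if i < 0 {
            break
        }
        process(buf1[:i]) // hypothetical handler for one frame
        buf1 = buf1[i+1:] // drop the consumed bytes (clears the buffer)
    }
    time.Sleep(10 * time.Millisecond) // limit the loop frequency; value is arbitrary
}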

Related

Getting EOF on 2nd prompt when using a file as Stdin (Golang)

I am trying to do functional testing of a CLI app, similar to this approach.
Since the command asks for a few inputs at the prompt, I am putting them in a file and setting it as os.Stdin.
cmd := exec.Command(path.Join(dir, binaryName), "myArg")
tmpfile := setStdin("TheMasterPassword\nSecondAnswer\n12121212\n")
cmd.Stdin = tmpfile
output, err := cmd.CombinedOutput()
The setStdin function just creates a temp file, writes the string to it and returns the *os.File.
Now, I am expecting TheMasterPassword to be the first input, and that works. But for the second input I always get Critical Error: EOF.
This is the function I am using for prompting and reading user input:
func Ask(question string, minLen int) string {
    reader := bufio.NewReader(os.Stdin)
    for {
        fmt.Printf("%s: ", question)
        response, err := reader.ReadString('\n')
        ExitIfError(err)
        if len(response) >= minLen {
            return strings.TrimSpace(response)
        } else {
            fmt.Printf("Provide at least %d character.\n", minLen)
        }
    }
}
Can you please help me to find out what's going wrong?
Thanks a lot!
Adding setStdin as requested
func setStdin(userInput string) *os.File {
    tmpfile, err := ioutil.TempFile("", "test_stdin_")
    util.ExitIfError(err)
    _, err = tmpfile.Write([]byte(userInput))
    util.ExitIfError(err)
    _, err = tmpfile.Seek(0, 0)
    util.ExitIfError(err)
    return tmpfile
}
It pretty much looks like you call Ask() in your app whenever you want a single line of input.
Inside Ask() you create a bufio.Reader to read from os.Stdin. Know that bufio.Reader, as its name suggests, uses buffered reading, meaning it may read more data from its source than what is returned by its methods (Reader.ReadString() in this case). So if you only use it to read one (or a few) lines and then throw away the reader, you also throw away the buffered, unread data.
The next time you call Ask() and attempt to read from os.Stdin, you will not continue from where you left off.
To fix this, create only a single bufio.Reader from os.Stdin, store it in a package-level variable for example, and always use that single reader inside Ask(). That way buffered but unread data is not lost between Ask() calls. Of course, this solution is not safe to call from multiple goroutines, but neither is reading from a single os.Stdin.
For example:
var reader = bufio.NewReader(os.Stdin)

func Ask(question string, minLen int) string {
    // use the global reader here...
}
Also note that using bufio.Scanner would be easier in your case. But again, bufio.Scanner may also read more data from its source than needed, so you have to use a shared bufio.Scanner here too. Also note that Reader.ReadString() returns a string containing the delimiter (a line ending in \n in your case), which you probably have to trim, while Scanner.Text() (with the default line-splitting function) strips it before returning the line. That's another simplification you can take advantage of.
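For example, a minimal sketch of Ask() rewritten around a shared bufio.Scanner (ExitIfError is the question's own helper and is assumed to ignore a nil error):

var scanner = bufio.NewScanner(os.Stdin)

func Ask(question string, minLen int) string {
    for {
        fmt.Printf("%s: ", question)
        if !scanner.Scan() {
            ExitIfError(scanner.Err()) // Err() is nil on a plain EOF
            return ""
        }
        response := scanner.Text() // line ending is already stripped
        if len(response) >= minLen {
            return response
        }
        fmt.Printf("Provide at least %d characters.\n", minLen)
    }
}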

Extending Golang's http.Resp.Body to handle large files

I have a client application which reads in the full body of a http response into a buffer and performs some processing on it:
body, _ = ioutil.ReadAll(containerObject.Resp.Body)
The problem is that this application runs on an embedded device, so responses that are too large fill up the device RAM, causing Ubuntu to kill the process.
To avoid this, I check the content-length header and bypass processing if the document is too large. However, some servers (I'm looking at you, Microsoft) send very large html responses without setting content-length and crash the device.
The only way I can see of getting around this is to read the response body up to a certain length. If it reaches this limit, then a new reader could be created which first streams the in-memory buffer, then continues reading from the original Resp.Body. Ideally, I would assign this new reader to the containerObject.Resp.Body so that callers would not know the difference.
I'm new to Go and am not sure how to go about coding this. Any suggestions or alternative solutions would be greatly appreciated.
Edit 1: The caller expects a Resp.Body object, so the solution needs to be compatible with that interface.
Edit 2: I cannot parse small chunks of the document. Either the entire document is processed or it is passed unchanged to the caller, without loading it into memory.
If you need to read part of the response body, then reconstruct it in place for other callers, you can use a combination of an io.MultiReader and ioutil.NopCloser:
resp, err := http.Get("http://google.com")
if err != nil {
    return err
}
defer resp.Body.Close()

part, err := ioutil.ReadAll(io.LimitReader(resp.Body, maxReadSize))
if err != nil {
    return err
}
// do something with part

// recombine the buffered part of the body with the rest of the stream
resp.Body = ioutil.NopCloser(io.MultiReader(bytes.NewReader(part), resp.Body))
// do something with the full Response.Body as an io.Reader
If you can't defer resp.Body.Close() because you intend to return the response before it's read in its entirety, you will need to augment the replacement body so that the Close() method applies to the original body. Rather than using the ioutil.NopCloser as the io.ReadCloser, create your own that refers to the correct method calls.
type readCloser struct {
    io.Closer
    io.Reader
}

resp.Body = readCloser{
    Closer: resp.Body,
    Reader: io.MultiReader(bytes.NewReader(part), resp.Body),
}
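Putting the two pieces together for the device use case in the question, a minimal sketch (maxReadSize, processBody and the 1 MiB limit are placeholder choices, not part of any API):

const maxReadSize = 1 << 20 // 1 MiB, an arbitrary example limit

part, err := ioutil.ReadAll(io.LimitReader(resp.Body, maxReadSize))
if err != nil {
    return err
}
// hand the caller a body that looks untouched either way
resp.Body = readCloser{
    Closer: resp.Body,
    Reader: io.MultiReader(bytes.NewReader(part), resp.Body),
}
if len(part) < maxReadSize {
    // the whole document fit in memory; a body of exactly maxReadSize
    // bytes is treated as "too large", erring on the safe side
    processBody(part) // hypothetical processing step
}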

Golang high cpu usage on simple webserver unable to understand why?

So I have a simple net/http webserver. All it does is deliver 100 MB of random bytes, which I intend to use for network speed testing. My handler for the 100 MB endpoint is really simple (pasted below). The code works fine and I get my random-byte file; the problem is that when I run this and someone downloads the 100 megabytes, the CPU usage of this program shoots up to 150% and stays there until the handler finishes. Am I doing something very wrong here? What could I do to improve this handler's performance?
func downloadHandler(w http.ResponseWriter, r *http.Request) {
    str := RandStringBytes(8192) // generates 8192 bytes of randomness
    sz := 1000 * 1000 * 100      // 100 megabytes
    iter := sz/len(str) + 1
    w.Header().Set("Content-Type", "application/octet-stream")
    w.Header().Set("Content-Length", strconv.Itoa(sz))
    for i := 0; i < iter; i++ {
        fmt.Fprintf(w, str)
    }
}
The problem is that fmt.Fprintf() expects a format string:
func Fprintf(w io.Writer, format string, a ...interface{}) (n int, err error)
And you pass it a big, 8 KB format string. The fmt package has to analyze the format string; it is not something that gets written to the output as-is. Most definitely this is what is eating your CPU.
If the random string contains the special % sign, that makes your case even worse, as then fmt.Fprintf() may expect further arguments which you don't "deliver", so the fmt package also has to (and will) include error messages in the output, such as:
fmt.Fprintf(os.Stdout, "aaa%bbb%d")
Output:
aaa%!b(MISSING)bb%!d(MISSING)
Use fmt.Fprint() instead which does not expect a format string:
fmt.Fprint(w, str)
Or even better, convert your random string to a byte slice once, and just keep writing that:
data := []byte(str)
for i := 0; i < iter; i++ {
    if _, err := w.Write(data); err != nil {
        // handle error, e.g. return
    }
}
When delivering a large amount of data, you won't get a faster solution than writing a prepared byte slice in a loop (perhaps slightly faster if you vary the size of the slice). If your solution is still "slow", that might be due to your RandStringBytes() function, which we don't know anything about, or the output might be compressed (gzipped) if you use other handlers or some framework, which costs relatively high CPU. Also, if the client that receives the response runs on the same computer (e.g. a browser), it, or a firewall / antivirus program, may check or analyze the response for malicious code, which may also be resource-intensive.
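For completeness, a sketch of the handler with these changes applied; it also trims the final chunk so that exactly Content-Length bytes are written (RandStringBytes is the question's own helper):

func downloadHandler(w http.ResponseWriter, r *http.Request) {
    data := []byte(RandStringBytes(8192)) // convert to a byte slice once
    size := 1000 * 1000 * 100             // 100 megabytes
    w.Header().Set("Content-Type", "application/octet-stream")
    w.Header().Set("Content-Length", strconv.Itoa(size))
    remaining := size
    for remaining > 0 {
        chunk := data
        if remaining < len(chunk) {
            chunk = chunk[:remaining] // trim the last write to the exact size
        }
        n, err := w.Write(chunk)
        if err != nil {
            return // client went away; stop writing
        }
        remaining -= n
    }
}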

Most efficient way to read Zlib compressed file in Golang?

I'm reading in and at the same time parsing (decoding) a file in a custom format, which is compressed with zlib. My question is how can I efficiently uncompress and then parse the uncompressed content without growing the slice? I would like to parse it whilst reading it into a reusable buffer.
This is for a speed-sensitive application and so I'd like to read it in as efficiently as possible. Normally I would just ioutil.ReadAll and then loop again through the data to parse it. This time I'd like to parse it as it's read, without having to grow the buffer into which it is read, for maximum efficiency.
Basically I'm thinking that if I can find a buffer of the perfect size then I can read into it, parse it, overwrite the buffer with the next read, parse that, and so on. The issue here is that the zlib reader appears to read an arbitrary number of bytes each time Read(b) is called; it does not fill the slice. Because of this I don't know what the perfect buffer size would be. I'm concerned that it might break up some of the data that I wrote into two chunks, making it difficult to parse, because, say, one uint64 could be split across two reads and therefore not occur in the same buffer read. Or perhaps that can never happen and it's always read back in chunks of the same size as were originally written?
What is the optimal buffer size, or is there a way to calculate this?
If I have written data into the zlib writer with f.Write(b []byte) is it possible that this same data could be split into two reads when reading back the compressed data (meaning I will have to have a history during parsing), or will it always come back in the same read?
You can wrap your zlib reader in a bufio reader, then implement a specialized reader on top that will rebuild your chunks of data by reading from the bufio reader until a full chunk is read. Be aware that bufio.Read calls Read at most once on the underlying Reader, so you need to call ReadByte in a loop. bufio will however take care of the unpredictable size of data returned by the zlib reader for you.
If you do not want to implement a specialized reader, you can just go with a bufio reader and read as many bytes as needed with ReadByte() to fill a given data type. The optimal buffer size is at least the size of your largest data structure, up to whatever you can shove into memory.
If you read directly from the zlib reader, there is no guarantee that your data won't be split between two reads.
Another, maybe cleaner, solution is to implement a writer for your data, then use io.Copy(your_writer, zlib_reader).
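As an illustration of the simpler second approach, a minimal sketch, assuming the file is a zlib-compressed stream of little-endian uint64 values (the file layout and the parse function are assumptions):

func parse(filename string) error {
    fi, err := os.Open(filename)
    if err != nil {
        return err
    }
    defer fi.Close()
    zr, err := zlib.NewReader(fi)
    if err != nil {
        return err
    }
    defer zr.Close()
    // bufio smooths out the arbitrary chunk sizes returned by the zlib reader
    br := bufio.NewReader(zr)
    var v uint64
    for {
        // binary.Read uses io.ReadFull internally, so a value split across
        // two underlying zlib reads is reassembled transparently
        err := binary.Read(br, binary.LittleEndian, &v)
        if err == io.EOF {
            break // clean end of stream
        }
        if err != nil {
            return err // includes io.ErrUnexpectedEOF for a truncated value
        }
        // ... process v here ...
    }
    return nil
}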
OK, so I figured this out in the end using my own implementation of a reader.
Basically the struct looks like this:
type reader struct {
    at  int
    n   int
    f   io.ReadCloser
    buf []byte
}
This can be attached to the zlib reader:
// Open file for reading
fi, err := os.Open(filename)
if err != nil {
    return nil, err
}
defer fi.Close()

// Attach zlib reader
r := new(reader)
r.buf = make([]byte, 2048)
r.f, err = zlib.NewReader(fi)
if err != nil {
    return nil, err
}
defer r.f.Close()
Then x number of bytes can be read straight out of the zlib reader using a function like this:
mydata := r.readx(10)
func (r *reader) readx(x int) []byte {
    for r.n < x {
        // compact the unread bytes to the front of the buffer
        copy(r.buf, r.buf[r.at:r.at+r.n])
        r.at = 0
        m, err := r.f.Read(r.buf[r.n:])
        if err != nil {
            panic(err)
        }
        r.n += m
    }
    tmp := make([]byte, x)
    copy(tmp, r.buf[r.at:r.at+x]) // must be copied to avoid a memory leak
    r.at += x
    r.n -= x
    return tmp
}
Note that I have no need to check for EOF, because my parser should stop itself at the right place.

Go - What is really a multipart.File?

In the docs it is said that:
If stored on disk, the File's underlying concrete type will be an *os.File.
In that case everything is clear. Great. But what happens if it's not, if the file is stored in memory?
My actual problem is that I'm trying to get the size of the different files stored in memory that I got through an HTML form, but I cannot use os.Stat to do fileInfo.Size(), because I don't have the location of the file, just its name.
fhs := req.MultipartForm.File["files"]
for _, fileHeader := range fhs {
    file, _ := fileHeader.Open()
    log.Println(len(file)) // Gives an error because file is of type multipart.File
    fileInfo, err := os.Stat(fileHeader.Filename) // Gives an error because it's just the name, not the complete path
    // Here I would do things with the file
}
You can exploit the fact that multipart.File implements io.Seeker to find its size.
cur, err := file.Seek(0, io.SeekCurrent)
size, err := file.Seek(0, io.SeekEnd)
_, err = file.Seek(cur, io.SeekStart)
The first line finds the file's current offset. The second seeks to the end of the file and returns where it is in relation to the beginning of the file. This is the size of the file. The third seeks to the offset we were at before trying to find the size.
You can read more about the seek method here.
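Wrapped into a small helper, a sketch (fileSize is a hypothetical name; it works for any io.Seeker, including a multipart.File kept in memory):

// fileSize reports the size of an io.Seeker without reading it,
// restoring the original offset afterwards.
func fileSize(f io.Seeker) (int64, error) {
    cur, err := f.Seek(0, io.SeekCurrent)
    if err != nil {
        return 0, err
    }
    size, err := f.Seek(0, io.SeekEnd)
    if err != nil {
        return 0, err
    }
    _, err = f.Seek(cur, io.SeekStart)
    return size, err
}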
If you call req.ParseMultipartForm(0), this will write the entire file to disk instead of storing anything in memory. Follow that with f, _, _ := req.FormFile("file"), and then you can stat the file with fi, _ := f.(*os.File).Stat().
Depending on what you want to do with the data, the best thing may be to read the file into a byte slice with ioutil.ReadAll. (You might want the data as a byte slice eventually anyway.) Once you've done that, you can find the length with len.
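For example, inside the loop from the question (a sketch; note that the whole upload ends up in memory, so this only suits reasonably small files):

file, err := fileHeader.Open()
if err != nil {
    log.Println(err)
    continue
}
data, err := ioutil.ReadAll(file)
if err != nil {
    log.Println(err)
    continue
}
log.Printf("%s is %d bytes", fileHeader.Filename, len(data))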
