Trouble programming a client that allows sending text messages to a server - Go

So I've programmed a server that receives text messages from a connecting client, reverses and upper-cases them, and sends them back.
Now I'm trying to write a client that, once launched, keeps running until I shut it down (Ctrl+C) and lets me input text lines and send them to the server.
I have a problem though: if I type, say, a Cyrillic character, fmt.Scan returns a <nil> <nil> (type, value) error and the client remains stuck unless I somehow flush the input.
I also can't figure out how to read the whole message (whole meaning up to the size of the slice, 1024 bytes) instead of each word separately.
Also, how do I delay my 'enter your message' prompt? Depending on the length of the message I send, the server's answer takes longer or shorter to arrive, and it may be split into several reads. I don't want the prompt popping up between the pieces, just once after the whole answer is received.
Here's the relevant code:
func client() {
	// connect to the server
	c, err := net.Dial("tcp", "127.0.0.1"+":"+port)
	if err != nil {
		log.Printf("Dial error: %T %+v", err, err)
		return
	}
	// send the message
	msg := ""
	for {
		fmt.Print("Enter your message:\n")
		_, errs := fmt.Scan(&msg)
		if errs != nil {
			log.Printf("Scan error: %T %+v", errs, errs)
			return
		}
		fmt.Println("Client sending:\n", msg)
		_, errw := c.Write([]byte(msg))
		if errw != nil {
			log.Printf("Write error: %T %+v", errw, errw)
			return
		}
		// handle the response
		go handleServerResponse(c)
		time.Sleep(1 * time.Second)
	}
}

func main() {
	port = "9999"
	// launch client
	done := make(chan bool)
	go client()
	<-done // block forever
}
I've used the empty channel to block main() from ending.
How should I approach the two problems described above?

Question answered by JimB:
You're using fmt.Scan, which scans space-separated values. Don't use that if you don't want to read each value separately. You can use Scanln to read a line, or just read directly from stdin.
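For illustration, a minimal sketch of such a line-based send loop built on bufio.Scanner (sendLines is a hypothetical helper, not from the question; c is the connection from the question's code; requires the bufio, log, net, and os imports):

func sendLines(c net.Conn) {
	// bufio.Scanner reads whole lines, so spaces and non-ASCII input
	// (e.g. Cyrillic) survive intact, unlike fmt.Scan which splits on whitespace.
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		line := scanner.Text() // the full line, without the trailing newline
		if _, err := c.Write([]byte(line + "\n")); err != nil {
			log.Printf("Write error: %T %+v", err, err)
			return
		}
	}
	if err := scanner.Err(); err != nil {
		log.Printf("Scan error: %T %+v", err, err)
	}
}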

Related

go-libp2p - receiving bytes from stream

I'm building my first go-libp2p application and trying to modify the echo example to read a []byte instead of a string as in the example.
In my code, I changed the doEcho function to run io.ReadAll(s) instead of bufio.NewReader(s) followed by ReadString('\n'):
// doEcho reads a line of data from a stream and writes it back
func doEcho(s network.Stream) error {
	b, err := io.ReadAll(s)
	if err != nil {
		return err
	}
	log.Printf("Number of bytes received: %d", len(b))
	_, err = s.Write([]byte("thanks for the bytes"))
	return err
}
When I run this and send a message, I do see the listener received new stream log, but the doEcho function blocks in the io.ReadAll(s) call and never executes the reply.
So my questions are:
Why does my code not work and how can I make it work?
How does io.ReadAll(s) and bufio's ReadString('\n') work under the hood so that they cause this difference in behavior?
Edit:
As per Stephan Schlecht's suggestion I changed my code to this, but it still remains blocked as before:
func doEcho(s network.Stream) error {
	buf := bufio.NewReader(s)
	var data []byte
	for {
		b, err := buf.ReadByte()
		if err != nil {
			break
		}
		data = append(data, b)
	}
	log.Printf("Number of bytes received: %d", len(data))
	_, err := s.Write([]byte("thanks for the bytes"))
	return err
}
Edit 2: I forgot to clarify this, but I don't want to use ReadString('\n') or ReadBytes('\n') because I don't know anything about the []byte I'm receiving, so it might not end with \n. I want to read any []byte from the stream and then write back to the stream.
ReadString('\n') reads until the first occurrence of \n in the input and returns the string.
io.ReadAll(s) reads until an error or EOF and returns the data it read. So unless an error or EOF occurs it does not return.
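The difference is easy to reproduce outside libp2p. Here is a self-contained sketch using net.Pipe (my own illustration, not the OP's code): ReadString returns as soon as its delimiter arrives, while io.ReadAll only returns once the writer closes its end of the connection.

package main

import (
	"bufio"
	"fmt"
	"io"
	"net"
)

func main() {
	r, w := net.Pipe()

	go func() {
		w.Write([]byte("hello\n")) // send one line, keep the connection open
	}()
	line, _ := bufio.NewReader(r).ReadString('\n')
	fmt.Printf("ReadString returned at the delimiter: %q\n", line)

	go func() {
		w.Write([]byte("more data"))
		w.Close() // only now can ReadAll see EOF and return
	}()
	rest, _ := io.ReadAll(r)
	fmt.Printf("ReadAll returned after Close: %q\n", rest)
}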
In principle, there is no natural size for a data structure to be received on stream-oriented connections.
It depends on the remote sender.
If the remote sender sends binary data and closes the stream after sending the last byte, then you can simply read all data up to the EOF on the receiver side.
If the stream is not to be closed immediately and the data size is variable, there are further possibilities: One first sends a header that has a defined size and in the simplest case simply transmits the length of the data. Once you have received the specified amount of data, you know that this round of reception is complete and you can continue.
Alternatively, you can define a special character that marks the end of the data structure to be transmitted. This will not work if you want to transmit arbitrary binary data without encoding.
There are other options that are a little more complicated, such as splitting the data into blocks.
In the example linked in the question, a \n is sent at the end of the data just sent, but this would not work if you want to send arbitrary binary data.
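For the delimiter approach, a minimal sketch (my illustration, not from the linked example; the delimiter byte is an arbitrary choice and must never occur in the payload, which is exactly why this fails for arbitrary binary data):

// readFramed reads one message terminated by delim and strips the delimiter.
func readFramed(r io.Reader, delim byte) ([]byte, error) {
	data, err := bufio.NewReader(r).ReadBytes(delim)
	if err != nil {
		return nil, err
	}
	return data[:len(data)-1], nil
}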
Adapted Echo Example
In order to minimally modify the echo example linked in the question to first send a 1-byte header with the length of the payload and only then the actual payload, it could look something like the following:
Sending
In the function runSender, one could replace the current sending of the payload from:
log.Println("sender saying hello")
_, err = s.Write([]byte("Hello, world!\n"))
if err != nil {
log.Println(err)
return
}
to
log.Println("sender saying hello")
payload := []byte("Hello, world!")
header := []byte{byte(len(payload))}
_, err = s.Write(header)
if err != nil {
log.Println(err)
return
}
_, err = s.Write(payload)
if err != nil {
log.Println(err)
return
}
So we send one byte with the length of the payload before the actual payload.
Echo
The doEcho would then read the header first and afterwards the payload. It uses ReadFull, which reads exactly len(payload) bytes.
func doEcho(s network.Stream) error {
	buf := bufio.NewReader(s)
	header, err := buf.ReadByte()
	if err != nil {
		return err
	}
	payload := make([]byte, header)
	n, err := io.ReadFull(buf, payload)
	log.Printf("payload has %d bytes", n)
	if err != nil {
		return err
	}
	log.Printf("read: %s", payload)
	_, err = s.Write(payload)
	return err
}
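Note that a single-byte header caps the payload at 255 bytes. The same idea extends to a fixed-size multi-byte header; here is a sketch using a 4-byte big-endian length via encoding/binary (my variation, not part of the linked example; s and buf are as in the code above):

// Sender: write a 4-byte big-endian length prefix, then the payload.
header := make([]byte, 4)
binary.BigEndian.PutUint32(header, uint32(len(payload)))
if _, err = s.Write(header); err != nil {
	log.Println(err)
	return
}
if _, err = s.Write(payload); err != nil {
	log.Println(err)
	return
}

// Receiver: read exactly 4 header bytes, then exactly that many payload bytes.
lenBuf := make([]byte, 4)
if _, err := io.ReadFull(buf, lenBuf); err != nil {
	return err
}
payload := make([]byte, binary.BigEndian.Uint32(lenBuf))
if _, err := io.ReadFull(buf, payload); err != nil {
	return err
}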
Test
Terminal 1
2022/11/06 09:59:38 I am /ip4/127.0.0.1/tcp/8088/p2p/QmVrjAX9QPqihfVFEPJ2apRSUxVCE9wnvqaWanBz2FLY1e
2022/11/06 09:59:38 listening for connections
2022/11/06 09:59:38 Now run "./echo -l 8089 -d /ip4/127.0.0.1/tcp/8088/p2p/QmVrjAX9QPqihfVFEPJ2apRSUxVCE9wnvqaWanBz2FLY1e" on a different terminal
2022/11/06 09:59:55 listener received new stream
2022/11/06 09:59:55 payload has 13 bytes
2022/11/06 09:59:55 read: Hello, world!
Terminal 2
stephan#mac echo % ./echo -l 8089 -d /ip4/127.0.0.1/tcp/8088/p2p/QmVrjAX9QPqihfVFEPJ2apRSUxVCE9wnvqaWanBz2FLY1e
2022/11/06 09:59:55 I am /ip4/127.0.0.1/tcp/8089/p2p/QmW6iSWiFBG5ugUUwBND14pDZzLDaqSNfxBG6yb8cmL3Di
2022/11/06 09:59:55 sender opening stream
2022/11/06 09:59:55 sender saying hello
2022/11/06 09:59:55 read reply: "Hello, world!"
This is a fairly simple example and will certainly need to be customized to your actual requirements, but it could be a first step in the right direction.

How can I receive data forever from a TCP server

I'm trying to create a TCP client that receives data from a TCP server, but after the server sends data I only receive it once, even if the server sends more. I want to receive data forever, and I don't know what my problem is.
Client:
func main() {
	tcpAddr := "localhost:3333"
	conn, err := net.DialTimeout("tcp", tcpAddr, time.Second*7)
	if err != nil {
		log.Println(err)
	}
	defer conn.Close()
	// conn.Write([]byte("Hello World"))
	connBuf := bufio.NewReader(conn)
	for {
		bytes, err := connBuf.ReadBytes('\n')
		if err != nil {
			log.Println("Rrecv Error:", err)
		}
		if len(bytes) > 0 {
			fmt.Println(string(bytes))
		}
		time.Sleep(time.Second * 2)
	}
}
I'm following this example to create TCP test server
Server:
// Handles incoming requests.
func handleRequest(conn net.Conn) {
	// Make a buffer to hold incoming data.
	buf := make([]byte, 1024)
	// Read the incoming connection into the buffer.
	_, err := conn.Read(buf)
	if err != nil {
		fmt.Println("Error reading:", err.Error())
	}
	fmt.Println(buf)
	// Send a response back to person contacting us.
	var msg string
	fmt.Scanln(&msg)
	conn.Write([]byte(msg))
	// Close the connection when you're done with it.
	conn.Close()
}
Read requires a Write on the other side of the connection
want to receive data forever
Then you have to send data forever. There's a for loop on the receiving end, but no looping on the sending end. The server writes its message once and closes the connection.
Server expects to get msg from client but client doesn't send it
// conn.Write([]byte("Hello World"))
That's supposed to provide the msg value to the server
_, err := conn.Read(buf)
So those two lines don't match.
Client expects a newline but server isn't sending one
fmt.Scanln expects to put each whitespace-separated value into the corresponding argument. It does not capture the whitespace. So:
- only up to the first whitespace of what you type into the server's stdin will be stored in msg;
- the newline will not be stored in msg.
But your client is doing
bytes, err := connBuf.ReadBytes('\n')
The \n never comes. The client never gets done reading that first msg.
bufio.NewScanner would be a better way to collect data from stdin, since you're likely to want to capture whitespace as well. Don't forget to append the newline to each line of text you send, because the client expects it!
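For example, the server's read-from-stdin step could be replaced by something like this (a sketch assuming the handleRequest setup above; the explicit '\n' is appended so the client's ReadBytes('\n') can complete):

// Read whole lines (whitespace included) from stdin and send each one
// newline-terminated, which is what the client's ReadBytes('\n') waits for.
stdin := bufio.NewScanner(os.Stdin)
for stdin.Scan() {
	msg := stdin.Text()
	if _, err := conn.Write([]byte(msg + "\n")); err != nil {
		fmt.Println("Send error:", err)
		return
	}
}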
Working code
I put these changes together into a working example on the playground. To get it working in that context, I had to make a few other changes too.
- running server and client in the same process
- hard-coding 3 clients so the program ends in a limited amount of time
- hard-coding 10 receives in the client so the program can end
- hard-coding 3 server connections handled so the program can end
- removing fmt.Scanln and having the server just return the original message sent (because the playground provides no stdin mechanism)
Should be enough to get you started.

Handle goroutine termination and error handling via error group?

I am trying to read multiple files in parallel, in such a way that each goroutine reading a file writes its data to a channel, while a single goroutine listens on that channel and adds the data to a map. Here is my play.
Below is the example from the play:
package main

import (
	"fmt"
	"sync"
)

func main() {
	var myFiles = []string{"file1", "file2", "file3"}
	var myMap = make(map[string][]byte)
	dataChan := make(chan fileData, len(myFiles))
	wg := sync.WaitGroup{}
	defer close(dataChan)
	// we create a wait group of N
	wg.Add(len(myFiles))
	for _, file := range myFiles {
		// we create N go-routines, one per file, each one will return a struct
		// containing their filename and bytes from the file via the dataChan channel
		go getBytesFromFile(file, dataChan, &wg)
	}
	// we wait until the wait group is decremented to zero by each instance of
	// getBytesFromFile() calling waitGroup.Done()
	wg.Wait()
	for i := 0; i < len(myFiles); i++ {
		// we can now read from the data channel N times.
		file := <-dataChan
		myMap[file.name] = file.bytes
	}
	fmt.Printf("%+v\n", myMap)
}

type fileData struct {
	name  string
	bytes []byte
}

// how to handle error from this method if reading file got messed up?
func getBytesFromFile(file string, dataChan chan fileData, waitGroup *sync.WaitGroup) {
	bytes := openFileAndGetBytes(file)
	dataChan <- fileData{name: file, bytes: bytes}
	waitGroup.Done()
}

func openFileAndGetBytes(file string) []byte {
	return []byte(fmt.Sprintf("these are some bytes for file %s", file))
}
Problem Statement
How can I use golang.org/x/sync/errgroup to wait on and handle errors from goroutines, or is there a better way, like using a semaphore? For example, if any one of my goroutines fails to read data from its file, I want to cancel all those remaining (in which case that error is the one bubbled back up to the caller). And in the success case, it should automatically wait for all the supplied goroutines to complete successfully.
I also don't want to spawn 100 goroutines if the total number of files is 100. I want to control the parallelism if possible.
How can I use golang.org/x/sync/errgroup to wait on and handle errors from goroutines, or is there a better way, like using a semaphore? For example [...] I want to cancel all those remaining in the case of any one routine returning an error (in which case that error is the one bubbled back up to the caller). And it should automatically wait for all the supplied goroutines to complete successfully in the success case.
There are many ways to communicate error states across goroutines. errgroup does a bunch of heavy lifting though, and is appropriate for this case. Otherwise you're going to end up implementing the same thing.
To use errgroup we'll need to handle errors (and, for your demo, generate some). In addition, to cancel existing goroutines, we'll use a context from errgroup.WithContext.
From the errgroup reference,
Package errgroup provides synchronization, error propagation, and Context cancelation for groups of goroutines working on subtasks of a common task.
Your play doesn't do any error handling. We can't collect and cancel on errors if we don't do any error handling, so I added some code to inject it:
func openFileAndGetBytes(file string) (string, error) {
	if file == "file2" {
		return "", fmt.Errorf("%s cannot be read", file)
	}
	return fmt.Sprintf("these are some bytes for file %s", file), nil
}
Then that error had to be passed back from getBytesFromFile as well:
func getBytesFromFile(file string, dataChan chan fileData) error {
	bytes, err := openFileAndGetBytes(file)
	if err == nil {
		dataChan <- fileData{name: file, bytes: bytes}
	}
	return err
}
Now that we've done that, we can turn our attention to how we're going to start up a number of goroutines.
I also don't want to spawn 100 go-routines if total number of files is 100. I want to control the parallelism if possible if there is any way.
Written well, the number of tasks, channel size, and number of workers are typically independent values. The trick is to use channel closure - and in your case, context cancellation - to communicate state between the goroutines. We'll need an additional channel for the distribution of filenames, and an additional goroutine for the collection of the results.
To illustrate this point, my code uses 3 workers, and adds a few more files. My channels are unbuffered. This allows us to see some of the files get processed, while others are aborted. If you buffer the channels, the example will still work, but it's more likely for additional work to be processed before the cancellation is handled. Experiment with buffer size along with worker count and number of files to process.
var myFiles = []string{"file1", "file2", "file3", "file4", "file5", "file6"}
fileChan := make(chan string)
dataChan := make(chan fileData)
To start up the workers, instead of starting one for each file, we start the number we desire - here, 3.
for i := 0; i < 3; i++ {
	worker_num := i
	g.Go(func() error {
		for file := range fileChan {
			if err := getBytesFromFile(file, dataChan); err != nil {
				fmt.Println("worker", worker_num, "failed to process", file, ":", err.Error())
				return err
			} else if err := ctx.Err(); err != nil {
				fmt.Println("worker", worker_num, "context error in worker:", err.Error())
				return err
			}
		}
		fmt.Println("worker", worker_num, "processed all work on channel")
		return nil
	})
}
The workers call your getBytesFromFile function. If it returns an err, we return an err. errgroup will cancel our context automatically in this case. However, the exact order of operations is not deterministic, so more files may or may not get processed before the context is cancelled. I'll show several possibilities below.
By ranging over fileChan, the worker automatically picks up the end of work from the channel closure. If we get an error, we can return it to errgroup immediately. Otherwise, if the context has been cancelled, we can return the cancellation error immediately.
You might think that g.Go would automatically cancel our function. But it cannot. There is no way to cancel a running function in Go other than process termination. errgroup.Group.Go's function argument must cancel itself when appropriate based on the state of its context.
Now we can turn our attention to the thing that puts the files on fileChan. We have two options here. We can use a buffered channel of the size of myFiles, like you did, and fill the entire channel with pending jobs; this is only an option if you know the number of jobs when you create the channel. The other option is to use an additional "distribution" goroutine that can block on writes to fileChan so that our "main" goroutine can continue.
// dispatch files
g.Go(func() error {
	defer close(fileChan)
	done := ctx.Done()
	for _, file := range myFiles {
		if err := ctx.Err(); err != nil {
			return err
		}
		select {
		case fileChan <- file:
			continue
		case <-done:
			break
		}
	}
	return ctx.Err()
})
I'm not sure it's strictly necessary to put this in the same errgroup in this case, because we can't get an error in the distributor goroutine. But this general pattern, drawn from the Pipeline example from errgroup, works regardless of whether the work dispatcher might generate errors.
This function is pretty simple, but the magic is in the select along with the ctx.Done() channel. Either we write to the work channel, or we fail if our context is done. This allows us to stop distributing work when one worker has failed one file.
We defer close(fileChan) so that, regardless of why we have finished (either we distributed all work, or the context was cancelled), the workers know there will be no more work on the incoming work queue (ie fileChan).
We need one more synchronization mechanism: once all the work is distributed and all the results are in, or the work has finished being cancelled (e.g., after our errgroup's Wait() returns), we need to close our results channel, dataChan. This signals the results collector that there are no more results to be collected.
var err error // we'll need this later!
go func() {
	err = g.Wait()
	close(dataChan)
}()
We can't - and don't need to - put this in the errgroup.Group. The function can't return an error, and it can't wait for itself to close(dataChan). So it goes into a regular old goroutine, sans errgroup.
Finally we can collect the results. With dedicated worker goroutines, a distributor goroutine, and a goroutine waiting on the work and notifying that there will be no more writes to the dataChan, we can collect all the results right in the "primary" goroutine in main.
for data := range dataChan {
	myMap[data.name] = data.bytes
}
if err != nil { // this was set in our final goroutine, remember
	fmt.Println("errgroup Error:", err.Error())
}
I made a few small changes so that it was easier to see the output. You may already have noticed I changed the file contents from []byte to string. This was purely so that the results were easy to read. To that end, I am also using encoding/json to format the results so that it is easy to read them and paste them into SO. This is a common pattern that I often use to indent structured data:
enc := json.NewEncoder(os.Stdout)
enc.SetIndent("", " ")
if err := enc.Encode(myMap); err != nil {
	panic(err)
}
Finally we're ready to run. Now we can see a number of different results depending on just what order the goroutines execute. But all of them are valid execution paths.
worker 2 failed to process file2 : file2 cannot be read
worker 0 context error in worker: context canceled
worker 1 context error in worker: context canceled
errgroup Error: file2 cannot be read
{
 "file1": "these are some bytes for file file1",
 "file3": "these are some bytes for file file3"
}
Program exited.
In this result, the remaining work (file4, file5, and file6) was never added to the channel. Remember, an unbuffered channel stores no data. For those tasks to be written to the channel, a worker would have to be there to read them. Instead, the context was cancelled after file2 failed, and the distribution function followed the <-done path within its select. file1 and file3 were already processed.
Here's a different result (I just ran the playground share a few times to get different results).
worker 1 failed to process file2 : file2 cannot be read
worker 2 processed all work on channel
worker 0 processed all work on channel
errgroup Error: file2 cannot be read
{
 "file1": "these are some bytes for file file1",
 "file3": "these are some bytes for file file3",
 "file4": "these are some bytes for file file4",
 "file5": "these are some bytes for file file5",
 "file6": "these are some bytes for file file6"
}
In this case, it looks a little like our cancellation failed, but what really happened is that the goroutines just "happened" to queue and finish the rest of the work before errgroup picked up on worker 1's failure and cancelled the context.
what errgroup does
When you use errgroup, you're really getting two things out of it:
- easy access to the first error your workers returned;
- a context that errgroup will cancel for you when any of the group's functions returns an error.
Keep in mind that errgroup does not cancel goroutines. This tripped me up a bit at first. errgroup cancels the context. It's your responsibility to apply the status of that context to your goroutines (remember, the goroutine must end itself; errgroup can't end it).
A final aside about contexts with file operations, and failing outstanding work
Most of your file operations, e.g. io.Copy or os.ReadFile, are actually a loop of successive Read operations. But io and os don't support contexts directly. So if you have a worker reading a file, and you don't implement the Read loop yourself, you won't have an opportunity to cancel based on the context. That's probably okay in your case: sure, you may read a few more files than you really needed to, but only because you were already reading them when the error occurred. I would personally accept this state of affairs and not implement my own read loop.
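If you did want mid-file cancellation, you would have to write the loop yourself. A sketch of what that could look like (readWithContext is my own illustrative helper, not part of io or os):

// readWithContext reads r in chunks, checking the context between Read calls.
// It cannot interrupt a Read that is already in flight; it only stops before
// issuing the next one.
func readWithContext(ctx context.Context, r io.Reader) ([]byte, error) {
	var data []byte
	buf := make([]byte, 32*1024)
	for {
		if err := ctx.Err(); err != nil {
			return data, err // cancelled between chunks
		}
		n, err := r.Read(buf)
		data = append(data, buf[:n]...)
		if err == io.EOF {
			return data, nil
		}
		if err != nil {
			return data, err
		}
	}
}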
The code
https://go.dev/play/p/9qfESp_eB-C
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"os"

	"golang.org/x/sync/errgroup"
)

func main() {
	var myFiles = []string{"file1", "file2", "file3", "file4", "file5", "file6"}
	fileChan := make(chan string)
	dataChan := make(chan fileData)
	g, ctx := errgroup.WithContext(context.Background())
	for i := 0; i < 3; i++ {
		worker_num := i
		g.Go(func() error {
			for file := range fileChan {
				if err := getBytesFromFile(file, dataChan); err != nil {
					fmt.Println("worker", worker_num, "failed to process", file, ":", err.Error())
					return err
				} else if err := ctx.Err(); err != nil {
					fmt.Println("worker", worker_num, "context error in worker:", err.Error())
					return err
				}
			}
			fmt.Println("worker", worker_num, "processed all work on channel")
			return nil
		})
	}
	// dispatch files
	g.Go(func() error {
		defer close(fileChan)
		done := ctx.Done()
		for _, file := range myFiles {
			if err := ctx.Err(); err != nil {
				return err
			}
			select {
			case fileChan <- file:
				continue
			case <-done:
				break
			}
		}
		return ctx.Err()
	})
	var err error
	go func() {
		err = g.Wait()
		close(dataChan)
	}()
	var myMap = make(map[string]string)
	for data := range dataChan {
		myMap[data.name] = data.bytes
	}
	if err != nil {
		fmt.Println("errgroup Error:", err.Error())
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", " ")
	if err := enc.Encode(myMap); err != nil {
		panic(err)
	}
}

type fileData struct {
	name,
	bytes string
}

func getBytesFromFile(file string, dataChan chan fileData) error {
	bytes, err := openFileAndGetBytes(file)
	if err == nil {
		dataChan <- fileData{name: file, bytes: bytes}
	}
	return err
}

func openFileAndGetBytes(file string) (string, error) {
	if file == "file2" {
		return "", fmt.Errorf("%s cannot be read", file)
	}
	return fmt.Sprintf("these are some bytes for file %s", file), nil
}

Concurrent POSTs with multireader do not return a response

I have a proof-of-concept HTTP server using echo which takes a POST request with a JSON body. I am trying to stream the request body to multiple POST requests using pipes and a MultiWriter, but it is not working correctly.
In the example below I can see the data is sent to the 2 POST endpoints and I can see a log from those requests, but I never get a response back; it seems the code hangs waiting for the http.Post(...) calls to complete.
If I call these 2 endpoints directly they work fine and give a valid JSON response, so I believe the problem is with this piece of code, which is my handler for the route.
func ImportAggregate(c echo.Context) error {
	oneR, oneW := io.Pipe()
	twoR, twoW := io.Pipe()
	done := make(chan bool, 2)
	go func() {
		fmt.Println("Product Starting")
		response, err := http.Post("http://localhost:1323/products/import", "application/json", oneR)
		if err != nil {
			fmt.Println(err)
		} else {
			fmt.Println(response.Body)
		}
		done <- true
	}()
	go func() {
		fmt.Println("Import Starting")
		response, err := http.Post("http://localhost:1323/discounts/import", "application/json", twoR)
		if err != nil {
			fmt.Println(err)
		} else {
			fmt.Println(response.Body)
		}
		done <- true
	}()
	mw := io.MultiWriter(oneW, twoW)
	io.Copy(mw, c.Request().Body)
	<-done
	<-done
	return c.String(200, "Imported")
}
The output in console is:
Product Starting
Import Starting
The issue in the OP's code is that the http.Post calls never detect the EOF of the provided io.Reader.
That happens because the write half of each pipe is never closed, so the read half never emits the regular io.EOF error.
As a note about the OP's comment that closing the read half of the pipe would generate irregular errors: one has to understand that reading from a closed pipe is not correct behavior.
Thus, in this situation, care should be taken to close the write half right after the content has been copied.
The resulting source code should be changed to
func ImportAggregate(c echo.Context) error {
	oneR, oneW := io.Pipe()
	twoR, twoW := io.Pipe()
	done := make(chan bool, 2)
	go func() {
		fmt.Println("Product Starting")
		response, err := http.Post("http://localhost:1323/products/import", "application/json", oneR)
		if err != nil {
			fmt.Println(err)
		} else {
			fmt.Println(response.Body)
		}
		done <- true
	}()
	go func() {
		fmt.Println("Import Starting")
		response, err := http.Post("http://localhost:1323/discounts/import", "application/json", twoR)
		if err != nil {
			fmt.Println(err)
		} else {
			fmt.Println(response.Body)
		}
		done <- true
	}()
	mw := io.MultiWriter(oneW, twoW)
	io.Copy(mw, c.Request().Body)
	oneW.Close()
	twoW.Close()
	<-done
	<-done
	return c.String(200, "Imported")
}
Side notes beyond the OP's question:
- an error check must be implemented around the io.Copy in order to detect a transmission error;
- it is not needed to close the read half of the pipe; http.Post will do that after it receives the EOF signal;
- the goroutines responsible for consuming the pipes must be declared and started before the input request is copied: pipes are synchronous, so the code would otherwise block during the io.Copy, waiting to be consumed on the other end;
- the done chan does not need to be buffered (with length 2);
- a way to forward errors from the outgoing requests to the outgoing response is to use a channel of type chan error, loop over it two times, and check for the first error encountered (see the sketch below).
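A minimal sketch of that last point, replacing the done channel with a chan error (my adaptation of the handler above, under the same endpoint assumptions):

errCh := make(chan error, 2)
go func() {
	_, err := http.Post("http://localhost:1323/products/import", "application/json", oneR)
	errCh <- err // nil on success
}()
go func() {
	_, err := http.Post("http://localhost:1323/discounts/import", "application/json", twoR)
	errCh <- err
}()
mw := io.MultiWriter(oneW, twoW)
if _, err := io.Copy(mw, c.Request().Body); err != nil {
	oneW.CloseWithError(err) // unblock both POSTs with the error
	twoW.CloseWithError(err)
	return err
}
oneW.Close()
twoW.Close()
var firstErr error
for i := 0; i < 2; i++ {
	if err := <-errCh; err != nil && firstErr == nil {
		firstErr = err
	}
}
if firstErr != nil {
	return firstErr
}
return c.String(200, "Imported")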

Golang reading from serial

I'm trying to read from a serial port (a GPS device on a Raspberry Pi).
Following the instructions from http://www.modmypi.com/blog/raspberry-pi-gps-hat-and-python
I can read from shell using
stty -F /dev/ttyAMA0 raw 9600 cs8 clocal -cstopb
cat /dev/ttyAMA0
I get well-formatted output:
$GNGLL,5133.35213,N,00108.27278,W,160345.00,A,A*65
$GNRMC,160346.00,A,5153.35209,N,00108.27286,W,0.237,,290418,,,A*75
$GNVTG,,T,,M,0.237,N,0.439,K,A*35
$GNGGA,160346.00,5153.35209,N,00108.27286,W,1,12,0.67,81.5,M,46.9,M,,*6C
$GNGSA,A,3,29,25,31,20,26,23,21,16,05,27,,,1.11,0.67,0.89*10
$GNGSA,A,3,68,73,83,74,84,75,85,67,,,,,1.11,0.67,0.89*1D
$GPGSV,4,1,15,04,,,34,05,14,040,21,09,07,330,,16,45,298,34*40
$GPGSV,4,2,15,20,14,127,18,21,59,154,30,23,07,295,26,25,13,123,22*74
$GPGSV,4,3,15,26,76,281,40,27,15,255,20,29,40,068,19,31,34,199,33*7C
$GPGSV,4,4,15,33,29,198,,36,23,141,,49,30,172,*4C
$GLGSV,3,1,11,66,00,325,,67,13,011,20,68,09,062,16,73,12,156,21*60
$GLGSV,3,2,11,74,62,177,20,75,53,312,36,76,08,328,,83,17,046,25*69
$GLGSV,3,3,11,84,75,032,22,85,44,233,32,,,,35*62
$GNGLL,5153.35209,N,00108.27286,W,160346.00,A,A*6C
$GNRMC,160347.00,A,5153.35205,N,00108.27292,W,0.216,,290418,,,A*7E
$GNVTG,,T,,M,0.216,N,0.401,K,A*3D
$GNGGA,160347.00,5153.35205,N,00108.27292,W,1,12,0.67,81.7,M,46.9,M,,*66
$GNGSA,A,3,29,25,31,20,26,23,21,16,05,27,,,1.11,0.67,0.89*10
$GNGSA,A,3,68,73,83,74,84,75,85,67,,,,,1.11,0.67,0.89*1D
$GPGSV,4,1,15,04,,,34,05,14,040,21,09,07,330,,16,45,298,34*40
(I've put some random data in)
I'm trying to read this in Go. Currently, I have
package main

import (
	"fmt"
	"log"

	"github.com/tarm/serial"
)

func main() {
	config := &serial.Config{
		Name:        "/dev/ttyAMA0",
		Baud:        9600,
		ReadTimeout: 1,
		Size:        8,
	}
	stream, err := serial.OpenPort(config)
	if err != nil {
		log.Fatal(err)
	}
	buf := make([]byte, 1024)
	for {
		n, err := stream.Read(buf)
		if err != nil {
			log.Fatal(err)
		}
		s := string(buf[:n])
		fmt.Println(s)
	}
}
But this prints malformed data. I suspect that this is due to the buffer size or the value of Size in the config struct being wrong, but I'm not sure how to get those values from the stty settings.
Looking back, I think the issue is that I'm getting a stream and I want to be able to iterate over lines of it, rather than over arbitrary chunks. This is how the stream is output:
$GLGSV,3
,1,09,69
,10,017,
,70,43,0
69,,71,3
2,135,27
,76,23,2
32,22*6F
$GLGSV
,3,2,09,
77,35,30
0,21,78,
11,347,,
85,31,08
1,30,86,
72,355,3
6*6C
$G
LGSV,3,3
,09,87,2
4,285,30
*59
$GN
GLL,5153
.34919,N
,00108.2
7603,W,1
92901.00
,A,A*6A
The struct you get back from serial.OpenPort() contains a pointer to an open os.File corresponding to the opened serial port connection. When you Read() from this, the library calls Read() on the underlying os.File.
The documentation for this function call is:
Read reads up to len(b) bytes from the File. It returns the number of bytes read and any error encountered. At end of file, Read returns 0, io.EOF.
This means you have to keep track of how much data was read. You also have to keep track of whether there were newlines, if this is important to you. Unfortunately, the underlying *os.File is not exported, so you'll find it difficult to use tricks like bufio.ReadLine(). It may be worth modifying the library and sending a pull request.
As Matthew Rankin noted in a comment, Port implements io.ReadWriter so you can simply use bufio to read by lines.
stream, err := serial.OpenPort(config)
if err != nil {
	log.Fatal(err)
}

scanner := bufio.NewScanner(stream)
for scanner.Scan() {
	fmt.Println(scanner.Text()) // Println will add back the final '\n'
}
if err := scanner.Err(); err != nil {
	log.Fatal(err)
}
Change
fmt.Println(s)
to
fmt.Print(s)
and you will probably get what you want.
Or did I misunderstand the question?
Two additions to Michael Hampton's answer which can be useful:
line endings
You might receive data that is not newline-separated text. bufio.Scanner uses ScanLines by default to split the received data into lines - but you can also write your own line splitter based on the default function's signature and set it for the scanner:
scanner := bufio.NewScanner(stream)
scanner.Split(ownLineSplitter) // set custom line splitter function
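For example, a custom splitter for "\r\n"-terminated NMEA sentences could look like this (a sketch following the bufio.SplitFunc signature, using the bytes package; adjust the terminator to whatever your device actually sends):

// ownLineSplitter splits the input on "\r\n" instead of bufio.ScanLines'
// more lenient handling of a bare '\n'.
func ownLineSplitter(data []byte, atEOF bool) (advance int, token []byte, err error) {
	if i := bytes.Index(data, []byte("\r\n")); i >= 0 {
		return i + 2, data[:i], nil // token without its terminator
	}
	if atEOF && len(data) > 0 {
		return len(data), data, nil // final, unterminated token
	}
	return 0, nil, nil // request more data
}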
reader shutdown
You might not receive a constant stream but only some packets of bytes from time to time. If no bytes arrive at the port, the scanner will block and you can't just kill it. You'll have to close the stream to do so, effectively raising an error. To not block any outer loops and handle errors appropriately, you can wrap the scanner in a goroutine that takes a context. If the context was cancelled, ignore the error, otherwise forward the error. In principle, this can look like
var errChan = make(chan error)
var dataChan = make(chan []byte)
ctx, cancelPortScanner := context.WithCancel(context.Background())
go func(ctx context.Context) {
	scanner := bufio.NewScanner(stream)
	for scanner.Scan() { // will terminate if connection is closed
		dataChan <- scanner.Bytes()
	}
	// if execution reaches this point, something went wrong or stream was closed
	select {
	case <-ctx.Done():
		return // ctx was cancelled, just return without error
	default:
		errChan <- scanner.Err() // ctx wasn't cancelled, forward error
	}
}(ctx)
// handle data from dataChan, error from errChan
// handle data from dataChan, error from errChan
To stop the scanner, you would cancel the context and close the connection:
cancelPortScanner()
stream.Close()
