The first TCP connection to a server running on localhost on macOS always parses the binary sent to it correctly. Subsequent connections lose the binary data, seeing only the first byte [8]. How have I failed to set up my Reader?
package main

import (
    "fmt"
    "log"
    "net"
    "os"

    "app/src/internal/handler"

    "github.com/golang-collections/collections/stack"
)

func main() {
    port := os.Getenv("SERVER_PORT")
    s := stack.New()
    ln, err := net.Listen("tcp", ":8080")
    if err != nil {
        log.Fatalf("net.Listen: %v", err)
    }
    fmt.Println("Serving on " + port)
    for {
        conn, err := ln.Accept()
        // defer conn.Close()
        if err != nil {
            log.Fatal("ln.Accept")
        }
        go handler.Handle(conn, s)
    }
}
package handler

import (
    "fmt"
    "io"
    "log"
    "net"

    "github.com/golang-collections/collections/stack"
)

func Handle(c net.Conn, s *stack.Stack) {
    fmt.Printf("Serving %s\n", c.RemoteAddr().String())

    buf := make([]byte, 0, 256)
    tmp := make([]byte, 128)
    n, err := c.Read(tmp)
    if err != nil {
        if err != io.EOF {
            log.Fatalf("connection Read() %v", err)
        }
        return
    }
    buf = append(buf, tmp[:n]...)
}
log:
Serving [::1]:51699
------------- value ---------------:QCXhoy5t
Buffer Length: 9. First Value: 8
Serving [::1]:51700
------------- value ---------------:
Buffer Length: 1. First Value: 8
Serving [::1]:51701
test sent over:
push random string:
QCXhoy5t
push random string:
GPh0EnbS
push random string:
4kJ0wN0R
The docs for Reader say:
Read reads up to len(p) bytes into p. It returns the number of bytes read (0 <= n
<= len(p)) and any error encountered. Even if Read returns n < len(p), it may use
all of p as scratch space during the call. If some data is available but not
len(p) bytes, Read conventionally returns what is available instead of waiting
for more.
So the most likely cause of your issue is that Read is returning whatever data is currently available (in this case a single byte). You can fix this by using ioutil.ReadAll or by performing the read in a loop (the fact that the data is being appended to a buffer suggests that was the original intention), with something like:
for {
    n, err := c.Read(tmp)
    if err != nil {
        if err != io.EOF {
            // Note that data might have also been received - you should
            // process that if appropriate.
            log.Fatalf("connection Read() %v", err)
            return
        }
        break // All data received so process it
    }
    buf = append(buf, tmp[:n]...)
}
Note: There is no guarantee that any data is received; you should check the length before trying to access it (i.e. buf[0] may panic)
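For completeness, a minimal sketch of the ioutil.ReadAll variant (assuming it is acceptable to block until the client closes its side of the connection, since ReadAll only returns once it hits EOF; it needs the io/ioutil import):

// ReadAll loops internally and treats io.EOF as success, so this
// blocks until the client closes the connection.
buf, err := ioutil.ReadAll(c)
if err != nil {
    log.Fatalf("connection ReadAll() %v", err)
}
// buf now holds everything the client sent (possibly zero bytes).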
Related problem
I am sending packets over TCP with the first 8 bytes as a length header containing the actual packet length. While receiving, at some point the server reads a wrong packet length, which causes a "slice out of range" error because the received packet length is way too big; but using tcpdump I can see the correct packet size arriving.
Client TCP Code
package main

import (
    "fmt"
    "log"
    "net"
    "sync"

    "ByteBuffer"
)

func main() {
    conn, err := net.Dial("tcp", "192.168.90.116:8300")
    if err != nil {
        fmt.Println(err)
        return
    }
    byteBuffer := ByteBuffer.Buffer{
        Endian: "big",
    }
    msg := "Hello World"
    totalByteLen := len(msg)
    byteBuffer.PutLong(totalByteLen)
    byteBuffer.Put([]byte(msg))
    log.Println(byteBuffer.Array())
    for i := 0; i < 1000000000000; i++ {
        go write(conn, byteBuffer.Array())
    }
}

var lck = &sync.Mutex{}

func write(conn net.Conn, data []byte) {
    lck.Lock()
    _, err := conn.Write(data)
    lck.Unlock()
    if err != nil {
        return
    }
}
Server TCP Code
func HandleRequest(conn net.Conn) {
    defer conn.Close()
    for {
        // creating an 8-byte buffer
        sizeBuf := make([]byte, 8)
        // reading from the tcp socket
        _, err := conn.Read(sizeBuf)
        // converting the packet size to int64
        packetSize := int64(binary.BigEndian.Uint64(sizeBuf))
        log.Println(packetSize)
        if packetSize < 0 {
            continue
        }
        // reading more bytes of packetSize length from the tcp pipe
        /*
            Here it catches an error, as the packet size is incorrect,
            but it only throws the error after receiving around 4-5K messages.
        */
        completePacket := make([]byte, packetSize)
        _, err = conn.Read(completePacket)
        // checking error type
        if err == io.EOF {
            break
        }
        if err != nil {
            break
        }
        fmt.Println(completePacket)
    }
}
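The first answer above applies here too: conn.Read may return after filling only part of sizeBuf or completePacket, and once that happens every subsequent length field is read out of alignment. A minimal sketch of the read side using io.ReadFull instead (not the original code; error handling elided, and the fragment is assumed to sit inside a handler like HandleRequest):

// io.ReadFull keeps reading until the buffer is full (or fails with
// ErrUnexpectedEOF), so a short TCP read cannot break the framing.
sizeBuf := make([]byte, 8)
if _, err := io.ReadFull(conn, sizeBuf); err != nil {
    return
}
packetSize := binary.BigEndian.Uint64(sizeBuf)
completePacket := make([]byte, packetSize)
if _, err := io.ReadFull(conn, completePacket); err != nil {
    return
}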
Not able to reset or discard the buffer.
I am trying to get data over the serial port, receiving a fixed-length data packet every 10 seconds. I have an infinite for loop to receive the packets continuously. After receiving a new packet I reset the buffer, but when the next packet arrives it overwrites the buffer and I get a mixed-up packet.
Say I should receive the packet abcdef every n seconds. With the following code I instead receive bcdefa, then after n seconds cdefab, then defabc, and so on:
package main

import (
    "bufio"
    "log"
    "time"

    "github.com/tarm/serial"
)

func main() {
    c := &serial.Config{Name: "/dev/ttyUSB0", Baud: 57600}
    s, err := serial.OpenPort(c)
    if err != nil {
        log.Println(err)
        return
    }
    for {
        time.Sleep(time.Second / 2)
        reader := bufio.NewReader(s)
        pck, err := reader.Peek(46)
        if err != nil {
            log.Println(err)
        }
        go parse(pck)
        reader.Reset(s)
    }
}
How do I reset or discard the buffered data effectively so that I receive the exact data packet?
Bear in mind I can't check what I'm saying here...
1/ You must not instantiate the bufio reader at each iteration.
2/ bufio.Reader.Peek does NOT advance the reader: https://golang.org/pkg/bufio/#Reader.Peek
3/ Unless you get a malformed packet, I think you don't need to reset at all.
4/ Please indent your code at play.golang.org.
5/ You are not checking the read error for termination.
6/ Every package I can find for working with serial ports in Go exposes an io.Reader, so an additional bufio.Reader might be useless. I suspect you're using https://godoc.org/github.com/tarm/serial#OpenPort
This is probably not the definitive answer, but it should help.
package main

import (
    "io"
    "log"
    "time"

    "github.com/tarm/serial"
)

func main() {
    // Same port configuration as in the question.
    c := &serial.Config{Name: "/dev/ttyUSB0", Baud: 57600}
    s, err := serial.OpenPort(c)
    if err != nil {
        log.Fatal(err)
    }
    pck := make([]byte, 46)
    for {
        time.Sleep(time.Second / 2)
        n, err := s.Read(pck)
        if err != nil {
            if err == io.EOF {
                break
            }
            log.Println(err)
        }
        // Copy the bytes read so the next Read cannot overwrite
        // what the parse goroutine is still working on.
        out := make([]byte, n)
        copy(out, pck[:n])
        go parse(out)
    }
}
I'm trying to improve the performance of an app.
One part of its code uploads a file to a server in chunks.
The original version simply does this in a sequential loop. However, it's slow and during the sequence it also needs to talk to another server before uploading each chunk.
The upload of chunks could simply be placed in a goroutine. It works, but is not a good solution because if the source file is extremely large it ends up using a large amount of memory.
So, I try to limit the number of active goroutines by using a buffered channel. Here is some code that shows my attempt. I've stripped it down to show the concept and you can run it to test for yourself.
package main

import (
    "fmt"
    "io"
    "os"
)

const defaultChunkSize = 1 * 1024 * 1024

// Lets have 4 workers
var c = make(chan int, 4)

func UploadFile(f *os.File) error {
    fi, err := f.Stat()
    if err != nil {
        return fmt.Errorf("err: %s", err)
    }
    size := fi.Size()
    total := (int)(size/defaultChunkSize + 1)

    // Upload parts
    buf := make([]byte, defaultChunkSize)
    for partno := 1; partno <= total; partno++ {
        readChunk := func(offset int, buf []byte) (int, error) {
            fmt.Println("readChunk", partno, offset)
            n, err := f.ReadAt(buf, int64(offset))
            if err != nil {
                return n, err
            }
            return n, nil
        }
        // This will block if there are not enough worker slots available
        c <- partno
        // The actual worker.
        go func() {
            offset := (partno - 1) * defaultChunkSize
            n, err := readChunk(offset, buf)
            if err != nil && err != io.EOF {
                return
            }
            err = uploadPart(partno, buf[:n])
            if err != nil {
                fmt.Println("Uploadpart failed:", err)
            }
            <-c
        }()
    }
    return nil
}

func uploadPart(partno int, buf []byte) error {
    fmt.Printf("Uploading partno: %d, buflen=%d\n", partno, len(buf))
    // Actually upload the part. Lets test it by instead writing each
    // buffer to another file. We can then use diff to compare the
    // source and dest files.
    // Open file. Seek to (partno - 1) * defaultChunkSize, write buffer
    f, err := os.OpenFile("/home/matthewh/Downloads/out.tar.gz", os.O_CREATE|os.O_WRONLY, 0755)
    if err != nil {
        fmt.Printf("err: %s\n", err)
    }
    n, err := f.WriteAt(buf, int64((partno-1)*defaultChunkSize))
    if err != nil {
        fmt.Printf("err=%s\n", err)
    }
    fmt.Printf("%d bytes written\n", n)
    defer f.Close()
    return nil
}

func main() {
    filename := "/home/matthewh/Downloads/largefile.tar.gz"
    fmt.Printf("Opening file: %s\n", filename)
    f, err := os.Open(filename)
    if err != nil {
        panic(err)
    }
    UploadFile(f)
}
It almost works, but there are several problems.
1) The final partno, 22, occurs 3 times. The correct length is actually 612545, as the file length isn't a multiple of 1MB.
// Sample output
...
readChunk 21 20971520
readChunk 22 22020096
Uploading partno: 22, buflen=1048576
Uploading partno: 22, buflen=612545
Uploading partno: 22, buflen=1048576
Another problem: the upload could fail, and I am not familiar enough with Go to know how best to handle failure of the goroutine.
Finally, I want to return some data from uploadPart when it succeeds. Specifically, it'll be a string (an HTTP ETag header value). These ETag values need to be collected by the main function.
What is a better way to structure this code in this instance? I've not yet found a good golang design pattern that correctly fulfills my needs here.
Skipping for the moment the question of how better to structure this code, I see a bug in your code which may be causing the problem you're seeing. Since the function you're running in the goroutine uses the variable partno, which changes with each iteration of the loop, your goroutine isn't necessarily seeing the value of partno at the time you invoked the goroutine. A common way of fixing this is to create a local copy of that variable inside the loop:
for partno := 1; partno <= total; partno++ {
    partno := partno
    // ...
}
Data race #1
Multiple goroutines are using the same buffer concurrently. Note that one goroutine may be filling it with a new chunk while another is still reading an old chunk from it. Instead, each goroutine should have its own buffer.
Data race #2
As Andy Schweig has pointed out, the value in partno is updated by the loop before the goroutine created in that iteration has a chance to read it. This is why the final partno, 22, occurs multiple times. To fix it, you can pass partno as an argument to the anonymous function. That ensures each goroutine has its own part number.
Also, you can use a channel to pass the results back from the workers, for example a struct type with the part number, the ETag, and any error. That way, you will be able to observe the progress and retry failed uploads; a sketch follows below.
For an example of a good pattern check out this example from the GOPL book.
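A minimal sketch of that results-channel idea, assuming uploadPart is changed to return the ETag (the uploadResult type and names are illustrative, not from the original code):

// Hypothetical result type carrying the part number, the ETag the
// server returned, and any error, so the caller can collect and retry.
type uploadResult struct {
    partno int
    etag   string
    err    error
}

results := make(chan uploadResult, total)

// Inside each worker goroutine:
//     etag, err := uploadPart(partno, buf[:n])
//     results <- uploadResult{partno: partno, etag: etag, err: err}

// In the caller, collect one result per part:
etags := make(map[int]string)
for i := 0; i < total; i++ {
    r := <-results
    if r.err != nil {
        // record or retry r.partno
        continue
    }
    etags[r.partno] = r.etag
}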
Suggested changes
As noted by dev.bmax, buf is moved into the goroutine; as noted by Andy Schweig, partno is now a parameter to the anonymous function. I also added a sync.WaitGroup, since UploadFile was exiting before the uploads were complete, and a defer f.Close() for the file, which is a good habit.
package main

import (
    "fmt"
    "io"
    "os"
    "sync"
    "time"
)

const defaultChunkSize = 1 * 1024 * 1024

// wg waits for uploads to complete
var wg sync.WaitGroup

// Lets have 4 workers
var c = make(chan int, 4)

func UploadFile(f *os.File) error {
    // wait for all the uploads to complete before function exit
    defer wg.Wait()
    fi, err := f.Stat()
    if err != nil {
        return fmt.Errorf("err: %s", err)
    }
    size := fi.Size()
    fmt.Printf("file size: %v\n", size)
    total := int(size/defaultChunkSize + 1)
    // Upload parts
    for partno := 1; partno <= total; partno++ {
        readChunk := func(offset int, buf []byte, partno int) (int, error) {
            fmt.Println("readChunk", partno, offset)
            n, err := f.ReadAt(buf, int64(offset))
            if err != nil {
                return n, err
            }
            return n, nil
        }
        // This will block if there are not enough worker slots available
        c <- partno
        // Add before starting the goroutine, so Wait cannot return
        // before the worker has registered itself.
        wg.Add(1)
        // The actual worker.
        go func(partno int) {
            defer wg.Done()
            buf := make([]byte, defaultChunkSize)
            offset := (partno - 1) * defaultChunkSize
            n, err := readChunk(offset, buf, partno)
            if err != nil && err != io.EOF {
                return
            }
            err = uploadPart(partno, buf[:n])
            if err != nil {
                fmt.Println("Uploadpart failed:", err)
            }
            <-c
        }(partno)
    }
    return nil
}

func uploadPart(partno int, buf []byte) error {
    fmt.Printf("Uploading partno: %d, buflen=%d\n", partno, len(buf))
    // Actually do the upload. Simulate long running task with a sleep
    time.Sleep(time.Second)
    return nil
}

func main() {
    filename := "/home/matthewh/Downloads/largefile.tar.gz"
    fmt.Printf("Opening file: %s\n", filename)
    f, err := os.Open(filename)
    if err != nil {
        panic(err)
    }
    defer f.Close()
    UploadFile(f)
}
I'm sure you can deal a little more smartly with the buf situation; above I'm just letting Go deal with the garbage. Since you are limiting your workers to a specific number (4), you really only need 4 x defaultChunkSize buffers. Please do share if you come up with something simple and share-worthy; one possible sketch follows.
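A minimal sketch of reusing exactly 4 buffers via a buffered channel acting as a free list (illustrative only, not tested against the code above):

// Allocate one buffer per worker slot up front and recycle them,
// instead of allocating a fresh defaultChunkSize buffer per part.
bufPool := make(chan []byte, 4)
for i := 0; i < 4; i++ {
    bufPool <- make([]byte, defaultChunkSize)
}

// Each worker then takes a buffer and returns it when done:
//     buf := <-bufPool
//     defer func() { bufPool <- buf }()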
Have fun!
I'm building a server/client application in Go (the language is new to me). I searched a lot and read a whole bunch of different examples, but there is still one thing I can't find. Let's say I have a single server and client up and running. The client sends some kind of message to the server and vice versa. Encoding and decoding is done with the gob package.
This example is not my application, it is only a quick example:
package main

import (
    "bytes"
    "encoding/gob"
    "fmt"
    "log"
)

type Message struct {
    Sender   string
    Receiver string
    Command  uint8
    Value    int64
}

func (message *Message) Set(sender string, receiver string, command uint8, value int64) *Message {
    message.Sender = sender
    message.Receiver = receiver
    message.Command = command
    message.Value = value
    return message
}

func main() {
    var network bytes.Buffer        // Stand-in for a network connection
    enc := gob.NewEncoder(&network) // Will write to network.
    dec := gob.NewDecoder(&network) // Will read from network.

    message := new(Message).Set("first", "second", 10, -1)
    err := enc.Encode(*message) // send message
    if err != nil {
        log.Fatal("encode error:", err)
    }

    var m Message
    err = dec.Decode(&m) // receive message
    if err != nil {
        log.Fatal("decode error:", err)
    }
    fmt.Printf("%q %q %d %d\n", m.Sender, m.Receiver, m.Command, m.Value)
}
This works fine, but I want the server to block until a new message is received, so I can put the receiving code inside an infinite for loop inside a goroutine. Something like this:
for {
    // The server blocks HERE until a message from the client is received
    fmt.Println("Received message:")
    // Decode the new message
    var m Message
    err = dec.Decode(&m) // receive message
    if err != nil {
        log.Fatal("decode error:", err)
    }
    fmt.Printf("%q %q %d %d\n", m.Sender, m.Receiver, m.Command, m.Value)
}
The gob decoder blocks until it has read a full message or there's an error. The read loop in the question works as is.
working example on the playground
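To make that concrete, a minimal sketch of a server-side receive loop over a real net.Conn (the bytes.Buffer in the question is only a stand-in; net.Conn is also an io.Reader, so the decoder can read from it directly):

func serve(conn net.Conn) {
    defer conn.Close()
    dec := gob.NewDecoder(conn)
    for {
        // Decode blocks until a full gob message has arrived.
        var m Message
        if err := dec.Decode(&m); err != nil {
            // io.EOF here means the client disconnected.
            log.Println("decode error:", err)
            return
        }
        fmt.Printf("%q %q %d %d\n", m.Sender, m.Receiver, m.Command, m.Value)
    }
}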
Add a length header to the raw TCP stream.
That means: send a 4-byte length header to the server before sending the real payload. On the server side, read 4 bytes, allocate a buffer, read the full message, and finally decode it.
Assuming you have a TCP connection conn, on the server side we could have:
func getInt(v []byte) int {
    var r uint
    r |= uint(v[0]) << 24
    r |= uint(v[1]) << 16
    r |= uint(v[2]) << 8
    r |= uint(v[3]) << 0
    return int(r)
}

buf := make([]byte, 4)
_, err := io.ReadFull(conn, buf)
if err != nil {
    return
}
length := getInt(buf)

buf = make([]byte, length)
_, err = io.ReadFull(conn, buf)
if err != nil {
    return
}
// do gob decode from `buf` here
You can work out the client side by referring to the server-side source, I think.
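For illustration, a sketch of what the matching client side might look like (untested; it encodes the gob into a buffer first so the length is known before writing):

var payload bytes.Buffer
if err := gob.NewEncoder(&payload).Encode(message); err != nil {
    return err
}
// 4-byte big-endian length header, then the gob payload.
header := make([]byte, 4)
binary.BigEndian.PutUint32(header, uint32(payload.Len()))
if _, err := conn.Write(header); err != nil {
    return err
}
if _, err := conn.Write(payload.Bytes()); err != nil {
    return err
}

On the server side, binary.BigEndian.Uint32 from encoding/binary would do the same job as the hand-rolled getInt.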
I am trying to parse a file that annoyingly consists of many separately zlib-compressed segments. I have parsed these segments one at a time into a slice of bytes, and I want to decompress them as I go.
Here is my current code that does the decompressing, which doesn't work. from and to are just set at the top as an example; in reality they are set by the code. data is the byte slice containing the entire file. I don't want to seek the file while it's on disk because it's located on another server, so it's only realistic for me to load the entire file into a []byte first and then parse it.
from, to := 0, 1000
b := bytes.NewReader(data[from : from+to])
z, err := zlib.NewReader(b)
CheckErr(err)
defer z.Close()
p := make([]byte, 0, 1024)
z.Read(p)
fmt.Println(string(p))
So how is it so massively difficult just to unzip a slice of bytes? Anyway...
The problem appears to be with how I am reading it out: where it says z.Read, that doesn't seem to do anything.
How can I read the entire thing in one go into a slice of bytes?
Here's an outline for you. Note: In Go, CHECK FOR ERRORS!
package main

import (
    "bytes"
    "compress/zlib"
    "fmt"
    "io/ioutil"
)

func readSegment(data []byte, from, to int) ([]byte, error) {
    b := bytes.NewReader(data[from : from+to])
    z, err := zlib.NewReader(b)
    if err != nil {
        return nil, err
    }
    defer z.Close()
    p, err := ioutil.ReadAll(z)
    if err != nil {
        return nil, err
    }
    return p, nil
}

func main() {
    from, to := 0, 1000
    data := make([]byte, from+to)
    // ** parse input segments into data **
    p, err := readSegment(data, from, to)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println(string(p))
}
Use ReadAll(r io.Reader) ([]byte, error) from the io/ioutil package, passing it the zlib reader (not the underlying bytes.Reader):

p, err := ioutil.ReadAll(z)
fmt.Println(string(p))
Read only reads up to len(p) bytes, and your slice was created with length 0 (capacity 1024), so nothing is read at all. To read in chunks of 1024 bytes:

p := make([]byte, 1024)
for {
    numBytes, err := z.Read(p)
    // do what you want with p[:numBytes]
    // (it may be non-empty even when err is io.EOF)
    if err == io.EOF {
        break
    }
    if err != nil {
        // handle other errors
        break
    }
}
If you are getting the data from a webserver, you might even do:

import (
    "compress/zlib"
    "io/ioutil"
    "net/http"
)

...

resp, errGet := http.Get("http://example.com/somefile")
// do error handling
defer resp.Body.Close()
z, errZ := zlib.NewReader(resp.Body)
// do error handling
p, err := ioutil.ReadAll(z)
// do error handling

since resp.Body happens to be an io.Reader, like most io-related types.