Go: Server should block until a message from the client is received

I'm building a server/client application in Go (the language is new to me). I've searched a lot and read a whole bunch of different examples, but there is still one thing I can't find. Let's say I have a single server and a single client up and running. The client will send some kind of message to the server and vice versa. Encoding and decoding are done with the gob package.
This example is not my application, it is only a quick example:
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"log"
)

type Message struct {
	Sender   string
	Receiver string
	Command  uint8
	Value    int64
}

func (message *Message) Set(sender string, receiver string, command uint8, value int64) *Message {
	message.Sender = sender
	message.Receiver = receiver
	message.Command = command
	message.Value = value
	return message
}

func main() {
	var network bytes.Buffer        // Stand-in for a network connection
	enc := gob.NewEncoder(&network) // Will write to network.
	dec := gob.NewDecoder(&network) // Will read from network.
	message := new(Message).Set("first", "second", 10, -1)
	err := enc.Encode(*message) // send message
	if err != nil {
		log.Fatal("encode error:", err)
	}
	var m Message
	err = dec.Decode(&m) // receive message
	if err != nil {
		log.Fatal("decode error:", err)
	}
	fmt.Printf("%q %q %d %d\n", m.Sender, m.Receiver, m.Command, m.Value)
}
This works fine, but I want the server to block until a new message is received, so I can put the receiving process inside an infinite for loop inside a goroutine. Something like this:
for {
	// The server blocks HERE until a message from the client is received
	fmt.Println("Received message:")
	// Decode the new message
	var m Message
	err = dec.Decode(&m) // receive message
	if err != nil {
		log.Fatal("decode error:", err)
	}
	fmt.Printf("%q %q %d %d\n", m.Sender, m.Receiver, m.Command, m.Value)
}

The gob decoder blocks until it has read a full message or there's an error. The read loop in the question works as is.
working example on the playground

An alternative is to add a length header to the raw TCP stream.
That is, send a 4-byte length header before sending the real payload. On the server side, read those 4 bytes, allocate a buffer, read the full message, and finally decode it.
Assuming you have a TCP connection conn, the server side could look like:
func getInt(v []byte) int {
	var r uint
	r |= uint(v[0]) << 24
	r |= uint(v[1]) << 16
	r |= uint(v[2]) << 8
	r |= uint(v[3]) << 0
	return int(r)
}

buf := make([]byte, 4)
_, err := io.ReadFull(conn, buf)
if err != nil {
	return
}
length := getInt(buf)
buf = make([]byte, length)
_, err = io.ReadFull(conn, buf)
if err != nil {
	return
}
// do gob decode from `buf` here
The client side can be inferred from the server-side source, I think.

Related

Getting garbage value while reading packet length from TCP

Problem
I am sending packets over TCP, with the first 8 bytes as a long containing the actual packet length. While receiving, after some point the server reads a wrong packet length, which causes a "slice out of range" error because the received packet length is far too big. Using tcpdump, however, I can see that the correct packet size is being received.
Client TCP Code
package main

import (
	"fmt"
	"net"
	"ByteBuffer"
	"log"
	"sync"
)

func main() {
	conn, err := net.Dial("tcp", "192.168.90.116:8300")
	if err != nil {
		fmt.Println(err)
		return
	}
	byteBuffer := ByteBuffer.Buffer{
		Endian: "big",
	}
	msg := "Hello World"
	totalByteLen := len(msg)
	byteBuffer.PutLong(totalByteLen)
	byteBuffer.Put([]byte(msg))
	log.Println(byteBuffer.Array())
	for i := 0; i < 1000000000000; i++ {
		go write(conn, byteBuffer.Array())
	}
}

var lck = &sync.Mutex{}

func write(conn net.Conn, data []byte) {
	lck.Lock()
	_, err := conn.Write(data)
	lck.Unlock()
	if err != nil {
		return
	}
}
Server TCP Code
func HandleRequest(conn net.Conn) {
	defer conn.Close()
	for {
		// creating an 8 byte buffer array
		sizeBuf := make([]byte, 8)
		// reading from the tcp socket
		_, err := conn.Read(sizeBuf)
		// converting the packet size to int64
		packetSize := int64(binary.BigEndian.Uint64(sizeBuf))
		log.Println(packetSize)
		if packetSize < 0 {
			continue
		}
		// reading more bytes of packetSize length from the tcp pipe
		/*
			Here it catches an error, as the packet size is incorrect,
			but it only throws the error after receiving around 4-5K messages.
		*/
		completePacket := make([]byte, packetSize)
		_, err = conn.Read(completePacket)
		// checking error type
		if err == io.EOF {
			break
		}
		if err != nil {
			break
		}
		fmt.Println(completePacket)
	}
}

Multiple serial requests result in empty buffer

The first TCP connection running on localhost on osx always parses the binary sent to it correctly. Subsequent requests lose the binary data, only seeing the first byte [8]. How have I failed to set up my Reader?
package main

import (
	"fmt"
	"log"
	"net"
	"os"

	"app/src/internal/handler"
	"github.com/golang-collections/collections/stack"
)

func main() {
	port := os.Getenv("SERVER_PORT")
	s := stack.New()
	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatalf("net.Listen: %v", err)
	}
	fmt.Println("Serving on " + port)
	for {
		conn, err := ln.Accept()
		// defer conn.Close()
		if err != nil {
			log.Fatal("ln.Accept")
		}
		go handler.Handle(conn, s)
	}
}
package handler

import (
	"fmt"
	"io"
	"log"
	"net"

	"github.com/golang-collections/collections/stack"
)

func Handle(c net.Conn, s *stack.Stack) {
	fmt.Printf("Serving %s\n", c.RemoteAddr().String())
	buf := make([]byte, 0, 256)
	tmp := make([]byte, 128)
	n, err := c.Read(tmp)
	if err != nil {
		if err != io.EOF {
			log.Fatalf("connection Read() %v", err)
		}
		return
	}
	buf = append(buf, tmp[:n]...)
}
log:
Serving [::1]:51699
------------- value ---------------:QCXhoy5t
Buffer Length: 9. First Value: 8
Serving [::1]:51700
------------- value ---------------:
Buffer Length: 1. First Value: 8
Serving [::1]:51701
test sent over:
push random string:
QCXhoy5t
push random string:
GPh0EnbS
push random string:
4kJ0wN0R
The docs for Reader say:
Read reads up to len(p) bytes into p. It returns the number of bytes read (0 <= n
<= len(p)) and any error encountered. Even if Read returns n < len(p), it may use
all of p as scratch space during the call. If some data is available but not
len(p) bytes, Read conventionally returns what is available instead of waiting
for more.
So the most likely cause of your issue is that Read is returning whatever data is available (in this case a single character). You can fix this by using ioutil.ReadAll, or by performing the read in a loop (the fact that the data is being appended to a buffer suggests that was the original intention), with something like:
for {
	n, err := c.Read(tmp)
	if err != nil {
		if err != io.EOF {
			// Note that data might have also been received - you should process that
			// if appropriate.
			log.Fatalf("connection Read() %v", err)
			return
		}
		break // All data received so process it
	}
	buf = append(buf, tmp[:n]...)
}
Note: There is no guarantee that any data is received; you should check the length before trying to access it (i.e. buf[0] may panic)

golang gob converts pointer to 0 into nil pointer

I'm trying to use Go's net/rpc package to send data structures. The data structure includes a pointer to uint64. The pointer is never nil, but the value may be 0. I'm finding that when the value is 0, the receiver sees a nil pointer. When the value is non-0, the receiver sees a non-nil pointer that points to a proper value. This is problematic, because it means that the RPC is breaking an invariant of my data structure: the pointer will never be nil.
I have a go playground that demonstrates this behavior here: https://play.golang.org/p/Un3bTe5F-P
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"log"
)

type P struct {
	Zero, One int
	Ptr       *int
}

func main() {
	// Initialize the encoder and decoder. Normally enc and dec would be
	// bound to network connections and the encoder and decoder would
	// run in different processes.
	var network bytes.Buffer        // Stand-in for a network connection
	enc := gob.NewEncoder(&network) // Will write to network.
	dec := gob.NewDecoder(&network) // Will read from network.

	// Encode (send) the value.
	var p P
	p.Zero = 0
	p.One = 1
	p.Ptr = &p.Zero
	fmt.Printf("p0: %s\n", p)
	err := enc.Encode(p)
	if err != nil {
		log.Fatal("encode error:", err)
	}

	// Decode (receive) the value.
	var q P
	err = dec.Decode(&q)
	if err != nil {
		log.Fatal("decode error:", err)
	}
	fmt.Printf("q0: %s\n", q)

	p.Ptr = &p.One
	fmt.Printf("p1: %s\n", p)
	err = enc.Encode(p)
	if err != nil {
		log.Fatal("encode error:", err)
	}
	err = dec.Decode(&q)
	if err != nil {
		log.Fatal("decode error:", err)
	}
	fmt.Printf("q1: %s\n", q)
}
The output from this code is:
p0: {%!s(int=0) %!s(int=1) %!s(*int=0x1050a780)}
q0: {%!s(int=0) %!s(int=1) %!s(*int=<nil>)}
p1: {%!s(int=0) %!s(int=1) %!s(*int=0x1050a784)}
q1: {%!s(int=0) %!s(int=1) %!s(*int=0x1050aba8)}
So when Ptr points to a 0, it becomes nil on the receiver side. When Ptr points to 1, it is passed through normally.
Is this a bug? Is there a way around this problem? I want to avoid having to unmarshal my data structure on the receiver side to fix all the unexpected nil pointers...
This behaviour is a limitation of the gob protocol, according to the issue raised back in 2013 - see https://github.com/golang/go/issues/4609
Bear in mind that gob doesn't send pointers; the pointer is dereferenced and the value is passed. As such, when p.Ptr is set to &p.One, you'll find that q.Ptr != &q.One.

How can I keep reading using net Conn Read method

I'm creating a simple chat server as a personal project to learn the net package and some concurrency in Go. My first idea is to make the server print whatever is sent using the nc command: echo -n "hello" | nc -w1 -4 localhost 2016 -p 61865. However, after the first read my code ignores the subsequent messages.
func (s *Server) messageReader(conn net.Conn) {
	defer conn.Close()
	buffer := make([]byte, 1024)
	for {
		// read buff
		blen, err := conn.Read(buffer)
		if err != nil {
			log.Fatal(err)
		}
		message := string(buffer[:blen])
		if message == "/quit" {
			fmt.Println("quit command received. Bye.")
			return
		}
		if blen > 0 {
			fmt.Println(message)
			buffer = buffer[:0]
		}
	}
}
// Run starts up the server. Manages join and leave chat.
func (s *Server) Run() {
	// Listen on TCP port 2016
	listener, err := net.Listen("tcp", ":2016")
	if err != nil {
		log.Fatal(err)
	}
	defer listener.Close()
	for {
		// wait for connection
		conn, err := listener.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go s.messageReader(conn)
	}
}
If I send a new message from a new client, it prints without problems, but if I send another one it does nothing. What am I missing? Do I need to reset the Conn, or close it and spawn a new one?
After printing your message, you slice buffer down to zero length. You can't read any data into a zero-length slice. There's no reason to re-slice your read buffer at all.
You also need to handle the read bytes before checking for errors, as io.EOF can be returned on a successful read.
You shouldn't use log.Fatal in the server's read loop, as that calls os.Exit.
A working messageReader body might look like:
defer conn.Close()
buffer := make([]byte, 1024)
for {
	n, err := conn.Read(buffer)
	message := string(buffer[:n])
	if message == "/quit" {
		fmt.Println("quit command received. Bye.")
		return
	}
	if n > 0 {
		fmt.Println(message)
	}
	if err != nil {
		log.Println(err)
		return
	}
}
You should note though that because you're not using any sort of framing protocol here, you can't guarantee that each conn.Read returns a complete or single message. You need to have some sort of higher-level protocol to delimit messages in your stream.

Server failing to parse packets when flooded too fast

Here's the error I get when I flood the server with too many packets per second:
2014/11/28 12:52:49 main.go:59: loading plugin: print
2014/11/28 12:52:49 main.go:86: starting server on 0.0.0.0:8080
2014/11/28 12:52:59 server.go:15: client has connected: 127.0.0.1:59146
2014/11/28 12:52:59 server.go:43: received data from client 127.0.0.1:59146: &main.Observation{SensorId:"1", Timestamp:1416492023}
2014/11/28 12:52:59 server.go:29: read error from 127.0.0.1:59146: zlib: invalid header
2014/11/28 12:52:59 server.go:18: closing connection to: 127.0.0.1:59146
It manages to decode one packet (sometimes, maybe 2 or 3) then errors out. Here's the code doing the flooding:
import socket
import struct
import json
import zlib
import time
def serialize(data):
    data = json.dumps(data)
    data = zlib.compress(data)
    packet = struct.pack('!I', len(data))
    packet += data
    return len(data), packet

message = {
    'sensor_id': '1',
    'timestamp': 1416492023,
}

length, buffer = serialize([message])

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(('127.0.0.1', 8080))

while True:
    client.send(buffer)
    #time.sleep(0.0005)
When I uncomment the time.sleep() call, the server works fine. It seems that too many packets per second are killing the server. Why?
Here's the relevant Go code. The connection handler:
func (self *Server) handleConnection(connection net.Conn) {
	for {
		connection.SetReadDeadline(time.Now().Add(30 * time.Second))
		observations, err := self.Protocol.Unserialize(connection)
		if err != nil {
			log.Printf("read error from %s: %s\n", connection.RemoteAddr(), err)
			return
		}
		_ = observations // processed further in the full code (elided here)
	}
}
And here's the unserializer:
// Length Value protocol to read zlib compressed, JSON encoded packets.
type ProtocolV2 struct{}

func (self *ProtocolV2) Unserialize(packet io.Reader) ([]*Observation, error) {
	var length uint32
	if err := binary.Read(packet, binary.BigEndian, &length); err != nil {
		return nil, err
	}
	buffer := make([]byte, length)
	rawreader := bufio.NewReader(packet)
	if _, err := rawreader.Read(buffer); err != nil {
		return nil, err
	}
	bytereader := bytes.NewReader(buffer)
	zreader, err := zlib.NewReader(bytereader)
	if err != nil {
		return nil, err
	}
	defer zreader.Close()
	var observations []*Observation
	decoder := json.NewDecoder(zreader)
	if err := decoder.Decode(&observations); err != nil {
		return nil, err
	}
	return observations, nil
}
It seems there is an error on the client side, in the Python script.
The return value of client.send is not checked, so the script does not handle partial writes correctly. Basically, when the socket buffer is full, only part of the message will be written, leaving the server unable to decode it.
This code is broken; adding the wait only makes it work because it keeps the socket buffer from filling up.
You can use client.sendall instead to ensure the write operations are complete.
More information in the Python documentation:
https://docs.python.org/2/library/socket.html
https://docs.python.org/2/howto/sockets.html#using-a-socket
Now in the Go server, there is also a similar problem. The documentation says:
Read reads data into p. It returns the number of bytes read into p. It calls Read at most once on the underlying Reader, hence n may be less than len(p). At EOF, the count will be zero and err will be io.EOF.
The rawreader.Read call may return fewer bytes than you expect. You may want to use the ReadFull() function of the io package to ensure the full message is read.
