Here's the error I get when I flood the server with too many packets per second:
2014/11/28 12:52:49 main.go:59: loading plugin: print
2014/11/28 12:52:49 main.go:86: starting server on 0.0.0.0:8080
2014/11/28 12:52:59 server.go:15: client has connected: 127.0.0.1:59146
2014/11/28 12:52:59 server.go:43: received data from client 127.0.0.1:59146: &main.Observation{SensorId:"1", Timestamp:1416492023}
2014/11/28 12:52:59 server.go:29: read error from 127.0.0.1:59146: zlib: invalid header
2014/11/28 12:52:59 server.go:18: closing connection to: 127.0.0.1:59146
It manages to decode one packet (sometimes, maybe 2 or 3) then errors out. Here's the code doing the flooding:
import socket
import struct
import json
import zlib
import time

def serialize(data):
    data = json.dumps(data)
    data = zlib.compress(data)
    packet = struct.pack('!I', len(data))
    packet += data
    return len(data), packet

message = {
    'sensor_id': '1',
    'timestamp': 1416492023,
}

length, buffer = serialize([message])

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(('127.0.0.1', 8080))

while True:
    client.send(buffer)
    #time.sleep(0.0005)
When I uncomment the time.sleep() call, the server works fine. It seems that too many packets per second is killing the server. Why?
Here's the relevant Go code. The connection handler:
func (self *Server) handleConnection(connection net.Conn) {
	for {
		connection.SetReadDeadline(time.Now().Add(30 * time.Second))
		observations, err := self.Protocol.Unserialize(connection)
		if err != nil {
			log.Printf("read error from %s: %s\n", connection.RemoteAddr(), err)
			return
		}
		// rest of the handler (passing observations to the plugins) elided
		_ = observations
	}
}
And here's the unserializer:
// Length Value protocol to read zlib compressed, JSON encoded packets.
type ProtocolV2 struct{}

func (self *ProtocolV2) Unserialize(packet io.Reader) ([]*Observation, error) {
	var length uint32
	if err := binary.Read(packet, binary.BigEndian, &length); err != nil {
		return nil, err
	}

	buffer := make([]byte, length)
	rawreader := bufio.NewReader(packet)
	if _, err := rawreader.Read(buffer); err != nil {
		return nil, err
	}

	bytereader := bytes.NewReader(buffer)
	zreader, err := zlib.NewReader(bytereader)
	if err != nil {
		return nil, err
	}
	defer zreader.Close()

	var observations []*Observation
	decoder := json.NewDecoder(zreader)
	if err := decoder.Decode(&observations); err != nil {
		return nil, err
	}

	return observations, nil
}
It seems there is an error on the client side, in the Python script.
The return value of client.send is not checked, so the script does not handle partial writes correctly. When the socket's send buffer is full, only part of the message gets written, and the server is then unable to decode the stream.
The code is broken either way; adding the sleep only appears to fix it because it keeps the socket buffer from filling up.
You can use client.sendall instead to ensure each write completes.
More information in the Python documentation:
https://docs.python.org/2/library/socket.html
https://docs.python.org/2/howto/sockets.html#using-a-socket
Now in the Go server there is a similar problem. The documentation for bufio.Reader.Read says:
Read reads data into p. It returns the number of bytes read into p. It calls Read at most once on the underlying Reader, hence n may be less than len(p). At EOF, the count will be zero and err will be io.EOF.
So the rawreader.Read call may return fewer bytes than you expect. You may want to use io.ReadFull to ensure the full message is read before it is handed to zlib.
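For illustration, a minimal sketch of Unserialize with the read fixed (same types and imports as in your code; it reads straight from packet with io.ReadFull instead of wrapping it in a second bufio.Reader):

func (self *ProtocolV2) Unserialize(packet io.Reader) ([]*Observation, error) {
	var length uint32
	if err := binary.Read(packet, binary.BigEndian, &length); err != nil {
		return nil, err
	}

	// io.ReadFull keeps reading until buffer is completely filled (or an
	// error occurs), so a short read no longer corrupts the zlib stream.
	buffer := make([]byte, length)
	if _, err := io.ReadFull(packet, buffer); err != nil {
		return nil, err
	}

	zreader, err := zlib.NewReader(bytes.NewReader(buffer))
	if err != nil {
		return nil, err
	}
	defer zreader.Close()

	var observations []*Observation
	if err := json.NewDecoder(zreader).Decode(&observations); err != nil {
		return nil, err
	}
	return observations, nil
}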
Related
I'm building my first go-libp2p application and trying to modify the echo example to read a []byte instead of a string as in the example.
In my code, I changed the doEcho function to run io.ReadAll(s) instead of bufio.NewReader(s) followed by ReadString('\n'):
// doEcho reads a line of data from a stream and writes it back
func doEcho(s network.Stream) error {
	b, err := io.ReadAll(s)
	if err != nil {
		return err
	}

	log.Printf("Number of bytes received: %d", len(b))

	_, err = s.Write([]byte("thanks for the bytes"))
	return err
}
When I run this and send a message, I do see the listener received new stream log but the doEcho function gets stuck after the io.ReadAll(s) call and never executes the reply.
So my questions are:
Why does my code not work and how can I make it work?
How do io.ReadAll(s) and bufio's ReadString('\n') work under the hood, such that they cause this difference in behavior?
Edit:
As per @Stephan Schlecht's suggestion I changed my code to this, but it still remains blocked as before:
func doEcho(s network.Stream) error {
	buf := bufio.NewReader(s)
	var data []byte
	for {
		b, err := buf.ReadByte()
		if err != nil {
			break
		}
		data = append(data, b)
	}

	log.Printf("Number of bytes received: %d", len(data))

	_, err := s.Write([]byte("thanks for the bytes"))
	return err
}
Edit 2: I forgot to clarify this, but I don't want to use ReadString('\n') or ReadBytes('\n') because I don't know anything about the []byte I'm receiving, so it might not end with \n. I want to read any []byte from the stream and then write back to the stream.
ReadString('\n') reads until the first occurrence of \n in the input and returns the string.
io.ReadAll(s) reads until an error or EOF and returns the data it read. So unless an error or EOF occurs it does not return.
In principle, there is no natural size for a data structure to be received on stream-oriented connections.
It depends on the remote sender.
If the remote sender sends binary data and closes the stream after sending the last byte, then you can simply read all data up to the EOF on the receiver side.
If the stream is not to be closed immediately and the data size is variable, there are further possibilities: you first send a header of a fixed size that, in the simplest case, simply carries the length of the data. Once you have received that many bytes, you know this round of reception is complete and you can continue.
Alternatively, you can define a special character that marks the end of the data structure to be transmitted. This will not work if you want to transmit arbitrary binary data without encoding.
There are other options that are a little more complicated, such as splitting the data into blocks.
In the example linked in the question, a \n is sent at the end of the data, but this would not work if you want to send arbitrary binary data.
Adapted Echo Example
In order to minimally modify the echo example linked in the question to first send a 1-byte header with the length of the payload and only then the actual payload, it could look something like the following:
Sending
In the function runSender one could replace the current sending of the payload from:
log.Println("sender saying hello")
_, err = s.Write([]byte("Hello, world!\n"))
if err != nil {
log.Println(err)
return
}
to
log.Println("sender saying hello")
payload := []byte("Hello, world!")
header := []byte{byte(len(payload))}
_, err = s.Write(header)
if err != nil {
log.Println(err)
return
}
_, err = s.Write(payload)
if err != nil {
log.Println(err)
return
}
So we send one byte with the length of the payload before the actual payload.
Echo
The doEcho would then read the header first and afterwards the payload. It uses ReadFull, which reads exactly len(payload) bytes.
func doEcho(s network.Stream) error {
	buf := bufio.NewReader(s)
	header, err := buf.ReadByte()
	if err != nil {
		return err
	}

	payload := make([]byte, header)
	n, err := io.ReadFull(buf, payload)
	log.Printf("payload has %d bytes", n)
	if err != nil {
		return err
	}

	log.Printf("read: %s", payload)

	_, err = s.Write(payload)
	return err
}
Test
Terminal 1
2022/11/06 09:59:38 I am /ip4/127.0.0.1/tcp/8088/p2p/QmVrjAX9QPqihfVFEPJ2apRSUxVCE9wnvqaWanBz2FLY1e
2022/11/06 09:59:38 listening for connections
2022/11/06 09:59:38 Now run "./echo -l 8089 -d /ip4/127.0.0.1/tcp/8088/p2p/QmVrjAX9QPqihfVFEPJ2apRSUxVCE9wnvqaWanBz2FLY1e" on a different terminal
2022/11/06 09:59:55 listener received new stream
2022/11/06 09:59:55 payload has 13 bytes
2022/11/06 09:59:55 read: Hello, world!
Terminal 2
stephan#mac echo % ./echo -l 8089 -d /ip4/127.0.0.1/tcp/8088/p2p/QmVrjAX9QPqihfVFEPJ2apRSUxVCE9wnvqaWanBz2FLY1e
2022/11/06 09:59:55 I am /ip4/127.0.0.1/tcp/8089/p2p/QmW6iSWiFBG5ugUUwBND14pDZzLDaqSNfxBG6yb8cmL3Di
2022/11/06 09:59:55 sender opening stream
2022/11/06 09:59:55 sender saying hello
2022/11/06 09:59:55 read reply: "Hello, world!"
This is certainly a fairly simple example and will need to be adapted to your actual requirements, but it could be a first step in the right direction.
The first TCP connection running on localhost on osx always parses the binary sent to it correctly. Subsequent requests lose the binary data, only seeing the first byte [8]. How have I failed to set up my Reader?
package main

import (
	"fmt"
	"log"
	"net"
	"os"

	"app/src/internal/handler"
	"github.com/golang-collections/collections/stack"
)

func main() {
	port := os.Getenv("SERVER_PORT")
	s := stack.New()

	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatalf("net.Listen: %v", err)
	}
	fmt.Println("Serving on " + port)

	for {
		conn, err := ln.Accept()
		// defer conn.Close()
		if err != nil {
			log.Fatal("ln.Accept")
		}
		go handler.Handle(conn, s)
	}
}
package handler

import (
	"fmt"
	"io"
	"log"
	"net"

	"github.com/golang-collections/collections/stack"
)

func Handle(c net.Conn, s *stack.Stack) {
	fmt.Printf("Serving %s\n", c.RemoteAddr().String())

	buf := make([]byte, 0, 256)
	tmp := make([]byte, 128)

	n, err := c.Read(tmp)
	if err != nil {
		if err != io.EOF {
			log.Fatalf("connection Read() %v", err)
		}
		return
	}
	buf = append(buf, tmp[:n]...)
}
log:
Serving [::1]:51699
------------- value ---------------:QCXhoy5t
Buffer Length: 9. First Value: 8
Serving [::1]:51700
------------- value ---------------:
Buffer Length: 1. First Value: 8
Serving [::1]:51701
test sent over:
push random string:
QCXhoy5t
push random string:
GPh0EnbS
push random string:
4kJ0wN0R
The docs for Reader say:
Read reads up to len(p) bytes into p. It returns the number of bytes read (0 <= n <= len(p)) and any error encountered. Even if Read returns n < len(p), it may use all of p as scratch space during the call. If some data is available but not len(p) bytes, Read conventionally returns what is available instead of waiting for more.
So the most likely cause of your issue is that Read is returning the data available (in this case a single character). You can fix this by using ioutil.ReadAll or by performing the read in a loop (the fact that the data is being appended to a buffer suggests that was the original intention), with something like:
for {
	n, err := c.Read(tmp)
	if err != nil {
		if err != io.EOF {
			// Note that data might have also been received - you should process that
			// if appropriate.
			log.Fatalf("connection Read() %v", err)
			return
		}
		break // All data received so process it
	}
	buf = append(buf, tmp[:n]...)
}
Note: There is no guarantee that any data is received; you should check the length before trying to access it (i.e. buf[0] may panic)
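For instance, a small guard along those lines, reusing the buf from your handler (the print format is just an illustration):

if len(buf) == 0 {
	log.Println("connection closed without sending any data")
	return
}
fmt.Printf("Buffer Length: %d. First Value: %d\n", len(buf), buf[0])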
I have the following code:
var buf []byte
read_len, err := conn.Read(buf)
if err != nil {
	fmt.Println("Error reading:", err.Error())
}

buffer := make([]byte, read_len)
_, err = conn.Read(buffer)
if err != nil {
	fmt.Println("Error reading:", err.Error())
}
The intention was to determine read_len from the first read into buf, then create a second buffer of exactly the length of the incoming JSON request. This just results in the error
unexpected end of JSON input
When I try to unmarshal
var request Device_Type_Request_Struct
err = json.Unmarshal(buffer, &request)
I'm assuming this error occurs because conn.Read(buffer) returns nothing, because another read has already consumed the data (not sure though). How should I go about determining the length of the JSON request while also being able to read it into a buffer of exactly that length?
Read returns the number of bytes read into the buffer. Because the length of the buffer passed to the first call to conn.Read is zero, the first call to conn.Read always returns zero.
There is no way to determine how much data a peer has sent without reading the data.
The easy solution to this problem is to use the JSON decoder:
d := json.NewDecoder(conn)

var request Device_Type_Request_Struct
if err := d.Decode(&request); err != nil {
	// handle error
}
The decoder reads and decodes JSON values from a stream.
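If the connection carries a sequence of JSON requests, the same decoder can be reused in a loop. A rough sketch, assuming the Device_Type_Request_Struct and conn from your code and the io package for io.EOF:

d := json.NewDecoder(conn)
for {
	var request Device_Type_Request_Struct
	if err := d.Decode(&request); err != nil {
		if err == io.EOF {
			break // peer closed the connection
		}
		fmt.Println("Error decoding:", err.Error())
		return
	}
	// handle request here
}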
I'm creating a simple chat server as a personal project to learn the net package and some concurrency in Go. My first idea is to make the server print whatever is sent using the nc command: echo -n "hello" | nc -w1 -4 localhost 2016 -p 61865. However, after the first read my code ignores the subsequent messages.
func (s *Server) messageReader(conn net.Conn) {
	defer conn.Close()
	buffer := make([]byte, 1024)
	for {
		// read buff
		blen, err := conn.Read(buffer)
		if err != nil {
			log.Fatal(err)
		}
		message := string(buffer[:blen])
		if message == "/quit" {
			fmt.Println("quit command received. Bye.")
			return
		}
		if blen > 0 {
			fmt.Println(message)
			buffer = buffer[:0]
		}
	}
}
// Run Start up the server. Manages join and leave chat
func (s *Server) Run() {
	// Listen on port TCP 2016
	listener, err := net.Listen("tcp", ":2016")
	if err != nil {
		log.Fatal(err)
	}
	defer listener.Close()

	for {
		// wait for connection
		conn, err := listener.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go s.messageReader(conn)
	}
}
If I send a new message from a new client it prints without problems, but if I send another one it does nothing. What am I missing? Do I need to reset the Conn or close it and spawn a new one?
After printing your message, you slice buffer down to zero length. You can't read any data into a zero-length slice. There's no reason to re-slice your read buffer at all.
You also need to handle the read bytes before checking for errors, as io.EOF can be returned on a successful read.
You shouldn't use log.Fatal in the server's read loop, as that calls os.Exit and terminates the whole server.
A working messageReader body might look like:
defer conn.Close()
buffer := make([]byte, 1024)
for {
n, err := conn.Read(buffer)
message := string(buffer[:n])
if message == "/quit" {
fmt.Println("quit command received. Bye.")
return
}
if n > 0 {
fmt.Println(message)
}
if err != nil {
log.Println(err)
return
}
}
You should note though that because you're not using any sort of framing protocol here, you can't guarantee that each conn.Read returns a complete or single message. You need to have some sort of higher-level protocol to delimit messages in your stream.
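One simple option, sketched here under the assumption that each message is terminated by a newline, is to scan the stream line by line so each iteration sees exactly one complete message:

func (s *Server) messageReader(conn net.Conn) {
	defer conn.Close()

	// bufio.Scanner splits the stream on '\n', so each Scan returns one
	// complete message no matter how the bytes were chunked on the wire.
	scanner := bufio.NewScanner(conn)
	for scanner.Scan() {
		message := scanner.Text()
		if message == "/quit" {
			fmt.Println("quit command received. Bye.")
			return
		}
		fmt.Println(message)
	}
	if err := scanner.Err(); err != nil {
		log.Println(err)
	}
}

A length-prefixed header works just as well if messages may themselves contain newlines.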
I'm building a server/client application in Go (the language is new to me). I searched a lot and read a whole bunch of different examples, but there is still one thing I can't find. Let's say I have a single server and client up and running. The client will send some kind of message to the server and vice versa. Encoding and decoding is done with the gob package.
This example is not my application, it is only a quick example:
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"log"
)

type Message struct {
	Sender   string
	Receiver string
	Command  uint8
	Value    int64
}

func (message *Message) Set(sender string, receiver string, command uint8, value int64) *Message {
	message.Sender = sender
	message.Receiver = receiver
	message.Command = command
	message.Value = value
	return message
}

func main() {
	var network bytes.Buffer        // Stand-in for a network connection
	enc := gob.NewEncoder(&network) // Will write to network.
	dec := gob.NewDecoder(&network) // Will read from network.

	message := new(Message).Set("first", "second", 10, -1)
	err := enc.Encode(*message) // send message
	if err != nil {
		log.Fatal("encode error:", err)
	}

	var m Message
	err = dec.Decode(&m) // receive message
	if err != nil {
		log.Fatal("decode error:", err)
	}

	fmt.Printf("%q %q %d %d\n", m.Sender, m.Receiver, m.Command, m.Value)
}
This works fine, but I want the server to block until a new message is received, so I can put the receiving process inside an infinite for loop inside a goroutine.
Something like that:
for {
	// The server blocks HERE until a message from the client is received
	fmt.Println("Received message:")

	// Decode the new message
	var m Message
	err = dec.Decode(&m) // receive message
	if err != nil {
		log.Fatal("decode error:", err)
	}

	fmt.Printf("%q %q %d %d\n", m.Sender, m.Receiver, m.Command, m.Value)
}
The gob decoder blocks until it has read a full message or there's an error. The read loop in the question works as is.
working example on the playground
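For reference, a sketch of what that can look like over a real net.Conn, assuming the Message type from the question and one decoder per connection:

func handleConn(conn net.Conn) {
	defer conn.Close()

	dec := gob.NewDecoder(conn)
	for {
		var m Message
		// Decode blocks until a complete gob value has arrived or the
		// connection fails, so the loop needs no extra waiting logic.
		if err := dec.Decode(&m); err != nil {
			log.Println("decode error:", err)
			return
		}
		fmt.Printf("%q %q %d %d\n", m.Sender, m.Receiver, m.Command, m.Value)
	}
}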
Add a length header to the raw TCP stream.
That means sending a 4-byte length header to the server before the real payload. On the server side, read those 4 bytes, allocate a buffer, read the full message into it, and finally decode it.
Assuming you have a TCP connection conn, on the server side we could have:
func getInt(v []byte) int {
	var r uint
	r = 0
	r |= uint(v[0]) << 24
	r |= uint(v[1]) << 16
	r |= uint(v[2]) << 8
	r |= uint(v[3]) << 0
	return int(r)
}

buf := make([]byte, 4)
_, err := io.ReadFull(conn, buf)
if err != nil {
	return
}

length := getInt(buf)
buf = make([]byte, length)
_, err = io.ReadFull(conn, buf)
if err != nil {
	return
}

// do gob decode from `buf` here
You can work out the client side from the server-side source, I think.
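For completeness, here is a sketch of what the matching client side could look like; this is an assumption based on the server code above (a 4-byte big-endian length header written with binary.BigEndian, followed by the gob-encoded payload):

func sendMessage(conn net.Conn, m Message) error {
	// Encode into a temporary buffer first so the length is known
	// before anything is written to the connection.
	var payload bytes.Buffer
	if err := gob.NewEncoder(&payload).Encode(m); err != nil {
		return err
	}

	// 4-byte big-endian length header, matching getInt on the server side.
	header := make([]byte, 4)
	binary.BigEndian.PutUint32(header, uint32(payload.Len()))

	if _, err := conn.Write(header); err != nil {
		return err
	}
	_, err := conn.Write(payload.Bytes())
	return err
}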