TCP fixed size message framing method in Golang

I'm not able to understand how message framing with a fixed-size length-prefix header works.
It's said that a fixed-size byte array should contain the length of the message to be sent. But how would you define such a fixed-size byte array, specifically in Go?
Say this is my message:
Hello
Its length is 5.
So if I want to send this through a TCP stream, to make sure I receive the whole message on the other end, I have to tell the receiver how many bytes to read.
A simple header would be length:message:
5:Hello // [53 58 72 101 108 108 111]
But if the message length grows 10x each time, the length itself takes more bytes, so the header size is dynamic that way.
36:Hello, this is just a dumb question. // [51 54 58 72 101 108 108 111 44 32 116 104 105 115 32 105 115 32 106 117 115 116 32 97 32 100 117 109 98 32 113 117 101 115 116 105 111 110 46]
So here the length 36 takes 2 bytes.
One approach I can think of is to set a maximum message length for the protocol, say 10KB = 10240 bytes, and pad the message length with leading 0's. That way I'm sure I always have a fixed 5-byte header.
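Roughly what I have in mind, as a sketch (assuming conn is an already-dialed net.Conn and message is the string to send):

header := fmt.Sprintf("%05d", len(message)) // "00005" for "Hello"; 5 digits covers lengths up to 10240
conn.Write([]byte(header + message))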
Would this work for all cases?
If yes, what if I have a message larger than 10KB, should I split it into 2 messages?
If not, what are other solutions?
I want to implement the solutions in Golang.
UPDATE 1:
I read about endianness, although I couldn't understand how it produces a fixed-length header. But I found an example in Python and tried to write it in Go this way:
Client:
const maxLengthBytes = 8

conn, err := net.Dial("tcp", "127.0.0.1:9999")
if err != nil {
	fmt.Println(err)
	return
}
message := "Hello, this is just a dumb question"
// write the length as a fixed 8-byte little-endian prefix, then the message
bs := make([]byte, maxLengthBytes)
binary.LittleEndian.PutUint64(bs, uint64(len(message)))
payload := append(bs, []byte(message)...)
conn.Write(payload)
Server:
listener, err := net.ListenTCP("tcp", &net.TCPAddr{Port: 9999})
if err != nil {
	fmt.Println(err)
	return
}
for {
	tcp, err := listener.AcceptTCP()
	if err != nil {
		fmt.Println(err)
		continue
	}
	go Reader(tcp)
}
func Reader(conn *net.TCPConn) {
	foundLength := false
	messageLength := 0
	for {
		if !foundLength {
			// read the 8-byte length prefix first
			b := make([]byte, maxLengthBytes)
			read, err := conn.Read(b)
			if err != nil {
				fmt.Println(err)
				continue
			}
			if read != 8 {
				fmt.Println("invalid header")
				continue
			}
			foundLength = true
			messageLength = int(binary.LittleEndian.Uint64(b))
		} else {
			// then read the message body of that length
			message := make([]byte, messageLength)
			read, err := conn.Read(message)
			if err != nil {
				fmt.Println(err)
				continue
			}
			if read != messageLength {
				fmt.Println("invalid data")
				continue
			}
			fmt.Println("Received:", string(message))
			foundLength = false
			messageLength = 0
		}
	}
}

Please refer to my answer in this post
TCP client for Android: text is not received in full
Basically, you have to define how the data is stored/formatted.
For example:
We store the prefix length as an int32 (4 bytes) with little endian. That's different from yours: with your solution the length is a string, and it's hard to keep a string's length fixed. If you stay with a string, you have to use a fixed-length string, for example 10 characters with leading zeros.
For your questions:
It doesn't work for all cases with just a length prefix; it has its limitations. For example, if we use an int32 as the prefix, the message length must be less than the maximum value of int32, right?
Yes, we have to split or even combine messages (please refer to my explanation in the link above).
We have many ways to deal with the length limitation if it's a concern (actually, almost all application protocols have a maximum request size). You could use one more bit to indicate whether or not the message exceeds the max length, right?
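A minimal sketch of that scheme in Go (my own illustration, with made-up helper names writeFrame/readFrame; a 4-byte little-endian uint32 prefix as described above). The important part on the receiving side is io.ReadFull, since a single conn.Read may return fewer bytes than requested:

package main

import (
	"encoding/binary"
	"io"
)

const headerSize = 4 // fixed 4-byte little-endian length prefix

// writeFrame sends one length-prefixed message.
func writeFrame(w io.Writer, msg []byte) error {
	header := make([]byte, headerSize)
	binary.LittleEndian.PutUint32(header, uint32(len(msg)))
	if _, err := w.Write(header); err != nil {
		return err
	}
	_, err := w.Write(msg)
	return err
}

// readFrame reads one length-prefixed message. io.ReadFull keeps
// reading until the buffer is full, so a short TCP read cannot
// split the header or the body.
func readFrame(r io.Reader) ([]byte, error) {
	header := make([]byte, headerSize)
	if _, err := io.ReadFull(r, header); err != nil {
		return nil, err
	}
	msg := make([]byte, binary.LittleEndian.Uint32(header))
	if _, err := io.ReadFull(r, msg); err != nil {
		return nil, err
	}
	return msg, nil
}

With a binary uint32 prefix the header is always exactly 4 bytes, so no leading zeros are needed.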

Related

json.Marshal returning weird values

I want an array of JSON from SQL rows. When I try to marshal each struct after scanning each row, it returns weird values like [123 34 105 100 34 ..]
type Org struct {
	Id   int    `json:"id"`
	Name string `json:"name"`
}

res, err := db.Query("select id,name from organization")
if err != nil {
	// fmt.Print("err in query")
	panic(err)
}
// var orgArray []Org
defer res.Close()
for res.Next() {
	var org Org
	fmt.Println(&org.Id, &org.Name, "PRINT ADDRESS BEFORE SCAN")
	// 0xc0001c0648 0xc0001c0650 PRINT ADDRESS BEFORE SCAN
	err = res.Scan(&org.Id, &org.Name)
	fmt.Println(org.Id, org.Name, org, "PRINT VALUES AFTER SCAN")
	// 1535 TestOrg {1535 TestOrg} PRINT VALUES AFTER SCAN
	b, err := json.Marshal(org)
	if err != nil {
		panic(err)
	}
	fmt.Println(b)
	//[123 34 105 100 34 58 49 53 51 55 44 34 110 97 109 101 34 58 34 98 114 97 110 100 32 69 104 71 74 89 34 125]
}
What's the problem here?
json.Marshal returns a byte slice; convert it to a string before printing:
package main

import (
	"encoding/json"
	"fmt"
)

type Abc struct {
	A string `json:"a"`
	B string `json:"b"`
}

func main() {
	d := Abc{A: "aaa", B: "bbbb"}
	a, _ := json.Marshal(d)
	fmt.Println(string(a))
}
json.Marshal returns a byte slice: []byte.
Println prints b out as such; the array of integers (byte values) you see is how byte slices are printed in Go.
Use string(b) to print it as a string: fmt.Println(string(b)), etc.
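Since the original goal was an array of JSON objects from the SQL rows, a minimal sketch under the question's assumptions (the same Org struct and res rows) would collect the rows into a slice and marshal once:

var orgs []Org
for res.Next() {
	var org Org
	if err := res.Scan(&org.Id, &org.Name); err != nil {
		panic(err)
	}
	orgs = append(orgs, org)
}
b, err := json.Marshal(orgs)
if err != nil {
	panic(err)
}
// prints one JSON array as text, e.g. [{"id":1535,"name":"TestOrg"}]
fmt.Println(string(b))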

Does Go only encrypt text in 16-byte message lengths?

I'm trying to encrypt a message using AES in Go.
func main() {
	key := "mysupersecretkey32bytecharacters"
	plainText := "thisismyplaintextingolang"
	fmt.Println("My Encryption")
	byteCipherText := encrypt([]byte(key), []byte(plainText))
	fmt.Println(byteCipherText)
}

func encrypt(key, plaintext []byte) []byte {
	cphr, err := aes.NewCipher(key)
	if err != nil {
		panic(err)
	}
	ciphertext := make([]byte, len(plaintext))
	cphr.Encrypt(ciphertext, plaintext)
	return ciphertext
}
That function returns : [23 96 11 10 70 223 95 118 157 250 80 92 77 26 137 224 0 0 0 0 0 0 0 0 0]
In that result, there are only 16 non-zero byte values, which suggests that AES encryption in Go only encrypts 16 characters.
Is it possible to encrypt more than 16 characters in Go AES without using any mode of operation (like GCM, CBC, CFB, etc.), just pure AES?
aes.NewCipher returns an instance of cipher.Block, which encrypts a single 16-byte block at a time (this is how pure AES works).
A mode of operation literally determines how messages longer than 16 bytes are encrypted. The simplest one is ECB (essentially a "no-op" mode), which simply repeats the block encryption over successive 16-byte blocks with the same key. You can do the same with a simple for loop, though keep in mind that ECB is not secure.
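For illustration only (ECB leaks patterns across identical blocks and shouldn't be used for real data), a sketch of that for loop, assuming the plaintext has already been padded to a multiple of 16 bytes:

// encryptECB applies raw AES block by block (ECB mode).
// The plaintext must be padded to a multiple of the block size.
func encryptECB(key, plaintext []byte) []byte {
	block, err := aes.NewCipher(key)
	if err != nil {
		panic(err)
	}
	bs := block.BlockSize() // 16 for AES
	if len(plaintext)%bs != 0 {
		panic("plaintext must be padded to a multiple of the block size")
	}
	ciphertext := make([]byte, len(plaintext))
	for i := 0; i < len(plaintext); i += bs {
		// encrypt one 16-byte block at a time
		block.Encrypt(ciphertext[i:i+bs], plaintext[i:i+bs])
	}
	return ciphertext
}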
This has nothing to do with Go. AES is a block cipher that encrypts one 16-byte block; to encrypt a longer message, there are several modes of operation that can be used.

Convert int array to byte array, compress it then reverse it

I have a large int array that I want to persist on the filesystem. My understanding is that the best way to store something like this is to use the gob package to convert it to a byte slice and then compress it with gzip.
When I need it again, I reverse the process. I'm pretty sure I'm storing it correctly, but recovering it fails with EOF. Long story short, I have some example code below that demonstrates the issue (playground link here: https://play.golang.org/p/v4rGGeVkLNh).
I'm not convinced gob is needed; however, from reading around it seems more efficient to store it as a byte array than an int array, though that may not be true. Thanks!
package main

import (
	"bufio"
	"bytes"
	"compress/gzip"
	"encoding/gob"
	"fmt"
)

func main() {
	arry := []int{1, 2, 3, 4, 5}
	//now gob this
	var indexBuffer bytes.Buffer
	writer := bufio.NewWriter(&indexBuffer)
	encoder := gob.NewEncoder(writer)
	if err := encoder.Encode(arry); err != nil {
		panic(err)
	}
	//now compress it
	var compressionBuffer bytes.Buffer
	compressor := gzip.NewWriter(&compressionBuffer)
	compressor.Write(indexBuffer.Bytes())
	defer compressor.Close()
	//<--- I think all is good until here
	//now decompress it
	buf := bytes.NewBuffer(compressionBuffer.Bytes())
	fmt.Println("byte array before unzipping: ", buf.Bytes())
	if reader, err := gzip.NewReader(buf); err != nil {
		fmt.Println("gzip failed ", err)
		panic(err)
	} else {
		//now ungob it...
		var intArray []int
		decoder := gob.NewDecoder(reader)
		defer reader.Close()
		if err := decoder.Decode(&intArray); err != nil {
			fmt.Println("gob failed ", err)
			panic(err)
		}
		fmt.Println("final int Array content: ", intArray)
	}
}
You are using bufio.Writer which, as its name implies, buffers bytes written to it. This means that if you use it, you have to flush it to make sure buffered data makes its way to the underlying writer:
writer := bufio.NewWriter(&indexBuffer)
encoder := gob.NewEncoder(writer)
if err := encoder.Encode(arry); err != nil {
	panic(err)
}
if err := writer.Flush(); err != nil {
	panic(err)
}
That said, the bufio.Writer is completely unnecessary here: you're already writing to an in-memory buffer (bytes.Buffer), so just skip it and write directly to the bytes.Buffer (then you don't even have to flush):
var indexBuffer bytes.Buffer
encoder := gob.NewEncoder(&indexBuffer)
if err := encoder.Encode(arry); err != nil {
	panic(err)
}
The next error is how you close the gzip stream:
defer compressor.Close()
This deferred close only happens when the enclosing function (the main() function) returns, not a moment earlier. But by that point you have already tried to read the zipped data, which may still be sitting in an internal cache of gzip.Writer rather than in compressionBuffer, so you can't yet read the complete compressed data from compressionBuffer. Close the gzip stream without using defer:
if err := compressor.Close(); err != nil {
	panic(err)
}
With these changes, your program runs and outputs (try it on the Go Playground):
byte array before unzipping: [31 139 8 0 0 0 0 0 0 255 226 249 223 200 196 200 244 191 137 129 145 133 129 129 243 127 19 3 43 19 11 27 7 23 32 0 0 255 255 110 125 126 12 23 0 0 0]
final int Array content: [1 2 3 4 5]
As a side note, buf := bytes.NewBuffer(compressionBuffer.Bytes()) is also completely unnecessary: you can decode from compressionBuffer directly, since data previously written to it can be read back from it.
As you might have noticed, the compressed data is much larger than the initial, uncompressed data. There are several reasons: both encoding/gob and compress/gzip streams have significant overhead, and they (may) only make the input smaller at larger scales (5 int values don't qualify).
Please check this related question: Efficient Go serialization of struct to disk
For small arrays, you may also consider variable-length encoding, see binary.PutVarint().
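A sketch of what that could look like for the []int from the question (my own example, not part of the answer's code):

arry := []int{1, 2, 3, 4, 5}

// encode: each value takes 1 to 10 bytes depending on its magnitude
buf := make([]byte, binary.MaxVarintLen64*len(arry))
n := 0
for _, v := range arry {
	n += binary.PutVarint(buf[n:], int64(v))
}
encoded := buf[:n] // just 5 bytes for this input

// decode: binary.Varint reports how many bytes each value used
var decoded []int
for off := 0; off < len(encoded); {
	v, used := binary.Varint(encoded[off:])
	if used <= 0 {
		panic("corrupt varint data")
	}
	decoded = append(decoded, int(v))
	off += used
}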

Diagnosing very slow read from unix socket using golang (1 min vs 1 sec in netcat)

Background
I'm writing a few packages to communicate with the OpenVAS vulnerability scanner. The scanner uses a few different proprietary protocols, all consisting of either XML or text strings sent over a unix socket or TCP connection (I'm using a unix socket).
The issue I'm having is with the OTP protocol (OpenVAS's internal protocol, which is not well documented).
I can run the following command using netcat and get a response back in under a second:
echo -en '< OTP/2.0 >\nCLIENT <|> NVT_INFO\n' | ncat -U /var/run/openvassd.sock
This results in a fairly large response, which looks like this in the terminal:
< OTP/2.0 >
SERVER <|> NVT_INFO <|> 201802131248 <|> SERVER
SERVER <|> PREFERENCES <|>
cache_folder <|> /var/cache/openvas
include_folders <|> /var/lib/openvas/plugins
max_hosts <|> 30
//lots more here
So for example, I previously had some code like this for reading the response back:
func (c Client) read() ([]byte, error) {
	// set up buffer to read in chunks
	bufSize := 8096
	resp := []byte{}
	buf := make([]byte, bufSize)
	for {
		n, err := c.conn.Read(buf)
		resp = append(resp, buf[:n]...)
		if err != nil {
			if err != io.EOF {
				return resp, fmt.Errorf("read error: %s", err)
			}
			break
		}
		fmt.Println("got", n, "bytes.")
	}
	fmt.Println("total response size:", len(resp))
	return resp, nil
}
I get the full result, but it comes in small pieces (I guess line by line), so the output I see is something like this (over the course of a minute or so before the full response appears):
got 53 bytes.
got 62 bytes.
got 55 bytes.
got 62 bytes.
got 64 bytes.
got 59 bytes.
got 58 bytes.
got 54 bytes.
got 54 bytes.
got 54 bytes.
got 64 bytes.
got 59 bytes.
... (more)
So I decided to try ioutil.ReadAll:
func (c Client) read() ([]byte, error) {
	fmt.Println("read start")
	d, err := ioutil.ReadAll(c.conn)
	fmt.Println("read done")
	return d, err
}
This again returns the full response, but the time between "read start" and "read done" is around a minute, compared to the <1 sec the command is expected to take.
Any thoughts on why the read via Go is so slow compared to netcat, and how can I diagnose/fix the issue?
It appears the service is waiting for more input and eventually times out after a minute. In your CLI example, once the echo command completes, that side of the pipe is shut down for writes, and the service is notified by a 0-length recv.
In order to do the same in Go, you need to call CloseWrite on the net.UnixConn after you have completed sending the command.
c.conn.(*net.UnixConn).CloseWrite()
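Putting that together, a minimal sketch (query is a made-up helper; ioutil.ReadAll as in the question):

func query(socketPath, cmd string) ([]byte, error) {
	conn, err := net.Dial("unix", socketPath)
	if err != nil {
		return nil, err
	}
	defer conn.Close()
	if _, err := conn.Write([]byte(cmd)); err != nil {
		return nil, err
	}
	// Shut down the write side, like the pipe closing after echo
	// finishes: the server sees EOF and replies immediately.
	if err := conn.(*net.UnixConn).CloseWrite(); err != nil {
		return nil, err
	}
	return ioutil.ReadAll(conn)
}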

How does this binary.Read know when to stop?

Please note this is pseudocode and I am summarising. I am reading some source code from inside a function:
maxKeyLen := 100 * 1024 * 1024
maxValueLen := 100 * 1024 * 1024
var klen, vlen uint32
binary.Read(p.buffer, binary.BigEndian, &klen)
if klen > maxKeyLen {
	return nil, nil, fmt.Errorf("key exceeds max len %d, got %d bytes", maxKeyLen, klen)
}
At what point does the binary.Read stop? Because straight after this there is another read:
key := make([]byte, klen)
_, err := p.buffer.Read(key)
if err != nil {
	return nil, nil, err
}
binary.Read(p.buffer, binary.BigEndian, &vlen)
if vlen > maxValueLen {
	return nil, nil, fmt.Errorf("value exceeds max len %d, got %d bytes", maxValueLen, vlen)
}
Where p.buffer is defined via:
buff := new(bytes.Buffer)
io.Copy(buff, r)
p.buffer = buff
And r is some data that has been passed in.
At first I thought it stops at 4 bytes, but that seemed wrong to me because the maxKeyLen check allows values far greater than that. So how does binary.Read know when to stop when there is more data ahead, given that the next binary.Read for vlen still finds the right bytes?
When questioning the superheroes of Go, always refer to the actual source code in question:
https://golang.org/src/encoding/binary/binary.go?s=4201:4264#L132
142 func Read(r io.Reader, order ByteOrder, data interface{}) error {
143 // Fast path for basic types and slices.
144 if n := intDataSize(data); n != 0 {
Line 144 shows the fast path: the size of known fixed-size types is computed up front, with the actual reading or copying done as needed later in that scope.
In your code example above, that size is the 4-byte size of klen, which is a uint32. That is, it reads exactly 4 bytes from p.buffer into klen.
It gives a hint in the documentation:
https://golang.org/pkg/encoding/binary/#Read
func Read(r io.Reader, order ByteOrder, data interface{}) error
Read reads structured binary data from r into data. Data must be a pointer to a fixed-size value or a slice of fixed-size values. Bytes read from r are decoded using the specified byte order and written to successive fields of the data.
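So the read stops after exactly 4 bytes for a uint32, no matter how much data follows. A small self-contained sketch of the pattern from the question:

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

func main() {
	// a 4-byte big-endian length (3) followed by the key bytes
	buf := bytes.NewBuffer([]byte{0, 0, 0, 3, 'f', 'o', 'o'})

	var klen uint32
	binary.Read(buf, binary.BigEndian, &klen) // consumes exactly 4 bytes
	key := make([]byte, klen)
	buf.Read(key) // consumes the next klen bytes

	fmt.Println(klen, string(key)) // prints: 3 foo
}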
