How to read a uint8 from []byte without creating a bytes.Buffer? The value has been written to the buffer like this:
buf := new(bytes.Buffer)
binary.Write(buf, binary.BigEndian, uint32(1))
binary.Write(buf, binary.BigEndian, uint8(1))
b := buf.Bytes()
When decoding, the uint32 is easy to read back:
len := binary.BigEndian.Uint32(b[:4])
But for the uint8, the only way I could come up with to retrieve the value is to create a buffer and then read the first byte:
buf := new(bytes.Buffer)
_, err := buf.Write(b[4:5])
// error handling ...
id, err := buf.ReadByte()
It seems like there's no method in the encoding/binary pkg for uint8 value retrieval. And I guess there's probably some good reason behind it.
Question: Is there any other way to read uint8 from that []byte without creating a Buffer??
Use an index expression to get a single uint8 from a slice.
len := binary.BigEndian.Uint32(b[:4])
id := b[4] // <-- index expression
Note that byte is an alias for uint8.
Here's an elegant way that I found,
var len uint32
var id uint8
binary.Read(buf, binary.BigEndian, &len)
binary.Read(buf, binary.BigEndian, &id)
// this method doesn't need you to create a buffer for each value read
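For completeness, here's a minimal runnable sketch of that binary.Read approach, assuming the bytes were produced by the binary.Write calls from the question and wrapping them in a bytes.Reader:
package main

import (
    "bytes"
    "encoding/binary"
    "fmt"
)

func main() {
    // Encode a uint32 followed by a uint8, as in the question.
    buf := new(bytes.Buffer)
    binary.Write(buf, binary.BigEndian, uint32(1))
    binary.Write(buf, binary.BigEndian, uint8(1))
    b := buf.Bytes()

    // Decode both values with binary.Read from a bytes.Reader.
    r := bytes.NewReader(b)
    var length uint32
    var id uint8
    if err := binary.Read(r, binary.BigEndian, &length); err != nil {
        panic(err)
    }
    if err := binary.Read(r, binary.BigEndian, &id); err != nil {
        panic(err)
    }
    fmt.Println(length, id) // 1 1
}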
Related
I need to send struct data containing byte slice fields over a socket connection.
type A struct {
header []byte
body []byte
}
So I wrote the following source code to convert the structure to bytes.
var a A
a.header = []byte{ /* byte slice data... */ }
a.body = []byte{ /* byte slice data... */ }
buf := new(bytes.Buffer)
binary.Write(buf, binary.BigEndian, a)
However, the binary.Write call fails with the following error:
binary.Write: invalid type main.A
I have found that fixed arrays solve the problem. But since the length of the data is constantly changing, I have to use a slice rather than a fixed array.
Is there a way to solve this problem?
If you write a variable-length byte slice, the other end will not know how many bytes it needs to read. You have to communicate the length too.
So one way to send a byte slice is to first write the length (number of bytes) using a fixed-size type, e.g. int32 or int64. Then simply write the byte slice.
For example:
var w io.Writer // This represents your connection
var a A
if err := binary.Write(w, binary.LittleEndian, int32(len(a.header))); err != nil {
// Handle error
}
if _, err := w.Write(a.header); err != nil {
// Handle error
}
You may use the same logic to send a.body too.
On the other end, this is how you could read it:
var r io.Reader // This represents your connection
var a A
var size int32
if err := binary.Read(r, binary.LittleEndian, &size); err != nil {
// Handle error
}
a.header = make([]byte, size)
if _, err := io.ReadFull(r, a.header); err != nil {
// Handle error
}
Try a working example on the Go Playground.
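Since header and body follow the same length-prefix pattern, the logic can also be factored into two small helpers (writeChunk and readChunk are hypothetical names, not from any library; encoding/binary and io are assumed to be imported):
// writeChunk writes a length-prefixed byte slice: a 4-byte little-endian
// length followed by the bytes themselves.
func writeChunk(w io.Writer, b []byte) error {
    if err := binary.Write(w, binary.LittleEndian, int32(len(b))); err != nil {
        return err
    }
    _, err := w.Write(b)
    return err
}

// readChunk reads back one length-prefixed byte slice written by writeChunk.
func readChunk(r io.Reader) ([]byte, error) {
    var size int32
    if err := binary.Read(r, binary.LittleEndian, &size); err != nil {
        return nil, err
    }
    b := make([]byte, size)
    if _, err := io.ReadFull(r, b); err != nil {
        return nil, err
    }
    return b, nil
}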
If you have to transfer more complex structs, consider using the encoding/gob package, which handles sending slices with ease. For an example and some insights, see Efficient Go serialization of struct to disk.
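As a rough sketch of the gob route (note that gob only encodes exported fields, so the fields are capitalized here; the names are otherwise just illustrative):
package main

import (
    "bytes"
    "encoding/gob"
    "fmt"
    "log"
)

// Msg mirrors the A struct from the question, but with exported
// fields so that encoding/gob can see them.
type Msg struct {
    Header []byte
    Body   []byte
}

func main() {
    var network bytes.Buffer // stand-in for the connection

    // Encode: slice lengths are handled by gob automatically.
    enc := gob.NewEncoder(&network)
    if err := enc.Encode(Msg{Header: []byte{1, 2}, Body: []byte("hello")}); err != nil {
        log.Fatal(err)
    }

    // Decode on the other end; the destination must be a pointer.
    var got Msg
    dec := gob.NewDecoder(&network)
    if err := dec.Decode(&got); err != nil {
        log.Fatal(err)
    }
    fmt.Println(got.Header, string(got.Body))
}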
I have a setup where I receive data over the network and deserialize it into my struct. That works fine, but now I need to serialize the struct into a slice buffer to send it across the network.
I am trying to avoid allocating more than needed, so I have already set up a buffer that I would like to write into for all my serializing, but I am not sure how to do this.
My setup is like this:
recieveBuffer := make([]byte, 1500)
header := recieveBuffer[0:1]
message := recieveBuffer[1:]
So I am trying to write fields from a struct to message and the total number of bytes for all the fields as a value for header.
This was how I deserialized to the struct:
// Deserialize fills the struct from a fixed layout:
// bytes 0-3: UID (little-endian uint32), bytes 4-39: UUID, bytes 40+: Username.
func (userSession *UserSession) Deserialize(message []byte) {
    userSession.UID = int64(binary.LittleEndian.Uint32(message[0:4]))
    userSession.UUID = string(message[4:40])
    userSession.Username = string(message[40:])
}
I don't really know how to do the reverse of this, however. Is it possible without creating buffers for each field I want to serialize before copying to message?
Given the preallocated buffer buf, you can reverse the process like this:
buf[0] = byte(40 + len(userSession.Username))                          // header: message length in bytes
binary.LittleEndian.PutUint32(buf[1:], uint32(int32(userSession.UID))) // bytes 1-4: UID
copy(buf[5:41], userSession.UUID)                                      // bytes 5-40: UUID
copy(buf[41:], userSession.Username)                                   // bytes 41+: Username
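Put together as a method (a sketch of mine, assuming the same fixed layout as Deserialize and that the caller passes a large-enough buffer, e.g. the 1500-byte receive buffer from the question):
// Serialize writes the session into buf using the layout Deserialize expects
// (header byte, then UID, UUID and Username) and returns the total bytes written.
func (userSession *UserSession) Serialize(buf []byte) int {
    buf[0] = byte(40 + len(userSession.Username)) // message length, excluding the header byte
    binary.LittleEndian.PutUint32(buf[1:], uint32(int32(userSession.UID)))
    copy(buf[5:41], userSession.UUID)
    copy(buf[41:], userSession.Username)
    return 41 + len(userSession.Username)
}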
Given two helper functions.
One to encode a primitive to a byte slice:
func EncodeNumber2NetworkOrder(v interface{}) ([]byte, error) {
switch v.(type) {
case int: // int is at least 32 bits
b := make([]byte, 4)
binary.BigEndian.PutUint32(b, uint32(v.(int)))
return b, nil
case int8:
b := []byte{byte(v.(int8))}
return b, nil
// ... truncated
and one to convert primitive, non-byte slices to a byte slice
func EncodeBigEndian(in []float64) []byte {
var out []byte = make([]byte, len(in)*8)
var wg sync.WaitGroup
wg.Add(len(in))
for i := 0; i < len(in); i++ {
go func(out *[]byte, i int, f float64) {
defer wg.Done()
binary.BigEndian.PutUint64((*out)[(i<<3):], math.Float64bits(f))
}(&out, i, in[i])
}
wg.Wait()
return out
}
your binary serialization might look like this for a bogus struct like
type Foo struct {
time int64
data []float64
}
func Encode(f *Foo) []byte {
    da := EncodeBigEndian(f.data) // helper defined above
    out := make([]byte, 0, len(da))
    out = append(out, da...)
    return out
}
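Note that Encode only serializes f.data; if the time field also needs to go on the wire, one option (a sketch of mine with a hypothetical name, assuming a big-endian int64 layout that is not part of the original answer) is to prepend it:
func EncodeWithTime(f *Foo) []byte {
    out := make([]byte, 8, 8+len(f.data)*8)
    binary.BigEndian.PutUint64(out, uint64(f.time)) // 8-byte timestamp first
    out = append(out, EncodeBigEndian(f.data)...)   // then the float64 payload
    return out
}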
In my Go program I am encoding []byte data with gob
buf := new(bytes.Buffer)
enc := gob.NewEncoder(buf)
//data is []byte
buf.Reset()
enc.Encode(data)
but I get 'gob decoder attempting to decode into a non-pointer' when I try to decode it:
buf := new(bytes.Buffer)
d := gob.NewDecoder(buf)
d.Decode(data)
log.Printf("%s", d)
Gob requires you to pass a pointer to decode.
In your case, you would do:
d.Decode(&data)
The reason is that it may have to modify the slice (i.e., grow it to fit the decoded data).
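A minimal round trip, assuming data is a []byte as in the question:
package main

import (
    "bytes"
    "encoding/gob"
    "fmt"
    "log"
)

func main() {
    data := []byte("hello")

    // Encode into a buffer.
    var buf bytes.Buffer
    if err := gob.NewEncoder(&buf).Encode(data); err != nil {
        log.Fatal(err)
    }

    // Decode from the same buffer; the destination must be a pointer.
    var decoded []byte
    if err := gob.NewDecoder(&buf).Decode(&decoded); err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s\n", decoded) // hello
}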
I'm trying to marshal an array into a string, separating all elements with newlines. I'm running out of memory and think about a more efficient way to do this.
buffer := ""
for _, record := range all_data {
body, _ := json.Marshal(record)
buffer += string(body) + "\n" // i run out of memory here
Question:
Is there a way to append a newline character to a byte array? Right now I'm casting via string(body), but I think that this operation allocates a lot of memory (but maybe I'm wrong).
Assuming your data isn't inherently too big for the computer it's running on, the problem is likely the inefficient building of that string. Instead you should be using a bytes.Buffer and then calling its String() method. Here's an example:
var buffer bytes.Buffer
for _, record := range all_data {
body, _ := json.Marshal(record)
buffer.Write(body)
buffer.WriteString("\n")
}
fmt.Println(buffer.String())
To add to evanmcdonnal's answer: you don't even need an intermediate buffer created by json.Marshal:
var buf bytes.Buffer
enc := json.NewEncoder(&buf)
for _, record := range allData {
if err := enc.Encode(record); err != nil {
// handle error
}
buf.WriteString("\n") // optional
}
fmt.Println(buf.String())
https://play.golang.org/p/5K9Oj0Xbjaa
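Note that Encoder.Encode already terminates each value with a newline, so the extra WriteString("\n") only adds a blank line between records.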
I have an io.ReadCloser object (from an http.Response object).
What's the most efficient way to convert the entire stream to a string object?
EDIT:
Since Go 1.10, strings.Builder exists. Example:
buf := new(strings.Builder)
n, err := io.Copy(buf, r)
// check errors
fmt.Println(buf.String())
OUTDATED INFORMATION BELOW
The short answer is that it will not be efficient because converting to a string requires doing a complete copy of the byte array. Here is the proper (non-efficient) way to do what you want:
buf := new(bytes.Buffer)
buf.ReadFrom(yourReader)
s := buf.String() // Does a complete copy of the bytes in the buffer.
This copy is done as a protection mechanism. Strings are immutable. If you could convert a []byte to a string without a copy, you could change the contents of the string through the original slice. However, Go allows you to disable the type safety mechanisms using the unsafe package. Use the unsafe package at your own risk. Hopefully the name alone is a good enough warning. Here is how I would do it using unsafe:
buf := new(bytes.Buffer)
buf.ReadFrom(yourReader)
b := buf.Bytes()
s := *(*string)(unsafe.Pointer(&b))
There we go, you have now efficiently converted your byte array to a string. Really, all this does is trick the type system into calling it a string. There are a couple caveats to this method:
There are no guarantees this will work in all Go compilers. While it works with the Plan 9-derived gc compiler, it relies on "implementation details" not mentioned in the official spec. You cannot even guarantee that this will work on all architectures or that it will not be changed in gc. In other words, this is a bad idea.
That string is mutable! If you make any calls on that buffer it will change the string. Be very careful.
My advice is to stick to the official method. Doing a copy is not that expensive and it is not worth the evils of unsafe. If the string is too large to do a copy, you should not be making it into a string.
Answers so far haven't addressed the "entire stream" part of the question. I think a good way to do this is ioutil.ReadAll (io.ReadAll since Go 1.16). With your io.ReadCloser named rc, I would write:
Go >= v1.16
if b, err := io.ReadAll(rc); err == nil {
return string(b)
} ...
Go <= v1.15
if b, err := ioutil.ReadAll(rc); err == nil {
return string(b)
} ...
data, _ := ioutil.ReadAll(response.Body)
fmt.Println(string(data))
func copyToString(r io.Reader) (res string, err error) {
var sb strings.Builder
if _, err = io.Copy(&sb, r); err == nil {
res = sb.String()
}
return
}
The most efficient way would be to always use []byte instead of string.
In case you need to print data received from the io.ReadCloser, the fmt package can handle []byte, but it isn't efficient because the fmt implementation will internally convert []byte to string. In order to avoid this conversion, you can implement the fmt.Formatter interface for a type like type ByteSlice []byte.
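A rough sketch of that idea (the type name ByteSlice and the decision to print the raw bytes for every verb are just illustrative choices):
package main

import (
    "fmt"
)

// ByteSlice implements fmt.Formatter so it can be printed
// without first converting the bytes to a string.
type ByteSlice []byte

// Format writes the raw bytes straight to the output, ignoring the verb.
func (b ByteSlice) Format(f fmt.State, verb rune) {
    f.Write(b)
}

func main() {
    b := ByteSlice("hello from a []byte")
    fmt.Printf("%s\n", b)
}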
var b bytes.Buffer
b.ReadFrom(r)
// b.String()
I like the bytes.Buffer struct. I see it has ReadFrom and String methods. I've used it with a []byte but not an io.Reader.