Writing into fixed size Buffers in Golang with offsets

I'm new to Golang and I'm trying to write into a Buffer that should be zero-filled to a specific size before I start writing into it.
My try:
buf := bytes.NewBuffer(make([]byte, 52))
var pktInfo uint16 = 243
var pktSize uint16 = 52
var pktLine uint16 = binary.LittleEndian.Uint16(data)
var pktId uint16 = binary.LittleEndian.Uint16(data[6:])
// header
binary.Write(buf, binary.LittleEndian, pktInfo)
binary.Write(buf, binary.LittleEndian, pktSize)
binary.Write(buf, binary.LittleEndian, pktLine)
// body
binary.Write(buf, binary.LittleEndian, pktId)
(...a lot more)
fmt.Printf("%x\n", data)
fmt.Printf("%x\n", buf.Bytes())
Problem is it writes after the bytes instead of writing from the start. How do I do that?

You don't need to preallocate the slice for bytes.Buffer, and if you do, you need to set the cap, not the len:
buf := bytes.NewBuffer(make([]byte, 0, 52)) // or simply
buf := bytes.NewBuffer(nil)
There's also buf.Reset(), but that's overkill for this specific example.
From the documentation of make(slice, size, cap):
Slice: The size specifies the length. The capacity of the slice is
equal to its length. A second integer argument may be provided to
specify a different capacity; it must be no smaller than the
length, so make([]int, 0, 10) allocates a slice of length 0 and
capacity 10.
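To make the difference visible, here's a minimal sketch (contrived values; binary.Write errors are ignored for brevity):
package main

import (
    "bytes"
    "encoding/binary"
    "fmt"
)

func main() {
    // len 52: the buffer already "contains" 52 zero bytes, so writes append after them.
    wrong := bytes.NewBuffer(make([]byte, 52))
    binary.Write(wrong, binary.LittleEndian, uint16(243))
    fmt.Println(wrong.Len()) // 54 (52 zeros + 2 written bytes)

    // len 0, cap 52: empty buffer with preallocated space, so writes start at offset 0.
    right := bytes.NewBuffer(make([]byte, 0, 52))
    binary.Write(right, binary.LittleEndian, uint16(243))
    fmt.Println(right.Len())           // 2
    fmt.Printf("% x\n", right.Bytes()) // f3 00
}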

Related

Convert RGBA image to RGB byte array in an efficient way

I have a C library with a function that expects a pointer to a byte array containing a 24-bit bitmap in RGB format. The alpha channel is not important and can be truncated. I've tried something like this:
func load(filePath string) *image.RGBA {
    imgFile, err := os.Open(filePath)
    if err != nil {
        fmt.Printf("Cannot read file %v\n", err)
    }
    defer imgFile.Close()
    img, _, err := image.Decode(imgFile)
    if err != nil {
        fmt.Printf("Cannot decode file %v\n", err)
    }
    return img.(*image.RGBA)
}
img := load("myimg.png")
bounds := img.Bounds()
width, height := bounds.Max.X, bounds.Max.Y
// Convert to RGB? Probably not...
newImg := image.NewNRGBA(image.Rect(0, 0, width, height))
draw.Draw(newImg, newImg.Bounds(), img, bounds.Min, draw.Src)
// Pass image pointer to C function.
C.PaintOnImage(unsafe.Pointer(&newImg.Pix[0]), C.int(newImg.Bounds().Dy()), C.int(newImg.Bounds().Dx()))
However, it seems that NRGBA is also built on 4 bytes per pixel. I could probably solve this by using GoCV, but that seems like overkill for such a simple task. Is there a way to do this in a simple and efficient manner in Go?
There is no RGB image type in the standard library, but you can assemble your RGB array pretty easily:
bounds := img.Bounds()
rgb := make([]byte, bounds.Dx()*bounds.Dy()*3)
idx := 0
for y := bounds.Min.Y; y < bounds.Max.Y; y++ {
    for x := bounds.Min.X; x < bounds.Max.X; x++ {
        offs := img.PixOffset(x, y)
        copy(rgb[idx:], img.Pix[offs:offs+3])
        idx += 3
    }
}
The img.Pix data holds the 4-byte RGBA values. The code above just copies the leading 3-byte RGB values of all pixels.
Since lines are continuous in the Pix array, you can improve the above code by only calling PixOffset once per line and advancing by 4 bytes for every pixel. Also, manually copying 3 bytes may be faster than calling copy() (benchmark if it matters to you):
bounds := img.Bounds()
rgb := make([]byte, bounds.Dx()*bounds.Dy()*3)
idx := 0
for y := bounds.Min.Y; y < bounds.Max.Y; y++ {
    offs := img.PixOffset(bounds.Min.X, y)
    for x := bounds.Min.X; x < bounds.Max.X; x++ {
        rgb[idx+0] = img.Pix[offs+0]
        rgb[idx+1] = img.Pix[offs+1]
        rgb[idx+2] = img.Pix[offs+2]
        idx += 3
        offs += 4
    }
}
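Wrapped up as a reusable helper, a sketch of the optimized version (the name rgbaToRGB is mine; it assumes the standard library image package is imported):
// rgbaToRGB copies the R, G and B channels of an *image.RGBA into a
// tightly packed 3-bytes-per-pixel slice, dropping the alpha channel.
func rgbaToRGB(img *image.RGBA) []byte {
    bounds := img.Bounds()
    rgb := make([]byte, bounds.Dx()*bounds.Dy()*3)
    idx := 0
    for y := bounds.Min.Y; y < bounds.Max.Y; y++ {
        offs := img.PixOffset(bounds.Min.X, y)
        for x := bounds.Min.X; x < bounds.Max.X; x++ {
            rgb[idx+0] = img.Pix[offs+0] // R
            rgb[idx+1] = img.Pix[offs+1] // G
            rgb[idx+2] = img.Pix[offs+2] // B
            idx += 3
            offs += 4 // skip A
        }
    }
    return rgb
}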

Scanner.Buffer - max value has no effect on custom Split?

To reduce the default 64k scanner buffer (for a microcomputer with low memory), I'm trying to use this buffer and a custom split function:
scanner.Buffer(make([]byte, 5120), 64)
scanner.Split(Scan64Bytes)
Here I noticed that the second buffer argument, max, has no effect. If I instead insert e.g. 0, 1, 5120 or bufio.MaxScanTokenSize, I can't see any difference.
Only the first argument, buf, has consequences: if its capacity is too small the scan is incomplete, and if it's too large the B/op benchmem value increases.
From the doc:
The maximum token size is the larger of max and cap(buf). If max <= cap(buf), Scan will use this buffer only and do no allocation.
I don't understand which is the correct max value. Can you maybe explain this to me, please?
Go Playground
package main

import (
    "bufio"
    "bytes"
    "fmt"
)

func Scan64Bytes(data []byte, atEOF bool) (advance int, token []byte, err error) {
    if len(data) < 64 {
        return 0, data[0:], bufio.ErrFinalToken
    }
    return 64, data[0:64], nil
}

func main() {
    // improvised source of the same size:
    cmdstd := bytes.NewReader(make([]byte, 5120))
    scanner := bufio.NewScanner(cmdstd)
    // I guess 64 is the correct max arg:
    scanner.Buffer(make([]byte, 5120), 64)
    scanner.Split(Scan64Bytes)
    for i := 0; scanner.Scan(); i++ {
        fmt.Printf("%v: %v\r\n", i, scanner.Bytes())
    }
    if err := scanner.Err(); err != nil {
        fmt.Println(err)
    }
}
max value has no effect on custom Split?
No, without the custom split there is the same result. But the following wouldn't be possible without the custom split function and ErrFinalToken:
//your reader/input
cmdstd := bytes.NewReader(make([]byte, 5120))
// your scanner buffer size
scanner.Buffer(make([]byte, 5120), 64)
The scanner's buffer size should be larger. This is how I would set buf and max:
scanner.Buffer(make([]byte, 5121), 5120)
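max only matters when it is larger than cap(buf), because the effective token limit is the larger of the two. A contrived sketch (all sizes are mine) where the scanner must grow a 16-byte buffer toward the 128-byte max to produce 100-byte tokens:
package main

import (
    "bufio"
    "bytes"
    "fmt"
)

func main() {
    src := bytes.NewReader(make([]byte, 300))
    scanner := bufio.NewScanner(src)
    // cap(buf) is 16 and max is 128, so the effective token limit is 128
    // and the scanner is allowed to reallocate up to that size.
    scanner.Buffer(make([]byte, 16), 128)
    scanner.Split(func(data []byte, atEOF bool) (int, []byte, error) {
        if atEOF && len(data) == 0 {
            return 0, nil, nil
        }
        if len(data) < 100 && !atEOF {
            return 0, nil, nil // request more data than cap(buf) holds
        }
        if len(data) < 100 {
            return len(data), data, bufio.ErrFinalToken
        }
        return 100, data[:100], nil
    })
    for i := 0; scanner.Scan(); i++ {
        fmt.Printf("%v: %v bytes\n", i, len(scanner.Bytes()))
    }
    if err := scanner.Err(); err != nil {
        fmt.Println(err) // with max below 100 this would print bufio.ErrTooLong
    }
}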

How to turn a slice of Uint64 into a slice of Bytes

I currently have a protobuf struct that looks like this:
type RequestEnvelop_MessageQuad struct {
    F1   [][]byte `protobuf:"bytes,1,rep,name=f1,proto3" json:"f1,omitempty"`
    F2   []byte   `protobuf:"bytes,2,opt,name=f2,proto3" json:"f2,omitempty"`
    Lat  float64  `protobuf:"fixed64,3,opt,name=lat" json:"lat,omitempty"`
    Long float64  `protobuf:"fixed64,4,opt,name=long" json:"long,omitempty"`
}
F1 takes some S2 Geometry data which I have generated like so:
ll := s2.LatLngFromDegrees(location.Latitude, location.Longitude)
cid := s2.CellIDFromLatLng(ll).Parent(15)
walkData := []uint64{cid.Pos()}
next := cid.Next()
prev := cid.Prev()
// 10 Before, 10 After
for i := 0; i < 10; i++ {
    walkData = append(walkData, next.Pos())
    walkData = append(walkData, prev.Pos())
    next = next.Next()
    prev = prev.Prev()
}
log.Println(walkData)
log.Println(walkData)
The only problem is, the protobuf struct expects a type of [][]byte. I'm just not sure how I can get my uint64 data into bytes. Thanks.
Integer values can be encoded into byte arrays with the encoding/binary package from the standard library.
For instance, to encode a uint64 into a byte buffer, we could use the binary.PutUvarint function:
big := uint64(257)
buf := make([]byte, 2)
n := binary.PutUvarint(buf, big)
fmt.Printf("Wrote %d bytes into buffer: [% x]\n", n, buf)
Which would print:
Wrote 2 bytes into buffer: [81 02]
We can also write a generic stream to the buffer using the binary.Write function:
buf := new(bytes.Buffer)
var pi float64 = math.Pi
err := binary.Write(buf, binary.LittleEndian, pi)
if err != nil {
    fmt.Println("binary.Write failed:", err)
}
fmt.Printf("% x", buf.Bytes())
Which outputs:
18 2d 44 54 fb 21 09 40
(this second example was borrowed from that package's documentation, where you will find other similar examples)
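Applied to the question's [][]byte field, a minimal sketch (it assumes each value should become a fixed 8-byte encoding; whether the consumer expects little- or big-endian is something to confirm):
// uint64sToBytes encodes each uint64 as its own fixed 8-byte slice.
func uint64sToBytes(walkData []uint64) [][]byte {
    out := make([][]byte, 0, len(walkData))
    for _, v := range walkData {
        b := make([]byte, 8)
        binary.LittleEndian.PutUint64(b, v) // or binary.BigEndian
        out = append(out, b)
    }
    return out
}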

Size of a byte array golang

I have a []byte object and I want to get the size of it in bytes. Is there an equivalent to C's sizeof() in golang? If not, can you suggest other ways to get the same?
To return the number of bytes in a byte slice use the len function:
bs := make([]byte, 1000)
sz := len(bs)
// sz == 1000
If you mean the number of bytes in the underlying array use cap instead:
bs := make([]byte, 1000, 2000)
sz := cap(bs)
// sz == 2000
A byte is guaranteed to be one byte: https://golang.org/ref/spec#Size_and_alignment_guarantees.
I think your best bet would be:
package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    thousandBytes := make([]byte, 1000)
    tenBytes := make([]byte, 10)
    fmt.Println(binary.Size(tenBytes))
    fmt.Println(binary.Size(thousandBytes))
}
https://play.golang.org/p/HhJif66VwY
Though there are many options, like just importing unsafe and using Sizeof:
import "unsafe"
size := unsafe.Sizeof(bytes)
Note that for some types, like slices, Sizeof is going to give you the size of the slice descriptor which is likely not what you want. Also, bear in mind the length and capacity of the slice are different and the value returned by binary.Size reflects the length.
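To make the distinction concrete, a quick sketch (the 24 assumes a 64-bit platform, where a slice header is one pointer plus two ints):
bs := make([]byte, 1000)
fmt.Println(len(bs))           // 1000: number of elements
fmt.Println(binary.Size(bs))   // 1000: bytes binary.Write would produce
fmt.Println(unsafe.Sizeof(bs)) // 24: the slice header itself, not the data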

Newbie: Properly sizing a []byte size in GO (Chunking)

Go Newbie alert!
Not quite sure how to do this - I want to make a "file chunker" that grabs fixed-size slices out of a binary file for later upload, as a learning project.
I currently have this:
type (
    fileChunk  []byte
    fileChunks []fileChunk
)

func NumChunks(fi os.FileInfo, chunkSize int) int {
    chunks := fi.Size() / int64(chunkSize)
    if rem := fi.Size()%int64(chunkSize) != 0; rem {
        chunks++
    }
    return int(chunks)
}

// left out err checks for brevity
func chunker(filePtr *string) fileChunks {
    f, _ := os.Open(*filePtr)
    defer f.Close()
    // create the initial container to hold the slices
    file_chunks := make(fileChunks, 0)
    fi, _ := f.Stat()
    // show me how big the original file is
    fmt.Printf("File Name: %s, Size: %d\n", fi.Name(), fi.Size())
    // let's partition it into 10000 byte pieces
    chunkSize := 10000
    chunks := NumChunks(fi, chunkSize)
    fmt.Printf("Need %d chunks for this file", chunks)
    for i := 0; i < chunks; i++ {
        b := make(fileChunk, chunkSize) // allocate a chunk, 10000 bytes
        n1, _ := f.Read(b)
        fmt.Printf("Chunk: %d, %d bytes read\n", i, n1)
        // add chunk to "container"
        file_chunks = append(file_chunks, b)
    }
    fmt.Println(len(file_chunks))
    return file_chunks
}
This all works mostly fine, but here's what happens if my file size is 31234 bytes: I end up with three slices holding the first 30000 bytes from the file, and the final "chunk" consists of 1234 file bytes followed by padding up to the 10000-byte chunk size. I'd like the remainder fileChunk ([]byte) to be sized to 1234, not the full capacity. What would the proper way to do this be? On the receiving side I would then "stitch" together all the pieces to recreate the original file.
You need to re-slice the remainder chunk to be just the length of the last chunk read:
n1, err := f.Read(b)
fmt.Printf("Chunk: %d, %d bytes read\n", i, n1)
b = b[:n1]
This does the re-slicing for all chunks. Normally, n1 will be 10000 for all the non-remainder chunks, but there is no guarantee. The docs say "Read reads up to len(b) bytes from the File." So it's good to pay attention to n1 all the time.
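In the context of the question's loop, that would look something like this (a sketch with minimal error handling added; it assumes the io and log packages are imported):
for i := 0; i < chunks; i++ {
    b := make(fileChunk, chunkSize)
    n1, err := f.Read(b)
    if err != nil && err != io.EOF {
        log.Fatal(err)
    }
    // shrink the slice to the bytes actually read, so the final
    // chunk has length 1234 instead of the full 10000 capacity
    b = b[:n1]
    file_chunks = append(file_chunks, b)
}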
