Why is strconv.ParseUint so slow compared to strconv.Atoi?

I'm benchmarking unmarshaling from string to int and uint with this code:
package main

import (
    "strconv"
    "testing"
)

func BenchmarkUnmarshalInt(b *testing.B) {
    for i := 0; i < b.N; i++ {
        UnmarshalInt("123456")
    }
}

func BenchmarkUnmarshalUint(b *testing.B) {
    for i := 0; i < b.N; i++ {
        UnmarshalUint("123456")
    }
}

func UnmarshalInt(v string) int {
    i, _ := strconv.Atoi(v)
    return i
}

func UnmarshalUint(v string) uint {
    i, _ := strconv.ParseUint(v, 10, 64)
    return uint(i)
}
Result:
Running tool: C:\Go\bin\go.exe test -benchmem -run=^$ myBench/main -bench .
goos: windows
goarch: amd64
pkg: myBench/main
BenchmarkUnmarshalInt-8 99994166 11.7 ns/op 0 B/op 0 allocs/op
BenchmarkUnmarshalUint-8 54550413 21.0 ns/op 0 B/op 0 allocs/op
Is it possible that the second (uint) is almost twice as slow as the first (int)?

Yes, it's possible. strconv.Atoi has a fast path when the input string length is less than 19 (or 10 if int is 32 bit). This allows it to be a lot faster because it doesn't need to check for overflow.
If you change your test number to "1234567890123456789" (assuming 64 bit int), then your int benchmark is slightly slower than the uint benchmark because the fast path can't be used. On my machine, it takes 37.6 ns/op for the signed version vs 31.5 ns/op for the unsigned version.
Here's the modified benchmark code (note I added a variable that sums up the parsed results, just in case the compiler got clever and optimized it away).
package main

import (
    "fmt"
    "strconv"
    "testing"
)

const X = "1234567890123456789"

func BenchmarkUnmarshalInt(b *testing.B) {
    var T int
    for i := 0; i < b.N; i++ {
        T += UnmarshalInt(X)
    }
    fmt.Println(T)
}

func BenchmarkUnmarshalUint(b *testing.B) {
    var T uint
    for i := 0; i < b.N; i++ {
        T += UnmarshalUint(X)
    }
    fmt.Println(T)
}

func UnmarshalInt(v string) int {
    i, _ := strconv.Atoi(v)
    return i
}

func UnmarshalUint(v string) uint {
    i, _ := strconv.ParseUint(v, 10, 64)
    return uint(i)
}
For reference, the code for strconv.Atoi in the standard library is currently as follows:
func Atoi(s string) (int, error) {
    const fnAtoi = "Atoi"

    sLen := len(s)
    if intSize == 32 && (0 < sLen && sLen < 10) ||
        intSize == 64 && (0 < sLen && sLen < 19) {
        // Fast path for small integers that fit int type.
        s0 := s
        if s[0] == '-' || s[0] == '+' {
            s = s[1:]
            if len(s) < 1 {
                return 0, &NumError{fnAtoi, s0, ErrSyntax}
            }
        }

        n := 0
        for _, ch := range []byte(s) {
            ch -= '0'
            if ch > 9 {
                return 0, &NumError{fnAtoi, s0, ErrSyntax}
            }
            n = n*10 + int(ch)
        }
        if s0[0] == '-' {
            n = -n
        }
        return n, nil
    }

    // Slow path for invalid, big, or underscored integers.
    i64, err := ParseInt(s, 10, 0)
    if nerr, ok := err.(*NumError); ok {
        nerr.Func = fnAtoi
    }
    return int(i64), err
}
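ParseUint has no such fast path, so if your inputs are known to be short, plain decimal strings, you can recover most of the difference with a small hand-rolled digit loop that falls back to the standard library for anything unusual. The helper below is only a sketch of that idea (fastParseUint is a made-up name, not a standard function), mirroring the length cutoff used by Atoi:

package main

import (
    "fmt"
    "strconv"
)

// fastParseUint is a hypothetical helper: a string of 1 to 19 plain decimal
// digits cannot overflow a uint64, so no overflow checks are needed. Anything
// else (empty, too long, signs, underscores, non-digits) is handed to
// strconv.ParseUint.
func fastParseUint(s string) (uint64, error) {
    if len(s) > 0 && len(s) < 20 {
        var n uint64
        for i := 0; i < len(s); i++ {
            c := s[i] - '0'
            if c > 9 {
                return strconv.ParseUint(s, 10, 64) // not a plain digit string
            }
            n = n*10 + uint64(c)
        }
        return n, nil
    }
    return strconv.ParseUint(s, 10, 64)
}

func main() {
    fmt.Println(fastParseUint("123456"))
}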

Related

Why does the slice keep escaping from the stack?

I am trying to solve the LeetCode problem Permutations.
But when I test with -benchmem, I found it allocates too much, reaching 1957 allocs/op for permute([]int{1,2,3,4,5,6}).
I found that the sub-slice of targets escapes to the heap when it is built. Even when I allocate a [6]int array and use the unsafe package to build the slice on top of it, it is still moved to the heap.
My question is: why does the slice escape to the heap, and how can I allocate it on the stack?
Here's my code:
package main

import (
    "fmt"
    "reflect"
    "unsafe"
)

func permute(nums []int) [][]int {
    resLen := 1
    for i := 1; i <= len(nums); i++ {
        resLen *= i
    }
    // pre-allocate
    res := make([][]int, resLen)
    for i := range res {
        res[i] = make([]int, 0, len(nums))
    }
    build(res, nums)
    return res
}

func build(res [][]int, targets []int) {
    step := len(res) / len(targets)
    for i := range targets {
        for j := i * step; j < (i+1)*step; j++ {
            res[j] = append(res[j], targets[i])
        }
        if len(targets) != 1 {
            var ab = [6]int{}
            var buff []int
            var bp *reflect.SliceHeader
            bp = (*reflect.SliceHeader)(unsafe.Pointer(&buff))
            bp.Data = uintptr(unsafe.Pointer(&ab))
            bp.Cap = 6
            buff = append(buff, targets[:i]...)
            buff = append(buff, targets[i+1:]...)
            build(res[i*step:(i+1)*step], buff)
        }
    }
    return
}

func main() {
    nums := []int{1, 2, 3}
    res := permute(nums)
    fmt.Println(res)
}
The build function without unsafe, which still escapes to the heap:
func build(res [][]int, targets []int) {
    step := len(res) / len(targets)
    for i := range targets {
        for j := i * step; j < (i+1)*step; j++ {
            res[j] = append(res[j], targets[i])
        }
        if len(targets) != 1 {
            buff := make([]int, 0, 6) // make([]int, 0, 6) escapes to heap
            buff = append(buff, targets[:i]...)
            buff = append(buff, targets[i+1:]...)
            build(res[i*step:(i+1)*step], buff)
        }
    }
    return
}
And my test case:
package main

import "testing"

func Benchmark(b *testing.B) {
    for i := 0; i < b.N; i++ {
        permute([]int{1, 2, 3, 4, 5, 6})
    }
}
When I run go build -gcflags="-m", it reports: ./main.go:32:8: moved to heap: ab
Trying to subvert the compiler using unsafe.Pointer only makes it harder for escape analysis to do its job, and it prevents the slice from being stack allocated. Simply allocate a single slice and reuse it for each loop iteration:
func build(res [][]int, targets []int) {
    buff := make([]int, 0, 6)
    step := len(res) / len(targets)
    for i := range targets {
        buff = buff[:0]
        for j := i * step; j < (i+1)*step; j++ {
            res[j] = append(res[j], targets[i])
        }
        if len(targets) != 1 {
            buff = append(buff, targets[:i]...)
            buff = append(buff, targets[i+1:]...)
            build(res[i*step:(i+1)*step], buff)
        }
    }
    return
}
This version can be optimized correctly by the compiler:
./main.go:26:17: make([]int, 0, 6) does not escape
And will result in only the desired allocations:
Benchmark-8 44607 26838 ns/op 52992 B/op 721 allocs/op
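If you want the test suite itself to guard against the buffer regressing to a heap allocation, one option is testing.AllocsPerRun. The test below is only a sketch (the name TestPermuteAllocs and the threshold of 721, taken from the benchmark output above, are assumptions about this particular workload):

func TestPermuteAllocs(t *testing.T) {
    allocs := testing.AllocsPerRun(100, func() {
        permute([]int{1, 2, 3, 4, 5, 6})
    })
    // 721 allocations per run are expected for the pre-allocated result
    // slices (720 permutations plus the outer slice); anything above that
    // suggests the scratch buffer has started escaping again.
    if allocs > 721 {
        t.Errorf("got %v allocs per run, want at most 721", allocs)
    }
}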

Why does having a default clause in a goroutine's select make it slower?

Referring to the following benchmark code:
func BenchmarkRuneCountNoDefault(b *testing.B) {
    b.StopTimer()
    var strings []string
    numStrings := 10
    for n := 0; n < numStrings; n++ {
        s := RandStringBytesMaskImprSrc(10)
        strings = append(strings, s)
    }
    jobs := make(chan string)
    results := make(chan int)
    for i := 0; i < runtime.NumCPU(); i++ {
        go RuneCountNoDefault(jobs, results)
    }
    b.StartTimer()
    for n := 0; n < b.N; n++ {
        go func() {
            for n := 0; n < numStrings; n++ {
                <-results
            }
            return
        }()
        for n := 0; n < numStrings; n++ {
            jobs <- strings[n]
        }
    }
    close(jobs)
}

func RuneCountNoDefault(jobs chan string, results chan int) {
    for {
        select {
        case j, ok := <-jobs:
            if ok {
                results <- utf8.RuneCountInString(j)
            } else {
                return
            }
        }
    }
}

func BenchmarkRuneCountWithDefault(b *testing.B) {
    b.StopTimer()
    var strings []string
    numStrings := 10
    for n := 0; n < numStrings; n++ {
        s := RandStringBytesMaskImprSrc(10)
        strings = append(strings, s)
    }
    jobs := make(chan string)
    results := make(chan int)
    for i := 0; i < runtime.NumCPU(); i++ {
        go RuneCountWithDefault(jobs, results)
    }
    b.StartTimer()
    for n := 0; n < b.N; n++ {
        go func() {
            for n := 0; n < numStrings; n++ {
                <-results
            }
            return
        }()
        for n := 0; n < numStrings; n++ {
            jobs <- strings[n]
        }
    }
    close(jobs)
}

func RuneCountWithDefault(jobs chan string, results chan int) {
    for {
        select {
        case j, ok := <-jobs:
            if ok {
                results <- utf8.RuneCountInString(j)
            } else {
                return
            }
        default: //DIFFERENCE
        }
    }
}

//https://stackoverflow.com/questions/22892120/how-to-generate-a-random-string-of-a-fixed-length-in-golang
const letterBytes = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
const (
    letterIdxBits = 6                    // 6 bits to represent a letter index
    letterIdxMask = 1<<letterIdxBits - 1 // All 1-bits, as many as letterIdxBits
    letterIdxMax  = 63 / letterIdxBits   // # of letter indices fitting in 63 bits
)

var src = rand.NewSource(time.Now().UnixNano())

func RandStringBytesMaskImprSrc(n int) string {
    b := make([]byte, n)
    // A src.Int63() generates 63 random bits, enough for letterIdxMax characters!
    for i, cache, remain := n-1, src.Int63(), letterIdxMax; i >= 0; {
        if remain == 0 {
            cache, remain = src.Int63(), letterIdxMax
        }
        if idx := int(cache & letterIdxMask); idx < len(letterBytes) {
            b[i] = letterBytes[idx]
            i--
        }
        cache >>= letterIdxBits
        remain--
    }
    return string(b)
}
When I benchmark both functions, where RuneCountNoDefault has no default clause in its select and RuneCountWithDefault does, I get the following results:
BenchmarkRuneCountNoDefault-4 200000 8910 ns/op
BenchmarkRuneCountWithDefault-4 5 277798660 ns/op
Checking the CPU profile generated by the tests, I noticed that the function with the default clause spends a lot of time in channel operations.
Why does having a default clause in the goroutine's select make it slower?
I'm using Go version 1.10 for windows/amd64
The Go Programming Language Specification, Select statements:

    If one or more of the communications can proceed, a single one that can proceed is chosen via a uniform pseudo-random selection. Otherwise, if there is a default case, that case is chosen. If there is no default case, the "select" statement blocks until at least one of the communications can proceed.
Modifying your benchmark to count the number of proceed and default cases taken:
$ go test default_test.go -bench=.
goos: linux
goarch: amd64
BenchmarkRuneCountNoDefault-4 300000 4108 ns/op
BenchmarkRuneCountWithDefault-4 10 209890782 ns/op
--- BENCH: BenchmarkRuneCountWithDefault-4
default_test.go:90: proceeds 114
default_test.go:91: defaults 128343308
$
While the other cases were unable to proceed, the default case was taken 128343308 times in 209422470 ns (209890782 - 114*4108), or about 1.63 nanoseconds per default case. If you do something small a large number of times, it adds up.
default_test.go:
package main

import (
    "math/rand"
    "runtime"
    "sync/atomic"
    "testing"
    "time"
    "unicode/utf8"
)

func BenchmarkRuneCountNoDefault(b *testing.B) {
    b.StopTimer()
    var strings []string
    numStrings := 10
    for n := 0; n < numStrings; n++ {
        s := RandStringBytesMaskImprSrc(10)
        strings = append(strings, s)
    }
    jobs := make(chan string)
    results := make(chan int)
    for i := 0; i < runtime.NumCPU(); i++ {
        go RuneCountNoDefault(jobs, results)
    }
    b.StartTimer()
    for n := 0; n < b.N; n++ {
        go func() {
            for n := 0; n < numStrings; n++ {
                <-results
            }
            return
        }()
        for n := 0; n < numStrings; n++ {
            jobs <- strings[n]
        }
    }
    close(jobs)
}

func RuneCountNoDefault(jobs chan string, results chan int) {
    for {
        select {
        case j, ok := <-jobs:
            if ok {
                results <- utf8.RuneCountInString(j)
            } else {
                return
            }
        }
    }
}

var proceeds, defaults uint64

func BenchmarkRuneCountWithDefault(b *testing.B) {
    b.StopTimer()
    var strings []string
    numStrings := 10
    for n := 0; n < numStrings; n++ {
        s := RandStringBytesMaskImprSrc(10)
        strings = append(strings, s)
    }
    jobs := make(chan string)
    results := make(chan int)
    for i := 0; i < runtime.NumCPU(); i++ {
        go RuneCountWithDefault(jobs, results)
    }
    b.StartTimer()
    for n := 0; n < b.N; n++ {
        go func() {
            for n := 0; n < numStrings; n++ {
                <-results
            }
            return
        }()
        for n := 0; n < numStrings; n++ {
            jobs <- strings[n]
        }
    }
    close(jobs)
    b.Log("proceeds", atomic.LoadUint64(&proceeds))
    b.Log("defaults", atomic.LoadUint64(&defaults))
}

func RuneCountWithDefault(jobs chan string, results chan int) {
    for {
        select {
        case j, ok := <-jobs:
            atomic.AddUint64(&proceeds, 1)
            if ok {
                results <- utf8.RuneCountInString(j)
            } else {
                return
            }
        default: //DIFFERENCE
            atomic.AddUint64(&defaults, 1)
        }
    }
}

//https://stackoverflow.com/questions/22892120/how-to-generate-a-random-string-of-a-fixed-length-in-golang
const letterBytes = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
const (
    letterIdxBits = 6                    // 6 bits to represent a letter index
    letterIdxMask = 1<<letterIdxBits - 1 // All 1-bits, as many as letterIdxBits
    letterIdxMax  = 63 / letterIdxBits   // # of letter indices fitting in 63 bits
)

var src = rand.NewSource(time.Now().UnixNano())

func RandStringBytesMaskImprSrc(n int) string {
    b := make([]byte, n)
    // A src.Int63() generates 63 random bits, enough for letterIdxMax characters!
    for i, cache, remain := n-1, src.Int63(), letterIdxMax; i >= 0; {
        if remain == 0 {
            cache, remain = src.Int63(), letterIdxMax
        }
        if idx := int(cache & letterIdxMask); idx < len(letterBytes) {
            b[i] = letterBytes[idx]
            i--
        }
        cache >>= letterIdxBits
        remain--
    }
    return string(b)
}
Playground: https://play.golang.org/p/DLnAY0hovQG
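The practical takeaway: with nothing to receive, a select that has a default case never blocks, so the loop spins flat out and burns a CPU instead of letting the scheduler park the goroutine. Here is a minimal, self-contained sketch of that effect (not part of the benchmark above); removing the default case turns the same select into a blocking receive:

package main

import (
    "fmt"
    "time"
)

// spinningWorker polls jobs in a tight loop: while the channel is empty the
// default case is taken over and over, so the goroutine never sleeps.
func spinningWorker(jobs chan string, done chan uint64) {
    var spins uint64
    for {
        select {
        case _, ok := <-jobs:
            if !ok {
                done <- spins
                return
            }
        default:
            spins++
        }
    }
}

func main() {
    jobs := make(chan string)
    done := make(chan uint64)
    go spinningWorker(jobs, done)

    time.Sleep(10 * time.Millisecond) // let the worker "wait" for a while
    close(jobs)
    fmt.Println("default case taken", <-done, "times while waiting 10ms")
}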

Golang foreach difference

func Benchmark_foreach1(b *testing.B) {
    var test map[int]int
    test = make(map[int]int)
    for i := 0; i < 100000; i++ {
        test[i] = 1
    }
    for i := 0; i < b.N; i++ {
        for i, _ := range test {
            if test[i] != 1 {
                panic("ds")
            }
        }
    }
}

func Benchmark_foreach2(b *testing.B) {
    var test map[int]int
    test = make(map[int]int)
    for i := 0; i < 100000; i++ {
        test[i] = 1
    }
    for i := 0; i < b.N; i++ {
        for _, v := range test {
            if v != 1 {
                panic("heh")
            }
        }
    }
}
Running it gives the following results:
goos: linux
goarch: amd64
Benchmark_foreach1-2 500 3172323 ns/op
Benchmark_foreach2-2 1000 1707214 ns/op
Why is foreach2 slow?
I think Benchmark_foreach2-2 is about 2 times faster, not slower: it takes 1707214 ns per operation while the first takes 3172323, so the second is 3172323 / 1707214 ≈ 1.85 times faster.
The reason: the second loop already has the value in the v variable and doesn't need to read it from the map again, whereas test[i] in the first loop performs an extra map lookup on every iteration.
The test[k] expression in BenchmarkForeachK performs an additional map lookup to read the value, so BenchmarkForeachK takes more time than BenchmarkForeachV: 9362945 ns/op versus 4213940 ns/op.
For example,
package main

import "testing"

func testMap() map[int]int {
    test := make(map[int]int)
    for i := 0; i < 100000; i++ {
        test[i] = 1
    }
    return test
}

func BenchmarkForeachK(b *testing.B) {
    test := testMap()
    b.ReportAllocs()
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        for k := range test {
            if test[k] != 1 {
                panic("eh")
            }
        }
    }
}

func BenchmarkForeachV(b *testing.B) {
    test := testMap()
    b.ReportAllocs()
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        for _, v := range test {
            if v != 1 {
                panic("heh")
            }
        }
    }
}
Output:
$ go test foreach_test.go -bench=.
BenchmarkForeachK-4 200 9362945 ns/op 0 B/op 0 allocs/op
BenchmarkForeachV-4 300 4213940 ns/op 0 B/op 0 allocs/op
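If you need both the key and the value, ranging over both still reads each entry only once; it is indexing the map inside the loop body that costs the extra lookup. A sketch that could be appended to the file above (BenchmarkForeachKV is a made-up name):

func BenchmarkForeachKV(b *testing.B) {
    test := testMap()
    b.ReportAllocs()
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        // k and v come straight from the iterator; no second map lookup.
        for k, v := range test {
            if v != 1 {
                panic(k)
            }
        }
    }
}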

How to write isNumeric function in Golang?

I want to check if a string is numeric.
For example:
"abcd123" should return false.
"1.4" or "240" should return true.
I thought about using ParseInt and ParseFloat (from the strconv package), but am not sure if that is the right way.
    I was thinking of using strconv ParseInt and ParseFloat but not sure if that is the right way.
Well, it's certainly a right way.
You don't need to use ParseInt, though. ParseFloat will do the job.
func isNumeric(s string) bool {
    _, err := strconv.ParseFloat(s, 64)
    return err == nil
}
See an example here: https://play.golang.org/p/D53HRS-KIL
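One caveat before settling on ParseFloat: it accepts more than plain digits, including signs, exponents, "Inf" and "NaN" (and, on current Go versions, hexadecimal floats). Whether that counts as "numeric" depends on your use case. A small sketch to see the behaviour:

package main

import (
    "fmt"
    "strconv"
)

func isNumeric(s string) bool {
    _, err := strconv.ParseFloat(s, 64)
    return err == nil
}

func main() {
    // Everything here except "abcd123" is accepted by ParseFloat.
    for _, s := range []string{"240", "1.4", "-1.4", "1e5", "Inf", "NaN", "abcd123"} {
        fmt.Printf("%-9q %v\n", s, isNumeric(s))
    }
}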
If you actually need to convert the string to a floating-point number, strconv.ParseFloat is the first choice.
But if you only need to know that the string contains nothing but the digits "0123456789" and at most one '.', a hand-written check is much cheaper; for me isNumDot is about 12x faster than isNumeric.
Consider this (1.7 seconds), optimized for performance:
func isNumDot(s string) bool {
    dotFound := false
    for _, v := range s {
        if v == '.' {
            if dotFound {
                return false
            }
            dotFound = true
        } else if v < '0' || v > '9' {
            return false
        }
    }
    return true
}
and this (21.7 seconds, since it does the extra work of actually converting the string to a floating-point number):
func isNumeric(s string) bool {
    _, err := strconv.ParseFloat(s, 64)
    return err == nil
}
Try it:
package main

import (
    "fmt"
    "strconv"
    "time"
)

func isNumDot(s string) bool {
    dotFound := false
    for _, v := range s {
        if v == '.' {
            if dotFound {
                return false
            }
            dotFound = true
        } else if v < '0' || v > '9' {
            return false
        }
    }
    return true
}

func isNumeric(s string) bool {
    _, err := strconv.ParseFloat(s, 64)
    return err == nil
}

func main() {
    fmt.Println(isNumDot("240"))     //true
    fmt.Println(isNumDot("abcd123")) //false
    fmt.Println(isNumDot("0.4."))    //false
    fmt.Println(isNumDot("240 "))    //false

    benchmark(isNumDot)
    benchmark(isNumeric)
}

func benchmark(f func(string) bool) {
    var res bool
    t := time.Now()
    for i := 0; i < 100000000; i++ {
        res = f("a 240") || f("abcd123") || f("0.4.") || f("240 ")
    }
    fmt.Println(time.Since(t))
    fmt.Println(res)
}
output:
true
false
false
false
1.7822s
false
21.723s
false
Using the benchmark (isNumDot is faster than isNumeric):
BenchmarkIsNumDot-8 34117197 31.2 ns/op 0 B/op 0 allocs/op
BenchmarkIsNumeric-8 1931089 630 ns/op 192 B/op 4 allocs/op
// r = isNumDot("2.22")
BenchmarkIsNumDot-8 102849996 11.4 ns/op 0 B/op 0 allocs/op
BenchmarkIsNumeric-8 21994874 48.5 ns/op 0 B/op 0 allocs/op
// r = isNumDot("a 240")
BenchmarkIsNumDot-8 256610877 4.58 ns/op 0 B/op 0 allocs/op
BenchmarkIsNumeric-8 8962381 140 ns/op 48 B/op 1 allocs/op
The benchmark:
package main

import (
    "testing"
)

var r bool

func BenchmarkIsNumDot(b *testing.B) {
    for i := 0; i < b.N; i++ {
        r = isNumDot("a 240") || isNumDot("abcd123") || isNumDot("0.4.") || isNumDot("240 ")
    }
}

func BenchmarkIsNumeric(b *testing.B) {
    for i := 0; i < b.N; i++ {
        r = isNumeric("a 240") || isNumeric("abcd123") || isNumeric("0.4.") || isNumeric("240 ")
    }
}
I tried to comment on Adrian's answer but I don't have enough reputation points. Building on that excellent response, here is a variation using a regular expression (Go's regexp package uses RE2 syntax). A brief explanation of the symbols, in case you are unfamiliar with regular expressions:
"^" matches the start of input (i.e. beginning of your string)
"$" matches the end of input (i.e. the end of your string)
"()" are grouping operators
"*" matches 0 or more
"+" matches 1 or more
"?" matches exactly 0 or 1
"\d" is a character class which represents the character values 0 through 9
So the following requires at least one leading digit, permits forms such as "0.", and matches most strings normally read as unsigned floating-point values. You can experiment with it a bit.
func isFloat(s string) bool {
    // regexp.MatchString returns (matched bool, err error) and recompiles the
    // pattern on every call; for hot paths, compile once with regexp.MustCompile.
    matched, _ := regexp.MatchString(`^\d+(\.\d*)?$`, s)
    return matched
}
Naturally, if you are calling this function to validate data, it should be cleaned:
str := strings.TrimSpace(someString)
if isFloat(str) {
    ...
}
That only works on ASCII characters. If you are dealing with UTF-8 or another multi-byte character set, it can still be done with regexp, but more work is required, and perhaps another approach altogether is better.
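For the multi-byte case, one option (a sketch, not a drop-in for the answers above) is to walk the runes with the unicode package instead of comparing bytes against '0'..'9':

package main

import (
    "fmt"
    "unicode"
)

// isDigitsOrDot reports whether s is non-empty and consists only of Unicode
// decimal digits with at most one '.'. Note that this also accepts non-ASCII
// digits such as "١٢٣"; whether that is desirable depends on your data.
func isDigitsOrDot(s string) bool {
    dotSeen := false
    for _, r := range s {
        if r == '.' {
            if dotSeen {
                return false
            }
            dotSeen = true
            continue
        }
        if !unicode.IsDigit(r) {
            return false
        }
    }
    return len(s) > 0
}

func main() {
    fmt.Println(isDigitsOrDot("240"), isDigitsOrDot("1.4"), isDigitsOrDot("abcd123")) // true true false
}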
You can use strconv.Atoi to check integer values and strconv.ParseFloat for float values. Below is an example:
package main

import (
    "fmt"
    "strconv"
)

func main() {
    v1 := "14"
    if _, err := strconv.Atoi(v1); err == nil {
        fmt.Printf("%q looks like a number.\n", v1)
    } else {
        fmt.Printf("%q is not a number.\n", v1)
    }

    v2 := "1.4"
    if _, err := strconv.ParseFloat(v2, 64); err == nil {
        fmt.Printf("%q looks like a float.\n", v2)
    } else {
        fmt.Printf("%q is not a float.\n", v2)
    }
}

/* Output:
"14" looks like a number.
"1.4" looks like a float.
*/
You can check it on the Go Playground.
All the answers are valid, but there's another option not yet suggested:
re := regexp.MustCompile(`^[0-9]+(\.[0-9]+)?$`)
isNum := re.Match([]byte("ab123"))
Playground demo
I hit this in a high-throughput system today and did a benchmark of the three suggestions. Results:
BenchmarkNumDot-4: 657966132: 18.2 ns/op
BenchmarkNumeric-4: 49575919: 226 ns/op
BenchmarkRegexp-4: 18817201: 628 ns/op
Code follows since the playground does not support benchmarking.
package main

import (
    "regexp"
    "strconv"
    "testing"
)

func BenchmarkNumDot(b *testing.B) {
    for i := 0; i < b.N; i++ {
        isNumDot("abc")
        isNumDot("123")
        isNumDot("12.34")
        isNumDot("1.2.3.4")
    }
}

func BenchmarkNumeric(b *testing.B) {
    for i := 0; i < b.N; i++ {
        isNumeric("abc")
        isNumeric("123")
        isNumeric("12.34")
        isNumeric("1.2.3.4")
    }
}

func BenchmarkRegexp(b *testing.B) {
    re := regexp.MustCompile(`^[0-9]+(\.[0-9]+)?$`)
    for i := 0; i < b.N; i++ {
        isNumReg("abc", re)
        isNumReg("123", re)
        isNumReg("12.34", re)
        isNumReg("1.2.3.4", re)
    }
}

func isNumDot(s string) bool {
    dotFound := false
    for _, v := range s {
        if v == '.' {
            if dotFound {
                return false
            }
            dotFound = true
        } else if v < '0' || v > '9' {
            return false
        }
    }
    return true
}

func isNumeric(s string) bool {
    _, err := strconv.ParseFloat(s, 64)
    return err == nil
}

func isNumReg(s string, re *regexp.Regexp) bool {
    return re.Match([]byte(s))
}
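Worth noting alongside the timings: the three functions do not agree on edge cases. For example, isNumDot as written accepts "" and "." while the other two reject them, and isNumeric accepts exponent forms such as "1e5" that the other two reject. A quick check that could be added to the same test file (TestAgreement is a made-up name):

func TestAgreement(t *testing.T) {
    re := regexp.MustCompile(`^[0-9]+(\.[0-9]+)?$`)
    for _, s := range []string{"", ".", "1.", "1e5", "-1", "0.4"} {
        t.Logf("%-5q dot=%v float=%v regexp=%v",
            s, isNumDot(s), isNumeric(s), isNumReg(s, re))
    }
}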

Concat byte arrays

Can someone please point me at a more efficient version of the following?
b := make([]byte, 0, sizeTotal)
b = append(b, size...)
b = append(b, contentType...)
b = append(b, lenCallbackid...)
b = append(b, lenTarget...)
b = append(b, lenAction...)
b = append(b, lenContent...)
b = append(b, callbackid...)
b = append(b, target...)
b = append(b, action...)
b = append(b, content...)
Every variable is a byte slice, apart from sizeTotal.
Update:
Code:
type Message struct {
    size        uint32
    contentType uint8
    callbackId  string
    target      string
    action      string
    content     string
}

var res []byte
var b []byte = make([]byte, 0, 4096)

func (m *Message) ToByte() []byte {
    callbackIdIntLen := len(m.callbackId)
    targetIntLen := len(m.target)
    actionIntLen := len(m.action)
    contentIntLen := len(m.content)

    lenCallbackid := make([]byte, 4)
    binary.LittleEndian.PutUint32(lenCallbackid, uint32(callbackIdIntLen))
    callbackid := []byte(m.callbackId)

    lenTarget := make([]byte, 4)
    binary.LittleEndian.PutUint32(lenTarget, uint32(targetIntLen))
    target := []byte(m.target)

    lenAction := make([]byte, 4)
    binary.LittleEndian.PutUint32(lenAction, uint32(actionIntLen))
    action := []byte(m.action)

    lenContent := make([]byte, 4)
    binary.LittleEndian.PutUint32(lenContent, uint32(contentIntLen))
    content := []byte(m.content)

    sizeTotal := 21 + callbackIdIntLen + targetIntLen + actionIntLen + contentIntLen
    size := make([]byte, 4)
    binary.LittleEndian.PutUint32(size, uint32(sizeTotal))

    b = b[:0]
    b = append(b, size...)
    b = append(b, byte(m.contentType))
    b = append(b, lenCallbackid...)
    b = append(b, lenTarget...)
    b = append(b, lenAction...)
    b = append(b, lenContent...)
    b = append(b, callbackid...)
    b = append(b, target...)
    b = append(b, action...)
    b = append(b, content...)
    res = b
    return b
}

func FromByte(bytes []byte) *Message {
    size := binary.LittleEndian.Uint32(bytes[0:4])
    contentType := bytes[4:5][0]
    lenCallbackid := binary.LittleEndian.Uint32(bytes[5:9])
    lenTarget := binary.LittleEndian.Uint32(bytes[9:13])
    lenAction := binary.LittleEndian.Uint32(bytes[13:17])
    lenContent := binary.LittleEndian.Uint32(bytes[17:21])
    callbackid := string(bytes[21 : 21+lenCallbackid])
    target := string(bytes[21+lenCallbackid : 21+lenCallbackid+lenTarget])
    action := string(bytes[21+lenCallbackid+lenTarget : 21+lenCallbackid+lenTarget+lenAction])
    content := string(bytes[size-lenContent : size])
    return &Message{size, contentType, callbackid, target, action, content}
}
Benchmarks:
func BenchmarkMessageToByte(b *testing.B) {
    m := NewMessage(uint8(3), "agsdggsdasagdsdgsgddggds", "sometarSFAFFget", "somFSAFSAFFSeaction", "somfasfsasfafsejsonzhit")
    for n := 0; n < b.N; n++ {
        m.ToByte()
    }
}

func BenchmarkMessageFromByte(b *testing.B) {
    m := NewMessage(uint8(1), "sagdsgaasdg", "soSASFASFASAFSFASFAGmetarget", "adsgdgsagdssgdsgd", "agsdsdgsagdsdgasdg").ToByte()
    for n := 0; n < b.N; n++ {
        FromByte(m)
    }
}

func BenchmarkStringToByte(b *testing.B) {
    for n := 0; n < b.N; n++ {
        _ = []byte("abcdefghijklmnoqrstuvwxyz")
    }
}

func BenchmarkStringFromByte(b *testing.B) {
    s := []byte("abcdefghijklmnoqrstuvwxyz")
    for n := 0; n < b.N; n++ {
        _ = string(s)
    }
}

func BenchmarkUintToByte(b *testing.B) {
    for n := 0; n < b.N; n++ {
        i := make([]byte, 4)
        binary.LittleEndian.PutUint32(i, uint32(99))
    }
}

func BenchmarkUintFromByte(b *testing.B) {
    i := make([]byte, 4)
    binary.LittleEndian.PutUint32(i, uint32(99))
    for n := 0; n < b.N; n++ {
        binary.LittleEndian.Uint32(i)
    }
}
Benchmark results:
BenchmarkMessageToByte 10000000 280 ns/op
BenchmarkMessageFromByte 10000000 293 ns/op
BenchmarkStringToByte 50000000 55.1 ns/op
BenchmarkStringFromByte 50000000 49.7 ns/op
BenchmarkUintToByte 1000000000 2.14 ns/op
BenchmarkUintFromByte 2000000000 1.71 ns/op
Provided the memory is already allocated, a sequence of x = append(x, a...) calls is rather efficient in Go.
In your example, the initial allocation (make) probably costs more than the sequence of appends. It depends on the size of the fields. Consider the following benchmark:
package main

import (
    "testing"
)

const sizeTotal = 25

var res []byte // To enforce heap allocation

func BenchmarkWithAlloc(b *testing.B) {
    a := []byte("abcde")
    for i := 0; i < b.N; i++ {
        x := make([]byte, 0, sizeTotal)
        x = append(x, a...)
        x = append(x, a...)
        x = append(x, a...)
        x = append(x, a...)
        x = append(x, a...)
        res = x // Make sure x escapes, and is therefore heap allocated
    }
}

func BenchmarkWithoutAlloc(b *testing.B) {
    a := []byte("abcde")
    x := make([]byte, 0, sizeTotal)
    for i := 0; i < b.N; i++ {
        x = x[:0]
        x = append(x, a...)
        x = append(x, a...)
        x = append(x, a...)
        x = append(x, a...)
        x = append(x, a...)
        res = x
    }
}
On my box, the result is:
testing: warning: no tests to run
PASS
BenchmarkWithAlloc 10000000 116 ns/op 32 B/op 1 allocs/op
BenchmarkWithoutAlloc 50000000 24.0 ns/op 0 B/op 0 allocs/op
Systematically reallocating the buffer (even a small one) makes this benchmark at least 5 times slower.
So your best hope of optimizing this code is to make sure you do not reallocate a buffer for each packet you build. Instead, keep your buffer and reuse it for each marshalling operation.
You can reset a slice while keeping its underlying buffer allocated with the following statement:
x = x[:0]
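Beyond reusing the output buffer, the four temporary 4-byte length slices in ToByte can also be avoided by encoding each length directly into the destination. The sketch below is only an illustration of that idea, not a drop-in replacement (appendUint32 and ToByteInto are made-up names; recent Go versions also provide binary.LittleEndian.AppendUint32 for the same purpose):

// appendUint32 encodes v straight into b, avoiding a temporary slice per field.
func appendUint32(b []byte, v uint32) []byte {
    var tmp [4]byte
    binary.LittleEndian.PutUint32(tmp[:], v)
    return append(b, tmp[:]...)
}

// ToByteInto writes the message into a caller-supplied buffer, so the caller
// controls allocation and reuse. The wire layout matches ToByte above.
func (m *Message) ToByteInto(b []byte) []byte {
    b = b[:0]
    total := 21 + len(m.callbackId) + len(m.target) + len(m.action) + len(m.content)
    b = appendUint32(b, uint32(total))
    b = append(b, m.contentType)
    b = appendUint32(b, uint32(len(m.callbackId)))
    b = appendUint32(b, uint32(len(m.target)))
    b = appendUint32(b, uint32(len(m.action)))
    b = appendUint32(b, uint32(len(m.content)))
    b = append(b, m.callbackId...)
    b = append(b, m.target...)
    b = append(b, m.action...)
    b = append(b, m.content...)
    return b
}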
I looked carefully at that and made the following benchmarks.
package append

import "testing"

func BenchmarkAppend(b *testing.B) {
    as := 1000
    a := make([]byte, as)
    s := make([]byte, 0, b.N*as)
    for i := 0; i < b.N; i++ {
        s = append(s, a...)
    }
}

func BenchmarkCopy(b *testing.B) {
    as := 1000
    a := make([]byte, as)
    s := make([]byte, 0, b.N*as)
    for i := 0; i < b.N; i++ {
        copy(s[i*as:(i+1)*as], a)
    }
}
The results are
grzesiek#klapacjusz ~/g/s/t/append> go test -bench . -benchmem
testing: warning: no tests to run
PASS
BenchmarkAppend 10000000 202 ns/op 1000 B/op 0 allocs/op
BenchmarkCopy 10000000 201 ns/op 1000 B/op 0 allocs/op
ok test/append 4.564s
If sizeTotal is big enough, your code makes no extra memory allocations. It copies only the bytes it needs to copy. It is perfectly fine.
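For the general "concatenate several byte slices" case, the numbers above show append and copy cost about the same once the destination is sized correctly, so the main thing is to allocate the destination once. A generic sketch (essentially what bytes.Join does with an empty separator):

// concat sizes the destination once, then appends every part into it.
func concat(parts ...[]byte) []byte {
    total := 0
    for _, p := range parts {
        total += len(p)
    }
    out := make([]byte, 0, total)
    for _, p := range parts {
        out = append(out, p...)
    }
    return out
}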
