Measure heap growth accurately - go

I am trying to measure the evolution of the number of heap-allocated objects before and after I call a function. I am forcing runtime.GC() and using runtime.ReadMemStats to measure the number of heap objects I have before and after.
The problem I have is that I sometimes see unexpected heap growth. And it is different after each run.
A simple example is below, where I would always expect to see zero heap-object growth.
https://go.dev/play/p/FBWfXQHClaG
package main

import (
	"log"
	"runtime"
)

var mem1_before, mem2_before, mem1_after, mem2_after runtime.MemStats

func measure_nothing(before, after *runtime.MemStats) {
	runtime.GC()
	runtime.ReadMemStats(before)
	runtime.GC()
	runtime.ReadMemStats(after)
}

func main() {
	measure_nothing(&mem1_before, &mem1_after)
	measure_nothing(&mem2_before, &mem2_after)
	log.Printf("HeapObjects diff = %d", int64(mem1_after.HeapObjects-mem1_before.HeapObjects))
	log.Printf("HeapAlloc diff %d", int64(mem1_after.HeapAlloc-mem1_before.HeapAlloc))
	log.Printf("HeapObjects diff = %d", int64(mem2_after.HeapObjects-mem2_before.HeapObjects))
	log.Printf("HeapAlloc diff %d", int64(mem2_after.HeapAlloc-mem2_before.HeapAlloc))
}
Sample output:
2009/11/10 23:00:00 HeapObjects diff = 0
2009/11/10 23:00:00 HeapAlloc diff 0
2009/11/10 23:00:00 HeapObjects diff = 4
2009/11/10 23:00:00 HeapAlloc diff 1864
Is what I'm trying to do impractical? I assume the runtime is doing things that allocate/free heap memory. Can I tell it to stop to make my measurements? (This is for a test checking for memory leaks, not production code.)

You can't predict what garbage collection and reading all the memory stats require in the background, so calling them to calculate memory allocations and usage is not reliable.
Luckily, Go's testing framework can monitor and calculate memory usage for us.
So write a benchmark function and let the testing framework do its job of reporting memory allocations and usage.
Let's assume we want to measure this foo() function:
var x []int64

func foo(allocs, size int) {
	for i := 0; i < allocs; i++ {
		x = make([]int64, size)
	}
}
All it does is allocate a slice of the given size, and it does this the given number of times (allocs).
Let's write benchmarking functions for different scenarios:
func BenchmarkFoo_0_0(b *testing.B) {
	for i := 0; i < b.N; i++ {
		foo(0, 0)
	}
}

func BenchmarkFoo_1_1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		foo(1, 1)
	}
}

func BenchmarkFoo_2_2(b *testing.B) {
	for i := 0; i < b.N; i++ {
		foo(2, 2)
	}
}
Running the benchmark with go test -bench . -benchmem, the output is:
BenchmarkFoo_0_0-8 1000000000 0.3204 ns/op 0 B/op 0 allocs/op
BenchmarkFoo_1_1-8 67101626 16.58 ns/op 8 B/op 1 allocs/op
BenchmarkFoo_2_2-8 27375050 42.42 ns/op 32 B/op 2 allocs/op
As you can see, the allocations per function call match what we pass as the allocs argument, and the allocated memory is the expected allocs * size * 8 bytes (8 bytes per int64 element).
Note that the reported allocations per op is an integer value (it's the result of an integer division), so if the benchmarked function only occasionally allocates, it might not be reported in the integer result. For details, see Output from benchmem.
Like in this example:
var x []int64

func bar() {
	if rand.Float64() < 0.3 {
		x = make([]int64, 10)
	}
}
This bar() function does 1 allocation with 30% probability (and none with 70% probability), which means on average it does 0.3 allocations. Benchmarking it:
func BenchmarkBar(b *testing.B) {
	for i := 0; i < b.N; i++ {
		bar()
	}
}
Output is:
BenchmarkBar-8 38514928 29.60 ns/op 24 B/op 0 allocs/op
We can see there is 24 bytes allocation (0.3 * 10 * 8 bytes), which is correct, but the reported allocations per op is 0.
Luckily for us, we can also benchmark a function from our main app using the testing.Benchmark() function. It returns a testing.BenchmarkResult including all details about memory usage. We have access to the total number of allocations and to the number of iterations, so we can calculate allocations per op using floating point numbers:
func main() {
	rand.Seed(time.Now().UnixNano())
	tr := testing.Benchmark(BenchmarkBar)
	fmt.Println("Allocs/op", tr.AllocsPerOp())
	fmt.Println("B/op", tr.AllocedBytesPerOp())
	fmt.Println("Precise allocs/op:", float64(tr.MemAllocs)/float64(tr.N))
}
This will output:
Allocs/op 0
B/op 24
Precise allocs/op: 0.3000516369276302
We can see the expected ~0.3 allocations per op.
Now if we go ahead and benchmark your measure_nothing() function:
func BenchmarkNothing(b *testing.B) {
	for i := 0; i < b.N; i++ {
		measure_nothing(&mem1_before, &mem1_after)
	}
}
We get this output:
Allocs/op 0
B/op 11
Precise allocs/op: 0.12182030338389732
As you can see, running the garbage collector twice and reading the memory stats twice occasionally requires an allocation: 0.12 times per call on average, or roughly 1 out of every 10 calls.

Related

Confusing results from golang benchmarking of function and go routine call overhead

Out of curiosity, I am trying to understand the function and goroutine call overhead in Go. I therefore wrote the benchmarks below, with their results below that. The result for BenchmarkNestedFunctions confuses me: it seems far too high, so I naturally assume I have done something wrong. I expected BenchmarkNestedFunctions to be slightly higher than BenchmarkNopFunc and very close to BenchmarkSplitNestedFunctions. Can anyone suggest what I may be misunderstanding or doing wrong?
package main

import (
	"testing"
)

// Intended to allow me to see the iteration overhead being used in the benchmarking
func BenchmarkTestLoop(b *testing.B) {
	for i := 0; i < b.N; i++ {
	}
}

//go:noinline
func nop() {
}

// Intended to allow me to see the overhead from making a do-nothing function call which I hope is not being optimised out
func BenchmarkNopFunc(b *testing.B) {
	for i := 0; i < b.N; i++ {
		nop()
	}
}

// Intended to allow me to see the added cost from creating a channel, closing it and then reading from it
func BenchmarkChannelMakeCloseRead(b *testing.B) {
	for i := 0; i < b.N; i++ {
		done := make(chan struct{})
		close(done)
		_, _ = <-done
	}
}

//go:noinline
func nestedfunction(n int, done chan<- struct{}) {
	n--
	if n > 0 {
		nestedfunction(n, done)
	} else {
		close(done)
	}
}

// Intended to allow me to see the added cost of making 1 function call doing a set of channel operations for each call
func BenchmarkUnnestedFunctions(b *testing.B) {
	for i := 0; i < b.N; i++ {
		done := make(chan struct{})
		nestedfunction(1, done)
		_, _ = <-done
	}
}

// Intended to allow me to see the added cost of repeated nested calls and stack growth, with an upper limit on the call depth to allow examination of a particular stack size
func BenchmarkNestedFunctions(b *testing.B) {
	// Max number of nested function calls to prevent excessive stack growth
	const max int = 200000
	if b.N > max {
		b.N = max
	}
	done := make(chan struct{})
	nestedfunction(b.N, done)
	_, _ = <-done
}

// Intended to allow me to see the added cost of repeated nested calls with any stack reuse the runtime supports (presuming it doesn't free and then realloc the stack as it grows)
func BenchmarkSplitNestedFunctions(b *testing.B) {
	// Max number of nested function calls to prevent excessive stack growth
	const max int = 200000
	for i := 0; i < b.N; i += max {
		done := make(chan struct{})
		if (b.N - i) > max {
			nestedfunction(max, done)
		} else {
			nestedfunction(b.N-i, done)
		}
		_, _ = <-done
	}
}

// Intended to allow me to see the added cost of spinning up a goroutine to perform comparable useful work as the nested function calls
func BenchmarkNestedGoRoutines(b *testing.B) {
	done := make(chan struct{})
	go nestedgoroutines(b.N, done)
	_, _ = <-done
}
The benchmarks are invoked as follows:
$ go test -bench=. -benchmem -benchtime=200ms
goos: windows
goarch: amd64
pkg: golangbenchmarks
cpu: AMD Ryzen 9 3900X 12-Core Processor
BenchmarkTestLoop-24 1000000000 0.2247 ns/op 0 B/op 0 allocs/op
BenchmarkNopFunc-24 170787386 1.402 ns/op 0 B/op 0 allocs/op
BenchmarkChannelMakeCloseRead-24 3990243 52.72 ns/op 96 B/op 1 allocs/op
BenchmarkUnnestedFunctions-24 4791862 58.63 ns/op 96 B/op 1 allocs/op
BenchmarkNestedFunctions-24 200000 50.11 ns/op 0 B/op 0 allocs/op
BenchmarkSplitNestedFunctions-24 155160835 1.528 ns/op 0 B/op 0 allocs/op
BenchmarkNestedGoRoutines-24 636734 412.2 ns/op 24 B/op 1 allocs/op
PASS
ok golangbenchmarks 1.700s
The BenchmarkTestLoop, BenchmarkNopFunc and BenchmarkSplitNestedFunctions results seem reasonably consistent with each other and make sense: BenchmarkSplitNestedFunctions does more work per benchmark operation than BenchmarkNopFunc on average, but not by much, because the expensive channel make/close/read sequence is only performed about once every 200,000 benchmark operations.
Similarly, the BenchmarkChannelMakeCloseRead and BenchmarkUnnestedFunctions results seem consistent, since each BenchmarkUnnestedFunctions iteration does only slightly more than each BenchmarkChannelMakeCloseRead iteration: a decrement and an if test, which could potentially cause a branch prediction failure (although I would have hoped the branch predictor could use the last branch result; I don't know how complex the close function implementation is, which may be overwhelming the branch history).
However, BenchmarkNestedFunctions and BenchmarkSplitNestedFunctions are radically different, and I don't understand why. They should be similar, with the only intentional difference being any grown-stack re-use, and I did not expect the stack growth cost to be nearly so high. (Or is that the explanation, and it is just coincidence that the result is so similar to the BenchmarkChannelMakeCloseRead result, making me think it is not actually doing what I thought it was?)
It should also be noted that the BenchmarkNestedFunctions result can occasionally take significantly different values; I have seen values in the range of 10 to 200 ns/op when running it repeatedly. It can also fail to report any ns/op time while still passing; I have no idea what is going on there:
BenchmarkChannelMakeCloseRead-24 5724488 54.26 ns/op 96 B/op 1 allocs/op
BenchmarkUnnestedFunctions-24 3992061 57.49 ns/op 96 B/op 1 allocs/op
BenchmarkNestedFunctions-24 200000 0 B/op 0 allocs/op
BenchmarkNestedFunctions2-24 154956972 1.590 ns/op 0 B/op 0 allocs/op
BenchmarkNestedGoRoutines-24 1000000 342.1 ns/op 24 B/op 1 allocs/op
If anyone can point out my mistake in the benchmark / my interpretation of the results and explain what is really happening then that would be greatly appreciated
Background info:
Stack growth and function inlining: https://dave.cheney.net/2020/04/25/inlining-optimisations-in-go
Stack growth limitations: https://dave.cheney.net/2013/06/02/why-is-a-goroutines-stack-infinite
Golang stack structure: https://blog.cloudflare.com/how-stacks-are-handled-in-go/
Branch prediction: https://en.wikipedia.org/wiki/Branch_predictor
Top level 3900X architecture overview: https://www.techpowerup.com/review/amd-ryzen-9-3900x/3.html
3900X branch prediction history/buffer size 16/512/7k: https://www.techpowerup.com/review/amd-ryzen-9-3900x/images/arch3.jpg

pass const pointer of large struct to a function or a go channel

How can I pass a const pointer to a large struct to a function or a Go channel?
The purpose of this is to:
Avoid accidental modification of the pointed-to struct by the function
Avoid copying the struct object while passing it to a function/channel
This functionality is very common in C++, C#, Java, but how can we achieve the same in golang?
============== Update 2 ===================
Thank you #zerkms, #mkopriva and #peterSO.
It was compiler optimization causing the same result in both byValue() and byPointer().
I modified byValue() and byPointer() by adding data.array[0] = reverse(data.array[0]), just to prevent the compiler from inlining the functions.
func byValue(data Data) int {
	data.array[0] = reverse(data.array[0])
	return len(data.array)
}

func byPointer(data *Data) int {
	data.array[0] = reverse(data.array[0])
	return len(data.array)
}

func reverse(s string) string {
	runes := []rune(s)
	for i, j := 0, len(runes)-1; i < j; i, j = i+1, j-1 {
		runes[i], runes[j] = runes[j], runes[i]
	}
	return string(runes)
}
After that, running the benchmarks showed that passing by pointer was much more efficient than passing by value.
C:\Users\anikumar\Desktop\TestGo>go test -bench=.
goos: windows
goarch: amd64
BenchmarkByValue-4 18978 58228 ns/op 3 B/op 1 allocs/op
BenchmarkByPointer-4 40034295 33.1 ns/op 3 B/op 1 allocs/op
PASS
ok _/C_/Users/anikumar/Desktop/TestGo 3.336s
C:\Users\anikumar\Desktop\TestGo>go test -gcflags -N -run=none -bench=.
goos: windows
goarch: amd64
BenchmarkByValue-4 20961 59380 ns/op 3 B/op 1 allocs/op
BenchmarkByPointer-4 31386213 36.5 ns/op 3 B/op 1 allocs/op
PASS
ok _/C_/Users/anikumar/Desktop/TestGo 3.909s
============== Update ===================
Based on feedback from #zerkms, I created a test to find the performance difference between passing by value and passing by pointer.
package main

import (
	"log"
	"time"
)

const size = 99999

// Data ...
type Data struct {
	array [size]string
}

func main() {
	// Preparing large data
	var data Data
	for i := 0; i < size; i++ {
		data.array[i] = "This is really long string"
	}
	// Starting test
	const max = 9999999999
	start := time.Now()
	for i := 0; i < max; i++ {
		byValue(data)
	}
	elapsed := time.Since(start)
	log.Printf("By Value took %s", elapsed)
	start = time.Now()
	for i := 0; i < max; i++ {
		byPointer(&data)
	}
	elapsed = time.Since(start)
	log.Printf("By Pointer took %s", elapsed)
}

func byValue(data Data) int {
	data.array[0] = reverse(data.array[0])
	return len(data.array)
}

func byPointer(data *Data) int {
	data.array[0] = reverse(data.array[0])
	return len(data.array)
}

func reverse(s string) string {
	runes := []rune(s)
	for i, j := 0, len(runes)-1; i < j; i, j = i+1, j-1 {
		runes[i], runes[j] = runes[j], runes[i]
	}
	return string(runes)
}
After 10 iterations of the above program, I did not find any difference in execution time.
C:\Users\anikumar\Desktop\TestGo>TestGo.exe
2020/02/16 15:52:03 By Value took 5.2798936s
2020/02/16 15:52:09 By Pointer took 5.3466306s
C:\Users\anikumar\Desktop\TestGo>TestGo.exe
2020/02/16 15:52:18 By Value took 5.3596692s
2020/02/16 15:52:23 By Pointer took 5.2724685s
C:\Users\anikumar\Desktop\TestGo>TestGo.exe
2020/02/16 15:52:29 By Value took 5.2359938s
2020/02/16 15:52:34 By Pointer took 5.2838676s
C:\Users\anikumar\Desktop\TestGo>TestGo.exe
2020/02/16 15:52:42 By Value took 5.8374936s
2020/02/16 15:52:49 By Pointer took 6.9524342s
C:\Users\anikumar\Desktop\TestGo>TestGo.exe
2020/02/16 15:53:40 By Value took 5.4364867s
2020/02/16 15:53:46 By Pointer took 5.8712875s
C:\Users\anikumar\Desktop\TestGo>TestGo.exe
2020/02/16 15:53:54 By Value took 5.5481591s
2020/02/16 15:54:00 By Pointer took 5.5600314s
C:\Users\anikumar\Desktop\TestGo>TestGo.exe
2020/02/16 15:54:10 By Value took 5.4753771s
2020/02/16 15:54:16 By Pointer took 6.4368084s
C:\Users\anikumar\Desktop\TestGo>TestGo.exe
2020/02/16 15:54:24 By Value took 5.4783356s
2020/02/16 15:54:30 By Pointer took 5.5312314s
C:\Users\anikumar\Desktop\TestGo>TestGo.exe
2020/02/16 15:54:39 By Value took 5.4853542s
2020/02/16 15:54:45 By Pointer took 5.5541164s
C:\Users\anikumar\Desktop\TestGo>TestGo.exe
2020/02/16 15:54:57 By Value took 5.4633856s
2020/02/16 15:55:03 By Pointer took 5.4863226s
Looks like #zerkms is right. It is not because of language, it is because of modern hardware.
Meaningless microbenchmarks produce meaningless results.
In Go, all arguments are passed by value.
For your updated example (TestGo),
$ go version
go version devel +6917529cc6 Sat Feb 15 16:40:12 2020 +0000 linux/amd64
$ go run microbench.go
2020/02/16 13:12:56 By Value took 2.877045229s
2020/02/16 13:12:59 By Pointer took 2.875847918s
$
Go compilers are usually optimizing compilers. For example,
./microbench.go:39:6: can inline byValue
./microbench.go:43:6: can inline byPointer
./microbench.go:26:10: inlining call to byValue
./microbench.go:33:12: inlining call to byPointer
There is no function call overhead. Therefore, there is no difference in execution time.
microbench.go:
package main

import (
	"log"
	"time"
)

const size = 99999

// Data ...
type Data struct {
	array [size]string
}

func main() {
	// Preparing large data
	var data Data
	for i := 0; i < size; i++ {
		data.array[i] = "This is really long string"
	}
	// Starting test
	const max = 9999999999
	start := time.Now()
	for i := 0; i < max; i++ {
		byValue(data)
	}
	elapsed := time.Since(start)
	log.Printf("By Value took %s", elapsed)
	start = time.Now()
	for i := 0; i < max; i++ {
		byPointer(&data)
	}
	elapsed = time.Since(start)
	log.Printf("By Pointer took %s", elapsed)
}

func byValue(data Data) int {
	return len(data.array)
}

func byPointer(data *Data) int {
	return len(data.array)
}
ADDENDUM
Comment: #Anil8753 another thing to note is that the Go standard library has a testing package which provides some useful functionality for benchmarking. For example, next to your main.go file add a main_test.go file (the file name is important) and add these two benchmarks to it; then, from inside the folder, run go test -run=none -bench=. This will print how many operations were executed, how much time a single operation took, how much memory a single operation required, and how many allocations were required. – mkopriva
Go compilers are usually optimizing compilers. Modern hardware is usually heavily optimized.
For mkopriva's microbenchmark,
$ go test microbench.go mkopriva_test.go -bench=.
BenchmarkByValue-4 1000000000 0.289 ns/op 0 B/op 0 allocs/op
BenchmarkByPointer-4 1000000000 0.575 ns/op 0 B/op 0 allocs/op
$
However, for mkopriva's microbenchmark with a sink,
$ go test microbench.go sink_test.go -bench=.
BenchmarkByValue-4 1000000000 0.576 ns/op 0 B/op 0 allocs/op
BenchmarkByPointer-4 1000000000 0.592 ns/op 0 B/op 0 allocs/op
$
mkopriva_test.go:
package main

import (
	"testing"
)

func BenchmarkByValue(b *testing.B) {
	var data Data
	b.ReportAllocs()
	b.ResetTimer()
	for n := 0; n < b.N; n++ {
		byValue(data)
	}
}

func BenchmarkByPointer(b *testing.B) {
	var data Data
	b.ReportAllocs()
	b.ResetTimer()
	for n := 0; n < b.N; n++ {
		byPointer(&data)
	}
}
sink_test.go:
package main

import (
	"testing"
)

var benchInt int

func BenchmarkByValue(b *testing.B) {
	var data Data
	b.ReportAllocs()
	b.ResetTimer()
	for n := 0; n < b.N; n++ {
		benchInt = byValue(data)
	}
}

func BenchmarkByPointer(b *testing.B) {
	var data Data
	b.ReportAllocs()
	b.ResetTimer()
	for n := 0; n < b.N; n++ {
		benchInt = byPointer(&data)
	}
}
I think this is a really good question, and I don't know why people have downvoted it. (That is, the original question of using a "const pointer" to pass a large struct.)
The simple answer is that Go has no way to indicate that a function (or channel) taking a pointer is not going to modify the thing pointed to. Basically it is up to the creator of the function to document that the function will not modify the structure.
#Anil8753, as you explicitly mention channels, I should explain something further. Typically when using a channel you are passing data to another goroutine. If you pass a pointer to the struct, then the sender must be careful not to modify the struct after it has been sent (at least while the receiver could be reading it) and vice versa; otherwise you create a data race.
For this reason I typically pass structs by value over channels. If you need to create something in the sender for the exclusive use of the receiver, then create the struct (on the heap), send a pointer to it over the channel, and never use it again (even assigning nil to the pointer to make this explicit).
#zerkms makes a very good point that before you optimize you should understand what is happening and make measurements. However, in this case there is an obvious performance benefit to not copying memory around. Whether the threshold is 1KB, 1MB, or 1GB, there will come a point where you want to pass by "reference" (i.e., a pointer to the struct) rather than by value, as long as you know the struct won't be modified, or don't care if it is.
In theory and in practice, copying by value becomes very inefficient once the struct is large enough or the function is called often enough.

Golang benchmark: why does allocs/op show 0 B/op?

Here is a code snippet for benchmark:
// bench_test.go
package main

import (
	"testing"
)

func BenchmarkHello(b *testing.B) {
	for i := 0; i < b.N; i++ {
		a := 1
		a++
	}
}
The benchmark reports 0 B/op and 0 allocs/op. Variable a is an int, which doesn't take much memory, but it should not take zero bytes.
> go test -bench=. -benchmem
goos: darwin
goarch: amd64
pkg: a
BenchmarkHello-4 2000000000 0.26 ns/op 0 B/op 0 allocs/op
PASS
ok a 0.553s
Why is this metric allocs/ops zero?
package main

import (
	"testing"
)

func BenchmarkHello(b *testing.B) {
	for i := 0; i < b.N; i++ {
		a := 1
		a++
	}
}
The allocs/op average counts only heap allocations, not stack allocations.
The allocs/op average is rounded down to the nearest integer value.
The Go gc compiler is an optimizing compiler. Since
{
	a := 1
	a++
}
doesn't accomplish anything, it is elided.
The benchmark tool only reports heap allocations. Stack allocations via escape analysis are less costly, possibly free, so are not reported.
Reference
Why is this simple benchmark showing zero allocations?

How to use time value of benchmark

I have written a benchmark for my chess engine in Go:
func BenchmarkStartpos(b *testing.B) {
	board := ParseFen(startpos)
	for i := 0; i < b.N; i++ {
		Perft(&board, 5)
	}
}
I see this output when it runs:
goos: darwin
goarch: amd64
BenchmarkStartpos-4 10 108737398 ns/op
PASS
ok _/Users/dylhunn/Documents/go-chess 1.215s
I want to use the time per execution (in this case, 108737398 ns/op) to compute another value, and also print it as a result of the benchmark. Specifically, I want to output nodes per second, which is given as the result of the Perft call divided by the time per call.
How can I access the time the benchmark took to execute, so I can print my own derived results?
You may use the testing.Benchmark() function to manually measure / benchmark "benchmark" functions (those with the signature func(*testing.B)). You get the result as a value of testing.BenchmarkResult, which is a struct with all the details you need:
type BenchmarkResult struct {
	N         int           // The number of iterations.
	T         time.Duration // The total time taken.
	Bytes     int64         // Bytes processed in one iteration.
	MemAllocs uint64        // The total number of memory allocations.
	MemBytes  uint64        // The total number of bytes allocated.
}
The time per execution is returned by the BenchmarkResult.NsPerOp() method, you can do whatever you want to with that.
See this simple example:
func main() {
	res := testing.Benchmark(BenchmarkSleep)
	fmt.Println(res)
	fmt.Println("Ns per op:", res.NsPerOp())
	fmt.Println("Time per op:", time.Duration(res.NsPerOp()))
}

func BenchmarkSleep(b *testing.B) {
	for i := 0; i < b.N; i++ {
		time.Sleep(time.Millisecond * 12)
	}
}
Output is (try it on the Go Playground):
100 12000000 ns/op
Ns per op: 12000000
Time per op: 12ms

in golang, is there any performance difference between maps initialized using make vs {}

As we know, there are two ways to initialize a map (as listed below). I'm wondering if there is any performance difference between the two approaches.
var myMap map[string]int
then
myMap = map[string]int{}
vs
myMap = make(map[string]int)
On my machine they appear to be about equivalent.
You can easily make a benchmark test to compare. For example:
package bench

import "testing"

var result map[string]int

func BenchmarkMakeLiteral(b *testing.B) {
	var m map[string]int
	for n := 0; n < b.N; n++ {
		m = InitMapLiteral()
	}
	result = m
}

func BenchmarkMakeMake(b *testing.B) {
	var m map[string]int
	for n := 0; n < b.N; n++ {
		m = InitMapMake()
	}
	result = m
}

func InitMapLiteral() map[string]int {
	return map[string]int{}
}

func InitMapMake() map[string]int {
	return make(map[string]int)
}
Which on 3 different runs yielded results that are close enough to be insignificant:
First Run
$ go test -bench=.
testing: warning: no tests to run
PASS
BenchmarkMakeLiteral-8 10000000 160 ns/op
BenchmarkMakeMake-8 10000000 171 ns/op
ok github.com/johnweldon/bench 3.664s
Second Run
$ go test -bench=.
testing: warning: no tests to run
PASS
BenchmarkMakeLiteral-8 10000000 182 ns/op
BenchmarkMakeMake-8 10000000 173 ns/op
ok github.com/johnweldon/bench 3.945s
Third Run
$ go test -bench=.
testing: warning: no tests to run
PASS
BenchmarkMakeLiteral-8 10000000 170 ns/op
BenchmarkMakeMake-8 10000000 170 ns/op
ok github.com/johnweldon/bench 3.751s
When allocating empty maps there is no difference, but with make you can pass a second parameter to pre-allocate space in the map. This saves a lot of reallocations when the map is being populated.
Benchmarks
package maps

import "testing"

const SIZE = 10000

func fill(m map[int]bool, size int) {
	for i := 0; i < size; i++ {
		m[i] = true
	}
}

func BenchmarkEmpty(b *testing.B) {
	for n := 0; n < b.N; n++ {
		m := make(map[int]bool)
		fill(m, SIZE)
	}
}

func BenchmarkAllocated(b *testing.B) {
	for n := 0; n < b.N; n++ {
		m := make(map[int]bool, 2*SIZE)
		fill(m, SIZE)
	}
}
Results
go test -benchmem -bench .
BenchmarkEmpty-8 500 2988680 ns/op 431848 B/op 625 allocs/op
BenchmarkAllocated-8 1000 1618251 ns/op 360949 B/op 11 allocs/op
A year ago I stumbled on the fact that using make with explicitly allocated space is better than using a map literal if your values are not static.
So doing
return map[string]float64{
	"key1": SOME_COMPUTED_ABOVE_VALUE,
	"key2": SOME_COMPUTED_ABOVE_VALUE,
	// more keys here
	"keyN": SOME_COMPUTED_ABOVE_VALUE,
}
is slower than
// some code above
result := make(map[string]float64, SIZE) // SIZE >= N
result["key1"] = SOME_COMPUTED_ABOVE_VALUE
result["key2"] = SOME_COMPUTED_ABOVE_VALUE
// more keys here
result["keyN"] = SOME_COMPUTED_ABOVE_VALUE
return result
for N which are quite big (N=300 in my use case).
The reason is the compiler fails to understand that one needs to allocate at least N slots in the first case.
I wrote a blog post about it
https://trams.github.io/golang-map-literal-performance/
and I reported a bug to the community
https://github.com/golang/go/issues/43020
As of golang 1.17 it is still an issue.
