Go Profiling - Wrong file

I'm profiling in Go using github.com/pkg/profile, and the profile file is created when I run my code, but the output looks like it comes from the example code on the package page. How do I get it to profile my own code?
thanks in advance
Code:
package main

import (
    "fmt"
    "github.com/pkg/profile"
    "time"
)

func main() {
    defer profile.Start(profile.MemProfile).Stop()
    var inicio = time.Now().UnixNano()
    var text = "Olá Mundo!"
    fmt.Println(text)
    var fim = time.Now().UnixNano()
    fmt.Println(fim - inicio)
}
Return:

You can change your profile output path to your current working directory:
profile.ProfilePath(path)
If you are unable to retrieve any samples, it likely means your MemProfileRate is not small enough to capture small allocations. (runtime.MemProfileRate controls sampling: on average one sample is taken per MemProfileRate bytes allocated, so a smaller value means more samples.)
If you are allocating small amounts of memory, set MemProfileRate to a lower value; if you are allocating large amounts of memory, just keep the default. If the profile is cluttered with insignificant allocations, increase MemProfileRate.
profile.MemProfileRate(100)
And one thing you shouldn't forget when using the profile package: the call must be deferred.
defer profile.Start(xxx).Stop()
Here is the complete program.
package main

import (
    "os"
    "github.com/pkg/profile"
)

func main() {
    dir, _ := os.Getwd()
    // decrease the mem profile rate to capture more samples
    defer profile.Start(profile.MemProfile, profile.MemProfileRate(100), profile.ProfilePath(dir)).Stop()
    for i := 0; i < 10000; i++ {
        tmp := make([]byte, 100000)
        tmp[0] = tmp[1] << 0 // fake workload
    }
}
You can also set the profile path so the profile output is written to your current working directory.
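When the deferred Stop runs, pkg/profile writes the memory profile into that directory (typically as mem.pprof). Assuming that default file name, you can then inspect it with:
go tool pprof mem.pprof
Depending on your Go version you may also need to pass the binary, as in go tool pprof ./yourbinary mem.pprof; inside the interactive prompt, commands such as top show where memory was allocated.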

Related

Creating slice byte goroutine hangs

I have an upload program that I am working on, and I am running into an issue. I have n goroutines that handle uploading the parts of a big file. Essentially it splits the file into 100MB chunks and uploads them concurrently, depending on the number of concurrent processes you specify in the config.
The issue I'm having is that when I create a buffer to read the file and upload, the make([]byte, 100000000) hangs... but only if it's in a goroutine. (I'm using 100000000 to simplify the upload calculations.)
Here is an example.
This works: https://play.golang.org/p/tkn8JVir9S
package main

import (
    "fmt"
)

func main() {
    buffer := make([]byte, 100000000)
    fmt.Println(len(buffer))
}
This doesn't: https://play.golang.org/p/H8626OLpqQ
package main

import (
    "fmt"
)

func main() {
    go createBuffer()
    for {
    }
}

func createBuffer() {
    buffer := make([]byte, 100000000)
    fmt.Println(len(buffer))
}
It just hangs... I'm not sure if there is a memory constraint for a goroutine? I tried to research and see what I could find, but found nothing. Any thoughts would be appreciated.
EDIT: Thanks everyone for the feedback. I will say I didn't explain the real issue very well and will try to provide more of a holistic view next time. I ended up using a channel to block to keep my goroutines ready for new files to process. This is for a DR backup uploading to a 3rd party all that requires large files to be split into 100mb chunks. I guess I should have been more clear as to the nature of my program.
This program hangs because there is an infinite loop in your code. Try running the code just like this to prove it to yourself. The goroutine is not what is causing the hanging.
func main() {
    for {
    }
}
If you just want to see fmt.Println(..) print, then I'd recommend having a time.Sleep call or similar.
If you would like to wait for a bunch of goroutines to complete, then I'd recommend this excellent answer to that exact question.
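For completeness, a minimal sketch of waiting with sync.WaitGroup (my illustration, not the linked answer):
package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup
    wg.Add(1) // one goroutine to wait for
    go func() {
        defer wg.Done()
        buffer := make([]byte, 100000000)
        fmt.Println(len(buffer))
    }()
    wg.Wait() // blocks until the goroutine calls Done
}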
Package runtime
import "runtime"

func Gosched()

Gosched yields the processor, allowing other goroutines to run. It does not suspend the current goroutine, so execution resumes automatically.
When you do something strange (for {} and 100MB), you get strange results. Do something reasonable. For example,
package main

import (
    "fmt"
    "runtime"
)

func main() {
    go createBuffer()
    for {
        runtime.Gosched()
    }
}

func createBuffer() {
    buffer := make([]byte, 100000000)
    fmt.Println(len(buffer))
}
Output:
100000000
^Csignal: interrupt

Pooling Maps in Golang

I was curious if anyone has tried to pool maps in Go before? I've read about pooling buffers, and I was wondering whether, by similar reasoning, it could make sense to pool maps if one has to create and destroy them frequently, or whether there is any reason why, a priori, it might not be efficient. When a map is returned to the pool, one would have to iterate through it and delete all elements. However, a popular recommendation is to create a new map instead of deleting the entries of an already-allocated map and reusing it, which makes me think that pooling maps may not be that beneficial.
If your maps change (a lot) in size by deleting or adding entries this will cause new allocations and there will be no benefit of pooling them.
If your maps will not change in size but only the values of the keys will change then pooling will be a successful optimization.
This will work well when you read table-like structures, for instance CSV files or database tables. Each row will contain exactly the same columns, so you don't need to clear any entry.
The benchmark below shows no allocations when run with go test -benchmem -bench .
package mappool

import "testing"

const SIZE = 1000000

func BenchmarkMap(b *testing.B) {
    m := make(map[int]int)
    for i := 0; i < SIZE; i++ {
        m[i] = i
    }
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        for i := 0; i < SIZE; i++ {
            m[i] = m[i] + 1
        }
    }
}
Like @Grzegorz Żur says, if your maps don't change in size very much, then pooling is helpful. To test this, I wrote a benchmark where pooling wins out. The output on my machine is:
Pool time: 115.977µs
No-pool time: 160.828µs
Benchmark code:
package main

import (
    "fmt"
    "math/rand"
    "time"
)

const BenchIters = 1000

func main() {
    pool := map[int]int{}
    poolTime := benchmark(func() {
        useMapForSomething(pool)
        // Return to pool by clearing the map.
        for key := range pool {
            delete(pool, key)
        }
    })
    nopoolTime := benchmark(func() {
        useMapForSomething(map[int]int{})
    })
    fmt.Println("Pool time:", poolTime)
    fmt.Println("No-pool time:", nopoolTime)
}

func useMapForSomething(m map[int]int) {
    for i := 0; i < 1000; i++ {
        m[rand.Intn(300)] += 5
    }
}

// benchmark measures how long f takes, on average.
func benchmark(f func()) time.Duration {
    start := time.Now().UnixNano()
    for i := 0; i < BenchIters; i++ {
        f()
    }
    return time.Nanosecond * time.Duration((time.Now().UnixNano()-start)/BenchIters)
}

Go, not getting string value

package main

import (
    "fmt"
    "io/ioutil"
)

func main() {
    // Just count the files...
    systems, _ := ioutil.ReadDir("./XML")
    fmt.Printf("# of planetary systems\t%d\r\n", len(systems))
    // For each datafile
    for _, element := range systems {
        fmt.Println(element.Name)
    }
}
This line...
fmt.Println(element.Name)
Is outputting a memory address instead of what I assume to be the filename string. Why? How do I get the actual string? Thanks.
Also, all the addresses are the same; I would expect them to differ, so my for-each loop might be broken.
FileInfo.Name is a method of the FileInfo interface; without the parentheses you are printing the method value itself, which fmt renders as a memory address. That is also why all the printed addresses are identical: each element's Name method value refers to the same underlying function. To display the name of the file, call the method before printing:
for _, element := range systems {
    fmt.Println(element.Name())
}

Why is the "infinite" for loop not processed?

I need to wait until x.Addr is updated, but it seems the for loop never runs. I suspect this is due to the Go scheduler, and I'm wondering why it works this way and whether there is any way to fix it (without channels).
package main

import (
    "fmt"
    "time"
)

type T struct {
    Addr *string
}

func main() {
    x := &T{}
    go update(x)
    for x.Addr == nil {
        if x.Addr != nil {
            break
        }
    }
    fmt.Println("Hello, playground")
}

func update(x *T) {
    time.Sleep(2 * time.Second)
    y := ""
    x.Addr = &y
}
There are two (three) problems with your code.
First, you are right: there is no point in the loop at which you yield control to the scheduler, so it never gets a chance to run the update goroutine. To fix this you can set GOMAXPROCS to something bigger than one, so that multiple goroutines can run in parallel.
(However, as it is this won't help as you pass x by value to the update function which means that the main goroutine will never see the update on x. To fix this problem you have to pass x by pointer. Now obsolete as OP fixed the code.)
Finally, note that you have a data race on Addr as you are not using atomic loads and stores.
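As an aside, one race-free variant (a sketch of mine, not from the original answer) stores the pointer in sync/atomic's atomic.Value instead of a raw field, and sleeps in the wait loop instead of spinning:
package main

import (
    "fmt"
    "sync/atomic"
    "time"
)

type T struct {
    addr atomic.Value // holds a *string
}

func main() {
    x := &T{}
    go update(x)
    for x.addr.Load() == nil {
        time.Sleep(10 * time.Millisecond) // yield instead of busy-waiting
    }
    fmt.Println("Hello, playground")
}

func update(x *T) {
    time.Sleep(2 * time.Second)
    y := ""
    x.addr.Store(&y)
}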

Memory consumption skyrockets

I have a program whose memory consumption keeps growing. I'm not sure if it's a memory leak or just a buffer that keeps growing.
I successfully isolated the problem, but I still can't find the cause.
There is some strange behavior: if I remove the compression part, the leak disappears.
So I assume it's there. BUT if I (only) remove the clause with chanTest in the select, the leak disappears too.
Could someone confirm the problem and explain to me why it behaves this way?
I'm using go1.0.3.
Thanks!
Here is the program (it compresses some dummy data every 100 ms):
package main

import (
    "bytes"
    "compress/zlib"
    "fmt"
    "time"
)

func main() {
    timeOut := time.NewTicker(100 * time.Millisecond)
    chanTest := make(chan int32)
    for {
    L:
        for { // timer part
            select {
            case resp := <-chanTest: // strange clause
                fmt.Println("received stuff", resp)
            case <-timeOut.C:
                fmt.Println("break")
                break L
            }
        }
        timeOut = time.NewTicker(100 * time.Millisecond)

        // compression part
        data := []byte{1, 2, 3, 4, 5, 6, 7}
        var b bytes.Buffer
        w := zlib.NewWriter(&b)
        w.Write(data)
        w.Close()
        b.Reset()
    }
}
You're starting a new Ticker inside the loop without calling .Stop() on the original. Since the Ticker runs at an interval, you end up with multiple Tickers continuing to run at the same time.
Calling .Stop() to halt the previous one would technically work:
timeOut.Stop()
timeOut = time.NewTicker(100 * time.Millisecond)
...but seems to defeat the purpose of it. Instead, just don't make new Tickers in the loop, and the original will continue to run:
// timeOut = time.NewTicker(100 * time.Millisecond) // not needed
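Putting the advice together, a minimal sketch of the loop with the Ticker created only once:
package main

import (
    "fmt"
    "time"
)

func main() {
    timeOut := time.NewTicker(100 * time.Millisecond)
    chanTest := make(chan int32)
    for {
    L:
        for {
            select {
            case resp := <-chanTest:
                fmt.Println("received stuff", resp)
            case <-timeOut.C:
                fmt.Println("break")
                break L
            }
        }
        // compression part goes here, unchanged;
        // crucially, no new Ticker is created on each iteration.
    }
}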
There is a fix for this issue in golang tip newer than 25-Feb-2013 (revision 1c50db40d078). If you run hg pull; hg update to update Go's source code and recompile your Go distribution then the memory consumption issue should disappear.
However, this alone does not make the program correct - it only fixes the unusually high memory consumption. A correct implementation needs to be calling timeout.Stop().
