I was looking at the Go source code; below is a snippet from sleep.go (package time):
package time
// Sleep pauses the current goroutine for at least the duration d.
// A negative or zero duration causes Sleep to return immediately.
func Sleep(d Duration)
// runtimeNano returns the current value of the runtime clock in nanoseconds.
func runtimeNano() int64
How is it possible that func Sleep(d Duration) doesn't have any implementation? Where can I find the actual implementation of the Sleep function?
Edit: The answer can be found in the two links provided by @DaveC. However, please read the comments below for an explanation of why the empty function body can't be replaced with Go code even after Go became self-hosting (1.5).
Considering what Sleep() does (it deschedules a goroutine for a given amount of time), you can't express that in Go, so it's probably implemented as some kind of compiler intrinsic.
It looks like Sleep is in fact implemented in assembly: the bodiless Go declaration is satisfied by an implementation supplied by the runtime at link time.
The following Go Playground example shows, in simplified form, what I have defined. I am passing a map to a function by value (not by reference), and the function also calls itself recursively, which I assume passes by value as well.
https://play.golang.org/p/na6y6Wih4M
// this function has no write operations to dataMap, just reads
// dataMap, in fact, has no write operations since it was copied
func findParentAncestors(ID int, dataMap map[int]Data) []Data {
results := []Data{}
if _, ok := dataMap[ID]; ok {
if parentData, ok := dataMap[dataMap[ID].ParentID]; ok {
results = append(results, parentData)
// recursion
results = append(results, findParentAncestors(parentData.ID, dataMap)...)
}
}
return results
}
PROBLEM: somewhere along my program's execution, which involves much more data than this example (obviously), the error "fatal error: concurrent map read and map write" points at function findParentAncestors():
main.findParentAncestors(0x39e3, 0xc82013ac90, 0x0, 0x0, 0x0)
/opt/test/src/test.go:17 +0xa6 fp=0xc820269fb8 sp=0xc820269bd0
main.findParentAncestors(0x5d25, 0xc82013ac90, 0x0, 0x0, 0x0)
/opt/test/src/test.go:21 +0x239 fp=0xc82026a3a0 sp=0xc820269fb8
From your example, https://play.golang.org/p/na6y6Wih4M:
// the originalMap is defined elsewhere in the program (here represented)
originalMap := map[int]Data{}
originalMap[0] = Data{ID: 0, ParentID: -1, Name: "zero"}
originalMap[1] = Data{ID: 1, ParentID: 0, Name: "one"}
originalMap[2] = Data{ID: 2, ParentID: 1, Name: "two"}
// copies the original map from a global location (here represented)
copiedMap := originalMap
// identifies ancestors using the copied map
parents := findParentAncestors(2, copiedMap)
This is a misnomer: copiedMap := originalMap does not copy the map.
In Go all arguments are passed by value; it's equivalent to assigning each argument to its parameter. For a map, assignment (copiedMap := originalMap) or passing by value (findParentAncestors(2, copiedMap)) copies only the map value itself, which is a pointer to the runtime's map header, which in turn points to the key-value data. Both variables therefore refer to the same underlying data, and you have a potential race condition if there are any concurrent writes to the map.
You are using go version go1.6.3 linux/amd64, so run the race detector.
Go 1.6 Release Notes
Runtime
The runtime has added lightweight, best-effort detection of concurrent
misuse of maps. As always, if one goroutine is writing to a map, no
other goroutine should be reading or writing the map concurrently. If
the runtime detects this condition, it prints a diagnosis and crashes
the program. The best way to find out more about the problem is to run
the program under the race detector, which will more reliably identify
the race and give more detail.
Command go
Compile packages and dependencies
-race
enable data race detection.
Supported only on linux/amd64, freebsd/amd64, darwin/amd64 and windows/amd64.
Also, compile and run your program using Go 1.8, the current release of Go, which significantly improves detection of concurrent map misuse.
Go 1.8 Release Notes
Concurrent Map Misuse
In Go 1.6, the runtime added lightweight, best-effort detection of
concurrent misuse of maps. This release improves that detector with
support for detecting programs that concurrently write to and iterate
over a map.
As always, if one goroutine is writing to a map, no other goroutine
should be reading (which includes iterating) or writing the map
concurrently. If the runtime detects this condition, it prints a
diagnosis and crashes the program. The best way to find out more about
the problem is to run the program under the race detector, which will
more reliably identify the race and give more detail.
I have tried to profile some Go applications but I couldn't get it working. I followed these two tutorials:
http://blog.golang.org/profiling-go-programs
http://saml.rilspace.org/profiling-and-creating-call-graphs-for-go-programs-with-go-tool-pprof
Both say that after adding some lines of code to the application, you have to execute your app. I did that and received the following message on the screen:
2015/06/16 12:04:00 profile: cpu profiling enabled,
/var/folders/kg/4fxym1sn0bx02zl_2sdbmrhr9wjvqt/T/profile680799962/cpu.pprof
So, I understand that the profiling is being executed, sending info to the file.
But when I look at the file size, in every program I test, it is always 64 bytes.
When I try to open the cpu.pprof file with pprof, and I execute the "top10" command, I see that nothing is in the file:
("./fact" is my app)
go tool pprof ./fact
/var/folders/kg/4fxym1sn0bx02zl_2sdbmrhr9wjvqt/T/profile680799962/cpu.pprof
(pprof) top10
0 of 0 total (    0%)
      flat  flat%   sum%        cum   cum%
So, it is like nothing is happening when I am profiling.
I have tested it on Mac (this example) and on Ubuntu, with three different programs.
Do you know what I am doing wrong?
The example program is very simple; this is the code (it is a very simple factorial program that I took from the internet):
package main

import "fmt"
import "github.com/davecheney/profile"
func fact(n int) int {
if n == 0 {
return 1
}
return n * fact(n-1)
}
func main() {
defer profile.Start(profile.CPUProfile).Stop()
fmt.Println(fact(30))
}
Thanks,
Fer
As inf mentioned already, your code executes too fast. The reason is that pprof works by repeatedly halting your program during its execution, looking at which function is running at that moment in time and writing that down (together with the whole function call stack). Pprof samples with a rate of 100 samples per second. This is hardcoded in runtime/pprof/pprof.go as you can easily check (see https://golang.org/src/runtime/pprof/pprof.go line 575 and the comment above it):
func StartCPUProfile(w io.Writer) error {
// The runtime routines allow a variable profiling rate,
// but in practice operating systems cannot trigger signals
// at more than about 500 Hz, and our processing of the
// signal is not cheap (mostly getting the stack trace).
// 100 Hz is a reasonable choice: it is frequent enough to
// produce useful data, rare enough not to bog down the
// system, and a nice round number to make it easy to
// convert sample counts to seconds. Instead of requiring
// each client to specify the frequency, we hard code it.
const hz = 100
// Avoid queueing behind StopCPUProfile.
// Could use TryLock instead if we had it.
if cpu.profiling {
return fmt.Errorf("cpu profiling already in use")
}
cpu.Lock()
defer cpu.Unlock()
if cpu.done == nil {
cpu.done = make(chan bool)
}
// Double-check.
if cpu.profiling {
return fmt.Errorf("cpu profiling already in use")
}
cpu.profiling = true
runtime.SetCPUProfileRate(hz)
go profileWriter(w)
return nil
}
The longer your program runs, the more samples can be taken, and the more probable it becomes that even short-running functions are sampled. If your program finishes before the first sample is taken, then the generated cpu.pprof will be empty.
As you can see from the code above, the sampling rate is set with
runtime.SetCPUProfileRate(..)
If you call runtime.SetCPUProfileRate() with another value before you call StartCPUProfile(), you can override the sampling rate. You will see a warning during execution, "runtime: cannot set cpu profile rate until previous profile has finished.", which you can ignore; it appears because pprof.go calls SetCPUProfileRate() again, and since you have already set a rate, the one from pprof is ignored.
Also, Dave Cheney has released a new version of his profiling tool, you can find it here: https://github.com/pkg/profile . There, you can, among other changes, specify the path where the cpu.pprof is written to:
defer profile.Start(profile.CPUProfile, profile.ProfilePath(".")).Stop()
You can read about it here: http://dave.cheney.net/2014/10/22/simple-profiling-package-moved-updated
By the way, your fact() function will quickly overflow, even if you use int64 for the parameter and return value. 30! is roughly 2*10^32, and an int64 only stores values up to 2^63-1, which is roughly 9*10^18.
The problem is that your function is running too fast and pprof can't sample it. Try adding a loop around the fact call and sum the result to artificially prolong the program.
I struggled with empty pprof files, until I realized that I followed outdated blog articles.
The upstream docs are good: https://pkg.go.dev/runtime/pprof
Write a test which you want to profile, then:
go test -cpuprofile cpu.prof -memprofile mem.prof -bench .
this creates cpu.prof and mem.prof.
You can analyze them like this:
go tool pprof cpu.prof
This gives you a command-line. The "top" and "web" commands are used often.
Same for memory:
go tool pprof mem.prof
I am running a program with go 1.4 and I am trying to pass a large struct to a go function.
go ProcessImpression(network, &logImpression, campaign, actualSpent, partnerAccount, deviceId, otherParams)
I get this error:
runtime.newproc: function arguments too large for new goroutine
I have switched to passing a pointer, which helps, but I am wondering whether there is some other way to pass large structs to a goroutine.
Thanks,
No, none I know of.
I don't think you should be too aggressive about tuning to avoid copying, but it appears from the source that this error is emitted when the parameters exceed the usable stack space for a new goroutine, which should be a few kilobytes. The copying overhead is real at that point, especially if these values are copied more than once. Perhaps some struct is larger than expected, either explicitly, thanks to a large struct member (a 1 KB array rather than a slice, say), or indirectly. If not, just using a pointer as you have makes sense, and if you're worried about creating garbage, recycle the pointed-to structs using sync.Pool.
I was able to fix this issue by changing the arguments from
func doStuff(prev, next User)
to
func doStuff(prev, next *User)
The answer from @twotwotwo here is very helpful.
I got this issue while processing a list of values ([]BigType) of a big struct:
for _, stct := range listBigStcts {
go func(stct BigType) {
...process stct ...
}(stct) // <-- error occurs here
}
The workaround is to replace []BigType with []*BigType.
I have some kind of "main loop" using GLUT. I'd like to be able to measure how much time it takes to render a frame; the frame time might then be used in other calculations. The time form isn't adequate:
(time (procedure))
I found out that there is a function called current-time. I had to import some package to get it.
(define ct (current-time))
Which define ct as a time object. Unfortunately, I couldn't find any arithmetic packages for dates in scheme. I saw that in Racket there is something called current-inexact-milliseconds which is exactly what I'm looking for because it has nanoseconds.
Using the time object, there is a way to convert it to nanoseconds using
(time->nanoseconds ct)
This lets me do something like this
(let ((newTime (current-time)))
(block)
(print (- (time->nanoseconds newTime) (time->nanoseconds oldTime)))
(set! oldTime newTime))
That seems good enough for me, except that for some reason it prints things like this:
0
10000
0
0
10000
0
10000
I'm rendering things using OpenGL, and I find it hard to believe that some rendering loops take 0 nanoseconds, or that each loop is stable enough to always take exactly the same number of nanoseconds.
Your results are not so surprising, because you have to consider the limited timer resolution of each system. There are limits that depend on the processor and on OS processes; timers cannot count as accurately as we might expect, even though a quartz oscillator can reach and exceed a period of a nanosecond. You are also limited by the accuracy and resolution of the functions you use. I had a look at the Chicken Scheme documentation, and there is nothing similar to Racket's (current-inexact-milliseconds) → real?.
After digging around, I came up with the solution of writing it in C and binding it to Scheme:
(require-extension bind)
(bind-rename "getTime" "current-microseconds")
(bind* #<<EOF
uint64_t getTime();
#ifndef CHICKEN
#include <sys/time.h>
uint64_t getTime() {
struct timeval tim;
gettimeofday(&tim, NULL);
return 1000000 * tim.tv_sec + tim.tv_usec;
}
#endif
EOF
)
Unfortunately this solution isn't ideal, because it is Chicken-only. It could be shipped as a library, but a library wrapping a single function that exists in no other Scheme doesn't make much sense.
Since nanosecond precision doesn't actually buy much here, I used microseconds instead.
Note the trick here: the function to be wrapped is declared above, and the #ifndef prevents bind from parsing the include. When the file is compiled by GCC, it is built with the include and the function definition.
CHICKEN has current-milliseconds: http://api.call-cc.org/doc/library/current-milliseconds
In Go's runtime library we have NumGoroutine(), which returns the total number of goroutines running at that moment. I was wondering if there is a simple way to get the number of goroutines running a specific function?
Currently it tells me I have 1000, or whatever, goroutines in total, but I would like to know that I have 500 goroutines running func Foo. Possible? Simple? Or shouldn't I bother with it?
I'm afraid you have to count the goroutines on your own if you are interested in those numbers. The cheapest way to achieve your goal would be to use the sync/atomic package directly.
import "sync/atomic"
var counter int64
func example() {
atomic.AddInt64(&counter, 1)
defer atomic.AddInt64(&counter, -1)
// ...
}
Use atomic.LoadInt64(&counter) whenever you want to read the current value of the counter.
And it's not uncommon to have a couple of such counters in your program, so that you can monitor it easily. For example, take a look at the CacheStats struct of the recently published groupcache source.