Execution time in C with different return types of main - execution-time

Why is the execution time shorter when we do not use a return type? If I declare main() with return type int, the execution time is 9.76, and if I declare it as just main, the execution time is 6.26. Why is that?

Related

What is the most time efficient way to guarantee at least one nanosecond has elapsed in Go? time.Sleep(time.Nanosecond) can take milliseconds

I have two function calls that I would like to separate by at least a nanosecond. But I want the delay to be as small as possible.
The code below shows that an empty for loop is much more efficient at this than using time.Sleep(time.Nanosecond).
Is there an even more efficient way to guarantee at least one nanosecond has elapsed?
func TimeWaster() {
    start := uint64(time.Now().UnixNano())
    stop := uint64(time.Now().UnixNano())
    fmt.Println(time.Duration(stop - start)) // 0s
    // no nanoseconds pass

    start = uint64(time.Now().UnixNano())
    time.Sleep(time.Nanosecond)
    stop = uint64(time.Now().UnixNano())
    fmt.Println(time.Duration(stop - start)) // 6.9482ms
    // much *much* more than one nanosecond passes

    start = uint64(time.Now().UnixNano())
    for uint64(time.Now().UnixNano()) == start {
        // intentionally empty loop
    }
    stop = uint64(time.Now().UnixNano())
    fmt.Println(time.Duration(stop - start)) // 59.3µs
    // much quicker than time.Sleep(time.Nanosecond), but still much slower than 1 nanosecond
}
The package you're using strangely enforces uniqueness of values by time, so all you need to do is loop until the time package stops reporting the same value for the current nanosecond. This doesn't happen after 1 nanosecond; in fact, the resolution of UnixNano is about 100 nanoseconds on my machine, and the value only updates about every 0.5 milliseconds.
package main

import (
    "fmt"
    "time"
)

func main() {
    fmt.Println(time.Now().UnixNano())
    smallWait()
    fmt.Println(time.Now().UnixNano())
}

func smallWait() {
    for start := time.Now().UnixNano(); time.Now().UnixNano() == start; {
    }
}
The loop is pretty self-explanatory: just repeat until UnixNano() returns a different value.
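As an aside, here is a minimal alternative sketch of mine (not from the original answer), assuming a spin on the monotonic clock, which time.Since reads, is acceptable instead of comparing wall-clock UnixNano values:
func smallWaitMonotonic() {
    // busy-wait until the monotonic clock advances at all
    start := time.Now()
    for time.Since(start) == 0 {
    }
}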

Unit of time for duration

I have this in my code:
fmt.Println("... ", time.Since(s1))
fmt.Println(".... ", time.Since(s2))
The result for the first is always in µs and for the second in ns (for example 7.081µs and 365ns, respectively).
What causes this? How can I control it? I'd like 7081ns to be displayed, always in ns.
I looked at the function; how could I interpret it?
// Since returns the time elapsed since t.
// It is shorthand for time.Now().Sub(t).
func Since(t Time) Duration {
    var now Time
    if t.wall&hasMonotonic != 0 {
        // Common case optimization: if t has monotonic time, then Sub will use only it.
        now = Time{hasMonotonic, runtimeNano() - startNano, nil}
    } else {
        now = Now()
    }
    return now.Sub(t)
}
The fmt package calls the time.Duration.String() method (because time.Duration implements the fmt.Stringer interface), which will use smaller units (milli-, micro-, or nanoseconds) if the duration is less than one second. You cannot control this directly.
You can however convert the number of nanoseconds returned from duration.Nanoseconds() to a string using Itoa, e.g. like this:
formatted := strconv.Itoa(int(time.Since(s2).Nanoseconds())) + "ns"
You can also see this example on the Go Playground.
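For completeness, a minimal runnable sketch of mine (the Sleep is a hypothetical stand-in for the code being timed) showing the default formatting next to two always-nanoseconds variants; fmt.Printf with %d is equivalent to the strconv.Itoa approach:
package main

import (
    "fmt"
    "strconv"
    "time"
)

func main() {
    s2 := time.Now()
    time.Sleep(365 * time.Nanosecond) // stand-in for the code being timed
    d := time.Since(s2)

    fmt.Println("default:", d)                                         // unit chosen by Duration.String()
    fmt.Println("always ns:", strconv.Itoa(int(d.Nanoseconds()))+"ns") // the answer's approach
    fmt.Printf("always ns: %dns\n", d.Nanoseconds())                   // equivalent via Printf
}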

Why is a float64 number throwing an int-related error in Go

I am trying to grasp Go. In one of the tutorial examples it says that "An untyped constant takes the type needed by its context."
package main

import "fmt"

const (
    // Create a huge number by shifting a 1 bit left 100 places.
    // In other words, the binary number that is 1 followed by 100 zeroes.
    Big = 1 << 100
    // Shift it right again 99 places, so we end up with 1<<1, or 2.
    Small = Big >> 99
)

func needInt(x int) int { return x*10 + 1 }
func needFloat(x float64) float64 {
    return x * 0.1
}

func main() {
    fmt.Println(needInt(Small))
    fmt.Println(needFloat(Small))
    // Here Big is too large a number but can be handled as a float64.
    // No compilation error is thrown here.
    fmt.Println(needFloat(Big))
    // The line below throws the following compilation error:
    // constant 1267650600228229401496703205376 overflows int
    fmt.Println(Big)
}
When calling fmt.Println(Big), why is Go treating Big as an int, whereas by context it should be float64?
What am I missing?
What is the context for fmt.Println? In other words, what does fmt.Println expect Big to be? An interface{}.
From the Go Blog on Constants:
What happens when fmt.Printf is called with an untyped constant is that an interface value is created to pass as an argument, and the concrete type stored for that argument is the default type of the constant.
So the default type of the constant must be an int. The page goes on to talk about how the defaults get determined based on syntax, not necessarily the value of the const.
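To make the two contexts concrete, here is a small sketch of mine using the same Big constant:
package main

import "fmt"

const Big = 1 << 100 // untyped integer constant

func main() {
    // A float64 context converts the constant, so this compiles and runs:
    fmt.Println(float64(Big)) // 1.2676506002282294e+30

    // An interface{} parameter supplies no type, so the constant's default
    // type (int) is used, and 1<<100 does not fit in int:
    // fmt.Println(Big) // compile error: constant 1267650600228229401496703205376 overflows int
}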
Big in fmt.Println(Big) has an integer type whose value exceeds the maximum int value, 9223372036854775807.
You can derive the maximum int with this logic:
const MaxUint = ^uint(0)
const MaxInt = int(MaxUint >> 1)
fmt.Println(MaxInt) // prints 9223372036854775807
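As an aside (not in the original answer): since Go 1.17 the math package exposes this limit directly as math.MaxInt, e.g.
fmt.Println(math.MaxInt) // 9223372036854775807 on 64-bit platforms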
To fix it, you need to convert it to float64, like this:
fmt.Println(float64(Big))

How to measure execution time of function in golang, excluding waiting time

I need to measure the execution time (CPU cost) of plugins in Go. We can treat plugins as functions, and there may be many goroutines running at the same time. More precisely, the execution time should exclude idle time (goroutine waiting time) and count only the CPU time acquired by the current goroutine.
It's something like this:
go func() {
    // this func is a plugin
    ** start to record cpu acquire time of current func/plugin/goroutine **
    ** run code **
    ** stop to record cpu acquire time of current func/plugin/goroutine **
    log.Debugf("This function is busy for %d millisecs.", cpuAcquireTime)
    ** report cpuAcquireTime to monitor **
}()
In my circumstance it's hard to write a unit test to measure the function; the code is hard to decouple.
I searched Google and Stack Overflow and found no clue. Is there any solution that satisfies this requirement, and would it consume too many resources?
There is no built-in way in Go to measure CPU time, but you can do it in a platform-specific way.
For example, on POSIX systems (e.g. Linux) use clock_gettime with CLOCK_THREAD_CPUTIME_ID as the parameter.
Similarly you can use CLOCK_PROCESS_CPUTIME_ID to measure process CPU time and CLOCK_MONOTONIC for elapsed time.
Example:
package main

/*
#include <pthread.h>
#include <time.h>
#include <stdio.h>

static long long getThreadCpuTimeNs() {
    struct timespec t;
    if (clock_gettime(CLOCK_THREAD_CPUTIME_ID, &t)) {
        perror("clock_gettime");
        return 0;
    }
    return t.tv_sec * 1000000000LL + t.tv_nsec;
}
*/
import "C"

import (
    "fmt"
    "time"
)

func main() {
    cputime1 := C.getThreadCpuTimeNs()
    doWork()
    cputime2 := C.getThreadCpuTimeNs()
    fmt.Printf("CPU time = %d ns\n", cputime2-cputime1)
}

func doWork() {
    x := 1
    for i := 0; i < 100000000; i++ {
        x *= 11111
    }
    time.Sleep(time.Second)
}
Output:
CPU time = 31250000 ns
Note the output is in nanoseconds. So here CPU time is 0.03 sec.
For people who stumble on this later like I did: you can actually use the built-in syscall.Getrusage instead of resorting to cgo. An example of this looks like
func GetCPU() int64 {
    usage := new(syscall.Rusage)
    syscall.Getrusage(syscall.RUSAGE_SELF, usage)
    return usage.Utime.Nano() + usage.Stime.Nano()
}
where I have added up the Utime (user CPU time) and Stime (system CPU time) of the calling process (RUSAGE_SELF) after converting them both to nanoseconds. man 2 getrusage has a bit more information on this system call.
The documentation for syscall.Timeval suggests that Nano() returns the time in nanoseconds since the Unix epoch, but in my tests and looking at the implementation it appears actually to return just the CPU time in nanoseconds, not in nanoseconds since the Unix epoch.
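For illustration, a minimal runnable sketch of mine (Unix-only, since syscall.Getrusage is not available on Windows) that wraps the GetCPU helper above around a workload; the Sleep adds a second of wall time but almost no CPU time:
package main

import (
    "fmt"
    "syscall"
    "time"
)

func GetCPU() int64 {
    usage := new(syscall.Rusage)
    syscall.Getrusage(syscall.RUSAGE_SELF, usage)
    return usage.Utime.Nano() + usage.Stime.Nano()
}

func main() {
    before := GetCPU()
    x := 1
    for i := 0; i < 100000000; i++ {
        x *= 11111
    }
    time.Sleep(time.Second) // wall time, but (almost) no CPU time
    after := GetCPU()
    fmt.Printf("CPU time = %d ns (x = %d)\n", after-before, x)
}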

How to benchmark init() function

I was playing with the following Go code, which calculates the population count using a lookup table:
package population

import (
    "fmt"
)

var pc [256]byte

func init() {
    for i := range pc {
        pc[i] = pc[i/2] + byte(i&1)
    }
}

func countPopulation() {
    var x uint64 = 65535
    populationCount := int(pc[byte(x>>(0*8))] +
        pc[byte(x>>(1*8))] +
        pc[byte(x>>(2*8))] +
        pc[byte(x>>(3*8))] +
        pc[byte(x>>(4*8))] +
        pc[byte(x>>(5*8))] +
        pc[byte(x>>(6*8))] +
        pc[byte(x>>(7*8))])
    fmt.Printf("Population count: %d\n", populationCount)
}
I have written the following benchmark code to check the performance of the above code block:
package population

import "testing"

func BenchmarkCountPopulation(b *testing.B) {
    for i := 0; i < b.N; i++ {
        countPopulation()
    }
}
which gave me the following result:
100000 18760 ns/op
PASS
ok gopl.io/ch2 2.055s
Then I moved the code from the init() function into countPopulation() as below:
func countPopulation() {
    var pc [256]byte
    for i := range pc {
        pc[i] = pc[i/2] + byte(i&1)
    }

    var x uint64 = 65535
    populationCount := int(pc[byte(x>>(0*8))] +
        pc[byte(x>>(1*8))] +
        pc[byte(x>>(2*8))] +
        pc[byte(x>>(3*8))] +
        pc[byte(x>>(4*8))] +
        pc[byte(x>>(5*8))] +
        pc[byte(x>>(6*8))] +
        pc[byte(x>>(7*8))])
    fmt.Printf("Population count: %d\n", populationCount)
}
and once again ran the same benchmark code, which gave me the following result:
100000 20565 ns/op
PASS
ok gopl.io/ch2 2.303s
Comparing the two results makes it clear that the init() function is not within the scope of the benchmark function, which is why the first benchmark took less time per operation than the second.
Now I have another question I am looking to get answered.
If I need to benchmark only the init() function, considering there can be multiple init() functions in a package, how is that done in Go?
Thanks in advance.
Yes, there can be multiple init() functions in a package; in fact, you can have multiple init() functions in a single file. More information about init can be found here. Remember that init() is automatically called once, before your program's main() is even started.
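For example, a small sketch of mine showing that several init() functions in one file run in source order, before main():
package main

import "fmt"

// Multiple init functions are legal, even in a single file; they run
// in source order, after package-level variables are initialized and
// before main.
func init() { fmt.Println("init 1") }
func init() { fmt.Println("init 2") }

func main() { fmt.Println("main") }

// Output:
// init 1
// init 2
// main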
The benchmark framework runs your code multiple times (in your case 100000). This allows it to measure very short functions as well as very long ones. It doesn't make sense for a benchmark to include the time for init(). The underlying problem is a misunderstanding of the purpose of benchmarking: benchmarking lets you compare two or more implementations to determine which one is faster (you can also compare performance based on inputs to the same function). It does not tell you where your program needs optimizing.
What you are basically doing is known as premature optimization: optimizing code to make it as fast as possible without knowing where your program actually spends most of its time. Profiling is the process of measuring the time and space complexity of a program; in practice, it allows you to see where your program is spending most of its time. With that information, you can write more efficient functions. More information about profiling in Go can be found in this blog post.
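That said, if you do want to time the work init() performs, one approach (a sketch of mine, not from the original answers) is to extract the table-building logic into a plain function and benchmark that directly:
package population

import "testing"

// buildTable is a hypothetical extraction of the init() body so the
// same logic can be called from a benchmark.
func buildTable() [256]byte {
    var pc [256]byte
    for i := range pc {
        pc[i] = pc[i/2] + byte(i&1)
    }
    return pc
}

var sink [256]byte // keeps the compiler from optimizing the call away

func BenchmarkBuildTable(b *testing.B) {
    for i := 0; i < b.N; i++ {
        sink = buildTable()
    }
}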
