Go goroutine test failing: Expected number of calls

I'm new to Go. I am trying to test a function call made inside a goroutine, but the test fails with the error message:
Expected number of calls (8) does not match the actual number of calls (0).
My test code is:
package executor

import (
    "sync"
    "testing"

    "github.com/stretchr/testify/mock"
)

type MockExecutor struct {
    mock.Mock
    wg sync.WaitGroup
}

func (m *MockExecutor) Execute() {
    defer m.wg.Done()
}
func TestScheduleWorksAsExpected(t *testing.T) {
    scheduler := GetScheduler()
    executor := &MockExecutor{}
    scheduler.AddExecutor(executor)

    // Mock expectations
    executor.On("Execute").Return()

    // Function call
    executor.wg.Add(8)
    scheduler.Schedule(2, 1, 4)
    executor.wg.Wait()
    executor.AssertNumberOfCalls(t, "Execute", 8)
}
and my application code is:
package executor

import (
    "sync"
    "time"
)

type Scheduler interface {
    Schedule(repeatRuns uint16, coolDown uint8, parallelRuns uint64)
    AddExecutor(executor Executor)
}

type RepeatScheduler struct {
    executor  Executor
    waitGroup sync.WaitGroup
}

func GetScheduler() Scheduler {
    return &RepeatScheduler{}
}

func (r *RepeatScheduler) singleRun() {
    defer r.waitGroup.Done()
    r.executor.Execute()
}

func (r *RepeatScheduler) AddExecutor(executor Executor) {
    r.executor = executor
}

func (r *RepeatScheduler) repeatRuns(parallelRuns uint64) {
    for count := 0; count < int(parallelRuns); count += 1 {
        r.waitGroup.Add(1)
        go r.singleRun()
    }
    r.waitGroup.Wait()
}

func (r *RepeatScheduler) Schedule(repeatRuns uint16, coolDown uint8, parallelRuns uint64) {
    for repeats := 0; repeats < int(repeatRuns); repeats += 1 {
        r.repeatRuns(parallelRuns)
        time.Sleep(time.Duration(coolDown))
    }
}
Could you point out what I'm doing wrong here? I'm using Go 1.16.3. When I debug my code, I can see the Execute() function being called, but testify does not register the call.

You need to call Called() so that mock.Mock records the fact that Execute() has been called. As you are not worried about arguments or return values, the following should resolve your issue:
func (m *MockExecutor) Execute() {
    defer m.wg.Done()
    m.Called()
}
However, I note that as currently written the test may not accomplish what you want. This is because:
You are calling executor.wg.Wait() (which will block until the function has been called the expected number of times) before calling executor.AssertNumberOfCalls, so your test will never complete if Execute() is called fewer times than expected (wg.Wait() will block forever).
After m.Called() has been called the expected number of times there is a race condition: if the executor is still running, executor.AssertNumberOfCalls races with the next m.Called(). If wg.Done() does get called an extra time you will get a panic (which I guess you could consider a fail!), but I'd probably simplify the test a bit:
scheduler.Schedule(2, 1, 4)
time.Sleep(time.Millisecond) // wait long enough for all executions to complete (should be quick, as Schedule waits for the goroutines to end)
executor.AssertNumberOfCalls(t, "Execute", 8)
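Putting the pieces together, the whole test might look something like the sketch below. This assumes the WaitGroup is dropped from MockExecutor entirely (Schedule already waits for its goroutines, so the sleep is purely defensive); the test file then also needs the time import.

type MockExecutor struct {
    mock.Mock
}

func (m *MockExecutor) Execute() {
    m.Called() // record the call so testify can count it
}

func TestScheduleWorksAsExpected(t *testing.T) {
    scheduler := GetScheduler()
    executor := &MockExecutor{}
    scheduler.AddExecutor(executor)
    executor.On("Execute").Return()

    scheduler.Schedule(2, 1, 4)
    time.Sleep(time.Millisecond) // defensive: Schedule waits for its goroutines before returning
    executor.AssertNumberOfCalls(t, "Execute", 8)
}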

Related

Start cronjob at specific epoch time in golang

I am using the github.com/robfig/cron library. I want to run a cron job every second, aligned to a specific epoch timestamp with millisecond precision. The cron job fires at the 000 millisecond boundary; I need it to fire at specific offsets.
For example if I take the following:
c := cron.New()
c.AddFunc("@every 1s", func() {
    // Do Something
})
c.Start()
If I run it at epoch timestamp 1657713890300, then I want the function to run at:
1657713891300
1657713892300
1657713893300
Currently, cron runs at:
1657713891000
1657713892000
1657713893000
Is this possible?
When you use @every 1s the library creates a ConstantDelaySchedule, which "rounds so that the next activation time will be on the second".
If that is not what you want then you can create your own scheduler (playground):
package main

import (
    "fmt"
    "time"

    "github.com/robfig/cron/v3"
)

func main() {
    time.Sleep(300 * time.Millisecond) // so we don't start cron too near the second boundary
    c := cron.New()
    c.Schedule(CustomConstantDelaySchedule{time.Second}, cron.FuncJob(func() {
        fmt.Println(time.Now().UnixNano())
    }))
    c.Start()
    time.Sleep(time.Second * 5)
}

// CustomConstantDelaySchedule is a copy of the library's ConstantDelaySchedule with the rounding removed.
type CustomConstantDelaySchedule struct {
    Delay time.Duration
}

// Next returns the next time this job should be run.
func (schedule CustomConstantDelaySchedule) Next(t time.Time) time.Time {
    return t.Add(schedule.Delay)
}
Follow up: the above uses the time.Time passed to Next, which is time.Now(), so the scheduled time will slowly drift over time.
Addressing this is possible (see below - playground) but doing so introduces some potential issues (the CustomConstantDelaySchedule must not be reused, and if the jobs take too long to run you will still end up with discrepancies). I'd suggest you consider moving away from the cron package and just using a time.Ticker (a sketch follows the code below).
package main

import (
    "fmt"
    "time"

    "github.com/robfig/cron/v3"
)

func main() {
    time.Sleep(300 * time.Millisecond) // so we don't start cron too near the second boundary
    c := cron.New()
    c.Schedule(&CustomConstantDelaySchedule{Delay: time.Second}, cron.FuncJob(func() {
        fmt.Println(time.Now().UnixNano())
    }))
    c.Start()
    time.Sleep(time.Second * 5)
}
// CustomConstantDelaySchedule is a copy of the library's ConstantDelaySchedule with the rounding removed.
// Note that because this stores the last target time it must not be reused!
type CustomConstantDelaySchedule struct {
    Delay      time.Duration
    lastTarget time.Time
}

// Next returns the next time this job should be run.
// The receiver must be a pointer: with a value receiver each call would
// mutate a copy and lastTarget would never be retained between calls.
func (schedule *CustomConstantDelaySchedule) Next(t time.Time) time.Time {
    if schedule.lastTarget.IsZero() {
        schedule.lastTarget = t.Add(schedule.Delay)
    } else {
        schedule.lastTarget = schedule.lastTarget.Add(schedule.Delay)
    }
    return schedule.lastTarget
}
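For comparison, a time.Ticker version of the same idea needs no cron dependency at all. This is a minimal sketch, not code from the original answer: a ticker fires at fixed intervals measured from its creation, so a program that starts ticking at xxx.300 keeps every tick at the .300 offset.

package main

import (
    "fmt"
    "time"
)

func main() {
    // The first tick arrives one interval after NewTicker is called,
    // so whatever millisecond offset we start at is preserved.
    ticker := time.NewTicker(time.Second)
    defer ticker.Stop()

    deadline := time.After(5 * time.Second)
    for {
        select {
        case <-ticker.C:
            fmt.Println(time.Now().UnixNano())
        case <-deadline:
            return
        }
    }
}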

Finalizer statistics

Is there a way to obtain the total number of finalizers registered using runtime.SetFinalizer and which have not yet run?
We are considering adding a struct with a registered finalizer to some of our products to release memory allocated using malloc, and the object could potentially have a relatively high allocation rate. It would be nice if we could monitor the number of finalizers, to make sure that they do not pile up and trigger out-of-memory errors (like they tend to with other garbage collectors).
(I'm aware that explicit deallocation would avoid this problem, but we cannot change the existing code, which does not call a Close function or something like that.)
You can keep a count of these objects by incrementing and decrementing an unexported package variable when a new object is created and finalized, respectively.
For example:
package main

import (
    "fmt"
    "runtime"
    "sync/atomic"
)

var totalObjects int32

func TotalObjects() int32 {
    return atomic.LoadInt32(&totalObjects)
}

type Object struct {
    p uintptr // C allocated pointer
}

func NewObject() *Object {
    o := &Object{}
    // TODO: perform other initializations
    atomic.AddInt32(&totalObjects, 1)
    runtime.SetFinalizer(o, (*Object).finalizer)
    return o
}

func (o *Object) finalizer() {
    atomic.AddInt32(&totalObjects, -1)
    // TODO: perform finalizations
}

func main() {
    fmt.Println("Total objects:", TotalObjects())
    for i := 0; i < 100; i++ {
        _ = NewObject()
        runtime.GC()
    }
    fmt.Println("Total objects:", TotalObjects())
}
https://play.golang.org/p/n35QABBIcj
It's possible to write a wrapper around runtime.SetFinalizer which does the counting for you; of course, it's then a question of using it everywhere you currently use SetFinalizer.
In case this is problematic, you can also modify the SetFinalizer source code directly, but that requires a modified Go compiler.
Atomic integers are used because SetFinalizer may be called from different goroutines, and without them the counter might not be accurate: a race condition could occur. Go guarantees that finalizers are called from a single goroutine, so atomics are not needed inside the wrapped finalizer itself.
https://play.golang.org/p/KKCH2UwTFYw
https://play.golang.org/p/KKCH2UwTFYw
package main

import (
    "fmt"
    "reflect"
    "runtime"
    "sync/atomic"
)

var finalizersCreated int64
var finalizersRan int64

// SetFinalizer wraps runtime.SetFinalizer, counting how many finalizers
// have been registered and how many have run.
func SetFinalizer(obj interface{}, finalizer interface{}) {
    finType := reflect.TypeOf(finalizer)
    funcType := reflect.FuncOf([]reflect.Type{finType.In(0)}, nil, false)
    f := reflect.MakeFunc(funcType, func(args []reflect.Value) []reflect.Value {
        finalizersRan++ // finalizers run on a single goroutine, so no atomic needed here
        return reflect.ValueOf(finalizer).Call([]reflect.Value{args[0]})
    })
    runtime.SetFinalizer(obj, f.Interface())
    atomic.AddInt64(&finalizersCreated, 1)
}

func main() {
    v := "a"
    SetFinalizer(&v, func(a *string) {
        fmt.Println("Finalizer ran")
    })
    fmt.Println(finalizersRan, finalizersCreated)
    runtime.GC()
    fmt.Println(finalizersRan, finalizersCreated)
}

Resolving conflicts with goroutines?

I have a minor doubt.
Suppose there are three functions: A, B, and C. C is called from both A and B.
If I run A and B on different goroutines, will it ever result in a conflict when each calls C?
For reference, I am adding this code:
package main

import (
    "fmt"
)

func xyz() {
    for true {
        fmt.Println("Inside xyz")
        call("xyz")
    }
}

func abc() {
    for true {
        fmt.Println("Inside abc")
        call("abc")
    }
}

func call(s string) {
    fmt.Println("call from " + s)
}

func main() {
    go xyz()
    go abc()
    var input string
    fmt.Scanln(&input)
}
Here A = xyz(), B = abc(), and C = call().
Will there be any conflict or runtime error while running these two goroutines?
Whether multiple goroutines are safe to run concurrently comes down to whether they share data without synchronization. In this example, both abc and xyz print to stdout using fmt.Println, and both call the same function, call, which also prints to stdout using fmt.Println. Since fmt.Println doesn't use synchronization when printing to stdout, the answer is no: this program is not safe.
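If you did need concurrent calls like this to be safe, the usual fix is to add your own synchronization. A minimal sketch (the sync.Mutex is an addition, not part of the original program) that serializes the shared call:

package main

import (
    "fmt"
    "sync"
)

var mu sync.Mutex // serializes the shared call

func call(s string) {
    mu.Lock()
    defer mu.Unlock()
    fmt.Println("call from " + s)
}

func main() {
    go call("xyz")
    go call("abc")
    var input string
    fmt.Scanln(&input)
}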

Why does the method of a struct that does not read/write its contents still cause a race case?

From the Dave Cheney blog, the following code apparently causes a race that can be resolved merely by changing func (RPC) version() int to func (*RPC) version() int:
package main

import (
    "fmt"
    "time"
)

type RPC struct {
    result int
    done   chan struct{}
}

func (rpc *RPC) compute() {
    time.Sleep(time.Second) // strenuous computation intensifies
    rpc.result = 42
    close(rpc.done)
}

func (RPC) version() int {
    return 1 // never going to need to change this
}

func main() {
    rpc := &RPC{done: make(chan struct{})}
    go rpc.compute()         // kick off computation in the background
    version := rpc.version() // grab some other information while we're waiting
    <-rpc.done               // wait for computation to finish
    result := rpc.result
    fmt.Printf("RPC computation complete, result: %d, version: %d\n", result, version)
}
After looking over the code a few times, I was having a hard time believing that it contained a race. However, running with -race reports a write at rpc.result = 42 and a previous read at version := rpc.version(). I understand the write, since the goroutine changes the value of rpc.result, but what about the read? Where in the version() method does a read occur? It does not touch any field of rpc; it just returns 1.
I would like to understand the following:
1) Why is that particular line considered a read on the rpc struct?
2) Why would changing RPC to *RPC resolve the race case?
When you have a method with a value receiver like this:
func (RPC) version() int {
    return 1 // never going to need to change this
}
And you call this method:
version := rpc.version() // grab some other information while we're waiting
A copy has to be made of the value rpc, which is passed to the method (used as the receiver value).
So while one goroutine (go rpc.compute()) is running and modifying the rpc struct value (rpc.result = 42), the main goroutine is making a copy of the whole rpc struct value. There is your race.
When you modify the receiver type to a pointer:
func (*RPC) version() int {
    return 1 // never going to need to change this
}
And you call this method:
version := rpc.version() // grab some other information while we're waiting
Since rpc is already a pointer (rpc := &RPC{...}), the pointer itself is used as the receiver; no copy is made of the RPC struct value. And since nothing from the struct is used / read in RPC.version(), there is no race.
Note:
Note that if RPC.version() read the RPC.result field, it would also be a race, as one goroutine modifies it while the main goroutine reads it:
func (rpc *RPC) version() int {
    return rpc.result // RACE!
}
Note #2:
Also note that if RPC.version() read another field of RPC, one that is not modified in RPC.compute(), that would not be a race, e.g.:
type RPC struct {
    result int
    done   chan struct{}
    dummy  int
}

func (rpc *RPC) version() int {
    return rpc.dummy // Not a race
}

How to identify the stack size of goroutine?

I know a goroutine can perform blocking actions; I wonder whether a goroutine can call a user-defined blocking function like a regular function, where the user-defined blocking function has a few steps: step 1, step 2, and so on.
In other words, I would like to find out whether we can have nested blocking calls in a goroutine.
UPDATE:
My original intention was to find the stack size used by a goroutine, especially with nested blocking calls; sorry for the confusion. Thanks to the answer and comments, I created the following program that starts 100,000 goroutines. It took 782 MB of virtual memory and 416 MB of resident memory on my Ubuntu desktop. That evens out to 78 KB of memory for each goroutine stack. Is this a correct statement?
package main

import (
    "fmt"
    "time"
)

func f(a int) {
    x := f1(a)
    f2(x)
}

func f1(a int) int {
    r := step("1a", a)
    r = step("1b", r)
    return 1000 * a
}

func f2(a int) {
    r := step("2a", a)
    r = step("2b", r)
}

func step(a string, b int) int {
    fmt.Printf("%s %d\n", a, b)
    time.Sleep(1000 * time.Second)
    return 10 * b
}

func main() {
    for i := 0; i < 100000; i++ {
        go f(i)
    }
    //go f(20)
    time.Sleep(1000 * time.Second)
}
I believe you're right, though I'm unsure of the relationship between "virtual" and "resident" memory; it's possible there's some overlap. Some things to consider:
You're running 100,000 goroutines, it appears, not 10,000.
The stack itself might contain things like the strings used for the Printfs, method parameters, etc.
As of Go 1.2 the default stack size (per goroutine) is 8 KB, which may explain some of it.
As of Go 1.3 it also uses an exponentially growing stack, but I doubt that's the problem you're running into.
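If you want to measure this from inside the program rather than from the OS, runtime.ReadMemStats reports the bytes the runtime holds for goroutine stacks. A rough sketch (the per-goroutine figure is only an average across all goroutines, including the runtime's own):

package main

import (
    "fmt"
    "runtime"
    "time"
)

func main() {
    for i := 0; i < 100000; i++ {
        go func() { time.Sleep(time.Hour) }()
    }
    time.Sleep(time.Second) // give the goroutines time to start

    var m runtime.MemStats
    runtime.ReadMemStats(&m)
    n := runtime.NumGoroutine()
    fmt.Printf("goroutines: %d, stack bytes in use: %d (~%d per goroutine)\n",
        n, m.StackInuse, m.StackInuse/uint64(n))
}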
Short answer: yes.
A goroutine is a "lightweight thread"; it can do work independently of other code in your program. It's almost as if you started a new program, but you can communicate with your other code using the constructs Go provides (channels, locks, etc.).
P.S. Once the main function returns, all goroutines are killed (that's why you need the time.Sleep() in the example).
Here's a quick example (it won't run in the Go playground because of its constraints):
package main

import (
    "fmt"
    "time"
)

func saySomething(a, b func()) {
    a()
    b()
}

func foo() {
    fmt.Println("foo")
}

func bar() {
    fmt.Println("bar")
}

func talkForAWhile() {
    for {
        saySomething(foo, bar)
    }
}

func main() {
    go talkForAWhile()
    time.Sleep(1 * time.Second)
}
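As an aside, time.Sleep is only a demo mechanism here. When a goroutine does a bounded amount of work, a sync.WaitGroup is the reliable way to wait for it; a minimal sketch:

package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup
    wg.Add(1)
    go func() {
        defer wg.Done()
        fmt.Println("foo")
        fmt.Println("bar")
    }()
    wg.Wait() // blocks until the goroutine calls Done
}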
