Can we restrict a function to one call at a time from a goroutine - go

I have the following situation:
wg.Add(1)
go func(wg *sync.WaitGroup) {
    defer wg.Done()
    for {
        select {
        case <-tickerCR.C:
            _ = ProcessCommands()
        case <-ow.quitCR:
            logger.Debug("Stopping ProcessCommands goroutine")
            return
        }
    }
}(&wg)
Can I somehow make sure that if ProcessCommands is already executing, the next ticker event is ignored? Basically, I want to avoid parallel execution of ProcessCommands.

What you want is called mutual exclusion. It can be achieved with a sync.Mutex.
var m sync.Mutex

func process() {
    m.Lock()
    defer m.Unlock()
    _ = ProcessCommands()
}
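Note that a plain Lock() makes later ticks wait and then run one after another rather than be ignored. If the goal is to skip a tick while a previous run is still in flight, sync.Mutex.TryLock (available since Go 1.18) fits better. A minimal sketch, assuming the ProcessCommands function from the question:
var m sync.Mutex

func processIfIdle() {
    if !m.TryLock() {
        return // a previous ProcessCommands is still running; ignore this tick
    }
    defer m.Unlock()
    _ = ProcessCommands()
}
In the ticker loop you would then call processIfIdle() in the tickerCR.C case.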

You could create a type with two fields, a function and a mutex. When its, let's say, Call method is invoked, it locks the mutex, defers the unlock, and calls the stored function. Afterwards you just need to create instances of that type with the required functions. OOP to the rescue. Remember that a function can be stored in a struct field the same way a string can.
import (
    "sync"
)

type ProtectedCaller struct {
    m sync.Mutex
    f func()
}

func (caller *ProtectedCaller) Call() {
    caller.m.Lock()
    defer caller.m.Unlock()
    caller.f()
}

// Return a pointer so the mutex inside is never copied.
func ProtectCall(f func()) *ProtectedCaller {
    return &ProtectedCaller{f: f}
}

// ProcessCommands returns a value, so wrap it in a func() literal.
var processCommands = ProtectCall(func() { _ = ProcessCommands() })
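In the ticker loop from the question, usage might look like this (a sketch reusing tickerCR and ow.quitCR from the original snippet):
for {
    select {
    case <-tickerCR.C:
        processCommands.Call() // waits until any in-flight call has finished
    case <-ow.quitCR:
        return
    }
}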

There's a semi-standard package, golang.org/x/sync/singleflight:
How to use:
import "golang.org/x/sync/singleflight"
var requestGroup singleflight.Group
// This handler should call it's upstream only once:
http.HandleFunc("/singleflight", func(w http.ResponseWriter, r *http.Request) {
// define request group - each request can have it's specific ID
// singleflight ensures only 1 request with any given ID is processed at a time
// also you can have different IDs - to be processed simultaneously
// just set ID to "singleflight-1", "singleflight-2", etc
res, err, shared := requestGroup.Do("singleflight", func() (interface{}, error) {
fmt.Println("calling the endpoint")
response, err := http.Get("https://jsonplaceholder.typicode.com/photos")
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return nil, err
}
responseData, err := ioutil.ReadAll(response.Body)
if err != nil {
log.Fatal(err)
}
time.Sleep(2 * time.Second)
return string(responseData), err
})
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
result := res.(string)
fmt.Println("shared = ", shared)
fmt.Fprintf(w, "%q", result)
})
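Applied to the original question, singleflight mainly helps if ProcessCommands can be triggered from several goroutines at once: concurrent callers that use the same key wait for and share a single execution instead of running in parallel. A minimal sketch, assuming the ProcessCommands function from the question:
var group singleflight.Group

func processShared() {
    // every concurrent caller of processShared shares one ProcessCommands run
    _, _, _ = group.Do("process-commands", func() (interface{}, error) {
        _ = ProcessCommands()
        return nil, nil
    })
}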

You can use sync.Once to prevent a function from being called more than once, like this:
wg.Add(1)
var once sync.Once
go func(wg *sync.WaitGroup) {
    defer wg.Done()
    for {
        select {
        case <-tickerCR.C:
            // look at this line: ProcessCommands will be called only once,
            // ever; every tick after the first one becomes a no-op
            once.Do(func() { _ = ProcessCommands() })
        case <-ow.quitCR:
            logger.Debug("Stopping ProcessCommands goroutine")
            return
        }
    }
}(&wg)

Related

Golang Concurrency Issue to introduce timeout

I wish to implement parallel API calling in Go using goroutines.
1. Once the requests are fired, I need to wait for all responses (which take different amounts of time).
2. If any of the requests fails and returns an error, I wish to end (or preempt) the other routines.
3. I also want to have a timeout value associated with each goroutine (or API call).
I have implemented the below for 1 and 2, but need help with how I can implement 3. Feedback on 1 and 2 would also help.
package main

import (
    "errors"
    "fmt"
    "sync"
    "time"
)

func main() {
    var wg sync.WaitGroup
    c := make(chan interface{}, 1)
    c2 := make(chan interface{}, 1)
    err := make(chan interface{})

    wg.Add(1)
    go func() {
        defer wg.Done()
        result, e := doSomeWork()
        if e != nil {
            err <- e
            return
        }
        c <- result
    }()

    wg.Add(1)
    go func() {
        defer wg.Done()
        result2, e := doSomeWork2()
        if e != nil {
            err <- e
            return
        }
        c2 <- result2
    }()

    go func() {
        wg.Wait()
        close(c)
        close(c2)
        close(err)
    }()

    for e := range err {
        // an error happened here; you could exit your caller function
        fmt.Println("Error==>", e)
        return
    }
    fmt.Println(<-c, <-c2)
}
// mimic api call 1
func doSomeWork() (function1, error) {
    time.Sleep(10 * time.Second)
    obj := function1{"ABC", "29"}
    return obj, nil
}

type function1 struct {
    Name string
    Age  string
}

// mimic api call 2
func doSomeWork2() (function2, error) {
    time.Sleep(4 * time.Second)
    r := errors.New("Error Occured")
    if 1 == 2 {
        fmt.Println(r)
    }
    obj := function2{"Delhi", "Delhi"}
    // return error as nil for now
    return obj, nil
}

type function2 struct {
    City  string
    State string
}
Thanks in advance.
This kind of fork-and-join pattern is exactly what golang.org/x/sync/errgroup was designed for. (Identifying the appropriate “first error” from a group of goroutines can be surprisingly subtle.)
You can use errgroup.WithContext to obtain a context.Context that is cancelled if any of the goroutines in the group returns. The (*Group).Wait method waits for the goroutines to complete and returns the first error.
For your example, that might look something like: https://play.golang.org/p/jqYeb4chHCZ.
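A sketch of that shape, assuming doSomeWork and doSomeWork2 are changed to accept a context.Context as described below:
import (
    "context"
    "fmt"

    "golang.org/x/sync/errgroup"
)

func callBoth(ctx context.Context) error {
    g, ctx := errgroup.WithContext(ctx)

    var r1 function1
    var r2 function2

    g.Go(func() error {
        var err error
        r1, err = doSomeWork(ctx) // ctx is cancelled if the other call fails
        return err
    })
    g.Go(func() error {
        var err error
        r2, err = doSomeWork2(ctx)
        return err
    })

    // Wait blocks until both goroutines return and yields the first error, if any.
    if err := g.Wait(); err != nil {
        return err
    }
    fmt.Println(r1, r2)
    return nil
}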
You can then inject a timeout within any given call by wrapping the Context using context.WithTimeout.
(However, in my experience if you've plumbed in cancellation correctly, explicit timeouts are almost never helpful — the end user can cancel explicitly if they get tired of waiting, and you probably don't want to promote degraded service to a complete outage if something starts to take just a bit longer than you expected.)
To support timeouts and cancelation of goroutine work, the standard mechanism is to use context.Context.
ctx := context.Background() // root context
// wrap the context with a timeout and/or cancelation mechanism
ctx, cancel := context.WithTimeout(ctx, 5*time.Second) // with timeout or cancel
//ctx, cancel := context.WithCancel(ctx) // no timeout just cancel
defer cancel() // avoid memory leak if we never cancel/timeout
Next, your worker goroutines need to accept the ctx and monitor its state. To do this in parallel with the time.Sleep (which mimics a long computation), convert the sleep to a channel-based solution:
// mimic api call 1
func doSomeWork(ctx context.Context) (function1, error) {
    //time.Sleep(10 * time.Second)
    select {
    case <-time.After(10 * time.Second):
        // wait completed
    case <-ctx.Done():
        return function1{}, ctx.Err()
    }
    // ...
}
And if one worker goroutine fails, to signal to the other worker that the request should be aborted, simply call the cancel() function.
result, e := doSomeWork(ctx)
if e != nil {
    cancel() // <- add this
    err <- e
    return
}
Pulling this all together:
https://play.golang.org/p/1Kpe_tre7XI
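A condensed sketch of how that fits together (assuming doSomeWork2 is also updated to accept a ctx and to call cancel() on failure, like doSomeWork above):
func main() {
    var wg sync.WaitGroup

    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    c := make(chan interface{}, 1)
    c2 := make(chan interface{}, 1)
    errCh := make(chan error, 2) // buffered so failing workers never block

    wg.Add(1)
    go func() {
        defer wg.Done()
        result, e := doSomeWork(ctx)
        if e != nil {
            cancel() // abort the other worker
            errCh <- e
            return
        }
        c <- result
    }()

    wg.Add(1)
    go func() {
        defer wg.Done()
        result2, e := doSomeWork2(ctx)
        if e != nil {
            cancel()
            errCh <- e
            return
        }
        c2 <- result2
    }()

    wg.Wait()
    close(errCh)

    if e, ok := <-errCh; ok {
        fmt.Println("Error==>", e)
        return
    }
    fmt.Println(<-c, <-c2)
}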
EDIT: the sleep example above is obviously a contrived example of how to abort a "fake" task. In the real world, HTTP or SQL DB calls would be involved, and since Go 1.7 and 1.8 the standard library has added context support to these potentially blocking calls:
func doSomeWork(ctx context.Context) error {
    // DB
    db, err := sql.Open("mysql", "...") // check err

    //rows, err := db.Query("SELECT age from users", age)
    rows, err := db.QueryContext(ctx, "SELECT age from users", age)
    if err != nil {
        return err // will return with an error if the context is canceled
    }

    // http
    // req, err := http.NewRequest("GET", "http://example.com", nil)
    req, err := http.NewRequestWithContext(ctx, "GET", "http://example.com", nil) // check err
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return err // will return with an error if the context is canceled
    }
    // ...
}
EDIT (2): to poll a context's state without blocking, leverage select's default branch:
select {
case <-ctx.Done():
    return ctx.Err()
default:
    // if ctx is not done - this branch is used
}
The default branch can optionally have code in it, but even if it is empty, its presence prevents blocking, so the select simply polls the status of the context at that instant in time.
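For example, a worker that loops over items might poll the context between iterations; in this sketch processItem and items are placeholders:
func processAll(ctx context.Context, items []string) error {
    for _, item := range items {
        select {
        case <-ctx.Done():
            return ctx.Err() // stop early if cancelled or timed out
        default:
            // context still live; keep going
        }
        processItem(item)
    }
    return nil
}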

What is the best practice when using with context.WithTimeout() in Go?

I want to use context.WithTimeout() to handle a use case in which I make an external request, and if the response takes too long, an error should be returned.
I have implemented the pseudo code in the playground link attached below.
Two solutions:
main -> does not behave as expected
main_1 -> behaves as expected
package main

import (
    "context"
    "fmt"
    "time"
)

// I just dummy-sleep in this func to mimic the case where the func
// needs 10s to process and handle its logic.
// That is assumed to exceed the expected timeout (5s).
func makeHTTPRequest(ctx context.Context) (string, error) {
    time.Sleep(time.Duration(10) * time.Second)
    return "abc", nil
}

// In the main func, I set the timeout to 5 seconds.
func main() {
    var strCh = make(chan string, 1)
    ctx, cancel := context.WithTimeout(context.Background(), time.Duration(5)*time.Second)
    defer cancel()
    fmt.Print("Begin make request\n")
    abc, err := makeHTTPRequest(ctx)
    if err != nil {
        fmt.Print("Return error\n")
        return
    }
    select {
    case <-ctx.Done():
        fmt.Printf("Return ctx error: %s\n", ctx.Err())
        return
    case strCh <- abc:
        fmt.Print("Return response\n")
        return
    }
}

func main_1() {
    var strCh = make(chan string, 1)
    var errCh = make(chan error, 1)
    ctx, cancel := context.WithTimeout(context.Background(), time.Duration(5)*time.Second)
    defer cancel()
    go func() {
        fmt.Print("Begin make request\n")
        abc, err := makeHTTPRequest(ctx)
        if err != nil {
            fmt.Print("Return error\n")
            errCh <- err
            return
        }
        strCh <- abc
    }()
    select {
    case err := <-errCh:
        fmt.Printf("Return error: %s\n", err.Error())
        return
    case <-ctx.Done():
        fmt.Printf("Return ctx error: %s\n", ctx.Err())
        return
    case str := <-strCh:
        fmt.Printf("Return response: %s\n", str)
        return
    }
}
However, the main() function doesn't work as expected.
But the second implementation, main_1(), which uses a goroutine, works as expected with context.WithTimeout().
Can you help me understand this problem?
https://play.golang.org/p/kZdlm_Tvljy
It's better to handle context in your makeHTTPRequest() function, so you can use it as a synchronous function in main().
https://play.golang.org/p/Bhl4qprIBgH
func makeHTTPRequest(ctx context.Context) (string, error) {
    ch := make(chan string)
    go func() {
        time.Sleep(10 * time.Second)
        select {
        case ch <- "abc":
        default:
            // When context deadline exceeded, there is no receiver
            // This case will prevent goroutine blocking forever
            return
        }
    }()
    select {
    case <-ctx.Done():
        return "", ctx.Err()
    case result := <-ch:
        return result, nil
    }
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()
    fmt.Printf("[%v] Begin make request \n", time.Now())
    abc, err := makeHTTPRequest(ctx)
    if err != nil {
        fmt.Printf("[%v] Return error: %v \n", time.Now(), err)
        return
    }
    fmt.Printf("[%v] %s", time.Now(), abc)
}
If I understood you correctly, there are two questions:
Why does the main() function not work as expected?
What's the best practice?
Q1
main() blocks at makeHTTPRequest, and during that time the context times out. So it does not work as expected.
Q2
The example above can answer this. Your code in main_1() is already best practice.

how to return values in a goroutine

I have the code:
go s.addItemSync(ch, cs.ResponseQueue, user)
This calls the func:
func (s *Services) addItemSync(ch types.ChannelInsertion, statusQueueName, user string) {
    // func body here
}
I would however like to do this:
if ok, err := go s.addItemSync(ch, cs.ResponseQueue, user); !ok {
    if err != nil {
        log.Log.Error("Error adding channel", zap.Error(err))
        return
    }
}
Which would change the other func to this
func (s *Services) addItemSync(ch types.ChannelInsertion, statusQueueName, user string) (bool, error) {
}
As in, I would like to be able to assign variables from a go func call, but this errors out every time. Any idea how you can capture the return values while still using the go keyword to call the function, as seen in the if ok, err := go s.addItemSync(ch, cs.ResponseQueue, user); !ok { line?
If you want to wait until a goroutine has completed, you need to return its result on a channel. The basic pattern, without complicating things with wait groups, etc., is:
func myFunc() {
    // make a channel to receive errors
    errChan := make(chan error)

    // launch a go routine
    go doSomething(myVar, errChan)

    // block until something is received on the error channel
    if err := <-errChan; err != nil {
        // something bad happened
    }
}

// your async function
func doSomething(myVar interface{}, errChan chan error) {
    // Do stuff
    if _, err := someOtherFunc(myVar); err != nil {
        errChan <- err
        return
    }
    // all good - send nil to the error channel
    errChan <- nil
}
In your case if you just want to fire off a go-routine and log if an error happens, you can use an anonymous function:
go func() {
    if ok, err := s.addItemSync(ch, cs.ResponseQueue, user); !ok {
        if err != nil {
            log.Log.Error("Error adding channel", zap.Error(err))
        }
    }
}()
Or if you want to wait for the result:
errChan := make(chan error)
go func() {
    if ok, err := s.addItemSync(ch, cs.ResponseQueue, user); !ok {
        if err != nil {
            errChan <- err
            return
        }
    }
    errChan <- nil
}()

// do some other stuff while we wait...

// block until the go routine returns
if err := <-errChan; err != nil {
    log.Log.Error("Error adding channel", zap.Error(err))
}
Note:
Your code as written may have unexpected results if it is possible for a response to have ok == false without returning an error. If this is a concern, I would suggest creating and returning a new error for cases where !ok && err == nil.
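For example (a sketch that synthesizes the missing error with fmt.Errorf):
go func() {
    ok, err := s.addItemSync(ch, cs.ResponseQueue, user)
    if !ok && err == nil {
        err = fmt.Errorf("addItemSync reported not ok for user %s", user)
    }
    errChan <- err
}()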

Benefits of actor pattern in HTTP handler

I've been reading a few Go blogs, and more recently I stumbled upon Peter Bourgon's talk titled "Ways to do things". He shows a few examples of the actor pattern for concurrency in Go. Here is a handler example using that pattern:
func (a *API) handleNext(w http.ResponseWriter, r *http.Request) {
    var (
        notFound   = make(chan struct{})
        otherError = make(chan error)
        nextID     = make(chan string)
    )
    a.action <- func() {
        s, err := a.log.Oldest()
        if err == ErrNoSegmentsAvailable {
            close(notFound)
            return
        }
        if err != nil {
            otherError <- err
            return
        }
        id := uuid.New()
        a.pending[id] = pendingSegment{s, time.Now().Add(a.timeout), false}
        nextID <- id
    }
    select {
    case <-notFound:
        http.NotFound(w, r)
    case err := <-otherError:
        http.Error(w, err.Error(), http.StatusInternalServerError)
    case id := <-nextID:
        fmt.Fprint(w, id)
    }
}
And there's a loop behind the scenes listening for the action channel:
func (a *API) loop() {
    for {
        select {
        case f := <-a.action:
            f()
        }
    }
}
My question is: what is the benefit of all of this? The handler isn't any faster, because it still blocks until the action func returns something to it, which is essentially the same as just calling the function directly instead of sending it to the loop goroutine. What am I missing here?
The benefits are not to a single call but to the sum of all calls.
For example you can use this to limit actual execution to a single goroutine and thereby avoid all the problems concurrent execution would bring with it.
For example, I use this pattern to synchronize all usage of a connection to a hardware device that talks over a serial line.
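A stripped-down sketch of that idea; the SerialActor type and its fields are invented here for illustration:
type SerialActor struct {
    action chan func()
    port   io.ReadWriter // the loop goroutine below is the only user of this
}

func (a *SerialActor) loop() {
    for f := range a.action {
        f() // actions run one at a time, so port access is never concurrent
    }
}

// Write sends work to the owning goroutine and waits for its result.
func (a *SerialActor) Write(data []byte) error {
    errc := make(chan error, 1)
    a.action <- func() {
        _, err := a.port.Write(data)
        errc <- err
    }
    return <-errc
}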

How to catch runtime error from a function invoked from a waitgroup?

How to handle crashes in a waitgroup gracefully?
In other words, in the following snippet of code, how to catch the panics/crashes of goroutines invoking method do()?
func do() {
    str := "abc"
    fmt.Print(str[3])
    defer func() {
        if err := recover(); err != nil {
            fmt.Print(err)
        }
    }()
}

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 1; i++ {
        wg.Add(1)
        go do()
        defer func() {
            wg.Done()
            if err := recover(); err != nil {
                fmt.Print(err)
            }
        }()
    }
    wg.Wait()
    fmt.Println("This line should be printed after all those invocations fail.")
}
First, registering a deferred function to recover should be the first statement in the function. Since you do it last, it is never even reached: the code before the defer already panics, so the deferred function that would recover from the panicking state never gets registered.
So change your do() function to this:
func do() {
    defer func() {
        if err := recover(); err != nil {
            fmt.Println("Restored:", err)
        }
    }()
    str := "abc"
    fmt.Print(str[3])
}
Second: this alone will not make your code work. You call wg.Done() in a deferred function in main(), which would only run once main() finishes, which is never, because you call wg.Wait() in main(). So wg.Wait() waits for the wg.Done() calls, but the wg.Done() calls will not run until wg.Wait() returns. It's a deadlock.
You should call wg.Done() from the do() function, in the deferred function, something like this:
var wg sync.WaitGroup

func do() {
    defer func() {
        if err := recover(); err != nil {
            fmt.Println(err)
        }
        wg.Done()
    }()
    str := "abc"
    fmt.Print(str[3])
}

func main() {
    for i := 0; i < 1; i++ {
        wg.Add(1)
        go do()
    }
    wg.Wait()
    fmt.Println("This line should be printed after all those invocations fail.")
}
Output (try it on the Go Playground):
Restored: runtime error: index out of range
This line should be printed after all those invocations fail.
This of course required moving the wg variable to global scope. Another option would be to pass it to do() as an argument. If you decide to go this way, note that you have to pass a pointer to the WaitGroup, otherwise only a copy will be passed (WaitGroup is a struct type) and calling WaitGroup.Done() on a copy will have no effect on the original.
With passing WaitGroup to do():
func do(wg *sync.WaitGroup) {
    defer func() {
        if err := recover(); err != nil {
            fmt.Println("Restored:", err)
        }
        wg.Done()
    }()
    str := "abc"
    fmt.Print(str[3])
}

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 1; i++ {
        wg.Add(1)
        go do(&wg)
    }
    wg.Wait()
    fmt.Println("This line should be printed after all those invocations fail.")
}
Output is the same. Try this variant on the Go Playground.
#icza did a fantastic job explaining how to appropriately use WaitGroup and its functions Wait and Done.
I like WaitGroup's simplicity. However, I do not like having to pass the reference into the goroutine, because that means mixing the concurrency logic with your business logic.
So I came up with this generic function to solve this problem for me:
// Parallelize parallelizes the function calls
func Parallelize(functions ...func()) {
    var waitGroup sync.WaitGroup
    waitGroup.Add(len(functions))
    defer waitGroup.Wait()

    for _, function := range functions {
        go func(copy func()) {
            defer waitGroup.Done()
            copy()
        }(function)
    }
}
So your example could be solved this way:
func do() {
    defer func() {
        if err := recover(); err != nil {
            fmt.Println(err)
        }
    }()
    str := "abc"
    fmt.Print(str[3])
}

func main() {
    Parallelize(do, do, do)
    fmt.Println("This line should be printed after all those invocations fail.")
}
If you would like to use it, you can find it here https://github.com/shomali11/util
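Since Parallelize only accepts plain func() values, results and errors have to be captured through closures; a small sketch, where fetchUser and fetchOrders are hypothetical helpers returning (string, error):
results := make([]string, 2)
errs := make([]error, 2)

Parallelize(
    func() { results[0], errs[0] = fetchUser() },   // hypothetical helper
    func() { results[1], errs[1] = fetchOrders() }, // hypothetical helper
)

for _, err := range errs {
    if err != nil {
        fmt.Println("error:", err)
    }
}
Each closure writes to its own index, so there is no data race between the goroutines.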

Resources