I wish to implement parallel API calling in Go using goroutines. Once the requests are fired,
I need to wait for all responses (which take different amounts of time).
If any of the requests fails and returns an error, I wish to end (or preempt) the other routines.
I also want to have a timeout value associated with each goroutine (or API call).
I have implemented the below for 1 and 2, but need help with how to implement 3. Feedback on 1 and 2 would also be welcome.
package main
import (
"errors"
"fmt"
"sync"
"time"
)
func main() {
var wg sync.WaitGroup
c := make(chan interface{}, 1)
c2 := make(chan interface{}, 1)
err := make(chan interface{})
wg.Add(1)
go func() {
defer wg.Done()
result, e := doSomeWork()
if e != nil {
err <- e
return
}
c <- result
}()
wg.Add(1)
go func() {
defer wg.Done()
result2, e := doSomeWork2()
if e != nil {
err <- e
return
}
c2 <- result2
}()
go func() {
wg.Wait()
close(c)
close(c2)
close(err)
}()
for e := range err {
// an error happened here; you could exit your caller function
fmt.Println("Error==>", e)
return
}
fmt.Println(<-c, <-c2)
}
// mimic api call 1
func doSomeWork() (function1, error) {
time.Sleep(10 * time.Second)
obj := function1{"ABC", "29"}
return obj, nil
}
type function1 struct {
Name string
Age string
}
// mimic api call 2
func doSomeWork2() (function2, error) {
time.Sleep(4 * time.Second)
r := errors.New("Error Occured")
if 1 == 2 {
fmt.Println(r)
}
obj := function2{"Delhi", "Delhi"}
// return error as nil for now
return obj, nil
}
type function2 struct {
City string
State string
}
Thanks in advance.
This kind of fork-and-join pattern is exactly what golang.org/x/sync/errgroup was designed for. (Identifying the appropriate “first error” from a group of goroutines can be surprisingly subtle.)
You can use errgroup.WithContext to obtain a context.Context that is cancelled if any of the goroutines in the group returns. The (*Group).Wait method waits for the goroutines to complete and returns the first error.
For your example, that might look something like: https://play.golang.org/p/jqYeb4chHCZ.
You can then inject a timeout within any given call by wrapping the Context using context.WithTimeout.
(However, in my experience if you've plumbed in cancellation correctly, explicit timeouts are almost never helpful — the end user can cancel explicitly if they get tired of waiting, and you probably don't want to promote degraded service to a complete outage if something starts to take just a bit longer than you expected.)
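For reference, here is a minimal sketch (not the playground code itself) of how errgroup could be wired into your example. It reuses the question's function1/function2 types and assumes doSomeWork and doSomeWork2 have been changed to accept a context.Context, as described in the context-based answer below:

package main

import (
    "context"
    "fmt"

    "golang.org/x/sync/errgroup"
)

func main() {
    g, ctx := errgroup.WithContext(context.Background())

    var r1 function1
    var r2 function2

    g.Go(func() error {
        var err error
        r1, err = doSomeWork(ctx) // assumed context-aware signature
        return err
    })
    g.Go(func() error {
        var err error
        r2, err = doSomeWork2(ctx) // assumed context-aware signature
        return err
    })

    // Wait blocks until all goroutines finish and returns the first non-nil
    // error; ctx is cancelled as soon as any goroutine returns an error.
    if err := g.Wait(); err != nil {
        fmt.Println("Error==>", err)
        return
    }
    fmt.Println(r1, r2)
}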
To support timeouts and cancelation of goroutine work, the standard mechanism is to use context.Context.
ctx := context.Background() // root context
// wrap the context with a timeout and/or cancelation mechanism
ctx, cancel := context.WithTimeout(ctx, 5*time.Second) // with timeout or cancel
//ctx, cancel := context.WithCancel(ctx) // no timeout just cancel
defer cancel() // avoid memory leak if we never cancel/timeout
Next, your worker goroutines need to accept the ctx and monitor its state. To do this in parallel with the time.Sleep (which mimics a long computation), convert the sleep to a channel-based select:
// mimic api call 1
func doSomeWork(ctx context.Context) (function1, error) {
//time.Sleep(10 * time.Second)
select {
case <-time.After(10 * time.Second):
// wait completed
case <-ctx.Done():
return function1{}, ctx.Err()
}
// ...
}
And if one worker goroutine fails, to signal to the other worker that the request should be aborted, simply call the cancel() function.
result, e := doSomeWork(ctx)
if e != nil {
cancel() // <- add this
err <- e
return
}
Pulling this all together:
https://play.golang.org/p/1Kpe_tre7XI
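In outline (a sketch, not the exact playground code, and assuming both workers now take a context.Context as shown above), the combined version might look like this:

package main

import (
    "context"
    "fmt"
    "sync"
    "time"
)

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    var wg sync.WaitGroup
    c := make(chan function1, 1)
    c2 := make(chan function2, 1)
    errCh := make(chan error, 2) // buffered so a failing worker never blocks

    wg.Add(1)
    go func() {
        defer wg.Done()
        result, e := doSomeWork(ctx)
        if e != nil {
            cancel() // tell the other worker to stop
            errCh <- e
            return
        }
        c <- result
    }()

    wg.Add(1)
    go func() {
        defer wg.Done()
        result2, e := doSomeWork2(ctx)
        if e != nil {
            cancel()
            errCh <- e
            return
        }
        c2 <- result2
    }()

    wg.Wait()
    close(errCh)

    if e, ok := <-errCh; ok {
        fmt.Println("Error==>", e)
        return
    }
    fmt.Println(<-c, <-c2)
}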
EDIT: the sleep example above is obviously a contrived example of how to abort a "fake" task. In the real world, HTTP or SQL DB calls would be involved, and since Go 1.7 and 1.8 the standard library has added context support to these potentially blocking calls:
func doSomeWork(ctx context.Context) error {
// DB
db, err := sql.Open("mysql", "...") // check err
if err != nil {
return err
}
//rows, err := db.Query("SELECT age FROM users")
rows, err := db.QueryContext(ctx, "SELECT age FROM users")
if err != nil {
return err // returns an error if the context is canceled
}
defer rows.Close()
// http
// req, err := http.NewRequest("GET", "http://example.com", nil)
req, err := http.NewRequestWithContext(ctx, "GET", "http://example.com", nil) // check err
resp, err := http.DefaultClient.Do(req)
if err != nil {
return err // returns an error if the context is canceled
}
defer resp.Body.Close()
return nil
}
EDIT (2): to poll a context's state without blocking, leverage select's default branch:
select {
case <-ctx.Done():
return ctx.Err()
default:
// if ctx is not done - this branch is used
}
The default branch can optionally contain code, but even when it is empty its presence prevents blocking, so the select simply polls the status of the context at that instant.
Related
I have two goroutines: the main worker, and a helper that it spins off for some help. The helper can encounter errors, so I use a channel to communicate errors from the helper back to the worker.
func helper(c chan<- error) {
//do some work
c <- err // send errors/nil on c
}
Here is how helper() is called:
func worker() error {
//do some work
c := make(chan error, 1)
go helper(c)
err := <- c
return err
}
Questions:
Is the statement err := <- c blocking worker? I don't think so, since the channel is buffered.
If it is blocking, how do I make it non-blocking? My requirement is to have worker and its caller continue with rest of the work, without waiting for the value to appear on the channel.
Thanks.
You can easily verify this yourself:
package main
import (
"errors"
"fmt"
"time"
)
func helper(c chan<- error) {
time.Sleep(5 * time.Second)
c <- errors.New("") // send errors/nil on c
}
func worker() error {
fmt.Println("do one")
c := make(chan error, 1)
go helper(c)
err := <-c
fmt.Println("do two")
return err
}
func main() {
worker()
}
Q: Is the statement err := <- c blocking worker? I don't think so, since the channel is buffered.
A: err := <- c will block worker.
Q: If it is blocking, how do I make it non-blocking? My requirement is to have worker and its caller continue with rest of the work, without waiting for the value to appear on the channel.
A: If you don't want it to block there, don't receive there: if you need err, move err := <-c further down, after the rest of the work.
You cannot receive from a channel without blocking unless you use select with a default branch, and a non-blocking receive only sees a value if one is already waiting; to pick it up later you have to check again, typically in a loop.
Loop:
for {
select {
case <-c:
break Loop
default:
//default will go through without blocking
}
// do something
}
Have you looked at errgroup or sync.WaitGroup?
Internally they use atomics, a cancellable context, and sync.Once to implement this.
https://github.com/golang/sync/blob/master/errgroup/errgroup.go
https://github.com/golang/go/blob/master/src/sync/waitgroup.go
Or you can just use errgroup directly: start your funcs with Go and then wait for the error wherever you want.
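For example, a minimal errgroup sketch (using golang.org/x/sync/errgroup; the details here are illustrative, not taken from your code):

package main

import (
    "errors"
    "log"

    "golang.org/x/sync/errgroup"
)

func main() {
    var g errgroup.Group

    g.Go(func() error {
        // the helper's work goes here; return nil or an error
        return errors.New("helper failed")
    })

    // the worker's remaining work runs here, concurrently with the helper

    if err := g.Wait(); err != nil { // first non-nil error, if any
        log.Println(err)
    }
}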
In your code, the rest of the work is independent of whether the helper encountered an error. You can simply receive from the channel after the rest of the work is completed.
func worker() error {
//do some work
c := make(chan error, 1)
go helper(c)
//do rest of the work
return <-c
}
I think you need something like the following code; run it and see:
package main
import (
"log"
"sync"
)
func helper(c chan<- error) {
for {
var err error = nil
// do job
if err != nil {
c <- err // send errors/nil on c
break
}
}
}
func worker(c chan error) error {
log.Println("first log")
go func() {
helper(c)
}()
count := 1
Loop:
for {
select {
case err := <- c :
return err
default:
log.Println(count, " log")
count++
isFinished := false
// do your job
if isFinished {
break Loop // remove this when you test
}
}
}
return nil
}
func main() {
wg := sync.WaitGroup{}
wg.Add(1)
go func() {
c := make(chan error, 1)
worker(c)
wg.Done()
}()
wg.Wait()
}
In Go I have two callbacks that may eventually never fire.
registerCb(func() {...})
registerCb(func() {...})
/* Wait for both func to execute with timeout */
I want to wait for both of them, with a timeout in case one of them is never executed.
sync.WaitGroup does not work, since it blocks and is not channel based. Also, you can't call WaitGroup.Done() outside the callbacks without risking a panic.
My current solution uses just two booleans and a busy-wait loop, but that's not satisfying.
Is there an idiomatic way that does not use polling or busy waiting?
Update:
Here is some code that demonstrates the busy-wait solution; it should return as soon as both callbacks have fired, or after the timeout, without using polling:
package main
import (
"fmt"
"time"
)
var cbOne func()
var cbTwo func()
func registerCbOne(cb func()) {
cbOne = cb
}
func registerCbTwo(cb func()) {
cbTwo = cb
}
func executeCallbacks() {
<-time.After(1 * time.Second)
cbOne()
// Might never happen
//<-time.After(1 * time.Second)
//cbTwo()
}
func main() {
// Some process in background will execute our callbacks
go func() {
executeCallbacks()
}()
err := WaitAllOrTimeout(3 * time.Second)
if err != nil {
fmt.Println("Error: ", err.Error())
}
fmt.Println("Hello, playground")
}
func WaitAllOrTimeout(to time.Duration) error {
cbOneDoneCh := make(chan bool, 1)
cbTwoDoneCh := make(chan bool, 1)
cbOneDone := false
cbTwoDone := false
registerCbOne(func() {
fmt.Println("cb One");
cbOneDoneCh <- true
})
registerCbTwo(func() {
fmt.Println("cb Two");
cbTwoDoneCh <- true
})
// Wait for cbOne and cbTwo to be executed or a timeout
// Busywait solution
for {
select {
case <-time.After(to):
if cbOneDone && cbTwoDone {
fmt.Println("Both CB executed (we could poll more often)")
return nil
}
fmt.Println("Timeout!")
return fmt.Errorf("Timeout")
case <-cbOneDoneCh:
cbOneDone = true
case <-cbTwoDoneCh:
cbTwoDone = true
}
}
}
This is a followup to my comment, added after you added your example solution. To be clearer than I can in comments, your example code is actually not that bad. Here is your original example:
// Busywait solution
for {
select {
case <-time.After(to):
if cbOneDone && cbTwoDone {
fmt.Println("Both CB executed (we could poll more often)")
return nil
}
fmt.Println("Timeout!")
return fmt.Errorf("Timeout")
case <-cbOneDoneCh:
cbOneDone = true
case <-cbTwoDoneCh:
cbTwoDone = true
}
}
This isn't a "busy wait" but it does have several bugs (including the fact that you need an only-once send semantic for the done channels, or maybe easier and at least as good, to just close them once when done, perhaps using sync.Once). What we want to do is:
Start a timer with to as the timeout.
Enter a select loop, using the timer's channel and the two "done" channels.
We want to exit the select loop when the first of the following events occurs:
the timer fires, or
both "done" channels have been signaled.
If we're going to close the two done channels, we'll want to clear the Ch variables (set them to nil) as well so that the selects don't spin (that would turn this into a true busy-wait). For the moment, let's assume instead that we send exactly once on each channel from its callback and otherwise just leak the channels, so that we can use your code as written, since those selects will only ever fire once each. Here's the updated code:
t := time.NewTimer(to)
for !cbOneDone || !cbTwoDone {
select {
case <-t.C:
fmt.Println("Timeout!")
return fmt.Errorf("timeout")
case <-cbOneDoneCh:
cbOneDone = true
case <-cbTwoDoneCh:
cbTwoDone = true
}
}
// insert t.Stop() and receive here to drain t.C if desired
fmt.Println("Both CB executed")
return nil
Note that we will go through the loop at most two times:
If we receive from both Done channels, once each, the loop stops without a timeout. There's no spinning/busy-waiting: we never received anything from t.C. We return nil (no error).
If we receive from one Done channel, the loop resumes but blocks waiting for the timer or the other Done channel.
If we ever receive from t.C, it means we didn't get both callbacks yet. We may have had one, but there's been a timeout and we choose to give up, which was our goal. We return an error, without going back through the loop.
A real version needs a bit more work to clean up properly and avoid leaking "done" channels (and the timer channel and its goroutine; see comment), but this is the general idea. You're already turning the callbacks into channel operations, and you already have a timer with its channel.
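For illustration, here is a sketch of that close-once variant (it reuses registerCbOne/registerCbTwo from the question; the names and details are mine, not a definitive implementation):

func WaitAllOrTimeout(to time.Duration) error {
    oneCh := make(chan struct{})
    twoCh := make(chan struct{})
    var onceOne, onceTwo sync.Once

    // close exactly once, even if a callback happens to fire twice
    registerCbOne(func() { onceOne.Do(func() { close(oneCh) }) })
    registerCbTwo(func() { onceTwo.Do(func() { close(twoCh) }) })

    t := time.NewTimer(to)
    defer t.Stop()

    // local copies used by the select; once a channel has been signalled we
    // set the copy to nil so that case blocks forever instead of spinning
    // on the closed channel
    waitOne, waitTwo := oneCh, twoCh
    for waitOne != nil || waitTwo != nil {
        select {
        case <-t.C:
            return fmt.Errorf("timeout")
        case <-waitOne:
            waitOne = nil
        case <-waitTwo:
            waitTwo = nil
        }
    }
    return nil
}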
func wait(ctx context.Context, wg *sync.WaitGroup) error {
done := make(chan struct{}, 1)
go func() {
wg.Wait()
done <- struct{}{}
}()
select {
case <-done:
// Counter is 0, so all callbacks completed.
return nil
case <-ctx.Done():
// Context cancelled.
return ctx.Err()
}
}
Alternatively, you can pass a time.Duration and block on <-time.After(d) rather than on <-ctx.Done(), but I would argue that using context is more idiomatic.
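A sketch of how the question's callbacks could drive this wait helper (it assumes each callback fires at most once, since a second wg.Done() would panic):

func WaitAllOrTimeout(to time.Duration) error {
    var wg sync.WaitGroup
    wg.Add(2)
    registerCbOne(func() { wg.Done() }) // assumes each callback fires at most once
    registerCbTwo(func() { wg.Done() })

    ctx, cancel := context.WithTimeout(context.Background(), to)
    defer cancel()
    return wait(ctx, &wg)
}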
The code below presents two variations:
The first is the regular pattern: nothing fancy, it does the job and does it well. You launch your callbacks in goroutines, have them push their results to a sink channel, and listen on that sink for a result or a timeout. Take care with the sink channel's initial capacity: to prevent leaking a goroutine, it must match the number of callbacks.
The second factors out the synchronization mechanisms into small functions to assemble; two wait helpers are provided, waitAll and waitOne. It is nicer to write, but definitely less efficient: more allocations, more back and forth across more channels, more complex to reason about, and more subtle.
package main
import (
"fmt"
"log"
"sync"
"time"
)
func main() {
ExampleOne()
ExampleTwo()
ExampleThree()
fmt.Println("Hello, playground")
}
func ExampleOne() {
log.Println("start reg")
errs := make(chan error, 2)
go func() {
fn := callbackWithOpts("reg: so slow", 2*time.Second, nil)
errs <- fn()
}()
go func() {
fn := callbackWithOpts("reg: too fast", time.Millisecond, fmt.Errorf("broke!"))
errs <- fn()
}()
select {
case err := <-errs: // capture only one result,
// the fastest to finish.
if err != nil {
log.Println(err)
}
case <-time.After(time.Second): // or wait that many amount of time,
// in case they are all so slow.
}
log.Println("done reg")
}
func ExampleTwo() {
log.Println("start wait")
errs := waitAll(
withTimeout(time.Second,
callbackWithOpts("waitAll: so slow", 2*time.Second, nil),
),
withTimeout(time.Second,
callbackWithOpts("waitAll: too fast", time.Millisecond, nil),
),
)
for err := range trim(errs) {
if err != nil {
log.Println(err)
}
}
log.Println("done wait")
}
func ExampleThree() {
log.Println("start waitOne")
errs := waitOne(
withTimeout(time.Second,
callbackWithOpts("waitOne: so slow", 2*time.Second, nil),
),
withTimeout(time.Second,
callbackWithOpts("waitOne: too fast", time.Millisecond, nil),
),
)
for err := range trim(errs) {
if err != nil {
log.Println(err)
}
}
log.Println("done waitOne")
}
// a configurable callback for playing
func callbackWithOpts(msg string, tout time.Duration, err error) func() error {
return func() error {
<-time.After(tout)
fmt.Println(msg)
return err
}
}
// withTimeout returns a function that returns the handler's error, or times out and returns nil
func withTimeout(tout time.Duration, h func() error) func() error {
return func() error {
d := make(chan error, 1)
go func() {
d <- h()
}()
select {
case err := <-d:
return err
case <-time.After(tout):
}
return nil
}
}
// waitAll launches all funcs and merges their errors into the returned error channel.
// It is the caller's responsibility to drain the output error channel.
func waitAll(h ...func() error) chan error {
d := make(chan error, len(h))
var wg sync.WaitGroup
for i := 0; i < len(h); i++ {
wg.Add(1)
go func(h func() error) {
defer wg.Done()
d <- h()
}(h[i])
}
go func() {
wg.Wait()
close(d)
}()
return d
}
// waitOne launches all funcs and sends only the first error into the returned error channel.
// It is the caller's responsibility to drain the output error channel.
func waitOne(h ...func() error) chan error {
d := make(chan error, len(h))
one := make(chan error, 1)
var wg sync.WaitGroup
for i := 0; i < len(h); i++ {
wg.Add(1)
go func(h func() error) {
defer wg.Done()
d <- h()
}(h[i])
}
go func() {
for err := range d {
one <- err
close(one)
break
}
}()
go func() {
wg.Wait()
close(d)
}()
return one
}
func trim(err chan error) chan error {
out := make(chan error)
go func() {
for e := range err {
out <- e
}
close(out)
}()
return out
}
I've been reading a few Go blogs, and more recently I stumbled upon Peter Bourgon's talk titled "Ways to do things". He shows a few examples of the actor pattern for concurrency in Go. Here is a handler example using that pattern:
func (a *API) handleNext(w http.ResponseWriter, r *http.Request) {
var (
notFound = make(chan struct{})
otherError = make(chan error)
nextID = make(chan string)
)
a.action <- func() {
s, err := a.log.Oldest()
if err == ErrNoSegmentsAvailable {
close(notFound)
return
}
if err != nil {
otherError <- err
return
}
id := uuid.New()
a.pending[id] = pendingSegment{s, time.Now().Add(a.timeout), false}
nextID <- id
}
select {
case <-notFound:
http.NotFound(w, r)
case err := <-otherError:
http.Error(w, err.Error(), http.StatusInternalServerError)
case id := <-nextID:
fmt.Fprint(w, id)
}
}
And there's a loop behind the scenes listening for the action channel:
func (a *API) loop() {
for {
select {
case f := <-a.action:
f()
}
}
}
My question is: what is the benefit of all this? The handler isn't any faster, because it still blocks until the action func sends something back to it, which is essentially the same as just calling the function directly outside a goroutine. What am I missing here?
The benefits are not to a single call but to the sum of all calls.
For example you can use this to limit actual execution to a single goroutine and thereby avoid all the problems concurrent execution would bring with it.
For example I use this pattern to synchronise all usage of a connection to a hardware device that talks serial.
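As a hedged illustration (not from the talk), here is a tiny actor that serializes every operation on a shared resource through one goroutine; the Device type and its methods are invented for the example:

package main

import "fmt"

// Device serializes all access to some underlying resource
// (e.g. a serial connection) through a single goroutine.
type Device struct {
    action chan func()
}

func NewDevice() *Device {
    d := &Device{action: make(chan func())}
    go d.loop()
    return d
}

// loop is the only goroutine that ever touches the resource,
// so concurrent callers never race on it and no mutex is needed.
func (d *Device) loop() {
    for f := range d.action {
        f()
    }
}

// Query runs one operation on the actor and waits for its reply.
func (d *Device) Query(cmd string) string {
    reply := make(chan string, 1)
    d.action <- func() {
        // exclusive use of the underlying resource happens in here
        reply <- "response to " + cmd
    }
    return <-reply
}

func main() {
    d := NewDevice()
    fmt.Println(d.Query("status"))
}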
If RPC does not have a timeout mechanism, how do I "kill" an RPC call if it is trying to call an RPC method of a server that is closed?
You can use channels to implement a timeout pattern:
import "time"
c := make(chan error, 1)
go func() { c <- client.Call("Service", args, &result) } ()
select {
case err := <-c:
// use err and result
case <-time.After(timeoutNanoseconds):
// call timed out
}
The select will block until either client.Call returns or timeoutNanoseconds elapsed.
If you want to implement a timeout (to keep the connection attempt from taking too long), then you'll want to swap rpc.Dial for net.DialTimeout (notice they're separate packages: rpc vs net). Also be aware that the returned value isn't a client any more (as it was in the previous example); instead it is a connection.
conn, err := net.DialTimeout("tcp", "localhost:8080", time.Minute)
if err != nil {
log.Fatal("dialing:", err)
}
client := rpc.NewClient(conn)
It seems the only solution for net/rpc is to close the underlying connection when you notice stuck requests. Then the client should finish pending requests with "connection broken" errors.
An alternative way is to use https://github.com/valyala/gorpc , which supports timeout RPC calls out of the box.
func (client *Client) Call(serviceMethod string, args interface{}, reply interface{}) error
The Call method may block the calling goroutine forever.
Use the asynchronous Go method instead:
func (client *Client) Go(serviceMethod string, args interface{}, reply interface{}, done chan *Call) *Call
Client example:
call := rpcClient.Go(method, args, reply, make(chan *rpc.Call, 1))
select {
case <-time.After(timeout):
log.Printf("[WARN] rpc call timeout(%v) %v => %v", timeout, rpcClient, s.RpcServer)
rpcClient.Close()
return errors.New("timeout")
case resp := <-call.Done:
if resp != nil && resp.Error != nil {
rpcClient.Close()
return resp.Error
}
}
Or, nowadays, one might prefer to use context instead. This also takes care of returning a proper error when the call times out (context.DeadlineExceeded).
import (
"context"
"log"
"net/rpc"
)
type Client struct {
*rpc.Client
}
// CallEx is a context aware wrapper around rpc's Client.Call()
func (c *Client) CallEx(ctx context.Context, serviceMethod string, args interface{}, reply interface{}) error {
ec := make(chan error, 1)
go func() {
ec <- c.Call(serviceMethod, args, reply)
}()
select {
case err := <-ec:
return err
case <-ctx.Done():
return ctx.Err()
}
}
Invoke this with a context that carries a deadline:
type Args struct {
A, B int
}
func main() {
conn, err := rpc.DialHTTP("tcp", "host")
if err != nil {
log.Fatal(err)
}
c := Client{conn}
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
var i int
if err := c.CallEx(ctx, "Calc.Multiply", Args{2, 2}, &i); err != nil {
log.Fatal(err)
}
}
I have been playing with Go lately and trying to make a server which responds to clients over a TCP connection.
My question is: how do I cleanly shut down the server and interrupt the goroutine which is currently blocked in the following call,
func (*TCPListener) Accept?
According to the documentation of Accept
Accept implements the Accept method in the Listener interface; it waits for the next call and returns a generic Conn.
The errors it can return are also only sparsely documented.
Simply Close() the net.Listener you get from the net.Listen(...) call and return from the executing goroutine.
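A rough sketch of the accept loop under that approach (handleConnection stands in for your per-connection handler; the net.ErrClosed check needs Go 1.16 or newer, older versions have to match the error text instead):

func serve(ln net.Listener) {
    for {
        conn, err := ln.Accept()
        if err != nil {
            if errors.Is(err, net.ErrClosed) {
                return // the listener was closed on purpose: clean shutdown
            }
            log.Println("accept error:", err)
            continue
        }
        go handleConnection(conn)
    }
}

// elsewhere, to shut the server down:
// ln.Close() // makes the blocked Accept return immediately with an error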
TCPListener Deadline
You don't necessarily need an extra goroutine (one that keeps accepting); simply set a deadline.
For example:
for {
// Check if someone wants to interrupt accepting
select {
case <- someoneWantsToEndMe:
return // runs into "defer listener.Close()"
default: // nothing to do
}
// Accept with Deadline
listener.SetDeadline(time.Now().Add(1 * time.Second))
conn, err := listener.Accept()
if err != nil {
// TODO: Could do some err checking (to be sure it is a timeout), but for brevity
continue
}
go handleConnection(conn)
}
Here is what I was looking for; maybe it helps someone in the future.
Notice the use of select and the "c" channel, which combines the Accept result with the exit channel:
ln, err := net.Listen("tcp", ":8080")
if err != nil {
// handle error
}
defer ln.Close()
for {
type accepted struct {
conn net.Conn
err error
}
c := make(chan accepted, 1)
go func() {
conn, err := ln.Accept()
c <- accepted{conn, err}
}()
select {
case a := <-c:
if a.err != nil {
// handle error
continue
}
go handleConnection(a.conn)
case e := <-ev:
// handle event
return
}
}