I have a handler function for an endpoint. The handler takes a very long time to return a response because it involves a lot of processing. I do not want other incoming requests to run concurrently; instead they should wait for the previous one to finish! I tried implementing WaitGroups, check the code! Every time a new request comes in, a new instance of the wait group is created and it starts running concurrently instead of waiting for the older one to complete. Is my wait group approach incorrect?
var wg sync.WaitGroup

func Handler(c *gin.Context) {
    // some stuff that takes ~10-15 seconds, can't be run concurrently
    // If a second request comes put it in a queue and execute it only once this is done
    wg.Add(1)
    go func() {
        defer wg.Done()
        // some processing happens
        time.Sleep(10 * time.Second)
    }()
    wg.Wait()
    c.JSON(http.StatusOK, gin.H{"message": "Hello!"})
}

router.POST("/doSomething", Handler)
As already mentioned in the comments, this looks like a broken requirement. However, if you really want to have only one instance of the function running at a time, you can use a mutex:
var lock sync.Mutex

func Handler(c *gin.Context) {
    lock.Lock()
    defer lock.Unlock()
    // Process
}
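For completeness, here is a minimal runnable sketch of the mutex approach applied to the gin handler from the question (the 10-second sleep stands in for the real processing). Note that a sync.Mutex does not guarantee FIFO ordering for waiting requests; it only guarantees that at most one of them runs at a time.

package main

import (
    "net/http"
    "sync"
    "time"

    "github.com/gin-gonic/gin"
)

var lock sync.Mutex

// Handler serializes requests: a second request blocks on lock.Lock()
// until the previous one has finished its processing.
func Handler(c *gin.Context) {
    lock.Lock()
    defer lock.Unlock()

    // stands in for the ~10-15 seconds of work from the question
    time.Sleep(10 * time.Second)

    c.JSON(http.StatusOK, gin.H{"message": "Hello!"})
}

func main() {
    router := gin.Default()
    router.POST("/doSomething", Handler)
    router.Run(":8080")
}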
Related
I have written an API that makes DB calls and does some business logic. I am invoking a goroutine that must perform some operation in the background.
Since the API call should not wait for this background task to finish, I am returning 200 OK immediately after calling the goroutine (let us assume the background task will never give any error.)
I read that a goroutine is terminated once it has completed its task.
Is this fire and forget way safe from a goroutine leak?
Are goroutines terminated and cleaned up once they perform the job?
func DefaultHandler(w http.ResponseWriter, r *http.Request) {
    // Some DB calls
    // Some business logic
    go func() {
        // some task taking 5 sec
    }()
    w.WriteHeader(http.StatusOK)
}
I would recommend always having your goroutines under control to avoid memory and system exhaustion.
If you are receiving a spike of requests and you start spawning goroutines without control, the system will probably go down sooner or later.
In those cases where you need to return an immediate 200 OK, the best approach is to create a message queue: the server only needs to put a job in the queue, return the OK and forget about it. The rest will be handled asynchronously by a consumer.
Producer (HTTP server) >>> Queue >>> Consumer
Normally, the queue is an external resource (RabbitMQ, AWS SQS...) but for teaching purposes, you can achieve the same effect using a channel as a message queue.
In the example you'll see how we create a channel to communicate between the two processes.
Then we start the worker process that reads from the channel, and after that the server with a handler that writes to the channel.
Try to play with the buffer size and job time while sending curl requests.
package main

import (
    "fmt"
    "log"
    "net/http"
    "time"
)

/*
$ go run .

curl "http://localhost:8080?user_id=1"
curl "http://localhost:8080?user_id=2"
curl "http://localhost:8080?user_id=3"
curl "http://localhost:8080?user_id=....."
*/

func main() {
    queueSize := 10
    // This is our queue, a channel to communicate between processes. The queue size is the number of items that can be stored in the channel.
    myJobQueue := make(chan string, queueSize) // Search for 'buffered channels'

    // Start a worker that will read continuously from our queue.
    go myBackgroundWorker(myJobQueue)

    // We start our server with a handler that receives the queue to write to it.
    if err := http.ListenAndServe("localhost:8080", myAsyncHandler(myJobQueue)); err != nil {
        panic(err)
    }
}

func myAsyncHandler(myJobQueue chan<- string) http.HandlerFunc {
    return func(rw http.ResponseWriter, r *http.Request) {
        // We check that the query string has a 'user_id' query param.
        if userID := r.URL.Query().Get("user_id"); userID != "" {
            select {
            case myJobQueue <- userID: // We try to put the item into the queue ...
                rw.WriteHeader(http.StatusOK)
                rw.Write([]byte(fmt.Sprintf("queuing user process: %s", userID)))
            default: // If we cannot write to the queue, it's because it is full!
                rw.WriteHeader(http.StatusInternalServerError)
                rw.Write([]byte(`our internal queue is full, try it later`))
            }
            return
        }
        rw.WriteHeader(http.StatusBadRequest)
        rw.Write([]byte(`missing 'user_id' in query params`))
    }
}

func myBackgroundWorker(myJobQueue <-chan string) {
    const jobDuration = 10 * time.Second // simulation of a heavy background process

    // We continuously read from our queue and process the jobs one by one.
    // In this loop we could spawn more goroutines in a controlled way to parallelize work
    // and increase the read throughput, but I don't want to overcomplicate the example.
    for userID := range myJobQueue {
        // rate limiter here ...
        // go func(u string) {
        log.Printf("processing user: %s, started", userID)
        time.Sleep(jobDuration)
        log.Printf("processing user: %s, finished", userID)
        // }(userID)
    }
}
There is no "goroutine cleaning" you have to handle; you just launch goroutines, and they are cleaned up when the function launched as a goroutine returns. Quoting from Spec: Go statements:
When the function terminates, its goroutine also terminates. If the function has any return values, they are discarded when the function completes.
So what you do is fine. Note however that your launched goroutine cannot use or assume anything about the request (r) or the response writer (w); you may only use them before you return from the handler.
Also note that you don't have to write http.StatusOK, if you return from the handler without writing anything, that's assumed to be a success and HTTP 200 OK will be sent back automatically.
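To make that point concrete, here is a minimal sketch (assuming the usual net/http, log and time imports; the user_id parameter is just an invented example, not something from the question): copy whatever the background task needs out of the request before the handler returns, and pass it to the goroutine as an argument.

func DefaultHandler(w http.ResponseWriter, r *http.Request) {
    // Copy the values the background task needs *before* the handler returns;
    // the goroutine must not touch r or w after that point.
    userID := r.URL.Query().Get("user_id")

    go func(id string) {
        // background task using only the copied value
        time.Sleep(5 * time.Second)
        log.Printf("background task for user %s finished", id)
    }(userID)

    // Returning without writing anything sends HTTP 200 OK automatically.
}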
See related / possible duplicate: Webhook process run on another goroutine
@icza is absolutely right, there is no "goroutine cleaning"; you can use a webhook or a background job library like gocraft. The only way I can think of using your solution is with the sync package, for learning purposes:
func DefaultHandler(w http.ResponseWriter, r *http.Request) {
    // Some DB calls
    // Some business logic
    var wg sync.WaitGroup
    wg.Add(1)
    go func() {
        defer wg.Done()
        // some task taking 5 sec
    }()
    w.WriteHeader(http.StatusOK)
    wg.Wait()
}
You can wait for a goroutine to finish using a *sync.WaitGroup:
// BusyTask
func BusyTask(t interface{}) error {
    var wg = &sync.WaitGroup{}
    wg.Add(1)
    go func() {
        // busy doing stuff
        time.Sleep(5 * time.Second)
        wg.Done()
    }()
    wg.Wait() // wait for goroutine
    return nil
}
// this will wait 5 seconds until the goroutine finishes
func main() {
    fmt.Println("hello")
    BusyTask("some task...")
    fmt.Println("done")
}
Another way is to attach a context.Context to the goroutine and time it out.
func BusyTaskContext(ctx context.Context, t string) error {
    done := make(chan struct{}, 1)

    go func() {
        // simulate 5 seconds of work
        time.Sleep(5 * time.Second)
        // do the task and signal done
        done <- struct{}{}
        close(done)
    }()

    select {
    case <-ctx.Done():
        return errors.New("timeout")
    case <-done:
        return nil
    }
}
func main() {
    fmt.Println("hello")
    ctx, cancel := context.WithTimeout(context.TODO(), 2*time.Second)
    defer cancel()
    if err := BusyTaskContext(ctx, "some task..."); err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println("done")
}
I am fairly new to Golang and its concurrency principles. My use case involves performing multiple HTTP requests (for a single entity) on a batch of entities. If any of the HTTP requests fails for an entity, I need to stop all parallel HTTP requests for that entity. I also have to keep a count of the entities that failed with errors. I am trying to implement an errgroup inside the entity goroutines, such that if any HTTP request fails for a single entity the errgroup terminates and returns the error to its parent goroutine. But I am not sure how to maintain the count of errors.
func main(entity []string) {
    errorC := make(chan string) // channel to collect failed entities
    var wg sync.WaitGroup
    for _, link := range entity {
        wg.Add(1)
        // Spawn errorgroup here: errorgroup_spawn
    }
    go func() {
        wg.Wait()
        close(errorC)
    }()
    for msg := range errorC {
        // here storing error entityIds somewhere.
    }
}
and errorgroup like this
func errorgroup_spawn(ctx context.Context, errorC chan string, wg *sync.WaitGroup) error { // and other params
    defer wg.Done()
    goRoutineCollection, ctxx := errgroup.WithContext(ctx)
    results := make(chan *result)
    goRoutineCollection.Go(func() error {
        // http calls for a single entity
        // if an error occurs, push it to errorC and return the error
        return nil
    })
    go func() {
        goRoutineCollection.Wait()
        close(results)
    }()
    return goRoutineCollection.Wait()
}
PS: I was also thinking of applying nested errgroups, but I can't see how to maintain error counts while the other errgroups are running.
Can anyone tell me whether this is a correct approach for handling such real-world scenarios?
One way to keep track of errors is to use a status struct that records which error came from where:
type Status struct {
    Entity string
    Err    error
}

...

errorC := make(chan Status)

// Spawn error groups with the name of the entity, and when an error happens,
// push Status{Entity: entityName, Err: err} to the channel
You can then read all the errors from the error channel and figure out what failed and why.
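Putting the two ideas together, a rough sketch of the whole flow might look like the following. This assumes golang.org/x/sync/errgroup plus the context and sync packages, and makeHTTPCall is a hypothetical stand-in for one of your per-entity HTTP calls (it is not code from the question). Each entity gets its own errgroup, exactly one Status per entity is sent on the channel, and the error count is simply the number of non-nil Err values collected.

// Sketch only: makeHTTPCall(ctx, entity, name) error is hypothetical.
func processEntities(entities []string) []Status {
    var wg sync.WaitGroup
    statusC := make(chan Status)

    for _, e := range entities {
        wg.Add(1)
        go func(entity string) {
            defer wg.Done()

            g, ctx := errgroup.WithContext(context.Background())
            // One g.Go per HTTP request for this entity; the first failure
            // cancels ctx, which the sibling calls should observe and abort.
            g.Go(func() error { return makeHTTPCall(ctx, entity, "call-a") })
            g.Go(func() error { return makeHTTPCall(ctx, entity, "call-b") })

            // g.Wait returns the first non-nil error, or nil if all succeeded.
            statusC <- Status{Entity: entity, Err: g.Wait()}
        }(e)
    }

    go func() {
        wg.Wait()
        close(statusC)
    }()

    var failed []Status
    for st := range statusC {
        if st.Err != nil {
            failed = append(failed, st)
        }
    }
    // len(failed) is the number of entities that failed.
    return failed
}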
Another option is not to use errorgroups at all. This makes things more explicit, but whether it is better or not is debatable:
// Keep entity statuses
statuses := make([]Status, len(entity))

for i, link := range entity {
    statuses[i].Entity = link
    wg.Add(1)
    go func(i int) {
        defer wg.Done()
        ctx, cancel := context.WithCancel(context.Background())
        defer cancel()
        // Error collector
        status := make(chan error)
        defer close(status)
        go func() {
            for st := range status {
                if st != nil {
                    cancel() // Stop all calls
                    // store the first error
                    if statuses[i].Err == nil {
                        statuses[i].Err = st
                    }
                }
            }
        }()
        innerWg := sync.WaitGroup{}
        innerWg.Add(1)
        go func() {
            defer innerWg.Done()
            status <- makeHttpCall(ctx)
        }()
        innerWg.Add(1)
        go func() {
            defer innerWg.Done()
            status <- makeHttpCall(ctx)
        }()
        ...
        innerWg.Wait()
    }(i)
}
When everything is done, statuses will contain all entities and corresponding statuses.
Many languages have their own high-level non-blocking HTTP client, for example Python's aiohttp. Namely, they send out HTTP requests without waiting for the response; when the response arrives they invoke some kind of callback.
My questions are:
is there a Go package for that?
or do we just create a goroutine in which we use the normal HTTP client?
which way is better?
Other languages have such features because when they block waiting for a request they block the thread they are running on. This is the case for Java, Python or NodeJS. Therefore, to make them useful, the developers had to implement such long-running blocking operations with callbacks. The root cause of that is the C library beneath, which blocks threads on input/output operations.
Go does not use the C library (only in some cases, and that can be turned off) and makes system calls by itself. When a goroutine blocks on such a call, the thread executing it parks it and runs another goroutine. Therefore you can have an enormous number of blocked goroutines without running out of threads. Goroutines are cheap with regard to memory; threads are operating-system entities.
In Go, using goroutines is better. Because of the above, there is no need for an asynchronous client.
For comparison, in Java you would quickly end up with many threads. The next step would be pooling them, as they are costly, and pooling means limiting the concurrency.
As others have stated, goroutines are the way to go (pun intended).
Minimal Example:
type nonBlocking struct {
    Response *http.Response
    Error    error
}

const numRequests = 2

func main() {
    nb := make(chan nonBlocking, numRequests)
    wg := &sync.WaitGroup{}
    for i := 0; i < numRequests; i++ {
        wg.Add(1)
        go Request(nb)
    }
    go HandleResponse(nb, wg)
    wg.Wait()
}

func Request(nb chan nonBlocking) {
    resp, err := http.Get("http://example.com")
    nb <- nonBlocking{
        Response: resp,
        Error:    err,
    }
}

func HandleResponse(nb chan nonBlocking, wg *sync.WaitGroup) {
    for get := range nb {
        if get.Error != nil {
            log.Println(get.Error)
        } else {
            log.Println(get.Response.Status)
        }
        wg.Done()
    }
}
Yip, built into the standard library, just not usable by a simple function call out of the box.
Take this example
package main

import (
    "flag"
    "log"
    "net/http"
    "sync"
    "time"
)

var url string
var timeout time.Duration

func init() {
    flag.StringVar(&url, "url", "http://www.stackoverflow.com", "url to GET")
    flag.DurationVar(&timeout, "timeout", 5*time.Second, "timeout for the GET operation")
}

func main() {
    flag.Parse()
    // We use the channel as our means to
    // hand the response over
    rc := make(chan *http.Response)
    // We need a waitgroup because all goroutines exit when main exits
    var wg sync.WaitGroup
    // We are spinning up an async request.
    // Increment the counter for our WaitGroup.
    // What we are basically doing here is to tell the WaitGroup
    // "Hey, there is one more task you have to wait for!"
    wg.Add(1)
    go func() {
        // Notify the WaitGroup that one task is done as soon
        // as we exit the goroutine.
        defer wg.Done()
        log.Printf("Doing GET request on \"%s\"", url)
        resp, err := http.Get(url)
        if err != nil {
            log.Printf("GET for %s: %s", url, err)
        }
        // We send the response downstream
        rc <- resp
        // Now the goroutine exits and the deferred call to wg.Done()
        // is executed.
    }()
    // And here we do our async processing.
    // Note that you could have done the processing in the first goroutine
    // as well, since http.Get would be a blocking operation and any subsequent
    // code in the goroutine would have been executed only after the Get returned.
    // However, I put the processing into its own goroutine for demonstration purposes.
    wg.Add(1)
    go func() {
        // As above
        defer wg.Done()
        log.Println("Doing something else")
        // Setting up a timer for a timeout.
        // Note that this could be done using a request with a context, as well.
        to := time.NewTimer(timeout).C
        select {
        case <-to:
            log.Println("Timeout reached")
            // Exiting the goroutine, the deferred call to wg.Done is executed
            return
        case r := <-rc:
            if r == nil {
                log.Printf("Got no useful response from GETting \"%s\"", url)
                // Exiting the goroutine, the deferred call to wg.Done is executed
                return
            }
            log.Printf("Got response with status code %d (%s)", r.StatusCode, r.Status)
            log.Printf("Now I can do something useful with the response")
        }
    }()
    // Now we have set up all of our tasks,
    // we are waiting until all of them are done...
    wg.Wait()
    log.Println("All tasks done, exiting")
}
If you look at this closely, we have all the building blocks needed to GET a URL and process the response asynchronously. We can start to abstract this a bit:
package main

import (
    "flag"
    "log"
    "net/http"
    "time"
)

var url string
var timeout time.Duration

func init() {
    flag.StringVar(&url, "url", "http://www.stackoverflow.com", "url to GET")
    flag.DurationVar(&timeout, "timeout", 5*time.Second, "timeout for the GET operation")
}

type callbackFunc func(*http.Response, error) error

func getWithCallBack(u string, callback callbackFunc) chan error {
    // We create a channel which we can use to notify the caller of the
    // result of the callback.
    c := make(chan error)
    go func() {
        c <- callback(http.Get(u))
    }()
    return c
}

func main() {
    flag.Parse()
    c := getWithCallBack(url, func(resp *http.Response, err error) error {
        if err != nil {
            // Doing something useful with the err.
            // Add additional cases as needed.
            switch err {
            case http.ErrNotSupported:
                log.Printf("GET not supported for \"%s\"", url)
            }
            return err
        }
        log.Printf("GETting \"%s\": Got response with status code %d (%s)", url, resp.StatusCode, resp.Status)
        return nil
    })
    if err := <-c; err != nil {
        log.Printf("Error GETting \"%s\": %s", url, err)
    }
    log.Println("All tasks done, exiting")
}
And there you Go (pun intended): Async processing of GET requests.
I have a time ticker that executes a function at a fixed interval (e.g. every 5 or 10 minutes). I create this ticker within a goroutine. I heard that this kind of ticker can leak memory even if the app has stopped using it. The ticker will keep running as long as the app is running. Should it be stopped? How do I stop it properly? Here is my implementation:
go func() {
    for range time.Tick(5 * time.Minute) {
        ExecuteFunctionA()
    }
}()
What is the proper implementation for a time ticker like this?
You can use a channel as an intermediary to stop the ticker properly.
Usually I do it like the following:
var stopChan chan bool = make(chan bool)

func Stop() {
    stopChan <- true
}

func Run() {
    go func() {
        ticker := time.NewTicker(5 * time.Minute)
        for {
            select {
            case <-stopChan:
                ticker.Stop()
                return
            case <-ticker.C:
                ExecuteFunctionA()
            }
        }
    }()
}
And you can invoke the Stop function whenever you want to stop the ticker safely.
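A minimal usage sketch, assuming the Run and Stop functions above:

func main() {
    Run() // starts the ticker goroutine

    // ... the application does its work ...
    time.Sleep(30 * time.Minute)

    Stop() // unblocks once the ticker goroutine has received the signal
}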
Nothing will leak if the 'app stopped'. The warning in the documentation refers to the fact that the garbage collector cannot reclaim the underlying ticker once it is created (time.Tick() initializes a ticker and returns its channel), so it will sit in memory even if you decide to break out of your for loop.
Based on your description in the question, this shouldn't be an issue for you since you want the ticker running as long as the app is running. But if you decide otherwise, you can use an alternative way like:
go func() {
    for {
        time.Sleep(time.Duration(5) * time.Minute)
        go ExecuteFunctionA()
        if someConditionIsMet {
            break // nothing leaks in this case
        }
    }
}()
A few months ago I was thinking about how to implement a closable event loop in Go for an RPC library. I managed to facilitate closing the server like so:
type Server struct {
    listener  net.Listener
    closeChan chan bool
    routines  sync.WaitGroup
}

func (s *Server) Serve() {
    s.routines.Add(1)
    defer s.routines.Done()
    defer s.listener.Close()
    for {
        select {
        case <-s.closeChan:
            // close server etc.
        default:
            s.listener.SetDeadline(time.Now().Add(2 * time.Second))
            conn, _ := s.listener.Accept()
            // handle conn routine
        }
    }
}

func (s *Server) Close() {
    s.closeChan <- true // signal to close serve routine
    s.routines.Wait()
}
The problem I've found with this implementation is that it involves a timeout, which means the minimum close time is up to 2 seconds longer than it could be. Is there a more idiomatic method of creating an event loop?
I don't think that event loops in Go need to be loops.
It would seem simpler to handle closing and connections in separate goroutines:
go func() {
    <-s.closeChan
    // close server, release resources, etc.
    s.listener.Close()
}()

for {
    conn, err := s.listener.Accept()
    if err != nil {
        // log, return
    }
    // handle conn routine
}
Note that you might also close the listener directly in your Close function without using a channel. What I have done here is used the error return value of Listener.Accept to facilitate inter-routine communication.
If at some point in the closing and connection-handling code you need to protect resources that you are closing while still serving, you can use a Mutex. But it's generally possible to avoid that.
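A sketch of that direct-close variant (net.ErrClosed requires Go 1.16 or newer, and handleConn stands in for whatever per-connection handling you do; this is an illustration, not the original library's code): closing the listener unblocks the pending Accept immediately, so no deadline is needed.

func (s *Server) Serve() {
    s.routines.Add(1)
    defer s.routines.Done()

    for {
        conn, err := s.listener.Accept()
        if err != nil {
            // Accept returns an error once Close() has closed the listener.
            if errors.Is(err, net.ErrClosed) {
                return
            }
            log.Printf("accept: %v", err)
            continue
        }
        go s.handleConn(conn)
    }
}

func (s *Server) Close() {
    s.listener.Close() // unblocks the pending Accept immediately
    s.routines.Wait()
}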