I'm new to Golang, trying to build a system that fetches content from a set of URLs and extracts specific lines with a regex. The problems start when I wrap the code in goroutines: I get a different number of regex results on every run, and many of the fetched lines are duplicates.
max_routines := 3
sem := make(chan int, max_routines) // to control the number of working routines
var wg sync.WaitGroup
ch_content := make(chan string)
client := http.Client{}

for i := 2; ; i++ {
    // for testing
    if i > 5 {
        break
    }
    // loop should be broken if feebbacks_checstr is found in content
    if loop_break {
        break
    }

    wg.Add(1)
    go func(i int) {
        defer wg.Done()
        sem <- 1 // will block if > max_routines

        final_url = url + a.tm_id + "/page=" + strconv.Itoa(i)
        resp, _ := client.Get(final_url)

        var bodyString string
        if resp.StatusCode == http.StatusOK {
            bodyBytes, _ := ioutil.ReadAll(resp.Body)
            bodyString = string(bodyBytes)
        }

        // checking for stop word in content
        if false == strings.Contains(bodyString, feebbacks_checstr) {
            res2 = regex.FindAllStringSubmatch(bodyString, -1)
            for _, v := range res2 {
                ch_content <- v[1]
            }
        } else {
            loop_break = true
        }

        resp.Body.Close()
        <-sem
    }(i)
}

for {
    select {
    case r := <-ch_content:
        a.feedbacks = append(a.feedbacks, r) // collecting the data
    case <-time.After(500 * time.Millisecond):
        show(len(a.feedbacks)) // < always different result, many entries in a.feedbacks are duplicates
        fmt.Printf(".")
    }
}
As a result, len(a.feedbacks) sometimes gives 130 and sometimes 139, and a.feedbacks contains duplicates. If I remove the duplicates, the number of results is about half of what I'm expecting (109 without duplicates).
You're creating a closure by using an anonymous goroutine function. I notice your final_url isn't declared with := but assigned with =, which means it's defined outside the closure. All goroutines therefore share the same final_url variable, and there's a race condition going on: some goroutines overwrite final_url before other goroutines have made their requests, and this results in duplicates.
If you define final_url inside the goroutine, they won't be stepping on each other's toes, and it should work as you expect.
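For example, a minimal sketch of that change, reusing the variables from your snippet (url, a.tm_id, sem, wg, client) and releasing the semaphore with defer so an early return cannot leak a slot:

go func(i int) {
    defer wg.Done()
    sem <- 1
    defer func() { <-sem }()

    // := declares a new final_url local to this goroutine, so the
    // goroutines no longer share (and overwrite) a single variable.
    final_url := url + a.tm_id + "/page=" + strconv.Itoa(i)
    resp, err := client.Get(final_url)
    if err != nil {
        return
    }
    defer resp.Body.Close()

    // ... the rest of your body stays the same ...
}(i)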
That's the simple fix for what you have. A more idiomatic Go way to do this would be to create an input channel (containing the URLs to request) and an output channel (eventually containing whatever you're pulling out of the response), and instead of trying to manage the life and death of dozens of goroutines, keep a constant number of goroutines alive that try to drain the input channel.
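A rough sketch of that shape, assuming the only input a worker needs is the page number; fetchAndExtract is a hypothetical stand-in for your Get-plus-regex logic, the test range 2..5 is reused, and the stop-word check is left out for brevity:

pages := make(chan int)      // input: page numbers to fetch
matches := make(chan string) // output: lines extracted by the regex

var wg sync.WaitGroup
for w := 0; w < max_routines; w++ {
    wg.Add(1)
    go func() {
        defer wg.Done()
        for i := range pages {
            for _, m := range fetchAndExtract(i) { // placeholder for your fetch + regex
                matches <- m
            }
        }
    }()
}

// Close the output once every worker has finished.
go func() {
    wg.Wait()
    close(matches)
}()

// Feed the pages, then close the input so the workers exit.
go func() {
    for i := 2; i <= 5; i++ {
        pages <- i
    }
    close(pages)
}()

for m := range matches {
    a.feedbacks = append(a.feedbacks, m)
}

Because only this last loop appends to a.feedbacks, no mutex is needed around it.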
I'm writing an app using Go that is interacting with Spotify's API and I find myself needing to use an infinite for loop to call an endpoint until the length of the returned slice is less than the limit, signalling that I've reached the end of the available entries.
For my user account, there are 1644 saved albums (I determined this by looping through without using goroutines). However, when I add goroutines in, I'm getting back 2544 saved albums with duplicates. I'm also using the semaphore pattern to limit the number of goroutines so that I don't exceed the rate limit.
I assume that the issue is with using the active variable rather than channels, but my attempt at that just resulted in an infinite loop.
wg := &sync.WaitGroup{}
sem := make(chan bool, 20)
active := true
offset := 0

for {
    sem <- true
    if active {
        // add each new goroutine to waitgroup
        wg.Add(1)
        go func() error {
            // remove from waitgroup when goroutine is complete
            defer wg.Done()
            // release the worker
            defer func() { <-sem }()

            savedAlbums, err := client.CurrentUsersAlbums(ctx, spotify.Limit(50), spotify.Offset(offset))
            if err != nil {
                return err
            }
            userAlbums = append(userAlbums, savedAlbums.Albums...)

            if len(savedAlbums.Albums) < 50 {
                // since the limit is set to 50, we know that if the number of returned albums
                // is less than 50 that we're done retrieving data
                active = false
                return nil
            } else {
                offset += 50
                return nil
            }
        }()
    } else {
        wg.Wait()
        break
    }
}
Thanks in advance!
I suspect that your main issue may be a misunderstanding of what the go keyword does; from the docs:
A "go" statement starts the execution of a function call as an independent concurrent thread of control, or goroutine, within the same address space.
So go func() error { starts the execution of the closure; it does not mean that any of the code runs immediately. In fact, because client.CurrentUsersAlbums will take a while, it's likely you will be requesting the first 50 items 20 times. This can be demonstrated with a simplified version of your application (playground):
func main() {
    wg := &sync.WaitGroup{}
    sem := make(chan bool, 20)
    active := true
    offset := 0

    for {
        sem <- true
        if active {
            // add each new goroutine to waitgroup
            wg.Add(1)
            go func() error {
                // remove from waitgroup when goroutine is complete
                defer wg.Done()
                // release the worker
                defer func() { <-sem }()

                fmt.Println("Getting from:", offset)
                time.Sleep(time.Millisecond) // Simulate the query
                // Pretend that we got back 50 albums
                offset += 50
                if offset > 2000 {
                    active = false
                }
                return nil
            }()
        } else {
            wg.Wait()
            break
        }
    }
}
Running this will produce somewhat unpredictable results (note that the playground caches results, so try it on your machine), but you will probably see Getting from: 0 printed 20 times.
A further issue is data races. Updating a variable from multiple goroutines without protection (e.g. sync.Mutex) results in undefined behaviour.
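Purely to illustrate what "protection" means here, this is a variant of the simplified program above that guards offset with a mutex; it makes each goroutine see a distinct offset, but, as explained next, it does not address the real problem with the algorithm:

package main

import (
    "fmt"
    "sync"
)

func main() {
    var (
        mu     sync.Mutex
        offset int
        wg     sync.WaitGroup
    )
    for i := 0; i < 20; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            // Guarded read-and-increment: each goroutine reserves its own
            // offset before any other goroutine can change it.
            mu.Lock()
            myOffset := offset
            offset += 50
            mu.Unlock()
            fmt.Println("Getting from:", myOffset)
        }()
    }
    wg.Wait()
}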
You will want to know how to fix this but unfortunately you will need to rethink your algorithm. Currently the process you are following is:
1. Set pos to 0
2. Get 50 records starting from pos
3. If we got 50 records then pos = pos + 50 and loop back to step 2
This is a sequential algorithm; you don't know whether you have all of the data until you have requested the previous section. I guess you could make speculative queries (and handle failures) but a better solution would be to find some way to determine the number of results expected and then split the queries to get that number of records between multiple goroutines.
Note that if you do know the number of responses then you can do something like the following (playground):
noOfResultsToGet := 1644       // In the below we are getting 0-1643
noOfResultsPerRequest := 50
noOfSimultaneousRequests := 20 // You may not need this but many services will limit the number of simultaneous requests you can make (or, at least, rate limit them)

requestChan := make(chan int)       // Will be passed the starting #
responseChan := make(chan []string) // Response from whatever request we are making (can be any type really)

// Start goroutines to make the requests
var wg sync.WaitGroup
wg.Add(noOfSimultaneousRequests)
for i := 0; i < noOfSimultaneousRequests; i++ {
    go func(routineNo int) {
        defer wg.Done()
        for startPos := range requestChan {
            // Simulate making the request
            maxResult := startPos + noOfResultsPerRequest
            if maxResult > noOfResultsToGet {
                maxResult = noOfResultsToGet
            }
            rsp := make([]string, 0, noOfResultsPerRequest)
            for x := startPos; x < maxResult; x++ {
                rsp = append(rsp, strconv.Itoa(x))
            }
            responseChan <- rsp
            fmt.Printf("Goroutine %d handling data from %d to %d\n", routineNo, startPos, startPos+noOfResultsPerRequest)
        }
    }(i)
}

// Close the response channel when all goroutines have shut down
go func() {
    wg.Wait()
    close(responseChan)
}()

// Send the requests
go func() {
    for reqFrom := 0; reqFrom < noOfResultsToGet; reqFrom += noOfResultsPerRequest {
        requestChan <- reqFrom
    }
    close(requestChan) // Allow goroutines to exit
}()

// Receive responses (note that these may be out of order)
result := make([]string, 0, noOfResultsToGet)
for x := range responseChan {
    result = append(result, x...)
}

// Order the results and output (results from goroutines may come back in any order)
sort.Slice(result, func(i, j int) bool {
    a, _ := strconv.Atoi(result[i])
    b, _ := strconv.Atoi(result[j])
    return a < b
})
fmt.Printf("Result: %v", result)
Relying on channels to pass messages often makes this kind of thing easier to think about and reduces the chance that you will make a mistake.
Pass offset as an argument: go func(offset int) error {.
Increment offset by 50 after calling go func.
Change active to a chan bool.
To avoid the data race on userAlbums = append(userAlbums, res...), create a channel with the same element type as userAlbums, send the results to that channel from the goroutines, and collect them in a for loop.
Here is an example: https://go.dev/play/p/yzk8qCURZFC
Applied to your code:
wg := &sync.WaitGroup{}
worker := 20
active := make(chan bool, worker)
for i := 0; i < worker; i++ {
    active <- true
}

// I assume the type of userAlbums is []string
resultsChan := make(chan []string, worker)

go func() {
    offset := 0
    for {
        if <-active {
            // add each new goroutine to waitgroup
            wg.Add(1)
            go func(offset int) error {
                // remove from waitgroup when goroutine is complete
                defer wg.Done()
                savedAlbums, err := client.CurrentUsersAlbums(ctx, spotify.Limit(50), spotify.Offset(offset))
                if err != nil {
                    // active <- false // maybe you need this
                    return err
                }
                resultsChan <- savedAlbums.Albums
                if len(savedAlbums.Albums) < 50 {
                    // since the limit is set to 50, we know that if the number of returned albums
                    // is less than 50 that we're done retrieving data
                    active <- false
                    return nil
                } else {
                    active <- true
                    return nil
                }
            }(offset)
            offset += 50
        } else {
            wg.Wait()
            close(resultsChan)
            break
        }
    }
}()

for res := range resultsChan {
    userAlbums = append(userAlbums, res...)
}
I have a loop that hashes each URL with a given key and collects the result, but if I pass a list of 1500 URLs into the loop, it never returns 1500 results; it always returns fewer.
What am I doing wrong below?
if len(URLLists) > 0 {
    var base = "https://example.com/query?="
    var wg sync.WaitGroup
    var mutex = sync.Mutex{}

    wg.Add(len(URLLists))
    for _, url := range URLLists {
        // wg.Add(1) OR above for loop
        go func() {
            defer wg.Done()
            hmac := "HMAX_123"
            out := encoding.HexEncodeURL(hmac, url)
            final := base + out
            list := Lists{
                Old: url,
                New: final,
            }
            mutex.Lock()
            response.URL = append(response.URL, list)
            mutex.Unlock()
        }()
    }
    wg.Wait()

    jR, err := json.Marshal(response)
    if err != nil {
        w.Write([]byte(`{"success": false, "url" : ""}`))
    } else {
        w.Write(jR)
    }
    return
}
I tried both methods for Add: adding 1 inside the loop, and adding the total length before the loop.
I want the function to return all 1500 URLs in the list, not a random count like 700, 977, or 1123.
It looks like wg.Wait() is not waiting for everything added with wg.Add.
There are a couple of errors in this program:
You are using the loop variable inside the goroutine. The loop variable is rewritten at each iteration, so when the goroutine uses the url, it might have already moved on to the next URL, thus you end up with multiple goroutines hashing the same URL. To fix:
for _, url := range URLLists {
    url := url // Create a copy of the url
    // wg.Add(1) OR above for loop
You have a race condition. You have to protect access to response.URL because it is being written by multiple goroutines. You can use a mutex:
lock := sync.Mutex{}
for _, url := ...
    ...
    lock.Lock()
    response.URL = append(response.URL, list)
    lock.Unlock()
A better way would be to send these over a channel.
You have a pretty serious race condition here:
response.URL = append(response.URL, list)
If you're starting as many as 1500 concurrent goroutines, you will have many hundreds of them all attempting to execute this line at once, constantly overwriting each other's changes to the slice.
You need to protect the insertion of new data into this slice with a sync.Mutex, or send the results over a channel and have a single goroutine read from that channel and append to the list.
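For illustration, a sketch of that channel-based approach under the same assumptions as the question's snippet (URLLists, base, Lists, encoding.HexEncodeURL and response are all taken from that code):

results := make(chan Lists)

var wg sync.WaitGroup
for _, url := range URLLists {
    wg.Add(1)
    go func(url string) { // url passed as an argument: no shared loop variable
        defer wg.Done()
        out := encoding.HexEncodeURL("HMAX_123", url)
        results <- Lists{Old: url, New: base + out}
    }(url)
}

// Close the channel once every worker is done so the range below ends.
go func() {
    wg.Wait()
    close(results)
}()

// Single reader: only this goroutine appends, so no mutex is needed.
for list := range results {
    response.URL = append(response.URL, list)
}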
There are several places in this book that talk about deadlocks arising from incorrect use of goroutines and channels, and I fail to grasp why the deadlocks happen.
First, I know that channel sends and receives block until there is an item to read or room to send into, and maybe that's the underlying reason for some of the deadlocks. But please enlighten me with an explanation of the following excerpts from the book:
Page 240
This code crawls URLs concurrently, BFS-style:
func main() {
    worklist := make(chan []string)

    // Start with the command-line arguments.
    go func() { worklist <- os.Args[1:] }()

    // Crawl the web concurrently.
    seen := make(map[string]bool)
    for list := range worklist {
        for _, link := range list {
            if !seen[link] {
                seen[link] = true
                go func(link string) {
                    worklist <- crawl(link)
                }(link)
            }
        }
    }
}
Quoting the book's second paragraph:
...the initial send of the command-line arguments to the worklist must run in its own goroutine to avoid deadlock, a stuck situation in which both the main goroutine and a crawler goroutine attempt to send to each other while neither is receiving...
Suppose the initial send to worklist is not in its own goroutine; I imagine it would work like this:
1. The main goroutine sends the initial list to worklist and blocks until it is received.
2. The for range loop receives the initial item, unblocking the worklist channel.
3. The crawler goroutines send their items into worklist, and the loop continues...
So to my understanding, it won't block and deadlock. Where am I wrong?
UPDATE: @mkopriva helped me realize that since step 1 blocks, steps 2 and 3 are unreachable. So I'm clear on this one.
Page 243
This deadlock situation might be the same as the one on page 240:
func main() {
    worklist := make(chan []string)  // list of URLs, may have duplicates
    unseenLinks := make(chan string) // de-duplicated URLs

    // Add command-line arguments to worklist.
    go func() { worklist <- os.Args[1:] }()

    // Create 20 crawler goroutines to fetch each unseen link.
    for i := 0; i < 20; i++ {
        go func() {
            for link := range unseenLinks {
                foundLinks := crawl(link)
                go func() { worklist <- foundLinks }()
            }
        }()
    }

    // The main goroutine de-duplicates worklist items
    // and sends the unseen ones to the crawlers.
    seen := make(map[string]bool)
    for list := range worklist {
        for _, link := range list {
            if !seen[link] {
                seen[link] = true
                unseenLinks <- link
            }
        }
    }
}
So if I omit the go inside the for-range loop, how does the deadlock happen?
In the first snippet, the initial channel send needs to be done in a goroutine because without a goroutine the statement would block indefinitely and execution would never reach the loop that receives from that channel. In other words, to get from step 1 to step 2, step 1 needs to be done in a goroutine; if step 1 blocks, step 2 is never reached.
Where the comments start is where the execution stops:
func main() {
    worklist := make(chan []string)
    worklist <- os.Args[1:]
    //
    // seen := make(map[string]bool)
    // for list := range worklist {
    //     for _, link := range list {
    //         if !seen[link] {
    //             seen[link] = true
    //             go func(link string) {
    //                 worklist <- crawl(link)
    //             }(link)
    //         }
    //     }
    // }
// }
In the second snippet you have a for-range loop over a channel; such a loop will NOT exit until the ranged-over channel is closed. That means that, while the body of such a loop may continue to be executed, the code after the loop-over-an-unclosed-channel will never be reached.
https://golang.org/ref/spec#For_range
For channels, the iteration values produced are the successive values sent on the channel until the channel is closed. If the channel is nil, the range expression blocks forever.
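A minimal, self-contained illustration of that rule (not from the book): the range loop below only terminates because the sender closes the channel; remove the close and the program deadlocks at the range once both values have been received.

package main

import "fmt"

func main() {
    ch := make(chan int)

    go func() {
        ch <- 1
        ch <- 2
        // Without this close, the range below would never terminate and the
        // runtime would report "all goroutines are asleep - deadlock!".
        close(ch)
    }()

    for v := range ch {
        fmt.Println(v)
    }
    fmt.Println("only reached because ch was closed")
}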
Where the comments start is where the execution stops:
func main() {
    worklist := make(chan []string)
    unseenLinks := make(chan string)

    go func() { worklist <- os.Args[1:] }()

    for i := 0; i < 20; i++ {
        for link := range unseenLinks {
            // foundLinks := crawl(link)
            // go func() { worklist <- foundLinks }()
        // }
    // }
    //
    // seen := make(map[string]bool)
    // for list := range worklist {
    //     for _, link := range list {
    //         if !seen[link] {
    //             seen[link] = true
    //             unseenLinks <- link
    //         }
    //     }
    // }
// }
In the second snippet, I think the author is talking about the goroutine below:
go func() { worklist <- foundLinks }()
Links found by crawl are sent to the worklist from a dedicated goroutine to avoid deadlock.
Why must this go exist?
I am trying to parallelize a recursive problem in Go, and I am unsure what the best way to do this is.
I have a recursive function, which works like this:
func recFunc(input string) (result []string) {
    for subInput := range getSubInputs(input) {
        subOutput := recFunc(subInput)
        result = append(result, subOutput...)
    }
    result = append(result, getOutput(input)...)
    return result
}

func main() {
    output := recFunc("some_input")
    ...
}
So the function calls itself N times (where N is 0 at some level), generates its own output and returns everything in a list.
Now I want to make this function run in parallel. But I am unsure what the cleanest way to do this is. My Idea:
Have a "result" channel, to which all function calls send their result.
Collect the results in the main function.
Have a wait group, which determines when all results are collected.
The Problem: I need to wait for the wait group and collect all results in parallel. I can start a separate go function for this, but how do I ever quit this separate go function?
func recFunc(input string, outputChannel chan []string, waitGroup *sync.WaitGroup) {
    defer waitGroup.Done()
    waitGroup.Add(len(getSubInputs(input)))
    for subInput := range getSubInputs(input) {
        go recFunc(subInput, outputChannel, waitGroup)
    }
    outputChannel <- getOutput(input)
}

func main() {
    outputChannel := make(chan []string)
    waitGroup := sync.WaitGroup{}
    waitGroup.Add(1)
    go recFunc("some_input", outputChannel, &waitGroup)

    result := []string{}
    go func() {
        nextResult := <-outputChannel
        result = append(result, nextResult...)
    }()

    waitGroup.Wait()
}
Maybe there is a better way to do this? Or how can I ensure that the anonymous go function that collects the results exits when it is done?
tl;dr;
recursive algorithms should have bounded limits on expensive resources (network connections, goroutines, stack space etc.)
cancelation should be supported - to ensure expensive operations can be cleaned up quickly if a result is no longer needed
branch traversal should support error reporting; this allows errors to bubble up the stack and partial results to be returned without the entire recursive traversal failing.
For asynchronous results - whether using recursion or not - channels are recommended. Also, for long-running jobs with many goroutines, provide a means of cancelation (context.Context) to aid with clean-up.
Since recursion can lead to exponential consumption of resources it's important to put limits in place (see bounded parallelism).
Below is a design pattern I use a lot for asynchronous tasks:
always support taking a context.Context for cancelation
number of workers needed for the task
return a chan of results & a chan error (will only return one error or nil)
var (
    workers = 10
    ctx     = context.TODO() // use request context here - otherwise context.Background()
    input   = "abc"
)

resultC, errC := recJob(ctx, workers, input) // returns results & `error` channels

// asynchronous results - so read that channel first in the event of partial results ...
for r := range resultC {
    fmt.Println(r)
}

// ... then check for any errors
if err := <-errC; err != nil {
    log.Fatal(err)
}
Recursion:
Since recursion quickly scales horizontally, one needs a consistent way to fill the finite set of workers with work, but also to ensure that when workers free up, they quickly pick up work from other (over-worked) workers.
Rather than create a manager layer, employ a cooperative peer system of workers:
each worker shares a single inputs channel
before recursing on inputs (subInputs), check whether any other workers are idle
if so, delegate to that worker
if not, current worker continues recursing that branch
With this algorithm, the finite pool of workers quickly becomes saturated with work. Any worker that finishes its branch early is quickly delegated a sub-branch from another worker. Eventually all workers run out of sub-branches, at which point all workers are idle (blocked) and the recursion task can finish up.
Some careful coordination is needed to achieve this. Allowing the workers to write to the input channel helps with this peer coordination via delegation. A "recursion depth" WaitGroup is used to track when all branches have been exhausted across all workers.
(To include context support and error chaining - I updated your getSubInputs function to take a ctx and return an optional error):
func recFunc(ctx context.Context, input string, in chan string, out chan<- string, rwg *sync.WaitGroup) error {
    defer rwg.Done() // decrement recursion count when a depth of recursion has completed

    subInputs, err := getSubInputs(ctx, input)
    if err != nil {
        return err
    }

    for subInput := range subInputs {
        rwg.Add(1) // about to recurse (or delegate recursion)

        select {
        case in <- subInput:
            // delegated - to another goroutine
        case <-ctx.Done():
            // context canceled...
            // but first we need to undo the earlier `rwg.Add(1)`
            // as this work item was never delegated or handled by this worker
            rwg.Done()
            return ctx.Err()
        default:
            // no one available to delegate - so this worker will need to recurse this item themselves
            err = recFunc(ctx, subInput, in, out, rwg)
            if err != nil {
                return err
            }
        }

        select {
        case <-ctx.Done():
            // always check context when doing anything potentially blocking (in this case writing to `out`)
            // context canceled
            return ctx.Err()
        case out <- subInput:
        }
    }
    return nil
}
Connecting the Pieces:
recJob creates:
input & output channels - shared by all workers
"recursion" WaitGroup detects when all workers are idle
"output" channel can then safely be closed
error channel for all workers
and kicks off the recursion workload by writing the initial input to the input channel
func recJob(ctx context.Context, workers int, input string) (resultsC <-chan string, errC <-chan error) {
    // RW channels
    out := make(chan string)
    eC := make(chan error, 1)

    // R-only channels returned to caller
    resultsC, errC = out, eC

    // create workers + waitgroup logic
    go func() {
        var err error // error that will be returned to call via error channel

        defer func() {
            close(out)
            eC <- err
            close(eC)
        }()

        var wg sync.WaitGroup
        wg.Add(1)
        in := make(chan string) // input channel: shared by all workers (to read from and also to write to when they need to delegate)
        workerErrC := createWorkers(ctx, workers, in, out, &wg)

        // get the ball rolling, pass input job to one of the workers
        // Note: must be done *after* workers are created - otherwise deadlock
        in <- input

        errCount := 0
        // wait for all worker error codes to return
        for err2 := range workerErrC {
            if err2 != nil {
                log.Println("worker error:", err2)
                errCount++
            }
        }

        // all workers have completed
        if errCount > 0 {
            err = fmt.Errorf("PARTIAL RESULT: %d of %d workers encountered errors", errCount, workers)
            return
        }
        log.Printf("All %d workers have FINISHED\n", workers)
    }()
    return
}
Finally, create the workers:
func createWorkers(ctx context.Context, workers int, in chan string, out chan<- string, rwg *sync.WaitGroup) (errC <-chan error) {
    eC := make(chan error) // RW-version
    errC = eC              // RO-version (returned to caller)

    // track the completeness of the workers - so we know when to wrap up
    var wg sync.WaitGroup
    wg.Add(workers)

    for i := 0; i < workers; i++ {
        i := i
        go func() {
            defer wg.Done()

            var err error

            // ensure the current worker's return code gets returned
            // via the common workers' error-channel
            defer func() {
                if err != nil {
                    log.Printf("worker #%3d ERRORED: %s\n", i+1, err)
                } else {
                    log.Printf("worker #%3d FINISHED.\n", i+1)
                }
                eC <- err
            }()

            log.Printf("worker #%3d STARTED successfully\n", i+1)

            // worker scans for input
            for input := range in {
                err = recFunc(ctx, input, in, out, rwg)
                if err != nil {
                    log.Printf("worker #%3d recurseManagers ERROR: %s\n", i+1, err)
                    return
                }
            }
        }()
    }

    go func() {
        rwg.Wait() // wait for all recursion to finish
        close(in)  // safe to close input channel as all workers are blocked (i.e. no new inputs)
        wg.Wait()  // now wait for all workers to return
        close(eC)  // finally, signal to caller we're truly done by closing workers' error-channel
    }()

    return
}
I can start a separate go function for this, but how do I ever quit this separate go function?
You can range over the output channel in the separate goroutine. The goroutine, in that case, will exit safely when the channel is closed:
go func() {
    for nextResult := range outputChannel {
        result = append(result, nextResult...)
    }
}()
So now the thing we need to take care of is that the channel is closed after all the goroutines spawned by the recursive function calls have successfully exited.
For that, you can use a shared WaitGroup across all the goroutines and wait on it in your main function, as you are already doing. Once the wait is over, close outputChannel so that the collector goroutine also exits safely:
func recFunc(input string, outputChannel chan []string, wg *sync.WaitGroup) {
    defer wg.Done()
    for subInput := range getSubInputs(input) {
        wg.Add(1)
        go recFunc(subInput, outputChannel, wg)
    }
    outputChannel <- getOutput(input)
}

func main() {
    outputChannel := make(chan []string)
    waitGroup := sync.WaitGroup{}
    waitGroup.Add(1)
    go recFunc("some_input", outputChannel, &waitGroup)

    result := []string{}
    go func() {
        for nextResult := range outputChannel {
            result = append(result, nextResult...)
        }
    }()

    waitGroup.Wait()
    close(outputChannel)
}
PS: If you want to have bounded parallelism to limit the exponential growth, check this out
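As a rough sketch of one way to bound the parallelism of the code above (just one option, not the linked approach): a buffered channel used as a semaphore caps how many recursive calls run in their own goroutines, and when no slot is free the call simply recurses in the current goroutine. getSubInputs and getOutput are the question's functions; getSubInputs is assumed to be something you can range over to get strings (e.g. a channel), as in the snippets above.

var sem = make(chan struct{}, 8) // at most 8 extra goroutines; pick a limit that suits you

func recFuncBounded(input string, outputChannel chan []string, wg *sync.WaitGroup) {
    defer wg.Done()
    for subInput := range getSubInputs(input) {
        wg.Add(1)
        select {
        case sem <- struct{}{}: // a slot is free: recurse concurrently
            go func(in string) {
                defer func() { <-sem }() // give the slot back when this branch is done
                recFuncBounded(in, outputChannel, wg)
            }(subInput)
        default: // no slot free: recurse in the current goroutine instead
            recFuncBounded(subInput, outputChannel, wg)
        }
    }
    outputChannel <- getOutput(input)
}

main stays as in the answer above; only the initial go recFunc(...) call changes to go recFuncBounded(...).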
I am learning Golang concurrency and have written a program to display URLs in order. I expect the code to print

http://bing.com/*****
http://google.com/*****

But it always prints http://google.com/***** for both, as if the variable is being overwritten. Since I am using goroutines, I would expect it to return both values at the same time.
func check(u string) string {
    tmpres := u + "*****"
    return tmpres
}

func IsReachable(url string) string {
    ch := make(chan string, 1)
    go func() {
        ch <- check(url)
    }()
    select {
    case reachable := <-ch:
        // use err and reply
        return reachable
    case <-time.After(3 * time.Second):
        // call timed out
        return "none"
    }
}

func main() {
    var urls = []string{
        "http://bing.com/",
        "http://google.com/",
    }
    for _, url := range urls {
        go func() {
            fmt.Println(IsReachable(url))
        }()
    }
    time.Sleep(1 * time.Second)
}
Two problems. First, you've created a race condition. By closing over the loop variable, you're sharing it between the thread running the loop and the thread running the goroutine, which is causing your described problem: by the time the goroutine that was started for the first URL tries to run, the value of the variable has changed. You need to either copy it to a local variable, or pass it as an argument, e.g.:
for _, url := range urls {
    go func(url string) {
        fmt.Println(IsReachable(url))
    }(url)
}
Second, you said you wanted to display them "in order", which is not a goal generally compatible with concurrency/parallelism, because you cannot control the order of parallel operations. If you want them in order, you should do them in order in a single thread. Otherwise, you'll have to collect the results, wait for them all to come back, and then sort the results back into the desired order before printing them.
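A small sketch of that collect-then-order approach, reusing IsReachable from the question (so not fully self-contained): each goroutine writes to its own slot in a pre-sized slice, which is safe without a mutex and preserves the original order.

func main() {
    urls := []string{
        "http://bing.com/",
        "http://google.com/",
    }
    results := make([]string, len(urls)) // one slot per URL keeps the original order
    var wg sync.WaitGroup
    for i, url := range urls {
        wg.Add(1)
        go func(i int, url string) { // pass copies to avoid the loop-variable race
            defer wg.Done()
            results[i] = IsReachable(url)
        }(i, url)
    }
    wg.Wait() // wait for every check instead of sleeping
    for _, r := range results {
        fmt.Println(r)
    }
}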