I need some help understanding how to use goroutines in this problem. I will post only some snippets of code, but if you want to take a deeper look you can check it out here.
Basically, I have a distributor function which receives a request slice and is called many times; each time it is called, it must distribute the request among other functions that actually resolve it. What I'm trying to do is create a channel and launch the resolving function on a new goroutine, so the program can handle requests concurrently.
How the distribute function is called:
// Run triggers the system to start receiving requests
func Run() {
    // Since the program starts here, let's make a channel to receive requests
    requestCh := make(chan []string)
    idCh := make(chan string)
    // If you want to play with us you need to register your Sender here
    go publisher.Sender(requestCh)
    go makeID(idCh)
    // Our request pool
    for request := range requestCh {
        // add ID
        request = append(request, <-idCh)
        // distribute
        distributor(request)
    }
    // PROBLEM
    for result := range resultCh {
        fmt.Println(result)
    }
}
The distributor function itself:
// Distribute requests to respective channels.
// No waiting in line. Everybody gets its own goroutine!
func distributor(request []string) {
    switch request[0] {
    case "sum":
        arithCh := make(chan []string)
        go arithmetic.Exec(arithCh, resultCh)
        arithCh <- request
    case "sub":
        arithCh := make(chan []string)
        go arithmetic.Exec(arithCh, resultCh)
        arithCh <- request
    case "mult":
        arithCh := make(chan []string)
        go arithmetic.Exec(arithCh, resultCh)
        arithCh <- request
    case "div":
        arithCh := make(chan []string)
        go arithmetic.Exec(arithCh, resultCh)
        arithCh <- request
    case "fibonacci":
        fibCh := make(chan []string)
        go fibonacci.Exec(fibCh, resultCh)
        fibCh <- request
    case "reverse":
        revCh := make(chan []string)
        go reverse.Exec(revCh, resultCh)
        revCh <- request
    case "encode":
        encCh := make(chan []string)
        go encode.Exec(encCh, resultCh)
        encCh <- request
    }
}
And the fibonacci.Exec function, to illustrate how I'm calculating the Fibonacci number for a request received on fibCh and sending the result through resultCh:
func Exec(fibCh chan []string, result chan map[string]string) {
    fib := parse(<-fibCh)
    nthFibonacci(fib)
    result <- fib
}
So far, at the Run function, when I range over resultCh I get the results but also a deadlock. But why? Also, I imagine I should use a WaitGroup to wait for the goroutines to finish, but I'm not sure how to implement that, since I'm expecting to receive a continuous stream of requests. I would appreciate some help understanding what I'm doing wrong here and a way to solve it.
I'm not digging into the implementation details of your application, but from how it sounds, you can use the workers pattern.
With the workers pattern, multiple goroutines read from a single channel, distributing the work between CPU cores, hence the name. In Go, this pattern is easy to implement: start a number of goroutines with the channel as a parameter, and send values to that channel; the distributing and multiplexing is done by the Go runtime, automagically.
Here is a simple implementation of the workers pattern:
package main

import (
    "fmt"
    "sync"
    "time"
)

func worker(tasksCh <-chan int, wg *sync.WaitGroup) {
    defer wg.Done()
    for {
        task, ok := <-tasksCh
        if !ok {
            return
        }
        d := time.Duration(task) * time.Millisecond
        time.Sleep(d)
        fmt.Println("processing task", task)
    }
}

func pool(wg *sync.WaitGroup, workers, tasks int) {
    tasksCh := make(chan int)
    for i := 0; i < workers; i++ {
        go worker(tasksCh, wg)
    }
    for i := 0; i < tasks; i++ {
        tasksCh <- i
    }
    close(tasksCh)
}

func main() {
    var wg sync.WaitGroup
    wg.Add(36)
    go pool(&wg, 36, 50)
    wg.Wait()
}
Another useful resource on how you can use a WaitGroup to wait for all the goroutines to finish executing before continuing (and hence avoid a deadlock) is this nice article:
http://nathanleclaire.com/blog/2014/02/15/how-to-wait-for-all-goroutines-to-finish-executing-before-continuing/
And a very basic implementation of it:
Go playground
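In outline, such a basic implementation looks something like the following minimal sketch (illustrative; the exact playground code may differ):

package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 3; i++ {
        wg.Add(1) // count the goroutine before starting it
        go func(id int) {
            defer wg.Done() // signal completion
            fmt.Println("goroutine", id, "done")
        }(i)
    }
    wg.Wait() // block until every goroutine has called Done
    fmt.Println("all goroutines finished")
}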
If you do not want to change the implementation to use the workers pattern, it may be a good idea to use another channel to signal the end of goroutine execution, because a deadlock happens when there is no receiver to accept a message sent on an unbuffered channel.
done := make(chan bool)
//.....
done <- true //Tell the main function everything is done.
So when the goroutine finishes, it sends true on the channel, and receiving that value tells the main function the execution is complete.
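Applied to a single worker, that looks roughly like this minimal sketch (the process function and the request value are illustrative, not from the original code):

package main

import "fmt"

func process(request string, done chan bool) {
    fmt.Println("processing", request)
    done <- true // tell the main function everything is done
}

func main() {
    done := make(chan bool)
    go process("fibonacci", done)
    <-done // block until the goroutine signals completion
    fmt.Println("worker finished")
}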
I am trying to wrap my mind around Go. I want to make a simple program that basically:
starts a bunch of goroutines
processes messages
sends the processed results to a channel
has the main thread collect these results
shuts down.
Seems simple. I started with no logic at all: I just send a number and try to get that number back.
Issue: I'm deadlocking and I'm not sure why. I think I might be misusing wait groups with channels, because they work individually, but I'm not sure how to get the main thread to block on an arbitrary number of initiated goroutines.
package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    queue := make(chan int)
    start := time.Now()
    var wg sync.WaitGroup
    for i := 0; i < 10; i += 1 {
        wg.Add(1)
        go count(i, queue, &wg)
    }
    wg.Wait()
    for value := range queue {
        println(value)
    }
    close(queue)
    fmt.Println(time.Now().Sub(start))
    // fmt.Println(summation)
}

func count(number int, queue chan int, wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Println("Starting ", number)
    queue <- number
    fmt.Println("ending")
}
Your goroutines block on queue <- number because queue is an unbuffered channel and nobody is reading from it, as main blocks on wg.Wait.
Declare queue as a buffered channel instead. For example: queue := make(chan int, 10)
From the Go Tour (concurrency) and subsequent page:
By default, sends and receives block until the other side is ready. This allows goroutines to synchronize without explicit locks or condition variables.
Sends to a buffered channel block only when the buffer is full. Receives block when the buffer is empty.
Alternatively, keep the channel unbuffered and run wg.Wait() followed by close(queue) in a separate goroutine, then range over queue in main (see the answers below). Either way, the channel must be closed before the for value := range queue loop can terminate.
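Putting the buffered-channel fix together, the corrected program might look like this sketch (note that close(queue) now happens before the range loop, so the loop can terminate; the prints are dropped for brevity):

package main

import (
    "fmt"
    "sync"
    "time"
)

func count(number int, queue chan int, wg *sync.WaitGroup) {
    defer wg.Done()
    queue <- number // buffered channel: this send no longer blocks
}

func main() {
    queue := make(chan int, 10) // buffer has room for all 10 sends
    start := time.Now()
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go count(i, queue, &wg)
    }
    wg.Wait()    // all sends have completed
    close(queue) // close before ranging so the loop terminates
    for value := range queue {
        fmt.Println(value)
    }
    fmt.Println(time.Since(start))
}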
This should help.
package main

import (
    "fmt"
    "sync"
    "time"
)

type event struct {
    data      chan string
    numWorker int
}

func (e event) Send() {
    var wg sync.WaitGroup
    // Spawn numWorker goroutines that send messages to
    // the same channel.
    for i := 0; i < e.numWorker; i++ {
        wg.Add(1)
        go func(id int) {
            // Do some fake work
            time.Sleep(1 * time.Second)
            e.data <- fmt.Sprintf("message from go #%d", id)
            wg.Done()
        }(i)
    }
    // Wait for goroutines to finish their work.
    wg.Wait()
    // Close the channel to signal Recv to stop ranging
    // over the channel.
    close(e.data)
}

func (e event) Recv() {
    // Range over the data channel to receive message(s).
    for msg := range e.data {
        fmt.Println(msg)
    }
}

func main() {
    e := event{
        numWorker: 10, // Number of worker goroutine(s)
        data:      make(chan string, 5 /* Buffer Size */),
    }
    // Spawn a goroutine for Send
    go e.Send()
    // Recv receives data from Send
    e.Recv()
}
To avoid deadlocking, you can manage the channel and the wait group in a separate goroutine. Try changing this:
wg.Wait()
for value := range queue {
    println(value)
}
close(queue)
with this:
go func() {
    wg.Wait()
    close(queue)
}()
for value := range queue {
    println(value)
}
So, I'm new to the language and I know that the usual way of waiting for multiple workers to finish is by using a WaitGroup. However, I'm not sure why my code below, which uses channels for this purpose, is causing a deadlock. The simplified version is as follows:
func Crawl(url string, depth int, fetcher Fetcher, done chan bool) {
    body, urls, err := fetcher.Fetch(url)
    // do something with fetched data...
    // create a separate channel to wait for all workers to finish
    done_2 := make(chan bool)
    // create workers
    for _, u := range urls {
        go Crawl(u, depth-1, fetcher, done_2)
    }
    // wait for all workers to write to the channel, which indicates their completion
    for i := 0; i < len(urls); i++ {
        fmt.Printf("On URL %v, iteration %v\n", url, i)
        <-done_2
    }
    // indicate the completion of the current worker
    done <- true
}

func main() {
    done := make(chan bool)
    go Crawl("https://golang.org/", 4, fetcher, done)
    <-done
}
The program gives the desired output, but instead of exiting after that, it enters a deadlock.
I'm trying to build a generic pipeline library using worker pools. I created an interface for a source, pipe, and sink. The pipe's job is to receive data from an input channel, process it, and output the result onto a channel. Here is its intended behavior:
Receive data from an input channel.
Delegate the data to an available worker.
The worker sends the result to the output channel.
Close the output channel once all workers are finished.
func (p *pipe) Process(in chan interface{}) (out chan interface{}) {
    var wg sync.WaitGroup
    out = make(chan interface{}, 100)
    go func() {
        for i := 1; i <= 100; i++ {
            go p.work(in, out, &wg)
        }
        wg.Wait()
        close(out)
    }()
    return
}
func (p *pipe) work(jobs <-chan interface{}, out chan<- interface{}, wg *sync.WaitGroup) {
    for j := range jobs {
        func(j Job) {
            defer wg.Done()
            wg.Add(1)
            res := doSomethingWith(j)
            out <- res
        }(j)
    }
}
However, running it may either exit without processing all of the inputs, or panic with a send on closed channel message. Building the source with the -race flag gives a data race warning between close(out) and out <- res.
Here's what I think might be happening. Once a number of workers have finished their jobs, there's a split second where wg's counter reaches zero. Hence, wg.Wait() is done and the program proceeds to close(out). Meanwhile, the jobs channel isn't finished producing data, meaning some workers are still running in other goroutines. Since the out channel is already closed, this results in a panic.
Should the wait group be placed somewhere else? Or is there a better way to wait for all workers to finish?
It's not clear why you want one worker per job, but if you do, you can restructure your outer loop setup (see untested code below). This kind of obviates the need for worker pools in the first place.
Always, though, do a wg.Add before spinning off any worker. Right here, you are spinning off exactly 100 workers:
var wg sync.WaitGroup
out = make(chan interface{}, 100)
go func() {
    for i := 1; i <= 100; i++ {
        go p.work(in, out, &wg)
    }
    wg.Wait()
    close(out)
}()
You could therefore do this:
var wg sync.WaitGroup
out = make(chan interface{}, 100)
go func() {
    wg.Add(100) // ADDED - count the 100 workers
    for i := 1; i <= 100; i++ {
        go p.work(in, out, &wg)
    }
    wg.Wait()
    close(out)
}()
Note that you can now move wg itself down into the goroutine that spins off the workers. This can make things cleaner, if you give up on the notion of having each worker spin off jobs as new goroutines. But if each worker is going to spin off another goroutine, that worker itself must also use wg.Add, like this:
for j := range jobs {
    wg.Add(1) // ADDED - count the spun-off goroutines
    go func(j Job) {
        res := doSomethingWith(j)
        out <- res
        wg.Done() // MOVED (for illustration only, can defer as before)
    }(j)
}
wg.Done() // ADDED - our work in `p.work` is now done
That is, each anonymous function is another user of the channel, so increment the users-of-channel count (wg.Add(1)) before spinning off a new goroutine. When you have finished reading the input channel jobs, call wg.Done() (perhaps via an earlier defer, but I showed it at the end here).
The key to thinking about this is that wg counts the number of active goroutines that could, at this point, write to the channel. It only goes to zero when no goroutines intend to write any more. That makes it safe to close the channel.
Consider using the rather simpler (but untested):
func (p *pipe) Process(in chan interface{}) (out chan interface{}) {
    out = make(chan interface{})
    var wg sync.WaitGroup
    go func() {
        defer close(out)
        for j := range in {
            wg.Add(1)
            go func(j Job) {
                res := doSomethingWith(j)
                out <- res
                wg.Done()
            }(j)
        }
        wg.Wait()
    }()
    return out
}
You now have one goroutine that is reading the in channel as fast as it can, spinning off jobs as it goes. You'll get one goroutine per incoming job, except when they finish their work early. There is no pool, just one worker per job (same as your code except that we knock out the pools that aren't doing anything useful).
Or, since there are only some number of CPUs available, spin off some number of goroutines as you did before at the start, but have each one run one job to completion, and deliver its result, then go back to reading the next job:
func (p *pipe) Process(in chan interface{}) (out chan interface{}) {
    out = make(chan interface{})
    go func() {
        defer close(out)
        var wg sync.WaitGroup
        ncpu := runtime.NumCPU() // or something fancier if you like
        wg.Add(ncpu)
        for i := 0; i < ncpu; i++ {
            go func() {
                defer wg.Done()
                for j := range in {
                    out <- doSomethingWith(j)
                }
            }()
        }
        wg.Wait()
    }()
    return out
}
By using runtime.NumCPU() we get only as many workers reading jobs as there are CPUs to run jobs. Those are the pools and they only do one job at a time.
There's generally no need to buffer the output channel, if the output-channel readers are well-structured (i.e., don't cause the pipeline to constipate). If they're not, the depth of buffering here limits how many jobs you can "work ahead" of whoever is consuming the results. Set it based on how useful it is to do this "working ahead"—not necessarily the number of CPUs, or the number of expected jobs, or whatever.
It's possible that the jobs are being completed just as fast as they're being sent. In this case the WaitGroup will be floating near zero even while there are many more items to process.
One fix for this is to count the workers themselves rather than the individual jobs: call wg.Add(1) before starting each worker, and have each worker call wg.Done() only once the jobs channel has been drained. That way the counter can't reach zero while any worker is still busy:
func (p *pipe) Process(in chan interface{}) (out chan interface{}) {
    var wg sync.WaitGroup
    out = make(chan interface{}, 100)
    go func() {
        for i := 1; i <= 100; i++ {
            wg.Add(1)
            go p.work(in, out, &wg)
        }
        wg.Wait()
        close(out)
    }()
    return
}
func (p *pipe) work(jobs <-chan interface{}, out chan<- interface{}, wg *sync.WaitGroup) {
    defer wg.Done() // one Done per worker, matching the Add(1) per worker above
    for j := range jobs {
        out <- doSomethingWith(j)
    }
}
One thing I notice in the original code is that a goroutine is started for each job, while at the same time each worker already processes the jobs channel in a loop until it's empty/closed. It doesn't seem necessary to do both.
I've just installed Go on a Mac, and here's the code:
package main

import (
    "fmt"
    "time"
)

func Product(ch chan<- int) {
    for i := 0; i < 100; i++ {
        fmt.Println("Product:", i)
        ch <- i
    }
}

func Consumer(ch <-chan int) {
    for i := 0; i < 100; i++ {
        a := <-ch
        fmt.Println("Consumer:", a)
    }
}

func main() {
    ch := make(chan int, 1)
    go Product(ch)
    go Consumer(ch)
    time.Sleep(500)
}
I "go run producer_consumer.go", there's no output on screen, and then it quits.
Any problem with my program ? How to fix it ?
This is a rather verbose answer, but to put it simply:
Using time.Sleep to wait until, hopefully, other routines have completed their jobs is bad.
The consumer and producer shouldn't know anything about each other, apart from the type they exchange over the channel. Your code relies on both consumer and producer knowing how many ints will be passed around. Not a realistic scenario.
Channels can be iterated over (think of them as a thread-safe, shared slice).
Channels should be closed.
At the bottom of this rather verbose answer, where I attempt to explain some basic concepts and best practices (well, better practices), you'll find your code rewritten to work and display all the values without relying on time.Sleep. I've not tested that code, but it should be fine.
Right, there are a couple of problems here. As a bullet list:
Your channel is buffered to 1, which is fine, but it's not necessary.
Your channel is never closed.
You're waiting 500 ns, then exiting regardless of whether the routines have completed, or even started processing for that matter.
There's no centralised control over the routines; once you've started them, you have zero control. If you're writing code that handles important data and someone hits ctrl+c, you might want to cancel the routines. Check out signal handling, and context, for this; a small sketch follows this list.
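A minimal sketch of the kind of context-based cancellation hinted at above, assuming a single worker (the names and the stopped channel are purely illustrative):

package main

import (
    "context"
    "fmt"
)

func worker(ctx context.Context, ch <-chan int, stopped chan<- struct{}) {
    defer close(stopped)
    for {
        select {
        case <-ctx.Done():
            fmt.Println("worker cancelled:", ctx.Err())
            return
        case v := <-ch:
            fmt.Println("got", v)
        }
    }
}

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    ch := make(chan int)
    stopped := make(chan struct{})
    go worker(ctx, ch, stopped)
    ch <- 1
    cancel()  // in real code, wire this to SIGINT, e.g. via signal.NotifyContext
    <-stopped // wait for the worker to acknowledge cancellation
}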
Channel buffer
Seeing as you already know how many values you're going to push onto your channel, why not simply create ch := make(chan int, 100)? That way your publisher can continue to push messages onto the channel, regardless of what the consumer does.
You don't need to do this, but adding a sensible buffer to your channel, depending on what you're trying to do, is definitely worth checking out. At the moment, though, both routines are using fmt.Println & co, which is going to be a bottleneck either way. Printing to STDOUT is thread-safe, and buffered. This means that each call to fmt.Print* is going to acquire a lock, to avoid text from both routines being combined.
Closing the channel
You could simply push all the values onto your channel, and then close it. This is, however, bad form. The rule of thumb WRT channels is that channels are created and closed in the same routine. Meaning: you're creating the channel in the main routine, so that's where it should be closed.
You need a mechanism to sync up, or at least keep tabs on whether or not your routines have completed their job. That's done using the sync package, or through a second channel.
// using a done channel
func produce(ch chan<- int) <-chan struct{} {
    done := make(chan struct{})
    go func() {
        for i := 0; i < 100; i++ {
            ch <- i
        }
        // all values have been published
        // close done channel
        close(done)
    }()
    return done
}

func main() {
    ch := make(chan int, 1)
    done := produce(ch)
    go consume(ch)
    <-done    // if producer has done its thing
    close(ch) // we can close the channel
}

func consume(ch <-chan int) {
    // we can now simply loop over the channel until it's closed
    for i := range ch {
        fmt.Printf("Consumed %d\n", i)
    }
}
OK, but here you'll still need to wait for the consume routine to complete.
You may have already noticed that the done channel technically isn't closed in the same routine that creates it either. Because the routine is defined as a closure, however, this is an acceptable compromise. Now let's see how we could use a waitgroup:
import (
    "sync"
)

func produce(wg *sync.WaitGroup, ch chan<- int) {
    defer wg.Done() // signal we've done our job
    for i := 0; i < 100; i++ {
        ch <- i
    }
}

func main() {
    ch := make(chan int, 100) // buffered so produce can finish without a consumer
    wg := sync.WaitGroup{}
    wg.Add(1) // I'm adding a routine to the waitgroup
    go produce(&wg, ch)
    wg.Wait() // will return once `produce` has finished
    close(ch)
}
OK, so this looks promising, I can have the routines tell me when they've finished their tasks. But if I add both consumer and producer to the waitgroup, I can't simply iterate over the channel. The channel will only ever get closed if both routines invoke wg.Done(), but if the consumer is stuck looping over a channel that'll never get closed, then I've created a deadlock.
Solution:
A hybrid would be the easiest solution at this point: Add the consumer to a waitgroup, and use the done channel in the producer to get:
func produce(ch chan<- int) <-chan struct{} {
    done := make(chan struct{})
    go func() {
        for i := 0; i < 100; i++ {
            ch <- i
        }
        close(done)
    }()
    return done
}

func consume(wg *sync.WaitGroup, ch <-chan int) {
    defer wg.Done()
    for i := range ch {
        fmt.Printf("Consumer: %d\n", i)
    }
}

func main() {
    ch := make(chan int, 1)
    wg := sync.WaitGroup{}
    done := produce(ch)
    wg.Add(1)
    go consume(&wg, ch)
    <-done // produce done
    close(ch)
    wg.Wait()
    // consumer done
    fmt.Println("All done, exit")
}
I have changed your code slightly (expanded the time.Sleep). It works fine on my Linux x86_64:
func Product(ch chan<- int) {
    for i := 0; i < 10; i++ {
        fmt.Println("Product:", i)
        ch <- i
    }
}

func Consumer(ch <-chan int) {
    for i := 0; i < 10; i++ {
        a := <-ch
        fmt.Println("Consumer:", a)
    }
}

func main() {
    ch := make(chan int, 1)
    go Product(ch)
    go Consumer(ch)
    time.Sleep(10000)
}
Output
go run s1.go
Product: 0
Product: 1
Product: 2
As JimB hinted at, time.Sleep takes a time.Duration, not an integer. The godoc shows an example of how to call this correctly. In your case, you probably want:
time.Sleep(500 * time.Millisecond)
The reason that your program is exiting quickly (but not giving you an error) is due to the (somewhat surprising) way that time.Duration is implemented.
time.Duration is simply a named type whose underlying type is int64. Internally, it uses the value to represent the duration in nanoseconds. When you call time.Sleep(500), the compiler will gladly interpret the untyped constant 500 as a time.Duration. Unfortunately, that means 500 ns.
time.Millisecond is a constant equal to the number of nanoseconds in a millisecond (1,000,000). The nice thing is that requiring you to do that multiplication explicitly makes it obvious to the caller what the units of the argument are. Unfortunately, time.Sleep(500) is perfectly valid Go code, but doesn't do what most beginners would expect.
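A quick way to see the difference for yourself (a small illustrative snippet):

package main

import (
    "fmt"
    "time"
)

func main() {
    fmt.Println(time.Duration(500))     // prints "500ns"
    fmt.Println(500 * time.Millisecond) // prints "500ms"
}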
I am implementing a web crawler and I have a Parse function that takes a link as input and should return all links contained in the page.
I would like to make the most of goroutines to make it as fast as possible. To do so, I want to create a pool of workers.
I set up a channel of strings representing the links, links := make(chan string), and pass it as an argument to the Parse function. I want the workers to communicate through this single channel. When the function starts, it takes a link from links, parses it, and, for each valid link found in the page, adds the link to links:
func Parse(links chan string) {
    l := <-links
    // If link already parsed, return
    for _, url := range newUrlFounds {
        links <- url
    }
}
However, the main issue here is indicating when no more links have been found. One way I thought of doing it was to wait until all workers have completed. But I don't know how to do that in Go.
As Tim already commented, don't use the same channel for reading and writing in a worker. This will deadlock eventually (even if buffered, because Murphy).
A far simpler design is simply launching one goroutine per URL. A buffered channel can serve as a simple semaphore to limit the number of concurrent parsers (goroutines that don't do anything because they are blocked are usually negligible). Use a sync.WaitGroup to wait until all work is done.
package main

import (
    "sync"
)

func main() {
    sem := make(chan struct{}, 10) // allow ten concurrent parsers
    wg := &sync.WaitGroup{}
    wg.Add(1)
    Parse("http://example.com", sem, wg)
    wg.Wait()
    // all done
}

func Parse(u string, sem chan struct{}, wg *sync.WaitGroup) {
    defer wg.Done()
    sem <- struct{}{}        // grab
    defer func() { <-sem }() // release
    // If URL already parsed, return.
    var newURLs []string
    // ...
    for _, u := range newURLs {
        wg.Add(1)
        go Parse(u, sem, wg)
    }
}