Suppose there are at most two elements (worker addresses) on registerChan at any point. Then, for some reason, the following code never calls wg.Done() in the last two goroutines.
func schedule(jobName string, mapFiles []string, nReduce int, phase jobPhase, registerChan chan string) {
    var ntasks int
    var nOther int // number of inputs (for reduce) or outputs (for map)
    switch phase {
    case mapPhase:
        ntasks = len(mapFiles)
        nOther = nReduce
    case reducePhase:
        ntasks = nReduce
        nOther = len(mapFiles)
    }
    fmt.Printf("Schedule: %v %v tasks (%d I/Os)\n", ntasks, phase, nOther)

    const rpcname = "Worker.DoTask"
    var wg sync.WaitGroup
    for taskNumber := 0; taskNumber < ntasks; taskNumber++ {
        file := mapFiles[taskNumber%len(mapFiles)]
        taskArgs := DoTaskArgs{jobName, file, phase, taskNumber, nOther}
        wg.Add(1)
        go func(taskArgs DoTaskArgs) {
            workerAddr := <-registerChan
            print("hello\n")
            // _ = call(workerAddr, rpcname, taskArgs, nil)
            registerChan <- workerAddr
            wg.Done()
        }(taskArgs)
    }
    wg.Wait()
    fmt.Printf("Schedule: %v done\n", phase)
}
If I put wg.Done() before registerChan <- workerAddr it works just fine, and I have no idea why. I have also tried deferring wg.Done(), but that doesn't seem to work either, even though I expected it to. I think I have some misunderstanding of how goroutines and channels work, which is causing my confusion.
Because execution stops here:
workerAddr := <-registerChan
For a buffered channel:
For workerAddr := <-registerChan to proceed, the channel registerChan must already contain a value; otherwise, the code blocks here, waiting on the channel.
I managed to run your code this way (try this):
package main

import (
    "fmt"
    "sync"
)

var wg sync.WaitGroup

func main() {
    registerChan := make(chan int, 1)
    for i := 1; i <= 10; i++ {
        wg.Add(1)
        go fn(i, registerChan)
    }
    registerChan <- 0 // seed the channel so the first receive can proceed
    wg.Wait()
    fmt.Println(<-registerChan)
}

func fn(taskArgs int, registerChan chan int) {
    workerAddr := <-registerChan
    workerAddr += taskArgs
    registerChan <- workerAddr
    wg.Done()
}
Output:
55
Explanation:
This code adds the numbers 1 through 10 using a channel, with 10 goroutines plus the main goroutine.
I hope this helps.
When you run the statement registerChan <- workerAddr, if the channel's buffer is full you cannot add to it and the send blocks. If you have a pool of, say, 10 worker addresses, you could add all of them to a buffered channel of capacity 10 before calling schedule. Do not add more after the call; that guarantees that whenever you take a value from the channel, there is space to put it back afterwards. Using defer at the beginning of your goroutine is good.
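Here is a runnable sketch of that advice; the worker names, task count, and buffer size are made up for illustration, and the real RPC call is replaced by a print, so only the channel discipline carries over:

package main

import (
    "fmt"
    "sync"
)

func main() {
    workers := []string{"worker-0", "worker-1"}

    // Capacity equals the number of workers, and every address is added
    // before any task goroutine starts, so a goroutine that took an address
    // out can always put it back without blocking.
    registerChan := make(chan string, len(workers))
    for _, w := range workers {
        registerChan <- w
    }

    var wg sync.WaitGroup
    for task := 0; task < 10; task++ {
        wg.Add(1)
        go func(task int) {
            defer wg.Done()        // runs even if the work below panics
            addr := <-registerChan // blocks until a worker is free
            fmt.Printf("task %d on %s\n", task, addr)
            registerChan <- addr // cannot block: we just freed a slot
        }(task)
    }
    wg.Wait()
}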
Related
I am unable to find a reason why this code deadlocks. The aim here is to make the worker goroutines do some work only after they are signaled.
If the signalStream channel is removed from the code, it works fine, but when it is introduced, it deadlocks, and I am not sure why. Also, if there are any tools that explain the occurrence of a deadlock, that would help.
package main

import (
    "log"
    "sync"
)

const jobs = 10
const workers = 5

var wg sync.WaitGroup

func main() {
    // create channel
    dataStream := make(chan interface{})
    signalStream := make(chan interface{})

    // Generate workers
    for i := 1; i <= workers; i++ {
        wg.Add(1)
        go worker(dataStream, signalStream, i*100)
    }

    // Generate jobs
    for i := 1; i <= jobs; i++ {
        dataStream <- i
    }
    close(dataStream)

    // start consuming data
    close(signalStream)
    wg.Wait()
}

func worker(c <-chan interface{}, s <-chan interface{}, id int) {
    defer wg.Done()
    <-s
    for i := range c {
        log.Printf("routine - %d - %d \n", id, i)
    }
}
Generate the jobs in a separate goroutine, i.e. put the whole jobs loop into a goroutine. If you don't, then dataStream <- i will block and your program will never "start consuming data".
// Generate jobs
go func() {
    for i := 1; i <= jobs; i++ {
        dataStream <- i
    }
    close(dataStream)
}()
https://go.dev/play/p/ChlbsJlgwdE
I want to send a value on a channel to goroutines from the main function. What I want to know is which goroutine will receive the value from the channel first.
package main

import (
    "fmt"
    "math/rand"
    "time"
)

func main() {
    var ch chan int
    ch = make(chan int)
    ch <- 1
    receive(ch)
}

func receive(ch chan int) {
    for i := 0; i < 4; i++ {
        // Create some threads
        go func(i int) {
            time.Sleep(time.Duration(rand.Intn(1000)) * time.Millisecond)
            fmt.Println(<-ch)
        }(i)
    }
}
My current implementation gives an error:
fatal error: all goroutines are asleep - deadlock!
How can I know which goroutine will receive the value from the channel first? And what happens to the other goroutines? Will they run, or throw an error, since there is no value left on the channel to receive once one of them has taken it?
If I create a buffered channel, my code works. So I don't understand what happens behind the scenes that makes it work when I create a buffered channel like below:
func main() {
    var ch chan int
    ch = make(chan int, 10)
    ch <- 1
    receive(ch)
}
If we look at the code below, I can see that we can send values through channels directly; there is no need to create a goroutine to send a value through a channel to other goroutines.
package main

import "fmt"

func main() {
    // We'll iterate over 2 values in the `queue` channel.
    queue := make(chan string, 2)
    queue <- "one"
    queue <- "two"
    close(queue)

    for elem := range queue {
        fmt.Println(elem)
    }
}
So what is wrong with my code? Why is it creating a deadlock?
If all you need is to start several workers and send a task to any of them, then you'd better run the workers before sending a value to the channel, because, as #mkopriva said above, writing to an unbuffered channel is a blocking operation. You always have to have a consumer, or the execution will freeze.
func main() {
    var ch chan int
    ch = make(chan int)
    receive(ch)
    ch <- 1
}

func receive(ch chan int) {
    for i := 0; i < 4; i++ {
        // Create some threads
        go func(i int) {
            time.Sleep(time.Duration(rand.Intn(1000)) * time.Millisecond)
            fmt.Printf("Worker no %d is processing the value %d\n", i, <-ch)
        }(i)
    }
}
Short answer to the question "Which goroutine will receive it?": any of them; you can't say for sure.
However, I have no idea what the time.Sleep(...) is for there, so I kept it as is.
An unbuffered channel (one created without a capacity) blocks until the value has been received. This means the code that wrote to the channel stops after writing until something reads from it. If that happens in the main goroutine, before your call to receive, it causes a deadlock.
There are two more points: you need a WaitGroup to hold off completion until the workers are finished, and a channel behaves like a concurrent queue. In particular, it has push and pop operations, both performed with <-. For example:
//Push to channel, channel contains 1 unless other things were there
c <- 1
//Pop from channel, channel is empty
x := <-c
Here is a working example:
package main

import (
    "fmt"
    "math/rand"
    "sync"
    "time"
)

func main() {
    var ch chan int
    ch = make(chan int)
    go func() {
        ch <- 1
        ch <- 1
        ch <- 1
        ch <- 1
    }()
    receive(ch)
}

func receive(ch chan int) {
    wg := &sync.WaitGroup{}
    for i := 0; i < 4; i++ {
        // Create some threads
        wg.Add(1)
        go func(i int) {
            time.Sleep(time.Duration(rand.Intn(1000)) * time.Millisecond)
            fmt.Println(<-ch)
            wg.Done()
        }(i)
    }
    wg.Wait()
    fmt.Println("done waiting")
}
Playground Link
As you can see, the WaitGroup is quite simple as well. You declare it at a higher scope. It's essentially a fancy counter with three primary methods: when you call wg.Add(1) the counter is increased, when you call wg.Done() the counter is decreased, and when you call wg.Wait(), execution is halted until the counter reaches 0.
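A minimal, self-contained illustration of just those three calls (the worker body is a stand-in):

package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 3; i++ {
        wg.Add(1) // +1 per goroutine, before it starts
        go func(i int) {
            defer wg.Done() // -1 when this goroutine returns
            fmt.Println("worker", i)
        }(i)
    }
    wg.Wait() // blocks until the counter is back to 0
    fmt.Println("all workers finished")
}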
The code below gets a worker from the channel and executes the function call. All routines finish and print that they are done, but Wait never returns.
I traced the counter of the WaitGroup by making a variable that increases on Add and decreases on Done, and it was zero at the end of the for loop.
Any help, please?
package mapreduce

import (
    "fmt"
    "sync"
)

func schedule(jobName string, mapFiles []string, nReduce int, phase jobPhase, registerChan chan string) {
    var ntasks int
    var n_other int // number of inputs (for reduce) or outputs (for map)
    switch phase {
    case mapPhase:
        ntasks = len(mapFiles)
        n_other = nReduce
    case reducePhase:
        ntasks = nReduce
        n_other = len(mapFiles)
    }
    fmt.Printf("Schedule: %v %v tasks (%d I/Os)\n", ntasks, phase, n_other)

    var wg sync.WaitGroup
    for i := 0; i < ntasks; i++ {
        worker := <-registerChan
        doTaskArg := DoTaskArgs{jobName, mapFiles[i], phase, i, n_other}
        wg.Add(1)
        go func() {
            defer wg.Done()
            done := call(worker, "Worker.DoTask", doTaskArg, nil)
            if done {
                registerChan <- worker
            } else {
                i = i - 1
            }
        }()
    }
    wg.Wait()
    fmt.Printf("Schedule: %v phase done\n", phase)
}
The channel is blocking your goroutine. If you put data into an unbuffered channel, the sending goroutine waits until a receiver takes the data from the channel. In your case the last routines block at registerChan <- worker, so they never return and the deferred wg.Done() never runs, because the function is still waiting.
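A runnable toy version of one possible fix (my sketch, not the lab's actual code: DoTaskArgs and call are replaced by a print, and the worker names are invented). The worker is handed back on its own goroutine, so the send on the unbuffered channel can never prevent the deferred wg.Done() from running:

package main

import (
    "fmt"
    "sync"
)

func main() {
    registerChan := make(chan string) // unbuffered, as in the question
    go func() { registerChan <- "worker-0" }()
    go func() { registerChan <- "worker-1" }()

    var wg sync.WaitGroup
    for i := 0; i < 5; i++ {
        worker := <-registerChan // take a free worker, as the original loop does
        wg.Add(1)
        go func(i int, worker string) {
            defer wg.Done()
            fmt.Printf("task %d on %s\n", i, worker) // stands in for the RPC
            // Hand the worker back asynchronously: this goroutine can finish
            // even if nothing is currently receiving. The last of these sends
            // may never be received, which is fine once scheduling is over.
            go func() { registerChan <- worker }()
        }(i, worker)
    }
    wg.Wait()
    fmt.Println("schedule done")
}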
I have some issues with the following code:
package main

import (
    "fmt"
    "sync"
)

// This program should go to 11, but sometimes it only prints 1 to 10.
func main() {
    ch := make(chan int)
    var wg sync.WaitGroup
    wg.Add(2)
    go Print(ch, wg)
    go func() {
        for i := 1; i <= 11; i++ {
            ch <- i
        }
        close(ch)
        defer wg.Done()
    }()
    wg.Wait() //deadlock here
}

// Print prints all numbers sent on the channel.
// The function returns when the channel is closed.
func Print(ch <-chan int, wg sync.WaitGroup) {
    for n := range ch { // reads from channel until it's closed
        fmt.Println(n)
    }
    defer wg.Done()
}
I get a deadlock at the marked place. I have tried setting wg.Add(1) instead of 2, and it solves my problem. My belief is that I'm not successfully sending the channel as an argument to the Print function. Is there a way to do that? Otherwise, a solution to my problem is replacing the go Print(ch, wg) line with:
go func() {
    Print(ch)
    defer wg.Done()
}()
and changing the Print function to:
func Print(ch <-chan int) {
    for n := range ch { // reads from channel until it's closed
        fmt.Println(n)
    }
}
What is the best solution?
Well, first your actual error is that you're giving the Print method a copy of the sync.WaitGroup, so it doesn't call the Done() method on the one you're Wait()ing on.
Try this instead:
package main

import (
    "fmt"
    "sync"
)

func main() {
    ch := make(chan int)
    var wg sync.WaitGroup
    wg.Add(2)
    go Print(ch, &wg)
    go func() {
        for i := 1; i <= 11; i++ {
            ch <- i
        }
        close(ch)
        defer wg.Done()
    }()
    wg.Wait() // no deadlock now: both goroutines call Done on the same WaitGroup
}

func Print(ch <-chan int, wg *sync.WaitGroup) {
    for n := range ch { // reads from channel until it's closed
        fmt.Println(n)
    }
    defer wg.Done()
}
Now, changing your Print method to remove the WaitGroup from it is generally a good idea: the method doesn't need to know that something is waiting for it to finish its job.
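A sketch of what that refactor could look like, keeping the WaitGroup entirely in main (the loop bound of 11 is taken from the question):

package main

import (
    "fmt"
    "sync"
)

// Print knows nothing about who is waiting for it.
func Print(ch <-chan int) {
    for n := range ch {
        fmt.Println(n)
    }
}

func main() {
    ch := make(chan int)
    var wg sync.WaitGroup
    wg.Add(2)
    go func() {
        defer wg.Done()
        Print(ch)
    }()
    go func() {
        defer wg.Done()
        for i := 1; i <= 11; i++ {
            ch <- i
        }
        close(ch)
    }()
    wg.Wait()
}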
I agree with #Elwinar's solution that the main problem in your code is caused by passing a copy of your WaitGroup to the Print function.
This means the wg.Done() operates on a copy of the wg you defined in main. Therefore, the wg in main can never be decremented, and a deadlock happens when you wg.Wait() in main.
Since you are also asking about best practice, I can give you some suggestions of my own:
Don't remove defer wg.Done() in Print. Since your goroutine in main is a sender and Print is a receiver, removing wg.Done() from the receiver routine leaves the receiver unaccounted for. Only your sender would be synced with main, so after your sender is done, main is done, but it's possible that the receiver is still working. My point is: don't leave dangling goroutines around after your main routine has finished. Close them or wait for them.
Remember to do panic recovery everywhere, especially in anonymous goroutines. I have seen a lot of Go programmers forget to put panic recovery in goroutines, even when they remember to put recover in normal functions. It's critical when you want your code to behave correctly, or at least gracefully, when something unexpected happens.
Use defer before every critical call, like sync-related calls, at the beginning, since you don't know where the code could break. Let's say you removed the defer before wg.Done() and a panic occurred in the anonymous goroutine in your example. If you don't have panic recovery, it will panic. But what happens if you do have panic recovery? Everything's fine now? No. You will get a deadlock at wg.Wait(), since your wg.Done() was skipped because of the panic! However, by using defer, wg.Done() will be executed at the end even if a panic happened. Also, defer before close is important too, since its result also affects the communication.
So here is the code modified according to the points I mentioned above:
package main

import (
    "fmt"
    "sync"
)

func main() {
    ch := make(chan int)
    var wg sync.WaitGroup
    wg.Add(2)
    go Print(ch, &wg)
    go func() {
        defer func() {
            if r := recover(); r != nil {
                println("panic:" + r.(string))
            }
        }()
        defer func() {
            wg.Done()
        }()
        // Deferred so the receiver's range loop still terminates even when the
        // panic below skips the rest of this function.
        defer close(ch)
        for i := 1; i <= 11; i++ {
            ch <- i
            if i == 7 {
                panic("ahaha")
            }
        }
        println("sender done")
    }()
    wg.Wait()
}

func Print(ch <-chan int, wg *sync.WaitGroup) {
    defer func() {
        if r := recover(); r != nil {
            println("panic:" + r.(string))
        }
    }()
    defer wg.Done()
    for n := range ch {
        fmt.Println(n)
    }
    println("print done")
}
Hope it helps :)
I see lots of tutorials and examples on how to make Go wait for x number of goroutines to finish, but what I'm trying to do is ensure there are always x number running, so a new goroutine is launched as soon as one ends.
Specifically, I have a few hundred thousand 'things to do', which is processing some stuff that is coming out of MySQL. So it works like this:
db, err := sql.Open("mysql", connection_string)
checkErr(err)
defer db.Close()

rows, err := db.Query(`SELECT id FROM table`)
checkErr(err)
defer rows.Close()

var id uint
for rows.Next() {
    err := rows.Scan(&id)
    checkErr(err)
    go processTheThing(id)
}
checkErr(err)
rows.Close()
Currently that will launch several hundred thousand goroutines running processTheThing(). What I need is for a maximum of x goroutines (we'll call it 20) to be running. So it starts by launching 20 for the first 20 rows, and from then on it launches a new goroutine for the next id the moment one of the current goroutines finishes. So at any point in time there are always 20 running.
I'm sure this is quite simple/standard, but I can't seem to find a good explanation in any of the tutorials or examples of how this is done.
You may find the Go Concurrency Patterns article interesting, especially the Bounded parallelism section; it explains the exact pattern you need.
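A sketch of that pattern applied to the question's loop (processTheThing and the id source here are stand-ins; 20 is the worker count from the question):

package main

import (
    "fmt"
    "sync"
)

func processTheThing(id uint) { fmt.Println("processing", id) } // stand-in

func main() {
    ids := make(chan uint)
    var wg sync.WaitGroup

    // Bounded parallelism: exactly 20 workers, each pulling ids until the
    // channel is closed, so at most 20 things are processed at once.
    for w := 0; w < 20; w++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for id := range ids {
                processTheThing(id)
            }
        }()
    }

    // Feed the ids (in the real code this would be the rows.Next() loop).
    for id := uint(0); id < 100; id++ {
        ids <- id
    }
    close(ids)
    wg.Wait()
}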
You can use a channel of empty structs as a limiting guard to control the number of concurrent worker goroutines:
package main

import "fmt"

func main() {
    maxGoroutines := 10
    guard := make(chan struct{}, maxGoroutines)

    for i := 0; i < 30; i++ {
        guard <- struct{}{} // would block if guard channel is already filled
        go func(n int) {
            worker(n)
            <-guard
        }(i)
    }
}
func worker(i int) { fmt.Println("doing work on", i) }
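One caveat: main can return while the last workers are still running. A common extension (my addition, not part of the original answer) is to fill the guard back up at the end, which only completes once every in-flight worker has released its slot:

package main

import "fmt"

func worker(i int) { fmt.Println("doing work on", i) }

func main() {
    maxGoroutines := 10
    guard := make(chan struct{}, maxGoroutines)

    for i := 0; i < 30; i++ {
        guard <- struct{}{} // blocks while maxGoroutines workers are in flight
        go func(n int) {
            worker(n)
            <-guard // release the slot when done
        }(i)
    }

    // Drain: each of these sends succeeds only after a running worker frees
    // its slot, so when this loop finishes all 30 workers have completed.
    for i := 0; i < maxGoroutines; i++ {
        guard <- struct{}{}
    }
}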
Here I think something simple like this will work:
package main

import "fmt"

const MAX = 20

func main() {
    sem := make(chan int, MAX)
    for {
        sem <- 1 // will block if there are MAX ints in sem
        go func() {
            fmt.Println("hello again, world")
            <-sem // removes an int from sem, allowing another to proceed
        }()
    }
}
Thanks to everyone for helping me out with this. However, I don't feel that anyone really provided something that both worked and was simple/understandable, although you did all help me understand the technique.
What I have done in the end is, I think, much more understandable and practical as an answer to my specific question, so I will post it here in case anyone else has the same question.
Somehow this ended up looking a lot like what OneOfOne posted, which is great because now I understand that. But I found OneOfOne's code very difficult to understand at first, because the passing of functions to functions made it quite confusing to figure out which bit was for what. I think this way makes a lot more sense:
package main

import (
    "fmt"
    "sync"
)

const xthreads = 5 // Total number of threads to use, excluding the main() thread

func doSomething(a int) {
    fmt.Println("My job is", a)
    return
}

func main() {
    var ch = make(chan int, 50) // This number 50 can be anything as long as it's larger than xthreads
    var wg sync.WaitGroup

    // This starts xthreads number of goroutines that wait for something to do
    wg.Add(xthreads)
    for i := 0; i < xthreads; i++ {
        go func() {
            for {
                a, ok := <-ch
                if !ok { // if there is nothing to do and the channel has been closed then end the goroutine
                    wg.Done()
                    return
                }
                doSomething(a) // do the thing
            }
        }()
    }

    // Now the jobs can be added to the channel, which is used as a queue
    for i := 0; i < 50; i++ {
        ch <- i // add i to the queue
    }

    close(ch) // This tells the goroutines there's nothing else to do
    wg.Wait() // Wait for the threads to finish
}
Create channel for passing data to goroutines.
Start 20 goroutines that processes the data from channel in a loop.
Send the data to the channel instead of starting a new goroutine.
Grzegorz Żur's answer is the most efficient way to do it, but for a newcomer it could be hard to implement without reading code, so here's a very simple implementation:
package main

import (
    "fmt"
    "runtime"
    "sync"
)

type idProcessor func(id uint)

func SpawnStuff(limit uint, proc idProcessor) chan<- uint {
    ch := make(chan uint)
    for i := uint(0); i < limit; i++ {
        go func() {
            for {
                id, ok := <-ch
                if !ok {
                    return
                }
                proc(id)
            }
        }()
    }
    return ch
}

func main() {
    runtime.GOMAXPROCS(4)
    var wg sync.WaitGroup // this is just for the demo, otherwise main will return
    fn := func(id uint) {
        fmt.Println(id)
        wg.Done()
    }
    wg.Add(1000)
    ch := SpawnStuff(10, fn)
    for i := uint(0); i < 1000; i++ {
        ch <- i
    }
    close(ch) // should do this to make all the goroutines exit gracefully
    wg.Wait()
}
playground
This is a simple producer-consumer problem, which in Go can be easily solved using channels to buffer the packets.
To put it simply: create a channel that accepts your IDs. Run a number of routines that read from the channel in a loop and then process the ID. Then run your loop, which feeds IDs into the channel.
Example:
func producer() {
    var buffer = make(chan uint)

    for i := 0; i < 20; i++ {
        go consumer(buffer)
    }

    for _, id := range IDs {
        buffer <- id
    }
}

func consumer(buffer chan uint) {
    for {
        id := <-buffer
        // Do your things here
    }
}
Things to know:
Unbuffered channels are blocking: if the item written to the channel isn't accepted, the routine feeding the item will block until it is.
My example lacks a closing mechanism: you must find a way to make the producer wait for all consumers to end their loops before returning. The simplest way to do this is with another channel. I'll let you think about it.
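Since the closing mechanism is left as an exercise, here is one possible version of it (my sketch, using a done channel as the answer hints; the IDs slice and the print are stand-ins):

package main

import "fmt"

func producer(IDs []uint) {
    buffer := make(chan uint)
    done := make(chan struct{})

    for i := 0; i < 20; i++ {
        go consumer(buffer, done)
    }

    for _, id := range IDs {
        buffer <- id
    }
    close(buffer) // each consumer's range loop ends here

    for i := 0; i < 20; i++ {
        <-done // wait for every consumer to report that it has finished
    }
}

func consumer(buffer <-chan uint, done chan<- struct{}) {
    for id := range buffer {
        fmt.Println("processing", id) // do your things here
    }
    done <- struct{}{}
}

func main() {
    producer([]uint{1, 2, 3, 4, 5})
}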
I've written a simple package to handle concurrency in Go. This package will help you limit the number of goroutines that are allowed to run concurrently:
https://github.com/zenthangplus/goccm
Example:
package main

import (
    "fmt"
    "time"

    "github.com/zenthangplus/goccm"
)

func main() {
    // Limit 3 goroutines to run concurrently.
    c := goccm.New(3)

    for i := 1; i <= 10; i++ {
        // This function has to be called before starting any goroutine.
        c.Wait()

        go func(i int) {
            fmt.Printf("Job %d is running\n", i)
            time.Sleep(2 * time.Second)

            // This function has to be called when a goroutine has finished,
            // or you can use `defer c.Done()` at the top of the goroutine.
            c.Done()
        }(i)
    }

    // This function has to be called to ensure all goroutines have finished
    // before the main program exits.
    c.WaitAllDone()
}
You can also take a look here: https://github.com/LiangfengChen/goutil/blob/main/concurrent.go
For an example, refer to the test case:
func TestParallelCall(t *testing.T) {
    format := "test:%d"
    data := make(map[int]bool)
    mutex := sync.Mutex{}
    val, err := ParallelCall(1000, 10, func(pos int) (interface{}, error) {
        mutex.Lock()
        defer mutex.Unlock()
        data[pos] = true
        return pos, errors.New(fmt.Sprintf(format, pos))
    })
    for i := 0; i < 1000; i++ {
        if _, ok := data[i]; !ok {
            t.Errorf("TestParallelCall pos not found: %d", i)
        }
        if val[i] != i {
            t.Errorf("TestParallelCall return value is not right (%d,%v)", i, val[i])
        }
        if err[i].Error() != fmt.Sprintf(format, i) {
            t.Errorf("TestParallelCall error msg is not correct (%d,%v)", i, err[i])
        }
    }
}