For the code below I am encountering fatal error: all goroutines are asleep - deadlock!
Am I right in using a buffered channel? I would appreciate any pointers; I am unfortunately at my wits' end.
func main() {
    valueChannel := make(chan int, 2)
    defer close(valueChannel)
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go doNothing(&wg, valueChannel)
    }
    for {
        v, ok := <-valueChannel
        if !ok {
            break
        }
        fmt.Println(v)
    }
    wg.Wait()
}

func doNothing(wg *sync.WaitGroup, numChan chan int) {
    defer wg.Done()
    time.Sleep(time.Duration(rand.Intn(1000)) * time.Millisecond)
    numChan <- 12
}
The main goroutine blocks on <- valueChannel after receiving all values. Close the channel to unblock the main goroutine.
func main() {
    valueChannel := make(chan int, 2)
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go doNothing(&wg, valueChannel)
    }

    // Close channel after goroutines complete.
    go func() {
        wg.Wait()
        close(valueChannel)
    }()

    // Receive values until channel is closed.
    // The for / range loop here does the same
    // thing as the for loop in the question.
    for v := range valueChannel {
        fmt.Println(v)
    }
}
Run the example on the playground.
The code above works independently of the number of values sent by the goroutines.
If the main() function can determine the number of values sent by the goroutines, then receive that number of values in main():
func main() {
    const n = 10
    valueChannel := make(chan int, 2)
    for i := 0; i < n; i++ {
        go doNothing(valueChannel)
    }

    // Each call to doNothing sends one value. Receive
    // one value for each call to doNothing.
    for i := 0; i < n; i++ {
        fmt.Println(<-valueChannel)
    }
}

func doNothing(numChan chan int) {
    time.Sleep(time.Duration(rand.Intn(1000)) * time.Millisecond)
    numChan <- 12
}
Run the example on the playground.
The main problem is in the for loop that receives from the channel.
The comma-ok idiom is slightly different on channels: ok indicates whether the received value was sent on the channel (true) or is a zero value returned because the channel is closed and empty (false).
In this case the channel is waiting for data to be sent, and since the goroutines have already finished sending the value ten times: deadlock.
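To see the comma-ok behavior in isolation, here is a minimal sketch (illustration only, not part of the question's code):
package main

import "fmt"

func main() {
    ch := make(chan int, 1)
    ch <- 42
    close(ch)

    v, ok := <-ch      // buffered value is still delivered after close
    fmt.Println(v, ok) // 42 true

    v, ok = <-ch       // channel is closed and empty
    fmt.Println(v, ok) // 0 false
}
In the question's code nothing ever closes the channel before the receive loop, so ok can never become false.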
So, leaving the design of the code aside, if I just need to make the smallest possible change, here it is:
func main() {
    valueChannel := make(chan int, 2)
    defer close(valueChannel)
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go doNothing(&wg, valueChannel)
    }
    for i := 0; i < 10; i++ {
        v := <-valueChannel
        fmt.Println(v)
    }
    wg.Wait()
}

func doNothing(wg *sync.WaitGroup, numChan chan int) {
    defer wg.Done()
    time.Sleep(time.Duration(rand.Intn(1000)) * time.Millisecond)
    numChan <- 12
}
Related
I want to spawn X goroutines to update countValue in parallel (numRoutines is the number of CPUs you have).
Solution 1:
func count(numRoutines int) (countValue int) {
    var mu sync.Mutex
    k := func(i int) {
        mu.Lock()
        defer mu.Unlock()
        countValue += 5
    }
    for i := 0; i < numRoutines; i++ {
        go k(i)
    }
    return
}
It becomes a data race and the returned countValue = 0.
Solution 2:
func count(numRoutines int) (countValue int) {
    k := func(i int, c chan int) {
        c <- 5
    }
    c := make(chan int)
    for i := 0; i < numRoutines; i++ {
        go k(i, c)
    }
    for i := 0; i < numRoutines; i++ {
        countValue += <-c
    }
    return
}
I benchmarked it, and a sequential addition works faster than using goroutines. I think it's because I have two for loops here: when I put countValue += <-c inside the first for loop, the code runs faster.
Solution 3:
func count(numRoutines int) (countValue int) {
    var wg sync.WaitGroup
    c := make(chan int)
    k := func(i int) {
        defer wg.Done()
        c <- 5
    }
    for i := 0; i < numRoutines; i++ {
        wg.Add(1)
        go k(i)
    }
    go func() {
        for i := range c {
            countValue += i
        }
    }()
    wg.Wait()
    return
}
Still a race condition :/
Is there any better way to do this?
There definitely is a better way to safely increment a variable: using sync/atomic:
import "sync/atomic"
var words int64
k := func() {
_ = atomic.AddInt64(&words, 5) // increment atomically
}
Using a channel basically eliminates the need for a mutex, and takes away the risk of concurrent access to the variable itself; a waitgroup here is just a bit overkill.
Channels:
words := 0
done := make(chan struct{})       // or use context
ch := make(chan int, numRoutines) // buffer so each routine can write

go func() {
    read := 0
    for i := range ch {
        words += 5 // or use i or something
        read++
        if read == numRoutines {
            break // we've received data from all routines
        }
    }
    close(done) // indicate this routine has terminated
}()

for i := 0; i < numRoutines; i++ {
    ch <- i // write whatever value needs to be used in the counting routine on the channel
}

<-done    // wait for our routine that increments words to return
close(ch) // this channel is no longer needed
fmt.Printf("Counted %d\n", words)
As you can tell, numRoutines is no longer the number of routines, but rather the number of writes on the channel. You can still move the writes to individual routines:
for i := 0; i < numRoutines; i++ {
    go func(ch chan<- int, i int) {
        // do stuff here
        ch <- 5 * i // for example
    }(ch, i)
}
WaitGroup:
Instead of using a context that you can cancel, or a channel, you can use a waitgroup + atomic to get the same result. The easiest way IMO to do so is to create a type:
type counter struct {
    words int64
}

func (c *counter) doStuff(wg *sync.WaitGroup, i int) {
    defer wg.Done()
    _ = atomic.AddInt64(&c.words, int64(i)*5) // whatever value you need to add
}

func main() {
    cnt := counter{}
    wg := sync.WaitGroup{}
    wg.Add(numRoutines) // create the waitgroup
    for i := 0; i < numRoutines; i++ {
        go cnt.doStuff(&wg, i)
    }
    wg.Wait() // wait for all routines to finish
    fmt.Printf("Counted %d\n", cnt.words)
}
Fix for your third solution
As I mentioned in the comment: your third solution is still causing a race condition because the channel c is never closed, meaning the routine:
go func() {
    for i := range c {
        countValue += i
    }
}()
Never returns. The waitgroup also only ensures that you've sent all values on the channel, not that countValue has been incremented to its final value. The fix is to close the channel after wg.Wait() returns so the routine can return, add a done channel that you close when that last routine returns, and add a <-done statement before returning.
func count(numRoutines int) (countValue int) {
    var wg sync.WaitGroup
    c := make(chan int)
    k := func(i int) {
        defer wg.Done()
        c <- 5
    }
    for i := 0; i < numRoutines; i++ {
        wg.Add(1)
        go k(i)
    }
    done := make(chan struct{})
    go func() {
        for i := range c {
            countValue += i
        }
        close(done)
    }()
    wg.Wait()
    close(c)
    <-done
    return
}
This adds some clutter, though, and IMO is a bit messy. It might be easier to just move the wg.Wait() call to a routine:
func count(numRoutines int) (countValue int) {
    var wg sync.WaitGroup
    c := make(chan int)
    // add wg as argument, makes it easier to move this function outside of this scope
    k := func(wg *sync.WaitGroup, i int) {
        defer wg.Done()
        c <- 5
    }
    wg.Add(numRoutines) // increment the waitgroup once
    for i := 0; i < numRoutines; i++ {
        go k(&wg, i)
    }
    go func() {
        wg.Wait()
        close(c) // this ends the loop over the channel
    }()
    // just iterate over the channel until it is closed
    for i := range c {
        countValue += i
    }
    // we've added all values to countValue
    return
}
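For completeness, a small usage sketch of the count function above (assuming "fmt" and "runtime" are imported; runtime.NumCPU() mirrors the question's one-goroutine-per-CPU goal):
numRoutines := runtime.NumCPU() // one goroutine per CPU, as in the question
total := count(numRoutines)     // each routine sends 5, so total == 5 * numRoutines
fmt.Printf("countValue = %d\n", total)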
Can someone give me some insight about this code? Why does it get a deadlock error in for x := range c?
func main() {
    c := make(chan int, 10)
    for i := 0; i < 5; i++ {
        go func(chanel chan int, i int) {
            println("i", i)
            chanel <- 1
        }(c, i)
    }
    for x := range c {
        println(x)
    }
    println("Done!")
}
Because this:
for x := range c {
    println(x)
}
will loop until the channel c closes, which is never done here.
Here is one way you can fix it, using a WaitGroup:
package main

import "sync"

func main() {
    var wg sync.WaitGroup
    c := make(chan int, 10)
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func(chanel chan int, i int) {
            defer wg.Done()
            println("i", i)
            chanel <- 1
        }(c, i)
    }
    go func() {
        wg.Wait()
        close(c)
    }()
    for x := range c {
        println(x)
    }
    println("Done!")
}
Try it on Go Playground
You create five goroutines, each sending an integer value to the channel. Once all those five values are written, there's no other goroutine left that writes to the channel.
The main goroutine reads those five values from the channel. But there are no goroutines that can possibly write a sixth value or close the channel. So you're deadlocked waiting for data from the channel.
Close the channel once all the writes are completed. It should be an interesting exercise to figure out how you can do that with this code.
The channel needs to be closed to indicate that the work is complete.
Coordinate with a sync.WaitGroup:
c := make(chan int, 10)
var wg sync.WaitGroup // here
for i := 0; i < 5; i++ {
    wg.Add(1) // here
    go func(chanel chan int, i int) {
        defer wg.Done()
        println("i", i)
        chanel <- 1
    }(c, i)
}
go func() {
    wg.Wait() // and here
    close(c)
}()
for x := range c {
    println(x)
}
println("Done!")
https://play.golang.org/p/VWcBC2YGLvM
How do I block the main func and allow the goroutines to communicate through channels? The following code sample throws me an error:
fatal error: all goroutines are asleep - deadlock!
package main

import (
    "fmt"
    "time"
)

func main() {
    ch := make(chan int)

    go func() {
        value := <-ch
        fmt.Print(value) // This never prints!
    }()

    go func() {
        for i := 0; i < 100; i++ {
            time.Sleep(100 * time.Millisecond)
            ch <- i
        }
    }()

    c := make(chan int)
    <-c
}
I think you want to print all values [0:99]. Then you need a loop in the first goroutine.
You also need to pass a signal to break out of the loop:
func main() {
    ch := make(chan int)
    stopProgram := make(chan bool)

    go func() {
        for i := 0; i < 100; i++ {
            value := <-ch
            fmt.Println(value)
        }
        // Send signal through stopProgram to stop loop
        stopProgram <- true
    }()

    go func() {
        for i := 0; i < 100; i++ {
            time.Sleep(100 * time.Millisecond)
            ch <- i
        }
    }()

    // your program will wait here until it gets the stop signal through the channel
    <-stopProgram
}
I wonder what the idiomatic way to do the following would be.
I have N slow API queries and one database connection. I want a buffered channel where responses come in, and one database transaction which I use to write the data.
I could only come up with a semaphore-like approach, as in the following made-up example:
func myFunc() {
    // 10 concurrent API calls
    sem := make(chan bool, 10)
    // A concurrent-safe map as buffer
    var myMap MyConcurrentMap
    for i := 0; i < N; i++ {
        sem <- true
        go func(i int) {
            defer func() { <-sem }()
            resp := slowAPICall(fmt.Sprintf("http://slow-api.me?%d", i))
            myMap.Put(resp)
        }(i)
    }
    for j := 0; j < cap(sem); j++ {
        sem <- true
    }

    tx, _ := db.Begin()
    for data := range myMap {
        tx.Exec("Insert data into database")
    }
    tx.Commit()
}
I am nearly sure there is a simpler, cleaner and more proper solution, but it seems complicated for me to grasp.
EDIT:
Well, I came up with the following solution. This way I do not need the buffer map: once data arrives on the resp channel it is printed, or it could be used to insert into a database. It works, and while I am still not sure everything is OK, at least there are no races.
package main

import (
    "fmt"
    "math/rand"
    "sync"
    "time"
)

// Global wait group
var wg sync.WaitGroup

func init() {
    // just for fun's sake, make rand seeded
    rand.Seed(time.Now().UnixNano())
}

// Emulate a slow API call
func verySlowAPI(id int) int {
    n := rand.Intn(5)
    time.Sleep(time.Duration(n) * time.Second)
    return n
}

func main() {
    // Amount of tasks
    N := 100
    // Concurrency level
    concur := 10

    // Channel for tasks
    tasks := make(chan int, N)
    // Channel for responses
    resp := make(chan int, 10)

    // 10 concurrent goroutines
    wg.Add(concur)
    for i := 1; i <= concur; i++ {
        go worker(tasks, resp)
    }

    // Add tasks
    for i := 0; i < N; i++ {
        tasks <- i
    }

    // Collect data from goroutines
    for i := 0; i < N; i++ {
        fmt.Printf("%d\n", <-resp)
    }

    // close the tasks channel
    close(tasks)
    // wait till finish
    wg.Wait()
}

func worker(task chan int, resp chan<- int) {
    defer wg.Done()
    for {
        task, ok := <-task
        if !ok {
            return
        }
        n := verySlowAPI(task)
        resp <- n
    }
}
There's no need to use a channel as a semaphore; sync.WaitGroup was made for waiting for a set of routines to complete.
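For example, the drain loop over sem in the question could be replaced by a WaitGroup. This is only a sketch reusing the question's own slowAPICall, myMap and N, and note it no longer caps concurrency at 10, which is where the worker pool below comes in:
var wg sync.WaitGroup
for i := 0; i < N; i++ {
    wg.Add(1)
    go func(i int) {
        defer wg.Done()
        resp := slowAPICall(fmt.Sprintf("http://slow-api.me?%d", i))
        myMap.Put(resp)
    }(i)
}
wg.Wait() // wait for all calls instead of draining a semaphore channel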
If you're using the channel to limit throughput, you're better off with a worker pool, and using the channel to pass jobs to the workers:
type job struct {
    i int
}

func myFunc(N int) {
    // Adjust as needed for total number of tasks
    work := make(chan job, 10)
    // res being whatever type slowAPICall returns
    results := make(chan res, 10)
    resBuff := make([]res, 0, N)
    wg := new(sync.WaitGroup)

    // 10 concurrent API calls
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            for j := range work {
                resp := slowAPICall(fmt.Sprintf("http://slow-api.me?%d", j.i))
                results <- resp
            }
            wg.Done()
        }()
    }

    go func() {
        for r := range results {
            resBuff = append(resBuff, r)
        }
    }()

    for i := 0; i < N; i++ {
        work <- job{i}
    }
    close(work)
    wg.Wait()
    close(results)
}
Maybe this will work for you. Now you can get rid of your concurrent map. Here is a code snippet:
func myFunc() {
    // 10 concurrent API calls
    sem := make(chan bool, 10)
    respCh := make(chan YOUR_RESP_TYPE, 10)
    var responses []YOUR_RESP_TYPE
    for i := 0; i < N; i++ {
        sem <- true
        go func(i int) {
            defer func() {
                <-sem
            }()
            resp := slowAPICall(fmt.Sprintf("http://slow-api.me?%d", i))
            respCh <- resp
        }(i)
    }

    respCollected := make(chan struct{})
    go func() {
        for i := 0; i < N; i++ {
            responses = append(responses, <-respCh)
        }
        close(respCollected)
    }()
    <-respCollected

    tx, _ := db.Begin()
    for _, data := range responses {
        tx.Exec("Insert data into database")
    }
    tx.Commit()
}
Then we need one more goroutine that collects all the responses from the response channel into a slice or map.
I have the following code that goes into a deadlock when sending on the channel from a goroutine:
package main

import (
    "fmt"
    "sync"
)

func main() {
    for a := range getCh(10) {
        fmt.Println("Got:", a)
    }
}

func getCh(n int) <-chan int {
    var wg sync.WaitGroup
    ch := make(chan int)
    defer func() {
        fmt.Println("closing")
        wg.Wait()
        close(ch)
    }()
    wg.Add(1)
    go func() {
        defer wg.Done()
        for i := 0; i < n; i++ {
            ch <- i
        }
    }()
    wg.Add(1)
    go func() {
        defer wg.Done()
        for i := n; i < 0; i-- {
            ch <- i
        }
    }()
    return ch
}
I know that it is legal to use wg.Wait() in a defer. But I haven't been able to find a way to use it in a function that returns a channel.
I think the mistake you're making is assuming that the deferred function will run asynchronously too. That is not the case, so getCh() blocks in its deferred part, waiting for the WaitGroup. But since no one is reading from the channel, the goroutines that write into it can't return, and thus the WaitGroup causes a deadlock. Try something like this:
func getCh(n int) <-chan int {
    ch := make(chan int)
    go func() {
        var wg sync.WaitGroup
        wg.Add(1)
        go func(n int) {
            defer wg.Done()
            for i := 0; i < n; i++ {
                ch <- i
            }
        }(n)
        wg.Add(1)
        go func(n int) {
            defer wg.Done()
            for i := n; i > 0; i-- {
                ch <- i
            }
        }(n)
        wg.Wait()
        fmt.Println("closing")
        close(ch)
    }()
    return ch
}
It looks like your channels are blocking since you are not using any buffered channels. Check out this quick example https://play.golang.org/p/zMnfA33qZk
ch := make(chan int, n)
Remember that channels block when they are full. I am not sure what your goal is with your code, but it looks like you were aiming to use a buffered channel. This is a good piece from Effective Go https://golang.org/doc/effective_go.html#channels
Receivers always block until there is data to receive. If the channel is unbuffered, the sender blocks until the receiver has received the value. If the channel has a buffer, the sender blocks only until the value has been copied to the buffer; if the buffer is full, this means waiting until some receiver has retrieved a value.
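A minimal sketch (not from the original answer) that illustrates the difference described above:
package main

import "fmt"

func main() {
    // Unbuffered: a send blocks until another goroutine receives.
    unbuf := make(chan int)
    go func() { unbuf <- 1 }() // sending from main with no receiver would deadlock
    fmt.Println(<-unbuf)

    // Buffered: sends succeed until the buffer is full, even with no receiver ready.
    buf := make(chan int, 2)
    buf <- 1
    buf <- 2 // a third send here would block until a value is received
    fmt.Println(<-buf, <-buf)
}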