I have implemented a function contractGraph which calculates a minimal cut of a graph using randomized contraction. I am running it a specified number of times and calculating the minimum cut:
minCut := 0
for i := 0; i < totalCount; i++ {
_minCut := contractGraph(graph)
if minCut == 0 || _minCut < minCut {
minCut = _minCut
}
}
contractGraph does CPU-intensive calculations, but the program uses only one CPU core on my machine. I want to modify it so that at any time 4 executions of contractGraph run in parallel, the results are put on a channel and read synchronously, and the minimum is calculated.
I tried:
func worker(graph Graph, i int, workerChan <- chan bool, minCutChan chan <- int) {
defer func () { <- workerChan }()
min_cut := contractGraph(graph)
minCutChan <- min_cut
}
func workerRunner(graph Graph, minCutChan chan int, totalCount int, workerCount int) {
workerChan := make(chan bool, workerCount)
for i := 0; i < totalCount; i++ {
go worker(graph, i, workerChan, minCutChan)
}
}
minCutChan := make(chan int)
go workerRunner(graph, minCutChan, totalCount, 4)
// read the resulting min cuts
minCut := 0
for _minCut := range minCutChan {
if minCut == 0 || _minCut < minCut {
minCut = _minCut
}
}
But still only one core is used, and at the end I get:
fatal error: all goroutines are asleep - deadlock!
Also, I don't like having two channels; I think it should be possible to have only one channel for the results.
What pattern would you recommend to use?
You forgot to close minCutChan, so main is stuck in the range loop even though all the goroutines have completed.
To avoid the second channel (workerChan) you can use sync.WaitGroup.
EDIT: To handle totalCount I would use atomic.AddInt64; see the updated examples.
See a working mock example with these edits: http://play.golang.org/p/WyCQrWK5aa
package main
import (
"fmt"
"sync"
"sync/atomic"
)
type Graph struct {
}
func contractGraph(Graph) int { return 0 }
func worker(wg *sync.WaitGroup, graph Graph, i int, minCutChan chan<- int) {
defer wg.Done()
for {
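		// Atomically claim one of the remaining iterations; once the shared
		// counter goes negative, all totalCount runs have been handed out
		// and this worker can stop.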
count := atomic.AddInt64(&totalCount, -1)
if count < 0 {
break
}
fmt.Println("Worker Iteration", count)
min_cut := contractGraph(graph)
minCutChan <- min_cut
}
}
func workerRunner(graph Graph, minCutChan chan int, workerCount int) {
wg := new(sync.WaitGroup)
wg.Add(workerCount)
for i := 0; i < workerCount; i++ {
go worker(wg, graph, i, minCutChan)
}
wg.Wait()
close(minCutChan)
}
var totalCount int64
func main() {
workerCount := 4
graph := Graph{}
totalCount = 100
minCutChan := make(chan int, workerCount+1)
go workerRunner(graph, minCutChan, workerCount)
// read the resulting min cuts
minCut := 0
for _minCut := range minCutChan {
if minCut == 0 || _minCut < minCut {
minCut = _minCut
}
}
fmt.Println(minCut)
}
Even more idiomatic Go style is to spin up the workers inside an anonymous function:
http://play.golang.org/p/nT0uUutQyS
package main
import (
"fmt"
"sync"
"sync/atomic"
)
type Graph struct {
}
func contractGraph(Graph) int { return 0 }
var totalCount int64
func workerRunner(graph Graph, minCutChan chan int, workerCount int) {
var wg sync.WaitGroup
wg.Add(workerCount)
for i := 0; i < workerCount; i++ {
go func() {
defer wg.Done()
for {
count := atomic.AddInt64(&totalCount, -1)
if count < 0 {
break
}
fmt.Println("Worker Iteration", count)
min_cut := contractGraph(graph)
minCutChan <- min_cut
}
}()
}
wg.Wait()
close(minCutChan)
}
func main() {
workerCount := 4
totalCount = 100
graph := Graph{}
minCutChan := make(chan int, workerCount+1)
go workerRunner(graph, minCutChan, workerCount)
// read the resulting min cuts
minCut := 0
for _minCut := range minCutChan {
if minCut == 0 || _minCut < minCut {
minCut = _minCut
}
}
fmt.Println(minCut)
}
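As an aside, regarding only one core being used: on Go releases before 1.5, GOMAXPROCS defaulted to 1, so CPU-bound goroutines all ran on a single core unless you raised it yourself (from Go 1.5 on it defaults to the number of CPUs). If that applies to your setup, the usual snippet is:

import "runtime"

func init() {
	// Let the scheduler use every available core
	// (only needed on Go < 1.5, where GOMAXPROCS defaulted to 1).
	runtime.GOMAXPROCS(runtime.NumCPU())
}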
I made some edits to the example from gobyexample:
import (
"fmt"
"math/rand"
"time"
)
type DemoResult struct {
Name string
Rate int
}
func random(min, max int) int {
rand.Seed(time.Now().UTC().UnixNano())
return rand.Intn(max-min) + min
}
func worker(id int, jobs <-chan int, results chan<- DemoResult) {
for j := range jobs {
fmt.Println("worker", id, "started job", j)
time.Sleep(time.Second)
fmt.Println("worker", id, "finished job", j)
myrand := random(1, 4)
if myrand == 2 {
results <- DemoResult{Name: "succ", Rate: j}
}
// else {
// results <- DemoResult{Name: "failed", Rate: 999}
// }
}
}
func main() {
const numJobs = 5
jobs := make(chan int, numJobs)
results := make(chan DemoResult)
for w := 1; w <= 3; w++ {
go worker(w, jobs, results)
}
for j := 1; j <= numJobs; j++ {
jobs <- j
}
close(jobs)
for a := 1; a <= numJobs; a++ {
out := <-results
if out.Name == "succ" {
fmt.Printf("%v\n", out)
}
}
}
I commented out the following code intentionally to make it get stuck forever:
// else {
// results <- DemoResult{Name: "failed", Rate: 999}
// }
It seems like we should make the results' length the same as the jobs'. I was wondering if we could make them have different lengths?
Use a wait group to detect when the workers are done. Close the results channel when the workers are done. Receive results until the channel is closed.
func worker(wg *sync.WaitGroup, id int,
jobs <-chan int,
results chan<- DemoResult) {
// Decrement wait group counter on return from
// function.
defer wg.Done()
⋮
}
func main() {
⋮
// Declare wait group and increment counter for
// each worker.
var wg sync.WaitGroup
for w := 1; w <= 3; w++ {
wg.Add(1)
go worker(&wg, w, jobs, results)
}
⋮
// Wait for workers to decrement wait group
// counter to zero and close channel.
// Execute in goroutine so we can continue on
// to receiving values from results in main.
go func() {
wg.Wait()
close(results)
}()
⋮
// Loop until results is closed.
for out := range results {
⋮
}
}
https://go.dev/play/p/FOQwybMl7tM
I was wondering if we could make it have different length?
Absolutely, but you need some way of determining when you have reached the end of the results. This is the reason your example fails: currently main assumes there will be numJobs results (one per job) and waits for that many.
An alternative would be to use the channel's closure to indicate this, i.e. (playground):
package main
import (
"fmt"
"math/rand"
"sync"
"time"
)
type DemoResult struct {
Name string
Rate int
}
func random(min, max int) int {
rand.Seed(time.Now().UTC().UnixNano())
return rand.Intn(max-min) + min
}
func worker(id int, jobs <-chan int, results chan<- DemoResult) {
for j := range jobs {
fmt.Println("worker", id, "started job", j)
time.Sleep(time.Second)
fmt.Println("worker", id, "finished job", j)
myrand := random(1, 4)
if myrand == 2 {
results <- DemoResult{Name: "succ", Rate: j}
} // else {
// results <- DemoResult{Name: "failed", Rate: 999}
//}
}
}
func main() {
const numWorkers = 3
const numJobs = 5
jobs := make(chan int, numJobs)
results := make(chan DemoResult)
var wg sync.WaitGroup
wg.Add(numWorkers)
	for w := 1; w <= numWorkers; w++ {
		w := w // capture the loop variable for the closure (needed before Go 1.22)
		go func() {
			worker(w, jobs, results)
			wg.Done()
		}()
	}
go func() {
wg.Wait() // Wait for go routines to complete then close results channel
close(results)
}()
for j := 1; j <= numJobs; j++ {
jobs <- j
}
close(jobs)
for out := range results {
if out.Name == "succ" {
fmt.Printf("%v\n", out)
}
}
}
I have 3 merge sort implementations:
MergeSort: simple one without concurrency;
MergeSortSmart: concurrent, with the concurrency limited by the size of a buffered channel used as a semaphore. If the buffer is full, it falls back to the simple implementation;
MergeSortSmartBug: same strategy as the previous one, but with a small "refactor", passing the wg pointer to a helper function to reduce code duplication.
The first two work as expected, but the third one returns an empty slice instead of the sorted input. I couldn't understand what happened and couldn't find any answers either.
Here is the playground link for the code: https://play.golang.org/p/DU1ypbanpVi
package main
import (
"fmt"
"math/rand"
"runtime"
"sync"
)
type pass struct{}
var semaphore = make(chan pass, runtime.NumCPU())
func main() {
rand.Seed(10)
s := make([]int, 16)
for i := 0; i < 16; i++ {
s[i] = int(rand.Int31n(1000))
}
fmt.Println(s)
fmt.Println(MergeSort(s))
fmt.Println(MergeSortSmart(s))
fmt.Println(MergeSortSmartBug(s))
}
func merge(l, r []int) []int {
tmp := make([]int, 0, len(l)+len(r))
for len(l) > 0 || len(r) > 0 {
if len(l) == 0 {
return append(tmp, r...)
}
if len(r) == 0 {
return append(tmp, l...)
}
if l[0] <= r[0] {
tmp = append(tmp, l[0])
l = l[1:]
} else {
tmp = append(tmp, r[0])
r = r[1:]
}
}
return tmp
}
func MergeSort(s []int) []int {
if len(s) <= 1 {
return s
}
n := len(s) / 2
l := MergeSort(s[:n])
r := MergeSort(s[n:])
return merge(l, r)
}
func MergeSortSmart(s []int) []int {
if len(s) <= 1 {
return s
}
n := len(s) / 2
var wg sync.WaitGroup
wg.Add(2)
var l, r []int
select {
case semaphore <- pass{}:
go func() {
l = MergeSortSmart(s[:n])
<-semaphore
wg.Done()
}()
default:
l = MergeSort(s[:n])
wg.Done()
}
select {
case semaphore <- pass{}:
go func() {
r = MergeSortSmart(s[n:])
<-semaphore
wg.Done()
}()
default:
r = MergeSort(s[n:])
wg.Done()
}
wg.Wait()
return merge(l, r)
}
func MergeSortSmartBug(s []int) []int {
if len(s) <= 1 {
return s
}
n := len(s) / 2
var wg sync.WaitGroup
wg.Add(2)
l := mergeSmart(s[:n], &wg)
r := mergeSmart(s[n:], &wg)
wg.Wait()
return merge(l, r)
}
func mergeSmart(s []int, wg *sync.WaitGroup) []int {
var tmp []int
select {
case semaphore <- pass{}:
go func() {
tmp = MergeSortSmartBug(s)
<-semaphore
wg.Done()
}()
default:
tmp = MergeSort(s)
wg.Done()
}
return tmp
}
Why does the Bug version return an empty slice? How can I refactor the Smart version without doing two selects one after the other?
Sorry, I couldn't reproduce this behavior in a smaller example.
The problem is not with the WaitGroup itself; it's with your concurrency handling. Your mergeSmart function launches a goroutine and returns the tmp variable without waiting for the goroutine to finish.
You might want to try a pattern more like this:
leftchan := make(chan []int)
rightchan := make(chan []int)
go mergeSmart(s[:n], leftchan)
go mergeSmart(s[n:], rightchan)
l := <-leftchan
r := <-rightchan
Or you can use a single channel if order doesn't matter.
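For instance, a single-channel version could look like this (just a sketch: MergeSortSmartSingleChan is a made-up name, and it assumes mergeSmart is changed to send its sorted result on the channel instead of returning it, as in the snippet above):

func MergeSortSmartSingleChan(s []int) []int {
	if len(s) <= 1 {
		return s
	}
	n := len(s) / 2

	results := make(chan []int)
	go mergeSmart(s[:n], results)
	go mergeSmart(s[n:], results)

	// The two sorted halves may arrive in either order; merge only needs
	// each input slice to be sorted, so the final result is the same.
	a := <-results
	b := <-results
	return merge(a, b)
}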
mergeSmart doesn't wait on the wg, so it returns a tmp that hasn't received a value yet. You could probably repair it by passing a reference to the destination slice into the function, instead of returning a slice.
Look at the mergeSmart function. When the select enters the first case, the goroutine is launched and the function immediately returns tmp (which is still nil). In that case there is no way to get the right value. (See debugging prints here: https://play.golang.org/p/IedaY3muso2)
Maybe pass preallocated slices by reference?
I implemented both suggestions (passing slice by reference and using channels) and the (working!) result is here: https://play.golang.org/p/DcDC_-NjjAH
package main
import (
"fmt"
"math/rand"
"runtime"
"sync"
)
type pass struct{}
var semaphore = make(chan pass, runtime.NumCPU())
func main() {
rand.Seed(10)
s := make([]int, 16)
for i := 0; i < 16; i++ {
s[i] = int(rand.Int31n(1000))
}
fmt.Println(s)
fmt.Println(MergeSort(s))
fmt.Println(MergeSortSmart(s))
fmt.Println(MergeSortSmartPointer(s))
fmt.Println(MergeSortSmartChan(s))
}
func merge(l, r []int) []int {
tmp := make([]int, 0, len(l)+len(r))
for len(l) > 0 || len(r) > 0 {
if len(l) == 0 {
return append(tmp, r...)
}
if len(r) == 0 {
return append(tmp, l...)
}
if l[0] <= r[0] {
tmp = append(tmp, l[0])
l = l[1:]
} else {
tmp = append(tmp, r[0])
r = r[1:]
}
}
return tmp
}
func MergeSort(s []int) []int {
if len(s) <= 1 {
return s
}
n := len(s) / 2
l := MergeSort(s[:n])
r := MergeSort(s[n:])
return merge(l, r)
}
func MergeSortSmart(s []int) []int {
if len(s) <= 1 {
return s
}
n := len(s) / 2
var wg sync.WaitGroup
wg.Add(2)
var l, r []int
select {
case semaphore <- pass{}:
go func() {
l = MergeSortSmart(s[:n])
<-semaphore
wg.Done()
}()
default:
l = MergeSort(s[:n])
wg.Done()
}
select {
case semaphore <- pass{}:
go func() {
r = MergeSortSmart(s[n:])
<-semaphore
wg.Done()
}()
default:
r = MergeSort(s[n:])
wg.Done()
}
wg.Wait()
return merge(l, r)
}
func MergeSortSmartPointer(s []int) []int {
if len(s) <= 1 {
return s
}
n := len(s) / 2
var l, r []int
var wg sync.WaitGroup
wg.Add(2)
mergeSmartPointer(&l, s[:n], &wg)
mergeSmartPointer(&r, s[n:], &wg)
wg.Wait()
return merge(l, r)
}
func mergeSmartPointer(tmp *[]int, s []int, wg *sync.WaitGroup) {
select {
case semaphore <- pass{}:
go func() {
*tmp = MergeSortSmartPointer(s)
<-semaphore
wg.Done()
}()
default:
*tmp = MergeSort(s)
wg.Done()
}
}
func MergeSortSmartChan(s []int) []int {
if len(s) <= 1 {
return s
}
n := len(s) / 2
lchan := make(chan []int)
rchan := make(chan []int)
go mergeSmartChan(s[:n], lchan)
go mergeSmartChan(s[n:], rchan)
l := <-lchan
r := <-rchan
return merge(l, r)
}
func mergeSmartChan(s []int, c chan []int) {
select {
case semaphore <- pass{}:
go func() {
c <- MergeSortSmartChan(s)
<-semaphore
}()
default:
c <- MergeSort(s)
}
}
I understood 100% what I was doing wrong, thanks!
And for future reference, here's the benchmark of sorting a slice of 100,000 elements:
$ go test -bench=.
goos: linux
goarch: amd64
cpu: Intel(R) Core(TM) i5-9300H CPU @ 2.40GHz
BenchmarkMergeSort-8 97 12230309 ns/op
BenchmarkMergeSortSmart-8 181 7209844 ns/op
BenchmarkMergeSortSmartPointer-8 163 7483136 ns/op
BenchmarkMergeSortSmartChan-8 156 8149585 ns/op
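For completeness, the benchmark file could look roughly like this. This is only a sketch: it assumes the sort functions from the playground link live in the same package, and makeBenchInput, the 100,000-element size and the seed are stand-ins.

// mergesort_bench_test.go (same package as the sort functions above)
package main

import (
	"math/rand"
	"testing"
)

// makeBenchInput builds the pseudo-random input slice used by each benchmark.
func makeBenchInput() []int {
	rand.Seed(10)
	s := make([]int, 100000)
	for i := range s {
		s[i] = int(rand.Int31n(1000))
	}
	return s
}

func BenchmarkMergeSort(b *testing.B) {
	s := makeBenchInput()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		MergeSort(s)
	}
}

func BenchmarkMergeSortSmart(b *testing.B) {
	s := makeBenchInput()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		MergeSortSmart(s)
	}
}

func BenchmarkMergeSortSmartPointer(b *testing.B) {
	s := makeBenchInput()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		MergeSortSmartPointer(s)
	}
}

func BenchmarkMergeSortSmartChan(b *testing.B) {
	s := makeBenchInput()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		MergeSortSmartChan(s)
	}
}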
Suppose I have a helper function helper(n int) which returns a slice of integers of variable length. I would like to run helper(n) in parallel for various values of n and collect the output in one big slice. My first attempt at this is the following:
package main
import (
"fmt"
"golang.org/x/sync/errgroup"
)
func main() {
out := make([]int, 0)
ch := make(chan int)
go func() {
for i := range ch {
out = append(out, i)
}
}()
g := new(errgroup.Group)
for n := 2; n <= 3; n++ {
n := n
g.Go(func() error {
for _, i := range helper(n) {
ch <- i
}
return nil
})
}
if err := g.Wait(); err != nil {
panic(err)
}
close(ch)
// time.Sleep(time.Second)
fmt.Println(out) // should have the same elements as [0 1 0 1 2]
}
func helper(n int) []int {
out := make([]int, 0)
for i := 0; i < n; i++ {
out = append(out, i)
}
return out
}
However, if I run this example I do not get all 5 expected values; instead I get
[0 1 0 1]
(If I uncomment the time.Sleep I do get all five values, [0 1 2 0 1], but this is not an acceptable solution).
It seems that the problem with this is that out is being updated in a goroutine, but main prints it (and returns) before that goroutine has finished updating it.
One thing that would work is using a buffered channel of size 5:
func main() {
ch := make(chan int, 5)
g := new(errgroup.Group)
for n := 2; n <= 3; n++ {
n := n
g.Go(func() error {
for _, i := range helper(n) {
ch <- i
}
return nil
})
}
if err := g.Wait(); err != nil {
panic(err)
}
close(ch)
out := make([]int, 0)
for i := range ch {
out = append(out, i)
}
fmt.Println(out) // should have the same elements as [0 1 0 1 2]
}
However, although in this simplified example I know what the size of the output should be, in my actual application this is not known a priori. Essentially what I would like is an 'infinite' buffer such that sending to the channel never blocks, or a more idiomatic way to achieve the same thing; I've read https://blog.golang.org/pipelines but wasn't able to find a close match to my use case. Any ideas?
In this version of the code, execution blocks until ch is closed.
ch is always closed at the end of the goroutine responsible for pushing into it. Because the program pushes to ch inside a goroutine, there is no need to use a buffered channel.
package main
import (
"fmt"
"golang.org/x/sync/errgroup"
)
func main() {
ch := make(chan int)
go func() {
g := new(errgroup.Group)
for n := 2; n <= 3; n++ {
n := n
g.Go(func() error {
for _, i := range helper(n) {
ch <- i
}
return nil
})
}
if err := g.Wait(); err != nil {
panic(err)
}
close(ch)
}()
out := make([]int, 0)
for i := range ch {
out = append(out, i)
}
fmt.Println(out) // should have the same elements as [0 1 0 1 2]
}
func helper(n int) []int {
out := make([]int, 0)
for i := 0; i < n; i++ {
out = append(out, i)
}
return out
}
Here is the fixed version of the first code; it is more convoluted, but it demonstrates the usage of sync.WaitGroup.
package main
import (
"fmt"
"sync"
"golang.org/x/sync/errgroup"
)
func main() {
out := make([]int, 0)
ch := make(chan int)
var wg sync.WaitGroup
wg.Add(1)
go func() {
defer wg.Done()
for i := range ch {
out = append(out, i)
}
}()
g := new(errgroup.Group)
for n := 2; n <= 3; n++ {
n := n
g.Go(func() error {
for _, i := range helper(n) {
ch <- i
}
return nil
})
}
if err := g.Wait(); err != nil {
panic(err)
}
close(ch)
wg.Wait()
// time.Sleep(time.Second)
fmt.Println(out) // should have the same elements as [0 1 0 1 2]
}
func helper(n int) []int {
out := make([]int, 0)
for i := 0; i < n; i++ {
out = append(out, i)
}
return out
}
Why is the result not as expected with the "-race" flag?
I expected the same result, 1000000, both with the "-race" flag and without it.
https://gist.github.com/romanitalian/f403ceb6e492eaf6ba953cf67d5a22ff
package main
import (
"fmt"
"runtime"
"sync/atomic"
"time"
)
//$ go run -race main_atomic.go
//954203
//
//$ go run main_atomic.go
//1000000
type atomicCounter struct {
val int64
}
func (c *atomicCounter) Add(x int64) {
atomic.AddInt64(&c.val, x)
runtime.Gosched()
}
func (c *atomicCounter) Value() int64 {
return atomic.LoadInt64(&c.val)
}
func main() {
counter := atomicCounter{}
for i := 0; i < 100; i++ {
go func(no int) {
for i := 0; i < 10000; i++ {
counter.Add(1)
}
}(i)
}
time.Sleep(time.Second)
fmt.Println(counter.Value())
}
The reason the result is not the same is that time.Sleep(time.Second) does not guarantee that all of your goroutines will finish within that one second. Even if you execute go run main.go, you are not guaranteed to get the same result every time. You can test this by putting time.Millisecond instead of time.Second; you will see much more inconsistent results.
Whatever value you pass to time.Sleep, it does not guarantee that all of your goroutines will have finished; it just makes it less likely that some of them won't finish in time.
For consistent results, you would want to synchronise your goroutines a bit. You can use WaitGroup or channels.
With WaitGroup:
//rest of the code above is the same
func main() {
counter := atomicCounter{}
var wg sync.WaitGroup
for i := 0; i < 100; i++ {
wg.Add(1)
go func(no int) {
for i := 0; i < 10000; i++ {
counter.Add(1)
}
wg.Done()
}(i)
}
wg.Wait()
fmt.Println(counter.Value())
}
With channels:
func main() {
valStream := make(chan int)
doneStream := make(chan int)
result := 0
for i := 0; i < 100; i++ {
go func() {
for i := 0; i < 10000; i++ {
valStream <- 1
}
doneStream <- 1
}()
}
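	// Collector: count the "done" signals from the 100 producer goroutines;
	// once all have reported, close valStream so the range loop below ends.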
go func() {
counter := 0
for count := range doneStream {
counter += count
if counter == 100 {
close(doneStream)
}
}
close(valStream)
}()
for val := range valStream {
result += val
}
fmt.Println(result)
}
How do I realize a nested iterator that takes a depth argument? The simplest case is depth = 1: a plain iterator which runs like a simple for loop.
func Iter () chan int {
ch := make(chan int);
go func () {
for i := 1; i < 60; i++ {
ch <- i
}
close(ch)
} ();
return ch
}
Output is 1,2,3...59
For depth = 2 the output would be "1,1" "1,2" ... "1,59" "2,1" ... "59,59"
For depth = 3 the output would be "1,1,1" ... "59,59,59"
I want to avoid writing nested for loops by hand. What is the solution here?
I don't know if it is possible to avoid nested loops, but one solution is to use a pipeline of channels. For example:
const ITER_N = 60
// ----------------
func _goFunc1(out chan string) {
for i := 1; i < ITER_N; i++ {
out <- fmt.Sprintf("%d", i)
}
close(out)
}
func _goFuncN(in chan string, out chan string) {
for j := range in {
for i := 1; i < ITER_N; i++ {
out <- fmt.Sprintf("%s,%d", j, i)
}
}
close(out)
}
// ----------------
// create the pipeline
func IterDepth(d int) chan string {
c1 := make(chan string)
go _goFunc1(c1)
var c2 chan string
for ; d > 1; d-- {
c2 = make(chan string)
go _goFuncN(c1, c2)
c1 = c2
}
return c1
}
You can test it with:
func main() {
c := IterDepth(2)
for i := range c {
fmt.Println(i)
}
}
I usually implement iterators using closures. Multiple dimensions don't make the problem much harder. Here's one example of how to do this:
package main
import "fmt"
func iter(min, max, depth int) func() ([]int, bool) {
s := make([]int, depth)
for i := range s {
s[i] = min
}
s[0] = min - 1
return func() ([]int, bool) {
s[0]++
for i := 0; i < depth-1; i++ {
if s[i] >= max {
s[i] = min
s[i+1]++
}
}
if s[depth-1] >= max {
return nil, false
}
return s, true
}
}
func main() {
// Three dimensions, ranging between [1,4)
i := iter(1, 4, 3)
for s, ok := i(); ok; s, ok = i() {
fmt.Println(s)
}
}
Try it out on the Playground.
It'd be a simple change, for example, to give the limits as a single int slice instead, so that you could have per-dimension limits, if such a thing were necessary.
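A sketch of that variant (iterDims and its maxs parameter are made-up names; maxs[i] is the exclusive upper bound for dimension i, and len(maxs) is the depth):

func iterDims(min int, maxs []int) func() ([]int, bool) {
	depth := len(maxs)
	s := make([]int, depth)
	for i := range s {
		s[i] = min
	}
	s[0] = min - 1
	return func() ([]int, bool) {
		s[0]++
		// Carry an overflow into the next dimension, each with its own limit.
		for i := 0; i < depth-1; i++ {
			if s[i] >= maxs[i] {
				s[i] = min
				s[i+1]++
			}
		}
		if s[depth-1] >= maxs[depth-1] {
			return nil, false
		}
		return s, true
	}
}

It is used the same way as iter above, e.g. i := iterDims(1, []int{4, 4, 4}) reproduces the three-dimensional example.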