There is a website called Rosetta Code that collects algorithms in many languages, so you can compare implementations when learning a new language. One of the Go solutions there looked interesting to me, but I don't fully understand it.
package main

import "fmt"

func fib(c chan int) {
	a, b := 0, 1
	for {
		c <- a
		a, b = b, a+b
	}
}

func main() {
	c := make(chan int)
	go fib(c)
	for i := 0; i < 10; i++ {
		fmt.Println(<-c)
	}
}
Here are my questions:
How does the infinite for loop know when to stop?
How does the c channel communicate this?
What is the logical sequence between the func calls?
Thanks for the help kind strangers.
How does the infinite for loop know when to stop?
As you said: This is an infinite loop and doesn't stop at all (as long as the program is running).
How does the c channel communicate this?
The channel c doesn't communicate stopping the for loop at all, the loop is not stopped. The sole purpose of c is to deliver the next number in the sequence from the calculation site (the infinite for loop) to the usage site (the print loop).
What is the logical sequence between the func calls?
go fib(c) starts fib as a goroutine. This is the one and only function call (*) ever happening in your code. Once go fib(c) has happened you have two concurrent things running: 1. the main function, which will print 10 times, and 2. fib(c), which does the computation.
The interesting stuff -- the synchronization between main() and fib(c) -- happens when main executes <-c and ("in the same moment") fib executes c <- a. Both functions, main and fib, run until they both reach those lines. Once both are "at that line", both operations happen "at the same time": fib writes/sends to c and main consumes/receives from c simultaneously. Afterwards both functions, main and fib, continue independently.
Once main is done the program finishes (this also "stops" fib's infinite loop).
(*) for the nitpickers: besides the fmt.Println and make calls, which are irrelevant for understanding this code.
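As the answer notes, fib's loop only "stops" when the whole program exits. If you wanted the generator to stop on its own, one common pattern is a done channel combined with select. This is a sketch, not the Rosetta Code original; the done channel and the firstN helper are additions:

```go
package main

import "fmt"

// fib sends Fibonacci numbers on c until done is closed.
func fib(c chan<- int, done <-chan struct{}) {
	a, b := 0, 1
	for {
		select {
		case c <- a:
			a, b = b, a+b
		case <-done:
			return // consumer is finished; stop instead of looping forever
		}
	}
}

// firstN collects the first n values, then tells the generator to stop.
func firstN(n int) []int {
	c := make(chan int)
	done := make(chan struct{})
	go fib(c, done)
	out := make([]int, 0, n)
	for i := 0; i < n; i++ {
		out = append(out, <-c)
	}
	close(done)
	return out
}

func main() {
	fmt.Println(firstN(10))
}
```

The select makes the generator responsive to either event: a consumer ready to receive, or a request to shut down.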
I am trying out concurrency in Go with a simple example problem: finding the nth prime palindrome, e.g. the 1st to 9th prime palindromes are 2, 3, 5, 7, 11, 101, 131, 151, 181. I am stuck and have no idea what to do with it.
My current code is like this:
n := 9999999
count := 0
primePalindrome := 0
for i := 0; count < n; i++ {
	go PrimeNumber(i)
	go PalindromeNumber(i)
	if PrimeNumber(i) && PalindromeNumber(i) {
		primePalindrome = i
		count++
	}
}
How do I know that PrimeNumber(i) and PalindromeNumber(i) have already finished executing in their goroutines, so that I can use their results in the if condition to get the number?
Here's my solution: https://go.dev/play/p/KShMctUK9yg
The question is, what if I want to find the 9999999th palindrome with faster runtime using Go concurrency, how to apply concurrency in PrimeNumber() and PalindromeNumber() function?
Similar to Torek, I add concurrency by creating multiple workers that can each check a number, thereby checking several numbers in parallel.
Algorithmically, the program works like this. One goroutine generates possible prime palindromes. A pool of multiple goroutines all check candidates. The main goroutine collects results. When we have at least enough results to provide the nth prime palindrome, we sort the list and then return the answer.
Look closely at the use of channels and a wait group to communicate between the goroutines:
multiple worker goroutines. Their job is to run palindrome and then prime checks for candidate numbers. They listen on the candidates channel; when the channel is closed, they end, communicating that they're done to the wg sync.WaitGroup.
a single candidate generator. This goroutine sends each candidate to one of the workers by sending to the candidates channel. If it finds the done channel to be closed, it ends.
the main goroutine also functions as the collector of results. It reads from the resc (results) channel and adds the results to a list. When the list is at least the required length, it closes done, signaling the generator to stop generating candidates.
The done channel may seem redundant, but it's important: the main results collector knows when we are done generating candidates, but it isn't the one sending to candidates. If we closed candidates in main, there's a good chance the generator would attempt to write to it, and that would crash the program. The generator is the one writing candidates; it is the only goroutine that "knows" no more candidates will be written.
Note that this implementation generates at least n prime palindromes. Since it generates the prime palindromes in parallel, there's no guarantee that we have them in order. We might generate up to prime palindrome n+m where m is the number of workers minus one, in the worst case.
There's a lot of room for improvement here. I'm pretty sure the generator and collector roles could be combined in one select loop on the candidate and result channels. The program also seems to have a very hard time if n is as big as 9999999 when I run it on my Windows machine - see if your results vary.
Edit: Performance enhancements
If you're looking to improve performance, here's a few things I found and noticed last night.
No need to check even numbers. for i := start; ; i += 1 + (i % 2) skips to the next odd number, then adds 2 every other time to keep on the odd numbers.
all palindromes with an even number of decimal digits are divisible by 11, so whole ranges of even-length numbers can be skipped (11 itself being the only exception). I do this by jumping ahead to math.Pow10(len(str)) whenever the decimal representation has even length, adding another digit. This is what caused the program to stop outputting numbers for large amounts of time - every even-length range of numbers cannot produce prime palindromes.
if the number's decimal notation starts with an even number it can't be a prime palindrome unless it's only one digit long. Same is true of 5. In the code below I add math.Pow10(len(str)-1) to the number to move to the next odd numbered sequence. If the number starts with 5, I double that to move to the next odd numbered sequence.
These tricks make the code a lot more performant, but it's still a brute force at the end of the day and I still haven't gotten even close to 9999999.
// send candidates out
go func() {
	for i := start; ; i += 1 + (i % 2) {
		str := strconv.FormatInt(i, 10)
		if len(str)%2 == 0 && i != 11 {
			// even digit count: jump to the smallest number with one more digit;
			// subtract 2 so the loop's increment lands exactly on newi
			newi := int64(math.Pow10(len(str))) + 1
			log.Printf("%d => %d", i, newi)
			i = newi - 2
			continue
		}
		if first := str[0] - '0'; first%2 == 0 || first == 5 {
			if i < 10 {
				continue
			}
			nexti := int64(math.Pow10(len(str) - 1))
			if first == 5 {
				nexti *= 2
			}
			newi := i + nexti
			log.Printf("%d -> %d", i, newi)
			i = newi - 2
			continue
		}
		select {
		case _, ok := <-done:
			if !ok {
				close(candidates)
				return
			}
		case candidates <- i:
			continue
		}
	}
}()
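The answer above shows only the generator; for completeness, here is one plausible shape for the worker side and the wiring the bullet points describe. The helper names (isPrime, isPalindrome, firstPrimePalindromes) are mine rather than the author's, and the prime test is naive trial division:

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"sync"
)

// isPalindrome reports whether i's decimal digits read the same both ways.
func isPalindrome(i int64) bool {
	s := strconv.FormatInt(i, 10)
	for l, r := 0, len(s)-1; l < r; l, r = l+1, r-1 {
		if s[l] != s[r] {
			return false
		}
	}
	return true
}

// isPrime is a simple trial-division primality test.
func isPrime(i int64) bool {
	if i < 2 {
		return false
	}
	for d := int64(2); d*d <= i; d++ {
		if i%d == 0 {
			return false
		}
	}
	return true
}

// worker checks candidates and sends prime palindromes to resc.
// It exits when candidates is closed, reporting via wg.
func worker(wg *sync.WaitGroup, candidates <-chan int64, resc chan<- int64) {
	defer wg.Done()
	for i := range candidates {
		if isPalindrome(i) && isPrime(i) {
			resc <- i
		}
	}
}

// firstPrimePalindromes finds the n smallest prime palindromes with a pool
// of workers. Results arrive out of order, so we overshoot and sort.
func firstPrimePalindromes(n, numWorkers int) []int64 {
	candidates := make(chan int64)
	resc := make(chan int64)
	var wg sync.WaitGroup
	wg.Add(numWorkers)
	for i := 0; i < numWorkers; i++ {
		go worker(&wg, candidates, resc)
	}
	go func() {
		wg.Wait()
		close(resc) // safe: every sender has exited
	}()
	done := make(chan struct{})
	go func() { // generator: owns candidates, so it is the one to close it
		for i := int64(2); ; i++ {
			select {
			case <-done:
				close(candidates)
				return
			case candidates <- i:
			}
		}
	}()
	var results []int64
	for r := range resc {
		results = append(results, r)
		if len(results) == n {
			close(done) // enough results; stop generating, then drain
		}
	}
	sort.Slice(results, func(a, b int) bool { return results[a] < results[b] })
	return results[:n]
}

func main() {
	fmt.Println(firstPrimePalindromes(9, 4))
}
```

Because candidates are generated in increasing order and we drain every in-flight result before sorting, the first n entries after the sort are the n smallest.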
There are multiple issues to solve here:
we want to spin off "is prime" and "is palindrome" tests
we want to sequentially order the numbers that pass the tests
and of course, we have to express the "spin off" and "wait for result" parts of the problem in our programming language (in this case, Go).
(Besides this, we might want to optimize our primality testing, perhaps with a Sieve of Eratosthenes algorithm or similar, which may also involve parallelism.)
The middle problem here is perhaps the hardest one. There is a fairly obvious way to do it, though: we can observe that if we assign an ascending-order number to each number tested, the answers that come back (n is / is not suitable), even if they come back in the wrong order, are easily re-shuffled into order.
Since your overall loop increments by 1 (which is kind of a mistake[1]), the numbers tested are themselves suitable for this purpose. So we should create a Go channel whose type is a pair of results: here is the number I tested, and here is my answer:
type result struct {
	tested int  // the number tested
	passed bool // pass/fail result
}

testC := make(chan int)
resultC := make(chan result)
Next, we'll use a typical "pool of workers". Then we run our loop of things-to-test. Here is your existing loop:
for i := 0; count < n; i++ {
	go PrimeNumber(i)
	go PalindromeNumber(i)
	if PrimeNumber(i) && PalindromeNumber(i) {
		primePalindrome = i
		count++
	}
}
We'll restructure this as:
count := 0
busy := 0
results := []result{}
for toTest := 0; ; toTest++ {
	// if all workers are busy, wait for one result
	if busy >= numWorkers {
		result := <-resultC // get one result
		busy--
		results = addResult(results, result)
		if result.passed {
			count++ // passed: increment count
			if count >= n {
				break
			}
		}
	}
	// still working, so test this number
	testC <- toTest
	busy++
}
close(testC) // tell workers to stop working
// collect remaining results
for result := range resultC {
	results = addResult(results, result)
}
(The "busy" test is a bit klunky; you could use a select to send or receive, whichever you can do first, instead, but if you do that, the optimizations outlined below get a little more complicated.)
This does mean our standard worker pool pattern needs to close the result channel resultC, which means we'll add a sync.WaitGroup when we spawn off the numWorkers workers:
var wg sync.WaitGroup
wg.Add(numWorkers)
for i := 0; i < numWorkers; i++ {
	go worker(&wg, testC, resultC)
}
go func() {
	wg.Wait()
	close(resultC)
}()
This makes our for result := range resultC loop work; the workers all stop (and return and call wg.Done() via their defers, which are not shown here) when we close testC so resultC is closed once the last worker exits.
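The worker itself, with the deferred wg.Done() call mentioned above, is not shown in the answer. A minimal self-contained sketch might look like this, where check is a stand-in for the real prime-and-palindrome test:

```go
package main

import (
	"fmt"
	"sync"
)

type result struct {
	tested int  // the number tested
	passed bool // pass/fail result
}

// check is a placeholder for the combined prime-and-palindrome test;
// here it just flags odd numbers so the sketch runs on its own.
func check(n int) bool { return n%2 == 1 }

// worker pulls numbers from testC until it is closed, sending one result
// per number; the deferred wg.Done() is the part elided in the answer text.
func worker(wg *sync.WaitGroup, testC <-chan int, resultC chan<- result) {
	defer wg.Done()
	for n := range testC {
		resultC <- result{tested: n, passed: check(n)}
	}
}

// runPool spins up numWorkers workers, feeds them 0..max-1, and returns
// how many inputs passed.
func runPool(numWorkers, max int) int {
	testC := make(chan int)
	resultC := make(chan result)
	var wg sync.WaitGroup
	wg.Add(numWorkers)
	for i := 0; i < numWorkers; i++ {
		go worker(&wg, testC, resultC)
	}
	go func() {
		wg.Wait()
		close(resultC) // close results only after every sender has returned
	}()
	go func() {
		for i := 0; i < max; i++ {
			testC <- i
		}
		close(testC) // tell workers to stop
	}()
	passed := 0
	for r := range resultC {
		if r.passed {
			passed++
		}
	}
	return passed
}

func main() {
	fmt.Println(runPool(4, 10))
}
```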
Now we have one more problem, which is: the results come back in semi-random order. That's why we have a slice of results. The addResult function needs to expand the slice and insert the result in the proper position, using the value tested. When the main loop reaches the break statement, the number in toTest is at least the n'th palindromic prime, but it may be greater than the n'th. So we need to collect the remaining results and look backwards to see if some earlier number was in fact the n'th.
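addResult itself is left unwritten in the answer; one plausible implementation that keeps the slice ordered by the tested value (the name and shape are inferred from the description, not the author's code) is:

```go
package main

import "fmt"

type result struct {
	tested int  // the number tested
	passed bool // pass/fail result
}

// addResult inserts r into results, keeping the slice sorted by tested.
func addResult(results []result, r result) []result {
	i := 0
	for i < len(results) && results[i].tested < r.tested {
		i++ // find the insertion point
	}
	results = append(results, result{}) // grow by one
	copy(results[i+1:], results[i:])    // shift the tail right
	results[i] = r                      // drop r into its slot
	return results
}

func main() {
	rs := []result{}
	for _, n := range []int{5, 1, 3} { // results arriving out of order
		rs = addResult(rs, result{tested: n, passed: n == 3})
	}
	fmt.Println(rs)
}
```

A binary search (sort.Search) would be faster for large slices, but a linear scan keeps the sketch obvious.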
There are a number of optimizations to make at this point: in particular, if we've tested numbers through k and they're all known to have passed or failed and k + numWorkers < n, we no longer need any of these results (whether they passed or failed) so we can shrink the results slice. Or, if we're interested in building a table of palindromic primes, we can do that, or anything else we might choose. The above is not meant to be the best solution, just a solution.
Note again that we "overshoot": whatever numWorkers is, we may test up to numWorkers-1 values that we didn't need to test at all. That, too, might be optimizable by having each worker quit early (using some sort of quit indicator, whether that's a "done" channel or just a sync/atomic variable) if they're working on a number that's higher than the now-known-to-be-at-least-n'th value.
[1] We can cut the problem in half by starting with answers pre-loaded with 2, or 1 and 2 if you choose to consider 1 prime—see also http://ncatlab.org/nlab/show/too+simple+to+be+simple. Then we run our loop from 3 upwards by 2 each time, so that we don't even bother testing even numbers, since 2 is the only even prime number.
The answer depends on many aspects of your problem.
For instance, if each step is independent, you can use the example below. If the next step depends on the previous one, you need to find another solution.
For instance: whether a number is a palindrome does not depend on the previous number, so you can parallelize the palindrome detection, while a sieve-style prime computation is sequential code.
Perhaps check whether a palindrome is prime by checking divisors up to the square root of that number. You need to benchmark it.
numbers := make(chan int, 1000) // add some buffer

var wg sync.WaitGroup
for … { // number of consumers
	wg.Add(1)
	go consume(&wg, numbers)
}

for i … {
	numbers <- i
}
close(numbers)

wg.Wait()
// here all consumers ended properly
…

func consume(wg *sync.WaitGroup, numbers chan int) {
	defer wg.Done()
	for i := range numbers {
		// use i
	}
}
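To make the fragment runnable, here is a filled-in version. The bounds (4 consumers, numbers 1 through 100), the palindrome check, and the atomic counter are illustrative choices, not part of the original answer:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// isPalindrome reports whether n's decimal digits read the same both ways.
func isPalindrome(n int) bool {
	s := fmt.Sprint(n)
	for l, r := 0, len(s)-1; l < r; l, r = l+1, r-1 {
		if s[l] != s[r] {
			return false
		}
	}
	return true
}

// consume drains numbers, counting palindromes via the shared atomic counter.
func consume(wg *sync.WaitGroup, numbers chan int, count *int64) {
	defer wg.Done()
	for i := range numbers { // ends when numbers is closed
		if isPalindrome(i) {
			atomic.AddInt64(count, 1)
		}
	}
}

// countPalindromes starts `consumers` workers and feeds them 1..max.
func countPalindromes(max, consumers int) int64 {
	var count int64
	numbers := make(chan int, 1000) // add some buffer
	var wg sync.WaitGroup
	for n := 0; n < consumers; n++ {
		wg.Add(1)
		go consume(&wg, numbers, &count)
	}
	for i := 1; i <= max; i++ {
		numbers <- i
	}
	close(numbers)
	wg.Wait() // here all consumers ended properly
	return count
}

func main() {
	fmt.Println(countPalindromes(100, 4))
}
```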
A fairly naive Go question. I was going through the Go concurrency tutorial and I came across this: https://tour.golang.org/concurrency/4.
I modified the code to add a print statement in the fibonacci function. So the code looks something like
package main

import (
	"fmt"
)

func fibonacci(n int, c chan int) {
	x, y := 0, 1
	for i := 0; i < n; i++ {
		c <- x
		x, y = y, x+y
		fmt.Println("here")
	}
	close(c)
}

func main() {
	c := make(chan int, 10)
	go fibonacci(cap(c), c)
	for i := range c {
		fmt.Println(i)
	}
}
And I got this as an output
here
here
here
here
here
here
here
here
here
here
0
1
1
2
3
5
8
13
21
34
I was expecting here and the numbers to be interleaved. (Since the routine gets executed concurrently)
I think I am missing something basic about go-routines. Not quite sure what though.
A few things here.
You have 2 goroutines, one running main(), and one running fibonacci(). Because this is a small program, there isn't a good reason for the go scheduler not to run them one after another on the same thread, so that's what happens consistently, though it isn't guaranteed. Because the goroutine in main() is waiting for the chan, the fibonacci() routine is scheduled first. It's important to remember that goroutines aren't threads, they're routines that the go scheduler runs on threads according to its liking.
Because you're passing the capacity of the buffered channel to fibonacci(), there will almost certainly (never rely on this behavior) be cap(c) heres printed, after which the channel is filled, the for loop finishes, the chan is closed, and the goroutine finishes. Then the main() goroutine is scheduled and cap(c) Fibonacci numbers will be printed. If the buffered chan had filled up, then main() would have been rescheduled:
https://play.golang.org/p/_IgFIO1K-Dc
By sleeping you can tell the go scheduler to give up control. But in practice never do this. Restructure in some way or, if you must, use a Waitgroup. See: https://play.golang.org/p/Ln06-NYhQDj
I think you're trying to do this: https://play.golang.org/p/8Xo7iCJ8Gj6
I think what you are observing is that Go has its own scheduler, and at the same time there is a distinction between "concurrency" and "parallelism". In the words of Rob Pike: Concurrency is not Parallelism
Goroutines are much more lightweight than OS threads and they are managed in "userland" (within the Go process) as opposed to the operating system. Some programs have many thousands (even tens of thousands) of goroutines running, whilst there would certainly be far fewer operating system threads allocated. (This is one of Go's major strengths in asynchronous programs with many routines)
Because your program is so simple, and the channel buffered, it does not block on writing to the channel:
c <- x
The fibonacci goroutine isn't getting preempted before it completes the short loop.
Even the fmt.Println("here") doesn't deterministically introduce preemption - I learned something myself there in writing this answer. It is buffered, like the analogous printf and scanf from C.
(see the source code https://github.com/golang/go/blob/master/src/fmt/print.go)
For interest, if you wanted to artificially control the number of OS threads, you can set the GOMAXPROCS environment variable on the command line:
~$ GOMAXPROCS=1 go run main.go
However, with your simple program there probably would be no discernable difference, because the Go runtime is still perfectly capable of scheduling many goroutines against 1 OS thread.
For example, here is a minor variation of your program. By making the channel buffer smaller (5), but still iterating 10 times, we introduce a point at which the fibonacci goroutine can (but won't necessarily) be preempted, where it could block at least once on writing to the channel:
package main

import (
	"fmt"
)

func fibonacci(n int, c chan int) {
	x, y := 0, 1
	for i := 0; i < n; i++ {
		c <- x
		x, y = y, x+y
		fmt.Println("here")
	}
	close(c)
}

func main() {
	c := make(chan int, 5)
	go fibonacci(cap(c)*2, c)
	for i := range c {
		fmt.Println(i)
	}
}
~$ GOMAXPROCS=1 go run main.go
here
here
here
here
here
here
0
1
1
2
3
5
8
here
here
here
here
13
21
34
Long explanation here; the short explanation is that there are a multitude of reasons a goroutine can temporarily block, and those are ideal opportunities for the Go scheduler to schedule the execution of another goroutine.
If you add this after the fmt.Println in the fibonacci loop, you will see the results interleaved the way you would expect:
time.Sleep(1 * time.Second)
This gives the Go scheduler a reason to block the execution of the fibonacci() goroutine long enough to allow the main() goroutine to read from the channel.
I'm having trouble understanding the use of goroutines and channels in the tour of go. Referencing the code below from:
"https://tour.golang.org/concurrency/2"
package main

import "fmt"

func sum(s []int, c chan int) {
	sum := 0
	for _, v := range s {
		sum += v
	}
	c <- sum // send sum to c
}

func main() {
	s := []int{7, 2, 8, -9, 4, 0}
	c := make(chan int)
	go sum(s[:len(s)/2], c)
	go sum(s[len(s)/2:], c)
	x, y := <-c, <-c // receive from c
	fmt.Println(x, y, x+y)
}
It runs the sum functions as goroutines with the go keyword in front of them, but all they do is send values into a channel, so it seems they shouldn't have to be goroutines at all. However, when I remove the go keyword to run the functions normally, I get this error:
fatal error: all goroutines are asleep - deadlock!
goroutine 1 [chan send]:
main.sum(0xc420059f10, 0x3, 0x6, 0xc420088060)
/tmp/compile33.go:10 +0x5a
main.main()
/tmp/compile33.go:17 +0x99
I can't understand why goroutines are needed here. I might be misunderstanding the concept and would appreciate if anyone more familiar with go could shed some light.
Thanks,
Others have already pointed out in the comments that in terms of being an example, you obviously don't need to write this program with channels.
From your question, though, it sounds like you're curious about why separate goroutines are needed in order for the program to run.
To answer that, it might be helpful to think about how this might work in a world where you were only thinking about threads. You've got your main thread, and that thread invokes sum(s[:len(s)/2], c). So now the main thread gets to the c <- sum line in sum, and it blocks, because the channel is unbuffered - meaning there must be another listening thread to "take" from that channel in order for our main thread to put something into it. In other words, the threads are passing messages directly to each other, but there's no second thread to pass to. Deadlock!
In this context, goroutines and threads are functionally equivalent. So without a second goroutine, you've got your main goroutine calling...but nobody's picking up the telephone on the other end.
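One way to see this concretely: give the channel enough buffer and the sequential version (no go keyword) no longer deadlocks, because each send can complete without a receiver already waiting. A sketch of that variation:

```go
package main

import "fmt"

func sum(s []int, c chan int) {
	sum := 0
	for _, v := range s {
		sum += v
	}
	c <- sum // with a buffered channel, this send does not block
}

func main() {
	s := []int{7, 2, 8, -9, 4, 0}
	c := make(chan int, 2) // room for both results before anyone receives
	sum(s[:len(s)/2], c)   // plain calls, no goroutines, no deadlock
	sum(s[len(s)/2:], c)
	x, y := <-c, <-c
	fmt.Println(x, y, x+y) // prints 17 -5 12
}
```

This trades the hand-off semantics of an unbuffered channel for a small mailbox, which is exactly why the tour's unbuffered version needs a second goroutine on the other end.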
I have two questions:
a) Does it make sense to spin up multiple goroutines in a loop for something like calculating a math result?
b) Why doesn't my code work (this is my first attempt at goroutines)? I'm guessing it has something to do with closing the channel.
package main

import (
	"fmt"
	"math"
	"sync"
)

func main() {
	input := [][]int{
		[]int{10, 9},
		[]int{5, 2},
		[]int{4, 9},
	}
	var wg sync.WaitGroup
	c := make(chan int)
	for _, val := range input {
		wg.Add(1)
		go func(coordinates []int, c chan int) {
			defer wg.Done()
			c <- calculateDistance(coordinates[0], coordinates[1])
		}(val, c)
	}
	distances := []int{}
	for val := range c {
		distances = append(distances, val)
	}
	wg.Wait()
	fmt.Println(distances)
}

func calculateDistance(x int, y int) int {
	v := math.Exp2(float64(x)) + math.Exp2(float64(y))
	distance := math.Sqrt(v)
	return int(distance)
}
Playground link: https://play.golang.org/p/0iJ9hFnb8R
a) Yes, it can make sense to spin up multiple goroutines to do CPU-bound tasks, if you have multiple CPUs. It's also very important to profile your code to see if there is actually any benefit. You can use Go's built-in benchmark framework to help do this.
Because you're limited by CPU, a good start could be to do the work synchronously, then to bound your goroutines to the number of CPU cores instead of the number of items in your input list; but really the decision should be metrics-driven. Go provides an amazing toolchain, using benchmarks and pprof, to empirically determine the most efficient approach :)
b) https://play.golang.org/p/zGEQGC9EIy Your channel never closes, so your main goroutine never ends. The fixed example waits until all goroutines finish their work, then closes the channel.
range loops over channels terminate when the channel is closed. Since you never close the channel in your program, the main goroutine will eventually block forever, trying to receive from c.
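Applied to the code in the question, that fix might look like this. The distances helper is just a repackaging of the original main so the result can be inspected; the wg.Wait-then-close goroutine is the key change:

```go
package main

import (
	"fmt"
	"math"
	"sync"
)

func calculateDistance(x, y int) int {
	v := math.Exp2(float64(x)) + math.Exp2(float64(y))
	return int(math.Sqrt(v))
}

// distances runs one goroutine per coordinate pair and closes the results
// channel only after every sender has finished, so the range terminates.
func distances(input [][]int) []int {
	var wg sync.WaitGroup
	c := make(chan int)
	for _, val := range input {
		wg.Add(1)
		go func(coordinates []int) {
			defer wg.Done()
			c <- calculateDistance(coordinates[0], coordinates[1])
		}(val)
	}
	go func() {
		wg.Wait() // all senders are done...
		close(c)  // ...so it is now safe to close the channel
	}()
	out := []int{}
	for val := range c { // ends once c is closed and drained
		out = append(out, val)
	}
	return out
}

func main() {
	fmt.Println(distances([][]int{{10, 9}, {5, 2}, {4, 9}}))
}
```

Note the results may arrive in any order; if order matters, tag each result with its input index, as in the worker-pool answers above.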
Does it make sense to spin up multiple goroutines in a loop for something like calculating a math result?
Depends. If you haven't seen it yet, I can recommend Rob Pike's talk Concurrency is not parallelism. This may give you some intuition about where it is beneficial, and where it isn't.
Here is a code snippet from the official tutorial
package main

import "fmt"

func sum(s []int, c chan int) {
	sum := 0
	for _, v := range s {
		sum += v
	}
	c <- sum // send sum to c
}

func main() {
	s := []int{7, 2, 8, -9, 4, 0}
	c := make(chan int)
	go sum(s[:len(s)/2], c)
	go sum(s[len(s)/2:], c)
	x, y := <-c, <-c // receive from c
	fmt.Println(x, y, x+y)
}
Since we are doing the calculation in parallel, and each thread saves its result into the same channel, doesn't this screw up the data?
It's true that when you send two values over a channel from two different goroutines that the ordering is not necessarily guaranteed (unless you've done something else to coordinate their sends).
However, in this example, the ordering doesn't matter at all. Two values are being sent on the channel: the sum of the first half and the sum of the second.
go sum(s[:len(s)/2], c)
go sum(s[len(s)/2:], c)
Since the only thing those two values are used for is to calculate the total sum, the order doesn't matter at all. In fact, if you ran the example enough times you should see that x and y are often swapped, but the sum x+y is always the same.
Operations with channels are goroutine safe. You can read/write/close in any goroutine without corrupting anything that goes in or out of the channel. Basically, channels are synchronization points. Unbuffered channels (like in your case) will block on every write and read. When you write your code will block and wait until someone starts reading on the other end. When you read your code will block and wait until someone starts writing on the other end.
In your case calculations in goroutines will be done concurrently (not necessary in parallel) but will block on channel write. Your main goroutine will block on the first read, read the value. Block on the second read, read the value.
Even if you used a buffered channel - c := make(chan int, 2) - your goroutines would finish their calculations, write their results to the channel without blocking, and terminate. Nothing would be corrupted. In the meantime the main goroutine blocks on the channel read and waits until someone writes to it.
I suggest you read Effective Go, Go Concurrency Patterns, and try A Tour of Go.