Golang concurrency and blocking with a single channel, explanation needed

I'm playing around with the code presented on https://tour.golang.org/concurrency/5. My idea was that I could simplify the code by getting rid of the quit channel and still keep the correct program behavior - just for learning purposes.
Here is the code I've got (simplified it even more for better readability):
package main

import "fmt"
import "time"

func sendNumbers(c chan int) {
    for i := 0; i < 2; i++ {
        c <- i
    }
}

func main() {
    c := make(chan int)

    go func() {
        for i := 0; i < 2; i++ {
            fmt.Println(<-c)
        }
    }()

    time.Sleep(0 * time.Second)
    sendNumbers(c)
}
In this code, the goroutine I spawn should be able to receive 2 numbers before it returns. The sendNumbers() function I call next sends exactly 2 numbers to the channel c.
So, the expected output of the program is 2 lines: 0 and 1. However, what I get when I run the code on the page is just a single line containing 0.
Note the
    time.Sleep(0 * time.Second)
that I deliberately added before calling sendNumbers(c). If I change it to
    time.Sleep(1 * time.Second)
The output becomes as expected:
0
1
So I'm confused about what is happening. Can someone explain what is going on? Shouldn't both sends and receives block regardless of how much time passes before I call sendNumbers()?

In Go, the program exits as soon as the main function returns, regardless of whether other goroutines are still running. If you want to make sure the main function does not exit prematurely, you need some synchronization mechanism. https://nathanleclaire.com/blog/2014/02/15/how-to-wait-for-all-goroutines-to-finish-executing-before-continuing/
You don't absolutely have to use synchronization primitives; you could also do it with channels only, which is arguably the more idiomatic way.
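For instance, here is a minimal sketch of the original program with an extra done channel (the name is just illustrative) that main blocks on, so both values are guaranteed to be printed before the program exits, regardless of any Sleep:

package main

import "fmt"

func sendNumbers(c chan int) {
    for i := 0; i < 2; i++ {
        c <- i
    }
}

func main() {
    c := make(chan int)
    done := make(chan struct{}) // closed when the receiver is finished

    go func() {
        for i := 0; i < 2; i++ {
            fmt.Println(<-c)
        }
        close(done) // signal that both values were received and printed
    }()

    sendNumbers(c)
    <-done // block main until the goroutine has printed both numbers
}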

Don't goroutines and channels work in the order they are called?

Don't goroutines and channels work in the order they are called? And do goroutines share values between local variables?
main.go
package main

import (
    "fmt"
    "time"
)

var dataSendChannel = make(chan int)

func main() {
    a(dataSendChannel)
    time.Sleep(time.Second * 10)
}

func a(c chan<- int) {
    for i := 0; i < 1000; i++ {
        go b(dataSendChannel)
        c <- i
    }
}

func b(c <-chan int) {
    val := <-c
    fmt.Println(val)
}
output
> go run main.go
0
1
54
3
61
5
6
7
8
9
Channels are ordered. Goroutines are not. Goroutines may run, or stall, more or less at random, whenever they are logically allowed to run. They must stop and wait whenever you force them to do so, e.g., by attempting to write on a full channel, or using a mutex.Lock() call on an already-locked mutex, or any of those sorts of things.
Your dataSendChannel is unbuffered, so an attempt to write to it will pause until some goroutine is actively attempting to read from it. Function a spins off one goroutine that will attempt one read (go b(...)), then writes and therefore waits for at least one reader to be reading. Function b immediately begins reading, waiting for data. This unblocks function a, which can now write some integer value. Function a can now spin off another instance of b, which begins reading; this may happen before, during, or after the b that got a value begins calling fmt.Println. This second instance of b must now wait for someone—which in this case is always function a, running the loop—to send another value, but a does that as quickly as it can. The second instance of b can now begin calling fmt.Println, but it might, mostly-randomly, not get a chance to do that yet. The first instance of b might already be in fmt.Println, or maybe it isn't yet, and the second one might run first—or maybe both wait around for a while and a third instance of b spins up, reads some value from the channel, and so on.
There's no guarantee which instance of b actually gets into fmt.Println when, so the values you see printed will come out in some semi-random order. If you want the various b instances to sequence themselves, they will need to do that somehow.
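One minimal sketch of doing that (an assumption about what "sequence themselves" should mean, not the only option) is to let a single goroutine do all of the receiving and printing, so the printed order matches the order in which values were sent on the channel:

package main

import (
    "fmt"
    "time"
)

func main() {
    c := make(chan int)

    // The only consumer: because one goroutine does all the receiving
    // and printing, values are printed in exactly the order they were sent.
    go func() {
        for v := range c {
            fmt.Println(v)
        }
    }()

    for i := 0; i < 1000; i++ {
        c <- i
    }
    close(c)

    time.Sleep(time.Second) // crude wait; a done channel would be cleaner
}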

Computing mod inverse

I want to compute the inverse element of a prime in modular arithmetic.
In order to speed things up I start a few goroutines which try to find the element in a certain range. When the first one finds the element, it sends it to the main goroutine and at this point I want to terminate the program. So I call close in the main goroutine, but I don't know if the goroutines will finish their execution (I guess not). So a few questions arise:
1) Is this a bad style, should I have something like a WaitGroup?
2) Is there a more idiomatic way to do this computation?
package main

import "fmt"

const (
    Procs = 8
    P     = 1000099
    Base  = 1<<31 - 1
)

func compute(start, end uint64, finished chan struct{}, output chan uint64) {
    for i := start; i < end; i++ {
        select {
        case <-finished:
            return
        default:
            break
        }
        if i*P%Base == 1 {
            output <- i
        }
    }
}

func main() {
    finished := make(chan struct{})
    output := make(chan uint64)

    for i := uint64(0); i < Procs; i++ {
        start := i * (Base / Procs)
        end := (i + 1) * (Base / Procs)
        go compute(start, end, finished, output)
    }

    fmt.Println(<-output)
    close(finished)
}
Is there a more idiomatic way to do this computation?
You don't actually need a loop to compute this.
If you use the GCD function (part of the standard library), you get back numbers x and y such that:
x*P+y*Base=1
this means that x is the answer you want (because x*P = 1 modulo Base):
package main

import (
    "fmt"
    "math/big"
)

const (
    P    = 1000099
    Base = 1<<31 - 1
)

func main() {
    bigP := big.NewInt(P)
    bigBase := big.NewInt(Base)

    // Compute the inverse of bigP modulo bigBase.
    bigGcd := big.NewInt(0)
    bigX := big.NewInt(0)
    bigGcd.GCD(bigX, nil, bigP, bigBase)

    // x*bigP + y*bigBase = 1
    // => x*bigP = 1 (mod bigBase)
    fmt.Println(bigX)
}
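As a side note, math/big also provides Int.ModInverse, which computes the same result directly; unlike the x returned by GCD (which may be negative), ModInverse returns a value in [0, Base):

package main

import (
    "fmt"
    "math/big"
)

const (
    P    = 1000099
    Base = 1<<31 - 1
)

func main() {
    // ModInverse sets the receiver to the inverse of P modulo Base
    // (it would return nil if no inverse existed).
    inv := new(big.Int).ModInverse(big.NewInt(P), big.NewInt(Base))
    fmt.Println(inv)
}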
Is this a bad style, should I have something like a WaitGroup?
A wait group solves a different problem.
In general, to be a responsible Go citizen here and ensure your code runs and tidies up behind itself, you may need to do a combination of:
Signal to the spawned goroutines to stop their calculations when the result of the computation has been found elsewhere.
Ensure a synchronous process waits for the goroutines to stop before returning. This is not mandatory if they properly respond to the signal in #1, but if you don't wait, there will be no guarantee they have terminated before the parent goroutine continues.
In your example program, which performs this task and then quits, there is strictly no need to do either. As this comment indicates, your program's main method terminates upon a satisfactory answer being found, at which point the program will end, any goroutines will be summarily terminated, and the operating system will tidy up any consumed resources. Waiting for goroutines to stop is unnecessary.
However, if you wrapped this code up into a library or it became part of a long running "inverse prime calculation" service, it would be desirable to tidy up the goroutines you spawned to avoid wasting cycles unnecessarily. Additionally, in general, you may have other scenarios in which goroutines store state, hold handles to external resources, or hold handles to internal objects which you risk leaking if not properly tidied away – it is desirable to properly close these.
Communicating the requirement to stop working
There are several approaches to communicate this. I don't claim this is an exhaustive list! (Please do suggest other general-purpose methods in the comments or by proposing edits to the post.)
Using a special channel
Signal the child goroutines by closing a special "shutdown" channel reserved for the purpose. This exploits the channel axiom:
A receive from a closed channel returns the zero value immediately
On receiving from the shutdown channel, the goroutine should immediately arrange to tidy any local state and return from the function. Your earlier question had example code which implemented this; a version of the pattern is:
func myGoRoutine(shutdownChan <-chan struct{}) {
    select {
    case <-shutdownChan:
        // tidy up behaviour goes here
        return
    // You may choose to listen on other channels here to implement
    // the primary behaviour of the goroutine.
    }
}

func main() {
    shutdownChan := make(chan struct{})
    go myGoRoutine(shutdownChan)

    // some time later
    close(shutdownChan)
}
In this instance, the shutdown logic is wasted because the main() method will immediately return after the call to close. This will race with the shutdown of the goroutine, but we should assume it will not properly execute its tidy-up behaviour. Point 2 addresses ways to fix this.
Using a context
The context package provides the option to create a context which can be cancelled. On cancellation, a channel exposed by the context's Done() method will be closed, which signals time to return from the goroutine.
This approach is approximately the same as the previous method, with the exception of neater encapsulation and the availability of a context to pass to downstream calls in your goroutine to cancel nested calls where desired. Example:
func myGoRoutine(ctx context.Context) {
    select {
    case <-ctx.Done():
        // tidy up behaviour goes here
        return
    // Put the real behaviour for the goroutine here.
    }
}

func main() {
    // Get a context (or use an existing one if you are provided with one
    // outside a `main` method).
    ctx := context.Background()

    // Create a derived context with a cancellation method.
    ctx, cancel := context.WithCancel(ctx)

    go myGoRoutine(ctx)

    // Later, when ready to quit
    cancel()
}
This has the same bug as the other case in that the main method will not wait for the child goroutines to quit before returning.
Waiting (or "join"ing) for child goroutines to stop
The code which closes the shutdown channel or closes the context in the above examples will not wait for child goroutines to stop working before continuing. This may be acceptable in some instances, while in others you may require the guarantee that goroutines have stopped before continuing.
sync.WaitGroup can be used to implement this requirement. The documentation is comprehensive. A wait group is a counter which should be incremented using its Add method on starting a goroutine and decremented using its Done method when a goroutine completes. Code can wait for the counter to return to zero by calling its Wait method, which blocks until the condition is true. All calls to Add must occur before a call to Wait.
Example code:
func main() {
    var wg sync.WaitGroup

    // Increment the WaitGroup with the number of goroutines we're
    // spawning.
    wg.Add(1)

    // It is common to wrap a goroutine in a function which performs
    // the decrement on the WaitGroup once the called function returns
    // to avoid passing references of this control logic to the
    // downstream consumer.
    go func() {
        // TODO: implement a method to communicate shutdown.
        callMyFunction()
        wg.Done()
    }()

    // Indicate shutdown, e.g. by closing a channel or cancelling a
    // context.

    // Wait for goroutines to stop.
    wg.Wait()
}
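Putting the two together for the original program, a sketch could look like the following (the structure mirrors the compute workers from the question; the context/WaitGroup plumbing is one reasonable arrangement, not the only one):

package main

import (
    "context"
    "fmt"
    "sync"
)

const (
    Procs = 8
    P     = 1000099
    Base  = 1<<31 - 1
)

func compute(ctx context.Context, start, end uint64, output chan<- uint64) {
    for i := start; i < end; i++ {
        select {
        case <-ctx.Done():
            return // another worker already found the answer
        default:
        }
        if i*P%Base == 1 {
            select {
            case output <- i:
            case <-ctx.Done(): // nobody is reading any more
            }
            return
        }
    }
}

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()

    output := make(chan uint64, Procs) // buffered so workers never block on send
    var wg sync.WaitGroup

    for i := uint64(0); i < Procs; i++ {
        start := i * (Base / Procs)
        end := (i + 1) * (Base / Procs)
        wg.Add(1)
        go func() {
            defer wg.Done()
            compute(ctx, start, end, output)
        }()
    }

    fmt.Println(<-output) // first answer wins
    cancel()              // tell the remaining workers to stop
    wg.Wait()             // and wait until they actually have stopped
}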
Is there a more idiomatic way to do this computation?
This algorithm is certainly parallelizable through use of goroutines in the manner you have defined. As the work is CPU-bound, the limitation of goroutines to the number of available CPUs makes sense (in the absence of other work on the machine) to benefit from the available compute resource.
See peterSO's answer for a bug fix.
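For example, a common way to size the worker pool for CPU-bound work like this (an assumption on my part; the original code hard-codes Procs = 8) is to derive it from runtime.NumCPU:

package main

import (
    "fmt"
    "runtime"
)

const Base = 1<<31 - 1

func main() {
    // One worker per available CPU is a reasonable default for
    // CPU-bound work such as this search.
    procs := uint64(runtime.NumCPU())
    chunk := Base / procs

    for i := uint64(0); i < procs; i++ {
        start, end := i*chunk, (i+1)*chunk
        fmt.Printf("worker %d would scan [%d, %d)\n", i, start, end)
    }
}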

Golang goroutine cannot use function return value(s)

I wish I could do something like the following code with golang:
package main

import (
    "fmt"
    "time"
)

func getA() int {
    fmt.Println("getA: Calculating...")
    time.Sleep(300 * time.Millisecond)
    fmt.Println("getA: Done!")
    return 100
}

func getB() int {
    fmt.Println("getB: Calculating...")
    time.Sleep(400 * time.Millisecond)
    fmt.Println("getB: Done!")
    return 200
}

func main() {
    A := go getA()
    B := go getB()
    C := A + B // waits till getA() and getB() return
    fmt.Println("Result:", C)
    fmt.Println("All done!")
}
More specifically, I wish Go could handle the concurrency behind the scenes.
This might be a bit off topic, but I am curious what people think of having such implicit concurrency support. Is it worth putting some effort into it? And what are the potential difficulties and drawbacks?
Edit:
To be clear, the question is not about "what is Go doing right now?", and not even "how is it implemented?", though I appreciate @icza's post on what exactly we should expect from Go as it is right now. The question is why it does not, or cannot, return values this way, and what the potential complications of doing so would be.
Back to my simple example:
    A := go getA()
    B := go getB()
    C := A + B // waits till getA() and getB() return
I do not see any issues concerning the scope of variables, at least from the syntax point of view. The scope of A, B, and C is clearly defined by the block they live inside (in my example the scope is the main() function). However, a perhaps more legitimate question would be whether those variables (here A and B) are "ready" to read from. Of course they should not be ready and accessible until getA() and getB() have finished. In fact, this is all I am asking for: the compiler could implement all the bells and whistles behind the scenes to make sure execution blocks until A and B are ready to consume (instead of forcing the programmer to explicitly implement those waits using channels).
This could make the concurrent programming much simpler, especially for the cases where computational tasks are independent of each other. The channels still could be used for explicit communication and synchronization, if really needed.
But this is very easily and idiomatically doable. The language provides the means: Channel types.
Simply pass a channel to the functions, and have the functions send the result on the channel instead of returning them. Channels are safe for concurrent use.
Only one goroutine has access to the value at any given time. Data races cannot occur, by design.
For more, check out the question: If I am using channels properly should I need to use mutexes?
Example solution:
package main

import (
    "fmt"
    "time"
)

func getA(ch chan int) {
    fmt.Println("getA: Calculating...")
    time.Sleep(300 * time.Millisecond)
    fmt.Println("getA: Done!")
    ch <- 100
}

func getB(ch chan int) {
    fmt.Println("getB: Calculating...")
    time.Sleep(400 * time.Millisecond)
    fmt.Println("getB: Done!")
    ch <- 200
}

func main() {
    cha := make(chan int)
    chb := make(chan int)
    go getA(cha)
    go getB(chb)
    C := <-cha + <-chb // waits till getA() and getB() return
    fmt.Println("Result:", C)
    fmt.Println("All done!")
}
Output (try it on the Go Playground):
getB: Calculating...
getA: Calculating...
getA: Done!
getB: Done!
Result: 300
All done!
Note:
The above example can be implemented with a single channel too:
func main() {
    ch := make(chan int)
    go getA(ch)
    go getB(ch)
    C := <-ch + <-ch // waits till getA() and getB() return
    fmt.Println("Result:", C)
    fmt.Println("All done!")
}
Output is the same. Try this variant on the Go Playground.
Edit:
The Go spec states that the return values of such functions are discarded. More on this: What happens to return value from goroutine.
What you propose bleeds from multiple wounds. Each variable in Go has a scope (in which it can be referred to). Accessing a variable does not block. Execution of statements or operators may block (e.g. the receive operator or a send statement).
Your proposal:
    go A := getA()
    go B := getB()
    C := A + B // waits till getA() and getB() return
What would the scope of A and B be? Two reasonable answers would be a) from the go statement, or b) after the go statement. Either way we should be able to access them after the go statement; they would be in scope, and reading/writing their values should not block.
But then if C := A + B does not block (because it is just reading variables), then either
a) at this line A and B would already have to be populated, which means the go statement would have to wait for getA() to complete (but that defeats the purpose of the go statement),
b) or else we would need some external code to synchronize, but then again we gain nothing (it just gets worse compared to the solution with channels). One thing you can do today is wrap the channel solution in a small helper that returns a channel, sketched below.
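A sketch of that helper (goGetA/goGetB are illustrative names; this is essentially the channel-based solution from the earlier answer, wrapped in functions):

package main

import (
    "fmt"
    "time"
)

// goGetA starts the computation immediately and returns a channel
// that will deliver the result exactly once.
func goGetA() <-chan int {
    ch := make(chan int, 1)
    go func() {
        time.Sleep(300 * time.Millisecond) // stand-in for real work
        ch <- 100
    }()
    return ch
}

func goGetB() <-chan int {
    ch := make(chan int, 1)
    go func() {
        time.Sleep(400 * time.Millisecond)
        ch <- 200
    }()
    return ch
}

func main() {
    a := goGetA() // both computations are now running concurrently
    b := goGetB()
    C := <-a + <-b // the blocking happens here, explicitly
    fmt.Println("Result:", C)
}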
Do not communicate by sharing memory; instead, share memory by communicating.
By using channels, it is crystal clear what (may) block and what does not. It is also crystal clear that when a receive from the channel completes, the goroutine is done. And it gives us the means to execute the receive whenever we want to (at the point where its value is needed and we are willing to wait for it), the means to check whether the value is ready without blocking (a select with a default case), and the means to detect a closed channel (the comma-ok idiom).
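A minimal sketch of that non-blocking readiness check (the channel setup here is just illustrative):

package main

import (
    "fmt"
    "time"
)

func main() {
    cha := make(chan int, 1)
    go func() {
        time.Sleep(300 * time.Millisecond)
        cha <- 100
    }()

    for {
        select {
        case v := <-cha:
            fmt.Println("value is ready:", v)
            return
        default:
            // Not ready yet: do something else instead of blocking.
            fmt.Println("not ready yet")
            time.Sleep(100 * time.Millisecond)
        }
    }
}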

Issue with runtime.LockOSThread() and runtime.UnlockOSThread

I have code like this:
Routine 1 {
    runtime.LockOSThread()
    print something
    send int to routine 2
    runtime.UnlockOSThread()
}

Routine 2 {
    runtime.LockOSThread()
    print something
    send int to routine 1
    runtime.UnlockOSThread()
}

main {
    go Routine1
    go Routine2
}
I use the runtime lock/unlock because I don't want the printing from Routine 1 to mix with the printing from Routine 2. However, the above code produces the same output as it does without the lock/unlock (i.e. the printed output is interleaved). Can anybody explain why this is happening and how I can force the behaviour I want?
NB: "print something" is just an example; there are actually lots of printing and sending events.
If you want to serialize "print something", i.e. each "print something" should be performed atomically, then just serialize it.
You can surround "print something" with a mutex. That'll work unless the code deadlocks because of it, and in a non-trivial program it easily can.
The easy way in Go to serialize something is to do it with a channel. Collect, in a (go)routine, everything that should be printed together. When the collection of the print unit is done, send it through a channel to some printing "agent" as a "print job" unit. That agent simply receives its "tasks" and prints each one atomically. You get the atomicity for free, and as an important bonus, in the simple case where there are only non-interdependent "print unit"-generating goroutines, the code can no longer easily deadlock.
I mean something like:
func printer(tasks chan string) {
    for s := range tasks {
        fmt.Print(s)
    }
}

func someAgentX(tasks chan string) {
    var printUnit string
    //...
    tasks <- printUnit
    //...
}

func main() {
    //...
    tasks := make(chan string, size) // size: whatever buffering suits the workload
    go printer(tasks)
    go someAgent1(tasks)
    //...
    go someAgentN(tasks)
    //...
    <-allDone // some signal that all agents have finished
    close(tasks)
}
What runtime.LockOSThread does is prevent any other goroutine from running on the same thread. It forces the runtime to create a new thread and run Routine2 there. They are still running concurrently but on different threads.
You need to use sync.Mutex or some channel magic instead.
You rarely need to use runtime.LockOSThread, but it can be useful for forcing some higher-priority goroutine to run on a thread of its own.
package main

import (
    "fmt"
    "sync"
    "time"
)

var m sync.Mutex

func printing(s string) {
    m.Lock() // Other goroutines will stop here until m is unlocked
    fmt.Println(s)
    m.Unlock() // Now another goroutine waiting at "m.Lock()" can continue
}

func main() {
    for i := 0; i < 10; i++ {
        go printing(fmt.Sprintf("Goroutine #%d", i))
    }
    <-time.After(3e9) // wait 3 seconds (3e9 nanoseconds) for the goroutines to finish
}
I think this is because runtime.LockOSThread()/runtime.UnlockOSThread() does not work all the time. It totally depends on the CPU, the execution environment, etc. It can't be forced any other way.

Mutual Exclusion of Concurrent Goroutines

In my code there are three concurrent routines. I'll try to give a brief overview of my code:
Routine 1 {
    do something
    *Send int to Routine 2
    Send int to Routine 3
    Print Something
    Print Something*
    do something
}

Routine 2 {
    do something
    *Send int to Routine 1
    Send int to Routine 3
    Print Something
    Print Something*
    do something
}

Routine 3 {
    do something
    *Send int to Routine 1
    Send int to Routine 2
    Print Something
    Print Something*
    do something
}

main {
    routine1
    routine2
    routine3
}
I want that, while the code between the two "do something" lines (the code between the two asterisks) is executing, the flow of control must not pass to the other goroutines. For example, when Routine 1 is executing the events between the two asterisks (the sending and printing events), Routines 2 and 3 must be blocked (i.e. the flow of execution must not pass from Routine 1 to Routine 2 or 3). After the last print event completes, the flow of execution may pass to Routine 2 or 3. Can anybody help me by specifying how I can achieve this? Is it possible to implement the above specification with a WaitGroup? Can anybody show me, with a simple example, how to implement it using a WaitGroup? Thanks.
NB: I show two sending and two printing events, but in fact there is a lot of sending and printing.
If I have understood correctly, what you want is to prevent some part of each function from executing simultaneously with the other functions. The following code does this: the fmt.Println lines won't run while the other routines are running. Here's what happens: when execution reaches the print section, it waits until the other routines have finished (if they are running), and while a print line is executing the other routines don't start; they wait. I hope that is what you are looking for; correct me if I'm wrong.
package main

import (
    "fmt"
    "sync"
)

var (
    mutex1, mutex2, mutex3 sync.Mutex
    wg                     sync.WaitGroup
)

func Routine1() {
    mutex1.Lock()
    // do something
    for i := 0; i < 200; i++ {
        mutex2.Lock()
        mutex3.Lock()
        fmt.Println("value of z")
        mutex2.Unlock()
        mutex3.Unlock()
    }
    // do something
    mutex1.Unlock()
    wg.Done()
}

func Routine2() {
    mutex2.Lock()
    // do something
    for i := 0; i < 200; i++ {
        mutex1.Lock()
        mutex3.Lock()
        fmt.Println("value of z")
        mutex1.Unlock()
        mutex3.Unlock()
    }
    // do something
    mutex2.Unlock()
    wg.Done()
}

func Routine3() {
    mutex3.Lock()
    // do something
    for i := 0; i < 200; i++ {
        mutex1.Lock()
        mutex2.Lock()
        fmt.Println("value of z")
        mutex1.Unlock()
        mutex2.Unlock()
    }
    // do something
    mutex3.Unlock()
    wg.Done()
}

func main() {
    wg.Add(3)
    go Routine1()
    go Routine2()
    Routine3()
    wg.Wait()
}
UPDATE: Let me explain the three mutexes here: a mutex is, as the documentation says, "a mutual exclusion lock." That means that when you call Lock on a mutex, your code simply waits there if somebody else has locked the same mutex. Right after they call Unlock, the blocked code resumes.
Here I give each function its own mutex by locking that mutex at the beginning of the function and unlocking it at the end. With this simple mechanism you can prevent any piece of code you want from running at the same time as those functions. For instance, wherever you have code that should not run while Routine1 is running, simply lock mutex1 at the beginning of that code and unlock it at the end. That is what I did in the appropriate lines in Routine2 and Routine3. I hope that clarifies things.
You could use sync.Mutex. For example everything between im.Lock() and im.Unlock() will exclude the other goroutine.
package main

import (
    "fmt"
    "sync"
)

func f1(wg *sync.WaitGroup, im *sync.Mutex, i *int) {
    for {
        im.Lock()
        (*i)++
        fmt.Printf("Go1: %d\n", *i)
        done := *i >= 10
        im.Unlock()
        if done {
            break
        }
    }
    wg.Done()
}

func f2(wg *sync.WaitGroup, im *sync.Mutex, i *int) {
    for {
        im.Lock()
        (*i)++
        fmt.Printf("Go2: %d\n", *i)
        done := *i >= 10
        im.Unlock()
        if done {
            break
        }
    }
    wg.Done()
}

func main() {
    var wg sync.WaitGroup
    var im sync.Mutex
    var i int

    wg.Add(2)
    go f1(&wg, &im, &i)
    go f2(&wg, &im, &i)
    wg.Wait()
}
Another way would be to have a control channel, with only one goroutine allowed to execute at any one time, with each routine sending back to the 'control lock' whenever they are done doing their atomic operation:
package main

import "fmt"
import "time"

func routine(id int, control chan struct{}) {
    for {
        // Get the control
        <-control
        fmt.Printf("routine %d got control\n", id)
        fmt.Printf("A lot of things happen here...")
        time.Sleep(1)
        fmt.Printf("... but only in routine %d !\n", id)
        fmt.Printf("routine %d gives back control\n", id)
        // Sending back the control to whichever other routine catches it
        control <- struct{}{}
    }
}

func main() {
    // Control channel is blocking (unbuffered)
    control := make(chan struct{})

    // Start all routines
    go routine(0, control)
    go routine(1, control)
    go routine(2, control)

    // Sending control to whichever routine catches it first
    control <- struct{}{}

    // Let routines play for some time...
    time.Sleep(10)

    // Getting control back and terminating
    <-control
    close(control)
    fmt.Println("Finished !")
}
This prints:
routine 0 got control
A lot of things happen here...... but only in routine 0 !
routine 0 gives back control
routine 1 got control
A lot of things happen here...... but only in routine 1 !
routine 1 gives back control
routine 2 got control
A lot of things happen here...... but only in routine 2 !
routine 2 gives back control
routine 0 got control
A lot of things happen here...... but only in routine 0 !
routine 0 gives back control
routine 1 got control
A lot of things happen here...... but only in routine 1 !
routine 1 gives back control
routine 2 got control
A lot of things happen here...... but only in routine 2 !
routine 2 gives back control
routine 0 got control
A lot of things happen here...... but only in routine 0 !
routine 0 gives back control
routine 1 got control
A lot of things happen here...... but only in routine 1 !
routine 1 gives back control
routine 2 got control
A lot of things happen here...... but only in routine 2 !
routine 2 gives back control
routine 0 got control
A lot of things happen here...... but only in routine 0 !
routine 0 gives back control
routine 1 got control
A lot of things happen here...... but only in routine 1 !
routine 1 gives back control
routine 2 got control
A lot of things happen here...... but only in routine 2 !
routine 2 gives back control
Finished !
You are asking for a library function that explicitly stops other goroutines? To be clear, that is not possible in Go, nor in most other languages. And a sync/mutex solution is not possible for your case either.
