Mutual Exclusion of Concurrent Goroutines - go

In my code there are three concurrent routines. Here is a brief overview of my code:
Routine 1 {
    do something
    *Send int to Routine 2
    Send int to Routine 3
    Print Something
    Print Something*
    do something
}
Routine 2 {
    do something
    *Send int to Routine 1
    Send int to Routine 3
    Print Something
    Print Something*
    do something
}
Routine 3 {
    do something
    *Send int to Routine 1
    Send int to Routine 2
    Print Something
    Print Something*
    do something
}
main {
    routine1
    routine2
    routine3
}
I want that, while the code between the two do something lines (the code between the two star marks) is executing, the flow of control must not go to the other goroutines. For example, when routine 1 is executing the events between the two stars (the sending and printing events), routines 2 and 3 must be blocked (meaning the flow of execution does not pass from routine 1 to routine 2 or 3). Only after the last print event completes may the flow of execution pass to routine 2 or 3. Can anybody help me by specifying how I can achieve this? Is it possible to implement the above specification with a WaitGroup? Can anybody show me, with a simple example, how to implement it using a WaitGroup? Thanks.
NB: I show two sending and two printing events, but in fact there is a lot of sending and printing.

If I have understood correctly, what you want is to prevent parts of each function from executing at the same time as the other functions. The following code does this: the fmt.Println lines won't run while the other routines are running. Here's what happens: when execution reaches the print section, it waits until the other routines, if they are running, have finished, and while that print line is executing the other routines don't start and have to wait. I hope that is what you are looking for. Correct me if I'm wrong about this.
package main

import (
    "fmt"
    "sync"
)

var (
    mutex1, mutex2, mutex3 sync.Mutex
    wg                     sync.WaitGroup
)

func Routine1() {
    mutex1.Lock()
    // do something
    for i := 0; i < 200; i++ {
        mutex2.Lock()
        mutex3.Lock()
        fmt.Println("value of z")
        mutex2.Unlock()
        mutex3.Unlock()
    }
    // do something
    mutex1.Unlock()
    wg.Done()
}

func Routine2() {
    mutex2.Lock()
    // do something
    for i := 0; i < 200; i++ {
        mutex1.Lock()
        mutex3.Lock()
        fmt.Println("value of z")
        mutex1.Unlock()
        mutex3.Unlock()
    }
    // do something
    mutex2.Unlock()
    wg.Done()
}

func Routine3() {
    mutex3.Lock()
    // do something
    for i := 0; i < 200; i++ {
        mutex1.Lock()
        mutex2.Lock()
        fmt.Println("value of z")
        mutex1.Unlock()
        mutex2.Unlock()
    }
    // do something
    mutex3.Unlock()
    wg.Done()
}

func main() {
    wg.Add(3)
    go Routine1()
    go Routine2()
    Routine3()
    wg.Wait()
}
UPDATE: Let me explain the three mutexes here. A mutex is, as the documentation says, "a mutual exclusion lock." That means when you call Lock on a mutex, your code simply waits there if somebody else has locked the same mutex. Right after Unlock is called, the blocked code resumes.
Here I put each function inside its own mutex by locking that mutex at the beginning of the function and unlocking it at the end. With this simple mechanism you can prevent any piece of code you choose from running at the same time as those functions. For instance, wherever you want code that must not run while Routine1 is running, simply lock mutex1 at the beginning of that code and unlock it at the end. That is what I did in the appropriate lines in Routine2 and Routine3. I hope that clarifies things.
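As a small sketch of that last point (the function name somethingElse is only illustrative and is not part of the code above):
func somethingElse() {
    mutex1.Lock() // waits here while Routine1 (or anyone else) holds mutex1
    // ... code that must never run at the same time as Routine1 ...
    mutex1.Unlock() // now Routine1 (and others) may proceed
}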

You could use sync.Mutex. For example everything between im.Lock() and im.Unlock() will exclude the other goroutine.
package main

import (
    "fmt"
    "sync"
)

func f1(wg *sync.WaitGroup, im *sync.Mutex, i *int) {
    for {
        im.Lock()
        (*i)++
        fmt.Printf("Go1: %d\n", *i)
        done := *i >= 10
        im.Unlock()
        if done {
            break
        }
    }
    wg.Done()
}

func f2(wg *sync.WaitGroup, im *sync.Mutex, i *int) {
    for {
        im.Lock()
        (*i)++
        fmt.Printf("Go2: %d\n", *i)
        done := *i >= 10
        im.Unlock()
        if done {
            break
        }
    }
    wg.Done()
}

func main() {
    var wg sync.WaitGroup
    var im sync.Mutex
    var i int
    wg.Add(2)
    go f1(&wg, &im, &i)
    go f2(&wg, &im, &i)
    wg.Wait()
}

Another way would be to have a control channel, with only one goroutine allowed to execute at any one time, and with each routine sending the control back on the channel whenever it has finished its atomic operation:
package main

import "fmt"
import "time"

func routine(id int, control chan struct{}) {
    for {
        // Get the control
        <-control
        fmt.Printf("routine %d got control\n", id)
        fmt.Printf("A lot of things happen here...")
        time.Sleep(time.Second)
        fmt.Printf("... but only in routine %d !\n", id)
        fmt.Printf("routine %d gives back control\n", id)
        // Sending back the control to whichever other routine catches it
        control <- struct{}{}
    }
}

func main() {
    // Control channel is blocking (unbuffered)
    control := make(chan struct{})
    // Start all routines
    go routine(0, control)
    go routine(1, control)
    go routine(2, control)
    // Sending control to whichever routine catches it first
    control <- struct{}{}
    // Let routines play for some time...
    time.Sleep(10 * time.Second)
    // Getting control back and terminating
    <-control
    close(control)
    fmt.Println("Finished !")
}
This prints:
routine 0 got control
A lot of things happen here...... but only in routine 0 !
routine 0 gives back control
routine 1 got control
A lot of things happen here...... but only in routine 1 !
routine 1 gives back control
routine 2 got control
A lot of things happen here...... but only in routine 2 !
routine 2 gives back control
routine 0 got control
A lot of things happen here...... but only in routine 0 !
routine 0 gives back control
routine 1 got control
A lot of things happen here...... but only in routine 1 !
routine 1 gives back control
routine 2 got control
A lot of things happen here...... but only in routine 2 !
routine 2 gives back control
routine 0 got control
A lot of things happen here...... but only in routine 0 !
routine 0 gives back control
routine 1 got control
A lot of things happen here...... but only in routine 1 !
routine 1 gives back control
routine 2 got control
A lot of things happen here...... but only in routine 2 !
routine 2 gives back control
routine 0 got control
A lot of things happen here...... but only in routine 0 !
routine 0 gives back control
routine 1 got control
A lot of things happen here...... but only in routine 1 !
routine 1 gives back control
routine 2 got control
A lot of things happen here...... but only in routine 2 !
routine 2 gives back control
Finished !

Are you asking for a library function that explicitly stops the other routines? To be clear, that is not possible in Go, nor in most other languages, and the sync/mutex approach is not possible for your case either.

Related

Can WaitGroup be used with normal functions

Can I use a WaitGroup with normal functions and not always with goroutines?
I have the following type:
type Manager struct {
    ....
    wg sync.WaitGroup
}

func (m *Manager) create() {
    m.wg.Add(1)
    defer m.wg.Done()
    ....
    ....
}

func (m *Manager) close() {
    m.wg.Wait()
}
It is working fine for me; I just want to know whether this is correct.
In the concurrent context, a waitgroup allows you to halt a goroutine until the group is "done". If you use the waitgroup incorrectly, you can have these erroneous outcomes:
Less Done than Add: the waitgroup never finishes, and the waiting goroutine blocks forever (a panic if all goroutines end up deadlocked, or a silent failure otherwise)
Less Add than Done: panics. See WaitGroup.Add
In the non-concurrent context, you won't receive any benefit from the synchronization of the waitgroup, so the only effect it can really have is demonstrating that the total amount added to the counter and the total amount taken away are equal (as is the case in correct use of a waitgroup). However, in the case of incorrect usage, it can result in a silent failure (see above), so you should not use it in this way.
There could be a legitimate use case where you want to increment / decrement a counter and then verify that it has been resolved to 0 at the end. In a non-concurrent context, you don't need such a fancy tool to do this: just use an int!
For example:
var counter int
// setup
// vv equivalent to wg.Add
counter += expectedNumberOfActions
for range actions {
    // do something
    // vv equivalent to wg.Done
    counter--
}
// vv achieves the purpose of wg.Wait
if counter != 0 {
    panic("oh no! the counter was not resolved correctly. there may be some bug in the implementation")
}

Computing mod inverse

I want to compute the inverse element of a prime in modular arithmetic.
In order to speed things up I start a few goroutines which try to find the element in a certain range. When the first one finds the element, it sends it to the main goroutine and at this point I want to terminate the program. So I call close in the main goroutine, but I don't know if the goroutines will finish their execution (I guess not). So a few questions arise:
1) Is this a bad style, should I have something like a WaitGroup?
2) Is there a more idiomatic way to do this computation?
package main

import "fmt"

const (
    Procs = 8
    P     = 1000099
    Base  = 1<<31 - 1
)

func compute(start, end uint64, finished chan struct{}, output chan uint64) {
    for i := start; i < end; i++ {
        select {
        case <-finished:
            return
        default:
        }
        if i*P%Base == 1 {
            output <- i
        }
    }
}

func main() {
    finished := make(chan struct{})
    output := make(chan uint64)
    for i := uint64(0); i < Procs; i++ {
        start := i * (Base / Procs)
        end := (i + 1) * (Base / Procs)
        go compute(start, end, finished, output)
    }
    fmt.Println(<-output)
    close(finished)
}
Is there a more idiomatic way to do this computation?
You don't actually need a loop to compute this.
If you use the GCD function (part of the standard library, in math/big), it returns numbers x and y such that:
x*P + y*Base = 1
This means that x is the answer you want (because x*P = 1 modulo Base):
package main

import (
    "fmt"
    "math/big"
)

const (
    P    = 1000099
    Base = 1<<31 - 1
)

func main() {
    bigP := big.NewInt(P)
    bigBase := big.NewInt(Base)
    // Compute the inverse of bigP modulo bigBase
    bigGcd := big.NewInt(0)
    bigX := big.NewInt(0)
    bigGcd.GCD(bigX, nil, bigP, bigBase)
    // x*bigP + y*bigBase = 1
    // => x*bigP = 1 modulo bigBase
    fmt.Println(bigX)
}
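As an optional sanity check (not required by the answer above), you could append something like the following to the end of main; note that GCD can return a negative x, and Mod normalises it into the range [0, Base):
    // x*P mod Base should be 1 if bigX really is the inverse
    check := new(big.Int).Mul(bigX, bigP)
    check.Mod(check, bigBase)
    fmt.Println(check) // expected: 1
    // A negative x is still a valid answer; Mod maps it into [0, Base)
    inv := new(big.Int).Mod(bigX, bigBase)
    fmt.Println(inv)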
Is this a bad style, should I have something like a WaitGroup?
A wait group solves a different problem.
In general, to be a responsible go citizen here and ensure your code runs and tidies up behind itself, you may need to do a combination of:
Signal to the spawned goroutines to stop their calculations when the result of the computation has been found elsewhere.
Ensure a synchronous process waits for the goroutines to stop before returning. This is not mandatory if they properly respond to the signal in #1, but if you don't wait, there will be no guarantee they have terminated before the parent goroutine continues.
In your example program, which performs this task and then quits, there is strictly no need to do either. As this comment indicates, your program's main method terminates upon a satisfactory answer being found, at which point the program will end, any goroutines will be summarily terminated, and the operating system will tidy up any consumed resources. Waiting for goroutines to stop is unnecessary.
However, if you wrapped this code up into a library or it became part of a long running "inverse prime calculation" service, it would be desirable to tidy up the goroutines you spawned to avoid wasting cycles unnecessarily. Additionally, in general, you may have other scenarios in which goroutines store state, hold handles to external resources, or hold handles to internal objects which you risk leaking if not properly tidied away – it is desirable to properly close these.
Communicating the requirement to stop working
There are several approaches to communicate this. I don't claim this is an exhaustive list! (Please do suggest other general-purpose methods in the comments or by proposing edits to the post.)
Using a special channel
Signal the child goroutines by closing a special "shutdown" channel reserved for the purpose. This exploits the channel axiom:
A receive from a closed channel returns the zero value immediately
On receiving from the shutdown channel, the goroutine should immediately arrange to tidy any local state and return from the function. Your earlier question had example code which implemented this; a version of the pattern is:
func myGoRoutine(shutdownChan <-chan struct{}) {
    select {
    case <-shutdownChan:
        // tidy up behaviour goes here
        return
        // You may choose to listen on other channels here to implement
        // the primary behaviour of the goroutine.
    }
}

func main() {
    shutdownChan := make(chan struct{})
    go myGoRoutine(shutdownChan)
    // some time later
    close(shutdownChan)
}
In this instance, the shutdown logic is wasted because the main() method will immediately return after the call to close. This will race with the shutdown of the goroutine, but we should assume it will not properly execute its tidy-up behaviour. Point 2 addresses ways to fix this.
Using a context
The context package provides the option to create a context which can be cancelled. On cancellation, a channel exposed by the context's Done() method will be closed, which signals time to return from the goroutine.
This approach is approximately the same as the previous method, with the exception of neater encapsulation and the availability of a context to pass to downstream calls in your goroutine to cancel nested calls where desired. Example:
func myGoRoutine(ctx context.Context) {
    select {
    case <-ctx.Done():
        // tidy up behaviour goes here
        return
        // Put real behaviour for the goroutine here.
    }
}

func main() {
    // Get a context (or use an existing one if you are provided with one
    // outside a `main` method)
    ctx := context.Background()
    // Create a derived context with a cancellation method
    ctx, cancel := context.WithCancel(ctx)
    go myGoRoutine(ctx)
    // Later, when ready to quit
    cancel()
}
This has the same bug as the other case in that the main method will not wait for the child goroutines to quit before returning.
Waiting (or "join"ing) for child goroutines to stop
The code which closes the shutdown channel or closes the context in the above examples will not wait for child goroutines to stop working before continuing. This may be acceptable in some instances, while in others you may require the guarantee that goroutines have stopped before continuing.
sync.WaitGroup can be used to implement this requirement. The documentation is comprehensive. A wait group is a counter which should be incremented using its Add method on starting a goroutine and decremented using its Done method when a goroutine completes. Code can wait for the counter to return to zero by calling its Wait method, which blocks until the condition is true. All calls to Add must occur before a call to Wait.
Example code:
func main() {
    var wg sync.WaitGroup
    // Increment the WaitGroup with the number of goroutines we're
    // spawning.
    wg.Add(1)
    // It is common to wrap a goroutine in a function which performs
    // the decrement on the WaitGroup once the called function returns
    // to avoid passing references of this control logic to the
    // downstream consumer.
    go func() {
        // TODO: implement a method to communicate shutdown.
        callMyFunction()
        wg.Done()
    }()
    // Indicate shutdown, e.g. by closing a channel or cancelling a
    // context.
    // Wait for goroutines to stop
    wg.Wait()
}
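Putting the two parts together for a long-running version of your program, a rough sketch (assuming the context approach for signalling; the worker function and the work-splitting numbers are placeholders, not your actual compute code) might be:

package main

import (
    "context"
    "fmt"
    "sync"
)

// worker stands in for the question's compute function.
func worker(ctx context.Context, start, end uint64, output chan<- uint64) {
    for i := start; i < end; i++ {
        select {
        case <-ctx.Done():
            return // told to stop: tidy up and return
        default:
        }
        // ... do one unit of work, possibly sending a result on output ...
    }
}

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    output := make(chan uint64, 1)

    var wg sync.WaitGroup
    for i := uint64(0); i < 8; i++ {
        wg.Add(1)
        go func(start, end uint64) {
            defer wg.Done()
            worker(ctx, start, end, output)
        }(i*1000, (i+1)*1000)
    }

    // ... receive the first result from output here, then:
    cancel()  // signal all workers to stop (point 1)
    wg.Wait() // wait until they have actually stopped (point 2)
    fmt.Println("all workers stopped")
}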
Is there a more idiomatic way to do this computation?
This algorithm is certainly parallelizable through use of goroutines in the manner you have defined. As the work is CPU-bound, the limitation of goroutines to the number of available CPUs makes sense (in the absence of other work on the machine) to benefit from the available compute resource.
See peterSO's answer for a bug fix.

Golang concurrency and blocking with a single channel, explanation needed

I'm playing around with the code presented on https://tour.golang.org/concurrency/5. My idea was that I could simplify the code by getting rid of the quit channel and still keep the correct program behavior - just for learning purposes.
Here is the code I've got (simplified it even more for better readability):
package main

import "fmt"
import "time"

func sendNumbers(c chan int) {
    for i := 0; i < 2; i++ {
        c <- i
    }
}

func main() {
    c := make(chan int)
    go func() {
        for i := 0; i < 2; i++ {
            fmt.Println(<-c)
        }
    }()
    time.Sleep(0 * time.Second)
    sendNumbers(c)
}
In this code, the goroutine I spawn should be able to receive 2 numbers before it returns. The sendNumbers() function I call next sends exactly 2 numbers to the channel c.
So, the expected output of the program is 2 lines: 0 and 1. However, what I get when I run the code on the page is just a single line containing 0.
Note the
time.Sleep(0 * time.Second)
though that I deliberately added before calling sendNumbers(c). If I change it to
time.Sleep(1 * time.Second)
The output becomes as expected:
0
1
So, I'm confused about what happens. Can someone explain what is going on? Shouldn't both sends and receives block regardless of how much time has passed before I call sendNumbers()?
In Go, the program exits as soon as the main function returns regardless of whether other goroutines are still running or not. If you want to make sure the main function does not exit prematurely, you need some synchronization mechanism. https://nathanleclaire.com/blog/2014/02/15/how-to-wait-for-all-goroutines-to-finish-executing-before-continuing/
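For instance, a minimal sketch of your program using a sync.WaitGroup (the WaitGroup is the only addition; the rest mirrors your code) would be:

package main

import (
    "fmt"
    "sync"
)

func main() {
    c := make(chan int)
    var wg sync.WaitGroup
    wg.Add(1)
    go func() {
        defer wg.Done()
        for i := 0; i < 2; i++ {
            fmt.Println(<-c)
        }
    }()
    for i := 0; i < 2; i++ {
        c <- i
    }
    wg.Wait() // main does not return until the receiver has printed both values
}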
You don't absolutely have to use synchronization primitives; you could also do it with channels only, which is arguably the more idiomatic way, as sketched below.
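A minimal sketch of the channel-only variant (the done channel is an addition; the rest mirrors the question's code):

package main

import "fmt"

func main() {
    c := make(chan int)
    done := make(chan struct{})
    go func() {
        for i := 0; i < 2; i++ {
            fmt.Println(<-c)
        }
        close(done) // signal that both values have been received and printed
    }()
    for i := 0; i < 2; i++ {
        c <- i
    }
    <-done // block until the goroutine has finished
}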

Why is this code Undefined Behavior?

var x int
done := false
go func() { x = f(...); done = true }
while done == false { }
This is a piece of Go code. My friend told me this is UB code. Why?
As explained in "Why does this program terminate on my system but not on playground?"
The Go Memory Model does not guarantee that the value written to x in the goroutine will ever be observed by the main program.
A similarly erroneous program is given as an example in the section on go routine destruction.
The Go Memory Model also specifically calls out busy waiting without synchronization as an incorrect idiom in this section.
(in your case, there is no guarantee that the value written to done in the goroutine will ever be observed by the main program)
Here, you need to do some kind of synchronization in the goroutine in order to guarantee that done = true happens before one of the iterations of the for loop in main.
The "while" (non-existent in Go) should be replaced by, for instance, a channel you block on (waiting for a communication)
for {
    <-c // 2
}
Based on a channel (c := make(chan bool)) created in main, and closed (close(c)) in the goroutine.
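Put together, a minimal runnable sketch of that channel-based fix might look like this (the value 42 stands in for f(...), which is not given in the question; a single receive is enough here):

package main

import "fmt"

func main() {
    var x int
    c := make(chan bool)
    go func() {
        x = 42   // placeholder for x = f(...)
        close(c) // the close happens before the receive below returns
    }()
    <-c // blocks until the goroutine has closed c
    fmt.Println(x)
}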
The sync package provides other means to wait for a gorountine to end before exiting main.
See for instance Golang Example Wait until all the background goroutine finish:
var w sync.WaitGroup
w.Add(1)
go func() {
    // do something
    w.Done()
}()
w.Wait()

Issue with runtime.LockOSThread() and runtime.UnlockOSThread

I have code like this:
Routine 1 {
    runtime.LockOSThread()
    print something
    send int to routine 2
    runtime.UnlockOSThread
}
Routine 2 {
    runtime.LockOSThread()
    print something
    send int to routine 1
    runtime.UnlockOSThread
}
main {
    go Routine1
    go Routine2
}
I use the runtime lock/unlock calls because I don't want the printing of Routine 1 to mix with the printing of Routine 2. However, after executing the above code, the output is the same as without lock/unlock (meaning the printed output is mixed). Can anybody help me understand why this is happening and how to force it to happen as intended?
NB: I give a single print something as an example, but there are actually lots of printing and sending events.
If you want to serialize "print something", i.e. each "print something" should be performed atomically, then just serialize it.
You can surround "print something" with a mutex. That will work unless the code deadlocks because of it, and it surely can in a non-trivial program.
The easy way in Go to serialize something is to do it with a channel. Collect, in a (go)routine, everything which should be printed together. When collection of the print unit is done, send it through a channel to some printing "agent" as a "print job" unit. That agent simply receives its "tasks" and atomically prints each one. You get that atomicity for free, and as an important bonus the code can no longer deadlock easily in the simple case where there are only non-interdependent "print unit"-generating goroutines.
I mean something like:
func printer(tasks chan string) {
    for s := range tasks {
        fmt.Print(s)
    }
}

func someAgentX(tasks chan string) {
    var printUnit string
    //...
    tasks <- printUnit
    //...
}

func main() {
    //...
    tasks := make(chan string, size)
    go printer(tasks)
    go someAgent1(tasks)
    //...
    go someAgentN(tasks)
    //...
    <-allDone
    close(tasks)
}
What runtime.LockOSThread does is prevent any other goroutine from running on the same thread. It forces the runtime to create a new thread and run Routine2 there. They are still running concurrently but on different threads.
You need to use sync.Mutex or some channel magic instead.
You rarely need to use runtime.LockOSThread, but it can be useful for forcing some higher-priority goroutine to run on a thread of its own.
package main

import (
    "fmt"
    "sync"
    "time"
)

var m sync.Mutex

func printing(s string) {
    m.Lock() // Other goroutines will block here until m is unlocked
    fmt.Println(s)
    m.Unlock() // Now another goroutine waiting at "m.Lock()" can continue
}

func main() {
    for i := 0; i < 10; i++ {
        go printing(fmt.Sprintf("Goroutine #%d", i))
    }
    <-time.After(3e9)
}
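For the "channel magic" alternative, a sketch along the lines of the printer/agent pattern from the other answer (the channel and function names here are only illustrative):

package main

import (
    "fmt"
    "time"
)

// printer is the only goroutine that writes to stdout, so lines never interleave.
func printer(lines chan string) {
    for s := range lines {
        fmt.Println(s)
    }
}

func main() {
    lines := make(chan string)
    go printer(lines)
    for i := 0; i < 10; i++ {
        go func(i int) {
            lines <- fmt.Sprintf("Goroutine #%d", i)
        }(i)
    }
    <-time.After(3 * time.Second)
}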
I think this is because runtime.LockOSThread()/runtime.UnlockOSThread() does not work this way all the time; it totally depends on the CPU, the execution environment, etc., and it can't be forced any other way.
