Explaining deadlocks with a single lock from The Little Go Book - go

I'm reading The Little Go Book.
Page 76 demonstrates how you can deadlock with a single lock:
package main

import (
    "sync"
    "time"
)

var lock sync.Mutex

func main() {
    go func() { lock.Lock() }()
    time.Sleep(time.Millisecond * 10)
    lock.Lock()
}
Running this results in a deadlock as explained by the author. However, what I don't understand is why.
I changed the program to this:
package main

import "sync"

var lock sync.Mutex

func main() {
    go func() { lock.Lock() }()
    lock.Lock()
}
My expectation was that a deadlock would still be reported. But it wasn't.
Could someone explain to me what's happening here please?
The only scenario I can think of that explains this is the following (but this is guesswork):
First example
The lock is acquired in the first goroutine
The call to time.Sleep() ensures the lock is acquired
The main function attempts to acquire the lock resulting in a deadlock
Program exits
Second example
The lock is acquired in the first goroutine but this takes some time to happen (??)
Since there is no delay the main function acquires the lock before the goroutine can
Program exits

In the first example, main sleeps long enough to give the child goroutine the opportunity to start and acquire the lock. That goroutine then will merrily exit without releasing the lock.
By the time main resumes its control flow, the shared mutex is locked so the next attempt to acquire it will block forever. Since at this point main is the only routine left alive, blocking forever results in a deadlock.
In the second example, without the call to time.Sleep, main proceeds straight away to acquire the lock. This succeeds, so main goes ahead and exits. The child goroutine would then block forever, but since main has exited, the program just terminates, without deadlock.
By the way, even if main didn't exit, as long as there is at least one goroutine which is not blocking, there's no deadlock. For this purpose, time.Sleep is not a blocking operation, it simply pauses execution for the specified time duration.
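To make the difference concrete, here is a small variant of the first program (my own sketch, not from the book): if the goroutine also unlocks the mutex before exiting, main can acquire it after the sleep and no deadlock is reported.

package main

import (
    "sync"
    "time"
)

var lock sync.Mutex

func main() {
    go func() {
        lock.Lock()
        lock.Unlock() // release the lock so main can acquire it later
    }()
    time.Sleep(time.Millisecond * 10)
    lock.Lock() // succeeds: the goroutine has already unlocked the mutex
}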

Go reports the deadlock error when all goroutines (including the main one) are asleep.
In your first example, the inner goroutine runs and terminates after it calls mutex.Lock(). Then the main goroutine tries to lock again, and it goes to sleep waiting for a chance to take the lock. At that point every goroutine in the program (namely, just the main one) is asleep, which triggers the deadlock error.
It is important to understand this because a deadlock may happen and yet no error will be shown as long as there is still a running goroutine, which is mostly what will happen in production. The error is reported only when the whole program is deadlocked.
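As a hedged illustration of that last point (my own sketch, not from the answer): if any goroutine is still active, for example one that keeps sleeping and printing, the runtime never reports the deadlock even though main is blocked forever.

package main

import (
    "fmt"
    "sync"
    "time"
)

var lock sync.Mutex

func main() {
    lock.Lock()

    // A goroutine that never blocks permanently: it keeps sleeping and printing.
    go func() {
        for {
            time.Sleep(time.Second)
            fmt.Println("still running")
        }
    }()

    // main blocks here forever, but because another goroutine is still active,
    // the runtime never prints "all goroutines are asleep - deadlock!".
    lock.Lock()
}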

Related

Is this a deadlock? Why does executing the program say it is a deadlock?

When I run this program:
package main

import (
    "fmt"
    "sync"
    "time"
)

var mu = new(sync.Mutex)

func f2() {
    mu.Lock()
    fmt.Println("call f2...")
}

func main() {
    go f2()
    time.Sleep(time.Second * 2)
    mu.Lock()
    fmt.Println("get lock in main")
}
I get this output:
call f2...
fatal error: all goroutines are asleep - deadlock!
playground link.
As we know, there are four conditions for deadlock. One of them is hold and wait (resource holding), which requires at least two kinds of resources to acquire, but here there is only one resource.
So is this a deadlock or not?
I know quite a lot about concurrent programming. It's just that in this case it is only f2() not releasing the lock, and that is not a deadlock as Wikipedia defines it.
Note beforehand: A "goroutine deadlock" (which is what you experience) is a state in which all existing goroutines are blocked and can never proceed on their own. It doesn't matter whether there are multiple existing goroutines or just a single one. If the Go runtime detects a goroutine deadlock, it terminates the app (there's no point letting the app hang forever; it will never recover, that's in the definition of a goroutine deadlock).
You have a single mutex in your code which is locked in 2 goroutines: in the main goroutine and the one executing f2().
If f2() reaches the point of locking the mutex first, the main goroutine will never be able to lock it, because no one ever unlocks the mutex, so it's a deadlock: f2() returns after locking and printing, its goroutine ends, and the only remaining (main) goroutine is blocked trying to lock mu. And since you used a 2-second sleep in main() between launching f2() (as a goroutine) and locking the mutex, you will observe this deadlock practically every time.
Note that if the main goroutine were scheduled to lock the mutex first, no deadlock would occur, because the main() function could continue to run, and once it returned, the app would terminate (without waiting for f2() to finish). But again, due to the sleep you inserted, you will likely never see this outcome.
I say "practically every time" and "likely never" because although a sleep instruction is a good scheduling point for the goroutine scheduler, it is not a synchronization point. The runtime will schedule other goroutines to run while a goroutine is sleeping (why wouldn't it), but this is not guaranteed. Synchronization points can give you a guarantee (such as channel communications, locks, sync.Once etc.). This is detailed in The Go Memory Model.
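As a sketch of what such a synchronization point could look like here (my own variant, not the original answer's code), the sleep can be replaced with a channel so that f2() is guaranteed to hold the mutex before main tries to lock it, making the deadlock deterministic:

package main

import (
    "fmt"
    "sync"
)

var mu = new(sync.Mutex)

func f2(locked chan<- struct{}) {
    mu.Lock()
    fmt.Println("call f2...")
    locked <- struct{}{} // signal that the mutex is now held
}

func main() {
    locked := make(chan struct{})
    go f2(locked)
    <-locked  // guaranteed: f2 has locked the mutex before main continues
    mu.Lock() // now this blocks every time, and the deadlock error is certain
    fmt.Println("get lock in main")
}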

Golang: forever channel

Just have a question, what is happening here?
forever := make(chan bool)
log.Printf(" [*] Waiting for messages. To exit press CTRL+C")
<-forever
That code creates an unbuffered channel, and attempts to receive from it.
And since no one ever sends anything on it, it's essentially a blocking forever operation.
The purpose of this is to keep the goroutine from ending / returning, most likely because there are other goroutines which do some work concurrently or they wait for certain events or incoming messages (like your log message says).
And this is needed because, without it, the application might quit without waiting for the other goroutines: if the main goroutine ends, the program ends as well. Quoting from Spec: Program execution:
Program execution begins by initializing the main package and then invoking the function main. When that function invocation returns, the program exits. It does not wait for other (non-main) goroutines to complete.
Check out this answer for similar and more techniques: Go project's main goroutine sleep forever?
For an introduction about channels, see What are channels used for?
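For comparison, a brief sketch (my own, assuming as in the tutorial that some other goroutine keeps doing work) of an equivalent way to block the main goroutine forever without a dedicated channel variable:

package main

import (
    "log"
    "time"
)

func main() {
    // Some background work that should keep running (stands in for the
    // message-consuming goroutine of the tutorial).
    go func() {
        for {
            log.Println("working...")
            time.Sleep(time.Second)
        }
    }()

    log.Printf(" [*] Waiting for messages. To exit press CTRL+C")

    // Equivalent to `<-forever`: an empty select blocks forever.
    // (If no other goroutine were left running, either form would make the
    // runtime report "all goroutines are asleep - deadlock!" instead.)
    select {}
}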
The code in the question is probably coming from the RabbitMQ golang tutorial here.
Here's a more extended chunk of it with some comments of my own:
...
// create an unbuffered channel for bool types.
// Type is not important but we have to give one anyway.
forever := make(chan bool)

// fire up a goroutine that hooks onto msgs channel and reads
// anything that pops into it. This essentially is a thread of
// execution within the main thread. msgs is a channel constructed by
// previous code.
go func() {
    for d := range msgs {
        log.Printf("Received a message: %s", d.Body)
    }
}()

log.Printf(" [*] Waiting for messages. To exit press CTRL+C")

// We need to block the main thread so that the above thread stays
// on reading from msgs channel. To do that just try to read in from
// the forever channel. As long as no one writes to it we will wait here.
// Since we are the only ones that know of it it is guaranteed that
// nothing gets written in it. We could also do a busy wait here but
// that would waste CPU cycles for no good reason.
<-forever

Discrepancies between Go Playground and Go on my machine?

To settle some misunderstandings I have about goroutines, I went to the Go playground and ran this code:
package main

import (
    "fmt"
)

func other(done chan bool) {
    done <- true
    go func() {
        for {
            fmt.Println("Here")
        }
    }()
}

func main() {
    fmt.Println("Hello, playground")
    done := make(chan bool)
    go other(done)
    <-done
    fmt.Println("Finished.")
}
As I expected, Go playground came back with an error: Process took too long.
This seems to imply that the goroutine created within other runs forever.
But when I run the same code on my own machine, I get this output almost instantaneously:
Hello, playground
Finished.
This seems to imply that the goroutine within other exits when the main goroutine finishes. Is this true? Or does the main goroutine finish, while the other goroutine continues to run in the background?
Edit: Default GOMAXPROCS has changed on the Go Playground, it now defaults to 8. In the "old" days it defaulted to 1. To get the behavior described in the question, set it to 1 explicitly with runtime.GOMAXPROCS(1).
Explanation of what you see:
On the Go Playground, GOMAXPROCS is 1 (proof).
This means one goroutine is executed at a time, and if that goroutine does not block, the scheduler is not forced to switch to other goroutines.
Your code (like every Go app) starts with a goroutine executing the main() function (the main goroutine). It starts another goroutine that executes the other() function, then it receives from the done channel - which blocks. So the scheduler must switch to the other goroutine (executing other() function).
In your other() function when you send a value on the done channel, that makes both the current (other()) and the main goroutine runnable. The scheduler chooses to continue to run other(), and since GOMAXPROCS=1, main() is not continued. Now other() launches another goroutine executing an endless loop. The scheduler chooses to execute this goroutine which takes forever to get to a blocked state, so main() is not continued.
And then the timeout of the Go Playground's sandbox comes as an absolution:
process took too long
Note that the Go Memory Model only guarantees that certain events happen before other events, you have no guarantee how 2 concurrent goroutines are executed. Which makes the output non-deterministic.
You are not to question any execution order that does not violate the Go Memory Model. If you want the execution to reach certain points in your code (to execute certain statements), you need explicit synchronization (you need to synchronize your goroutines).
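A minimal sketch of what such explicit synchronization could look like (my own variant of the question's program, with a hypothetical second channel named printed): the inner goroutine prints once and then signals, so "Here" is guaranteed to appear before "Finished." regardless of GOMAXPROCS.

package main

import "fmt"

func other(done, printed chan bool) {
    done <- true
    go func() {
        fmt.Println("Here")
        printed <- true // signal that the line was actually printed
    }()
}

func main() {
    fmt.Println("Hello, playground")
    done := make(chan bool)
    printed := make(chan bool)
    go other(done, printed)
    <-done
    <-printed // explicit synchronization: wait for the inner goroutine
    fmt.Println("Finished.")
}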
Also note that the output on the Go Playground is cached, so if you run the app again, it won't be run again, but instead the cached output will be presented immediately. If you change anything in the code (e.g. insert a space or a comment) and then you run it again, it then will be compiled and run again. You will notice it by the increased response time. Using the current version (Go 1.6) you will see the same output every time though.
Running locally (on your machine):
When you run it locally, most likely GOMAXPROCS will be greater than 1 as it defaults to the number of CPU cores available (since Go 1.5). So it doesn't matter if you have a goroutine executing an endless loop, another goroutine will be executed simultaneously, which will be the main(), and when main() returns, your program terminates; it does not wait for other non-main goroutines to complete (see Spec: Program execution).
Also note that even if you set GOMAXPROCS to 1, your app will most likely exit in a "short" time, as the scheduler implementation will switch to other goroutines rather than execute the endless loop forever (however, as stated above, this is non-deterministic). And when it switches, it will be to the main() goroutine, so when main() finishes and returns, your app terminates.
Playing with your app on the Go Playground:
As mentioned, by default GOMAXPROCS is 1 on the Go Playground. However it is allowed to set it to a higher value, e.g.:
runtime.GOMAXPROCS(2)
Without explicit synchronization, execution still remains non-deterministic, however you will observe a different execution order and a termination without running into a timeout:
Hello, playground
Here
Here
Here
...
<Here is printed 996 times, then:>
Finished.
Try this variant on the Go Playground.
What you will see on screen is nondeterministic. More precisely, if by any chance the send of the true value on the channel is delayed, you will see some "Here" lines.
Also, output to Stdout may be buffered, meaning it is not necessarily printed instantaneously; the data accumulates and is flushed later. In your case, before "Here" can be printed the main function has already finished, and thus the process exits.
The rule of thumb is: the main function must still be alive, otherwise all other goroutines get killed.

OK to exit program with active goroutine?

Take the following code snippet:
func main() {
    ch := make(chan int)
    quit := make(chan int)

    go func() {
        for {
            ch <- querySomePeriodicThing()
        }
    }()

    // ...

loop:
    for {
        select {
        case <-ch:
            handlePeriodicThing()
        case <-quit:
            break loop
        }
    }
}
The goroutine should run for the duration of execution. When the select statement receives something from the quit channel, it breaks out of the loop and the program ends, without any attempt to stop the goroutine.
My question: will this have any intermittent adverse effects that are not obvious from running it once or twice? I know that in other languages threads should be cleaned up (i.e., exited) before the program ends, but is Go different? Assume that querySomePeriodicThing() does not open file descriptors or sockets or anything that would be bad to leave open.
As mentioned in the spec, your program will exit when the main function completes:
Program execution begins by initializing the main package and then invoking the function main. When that function invocation returns, the program exits. It does not wait for other (non-main) goroutines to complete.
So the fact you have other goroutines still running is not a problem from a language point of view. It may still be a problem depending on what your program is doing.
If the goroutine has created some resources that should be cleaned up before program exit, then having execution stop mid-way through might be a problem: in this case, you should make your main function wait for them to complete first. There is no equivalent to pthread_join, so you will need to code this yourself (e.g. by using a channel or sync.WaitGroup).
Note that some resources are automatically cleaned up by the operating system on process exit (e.g. open files, file locks, etc.), so in some cases no special clean-up will be necessary.
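A rough sketch of what that hand-rolled join could look like, assuming the names from the question (querySomePeriodicThing is stubbed out here purely so the snippet compiles):

package main

import "sync"

// stub standing in for the question's periodic query, so the sketch compiles
func querySomePeriodicThing() int { return 0 }

func main() {
    ch := make(chan int)
    quit := make(chan struct{})
    var wg sync.WaitGroup

    wg.Add(1)
    go func() {
        defer wg.Done()
        for {
            select {
            case ch <- querySomePeriodicThing():
            case <-quit:
                return // stop cleanly when shutdown is requested
            }
        }
    }()

    // ... main select loop elided; at shutdown time:
    close(quit)
    wg.Wait() // the Go equivalent of a join: wait for the goroutine to finish
}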
Goroutines aren't threads, they are very lightweight and the runtime automatically cleans them up when they are no longer running, or if the program exits.

Why is this Go code blocking?

I wrote the following program:
package main

import (
    "fmt"
)

func processevents(list chan func()) {
    for {
        //a := <-list
        //a()
    }
}

func test() {
    fmt.Println("Ho!")
}

func main() {
    eventlist := make(chan func(), 100)
    go processevents(eventlist)
    for {
        eventlist <- test
        fmt.Println("Hey!")
    }
}
Since the channel eventlist is a buffered channel, I think I should get the output "Hey!" exactly 100 times, but it is displayed only once. Where is my mistake?
Update (Go version 1.2+)
As of Go 1.2, the scheduler works on the principle of pre-emptive multitasking.
This means that the problem in the original question (and the solution presented below) are no longer relevant.
From the Go 1.2 release notes
Pre-emption in the scheduler
In prior releases, a goroutine that was looping forever could starve out other goroutines on the same thread, a serious problem when GOMAXPROCS provided only one user thread. In Go 1.2, this is partially addressed: The scheduler is invoked occasionally upon entry to a function. This means that any loop that includes a (non-inlined) function call can be pre-empted, allowing other goroutines to run on the same thread.
Short answer
It is not blocking on the writes. It is stuck in the infinite loop of processevents.
This loop never yields to the scheduler, so every other goroutine is starved indefinitely.
If you comment out the call to processevents, you will get the expected results for the first 100 writes, which fill the channel's buffer. The next write then blocks, and since nobody reads from the channel, the runtime reports the all-goroutines-asleep deadlock error.
Another solution is to put a call to runtime.Gosched() in the loop.
Long answer
With Go1.0.2, Go's scheduler works on the principle of Cooperative multitasking.
This means that it allocates CPU time to the various goroutines running within a given OS thread by having these routines interact with the scheduler in certain conditions.
These 'interactions' occur when certain types of code are executed in a goroutine.
In go's case this involves doing some kind of I/O, syscalls or memory allocation (in certain conditions).
In the case of an empty loop, no such conditions are ever encountered. The scheduler is therefore never allowed to run its scheduling algorithms for as long as that loop is running. This consequently prevents it from allotting CPU time to other goroutines waiting to be run and the result you observed ensues: You effectively created a deadlock that can not be detected or broken out of by the scheduler.
The empty loop is usually never desired in Go and will, in most cases, indicate a bug in the program. If you do need it for whatever reason, you have to manually yield to the scheduler by calling runtime.Gosched() in every iteration.
for {
    runtime.Gosched()
}
Setting GOMAXPROCS to a value > 1 was mentioned as a solution. While this will get rid of the immediate problem you observed, it will effectively move the problem to a different OS thread, if the scheduler decides to move the looping goroutine to its own OS thread that is. There is no guarantee of this, unless you call runtime.LockOSThread() at the start of the processevents function. Even then, I would still not rely on this approach to be a good solution. Simply calling runtime.Gosched() in the loop itself, will solve all the issues, regardless of which OS thread the goroutine is running in.
Here is another solution - use range to read from the channel. This code will yield to the scheduler correctly and also terminate properly when the channel is closed.
func processevents(list chan func()) {
    for a := range list {
        a()
    }
}
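One detail this relies on: the range loop only ends when the sender closes the channel. A possible wiring of the producer side (my own sketch, with a hypothetical done channel so main waits for the consumer to drain the queue):

package main

import "fmt"

func processevents(list chan func(), done chan struct{}) {
    for a := range list { // receives until the channel is closed and drained
        a()
    }
    close(done) // signal that every queued event has been handled
}

func test() {
    fmt.Println("Ho!")
}

func main() {
    eventlist := make(chan func(), 100)
    done := make(chan struct{})
    go processevents(eventlist, done)
    for i := 0; i < 100; i++ {
        eventlist <- test
        fmt.Println("Hey!")
    }
    close(eventlist) // lets the range loop in processevents terminate
    <-done           // wait for the consumer to finish before main returns
}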
Good news: since Go 1.2 (December 2013), the original program works as expected.
You may try it on Playground.
This is explained in the Go 1.2 release notes, section "Pre-emption in the scheduler" :
In prior releases, a goroutine that was looping forever could starve out other goroutines on the same thread, a serious problem when GOMAXPROCS provided only one user thread. In Go 1.2, this is partially addressed: The scheduler is invoked occasionally upon entry to a function.
