Unlocking mutex without defer [duplicate] - go

Going through the standard library, I see a lot of functions similar to the following:
// src/database/sql/sql.go
func (dc *driverConn) removeOpenStmt(ds *driverStmt) {
    dc.Lock()
    defer dc.Unlock()
    delete(dc.openStmt, ds)
}
...
func (db *DB) addDep(x finalCloser, dep interface{}) {
    //println(fmt.Sprintf("addDep(%T %p, %T %p)", x, x, dep, dep))
    db.mu.Lock()
    defer db.mu.Unlock()
    db.addDepLocked(x, dep)
}
// src/expvar/expvar.go
func (v *Map) addKey(key string) {
    v.keysMu.Lock()
    defer v.keysMu.Unlock()
    v.keys = append(v.keys, key)
    sort.Strings(v.keys)
}
// etc...
I.e.: simple functions with no returns and presumably no way to panic that are still deferring the unlock of their mutex. As I understand it, the overhead of a defer has been improved (and perhaps is still in the process of being improved), but that being said: Is there any reason to include a defer in functions like these? Couldn't these types of defers end up slowing down a high traffic function?

Always deferring things like Mutex.Unlock() and WaitGroup.Done() at the top of the function makes debugging and maintenance easier: you see immediately that those important pieces are handled correctly, and you can quickly move on to other issues.
It's not a big deal in 3 line functions, but consistent-looking code is also just easier to read in general. Then as the code grows, you don't have to worry about adding an expression that may panic, or complicated early return logic, because the pre-existing defers will always work correctly.
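For example, here is a minimal sketch (the cache type and its field names are made up for illustration) of how a deferred Unlock keeps a later-added early return correct:
package main

import (
    "fmt"
    "sync"
)

// cache is a hypothetical type used only to illustrate the point.
type cache struct {
    mu    sync.Mutex
    items map[string]string
}

// get takes the lock once at the top; the deferred Unlock covers the
// early return as well as the normal one, so adding more exit paths
// later cannot leave the mutex locked.
func (c *cache) get(key string) (string, bool) {
    c.mu.Lock()
    defer c.mu.Unlock()

    v, ok := c.items[key]
    if !ok {
        return "", false // early return: the lock is still released
    }
    return v, true
}

func main() {
    c := &cache{items: map[string]string{"a": "1"}}
    fmt.Println(c.get("a"))
    fmt.Println(c.get("b"))
}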

A panic is a sudden (and therefore often unanticipated or unprepared-for) violation of normal control flow. It can potentially emerge from almost anything, quite often from external causes, for example a memory failure. The defer mechanism gives you an easy and fairly cheap tool to perform exit operations, so the system is not left in a broken state. This is important for locks in high-load applications, because it helps you avoid losing a locked resource and freezing the whole system on that lock.
And even if, for the moment, the code has no place it can panic (hard to imagine such a system ;), things evolve. Later the function may become more complex and able to panic.
Conclusion: defer helps you ensure your function exits correctly if something goes wrong. Just as importantly, it is future-proof: the same exit path handles different failures.
So it's good style to use it even in simple functions. As a programmer you can see that nothing is leaked, and be more confident in the code.
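To make the panic argument concrete, here is a small sketch (hypothetical names): the function panics partway through, but because Unlock was deferred it still runs while the panic unwinds, so a caller that recovers does not find the mutex stuck locked:
package main

import (
    "fmt"
    "sync"
)

var mu sync.Mutex

// update panics on an out-of-range index, but the deferred Unlock still
// runs during the panic unwind, so the mutex is not left locked.
func update(vals []int, i int) {
    mu.Lock()
    defer mu.Unlock()
    vals[i] = 42 // panics if i is out of range
}

func main() {
    func() {
        defer func() {
            if r := recover(); r != nil {
                fmt.Println("recovered:", r)
            }
        }()
        update(make([]int, 1), 5) // index out of range: panics
    }()

    // Because Unlock was deferred, this does not deadlock.
    mu.Lock()
    fmt.Println("lock acquired after the panic")
    mu.Unlock()
}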

Related

Will time.Tick cause memory leak when I never need to stop it?

Consider the following code:
go func() {
    for now := range time.Tick(time.Minute) {
        fmt.Println(now, statusUpdate())
    }
}()
I need the for loop to run forever and never need to stop it.
Will this cause a memory leak or not?
I know that if I ever need to break out of the for loop, it will cause a leak. But what if I don't need to break out of it?
The doc says
While Tick is useful for clients that have no need to shut down the Ticker, be aware that without a way to shut it down the underlying Ticker cannot be recovered by the garbage collector; it "leaks".
I just want to get it right.
First, let's see a definition of what a "memory leak" is, from Wikipedia:
In computer science, a memory leak is a type of resource leak that occurs when a computer program incorrectly manages memory allocations in a way that memory which is no longer needed is not released.
It's important to note that in the doc you quoted, it does not specifically mention a "memory leak", just a "leak" (meaning "resource leak"). The resources in question are not only the memory used by the ticker, but the goroutine that runs it. Because of this, I'll interpret this definition as applying to "resource leaks" more broadly.
As mentioned by the doc you quoted, time.Tick does not make it possible to release the ticker's resources.
So by this definition, if the ticker is no longer needed at any subsequent point in the program, then a resource leak has occurred. If the ticker is always needed for the remainder of the program after it is created, then it's not a leak.
Further in the Wikipedia definition, however, is this note:
A memory leak may also happen when an object is stored in memory but cannot be accessed by the running code.
Again, time.Tick does not make it possible to release the ticker's resources.
So by this continued definition, you might say that the use of time.Tick is always a resource leak.
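To make the "breaking" case concrete, here is a minimal sketch (statusUpdate is a hypothetical stand-in for the real per-tick work): once the loop breaks, the ticker created by time.Tick keeps running and can never be recovered:
package main

import (
    "fmt"
    "time"
)

// statusUpdate is a hypothetical stand-in for the real per-tick work.
func statusUpdate() string { return "status: ok" }

func main() {
    // The loop breaks after the first tick, but the ticker created by
    // time.Tick cannot be stopped, so its resources are never recovered
    // for the rest of the program: a resource leak.
    for now := range time.Tick(time.Second) {
        fmt.Println(now, statusUpdate())
        break
    }

    // The program keeps running; the abandoned ticker keeps firing into
    // a channel nobody reads any more.
    time.Sleep(3 * time.Second)
}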
In practical terms, as long as you range over the time.Tick without breaking, you have reasonable assurance that the ticker is going to continue to be used for the remainder of the program, and there will be no "leak". If you have any doubt about whether a ticker will be used forever, then use time.NewTicker and Stop() it appropriately:
go func() {
    ticker := time.NewTicker(time.Minute)
    defer ticker.Stop()
    for now := range ticker.C {
        fmt.Println(now, statusUpdate())
        // some exceptional condition
        if something {
            break
        }
    }
}()

Are there any advantages to having a defer in a simple, no return, non-panicking function?


Why Go's bufio uses panic under the hood?

Reading the code of the bufio package, I found this:
// fill reads a new chunk into the buffer.
func (b *Reader) fill() {
    ...
    if b.w >= len(b.buf) {
        panic("bufio: tried to fill full buffer")
    }
    ...
}
At the same time the Effective Go section about panic contains the next paragraph:
This is only an example but real library functions should avoid panic.
If the problem can be masked or worked around, it's always better to
let things continue to run rather than taking down the whole program.
So, I wonder, is the problem with a particular buffered reader so important to cause the panic call in the standard library code?
It may be questionable, but consider: fill is a private method, and b.w and b.buf are private fields. If the condition that causes the panic is ever true, it's due to a logic error in the implementation of bufio. Since it should never really be possible to get into that state in the first place (a "can't happen" condition), we don't really know how we got there, and it's unclear how much other state got corrupted before the problem was detected and what, if anything, the user can do about it. In that kind of situation, a panic seems reasonable.
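As a hedged sketch of the same pattern (the type and field names here are made up, not bufio's): an unexported method panics when a private invariant that the package itself maintains turns out to be violated, because that can only mean a bug inside the package rather than misuse by the caller:
package ring

// ring is a hypothetical fixed-size buffer; the package's exported
// methods always keep w < len(buf) before calling fill.
type ring struct {
    buf []byte
    w   int
}

// fill is unexported: if it is ever called with a full buffer, that is a
// logic error inside this package ("can't happen"), so panicking is more
// honest than returning an error the caller could not act on anyway.
func (r *ring) fill(p []byte) {
    if r.w >= len(r.buf) {
        panic("ring: tried to fill full buffer")
    }
    r.w += copy(r.buf[r.w:], p)
}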

Go "panic: sync: unlock of unlocked mutex" without a known reason

I have a CLI application in Go (still in development). No changes were made to the source code or to the dependencies, but all of a sudden it started to panic with panic: sync: unlock of unlocked mutex.
The only place I'm running concurrent code is to handle when program is requested to close:
func handleProcTermination() {
    c := make(chan os.Signal, 1)
    signal.Notify(c, os.Interrupt)
    go func() {
        <-c
        curses.Endwin()
        os.Exit(0)
    }()
    defer curses.Endwin()
}
The only thing I did was rename my $GOPATH and workspace folder. Can this operation cause such an error?
Have any of you experienced a related problem without finding an explanation? Is there a rational checklist that would help track down the cause of the problem?
Ok, after some unfruitful debugging sessions, as a last resort, I simply wiped all third party code (dependencies) from the workspace:
cd $GOPATH
rm -rf pkg/ bin/ src/github.com src/golang.org # the idea is to remove all except your own source
Used go get to get all used dependencies again:
go get github.com/yadayada/yada
go get # etc
And the problem is gone! The application starts normally and the tests pass. No startup panics anymore. It looks like this problem happens when you mv your workspace folder, but I'm not 100% sure yet. Hope it helps someone else.
From now on, reinstalling dependencies will be my first step when weird panics like that suddenly appear.
You're not giving much information to go on, so my answer is generic.
In theory, bugs in concurrent code can remain unnoticed for a long time and then suddenly show up. In practice, if the bug is easily repeatable (happens nearly every run) this usually indicates that something did change in the code or environment.
The solution: debug.
Knowing what has changed can be helpful in identifying the bug. In this case, it appears that lock/unlock pairs are not matching up. If you are not passing locks between threads, you should be able to find a code path within the thread that has not acquired the lock, or has released it early. It may be helpful to put assertions at certain points to validate that you are holding the lock when you think you are.
Make sure you don't copy the lock somewhere.
What can happen with seemingly bulletproof code in concurrent environments is that the struct including the code gets copied elsewhere, which results in the underlying lock being different.
Consider this code snippet:
type someStruct struct {
    lock sync.Mutex
}

func (s *someStruct) DoSomethingUnderLock() {
    s.lock.Lock()
    defer s.lock.Unlock() // This will panic
    time.Sleep(200 * time.Millisecond)
}

func main() {
    s1 := &someStruct{}
    go func() {
        time.Sleep(100 * time.Millisecond) // Wait until DoSomethingUnderLock takes the lock
        s2 := &someStruct{}
        *s1 = *s2
    }()
    s1.DoSomethingUnderLock()
}
*s1 = *s2 is the key here - it results in a different lock being used by the same receiver function and if the struct is replaced while the lock is taken, we'll get sync: unlock of unlocked mutex.
What makes it harder to debug is that someStruct might be nested in another struct (and another, and another), and if the outer struct gets replaced (as long as someStruct is not a reference there) in a similar manner, the result will be the same.
If this is the case, you can use a reference to the lock (or the whole struct) instead. Now you need to initialize it, but it's a small price that might save you some obscure bugs. See the modified code that doesn't panic here.
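As a sketch of that fix (an assumed shape, not the code behind the link above): hold the mutex by pointer, so replacing the struct value no longer swaps out the lock that is currently held:
package main

import (
    "sync"
    "time"
)

// someStruct now holds the mutex by reference; it must be initialized.
type someStruct struct {
    lock *sync.Mutex
}

func newSomeStruct() *someStruct {
    return &someStruct{lock: &sync.Mutex{}}
}

func (s *someStruct) DoSomethingUnderLock() {
    s.lock.Lock()
    defer s.lock.Unlock() // the receiver is captured here, before any overwrite
    time.Sleep(200 * time.Millisecond)
}

func main() {
    s1 := newSomeStruct()
    go func() {
        time.Sleep(100 * time.Millisecond) // wait until DoSomethingUnderLock holds the lock
        s2 := newSomeStruct()
        *s1 = *s2 // replaces the lock pointer, but the deferred Unlock still targets the old Mutex
    }()
    s1.DoSomethingUnderLock() // no panic
}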
For those who come here and still haven't solved their problem: check whether the application was compiled on one version of Linux but is running on another. At least in my case, that was what happened.

Recursive locking in Go

Go's sync package has a Mutex. Unfortunately it's not recursive. What's the best way to implement recursive locks in Go?
I'm sorry to not answer your question directly:
IMHO, the best way to implement recursive locks in Go is not to implement them, but rather to redesign your code so that it does not need them in the first place. The desire for them probably indicates that a wrong approach is being taken to some (here unknown) problem.
As indirect "proof" of the above claim: were a recursive lock a common/correct approach to the usual situations involving mutexes, it would sooner or later have been included in the standard library.
And finally, last but not least: What Russ Cox from the Go development team wrote here https://groups.google.com/d/msg/golang-nuts/XqW1qcuZgKg/Ui3nQkeLV80J:
Recursive (aka reentrant) mutexes are a bad idea.
The fundamental reason to use a mutex is that mutexes
protect invariants, perhaps internal invariants like
"p.Prev.Next == p for all elements of the ring", or perhaps
external invariants like "my local variable x is equal to p.Prev."
Locking a mutex asserts "I need the invariants to hold"
and perhaps "I will temporarily break those invariants."
Releasing the mutex asserts "I no longer depend on those
invariants" and "If I broke them, I have restored them."
Understanding that mutexes protect invariants is essential to
identifying where mutexes are needed and where they are not.
For example, does a shared counter updated with atomic
increment and decrement instructions need a mutex?
It depends on the invariants. If the only invariant is that
the counter has value i - d after i increments and d decrements,
then the atomicity of the instructions ensures the
invariants; no mutex is needed. But if the counter must be
in sync with some other data structure (perhaps it counts
the number of elements on a list), then the atomicity of
the individual operations is not enough. Something else,
often a mutex, must protect the higher-level invariant.
This is the reason that operations on maps in Go are not
guaranteed to be atomic: it would add expense without
benefit in typical cases.
Let's take a look at recursive mutexes.
Suppose we have code like this:
func F() {
    mu.Lock()
    ... do some stuff ...
    G()
    ... do some more stuff ...
    mu.Unlock()
}

func G() {
    mu.Lock()
    ... do some stuff ...
    mu.Unlock()
}
Normally, when a call to mu.Lock returns, the calling code
can now assume that the protected invariants hold, until
it calls mu.Unlock.
A recursive mutex implementation would make G's mu.Lock
and mu.Unlock calls be no-ops when called from within F
or any other context where the current thread already holds mu.
If mu used such an implementation, then when mu.Lock
returns inside G, the invariants may or may not hold. It depends
on what F has done before calling G. Maybe F didn't even realize
that G needed those invariants and has broken them (entirely
possible, especially in complex code).
Recursive mutexes do not protect invariants.
Mutexes have only one job, and recursive mutexes don't do it.
There are simpler problems with them, like if you wrote
func F() {
    mu.Lock()
    ... do some stuff
}
you'd never find the bug in single-threaded testing.
But that's just a special case of the bigger problem,
which is that they provide no guarantees at all about
the invariants that the mutex is meant to protect.
If you need to implement functionality that can be called
with or without holding a mutex, the clearest thing to do
is to write two versions. For example, instead of the above G,
you could write:
// To be called with mu already held.
// Caller must be careful to ensure that ...
func g() {
    ... do some stuff ...
}

func G() {
    mu.Lock()
    g()
    mu.Unlock()
}
or if they're both unexported, g and gLocked.
I am sure that we'll need TryLock eventually; feel free to
send us a CL for that. Lock with timeout seems less essential
but if there were a clean implementation (I don't know of one)
then maybe it would be okay. Please don't send a CL that
implements recursive mutexes.
Recursive mutexes are just a mistake, nothing more than
a comfortable home for bugs.
Russ
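As a compilable sketch of the two-version pattern Russ describes (the Counter type and its names are made up here): the unexported variant assumes the mutex is already held, and the exported one takes it:
package counter

import "sync"

// Counter illustrates the "locked variant" pattern from the quote above.
type Counter struct {
    mu sync.Mutex
    n  int
}

// addLocked must only be called with c.mu already held.
func (c *Counter) addLocked(delta int) {
    c.n += delta
}

// Add is the exported entry point: it takes the lock and delegates.
func (c *Counter) Add(delta int) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.addLocked(delta)
}

// AddTwice holds the lock once and calls the locked variant twice; no
// recursive mutex is needed to compose operations under one lock.
func (c *Counter) AddTwice(delta int) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.addLocked(delta)
    c.addLocked(delta)
}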
You could quite easily make a recursive lock out of a sync.Mutex and a sync.Cond. See Appendix A here for some ideas.
Except for the fact that the Go runtime doesn't expose any notion of goroutine Id. This is to stop people doing silly things with goroutine local storage, and probably indicates that the designers think that if you need a goroutine Id you are doing it wrong.
You can of course dig the goroutine Id out of the runtime with a bit of C if you really want to. You might want to read that thread to see why the designers of Go think it is a bad idea.
As was already established, this is a miserable, horrible, awful, and terrible idea from a concurrency perspective.
Anyway, since your question is really about Go's type system, here's how you would define a type with a recursive method.
type Foo struct{}

func (f Foo) Bar() { fmt.Println("bar") }

type FooChain struct {
    Foo
    child *FooChain
}

func (f FooChain) Bar() {
    if f.child != nil {
        f.child.Bar()
    }
    f.Foo.Bar()
}

func main() {
    fmt.Println("no children")
    f := new(FooChain)
    f.Bar()
    for i := 0; i < 10; i++ {
        f = &FooChain{Foo{}, f}
    }
    fmt.Println("with children")
    f.Bar()
}
http://play.golang.org/p/mPBHKpgxnd

Resources