Recursive locking in Go

Go's sync package has a Mutex. Unfortunately it's not recursive. What's the best way to implement recursive locks in Go?

I'm sorry not to answer your question directly:
IMHO, the best way to implement recursive locks in Go is to not implement them, but rather to redesign your code so that it does not need them in the first place. The desire for them probably indicates that a wrong approach to some (unknown here) problem is being taken.
As an indirect "proof" of the above claim: were a recursive lock a common/correct approach to the usual situations involving mutexes, it would sooner or later have been included in the standard library.
And finally, last but not least: What Russ Cox from the Go development team wrote here https://groups.google.com/d/msg/golang-nuts/XqW1qcuZgKg/Ui3nQkeLV80J:
Recursive (aka reentrant) mutexes are a bad idea.
The fundamental reason to use a mutex is that mutexes
protect invariants, perhaps internal invariants like
"p.Prev.Next == p for all elements of the ring", or perhaps
external invariants like "my local variable x is equal to p.Prev."
Locking a mutex asserts "I need the invariants to hold"
and perhaps "I will temporarily break those invariants."
Releasing the mutex asserts "I no longer depend on those
invariants" and "If I broke them, I have restored them."
Understanding that mutexes protect invariants is essential to
identifying where mutexes are needed and where they are not.
For example, does a shared counter updated with atomic
increment and decrement instructions need a mutex?
It depends on the invariants. If the only invariant is that
the counter has value i - d after i increments and d decrements,
then the atomicity of the instructions ensures the
invariants; no mutex is needed. But if the counter must be
in sync with some other data structure (perhaps it counts
the number of elements on a list), then the atomicity of
the individual operations is not enough. Something else,
often a mutex, must protect the higher-level invariant.
This is the reason that operations on maps in Go are not
guaranteed to be atomic: it would add expense without
benefit in typical cases.
Let's take a look at recursive mutexes.
Suppose we have code like this:
func F() {
mu.Lock()
... do some stuff ...
G()
... do some more stuff ...
mu.Unlock()
}
func G() {
mu.Lock()
... do some stuff ...
mu.Unlock()
}
Normally, when a call to mu.Lock returns, the calling code
can now assume that the protected invariants hold, until
it calls mu.Unlock.
A recursive mutex implementation would make G's mu.Lock
and mu.Unlock calls be no-ops when called from within F
or any other context where the current thread already holds mu.
If mu used such an implementation, then when mu.Lock
returns inside G, the invariants may or may not hold. It depends
on what F has done before calling G. Maybe F didn't even realize
that G needed those invariants and has broken them (entirely
possible, especially in complex code).
Recursive mutexes do not protect invariants.
Mutexes have only one job, and recursive mutexes don't do it.
There are simpler problems with them, like if you wrote
func F() {
mu.Lock()
... do some stuff
}
you'd never find the bug in single-threaded testing.
But that's just a special case of the bigger problem,
which is that they provide no guarantees at all about
the invariants that the mutex is meant to protect.
If you need to implement functionality that can be called
with or without holding a mutex, the clearest thing to do
is to write two versions. For example, instead of the above G,
you could write:
// To be called with mu already held.
// Caller must be careful to ensure that ...
func g() {
... do some stuff ...
}
func G() {
mu.Lock()
g()
mu.Unlock()
}
or if they're both unexported, g and gLocked.
I am sure that we'll need TryLock eventually; feel free to
send us a CL for that. Lock with timeout seems less essential
but if there were a clean implementation (I don't know of one)
then maybe it would be okay. Please don't send a CL that
implements recursive mutexes.
Recursive mutexes are just a mistake, nothing more than
a comfortable home for bugs.
Russ

You could quite easily make a recursive lock out of a sync.Mutex and a sync.Cond. See Appendix A here for some ideas.
Except for the fact that the Go runtime doesn't expose any notion of goroutine Id. This is to stop people doing silly things with goroutine local storage, and probably indicates that the designers think that if you need a goroutine Id you are doing it wrong.
You can of course dig the goroutine Id out of the runtime with a bit of C if you really want to. You might want to read that thread to see why the designers of Go think it is a bad idea.

As was already established, this is a miserable, horrible, awful, and terrible idea from a concurrency perspective.
Anyway, since your question is really about Go's type system, here's how you would define a type with a recursive method.
type Foo struct{}
func (f Foo) Bar() { fmt.Println("bar") }
type FooChain struct {
Foo
child *FooChain
}
func (f FooChain) Bar() {
if f.child != nil {
f.child.Bar()
}
f.Foo.Bar()
}
func main() {
fmt.Println("no children")
f := new(FooChain)
f.Bar()
for i := 0; i < 10; i++ {
f = &FooChain{Foo{}, f}
}
fmt.Println("with children")
f.Bar()
}
http://play.golang.org/p/mPBHKpgxnd

Related

Unlocking mutex without defer [duplicate]

Going through the standard library, I see a lot of functions similar to the following:
// src/database/sql/sql.go
func (dc *driverConn) removeOpenStmt(ds *driverStmt) {
dc.Lock()
defer dc.Unlock()
delete(dc.openStmt, ds)
}
...
func (db *DB) addDep(x finalCloser, dep interface{}) {
//println(fmt.Sprintf("addDep(%T %p, %T %p)", x, x, dep, dep))
db.mu.Lock()
defer db.mu.Unlock()
db.addDepLocked(x, dep)
}
// src/expvar/expvar.go
func (v *Map) addKey(key string) {
v.keysMu.Lock()
defer v.keysMu.Unlock()
v.keys = append(v.keys, key)
sort.Strings(v.keys)
}
// etc...
I.e.: simple functions with no returns and presumably no way to panic that are still deferring the unlock of their mutex. As I understand it, the overhead of a defer has been improved (and perhaps is still in the process of being improved), but that being said: Is there any reason to include a defer in functions like these? Couldn't these types of defers end up slowing down a high traffic function?
Always deferring things like Mutex.Unlock() and WaitGroup.Done() at the top of the function makes debugging and maintenance easier: you see immediately that those important pieces are taken care of, and can quickly move on to other issues.
It's not a big deal in 3 line functions, but consistent-looking code is also just easier to read in general. Then as the code grows, you don't have to worry about adding an expression that may panic, or complicated early return logic, because the pre-existing defers will always work correctly.
A panic is a sudden (and thus possibly unpredicted and unprepared-for) violation of normal control flow. It can potentially emerge from almost anything, quite often from external factors such as a memory failure. The defer mechanism gives an easy and quite cheap tool for performing exit operations, so that the system is not left in a broken state. This is important for locks in high-load applications, because it helps avoid losing locked resources and freezing the whole system on a single stuck lock.
And even if at some moment the code has no places where it can panic (hard to imagine such a system ;), things evolve: later the function may become more complex and able to panic.
Conclusion: defer helps you ensure that your function will exit correctly if something goes wrong. Just as importantly, it is future-proof: the same remedy covers different failures.
So it's good style to use it even in simple functions. As a programmer you can see that nothing is lost, and you can be more confident in the code.

Question on the go memory model,the last example

i have a question on the go memory model.
in the last example:
type T struct {
msg string
}
var g *T
func setup() {
t := new(T)
t.msg = "hello, world"
g = t
}
func main() {
go setup()
for g == nil {
}
print(g.msg)
}
In my opinion, reads and writes of values of a single machine word are atomic. I have run the test many times, and the message can always be observed.
So please tell me why g.msg is not guaranteed to be observed? I want to know the reason in detail, please.
Because there are 2 write operations in the launched goroutine:
t := new(T) // One
t.msg = "hello, world" // Two
g = t
It may be that the main goroutine will observe the non-nil pointer assignment to g in the last line, but since there is no explicit synchronization between the 2 goroutines, the compiler is allowed to reorder the operations (that doesn't change the behavior in the launched goroutine), e.g. to the following:
t := new(T) // One
g = t
t.msg = "hello, world" // Two
If the operations were rearranged like this, the behavior of the launched goroutine (setup()) would not change, so the compiler is allowed to do this. And in this case the main goroutine could observe the effect of g = t, but not t.msg = "hello, world".
Why would a compiler reorder the operations? E.g. because a different order may result in more efficient code. For example, if the pointer assigned to t is already in a register, it can be assigned to g right away, without having to reload it later.
This is mentioned in the Happens Before section:
Within a single goroutine, reads and writes must behave as if they executed in the order specified by the program. That is, compilers and processors may reorder the reads and writes executed within a single goroutine only when the reordering does not change the behavior within that goroutine as defined by the language specification. Because of this reordering, the execution order observed by one goroutine may differ from the order perceived by another. For example, if one goroutine executes a = 1; b = 2;, another might observe the updated value of b before the updated value of a.
If you use proper synchronization, it forbids the compiler from performing any such rearrangement that would change the behavior observed from other goroutines.
Running your example any number of times and not observing this does not mean anything. It may be the problem will never arise, it may be it will arise on a different architecture, or on a different machine, or when compiled with a different (future) version of Go. Simply do not rely on such behavior that is not guaranteed. Always use proper synchronization, never leave any data races in your app.

Goroutine Channel, Copy vs Pointer

Both functions are doing the same task, which is initializing the Data struct. What are the pros and cons of each function? E.g. the function should unmarshal a big JSON file.
package main
type Data struct {
i int
}
func funcp(c chan *Data) {
var t *Data
t = <-c //receive
t.i = 10
}
func funcv(c chan Data) {
var t Data
t.i = 20
c <- t //send
}
func main() {
c := make(chan Data)
cp := make(chan *Data)
var t Data
go funcp(cp)
cp <- &t //send
println(t.i)
go funcv(c)
t = <- c //receive
println(t.i)
}
Link to Go Playground
The title of your question seems wrong. You are asking not about swapping things but rather about whether to send a pointer to some data or a copy of some data. More importantly, the overall thrust of your question lacks crucial information.
Consider two analogies:
Which is better, chocolate ice cream or strawberry? That's probably a matter of opinion, but at least both will serve similar purposes.
Which is better, a jar of glue or a brick of C4? That depends on whether you want to build something, or blow something up, doesn't it?
If you send a copy of data through a channel, the receiver gets ... a copy. The receiver does not have access to the original. The copying process may take some time, but the fact that the receiver does not have to share access may speed things up. So this is something of an opinion, and if your question is about which is faster, well, you'll have to benchmark it. Be sure to benchmark the real problem, and not a toy example, because benchmarks on toy examples don't translate to real-world performance.
If you send a pointer to data through a channel, the receiver gets a copy of the pointer, and can therefore modify the original data. Copying the pointer is fast, but the fact that the receiver has to share access may slow things down. But if the receiver must be able to modify the data, you have no choice. You must use a tool that works, and not one that does not.
In your two functions, one generates values (funcv) so it does not have to send pointers. That's fine, and gives you the option. The other (funcp) receives objects but wants to update them so it must receive a pointer to the underlying object. That's fine too, but it means that you are now communicating by sharing (the underlying data structure), which requires careful coordination.

Why is the finalizer never called?

var p = &sync.Pool{
New: func() interface{} {
return &serveconn{}
},
}
func newServeConn() *serveconn {
sc := p.Get().(*serveconn)
runtime.SetFinalizer(sc, (*serveconn).finalize)
fmt.Println(sc, "SetFinalizer")
return sc
}
func (sc *serveconn) finalize() {
fmt.Println(sc, "finalize")
*sc = serveconn{}
runtime.SetFinalizer(sc, nil)
p.Put(sc)
}
The above code tries to reuse objects via SetFinalizer, but after debugging I found that the finalizer is never called. Why?
UPDATE
This may be related: https://github.com/golang/go/issues/2368
The above code tries to reuse objects via SetFinalizer, but after debugging I found that the finalizer is never called. Why?
The finalizer is only called on an object when the GC
marks it as unused and then tries to sweep (free) at the end
of the GC cycle.
As a corollary, if a GC cycle is never performed during the runtime of your program, the finalizers you set may never be called.
Just in case you might hold a wrong assumption about Go's GC, it may be worth noting that Go does not employ reference counting on values; instead, it uses a GC that works in parallel with the program, and collection sessions happen periodically, triggered by parameters such as the pressure on the heap produced by allocations.
A couple assorted notes regarding finalizers:
When the program terminates, no GC is forcibly run.
A corollary of this is that a finalizer is not guaranteed
to run at all.
If the GC finds a finalizer on an object about to be freed,
it calls the finalizer but does not free the object.
The object itself will be freed only at the next GC cycle —
wasting the memory.
All in all, you appear to be trying to implement destructors.
Please don't: make your objects implement the sort-of standard method called Close, and state in the contract of your type that the programmer is required to call it when they're done with the object.
When a programmer wants to call such a method no matter what, they use defer.
Note that this approach works perfectly for all types in the Go
stdlib which wrap resources provided by the OS—file and socket descriptors. So there is no need to pretend your types are somehow different.
Another useful thing to keep in mind is that Go was explicitly engineered to be no-nonsense, no-frills, no-magic, in-your-face language, and you're just trying to add magic to it.
Please don't; those who like deciphering layers of magic program in different languages, such as Scala.

