Why does Go's bufio use panic under the hood?

Reading the code from the bufio package I've found such things:
// fill reads a new chunk into the buffer.
func (b *Reader) fill() {
	...
	if b.w >= len(b.buf) {
		panic("bufio: tried to fill full buffer")
	}
	...
}
At the same time, the Effective Go section about panic contains the following paragraph:
This is only an example but real library functions should avoid panic.
If the problem can be masked or worked around, it's always better to
let things continue to run rather than taking down the whole program.
So I wonder: is the problem with a particular buffered reader really so important that it warrants a panic call in the standard library?

It may be questionable, but consider: fill is a private method, and b.w and b.buf are private fields. If the condition that causes the panic is ever true, it's due to a logic error in the implementation of bufio. Since it should never really be possible to get into that state in the first place (a "can't happen" condition), we don't really know how we got there, and it's unclear how much other state got corrupted before the problem was detected and what, if anything, the user can do about it. In that kind of situation, a panic seems reasonable.
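To make the distinction concrete, here is a minimal sketch (the ring type and its names are invented for illustration, not taken from bufio) of how a package can return ordinary errors for conditions callers can handle while panicking only on internal "can't happen" states:

package main

import (
	"errors"
	"fmt"
)

type ring struct {
	buf  []byte
	r, w int // read and write positions; invariant: 0 <= r <= w and w-r <= len(buf)
}

var errFull = errors.New("ring: buffer full")

// push returns an error for a condition the caller can cause and handle.
func (q *ring) push(b byte) error {
	if q.w-q.r == len(q.buf) {
		return errFull // expected and recoverable: the caller decides what to do
	}
	q.checkInvariant()
	q.buf[q.w%len(q.buf)] = b
	q.w++
	return nil
}

// checkInvariant panics on a "can't happen" state: if it ever fires, the bug
// is inside this package, and no caller action can make the state valid again.
func (q *ring) checkInvariant() {
	if q.r > q.w || q.w-q.r > len(q.buf) {
		panic("ring: internal invariant violated")
	}
}

func main() {
	q := &ring{buf: make([]byte, 2)}
	fmt.Println(q.push('a'), q.push('b'), q.push('c')) // <nil> <nil> ring: buffer full
}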


Why is the finalizer never called?

var p = &sync.Pool{
	New: func() interface{} {
		return &serveconn{}
	},
}

func newServeConn() *serveconn {
	sc := p.Get().(*serveconn)
	runtime.SetFinalizer(sc, (*serveconn).finalize)
	fmt.Println(sc, "SetFinalizer")
	return sc
}

func (sc *serveconn) finalize() {
	fmt.Println(sc, "finalize")
	*sc = serveconn{}
	runtime.SetFinalizer(sc, nil)
	p.Put(sc)
}
The above code tries to reuse objects via SetFinalizer, but after debugging I found that the finalizer is never called. Why?
UPDATE
This may be related: https://github.com/golang/go/issues/2368

The above code tries to reuse objects via SetFinalizer, but after debugging I found that the finalizer is never called. Why?
The finalizer is only called on an object when the GC marks it as unused and then tries to sweep (free) it at the end of the GC cycle.
As a corollary, if no GC cycle ever runs during the lifetime of your program, the finalizers you set may never be called.
In case you hold a wrong assumption about Go's GC, it may be worth noting that Go does not employ reference counting on values; instead, its GC runs concurrently with the program, and collection cycles happen periodically, triggered by parameters such as the pressure on the heap produced by allocations.
A couple of assorted notes regarding finalizers:
When the program terminates, no GC is forcibly run. A corollary of this is that a finalizer is not guaranteed to run at all.
If the GC finds a finalizer on an object about to be freed, it calls the finalizer but does not free the object itself. The object is freed only at the next GC cycle, wasting the memory in the meantime.
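To watch a finalizer actually run, you typically have to force a collection yourself. A minimal sketch follows (the resource type, the allocate helper, and the sleep are invented for illustration, and even then the runtime gives no hard guarantee about when the finalizer goroutine fires):

package main

import (
	"fmt"
	"runtime"
	"time"
)

type resource struct{ id int }

func allocate() {
	r := &resource{id: 42}
	runtime.SetFinalizer(r, func(r *resource) {
		fmt.Println("finalizer ran for", r.id)
	})
	// r becomes unreachable once allocate returns.
}

func main() {
	allocate()
	runtime.GC() // force a collection cycle; without one, nothing may ever happen

	// Finalizers run on a separate goroutine some time after the GC cycle,
	// so give it a moment before the program exits.
	time.Sleep(100 * time.Millisecond)
	fmt.Println("exiting")
}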
All in all, you appear to be trying to implement destructors.
Please don't: make your objects implement the de-facto standard method called Close and state in the contract of your type that the programmer is required to call it when they're done with the object.
When a programmer wants such a method called no matter what, they use defer.
Note that this approach works perfectly well for all types in the Go stdlib that wrap resources provided by the OS, such as file and socket descriptors, so there is no need to pretend your types are somehow different.
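As a rough sketch of that Close-based contract applied to the question's serveconn (the empty struct, the connPool name, and the main function are placeholders, not the asker's real code):

package main

import "sync"

// serveconn mirrors the type from the question; its fields are omitted here.
type serveconn struct{}

var connPool = sync.Pool{
	New: func() interface{} { return &serveconn{} },
}

func newServeConn() *serveconn {
	return connPool.Get().(*serveconn)
}

// Close resets the connection and returns it to the pool.
// Callers are required to call it when done, typically via defer.
func (sc *serveconn) Close() error {
	*sc = serveconn{}
	connPool.Put(sc)
	return nil
}

func main() {
	sc := newServeConn()
	defer sc.Close()
	// ... use sc ...
}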
Another useful thing to keep in mind is that Go was explicitly engineered to be a no-nonsense, no-frills, no-magic, in-your-face language, and you're trying to add magic to it.
Please don't; those who enjoy deciphering layers of magic can program in Scala or other such languages.

Is it a good idea to generate a random string until success with "crypto/rand"?

Is it a good idea to generate a secure random hex string until the process succeeds?
All examples I've come across show that if rand.Read returns an error, we should panic, call os.Exit(1), or return an empty string and the error.
I need my program to keep functioning when such errors occur and wait until a random string is generated. Is it a good idea to loop until the string is generated, and are there any pitfalls with that?
import "crypto/rand"
func RandomHex() string {
var buf [16]byte
for {
_, err := rand.Read(buf[:])
if err == nil {
break
}
}
return hex.EncodeToString(buf[:])
}
No. In certain contexts it may always return an error, so this loop could spin forever.
Example: playground: don't use /dev/urandom in crypto/rand
Imagine that a machine does not have the source that crypto/rand reads from, or that the program runs in a context without access to that source. In that case you might consider having the program return that error in a meaningful way rather than spin.
More explicitly, if you are serious about your use of crypto/rand, then consider writing RandomHex so that it is exceptionally clear to the caller that it is meant for security contexts (possibly rename it), and return the error from RandomHex. The calling function needs to handle that error and let the user know that something is very wrong. For example, in a REST API I'd expect that error to surface to the request handler, which should fail, return a 500 at that point, and log a high-severity error.
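A minimal sketch of that error-returning shape, assuming a rename to SecureRandomHex and simple error wrapping (both are my choices, not part of the original question):

package randutil

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// SecureRandomHex returns a 32-character hex string from crypto/rand,
// or an error if the system's randomness source cannot be read.
func SecureRandomHex() (string, error) {
	var buf [16]byte
	if _, err := rand.Read(buf[:]); err != nil {
		return "", fmt.Errorf("reading crypto/rand: %w", err)
	}
	return hex.EncodeToString(buf[:]), nil
}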
Is it a good idea to loop until the string is generated,
That depends. Probably yes.
any pitfalls with that?
You discard the random bytes read on error, and you do so in a tight loop. Depending on the OS, this may drain your entropy source faster than it can be refilled.
Instead of an unbounded infinite loop, break after n attempts and give up. Graceful degradation, or stopping cleanly, is better: if your program is stuck in an endless loop, it is not really "continuing" either.
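For completeness, a sketch of the bounded-retry variant suggested above (the attempt count and names are arbitrary assumptions):

package randutil

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

const maxAttempts = 3 // arbitrary bound; pick what fits your availability needs

// RandomHexRetry retries a few times and then gives up with an error
// instead of spinning forever in a tight loop.
func RandomHexRetry() (string, error) {
	var buf [16]byte
	var err error
	for i := 0; i < maxAttempts; i++ {
		if _, err = rand.Read(buf[:]); err == nil {
			return hex.EncodeToString(buf[:]), nil
		}
	}
	return "", fmt.Errorf("crypto/rand failed after %d attempts: %w", maxAttempts, err)
}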

Are there any advantages to having a defer in a simple, no return, non-panicking function?

Going through the standard library, I see a lot of functions similar to the following:
// src/database/sql/sql.go
func (dc *driverConn) removeOpenStmt(ds *driverStmt) {
	dc.Lock()
	defer dc.Unlock()
	delete(dc.openStmt, ds)
}

...

func (db *DB) addDep(x finalCloser, dep interface{}) {
	//println(fmt.Sprintf("addDep(%T %p, %T %p)", x, x, dep, dep))
	db.mu.Lock()
	defer db.mu.Unlock()
	db.addDepLocked(x, dep)
}

// src/expvar/expvar.go
func (v *Map) addKey(key string) {
	v.keysMu.Lock()
	defer v.keysMu.Unlock()
	v.keys = append(v.keys, key)
	sort.Strings(v.keys)
}

// etc...
I.e.: simple functions with no returns and presumably no way to panic that still defer the unlock of their mutex. As I understand it, the overhead of a defer has been improved (and perhaps is still being improved), but that being said: is there any reason to include a defer in functions like these? Couldn't these kinds of defers end up slowing down a high-traffic function?
Always deferring things like Mutex.Unlock() and WaitGroup.Done() at the top of the function makes debugging and maintenance easier: you can see immediately that those important pieces are taken care of and quickly move on to other issues.
It's not a big deal in 3-line functions, but consistent-looking code is also just easier to read in general. Then, as the code grows, you don't have to worry about adding an expression that may panic or about complicated early-return logic, because the pre-existing defers will always work correctly.
A panic is a sudden (and therefore possibly unanticipated) break in the normal control flow. It can potentially arise from almost anything, quite often from external causes such as a memory failure. The defer mechanism gives you an easy and fairly cheap tool to perform cleanup on exit, so the system is not left in a broken state. This matters for locks in high-load applications, because it helps you avoid losing a locked resource and freezing the whole system on a single stuck lock.
And even if at the moment the code has no place that can panic (hard to imagine such a system ;), things evolve: later the function may grow more complex and become able to panic.
Conclusion: defer helps you ensure that your function exits correctly when something goes wrong. Just as importantly, it is future-proof: the same cleanup runs for different kinds of failure.
So it's good style to use it even in simple functions. As a programmer you can see that nothing is lost, and you can be more confident in the code.
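A small illustration of the future-proofing argument (the counter type and the early return are invented for the example): the deferred Unlock keeps working no matter how the function later exits.

package main

import (
	"fmt"
	"sync"
)

type counter struct {
	mu sync.Mutex
	n  map[string]int
}

// Add stays correct even after early returns or panicking code are
// introduced later, because the Unlock was deferred up front.
func (c *counter) Add(key string) {
	c.mu.Lock()
	defer c.mu.Unlock()

	if key == "" {
		return // early return added later: the lock is still released
	}
	c.n[key]++ // would panic if c.n were nil: the lock is still released
}

func main() {
	c := &counter{n: make(map[string]int)}
	c.Add("x")
	c.Add("")
	fmt.Println(c.n["x"]) // 1
}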

Golang error: function arguments too large for new goroutine

I am running a program with Go 1.4 and I am trying to pass a large struct to a function started as a goroutine:
go ProcessImpression(network, &logImpression, campaign, actualSpent, partnerAccount, deviceId, otherParams)
I get this error:
runtime.newproc: function arguments too large for new goroutine
I have switched to passing a pointer, which helps, but I am wondering whether there is some way to pass large structs by value to a goroutine.
Thanks,
No, none I know of.
I don't think you should be too aggressive about tuning to avoid copying, but it appears from the source that this error is emitted when the parameters exceed the usable stack space for a new goroutine, which should be kilobytes. The copying overhead is real at that point, especially if this isn't the only time these things are copied. Perhaps some struct is larger than you expect, either directly thanks to a large member (a 1 KB array rather than a slice, say) or indirectly. If not, just using a pointer as you have makes sense, and if you're worried about creating garbage, recycle the structs pointed to using sync.Pool.
I was able to fix this issue by changing the arguments from
func doStuff(prev, next User)
to
func doStuff(prev, next *User)
The answer from @twotwotwo here is very helpful.
I got this issue when processing a list of values ([]BigType) of a big struct:
for _, stct := range listBigStcts {
	go func(stct BigType) {
		...process stct ...
	}(stct) // <-- error occurs here
}
The workaround is to replace []BigType with []*BigType.
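A self-contained sketch of that workaround (BigType, its payload size, and the process function are invented for illustration):

package main

import (
	"fmt"
	"sync"
)

// BigType stands in for the large struct from the question; the payload
// size here is only illustrative.
type BigType struct {
	payload [1 << 16]byte
}

func process(b *BigType) int {
	return int(b.payload[0])
}

func main() {
	// []*BigType instead of []BigType: each goroutine now receives only
	// a pointer-sized argument on its new stack.
	list := []*BigType{new(BigType), new(BigType)}

	var wg sync.WaitGroup
	for _, item := range list {
		wg.Add(1)
		go func(b *BigType) {
			defer wg.Done()
			process(b)
		}(item)
	}
	wg.Wait()
	fmt.Println("processed", len(list), "items")
}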
