How to recover from an asynchronous panic in an external package - Go

I'm learning Go and I'm trying to understand how to properly deal with panics from external packages.
Here is a runnable example. Say a package defines the doFoo method (it's located in the same package here for the sake of the example):
package main

import (
    "log"
    "net/http"
    "sync"
    "time"

    "github.com/gorilla/handlers"
    "github.com/gorilla/mux"
)

// Method from external package
func doFoo() {
    var wg sync.WaitGroup
    wg.Add(1)
    // Do some cool async stuff
    go func() {
        time.Sleep(500 * time.Millisecond) // a bare 500 would be 500 nanoseconds
        wg.Done()
        panic("Oops !")
    }()
}

func router() *mux.Router {
    var router = mux.NewRouter().StrictSlash(true)
    router.HandleFunc("/doFoo", index).Methods("GET")
    return router
}

func main() {
    log.Fatal(http.ListenAndServe(":8080", handlers.RecoveryHandler()(router())))
}

func index(w http.ResponseWriter, r *http.Request) {
    defer func() {
        if rec := recover(); rec != nil {
            w.WriteHeader(http.StatusInternalServerError)
        }
    }()
    doFoo()
    w.WriteHeader(http.StatusOK)
}
Invoking the doFoo method will crash the server. I appreciate that this is correct behavior, since the application is now in an undetermined state, and it's best to crash and have subsequent requests forwarded to a different process through some load balancer.
But my API server might still be serving other clients, it might be maintaining websockets, and I might also want to return a 500 error here.
Coming from Node.js, I am used to the concept of uncaughtException for handling uncaptured synchronous exceptions, and unhandledRejection for uncaptured asynchronous exceptions. These two process constructs give the developer the choice to either crash the program right away (if it makes sense), or log the error, return a proper HTTP code, and then maybe shut down gracefully if needed.
In my online research I find a lot of resources saying panics are not like exceptions, they are unusual, and you don't need to worry about them. But it seems like it's actually very easy to cause a panic when writing code. It's completely up to the developer to ensure their library does not panic; the human factor is 100% involved here.
This leads me to wonder: do I need to audit the entire code base of every single package I'm going to use, including all of its dependencies, just because I have no means of safeguarding against a missed recover in some external package that will take down my whole server and ruin my users' experience?
Or is there some strategy I am not aware of by which I can fail gracefully when an asynchronous panic occurs in library code?
I noticed there is graceful shutdown since Go 1.8, but I can't use this because my program has already crashed.
https://golang.org/pkg/net/http/#Server.Shutdown
There is the gorilla recovery handler, but again, this only protects against synchronous panics.
http://www.gorillatoolkit.org/pkg/handlers#RecoveryHandler
Update:
I am aware that panics are not exceptions. Restating that does not answer the question; panics versus exceptions is not what this question is about. This question is about understanding what tools the language may provide to enforce boundaries without forcing the developer to read every single line in the entire package tree. If it's not possible in the language, then stating that is a valid answer. I just don't know if it is or not.

Panics are not exceptions. Do not treat them like exceptions and you will be fine.
First things first: package APIs should never panic; they should always return an error except in certain very rare cases, and then they must be clearly documented as to when and why they can panic (regexp.MustCompile is a good example of something that may panic). Any package that panics when it hits an error (and doesn't have a very good reason to do so) is bad; don't use it.
If you do bounds checking, make sure not to access nil pointers, etc., you should never have to worry about panics.
As for recovering from panics in a goroutine: unless the goroutine has its own recovery handler, you can't.
If the goroutine is from a third-party library, don't use that library! If they are lax enough not to check edge cases and/or lazy enough to just panic on error, why are you using their code? Who knows what other mines it holds?
If the goroutine is your own code, try to eliminate things that can panic, then add a recovery handler to catch the ones you can't prevent if needed.
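To illustrate, here is a minimal sketch of such a goroutine-local recovery handler (the safeGo helper name is my own invention, not a standard function):

    package main

    import (
        "log"
        "time"
    )

    // safeGo runs f in a goroutine and turns a panic inside it into a log
    // entry instead of a process crash. The deferred recover must live inside
    // the same goroutine that panics; a recover in the caller cannot help.
    func safeGo(f func()) {
        go func() {
            defer func() {
                if r := recover(); r != nil {
                    log.Printf("recovered from panic in goroutine: %v", r)
                }
            }()
            f()
        }()
    }

    func main() {
        safeGo(func() { panic("Oops !") })
        time.Sleep(time.Second) // give the goroutine time to run and recover
    }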

Golang Concurrency Code Review of Codewalk

I'm trying to understand best practices for Golang concurrency. I read O'Reilly's book on Go's concurrency and then came back to the Golang Codewalks, specifically this example:
https://golang.org/doc/codewalk/sharemem/
This is the code I was hoping to review with you in order to learn a little bit more about Go. My first impression is that this code is breaking some best practices. This is of course my (very) inexperienced opinion, and I wanted to discuss it and gain some insight on the process. This isn't about who's right or wrong; please be nice, I just want to share my views and get some feedback on them. Maybe this discussion will help other people see why I'm wrong and teach them something.
I'm fully aware that the purpose of this code is to teach beginners, not to be perfect code.
Issue 1 - No Goroutine cleanup logic
func main() {
    // Create our input and output channels.
    pending, complete := make(chan *Resource), make(chan *Resource)

    // Launch the StateMonitor.
    status := StateMonitor(statusInterval)

    // Launch some Poller goroutines.
    for i := 0; i < numPollers; i++ {
        go Poller(pending, complete, status)
    }

    // Send some Resources to the pending queue.
    go func() {
        for _, url := range urls {
            pending <- &Resource{url: url}
        }
    }()

    for r := range complete {
        go r.Sleep(pending)
    }
}
The main method has no way to clean up the goroutines, which means if this were part of a library, they would be leaked.
Issue 2 - Writers aren't spawning the channels
I read that as a best practice, the logic to create, write and cleanup a channel should be controlled by a single entity (or group of entities). The reason behind this is that writers will panic when writing to a closed channel. So, it is best for the writer(s) to create the channel, write to it and control when it should be closed. If there are multiple writers, they can be synced with a WaitGroup.
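A minimal sketch of that multiple-writers pattern (my illustration, not code from the codewalk): the writers share the channel, a WaitGroup tracks them, and close happens exactly once after the last writer finishes.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        ch := make(chan int)
        var wg sync.WaitGroup

        // Several writers share the channel.
        for i := 0; i < 3; i++ {
            wg.Add(1)
            go func(n int) {
                defer wg.Done()
                ch <- n
            }(i)
        }

        // One goroutine waits for all writers, then closes the channel once.
        go func() {
            wg.Wait()
            close(ch) // safe: no writer can still be running
        }()

        for v := range ch { // the range ends when the channel is closed
            fmt.Println(v)
        }
    }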
func StateMonitor(updateInterval time.Duration) chan<- State {
    updates := make(chan State)
    urlStatus := make(map[string]string)
    ticker := time.NewTicker(updateInterval)
    go func() {
        for {
            select {
            case <-ticker.C:
                logState(urlStatus)
            case s := <-updates:
                urlStatus[s.url] = s.status
            }
        }
    }()
    return updates
}
This function shouldn't be in charge of creating the updates channel, because it is the reader of the channel, not the writer. The writer of this channel should create it and pass it to this function, basically saying to the function, "I will pass updates to you via this channel." Instead, this function creates a channel, and it isn't clear who is responsible for cleaning it up.
Issue 3 - Writing to a channel asynchronously
This function:
func (r *Resource) Sleep(done chan<- *Resource) {
    time.Sleep(pollInterval + errTimeout*time.Duration(r.errCount))
    done <- r
}
Is being referenced here:
for r := range complete {
    go r.Sleep(pending)
}
And it seems like an awful idea. When this channel is closed, we'll have a goroutine sleeping somewhere out of our reach, waiting to write to that channel. Let's say this goroutine sleeps for 1h; when it wakes up, it will try to write to a channel that was closed in the cleanup process. This is another example of why the writers of a channel should be in charge of the cleanup process. Here we have a writer that is completely free and unaware of when the channel was closed.
Please
If I missed any issues in that code (related to concurrency), please list them. It doesn't have to be an objective issue; if you'd have designed the code in a different way for any reason, I'm also interested in learning about it.
Biggest lesson from this code
For me, the biggest lesson I take from reviewing this code is that the cleanup of channels and the writing to them have to be synchronized. They have to be in the same for{} loop, or at least communicate somehow (maybe via other channels or primitives) to avoid writing to a closed channel.
It is the main method, so there is no need to clean up. When main returns, the program exits. If this weren't main, then you would be correct.
There is no best practice that fits all use cases. The code you show here is a very common pattern: the function creates a goroutine and returns a channel so that others can communicate with that goroutine. There is no rule that governs how channels must be created. There is no way to terminate that goroutine, though. One use case this pattern fits well is reading a large result set from a database: the channel allows streaming data as it is read. In that case there are usually other means of terminating the goroutine, like passing a context.
Again, there are no hard rules on how channels should be created or closed. A channel can be left open, and it will be garbage collected when it is no longer referenced. If the use case demands it, the channel can be left open indefinitely, and the scenario you worry about will never happen.
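A sketch of the context-based termination mentioned above (the names are mine, not from the codewalk): the function still creates and returns the channel, but the caller can stop the goroutine by cancelling the context, and the writer closes its own channel on the way out.

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // stream returns a channel fed by a goroutine that stops when ctx is cancelled.
    func stream(ctx context.Context) <-chan int {
        out := make(chan int)
        go func() {
            defer close(out) // the writer owns the channel, so it closes it
            for i := 0; ; i++ {
                select {
                case out <- i:
                case <-ctx.Done(): // caller cancelled: terminate the goroutine
                    return
                }
            }
        }()
        return out
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
        defer cancel()
        for v := range stream(ctx) { // the range ends once the goroutine closes out
            fmt.Println(v)
        }
    }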
As for your question about this code being part of a library: yes, it would be poor practice to spawn goroutines with no cleanup inside a library function. If those goroutines carry out documented behaviour of the library, it's problematic that the caller doesn't know when that behaviour is going to happen. If you have behaviour that is typically "fire and forget", it should be the caller who chooses when to forget about it. For example:
func doAfter5Minutes(f func()) {
    go func() {
        time.Sleep(5 * time.Minute)
        f()
        log.Println("done!")
    }()
}
Makes sense, right? When you call the function, it does something 5 minutes later. The problem is that it's easy to misuse this function like this:
// do the important task every 5 minutes
for {
    doAfter5Minutes(importantTaskFunction)
}
At first glance, this might seem fine. We're doing the important task every 5 minutes, right? In reality, we're spawning many goroutines very quickly, probably consuming all available memory before they start dropping off.
We could implement some kind of callback or channel to signal when the task is done, but really, the function should be simplified like so:
func doAfter5Minutes(f func()) {
    time.Sleep(5 * time.Minute)
    f()
    log.Println("done!")
}
Now the caller has the choice of how to use it:
// call synchronously
doAfter5Minutes(importantTaskFunction)

// fire and forget
go doAfter5Minutes(importantTaskFunction)
This function arguably should also be changed. As you say, the writer should effectively own the channel, as they should be the one closing it. The fact that this channel-reading function insists on creating the channel it reads from actually coerces itself into this poor "fire and forget" pattern mentioned above. Notice how the function needs to read from the channel, but it also needs to return the channel before reading. It therefore had to put the reading behaviour in a new, un-managed goroutine to allow itself to return the channel right away.
func StateMonitor(updates chan State, updateInterval time.Duration) {
    urlStatus := make(map[string]string)
    ticker := time.NewTicker(updateInterval)
    defer ticker.Stop() // not stopping the ticker is also a resource leak
    for {
        select {
        case <-ticker.C:
            logState(urlStatus)
        case s := <-updates:
            urlStatus[s.url] = s.status
        }
    }
}
Notice that the function is now simpler, more flexible and synchronous. The only thing that the previous version really accomplishes, is that it (mostly) guarantees that each instance of StateMonitor will have a channel all to itself, and you won't have a situation where multiple monitors are competing for reads on the same channel. While this may help you avoid a certain class of bugs, it also makes the function a lot less flexible and more likely to have resource leaks.
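Usage then looks something like this (a sketch, assuming the State type and statusInterval constant from the codewalk): the caller creates the channel and explicitly decides whether the monitor runs synchronously or in a goroutine.

    updates := make(chan State)               // the caller creates and owns the channel
    go StateMonitor(updates, statusInterval)  // and explicitly chooses to run it async
    updates <- State{url: "http://example.com/", status: "200 OK"}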
I'm not sure I really understand this example, but the golden rule for channel closing is that the writer should always be responsible for closing the channel. Keep this rule in mind, and notice a few points about this code:
The Sleep method writes r to the channel passed to it
The Sleep method is executed concurrently, with no method of tracking how many instances are running, what state they are in, etc.
Based on these points alone, we can say that there probably isn't anywhere in the program where it would be safe to close that channel, because there's seemingly no way of knowing whether it will be used again.

Proper logging implementation in Golang package

I have a small Golang package which does some work. This work assumes a high number of errors could be produced, and this is OK. Currently all errors are ignored. Yes, it may look strange, but visit the link and check the main purpose of the package.
I'd like to extend the functionality of the package and provide the ability to see errors that occurred during runtime. But due to my lack of software design skills, I have some questions with no answers.
At first, I thought to implement logging inside the package using an existing logging package (zerolog, zap, or whatever else). But will that be OK for the package's users? They might want to use other logging packages and might want to modify the output format.
Maybe it's possible to provide a way for users to inject their own logging?
I'd like to provide an easily configurable way of logging which could be switched on or off on the user's demand.
Some Go libraries handle logging like this: in your package, define a logger interface

    type Yourlogging interface {
        Errorf(format string, args ...interface{})
        Warningf(format string, args ...interface{})
        Infof(format string, args ...interface{})
        Debugf(format string, args ...interface{})
    }

and define a variable of this interface type, with a setter:

    var mylogger Yourlogging

    func SetLogger(l Yourlogging) {
        mylogger = l
    }

In your functions, you can then call it for logging:

    mylogger.Infof(...)
    mylogger.Errorf(...)

You don't need to implement the interface yourself; users can pass in anything that implements it, for example:

    SetLogger(logrus.New()) // logging output goes to logrus (github.com/sirupsen/logrus)
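One caveat worth adding (my note, not part of the original answer): as written, mylogger is nil until SetLogger is called, so any logging call before then would panic with a nil pointer dereference. A no-op default avoids that:

    // nopLogger satisfies Yourlogging and discards everything, so logging is
    // effectively switched off until the user injects a real logger.
    type nopLogger struct{}

    func (nopLogger) Errorf(format string, args ...interface{})   {}
    func (nopLogger) Warningf(format string, args ...interface{}) {}
    func (nopLogger) Infof(format string, args ...interface{})    {}
    func (nopLogger) Debugf(format string, args ...interface{})   {}

    var mylogger Yourlogging = nopLogger{} // safe default: never nil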
In Go, you will see some libraries implement logging interfaces like the other answers suggest. However, you could completely avoid your package needing to log at all if you structured your application differently.
For example, in the application you linked, your main application runtime calls idleexacts.Run(), which starts this function:
// startLoop starts the workload using the passed settings and database connection.
func startLoop(ctx context.Context, log log.Logger, pool db.DB, tables []string, jobs uint16, minTime, maxTime time.Duration) error {
    rand.Seed(time.Now().UnixNano())

    // Increment maxTime by 1 because rand.Int63n() never returns the max value.
    maxTime++

    // While running, keep the required number of workers using a channel.
    // Run new workers only while there is a free slot.
    guard := make(chan struct{}, jobs)
    for {
        select {
        // Run workers only when it's possible to write into the channel
        // (the channel's capacity is limited by the number of jobs).
        case guard <- struct{}{}:
            go func() {
                table := selectRandomTable(tables)
                naptime := time.Duration(rand.Int63n(maxTime.Nanoseconds()-minTime.Nanoseconds()) + minTime.Nanoseconds())

                err := startSingleIdleXact(ctx, pool, table, naptime)
                if err != nil {
                    log.Warnf("start idle xact failed: %s", err)
                }

                // When the worker finishes, read from the channel to allow starting another worker.
                <-guard
            }()
        case <-ctx.Done():
            return nil
        }
    }
}
The problem here is that all of the orchestration of your logic is happening inside your package. Instead, this loop should run in your main application, and the package should provide users with simple actions such as selectRandomTable() or createTempTable().
If the orchestration of the code were in your main application and the package only provided simple actions, it would be much easier to return errors to the user as part of the function calls.
It would also make your package easier for others to reuse, because it offers simple actions, which opens users up to using them in ways you didn't intend.
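As a sketch of what such a simple action might look like (my illustration: the function name echoes the answer's createTempTable, the db.DB type comes from the question's code, and the Exec call signature is an assumption):

    // createTempTable does one thing synchronously and hands the error back
    // to the caller, who decides whether and how to log it.
    func createTempTable(ctx context.Context, pool db.DB, name string) error {
        if _, err := pool.Exec(ctx, "CREATE TEMP TABLE "+name+" (id int)"); err != nil {
            return fmt.Errorf("create temp table %s: %w", name, err)
        }
        return nil
    }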

Use of goroutines when steps are sequential

I feel like the answer to my question is no, but I'm asking for certainty, as I've only been playing around with Go for a few days. Should we encapsulate IO-bound tasks (like HTTP requests) into goroutines even if they are to be used in a sequential use case?
Here's my naive example. Say I have a method that makes 3 HTTP requests that need to be executed sequentially. Is there any benefit in writing the invoke methods as goroutines? I understand the example below would actually take a performance hit.
func myMethod() {
    chan1 := make(chan int)
    chan2 := make(chan int)
    chan3 := make(chan int)

    go invoke1(chan1)
    res1 := <-chan1

    go invoke2(res1, chan2) // without go, writing to the unbuffered channel would deadlock
    res2 := <-chan2

    go invoke3(res2, chan3)
    // Do something with <-chan3
}
One possible reason that comes to mind is to future-proof the invoke methods for when they're called in a concurrent context later on, when other developers start re-using the method. Any other reasons?
There's nothing standard that would say yes or no to this question.
Although you can do it correctly this way, it is much simpler to stick to plain sequential execution.
Three reasons come to mind:
this is still sequential: you're waiting for every single goroutine sequentially, so this buys you nothing. Performance probably doesn't change much if each call is only doing an HTTP request; both versions will spend most of their time waiting for the response.
error handling is much simpler if you just write result, err := invoke(); if err != nil { ... } rather than having to pass both results and errors through channels (see the sketch after this list)
over-generalization is a more apt word than "future proofing". If you need to call your invoke methods asynchronously in the future, then change your code in the future. It will be just as easy then to add asynchronous wrappers around your functions.
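To make the second point concrete, a sketch of the plain sequential version (the invoke signatures are my assumption; they return a value and an error):

    func myMethod() error {
        res1, err := invoke1()
        if err != nil {
            return err
        }
        res2, err := invoke2(res1)
        if err != nil {
            return err
        }
        res3, err := invoke3(res2)
        if err != nil {
            return err
        }
        // Do something with res3
        _ = res3
        return nil
    }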
I am a little bit late to the party, but I think I still have something to share.
Before answering the question, I would like to dive a little into the Go proverb, "Concurrency is not parallelism." It is a very common misunderstanding of goroutines and Go's language features that when people think of goroutines, they think about the ability to run in parallel.
But as Rob Pike pointed out in many Go talks, what Go and goroutines actually provide is concurrency. Concurrency is a model for better representing the real world: a way of structuring code, of describing how code interacts.
So back to the question: should goroutines be used when steps are sequential? It depends on the design. If your code is composed of individual parts that talk to each other very naturally, or if some of your code preserves state and frequently returning just does not make sense, or if your code fits any other concurrency design, it is perfectly fine to use goroutines, channels, and select statements. Rob Pike, again, gives a Go talk on the lexer used by what is now Go's text/template package, where the lexer uses a goroutine but the parser, obviously, only uses the lexer sequentially. He stated that by using goroutines and channels, at the cost of a little performance, a better API is achieved.
But on the other hand, in your example and probably in what you are thinking about (code that may require future parallelism), I agree with @Marc: stick with blocking calls, at least for now.
You could use a channel as a buffer, or use a []string (a slice of URL strings). You don't get any benefit from goroutines if you need only sequential execution, because you can't control when a goroutine starts. You can, however, ask the scheduler to yield with runtime.Gosched().
From documentation:
Gosched yields the processor, allowing other goroutines to run. It does not suspend the current goroutine, so execution resumes automatically.
Example of sequential execution:
package main

func main() {
    urls := []string{
        "https://google.com",
        "https://yahoo.com",
        "https://youtube.com",
    }

    buf := make([][]byte, 0, len(urls))
    for _, v := range urls {
        buf = append(buf, sendRequest(v))
    }
}

func sendRequest(url string) []byte {
    // send the request here
    return []byte("")
}
As for the solution as such: I see no gain in solving your problem this way in the real world. That doesn't mean there isn't gain in your solution. If you're after practice, for example synchronising goroutines, then you have a plethora of possibilities for experimenting and learning.
So to me this is far from a clear no, but no clear yes either. I would refrain from saying maybe; it depends on what exactly you're after. Are you actually after solving a problem, or after learning?

Should I use panic or return error?

Go provides two ways of handling errors, but I'm not sure which one to use.
Assuming I'm implementing a classic ForEach function which accepts a slice or a map as an argument. To check whether an iterable is passed in, I could do:
func ForEach(iterable interface{}, f interface{}) {
    if isNotIterable(iterable) {
        panic("Should pass in a slice or map!")
    }
}
or
func ForEach(iterable interface{}, f interface{}) error {
    if isNotIterable(iterable) {
        return fmt.Errorf("Should pass in a slice or map!")
    }
    return nil
}
I saw some discussions saying panic() should be avoided, but people also say that if a program cannot recover from an error, you should panic().
Which one should I use? And what's the main principle for picking the right one?
You should assume that a panic will be immediately fatal, for the entire program, or at the very least for the current goroutine. Ask yourself "when this happens, should the application immediately crash?" If yes, use a panic; otherwise, use an error.
Use panic.
Because your use case is to catch a bad use of your API. This should never happen at runtime if the program calls your API properly.
In fact, any program calling your API with correct arguments will behave the same way if the test is removed. The test is there only to fail early with an error message helpful to the programmer who made the mistake. Ideally, the panic is reached at most once during development, when running the test suite, and the programmer fixes the call even before committing the bad code, so that incorrect use never reaches production.
See also this response to the question Is function parameter validation using errors a good pattern in Go?.
I like the way it's done in some libraries where, on top of a regular method DoSomething, a "panicky" version is added as MustDoSomething. I'm relatively new to Go, but I've already seen this in several places, notably sqlx.
In general, if you want to expose your code to someone else, you should either have Must- and regular versions of a method, or your methods/functions should give the client a chance to recover the way they want, so an error should be available to them in a Go-idiomatic way.
Having said that, I agree that if your API/library is used inappropriately, it's OK to panic as well. As a matter of fact, I've also seen methods like MustGetenv() that panic if a critical env var is missing. A fail-fast mechanism, basically.
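A sketch of that Must- convention (the names are illustrative; the real sqlx and MustGetenv implementations differ in detail):

    // Getenv is the regular variant: it reports the problem as an error.
    func Getenv(key string) (string, error) {
        v, ok := os.LookupEnv(key)
        if !ok {
            return "", fmt.Errorf("missing required environment variable %s", key)
        }
        return v, nil
    }

    // MustGetenv is the panicky variant: callers opt in to fail-fast behavior.
    func MustGetenv(key string) string {
        v, err := Getenv(key)
        if err != nil {
            panic(err)
        }
        return v
    }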
If some mandatory requirement is not provided or available when starting the service (e.g. a database connection, or some required service configuration), then you should panic.
Return an error for any user-facing response or server-side error.
Ask yourself these questions:
Do you expect the exceptional situation to occur, regardless of how well you code your app? Do you think it would be useful to make the user aware of such a condition as part of the normal usage of your app? Handle it as an error, because it concerns the application working normally.
Should that exceptional situation NOT occur if you code appropriately (and somewhat defensively)? (Example: dividing by zero, or accessing an array element out of bounds.) Is your app totally clueless under that error? Panic.
Do you have your own API and want to ensure users use it appropriately? Panic. Your API will seldom recover if used incorrectly.
Use error whenever possible
Only use panic when your code could end up in a bad state that would be prone to crashing; something truly unexpected. The example above with ForEach() is an exported func that accepts an interface, so it should expect that someone will call it improperly. And if it is called improperly, you know why you cannot continue and you know how to handle that error. isNotIterable is literally binary and easy to control.
But error is not like a try/catch
Even if you try to justify panic/recover by comparing it to throw/catch from other languages, you should still use errors. We know you are trying the function because you are calling it, we know there was an error because err != nil, and just as you would check the type of an exception thrown, you can check for a specific error with errors.Is(err, ErrNotIterable).
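A sketch of that sentinel-error approach (ErrNotIterable follows the answer's naming; the wrapping is my assumption):

    var ErrNotIterable = errors.New("should pass in a slice or map")

    func ForEach(iterable interface{}, f interface{}) error {
        if isNotIterable(iterable) {
            // Wrap the sentinel so callers can match it with errors.Is.
            return fmt.Errorf("ForEach: %w", ErrNotIterable)
        }
        // ... iterate ...
        return nil
    }

    // Caller: branch on the specific failure, much like catching one exception type.
    if err := ForEach(data, fn); errors.Is(err, ErrNotIterable) {
        // handle the bad argument without crashing
    }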
So should you use panic for errors in concurrency?
The answer is still most likely no. Errors are still the preferred way in Go, and you can use an error group to shut down the goroutines:
ctx, cancel := context.WithTimeout(context.Background(), time.Minute*5)
defer cancel() // automatically cancel in 5 min

errGroup, ctx := errgroup.WithContext(ctx)
errGroup.Go(func() error {
    // do crazy stuff; you can still check for errors
    if ... {
        return fmt.Errorf("critical error, stopping all goroutines now")
    }
    // code completed without issues
    return nil
})

err = errGroup.Wait()
Even using the structure of the original example, you still have better control with errors than panics:
func ForEach(iterable interface{}, f interface{}) error {
    v := reflect.ValueOf(iterable)
    if isNotIterable(iterable) {
        return fmt.Errorf("expected something iterable but got %v", v.String())
    }
    switch v.Kind() {
    case reflect.Map:
        ...
    case reflect.Array, reflect.Slice:
        ...
    default:
        return fmt.Errorf("isNotIterable is false but I do not know how to iterate through %v", v.String())
    }
    return nil
}
But error feels very verbose
Yes, that is the point. When an error is returned, that is the moment to do something about it. You are giving the calling code options, rather than making the decision to start shutting down and killing the application unless it recover()s. If you are just returning the same error all the way up the call stack, then error will seem inferior to panic, but that is due to not addressing issues when they happen.
So when to use panic?
When your code is on a collision course with a crash and you cannot assume your way out of it. Another case is when the code assumes something that is no longer true, and checking the integrity in every function from there on out would be tedious (and might impact performance). Still, you would use panic() only to get out of the layers of uncertainty... and then still handle errors:
func ForEach(iterable interface{}, f interface{}) (err error) { // named return so the deferred func can set err
    defer func() {
        if r := recover(); r != nil {
            err = fmt.Errorf("cannot iterate due to unexpected runtime error %v", r)
        }
    }()
    ...
    // perhaps a broken pipe in a global var,
    // or an included module threw a panic at you!
}
But if you are still not convinced... Here is the Go FAQ
We believe that coupling exceptions to a control structure, as in the try-catch-finally idiom, results in convoluted code. It also tends to encourage programmers to label too many ordinary errors, such as failing to open a file, as exceptional.
Go takes a different approach. For plain error handling, Go's multi-value returns make it easy to report an error without overloading the return value. A canonical error type, coupled with Go's other features, makes error handling pleasant but quite different from that in other languages.
A panic typically means something went unexpectedly wrong. It is mostly used to fail fast on errors that shouldn't occur during normal operation, or that we aren't prepared to handle gracefully. So in this case just return the error; you don't want your program to panic.
I think none of the previous answers are correct:
By default, if we don't know what to do with the "error", code must panic, following best programming patterns:
https://en.wikipedia.org/wiki/Fail-fast
Putting it more formally, our "Turing machine" is broken and we need to come back to a "stable state" or "reset state". More info at https://en.wikipedia.org/wiki/Reset_(computing)
For example, in web (micro)services that means returning a 40X error (panic caused by input from the user) or a 50X error (panic caused by something else: hardware, network, an assertion error, ...).
If we know what to do with the "error", then we do not have an error in the first place, but an uncomfortable return value. This is a normal execution condition and probably not an error. Normally this corresponds to happy-path vs non-happy-path modeling.
In summary, the err return value is mostly a wrong idea, even if the Go community has adopted it as a religion. Using error return values is just a patchy way to speed up program execution, since it requires fewer CPU instructions to implement, but most of the time, except for low-level services, it is useless and promotes dirty code. (Note that Go was designed to implement those low-level services as an "easy C", but it was adopted for high-level (layer 7) application programs where an error must fail fast to avoid continuing with undefined states that can potentially cause money to be lost or fatal casualties. In case of doubt, default to panic.)
Don't use panic for normal error handling. Use error and multiple return values. See https://golang.org/doc/effective_go.html#errors.

is it wrong to treat panic / recover as throw / catch

Speaking as a new Go enthusiast trying to work with the Go way of error handling: to be clear, I like exceptions.
I have a server that accepts a connection, processes a set of requests and replies to them. I found that I can do
if err != nil {
    panic(err)
}
in the deep-down processing code, and have
defer func() {
    if err := recover(); err != nil {
        log.Printf("%s: %s", err, debug.Stack())
    }
}()
in the client connection code (each connection runs in its own goroutine). This nicely wraps everything up, forcefully closes the connection (other defers fire), and my server continues to hum along.
But this feels an awful lot like a throw/catch scenario, which Golang states it doesn't support. Questions:
Is this stable? I.e., is recovering from a panic an OK thing to do as an ongoing way of life, rather than something intended only to slightly defer an immediate shutdown?
I looked for a discussion on this topic and did not find it anywhere - any pointers?
I feel that the answer is "yes, it works" and can be used inside your own code, but panic should NOT be used by a library intended for wider use. The standard and polite way for a library to behave is by returning errors.
Yes, you can do what you suggest. There are some situations within the standard packages where panic/recover is used for handling errors. The official Go blog states:
For a real-world example of panic and recover, see the json package from the Go standard library. It decodes JSON-encoded data with a set of recursive functions. When malformed JSON is encountered, the parser calls panic to unwind the stack to the top-level function call, which recovers from the panic and returns an appropriate error value (see the 'error' and 'unmarshal' methods of the decodeState type in decode.go).
Some pointers:
Use error for your normal use cases. This should be your default.
If your code would get clearer and simpler by using a panic/recover (such as with a recursive call stack), then use it for that particular case.
Never let a package leak panics. Panics used within a package should be recovered within the package and returned as an error (see the sketch after these pointers).
Recovering from a panic is stable; don't worry about continuing execution after a recover. You can see such behavior in the standard library, for example in the net/http package, which recovers from panics within handlers to prevent the entire http server from crashing when a single request panics.
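A sketch of the third pointer, recovering at the package boundary (the Decode/decode names are illustrative, loosely echoing the json package approach quoted above):

    // Decode is the package's public entry point. Internal code may panic to
    // unwind quickly, but the panic never escapes: it is converted to an error.
    func Decode(data []byte) (result string, err error) {
        defer func() {
            if r := recover(); r != nil {
                err = fmt.Errorf("decode: %v", r)
            }
        }()
        return decode(data), nil
    }

    // decode is internal and free to panic on malformed input.
    func decode(data []byte) string {
        if len(data) == 0 {
            panic("empty input")
        }
        return string(data)
    }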
Generally, most methods won't panic; they will return an error instead, and there's a bit of overhead to using defer.
So yes, it does work, but the "proper" / "Go" way is to return an error instead of using panic/recover.
