What are the differences in outcome for panic vs log.Fatalln()? - go

From the documentation on log.Fatalln():
func Fatalln(v ...interface{})

Fatalln is equivalent to Println() followed by a call to os.Exit(1).
The source code for Fatalln:
// Fatalln is equivalent to Println() followed by a call to os.Exit(1).
func Fatalln(v ...interface{}) {
    std.Output(2, fmt.Sprintln(v...))
    os.Exit(1)
}
It seems the main difference is whether or not the error is recoverable (since you can recover from a panic) - is there any other significant difference between them?
Panic's interface definition is:
// The panic built-in function stops normal execution of the current
// goroutine. When a function F calls panic, normal execution of F stops
// immediately. Any functions whose execution was deferred by F are run in
// the usual way, and then F returns to its caller. To the caller G, the
// invocation of F then behaves like a call to panic, terminating G's
// execution and running any deferred functions. This continues until all
// functions in the executing goroutine have stopped, in reverse order. At
// that point, the program is terminated and the error condition is reported,
// including the value of the argument to panic. This termination sequence
// is called panicking and can be controlled by the built-in function
// recover.
func panic(v interface{})
It appears panic does not return anything.
Is that the primary difference? Otherwise, they seem to perform the same function in an application, assuming the panic is not recovered.

The log message goes to the configured log output, while panic only writes to stderr.
Panic will print a stack trace, which may not be relevant to the error at all.
Defers will be executed when a program panics, but calling os.Exit exits immediately, and deferred functions can't be run.
In general, only use panic for programming errors, where the stack trace is important to the context of the error. If the message isn't targeted at the programmer, you're simply hiding the message in superfluous data.
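To make the defer difference concrete, here is a minimal sketch (the messages are made up for illustration): the deferred call runs when the program panics, but would be skipped if log.Fatalln terminated the process instead.

package main

import "log"

func main() {
    // This deferred call runs when main panics, because panicking unwinds
    // the stack and executes deferred functions before the process dies.
    defer log.Println("cleanup ran")

    if useFatal := false; useFatal {
        // log.Fatalln calls os.Exit(1): the process ends immediately and
        // the deferred log.Println above would never run.
        log.Fatalln("fatal: terminating without running defers")
    }

    // The deferred log.Println runs first, then the panic value and a
    // stack trace are printed and the process exits with a non-zero status.
    panic("boom: defers still run")
}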

panic is often used in small programs to terminate as soon as an error appears that you don't know how to handle or don't want to handle. The downside of panic is exactly that: it terminates the program (unless a recover intervenes). It's generally not good to use panic unless you intend to recover from it, or unless something has happened that you truly can't recover from and can't terminate the program gracefully otherwise. Consider, for example, an API that offers you functionality but secretly panics somewhere, and you notice that your program terminates in production because of it. Thus, the "outward API" of whatever code you write should recover from panics and return an error instead. The same applies to anything else that terminates the program.
However, os.Exit() can't be recovered from nor does it execute defers.
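A minimal sketch of that "recover at the outward API" idea; the function names here are hypothetical, not taken from any real package:

package main

import "fmt"

// Do wraps an internal call that may panic and converts the panic into an
// ordinary error, so callers of this "outward API" never see the panic.
func Do() (err error) {
    defer func() {
        if r := recover(); r != nil {
            err = fmt.Errorf("internal error: %v", r)
        }
    }()
    riskyInternalCall()
    return nil
}

// riskyInternalCall stands in for third-party or internal code that panics.
func riskyInternalCall() {
    panic("something went badly wrong")
}

func main() {
    if err := Do(); err != nil {
        fmt.Println("got error instead of a crash:", err)
    }
}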

Another observation about panic vs log.Fatalln:
If a function that is supposed to return something does not end in a terminating statement, compilation fails with a "missing return" error. A call to panic counts as a terminating statement in the Go specification, so a function that ends with panic compiles just fine; log.Fatalln is an ordinary function call as far as the compiler is concerned, so it does not:
// this will not compile ("missing return") even if it calls log.Fatalln:
func my_func(key string) string {
    my_map := map[string]string{"key": "value"}
    if val, key_exists := my_map[key]; key_exists {
        return val
    }
    log.Fatalln(fmt.Sprintf("unknown key %q", key))
}
// this will compile just fine if it calls panic:
func my_func(key string) string {
    my_map := map[string]string{"key": "value"}
    if val, key_exists := my_map[key]; key_exists {
        return val
    }
    panic(fmt.Sprintf("unknown key %q", key))
}
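If you want to keep log.Fatalln there, one workaround is to follow it with a dummy return (or a panic) so the compiler sees a terminating statement - a fragment mirroring the example above:

func my_func(key string) string {
    my_map := map[string]string{"key": "value"}
    if val, key_exists := my_map[key]; key_exists {
        return val
    }
    log.Fatalln(fmt.Sprintf("unknown key %q", key))
    return "" // never reached; present only to satisfy the compiler
}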

Related

How exactly does this goroutine in an anonymous function work?

func (s *server) send(m *message) error {
    go func() {
        s.outgoingMessageChan <- m
    }()
    return nil
}

func main(s *server) {
    for {
        select {
        case <-someChannel:
            // do something
        case msg := <-s.outgoingMessageChan:
            // take message sent from "send" and do something
        }
    }
}
I am pulling out of s.outgoingMessageChan in another function. Before wrapping the send in an anonymous go function, a call to this function would usually block - meaning whenever send was called, s.outgoingMessageChan <- m would block until something pulled from the channel. However, after wrapping it like this it doesn't seem to block anymore. I understand that it kind of sends this operation to the background and proceeds as usual, but I'm not able to wrap my head around how this doesn't affect the current function call.
Each time send is called a new goroutine is created, and send returns immediately. (BTW there is no reason to return an error if there can never be an error.) The goroutine (which has its own "thread" of execution) will block if nothing is ready to read from the chan (assuming it's unbuffered). Once the message is read off the chan the goroutine will continue, but since it does nothing else it will simply end.
I should point out that there is no such thing as an anonymous goroutine. Goroutines have no identifier at all (except for a number that you should only use for debugging purposes). You have an anonymous function which you put the go keyword in front causing it to run in a separate goroutine.
For a send function that blocks as you seem to want then just use:
func (s *server) send(m *message) {
    s.outgoingMessageChan <- m
}
However, I can't see any point in this function (though it would be inlined and just as efficient as not using a function).
I suspect you may be calling send many times before anything is read from the chan. In this case many new goroutines will be created (each time you call send) which will all block. Each time the chan is read from one will unblock delivering its value and that goroutine will terminate. Doing this you are simply creating an inefficient buffering mechanism. Moreover, if send is called for a prolonged period at a faster rate than the values can be read from the chan then you will eventually run out of memory. Better would be to use a buffered chan (and no goroutines) that once it (the chan) became full exerted "back-pressure" on whatever was producing the messages.
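A rough sketch of that buffered-chan idea (the capacity and the message type are illustrative assumptions, not taken from the question):

type message struct{ body string }

type server struct {
    outgoingMessageChan chan *message
}

func newServer() *server {
    // The buffer lets a limited number of sends proceed without a receiver;
    // once it is full, further sends block, which is the "back-pressure".
    return &server{outgoingMessageChan: make(chan *message, 64)} // capacity is a guess
}

func (s *server) send(m *message) {
    s.outgoingMessageChan <- m // blocks only when the buffer is full
}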
Another point is that the function name main is used to identify the entry point of a program. Please use another name for your 2nd function above. It also seems like it should be a method (with an s *server receiver) rather than a function.

Check if function is being called as goroutine or not

Is there any way to find out if a running function was called as goroutine or not?
I've read 'go tour' and I am interested in building a websocket server with golang, so I found this tutorial https://tutorialedge.net/golang/go-websocket-tutorial/
Now I'm wondering if wsEndpoint function from the tutorial is invoked as goroutine (e.g. go wsEndpoint(...)) or not.
I've tried to read the http package documentation, but did not get a clear picture, just a guess that the handler will be called in a goroutine. Is that true?
Every function is called from a goroutine, even the main() function (which runs in what is called the main goroutine).
And goroutines in Go have no identity. It does not matter which goroutine calls a function.
To answer your "original" question:
Is there any way to find out if a running function was called as goroutine or not?
If we define this as the function being called with the go statement or without that, then the answer is yes: we can check that.
But before we do: I would not use this information for anything. Don't write code that depends on this, nor on which goroutine calls a function. If you need to access a resource concurrently from multiple goroutines, just use proper synchronization.
Basically we can check the call stack: the list of functions that call each other. If the function is at the top of that list, then it was called using go (check note at the end of the answer). If there are other functions before that in the call stack, then it was called without go, from another function (that places before in the call stack).
We may use runtime.Callers() to get the calling goroutine's stack. This is how we can check if there are other functions calling "us":
func f(name string) {
    count := runtime.Callers(3, make([]uintptr, 1))
    if count == 0 {
        fmt.Printf("%q is launched as new\n", name)
    }
}
Testing it:
func main() {
    f("normal")
    go f("with-go")
    func() { f("from-anon") }()
    func() { go f("from-anon-with-go") }()
    f2("from-f2")
    go f2("with-go-from-f2")
    f3("from-f3")
    go f3("with-go-from-f3")
    time.Sleep(time.Second)
}

func f2(name string) { f(name) }
func f3(name string) { go f(name) }
This will output (try it on the Go Playground):
"with-go" is launched as new
"from-anon-with-go" is launched as new
"from-f3" is launched as new
"with-go-from-f3" is launched as new
Note: basically there is a runtime.goexit() function on "top" of all call stacks, this is the top-most function running on a goroutine and is the "exit" point for all goroutines. This is why we skip 3 frames from the stack (0. is runtime.Callers() itself, 1. is the f() function, and the last one to skip is runtime.goexit()). You can check the full call stacks with function and file names+line numbers in this Go Playground. This doesn't change the viability of this solution, it's just that we have to skip 3 frames instead of 2 to tell if f() was called from another function or with the go statement.

What is the possible use case for defer?

What is the actual use of the defer keyword?
for example, instead of writing this:
func main() {
    f := createFile("/tmp/defer.txt")
    defer closeFile(f)
    writeFile(f)
}
I can just write this:
func main() {
    f := createFile("/tmp/defer.txt")
    writeFile(f)
    closeFile(f)
}
So, why should I use it instead of a usual placing of functions?
Deferred functions always get executed, even after a panic or return statement.
In real world code a lot of stuff happens between Open/Close type of call pairs, and defer lets you keep them close together in the source, and you don't have to repeat the Close call for every return statement.
Go and write some real code. The usefulness of defer will be blatantly obvious before long.
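A minimal sketch of the Open/Close point above, with hypothetical processing in between:

package main

import (
    "fmt"
    "os"
)

func process(path string) error {
    f, err := os.Open(path)
    if err != nil {
        return err
    }
    // The Close stays right next to the Open, and it runs on every return
    // path below, including early returns and panics.
    defer f.Close()

    buf := make([]byte, 4)
    if _, err := f.Read(buf); err != nil {
        return err // f.Close() still runs
    }
    fmt.Printf("read %q\n", buf)
    return nil // f.Close() still runs
}

func main() {
    if err := process("/tmp/defer.txt"); err != nil {
        fmt.Println("error:", err)
    }
}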
defer is also very useful for catching code that has the potential to panic.
Often when using interface{} (an "any" type) or reflection, you will encounter issues where you are trying to assert to a type that doesn't match the actual type of the data.
Deferring a function at the top that recovers from the resulting panic is how you save the day and keep your application running.
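A short sketch of that pattern - the value and the type assertion are made up just to force a panic:

package main

import "fmt"

func describe(v interface{}) {
    // The deferred recover turns a panic from a bad type assertion into a
    // message instead of crashing the whole application.
    defer func() {
        if r := recover(); r != nil {
            fmt.Println("recovered:", r)
        }
    }()
    n := v.(int) // panics if v does not actually hold an int
    fmt.Println("got int:", n)
}

func main() {
    describe(42)
    describe("not an int") // triggers the panic, which is recovered
    fmt.Println("still running")
}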

How to implement a PHP function `die()` (or `exit()`) in Go?

In PHP, die() is used to stop running the script to prevent unexpected behaviour. In Go, what is the idiomatic way to stop a handler function - panic() or return?
You should use os.Exit.
Exit causes the current program to exit with the given status code.
Conventionally, code zero indicates success, non-zero an error. The
program terminates immediately; deferred functions are not run.
package main

import (
    "fmt"
    "os"
)

func main() {
    fmt.Println("Start")
    os.Exit(1)
    fmt.Println("End")
}
You can also use panic; it also stops normal execution, but it reports the panic value and a stack trace when execution stops.
The panic built-in function stops normal execution of the current
goroutine. When a function F calls panic, normal execution of F stops
immediately. Any functions whose execution was deferred by F are run
in the usual way, and then F returns to its caller. To the caller G,
the invocation of F then behaves like a call to panic, terminating G's
execution and running any deferred functions. This continues until all
functions in the executing goroutine have stopped, in reverse order.
At that point, the program is terminated and the error condition is
reported, including the value of the argument to panic. This
termination sequence is called panicking and can be controlled by the
built-in function recover.
package main

import "fmt"

func main() {
    fmt.Println("Start")
    panic("exit")
    fmt.Println("End")
}
If you don't want to print a stack trace after exiting the program, you can use os.Exit. Also you are able to set a specific exit code with os.Exit.
Example (https://play.golang.org/p/XhDkKMhtpm):
package main

import (
    "fmt"
    "os"
)

func foo() {
    fmt.Println("bim")
    os.Exit(1)
    fmt.Println("baz")
}

func main() {
    foo()
    foo()
}
Also be aware, that os.Exit immediately stops the program and doesn't run any deferred functions, while panic() does. See https://play.golang.org/p/KjGFZzTrJ7 and https://play.golang.org/p/Q4iciT35kP.
You can use panic in HTTP handler. Server will handle it. See Handler.
If ServeHTTP panics, the server (the caller of ServeHTTP) assumes that the effect of the panic was isolated to the active request. It recovers the panic, logs a stack trace to the server error log, and hangs up the connection.
Function panic is reserved for the situation when program just cannot continue. Inability to serve just one request is not the same as inability to continue to work, so I would log the error, set a correct HTTP status and use return. See Effective Go.
The usual way to report an error to a caller is to return an error as an extra return value. The canonical Read method is a well-known instance; it returns a byte count and an error. But what if the error is unrecoverable? Sometimes the program simply cannot continue.
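A sketch of that approach in a handler; the handler, route, and lookup function here are illustrative, not from the question:

package main

import (
    "errors"
    "log"
    "net/http"
)

func itemHandler(w http.ResponseWriter, r *http.Request) {
    item, err := loadItem(r.URL.Query().Get("id"))
    if err != nil {
        // Log for the operator, set a proper status for the client, and
        // return - no need to panic or exit over one bad request.
        log.Printf("loading item: %v", err)
        http.Error(w, "item not found", http.StatusNotFound)
        return
    }
    w.Write([]byte(item))
}

// loadItem is a stand-in for a real lookup that can fail.
func loadItem(id string) (string, error) {
    return "", errors.New("no item with id " + id)
}

func main() {
    http.HandleFunc("/item", itemHandler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}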
The idiomatic way to break out of a function in Go is to use panic(). This is the de facto way to stop execution at runtime. If you want to recover from the panic you can use the built-in recover() function.
Panic explanation:
Panic is a built-in function that stops the ordinary flow of control
and begins panicking. When the function F calls panic, execution of F
stops, any deferred functions in F are executed normally, and then F
returns to its caller.
https://blog.golang.org/defer-panic-and-recover
Recover explanation:
Recover is a built-in function that regains control of a panicking
goroutine. Recover is only useful inside deferred functions. During
normal execution, a call to recover will return nil and have no other
effect. If the current goroutine is panicking, a call to recover will
capture the value given to panic and resume normal execution.
https://blog.golang.org/defer-panic-and-recover
And here is a simple example:
package main
import "fmt"
func badCall() {
panic("Bad call happend!")
}
func test() {
defer func() {
if err := recover(); err != nil {
fmt.Printf("Panicking %s\n\r", err)
}
}()
badCall()
fmt.Println("This is never executed!!")
}
func main() {
fmt.Println("Start testing")
test()
fmt.Println("End testing")
}

Goroutine only works when fmt.Println is executed

For some reason, when I remove the fmt.Printlns the code blocks.
I've got no idea why it happens. All I want to do is to implement a simple concurrency limiter...
I've never experienced such a weird thing. It's as if fmt flushes the variables or something and makes it work.
Also, when I use a regular function instead of a goroutine it works too.
Here's the following code -
package main

import "fmt"

type ConcurrencyLimit struct {
    active int
    Limit  int
}

func (c *ConcurrencyLimit) Block() {
    for {
        fmt.Println(c.active, c.Limit)
        // If should block
        if c.active == c.Limit {
            continue
        }
        c.active++
        break
    }
}

func (c *ConcurrencyLimit) Decrease() int {
    fmt.Println("decrease")
    if c.active > 0 {
        c.active--
    }
    return c.active
}

func main() {
    c := ConcurrencyLimit{Limit: 1}
    c.Block()
    go func() {
        c.Decrease()
    }()
    c.Block()
}
Clarification: Even though I've accepted @kaedys's answer, a solution was also given by @Kaveh Shahbazian.
You're not giving c.Decrease() a chance to run. c.Block() runs an infinite for loop, but it never blocks in that for loop, just calling continue over and over on every iteration. The main thread spins at 100% usage endlessly.
However, when you add an fmt.Print() call, that makes a syscall, which allows the other goroutine to run.
This post has details on how exactly goroutines yield or are pre-empted. Note, however, that it's slightly out of date, as entering a function now has a random chance to yield that thread to another goroutine, to prevent similar style flooding of threads.
As others have pointed out, Block() will never yield; a goroutine is not a thread. You could use Gosched() in the runtime package to force a yield -- but note that spinning this way in Block() is a pretty terrible idea.
There are much better ways to do concurrency limiting. See http://jmoiron.net/blog/limiting-concurrency-in-go/ for one example
What you are looking for is called a semaphore. You can apply this pattern using channels
http://www.golangpatterns.info/concurrency/semaphores
The idea is that you create a buffered channel of a desired length. Then you make callers acquire the resource by putting a value into the channel and reading it back out when they want to free the resource. Doing so creates proper synchronization points in your program so that the Go scheduler runs correctly.
What you are doing now is spinning the cpu and blocking the Go scheduler. It depends on how many cpus you have available, the version of Go, and the value of GOMAXPROCS. Given the right combination, there may not be another available thread to service other goroutines while you infinitely spin that particular thread.
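A minimal sketch of that semaphore pattern with a buffered channel (the limit of 3 and the dummy work are made up for illustration):

package main

import (
    "fmt"
    "sync"
)

func main() {
    sem := make(chan struct{}, 3) // at most 3 goroutines hold the "resource" at once
    var wg sync.WaitGroup

    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            sem <- struct{}{}        // acquire: blocks while all 3 slots are taken
            defer func() { <-sem }() // release when done
            fmt.Println("working:", id)
        }(i)
    }
    wg.Wait()
}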
While other answers have pretty much covered the reason (you are not giving the goroutine a chance to run) - and I'm not sure what you intend to achieve here - you are also mutating a value concurrently without proper synchronization. A rewrite of the above code with synchronization taken into account would be:
package main

import (
    "fmt"
    "sync"
)

type ConcurrencyLimit struct {
    active int
    Limit  int
    cond   *sync.Cond
}

func (c *ConcurrencyLimit) Block() {
    c.cond.L.Lock()
    for c.active == c.Limit {
        c.cond.Wait()
    }
    c.active++
    c.cond.L.Unlock()
    c.cond.Signal()
}

func (c *ConcurrencyLimit) Decrease() int {
    defer c.cond.Signal()
    c.cond.L.Lock()
    defer c.cond.L.Unlock()
    fmt.Println("decrease")
    if c.active > 0 {
        c.active--
    }
    return c.active
}

func main() {
    c := ConcurrencyLimit{
        Limit: 1,
        cond:  &sync.Cond{L: &sync.Mutex{}},
    }
    c.Block()
    go func() {
        c.Decrease()
    }()
    c.Block()
    fmt.Println(c.active, c.Limit)
}
sync.Cond is a synchronization utility designed for times that you want to check if a condition is met, concurrently; while other workers are mutating the data of the condition.
The Lock and Unlock functions work as we expect from a lock. When we are done with checking or mutating, we can call Signal to awake one goroutine (or call Broadcast to awake more than one), so the goroutine knows that is free to act upon the data (or check a condition).
The only part that may seem unusual is the Wait function. It is actually very simple: it is like calling Unlock and instantly calling Lock again - except that Wait does not try to reacquire the lock until it is triggered by a Signal (or Broadcast) from another goroutine, such as a worker that is mutating the data of the condition.

Resources