I get different output from the race detector for println and fmt.Println, which I couldn't explain. I expected both to report a race, or at least both to report none.
package main

var a int

func f() {
    a = 1
}

func main() {
    go f()
    println(a)
}
And it finds the race as expected.
0
==================
WARNING: DATA RACE
Write by goroutine 5:
main.f()
/home/felmas/test.go:6 +0x30
Previous read by main goroutine:
main.main()
/home/felmas/test.go:11 +0x4d
Goroutine 5 (running) created at:
main.main()
/home/felmas/test.go:10 +0x38
==================
Found 1 data race(s)
However, this one runs without any detected race.
package main

import "fmt"

var a int

func f() {
    a = 1
}

func main() {
    go f()
    fmt.Println(a)
}
To my knowledge, "no race detected" does not mean there is no race, so is this one of those deficiencies, or is there a deeper explanation, given that println is a builtin and quite special?
The race detector is a dynamic testing tool, not a static analysis. To get reliable results from it, you should strive for high test coverage of your program, preferably by writing lots of tests and benchmarks that run on multiple processors (GOMAXPROCS > 1; GOMAXPROCS=NumCPU is the default since Go 1.5), and use a continuous integration tool that executes those tests regularly.
The race detector does not report false positives, so you should take every report seriously. On the other hand, it might not detect every race on every run, since detection depends on the order in which the goroutines happen to be scheduled.
In your example, wrapping everything in a tight loop and re-running it under the race detector reports the race correctly in both cases.
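For example, a minimal sketch (my own, not the exact code the answer refers to) that repeats the fmt.Println variant in a loop; running it with go run -race reliably produces the report:
package main

import "fmt"

var a int

func f() {
    a = 1
}

func main() {
    // Repeating the racy write/read pair gives the scheduler many chances
    // to interleave the goroutines, so the detector observes the conflict.
    for i := 0; i < 1000; i++ {
        go f()
        fmt.Println(a)
    }
}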
Page 253 of The Go Programming Language states:
... if instead of returning from main in the event of cancellation, we execute a call to panic, then the runtime will dump the stack of every goroutine in the program.
This code deliberately leaks a goroutine by waiting on a channel that never has anything to receive:
package main

import (
    "fmt"
    "time"
)

func main() {
    never := make(chan struct{})
    go func() {
        defer fmt.Println("End of child")
        <-never
    }()
    time.Sleep(10 * time.Second)
    panic("End of main")
}
However, the runtime only lists the main goroutine when panic is called:
panic: End of main
goroutine 1 [running]:
main.main()
/home/simon/panic/main.go:15 +0x7f
exit status 2
If I press Ctrl-\ to send SIGQUIT during the ten seconds before main panics, I do see the child goroutine listed in the output:
goroutine 1 [sleep]:
time.Sleep(0x2540be400)
/usr/lib/go-1.17/src/runtime/time.go:193 +0x12e
main.main()
/home/simon/panic/main.go:14 +0x6c
goroutine 18 [chan receive]:
main.main.func1()
/home/simon/panic/main.go:12 +0x76
created by main.main
/home/simon/panic/main.go:10 +0x5d
I thought maybe the channel was getting closed as panic runs (which still wouldn't guarantee the deferred fmt.Println had time to execute), but I get the same behaviour if the child goroutine does a time.Sleep instead of waiting on a channel.
I know there are ways to dump goroutine stacktraces myself, but my question is why doesn't panic behave as described in the book? The language spec only says that a panic will terminate the program, so is the book simply describing implementation-dependent behaviour?
Thanks to kostix for pointing me to the GOTRACEBACK runtime environment variable. Setting this to all instead of leaving it at the default of single restores the behaviour described in TGPL. Note that this variable is significant to the runtime, but you can't manipulate it with go env.
The default of listing only the panicking goroutine is a change in Go 1.6 - my edition of the book is copyrighted 2016 and gives Go 1.5 as the prerequisite for its example code, so it must predate the change. It's interesting reading the change discussion that there was concern about hiding useful information (as the recipient of many an incomplete error report, I can sympathise with this), but nobody called out the issue of scaling to large production systems that kostix mentioned.
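For reference, a minimal sketch of the two ways to get the old behaviour (my own illustration, assuming the program lives in main.go): either run it with the environment variable set, GOTRACEBACK=all go run main.go, or raise the traceback level from inside the program with runtime/debug.SetTraceback:
package main

import (
    "fmt"
    "runtime/debug"
    "time"
)

func main() {
    // Same effect as GOTRACEBACK=all: the crash dump then includes every
    // goroutine, not just the panicking one.
    debug.SetTraceback("all")

    never := make(chan struct{})
    go func() {
        defer fmt.Println("End of child")
        <-never
    }()
    time.Sleep(10 * time.Second)
    panic("End of main")
}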
Trying to understand the flow of goroutines, I wrote this code. The one thing I am not able to understand is how "routine-end" runs in between the other goroutines, i.e. how a single goroutine runs to completion while the output from the channel is only printed at the end.
package main

import (
    "fmt"
)

func add(dataArr []int, dataChannel chan int, i int) {
    var sum int
    fmt.Println("GOROUTINE", i+1)
    for i := 0; i < len(dataArr); i++ {
        sum += dataArr[i]
    }
    fmt.Println("wRITING TO CHANNEL.....")
    dataChannel <- sum
    fmt.Println("routine-end")
}

func main() {
    fmt.Println("main() started")
    dataChannel := make(chan int)
    dataArr := []int{1, 2, 3, 4, 5, 6, 7, 8, 9}
    for i := 0; i < len(dataArr); i += 3 {
        go add(dataArr[i:i+3], dataChannel, i)
    }
    fmt.Println("came to blocking statement ..........")
    fmt.Println(<-dataChannel)
    fmt.Println("main() end")
}
output
main() started
came to blocking statement ..........
GOROUTINE 1
wRITING TO CHANNEL.....
routine-end
GOROUTINE 4
wRITING TO CHANNEL.....
6
main() end
Your for loop launches 3 goroutines that invoke the add function.
In addition, main itself runs in a separate "main" goroutine.
Since goroutines execute concurrently, the order of their run is typically unpredictable and depends on timing, how busy your machine is, etc. Results may differ between runs and between machines. Inserting time.Sleep calls in various places may help visualize it. For example, inserting time.Sleep for 100ms before "came to blocking statement" shows that all add goroutines launch.
What you will typically see in a run is that one add goroutine launches, adds up its slice into sum and writes sum to dataChannel. Since main launches the goroutines and immediately reads from the channel, this read gets the sum written by that add, and then the program exits -- because by default main won't wait for all other goroutines to finish.
Moreover, since the dataChannel channel is unbuffered and main only reads one value, the other add goroutines will block on the channel indefinitely while writing.
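A minimal sketch of one way to fix this (my own illustration, not from the question): receive one value per launched goroutine, so every add gets to complete its send:
package main

import "fmt"

func add(dataArr []int, dataChannel chan int) {
    sum := 0
    for _, v := range dataArr {
        sum += v
    }
    dataChannel <- sum
}

func main() {
    dataChannel := make(chan int)
    dataArr := []int{1, 2, 3, 4, 5, 6, 7, 8, 9}

    n := 0
    for i := 0; i < len(dataArr); i += 3 {
        go add(dataArr[i:i+3], dataChannel)
        n++
    }

    total := 0
    for i := 0; i < n; i++ {
        total += <-dataChannel // one receive per goroutine, so no sender blocks forever
    }
    fmt.Println("total:", total) // prints 45
}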
I do recommend going over some introductory resources for goroutines and channels. They build up the concepts from simple principles. Some good links for you:
Golang tour
https://gobyexample.com/ -- start with the Goroutines example and do the next several ones.
I'm facing a problem in Go:
package main

import (
    "fmt"
    "time"
)

var a = 0

func main() {
    go func() {
        for {
            a = a + 1
        }
    }()
    time.Sleep(time.Second)
    fmt.Printf("result=%d\n", a)
}
expected: result=(a big int number)
result: result=0
You have a race condition.
Run your program with the -race flag:
go run -race main.go
==================
WARNING: DATA RACE
Read at 0x0000005e9600 by main goroutine:
main.main()
/home/jack/Project/GoProject/src/gitlab.com/hooshyar/GoNetworkLab/StackOVerflow/race/main.go:17 +0x6c
Previous write at 0x0000005e9600 by goroutine 6:
main.main.func1()
/home/jack/Project/GoProject/src/gitlab.com/hooshyar/GoNetworkLab/StackOVerflow/race/main.go:13 +0x56
Goroutine 6 (running) created at:
main.main()
/home/jack/Project/GoProject/src/gitlab.com/hooshyar/GoNetworkLab/StackOVerflow/race/main.go:11 +0x46
==================
result=119657339
Found 1 data race(s)
exit status 66
What is the solution?
One solution is to use a mutex:
package main

import (
    "fmt"
    "sync"
    "time"
)

var a = 0

func main() {
    var mu sync.Mutex
    go func() {
        for {
            mu.Lock()
            a = a + 1
            mu.Unlock()
        }
    }()
    time.Sleep(3 * time.Second)
    mu.Lock()
    fmt.Printf("result=%d\n", a)
    mu.Unlock()
}
Lock the mutex before any read or write and unlock it afterwards; now you don't have any race, and the result will be a big integer at the end.
For more information read this topic.
Data races in Go(Golang) and how to fix them
and this
Golang concurrency - data races
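A lighter-weight alternative for a plain integer counter (my own addition, not part of the answer above) is the sync/atomic package:
package main

import (
    "fmt"
    "sync/atomic"
    "time"
)

func main() {
    var a int64 // atomic operations work on fixed-size integer types

    go func() {
        for {
            atomic.AddInt64(&a, 1) // atomic increment, no mutex needed
        }
    }()

    time.Sleep(time.Second)
    fmt.Printf("result=%d\n", atomic.LoadInt64(&a)) // atomic read
}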
As other writers have mentioned, you have a data race, but if you are comparing this behavior to, say, a program written in C using pthreads, you are missing some important data. Your problem is not just about timing, it's about the very language definition. Because concurrency primitives are baked into the language itself, the Go language memory model (https://golang.org/ref/mem) describes exactly when and how changes in one goroutine -- think of goroutines as "super-lightweight user-space threads" and you won't be too far off -- are guaranteed to be visible to code running in another goroutine.
Without any synchronizing actions, like channel sends/receives or sync.Mutex locks/unlocks, the Go memory model says that any changes you make to 'a' inside that goroutine don't ever have to be visible to the main goroutine. And, since the compiler knows that, it is free to optimize away pretty much everything in your for loop. Or not.
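As a concrete illustration (my own sketch, not from the question), a channel send/receive is one such synchronizing action; it guarantees the write is visible to the main goroutine:
package main

import "fmt"

var a int

func main() {
    done := make(chan struct{})

    go func() {
        a = 42      // this write happens before the close below...
        close(done) // ...and the close happens before the receive in main completes
    }()

    <-done         // after this receive, everything before close(done) is visible
    fmt.Println(a) // guaranteed to print 42
}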
The unsynchronized case is similar to having, say, an int variable in C that a while loop polls, waiting for an ISR to set it to zero. The compiler may get too clever and optimize away the test for zero, reasoning that the variable can never change inside the loop and that you really just wanted an infinite loop, so you have to declare the variable volatile to fix the 'bug'.
If you are going to be working in Go, (my current favorite language, FWIW,) take time to read and thoroughly grok the Go memory model linked above, and it will really pay off in the future.
Your program is running into a race condition, and Go can detect such scenarios.
Try running your program with go run -race main.go, assuming your file name is main.go. It will show how the race occurred:
an attempted write inside the goroutine,
a simultaneous read by the main goroutine.
It will also print a large int number, as you expected.
I'm just wondering if there is potential for corruption as a result of writing the same value to a global variable at the same time. My brain tells me there is nothing wrong with this because it's just a location in memory, but I figured I should double-check this assumption.
I have concurrent processes writing to a global map var linksToVisit map[string]bool. The map is actually tracking what links on a website need to be further crawled.
However, it can be the case that concurrent processes have the same link on their respective pages, and therefore each will mark that same link as true concurrently. There's nothing wrong with NOT using locks in this case, right? NOTE: I never change the value back to false, so either the key exists and its value is true, or it doesn't exist.
I.e.
var linksToVisit = map[string]bool{}
...
// somewhere later a goroutine finds a link and marks it as true
// it is never marked as false anywhere
linksToVisit[someLink] = true
What happens if concurrent processes write to a global variable the
same value?
The results of a data race are undefined.
Run the Go data race detector.
References:
Wikipedia: Race condition
Benign Data Races: What Could Possibly Go Wrong?
The Go Blog: Introducing the Go Race Detector
Go: Data Race Detector
Go 1.8 Release Notes
Concurrent Map Misuse
In Go 1.6, the runtime added lightweight, best-effort detection of
concurrent misuse of maps. This release improves that detector with
support for detecting programs that concurrently write to and iterate
over a map.
As always, if one goroutine is writing to a map, no other goroutine
should be reading (which includes iterating) or writing the map
concurrently. If the runtime detects this condition, it prints a
diagnosis and crashes the program. The best way to find out more about
the problem is to run the program under the race detector, which will
more reliably identify the race and give more detail.
For example,
package main

import "time"

var linksToVisit = map[string]bool{}

func main() {
    someLink := "someLink"
    go func() {
        for {
            linksToVisit[someLink] = true
        }
    }()
    go func() {
        for {
            linksToVisit[someLink] = true
        }
    }()
    time.Sleep(100 * time.Millisecond)
}
Output:
$ go run racer.go
fatal error: concurrent map writes
$
$ go run -race racer.go
==================
WARNING: DATA RACE
Write at 0x00c000078060 by goroutine 6:
runtime.mapassign_faststr()
/home/peter/go/src/runtime/map_faststr.go:190 +0x0
main.main.func2()
/home/peter/gopath/src/racer.go:16 +0x6a
Previous write at 0x00c000078060 by goroutine 5:
runtime.mapassign_faststr()
/home/peter/go/src/runtime/map_faststr.go:190 +0x0
main.main.func1()
/home/peter/gopath/src/racer.go:11 +0x6a
Goroutine 6 (running) created at:
main.main()
/home/peter/gopath/src/racer.go:14 +0x88
Goroutine 5 (running) created at:
main.main()
/home/peter/gopath/src/racer.go:9 +0x5b
==================
fatal error: concurrent map writes
$
It is better to use locks if you are changing the same value concurrently from multiple goroutines. A mutex secures the value against being accessed while another goroutine is changing it, much like locking a database table that is being written to while others read from it.
As for your question about maps, even with different keys concurrent use is not safe in Go without synchronization:
The typical use of maps did not require safe access from multiple
goroutines, and in those cases where it did, the map was probably part
of some larger data structure or computation that was already
synchronized. Therefore requiring that all map operations grab a mutex
would slow down most programs and add safety to few.
Map access is unsafe only when updates are occurring. As long as all
goroutines are only reading—looking up elements in the map, including
iterating through it using a for range loop—and not changing the map
by assigning to elements or doing deletions, it is safe for them to
access the map concurrently without synchronization.
So concurrently updating a map without synchronization is not recommended. For more information, check the FAQ on why map operations are not defined to be atomic.
It is also noted that if you really want concurrent access, there has to be some way to synchronize it:
Maps are not safe for concurrent use: it's not defined what happens
when you read and write to them simultaneously. If you need to read
from and write to a map from concurrently executing goroutines, the
accesses must be mediated by some kind of synchronization mechanism.
One common way to protect maps is with sync.RWMutex.
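A minimal sketch of that sync.RWMutex approach, applied to the question's linksToVisit map (my own illustration; markToVisit and shouldVisit are hypothetical helper names):
package main

import "sync"

var (
    mu           sync.RWMutex
    linksToVisit = map[string]bool{}
)

// markToVisit records a link under the write lock.
func markToVisit(link string) {
    mu.Lock()
    linksToVisit[link] = true
    mu.Unlock()
}

// shouldVisit looks up a link under the read lock, so many readers
// can check links concurrently as long as nobody is writing.
func shouldVisit(link string) bool {
    mu.RLock()
    defer mu.RUnlock()
    return linksToVisit[link]
}

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 2; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            markToVisit("someLink") // concurrent writers are now safe
        }()
    }
    wg.Wait()
    _ = shouldVisit("someLink")
}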
Concurrent map writes are not OK, so you will most likely get a fatal error; a lock should be used.
As of Go 1.6, the runtime detects simultaneous map writes and crashes the program with a fatal error. Use a sync.Map (available since Go 1.9) to synchronize access; a minimal sketch follows below.
See the map value assign implementation:
https://github.com/golang/go/blob/fe8a0d12b14108cbe2408b417afcaab722b0727c/src/runtime/hashmap.go#L519
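A minimal sketch of the sync.Map approach mentioned above (my own illustration, not from the answer):
package main

import (
    "fmt"
    "sync"
)

func main() {
    var linksToVisit sync.Map // safe for concurrent use without extra locking

    var wg sync.WaitGroup
    for i := 0; i < 2; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            linksToVisit.Store("someLink", true) // concurrent writes are fine
        }()
    }
    wg.Wait()

    if v, ok := linksToVisit.Load("someLink"); ok {
        fmt.Println("someLink:", v) // someLink: true
    }
}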
To settle some misunderstandings I have about goroutines, I went to the Go playground and ran this code:
package main

import (
    "fmt"
)

func other(done chan bool) {
    done <- true
    go func() {
        for {
            fmt.Println("Here")
        }
    }()
}

func main() {
    fmt.Println("Hello, playground")
    done := make(chan bool)
    go other(done)
    <-done
    fmt.Println("Finished.")
}
As I expected, Go playground came back with an error: Process took too long.
This seems to imply that the goroutine created within other runs forever.
But when I run the same code on my own machine, I get this output almost instantaneously:
Hello, playground.
Finished.
This seems to imply that the goroutine within other exits when the main goroutine finishes. Is this true? Or does the main goroutine finish, while the other goroutine continues to run in the background?
Edit: Default GOMAXPROCS has changed on the Go Playground, it now defaults to 8. In the "old" days it defaulted to 1. To get the behavior described in the question, set it to 1 explicitly with runtime.GOMAXPROCS(1).
Explanation of what you see:
On the Go Playground, GOMAXPROCS is 1 (proof).
This means one goroutine is executed at a time, and if that goroutine does not block, the scheduler is not forced to switch to other goroutines.
Your code (like every Go app) starts with a goroutine executing the main() function (the main goroutine). It starts another goroutine that executes the other() function, then it receives from the done channel - which blocks. So the scheduler must switch to the other goroutine (executing other() function).
In your other() function when you send a value on the done channel, that makes both the current (other()) and the main goroutine runnable. The scheduler chooses to continue to run other(), and since GOMAXPROCS=1, main() is not continued. Now other() launches another goroutine executing an endless loop. The scheduler chooses to execute this goroutine which takes forever to get to a blocked state, so main() is not continued.
And then the timeout of the Go Playground's sandbox comes as an absolution:
process took too long
Note that the Go Memory Model only guarantees that certain events happen before other events, you have no guarantee how 2 concurrent goroutines are executed. Which makes the output non-deterministic.
You are not to question any execution order that does not violate the Go Memory Model. If you want the execution to reach certain points in your code (to execute certain statements), you need explicit synchronization (you need to synchronize your goroutines).
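For example, a minimal sketch (my own variant of the question's code, not from the original answer) that uses a second channel so that main cannot finish before the inner goroutine has printed at least once:
package main

import "fmt"

func other(done, printed chan bool) {
    done <- true
    go func() {
        fmt.Println("Here")
        printed <- true // signal main that at least one print has happened
        for {
            fmt.Println("Here")
        }
    }()
}

func main() {
    fmt.Println("Hello, playground")
    done := make(chan bool)
    printed := make(chan bool)
    go other(done, printed)
    <-done
    <-printed // explicit synchronization: wait for the first "Here"
    fmt.Println("Finished.")
}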
Also note that the output on the Go Playground is cached, so if you run the app again, it won't be run again, but instead the cached output will be presented immediately. If you change anything in the code (e.g. insert a space or a comment) and then you run it again, it then will be compiled and run again. You will notice it by the increased response time. Using the current version (Go 1.6) you will see the same output every time though.
Running locally (on your machine):
When you run it locally, most likely GOMAXPROCS will be greater than 1 as it defaults to the number of CPU cores available (since Go 1.5). So it doesn't matter if you have a goroutine executing an endless loop, another goroutine will be executed simultaneously, which will be the main(), and when main() returns, your program terminates; it does not wait for other non-main goroutines to complete (see Spec: Program execution).
Also note that even if you set GOMAXPROCS to 1, your app will most likely exit in a "short" time, as the scheduler implementation will switch to other goroutines and not just execute the endless loop forever (however, as stated above, this is non-deterministic). When it switches, it will run the main() goroutine, and so when main() finishes and returns, your app terminates.
Playing with your app on the Go Playground:
As mentioned, by default GOMAXPROCS is 1 on the Go Playground. However it is allowed to set it to a higher value, e.g.:
runtime.GOMAXPROCS(2)
Without explicit synchronization, execution still remains non-deterministic, however you will observe a different execution order and a termination without running into a timeout:
Hello, playground
Here
Here
Here
...
<Here is printed 996 times, then:>
Finished.
Try this variant on the Go Playground.
What you will see on screen is nondeterministic. More precisely, if by any chance the true value you pass on the channel is delayed, you will see some "Here" lines.
Also, stdout may be buffered, meaning output is not necessarily printed instantaneously; the data accumulates and is flushed later. In your case, before "Here" is printed, the main function has already finished and thus the process exits.
The rule of thumb is: the main function must stay alive, otherwise all other goroutines get killed.
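A minimal sketch of the usual way to keep main alive until the other goroutines finish (my own illustration, using sync.WaitGroup rather than the question's code):
package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup

    wg.Add(1)
    go func() {
        defer wg.Done()
        fmt.Println("Here")
    }()

    wg.Wait() // main blocks here until the goroutine calls Done
    fmt.Println("Finished.")
}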