In Go, a sync.Mutex or a chan is used to prevent concurrent access to shared objects. However, in some cases I am only interested in the "latest" value of a variable or field of an object.
Or I would like to write a value and do not care whether another goroutine overwrites it later or has just overwritten it before.
Update: TL;DR: Just don't do this. It is not safe. Read the answers, comments, and linked documents!
Update 2021: The Go memory model is going to be specified more thoroughly, and there are three great articles by Russ Cox that will teach you more about the surprising effects of unsynchronized memory access. These articles summarize a lot of the discussion and learnings below.
Here are two variants, good and bad, of an example program, where both seem to produce "correct" output using the current Go runtime:
package main

import (
    "flag"
    "fmt"
    "math/rand"
    "time"
)

var bogus = flag.Bool("bogus", false, "use bogus code")

func pause() {
    time.Sleep(time.Duration(rand.Uint32()%100) * time.Millisecond)
}

func bad() {
    stop := time.After(100 * time.Millisecond)
    var name string
    // start some producers doing concurrent writes (DANGER!)
    for i := 0; i < 10; i++ {
        go func(i int) {
            pause()
            name = fmt.Sprintf("name = %d", i)
        }(i)
    }
    // start consumer that shows the current value every 10ms
    go func() {
        tick := time.Tick(10 * time.Millisecond)
        for {
            select {
            case <-stop:
                return
            case <-tick:
                fmt.Println("read:", name)
            }
        }
    }()
    <-stop
}

func good() {
    stop := time.After(100 * time.Millisecond)
    names := make(chan string, 10)
    // start some producers concurrently writing to a channel (GOOD!)
    for i := 0; i < 10; i++ {
        go func(i int) {
            pause()
            names <- fmt.Sprintf("name = %d", i)
        }(i)
    }
    // start consumer that shows the current value every 10ms
    go func() {
        tick := time.Tick(10 * time.Millisecond)
        var name string
        for {
            select {
            case name = <-names:
            case <-stop:
                return
            case <-tick:
                fmt.Println("read:", name)
            }
        }
    }()
    <-stop
}

func main() {
    flag.Parse()
    if *bogus {
        bad()
    } else {
        good()
    }
}
The expected output is as follows:
...
read: name = 3
read: name = 3
read: name = 5
read: name = 4
...
Any combination of read: and read: name = [0-9] is correct output for this program. Receiving any other string as output would be an error.
When running this program with go run -race bogus.go, no race is reported.
However, go run -race bogus.go -bogus warns of the concurrent reads and writes.
For map types and when appending to slices I always need a mutex or a similar method of protection to avoid segfaults or unexpected behavior. However, reading and writing literals (atomic values) to variables or fields seems to be safe.
Question: Which Go data types can I safely read and safely write concurrently without a mutex, without producing segfaults, and without reading garbage from memory?
Please explain why something is safe or unsafe in Go in your answer.
Update: I rewrote the example to better reflect the original code, where I had the concurrent-writes issue. The important learnings are already in the comments. I will accept an answer that summarizes these learnings with enough detail (especially on the Go runtime).
However, in some cases I am just interested in the latest value of a variable or field of an object.
Here is the fundamental problem: What does the word "latest" mean?
Suppose that, mathematically speaking, we have a sequence of values Xi, with 0 <= i < N. Then obviously Xj is "later than" Xi if j > i. That's a nice simple definition of "latest" and is probably the one you want.
But when two separate CPUs within a single machine—including two goroutines in a Go program—are working at the same time, time itself loses meaning. We cannot say whether i < j, i == j, or i > j. So there is no correct definition for the word latest.
To solve this kind of problem, modern CPU hardware, and Go as a programming language, gives us certain synchronization primitives. If CPUs A and B execute memory fence instructions, or synchronization instructions, or use whatever other hardware provisions exist, the CPUs (and/or some external hardware) will insert whatever is required for the notion of "time" to regain its meaning. That is, if the CPU uses barrier instructions, we can say that a memory load or store that was executed before the barrier is a "before" and a memory load or store that is executed after the barrier is an "after".
(The actual implementation, in some modern hardware, consists of load and store buffers that can rearrange the order in which loads and stores go to memory. The barrier instruction either synchronizes the buffers, or places an actual barrier in them, so that loads and stores cannot move across the barrier. This particular concrete implementation gives an easy way to think about the problem, but isn't complete: you should think of time as simply not existing outside the hardware-provided synchronization, i.e., all loads from, and stores to, some location are happening simultaneously, rather than in some sequential order, except for these barriers.)
In any case, Go's sync package gives you a simple high level access method to these kinds of barriers. Compiled code that executes before a mutex Lock call really does complete before the lock function returns, and the code that executes after the call really does not start until after the lock function returns.
Go's channels provide the same kinds of before/after time guarantees.
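To make this before/after guarantee concrete, here is a minimal sketch (my code, not the question's; the setName/getName helpers are hypothetical) that guards the shared name variable from the bad() example with a sync.Mutex, so that every write happens-before every read:

package main

import (
    "fmt"
    "sync"
)

// Shared state guarded by mu: every access to name must hold the lock.
var (
    mu   sync.Mutex
    name string
)

func setName(s string) {
    mu.Lock()
    name = s // the write completes before Unlock returns
    mu.Unlock()
}

func getName() string {
    mu.Lock()
    defer mu.Unlock()
    return name // the read starts only after Lock returns
}

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            setName(fmt.Sprintf("name = %d", i))
        }(i)
    }
    wg.Wait()
    fmt.Println("read:", getName())
}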
Go's sync/atomic package provides much lower level guarantees. In general you should avoid this in favor of the higher level channel or sync.Mutex style guarantees. (Edit to add note: You could use sync/atomic's Pointer operations here, but not with the string type directly, as Go strings are actually implemented as a header containing two separate values: a pointer, and a length. You could solve this with another layer of indirection, by updating a pointer that points to the string object. But before you even consider doing that, you should benchmark the use of the language's preferred methods and verify that these are a problem, because code that works at the sync/atomic level is hard to write and hard to debug.)
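If you did go down the sync/atomic road, the extra level of indirection described above could look like the following sketch (my code, assuming Go 1.19+ for the generic atomic.Pointer; the variable names are mine). Each writer builds its string first, then publishes a pointer to it in a single atomic step:

package main

import (
    "fmt"
    "sync"
    "sync/atomic"
)

func main() {
    // The shared value is a *string; readers and writers only ever touch
    // the pointer word, which atomic.Pointer updates atomically.
    var name atomic.Pointer[string]
    empty := ""
    name.Store(&empty)

    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            s := fmt.Sprintf("name = %d", i) // build the whole string first...
            name.Store(&s)                   // ...then publish it in one atomic step
        }(i)
    }
    wg.Wait()
    fmt.Println("read:", *name.Load())
}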
Which Go data types can I safely read and safely write concurrently without a mutext and without producing segfaults and without reading garbage from memory?
None.
It really is that simple: you cannot, under any circumstances whatsoever, safely read and write anything concurrently in Go without synchronization.
(By the way: your "good" program is not correct either; it is racy, and even if you got rid of the race condition it would not deterministically produce the output.)
Why not use channels?
package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup // wait group used to close the channel
    var buffer int = 1    // buffer size of the channel

    // channel to share the data
    cName := make(chan string, buffer)

    for i := 0; i < 10; i++ {
        wg.Add(1) // add to the wait group
        go func(i int) {
            cName <- fmt.Sprintf("name = %d", i)
            wg.Done() // decrease the wait group
        }(i)
    }

    go func() {
        wg.Wait()    // wait for the wait group to reach 0
        close(cName) // close the channel
    }()

    // process all the data
    for n := range cName {
        println("read:", n)
    }
}
The above code produces the following output:
read: name = 0
read: name = 5
read: name = 1
read: name = 2
read: name = 3
read: name = 4
read: name = 7
read: name = 6
read: name = 8
read: name = 9
https://play.golang.org/p/R4n9ssPMOeS
Article about channels
Related
My use case:
append items (small structs) to a slice in the main process
every 100 items I want to process them in a processor goroutine (then pop them from the slice)
items come in very fast, continuously
I read that if there is at least one "write" among two or more goroutines using a variable (a slice in my case), one must handle the concurrency (with a mutex or similar).
My questions:
If I do not guard the reads/writes on the slice with a mutex, do I risk problems? (i.e. item 101 arrives while the processor is working on items 1-100)
What is the best concurrency technique for the incoming item flow to remain "fluent"?
Disclaimer:
I do not want any event queueing; I need to process items in bundles of a given size
Actually you don't need a lock here. Here is working code:
package main

import (
    "fmt"
    "sync"
)

type myStruct struct {
    Cpt int
}

func main() {
    buf := make([]myStruct, 0, 100)
    wg := sync.WaitGroup{}

    // Main process: appending ten million times
    for i := 0; i < 10e6; i++ {
        // Appending
        buf = append(buf, myStruct{Cpt: i})
        // Did we reach 100 items?
        if len(buf) >= 100 {
            // Yes we did. Creating a slice from the buffer
            processSlice := make([]myStruct, 100)
            copy(processSlice, buf[0:100])
            // Emptying the buffer
            buf = buf[:0]
            // Running the processor in parallel:
            // adding one element to the wait group
            wg.Add(1)
            go processor(&wg, processSlice)
        }
    }

    // Waiting for all processors to finish
    wg.Wait()
}

func processor(wg *sync.WaitGroup, processSlice []myStruct) {
    // Removing one element from the wait group when done
    defer wg.Done()
    // Doing some processing
    fmt.Printf("Processing items from %d to %d\n", processSlice[0].Cpt, processSlice[99].Cpt)
}
A few notes about your problem and this solution:
If you want minimal stop time in your feeding process (e.g., to respond as fast as possible to an HTTP call), then the minimal thing to do is just the copy part, and run the processor function in a goroutine. By doing so, you have to create a unique process slice dynamically and copy the content of your buffer into it.
The sync.WaitGroup object is needed to ensure that the last processor function has ended before exiting the program.
Note that this is not a perfect solution: if you run this pattern for a long time, and the input data comes in more than 100 times faster than the processor can process the slices, then there are going to be:
More and more processSlice instances in RAM -> risks of filling the RAM and hitting swap
More and more parallel processor goroutines -> the same risks for the RAM, plus more to process at the same time, making each call slower, so the problem feeds itself.
This will end up with the system crashing at some point.
The solution is to have a limited number of workers, which ensures there is no crash, as sketched below. However, when all of these workers are busy, the feeding process will have to wait, which is not what you want. Still, this is a good way to absorb a load whose intensity changes over time.
In general, just remember that if you feed in more data than you can process in the same time, your program will reach a limit at some point where it can't handle it, so it has to slow down input acquisition or crash. This is mathematical!
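For illustration, here is a minimal sketch of that limited-workers variant (my code, not the original answer's; bundleSize and numWorkers are assumed tuning knobs). A fixed pool of goroutines consumes bundles from a channel, so when all workers are busy the feeder blocks instead of memory growing without bound:

package main

import (
    "fmt"
    "sync"
)

type myStruct struct{ Cpt int }

const (
    bundleSize = 100
    numWorkers = 4 // fixed worker count bounds both memory and parallelism
)

func main() {
    bundles := make(chan []myStruct, numWorkers) // small buffer absorbs short spikes

    var wg sync.WaitGroup
    for w := 0; w < numWorkers; w++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for b := range bundles {
                fmt.Printf("processing items from %d to %d\n", b[0].Cpt, b[len(b)-1].Cpt)
            }
        }()
    }

    buf := make([]myStruct, 0, bundleSize)
    for i := 0; i < 1e6; i++ {
        buf = append(buf, myStruct{Cpt: i})
        if len(buf) == bundleSize {
            bundle := make([]myStruct, bundleSize)
            copy(bundle, buf)
            buf = buf[:0]
            bundles <- bundle // blocks when all workers are busy: feeding slows down instead of crashing
        }
    }
    close(bundles)
    wg.Wait()
}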
The question concerns the following code:
package main

import "fmt"

func main() {
    var counters = map[int]int{}

    for i := 0; i < 5; i++ {
        go func(counters map[int]int, th int) {
            for j := 0; j < 5; j++ {
                counters[th*10+j]++
            }
        }(counters, i)
    }
    fmt.Scanln()
    fmt.Println("counters result", counters)
}
Here is the output I get when I run this code with go run -race race.go
$ go run -race race.go
==================
WARNING: DATA RACE
Read at 0x00c000092150 by goroutine 8:
runtime.mapaccess1_fast64()
/usr/lib/go-1.13/src/runtime/map_fast64.go:12 +0x0
main.main.func1()
/tmp/race.go:10 +0x6b
Previous write at 0x00c000092150 by goroutine 7:
runtime.mapassign_fast64()
/usr/lib/go-1.13/src/runtime/map_fast64.go:92 +0x0
main.main.func1()
/tmp/race.go:10 +0xaf
Goroutine 8 (running) created at:
main.main()
/tmp/race.go:8 +0x67
Goroutine 7 (finished) created at:
main.main()
/tmp/race.go:8 +0x67
==================
==================
WARNING: DATA RACE
Read at 0x00c0000aa188 by main goroutine:
reflect.typedmemmove()
/usr/lib/go-1.13/src/runtime/mbarrier.go:177 +0x0
reflect.copyVal()
/usr/lib/go-1.13/src/reflect/value.go:1297 +0x7b
reflect.(*MapIter).Value()
/usr/lib/go-1.13/src/reflect/value.go:1251 +0x15e
internal/fmtsort.Sort()
/usr/lib/go-1.13/src/internal/fmtsort/sort.go:61 +0x259
fmt.(*pp).printValue()
/usr/lib/go-1.13/src/fmt/print.go:773 +0x146f
fmt.(*pp).printArg()
/usr/lib/go-1.13/src/fmt/print.go:716 +0x2ee
fmt.(*pp).doPrintln()
/usr/lib/go-1.13/src/fmt/print.go:1173 +0xad
fmt.Fprintln()
/usr/lib/go-1.13/src/fmt/print.go:264 +0x65
main.main()
/usr/lib/go-1.13/src/fmt/print.go:274 +0x13c
Previous write at 0x00c0000aa188 by goroutine 10:
main.main.func1()
/tmp/race.go:10 +0xc4
Goroutine 10 (finished) created at:
main.main()
/tmp/race.go:8 +0x67
==================
counters result map[0:1 1:1 2:1 3:1 4:1 10:1 11:1 12:1 13:1 14:1 20:1 21:1 22:1 23:1 24:1 30:1 31:1 32:1 33:1 34:1 40:1 41:1 42:1 43:1 44:1]
Found 2 data race(s)
exit status 66
Here is what I can't understand: why is there a race condition at all? Aren't we reading/writing values only one goroutine can access? For example, routine 0 will modify values only in counters[0] through counters[4], routine 1 will modify values only in counters[10] through counters[14], routine 2 will only modify values in counters[20] through counters[24], and so on. I'm not seeing a race condition here. Feels like I'm missing something. Will someone be able to shed some light on this?
Just an FYI I'm new to go. If you could dumb down the explanation (if it is possible) I would appreciate it.
That would be true for an array (or a slice), but a map is a complicated data structure which, among other things, has the following properties:
It's free to relocate the elements stored in it in memory at any time it sees fit.
A map is initially empty, and placing an element in it (what appears as an assignment in your case) involves a lot of operations on the map's internals.
Additionally, in a case like yours, incrementing an integer stored in a map is really a map lookup, an increment, and a map store.
The first and the last operations involve lookup by key.
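Schematically (this desugaring is illustrative, not literally what the compiler emits), the increment expands into two separate map operations, each of which touches the map's shared internals:

tmp := counters[th*10+j]    // map lookup: hashes the key and walks the buckets
counters[th*10+j] = tmp + 1 // map store: may grow the table and relocate entries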
Now consider what happens if one goroutine performs lookup at the same time another goroutine modifies the map's internal state when performing map store.
You might want to read up a bit on what is an associative array, and how it's typically implemented.
Aren't we reading/writing values only one go routine can access?
You already got a great answer from #kostix on that matter: the internals of the map are modified when you add elements to it, so it's not accurate to think that routine 0 will modify values only in counter[0] through counters[4].
But that's not all.
There's yet another data race issue in your code that's a bit more subtle and might be very difficult to catch even in tests.
To explore it, let's get rid of the "map internals" issue that #kostix mentioned, by imagining that your code is almost exactly the same, but with one tiny change: instead of using a map[int]int, imagine that you're using a []int, initialized to have at least length 56. Something like this:
// THERE'S ANOTHER RACE CONDITION HERE.

// var counters = map[int]int{}
var counters = make([]int, 56)

for i := 0; i < 5; i++ {
    // go func(counters map[int]int, th int) {
    go func(counters []int, th int) {
        for j := 0; j < 5; j++ {
            counters[th*10+j]++
        }
    }(counters, i)
}
fmt.Scanln()
fmt.Println("counters result", counters)
This is nearly equivalent, but gets rid of the "map internals" issue. The goal is to shift the focus away from "map internals" to show you the second issue.
There's still a race condition there. By the way, it's also similar to a race condition that exists in the first attempted solution in another answer you got, that uses a sync.Mutex but in a way that is still wrong.
The problem here is that there's no happens before relationship between the operations that change the counters and the operation that reads from it.
The fmt.Scanln() doesn't help: it allows you to introduce an arbitrary time delay between the code right before it (i.e., the for loop that launches the goroutines) and the code right after it (i.e., the fmt.Println()), so you might think "OK, I'm just gonna wait 'a reasonably long amount of time' before pressing Enter", but that doesn't eliminate the race condition.
The race condition here arises from the fact that "passage of time" (i.e., you waiting to hit Enter) does not establish a happens-before relationship between the writes to counters and the reads from it.
This notion of happens-before is absolutely fundamental for avoiding data races: you can only guarantee the absence of a data race if you can guarantee the existence of a happens-before relationship between 2 operations.
Like I mentioned, "passage of time" doesn't establish a "happens before". To establish it, you could use one of many alternatives, including primitives in the sync or atomic packages, or channels, etc.
While I'd probably suggest focusing on studying channels, and then the sync package (sync.Mutex, sync.WaitGroup, etc), and maybe only after all that the atomic package, if you do want to read more about this idea of happens before from the authoritative source, here's the link: https://golang.org/ref/mem . But be warned that it's a nasty can of worms.
Hopefully these comments here help you see why it's absolutely fundamental to follow the standard patterns for concurrency in Go. Things can be way more subtle than at first sight.
And to conclude, a quote from The Go Memory Model link I shared above:
If you must read the rest of this document to understand the behavior of your program, you are being too clever.
Don't be clever.
EDIT: for completion, here's how you could solve the problem.
There are 2 parts to the solution: (1) make sure that there's no concurrent modifications to the map; (2) make sure that there's a happens-before between all the changes to the map and the read.
For (1), you can use a sync.Mutex. Lock it before writing, unlock it after the write.
For (2), you need to ensure that the main goroutine can only get to the fmt.Println() after all the modifications are done. And remember: here, after doesn't mean "at a later point in time", but it specifically means that a happens-before relationship must be established. The 2 common patterns to solve this are to use a channel or a sync.WaitGroup. The WaitGroup solution is probably easier to reason about here, so that's what I'd use.
var mu sync.Mutex     // (A)
var wg sync.WaitGroup // (A)
var counters = map[int]int{}

wg.Add(5) // (B)
for i := 0; i < 5; i++ {
    go func(counters map[int]int, th int) {
        for j := 0; j < 5; j++ {
            mu.Lock() // (C)
            counters[th*10+j]++
            mu.Unlock() // (C)
        }
        wg.Done() // (D)
    }(counters, i)
}
wg.Wait() // (E)
fmt.Scanln()
fmt.Println("counters result", counters)
(A) You don't need to initialize either the Mutex or the WaitGroup, since their zero values are ready to use. Also, you don't need to make them pointers to anything.
(B) You .Add(5) to the WaitGroup's counter, meaning that it will have to wait for 5 .Done() signals before proceeding if you .Wait() on it. The number 5 here is because you're launching 5 goroutines, and you need to establish happens-before relationships between the changes made on all of them and the main goroutine's fmt.Println().
(C) You .Lock() and .Unlock() the Mutex around modifications to the map, to ensure that they are not done concurrently.
(D) Just before each goroutine terminates, you call wg.Done(), which decrements the WaitGroup's internal counter.
(E) Finally, you wg.Wait(). This function blocks until the wg's counter reaches 0. And here's the super important piece: the WaitGroup establishes a happens-before relationship between the calls to wg.Done() and the return of the wg.Wait() call. In other words, from a memory consistency perspective, the main goroutine is guaranteed to see all the changes performed to the map by all the goroutines!
AND FINALLY you can run that code with -race and be happy!
For you to explore further: instead of map + sync.Mutex, you could replace that with just sync.Map. But the sync.WaitGroup would still be necessary. Try to write a solution using that, it might be a nice exercise.
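A possible solution for that exercise could look like the following sketch (my code; note the Load-then-Store pair is only safe here because each key is written by exactly one goroutine):

package main

import (
    "fmt"
    "sync"
)

func main() {
    var (
        counters sync.Map       // safe for concurrent use without an extra Mutex
        wg       sync.WaitGroup // still needed for the happens-before with Println
    )
    wg.Add(5)
    for i := 0; i < 5; i++ {
        go func(th int) {
            defer wg.Done()
            for j := 0; j < 5; j++ {
                k := th*10 + j
                // Safe only because no other goroutine writes this key.
                if v, ok := counters.Load(k); ok {
                    counters.Store(k, v.(int)+1)
                } else {
                    counters.Store(k, 1)
                }
            }
        }(i)
    }
    wg.Wait() // happens-before: all Stores are visible below
    counters.Range(func(k, v interface{}) bool {
        fmt.Println("counter", k, "=", v)
        return true
    })
}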
In addition to #kostix's answer: you have to know that multiple goroutines should not access (write/read) the same resource at a given time.
So, in your implementation you may easily end up with multiple goroutines updating (reading/writing) the same resource (your map) concurrently, at the same time.
What should happen? Which value should end up in a given map key? This is what is called a race condition.
Here are some potential fixes to your code:
Using Mutex:
package main

import (
    "fmt"
    "sync"
)

func main() {
    var counters = map[int]int{}
    var mutex = &sync.Mutex{}

    for i := 0; i < 3; i++ {
        go func(counters map[int]int, th int) {
            for j := 0; j < 3; j++ {
                mutex.Lock() // Lock access to the map
                counters[th*10+j]++
                mutex.Unlock() // Release access
            }
        }(counters, i)
    }
    fmt.Scanln()
    fmt.Println("counters result", counters)
}
Output:
counters result map[0:1 1:1 2:1 10:1 11:1 12:1 20:1 21:1 22:1]
Using sync.Map:
package main

import (
    "fmt"
    "sync"
)

func main() {
    var counters sync.Map

    for i := 0; i < 3; i++ {
        go func(th int) {
            for j := 0; j < 3; j++ {
                if result, ok := counters.Load(th*10 + j); ok {
                    counters.Store(th*10+j, result.(int)+1)
                } else {
                    counters.Store(th*10+j, 1)
                }
            }
        }(i)
    }
    fmt.Scanln()
    counters.Range(func(k, v interface{}) bool {
        fmt.Println("key:", k, ", value:", v)
        return true
    })
}
Output:
key: 21 , value: 1
key: 10 , value: 1
key: 11 , value: 1
key: 0 , value: 1
key: 1 , value: 1
key: 20 , value: 1
key: 2 , value: 1
key: 22 , value: 1
key: 12 , value: 1
In Go, if two goroutines read and write a variable without a mutex or atomic operations, that may produce a data race.
Running the command go run -race xxx.go will detect the race point.
However, the implementation of Mutex in src/sync/mutex.go uses the following code:
func (m *Mutex) Lock() {
    // Fast path: grab unlocked mutex.
    if atomic.CompareAndSwapInt32(&m.state, 0, mutexLocked) {
        if race.Enabled {
            race.Acquire(unsafe.Pointer(m))
        }
        return
    }

    var waitStartTime int64
    starving := false
    awoke := false
    iter := 0
    old := m.state // This line confuses me!!!
    // ...
The line old := m.state confuses me, because m.state is read and written by different goroutines.
The following function Test obviously has a race condition. But if I put it in mutex.go, no race condition is detected.
// mutex.go
func Test() {
    a := int32(1)
    go func() {
        atomic.CompareAndSwapInt32(&a, 1, 4)
    }()
    _ = a
}
If I put it in another package, like src/os/exec.go, the race condition is detected.
package main

import (
    "os"
    "sync"
)

func main() {
    sync.Test() // race condition will not be detected
    os.Test()   // race condition will be detected
}
First of all, the Go source always changes, so let's make sure we are looking at the same thing. Take release 1.12 at
https://github.com/golang/go/blob/release-branch.go1.12/src/sync/mutex.go
as you said the Lock function begins
func (m *Mutex) Lock() {
    // fast path where it will set the high-order bit and return if not locked
    if atomic.CompareAndSwapInt32(&m.state, 0, mutexLocked) {
        return
    }
    // reads the state to decide on the lower-order bits
    for {
        // if statements involving CompareAndSwaps on the lower-order bits
    }
}
What is this CompareAndSwap doing? It looks atomically at that int32 and, if it is 0, swaps it to mutexLocked (which is 1, defined as a const above) and returns true to report that it swapped it.
Then it promptly returns. That is its fast path: the goroutine acquired the lock and can now start running its protected path.
If it is already 1 (mutexLocked), it doesn't swap it and returns false (it didn't swap it).
Then it reads the state and enters a loop that it does atomic compare and swaps to determine how it should behave.
What are the possible states? combinations of locked, woken and starving as you see from the const block.
Now depending on how long the goroutine has been waiting on the waitlist it will get priority on when to check again if the mutex is now free.
But also observe that only Unlock() can set the mutexLocked bit back to 0.
In the Lock() CAS loop, the only bits that are set are the starving and woken ones. Yes, you can have multiple readers but only one writer at any time, and that writer is the one holding the mutex and executing its protected path until it calls Unlock(). Check out this article for more details.
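To see the CAS semantics in isolation, here is a tiny self-contained sketch (my code, mimicking only the fast path's state word, not the real sync.Mutex internals):

package main

import (
    "fmt"
    "sync/atomic"
)

const locked = 1

func main() {
    var state int32 // 0 = unlocked, 1 = locked, like mutexLocked

    // First CAS: state is 0, so it swaps to 1 and reports true (fast path taken).
    fmt.Println(atomic.CompareAndSwapInt32(&state, 0, locked)) // true

    // Second CAS: state is already 1, so nothing is swapped and it reports false
    // (the real Lock would now enter its slow-path loop instead).
    fmt.Println(atomic.CompareAndSwapInt32(&state, 0, locked)) // false
}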
Disassembling the binary output shows that the Test function generates different code depending on the package it is in.
The reason is that the compiler does not generate race-detection instrumentation for the sync package.
The code is:
var norace_inst_pkgs = []string{"sync", "sync/atomic"} // https://github.com/golang/go/blob/release-branch.go1.12/src/cmd/compile/internal/gc/racewalk.go
The "Go maps in action" entry in the Go blog states:
Maps are not safe for concurrent use: it's not defined what happens when you read and write to them simultaneously. If you need to read from and write to a map from concurrently executing goroutines, the accesses must be mediated by some kind of synchronization mechanism. One common way to protect maps is with sync.RWMutex.
However, one common way to access maps is to iterate over them with the range keyword. It is not clear whether, for the purposes of concurrent access, execution inside a range loop is a "read", or just the "turnover" phase of that loop. For example, the following code may or may not run afoul of the "no concurrent r/w on maps" rule, depending on the specific semantics / implementation of the range operation:
var testMap map[int]int
testMapLock := make(chan bool, 1)
testMapLock <- true
testMapSequence := 0
...
func WriteTestMap(k, v int) {
    <-testMapLock
    testMap[k] = v
    testMapSequence++
    testMapLock <- true
}

func IterateMapKeys(iteratorChannel chan int) error {
    <-testMapLock
    defer func() {
        testMapLock <- true
    }()
    mySeq := testMapSequence
    for k, _ := range testMap {
        testMapLock <- true
        iteratorChannel <- k
        <-testMapLock
        if mySeq != testMapSequence {
            close(iteratorChannel)
            return errors.New("concurrent modification")
        }
    }
    return nil
}
The idea here is that the range "iterator" is open while the second function is waiting for a consumer to take the next value, and the writer is not blocked at that time. However, it is never the case that two reads in a single iterator are on either side of a write; this is a "fail fast" iterator, to borrow a Java term.
Is there anything anywhere in the language specification or other documents that indicates whether this is a legitimate thing to do? I could see it going either way, and the document quoted above is not clear on exactly what constitutes a "read". The documentation seems totally quiet on the concurrency aspects of the for/range statement.
(Please note this question is about the concurrency of for/range, but it is not a duplicate of: Golang concurrent map access with range - the use case is completely different and I am asking about the precise locking requirement wrt the 'range' keyword here!)
You are using a for statement with a range expression. Quoting from Spec: For statements:
The range expression is evaluated once before beginning the loop, with one exception: if the range expression is an array or a pointer to an array and at most one iteration variable is present, only the range expression's length is evaluated; if that length is constant, by definition the range expression itself will not be evaluated.
We're ranging over a map, so it's not an exception: the range expression is evaluated only once before beginning the loop. The range expression is simply a map variable testMap:
for k, _ := range testMap {}
The map value does not include the key-value pairs; it only points to a data structure that does. Why is this important? Because the map value is only evaluated once, and if later pairs are added to the map, the map value (evaluated once, before the loop) will be a map that still points to a data structure that includes those new pairs. This is in contrast to ranging over a slice (which would also be evaluated only once), which is likewise only a header pointing to a backing array holding the elements; if elements are added to the slice during the iteration, even if that does not result in allocating and copying over to a new backing array, they will not be included in the iteration (because the slice header also contains the length, already evaluated). Appending elements to a slice may result in a new slice value, but adding pairs to a map will not result in a new map value.
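Here is a small demonstration of this contrast (my sketch; the spec quoted later in this answer is what permits the "may or may not" behavior in the map case):

package main

import "fmt"

func main() {
    // Slice: the length is evaluated once, so the appended elements are
    // never visited; this loop always prints exactly 1 and 2.
    s := []int{1, 2}
    for _, v := range s {
        s = append(s, v*10)
        fmt.Println("slice:", v)
    }

    // Map: the range expression yields the map itself, so an entry added
    // during iteration may or may not be visited (the spec leaves it open).
    m := map[int]int{1: 1}
    for k := range m {
        if k == 1 {
            m[2] = 2 // may or may not be produced by this same loop
        }
        fmt.Println("map key:", k)
    }
}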
Now on to iteration:
for k, v := range testMap {
    t1 := time.Now()
    someFunc()
    t2 := time.Now()
}
Before we enter the block, before the t1 := time.Now() line, the k and v variables already hold the values of the current iteration; they have already been read out of the map (otherwise they couldn't hold the values). Question: do you think the map is read by the for ... range statement between t1 and t2? Under what circumstances could that happen? We have here a single goroutine that is executing someFunc(). For the for statement to access the map at the same time, that would either require another goroutine, or it would require someFunc() to be suspended. Obviously neither of those happens. (The for ... range construct is not a multi-goroutine monster.) No matter how many iterations there are, while someFunc() is executing, the map is not accessed by the for statement.
So to answer one of your questions: the map is not accessed inside the for block when executing an iteration, but it is accessed when the k and v values are set (assigned) for the next iteration. This implies that the following iteration over the map is safe for concurrent access:
var (
    testMap     = make(map[int]int)
    testMapLock = &sync.RWMutex{}
)

func IterateMapKeys(iteratorChannel chan int) error {
    testMapLock.RLock()
    defer testMapLock.RUnlock()
    for k, v := range testMap {
        testMapLock.RUnlock()
        someFunc()
        testMapLock.RLock()
        if someCond {
            return someErr
        }
    }
    return nil
}
Note that unlocking in IterateMapKeys() should (must) happen as a deferred statement, because in your original code you may return "early" with an error, in which case you didn't unlock, which means the map remained locked! (Here this is modeled by if someCond {...}.)
Also note that this type of locking only ensures locking in case of concurrent access. It does not prevent a concurrent goroutine to modify (e.g. add a new pair) the map. The modification (if properly guarded with write lock) will be safe, and the loop may continue, but there is no guarantee that the for loop will iterate over the new pair:
If map entries that have not yet been reached are removed during iteration, the corresponding iteration values will not be produced. If map entries are created during iteration, that entry may be produced during the iteration or may be skipped. The choice may vary for each entry created and from one iteration to the next.
The write-lock-guarded modification may look like this:
func WriteTestMap(k, v int) {
    testMapLock.Lock()
    defer testMapLock.Unlock()
    testMap[k] = v
}
Now if you release the read lock in the block of the for, a concurrent goroutine is free to grab the write lock and make modifications to the map. In your code:
testMapLock <- true
iteratorChannel <- k
<-testMapLock
When sending k on the iteratorChannel, a concurrent goroutine may modify the map. This is not just an "unlucky" scenario: sending a value on a channel is often a "blocking" operation, since if the channel's buffer is full, another goroutine must be ready to receive for the send operation to proceed. Sending a value on a channel is a good scheduling point for the runtime to run other goroutines even on the same OS thread, not to mention if there are multiple OS threads, one of which may already be "waiting" for the write lock in order to carry out a map modification.
To sum up the last part: releasing the read lock inside the for block is like yelling to others: "Come, modify the map now if you dare!" Consequently, encountering mySeq != testMapSequence in your code is very likely. See this runnable example which demonstrates it (it's a variation of your example):
package main

import (
    "fmt"
    "math/rand"
    "sync"
)

var (
    testMap         = make(map[int]int)
    testMapLock     = &sync.RWMutex{}
    testMapSequence int
)

func main() {
    go func() {
        for {
            k := rand.Intn(10000)
            WriteTestMap(k, 1)
        }
    }()

    ic := make(chan int)
    go func() {
        for _ = range ic {
        }
    }()

    for {
        if err := IterateMapKeys(ic); err != nil {
            fmt.Println(err)
        }
    }
}

func WriteTestMap(k, v int) {
    testMapLock.Lock()
    defer testMapLock.Unlock()
    testMap[k] = v
    testMapSequence++
}

func IterateMapKeys(iteratorChannel chan int) error {
    testMapLock.RLock()
    defer testMapLock.RUnlock()
    mySeq := testMapSequence
    for k, _ := range testMap {
        testMapLock.RUnlock()
        iteratorChannel <- k
        testMapLock.RLock()
        if mySeq != testMapSequence {
            // close(iteratorChannel)
            return fmt.Errorf("concurrent modification %d", testMapSequence)
        }
    }
    return nil
}
Example output:
concurrent modification 24
concurrent modification 41
concurrent modification 463
concurrent modification 477
concurrent modification 482
concurrent modification 496
concurrent modification 508
concurrent modification 521
concurrent modification 525
concurrent modification 535
concurrent modification 541
concurrent modification 555
concurrent modification 561
concurrent modification 565
concurrent modification 570
concurrent modification 577
concurrent modification 591
concurrent modification 593
We're encountering concurrent modification quite often!
Do you want to avoid this kind of concurrent modification? The solution is quite simple: don't release the read lock inside the for. Also run your app with the -race option to detect race conditions: go run -race testmap.go
Final thoughts
The language spec clearly allows you to modify the map in the same goroutine while ranging over it, this is what the previous quote relates to ("If map entries that have not yet been reached are removed during iteration.... If map entries are created during iteration..."). Modifying the map in the same goroutine is allowed and is safe, but how it is handled by the iterator logic is not defined.
If the map is modified in another goroutine and you use proper synchronization, The Go Memory Model guarantees that the goroutine with the for ... range will observe all modifications, and the iterator logic will see it just as if "its own" goroutine had modified it, which is allowed, as stated before.
The unit of concurrent access for a for range loop over a map is the map. Go maps in action.
A map is a dynamic data structure that changes for inserts, updates and deletes. Inside the Map Implementation. For example,
The iteration order over maps is not specified and is not guaranteed to be the same from one iteration to the next. If map entries that have not yet been reached are removed during iteration, the corresponding iteration values will not be produced. If map entries are created during iteration, that entry may be produced during the iteration or may be skipped. The choice may vary for each entry created and from one iteration to the next. If the map is nil, the number of iterations is 0. (For statements, The Go Programming Language Specification)
Reading a map with a for range loop with interleaved inserts, updates and deletes is unlikely to be useful.
Lock the map:
package main

import (
    "sync"
)

var racer map[int]int
var race sync.RWMutex

func Reader() {
    race.RLock() // Lock map
    for k, v := range racer {
        _, _ = k, v
    }
    race.RUnlock()
}

func Write() {
    for i := 0; i < 1e6; i++ {
        race.Lock()
        racer[i/2] = i
        race.Unlock()
    }
}

func main() {
    racer = make(map[int]int)
    Write()
    go Write()
    Reader()
}
Don't lock after the read; this fails with fatal error: concurrent map iteration and map write:
package main

import (
    "sync"
)

var racer map[int]int
var race sync.RWMutex

func Reader() {
    for k, v := range racer {
        race.RLock() // Lock after read
        _, _ = k, v
        race.RUnlock()
    }
}

func Write() {
    for i := 0; i < 1e6; i++ {
        race.Lock()
        racer[i/2] = i
        race.Unlock()
    }
}

func main() {
    racer = make(map[int]int)
    Write()
    go Write()
    Reader()
}
Use the Go Data Race Detector. Read Introducing the Go Race Detector.
I have something like this mock (code below) which distributes the same keyword out to multiple goroutines. The goroutines all take different amounts of time doing things with the keyword but can operate independently of each other, so they don't need any synchronization. The distribution solution given below, however, clearly synchronizes the goroutines.
I just want to toss this idea out there to see how other people would deal with this type of distribution, as I assume it is fairly common and someone else has thought about it before.
Here are some other solutions I have thought up and why they seem kinda meh to me:
One goroutine for each keyword
Each time a new keyword comes in spawn a goroutine to handle the distribution
Give the keyword a bitmask or something for each goroutine to update
This way once all of the workers have touched the keyword it can be deleted and we can move on
Give each worker its own stack to work off of
This seems kinda appealing: just give each worker a stack to add each keyword to, but we would eventually run into the problem of a ton of memory being taken up, since the program is planned to run for so long
The problem with all of these is that my code is supposed to run for a long time, unwatched, and that would lead to either a huge build-up of keywords or goroutines due to the lazy worker taking longer than the others. It almost seems like it'd be nice to give each worker its own Amazon SQS queue or implement something similar to that myself.
EDIT:
Store the keyword outside the program
I just thought of doing it this way instead: I could perhaps store the keyword outside the program until all the workers grab it, and then delete it once it has been used up. This actually sits OK with me; I don't have a problem with using up disk space
Anyway here is an example of the approach that waits for all to finish:
package main

import (
    "flag"
    "fmt"
    "math/rand"
    "os"
    "os/signal"
    "strconv"
    "time"
)

var (
    shutdown chan struct{}
    count    = flag.Int("count", 5, "number to run")
)

type sleepingWorker struct {
    name  string
    sleep time.Duration
    ch    chan int
}

func NewQuicky(n string) sleepingWorker {
    var rq sleepingWorker
    rq.name = n
    rq.ch = make(chan int)
    rq.sleep = time.Duration(rand.Intn(5)) * time.Second
    return rq
}

func (r sleepingWorker) Work() {
    for {
        fmt.Println(r.name, "is about to sleep, number:", <-r.ch)
        time.Sleep(r.sleep)
    }
}

func NewLazy() sleepingWorker {
    var rq sleepingWorker
    rq.name = "Lazy slow worker"
    rq.ch = make(chan int)
    rq.sleep = 20 * time.Second
    return rq
}

func distribute(gen chan int, workers ...sleepingWorker) {
    for kw := range gen {
        for _, w := range workers {
            fmt.Println("sending keyword to:", w.name)
            select {
            case <-shutdown:
                return
            case w.ch <- kw:
                fmt.Println("keyword sent to:", w.name)
            }
        }
    }
}

func main() {
    flag.Parse()
    shutdown = make(chan struct{})

    go func() {
        c := make(chan os.Signal, 1)
        signal.Notify(c, os.Interrupt)
        <-c
        close(shutdown)
    }()

    x := make([]sleepingWorker, *count)
    for i := 0; i < (*count)-1; i++ {
        x[i] = NewQuicky(strconv.Itoa(i))
        go x[i].Work()
    }
    x[(*count)-1] = NewLazy()
    go x[(*count)-1].Work()

    gen := make(chan int)
    go distribute(gen, x...)

    go func() {
        i := 0
        for {
            i++
            select {
            case <-shutdown:
                return
            case gen <- i:
            }
        }
    }()

    <-shutdown
    os.Exit(0)
}
Let's assume I understand the problem correctly:
There's not too much you can do about it, I'm afraid. You have limited resources (assuming all resources are limited), so if the data on your input is written faster than you can process it, some synchronisation will be needed. In the end the whole process will run as quickly as the slowest worker anyway.
If you really need data from the workers available as soon as possible, the best you can do is to add some kind of buffering. But the buffer must be limited in size (even if you run in the cloud it would be limited by your wallet) so assuming never ending torrent of input it will only postpone the choke until some time in the future where you will start seeing "synchronisation" again.
All the ideas you presented in your question are based on buffering the data. Even if you run a routine for every keyword-worker pair, this will buffer one element in every routine and, unless you implement a limit on the total number of routines, you'll run out of memory. And even if you always left some room for the quickest worker to spawn a new routine, the input queue wouldn't be able to deliver new items, as it would be choked on the slowest worker.
Buffering would solve your problem if on average your input is slower than the processing time, but you have occasional spikes. If your buffer is big enough you can then accommodate the increase in throughput and maybe your quickest worker won't notice a thing.
Solution?
As Go comes with buffered channels, this is the easiest to implement (also suggested by icza in the comment). Just give each worker a buffer. If you know which worker is the slowest, you can give it a bigger buffer. In this scenario you're limited by the memory of your machine.
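For example, this sketch (my code, with made-up sleep durations and buffer sizes standing in for your real workers) gives each worker its own buffered channel, with a bigger buffer for the known-slow one:

package main

import (
    "fmt"
    "sync"
    "time"
)

type worker struct {
    name  string
    sleep time.Duration
    ch    chan int
}

func main() {
    // One buffered channel per worker; the known-slow worker gets more room.
    workers := []worker{
        {"quick", 10 * time.Millisecond, make(chan int, 10)},
        {"lazy", 100 * time.Millisecond, make(chan int, 100)},
    }

    var wg sync.WaitGroup
    for _, w := range workers {
        wg.Add(1)
        go func(w worker) {
            defer wg.Done()
            for kw := range w.ch {
                fmt.Println(w.name, "handling keyword:", kw)
                time.Sleep(w.sleep) // simulate this worker's own pace
            }
        }(w)
    }

    for kw := 0; kw < 50; kw++ {
        for _, w := range workers {
            w.ch <- kw // only blocks once this worker's buffer is full
        }
    }
    for _, w := range workers {
        close(w.ch)
    }
    wg.Wait()
}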
If you're not happy with the single-machine memory limit then yes, per one of your ideas, you can "simply" store the buffer (queue) for each worker on the hard drive. But this is also limited and just postpones the blocking scenario until later. This is essentially the same as your Amazon SQS proposal (you could keep the buffer in the cloud, but you need to either limit it reasonably or prepare for the bill).
A final note, depending on the system you're building: it might not be a good idea to buffer items on such a massive scale, allowing a backlog to build up for the slower workers; it's often not desirable to have a worker hours, days, or weeks behind the input flow, and this is what would happen with an infinite buffer. The real answer then is: improve your slowest worker to process things faster. (And add some buffering to improve the experience.)