sync.Map seems not safe for concurrent read/write - go

I tested sync.Map from the Go standard library, and it does not seem safe for concurrent reads and writes.
What's wrong?
Test code:
package main

import (
    "log"
    "sync"
)

func main() {
    var m sync.Map
    m.Store("count", 0)
    var wg sync.WaitGroup
    for numOfThread := 0; numOfThread < 10; numOfThread++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for i := 0; i < 1000; i++ {
                value, ok := m.Load("count")
                if !ok {
                    log.Println("load count error")
                } else {
                    v, _ := value.(int)
                    m.Store("count", v+1)
                }
            }
        }()
    }
    log.Println("threads starts")
    wg.Wait()
    value, ok := m.Load("count")
    if ok {
        v, _ := value.(int)
        log.Printf("final count: %d", v)
    }
    log.Println("all done")
}
https://play.golang.org/p/E-pw4iZUceB
The result should be 10000, but I get a random number instead:
2009/11/10 23:00:00 threads starts
2009/11/10 23:00:00 final count: 6696
2009/11/10 23:00:00 all done

You have a race condition:
value, ok := m.Load("count")
...
v, _ := value.(int)
m.Store("count", v+1)
The read-modify-store sequence above does not prevent other goroutines from doing the same thing concurrently, so some of the increments performed by other goroutines are lost.
sync.Map protects concurrent access to its members: a write to the map will not cause other goroutines to read an inconsistent map. But nothing protects other goroutines from updating the value between your Load and your Store. You need a mutex to protect the whole read-modify-write sequence.

package main

import (
    "log"
    "sync"
)

func main() {
    var m sync.Map
    m.Store("count", 0)
    var wg sync.WaitGroup
    var mu sync.Mutex // a Mutex value, not a nil pointer
    for numOfThread := 0; numOfThread < 10; numOfThread++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for i := 0; i < 1000; i++ {
                mu.Lock()
                value, ok := m.Load("count")
                if !ok {
                    log.Println("load count error")
                } else {
                    v, _ := value.(int)
                    m.Store("count", v+1)
                }
                mu.Unlock()
            }
        }()
    }
    log.Println("threads starts")
    wg.Wait()
    value, ok := m.Load("count")
    if ok {
        v, _ := value.(int)
        log.Printf("final count: %d", v)
    }
    log.Println("all done")
}
Both read and write operations are thread safe individually, but you are attempting an upsert (read + write) operation, which is not. I modified the code above to make it thread safe.

Related

I applied a range to a goroutine channel, but I am getting an error. What's the problem?

I am studying goroutines and channels. I wrote practice code to reproduce a goroutine concurrency problem and solve it. Deposit() is called 10 times, each call sending a bool on the done channel; the code then resolves the concurrency while receiving from done.
I get an error when I run the following code:
package main

import (
    "bank"
    "fmt"
    "log"
    "time"
)

func main() {
    start := time.Now()
    done := make(chan bool)
    // Alice
    for i := 0; i < 10; i++ {
        go func() {
            bank.Deposit(1)
            done <- true
        }()
    }
    // Wait for both transactions.
    for flag := range done {
        if !flag {
            panic("error")
        }
    }
    fmt.Printf("Balance = %d\n", bank.Balance())
    defer log.Printf("[time] Elapsed Time: %s", time.Since(start))
}
package bank

var deposits = make(chan int) // send amount to deposit
var balances = make(chan int) // receive balance

func Deposit(amount int) { deposits <- amount }
func Balance() int       { return <-balances }

func teller() {
    var balance int // balance is confined to teller goroutine
    for {
        select {
        case amount := <-deposits:
            balance += amount
        case balances <- balance:
        }
    }
}

func init() {
    go teller() // start the monitor goroutine
}
But I get an error.
fatal error: all goroutines are asleep - deadlock!
goroutine 1 [chan receive]:
main.main()
/Users/kyounghwan.choi/go/main.go:48 +0xd6
goroutine 49 [select]:
bank.teller()
/usr/local/go/src/bank/bank.go:14 +0x85
created by bank.init.0
/usr/local/go/src/bank/bank.go:23 +0x25
exit status 2
Am I missing something? What's the problem?
The deadlock occurs because the runtime detected that the remaining goroutines were stuck and could never proceed.
That happens because the code never exits the loop iterating over the done channel.
To exit that iteration, the implementation must close the channel or break out of the loop.
This is commonly solved using a WaitGroup.
A WaitGroup waits for a collection of goroutines to finish. The main goroutine calls Add to set the number of goroutines to wait for. Then each of the goroutines runs and calls Done when finished. At the same time, Wait can be used to block until all goroutines have finished.
A WaitGroup must not be copied after first use.
func main() {
    start := time.Now()
    done := make(chan bool)
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            bank.Deposit(1)
            done <- true
        }()
    }
    go func() {
        wg.Wait()
        close(done)
    }()
    // Wait for the channel to close.
    for flag := range done {
        if !flag {
            panic("error")
        }
    }
    fmt.Printf("Balance = %d\n", bank.Balance())
    defer log.Printf("[time] Elapsed Time: %s", time.Since(start))
}
https://go.dev/play/p/pyuguc6LaEX
Though closing the channel, in this convoluted example, is really just a burden with no additional value.
The main function could be written:
func main() {
    start := time.Now()
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            bank.Deposit(1)
        }()
    }
    wg.Wait()
    fmt.Printf("Balance = %d\n", bank.Balance())
    defer log.Printf("[time] Elapsed Time: %s", time.Since(start))
}
https://go.dev/play/p/U4Zh62Rt_Be
Though it appears to me that removing the "concurrency" works just as well: https://go.dev/play/p/qXs2oqi_1Zw
Using channels, it is also possible to receive exactly as many times as there are sends.
func main() {
    start := time.Now()
    done := make(chan bool)
    // Alice
    for i := 0; i < 10; i++ {
        go func() {
            bank.Deposit(1)
            done <- true
        }()
    }
    // Receive exactly as many times as there were sends.
    for i := 0; i < 10; i++ {
        if !<-done {
            panic("error")
        }
    }
    fmt.Printf("Balance = %d\n", bank.Balance())
    defer log.Printf("[time] Elapsed Time: %s", time.Since(start))
}
Alternatively, change the following code block:
for i := 0; i < 10; i++ {
    go func() {
        bank.Deposit(1)
        done <- true
    }()
}
into
go func() {
    for i := 0; i < 10; i++ {
        bank.Deposit(1)
        done <- true
    }
    close(done)
}()
Note: you need to explicitly close the channel.

What is going on under the hood that makes this concurrent usage of a map racy?

In the example below, the race detector triggers an error. I am fine with that; though, since the code does not change keys (the map header, if I might say), I struggle to figure out the reason for the race. I simply don't understand what is going on under the hood that causes a race to be detected.
package main

import (
    "fmt"
    "sync"
)

// scores holds values incremented by multiple goroutines.
var scores = make(map[string]int)

func main() {
    var wg sync.WaitGroup
    wg.Add(2)
    scores["A"] = 0
    scores["B"] = 0
    go func() {
        for i := 0; i < 1000; i++ {
            // if _, ok := scores["A"]; !ok {
            //     scores["A"] = 1
            // } else {
            scores["A"]++
            // }
        }
        wg.Done()
    }()
    go func() {
        for i := 0; i < 1000; i++ {
            scores["B"]++ // Line 28
        }
        wg.Done()
    }()
    wg.Wait()
    fmt.Println("Final scores:", scores)
}
Map values are not addressable, so incrementing the integer values requires writing them back to the map itself.
The line
scores["A"]++
is equivalent to
tmp := scores["A"]
scores["A"] = tmp + 1
If you use a pointer to make the integer values addressable, and assign all the keys before the goroutines are dispatched, you can see there is no longer a race on the map itself:
var scores = make(map[string]*int)

func main() {
    var wg sync.WaitGroup
    wg.Add(2)
    scores["A"] = new(int)
    scores["B"] = new(int)
    go func() {
        for i := 0; i < 1000; i++ {
            (*scores["A"])++
        }
        wg.Done()
    }()
    go func() {
        for i := 0; i < 1000; i++ {
            (*scores["B"])++
        }
        wg.Done()
    }()
    wg.Wait()
    fmt.Println("Final scores:", scores)
}

Why the result is not as expected with flag "-race"?

I expected the same result, 1000000, both with and without the -race flag.
https://gist.github.com/romanitalian/f403ceb6e492eaf6ba953cf67d5a22ff
package main

import (
    "fmt"
    "runtime"
    "sync/atomic"
    "time"
)

//$ go run -race main_atomic.go
//954203
//
//$ go run main_atomic.go
//1000000

type atomicCounter struct {
    val int64
}

func (c *atomicCounter) Add(x int64) {
    atomic.AddInt64(&c.val, x)
    runtime.Gosched()
}

func (c *atomicCounter) Value() int64 {
    return atomic.LoadInt64(&c.val)
}

func main() {
    counter := atomicCounter{}
    for i := 0; i < 100; i++ {
        go func(no int) {
            for i := 0; i < 10000; i++ {
                counter.Add(1)
            }
        }(i)
    }
    time.Sleep(time.Second)
    fmt.Println(counter.Value())
}
The reason the results differ is that time.Sleep(time.Second) does not guarantee that all of your goroutines finish within one second. Even plain go run main.go is not guaranteed to print the same result every time; the race build is simply slower, so missed goroutines are more likely. You can see this if you use time.Millisecond instead of time.Second: the results become much more inconsistent.
Whatever value you pass to time.Sleep, it does not guarantee that all of your goroutines finish; a longer sleep only makes it less likely that some won't finish in time.
For consistent results, you would want to synchronise your goroutines a bit. You can use WaitGroup or channels.
With WaitGroup:
//rest of the code above is the same
func main() {
    counter := atomicCounter{}
    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func(no int) {
            for i := 0; i < 10000; i++ {
                counter.Add(1)
            }
            wg.Done()
        }(i)
    }
    wg.Wait()
    fmt.Println(counter.Value())
}
With channels:
func main() {
    valStream := make(chan int)
    doneStream := make(chan int)
    result := 0
    for i := 0; i < 100; i++ {
        go func() {
            for i := 0; i < 10000; i++ {
                valStream <- 1
            }
            doneStream <- 1
        }()
    }
    go func() {
        counter := 0
        for count := range doneStream {
            counter += count
            if counter == 100 {
                close(doneStream)
            }
        }
        close(valStream)
    }()
    for val := range valStream {
        result += val
    }
    fmt.Println(result)
}

Go: channel many slow API queries into single SQL transaction

I wonder what the idiomatic way to do the following would be.
I have N slow API queries and one database connection. I want a buffered channel where responses arrive, and one database transaction that I will use to write the data.
I could only come up with a semaphore-style solution, as in this made-up example:
func myFunc() {
    // 10 concurrent API calls
    sem := make(chan bool, 10)
    // A concurrency-safe map as buffer
    var myMap MyConcurrentMap
    for i := 0; i < N; i++ {
        sem <- true
        go func(i int) {
            defer func() { <-sem }()
            resp := slowAPICall(fmt.Sprintf("http://slow-api.me?%d", i))
            myMap.Put(resp)
        }(i)
    }
    for j := 0; j < cap(sem); j++ {
        sem <- true
    }
    tx, _ := db.Begin()
    for data := range myMap {
        tx.Exec("Insert data into database")
    }
    tx.Commit()
}
I am nearly sure there is a simpler, cleaner and more proper solution, but it seems complicated for me to grasp.
EDIT:
Well, I came up with the following solution. This way I do not need the buffer map: once data arrives on the resp channel it can be printed or inserted into a database. It works, and at least there are no races, though I am still not sure everything is OK.
package main

import (
    "fmt"
    "math/rand"
    "sync"
    "time"
)

// Global WaitGroup
var wg sync.WaitGroup

func init() {
    // just for fun's sake, seed rand
    rand.Seed(time.Now().UnixNano())
}

// Emulate a slow API call
func verySlowAPI(id int) int {
    n := rand.Intn(5)
    time.Sleep(time.Duration(n) * time.Second)
    return n
}

func main() {
    // Amount of tasks
    N := 100
    // Concurrency level
    concur := 10
    // Channel for tasks
    tasks := make(chan int, N)
    // Channel for responses
    resp := make(chan int, 10)
    // 10 concurrent goroutines
    wg.Add(concur)
    for i := 1; i <= concur; i++ {
        go worker(tasks, resp)
    }
    // Add tasks
    for i := 0; i < N; i++ {
        tasks <- i
    }
    // Collect data from goroutines
    for i := 0; i < N; i++ {
        fmt.Printf("%d\n", <-resp)
    }
    // Close the tasks channel
    close(tasks)
    // Wait till finished
    wg.Wait()
}

func worker(tasks chan int, resp chan<- int) {
    defer wg.Done()
    for {
        t, ok := <-tasks
        if !ok {
            return
        }
        n := verySlowAPI(t)
        resp <- n
    }
}
There's no need to use a channel as a semaphore; sync.WaitGroup was made for waiting for a set of goroutines to complete.
If you're using the channel to limit throughput, you're better off with a worker pool, using a channel to pass jobs to the workers:
type job struct {
    i int
}

func myFunc(N int) {
    // Adjust as needed for the total number of tasks
    work := make(chan job, 10)
    // res being whatever type slowAPICall returns
    results := make(chan res, 10)
    resBuff := make([]res, 0, N)
    wg := new(sync.WaitGroup)
    // 10 concurrent API calls
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            for j := range work {
                resp := slowAPICall(fmt.Sprintf("http://slow-api.me?%d", j.i))
                results <- resp
            }
            wg.Done()
        }()
    }
    go func() {
        for r := range results {
            resBuff = append(resBuff, r)
        }
    }()
    for i := 0; i < N; i++ {
        work <- job{i}
    }
    close(work)
    wg.Wait()
    close(results)
}
Maybe this will work for you. Now you can get rid of your concurrent map. Here is a code snippet:
func myFunc() {
    // 10 concurrent API calls
    sem := make(chan bool, 10)
    respCh := make(chan YOUR_RESP_TYPE, 10)
    var responses []YOUR_RESP_TYPE
    for i := 0; i < N; i++ {
        sem <- true
        go func(i int) {
            defer func() {
                <-sem
            }()
            resp := slowAPICall(fmt.Sprintf("http://slow-api.me?%d", i))
            respCh <- resp
        }(i)
    }
    respCollected := make(chan struct{})
    go func() {
        for i := 0; i < N; i++ {
            responses = append(responses, <-respCh)
        }
        close(respCollected)
    }()
    <-respCollected
    tx, _ := db.Begin()
    for _, data := range responses {
        tx.Exec("Insert data into database")
    }
    tx.Commit()
}
Here we use one more goroutine to collect all responses into a slice (or map) from the response channel.

Going mutex-less

Alright, Go "experts". How would you write this code in idiomatic Go, aka without a mutex in next?
package main

import (
    "fmt"
)

func main() {
    done := make(chan int)
    x := 0
    for i := 0; i < 10; i++ {
        go func() {
            y := next(&x)
            fmt.Println(y)
            done <- 0
        }()
    }
    for i := 0; i < 10; i++ {
        <-done
    }
    fmt.Println(x)
}

var mutex = make(chan int, 1)

func next(p *int) int {
    mutex <- 0
    // critical section BEGIN
    x := *p
    *p++
    // critical section END
    <-mutex
    return x
}
Assume you can't have two goroutines in the critical section at the same time, or else bad things will happen.
My first guess is to have a separate goroutine to handle the state, but I can't figure out a way to match up inputs / outputs.
You would use an actual sync.Mutex:
var mutex sync.Mutex

func next(p *int) int {
    mutex.Lock()
    defer mutex.Unlock()
    x := *p
    *p++
    return x
}
Though you would probably also group the next functionality, state and sync.Mutex into a single struct.
Though there's no reason to do so in this case (a Mutex is better suited for mutual exclusion around a single resource), you can use goroutines and channels to achieve the same effect:
http://play.golang.org/p/RR4TQXf2ct
x := 0
var wg sync.WaitGroup
send := make(chan *int)
recv := make(chan int)
go func() {
    for i := range send {
        x := *i
        *i++
        recv <- x
    }
}()
for i := 0; i < 10; i++ {
    wg.Add(1)
    go func() {
        defer wg.Done()
        send <- &x
        fmt.Println(<-recv)
    }()
}
wg.Wait()
fmt.Println(x)
As #favoretti mentioned, sync/atomic is one way to do it.
But you have to use int32 or int64 rather than int, since int can be a different size on different platforms.
Here's an example on Playground
package main

import (
    "fmt"
    "sync/atomic"
)

func main() {
    done := make(chan int)
    x := int64(0)
    for i := 0; i < 10; i++ {
        go func() {
            y := next(&x)
            fmt.Println(y)
            done <- 0
        }()
    }
    for i := 0; i < 10; i++ {
        <-done
    }
    fmt.Println(x)
}

func next(p *int64) int64 {
    return atomic.AddInt64(p, 1) - 1
}
