I have this function:
func Set(firstSet map[string][]App, store map[string]*Parsed) map[string]map[string]struct{} {
	s := make(map[string]map[string]struct{})
	for dmn, parsed := range store {
		for cId, apps := range firstSet {
			if _, ok := s[dmn]; !ok {
				s[dmn] = make(map[string]struct{})
			}
			s[dmn][cId] = struct{}{}
		}
	}
	return s
}
The range line (for dmn, parsed := range store) is giving me the error "concurrent map iteration and map write" in Go 1.8. Any idea why?
It looks like concurrent map misuse. Your function is probably invoked from different goroutines. Try guarding the function body with mutex.Lock()/Unlock() so that the map is safe for concurrent use (a sketch follows after the runtime excerpt below).
An enhanced concurrent-access check was added in Go 1.8; this is the relevant source in runtime/hashmap.go:736:
if h.flags&hashWriting != 0 {
	throw("concurrent map iteration and map write")
}
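For illustration, here is a minimal sketch of the mutex approach applied to the function above; the package-level guard variable and the read/write-lock split are assumptions of this sketch, not part of the original code:

import "sync"

// guard protects firstSet and store; every goroutine that reads or
// writes these maps must hold it.
var guard sync.RWMutex

func Set(firstSet map[string][]App, store map[string]*Parsed) map[string]map[string]struct{} {
	guard.RLock() // this function only reads the shared maps
	defer guard.RUnlock()

	s := make(map[string]map[string]struct{})
	for dmn := range store {
		for cID := range firstSet {
			if _, ok := s[dmn]; !ok {
				s[dmn] = make(map[string]struct{})
			}
			s[dmn][cID] = struct{}{}
		}
	}
	return s
}

Whatever goroutine mutates store or firstSet must take guard.Lock() around its writes; the read lock above only makes the iteration safe.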
I am trying to manipulate a Go sync.Map whose values are themselves sync.Map values, but I have some issues with the type assertions.
I have the following code:
func (cluster *Cluster) action(object1, object2 MyObject) {
	value, _ := cluster.globalMap.LoadOrStore(object1.Name, sync.Map{})
	localMap := value.(sync.Map)
	localMap.Store(object2.Name, object2)

	value2, _ := cluster.globalMap.Load(object1.Name)
	forComparison := value2.(sync.Map)

	fmt.Println(localMap.Load(object2.Name))
	fmt.Println(forComparison.Load(object2.Name))
}
The output is:
{myObject map[] map[]} true
<nil> false
I am doing this because I want the contents of localMap to stay thread-safe.
The problem is that I expect the same result from both prints, since "forComparison" should point to the same object as "localMap". But the second result is nil.
I suspect the problem comes from asserting the interface value "value" to a concrete "sync.Map", but I am not sure how I can call the .Store method with an inline assertion.
I thought about storing localMap back into cluster.globalMap, but that seems wrong to me, as it would defeat the whole point of using a local sync.Map and create concurrency issues.
Any input on what I should do?
As per the comments, the issue was that you were copying a sync.Map; the following code fails (it prints "Not found" on the playground):
var sm sync.Map
var x interface{}
x = sm
sm2 := x.(sync.Map)
sm2.Store("test", "test")
result, ok := sm.Load("test")
if ok {
	fmt.Printf("Found: %s\n", result)
} else {
	fmt.Printf("Not found\n")
}
Whereas using a pointer works as expected:
var sm sync.Map
var x interface{}
x = &sm
sm2 := x.(*sync.Map)
sm2.Store("test", "test")
result, ok := sm.Load("test")
if ok {
	fmt.Printf("Found: %s\n", result)
} else {
	fmt.Printf("Not found\n")
}
Running go vet would probably have warned you about this (sync.Map contains a sync.Mutex, and these "must not be copied after first use").
Note that the docs for sync.Map state:
The Map type is specialized. Most code should use a plain Go map instead, with separate locking or coordination, for better type safety and to make it easier to maintain other invariants along with the map content.
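Applied to the original action method, a minimal sketch using pointers might look like this (Cluster, MyObject and globalMap come from the question; the behaviour noted in the comments is the expected one, not verified output):

func (cluster *Cluster) action(object1, object2 MyObject) {
	// Store a *sync.Map so that later loads see the same map, not a copy.
	value, _ := cluster.globalMap.LoadOrStore(object1.Name, &sync.Map{})
	localMap := value.(*sync.Map)
	localMap.Store(object2.Name, object2)

	value2, _ := cluster.globalMap.Load(object1.Name)
	forComparison := value2.(*sync.Map)

	fmt.Println(localMap.Load(object2.Name))      // prints the stored object and true
	fmt.Println(forComparison.Load(object2.Name)) // now also prints the stored object and true
}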
I have a type that contains a sync.Map (m in the snippets below) where the key is a string and the value is a slice. My code for inserting items into the map is as follows:
newList := []*Item{item}
if result, ok := m.LoadOrStore(key, newList); ok {
	resultList := result.([]*Item)
	resultList = append(resultList, item)
	m.Store(key, resultList)
}
This is not concurrency-safe because the slice can be loaded and modified by multiple calls concurrently. This code is very fragile, so I've attempted to modify it to:
newList := []*Item{item}
if result, ok := m.LoadOrStore(key, &newList); ok {
	resultList := result.(*[]*Item)
	*resultList = append(*resultList, item)
}
All this does is make the issue show up deterministically. So I'm trying to find a way to have a map of slices that can be appended to concurrently. My instinct is to use a sync.Mutex to lock the list while I'm adding to it, but to keep concurrent access to the sync.Map I would need a map of sync.Mutex objects as well, like this:
newLock := sync.Mutex{}
raw, _ := lockMap.LoadOrStore(key, &newLock)
lock := raw.(*sync.Mutex)

newList := []*Item{item}
if result, ok := m.LoadOrStore(key, &newList); ok {
	lock.Lock()
	resultList := result.(*[]*Item)
	*resultList = append(*resultList, item)
	lock.Unlock()
}
Is there an easier way to go about this?
It isn't very different from your current plan, but you could save yourself the trouble of handling two maps by using a struct with an embedded mutex for the values of the map.
The struct would look something like this:
type SafeItems struct {
	sync.Mutex
	Items []*Item
}
And it could be used like this:
newMapEntry := SafeItems{Items: []*Item{item}}
if result, ok := m.LoadOrStore(key, &newMapEntry); ok {
	mapEntry := result.(*SafeItems)
	mapEntry.Lock()
	mapEntry.Items = append(mapEntry.Items, item)
	mapEntry.Unlock()
}
It's not a huge change but it does provide some syntactic sugar.
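Putting the pieces together, here is a self-contained sketch of this pattern; the Item type, the add helper and the counts in main are illustrative assumptions:

package main

import (
	"fmt"
	"sync"
)

type Item struct{ Name string }

// SafeItems couples the slice with the mutex that guards it.
type SafeItems struct {
	sync.Mutex
	Items []*Item
}

var m sync.Map // behaves like map[string]*SafeItems

// add appends item to the slice stored under key, creating the entry if needed.
func add(key string, item *Item) {
	entry, _ := m.LoadOrStore(key, &SafeItems{})
	se := entry.(*SafeItems)
	se.Lock()
	se.Items = append(se.Items, item)
	se.Unlock()
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			add("users", &Item{Name: fmt.Sprintf("item-%d", i)})
		}(i)
	}
	wg.Wait()

	entry, _ := m.Load("users")
	fmt.Println(len(entry.(*SafeItems).Items)) // 10
}

Note that LoadOrStore is called unconditionally here, so an empty SafeItems may be allocated and discarded when the key already exists; that keeps the code short at the cost of a small allocation.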
I am using counters to count the number of requests. Is there any way to get the current value of a Prometheus counter?
My aim is to reuse the existing counter without allocating another variable.
The Go Prometheus client version is 1.1.0.
It's easy: write a helper function to fetch the Prometheus counter value:
import (
	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
	"github.com/prometheus/common/log"
)

func GetCounterValue(metric *prometheus.CounterVec) float64 {
	var m = &dto.Metric{}
	if err := metric.WithLabelValues("label1", "label2").Write(m); err != nil {
		log.Error(err)
		return 0
	}
	return m.Counter.GetValue()
}
Currently there is no way to get the value of a counter in the official Golang implementation.
You can also avoid double counting by incrementing your own counter and using a CounterFunc to collect it.
Note: use an integral type and the sync/atomic package to avoid concurrent-access issues.
// declare the counter as unsigned int
var requestsCounter uint64 = 0

// register counter in Prometheus collector
prometheus.MustRegister(prometheus.NewCounterFunc(
	prometheus.CounterOpts{
		Name: "requests_total",
		Help: "Counts number of requests",
	},
	func() float64 {
		return float64(atomic.LoadUint64(&requestsCounter))
	}))

// somewhere in your code
atomic.AddUint64(&requestsCounter, 1)
It is possible to read the value of a counter (or any metric) in the official Golang implementation. I'm not sure when it was added.
This works for me for a simple metric with no vector:
func getMetricValue(col prometheus.Collector) float64 {
	c := make(chan prometheus.Metric, 1) // 1 for metric with no vector
	col.Collect(c)                       // collect current metric value into the channel
	m := dto.Metric{}
	_ = (<-c).Write(&m) // read metric value from the channel
	return *m.Counter.Value
}
Update: here's a more general version that works with vectors and histograms...
// GetMetricValue returns the sum of the Counter metrics associated with the Collector
// e.g. the metric for a non-vector, or the sum of the metrics for vector labels.
// If the metric is a Histogram then number of samples is used.
func GetMetricValue(col prometheus.Collector) float64 {
	var total float64
	collect(col, func(m dto.Metric) {
		if h := m.GetHistogram(); h != nil {
			total += float64(h.GetSampleCount())
		} else {
			total += m.GetCounter().GetValue()
		}
	})
	return total
}

// collect calls the function for each metric associated with the Collector
func collect(col prometheus.Collector, do func(dto.Metric)) {
	c := make(chan prometheus.Metric)
	go func(c chan prometheus.Metric) {
		col.Collect(c)
		close(c)
	}(c)
	for x := range c { // e.g. range across distinct label vector values
		m := dto.Metric{}
		_ = x.Write(&m)
		do(m)
	}
}
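A quick usage sketch of the general version; the counter vector below is hypothetical and imports are as in the earlier snippets:

var httpRequests = prometheus.NewCounterVec(
	prometheus.CounterOpts{Name: "http_requests_total", Help: "HTTP requests."},
	[]string{"method"},
)

func example() {
	httpRequests.WithLabelValues("GET").Inc()
	httpRequests.WithLabelValues("POST").Add(2)

	// GetMetricValue sums the counter across both label values: prints 3.
	fmt.Println(GetMetricValue(httpRequests))
}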
While it is possible to obtain counter values with github.com/prometheus/client_golang, as shown in the answer above, that approach looks too complicated. It can be greatly simplified by using an alternative library for exporting Prometheus metrics, github.com/VictoriaMetrics/metrics:
import (
	"github.com/VictoriaMetrics/metrics"
)

var requestsTotal = metrics.NewCounter(`http_requests_total`)

//...

func getRequestsTotal() uint64 {
	return requestsTotal.Get()
}
That is, just call the Get() method on the counter you need.
When you use a map in a program with concurrent access, is there any need to use a mutex in functions to read values?
Multiple readers, no writers is okay:
https://groups.google.com/d/msg/golang-nuts/HpLWnGTp-n8/hyUYmnWJqiQJ
One writer, no readers is okay. (Maps wouldn't be much good otherwise.)
Otherwise, if there is at least one writer and at least one more either writer or reader, then all readers and writers must use synchronization to access the map. A mutex works fine for this.
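For example, a minimal sketch of a mutex-protected map; the wrapper type and its method names are illustrative, not from the question:

type SafeCounts struct {
	mu sync.RWMutex
	m  map[string]int
}

func NewSafeCounts() *SafeCounts {
	return &SafeCounts{m: make(map[string]int)}
}

func (s *SafeCounts) Get(key string) int {
	s.mu.RLock() // multiple readers can hold the read lock at the same time
	defer s.mu.RUnlock()
	return s.m[key]
}

func (s *SafeCounts) Set(key string, value int) {
	s.mu.Lock() // writers take the exclusive lock
	defer s.mu.Unlock()
	s.m[key] = value
}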
sync.Map was merged into Go master on April 27, 2017.
This is the concurrent Map we have all been waiting for.
https://github.com/golang/go/blob/master/src/sync/map.go
https://godoc.org/sync#Map
I answered your question in this reddit thread a few days ago:
In Go, maps are not thread-safe. Also, data requires locking even for reading if, for example, there could be another goroutine that is writing the same data (concurrently, that is).
Judging by your clarification in the comments that there are going to be setter functions too, the answer to your question is yes: you will have to protect your reads with a mutex; you can use an RWMutex. For an example, you can look at the source of a table data structure I wrote (it uses a map behind the scenes; it is the one linked in the reddit thread).
You could use concurrent-map to handle the concurrency pains for you.
// Create a new map.
m := cmap.NewConcurrentMap()

// Add item to map, adds "bar" under key "foo"
m.Add("foo", "bar")

// Retrieve item from map.
tmp, ok := m.Get("foo")

// Checks if item exists
if ok {
	// Map stores items as interface{}, hence we'll have to cast.
	bar := tmp.(string)
}

// Removes item under key "foo"
m.Remove("foo")
If you only have one writer, then you can probably get away with using an atomic.Value. The following is adapted from https://golang.org/pkg/sync/atomic/#example_Value_readMostly (the original uses locks to protect writes, so it supports multiple writers):
type Map map[string]string

var m atomic.Value
m.Store(make(Map))

read := func(key string) (val string) { // read from multiple goroutines
	m1 := m.Load().(Map)
	return m1[key]
}

insert := func(key, val string) { // update from a single goroutine
	m1 := m.Load().(Map) // load current value of the data structure
	m2 := make(Map)      // create a new map
	for k, v := range m1 {
		m2[k] = v // copy all data from the current object to the new one
	}
	m2[key] = val // do the update that we need (can delete/add/change)
	m.Store(m2)   // atomically replace the current object with the new one
	// At this point all new readers start working with the new version.
	// The old version will be garbage collected once the existing readers
	// (if any) are done with it.
}
Why not use Go's concurrency model instead? Here is a simple example...
type DataManager struct {
	/** This contains connections to known dataStores **/
	m_dataStores map[string]DataStore

	/** This channel is used to access the dataStores map **/
	m_dataStoreChan chan map[string]interface{}
}

func newDataManager() *DataManager {
	dataManager := new(DataManager)
	dataManager.m_dataStores = make(map[string]DataStore)
	dataManager.m_dataStoreChan = make(chan map[string]interface{}, 0)

	// Concurrency...
	go func() {
		for {
			select {
			case op := <-dataManager.m_dataStoreChan:
				if op["op"] == "getDataStore" {
					storeId := op["storeId"].(string)
					op["store"].(chan DataStore) <- dataManager.m_dataStores[storeId]
				} else if op["op"] == "getDataStores" {
					stores := make([]DataStore, 0)
					for _, store := range dataManager.m_dataStores {
						stores = append(stores, store)
					}
					op["stores"].(chan []DataStore) <- stores
				} else if op["op"] == "setDataStore" {
					store := op["store"].(DataStore)
					dataManager.m_dataStores[store.GetId()] = store
				} else if op["op"] == "removeDataStore" {
					storeId := op["storeId"].(string)
					delete(dataManager.m_dataStores, storeId)
				}
			}
		}
	}()

	return dataManager
}

/**
 * Map access functions...
 */
func (this *DataManager) getDataStore(id string) DataStore {
	arguments := make(map[string]interface{})
	arguments["op"] = "getDataStore"
	arguments["storeId"] = id
	result := make(chan DataStore)
	arguments["store"] = result
	this.m_dataStoreChan <- arguments
	return <-result
}

func (this *DataManager) getDataStores() []DataStore {
	arguments := make(map[string]interface{})
	arguments["op"] = "getDataStores"
	result := make(chan []DataStore)
	arguments["stores"] = result
	this.m_dataStoreChan <- arguments
	return <-result
}

func (this *DataManager) setDataStore(store DataStore) {
	arguments := make(map[string]interface{})
	arguments["op"] = "setDataStore"
	arguments["store"] = store
	this.m_dataStoreChan <- arguments
}

func (this *DataManager) removeDataStore(id string) {
	arguments := make(map[string]interface{})
	arguments["storeId"] = id
	arguments["op"] = "removeDataStore"
	this.m_dataStoreChan <- arguments
}
In the example code below, I have a few users in manySimpleUsers that I would like to remove from manyFullUsers based on the Username.
If I do it with a pair of nested for ... range loops, many iterations are required to filter all of the elements, especially when both slices contain large numbers of elements.
What is the best way to achieve this in Go?
package main

import "fmt"

func main() {
	fmt.Println("Hello, playground")

	type FullUser struct {
		UserName  string
		UserEmail string
	}

	manyFullUsers := []FullUser{
		{"foo", "foo@jawohl.com"},
		{"bar", "bar@jawohl.com"},
		{"baz", "baz@jawohl.com"},
	}

	type SimpleUser struct {
		UserName string
	}

	manySimpleUsers := []SimpleUser{{"foo"}, {"bar"}}

	fmt.Println(manyFullUsers)
	fmt.Println(manySimpleUsers)
}
Create a set of the usernames to remove, then use it to filter:
func filterByUserName(fu []FullUser, su []SimpleUser) (out []FullUser) {
	f := make(map[string]struct{}, len(su))
	for _, u := range su {
		f[u.UserName] = struct{}{}
	}
	for _, u := range fu {
		if _, ok := f[u.UserName]; !ok { // keep only users not present in su
			out = append(out, u)
		}
	}
	return
}
You can use Filter() and Exist() from https://github.com/ledongthuc/goterators, which I created to reuse aggregate and transform functions.
filteredFullUsers := goterators.Filter(manyFullUsers, func(item FullUser) bool {
	return !goterators.Exist(manySimpleUsers, SimpleUser{item.UserName})
})
You can also build a map from manySimpleUsers to speed up the username lookup, as in the first answer (and sketched below).
This library requires Go 1.18+, since it relies on generics.
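For example, a sketch combining the two suggestions: build a set of usernames once, then filter with an O(1) lookup (variable names reuse the ones from the question and the answer above):

toRemove := make(map[string]struct{}, len(manySimpleUsers))
for _, su := range manySimpleUsers {
	toRemove[su.UserName] = struct{}{}
}

filteredFullUsers := goterators.Filter(manyFullUsers, func(item FullUser) bool {
	_, found := toRemove[item.UserName]
	return !found // keep only users whose name is not in manySimpleUsers
})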