I know I can do it like this:
func randomFunc() {
	// do stuff
	go destroyObjectAfterXHours(4, "idofobject")
	// do other stuff
}

func destroyObjectAfterXHours(hours int, id string) {
	time.Sleep(time.Duration(hours) * time.Hour)
	destroyObject(id)
}
But if we imagine destroyObjectAfterXHours being called a million times within a few minutes, this approach scales very badly.
I was hoping someone could share a more efficient solution to this problem.
I've been thinking about a potential solution where the destruction time and object ID are stored somewhere, and a single function traverses that list every X minutes, destroys the objects that are due, and removes their entries. Would this be a good solution?
I worry it would also be a bad solution, since it has to traverse a list with millions of items all the time and then efficiently remove some of them.
The time.AfterFunc function is designed for this use case:
func randomFunc() {
	// do stuff
	time.AfterFunc(4*time.Hour, func() { destroyObject("idofobject") })
	// do other stuff
}
time.AfterFunc is efficient and simple to use.
As the documentation states, the function is called in a goroutine after the duration elapses. The goroutine is not created up front as in the question.
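As a side note (not part of the original answer): time.AfterFunc returns a *time.Timer, so a pending destruction can also be cancelled if the object is touched again:
// Stop cancels the pending call if it has not fired yet.
t := time.AfterFunc(4*time.Hour, func() { destroyObject("idofobject") })

// ... later, if the object should be kept alive:
if t.Stop() {
	// the scheduled destruction was cancelled before it ran
}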
So I'd agree with your solution #2 rather than #1.
Traversing a list of a million items is much cheaper than keeping a million separate goroutines around.
Goroutines are expensive (compared to loops) and take memory and processing time. For example, a million goroutines take about 4 GB of RAM.
Traversing a list, on the other hand, takes very little space and is done in O(n) time.
A good example of this exact pattern is go-cache, which deletes its expired elements in a goroutine that it runs periodically:
https://github.com/patrickmn/go-cache/blob/master/cache.go#L931
This is a more detailed example of how they did it:
type Item struct {
	Object     interface{}
	Expiration int64
}

func (item Item) Expired() bool {
	if item.Expiration == 0 {
		return false
	}
	return time.Now().UnixNano() > item.Expiration
}

// RemoveItem deletes the element at index from s.
func RemoveItem(s []Item, index int) []Item {
	return append(s[:index], s[index+1:]...)
}

// deleteExpired removes all expired items and returns the shortened slice.
func deleteExpired(items []Item) []Item {
	var expired []int
	for i, v := range items {
		if v.Expired() {
			expired = append(expired, i)
		}
	}
	// Remove from the back so earlier removals don't shift the remaining indexes.
	for i := len(expired) - 1; i >= 0; i-- {
		items = RemoveItem(items, expired[i])
	}
	return items
}
The above implementation could definitely be improved with a linked list instead of a slice, but this is the general idea.
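For completeness, here is a minimal sketch (my own illustration, not go-cache's actual code) of the periodic sweep that would drive deleteExpired, assuming it returns the shortened slice as above:
// runJanitor calls deleteExpired every interval until a value arrives on stop.
func runJanitor(items []Item, interval time.Duration, stop <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			items = deleteExpired(items)
		case <-stop:
			return
		}
	}
}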
This is an interesting question. I came up with a solution that uses a heap to maintain the queue of items to be destroyed and sleeps for exactly the time until the next item is up for destruction. I think it is more efficient, but the gain might be slim in some cases. Nonetheless, you can see the code here:
package main
import (
"container/heap"
"fmt"
"time"
)
type Item struct {
Expiration time.Time
Object interface{} // It would make more sense to be *interface{}, but that is not as convenient
}
//MININT is the minimal interval for the delete loop to run. In most cases, it is better set to 0
const MININT = 1 * time.Second
func deleteExpired(addCh chan Item) (quitCh chan bool) {
quitCh = make(chan bool)
go func() {
h := make(ExpHeap, 0)
var t *time.Timer
item := <-addCh
heap.Push(&h, &item)
t = time.NewTimer(time.Until(h[0].Expiration))
for {
//Check unfinished incoming first
for incoming := true; incoming; {
select {
case item := <-addCh:
heap.Push(&h, &item)
default:
incoming = false
}
}
//Guard: the heap can be empty after all items have expired
if len(h) > 0 {
if delta := time.Until(h[0].Expiration); delta >= MININT {
t.Reset(delta)
} else {
t.Reset(MININT)
}
}
select {
case <-quitCh:
return
//New Item incoming, break the timer
case item := <-addCh:
heap.Push(&h, &item)
if item.Expiration.After(h[0].Expiration) {
continue
}
if delta := time.Until(item.Expiration); delta >= MININT {
t.Reset(delta)
} else {
t.Reset(MININT)
}
//Wait until next item to be deleted
case <-t.C:
for len(h) > 0 && !h[0].Expiration.After(time.Now()) {
item := heap.Pop(&h).(*Item)
destroy(item.Object)
}
//Only rearm the timer if something is still scheduled
if len(h) > 0 {
if delta := time.Until(h[0].Expiration); delta >= MININT {
t.Reset(delta)
} else {
t.Reset(MININT)
}
}
}
}
}()
return quitCh
}
type ExpHeap []*Item
func (h ExpHeap) Len() int {
return len(h)
}
func (h ExpHeap) Swap(i, j int) {
h[i], h[j] = h[j], h[i]
}
func (h ExpHeap) Less(i, j int) bool {
return h[i].Expiration.Before(h[j].Expiration)
}
func (h *ExpHeap) Push(x interface{}) {
item := x.(*Item)
*h = append(*h, item)
}
func (h *ExpHeap) Pop() interface{} {
old, n := *h, len(*h)
item := old[n-1]
*h = old[:n-1]
return item
}
//Actual destroy code.
func destroy(x interface{}) {
fmt.Printf("%v # %v\n", x, time.Now())
}
func main() {
addCh := make(chan Item)
quitCh := deleteExpired(addCh)
for i := 30; i > 0; i-- {
t := time.Now().Add(time.Duration(i) * time.Second / 2)
addCh <- Item{t, t}
}
time.Sleep(7 * time.Second)
quitCh <- true
}
playground: https://play.golang.org/p/JNV_6VJ_yfK
By the way, there are packages like cron for job management, but I am not familiar with them so I cannot speak for their efficiency.
Edit:
I still don't have enough reputation to comment :(
About performance: this code basically has lower CPU usage, as it only wakes itself when necessary and only traverses the items that are up for destruction instead of the whole list. Based on personal (actually ACM contest) experience, a modern CPU can roughly process a loop of 10^9 iterations in about 1.2 seconds, which means that at a scale of 10^6, traversing the whole list takes a little over 1 millisecond, excluding the actual destruction code AND data copying (which costs a lot on average over more than thousands of runs, on the scale of 100 milliseconds or so). My code's approach is O(lg N), which at a 10^6 scale is at least a thousand times faster (considering constants). Please note again that all these calculations are based on experience rather than benchmarks (there were some, but I am not able to provide them).
Edit 2:
On second thought, I think the plain solution can use a simple optimization:
func deleteExpired(items []Item) []Item {
	tail := len(items)
	for index := 0; index < tail; {
		if items[index].Expired() {
			// Swap the expired item to the back and shrink the live region.
			tail--
			items[index], items[tail] = items[tail], items[index]
		} else {
			index++
		}
	}
	// items[tail:] now holds the expired items, items[:tail] the live ones.
	return items[:tail]
}
With this change, it no longer copies data inefficiently and does not allocate extra space.
Edit 3:
changing code from here
I tested the memory use of AfterFunc. On my laptop it is 250 bytes per call, while on the playground it is 69 (I am curious about the reason). With my code, a pointer + a time.Time is 28 bytes. At a scale of a million, the difference is slim. Using time.AfterFunc is a much better option.
If it is a one-shot, this can be easily achieved with
// Make the destruction cancelable
cancel := make(chan bool)

go func(t time.Duration, id int) {
	expired := time.NewTimer(t).C
	select {
	// destroy the object when the timer has expired
	case <-expired:
		destroyObject(id)
	// or cancel the destruction in case we get a cancel signal
	// before it is destroyed
	case <-cancel:
		fmt.Println("Cancelled destruction of", id)
		return
	}
}(4*time.Hour, id)

if weather == weather.SUNNY {
	cancel <- true
}
If you want to do it every 4 h:
// Same as above, though since id may have already been destroyed
// once, I name the channel differently
done := make(chan bool)

go func(t time.Duration, id int) {
	// Sends to the channel every t
	tick := time.NewTicker(t).C
	// Wrap in a loop, otherwise select would only handle the first tick
	for {
		select {
		// t has passed, so id can be destroyed
		case <-tick:
			destroyObject(id)
		// We are finished destroying stuff
		case <-done:
			fmt.Println("Ok, ok, I quit destroying...")
			return
		}
	}
}(4*time.Hour, id)

if weather == weather.RAINY {
	done <- true
}
The idea behind it is to run a single goroutine per destruction job which can be cancelled. Say, you have a session and the user did something, so you want to keep the session alive. Since goroutines are extremely cheap, you can simply fire off another goroutine.
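For instance, a hypothetical keep-alive helper along these lines (touchSession and the 4-hour window are made up for illustration) cancels the pending destruction and schedules a fresh one on every user action:
// touchSession cancels the currently scheduled destruction for a session
// and schedules a new one; it returns the new cancel channel to use next time.
func touchSession(id int, cancel chan bool) chan bool {
	close(cancel) // release the goroutine watching the old timer
	newCancel := make(chan bool)
	go func() {
		select {
		case <-time.After(4 * time.Hour):
			destroyObject(id)
		case <-newCancel:
			// session was touched again; do nothing
		}
	}()
	return newCancel
}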
Related
I have the following piece of code. I'm trying to run 3 goroutines at the same time, never exceeding three. This works as expected, but the goroutines are supposed to be running updates to a table in the DB.
So the first routine processes the first 50 rows, the second the next 50, the third the 50 after that, and it repeats. I don't want two routines processing the same rows at the same time, but due to how long the update takes, this happens almost every time.
To solve this, I started flagging the rows with a new boolean column, processing. I set it to true for all rows to be updated when the routine starts, and sleep the script for 6 seconds to allow the flag to be updated.
This works for a random amount of time, but every now and then I'll see 2-3 jobs processing the same rows again. I feel like the method I'm using to prevent duplicate updates is a bit janky and was wondering if there is a better way.
stopper := make(chan struct{}, 3)
var counter int
for {
counter++
stopper <- struct{}{}
go func(db *sqlx.DB, c int) {
fmt.Println("start")
updateTables(db)
fmt.Println("stop"b)
<-stopper
}(db, counter)
time.Sleep(6 * time.Second)
}
In updateTables:
var ids []string
err := sqlx.Select(db, &data, `select * from table_data where processing = false`)
if err != nil {
	panic(err)
}
for _, row := range data {
	ids = append(ids, row.Id)
}
if len(ids) == 0 {
	return
}
for _, row := range data {
	_, err = db.Exec(`update table_data set processing = true where id = $1`, row.Id)
	if err != nil {
		panic(err)
	}
}
// Additional row processing
I think there's a misunderstanding of the approach to goroutines in this case.
Goroutines doing this kind of work should be approached like worker threads, using channels as the communication method between the main goroutine (which does the synchronization) and the worker goroutines (which do the actual job).
package main
import (
"log"
"sync"
"time"
)
type record struct {
id int
}
func main() {
const WORKER_COUNT = 10
recordschan := make(chan record)
var wg sync.WaitGroup
for k := 0; k < WORKER_COUNT; k++ {
wg.Add(1)
// Create the worker which will be doing the updates
go func(workerID int) {
defer wg.Done() // Marking the worker as done
for record := range recordschan {
updateRecord(record)
log.Printf("req %d processed by worker %d", record.id, workerID)
}
}(k)
}
// Feeding the records channel
for _, record := range fetchRecords() {
recordschan <- record
}
// Closing our channel as we're not using it anymore
close(recordschan)
// Waiting for all the go routines to finish
wg.Wait()
log.Println("we're done!")
}
func fetchRecords() []record {
result := []record{}
for k := 0; k < 100; k++ {
result = append(result, record{k})
}
return result
}
func updateRecord(req record) {
time.Sleep(200 * time.Millisecond)
}
You can even use a buffered channel in the main goroutine if you need to queue up all 50 records at once.
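As a sketch of that: a buffered channel lets the feeding loop queue a whole batch without blocking (the 50 simply mirrors the batch size from the question):
// The main goroutine can now queue up to 50 records before any worker
// has received one.
recordschan := make(chan record, 50)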
I've been working on a sort-of pub-sub mechanism in the application we're building. The business logic basically generates a whack-ton of events, which in turn can be used to feed data to the client through APIs, or be persisted to storage if the application is running with that option enabled.
What we had, and observed:
Long story short, it turns out we were dropping data we really ought not to have been dropping. The "subscriber" had a channel with a large buffer, and essentially only read data from this channel, checked a few things, and appended it to a slice. The capacity of the slice is such that memory allocations are kept to a minimum. Simulating a scenario where the subscriber channel had a buffer of, say, 1000 data sets, we noticed data could be dropped after only 10 sets had been sent. The very first event was never dropped.
The code we had at this point looks something like this:
type broker struct {
ctx context.Context
subs []*sub
}
type sub struct {
ctx context.Context
mu *sync.Mutex
ch chan []interface{}
buf []interface{}
}
func (b *broker) Send(evts ...interface{}) {
rm := make([]int, 0, len(b.subs))
defer func() {
for i := len(rm) - 1; i >= 0; i-- {
// last element
if i == len(b.subs)-1 {
b.subs = b.subs[:i]
continue
}
b.subs = append(b.subs[:i], b.subs[i+1:]...)
}
}()
for i, s := range b.subs {
select {
case <-b.ctx.Done(): // is the app still running
return
case <-s.Stopped(): // is this sub still valid
rm = append(rm, i)
case s.C() <- evts: // can we write to the channel
continue
default: // app is running, sub is valid, but channel is presumably full, skip
fmt.Println("skipped some events")
}
}
}
func NewSub(ctx context.Context, buf int) *sub {
s := &sub{
ctx: ctx,
mu: &sync.Mutex{},
ch: make(chan []interface{}, buf),
buf: make([]interface{}, 0, buf),
}
go s.loop(ctx) // start routine to consume events
return s
}
func (s *sub) C() chan<- []interface{} {
return s.ch
}
func (s *sub) Stopped() <-chan struct{} {
return s.ctx.Done()
}
func (s *sub) loop(ctx context.Context) {
defer func() {
close(s.ch)
}()
for {
select {
case <-ctx.Done():
return
case data := <-s.ch:
// do some processing
s.mu.Lock()
s.buf = append(s.buf, data...)
s.mu.Unlock()
}
}
}
func (s *sub) GetProcessedData(amt int) []*wrappedT {
s.mu.Lock()
data := s.buf
if len(data) == amt {
s.buf = s.buf[:0]
} else if len(data) > amt {
data = data[:amt]
s.buf = s.buf[amt:]
} else {
s.buf = make([]interface{}, 0, cap(s.buf))
}
s.mu.Unlock()
ret := make([]*wrappedT, 0, len(data))
for _, v := range data {
// some processing
ret = append(ret, &wrappedT{v})
}
return ret
}
Now obviously, the buffers are there to ensure that events can still be consumed when we're calling things like GetProcessedData. That type of call is usually the result of an API request, or some internal flush/persist to storage mechanism. Because of the mutex lock, we might not be reading from the internal channel. Weirdly, the channel buffers never got backed up all the way through, but not all data made its way through to the subscribers. As mentioned, the first event always did, which made us even more suspicious.
What we eventually tried (to fix):
After a fair bit of debugging, hair pulling, looking at language specs, and fruitless googling I began to suspect the select statement to be the problem. Instead of sending to the channels directly, I changed it to the rather hacky:
func (b *broker) send(s *sub, evts []interface{}) {
	ctx, cfunc := context.WithTimeout(b.ctx, 100*time.Millisecond)
	defer cfunc()
	select {
	case <-ctx.Done():
		return
	case s.C() <- evts:
		return
	case <-s.Stopped():
		return
	}
}

func (b *broker) Send(evts ...interface{}) {
	for _, s := range b.subs {
		go b.send(s, evts)
	}
}
Instantly, all events were correctly propagated through the system. Calling Send on the broker wasn't blocking the part of the system that actually performs the heavy lifting (that was the reason for the use of channels after all), and things are performing reasonably well.
The actual question(s):
There's a couple of things still bugging me:
The way I read the specs, the default statement ought to be the last resort, solely as a way out to prevent blocking channel operations in a select statement. Elsewhere, I read that the runtime may not consider a case ready for communication if there is no routine consuming what you're about to write to the channel, irrespective of channel buffers. Is this indeed how it works?
For the time being, the context with timeout fixes the bigger issue of data not propagating properly. However, I do feel like there should be a better way.
Has anyone ever encountered something similar, and worked out exactly what's going on?
I'm happy to provide more details where needed. I've kept the code as minimal as possible, omitting a lot of complexities WRT the broker system we're using (different event types, different types of subscribers, etc...). We don't use the interface{} type anywhere in case anyone is worried about that :P
For now, though, I think this is plenty of text for a Friday.
I'm writing a concurrency-safe memo:
package mu
import (
"sync"
)
// Func represents a memoizable function, operating on a string key, to use with a Mu
type Func func(key string) interface{}
// Mu is a cache that memoizes results of an expensive computation
//
// It has a traditional implementation using mutexes.
type Mu struct {
// guards done
mu sync.RWMutex
done map[string]chan bool
memo map[string]interface{}
f Func
}
// Get a string key if it exists, otherwise computes the value and caches it.
//
// Returns the value and whether or not the key existed.
func (c *Mu) Get(key string) (interface{}, bool) {
c.mu.RLock()
_, ok := c.done[key]
c.mu.RUnlock()
if ok {
return c.get(key), true
}
c.mu.Lock()
_, ok = c.done[key]
if ok {
c.mu.Unlock()
} else {
c.done[key] = make(chan bool)
c.mu.Unlock()
v := c.f(key)
c.memo[key] = v
close(c.done[key])
}
return c.get(key), ok
}
// get returns the value of key, blocking on an existing computation
func (c *Mu) get(key string) interface{} {
<-c.done[key]
v, _ := c.memo[key]
return v
}
As you can see, there's a mutex guarding the done field, which is used
to signal to other goroutines that a computation for a key is pending or done. This avoids duplicate computations (calls to c.f(key)) for the same key.
My question is around the guarantees of this code; by ensuring that the computing goroutine closes the channel after it writes to c.memo, does this guarantee that other goroutines that access c.memo[key] after a blocking call to <-c.done[key] are guaranteed to see the result of the computation?
The short answer is yes.
We can simplify some of the code to get to the essence of why. Consider your Mu struct:
type Mu struct {
memo int
done chan bool
}
We can now define two functions, compute and read:
func compute(r *Mu) {
time.Sleep(2 * time.Second)
r.memo = 42
close(r.done)
}
func read(r *Mu) {
<-r.done
fmt.Println("Read value: ", r.memo)
}
Here, compute is a computationally heavy task (which we simulate by sleeping for some time).
Now, in the main function, we start a compute goroutine, along with some read goroutines at regular intervals:
func main() {
r := &Mu{}
r.done = make(chan bool)
go compute(r)
// this one starts immediately
go read(r)
time.Sleep(time.Second)
// this one starts in the middle of computation
go read(r)
time.Sleep(2*time.Second)
// this one starts after the computation is complete
go read(r)
// This is to prevent the program from terminating immediately
time.Sleep(3 * time.Second)
}
In all three cases, we print out the result of the compute task.
Working code here
When you "close" a channel in go, all statements which wait for the result of the channel (including statements that are executed after it's closed) will block. So provided that the only place that the channel is being closed from is the place where the memo value is computed, you will have that guarantee.
The only thing you should be careful about is to make sure that this channel isn't closed anywhere else in your code, since closing an already-closed channel panics.
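If the close could conceivably be reached from more than one code path, wrapping it in a sync.Once is a common way to keep it safe; a small sketch (not part of the original code, a variant of the simplified Mu above):
type Mu struct {
	memo int
	done chan bool
	once sync.Once // guards the close
}

// finish is a hypothetical helper: it publishes the result and closes done
// at most once, even if called from several goroutines.
func (r *Mu) finish(v int) {
	r.memo = v
	r.once.Do(func() { close(r.done) })
}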
I need to search a huge slice of map[string]string. My thought was that this is a good chance for Go's channels and goroutines.
The plan was to divide the slice into parts and search them in parallel.
But I was kind of shocked that my parallel version timed out while searching the whole slice in one go did the trick.
I am not sure what I am doing wrong. Below is the code I used to test the concept. The real code would involve more complexity.
//Search for a given term
//This function gets the data that needs to be searched
//and the search term, and it returns the matching maps
// the data is pretty simple; each map contains { key: someText }
func Search(data []map[string]string, term string) []map[string]string {
set := []map[string]string{}
for _, v := range data {
if v["key"] == term {
set = append(set, v)
}
}
return set
}
So this works pretty well for searching the slice of maps for a given search term.
Now I thought that if my slice had around 20K entries, I would like to do the search in parallel.
// All searches all records concurrently
// Has the same function signature as the Search function,
// but the main task is to fan out the slice into 5 parts and search
// them in parallel
func All(data []map[string]string, term string) []map[string]string {
countOfSlices := 5
part := len(data) / countOfSlices
fmt.Printf("Size of the data:%v\n", len(data))
fmt.Printf("Fragemnt Size:%v\n", part)
timeout := time.After(60000 * time.Millisecond)
c := make(chan []map[string]string)
for i := 0; i < countOfSlices; i++ {
// Fragments of the array passed on to the search method
go func() { c <- Search(data[(part*i):(part*(i+1))], term) }()
}
result := []map[string]string{}
for i := 0; i < part-1; i++ {
select {
case records := <-c:
result = append(result, records...)
case <-timeout:
fmt.Println("timed out!")
return result
}
}
return result
}
Here are my tests:
I have a function to generate my test data and 2 tests.
func GenerateTestData(search string) ([]map[string]string, int) {
rand.Seed(time.Now().UTC().UnixNano())
strin := []string{"String One", "This", "String Two", "String Three", "String Four", "String Five"}
var matchCount int
numOfRecords := 20000
set := []map[string]string{}
for i := 0; i < numOfRecords; i++ {
p := rand.Intn(len(strin))
s := strin[p]
if s == search {
matchCount++
}
set = append(set, map[string]string{"key": s})
}
return set, matchCount
}
The 2 tests: The first just traverses the slice and the second searches in parallel
func TestSearchItem(t *testing.T) {
tests := []struct {
InSearchTerm string
Fn func(data []map[string]string, term string) []map[string]string
}{
{
InSearchTerm: "This",
Fn: Search,
},
{InSearchTerm: "This",
Fn: All,
},
}
for i, test := range tests {
startTime := time.Now()
data, expectedMatchCount := GenerateTestData(test.InSearchTerm)
result := test.Fn(data, test.InSearchTerm)
fmt.Printf("Test: [%v]:\nTime: %v \n\n", i+1, time.Since(startTime))
assert.Equal(t, len(result), expectedMatchCount, "expected: %v to be: %v", len(result), expectedMatchCount)
}
}
It would be great if someone could explain to me why my parallel code is so slow. What is wrong with the code, what am I missing here, and what is the recommended way to search huge in-memory slices (50K+ entries)?
This looks like just a simple typo. The problem is that you divide your original big slice into 5 pieces (countOfSlices), and you properly launch 5 goroutines to search each part:
for i := 0; i < countOfSlices; i++ {
// Fragments of the array passed on to the search method
go func() { c <- Search(data[(part*i):(part*(i+1))], term) }()
}
This means you should expect 5 results, but you don't: your loop waits for part - 1 = 3999 results:
for i := 0; i < part-1; i++ {
select {
case records := <-c:
result = append(result, records...)
case <-timeout:
fmt.Println("timed out!")
return result
}
}
Obviously, if you only launch 5 goroutines, each of which delivers a single result, you can only expect that many (5). And since your loop waits for far more results (which will never come), it times out as expected.
Change the condition to this:
for i := 0; i < countOfSlices; i++ {
// ...
}
Concurrency is not parallelism. Go is a massively concurrent language, not a parallel one. Even on a multicore machine you will pay for data exchange between CPUs when accessing your shared slice from the computation goroutines. You can take advantage of concurrency by, for example, searching for just the first match, or by doing something with the results (say printing them, writing them to some Writer, or sorting) while the search continues.
func Search(data []map[string]string, term string, ch chan map[string]string) {
for _, v := range data {
if v["key"] == term {
ch <- v
}
}
}
func main(){
...
go Search(datapart1, term, ch)
go Search(datapart2, term, ch)
go Search(datapart3, term, ch)
...
for vv := range ch{
fmt.Println(vv) //do something with match concurrently
}
...
}
The recommended way to search a huge slice would be to keep it sorted, or to build a binary tree. There is no magic.
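As a rough sketch of that idea (my own illustration, not from the answer): if the slice is kept sorted by the "key" field, sort.Search from the standard library finds a match in O(log n) instead of a full scan:
// Assumes data is sorted in ascending order of data[i]["key"].
i := sort.Search(len(data), func(i int) bool {
	return data[i]["key"] >= term
})
if i < len(data) && data[i]["key"] == term {
	// data[i] is the first match; neighbouring entries hold any duplicates
}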
There are two problems. As icza notes, you never finish the receive loop, because it needs to loop countOfSlices times, not part-1. And your call to Search will not get the data you want, because the goroutine closure captures the loop variable i; compute the sub-slice before calling the go func() and pass it in as an argument.
You might find that it still isn't faster to do this particular work in parallel with such simple data (perhaps with more complex data, on a machine with lots of cores, it would be worthwhile).
Be sure when testing that you try swapping the order of your test runs; you might be surprised by the results! Also, perhaps try the benchmarking tools available in the testing package, which run your code many times and average the results; this might help you get a better idea of whether the fan-out speeds things up.
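A minimal benchmark sketch using the testing package, assuming the GenerateTestData, Search and All functions from the question:
func BenchmarkSearch(b *testing.B) {
	data, _ := GenerateTestData("This")
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		Search(data, "This")
	}
}

func BenchmarkAll(b *testing.B) {
	data, _ := GenerateTestData("This")
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		All(data, "This")
	}
}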
I am making a cache wrapper around a database. To account for possibly slow database calls, I was thinking of a mutex per key (pseudo Go code):
mutexes = map[string]*sync.Mutex // instance variable
mutexes[key].Lock()
defer mutexes[key].Unlock()
if value, ok := cache.find(key); ok {
return value
}
value = databaseCall(key)
cache.save(key, value)
return value
However I don't want my map to grow too much. My cache is an LRU and I want to have a fixed size for some other reasons not mentioned here. I would like to do something like
delete(mutexes, key)
when all the locks on the key are over but... that doesn't look thread safe to me... How should I do it?
Note: I found the question "In Go, can we synchronize each key of a map using a lock per key?" but it has no answer.
A map of mutexes is an efficient way to accomplish this; however, the map itself must also be synchronized. A reference count can be used to keep track of entries in concurrent use and to remove them when no longer needed. Here is a working map of mutexes, complete with a test and benchmark.
(UPDATE: This package provides similar functionality: https://pkg.go.dev/golang.org/x/sync/singleflight )
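For reference, a rough sketch of how singleflight could be used for the cache-fill case in the question (databaseCall is the question's placeholder; this is an alternative to the map of mutexes below):
// Only one goroutine runs the function per key; concurrent callers with
// the same key wait for and share that result.
var g singleflight.Group

func load(key string) interface{} {
	v, _, _ := g.Do(key, func() (interface{}, error) {
		return databaseCall(key), nil
	})
	return v
}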
mapofmu.go
// Package mapofmu provides locking per-key.
// For example, you can acquire a lock for a specific user ID and all other requests for that user ID
// will block until that entry is unlocked (effectively your work load will be run serially per-user ID),
// and yet have work for separate user IDs happen concurrently.
package mapofmu
import (
"fmt"
"sync"
)
// M wraps a map of mutexes. Each key locks separately.
type M struct {
ml sync.Mutex // lock for entry map
ma map[interface{}]*mentry // entry map
}
type mentry struct {
m *M // point back to M, so we can synchronize removing this mentry when cnt==0
el sync.Mutex // entry-specific lock
cnt int // reference count
key interface{} // key in ma
}
// Unlocker provides an Unlock method to release the lock.
type Unlocker interface {
Unlock()
}
// New returns an initialized M.
func New() *M {
return &M{ma: make(map[interface{}]*mentry)}
}
// Lock acquires a lock corresponding to this key.
// This method will never return nil and Unlock() must be called
// to release the lock when done.
func (m *M) Lock(key interface{}) Unlocker {
// read or create entry for this key atomically
m.ml.Lock()
e, ok := m.ma[key]
if !ok {
e = &mentry{m: m, key: key}
m.ma[key] = e
}
e.cnt++ // ref count
m.ml.Unlock()
// acquire the entry lock; blocks here while another goroutine holds this key
e.el.Lock()
return e
}
// Unlock releases the lock for this entry.
func (me *mentry) Unlock() {
m := me.m
// decrement and if needed remove entry atomically
m.ml.Lock()
e, ok := m.ma[me.key]
if !ok { // entry must exist
m.ml.Unlock()
panic(fmt.Errorf("Unlock requested for key=%v but no entry found", me.key))
}
e.cnt-- // ref count
if e.cnt < 1 { // if it hits zero then we own it and remove from map
delete(m.ma, me.key)
}
m.ml.Unlock()
// now that map stuff is handled, we unlock and let
// anything else waiting on this key through
e.el.Unlock()
}
mapofmu_test.go:
package mapofmu
import (
"math/rand"
"strconv"
"strings"
"sync"
"testing"
"time"
)
func TestM(t *testing.T) {
r := rand.New(rand.NewSource(42))
m := New()
_ = m
keyCount := 20
iCount := 10000
out := make(chan string, iCount*2)
// run a bunch of concurrent requests for various keys,
// the idea is to have a lot of lock contention
var wg sync.WaitGroup
wg.Add(iCount)
for i := 0; i < iCount; i++ {
go func(rn int) {
defer wg.Done()
key := strconv.Itoa(rn)
// you can prove the test works by commenting the locking out and seeing it fail
l := m.Lock(key)
defer l.Unlock()
out <- key + " A"
time.Sleep(time.Microsecond) // make 'em wait a mo'
out <- key + " B"
}(r.Intn(keyCount))
}
wg.Wait()
close(out)
// verify the map is empty now
if l := len(m.ma); l != 0 {
t.Errorf("unexpected map length at test end: %v", l)
}
// confirm that the output always produced the correct sequence
outLists := make([][]string, keyCount)
for s := range out {
sParts := strings.Fields(s)
kn, err := strconv.Atoi(sParts[0])
if err != nil {
t.Fatal(err)
}
outLists[kn] = append(outLists[kn], sParts[1])
}
for kn := 0; kn < keyCount; kn++ {
l := outLists[kn] // list of output for this particular key
for i := 0; i < len(l); i += 2 {
if l[i] != "A" || l[i+1] != "B" {
t.Errorf("For key=%v and i=%v got unexpected values %v and %v", kn, i, l[i], l[i+1])
break
}
}
}
if t.Failed() {
t.Logf("Failed, outLists: %#v", outLists)
}
}
func BenchmarkM(b *testing.B) {
m := New()
b.ResetTimer()
for i := 0; i < b.N; i++ {
// run uncontended lock/unlock - should be quite fast
m.Lock(i).Unlock()
}
}
I wrote a simple, similar implementation: mapmutex.
But instead of a map of mutexes, in this implementation a single mutex is used to guard the map, and each item in the map is used like a 'lock'. The map itself is just a plain, ordinary map.
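The idea is roughly the following (a sketch of the approach, not the mapmutex package's actual API): one ordinary mutex guards the map, and a key being present in the map means it is currently locked.
type KeyLock struct {
	mu     sync.Mutex
	locked map[string]bool
}

func NewKeyLock() *KeyLock {
	return &KeyLock{locked: make(map[string]bool)}
}

// TryLock reports whether the caller acquired the key.
func (k *KeyLock) TryLock(key string) bool {
	k.mu.Lock()
	defer k.mu.Unlock()
	if k.locked[key] {
		return false
	}
	k.locked[key] = true
	return true
}

// Unlock releases the key so another caller's TryLock can succeed.
func (k *KeyLock) Unlock(key string) {
	k.mu.Lock()
	defer k.mu.Unlock()
	delete(k.locked, key)
}
Callers that fail TryLock simply retry, typically with a small backoff, until they acquire the key.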