How to create a Singleton Cache in Go

I am working in Go, which I am new to, and I have come across two interesting articles:
https://hackernoon.com/in-memory-caching-in-golang
The Hackernoon one is really good, and its first example (Simple Map) is precisely what I am after for creating a cache, since it shows how to expire values. What I am struggling to understand is whether that implementation creates just one instance of the cache or multiple copies, which would conflict: one value could end up in one copy and a different value in another, and lookups would not work properly.
Another link, https://thedevelopercafe.com/articles/singleton-in-golang-839d8610958b, talks about instantiating exactly one cache.
Both use the sync package, so my question is: can someone with Go experience advise me whether the Hackernoon example's newLocalCache function sets up a singleton, and if not, what do I need to do to add that?

the function called newlocalcache sets up a singleton
No, it constructs and returns a new local cache every time it's called.
if not what do I need to do to add it?
Call it just once. For example:
var localCacheSingleton *localCache
var newLocalCacheOnce sync.Once

func newLocalCache(cleanupInterval time.Duration) *localCache {
    newLocalCacheOnce.Do(func() {
        lc := &localCache{
            users: make(map[int64]cachedUser),
            stop:  make(chan struct{}),
        }
        lc.wg.Add(1)
        go func(cleanupInterval time.Duration) {
            defer lc.wg.Done()
            lc.cleanupLoop(cleanupInterval)
        }(cleanupInterval)
        localCacheSingleton = lc
    })
    return localCacheSingleton
}
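With that in place, every caller shares the same instance; any cleanupInterval passed after the first call is simply ignored, because sync.Once runs the initializer only once. The same pattern in isolation, stripped of the article's cache details, looks like this (type and function names here are illustrative, not from the article):

package main

import (
    "fmt"
    "sync"
)

type cache struct {
    data map[string]string
}

var (
    instance *cache
    once     sync.Once
)

// getCache lazily creates the cache on the first call and returns the
// same pointer on every subsequent call.
func getCache() *cache {
    once.Do(func() {
        instance = &cache{data: make(map[string]string)}
    })
    return instance
}

func main() {
    a := getCache()
    b := getCache()
    fmt.Println(a == b) // true: both calls return the same instance
}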

Related

K8s Operator listen to secret change with event filter

A few months ago we created a controller with kubebuilder, and it runs great.
A few weeks ago we added a “listener” on a secret: when the secret changes (its properties), the reconcile should be invoked. The problem is that it sometimes works and sometimes does not (you change the secret, apply it, and the reconcile doesn't happen), even though we use the exact same secret file.
We spent a few days trying to find the root cause without success (we switched k8s.io/client-go from v0.23.4 to v0.22.3 and now to v0.22.1, which is the only one that works).
Any idea what the issue could be? Any hint would be helpful, or any other approach we could try.
func (r *vtsReconciler) SetupWithManager(mgr ctrl.Manager) error {
    manager := ctrl.NewControllerManagedBy(mgr).
        For(&vts.str).
        WithEventFilter(predicate.Or(predicate.AnnotationChangedPredicate{}))
    manager = manager.Watches(&source.Kind{Type: &v1.Secret{}}, handler.EnqueueRequestsFromMapFunc(func(a client.Object) []reconcile.Request {
        return r.SecretRequests.SecretFinder(a.GetName())
    }))
    return manager.Complete(r)
}

func (secm *SecretMapper) SecretFinder(name string) []reconcile.Request {
    v := cli.ObjectKey{Name: name}
    return secm.SecMap[v.String()]
}
Most likely the issue is that WithEventFilter applies to all objects watched by the controller, including the Secret. The generation is auto-incremented for CRDs, but this doesn't hold for all resource types.
From the GenerationChangedPredicate docs:
// Caveats:
//
// * The assumption that the Generation is incremented only on writing to the spec does not hold for all APIs.
// E.g For Deployment objects the Generation is also incremented on writes to the metadata.annotations field.
// For object types other than CustomResources be sure to verify which fields will trigger a Generation increment when they are written to.
You can check this by creating or updating a secret: you will see that no generation is set (at least not on my local k3d cluster).
Most likely it works on creation because the controller initially syncs existing resources with the cluster.
To solve it you can use:
func (r *vtsReconciler) SetupWithManager(mgr ctrl.Manager) error {
    manager := ctrl.NewControllerManagedBy(mgr).
        For(&vts.str, WithPredicates(predicate.Or(predicate.GenerationChangedPredicate{}, predicate.AnnotationChangedPredicate{})))
    manager = manager.Watches(&source.Kind{Type: &v1.Secret{}}, handler.EnqueueRequestsFromMapFunc(func(a client.Object) []reconcile.Request {
        return r.SecretRequests.FindForSecret(a.GetNamespace(), a.GetName())
    }))
    return manager.Complete(r)
}
which should apply the predicates only to your custom resource.
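One detail worth noting: for the WithPredicates call above to compile, it should come from the controller-runtime builder package (builder.WithPredicates in sigs.k8s.io/controller-runtime/pkg/builder), which is what allows predicates to be scoped to a single For/Owns/Watches clause rather than the whole controller.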

Use a single mutex across multiple goroutines

I'm trying to reduce the number of HTTP requests my Discord bot is making.
It reads from an API.
With the fetched data it updates an internal database and outputs the changes.
The thing is: that database is different for every server the bot is in, and that's where I'm using the goroutines. But some servers need to fetch the same data, and that's where I want to reduce the HTTP requests. Right now I'm making requests regardless of whether I've already fetched a character. I want to create some sort of data that can be shared between the goroutines, and search within it before making a request.
I was advised to use a mutex, and that's what I'm trying. Original question: Working with unbuffered channels in golang
I made a skeleton of the real code I've tried: https://play.golang.org/p/mt229ns1R8m
In this example master := make([][]map[string]interface{}, 0) is simulating the discord servers.
Chars and Chars2 would be the tracked chars for each individual server.
The char "Test" is mutual to both of them, so it should be fetched from the API only once.
It's outputting this:
[[map[Level:15 Name:Test] map[Level:150 Name:Test2]] [map[Level:1500 Name:Test3] map[Level:15 Name:Test]]]
------
A call would be made
A call would be made
A call would be made
A call would be made
Cache: [map[Level:150 Name:Test2] map[Level:15 Name:Test]]Cache: [map[Level:15 Name:Test] map[Level:1500 Name:Test3]]Done
I was expecting the output to be:
[[map[Level:15 Name:Test] map[Level:150 Name:Test2]] [map[Level:1500 Name:Test3] map[Level:15 Name:Test]]]
------
A call would be made
A call would be made
A call would be made
Cache: [map[Level:150 Name:Test2] map[Level:15 Name:Test] map[Level:1500 Name:Test3]]Done
But a new cache is being generated by every goroutine. How can I fix this?
Thanks.
There are too many unknowns here for me to really write a proper design, but let's make a few notes:
Try not to use interface{} at all, if at all possible. In this case, it seems that it must be possible, though I'm not sure what the actual types will be.
Try to make your data as simple as possible, but no simpler. In this case, that probably means: have one data structure for "thing that talks to a Discord server" and a separate one for "thing that talks to the local database" (is this a caching database? if so, what are the criteria for invalidating a cache entry?). But if one "character" (whatever that is—apparently a string) can have different properties per Discord server, that means that your index into your local database is not just a character, but rather a pair of values: the string value itself plus a Discord-server-identifier.
This might give you a functional interface like this:
var cacheServer *CacheServer

func InitCacheServer() error {
    cacheServer = ... // whatever it takes to initialize the cache server
}
(I've assumed lazy initialization of the cache server. If you can do up-front initialization, you can drop the next test below. Replace ValueType with the type of the result of a cached lookup of a name.)
func (ds DiscordServer) Get(name string) (ValueType, error) {
    if cacheServer == nil {
        if err := InitCacheServer(); err != nil {
            return nil, err
        }
    }
    // Do a cache lookup. Tell the cache server that if there
    // is no entry, it should return a NoEntry error and we will
    // fill the cache ourselves, so it should hold this slot as
    // "will be filled, so wait for it".
    slot, v, err := cacheServer.Lookup(name, ds.identity, CacheServer.IntentToFill)
    if err == CacheServer.NoEntry {
        // We have the slot held. Try to look up the right info
        // directly in the Discord server, then cache it.
        v, err = ds.UncachedGet(name)
        // Tell cache server that this is the value, or that it should
        // produce this error instead of NoCache.
        cacheServer.FillSlot(slot, v, err)
    }
    return v, err
}
You might only want to cache some error types, rather than all; that's another one of those design questions that needs an answer that I cannot provide here. There are other ways to do this that don't necessarily need a slot pointer return value, too; I've just chosen this one for this example.
Note that most of the "hard work" is now in the cache server, which definitely requires some fancy footwork. In particular you will want to lock the overall data structure for a little while, use that to find the correct slot, then hold the slot itself so that other users of the slot must wait, while releasing the overall lock so that other users of other entries need not wait. This introduces locking order constraints: be careful to avoid deadlock. One method that should work is:
type CacheServer struct {
    lock sync.Mutex
    data map[string]map[string]*Entry
    // more fields
}

type Entry struct {
    lock        sync.Mutex
    cachedValue ValueType
    cachedError error
}
(You'll need some more types, like Intent—just two enumerated integers for now—below, and probably more fields in the above; this is just a skeleton.)
func (cs *CacheServer) Lookup(name, srv string, flags Intent) (*Entry, ValueType, error) {
    cs.lock.Lock()
    defer cs.lock.Unlock()
    // first, look up the server - if it does not exist, create one
    smap := cs.data[srv]
    if smap == nil {
        smap = make(map[string]*Entry)
        cs.data[srv] = smap
    }
    entry := smap[name]
    if entry == nil {
        // no cached entry - if this is a pure lookup, just error,
        // but if not, make a locked entry
        if flags == CacheServer.IntentToFill {
            // make a new entry and return with it locked
            entry = &Entry{}
            smap[name] = entry
            entry.lock.Lock() // and do not unlock
        }
        return entry, nil, NoEntry
    }
    entry.lock.Lock() // wait for someone to fill it, if needed
    defer entry.lock.Unlock()
    return nil, entry.cachedValue, entry.cachedError
}
You need a routine to fill and release the entry as well, but it's pretty simple. You could, if you choose, make this a method on the Entry type rather than on the CacheServer type, as at least in this particular prototype, there is no need to use the cache server data structures directly. If you start getting fancier with cache invalidation, though, it might be nice to have access to the CacheServer object.
Note: I've designed this so that you can do a cache lookup without an intent-to-fill, if that's useful. If not, there's no reason to bother with the Intent argument.
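For what it's worth, the fill-and-release routine mentioned above can be quite small. A sketch against the types in this answer, using the FillSlot name from the Get example:

// FillSlot records the value (or error) in a slot previously handed out by
// Lookup with intent-to-fill, then unlocks it so any waiting readers proceed.
func (cs *CacheServer) FillSlot(e *Entry, v ValueType, err error) {
    e.cachedValue = v
    e.cachedError = err
    e.lock.Unlock() // Lookup returned the entry locked
}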

How to FIFO Order a Map during Unmarshalling

I know from reading around that maps are intentionally unordered in Go, but they offer a lot of benefits that I would like to use for this problem I'm working on. My question is: how might I order a map FIFO style? Is it even worth trying to make that happen? Specifically, I am looking to unmarshal into a set of structures, hopefully off of an interface.
I have:
type Package struct {
    Account   string
    Jobs      []*Jobs
    Libraries map[string]string
}

type Jobs struct {
    // Name of the job
    JobName string `mapstructure:"name" json:"name" yaml:"name" toml:"name"`
    // Type of the job. Should be one of the strings outlined in the job struct (below)
    Job *Job `mapstructure:"job" json:"job" yaml:"job" toml:"job"`
    // Not marshalled
    JobResult string
    // For multiple values
    JobVars []*Variable
}

type Job struct {
    // Sets/Resets the primary account to use
    Account *Account `mapstructure:"account" json:"account" yaml:"account" toml:"account"`
    // Set an arbitrary value
    Set *Set `mapstructure:"set" json:"set" yaml:"set" toml:"set"`
    // Contract compile and send to the chain functions
    Deploy *Deploy `mapstructure:"deploy" json:"deploy" yaml:"deploy" toml:"deploy"`
    // Send tokens from one account to another
    Send *Send `mapstructure:"send" json:"send" yaml:"send" toml:"send"`
    // Utilize eris:db's native name registry to register a name
    RegisterName *RegisterName `mapstructure:"register" json:"register" yaml:"register" toml:"register"`
    // Sends a transaction which will update the permissions of an account. Must be sent from an account which
    // has root permissions on the blockchain (as set by either the genesis.json or in a subsequent transaction)
    Permission *Permission `mapstructure:"permission" json:"permission" yaml:"permission" toml:"permission"`
    // Sends a bond transaction
    Bond *Bond `mapstructure:"bond" json:"bond" yaml:"bond" toml:"bond"`
    // Sends an unbond transaction
    Unbond *Unbond `mapstructure:"unbond" json:"unbond" yaml:"unbond" toml:"unbond"`
    // Sends a rebond transaction
    Rebond *Rebond `mapstructure:"rebond" json:"rebond" yaml:"rebond" toml:"rebond"`
    // Sends a transaction to a contract. Will utilize eris-abi under the hood to perform all of the heavy lifting
    Call *Call `mapstructure:"call" json:"call" yaml:"call" toml:"call"`
    // Wrapper for mintdump dump. WIP
    DumpState *DumpState `mapstructure:"dump-state" json:"dump-state" yaml:"dump-state" toml:"dump-state"`
    // Wrapper for mintdump restore. WIP
    RestoreState *RestoreState `mapstructure:"restore-state" json:"restore-state" yaml:"restore-state" toml:"restore-state"`
    // Sends a "simulated call" to a contract. Predominantly used for accessor functions ("Getters" within contracts)
    QueryContract *QueryContract `mapstructure:"query-contract" json:"query-contract" yaml:"query-contract" toml:"query-contract"`
    // Queries information from an account.
    QueryAccount *QueryAccount `mapstructure:"query-account" json:"query-account" yaml:"query-account" toml:"query-account"`
    // Queries information about a name registered with eris:db's native name registry
    QueryName *QueryName `mapstructure:"query-name" json:"query-name" yaml:"query-name" toml:"query-name"`
    // Queries information about the validator set
    QueryVals *QueryVals `mapstructure:"query-vals" json:"query-vals" yaml:"query-vals" toml:"query-vals"`
    // Makes an assertion (useful for testing purposes)
    Assert *Assert `mapstructure:"assert" json:"assert" yaml:"assert" toml:"assert"`
}
What I would like to do is have Jobs contain a map of string to Job and eliminate the Job field, while maintaining the order in which the entries were placed in the config file (currently using Viper). Any and all suggestions for how to achieve this are welcome.
You would need to hold the keys in a separate slice and work with that.
type fifoJob struct {
    m      map[string]*Job
    order  []string
    result []string
    // Not sure where JobVars will go.
}

func (str *fifoJob) Enqueue(key string, val *Job) {
    str.m[key] = val
    str.order = append(str.order, key)
}

func (str *fifoJob) Dequeue() {
    if len(str.order) > 0 {
        delete(str.m, str.order[0])
        str.order = str.order[1:]
    }
}
Anyways if you're using viper you can use something like the fifoJob struct defined above. Also note that I'm making a few assumptions here.
type Package struct {
    Account   string
    Jobs      *fifoJob
    Libraries map[string]string
}

var config Package
config.Jobs = &fifoJob{}
config.Jobs.m = map[string]*Job{}

// Your config file would need to store the order in an array.
// Would've been easy if viper had a getSlice method returning []interface{}
config.Jobs.order = viper.GetStringSlice("package.jobs.order")

for k, v := range viper.GetStringMap("package.jobs.jobmap") {
    if job, ok := v.(Job); ok {
        config.Jobs.m[k] = &job
    }
}
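To actually process the jobs in FIFO order, you would range over the order slice rather than the map itself. A short sketch, continuing from the assumptions above:

// Iterate in insertion order; the map only provides the lookup.
for _, name := range config.Jobs.order {
    if job, ok := config.Jobs.m[name]; ok {
        // process job here, in the order it appeared in the config
        _ = job
    }
}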
PS: You're giving too many irrelevant details in your question. I was asking for an MCVE.
Maps are by nature unordered, but you can fill up a slice with your keys instead. Then you can range over the slice and sort it however you like, and you can pull out specific elements with [i].
Check out pages 170, 203, or 204 of Programming in Go for some great examples of this.

Return Slice, Channel or Custom Iterator from DB Repository?

What would be the idiomatic way to design a database repository in Go?
I am using Couchbase cbgo to fetch the items, which returns a reader from which I get each item one by one.
I do not want to expose this abstraction to the end user of my lib.
So what would be the best way to do this?
I could just iterate over the items and append them to a slice.
Or I could return a channel and push each row onto it so that the client can range over it.
Or I could make my own iterator abstraction.
What do others do in this kind of scenario?
I don't really need the result as a slice, as the data is just piped to other modules.
Maps.
I learned more about this approach here.
First, create a struct with the db handlers.
Second, make interfaces matching the db handler's methods.
Third, use the custom interface as the value type of the map.
Now, regardless of the model struct, you should be able to use the db handler's methods:
dbHandler := infrastructure.NewSqliteHandler("/var/tmp/production.sqlite")

handlers := make(map[string]interfaces.DbHandler)
handlers["DbUserRepo"] = dbHandler
handlers["DbCustomerRepo"] = dbHandler
handlers["DbItemRepo"] = dbHandler
handlers["DbOrderRepo"] = dbHandler

Container types in Go

I am trying to familiarize myself with Go, so I was trying to implement a search function, but looking through the docs for the container types, none of the built-in types implements a contains method. Am I missing something, and if not, how do I go about testing for membership? Do I have to implement my own method, or do I have to iterate through all elements? If so, what is the rationale behind the omission of this elementary method for container types?
The standard library's container types require you to do type assertions when pulling elements out. The containers themselves have no way of testing for membership because they don't know the types they contain and have no way of comparing them.
Ric Szopa's skip list implementation might be what you're looking for. It has a Set type which implements a Contains method.
https://github.com/ryszard/goskiplist
I've been using it in production and am quite happy with it.
Maps are a built-in type that does have a "contains" construct, though it is a language construct rather than a method.
http://play.golang.org/p/ddpmiskxqS
package main

import (
    "fmt"
)

func main() {
    a := map[string]string{"foo": "bar"}

    _, k := a["asd"]
    fmt.Println(k)

    _, k = a["foo"]
    fmt.Println(k)
}
With the container/list package, you write your own loop to search for things. The reasoning for not providing this in the package is probably, as Dystroy said, that it would hide an O(n) operation.
You can't add a method, so you just write a loop.
for e := l.Front(); e != nil; e = e.Next() {
    data := e.Value.(dataType) // type assertion
    if /* test on data */ {
        // do something
        break
    }
}
It's simple enough and the O(n) complexity is obvious.
In your review of data structures supplied with Go that support searching, don't miss the sort package. Functions there allow a slice to be sorted in O(n log(n)) and then binary searched in O(log(n)) time.
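For example, a quick sketch with the sort package (the data here is made up for illustration):

package main

import (
    "fmt"
    "sort"
)

func main() {
    names := []string{"carol", "alice", "bob"}
    sort.Strings(names) // O(n log n)

    // SearchStrings does a binary search in O(log n) and returns the index
    // where the value is, or where it would be inserted.
    i := sort.SearchStrings(names, "bob")
    found := i < len(names) && names[i] == "bob"
    fmt.Println(found) // true
}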
Finally as Daniel suggested, consider third-party packages. There are some popular and mature packages for container types.
