I need to store some keys in the cache, that each one expires individually, but have them grouped.
So I do something like this:
Connection.GetDatabase().StringSet($"Logged:{userID}", "", TimeSpan.FromSeconds(30));
At some point I want to get all the grouped keys, so following the example in library's github documentation
https://github.com/StackExchange/StackExchange.Redis/blob/main/docs/KeysScan.md
I do this:
var servers = Connection.GetEndPoints();
return Connection.GetServer(servers[0]).Keys(pattern: "Logged:*");
But in the same page in github there is this warning
Either way, both SCAN and KEYS will need to sweep the entire keyspace,
so should be avoided on production servers - or at least, targeted at
replicas.
What else can I use to achieve what I want without using Keys, if we don't have replicas?
Is it acceptable to perform multiple increment operations on different fields of the same object on Parse Server?
e.g., in Cloud Code:
node.increment('totalExpense', cost);
node.increment('totalLabourCost', cost);
node.increment('totalHours', hours);
return node.save(null,{useMasterKey: true});
It seems like MongoDB supports it, based on this answer, but does Parse?
Yes. One thing you can't do is both add and remove something from the same array within the same save. You can only do one of those operations. But, incrementing separate keys shouldn't be a problem. Incrementing a single key multiple times might do something weird but I haven't tried it.
FYI you can also use the .increment method on a key for a shell object. I.e., this works:
var node = new Parse.Object("Node");
node.id = request.params.nodeId;
node.increment("myKey", value);
return node.save(null, {useMasterKey:true});
Even though we didn't fetch the data, we don't need to know the previous value in order to increment it on the database. Note, though, that since the object was never fetched, you can't read any of its other fields here.
I'm using Redis to store a bunch of "Foos" in a hash:
foo:<id> => {
name = 'whatever',
status = 'incomplete|complete|removed',
user = <user_id>,
...
}
I want to set up an index so I can pull Foos with a particular status for a particular user. Best thing I've come up with is using sets named like so:
foo:user:<user_id>:status:<status> => [ <foo_id>, <foo_id2>, ... ]
But that seems very clunky, and I'd have to make sure to track the old status and remove it from one set when I change the status, to keep data consistent. Is there a more clever structure I can use here?
I think the way you're thinking about storing this stuff is fine. You can always change the foo:user:<user_id>:status:<status> name if you think it's too long, but the idea makes sense. The one thing I'd add is a short Lua function that updates statuses for you: it gets the old status from a foo, sets the foo's status to the new status, and then updates the old status's set and the new status's set. Those operations are each O(1), so it won't be expensive (especially since it's Lua), and since everything is managed by one little script, you won't need to repeat that bookkeeping in your application code every time.
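For what it's worth, a minimal sketch of such a script, sent from the StackExchange.Redis client used in the first question (that client is my assumption here; any client that can EVAL a script works the same way, and db is an IDatabase from your ConnectionMultiplexer):
// Minimal sketch, assuming StackExchange.Redis and a non-clustered server
// (the index set names are built inside the script, which Redis Cluster
// would not allow).
const string moveStatusScript = @"
    local old  = redis.call('HGET', KEYS[1], 'status')
    local user = redis.call('HGET', KEYS[1], 'user')
    redis.call('HSET', KEYS[1], 'status', ARGV[1])
    if old then
        redis.call('SREM', 'foo:user:' .. user .. ':status:' .. old, ARGV[2])
    end
    redis.call('SADD', 'foo:user:' .. user .. ':status:' .. ARGV[1], ARGV[2])
    return old";

// Move foo 42 to 'complete': rewrites the hash and swaps the id between the
// old and new status sets in one step.
db.ScriptEvaluate(moveStatusScript,
    new RedisKey[] { "foo:42" },
    new RedisValue[] { "complete", "42" });
Redis runs the whole script atomically, so the hash and the two index sets can never disagree, even if two writers race on the same foo.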
How to use the KV_INCR_KEY?
I found a useful feature in the G-WAN API, but without any sample.
I want to add items to the KV store with this as primary key.
Also, how to get the value of this key?
The KV_INCR_KEY value is a flag intended to be passed to kv_add().
You get the newly inserted key's value by checking the return value of kv_add(). The documentation states:
kv_add(): add/update a value associated to a key
return: 0:out of memory, else:pointer on existing/inserted kv_item struct
This was derived from an idea discussed on the G-WAN forum. And, like some other flags (timestamp or persistence, for example), it has not been implemented yet (KV_NO_UPDATE is functional).
Since what follows the next version (focussed on new scripted languages) is a kind of zero-configuration mapReduce, the KV store will get more attention soon.
I'm using memcached (specifically the Enyim memcached client) and I would like to be able to make keys in the cache dependent on other keys, i.e. if Key A is dependent on Key B, then whenever Key B is deleted or changed, Key A is also invalidated.
If possible I would also like to make sure that data integrity is maintained in the case where a node in the cluster fails, i.e. if Key B is at some point unavailable, Key A should still be treated as invalid whenever Key B would have been invalid.
Based on this post I believe that this is possible, but I'm struggling to understand the algorithm enough to convince myself how / why this works.
Can anyone help me out?
I've been using memcached quite a bit lately, and I'm fairly sure what you're trying to do with dependencies isn't possible with memcached "as is": it would need to be handled on the client side. Likewise, data replication should happen server side, not from the client; these are two different domains. (With memcached at least, given its lack of data storage logic. The point of memcached, though, is just that: extreme minimalism for better performance.)
For the data replication (protection against a physically failing cluster node) you should check out Membase http://www.couchbase.org/get/couchbase/current instead.
For the deps algorithm, I could see something like this in a client: for any given key there is an additional key holding the list/array of dependent keys.
# - delete a key and, recursively, every key that depends on it:
function deleteKey( keyname ):
    deps = client.getDeps( keyname )
    foreach ( deps as dep ):
        deleteKey( dep )                      # recursion also removes dep itself
    endeach
    memcached.delete( keyname )
    memcached.delete( keyname + "_deps" )     # drop the dependency list too
endfunction

# return the list of keynames, or an empty list if the key doesn't exist
function client.getDeps( keyname ):
    return memcached.get( keyname + "_deps" ) or array()
endfunction
# Key "demokey1" and its counterpart "demokey1_deps". In the list of keys stored in
# "demokey1_deps" there is "demokey2" and "demokey3".
deleteKey( "demokey1" );
# this would first perform a memcached get on "demokey1_deps" then with the
# value returned as a list of keys ("demokey2" and "demokey3") run deleteKey()
# on each of them.
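Translated to the Enyim client mentioned in the question, a rough sketch could look like the following (the "_deps" suffix and the List<string> payload are conventions assumed for this example, not anything Enyim provides):
using System.Collections.Generic;
using Enyim.Caching;

public static class DependentKeys
{
    // Delete a key and, recursively, every key listed under "<key>_deps".
    public static void DeleteKey(MemcachedClient client, string keyName)
    {
        var deps = client.Get<List<string>>(keyName + "_deps") ?? new List<string>();

        foreach (var dep in deps)
            DeleteKey(client, dep);        // recursion removes dep and its own dependents

        client.Remove(keyName);
        client.Remove(keyName + "_deps");  // drop the dependency list as well
    }
}
The dependency lists themselves also have to be maintained by the client, e.g. with client.Store(StoreMode.Set, key + "_deps", list) whenever a dependency is registered.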
Cheers
I don't think it's a direct solution, but try creating a system of namespaces in your memcache keys, e.g. http://www.cakemail.com/namespacing-in-memcached/. In short, the keys are generated so that they contain the current values of other memcached keys. In the namespacing problem the idea is to invalidate a whole range of keys that are within a certain namespace. This is achieved by something like incrementing the value of the namespace key; any keys referencing the previous namespace value will no longer match when the key is regenerated.
Your problem looks a little different, but I think that by setting up Key A to be in the Key B "namespace", if node B was unavailable then calculating Key A's full namespaced key, e.g.
"Key A|Key B:<whatever Key B value is>"
will return false, thus allowing you to determine that B is unavailable and invalidate the cache lookup for Key A.
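A rough sketch of that idea with the Enyim client (the key layout and helper names are mine; I also use a fresh token rather than a numeric counter, which does the same job as the article's incr without worrying about how the counter value is encoded):
using System;
using Enyim.Caching;
using Enyim.Caching.Memcached;

public static class NamespacedKeys
{
    // Read the current namespace token for Key B; null means B's node is
    // unreachable (or the token was evicted), so the caller should treat
    // Key A as a cache miss.
    public static string GetNamespacedKey(MemcachedClient client, string keyA, string keyB)
    {
        var token = client.Get<string>(keyB + "_ns");
        if (token == null)
            return null;                               // B unavailable -> A is invalid

        return keyA + "|" + keyB + ":" + token;
    }

    // Invalidate everything hanging off Key B by replacing its namespace token;
    // keys built against the old token can never be looked up again.
    public static void InvalidateNamespace(MemcachedClient client, string keyB)
    {
        client.Store(StoreMode.Set, keyB + "_ns", Guid.NewGuid().ToString("N"));
    }
}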
Given the SPList.ID and a site collection (or an SPWeb with subwebs), how do I quickly find the document library with the given ID?
I can recursively enumerate through all webs and perform a web.Lists[guid] on each one of them, but there might be thousands of subwebs in my case, and I'm looking for a realtime solution.
If there is no way to do this quickly, any other suggestions on how to uniquely identify a document library? I could store the full path (url), but the identification will be publicly visible and I don't feel very comfortable giving away our exact SharePoint document structure like that. Should I resort to maintaining a manual ID <-> library mapping in a separate list?
I vote for the manual ID -> URL pair matching in a top-level, well-known list that's visible only to the elevated privileges account.
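A rough sketch of that lookup, assuming a well-known mapping list (here called "LibraryMap") with the public ID in the Title column and the library's URL in a "LibraryUrl" column; the list and column names are placeholders, not anything SharePoint defines:
using Microsoft.SharePoint;

// Resolve a public id to a library URL via a well-known mapping list.
// "LibraryMap" and "LibraryUrl" are placeholder names for this sketch.
public static string ResolveLibraryUrl(SPSite site, string publicId)
{
    using (SPWeb rootWeb = site.OpenWeb())
    {
        SPList map = rootWeb.Lists["LibraryMap"];

        SPQuery query = new SPQuery();
        query.Query =
            "<Where><Eq><FieldRef Name='Title'/>" +
            "<Value Type='Text'>" + publicId + "</Value></Eq></Where>";
        query.RowLimit = 1;

        SPListItemCollection items = map.GetItems(query);
        return items.Count > 0 ? (string)items[0]["LibraryUrl"] : null;
    }
}
Since the mapping list is meant to be readable only by an elevated account, the call would normally be wrapped in SPSecurity.RunWithElevatedPrivileges.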
Since you are storing the ListID somewhere, you may also store the WebId. Lists are always opened through their parent SPWeb, so if you go to:
http://toplevel/_layouts/ListGeneralSettings.aspx?ID={GUID1} // OK
http://toplevel/sub1/_layouts/ListGeneralSettings.aspx?ID={GUID1} // Won't work (same GUID)
Having the WebId and ListId you can simply:
// dispose both the SPSite and the SPWeb we open
using (SPSite site = new SPSite("http://url"))
using (SPWeb subweb = site.OpenWeb(new Guid("{000...}")))
{
    SPList list = subweb.Lists.GetList(new Guid("{111...}"), true);
    // list logic
}
MS does not support this :)...
But take a look at this for giggles: http://weblogs.sqlteam.com/jhermiz/archive/2007/08/15/60288.aspx
If you have MOSS Search available, then it might help, depending on the lag you have between these lists getting created and needing to search for them. You could probably map list id as a managed property and do a quick search for list objects with the id in question.
For lots of classes of problems it seems like search is the fastest way to rip through huge sets of data. In fact if this approach worked for you, you really wouldn't even need to know the site collection up front. Don't have access to any of my MOSS environments at the moment, so can't verify this will work though.
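If it helps, a very rough sketch of what that query could look like with the MOSS query object model; the "ListID" managed property is exactly the part you would have to map yourself, so treat it (and the stored value format) as hypothetical:
using System;
using System.Data;
using Microsoft.Office.Server.Search.Query;
using Microsoft.SharePoint;

// Look up a document library by its GUID through MOSS enterprise search.
// "ListID" is a hypothetical managed property mapped from the crawled list id.
public static DataTable FindListById(SPSite site, Guid listId)
{
    FullTextSqlQuery query = new FullTextSqlQuery(site);
    query.QueryText =
        "SELECT Title, Path FROM SCOPE() WHERE ListID = '" + listId + "'";
    query.ResultTypes = ResultType.RelevantResults;

    ResultTableCollection results = query.Execute();
    ResultTable relevant = results[ResultType.RelevantResults];

    DataTable table = new DataTable();
    table.Load(relevant, LoadOption.OverwriteChanges);
    return table;
}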