I want to search keys by a string pattern, but SCAN doesn't seem as straightforward as KEYS does:
redistemplate.opsForSet().getOperations().keys(pattern);
This is straightforward: if I use my value as my key, I can search and even sort to an extent. My only problem is the warning that says not to use the KEYS command. I'm not sure whether Spring has handled this; please share your thoughts.
You should consider KEYS (http://redis.io/commands/keys) a debug command. Running it in redis-cli on your development instance is perfectly fine, but don't use it in code that will eventually end up on your production instance.
Depending on the size of your Redis database and the pattern used with KEYS, the command can potentially take a long time to execute, and because Redis processes commands on a single thread, the server will not be able to service any other commands during that time.
SCAN may not be as straightforward, but it is the right way to enumerate keys without slowing the server down. And you'll find plenty of samples for Spring, like this one: https://stackoverflow.com/a/30260108/3677188
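To make the contract concrete: SCAN is cursor-based. Each call returns a batch of keys plus a cursor, and you loop until the server hands back cursor 0, so the server stays responsive between calls. Client libraries hide this loop for you (with the redis gem, not Spring, it is just `redis.scan_each(match: "user:*")`). Here is a rough, no-server simulation of the cursor loop, just to illustrate the idea:

```ruby
# Simulated SCAN: fetch keys in batches instead of all at once.
# A real Redis server returns an opaque cursor with each batch and
# signals the end of iteration by returning cursor 0.
def scan_all(keys, batch_size)
  found = []
  cursor = 0
  loop do
    batch = keys[cursor, batch_size] || []
    found.concat(batch)
    cursor += batch_size
    break if cursor >= keys.length # a real server signals this with cursor 0
  end
  found
end

scan_all(%w[a b c d e], 2) # => ["a", "b", "c", "d", "e"]
```

The server does a bounded amount of work per call, which is exactly why SCAN doesn't block the way KEYS does.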
Related
I am writing a small ruby application that utilizes etcd via the etcd-ruby gem to coordinate activities across a cluster.
One problem I have is how to write specs for it.
My first approach was to attempt to mock out the etcd calls at the client level, however this is sub-optimal because the responses returned by the client are quite complex with metadata. I thought about writing a wrapper over the etcd client to strip away the metadata and make a mocking approach easier, but the problem is the algorithm does depend on this metadata at times, so the abstraction becomes very leaky and just a painful layer of indirection.
Another approach is to use VCR to record actual requests. This has the benefit of allowing specs to run without etcd, but it becomes a mess of initializing state and managing cassettes.
This brings me to my question. etcd is fast enough as a solo node that it seems easiest and most straightforward to just use it directly in tests and not attempt to stub it at all. The only problem here is that I can't see any easy way to clear the keyspace between tests. Recursive delete on the root key is not allowed. Also, this doesn't reset the indices. I checked the etcd-ruby gem specs, and it appears to bypass the issue by using keys based on uuids so that keys simply never collide. I suppose that is a viable approach, but is there something better?
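The uuid approach from the etcd-ruby specs can be packaged up so it is less ad hoc: give every test a fresh key prefix, and collisions become impossible without any cleanup. A minimal sketch (the helper name is mine, not from etcd-ruby):

```ruby
require "securerandom"

# Each call yields a namer bound to a fresh, unique etcd prefix, so
# keys from different tests (or test runs) can never collide and
# nothing needs to be cleared between tests.
def fresh_namespace
  prefix = "/test-#{SecureRandom.uuid}"
  ->(name) { "#{prefix}/#{name}" }
end

key_for = fresh_namespace
key_for.call("leader") # e.g. "/test-3f2a9c.../leader"
```

It doesn't reset etcd's indices either, of course, but if your assertions only compare indices relative to each other (rather than to absolute values), that stops mattering.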
I would test against an etcd Docker container, which you can tear down and restore very quickly between tests.
I am writing a bit of code which uses Riak DB and I want to reset my database into a known state at the beginning of each test.
Is there a way to truncate a riak database cleanly? What about a way to execute inside of a transaction and rollback at the end of the test?
Currently I use a bit of code like this:
riak.buckets.each do |bucket|
  bucket.keys.each do |key|
    bucket.delete(key)
  end
end
But I imagine doing this at the beginning of every test would be pretty slow.
I think every test-oriented developer faces this dilemma when working with Riak. As Christian has mentioned, there is no concept of rollbacks in Riak. And there is no single "truncate database" command that you can issue.
You have 3 approaches available to you:
Clear all the data on your test cluster. This essentially means issuing shell commands (assuming your test server is running on the same machine as your test suite). If you're using a Memory backend, this means issuing riak restart between each test. For other backends, you'd have to stop the node and delete the whole data directory and start it again: riak stop && rm -rf <...>/data/* && riak start. PROS: Wipes the cluster data clean between each test. CONS: This is slow (when you take into account shutdown and restart times), and issuing shell commands from your test suite is often awkward. (Sidenote: while it may be slow to do between each test, you can certainly feel free to clear the data directory before each run of the whole test suite.)
Loop through all the buckets and keys and delete them, on your test cluster, as you've suggested above. PROS: Simple to understand and implement. CONS: Also slow (to run between each test).
Have each test clean up after itself. So, if your test creates a User object, make sure to issue a DELETE command for that object at the end of the test. Optionally, test that a user doesn't exist initially, before creating one. (To make doubly sure that the previous test cleaned up). PROS: Simple to understand and implement. Fast (definitely faster than looping through all the buckets and keys between each test). CONS: Easy for developers to forget to clean up after each insert.
After having debated these approaches, I've settled on using #3 (combined, frequently, with wiping the test server data directory before each test suite run).
Some thoughts on mitigating the CONS of the 'each test cleans up after itself, manually' approach:
Use a testing framework that runs tests in random order. Many frameworks, like Ruby's Minitest, do this out of the box. This often helps catch tests that silently depend on state another test forgot to clean up.
Periodically examine your test cluster (via a list buckets) after the tests run, to make sure there's nothing left. In fact, you can do this programmatically at the end of each test suite (something as simple as doing a bucket list and making sure it's empty).
(This is good testing practice in general, but especially relevant with Riak) Write fewer tests that hit the database. Maintain a strict division between Unit Tests (that test object state and behavior without hitting the db) and Integration or Functional Tests (that do hit the db). Make sure there are a lot more of the former than the latter. To put it another way -- you don't have to test that the database works with each unit test. Trust it (though obviously, verify, during the integration tests).
For example, if you're using Riak with Ruby on Rails, and you're testing your models, don't call test_user.save! to verify that a user instance is valid (like I once did, when first getting started). You can simply test for test_user.valid?, and understand that the call to save will work (or fail) accordingly, during actual use. Consider using Mockist-style testing, which verifies whether or not a save! function was actually invoked, instead of actually saving to the db and then reading back. And so on.
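One way to mitigate the main drawback of approach #3 (developers forgetting to clean up) is to route test writes through a tiny tracker that remembers every key and deletes exactly those in teardown. A sketch, where the bucket is anything responding to `store`/`delete` (a thin wrapper over the riak-ruby bucket from the question would do; the class and method names are mine):

```ruby
# Records every key a test writes so teardown can delete exactly those
# keys -- much faster than looping over all buckets, and it can't be
# forgotten if all test writes go through it (e.g. in an after(:each)
# hook that calls cleanup!).
class TestKeyTracker
  def initialize(bucket)
    @bucket = bucket
    @written = []
  end

  def store(key, value)
    @written << key
    @bucket.store(key, value)
  end

  def cleanup!
    @written.each { |key| @bucket.delete(key) }
    @written.clear
  end
end
```

Call `cleanup!` from your framework's per-test teardown hook and the "forgot to clean up" failure mode largely disappears.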
There are a few possible answers here.
Are you testing that data is persisted by querying Riak using its key? If so, you can set up a test server. Documentation, such as it is, is here, http://rubydoc.info/github/basho/riak-ruby-client/Riak/TestServer
Are you testing access by secondary index? If so, why? Do you not trust Riak or the Ruby driver?
In all probability, your tests shouldn't be coupled to the data store in any case. It slows things down.
If you do insist and the TestServer isn't working for you, set up a new bucket for every test run. Each bucket is its own namespace, so it's pretty much clean slate. Periodically, stop the nodes and clear out data directories as per Christian's answer above.
As there is no concept of transactions or rollbacks in Riak, that is not possible. The memory backend is however commonly used for testing as it supports the features of Bitcask (auto-expiry) and LevelDB (secondary indexes). Whenever the database needs to be cleared, the nodes just need to be restarted.
If using Bitcask or LevelDB when testing, the most efficient method to clear the database is to shut down the node and simply remove the data directories.
I have a few keys stored in the Memcached server, like:
KEY-2312sdasd78
KEY-5lk65klk343
KEY-klk34k3lkl3
TEST-34k3l4k3l4
TEST-kl3k2lk3l2
Now I want to remove the keys from the Memcached server that start with "KEY".
I have searched Google, but there is no regex-based support in Memcached.
Has anybody faced this kind of issue, and what is the optimal workaround?
Any help will be appreciated. Thanks.
Possible duplicate: Regex on memcached key?
Also See http://code.google.com/p/memcached-tag/
I think something like this is much easier with something like Redis because it:
Supports Transactions
Supports atomic data structures like Lists
So in Redis when you add a key,value you will add the key to some giant global list in the same transaction.
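A pure sketch of that "record every key in a global set at write time" idea, with a plain Hash and Set standing in for the store (in Redis the write and the SADD would go into the same MULTI transaction, and clearing would be SMEMBERS followed by DEL; the class and method names are mine):

```ruby
require "set"

# Track every written key in a side index so mass deletion never needs
# pattern matching or key enumeration support from the server.
class TrackedStore
  def initialize
    @data  = {}
    @index = Set.new
  end

  def put(key, value)
    @data[key] = value
    @index << key # in Redis: SADD in the same MULTI as the SET
  end

  def get(key)
    @data[key]
  end

  def clear_tracked!
    @index.each { |k| @data.delete(k) }
    @index.clear
  end
end
```

The same bookkeeping works against Memcached too; you just lose the atomicity that a Redis transaction gives you between the write and the index update.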
There's no way to do this without knowing what the keys are.
The only way that you could do something like this is by prefixing each set of keys with something common, e.g. KEY-KEYSET1-. You could then invalidate them all by internally bumping 1 to 2 in your code, which means that the existing values will not be accessed and eventually expire.
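Here is what that generation-bumping trick looks like in code. This is a self-contained sketch: a plain Hash stands in for Memcached, and in a real deployment you would keep the generation number itself in Memcached and fetch it on every read/write (the class and method names are mine):

```ruby
# "Delete" every KEY-* entry without enumerating keys: bump a
# generation number baked into the key prefix. Old entries become
# unreachable and simply age out of Memcached on their own.
class NamespacedCache
  def initialize
    @cache = {}
    @generation = Hash.new(1)
  end

  def write(namespace, key, value)
    @cache[full_key(namespace, key)] = value
  end

  def read(namespace, key)
    @cache[full_key(namespace, key)]
  end

  def invalidate(namespace)
    @generation[namespace] += 1 # all old keys are now unreachable
  end

  private

  def full_key(namespace, key)
    "#{namespace}-v#{@generation[namespace]}-#{key}"
  end
end
```

The cost is one extra lookup per operation (to learn the current generation) in exchange for O(1) invalidation of an entire key family.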
I'm trying to find a good way to handle memcache keys for storing, retrieving, and updating data to/from the cache layer in a more civilized way.
Found this pattern, which looks great, but how do I turn it into a functional part of a PHP application?
The Identity Map pattern: http://martinfowler.com/eaaCatalog/identityMap.html
Thanks!
Update: I have been told about the modified memcache (memcache-tag) that apparently does do a lot of this, but I can't install linux software on my windows development box...
Well, memcache use IS an identity map pattern. You check your cache, then you hit your database (or whatever else you're using). You can go about finding information about the source by storing objects instead of just values, but you'll take a performance hit for that.
You effectively cannot ask the cache what it contains as a list. To mass invalidate, you'll have to keep a list of what you put in and iterate it, or you'll have to iterate every possible key that could fit the pattern of concern. The resource you point out, memcache-tag can simplify this, but it doesn't appear to be maintained inline with the memcache project.
So your options now are iterative deletes, or totally flushing everything that is cached. Thus, I propose a design consideration is the question that you should be asking. In order to get a useful answer for you, I query thus: why do you want to do this?
I need some guidance related to the following scenario in infinispan. Here is my scenario:
1) I created two nodes and started them successfully in Infinispan using client-server mode.
2) In the Hot Rod client I created a RemoteCacheManager and then obtained a RemoteCache.
3) In the remote cache I put a value like this: cache.put(key, new HashMap()); it is added successfully.
4) Now when I try to clear this value using cache.remove(key), I see that it is not removed; the HashMap is still there every time I try.
How can I clear the value so that it is removed from all nodes of the cluster?
How can I also propagate the changes like adding or removing from the value HashMap above?
Has it anything to do with implementing DeltaAware and Delta interface?
Please advise on this concept, or share some pointers where I can learn more.
Thank you
Removal of the HashMap should work as long as you use the same key and have equals() and hashCode() correctly implemented on the key. I assume you're using distributed or replicated mode.
EDIT: I've realized that equals() and hashCode() are not that important for RemoteCache, since the key is serialized anyway and all the comparison will be executed on the underlying byte[].
Remote cache does not directly support DeltaAware. Generally, using these is quite tricky even in library mode.
If you want to use the cache with maps, I suggest using a composite key like cache-key#map-key rather than storing a complex HashMap.
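To illustrate the composite-key layout (sketched here in Ruby for brevity; your real code would use the Java Hot Rod client, with a plain Hash standing in for the RemoteCache): each map field becomes its own cache entry, so adding or removing one field touches a single small value instead of rewriting a whole serialized HashMap.

```ruby
# One cache entry per logical map field, keyed "cache-key#map-key".
def composite_key(cache_key, map_key)
  "#{cache_key}##{map_key}"
end

cache = {} # stand-in for the RemoteCache
cache[composite_key("user:42", "email")] = "a@example.com"
cache[composite_key("user:42", "name")]  = "Ada"

# Removing one field is a single remove() on a small entry:
cache.delete(composite_key("user:42", "email"))
```

Because each mutation is a plain put/remove on its own key, the change propagates across the cluster like any other cache write, with no need for DeltaAware.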