Can someone please confirm whether CacheManager.Net supports Redis pipelining?
I could not find it in the documentation.
Thanks a lot.
Cheers,
U
Kind of.
CacheManager does not support batch operations directly.
But with Redis, you can use cache.Put, which internally uses the fire-and-forget flag of StackExchange.Redis. This is one kind of pipelining, as the client doesn't wait for one operation to complete before executing the next one.
If you use cache.Add (or Update and such) instead, CacheManager has to wait for the reply, e.g. whether the operation was successful, whether the item already existed, etc.
So, if you just want to push a lot of data into the cache, use Put.
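CacheManager's C# API aside, the difference between fire-and-forget writes and waited writes can be sketched in plain Python. This is a toy model of the round-trip behaviour, not the actual StackExchange.Redis client; all names here are illustrative:

```python
import queue
import threading

class FakeRedisServer:
    """Toy single-threaded server: processes commands from a FIFO queue."""
    def __init__(self):
        self.store = {}
        self.inbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            key, value, done = self.inbox.get()
            self.store[key] = value
            if done is not None:        # client asked for a reply
                done.set()

class Client:
    def __init__(self, server):
        self.server = server

    def put(self, key, value):
        """Fire-and-forget (like cache.Put): enqueue and return immediately."""
        self.server.inbox.put((key, value, None))

    def add(self, key, value):
        """Waited write (like cache.Add): block until the server confirms."""
        done = threading.Event()
        self.server.inbox.put((key, value, done))
        done.wait()

server = FakeRedisServer()
client = Client(server)
for i in range(100):
    client.put(f"k{i}", i)       # no round-trip wait between commands
client.add("last", "sync")       # one waited write; FIFO order means all
                                 # earlier puts are processed by now
print(server.store["k99"], server.store["last"])
```

The 100 `put` calls return instantly; only the final `add` pays a round trip, which is exactly why pushing bulk data with Put is so much faster.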
Has anyone used MapDB as a state store in Spring Boot for a request lifecycle?
I mean, setting things like "isDebug" in MapDB for a particular request and then clearing it out at the end of the request.
MapDB sounds very much like how Redux is used in React, so I'm trying to leverage similar patterns.
If you have done so, how do you manage the flushing of data at the end of a request?
how do you manage the flushing of data at the end of a request
The documentation of MapDB is rather sparse, but at first glance DB.close() seems to close the current transaction and write data to its files, if the DB is actually backed by a file, which I guess is what you mean by "flushing". For an in-memory database, I'd assume close() simply destroys it.
Of course this raises the question of why you would want to persist per-request data at all.
Note: Like M. Deinum, I don't really see what you expect to gain from using MapDB.
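For what it's worth, the pattern the question describes, per-request state that is discarded when the request ends, doesn't need a database at all. A language-agnostic sketch of the idea in Python (the class and names are illustrative, not a MapDB or Spring API):

```python
class RequestScope:
    """Holds per-request state and clears it when the request finishes."""
    def __init__(self):
        self.state = {}

    def __enter__(self):
        return self.state

    def __exit__(self, exc_type, exc, tb):
        self.state.clear()          # the "flush at the end of the request"
        return False

scope = RequestScope()
with scope as state:                # one request lifecycle
    state["isDebug"] = True
    during = dict(state)            # state visible while handling the request
after = dict(scope.state)           # empty again once the request ends
print(during, after)
```

In Spring terms this is roughly what a request-scoped bean already gives you for free, which is why a persistent store like MapDB seems unnecessary here.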
A few years ago I read an ODL recommendation not to use the READ operation but instead use a Data Change Listener or one of its variations. Is that recommendation still valid?
Looking at the ODL code, I got the impression that each transaction commit is applied to the "In-Memory Data Store" immediately during the commit, simultaneously with sending notifications to the listeners. Is that correct?
Why, in that case, is reading not as efficient as using the notifications?
Where did you read this recommendation? It depends on your use case. Using a data tree change listener (DTCL) with your own cache is going to have faster access than issuing a read operation, especially in a clustered environment if the shard leader is remote. However maintaining your own cache via a DTCL is eventually consistent, meaning your cache may not have up-to-date data. This has to be considered for the use case. If you need strong consistency, then you must use read operations.
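The trade-off can be sketched generically. This models the idea only, not the actual ODL DTCL API: the store applies commits immediately and emits notifications, a listener-maintained cache drains those notifications asynchronously, so it answers fast but may lag behind, while a direct read is always current:

```python
import queue

class DataStore:
    """Toy data store that emits change notifications on commit."""
    def __init__(self):
        self.data = {}
        self.notifications = queue.Queue()

    def commit(self, key, value):
        self.data[key] = value                 # applied immediately
        self.notifications.put((key, value))   # listener told asynchronously

    def read(self, key):
        return self.data.get(key)              # strongly consistent

class ListenerCache:
    """Cache kept up to date by draining change notifications."""
    def __init__(self, store):
        self.store = store
        self.cache = {}

    def drain(self):
        while not self.store.notifications.empty():
            key, value = self.store.notifications.get()
            self.cache[key] = value

    def get(self, key):
        return self.cache.get(key)             # fast, eventually consistent

store = DataStore()
cache = ListenerCache(store)
store.commit("flag", 1)
stale = cache.get("flag")    # notification not processed yet -> None
cache.drain()
fresh = cache.get("flag")    # now consistent -> 1
print(stale, fresh, store.read("flag"))
```

The window between `commit` and `drain` is exactly the eventual-consistency gap described above; in a clustered deployment the remote read it avoids is what makes the cache worthwhile.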
This is more of a theoretical question.
Imagine that I have two programs running simultaneously. The main one only does something when it receives a flag set to true by a secondary program. So the main program has a function that keeps asking the secondary program for the value of the flag, and when it gets true, it does something.
What I learned at college is that polling is the simplest way of doing that. But when I started working as a developer, coworkers told me that this method generates some overhead, or wastes computation, by asking for a value at fixed intervals.
I tried to come up with ideas for doing this differently and searched the internet, but didn't find a useful approach.
I read about interrupts and passive approaches that let the main program get the data only when the secondary program informs it. But how does that happen? The main program still needs a function to check for the interrupt, right? So doesn't it end up the same as before?
What could I do differently?
There is no magic...
No program will guess when it has new information to read; what you can do is choose between two approaches:
A -> asks -> B
A <- is informed <- B
When to use each? It depends on many other factors, like:
1- How fast do you need the data delivered from the moment it is generated? As fast as possible, or can it wait and accumulate for a while?
2- How fast is the data generated?
3- How many simultaneous clients are requesting data from the same server?
4- What type of data are you dealing with? Persistent? Fast-changing?
If you are building something like a stock analyzer, where you need to ask for the price of stocks every second (and it will also change every second), the approach you mentioned may be the best.
If you are writing a chat app like WhatsApp, where you need to check whether there is a new message for the client and most of the time there won't be, publish/subscribe may be the best.
But all of this is a very superficial look at a high-impact architecture decision; it is not possible to pick the best option by looking at only one factor.
What I want to show is that
coworkers told me that this method generate some overhead or it's
waste of computation
is not a correct statement in general; it may be true in some particular scenario, but overhead will always exist in distributed systems.
The typical way to prevent polling is by using the Publish/Subscribe pattern.
Your client program will subscribe to the server program and when an event occurs, the server program will publish to all its subscribers for them to handle however they need to.
If you flip the order of the requests you end up with something more similar to a standard web API. Your main program (left in your example) would be a server listening for requests. The secondary program would be a client hitting an endpoint on the server to trigger an event.
There are many ways to accomplish this in every language, and it doesn't have to be tied to TCP/IP requests.
I'll add a few links for you shortly.
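A minimal in-process publish/subscribe sketch in Python (class and handler names are illustrative; real systems would use a message broker or sockets, but the shape is the same):

```python
class Publisher:
    """Keeps a list of subscriber callbacks and pushes events to them."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, event):
        for callback in self.subscribers:
            callback(event)         # push the event; nobody polls

received = []
pub = Publisher()
pub.subscribe(received.append)      # main program registers its handler
pub.publish({"flag": True})         # secondary program raises the flag
print(received)
```

The main program does nothing until `publish` fires its callback, which is the whole point: the "keep asking" loop disappears.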
Well, in most languages you won't implement this at such a low level. But theoretically speaking, there are different waiting strategies; what you are describing is active waiting (busy-waiting), which can easily eat all your CPU.
Most languages provide libraries that let you run a process as a service that waits passively and is triggered only when a request comes in.
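Passive waiting is what synchronization primitives like Python's `threading.Event` give you: the waiting thread is put to sleep by the scheduler and woken on `set()`, instead of spinning in a `while not flag:` loop. A small sketch (the 0.1 s sleep just stands in for the secondary program's work):

```python
import threading
import time

flag = threading.Event()

def secondary():
    time.sleep(0.1)      # pretend to do some work
    flag.set()           # signal the main thread; no polling needed

worker = threading.Thread(target=secondary)
worker.start()
# Passive wait: the main thread sleeps until it is woken,
# burning no CPU, unlike an active "while not flag: ..." loop.
signalled = flag.wait(timeout=5)
worker.join()
print(signalled)
```

`wait()` blocks inside the OS, so the "function that checks for the interrupt" from the question is the kernel's job, not a loop in your code.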
Background: I'm writing network traffic processing kernel module.
I'm getting packets using netfilter hooks. All filtering is done inside hook function, but I don't want to do packet processing here. So solution is tasklets or workqueues. I know the difference between them, I can use both, but I have some problems and I need an advice.
Tasklets solution. Preferable. I can create and start a tasklet for each packet, but who will delete this tasklet? The tasklet function itself? I don't think it's a good idea to deallocate a tasklet while it is executing. Create a global pool of tasklets? Well, since there can't be two tasklets executing on one processor, the pool size would be the number of processors. But how do I find out when a tasklet is available for reuse? There are only two states, scheduled and running, but there is no "done" state. OK, I could probably wrap the tasklet in some struct with a flag. But wouldn't that all be overkill?
Workqueue solution. Same problem: who will delete work? Same "solution" as for tasklets?
Workqueue solution 2. Just create a permanent work item at module load time, save packets to some queue, and process them inside that work item. Maybe two work items and two queues: incoming and outgoing. But I'm afraid that with this solution I will use only one (or two) processors, since it looks like a work item can't run on several processors simultaneously.
Any other solutions?
One can use high-priority (WQ_HIGHPRI), unbound (WQ_UNBOUND) workqueues and stick with option 3 ("Workqueue solution 2") listed in the question.
WQ_HIGHPRI ensures that processing is initiated as soon as possible. WQ_UNBOUND eliminates the single-CPU bottleneck, as the scheduler can assign work items to any available CPU.
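Kernel C aside, the shape of that design, one shared packet queue drained by workers that can run on any CPU, can be modelled in user-space Python. This is only a sketch of the concurrency pattern, not kernel code; the worker count and packet count are arbitrary:

```python
import queue
import threading

packets = queue.Queue()
processed = []
lock = threading.Lock()

def worker():
    """Analogue of an unbound work item: any free worker takes the next packet."""
    while True:
        pkt = packets.get()
        if pkt is None:           # shutdown sentinel
            break
        with lock:
            processed.append(pkt)

workers = [threading.Thread(target=worker) for _ in range(4)]
for w in workers:
    w.start()
for i in range(1000):
    packets.put(i)                # the netfilter hook only enqueues
for _ in workers:
    packets.put(None)             # one sentinel per worker
for w in workers:
    w.join()
print(len(processed))
```

The hook-side code stays trivial (just an enqueue), and scaling is a matter of how many workers drain the queue, which is what WQ_UNBOUND buys you in the kernel.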
How does Redis handle multiple threads (from different clients) updating the same data structure in Redis? What is the recommended best practice for such a use case?
If you read The Little Redis Book, at some point this sentence comes up:
"You might not know it, but Redis is actually single-threaded, which is how every command is guaranteed to be atomic.
While one command is executing, no other command will run."
Have a look at http://openmymind.net/2012/1/23/The-Little-Redis-Book/ for more information.
Regards
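That single-threaded model can be sketched in Python: many client threads enqueue commands, but one loop executes them, so each command is atomic without any locking on the data itself. A toy model, not Redis:

```python
import queue
import threading

commands = queue.Queue()
store = {"counter": 0}

def server():
    """Single command loop: commands run one at a time, so each is atomic."""
    while True:
        cmd = commands.get()
        if cmd is None:             # shutdown sentinel
            break
        if cmd == "INCR":
            store["counter"] += 1   # no other command can interleave here

server_thread = threading.Thread(target=server)
server_thread.start()

def client(n):
    for _ in range(n):
        commands.put("INCR")

clients = [threading.Thread(target=client, args=(1000,)) for _ in range(5)]
for c in clients:
    c.start()
for c in clients:
    c.join()
commands.put(None)
server_thread.join()
print(store["counter"])
```

Five concurrent clients never corrupt the counter because the read-modify-write happens inside one loop, which is why Redis commands like INCR are atomic by construction.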