Get number of weak_ptr objects that point to resource - caching

I am trying to create a custom caching mechanism where I return a weak_ptr to the cached object. Internally, I hold a shared_ptr to control the lifetime of the object.
When the pre-set maximum cache size is consumed, the disposer looks for cached objects that have not been accessed for a long time and cleans them up.
Unfortunately this may not be ideal. If it were possible to check how many weak_ptrs can still reach a given cached object, that count could be a criterion for deciding whether or not to clean it up.
It turns out there is no way to check how many weak_ptrs hold a handle to the resource.
Yet when I look at the shared_ptr documentation and implementation notes,
=> the number of weak_ptrs that refer to the managed object
is part of the implementation (it lives in the control block). Why is this not exposed through an API?
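For reference, use_count() reports only the number of shared_ptrs; the weak count kept in the control block has no public accessor. If an eviction policy really needs to know how many handles are outstanding, one workaround is to count them yourself by wrapping each handout in a small RAII handle. A minimal sketch, with all names hypothetical:

    #include <atomic>
    #include <iostream>
    #include <memory>

    // Pairs the cached resource with a counter of outstanding handles,
    // since the standard library exposes no weak count.
    struct CacheEntry {
        std::shared_ptr<int> resource;         // owned by the cache
        std::atomic<long>    handle_count{0};  // handles given out
    };

    // RAII handle: counts itself in and out, observes the resource weakly.
    class CacheHandle {
    public:
        explicit CacheHandle(const std::shared_ptr<CacheEntry>& e)
            : entry_(e), weak_(e->resource) {
            entry_->handle_count.fetch_add(1, std::memory_order_relaxed);
        }
        ~CacheHandle() {
            entry_->handle_count.fetch_sub(1, std::memory_order_relaxed);
        }
        CacheHandle(const CacheHandle&)            = delete;
        CacheHandle& operator=(const CacheHandle&) = delete;

        // Same contract as weak_ptr::lock(): empty if the cache evicted it.
        std::shared_ptr<int> lock() const { return weak_.lock(); }

    private:
        std::shared_ptr<CacheEntry> entry_;  // keeps the counter alive
        std::weak_ptr<int>          weak_;   // does NOT keep the resource alive
    };

    int main() {
        auto entry = std::make_shared<CacheEntry>();
        entry->resource = std::make_shared<int>(42);

        CacheHandle h(entry);
        std::cout << "handles: " << entry->handle_count << '\n';  // prints 1
        // A disposer could prefer evicting entries whose handle_count is 0.
        if (auto sp = h.lock()) std::cout << "value: " << *sp << '\n';
    }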

Related

Which classes are Rails cacheable?

My class caches fine in the development environment, but can I be sure about production, with memcached, Redis, or whatever else running? I wonder which data types are cacheable with the low-level Rails.cache.write('mykey', myobj) besides strings, numbers, and arrays of them. Is there some criterion for telling whether a given class is safe to cache, at least with the typical cache stores?
Short answer: ANY object.
Longer answer: By default, objects over 1 KB will be compressed. And by default, the compression is done via Marshal.dump.
The documentation for Marshal.dump states:
Marshal can't dump following objects:
- anonymous Class/Module
- objects which are related to system (ex: Dir, File::Stat, IO, File, Socket and so on)
- an instance of MatchData, Data, Method, UnboundMethod, Proc, Thread, ThreadGroup, Continuation
- objects which define singleton methods
So in order to cache large objects that fall into the above categories, you'd need to either increase the compress_threshold, set compress: false, or define an alternate way of "compressing" the data (?!).

C++ std::unique_ptr with STL container

I've got a piece of code which uses a std::set to keep a bunch of pointers.
I used this to be sure that each pointer would be present only once in my container.
Then I heard about std::unique_ptr, which ensures the pointer exists only once in my entire code, and that's exactly what I need.
So my question is quite simple: should I change my container type to std::vector, or would leaving it a std::set not change anything?
I think the job your set is doing is probably different from unique_ptr's.
Your set is likely recording some events, and ensuring that only one event is recorded for each object that triggers, using these terms very loosely.
An example might be tracing through a mesh and recording all the nodes that are passed through.
The objects themselves already exist and are owned elsewhere.
The purpose of unique_ptr is to ensure that there is only one owner for a dynamically allocated object, and to ensure automatic destruction of the object. Your objects already have owners; they don't need new ones!
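To make the distinction concrete, here is a minimal sketch (the Node type and the mesh-traversal scenario are assumptions drawn from the example above): the vector of unique_ptrs owns the nodes, while the set only records non-owning observers.

    #include <memory>
    #include <set>
    #include <vector>

    struct Node { int id; };

    int main() {
        // Owner: the unique_ptrs control each Node's lifetime.
        std::vector<std::unique_ptr<Node>> mesh;
        mesh.push_back(std::make_unique<Node>(Node{1}));
        mesh.push_back(std::make_unique<Node>(Node{2}));

        // Observer: raw pointers record which nodes were passed through;
        // set semantics guarantee each node is recorded at most once.
        std::set<Node*> visited;
        visited.insert(mesh[0].get());
        visited.insert(mesh[0].get());  // duplicate insert is a no-op
        // visited.size() == 1
    }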

Using Core Data as cache

I am using Core Data for its storage features. At some point I make external API calls that require me to update the local object graph. My current (dumb) plan is to clear out all instances of the old NSManagedObjects (regardless of whether they have been updated) and replace them with their new equivalents -- a trump merge policy of sorts.
I feel like there is a better way to do this. I have unique identifiers from the server, so I should be able to match them to my objects in the store. Is there a way to do this without manually fetching objects from the context by their identifiers and resetting each property? Is there a way for me to create a completely new context, regenerate the object graph, and just hand it to Core Data to merge based on the unique identifiers?
Your strategy of matching, based on the server's unique IDs, is a good approach. Hopefully you can get your server to deliver only the objects that have changed since the time of your last update (which you will keep track of, and provide in the server call).
In order to update the Core Data objects, though, you will have to fetch them, instantiate the NSManagedObjects, make the changes, and save them. You can do all of this on a background thread (child context, performBlock:), but you'll still have to round-trip your objects into memory and back to the store. Doing it in a child context on its own thread will keep your UI snappy, but you'll still have to do the processing.
Another idea: In the last day or so I've been reading about AFIncrementalStore, an NSIncrementalStore implementation which uses AFNetworking to provide Core Data properties on demand, caching locally. I haven't built anything with it yet but it looks pretty slick. It sounds like your project might be a good use of this library. Code is on GitHub: https://github.com/AFNetworking/AFIncrementalStore.

Cache Management with Numerous Similar Database Queries

I'm trying to introduce caching into an existing server application because the database is starting to become overloaded.
Like many server applications we have the concept of a data layer. This data layer has many different methods that return domain model objects. For example, we have an employee data access object with methods like:
findEmployeesForAccount(long accountId)
findEmployeesWorkingInDepartment(long accountId, long departmentId)
findEmployeesBySearch(long accountId, String search)
Each method queries the database and returns a list of Employee domain objects.
Obviously, we want to try and cache as much as possible to limit the number of queries hitting the database, but how would we go about doing that?
I see a couple possible solutions:
1) We create a cache entry for each method call. E.g. for findEmployeesForAccount we would add an entry with a key account-employees-accountId; for findEmployeesWorkingInDepartment we could add an entry with a key department-employees-accountId-departmentId, and so on (a key-building sketch follows this list). The problem I see with this is that when we add a new employee to the system, we need to make sure the employee is added to every cached list where appropriate, which seems hard to maintain and bug-prone.
2) We create a more generic query for findEmployeesForAccount (with more joins and/or queries because more information will be required). For other methods, we use findEmployeesForAccount and remove entries from the list that don't fit the specified criteria.
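For reference, the kind of key scheme described in 1) can be generated mechanically from the method name and its parameters. A minimal C++17 sketch (the helper name is hypothetical):

    #include <sstream>
    #include <string>

    // Hypothetical helper: derives one cache key per method call
    // from the method name and its parameters.
    template <typename... Parts>
    std::string cacheKey(const std::string& method, const Parts&... parts) {
        std::ostringstream key;
        key << method;
        ((key << '-' << parts), ...);  // C++17 fold expression
        return key.str();
    }

    int main() {
        auto k1 = cacheKey("account-employees", 42L);         // "account-employees-42"
        auto k2 = cacheKey("department-employees", 42L, 7L);  // "department-employees-42-7"
    }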
I'm new to caching, so I'm wondering what strategies people use to handle situations like this. Any advice and/or resources on this type of stuff would be greatly appreciated.
I've been struggling with the same question myself for a few weeks now... so consider this a half-answer at best. One bit of advice that has been working out well for me is to use the Decorator Pattern to implement the cache layer. For example, here is an article detailing this in C#:
http://stevesmithblog.com/blog/building-a-cachedrepository-via-strategy-pattern/
This allows you to literally "wrap" your existing data access methods without touching them. It also makes it easy to swap the cached version of your DAL for the direct-access version at runtime (which can be useful for unit testing).
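A minimal C++ sketch of that decorator idea, reusing one method name from the question (the interface, the Employee type, and the cache policy are all assumptions):

    #include <map>
    #include <memory>
    #include <string>
    #include <vector>

    struct Employee { long id; std::string name; };

    // The existing data-access interface (assumed, not the poster's code).
    struct EmployeeDao {
        virtual ~EmployeeDao() = default;
        virtual std::vector<Employee> findEmployeesForAccount(long accountId) = 0;
    };

    // Stub direct-access implementation, standing in for the real DB code.
    struct DbEmployeeDao : EmployeeDao {
        std::vector<Employee> findEmployeesForAccount(long /*accountId*/) override {
            return { {1, "Alice"}, {2, "Bob"} };  // pretend this hit the DB
        }
    };

    // Decorator: same interface, but consults a cache before delegating.
    class CachingEmployeeDao : public EmployeeDao {
    public:
        explicit CachingEmployeeDao(std::unique_ptr<EmployeeDao> inner)
            : inner_(std::move(inner)) {}

        std::vector<Employee> findEmployeesForAccount(long accountId) override {
            const std::string key = "account-employees-" + std::to_string(accountId);
            auto it = cache_.find(key);
            if (it != cache_.end()) return it->second;      // cache hit
            auto result = inner_->findEmployeesForAccount(accountId);
            cache_[key] = result;                           // cache miss: fill
            return result;
        }

    private:
        std::unique_ptr<EmployeeDao> inner_;                // the wrapped DAL
        std::map<std::string, std::vector<Employee>> cache_;
    };

    int main() {
        CachingEmployeeDao dao(std::make_unique<DbEmployeeDao>());
        auto first  = dao.findEmployeesForAccount(42);  // misses, fills cache
        auto second = dao.findEmployeesForAccount(42);  // served from cache
    }

Whether the application sees the cached or the direct version is then just a question of which concrete type is constructed at the composition root.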
I'm still struggling to manage my cache keys, which seem to spiral out of control when there are numerous parameters involved. Inevitably, something ends up not being properly cleared from the cache and I have to resort to heavy-handed ClearAll() approaches that just wipe out everything. If you find a solution for cache key management, I would be interested, but I hope the decorator-pattern approach is helpful.

Caching and clearing in a proxy with relations between proxy calls

As part of a system I am working on, we have put a layer of caching in a proxy which calls another system. The key to this cache is built up from the key-value pairs used in the proxy call, so if the proxy is called with the same values the item will be retrieved from the cache rather than from the other service. This works and is fairly simple.
It gets more complicated when it comes to clearing the cache, as it is not obvious which items to clear when an item is changed. If object A is contained in nodeset B and object A is changed, how do we know that nodeset B is stale?
We have got round the problem by having the service we call return, whenever objects are changed, the nodesets that need clearing. However, this breaks encapsulation and adds a layer of complexity, in that we have to inspect the responses to see what needs clearing.
Is there a better/standard way to deal with such situations?
Isn't this the sort of thing that could be (and should be) handled with the Observer pattern? Namely, B should listen to events that affect its liveness, in this case the state of A.
A Map is a pretty natural abstraction for a cache and this is how Oracle Coherence and Terracotta do it. Coherence, with which I'm far more familiar, has mechanisms to listen to cache events either in general or for specific nodes. That's probably what you should emulate.
You also might want to look at the documentation for either of those, even if it's just as a guide or a source of ideas.
You don't say what platform you're running on, but perhaps we can suggest some alternatives to rolling your own, which is always going to be fraught with problems, particularly with something as complicated as a cache (make no mistake: caches are complicated).
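A minimal sketch of that observer idea, assuming change notifications carry the changed object's ID and that the cache records which nodesets depend on which objects (all names are hypothetical):

    #include <map>
    #include <set>
    #include <string>

    // The cache tracks, per object ID, the keys of the nodesets that
    // contain it, and evicts those keys when the object changes.
    class NodesetCache {
    public:
        void put(const std::string& key, const std::set<long>& dependsOn) {
            for (long id : dependsOn) dependents_[id].insert(key);
            live_.insert(key);
        }

        // Called by whatever publishes "object changed" events.
        void onObjectChanged(long objectId) {
            auto it = dependents_.find(objectId);
            if (it == dependents_.end()) return;
            for (const auto& key : it->second) live_.erase(key);  // evict stale
            dependents_.erase(it);
        }

        bool contains(const std::string& key) const { return live_.count(key) != 0; }

    private:
        std::map<long, std::set<std::string>> dependents_;  // object -> keys
        std::set<std::string> live_;                        // currently valid keys
    };

    int main() {
        NodesetCache cache;
        cache.put("nodeset-B", {101});  // nodeset B depends on object A (id 101)
        cache.onObjectChanged(101);     // object A changed...
        // ...so cache.contains("nodeset-B") is now false: B was evicted as stale.
    }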
