Laravel - Redis caching on model level - laravel-5

I came from CodeIgniter using memcached.
Basically, a model call in CI was very different:
$this->model_name->get_some_row_by_id($id);
which would return a row.
So caching would be done by having
get_some_row_by_id look in the cache first, if no hit, then use the callback to fetch it from the database and store it.
I like this method of caching on the model level as any controller which needs to use the model will automatically use the cached call.
I've been reading through the Laravel documentation, and from what I can see I cannot do this at the model level without some massive hacks; caching happens inside the controller using Cache::remember.
The biggest issue I see with this is the need to define cache in every controller and then keep track of the proper keys if I want to cache objects by id->row rather than by page composition.
If anyone has a method to accomplish something similar in Laravel to what I did in CodeIgniter, I would love to know it.

Please read this https://laravel.com/docs/5.4/cache.
And the Retrieving Items From The Cache section says:
You may even pass a Closure as the default value. The result of the Closure will be returned if the specified item does not exist in the cache. Passing a Closure allows you to defer the retrieval of default values from a database or other external service:
$value = Cache::get('key', function () {
    return DB::table(...)->get();
});
This is the same as "caching would be done by having get_some_row_by_id look in the cache first, if no hit, then use the callback to fetch it from the database and store it". The code above only fetches from the database on a miss and does not store the result in the cache, but you can add that yourself (or use Cache::remember, which fetches and stores in one call).
And regarding Laravel's cache configuration:
By default, Laravel is configured to use the file cache driver, which stores the serialized, cached objects in the filesystem. For larger applications, it is recommended that you use a more robust driver such as Memcached or Redis.
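The remember-style pattern the question describes (check the cache, fall back to a callback, store the result) is easy to sketch outside any framework. Below is a minimal Python sketch, using a plain dict as a stand-in for Redis/Memcached; `remember` and `get_some_row_by_id` are illustrative names, not a real API:

```python
# A minimal cache-aside ("remember") helper, analogous to Laravel's
# Cache::remember. The dict stands in for Redis/Memcached.
cache = {}

def remember(key, callback):
    """Return the cached value for key, computing and storing it on a miss."""
    if key not in cache:
        cache[key] = callback()
    return cache[key]

def get_some_row_by_id(row_id):
    # The expensive "database" fetch only runs on a cache miss.
    return remember(f"row:{row_id}", lambda: {"id": row_id, "name": "example"})
```

Wrapping the lookup this way in the model keeps key management out of the controllers, which is exactly the CodeIgniter-style behaviour the question asks for.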

Related

When to use Redis and when to use DataLoader in a GraphQL server setup

I've been working on a GraphQL server for a while now and although I understand most of the aspects, I cannot seem to get a grasp on caching.
When it comes to caching, I see both DataLoader mentioned as well as Redis but it's not clear to me when I should use what and how I should use them.
I take it that DataLoader is used more on a field level to counter the n+1 problem? And I guess Redis is on a higher level then?
If anyone could shed some light on this, I would be most grateful.
Thank you.
DataLoader is primarily a means of batching requests to some data source. However, it does optionally utilize caching on a per request basis. This means, while executing the same GraphQL query, you only ever fetch a particular entity once. For example, we can call load(1) and load(2) concurrently and these will be batched into a single request to get two entities matching those ids. If another field calls load(1) later on while executing the same request, then that call will simply return the entity with ID 1 we fetched previously without making another request to our data source.
DataLoader's cache is specific to an individual request. Even if two requests are processed at the same time, they will not share a cache. DataLoader's cache does not have an expiration -- and it has no need to since the cache will be deleted once the request completes.
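As a rough illustration of that per-request batching and caching, here is a stripped-down, synchronous Python sketch; `Loader` and `fetch_users` are hypothetical stand-ins for DataLoader and a real data source, not the actual library:

```python
class Loader:
    """Sketch of DataLoader's per-request cache: batch_fn receives every
    uncached key at once, and each key hits the data source at most once
    per Loader instance."""
    def __init__(self, batch_fn):
        self.batch_fn = batch_fn
        self.cache = {}

    def load_many(self, keys):
        missing = [k for k in keys if k not in self.cache]
        if missing:
            # One batched call covers every uncached key.
            for key, value in zip(missing, self.batch_fn(missing)):
                self.cache[key] = value
        return [self.cache[k] for k in keys]

calls = []

def fetch_users(ids):
    calls.append(list(ids))          # record each trip to the data source
    return [{"id": i} for i in ids]

loader = Loader(fetch_users)
first = loader.load_many([1, 2])     # one batched request for ids 1 and 2
again = loader.load_many([1])        # served from the per-request cache
```

In a real GraphQL server a fresh loader is created per request, which is what keeps the cache request-scoped.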
Redis is a key-value store that's used for caching, queues, PubSub and more. We can use it to provide response caching, which would let us effectively bypass the resolver for one or more fields and use the cached value instead (until it expires or is invalidated). We can use it as a cache layer between GraphQL and the database, API or other data source -- for example, this is what RESTDataSource does. We can use it as part of a PubSub implementation when implementing subscriptions.
DataLoader is a small library used to tackle a particular problem, namely generating too many requests to a data source. The alternative to using DataLoader is to fetch everything you need (based on the requested fields) at the root level and then letting the default resolver logic handle the rest. Redis is a key-value store that has a number of uses. Whether you need one or the other, or both, depends on your particular business case.

Need to suspend and flush memcached server (or server pool) to maintain data integrity

I have this problem related to maintaining and I have looked in several places for the answer but I have found no specific answer.
The situation is like this:
We have several mysql queries which generate menus for our web application. About once a day, we need to update the tables and those updates affect the menu generation. Naturally, we enclose those updates within a transaction.
So far so good. But to improve speed and responsiveness, and also to reduce database load, we want to use memcached. And in all respects memcached is perfect for this role, because the updates happen only once a day.
But what we would like to do is this:
1. Our update script starts, and its first operation is to "suspend" the memcached pool. Once this is done, memcached no longer answers queries and all queries fall through to MySQL. The important thing is that the memcached server still responds with a miss quickly, so that MySQL comes into action quickly. The other important thing is that during this period, memcached will refuse to set any data.
2. Flush all data in the memcached pool.
3. The update script runs.
4. Restore memcached to normal operation.
So, steps 1 and 4 are where I am stuck.
Our technology is based around mysql and PHP. I am using the nginx memcached module to directly retrieve data from memcached. But the PHP which sets the cache could run in many different places.
Having said that, I am open to using any language or technology. This is a generic enough problem and we could discuss anything that works best.
Thanks in advance for responses.
The usual method of (atomically) swapping over from one set of cached data to another is with a namespace: a prefix that is stored under its own key and queried first, before going on to fetch the main cached data.
It works like this:
You have a 'namespace' under a key - it could be date/time based for example - menuNamespace = 'menu:15050414:' (the 2015-05-04, 2pm menu build).
That key is a prefix for all the actual data for the menus, or other data, eg: menu:15050414:top-menu, menu:15050414:l2-menu, etc, etc
The back end system builds a new set of cached data with new keys: menu:15050510:top-menu, menu:15050510:l2-menu
Only when the data is in place, do you change namespace key cached entry from 'menu:15050414:' to 'menu:15050510:'
The next time the namespace is fetched, it is used as a prefix to then fetch the new data.
There is some more in a MemcacheD FAQ/tricks page on Namespacing.
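The namespace trick above can be sketched in a few lines of Python, with a dict standing in for memcached. The key names follow the example above (`menuNamespace`, `menu:15050414:top-menu`); the code is illustrative, not real memcached client usage:

```python
# Namespace-swap sketch: readers always resolve the namespace key first,
# so writing that one key atomically switches every menu to the new build.
cache = {}

def build_menus(stamp, menus):
    # 1) Write the new data under a new, timestamped prefix...
    for name, data in menus.items():
        cache[f"menu:{stamp}:{name}"] = data
    # 2) ...then flip the namespace pointer with a single set.
    cache["menuNamespace"] = f"menu:{stamp}:"

def get_menu(name):
    return cache[cache["menuNamespace"] + name]

build_menus("15050414", {"top-menu": ["Home", "About"]})
build_menus("15050510", {"top-menu": ["Home", "About", "Blog"]})
```

The old build's keys are left behind to expire on their own; readers never see a half-written menu because the pointer only moves after step 1 completes.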
Based on @alister_b's initial answer, there is a simpler way to solve my initial problem.
The key is to signal the PHP code to stop setting cache values. That can be done through a memcached entry like setCache:false, or through a MySQL column.
Then, a flush command will guarantee nginx cache misses.
Once the tables are updated, setCache is set to true and normal sets by php are resumed.
This will work with my Ajax calls without issues.
It is not mutually exclusive with namespaces.
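The setCache-flag protocol described above might look like the following Python sketch, with a dict standing in for the memcached pool; `setCache`, `cache_set` and `run_update` are illustrative names, not a real client API:

```python
# Sketch of the update-window protocol: a flag checked by every writer,
# plus a flush, keeps the cache empty while the tables are rebuilt.
cache = {"setCache": True}

def cache_set(key, value):
    # Writers check the flag and silently drop sets during the update.
    if cache.get("setCache"):
        cache[key] = value

def run_update(update_tables):
    cache["setCache"] = False                        # 1) stop new sets
    for key in [k for k in cache if k != "setCache"]:
        del cache[key]                               # 2) flush -> guaranteed misses
    update_tables()                                  # 3) transactional table update
    cache["setCache"] = True                         # 4) resume normal operation
```

During step 3 every read is a fast miss that falls through to MySQL, which matches the behaviour the question asks for.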

Plone 4.2 how to make PAS cache external user data

I'm implementing a PAS plugin that handles authentications against mailservers. Actually only DBMail is implemented.
I realized that the enumerateUsers function from the PAS plugin is called numerous times per request, and requires my plugin to open/close an SQL connection for every (subsequent) request. Of course, this is very expensive.
The connections themselves are handled in a Plone tool, which is able to handle multiple different mailservers and delegates the enumerateUsers call to wrapper objects that represent registered servers.
My question is now, what sort of cache (OOBTree, Session?) I should use to provide a temporary local storage for repeating enumerations and avoid subsequent SQL connections?
Another idea was to hook into the user creation process that takes place on the first login an external user makes, and completely "localize" the users.
Third idea was, to store the needed data in the specific member, if possible.
What would be best practice here?
I'd cache the query results, indeed. You need to make a decision on how long to cache the results, and if stored long term, how to invalidate that cache or check for changes.
There are no best practices for these decisions, as they depend entirely on the type of data stored and the APIs of the backends. If they support some kind of freshness query, for example, then you store everything forever and poll the backend to see if the cache needs updating.
You can start with a simple request cache; query once per request, store it on the request object. Your cache will automatically be invalidated at the end of the request as the request object is cleaned up, the next request will be a clean slate.
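That per-request memo can be sketched in Python: the result is stashed in an attribute on the request object, so it disappears with the request. The attribute name `_user_enum_cache` and the function are illustrative, not Plone API:

```python
# Per-request cache sketch: query results are memoized on the request
# object itself, so the cache lives exactly as long as the request.
def enumerate_users_cached(request, query, run_query):
    memo = getattr(request, "_user_enum_cache", None)
    if memo is None:
        memo = {}
        request._user_enum_cache = memo
    if query not in memo:
        memo[query] = run_query(query)   # SQL hit only on first call
    return memo[query]
```

Repeated enumerations within one request then reuse the first result, which directly addresses the "called numerous times per request" problem.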
If your backend users rarely change, you can cache information for longer, in a local cache. I'd use a volatile attribute on the plugin. Any attribute starting with _v_ is ignored by the persistence machinery. Thus, anything stored in a _v_ volatile attribute is both thread-local and only exists for the lifetime of the process, a restart of the server clears these automatically.
At the very least you should use an _v_ volatile attribute to store your backend SQL connections. That way they can stay open between requests, and can be re-used. Something like the following method would do nicely:
def _connection(self):
    # Return a backend connection
    if getattr(self, '_v_connection', None) is None:
        # Create connection here
        self._v_connection = yourdatabaseconnection
    return self._v_connection
You could also use a persistent attribute on your plugin to store your cache. This cache would be committed to the ZODB and persist across restarts. You then really need to work out how to invalidate the contents; store timestamps and evict data when too old, etc.
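A timestamp-based eviction check of the kind suggested here could look like the following Python sketch; `cache_get` and its parameters are illustrative assumptions, not a Plone API:

```python
import time

# Each cache entry stores (value, stored_at); an entry older than
# max_age seconds is treated as a miss and re-fetched.
def cache_get(cache, key, max_age, fetch):
    entry = cache.get(key)
    now = time.time()
    if entry is None or now - entry[1] > max_age:
        entry = (fetch(key), now)
        cache[key] = entry
    return entry[0]
```

With a persisted mapping (e.g. an OOBTree) as `cache`, this gives you restart-surviving storage plus a crude freshness guarantee.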
Your cache data structure depends entirely on your application needs. If you don't persist information, a dictionary (username -> information) could be more than enough. Persisted caches could benefit from using an OOBTree instead of a dictionary, as they reduce the chance of conflicts between different threads and are more efficient for large sets of data.
Whatever you do, you do not need to use a Session. Sessions are prone to conflicts, do not scale well, and are in any case not the place to store a cache of this kind.

Flushing entire cache at once with Enterprise Caching Block

I am looking into using Enterprise Caching Block for my .NET 3.5 service to cache a bunch of static data from the database.
From everything I have read, it seems that FileDependency is the best option for storing static data that does not expire too often. However, when the file changes and the cache is flushed, I need to get a callback once to do some post processing for that particular cache. If I implement ICacheItemRefreshAction and register it during adding an item to the cache, I get a callback for each one of them.
Is there a way to register a callback for the entire cache so that I don't see thousands of callbacks being invoked when the cache flushes?
Thanks
To address your follow up for a better way than FileDependency: you could wrap a SqlDependency in an ICacheItemExpiration. See SqlCacheDependency with the Caching Application Block for sample code.
That approach would only work with SQL Server and would require setting up Service Broker.
In terms of a cache level callback, I don't see an out of the box way to achieve that; almost everything is geared to the item level. What you could do would be to create your own CacheManager Implementation that features a cache level callback.
Another approach might be to have a ICacheItemRefreshAction that only performs any operations when the cache is empty (i.e. the last item has been removed).
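That last idea (only act when the last item has been removed) can be sketched language-agnostically. Here is a Python sketch with illustrative names, not the actual Caching Application Block API:

```python
class Cache:
    """Sketch: each item's removal hook checks whether it was the last
    entry and only then runs the cache-level post-processing, giving a
    single callback per flush instead of one per item."""
    def __init__(self, on_empty):
        self.items = {}
        self.on_empty = on_empty

    def set(self, key, value):
        self.items[key] = value

    def remove(self, key):
        del self.items[key]
        if not self.items:
            self.on_empty()   # fires once, for the final removal only

events = []
c = Cache(on_empty=lambda: events.append("flushed"))
c.set("a", 1)
c.set("b", 2)
c.remove("a")   # cache not yet empty: no callback
c.remove("b")   # last item gone: callback fires once
```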

object session in playframework

How can I store an object instance for each user session?
I have a class that models a complex algorithm. This algorithm is designed to run step by step. I need to instantiate objects of this class for each user, and each user should be able to advance their instance step by step.
You can only store the objects in the Cache. The objects must be serializable for this. In the session you can store a key (which must be a String) pointing to the Cache entry. Make sure that your code still works if the object has been removed from the cache (the same as a session timeout). It's explained in http://www.playframework.org/documentation/1.0.3/cache.
Hope that solves your problem.
To store values in the session:
// First get the user's session.
// If your class extends play.mvc.Controller you can access the session object directly.
Session session = Scope.Session.current();
// To store values into the session:
session.put("name", object);
If you want to invalidate / clear the session object
session.clear()
from play documentation: http://www.playframework.org/documentation/1.1.1/cache
Play has a cache library and will use Memcached when used in a distributed environment.
If you don’t configure Memcached, Play will use a standalone cache that stores data in the JVM heap. Caching data in the JVM application breaks the “share nothing” assumption made by Play: you can’t run your application on several servers, and expect the application to behave consistently. Each application instance will have a different copy of the data.
You can put any object in the cache, as in the following example (in this example from the doc http://www.playframework.org/documentation/1.1.1/controllers#session, you use session.getId() to save messages for each user)
public static void index() {
    List messages = Cache.get(session.getId() + "-messages", List.class);
    if (messages == null) {
        // Cache miss
        messages = Message.findByUser(session.get("user"));
        Cache.set(session.getId() + "-messages", messages, "30mn");
    }
    render(messages);
}
Because it's a cache, and not a session, you have to take into account that the data might no longer be available, and have some means to retrieve it once again from somewhere (the Message model, in this case).
Anyway, if you have enough memory and it involves a short interaction with the user, the data should be there; and in case it's not, you can redirect the user to the beginning of the wizard (you are talking about some kind of wizard page, right?).
Keep in mind that Play, with its stateless share-nothing approach, really has no session at all; underneath, it just handles it through cookies, which is why the session can only accept strings of limited size.
Here's how you can save "objects" in a session. Basically, you serialize/deserialize objects to JSON and store it in the cookie.
https://stackoverflow.com/a/12032315/82976
