I am using AppFabric to handle the caching capabilities of my website. I would like the
products cache to be updated once there has been a change within the products table
within my database. I read about implementing a read-through provider, but the description I found says:
Read: Called when a cache client requests a cached item that does not currently exist in the associated cache.
This doesn't seem to solve my problem: I'd like the products cache to be updated once a change to the table has been detected, and the item won't necessarily be missing from the cache in the first place. Is there any way I can do this using AppFabric's capabilities?
Basically, you just want a SqlCacheDependency mechanism. In the past, that was very useful for the in-memory cache.
However, there is no support within AppFabric caching for the SqlCacheDependency mechanism (or in fact for any kind of dependency). You have to build your own ...
In addition, with a read-through provider, the cache detects the missing item and calls the provider to perform the data load. If you add your item with an expiration of 30 minutes, the provider will never try to reload it before it expires.
Alternatively, you can try to update the cache with a write-behind provider, which periodically pushes cache updates to your database.
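Since there is no built-in dependency support, one common workaround is to poll a version or last-modified column on the products table and evict (or refresh) the cached entry when it changes. Below is a minimal sketch of that idea, written in Java purely for illustration (AppFabric itself is .NET, so you would translate this to the DataCache API; the table name, column name, polling interval and cache key are all assumptions):

```java
import java.sql.*;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustration only: polls a last_modified column and evicts the cached
// "products" entry when a change is detected. Names are made up.
public class ProductsCacheInvalidator {

    private final String jdbcUrl;
    private final Map<String, Object> cache;   // stand-in for your cache client
    private Timestamp lastSeen = new Timestamp(0);

    public ProductsCacheInvalidator(String jdbcUrl, Map<String, Object> cache) {
        this.jdbcUrl = jdbcUrl;
        this.cache = cache;
    }

    public void start() {
        Executors.newSingleThreadScheduledExecutor()
                 .scheduleAtFixedRate(this::checkForChanges, 0, 30, TimeUnit.SECONDS);
    }

    private void checkForChanges() {
        String sql = "SELECT MAX(last_modified) FROM products";
        try (Connection con = DriverManager.getConnection(jdbcUrl);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            if (rs.next()) {
                Timestamp latest = rs.getTimestamp(1);
                if (latest != null && latest.after(lastSeen)) {
                    lastSeen = latest;
                    cache.remove("products"); // evict; next read reloads from the DB
                    System.out.println("Products changed at "
                            + Instant.ofEpochMilli(latest.getTime()));
                }
            }
        } catch (SQLException e) {
            e.printStackTrace(); // don't let one failure kill the poller
        }
    }
}
```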
I have a question about Redis caching and Laravel. By default Laravel uses the file driver, which caches views to a file and loads them from that cache.
Now here's the thing: I started using ElastiCache with Redis for my Laravel 5.4 project. If I change the driver to redis, it starts to cache (which I can tell by the loading time), but what does it actually cache? Does it automatically cache and retrieve my views? CSS? JS? Anything else?
I am also using redis as the session driver; what does that give me?
Is it worth caching the database as well? I was planning to cache the whole database every hour and then, whenever a new item is added to the database, add it to the existing cache. Is that possible?
Redis gives you two main advantages:
Faster data retrieval. Any memory-based cache system (memcached as well) gives you this advantage over file-based or DB-based caching.
Flexible data storage. Redis supports many data types such as strings, lists, sets, sorted sets and so on.
What should you cache?
Cache frequently requested data. If a client requests or queries something and you have no cache, you have to query it from your database, which costs disk I/O; if the query is heavy, that cost grows and slows down your server. The smart way is to query once, save the result into Redis with a suitable data type, and then serve thousands of requests from the cache. You do not need to cache the whole database, though; that would be overkill. When you update something in the DB, just delete it from the cache, and the next time someone queries it, it will be cached again.
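To make that query-once / invalidate-on-update idea concrete, here is a minimal sketch in Java using the Jedis client (the key name, TTL and DB helper methods are assumptions; in Laravel you would express the same pattern with the Cache facade, e.g. Cache::remember / Cache::forget):

```java
import redis.clients.jedis.Jedis;

// Illustration only: cache-aside for a single product, keyed by id.
public class ProductCache {

    private final Jedis jedis = new Jedis("localhost", 6379);

    public String getProduct(long id) {
        String key = "product:" + id;
        String cached = jedis.get(key);
        if (cached != null) {
            return cached;                         // cache hit: no DB work at all
        }
        String fromDb = loadProductJsonFromDb(id); // hypothetical DB call
        jedis.setex(key, 3600, fromDb);            // cache it for an hour
        return fromDb;
    }

    public void updateProduct(long id, String newJson) {
        saveProductJsonToDb(id, newJson);          // hypothetical DB call
        jedis.del("product:" + id);                // invalidate; next read re-caches
    }

    private String loadProductJsonFromDb(long id) { /* query your DB here */ return "{}"; }
    private void saveProductJsonToDb(long id, String json) { /* update your DB here */ }
}
```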
About sessions: these are accessed very frequently by the HTTP server, so keeping each user's session in the cache is more lightweight than file or DB storage if your app serves many people.
Cache static files. I have not dealt with this myself, but it can definitely be done. In a modern architecture there is often an HTTP server such as nginx in front of your Laravel app, and nginx serves the static files directly. If you want to reduce the disk I/O for that, you can add a module like redis2-nginx-module to nginx to do the same thing: save the static file into Redis once and serve it thousands of times.
The book 'The Architecture of Open Source Applications' says that the most common type of global cache in a web application is responsible for fetching the data itself in case it is missing, as shown in this figure. This seems different from what I've encountered so far. Most applications I have seen make the application server responsible for fetching data from the DB and updating the cache. At first, I thought the book might be talking about caching proxies, like Varnish, but they cover those in the next section, so that doesn't seem to be the case.
What cache systems actually fetch the data in case of a miss, and how do they know how to interact with the database?
Caching solutions provide read-through/write-behind features that let users configure a read-through/write-behind provider by implementing an interface and deploying it with the cache server. These providers contain the logic for how the cache server interacts with the database to load or save data.
On a fetch operation, if the data is not present in the cache server, the cache loads it from the database using the configured provider, thus avoiding a cache miss.
This way client applications treat the cache as their only data source, and the cache itself is responsible for interactions with the database. You can read further details in this article by Iqbal Khan.
NCache and TayzGrid are enterprise solutions among many others that provide this feature.
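To make the idea concrete, here is a hypothetical read-through provider sketch in Java. The interface name, method signature, connection string and table are invented for illustration; NCache's and TayzGrid's actual provider interfaces differ, so check their documentation:

```java
import java.sql.*;

// Hypothetical provider interface: the cache server calls load() on a miss.
interface ReadThroughProvider {
    Object load(String key) throws Exception;
}

// Example implementation: the key encodes a product id, and the provider
// knows how to fetch that row from the database.
public class ProductReadThroughProvider implements ReadThroughProvider {

    private final String jdbcUrl = "jdbc:mysql://localhost/shop"; // assumption

    @Override
    public Object load(String key) throws Exception {
        long id = Long.parseLong(key.replace("product:", ""));
        try (Connection con = DriverManager.getConnection(jdbcUrl);
             PreparedStatement ps = con.prepareStatement(
                     "SELECT name, price FROM products WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) return null; // still a miss: not in the DB either
                return rs.getString("name") + ";" + rs.getBigDecimal("price");
            }
        }
    }
}
```

The provider is deployed alongside the cache server, so the application itself never has to talk to the database for reads.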
Is it possible, reliable and safe to cache all entities in a distributed cache and notify the DAO layer on updates? My idea is:
Use JPA 2.1 with the Hibernate implementation.
On creation, persist the entity to the DB.
After persisting it, put it in the distributed cache.
Channel all read actions to the cache.
On update, notify the DAO layer to update the entity.
Yes, you can design a system that does the following (a sketch follows after the list):
On addition: persists data to the DB and adds it to the cache
On read: reads data from the cache, and treats a cache miss as not present in the database either
On update: updates data in the DB and then in the cache (or vice versa)
On delete: deletes data from the cache and then from the database
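Here is a minimal sketch of that flow around a JPA DAO. The CacheClient interface and the Product entity are hypothetical stand-ins for whatever distributed cache API and domain model you actually use:

```java
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;

// Hypothetical stand-in for your distributed cache client.
interface CacheClient {
    Object get(String key);
    void put(String key, Object value);
    void remove(String key);
}

@Entity
class Product {
    @Id
    private Long id;
    public Long getId() { return id; }
}

public class ProductDao {

    private final EntityManager em;
    private final CacheClient cache;

    public ProductDao(EntityManager em, CacheClient cache) {
        this.em = em;
        this.cache = cache;
    }

    public void add(Product p) {
        em.persist(p);                               // 1. persist to the DB
        cache.put("product:" + p.getId(), p);        // 2. add to the cache
    }

    public Product find(long id) {
        return (Product) cache.get("product:" + id); // cache miss == not in DB (by design)
    }

    public void update(Product p) {
        em.merge(p);                                 // update the DB first...
        cache.put("product:" + p.getId(), p);        // ...then the cache
    }

    public void delete(Product p) {
        cache.remove("product:" + p.getId());        // delete from the cache first...
        em.remove(em.contains(p) ? p : em.merge(p)); // ...then from the database
    }
}
```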
This approach will work fine if you have a single application using that database and the data is not that critical. However, if data integrity matters more, you may face the following problems with this approach:
You may get a cache miss when the data is present in the database (persisted but not yet cached)
You may get stale data from the cache (updated in the DB but not yet updated in the cache)
If data is removed from the database by some other application, it will still remain cached in the distributed cache (invalid data on reads)
A better method may be to use a feature-rich distributed caching solution like NCache / TayzGrid that provides read-through / write-behind features. This way your application only needs to use the cache for all reads, writes and updates, and the cache keeps the database updated using the configured providers.
Another approach may be to use a distributed cache as Hibernate's second-level cache, so you don't need to add a caching layer yourself. See this article for details about Hibernate's second-level cache.
Distributed caching solutions like TayzGrid provide a caching provider for Hibernate that can be easily configured. You can find Hibernate providers for other solutions as well.
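For the second-level-cache route, the wiring is mostly annotations plus a couple of Hibernate properties. A minimal sketch (the region factory class is provider-specific, so the value below is only a placeholder for whichever provider you pick):

```java
import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Entities opted in to the second-level cache are served from the configured
// cache provider instead of hitting the database on every load.
@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Product {
    @Id
    private Long id;
    private String name;
    // getters/setters omitted
}

/*
 * And in persistence.xml / hibernate.cfg.xml:
 *
 *   hibernate.cache.use_second_level_cache = true
 *   hibernate.cache.region.factory_class   = <your provider's RegionFactory>
 */
```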
I am caching items in Azure AppFabric Cache, and in some cases I need to know when an item was cached (the date and time of caching). Does Azure Cache provide any function that exposes this timing information? With it, I could find out whether an item was cached on the current day or some other day.
I don't think WAAF Caching exposes this information. You might need to add the time to your data yourself.
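One simple way to do that is to wrap the payload together with the time it was cached and store the wrapper instead of the raw value. A sketch of that wrapper idea, shown in Java only for illustration (Azure/AppFabric caching is .NET, so in practice this would be a serializable C# class, but the shape is the same):

```java
import java.io.Serializable;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;

// Stores the value together with the moment it was cached, so the reader
// can tell whether it was cached today or on an earlier day.
public class TimestampedItem<T> implements Serializable {

    private final T value;
    private final Instant cachedAt;

    public TimestampedItem(T value) {
        this.value = value;
        this.cachedAt = Instant.now();
    }

    public T getValue() { return value; }

    public Instant getCachedAt() { return cachedAt; }

    public boolean wasCachedToday() {
        LocalDate cachedDay = cachedAt.atZone(ZoneOffset.UTC).toLocalDate();
        return cachedDay.equals(LocalDate.now(ZoneOffset.UTC));
    }
}
```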
I have a series of code books in my database, and I am using plain JDBC calls to fetch them and store them in a collection. I would like to put these in some kind of a cache at application startup time in order to save time later.
I don't need any fancy stuff like automatic object invalidation, TTL etc - the code books change rarely, so I'll trigger the update myself and just reload the whole cache when the need arises.
The project where I need this uses Spring, and this is my first project using it. Is there a standard/elegant way to do this in Spring?
Thanks.
Check out Spring-cache.
It supports EHCache, OSCache and an in-memory cache, and allows pluggable cache providers too.
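If you are on Spring 3.1 or later, the built-in caching abstraction covers this use case without any extra module. A minimal sketch for the code-book scenario (bean, cache and type names are assumptions), using @Cacheable for reads and @CacheEvict to trigger the manual reload you describe:

```java
import java.util.Collections;
import java.util.List;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.concurrent.ConcurrentMapCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;

@Configuration
@EnableCaching                      // turns on the caching annotations below
class CacheConfig {
    @Bean
    CacheManager cacheManager() {
        return new ConcurrentMapCacheManager("codeBooks"); // simple in-memory cache
    }
}

@Service
public class CodeBookService {

    // First call runs the JDBC query; later calls return the cached list.
    @Cacheable("codeBooks")
    public List<CodeBook> loadCodeBooks() {
        return queryCodeBooksViaJdbc();   // your existing plain-JDBC code
    }

    // Call this when the code books change; the next read repopulates the cache.
    @CacheEvict(value = "codeBooks", allEntries = true)
    public void reloadCodeBooks() { }

    private List<CodeBook> queryCodeBooksViaJdbc() {
        // plain JDBC fetch, omitted here
        return Collections.emptyList();
    }
}

// Placeholder type for the cached rows.
class CodeBook { }
```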