Update Ignite cache with time-stamp data

My issue is: how do I update the cache with new entries from the database table?
My cache holds my Cassandra table data, say everything up to 3 p.m.
Up to that time the user has purchased 3 items, so my cache has 3 entries associated with that user.
But what if, some time later (say 30 minutes), the user purchases 2 more items?
Since I already have 3 entries in the cache, Ignite won't query the database, so how do I get those 2 new entries when calculating the final bill?
One option I have is to call cache.loadCache(null, null) every 15 minutes, but calling it that often is not feasible.

The better option here is to insert data not directly into Cassandra, but through Ignite. With write-through enabled, the cache always holds up-to-date data without running any additional synchronization with the DB.
But if you choose to run loadCache each time, you can add a timestamp to your object in the DB and implement your own CacheStore whose loadCache loads only the new data from the DB. The Ignite documentation on persistent stores will help you implement your own CacheStore.
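
A minimal sketch of such a timestamp-aware store follows. Everything in it is hypothetical (a purchases table with a last_updated column, a Purchase value class), and plain JDBC is used for brevity; a Cassandra-backed store would do the same thing with the DataStax driver.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Timestamp;
    import javax.cache.integration.CacheLoaderException;
    import javax.sql.DataSource;
    import org.apache.ignite.cache.store.CacheStoreAdapter;
    import org.apache.ignite.lang.IgniteBiInClosure;

    // Timestamp-aware store: loadCache() fetches only rows changed since
    // the previous sync. Table and class names are hypothetical.
    public class PurchaseCacheStore extends CacheStoreAdapter<Long, Purchase> {
        private final DataSource ds;
        private volatile long lastSync;   // epoch millis of the last successful sync

        public PurchaseCacheStore(DataSource ds) {
            this.ds = ds;
        }

        @Override public void loadCache(IgniteBiInClosure<Long, Purchase> clo, Object... args) {
            long since = lastSync;
            lastSync = System.currentTimeMillis();

            try (Connection c = ds.getConnection();
                 PreparedStatement st = c.prepareStatement(
                     "SELECT id, user_id, item FROM purchases WHERE last_updated > ?")) {
                st.setTimestamp(1, new Timestamp(since));
                try (ResultSet rs = st.executeQuery()) {
                    while (rs.next())
                        clo.apply(rs.getLong("id"),
                                  new Purchase(rs.getLong("user_id"), rs.getString("item")));
                }
            }
            catch (SQLException e) {
                throw new CacheLoaderException("Failed to load new purchases", e);
            }
        }

        // Read-through / write-through plumbing, omitted in this sketch.
        @Override public Purchase load(Long key) { return null; }
        @Override public void write(javax.cache.Cache.Entry<? extends Long, ? extends Purchase> e) { }
        @Override public void delete(Object key) { }
    }

    class Purchase implements java.io.Serializable {
        final long userId;
        final String item;
        Purchase(long userId, String item) { this.userId = userId; this.item = item; }
    }

With this in place, a periodic cache.loadCache(null) fetches only the rows changed since the previous sync instead of reloading the whole table.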

Related

Update currency rates in database or file

I'm using Laravel 5.7.
I need to get the exchange rates, and I want to create a scheduler that runs every 5 minutes.
Do you think it's possible to put the currency information in a config file, or save it to a database? And of course caching could help here too, right?
In your situation the data changes only every 5 minutes. So when you fetch the rates, store them in the database, then read the new values back, cache them, and serve users the rates from the cache. When the next 5 minutes have passed and you fetch the newest rates, you must clear the existing cache, store the new values in the database, read them back, put them into the cache, and again serve users from the freshly cached content.
If your database grows day by day, you must add extra logic so you don't push millions of records into the cache every 5 minutes. I advise against storing dynamic content in the cache. If you do want to cache, you can use Redis, Memcached, etc., but keep in mind to clear the cache and store the new content each cycle.
If you only fetch daily rates every 5 minutes and don't store anything in the database, you can apply the same logic without the database step: save the records directly into the cache and clear them when new rates are fetched.
Also add an index to the database table so reads are faster.
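
A sketch of that fetch-store-recache cycle, written in Java to match the rest of this page (in Laravel the same shape maps onto the scheduler and the Cache facade). The provider and DAO calls are hypothetical stubs:

    import java.util.Map;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Fetch -> store in DB -> swap the cache in one assignment, every 5 minutes.
    public class RateRefresher {
        private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
        private volatile Map<String, Double> cachedRates = Map.of();

        public void start() {
            scheduler.scheduleAtFixedRate(this::refresh, 0, 5, TimeUnit.MINUTES);
        }

        private void refresh() {
            Map<String, Double> fresh = fetchRatesFromProvider(); // hypothetical HTTP call
            saveToDatabase(fresh);                                // hypothetical DAO call
            cachedRates = Map.copyOf(fresh); // "clear old cache + store new" in one swap
        }

        // Users read rates from the cache only, never from the provider or the DB.
        public double rateFor(String currency) {
            return cachedRates.getOrDefault(currency, Double.NaN);
        }

        private Map<String, Double> fetchRatesFromProvider() { return Map.of("USD", 1.0); }
        private void saveToDatabase(Map<String, Double> rates) { /* omitted */ }
    }

Replacing the whole map reference in a single assignment plays the role of "clear existing cache, store new values": readers never observe a half-updated rate table.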

Update database records based on date column

I'm working on an app where some entities in the database have a column representing the date until which that particular entity is available for certain actions. When it expires I need to change its state, meaning update a column representing its state.
What I'm doing so far: whenever I ask the database for those entities to do something with them, I first check whether they are expired, and if they are, I update them. I don't particularly like this approach, since it means I will have a bunch of records in the database in the wrong state just because I haven't queried them yet. Another approach would be a periodic task that runs over those records and updates them as necessary. I don't like that either, since records would again sit in an inconsistent state between runs, and in that respect the first approach seems more reasonable.
Is there another way of doing this, or am I missing something? I should mention that I use Spring Boot + Hibernate, and the underlying DB is PostgreSQL. Is there any technology-specific trick I can use to get what I want?
There is no "expired" trigger type in a database. If you have something that expires and you need to act on it, there are two solutions (you have written about them both): do some extra work on expired rows before you use the data, or run a cron/task (either on the DB level or on the server side).
I recommend the cron approach. Here is the explanation:
Update expired rows before you select:
+: you update expired data right before you need it. The open question is whether to update only the rows you requested or all expired rows; updating all of them can be time-consuming if your working set needs just 2 records but 2000 unrelated expired records get updated along the way.
-: it takes a long time to update all records; if the database is shared and accessed not only through your application, the expiry logic is not executed for the other clients; you must control every entry point, deciding where the expiry handling should run and where it shouldn't; if expiry is measured in minutes or seconds, new records may expire again right after your handling runs; and if the workflow for handling expired data changes, you want that logic in one place (the cron job), whereas with update-before-select you must change it at every call site.
CRON/TASK
-: you have to spend time configuring it, but just once, 30-60 minutes max :)
+: it executes in the background; if your DB is used by more than just your application, the expired-data logic still applies; you don't have to check for stale data in your Java code before every select (nor remember to do so, nor explain it to every new employee); and you get a clean split between the logic that cares about stale data and the normal queries to the DB.
You can execute SELECT ... FOR UPDATE in the cron job, so even if a server-side query selects rows during the update, it will wait until the stale-data logic completes and then read up-to-date data.
For Spring: the Spring scheduling documentation, or a simple Spring + Quartz example (a minimal sketch follows below).
For the DB level: a PostgreSQL job scheduler.
A scheduler/cron job is best practice for this kind of thing.
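
A minimal sketch of the cron approach using Spring's scheduler, assuming a hypothetical coupon table with state and available_until columns:

    import java.sql.Timestamp;
    import java.time.Instant;
    import org.springframework.jdbc.core.JdbcTemplate;
    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Component;

    // One bulk UPDATE flips every expired row, so normal queries
    // never have to care about stale state. Table/columns are hypothetical.
    @Component
    public class ExpirationJob {
        private final JdbcTemplate jdbc;

        public ExpirationJob(JdbcTemplate jdbc) {
            this.jdbc = jdbc;
        }

        @Scheduled(cron = "0 * * * * *") // every minute; tune to your expiry granularity
        public void expireStaleEntities() {
            int updated = jdbc.update(
                "UPDATE coupon SET state = 'EXPIRED' " +
                "WHERE state = 'ACTIVE' AND available_until < ?",
                Timestamp.from(Instant.now()));
            // 'updated' = number of rows flipped in this run
        }
    }

@EnableScheduling must be present on a configuration class for @Scheduled to fire; and because the whole batch is a single UPDATE statement, concurrent readers see either the old or the new state, never a half-applied run.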

JCS - Dynamic update of cache from database

I maintain an application which leverages JCS to hold a cache in the JVM (JVM1). This data is loaded from a database when the JVM is started/restarted.
However, the database is also accessed from a different JVM (JVM2), which adds data to the database.
Currently, to make sure these newly added records are loaded into the cache, we have to restart JVM1 for every addition to the database.
Is there a way to refresh/load the cache (only for newly added records) in JVM1 at regular intervals (instead of frequent DB polling)?
Thanks,
Jaya Krishna
Can you not simply have JVM1 first check the in-memory cache and then, if the item is absent there, check the database?
If, however, you need to list all items of some type in existence and don't want to hit the database, then for JVM1 to know that there is a new item in the database, I suppose that either 1) JVM2 would have to send a network message to JVM1 telling it there are new entries, or 2) a database trigger could fire when new data is inserted and send a network message to JVM1 (though having the database send network messages to an application server feels rather weird, I think). Both approaches seem rather complicated.
Have you considered some kind of new-item-ids table that logs the IDs of items recently inserted into the database? It could be updated by a database trigger, or by JVM1 and JVM2 when they write to the database. JVM1 would then only need to poll this single small table, perhaps once per second, to get a list of new IDs, and load just those items from the database.
Finally, have you considered a distributed cache? Both JVM1 and JVM2 would share the same cache and write items to it when they insert them into the database. (This is somewhat similar to sending network messages between JVM1 and JVM2, but the distributed cache system sends the messages itself, so you wouldn't need to write any new code.)
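
A sketch of the new-item-ids polling idea with the commons-jcs 2.x API; the new_item_ids log table, the items table, and the Item class are hypothetical:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import javax.sql.DataSource;
    import org.apache.commons.jcs.JCS;
    import org.apache.commons.jcs.access.CacheAccess;

    // JVM2 (or a trigger) appends inserted IDs to new_item_ids;
    // JVM1 polls that small table and loads only the delta into JCS.
    public class NewItemPoller {
        private final DataSource ds;
        private final CacheAccess<Long, Item> cache = JCS.getInstance("items");
        private volatile long lastSeenId;   // high-water mark of processed log rows

        public NewItemPoller(DataSource ds) { this.ds = ds; }

        public void start() {
            Executors.newSingleThreadScheduledExecutor()
                     .scheduleAtFixedRate(this::poll, 1, 1, TimeUnit.SECONDS);
        }

        private void poll() {
            try (Connection c = ds.getConnection();
                 PreparedStatement st = c.prepareStatement(
                     "SELECT l.id, i.item_id, i.name FROM new_item_ids l " +
                     "JOIN items i ON i.item_id = l.item_id WHERE l.id > ? ORDER BY l.id")) {
                st.setLong(1, lastSeenId);
                try (ResultSet rs = st.executeQuery()) {
                    while (rs.next()) {
                        cache.put(rs.getLong("item_id"), new Item(rs.getString("name")));
                        lastSeenId = rs.getLong("id");
                    }
                }
            } catch (SQLException e) {
                // keep the old high-water mark; the next tick retries
            }
        }
    }

    class Item implements java.io.Serializable {
        final String name;
        Item(String name) { this.name = name; }
    }

The high-water mark means each tick transfers only the delta, so polling once per second stays cheap even when the items table is large.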

How to refresh iBatis Cache with database operations

We have a Java EE web application using iBatis for ORM. One of the dropdowns (a select box) shows master data that is refreshed on a daily basis (say 4:00 AM) via a cron job loading a flat file into an Oracle database table.
Since the dropdown/select box has to list ~1000 records and the data is static for 24 hours, we used the CacheModel feature in iBatis. The select query uses a CacheModel configured with "readOnly=true & serialize=true, flushInterval=24 hours", so that a single cache is shared across all users.
There will be no insert/update/delete operations from the application that modify this master data.
Question:
If the external job loading data into this Oracle table fails, and the iBatis cache is flushed for the day before we manually load the data into the table, how can I get the iBatis cache flushed again mid-day when I rerun the failed cron job?
Please note that there will not be any insert/update/delete operations from the application.
You can flush the cache programmatically.
There are 2 methods in the SqlMapClient interface:
void flushDataCache()
Flushes all data caches.
and
void flushDataCache(java.lang.String cacheId)
Flushes the data cache that matches the cache model ID provided.
http://ibatis.apache.org/docs/java/user/com/ibatis/sqlmap/client/SqlMapClient.html
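
So the rerun path of the failed cron job can trigger the flush itself once the table is reloaded. A minimal sketch; the namespace-qualified cache model ID "Lookup.masterData" is hypothetical:

    import com.ibatis.sqlmap.client.SqlMapClient;

    // Rerun path for the failed load job: reload the table, then flush
    // the one cache model so the next select repopulates it.
    public class MasterDataReloadJob {
        private final SqlMapClient sqlMapClient;

        public MasterDataReloadJob(SqlMapClient sqlMapClient) {
            this.sqlMapClient = sqlMapClient;
        }

        // Call after the rerun cron job has reloaded the Oracle table.
        public void afterReload() {
            sqlMapClient.flushDataCache("Lookup.masterData");
            // or flush everything: sqlMapClient.flushDataCache();
        }
    }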

optimizing large selects in hibernate/jpa with 2nd level cache

I have a user object represented in JPA which has specific sub-types. E.g., think of User with a subclass Admin and another subclass PowerUser.
Let's say I have 100k users. I have successfully implemented the second-level cache using Ehcache in order to increase performance, and have validated that it's working.
http://docs.jboss.org/hibernate/core/3.3/reference/en/html/performance.html#performance-cache
I know it works (i.e., you load the object from the cache rather than invoking an SQL query) when you call the load method. I've verified this via logging at the Hibernate level and by checking that it's quicker.
However, I actually want to select a subset of all the users... for example, let's say I want to count how many power users there are.
Furthermore, my users have an associated ZipCode object... the ZipCode objects are also second-level cached... what I'd like to do is ask queries like: how many power users do I have in New York state?
My question is: how do I write a query to do this that will hit the second-level cache and not the database? Note that my second-level cache is configured to be read/write, so as new users are added to the system they should automatically be added to the cache. Also note that I have investigated the query cache briefly, but I'm not sure it's applicable, as that is for queries that are run multiple times. My problem is more a case of: the data should be in the second-level cache anyway, so what do I have to do so that the database doesn't get hit when I run my query?
cheers,
Brian
(...) the data should be in the second level cache anyway so what do I have to do so that the database doesn't get hit when I write my query.
If the entities returned by your query are cached, have a look at Query#iterate(). This triggers a first query to retrieve the list of IDs, and then a subsequent lookup for each ID... and those per-ID lookups do hit the L2 cache.
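
A sketch of that pattern in Hibernate 3.x style, matching the documentation linked above; the PowerUser entity and the zipCode.state path are hypothetical names, and the entities are assumed to be mapped as cacheable:

    import java.util.Iterator;
    import org.hibernate.Query;
    import org.hibernate.Session;

    public class PowerUserCounter {

        // Counts power users in a given state; entity loads resolve via the L2 cache.
        public long countPowerUsersInState(Session session, String state) {
            // iterate() selects only the IDs up front, then loads each entity
            // by ID; those by-ID loads hit the second-level cache when present.
            Query q = session.createQuery(
                "from PowerUser u where u.zipCode.state = :state");
            q.setParameter("state", state);

            long count = 0;
            for (Iterator<?> it = q.iterate(); it.hasNext(); it.next())
                count++;
            return count;
        }
    }

Note the trade-off: the initial ID select still goes to the database; only the per-entity loads are served from the second-level cache, which is exactly what Query#iterate() is for.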
