How to refresh iBatis Cache with database operations - spring

We have a Java EE web application that uses iBatis for ORM. One of the dropdowns (select boxes) shows master data that is refreshed daily (say, at 4:00 AM) by a cron job loading a flat file into an Oracle database table.
Since the dropdown/select box has to list ~1000 records and the data is static for 24 hours, we used the CacheModel feature of iBatis. The select query uses a CacheModel configured with "readOnly=true, serialize=true, flushInterval=24 hours", so that a single cache is shared across all users.
There are no insert/update/delete operations from the application that modify this master data.
Question:
If the external job loading data into this Oracle table fails, and the iBatis cache is flushed for the day before we manually reload the table, how can I get the iBatis cache flushed again in the middle of the day when I rerun the failed cron job?
Please note that there will not be any insert/update/delete operations from the application.

You can flush the cache programmatically.
There are two methods in the SqlMapClient interface:
void flushDataCache()
Flushes all data caches.
and
void flushDataCache(java.lang.String cacheId)
Flushes the data cache that matches the cache model ID provided.
http://ibatis.apache.org/docs/java/user/com/ibatis/sqlmap/client/SqlMapClient.html
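A minimal sketch of wiring the flush into the rerun path. The small interface below stands in for com.ibatis.sqlmap.client.SqlMapClient so the example compiles on its own (only the two flush methods from the answer are modeled), and the cache model ID is hypothetical — it must match the cacheModel id in your sqlmap XML:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for com.ibatis.sqlmap.client.SqlMapClient: only the two flush
// methods are modeled so the sketch is self-contained. In the real
// application, call these on your SqlMapClient instance instead.
interface CacheFlusher {
    void flushDataCache();               // flushes all data caches
    void flushDataCache(String cacheId); // flushes one cache model by ID
}

class MasterDataReloadJob {
    private final CacheFlusher sqlMapClient;

    MasterDataReloadJob(CacheFlusher sqlMapClient) {
        this.sqlMapClient = sqlMapClient;
    }

    /** Call this after the rerun of the failed cron job has loaded the table. */
    void onMasterDataReloaded() {
        // Hypothetical cache model ID; flushing by ID avoids clearing
        // unrelated caches the way flushDataCache() would.
        sqlMapClient.flushDataCache("masterData.dropdownCache");
    }
}
```

The flush call could be exposed through an admin servlet or JMX operation, so the rerun of the cron job (or an operator) can trigger it without restarting the application.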

Related

Data Readiness Check

Let's say there is a job A which executes a Python script to connect to Oracle, fetch the data from Table A, and load it into Snowflake once a day. Application A, which depends on Table A in Snowflake, can simply depend on the success of job A for further processing; this is easy.
But if the data movement is via replication (Change Data Capture from Oracle moves to S3 using GoldenGate, pipes push into a stage, and a stream feeds the target via a Task every few minutes), what is the best way to let Application A know that the data is ready? How can it check whether the data is ready? Is there something available in Oracle, like a table-level marker, that can be moved over to Snowflake? Tables in Oracle cannot be modified to add anything new, and marker rows cannot be added either; these are impractical. But something that Oracle provides implicitly, which can be moved over to Snowflake, or some SCN-like number at the table level that can be compared every few minutes, could be a solution. Eager to hear any approaches.

update ignite cache with time-stamp data

My issue is how to update the cache with new entries from a database table.
Suppose my cache holds my Cassandra table data as of 3 p.m.
Up to that time the user has purchased 3 items, so my cache has 3 entries associated with that user.
But what if, after some time (say 30 minutes), the user purchases 2 more items?
Since I have 3 entries in the cache, it won't query the database; how do I get those 2 new entries when calculating the final bill?
One option I have is to call cache.loadCache(null, null) every 15 minutes, but calling it every time is not feasible.
The better option here is to insert data not directly into Cassandra but through Ignite. That way the cache always has up-to-date data without running any additional synchronization with the DB.
But if you choose to run loadCache each time, you can add a timestamp to your objects in the DB and implement your own CacheStore with an additional method that loads only new data from the DB. Here is a link to the documentation; it will help you implement your own CacheStore.
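A sketch of the timestamp-based delta load, with the Ignite and Cassandra APIs stubbed out so it is self-contained. The Backend interface and the watermark logic are assumptions for illustration; in a real deployment this logic would live in your CacheStore implementation and be invoked from loadCache:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical row type; in the real store this maps to the Cassandra table.
class Purchase {
    final long id;
    final long updatedAt; // epoch millis written by the application on insert
    Purchase(long id, long updatedAt) { this.id = id; this.updatedAt = updatedAt; }
}

/**
 * Delta loader sketch: remembers the newest timestamp it has seen and, on
 * each refresh, asks the backing store only for rows newer than that
 * watermark, instead of reloading everything with loadCache(null, null).
 */
class DeltaLoader {
    interface Backend { List<Purchase> loadNewerThan(long ts); }

    private final Backend backend;
    private final Map<Long, Purchase> cache = new LinkedHashMap<>(); // stand-in for the Ignite cache
    private long watermark = 0L;

    DeltaLoader(Backend backend) { this.backend = backend; }

    /** Pulls only rows with updatedAt greater than the watermark. */
    void refresh() {
        for (Purchase p : backend.loadNewerThan(watermark)) {
            cache.put(p.id, p);
            if (p.updatedAt > watermark) watermark = p.updatedAt;
        }
    }

    int size() { return cache.size(); }
}
```

Each refresh then transfers only the rows inserted since the previous one, so running it every few minutes stays cheap even for a large table.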

Role of H2 database in Apache Ignite

I have an Apache Spark job, and one of its components fires queries at an Apache Ignite data grid using Ignite SQL; the query is a SqlFieldsQuery. I was going through the thread dump, and in one of the executor logs I saw the following:
org.h2.mvstore.db.TransactionStore.begin(TransactionStore.java:229)
org.h2.engine.Session.getTransaction(Session.java:1580)
org.h2.engine.Session.getStatementSavepoint(Session.java:1588)
org.h2.engine.Session.setSavepoint(Session.java:793)
org.h2.command.Command.executeUpdate(Command.java:252)
org.h2.jdbc.JdbcStatement.executeUpdateInternal(JdbcStatement.java:130)
org.h2.jdbc.JdbcStatement.executeUpdate(JdbcStatement.java:115)
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.connectionForThread(IgniteH2Indexing.java:428)
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.connectionForSpace(IgniteH2Indexing.java:360)
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryLocalSqlFields(IgniteH2Indexing.java:770)
org.apache.ignite.internal.processors.query.GridQueryProcessor$5.applyx(GridQueryProcessor.java:892)
org.apache.ignite.internal.processors.query.GridQueryProcessor$5.applyx(GridQueryProcessor.java:886)
org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:1666)
org.apache.ignite.internal.processors.query.GridQueryProcessor.queryLocalFields(GridQueryProcessor.java:886)
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:698)
com.test.ignite.cache.CacheWrapper.queryFields(CacheWrapper.java:1019)
The last line in my code executes a sql fields query as follows :
SqlFieldsQuery sql = new SqlFieldsQuery(queryString).setArgs(args);
cache.query(sql);
According to my understanding, Ignite has its own data grid which it uses to store the cache data and indices. It only makes use of H2 database to parse the SQL query and get a query execution plan.
But the thread dump shows that updates are being executed and transactions are involved. I don't understand the need for transactions or updates in a SQL SELECT query.
I want to know the following about the role of H2 database in Ignite :
I went into the open-source code of Apache Ignite (version 1.7.0) and saw that it was trying to open a connection to a specific schema in the H2 database by executing the query SET SCHEMA schema_name (the connectionForThread() method of the IgniteH2Indexing class). Is one schema or one table created for every cache? If yes, what information does it contain, since all the data is stored in Ignite's data grid?
I also came across another interesting thing in the source code: Ignite tries to derive the H2 schema name from the space name (see the queryLocalSqlFields() method of the IgniteH2Indexing class). What does this space name indicate, and is it something internal to Ignite or configurable?
Would the setting of the schema and the connection to the H2 DB happen for each of my SQL queries, and if so, is there any way to avoid this?
Yes, we call executeUpdate to set the schema. In Ignite 2.x we will be able to switch to Connection.setSchema for that. Right now we create a SQL schema for each cache, and you can create multiple tables in it, but this is going to change in the future. The schema does not actually contain anything; we just utilize some H2 APIs.
Space name is basically the same thing as cache name. You can configure the SQL schema name for a cache using CacheConfiguration.setSqlSchema.
If you run queries using the same cache instance, the schema will not change.
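For reference, a hedged Spring XML fragment showing where that setting lives; the cache name and schema value are made up, and the property name simply follows the CacheConfiguration.setSqlSchema setter mentioned above:

```xml
<!-- Hypothetical cache definition: sqlSchema overrides the H2 schema
     name that Ignite would otherwise derive from the space/cache name. -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="personCache"/>
    <property name="sqlSchema" value="PUBLIC"/>
</bean>
```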

Pentaho CDA cache scheduler with Dynamic data

I have a SQL query with two parameters, fromDate and toDate, in my dashboard.
A user can select any date range, but most users prefer the Last 30 Days, Current Week, or Current Month ranges.
The dashboard works on huge data, so my SQL queries are too slow to load the dashboard quickly.
So I've enabled the CDA cache to make the dashboard reload faster. But the data needs to be updated on an hourly basis, so a refresh is required every hour.
When I clear the cache, the dashboard is too slow the first time it is loaded. So I tried to schedule the queries with the CDA cache manager.
Refer to this URL: How to Reload CDA and Mondrian cache in Pentaho CE 4.8?
Unfortunately, I am unable to schedule the queries with dynamic parameters.
How can I schedule the queries with my dynamic parameters?
Also, is there any way to clear the CDA cache for specific queries only?
Kindly suggest a solution.
Cheers,
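One way to sidestep the dynamic-parameter limitation is to have an hourly job compute concrete fromDate/toDate values for the preset ranges users actually pick, and warm the cache with those. A sketch of the date computation, where the CDA endpoint path, dataAccessId, and the "param" prefix in the URL are assumptions based on a typical CDA install and must be adjusted to your server:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

/**
 * Computes concrete parameter values for the preset date ranges, so a
 * scheduled warm-up call can hit CDA with real dates instead of dynamic
 * expressions that the cache scheduler cannot evaluate.
 */
class CdaWarmup {
    static final DateTimeFormatter FMT = DateTimeFormatter.ISO_LOCAL_DATE;

    static String[] last30Days(LocalDate today) {
        return new String[] { today.minusDays(30).format(FMT), today.format(FMT) };
    }

    static String[] currentMonth(LocalDate today) {
        return new String[] { today.withDayOfMonth(1).format(FMT), today.format(FMT) };
    }

    // Hypothetical warm-up URL: path and dataAccessId must match your .cda file.
    static String warmupUrl(String base, String[] range) {
        return base + "?path=/dashboards/sales.cda&dataAccessId=q1"
             + "&paramfromDate=" + range[0] + "&paramtoDate=" + range[1];
    }
}
```

Running such a job every hour, right after the data refresh, repopulates the cache for the popular ranges so the first user to open the dashboard no longer pays the cold-query cost.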

JCS - Dynamic update of cache from database

I maintain an application which leverages JCS to hold a cache in the JVM (JVM1). This data is loaded from a database when the JVM starts or restarts.
However, the database is also accessed from a different JVM (JVM2), which adds data to the database.
Currently, to make sure these newly added records are loaded into the cache, we need to restart JVM1 after every addition to the database.
Is there a way to refresh/load the cache (only for the newly added records) in JVM1 at regular intervals (instead of frequent DB polling)?
Thanks,
Jaya Krishna
Can you not simply have JVM1 first check the in-memory cache and then, if the item is absent there, check the database?
If, however, you need to list all existing items of some certain type and don't want to access the database, then for JVM1 to know that there is a new item in the database, I suppose that either 1) JVM2 would have to send a network message to JVM1 telling it that there are new entries in the database, or 2) there could be a database trigger that fires when new data is inserted and sends a network message to JVM1. (Having the database send network messages to an application server feels rather weird, I think.) These approaches seem rather complicated, though.
Have you considered some kind of new-item-ids table that logs the IDs of items recently inserted into the database? It could be updated by a database trigger, or by JVM1 and JVM2 when they write to the database. Then JVM1 would only need to poll this single table, perhaps once per second, to get the list of new IDs, and then it could load the new items from the database.
Finally, have you considered a distributed cache? Both JVM1 and JVM2 would share the same cache and write items to it when they insert them into the database. (This approach would be somewhat similar to sending network messages between JVM1 and JVM2, but the distributed cache system would send the messages itself, so you wouldn't need to write any new code.)
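The new-item-ids approach can be sketched as follows. The table and method names are hypothetical, and the database and JCS region are stubbed out so the sketch is self-contained; in the real application the two Db methods would be JDBC calls and the map would be your JCS region:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Sketch of the new-item-ids approach: JVM1 polls a small marker table of
 * recently inserted IDs and loads only those items into its cache, so each
 * poll cycle is cheap when nothing new has been written.
 */
class NewItemPoller {
    interface Db {
        List<Long> drainNewItemIds(); // read-and-clear the marker table
        String loadItem(long id);     // load the full row for one ID
    }

    private final Db db;
    private final Map<Long, String> cache = new HashMap<>(); // stand-in for the JCS region

    NewItemPoller(Db db) { this.db = db; }

    /** One poll cycle: fetch the new IDs, then load just those items. */
    void pollOnce() {
        for (long id : db.drainNewItemIds()) {
            cache.put(id, db.loadItem(id));
        }
    }

    String get(long id) { return cache.get(id); }
}
```

In production, pollOnce() would be driven by a ScheduledExecutorService at whatever interval suits the staleness you can tolerate, which avoids both restarts of JVM1 and full-table polling.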
