Controlling Ehcache so it does not cache an empty query result - ehcache

I am using MyBatis with Ehcache to cache query results and reduce the frequency of DB hits. My question is: if a select query returns zero records, Ehcache caches that empty result and keeps returning it, even after a valid record is inserted following the query execution.
Can anyone suggest how to configure Ehcache not to cache the result when the query returns zero records?
<mapper namespace="org.test">
<cache type="org.mybatis.caches.ehcache.EhcacheCache">
<property name="eternal" value="false" />
<property name="maxEntriesLocalHeap" value="10000"/>
<property name="maxEntriesLocalDisk" value="10000000"/>
<property name="timeToIdleSeconds" value="3600" />
<property name="timeToLiveSeconds" value="3600" />
<property name="memoryStoreEvictionPolicy" value="LRU" />
<property name="statistics" value="true" />
</cache>
<select id="userInfo" parameterType="map" resultMap="userInfoList" useCache="true">
SELECT USERNAME,USERID FROM TEST_TABLE
</select>
</mapper>

This looks like a classic invalidation problem.
Whatever logic creates those additional records must be responsible for invalidating the cached query. The problem is most obvious when the query result is empty, gets cached, and you then add a record that should match.
But you will face exactly the same problem when your query returns n records, gets cached, and you then add another record: the query will still return n results, not n+1.
I would recommend looking into what MyBatis offers for cache invalidation.
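For example, in MyBatis any insert/update/delete statement mapped in the same namespace flushes that namespace's cache by default (flushCache is true for those statement types). A minimal sketch, assuming a hypothetical insertUser statement:
```xml
<!-- Hypothetical insert in the org.test mapper; flushCache defaults to
     true for insert/update/delete and is shown here only for clarity.
     Executing it evicts the cached (empty) result of userInfo. -->
<insert id="insertUser" parameterType="map" flushCache="true">
INSERT INTO TEST_TABLE (USERNAME, USERID) VALUES (#{username}, #{userid})
</insert>
```
If the insert is mapped in a different namespace, a <cache-ref namespace="org.test"/> element in that other mapper makes both namespaces share (and flush) the same cache.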

Related

DB Insert if not present - Spring Integration

We have an int-jms:message-driven-channel-adapter --> a transformer --> a filter --> another transformer --> int-jdbc:outbound-channel-adapter (to insert into table_1)
(considering --> as channels)
I want to change this flow to insert into 2 tables instead of 1, but for table_2 I want to insert only if data corresponding to some fields in the message is not already present in the table, i.e. insert if not present.
One thing I figured out is that I will now need a publish-subscribe-channel with ignore-failures="false" to split the flow into the 2 tables.
My question is: what component should I use to select the data and check whether it already exists in table_2? I first thought inbound-channel-adapter was the right choice, but I couldn't figure out how to slide it between 2 components, i.e. say a transformer and an outbound-channel-adapter.
Some things I can think of are:
1. Use a filter, passing it a JdbcTemplate, so that the filter itself can fire a JDBC query and accept the message only if the record doesn't exist.
2. Use an outbound-channel-adapter whose insert query also checks for data existence, something like an insert-if. I am not sure whether Oracle has something like this; I am researching.
Please point me to an example or documentation or tell me the best way.
Thanks
Actually you can use
<chain>
<header-enricher>
<header name="original" expression="payload"/>
</header-enricher>
<int-jdbc:outbound-gateway query="SELECT count(*) from TABLE where ..."/>
<filter expression="payload == 0"/>
<transformer expression="headers.original"/>
</chain>
On the other hand, a <filter> with direct JdbcTemplate usage is a good choice too.
Regarding insert-if: it can work too if you have a unique constraint on the table. In that case an Exception will be thrown, and since you have <publish-subscribe-channel ignore-failures="false">, that would work as well.
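On the Oracle side, a conditional insert can also be expressed directly with a MERGE statement, so the adapter keeps a single query and no separate existence check is needed. A minimal sketch, with hypothetical table and column names:
```sql
-- Insert into TABLE_2 only when no row with that key exists yet
MERGE INTO TABLE_2 t
USING (SELECT ? AS id, ? AS name FROM dual) s
ON (t.id = s.id)
WHEN NOT MATCHED THEN
  INSERT (id, name) VALUES (s.id, s.name)
```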

JDBC selecting a sequence

I need to call a web service, and the response needs to be inserted into a DB parent table which has a sequence as its key. I also need to select that just-inserted sequence number and insert the data into 2 child tables, all in one transaction. How can this be achieved? I can do all the inserts in a transaction, but I need to do a select to get the sequence after the first insert into the parent table. Any help would be greatly appreciated.
You can accomplish this by wrapping all of your calls into a transaction. There are several exception strategies available, but it sounds like a simple rollback strategy would work for you. If any of the calls in the transactional block generate an exception, the exception strategy will be triggered. Keep in mind that if you want your web service call to throw an exception on failure, you will need to check the status code and generate the exception if it isn't what you expect.
<transactional action="ALWAYS_BEGIN" doc:name="Transactional">
<jdbc-ee:outbound-endpoint exchange-pattern="request-response" queryKey="getSequenceNumber" queryTimeout="-1" connector-ref="myConnector" doc:name="Database">
<jdbc-ee:transaction action="BEGIN_OR_JOIN" />
</jdbc-ee:outbound-endpoint>
<http:outbound-endpoint exchange-pattern="request-response" host="${webServiceHost}"
port="${webServicePort}"
path="${webServicePath}"
method="GET" doc:name="HTTP">
</http:outbound-endpoint>
<jdbc-ee:outbound-endpoint exchange-pattern="request-response" queryKey="createRecord" queryTimeout="-1" connector-ref="myConnector" doc:name="Database">
<jdbc-ee:transaction action="BEGIN_OR_JOIN" />
</jdbc-ee:outbound-endpoint>
<rollback-exception-strategy doc:name="Rollback Exception Strategy" />
</transactional>
You can read more about transactions here: http://www.mulesoft.org/documentation/display/current/Transaction+Management
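For reference, the queries behind keys like getSequenceNumber can be plain Oracle sequence calls; a sketch assuming a hypothetical sequence named PARENT_SEQ:
```sql
-- getSequenceNumber: fetch the key up front, then reuse it for the
-- parent insert and both child inserts within the same transaction
SELECT PARENT_SEQ.NEXTVAL AS SEQ_ID FROM DUAL

-- alternatively, re-read the value the parent insert just consumed;
-- CURRVAL is session-scoped, so it is safe on a single connection
SELECT PARENT_SEQ.CURRVAL AS SEQ_ID FROM DUAL
```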

Hibernate reverse engineers (oracle) NUMBER to zero-precision big_decimal

When reverse engineering a column in an Oracle view with data type "NUMBER", the resultant column in *.hbm.xml is a big_decimal with precision="0".
I also use these mapping files with Derby to do acceptance tests, but from the Derby docs:
The precision must be between 1 and 31.
I do not have control over the definition of the view. I read through the reverse engineering docs and I can't see a way of controlling the precision.
How do I instruct hibernate to give me a valid (derby) precision?
You should modify hibernate.reveng.xml as in the example below to match your needs (precision, scale), set the output type (hibernate-type), and then generate the classes and mapping files.
<hibernate-reverse-engineering>
<type-mapping>
<sql-type jdbc-type="NUMERIC" precision="1" scale="0" hibernate-type="boolean"/>
<sql-type jdbc-type="NUMERIC" precision="22" scale="0" hibernate-type="long"/>
<sql-type jdbc-type="OTHER" hibernate-type="..."/>
</type-mapping>
....
</hibernate-reverse-engineering>

How do I create an index online with liquibase?

I have a migration that will create an index in a table of our Oracle database. The DBA would like the index to be created ONLINE. (Something like CREATE INDEX ..... ONLINE;) Is there something I can add to the tag below to accomplish what I want or do I need to write the SQL into the migration?
<createIndex tableName="USER" indexName="IX_USER_TX_ID">
<column name="TX_ID" />
</createIndex>
There is no standard attribute to specify it as ONLINE. Your 3 options to create it as ONLINE are:
1. Fall back to the raw SQL tag (<sql>), where you specify the exact SQL you want.
2. Use the modifySql tag to append to the generated SQL.
3. Create an extension to createIndex that always generates ONLINE at the end of the SQL, or that adds a parameter to createIndex letting you control whether it is added.
Option #2 is probably the best mix of easy yet flexible, and would look something like this:
<changeSet id="1" author="nvoxland">
<createIndex tableName="USER" indexName="IX_USER_TX_ID">
<column name="TX_ID" />
</createIndex>
<modifySql dbms="oracle">
<append value=" ONLINE"/>
</modifySql>
</changeSet>
Notice the leading space in the value attribute; append does a very simple text append with no built-in logic.
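For completeness, option #1 (falling back to raw SQL) could look like the sketch below; the changeSet id and author are placeholders:
```xml
<changeSet id="2" author="nvoxland" dbms="oracle">
<sql>CREATE INDEX IX_USER_TX_ID ON USER (TX_ID) ONLINE</sql>
</changeSet>
```
The dbms="oracle" attribute restricts the changeSet to Oracle, since the ONLINE clause is Oracle syntax here.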

Mapping a long text string in Oracle and NHibernate

Using NHibernate 3.1 with both SQL Server and Oracle DBs, we need to store a text string that is longer than 4000 characters. The text is actually XML, but that is not important - we just want to treat it as raw text. With SQL Server, this is easy. We declare the column as NVARCHAR(MAX) and map it thusly:
<property name="MyLongTextValue" length="100000"/>
The use of the length property tells NHibernate to expect a string that may be longer than 4000 characters.
For the life of me, I cannot figure out how to make this work on Oracle 11g. I've tried declaring the column as both XMLTYPE and LONG with no success. In the first case, we end up with ORA-01461: can bind a LONG value only for insert into a LONG column when trying to insert a row. In the second case, the data is inserted correctly but comes back as an empty string when querying.
Does anyone know how to make this work? The answer has to be compatible with both SQL Server and Oracle. I'd rather not have to write custom extensions such as user types and driver subclasses. Thanks.
You should use something like this:
<property name="MyLongTextValue" length="100000" type="StringClob" not-null="false"/>
This should work with the Oracle CLOB type and the SQL Server NTEXT type.
Make sure the property on your model is nullable:
public virtual string MyLongTextValue { get; set; }
You should always use the Oracle.DataAccess provider when dealing with CLOBs.
For anyone this may interest, I solved my problem by following step 3 of this article:
3. Using the correct mapping attribute: type="AnsiString"
Normally we can use the default type="String" for CLOB/NCLOB. Try type="AnsiString" if the two steps above do not work.
<property name="SoNhaDuongPho" column="SO_NHA_DUONG_PHO" type="AnsiString"/>
In my case I set it with FluentNHibernate:
.CustomType("AnsiString")
You might be interested in this article.
<property column="`LARGE_STRING`" name="LargeString" type="StringClob" sql-type="NCLOB" />
