Setting defaultRowPrefetch has no effect on query - spring

I've got a weird issue with using Spring JDBC + Oracle 10g. Here's my dataSource config:
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
destroy-method="close">
<property name="driverClassName" value="oracle.jdbc.OracleDriver" />
<property name="url" value="jdbc:oracle:thin:#localhost:1521:XE" />
<property name="username" value="admin" />
<property name="password" value="admin" />
<property name="validationQuery" value="SELECT 1 FROM DUAL"/>
<property name="testOnBorrow" value="true"/>
<property name="connectionProperties" value="defaultRowPrefetch=1000" />
</bean>
At first I thought the connectionProperties value was being applied, but as I fine-tuned the query in SQL Developer (the cost went from 3670 to 285 and the explain plan time went from :45 to :03), the time in the application never moved from the original 15 seconds. Removing the connectionProperties setting had no effect either. So what I did was this:
DAO class
private List<Activity> getAllActivitiesJustJDBC() {
    String query = "select * " + "from activity a, work_order w "
            + "where a.ac_customer = 'CSC' "
            + "and w.wo_customer = a.ac_customer "
            + "and a.ac_workorder = w.wo_workorder ";
    long startTime = System.currentTimeMillis();
    List<Activity> activities = new ArrayList<Activity>();
    try {
        Connection conn = jdbcTemplate.getDataSource().getConnection();
        PreparedStatement st = conn.prepareStatement(query);
        st.setFetchSize(1000);
        ResultSet rs = st.executeQuery();
        ActivityMapper mapper = new ActivityMapper();
        while (rs.next()) {
            Activity activity = mapper.mapRow(rs, 1);
            activities.add(activity);
        }
    } catch (Exception ex) {
        ex.printStackTrace();
    }
    System.out.println("Time it took...."
            + (System.currentTimeMillis() - startTime));
    System.out.println("Number of activities = " + activities.size());
    return activities;
}
This time, fetching the 11,115 rows took about 2 seconds on average. The key statement is setFetchSize(1000). So... I like option #2, but do I need to close the connection, or is Spring handling that for me? In option #1, I would use the jdbcTemplate to call the query method, passing in the parameterized query and a BeanPropertyRowMapper instance for my data object, and then return the List.

OK, from looking at other questions similar to this one, I do need to close everything myself in a finally block, starting with the result set, then the statement, and then the connection. I also remembered (from the days before Spring) that the cleanup has to be wrapped in its own try/catch, and that there is not really anything to do if an exception occurs while closing a connection.
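For what it's worth, on Java 7 and later the same cleanup can also be written with try-with-resources instead of a manual finally block. This is only a sketch, reusing the jdbcTemplate field, the query from above, and the static ActivityMapper.mapRow(ResultSet) helper that the final method below assumes:
private List<Activity> getAllActivitiesTryWithResources() {
    String query = "select * from activity a, work_order w "
            + "where a.ac_customer = 'CSC' "
            + "and w.wo_customer = a.ac_customer "
            + "and a.ac_workorder = w.wo_workorder ";
    List<Activity> activities = new ArrayList<Activity>();
    // Resources declared in the try header are closed automatically,
    // in reverse order, even when an exception is thrown.
    try (Connection conn = jdbcTemplate.getDataSource().getConnection();
         PreparedStatement st = conn.prepareStatement(query)) {
        st.setFetchSize(1000);
        try (ResultSet rs = st.executeQuery()) {
            while (rs.next()) {
                activities.add(ActivityMapper.mapRow(rs));
            }
        }
    } catch (Exception ex) {
        ex.printStackTrace();
    }
    return activities;
}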
FYI, I wonder if there's a way to set the fetch size when defining the datasource using Spring.
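One option that does not require touching the datasource definition: JdbcTemplate itself has a fetchSize property, so the fetch size can be set once where the template is created. A minimal sketch (I have not verified it against this exact driver/pool combination):
// Set the fetch size once on the template; every query run through it
// will then ask the driver to fetch up to 1000 rows per round trip.
JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
jdbcTemplate.setFetchSize(1000);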
Here's the final method:
private List<Activity> getAllActivitiesJustJDBC() {
    String query = "select * " + "from activity a, work_order w "
            + "where a.ac_customer = 'CSC' "
            + "and w.wo_customer = a.ac_customer "
            + "and a.ac_workorder = w.wo_workorder ";
    long startTime = System.currentTimeMillis();
    List<Activity> activities = new ArrayList<Activity>();
    Connection conn = null;
    PreparedStatement st = null;
    ResultSet rs = null;
    try {
        conn = jdbcTemplate.getDataSource().getConnection();
        st = conn.prepareStatement(query);
        st.setFetchSize(1000);
        rs = st.executeQuery();
        while (rs.next()) {
            activities.add(ActivityMapper.mapRow(rs));
        }
    } catch (Exception ex) {
        ex.printStackTrace();
    } finally {
        // Close in reverse order of acquisition; the null checks keep a failed
        // getConnection() from turning into a NullPointerException here.
        try {
            if (rs != null) rs.close();
            if (st != null) st.close();
            if (conn != null) conn.close();
        } catch (Exception ex) {
            // Not much we can do here
        }
    }
    System.out.println("Time it took...."
            + (System.currentTimeMillis() - startTime));
    System.out.println("Number of activities = " + activities.size());
    return activities;
}
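For comparison, option #1 would look roughly like the sketch below: Spring opens and closes the connection, statement and result set itself, and BeanPropertyRowMapper maps columns to Activity properties by name. The method name is made up, and I'm assuming the Activity bean's property names match the column names; with the fetch size set on the template (see the sketch above) it should perform about the same:
private List<Activity> getAllActivitiesWithTemplate() {
    String query = "select * from activity a, work_order w "
            + "where a.ac_customer = ? "
            + "and w.wo_customer = a.ac_customer "
            + "and a.ac_workorder = w.wo_workorder ";
    // JdbcTemplate acquires and releases all JDBC resources itself.
    return jdbcTemplate.query(query,
            new Object[] { "CSC" },
            new BeanPropertyRowMapper<Activity>(Activity.class));
}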

Related

How to make Hibernate Transaction object in recursive call

I am getting the error below in my console when I call a method recursively. The update query runs fine, but the record is not updated in the database.
org.springframework.transaction.TransactionSystemException: Could not commit Hibernate transaction; nested exception is org.hibernate.TransactionException: Transaction not successfully started
public boolean abcMethod() {
    Transaction txn = session.beginTransaction();
    String querySqlSold = "UPDATE abc_table SET inventory_type='SOLD', status='ACTIVE' where set_id="
            + setId + " and game_num=" + gameMaster.getGameNum() + " and priceScheme=" + prizeSchemeId;
    SQLQuery querySold = session.createSQLQuery(querySqlSold);
    querySold.executeUpdate();
    String querySqlSelect = "SELECT set_id FROM abc_table where inventory_type='UPCOMING' and `status`='ACTIVE' and game_num="
            + gameMaster.getGameNum() + " and priceScheme=" + prizeSchemeId;
    List list = session.createSQLQuery(querySqlSelect).list();
    int newSetId = Integer.valueOf(list.get(0).toString());
    if (newSetId != 0) {
        String querySqlCurrent = "UPDATE abc_table SET inventory_type='CURRENT' where game_num="
                + gameMaster.getGameNum() + " and priceScheme=" + prizeSchemeId + " and set_id=" + newSetId;
        SQLQuery queryCurrent = session.createSQLQuery(querySqlCurrent);
        queryCurrent.executeUpdate();
        txn.commit();
        return true;
    } else {
        JSONObject jsonObject = new JSONObject();
        jsonObject.put("errorCode", "809");
        jsonObject.put("errorMsg", "finished");
        throw new CustomException(jsonObject.toString());
    }
}

public void xyzMethod() {
    abcMethod();
    abcMethod();
}
You might try something like this
try {
    tx = session.beginTransaction();
    if (!tx.wasCommitted()) {
        tx.commit();
    }
} catch (Exception exp) {
    tx.rollback();
}
It should help you understand the problem better.
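Another way to look at it (my own sketch, not the answer above): make sure every path through abcMethod either commits or rolls back the transaction it began, so a second call never finds a transaction left half-finished. This assumes Hibernate's Transaction API, including Transaction.isActive():
public boolean abcMethod() {
    Transaction txn = session.beginTransaction();
    try {
        // ... the UPDATE and SELECT statements from the question go here ...
        txn.commit();
        return true;
    } catch (RuntimeException ex) {
        // Roll back whatever this call started before propagating the error;
        // otherwise the next beginTransaction()/commit() pair is out of sync.
        if (txn.isActive()) {
            txn.rollback();
        }
        throw ex;
    }
}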

System.NullReferenceException: Object reference not set to an instance of an object?

I am writing a plugin that deletes records between two dates when a contract is cancelled... The records to be deleted are from the cancellation date to the end of the contract. Here is the code I am using:
using System;
using System.Linq;
using System.ServiceModel;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Query;
/// <summary>
/// This plugin will trim off unit orders after a contract is cancelled before the end of the contract duration
/// </summary>
namespace DCWIMS.Plugins
{
    [CrmPluginRegistration(MessageNameEnum.Update,
        "contract",
        StageEnum.PostOperation,
        ExecutionModeEnum.Asynchronous,
        "statecode",
        "Post-Update On Cancel Contract",
        1000,
        IsolationModeEnum.Sandbox,
        Image1Name = "PreImage",
        Image1Type = ImageTypeEnum.PreImage,
        Image1Attributes = "")]
    public class UnitPluginOnCancel : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            // Extract the tracing service for use in debugging sandboxed plug-ins.
            // Will be registering this plugin, thus will need to add tracing service related code.
            ITracingService tracing = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
            // Obtain the execution context from the service provider.
            IPluginExecutionContext context = (IPluginExecutionContext)
                serviceProvider.GetService(typeof(IPluginExecutionContext));
            // The InputParameters collection contains all the data passed in the message request.
            if (context.InputParameters.Contains("Target") &&
                context.InputParameters["Target"] is Entity)
            {
                Entity entity = (Entity)context.InputParameters["Target"];
                // Get the before image of the updated contract.
                Entity PreImage = context.PreEntityImages["PreImage"];
                // Verify that the target is the contract entity and that the contract has been cancelled.
                if (entity.LogicalName != "contract" || entity.GetAttributeValue<OptionSetValue>("statecode").Value != 4)
                    return;
                // Obtain the organization service for web service calls.
                IOrganizationServiceFactory serviceFactory =
                    (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
                IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId);
                //Core Plugin code in Try Block
                try
                {
                    //Get Contract line start date
                    var startDate = entity.GetAttributeValue<DateTime>("cancelon");
                    //Get Contract Line End Date
                    DateTime endDate = (DateTime)PreImage["expireson"];
                    //Get Contract range into weekdays list
                    Eachday range = new Eachday();
                    var weekdays = range.WeekDay(startDate, endDate);
                    //Get Unit Order Lookup Id
                    EntityReference unitOrder = (EntityReference)PreImage.Attributes["new_unitorderid"];
                    var unitOrderId = unitOrder.Id;
                    var unitOrders = service.Retrieve(unitOrder.LogicalName, unitOrder.Id, new ColumnSet("new_name"));
                    var uiName = unitOrders.GetAttributeValue<string>("new_name");
                    //Get Entity Collection to delete
                    string fetchXml = @" <fetch version='1.0' output-format='xml-platform' mapping='logical' distinct='false' top='2000'>
                        <entity name='new_units'>
                          <link-entity name='new_alterunitorder' from ='new_orderlineid' to = 'new_unitsid' >
                            <attribute name='new_alterunitorderid' />
                            <filter type='and'>
                              <condition attribute='new_orderdate' operator='on-or-after' value='" + startDate.ToShortDateString() + @"' />
                              <condition attribute='new_orderdate' operator='on-or-before' value='" + endDate.ToShortDateString() + @"' />
                              <condition attribute='new_orderlineid' operator='eq' uiname='" + uiName + @"' uitype='new_units' value='" + unitOrderId + @"' />
                            </filter>
                          </link-entity>
                        </entity>
                        </fetch>";
                    var result = service.RetrieveMultiple(new FetchExpression(fetchXml));
                    var entityRefs = result.Entities.Select(e => e.GetAttributeValue<EntityReference>("new_alterunitorderid"));
                    var batchSize = 1000;
                    var batchNum = 0;
                    var numDeleted = 0;
                    while (numDeleted < entityRefs.Count())
                    {
                        var multiReq = new ExecuteMultipleRequest()
                        {
                            Settings = new ExecuteMultipleSettings()
                            {
                                ContinueOnError = false,
                                ReturnResponses = false
                            },
                            Requests = new OrganizationRequestCollection()
                        };
                        var currentList = entityRefs.Skip(batchSize * batchNum).Take(batchSize).ToList();
                        currentList.ForEach(r => multiReq.Requests.Add(new DeleteRequest { Target = r }));
                        service.Execute(multiReq);
                        numDeleted += currentList.Count;
                        batchNum++;
                    }
                }
                catch (FaultException<OrganizationServiceFault> ex)
                {
                    throw new InvalidPluginExecutionException("An error occured.. Phil is responsible. ", ex);
                }
                catch (Exception ex)
                {
                    tracing.Trace("An error occured: {0}", ex.ToString());
                    throw;
                }
            }
        }
    }
}
I am getting a NullReferenceException on line 55... I have literally used the same line in a previous plugin without any problems. The statecode for a cancelled contract has a value of 4, and I only want the plugin to execute when the contract has been cancelled. Here is an image of the debugging.
I have used this statement before in another plugin that acts on the contract entity and it worked fine; I don't know why it isn't working this time. Here is the statement:
// Verify that the target is the contract entity and that the contract has been cancelled.
if (entity.LogicalName != "contract" || entity.GetAttributeValue<OptionSetValue>("statecode").Value != 4)
    return;
The entity you get from the input parameters may not have StateCode in it, so .Value is failing.
Maybe try entity.GetAttributeValue<OptionSetValue>("statecode") or entity.Contains("statecode") to see if there's anything there before you dereference .Value.
Since you have the PreImage, you may want to look there for the statecode.
Check to see if you are using the right profile while debugging... If you are using the wrong profile, a null reference will be thrown from a method when it shouldn't!
Hope that helps!

Spring ResourceBundleMessageSource Encoding for Czech MessageResource.getMessage()

I have a converter which reads properties from .properties files using ResourceBundleMessageSource for multiple locales, e.g. en_US, fr_FR, cs_CZ.
Below is the XML used to read the properties.
<bean id="messageSource"
class="org.springframework.context.support.ResourceBundleMessageSource">
<property name="basenames">
<list>
<value>lang/beneficiaryproperty/beneficiary</value>
<value>lang/labelsbundlesproperty/labelsbundle</value>
</list>
</property>
</bean>
And below is the code that reads the messages and writes the JavaScript file using the File API in Java.
String localeAr[] = lang.split("_");
Locale currentLocale = new Locale(localeAr[0].trim(), localeAr[1].trim());
file = new File(outputPath + File.separator + jsFilename + "_" + lang + ".js");
System.out.println(file.getAbsolutePath());
System.out.println();
file.createNewFile();
buffer.append(disclaimerDetails);
buffer.append("var " + var + " = \n");
buffer.append("{ \n");
Iterator iterator = tplPropObj.keySet().iterator();
while (iterator.hasNext())
{
    String key = (String) iterator.next();
    System.out.println("Key : " + key + " - Label : " + tplPropObj.getProperty(key) + " - Locale : " + currentLocale);
    String value = messageSource.getMessage(tplPropObj.getProperty(key), new Object[] { }, currentLocale);
    System.out.println("- Value : " + value);
    buffer.append("\t" + "\"" + key + "\": ");
    buffer.append("\"" + value + "\"");
    if (iterator.hasNext())
    {
        buffer.append(",");
    }
    buffer.append("\n");
}
buffer.append("} ");
if (localeAr[1].equalsIgnoreCase("PL") || localeAr[1].equalsIgnoreCase("CZ"))
{
    Writer out = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(file), "UTF-8"));
    out.write(buffer.toString());
    out.flush();
    out.close();
    buffer.delete(0, buffer.length());
}
else
{
    System.setProperty("file.encoding", "ISO-8859-1");
    BufferedWriter bufferedWriter = new BufferedWriter(new FileWriter(file));
    bufferedWriter.write(buffer.toString());
    bufferedWriter.flush();
    bufferedWriter.close();
    buffer.delete(0, buffer.length());
}
String value = messageSource.getMessage(tplPropObj.getProperty(key),
        new Object[] { }, currentLocale);
The line above reads the properties for the cs_CZ locale from the file
beneficiary_cs_CZ.properties
Below are the contents of the beneficiary_cs_CZ.properties file. They are saved in STS using UTF-8 encoding for the Czech language.
lbl.beneficiary.name=Z technických důvodů zavřeno.
lbl.beneficiary.number=123456
lbl.beneficiary.loc=Prosím vás, kde je divadlo
lbl.beneficiary.owner=Mã Người Ký Phát
Up to this point everything is OK. But when I read these values from the MessageSource object, it returns different values.
Below are the values generated from the MessageSource object.
Key : Beneficiary.Name - Label : lbl.beneficiary.name - Locale : cs_CZ
- Value : Z technických důvodů zavÅ?eno.
Key : Beneficiary.Number - Label : lbl.beneficiary.number - Locale : cs_CZ
- Value : 123456
Key : Beneficiary.Owner - Label : lbl.beneficiary.owner - Locale : cs_CZ
- Value : Mã Ngư�i Ký Phát
I don't understand why this is happening if I am reading the values from the MessageSource using the locale.
Any help is appreciated.
M. Deinum is right: you should not use UTF-8 file encoding for Java property files. (This has changed in Java 9, see PowerStat's comment.) Instead you should use escape sequences for UTF-8 characters within the file (https://stackoverflow.com/a/4660058/280244).
But (I do not recommend this) you can use UTF-8 encoded message files with Spring. The key is that you need to configure the MessageSource:
<bean id="messageSource" class="org.springframework.context.support.ReloadableResourceBundleMessageSource">
<property name="defaultEncoding" value="UTF-8"/>
<property name="basenames">...</property>
</bean>
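For reference, the Java-config equivalent would be roughly the following sketch; the basenames are the ones from the question, prefixed with classpath: because ReloadableResourceBundleMessageSource resolves them as resource locations:
@Bean
public MessageSource messageSource() {
    ReloadableResourceBundleMessageSource messageSource = new ReloadableResourceBundleMessageSource();
    // Read the bundle files as UTF-8 instead of the ISO-8859-1 default.
    messageSource.setDefaultEncoding("UTF-8");
    messageSource.setBasenames(
            "classpath:lang/beneficiaryproperty/beneficiary",
            "classpath:lang/labelsbundlesproperty/labelsbundle");
    return messageSource;
}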

Apache Ignite indexing performance

I have a cache with a String key and TileKey (class below) as the value. I've noticed that when I execute a query (below), the performance is affected almost linearly by the cache size, even though all the fields used in the query are indexed.
Here is a representative benchmark - I've used the same query (below) with the same parameters for all benchmarks :
The query returns (the same) 30 entries in all benchmarks
Query on 5350 entries cache took 6-7ms
Query on 10700 entries cache took 8-10ms
Query on 48150 entries cache took 30-42ms
Query on 96300 entries cache took 50-70ms
I've executed the benchmark with a single 8 GB node and with two 4 GB nodes; the results were pretty much the same (in terms of query speed relative to cache size).
I've also tried QuerySqlField.Group, using the "time" field as the first group field; it should reduce the result set to only 1000 entries in all benchmarks. I'm not sure that this is the right usage of a group index, as from my understanding it should mainly be used for join queries between caches.
Am I doing something wrong or these are the expected query performance using Ignite indexing?
Code :
String strQuery = "time = ? and zoom = ? and x >= ? and x <= ? and y >= ? and y <= ?";
SqlQuery<String, TileKey> query = new SqlQuery<String, TileKey>(TileKey.class, strQuery);
query.setArgs(time, zoom, xMin, xMax, yMin, yMax);
QueryCursor<Entry<String, TileKey>> tileKeyCursor = tileKeyCache.query(query);
Map<String, TileKey> tileKeyMap = new HashMap<String, TileKey>();
for (Entry<String, TileKey> p : tileKeyCursor) {
    tileKeyMap.put(p.getKey(), p.getValue());
}
Cache config :
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="KeysCache" />
<property name="cacheMode" value="PARTITIONED" />
<property name="atomicityMode" value="ATOMIC" />
<property name="backups" value="0" />
<property name="queryIndexEnabled" value="true"/>
<property name="indexedTypes">
<list>
<value>java.lang.String</value>
<value>org.ess.map.TileKey</value>
</list>
</property>
</bean>
Class :
@QueryGroupIndex.List(@QueryGroupIndex(name = "idx1"))
public class TileKey implements Serializable {
    /**
     *
     */
    private static final long serialVersionUID = 1L;
    private String id;
    @QuerySqlField(index = true)
    @QuerySqlField.Group(name = "idx1", order = 0)
    private int time;
    @QuerySqlField(index = true)
    @QuerySqlField.Group(name = "idx1", order = 1)
    private int zoom;
    @QuerySqlField(index = true)
    @QuerySqlField.Group(name = "idx1", order = 2)
    private int x;
    @QuerySqlField(index = true)
    @QuerySqlField.Group(name = "idx1", order = 3)
    private int y;
    @QuerySqlField(index = true)
    private boolean inCache;
}
I have found the problem. Thank you bobby_brew for leading me in the right direction.
The indexing example in the Ignite documentation is incorrect; there is an open issue about it.
I've changed the indexed field annotations from
@QuerySqlField(index = true)
@QuerySqlField.Group(name = "idx1", order = x)
to
@QuerySqlField(index = true, orderedGroups = {@QuerySqlField.Group(name = "idx1", order = x)})
and now the query duration is a solid 2 ms in all scenarios.
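Put together, the corrected class would look roughly like the sketch below, keeping the single group index "idx1" and the same field order as in the question:
@QueryGroupIndex.List(@QueryGroupIndex(name = "idx1"))
public class TileKey implements Serializable {
    private static final long serialVersionUID = 1L;

    private String id;

    // Each field keeps its own index and also participates in the composite group index "idx1".
    @QuerySqlField(index = true, orderedGroups = {@QuerySqlField.Group(name = "idx1", order = 0)})
    private int time;

    @QuerySqlField(index = true, orderedGroups = {@QuerySqlField.Group(name = "idx1", order = 1)})
    private int zoom;

    @QuerySqlField(index = true, orderedGroups = {@QuerySqlField.Group(name = "idx1", order = 2)})
    private int x;

    @QuerySqlField(index = true, orderedGroups = {@QuerySqlField.Group(name = "idx1", order = 3)})
    private int y;

    @QuerySqlField(index = true)
    private boolean inCache;
}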

glassfish 3.1.2 - ResultSetWrapper40 cannot be cast to oracle.jdbc.OracleResultSet

I recently migrated from GlassFish 3.1.1 to 3.1.2 and I am getting the following error
java.lang.ClassCastException: com.sun.gjc.spi.jdbc40.ResultSetWrapper40 cannot be cast to oracle.jdbc.OracleResultSet
at the line
oracle.sql.BLOB bfile = ((OracleResultSet) rs).getBLOB("filename");
in the following routine:
public void fetchPdf(int matricola, String anno, String mese, String tableType, ServletOutputStream os) {
    byte[] buffer = new byte[2048];
    String query = "SELECT filename FROM "
            + tableType + " where matricola = " + matricola
            + " and anno = " + anno
            + ((tableType.equals("gf_blob_ced") || tableType.equals("gf_blob_car")) ? " and mese = " + mese : "");
    InputStream ins = null;
    //--------
    try {
        Connection conn = dataSource.getConnection();
        //Connection conn = DriverManager.getConnection(connection, "glassfish", pwd);
        java.sql.Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery(query);
        if (rs.next()) {
            logger.info("select ok " + query);
            oracle.sql.BLOB bfile = ((OracleResultSet) rs).getBLOB("filename");
            ins = bfile.getBinaryStream();
            int length;
            while ((length = (ins.read(buffer))) >= 0) {
                os.write(buffer, 0, length);
            }
            ins.close();
        } else {
            logger.info("select Nok " + query);
        }
        rs.close();
        stmt.close();
        //conn.close();
    } catch (IOException ex) {
        logger.warn("blob file non raggiungibile: " + query);
    } catch (SQLException ex) {
        logger.warn("connessione non riuscita");
    }
}
I'm using the GlassFish connection pool
@Resource(name = "jdbc/ape4")
private DataSource dataSource;
and the jdbc/ape4 resource belongs to an Oracle connection pool with the following parameters:
NetworkProtocol tcp
LoginTimeout 0
PortNumber 1521
Password xxxxxxxx
MaxStatements 0
ServerName server
DataSourceName OracleConnectionPoolDataSource
URL jdbc:oracle:thin:@server:1521:APE4
User glassfish
ExplicitCachingEnabled false
DatabaseName APE4
ImplicitCachingEnabled false
The Oracle driver is ojdbc6.jar and the Oracle DB is 10g.
Could anyone help me understand what is happening? On GlassFish 3.1.1 it was working fine.
There is no need for the Oracle-specific API in this code. You are not using any Oracle-specific functionality, so rs.getBlob("filename").getBinaryStream() will work just as well.
If you insist on keeping it, turn off the JDBC object wrapping option for your datasource.
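Concretely, the portable version of the BLOB-reading block would look something like this sketch of the suggestion above, using the variable names from the question:
if (rs.next()) {
    logger.info("select ok " + query);
    // java.sql.Blob works with the wrapped result set; no cast to OracleResultSet needed.
    java.sql.Blob blob = rs.getBlob("filename");
    ins = blob.getBinaryStream();
    int length;
    while ((length = ins.read(buffer)) >= 0) {
        os.write(buffer, 0, length);
    }
    ins.close();
}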
