Application not loading data from Hibernate - Spring

Our application has two servers, A and B, behind a load balancer. The code deployed to the two WebLogic servers is identical, yet when a page is loaded from one server it displays correctly, while the same page loaded from the second server returns:
Error 500--Internal Server Error
The WAR file is the same on both WebLogic servers, but when I check the logs I can see the following exception:
org.hibernate.HibernateException: Problem while trying to load or access OracleTypes.CURSOR value
at org.hibernate.dialect.Oracle8iDialect.registerResultSetOutParameter(Oracle8iDialect.java:399)
at org.hibernate.loader.Loader.prepareQueryStatement(Loader.java:1586)
at org.hibernate.loader.Loader.doQuery(Loader.java:696)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:259)
at org.hibernate.loader.Loader.doList(Loader.java:2228)
at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2125)
at org.hibernate.loader.Loader.list(Loader.java:2120)
at org.hibernate.loader.custom.CustomLoader.list(CustomLoader.java:312)
at org.hibernate.impl.SessionImpl.listCustomQuery(SessionImpl.java:1722)
at org.hibernate.impl.AbstractSessionImpl.list(AbstractSessionImpl.java:165)
at org.hibernate.impl.SQLQueryImpl.list(SQLQueryImpl.java:175)
So I went straight to the dialect code in Oracle8iDialect.java inside the hibernate-3.2.7.ga jar file. Hibernate uses the following code to register the Oracle cursor type:
public int registerResultSetOutParameter(CallableStatement statement, int col) throws SQLException {
    // register the type of the out param - an Oracle specific type
    statement.registerOutParameter( col, getOracleCursorTypeSqlType() );
    col++;
    return col;
}
There is nothing in this version of Oracle8iDialect.java that can throw the exception "Problem while trying to load or access OracleTypes.CURSOR value", so I kept digging and found another class with the same name, Oracle8iDialect, inside the z_easybeans-uberjar-hibernate-1.1.0-M3-JONAS.jar file. I think the two jars conflict and the class loader resolves the class from the wrong one: at runtime WebLogic picks up the Oracle8iDialect class from z_easybeans-uberjar-hibernate-1.1.0-M3-JONAS.jar instead of the correct class in the hibernate-3.2.7.ga jar.
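One way to confirm which jar the dialect is actually loaded from on each server (a small diagnostic of my own, not part of the original investigation) is to ask the class for its code source:

public class WhichJar {
    public static void main(String[] args) throws Exception {
        // Prints the jar that the visible Oracle8iDialect class was loaded from;
        // run this with the application's classpath on both servers and compare.
        Class<?> dialect = Class.forName("org.hibernate.dialect.Oracle8iDialect");
        System.out.println(dialect.getProtectionDomain().getCodeSource().getLocation());
    }
}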
Here is the dialect code in Oracle8iDialect.java inside z_easybeans-uberjar-hibernate-1.1.0-M3-JONAS.jar:
public int registerResultSetOutParameter(java.sql.CallableStatement statement, int col) throws SQLException {
    if (oracletypes_cursor_value == 0) {
        try {
            Class types = ReflectHelper.classForName("oracle.jdbc.driver.OracleTypes");
            oracletypes_cursor_value = types.getField("CURSOR").getInt(types.newInstance());
        } catch (Exception se) {
            throw new HibernateException("Problem while trying to load or access OracleTypes.CURSOR value", se);
        }
    }
    // register the type of the out param - an Oracle specific type
    statement.registerOutParameter(col, oracletypes_cursor_value);
    col++;
    return col;
}
Maybe a different version of Hibernate is bundled in that jar, and that causes the conflict on the second server. Can anyone please suggest a solution to this problem?

I found the answer with a friend's help. I changed the dialect code in Oracle8iDialect.java inside z_easybeans-uberjar-hibernate-1.1.0-M3-JONAS.jar:
public int registerResultSetOutParameter(java.sql.CallableStatement statement, int col) throws SQLException {
    if (oracletypes_cursor_value == 0) {
        try {
            // changed: oracle.jdbc.OracleTypes instead of oracle.jdbc.driver.OracleTypes
            Class types = ReflectHelper.classForName("oracle.jdbc.OracleTypes");
            // changed: getInt(null) instead of getInt(types.newInstance())
            oracletypes_cursor_value = types.getField("CURSOR").getInt(null);
        } catch (Exception se) {
            throw new HibernateException("Problem while trying to load or access OracleTypes.CURSOR value", se);
        }
    }
    // register the type of the out param - an Oracle specific type
    statement.registerOutParameter(col, oracletypes_cursor_value);
    col++;
    return col;
}
Then I compiled that dialect file, deleted the old class file from z_easybeans-uberjar-hibernate-1.1.0-M3-JONAS.jar, added the newly compiled class in its place, rebuilt the WAR, and activated it from WebLogic. After that it worked normally.
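For what it's worth, two things changed in that fix: the class name (newer Oracle JDBC drivers expose the constants as oracle.jdbc.OracleTypes rather than oracle.jdbc.driver.OracleTypes) and the reflective field access. Since CURSOR is a static field, reflection accepts a null receiver, so there is no need to instantiate OracleTypes at all. A standalone sketch of just that reflection detail (plain JDK, with a stand-in constant instead of the Oracle driver):

import java.lang.reflect.Field;

public class StaticFieldLookup {
    public static final int CURSOR = -10; // stand-in for OracleTypes.CURSOR

    public static void main(String[] args) throws Exception {
        Field f = StaticFieldLookup.class.getField("CURSOR");
        // For static fields the receiver argument is ignored, so null works;
        // no newInstance() call on the declaring class is required.
        System.out.println(f.getInt(null)); // prints -10
    }
}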


Profiling Hadoop

UPDATE:
I had emailed Shevek, the founder of Karmasphere, for help. He gave a presentation on Hadoop profiling at ApacheCon 2011. He advised me to look for Throwable. A catch block for Throwable shows:
localhost: java.lang.IncompatibleClassChangeError: class com.kannan.mentor.sourcewalker.ClassInfoGatherer has interface org.objectweb.asm.ClassVisitor as super class
localhost: at java.lang.ClassLoader.defineClass1(Native Method)
localhost: at java.lang.ClassLoader.defineClass(ClassLoader.java:792)
Hadoop has the ASM 3.2 jar and I am using ASM 5.0. In 5.0, ClassVisitor is a superclass, and in 3.2 it is an interface. I am planning to change my profiler to 3.2. Is there any better way to fix this issue?
BTW, Shevek is super cool. A founder and CEO, responding to some anonymous guy's emails. Imagine that.
END UPDATE
I am trying to profile Hadoop (JobTracker, NameNode, DataNode, etc.). I created a profiler using ASM 5 and tested it on Spring, and everything works fine. Then I tested the profiler on Hadoop in pseudo-distributed mode.
@Override
public byte[] transform(ClassLoader loader, String className,
        Class<?> classBeingRedefined, ProtectionDomain protectionDomain,
        byte[] classfileBuffer) throws IllegalClassFormatException {
    try {
        /*1*/ System.out.println(" inside transformer " + className);
        ClassReader cr = new ClassReader(classfileBuffer);
        ClassWriter cw = new ClassWriter(ClassWriter.COMPUTE_MAXS);
        /* c-start */ // CheckClassAdapter cxa = new CheckClassAdapter(cw);
        ClassVisitor cv = new ClassInfoGatherer(cw);
        /* c-end */ cr.accept(cv, ClassReader.EXPAND_FRAMES);
        byte[] b = cw.toByteArray();
        /*2*/ System.out.println(" inside transformer - returning" + b.length);
        return b;
    } catch (Exception e) {
        System.out.println(" class might not be found " + e.getMessage());
        try {
            throw new ClassNotFoundException(className, e);
        } catch (ClassNotFoundException e1) {
            // TODO Auto-generated catch block
            e1.printStackTrace();
        }
    }
    return null;
}
I can see the first sysout statement printed but not the second one, and there is no error either. If I comment out the code from /* c-start */ to /* c-end */ and replace cw with classfileBuffer, I can see the second sysout statement. The moment I uncomment the line
ClassVisitor cv = new ClassInfoGatherer(cw);
ClassInfoGatherer constructor:
public ClassInfoGatherer(ClassVisitor cv) {
    super(ASM5, cv);
}
I am not seeing the second sysout statement. What am I doing wrong here? Is Hadoop swallowing my sysouts? I tried System.err too, and even if that were the case, why can I see the first sysout statement? Any suggestion would be helpful. I think I am missing something simple and obvious here, but I can't figure it out.
The following lines were added to hadoop-env.sh:
export HADOOP_NAMENODE_OPTS="-javaagent:path to jar $HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-javaagent:path to jar $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-javaagent:path to jar $HADOOP_DATANODE_OPTS"
export HADOOP_BALANCER_OPTS="-javaagent:path to jar $HADOOP_BALANCER_OPTS"
export HADOOP_JOBTRACKER_OPTS="-javaagent:path to jar $HADOOP_JOBTRACKER_OPTS"
Hadoop ships asm 3.2 and I was using ASM 5. In ASM 5, ClassVisitor is a superclass; in 3.2 it is an interface. The error was therefore a Throwable, not an Exception (credit to Shevek), and my catch block was only catching Exceptions. The Throwable wasn't captured in any of the Hadoop logs, so it was very tough to debug. I used Jar Jar Links to fix the asm version issue, and everything works fine now.
If you are using Hadoop and something is not working with no logs showing any error, then please try to catch Throwable.
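For example, the transform method from the question could be reworked to catch Throwable like this (a minimal sketch of just the error handling, same class as above):

@Override
public byte[] transform(ClassLoader loader, String className,
        Class<?> classBeingRedefined, ProtectionDomain protectionDomain,
        byte[] classfileBuffer) throws IllegalClassFormatException {
    try {
        ClassReader cr = new ClassReader(classfileBuffer);
        ClassWriter cw = new ClassWriter(ClassWriter.COMPUTE_MAXS);
        cr.accept(new ClassInfoGatherer(cw), ClassReader.EXPAND_FRAMES);
        return cw.toByteArray();
    } catch (Throwable t) {
        // IncompatibleClassChangeError is an Error, not an Exception, so the
        // original catch (Exception e) silently missed it.
        t.printStackTrace();
        return null; // null tells the JVM to keep the original class bytes
    }
}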
Arun

Hibernate flush optimization using `hibernate.ejb.use_class_enhancer`

I am trying to use the Hibernate feature that enhances flush performance without requiring code changes. I came across the option hibernate.ejb.use_class_enhancer and made the following changes:
1) Enabled the property hibernate.ejb.use_class_enhancer by setting it to true.
The build failed with the error 'Cannot apply class transformer without LoadTimeWeaver specified'.
2) I added <context:load-time-weaver/> to the context files.
The build then failed with the following error:
Specify a custom LoadTimeWeaver or start your Java virtual machine with Spring's agent: -javaagent:spring-agent.jar
3) I added the following to the maven-surefire-plugin
javaagent:${settings.localRepository}/org/springframework/spring-
agent/2.5.6.SEC03/spring-agent-2.5.6.SEC03.jar
the build is successful now.
We have an interceptor that tracks the number of entities being flushed in a transaction. After the above changes, I expected that number to come down significantly, but it did not.
My questions are:
Are the above changes correct/enough to get the entity flush optimization?
How can I verify that the application is indeed using the optimization?
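One check I can think of (my own sketch, using the SelfDirtinessTracker interface that appears in the Hibernate 4.3 code further down; AbcDO is a placeholder entity name) is to test whether a loaded entity actually got enhanced:

// If class enhancement took effect, entity instances implement Hibernate's
// tracking interface; if this prints false, the transformer never touched them.
Object entity = session.get(AbcDO.class, id); // AbcDO: placeholder entity
System.out.println("enhanced: "
        + (entity instanceof org.hibernate.engine.spi.SelfDirtinessTracker));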
Edit:
After debugging, I found the following: there is a point where our DO class is submitted for transformation, but the logic that decides whether a given class should be transformed does not handle the class names correctly (in my case), so the DO class passes through untransformed.
Is there a way I can pass in my own logic instead?
The relevant code is below. The check return copyEntities.contains( className ); comes out false for the following inputs: copyEntities contains the strings "com.x.y.abcDO" and "com.x.y.asxDO", whereas className is "com.x.y.abcDO_$$_jvsteb8_48".
public InterceptFieldClassFileTransformer(List<String> entities) {
    final List<String> copyEntities = new ArrayList<String>( entities.size() );
    copyEntities.addAll( entities );
    classTransformer = Environment.getBytecodeProvider().getTransformer(
            //TODO change it to a static class to make it faster?
            new ClassFilter() {
                public boolean shouldInstrumentClass(String className) {
                    return copyEntities.contains( className );
                }
            },
            //TODO change it to a static class to make it faster?
            new FieldFilter() {
                @Override
                public boolean shouldInstrumentField(String className, String fieldName) {
                    return true;
                }
                @Override
                public boolean shouldTransformFieldAccess(
                        String transformingClassName, String fieldOwnerClassName, String fieldName
                ) {
                    return true;
                }
            }
    );
}
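For illustration, the kind of class filter that would match the enhanced names in my case (my own sketch, not Hibernate code; the "_$$_" marker is taken from the generated name shown above):

// Normalizes a generated name like "com.x.y.abcDO_$$_jvsteb8_48" back to the
// declared entity name before consulting the configured entity list.
static String baseEntityName(String className) {
    int marker = className.indexOf("_$$_");
    return marker >= 0 ? className.substring(0, marker) : className;
}

// ...and in the ClassFilter:
// return copyEntities.contains(baseEntityName(className));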
Edited on June 15th:
I updated my project to use Spring 4.0.5.RELEASE and Hibernate 4.3.5.Final. I started using org.hibernate.jpa.HibernatePersistenceProvider, org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver, and hibernate.ejb.use_class_enhancer=true.
With these changes I am debugging the flush behavior, and I have a question about this code block:
private boolean isUnequivocallyNonDirty(Object entity) {
    if (entity instanceof SelfDirtinessTracker)
        return ((SelfDirtinessTracker) entity).$$_hibernate_hasDirtyAttributes();

    final CustomEntityDirtinessStrategy customEntityDirtinessStrategy =
            persistenceContext.getSession().getFactory().getCustomEntityDirtinessStrategy();
    if ( customEntityDirtinessStrategy.canDirtyCheck( entity, getPersister(), (Session) persistenceContext.getSession() ) ) {
        return ! customEntityDirtinessStrategy.isDirty( entity, getPersister(), (Session) persistenceContext.getSession() );
    }
    if ( getPersister().hasMutableProperties() ) {
        return false;
    }
    if ( getPersister().getInstrumentationMetadata().isInstrumented() ) {
        // the entity must be instrumented (otherwise we can't check the dirty flag) and the dirty flag is false
        return ! getPersister().getInstrumentationMetadata().extractInterceptor( entity ).isDirty();
    }
    return false;
}
In my case the method returns false because the persister answers yes for hasMutableProperties; I think the interceptor never gets a chance to answer at all. Shouldn't the bytecode transformer have installed an interceptor here? Or should the bytecode transformation make the entity a SelfDirtinessTracker? Can anyone explain what behavior I should expect from the bytecode transformation here?

Test Hector Spring Cassandra connection

In my Java/Spring-based web app I'm connecting to a Cassandra DB using Hector and Spring. The connection works just fine, but I would like to be able to test it. If I intentionally provide a wrong host to CassandraHostConfigurator I get an error:
ERROR connection.HConnectionManager: Could not start connection pool for host <myhost:myport>
which is expected, of course. But how can I test this connection? If I defined the connection programmatically (and not via the Spring context) it would be clear, but via the Spring context it is not obvious how to test it. Can you think of an idea?
Since I could not come up with, nor find, a satisfying answer, I decided to define my connection programmatically and use a simple query:
private ColumnFamilyResult<String, String> readFromDb(Keyspace keyspace) {
    ColumnFamilyTemplate<String, String> template = new ThriftColumnFamilyTemplate<String, String>(
            keyspace, tableName, StringSerializer.get(), StringSerializer.get());
    // It doesn't matter if the column actually exists or not, since we only check
    // the connection. On connection failure an exception is thrown;
    // otherwise something comes back.
    return template.queryColumns("some_column");
}
My test then checks that the returned object is not null.
Another way that works fine:
public boolean isConnected() {
    List<KeyspaceDefinition> keyspaces = null;
    try {
        keyspaces = cluster.describeKeyspaces();
    } catch (HectorException e) {
        return false;
    }
    return !CollectionUtils.isEmpty(keyspaces);
}
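A test exercising this check could look like the following (my own sketch, JUnit 4; how the cluster reference is wired in from the Spring context is assumed):

@Test
public void cassandraIsReachable() {
    // Fails fast if the cluster cannot be reached with the configured hosts.
    assertTrue("Could not reach Cassandra cluster", isConnected());
}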

OrmLiteSqliteOpenHelper onDowngrade

I'm using ORMLite for Android, and I have a database helper class that extends OrmLiteSqliteOpenHelper. I've had some reports in Google Play that the application force-closed with:
android.database.sqlite.SQLiteException: Can't downgrade database from version 3 to 2
The users probably got a downgrade via a backup or something similar. The problem is that I cannot implement the onDowngrade method that exists on SQLiteOpenHelper.
Does ORMLite support downgrades? Is there any workaround for this, at least to avoid the force close?
Interesting. The onDowngrade(...) method was added in API 11, so I can't just add support for it to ORMLite. Unfortunately this means that you are going to have to make your own onDowngrade hook, analogous to the onUpgrade(...) in OrmLiteSqliteOpenHelper. Something like the following:
// Abstract hook for subclasses to implement, mirroring OrmLiteSqliteOpenHelper's onUpgrade(...):
public abstract void onDowngrade(SQLiteDatabase database, ConnectionSource connectionSource,
        int oldVersion, int newVersion);
public final void onDowngrade(SQLiteDatabase db, int oldVersion, int newVersion) {
    ConnectionSource cs = getConnectionSource();
    /*
     * The method is called by Android database helper's get-database calls when Android detects that we need to
     * create or update the database. So we have to use the database argument and save a connection to it on the
     * AndroidConnectionSource, otherwise it will go recursive if the subclass calls getConnectionSource().
     */
    DatabaseConnection conn = cs.getSpecialConnection();
    boolean clearSpecial = false;
    if (conn == null) {
        conn = new AndroidDatabaseConnection(db, true);
        try {
            cs.saveSpecialConnection(conn);
            clearSpecial = true;
        } catch (SQLException e) {
            throw new IllegalStateException("Could not save special connection", e);
        }
    }
    try {
        onDowngrade(db, cs, oldVersion, newVersion);
    } finally {
        if (clearSpecial) {
            cs.clearSpecialConnection(conn);
        }
    }
}
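A subclass would then implement the abstract hook. For instance (my own sketch; MyEntity is a placeholder, and dropping/recreating the tables is just the simplest way to avoid the crash, at the cost of losing that table's data):

@Override
public void onDowngrade(SQLiteDatabase database, ConnectionSource connectionSource,
        int oldVersion, int newVersion) {
    try {
        // Recreate the schema in the shape the older version expects.
        TableUtils.dropTable(connectionSource, MyEntity.class, true);
        TableUtils.createTable(connectionSource, MyEntity.class);
    } catch (SQLException e) {
        throw new RuntimeException("Could not downgrade database", e);
    }
}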
For more information about the onDowngrade(...) method see below:
public void onDowngrade (SQLiteDatabase db, int oldVersion, int newVersion);
To quote from the javadocs:
Called when the database needs to be downgraded. This is strictly similar to onUpgrade(SQLiteDatabase, int, int) method, but is called whenever current version is newer than requested one. However, this method is not abstract, so it is not mandatory for a customer to implement it. If not overridden, default implementation will reject downgrade and throws SQLiteException
Also see:
Can't downgrade database from version 2 to 1

Entity Framework 4.3.1 add-migration error: "model backing the context has changed"

I'm getting an error when trying to run the EF 4.3.1 add-migration command:
"The model backing the ... context has changed since the database was created".
Here's one sequence that gets the error (although I've tried probably a dozen variants, which all fail as well)...
1) Start with a database that was created by EF Code First (ie, already contains a _MigrationHistory table with only the InitialCreate row).
2) The app's code data model and database are in-sync at this point (the database was created by CF when the app was started).
3) Because I have four DbContexts in my "Services" project, I didn't run the 'enable-migrations' command (it doesn't handle multiple contexts). Instead, I manually created the Migrations folder in the Services project and the Configuration.cs file (included at the end of this post). [I think I read this approach in a post somewhere]
4) With the database not yet changed and the app stopped, I use the VS EDM editor to make a trivial change to my data model (adding one property to an existing entity) and have it generate the new classes (but not modify the database, obviously). I then rebuild the solution and all looks OK (without deleting the database or restarting the app, of course).
5) I run the following PMC command (where "App" is the name of one of the classes in Configuration.cs):
PM> add-migration App_AddTrivial -conf App -project Services -startup Services -verbose
... which fails with the "The model ... has changed. Consider using Code First Migrations..." error.
What am I doing wrong? And does anyone else see the irony in the tool telling me to use what I'm already trying to use ;-)
What are the correct steps for setting up a solution starting with a database that was created by EF Code First? I've seen posts saying to run an initial migration with -IgnoreChanges, but I've tried that and it doesn't help. Actually, I've spent all day testing various permutations, and nothing works!
I must be doing something really stupid, but I don't know what!
Thanks,
DadCat
Configuration.cs:
namespace mynamespace
{
internal sealed class App : DbMigrationsConfiguration
{
public App()
{
AutomaticMigrationsEnabled = false;
MigrationsNamespace = "Services.App.Repository.Migrations";
}
protected override void Seed(.Services.App.Repository.ModelContainer context)
{
}
}
internal sealed class Catalog : DbMigrationsConfiguration<Services.Catalog.Repository.ModelContainer>
{
public Catalog()
{
AutomaticMigrationsEnabled = false;
MigrationsNamespace = "Services.Catalog.Repository.Migrations";
}
protected override void Seed(Services.Catalog.Repository.ModelContainer context)
{
}
}
internal sealed class Portfolio : DbMigrationsConfiguration<Services.PortfolioManagement.Repository.ModelContainer>
{
public Portfolio()
{
AutomaticMigrationsEnabled = false;
MigrationsNamespace = "Services.PortfolioManagement.Repository.Migrations";
}
protected override void Seed(Services.PortfolioManagement.Repository.ModelContainer context)
{
}
}
internal sealed class Scheduler : DbMigrationsConfiguration<.Services.Scheduler.Repository.ModelContainer>
{
public Scheduler()
{
AutomaticMigrationsEnabled = false;
MigrationsNamespace = "Services.Scheduler.Repository.Migrations";
}
protected override void Seed(Services.Scheduler.Repository.ModelContainer context)
{
}
}
}
When using EF Migrations you should have one data context per database. I know it can grow really large, but trying to split it will run you into several problems. One is the migration issue you are experiencing. Later on you will probably face problems when trying to write queries that join tables from the different contexts. Don't go that way; it's against how EF is designed.
