SQLException when accessing H2 metamodel through jOOQ-meta

I am trying to obtain a list of all tables in my H2 in-memory DB using jOOQ's DSLContext.meta():
DSL_CONTEXT_PROVIDER.db().meta().getTables();
results in:
java.lang.RuntimeException: org.jooq.exception.DataAccessException: Error while accessing DatabaseMetaData
at MyTest.deleteEntities(MyTest.java:222)
at MyTest.cleanupDatabase(MyTest.java:201)
at MyTest.afterTestCase(MyTest.java:117)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
[... omitted for brevity ...]
Caused by: org.jooq.exception.DataAccessException: Error while accessing DatabaseMetaData
at org.jooq.impl.MetaImpl.getCatalogs(MetaImpl.java:160)
at org.jooq.impl.MetaImpl.getSchemas(MetaImpl.java:168)
at org.jooq.impl.MetaImpl.getTables(MetaImpl.java:179)
at MyTest.deleteEntities(MyTest.java:210)
... 29 more
Caused by: org.h2.jdbc.JdbcSQLException: The object is already closed [90007-174]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:332)
at org.h2.message.DbException.get(DbException.java:172)
at org.h2.message.DbException.get(DbException.java:149)
at org.h2.message.DbException.get(DbException.java:138)
at org.h2.jdbc.JdbcConnection.checkClosed(JdbcConnection.java:1410)
at org.h2.jdbc.JdbcConnection.checkClosed(JdbcConnection.java:1388)
at org.h2.jdbc.JdbcDatabaseMetaData.checkClosed(JdbcDatabaseMetaData.java:2963)
at org.h2.jdbc.JdbcDatabaseMetaData.getCatalogs(JdbcDatabaseMetaData.java:756)
at org.jooq.impl.MetaImpl.getCatalogs(MetaImpl.java:143)
... 32 more
DSL_CONTEXT_PROVIDER.db() looks like this:
JdbcDataSource h2ds = new JdbcDataSource();
h2ds.setURL("jdbc:h2:mem:testDB;create=true");
h2ds.setUser("");
h2ds.setPassword("");
return DSL.using(new DefaultConfiguration().set(new DataSourceConnectionProvider(h2ds)));
Ordinary queries work fine with the above configuration, but not the meta().getTables(). If I replace DataSourceConnectionProvider with an anonymous implementation that doesn't close the connection, no exception is thrown anymore.
It seems H2 does not approve of calling methods like getCatalogs() on the object returned by connection.getMetaData() after the underlying connection has been closed. Is this a bug in jooq-meta (I use 3.7.0) or is my configuration flawed?

jOOQ 3.7.0 / 3.7.1 and earlier cache the DatabaseMetaData in org.jooq.Meta. This is a bug (#4762) and will be fixed soon.
You're running into this issue because you're using the DataSourceConnectionProvider, which isn't really intended to work with standalone connections or "simple" DataSources. It closes the connection after every query (which normally translates to returning it to the pool). Once the connection is closed, the cached DatabaseMetaData reference is stale.
You've already documented the workaround: Don't use a "simple" DataSource with jOOQ's DSLContext.meta() API.
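For illustration, here is a minimal sketch of the workaround the question already describes: a ConnectionProvider that hands out a single standalone connection and never closes it on release. The class name is hypothetical. Note that DSL.using(Connection) behaves this way out of the box, via DefaultConnectionProvider.
import java.sql.Connection;
import org.jooq.ConnectionProvider;
import org.jooq.exception.DataAccessException;

// Hypothetical name: a provider that keeps one standalone connection open,
// so the DatabaseMetaData cached by org.jooq.Meta never goes stale.
public class NonClosingConnectionProvider implements ConnectionProvider {
    private final Connection connection;

    public NonClosingConnectionProvider(Connection connection) {
        this.connection = connection;
    }

    @Override
    public Connection acquire() throws DataAccessException {
        return connection;
    }

    @Override
    public void release(Connection released) throws DataAccessException {
        // Intentionally empty: the connection stays open across queries,
        // unlike DataSourceConnectionProvider, which closes it every time.
    }
}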

Related

How can I configure the late acceptance algorithm?

I am encountering some problems when it comes to modifying the applied algorithms:
First, it seems I cannot change the late acceptance algorithm configuration. This is my local search configuration, which works nicely when I specify neither an acceptor nor a forager:
<localSearch>
<localSearchType>LATE_ACCEPTANCE</localSearchType>
<acceptor>
<lateAcceptanceSize>5000</lateAcceptanceSize>
<entityTabuRatio>0.1</entityTabuRatio>
<!--valueTabuSize>10000</valueTabuSize-->
</acceptor>
<forager>
<acceptedCountLimit>1</acceptedCountLimit>
</forager>
<termination>
<minutesSpentLimit>10</minutesSpentLimit>
<bestScoreLimit>[0/11496/11496]hard/[0]soft</bestScoreLimit>
</termination>
</localSearch>
I get this error message as part of the output:
java.lang.IllegalArgumentException: The localSearchType (LATE_ACCEPTANCE) must not be configured if the acceptorConfig (AcceptorConfig()) is explicitly configured.
at org.optaplanner.core.config.localsearch.LocalSearchPhaseConfig.buildAcceptor(LocalSearchPhaseConfig.java:182)
at org.optaplanner.core.config.localsearch.LocalSearchPhaseConfig.buildDecider(LocalSearchPhaseConfig.java:132)
at org.optaplanner.core.config.localsearch.LocalSearchPhaseConfig.buildPhase(LocalSearchPhaseConfig.java:117)
at org.optaplanner.core.config.localsearch.LocalSearchPhaseConfig.buildPhase(LocalSearchPhaseConfig.java:54)
at org.optaplanner.core.config.solver.SolverConfig.buildPhaseList(SolverConfig.java:364)
at org.optaplanner.core.config.solver.SolverConfig.buildSolver(SolverConfig.java:270)
at org.optaplanner.core.impl.solver.AbstractSolverFactory.buildSolver(AbstractSolverFactory.java:61)
at com.gmv.g2gmps.CBand.App.main(App.java:75)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:282)
at java.lang.Thread.run(Thread.java:748)
Finally, I was told that configuring LATE_ACCEPTANCE as the localSearchType together with an explicit acceptor is incompatible (when localSearchType is set, a default acceptor is built for you, so an explicit one conflicts with it). Therefore I set lateAcceptanceSize inside the acceptor (making late acceptance the acceptor) instead of specifying it as the local search type:
<localSearch>
<acceptor>
<lateAcceptanceSize>5000</lateAcceptanceSize>
<entityTabuRatio>0.1</entityTabuRatio>
<!--valueTabuSize>10000</valueTabuSize-->
</acceptor>
<forager>
<acceptedCountLimit>1</acceptedCountLimit>
</forager>
<termination>
<minutesSpentLimit>10</minutesSpentLimit>
<bestScoreLimit>[0/11496/11496]hard/[0]soft</bestScoreLimit>
</termination>
</localSearch>

Unable to load driverless ai models with pb and toml formats in the same JVM

I'm trying to use the results of two models built in H2O Driverless AI. One was built with version 1.6.0 and the other with the latest version, 1.7.0.
When I try to load the pipeline.mojo files of these two models (one is in pb format, the other in toml format) in my Java application, the first model file is read fine, whereas reading the second model file in the same JVM instance throws an IllegalArgumentException.
The mojo2-runtime jar being used is from version 1.7.0.
String toml_fileName = "/usr/toml/pipeline.mojo";
MojoPipeline toml_model = MojoPipeline.loadFrom(toml_fileName);
String pb_fileName = "/usr/pb/pipeline.mojo";
MojoPipeline pb_model = MojoPipeline.loadFrom(pb_fileName);
I get the following exception while trying to load the second model file, which is pb_model in my case.
Can someone help me figure out what's going wrong?
java.lang.IllegalArgumentException: wrong number of arguments
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at ai.h2o.mojos.runtime.b.aj.a(SourceFile:62)
at ai.h2o.mojos.runtime.b.J.a(SourceFile:35)
at ai.h2o.mojos.runtime.readers.MojoReader.read(SourceFile:133)
at ai.h2o.mojos.runtime.readers.MojoTransformerReader.readExecPipeTransformer(SourceFile:311)
at ai.h2o.mojos.runtime.readers.MojoTransformerReader.read(SourceFile:41)
at ai.h2o.mojos.runtime.readers.MojoReader.read(SourceFile:121)
at ai.h2o.mojos.runtime.MojoPipelineFactoryImpl.loadFrom(SourceFile:59)
at ai.h2o.mojos.runtime.MojoPipelineFactoryImpl.loadFrom(SourceFile:22)
at ai.h2o.mojos.runtime.MojoPipeline.loadFrom(SourceFile:41)

Exception Handling in Apache Beam pipelines when writing to database using Java

When writing simple records to a table in Postgres (it could be any DB) at the end of a pipeline, some of the records violate uniqueness constraints and trigger an exception. As far as I can tell, there's no straightforward way to handle these gracefully: the pipeline either errors out completely or, depending on the runner, enters an interminable death spiral.
There doesn't appear to be any mention of error handling for this case in the Beam docs. The Medium posts on error handling don't seem to apply to this particular type of PTransform, which returns PDone.
This answer isn't comprehensible and is devoid of examples.
In my example, I'm reading from a file with 2 duplicate lines and trying to write them into a table.
CREATE TABLE foo (
    field CHARACTER VARYING(100) UNIQUE
);
foo.txt contains:
a
a
The pipeline looks like this:
Pipeline p = Pipeline.create();
p.apply(TextIO.read().from("/path/to/foo.txt"))
    .apply(JdbcIO.<String>write()
        .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration.create(
            "org.postgresql.Driver", "jdbc:postgresql://localhost:5432/somedb"))
        .withStatement("INSERT INTO foo (field) VALUES (?)")
        .withPreparedStatementSetter(new JdbcIO.PreparedStatementSetter<String>() {
            private static final long serialVersionUID = 1L;
            public void setParameters(String element, PreparedStatement query) throws SQLException {
                query.setString(1, element);
            }
        }));
p.run();
Here is the output from the simple example above:
[WARNING]
org.apache.beam.sdk.Pipeline$PipelineExecutionException: java.sql.BatchUpdateException: Batch entry 0 INSERT INTO foo (field) VALUES ('a') was aborted: ERROR: duplicate key value violates unique constraint "foo_field_key"
Detail: Key (field)=(a) already exists. Call getNextException to see other errors in the batch.
at org.apache.beam.runners.direct.DirectRunner$DirectPipelineResult.waitUntilFinish (DirectRunner.java:332)
at org.apache.beam.runners.direct.DirectRunner$DirectPipelineResult.waitUntilFinish (DirectRunner.java:302)
at org.apache.beam.runners.direct.DirectRunner.run (DirectRunner.java:197)
at org.apache.beam.runners.direct.DirectRunner.run (DirectRunner.java:64)
at org.apache.beam.sdk.Pipeline.run (Pipeline.java:313)
at org.apache.beam.sdk.Pipeline.run (Pipeline.java:299)
at com.thing.Main.main (Main.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.mojo.exec.ExecJavaMojo$1.run (ExecJavaMojo.java:282)
at java.lang.Thread.run (Thread.java:748)
Caused by: java.sql.BatchUpdateException: Batch entry 0 INSERT INTO foo (field) VALUES ('a') was aborted: ERROR: duplicate key value violates unique constraint "foo_field_key"
Detail: Key (field)=(a) already exists. Call getNextException to see other errors in the batch.
at org.postgresql.jdbc.BatchResultHandler.handleError (BatchResultHandler.java:148)
at org.postgresql.core.ResultHandlerDelegate.handleError (ResultHandlerDelegate.java:50)
at org.postgresql.core.v3.QueryExecutorImpl.processResults (QueryExecutorImpl.java:2184)
at org.postgresql.core.v3.QueryExecutorImpl.execute (QueryExecutorImpl.java:481)
at org.postgresql.jdbc.PgStatement.executeBatch (PgStatement.java:840)
at org.postgresql.jdbc.PgPreparedStatement.executeBatch (PgPreparedStatement.java:1538)
at org.apache.commons.dbcp2.DelegatingStatement.executeBatch (DelegatingStatement.java:345)
at org.apache.commons.dbcp2.DelegatingStatement.executeBatch (DelegatingStatement.java:345)
at org.apache.commons.dbcp2.DelegatingStatement.executeBatch (DelegatingStatement.java:345)
at org.apache.commons.dbcp2.DelegatingStatement.executeBatch (DelegatingStatement.java:345)
at org.apache.beam.sdk.io.jdbc.JdbcIO$Write$WriteFn.executeBatch (JdbcIO.java:846)
at org.apache.beam.sdk.io.jdbc.JdbcIO$Write$WriteFn.finishBundle (JdbcIO.java:819)
Caused by: org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "foo_field_key"
Detail: Key (field)=(a) already exists.
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse (QueryExecutorImpl.java:2440)
at org.postgresql.core.v3.QueryExecutorImpl.processResults (QueryExecutorImpl.java:2183)
at org.postgresql.core.v3.QueryExecutorImpl.execute (QueryExecutorImpl.java:481)
at org.postgresql.jdbc.PgStatement.executeBatch (PgStatement.java:840)
at org.postgresql.jdbc.PgPreparedStatement.executeBatch (PgPreparedStatement.java:1538)
at org.apache.commons.dbcp2.DelegatingStatement.executeBatch (DelegatingStatement.java:345)
at org.apache.commons.dbcp2.DelegatingStatement.executeBatch (DelegatingStatement.java:345)
at org.apache.commons.dbcp2.DelegatingStatement.executeBatch (DelegatingStatement.java:345)
at org.apache.commons.dbcp2.DelegatingStatement.executeBatch (DelegatingStatement.java:345)
at org.apache.beam.sdk.io.jdbc.JdbcIO$Write$WriteFn.executeBatch (JdbcIO.java:846)
at org.apache.beam.sdk.io.jdbc.JdbcIO$Write$WriteFn.finishBundle (JdbcIO.java:819)
at org.apache.beam.sdk.io.jdbc.JdbcIO$Write$WriteFn$DoFnInvoker.invokeFinishBundle (Unknown Source)
at org.apache.beam.repackaged.beam_runners_direct_java.runners.core.SimpleDoFnRunner.finishBundle (SimpleDoFnRunner.java:285)
at org.apache.beam.repackaged.beam_runners_direct_java.runners.core.SimplePushbackSideInputDoFnRunner.finishBundle (SimplePushbackSideInputDoFnRunner.java:118)
at org.apache.beam.runners.direct.ParDoEvaluator.finishBundle (ParDoEvaluator.java:223)
at org.apache.beam.runners.direct.DoFnLifecycleManagerRemovingTransformEvaluator.finishBundle (DoFnLifecycleManagerRemovingTransformEvaluator.java:73)
at org.apache.beam.runners.direct.DirectTransformExecutor.finishBundle (DirectTransformExecutor.java:188)
at org.apache.beam.runners.direct.DirectTransformExecutor.run (DirectTransformExecutor.java:126)
at java.util.concurrent.Executors$RunnableAdapter.call (Executors.java:511)
at java.util.concurrent.FutureTask.run (FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:624)
at java.lang.Thread.run (Thread.java:748)
I'd like to be able to arrest that exception and divert it to some dead letter construct.
There is no general way of doing it in Beam yet. There are discussions from time to time about modifying the IOs to not return PDone but to my knowledge there is nothing readily available.
At the moment I can think of a couple of workarounds, all of them far from ideal:
in the driver program, handle the restart of the pipeline when it fails;
copy-paste JdbcIO or parts of it, or implement your own JDBC ParDo with custom exception handling;
add an exception-handling feature to JdbcIO and contribute it to Beam; it would be appreciated.
I was also facing the same issue, so I created a custom JdbcIO write that returns a PCollectionTuple instead of PDone, splitting out the records inserted successfully from those that threw a SQLException during batch execution in WriteFn, as sketched below.
Below is the link for more details:
https://sachin4java.blogspot.com/2021/11/extract-error-records-while-inserting.html
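For illustration, here is a minimal sketch of that approach, not the JdbcIO implementation itself: a custom write DoFn doing per-element inserts (no batching), with hypothetical names for the class, tags, and connection settings.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PCollectionTuple;
import org.apache.beam.sdk.values.TupleTag;
import org.apache.beam.sdk.values.TupleTagList;

public class InsertWithDeadLetter {
    // Anonymous subclasses so Beam can recover the element type at runtime.
    static final TupleTag<String> INSERTED = new TupleTag<String>() {};
    static final TupleTag<String> FAILED = new TupleTag<String>() {};

    static class InsertFn extends DoFn<String, String> {
        private transient Connection connection;

        @Setup
        public void setup() throws SQLException {
            connection = DriverManager.getConnection("jdbc:postgresql://localhost:5432/somedb");
        }

        @ProcessElement
        public void processElement(ProcessContext c) {
            try (PreparedStatement ps =
                    connection.prepareStatement("INSERT INTO foo (field) VALUES (?)")) {
                ps.setString(1, c.element());
                ps.executeUpdate();
                c.output(c.element()); // main output: inserted successfully
            } catch (SQLException e) {
                c.output(FAILED, c.element()); // dead letter: e.g. a unique constraint violation
            }
        }

        @Teardown
        public void teardown() throws SQLException {
            if (connection != null) connection.close();
        }
    }

    public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        PCollectionTuple results = p
            .apply(TextIO.read().from("/path/to/foo.txt"))
            .apply(ParDo.of(new InsertFn())
                .withOutputTags(INSERTED, TupleTagList.of(FAILED)));
        // Route the failures to any sink you like instead of crashing the pipeline.
        PCollection<String> deadLetters = results.get(FAILED);
        p.run().waitUntilFinish();
    }
}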

C3p0 trying to create a new connection pool and failing with ClassNotFoundException

I'm seeing a very strange behavior in my application.
My application setup: Spring + Hibernate + C3p0
The application keeps running fine, when all of a sudden I start seeing these errors in the logs and the system totally stops processing any database-specific requests.
WARN c3p0.C3P0Registry - Could not create for find ConnectionCustomizer with class name ''.
java.lang.ClassNotFoundException:
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:169)
at com.mchange.v2.c3p0.C3P0Registry.getConnectionCustomizer(C3P0Registry.java:181)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPoolManager.getConnectionCustomizer(C3P0PooledConnectionPoolManager.java:636)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPoolManager.createPooledConnectionPool(C3P0PooledConnectionPoolManager.java:738)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPoolManager.getPool(C3P0PooledConnectionPoolManager.java:257)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPoolManager.getPool(C3P0PooledConnectionPoolManager.java:271)
at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(AbstractPoolBackedDataSource.java:128)
at org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider.getConnection(LocalDataSourceConnectionProvider.java:80)
at org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:423)
at org.hibernate.jdbc.ConnectionManager.getConnection(ConnectionManager.java:144)
at org.hibernate.jdbc.AbstractBatcher.prepareSelectStatement(AbstractBatcher.java:123)
at org.hibernate.id.SequenceGenerator.generate(SequenceGenerator.java:73)
at org.hibernate.event.def.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:99)
at org.hibernate.event.def.DefaultSaveOrUpdateEventListener.saveWithGeneratedOrRequestedId(DefaultSaveOrUpdateEventListener.java:187)
at org.hibernate.event.def.DefaultSaveOrUpdateEventListener.entityIsTransient(DefaultSaveOrUpdateEventListener.java:172)
at org.hibernate.event.def.DefaultSaveOrUpdateEventListener.performSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:94)
at org.hibernate.event.def.DefaultSaveOrUpdateEventListener.onSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:70)
at org.hibernate.impl.SessionImpl.fireSaveOrUpdate(SessionImpl.java:507)
at org.hibernate.impl.SessionImpl.saveOrUpdate(SessionImpl.java:499)
at org.hibernate.impl.SessionImpl.saveOrUpdate(SessionImpl.java:495)
at org.springframework.orm.hibernate3.HibernateTemplate$18.doInHibernate(HibernateTemplate.java:690)
at org.springframework.orm.hibernate3.HibernateTemplate.execute(HibernateTemplate.java:365)
at org.springframework.orm.hibernate3.HibernateTemplate.saveOrUpdate(HibernateTemplate.java:687)
Why would c3p0 need to create a new connection pool at this particular time? Before these exceptions the application is 100% working fine and responding perfectly.
Also, I've not provided any connectionCustomizerClassName property in my c3p0 configuration, so why would it load one? In this stack trace I see it's a non-null empty string ''.
Any clues?
==============================================================================
The following Hibernate jars are in the application's classpath:
hibernate-3.2.6.ga.jar
spring-hibernate-1.2.6.jar
The following c3p0 jars are in the application's classpath:
c3p0-0.9.1.jar
c3p0-0.9.2-pre5.jar
c3p0-oracle-thin-extras-0.9.2-pre5.jar
Here is the code that manually reads these properties and sets them on the data source (I do not read/set any connectionCustomizerClassName property here at all):
ComboPooledDataSource dataSource = new ComboPooledDataSource();
dataSource.setMinPoolSize(Integer.parseInt(props.getProperty("jdbc.hibernate.c3p0.minPoolSize")));
.....
Here are the c3p0 properties being used:
jdbc.hibernate.c3p0.minPoolSize=100
jdbc.hibernate.c3p0.initialPoolSize=100
jdbc.hibernate.c3p0.maxPoolSize=1000
jdbc.hibernate.c3p0.maxIdleTime=21600
jdbc.hibernate.c3p0.maxStatementsPerConnection=0
jdbc.hibernate.c3p0.maxStatements=0
jdbc.hibernate.c3p0.numHelperThreads=30
jdbc.hibernate.c3p0.checkoutTimeout=30000
jdbc.hibernate.c3p0.idleConnectionTestPeriod=900
jdbc.hibernate.c3p0.preferredTestQuery=SELECT 1 FROM dual
jdbc.hibernate.c3p0.maxConnectionAge=0
jdbc.hibernate.c3p0.maxIdleTimeExcessConnections=3600
jdbc.hibernate.c3p0.acquireIncrement=10
jdbc.hibernate.c3p0.acquireRetryDelay=5000
jdbc.hibernate.c3p0.acquireRetryAttempts=6
jdbc.hibernate.c3p0.propertyCycle=180
Following up on a conversation in the comments on the posted question, it looks like the issue here is that VisualVM updates the null-valued property connectionCustomizerClassName to an empty String value, which c3p0 currently treats as non-null and interprets as a class name.
Going forward (c3p0-0.9.5-pre7 and above), c3p0 will guard against this and interpret an all-whitespace connectionCustomizerClassName as equivalent to null. But in the meantime, or for older versions, take care.
One easy workaround would be to define a NullConnectionCustomizer:
package mypkg;
import com.mchange.v2.c3p0.*;
public class NullConnectionCustomizer extends AbstractConnectionCustomizer
{}
And then use mypkg.NullConnectionCustomizer for connectionCustomizerClassName, so that the corresponding field in VisualVM is not empty and therefore not ambiguously interpretable as an empty String or null.
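For example, assuming the data source is configured in code as in the question, the customizer can be set directly (the property-file route works the same way):
ComboPooledDataSource dataSource = new ComboPooledDataSource();
// Explicit no-op customizer: the field is never an ambiguous empty String.
dataSource.setConnectionCustomizerClassName("mypkg.NullConnectionCustomizer");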

Unknown error in XPath

One of our web applications, which has been running in our production environment for a long time, has recently started running into a weird error when there is a high volume of transactions. We couldn't figure out the exact root cause of the problem, but we found some similar issues in the previous version, WebSphere 6, related to a bug in the Xalan version used by the app server. Our application server is actually WebSphere 7, which is supposed to have that fixed; besides, it doesn't use Xalan under the hood anymore. Our application doesn't have a Xalan jar embedded either.
To fix it, we just restart the application itself.
One important note is that the Document is cached (docs.get(tableName)) and reused to execute the XPath evaluation. We did this to avoid the cost of parsing the Document every time.
The app code is:
Document doc = null;
try {
    doc = docs.get(tableName);
    if (doc == null)
        return null;
    XPathFactory xFactory = XPathFactory.newInstance();
    XPath xpath = xFactory.newXPath();
    XPathExpression expr = xpath.compile(toUpper(xPathQuery));
    Object result = expr.evaluate(doc, XPathConstants.NODESET);
    return (NodeList) result;
} catch (XPathExpressionException e) {
    logger.error("Error executing XPath", e);
}
And the error stack is here:
javax.xml.transform.TransformerException: Unknown error in XPath.
at java.lang.Throwable.<init>(Throwable.java:67)
at javax.xml.transform.TransformerException.<init>(Unknown Source)
at org.apache.xpath.XPath.execute(Unknown Source)
at org.apache.xpath.jaxp.XPathExpressionImpl.evaluate(Unknown Source)
Caused by: java.lang.NullPointerException
at org.apache.xerces.dom.ElementNSImpl.getPrefix(Unknown Source)
at org.apache.xml.dtm.ref.dom2dtm.DOM2DTM.processNamespacesAndAttributes(Unknown Source)
at org.apache.xml.dtm.ref.dom2dtm.DOM2DTM.nextNode(Unknown Source)
at org.apache.xml.dtm.ref.DTMDefaultBase._nextsib(Unknown Source)
at org.apache.xml.dtm.ref.DTMDefaultBase.getNextSibling(Unknown Source)
at org.apache.xml.dtm.ref.DTMDefaultBaseTraversers$ChildTraverser.next(Unknown Source)
at org.apache.xpath.axes.AxesWalker.getNextNode(Unknown Source)
at org.apache.xpath.axes.AxesWalker.nextNode(Unknown Source)
at org.apache.xpath.axes.WalkingIterator.nextNode(Unknown Source)
at org.apache.xpath.axes.NodeSequence.nextNode(Unknown Source)
at org.apache.xpath.axes.NodeSequence.runTo(Unknown Source)
at org.apache.xpath.axes.NodeSequence.setRoot(Unknown Source)
at org.apache.xpath.axes.LocPathIterator.execute(Unknown Source)
... 16 more
This is the similar issue that I mentioned:
http://www-01.ibm.com/support/docview.wss?uid=swg1PK42574
Thanks.
Many people don't realise that the DOM is not thread-safe (even if you are only doing reads). If you are caching a DOM object, make sure you synchronize all access to it.
Frankly, this makes DOM unsuited to this kind of application. I know I'm biased, but I would suggest switching to Saxon, whose native tree implementation is not only far faster than DOM, but also thread-safe. One user recently tweeted about getting a 100x performance improvement when they made this switch.
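For illustration, a minimal sketch of that synchronization applied to the question's code. The holder class is hypothetical, and the question's toUpper call is omitted; the per-Document lock granularity is an assumption.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathExpressionException;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

// Hypothetical holder for the cached Documents from the question.
public class SynchronizedXPathCache {
    private final Map<String, Document> docs = new ConcurrentHashMap<>();

    public NodeList query(String tableName, String xPathQuery) throws XPathExpressionException {
        Document doc = docs.get(tableName);
        if (doc == null)
            return null;
        // Even "read-only" XPath evaluation mutates internal DOM/DTM state,
        // so all access to a shared Document must go through a common lock.
        synchronized (doc) {
            XPath xpath = XPathFactory.newInstance().newXPath();
            return (NodeList) xpath.compile(xPathQuery).evaluate(doc, XPathConstants.NODESET);
        }
    }
}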
