WebSphere insert/update statements with SQL Server hang with REQUIRES_NEW propagation

We are facing an issue in our Spring Batch application when deploying it on WebSphere.
Example: one class contains a parent() method and a second class contains a child() method, where the child method requires a new transaction. When the transaction is committed after the methods execute, the commit routine hangs and nothing happens further.
@Transactional // uses the current transaction
public void parent() {
    child();
}

@Transactional(propagation = Propagation.REQUIRES_NEW) // creates a new transaction
public void child() {
    // database save statements, including updates, inserts, and deletes
}
This issue occurs only on WebSphere; the code works fine on our local machines, where we use Tomcat as the web container.
The WebSphere logs/stack trace show that the prepared statement keeps waiting for a response from the database. At the same time, updates and inserts are locked out on the affected tables, i.e. if we run an insert or update query manually on an affected table, the query does not execute.
We are using Spring JPA for data persistence, Spring's JpaTransactionManager for transaction management, and a Microsoft SQL Server database.
Is it that WebSphere does not support creating a new transaction from an existing transaction?

Yes, the pattern you are describing is supported by WebSphere Application Server. Given that this involves locked rows in the database, you might be running into a difference between the application servers in which transaction isolation level is used by default. In WebSphere Application Server, the default for SQL Server is java.sql.Connection.TRANSACTION_REPEATABLE_READ, whereas in most other cases you end up with a default of java.sql.Connection.TRANSACTION_READ_COMMITTED (less locking). If the default value is the problem, you can change it in the data source configuration.
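As a quick check, here is a minimal sketch (the JNDI name jdbc/myDS is a hypothetical placeholder) to verify which default isolation level the server actually hands out:

DataSource ds = (DataSource) new InitialContext().lookup("jdbc/myDS"); // hypothetical JNDI name
try (Connection con = ds.getConnection()) {
    // java.sql.Connection.TRANSACTION_REPEATABLE_READ is 4,
    // java.sql.Connection.TRANSACTION_READ_COMMITTED is 2
    System.out.println("Default isolation level: " + con.getTransactionIsolation());
}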
If you are using WebSphere Application Server Liberty, the default isolation level can be configured in server.xml as an attribute of the dataSource element, like this:
<dataSource isolationLevel="TRANSACTION_READ_COMMITTED" jndiName=...
If you are using WebSphere Application Server traditional, the default isolation level can be configured via the webSphereDefaultIsolationLevel custom property, which can be set to the numeric value of the corresponding constant on java.sql.Connection (the value of TRANSACTION_READ_COMMITTED is 2).
See this linked article for the steps to do so via the admin console.

Related

TransactionManagementType.CONTAINER vs TransactionManagementType.BEAN

What is the difference between TransactionManagementType.CONTAINER and TransactionManagementType.BEAN?
I am using TransactionManagementType.CONTAINER in all of my EJBs, and whenever multiple database instances are used, it throws an error that gets resolved if I change it to TransactionManagementType.BEAN.
I want to know the advantages and disadvantages, and what is affected if I change it to TransactionManagementType.BEAN.
ERROR:
Error updating database. Cause: java.sql.SQLException: javax.resource.ResourceException:
IJ000457: Unchecked throwable in managedConnectionReconnected()
cl=org.jboss.jca.core.connectionmanager.listener.TxConnectionListener#680f2be0[state=NORMAL
managed connection=org.jboss.jca.adapters.jdbc.local.LocalManagedConnection#7ba33a94
connection handles=0 lastReturned=1495691675021 lastValidated=1495690817487
lastCheckedOut=1495691675018 trackByTx=false
pool=org.jboss.jca.core.connectionmanager.pool.strategy.OnePool#efd42c4
mcp=SemaphoreConcurrentLinkedQueueManagedConnectionPool#71656eec[pool=FAQuery]
xaResource=LocalXAResourceImpl#4c786e85[connectionListener=680f2be0 connectionManager=5c3b98bc
warned=false currentXid=null productName=Oracle productVersion=Oracle Database 12c
Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
jndiName=java:/FAQuery] txSync=null]
TransactionManagementType.CONTAINER
You let the container manage transactions (the container will commit and roll back). You can control the behavior of your transactions (more precisely, transaction propagation) by annotating methods with @TransactionAttribute and specifying one of the TransactionAttributeType values.
TransactionManagementType.BEAN
You have to do transaction demarcation (begin, commit, rollback) explicitly yourself, by obtaining the UserTransaction interface.
@Resource
UserTransaction ut;

public void method() throws Exception {
    ut.begin();
    // ... your business logic here
    ut.commit(); // or ut.rollback();
}
Note that for stateless and message-driven beans you have to commit or roll back before you exit the method that started the transaction; this is not required for stateful beans.
Regarding your question, the advantage of BMT is that the scope of the transaction can be smaller than the scope of the method itself, i.e. explicit control of the transaction.
You would most probably use CMT; BMT is required only in some narrow corner cases to support specific business logic.
Another advantage, or use case, of BMT is if you need to use the extended persistence context type, which can be supported in BMT with stateful session beans.
Regarding your specific problem, without seeing any bean code or error messages, this is probably what is happening: if you have multiple databases, then each database and its corresponding driver must be able to join an existing transaction; otherwise you will get a specific detailed error.
If you use TransactionManagementType.BEAN, the bean is required to start a brand-new transaction. This means you are not joining an existing transaction, and each database operation begins and commits independently of the others.
You can achieve the same effect by keeping TransactionManagementType.CONTAINER and annotating your method with REQUIRES_NEW, provided of course you are calling each EJB through the corresponding proxy/interface.
So it is not correct to frame it as BEAN vs. CONTAINER; rather, you have to make a design choice and annotate your methods accordingly.
As requested, for a method marked with one of the following:
REQUIRES_NEW: any existing transaction gets suspended, and the method runs in a brand-new transaction.
REQUIRED: the default behavior; if a transaction exists it is reused by the method, otherwise one gets created and the method runs in it.
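For illustration, a minimal CMT sketch of the REQUIRES_NEW variant (the bean and method names are hypothetical):

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
public class AuditBean { // hypothetical bean
    // Any caller transaction is suspended; this method always runs in its own transaction.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void writeAuditRecord(String message) {
        // database work here commits or rolls back independently of the caller
    }
}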

How to turn off JPA for SpringBatch under SpringBoot

We have a Spring Boot application that uses Spring Integration and Spring Batch. We drop a file in the poller and it gets processed. The process inserts records into a database, then reads them back out, does some processing, and writes a file. Let's say there are 10 records. The first time, we get 10 records read and 10 written. Without stopping the server, we delete all the records through a SQL client on the database and run the same file again, and we get 10 records read but 20 written. I believe there is some JPA or caching going on with the datasource. We've tried turning off several auto-configuration options for JPA and caching, but we haven't found the right configuration option to turn off caching.
Adding a bit more detail to the question.
Basically, we have a cron scheduler that has a FileHandler. In its handleFile method we have the following.
public File handleFile(File file) throws Throwable {
    JobParametersBuilder jobParametersBuilder = new JobParametersBuilder();
    Job job = (Job) appContext.getBean("processInitialFileJob");
    JobExecution jb = jobLauncher.run(job, jobParametersBuilder.toJobParameters());
    ....
}
What can we do to the code above to ensure that it has a new JPA session or not use the JPA session at all? This job needs to read from the database each time and not a cached representation of the database.
Are you using Hibernate? The Hibernate first-level cache may be creating the problem for you. Hibernate manages a first-level cache that is local to your Session, so once you create a session and perform any transactions in it, Hibernate syncs changes within it. But when you make changes to the table outside Hibernate, Hibernate won't see them until flush is called on the session and the session is closed.
To make sure this is not happening, inside your poller logic try creating a new Session (or EntityManager in the case of JPA) and closing it for every read/process/write cycle.
Also make sure hibernate.current_session_context_class is not set to thread, since a thread can be reused by the poller and the same Hibernate Session may be injected again.
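A minimal sketch of that advice, assuming an EntityManagerFactory is available (the field name emf is hypothetical):

EntityManager em = emf.createEntityManager(); // fresh persistence context per cycle
try {
    // ... read/process/write using em ...
} finally {
    em.close(); // discards the first-level cache along with the EntityManager
}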
This ended up not being an issue with Hibernate or JPA, but an issue of a StringBuilder holding on to data from previous runs. I believe it will need to be set up as @JobScope so that it is not reused across different executions of the job.

Multiple Embedded HSQLDB databases in jUnit errors during build

I'm working on a new Spring Batch (3.0.3.RELEASE) application where there will be multiple databases accessed during the jobs. For testing we are using HSQLDB (2.3.2) as the embedded database.
In my Application context I have the following.
<jdbc:embedded-database id="dataSource">
</jdbc:embedded-database>
<jdbc:embedded-database id="proDataSource">
<jdbc:script location="classpath:script-tables.sql" />
<jdbc:script location="classpath:script-constraints.sql" />
</jdbc:embedded-database>
<jdbc:embedded-database id="altDataSource">
<jdbc:script location="classpath:script-alt-tables.sql" />
</jdbc:embedded-database>
When I run a single test in Eclipse, things are fine. When I build from the command line, I get errors after the first test:
Failed to execute SQL script statement at line 3 of resource class path resource [script-promrkt-promo.sql]
object name already exists: PROMRKT
It appears to me that the population process in EmbeddedDatabaseFactory is receiving an already populated database. From what I can tell, no SHUTDOWN is executed after each test, and HSQLDB leaves the already populated database in memory.
I have re-reviewed the documentation, and a Spring doc does show an explicit shutdown command. But if Spring starts up the embedded database when my test starts, why doesn't it shut it down when the test completes?
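For reference, the explicit shutdown looks something like this when the embedded database is built programmatically (a minimal sketch; the script name is taken from the configuration above):

EmbeddedDatabase db = new EmbeddedDatabaseBuilder()
        .setType(EmbeddedDatabaseType.HSQL)
        .addScript("classpath:script-tables.sql")
        .build();
// ... exercise the database ...
db.shutdown(); // releases the in-memory HSQLDB instance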
Is it expected that the embedded databases will remain after each unit test for the same application context?
What is the order in which Spring starts up an embedded database, and when is the transactional context initialized?
Do I need to use a database cleaner?
Can the populate step be changed to run only when the database is first started, and roll back to the original script configuration when my test is complete (kind of like how AbstractTransactionalSpringContextTests worked)?
Do I need some transactional markers? Spring Batch's JobRepository is properly populated and destroyed between each test. Why aren't my custom dataSources?
The script the log message complains about isn't in your configuration, so I presume it's being executed somewhere else? If that's the case, you'll probably need to add @DirtiesContext to your tests so that Spring doesn't cache the context (I'm assuming you're using SpringJUnit4ClassRunner with @ContextConfiguration, but can't be sure since your actual test isn't in the question).
If my assumption is correct, Spring caches the context in an effort to improve performance across a unit test suite. If your test modifies the context in a way that can impact other tests (like running scripts in one test that need to be run again in others), mark those tests with @DirtiesContext and Spring won't cache the context. You can use the annotation at either the method or class level. You can read more about the annotation here: http://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/test/annotation/DirtiesContext.html
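A minimal sketch of that setup (the class and config file names are hypothetical):

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml") // hypothetical config file
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
public class EmbeddedDatabaseTest {
    // each test method gets a freshly created context, so the embedded
    // databases are re-created and re-populated every time
}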
I spent a lot of time looking at this and reading the Spring Framework documentation (gasp!) and tracing through code. There are some interesting changes in Spring Core 4.1, especially in testing.
I found out that ApplicationContexts are now cached at the JVM level. If a second test asks for a context, the TestContext framework first looks in its cache to see whether some other test has already asked for the identical configuration.
I have profiles on some of my tests. A test with a different profile but the same @ContextConfiguration causes that context to be reloaded with the profile applied. When the bean loader arrives at creating the embedded databases, the EmbeddedDatabaseFactory does not take into consideration that the embedded database (in-memory HSQLDB) may have already been created or cached by previous tests and does not need to be re-initialized.
Therefore I added some logic to EmbeddedDatabaseFactory.initDatabase(), checking whether the database already exists before re-initializing it and running the DatabasePopulator.
List existingDataBases = org.hsqldb.DatabaseManager.getDatabaseURIs();
boolean isExisting = false;
String localDBName = StringUtils.lowerCase(this.databaseName);
for (Object object : existingDataBases) {
    if (object.toString().contains(localDBName)) {
        isExisting = true;
        break;
    }
}
// Now populate the database
if (!isExisting && this.databasePopulator != null) {
(Of course this isn't quite kosher for what Spring would need, but it gets the point across.)
In my opinion this looks like an issue partially with the EmbeddedDatabaseFactory and partially with the TestContext caching mechanism. My jdbc:embedded-database definitions do not have any profiles associated with them. Why does the cache need to re-create them instead of loading them out of the existing cached beans?
You can try to force creation of a new embedded database by setting a unique name with generateUniqueName(true) each time a new object is created.
Here is an example:
embeddedDatabase = new EmbeddedDatabaseBuilder()
        .setType(EmbeddedDatabaseType.H2)
        .generateUniqueName(true)
        .addScripts("db/sql/create-db.sql", "db/sql/insert-data.sql")
        .build();

Committing a transaction within a test with Spring Test

I am testing my repository's update operation:
@Test
public void updateStatusByEmailWithEmailCustomer() {
    customerQuickRegisterRepository.save(standardEmailCustomer());
    assertEquals(CUST_STATUS_EMAIL, customerQuickRegisterRepository.findByEmail(CUST_EMAIL).getStatus());

    customerQuickRegisterRepository.updateStatusByEmail(CUST_EMAIL, STATUS_EMAIL_VERFIED);
    assertEquals(STATUS_EMAIL_VERFIED, customerQuickRegisterRepository.findByEmail(CUST_EMAIL).getStatus());
}
In the test case I am saving my entity with a default status and then changing it using updateStatusByEmail(CUST_EMAIL, STATUS_EMAIL_VERFIED), but the next assert statement still fails. This is due to the fact that updates made during test execution are committed only after completion of the test. Is there any way I can commit my changes within the test?
You can likely get by with flushing the underlying Hibernate Session, as this will push your changes to the underlying tables in the database (within the current test-managed transaction).
Search for "false positives" in the testing chapter of the Spring reference manual for details.
Basically, you will want to call flush() on the current Hibernate Session after your update call and before the corresponding assertion.
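For example, a minimal sketch (assuming the repository is JPA-backed and the test can inject the EntityManager):

@PersistenceContext
private EntityManager entityManager;

// ... inside the test, after the update call and before the assertion:
entityManager.flush(); // pushes pending changes to the database tables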
If that does not solve your problem, with Spring Framework 4.1 you can use the new TestTransaction API for programmatic transaction management within tests.
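With TestTransaction, a minimal sketch looks like this:

import org.springframework.test.context.transaction.TestTransaction;

// inside the @Test method:
TestTransaction.flagForCommit(); // commit instead of the default rollback
TestTransaction.end();           // ends (and commits) the current test-managed transaction
TestTransaction.start();         // starts a new transaction for the remaining assertions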
Regards,
Sam (lead of the spring-test module)

EntityManager Lifecycle when using Oracle's Virtual Private Database

I have a few questions, all related to the way an entity manager is created and used in an application with respect to Virtual Private Database, an Oracle DB feature that enables row-level security.
In a session bean, we generally have the entity manager as a member, and it is generally injected by the container. How is this entity manager managed by the container? If we want to implement a Virtual Private Database, we have to make sure that the VPD context remains valid for the entire user session, so that we do not have to set this context every time before we fire a query. (To add more detail: a session bean implements a couple of functions, and each of these functions uses the same entity manager; it should not be the case that we set the VPD context every time in each of these functions that do some DB manipulation.)
Further to #1, since the entity manager is cached in the session bean, do we need to explicitly close it in any scenario (like we do for JDBC connections)?
Also, I was wondering what the design criteria are for choosing a JTA or a non-JTA datasource. Does the way we create an entity manager depend on this?
To add more about the VPD requirement:
It would be nice if the container-managed EM could somehow be made to enforce the VPD per user. Note that the EM is injected here, so there should be a mechanism to set the VPD on the connection (and later retrieve the same connection for 'this' user in 'this' session).
Without an injected EM, I think using a reference to the EMF and then setting the properties for the EM can be done. Something like:
((org.eclipse.persistence.internal.jpa.EntityManagerImpl)em.getDelegate()).setProperties
It would be overkill if the VPD had to be set every time before a query is fired; rather, the connection should maintain the VPD context during the user's session and later be released back to the pool (after the VPD is cleared).
In a session bean, an injected entity manager is container-managed and, by default, transaction-scoped.
This means that when you call any method on the session bean and a transaction is started, the persistence context of the entity manager starts; when the transaction is committed or rolled back, it ends. There is thus no scenario in which you have to explicitly close the entity manager.
Furthermore, when there is already a transaction in progress, it is joined by default, and when there is already a persistence context attached to said transaction, it is propagated instead of a new one being created.
Stateful session beans have another option: the extended persistence context. This one is coupled to the scope of the stateful bean instead of to individual transactions. You still don't have to do any closing yourself here.
Then, you can also inject an EntityManagerFactory (using @PersistenceUnit) and obtain an entity manager from it: in that case you'll have an application-managed entity manager, which you'll have to close explicitly.
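A minimal sketch of the application-managed case (the method name is hypothetical):

@PersistenceUnit
private EntityManagerFactory emf;

public void doWork() {
    EntityManager em = emf.createEntityManager(); // application-managed
    try {
        // ... queries and updates ...
    } finally {
        em.close(); // must be closed explicitly, unlike a container-managed EM
    }
}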
JTA datasources (transactional datasources) are by default used with container-managed entity managers; the container takes care of everything here. Non-JTA datasources are for situations where you need separate connections to a DB, possibly outside any running transaction, on which you can set auto-commit mode, commit, roll back, etc. yourself.
These two datasource types can be defined in persistence.xml for a persistence unit. If you define a persistence unit with a non-JTA datasource, you typically create an entity manager for it using a factory and then manage everything yourself.
Update:
Regarding the Virtual Private Database, what you seem to need here is a user-specific connection per entity manager, but the normal way of doing things is coupling a persistence unit to a general datasource. I guess what could be needed here is a datasource that is aware of the user's context when a connection is requested.
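As a rough illustration (not an official recipe), a sketch of such a datasource built on Spring's DelegatingDataSource; the PL/SQL procedure set_user_ctx and the currentUser() lookup are hypothetical placeholders:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import org.springframework.jdbc.datasource.DelegatingDataSource;

public class VpdAwareDataSource extends DelegatingDataSource {
    @Override
    public Connection getConnection() throws SQLException {
        Connection con = super.getConnection();
        // Hypothetical PL/SQL procedure that establishes the VPD context
        // for the user bound to the current session.
        try (CallableStatement cs = con.prepareCall("{ call set_user_ctx(?) }")) {
            cs.setString(1, currentUser()); // hypothetical user lookup
            cs.execute();
        }
        return con;
    }

    private String currentUser() {
        // placeholder: resolve the current user from the security context
        return "someUser";
    }
}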
If you completely bypass the container and even largely bypass the JPA abstraction, you could go directly to Hibernate. It has connection providers that you can register globally, like DriverManagerConnectionProvider and DatasourceConnectionProvider. If you provide your own implementations of these with a setter for the actual connection, you can ask them back from a specific entity manager instance just prior to using it, and then set your own connection on it.
It's doable, but needless to say a bit hacky. Hopefully someone else can give a more 'official' answer. Best of course would be if Oracle provided an official plug-in for e.g. EclipseLink to support this. This document hints that it does:
TopLink / EclipseLink: support filtering data through their @AdditionalCriteria annotation and XML. This allows an arbitrary JPQL fragment to be appended to all queries for the entity. The fragment can contain parameters that can be set through persistence unit or context properties at runtime. Oracle VPD is also supported, including Oracle proxy authentication and isolated data.
See also How to use EclipseLink JPA with Oracle Proxy Authentication.
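For a flavor of the EclipseLink feature mentioned in the quote, a hedged sketch (the entity and parameter names are hypothetical):

import javax.persistence.Entity;
import javax.persistence.Id;
import org.eclipse.persistence.annotations.AdditionalCriteria;

@Entity
// Appends this JPQL fragment to every query for the entity.
@AdditionalCriteria("this.tenantId = :tenant")
public class Customer {
    @Id
    private long id;
    private String tenantId;
}

// At runtime the parameter is supplied as a persistence unit or context property:
// entityManager.setProperty("tenant", "acme");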
