JMS rollback

I have a process which involves sending a JMS message.
The process is part of a transaction.
If a part of the transaction that runs after the message has been sent fails, I need to cancel the message.
One thought I had was to somehow mark the message so that it is not picked up for a certain amount of time, and if I need to roll back, I could then go and cancel the message.
Not knowing messaging, I do not know if the idea is possible.
Or, is there a better idea?
Thanks

You can use JMS and JTA (Java Transaction API) together. When doing that, the sending of a JMS message or the consumption of a received message actually happens atomically as part of the transaction commit.
What does this mean? If the transaction fails or is rolled back, the "sent" message doesn't go out and any "received" messages aren't really consumed. All handled for you by your JMS and JTA provider.
You need to be using a JMS implementation that supports JTA. Sounds like you're already using transactions, so it might be a matter of doing some configuration to make it happen (waving hand vigorously...).
I've had experience using this (BEA WebLogic 7 w/ BEA WebLogic Integration). Worked as advertised -- "the outside world" saw no impact of JMS stuff I tried unless the transaction committed successfully.
Earlier versions of this answer linked to a Java page describing JMS/JTA integration generally. That page went stale and I don't see an equivalent replacement. The javadoc for the related JMS interface covers this capability.
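To make this concrete, here is a minimal sketch (not from the original answer) of a stateless session bean with a container-managed transaction: the database write and the JMS send share one JTA transaction, so the message only goes out if the commit succeeds. The resource names, the orders table, and the queue are hypothetical, and the connection factory is assumed to be XA-capable.

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class OrderService {

    @PersistenceContext
    private EntityManager em;

    @Resource(mappedName = "jms/XAConnectionFactory") // hypothetical XA-capable factory
    private ConnectionFactory connectionFactory;

    @Resource(mappedName = "jms/OrderQueue") // hypothetical queue
    private Queue orderQueue;

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void placeOrder(String orderId) throws JMSException {
        // Database write; enlists in the same JTA transaction as the JMS send below.
        em.createNativeQuery("INSERT INTO orders(id) VALUES (?)")
                .setParameter(1, orderId)
                .executeUpdate();

        Connection connection = connectionFactory.createConnection();
        try {
            // Inside a JTA transaction these two arguments are ignored;
            // the session simply joins the surrounding transaction.
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(orderQueue);
            producer.send(session.createTextMessage("order " + orderId));
        } finally {
            connection.close();
        }

        // If anything later in the transaction fails and it rolls back,
        // both the inserted row and the "sent" message are discarded.
    }
}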

What you have described is an XA transaction. This allows a transaction to span multiple resources, i.e. a JMS provider, a DB, or any other EIS. Most containers can be configured to use both XA and non-XA transactions, so check your container settings!
For example if you are using JMS with XA transactions the following is possible.
Start Transaction
|
DB Insert
|
Send JMS Msg
|
More DB Inserts
|
Commit Transaction <- Only at this point will the database records be inserted and the JMS message sent.
XA transactions are only available in full Java EE containers, so XA transactions are not available in Tomcat.
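A rough bean-managed sketch of the flow above, assuming JMS 2.0 and Java EE resource injection; the JNDI names (jdbc/AppXADataSource, jms/XAConnectionFactory, jms/AppQueue) and the audit table are placeholders:

import java.sql.PreparedStatement;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.jms.Session;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class TransferService {

    @Resource
    private UserTransaction userTransaction; // container-supplied JTA transaction

    @Resource(lookup = "jdbc/AppXADataSource") // placeholder XA data source
    private DataSource dataSource;

    @Resource(lookup = "jms/XAConnectionFactory") // placeholder XA connection factory
    private ConnectionFactory connectionFactory;

    @Resource(lookup = "jms/AppQueue") // placeholder queue
    private Queue queue;

    public void process(String payload) throws Exception {
        userTransaction.begin(); // Start Transaction
        try {
            insertRecord(payload); // DB Insert (not yet visible)
            sendMessage(payload); // Send JMS Msg (not yet delivered)
            insertRecord(payload + "-audit"); // More DB Inserts
        } catch (Exception e) {
            userTransaction.rollback(); // nothing is inserted, nothing is sent
            throw e;
        }
        userTransaction.commit(); // only now do the rows appear and the message get delivered
    }

    private void insertRecord(String value) throws Exception {
        try (java.sql.Connection db = dataSource.getConnection();
             PreparedStatement ps = db.prepareStatement("INSERT INTO audit(payload) VALUES (?)")) {
            ps.setString(1, value);
            ps.executeUpdate();
        }
    }

    private void sendMessage(String value) throws Exception {
        try (javax.jms.Connection jms = connectionFactory.createConnection()) {
            Session session = jms.createSession(); // no-arg form joins the JTA transaction (JMS 2.0)
            session.createProducer(queue).send(session.createTextMessage(value));
        }
    }
}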
Good luck!
Karl

Related

JMS and JPA transactions without two phase commit (i.e. JTA is not supported)

I am migrating a Spring Boot app running on a JEE app server which makes use of JTA to coordinate JMS and JPA transactions:
Exceptions raised while processing a message trigger JPA and JMS rollbacks (i.e. the message goes back to the originating queue)
If all database operations are successful and the message is successfully moved to the next queue, the JPA and JMS transactions are both committed
The target environment does not support JTA.
I am looking for guidance on how to setup transaction managers so that:
a JPA transaction is started immediately after starting the JMS transaction
a JPA transaction is concluded just before terminating the JMS transaction
a failure of terminating the JPA transaction would fail the JMS transaction
Any documentation or sample code would be awesome.
Many thanks in advance
A possible way forward for this scenario where none of the resources support XA or JTA:
JMS provider
Database driver
... is to use a "bracketing" transaction manager configured to run two transactions. At a high level, the steps are as described in Dave Syer's article:
1. Start messaging transaction
2. Receive message
3. Start database transaction
4. Update database
5. Commit database transaction
6. Commit messaging transaction
The Spring Data project provides such a transaction manager: see ChainedTransactionManager.
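A minimal Spring Java-config sketch of such a setup, assuming Spring Data Commons is on the classpath and that the JMS ConnectionFactory and JPA EntityManagerFactory beans are defined elsewhere:

import javax.jms.ConnectionFactory;
import javax.persistence.EntityManagerFactory;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.transaction.ChainedTransactionManager;
import org.springframework.jms.connection.JmsTransactionManager;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class BracketingTxConfig {

    @Bean
    public JmsTransactionManager jmsTransactionManager(ConnectionFactory connectionFactory) {
        return new JmsTransactionManager(connectionFactory);
    }

    @Bean
    public JpaTransactionManager jpaTransactionManager(EntityManagerFactory entityManagerFactory) {
        return new JpaTransactionManager(entityManagerFactory);
    }

    @Bean
    public PlatformTransactionManager transactionManager(JmsTransactionManager jmsTxManager,
                                                         JpaTransactionManager jpaTxManager) {
        // Transactions are started in the listed order and committed in reverse:
        // begin JMS, begin JPA ... commit JPA, then commit JMS (steps 1-6 above).
        return new ChainedTransactionManager(jmsTxManager, jpaTxManager);
    }
}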
This strategy works well when the error occurs in committing the database transaction, i.e. step 5. For cases where the error happens in the JMS commit, i.e. step 6, one will end up with the message in dead-letter.
Testing this implementation with 100,000 messages showed that about 0.005 % of messages ended up in the dead-letter queue. For some, the error occurred when committing the JPA transaction, for others the JMS one.
For the system to be operable, messages in the dead-letter queue should be retriable regardless of the point of failure. This means that the bracketing transaction manager option is only viable if the app is changed to be idempotent: keep track of the JMS message IDs already processed, and skip the update part (step 4) when the JMS message ID has already been processed.
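A hedged sketch of that idempotency check; the processed_message table and the surrounding wiring are assumptions, not part of the original answer:

import javax.jms.JMSException;
import javax.jms.Message;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class IdempotentMessageHandler {

    @PersistenceContext
    private EntityManager em;

    @Transactional // runs under the chained (bracketing) transaction manager configured above
    public void handle(Message message) throws JMSException {
        String messageId = message.getJMSMessageID();

        // Have we already processed this JMS message ID? (processed_message is a hypothetical table)
        Number alreadySeen = (Number) em.createNativeQuery(
                "SELECT COUNT(*) FROM processed_message WHERE message_id = ?")
                .setParameter(1, messageId)
                .getSingleResult();
        if (alreadySeen.longValue() > 0) {
            return; // duplicate redelivery: skip the update (step 4)
        }

        // ... perform the real database update (step 4) here ...

        // Record the message ID in the same JPA transaction as the update.
        em.createNativeQuery("INSERT INTO processed_message(message_id) VALUES (?)")
                .setParameter(1, messageId)
                .executeUpdate();
    }
}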

Transacted Route and Transactional Endpoints, Transaction commit order

My route looks like the one below:
from("jms:queue:IN_QUEUE") //(A) Transactional Endpoint
.transacted("required") //(B) TX Policy with PROPAGATION_REQUIRED and JPATxManager
.bean("someBean", "readFromDB()") //(C) Read from DB
.bean("someBean", "writeToDB()") //(D) Write to DB
.to("file:/home/src?fileName=demo_${id}.txt")
I know the JMS consumer at (A) will start a JMS transaction on each poll and attach it to the thread.
Also, the transacted node at (B) will start a JPA transaction once an exchange reaches it and attach it to the thread.
Please find my questions below:
Can two different transactions get attached to a single thread (like the one above) ?
If Yes, which one should get suspended ?
What should be the commit and rollback order of the above mentioned route ?
Note: I didn't find any obvious answer in the Camel in Action, 2nd Ed. book, so please guide me
Good afternoon,
This is a variation on your other question.
The:
from("jms:queue:IN_QUEUE) //(A) Transactional Endpoint
endpoint is transacted, meaning that you have marked the JMS component as transacted and the JMS sessions will be managed by a JmsTransactionManager.
.transacted("required") //(B) TX Policy with PROPAGATION_REQUIRED and JPATxManager
This should not be a JPA transaction manager, but a JTA transaction manager (like Arjuna). As in your other question, you now have a JMS local transaction for reading your message, and local JPA transacted sessions for your DB access. You want the PlatformTransactionManager (a JTA transaction manager) to synchronize the local transactions for you.
As to your questions:
Can two different transactions get attached to a single thread (like the one above) ?
That really doesn't make any sense.
If Yes, which one should get suspended ?
Nothing will get suspended.
What should be the commit and rollback order of the above mentioned route ?
The DB read is not transactional and does not need to be committed. The file write will actually happen as the JTA transactional context is closed. This leaves the DB write. If that fails, then the DB read does not matter, the message will get put back on the source destination and the file write will not be called.
Enabling the DEBUG logging for the various transaction managers is very helpful.
I could go on about this in painful detail. This is more for burki. I think that you will really appreciate this. Very subtle, and it happens a lot.
from("jms:queue:SRC_QUEUE")
.transacted("required")
.to("jms1:queue:DEST_QUEUE")
What happens if the two endpoints are marked as transacted ... but... you do not have the 'transacted' line? Well, a JMS local transaction was started on the message listener. This will be committed as the route ends. There are two independent local JMS transactions. These are not synchronized by a JTA transaction manager.
What actually happens is that the commit for the message 'get' is called. There is no actual commit for the message 'put'. The message 'put' is committed when the JMS session is closed. This is in the JMS spec, that closing the connection inherently commits any transaction. So, because there is no linkage between the two components, the 'get' is committed, and then the 'put' session is closed.
This means that you can lose messages if there is an outage between the commit for the message 'get' and the session close for the message 'put'.
Does that make sense? There is no linkage between the local transactions, so Camel closes them in order, starting with committing the 'get' before calling the 'put'.
JTA transaction synchronization is the key. You still have local transaction resources (not XA), but they can be very well managed in a pretty lightweight JTA transactional context.
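For reference, a minimal Spring Java-config sketch of what .transacted("required") can resolve to; the bean name "required" and relying on the container's JTA manager (e.g. Arjuna) being visible through JNDI are assumptions:

import org.apache.camel.spring.spi.SpringTransactionPolicy;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.jta.JtaTransactionManager;

@Configuration
public class CamelTxConfig {

    @Bean
    public JtaTransactionManager transactionManager() {
        // Auto-detects the container's JTA UserTransaction/TransactionManager via JNDI.
        return new JtaTransactionManager();
    }

    @Bean("required")
    public SpringTransactionPolicy required(JtaTransactionManager transactionManager) {
        // .transacted("required") in the route looks this policy up by bean name.
        SpringTransactionPolicy policy = new SpringTransactionPolicy();
        policy.setTransactionManager(transactionManager);
        policy.setPropagationBehaviorName("PROPAGATION_REQUIRED");
        return policy;
    }
}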
from("jms:queue:SRC_QUEUE")
.transacted("required")
.to("DB:transactedwrite")
.to("jms1:queue:DEST_QUEUE")
I couldn't be bothered to look up the correct syntax for a database insert, but you get the idea. In this case, you can get duplicate DB inserts if the JMS 'put' fails. This is not 'all or nothing' XA transactions. The transactions are committed in order. If one in the middle succeeds and then the next transaction fails, the 'get' will get rolled back and you will get duplicates up to the point of failure.
Sorry, I can't answer your specific questions, but I can give some specific infos about the transactions of your route.
You've got 3 different "systems" with different transaction "scopes"
A JMS broker from which you consume
A database you read and write from and you configured a JPA TxManager for
A file system (no transactions at all) as destination
First of all, if you want to have transaction safety across JMS and the database you have to use XA transactions.
Then, it is unclear if you expect the JMS consumer to be transacted (because of the transacted() in your route) or if you really configured a JMS connection with local JMS transactions. I assume that you really consume transacted.
Let's talk about what you got without line B of your route:
You consume transacted from the broker
Camel processes the message through your route
When any error occurs during route processing, the message is not committed on the broker and therefore redelivered to your route
The transaction that is opened by a transactional consumer is kept open by Camel until the route is successfully processed.
So the only obvious problem would be an error after the database write, that triggers a redelivery and the database write is done once again. Probably the write is not idempotent and therefore must not happen twice.
So to solve these problems, you either have to use XA transactions or you simply use local JMS transactions and implement compensation logic for the "gaps" like the one described above.
The database transaction on the other hand has no benefit unless the read and write operation must be done in a transaction (but I have doubts that this is the case with two individual bean calls and a JMS consumer).

WildFly 10: JMS send message to queue as part of XA transaction

I have recently had to support a colleague in verifying why some system tests are not passing on WildFly, system tests that pass consistently on WebLogic and GlassFish.
After analysing the log, it became clear the reason is related to a JMS message, sent from a backend thread, being committed to the queue too soon, when the expectation was that the message would be committed when the entry-point container-managed transaction of an MDB commits. So the message is going out before the MDB that sends it is done running.
In WebLogic, to achieve the expected behaviour, you need to make sure that when you take the connection factory given by the container, which is XA configured, you call connection.createSession with
transacted = true and
acknowledgement = Session.SESSION_TRANSACTED.
In a process similar to the one depicted in this URL
http://www.mastertheboss.com/jboss-server/jboss-jms/sending-jms-messages-over-xa-with-wildfly-jboss-as
Except that in the snippet above auto-acknowledge is set and the first parameter is set to false.
In WildFly, when our WebLogic and GlassFish configuration is used, nothing is committed and the system behaves as if the JMS message sent had been rolled back.
If the configuration from the example above is used instead, the JMS message is sent immediately and the consumer MDB launches right away, being triggered before the producer transaction actually ends, causing the system test to fail.
According to the official messaging configuration documentation, by using a pooled connection factory with the transaction=XA attribute, the container should bind the commit of the JMS message to the lifecycle of the parent transaction.
See the official documentation below, in particular with respect to the java:/JmsXA connection factory.
https://docs.jboss.org/author/display/WFLY10/Messaging+configuration
My colleague was initially using a non-pooled connection factory, but the injected resource reference has since been fixed. I have tried all possible combinations of parameters when sending the message, but my outcome is still:
Either sent too soon or never sent.
To conclude, all the other resources are XA. Namely, the Oracle DB is using the XA driver.
Can anyone confirm whether in WildFly sending the JMS message only when the parent transaction commits actually works, and if so, how the session should be configured?
I will check whether my colleague has made a mistake in the configuration of the connection factory used by the MDBs themselves to consume messages out of the queue. But if that one is also XA... then it is a big problem.
So the issue is fixed.
The commit of the JMS message to the queue at the end of the transaction works perfectly.
The issue was two fold:
(a) The first spot of code I was looking at to address the issue was not the right one. Someone had decided to write his own send-telegram-to-queue API elsewhere and was not using the central API for sending telegrams, so any modification I made to the injected connection factory was not actually taking effect. The stale connection factories were still being used.
(b) Once the correct API was spotted, it was easy to make the mechanism work by using the WildFly XA pooled connection factory mentioned in the post above.
The one thing that was tweaked was the connection.createSession API.
The API in Java EE 7 has been enlarged and it is now better documented than in Java EE 6.
To send a JMS message in a container as part of an XA transaction one should do:
connection.createSession() without any parameters.
This can easily be seen in the connection javadoc:
https://docs.oracle.com/javaee/7/api/javax/jms/Connection.html
QUOTE 1:
This method has been superseded by the method createSession(int sessionMode) which specifies the same information using a single argument, and by the method createSession() which is for use in a Java EE JTA transaction. Applications should consider using those methods instead of this one.
QUOTE 2:
In a Java EE web or EJB container, when there is an active JTA transaction in progress:
Both arguments transacted and acknowledgeMode are ignored. The session will participate in the JTA transaction and will be committed or rolled back when that transaction is committed or rolled back, not by calling the session's commit or rollback methods. Since both arguments are ignored, developers are recommended to use createSession(), which has no arguments, instead of this method.
Which means the code snippet in:
http://www.mastertheboss.com/jboss-server/jboss-jms/sending-jms-messages-over-xa-with-wildfly-jboss-as
is not appropriate. What one should be doing is creating the session without any parameters and letting the container handle the rest.
Which it does just fine.
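A minimal sketch of the resulting sending code (the queue name and bean are hypothetical); it assumes the caller runs inside a container-managed transaction, which the no-argument createSession() then joins:

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Queue;
import javax.jms.Session;

@Stateless
public class TelegramSender {

    @Resource(mappedName = "java:/JmsXA") // WildFly pooled, XA-enlisting connection factory
    private ConnectionFactory connectionFactory;

    @Resource(lookup = "java:/jms/queue/OutQueue") // hypothetical queue
    private Queue outQueue;

    public void send(String payload) throws JMSException {
        try (Connection connection = connectionFactory.createConnection()) {
            Session session = connection.createSession(); // no arguments: participates in the JTA transaction
            session.createProducer(outQueue).send(session.createTextMessage(payload));
            // The message only becomes visible on the queue when the surrounding transaction commits.
        }
    }
}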

Standalone spring app XA transactions with IBM MQ and Oracle as resources

I am in the process of developing a stand-alone Apache Camel application (not running in a J2EE container).
This app needs to be capable of routing messages from an IBM MQ queue manager to an Oracle database in a distributed transaction.
My Google searches pretty much took me to a few places, but none of those were able to give me good clues about how to put everything together.
The link below was the closest to what I need, but unfortunately it is not clear enough to put me on the right path.
IBM MQManager as XA Transaction Manager with Spring-jms and Spring-tx
Thank you in advance for your inputs.
You will need to use a JTA TransactionManager, but since you are not in a J2EE container I would suggest using Atomikos.
https://github.com/camelinaction/camelinaction/tree/master/chapter9/xa
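A rough sketch of the Atomikos wiring with Spring Java config, assuming the Atomikos transactions-jta module is on the classpath; the XA-capable MQ connection factory and Oracle XA data source would still have to be registered as Atomikos-managed resources separately:

import javax.transaction.SystemException;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.jta.JtaTransactionManager;

import com.atomikos.icatch.jta.UserTransactionImp;
import com.atomikos.icatch.jta.UserTransactionManager;

@Configuration
public class AtomikosJtaConfig {

    @Bean(initMethod = "init", destroyMethod = "close")
    public UserTransactionManager atomikosTransactionManager() {
        UserTransactionManager transactionManager = new UserTransactionManager();
        transactionManager.setForceShutdown(false);
        return transactionManager;
    }

    @Bean
    public UserTransactionImp atomikosUserTransaction() throws SystemException {
        UserTransactionImp userTransaction = new UserTransactionImp();
        userTransaction.setTransactionTimeout(300);
        return userTransaction;
    }

    @Bean
    public JtaTransactionManager transactionManager(UserTransactionManager atomikosTransactionManager,
                                                    UserTransactionImp atomikosUserTransaction) {
        // Camel's transacted() policy and the XA-enabled JMS/JDBC resources all enlist through this manager.
        return new JtaTransactionManager(atomikosUserTransaction, atomikosTransactionManager);
    }
}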
My route below is working with IBM MQ -> Oracle Database, in a J2EE container admittedly, but it should still work with an Atomikos setup. I would say this isn't the proper way to do it, but it's the only way I managed to get it working, and it works well enough for my use case.
from(inQueue)
.transacted()
.setHeader("storeData", constant(false))
.to("direct:a")
.choice().when(header("storeData").isEqualTo(false)) // if this is true the database calls are 'successful'
.log("Sending message to errorQueue")
.to(errorQueue)
;
StoreDataBean storeDataBean = new StoreDataBean();
from("direct:a")
.onException(Exception.class).handled(false).log("Rollbacking database changes").markRollbackOnlyLast().end()
.transacted("PROPAGATION_REQUIRES_NEW")
.bean(storeDataBean) //storeData sets the header storeData to true if no SQLException or other exceptions are thrown
.end()
;
The commits are handled by the transaction manager, so if I actually get an error with the database commit, it should roll back the message.
The next problem I have had with rolling back messages is that I can't set the dead-letter queue, as you can with ActiveMQ, so the message rolls back to the incoming queue. That is why I handle the database work the way I do: it runs in a new transaction, which is rolled back in case of a normal SQLException when calling the database.
I wish this worked:
from(inQueue)
.onException(Exception.class).to(errorQueue).markRollbackOnly().end()
.bean(storeDataBean)
.end()
I have posted about this in the community forums but no answers at all.

JMS and JTA Transactions in Java EE

I think I am not getting something right with JMS and JTA. I am running in a Java EE container with all CMTs. Here is what I am doing:
In an SLSB, write something to the database
From the same method of the SLSB, post a message to a JMS queue
An MDB in the same container listens to the JMS queue and picks up the message
The MDB reads the database
The problem is, the MDB does not see the changes made to the database in step 1.
I verified that steps 1 and 2 happen inside a single XA transaction, as expected. My expectation is that a second XA transaction would start at step 3, after the first XA has been committed. But it seems that the MDB receives the message before the XA transaction that posted the message has been committed.
Is my expectation wrong and what I am seeing is normal?
I am running under JBoss 6. The SLSB is local. Both the SLSB and the MDB are in the same application.
I found the problem! My JMS connection factory was not XA aware. I had looked up /XAConnectionFactory for my JMS connection factory. In spite of the name, that's the wrong resource to look up for a regular app in JBoss. There is a java:/XAConnectionFactory too, which does not work either. The correct resource name is java:/JmsXA. I used it and everything is working just as expected.
Thanks to @strmqm for nudging me in the right direction.
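For reference, a minimal sketch of that lookup (the queue name is hypothetical); because java:/JmsXA is the pooled XA-aware factory, the send enlists in the caller's JTA transaction, and the MDB only sees the message once that transaction has committed:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class JmsXaSender {

    // Intended to be called from a CMT bean method, so a JTA/XA transaction is already active.
    public void send(String text) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("java:/JmsXA"); // XA-aware pooled factory
        Queue queue = (Queue) ctx.lookup("queue/MyQueue"); // hypothetical queue binding

        Connection connection = cf.createConnection();
        try {
            // In a JTA transaction both arguments are ignored; the session joins the transaction.
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            session.createProducer(queue).send(session.createTextMessage(text));
        } finally {
            connection.close();
        }
    }
}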
I saw a conceptually similar problem in an app built w/ WebLogic 7. The DB commit from tx1 wasn't complete by the time that tx2 (initiated by a JMS send in tx1) tried to read it.
The trouble there was that our configuration involved a WLS 7 XA emulation layer with a non-XA db connection (to Oracle DB). This risk was part of that XA shortcut. Apparently if we'd gone w/ the true XA all the way to the DB, that hole would have closed. Never ended up testing that.
You say this is JBoss. Any chance that they've got some similar shim that bypasses the XA and gives this same surprising result?
