Spring Reactive events and transactional Context

I am migrating an existing app from the traditional Spring MVC model to Spring Reactive. My first refactor left me with this piece of code:
return Mono.fromSupplier(() -> visitRequestDao.findById(requestId).get())
        .map(request -> request.approve())
        .map(request -> ResponseEntity.ok(listOfPendingVisitRequest(owner)));
After executing the endpoint I noticed that my entity's state had not changed. As a Hibernate user, I know that when I load an object, any change applied to it will be reflected in the database after the commit. My guess was that the event was executing in a different thread. I changed the code a little bit:
return Mono.fromSupplier(() -> visitRequestDao.findById(requestId).get())
        .map(request -> transactionalContext.execute(() -> request.approve()))
        .map(request -> ResponseEntity.ok(listOfPendingVisitRequest(owner)));
A class TransactionalContext was created and marked as transactional, so now I know that any time its method is called, a new transaction will be started or the current one will be joined. Is this the correct approach? Is there a better solution?
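For reference, this is roughly what the TransactionalContext looks like (a minimal sketch; the generic execute signature is illustrative). Because the bean is proxied by Spring, work passed to execute runs inside a transaction, so Hibernate's dirty checking flushes the state change on commit:

import java.util.function.Supplier;

import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

@Component
public class TransactionalContext {

    // Joins the caller's transaction or starts a new one, then runs the
    // action; entity changes made inside are flushed when it commits.
    @Transactional
    public <T> T execute(Supplier<T> action) {
        return action.get();
    }
}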

Hibernate uses old closed SessionImpl instead of given SessionImpl

I have a problem with Hibernate Envers (version 5.2.0.Final).
Context:
I'm auditing some entities with some lazy relations. I have a JSF page that loads one revision of an entity with all relations of that revision. That works fine. On this page I can open a fieldset, which triggers an AJAX request. In this request we reattach all relations by calling entityManager.merge(entity) to be able to fetch the lazy relations in the fieldset. (The EntityManager is RequestScoped.)
The Problem:
The AJAX call is a new request. The server calls entityManager.merge(entity), which forces creation of a new EntityManager (so a new org.hibernate.internal.SessionImpl is created). On this object Hibernate calls SessionImpl.merge(...). But in the method org.hibernate.internal.AbstractSharedSessionContract.createQuery(String) another SessionImpl object is used, one that was already closed in the previous request. That causes a java.lang.IllegalStateException: Session/EntityManager is closed.
In one sentence: although a new EntityManager was created and merge was called on that new EntityManager, Hibernate uses an old Session/EntityManager from the previous request.
I debugged the problem and found the following:
Debug1: Shows the stack trace of SessionImpl.merge(...) with the session's object id.
Debug2: Shows the last method with the correct SessionImpl object (see its id). This object is not used in the following methods.
Debug3: The step after Debug2 does not know the given SessionImpl object. It has its own SessionImpl object in collection.initializor.versionsReader. This session was created and closed in the previous request (on loading the page).
Debug4: Now Hibernate wants to create the query with the closed SessionImpl.
Debug5: This causes the exception, as the session is closed.
My questions:
Is this a bug in Hibernate?
Why is the given SessionImpl in the method org.hibernate.type.CollectionType.getElementIterator(...) not used?
Does anyone know a solution or workaround for this problem?
Thank you very much for any idea. I have spent days on this bug.
Why is the Session arg in o.h.type.CollectionType.getElementIterator not used?
The short answer is that it isn't required; it's simply a backward-compatibility concern from 8 years ago.
The long answer is that the type system used to vary its behavior based on whether the user had specified the session to operate in EntityMode.MAP or EntityMode.POJO, and the types therefore needed to know which mode the session was in; hence why it was passed.
But even back in 2011 when this was changed, the session argument influenced behavior if and only if the session was operating in EntityMode.MAP. In other words, all other modes always routed directly to the underlying collection's Collection#iterator() method.
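In effect, for any non-MAP entity mode the method boils down to this (a paraphrased sketch of the behavior described above, not the actual Hibernate source; parameter types simplified):

import java.util.Collection;
import java.util.Iterator;

// Outside EntityMode.MAP the session argument is ignored and iteration
// delegates straight to the underlying collection.
static Iterator<?> getElementsIterator(Object collection, Object session) {
    return ((Collection<?>) collection).iterator();
}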
All this aside, however, it doesn't have any impact on what you see in your Debug3 screenshot.
Is this a bug in Hibernate?
No, based on what I have read, I believe you're mixing concerns.
In Hibernate (without Envers), you can basically do this:
// Request 1
EntityManager request1EntityManager = getEntityManager();
sessionScopeEntity = request1EntityManager.find( MyEntity.class, myEntityId );

// Request 2
EntityManager request2EntityManager = getEntityManager();
sessionScopeEntity = request2EntityManager.merge( sessionScopeEntity );
for ( SomeCollectionItem item : sessionScopeEntity.getSomeCollection() ) {
    // do things here
}
The above works because you reassociate the entity with the new session, which in turn injects the session into all the uninitialized proxies the entity maintains. But you can also rewrite the above as:
// Request 1
EntityManager request1EntityManager = getEntityManager();
sessionScopeEntity = request1EntityManager.find( MyEntity.class, myEntityId );
sessionScopeEntity.getSomeCollection().size(); // initialize collection with request1EntityManager

// Request 2
EntityManager request2EntityManager = getEntityManager();
for ( SomeCollectionItem item : sessionScopeEntity.getSomeCollection() ) {
    // do things here
}
The difference is that the collection gets initialized with the first session, and therefore when you access it with the second session the entity doesn't necessarily need a merge, because the collection is no longer a proxy but is actually populated like a normally fetched collection would be.
The major difference between an entity instance returned by Hibernate and an audited entity instance returned by Envers is that the audited entity instance is NOT a managed persistent entity.
Depending on your scenario, you may decide to audit only a subset of fields on an entity mapping. This is why you cannot (and should not) use things like merge with that instance, as it could easily lead to unintended side effects on your real data.
If you intend to pass the audited entity instance across sessions, I would highly suggest that you instead initialize the collections you need up front in the first session where you fetched the instance, because presently there is no way to reassociate an audited entity instance with a new session.
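For example (a minimal sketch, assuming an AuditReader lookup like the one used when the page loads; the entity and collection names are illustrative). Touching the collection while the first session is still open populates it, so the later request can iterate it without a live session:

import org.hibernate.envers.AuditReader;
import org.hibernate.envers.AuditReaderFactory;

// Request 1: fetch the revision and initialize every lazy collection the
// later AJAX request will need, while this session is still open.
AuditReader auditReader = AuditReaderFactory.get(entityManager);
MyEntity audited = auditReader.find(MyEntity.class, myEntityId, revisionNumber);
audited.getSomeCollection().size(); // force the lazy collection to load now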

Spring Integration Usage and Approach Validation

I am testing out using Spring Integration to tie together disparate modules within the same Spring Boot application (for now) and services into a unified flow starting with a single entry point.
I am looking for the following clarifications with Spring Integration if possible:
Is the below code the right way to structure flows using the DSL?
In "C" below, can I bubble up the result to the "B" flow?
Is using the DSL vs. the XML the better approach?
I am confused as to how to correctly "terminate" a flow?
Flow Overview
In the code below, I am just publishing a page to a destination. The overall flow goes like this.
Publisher flow listens for the payload and splits it into parts.
Content flow filters out pages and splits them into parts.
AWS flow subscribes and handles the part.
File flow subscribes and handles the part.
Eventually, there may be additional and very different types of consumers of the Publisher flow which are not content-related, which is why I split the publisher from the content.
A) Publish Flow (publisher.jar):
This is my "main" flow, initiated through a gateway. The intent is that this serves as the entry point to trigger all publishing flows.
Receive the message
Preprocess the message and save it.
Split the payload into individual entries contained in it.
Enrich each of the entries with the rest of the data.
Put each entry on the output channel.
Below is the code:
@Bean
IntegrationFlow flowPublish()
{
    return f -> f
        .channel(this.publishingInputChannel())
        // Prepare the payload
        .<Package>handle((p, h) -> this.save(p))
        // Split the artifact-resolved items
        .split(Package.class, Package::getItems)
        // Find the artifact associated with each item (if available)
        .enrich(
            e -> e.<PackageEntry>requestPayload(
                m ->
                {
                    final PackageEntry item = m.getPayload();
                    final Publishable publishable = this.findPublishable(item);
                    item.setPublishable(publishable);
                    return item;
                }))
        // Send the results to the output channel
        .channel(this.publishingOutputChannel());
}
B) Content Flow (content.jar)
This module's responsibility is to handle incoming "content" payloads (i.e. Page in this case) and split/route them to the appropriate subscriber(s).
Listen on the publisher output channel
Filter the entries by Page type only
Add the original payload to the header for later
Transform the payload into the actual type
Split the page into its individual elements (blocks)
Route each element to the appropriate PubSub channel.
At least for now, the subscribed flows do not return any response; they should just fire and forget, but I would like to know how to bubble up the result when using the pub-sub channel.
Below is the code:
@Bean
@ContentChannel("asset")
MessageChannel contentAssetChannel()
{
    return MessageChannels.publishSubscribe("assetPublisherChannel").get();
    //return MessageChannels.queue(10).get();
}

@Bean
@ContentChannel("page")
MessageChannel contentPageChannel()
{
    return MessageChannels.publishSubscribe("pagePublisherChannel").get();
    //return MessageChannels.queue(10).get();
}
@Bean
IntegrationFlow flowPublishContent()
{
    return flow -> flow
        .channel(this.publishingChannel)
        // Filter for root pages (which contain elements)
        .filter(PackageEntry.class, p -> p.getPublishable() instanceof Page)
        // Put the publishable details in the header
        .enrichHeaders(e -> e.headerFunction("item", Message::getPayload))
        // Transform the item to a Page
        .transform(PackageEntry.class, PackageEntry::getPublishable)
        // Split page into components and put the type in the header
        .split(Page.class, this::splitPageElements)
        // Route content based on type to the subscriber
        .<PageContent, String>route(PageContent::getType, mapping -> mapping
            .resolutionRequired(false)
            .subFlowMapping("page", sf -> sf.channel(this.contentPageChannel()))
            .subFlowMapping("image", sf -> sf.channel(this.contentAssetChannel()))
            .defaultOutputToParentFlow())
        .channel(IntegrationContextUtils.NULL_CHANNEL_BEAN_NAME);
}
C) AWS Content (aws-content.jar)
This module is one of many potential subscribers to the content-specific flows. It handles each element individually based on the routed channel published to above.
Subscribe to the appropriate channel.
Handle the action appropriately.
There can be multiple modules with flows that subscribe to the above routed output channels; this is just one of them.
As an example, the "contentPageChannel" could invoke the below flowPageToS3 (in the aws module) and also a flowPageToFile (in another module).
Below is the code:
@Bean
IntegrationFlow flowAssetToS3()
{
    return flow -> flow
        .channel(this.assetChannel)
        .publishSubscribeChannel(c -> c
            .subscribe(s -> s
                .<PageContent>handle((p, h) -> this.publishS3Asset(p))));
}

@Bean
IntegrationFlow flowPageToS3()
{
    return flow -> flow
        .channel(this.pageChannel)
        .publishSubscribeChannel(c -> c
            .subscribe(s -> s
                .<Page>handle((p, h) -> this.publishS3Page(p))
                .enrichHeaders(e -> e.header("s3Command", Command.UPLOAD.name()))
                .handle(this.s3MessageHandler())));
}
First of all, there is a lot of content in your question: it's too hard to keep all the information in mind while reading. That is your project, so you are confident in the subject, but for us it is all new, and a reader may give up before even finishing it, let alone attempting an answer.
Anyway, I'll try to answer your questions from the beginning, although I feel like you're going to start a long discussion ("what?", "how?", "why?")...
Is the below code the right way to structure flows using the DSL?
It really depends on your logic. It is a good idea to separate it into logical components, but a separate jar for each might be overhead. Looking at your code, it seems to me that you still collect everything into a single Spring Boot application and just @Autowired the appropriate channels into the @Configuration. So, yes, a separate @Configuration is a good idea, but a separate jar is overhead. IMHO.
In "C" below, can i bubble up the result to the "B" flow?
Well, since the story is about publish-subscribe, it is really unusual to wait for a reply. How many replies are you going to get from those subscribers? Right, that is the problem: we can send to many subscribers, but we can't funnel replies from all of them into a single return. Coming back to Java code: a method can have several arguments, but only one return value. The same applies here in messaging. Anyway, you may take a look at the Scatter-Gather pattern implementation.
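For example, a minimal scatter-gather sketch in the Java DSL (the publishFilePage handler is hypothetical; the gatherer collects one reply per subscriber into a single list that bubbles up to the caller):

import java.util.stream.Collectors;

import org.springframework.context.annotation.Bean;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.messaging.Message;

@Bean
IntegrationFlow flowPublishAndGather()
{
    return flow -> flow
        // Scatter: send the page to both subscribers, applying sequence
        // headers so the gatherer can correlate their replies.
        .scatterGather(
            scatterer -> scatterer
                .applySequence(true)
                .recipientFlow(sf -> sf.<Page>handle((p, h) -> this.publishS3Page(p)))
                .recipientFlow(sf -> sf.<Page>handle((p, h) -> this.publishFilePage(p))),
            // Gather: combine the replies into one result for the caller.
            gatherer -> gatherer
                .outputProcessor(group -> group.getMessages()
                    .stream()
                    .map(Message::getPayload)
                    .collect(Collectors.toList())));
}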
Is using the DSL vs. the XML the better approach?
Both are just high-level APIs. Underneath they are the same integration components. Looking at your app, you'd come to the same distributed solution with the XML configuration. I don't see a reason to step back from the Java DSL. At the least it is less verbose, for you.
I am confused as to how to correctly "terminate" a flow?
That's absolutely unclear, given your big description. If you send to S3 or to a file, that is the termination. There is no reply from those components, so there is nowhere to go and nothing more to do. The flow just stops, the same as a Java method returning void. If you worry about your entry-point gateway, just make it void and don't wait for any replies. See Messaging Gateway for more info.
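For example, a fire-and-forget entry point can be sketched like this (the interface and channel names are illustrative); the void return type tells Spring Integration not to wait for a reply:

import org.springframework.integration.annotation.Gateway;
import org.springframework.integration.annotation.MessagingGateway;

@MessagingGateway
public interface PublishGateway
{
    // void return type: the message is sent and no reply is awaited, so a
    // terminal flow (e.g. writing to S3 or a file) simply ends there.
    @Gateway(requestChannel = "publishingInputChannel")
    void publish(Package payload);
}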

Write call/transaction is dropped in TransactionalEventListener

I am using Spring Boot (1.4.1) with Hibernate (5.0.1.Final). I noticed that when I try to write to the db from within a @TransactionalEventListener handler, the call is simply ignored. A read call works just fine.
When I say ignored, I mean there is no write in the db and there are no logs. I even enabled log4jdbc and still saw no logs, which means no Hibernate session was created. From this I reckon that somewhere in Spring Boot we identify that it is a transactional event handler and ignore the write call.
Here is an example.
// This function is defined in a class marked with @Service
@TransactionalEventListener
open fun handleEnqueue(event: EnqueueEvent) {
    // some code to obtain encodeJobId
    this.uploadService.saveUploadEntity(uploadEntity, encodeJobId)
}
@Service
@Transactional
class UploadService {
    //.....code

    open fun saveUploadEntity(uploadEntity: UploadEntity, encodeJobId: String): UploadEntity {
        // some code
        return this.save(uploadEntity)
    }
}
Now if I force a new transaction by annotating saveUploadEntity with @Transactional(propagation = Propagation.REQUIRES_NEW), a new transaction with its own connection is made and everything works fine.
I don't like that there is complete silence in the logs when this write is dropped (again, reads succeed). Is there a known bug?
How can I enable the handler to start a new transaction? If I put Propagation.REQUIRES_NEW on my handleEnqueue event listener, it does not work.
Besides enabling log4jdbc (which successfully logs reads/writes), I have the following settings in Spring.
Thanks
I ran into the same problem. This behavior is actually mentioned in the documentation of TransactionSynchronization#afterCompletion(int), which is referred to by TransactionPhase.AFTER_COMMIT (the default TransactionPhase attribute of @TransactionalEventListener):
The transaction will have been committed or rolled back already, but the transactional resources might still be active and accessible. As a consequence, any data access code triggered at this point will still "participate" in the original transaction, allowing to perform some cleanup (with no commit following anymore!), unless it explicitly declares that it needs to run in a separate transaction. Hence: Use PROPAGATION_REQUIRES_NEW for any transactional operation that is called from here.
Unfortunately this seems to leave no option other than to enforce a new transaction via Propagation.REQUIRES_NEW. The problem is that the transactional event listeners are implemented as transaction synchronizations and are hence bound to the transaction. When the transaction is closed and its resources are cleaned up, so are the listeners. There might be a way to use a customized EntityManager which stores events and then publishes them after its close() was called.
Note that you can use TransactionPhase.BEFORE_COMMIT on your @TransactionalEventListener, which will run before the commit of the transaction. This will write your changes to the database, but you won't know whether the transaction you're listening to was actually committed or is about to be rolled back.
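For completeness, here is the working variant sketched in Java (names taken from the question; the persistence call itself is elided):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class UploadService {

    // REQUIRES_NEW forces a fresh transaction, so the write is no longer
    // silently dropped in the AFTER_COMMIT phase of the original one.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public UploadEntity saveUploadEntity(UploadEntity uploadEntity, String encodeJobId) {
        // ... persist as before; this now runs in its own transaction
        return uploadEntity;
    }
}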

spring entity concurrency control while persisting into database

I am trying to control concurrent access to the same object in a Spring + JPA configuration.
For example, I have an entity named A, and multiple processes update the same instance of A.
I am using a version field to control this, but here is the issue:
For example, two processes read the same entity (A) with version=1.
One process updates the entity and the version gets incremented.
When the second process tries to persist the object, an optimistic lock exception is thrown.
I am using spring services and repository to access the objects.
Could you please help me here?
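For reference, my mapping looks roughly like this (a minimal sketch; the field names are illustrative). JPA bumps the @Version column on every update and rejects writes made against a stale version:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class A {

    @Id
    private Long id;

    // Incremented on every update; a stale value at flush time raises an
    // optimistic-locking failure for the losing process.
    @Version
    private long version;
}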
What's the problem then? That's how it's supposed to work.
You can catch the JpaOptimisticLockingFailureException and then decide what to do from there.
This, for example, would give a validation error message on a Spring MVC form:
...
if (!bindingResult.hasErrors()) {
    try {
        fooRepository.save(foo);
    } catch (JpaOptimisticLockingFailureException exp) {
        bindingResult.reject("", "This record was modified by another user. Try refreshing the page.");
    }
}
...

Does Spring's PlatformTransactionManager require transactions to be committed in a specific order?

I am looking to retrofit our existing transaction API to use Spring's PlatformTransactionManager, such that Spring will manage our transactions. I chained my DataSources as follows:
DataSourceTransactionManager -> LazyConnectionDataSourceProxy -> dbcp.PoolingDataSource -> OracleDataSource
In experimenting with the DataSourceTransactionManager, I have found that where PROPAGATION_REQUIRES_NEW is used, Spring's transaction management seems to require that transactions be committed/rolled back in LIFO fashion, i.e. you must commit/roll back the most recently created transaction first.
Example:
@Test
public void testSpringTxns() {
    // start a new txn
    TransactionStatus txnAStatus = dataSourceTxnManager.getTransaction(propagationRequiresNewDefinition); // specifies PROPAGATION_REQUIRES_NEW
    Connection connectionA = DataSourceUtils.getConnection(dataSourceTxnManager.getDataSource());
    // start another new txn
    TransactionStatus txnBStatus = dataSourceTxnManager.getTransaction(propagationRequiresNewDefinition);
    Connection connectionB = DataSourceUtils.getConnection(dataSourceTxnManager.getDataSource());
    assertNotSame(connectionA, connectionB);
    try {
        //... do stuff using connectionA
        //... do other stuff using connectionB
    } finally {
        dataSourceTxnManager.commit(txnAStatus);
        dataSourceTxnManager.commit(txnBStatus); // results in java.lang.IllegalStateException: Cannot deactivate transaction synchronization - not active
    }
}
Sadly, this doesn't fit at all well with our current transaction API, which allows you to create transactions, represented by Java objects, and commit them in any order.
My question:
Am I right in thinking that this LIFO behaviour is fundamental to Spring’s transaction management (even for completely separate transactions)? Or is there a way to tweak its behaviour such that the above test will pass?
I know the proper way would be to use annotations, AOP, etc. but at present our code is not Spring-managed, so it is not really an option for us.
Thanks!
Yes, I have met the same problem when using Spring:
java.lang.IllegalStateException: Cannot deactivate transaction synchronization - not active.
As described above, Spring's transaction management requires that the transactions be committed/rolled back in LIFO fashion (stack behavior). Once I committed in that order, the problem disappeared.
Thanks.
Yes, I found this same behavior in my own application. Only one transaction is "active" at a time, and when you commit/roll back the current transaction, the next active transaction is the most recently started one (LIFO/stack behavior). I wasn't able to find any way to control this; it seems to be built into the Spring Framework.
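In practice that means reordering the commits in the test above so that the most recently started transaction completes first (same names as the test; only the finally block changes):

try {
    //... do stuff using connectionA
    //... do other stuff using connectionB
} finally {
    // LIFO order: commit the most recently started transaction first.
    dataSourceTxnManager.commit(txnBStatus);
    dataSourceTxnManager.commit(txnAStatus); // now succeeds
}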
