Can I assume the prePersist event and the persist operation (related to that event) are always performed as one atomic operation?
You may not assume that. Persisting an entity fires the prePersist event, but the actual insertion is deferred until you flush the EntityManager. This means there is a window for a race condition: process #1 can issue its INSERT query (i.e. flush its EntityManager) after process #2 has persisted an entity but before process #2's flush is executed.
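For illustration, here is a minimal sketch of that ordering using JPA-style lifecycle annotations (the Customer entity and the save helper are hypothetical, and depending on your stack the package may be jakarta.persistence rather than javax.persistence):

```java
import java.time.Instant;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.PrePersist;

@Entity
public class Customer {

    @Id
    @GeneratedValue
    private Long id;

    private Instant createdAt;

    // Fires when persist() is called on this entity, not when the row is inserted.
    @PrePersist
    void onPrePersist() {
        this.createdAt = Instant.now();
    }

    // The INSERT itself is only issued at flush (or commit) time.
    public static void save(EntityManager em, Customer customer) {
        em.persist(customer); // prePersist runs here, but no SQL is sent yet
        // ... another process could flush its own INSERT in this gap ...
        em.flush();           // only now is this INSERT statement executed
    }
}
```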
I am using @Cacheable in Spring Boot 2.0 with Ehcache, with sync=true.
I understand that if we set sync=true, all threads wait until one thread computes and caches the value by executing the method annotated with @Cacheable.
What happens if there is an exception in that method? Do the other threads keep waiting or is the lock released?
The idea of the @Cacheable annotation is that you use it to mark methods whose return values will be stored in the cache. Each time the method is called, Spring caches its return value so that the next time the method is executed with the same parameters, the result can be obtained directly from the cache without executing the method again. Spring caches the return value of a method as a key-value pair; the value is the return result of the method.
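To make this concrete, here is a small hedged sketch of that key-value behaviour (ProductService, findProductName and the "products" cache name are made up for this example):

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ProductService {

    // The cache key is derived from the method parameters (here: id);
    // the cached value is the method's return value.
    @Cacheable("products")
    public String findProductName(long id) {
        // Executed only on a cache miss; later calls with the same id
        // return the cached result without running this body.
        return loadNameFromDatabase(id);
    }

    private String loadNameFromDatabase(long id) {
        // Placeholder for an expensive lookup (JPA, JDBC, remote call, ...).
        return "product-" + id;
    }
}
```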
Now, coming to your question, let's first understand what synchronized caching is.
Synchronized Caching
In a multi-threaded environment, certain operations might be concurrently invoked for the same argument (typically on startup). By default, the cache abstraction does not lock anything, and the same value may be computed several times, defeating the purpose of caching. For those particular cases, you can use the sync attribute to instruct the underlying cache provider to lock the cache entry while the value is being computed. As a result, only one thread is busy computing the value, while the others are blocked until the entry is updated in the cache.
The sole purpose of the sync attribute is that only one thread builds the cache entry while the others consume it. If an exception occurs during execution of the method, the thread that acquired the lock exits without putting anything in the cache and releases the lock. The next thread then gets its chance to acquire the lock, because nothing is in the cache yet; if that thread also hits an exception, the next one gets its turn, and so on until one thread successfully caches the value for those parameters.
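A minimal sketch of sync = true under these rules (RateService, currentRate and the "rates" cache name are hypothetical; the exception behaviour is noted in the comments):

```java
import java.math.BigDecimal;

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class RateService {

    // sync = true: on a concurrent cache miss only one thread executes the
    // method; the others block until the entry is stored in the cache.
    @Cacheable(cacheNames = "rates", sync = true)
    public BigDecimal currentRate(String currency) {
        // If this call throws, nothing is cached and the lock is released,
        // so the next waiting thread executes the method itself.
        return fetchRateFromRemoteSystem(currency);
    }

    private BigDecimal fetchRateFromRemoteSystem(String currency) {
        // Placeholder for a slow / failure-prone remote call.
        return BigDecimal.ONE;
    }
}
```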
We are using microservices, CQRS, and an event store with the Node.js cqrs-domain package. Everything works like a charm, and the typical flow goes like this:
REST->2. Service->3. Command validation->4. Command->5. aggregate->6. event->7. eventstore(transactional Data)->8. returns aggregate with aggregate ID-> 9. store in microservice local DB(essentially the read DB)-> 10. Publish Event to the Queue
The problem with the flow above is that the transactional save (i.e. persistence to the event store) and the storage to the microservice's read DB happen in different transaction contexts. If there is any failure at step 9, how should I handle the event which has already been written to the event store and the aggregate which has already been updated?
Any suggestions would be highly appreciated.
The problem with the flow above is that the transactional save (i.e. persistence to the event store) and the storage to the microservice's read DB happen in different transaction contexts. If there is any failure at step 9, how should I handle the event which has already been written to the event store and the aggregate which has already been updated?
You retry it later.
The "book of record" is the event store. The downstream views (the "published events", the read models) are derived from the book of record. They are typically behind the book of record in time (eventual consistency) and are not typically synchronized with each other.
So you might have, at some point in time, 105 events written to the book of record, but only 100 published to the queue, and a representation in your service database constructed from only 98.
Updating a view is typically done in one of two ways. You can, of course, start with a brand new representation and replay all of the events into it as part of each update. Alternatively, you track in the metadata of the view how far along in the event history you have already gotten, and use that information to determine where the next read of the event history begins.
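Here is a rough sketch of the second approach, keeping a checkpoint in the view's metadata. All of the type names (EventStoreClient, ReadModel, Event) are hypothetical stand-ins for whatever event-store client and view storage you actually use:

```java
import java.util.List;

public class ProjectionUpdater {

    private final EventStoreClient eventStore;
    private final ReadModel readModel;

    public ProjectionUpdater(EventStoreClient eventStore, ReadModel readModel) {
        this.eventStore = eventStore;
        this.readModel = readModel;
    }

    // Bring the read model up to date with the book of record.
    public void catchUp() {
        long lastApplied = readModel.lastAppliedPosition();          // e.g. 98
        List<Event> newEvents = eventStore.readFrom(lastApplied + 1); // e.g. events 99..105
        for (Event event : newEvents) {
            readModel.apply(event);                                   // update the view
            readModel.saveLastAppliedPosition(event.position());      // advance the checkpoint
        }
    }

    interface EventStoreClient { List<Event> readFrom(long position); }
    interface ReadModel {
        long lastAppliedPosition();
        void apply(Event event);
        void saveLastAppliedPosition(long position);
    }
    interface Event { long position(); }
}
```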
Inside your event store, you could track whether read-side replication was successful.
As soon as step 9 succeeds, you can flag the event as 'replicated'.
That way, you could introduce a component watching for unreplicated events and triggering step 9 again. You could also track whether the replication has failed multiple times.
Updating the read side (step 9) and flagging an event as replicated should happen consistently. You could use a saga pattern here.
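A hedged sketch of such a watcher, assuming the event store can be queried for unreplicated events and can record a 'replicated' flag (all names below are invented for illustration):

```java
import java.util.List;

public class ReplicationRetrier {

    private final EventStore eventStore;    // assumed to track a 'replicated' flag
    private final ReadSideUpdater readSide; // performs step 9

    public ReplicationRetrier(EventStore eventStore, ReadSideUpdater readSide) {
        this.eventStore = eventStore;
        this.readSide = readSide;
    }

    // Invoke periodically, e.g. from a scheduler.
    public void retryUnreplicated() {
        List<StoredEvent> pending = eventStore.findUnreplicated();
        for (StoredEvent event : pending) {
            try {
                readSide.update(event);                     // step 9
                eventStore.markReplicated(event.id());      // flag as 'replicated'
            } catch (Exception e) {
                eventStore.recordFailedAttempt(event.id()); // for alerting / give-up logic
            }
        }
    }

    interface EventStore {
        List<StoredEvent> findUnreplicated();
        void markReplicated(String eventId);
        void recordFailedAttempt(String eventId);
    }
    interface ReadSideUpdater { void update(StoredEvent event); }
    interface StoredEvent { String id(); }
}
```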
I think I have now understood it better.
The aggregate would still be created. The answer is that all validations for any type of consistency should happen before the aggregate is constructed; it is only failures beyond the purview of the code, such as a failure while updating the read-side DB of the microservice, that need to be handled afterwards.
So in the ideal case the aggregate is created, but the associated event remains marked as undispatched until all the read-side dependencies are updated; if they are not, it stays undispatched and can be handled separately.
The event store will still have all the events, and eventual consistency is maintained this way.
I am implementing a NiFi processor and have a couple of clarifications to make with respect to best practices:
1. session.getProvenanceReporter().modify(...) - Should we emit the event immediately after every session.transfer()?
2. session.commit() - The documentation says that, after performing operations on flow files, either commit or rollback can be invoked.
Developer guide: https://nifi.apache.org/docs/nifi-docs/html/developer-guide.html#process_session
The question is: what do I lose by not invoking these methods explicitly?
1) Yes, typically the provenance event is emitted after transferring the flow file.
2) It depends on whether you are extending AbstractProcessor or AbstractSessionFactoryProcessor. AbstractProcessor will call commit or rollback for you, so you don't need to; AbstractSessionFactoryProcessor requires you to call them appropriately.
If you are extending AbstractSessionFactoryProcessor and never call commit, eventually that session will get garbage collected and rollback will be called, and all the operations performed by that session will be rolled back.
There is also an annotation, @SupportsBatching, which can be placed on a processor. When this annotation is present, the UI shows a slider on the processor's scheduling tab that indicates how many milliseconds' worth of framework operations like commit() can be batched together behind the scenes for increased throughput. If latency is more important, then leaving the slider at 0 milliseconds is appropriate, but the key here is that the user gets to decide this when building the flow and configuring the processor.
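To tie 1) and 2) together, here is a hedged sketch of a processor extending AbstractProcessor (the processor name and attribute are invented; it transfers the flow file, reports a provenance event, and relies on the framework to commit the session):

```java
import java.util.Collections;
import java.util.Set;

import org.apache.nifi.annotation.behavior.SupportsBatching;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;
import org.apache.nifi.processor.exception.ProcessException;

// Lets the user trade latency for throughput via the scheduling-tab slider.
@SupportsBatching
public class StampAttributeProcessor extends AbstractProcessor {

    static final Relationship REL_SUCCESS = new Relationship.Builder()
            .name("success")
            .build();

    @Override
    public Set<Relationship> getRelationships() {
        return Collections.singleton(REL_SUCCESS);
    }

    @Override
    public void onTrigger(final ProcessContext context, final ProcessSession session) throws ProcessException {
        FlowFile flowFile = session.get();
        if (flowFile == null) {
            return;
        }

        // Modify the flow file (here: just stamp an attribute on it).
        flowFile = session.putAttribute(flowFile, "stamped", "true");

        // Transfer the flow file, then emit the matching provenance event.
        session.transfer(flowFile, REL_SUCCESS);
        session.getProvenanceReporter().modifyAttributes(flowFile);

        // No explicit session.commit(): AbstractProcessor commits (or rolls back
        // on exception) after onTrigger returns.
    }
}
```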
If we add afterUpdate event code on a domain object (e.g. our Session object) in Grails:
Is it called after the update has been committed, or after it is flushed, or other?
If the update failed (e.g. constraint, or optimistic lock fail), will the after event still be called?
Will afterUpdate be in the same transaction as the update?
Will the commit of the service method which did the update wait till the afterUpdate method is finished, and, if so, is there any way round this (except creating a new thread)?
We have a number of instances of our Grails application running on multiple Tomcats. Each has a session-expiry Quartz job to expire our sessions (domain objects).
The job basically says getAllSession with lastUpdated > xxx, then loops through them calling session.close(Session.Expired)
Session.close just sets the session.status to Expired.
In theory, the same session could be closed twice at the same time by the job running on two servers, but this doesn't matter (yet).
Now we want to automatically cash out customers with expired (or killed) sessions. The cashout process entails making calls to external payment systems, which can take up to 1 minute and may fail (but should not stop the session from being closed, or 'lock' other sessions).
If we used afterUpdate on the Session domain object, we could check the session.status and fire off the cashout, either outside of the transaction or in another thread (e.g. using Executors). But this is very risky, as we don't know the exact behaviour. E.g. if the update failed, would it still try to execute the afterUpdate call? We assume so, as we are guessing the commit won't happen till later.
The other unknown is how calling save and commit works with optimistic locking. E.g. if you call save(flush=true) and you don't get an error back, are you guaranteed that the commit will work (barring the DB falling over), or are there scenarios where this can fail?
Is it called after the update has been committed, or after it is flushed, or other?
After the update has been made, but the transaction has not been committed yet. So if an exception occurs inside afterUpdate, the transaction will be rolled back.
If the update failed (e.g. constraint, or optimistic lock fail), will the after event still be called?
No
Will afterUpdate be in the same transaction as the update?
Yes
Will the commit of the service method which did the update wait till the afterUpdate method is finished, and, if so, is there any way round this (except creating a new thread)?
There is no easy way around it.
I have a program that uses a handler, a businessObject and a DAO for program execution. Control flows from the handler to the businessObject and finally to the DAO for database operations.
For example, my program performs 3 operations: insertEmployee(), updateEmployee() and deleteEmployee(), each method being called one after the other from the handler. Once insertEmployee() is called, control goes back to the handler, which then calls updateEmployee(); control returns to the handler again, which then calls deleteEmployee().
Problem statement: Suppose my first two DAO methods succeed, control is back in the handler, and the next method it requests from the DAO is deleteEmployee(), which then hits some kind of exception. In that case the earlier insertEmployee() and updateEmployee() operations should be rolled back as well, not just deleteEmployee(). It should behave as if this program never ran in the system.
Can anyone point me to how to achieve this with Spring JdbcTemplate transaction management?
You should read about transaction propagation, in particular PROPAGATION_REQUIRED.
More info:
http://docs.spring.io/spring/docs/current/spring-framework-reference/html/transaction.html#tx-propagation
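As a hedged sketch of that idea, the handler could delegate to a single service method running in one transaction, so that all three operations commit or roll back together (class and method names below are hypothetical):

```java
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class EmployeeService {

    private final JdbcTemplate jdbcTemplate;

    public EmployeeService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // REQUIRED (the default) makes all three operations join one transaction,
    // so a runtime exception in deleteEmployee() rolls back the insert and
    // update as well.
    @Transactional(propagation = Propagation.REQUIRED)
    public void processEmployee(long id, String name) {
        insertEmployee(id, name);
        updateEmployee(id, name + " (updated)");
        deleteEmployee(id); // if this throws, everything above is rolled back
    }

    private void insertEmployee(long id, String name) {
        jdbcTemplate.update("INSERT INTO employee (id, name) VALUES (?, ?)", id, name);
    }

    private void updateEmployee(long id, String name) {
        jdbcTemplate.update("UPDATE employee SET name = ? WHERE id = ?", name, id);
    }

    private void deleteEmployee(long id) {
        jdbcTemplate.update("DELETE FROM employee WHERE id = ?", id);
    }
}
```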