Is it possible to use transactions when Neo4j is used as a standalone server? I am calling methods on my Spring repositories, and presumably each of them is executed as a separate transaction, but I would like to merge them into one. Is it possible to do this?
SDN doesn't yet support remote transactions (they only work with the transactional endpoint and Cypher).
So the option you have to speed your operation up is to move the processing of the SDN entities into the server and expose a domain-level REST API to your clients (either with Jersey or SD-REST).
see: http://inserpio.wordpress.com/2014/04/30/extending-the-neo4j-server-with-spring-data-neo4j/
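For illustration, a rough sketch of the server-side part, assuming hypothetical Customer/Order entities and SDN repositories: once the logic runs inside the server, a plain @Transactional service method wraps several repository calls in one transaction, and that method can then be exposed as a single REST call via Jersey or SD-REST.

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    // Hypothetical SDN repositories living inside the server.
    private final CustomerRepository customerRepository;
    private final OrderRepository orderRepository;

    public OrderService(CustomerRepository customerRepository, OrderRepository orderRepository) {
        this.customerRepository = customerRepository;
        this.orderRepository = orderRepository;
    }

    // Both saves run in one local transaction: exactly what separate
    // remote repository calls cannot give you.
    @Transactional
    public Order placeOrder(Customer customer, Order order) {
        customerRepository.save(customer);
        order.setCustomer(customer);
        return orderRepository.save(order);
    }
}
```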
We are rewriting a legacy app using microservices. Each microservice has its own DB. Certain API calls require calling another microservice and persisting data into both DBs. How can distributed transaction management be implemented effectively in this case?
Since we have not completely migrated to the new microservices environment, we still write data back to the old monolith. For this, when a microservice endpoint is called, we call a monolith service from the microservice API to write back the same data. How do we deal with the same problem in this case as well?
Thanks in advance.
There are various distributed transaction frameworks, usually included and maintained as part of heavyweight application servers like JBoss and WebLogic.
The standard usually used by such services is Jakarta Transactions (JTA; formerly Java Transaction API).
Tomcat and Spring don't support distributed transactions out of the box. You can add this functionality using a third-party framework like Atomikos (just googled it, I've never used it).
But remember, a microservice with JTA is not "micro" anymore :-)
Here is a small overview of the available technologies and possible workarounds:
https://www.baeldung.com/transactions-across-microservices
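If you do go the Atomikos route, the wiring is roughly as below. This is an untested sketch: the class names come from the Atomikos and Spring APIs, and the timeout value is arbitrary.

```java
import com.atomikos.icatch.jta.UserTransactionImp;
import com.atomikos.icatch.jta.UserTransactionManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.jta.JtaTransactionManager;

@Configuration
public class JtaConfig {

    // Atomikos supplies the JTA implementation that Tomcat itself lacks.
    @Bean(initMethod = "init", destroyMethod = "close")
    public UserTransactionManager atomikosTransactionManager() {
        UserTransactionManager utm = new UserTransactionManager();
        utm.setForceShutdown(false);
        return utm;
    }

    // Spring's JtaTransactionManager only delegates to whatever JTA
    // implementation it is given; here, Atomikos.
    @Bean
    public JtaTransactionManager transactionManager() throws Exception {
        UserTransactionImp userTransaction = new UserTransactionImp();
        userTransaction.setTransactionTimeout(300);
        return new JtaTransactionManager(userTransaction, atomikosTransactionManager());
    }
}
```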
If you can afford to write to the legacy system later (i.e. allow some latency between updating the microservice and the legacy system), you can use the outbox pattern.
Essentially that means that you write to the microservice database in a transactional way, both to the tables you usually write to and to an additional "outbox" table of changes to apply, and then have a separate process that reads that table and updates the legacy system.
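A minimal sketch with Spring's JdbcTemplate, assuming hypothetical orders/outbox tables and a hypothetical LegacyClient for the monolith (scheduling must be enabled elsewhere, e.g. via @EnableScheduling):

```java
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical client for the write-back to the monolith.
interface LegacyClient { void writeBack(String orderId, String payload); }

@Service
public class OrderService {

    private final JdbcTemplate jdbc;
    private final LegacyClient legacy;

    public OrderService(JdbcTemplate jdbc, LegacyClient legacy) {
        this.jdbc = jdbc;
        this.legacy = legacy;
    }

    // Business row and outbox entry commit (or roll back) together,
    // because they go to the same database in one local transaction.
    @Transactional
    public void createOrder(String id, String payload) {
        jdbc.update("INSERT INTO orders (id, payload) VALUES (?, ?)", id, payload);
        jdbc.update("INSERT INTO outbox (order_id, payload) VALUES (?, ?)", id, payload);
    }

    // Separate relay: drain the outbox and write back to the legacy system.
    // Deleting only after a successful call gives at-least-once delivery,
    // so the legacy write should be idempotent.
    @Scheduled(fixedDelay = 5000)
    public void drainOutbox() {
        for (Map<String, Object> row : jdbc.queryForList("SELECT order_id, payload FROM outbox")) {
            String orderId = (String) row.get("order_id");
            legacy.writeBack(orderId, (String) row.get("payload"));
            jdbc.update("DELETE FROM outbox WHERE order_id = ?", orderId);
        }
    }
}
```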
You can also achieve something similar with a change data capture mechanism on the DB used in the microservice(s).
Check out this answer on "Why is 2-phase commit not suitable for a microservices architecture?": https://stackoverflow.com/a/55258458/3794744
I am trying to understand transaction management in Spring, and I have some doubts.
I have read a bit about transaction management in the EJB world, which can be CMT or BMT. For CMT, as per the documentation, it is the application server (e.g. JBoss) that manages the transaction.
Now, coming to Spring transaction management, and considering the use of a web container only (Apache Tomcat), how does this work?
Does Spring have its own transaction management capable of handling both local transactions and global transactions (which work with two-phase commit)? Does the actual support need to come from the underlying container (in this case Apache Tomcat), or is support from the framework sufficient?
I am not clear how all these pieces fit together.
Can anyone help me understand this?
Spring doesn't include any transaction capability of its own; it only provides ways to connect to transaction functionality provided by the container or by standalone libraries.
If you run your application on Tomcat and don't provide any transaction manager library like Bitronix, then you get only local JDBC transactions, i.e. those of the database connection itself.
When you read the bullet points at https://docs.spring.io/spring/docs/4.2.x/spring-framework-reference/html/transaction.html, notice that it says Spring provides abstractions: it offers access through its own APIs and uses AOP to make transactions non-intrusive, but it does not implement any transactional functionality itself. It facilitates gluing things together, which is the main thing Spring does.
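To make the abstraction point concrete, here is a minimal sketch of the usual Tomcat setup: Spring's @Transactional support is wired to a DataSourceTransactionManager, which just drives commit/rollback on the local JDBC connection. Swapping in a JtaTransactionManager (backed by a JTA provider) would give the same annotated code global-transaction semantics without changing it.

```java
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
public class TxConfig {

    // On plain Tomcat this is the usual choice: local transactions driven
    // directly on the JDBC connection, no JTA and no 2PC involved.
    @Bean
    public PlatformTransactionManager transactionManager(DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }
}
```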
In our project we have a requirement to connect to IBM IMS and fetch data. Many of the existing applications have done this through code tightly coupled with IMS.
In one of the applications we are using Spring's CCI support, providing the CCIConnectionFactory to the JdbcTemplate and using it in a relational (kind of) manner.
However, we are building a new application which does not use the Spring framework; we are making use of Java CDI and its aspects. But to integrate it with IMS through CCI, Spring looks like the best option. Does anyone have experience with these CCI connections? Which approach do you think is best? And are there any other Java frameworks you are familiar with, apart from Spring's support?
Appreciate your help and input.
I had the same question five months ago, and it was very hard to collect information about JCA. If your project runs on WildFly or JBoss, take a look at my inbound-ra-example project. First you must know what kind of resource adapter (RA) you need, inbound or outbound. In short, an inbound RA acts as a server for external data and sends the data to a message-driven bean, while an outbound RA is called from an EJB via a connection factory and initiates the connection to the external information system. Read the readme.md of my example project. An inbound RA is much more difficult than an outbound RA. Generate the skeleton of your RA with the IronJacamar code generator; I described the process in my example project.
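For the outbound case, the calling side is plain JCA/CCI and looks roughly like this (the JNDI name is hypothetical, and the actual Interaction/Record handling depends on your RA):

```java
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.resource.cci.Connection;
import javax.resource.cci.ConnectionFactory;
import javax.resource.cci.Interaction;

@Stateless
public class ImsClientBean {

    // The container injects the factory registered by the deployed outbound RA.
    @Resource(lookup = "java:/eis/ImsConnectionFactory")
    private ConnectionFactory connectionFactory;

    public void callIms() throws Exception {
        Connection connection = connectionFactory.getConnection();
        try {
            Interaction interaction = connection.createInteraction();
            // Execute an RA-specific InteractionSpec and Record here.
            interaction.close();
        } finally {
            connection.close();
        }
    }
}
```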
I'm writing an application that has to communicate across three different platforms. Two expose their DB via a REST API (no JDBC driver) and one is a native JDBC connection (e.g. Derby, MySQL, Oracle).
My problem is that I have no way of assuring any ACID guarantees when updating data, given that all three should be updated at the same time.
I've tried reading up on Spring XA, but it seems that both 2PC and 1PC require some form of transactional backend. Given that two of my three destinations are REST APIs, I don't have any transactions there, just a save/update option.
Are there techniques I can use to ensure that the three sources are synchronized and that I don't run into inconsistent states if a write ever fails (i.e. a REST endpoint is unavailable, etc.)?
A transaction example would be:
Read from DB
Write to REST-1 endpoint
Update DB
Write to REST-2 endpoint
Is there some form of XA I could employ to wrap everything in such a way that I can be assured of consistency?
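To illustrate where this flow breaks (all client interfaces below are hypothetical stand-ins for the three platforms):

```java
// Hypothetical stand-ins for the three platforms.
interface Database { String read(String id); void update(String id, String value); }
interface RestClient { void write(String value); } // plain save/update, no transactions

class Synchronizer {
    private final Database db;
    private final RestClient rest1;
    private final RestClient rest2;

    Synchronizer(Database db, RestClient rest1, RestClient rest2) {
        this.db = db;
        this.rest1 = rest1;
        this.rest2 = rest2;
    }

    void synchronize(String id, String value) {
        String current = db.read(id); // 1. read from DB
        rest1.write(current);         // 2. write to REST-1 endpoint
        db.update(id, value);         // 3. update DB
        rest2.write(value);           // 4. write to REST-2 endpoint
        // A failure after step 2 leaves REST-1 updated while the DB and
        // REST-2 are not, and the REST call cannot be rolled back;
        // this is exactly why XA/2PC needs transactional participants.
    }
}
```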
I would like to create an example (code) using Spring in which business logic is distributed across different servers like JBoss or GlassFish while still running under one transaction. First of all, is this possible in Spring? I know EJB has this option. Is there a similar technique in Spring as well? I am looking for a synchronous communication approach, not asynchronous message-oriented middleware. Any help/pointers appreciated.
Thanks
Prakash
Spring has support for RMI and also provides its own remoting mechanism, HttpInvoker, but according to the docs they don't provide any remote transaction propagation.
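For illustration, this is roughly what an HttpInvoker client proxy looks like (the service interface and URL are hypothetical). Any @Transactional block around calls through this proxy covers only local resources, since each invocation is a plain HTTP request and the transaction context does not travel with it.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.remoting.httpinvoker.HttpInvokerProxyFactoryBean;

@Configuration
public class RemotingConfig {

    // Calls through this proxy look like local method calls, but each one
    // is an independent HTTP request: no transaction context is propagated.
    // AccountService is a hypothetical shared service interface.
    @Bean
    public HttpInvokerProxyFactoryBean accountService() {
        HttpInvokerProxyFactoryBean proxy = new HttpInvokerProxyFactoryBean();
        proxy.setServiceUrl("http://remote-host:8080/remoting/AccountService");
        proxy.setServiceInterface(AccountService.class);
        return proxy;
    }
}
```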
Similar questions:
Spring Distributed Transaction Involving RMI calls possible?
Transaction propagation in multiple servlet context with multiple data source