I have a scenario where a commit or a shallow delete might need to be reverted if a subsequent operation fails. This would be particularly useful in scenarios involving Mongo, where there is no atomicity across collections. Is this possible with Javers?
There is no 'rollback' option for now. It may be implemented in the future, but there could be some limitations.
You could annotate your method with the @Transactional annotation; if an exception occurs, the database updates that occurred within that method are rolled back, which should include the Javers tables.
https://www.logicbig.com/tutorials/spring-framework/spring-data-access-with-jdbc/transactional-roll-back.html
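A minimal sketch of that first option (Order, OrderRepository and the service name are assumptions, not from the original posts); the Javers commit joins the surrounding transaction and is rolled back together with the domain update:

    import org.javers.core.Javers;
    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    @Service
    public class OrderService {

        private final OrderRepository orderRepository; // assumed repository
        private final Javers javers;

        public OrderService(OrderRepository orderRepository, Javers javers) {
            this.orderRepository = orderRepository;
            this.javers = javers;
        }

        @Transactional
        public void updateOrder(Order order) {
            orderRepository.save(order);
            javers.commit("author", order);
            riskyFollowUpOperation(order); // if this throws, the save and the
                                           // Javers commit both roll back
        }

        private void riskyFollowUpOperation(Order order) {
            // placeholder for the follow-up step that may fail
        }
    }

Note that this relies on a SQL-backed Javers repository; with Mongo, the lack of cross-collection atomicity mentioned in the question still applies.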
Alternatively, you could use Spring AOP to perform a custom rollback and then delete the committed records manually.
How to call a custom rollback method in Spring Transaction Management?
Hope one of these options helps you.
If you need to travel back in time, then instead of using Javers you should redesign your database around a functional programming idea called "persistent data structures". At its core: never modify any existing data; always create new versions of existing entities.
You can read about persistent data structures, for example, here:
https://medium.com/@arpitbhayani/copy-on-write-semantics-9538bbeb9f86
https://medium.com/@mmdGhanbari/persisting-a-persistent-data-structure-3f4cfd46036
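As a toy illustration of that idea (the CustomerVersion entity is invented for this sketch), rows are only ever inserted, never updated:

    import java.time.Instant;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;

    @Entity
    public class CustomerVersion {

        @Id
        @GeneratedValue
        private Long id;

        private Long customerId;  // stable identity shared by all versions
        private int version;      // monotonically increasing per customer
        private String name;
        private Instant validFrom;

        // every change inserts a new row instead of updating an old one,
        // so any past state can be reconstructed by version or timestamp
    }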
For a PUT request, do I always have to fetch the old data and apply only the changed fields in order to update the existing record? Is checking each field for changes the right way?
I do not know of any project that takes the effort to update only the fields that were actually changed.
Usually you would just overwrite all fields in your table with the new values, as this is the easiest and most reliable way of doing it.
Also consider that custom logic deciding what to update needs to be maintained and can have bugs. If you end up having a bug in that logic, you will most likely discover data consistency errors that might be unfixable.
Most likely, when you use Spring Boot, you will also use Spring Data JPA and Hibernate, which take care of mapping your objects to your database. In that case, Hibernate decides on the update strategy anyway.
If you are worried about data consistency and concurrent updates to the same record, I would recommend looking into optimistic locking, which is an easy way to handle that issue. It is very easy to set up: you just add a version column to your table.
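A minimal sketch of that setup (the Account entity is made up); in JPA, optimistic locking needs nothing more than a @Version attribute:

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Version;

    @Entity
    public class Account {

        @Id
        private Long id;

        private String owner;

        // Hibernate increments this on every update and appends
        // "where version = ?" to the UPDATE statement; if a concurrent
        // transaction already bumped it, zero rows match and an
        // OptimisticLockException is thrown instead of silently
        // overwriting the other update
        @Version
        private Long version;

        // getters and setters omitted for brevity
    }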
I have a Spring Boot application with persistence using Hibernate/JPA.
I am using transactions to manage my database persistence, and I am using the @Transactional annotation to mark the methods that should execute transactionally.
I have three main levels of transaction granularity when persisting:
Batches of entities to be persisted
Single entities to be persisted
Single database operations that persist an entity
Therefore, you can imagine that I have three levels of nested transactions in the whole persistence flow.
The interaction between levels 2 and 3 works transparently, as I want, because without specifying any propagation behaviour for the transaction the default is REQUIRED, and so the entire entity (level 2) is rolled back, since level 3 joins the transaction defined in level 2.
However, the problem is that I need the interaction between levels 1 and 2 to be slightly different. I need an entity to be rolled back individually if an error occurs, but I would not like the entire batch to be rolled back. That being said, I need to specify a propagation behaviour in the level 2 annotation, @Transactional(propagation = X), that meets these requirements.
I've tried REQUIRES_NEW, but that doesn't work because it commits some of the entities from level 2 even if the whole batch has to be rolled back, which can also happen.
The behaviour that seems to fit the description best is NESTED, but that is not accepted when using Spring with Hibernate JPA; see here for more information.
That last link offers alternatives to NESTED, but I would like to know whether NESTED would really have solved my problem, or whether another behaviour suits the job better.
I guess NESTED would roughly do what you want, but I would question whether it is really necessary. I don't know what you are trying to do or what the error condition is, but maybe you can get rid of the error condition altogether by using some kind of WHERE clause or an UPSERT statement: Hibernate Transactions and Concurrency Using attachDirty (saveOrUpdate)
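As a rough sketch of the UPSERT idea (assuming PostgreSQL syntax and an invented Item entity/repository), a single atomic statement per entity removes the failure case, so no per-entity rollback is needed:

    import org.springframework.data.jpa.repository.JpaRepository;
    import org.springframework.data.jpa.repository.Modifying;
    import org.springframework.data.jpa.repository.Query;
    import org.springframework.data.repository.query.Param;

    public interface ItemRepository extends JpaRepository<Item, Long> {

        // a conflicting row is updated instead of raising an error,
        // so a duplicate in the batch no longer aborts anything
        @Modifying
        @Query(value = "insert into item (code, payload) values (:code, :payload) "
                     + "on conflict (code) do update set payload = excluded.payload",
               nativeQuery = true)
        void upsert(@Param("code") String code, @Param("payload") String payload);
    }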
My problem.
I have a simple table, token. It has only a few attributes: id, token, username, version and an expire_date.
I have a REST service that will create or update a token. When a user requests a token, I check whether that user (by username) already has an entry; if yes, I simply update the expire_date and return, and if there is no entry I create a new one. The problem is that if I run a test with a few concurrent users (using a JMeter script) that call the REST service, Hibernate very quickly throws a StaleObjectStateException, because what happens is: thread A selects the row for the user, changes the expire_date and bumps the version; meanwhile thread B does the same, but actually manages to commit before thread A. When thread A then commits, Hibernate detects the version change, throws the exception and rolls back. All works as documented.
But what I would like to happen is that thread B waits for thread A to finish before doing its thing.
What is the best way to solve this? Should I use the Java concurrency package and implement locks myself? Or is it a better option to set a custom JPA isolation level?
Thanks
If you are using a JEE server, the EJB container will do it for you via @Singleton.
I think the best way is to use a JPA lock to acquire a row-level lock on the resource you are currently updating. Don't push yourself to implement row locking with Java concurrency on your own: it is much easier to lock the row containing user "john.doe" at the DBMS level than to find a way to lock that specific row with concurrency primitives in your code.
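A minimal sketch of that approach (TokenService, the Token entity and its accessors are assumed); the pessimistic write lock makes thread B block on the select until thread A commits:

    import java.time.Duration;
    import java.time.Instant;
    import javax.persistence.EntityManager;
    import javax.persistence.LockModeType;
    import javax.persistence.PersistenceContext;
    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    @Service
    public class TokenService {

        @PersistenceContext
        private EntityManager entityManager;

        @Transactional
        public Token refresh(String username) {
            // SELECT ... FOR UPDATE: a concurrent caller blocks here
            // until this transaction commits, so no version conflict
            Token token = entityManager
                    .createQuery("select t from Token t where t.username = :u", Token.class)
                    .setParameter("u", username)
                    .setLockMode(LockModeType.PESSIMISTIC_WRITE)
                    .getSingleResult();
            token.setExpireDate(Instant.now().plus(Duration.ofHours(1)));
            return token; // the create-if-absent branch is omitted here
        }
    }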
I have the following problem: I am working on a Spring Boot application that offers REST services and uses a relational (SQL) database via spring-data-jpa.
I have two REST services:
- an entity-creation service, which creates the child entity and the parent entity and associates them in the same transaction. When this service ends, the data is committed to the database.
- an entity-consultation service, which fetches the parent entity together with its children
These two services are annotated with the @Transactional annotation. In production it works well: I can create a parent entity with its children in one transaction (which is committed/ended), and fetch it later in another transaction.
The problem is when I want to create integration tests. My idea was to annotate each test with the @Transactional annotation and roll back after each test. This way I keep my database clean between tests, and I don't have to generate the schema again or clean all the records in the database.
The integration test consists of creating a parent and its children and then reading them back, everything in one transaction (as the test is annotated with @Transactional). When reading the entity previously created in the same transaction, I can get the parent entity, but the children are not fetched (null value). I am not sure I understand the transaction mechanism very well: I thought that, with @Transactional on the test method, the services (annotated with @Transactional) invoked by this test would detect and use the same transaction opened by the test method (the propagation is configured as REQUIRED). Hence, as the transaction uses the same EntityManager, it should be able to return the relation between the parent entity and its children created previously in the same transaction, even if the data has not been committed to the database. The strange thing is that it retrieves the parent entity (which has not yet been committed to the database), but not its children. Is my understanding of the transaction concept correct? If not, could someone explain what I am missing?
Also, if someone has done something similar, could they please explain how?
My code is quite complex. I first want to know whether I understand correctly how transactions are managed and whether someone has already done something similar. If it is really required, I can send more information about my implementation (how the transaction manager and the entity manager are initialized, the JPA entities, the services, etc.).
By binding the EntityManager in my test and calling its flush method between the creation and the reading, the read operation works well: I get the parent entity with its children. But then the data is written to the database during the creation in order to be read later during the read operation, and I don't want the transaction to be committed, as I need my test to work on an empty database. My misunderstanding was not so much about the transaction mechanism, but more about the entity manager: it does not keep the entities it created and their relations as a cache...
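For reference, a minimal sketch of that flush-based workaround (CreationService, ConsultationService and Parent are invented names); note that flushed statements are still rolled back with the test transaction, so the database stays empty afterwards:

    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;
    import org.junit.jupiter.api.Test;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.transaction.annotation.Transactional;

    @SpringBootTest
    @Transactional // the test transaction is rolled back after each test
    class ParentChildIntegrationTest {

        @PersistenceContext
        private EntityManager entityManager;

        @Autowired
        private CreationService creationService;         // assumed service
        @Autowired
        private ConsultationService consultationService; // assumed service

        @Test
        void createThenReadInOneTransaction() {
            Long parentId = creationService.createParentWithChildren();
            // push the pending INSERTs to the database and clear the
            // first-level cache, so the read maps fresh rows, children included
            entityManager.flush();
            entityManager.clear();
            Parent parent = consultationService.getParentWithChildren(parentId);
        }
    }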
This post helped me:
Issue with @Transactional annotations in Spring JPA
As a final word, I am thinking about running an SQL script before each test to empty my database.
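If that route is taken, Spring's @Sql test annotation does exactly this (the script name is an assumption):

    import org.junit.jupiter.api.Test;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.test.context.jdbc.Sql;

    @SpringBootTest
    class CleanDatabaseTest {

        @Test
        @Sql(scripts = "/cleanup.sql", // assumed script that deletes all rows
             executionPhase = Sql.ExecutionPhase.BEFORE_TEST_METHOD)
        void worksOnAnEmptyDatabase() {
            // the schema is intact but all records are gone at this point
        }
    }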
I have a table and two databases that contain the same table, but one is a symlink of the other, and only reads are permitted on this table.
I have mapped the table to Java using Hibernate, and I use Spring to set the entity manager's data source to one of the two databases based on some input criteria.
I issue only read-only operations (selects) when I am connected to the second database, but it seems Hibernate tries to flush something back to the database, and it fails, saying updates are not allowed on this view.
How do I disable this update only for the second datasource and keep it normal for the first one?
Update:
Looking at the stack trace, the flush seems to be started here:
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1027)
at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:365)
at org.hibernate.ejb.AbstractEntityManagerImpl$1.beforeCompletion(AbstractEntityManagerImpl.java:504)
... 55 more
Is this related to the hibernate.transaction.flush_before_completion property? Can I set it to false for the second data source?
Most probably your entities become "dirty" the moment they are loaded from the database, and Hibernate thinks it needs to store the changes. This happens if your accessors (get and set methods) do not return the exact same value or reference that was set by Hibernate.
In our code, this happened with lists: developers created new list instances because they didn't like the type they got in the setter.
If you don't want to change the code, change the mapping to field access.
You can also prevent Hibernate from storing changes by setting the flush mode to NEVER (called MANUAL in newer versions) on the session, but this only hides the real problem, which will still occur in other situations and will lead to unnecessary updates.
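A minimal sketch of that session-level switch (the helper class is invented; the unwrap call assumes a JPA 2.0-capable Hibernate):

    import javax.persistence.EntityManager;
    import org.hibernate.FlushMode;
    import org.hibernate.Session;

    public final class ReadOnlySessions {

        private ReadOnlySessions() {}

        // call once after obtaining the EntityManager for the read-only source
        public static void makeReadOnly(EntityManager entityManager) {
            Session session = entityManager.unwrap(Session.class);
            // the session no longer flushes on query or commit;
            // nothing is written unless flush() is called explicitly
            session.setFlushMode(FlushMode.MANUAL);
        }
    }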
First you need to determine whether this is DDL or DML. If you don't know, I recommend setting hibernate.show_sql=true to capture the offending statement.
If it is DDL, then it is most likely Hibernate updating the schema for you, and you would want to additionally configure the hibernate.hbm2ddl.auto setting to be either "update" or "none", depending on whether you are using the actual db or the symlinked (read-only) one, respectively. You can use "validate" instead of "none", too.
If it is DML, I would first determine whether your code is for some reason making a change to an instance that is still attached to an active Hibernate Session. If so, a subsequent read may cause a flush of these changes without the object ever being explicitly saved (Grails?). If this is the case, consider evicting the instance causing the flush (or using transport objects instead).
Are you perhaps using any aspects or Hibernate lifecycle events to provide auditing of the objects? This, too, could cause access to a read-only instance to result in an insert or update being run.
It may turn out that you need to provide alternative mappings for the offending class should the updatability of a field come into play, even though the code is doing everything exactly as you'd like (this is unlikely ;) ). If you are in an all-annotation world, this may be tricky. If you are working with hbm.xml, providing an alternative mapping is easier.