Can I commit a portion of an @Transactional sequence? - spring-boot

I have a Spring Boot application, and have a webservice where a user can POST a model of a CollegeCourse instance which includes links between that class and the Students who are taking it. (The data is used to store rows in the association table, since those classes have a many-to-many relationship.) This works fine.
Say the enrollment in the course changes. The user expects to send the same JSON structure to the webservice handling the PUT call. The code took the easy path for updating: first finding and deleting all the existing CollegeCourse-Student links, then saving the new links (rather than iterating through the two lists and matching up items). This part also worked as given.
We then added a uniqueness constraint to the CollegeCourse-Student association table, so that said table could not have a single Student linked to one CollegeCourse multiple times. This crashed and burned. A debugging session revealed the culprit: the delete of the CollegeCourse-Student records did not actually remove them from the database until the transaction completed. Thus, when we tried to add the new links back in, any holdovers from the original POST conflicted with what was already in the database.
The service handling the PUT is preceded by a @Transactional annotation. I tried moving the code that finds and deletes the associations into a separate method, and tried both @Transactional(propagation=Propagation.REQUIRED) and REQUIRES_NEW, but neither prevented failing the uniqueness constraint. I also added @EnableTransactionManagement to my Application class, same story. Is there a simple solution to my dilemma?

Without knowing exactly what your repository looks like, have you tried to do a manual flush on the entity manager after the deletions?
Something along the lines of
entityManager.flush();
Or, if you're using a Spring Data JPA repository, you should be able to define a flush method in that interface and call it.
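A rough sketch of how that could look inside the @Transactional service method, assuming the link repository extends JpaRepository (which already exposes flush()); the entity and repository names below are made up for illustration:

import java.util.List;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class EnrollmentService {

    private final CourseStudentLinkRepository linkRepository; // hypothetical JpaRepository

    public EnrollmentService(CourseStudentLinkRepository linkRepository) {
        this.linkRepository = linkRepository;
    }

    @Transactional
    public void replaceEnrollment(Long courseId, List<CourseStudentLink> newLinks) {
        // Delete the existing CollegeCourse-Student links...
        linkRepository.deleteByCourseId(courseId); // hypothetical derived delete method

        // ...and push the pending DELETEs to the database right now, so the
        // uniqueness constraint no longer sees the old rows when we re-insert.
        linkRepository.flush();

        linkRepository.saveAll(newLinks);
    }
}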

Related

Why is my spring boot app creating another record in the database instead of merging them together?

Here's the setup: Spring Boot, Spring Data JPA, Hibernate.
I have one entity, a Review, that has a property that has a many-to-one relationship with another entity, a vehicle. The vehicle entity conversely has a one-to-many relationship with the review entity.
To create the review entity via a POST endpoint, I give the vehicle entity as one of the properties in the JSON.
This works fine: not only is the review record added to the database, but it goes ahead and creates the vehicle entity as well. The only issue is that when the vehicle already exists, instead of making that connection, it creates another record with the exact same information, so I have a duplicate vehicle entity.
Is this because I should be handling the creation of the vehicle entity on my own instead of relying on hibernate? Am I just missing some annotation I'm not aware of?
Originally I was getting an error about flushing, so I added CascadeType.ALL to the Review class's 'vehicle' property and that fixed that problem. I tried changing the cascade type, as it seems to be relevant somehow, but it either breaks the server or doesn't work at all.
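For illustration, the kind of lookup-before-attach the question is hinting at could look roughly like this; the finder, DTO and repositories are hypothetical, not part of the original setup:

@Transactional
public Review createReview(ReviewRequest request) { // ReviewRequest is a hypothetical DTO
    // Reuse the existing vehicle when one matches, otherwise create it once.
    Vehicle vehicle = vehicleRepository.findByVin(request.getVin())        // hypothetical finder
            .orElseGet(() -> vehicleRepository.save(request.toVehicle()));

    Review review = new Review();
    review.setVehicle(vehicle); // attach the managed Vehicle; no cascade-created duplicate
    review.setText(request.getText());
    return reviewRepository.save(review);
}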

Panache: Insert or ignore child

I want to persist an entity that has a @OneToMany relationship to a child entity. I'm using Quarkus 1.13.1 with Quarkus Panache.
Example
public class User {
    private List<Item> items;

    @OneToMany(cascade = CascadeType.ALL)
    public List<Item> getItems() {
        return items;
    }
}
If I want to persist a user (user.persist()) with a few items that already exist in the item table, then I get of course a "duplicate key" exception. So far so good.
But I was wondering if there is a decent way to skip/ignore an insert if an item already exists in the items table.
Of course, I could query the database to check if the child value exists, but this seems somehow tedious and bloats the code with data checks, so I was wondering if there was some annotation or other shortcut to handle this.
A persist operation should be used exclusively to create (store) new objects in the database; it also makes the Java objects managed by Hibernate until the Session is closed.
It's really important to know which objects are managed and which are not, and to distinguish the ones that are newly made persistent from the ones that merely represent an existing row in the database.
To this end, it would indeed be better to load the existing Items first; if you know for sure which ones already exist in the DB, you can use a lazy proxy to represent them and put those in the list before persisting the User.
If you don't know which Items already exist in the database, then you will indeed have to query the database first. There is no shortcut for this operation; I guess we could explore some improvements, but generally automating such things is tricky.
I would suggest implementing the checks explicitly so you have full control over the strategy. It might be a good idea to make Item a cached entity so you can implement safe validations without performance drawbacks.
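A minimal sketch of such an explicit check with Panache, assuming Item is an active-record PanacheEntity with a name field to look it up by (the query field, setter and service class are assumptions):

import java.util.ArrayList;
import java.util.List;
import javax.enterprise.context.ApplicationScoped;
import javax.transaction.Transactional;

@ApplicationScoped
public class UserService {

    @Transactional
    public void saveUser(User user, List<String> itemNames) {
        List<Item> items = new ArrayList<>();
        for (String name : itemNames) {
            // Reuse the row that already exists, otherwise create a new Item.
            Item item = Item.find("name", name).firstResult(); // Panache query, field assumed
            if (item == null) {
                item = new Item();
                item.setName(name); // setter assumed
            }
            items.add(item);
        }
        user.setItems(items); // setter assumed alongside the getter from the question
        user.persist();
    }
}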

Should the referencing object be updated if the referenced object is saved?

Imagine the following situation: We have two database tables, Tenant and House. Tenant references House with a @ManyToOne mapping.
Tenant tenant = tenantRepository.findById(id).orElseThrow();
House house = tenant.getHouse();
house.setPrice(340_000);
house = houseRepository.save(house); // A new instance is returned by the CrudRepository::save() method
// TODO Is this necessary for further use?
tenant.setHouse(house);
// Further use...
tenant.setAge(23);
tenant = tenantRepository.save(tenant); // Otherwise it is saved with the old reference where house's ID can be null?
...
Is it necessary to update the Tenant with the new reference of House?
EDIT: For clarification, you may assume the entities were loaded (and are therefore in a managed state) immediately before the above code. And because this "transaction" is part of a Spring @RequestMapping function, the transaction will be implicitly committed at the end of it.
EDIT 2: The question is not whether I should or not save the house at all in the beginning to avoid this situation. It is about understanding better how the objects are managed.
--- But you may also tell me: should I just update everything first and save at the end, as a common practice?
The critical question is: are house and tenant already managed entities?
If yes (because they got loaded in the same transaction that is still running) all the House instances involved are the same and you don't need to set the house in tenant.
But in that case, you don't even need to call save anyway.
If they are detached instances, then yes, you need to call tenant.setHouse(house).
Without it, you will get either an exception or overwrite the changes to house, depending on your cascade setting on the relation.
The preferred way to do all this is:
Within a single transaction:
Load the entities
Manipulate them as desired
Commit the transaction
JPA will track the changes to the entities and flush them to the database before actually committing the database transaction.
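So, inside one transaction, the code from the question can shrink to something like this (a service method is assumed here so that @Transactional applies):

@Transactional
public void updateTenantAndHouse(Long tenantId) {
    Tenant tenant = tenantRepository.findById(tenantId).orElseThrow();
    House house = tenant.getHouse(); // the same managed instance the persistence context tracks

    house.setPrice(340_000);
    tenant.setAge(23);

    // No save() calls needed: both entities are managed, so dirty checking
    // flushes these changes when the transaction commits.
}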

Is it a good idea to implement validation in Entity Framework POCO entities in database first?

It seems that the best place to implement validation is as close as possible to the database, so when I use Entity Framework the nearest objects are the entities, in my case the POCO entities.
The reason is that if I want to reuse these POCO entities, the validation is implemented in the POCO objects themselves, so there is less chance of inserting wrong data into the database.
This also prevents someone from inserting incorrect data into the database by creating another application, or by simply not implementing the validation. So it is more secure.
One way to do that is to use partial classes that extend the POCO entities, implement the IValidatableObject interface, and return a list of ValidationResult.
But other way is the following. I have a common assembly that has the following:
One interface that declares the methods that the repositories need to implement.
The POCO entities that will be used by the repositories.
One class with utilities, such as methods to copy entities and to validate the data of the entities.
Then I can create many repositories that use different versions of EF or another technology, and all of them use the common assembly. These repositories implement the validation using the methods in the common library.
In this case I implement the validation only once. The only problem is that the repositories need to call the methods to validate the data.
But there are advantages to this approach, from my point of view. For example, I can validate the data of the entities depending on the type of operation: if I am adding a new record and the primary key is an autonumeric, I can throw an exception when the ID is not 0; and if I try to delete a record whose ID is 0, I don't need to send the command to the database at all.
So this second solution keeps the validation as close as possible to the database, because it is used in the repository, which is the element that accesses the database. But it has the problem that if some developer creates a new repository and does not use the validation methods, I can end up with incorrect data in the database.
So my question is whether the best option is to use validation in partial classes, or a common library with the validation implemented in the repositories, which is what the users will really use.
Thanks.
OK - phew, big question. My opinion is that the APPLICATION DOMAIN of the application is the boss of everything. The database is just an add-on service. So, the application domain should ultimately validate ALL objects that are being SENT somewhere. There is no need to validate objects coming out of the DB, because they were validated going in.
As an example, what if you were creating some object that needed to be sent off to a web service and it needed validation? Let's say it was never going near the database or the repositories. Once the DOMAIN business objects have been validated, they can then be sent for persistence or anywhere else.
Another thing to consider is what you mean by validation. Does it mean the datatypes are correct? Does it mean the business object is valid? Does it mean the business object is valid in the given context? It could mean all or only some of these things.
As an example, what if your system allows users to partially update records (common with very long input forms). The business object may only become valid when ALL the required data is captured, but the database allows persistence of "partial" data. In other words, you can save the business object to the database although it is not valid for further processing yet. etc etc....

How can I update an object/entity that is not completely filled out?

I have an entity with several fields, but on one view I want to edit only one of them. For example, I have a user entity with id, name, address, username, pwd, and so on. On one of the views I want to be able to change the pwd (and only the pwd), so the view only knows the id and sends the pwd. I want to update my entity without loading the rest of the fields (there are many, many more), changing the one pwd field, and then saving them ALL back to the database. Has anyone tried this, or does anyone know where I can look? All help is greatly appreciated.
Thx in advance.
PS
I should have given more detail. I'm using Hibernate, and Roo is creating my entities. I agree that each view should have its own entity; the problem is, I'm only building controllers, everything else was done before. We were using finders from the service layer, but we wanted to use some other finders that did not seem to be accessible through the service layer, so the decision was made to blow away the service layer and just interact with the entities directly (through the finders). UserService.update(user) is no longer an option. I have recently found a User.persist() and a User.merge(): does merge update all the fields on the object or only the ones that are not null? And if I want one to now be null, how would it know the difference?
Which technologies besides Spring are you using?
First of all, have separate DTOs for every view, stripped down to only what's needed: one DTO for id+password, another for address data, etc. Remember that DTOs can inherit from each other, so you can avoid duplication. And never pass business/ORM entities directly to the view. It is too risky; leaks in some frameworks might allow users to modify fields you haven't intended.
After the DTO comes back from the view (most web frameworks work like this) simply load the whole entity and fill only the fields that are present in the DTO.
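For the password view that could be as small as the sketch below; the DTO, repository and the way you hash the password are placeholders, not part of the original setup:

public class ChangePasswordDto {   // DTO stripped to what this view needs
    private Long id;
    private String pwd;
    // getters and setters omitted
}

@Transactional
public void changePassword(ChangePasswordDto dto) {
    // Load the whole entity, then fill only the field the view actually sent.
    User user = userRepository.findById(dto.getId()).orElseThrow(); // repository assumed
    user.setPwd(dto.getPwd()); // hash/encode the password in real code
    // The entity stays managed, so the change is flushed on commit.
}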
But it seems like it's the persistence that is troubling you. Assuming you are using Hibernate, you can take advantage of the dynamic-update setting:
dynamic-update (optional - defaults to false): specifies that UPDATE SQL should be generated at runtime and can contain only those columns whose values have changed.
In this case you are still loading the whole entity into memory, but Hibernate will generate as small UPDATE as possible, including only modified (dirty) fields.
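If the mapping is annotation-based rather than hbm.xml, recent Hibernate versions expose the same switch as an annotation, roughly:

import javax.persistence.Entity;
import org.hibernate.annotations.DynamicUpdate;

@Entity
@DynamicUpdate // generated UPDATE statements include only the columns that actually changed
public class User {
    // id, name, address, username, pwd, ...
}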
Another approach is to have separate entities for each use-case/view. So you'll have an entity with only id and password, an entity with only address data, etc. All of them are mapped to the same table, but to different subsets of columns. This easily becomes a mess and should be treated as a last resort.
See the hibernate reference here
For persist()
persist() makes a transient instance persistent. However, it does not guarantee that the identifier value will be assigned to the persistent instance immediately; the assignment might happen at flush time. persist() also guarantees that it will not execute an INSERT statement if it is called outside of transaction boundaries. This is useful in long-running conversations with an extended Session/persistence context.
For merge
if there is a persistent instance with the same identifier currently associated with the session, copy the state of the given object onto the persistent instance
if there is no persistent instance currently associated with the session, try to load it from the database, or create a new persistent instance
the persistent instance is returned
the given instance does not become associated with the session, it remains detached
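So, to the follow-up about nulls: merge() copies the whole state of the given object, null fields included. A tiny illustration with the entity from the question (setters assumed):

User detached = new User();
detached.setId(42L);           // identifier of an existing row
detached.setPwd("newSecret");  // the only field we filled in

// merge() loads (or finds) the persistent instance and copies ALL fields from
// 'detached' onto it, so name, address, username, ... would be overwritten with null here.
User managed = entityManager.merge(detached);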
persist() and merge() have nothing to do with whether the columns are modified or not. Use dynamic-update, as @Tomasz Nurkiewicz has suggested, for saving only the modified columns. Use dynamic-insert for inserting only non-null columns.
Some JPA providers such as EclipseLink support fetch groups. So you can load a partial instance and update it.
See,
http://wiki.eclipse.org/EclipseLink/Examples/JPA/AttributeGroup
