Double instances in database after using EntityManager.merge() in @Transactional method - spring

I am new to Spring. My application, developed with Spring Roo, has a cron job that downloads some files every day and updates a database.
The update is done after downloading and parsing the files, using merge().
An entity class Dataset has a list called resources; after the download I do:
dataset.setResources(resources);
dataset.merge();
and dataset.merge() does the following:
@Transactional
public Dataset Dataset.merge() {
    if (this.entityManager == null) this.entityManager = entityManager();
    Dataset merged = this.entityManager.merge(this);
    this.entityManager.flush();
    return merged;
}
I expected that calling dataset.setResources(resources) would overwrite the field resources, and that the database entries would be overwritten as well.
But I get double entries in the database: every resource appears twice, with different (incremental) IDs.
How can I make my application do updates instead of inserts? A naive solution would be to manually delete the old resources and then call merge(); is this the way, or is there a smarter solution?

This situation occurs when you use Hibernate as the persistence engine and your entities have a version field.
Normally the ID field is all we need to merge a detached object with its persistent state in the database, but Hibernate also takes the version field into account: if you don't set it (it is null), Hibernate discards the value of the ID field and creates a new object with a new ID.
To know whether you are affected by this strange feature of Hibernate, set a value in the version field; if an exception is thrown, you have got it. In that case the best solution is for the parsed data to contain the right value of the version field. Other options are to disable version checking (see the Hibernate reference guide) or to load the persistent state before merging.
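For the last option, a minimal sketch of loading the persistent state before merging (getId(), getVersion() and setVersion() are assumed accessors on Dataset, not code from the question):

@Transactional
public Dataset mergeKeepingVersion(Dataset detached) {
    // Load the live row so we know the current version value
    Dataset current = entityManager.find(Dataset.class, detached.getId());
    if (current != null) {
        // Copy it onto the detached object so Hibernate merges
        // instead of inserting a new row
        detached.setVersion(current.getVersion());
    }
    return entityManager.merge(detached);
}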

Related

Retrieve the deleted record - Spring JPA

I am working on a Spring application.
We have a specific requirement: when we get a specific event, we want to look it up in the DB. If we find the record in the DB, we delete it, create another event using its details, and trigger that event.
Now my concern is:
I do not want to use two different calls, one to find the record and another to delete it.
I am looking for a way to delete the record using a custom query and simultaneously fetch the deleted record.
This avoids making two different calls to the DB, one for the fetch and another for the delete.
What I have found on the internet so far:
We can run a custom deletion query using the @Modifying annotation, but this does not allow us to return the deleted object as a whole: methods annotated with @Modifying can only return void or int.
Spring also provides derived removeBy or deleteBy queries, but these again return only an int, not the complete record object that is being deleted.
I am specifically looking for something like:
@Transactional
FulfilmentAcknowledgement deleteByEntityIdAndItemIdAndFulfilmentIdAndType(
        @Param(value = "entityId") String entityId,
        @Param(value = "itemId") String itemId,
        @Param(value = "fulfilmentId") Long fulfilmentId,
        @Param(value = "type") String type);
Is it possible to get the deleted record from DB and make the above call work?
I could not find a way to retrieve the actual object being deleted, either with a custom @Query or with named queries. The only method that returns the object being deleted is deleteById or removeById, but for that we need the primary key of the record being deleted, and it is not always possible to have that primary key at hand.
So far, the best way I have found to do this is:
Fetch the record from the DB using the custom query.
Delete the record from the DB by calling deleteById. You could actually delete it using any method at this point, since we no longer need the object it returns; I still chose deleteById because my DB is indexed on the primary key, so deleting by it is fastest.
We can use Reactor or an ExecutorService to run the two steps asynchronously and in parallel; a sketch of the approach follows.
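A minimal sketch of that fetch-then-delete approach, assuming a Spring Data repository with a matching derived finder (the entity, repository, and method names are invented for illustration):

import java.util.Optional;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class FulfilmentAcknowledgementService {

    private final FulfilmentAcknowledgementRepository repository;

    public FulfilmentAcknowledgementService(FulfilmentAcknowledgementRepository repository) {
        this.repository = repository;
    }

    // Step 1: fetch via the custom/derived query.
    // Step 2: delete by primary key; both run in one transaction.
    @Transactional
    public Optional<FulfilmentAcknowledgement> fetchAndDelete(
            String entityId, String itemId, Long fulfilmentId, String type) {
        Optional<FulfilmentAcknowledgement> found = repository
                .findByEntityIdAndItemIdAndFulfilmentIdAndType(entityId, itemId, fulfilmentId, type);
        found.ifPresent(ack -> repository.deleteById(ack.getId()));
        return found;
    }
}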

JPA @Version behavior when data is changed from an unmanaged connection

I enabled @Version on the Customer table and ran the tests below.
@Test
public void actionsTest1() throws InterruptedException {
    CustomerState t = customerStateRepository.findById(1L).get();
    Thread.sleep(20000);
    t.setInvoiceNumber("1");
    customerStateRepository.save(t);
}
While actionsTest1 is sleeping, I run actionsTest2, which updates the invoice number to 2.
@Test
public void actionsTest2() throws InterruptedException {
    CustomerState t = customerStateRepository.findById(1L).get();
    t.setInvoiceNumber("2");
    customerStateRepository.save(t);
}
When actionsTest1 returns from sleeping, it tries to update too and gets an ObjectOptimisticLockingFailureException.
This works as expected.
But if I run actionsTest1 and, while it is sleeping, I open a SQL terminal and do a raw update of
update customer
set invoice_number='3' where id=1
When actionsTest1 returns from sleeping, its versioning mechanism doesn't catch the case and updates the value back to 1.
Is that expected behavior? Does versioning work only with connections managed by JPA?
It works as expected. If you do an update manually, you have to update the version column as well.
When you use JPA with @Version, JPA increments the version column for you.
To get the result you expect, you have to write the statement like this:
update customer set invoice_number='3', version=version+1 where id=1
Is that expected behavior?
Yes.
Does versioning work only with connections managed by JPA?
No, it also works with any other way of updating your data. But everything that updates the data has to adhere to the rules of optimistic locking:
increment the version column whenever performing any update;
(only required when the other process also wants to detect concurrent updates) on every update, check that the version number hasn't changed since the data on which the update is based was loaded (a combined SQL sketch follows).
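Put together, a non-JPA client that wants to both respect and detect optimistic locking would issue something like this (42 stands for whatever version value was read when the data was loaded; an illustrative sketch, not code from the question):

-- Bump the version and re-check it in one statement;
-- zero affected rows means a concurrent update happened.
update customer
set invoice_number = '3',
    version = version + 1
where id = 1
  and version = 42;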
Hibernate automatically increments the value of the @Version mapped column in your database.
When you fetch an entity record, Hibernate keeps a copy of the data along with the value of @Version. When performing a merge or update operation, Hibernate checks whether the current value of the version column still matches the copy of the entity fetched earlier.
If the value matches, the entity is not dirty (not updated by any other transaction); otherwise an exception is thrown.
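For reference, a minimal @Version mapping matching the tests above (field names assumed; use jakarta.persistence instead of javax.persistence on newer stacks):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class CustomerState {

    @Id
    private Long id;

    private String invoiceNumber;

    // Hibernate increments this on every update and appends
    // "and version = ?" to the UPDATE's WHERE clause.
    @Version
    private Long version;

    // getters and setters omitted
}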

Can I commit a portion of an @Transactional sequence?

I have a Spring Boot application, and have a webservice where a user can POST a model of a CollegeCourse instance which includes links between that class and the Students who are taking it. (The data is used to store rows in the association table, since those classes have a many-to-many relationship.) This works fine.
Say the enrollment in the course changes. The user expects to send the same JSON structure to the webservice handling the PUT call. The code took the easy path for updating: first finding and deleting all the existing CollegeCourse-Student links, then saving the new links (rather than iterating through the two lists and matching up items). This part also worked as given.
We then added a uniqueness constraint to the CollegeCourse-Student association table, so that said table could not have a single Student linked to one CollegeCourse multiple times. This crashed and burned. A debugging session revealed the culprit: the delete of the CollegeCourse-Student records did not actually remove them from the database until the transaction completed. Thus, when we tried to add the new links back in, any holdovers from the original POST conflicted with what was already in the database.
The service handling the PUT is preceded by a @Transactional annotation. I tried moving the code that finds and deletes the associations into a separate method, and tried both @Transactional(propagation=Propagation.REQUIRED) and REQUIRES_NEW, but neither prevented the uniqueness constraint violation. I also added @EnableTransactionManagement to my Application class - same story. Is there a simple solution to my dilemma?
Without knowing exactly what your repository looks like, have you tried to do a manual flush on the entity manager after the deletions?
Something along the lines of
entityManager.flush();
Or, if you're using a Spring Data JPA repository, you should be able to define a flush method in that interface and call it.
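A minimal sketch of that flush-between-delete-and-insert idea, assuming the links live in a CourseStudentLinkRepository extending JpaRepository (the repository, entity, and derived delete method are invented names; flush() itself is part of JpaRepository):

import java.util.List;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class EnrollmentService {

    private final CourseStudentLinkRepository linkRepository;

    public EnrollmentService(CourseStudentLinkRepository linkRepository) {
        this.linkRepository = linkRepository;
    }

    @Transactional
    public void replaceEnrollment(Long courseId, List<CourseStudentLink> newLinks) {
        linkRepository.deleteByCourseId(courseId); // hypothetical derived delete
        // Push the pending DELETEs to the database before inserting,
        // so the unique constraint no longer sees the old rows.
        linkRepository.flush();
        linkRepository.saveAll(newLinks);
    }
}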

spring hibernate merge example

I have a Spring 4 and Hibernate 5 back-end RESTful web-service. This works great and is all unit tested. The front-end is a SmartGWT 5.0p application which uses DataSources, not RestDataSources, to communicate with the back-end.
The front-end SmartGWT 5.0p application uses a ListGrid to edit data, and the ListGrid is attached to a DataSource. Only the edited data in the ListGrid is sent back, not the entire row. If I could, I'd like to be able to send back the entire ListGrid row, with both the edited and the unedited data. If I could get an answer to that, that would be great.
Or, the alternative is that we let SmartGWT send back only the part of the data that was edited. This comes to the back-end as JSON and is turned into an Object/Entity. The controller/end-point is not in a session yet, but then we call a method in the service layer which is transactional.
So the question becomes: we have a detached object, in a session, in the service-layer method. We have a detached object with a database primary key ... but it also has 1 or 2 fields of updated data, and now we want to merge that data back to the database. We can't call an update with this entity because, with the partial data, some of the fields would be set to null. In reality, we want to pull the item back from the database, update the edited fields, and then write the data back to the database.
I could do this all manually ... but do I have to? I expect there is a more graceful way to handle this.
Thanks!
This is only a partial answer:
I can make SmartGWT combine old values and new values with a link I found here:
SMARTGWT DataSource (GWT-RPC-DATASource) LISTGRID
And the code is as follows:
private ListGridRecord getEditedRecord(DSRequest request) {
    // Retrieve the values as they were before the edit
    JavaScriptObject oldValues = request.getAttributeAsJavaScriptObject("oldValues");
    // Create a new record for combining old values with the changes
    ListGridRecord newRecord = new ListGridRecord();
    // Copy properties from the old record
    JSOHelper.apply(oldValues, newRecord.getJsObj());
    // Retrieve the changed values
    JavaScriptObject data = request.getData();
    // Apply the changes
    JSOHelper.apply(data, newRecord.getJsObj());
    return newRecord;
}
The resulting JSON string contains both the newly updated fields and the old fields. When this JSON string is sent back to the RESTful back-end, the Jackson mapper creates an entity and fills in all the fields; this is essentially a detached entity. It's an object outside of the session, but its id exists in the database.
Because I have a complete entity, I can call an update, and that works.
Problem solved.
BUT, I'd still like to find an elegant solution for taking a detached entity, getting the original record from the database, merging the two objects, and then finally updating the record.
In the Spring service layer, which has a transaction and creates the session, the manual process I'd like to avoid would look something like this:
1) get the id from the updatedEntity
2) get that attachedEntity from the database using that id
3) compare the fields, e.g.
if (updatedEntity.getUpdateField1() != null) {
    attachedEntity.setField1(updatedEntity.getUpdateField1());
}
If I had to do step 3 for multiple fields, that's not very elegant.
4) update the attachedEntity in the database, because it now has the updated fields
So, again, an elegant solution to this would be helpful; one possibility is sketched below. Thanks!
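One way to avoid writing step 3 by hand for every field is a null-aware property copy. A sketch using Spring's BeanUtils and BeanWrapperImpl (NullAwareCopy is an invented helper name; note it cannot distinguish "field not sent" from "field deliberately set to null"):

import java.beans.PropertyDescriptor;
import java.util.HashSet;
import java.util.Set;

import org.springframework.beans.BeanUtils;
import org.springframework.beans.BeanWrapper;
import org.springframework.beans.BeanWrapperImpl;

public final class NullAwareCopy {

    // Collect the names of all properties that are null on the source object
    private static String[] nullPropertyNames(Object source) {
        BeanWrapper wrapper = new BeanWrapperImpl(source);
        Set<String> nulls = new HashSet<>();
        for (PropertyDescriptor pd : wrapper.getPropertyDescriptors()) {
            if (wrapper.getPropertyValue(pd.getName()) == null) {
                nulls.add(pd.getName());
            }
        }
        return nulls.toArray(new String[0]);
    }

    // Copy only the non-null properties of the detached entity
    // onto the attached (managed) entity.
    public static void copyNonNullProperties(Object updatedEntity, Object attachedEntity) {
        BeanUtils.copyProperties(updatedEntity, attachedEntity, nullPropertyNames(updatedEntity));
    }
}

Typical use: load the attachedEntity by the detached entity's id, call NullAwareCopy.copyNonNullProperties(updatedEntity, attachedEntity), and then save the attached entity as usual.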

How to use Hazelcast with MapStore

I am using Hazelcast as the caching solution for my application.
My application has a few inserts and updates to the database, and these need to be synced to the cache as well.
I want to use the MapStore functionality so that when I do IMap.put(), Hazelcast takes care of persisting the object in the underlying DB and also updates its cache.
In the overridden store implementation, I want to call my DAO in the following way to persist the data:
public void store(Long key, Employee value) {
    log.info("Storing Data for Employee {} in Database using DataStore ", value);
    Long employeeId = employeeDao.create(value);
    value.setId(employeeId);
}
There are a few issues, listed below:
1) In the put call, I want to use "key" as the "employeeId", but this is generated only after the record is inserted into the DB. So how do I put into the cache when I don't have the id? I want Hazelcast to use the "id" generated as part of the store method call (or some other way) as the key for my object.
IMap.put(key, new Employee("name_of_Employee", "age_of_employee"))
2) The MapStore implementation's store method returns void, so I cannot return the id generated for this object to the client. How can I achieve this?
I tried using MapEntryListeners on the map, but the entry-added callback does not return the new object. I also added the PostProcessingMapStore interface to my MapStore but could not get the new value back to the client.
Please advise.
You have 2 options:
1) Generate the employeeId outside of the database. You can use the IdGenerator from Hazelcast to do this.
2) If you must let the database generate the id, then you need to put the Employee in the cache manually AFTER it has been stored in the database.
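A sketch of option 1, using the Hazelcast 3.x-style IdGenerator API (newer releases replace it with FlakeIdGenerator; the map and generator names, and the Employee constructor, are made up for illustration):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.core.IdGenerator;

public class EmployeePutExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Cluster-wide unique ids, known before the DB insert happens
        IdGenerator idGenerator = hz.getIdGenerator("employee-ids");
        long id = idGenerator.newId();

        Employee employee = new Employee("name_of_Employee", 30);
        employee.setId(id);

        // MapStore.store(key, value) now receives this id as the key,
        // and the DAO can insert the row with the id already set.
        IMap<Long, Employee> employees = hz.getMap("employees");
        employees.put(id, employee);
    }
}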
