I'm facing a peculiar problem...
I need to update an entity, but I don't know when it is really updated.
My method is:
@Override
@Transactional(isolation = Isolation.SERIALIZABLE)
public void lightOn(int idInterruttore) {
    Interruttore interruttore = dao.findById(idInterruttore);
    String inputPin = interruttore.getInputPin();
    String pinName = interruttore.getRelePin();
    GpioController gpio = interruttore.getGpio();
    GpioPinDigitalOutput rele = gpio.provisionDigitalOutputPin(RaspiPin.getPinByName(pinName));
    try {
        DateTime date = new DateTime();
        Date now = date.toDate();
        int i = 1;
        while (getInput(inputPin, gpio) != 1) {
            if (i > 1) {
                logger.debug(String.format("Try n %s", i));
            }
            pushButton(rele);
            Thread.sleep(1000);
            i++;
        }
        dao.updateInterruttore(idInterruttore, now, true);
    } catch (GpioPinExistsException | InterruptedException gpe) {
        logger.error("GPIO già esistente", gpe);
    } finally {
        gpio.unprovisionPin(rele);
    }
    logger.debug(String.format("After the update status should be true and it's %s",
            interruttore.isStato()));
}
updateInterruttore is shown below (I used this form to be sure the commit happens right after the update). I use the lock option because this method can be called multiple times, but only the first call must perform the update:
@Override
public void updateInterruttore(int idInterruttore, Date dateTime, boolean stato) {
    Session session = getSession();
    Transaction tx = session.beginTransaction();
    String update = "update Interruttore i set i.dateTime = :dateTime, i.stato = :stato where idInterruttore = :idInterruttore";
    session.createQuery(update).setTimestamp("dateTime", dateTime).setBoolean("stato", stato)
            .setInteger("idInterruttore", idInterruttore).setLockOptions(LockOptions.UPGRADE).executeUpdate();
    tx.commit();
}
}
Well... when I run the update, the log tells me:
After the update status should be true and it's false
This happens only the first time I call the method; the second time, interruttore.isStato() is correctly true.
Why does this happen?
This happens because you're updating the database directly with the update statement. In this case Hibernate does not automatically refresh an entity that is already loaded in the session. If you reload the entity after the call to dao.updateInterruttore, you should get the updated data.
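For illustration, a minimal sketch of reloading the entity after the bulk update, assuming the Hibernate Session (or the DAO's findById) is reachable from lightOn; the names are taken from the question:

dao.updateInterruttore(idInterruttore, now, true);

// Option 1: refresh the managed instance so its state is re-read from the database.
session.refresh(interruttore);

// Option 2: evict the stale instance and load it again.
session.evict(interruttore);
Interruttore reloaded = dao.findById(idInterruttore);

logger.debug(String.format("After the update status should be true and it's %s", reloaded.isStato()));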
Two notes:
1) You are using a query to apply the update. In that case, Hibernate will not update the entity that is already in the session: the update shows up in the DB, but the loaded instance stays stale. Unless you modify the entity itself and call session.save(interruttore), the in-memory entity will not be updated. Furthermore, I don't understand why you don't just update the entity and save it via session.save().
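As a rough sketch of that suggestion (the setter names setDateTime/setStato are assumed from the field names in the question):

// Modify the loaded entity and let Hibernate write the change,
// instead of issuing a separate bulk update query.
interruttore.setDateTime(now);
interruttore.setStato(true);
session.save(interruttore); // if the instance is still managed, the flush/commit alone would also persist the change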
2) You are annotating the service method with @Transactional (assuming that's Spring's annotation). If you use JTA, your tx.commit() will have no effect; once the method completes, the transaction is committed (or rolled back if the method throws an exception). If you are not using JTA, then get rid of @Transactional and manage the transaction in your DAO method, as you are doing, but that is considered bad practice.
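For comparison, a minimal sketch of the DAO method when the transaction is left entirely to Spring's @Transactional on the service method, assuming the DAO has a SessionFactory whose getCurrentSession() is bound to the Spring-managed transaction:

@Override
public void updateInterruttore(int idInterruttore, Date dateTime, boolean stato) {
    // No beginTransaction()/commit() here: the transaction opened for the
    // @Transactional service method commits when that method returns.
    Session session = sessionFactory.getCurrentSession();
    String update = "update Interruttore i set i.dateTime = :dateTime, i.stato = :stato where idInterruttore = :idInterruttore";
    session.createQuery(update)
            .setTimestamp("dateTime", dateTime)
            .setBoolean("stato", stato)
            .setInteger("idInterruttore", idInterruttore)
            .executeUpdate();
}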
Related
I've just run into a problem where I'm trying to select (repo.findById(id)) an object from the database using its id while it's being updated (via Hibernate's onPreUpdate and onPostUpdate methods of the PreUpdateEventListener and PostUpdateEventListener interfaces), but it's throwing a NullPointerException.
Perhaps it's easier if I explain it this way:
I have an object with status "PENDING"; if it's being changed to "CONFIRMED", I want to check what the previous status was in the onPreUpdate method by doing this:
@Override
public boolean onPreUpdate(PreUpdateEvent preUpdateEvent)
{
    final Object entity = preUpdateEvent.getEntity();
    if (entity instanceof Status)
    {
        Status status = (Status) entity;
        // Method below throws NullPointerException
        Status statusFromRepo = statusRepo.findByStatusId(status.getStatusId());
    }
    return false;
}
But since statusRepo is already updating this object in the database, am I not able to do anything to get the object BEFORE it's updated? PreUpdateEvent contains the already "updated" version which is going to be saved in the database.
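For illustration, Hibernate's PreUpdateEvent also exposes the pre-update snapshot directly, which avoids querying the repository from inside the listener. A minimal sketch, where the property name "status" is an assumption and the property order is taken from the persister:

@Override
public boolean onPreUpdate(PreUpdateEvent event)
{
    // Old property values, in the order given by the persister's property names.
    final String[] propertyNames = event.getPersister().getPropertyNames();
    final Object[] oldState = event.getOldState();

    for (int i = 0; i < propertyNames.length; i++) {
        if ("status".equals(propertyNames[i])) {     // assumed property name
            Object previousStatus = oldState[i];      // value before the update
            // compare previousStatus with the new value here
        }
    }
    return false;
}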
It's better if this issue is explained with an example. I have a database table Person with an int column named [Num]. It has only one record, with an initial value of Num == 0.
In my PersonAppService.cs, there are the following 2 methods:
public void TestIncrementA()
{
    using (var uow = _unitOfWorkManager.Begin(new UnitOfWorkOptions { IsolationLevel = IsolationLevel.RepeatableRead }))
    {
        var person = _personRepository.Get(1);
        person.Num += 1;
        Thread.Sleep(3000);
        uow.Complete();
    }
}
public void TestIncrementB()
{
    using (var uow = _unitOfWorkManager.Begin(new UnitOfWorkOptions { IsolationLevel = IsolationLevel.RepeatableRead }))
    {
        var person = _personRepository.Get(1);
        person.Num += 1;
        uow.Complete();
    }
}
The two methods are essentially the same, each incrementing the value of the column Num by one, except that the first method delays the thread.
Now in the console of a web browser, I run the following commands in quick succession.
abp.services.app.person.testIncrementA();
abp.services.app.person.testIncrementB();
I would expect the value of Num in my database to be 2 now, since it's been incremented twice. However, it's only 1.
It's clear the RepeatableRead UoW is not locking the row properly. I have also tried using the attribute [UnitOfWork(IsolationLevel.RepeatableRead)], to no avail.
But, if I were to set the following in the PreInitialize of a module, it works.
Configuration.UnitOfWork.IsolationLevel = IsolationLevel.RepeatableRead;
This will unfortunately force RepeatableRead app-wide. Is there something that I'm overlooking?
To set a different isolation level from the ambient unit of work, begin another with RequiresNew:
using (var uow = _unitOfWorkManager.Begin(new UnitOfWorkOptions
{
    Scope = TransactionScopeOption.RequiresNew, // Add this
    IsolationLevel = IsolationLevel.RepeatableRead
}))
{
    ...
}
Explanation
From https://aspnetboilerplate.com/Pages/Documents/Unit-Of-Work:
If a unit of work method calls another unit of work method, both use the same connection & transaction. The first entered method manages the connection & transaction and then the others reuse it.
The default IsolationLevel for a unit of work is ReadUncommitted if it is not configured. ...
Conventional Unit Of Work Methods
Some methods are unit of work methods by default:
...
All Application Service methods.
...
I'm working on a process that checks and updates data in an Oracle database. I'm using Hibernate and the Spring Framework in my application.
The application reads a CSV file, processes the content, then persists entities:
public class Main {
    public static void main(String[] args) {
        Input input = ReadCSV(path);
        EntityList resultList = Process.process(input);
        WriteResult.write(resultList);
        ...
    }
}

// Process class that loops over the input
public class Process {
    public EntityList process(Input input) {
        EntityList results = ...;
        ...
        for (Line line : input.readLine()) {
            results.add(ProcessLine.process(line));
            ...
        }
        return results;
    }
}

// retrieving and updating entities
class ProcessLine {
    @Autowired
    DomaineRepository domaineRepository;
    @Autowired
    CompanyDomaineService companydomaineService;

    @Transactional
    public MyEntity process(Line line) {
        // getCompanyByXX is a CrudRepository method with @Query that returns an entity object
        MyEntity companyToAttach = domaineRepository.getCompanyByCode(line.getCode());
        MyEntity companyToDetach = domaineRepository.getCompanyBySiret(line.getSiret());
        if (companyToDetach == null || companyToAttach == null) {
            throw new CustomException("Custom Exception");
        }
        // attachCompany retrieves some entity relationEntity, then removes companyToDetach
        // and adds companyToAttach; this updates the relationEntity.company attribute.
        companydomaineService.attachCompany(companyToAttach, companyToDetach);
        return companyToAttach;
    }
}

public class WriteResult {
    @Autowired
    DomaineRepository domaineRepository;

    @Transactional
    public void write(EntityList results) {
        for (MyEntity result : results) {
            domaineRepository.save(result);
        }
    }
}
The application works well on files with few lines, but when I try to process large files (200,000 lines), the performance slows drastically and I get a SQL timeout.
I suspect cache issues, but I'm wondering if saving all the entities at the end of the processing isn't a bad practice?
The problem is your for loop, which saves each result individually and therefore performs single inserts, slowing things down. Hibernate and Spring support batch inserts, and they should be used whenever possible.
Something like domaineRepository.saveAll(results).
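Note that saveAll alone may still issue one insert per entity unless Hibernate's JDBC batching is enabled. A minimal sketch of that configuration, assuming a Spring Boot style application.properties (the property names are standard Hibernate/Spring settings; the value 50 is arbitrary):

# Enable Hibernate JDBC batching so repeated inserts can be grouped (assumed Spring Boot setup).
spring.jpa.properties.hibernate.jdbc.batch_size=50
spring.jpa.properties.hibernate.order_inserts=true
spring.jpa.properties.hibernate.order_updates=true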
Since you are processing a lot of data, it might be better to do things in batches: instead of getting one company to attach at a time, get a list of companies to attach and process those, then get a list of companies to detach and process those.
public EntityList process(Input input) {
    EntityList results;
    List<Code> companiesToAdd = new ArrayList<>();
    List<Siret> companiesToRemove = new ArrayList<>();
    for (Line line : input.readLine()) {
        companiesToAdd.add(line.getCode());
        companiesToRemove.add(line.getSiret());
        ...
    }
    results = process(companiesToAdd, companiesToRemove);
    return results;
}

public List<MyEntity> process(List<Code> companiesToAdd, List<Siret> companiesToRemove) {
    List<MyEntity> attachList = domaineRepository.getCompanyByCodeIn(companiesToAdd);
    List<MyEntity> detachList = domaineRepository.getCompanyBySiretIn(companiesToRemove);
    if (attachList.isEmpty() || detachList.isEmpty()) {
        throw new CustomException("Custom Exception");
    }
    companydomaineService.attachCompany(attachList, detachList);
    return attachList;
}
The above code is just pseudocode to point you in the right direction; you will need to work out what works for you.
For every line you read, you are doing 2 read operations here:
MyEntity companyToAttach = domaineRepository.getCompanyByCode(line.getCode());
MyEntity companyToDetach = domaineRepository.getCompanyBySiret(line.getSiret());
You can read more than one line, use an "in" query, and then process that list of companies.
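For illustration, the "in" variants of those repository methods might look roughly like this (a sketch only; the method names, the JPQL strings, and the ID type are assumptions based on the entities mentioned in the question):

public interface DomaineRepository extends CrudRepository<MyEntity, Long> {

    // Assumed JPQL: fetch all companies whose code is in the given list.
    @Query("select c from MyEntity c where c.code in :codes")
    List<MyEntity> getCompanyByCodeIn(@Param("codes") List<Code> codes);

    // Assumed JPQL: fetch all companies whose siret is in the given list.
    @Query("select c from MyEntity c where c.siret in :sirets")
    List<MyEntity> getCompanyBySiretIn(@Param("sirets") List<Siret> sirets);
}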
Working on a WebApi project that's backed by MSSQL with Entity Framework, and also Oracle (12c) using Oracle's ManagedDataAccess.Client.OracleConnection. We use Autofac to inject an instance of our context per request, but all Oracle access is just done ad hoc.
We have certain operations that depend on both databases at the same time, so we opted to use the TransactionScope object to manage the transaction.
For the most part it works well: the lightweight transactions that are promoted to distributed ones work great. But there is one issue I've encountered after completing a distributed transaction.
Given:
public void Test()
{
    var preItem = new HelpItem
    {
        Field1 = "pre batch"
    };
    _context.Items.Add(preItem);
    _context.SaveChanges(); // This save always works.

    var batchResult = FooService.BatchOperation(true);

    var postItem = new HelpItem
    {
        Field1 = "post batch"
    };
    _context.Items.Add(postItem);
    _context.SaveChanges(); // This will succeed/fail depending on whether FooService caused a distributed transaction.
}
With the BatchOperation method as:
public Result BatchOperation(bool triggerDtc)
{
    using (var transaction = new TransactionScope(TransactionScopeOption.Required, new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted }))
    {
        if (triggerDtc)
        {
            // Make requests to both databases.
        }
        else
        {
            // Make request to one database.
        }
        // Always complete for the sake of the demonstration.
        transaction.Complete();
    }
}
If a distributed transaction is encountered and then completed and fully disposed, EF doesn't seem to be able to recover and go back to working as it was before the transaction came into play.
The error:
Distributed transaction completed. Either enlist this session in a new
transaction or the NULL transaction.
What would be the correct way to handle this?
For this particular case you can simply create another transaction around the second part:
var batchResult = FooService.BatchOperation(true);

using (var transaction = new TransactionScope(TransactionScopeOption.Required, new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted }))
{
    var postItem = new HelpItem
    {
        Field1 = "post batch"
    };
    _context.Items.Add(postItem);
    _context.SaveChanges(); // This save depends on whether FooService caused a distributed transaction.
    transaction.Complete();
}
But this issue came up because the FooService.BatchOperation method was altered with just a lookup to the other database, unknowingly breaking every method out there that continues to use the context after calling it. With a normal transaction, a single EF context can freely be used in and out of transactions without issue; is there any way to achieve the same with a distributed transaction?
EDIT:
This really just has me confused now. Just the act of making a request in another (non-distributed) TransactionScope is enough to restore EF functionality.
public IHttpActionResult Test()
{
    var preItem = new HelpItem
    {
        Field1 = "pre batch"
    };
    _context.Items.Add(preItem);
    _context.SaveChanges(); // This save works.

    var batchResult = FooService.BatchOperation(true);

    using (var transaction = new TransactionScope(TransactionScopeOption.Required, new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted }))
    {
        var lookupAnything = _context.Items.ToList();
        transaction.Complete(); // This is optional, because we really don't care and it's disposed either way.
    }

    var postItem = new HelpItem
    {
        Field1 = "post batch"
    };
    _context.Items.Add(postItem);
    _context.SaveChanges(); // Now this always works.
}
Obviously I can't just go around putting this everywhere, so I'm still not sure what the actual solution is.
In my current project I have an entity which can be published to other systems. To keep track of the publications, the entity has a relation called "publications". I am using EclipseLink.
This entity bean also has a "PreUpdate"-annotated method.
In order to keep the other systems' data up to date, I created an Aspect that is executed around the call to the PreUpdate method. Depending on which properties have changed, I need to remove some of the publications. Everything is working absolutely fine.
The problem I am having is that the portal-publishing component correctly sends delete commands and removes the publication from the entity's "publications" list. I can even see in the changeset that JPA has noticed the "publications" property to have changed. After the transaction is flushed, the cached entity correctly doesn't have the deleted publications anymore. Unfortunately the database still does, and when the system is restarted or the entity is loaded from the DB again, the publication metadata is back.
I tried almost everything. I even managed to get the deleted instances from the JPA ChangeSet in the Aspect and tried to use the entityManager to manually delete them, but nothing actually worked. I seem to be unable to delete these relational entities. Currently I am thinking about using JDBC to delete them, but that would only be a last resort.
@Transactional
@Around("execution(* de.cware.services.truck.model.Truck.jpaPreUpdate(..))")
public Object truckPreUpdate(final ProceedingJoinPoint pjp) throws Throwable {
    if (alreadyExecutingMarker.get() != Boolean.TRUE) {
        alreadyExecutingMarker.set(Boolean.TRUE);
        final Truck truck = (Truck) pjp.getTarget();
        final JpaEntityManager jpaEntityManager = (JpaEntityManager) entityManager.getDelegate();
        UnitOfWorkChangeSet changeSet = jpaEntityManager.getUnitOfWork().getCurrentChanges();
        ObjectChangeSet objectChangeSet = changeSet.getObjectChangeSetForClone(truck);
        if (log.isDebugEnabled()) {
            log.debug("--------------------- Truck pre update check (" + truck.getId() + ") ---------------------");
        }
        ////////////////////////////////////////////////////////////////////////////////////////////////////////////
        // If the truck date has changed, revoke all publication copies.
        ////////////////////////////////////////////////////////////////////////////////////////////////////////////
        final ChangeRecord truckFreeDate = objectChangeSet.getChangesForAttributeNamed("lkwFreiDatum");
        if (truckFreeDate != null) {
            if (log.isDebugEnabled()) {
                log.debug("The date 'truckFreeDate' of truck with id '" + truck.getId() + "' has changed. " +
                        "Revoking all publications that are not marked as main applications");
            }
            for (final String portal : truck.getPublishedPortals()) {
                if (log.isDebugEnabled()) {
                    log.debug("- Revoking publications of copies to portal: " + portal);
                }
                portalService.deleteCopies(truck, portal);
                // Get any deleted portal references and use the entityManager to finally delete them.
                changeSet = jpaEntityManager.getUnitOfWork().getCurrentChanges();
                objectChangeSet = changeSet.getObjectChangeSetForClone(truck);
                final ChangeRecord publicationChanges = objectChangeSet.getChangesForAttributeNamed("publications");
                if (publicationChanges != null) {
                    if (publicationChanges instanceof CollectionChangeRecord) {
                        final CollectionChangeRecord collectionChanges =
                                (CollectionChangeRecord) publicationChanges;
                        @SuppressWarnings("unchecked")
                        final Collection<ObjectChangeSet> removedPublications =
                                (Collection<ObjectChangeSet>)
                                        collectionChanges.getRemoveObjectList().values();
                        for (final ObjectChangeSet removedPublication : removedPublications) {
                            final TruckPublication publication = (TruckPublication) ((org.eclipse.persistence.internal.sessions.ObjectChangeSet) removedPublication).getUnitOfWorkClone();
                            entityManager.remove(publication);
                        }
                    }
                }
            }
        }
    }
}
Chris
The issue is that PreUpdate is raised during the commit process, when the set of changes and the set of objects to delete have already been computed.
Ideally you would perform something like this in your application logic, not through a persistence event.
You could try executing a DeleteObjectQuery directly from your event (instead of using em.remove()); this may work, but in general it would be better to perform this logic in your application.
jpaEntityManager.getUnitOfWork().deleteObject(object);
Also note that getCurrentChanges() computes the changes; in a PreUpdate event the changes are already computed, so you should be able to use getUnitOfWorkChangeSet().
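For illustration, a minimal sketch of what executing the delete through the UnitOfWork might look like, using the jpaEntityManager and publication variables from the question's loop (an assumption about the wiring, not tested code):

// Instead of entityManager.remove(publication):
UnitOfWork uow = jpaEntityManager.getUnitOfWork();

// Option 1: register the object for deletion directly on the UnitOfWork.
uow.deleteObject(publication);

// Option 2: execute a DeleteObjectQuery (org.eclipse.persistence.queries.DeleteObjectQuery) explicitly.
DeleteObjectQuery deleteQuery = new DeleteObjectQuery(publication);
uow.executeQuery(deleteQuery);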
The only solution I found was to create a new method that performs the delete and forces JPA to create a new transaction. Since I lose the changeSet this way, I had to manually find out which publications were deleted. I then simply call that helper method and the publications are correctly deleted, but I find this solution extremely ugly.
@Transactional
@Around("execution(* de.cware.services.truck.model.Truck.jpaPreUpdate(..))")
public Object truckPreUpdate(final ProceedingJoinPoint pjp) throws Throwable {
    if (alreadyExecutingMarker.get() != Boolean.TRUE) {
        alreadyExecutingMarker.set(Boolean.TRUE);
        final Truck truck = (Truck) pjp.getTarget();
        final JpaEntityManager jpaEntityManager = (JpaEntityManager) entityManager.getDelegate();
        final UnitOfWorkChangeSet changeSet = jpaEntityManager.getUnitOfWork().getCurrentChanges();
        final ObjectChangeSet objectChangeSet = changeSet.getObjectChangeSetForClone(truck);
        if (log.isDebugEnabled()) {
            log.debug("--------------------- Truck pre update check (" + truck.getId() + ") ---------------------");
        }
        ////////////////////////////////////////////////////////////////////////////////////////////////////////////
        // If the truck date has changed, revoke all publication copies.
        ////////////////////////////////////////////////////////////////////////////////////////////////////////////
        final ChangeRecord truckFreeDate = objectChangeSet.getChangesForAttributeNamed("lkwFreiDatum");
        if (truckFreeDate != null) {
            if (log.isDebugEnabled()) {
                log.debug("The date 'truckFreeDate' of truck with id '" + truck.getId() + "' has changed. " +
                        "Revoking all publications that are not marked as main applications");
            }
            removeCopyPublications(truck);
        }
    }
}
@Transactional(propagation = Propagation.REQUIRES_NEW)
protected void removeCopyPublications(Truck truck) {
    // Delete all not-main-publications.
    for (final String portal : truck.getPublishedPortals()) {
        if (log.isDebugEnabled()) {
            log.debug("- Revoking publications of copies to portal: " + portal);
        }
        final Map<Integer, TruckPublication> oldPublications = new HashMap<Integer, TruckPublication>();
        for (final TruckPublication publication : truck.getPublications(portal)) {
            oldPublications.put(publication.getId(), publication);
        }
        portalService.deleteCopies(truck, portal);
        for (final TruckPublication publication : truck.getPublications(portal)) {
            oldPublications.remove(publication.getId());
        }
        for (TruckPublication removedPublication : oldPublications.values()) {
            if (!entityManager.contains(removedPublication)) {
                removedPublication = entityManager.merge(removedPublication);
            }
            entityManager.remove(removedPublication);
            entityManager.flush();
        }
    }
}
Why doesn't my first version work?
I had a similar problem. I have a class and its children; when I removed the children from the parent, they were deleted from the DB. Then I attached new children using merge on the Parent class (CascadeType.ALL) with JPA/EclipseLink, but the children were not created in the DB, only in the persistence context (JPA). I fixed it by doing the following:
1- I set shared-cache-mode to NONE in the persistence.xml file (see the sketch below the code).
2- When I remove the children, I immediately execute this:
public void remove(T entity) {
    getEntityManager().remove(getEntityManager().merge(entity));
    getEntityManager().getEntityManagerFactory().getCache().evictAll();
}
And that's all. I hope this helps someone else.
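For reference, step 1 might look roughly like this in persistence.xml (a sketch only; the persistence-unit name is a placeholder):

<persistence-unit name="myPersistenceUnit">
    <!-- Disable the shared (second-level) cache so entities are always re-read from the database. -->
    <shared-cache-mode>NONE</shared-cache-mode>
    ...
</persistence-unit>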
CHECK REFERENCE
http://wiki.eclipse.org/EclipseLink/Examples/JPA/Caching