In my current project I have an entity that can be published to other systems. To keep track of the publications, the entity has a relation called "publications". I am using EclipseLink.
This entity bean also has a method annotated with @PreUpdate.
To keep the other systems' data up to date, I created an aspect that is executed around the call to the @PreUpdate method. Depending on which properties have changed, I need to remove some of the publications. Everything is working absolutely fine so far.
The problem I am having is that the portal-publishing component correctly sends delete commands and removes the publication from the entity's "publications" list. I can even see in the change set that JPA has noticed the "publications" property change. After the transaction is flushed, the cached entity correctly no longer has the deleted publications. Unfortunately the database still does, and when the system is restarted or the entity is loaded from the DB again, the publication metadata is back.
I tried almost everything. I even managed to get the deleted instances from the JPA change set in the aspect and tried to use the entityManager to delete them manually, but nothing actually worked. I seem to be unable to delete these relational entities. Currently I am thinking about using JDBC to delete them, but that would be my last resort.
@Transactional
@Around("execution(* de.cware.services.truck.model.Truck.jpaPreUpdate(..))")
public Object truckPreUpdate(final ProceedingJoinPoint pjp) throws Throwable {
    if (alreadyExecutingMarker.get() != Boolean.TRUE) {
        alreadyExecutingMarker.set(Boolean.TRUE);
        final Truck truck = (Truck) pjp.getTarget();
        final JpaEntityManager jpaEntityManager = (JpaEntityManager) entityManager.getDelegate();
        // Not final: these are re-read after the publications have been deleted.
        UnitOfWorkChangeSet changeSet = jpaEntityManager.getUnitOfWork().getCurrentChanges();
        ObjectChangeSet objectChangeSet = changeSet.getObjectChangeSetForClone(truck);
        if (log.isDebugEnabled()) {
            log.debug("--------------------- Truck pre update check (" + truck.getId() + ") ---------------------");
        }

        ////////////////////////////////////////////////////////////////////////////////////////////////////////////
        // If the truck date has changed, revoke all publication copies.
        ////////////////////////////////////////////////////////////////////////////////////////////////////////////
        final ChangeRecord truckFreeDate = objectChangeSet.getChangesForAttributeNamed("lkwFreiDatum");
        if (truckFreeDate != null) {
            if (log.isDebugEnabled()) {
                log.debug("The date 'truckFreeDate' of truck with id '" + truck.getId() + "' has changed. " +
                        "Revoking all publications that are not marked as main applications");
            }
            for (final String portal : truck.getPublishedPortals()) {
                if (log.isDebugEnabled()) {
                    log.debug("- Revoking publications of copies to portal: " + portal);
                }
                portalService.deleteCopies(truck, portal);

                // Get any deleted portal references and use the entityManager to finally delete them.
                changeSet = jpaEntityManager.getUnitOfWork().getCurrentChanges();
                objectChangeSet = changeSet.getObjectChangeSetForClone(truck);
                final ChangeRecord publicationChanges = objectChangeSet.getChangesForAttributeNamed("publications");
                if (publicationChanges instanceof CollectionChangeRecord) {
                    final CollectionChangeRecord collectionChanges =
                            (CollectionChangeRecord) publicationChanges;
                    @SuppressWarnings("unchecked")
                    final Collection<ObjectChangeSet> removedPublications =
                            (Collection<ObjectChangeSet>) collectionChanges.getRemoveObjectList().values();
                    for (final ObjectChangeSet removedPublication : removedPublications) {
                        final TruckPublication publication = (TruckPublication)
                                ((org.eclipse.persistence.internal.sessions.ObjectChangeSet) removedPublication).getUnitOfWorkClone();
                        entityManager.remove(publication);
                    }
                }
            }
        }
    }
    return pjp.proceed();
}
Chris
The issue is that PreUpdate is raised during the commit process, when the set of changes and the set of objects to delete have already been computed.
Ideally you would perform something like this in your application logic, not through a persistence event.
You could try executing a DeleteObjectQuery directly from your event (instead of using em.remove()); this may work, but in general it would be better to perform this logic in your application.
jpaEntityManager.getUnitOfWork().deleteObject(object);
Also note that getCurrentChanges() computes the changes; in a PreUpdate event the changes are already computed, so you should be able to use getUnitOfWorkChangeSet().
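For illustration, here is a minimal, untested sketch of that suggestion, reusing the jpaEntityManager and a deleted publication from the aspect in the question (DeleteObjectQuery lives in org.eclipse.persistence.queries, UnitOfWork in org.eclipse.persistence.sessions):

// Untested sketch: delete the publication through the UnitOfWork
// instead of em.remove(), from within the PreUpdate event.
UnitOfWork uow = jpaEntityManager.getUnitOfWork();
uow.deleteObject(publication);
// ...or, equivalently, execute a DeleteObjectQuery explicitly:
uow.executeQuery(new DeleteObjectQuery(publication));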
The only solution I found was to create a new method for performing the delete and force JPA to create a new transaction. Since this loses the change set, I had to find out manually which publications were deleted. I then simply call that helper method and the publications are correctly deleted, but I find this solution extremely ugly.
@Transactional
@Around("execution(* de.cware.services.truck.model.Truck.jpaPreUpdate(..))")
public Object truckPreUpdate(final ProceedingJoinPoint pjp) throws Throwable {
    if (alreadyExecutingMarker.get() != Boolean.TRUE) {
        alreadyExecutingMarker.set(Boolean.TRUE);
        final Truck truck = (Truck) pjp.getTarget();
        final JpaEntityManager jpaEntityManager = (JpaEntityManager) entityManager.getDelegate();
        final UnitOfWorkChangeSet changeSet = jpaEntityManager.getUnitOfWork().getCurrentChanges();
        final ObjectChangeSet objectChangeSet = changeSet.getObjectChangeSetForClone(truck);
        if (log.isDebugEnabled()) {
            log.debug("--------------------- Truck pre update check (" + truck.getId() + ") ---------------------");
        }

        ////////////////////////////////////////////////////////////////////////////////////////////////////////////
        // If the truck date has changed, revoke all publication copies.
        ////////////////////////////////////////////////////////////////////////////////////////////////////////////
        final ChangeRecord truckFreeDate = objectChangeSet.getChangesForAttributeNamed("lkwFreiDatum");
        if (truckFreeDate != null) {
            if (log.isDebugEnabled()) {
                log.debug("The date 'truckFreeDate' of truck with id '" + truck.getId() + "' has changed. " +
                        "Revoking all publications that are not marked as main applications");
            }
            removeCopyPublications(truck);
        }
    }
    return pjp.proceed();
}
@Transactional(propagation = Propagation.REQUIRES_NEW)
protected void removeCopyPublications(Truck truck) {
    // Delete all not-main-publications.
    for (final String portal : truck.getPublishedPortals()) {
        if (log.isDebugEnabled()) {
            log.debug("- Revoking publications of copies to portal: " + portal);
        }
        // Remember the publications that exist before the delete call ...
        final Map<Integer, TruckPublication> oldPublications = new HashMap<Integer, TruckPublication>();
        for (final TruckPublication publication : truck.getPublications(portal)) {
            oldPublications.put(publication.getId(), publication);
        }
        portalService.deleteCopies(truck, portal);
        // ... then remove the survivors, leaving only the deleted publications.
        for (final TruckPublication publication : truck.getPublications(portal)) {
            oldPublications.remove(publication.getId());
        }
        for (TruckPublication removedPublication : oldPublications.values()) {
            if (!entityManager.contains(removedPublication)) {
                removedPublication = entityManager.merge(removedPublication);
            }
            entityManager.remove(removedPublication);
            entityManager.flush();
        }
    }
}
Why doesn't my first version work?
I had a similar problem: I have a class and its children, and when I removed the children from the parent they were deleted from the DB. Then I attached new children using merge on the parent class (CascadeType.ALL) using JPA/EclipseLink, but the children were not created in the DB, only in the persistence context (JPA). I fixed this by doing the following:
1. I set shared-cache-mode to NONE in the persistence.xml file.
2. When I remove the children, I immediately execute this:
public void remove(T entity) {
    getEntityManager().remove(getEntityManager().merge(entity));
    getEntityManager().getEntityManagerFactory().getCache().evictAll();
}
And that's all. I hope this helps someone else.
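As a side note, if you prefer not to touch persistence.xml, the same setting from step 1 can be passed programmatically as the standard JPA property javax.persistence.sharedCache.mode; a minimal sketch, where the unit name "my-pu" is a placeholder:

// Equivalent of <shared-cache-mode>NONE</shared-cache-mode> in persistence.xml;
// "my-pu" is a placeholder persistence-unit name.
Map<String, Object> props = new HashMap<>();
props.put("javax.persistence.sharedCache.mode", "NONE");
EntityManagerFactory emf = Persistence.createEntityManagerFactory("my-pu", props);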
For reference, see:
http://wiki.eclipse.org/EclipseLink/Examples/JPA/Caching
Related
If I commit the same entity twice, the first time with changes and the second time without any changes, I receive a Commit with a CommitId both times. The first one holds the changes; the second has an empty changes list.
Is this behaviour intentional? I would expect not to get a CommitId for the second commit, as there is no change and also no commit in the database. I worked around the issue by checking whether the changes list is empty.
My Repository
public class CustomerRepository {

    private final Javers javers;

    CustomerRepository(Javers javers) {
        this.javers = javers;
    }

    public Optional<CommitId> save(Customer customer, Metadata metadata) {
        try {
            var author = Optional.ofNullable(metadata.author())
                    .orElse("unknown");
            Commit commit = javers.commit(author, customer); // <-- this returns a Commit with a CommitId
            return Optional.of(commit.getId());
        } catch (Exception exception) {
            log.warn("Couldn't commit customer", exception);
        }
        return Optional.empty();
    }

    public ChangesByCommit getCommitForCustomer(CustomerId customerId, CommitId commitId) {
        var query = QueryBuilder
                .byInstanceId(customerId.getValue(), Customer.class)
                .withCommitId(commitId)
                .withChildValueObjects()
                .build();
        return javers.findChanges(query)
                .groupByCommit()
                .stream()
                .findFirst()
                .orElseThrow(() -> new CommitNotFoundException(customerId, commitId));
    }
}
And my test case is this:
@Test
void emptyCommit() {
    var customer = new Customer(new CustomerId("id"));
    var metadata = new Metadata("author");

    Optional<CommitId> initialCommit = repository.save(customer, metadata);
    assertThat(initialCommit).isPresent();

    customer.addDeliveryAddressToAddressBookList(new DeliveryAddress("name", "surname", "street", "city"));

    Optional<CommitId> commitWithChanges = repository.save(customer, metadata);
    Optional<CommitId> commitWithoutChanges = repository.save(customer, metadata);
    assertThat(commitWithChanges).isPresent();
    assertThat(commitWithoutChanges).isPresent();

    ChangesByCommit initialChanges = repository.getCommitForCustomer(new CustomerId(customer.getId()), initialCommit.get());
    ChangesByCommit addedAddressBookChanges = repository.getCommitForCustomer(new CustomerId(customer.getId()), commitWithChanges.get());
    assertThrows(CommitNotFoundException.class,
            () -> repository.getCommitForCustomer(new CustomerId(customer.getId()), commitWithoutChanges.get()));
}
This is how JaVers works: you can have an empty commit (without snapshots), but you can't have no commit. This design is expressed in the JaVers API; when you call javers.commit() you get a Commit object and not an Optional<Commit>.
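If you want to treat an empty commit as "no commit", here is a minimal sketch of the workaround mentioned in the question, applied inside the repository's save method (Commit.getChanges() is part of the JaVers API):

Commit commit = javers.commit(author, customer);
// Treat a commit that carries no changes as "nothing was committed".
if (commit.getChanges().isEmpty()) {
    return Optional.empty();
}
return Optional.of(commit.getId());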
I'm building a data pipeline with a Spring Cloud Stream File Source app at the start of the pipeline. I need some help with working around some missing features.
My File Source app (based on org.springframework.cloud.stream.app:spring-cloud-starter-stream-source-file) works perfectly well except for some missing features that I need help with. I need:
1. To delete files after they are polled and turned into messages
2. To poll subdirectories
With respect to item 1, I read that the delete feature doesn't exist in the File Source app (it is available on the SFTP source). Every time the app is restarted, the files that were processed in the past are re-picked; can the history of processed files be made permanent? Is there an easy alternative?
To support those requirements you definitely need to modify the code of the mentioned File Source project: https://docs.spring.io/spring-cloud-stream-app-starters/docs/Einstein.BUILD-SNAPSHOT/reference/htmlsingle/#_patching_pre_built_applications
I would suggest forking the project and pulling it from GitHub as-is, since you are going to modify the project's existing code. Then follow the instructions in the mentioned doc on how to build the target binder-specific artifact that will be compatible with the SCDF environment.
Now about the questions:
To poll sub-directories for the same file pattern, you need to configure a RecursiveDirectoryScanner on the Files.inboundAdapter():
/**
 * Specify a custom scanner.
 * @param scanner the scanner.
 * @return the spec.
 * @see FileReadingMessageSource#setScanner(DirectoryScanner)
 */
public FileInboundChannelAdapterSpec scanner(DirectoryScanner scanner) {
Note that all the filters must be configured on this DirectoryScanner instead.
Otherwise this assertion will fail:
// Check that the filter and locker options are _NOT_ set if an external scanner has been set.
// The external scanner is responsible for the filter and locker options in that case.
Assert.state(!(this.scannerExplicitlySet && (this.filter != null || this.locker != null)),
        () -> "When using an external scanner the 'filter' and 'locker' options should not be used. " +
                "Instead, set these options on the external DirectoryScanner: " + this.scanner);
To keep track of the files, it is better to use a FileSystemPersistentAcceptOnceFileListFilter backed by an external persistence store through a ConcurrentMetadataStore implementation: https://docs.spring.io/spring-integration/reference/html/#metadata-store. This should be used instead of preventDuplicates(), because FileSystemPersistentAcceptOnceFileListFilter ensures once-only logic for us as well.
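A minimal sketch of such a setup, assuming spring-integration-jdbc is on the classpath and its metadata-store schema has been created (the classes are in org.springframework.integration.metadata, org.springframework.integration.jdbc.metadata, and org.springframework.integration.file.filters; the bean names are placeholders):

@Bean
public ConcurrentMetadataStore metadataStore(DataSource dataSource) {
    // A JDBC-backed store survives restarts, so files already processed
    // are not re-picked when the app comes back up.
    return new JdbcMetadataStore(dataSource);
}

@Bean
public FileSystemPersistentAcceptOnceFileListFilter acceptOnceFilter(ConcurrentMetadataStore metadataStore) {
    // The prefix namespaces the per-file keys in the metadata store.
    return new FileSystemPersistentAcceptOnceFileListFilter(metadataStore, "file-source:");
}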
Deleting the file right after sending might not be an option, since you may send the File as-is and it has to still be available on the consuming side.
Instead, you can add a ChannelInterceptor into the source.output() and implement its postSend() to perform ((File) message.getPayload()).delete(), which is going to happen when the message has been successfully sent to the binder destination.
@EnableBinding(Source.class)
@Import(TriggerConfiguration.class)
@EnableConfigurationProperties({FileSourceProperties.class, FileConsumerProperties.class,
        TriggerPropertiesMaxMessagesDefaultUnlimited.class})
public class FileSourceConfiguration {

    @Autowired
    @Qualifier("defaultPoller")
    PollerMetadata defaultPoller;

    @Autowired
    Source source;

    @Autowired
    private FileSourceProperties properties;

    @Autowired
    private FileConsumerProperties fileConsumerProperties;

    private Boolean alwaysAcceptDirectories = false;
    private Boolean deletePostSend;
    private Boolean movePostSend;
    private String movePostSendSuffix;

    @Bean
    public IntegrationFlow fileSourceFlow() {
        FileInboundChannelAdapterSpec messageSourceSpec = Files.inboundAdapter(new File(this.properties.getDirectory()));
        RecursiveDirectoryScanner recursiveDirectoryScanner = new RecursiveDirectoryScanner();
        messageSourceSpec.scanner(recursiveDirectoryScanner);
        // All filters go on the scanner, not on the adapter spec.
        recursiveDirectoryScanner.setFilter(initializeFileListFilter());
        initializePostSendAction();

        IntegrationFlowBuilder flowBuilder = IntegrationFlows
                .from(messageSourceSpec,
                        new Consumer<SourcePollingChannelAdapterSpec>() {

                            @Override
                            public void accept(SourcePollingChannelAdapterSpec sourcePollingChannelAdapterSpec) {
                                sourcePollingChannelAdapterSpec
                                        .poller(defaultPoller);
                            }
                        });

        ChannelInterceptor channelInterceptor = new ChannelInterceptor() {

            @Override
            public void postSend(Message<?> message, MessageChannel channel, boolean sent) {
                if (sent) {
                    File fileOriginalFile = (File) message.getHeaders().get("file_originalFile");
                    if (fileOriginalFile != null) {
                        if (movePostSend) {
                            fileOriginalFile.renameTo(new File(fileOriginalFile + movePostSendSuffix));
                        } else if (deletePostSend) {
                            fileOriginalFile.delete();
                        }
                    }
                }
            }

            // Override more interceptor methods to capture some logs here.
        };

        MessageChannel messageChannel = source.output();
        ((DirectChannel) messageChannel).addInterceptor(channelInterceptor);

        return FileUtils.enhanceFlowForReadingMode(flowBuilder, this.fileConsumerProperties)
                .channel(messageChannel)
                .get();
    }

    private void initializePostSendAction() {
        deletePostSend = this.properties.isDeletePostSend();
        movePostSend = this.properties.isMovePostSend();
        movePostSendSuffix = this.properties.getMovePostSendSuffix();
        if (deletePostSend && movePostSend) {
            String errorMessage = "The 'delete-file-post-send' and 'move-file-post-send' attributes are mutually exclusive";
            throw new IllegalArgumentException(errorMessage);
        }
        if (movePostSend && (movePostSendSuffix == null || movePostSendSuffix.trim().length() == 0)) {
            String errorMessage = "The 'move-post-send-suffix' is required when 'move-file-post-send' is set to true.";
            throw new IllegalArgumentException(errorMessage);
        }
        // Add additional validation to ensure the user didn't configure a file move
        // that will result in cyclic processing of the file.
    }

    private FileListFilter<File> initializeFileListFilter() {
        final List<FileListFilter<File>> filtersNeeded = new ArrayList<FileListFilter<File>>();
        if (this.properties.getFilenamePattern() != null && this.properties.getFilenameRegex() != null) {
            String errorMessage = "The 'filename-pattern' and 'filename-regex' attributes are mutually exclusive.";
            throw new IllegalArgumentException(errorMessage);
        }
        if (StringUtils.hasText(this.properties.getFilenamePattern())) {
            SimplePatternFileListFilter patternFilter = new SimplePatternFileListFilter(this.properties.getFilenamePattern());
            if (this.alwaysAcceptDirectories != null) {
                patternFilter.setAlwaysAcceptDirectories(this.alwaysAcceptDirectories);
            }
            filtersNeeded.add(patternFilter);
        } else if (this.properties.getFilenameRegex() != null) {
            RegexPatternFileListFilter regexFilter = new RegexPatternFileListFilter(this.properties.getFilenameRegex());
            if (this.alwaysAcceptDirectories != null) {
                regexFilter.setAlwaysAcceptDirectories(this.alwaysAcceptDirectories);
            }
            filtersNeeded.add(regexFilter);
        }
        if (!Boolean.FALSE.equals(this.properties.isIgnoreHiddenFiles())) {
            filtersNeeded.add(new IgnoreHiddenFileListFilter());
        }
        if (Boolean.TRUE.equals(this.properties.isPreventDuplicates())) {
            filtersNeeded.add(new AcceptOnceFileListFilter<File>());
        }
        if (filtersNeeded.size() == 1) {
            return filtersNeeded.get(0);
        }
        return new CompositeFileListFilter<File>(filtersNeeded);
    }
}
I have a parent which stores a list of children. When I update the children (add/edit/remove), is there a way to automatically decide which child to remove or edit based on the foreign key? Or do I have to manually check through all the children to see which are new or modified?
Parent Class
@Entity
@EntityListeners(PermitEntityListener.class)
public class Permit extends Identifiable {
    @OneToMany(fetch = FetchType.LAZY, cascade = CascadeType.ALL, mappedBy = "permit")
    private List<Coordinate> coordinates;
}
Child Class
@Entity
public class Coordinate extends Identifiable {
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "permit_id", referencedColumnName = "id")
    private Permit permit;

    private double lat;
    private double lon;
}
Parent's Controller
#PutMapping("")
public #ResponseBody ResponseEntity<?> update(#RequestBody Permit permit) {
logger.debug("update() with body {} of id {}", permit, permit.getId());
if (!repository.findById(permit.getId()).isPresent()) {
return ResponseEntity.status(HttpStatus.BAD_REQUEST).body(null);
}
Permit returnedEntity = repository.save(permit);
repository.flush();
return ResponseEntity.ok(returnedEntity);
}
=EDIT=
Controller Create
@Override
@PostMapping("")
public @ResponseBody ResponseEntity<?> create(@RequestBody Permit permit) {
    logger.debug("create() with body {}", permit);
    if (permit == null || permit.getId() != null) {
        return ResponseEntity.status(HttpStatus.BAD_REQUEST).body(null);
    }
    List<Coordinate> coordinates = permit.getCoordinates();
    if (coordinates != null) {
        for (int x = 0; x < coordinates.size(); ++x) {
            Coordinate coordinate = coordinates.get(x);
            coordinate.setPermit(permit);
        }
    }
    Permit returnedEntity = repository.save(permit);
    repository.flush();
    return ResponseEntity.ok(returnedEntity);
}
Controller Update
#PutMapping("")
public #ResponseBody ResponseEntity<?> update(#RequestBody Permit permit) {
logger.debug("update() with body {} of id {}", permit, permit.getId());
if (!repository.findById(permit.getId()).isPresent()) {
return ResponseEntity.status(HttpStatus.BAD_REQUEST).body(null);
}
List<Coordinate> repoCoordinate = coordinateRepository.findByPermitId(permit.getId());
List<Long> coordinateIds = new ArrayList<Long>();
for (Coordinate coordinate : permit.getCoordinates()) {
coordinate.setPermit(permit);
//if existing coordinate, save the ID in coordinateIds
if (coordinate.getId() != null) {
coordinateIds.add(coordinate.getId());
}
}
//loop through coordinate in repository to find which coordinate to remove
for (Coordinate coordinate : repoCoordinate) {
if (!(coordinateIds.contains(coordinate.getId()))) {
coordinateRepository.deleteById(coordinate.getId());
}
}
Permit returnedEntity = repository.save(permit);
repository.flush();
return ResponseEntity.ok(returnedEntity);
}
I have tested this and it works, but is there no simpler way of doing it?
You were close to the solution. The only thing you're missing is orphanRemoval=true on your one-to-many mapping:
@Entity
@EntityListeners(PermitEntityListener.class)
public class Permit extends Identifiable {
    @OneToMany(mappedBy = "permit", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<Coordinate> coordinates;
}
Flagging the mapping for orphan removal will tell the underlying ORM to delete any entities that no longer belong to any parent entity. Since you removed a child element from the list, it will be deleted when you save the parent element.
Creating new elements and updating old ones is driven by the CascadeType. Since you have CascadeType.ALL, all elements in the list without an ID will be saved to the database and assigned a new ID when you save the parent entity, and all elements that are already in the list and have an ID will be updated.
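For illustration, a minimal sketch of the effect, assuming a Spring Data repository for Permit; permitId and incomingIds (the child ids sent by the client) are placeholders:

Permit permit = repository.findById(permitId)
        .orElseThrow(() -> new IllegalStateException("permit not found"));
// Dropping a child from the collection is enough: with orphanRemoval = true
// the orphaned Coordinate rows are deleted when the parent is saved.
permit.getCoordinates().removeIf(c -> !incomingIds.contains(c.getId()));
repository.save(permit);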
On a side note, you might need to update the setter method for List<Coordinate> coordinates to look something like:
public void setCoordinates(List<Coordinate> coordinates) {
    this.coordinates = coordinates;
    this.coordinates.forEach(coordinate -> coordinate.setPermit(this));
}
Or simply use @JsonManagedReference and @JsonBackReference if you're working with JSON.
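For illustration, a sketch of where those Jackson annotations (from com.fasterxml.jackson.annotation) would go on the entities from the question; only the relevant members are shown:

@Entity
@EntityListeners(PermitEntityListener.class)
public class Permit extends Identifiable {
    // Serialized normally; Jackson wires permit back into each
    // Coordinate when deserializing the JSON body.
    @JsonManagedReference
    @OneToMany(mappedBy = "permit", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<Coordinate> coordinates;
}

@Entity
public class Coordinate extends Identifiable {
    // Omitted from JSON output to break the cycle.
    @JsonBackReference
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "permit_id", referencedColumnName = "id")
    private Permit permit;
}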
I have a parent which stores a list of children.
Let's write the DDL for it:
TABLE parent (
    id integer PRIMARY KEY
)

TABLE child (
    id integer PRIMARY KEY,
    parent_id integer FOREIGN KEY REFERENCES parent(id)
)
When i update the children(add/edit/remove), is there a way to automatically decide which child to remove or edit based on the foreign key?
Assuming you have a new child #5 bound to parent #2 and:
The FK in the DDL is correct
The entities know the FK
You are using the same JPA context
The transaction is executed correctly
Then every call to parent.getChilds() must(!) return all the entities that existed before your transaction was executed, plus the same instance of the entity that you just committed to the database.
Then, if you remove child #5 of parent #2 and the transaction executes successfully, parent.getChilds() must return all entities except child #5.
Special case:
If you remove parent #2 and you have cascade-delete in the DDL as well as in the Java code, all children must be removed from the database together with parent #2. In this case parent #2 is no longer bound to the JPA context, and neither are any of the children of parent #2.
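For illustration, a minimal sketch of the Java side of such a cascade delete (the DDL side would additionally declare ON DELETE CASCADE on the foreign key):

@Entity
public class Parent {
    @Id
    private Integer id;

    // Removing this Parent cascades the remove to all of its children,
    // both in the JPA context and in the database.
    @OneToMany(mappedBy = "parent", cascade = CascadeType.REMOVE)
    private List<Child> children;
}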
=Edit=
You could use merge. This will work for constructs like this:
POST {
"coordinates": [{
"lat":"51.33",
"lon":"22.44"
},{
"lat":"50.22",
"lon":"22.33"
}]
}
It will create one row in the "permit" table and two rows in the "coordinate" table, with both coordinates bound to the permit row. The result will include the generated ids.
But: you will have to do the validation work yourself (check that the id is null, check that the coordinates do not refer to a different permit, ...)!
The removal of coordinates must be done using the DELETE method:
DELETE /permit/972/coordinate/3826648305
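A sketch of what such an endpoint could look like; the controller wiring and the ownership check are assumptions:

@DeleteMapping("/permit/{permitId}/coordinate/{coordinateId}")
public ResponseEntity<Void> deleteCoordinate(@PathVariable Long permitId,
                                             @PathVariable Long coordinateId) {
    // Validation that the coordinate actually belongs to the permit is assumed.
    coordinateRepository.deleteById(coordinateId);
    return ResponseEntity.noContent().build();
}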
I'm working on a process that checks and updates data from an Oracle database. I'm using Hibernate and the Spring framework in my application.
The application reads a CSV file, processes the content, then persists entities:
public class Main {
    public static void main(String[] args) {
        Input input = ReadCSV(path);
        EntityList resultList = Process.process(input);
        WriteResult.write(resultList);
        ...
    }
}
// Process class that loops over the input
public class Process {

    public EntityList process(Input input) {
        EntityList results = ...;
        ...
        for (Line line : input.readLine()) {
            results.add(ProcessLine.process(line));
            ...
        }
        return results;
    }
}
// Retrieving and updating entities
class ProcessLine {

    @Autowired
    DomaineRepository domaineRepository;

    @Autowired
    CompanyDomaineService companydomaineService;

    @Transactional
    public MyEntity process(Line line) {
        // getCompanyByXX is a CrudRepository method with @Query that returns an entity object.
        MyEntity companyToAttach = domaineRepository.getCompanyByCode(line.getCode());
        MyEntity companyToDetach = domaineRepository.getCompanyBySiret(line.getSiret());
        if (companyToDetach == null || companyToAttach == null) {
            throw new CustomException("Custom Exception");
        }
        // attachCompany retrieves some relationEntity, then removes companyToDetach and
        // adds companyToAttach. This updates the relationEntity.company attribute.
        companydomaineService.attachCompany(companyToAttach, companyToDetach);
        return companyToAttach;
    }
}
public class WriteResult {

    @Autowired
    DomaineRepository domaineRepository;

    @Transactional
    public void write(EntityList results) {
        for (MyEntity result : results) {
            domaineRepository.save(result);
        }
    }
}
The application works well on files with few lines, but when I try to process large files (200,000 lines), the performance slows drastically and I get a SQL timeout.
I suspect cache issues, but I'm also wondering whether saving all the entities at the end of the processing is bad practice?
The problem is your for loop, which does an individual save on each result and thus single inserts, slowing the process down. Hibernate and Spring support batch inserts, which should be used whenever possible;
something like domaineRepository.saveAll(results).
Since you are processing a lot of data, it might also be better to do things in batches: instead of getting one company to attach at a time, get a list of companies to attach and process those, then get a list of companies to detach and process those, as in the sketch below.
public EntityList process(Input input) {
    EntityList results;
    List<Code> companiesToAdd = new ArrayList<>();
    List<Siret> companiesToRemove = new ArrayList<>();
    for (Line line : input.readLine()) {
        companiesToAdd.add(line.getCode());
        companiesToRemove.add(line.getSiret());
        ...
    }
    results = process(companiesToAdd, companiesToRemove);
    return results;
}

public List<MyEntity> process(List<Code> companiesToAdd, List<Siret> companiesToRemove) {
    List<MyEntity> attachList = domaineRepository.getCompanyByCodeIn(companiesToAdd);
    List<MyEntity> detachList = domaineRepository.getCompanyBySiretIn(companiesToRemove);
    if (attachList.isEmpty() || detachList.isEmpty()) {
        throw new CustomException("Custom Exception");
    }
    companydomaineService.attachCompany(attachList, detachList);
    return attachList;
}
The above is just pseudocode to point you in the right direction; you will need to work out what works for you.
For every line you read, you are doing two read operations here:
MyEntity companyToAttach = domaineRepository.getCompanyByCode(line.getCode());
MyEntity companyToDetach = domaineRepository.getCompanyBySiret(line.getSiret());
You can read more than one line, use an "in" query, and then process the resulting list of companies.
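For illustration, a sketch of what those derived "in" queries could look like on a Spring Data repository; the method names mirror the pseudocode above and assume MyEntity has code and siret properties:

public interface DomaineRepository extends JpaRepository<MyEntity, Long> {

    // Spring Data derives "... where code in (:codes)" from the "In" suffix.
    List<MyEntity> getCompanyByCodeIn(List<Code> codes);

    List<MyEntity> getCompanyBySiretIn(List<Siret> sirets);
}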
I'm facing a singular problem...
I need to update an entity, but I don't know when it is really updated.
My method is:
@Override
@Transactional(isolation = Isolation.SERIALIZABLE)
public void lightOn(int idInterruttore) {
    Interruttore interruttore = dao.findById(idInterruttore);
    String inputPin = interruttore.getInputPin();
    String pinName = interruttore.getRelePin();
    GpioController gpio = interruttore.getGpio();
    GpioPinDigitalOutput rele = gpio.provisionDigitalOutputPin(RaspiPin.getPinByName(pinName));
    try {
        DateTime date = new DateTime();
        Date now = date.toDate();
        int i = 1;
        while (getInput(inputPin, gpio) != 1) {
            if (i > 1) {
                logger.debug(String.format("Try n %s", i));
            }
            pushButton(rele);
            Thread.sleep(1000);
            i++;
        }
        dao.updateInterruttore(idInterruttore, now, true);
    } catch (GpioPinExistsException | InterruptedException gpe) {
        logger.error("GPIO pin already exists", gpe);
    } finally {
        gpio.unprovisionPin(rele);
    }
    logger.debug(String.format("After the update status should be true and it's %s",
            interruttore.isStato()));
}
updateInterruttore is shown below (I used this form to be sure the commit is issued right after the update). I use the lock option because this method can be called multiple times, but only the first call must perform the update:
@Override
public void updateInterruttore(int idInterruttore, Date dateTime, boolean stato) {
    Session session = getSession();
    Transaction tx = session.beginTransaction();
    String update = "update Interruttore i set i.dateTime = :dateTime, i.stato = :stato where idInterruttore = :idInterruttore";
    session.createQuery(update)
            .setTimestamp("dateTime", dateTime)
            .setBoolean("stato", stato)
            .setInteger("idInterruttore", idInterruttore)
            .setLockOptions(LockOptions.UPGRADE)
            .executeUpdate();
    tx.commit();
}
Well... when I run the update, the log tells me:
After the update status should be true and it's false
This happens only the first time I call the method; the second time, interruttore.isStato() is correctly true. Why does this happen?
This happens because you're updating the database directly with the update statement. Hibernate does not automatically update an already loaded entity in this case. If you reload the entity after the call to dao.updateInterruttore, you should get the updated data.
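For illustration, a minimal sketch of that reload, assuming a Hibernate Session is available in the service; refresh() re-reads the entity's state from the database:

dao.updateInterruttore(idInterruttore, now, true);
// Re-read the row so the in-memory instance reflects the bulk update;
// isStato() should now return true.
session.refresh(interruttore);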
Two notes:
1) You are using a query to apply the update. In that case Hibernate will not update the entity that is in the session: unless you update the entity itself and call session.save(interruttore), the in-session entity stays stale (even though the update shows up in the DB). Furthermore, I don't understand why you don't just update the entity and save it via session.save().
2) You are annotating the service method with @Transactional (assuming that's the Spring annotation). If you use JTA, your tx.commit() will have no effect: the transaction is committed once the method completes (or rolled back if the method throws an exception). If you are not using JTA, then get rid of @Transactional and manage the transaction in your DAO method, as you are doing; but that's considered bad practice.