Spring Data returns deleted object sometimes

I am using Spring Data to access the database and do not use any cache. After deleting a record from the database, I send an event to another microservice so that it re-queries and updates its list of objects. Basically my code is:
private void deleteObject(MyObject object) {
    myRepository.deleteById(object.getId());
    myRepository.flush();
    ...
    sendEventToSystemX();
}
The other microservice captures the event sent by the sendEventToSystemX method and runs a query against myRepository:
@Transactional(isolation = Isolation.READ_UNCOMMITTED, readOnly = true)
public Page<T> findAll(Specification<T> spec, Pageable pageable) {
    TypedQuery<T> query = getQuery(spec, getDomainClass(), pageable.getSort());
    if (pageable.isUnpaged()) {
        return new PageImpl<>(query.getResultList());
    }
    return readPage(query, getDomainClass(), pageable, spec);
}
Note that I am flushing the repository after the deletion, and the select query is executed by a different service, so it is not in the same transaction. Why do I still get the deleted object the first time I query after the deletion? If I re-run the findAll method I get an up-to-date result, and the problem does not happen every time. What can be the reason behind it?
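One possible explanation, sketched below rather than taken from the thread: flush() only pushes the DELETE statement to the database inside the still-open transaction; it does not commit. If deleteObject runs inside a @Transactional boundary, the event can reach the other microservice before the commit becomes visible, which would produce exactly this kind of intermittent stale read. A minimal sketch (Spring 5.3+ API) that defers the event until after the commit:
import org.springframework.transaction.support.TransactionSynchronization;
import org.springframework.transaction.support.TransactionSynchronizationManager;

private void deleteObject(MyObject object) {
    myRepository.deleteById(object.getId());
    // Publish the event only once the surrounding transaction has committed,
    // so the other service can never observe the pre-delete state.
    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
        @Override
        public void afterCommit() {
            sendEventToSystemX();
        }
    });
}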

Related

GraphiQL mutation for deletion not working properly

I am using GraphiQL in my project and have an issue when deleting a record if the delete is called from another service. I am using a Postgres DB.
EmployeeMutation.java:
@DgsData(parentType = "MutationResolver", field = "detachParty")
public Boolean deleteEmployee(@InputArgument("id") Long id) {
    return employeeService.deleteEmployee(id);
}
EmployeeService.java:
public boolean deleteEmployee(Long id) {
    final var employeeEntity = repository.findById(id);
    if (employeeEntity.isPresent()) {
        repository.delete(employeeEntity.get());
        return true;
    } else {
        return false;
    }
}
The above works fine if I call the deleteEmployee mutation from the GraphiQL editor. But if I call the service separately from another service, the deletion does not happen.
employeeService.deleteEmployee(employeeEntity.getId());
I am calling the above method from another service (DepartmentService). In that service employeeService is autowired and the id is passed correctly, but the deletion does not happen.
I also tried calling the employee repository directly from DepartmentService, but the deletion still does not work. What can I try next?
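A hedged debugging sketch, not from the thread: two common culprits are the delete running without an active (or with a silently rolled-back) transaction, and the Employee still being reachable from a cascading association that keeps it alive. Making the service method explicitly transactional and flushing makes any failure visible immediately:
import org.springframework.transaction.annotation.Transactional;

@Transactional
public boolean deleteEmployee(Long id) {
    final var employeeEntity = repository.findById(id);
    if (employeeEntity.isPresent()) {
        repository.delete(employeeEntity.get());
        repository.flush(); // force the DELETE now; constraint violations surface here
        return true;
    }
    return false;
}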

Using Quarkus Cache with Reactive and Mutiny correctly

I'm trying to migrate my project to Quarkus Reactive with Hibernate Reactive Panache and I'm not sure how to deal with caching.
My original method looked like this:
@Transactional
@CacheResult(cacheName = "subject-cache")
public Subject getSubject(@CacheKey String subjectId) throws Exception {
    return subjectRepository.findByIdentifier(subjectId);
}
The Subject is loaded from the cache, if available, by the cache key "subjectId".
Migrating to Mutiny would look like this:
@CacheResult(cacheName = "subject-cache")
public Uni<Subject> getSubject(@CacheKey String subjectId) {
    return subjectRepository.findByIdentifier(subjectId);
}
However, it can't be right to store the Uni object in the cache.
There is also the option to inject the cache as a bean; however, the fallback function does not support returning a Uni:
@Inject
@CacheName("subject-cache")
Cache cache;

// does not work: the cache.get fallback must return Subject, not Uni<Subject>
public Uni<Subject> getSubject(String subjectId) {
    return cache.get(subjectId, s -> subjectRepository.findByIdentifier(subjectId));
}

// this works, but needs a blocking call to the repo to unwrap the response
public Uni<Subject> getSubject(String subjectId) {
    return cache.get(subjectId, s -> subjectRepository.findByIdentifier(subjectId).await().indefinitely());
}
//This works, needs blocking call to repo, to return response wrapped in new Uni
public Uni<Subject> getSubject(String subjectId) {
return cache.get(subjectId, s -> subjectRepository.findByIdentifier(subjectId).await().indefinitely());
}
Can the @CacheResult annotation be used with Uni / Multi so that everything is handled correctly under the hood?
Your example with @CacheResult on a method that returns Uni should actually work. The implementation automatically "strips" the Uni type and only stores the Subject in the cache.
The problem with caching Unis is that, depending on how the Uni is created, multiple subscriptions can trigger some code multiple times. To avoid this, you have to memoize the Uni like this:
@CacheResult(cacheName = "subject-cache")
public Uni<Subject> getSubject(@CacheKey String subjectId) {
    return subjectRepository.findByIdentifier(subjectId)
            .memoize().indefinitely();
}
This will ensure that every subscription to the cached Uni will always return the same value (item or failure) without re-executing anything of the original Uni flow.
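For completeness, a hedged sketch of keeping such a cache consistent on writes, using the matching Quarkus annotation (the updateSubject method and the repository call are assumptions, not from the question):
// Removes the cached entry for this subjectId; the next getSubject call
// repopulates it. @CacheInvalidate also works on Uni-returning methods.
@CacheInvalidate(cacheName = "subject-cache")
public Uni<Subject> updateSubject(@CacheKey String subjectId, Subject changes) {
    return subjectRepository.update(subjectId, changes); // hypothetical repository method
}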

Caching with Pagination on Spring Boot

I have an endpoint which fetches data from the database and returns a huge number of records. All of these records need to be visible on the UI, loaded as the user scrolls the view.
These records are used in two ways: one is displaying them as-is, the other is displaying a subset of the records based on some filter (through the same or another endpoint).
Any suggestions on how this can be achieved, with Spring features or without?
You can achieve this using Pageable. Sample controller code:
@PostMapping
@ResponseStatus(HttpStatus.OK)
public Page<AllInventoryTransactions> fetchAllInwardInventory(
        @PageableDefault(page = 0, size = 10, sort = "created", direction = Direction.DESC) Pageable pageable)
        throws Exception {
    return allInventoryService.fetchAllInventory(pageable);
}
You can pass the Pageable received from the UI directly to the repository. Below is the service method:
public Page<AllInventoryTransactions> fetchAllInventory(Pageable pageable) {
    return allInventoryRepo.findAll(pageable);
}
On the UI side, you can handle the logic to make the next API call as the user scrolls the page.
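For the filtered subset mentioned in the question, a hedged sketch (the "status" field is an assumption): combine a Specification with the same Pageable so the database does both the filtering and the paging. The repository must extend JpaSpecificationExecutor.
public Page<AllInventoryTransactions> fetchFilteredInventory(String status, Pageable pageable) {
    // Match only rows whose "status" column equals the requested filter value.
    Specification<AllInventoryTransactions> spec =
            (root, query, cb) -> cb.equal(root.get("status"), status);
    return allInventoryRepo.findAll(spec, pageable);
}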

Spring Data Solr @Transactional Commits

I currently have a setup where data is inserted into a database as well as indexed into Solr. These two steps are wrapped in a Spring-managed transaction via the @Transactional annotation. What I've noticed is that spring-data-solr issues an update with the following parameters whenever the transaction is closed: params{commit=true&softCommit=false&waitSearcher=true}
@Transactional
public void save(Object toSave) {
    dbRepository.save(toSave);
    solrRepository.save(toSave);
}
The rate of commits into Solr is fairly high, so ideally I'd like to send data to the Solr index and have Solr auto-commit at regular intervals. I have autoCommit (and autoSoftCommit) set in my solrconfig.xml, but since spring-data-solr sends those commit parameters, it does a hard commit every time.
I'm aware that I can drop down to the SolrTemplate API and issue commits manually, but I would like to keep the solrRepository.save call within a Spring-managed transaction if possible. Is there a way to modify the parameters that are sent to Solr on commit?
After putting an IDE debug breakpoint in org.springframework.data.solr.repository.support.SimpleSolrRepository here:
private void commitIfTransactionSynchronisationIsInactive() {
    if (!TransactionSynchronizationManager.isSynchronizationActive()) {
        this.solrOperations.commit(solrCollectionName);
    }
}
I discovered that wrapping my code in @Transactional (plus the other details needed to actually make the framework begin/end the code as a transaction) doesn't achieve what we expect with Spring Data for Apache Solr. The stack trace shows the proxy and transaction interceptor classes for our code's transactional scope, but it also shows the framework starting its own nested transaction with another proxy and transaction interceptor of its own. When the framework exits the CrudRepository.save() method my code calls, the commit to Solr is done by the framework's nested transaction, before our outer transaction is exited. So the attempt to batch-process many saves with one commit at the end, instead of one commit for every save, is futile. It seems that for this area of my code I'll have to use SolrJ to save (update) my entities to Solr, and then follow my transaction's exit with a commit.
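A hedged sketch of that SolrJ fallback (the URL, collection name "foo", and the 'entities' batch variable are assumptions): add the documents without any per-save commit, then issue a single commit after the batch.
// 'entities' is the batch collected inside the outer transaction.
SolrClient solr = new Http2SolrClient.Builder("http://localhost:8983/solr").build();
solr.addBeans("foo", entities); // queue the whole batch, no per-document commit
solr.commit("foo");             // single hard commit after the outer transaction exits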
If using Spring Data Solr, I found that the SolrTemplate bean allows you to 'batch' updates when adding data to the Solr index. With the SolrTemplate bean you can use the saveBeans method, which adds a whole collection to the index and does not commit until the end of the transaction. In my case, I started out using solrClient.add() and it took up to 4 hours for my collection to be saved to the index by iterating over it, because it committed after every single save. By using solrTemplate.saveBeans(Collection<?>), it finishes in just over 1 second, as the commit covers the entire collection. Here is a code snippet:
@Resource
SolrTemplate solrTemplate;

public void doReindexing(List<Image> images) {
    if (images != null) {
        /* CMSSolrImage is a class with @SolrDocument mappings.
         * The List<Image> images is a collection pulled from my database
         * that I want indexed in Solr.
         */
        List<CMSSolrImage> sImages = new ArrayList<CMSSolrImage>();
        for (Image image : images) {
            CMSSolrImage sImage = new CMSSolrImage(image);
            sImages.add(sImage);
        }
        solrTemplate.saveBeans(sImages);
    }
}
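If the save does not run inside a synchronized transaction, nothing will trigger the commit at all. A hedged follow-up sketch, using the same no-argument SolrOperations.commit() that appears in the custom repository below:
solrTemplate.saveBeans(sImages);
solrTemplate.commit(); // flush the whole batch with a single explicit hard commit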
The way I've done something similar is to create a custom repository implementation of the save methods.
Interface for the repository:
public interface FooRepository extends SolrCrudRepository<Foo, String>, FooRepositoryCustom {
}
Interface for the custom overrides:
public interface FooRepositoryCustom {
    Foo save(Foo entity);
    Iterable<Foo> save(Iterable<Foo> entities);
}
Implementation of the custom overrides:
public class FooRepositoryImpl implements FooRepositoryCustom {

    private SolrOperations solrOperations;

    public FooRepositoryImpl(SolrOperations fooSolrOperations) {
        this.solrOperations = fooSolrOperations;
    }

    @Override
    public Foo save(Foo entity) {
        Assert.notNull(entity, "Cannot save 'null' entity.");
        registerTransactionSynchronisationIfSynchronisationActive();
        this.solrOperations.saveBean(entity, 1000);
        commitIfTransactionSynchronisationIsInactive();
        return entity;
    }

    @Override
    public Iterable<Foo> save(Iterable<Foo> entities) {
        Assert.notNull(entities, "Cannot insert 'null' as a List.");
        if (!(entities instanceof Collection<?>)) {
            throw new InvalidDataAccessApiUsageException("Entities have to be inside a collection");
        }
        registerTransactionSynchronisationIfSynchronisationActive();
        this.solrOperations.saveBeans((Collection<? extends Foo>) entities, 1000);
        commitIfTransactionSynchronisationIsInactive();
        return entities;
    }

    private void registerTransactionSynchronisationIfSynchronisationActive() {
        if (TransactionSynchronizationManager.isSynchronizationActive()) {
            registerTransactionSynchronisationAdapter();
        }
    }

    private void registerTransactionSynchronisationAdapter() {
        TransactionSynchronizationManager.registerSynchronization(SolrTransactionSynchronizationAdapterBuilder
                .forOperations(this.solrOperations).withDefaultBehaviour());
    }

    private void commitIfTransactionSynchronisationIsInactive() {
        if (!TransactionSynchronizationManager.isSynchronizationActive()) {
            this.solrOperations.commit();
        }
    }
}
You also need to provide a SolrOperations bean for the right Solr core:
@Configuration
public class FooSolrConfig {

    @Bean
    public SolrOperations getFooSolrOperations(SolrClient solrClient) {
        return new SolrTemplate(solrClient, "foo");
    }
}
Footnote: auto commit is (to my mind) conceptually incompatible with a transaction. An auto commit is a promise from Solr that it will try to start writing the data to disk within a certain time limit. Many things might stop that from actually happening, however: a timely power or hardware failure, errors between the document and the schema, etc. But the client won't know that Solr failed to keep its promise, and the transaction will see a success when it actually failed.

@CachePut is not updating the existing cache

I am working with Spring 4 and Hazelcast 3.2. I am trying to add a new record to an existing cache with the code below. Somehow the cache is not getting updated, and at the same time I don't see any errors either. Below is the code snippet for reference.
Note: @Cacheable is working fine; only @CachePut is not working. Please throw some light on this.
@SuppressWarnings("unchecked")
@Transactional(readOnly = true, propagation = Propagation.REQUIRED)
@Cacheable(value = "user-role-data")
public List<User> getUsersList() {
    // Business logic
    List<User> users = criteriaQuery.list();
    return users;
}

@SuppressWarnings("unchecked")
@Transactional(readOnly = true, propagation = Propagation.SUPPORTS)
@CachePut(value = "user-role-data")
public User addUser(User user) {
    return user;
}
I had the same issue and managed to solve it. The issue seemed to be tied to transaction management: updating the cache in the same method where you create or update the record does not work, because the transaction has not been committed yet. Here's how I solved it:
1. The service layer calls the repository to insert the user.
2. Control returns to the service layer after the insert/update DB call.
3. The service layer then calls a refresh-cache method that returns the user data and carries the @CachePut annotation.
After that it worked.
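A hedged sketch of that flow (class and method names are assumptions). Note that @CachePut only takes effect through the Spring proxy, so the refresh method lives in its own bean rather than being self-invoked:
@Service
public class UserCacheRefresher {

    // @CachePut is applied through the Spring proxy; calling this from within
    // the same bean would bypass the proxy and silently skip the cache update.
    @CachePut(value = "user-role-data", key = "#user.id")
    public User refresh(User user) {
        return user;
    }
}

@Service
public class UserService {

    @Autowired
    private UserRepository userRepository;

    @Autowired
    private UserCacheRefresher cacheRefresher;

    public User addUser(User user) {
        User saved = userRepository.save(user); // insert commits first
        return cacheRefresher.refresh(saved);   // cache updated only after the committed write
    }
}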
An alternative approach is to use @CacheEvict(allEntries = true) on the method used to save, update, or delete records. It will flush the existing cache.
Example:
@CacheEvict(value = "persons", allEntries = true) // a cache name is required; "persons" is assumed
public void saveOrUpdate(Person person) {
    personRepository.save(person);
}
A new cache entry will be formed with the updated result the next time you call a @Cacheable method.
Example:
@Cacheable("persons") // caches the result of getAllPersons(); same assumed cache name
public List<Person> getAllPersons() {
    return personRepository.findAll();
}
