Is there a possibility to provide an object-dependent map for CommitPropertiesProvider? - javers

I'm using Javers to track record history changes (on ModerationEntity) and I need to retrieve that history with some criteria; in my specific case, to filter the entries that reference the ListEntity with id = 51.
ModerationEntity
{
    "requestedPublicationStatus": "PUBLISHED_NATIONAL",
    "currentPublicationStatus": "UNPUBLISHED",
    "id": 1000004,
    "list": {
        "entity": "ListEntity",
        "cdoId": 51
    },
    "status": "IN_MODERATION"
}
After looking around the Javers JQL examples I didn't find any way to do that except the commit-property filter. However, I'm using Javers with Spring Boot, and the Javers commit is performed through JaversSpringDataJpaAuditableRepositoryAspect. In order to have the commit properties stored in the DB we need to define a CommitPropertiesProvider (the default is EmptyPropertiesProvider), and unfortunately it seems we can currently only define a static commit-properties map (according to the API):
public interface CommitPropertiesProvider {
    Map<String, String> provide();
}
My idea is that, if the committed object were passed into the CommitPropertiesProvider#provide() API, we could construct the commit properties depending on the context:
public interface CommitPropertiesProvider {
    Map<String, String> provide(Object domainObject);
}
With that I could easily declare my own CommitPropertiesProvider and define the map returned for each commit:
public class CustomCommitPropertiesProvider implements CommitPropertiesProvider {

    public Map<String, String> provide(Object domainObject) {
        if (domainObject instanceof ModerationEntity) {
            // key = "listId", value = ModerationEntity#listId (accessor names are illustrative)
            return Collections.singletonMap("listId",
                    String.valueOf(((ModerationEntity) domainObject).getList().getId()));
        }
        return Collections.emptyMap();
    }
}
Currently I cannot find any solution except turning off the springDataAuditableRepositoryAspect:
javers.springDataAuditableRepositoryAspectEnabled=false
and then overriding it with my own aspect (extending AbstractSpringAuditableRepositoryAspect) in order to inject the logic I want.
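For context, the end goal would then be reachable with JQL's commit-property filter. A minimal sketch, assuming the provider above stored the property under the key "listId" (withCommitProperty is part of the JQL QueryBuilder):
JqlQuery query = QueryBuilder.byClass(ModerationEntity.class)
        .withCommitProperty("listId", "51")
        .build();
List<CdoSnapshot> snapshots = javers.findSnapshots(query);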

Related

Capturing entity information in custom entity listener

I would like a custom entity listener to generate an auto-incremented alias for a few of the entities.
I have implemented a util class to generate auto-incremented aliases for the entities in a distributed environment, as follows:
@Component
public class AutoIncrementingIdGenerationUtil {

    private final RedisTemplate<String, Object> redisTemplate;

    public AutoIncrementingIdGenerationUtil(RedisTemplate<String, Object> redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    public String getNextSequenceNumber(String keyName) {
        RedisAtomicLong counter = new RedisAtomicLong(keyName,
                Objects.requireNonNull(redisTemplate.getConnectionFactory()));
        // incrementAndGet() returns a long, so convert it to a String
        return String.valueOf(counter.incrementAndGet());
    }
}
Now, I have several entities in my application, and for a FEW of those entities I would like to generate the alias.
So I am writing my own custom entity listener as follows:
@Component
public class CustomEntityListener<T> {

    private final AutoIncrementingIdGenerationUtil autoIncrementingIdGenerationUtil;

    public CustomEntityListener(AutoIncrementingIdGenerationUtil autoIncrementingIdGenerationUtil) {
        this.autoIncrementingIdGenerationUtil = autoIncrementingIdGenerationUtil;
    }

    @PrePersist
    void onPrePersist(Object entity) { // <-- here I would like to cast to the concrete entity type
        if (StringUtils.isBlank(entity.getAlias())) {
            entity.setAlias(autoIncrementingIdGenerationUtil.getNextSequenceNumber(entity.getEntityType()));
        }
    }
}
As mentioned above, not all of the entities have an alias attribute. I don't have a good idea of how to do this. One bad idea is to use getEntityType(), but that would require many if-else branches and casts, which will not look good. Any better idea how to do it?
Another related question in the same context: if an entity already has a @PrePersist method of its own, will the function defined in the entity listener override it, or will both of them run?
Entity listeners cannot be parameterized. Just make the relevant entities implement an interface, e.g. Aliased, with a setAlias() method. You'll then have a single type to cast to.
Also, why use Redis? Doesn't your DB have sequences?
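A minimal sketch of that suggestion; the Aliased interface, its methods, and the getEntityType() accessor are illustrative names, not an existing API:
public interface Aliased {
    String getAlias();
    void setAlias(String alias);
    String getEntityType();
}

@Component
public class CustomEntityListener {

    private final AutoIncrementingIdGenerationUtil autoIncrementingIdGenerationUtil;

    public CustomEntityListener(AutoIncrementingIdGenerationUtil autoIncrementingIdGenerationUtil) {
        this.autoIncrementingIdGenerationUtil = autoIncrementingIdGenerationUtil;
    }

    @PrePersist
    void onPrePersist(Object entity) {
        if (entity instanceof Aliased) {            // single type to cast to
            Aliased aliased = (Aliased) entity;
            if (StringUtils.isBlank(aliased.getAlias())) {
                aliased.setAlias(autoIncrementingIdGenerationUtil.getNextSequenceNumber(aliased.getEntityType()));
            }
        }
    }
}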

SpringData Mongo projection ignore and overide the values on save

Let me explain my problem with Spring Data Mongo. I have the following interface declared, with a custom query and a projection that ignores the index field. This example is only for illustration; in real life I ignore a bunch of fields.
public interface MyDomainRepo extends MongoRepository<MyDomain, String> {

    @Query(fields = "{ index: 0 }")
    MyDomain findByCode(String code);
}
In my MongoDB instance, MyDomain has the following data: MyDomain(code="mycode", info=null, index=19). When I use findByCode from MyDomainRepo I get MyDomain(code="mycode", info=null, index=null). So far so good, because this is the expected behaviour. The problem happens when I decide to save the object returned by findByCode.
For instance, in the following example, I took the findByCode return value, set the info property to myinfo, and got the object below.
MyDomain(code="mycode", info="myinfo", index=null)
So I used save from MyDomainRepo. The index was ignored as expected by the projection, but when I save the object back, with or without an update, Spring Data Mongo overrides the index property with null, and consequently my record in the MongoDB instance is overridden too. The following is my MongoDB JSON:
{
    "_id": "5f061f9011b7cb497d4d2708",
    "info": "myinfo",
    "_class": "io.springmongo.models.MyDomain"
}
Is there a way to tell Spring Data Mongo to simply ignore null fields on save?
Save is a replace operation, and you won't be able to signal it to patch only some fields; it will replace the document with whatever you send.
Your option is to use the extension mechanism provided by Spring Data repositories to define custom repository methods:
public interface MyDomainRepositoryCustom {
    void updateNonNull(MyDomain myDomain);
}
public class MyDomainRepositoryImpl implements MyDomainRepositoryCustom {

    private final MongoTemplate mongoTemplate;

    @Autowired
    public MyDomainRepositoryImpl(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @Override
    public void updateNonNull(MyDomain myDomain) {
        // Populate the fields you want to patch
        Update update = Update.update("key1", "value1")
                .set("key2", "value2");
        // You can also use Update.fromDocument(Document object, String... exclude) to
        // create the update, but then you need to use MongoConverter
        // to convert your domain object to a Document.
        // Create `queryToMatchId` to match the id
        mongoTemplate.updateFirst(queryToMatchId, update, MyDomain.class);
    }
}
public interface MyDomainRepository extends MongoRepository<..., ...>,
        MyDomainRepositoryCustom {
}
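If listing the fields one by one is impractical, a hedged sketch of deriving the $set operations from the mapped document instead; it relies on the mapping converter skipping null properties, and myDomain.getId() is a hypothetical accessor for the document id:
@Override
public void updateNonNull(MyDomain myDomain) {
    // Map the domain object to a Document; the converter omits null properties,
    // so only non-null fields end up in `mapped`
    org.bson.Document mapped = new org.bson.Document();
    mongoTemplate.getConverter().write(myDomain, mapped);
    mapped.remove("_id"); // never $set the identifier

    // Turn every remaining field into a $set operation
    Update update = new Update();
    mapped.forEach(update::set);

    // Assumes MyDomain exposes its Mongo id (hypothetical accessor)
    Query query = Query.query(Criteria.where("_id").is(myDomain.getId()));
    mongoTemplate.updateFirst(query, update, MyDomain.class);
}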

Is saveAll() of MongoRepository inserting data in one bulk?

I want to make the save operation efficient, so I'd like to write a bulk of objects to Mongo once in a while (i.e. when exceeding some capacity)
Would saveAll() do that for me? Should I use BulkOperations instead?
Short answer: yes, but only if all the documents are new. If not, it will insert or update them one by one.
Take a look at SimpleMongoRepository (MongoRepository's default implementation):
public <S extends T> List<S> saveAll(Iterable<S> entities) {

    Assert.notNull(entities, "The given Iterable of entities not be null!");

    Streamable<S> source = Streamable.of(entities);
    boolean allNew = source.stream().allMatch((it) -> {
        return this.entityInformation.isNew(it);
    });

    if (allNew) {
        List<S> result = (List) source.stream().collect(Collectors.toList());
        return new ArrayList(this.mongoOperations.insert(result, this.entityInformation.getCollectionName()));
    } else {
        return (List) source.stream().map(this::save).collect(Collectors.toList());
    }
}
Notice that when all documents are new, the repository uses the MongoOperations.insert method (MongoTemplate is the implementation). If you then look at that method's code, you'll see it does a batch insert:
public <T> Collection<T> insert(Collection<? extends T> batchToSave, String collectionName) {

    Assert.notNull(batchToSave, "BatchToSave must not be null!");
    Assert.notNull(collectionName, "CollectionName must not be null!");

    return this.doInsertBatch(collectionName, batchToSave, this.mongoConverter);
}
UPDATE 2021:
As of spring-data-mongodb 1.9.0.RELEASE (currently 3.2.2), BulkOperations comes with a lot of extra features.
If you need more advanced operations than just saving a bunch of documents, the BulkOperations class is the way to go; a short usage sketch follows the list below.
It covers bulk inserts, updates, and deletes:
insert(List<? extends Object> documents)
remove(List removes)
updateMulti(List<Pair<Query,Update>> updates)
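For the capacity-based flushing described in the question, a minimal sketch using BulkOperations through MongoTemplate (MyDomain, mongoTemplate, and bufferedDocuments are illustrative names):
// Flush the buffered documents to Mongo in a single unordered bulk insert
BulkOperations bulkOps = mongoTemplate.bulkOps(BulkOperations.BulkMode.UNORDERED, MyDomain.class);
bulkOps.insert(bufferedDocuments); // List<MyDomain> accumulated until the capacity threshold is reached
BulkWriteResult result = bulkOps.execute();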

Spring Data MongoDB: Dynamic field name converter

How do I set the MongoDB document field name dynamically (without using @Field)?
@Document
public class Account {
    private String username;
}
For example, field names should be capitalized. Result:
{"USERNAME": "hello"}
And I want this dynamic converter to work with any document, so a solution without using generics.
This is a bit of a strange requirement. You can make use of Mongo listener lifecycle events (see the docs).
@Component
public class MongoListener extends AbstractMongoEventListener<Account> {

    @Override
    public void onBeforeSave(BeforeSaveEvent<Account> event) {
        DBObject dbObject = event.getDBObject();
        String username = (String) dbObject.get("username"); // get the value
        dbObject.put("USERNAME", username);
        dbObject.removeField("username");
        // You need to go through each and every field recursively in
        // dbObject, remove the field, and then add the field you
        // want (with modification).
    }
}
This is a bit clunky, but I believe there is no clean way to do this.
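To make the renaming work for any document, a hedged sketch of the recursive traversal the answer alludes to; it assumes a Spring Data MongoDB version where BeforeSaveEvent exposes the mapped org.bson.Document via getDocument(), and uses Object as the listener's type parameter so it fires for every domain class:
@Component
public class UppercaseFieldNameListener extends AbstractMongoEventListener<Object> {

    @Override
    public void onBeforeSave(BeforeSaveEvent<Object> event) {
        uppercaseKeys(event.getDocument());
    }

    @SuppressWarnings("unchecked")
    private void uppercaseKeys(Document document) { // org.bson.Document, not the @Document annotation
        if (document == null) {
            return;
        }
        // iterate over a copy of the key set, since keys are renamed while walking
        for (String key : new ArrayList<>(document.keySet())) {
            Object value = document.remove(key);
            if (value instanceof Document) {
                uppercaseKeys((Document) value); // nested document
            } else if (value instanceof List) {
                for (Object element : (List<Object>) value) { // arrays of nested documents
                    if (element instanceof Document) {
                        uppercaseKeys((Document) element);
                    }
                }
            }
            // leave Mongo's internal fields (_id, _class) untouched
            document.put(key.startsWith("_") ? key : key.toUpperCase(), value);
        }
    }
}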

Spring Data Solr @Transactional Commits

I currently have a setup where data is inserted into a database as well as indexed into Solr. These two steps are wrapped in a Spring-managed transaction via the @Transactional annotation. What I've noticed is that spring-data-solr issues an update with the following parameters whenever the transaction is closed: params{commit=true&softCommit=false&waitSearcher=true}
@Transactional
public void save(Object toSave) {
    dbRepository.save(toSave);
    solrRepository.save(toSave);
}
The rate of commits into Solr is fairly high, so ideally I'd like to send data to the Solr index and have Solr auto-commit at regular intervals. I have autoCommit (and autoSoftCommit) set in my solrconfig.xml, but since spring-data-solr sends those commit parameters, it does a hard commit every time.
I'm aware that I can drop down to the SolrTemplate API and issue commits manually, but I would like to keep the solrRepository.save call within a Spring-managed transaction if possible. Is there a way to modify the parameters that are sent to Solr on commit?
After putting an IDE debug breakpoint in org.springframework.data.solr.repository.support.SimpleSolrRepository here:
private void commitIfTransactionSynchronisationIsInactive() {
    if (!TransactionSynchronizationManager.isSynchronizationActive()) {
        this.solrOperations.commit(solrCollectionName);
    }
}
I discovered that wrapping my code in @Transactional (and the other details needed to actually have the framework begin and end the code as a transaction) doesn't achieve what we expect with Spring Data for Apache Solr. The stack trace shows the Proxy and Transaction Interceptor classes for our code's transactional scope, but then it also shows the framework starting its own nested transaction with another Proxy and Transaction Interceptor of its own. When the framework exits the CrudRepository.save() method my code calls, the commit to Solr is performed by the framework's nested transaction, before our outer transaction exits. So the attempt to batch-process many saves with one commit at the end, instead of one commit for every save, is futile. It seems that, for this area of my code, I'll have to use SolrJ to save (update) my entities to Solr and then follow my transaction's exit with a commit.
If you're using Spring Data Solr, the SolrTemplate bean lets you batch updates when adding data to the Solr index. With the SolrTemplate bean you can use the saveBeans method, which adds a whole collection to the index and does not commit until the end of the transaction. In my case, I started out using solrClient.add() and iterating over the collection, which took up to 4 hours to get it saved to the index, because it commits after every single save. With solrTemplate.saveBeans(Collection<?>) it finishes in just over 1 second, as the commit covers the entire collection. Here is a code snippet:
@Resource
SolrTemplate solrTemplate;

public void doReindexing(List<Image> images) {
    if (images != null) {
        /* CMSSolrImage is a class with @SolrDocument mappings.
         * The List<Image> images is a collection pulled from my database
         * that I want indexed in Solr.
         */
        List<CMSSolrImage> sImages = new ArrayList<CMSSolrImage>();
        for (Image image : images) {
            CMSSolrImage sImage = new CMSSolrImage(image);
            sImages.add(sImage);
        }
        solrTemplate.saveBeans(sImages);
    }
}
The way I've done something similar is to create a custom repository implementation of the save methods.
Interface for the repository:
public interface FooRepository extends SolrCrudRepository<Foo, String>, FooRepositoryCustom {
}
Interface for the custom overrides:
public interface FooRepositoryCustom {
    public Foo save(Foo entity);
    public Iterable<Foo> save(Iterable<Foo> entities);
}
Implementation of the custom overrides:
public class FooRepositoryImpl implements FooRepositoryCustom {

    private SolrOperations solrOperations;

    public FooRepositoryImpl(SolrOperations fooSolrOperations) {
        this.solrOperations = fooSolrOperations;
    }

    @Override
    public Foo save(Foo entity) {
        Assert.notNull(entity, "Cannot save 'null' entity.");
        registerTransactionSynchronisationIfSynchronisationActive();
        this.solrOperations.saveBean(entity, 1000);
        commitIfTransactionSynchronisationIsInactive();
        return entity;
    }

    @Override
    public Iterable<Foo> save(Iterable<Foo> entities) {
        Assert.notNull(entities, "Cannot insert 'null' as a List.");
        if (!(entities instanceof Collection<?>)) {
            throw new InvalidDataAccessApiUsageException("Entities have to be inside a collection");
        }
        registerTransactionSynchronisationIfSynchronisationActive();
        this.solrOperations.saveBeans((Collection<? extends Foo>) entities, 1000);
        commitIfTransactionSynchronisationIsInactive();
        return entities;
    }

    private void registerTransactionSynchronisationIfSynchronisationActive() {
        if (TransactionSynchronizationManager.isSynchronizationActive()) {
            registerTransactionSynchronisationAdapter();
        }
    }

    private void registerTransactionSynchronisationAdapter() {
        TransactionSynchronizationManager.registerSynchronization(SolrTransactionSynchronizationAdapterBuilder
                .forOperations(this.solrOperations).withDefaultBehaviour());
    }

    private void commitIfTransactionSynchronisationIsInactive() {
        if (!TransactionSynchronizationManager.isSynchronizationActive()) {
            this.solrOperations.commit();
        }
    }
}
and you also need to provide a SolrOperations bean for the right solr core:
@Configuration
public class FooSolrConfig {

    @Bean
    public SolrOperations getFooSolrOperations(SolrClient solrClient) {
        return new SolrTemplate(solrClient, "foo");
    }
}
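For completeness, a hedged usage sketch of the custom repository; with the overridden save methods above, everything saved inside the transaction is committed to Solr once, when the registered synchronization fires at transaction completion (the service class and method names are illustrative):
@Service
public class FooIndexingService {

    private final FooRepository fooRepository;

    public FooIndexingService(FooRepository fooRepository) {
        this.fooRepository = fooRepository;
    }

    @Transactional
    public void reindex(Collection<Foo> foos) {
        // each save registers the Solr transaction synchronization instead of committing immediately
        fooRepository.save(foos);
        // the single Solr commit happens when this transaction completes
    }
}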
Footnote: auto commit is (to my mind) conceptually incompatible with a transaction. An auto commit is a promise from Solr that it will try to start writing the data to disk within a certain time limit. Many things might stop that from actually happening, however: an untimely power or hardware failure, errors between the document and the schema, etc. But the client won't know that Solr failed to keep its promise, and the transaction will see a success when it actually failed.
