I want to make the save operation efficient, so I'd like to write a bulk of objects to Mongo once in a while (i.e. when exceeding some capacity)
Would saveAll() do that for me? Should I use BulkOperations instead?
Short answer: yes, but only if all documents are new. Otherwise, it will insert or update them one by one.
Take a look at SimpleMongoRepository (MongoRepository's default implementation):
public <S extends T> List<S> saveAll(Iterable<S> entities) {

    Assert.notNull(entities, "The given Iterable of entities not be null!");

    Streamable<S> source = Streamable.of(entities);
    boolean allNew = source.stream().allMatch((it) -> {
        return this.entityInformation.isNew(it);
    });

    if (allNew) {
        List<S> result = (List)source.stream().collect(Collectors.toList());
        return new ArrayList(this.mongoOperations.insert(result, this.entityInformation.getCollectionName()));
    } else {
        return (List)source.stream().map(this::save).collect(Collectors.toList());
    }
}
Notice that when all documents are new, the repository uses the MongoOperations.insert method (MongoTemplate is the implementation). If you then look at that method's code, you'll see it performs a batch insert:
public <T> Collection<T> insert(Collection<? extends T> batchToSave, String collectionName) {
    Assert.notNull(batchToSave, "BatchToSave must not be null!");
    Assert.notNull(collectionName, "CollectionName must not be null!");
    return this.doInsertBatch(collectionName, batchToSave, this.mongoConverter);
}
UPDATE 2021:
As of spring-data-mongodb 1.9.0.RELEASE (3.2.2 at the time of writing), BulkOperations comes with a lot of extra features.
If you need more advanced operations than just saving a bunch of documents, the BulkOperations class is the way to go.
It covers bulk inserts, updates, and deletes:
insert(List<? extends Object> documents)
remove(List<Query> removes)
updateMulti(List<Pair<Query,Update>> updates)
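For example, a bulk insert through BulkOperations might look like the following sketch (the injected MongoTemplate, the Person class, and the listOfPersons variable are assumptions for illustration):

```java
// Queue up all documents and send them to MongoDB in a single round trip.
// BulkMode.UNORDERED lets the server continue past individual failures;
// use BulkMode.ORDERED if later writes depend on earlier ones.
BulkOperations bulkOps = mongoTemplate.bulkOps(BulkOperations.BulkMode.UNORDERED, Person.class);
bulkOps.insert(listOfPersons);               // nothing is sent yet
BulkWriteResult result = bulkOps.execute();  // single bulk write executes here
```

Note that nothing hits the database until execute() is called, which is what makes it easy to accumulate writes until some capacity threshold is reached.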
I would like a custom entity listener to generate an auto-incremented alias for a few of the entities.
I have implemented a util class to generate auto-incremented aliases for entities in a distributed environment, as follows:
@Component
public class AutoIncrementingIdGenerationUtil {

    private final RedisTemplate<String, Object> redisTemplate;

    public AutoIncrementingIdGenerationUtil(RedisTemplate<String, Object> redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    public String getNextSequenceNumber(String keyName) {
        RedisAtomicLong counter = new RedisAtomicLong(keyName,
                Objects.requireNonNull(redisTemplate.getConnectionFactory()));
        // incrementAndGet() returns a long, so convert it for the String return type
        return String.valueOf(counter.incrementAndGet());
    }
}
Now, I have several entities in my application, and for a FEW OF THE ENTITIES I would like to generate the alias.
So I am writing my own custom entity listener as follows:
@Component
public class CustomEntityListener<T> {

    private final AutoIncrementingIdGenerationUtil autoIncrementingIdGenerationUtil;

    public CustomEntityListener(AutoIncrementingIdGenerationUtil autoIncrementingIdGenerationUtil) {
        this.autoIncrementingIdGenerationUtil = autoIncrementingIdGenerationUtil;
    }

    @PrePersist
    void onPrePersist(Object entity) { // <-- HERE I WOULD LIKE TO CAST TO THE CONCRETE ENTITY TYPE
        if (StringUtils.isBlank(entity.getAlias())) {
            entity.setAlias(autoIncrementingIdGenerationUtil.getNextSequenceNumber(entity.getEntityType()));
        }
    }
}
As mentioned above, not all of the entities have an alias attribute. I don't have a good idea of how to do this. One bad option is to check the concrete entity type, but that means too many if-else branches and casts, which will not look good. Is there a better way to do it?
Another related question in the same context: if an entity already has a @PrePersist method of its own, will the function defined in the entity listener override it, or will both of them run?
Entity listeners cannot be parameterized. Just make the relevant entities implement an interface, e.g. Aliased, with getAlias() and setAlias() methods. You'll then have a single type to cast to.
Also, why use Redis? Doesn't your DB have sequences?
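A minimal, self-contained sketch of that idea (the Aliased interface, the Article entity, and the in-memory counter standing in for the Redis-backed generator are all made up for illustration):

```java
// Any entity that needs an alias implements this interface, so the
// listener only ever needs one instanceof check and one cast.
interface Aliased {
    String getAlias();
    void setAlias(String alias);
    String getEntityType();
}

// Hypothetical entity opting in to alias generation.
class Article implements Aliased {
    private String alias;
    public String getAlias() { return alias; }
    public void setAlias(String alias) { this.alias = alias; }
    public String getEntityType() { return "article"; }
}

class AliasListener {
    // Stand-in for the Redis-backed sequence generator.
    private long counter = 0;

    // Would carry @PrePersist in the real listener.
    void onPrePersist(Object entity) {
        if (entity instanceof Aliased) {          // single check, single cast
            Aliased aliased = (Aliased) entity;
            if (aliased.getAlias() == null || aliased.getAlias().isBlank()) {
                aliased.setAlias(aliased.getEntityType() + "-" + (++counter));
            }
        }
    }
}
```

Entities that don't implement the interface simply pass through the listener untouched, so there is no per-type branching at all.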
The non-reactive counterpart of Elasticsearch Spring Data's org.springframework.data.elasticsearch.core.ElasticsearchTemplate provides a method public boolean deleteIndex(String indexName), which I can use to delete indices. However, I cannot find any hints of similar functionality in the ReactiveElasticsearchTemplate.
The DefaultReactiveElasticsearchClient which is created by
ReactiveRestClients.create(ClientConfiguration clientConfiguration)
implements the interface org.springframework.data.elasticsearch.client.reactive.ReactiveElasticsearchClient.Indices which has two methods to delete an index:
default Mono<Void> deleteIndex(DeleteIndexRequest deleteIndexRequest) {
    return deleteIndex(HttpHeaders.EMPTY, deleteIndexRequest);
}

default Mono<Void> deleteIndex(Consumer<DeleteIndexRequest> consumer) {
    DeleteIndexRequest request = new DeleteIndexRequest();
    consumer.accept(request);
    return deleteIndex(request);
}
So there is nothing that takes an index name directly, but DeleteIndexRequest has a constructor that just takes index name(s):
((DefaultReactiveElasticsearchClient) client).deleteIndex(new DeleteIndexRequest(indexname));
So currently it's ugly because of the cast, but it can be done. We have a ticket to add this functionality to the Operations interface and its implementations.
I'm using Javers to track record history changes (on ModerationEntity) and need to retrieve them with some criteria; in my specific case, I want to filter the ones that have the ListEntity (id = 51).
ModerationEntity
{
    "requestedPublicationStatus": "PUBLISHED_NATIONAL",
    "currentPublicationStatus": "UNPUBLISHED",
    "id": 1000004,
    "list": {
        "entity": "ListEntity",
        "cdoId": 51
    },
    "status": "IN_MODERATION"
}
After looking around in the Javers JQL examples, I didn't find any solution for that except commit-property-filter. However, I'm using Javers with Spring Boot, and the Javers commit is performed through JaversSpringDataJpaAuditableRepositoryAspect. In order to have the commit properties stored in the DB, we need to define a CommitPropertiesProvider (the default is EmptyPropertiesProvider), and unfortunately it seems that currently we can only define a static commit-properties map (according to the API):
public interface CommitPropertiesProvider {
    Map<String, String> provide();
}
My idea is that if the object of concern could be passed into the CommitPropertiesProvider#provide() API, then we could construct the commit properties depending on the context:
public interface CommitPropertiesProvider {
    Map<String, String> provide(Object domainObject);
}
By that I can easily declare my own CommitPropertiesProvider in order to define the mapping value return for each commit.
public class CustomCommitPropertiesProvider implements CommitPropertiesProvider {

    public Map<String, String> provide(Object domainObject) {
        if (domainObject instanceof ModerationEntity) {
            // return map with key = "listId" & value = ModerationEntity#listId
        }
        // return emptyMap
    }
}
Currently I cannot find any solution except turning off springDataAuditableRepositoryAspect:
javers.springDataAuditableRepositoryAspectEnabled=false
And then overriding it with my own aspect (extending AbstractSpringAuditableRepositoryAspect) in order to inject the logic I want.
I have a method that fetches all the data, and I am caching the result of that method, but I am not able to evict the result.
@Component("cacheKeyGenerator")
public class CacheKeyGenerator implements KeyGenerator {

    @Override
    public Object generate(Object target, Method method, Object... params) {
        final List<Object> key = new ArrayList<>();
        key.add(method.getDeclaringClass().getName());
        return key;
    }
}
Cached methods:
@Override
@Cacheable(value = "appCache", keyGenerator = "cacheKeyGenerator")
public List<Contact> showAllContacts() {
    return contactRepository.findAll();
}

@Override
@CachePut(value = "appCache", key = "#result.id")
public Contact addData(Contact contact) {
    return contactRepository.save(contact);
}
Now, whenever addData is called, I want the entry in the "appCache" cache under the key produced by "cacheKeyGenerator" to be evicted, so that the data returned by showAllContacts() stays accurate. Can anyone please help!
The Entire code can be found at - https://github.com/iftekharkhan09/SpringCaching
Assuming you have a known, constant cache key for showAllContacts, the solution is to simply add @CacheEvict on addData, passing in the cache name and key value:
@Override
@Caching(
    put = { @CachePut(value = "appCache", key = "#result.id") },
    // note the inner quotes: the key attribute is a SpEL expression,
    // so a literal string key must be quoted inside it
    evict = { @CacheEvict(cacheNames = "appCache", key = "'someConstant'") }
)
public Contact addData(Contact contact) {
    return contactRepository.save(contact);
}
However, because you use a key generator, it is a bit more involved. Given what your key generator does, you could instead pick a fixed value for that cache key, making sure it can't collide with any of the values from #result.id, and use that value instead of the key returned by the generator.
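A minimal sketch of that variant, where the 'allContacts' key is an arbitrary choice assumed not to collide with any contact id:

```java
// Both methods agree on one constant SpEL string key, so the key
// generator is no longer needed for this cache entry.
@Cacheable(value = "appCache", key = "'allContacts'")
public List<Contact> showAllContacts() {
    return contactRepository.findAll();
}

@Caching(
    put = { @CachePut(value = "appCache", key = "#result.id") },
    evict = { @CacheEvict(cacheNames = "appCache", key = "'allContacts'") }
)
public Contact addData(Contact contact) {
    return contactRepository.save(contact);
}
```

With this arrangement, every successful addData call caches the new contact under its id and invalidates the cached full list, so the next showAllContacts call repopulates it from the repository.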
I am caching some objects using the Spring cache abstraction; the underlying cache is EhCache. I am trying to evict cache entries based on a wildcard search for the keys; the reason is the way I stored them, and I only know the partial key. Hence I wanted to do something like below. I searched this forum for a relevant answer but did not find any.
@CacheEvict(beforeInvocation = true, key = "userId+%")
public User getUser(String userId) {
    // some implementation
}
Now, if I try this, I get an error for the SpEL. I also tried to create a custom key generator for this; the eviction works if the key generator returns one key, but I have a couple of keys based on my search.
@CacheEvict(beforeInvocation = true, keyGenerator = "cacheKeyEvictor")
public User getUser(String userId) {
    // some implementation
}

// Custom key generator for eviction
public class cacheKeyEvictor implements KeyGenerator {

    @Override
    public Object generate(Object arg0, Method arg1, Object... arg2) {
        // loop the cache and do a like search and return the keys
        return object; // works if I send one key; won't work for a list of keys
    }
}
Any help on this is appreciated.