How to use resilience4j on a calling method? - spring-boot

I tried to use Spring Retry for circuit breaking and retries as shown below, and it works as expected. The issue is that I am unable to configure "maxAttempts/openTimeout/resetTimeout" from environment variables (the error says they must be constants). My question is: how do I use resilience4j to achieve the requirement below?
Also, please suggest whether there is a way to pass environment variables to "maxAttempts/openTimeout/resetTimeout".
@CircuitBreaker(value = {
        MongoServerException.class,
        MongoSocketException.class,
        MongoTimeoutException.class,
        MongoSocketOpenException.class},
        maxAttempts = 2,
        openTimeout = 20000L,
        resetTimeout = 30000L)
public void insertDocument(ConsumerRecord<Long, GenericRecord> consumerRecord) {
    retryTemplate.execute(args0 -> {
        LOGGER.info(String.format("Inserting record with key -----> %s", consumerRecord.key().toString()));
        BasicDBObject dbObject = BasicDBObject.parse(consumerRecord.value().toString());
        dbObject.put("_id", consumerRecord.key());
        mongoCollection.replaceOne(<<BasicDBObject with id>>, getReplaceOptions());
        return null;
    });
}

@Recover
public void recover(RuntimeException t) {
    LOGGER.info(" Recovering from Circuit Breaker ");
}
The dependencies used are:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-aop</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.retry</groupId>
    <artifactId>spring-retry</artifactId>
</dependency>

You are not using resilience4j but spring-retry. You should adapt the title of your question.
With resilience4j the CircuitBreaker and Retry are configured programmatically, so the values are not annotation constants and can come from anywhere, including environment variables:

CircuitBreakerConfig circuitBreakerConfig = CircuitBreakerConfig.custom()
        .waitDurationInOpenState(Duration.ofMillis(20000))
        .build();
CircuitBreakerRegistry circuitBreakerRegistry = CircuitBreakerRegistry.of(circuitBreakerConfig);
CircuitBreaker circuitBreaker = circuitBreakerRegistry.circuitBreaker("mongoDB");

RetryConfig retryConfig = RetryConfig.custom()
        .maxAttempts(3)
        .retryExceptions(MongoServerException.class,
                MongoSocketException.class,
                MongoTimeoutException.class,
                MongoSocketOpenException.class)
        .ignoreExceptions(CircuitBreakerOpenException.class)
        .build();
Retry retry = Retry.of("helloBackend", retryConfig);

Runnable decoratedRunnable = Decorators.ofRunnable(() -> insertDocument(consumerRecord))
        .withCircuitBreaker(circuitBreaker)
        .withRetry(retry)
        .decorate();

Try.runRunnable(decoratedRunnable)
        .recover(exception -> ...)
        .get();
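Because the configuration is plain Java rather than annotation attributes, the values can be injected from properties or environment variables. A minimal sketch, assuming hypothetical property names such as resilience.wait-open-ms and resilience.max-attempts, which you could bind to environment variables like RESILIENCE_WAIT_OPEN_MS through Spring's relaxed binding:

import java.time.Duration;

import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerConfig;
import io.github.resilience4j.circuitbreaker.CircuitBreakerRegistry;
import io.github.resilience4j.retry.Retry;
import io.github.resilience4j.retry.RetryConfig;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ResilienceConfig {

    // The property names are hypothetical; they resolve from
    // application.properties or environment variables via relaxed binding.
    @Bean
    public CircuitBreaker mongoCircuitBreaker(
            @Value("${resilience.wait-open-ms:20000}") long waitOpenMs) {
        CircuitBreakerConfig config = CircuitBreakerConfig.custom()
                .waitDurationInOpenState(Duration.ofMillis(waitOpenMs))
                .build();
        return CircuitBreakerRegistry.of(config).circuitBreaker("mongoDB");
    }

    @Bean
    public Retry mongoRetry(
            @Value("${resilience.max-attempts:3}") int maxAttempts) {
        RetryConfig config = RetryConfig.custom()
                .maxAttempts(maxAttempts)
                .build();
        return Retry.of("mongoDB", config);
    }
}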

Related

Micrometer with Prometheus Pushgateway - Add TLS Support

I have a Spring Boot application with Prometheus Pushgateway using Micrometer, mainly based on this tutorial:
https://luramarchanjo.tech/2020/01/05/spring-boot-2.2-and-prometheus-pushgateway-with-micrometer.html
pom.xml has the following related dependencies:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-core</artifactId>
</dependency>
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>
<dependency>
    <groupId>io.prometheus</groupId>
    <artifactId>simpleclient_pushgateway</artifactId>
    <version>0.16.0</version>
</dependency>
And the application.properties file has:
management.metrics.export.prometheus.pushgateway.enabled=true
management.metrics.export.prometheus.pushgateway.shutdown-operation=PUSH
management.metrics.export.prometheus.pushgateway.baseUrl=localhost:9091
It works fine locally in the dev environment when connecting to a Pushgateway without TLS. In our CI environment, the Prometheus Pushgateway has TLS enabled. How do I configure TLS support and the certificates in this Spring Boot application?
Due to the usage of TLS, you will need to customize a few Spring classes:
HttpConnectionFactory -> PushGateway -> PrometheusPushGatewayManager
An HttpConnectionFactory is used by Prometheus' PushGateway to create a secure connection; that PushGateway is then used to create the PrometheusPushGatewayManager.
You will need to implement Prometheus' HttpConnectionFactory interface. I'm assuming you are able to create a valid javax.net.ssl.SSLContext object (if not, more details at the end¹).
HttpConnectionFactory example:
public class MyTlsConnectionFactory implements io.prometheus.client.exporter.HttpConnectionFactory {

    // assumes you can obtain a javax.net.ssl.SSLContext (or SSLSocketFactory)
    private final SSLContext sslContext;

    public MyTlsConnectionFactory(SSLContext sslContext) {
        this.sslContext = sslContext;
    }

    @Override
    public HttpURLConnection create(String hostUrl) throws IOException {
        URL url = new URL(hostUrl);
        HttpsURLConnection connection = (HttpsURLConnection) url.openConnection();
        connection.setSSLSocketFactory(sslContext.getSocketFactory());
        return connection;
    }
}
PushGateway and PrometheusPushGatewayManager:
@Bean
public HttpConnectionFactory tlsConnectionFactory(SSLContext sslContext) {
    return new MyTlsConnectionFactory(sslContext);
}

@Bean
public PushGateway pushGateway(HttpConnectionFactory connectionFactory) throws MalformedURLException {
    String url = "https://localhost:9091"; // replace with your props
    PushGateway pushGateway = new PushGateway(new URL(url));
    pushGateway.setConnectionFactory(connectionFactory);
    return pushGateway;
}

@Bean
public PrometheusPushGatewayManager tlsPrometheusPushGatewayManager(PushGateway pushGateway,
        CollectorRegistry registry) {
    // fill in the other params accordingly (the important one is pushGateway!)
    return new PrometheusPushGatewayManager(
            pushGateway,
            registry,
            Duration.of(15, ChronoUnit.SECONDS),
            "some-job-id",
            null,
            PrometheusPushGatewayManager.ShutdownOperation.PUSH
    );
}
¹ If you face difficulty building the SSLContext in Java code, I recommend studying the library https://github.com/Hakky54/sslcontext-kickstart and https://github.com/Hakky54/mutual-tls-ssl (which shows how to apply it with different client libraries).
It is then possible to generate the SSLContext in Java in a clean way, e.g.:
String keyStorePath = "client.jks";
char[] keyStorePassword = "password".toCharArray();

SSLFactory sslFactory = SSLFactory.builder()
        .withIdentityMaterial(keyStorePath, keyStorePassword)
        .build();

javax.net.ssl.SSLContext sslContext = sslFactory.getSslContext();
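To wire this into the beans above, the SSLContext can itself be exposed as a bean, which the tlsConnectionFactory bean then receives by injection. A minimal sketch, assuming the sslcontext-kickstart dependency and a client.jks keystore on the classpath (path and password are placeholders):

@Bean
public javax.net.ssl.SSLContext pushGatewaySslContext() {
    // builds an SSLContext from the client keystore via sslcontext-kickstart
    SSLFactory sslFactory = SSLFactory.builder()
            .withIdentityMaterial("client.jks", "password".toCharArray())
            .build();
    return sslFactory.getSslContext();
}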
Finally, if you need to set up a local Prometheus + TLS environment for testing purposes, I recommend following this post:
https://smallstep.com/hello-mtls/doc/client/prometheus

Broadcast to multiple reactive WebSocketSessions created using Spring Boot WebFlux is not working

Following is the scenario:
I create a Reactor Kafka receiver.
Data consumed from the Kafka receiver is published to a WebSocketHandler.
The WebSocketHandler is mapped to a URL using SimpleUrlHandlerMapping.
The URL pattern is api/v1/ws/{ID}, and I expect multiple WebSocketSessions to be created based on the different IDs used in the URI, all managed by a single WebSocketHandler, which is actually happening.
But when data from the Kafka receiver is published, only the first created WebSocketSession receives it; all other WebSocketSessions do not receive the data.
I am using spring-boot 2.6.3 with starter-tomcat.
How do I publish data to all the created WebSocketSessions?
My Code:
Config for the WebSocket handler:
@Configuration
@Slf4j
public class OneSecPollingWebSocketConfig
{
    private OneSecPollingWebSocketHandler oneSecPollingHandler;

    @Autowired
    public OneSecPollingWebSocketConfig(OneSecPollingWebSocketHandler oneSecPollingHandler)
    {
        this.oneSecPollingHandler = oneSecPollingHandler;
    }

    @Bean
    public HandlerMapping webSocketHandlerMapping()
    {
        log.info("onesecpolling websocket configured");
        Map<String, WebSocketHandler> handlerMap = new HashMap<>();
        handlerMap.put(WEB_SOCKET_ENDPOINT, oneSecPollingHandler);
        SimpleUrlHandlerMapping mapping = new SimpleUrlHandlerMapping();
        mapping.setUrlMap(handlerMap);
        mapping.setOrder(1);
        return mapping;
    }
}
Code for the WebSocketHandler:
@Component
@Slf4j
public class OneSecPollingWebSocketHandler implements WebSocketHandler
{
    private ObjectMapper objectMapper;
    private OneSecPollingKafkaConsumerService oneSecPollingKafkaConsumerService;
    private Map<String, WebSocketSession> wsSessionsByUserSessionId = new HashMap<>();

    @Autowired
    public OneSecPollingWebSocketHandler(ObjectMapper objectMapper, OneSecPollingKafkaConsumerService oneSecPollingKafkaConsumerService)
    {
        this.objectMapper = objectMapper;
        this.oneSecPollingKafkaConsumerService = oneSecPollingKafkaConsumerService;
    }

    @Override
    public Mono<Void> handle(WebSocketSession webSocketSession)
    {
        Many<String> sink = Sinks.many().multicast().onBackpressureBuffer(Queues.SMALL_BUFFER_SIZE, false);
        wsSessionsByUserSessionId.put(getUserPollingSessionId(webSocketSession), webSocketSession);
        sinkSubscription(webSocketSession, sink);
        Mono<Void> output = webSocketSession.send(sink.asFlux().map(webSocketSession::textMessage)).doOnSubscribe(subscription ->
        {
        });
        return Mono.zip(webSocketSession.receive().then(), output).then();
    }

    public void sinkSubscription(WebSocketSession webSocketSession, Many<String> sink)
    {
        log.info("number of sessions; {}", wsSessionsByUserSessionId.size());
        oneSecPollingKafkaConsumerService.getTestTopicFlux().doOnNext(record ->
        {
            //log.info("record: {}", record);
            sink.tryEmitNext(record.value());
            record.receiverOffset().acknowledge();
        }).subscribe();
    }

    public String getOneSecPollingTopicRecord(ReceiverRecord<Integer, String> record, WebSocketSession webSocketSession)
    {
        String lastRecord = record.value();
        log.info("record to send: {} : webSocketSession: {}", record.value(), webSocketSession.getId());
        record.receiverOffset().acknowledge();
        return lastRecord;
    }

    public String getUserPollingSessionId(WebSocketSession webSocketSession)
    {
        UriTemplate template = new UriTemplate(WEB_SOCKET_ENDPOINT);
        URI uri = webSocketSession.getHandshakeInfo().getUri();
        Map<String, String> parameters = template.match(uri.getPath());
        String userPollingSessionId = parameters.get("userPollingSessionId");
        return userPollingSessionId;
    }
}
Kafka Receiver
@Service
@Slf4j
public class OneSecPollingKafkaConsumerService
{
    private String bootStrapServers;

    @Autowired
    public OneSecPollingKafkaConsumerService(@Value("${bootstrap.servers}") String bootStrapServers)
    {
        this.bootStrapServers = bootStrapServers;
    }

    private ReceiverOptions<Integer, String> getReceiverOptions()
    {
        Map<String, Object> consumerProps = new HashMap<>();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootStrapServers);
        //consumerProps.put(ConsumerConfig.CLIENT_ID_CONFIG, "reactive-consumer");
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "onesecpolling-group");
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        //consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        //consumerProps.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        ReceiverOptions<Integer, String> receiverOptions = ReceiverOptions
                .<Integer, String> create(consumerProps)
                .subscription(Collections.singleton("HighFrequencyPollingKPIsComputedValues"));
        return receiverOptions;
    }

    public Flux<ReceiverRecord<Integer, String>> getTestTopicFlux()
    {
        return createTopicCache();
    }

    private Flux<ReceiverRecord<Integer, String>> createTopicCache()
    {
        Flux<ReceiverRecord<Integer, String>> oneSecPollingMessagesFlux = KafkaReceiver.create(getReceiverOptions())
                .receive()
                .delayElements(Duration.ofMillis(500));
        return oneSecPollingMessagesFlux;
    }
}
POM dependencies
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-devtools</artifactId>
    </dependency>
    <!--
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-security</artifactId>
    </dependency>
    -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-webflux</artifactId>
    </dependency>
    <dependency>
        <groupId>io.projectreactor.kafka</groupId>
        <artifactId>reactor-kafka</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-stream</artifactId>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <!-- This is breaking WebFlux
    <dependency>
        <groupId>org.springdoc</groupId>
        <artifactId>springdoc-openapi-ui</artifactId>
        <version>${springdoc.version}</version>
    </dependency>
    -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-aop</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-tomcat</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>io.projectreactor</groupId>
        <artifactId>reactor-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-stream</artifactId>
        <classifier>test-binder</classifier>
        <type>test-jar</type>
        <scope>test</scope>
    </dependency>
    <!--
    <dependency>
        <groupId>org.springframework.security</groupId>
        <artifactId>spring-security-test</artifactId>
        <scope>test</scope>
    </dependency>
    -->
</dependencies>
I also tried changing the handle(...) method definition in the WebSocketHandler to the following, but data from Kafka is still pushed to only one WebSocket session:
@Override
public Mono<Void> handle(WebSocketSession webSocketSession)
{
    Mono<Void> input = webSocketSession.receive().then();
    Mono<Void> output = webSocketSession.send(oneSecPollingKafkaConsumerService.getTestTopicFlux().map(ReceiverRecord::value).map(webSocketSession::textMessage));
    return Mono.zip(input, output).then();
}
Also, I tried the following:
public Mono<Void> handle(WebSocketSession webSocketSession)
{
    Mono<Void> input = webSocketSession.receive()
            .doOnSubscribe(subscribe -> log.info("sesseion created sessionId:{}:userId:{};sessionhash:{}",
                    webSocketSession.getId(),
                    getUserPollingSessionId(webSocketSession),
                    webSocketSession.hashCode()))
            .then();
    Flux<String> source = oneSecPollingKafkaConsumerService.getTestTopicFlux().map(record -> getOneSecPollingTopicRecord(record, webSocketSession)).log();
    Mono<Void> output = webSocketSession.send(source.map(webSocketSession::textMessage)).log();
    return Mono.zip(input, output).then().log();
}
I enabled log() and got the following output:
20:09:22.652 [http-nio-8080-exec-9] INFO c.m.e.w.p.i.w.v.OneSecPollingWebSocketHandler - sesseion created sessionId:a:userId:124;sessionhash:1974799413
20:09:22.652 [http-nio-8080-exec-9] INFO reactor.Flux.RefCount.41 - | onSubscribe([Fuseable] FluxRefCount.RefCountInner)
20:09:22.652 [http-nio-8080-exec-9] INFO reactor.Flux.Map.42 - onSubscribe(FluxMap.MapSubscriber)
20:09:22.652 [http-nio-8080-exec-9] INFO reactor.Flux.Map.42 - request(1)
20:09:22.652 [http-nio-8080-exec-9] INFO reactor.Flux.RefCount.41 - | request(32)
20:09:22.659 [http-nio-8080-exec-9] INFO reactor.Mono.FromPublisher.43 - onSubscribe(MonoNext.NextSubscriber)
20:09:22.659 [http-nio-8080-exec-9] INFO reactor.Mono.FromPublisher.43 - request(unbounded)
20:09:25.942 [http-nio-8080-exec-10] INFO reactor.Mono.IgnorePublisher.48 - onSubscribe(MonoIgnoreElements.IgnoreElementsSubscriber)
20:09:25.942 [http-nio-8080-exec-10] INFO reactor.Mono.IgnorePublisher.48 - request(unbounded)
20:09:25.942 [http-nio-8080-exec-10] INFO c.m.e.w.p.i.w.v.OneSecPollingWebSocketHandler - sesseion created sessionId:b:userId:123;sessionhash:1582184236
20:09:25.942 [http-nio-8080-exec-10] INFO reactor.Flux.RefCount.45 - | onSubscribe([Fuseable] FluxRefCount.RefCountInner)
20:09:25.942 [http-nio-8080-exec-10] INFO reactor.Flux.Map.46 - onSubscribe(FluxMap.MapSubscriber)
20:09:25.942 [http-nio-8080-exec-10] INFO reactor.Flux.Map.46 - request(1)
20:09:25.942 [http-nio-8080-exec-10] INFO reactor.Flux.RefCount.45 - | request(32)
20:09:25.947 [http-nio-8080-exec-10] INFO reactor.Mono.FromPublisher.47 - onSubscribe(MonoNext.NextSubscriber)
20:09:25.949 [http-nio-8080-exec-10] INFO reactor.Mono.FromPublisher.47 - request(unbounded)
20:10:00.880 [reactive-kafka-onesecpolling-group-11] INFO reactor.Flux.RefCount.41 - | onNext(ConsumerRecord(topic = HighFrequencyPollingKPIsComputedValues, partition = 0, leaderEpoch = null, offset = 474, CreateTime = 1644071999871, serialized key size = -1, serialized value size = 43, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = {"greeting" : "Hello", "name" : "Prashant"}))
20:10:01.387 [parallel-5] INFO reactor.Flux.Map.42 - onNext({"greeting" : "Hello", "name" : "Prashant"})
20:10:01.389 [parallel-5] INFO reactor.Flux.Map.42 - request(1)
Here we can see that we have 2 subscribers to the reactor-kafka Flux:
reactor.Flux.Map.42 - onSubscribe(FluxMap.MapSubscriber)
reactor.Flux.Map.46 - onSubscribe(FluxMap.MapSubscriber)
but when data is read from the Kafka topic, it is received by only one subscriber:
reactor.Flux.Map.42 - onNext({"greeting" : "Hello", "name" : "Prashant"})
Is it a bug in the WebFlux API itself?
I have found the issue and the solution.
Problem
The way I was using the Flux (obtained from the KafkaReceiver) in the WebSocketHandler handle() method was not correct.
For each WebSocket session created by the client requests, the handle() method gets called, so multiple Flux objects for KafkaReceiver.create().receive() are created. One of the Flux objects reads data from the KafkaReceiver, but the other Flux objects fail to do so.
public Mono<Void> handle(WebSocketSession webSocketSession)
{
    Mono<Void> input = webSocketSession.receive()
            .doOnSubscribe(subscribe -> log.info("sesseion created sessionId:{}:userId:{};sessionhash:{}",
                    webSocketSession.getId(),
                    getUserPollingSessionId(webSocketSession),
                    webSocketSession.hashCode()))
            .then();
    // problem: this creates a new Kafka receiver Flux for every session
    Flux<String> source = oneSecPollingKafkaConsumerService.getTestTopicFlux().map(record -> getOneSecPollingTopicRecord(record, webSocketSession)).log();
    Mono<Void> output = webSocketSession.send(source.map(webSocketSession::textMessage)).log();
    return Mono.zip(input, output).then().log();
}
Solution
Make sure that only one Flux is created for KafkaReceiver.create().receive(). One way to do so is to create the Flux in the constructor of the WebSocketHandler (or of the Kafka consumer class):
private final Flux<String> source;

@Autowired
public OneSecPollingWebSocketHandler(OneSecPollingKafkaConsumerService oneSecPollingKafkaConsumerService)
{
    source = oneSecPollingKafkaConsumerService.getOneSecPollingTopicFlux().map(r -> getOneSecPollingTopicRecord(r));
}

@Override
public Mono<Void> handle(WebSocketSession webSocketSession)
{
    // add user session id as session attribute
    Mono<Void> input = getInputMessageMono(webSocketSession);
    Mono<Void> output = getOutputMessageMono(webSocketSession);
    return Mono.zip(input, output).then().log();
}

private Mono<Void> getOutputMessageMono(WebSocketSession webSocketSession)
{
    Mono<Void> output = webSocketSession.send(source.map(webSocketSession::textMessage)).doOnError(err -> log.error(err.getMessage())).doOnTerminate(() ->
    {
        log.info("onesecpolling session terminated;{}", webSocketSession.getId());
    }).log();
    return output;
}

private Mono<Void> getInputMessageMono(WebSocketSession webSocketSession)
{
    Mono<Void> input = webSocketSession.receive().doOnSubscribe(subscribe ->
    {
        log.info("onesecpolling session created sessionId:{}:userId:{}", webSocketSession.getId(), getUserPollingSessionId(webSocketSession));
    }).then();
    return input;
}

private String getOneSecPollingTopicRecord(ReceiverRecord<Integer, String> record)
{
    String lastRecord = record.value();
    record.receiverOffset().acknowledge();
    return lastRecord;
}

private String getUserPollingSessionId(WebSocketSession webSocketSession)
{
    UriTemplate template = new UriTemplate(WEB_SOCKET_ENDPOINT);
    URI uri = webSocketSession.getHandshakeInfo().getUri();
    Map<String, String> parameters = template.match(uri.getPath());
    String userPollingSessionId = parameters.get(WEB_SOCKET_ENDPOINT_USERID_SUBPATH);
    return userPollingSessionId;
}
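A related alternative, not from the original answer: Reactor can turn the cold KafkaReceiver Flux into a shared (multicast) one, so every subscribing WebSocket session sees the same stream regardless of where the Flux is created. A minimal sketch against the same consumer service, assuming getReceiverOptions() as defined in the question (share() is a standard Reactor operator):

private final Flux<ReceiverRecord<Integer, String>> sharedTopicFlux;

@Autowired
public OneSecPollingKafkaConsumerService(@Value("${bootstrap.servers}") String bootStrapServers)
{
    this.bootStrapServers = bootStrapServers;
    // create the Kafka receiver once and multicast its records to all subscribers
    this.sharedTopicFlux = KafkaReceiver.create(getReceiverOptions())
            .receive()
            .share();
}

public Flux<ReceiverRecord<Integer, String>> getTestTopicFlux()
{
    return sharedTopicFlux;
}

Note that share() has refCount semantics: the Kafka subscription starts with the first WebSocket session and is cancelled when the last one disconnects.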

Spring Fox (Swagger-UI) 2.9.2 How to disable the 'Try it out' button

I am trying to disable the 'Try it out' button from the UI.
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger-ui</artifactId>
    <version>2.9.2</version>
</dependency>
In my Java class, I am doing something like below in an attempt to disable the button, and none of the properties I am trying seem to work. How can I disable the 'Try it out' button?
@GetMapping(path = "/swagger-resources/configuration/ui", produces = APPLICATION_JSON_VALUE)
public Object uiConfig() {
    HashMap<String, Object> map = new HashMap<>();
    ...
    // 1.
    map.put("supportedSubmitMethods", new String[0]);
    // 2.
    map.put("x-explorer-enabled", Boolean.FALSE);
    // 3.
    map.put("x-samples-enabled", Boolean.FALSE);
    ...
    return map;
}
The code below worked for me:
@Bean
UiConfiguration uiConfig()
{
    return new UiConfiguration(null, UiConfiguration.Constants.NO_SUBMIT_METHODS);
}
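For reference, springfox 2.9.2 also provides UiConfigurationBuilder, which avoids the deprecated constructor; a sketch that should be equivalent (use it instead of the constructor-based bean above, not in addition to it):

@Bean
UiConfiguration uiConfig()
{
    // an empty supportedSubmitMethods array hides the 'Try it out' button
    return UiConfigurationBuilder.builder()
            .supportedSubmitMethods(new String[0])
            .build();
}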
Thanks

Spring-Boot Elasticsearch EntityMapper cannot be autowired

Based on this answer and the comments, I implemented the code to receive the scores of an Elasticsearch query.
public class CustomizedHotelRepositoryImpl implements CustomizedHotelRepository {

    private final ElasticsearchTemplate elasticsearchTemplate;

    @Autowired
    public CustomizedHotelRepositoryImpl(ElasticsearchTemplate elasticsearchTemplate) {
        super();
        this.elasticsearchTemplate = elasticsearchTemplate;
    }

    @Override
    public Page<Hotel> findHotelsAndScoreByName(String name) {
        QueryBuilder queryBuilder = QueryBuilders.boolQuery()
                .should(QueryBuilders.queryStringQuery(name).lenient(true).defaultOperator(Operator.OR).field("name"));
        NativeSearchQuery nativeSearchQuery = new NativeSearchQueryBuilder().withQuery(queryBuilder)
                .withPageable(PageRequest.of(0, 100)).build();
        DefaultEntityMapper mapper = new DefaultEntityMapper();
        ResultsExtractor<Page<Hotel>> rs = new ResultsExtractor<Page<Hotel>>() {
            @Override
            public Page<Hotel> extract(SearchResponse response) {
                ArrayList<Hotel> hotels = new ArrayList<>();
                SearchHit[] hits = response.getHits().getHits();
                for (SearchHit hit : hits) {
                    try {
                        Hotel hotel = mapper.mapToObject(hit.getSourceAsString(), Hotel.class);
                        hotel.setScore(hit.getScore());
                        hotels.add(hotel);
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
                return new PageImpl<>(hotels, PageRequest.of(0, 100), response.getHits().getTotalHits());
            }
        };
        return elasticsearchTemplate.query(nativeSearchQuery, rs);
    }
}
As you can see, I needed to create a new instance, DefaultEntityMapper mapper = new DefaultEntityMapper();, which should not be necessary because it should be possible to autowire an EntityMapper. If I do so, I get the exception that there is no such bean.
Description:
Field entityMapper in com.example.elasticsearch5.es.cluster.repository.impl.CustomizedCluserRepositoryImpl required a bean of type 'org.springframework.data.elasticsearch.core.EntityMapper' that could not be found.
Action:
Consider defining a bean of type 'org.springframework.data.elasticsearch.core.EntityMapper' in your configuration.
So does anybody know whether it is possible to autowire EntityMapper directly, or does the bean need to be created manually using the @Bean annotation?
I use spring-data-elasticsearch-3.0.2.RELEASE.jar, which contains the core package.
My pom.xml:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-elasticsearch</artifactId>
</dependency>
I checked out the source code of spring-data-elasticsearch. There is no bean/component definition for EntityMapper, so it seems that answer is wrong. I tested it on my project and got the same error:
Consider defining a bean of type 'org.springframework.data.elasticsearch.core.EntityMapper' in your configuration.
I couldn't find any other option except defining a @Bean.
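A minimal sketch of that bean definition, assuming spring-data-elasticsearch 3.0.x, where DefaultEntityMapper still has a public no-argument constructor:

@Configuration
public class ElasticsearchConfig {

    // expose the default mapper so EntityMapper can be autowired elsewhere
    @Bean
    public EntityMapper entityMapper() {
        return new DefaultEntityMapper();
    }
}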

How to remove the entire cache, and then pre-populate the cache?

Can someone tell me what the problem is with the implementation below? I'm trying to delete the entire cache and then pre-populate/prime it. However, the code below only deletes both caches; it does not pre-populate/prime them when the two methods are executed. Any idea?
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.Caching;

@Cacheable(cacheNames = "cacheOne")
List<User> cacheOne() throws Exception {...}

@Cacheable(cacheNames = "cacheTwo")
List<Book> cacheTwo() throws Exception {...}

@Caching(
    evict = {
        @CacheEvict(cacheNames = "cacheOne", allEntries = true),
        @CacheEvict(cacheNames = "cacheTwo", allEntries = true)
    }
)
void clearAndReloadEntireCache() throws Exception
{
    // Trying to reload cacheOne and cacheTwo inside this method
    // Is this even possible? If not, what is the correct approach?
    cacheOne();
    cacheTwo();
}
I have a Spring Boot application (v1.4.0) utilizing the following dependencies:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<dependency>
    <groupId>org.ehcache</groupId>
    <artifactId>ehcache</artifactId>
    <version>3.3.0</version>
</dependency>
<dependency>
    <groupId>javax.cache</groupId>
    <artifactId>cache-api</artifactId>
    <version>1.0.0</version>
</dependency>
If you call the clearAndReloadEntireCache() method, only this method will be processed by the caching interceptor. Calls to other methods of the same object, such as cacheOne() and cacheTwo(), will not be intercepted at runtime, even though both are annotated with @Cacheable.
You can achieve the desired functionality by reloading cacheOne and cacheTwo with the two method calls shown below:
@Caching(evict = {@CacheEvict(cacheNames = "cacheOne", allEntries = true, beforeInvocation = true)},
        cacheable = {@Cacheable(cacheNames = "cacheOne")})
public List<User> cleanAndReloadCacheOne() {
    return cacheOne();
}

@Caching(evict = {@CacheEvict(cacheNames = "cacheTwo", allEntries = true, beforeInvocation = true)},
        cacheable = {@Cacheable(cacheNames = "cacheTwo")})
public List<Book> cleanAndReloadCacheTwo() {
    return cacheTwo();
}
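An alternative, not from the original answer: clear the caches programmatically through Spring's CacheManager and then call the @Cacheable methods on an injected bean, so the calls go through the proxy and repopulate the caches. A minimal sketch; BookUserService is a hypothetical bean owning cacheOne() and cacheTwo():

import org.springframework.cache.CacheManager;
import org.springframework.stereotype.Service;

@Service
public class CacheReloadService {

    private final CacheManager cacheManager;
    private final BookUserService bookUserService; // hypothetical owner of cacheOne()/cacheTwo()

    public CacheReloadService(CacheManager cacheManager, BookUserService bookUserService) {
        this.cacheManager = cacheManager;
        this.bookUserService = bookUserService;
    }

    public void clearAndReloadEntireCache() throws Exception {
        // evict everything from both caches
        cacheManager.getCache("cacheOne").clear();
        cacheManager.getCache("cacheTwo").clear();
        // these calls cross the Spring proxy, so @Cacheable repopulates the caches
        bookUserService.cacheOne();
        bookUserService.cacheTwo();
    }
}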
