unhandled Query implementation org.springframework.data.elasticsearch.core.query.NativeSearchQuery

I tried to create the ElasticsearchTemplate bean with the new ElasticsearchClient instead of RestHighLevelClient and I am getting this error. The code compiles and I am able to run the application as well; the error happens while fetching data.
Query query = new NativeSearchQueryBuilder()
        .withQuery(booleanQueryBuilder)
        .withPageable(PageRequest.of(0, searchAddressRequest.getMaxRecordsRequested()))
        .build();
return elasticsearchOperations.search(query, SomeIndex.class);
I am using Spring Boot 2.7.8.
@Bean
public ElasticsearchTemplate getElasticsearchTemplate(
        @Qualifier ElasticsearchClient elasticsearchClient, AddressMappingEsConverter addressMappingEsConverter) {
    return new ElasticsearchTemplate(elasticsearchClient, addressMappingEsConverter);
}
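For reference, a hedged workaround sketch (not from the original question): the ElasticsearchClient-backed template cannot process the old NativeSearchQuery, but it does handle a CriteriaQuery, so the same paged search could be expressed roughly as below. The field name "city" and the getCity() getter are assumptions for illustration.

// Sketch only: replace NativeSearchQuery with a CriteriaQuery, which the new
// ElasticsearchTemplate knows how to convert. Field name and getter assumed.
Query query = new CriteriaQuery(
        Criteria.where("city").is(searchAddressRequest.getCity()),
        PageRequest.of(0, searchAddressRequest.getMaxRecordsRequested()));
return elasticsearchOperations.search(query, SomeIndex.class);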

Related

How to implement a circuit breaker in Spring Framework 6 declarative clients

In a Spring Boot project, I would like to implement a CircuitBreaker when connecting to 3rd party services using Spring Framework's new declarative client.
What I have done so far:
Added the below dependency in pom.xml, as the declarative client uses WebClient underneath.
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-circuitbreaker-reactor-resilience4j</artifactId>
</dependency>
Added the below properties in application.properties.
resilience4j.circuitbreaker.configs.default.registerHealthIndicator=true
resilience4j.circuitbreaker.configs.default.slidingWindowSize=10
resilience4j.circuitbreaker.configs.default.minimumNumberOfCalls=5
resilience4j.circuitbreaker.configs.default.permittedNumberOfCallsInHalfOpenState=3
resilience4j.circuitbreaker.configs.default.automaticTransitionFromOpenToHalfOpenEnabled=true
resilience4j.circuitbreaker.configs.default.waitDurationInOpenState=5s
resilience4j.circuitbreaker.configs.default.failureRateThreshold=50
resilience4j.circuitbreaker.configs.default.eventConsumerBufferSize=10
resilience4j.circuitbreaker.configs.default.recordExceptions[0]=org.springframework.web.reactive.function.client.WebClientRequestException
resilience4j.circuitbreaker.configs.default.recordExceptions[1]=java.net.ConnectException
resilience4j.circuitbreaker.configs.default.recordExceptions[2]=java.io.IOException
resilience4j.timelimiter.configs.default.timeoutDuration=5s
resilience4j.timelimiter.configs.default.cancelRunningFuture=true
management.health.circuitbreakers.enabled=true
Created a Customizer bean for ReactiveResilience4JCircuitBreakerFactory as below.
@Bean
Customizer<ReactiveResilience4JCircuitBreakerFactory> defaultCustomizer() {
    return factory -> factory.configureDefault(
            id -> new Resilience4JConfigBuilder(id)
                    .circuitBreakerConfig(CircuitBreakerConfig.ofDefaults())
                    .timeLimiterConfig(TimeLimiterConfig.ofDefaults())
                    .build());
}
Added the CircuitBreaker to the HttpServiceProxyFactory bean as below.
@Bean
public HttpServiceProxyFactory httpServiceProxyFactory(
        WebClient.Builder builder,
        ReactiveResilience4JCircuitBreakerFactory resilience4JCircuitBreakerFactory) {
    CircuitBreaker circuitBreaker = resilience4JCircuitBreakerFactory
            .getCircuitBreakerRegistry()
            .circuitBreaker("default");
    WebClient webClient = builder
            .baseUrl(applicationProperties.getInventoryServiceUrl())
            .filter((request, next) -> next.exchange(request)
                    .transform(CircuitBreakerOperator.of(circuitBreaker)))
            .defaultHeaders(httpHeaders -> {
                httpHeaders.setContentType(MediaType.APPLICATION_JSON);
                httpHeaders.setAccept(List.of(MediaType.APPLICATION_JSON));
            })
            .build();
    return HttpServiceProxyFactory.builder(WebClientAdapter.forClient(webClient)).build();
}
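For context, a hedged sketch of the declarative client interface that such a factory would back (the interface itself is not shown in the post; the interface name, path, and return type are assumptions):

// Hypothetical declarative HTTP interface, created from the factory above via
// httpServiceProxyFactory.createClient(InventoryServiceProxy.class).
public interface InventoryServiceProxy {

    @GetExchange("/inventory/{code}")
    InventoryDto getInventoryByProductCode(@PathVariable String code);
}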
When the 3rd party service is down, it does not give any fallback response. How do I fix this?
Another approach I tried:
Annotating the method with @CircuitBreaker and creating a fallback method, but it still throws the exception.
@CircuitBreaker(
        name = "getInventoryByProductCode",
        fallbackMethod = "getInventoryByProductCodeFallBack")
private InventoryDto getInventoryByProductCode(String code) {
    return inventoryServiceProxy.getInventoryByProductCode(code);
}

private InventoryDto getInventoryByProductCodeFallBack(String code, Exception e) {
    log.error("Exception occurred while fetching product details", e);
    return new InventoryDto(code, 0);
}
How to fix this?
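Two hedged observations, not from the original post: the resilience4j @CircuitBreaker annotation is applied through a Spring AOP proxy, so it is not triggered on a private method that is invoked from within the same class; and with the reactive factory already defined above, a fallback can also be wired programmatically. A minimal sketch, assuming the declarative client exposes a reactive variant returning Mono<InventoryDto> (the method name getInventoryByProductCodeReactive and the breaker id are assumptions):

// Programmatic fallback via spring-cloud-circuitbreaker (sketch only, names assumed).
public Mono<InventoryDto> getInventoryByProductCode(String code) {
    ReactiveCircuitBreaker breaker = resilience4JCircuitBreakerFactory.create("inventory");
    return breaker.run(
            inventoryServiceProxy.getInventoryByProductCodeReactive(code),
            throwable -> {
                log.error("Exception occurred while fetching product details", throwable);
                return Mono.just(new InventoryDto(code, 0)); // fallback response
            });
}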

ListenerExecutionFailedException NullPointer when trying to index a Kafka payload through the new Elasticsearch Java API Client

I'm migrating from the HLRC to the new client. Things were smooth, but for some reason I cannot index a specific class/document. Here is my client implementation and index request:
@Configuration
public class ClientConfiguration {

    @Autowired
    private InternalProperties conf;

    public ElasticsearchClient sslClient() {
        CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
        credentialsProvider.setCredentials(AuthScope.ANY,
                new UsernamePasswordCredentials(conf.getElasticsearchUser(), conf.getElasticsearchPassword()));
        HttpHost httpHost = new HttpHost(conf.getElasticsearchAddress(), conf.getElasticsearchPort(), "https");
        RestClientBuilder restClientBuilder = RestClient.builder(httpHost);
        try {
            SSLContext sslContext = SSLContexts.custom().loadTrustMaterial(null, (x509Certificates, s) -> true).build();
            restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
                @Override
                public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
                    return httpClientBuilder.setSSLContext(sslContext)
                            .setDefaultCredentialsProvider(credentialsProvider);
                }
            });
        } catch (Exception e) {
            e.printStackTrace();
        }
        RestClient restClient = restClientBuilder.build();
        ElasticsearchTransport transport = new RestClientTransport(restClient, new JacksonJsonpMapper());
        ElasticsearchClient client = new ElasticsearchClient(transport);
        return client;
    }
}
@Service
public class ThisDtoIndexClass extends ConfigAndProperties {

    public ThisDtoIndexClass() {
    }

    // client is declared in the class it's extending from
    public ThisDtoIndexClass(@Autowired ClientConfiguration esClient) {
        this.client = esClient.sslClient();
    }

    @KafkaListener(topics = "esTopic")
    public void in(@Payload(required = false) customDto doc)
            throws ThisDtoIndexClassException, ElasticsearchException, IOException {
        if (doc != null && doc.getId() != null) {
            IndexRequest.Builder<customDto> indexReqBuilder = new IndexRequest.Builder<>();
            indexReqBuilder.index("index-for-this-Dto");
            indexReqBuilder.id(doc.getId());
            indexReqBuilder.document(doc);
            IndexResponse response = client.index(indexReqBuilder.build());
        } else {
            throw new ThisDtoIndexClassException("document is null");
        }
    }
}
This is all done in Spring Boot (v2.6.8) with ES 7.17.3. According to the debugger, the payload is NOT null! It even fetches the id correctly while stepping through. For some reason, it throws an org.springframework.kafka.listener.ListenerExecutionFailedException on the last line (during the .build?). Nothing gets indexed, but the response comes back 200. I'm lost on where I should be looking. I have a different class that also writes to a different index, also getting a payload from Kafka directly (all separate consumers). That one functions just fine.
I suspect it has something to do with the way my client and/or the Kafka listener are set up. Please point me in the right direction.
I solved it by deleting the default constructor. If I put it back, it takes precedence over the constructor that receives the configuration (or Spring simply never calls that one), so my client was always null. The error message was extremely misleading, since it actually wasn't Kafka's fault!
Removing the default constructor makes Spring use the correct constructor, and I was able to index again. I assume this was a Spring Boot bean-loading related "issue".
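A minimal sketch of the fix described above, using the class names from the question: with both constructors present and @Autowired only on the parameter, Spring falls back to the no-arg constructor, which is consistent with the behaviour the author describes. Keeping only the injecting constructor avoids that.

@Service
public class ThisDtoIndexClass extends ConfigAndProperties {

    // No default constructor: Spring now has a single constructor to call,
    // so the client inherited from ConfigAndProperties is always initialized.
    public ThisDtoIndexClass(ClientConfiguration esClient) {
        this.client = esClient.sslClient();
    }
}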

Get Aggregate Information from Elasticsearch using Spring-data-elasticsearch, ElasticsearchRepository

I would like to get aggregate results from ES, such as avgSize (the average of a field named 'size'), total hits for documents that match a term, and some other aggregates in the future, for which I don't think ElasticsearchRepository has any methods to call. I built the query and aggregation builders as below. I want to use my repository interface, but I am not sure what the return object type should be. Should it be a document type from my DTOs? Also, I have seen examples where the searchQuery is passed directly to ElasticsearchTemplate, but then what is the point of having a repository interface that extends ElasticsearchRepository?
Repository Interface
public interface CCFilesSummaryRepository extends ElasticsearchRepository<DataReferenceSummary, UUID> {
}
Elastic configuration
@Configuration
@EnableElasticsearchRepositories(basePackages = "com.xxx.repository.es")
public class ElasticConfiguration {

    @Bean
    public ElasticsearchOperations elasticsearchTemplate() throws UnknownHostException {
        return new ElasticsearchTemplate(elasticsearchClient());
    }

    @Bean
    public Client elasticsearchClient() throws UnknownHostException {
        Settings settings = Settings.builder().put("cluster.name", "elasticsearch").build();
        TransportClient client = new PreBuiltTransportClient(settings);
        client.addTransportAddress(new TransportAddress(InetAddress.getLocalHost(), 9200));
        return client;
    }
}
Service Method
public DataReferenceSummary createSummary(final DataSet dataSet) {
    try {
        QueryBuilder queryBuilder = QueryBuilders.matchQuery("type", dataSet.getDataSetCreateRequest().getContentType());
        AvgAggregationBuilder avgAggregationBuilder = AggregationBuilders.avg("avg_size").field("size");
        ValueCountAggregationBuilder valueCountAggregationBuilder = AggregationBuilders.count("total_references")
                .field("asset_id");
        SearchQuery searchQuery = new NativeSearchQueryBuilder()
                .withQuery(queryBuilder)
                .addAggregation(avgAggregationBuilder)
                .addAggregation(valueCountAggregationBuilder)
                .build();
        return ccFilesSummaryRepository.search(searchQuery).iterator().next();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return null;
}
DataReferenceSummary is just a POJO for now, and I am getting an error during my build that says: Unable to build bean CCFilesSummaryRepository, IllegalArgumentException: DataReferenceSummary is not a managed Object.
First, DataReferenceSummary must be a class annotated with @Document.
In Spring Data Elasticsearch 3.2.0 (the current version) you need to define the repository return type as AggregatedPage<DataReferenceSummary>; the returned object will contain the aggregations.
From the upcoming version 4.0 on, you will have to define the return type as SearchHits<DataReferenceSummary> and find the aggregations in this returned object.
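A minimal sketch of what that could look like on 3.2.x, reusing the field and aggregation names from the question. The index name is an assumption, and this sketch goes through the template rather than the repository:

// Entity must be a managed document (index name assumed for illustration).
@Document(indexName = "data-reference-summary")
public class DataReferenceSummary {
    // fields such as size, asset_id, type ...
}

// Run the query through the template and read the aggregations from the result.
// In 3.2.x queryForPage returns an AggregatedPage; on older versions a cast may be needed.
AggregatedPage<DataReferenceSummary> page =
        elasticsearchTemplate.queryForPage(searchQuery, DataReferenceSummary.class);

Avg avgSize = page.getAggregations().get("avg_size");
ValueCount totalReferences = page.getAggregations().get("total_references");
double averageSize = avgSize.getValue();
long referenceCount = totalReferences.getValue();
long totalHits = page.getTotalElements();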

PreparedStatementCallback; bad SQL grammar [SELECT JOB_INSTANCE_ID, JOB_NAME from BATCH_JOB_INSTANCE where JOB_NAME = ? and JOB_KEY = ?]

I want the Spring Batch metadata tables to be created on the MySQL server, while using the existing tables from Oracle to fetch data and put it into MongoDB.
I created the following configuration, but I am somehow missing the trick to create the Spring Batch metadata tables through configuration.
spring.data.mongodb.host=localhost
spring.data.mongodb.port=27017
spring.data.mongodb.database=MY_DB
#By default, Spring runs all the job as soon as it has started its context.
spring.batch.job.enabled=false
spring.batch.initialize-schema=always
spring.batch.tablePrefix=test.BATCH_
#spring.batch.initializer.enabled=false
spring.batch.schema=org/springframework/batch/core/schema-mysql.sql
spring.datasource.url=jdbc:oracle:thin:@localhost:1527:OR_DEV
spring.datasource.username=EDR_USR
spring.datasource.password=txz$2Zhr
spring.datasource.driver-class-name=oracle.jdbc.OracleDriver
jdbc.batch.jdbcUrl=jdbc:mysql://localhost:3306/test?useSSL=false
jdbc.batch.username=root
jdbc.batch.password=root
jdbc.batch.driver-class-name=com.mysql.cj.jdbc.Driver
DBConfig.java
@Configuration
@ComponentScan
public class DBConfig {

    @Autowired
    private Environment env;

    @Bean(name = "oracleDS")
    public DataSource batchDataSource() {
        return DataSourceBuilder.create()
                .url(env.getProperty("spring.datasource.url"))
                .driverClassName(env.getProperty("spring.datasource.driver-class-name"))
                .username(env.getProperty("spring.datasource.username"))
                .password(env.getProperty("spring.datasource.password"))
                .build();
    }

    @Bean(name = "mysqlDS")
    @Primary
    public DataSource mysqlBatchDataSource() {
        return DataSourceBuilder.create()
                .url(env.getProperty("jdbc.batch.jdbcUrl"))
                .driverClassName(env.getProperty("jdbc.batch.driver-class-name"))
                .username(env.getProperty("jdbc.batch.username"))
                .password(env.getProperty("jdbc.batch.password"))
                .build();
    }
}
Job
@GetMapping("/save-student")
public String saveStudent() {
    JobParameters params = new JobParametersBuilder()
            .addString("JobID", String.valueOf(System.currentTimeMillis()))
            .addString("Job_ID", String.valueOf(System.currentTimeMillis()))
            .addDate("date", new Date())
            .toJobParameters();
    try {
        JobExecution jobExecution = jobLauncher.run(countryJob, params);
        log.debug("Job Status : " + jobExecution.getStatus());
    } catch (JobExecutionAlreadyRunningException | JobRestartException | JobInstanceAlreadyCompleteException
            | JobParametersInvalidException e) {
        log.error("Job Failed : " + e.getMessage());
    }
    return "";
}
Error:
{
"timestamp": "2019-03-27T14:57:52.745+0000",
"status": 500,
"error": "Internal Server Error",
"message": "PreparedStatementCallback; bad SQL grammar [SELECT JOB_INSTANCE_ID, JOB_NAME from BATCH_JOB_INSTANCE where JOB_NAME = ? and JOB_KEY = ?]; nested exception is java.sql.SQLSyntaxErrorException: Table 'test.batch_job_instance' doesn't exist",
"path": "/save-student"
}
Write the below line in the application.properties file. It will automatically generate the tables required for Spring Batch.
spring.batch.initialize-schema=always
To fix this issue, I created the MySQL Spring Batch metadata tables manually by executing the script from here: MySQL Spring Batch Metadata tables script.
Since we have two data sources, Spring Batch is unable to create the metadata tables automatically, and hence this error occurs. I executed the script manually as a workaround, but is there any way to fix this issue through code?
After spending some time, I could achieve the result I hoped for by setting the batch data source as @Primary, like this:
Datasource configuration
@Bean
@Primary
public DataSource batchDataSource() {
    return batchDataSourceProperties().initializeDataSourceBuilder().build();
}

@Bean
@ConfigurationProperties("spring.datasource.batch")
public DataSourceProperties batchDataSourceProperties() {
    return new DataSourceProperties();
}
Or, in the batch configuration, set the data source:
@Component
public class BatchDSConfiguration extends DefaultBatchConfigurer {

    @Override
    @Autowired(required = false)
    public void setDataSource(@Qualifier("batchDataSource") DataSource dataSource) {
        super.setDataSource(dataSource);
    }

    ...
}
Application properties
spring.datasource.batch.driverClassName=org.h2.Driver
spring.datasource.batch.url=jdbc:h2:mem:batchdb
spring.datasource.batch.username=sb
spring.datasource.batch.password=
spring.batch.jdbc.initialize-schema=ALWAYS
I tried to create a custom JobRepository in the BatchConfigurer, but could not create the batch schema with multiple data sources. I will try more, and if I manage it I'll come back here to tell.
Hope it helps.
spring.batch.jdbc.initialize-schema=always

Problem connecting to a Redis cluster using Spring Boot Data Redis

I used Spring Boot Data Redis (version 2.1.3) to connect to a Redis cluster. The configuration is as follows:
@Bean
@Primary
public RedisConnectionFactory myLettuceConnectionFactory(GenericObjectPoolConfig poolConfig) {
    RedisClusterConfiguration redisClusterConfiguration = new RedisClusterConfiguration();
    final List<String> nodeList = redisProperties.getCluster().getNodes();
    Set<RedisNode> nodes = new HashSet<RedisNode>();
    for (String ipPort : nodeList) {
        String[] ipAndPort = ipPort.split(":");
        nodes.add(new RedisNode(ipAndPort[0].trim(), Integer.valueOf(ipAndPort[1])));
    }
    redisClusterConfiguration.setPassword(RedisPassword.of(redisProperties.getPassword()));
    redisClusterConfiguration.setClusterNodes(nodes);
    redisClusterConfiguration.setMaxRedirects(redisProperties.getCluster().getMaxRedirects());
    LettuceClientConfiguration clientConfig = LettucePoolingClientConfiguration.builder()
            .commandTimeout(redisProperties.getTimeout())
            .poolConfig(poolConfig)
            .build();
    LettuceConnectionFactory factory = new LettuceConnectionFactory(redisClusterConfiguration, clientConfig);
    return factory;
}
However, during operation, a WARN exception message is always logged as follows:
Well, this seems to be a problem with Lettuce (see "How to map remote host & port to localhost using Lettuce"), but I don't know how to apply that in Spring Boot Data Redis. Any solution is welcome, thank you.
I've got the answer, so let's define a ClientResources like this:
MappingSocketAddressResolver resolver = MappingSocketAddressResolver.create(DnsResolvers.UNRESOLVED,
        hostAndPort -> {
            if (hostAndPort.getHostText().startsWith("172.31")) {
                return HostAndPort.of(ipStr, hostAndPort.getPort());
            }
            return hostAndPort;
        });

ClientResources clientResources = ClientResources.builder()
        .socketAddressResolver(resolver)
        .build();
Then set it into the Lettuce client configuration through the LettuceClientConfiguration.clientResources method, and Lettuce works normally.
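For completeness, a hedged sketch of how the ClientResources defined above could be plugged into the pooling client configuration from the question; the rest of the connection factory stays unchanged.

// Sketch: pass the custom ClientResources (with the mapping socket address
// resolver) into the Lettuce client configuration used by the factory.
LettuceClientConfiguration clientConfig = LettucePoolingClientConfiguration.builder()
        .poolConfig(poolConfig)
        .commandTimeout(redisProperties.getTimeout())
        .clientResources(clientResources)
        .build();

LettuceConnectionFactory factory = new LettuceConnectionFactory(redisClusterConfiguration, clientConfig);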