"Node Discovery Disabled" in Elastic Search - elasticsearch

I used the Java code below on Ubuntu and I am getting "Node Discovery Disabled". Because of this I am not able to move forward.
Could anyone please help me solve this problem?
public static JestClient JestConfiguration() {
    // Configuration
    ClientConfig client = new ClientConfig.Builder("http://localhost:9200")
            .multiThreaded(true).build();
    System.out.println("\nclient configured via:- " + client);
    // Construct a new Jest client according to configuration via factory
    JestClientFactory factory = new JestClientFactory();
    factory.setClientConfig(client);
    System.out.println("\nJestClientFactory via:- " + factory);
    JestClient jestClient = factory.getObject();
    System.out.println("\njestClient via:- " + jestClient);
    // jestClient.shutdownClient();
    return jestClient;
}

I am not sure what version you are using. I am using 0.1.2, and the factory I have only has a setHttpClientConfig method, so I used HttpClientConfig instead, which extends ClientConfig. That point aside, the builder has two methods you'll want:
discoveryEnabled
discoveryFrequency
These enable node discovery and set how frequently to poll.
HttpClientConfig httpClientConfig = new HttpClientConfig.Builder("http://localhost:9200")
        .discoveryEnabled(true)
        .discoveryFrequency(10L, TimeUnit.SECONDS)
        .multiThreaded(true)
        .build();
JestClientFactory factory = new JestClientFactory();
factory.setHttpClientConfig(httpClientConfig);
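The client is then obtained from the factory exactly as in the question; with discoveryEnabled(true) set, the "Node Discovery Disabled" log message should no longer appear:
JestClient jestClient = factory.getObject();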

Related

Unable to disable topic auto-creation in Spring Kafka v1.1.6

I'm using Spring Boot v1.5 and Spring Kafka v1.1.6 to publish a message to a Kafka broker.
When it publishes the message to the topic, the topic is created on the broker by default if not present.
I do not want it to create topics if they are not present. I tried to disable it by adding the property spring.kafka.topic.properties.auto.create=false, but it does not work.
Below is my bean configuration:
@Value("${kpi.kafka.bootstrap-servers}")
private String bootstrapServer;

@Bean
public ProducerFactory<String, CmsMonitoringMetrics> producerFactoryJson() {
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
    configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    configProps.put("allow.auto.create.topics", "false");
    return new DefaultKafkaProducerFactory<>(configProps);
}

@Bean
public KafkaTemplate<String, CmsMonitoringMetrics> kafkaTemplateJson() {
    return new KafkaTemplate<>(producerFactoryJson());
}
In the producer method I'm using the code below to publish:
Message<CmsMonitoringMetrics> message = MessageBuilder.withPayload(data)
        .setHeader(KafkaHeaders.TOPIC, topicName)
        .build();
SendResult<String, CmsMonitoringMetrics> result = kafkaTemplate.send(message).get();
It still creates the topic. Please help me disable it.
As per the documentation, auto.create.topics.enable is a broker configuration. That means you have to set this property on the Kafka server side, not on the producer/consumer clients.
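For example, to turn it off you would set the following on every broker and restart it, typically in the broker's server.properties (file name assumed from the standard Kafka distribution); clients cannot override this from the producer side:
auto.create.topics.enable=false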

ListenerExecutionFailedException NullPointerException when trying to index a Kafka payload through the new Elasticsearch Java API Client

I'm migrating from the HLRC to the new client. Things were smooth, but for some reason I cannot index a specific class/document. Here is my client implementation and index request:
@Configuration
public class ClientConfiguration {

    @Autowired
    private InternalProperties conf;

    public ElasticsearchClient sslClient() {
        CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
        credentialsProvider.setCredentials(AuthScope.ANY,
                new UsernamePasswordCredentials(conf.getElasticsearchUser(), conf.getElasticsearchPassword()));
        HttpHost httpHost = new HttpHost(conf.getElasticsearchAddress(), conf.getElasticsearchPort(), "https");
        RestClientBuilder restClientBuilder = RestClient.builder(httpHost);
        try {
            SSLContext sslContext = SSLContexts.custom().loadTrustMaterial(null, (x509Certificates, s) -> true).build();
            restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
                @Override
                public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
                    return httpClientBuilder.setSSLContext(sslContext)
                            .setDefaultCredentialsProvider(credentialsProvider);
                }
            });
        } catch (Exception e) {
            e.printStackTrace();
        }
        RestClient restClient = restClientBuilder.build();
        ElasticsearchTransport transport = new RestClientTransport(restClient, new JacksonJsonpMapper());
        ElasticsearchClient client = new ElasticsearchClient(transport);
        return client;
    }
}
@Service
public class ThisDtoIndexClass extends ConfigAndProperties {

    public ThisDtoIndexClass() {
    }

    // client is declared in the class it's extending from
    public ThisDtoIndexClass(@Autowired ClientConfiguration esClient) {
        this.client = esClient.sslClient();
    }

    @KafkaListener(topics = "esTopic")
    public void in(@Payload(required = false) customDto doc)
            throws ThisDtoIndexClassException, ElasticsearchException, IOException {
        if (doc != null && doc.getId() != null) {
            IndexRequest.Builder<customDto> indexReqBuilder = new IndexRequest.Builder<>();
            indexReqBuilder.index("index-for-this-Dto");
            indexReqBuilder.id(doc.getId());
            indexReqBuilder.document(doc);
            IndexResponse response = client.index(indexReqBuilder.build());
        } else {
            throw new ThisDtoIndexClassException("document is null");
        }
    }
}
This is all done in Spring Boot (v2.6.8) with ES 7.17.3. According to the debugger, the payload is NOT null! It even fetches the id correctly while stepping through. For some reason, it throws an org.springframework.kafka.listener.ListenerExecutionFailedException on the last line (during the .build()?). Nothing gets indexed, but the response comes back 200. I'm lost on where I should be looking. I have a different class that also writes to a different index, also getting a payload from Kafka directly (all separate consumers). That one functions just fine.
I suspect it has something to do with the way my client is set up and/or with Kafka. Please point me in the right direction.
I solved it by deleting the default constructor. If I put it back, it takes precedence over the autowired constructor (or the autowired constructor is simply never invoked), so my client was always null. The error message was extremely misleading, since it actually wasn't Kafka's fault!
Removing the default constructor makes Spring use the correct constructor, and I was able to index again. I assume this was a Spring Boot wiring-related "issue".
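For reference, a minimal sketch of the working service, assuming (as in the question) that the client field is inherited from ConfigAndProperties:
@Service
public class ThisDtoIndexClass extends ConfigAndProperties {
    // No default constructor: Spring has to use this one, so `client` is always initialized.
    public ThisDtoIndexClass(@Autowired ClientConfiguration esClient) {
        this.client = esClient.sslClient();
    }
}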

Problem using Spring Boot Data Redis to connect to a Redis cluster

I used Spring Boot Data Redis to connect to a Redis cluster, using version 2.1.3. The configuration is as follows:
@Bean
@Primary
public RedisConnectionFactory myLettuceConnectionFactory(GenericObjectPoolConfig poolConfig) {
    RedisClusterConfiguration redisClusterConfiguration = new RedisClusterConfiguration();
    final List<String> nodeList = redisProperties.getCluster().getNodes();
    Set<RedisNode> nodes = new HashSet<RedisNode>();
    for (String ipPort : nodeList) {
        String[] ipAndPort = ipPort.split(":");
        nodes.add(new RedisNode(ipAndPort[0].trim(), Integer.valueOf(ipAndPort[1])));
    }
    redisClusterConfiguration.setPassword(RedisPassword.of(redisProperties.getPassword()));
    redisClusterConfiguration.setClusterNodes(nodes);
    redisClusterConfiguration.setMaxRedirects(redisProperties.getCluster().getMaxRedirects());
    LettuceClientConfiguration clientConfig = LettucePoolingClientConfiguration.builder()
            .commandTimeout(redisProperties.getTimeout())
            .poolConfig(poolConfig)
            .build();
    LettuceConnectionFactory factory = new LettuceConnectionFactory(redisClusterConfiguration, clientConfig);
    return factory;
}
However, during operation, a WARN exception message is always being logged.
Well, this seems to be a problem with Lettuce (see How to map remote host & port to localhost using Lettuce), but I don't know how to apply that in Spring Boot Data Redis. Any solution is welcome, thank you.
I've got the answer. Let's define a ClientResources like this:
MappingSocketAddressResolver resolver = MappingSocketAddressResolver.create(DnsResolvers.UNRESOLVED,
        hostAndPort -> {
            if (hostAndPort.getHostText().startsWith("172.31")) {
                return HostAndPort.of(ipStr, hostAndPort.getPort());
            }
            return hostAndPort;
        });
ClientResources clientResources = ClientResources.builder()
        .socketAddressResolver(resolver)
        .build();
Then set it via the clientResources method of the LettuceClientConfiguration builder, and Lettuce works normally.
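Wired into the configuration from the question, it would look roughly like this (a sketch; clientResources is available on the same LettucePoolingClientConfiguration builder):
LettuceClientConfiguration clientConfig = LettucePoolingClientConfiguration.builder()
        .clientResources(clientResources) // the ClientResources built above
        .commandTimeout(redisProperties.getTimeout())
        .poolConfig(poolConfig)
        .build();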

Why is Spring JMS creating a JMS connection every second when connecting to an ActiveMQ Broker?

I've created a Spring JMS application using version 4.1.2.RELEASE, which is connected to a broker running ActiveMQ 5.11.0. The problem I'm seeing is as follows: in the logs, I notice that every second a connection is being created, like this:
2017-06-21 13:10:21,046 | level=INFO | thread=ActiveMQ Task-1 | class=org.apache.activemq.transport.failover.FailoverTransport | Successfully connected to tcp://localhost:61616
I know that it is creating a new ActiveMQ connection each time, because it says successfully "connected" and not "reconnected", as shown in the code located here: http://grepcode.com/file/repo1.maven.org/maven2/com.ning/metrics.collector/1.3.3/org/apache/activemq/transport/failover/FailoverTransport.java#891
I don't have a caching connection factory set for my consumer, but I'm wondering if the following is the culprit when it comes to why I'm seeing constant connections being created.
factory.setCacheLevel(DefaultMessageListenerContainer.CACHE_NONE);
The following post states that consumers should not be cached, but I wonder if that applies to caching the connection + session. If the connection is cached, but the session is not, then I wonder if that creates a problem.
Why DefaultMessageListenerContainer should not use CachingConnectionFactory?
The following are the configurations that I'm using in my application. I am hoping that it is something that I've misconfigured, and would appreciate any insights that anyone has to offer.
Spring Configurations
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() throws Throwable {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory());
    factory.setCacheLevel(DefaultMessageListenerContainer.CACHE_NONE);
    factory.setMaxMessagesPerTask(-1);
    factory.setConcurrency("1");
    factory.setSessionTransacted(true);
    return factory;
}

@Bean
public CachingConnectionFactory cachingConnectionFactory() {
    CachingConnectionFactory cachingConnectionFactory = new CachingConnectionFactory(connectionFactory());
    cachingConnectionFactory.setCacheConsumers(false);
    cachingConnectionFactory.setSessionCacheSize(1);
    return cachingConnectionFactory;
}

@Bean
public ActiveMQConnectionFactory connectionFactory() {
    RedeliveryPolicy redeliveryPolicy = new RedeliveryPolicy();
    redeliveryPolicy.setInitialRedeliveryDelay(1000L);
    redeliveryPolicy.setRedeliveryDelay(1000L);
    redeliveryPolicy.setMaximumRedeliveries(6);
    redeliveryPolicy.setUseExponentialBackOff(true);
    redeliveryPolicy.setBackOffMultiplier(5);
    ActiveMQConnectionFactory activeMQ = new ActiveMQConnectionFactory("admin", "admin", "tcp://localhost:61616");
    activeMQ.setRedeliveryPolicy(redeliveryPolicy);
    activeMQ.setPrefetchPolicy(prefetchPolicy());
    return activeMQ;
}

@Bean
public JmsMessagingTemplate jmsMessagingTemplate() {
    ActiveMQTopic activeMQ = new ActiveMQTopic("topic.out");
    JmsMessagingTemplate jmsMessagingTemplate = new JmsMessagingTemplate(cachingConnectionFactory());
    jmsMessagingTemplate.setDefaultDestination(activeMQ);
    return jmsMessagingTemplate;
}

protected ActiveMQPrefetchPolicy prefetchPolicy() {
    ActiveMQPrefetchPolicy prefetchPolicy = new ActiveMQPrefetchPolicy();
    int prefetchValue = 1000;
    prefetchPolicy.setQueuePrefetch(prefetchValue);
    return prefetchPolicy;
}
Thanks,
Juan
The issue was indeed the following line.
factory.setCacheLevel(DefaultMessageListenerContainer.CACHE_NONE);
The moment I removed it, the rapid connection creation stopped. With CACHE_NONE and a plain (non-caching) connection factory, the DefaultMessageListenerContainer obtains a fresh connection for every receive loop, which is what produced the once-per-second connects.
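For illustration, here is the same listener factory with an explicit cache level instead (a sketch; CACHE_CONSUMER keeps the connection, session, and consumer alive between receives):
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() throws Throwable {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory());
    // Cache the connection, session, and consumer instead of recreating them every receive loop
    factory.setCacheLevel(DefaultMessageListenerContainer.CACHE_CONSUMER);
    factory.setMaxMessagesPerTask(-1);
    factory.setConcurrency("1");
    factory.setSessionTransacted(true);
    return factory;
}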

Eclipse Paho Mqtt - Spring Java configuration

I want to use MQTT in my Spring MVC project. In this link, the official example creates all the objects with the new keyword. As far as I know, this is not Spring style; the recommended way is to create beans, isn't it?
I found some examples (spring-integration-mqtt, which is based on Eclipse Paho MQTT) configured XML-based, but I want a Java-based configuration. I configured the whole project Java-based; there is no .xml file in the project (not even web.xml).
If you can suggest an example with Java config or a good document about converting XML config to Java config, I would appreciate it.
Thanks in advance.
You can track the Pull Request on the matter, but let me share a piece of code to track more info here as well:
@Bean
public MessageProducer inbound() {
    MqttPahoMessageDrivenChannelAdapter adapter =
            new MqttPahoMessageDrivenChannelAdapter("tcp://localhost:1883", "testClient",
                    "topic1", "topic2");
    adapter.setCompletionTimeout(5000);
    adapter.setConverter(new DefaultPahoMessageConverter());
    adapter.setQos(1);
    adapter.setOutputChannel(mqttInputChannel());
    return adapter;
}

@Bean
@ServiceActivator(inputChannel = "mqttOutboundChannel")
public MessageHandler amqpOutbound() {
    MqttPahoMessageHandler messageHandler =
            new MqttPahoMessageHandler("testClient", mqttClientFactory());
    messageHandler.setAsync(true);
    messageHandler.setDefaultTopic("testTopic");
    return messageHandler;
}
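The snippet references mqttInputChannel() and mqttClientFactory(), which are not shown above; minimal definitions would look roughly like this (a sketch following the Spring Integration MQTT sample; the broker URL is an assumption):
@Bean
public MessageChannel mqttInputChannel() {
    return new DirectChannel();
}

@Bean
public MqttPahoClientFactory mqttClientFactory() {
    DefaultMqttPahoClientFactory factory = new DefaultMqttPahoClientFactory();
    MqttConnectOptions options = new MqttConnectOptions();
    options.setServerURIs(new String[] { "tcp://localhost:1883" });
    factory.setConnectionOptions(options);
    return factory;
}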
