Logs from Elasticsearch to Apache NiFi (QueryElasticsearchHttp)

I'm trying to get data from my Elasticsearch cluster into Apache NiFi using QueryElasticsearchHttp. I'm using the latest Elasticsearch version (8.6.1) and Apache NiFi 1.19.1.
I'm getting the error below.
2023-02-02 03:23:55,110 ERROR [Timer-Driven Process Thread-5] o.a.n.p.e.QueryElasticsearchHttp [QueryElasticsearchHttp[id=1024bc75-0186-1000-ddf8-9843acd89d89], null, null] Failed to read {} from Elasticsearch due to {}
java.lang.NullPointerException: null
at org.apache.nifi.processors.elasticsearch.QueryElasticsearchHttp.getPage(QueryElasticsearchHttp.java:418)
at org.apache.nifi.processors.elasticsearch.QueryElasticsearchHttp.onTrigger(QueryElasticsearchHttp.java:354)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1356)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:246)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:102)
at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
So I went through the source code. It says that in Apache NiFi (QueryElasticsearchHttp) a Type attribute should be added. The problem is that I don't have a type in Elasticsearch. Is there any workaround, or how can I set an empty type to be queried from Elasticsearch using Lucene?
Once I set an arbitrary type in QueryElasticsearchHttp, the error below occurs, because no type is defined for my Elasticsearch logs.
2023-02-02 03:24:17,268 WARN [Timer-Driven Process Thread-5] o.a.n.p.e.QueryElasticsearchHttp QueryElasticsearchHttp[id=1024bc75-0186-1000-ddf8-9843acd89d89] Elasticsearch returned code 400 with message Bad Request.
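For context, my own assumption (not verified against the processor's source): Elasticsearch 8.x removed mapping types entirely, so any request path that still contains a type segment is rejected, which would explain the 400 once a type is set. A quick command-line check, where my-index and some_type are placeholders:

# Typed search path, similar to what QueryElasticsearchHttp builds when Type is set;
# Elasticsearch 8.x no longer has a handler for it and answers 400 Bad Request:
curl -i "http://localhost:9200/my-index/some_type/_search?q=*"
# The untyped path works:
curl -s "http://localhost:9200/my-index/_search?q=*"

One possible workaround (again hedged): NiFi 1.19.1 also ships the newer JSON-based Elasticsearch processors (e.g. JsonQueryElasticsearch backed by an ElasticSearchClientService), which do not require a type at all.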

Related

Kafka Sink Connector Elastic Cloud

I am trying to connect Kafka to Elastic Cloud via URL, username, and password.
I get a 404 error, so I guess the configuration is not the right one for connecting to the Elastic cluster.
How can I connect to Elastic Cloud?
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
type.name=_doc
connection.password=my_pass_elastic
topics=demo-topic-distributed
tasks.max=1
connection.username=my_user_elastic
connection.url=https://xxxxxxxxxxxxxxxxx.xx-xxx-x.aws.found.io:port
value.converter=org.apache.kafka.connect.json.JsonConverter
key.ignore=true
key.converter=org.apache.kafka.connect.storage.StringConverter
schema.ignore=true
org.apache.kafka.connect.errors.ConnectException: Could not create index 'demo-topic-distributed': 404 Not Found
at io.confluent.connect.elasticsearch.jest.JestElasticsearchClient.createIndex(JestElasticsearchClient.java:458)
at io.confluent.connect.elasticsearch.jest.JestElasticsearchClient.createIndices(JestElasticsearchClient.java:425)
at io.confluent.connect.elasticsearch.ElasticsearchWriter.createIndicesForTopics(ElasticsearchWriter.java:374)
at io.confluent.connect.elasticsearch.ElasticsearchSinkTask.open(ElasticsearchSinkTask.java:131)
at org.apache.kafka.connect.runtime.WorkerSinkTask.openPartitions(WorkerSinkTask.java:617)
at org.apache.kafka.connect.runtime.WorkerSinkTask.access$1100(WorkerSinkTask.java:71)
at org.apache.kafka.connect.runtime.WorkerSinkTask$HandleRebalance.onPartitionsAssigned(WorkerSinkTask.java:682)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.invokePartitionsAssigned(ConsumerCoordinator.java:293)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:430)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:449)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:365)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:508)
at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1261)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1230)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210)
at org.apache.kafka.connect.runtime.WorkerSinkTask.pollConsumer(WorkerSinkTask.java:454)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:229)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:189)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:239)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
The credentials I use in the connection.username and connection.password fields are the same ones I use to access Kibana on the cluster.
Specify the URL and credentials of your Elasticsearch host, not Kibana.
In Elastic Cloud you should have a "Copy endpoint" button for each product.
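As an illustrative sketch (the hostnames below are placeholders, and Elastic Cloud endpoint formats vary by deployment), the connector needs the Elasticsearch endpoint, which is distinct from the Kibana one:

connection.url=https://xxxxxxxxxxxxxxxxx.es.us-east-1.aws.found.io:9243
# not the Kibana endpoint, which would look like:
# https://xxxxxxxxxxxxxxxxx.kb.us-east-1.aws.found.io:9243

Pointing connection.url at Kibana is a plausible cause of the 404, since Kibana does not serve the Elasticsearch index APIs.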

Elasticsearch sink connector throws 403 forbidden exception when trying to create indices from topics

I am trying to create a sink connector to Elastic Cloud. This is the configuration of my Elasticsearch sink connector (created with ksqlDB).
create sink connector elastic_writer with (
'connector.class'='io.confluent.connect.elasticsearch.ElasticsearchSinkConnector',
'connection.url'='********',
'connection.username'='********',
'connection.password'='********',
'type.name'='kafka-connect',
'topics.regex'='sqlserver\.dbo\.*',
'schema.ignore'='true');
When I create the sink connector, I first get this error.
[2020-11-02 08:56:37,480] INFO Index 'sqlserver.dbo.quotations' not found in local cache; checking for existence (io.confluent.connect.elasticsearch.jest.JestElasticsearchClient)
[2020-11-02 08:56:37,486] INFO Index 'sqlserver.dbo.quotations' not found in Elasticsearch. Error message: 403 Forbidden (io.confluent.connect.elasticsearch.jest.JestElasticsearchClient)
[2020-11-02 08:56:37,486] INFO Requesting Elasticsearch create index 'sqlserver.dbo.quotations' (io.confluent.connect.elasticsearch.jest.JestElasticsearchClient)
[2020-11-02 08:56:37,494] INFO Index 'sqlserver.dbo.quotations' not found in local cache; checking for existence (io.confluent.connect.elasticsearch.jest.JestElasticsearchClient)
[2020-11-02 08:56:37,503] INFO Index 'sqlserver.dbo.quotations' not found in Elasticsearch. Error message: 403 Forbidden (io.confluent.connect.elasticsearch.jest.JestElasticsearchClient)
[2020-11-02 08:56:37,504] WARN Failed to create index sqlserver.dbo.quotations with attempt 1/6, will attempt retry after 62 ms. Failure reason: Could not create index 'sqlserver.dbo.quotations': 403 Forbidden (io.confluent.connect.elasticsearch.jest.JestElasticsearchClient)
Then it cycles through all of the retries until I finally get the following error and the task is killed.
[2020-11-02 08:56:40,245] ERROR WorkerSinkTask{id=ELASTIC_WRITER-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
org.apache.kafka.connect.errors.ConnectException: Could not create index 'sqlserver.dbo.quotations': 403 Forbidden
at io.confluent.connect.elasticsearch.jest.JestElasticsearchClient.createIndex(JestElasticsearchClient.java:451)
at io.confluent.connect.elasticsearch.jest.JestElasticsearchClient.createIndices(JestElasticsearchClient.java:421)
at io.confluent.connect.elasticsearch.ElasticsearchWriter.createIndicesForTopics(ElasticsearchWriter.java:374)
at io.confluent.connect.elasticsearch.ElasticsearchSinkTask.open(ElasticsearchSinkTask.java:131)
at org.apache.kafka.connect.runtime.WorkerSinkTask.openPartitions(WorkerSinkTask.java:614)
at org.apache.kafka.connect.runtime.WorkerSinkTask.access$1100(WorkerSinkTask.java:71)
at org.apache.kafka.connect.runtime.WorkerSinkTask$HandleRebalance.onPartitionsAssigned(WorkerSinkTask.java:679)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.invokePartitionsAssigned(ConsumerCoordinator.java:293)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:430)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:440)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:359)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:513)
at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1268)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1230)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210)
at org.apache.kafka.connect.runtime.WorkerSinkTask.pollConsumer(WorkerSinkTask.java:451)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:318)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:226)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:198)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:235)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
[2020-11-02 08:56:40,246] ERROR WorkerSinkTask{id=ELASTIC_WRITER-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask)
Any ideas on how to fix this problem? I've already tried creating the indices before creating the sink connector, but this didn't fix the problem and Kafka Connect threw the exact same error.
This 403 exception is thrown when the Elasticsearch sink connector is not allowed to reach the Elastic service. Check your firewall settings and/or the traffic filters applied to the Elastic Cloud deployment.
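One hedged way to narrow this down (endpoint, credentials, and index name below are placeholders): try to create an index directly with the connector's credentials, outside Kafka Connect:

curl -u "$ES_USER:$ES_PASS" -X PUT "https://your-deployment.es.europe-west1.gcp.cloud.es.io:9243/connect-permission-test"
# a 403 here too would confirm the problem is authorization or IP filtering
# on the Elastic side, not the connector configuration itself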

Error when using org.apache.kafka.connect.json.JsonConverter in Kafka Connect

I am trying to use the JSON converter in Kafka Connect, but it is throwing the error below:
{"type":"log", "host":"connecttest6-ckaf-connect-84866788d4-p8lkh", "level":"ERROR", "neid":"kafka-connect-4d9495b82e1e420992ec44c433d733ad", "system":"kafka-connect", "time":"2019-04-08T11:55:14.254Z", "timezone":"UTC", "log":"pool-5-thread-1 - org.apache.kafka.connect.runtime.WorkerTask - WorkerSinkTask{id=hive-sink6-1} Task threw an uncaught and unrecoverable exception"}
java.lang.ClassCastException: org.apache.kafka.connect.json.JsonConverter cannot be cast to io.confluent.connect.hdfs.Format
at io.confluent.connect.hdfs.DataWriter.<init>(DataWriter.java:242)
at io.confluent.connect.hdfs.DataWriter.<init>(DataWriter.java:103)
at io.confluent.connect.hdfs.HdfsSinkTask.start(HdfsSinkTask.java:98)
at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:302)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:191)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I tried the following configuration for JSON (key.converter.schemas.enable=false and value.converter.schemas.enable=false) in the source and the same in the HdfsSinkConnector configuration.
Connect configuration:
ConnectKeyConverter: "org.apache.kafka.connect.json.JsonConverter"
ConnectValueConverter: "org.apache.kafka.connect.json.JsonConverter"
ConnectKeyConverterSchemasEnable: "true"
ConnectValueConverterSchemasEnable: "true"
"http(s)://schemareg_headless_service_name.namespace.svc.cluster.local:port"
ConnectSchemaRegistryUrl: "http://kafka-schema-registry-ckaf-schema-registry-headless.ckaf.svc.cluster.local:8081"
ConnectInternalKeyConverter: "org.apache.kafka.connect.json.JsonConverter"
ConnectInternalValueConverter: "org.apache.kafka.connect.json.JsonConverter"
REST API command used to add sink(Sink Configuration):
curl -X PUT -H "Content-Type: application/json" --data '{"connector.class":"io.confluent.connect.hdfs.HdfsSinkConnector","tasks.max":"2","topics":"topic-test","hdfs.url": "hdfs://localhost/tmp/jsontest4","flush.size": "3","name": "thive-sink6","format.class":"org.apache.kafka.connect.json.JsonConverter","value.converter.schemas.enable":"false","key.converter.schemas.enable":"false"}' connecttest6-ckaf-connect.ckaf.svc.cluster.local:8083/connectors/hive-sink6/config
After adding the sink to Kafka Connect, I sent data to the respective Kafka topic. Below are the records I tried:
{"name":"test"}
{"schema":{"type":"struct","fields":[{"type":"string","field":"name"}]},"payload":{"name":"value1"}}
The data is expected to be written to the HDFS location provided in the sink configuration mentioned above.
I need suggestions on the above scenario and how the error can be resolved.
It appears as if somewhere in your configuration (for Kubernetes??) you've assigned format.class=org.apache.kafka.connect.json.JsonConverter, which isn't valid: format.class expects an implementation of io.confluent.connect.hdfs.Format, hence the ClassCastException.
Perhaps you meant to use io.confluent.connect.hdfs.json.JsonFormat.
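A corrected PUT body might look like the following. This is a sketch under the assumption that only format.class was wrong; I've also aligned "name" with the connector name in the URL path, which the original command did not do:

curl -X PUT -H "Content-Type: application/json" --data '{"connector.class":"io.confluent.connect.hdfs.HdfsSinkConnector","tasks.max":"2","topics":"topic-test","hdfs.url":"hdfs://localhost/tmp/jsontest4","flush.size":"3","name":"hive-sink6","format.class":"io.confluent.connect.hdfs.json.JsonFormat","value.converter.schemas.enable":"false","key.converter.schemas.enable":"false"}' connecttest6-ckaf-connect.ckaf.svc.cluster.local:8083/connectors/hive-sink6/config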

Apache MiNiFi - PutElasticsearch

I made a flow which processes real-time data from a local server and sends the relevant data to Elasticsearch. I use MiNiFi, but when I run MiNiFi it returns the following error.
Does anyone know where the issue is?
Thanks
ERROR [Timer-Driven Process Thread-10] o.a.n.p.elasticsearch.PutElasticsearch5 PutElasticsearch5[id=4ed70cbe-9838-35cd-0000-000000000000] PutElasticsearch5[id=4ed70cbe-9838-35cd-0000-000000000000] failed to process due to java.lang.NoClassDefFoundError: Could not initialize class org.elasticsearch.Version; rolling back session: {}
java.lang.NoClassDefFoundError: Could not initialize class org.elasticsearch.Version
at org.elasticsearch.common.io.stream.StreamOutput.<init>(StreamOutput.java:73)
at org.elasticsearch.common.io.stream.BytesStreamOutput.<init>(BytesStreamOutput.java:60)
at org.elasticsearch.common.io.stream.BytesStreamOutput.<init>(BytesStreamOutput.java:57)
at org.elasticsearch.common.io.stream.BytesStreamOutput.<init>(BytesStreamOutput.java:47)
at org.elasticsearch.common.xcontent.XContentBuilder.builder(XContentBuilder.java:67)
at org.elasticsearch.common.settings.Setting.arrayToParsableString(Setting.java:698)
at org.elasticsearch.common.settings.Setting.lambda$listSetting$26(Setting.java:656)
at org.elasticsearch.common.settings.Setting$2.getRaw(Setting.java:660)
at org.elasticsearch.common.settings.Setting.get(Setting.java:300)
at org.elasticsearch.plugins.PluginsService.<init>(PluginsService.java:164)
at org.elasticsearch.client.transport.TransportClient.newPluginService(TransportClient.java:81)
at org.elasticsearch.client.transport.TransportClient.buildTemplate(TransportClient.java:106)
at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:228)
at org.elasticsearch.transport.client.PreBuiltTransportClient.<init>(PreBuiltTransportClient.java:69)
at org.elasticsearch.transport.client.PreBuiltTransportClient.<init>(PreBuiltTransportClient.java:65)
at org.apache.nifi.processors.elasticsearch.AbstractElasticsearch5TransportClientProcessor.getTransportClient(AbstractElasticsearch5TransportClientProcessor.java:230)
at org.apache.nifi.processors.elasticsearch.AbstractElasticsearch5TransportClientProcessor.createElasticsearchClient(AbstractElasticsearch5TransportClientProcessor.java:170)
at org.apache.nifi.processors.elasticsearch.AbstractElasticsearch5Processor.setup(AbstractElasticsearch5Processor.java:94)
at org.apache.nifi.processors.elasticsearch.PutElasticsearch5.onTrigger(PutElasticsearch5.java:177)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
To reduce its footprint, MiNiFi Java ships with only the standard bundle of processors. To use other processors that are present in a standard NiFi deployment, you need to put the appropriate "nar" file into the "lib" directory of the MiNiFi deployment; see the sketch after the links below.
For "PutElasticsearch" you need "nifi-elasticsearch-nar-<version>.nar", where "<version>" is the version of NiFi that your version of MiNiFi is built off of. Version 0.4.0 of MiNiFi Java uses NiFi 1.5.0.
For more information and a list of the processors that come bundled with MiNiFi out of the box, see the "MiNiFi Java Agent Quick Start" documentation, section "Using Processors Not Packaged with MiNiFi"[1]. For more information on how the different versions of MiNiFi correspond to NiFi framework versions, see here[2].
[1] https://nifi.apache.org/minifi/minifi-java-agent-quick-start.html
[2] https://cwiki.apache.org/confluence/display/MINIFI/MiNiFi+Versioning+and+Toolkit+Compatibility
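As a concrete sketch (hedged: exact file names and paths depend on your NiFi and MiNiFi versions; for the PutElasticsearch5 processor in the trace above, the NAR would be the elasticsearch-5 one):

# copy the Elasticsearch NAR from a matching NiFi 1.5.0 release into MiNiFi's lib
cp nifi-1.5.0/lib/nifi-elasticsearch-5-nar-1.5.0.nar minifi-0.4.0/lib/
# restart MiNiFi so the new NAR is picked up
minifi-0.4.0/bin/minifi.sh restart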

Getting ClusterBlockException while running queries using node client

My Elasticsearch cluster (version 2.0) is started and the node client is built successfully, but for some reason I'm getting the following error while running queries using the node client.
20:15:15.479 [Pool:entitytaskscheduler: Thread#1] DEBUG c.b.o.e.t.c.DataCollectorStatusUpdateTask - collectors updated due to agent reconnected:{}
ClusterBlockException[blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];]
at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:154)
at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedRaiseException(ClusterBlocks.java:144)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.<init>(TransportSearchTypeAction.java:116)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.<init>(TransportSearchQueryThenFetchAction.java:73)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.<init>(TransportSearchQueryThenFetchAction.java:67)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction.doExecute(TransportSearchQueryThenFetchAction.java:64)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction.doExecute(TransportSearchQueryThenFetchAction.java:53)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:70)
at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:99)
at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:44)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:70)
at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:58)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:347)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:85)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:59)
at com.hidden.ppp.management.dc.DataCollectorPollStatusDAOESImpl.findDCIdsUpdatedInTime(DataCollectorPollStatusDAOESImpl.java:151)
at com.hidden.ppp.engine.taskexecutor.cptaskexecs.DataCollectorStatusUpdateTask.execute(DataCollectorStatusUpdateTask.java:199)
at com.hidden.ppp.engine.taskexecutor.cptaskexecs.DataCollectorStatusUpdateTaskRunner.run(DataCollectorStatusUpdateTaskRunner.java:27)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
20:15:15.558 [Pool:entitytaskscheduler: Thread#1] WARN c.b.o.m.d.DataCollectorPollStatusDAOESImpl - blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
20:15:15.558 [Pool:entitytaskscheduler: Thread#1] DEBUG c.b.o.e.t.c.DataCollectorStatusUpdateTask - collectors for which polls updated after epoc time:1453128243336 - dcids: []
ClusterBlockException[blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];]
at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:154)
at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedRaiseException(ClusterBlocks.java:144)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.<init>(TransportSearchTypeAction.java:116)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.<init>(TransportSearchQueryThenFetchAction.java:73)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.<init>(TransportSearchQueryThenFetchAction.java:67)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction.doExecute(TransportSearchQueryThenFetchAction.java:64)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction.doExecute(TransportSearchQueryThenFetchAction.java:53)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:70)
at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:99)
at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:44)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:70)
at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:58)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:347)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:85)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:59)
at com.hidden.ppp.management.dc.DataCollectorPollStatusDAOESImpl.findDCIdsNotUpdatedInTime(DataCollectorPollStatusDAOESImpl.java:182)
at com.hidden.ppp.engine.taskexecutor.cptaskexecs.DataCollectorStatusUpdateTask.execute(DataCollectorStatusUpdateTask.java:204)
at com.hidden.ppp.engine.taskexecutor.cptaskexecs.DataCollectorStatusUpdateTaskRunner.run(DataCollectorStatusUpdateTaskRunner.java:27)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
I've even disabled "multicast" as per this post, but still no luck. Surprisingly, I can access Elasticsearch from Sense. Any clues as to what is going wrong?
I faced the same error message and was not able to understand the problem first. I was developing a node client Java application on my laptop, using an Elasticsearch data node on a remote server. For production use, I needed to deploy the Java application on this remote server.
I configured the Java application to talk to the local host only (being on the same host now):
elasticsearch.discovery.zen.ping.unicast.hosts=127.0.0.1
And got the same exception
ClusterBlockException[blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];]
Looking at the logs I also found this entry:
[WARN] [TP-Processor2] DiscoveryService.waitForInitialState -> [cerbera] waited for 30s and no initial state was set by the discovery
So basically, the question was: Why doesn't it find the Elasticsearch data node? I changed port ranges and also played with the multicast setting - without success.
Finally, I checked elasticsearch.yml and found that the data node was not listening on localhost (127.0.0.1), but instead on the ethernet interface 192.168.1.2:
network.host: 192.168.1.2
http.port: 9200
The final change was simple: I just needed to reconfigure the node client to talk to the correct interface.
elasticsearch.discovery.zen.ping.unicast.hosts=192.168.1.2
Now my node client is talking to Elasticsearch via the correct interface. Job done.
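As a quick sanity check (using the host and port from the elasticsearch.yml above), you can confirm which interface Elasticsearch is bound to and that the cluster state has recovered:

curl -s "http://192.168.1.2:9200/_cluster/health?pretty"
# "status" should be yellow or green; the ClusterBlockException above corresponds
# to the state-not-recovered block that exists before the cluster has formed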
I had the same problem (using k8s). I finally replaced my Elasticsearch image and the issue was solved:
I moved from 6.5.4-debian-9-r41 to 6.8.16-debian-10-r5 (using Bitnami images).
I know it is not the best answer, but I really tried the suggested answers and nothing worked for me, so my recommendation is to update to a newer version (Docker makes that easy).
