I'm trying to set up the Elasticsearch exporter for a local stack:
https://github.com/prometheus-community/elasticsearch_exporter
I get a connection refused when running it with Docker, even though X-Pack security is disabled:
xpack.license.self_generated.type: basic
xpack.security.enabled: false
xpack.monitoring.collection.enabled: true
xpack.monitoring.exporters.my_local_exporter:
  type: local
bootstrap.memory_lock: true
search.allow_expensive_queries: true
indices.memory.index_buffer_size: 30%
I use Elasticsearch 8.1.2:
{
"name" : "0a166124ca20",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "xxxxxxxxxxxxxxx",
"version" : {
"number" : "8.1.2",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "xxxxxxxxxxxxxxx",
"build_date" : "2022-03-29T21:18:59.991429448Z",
"build_snapshot" : false,
"lucene_version" : "9.0.0",
"minimum_wire_compatibility_version" : "7.17.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "You Know, for Search"
}
I get a limited number of metrics but am missing the majority.
For example: elasticsearch_node_stats_up 0
Here is my docker-compose service:
elasticsearch_exporter:
  image: quay.io/prometheuscommunity/elasticsearch-exporter:latest
  command:
    - '--es.uri=http://localhost:9200'
    - '--es.ssl-skip-verify'
    - '--es.all'
  restart: always
  environment:
    - 'ES_API_KEY=Apikey xxxxxxxxxx'
  ports:
    - "9114:9114"
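A likely cause, independent of the exporter itself: inside the exporter container, `localhost` refers to the container, not the Docker host, so nothing is listening on port 9200 there. A sketch of a corrected service, assuming Elasticsearch runs as a compose service named `elasticsearch` on the same compose network (the service name is an assumption):

```yaml
elasticsearch_exporter:
  image: quay.io/prometheuscommunity/elasticsearch-exporter:latest
  command:
    # Point at the Elasticsearch service by its compose DNS name, not localhost.
    - '--es.uri=http://elasticsearch:9200'
    - '--es.all'
  ports:
    - "9114:9114"
```

If Elasticsearch runs directly on the host instead, `host.docker.internal` (on Docker Desktop) or the host's LAN IP would be the address to use.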
I have two Elasticsearch nodes installed from the tar archive on the same server, running on ports 9200 and 9201.
I would like both master-eligible nodes to run as one cluster.
Each node runs fine on its own, but clustering is not working.
Below are the network settings from each node's elasticsearch.yml file.
NODE-1
cluster.name: My-ElasticSearch
node.name: "node-1"
node.roles: [ master,data ]
network.host: 10.0.20.10
http.port: 9200
transport.port: 9300
discovery.seed_hosts: ["10.0.20.10:9300", "10.0.20.10:9301"]
cluster.initial_master_nodes: ["node-1", "node-2"]
http.cors.enabled: true
http.cors.allow-origin: "*"
transport.host: 10.0.20.10
node-1 run log
[2022-09-27T00:47:12,272][INFO ][o.e.e.NodeEnvironment ] [node-1] using [1] data paths, mounts [[/ (/dev/mapper/ubuntu--vg-ubuntu--lv)]], net usable_space [75.9gb], net total_space [94.9gb], types [ext4]
[2022-09-27T00:47:12,272][INFO ][o.e.e.NodeEnvironment ] [node-1] heap size [3.8gb], compressed ordinary object pointers [true]
[2022-09-27T00:47:12,315][INFO ][o.e.n.Node ] [node-1] node name [node-1], node ID [IHal4DVSRCaUes8HN-VWjA], cluster name [My-ElasticSearch], roles [data, master]
[2022-09-27T00:47:14,805][INFO ][o.e.x.s.Security ] [node-1] Security is disabled
[2022-09-27T00:47:14,851][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [node-1] [controller/1758] [Main.cc#123] controller (64 bit): Version 8.3.3 (Build d2d2e518384d45) Copyright (c) 2022 Elasticsearch BV
[2022-09-27T00:47:15,193][INFO ][o.e.t.n.NettyAllocator ] [node-1] creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]
[2022-09-27T00:47:15,214][INFO ][o.e.i.r.RecoverySettings ] [node-1] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
[2022-09-27T00:47:15,240][INFO ][o.e.d.DiscoveryModule ] [node-1] using discovery type [multi-node] and seed hosts providers [settings]
[2022-09-27T00:47:16,286][INFO ][o.e.n.Node ] [node-1] initialized
[2022-09-27T00:47:16,287][INFO ][o.e.n.Node ] [node-1] starting ...
[2022-09-27T00:47:16,306][INFO ][o.e.x.s.c.f.PersistentCache] [node-1] persistent cache index loaded
[2022-09-27T00:47:16,307][INFO ][o.e.x.d.l.DeprecationIndexingComponent] [node-1] deprecation component started
[2022-09-27T00:47:16,486][INFO ][o.e.t.TransportService ] [node-1] publish_address {10.0.20.10:9300}, bound_addresses {10.0.20.10:9300}
[2022-09-27T00:47:17,110][INFO ][o.e.b.BootstrapChecks ] [node-1] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2022-09-27T00:47:17,114][WARN ][o.e.c.c.ClusterBootstrapService] [node-1] this node is locked into cluster UUID [YsU25ZUuQRqZNUG3YykAjg] but [cluster.initial_master_nodes] is set to [node-1, node-2]; remove this setting to avoid possible data loss caused by subsequent cluster bootstrap attempts
[2022-09-27T00:47:17,236][INFO ][o.e.c.s.MasterService ] [node-1] elected-as-master ([1] nodes joined)[_FINISH_ELECTION_, {node-1}{IHal4DVSRCaUes8HN-VWjA}{qc71eyIPTjS_SIckpcdmgg}{node-1}{10.0.20.10}{10.0.20.10:9300}{dm} completing election], term: 24, version: 996, delta: master node changed {previous [], current [{node-1}{IHal4DVSRCaUes8HN-VWjA}{qc71eyIPTjS_SIckpcdmgg}{node-1}{10.0.20.10}{10.0.20.10:9300}{dm}]}
[2022-09-27T00:47:17,290][INFO ][o.e.c.s.ClusterApplierService] [node-1] master node changed {previous [], current [{node-1}{IHal4DVSRCaUes8HN-VWjA}{qc71eyIPTjS_SIckpcdmgg}{node-1}{10.0.20.10}{10.0.20.10:9300}{dm}]}, term: 24, version: 996, reason: Publication{term=24, version=996}
[2022-09-27T00:47:17,328][INFO ][o.e.h.AbstractHttpServerTransport] [node-1] publish_address {10.0.20.10:9200}, bound_addresses {[::]:9200}
[2022-09-27T00:47:17,328][INFO ][o.e.n.Node ] [node-1] started {node-1}{IHal4DVSRCaUes8HN-VWjA}{qc71eyIPTjS_SIckpcdmgg}{node-1}{10.0.20.10}{10.0.20.10:9300}{dm}{xpack.installed=true}
node-1 state
ubuntu@elasticsearch:~$ curl http://10.0.20.10:9200/_cluster/health?pretty
{
"cluster_name" : "My-ElasticSearch",
"status" : "red",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 17,
"active_shards" : 17,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 4,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 80.95238095238095
}
NODE-2
cluster.name: My-ElasticSearch
node.name: node-2
node.roles: [ master ]
network.bind_host: 10.0.20.10
network.host: 10.0.20.10
network.publish_host: 10.0.20.10
http.port: 9201
transport.port: 9301
discovery.seed_hosts: ["10.0.20.10:9300", "10.0.20.10:9301"]
cluster.initial_master_nodes: ["node-1", "node-2"]
http.host: 10.0.20.10
http.cors.enabled: true
http.cors.allow-origin: "*"
node-2 run log
[2022-09-27T00:53:20,372][INFO ][o.e.e.NodeEnvironment ] [node-2] using [1] data paths, mounts [[/ (/dev/mapper/ubuntu--vg-ubuntu--lv)]], net usable_space [75.9gb], net total_space [94.9gb], types [ext4]
[2022-09-27T00:53:20,372][INFO ][o.e.e.NodeEnvironment ] [node-2] heap size [4.6gb], compressed ordinary object pointers [true]
[2022-09-27T00:53:20,429][INFO ][o.e.n.Node ] [node-2] node name [node-2], node ID [PSaq7WA5RvSJduF-9Uk-KA], cluster name [My-ElasticSearch], roles [master]
[2022-09-27T00:53:23,022][INFO ][o.e.x.s.Security ] [node-2] Security is disabled
[2022-09-27T00:53:23,085][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [node-2] [controller/1995] [Main.cc#123] controller (64 bit): Version 8.3.3 (Build d2d2e518384d45) Copyright (c) 2022 Elasticsearch BV
[2022-09-27T00:53:23,506][INFO ][o.e.t.n.NettyAllocator ] [node-2] creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]
[2022-09-27T00:53:23,531][INFO ][o.e.i.r.RecoverySettings ] [node-2] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
[2022-09-27T00:53:23,560][INFO ][o.e.d.DiscoveryModule ] [node-2] using discovery type [multi-node] and seed hosts providers [settings]
[2022-09-27T00:53:24,634][INFO ][o.e.n.Node ] [node-2] initialized
[2022-09-27T00:53:24,635][INFO ][o.e.n.Node ] [node-2] starting ...
[2022-09-27T00:53:24,642][INFO ][o.e.x.d.l.DeprecationIndexingComponent] [node-2] deprecation component started
[2022-09-27T00:53:24,765][INFO ][o.e.t.TransportService ] [node-2] publish_address {10.0.20.10:9301}, bound_addresses {[::]:9301}
[2022-09-27T00:53:25,039][INFO ][o.e.b.BootstrapChecks ] [node-2] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2022-09-27T00:53:25,042][WARN ][o.e.c.c.ClusterBootstrapService] [node-2] this node is locked into cluster UUID [IOwpgSpjQYWX4Vr9f-Cx_g] but [cluster.initial_master_nodes] is set to [node-1, node-2]; remove this setting to avoid possible data loss caused by subsequent cluster bootstrap attempts
[2022-09-27T00:53:25,104][INFO ][o.e.c.s.MasterService ] [node-2] elected-as-master ([1] nodes joined)[_FINISH_ELECTION_, {node-2}{PSaq7WA5RvSJduF-9Uk-KA}{q8Ls3l-mR9u0pUJ4STb58w}{node-2}{10.0.20.10}{10.0.20.10:9301}{m} completing election], term: 19, version: 105, delta: master node changed {previous [], current [{node-2}{PSaq7WA5RvSJduF-9Uk-KA}{q8Ls3l-mR9u0pUJ4STb58w}{node-2}{10.0.20.10}{10.0.20.10:9301}{m}]}
[2022-09-27T00:53:25,164][INFO ][o.e.c.s.ClusterApplierService] [node-2] master node changed {previous [], current [{node-2}{PSaq7WA5RvSJduF-9Uk-KA}{q8Ls3l-mR9u0pUJ4STb58w}{node-2}{10.0.20.10}{10.0.20.10:9301}{m}]}, term: 19, version: 105, reason: Publication{term=19, version=105}
[2022-09-27T00:53:25,194][INFO ][o.e.h.AbstractHttpServerTransport] [node-2] publish_address {10.0.20.10:9201}, bound_addresses {10.0.20.10:9201}
[2022-09-27T00:53:25,195][INFO ][o.e.n.Node ] [node-2] started {node-2}{PSaq7WA5RvSJduF-9Uk-KA}{q8Ls3l-mR9u0pUJ4STb58w}{node-2}{10.0.20.10}{10.0.20.10:9301}{m}{xpack.installed=true}
node-2 state
ubuntu@elasticsearch:~$ curl http://10.0.20.10:9201/_cluster/health?pretty
{
"cluster_name" : "My-ElasticSearch",
"status" : "red",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 0,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 2,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 0.0
}
All X-Pack security settings are disabled.
Below is the result when connecting from the head plugin.
Why are the nodes not clustering? Please let me know.
(I've edited the question before posting it because my first draft wasn't clear.)
Verification requests via curl:
ubuntu@elasticsearch:~$ curl http://10.0.20.10:9200/
{
"name" : "node-1",
"cluster_name" : "My-ElasticSearch",
"cluster_uuid" : "YsU25ZUuQRqZNUG3YykAjg",
"version" : {
"number" : "8.3.3",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "801fed82df74dbe537f89b71b098ccaff88d2c56",
"build_date" : "2022-07-23T19:30:09.227964828Z",
"build_snapshot" : false,
"lucene_version" : "9.2.0",
"minimum_wire_compatibility_version" : "7.17.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "You Know, for Search"
}
ubuntu@elasticsearch:~$ curl http://10.0.20.10:9201/
{
"name" : "node-2",
"cluster_name" : "My-ElasticSearch",
"cluster_uuid" : "IOwpgSpjQYWX4Vr9f-Cx_g",
"version" : {
"number" : "8.3.3",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "801fed82df74dbe537f89b71b098ccaff88d2c56",
"build_date" : "2022-07-23T19:30:09.227964828Z",
"build_snapshot" : false,
"lucene_version" : "9.2.0",
"minimum_wire_compatibility_version" : "7.17.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "You Know, for Search"
}
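Note that the two `cluster_uuid` values in the responses above differ (YsU25… vs IOwpg…), which means each node has already bootstrapped its own single-node cluster instead of joining the other. That check can be scripted; a small sketch using the JSON lines copied from the curl outputs above:

```shell
# extract_uuid: pull the cluster_uuid value out of a root-endpoint JSON line.
extract_uuid() { grep -o '"cluster_uuid" : "[^"]*"' | cut -d '"' -f 4; }

# Sample lines copied from the two curl responses above.
u1=$(echo '"cluster_uuid" : "YsU25ZUuQRqZNUG3YykAjg",' | extract_uuid)
u2=$(echo '"cluster_uuid" : "IOwpgSpjQYWX4Vr9f-Cx_g",' | extract_uuid)

# Nodes in the same cluster must report the same UUID.
if [ "$u1" = "$u2" ]; then echo "same cluster"; else echo "separate clusters"; fi
# → separate clusters
```

When the UUIDs differ like this, one node's data path has to be wiped (or the node re-pointed at an empty path) before it can join the other's cluster.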
I followed Mark Walkom's method and it worked.
Before starting, you need to set discovery.seed_hosts and cluster.initial_master_nodes in advance on every node.
However, you have to start the remaining nodes within the bootstrap time limit.
When you start the first node, the following message appears; at that point you need to start the second node:
[2022-09-27T07:44:01,557][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-1] master not discovered yet, this node has not previously joined a bootstrapped cluster, and this node must discover master-eligible nodes [node-1, node-2] to bootstrap a cluster: have discovered [{node-1}{NBsKhaguTKiH_JNJ6DkRYw}{zmC5Tel1QIWGl1eUq4fLXQ}{node-1}{10.20.0.10}{10.20.0.10:9300}{dm}]; discovery will continue using [10.0.20.10:9300, 10.0.20.10:9301] from hosts providers and [{node-1}{NBsKhaguTKiH_JNJ6DkRYw}{zmC5Tel1QIWGl1eUq4fLXQ}{node-1}{10.20.0.10}{10.20.0.10:9300}{dm}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
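For reference, the prerequisite settings described above, as they appear in this question's configs, must be identical in both nodes' elasticsearch.yml before the very first start:

```yaml
# Both nodes must agree on the seed hosts and the initial master candidates
# before either node is started for the first time.
discovery.seed_hosts: ["10.0.20.10:9300", "10.0.20.10:9301"]
cluster.initial_master_nodes: ["node-1", "node-2"]
```

Once the cluster has bootstrapped, cluster.initial_master_nodes should be removed, as the warning in the logs above advises.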
But I want to add a node to an already-created cluster, and this method is too inconvenient for that.
If anyone knows how to add a node to an existing cluster, please let me know.
I'm trying to build an Elasticsearch cluster, but it causes an error.
Log for master node
[2020-06-23T16:33:47,361][WARN ][o.e.c.c.Coordinator ] [kn-log-01] failed to validate incoming join request from node [{kn-log-02}{tuCA1_YARK-HkHyzbpG4Nw}{0yZHEJGAQpKgWw336U2vDQ}{127.0.0.2}{127.0.0.2:9300}{dilrt}{ml.machine_memory=134888939520, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [kn-log-02][127.0.0.2:9300][internal:cluster/coordination/join/validate] request_id [88] timed out after [59835ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1041) [elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633) [elasticsearch-7.7.0.jar:7.7.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
at java.lang.Thread.run(Thread.java:832) [?:?]
Log for the data node trying to join
org.elasticsearch.transport.RemoteTransportException: [kn-log-01][127.0.0.1:9300][internal:cluster/coordination/join]
Caused by: java.lang.IllegalStateException: failure when sending a validation request to node
at org.elasticsearch.cluster.coordination.Coordinator$2.onFailure(Coordinator.java:514) ~[elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:59) ~[elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1139) ~[elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1139) ~[elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.transport.TransportService$8.run(TransportService.java:1001) ~[elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633) ~[elasticsearch-7.7.0.jar:7.7.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
at java.lang.Thread.run(Thread.java:832) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [kn-log-02][127.0.0.2:9300][internal:cluster/coordination/join/validate] disconnected
[2020-06-23T16:41:47,433][WARN ][o.e.c.c.ClusterFormationFailureHelper] [kn-log-02] master not discovered yet: have discovered [{kn-log-02}{tuCA1_YARK-HkHyzbpG4Nw}{0yZHEJGAQpKgWw336U2vDQ}{127.0.0.2}{127.0.0.2:9300}{dilrt}{ml.machine_memory=134888939520, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [127.0.0.1:9300, 127.0.0.3:9300, 127.0.0.4:9300] from hosts providers and [] from last-known cluster state; node term 1, last-accepted version 0 in term 0
[2020-06-23T16:41:57,434][WARN ][o.e.c.c.ClusterFormationFailureHelper] [kn-log-02] master not discovered yet: have discovered [{kn-log-02}{tuCA1_YARK-HkHyzbpG4Nw}{0yZHEJGAQpKgWw336U2vDQ}{127.0.0.2}{127.0.0.2:9300}{dilrt}{ml.machine_memory=134888939520, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [127.0.0.1:9300, 127.0.0.3:9300, 127.0.0.4:9300] from hosts providers and [] from last-known cluster state; node term 1, last-accepted version 0 in term 0
It's a timeout error and I don't know how to solve it. It doesn't work now, but it did yesterday, and I didn't (knowingly) change any Elasticsearch settings.
What I've tried already:
1) Checked the firewalld settings for ports 9200 and 9300 again.
2) Rebooted all machines.
3) Wiped the Elasticsearch data folders and restarted the services.
EDIT
elasticsearch.yml for master node (comments were omitted)
cluster.name: mycluster
node.name: kn-log-01
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.seed_hosts: ["127.0.0.1", "127.0.0.2", "127.0.0.3", "127.0.0.4"]
cluster.initial_master_nodes: ["kn-log-01"]
node.master: true
node.data: true
elasticsearch.yml for data node
cluster.name: mycluster
node.name: kn-log-02
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.seed_hosts: ["127.0.0.1", "127.0.0.2", "127.0.0.3", "127.0.0.4"]
cluster.initial_master_nodes: ["kn-log-01"]
node.master: false
node.data: true
Ensure both instances are up and running:
$ curl -XGET 127.0.0.1:9200
{
"name" : "kn-log-01",
"cluster_name" : "mycluster",
"cluster_uuid" : "jN-0FJwDRZqlAtQ6LpXwug",
"version" : {
"number" : "7.7.0",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "81a1e9eda8e6183f5237786246f6dced26a10eaf",
"build_date" : "2020-05-12T02:01:37.602180Z",
"build_snapshot" : false,
"lucene_version" : "8.5.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
$ curl -XGET 127.0.0.2:9200
{
"name" : "kn-log-02",
"cluster_name" : "mycluster",
"cluster_uuid" : "_na_",
"version" : {
"number" : "7.7.0",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "81a1e9eda8e6183f5237786246f6dced26a10eaf",
"build_date" : "2020-05-12T02:01:37.602180Z",
"build_snapshot" : false,
"lucene_version" : "8.5.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
$ curl -XGET 127.0.0.1:9200/_cat/nodes?v
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
127.0.0.1 15 2 0 0.01 0.03 0.05 dilmrt * kn-log-01
Finally solved. The issue was caused by a physical network problem:
the MTU of the Ethernet card was configured with a value the hardware does not support. After fixing it, everything works.
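That diagnosis can be scripted. A minimal sketch that lists each interface's configured MTU on Linux (the interface name in the fix command is a placeholder, not taken from this answer):

```shell
# Print the configured MTU of every network interface via sysfs.
for iface in /sys/class/net/*; do
  printf '%-12s %s\n' "$(basename "$iface")" "$(cat "$iface/mtu")"
done

# If a value exceeds what the hardware supports, correct it (requires root), e.g.:
#   ip link set dev eth0 mtu 1500
```

An oversized MTU can silently drop large packets, which matches the symptom here: small requests (the root endpoint) succeed while the larger join-validation payloads time out.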
I have just started learning the ELK stack. I am following this guide for installing it on my system:
https://www.elastic.co/guide/en/elastic-stack-get-started/6.4/get-started-elastic-stack.html
I have a problem when I try to start Kibana on my Windows system. I get the following error:
log [13:36:52.255] [warning][admin][elasticsearch] Unable to revive connection: http://localhost:9200/
log [13:36:52.277] [warning][admin][elasticsearch] No living connections
log [13:36:52.279] [warning][task_manager] PollError No Living connections
log [13:36:53.810] [warning][admin][elasticsearch] Unable to revive connection: http://localhost:9200/
log [13:36:53.836] [warning][admin][elasticsearch] No living connections
log [13:36:56.456] [warning][admin][elasticsearch] Unable to revive connection: http://localhost:9200/
log [13:36:56.457] [warning][admin][elasticsearch] No living connections
log [13:36:56.458] [warning][task_manager] PollError No Living connections
log [13:36:57.348] [warning][admin][elasticsearch] Unable to revive connection: http://localhost:9200/
log [13:36:57.349] [warning][admin][elasticsearch] No living connections
I think it is having a problem reaching the Elasticsearch connection, but I believe the Elasticsearch instance has started successfully. When I run
./bin/elasticsearch.bat
I get the following output:
[2019-09-01T18:34:11,594][INFO ][o.e.h.AbstractHttpServerTransport] [DESKTOP-TD85D7S] publish_address {192.168.0.101:9200}, bound_addresses {192.168.99.1:9200}, {192.168.56.1:9200}, {192.168.0.101:9200}
[2019-09-01T18:34:11,595][INFO ][o.e.n.Node ] [DESKTOP-TD85D7S] started
In your kibana.yml configuration file, you need to change the following line:
elasticsearch.hosts: ["http://localhost:9200"]
to
elasticsearch.hosts: ["http://192.168.0.101:9200"]
Note: Elasticsearch 7.4.0, Kibana 7.4.0
status: working.
I am using a docker-compose.yml file to run Elasticsearch and Kibana on localhost. Port 9200 is already in use by another service, so I have mapped 9201:9200 (port 9201 on localhost to port 9200 in the container).
In Kibana's environment variables we set the Elasticsearch host and port (the port must be the container port), e.g. ELASTICSEARCH_HOSTS=http://elasticsearch:9200
File: docker-compose.yml
version: '3.7'
services:
  # Elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.4.0
    container_name: elasticsearch
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    cap_add:
      - IPC_LOCK
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    ports:
      - 9201:9200
      - 9300:9300
  # Kibana
  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:7.4.0
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
volumes:
  elasticsearch-data:
    driver: local
Elasticsearch is running at http://localhost:9201; you should get something similar to:
{
"name" : "d0bb78764b7e",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "Djch5nbnSWC-EqYawp2Cng",
"version" : {
"number" : "7.4.0",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "22e1767283e61a198cb4db791ea66e3f11ab9910",
"build_date" : "2019-09-27T08:36:48.569419Z",
"build_snapshot" : false,
"lucene_version" : "8.2.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
Kibana is running at http://localhost:5601; open it in the browser.
Note: if your Docker host is some server other than your local machine, replace localhost with that server's hostname.
I found the error in a log file: /var/log/elasticsearch/my-instance.log
[2022-07-25T15:59:44,049][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler]
[nextcloud] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: ElasticsearchException[failed to bind service];
nested: AccessDeniedException[/var/lib/elasticsearch/nodes];
You have to set the setgid bit (s) on the folder /var/lib/elasticsearch/nodes:
# mkdir /var/lib/elasticsearch/nodes
# chown elasticsearch:elasticsearch /var/lib/elasticsearch/nodes
# chmod g+s /var/lib/elasticsearch/nodes
# ls -ltr /var/lib/elasticsearch/nodes
drwxr-sr-x 5 elasticsearch elasticsearch 4096 25 juil. 16:42 0/
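The effect of `chmod g+s` can be checked in a scratch directory first; a small sketch (the paths here are temporary, not the real /var/lib/elasticsearch):

```shell
# Reproduce the permission setup in a temporary directory and verify the
# setgid bit: it shows as 's' in the group slot of ls -l, or as a leading 2
# in the octal mode reported by stat.
dir=$(mktemp -d)/nodes
mkdir -p "$dir"
chmod g+s "$dir"
ls -ld "$dir"
stat -c %a "$dir"   # e.g. 2755: the leading 2 is the setgid bit
rm -rf "$(dirname "$dir")"
```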
You can then query localhost on port 9200:
# curl http://localhost:9200
{
"name" : "nextcloud",
"cluster_name" : "my-instance",
"cluster_uuid" : "040...V3TA",
"version" : {
"number" : "7.14.1",
"build_flavor" : "default",
"build_type" : "deb",
"build_hash" : "66b...331e",
"build_date" : "2021-08-26T09:01:05.390870785Z",
"build_snapshot" : false,
"lucene_version" : "8.9.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
My environment: Debian 11.
I installed Elasticsearch by hand, by downloading the package elasticsearch-7.14.1-amd64.deb:
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.14.1-amd64.deb
Hope it helps.
Elasticsearch 7.0.0 is configured as follows on CentOS 7.6:
sudo cat /etc/elasticsearch/elasticsearch.yml:
cluster.name: elk-log-elasticsearch
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
http.port: 9200
From inside the server:
curl --verbose http://127.0.0.1:9200
< HTTP/1.1 200 OK
< content-type: application/json; charset=UTF-8
< content-length: 525
<
{
"name" : "Cardif.software.altkom.pl",
"cluster_name" : "elk-log-elasticsearch",
"cluster_uuid" : "rTMG9hXBTk-CuA73G9KHSA",
"version" : {
"number" : "7.0.0",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "b7e28a7",
"build_date" : "2019-04-05T22:55:32.697037Z",
"build_snapshot" : false,
"lucene_version" : "8.0.0",
"minimum_wire_compatibility_version" : "6.7.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
From outside this server (call it 'A'), on server 'B' I can ping server 'A'.
I know its IP is something like 172.16.xx.x.
I can open Kibana at http://172.16.xx.x:5601 in a browser, but I cannot open the
Elasticsearch page at http://172.16.xx.x:9200.
How can I change the config to make it work?
Ports are enabled in firewalld:
firewall-cmd --list-all
ports: 5432/tcp 80/tcp 5601/tcp 5602/tcp 9200/tcp 9201/tcp 15672/tcp 8080/tcp 8081/tcp 8082/tcp 5488/tcp
I tried:
1)
network.host: 0.0.0.0
2)
network.bind_host: 172.x.x.x
This does the trick:
network.host: 0.0.0.0
discovery.seed_hosts: 127.0.0.1
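Sketching why each line matters (this reading is my interpretation, not part of the original answer):

```yaml
# Bind HTTP and transport to all interfaces so other hosts (server 'B') can connect;
# the default binds to loopback only, which is why 172.16.xx.x:9200 was refused.
network.host: 0.0.0.0
# Binding to a non-loopback address triggers the production bootstrap checks,
# which require a discovery setting; pointing seed_hosts at loopback satisfies
# them while keeping this a single-node setup.
discovery.seed_hosts: 127.0.0.1
```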