elastic exporter - connection refused - elasticsearch

I'm trying to set up the Elasticsearch exporter for a local stack:
https://github.com/prometheus-community/elasticsearch_exporter
I get a connection refused when running with Docker, even when X-Pack security is disabled.
xpack.license.self_generated.type: basic
xpack.security.enabled: false
xpack.monitoring.collection.enabled: true
xpack.monitoring.exporters.my_local_exporter:
  type: local
bootstrap.memory_lock: true
search.allow_expensive_queries: true
indices.memory.index_buffer_size: 30%
I use Elasticsearch 8.1.2:
{
"name" : "0a166124ca20",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "xxxxxxxxxxxxxxx",
"version" : {
"number" : "8.1.2",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "xxxxxxxxxxxxxxx",
"build_date" : "2022-03-29T21:18:59.991429448Z",
"build_snapshot" : false,
"lucene_version" : "9.0.0",
"minimum_wire_compatibility_version" : "7.17.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "You Know, for Search"
}
I get a limited amount of metrics but am missing the majority.
For example: elasticsearch_node_stats_up 0
Here is my docker-compose:
elasticsearch_exporter:
  image: quay.io/prometheuscommunity/elasticsearch-exporter:latest
  command:
    - '--es.uri=http://localhost:9200'
    - '--es.ssl-skip-verify'
    - '--es.all'
  restart: always
  environment:
    - 'ES_API_KEY=Apikey xxxxxxxxxx'
  ports:
    - "9114:9114"

Related

connection refused when trying to run Elasticsearch query on presto ( spark )

I'm working on Presto on Spark, with Elasticsearch as the data source. I'm not able to run queries using Presto.
elasticsearch.properties:
elasticsearch.ignore-publish-address=true
elasticsearch.default-schema-name=default
elasticsearch.host=localhost
connector.name=elasticsearch
elasticsearch.port=2900
docker-compose.yaml
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
  container_name: elasticsearch
  environment:
    - xpack.security.enabled=false
    - discovery.type=single-node
    - network.host=0.0.0.0
  ports:
    - '9200:9200'
  networks:
    - pqp-net
networks:
  pqp-net:
    driver: bridge
I'm getting the below error:
c.f.p.e.client.ElasticsearchClient - Error refreshing nodes
com.facebook.presto.spi.PrestoException: Connection refused
Well, I'm able to fetch the details of Elasticsearch:
http://localhost:9200
{ "name" : "ab751e0dd0ad", "cluster_name" : "docker-cluster", "cluster_uuid" : "3T66bOexSGOo6Pwtt2Ul4Q", "version" : {
"number" : "7.6.1",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "aa751e09be0a5072e8570670309b1f12348f023b",
"build_date" : "2020-02-29T00:15:25.529771Z",
"build_snapshot" : false,
"lucene_version" : "8.4.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1" }, "tagline" : "You Know, for Search" }
If anyone has faced the same issue, please help.
Thanks in advance.
Resolved: I had an issue with the port number in my application configuration; elasticsearch.port was set to 2900 while the container publishes Elasticsearch on 9200. After changing it to the correct port I was able to connect to Elasticsearch.
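For reference, a corrected elasticsearch.properties matching the 9200:9200 mapping in the compose file above (a sketch built only from the settings already shown):
connector.name=elasticsearch
elasticsearch.host=localhost
# must match the host port published by the container
elasticsearch.port=9200
elasticsearch.default-schema-name=default
elasticsearch.ignore-publish-address=true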

Elasticsearch - Reindex Picks Up No Documents using official documentation

I am following reindex documentation here: https://www.elastic.co/guide/en/cloud/current/ec-migrate-data.html
I have two Elasticsearch instances on localhost:
192.168.0.100:9200
{
"name" : "fses01",
"cluster_name" : "fs-es-cluster",
"cluster_uuid" : "DqVDBBafRaO9UAJPKuc_xQ",
"version" : {
"number" : "7.14.0",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "dd5a0a2acaa2045ff9624f3729fc8a6f40835aa1",
"build_date" : "2021-07-29T20:49:32.864135063Z",
"build_snapshot" : false,
"lucene_version" : "8.9.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
With the following index:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open formshare_records NpEFzsGvTEmivnYjJhwHJg 5 1 51343 0 7.8mb 3.9mb
192.168.0.100:9201
{
"name" : "es01",
"cluster_name" : "es-docker-cluster",
"cluster_uuid" : "bvsRXLwZRtKuWIrIuKx0hg",
"version" : {
"number" : "7.14.2",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "6bc13727ce758c0e943c3c21653b3da82f627f75",
"build_date" : "2021-09-15T10:18:09.722761972Z",
"build_snapshot" : false,
"lucene_version" : "8.9.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
With the same index but blank:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open formshare_records uxvizIAIS82YBHSsvpq4-Q 5 1 0 0 2kb 1kb
I am doing a reindex using curl with the following:
curl -X POST "192.168.0.100:9201/_reindex" -H 'Content-Type: application/json' -d'
{
"source": {
"remote": {
"host": "http://192.168.0.100:9200"
},
"index": "formshare_records",
"query": {
"match_all": {}
}
},
"dest": {
"index": "formshare_records"
}
}
'
But I get zero documents transferred:
{"took":34,"timed_out":false,"total":0,"updated":0,"created":0,"deleted":0,"batches":0,"version_conflicts":0,"noops":0,"retries":{"bulk":0,"search":0},"throttled_millis":0,"requests_per_second":-1.0,"throttled_until_millis":0,"failures":[]}
The index definition is the same on both sides:
{"settings": {
"index": {
"number_of_shards": 5,
"number_of_replicas": 1
}
},
"mappings": {
"properties": {
"project_id": {"type": "keyword"},
"form_id": {"type": "keyword"},
"schema": {"type": "keyword"},
"table": {"type": "keyword"}
}
}}
Both of them run under Docker:
192.168.0.100:9200
version: '3'
services:
  fses01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.0
    container_name: fses01
    environment:
      - node.name=fses01
      - cluster.name=fs-es-cluster
      - discovery.seed_hosts=fses02
      - cluster.initial_master_nodes=fses01,fses02
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"
      - cluster.max_shards_per_node=20000
      - script.max_compilations_rate=10000/1m
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /opt/elasticsearch-docker/data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  fses02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.0
    container_name: fses02
    environment:
      - node.name=fses02
      - cluster.name=fs-es-cluster
      - discovery.seed_hosts=fses01
      - cluster.initial_master_nodes=fses01,fses02
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"
      - cluster.max_shards_per_node=20000
      - script.max_compilations_rate=10000/1m
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /opt/elasticsearch-docker/data2:/usr/share/elasticsearch/data
    networks:
      - esnet
  kib01:
    image: docker.elastic.co/kibana/kibana:7.14.0
    container_name: kib01
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: http://fses01:9200
      ELASTICSEARCH_HOSTS: '["http://fses01:9200","http://fses02:9200"]'
    networks:
      - esnet
networks:
  esnet:
192.168.0.100:9201
version: '3.7'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.2
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - reindex.remote.whitelist=192.168.0.100:9200
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /opt/elasticsearch710/data:/usr/share/elasticsearch/data
    ports:
      - 9201:9200
    networks:
      - elastic
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.2
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /opt/elasticsearch710/data2:/usr/share/elasticsearch/data
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.2
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /opt/elasticsearch710/data3:/usr/share/elasticsearch/data
    networks:
      - elastic
networks:
  elastic:
    driver: bridge
The container behind 9201 can see the index on 9200:
sudo docker exec -it es01 /bin/bash
[root@943159977b17 elasticsearch]# curl -X GET "192.168.0.100:9200/_cat/indices/formshare_records?v&s=index&pretty"
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open formshare_records NpEFzsGvTEmivnYjJhwHJg 5 1 51343 0 7.8mb 3.9mb
curl -X POST "192.168.0.100:9201/_reindex" -H 'Content-Type: application/json' -d'
{
"source": {
"remote": {
"host": "http://192.168.0.100:9200"
},
"index": "formshare_records",
"query": {
"match_all": {}
}
},
"dest": {
"index": "formshare_records"
}
}
'
{"took":18,"timed_out":false,"total":0,"updated":0,"created":0,"deleted":0,"batches":0,"version_conflicts":0,"noops":0,"retries":{"bulk":0,"search":0},"throttled_millis":0,"requests_per_second":-1.0,"throttled_until_millis":0,"failures":[]}
Any idea why the reindex does not work? I tried without the query but I get the same result!
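One thing worth verifying in a setup like this is that the source index actually returns hits to a plain search, since reindex-from-remote drives the search API, and that the remote whitelist is really active on the destination node. Both checks below use standard APIs and reuse the addresses from the question:
# count searchable documents on the source cluster
curl -X GET "192.168.0.100:9200/formshare_records/_count?pretty"
# confirm the static reindex settings on the destination node behind 9201
curl -X GET "192.168.0.100:9201/_cluster/settings?include_defaults=true&filter_path=defaults.reindex&pretty"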

ES 7.7 failed to join a cluster because of time-out [closed]

I'm trying to build an Elasticsearch cluster but it causes an error.
Log from the master node:
[2020-06-23T16:33:47,361][WARN ][o.e.c.c.Coordinator ] [kn-log-01] failed to validate incoming join request from node [{kn-log-02}{tuCA1_YARK-HkHyzbpG4Nw}{0yZHEJGAQpKgWw336U2vDQ}{127.0.0.2}{127.0.0.2:9300}{dilrt}{ml.machine_memory=134888939520, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [kn-log-02][127.0.0.2:9300][internal:cluster/coordination/join/validate] request_id [88] timed out after [59835ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1041) [elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633) [elasticsearch-7.7.0.jar:7.7.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
at java.lang.Thread.run(Thread.java:832) [?:?]
Log from the data node trying to join:
org.elasticsearch.transport.RemoteTransportException: [kn-log-01][127.0.0.1:9300][internal:cluster/coordination/join]
Caused by: java.lang.IllegalStateException: failure when sending a validation request to node
at org.elasticsearch.cluster.coordination.Coordinator$2.onFailure(Coordinator.java:514) ~[elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:59) ~[elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1139) ~[elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1139) ~[elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.transport.TransportService$8.run(TransportService.java:1001) ~[elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633) ~[elasticsearch-7.7.0.jar:7.7.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
at java.lang.Thread.run(Thread.java:832) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [kn-log-02][127.0.0.2:9300][internal:cluster/coordination/join/validate] disconnected
[2020-06-23T16:41:47,433][WARN ][o.e.c.c.ClusterFormationFailureHelper] [kn-log-02] master not discovered yet: have discovered [{kn-log-02}{tuCA1_YARK-HkHyzbpG4Nw}{0yZHEJGAQpKgWw336U2vDQ}{127.0.0.2}{127.0.0.2:9300}{dilrt}{ml.machine_memory=134888939520, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [127.0.0.1:9300, 127.0.0.3:9300, 127.0.0.4:9300] from hosts providers and [] from last-known cluster state; node term 1, last-accepted version 0 in term 0
[2020-06-23T16:41:57,434][WARN ][o.e.c.c.ClusterFormationFailureHelper] [kn-log-02] master not discovered yet: have discovered [{kn-log-02}{tuCA1_YARK-HkHyzbpG4Nw}{0yZHEJGAQpKgWw336U2vDQ}{127.0.0.2}{127.0.0.2:9300}{dilrt}{ml.machine_memory=134888939520, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [127.0.0.1:9300, 127.0.0.3:9300, 127.0.0.4:9300] from hosts providers and [] from last-known cluster state; node term 1, last-accepted version 0 in term 0
It's giving a time-out error and I don't know how to solve it. It doesn't work now, but yesterday it did. I didn't change any Elasticsearch settings (I think).
What I did already:
Checked the firewalld settings for ports 9200 and 9300 again.
Rebooted all machines.
Wiped the Elasticsearch data folders and restarted the services.
EDIT
elasticsearch.yml for master node (comments were omitted)
cluster.name: mycluster
node.name: kn-log-01
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.seed_hosts: ["127.0.0.1", "127.0.0.2", "127.0.0.3", "127.0.0.4"]
cluster.initial_master_nodes: ["kn-log-01"]
node.master: true
node.data: true
elasticsearch.yml for data node
cluster.name: mycluster
node.name: kn-log-02
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.seed_hosts: ["127.0.0.1", "127.0.0.2", "127.0.0.3", "127.0.0.4"]
cluster.initial_master_nodes: ["kn-log-01"]
node.master: false
node.data: true
Ensure both instances are up and running:
$ curl -XGET 127.0.0.1:9200
{
"name" : "kn-log-01",
"cluster_name" : "mycluster",
"cluster_uuid" : "jN-0FJwDRZqlAtQ6LpXwug",
"version" : {
"number" : "7.7.0",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "81a1e9eda8e6183f5237786246f6dced26a10eaf",
"build_date" : "2020-05-12T02:01:37.602180Z",
"build_snapshot" : false,
"lucene_version" : "8.5.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
$ curl -XGET 127.0.0.2:9200
{
"name" : "kn-log-02",
"cluster_name" : "mycluster",
"cluster_uuid" : "_na_",
"version" : {
"number" : "7.7.0",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "81a1e9eda8e6183f5237786246f6dced26a10eaf",
"build_date" : "2020-05-12T02:01:37.602180Z",
"build_snapshot" : false,
"lucene_version" : "8.5.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
$ curl -XGET 127.0.0.1:9200/_cat/nodes?v
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
127.0.0.1 15 2 0 0.01 0.03 0.05 dilmrt * kn-log-01
Solved finally. The issue was caused by a physical network problem: the MTU of the Ethernet card was configured with a value the hardware does not support. After fixing it, everything works.
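For anyone checking the same thing, the MTU can be inspected and adjusted on Linux with iproute2 (the interface name eth0 below is an assumption; substitute your NIC):
# show the current MTU of the interface
ip link show eth0
# temporarily set a standard 1500-byte MTU (persist it via your distro's network config)
sudo ip link set dev eth0 mtu 1500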

How to resolve the error "No living connection" while starting Kibana in windows

I have just started to learn about the ELK stack. I am referring to this site for installing the ELK stack on my system:
https://www.elastic.co/guide/en/elastic-stack-get-started/6.4/get-started-elastic-stack.html
I have a problem when I try to start Kibana on my Windows system. I get the following error:
log [13:36:52.255] [warning][admin][elasticsearch] Unable to revive connection: http://localhost:9200/
log [13:36:52.277] [warning][admin][elasticsearch] No living connections
log [13:36:52.279] [warning][task_manager] PollError No Living connections
log [13:36:53.810] [warning][admin][elasticsearch] Unable to revive connection: http://localhost:9200/
log [13:36:53.836] [warning][admin][elasticsearch] No living connections
log [13:36:56.456] [warning][admin][elasticsearch] Unable to revive connection: http://localhost:9200/
log [13:36:56.457] [warning][admin][elasticsearch] No living connections
log [13:36:56.458] [warning][task_manager] PollError No Living connections
log [13:36:57.348] [warning][admin][elasticsearch] Unable to revive connection: http://localhost:9200/
log [13:36:57.349] [warning][admin][elasticsearch] No living connections
I think it is having a problem fetching the Elasticsearch connection, but the Elasticsearch instance seems to have started successfully. When I run
./bin/elasticsearch.bat
I get the following results:
[2019-09-01T18:34:11,594][INFO ][o.e.h.AbstractHttpServerTransport] [DESKTOP-TD85D7S] publish_address {192.168.0.101:9200}, bound_addresses {192.168.99.1:9200}, {192.168.56.1:9200}, {192.168.0.101:9200}
[2019-09-01T18:34:11,595][INFO ][o.e.n.Node ] [DESKTOP-TD85D7S] started
The Elasticsearch log above shows publish_address {192.168.0.101:9200}, so Kibana must point at that address instead of localhost. In your kibana.yml configuration file, you need to change the following line:
elasticsearch.hosts: ["http://localhost:9200"]
to
elasticsearch.hosts: ["http://192.168.0.101:9200"]
Note: Elasticsearch 7.4.0, Kibana 7.4.0
status: working.
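After saving kibana.yml, restart Kibana so the change takes effect; on Windows that is the batch launcher shipped with the distribution:
bin\kibana.bat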
I am using a docker-compose.yml file to run Elasticsearch and Kibana on localhost. Port 9200 is being used by another service, so I have mapped 9201:9200 (9201 on localhost to 9200 in the Docker container).
In the Kibana environment variables we set the Elasticsearch host and port (the port should be the container port), e.g. ELASTICSEARCH_HOSTS=http://elasticsearch:9200
File: docker-compose.yml
version: '3.7'
services:
  # Elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.4.0
    container_name: elasticsearch
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    cap_add:
      - IPC_LOCK
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    ports:
      - 9201:9200
      - 9300:9300
  # Kibana
  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:7.4.0
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
volumes:
  elasticsearch-data:
    driver: local
Elasticsearch is running at http://localhost:9201; you will get something similar to:
{
"name" : "d0bb78764b7e",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "Djch5nbnSWC-EqYawp2Cng",
"version" : {
"number" : "7.4.0",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "22e1767283e61a198cb4db791ea66e3f11ab9910",
"build_date" : "2019-09-27T08:36:48.569419Z",
"build_snapshot" : false,
"lucene_version" : "8.2.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
Kibana is running at http://localhost:5601; open it in the browser.
Note: if your Docker is running on some server other than your local machine, replace localhost with that server's host.
I found the error in a log file: /var/log/elasticsearch/my-instance.log
[2022-07-25T15:59:44,049][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler]
[nextcloud] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: ElasticsearchException[failed to bind service];
nested: AccessDeniedException[/var/lib/elasticsearch/nodes];
You have to set the setgid bit (s) on the folder /var/lib/elasticsearch/nodes:
# mkdir /var/lib/elasticsearch/nodes
# chown elasticsearch:elasticsearch /var/lib/elasticsearch/nodes
# chmod g+s /var/lib/elasticsearch/nodes
# ls -ltr /var/lib/elasticsearch/nodes
drwxr-sr-x 5 elasticsearch elasticsearch 4096 25 juil. 16:42 0/
You can then query localhost on port 9200:
# curl http://localhost:9200
{
"name" : "nextcloud",
"cluster_name" : "my-instance",
"cluster_uuid" : "040...V3TA",
"version" : {
"number" : "7.14.1",
"build_flavor" : "default",
"build_type" : "deb",
"build_hash" : "66b...331e",
"build_date" : "2021-08-26T09:01:05.390870785Z",
"build_snapshot" : false,
"lucene_version" : "8.9.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
My environment: Debian 11.
I installed Elasticsearch by hand, by downloading the package elasticsearch-7.14.1-amd64.deb:
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.14.1-amd64.deb
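The package would then be installed with dpkg (the standard Debian step, not shown above):
# installs the package along with its systemd unit
sudo dpkg -i elasticsearch-7.14.1-amd64.deb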
Hope it helps.

Elasticsearch page visible from localhost but not from remote server

Elasticsearch 7.0.0 is configured like this on CentOS 7.6:
sudo cat /etc/elasticsearch/elasticsearch.yml:
cluster.name: elk-log-elasticsearch
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
http.port: 9200
From inside the server:
curl --verbose http://127.0.0.1:9200
< HTTP/1.1 200 OK
< content-type: application/json; charset=UTF-8
< content-length: 525
<
{
"name" : "Cardif.software.altkom.pl",
"cluster_name" : "elk-log-elasticsearch",
"cluster_uuid" : "rTMG9hXBTk-CuA73G9KHSA",
"version" : {
"number" : "7.0.0",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "b7e28a7",
"build_date" : "2019-04-05T22:55:32.697037Z",
"build_snapshot" : false,
"lucene_version" : "8.0.0",
"minimum_wire_compatibility_version" : "6.7.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
From outside of this server (call it 'A'), on server 'B' I can ping server 'A'.
I know that its IP is like: 172.16.xx.x
I can reach Kibana at http://172.16.xx.x:5601 in a browser, but I cannot reach the
Elasticsearch page at http://172.16.xx.x:9200
How can I change the config to make it work?
Ports are enabled in firewalld:
firewall-cmd --list-all
ports: 5432/tcp 80/tcp 5601/tcp 5602/tcp 9200/tcp 9201/tcp 15672/tcp 8080/tcp 8081/tcp 8082/tcp 5488/tcp
I tried:
1)
network.host: 0.0.0.0
2)
network.bind_host: 172.x.x.x
This does the trick:
network.host: 0.0.0.0
discovery.seed_hosts: 127.0.0.1
By default Elasticsearch binds only to loopback, so network.host: 0.0.0.0 is needed to listen on all interfaces. Binding to a non-loopback address also switches Elasticsearch into production mode, whose bootstrap checks require an explicit discovery configuration, hence the discovery.seed_hosts line.
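After editing elasticsearch.yml, restart the service and verify from server 'B' (standard systemd commands for an RPM install):
sudo systemctl restart elasticsearch
# from server 'B'; should now return the cluster info JSON
curl http://172.16.xx.x:9200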
