Elasticsearch 7.0.0 is configured as follows on CentOS 7.6 (sudo cat /etc/elasticsearch/elasticsearch.yml):
cluster.name: elk-log-elasticsearch
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
http.port: 9200
From inside the server:
curl --verbose http://127.0.0.1:9200
< HTTP/1.1 200 OK
< content-type: application/json; charset=UTF-8
< content-length: 525
<
{
"name" : "Cardif.software.altkom.pl",
"cluster_name" : "elk-log-elasticsearch",
"cluster_uuid" : "rTMG9hXBTk-CuA73G9KHSA",
"version" : {
"number" : "7.0.0",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "b7e28a7",
"build_date" : "2019-04-05T22:55:32.697037Z",
"build_snapshot" : false,
"lucene_version" : "8.0.0",
"minimum_wire_compatibility_version" : "6.7.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
From outside this server (call it 'A'), on server 'B', I can ping server 'A'.
Its IP is of the form 172.16.xx.x.
I can reach Kibana at http://172.16.xx.x:5601 in a browser, but I cannot reach the
Elasticsearch page at http://172.16.xx.x:9200.
How can I change the config to make it work?
Ports are enabled in firewalld:
firewall-cmd --list-all
ports: 5432/tcp 80/tcp 5601/tcp 5602/tcp 9200/tcp 9201/tcp 15672/tcp 8080/tcp 8081/tcp 8082/tcp 5488/tcp
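Before changing any Elasticsearch settings, it can help to confirm at the TCP level which ports server 'B' can actually reach on server 'A'. A minimal sketch (the port_open helper and the 172.16.0.1 address are illustrative, not from the original post):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run from server 'B'; replace the address with server 'A''s real IP.
host = "172.16.0.1"
print("Kibana 5601 reachable:       ", port_open(host, 5601))
print("Elasticsearch 9200 reachable:", port_open(host, 9200))
```

If 5601 answers but 9200 does not, the firewall is passing traffic and the problem is almost certainly that Elasticsearch is bound only to 127.0.0.1.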
I tried:
1)
network.host : 0.0.0.0
2)
network.bind_host: 172.x.x.x
This does the trick:
network.host: 0.0.0.0
discovery.seed_hosts: 127.0.0.1
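Putting the working settings together, a minimal /etc/elasticsearch/elasticsearch.yml for this box might look like the sketch below (cluster name and paths taken from the original file). Note that once network.host is set to a non-loopback address, Elasticsearch 7 enforces its production bootstrap checks, which is why a discovery setting is required alongside it:

```yaml
cluster.name: elk-log-elasticsearch
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
http.port: 9200

# Listen on all interfaces so other hosts (e.g. server 'B') can connect
network.host: 0.0.0.0

# Satisfies the ES 7 discovery bootstrap check on a single-node setup
discovery.seed_hosts: ["127.0.0.1"]
```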
I'm trying to set up the Elasticsearch exporter for a local stack:
https://github.com/prometheus-community/elasticsearch_exporter
I get a connection refused error when running it with Docker, even when X-Pack security is disabled.
xpack.license.self_generated.type: basic
xpack.security.enabled: false
xpack.monitoring.collection.enabled: true
xpack.monitoring.exporters.my_local_exporter:
type: local
bootstrap.memory_lock: true
search.allow_expensive_queries: true
indices.memory.index_buffer_size: 30%
I use Elasticsearch 8.1.2:
{
"name" : "0a166124ca20",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "xxxxxxxxxxxxxxx",
"version" : {
"number" : "8.1.2",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "xxxxxxxxxxxxxxx",
"build_date" : "2022-03-29T21:18:59.991429448Z",
"build_snapshot" : false,
"lucene_version" : "9.0.0",
"minimum_wire_compatibility_version" : "7.17.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "You Know, for Search"
}
I get a limited number of metrics but am missing the majority,
for example: elasticsearch_node_stats_up 0
Here is my docker-compose:
elasticsearch_exporter:
image: quay.io/prometheuscommunity/elasticsearch-exporter:latest
command:
- '--es.uri=http://localhost:9200'
- '--es.ssl-skip-verify'
- '--es.all'
restart: always
environment:
- 'ES_API_KEY=Apikey xxxxxxxxxx'
ports:
- "9114:9114"
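One likely cause (a guess, since the full compose file isn't shown): inside the exporter container, localhost refers to the exporter itself, not to Elasticsearch, so --es.uri=http://localhost:9200 is refused. Assuming the Elasticsearch service is named elasticsearch and shares a Compose network with the exporter, pointing the exporter at the service name would look like:

```yaml
  elasticsearch_exporter:
    image: quay.io/prometheuscommunity/elasticsearch-exporter:latest
    command:
      # Use the Compose service name, not localhost, to reach the ES container
      - '--es.uri=http://elasticsearch:9200'
      - '--es.all'
    restart: always
    ports:
      - "9114:9114"
```

With xpack.security.enabled: false, the ES_API_KEY setting is likely unnecessary as well.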
I'm working on Presto on Spark, with Elasticsearch as a data source. I'm not able to run queries using Presto.
elasticsearch.properties:
elasticsearch.ignore-publish-address=true
elasticsearch.default-schema-name=default
elasticsearch.host=localhost
connector.name=elasticsearch
elasticsearch.port=2900
docker-compose.yaml
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
container_name: elasticsearch
environment:
- xpack.security.enabled=false
- discovery.type=single-node
- network.host=0.0.0.0
ports:
- '9200:9200'
networks:
- pqp-net
networks:
pqp-net:
driver: bridge
I'm getting the error below:
c.f.p.e.client.ElasticsearchClient - Error refreshing nodes
com.facebook.presto.spi.PrestoException: Connection refused
I am able to fetch Elasticsearch details at http://localhost:9200:
{
"name" : "ab751e0dd0ad",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "3T66bOexSGOo6Pwtt2Ul4Q",
"version" : {
"number" : "7.6.1",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "aa751e09be0a5072e8570670309b1f12348f023b",
"build_date" : "2020-02-29T00:15:25.529771Z",
"build_snapshot" : false,
"lucene_version" : "8.4.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
If anyone has faced the same issue, please help.
Thanks in advance.
Resolved: the issue was the port number configured in my application. After changing the port to a different one, I was able to connect to Elasticsearch.
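For future readers: the catalog file above points Presto at port 2900, while the compose file publishes Elasticsearch on 9200:9200. The poster doesn't say which port fixed it, but assuming the default mapping shown, a working elasticsearch.properties would presumably be:

```properties
connector.name=elasticsearch
elasticsearch.host=localhost
elasticsearch.port=9200
elasticsearch.default-schema-name=default
elasticsearch.ignore-publish-address=true
```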
My EC2 server has two nodes. When I start node_data with the command /home/esdata/bin/elasticsearch,
node_master, which was running, goes offline.
The logs on node_data show this error:
[2020-12-15T18:08:57,391][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node_data_1] master not discovered yet: have discovered [{node_data_1}{elNK7JY9S4-E51sUY5zitw}{cDTkyu3bTmypDseY7IFrxw}{localhost}{127.0.0.1:8300}{drt}{xpack.installed=true, transform.node=true}]; discovery will continue using [127.0.0.1:9300] from hosts providers and [] from last-known cluster state; node term 0, last-accepted version 0 in term 0
[2020-12-15T18:09:07,392][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node_data_1] master not discovered yet: have discovered [{node_data_1}{elNK7JY9S4-E51sUY5zitw}{cDTkyu3bTmypDseY7IFrxw}{localhost}{127.0.0.1:8300}{drt}{xpack.installed=true, transform.node=true}]; discovery will continue using [127.0.0.1:9300] from hosts providers and [] from last-known cluster state; node term 0, last-accepted version 0 in term 0
I tried curl, with the results below:
root@ip-xxx:~# curl -XGET http://localhost:9201
{
"name" : "node_data_1",
"cluster_name" : "eslasticsearch",
"cluster_uuid" : "_na_",
"version" : {
"number" : "7.9.1",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "083627f112ba94dffc1232e8b42b73492789ef91",
"build_date" : "2020-09-01T21:22:21.964974Z",
"build_snapshot" : false,
"lucene_version" : "8.6.2",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
root@ip-xxx:~# curl -XGET http://localhost:9200
curl: (7) Failed to connect to localhost port 9200: Connection refused
Config for node_master in /etc/elasticsearch/elasticsearch.yml:
cluster.name: elasticsearch
node.name: node_master
node.master: true
node.data: true
node.ingest: false
node.ml: false
transport.host: localhost
transport.tcp.port: 9300
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["127.0.0.1:9300", "127.0.0.1:8300"]
cluster.initial_master_nodes: ["node_master"]
and config for node_data in /home/esdata/config.yml:
cluster.name: elasticsearch
node.name: node_data_1
node.master: false
node.data: true
node.ingest: false
node.ml: false
transport.host: localhost
transport.tcp.port: 8300
network.host: 0.0.0.0
http.port: 9201
discovery.seed_hosts: ["127.0.0.1:9300", "127.0.0.1:8300"]
Can someone help me fix it? Do I have to run two ES instances? How can I do that?
Thanks in advance.
I'm trying to build an Elasticsearch cluster, but it causes an error.
Log for the master node:
[2020-06-23T16:33:47,361][WARN ][o.e.c.c.Coordinator ] [kn-log-01] failed to validate incoming join request from node [{kn-log-02}{tuCA1_YARK-HkHyzbpG4Nw}{0yZHEJGAQpKgWw336U2vDQ}{127.0.0.2}{127.0.0.2:9300}{dilrt}{ml.machine_memory=134888939520, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [kn-log-02][127.0.0.2:9300][internal:cluster/coordination/join/validate] request_id [88] timed out after [59835ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1041) [elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633) [elasticsearch-7.7.0.jar:7.7.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
at java.lang.Thread.run(Thread.java:832) [?:?]
Log for the data node trying to join:
org.elasticsearch.transport.RemoteTransportException: [kn-log-01][127.0.0.1:9300][internal:cluster/coordination/join]
Caused by: java.lang.IllegalStateException: failure when sending a validation request to node
at org.elasticsearch.cluster.coordination.Coordinator$2.onFailure(Coordinator.java:514) ~[elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:59) ~[elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1139) ~[elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1139) ~[elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.transport.TransportService$8.run(TransportService.java:1001) ~[elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633) ~[elasticsearch-7.7.0.jar:7.7.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
at java.lang.Thread.run(Thread.java:832) [?:?]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [kn-log-02][127.0.0.2:9300][internal:cluster/coordination/join/validate] disconnected
[2020-06-23T16:41:47,433][WARN ][o.e.c.c.ClusterFormationFailureHelper] [kn-log-02] master not discovered yet: have discovered [{kn-log-02}{tuCA1_YARK-HkHyzbpG4Nw}{0yZHEJGAQpKgWw336U2vDQ}{127.0.0.2}{127.0.0.2:9300}{dilrt}{ml.machine_memory=134888939520, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [127.0.0.1:9300, 127.0.0.3:9300, 127.0.0.4:9300] from hosts providers and [] from last-known cluster state; node term 1, last-accepted version 0 in term 0
[2020-06-23T16:41:57,434][WARN ][o.e.c.c.ClusterFormationFailureHelper] [kn-log-02] master not discovered yet: have discovered [{kn-log-02}{tuCA1_YARK-HkHyzbpG4Nw}{0yZHEJGAQpKgWw336U2vDQ}{127.0.0.2}{127.0.0.2:9300}{dilrt}{ml.machine_memory=134888939520, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [127.0.0.1:9300, 127.0.0.3:9300, 127.0.0.4:9300] from hosts providers and [] from last-known cluster state; node term 1, last-accepted version 0 in term 0
It reports a timeout error and I don't know how to solve it. It doesn't work now, but it did yesterday, and I didn't change any Elasticsearch settings (as far as I know).
What I have already tried:
Checking the firewalld settings for ports 9200 and 9300 again.
Rebooting all machines.
Wiping the Elasticsearch data folders and restarting the services.
EDIT
elasticsearch.yml for the master node (comments omitted):
cluster.name: mycluster
node.name: kn-log-01
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.seed_hosts: ["127.0.0.1", "127.0.0.2", "127.0.0.3", "127.0.0.4"]
cluster.initial_master_nodes: ["kn-log-01"]
node.master: true
node.data: true
elasticsearch.yml for the data node:
cluster.name: mycluster
node.name: kn-log-02
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.seed_hosts: ["127.0.0.1", "127.0.0.2", "127.0.0.3", "127.0.0.4"]
cluster.initial_master_nodes: ["kn-log-01"]
node.master: false
node.data: true
Ensure both instances are up and running:
$ curl -XGET 127.0.0.1:9200
{
"name" : "kn-log-01",
"cluster_name" : "mycluster",
"cluster_uuid" : "jN-0FJwDRZqlAtQ6LpXwug",
"version" : {
"number" : "7.7.0",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "81a1e9eda8e6183f5237786246f6dced26a10eaf",
"build_date" : "2020-05-12T02:01:37.602180Z",
"build_snapshot" : false,
"lucene_version" : "8.5.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
$ curl -XGET 127.0.0.2:9200
{
"name" : "kn-log-02",
"cluster_name" : "mycluster",
"cluster_uuid" : "_na_",
"version" : {
"number" : "7.7.0",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "81a1e9eda8e6183f5237786246f6dced26a10eaf",
"build_date" : "2020-05-12T02:01:37.602180Z",
"build_snapshot" : false,
"lucene_version" : "8.5.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
$ curl -XGET 127.0.0.1:9200/_cat/nodes?v
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
127.0.0.1 15 2 0 0.01 0.03 0.05 dilmrt * kn-log-01
Finally solved. The issue was caused by a physical network problem:
the MTU of the Ethernet card was configured with a value the hardware does not support. After fixing it, everything now works.
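For anyone debugging a similar symptom: join/validate timeouts while small requests still succeed (curl works) are consistent with large packets being silently dropped. One way to inspect the MTU on a Linux host (interface names below are examples):

```shell
# Show the configured MTU of every network interface (Linux sysfs)
cat /sys/class/net/*/mtu

# With iproute2 you can inspect and, if needed, reset the MTU.
# eth0 is an example interface name:
#   ip -brief link show
#   sudo ip link set dev eth0 mtu 1500
```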
I have just started to learn about the ELK stack. I am following this guide to install the ELK stack on my system:
https://www.elastic.co/guide/en/elastic-stack-get-started/6.4/get-started-elastic-stack.html
I have a problem when I try to start Kibana on my Windows system. I get the following error:
log [13:36:52.255] [warning][admin][elasticsearch] Unable to revive connection: http://localhost:9200/
log [13:36:52.277] [warning][admin][elasticsearch] No living connections
log [13:36:52.279] [warning][task_manager] PollError No Living connections
log [13:36:53.810] [warning][admin][elasticsearch] Unable to revive connection: http://localhost:9200/
log [13:36:53.836] [warning][admin][elasticsearch] No living connections
log [13:36:56.456] [warning][admin][elasticsearch] Unable to revive connection: http://localhost:9200/
log [13:36:56.457] [warning][admin][elasticsearch] No living connections
log [13:36:56.458] [warning][task_manager] PollError No Living connections
log [13:36:57.348] [warning][admin][elasticsearch] Unable to revive connection: http://localhost:9200/
log [13:36:57.349] [warning][admin][elasticsearch] No living connections
I think it is having a problem connecting to Elasticsearch, but the Elasticsearch instance seems to have started successfully. When I run
./bin/elasticsearch.bat
I get the following output:
[2019-09-01T18:34:11,594][INFO ][o.e.h.AbstractHttpServerTransport] [DESKTOP-TD85D7S] publish_address {192.168.0.101:9200}, bound_addresses {192.168.99.1:9200}, {192.168.56.1:9200}, {192.168.0.101:9200}
[2019-09-01T18:34:11,595][INFO ][o.e.n.Node ] [DESKTOP-TD85D7S] started
Elasticsearch is publishing on 192.168.0.101:9200 (see publish_address in the startup log above) and is not bound to localhost, so Kibana cannot reach it there. In your kibana.yml configuration file, you need to change the following line:
elasticsearch.hosts: ["http://localhost:9200"]
to
elasticsearch.hosts: ["http://192.168.0.101:9200"]
Note: Elasticsearch 7.4.0, Kibana 7.4.0.
Status: working.
I am using a docker-compose.yml file to run Elasticsearch and Kibana on localhost. Port 9200 is being used by another service, so I have mapped 9201:9200 (9201 on localhost to 9200 in the Docker container).
In Kibana's environment variables we set the Elasticsearch host and port (the port should be the container port), e.g. ELASTICSEARCH_HOSTS=http://elasticsearch:9200
File: docker-compose.yml
version: '3.7'
services:
# Elasticsearch
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.4.0
container_name: elasticsearch
environment:
- xpack.security.enabled=false
- discovery.type=single-node
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
cap_add:
- IPC_LOCK
volumes:
- elasticsearch-data:/usr/share/elasticsearch/data
ports:
- 9201:9200
- 9300:9300
# Kibana
kibana:
container_name: kibana
image: docker.elastic.co/kibana/kibana:7.4.0
environment:
- ELASTICSEARCH_HOSTS=http://elasticsearch:9200
ports:
- 5601:5601
depends_on:
- elasticsearch
volumes:
elasticsearch-data:
driver: local
Elasticsearch is running at http://localhost:9201; you will get something similar to:
{
"name" : "d0bb78764b7e",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "Djch5nbnSWC-EqYawp2Cng",
"version" : {
"number" : "7.4.0",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "22e1767283e61a198cb4db791ea66e3f11ab9910",
"build_date" : "2019-09-27T08:36:48.569419Z",
"build_snapshot" : false,
"lucene_version" : "8.2.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
Kibana is running at http://localhost:5601; open it in the browser.
Note: if Docker is running on a server other than your local machine, replace localhost with that server's host.
I found the error in a log file: /var/log/elasticsearch/my-instance.log
[2022-07-25T15:59:44,049][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler]
[nextcloud] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: ElasticsearchException[failed to bind service];
nested: AccessDeniedException[/var/lib/elasticsearch/nodes];
You have to set the setgid bit (the s bit) on the folder /var/lib/elasticsearch/nodes and make the elasticsearch user own it:
# mkdir /var/lib/elasticsearch/nodes
# chown elasticsearch:elasticsearch /var/lib/elasticsearch/nodes
# chmod g+s /var/lib/elasticsearch/nodes
# ls -ltr /var/lib/elasticsearch/nodes
drwxr-sr-x 5 elasticsearch elasticsearch 4096 25 juil. 16:42 0/
You can then query localhost on port 9200:
# curl http://localhost:9200
{
"name" : "nextcloud",
"cluster_name" : "my-instance",
"cluster_uuid" : "040...V3TA",
"version" : {
"number" : "7.14.1",
"build_flavor" : "default",
"build_type" : "deb",
"build_hash" : "66b...331e",
"build_date" : "2021-08-26T09:01:05.390870785Z",
"build_snapshot" : false,
"lucene_version" : "8.9.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
My environment: Debian 11.
I installed Elasticsearch by hand, by downloading the package elasticsearch-7.14.1-amd64.deb:
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.14.1-amd64.deb
Hope it helps.