Elasticsearch connection refused while Kibana is trying to connect

I am trying to run the ELK stack in Docker containers, but I am getting an error that Kibana is unable to make a connection to Elasticsearch:
kibana_1 | {"type":"log","#timestamp":"2018-06-22T19:31:38Z","tags":["error","elasticsearch","admin"],"pid":12,"message":"Request error, retrying\nHEAD http://elasticsearch:9200/ => connect ECONNREFUSED 172.18.0.2:9200"}
kibana_1 | {"type":"log","#timestamp":"2018-06-22T19:31:38Z","tags":["status","plugin:console#5.6.9","info"],"pid":12,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","#timestamp":"2018-06-22T19:31:38Z","tags":["warning","elasticsearch","admin"],"pid":12,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1 | {"type":"log","#timestamp":"2018-06-22T19:31:38Z","tags":["warning","elasticsearch","admin"],"pid":12,"message":"No living connections"}
kibana_1 | {"type":"log","#timestamp":"2018-06-22T19:31:38Z","tags":["status","plugin:elasticsearch#5.6.9","error"],"pid":12,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch:9200.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana_1 | {"type":"log","#timestamp":"2018-06-22T19:31:38Z","tags":["status","plugin:metrics#5.6.9","info"],"pid":12,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
elasticsearch_1 | [2018-06-22T19:31:38,182][INFO ][o.e.d.DiscoveryModule ] [g8HPieb] using discovery type [zen]
kibana_1 | {"type":"log","#timestamp":"2018-06-22T19:31:38Z","tags":["status","plugin:timelion#5.6.9","info"],"pid":12,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","#timestamp":"2018-06-22T19:31:38Z","tags":["listening","info"],"pid":12,"message":"Server running at http://0.0.0.0:5601"}
kibana_1 | {"type":"log","#timestamp":"2018-06-22T19:31:38Z","tags":["status","ui settings","error"],"pid":12,"state":"red","message":"Status changed from uninitialized to red - Elasticsearch plugin is red","prevState":"uninitialized","prevMsg":"uninitialized"}
elasticsearch_1 | [2018-06-22T19:31:38,634][INFO ][o.e.n.Node ] initialized
elasticsearch_1 | [2018-06-22T19:31:38,634][INFO ][o.e.n.Node ] [g8HPieb] starting ...
elasticsearch_1 | [2018-06-22T19:31:38,767][INFO ][o.e.t.TransportService ] [g8HPieb] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
elasticsearch_1 | [2018-06-22T19:31:38,776][WARN ][o.e.b.BootstrapChecks ] [g8HPieb] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
logstash_1 | log4j:WARN No appenders could be found for logger (io.netty.util.internal.logging.InternalLoggerFactory).
logstash_1 | log4j:WARN Please initialize the log4j system properly.
logstash_1 | log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
logstash_1 | {:timestamp=>"2018-06-22T19:31:40.555000+0000", :message=>"Connection refused (Connection refused)", :class=>"Manticore::SocketException", :level=>:error}
kibana_1 | {"type":"log","@timestamp":"2018-06-22T19:31:40Z","tags":["warning","elasticsearch","admin"],"pid":12,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1 | {"type":"log","@timestamp":"2018-06-22T19:31:40Z","tags":["warning","elasticsearch","admin"],"pid":12,"message":"No living connections"}
Here is the content of my docker-compose file:
version: "2.0"
services:
logstash:
image: logstash:2
ports:
- "5044:5044"
volumes:
- ./:/config
command: logstash -f /config/logstash.conf
links:
- elasticsearch
depends_on:
- elasticsearch
elasticsearch:
image: elasticsearch:5.6.9
ports:
- "9200:9200"
volumes:
- "./es_data/es_data:/usr/share/elasticsearch/data/"
kibana:
image: kibana:5
ports:
- "5601:5601"
links:
- elasticsearch
environment:
ELASTICSEARCH_URL: http://elasticsearch:9200
depends_on:
- elasticsearch
Content of my logstash.conf
input { beats { port => 5044 } }
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
  stdout {
    codec => rubydebug
  }
}
I have curled Elasticsearch from both the elasticsearch container and the kibana container, and it looks good to me:
{
  "name" : "g8HPieb",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "XxH0TcAmQcGqprf6s7TJEQ",
  "version" : {
    "number" : "5.6.9",
    "build_hash" : "877a590",
    "build_date" : "2018-04-12T16:25:14.838Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}
curl localhost:9200/_cat/indices?pretty
yellow open .kibana GIBmXdlRQJmI67oq5r4oCg 1 1 1 0 3.2kb 3.2kb
After increasing the virtual memory limit (vm.max_map_count):
root@sfbp19:~/dockerizing-jenkins# sysctl -p
vm.max_map_count = 262144
root@sfbp19:~/dockerizing-jenkins# docker-compose -f docker-compose-elk.yml up
Creating network "dockerizingjenkins_default" with the default driver
Creating dockerizingjenkins_elasticsearch_1
Creating dockerizingjenkins_logstash_1
Creating dockerizingjenkins_kibana_1
Attaching to dockerizingjenkins_elasticsearch_1, dockerizingjenkins_kibana_1, dockerizingjenkins_logstash_1
elasticsearch_1 | [2018-06-26T19:08:19,294][INFO ][o.e.n.Node ] [] initializing ...
elasticsearch_1 | [2018-06-26T19:08:19,363][INFO ][o.e.e.NodeEnvironment ] [PVmTsqv] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/mapper/sfbp19--vg-root)]], net usable_space [671.9gb], net total_space [789.2gb], spins? [possibly], types [ext4]
elasticsearch_1 | [2018-06-26T19:08:19,364][INFO ][o.e.e.NodeEnvironment ] [PVmTsqv] heap size [1.9gb], compressed ordinary object pointers [true]
elasticsearch_1 | [2018-06-26T19:08:19,369][INFO ][o.e.n.Node ] node name [PVmTsqv] derived from node ID [PVmTsqv3QnyS3sQarPcJ-A]; set [node.name] to override
elasticsearch_1 | [2018-06-26T19:08:19,369][INFO ][o.e.n.Node ] version[5.6.9], pid[1], build[877a590/2018-04-12T16:25:14.838Z], OS[Linux/4.4.0-31-generic/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_171/25.171-b11]
elasticsearch_1 | [2018-06-26T19:08:19,369][INFO ][o.e.n.Node ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/share/elasticsearch]
elasticsearch_1 | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [aggs-matrix-stats]
elasticsearch_1 | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [ingest-common]
elasticsearch_1 | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [lang-expression]
elasticsearch_1 | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [lang-groovy]
elasticsearch_1 | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [lang-mustache]
elasticsearch_1 | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [lang-painless]
elasticsearch_1 | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [parent-join]
elasticsearch_1 | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [percolator]
elasticsearch_1 | [2018-06-26T19:08:20,041][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [reindex]
elasticsearch_1 | [2018-06-26T19:08:20,041][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [transport-netty3]
elasticsearch_1 | [2018-06-26T19:08:20,041][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [transport-netty4]
elasticsearch_1 | [2018-06-26T19:08:20,041][INFO ][o.e.p.PluginsService ] [PVmTsqv] no plugins loaded
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:20Z","tags":["status","plugin:kibana#5.6.9","info"],"pid":13,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:20Z","tags":["status","plugin:elasticsearch#5.6.9","info"],"pid":13,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:20Z","tags":["error","elasticsearch","admin"],"pid":13,"message":"Request error, retrying\nHEAD http://elasticsearch:9200/ => connect ECONNREFUSED 172.18.0.2:9200"}
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:20Z","tags":["warning","elasticsearch","admin"],"pid":13,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:20Z","tags":["warning","elasticsearch","admin"],"pid":13,"message":"No living connections"}
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:20Z","tags":["status","plugin:console#5.6.9","info"],"pid":13,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:20Z","tags":["status","plugin:elasticsearch#5.6.9","error"],"pid":13,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch:9200.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:20Z","tags":["status","plugin:metrics#5.6.9","info"],"pid":13,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:21Z","tags":["status","plugin:timelion#5.6.9","info"],"pid":13,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:21Z","tags":["listening","info"],"pid":13,"message":"Server running at http://0.0.0.0:5601"}
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:21Z","tags":["status","ui settings","error"],"pid":13,"state":"red","message":"Status changed from uninitialized to red - Elasticsearch plugin is red","prevState":"uninitialized","prevMsg":"uninitialized"}
elasticsearch_1 | [2018-06-26T19:08:21,190][INFO ][o.e.d.DiscoveryModule ] [PVmTsqv] using discovery type [zen]
elasticsearch_1 | [2018-06-26T19:08:21,654][INFO ][o.e.n.Node ] initialized
elasticsearch_1 | [2018-06-26T19:08:21,654][INFO ][o.e.n.Node ] [PVmTsqv] starting ...
elasticsearch_1 | [2018-06-26T19:08:21,780][INFO ][o.e.t.TransportService ] [PVmTsqv] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
logstash_1 | log4j:WARN No appenders could be found for logger (io.netty.util.internal.logging.InternalLoggerFactory).
logstash_1 | log4j:WARN Please initialize the log4j system properly.
logstash_1 | log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:23Z","tags":["warning","elasticsearch","admin"],"pid":13,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:23Z","tags":["warning","elasticsearch","admin"],"pid":13,"message":"No living connections"}
logstash_1 | {:timestamp=>"2018-06-26T19:08:23.572000+0000", :message=>"Connection refused (Connection refused)", :class=>"Manticore::SocketException", :level=>:error}
logstash_1 | {:timestamp=>"2018-06-26T19:08:23.790000+0000", :message=>"Pipeline main started"}
elasticsearch_1 | [2018-06-26T19:08:24,837][INFO ][o.e.c.s.ClusterService ] [PVmTsqv] new_master {PVmTsqv}{PVmTsqv3QnyS3sQarPcJ-A}{coD5A4HyR7-1MedSq8dFUQ}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)[, ]
elasticsearch_1 | [2018-06-26T19:08:24,869][INFO ][o.e.h.n.Netty4HttpServerTransport] [PVmTsqv] publish_address {172.18.0.2:9200}, bound_addresses {0.0.0.0:9200}
elasticsearch_1 | [2018-06-26T19:08:24,870][INFO ][o.e.n.Node ] [PVmTsqv] started
elasticsearch_1 | [2018-06-26T19:08:24,989][INFO ][o.e.g.GatewayService ] [PVmTsqv] recovered [1] indices into cluster_state
elasticsearch_1 | [2018-06-26T19:08:25,148][INFO ][o.e.c.r.a.AllocationService] [PVmTsqv] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:26Z","tags":["status","plugin:elasticsearch#5.6.9","info"],"pid":13,"state":"green","message":"Status changed from red to green - Kibana index ready","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at http://elasticsearch:9200."}
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:26Z","tags":["status","ui settings","info"],"pid":13,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Elasticsearch plugin is red"}
========================filebeat.yml==============================
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /jenkins/gerrit_volume/logs/*_log

#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3

#============================== Kibana =====================================
setup.kibana:
  #host: "localhost:5601"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["10.1.9.69:5044"]

logging.level: debug

From your logging, this looks like an Elasticsearch problem that is preventing ES from initializing. This line:
elasticsearch_1 | [2018-06-22T19:31:38,776][WARN ][o.e.b.BootstrapChecks ] [g8HPieb] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
You can bump it up temporarily with the following command:
sysctl -w vm.max_map_count=262144
Or set it permanently by adding the following line to /etc/sysctl.conf and running sysctl -p to pick up the config if you're on a live instance:
vm.max_map_count=262144
Since you're running Elasticsearch in a Docker container, the setting has to be applied on the Docker host (containers share the host's kernel), so you probably want to go with the latter /etc/sysctl.conf option there.
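A minimal sketch of the whole procedure on the Docker host (assuming root; the prompt and paths are illustrative):

# Check the current value on the host
sysctl vm.max_map_count
# Raise it immediately
sysctl -w vm.max_map_count=262144
# Persist it across reboots
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
sysctl -p

Setting it inside the container has no effect, since vm.max_map_count is a host kernel parameter.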
Reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html

Related

logstash 7.9.1 docker container: file input is not working

I am trying to read a log file, but it is not working; it works when logstash.conf is configured to listen on port 5000, but reading from a file does not. I am using Logstash 7.9.1 in a Docker container and trying to send the logs to Elasticsearch 7.9.1.
This is my logstash.conf file
input {
  file {
    path => ["/home/douglas/projects/incollect/*.log"]
    start_position => "beginning"
    ignore_older => 0
    sincedb_path => "/dev/null"
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "test-elk-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "changeme"
  }
  stdout {
    codec => rubydebug
  }
}
These are the logs from the console; I can't see any error, and it says the pipeline started successfully:
logstash_1 | [2020-10-16T00:38:27,748][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
logstash_1 | [2020-10-16T00:38:27,795][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
logstash_1 | [2020-10-16T00:38:27,798][INFO ][logstash.javapipeline ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x44d5fe run>"}
logstash_1 | [2020-10-16T00:38:27,800][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x4c6dee32 run>"}
logstash_1 | [2020-10-16T00:38:27,840][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
logstash_1 | [2020-10-16T00:38:28,535][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>0.73}
logstash_1 | [2020-10-16T00:38:28,599][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
logstash_1 | [2020-10-16T00:38:28,600][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.8}
logstash_1 | [2020-10-16T00:38:28,840][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
logstash_1 | [2020-10-16T00:38:28,909][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
logstash_1 | [2020-10-16T00:38:28,920][INFO ][filewatch.observingtail ][main][4a3eb924128694e00dae8e6fab084bfc5e3c3692e66663362019b182fcb31a48] START, creating Discoverer, Watch with file and sincedb collections
logstash_1 | [2020-10-16T00:38:29,386][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
and this is my log file:
Oct 9 15:34:19 incollect drupal: http://dev.incollect.com|1602257659|DEV|52.202.31.67|http://dev.incollect.com/icadmin/inquires_report?q=icadmin/ajax_validate_and_fix_inquire_by_id|http://dev.incollect.com/icadmin/inquires_report|3||Validate inquireStep 0
Oct 9 15:34:19 incollect drupal: http://dev.incollect.com|1602257659|DEV|52.202.31.67|http://dev.incollect.com/icadmin/inquires_report?q=icadmin/ajax_validate_and_fix_inquire_by_id|http://dev.incollect.com/icadmin/inquires_report|3||Validate inquireStep 1 - inquire_id:14219
Edit:
I am adding the docker-compose file; this is my configuration for Logstash:
logstash:
  build:
    context: logstash/
    args:
      ELK_VERSION: $ELK_VERSION
  volumes:
    - type: bind
      source: ./logstash/config/logstash.yml
      target: /usr/share/logstash/config/logstash.yml
      read_only: true
    - type: bind
      source: ./logstash/pipeline
      target: /usr/share/logstash/pipeline
      read_only: true
  volumes:
    - ./../../:/usr/share/logstash
  ports:
    - "5000:5000/tcp"
    - "5000:5000/udp"
    - "9600:9600"
  environment:
    LS_JAVA_OPTS: "-Xmx256m -Xms256m"
  networks:
    - elk
  depends_on:
    - elasticsearch
I am not sure what the problem is; I tried different solutions but none of them work.
If - ./../../:/usr/share/logstash is what you are using to mount the logs volume, your Logstash file input path should point to /usr/share/logstash/*.log.
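For example, the input block would then look something like this (a sketch, assuming that bind mount and keeping the other options from the question):

input {
  file {
    # ./../../ on the host is mounted at /usr/share/logstash in the container,
    # so the logs must be referenced by their container-side path
    path => ["/usr/share/logstash/*.log"]
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}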

Missing queues from RabbitMQ Metricbeat

It looks like only a fraction of the queues on my RabbitMQ cluster are making it into Elasticsearch via Metricbeat.
When I query RabbitMQ's /api/overview, I see 887 queues reported:
object_totals: {
  consumers: 517,
  queues: 887,
  exchanges: 197,
  connections: 305,
  channels: 622
},
When I query RabbitMQ's /api/queues (which is what Metricbeat hits), I count 887 queues there as well.
When I get a unique count of the field rabbitmq.queue.name in Elasticsearch, I am seeing only 309 queues.
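(For reference, a sketch of the kind of query that produces such a unique count, assuming a metricbeat-* index pattern and a locally reachable cluster:

curl -s 'http://localhost:9200/metricbeat-*/_search' -H 'Content-Type: application/json' -d '
{
  "size": 0,
  "aggs": {
    "unique_queues": {
      "cardinality": { "field": "rabbitmq.queue.name" }
    }
  }
}')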
I don't see anything in the debug output that stands out to me. It's just the usual INFO level startup messages, followed by the publish information:
root@rabbitmq:/etc/metricbeat# metricbeat -e
2019-06-24T21:13:33.692Z INFO instance/beat.go:571 Home path: [/usr/share/metricbeat] Config path: [/etc/metricbeat] Data path: [/var/lib/metricbeat] Logs path: [/var/log/metricbeat]
2019-06-24T21:13:33.692Z INFO instance/beat.go:579 Beat ID: xxx
2019-06-24T21:13:33.692Z INFO [index-management.ilm] ilm/ilm.go:129 Policy name: metricbeat-7.1.1
2019-06-24T21:13:33.692Z INFO [seccomp] seccomp/seccomp.go:116 Syscall filter successfully installed
2019-06-24T21:13:33.692Z INFO [beat] instance/beat.go:827 Beat info {"system_info": {"beat": {"path": {"config": "/etc/metricbeat", "data": "/var/lib/metricbeat", "home": "/usr/share/metricbeat", "logs": "/var/log/metricbeat"}, "type": "metricbeat", "uuid": "xxx"}}}
2019-06-24T21:13:33.692Z INFO [beat] instance/beat.go:836 Build info {"system_info": {"build": {"commit": "3358d9a5a09e3c6709a2d3aaafde628ea34e8419", "libbeat": "7.1.1", "time": "2019-05-23T13:23:10.000Z", "version": "7.1.1"}}}
2019-06-24T21:13:33.692Z INFO [beat] instance/beat.go:839 Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":4,"version":"go1.11.5"}}}
[...]
2019-06-24T21:13:33.694Z INFO [beat] instance/beat.go:872 Process info {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"ambient":null}, "cwd": "/etc/metricbeat", "exe": "/usr/share/metricbeat/bin/metricbeat", "name": "metricbeat", "pid": 30898, "ppid": 30405, "seccomp": {"mode":"filter","no_new_privs":true}, "start_time": "2019-06-24T21:13:33.100Z"}}}
2019-06-24T21:13:33.694Z INFO instance/beat.go:280 Setup Beat: metricbeat; Version: 7.1.1
2019-06-24T21:13:33.694Z INFO [publisher] pipeline/module.go:97 Beat name: metricbeat
2019-06-24T21:13:33.694Z INFO instance/beat.go:391 metricbeat start running.
2019-06-24T21:13:33.694Z INFO cfgfile/reload.go:150 Config reloader started
2019-06-24T21:13:33.694Z INFO [monitoring] log/log.go:117 Starting metrics logging every 30s
[...]
2019-06-24T21:13:43.696Z INFO filesystem/filesystem.go:57 Ignoring filesystem types: sysfs, rootfs, ramfs, bdev, proc, cpuset, cgroup, cgroup2, tmpfs, devtmpfs, configfs, debugfs, tracefs, securityfs, sockfs, dax, bpf, pipefs, hugetlbfs, devpts, ecryptfs, fuse, fusectl, pstore, mqueue, autofs
2019-06-24T21:13:43.696Z INFO fsstat/fsstat.go:59 Ignoring filesystem types: sysfs, rootfs, ramfs, bdev, proc, cpuset, cgroup, cgroup2, tmpfs, devtmpfs, configfs, debugfs, tracefs, securityfs, sockfs, dax, bpf, pipefs, hugetlbfs, devpts, ecryptfs, fuse, fusectl, pstore, mqueue, autofs
2019-06-24T21:13:44.696Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://xxx))
2019-06-24T21:13:44.711Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://xxx)) established
2019-06-24T21:14:03.696Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":130,"time":{"ms":131}},"total":{"ticks":1960,"time":{"ms":1965},"value":1960},"user":{"ticks":1830,"time":{"ms":1834}}},"handles":{"limit":{"hard":1048576,"soft":1024},"open":12},"info":{"ephemeral_id":"xxx","uptime":{"ms":30030}},"memstats":{"gc_next":30689808,"memory_alloc":21580680,"memory_total":428076400,"rss":79917056}},"libbeat":{"config":{"module":{"running":0},"reloads":2},"output":{"events":{"acked":7825,"batches":11,"total":7825},"read":{"bytes":66},"type":"logstash","write":{"bytes":870352}},"pipeline":{"clients":4,"events":{"active":313,"published":8138,"retry":523,"total":8138},"queue":{"acked":7825}}},"metricbeat":{"rabbitmq":{"connection":{"events":2987,"failures":10,"success":2977},"exchange":{"events":1970,"success":1970},"node":{"events":10,"success":10},"queue":{"events":3130,"failures":10,"success":3120}},"system":{"cpu":{"events":2,"success":2},"filesystem":{"events":7,"success":7},"fsstat":{"events":1,"success":1},"load":{"events":2,"success":2},"memory":{"events":2,"success":2},"network":{"events":4,"success":4},"process":{"events":18,"success":18},"process_summary":{"events":2,"success":2},"socket_summary":{"events":2,"success":2},"uptime":{"events":1,"success":1}}},"system":{"cpu":{"cores":4},"load":{"1":0.48,"15":0.28,"5":0.15,"norm":{"1":0.12,"15":0.07,"5":0.0375}}}}}}
I think if there were a problem getting the queue, I should see an error in the logs above as per https://github.com/elastic/beats/blob/master/metricbeat/module/rabbitmq/queue/data.go#L94-L104
Here's the metricbeat.yml:
metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
  reload.period: 10s

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression

name: metricbeat

fields:
  environment: development

processors:
  - add_cloud_metadata: ~

output.logstash:
  hosts: ["xxx"]
Here's the modules.d/rabbitmq.yml:
- module: rabbitmq
  metricsets: ["node", "queue", "connection", "exchange"]
  enabled: true
  period: 2s
  hosts: ["xxx"]
  username: xxx
  password: xxx
I solved it by upgrading Elastic Stack from 7.1.1 to 7.2.0.

Elasticsearch: Data server not discovering master

I have a 1-data / 1-master ES cluster (using 6.4.2 on CentOS 7).
On my master01:
==> /opt/elasticsearch/logs/master01-elastic.my-local-domain-master01-elastic/esa-local-stg-cluster.log <==
[2019-02-08T11:06:21,267][INFO ][o.e.n.Node ] [master01-elastic] initialized
[2019-02-08T11:06:21,267][INFO ][o.e.n.Node ] [master01-elastic] starting ...
[2019-02-08T11:06:21,460][INFO ][o.e.t.TransportService ] [master01-elastic] publish_address {10.18.0.13:9300}, bound_addresses {10.18.0.13:9300}
[2019-02-08T11:06:21,478][INFO ][o.e.b.BootstrapChecks ] [master01-elastic] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-02-08T11:06:24,543][INFO ][o.e.c.s.MasterService ] [master01-elastic] zen-disco-elected-as-master ([0] nodes joined)[, ], reason: new_master {master01-elastic}{10kX4tQMTzS0O8AQYvieZw}{GH9oflu7QZuJB_U7sPJDlg}{10.18.0.13}{10.18.0.13:9300}{xpack.installed=true}
[2019-02-08T11:06:24,550][INFO ][o.e.c.s.ClusterApplierService] [master01-elastic] new_master {master01-elastic}{10kX4tQMTzS0O8AQYvieZw}{GH9oflu7QZuJB_U7sPJDlg}{10.18.0.13}{10.18.0.13:9300}{xpack.installed=true}, reason: apply cluster state (from master [master {master01-elastic}{10kX4tQMTzS0O8AQYvieZw}{GH9oflu7QZuJB_U7sPJDlg}{10.18.0.13}{10.18.0.13:9300}{xpack.installed=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)[, ]]])
[2019-02-08T11:06:24,575][INFO ][o.e.h.n.Netty4HttpServerTransport] [master01-elastic] publish_address {10.18.0.13:9200}, bound_addresses {10.18.0.13:9200}
[2019-02-08T11:06:24,575][INFO ][o.e.n.Node ] [master01-elastic] started
[2019-02-08T11:06:24,614][INFO ][o.e.l.LicenseService ] [master01-elastic] license [c2004733-fa30-4249-bb07-d5f2238816ad] mode [basic] - valid
[2019-02-08T11:06:24,615][INFO ][o.e.g.GatewayService ] [master01-elastic] recovered [0] indices into cluster_state
[root@master01-elastic ~]# systemctl status elasticsearch
● master01-elastic_elasticsearch.service - Elasticsearch-master01-elastic
Loaded: loaded (/usr/lib/systemd/system/master01-elastic_elasticsearch.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2019-02-08 11:06:12 EST; 2 days ago
Docs: http://www.elastic.co
Main PID: 18695 (java)
CGroup: /system.slice/master01-elastic_elasticsearch.service
├─18695 /bin/java -Xms2g -Xmx2g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server -Djava.awt.headless=true -Dfile.encoding...
└─18805 /usr/share/elasticsearch/modules/x-pack/x-pack-ml/platform/linux-x86_64/bin/controller
Feb 08 11:06:12 master01-elastic systemd[1]: Started Elasticsearch-master01-elastic.
[root@master01-elastic ~]# ss -tula | grep -i 9300
[root@master01-elastic ~]#
cluster logs on my master01:
[2019-02-11T02:36:21,406][INFO ][o.e.n.Node ] [master01-elastic] initialized
[2019-02-11T02:36:21,406][INFO ][o.e.n.Node ] [master01-elastic] starting ...
[2019-02-11T02:36:21,619][INFO ][o.e.t.TransportService ] [master01-elastic] publish_address {10.18.0.13:9300}, bound_addresses {10.18.0.13:9300}
[2019-02-11T02:36:21,654][INFO ][o.e.b.BootstrapChecks ] [master01-elastic] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-02-11T02:36:24,813][INFO ][o.e.c.s.MasterService ] [master01-elastic] zen-disco-elected-as-master ([0] nodes joined)[, ], reason: new_master {master01-elastic}{10kX4tQMTzS0O8AQYvieZw}{Vgq60hVVRn-3aO_uBuc2uQ}{10.18.0.13}{10.18.0.13:9300}{xpack.installed=true}
[2019-02-11T02:36:24,818][INFO ][o.e.c.s.ClusterApplierService] [master01-elastic] new_master {master01-elastic}{10kX4tQMTzS0O8AQYvieZw}{Vgq60hVVRn-3aO_uBuc2uQ}{10.18.0.13}{10.18.0.13:9300}{xpack.installed=true}, reason: apply cluster state (from master [master {master01-elastic}{10kX4tQMTzS0O8AQYvieZw}{Vgq60hVVRn-3aO_uBuc2uQ}{10.18.0.13}{10.18.0.13:9300}{xpack.installed=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)[, ]]])
[2019-02-11T02:36:24,856][INFO ][o.e.h.n.Netty4HttpServerTransport] [master01-elastic] publish_address {10.18.0.13:9200}, bound_addresses {10.18.0.13:9200}
[2019-02-11T02:36:24,856][INFO ][o.e.n.Node ] [master01-elastic] started
[2019-02-11T02:36:24,873][INFO ][o.e.l.LicenseService ] [master01-elastic] license [c2004733-fa30-4249-bb07-d5f2238816ad] mode [basic] - valid
[2019-02-11T02:36:24,875][INFO ][o.e.g.GatewayService ] [master01-elastic] recovered [0] indices into cluster_state
This makes the master undiscoverable, so on my data01:
[2019-02-11T02:24:09,882][WARN ][o.e.d.z.ZenDiscovery ] [data01-elastic] not enough master nodes discovered during pinging (found [[]], but needed [1]), pinging again
Also on my data01
[root@data01-elastic ~]# cat /etc/elasticsearch/data01-elastic/elasticsearch.yml | grep -i zen
discovery.zen.minimum_master_nodes: 1
discovery.zen.ping.unicast.hosts: 10.18.0.13:9300
[root@data01-elastic ~]# ping 10.18.0.13
PING 10.18.0.13 (10.18.0.13) 56(84) bytes of data.
64 bytes from 10.18.0.13: icmp_seq=1 ttl=64 time=0.171 ms
64 bytes from 10.18.0.13: icmp_seq=2 ttl=64 time=0.147 ms
How can I further troubleshoot this?
The cluster was deployed using these ansible scripts:
with this configuration for the master:
- hosts: masters
  tasks:
    - name: Elasticsearch Master Configuration
      import_role:
        name: elastic.elasticsearch
      vars:
        es_instance_name: "{{ ansible_hostname }}"
        es_data_dirs:
          - "{{ data_dir }}"
        es_log_dir: "/opt/elasticsearch/logs"
        es_config:
          node.name: "{{ ansible_hostname }}"
          cluster.name: "{{ cluster_name }}"
          discovery.zen.ping.unicast.hosts: "{% for host in groups['masters'] -%}{{ hostvars[host]['ansible_ens33']['ipv4']['address'] }}:9300{% if not loop.last %},{% endif %}{%- endfor %}"
          http.port: 9200
          transport.tcp.port: 9300
          node.data: false
          node.master: true
          bootstrap.memory_lock: true
          network.host: '{{ ansible_facts["ens33"]["ipv4"]["address"] }}'
          discovery.zen.minimum_master_nodes: 1
        es_xpack_features: []
        es_scripts: false
        es_templates: false
        es_version_lock: true
        es_heap_size: 2g
        es_api_port: 9200
and this for the data
- hosts: data
  tasks:
    - name: Elasticsearch Data Configuration
      import_role:
        name: elastic.elasticsearch
      vars:
        es_instance_name: "{{ ansible_hostname }}"
        es_data_dirs:
          - "{{ data_dir }}"
        es_log_dir: "/opt/elasticsearch/logs"
        es_config:
          node.name: "{{ ansible_hostname }}"
          cluster.name: "{{ cluster_name }}"
          discovery.zen.ping.unicast.hosts: "{% for host in groups['masters'] -%}{{ hostvars[host]['ansible_ens33']['ipv4']['address'] }}:9300{% if not loop.last %},{% endif %}{%- endfor %}"
          http.port: 9200
          transport.tcp.port: 9300
          node.data: true
          node.master: false
          bootstrap.memory_lock: true
          network.host: '{{ ansible_facts["ens33"]["ipv4"]["address"] }}'
          discovery.zen.minimum_master_nodes: 1
        es_xpack_features: []
        es_scripts: false
        es_templates: false
        es_version_lock: true
        es_heap_size: 6g
        es_api_port: 9200
The two VMs I was trying to establish communication between were CentOS 7, which has firewalld enabled by default.
Disabling and stopping the service solved the issue.
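A sketch of the fix on both VMs (run as root; alternatively, keep firewalld and just open the Elasticsearch ports):

# Disable firewalld entirely
systemctl stop firewalld
systemctl disable firewalld

# Or, less drastically, open only the HTTP and transport ports
firewall-cmd --permanent --add-port=9200/tcp
firewall-cmd --permanent --add-port=9300/tcp
firewall-cmd --reload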

Difference between running Logstash on console and service

I want to index the Apache logs on my webserver and view them on the Elasticsearch server, where Kibana is also running.
So I installed Logstash on my webserver.
If I start Logstash with my conf on the console at the webserver (as root), the content is sent to the ES-server, and an index is created on the ES-server.
/usr/share/logstash/bin/logstash -f apache2.conf
But if I start the Logstash service with that same config, the ES-server doesn't receive anything.
systemctl start logstash
I checked the logs /var/log/logstash/logstash-plain.log and /var/log/messages, but they contain no error entry or useful hint.
Nov 21 15:05:01 wfe01 logstash: [2018-11-21T15:05:01,967][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
Nov 21 15:05:02 wfe01 logstash: [2018-11-21T15:05:02,793][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.X.X:9200/]}}
Nov 21 15:05:02 wfe01 logstash: [2018-11-21T15:05:02,809][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.X.X:9200/, :path=>"/"}
Nov 21 15:05:03 wfe01 logstash: [2018-11-21T15:05:03,230][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://192.168.X.X:9200/"}
Nov 21 15:05:03 wfe01 logstash: [2018-11-21T15:05:03,344][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
Nov 21 15:05:03 wfe01 logstash: [2018-11-21T15:05:03,353][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
Nov 21 15:05:03 wfe01 logstash: [2018-11-21T15:05:03,398][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://192.168.X.X:9200"]}
Nov 21 15:05:03 wfe01 logstash: [2018-11-21T15:05:03,441][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
Nov 21 15:05:03 wfe01 logstash: [2018-11-21T15:05:03,507][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
Nov 21 15:05:04 wfe01 logstash: [2018-11-21T15:05:04,367][INFO ][logstash.filters.geoip ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-City.mmdb"}
Nov 21 15:05:05 wfe01 logstash: [2018-11-21T15:05:05,138][INFO ][logstash.inputs.file ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/var/lib/logstash/plugins/inputs/file/.sincedb_d32aef0519b35231d714b89c8b4d5791", :path=>["/path/ssl_access_log", "/path/ssl_error_log"]}
Nov 21 15:05:05 wfe01 logstash: [2018-11-21T15:05:05,193][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x634099e9 run>"}
Nov 21 15:05:05 wfe01 logstash: [2018-11-21T15:05:05,293][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
Nov 21 15:05:05 wfe01 logstash: [2018-11-21T15:05:05,321][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
Nov 21 15:05:05 wfe01 logstash: [2018-11-21T15:05:05,914][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
(We have another db-server with the metricbeat service installed, and this also works over the network; the content is sent to the ES-server.)
ES Version 6.4
Logstash config:
input {
  file {
    path => [
      "/path/ssl_access_log",
      "/path/ssl_error_log"
    ]
    start_position => "beginning"
    add_field => { "myconf" => "apache2" }
  }
}
output {
  if [myconf] == "apache2" {
    elasticsearch {
      hosts => ["http://192.168.X.X:9200"]
      index => "apache2-status-%{+YYYY.MM.dd}"
    }
    stdout { codec => rubydebug }
  }
}
I tried several things: deleting the index, deleting the sincedb file, restarting the service.
What could be the problem that makes the console call work but not the service?
Thanks
Steffen

How to get the data node to connect to the master in Elasticsearch?

I checked all the documentation I could find on the internet by googling. I tried to bind the data node and the master node, but I realized that there is an error in my logs.
Error: if I check the logs on "192.168.5.84", the error below occurs.
[node1] not enough master nodes discovered during pinging (found [[]], but needed [1])
2017-08-16 13:37:38 Commons Daemon procrun stdout initialized
[2017-08-16T13:37:43,253][INFO ][o.e.n.Node ] [node1] initializing ...
[2017-08-16T13:37:43,346][INFO ][o.e.e.NodeEnvironment ] [node1] using [1] data paths, mounts [[(C:)]], net usable_space [10.7gb], net total_space [39.6gb], spins? [unknown], types [NTFS]
[2017-08-16T13:37:43,346][INFO ][o.e.e.NodeEnvironment ] [node1] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-08-16T13:37:43,472][INFO ][o.e.n.Node ] [node1] node name [node1], node ID [81pArkMqSUuBVnKwny1Blw]
[2017-08-16T13:37:43,472][INFO ][o.e.n.Node ] [node1] version[5.4.1], pid[7632], build[2cfe0df/2017-05-29T16:05:51.443Z], OS[Windows Server 2012 R2/6.3/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_131/25.131-b11]
[2017-08-16T13:37:43,472][INFO ][o.e.n.Node ] [node1] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+DisableExplicitGC, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Delasticsearch, -Des.path.home=C:\elk\elasticsearch, -Des.default.path.logs=C:\elk\elasticsearch\logs, -Des.default.path.data=C:\elk\elasticsearch\data, -Des.default.path.conf=C:\elk\elasticsearch\config, exit, -Xms2048m, -Xmx2048m, -Xss1024k]
[2017-08-16T13:37:45,706][INFO ][o.e.p.PluginsService ] [node1] loaded module [aggs-matrix-stats]
[2017-08-16T13:37:45,706][INFO ][o.e.p.PluginsService ] [node1] loaded module [ingest-common]
[2017-08-16T13:37:45,706][INFO ][o.e.p.PluginsService ] [node1] loaded module [lang-expression]
[2017-08-16T13:37:45,706][INFO ][o.e.p.PluginsService ] [node1] loaded module [lang-groovy]
[2017-08-16T13:37:45,706][INFO ][o.e.p.PluginsService ] [node1] loaded module [lang-mustache]
[2017-08-16T13:37:45,706][INFO ][o.e.p.PluginsService ] [node1] loaded module [lang-painless]
[2017-08-16T13:37:45,706][INFO ][o.e.p.PluginsService ] [node1] loaded module [percolator]
[2017-08-16T13:37:45,706][INFO ][o.e.p.PluginsService ] [node1] loaded module [reindex]
[2017-08-16T13:37:45,706][INFO ][o.e.p.PluginsService ] [node1] loaded module [transport-netty3]
[2017-08-16T13:37:45,706][INFO ][o.e.p.PluginsService ] [node1] loaded module [transport-netty4]
[2017-08-16T13:37:45,706][INFO ][o.e.p.PluginsService ] [node1] no plugins loaded
[2017-08-16T13:37:50,987][INFO ][o.e.d.DiscoveryModule ] [node1] using discovery type [zen]
[2017-08-16T13:37:52,347][INFO ][o.e.n.Node ] [node1] initialized
[2017-08-16T13:37:52,347][INFO ][o.e.n.Node ] [node1] starting ...
[2017-08-16T13:37:53,190][INFO ][o.e.t.TransportService ] [node1] publish_address {192.168.5.84:9300}, bound_addresses {192.168.5.84:9300}
[2017-08-16T13:37:53,206][INFO ][o.e.b.BootstrapChecks ] [node1] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-08-16T13:37:56,362][WARN ][o.e.d.z.ZenDiscovery ] [node1] not enough master nodes discovered during pinging (found [[]], but needed [1]), pinging again
[2017-08-16T13:37:59,378][WARN ][o.e.d.z.ZenDiscovery ] [node1] not enough master nodes discovered during pinging (found [[]], but needed [1]), pinging again
[2017-08-16T13:38:02,394][WARN ][o.e.d.z.ZenDiscovery ] [node1] not enough master nodes discovered during pinging (found [[]], but needed [1]), pinging again
My master (it is working perfectly!), 10.180.11.82:
cluster.name: elasticsearch
node.name: "lmaster"
node.master: true
node.data: true
network.host: 10.180.11.82
http.port: 333
#network.bind_host: ["192.168.5.84"]
#discovery.zen.ping.multicast.enabled: true
discovery.zen.ping.unicast.hosts: ["10.180.11.82:333"]
My data node (the above error occurs here), 192.168.5.84:
network.host: 192.168.5.84
http.port: 333
cluster.name: elasticsearch
node.name: "node1"
node.master: false
node.data: true
#discovery.zen.ping.unicast.hosts: ["10.180.11.82:333"]
#network.bind_host: 10.180.11.82
discovery.zen.minimum_master_nodes: 1
discovery.zen.ping.unicast.hosts: ["10.180.11.82:333"]
Your settings are all over the place. Start with something simpler, like the following, make sure it works, and then, if you're not happy with the ports/IPs used, start changing things:
Node 10.180.11.82:
cluster.name: elasticsearch
node.name: "lmaster"
node.master: true
node.data: true
network.host: 10.180.11.82
discovery.zen.ping.unicast.hosts: ["192.168.5.84:9300"]
discovery.zen.minimum_master_nodes: 1
Node 192.168.5.84:
cluster.name: elasticsearch
node.name: "node1"
node.master: false
node.data: true
network.host: 192.168.5.84
discovery.zen.ping.unicast.hosts: ["10.180.11.82:9300"]
discovery.zen.minimum_master_nodes: 1
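Once both nodes are up with these settings, a quick way to verify that they found each other (a sketch, assuming the default HTTP port 9200 is reachable) is:

curl 'http://10.180.11.82:9200/_cat/nodes?v'

Both nodes should appear in the output.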
