logstash 7.9.1 docker container: file input is not working - elasticsearch

I am trying to read a log file, but it is not working. It works when logstash.conf is configured to listen on port 5000, but reading from a file does not. I am using Logstash 7.9.1 in a Docker container and trying to send the logs to Elasticsearch 7.9.1.
This is my logstash.conf file:
input {
  file {
    path => ["/home/douglas/projects/incollect/*.log"]
    start_position => "beginning"
    ignore_older => 0
    sincedb_path => "/dev/null"
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "test-elk-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "changeme"
  }
  stdout {
    codec => rubydebug
  }
}
These are the logs from the console; I can't see any error, and it says Logstash was successfully started:
logstash_1 | [2020-10-16T00:38:27,748][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
logstash_1 | [2020-10-16T00:38:27,795][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
logstash_1 | [2020-10-16T00:38:27,798][INFO ][logstash.javapipeline ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x44d5fe run>"}
logstash_1 | [2020-10-16T00:38:27,800][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x4c6dee32 run>"}
logstash_1 | [2020-10-16T00:38:27,840][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
logstash_1 | [2020-10-16T00:38:28,535][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>0.73}
logstash_1 | [2020-10-16T00:38:28,599][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
logstash_1 | [2020-10-16T00:38:28,600][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.8}
logstash_1 | [2020-10-16T00:38:28,840][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
logstash_1 | [2020-10-16T00:38:28,909][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
logstash_1 | [2020-10-16T00:38:28,920][INFO ][filewatch.observingtail ][main][4a3eb924128694e00dae8e6fab084bfc5e3c3692e66663362019b182fcb31a48] START, creating Discoverer, Watch with file and sincedb collections
logstash_1 | [2020-10-16T00:38:29,386][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
And this is my log file:
Oct 9 15:34:19 incollect drupal: http://dev.incollect.com|1602257659|DEV|52.202.31.67|http://dev.incollect.com/icadmin/inquires_report?q=icadmin/ajax_validate_and_fix_inquire_by_id|http://dev.incollect.com/icadmin/inquires_report|3||Validate inquireStep 0
Oct 9 15:34:19 incollect drupal: http://dev.incollect.com|1602257659|DEV|52.202.31.67|http://dev.incollect.com/icadmin/inquires_report?q=icadmin/ajax_validate_and_fix_inquire_by_id|http://dev.incollect.com/icadmin/inquires_report|3||Validate inquireStep 1 - inquire_id:14219
Edit:
I am adding the docker-compose file; this is my configuration for Logstash:
logstash:
  build:
    context: logstash/
    args:
      ELK_VERSION: $ELK_VERSION
  volumes:
    - type: bind
      source: ./logstash/config/logstash.yml
      target: /usr/share/logstash/config/logstash.yml
      read_only: true
    - type: bind
      source: ./logstash/pipeline
      target: /usr/share/logstash/pipeline
      read_only: true
  volumes:
    - ./../../:/usr/share/logstash
  ports:
    - "5000:5000/tcp"
    - "5000:5000/udp"
    - "9600:9600"
  environment:
    LS_JAVA_OPTS: "-Xmx256m -Xms256m"
  networks:
    - elk
  depends_on:
    - elasticsearch
I am not sure what the problem is; I tried different solutions but none of them work.

If - ./../../:/usr/share/logstash is what you are using to mount the logs volume, your Logstash file input path should point to /usr/share/logstash/*.log.
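As a sketch of how that could look in practice (the container-side directory /var/log/incollect is an assumption; any path works as long as the bind mount and the file input agree on it):

logstash:
  volumes:
    # mount only the host log directory, read-only, at a dedicated container path
    - /home/douglas/projects/incollect:/var/log/incollect:ro

input {
  file {
    # this must be the container path, not the host path
    path => ["/var/log/incollect/*.log"]
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

A dedicated mount point also avoids the duplicate volumes: key in the compose file above which, if it is parsed at all, replaces the earlier bind mounts and shadows /usr/share/logstash (the Logstash installation itself) with the host directory.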

Related

Error in Filebeat logs - not able to view data in Kibana

I recently upgraded Filebeat to 7.17.7. I am using Elasticsearch, Kibana, and Filebeat, all 7.17.7. However, I am not able to see the logs in Kibana, as Filebeat is not sending the logs to Elasticsearch and Kibana. In the Filebeat logs I saw this error:
ERROR [publisher_pipeline_output] pipeline/output.go:154 Failed to connect to backoff(elasticsearch(http://localhost:9200)): Connection marked as failed because the onConnect callback failed: resource 'filebeat-7.17.7' exists, but it is not an alias
Can someone help figure out the cause of and the solution for this error?
I restarted Filebeat, but it didn't help.
Filebeat config:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/www/vhosts/rshop/current/var/log/*.log
  multiline.pattern: ^\[[0-9]{4}-[0-9]{2}-[0-9]{2}
  multiline.negate: true
  multiline.match: after
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.ilm.enabled: false
setup.kibana:
output.elasticsearch:
  hosts: ["localhost:9200"]
  indices:
    - index: "r-logs-%{[agent.version]}-%{+yyyy.MM.dd}"
      when.regexp:
        log.file.path: '^.+\/var\/log\/recalculation\.log$'
  pipelines:
    - pipeline: "filebeat-6.8.7-monolog-pipeline"
      when.or:
        - regexp:
            log.file.path: '^.+\/var\/log\/recalculation\.log$'
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0755
@NKumar most likely it's an upgrade issue from legacy to new index templates, which will happen if you don't mark them as true for overwriting.
Can you please provide info on which version of the stack you upgraded to 7.17 from?
Also, the quick solution would be to just add an alias to your Filebeat index:
POST /_aliases
{
  "actions" : [
    {
      "add" : {
        "index" : "filebeat-7.17.7",
        "alias" : "filebeat-7.17.7_1",
        "is_write_index" : true
      }
    }
  ]
}
Or, a more persistent solution would be to add the following settings in Filebeat (these are top-level setup.template options, not children of setup.template.settings):
setup.template.enabled: true
setup.template.overwrite: true
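To verify the result afterwards, the _cat aliases API shows whether filebeat-7.17.7 is now being written through an alias (a quick check, not part of the original answer):

GET /_cat/aliases/filebeat-*?v

If filebeat-7.17.7 still only exists as a concrete index, the "exists, but it is not an alias" error will keep appearing until it is aliased or reindexed.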

Trying to verify ELK installation, Kibana dashboard not showing Filebeat logs in Discover tab

I used helm to load the ELK stack on kubernetes.
I ran the following commands
minikube start --cpus 4 --memory 8192
minikube addons enable ingress
helm repo add elastic https://helm.elastic.co
helm repo update
Then deployed elasticsearch
values-02.yml
replicas: 1
minimumMasterNodes: 1
ingress:
  enabled: true
  hosts:
    - host: es-elk.s9.devopscloud.link # Change the hostname to the one you need
      paths:
        - path: /
volumeClaimTemplate:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
Applied it
helm install elk-elasticsearch elastic/elasticsearch -f values-02.yml
Then deployed Kibana with values-03.yml:
elasticsearchHosts: "http://elasticsearch-master:9200"
ingress:
  enabled: true
  className: "nginx"
  hosts:
    - host:
      paths:
        - path: /
Applied it
helm install elk-kibana elastic/kibana -f values-03.yml
Then deployed Logstash with values-04.yaml:
persistence:
  enabled: true
logstashConfig:
  logstash.yml: |
    http.host: 0.0.0.0
    xpack.monitoring.enabled: false
logstashPipeline:
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    output {
      elasticsearch {
        hosts => "http://elasticsearch-master.logging.svc.cluster.local:9200"
        manage_template => false
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
        document_type => "%{[@metadata][type]}"
      }
    }
service:
  type: ClusterIP
  ports:
    - name: beats
      port: 5044
      protocol: TCP
      targetPort: 5044
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080
Applied it
helm install elk-logstash elastic/logstash -f values-04.yaml
Then deployed Filebeat with values-05.yaml:
daemonset:
  filebeatConfig:
    filebeat.yml: |
      filebeat.inputs:
        - type: container
          paths:
            - /var/log/containers/*.log
          processors:
            - add_kubernetes_metadata:
                host: ${NODE_NAME}
                matchers:
                  - logs_path:
                      logs_path: "/var/log/containers/"
      output.logstash:
        hosts: ["elk-logstash-logstash:5044"]
Then applied it
helm install elk-filebeat elastic/filebeat -f values-05.yaml
All up and running
kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
elasticsearch-master-0               1/1     Running   0          61m
elk-filebeat-filebeat-ggjhc          1/1     Running   0          45m
elk-kibana-kibana-6d658894bf-grb8x   1/1     Running   0          52m
elk-logstash-logstash-0              1/1     Running   0          47m
But when I go to the Discover page
http://172.21.95.140/app/management/kibana/indexPatterns?bannerMessage=To%20visualize%20and%20explore%20data%20in%20Kibana,%20you%20must%20create%20an%20index%20pattern%20to%20retrieve%20data%20from%20Elasticsearch.
it does not show anything for Filebeat.
Instead I get a "Ready to try Kibana? First, you need data." message.
I was following this tutorial
https://blog.knoldus.com/how-to-deploy-elk-stack-on-kubernetes/#deploy-elastic-search
I followed this tutorial and ran the default Kibana and Filebeat YAML files.
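A first check worth doing in a setup like this (a debugging sketch; pod names are taken from the kubectl get pods output above) is whether any Filebeat data reached Elasticsearch at all, since Discover stays empty until a matching index exists:

# list indices from inside the cluster; filebeat-* entries mean events arrived
kubectl exec elasticsearch-master-0 -- curl -s http://localhost:9200/_cat/indices?v

# if nothing is there, look for connection errors in the shippers
kubectl logs elk-filebeat-filebeat-ggjhc --tail=50
kubectl logs elk-logstash-logstash-0 --tail=50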

Missing queues from RabbitMQ Metricbeat

It looks like only a fraction of the queues on my RabbitMQ cluster are making it into Elasticsearch via Metricbeat.
When I query RabbitMQ's /api/overview, I see 887 queues reported:
object_totals: {
  consumers: 517,
  queues: 887,
  exchanges: 197,
  connections: 305,
  channels: 622
},
When I query RabbitMQ's /api/queues (which is what Metricbeat hits), I count 887 queues there as well.
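(For reference, one way to get that count, assuming the management API on its default port with credentials redacted as elsewhere in this post:

# /api/queues returns a JSON array, so jq's length prints the queue count
curl -s -u user:pass http://localhost:15672/api/queues | jq length
)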
When I get a unique count of the field rabbitmq.queue.name in Elasticsearch, I am seeing only 309 queues.
I don't see anything in the debug output that stands out to me. It's just the usual INFO level startup messages, followed by the publish information:
root@rabbitmq:/etc/metricbeat# metricbeat -e
2019-06-24T21:13:33.692Z INFO instance/beat.go:571 Home path: [/usr/share/metricbeat] Config path: [/etc/metricbeat] Data path: [/var/lib/metricbeat] Logs path: [/var/log/metricbeat]
2019-06-24T21:13:33.692Z INFO instance/beat.go:579 Beat ID: xxx
2019-06-24T21:13:33.692Z INFO [index-management.ilm] ilm/ilm.go:129 Policy name: metricbeat-7.1.1
2019-06-24T21:13:33.692Z INFO [seccomp] seccomp/seccomp.go:116 Syscall filter successfully installed
2019-06-24T21:13:33.692Z INFO [beat] instance/beat.go:827 Beat info {"system_info": {"beat": {"path": {"config": "/etc/metricbeat", "data": "/var/lib/metricbeat", "home": "/usr/share/metricbeat", "logs": "/var/log/metricbeat"}, "type": "metricbeat", "uuid": "xxx"}}}
2019-06-24T21:13:33.692Z INFO [beat] instance/beat.go:836 Build info {"system_info": {"build": {"commit": "3358d9a5a09e3c6709a2d3aaafde628ea34e8419", "libbeat": "7.1.1", "time": "2019-05-23T13:23:10.000Z", "version": "7.1.1"}}}
2019-06-24T21:13:33.692Z INFO [beat] instance/beat.go:839 Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":4,"version":"go1.11.5"}}}
[...]
2019-06-24T21:13:33.694Z INFO [beat] instance/beat.go:872 Process info {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"ambient":null}, "cwd": "/etc/metricbeat", "exe": "/usr/share/metricbeat/bin/metricbeat", "name": "metricbeat", "pid": 30898, "ppid": 30405, "seccomp": {"mode":"filter","no_new_privs":true}, "start_time": "2019-06-24T21:13:33.100Z"}}}
2019-06-24T21:13:33.694Z INFO instance/beat.go:280 Setup Beat: metricbeat; Version: 7.1.1
2019-06-24T21:13:33.694Z INFO [publisher] pipeline/module.go:97 Beat name: metricbeat
2019-06-24T21:13:33.694Z INFO instance/beat.go:391 metricbeat start running.
2019-06-24T21:13:33.694Z INFO cfgfile/reload.go:150 Config reloader started
2019-06-24T21:13:33.694Z INFO [monitoring] log/log.go:117 Starting metrics logging every 30s
[...]
2019-06-24T21:13:43.696Z INFO filesystem/filesystem.go:57 Ignoring filesystem types: sysfs, rootfs, ramfs, bdev, proc, cpuset, cgroup, cgroup2, tmpfs, devtmpfs, configfs, debugfs, tracefs, securityfs, sockfs, dax, bpf, pipefs, hugetlbfs, devpts, ecryptfs, fuse, fusectl, pstore, mqueue, autofs
2019-06-24T21:13:43.696Z INFO fsstat/fsstat.go:59 Ignoring filesystem types: sysfs, rootfs, ramfs, bdev, proc, cpuset, cgroup, cgroup2, tmpfs, devtmpfs, configfs, debugfs, tracefs, securityfs, sockfs, dax, bpf, pipefs, hugetlbfs, devpts, ecryptfs, fuse, fusectl, pstore, mqueue, autofs
2019-06-24T21:13:44.696Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://xxx))
2019-06-24T21:13:44.711Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://xxx)) established
2019-06-24T21:14:03.696Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":130,"time":{"ms":131}},"total":{"ticks":1960,"time":{"ms":1965},"value":1960},"user":{"ticks":1830,"time":{"ms":1834}}},"handles":{"limit":{"hard":1048576,"soft":1024},"open":12},"info":{"ephemeral_id":"xxx","uptime":{"ms":30030}},"memstats":{"gc_next":30689808,"memory_alloc":21580680,"memory_total":428076400,"rss":79917056}},"libbeat":{"config":{"module":{"running":0},"reloads":2},"output":{"events":{"acked":7825,"batches":11,"total":7825},"read":{"bytes":66},"type":"logstash","write":{"bytes":870352}},"pipeline":{"clients":4,"events":{"active":313,"published":8138,"retry":523,"total":8138},"queue":{"acked":7825}}},"metricbeat":{"rabbitmq":{"connection":{"events":2987,"failures":10,"success":2977},"exchange":{"events":1970,"success":1970},"node":{"events":10,"success":10},"queue":{"events":3130,"failures":10,"success":3120}},"system":{"cpu":{"events":2,"success":2},"filesystem":{"events":7,"success":7},"fsstat":{"events":1,"success":1},"load":{"events":2,"success":2},"memory":{"events":2,"success":2},"network":{"events":4,"success":4},"process":{"events":18,"success":18},"process_summary":{"events":2,"success":2},"socket_summary":{"events":2,"success":2},"uptime":{"events":1,"success":1}}},"system":{"cpu":{"cores":4},"load":{"1":0.48,"15":0.28,"5":0.15,"norm":{"1":0.12,"15":0.07,"5":0.0375}}}}}}
I think that if there were a problem getting the queues, I should see an error in the logs above, as per https://github.com/elastic/beats/blob/master/metricbeat/module/rabbitmq/queue/data.go#L94-L104
Here's the metricbeat.yml:
metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
  reload.period: 10s
setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
name: metricbeat
fields:
  environment: development
processors:
  - add_cloud_metadata: ~
output.logstash:
  hosts: ["xxx"]
Here's the modules.d/rabbitmq.yml:
- module: rabbitmq
  metricsets: ["node", "queue", "connection", "exchange"]
  enabled: true
  period: 2s
  hosts: ["xxx"]
  username: xxx
  password: xxx
I solved it by upgrading Elastic Stack from 7.1.1 to 7.2.0.
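For anyone wanting to re-check the unique queue count from the Elasticsearch side after such an upgrade, a cardinality aggregation on the field mentioned above is one way to do it (the metricbeat-* index pattern is an assumption; cardinality is approximate but effectively exact below a few thousand values):

GET metricbeat-*/_search
{
  "size": 0,
  "aggs": {
    "unique_queues": {
      "cardinality": { "field": "rabbitmq.queue.name" }
    }
  }
}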

Difference between running Logstash on the console and as a service

I want to index the Apache logs of my webserver and view them on the Elasticsearch server, where Kibana is also running.
So I installed Logstash on my webserver.
If I start my Logstash config on the console on the webserver (as root), the content is sent to the ES server, and an index is created on the ES server.
/usr/share/logstash/bin/logstash -f apache2.conf
But if I start the Logstash service with that same config, the ES server doesn't receive anything.
systemctl start logstash
I checked the logs /var/log/logstash/logstash-plain.log and /var/log/messages, but there is no error entry or useful hint in them.
Nov 21 15:05:01 wfe01 logstash: [2018-11-21T15:05:01,967][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
Nov 21 15:05:02 wfe01 logstash: [2018-11-21T15:05:02,793][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.X.X:9200/]}}
Nov 21 15:05:02 wfe01 logstash: [2018-11-21T15:05:02,809][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.X.X:9200/, :path=>"/"}
Nov 21 15:05:03 wfe01 logstash: [2018-11-21T15:05:03,230][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://192.168.X.X:9200/"}
Nov 21 15:05:03 wfe01 logstash: [2018-11-21T15:05:03,344][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
Nov 21 15:05:03 wfe01 logstash: [2018-11-21T15:05:03,353][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
Nov 21 15:05:03 wfe01 logstash: [2018-11-21T15:05:03,398][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://192.168.X.X:9200"]}
Nov 21 15:05:03 wfe01 logstash: [2018-11-21T15:05:03,441][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
Nov 21 15:05:03 wfe01 logstash: [2018-11-21T15:05:03,507][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
Nov 21 15:05:04 wfe01 logstash: [2018-11-21T15:05:04,367][INFO ][logstash.filters.geoip ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-City.mmdb"}
Nov 21 15:05:05 wfe01 logstash: [2018-11-21T15:05:05,138][INFO ][logstash.inputs.file ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/var/lib/logstash/plugins/inputs/file/.sincedb_d32aef0519b35231d714b89c8b4d5791", :path=>["/path/ssl_access_log", "/path/ssl_error_log"]}
Nov 21 15:05:05 wfe01 logstash: [2018-11-21T15:05:05,193][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x634099e9 run>"}
Nov 21 15:05:05 wfe01 logstash: [2018-11-21T15:05:05,293][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
Nov 21 15:05:05 wfe01 logstash: [2018-11-21T15:05:05,321][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
Nov 21 15:05:05 wfe01 logstash: [2018-11-21T15:05:05,914][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
(We have another DB server with the Metricbeat service installed, and this also works over the network; the content is sent to the ES server.)
ES Version 6.4
Logstash config:
input {
  file {
    path => [
      "/path/ssl_access_log",
      "/path/ssl_error_log"
    ]
    start_position => "beginning"
    add_field => { "myconf" => "apache2" }
  }
}
output {
  if [myconf] == "apache2" {
    elasticsearch {
      hosts => ["http://192.168.X.X:9200"]
      index => "apache2-status-%{+YYYY.MM.dd}"
    }
    stdout { codec => rubydebug }
  }
}
I tried several things: deleting the index, deleting the sincedb file, restarting the service.
What could be the problem, that the console call works, but not the service?
Thanks
Steffen
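
One difference worth checking (a diagnostic sketch, assuming the default package unit file, which runs Logstash as the logstash user rather than root): whether the service user can read the input files at all, since a console run as root can read files that the service user cannot, and that mismatch can produce exactly this symptom.

# which user does the service run as?
systemctl cat logstash | grep -i '^User='

# can that user read the inputs? (paths from the config above)
sudo -u logstash head -n 1 /path/ssl_access_log

# sincedb state written by the service lives here (see the log line above)
ls -l /var/lib/logstash/plugins/inputs/file/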

Elasticsearch connection refused while Kibana is trying to connect

I am trying to run the ELK stack using Docker containers, but I am getting an error that Kibana is unable to connect to Elasticsearch.
kibana_1 | {"type":"log","#timestamp":"2018-06-22T19:31:38Z","tags":["error","elasticsearch","admin"],"pid":12,"message":"Request error, retrying\nHEAD http://elasticsearch:9200/ => connect ECONNREFUSED 172.18.0.2:9200"}
kibana_1 | {"type":"log","#timestamp":"2018-06-22T19:31:38Z","tags":["status","plugin:console#5.6.9","info"],"pid":12,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","#timestamp":"2018-06-22T19:31:38Z","tags":["warning","elasticsearch","admin"],"pid":12,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1 | {"type":"log","#timestamp":"2018-06-22T19:31:38Z","tags":["warning","elasticsearch","admin"],"pid":12,"message":"No living connections"}
kibana_1 | {"type":"log","#timestamp":"2018-06-22T19:31:38Z","tags":["status","plugin:elasticsearch#5.6.9","error"],"pid":12,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch:9200.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana_1 | {"type":"log","#timestamp":"2018-06-22T19:31:38Z","tags":["status","plugin:metrics#5.6.9","info"],"pid":12,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
elasticsearch_1 | [2018-06-22T19:31:38,182][INFO ][o.e.d.DiscoveryModule ] [g8HPieb] using discovery type [zen]
kibana_1 | {"type":"log","#timestamp":"2018-06-22T19:31:38Z","tags":["status","plugin:timelion#5.6.9","info"],"pid":12,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","#timestamp":"2018-06-22T19:31:38Z","tags":["listening","info"],"pid":12,"message":"Server running at http://0.0.0.0:5601"}
kibana_1 | {"type":"log","#timestamp":"2018-06-22T19:31:38Z","tags":["status","ui settings","error"],"pid":12,"state":"red","message":"Status changed from uninitialized to red - Elasticsearch plugin is red","prevState":"uninitialized","prevMsg":"uninitialized"}
elasticsearch_1 | [2018-06-22T19:31:38,634][INFO ][o.e.n.Node ] initialized
elasticsearch_1 | [2018-06-22T19:31:38,634][INFO ][o.e.n.Node ] [g8HPieb] starting ...
elasticsearch_1 | [2018-06-22T19:31:38,767][INFO ][o.e.t.TransportService ] [g8HPieb] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
elasticsearch_1 | [2018-06-22T19:31:38,776][WARN ][o.e.b.BootstrapChecks ] [g8HPieb] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
logstash_1 | log4j:WARN No appenders could be found for logger (io.netty.util.internal.logging.InternalLoggerFactory).
logstash_1 | log4j:WARN Please initialize the log4j system properly.
logstash_1 | log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
logstash_1 | {:timestamp=>"2018-06-22T19:31:40.555000+0000", :message=>"Connection refused (Connection refused)", :class=>"Manticore::SocketException", :level=>:error}
kibana_1 | {"type":"log","@timestamp":"2018-06-22T19:31:40Z","tags":["warning","elasticsearch","admin"],"pid":12,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1 | {"type":"log","@timestamp":"2018-06-22T19:31:40Z","tags":["warning","elasticsearch","admin"],"pid":12,"message":"No living connections"}
Here is the content of my docker-compose file:
version: "2.0"
services:
logstash:
image: logstash:2
ports:
- "5044:5044"
volumes:
- ./:/config
command: logstash -f /config/logstash.conf
links:
- elasticsearch
depends_on:
- elasticsearch
elasticsearch:
image: elasticsearch:5.6.9
ports:
- "9200:9200"
volumes:
- "./es_data/es_data:/usr/share/elasticsearch/data/"
kibana:
image: kibana:5
ports:
- "5601:5601"
links:
- elasticsearch
environment:
ELASTICSEARCH_URL: http://elasticsearch:9200
depends_on:
- elasticsearch
Content of my logstash.conf
input { beats { port => 5044 } }
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
  stdout {
    codec => rubydebug
  }
}
I have curled Elasticsearch from the Elasticsearch container and the Kibana container, and it looks good to me:
{
  "name" : "g8HPieb",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "XxH0TcAmQcGqprf6s7TJEQ",
  "version" : {
    "number" : "5.6.9",
    "build_hash" : "877a590",
    "build_date" : "2018-04-12T16:25:14.838Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}
curl localhost:9200/_cat/indices?pretty
yellow open .kibana GIBmXdlRQJmI67oq5r4oCg 1 1 1 0 3.2kb 3.2kb
After increasing the virtual memory size:
root@sfbp19:~/dockerizing-jenkins# sysctl -p
vm.max_map_count = 262144
root@sfbp19:~/dockerizing-jenkins# docker-compose -f docker-compose-elk.yml up
Creating network "dockerizingjenkins_default" with the default driver
Creating dockerizingjenkins_elasticsearch_1
Creating dockerizingjenkins_logstash_1
Creating dockerizingjenkins_kibana_1
Attaching to dockerizingjenkins_elasticsearch_1, dockerizingjenkins_kibana_1, dockerizingjenkins_logstash_1
elasticsearch_1 | [2018-06-26T19:08:19,294][INFO ][o.e.n.Node ] [] initializing ...
elasticsearch_1 | [2018-06-26T19:08:19,363][INFO ][o.e.e.NodeEnvironment ] [PVmTsqv] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/mapper/sfbp19--vg-root)]], net usable_space [671.9gb], net total_space [789.2gb], spins? [possibly], types [ext4]
elasticsearch_1 | [2018-06-26T19:08:19,364][INFO ][o.e.e.NodeEnvironment ] [PVmTsqv] heap size [1.9gb], compressed ordinary object pointers [true]
elasticsearch_1 | [2018-06-26T19:08:19,369][INFO ][o.e.n.Node ] node name [PVmTsqv] derived from node ID [PVmTsqv3QnyS3sQarPcJ-A]; set [node.name] to override
elasticsearch_1 | [2018-06-26T19:08:19,369][INFO ][o.e.n.Node ] version[5.6.9], pid[1], build[877a590/2018-04-12T16:25:14.838Z], OS[Linux/4.4.0-31-generic/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_171/25.171-b11]
elasticsearch_1 | [2018-06-26T19:08:19,369][INFO ][o.e.n.Node ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/share/elasticsearch]
elasticsearch_1 | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [aggs-matrix-stats]
elasticsearch_1 | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [ingest-common]
elasticsearch_1 | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [lang-expression]
elasticsearch_1 | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [lang-groovy]
elasticsearch_1 | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [lang-mustache]
elasticsearch_1 | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [lang-painless]
elasticsearch_1 | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [parent-join]
elasticsearch_1 | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [percolator]
elasticsearch_1 | [2018-06-26T19:08:20,041][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [reindex]
elasticsearch_1 | [2018-06-26T19:08:20,041][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [transport-netty3]
elasticsearch_1 | [2018-06-26T19:08:20,041][INFO ][o.e.p.PluginsService ] [PVmTsqv] loaded module [transport-netty4]
elasticsearch_1 | [2018-06-26T19:08:20,041][INFO ][o.e.p.PluginsService ] [PVmTsqv] no plugins loaded
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:20Z","tags":["status","plugin:kibana#5.6.9","info"],"pid":13,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:20Z","tags":["status","plugin:elasticsearch#5.6.9","info"],"pid":13,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:20Z","tags":["error","elasticsearch","admin"],"pid":13,"message":"Request error, retrying\nHEAD http://elasticsearch:9200/ => connect ECONNREFUSED 172.18.0.2:9200"}
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:20Z","tags":["warning","elasticsearch","admin"],"pid":13,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:20Z","tags":["warning","elasticsearch","admin"],"pid":13,"message":"No living connections"}
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:20Z","tags":["status","plugin:console#5.6.9","info"],"pid":13,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:20Z","tags":["status","plugin:elasticsearch#5.6.9","error"],"pid":13,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch:9200.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:20Z","tags":["status","plugin:metrics#5.6.9","info"],"pid":13,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:21Z","tags":["status","plugin:timelion#5.6.9","info"],"pid":13,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:21Z","tags":["listening","info"],"pid":13,"message":"Server running at http://0.0.0.0:5601"}
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:21Z","tags":["status","ui settings","error"],"pid":13,"state":"red","message":"Status changed from uninitialized to red - Elasticsearch plugin is red","prevState":"uninitialized","prevMsg":"uninitialized"}
elasticsearch_1 | [2018-06-26T19:08:21,190][INFO ][o.e.d.DiscoveryModule ] [PVmTsqv] using discovery type [zen]
elasticsearch_1 | [2018-06-26T19:08:21,654][INFO ][o.e.n.Node ] initialized
elasticsearch_1 | [2018-06-26T19:08:21,654][INFO ][o.e.n.Node ] [PVmTsqv] starting ...
elasticsearch_1 | [2018-06-26T19:08:21,780][INFO ][o.e.t.TransportService ] [PVmTsqv] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
logstash_1 | log4j:WARN No appenders could be found for logger (io.netty.util.internal.logging.InternalLoggerFactory).
logstash_1 | log4j:WARN Please initialize the log4j system properly.
logstash_1 | log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:23Z","tags":["warning","elasticsearch","admin"],"pid":13,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:23Z","tags":["warning","elasticsearch","admin"],"pid":13,"message":"No living connections"}
logstash_1 | {:timestamp=>"2018-06-26T19:08:23.572000+0000", :message=>"Connection refused (Connection refused)", :class=>"Manticore::SocketException", :level=>:error}
logstash_1 | {:timestamp=>"2018-06-26T19:08:23.790000+0000", :message=>"Pipeline main started"}
elasticsearch_1 | [2018-06-26T19:08:24,837][INFO ][o.e.c.s.ClusterService ] [PVmTsqv] new_master {PVmTsqv}{PVmTsqv3QnyS3sQarPcJ-A}{coD5A4HyR7-1MedSq8dFUQ}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)[, ]
elasticsearch_1 | [2018-06-26T19:08:24,869][INFO ][o.e.h.n.Netty4HttpServerTransport] [PVmTsqv] publish_address {172.18.0.2:9200}, bound_addresses {0.0.0.0:9200}
elasticsearch_1 | [2018-06-26T19:08:24,870][INFO ][o.e.n.Node ] [PVmTsqv] started
elasticsearch_1 | [2018-06-26T19:08:24,989][INFO ][o.e.g.GatewayService ] [PVmTsqv] recovered [1] indices into cluster_state
elasticsearch_1 | [2018-06-26T19:08:25,148][INFO ][o.e.c.r.a.AllocationService] [PVmTsqv] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:26Z","tags":["status","plugin:elasticsearch#5.6.9","info"],"pid":13,"state":"green","message":"Status changed from red to green - Kibana index ready","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at http://elasticsearch:9200."}
kibana_1 | {"type":"log","#timestamp":"2018-06-26T19:08:26Z","tags":["status","ui settings","info"],"pid":13,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Elasticsearch plugin is red"}
========================filebeat.yml==============================
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /jenkins/gerrit_volume/logs/*_log
#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
#============================== Kibana =====================================
setup.kibana:
  #host: "localhost:5601"
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["10.1.9.69:5044"]
logging.level: debug
From your logging, this looks like an Elasticsearch problem that is preventing ES from initializing. This line:
elasticsearch_1 | [2018-06-22T19:31:38,776][WARN ][o.e.b.BootstrapChecks ] [g8HPieb] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
You can bump it up temporarily with the following command:
sysctl -w vm.max_map_count=262144
Or set it permanently by adding the following line to /etc/sysctl.conf and running sysctl -p to pick up the config if you're on a live instance:
vm.max_map_count=262144
Since you're doing this in a Docker container, you probably want to just spin up with the latter option set in /etc/sysctl.conf on the host.
Reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
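To confirm the change took effect on the host and that Elasticsearch then comes up (a quick check using the port published in the compose file above):

# should print vm.max_map_count = 262144
sysctl vm.max_map_count

# once the stack is restarted, the cluster should answer on the published port
curl -s http://localhost:9200/_cluster/health?pretty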
