Difference between running Logstash on console and service - elasticsearch

I want to index the Apache logs from my webserver and view them on the Elasticsearch server, where Kibana is also running.
So I installed Logstash on my webserver.
If I run my Logstash config on the console on the webserver (as root), the content is sent to the ES server and an index is created on the ES server.
/usr/share/logstash/bin/logstash -f apache2.conf
But if I start the Logstash service with that same config, the ES server doesn't receive anything.
systemctl start logstash
I checked the logs /var/log/logstash/logstash-plain.log and /var/log/messages, but they contain no error entries or useful hints.
Nov 21 15:05:01 wfe01 logstash: [2018-11-21T15:05:01,967][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
Nov 21 15:05:02 wfe01 logstash: [2018-11-21T15:05:02,793][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.X.X:9200/]}}
Nov 21 15:05:02 wfe01 logstash: [2018-11-21T15:05:02,809][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.X.X:9200/, :path=>"/"}
Nov 21 15:05:03 wfe01 logstash: [2018-11-21T15:05:03,230][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://192.168.X.X:9200/"}
Nov 21 15:05:03 wfe01 logstash: [2018-11-21T15:05:03,344][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
Nov 21 15:05:03 wfe01 logstash: [2018-11-21T15:05:03,353][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
Nov 21 15:05:03 wfe01 logstash: [2018-11-21T15:05:03,398][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://192.168.X.X:9200"]}
Nov 21 15:05:03 wfe01 logstash: [2018-11-21T15:05:03,441][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
Nov 21 15:05:03 wfe01 logstash: [2018-11-21T15:05:03,507][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
Nov 21 15:05:04 wfe01 logstash: [2018-11-21T15:05:04,367][INFO ][logstash.filters.geoip ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-City.mmdb"}
Nov 21 15:05:05 wfe01 logstash: [2018-11-21T15:05:05,138][INFO ][logstash.inputs.file ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/var/lib/logstash/plugins/inputs/file/.sincedb_d32aef0519b35231d714b89c8b4d5791", :path=>["/path/ssl_access_log", "/path/ssl_error_log"]}
Nov 21 15:05:05 wfe01 logstash: [2018-11-21T15:05:05,193][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x634099e9 run>"}
Nov 21 15:05:05 wfe01 logstash: [2018-11-21T15:05:05,293][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
Nov 21 15:05:05 wfe01 logstash: [2018-11-21T15:05:05,321][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
Nov 21 15:05:05 wfe01 logstash: [2018-11-21T15:05:05,914][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
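One detail from this output worth keeping in mind: when started as a service, the file input generates its sincedb under /var/lib/logstash (the exact path is in the filewatch log line above), which is a different file from whatever sincedb a console run as root used. A quick way to inspect it (a sketch; the hashed filename is the one from the log line and will differ per setup):

cat /var/lib/logstash/plugins/inputs/file/.sincedb_d32aef0519b35231d714b89c8b4d5791

If it already records offsets at the end of the files, start_position => "beginning" will not re-read them, since that option only applies to files the input has not tracked before.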
(We have another DB server with the Metricbeat service installed, and that also works over the network; its content is sent to the ES server.)
ES Version 6.4
Logstash config:
input {
  file {
    path => [
      "/path/ssl_access_log",
      "/path/ssl_error_log"
    ]
    start_position => "beginning"
    add_field => { "myconf" => "apache2" }
  }
}
output {
  if [myconf] == "apache2" {
    elasticsearch {
      hosts => ["http://192.168.X.X:9200"]
      index => "apache2-status-%{+YYYY.MM.dd}"
    }
    stdout { codec => rubydebug }
  }
}
I tried several things: deleting the index, deleting the sincedb file, and restarting the service.
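Since the console run happens as root while the service normally runs as the logstash user, file permissions on the Apache logs are one difference worth ruling out. A minimal check along these lines (the unit and user names are the packaged defaults and may differ on your system):

systemctl cat logstash | grep -i '^User='
sudo -u logstash head -n 1 /path/ssl_access_log /path/ssl_error_log

If the second command fails with "Permission denied", the service simply cannot read the files that the root console run could.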
What could be the problem that makes the console call work, but not the service?
Thanks
Steffen

Related

Metricbeat - Not creating any logfile

I am trying to set up Metricbeat on my CentOS 7 host. I have explicitly specified the log file location for Metricbeat and set the logging level to debug, but I don't see a log file being created. I can see the logs in journalctl. Please let me know why the log file is not being created. The same settings work with Filebeat, and its log file gets created.
Metricbeat version:
root@example.domain.com:/usr/share/metricbeat# metricbeat version
metricbeat version 7.2.0 (amd64), libbeat 7.2.0 [9ba65d864ca37cd32c25b980dbb4020975288fc0 built 2019-06-20 15:07:31 +0000 UTC]
Metricbeat config file:
/etc/metricbeat/metricbeat.yml
metricbeat:
  config:
    modules:
      path: /etc/metricbeat/modules.d/*.yml
      reload.enabled: true
      reload.period: 10s
output.logstash:
  hosts: ['logstash.domain.com:5158']
  worker: 1
  compression_level: 3
  loadbalance: true
  ssl:
    certificate: /usr/share/metricbeat/metricbeat.crt
    key: /usr/share/metricbeat/metricbeat.key
    verification_mode: none
logging:
  level: debug
  to_files: true
  files:
    path: /var/myapp/log/metricbeat
    name: metricbeat.log
    rotateeverybytes: 10485760
    keepfiles: 7
Ideally it should create a file (metricbeat.log) in the /var/myapp/log/metricbeat location, but I don't see any files getting created.
Journalctl output:
* metricbeat.service - Metricbeat is a lightweight shipper for metrics.
Loaded: loaded (/usr/lib/systemd/system/metricbeat.service; disabled; vendor preset: disabled)
Active: active (running) since Mon 2022-01-24 08:51:13 PST; 39min ago
Docs: https://www.elastic.co/products/beats/metricbeat
Main PID: 13520 (metricbeat)
CGroup: /system.slice/metricbeat.service
`-13520 /usr/share/metricbeat/bin/metricbeat -e -c /etc/metricbeat/metricbeat.yml -path.home /usr/share/metricbeat -path.config /etc/metricbeat -path.data /var/lib/metricbeat -path.logs /var/log/metricbeat
Jan 24 09:30:14 example.domain.com metricbeat[13520]: "/var/lib/metricbeat",
Jan 24 09:30:14 example.domain.com metricbeat[13520]: "-path.logs",
Jan 24 09:30:14 example.domain.com metricbeat[13520]: "/var/log/metricbeat"
Jan 24 09:30:14 example.domain.com metricbeat[13520]: ]
Jan 24 09:30:14 example.domain.com metricbeat[13520]: },
Jan 24 09:30:14 example.domain.com metricbeat[13520]: "user": {
Jan 24 09:30:14 example.domain.com metricbeat[13520]: "name": "root"
Jan 24 09:30:14 example.domain.com metricbeat[13520]: },
Jan 24 09:30:14 example.domain.com metricbeat[13520]: "event": {
Jan 24 09:30:14 example.domain.com metricbeat[13520]: "module": "system",
I don't see anything in the /var/log/metricbeat directory either.
UPDATE: I tried with versions 6.3 and 7.16, and they work fine. It looks like an issue specific to 7.2.
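One thing that may be worth ruling out, assuming the unit shown in the journalctl output above is what actually runs: the service starts Metricbeat with -e, and -e logs to stderr (hence journald) and disables file output, which would make the to_files settings moot. A sketch of a systemd override that drops -e (the ExecStart line is copied from the unit above):

sudo systemctl edit metricbeat
# in the drop-in, clear and redefine ExecStart without -e:
#   [Service]
#   ExecStart=
#   ExecStart=/usr/share/metricbeat/bin/metricbeat -c /etc/metricbeat/metricbeat.yml -path.home /usr/share/metricbeat -path.config /etc/metricbeat -path.data /var/lib/metricbeat -path.logs /var/log/metricbeat
sudo systemctl daemon-reload
sudo systemctl restart metricbeat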

Fluentd regexp for jboss-eap

I am trying to collect logs from a JBoss EAP server and send them to Elasticsearch. I am using td-agent on the server, and it always reports unmatched_lines when it reads my input with the regex expression below.
My input configuration:
<source>
  @type tail
  read_from_head true
  tag file-jboss.log
  path C:\T24\JBOSS\jboss-eap-6.4\standalone\log\server.log
  #C:\T24\TAFJ\log\database.log
  pos_file c:\opt\td-agent\file-jboss.pos
  <parse>
    @type regexp
    expression /(?<time>\d{2}\:\d{2}\:\d{2}\,\d{3})\s+\s*(?<level>\w{1,6})\s+(?<service>\[[a-zA-Z.]*\])\s*(?<thread>\(.*\))\s+(?<message>[A-Za-z0-9]*\:.*)/
  </parse>
</source>
<filter file-jboss.log>
  @type record_transformer
  <record>
    host_param "#{Socket.gethostname}"
  </record>
</filter>
I tested this regex; it doesn't match everything, but what I need is matched:
(?<time>\d{2}\:\d{2}\:\d{2}\,\d{3})\s+\s*(?<level>\w{1,6})\s+(?<service>\[[a-zA-Z.]*\])\s*(?<thread>\(.*\))\s+(?<message>[A-Za-z0-9]*\:.*)
Sample log lines to match:
11:12:09,587 INFO [org.jboss.as.controller] (Controller Boot Thread) JBAS014774: Service status report
JBAS014775: New missing/unsatisfied dependencies:
service jboss.naming.context.java.ConnectionFactory (missing) dependents: [service jboss.naming.context.java.comp.TAFJJEE_EAR.TAFJJEE_EJB.ARCMOBProcessingBean.env.jms.TopicConnectionFactory, service jboss.naming.context.java.comp.TAFJJEE_EAR.TAFJJEE_MDB.TAFJPhantomListenerMDB.env.jms.TAFJQueueConnectionFactory, service jboss.naming.context.java.comp.TAFJJEE_EAR.TAFJJEE_EJB.AMLProcessingBean.env.jms.TopicConnectionFactory, service jboss.naming.context.java.module.BrowserWeb.BrowserWeb.env.jms.jmsConnectionFactory, JBAS014799: ... and 6 more ]
11:12:13,447 INFO [org.jboss.as.server.deployment] (MSC service thread 1-39) JBAS015974: Stopped subdeployment (runtime-name: TAFJJEE_EJB.jar) in 135ms
11:12:13,447 INFO [org.jboss.as.server.deployment] (MSC service thread 1-37) JBAS015974: Stopped subdeployment (runtime-name: TAFJJEE_MDB.jar) in 135ms
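The JBAS014775 block above spans several lines, and the continuation lines do not start with a timestamp, so a single-line regexp parser will flag them as unmatched. If those lines should be folded into the preceding event, a multiline parse section along these lines might help (an untested sketch; the field patterns are copied from the regex above and may still need tuning):

<parse>
  @type multiline
  format_firstline /^\d{2}:\d{2}:\d{2},\d{3}/
  format1 /^(?<time>\d{2}:\d{2}:\d{2},\d{3})\s+(?<level>\w{1,6})\s+(?<service>\[[a-zA-Z.]*\])\s*(?<thread>\(.*\))\s+(?<message>.*)/
</parse>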
Fluentd log output:
2021-10-13 11:55:21 +0100 [info]: using configuration file: <ROOT>
<source>
  @type tail
  read_from_head true
  tag "file-jboss.log"
  path "C:\\T24\\JBOSS\\jboss-eap-6.4\\standalone\\log\\server.log"
  pos_file "c:\\opt\\td-agent\\file-jboss.pos"
  <parse>
    @type "regexp"
    expression /(?<time>\d{2}\:\d{2}\:\d{2}\,\d{3})\s+\s*(?<level>\w{1,6})\s+(?<service>\[[a-zA-Z.]*\])\s*(?<thread>\(.*\))\s+(?<message>[A-Za-z0-9]*\:.*)/
    unmatched_lines
  </parse>
</source>
<filter file-jboss.log>
  @type record_transformer
  <record>
    host_param T24-MERCURY
  </record>
</filter>
<match file-jboss.log>
  @type file
  path "c:\\opt\\td-agent\\output\\tafj.log"
  <buffer time>
    path "c:\\opt\\td-agent\\output\\tafj.log"
  </buffer>
</match>
</ROOT>
2021-10-13 11:55:21 +0100 [info]: starting fluentd-1.13.3 pid=3192 ruby="2.7.4"
2021-10-13 11:55:21 +0100 [info]: spawn command to main: cmdline=["C:/opt/td-agent/bin/ruby.exe", "-Eascii-8bit:ascii-8bit", "C:/opt/td-agent/bin/fluentd", "--under-supervisor"]
2021-10-13 11:55:24 +0100 [info]: adding filter pattern="file-jboss.log" type="record_transformer"
2021-10-13 11:55:24 +0100 [info]: adding match pattern="file-jboss.log" type="file"
2021-10-13 11:55:24 +0100 [info]: adding source type="tail"
2021-10-13 11:55:24 +0100 [info]: #0 starting fluentd worker pid=8120 ppid=3192 worker=0
2021-10-13 11:55:24 +0100 [info]: #0 following tail of C:\T24\JBOSS\jboss-eap-6.4\standalone\log\server.log
2021-10-13 11:55:24 +0100 [info]: #0 fluentd worker is now running worker=0
Any idea why Fluentd doesn't like the regex?

Logstash 7.9.1 Docker container: file input is not working

I am trying to read a log file, but it is not working. It works when logstash.conf is configured to listen on port 5000, but reading from a file does not. I am using Logstash 7.9.1 from the Docker container and trying to send the logs to Elasticsearch 7.9.1.
This is my logstash.conf file:
input {
  file {
    path => ["/home/douglas/projects/incollect/*.log"]
    start_position => "beginning"
    ignore_older => 0
    sincedb_path => "/dev/null"
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "test-elk-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "changeme"
  }
  stdout {
    codec => rubydebug
  }
}
These are the logs from the console; I can't see any errors, and it says Logstash started successfully:
logstash_1 | [2020-10-16T00:38:27,748][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
logstash_1 | [2020-10-16T00:38:27,795][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
logstash_1 | [2020-10-16T00:38:27,798][INFO ][logstash.javapipeline ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x44d5fe run>"}
logstash_1 | [2020-10-16T00:38:27,800][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x4c6dee32 run>"}
logstash_1 | [2020-10-16T00:38:27,840][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
logstash_1 | [2020-10-16T00:38:28,535][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>0.73}
logstash_1 | [2020-10-16T00:38:28,599][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
logstash_1 | [2020-10-16T00:38:28,600][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.8}
logstash_1 | [2020-10-16T00:38:28,840][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
logstash_1 | [2020-10-16T00:38:28,909][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
logstash_1 | [2020-10-16T00:38:28,920][INFO ][filewatch.observingtail ][main][4a3eb924128694e00dae8e6fab084bfc5e3c3692e66663362019b182fcb31a48] START, creating Discoverer, Watch with file and sincedb collections
logstash_1 | [2020-10-16T00:38:29,386][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
and this is my log file:
Oct 9 15:34:19 incollect drupal: http://dev.incollect.com|1602257659|DEV|52.202.31.67|http://dev.incollect.com/icadmin/inquires_report?q=icadmin/ajax_validate_and_fix_inquire_by_id|http://dev.incollect.com/icadmin/inquires_report|3||Validate inquireStep 0
Oct 9 15:34:19 incollect drupal: http://dev.incollect.com|1602257659|DEV|52.202.31.67|http://dev.incollect.com/icadmin/inquires_report?q=icadmin/ajax_validate_and_fix_inquire_by_id|http://dev.incollect.com/icadmin/inquires_report|3||Validate inquireStep 1 - inquire_id:14219
Edit:
I am adding the docker-compose file; this is my Logstash configuration:
logstash:
  build:
    context: logstash/
    args:
      ELK_VERSION: $ELK_VERSION
  volumes:
    - type: bind
      source: ./logstash/config/logstash.yml
      target: /usr/share/logstash/config/logstash.yml
      read_only: true
    - type: bind
      source: ./logstash/pipeline
      target: /usr/share/logstash/pipeline
      read_only: true
  volumes:
    - ./../../:/usr/share/logstash
  ports:
    - "5000:5000/tcp"
    - "5000:5000/udp"
    - "9600:9600"
  environment:
    LS_JAVA_OPTS: "-Xmx256m -Xms256m"
  networks:
    - elk
  depends_on:
    - elasticsearch
I am not sure what the problem is; I tried different solutions, but none of them work.
If - ./../../:/usr/share/logstash is what you are using to mount the logs volume, your Logstash file input path should point to /usr/share/logstash/*.log
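To illustrate that suggestion, a minimal sketch (the container-side directory is only an example; the host path is the one used in the file input above):

logstash:
  volumes:
    - /home/douglas/projects/incollect:/usr/share/logstash/incollect-logs:ro

with the file input changed to path => ["/usr/share/logstash/incollect-logs/*.log"], because the path in logstash.conf must exist inside the container, not only on the host.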

Fluentd is working but no index is being created on Elasticsearch

I have a Java app in a Kubernetes pod that writes its logs to a file on a host volume (/var/log/java-app/java.log), and I use Fluentd as a DaemonSet that tails the log file and writes to Elasticsearch. Fluentd is running, but no index is being created in Elasticsearch and no index shows up in Kibana.
Here is the Fluentd configuration:
javaapp.conf: |
  <source>
    @type tail
    path /var/log/java-app/java.log
    pos_file /var/log/java-apps.log.pos
    tag java.app
    read_from_head true
    <parse>
      @type json
      time_format %Y-%m-%dT%H:%M:%S.%NZ
    </parse>
  </source>
  # we send the logs to Elasticsearch
  <match java.**>
    @type elasticsearch_dynamic
    @log_level info
    include_tag_key true
    host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
    port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
    user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
    password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
    scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
    ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
    reload_connections true
    logstash_format true
    logstash_prefix java-app-logs
    <buffer>
      @type file
      path /var/log/fluentd-buffers/java-app.system.buffer
      flush_mode interval
      retry_type exponential_backoff
      flush_thread_count 2
      flush_interval 5s
      retry_forever true
      retry_max_interval 30
      chunk_limit_size 2M
      queue_limit_length 32
      overflow_action block
    </buffer>
  </match>
Fluentd version: fluent/fluentd-kubernetes-daemonset:v1.1-debian-elasticsearch
Elasticsearch version: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
It looks like Fluentd does not manage to get the logs into Elasticsearch.
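Two quick checks that usually narrow this down (the resource names and host below are only examples for this setup): whether the Fluentd pods report errors from the elasticsearch output, and whether any java-app-logs-* index exists at all:

kubectl logs daemonset/fluentd -n kube-system | grep -iE 'error|warn' | tail -n 20
curl -s "http://<elasticsearch-host>:9200/_cat/indices/java-app-logs-*?v"

If the buffer is filling but flushes keep failing, the Fluentd log normally shows retry/error messages from the elasticsearch_dynamic output.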

Missing queues from RabbitMQ Metricbeat

It looks like only a fraction of the queues on my RabbitMQ cluster are making it into Elasticsearch via Metricbeat.
When I query RabbitMQ's /api/overview, I see 887 queues reported:
object_totals: {
consumers: 517,
queues: 887,
exchanges: 197,
connections: 305,
channels: 622
},
When I query RabbitMQ's /api/queues (which is what Metricbeat hits), I count 887 queues there as well.
When I get a unique count of the field rabbitmq.queue.name in Elasticsearch, I am seeing only 309 queues.
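For reference, a unique count like that can be obtained with a cardinality aggregation along these lines (the host and index pattern are placeholders; cardinality is approximate, though effectively exact at a few hundred values):

curl -s -H 'Content-Type: application/json' 'http://localhost:9200/metricbeat-*/_search' -d '
{
  "size": 0,
  "aggs": { "unique_queues": { "cardinality": { "field": "rabbitmq.queue.name" } } }
}'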
I don't see anything in the debug output that stands out to me. It's just the usual INFO level startup messages, followed by the publish information:
root@rabbitmq:/etc/metricbeat# metricbeat -e
2019-06-24T21:13:33.692Z INFO instance/beat.go:571 Home path: [/usr/share/metricbeat] Config path: [/etc/metricbeat] Data path: [/var/lib/metricbeat] Logs path: [/var/log/metricbeat]
2019-06-24T21:13:33.692Z INFO instance/beat.go:579 Beat ID: xxx
2019-06-24T21:13:33.692Z INFO [index-management.ilm] ilm/ilm.go:129 Policy name: metricbeat-7.1.1
2019-06-24T21:13:33.692Z INFO [seccomp] seccomp/seccomp.go:116 Syscall filter successfully installed
2019-06-24T21:13:33.692Z INFO [beat] instance/beat.go:827 Beat info {"system_info": {"beat": {"path": {"config": "/etc/metricbeat", "data": "/var/lib/metricbeat", "home": "/usr/share/metricbeat", "logs": "/var/log/metricbeat"}, "type": "metricbeat", "uuid": "xxx"}}}
2019-06-24T21:13:33.692Z INFO [beat] instance/beat.go:836 Build info {"system_info": {"build": {"commit": "3358d9a5a09e3c6709a2d3aaafde628ea34e8419", "libbeat": "7.1.1", "time": "2019-05-23T13:23:10.000Z", "version": "7.1.1"}}}
2019-06-24T21:13:33.692Z INFO [beat] instance/beat.go:839 Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":4,"version":"go1.11.5"}}}
[...]
2019-06-24T21:13:33.694Z INFO [beat] instance/beat.go:872 Process info {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"ambient":null}, "cwd": "/etc/metricbeat", "exe": "/usr/share/metricbeat/bin/metricbeat", "name": "metricbeat", "pid": 30898, "ppid": 30405, "seccomp": {"mode":"filter","no_new_privs":true}, "start_time": "2019-06-24T21:13:33.100Z"}}}
2019-06-24T21:13:33.694Z INFO instance/beat.go:280 Setup Beat: metricbeat; Version: 7.1.1
2019-06-24T21:13:33.694Z INFO [publisher] pipeline/module.go:97 Beat name: metricbeat
2019-06-24T21:13:33.694Z INFO instance/beat.go:391 metricbeat start running.
2019-06-24T21:13:33.694Z INFO cfgfile/reload.go:150 Config reloader started
2019-06-24T21:13:33.694Z INFO [monitoring] log/log.go:117 Starting metrics logging every 30s
[...]
2019-06-24T21:13:43.696Z INFO filesystem/filesystem.go:57 Ignoring filesystem types: sysfs, rootfs, ramfs, bdev, proc, cpuset, cgroup, cgroup2, tmpfs, devtmpfs, configfs, debugfs, tracefs, securityfs, sockfs, dax, bpf, pipefs, hugetlbfs, devpts, ecryptfs, fuse, fusectl, pstore, mqueue, autofs
2019-06-24T21:13:43.696Z INFO fsstat/fsstat.go:59 Ignoring filesystem types: sysfs, rootfs, ramfs, bdev, proc, cpuset, cgroup, cgroup2, tmpfs, devtmpfs, configfs, debugfs, tracefs, securityfs, sockfs, dax, bpf, pipefs, hugetlbfs, devpts, ecryptfs, fuse, fusectl, pstore, mqueue, autofs
2019-06-24T21:13:44.696Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://xxx))
2019-06-24T21:13:44.711Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://xxx)) established
2019-06-24T21:14:03.696Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":130,"time":{"ms":131}},"total":{"ticks":1960,"time":{"ms":1965},"value":1960},"user":{"ticks":1830,"time":{"ms":1834}}},"handles":{"limit":{"hard":1048576,"soft":1024},"open":12},"info":{"ephemeral_id":"xxx","uptime":{"ms":30030}},"memstats":{"gc_next":30689808,"memory_alloc":21580680,"memory_total":428076400,"rss":79917056}},"libbeat":{"config":{"module":{"running":0},"reloads":2},"output":{"events":{"acked":7825,"batches":11,"total":7825},"read":{"bytes":66},"type":"logstash","write":{"bytes":870352}},"pipeline":{"clients":4,"events":{"active":313,"published":8138,"retry":523,"total":8138},"queue":{"acked":7825}}},"metricbeat":{"rabbitmq":{"connection":{"events":2987,"failures":10,"success":2977},"exchange":{"events":1970,"success":1970},"node":{"events":10,"success":10},"queue":{"events":3130,"failures":10,"success":3120}},"system":{"cpu":{"events":2,"success":2},"filesystem":{"events":7,"success":7},"fsstat":{"events":1,"success":1},"load":{"events":2,"success":2},"memory":{"events":2,"success":2},"network":{"events":4,"success":4},"process":{"events":18,"success":18},"process_summary":{"events":2,"success":2},"socket_summary":{"events":2,"success":2},"uptime":{"events":1,"success":1}}},"system":{"cpu":{"cores":4},"load":{"1":0.48,"15":0.28,"5":0.15,"norm":{"1":0.12,"15":0.07,"5":0.0375}}}}}}
I think that if there were a problem getting the queues, I would see an error in the logs above, as per https://github.com/elastic/beats/blob/master/metricbeat/module/rabbitmq/queue/data.go#L94-L104
Here's the metricbeat.yml:
metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
  reload.period: 10s
setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
name: metricbeat
fields:
  environment: development
processors:
  - add_cloud_metadata: ~
output.logstash:
  hosts: ["xxx"]
Here's the modules.d/rabbitmq.yml:
- module: rabbitmq
  metricsets: ["node", "queue", "connection", "exchange"]
  enabled: true
  period: 2s
  hosts: ["xxx"]
  username: xxx
  password: xxx
I solved it by upgrading Elastic Stack from 7.1.1 to 7.2.0.
