Describe the bug
Logs are not getting transferred to Elasticsearch.
Expected behavior
Logs from the source folders should have been transferred to Elasticsearch.
Your Environment
Fluentd or td-agent version: td-agent 1.11.2
Operating system: Linux
Your Configuration
<source>
@type tail
path /home/smtp-api/storage/logs/fluentd/*
pos_file /home/smtp-api/storage/logs/fluentd/access-logs.pos
tag smtp.access
format json
read_from_head true
</source>
<source>
@type tail
path /home/smtp-api-2/storage/logs/fluentd/*
pos_file /home/smtp-api-2/storage/logs/fluentd/access-logs.pos
tag smtp.access2
format json
read_from_head true
</source>
<match smtp.*>
@type elasticsearch
host elk-data.amartya-int.int
<buffer>
@type file
path /var/log/fluent/smtp
compress gzip
</buffer>
port 9200
logstash_format true
logstash_prefix smtp.access
</match>
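One hedged adjustment for the repeated "closed stream" flush failures shown in the error log below is to let the output plugin re-establish its Elasticsearch connections instead of reusing a dead one. The options here are standard fluent-plugin-elasticsearch parameters; the values are illustrative, not a confirmed fix:
<match smtp.*>
  @type elasticsearch
  host elk-data.amartya-int.int
  port 9200
  logstash_format true
  logstash_prefix smtp.access
  # retry the connection instead of failing on a stale socket
  reconnect_on_error true
  reload_connections false
  reload_on_failure true
  request_timeout 30s
  <buffer>
    @type file
    path /var/log/fluent/smtp
    compress gzip
  </buffer>
</match>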
Your Error Log
2021-05-17 15:30:41 +0530 [warn]: #0 failed to flush the buffer. retry_time=3 next_retry_seconds=2021-05-17 15:30:45 864793371479422728793/2199023255552000000000 +0530 chunk="5c279c260c779ea13a39e5ac45c9b1a9" error_class=IOError error="closed stream"
2021-05-17 15:30:41 +0530 [warn]: #0 suppressed same stacktrace
2021-05-17 15:30:46 +0530 [warn]: #0 failed to flush the buffer. retry_time=4 next_retry_seconds=2021-05-17 15:30:53 1246145672473190896169/2199023255552000000000 +0530 chunk="5c279c260c779ea13a39e5ac45c9b1a9" error_class=IOError error="closed stream"
2021-05-17 15:30:46 +0530 [warn]: #0 suppressed same stacktrace
2021-05-17 15:30:53 +0530 [warn]: #0 failed to flush the buffer. retry_time=5 next_retry_seconds=2021-05-17 15:31:09 431493802875340255437/549755813888000000000 +0530 chunk="5c279c260c779ea13a39e5ac45c9b1a9" error_class=IOError error="closed stream"
2021-05-17 15:30:53 +0530 [warn]: #0 suppressed same stacktrace
2021-05-17 15:31:10 +0530 [warn]: #0 failed to flush the buffer. retry_time=6 next_retry_seconds=2021-05-17 15:31:44 36018824927254535859/274877906944000000000 +0530 chunk="5c279c260c779ea13a39e5ac45c9b1a9" error_class=IOError error="closed stream"
2021-05-17 15:31:10 +0530 [warn]: #0 suppressed same stacktrace
2021-05-17 15:31:45 +0530 [warn]: #0 failed to flush the buffer. retry_time=7 next_retry_seconds=2021-05-17 15:32:41 213226392719973317877/274877906944000000000 +0530 chunk="5c279c260c779ea13a39e5ac45c9b1a9" error_class=IOError error="closed stream"
2021-05-17 15:31:45 +0530 [warn]: #0 suppressed same stacktrace
2021-05-17 15:32:41 +0530 [warn]: #0 failed to flush the buffer. retry_time=8 next_retry_seconds=2021-05-17 15:34:46 14488516743329774951/68719476736000000000 +0530 chunk="5c279c260c779ea13a39e5ac45c9b1a9" error_class=IOError error="closed stream"
2021-05-17 15:32:41 +0530 [warn]: #0 suppressed same stacktrace
2021-05-17 15:34:46 +0530 [warn]: #0 failed to flush the buffer. retry_time=9 next_retry_seconds=2021-05-17 15:38:59 1590472627730823409/17179869184000000000 +0530 chunk="5c279c260c779ea13a39e5ac45c9b1a9" error_class=IOError error="closed stream"
2021-05-17 15:34:46 +0530 [warn]: #0 suppressed same stacktrace
2021-05-17 15:38:59 +0530 [warn]: #0 failed to flush the buffer. retry_time=10 next_retry_seconds=2021-05-17 15:48:09 6447852607269569521/8589934592000000000 +0530 chunk="5c279c260c779ea13a39e5ac45c9b1a9" error_class=IOError error="closed stream"
2021-05-17 15:38:59 +0530 [warn]: #0 suppressed same stacktrace
2021-05-17 15:48:10 +0530 [warn]: #0 failed to flush the buffer. retry_time=11 next_retry_seconds=2021-05-17 16:05:21 2422140701418556159/8589934592000000000 +0530 chunk="5c279c260c779ea13a39e5ac45c9b1a9" error_class=IOError error="closed stream"
2021-05-17 15:48:10 +0530 [warn]: #0 suppressed same stacktrace
2021-05-17 16:05:21 +0530 [warn]: #0 failed to flush the buffer. retry_time=12 next_retry_seconds=2021-05-17 16:43:30 22032519751758471/268435456000000000 +0530 chunk="5c279c260c779ea13a39e5ac45c9b1a9" error_class=IOError error="closed stream"
2021-05-17 16:05:21 +0530 [warn]: #0 suppressed same stacktrace
Related
I have downloaded Elasticsearch & Kibana on my Ubuntu 18.04 machine. Both are running fine and I can access them. Below are the Elasticsearch details:
{
"name" : "TX-G1-000",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "AobC_iiNSyyNftYl3pUJ7w",
"version" : {
"number" : "7.14.1",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "66b55ebfa59c92c15db3f69a335d500018b3331e",
"build_date" : "2021-08-26T09:01:05.390870785Z",
"build_snapshot" : false,
"lucene_version" : "8.9.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
I have also installed Fluentd, and its td-agent service is running fine.
● td-agent.service - td-agent: Fluentd based data collector for Treasure Data
Loaded: loaded (/lib/systemd/system/td-agent.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2021-09-20 21:18:12 IST; 7min ago
Docs: https://docs.treasuredata.com/display/public/PD/About+Treasure+Data%27s+Server-Side+Agent
Process: 5486 ExecStop=/bin/kill -TERM ${MAINPID} (code=exited, status=0/SUCCESS)
Process: 5491 ExecStart=/opt/td-agent/bin/fluentd --log $TD_AGENT_LOG_FILE --daemon /var/run/td-agent/td-agent.pid $TD_AGENT_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 5498 (fluentd)
Tasks: 12 (limit: 4915)
CGroup: /system.slice/td-agent.service
├─5498 /opt/td-agent/bin/ruby /opt/td-agent/bin/fluentd --log /var/log/td-agent/td-agent.log --daemon /var/run/td-agent/td-agent.pid
└─5501 /opt/td-agent/bin/ruby -Eascii-8bit:ascii-8bit /opt/td-agent/bin/fluentd --log /var/log/td-agent/td-agent.log --daemon /var/run/td-agent/td-agent.pid --under-supervisor
Sep 20 21:18:11 TX-G1-000 systemd[1]: Starting td-agent: Fluentd based data collector for Treasure Data...
Sep 20 21:18:12 TX-G1-000 systemd[1]: Started td-agent: Fluentd based data collector for Treasure Data.
Below is my td-agent.conf file:
<source>
@type tail
path /home/user/PycharmProjects/Td-Agent/logs/TX-S2-SKL-001_health.json
pos_file /home/user/PycharmProjects/Td-Agent/logs/TX-S2-SKL-001_health.json.pos
format json
time_format %Y-%m-%d %H:%M:%S
tag health01
</source>
<source>
@type tail
path /home/user/PycharmProjects/Td-Agent/logs/TX-S2-SKL-001_cycle.json
pos_file /home/user/PycharmProjects/Td-Agent/logs/TX-S2-SKL-001_cycle.json.pos
format json
time_format %Y-%m-%d %H:%M:%S
tag cycle01
</source>
<match health*>
@type elasticsearch
hosts http://localhost:9200/
index_name health_skl_device
type_name health
</match>
<match cycle*>
@type elasticsearch
hosts http://localhost:9200/
index_name cycle_skl_device
type_name cycle
</match>
When td-agent is running, below are its logs:
2021-09-20 21:18:12 +0530 [info]: parsing config file is succeeded path="/etc/td-agent/td-agent.conf"
2021-09-20 21:18:12 +0530 [info]: gem 'fluent-plugin-elasticsearch' version '5.1.0'
2021-09-20 21:18:12 +0530 [info]: gem 'fluent-plugin-elasticsearch' version '5.0.5'
2021-09-20 21:18:12 +0530 [info]: gem 'fluent-plugin-flowcounter-simple' version '0.1.0'
2021-09-20 21:18:12 +0530 [info]: gem 'fluent-plugin-kafka' version '0.16.3'
2021-09-20 21:18:12 +0530 [info]: gem 'fluent-plugin-prometheus' version '2.0.1'
2021-09-20 21:18:12 +0530 [info]: gem 'fluent-plugin-prometheus_pushgateway' version '0.1.0'
2021-09-20 21:18:12 +0530 [info]: gem 'fluent-plugin-record-modifier' version '2.1.0'
2021-09-20 21:18:12 +0530 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '2.4.0'
2021-09-20 21:18:12 +0530 [info]: gem 'fluent-plugin-s3' version '1.6.0'
2021-09-20 21:18:12 +0530 [info]: gem 'fluent-plugin-sd-dns' version '0.1.0'
2021-09-20 21:18:12 +0530 [info]: gem 'fluent-plugin-systemd' version '1.0.5'
2021-09-20 21:18:12 +0530 [info]: gem 'fluent-plugin-td' version '1.1.0'
2021-09-20 21:18:12 +0530 [info]: gem 'fluent-plugin-utmpx' version '0.5.0'
2021-09-20 21:18:12 +0530 [info]: gem 'fluent-plugin-webhdfs' version '1.4.0'
2021-09-20 21:18:12 +0530 [info]: gem 'fluentd' version '1.13.3'
2021-09-20 21:18:12 +0530 [info]: using configuration file: <ROOT>
<source>
@type tail
path "/home/user/PycharmProjects/Td-Agent/logs/TX-S2-SKL-001_health.json"
pos_file "/home/user/PycharmProjects/Td-Agent/logs/TX-S2-SKL-001_health.json.pos"
format json
time_format %Y-%m-%d %H:%M:%S
tag "health01"
<parse>
time_format %Y-%m-%d %H:%M:%S
@type json
unmatched_lines
time_type string
</parse>
</source>
<source>
@type tail
path "/home/user/PycharmProjects/Td-Agent/logs/TX-S2-SKL-001_cycle.json"
pos_file "/home/user/PycharmProjects/Td-Agent/logs/TX-S2-SKL-001_cycle.json.pos"
format json
time_format %Y-%m-%d %H:%M:%S
tag "cycle01"
<parse>
time_format %Y-%m-%d %H:%M:%S
@type json
unmatched_lines
time_type string
</parse>
</source>
<match health*>
@type elasticsearch
hosts "http://localhost:9200/"
index_name "health_skl_device"
type_name "health"
</match>
<match cycle*>
@type elasticsearch
hosts "http://localhost:9200/"
index_name "cycle_skl_device"
type_name "cycle"
</match>
</ROOT>
2021-09-20 21:18:12 +0530 [info]: starting fluentd-1.13.3 pid=5491 ruby="2.7.4"
2021-09-20 21:18:12 +0530 [info]: spawn command to main: cmdline=["/opt/td-agent/bin/ruby", "-Eascii-8bit:ascii-8bit", "/opt/td-agent/bin/fluentd", "--log", "/var/log/td-agent/td-agent.log", "--daemon", "/var/run/td-agent/td-agent.pid", "--under-supervisor"]
2021-09-20 21:18:13 +0530 [info]: adding match pattern="health*" type="elasticsearch"
2021-09-20 21:18:13 +0530 [warn]: #0 Detected ES 7.x: `_doc` will be used as the document `_type`.
2021-09-20 21:18:13 +0530 [info]: adding match pattern="cycle*" type="elasticsearch"
2021-09-20 21:18:13 +0530 [warn]: #0 Detected ES 7.x: `_doc` will be used as the document `_type`.
2021-09-20 21:18:13 +0530 [info]: adding source type="tail"
2021-09-20 21:18:13 +0530 [info]: adding source type="tail"
2021-09-20 21:18:13 +0530 [info]: #0 starting fluentd worker pid=5501 ppid=5498 worker=0
2021-09-20 21:18:13 +0530 [info]: #0 following tail of /home/thingtrax/PycharmProjects/Td-Agent/logs/TX-S2-SKL-001_cycle.json
2021-09-20 21:18:13 +0530 [info]: #0 following tail of /home/thingtrax/PycharmProjects/Td-Agent/logs/TX-S2-SKL-001_health.json
2021-09-20 21:18:13 +0530 [info]: #0 fluentd worker is now running worker=0
I do not see any error logs, but I am not sure why it is not able to upload data. When I try to create an index pattern in Kibana, nothing matches. Can anyone please help me debug this issue? Thanks.
Logs after enabling the debug log level
2021-09-23 07:41:50 +0530 [debug]: 'host localhost' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'host: localhost' doesn't have tag placeholder
2021-09-23 07:41:50 +0530 [debug]: 'index_name health_skl_device' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'index_name: health_skl_device' doesn't have tag placeholder
2021-09-23 07:41:50 +0530 [debug]: 'template_name ' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'template_name: ' doesn't have tag placeholder
2021-09-23 07:41:50 +0530 [debug]: 'logstash_prefix logstash' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'logstash_prefix: logstash' doesn't have tag placeholder
2021-09-23 07:41:50 +0530 [debug]: 'logstash_dateformat %Y.%m.%d' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'logstash_dateformat: %Y.%m.%d' has timestamp placeholders, but chunk key 'time' is not configured
2021-09-23 07:41:50 +0530 [debug]: 'logstash_dateformat %Y.%m.%d' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'logstash_dateformat: %Y.%m.%d' doesn't have tag placeholder
2021-09-23 07:41:50 +0530 [debug]: 'deflector_alias ' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'deflector_alias: ' doesn't have tag placeholder
2021-09-23 07:41:50 +0530 [debug]: 'application_name default' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'application_name: default' doesn't have tag placeholder
2021-09-23 07:41:50 +0530 [debug]: 'ilm_policy_id logstash-policy' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'ilm_policy_id: logstash-policy' doesn't have tag placeholder
2021-09-23 07:41:50 +0530 [debug]: Need substitution: false
2021-09-23 07:41:50 +0530 [debug]: 'host_placeholder localhost' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'host_placeholder: localhost' doesn't have tag placeholder
2021-09-23 07:41:50 +0530 [debug]: 'host localhost' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'host: localhost' doesn't have tag placeholder
2021-09-23 07:41:50 +0530 [debug]: 'index_name cycle_skl_device' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'index_name: cycle_skl_device' doesn't have tag placeholder
2021-09-23 07:41:50 +0530 [debug]: 'template_name ' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'template_name: ' doesn't have tag placeholder
2021-09-23 07:41:50 +0530 [debug]: 'logstash_prefix logstash' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'logstash_prefix: logstash' doesn't have tag placeholder
2021-09-23 07:41:50 +0530 [debug]: 'logstash_dateformat %Y.%m.%d' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'logstash_dateformat: %Y.%m.%d' has timestamp placeholders, but chunk key 'time' is not configured
2021-09-23 07:41:50 +0530 [debug]: 'logstash_dateformat %Y.%m.%d' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'logstash_dateformat: %Y.%m.%d' doesn't have tag placeholder
2021-09-23 07:41:50 +0530 [debug]: 'deflector_alias ' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'deflector_alias: ' doesn't have tag placeholder
2021-09-23 07:41:50 +0530 [debug]: 'application_name default' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'application_name: default' doesn't have tag placeholder
2021-09-23 07:41:50 +0530 [debug]: 'ilm_policy_id logstash-policy' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'ilm_policy_id: logstash-policy' doesn't have tag placeholder
2021-09-23 07:41:50 +0530 [debug]: Need substitution: false
2021-09-23 07:41:50 +0530 [debug]: 'host_placeholder localhost' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'host_placeholder: localhost' doesn't have tag placeholder
2021-09-23 07:41:50 +0530 [debug]: No fluent logger for internal event
I think you have incorrect match patterns. Nowhere in the documentation does it mention that asterisks can be used that way; they should either take the place of a whole tag part or be used inside a regular expression. According to this section, Fluentd accepts all non-period characters as part of a tag. So health* is in fact a valid tag name, and Fluentd expects an exact match of that string.
You should try using /health.*/ and /cycle.*/ instead.
Better yet, you can go the intended way: change the tags to health.01 and cycle.01 and use health.** and cycle.** for matching.
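A minimal sketch of that change, reusing the paths from the configuration above (only the tag and the match pattern differ):
<source>
  @type tail
  path /home/user/PycharmProjects/Td-Agent/logs/TX-S2-SKL-001_health.json
  pos_file /home/user/PycharmProjects/Td-Agent/logs/TX-S2-SKL-001_health.json.pos
  format json
  time_format %Y-%m-%d %H:%M:%S
  tag health.01
</source>
<match health.**>
  @type elasticsearch
  hosts http://localhost:9200/
  index_name health_skl_device
</match>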
I tried to set up an EFK stack. While Elasticsearch and Kibana work fine in the default namespace, the Fluentd container can't connect to Elasticsearch.
kubectl get services -n default
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch-master ClusterIP 10.43.40.136 <none> 9200/TCP,9300/TCP 92m
elasticsearch-master-headless ClusterIP None <none> 9200/TCP,9300/TCP 92m
kibana-kibana ClusterIP 10.43.152.189 <none> 5601/TCP 74m
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 14d
I've installed Fluentd from this repo and changed the URL to point to Elasticsearch:
https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/fluentd-daemonset-elasticsearch-rbac.yaml
kubectl -n kube-system get pods | grep fluentd
fluentd-4fd2s 1/1 Running 0 51m
fluentd-7t2v5 1/1 Running 0 49m
fluentd-dfnfg 1/1 Running 0 50m
fluentd-lvrsv 1/1 Running 0 48m
fluentd-rv4td 1/1 Running 0 50m
but the log is telling me:
2021-07-23 21:38:59 +0000 [info]: starting fluentd-1.13.2 pid=7 ruby="2.6.8"
2021-07-23 21:38:59 +0000 [info]: spawn command to main: cmdline=["/usr/local/bin/ruby", "-Eascii-8bit:ascii-8bit", "/fluentd/vendor/bundle/ruby/2.6.0/bin/fluentd", "-c", "/fluentd/etc/fluent.conf", "-p", "/fluentd/plugins", "--gemfile", "/fluentd/Gemfile", "-r", "/fluentd/vendor/bundle/ruby/2.6.0/gems/fluent-plugin-elasticsearch-5.0.5/lib/fluent/plugin/elasticsearch_simple_sniffer.rb", "--under-supervisor"]
2021-07-23 21:39:01 +0000 [info]: adding match in @FLUENT_LOG pattern="fluent.**" type="null"
2021-07-23 21:39:01 +0000 [info]: adding filter pattern="kubernetes.**" type="kubernetes_metadata"
2021-07-23 21:39:01 +0000 [warn]: #0 [filter_kube_metadata] !! The environment variable 'K8S_NODE_NAME' is not set to the node name which can affect the API server and watch efficiency !!
2021-07-23 21:39:01 +0000 [info]: adding match pattern="**" type="elasticsearch"
2021-07-23 21:39:09 +0000 [warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. connect_write timeout reached
2021-07-23 21:39:09 +0000 [warn]: #0 [out_es] Remaining retry: 14. Retry to communicate after 2 second(s).
2021-07-23 21:39:18 +0000 [warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. connect_write timeout reached
2021-07-23 21:39:18 +0000 [warn]: #0 [out_es] Remaining retry: 13. Retry to communicate after 4 second(s).
2021-07-23 21:39:31 +0000 [warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. connect_write timeout reached
2021-07-23 21:39:31 +0000 [warn]: #0 [out_es] Remaining retry: 12. Retry to communicate after 8 second(s).
2021-07-23 21:39:52 +0000 [warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. connect_write timeout reached
2021-07-23 21:39:52 +0000 [warn]: #0 [out_es] Remaining retry: 11. Retry to communicate after 16 second(s).
2021-07-23 21:40:29 +0000 [warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. connect_write timeout reached
2021-07-23 21:40:29 +0000 [warn]: #0 [out_es] Remaining retry: 10. Retry to communicate after 32 second(s).
2021-07-23 21:41:38 +0000 [warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. connect_write timeout reached
I installed dig and it resolved the service:
root@fluentd-dfnfg:/home/fluent# nslookup elasticsearch-master.default.svc.cluster.local
Server: 10.43.0.10
Address: 10.43.0.10#53
Name: elasticsearch-master.default.svc.cluster.local
Address: 10.43.40.136
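Since DNS resolves but connect_write still times out, a direct request from inside one of the Fluentd pods can show whether the port itself is reachable (a suggested check; it assumes curl is present in the image):
kubectl -n kube-system exec fluentd-dfnfg -- curl -sv --max-time 5 http://elasticsearch-master.default.svc.cluster.local:9200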
I'm out of ideas.
PS: I'm using a hardened RKE2 (https://github.com/rancherfederal/rke2-ansible).
I am getting this error from the Fluentd pods, and they keep restarting. I am running this on Kubernetes v1.17.9-eks-4c6976.
I am not sure what the cause is. Any help would be appreciated.
/usr/local/bundle/gems/fluentd-1.11.4/lib/fluent/plugin_helper/http_server/compat/webrick_handler.rb:26: warning: The called method `build' is defined here
2020-11-23 18:02:08 +0000 [warn]: [elasticsearch] failed to flush the buffer. retry_time=0 next_retry_seconds=2020-11-23 18:02:09.126315296 +0000 chunk="5b4c9fd811e8162eb94f03d8cec677e5" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch-master\", :port=>9200, :scheme=>\"http\", :path=>\"\"}): read timeout reached"
2020-11-23 18:02:08.126340601 +0000 fluent.warn: {"retry_time":0,"next_retry_seconds":"2020-11-23 18:02:09.126315296 +0000","chunk":"5b4c9fd811e8162eb94f03d8cec677e5","error":"#<Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure: could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch-master\", :port=>9200, :scheme=>\"http\", :path=>\"\"}): read timeout reached>","message":"[elasticsearch] failed to flush the buffer. retry_time=0 next_retry_seconds=2020-11-23 18:02:09.126315296 +0000 chunk=\"5b4c9fd811e8162eb94f03d8cec677e5\" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error=\"could not push logs to Elasticsearch cluster ({:host=>\\\"elasticsearch-master\\\", :port=>9200, :scheme=>\\\"http\\\", :path=>\\\"\\\"}): read timeout reached\""}
2020-11-23 18:02:08 +0000 [warn]: /usr/local/bundle/gems/fluent-plugin-elasticsearch-4.2.2/lib/fluent/plugin/out_elasticsearch.rb:1055:in `rescue in send_bulk'
2020-11-23 18:02:08 +0000 [warn]: /usr/local/bundle/gems/fluent-plugin-elasticsearch-4.2.2/lib/fluent/plugin/out_elasticsearch.rb:1017:in `send_bulk'
2020-11-23 18:02:08 +0000 [warn]: /usr/local/bundle/gems/fluent-plugin-elasticsearch-4.2.2/lib/fluent/plugin/out_elasticsearch.rb:842:in `block in write'
2020-11-23 18:02:08 +0000 [warn]: /usr/local/bundle/gems/fluent-plugin-elasticsearch-4.2.2/lib/fluent/plugin/out_elasticsearch.rb:841:in `each'
2020-11-23 18:02:08 +0000 [warn]: /usr/local/bundle/gems/fluent-plugin-elasticsearch-4.2.2/lib/fluent/plugin/out_elasticsearch.rb:841:in `write'
2020-11-23 18:02:08 +0000 [warn]: /usr/local/bundle/gems/fluentd-1.11.4/lib/fluent/plugin/output.rb:1136:in `try_flush'
2020-11-23 18:02:08 +0000 [warn]: /usr/local/bundle/gems/fluentd-1.11.4/lib/fluent/plugin/output.rb:1442:in `flush_thread_run'
2020-11-23 18:02:08 +0000 [warn]: /usr/local/bundle/gems/fluentd-1.11.4/lib/fluent/plugin/output.rb:462:in `block (2 levels) in start'
2020-11-23 18:02:08 +0000 [warn]: /usr/local/bundle/gems/fluentd-1.11.4/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'
2020-11-23 18:02:08 +0000 [warn]: [elasticsearch] failed to flush the buffer. retry_time=1 next_retry_seconds=2020-11-23 18:02:09 475256825743319463889/8796093022208000000000 +0000 chunk="5b4c9fd80c4e40f1d7a4a799916ae12b" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch-master\", :port=>9200, :scheme=>\"http\", :path=>\"\"}): read timeout reached"
2020-11-23 18:02:08.127449054 +0000 fluent.warn: {"retry_time":1,"next_retry_seconds":"2020-11-23 18:02:09 475256825743319463889/8796093022208000000000 +0000","chunk":"5b4c9fd80c4e40f1d7a4a799916ae12b","error":"#<Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure: could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch-master\", :port=>9200, :scheme=>\"http\", :path=>\"\"}): read timeout reached>","message":"[elasticsearch] failed to flush the buffer. retry_time=1 next_retry_seconds=2020-11-23 18:02:09 475256825743319463889/8796093022208000000000 +0000 chunk=\"5b4c9fd80c4e40f1d7a4a799916ae12b\" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error=\"could not push logs to Elasticsearch cluster ({:host=>\\\"elasticsearch-master\\\", :port=>9200, :scheme=>\\\"http\\\", :path=>\\\"\\\"}): read timeout reached\""}
The default request_timeout for fluent-plugin-elasticsearch is 5s, which is often too short when Fluentd has a large backlog to replay to Elasticsearch in large bulk messages.
So you may want to increase the request_timeout value for your Elasticsearch output in your Fluentd configuration to 15s or even much higher, say 60s. It is important to include the time unit (for example, 60s), not just a bare value like 60.
The documentation for that setting can be seen here: https://github.com/uken/fluent-plugin-elasticsearch#request_timeout
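For example, a minimal sketch of such an output section (host and port are taken from the error log above; the 60s value is illustrative):
<match **>
  @type elasticsearch
  host elasticsearch-master
  port 9200
  # allow slow bulk requests to complete instead of timing out at the 5s default
  request_timeout 60s
</match>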
This could also be an indication that your elasticsearch node/cluster cannot ingest the data fast enough.
I am following the instructions to connect Fluentd to Elasticsearch from HERE.
Unfortunately, when I deploy the daemonset, I get the following error:
2020-11-05 09:13:19 +0000 [warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. no implicit conversion of nil into String (TypeError)
2020-11-05 09:13:19 +0000 [warn]: #0 [out_es] Remaining retry: 14. Retry to communicate after 2 second(s).
2020-11-05 09:13:23 +0000 [warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. no implicit conversion of nil into String (TypeError)
2020-11-05 09:13:23 +0000 [warn]: #0 [out_es] Remaining retry: 13. Retry to communicate after 4 second(s).
2020-11-05 09:13:31 +0000 [warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. no implicit conversion of nil into String (TypeError)
2020-11-05 09:13:31 +0000 [warn]: #0 [out_es] Remaining retry: 12. Retry to communicate after 8 second(s).
2020-11-05 09:13:47 +0000 [warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. no implicit conversion of nil into String (TypeError)
2020-11-05 09:13:47 +0000 [warn]: #0 [out_es] Remaining retry: 11. Retry to communicate after 16 second(s).
2020-11-05 09:14:24 +0000 [warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. no implicit conversion of nil into String (TypeError)
2020-11-05 09:14:24 +0000 [warn]: #0 [out_es] Remaining retry: 10. Retry to communicate after 32 second(s).
2020-11-05 09:15:28 +0000 [warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. no implicit conversion of nil into String (TypeError)
2020-11-05 09:15:28 +0000 [warn]: #0 [out_es] Remaining retry: 9. Retry to communicate after 64 second(s).
#<Thread:0x00007fcf20f41a40@/fluentd/vendor/bundle/ruby/2.6.0/gems/fluent-plugin-kubernetes_metadata_filter-2.3.0/lib/fluent/plugin/filter_kubernetes_metadata.rb:265 run> terminated with exception (report_on_exception is true):
/usr/local/lib/ruby/2.6.0/openssl/buffering.rb:125:in `sysread': error reading from socket: Connection reset by peer (HTTP::ConnectionError)
from /usr/local/lib/ruby/2.6.0/openssl/buffering.rb:125:in `readpartial'
from /fluentd/vendor/bundle/ruby/2.6.0/gems/http-4.4.1/lib/http/timeout/null.rb:45:in `readpartial'
from /fluentd/vendor/bundle/ruby/2.6.0/gems/http-4.4.1/lib/http/connection.rb:212:in `read_more'
from /fluentd/vendor/bundle/ruby/2.6.0/gems/http-4.4.1/lib/http/connection.rb:92:in `readpartial'
from /fluentd/vendor/bundle/ruby/2.6.0/gems/http-4.4.1/lib/http/response/body.rb:30:in `readpartial'
from /fluentd/vendor/bundle/ruby/2.6.0/gems/http-4.4.1/lib/http/response/body.rb:36:in `each'
from /fluentd/vendor/bundle/ruby/2.6.0/gems/kubeclient-4.9.1/lib/kubeclient/watch_stream.rb:25:in `each'
from /fluentd/vendor/bundle/ruby/2.6.0/gems/fluent-plugin-kubernetes_metadata_filter-2.3.0/lib/fluent/plugin/kubernetes_metadata_watch_namespaces.rb:36:in `start_namespace_watch'
from /fluentd/vendor/bundle/ruby/2.6.0/gems/fluent-plugin-kubernetes_metadata_filter-2.3.0/lib/fluent/plugin/filter_kubernetes_metadata.rb:265:in `block in configure'
/usr/local/lib/ruby/2.6.0/openssl/buffering.rb:125:in `sysread': Connection reset by peer (Errno::ECONNRESET)
from /usr/local/lib/ruby/2.6.0/openssl/buffering.rb:125:in `readpartial'
from /fluentd/vendor/bundle/ruby/2.6.0/gems/http-4.4.1/lib/http/timeout/null.rb:45:in `readpartial'
from /fluentd/vendor/bundle/ruby/2.6.0/gems/http-4.4.1/lib/http/connection.rb:212:in `read_more'
from /fluentd/vendor/bundle/ruby/2.6.0/gems/http-4.4.1/lib/http/connection.rb:92:in `readpartial'
from /fluentd/vendor/bundle/ruby/2.6.0/gems/http-4.4.1/lib/http/response/body.rb:30:in `readpartial'
I have tried both true and false for this flag:
- name: FLUENT_ELASTICSEARCH_SSL_VERIFY
value: "false"
and have had no luck.
I would appreciate any pointers as to what might be the issue.
You are passing a blank value in one of the environment variables. Update that.
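As a hedged sketch, the Elasticsearch-related variables in the daemonset should all carry non-empty values; the names below come from the fluentd-kubernetes-daemonset manifests, and the host value is illustrative:
env:
  - name: FLUENT_ELASTICSEARCH_HOST
    value: "elasticsearch.default.svc.cluster.local"
  - name: FLUENT_ELASTICSEARCH_PORT
    value: "9200"
  - name: FLUENT_ELASTICSEARCH_SCHEME
    value: "http"
  - name: FLUENT_ELASTICSEARCH_SSL_VERIFY
    value: "false"
A nil value here surfaces as the "no implicit conversion of nil into String (TypeError)" seen in the log above.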
There are zero error messages when bringing up the Fluentd Docker container, which makes this hard to debug.
Running curl http://elasticsearch:9200/_cat/indices from the Fluentd container shows indices, but does not show the fluentd index.
docker logs 7b
2018-06-29 13:56:41 +0000 [info]: reading config file path="/fluentd/etc/fluent.conf"
2018-06-29 13:56:41 +0000 [info]: starting fluentd-0.12.19
2018-06-29 13:56:41 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '1.4.0'
2018-06-29 13:56:41 +0000 [info]: gem 'fluent-plugin-rename-key' version '0.1.3'
2018-06-29 13:56:41 +0000 [info]: gem 'fluentd' version '0.12.19'
2018-06-29 13:56:41 +0000 [info]: gem 'fluentd' version '0.10.61'
2018-06-29 13:56:41 +0000 [info]: adding filter pattern="**" type="record_transformer"
2018-06-29 13:56:41 +0000 [info]: adding match pattern="docker.*" type="rename_key"
2018-06-29 13:56:41 +0000 [info]: Added rename key rule: rename_rule1 {:key_regexp=>/^log$/, :new_key=>"message"}
2018-06-29 13:56:41 +0000 [info]: adding match pattern="**" type="elasticsearch"
2018-06-29 13:56:41 +0000 [info]: adding source type="forward"
2018-06-29 13:56:41 +0000 [info]: adding source type="monitor_agent"
2018-06-29 13:56:41 +0000 [info]: using configuration file: <ROOT>
<source>
@type forward
</source>
<source>
@type monitor_agent
bind 0.0.0.0
port 24220
</source>
<filter **>
type record_transformer
<record>
node /
role app
environment dev
tenant xxx
tag ${tag}
</record>
</filter>
<match docker.*>
type rename_key
rename_rule1 ^log$ message
append_tag message
</match>
<match **>
type elasticsearch
host elasticsearch
port 9200
index_name fluentd
type_name fluentd
include_tag_key true
logstash_format true
</match>
</ROOT>
2018-06-29 13:56:41 +0000 [info]: listening fluent socket on 0.0.0.0:24224
...
2018-06-29 14:16:38 +0000 [info]: listening fluent socket on 0.0.0.0:24224
2018-06-29 14:20:56 +0000 [warn]: incoming chunk is broken: source="host: 172.18.42.1, addr: 172.18.42.1, port: 48704" msg=49
2018-06-29 14:20:56 +0000 [warn]: incoming chunk is broken: source="host: 172.18.42.1, addr: 172.18.42.1, port: 48704" msg=50
2018-06-29 14:20:56 +0000 [warn]: incoming chunk is broken: source="host: 172.18.42.1, addr: 172.18.42.1, port: 48704" msg=51
... many repeats
2018-07-01 06:21:52 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2018-07-01 08:39:07 +0000 error_class="MultiJson::ParseError" error="Yajl::ParseError" plugin_id="object:2ac58fef2200"
2018-07-01 06:21:52 +0000 [warn]: suppressed same stacktrace
2018-07-01 08:39:07 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2018-07-01 13:02:17 +0000 error_class="MultiJson::ParseError" error="Yajl::ParseError" plugin_id="object:2ac58fef2200"
2018-07-01 08:39:07 +0000 [warn]: suppressed same stacktrace
2018-07-01 13:02:17 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2018-07-01 21:04:48 +0000 error_class="MultiJson::ParseError" error="Yajl::ParseError" plugin_id="object:2ac58fef2200"
2018-07-01 13:02:17 +0000 [warn]: suppressed same stacktrace
2018-07-01 21:04:48 +0000 [warn]: failed to flush the buffer. error_class="MultiJson::ParseError" error="Yajl::ParseError" plugin_id="object:2ac58fef2200"
2018-07-01 21:04:48 +0000 [warn]: retry count exceededs limit.
2018-07-01 21:04:48 +0000 [warn]: suppressed same stacktrace
2018-07-01 21:04:48 +0000 [error]: throwing away old logs.
I am able to successfully insert data into a test index in Elasticsearch via curl. How do I troubleshoot where Fluentd fails?
I am unable to comment, so I am adding a couple of observations here.
The documentation says to use @type elasticsearch. Also, if both Elasticsearch and Fluentd are running as Docker containers, please make sure to run them on the same network so they can talk to each other (try IPs first, maybe).
Also, what does your Dockerfile look like, so we can pass verbosity flags to the fluentd command?
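For the networking point, a minimal sketch with a user-defined bridge network (names and image tags are illustrative; the Fluentd image is assumed to be a custom build with fluent-plugin-elasticsearch installed):
docker network create efk
docker run -d --name elasticsearch --network efk \
  -e discovery.type=single-node \
  docker.elastic.co/elasticsearch/elasticsearch:7.14.1
docker run -d --name fluentd --network efk \
  -p 24224:24224 \
  custom-fluentd-es:latest
With both containers on the same user-defined network, the hostname elasticsearch in the Fluentd config resolves to the Elasticsearch container by name.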
I successfully used this configuration for Fluentd + Elasticsearch:
<source>
@type forward
@label @mainstream
bind 0.0.0.0
port 24224
</source>
<label @mainstream>
<match **>
@type copy
<store>
@type elasticsearch
host elasticsearch
port 9200
logstash_format true
logstash_prefix fluentd
logstash_dateformat %Y%m%d
include_tag_key true
type_name access_log
tag_key @log_name
<buffer>
flush_mode interval
flush_interval 1s
retry_type exponential_backoff
flush_thread_count 2
retry_forever true
retry_max_interval 30
chunk_limit_size 2M
queue_limit_length 8
overflow_action block
</buffer>
</store>
</match>
</label>
For debugging you could use tcpdump:
sudo tcpdump -i eth0 tcp port 24224 -X -s 0 -nn
Note: I removed the leading slash from the first source tag.
According to the Fluentd documentation, you can use different log levels, i.e. different orders of verbosity:
https://docs.fluentd.org/deployment/logging
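For example, the verbosity can be raised globally in the system directive of the configuration:
<system>
  log_level debug
</system>
Passing -v (debug) or -vv (trace) on the td-agent/fluentd command line has the same effect.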