How can I debug why Fluentd is not sending data to Elasticsearch?

There are no error messages when bringing up the Fluentd docker container, which makes it hard to debug.
curl http://elasticsearch:9200/_cat/indices from the fluentd container shows indices, but does not show the fluentd index.
docker logs 7b
2018-06-29 13:56:41 +0000 [info]: reading config file path="/fluentd/etc/fluent.conf"
2018-06-29 13:56:41 +0000 [info]: starting fluentd-0.12.19
2018-06-29 13:56:41 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '1.4.0'
2018-06-29 13:56:41 +0000 [info]: gem 'fluent-plugin-rename-key' version '0.1.3'
2018-06-29 13:56:41 +0000 [info]: gem 'fluentd' version '0.12.19'
2018-06-29 13:56:41 +0000 [info]: gem 'fluentd' version '0.10.61'
2018-06-29 13:56:41 +0000 [info]: adding filter pattern="**" type="record_transformer"
2018-06-29 13:56:41 +0000 [info]: adding match pattern="docker.*" type="rename_key"
2018-06-29 13:56:41 +0000 [info]: Added rename key rule: rename_rule1 {:key_regexp=>/^log$/, :new_key=>"message"}
2018-06-29 13:56:41 +0000 [info]: adding match pattern="**" type="elasticsearch"
2018-06-29 13:56:41 +0000 [info]: adding source type="forward"
2018-06-29 13:56:41 +0000 [info]: adding source type="monitor_agent"
2018-06-29 13:56:41 +0000 [info]: using configuration file: <ROOT>
<source>
@type forward
</source>
<source>
@type monitor_agent
bind 0.0.0.0
port 24220
</source>
<filter **>
type record_transformer
<record>
node /
role app
environment dev
tenant xxx
tag ${tag}
</record>
</filter>
<match docker.*>
type rename_key
rename_rule1 ^log$ message
append_tag message
</match>
<match **>
type elasticsearch
host elasticsearch
port 9200
index_name fluentd
type_name fluentd
include_tag_key true
logstash_format true
</match>
</ROOT>
2018-06-29 13:56:41 +0000 [info]: listening fluent socket on 0.0.0.0:24224
...
2018-06-29 14:16:38 +0000 [info]: listening fluent socket on 0.0.0.0:24224
2018-06-29 14:20:56 +0000 [warn]: incoming chunk is broken: source="host: 172.18.42.1, addr: 172.18.42.1, port: 48704" msg=49
2018-06-29 14:20:56 +0000 [warn]: incoming chunk is broken: source="host: 172.18.42.1, addr: 172.18.42.1, port: 48704" msg=50
2018-06-29 14:20:56 +0000 [warn]: incoming chunk is broken: source="host: 172.18.42.1, addr: 172.18.42.1, port: 48704" msg=51
... many repeats
2018-07-01 06:21:52 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2018-07-01 08:39:07 +0000 error_class="MultiJson::ParseError" error="Yajl::ParseError" plugin_id="object:2ac58fef2200"
2018-07-01 06:21:52 +0000 [warn]: suppressed same stacktrace
2018-07-01 08:39:07 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2018-07-01 13:02:17 +0000 error_class="MultiJson::ParseError" error="Yajl::ParseError" plugin_id="object:2ac58fef2200"
2018-07-01 08:39:07 +0000 [warn]: suppressed same stacktrace
2018-07-01 13:02:17 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2018-07-01 21:04:48 +0000 error_class="MultiJson::ParseError" error="Yajl::ParseError" plugin_id="object:2ac58fef2200"
2018-07-01 13:02:17 +0000 [warn]: suppressed same stacktrace
2018-07-01 21:04:48 +0000 [warn]: failed to flush the buffer. error_class="MultiJson::ParseError" error="Yajl::ParseError" plugin_id="object:2ac58fef2200"
2018-07-01 21:04:48 +0000 [warn]: retry count exceededs limit.
2018-07-01 21:04:48 +0000 [warn]: suppressed same stacktrace
2018-07-01 21:04:48 +0000 [error]: throwing away old logs.
I am able to successfully insert data into a test index in Elasticsearch with curl. How do I troubleshoot where Fluentd fails?
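Since the config above already enables monitor_agent on port 24220, one quick way to see which plugin the events are stuck in is to query it from inside the container; a minimal sketch (the container name is a placeholder):
# ask Fluentd which plugin is buffering/retrying; monitor_agent listens on 24220 per the config above
docker exec -it <fluentd-container> curl -s http://localhost:24220/api/plugins.json
# look at retry_count, buffer_queue_length and buffer_total_queued_size for the
# elasticsearch output; a growing retry_count lines up with the MultiJson::ParseError warnings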

I am unable to comment, so I am adding a couple of observations here.
The documentation says to use @type elasticsearch. Also, if both Elasticsearch and Fluentd are running as Docker containers, please make sure to run them on a proper network so they can talk to each other (try IPs first, maybe).
Also, what does your Dockerfile look like, so we can pass verbosity to the fluentd command?
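For the network check, a rough sketch (assuming the containers are literally named fluentd and elasticsearch) is to compare which Docker networks each container is attached to:
# both containers must share a user-defined network for the hostname "elasticsearch"
# to resolve from the fluentd container
docker inspect -f '{{range $k, $v := .NetworkSettings.Networks}}{{$k}} {{end}}' fluentd
docker inspect -f '{{range $k, $v := .NetworkSettings.Networks}}{{$k}} {{end}}' elasticsearch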

I successfully used this configuration for fluentd + elasticsearch:
<source>
@type forward
@label @mainstream
bind 0.0.0.0
port 24224
</source>
<label @mainstream>
<match **>
@type copy
<store>
@type elasticsearch
host elasticsearch
port 9200
logstash_format true
logstash_prefix fluentd
logstash_dateformat %Y%m%d
include_tag_key true
type_name access_log
tag_key @log_name
<buffer>
flush_mode interval
flush_interval 1s
retry_type exponential_backoff
flush_thread_count 2
retry_forever true
retry_max_interval 30
chunk_limit_size 2M
queue_limit_length 8
overflow_action block
</buffer>
</store>
</match>
</label>
For debugging you could use tcpdump:
sudo tcpdump -i eth0 tcp port 24224 -X -s 0 -nn
Note: I removed the leading slash from the first source tag.
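To test the pipeline independently of the Docker log driver, you can also inject a hand-crafted event into the forward input with fluent-cat (bundled with fluentd); a sketch, assuming you run it where the forward port 24224 is reachable (it defaults to localhost:24224):
# send a well-formed JSON event tagged docker.test to the forward input
echo '{"log":"hello from fluent-cat"}' | fluent-cat docker.test
# if this event reaches Elasticsearch while real traffic does not, the problem is upstream
# of fluentd, e.g. whatever sender is producing the "incoming chunk is broken" warnings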

According to the fluentd documentation you can use different log levels, i.e. different degrees of verbosity:
https://docs.fluentd.org/deployment/logging
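A concrete way to apply that in the dockerized setup above is to pass -v (debug) or -vv (trace) to the fluentd command; how you pass it (Dockerfile CMD, entrypoint, env var) depends on your image:
fluentd -c /fluentd/etc/fluent.conf -v     # debug level
fluentd -c /fluentd/etc/fluent.conf -vv    # trace level
# newer Fluentd versions also accept a global setting in the config file:
# <system>
#   log_level debug
# </system>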

Related

Error Elasticsearch client not compatible with Elasticsearch server?

I have installed td-agent and I am trying to upload data to Elasticsearch. Below is the td-agent.conf file:
<source>
@type tail
path /home/rocket/PycharmProjects/EFK/log.json
pos_file /home/rocket/PycharmProjects/EFK/log.json.pos
format json
time_format %Y-%m-%d %H:%M:%S
tag log
</source>
<match *log*>
@type elasticsearch
host 35.171.30.19
port 9200
user elastic
password XXXXXX
index_name test
</match>
Below is the error I am getting:
2023-01-30 14:13:47 +0000 [info]: starting fluentd-1.15.3 pid=5105 ruby="2.7.6"
2023-01-30 14:13:47 +0000 [info]: spawn command to main: cmdline=["/opt/td-agent/bin/ruby", "-Eascii-8bit:ascii-8bit", "/opt/td-agent/bin/fluentd", "--log", "/var/log/td-agent/td-agent.log", "--daemon", "/var/run/td-agent/td-agent.pid", "--under-supervisor"]
2023-01-30 14:13:47 +0000 [info]: init supervisor logger path=nil rotate_age=nil rotate_size=nil
2023-01-30 14:13:48 +0000 [info]: #0 init worker0 logger path=nil rotate_age=nil rotate_size=nil
2023-01-30 14:13:48 +0000 [info]: adding match pattern="*log*" type="elasticsearch"
2023-01-30 14:13:48 +0000 [error]: #0 config error file="/etc/td-agent/td-agent.conf" error_class=Fluent::ConfigError error="Using Elasticsearch client 8.4.0 is not compatible for your Elasticsearch server. Please check your using elasticsearch gem version and Elasticsearch server."
2023-01-30 14:13:48 +0000 [error]: Worker 0 finished unexpectedly with status 2
2023-01-30 14:13:48 +0000 [info]: Received graceful stop
2023-01-30 14:13:49 +0000 [info]: init supervisor logger path=nil rotate_age=nil rotate_size=nil
2023-01-30 14:13:49 +0000 [info]: parsing config file is succeeded path="/etc/td-agent/td-agent.conf"
2023-01-30 14:13:49 +0000 [info]: gem 'fluent-plugin-calyptia-monitoring' version '0.1.3'
2023-01-30 14:13:49 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '5.2.4'
2023-01-30 14:13:49 +0000 [info]: gem 'fluent-plugin-flowcounter-simple' version '0.1.0'
2023-01-30 14:13:49 +0000 [info]: gem 'fluent-plugin-kafka' version '0.18.1'
2023-01-30 14:13:49 +0000 [info]: gem 'fluent-plugin-metrics-cmetrics' version '0.1.2'
2023-01-30 14:13:49 +0000 [info]: gem 'fluent-plugin-opensearch' version '1.0.8'
2023-01-30 14:13:49 +0000 [info]: gem 'fluent-plugin-prometheus' version '2.0.3'
2023-01-30 14:13:49 +0000 [info]: gem 'fluent-plugin-prometheus_pushgateway' version '0.1.0'
2023-01-30 14:13:49 +0000 [info]: gem 'fluent-plugin-record-modifier' version '2.1.1'
2023-01-30 14:13:49 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '2.4.0'
2023-01-30 14:13:49 +0000 [info]: gem 'fluent-plugin-s3' version '1.7.2'
2023-01-30 14:13:49 +0000 [info]: gem 'fluent-plugin-sd-dns' version '0.1.0'
2023-01-30 14:13:49 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.5'
2023-01-30 14:13:49 +0000 [info]: gem 'fluent-plugin-td' version '1.2.0'
2023-01-30 14:13:49 +0000 [info]: gem 'fluent-plugin-utmpx' version '0.5.0'
2023-01-30 14:13:49 +0000 [info]: gem 'fluent-plugin-webhdfs' version '1.5.0'
2023-01-30 14:13:49 +0000 [info]: gem 'fluentd' version '1.15.3'
2023-01-30 14:13:49 +0000 [info]: using configuration file: <ROOT>
<source>
@type tail
path "/home/rocket/PycharmProjects/EFK/log.json"
pos_file "/home/rocket/PycharmProjects/EFK/log.json.pos"
format json
time_format %Y-%m-%d %H:%M:%S
tag "log"
<parse>
time_format %Y-%m-%d %H:%M:%S
@type json
unmatched_lines
time_type string
</parse>
</source>
<match *log*>
@type elasticsearch
host "35.179.40.29"
port 9200
user "elastic"
password xxxxxx
index_name "test"
</match>
</ROOT>
2023-01-30 14:13:49 +0000 [info]: starting fluentd-1.15.3 pid=5116 ruby="2.7.6"
2023-01-30 14:13:49 +0000 [info]: spawn command to main: cmdline=["/opt/td-agent/bin/ruby", "-Eascii-8bit:ascii-8bit", "/opt/td-agent/bin/fluentd", "--log", "/var/log/td-agent/td-agent.log", "--daemon", "/var/run/td-agent/td-agent.pid", "--under-supervisor"]
2023-01-30 14:13:49 +0000 [info]: init supervisor logger path=nil rotate_age=nil rotate_size=nil
2023-01-30 14:13:49 +0000 [info]: #0 init worker0 logger path=nil rotate_age=nil rotate_size=nil
2023-01-30 14:13:49 +0000 [info]: adding match pattern="*log*" type="elasticsearch"
2023-01-30 14:13:50 +0000 [error]: #0 config error file="/etc/td-agent/td-agent.conf" error_class=Fluent::ConfigError error="Using Elasticsearch client 8.4.0 is not compatible for your Elasticsearch server. Please check your using elasticsearch gem version and Elasticsearch server."
2023-01-30 14:13:50 +0000 [error]: Worker 0 finished unexpectedly with status 2
2023-01-30 14:13:50 +0000 [info]: Received graceful stop
So the error says error_class=Fluent::ConfigError error="Using Elasticsearch client 8.4.0 is not compatible for your Elasticsearch server. Please check your using elasticsearch gem version and Elasticsearch server."
So it is a compatibility issue between the Elasticsearch plugin/client version and the Elasticsearch server version, but I am unable to find which versions are compatible and how to install the right one.
Below is how I have installed td-agent on Ubuntu 18.04.
curl -fsSL https://toolbelt.treasuredata.com/sh/install-ubuntu-bionic-td-agent4.sh | sh
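The plugin picks its transport from the installed elasticsearch client gem, so the first thing to compare is the server version against the gems bundled with td-agent; a rough sketch of checking and pinning them (the 7.17.7 version is only an example, match your server's major version):
# check the Elasticsearch server version
curl -s http://35.171.30.19:9200 -u elastic:XXXXXX
# list the elasticsearch client gems bundled with td-agent
sudo td-agent-gem list | grep -i elasticsearch
# remove the incompatible 8.x client and install one matching the server's major version
sudo td-agent-gem uninstall elasticsearch elasticsearch-api elasticsearch-transport
sudo td-agent-gem install elasticsearch -v 7.17.7
sudo systemctl restart td-agent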

FluentD unable to establish connection to ElasticSearch

I am trying to set up FluentD + ECK on my Kubernetes cluster.
But FluentD is failing to establish a connection with Elasticsearch, which is behind SSL.
Error log
2022-10-12 04:55:27 +0000 [info]: adding match in @OUTPUT pattern="**" type="elasticsearch"
2022-10-12 04:55:29 +0000 [warn]: #0 Could not communicate to Elasticsearch, resetting connection and trying again. EOFError (EOFError)
2022-10-12 04:55:29 +0000 [warn]: #0 Remaining retry: 14. Retry to communicate after 2 second(s).
2022-10-12 04:55:33 +0000 [warn]: #0 Could not communicate to Elasticsearch, resetting connection and trying again. EOFError (EOFError)
2022-10-12 04:55:33 +0000 [warn]: #0 Remaining retry: 13. Retry to communicate after 4 second(s).
FluentD output Conf
<label @OUTPUT>
<match **>
@type elasticsearch
host elasticsearch-es-http
port 9200
path ""
user elastic
password XXXXXXXXX
ca_path "/etc/ssl/certs/ca.crt"
</match>
</label>
I mounted the ElasticSearch secret below as a cert on fluentd (the volume and its mount):
- name: elasticsearch-es-http-certs-public
  secret:
    secretName: elasticsearch-es-http-certs-public
- name: elasticsearch-es-http-certs-public
  mountPath: "/etc/ssl/certs"
elasticsearch-es-http is the ElasticSearch Service name and the PODs are up and running.
Please guide me on where I went wrong.
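An EOFError on every retry often means the client is speaking plain HTTP to an HTTPS port. Before changing the fluentd config it is worth verifying TLS from inside the fluentd pod with the mounted CA, roughly like this (pod name, namespace and password are placeholders):
# exec into the fluentd pod and test the ECK endpoint over HTTPS with the mounted CA
kubectl exec -it <fluentd-pod> -n <namespace> -- \
  curl --cacert /etc/ssl/certs/ca.crt -u "elastic:<password>" https://elasticsearch-es-http:9200
# if this succeeds while fluentd keeps failing, the <match> block likely needs
# "scheme https" (plus the TLS options of fluent-plugin-elasticsearch)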

fluentd can't connect to elasticsearch in cluster

I tried to set up an EFK stack. While Elasticsearch and Kibana work fine in the default namespace, the Fluentd container can't connect to Elasticsearch.
kubectl get services -n default
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch-master ClusterIP 10.43.40.136 <none> 9200/TCP,9300/TCP 92m
elasticsearch-master-headless ClusterIP None <none> 9200/TCP,9300/TCP 92m
kibana-kibana ClusterIP 10.43.152.189 <none> 5601/TCP 74m
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 14d
I've installed fluentd from this repo and changed the URL to elasticsearch:
https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/fluentd-daemonset-elasticsearch-rbac.yaml
kubectl -n kube-system get pods | grep fluentd
fluentd-4fd2s 1/1 Running 0 51m
fluentd-7t2v5 1/1 Running 0 49m
fluentd-dfnfg 1/1 Running 0 50m
fluentd-lvrsv 1/1 Running 0 48m
fluentd-rv4td 1/1 Running 0 50m
but the log is telling me:
2021-07-23 21:38:59 +0000 [info]: starting fluentd-1.13.2 pid=7 ruby="2.6.8"
2021-07-23 21:38:59 +0000 [info]: spawn command to main: cmdline=["/usr/local/bin/ruby", "-Eascii-8bit:ascii-8bit", "/fluentd/vendor/bundle/ruby/2.6.0/bin/fluentd", "-c", "/fluentd/etc/fluent.conf", "-p", "/fluentd/plugins", "--gemfile", "/fluentd/Gemfile", "-r", "/fluentd/vendor/bundle/ruby/2.6.0/gems/fluent-plugin-elasticsearch-5.0.5/lib/fluent/plugin/elasticsearch_simple_sniffer.rb", "--under-supervisor"]
2021-07-23 21:39:01 +0000 [info]: adding match in @FLUENT_LOG pattern="fluent.**" type="null"
2021-07-23 21:39:01 +0000 [info]: adding filter pattern="kubernetes.**" type="kubernetes_metadata"
2021-07-23 21:39:01 +0000 [warn]: #0 [filter_kube_metadata] !! The environment variable 'K8S_NODE_NAME' is not set to the node name which can affect the API server and watch efficiency !!
2021-07-23 21:39:01 +0000 [info]: adding match pattern="**" type="elasticsearch"
2021-07-23 21:39:09 +0000 [warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. connect_write timeout reached
2021-07-23 21:39:09 +0000 [warn]: #0 [out_es] Remaining retry: 14. Retry to communicate after 2 second(s).
2021-07-23 21:39:18 +0000 [warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. connect_write timeout reached
2021-07-23 21:39:18 +0000 [warn]: #0 [out_es] Remaining retry: 13. Retry to communicate after 4 second(s).
2021-07-23 21:39:31 +0000 [warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. connect_write timeout reached
2021-07-23 21:39:31 +0000 [warn]: #0 [out_es] Remaining retry: 12. Retry to communicate after 8 second(s).
2021-07-23 21:39:52 +0000 [warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. connect_write timeout reached
2021-07-23 21:39:52 +0000 [warn]: #0 [out_es] Remaining retry: 11. Retry to communicate after 16 second(s).
2021-07-23 21:40:29 +0000 [warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. connect_write timeout reached
2021-07-23 21:40:29 +0000 [warn]: #0 [out_es] Remaining retry: 10. Retry to communicate after 32 second(s).
2021-07-23 21:41:38 +0000 [warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. connect_write timeout reached
I installed dig and the service name resolves:
root@fluentd-dfnfg:/home/fluent# nslookup elasticsearch-master.default.svc.cluster.local
Server: 10.43.0.10
Address: 10.43.0.10#53
Name: elasticsearch-master.default.svc.cluster.local
Address: 10.43.40.136
I'm out of ideas.
PS: I'm using a hardened RKE2 (https://github.com/rancherfederal/rke2-ansible).
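Since DNS resolves but connect_write times out, the next thing to check is plain TCP/HTTP reachability from one of the fluentd pods to the service across namespaces; a sketch (pod name taken from the listing above, curl being present in the image is an assumption):
# test connectivity from a fluentd pod in kube-system to the ES service in default
kubectl -n kube-system exec -it fluentd-4fd2s -- \
  curl -sv --max-time 5 http://elasticsearch-master.default.svc.cluster.local:9200
# a timeout here despite working DNS points at a NetworkPolicy / CNI / firewall rule
# on the hardened cluster rather than at fluentd itself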

Fluentd is not filtering as intended before writing to Elasticsearch

Using:
Elasticsearch 7.5.1.
Fluentd 1.11.2
Fluent-plugin-elasticsearch 4.1.3
Springboot 2.3.3
I have a Springboot artifact with Logback configured with an appender that, in addition to the app STDOUT, sends logs to Fluentd:
<appender name="FLUENT_TEXT"
class="ch.qos.logback.more.appenders.DataFluentAppender">
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<level>INFO</level>
</filter>
<tag>myapp</tag>
<label>myservicename</label>
<remoteHost>fluentdservicename</remoteHost>
<port>24224</port>
<useEventTime>false</useEventTime>
</appender>
Fluentd config file looks like this:
<ROOT>
<source>
@type forward
port 24224
bind "0.0.0.0"
</source>
<filter myapp.**>
@type parser
key_name "message"
reserve_data true
remove_key_name_field false
<parse>
#type "json"
</parse>
</filter>
<match myapp.**>
@type copy
<store>
#type "elasticsearch"
host "elasticdb"
port 9200
logstash_format true
logstash_prefix "applogs"
logstash_dateformat "%Y%m%d"
include_tag_key true
type_name "app_log"
tag_key "#log_name"
flush_interval 1s
user "elastic"
password xxxxxx
<buffer>
flush_interval 1s
</buffer>
</store>
<store>
#type "stdout"
</store>
</match>
</ROOT>
So it just adds a filter to parse the information (a JSON string) into a structured record and then writes it to Elasticsearch (as well as to Fluentd's STDOUT). Note how I use the myapp.** pattern so it matches in both the filter and the match blocks.
Everything is up and running properly in Openshift. Springboot sends the logs properly to Fluentd, and Fluentd writes them to Elasticsearch.
But the problem is that every log generated by the app is also written. This means that every INFO log with, for example, the initial Spring configuration or any other information that the app sends through Logback is also written.
Example of "wanted" log:
2020-11-04 06:33:42.312840352 +0000 myapp.myservice: {"traceId":"bf8195d9-16dd-4e58-a0aa-413d89a1eca9","spanId":"f597f7ffbe722fa7","spanExportable":"false","X-Span-Export":"false","level":"INFO","X-B3-SpanId":"f597f7ffbe722fa7","idOrq":"bf8195d9-16dd-4e58-a0aa-413d89a1eca9","logger":"es.organization.project.myapp.commons.services.impl.LoggerServiceImpl","X-B3-TraceId":"f597f7ffbe722fa7","thread":"http-nio-8085-exec-1","message":"{\"traceId\":\"bf8195d9-16dd-4e58-a0aa-413d89a1eca9\",\"inout\":\"IN\",\"startTime\":1604471622281,\"finishTime\":null,\"executionTime\":null,\"entrySize\":5494.0,\"exitSize\":null,\"differenceSize\":null,\"user\":\"pmmartin\",\"methodPath\":\"Method Path\",\"errorMessage\":null,\"className\":\"CamelOrchestrator\",\"methodName\":\"preauthorization_validate\"}","idOp":"","inout":"IN","startTime":1604471622281,"finishTime":null,"executionTime":null,"entrySize":5494.0,"exitSize":null,"differenceSize":null,"user":"pmmartin","methodPath":"Method Path","errorMessage":null,"className":"CamelOrchestrator","methodName":"preauthorization_validate"}
Example of "unwanted" logs (check how there is a Fluentd warning per each unexpected log message):
2020-11-04 06:55:09.000000000 +0000 myapp.myservice: {"level":"INFO","logger":"org.apache.camel.impl.engine.InternalRouteStartupManager","thread":"restartedMain","message":"Route: route6 started and consuming from: servlet:/preAuth"}
2020-11-04 06:55:09 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::Parser::ParserError error="pattern not matched with data 'Total 20 routes, of which 20 are started'" location=nil tag="myapp.myservice" time=1604472909 record={"level"=>"INFO", "logger"=>"org.apache.camel.impl.engine.AbstractCamelContext", "thread"=>"restartedMain", "message"=>"Total 20 routes, of which 20 are started"}
2020-11-04 06:55:09.000000000 +0000 myapp.myservice: {"level":"INFO","logger":"org.apache.camel.impl.engine.AbstractCamelContext","thread":"restartedMain","message":"Total 20 routes, of which 20 are started"}
2020-11-04 06:55:09 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::Parser::ParserError error="pattern not matched with data 'Apache Camel 3.5.0 (MyService DEMO Mode) started in 0.036 seconds'" location=nil tag="myapp.myservice" time=1604472909 record={"level"=>"INFO", "logger"=>"org.apache.camel.impl.engine.AbstractCamelContext", "thread"=>"restartedMain", "message"=>"Apache Camel 3.5.0 (MyService DEMO Mode) started in 0.036 seconds"}
2020-11-04 06:55:09.000000000 +0000 myapp.myservice: {"level":"INFO","logger":"org.apache.camel.impl.engine.AbstractCamelContext","thread":"restartedMain","message":"Apache Camel 3.5.0 (MyService DEMO Mode) started in 0.036 seconds"}
2020-11-04 06:55:09 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::Parser::ParserError error="pattern not matched with data 'Started MyServiceApplication in 15.446 seconds (JVM running for 346.061)'" location=nil tag="myapp.myservice" time=1604472909 record={"level"=>"INFO", "logger"=>"es.organization.project.myapp.MyService", "thread"=>"restartedMain", "message"=>"Started MyService in 15.446 seconds (JVM running for 346.061)"}
The question is: What and how do I tell Fluentd to really filter the info that gets to it so the unwanted info gets discarded?
Thanks to @Azeem, and according to the grep and regexp features documentation, I got it :).
I just added this to my Fluentd config file:
<filter onpay.**>
@type grep
<regexp>
key message
pattern /^.*inout.*$/
</regexp>
</filter>
Any line that does not contain the word "inout" is now excluded.
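If you want to sanity-check the filter without redeploying the app, you can feed two hand-made events into the forward input with fluent-cat and confirm that only the one containing "inout" is indexed; a rough sketch, assuming port 24224 is reachable and using whatever tag your filter actually matches:
# kept by the grep filter (message contains "inout") and indexed
echo '{"message":"{\"inout\":\"IN\",\"user\":\"test\"}"}' | fluent-cat myapp.myservice
# dropped by the grep filter (no "inout" in message)
echo '{"message":"Route: route6 started and consuming from: servlet:/preAuth"}' | fluent-cat myapp.myservice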

stack elasticsearch + fluentd

I am setting up fluentd and elasticsearch on a local VM in order to try the fluentd and ES stack.
OS: centos (recent)
[root@localhost data]# cat /etc/redhat-release
CentOS release 6.5 (Final)
I have elasticsearch up and running on localhost (I used it with logstash with no issue):
[root@localhost data]# curl -X GET http://localhost:9200/
{
"status" : 200,
"name" : "Simon Williams",
"version" : {
"number" : "1.2.1",
"build_hash" : "6c95b759f9e7ef0f8e17f77d850da43ce8a4b364",
"build_timestamp" : "2014-06-03T15:02:52Z",
"build_snapshot" : false,
"lucene_version" : "4.8"
},
"tagline" : "You Know, for Search"
}
I have installed td-agent following the installation notes from the fluentd website.
I am using this configuration file:
<source>
type tail
path /tmp/data/log
pos_file /tmp/data/log.pos
format /^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?/
time_format %d/%b/%Y:%H:%M:%S %z
tag front.nginx.access
</source>
<match front.nginx.access>
type elasticsearch
host localhost
port 9200
index_name fluentd
type_name nginx
include_tag_key
# buffering
buffer_type file
buffer_path /tmp/fluentd/buffer/
flush_interval 10s
buffer_chunk_limit 16m
buffer_queue_limit 4096
retry_wait 15s
</match>
Here is the start-up log:
2014-07-24 13:39:58 +0200 [info]: starting fluentd-0.10.50
2014-07-24 13:39:58 +0200 [info]: reading config file path="/etc/td-agent/td-agent.conf"
2014-07-24 13:39:58 +0200 [info]: gem 'fluent-mixin-config-placeholders' version '0.2.4'
2014-07-24 13:39:58 +0200 [info]: gem 'fluent-mixin-plaintextformatter' version '0.2.6'
2014-07-24 13:39:58 +0200 [info]: gem 'fluent-plugin-elasticsearch' version '0.3.1'
2014-07-24 13:39:58 +0200 [info]: gem 'fluent-plugin-flume' version '0.1.1'
2014-07-24 13:39:58 +0200 [info]: gem 'fluent-plugin-mongo' version '0.7.3'
2014-07-24 13:39:58 +0200 [info]: gem 'fluent-plugin-parser' version '0.3.4'
2014-07-24 13:39:58 +0200 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '1.4.1'
2014-07-24 13:39:58 +0200 [info]: gem 'fluent-plugin-s3' version '0.4.0'
2014-07-24 13:39:58 +0200 [info]: gem 'fluent-plugin-scribe' version '0.10.10'
2014-07-24 13:39:58 +0200 [info]: gem 'fluent-plugin-td' version '0.10.20'
2014-07-24 13:39:58 +0200 [info]: gem 'fluent-plugin-td-monitoring' version '0.1.2'
2014-07-24 13:39:58 +0200 [info]: gem 'fluent-plugin-webhdfs' version '0.2.2'
2014-07-24 13:39:58 +0200 [info]: gem 'fluentd' version '0.10.50'
2014-07-24 13:39:58 +0200 [info]: using configuration file: <ROOT>
<source>
type tail
path /tmp/data/log
pos_file /tmp/data/log.pos
format /^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?/
time_format %d/%b/%Y:%H:%M:%S %z
tag front.nginx.access
</source>
<match front.nginx.access>
type elasticsearch
host localhost
port 9200
index_name fluentd
type_name nginx
include_tag_key
buffer_type file
buffer_path /tmp/fluentd/buffer/
flush_interval 10s
buffer_chunk_limit 16m
buffer_queue_limit 4096
retry_wait 15s
</match>
</ROOT>
2014-07-24 13:39:58 +0200 [info]: adding source type="tail"
2014-07-24 13:39:58 +0200 [info]: adding match pattern="front.nginx.access" type="elasticsearch"
2014-07-24 13:39:58 +0200 [info]: following tail of /tmp/data/log
I get this error:
2014-07-24 13:40:00 +0200 [warn]: temporarily failed to flush the buffer. next_retry=2014-07-24 13:40:13 +0200 error_class="Elasticsearch::Transport::Transport::Errors::ServiceUnavailable" error="[503] " instance=70247139359260
2014-07-24 13:40:00 +0200 [warn]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/elasticsearch-transport-0.4.11/lib/elasticsearch/transport/transport/base.rb:132:in `__raise_transport_error'
2014-07-24 13:40:00 +0200 [warn]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/elasticsearch-transport-0.4.11/lib/elasticsearch/transport/transport/base.rb:227:in `perform_request'
2014-07-24 13:40:00 +0200 [warn]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/elasticsearch-transport-0.4.11/lib/elasticsearch/transport/transport/http/faraday.rb:20:in `perform_request'
2014-07-24 13:40:00 +0200 [warn]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/elasticsearch-transport-0.4.11/lib/elasticsearch/transport/client.rb:92:in `perform_request'
2014-07-24 13:40:00 +0200 [warn]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/elasticsearch-api-0.4.11/lib/elasticsearch/api/actions/ping.rb:19:in `ping'
2014-07-24 13:40:00 +0200 [warn]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/fluent-plugin-elasticsearch-0.3.1/lib/fluent/plugin/out_elasticsearch.rb:46:in `client'
2014-07-24 13:40:00 +0200 [warn]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/fluent-plugin-elasticsearch-0.3.1/lib/fluent/plugin/out_elasticsearch.rb:103:in `send'
2014-07-24 13:40:00 +0200 [warn]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/fluent-plugin-elasticsearch-0.3.1/lib/fluent/plugin/out_elasticsearch.rb:98:in `write'
2014-07-24 13:40:00 +0200 [warn]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/fluentd-0.10.50/lib/fluent/buffer.rb:296:in `write_chunk'
2014-07-24 13:40:00 +0200 [warn]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/fluentd-0.10.50/lib/fluent/buffer.rb:276:in `pop'
2014-07-24 13:40:00 +0200 [warn]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/fluentd-0.10.50/lib/fluent/output.rb:310:in `try_flush'
2014-07-24 13:40:00 +0200 [warn]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/fluentd-0.10.50/lib/fluent/output.rb:132:in `run'
Running tcpdump on port 9200, I get nothing:
tcpdump -x -X -i any 'port 9200'
I've found the problem.
Actually, I had not modified the default cluster name in ES.
Another ES cluster existed on the same network.
The clients used in that cluster were sending packets to my ES cluster with an ancient protocol.
I have corrected all issues by changing the ES cluster name.
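For reference, the check and the fix look roughly like this on a package-based install (the config path is the usual default and an assumption here):
# see which cluster the local node actually joined
curl -s 'http://localhost:9200/_cluster/health?pretty'
# give the test cluster a unique name so other nodes/clients on the LAN leave it alone
sudo sed -i 's/^#\?cluster.name:.*/cluster.name: my-efk-test/' /etc/elasticsearch/elasticsearch.yml
sudo service elasticsearch restart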
