I'm trying to connect Fluentd with Elasticsearch, and I'm getting this error when I start the td-agent service.
td-agent.log:
Could not communicate to Elasticsearch, resetting connection and trying again. Connection refused - connect(2) for 127.0.0.1:9092 (Errno::ECONNREFUSED)
td-agent.conf
<match docker.*>
@type elasticsearch
host localhost
port 9092
logstash_format true
</match>
My Elasticsearch is running, because I can reach it at http://localhost:9200/, and the Fluentd Elasticsearch plugin is installed:
2020-05-21 12:57:55 -0300 [info]: gem 'fluent-plugin-elasticsearch' version '4.0.8'
If you can access Elasticsearch at localhost:9200, then the .conf should look like this:
<match docker.*>
@type elasticsearch
host localhost
port 9200
logstash_format true
</match>
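After changing the port, restart td-agent so it picks up the new config, and confirm Elasticsearch answers on that port from the same host. Assuming a systemd-based install, something like:
sudo systemctl restart td-agent
curl http://localhost:9200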
Related
I want to send HAProxy logs to fluentd/elasticsearch/kibana using td-agent, but I can't get it working.
I have installed EFK with Docker and it runs correctly.
I have an HAProxy log line of type haproxy.tcp like this:
haproxy[27508]: info 127.0.0.1:45111 [12/Jul/2012:15:19:03.258] wss-relay wss-relay/local02_9876 0/0/50015 1277 cD 1/0/0/0/0 0/0
My td-agent.conf is this:
<source>
@type tail
path /var/log/haproxy.log
format /^(?<ps>\w+)\[(?<pid>\d+)\]: (?<pri>\w+) (?<c_ip>[\w\.]+):(?<c_port>\d+) \[(?<time>.+)\] (?<f_end>[\w-]+) (?<b_end>[\w-]+)\/(?<b_server>[\w-]+) (?<tw>\d+)\/(?<tc>\d+)\/(?<tt>\d+) (?<bytes>\d+) (?<t_state>[\w-]+) (?<actconn>\d+)\/(?<feconn>\d+)\/(?<beconn>\d+)\/(?<srv_conn>\d+)\/(?<retries>\d+) (?<srv_queue>\d+)\/(?<backend_queue>\d+)$/
tag haproxy.tcp
time_format %d/%B/%Y:%H:%M:%S
</source>
<match haproxy.tcp>
@type forward
<server>
host dockerdes01
port 24224
</server>
</match>
But the logs don't arrive at /var/log/td-agent/td-agent.log.
If I use this:
<match haproxy.tcp>
@type copy
<store>
@type stdout
</store>
<store>
@type elasticsearch
logstash_format true
flush_interval 10s # for testing.
host dockerdes01
port 9200
</store>
</match>
I see this in my /var/log/td-agent/td-agent.log:
2012-07-12 15:19:03.000000000 +0200 haproxy.tcp: {"ps":"haproxy","pid":"27508","pri":"info","c_ip":"127.0.0.1","c_port":"45111","f_end":"wss-relay","b_end":"wss-relay","b_server":"local02_9876","tw":"0","tc":"0","tt":"50015","bytes":"1277","t_state":"cD","actconn":"1","feconn":"0","beconn":"0","srv_conn":"0","retries":"0","srv_queue":"0","backend_queue":"0"}
but it doesn't arrive at Fluentd...
I need the logs to arrive at Fluentd.
It's better to set up a syslog source in Fluentd and just send the logs from HAProxy via syslog.
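A minimal sketch of that approach, assuming td-agent and HAProxy are on the same host and port 5140 is free (both the port and the tag are just examples): add a syslog source to td-agent.conf
<source>
@type syslog
port 5140
bind 0.0.0.0
tag haproxy
</source>
and point HAProxy at it in haproxy.cfg with something like log 127.0.0.1:5140 local0, then match on haproxy.** instead of haproxy.tcp.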
I have installed an EFK stack to log nginx access logs.
With a fresh install I'm able to send data from Fluentd to Elasticsearch without any problem. However, I then installed Search Guard to implement authentication on Elasticsearch and Kibana. Now I'm able to log in to Kibana and Elasticsearch with Search Guard's demo user credentials.
My problem is that Fluentd is now unable to connect to Elasticsearch. From the td-agent log I'm getting the following messages:
2018-07-19 15:20:34 +0600 [warn]: #0 failed to flush the buffer. retry_time=5 next_retry_seconds=2018-07-19 15:20:34 +0600 chunk="57156af05dd7bbc43d0b1323fddb2cd0" error_class=Fluent::Plugin::ElasticsearchOutput::ConnectionFailure error="Can not reach Elasticsearch cluster ({:host=>\"<elasticsearch-ip>\", :port=>9200, :scheme=>\"http\", :user=>\"logstash\", :password=>\"obfuscated\"})!"
Here is my Fluentd config
<source>
@type forward
</source>
<match user_count.**>
@type copy
<store>
@type elasticsearch
host https://<elasticsearch-ip>
port 9200
ssl_verify false
scheme https
user "logstash"
password "<logstash-password>"
index_name "custom_user_count"
include_tag_key true
tag_key "custom_user_count"
logstash_format true
logstash_prefix "custom_user_count"
type_name "custom_user_count"
utc_index false
<buffer>
flush_interval 2s
</buffer>
</store>
</match>
sg_roles.yml:
sg_logstash:
  cluster:
    - CLUSTER_MONITOR
    - CLUSTER_COMPOSITE_OPS
    - indices:admin/template/get
    - indices:admin/template/put
  indices:
    'custom*':
      '*':
        - CRUD
        - CREATE_INDEX
    'logstash-*':
      '*':
        - CRUD
        - CREATE_INDEX
    '*beat*':
      '*':
        - CRUD
        - CREATE_INDEX
Can anyone help me on this?
It seems td-agent was using TLSv1 as the default.
I added ssl_version TLSv1_2 to the config and now it's working.
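For reference, the fix is just one extra line inside the <store> block above, next to the existing TLS options (everything else unchanged):
scheme https
ssl_verify false
ssl_version TLSv1_2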
I have a Fluentd container that, after a week of working normally, stops forwarding logs to Elasticsearch.
If I run 'docker logs' on that container, it shows me all the logs, but after a certain date/time they are no longer forwarded.
Fluentd config is this:
<source>
@type forward
@label @mainstream
bind 0.0.0.0
port 24224
</source>
<label @mainstream>
<match **>
@type copy
<store>
@type elasticsearch
host elasticsearch
port 9200
logstash_format true
logstash_prefix fluentd
logstash_dateformat %Y%m%d
include_tag_key true
type_name access_log
tag_key @log_name
<buffer>
flush_mode interval
flush_interval 1s
</buffer>
</store>
</match>
</label>
Do you have any suggestions to find the root of this problem?
Thank you in advance.
I have a setup with Fluentd and Elasticsearch running on a Docker engine. I have swarms of services which I would like to log to Fluentd.
What I want to do is create a tag for each service that I run and use that tag as an index in Elasticsearch. Here's the setup that I have:
<source>
@type forward
port 24224
bind 0.0.0.0
</source>
<match docker.service1>
@type elasticsearch
host "172.20.0.3"
port 9200
index_name service1
type_name fluentd
flush_interval 10s
</match>
<match docker.service2>
@type elasticsearch
host "172.20.0.3"
port 9200
index_name service2
type_name fluentd
flush_interval 10s
</match>
and so forth.
It would be annoying to have to add a new match block for every single service I create, because I want to be able to add new services without updating my Fluentd configuration. Is there a way to do something like this:
<source>
@type forward
port 24224
bind 0.0.0.0
</source>
<match docker.**>
@type elasticsearch
host "172.20.0.3"
port 9200
index_name $(TAG)
type_name fluentd
flush_interval 10s
</match>
where I use a $(TAG) placeholder to indicate that the tag name should be used as the index name?
I've tried ${tag_parts[0]} from an answer I found here, but it was printed literally as my index, so my index was named "${tag_parts[0]}".
Thanks in advance.
I figured out that I needed to import the other Elasticsearch plugin. Here's an example of a match tag that I used:
<match>
@type elasticsearch_dynamic
host "172.20.0.3"
port 9200
type_name fluentd
index_name ${tag_parts[2]}
flush_interval 10s
include_tag_key true
reconnect_on_error true
</match>
I've used the elasticsearch_dynamic output plugin instead of the elasticsearch plugin. Then I can use the ${tag_parts} placeholder.
The include_tag_key option will include the tag in the JSON data.
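For example, with a hypothetical tag like docker.compose.service1, ${tag_parts[0]} is docker, ${tag_parts[1]} is compose, and ${tag_parts[2]} is service1, so records with that tag would end up in an index named service1.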
It helps to read the documentation
I had the same problem, and the solution provided here is being deprecated. What I ended up doing was this:
Add a record_transformer filter that adds the index name you want as a key on the record:
<filter xxx.*>
@type record_transformer
enable_ruby true
<record>
index_name ${tag_parts[1]}-${time.strftime('%Y%m')}
</record>
</filter>
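With that filter in place, a record tagged xxx.myapp (a made-up tag for illustration) arriving in May 2020 would gain an extra field index_name with the value myapp-202005, which the output below then uses as the target index.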
and then in the Elasticsearch output you configure:
<match xxx.*>
@type elasticsearch-service
target_index_key index_name
index_name fallback-index-%Y%m
</match>
The fallback index_name here will be used if a record is missing the index_name key, but that should never happen.
I am setting up an EFK stack. In Kibana I want one index for application logs and one index for syslogs.
I am using fluentd for log forwarding.
syslogs --> /var/log/messages and /var/log/secure
application --> /var/log/application.log
What should the td-agent.conf look like to create these two indices? Please help.
Thank you.
If you are using the Elasticsearch output plugin and want to use Kibana, you can configure your index names by changing the logstash_prefix attribute.
Read the documentation: elasticsearch output plugin documentation
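As a rough sketch for the paths in the question (the tags, pos_file locations and parse formats are assumptions you will need to adapt), you could tail the files with two sources and route them to two differently prefixed indices:
<source>
@type tail
path /var/log/messages,/var/log/secure
pos_file /var/log/td-agent/syslogs.pos
tag system.syslog
<parse>
@type syslog
</parse>
</source>
<source>
@type tail
path /var/log/application.log
pos_file /var/log/td-agent/application.pos
tag app.log
<parse>
@type none
</parse>
</source>
<match system.**>
@type elasticsearch
host localhost
port 9200
logstash_format true
logstash_prefix syslogs
</match>
<match app.**>
@type elasticsearch
host localhost
port 9200
logstash_format true
logstash_prefix application
</match>
This gives daily indices named syslogs-YYYY.MM.DD and application-YYYY.MM.DD, which you can add as two separate index patterns in Kibana (host localhost is an assumption; point it at your Elasticsearch).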
I have added the following fluentd.conf file to demonstrate your use case.
In this file I have 2 matches:
1. "alert" - pipes all logs tagged "alert" (FluentLogger.getLogger("alert")) to the "alert" index in Elasticsearch.
2. default match - pipes all remaining logs to Elasticsearch under the "fluentd" index (which is the default index of this plugin).
fluentd/conf/fluent.conf
<source>
@type forward
port 24224
bind 0.0.0.0
</source>
<match alert.**>
@type copy
<store>
@type elasticsearch
host elasticsearch
port 9200
logstash_format true
logstash_prefix alert
logstash_dateformat %Y%m%d
type_name access_log
tag_key @log_name
flush_interval 1s
</store>
<store>
@type stdout
</store>
</match>
<match *.**>
@type copy
<store>
@type elasticsearch
host elasticsearch
port 9200
logstash_format true
logstash_prefix fluentd
logstash_dateformat %Y%m%d
include_tag_key true
type_name access_log
tag_key @log_name
flush_interval 1s
</store>
<store>
@type stdout
</store>
</match>
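With this config, an event logged via FluentLogger.getLogger("alert") (i.e. tagged alert.*) ends up in daily indices named alert-YYYYMMDD, and everything else goes to fluentd-YYYYMMDD, following the logstash_prefix and logstash_dateformat settings above; both matches also echo events to stdout for debugging.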