Fluentd not forwarding logs to Elasticsearch - elasticsearch

I have deployed Fluentd and Elasticsearch in Kubernetes. If I check the log of the Fluentd pod, it logs: The client is unable to verify that the server is Elasticsearch. Some functionality may not be compatible if the server is running an unsupported product.
My fluentd.conf is:
<match kubernetes.var.log.containers.**>
@type elasticsearch
host http://elasticsearch
port 9200
logstash_format true
</match>
How do I send the Docker container logs to Elasticsearch?
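For what it's worth, with fluent-plugin-elasticsearch the host parameter normally takes a bare hostname and the scheme goes into its own parameter, so a config along these lines is closer to what the plugin expects (a minimal sketch, assuming the Elasticsearch Service is reachable as elasticsearch on port 9200 over plain HTTP):

<match kubernetes.var.log.containers.**>
  @type elasticsearch
  host elasticsearch
  port 9200
  scheme http
  logstash_format true
</match>

The "unable to verify that the server is Elasticsearch" warning itself usually points to a mismatch between the elasticsearch client gem and the server version, which is discussed in the related question below.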

Related

fluentd elasticsearch plugin - The client is unable to verify that the server is Elasticsearch

I want to send some nginx logs from Fluentd to Elasticsearch; however, Fluentd is unable to start due to the following error message:
The client is unable to verify that the server is Elasticsearch. Some functionality may not be compatible if the server is running an unsupported product.
[error]: #0 unexpected error error_class=Elasticsearch::UnsupportedProductError error="The client noticed that the server is not Elasticsearch and we do not support this unknown product."
This is my Fluentd config:
<source>
@type tail
<parse>
@type nginx
</parse>
path /tmp/lab4/nginx/access.log
pos_file /tmp/lab4/nginx/access.po
tag nginx.access
</source>
<match nginx.**>
@type elasticsearch
scheme http
host 192.168.1.154
port 9200
with_transporter_log true
@log_level debug
</match>
If I do a curl http://192.168.1.154:9200, I can see a response from Elasticsearch with the version and other info.
For reference I am using:
fluentd 1.14.5
fluent-plugin-elasticsearch 5.2.0
Elasticsearch 7.12.0
Any idea what I am doing wrong?
For anyone facing this issue in Docker, the steps below solved it for me:
You need to build the Fluentd image with an elasticsearch gem that matches the version of the Elasticsearch server in use, like below:
Dockerfile:
FROM fluent/fluentd
RUN gem install elasticsearch -v 7.6
RUN gem install fluent-plugin-elasticsearch
RUN gem install fluent-plugin-rewrite-tag-filter
RUN gem install fluent-plugin-multi-format-parser
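With that Dockerfile in place, the custom image can be built and used in place of the stock fluent/fluentd image, for example (the image tag is just an illustration):

docker build -t custom-fluentd:latest .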
Then mention the ES version in the Elasticsearch output plugin section of fluent.conf:
@type elasticsearch
host 10.10.13.21
port 9200
verify_es_version_at_startup false
default_elasticsearch_version 7
With that setup, the elasticsearch client gem (used by fluent-plugin-elasticsearch) is version 8.0.0, while you are running Elasticsearch v7.12.0, which it evaluates as unsupported.
See https://github.com/elastic/elasticsearch-ruby/blob/ce84322759ff494764bbd096922faff998342197/elasticsearch/lib/elasticsearch.rb#L110-L119.
So it looks like you need to install a client gem version that matches your server.
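As a quick check (a sketch; the exact versions depend on your image), you can confirm which client gem is installed and pin one that matches the 7.12.0 server instead of the 8.0.0 gem mentioned above:

gem list elasticsearch               # shows the installed client gem version(s)
gem uninstall elasticsearch -v 8.0.0
gem install elasticsearch -v 7.12.0  # match the server version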

Host journal logs not present in EFK Kubernetes stack

I'm using kube-fluentd-operator to aggregate logs using fluentd into Elasticsearch and query them in Kibana.
I can see my application (pods) logs inside the cluster.
However I cannot see the journal logs (systemd units, kubelet, etc) from the hosts inside the cluster.
There are no noticeable messages in fluentd's pods logs and the stack works for logs coming from applications.
Inside the fluentd container I have access to the /var/log/journal directory (drwxr-sr-x 3 root 101 4096 May 21 12:37 journal).
Where should I look next to get the journald logs in my EFK stack?
Here's the kube-system.conf file attached to the kube-system namespace:
<match systemd.** kube.kube-system.** k8s.** docker>
# all k8s-internal and OS-level logs
@type elasticsearch
host "logs-es-http.logs"
port "9200"
scheme "https"
ssl_verify false
user "u1"
password "password"
logstash_format true
#with_transporter_log true
#@log_level debug
validate_client_version true
ssl_version TLSv1_2
</match>
Minimal, simple, according to the docs.
Is it possible that my search terms are wrong?
What should I search for in order to get the journal logs?
After trying every possible solution (from enabling log_level debug, to monitoring only the kube-system namespace, to adding runAsGroup: 101 to the containers), I ended up changing my log aggregation tooling and switched from that operator to the DaemonSet provided by Fluent themselves: https://github.com/fluent/fluentd-kubernetes-daemonset
This switch proved successful, and searching the systemd unit logs now works from inside the EFK stack.
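For reference, the way journal logs are typically picked up (including by that DaemonSet in its systemd variants) is a source based on fluent-plugin-systemd; a rough sketch, where the tag, unit filter, and cursor path are illustrative values only:

<source>
  @type systemd
  tag systemd.kubelet
  path /var/log/journal
  matches [{ "_SYSTEMD_UNIT": "kubelet.service" }]
  read_from_head true
  <storage>
    @type local
    persistent true
    path /var/log/fluentd-journald-kubelet-cursor.json
  </storage>
</source>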

"Attempting to reconnect to backoff(elasticsearch(http://localhost:9200)) with 3 reconnect attempt(s)" error appears

I am running Filebeat, Elasticsearch, and Kibana to collect nginx logs from my local machine. I am connecting Filebeat directly to Elasticsearch in the Filebeat configuration, but as soon as I start Filebeat it shows errors like "pipeline/output.go:145 Attempting to reconnect to backoff(elasticsearch(http://localhost:9200)) with 3 reconnect attempt(s)" and no logs are received by Kibana.
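That message by itself only says that Filebeat cannot reach Elasticsearch at the configured address and keeps retrying. A first check (a sketch, assuming a default local install) is to verify that Elasticsearch actually answers on that port and that the Filebeat output points at the same address:

curl http://localhost:9200

# filebeat.yml, relevant part
output.elasticsearch:
  hosts: ["localhost:9200"]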

Send logs to external Elasticsearch from OpenShift projects

I'm trying to send specific OpenShift project logs to an unsecured external Elasticsearch.
I have tried the solution described in https://github.com/richm/docs/releases/tag/20190205142308, but found that it only works when Elasticsearch is secured.
Later I also tried the elasticsearch plugin by adding it to output-applications.conf.
output-applications.conf:
<match *.*>
@type elasticsearch
host xxxxx
port 9200
logstash_format true
</match>
All other files are the same as described in https://github.com/richm/docs/releases/tag/20190205142308 under "Application logs from specific namespaces/pods/containers".
output-applications.conf is included in the fluent.conf file.
In the Fluentd logs I don't see anything other than the message "[info]: reading config file path="/etc/fluent/fluent.conf"", and data is not reaching Elasticsearch.
Can anyone tell me how to proceed?
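One way to narrow this down (a generic debugging sketch, not specific to the linked docs) is to duplicate the output with the copy plugin so the same records also go to stdout; if nothing appears there either, the problem is the match pattern or the pipeline before it, rather than the Elasticsearch output:

<match *.*>
  @type copy
  <store>
    @type elasticsearch
    host xxxxx
    port 9200
    logstash_format true
  </store>
  <store>
    @type stdout
  </store>
</match>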

Logs not being flushed to Elasticsearch container through Fluentd

I have a local setup running 2 containers:
One for Elasticsearch (set up for development as detailed here: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html). I run it as directed in the article using: docker run -p 9200:9200 -e "http.host=0.0.0.0" -e "transport.host=127.0.0.1" docker.elastic.co/elasticsearch/elasticsearch:5.4.1
Another as a Fluentd aggregator (using this base image: https://hub.docker.com/r/fluent/fluentd/). My fluent.conf for testing purposes is as follows:
<source>
@type forward
port 24224
</source>
<match **>
@type elasticsearch
host 172.17.0.2 # Verified internal IP address of the ES container
port 9200
user elastic
password changeme
index_name fluentd
buffer_type memory
flush_interval 60
retry_limit 17
retry_wait 1.0
include_tag_key true
tag_key docker.test
reconnect_on_error true
</match>
I start this with the command: docker run -p 24224:24224 -v /data:/fluentd/log vg/fluentd:latest
When I run my processes (which generate logs) and these 2 containers, I see the following towards the end of stdout for the Fluentd container:
2017-06-15 12:16:33 +0000 [info]: Connection opened to Elasticsearch cluster => {:host=>"172.17.0.2", :port=>9200, :scheme=>"http", :user=>"elastic", :password=>"obfuscated"}
However, beyond this, I see no logs. When I open http://localhost:9200 I only see the Elasticsearch welcome message.
I know the logs are reaching the Fluentd container, because when I change fluent.conf to redirect to a file, I see all the logs as expected. What am I doing wrong in my Elasticsearch setup? How can I see all the indices laid out correctly in my browser / through Kibana?
It seems that you are on the right track. Just check the indices that were created in Elasticsearch as follows:
curl 'localhost:9200/_cat/indices?v'
Docs:
https://www.elastic.co/guide/en/elasticsearch/reference/1.4/_list_all_indexes.html
There you can see each index name. So pick one and search within it:
curl 'localhost:9200/INDEXNAME/_search'
Docs: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-search.html
However, I recommend using Kibana for a better experience. Just start it; by default it looks for Elasticsearch on localhost. In its configuration, enter the index name that you now know, and start playing with it.
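Note that the config above sets index_name fluentd and does not enable logstash_format, so, assuming events are actually being flushed, the documents should end up in an index literally named fluentd, for example:

curl 'localhost:9200/_cat/indices?v'
curl 'localhost:9200/fluentd/_search?pretty'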
