fluentd elasticsearch plugin - The client is unable to verify that the server is Elasticsearch

I want to send some nginx logs from fluentd to Elasticsearch. However, fluentd is unable to start due to the following error message:
The client is unable to verify that the server is Elasticsearch. Some functionality may not be compatible if the server is running an unsupported product.
[error]: #0 unexpected error error_class=Elasticsearch::UnsupportedProductError error="The client noticed that the server is not Elasticsearch and we do not support this unknown product."
This is my fluentd config:
<source>
  @type tail
  <parse>
    @type nginx
  </parse>
  path /tmp/lab4/nginx/access.log
  pos_file /tmp/lab4/nginx/access.po
  tag nginx.access
</source>
<match nginx.**>
  @type elasticsearch
  scheme http
  host 192.168.1.154
  port 9200
  with_transporter_log true
  @log_level debug
</match>
If I do a curl http://192.168.1.154:9200, I can see a response from Elasticsearch with the version and other system info.
For reference I am using:
fluentd 1.14.5
fluent-plugin-elasticsearch 5.2.0
Elasticsearch 7.12.0
Any idea what I am doing wrong?

For anyone who is facing this issue in Docker, the steps below solved it for me:
You need to build fluentd with an elasticsearch gem version matching the Elasticsearch server being used, like below:
Dockerfile:
FROM fluent/fluentd
RUN gem install elasticsearch -v 7.6
RUN gem install fluent-plugin-elasticsearch
RUN gem install fluent-plugin-rewrite-tag-filter
RUN gem install fluent-plugin-multi-format-parser
Then pin the ES version in the elasticsearch output plugin in fluent.conf, so the plugin skips the product check at startup:
<match **>
  @type elasticsearch
  host 10.10.13.21
  port 9200
  verify_es_version_at_startup false
  default_elasticsearch_version 7
</match>

Given the versions you listed, the elasticsearch client gem (used by fluent-plugin-elasticsearch) is 8.0.0. You are running Elasticsearch 7.12.0, which the 8.0.0 client evaluates as an unsupported product.
See https://github.com/elastic/elasticsearch-ruby/blob/ce84322759ff494764bbd096922faff998342197/elasticsearch/lib/elasticsearch.rb#L110-L119.
So you need to install a client gem version that matches your server.
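A minimal sketch of pinning the client gem to a matching 7.x release, assuming a RubyGems-managed fluentd install (adjust 7.12.0 to your server's version):
# remove the 8.x client that was pulled in by default
gem uninstall -a elasticsearch elasticsearch-api elasticsearch-transport
# pin the client to the server's version, then reinstall the plugin
gem install elasticsearch -v 7.12.0
gem install fluent-plugin-elasticsearch
Installing the pinned elasticsearch gem first matters: RubyGems will accept the already-installed 7.12.0 as satisfying the plugin's dependency instead of fetching 8.x again.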

Related

Fluentd not forwarding logs to Elasticsearch

I have deployed fluentd and Elasticsearch in Kubernetes. If I check the log of the fluentd pod, it logs: The client is unable to verify that the server is Elasticsearch. Some functionality may not be compatible if the server is running an unsupported product.
My fluentd.conf is:
<match kubernetes.var.log.containers.**>
  @type elasticsearch
  host http://elasticsearch
  port 9200
  logstash_format true
</match>
How do I send the Docker container logs to Elasticsearch?
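As a hedged aside on the config above: fluent-plugin-elasticsearch expects a bare hostname in host, with the scheme given as a separate setting. A sketch of the corrected block, assuming the in-cluster service name elasticsearch resolves:
<match kubernetes.var.log.containers.**>
  @type elasticsearch
  scheme http         # the scheme goes here, not in host
  host elasticsearch  # bare service name, no http:// prefix
  port 9200
  logstash_format true
</match>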

Is it possible to send logs from two different machines to Elasticsearch without Logstash?

I have installed Elasticsearch, Kibana, and Auditbeat on an Ubuntu machine, so I'm monitoring the log events on the Ubuntu machine. I also installed Winlogbeat on a Windows machine to monitor it too, and I configured it to send the logs to the Elasticsearch instance on the Ubuntu machine.
This is the configuration of the winlogbeat.yml.
But when I try to run Winlogbeat, I get the following error when it tries to connect to Kibana on the Ubuntu machine.
On the Ubuntu machine, Kibana, Elasticsearch, and Auditbeat all work properly.
This is the configuration of the elasticsearch.yml:
And this is the kibana.yml configuration:
I just modified kibana.yml to allow connections from a remote host:
server.host: "0.0.0.0"
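For completeness, a minimal sketch of the Winlogbeat side, assuming the Ubuntu machine's address is 192.168.1.10 (a hypothetical address standing in for the real one):
# winlogbeat.yml (sketch)
output.elasticsearch:
  hosts: ["192.168.1.10:9200"]   # Elasticsearch on the Ubuntu machine
setup.kibana:
  host: "192.168.1.10:5601"      # Kibana endpoint, used by winlogbeat setup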

filebeat version 7.13.1 not working with AWS Elasticsearch 7.4

Can someone help me with the below error?
#filebeat test output -c /etc/filebeat/filebeat.yml
talk to server... ERROR Connection marked as failed because the onConnect callback failed: could not connect to a compatible version of Elasticsearch: unauthorized access, could not connect to the xpack endpoint, verify your credentials
OS version:
Distributor ID: Ubuntu
Description: Ubuntu 18.04.5 LTS
Release: 18.04
Codename: bionic
Elasticsearch version:
7.4
FileBeat version:
filebeat version 7.13.1 (amd64), libbeat 7.13.1 [2d80f6e99f41b65a270d61706fa98d13cfbda18d built 2021-05-28 16:38:20 +0000 UTC]
I am using the Elasticsearch Service from AWS with the OSS version of filebeat. It was working fine with filebeat version 7.12.1; after the version upgrade, we are facing this issue.
It is a breaking change in version 7.13.
From version 7.13 on, Filebeat will only work with the Elasticsearch distribution from Elastic, as it now checks the license, at least at the moment.
It was caused by this change in the code, and there is an open pull request to restore the old behavior.
But at the moment, if you are not using Elasticsearch under the Elastic license, you can't use any Beat from version 7.13+; you will need to revert to an earlier version.
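A sketch of reverting to the last working OSS release on Ubuntu, assuming a .deb install (the URL follows Elastic's standard artifact naming for OSS builds):
# download and install the last known-good OSS build
curl -LO https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-7.12.1-amd64.deb
sudo dpkg -i filebeat-oss-7.12.1-amd64.deb
sudo systemctl restart filebeat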

Send logs to external Elasticsearch from OpenShift projects

I'm trying to send specific OpenShift project logs to an unsecured external Elasticsearch.
I have tried the solution in https://github.com/richm/docs/releases/tag/20190205142308, but found that it works only when the external Elasticsearch is secured.
Later I also tried the elasticsearch plugin by adding it to output-applications.conf.
output-applications.conf:
<match *.*>
  @type elasticsearch
  host xxxxx
  port 9200
  logstash_format true
</match>
All other files are the same as described in https://github.com/richm/docs/releases/tag/20190205142308 (#Application logs from specific namespaces/pods/containers).
I included output-applications.conf in the fluent.conf file.
In the fluentd logs I see nothing except the message [info]: reading config file path="/etc/fluent/fluent.conf", and the data is not reaching Elasticsearch.
Can anyone tell me how to proceed?
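One way to get more signal here, as a sketch: turn on the plugin's debug options (the same ones used in the first question's config) so fluentd reports what it is sending:
<match *.*>
  @type elasticsearch
  host xxxxx
  port 9200
  logstash_format true
  @log_level debug            # per-plugin debug logging
  with_transporter_log true   # log the HTTP requests sent to Elasticsearch
</match>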

Logs not being flushed to Elasticsearch container through Fluentd

I have a local setup running 2 containers:
One for Elasticsearch (set up for development as detailed here: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html). This I run as directed in the article using docker run -p 9200:9200 -e "http.host=0.0.0.0" -e "transport.host=127.0.0.1" docker.elastic.co/elasticsearch/elasticsearch:5.4.1
Another as a Fluentd aggregator (using this base image: https://hub.docker.com/r/fluent/fluentd/). My fluent.conf for testing purposes is as follows:
<source>
  @type forward
  port 24224
</source>
<match **>
  @type elasticsearch
  host 172.17.0.2 # Verified internal IP address of the ES container
  port 9200
  user elastic
  password changeme
  index_name fluentd
  buffer_type memory
  flush_interval 60
  retry_limit 17
  retry_wait 1.0
  include_tag_key true
  tag_key docker.test
  reconnect_on_error true
</match>
This I start with the command docker run -p 24224:24224 -v /data:/fluentd/log vg/fluentd:latest
When I run my processes (that generate logs) and run these 2 containers, I see the following towards the end of stdout for the Fluentd container:
2017-06-15 12:16:33 +0000 [info]: Connection opened to Elasticsearch cluster => {:host=>"172.17.0.2", :port=>9200, :scheme=>"http", :user=>"elastic", :password=>"obfuscated"}
However, beyond this, I see no logs. When I browse to http://localhost:9200 I only see the Elasticsearch welcome message.
I know the logs are reaching the Fluentd container, because when I change fluent.conf to redirect to a file, I see all the logs as expected. What am I doing wrong in my setup of Elasticsearch? How can I get to see all the indexes laid out correctly in my browser / through Kibana?
It seems that you are on the right track. Just check the indexes that were created in Elasticsearch as follows:
curl 'localhost:9200/_cat/indices?v'
Docs:
https://www.elastic.co/guide/en/elasticsearch/reference/1.4/_list_all_indexes.html
There you can see each index name. So pick one and search within it:
curl 'localhost:9200/INDEXNAME/_search'
Docs: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-search.html
However, I recommend using Kibana for a better experience. Just start it; by default it looks for an Elasticsearch instance on localhost. In Kibana's configuration, enter the index name that you now know, and start playing with it.
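As an additional sketch (not part of the original answer): fluent-cat, which ships with fluentd, can push a hand-crafted event into the forward input, assuming port 24224 is published on localhost as in the docker run command above:
# send one test event through the forward input
echo '{"message":"hello from fluent-cat"}' | fluent-cat test.event
# then check whether it shows up in the index named in the config
curl 'localhost:9200/fluentd/_search?q=message:hello&pretty'
If the test event appears but your application logs do not, the problem is upstream of the aggregator rather than in the Elasticsearch output.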
