Logs not being flushed to Elasticsearch container through Fluentd

I have a local setup running two containers:
One for Elasticsearch (set up for development as detailed here: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html). I run it as directed in the article with: docker run -p 9200:9200 -e "http.host=0.0.0.0" -e "transport.host=127.0.0.1" docker.elastic.co/elasticsearch/elasticsearch:5.4.1
Another as a Fluentd aggregator (using this base image: https://hub.docker.com/r/fluent/fluentd/). My fluent.conf for testing purposes is as follows:
<source>
  @type forward
  port 24224
</source>
<match **>
  @type elasticsearch
  host 172.17.0.2 # Verified internal IP address of the ES container
  port 9200
  user elastic
  password changeme
  index_name fluentd
  buffer_type memory
  flush_interval 60
  retry_limit 17
  retry_wait 1.0
  include_tag_key true
  tag_key docker.test
  reconnect_on_error true
</match>
I start this one with the command: docker run -p 24224:24224 -v /data:/fluentd/log vg/fluentd:latest
When I run my log-generating processes together with these two containers, I see the following towards the end of stdout for the Fluentd container:
2017-06-15 12:16:33 +0000 [info]: Connection opened to Elasticsearch cluster => {:host=>"172.17.0.2", :port=>9200, :scheme=>"http", :user=>"elastic", :password=>"obfuscated"}
However, beyond this, I see no logs. When I open http://localhost:9200 I only see the Elasticsearch welcome message.
I know the logs are reaching the Fluentd container, because when I change fluent.conf to redirect to a file, I see all the logs as expected. What am I doing wrong in my Elasticsearch setup? How can I see all the indices laid out correctly in my browser / through Kibana?

It seems that you are on the right track. Just check the indices that were created in Elasticsearch as follows:
curl 'localhost:9200/_cat/indices?v'
Docs:
https://www.elastic.co/guide/en/elasticsearch/reference/1.4/_list_all_indexes.html
There you can see each index name. So pick one and search within it:
curl 'localhost:9200/INDEXNAME/_search'
Docs: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-search.html
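Note that the elasticsearch image from that guide ships with X-Pack security enabled, so these curl calls may need the same credentials the Fluentd config uses. A quick sketch, assuming the default elastic/changeme pair and the fluentd index name from the question:
curl -u elastic:changeme 'localhost:9200/_cat/indices?v'
curl -u elastic:changeme 'localhost:9200/fluentd/_search?pretty'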
However, I recommend using Kibana for a better experience. Just start it; by default it looks for Elasticsearch on localhost. In the interface's configuration, enter the index name that you now know and start playing with it.

Related

Fluentd not forwarding logs to Elasticsearch

I have deployed Fluentd and Elasticsearch in Kubernetes. If I check the Fluentd pod's log, it shows: "The client is unable to verify that the server is Elasticsearch. Some functionality may not be compatible if the server is running an unsupported product."
My fluentd.conf is:
<match kubernetes.var.log.containers.**>
  @type elasticsearch
  host http://elasticsearch
  port 9200
  logstash_format true
</match>
How do I send the Docker container logs to Elasticsearch?
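As a hedged sketch (not a confirmed fix for this setup): that message comes from fluent-plugin-elasticsearch's product check, and the plugin exposes verify_es_version_at_startup and default_elasticsearch_version for cases where that check cannot identify the cluster. The host value below assumes the service name from the question without the scheme prefix:
<match kubernetes.var.log.containers.**>
  @type elasticsearch
  host elasticsearch
  port 9200
  scheme http
  logstash_format true
  # skip the product check that produces the "unable to verify" warning
  verify_es_version_at_startup false
  default_elasticsearch_version 7
</match>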

Host journal logs not present in EFK Kubernetes stack

I'm using kube-fluentd-operator to aggregate logs using fluentd into Elasticsearch and query them in Kibana.
I can see my application (pods) logs inside the cluster.
However, I cannot see the journal logs (systemd units, kubelet, etc.) from the hosts inside the cluster.
There are no noticeable messages in fluentd's pods logs and the stack works for logs coming from applications.
Inside the fluentd container I have access to the /var/log/journal directory (drwxr-sr-x 3 root 101 4096 May 21 12:37 journal).
Where should I look next to get the journald logs in my EFK stack?
Here's the kube-system.conf file attached to the kube-system namespace:
<match systemd.** kube.kube-system.** k8s.** docker>
  # all k8s-internal and OS-level logs
  @type elasticsearch
  host "logs-es-http.logs"
  port "9200"
  scheme "https"
  ssl_verify false
  user "u1"
  password "password"
  logstash_format true
  # with_transporter_log true
  # @log_level debug
  validate_client_version true
  ssl_version TLSv1_2
</match>
Minimal, simple, according to the docs.
Is it possible that my search terms are wrong?
What should I search for in order to get the journal logs?
After trying every possible solution (from enabling log_level debug, to monitoring only the kube-system namespace, to adding runAsGroup: 101 to the containers), all I was left with was changing what I use for log aggregation, and I decided to switch from that operator to the DaemonSet provided by Fluent themselves: https://github.com/fluent/fluentd-kubernetes-daemonset
This switch proved successful, and searching the systemd unit logs now works from inside the EFK stack.
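For reference, that DaemonSet reads journald through fluent-plugin-systemd. A minimal sketch of such a source block (the tag, unit filter, and cursor path here are illustrative, not copied from the DaemonSet):
<source>
  @type systemd
  tag systemd.kubelet
  path /var/log/journal
  matches [{ "_SYSTEMD_UNIT": "kubelet.service" }]
  read_from_head true
  <storage>
    @type local
    persistent true
    path /var/log/fluentd-journald-kubelet-cursor.json
  </storage>
</source>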

Filebeat unable to send data to logstash which results in empty data in elastic & kibana

I am trying to deploy the ELK stack on the OpenShift platform (OKD v3.11) and am using Filebeat to automatically detect the logs.
The Kibana dashboard is up, and the Elasticsearch & Logstash APIs are working fine, but Filebeat is not sending data to Logstash: I see no data arriving at the Logstash listener on port 5044.
I found on the Elastic forums that the following iptables command would resolve my issue, but had no luck:
iptables -A OUTPUT -t mangle -p tcp --dport 5044 -j MARK --set-mark 10
Still nothing arrives at the Logstash listener. Please help if I am missing anything, and let me know if you need any more information.
NOTE:
The filebeat.yml, logstash.yml & logstash.conf files work perfectly when deployed on plain Kubernetes.
The steps I followed to debug this issue were:
Check whether Kibana is coming up,
Check whether the Elasticsearch APIs are working,
Check whether Logstash is accessible from Filebeat.
Everything was working fine in my case. I then raised the log level in filebeat.yml and found a "Permission Denied" error when Filebeat accessed the Docker container logs under the "/var/lib/docker/containers//" folder.
I fixed the issue by setting SELinux to permissive mode with the following command:
sudo setenforce Permissive
After this, the ELK stack started syncing the logs.
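If you prefer not to leave SELinux permissive cluster-wide, a hedged sketch of how to confirm the denial first (getenforce, setenforce, and ausearch are standard SELinux tools; inspecting the output for the Filebeat process is the assumption here):
getenforce                      # show the current SELinux mode
sudo ausearch -m avc -ts recent # list recent denials and look for the filebeat process
sudo setenforce Permissive      # temporary; reverts to the configured default on reboot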

Send logs to external Elasticsearch from OpenShift projects

I'm trying to send specific OpenShift project logs to an unsecured external Elasticsearch.
I tried the solution described in https://github.com/richm/docs/releases/tag/20190205142308, but found that it only works when Elasticsearch is secured.
Later I also tried the elasticsearch plugin by adding it to output-applications.conf.
output-applications.conf:
<match *.*>
  @type elasticsearch
  host xxxxx
  port 9200
  logstash_format true
</match>
All other files are the same as described in https://github.com/richm/docs/releases/tag/20190205142308 (section "Application logs from specific namespaces/pods/containers").
I included output-applications.conf in the fluent.conf file.
In the Fluentd logs I see nothing other than the message "[info]: reading config file path="/etc/fluent/fluent.conf"", and the data is not reaching Elasticsearch.
Can anyone tell me how to proceed?
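A hedged first check, assuming oc access to the logging namespace, that curl is available in the Fluentd image, and using the xxxxx placeholder host from the config above, is to confirm the Fluentd pod can reach the external cluster at all:
oc exec <fluentd-pod> -- curl -s 'http://xxxxx:9200/_cat/indices?v'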

Fluentd seems to be working but no logs in Kibana

I have a Kubernetes pod consisting of two containers: a main app (which writes logs to a file on a volume) and a Fluentd sidecar that tails the log file and writes to Elasticsearch.
Here is the Fluentd configuration:
<source>
  @type tail
  format none
  path /test/log/system.log
  pos_file /test/log/system.log.pos
  tag anm
</source>
<match **>
  @id elasticsearch
  @type elasticsearch
  @log_level debug
  time_key @timestamp
  include_timestamp true
  include_tag_key true
  host elasticsearch-logging.kube-system.svc.cluster.local
  port 9200
  logstash_format true
  <buffer>
    @type file
    path /var/log/fluentd-buffers/kubernetes.system.buffer
    flush_mode interval
    retry_type exponential_backoff
    flush_thread_count 2
    flush_interval 5s
    retry_forever
    retry_max_interval 30
    chunk_limit_size 2M
    queue_limit_length 8
    overflow_action block
  </buffer>
</match>
Everything appears to be working; the Elasticsearch host & port are correct, since the API responds correctly at that URL. In Kibana I only see records, every 5 seconds, about Fluentd creating a new chunk:
2018-12-03 12:15:50 +0000 [debug]: #0 [elasticsearch] Created new chunk chunk_id="57c1d1c105bcc60d2e2e671dfa5bef04" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=nil, tag="anm", variables=nil>
but none of the actual logs (the ones being written by the app to the system.log file). Kibana is configured with the "logstash-*" index pattern, which matches the one and only existing index.
Version of Fluentd image: k8s.gcr.io/fluentd-elasticsearch:v2.0.4
Version of Elasticsearch: k8s.gcr.io/elasticsearch:v6.3.0
Where can I look to find out what's wrong? It seems Fluentd never gets the logs into Elasticsearch, but what could the reason be?
The answer turned out to be embarrassingly simple; maybe it will help someone in the future.
I figured the problem was with this source config line:
<source>
...
format none
...
</source>
That meant that none of the usual fields were added when the records were saved to Elasticsearch (e.g. pod or container name), and I had to search for these records in Kibana in a completely different way. For instance, I used my own tag to search for those records and found them just fine. The custom tag was originally added just in case, but it turned out to be very useful:
<source>
...
tag anm
...
</source>
So the final takeaway could be the following: use "format none" with caution, and if the source data really is unstructured, add your own tags and possibly enrich the records with additional info (e.g. "hostname") using Fluentd's record_transformer, which I ended up doing as well. Then it is much easier to locate the records via Kibana.
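As a sketch of that last point, a record_transformer filter that enriches the tailed records (the anm tag comes from the source above; the added fields are examples, not the exact ones I used):
<filter anm>
  @type record_transformer
  <record>
    # hostname is resolved once at config load time
    hostname "#{Socket.gethostname}"
    log_source system.log
  </record>
</filter>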
