Why don't I receive the FortiGate logs from Filebeat in ELK? - elasticsearch

I installed Elasticsearch, Kibana, and Filebeat on the same Ubuntu 22.04 VM, and I installed FortiGate 7.2.0 on another VM. I want to collect the FortiGate logs with Filebeat, but I don't receive them.
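A sketch of the usual setup for this, under assumptions: Filebeat's fortinet module, its default syslog port 9004, and 192.0.2.10 as a placeholder for the Filebeat VM's address. On the Filebeat VM, enable the module and load the dashboards and pipelines:

filebeat modules enable fortinet
filebeat setup

Then in modules.d/fortinet.yml, have the firewall fileset listen for syslog over UDP:

- module: fortinet
  firewall:
    enabled: true
    var.input: udp
    var.syslog_host: 0.0.0.0
    var.syslog_port: 9004

And on the FortiGate CLI, point syslog at the Filebeat VM:

config log syslogd setting
    set status enable
    set server 192.0.2.10
    set port 9004
end

Also check that UDP 9004 is actually open between the two VMs.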

Related

Getting "Kibana server is not ready yet" when running from docker

I'm trying to run Elasticsearch and Kibana via Docker, and I'm getting errors with Kibana.
I'm using Elasticsearch and Kibana version 7.6.2
on Ubuntu 18.04.6 LTS.
I run Elasticsearch with the following command:
docker run -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.6.2
Elasticsearch seems to be up (I can bulk-index documents and get information about the index from Python code).
I'm running Kibana with the following commands:
docker network create elastic
docker run --net elastic -p 127.0.0.1:5601:5601 -e "ELASTICSEARCH_HOSTS=http://127.0.0.1:9200" docker.elastic.co/kibana/kibana:7.6.2
I see the following message in the web browser: Kibana server is not ready yet
And I see the following logs in the console:
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["info","savedobjects-service"],"pid":7,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["error","elasticsearch","data"],"pid":7,"message":"Request error, retrying\nHEAD http://127.0.0.1:9200/.apm-agent-configuration => connect ECONNREFUSED 127.0.0.1:9200"}
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["error","elasticsearch","data"],"pid":7,"message":"Request error, retrying\nGET http://127.0.0.1:9200/_xpack => connect ECONNREFUSED 127.0.0.1:9200"}
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["error","elasticsearch","admin"],"pid":7,"message":"Request error, retrying\nGET http://127.0.0.1:9200/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip => connect ECONNREFUSED 127.0.0.1:9200"}
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["warning","elasticsearch","data"],"pid":7,"message":"Unable to revive connection: http://127.0.0.1:9200/"}
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["warning","elasticsearch","data"],"pid":7,"message":"No living connections"}
Could not create APM Agent configuration: No Living connections
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["warning","elasticsearch","data"],"pid":7,"message":"Unable to revive connection: http://127.0.0.1:9200/"}
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["warning","elasticsearch","data"],"pid":7,"message":"No living connections"}
How can I run Kibana via Docker?
Did you try enrolling Kibana to your Elasticsearch cluster?
The enrollment token is valid for 30 minutes. If you need to generate a new enrollment token, run the elasticsearch-create-enrollment-token tool on your existing node. This tool is available in the Elasticsearch bin directory of the Docker container.
For example, run the following command on the existing es01 node to generate an enrollment token for newer nodes to be added:
docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node
When you start Kibana, a unique link is output to your terminal.
To access Kibana, click the generated link in your terminal.
Then in your browser, paste the enrollment token that you copied when starting Elasticsearch and click the button to connect your Kibana instance with Elasticsearch.
Log in to Kibana as the elastic user with the password that was generated when you started Elasticsearch.
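Note that the node scope above is for adding Elasticsearch nodes; to enroll a Kibana instance the scope would be kibana instead. A sketch, assuming the container is named es01 and a security-enabled (8.x) cluster, which is where enrollment tokens exist:

docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana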
You've created a Docker network for the Kibana container, but the Elasticsearch container is not joined to it. Since you can access Elasticsearch from your localhost:9200, there is no need to use the elastic network for the Kibana container.
Update the Kibana docker run command to docker run -p 127.0.0.1:5601:5601 -e "ELASTICSEARCH_HOSTS=http://host.docker.internal:9200" docker.elastic.co/kibana/kibana:7.6.2
This removes the join to the elastic network and points the ELASTICSEARCH_HOSTS environment variable at the host machine instead of the container: inside the Kibana container, 127.0.0.1 is the container itself, which is why every request was refused. On a Linux host, host.docker.internal only resolves if you also pass --add-host=host.docker.internal:host-gateway (Docker 20.10+).
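Alternatively, you can attach both containers to the elastic network and let Kibana reach Elasticsearch by container name. A sketch, assuming you name the container es01 (any name works as long as ELASTICSEARCH_HOSTS matches):

docker network create elastic
docker run --net elastic --name es01 -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.6.2
docker run --net elastic -p 127.0.0.1:5601:5601 -e "ELASTICSEARCH_HOSTS=http://es01:9200" docker.elastic.co/kibana/kibana:7.6.2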

Is it possible to send logs from two different machines to Elasticsearch without Logstash?

I have installed Elasticsearch, Kibana, and Auditbeat on an Ubuntu machine, so I'm monitoring the log events on the Ubuntu machine. I also installed Winlogbeat on a Windows machine to monitor it too, and I configured it to send the logs to the Elasticsearch instance on the Ubuntu machine.
This is the configuration of the winlogbeat.yml:
But when I try to run Winlogbeat, I get the following error when it tries to connect to Kibana on the Ubuntu machine.
On the Ubuntu machine, Kibana, Elasticsearch, and Auditbeat work properly.
This is the configuration of the elasticsearch.yml:
And this is the kibana.yml configuration:
I just modified the file kibana.yml to allow connections from a remote host:
server.host: "0.0.0.0"
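The config screenshots didn't survive, but for this topology a minimal winlogbeat.yml would look roughly like the following, assuming 192.0.2.10 is a placeholder for the Ubuntu machine's address; both setup.kibana and output.elasticsearch must point at the remote host, not localhost:

winlogbeat.event_logs:
  - name: Application
  - name: Security
  - name: System

setup.kibana:
  host: "192.0.2.10:5601"

output.elasticsearch:
  hosts: ["192.0.2.10:9200"]

On the Ubuntu side, elasticsearch.yml also needs to listen on a non-loopback interface (e.g. network.host: 0.0.0.0, which in a single-node 7.x setup also requires discovery.type: single-node to pass the bootstrap checks).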

Zeek logs to ELK

I've installed ELK on one server, and Zeek with Filebeat on another server.
I followed the documentation to install each one, but Filebeat is not shipping the Zeek logs to Kibana.
By the way, Filebeat's own basic logs are shipped to Kibana, just without the Zeek logs.
For the record:
1 - I've enabled the zeek module
2 - I've added @load policy/tuning/json-logs to local.zeek
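One frequent cause is that the Filebeat zeek module looks for the logs in a default location that doesn't match the install; each fileset takes an explicit var.paths. A sketch, assuming JSON logs under /opt/zeek/logs/current (adjust to your install):

filebeat modules enable zeek

Then in modules.d/zeek.yml:

- module: zeek
  connection:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/conn.log"]
  dns:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/dns.log"]
  http:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/http.log"]

Then run filebeat setup and restart the Filebeat service so the new module config is picked up.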

"Attempting to reconnect to backoff(elasticsearch(http://localhost:9200)) with 3 reconnect attempt(s)" error appears

I am running Filebeat, Elasticsearch, and Kibana to collect Nginx logs from the local machine. I am connecting Filebeat directly to Elasticsearch in the Filebeat configuration, but as soon as Filebeat starts it shows errors like "pipeline/output.go:145 Attempting to reconnect to backoff(elasticsearch(http://localhost:9200)) with 3 reconnect attempt(s)", and no logs are received by Kibana.
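That backoff message just means Filebeat cannot reach the Elasticsearch output, so the first thing to verify is that Elasticsearch actually answers on that address. A sketch of the usual checks (the credentials below are placeholders, relevant only if security is enabled):

curl -v http://localhost:9200
systemctl status elasticsearch

If the curl returns 401, security is enabled and filebeat.yml needs credentials in its output section:

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  username: "elastic"
  password: "<password>"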

Filebeat unable to send data to Logstash, which results in empty data in Elasticsearch & Kibana

I am trying to deploy the ELK stack on the OpenShift platform (OKD v3.11) and am using Filebeat to automatically detect the logs.
The Kibana dashboard is up and the Elasticsearch & Logstash APIs are working fine, but Filebeat is not sending data to Logstash, since I do not see any data arriving on the Logstash listener on port 5044.
I found on the Elastic forums that the following iptables command would resolve my issue, but no luck:
iptables -A OUTPUT -t mangle -p tcp --dport 5044 -j MARK --set-mark 10
Still nothing arrives on the Logstash listener. Please help me if I am missing anything, and let me know if you need any more information.
NOTE:
The filebeat.yml, logstash.yml & logstash.conf files work perfectly when deployed on plain Kubernetes.
The steps I have followed to debug this issue are:
Check if Kibana is coming up,
Check if the Elasticsearch APIs are working,
Check if Logstash is accessible from Filebeat.
Everything above was working fine in my case. I then raised the log level in filebeat.yml and found a "Permission denied" error while Filebeat was accessing the Docker container logs under the "/var/lib/docker/containers/*/" folder.
Fixed the issue by setting SELinux to permissive mode with the following command:
sudo setenforce Permissive
After this, ELK started to sync the logs.
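Note that setenforce only lasts until the next reboot. To check the current mode and make the change persistent, something like this (a sketch for RHEL/CentOS-family hosts, where SELinux is typically enforcing):

getenforce
sudo setenforce Permissive
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

A narrower alternative is to relabel the log directory instead of relaxing SELinux globally, but permissive mode is the quick way to confirm SELinux was the blocker.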
