Fortinet FortiGate logs are not getting inserted into Elasticsearch using Fleet integration - elasticsearch

Can someone please assist me with which settings I should cross-check on the Fortinet side to ensure that the syslog output matches the Fortinet FortiGate logs integration requirements?
Current status:
Integration and all required assets are installed in Kibana.
No errors or warnings noticed in the Elastic Agent logs.
OLD question:
Could you please assist me with how I can add RFC3164 support to the logs-fortinet.firewall-1.7.2 ingest pipeline?
Also, is it possible to add RFC3164 parsing (using a syslog_pri filter or kv filter)? If yes, please assist with some examples to parse the data.
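To illustrate what I am hoping for, below is a rough sketch of the kind of pre-processing pipeline I have in mind, using the Elasticsearch grok and kv ingest processors. The pipeline name, grok pattern, and field names are only my assumptions, not a tested configuration:

PUT _ingest/pipeline/fortinet-rfc3164-preprocess
{
  "description": "Sketch: strip an RFC3164 syslog header, then split FortiGate key=value pairs",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "^<%{NONNEGINT:log.syslog.priority:int}>%{SYSLOGTIMESTAMP:_tmp.timestamp} %{SYSLOGHOST:host.hostname} %{GREEDYDATA:message}"
        ]
      }
    },
    {
      "kv": {
        "field": "message",
        "field_split": " (?=[a-z0-9_]+=)",
        "value_split": "="
      }
    }
  ]
}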

Related

Alert based on the field value in Kibana

I want to send an alert from Kibana whenever someone adds a document that meets the conditions. I am using Elastic Cloud Kibana version 8.5.2.
Below are my rule configurations
I am indexing the document from the Dev Tools API. The query is working, but it doesn't send an alert when I index a new document. Does anyone know what is going wrong in my configuration?
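For reference, this is roughly how I index a test document from Dev Tools; the index name and fields below are placeholders rather than my real mapping:

POST my-alerts-index/_doc
{
  "@timestamp": "2022-12-20T10:15:00Z",
  "status": "error",
  "message": "something went wrong"
}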

Suitecrm Elasticsearch Integration

I'm trying to integrate Elasticsearch with SuiteCRM.
I've followed the documentation at https://docs.suitecrm.com/admin/administration-panel/search/elasticsearch/
Then I tried to run a full indexing from SuiteCRM and got an "index_not_found_exception", hence I created the indexes manually in Elasticsearch (a rough sketch of this step is shown below).
Even after that, when I try to run the indexing, no logs show up in SuiteCRM or Elasticsearch, and search in Elasticsearch is not working.
SuiteCRM version: 7.12.5 (Sugar Version 6.5.25, Build 344)
Elasticsearch version: 7.17.5
Please advise. Thanks.
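For reference, this is roughly how I created an index manually from Dev Tools; the index name below is only a placeholder for whatever SuiteCRM is actually configured to use:

PUT /suitecrm
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}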
I didn't get any suspicious logs in suitecrm.log, so I enabled debug mode for the logs:
https://docs.suitecrm.com/developer/logging/
Then I clicked on full indexing and found the log line below:
Elasticsearch trying to re-indexing a bean but this module is blacklisted: SchedulersJobs
I then followed this document: https://docs.suitecrm.com/blog/scheduler-jobs/
And lastly ran this step: Admin / Repairs / Quick Repair and Rebuild.
After that it started working.

ELK - Removing old logs viewable in Kibana

I have managed to process log files using the ELK kit and I can now see my logs on Kibana.
I have scoured the internet and can't seem to find a way to remove all the old logs, viewable in Kibana, from months ago (well, not an explanation that I understand). I just want to clear my Kibana and start afresh by loading new logs and having them be the only ones displayed. Does anyone know how I would do that?
Note: Even if I remove all the Index Patterns (in Management section), the processed logs are still there.
Context: I have been looking at using ELK to analyse testing logs in my work. For that reason, I am using Elasticsearch, Kibana and Logstash v5.4, and I am unable to download a newer version due to company restrictions.
Any help would be much appreciated!
Kibana screenshot displaying logs
Update:
I've typed "GET /_cat/indices/*?v&s=index" into the Dev Tools>Console and got a list of indices.
I initially used the "DELETE" function, and it didn't appear to be working. However, after restarting everything, it worked the second time and I was able to remove all the existing indices, which subsequently removed all logs being displayed in Kibana.
SUCCESS!
Kibana is just the visualization part of the Elastic Stack; your data is stored in Elasticsearch, and to get rid of it you need to delete your index.
Version 5.4 is very old and already past its EOL date. It does not have any UI to delete indices, so you will need to use the Elasticsearch REST API to delete them.
You can do it from Kibana: just click on Dev Tools. First you will need to list your indices using the cat indices endpoint.
GET /_cat/indices?v&s=index&pretty
After that you will need to use the delete index API endpoint to delete your index.
DELETE /name-of-your-index
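If there are many old indices, the delete API also accepts wildcards (assuming action.destructive_requires_name has not been enabled in your cluster); the pattern below is only an example:
DELETE /logstash-2017.*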
On newer versions you can do it using the Index Management UI; you should try to talk with your company about getting a newer version.

Is it a good idea to use Serilog to write logs directly to Elasticsearch

I'm evaluating different options about the distributed log server.
In the Java world, as I can see, the most popular solution is Filebeat + Kafka + Logstash + Elasticsearch + Kibana.
However, in the .NET world, there is Serilog, which can send structured logs directly to Elasticsearch. So the only required components are Elasticsearch + Kibana.
I searched a lot, but there's not much information about this solution in production. I've no idea whether it's enough to handle large volumes of logs.
Can anyone give me some suggestions? Thanks.
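To be concrete, the setup I am evaluating is roughly the following configuration with the Serilog.Sinks.Elasticsearch sink; the index format and options below are just placeholders, not a production setup:

using System;
using Serilog;
using Serilog.Sinks.Elasticsearch;

// Write structured events straight to Elasticsearch, with no Filebeat/Logstash in between.
Log.Logger = new LoggerConfiguration()
    .Enrich.FromLogContext()
    .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
    {
        AutoRegisterTemplate = true,
        IndexFormat = "app-logs-{0:yyyy.MM.dd}" // placeholder index name
    })
    .CreateLogger();

// Structured properties (OrderId, Elapsed) become fields on the Elasticsearch document.
Log.Information("Order {OrderId} processed in {Elapsed} ms", 42, 137);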
I had exactly the same issue. Our system worked with the "classic" ELK stack architecture, i.e. Filebeat -> Logstash -> Elasticsearch (-> Kibana).
But as we found out, in big projects with a lot of logs, Serilog is a much better solution for the following reasons:
CI/CD - when you have different types of logs with different structures, Serilog's power comes in handy. In Logstash you need to create a different filter to break down each message according to its pattern, which implies tight coupling between the log structure and the Logstash configuration - very bug prone.
Maintenance - because of the easy CI/CD and the single point of change, it is easier to maintain a large amount of logs.
Scalability - Filebeat has a problem handling big chunks of data because of its registry file, which tends to "explode" - reference from personal experience: Stack Overflow question; Elastic forum question.
Fewer failure points - with Serilog the log is sent directly to Elasticsearch, whereas with Filebeat you have to pass through Logstash - one more place to fail.
Hope it helps you with your evaluation.
Update (Dec 2021):
The Elasticsearch logger provider has been moved to the Elastic ECS DotNet project.
Find the latest version here: https://github.com/elastic/ecs-dotnet/blob/master/src/Elasticsearch.Extensions.Logging/ReadMe.md
The nuget package is here: https://www.nuget.org/packages/Elasticsearch.Extensions.Logging/1.6.0-alpha1
It is still labelled an alpha release (although it has more functionality than the Essential.LoggerProvider version), so currently (Dec 2021) you need to specify the version when adding the package:
dotnet add package Elasticsearch.Extensions.Logging --version 1.6.0-alpha1
Disclaimer: I am the author
ORIGINAL ANSWER
There is now also a stand-alone logger provider that will write .NET Core logging directly to Elasticsearch, following the Elastic Common Schema (ECS) field specifications: https://github.com/sgryphon/essential-logging/tree/master/src/Essential.LoggerProvider.Elasticsearch
To use this from your .NET Core application, add a reference to the Essential.LoggerProvider.Elasticsearch package:
dotnet add package Essential.LoggerProvider.Elasticsearch
Then, add the provider to the loggingBuilder during host construction, using the provided extension method.
using Essential.LoggerProvider;
// ...
.ConfigureLogging((hostContext, loggingBuilder) =>
{
    loggingBuilder.AddElasticsearch();
})
The default configuration will write to a local Elasticsearch running at http://localhost:9200/.
Once you have sent some log events, open Kibana (e.g. http://localhost:5601/) and define an index pattern for "dotnet-*" with the time filter "@timestamp".
This reduces the dependencies even more, as rather than pull in the entire Serilog infrastructure (App -> Microsoft ILogger -> Serilog provider/adapter -> Elasticsearch sink -> Elasticsearch) you now only have (App -> Microsoft ILogger -> Elasticsearch provider -> Elasticsearch).
The ElasticsearchLoggerProvider also writes events following the Elastic Common Schema (ECS) conventions, so it is compatible with events logged from other sources, e.g. Beats.

How to get details of a log from dashboards in Kibana

Hi, I am new to Kibana (ELK stack). I have created an error dashboard in Kibana, but if I need more information about the errors, how can I get that? For example, if I want to know why an error was created, how can I get it?
Any help will be appreciated.
For any information that you want to display in Kibana, you need to have a source where that particular information is available. In this case, if we assume that the logs being parsed by Logstash contain the information on the errors, then you can parse/filter the logs via the Logstash pipeline to extract the information and store it in a field.
Now the documents which are saved in ES contain the information; this can be used to perform various aggregations or apply different mathematical functions, and eventually visualize the result.
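For example, once the error details are stored in their own field, an aggregation like the one below can drive a visualization; the index pattern and field name here are hypothetical:

GET logs-*/_search
{
  "size": 0,
  "aggs": {
    "errors_by_type": {
      "terms": { "field": "error.type" }
    }
  }
}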
