Automatically generate and send Kibana dashboard reports via email - elasticsearch

I have a 3-node ELK cluster (all version 6): on the 1st node I have Elasticsearch and Kibana, on the 2nd I have Elasticsearch and Logstash, and on the 3rd I have only Elasticsearch, which is an ingest node.
I have 4 servers which send me data via Filebeat and Metricbeat.
Everything is working fine, and I even have X-Pack version 6. There is a manual process for generating a PDF of a dashboard, which I have tried.
I want to automatically generate reports at a certain time and email them to me.
I read about watchers and the email configuration in the elasticsearch.yml file and I did that,
but I want it to be done automatically, and I am not trying Skedler or PhantomJS.
If there is anything I am missing, help me out. Thank you.

Here is an example from the documentation on how to generate a report with Watcher:
PUT _xpack/watcher/watch/error_report
{
  "trigger" : {
    "schedule": {
      "interval": "1h"
    }
  },
  "actions" : {
    "email_admin" : {
      "email": {
        "to": "'Recipient Name <recipient@example.com>'",
        "subject": "Error Monitoring Report",
        "attachments" : {
          "error_report.pdf" : {
            "reporting" : {
              "url": "http://0.0.0.0:5601/api/reporting/generate/dashboard/Error-Monitoring?_g=(time:(from:now-1d%2Fd,mode:quick,to:now))",
              "retries": 6,
              "interval": "1s",
              "auth": {
                "basic": {
                  "username": "elastic",
                  "password": "changeme"
                }
              }
            }
          }
        }
      }
    }
  }
}
Basically you just need an API call to get this done.
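To try the watch without waiting for the hourly schedule, you can trigger it manually with the execute watch API (a quick sanity check; error_report is the watch ID from the example above):
POST _xpack/watcher/watch/error_report/_execute
{
  "ignore_condition": true,
  "record_execution": true
}
The response includes the result of each action, so you can confirm the PDF attachment and email were produced before relying on the schedule.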

Related

Trigger an action for each hit of Elasticsearch query in Kibana Monitor

Is it possible to trigger an action for each hit of a given query in a Kibana Monitor? I would like to use a foreach loop to do this as demonstrated here. However, it's unclear how to implement this on the Kibana Monitor page. On the page there is an input field for Trigger Conditions but I'm unsure how to format the foreach within it or if this is supported.
Consider using Elasticsearch Watcher (requires at least a Gold license): https://www.elastic.co/guide/en/elasticsearch/reference/current/how-watcher-works.html
Watcher runs on a certain interval and performs a query against your indices (according to your configuration). You need to define a condition (e.g. the number of hits is greater than 5) that, when it evaluates to true, causes an action to be performed. Elasticsearch allows you to use multiple actions. For example, you can use a webhook and receive the data from the last watcher run (you can also use the watcher API to transform the data). If you don't have a Gold license, you can mimic Watcher's behavior with a script or program that uses the Elasticsearch Search API.
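For example, a minimal sketch of the kind of query such a scheduled script could run (the index name test and the updatedAt field are just placeholders mirroring the watcher below):
GET test/_search
{
  "query": {
    "range": {
      "updatedAt": {
        "gte": "now-1m"
      }
    }
  }
}
The script would then inspect hits.total in the response and trigger whatever action you need (send an email, call a webhook, etc.) when it exceeds your threshold.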
Below is a simple example of a watcher that checks an index named test every minute and sends a webhook with the entire search context when there is at least one matching document.
{
  "trigger" : {
    "schedule" : { "interval" : "1m" }
  },
  "input" : {
    "search" : {
      "request" : {
        "indices" : [ "test" ],
        "body" : {
          "query" : {
            "bool": {
              "must": {
                "range": {
                  "updatedAt": {
                    "gte": "now-1m"
                  }
                }
              }
            }
          }
        }
      }
    }
  },
  "condition" : {
    "compare" : { "ctx.payload.hits.total" : { "gt" : 0 }}
  },
  "actions" : {
    "sample_webhook" : {
      "webhook" : {
        "method" : "POST",
        "url": "http://b4022015b928.ngrok.io/UtilsService/api/elasticHandler/watcher",
        "body" : "{{#toJson}}ctx.payload{{/toJson}}",
        "auth": {
          "basic": {
            "user": "user",
            "password": "pass"
          }
        }
      }
    }
  }
}
An alternative way would be to use Kibana Alerts and Actions.
https://www.elastic.co/guide/en/kibana/current/alerting-getting-started.html
This feature is slightly different from Watcher but basically allows you to perform actions based on a query against Elasticsearch. This feature is part of Kibana only, as opposed to Watcher, which is part of Elasticsearch (though it is accessible from Kibana Stack Management).

How to auto apply index policy to newly created indexes in AWS Elasticsearch

We push Nginx logs to AWS Elasticsearch using Filebeat and Logstash. We have created index patterns named nginx-error-logs* and nginx-access-logs*. We can see in Kibana that new indices are created daily based on the Nginx log file date pattern. We created an index policy and applied it to the existing indices, but we would like to auto-apply the same ISM policy to all newly created indices in Elasticsearch. How can we achieve this?
Is this the correct format to apply in the Dev Tools console?
PUT _template/testindex_template
{
  "index_patterns": ["*"],
  "settings": {
    "opendistro.index_state_management.policy_id": "index_lifecycle_management_policy"
  }
}
Or should that be applied in the Filebeat or Logstash config?
Note that opendistro.index_state_management.policy_id is deprecated.
You have to add your index patterns to the ism_template array of the policy. Below is an example.
PUT _opendistro/_ism/policies/policy_name
{
  "policy": {
    "description": "Policy to manage indices",
    "default_state": "hot",
    "states" : [
      {
        "name" : "hot",
        "actions" : [
          {
            "rollover" : {
              "min_size" : "20gb",
              "min_index_age" : "2d"
            }
          }
        ]
      }
    ],
    "ism_template": {
      "index_patterns": [
        "nginx-error-logs*", // sample index pattern
        "nginx-access-logs*"
      ],
      "priority": 100
    }
  }
}
Whenever a new index is created, its name is matched against the ism_template index patterns and the corresponding policy is applied.
If the same pattern is present in multiple policies, the policy with the highest priority is attached.
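To confirm that a newly created index actually picked up the policy, you can use the ISM explain API; the index name below is just an example of one that would match the patterns above:
GET _opendistro/_ism/explain/nginx-access-logs-2022.01.01
The response shows the policy_id attached to the index and its current state.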

How to extract and visualize values from a log entry in OpenShift EFK stack

I have an OKD cluster set up with an EFK stack for logging, as described here. I have never worked with any of the components before.
One deployment logs requests that contain a specific value that I'm interested in. I would like to extract just this value and visualize it with an area map in Kibana that shows the amount of requests and where they come from.
The content of the message field basically looks like this:
[fooServiceClient#doStuff] {"somekey":"somevalue", "multivalue-key": {"plz":"12345", "foo": "bar"}, "someotherkey":"someothervalue"}
This plz is a German zip code, which I would like to visualize as described.
My problem here is that I have no idea how to extract this value.
A nice first success would be if I could find it with a regexp, but Kibana doesn't seem to work the way I think it does. Following its docs, I expect /\"plz\":\"[0-9]{5}\"/ to deliver the result, but I get 0 hits (the time interval is set correctly). Even if this regexp matched, I would only find the log entry that contains it and not just the specific value. How do I go on from here?
I guess I also need an external geocoding service, but at which point would I include it? Or does Kibana itself know how to map zip codes to geometries?
A beginner-friendly step-by-step guide would be perfect, but I could settle for some hints that guide me there.
It is possible to parse the message field as the document gets indexed into ES, using an ingest pipeline with a grok processor.
First, create the ingest pipeline like this:
PUT _ingest/pipeline/parse-plz
{
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "%{POSINT:plz}"
        ]
      }
    }
  ]
}
Then, when you index your data, you simply reference that pipeline:
PUT plz/_doc/1?pipeline=parse-plz
{
  "message": """[fooServiceClient#doStuff] {"somekey":"somevalue", "multivalue-key": {"plz":"12345", "foo": "bar"}, "someotherkey":"someothervalue"}"""
}
And you will end up with a document like the one below, which now has a field called plz with the 12345 value in it:
{
  "message": """[fooServiceClient#doStuff] {"somekey":"somevalue", "multivalue-key": {"plz":"12345", "foo": "bar"}, "someotherkey":"someothervalue"}""",
  "plz": "12345"
}
When indexing your document from Fluentd, you can specify a pipeline to be used in the configuration. If you can't or don't want to modify your Fluentd configuration, you can also define a default pipeline for your index that will kick in every time a new document is indexed. Simply run this on your index and you won't need to specify ?pipeline=parse-plz when indexing documents:
PUT index/_settings
{
  "index.default_pipeline": "parse-plz"
}
If you have several indexes, a better approach might be to define an index template instead, so that whenever a new index called project.foo-something is created, the settings are going to be applied:
PUT _template/project-indexes
{
  "index_patterns": ["project.foo*"],
  "settings": {
    "index.default_pipeline": "parse-plz"
  }
}
Now, in order to map that PLZ on a map, you'll first need to find a data set that provides you with geolocations for each PLZ.
You can then add a second processor in your pipeline in order to do the PLZ/ZIP to lat,lon mapping:
PUT _ingest/pipeline/parse-plz
{
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "%{POSINT:plz}"
        ]
      }
    },
    {
      "script": {
        "lang": "painless",
        "source": "ctx.location = params[ctx.plz];",
        "params": {
          "12345": {"lat": 42.36, "lon": 7.33}
        }
      }
    }
  ]
}
Ultimately, your document will look like this and you'll be able to leverage the location field in a Kibana visualization:
{
  "message": """[fooServiceClient#doStuff] {"somekey":"somevalue", "multivalue-key": {"plz":"12345", "foo": "bar"}, "someotherkey":"someothervalue"}""",
  "plz": "12345",
  "location": {
    "lat": 42.36,
    "lon": 7.33
  }
}
So to sum it all up, it all boils down to only two things:
1. Create an ingest pipeline to parse documents as they get indexed.
2. Create an index template for all project* indexes whose settings include the pipeline created in step 1.
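If you want to check the pipeline before wiring it into the index template, the simulate API lets you dry-run it against a sample document and inspect the resulting plz and location fields; a quick sketch using the message from the question:
POST _ingest/pipeline/parse-plz/_simulate
{
  "docs": [
    {
      "_source": {
        "message": """[fooServiceClient#doStuff] {"somekey":"somevalue", "multivalue-key": {"plz":"12345", "foo": "bar"}, "someotherkey":"someothervalue"}"""
      }
    }
  ]
}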

Exclude "'" in pipeline Kibana

I am currently working with Kibana and am running into a problem which I can't solve.
In my source file there is a line that contains double quotes (""), but when I run the script I made for Kibana in Dev Tools, it does not include that line and gives an error instead. How can I exclude those characters, or how can the line be included?
I have tried to exclude those characters using gsub, but that doesn't work either.
"%{DATA:Datetime},%{DATA:Elapsed},%{DATA:label},%{DATA:ResponseCode},%{DATA:ResponseMessage},%{DATA:ThreadName},%{DATA:DataType},%{DATA:Success},%{DATA:FailureMessage},%{DATA:Bytes},%{DATA:SentBytes},%{DATA:GRPThreads},%{DATA:AllThreads},%{DATA:URL},%{DATA:Latency},%{DATA:IdleTime},%{GREEDYDATA:Connect}"
That is the grok pattern I'm using.
27-19-2018 12:19:43,8331,OK - Refresh Samenvatting,200,"Number of samples in transaction : 67, number of failing samples : 0",Thread Group 1-1,,true,,550720,137198,1,1,null,8318,5094,270
And this is the line I want to run through it; it goes wrong at the "".
R. Kiers
It's best to use custom patterns. Tested on Kibana 6.x, this works for the sample data provided above and can easily be tweaked to work with other data.
custom patterns used:
UNTILNEXTCOMMA ([^,]*)
Grok pattern:
%{DATA:Datetime},%{NUMBER:Elapsed},%{UNTILNEXTCOMMA:label},%{NUMBER:Responsecode},"%{GREEDYDATA:ResponseMessage}",%{UNTILNEXTCOMMA:ThreadName},%{UNTILNEXTCOMMA:DataType},%{UNTILNEXTCOMMA:Success},%{UNTILNEXTCOMMA:FailureMessage},%{NUMBER:Bytes},%{NUMBER:SentBytes},%{NUMBER:GRPThreads},%{NUMBER:AllThreads},%{UNTILNEXTCOMMA:URL},%{NUMBER:Latency},%{NUMBER:IdleTime},%{NUMBER:Connect}
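If you want to use that custom pattern inside an ingest pipeline, the grok processor lets you declare it via pattern_definitions; a sketch (the pipeline name test-custom is arbitrary):
PUT _ingest/pipeline/test-custom
{
  "description": "Grok with a custom UNTILNEXTCOMMA pattern",
  "processors": [
    {
      "grok": {
        "field": "message",
        "pattern_definitions": {
          "UNTILNEXTCOMMA": "[^,]*"
        },
        "patterns": [
          "%{DATA:Datetime},%{NUMBER:Elapsed},%{UNTILNEXTCOMMA:label},%{NUMBER:Responsecode},\"%{GREEDYDATA:ResponseMessage}\",%{UNTILNEXTCOMMA:ThreadName},%{UNTILNEXTCOMMA:DataType},%{UNTILNEXTCOMMA:Success},%{UNTILNEXTCOMMA:FailureMessage},%{NUMBER:Bytes},%{NUMBER:SentBytes},%{NUMBER:GRPThreads},%{NUMBER:AllThreads},%{UNTILNEXTCOMMA:URL},%{NUMBER:Latency},%{NUMBER:IdleTime},%{NUMBER:Connect}"
        ]
      }
    }
  ]
}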
EDIT1:
Pipeline for Kibana 5.x
PUT _ingest/pipeline/test
{
  "description": "Test pipeline",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "%{DATA:Datetime},%{DATA:Elapsed},%{DATA:label},%{DATA:ResponseCode},\"%{GREEDYDATA:ResponseMessage}\",%{DATA:ThreadName},%{DATA:DataType},%{DATA:Success},%{DATA:FailureMessage},%{DATA:Bytes},%{DATA:SentBytes},%{DATA:GRPThreads},%{DATA:AllThreads},%{DATA:URL},%{DATA:Latency},%{DATA:IdleTime},%{GREEDYDATA:Connect}"
        ],
        "ignore_failure": false
      }
    }
  ]
}
Tested using _simulate, and it works:
POST _ingest/pipeline/test/_simulate
{
  "docs" : [
    {
      "_source": {
        "message" : "27-19-2018 12:19:43,8331,OK - Refresh Samenvatting,200,\"Number of samples in transaction : 67, number of failing samples : 0\",Thread Group 1-1,,true,,550720,137198,1,1,null,8318,5094,270"
      }
    }
  ]
}

How to create elasticsearch watcher by xpack

I have just started working with Elasticsearch and am now trying to create my first watcher.
I have read some information in the Elasticsearch documentation: https://www.elastic.co/guide/en/x-pack/current/watcher-getting-started.html
And now I am trying to create one:
https://es.origin-test.cloud.rccf.ru/apiconnect508/_xpack/watcher/watch/audit_watch
PUT method + auth headers
I put in:
{ "trigger" : {
"schedule": {
"interval": "1h"
}
}, "actions" : { "send_email" : {
"email" : {
"to" : "ext_avolkova#rencredit.ru",
"subject" : "Watcher Notification",
"body" : "{{ctx.payload.hits.total}} logs found"
} } } }
But now I see this error:
No handler found for uri
[/apiconnect508/_xpack/watcher/watch/log_audit] and method [PUT]
Please help me create one simple watcher.
Based on the support matrix, Elasticsearch 2.x is not compatible with X-Pack.
You might want to install Watcher as a separate plugin using this document.
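Note that the standalone Watcher plugin exposes its API under _watcher rather than _xpack/watcher, so, assuming the plugin is installed, the same body from the question would be sent to an endpoint like the one below (audit_watch is just the watch ID used in the question):
PUT _watcher/watch/audit_watch
{
  "trigger" : {
    "schedule": { "interval": "1h" }
  },
  "actions" : {
    "send_email" : {
      "email" : {
        "to" : "ext_avolkova@rencredit.ru",
        "subject" : "Watcher Notification",
        "body" : "{{ctx.payload.hits.total}} logs found"
      }
    }
  }
}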
