How to enable logging when ElasticSearch indexes a record?

I am trying to measure the latency involved in using the ELK stack. My test application writes log entries, and I want to find out how long it takes before an entry appears in Elasticsearch. I understand this will only be a rough estimate and is very specific to my environment.
How can I measure the latency between my app/logstash/elasticsearch?
EDIT:
I followed the suggestion and enabled _timestamp, but I don't see the field in my records.
{
  "logaggr" : {
    "order" : 0,
    "template" : "logaggr-*",
    "settings" : {},
    "mappings" : {
      "logaggr" : {
        "date_detection" : false,
        "_timestamp" : {
          "enabled" : true,
          "store" : true
        },
        "properties" : {
          "level" : {
            "type" : "string"
          },
          "details" : {
            "type" : "string"
          },
          "logts" : {
            "format" : "yyyy-MM-dd HH:mm:ss,SSS",
            "type" : "date"
          },
          "classname" : {
            "type" : "string"
          },
          "thread" : {
            "type" : "string"
          }
        }
      }
    },
    "aliases" : {}
  }
}
Thanks in advance!

There are three timestamps that will answer your question:
the log file timestamp, i.e. when the application wrote the entry. Make sure your server's clock is correct.
@timestamp, which is set by logstash to the time when it receives the log.
_timestamp, which elasticsearch can set to the time when it receives the log. This setting must be enabled in elasticsearch.
Between these three, you can track the progress of your logs through ELK.
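Note that _timestamp is stored as index metadata, not inside _source, so it will not appear in the document body even once enabled; it has to be requested explicitly. A minimal check, assuming Elasticsearch 1.x (where _timestamp still exists) and a hypothetical index created from the logaggr-* template:
GET /logaggr-2015.01.01/logaggr/_search
{
  "fields" : [ "_timestamp" ],
  "size" : 1
}
With "store" : true as in the template, each hit should then carry a fields._timestamp value (epoch milliseconds) that can be compared against logts and @timestamp.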

Related

Trigger an action for each hit of Elasticsearch query in Kibana Monitor

Is it possible to trigger an action for each hit of a given query in a Kibana Monitor? I would like to use a foreach loop to do this as demonstrated here. However, it's unclear how to implement this on the Kibana Monitor page. On the page there is an input field for Trigger Conditions but I'm unsure how to format the foreach within it or if this is supported.
Consider using Elasticsearch Watcher (requires at least a Gold license): https://www.elastic.co/guide/en/elasticsearch/reference/current/how-watcher-works.html
Watcher runs on a configured interval and performs a query against your indices. You need to define a condition (e.g. the number of hits is greater than 5) so that when it evaluates to true, an action is performed. Elasticsearch allows multiple actions; for example, you can use a webhook and receive the data from the last watcher run (you can also use the watcher API to transform the data). If you don't have a Gold license, you can mimic watcher behavior with a script/program that uses the Elasticsearch Search API.
Here is a simple example of a watcher that checks an index named test every minute and sends a webhook with the entire search context when there is at least one matching document.
{
  "trigger" : {
    "schedule" : { "interval" : "1m" }
  },
  "input" : {
    "search" : {
      "request" : {
        "indices" : [ "test" ],
        "body" : {
          "query" : {
            "bool" : {
              "must" : {
                "range" : {
                  "updatedAt" : {
                    "gte" : "now-1m"
                  }
                }
              }
            }
          }
        }
      }
    }
  },
  "condition" : {
    "compare" : { "ctx.payload.hits.total" : { "gt" : 0 } }
  },
  "actions" : {
    "sample_webhook" : {
      "webhook" : {
        "method" : "POST",
        "url" : "http://b4022015b928.ngrok.io/UtilsService/api/elasticHandler/watcher",
        "body" : "{{#toJson}}ctx.payload{{/toJson}}",
        "auth" : {
          "basic" : {
            "user" : "user",
            "password" : "pass"
          }
        }
      }
    }
  }
}
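If a Gold license is not available, the no-license fallback mentioned above can be approximated with plain Search API calls from a scheduled script; a sketch, assuming a local node and the same test index and updatedAt field as in the watch:
curl -s -X POST "localhost:9200/test/_search" -H 'Content-Type: application/json' -d'
{
  "query" : { "range" : { "updatedAt" : { "gte" : "now-1m" } } }
}'
Run it from cron every minute and fire your webhook whenever hits.total in the response is greater than zero.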
An alternative is to use Kibana Alerts and Actions.
https://www.elastic.co/guide/en/kibana/current/alerting-getting-started.html
This feature is slightly different from Watcher but basically allows you to perform actions upon a query against Elasticsearch. This feature is part of Kibana, as opposed to Watcher, which is part of Elasticsearch (though it is accessible from Kibana Stack Management).

Elasticsearch indexing timestamp-field fails

I am failing to index timestamp fields with Elasticsearch (version 7.10.2) and I do not understand why.
So I create an index with the following mapping. You can copy & paste it directly into Kibana:
PUT /my-dokumente
{
  "mappings" : {
    "properties" : {
      "aufenthalt" : {
        "properties" : {
          "aufnahme" : {
            "properties" : {
              "zeitpunkt" : {
                "type" : "date",
                "format" : "yyyy-MM-dd HH:mm:ss",
                "ignore_malformed" : true
              }
            }
          },
          "entlassung" : {
            "properties" : {
              "zeitpunkt" : {
                "type" : "date",
                "format" : "yyyy-MM-dd HH:mm:ss",
                "ignore_malformed" : true
              }
            }
          }
        }
      }
    }
  }
}
Then I try to index a document:
PUT /my-dokumente/dokumente/1165963
{
  "aufenthalt" : {
    "aufnahme" : {
      "zeitpunkt" : "2019-08-18 15:02:13"
    },
    "entlassung" : {
      "zeitpunkt" : "2019-08-20 10:29:22"
    }
  }
}
Now I get this error:
"mapper [aufenthalt.entlassung.zeitpunkt] cannot be changed from type [date] to [text]"
Why is Elasticsearch not parsing my date?
I also tried many different mapping settings, like strict_date_hour_minute_second, and sent the timestamp as "2019-08-18T15:02:13" or "2019-08-18T15:02:13Z"; I also converted it to epoch millis, but I always get some different error message, for example Cannot update parameter [format] from [strict_date_hour_minute_second] to [strict_date_optional_time||epoch_millis].
So the basic question is just: how can I send a timestamp value to Elasticsearch (with Kibana/cURL)?
PS: I am not using a client SDK like the Java High Level REST Client, which is why I am asking about plain Kibana/cURL.
It can't be that complicated. What am I missing?
Thank you!
Mapping types were removed in 7.x; refer to the official documentation.
You need to use _doc in the URL when indexing a document into Elasticsearch.
Modify the URL to PUT /my-dokumente/_doc.
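Applied to the request from the question, keeping its explicit document ID, the index call becomes:
PUT /my-dokumente/_doc/1165963
{
  "aufenthalt" : {
    "aufnahme" : {
      "zeitpunkt" : "2019-08-18 15:02:13"
    },
    "entlassung" : {
      "zeitpunkt" : "2019-08-20 10:29:22"
    }
  }
}
The zeitpunkt values already match the yyyy-MM-dd HH:mm:ss format declared in the mapping, so with the typeless URL they should index cleanly.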

How to create elasticsearch watcher by xpack

I have just started working with Elasticsearch and am now trying to create my first watcher.
There is some information I have read in the Elasticsearch documentation: https://www.elastic.co/guide/en/x-pack/current/watcher-getting-started.html
And now I try to create one:
https://es.origin-test.cloud.rccf.ru/apiconnect508/_xpack/watcher/watch/audit_watch
PUT method + auth headers
I put in this body:
{ "trigger" : {
"schedule": {
"interval": "1h"
}
}, "actions" : { "send_email" : {
"email" : {
"to" : "ext_avolkova#rencredit.ru",
"subject" : "Watcher Notification",
"body" : "{{ctx.payload.hits.total}} logs found"
} } } }
But now I get this error:
No handler found for uri [/apiconnect508/_xpack/watcher/watch/log_audit] and method [PUT]
Please help me create one simple watcher.
Based on the support matrix, Elasticsearch 2.x is not compatible with X-Pack.
You might want to install Watcher as a separate plugin using this document.
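For the 2.x line, the standalone installation was roughly the following (plugin names as in the 2.x-era Watcher docs; verify the exact commands against the documentation for your version):
bin/plugin install license
bin/plugin install watcher
followed by a restart of each node, after which the watch endpoints (e.g. PUT _watcher/watch/<id>) become available.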

Performance with nested data in a script field

I am wondering if there is a more performant way of performing a calculation on nested data in a script field, or of organizing the data. In the code below, the data will contain values for 50 states and/or other regions. Each user is tied to an area, so the script below checks whether the average_value in their area is above a certain threshold and returns a true/false value for each matching document.
Mapping
{
  "mydata" : {
    "properties" : {
      ...some fields,
      "related" : {
        "type" : "nested",
        "properties" : {
          "average_value" : {
            "type" : "integer"
          },
          "state" : {
            "type" : "string"
          }
        }
      }
    }
  }
}
Script
"script_fields" : {
"inBudget" : {
"script" : {
"inline" : "_source.related.find { it.state == default_area && it.average_value >= min_amount } != null",
"params" : {
"min_amount" : 100,
"default_area" : "CA"
}
}
}
}
I have a working solution using the above method, but it slows my query down, and I am curious whether there is a better solution. I have been toying with the idea of using an inner object with a key, like related_CA, and keeping each state's data in a separate object; however, for flexibility I would rather not have to pre-define each region in the mapping (as I may not have them all ahead of time). I feel like I might be missing a simpler/better way, and I am open to reorganizing the data/mapping and/or changing the script. A sketch of the inner-object idea follows below.
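For reference, the per-state inner-object layout mentioned above would look something like this (a sketch only; related_CA is a made-up field name, and one such object would be needed per region):
{
  "mydata" : {
    "properties" : {
      "related_CA" : {
        "properties" : {
          "average_value" : { "type" : "integer" }
        }
      }
    }
  }
}
This trades the nested find for a direct field access (e.g. doc['related_CA.average_value'] in a script), at the cost of pre-defining every region in the mapping.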

How to specify integer type for csv-river generated type in Elasticsearch

It seems I'm way out of my league here, and I hope that someone from the Elasticsearch community can tune in and point me to my error.
I wanted to try out Kibana + Elasticsearch for fiddling with some COUNTER3-compliant stats we have, and for this I needed to import CSV files.
I chose the CSV River plugin to import the csv files.
The river-csv was configured with the following PUT request to http://localhost:9200/_river/test_csv_river/_meta
{
  "type" : "csv",
  "csv_file" : {
    "folder" : "C:\\localdata\\UsageStats\\COUNTER3",
    "first_line_is_header" : "true"
  }
}
This works fine, but it turns out that the number fields are imported as strings, not numbers. I confirmed this by checking the index's type mapping: http://localhost:9200/test_csv_river/csv_type/_mapping
{
  "test_csv_river" : {
    "mappings" : {
      "csv_type" : {
        "properties" : {
          "ft_total" : {
            "type" : "string"
          },
          .....
        }
      }
    }
  }
}
This related SO question made me believe that I could change a field's type AFTER creating the index: How to update a field type in elasticsearch?
But when I make the following PUT request to http://localhost:9200/test_csv_river/csv_type/_mapping
{
  "csv_type" : {
    "properties" : {
      "ft_total" : {
        "type" : "integer",
        "store" : "yes"
      }
    }
  }
}
I get this error:
{"error":"MergeMappingException[Merge failed with failures {[mapper [ft_total] of different type, current_type [string], merged_type [integer]]}]","status":400}
Is there no way to update that type from string to integer?
Alternatively, is there any way I can make sure the index is created with a specific field predefined as "type" : "integer" when using the CSV River plugin?
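One approach worth trying (a sketch, not tested against the river plugin; it assumes river-csv writes into the test_csv_river index and csv_type type seen above and reuses an existing mapping rather than replacing it): create the index with the desired mapping before registering the river, so the integer type is already in place when the first rows arrive.
PUT http://localhost:9200/test_csv_river
{
  "mappings" : {
    "csv_type" : {
      "properties" : {
        "ft_total" : { "type" : "integer", "store" : "yes" }
      }
    }
  }
}
For an existing index, the usual route in this Elasticsearch generation is to create a new index with the corrected mapping and reindex the data into it, since conflicting type changes are rejected (as the MergeMappingException above shows).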
