I've been reviewing different ways to aggregate log messages that have a start event but no end event. I've been struggling with the Logstash aggregate filter plugin not sorting correctly, and I was looking at retrofitting an old entity-centric model built for a previous version of Elasticsearch (Entity-Centric Indexing - Mark Harwood | Elastic Videos) when I realized that Elasticsearch 7.13 transforms introduce the concept of 'latest', which (hopefully) removes my need for a bunch of external scripts to do this.
I am looking at the "Getting Web Session Details by using Scripted Metric Aggregation" sample Painless script https://www.elastic.co/guide/en/elasticsearch/reference/current/transform-painless-examples.html#painless-web-session which produces session details, including session duration. Because the logs do not have an end time, I need to make use of a timeout interval, something like a 30-minute window for aggregating message events based on my group-by.
Is this possible to do within the transform by adjusting that script, and could anyone help?
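Concretely, I was imagining something along these lines, as an untested sketch (the index name my-logs, the group-by field user.id, the timestamp field @timestamp, and the 30-minute constant are all placeholders): the reduce script sorts every timestamp in the group and treats any gap longer than the timeout as the start of a new session, so the transform only reports the most recent session per key.

POST _transform/_preview
{
  "source": { "index": "my-logs" },
  "pivot": {
    "group_by": {
      "user": { "terms": { "field": "user.id" } }
    },
    "aggregations": {
      "last_session": {
        "scripted_metric": {
          "init_script": "state.timestamps = []",
          "map_script": "state.timestamps.add(doc['@timestamp'].value.toInstant().toEpochMilli())",
          "combine_script": "return state.timestamps",
          "reduce_script": """
            long timeout = 30L * 60L * 1000L;               // 30-minute inactivity window (assumption)
            def all = [];
            for (s in states) {
              if (s != null) { all.addAll(s); }
            }
            if (all.isEmpty()) { return null; }
            all.sort((a, b) -> a.compareTo(b));             // order all events for this group by time
            def sessionStart = all[0];
            def prev = all[0];
            for (def t : all) {
              if (t - prev > timeout) { sessionStart = t; } // a gap longer than the timeout starts a new session
              prev = t;
            }
            return ['start': sessionStart, 'end': prev, 'duration_ms': prev - sessionStart];
          """
        }
      }
    }
  }
}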
Good day,
I have gotten into a bit of a headache while working on indexing some data in Elasticsearch and have some questions about a good approach.
As of now, an event is received on a Kafka topic with just a part of the data that should be stored in the document. The rest of the data needs to be collected after the event is received and is available from different APIs. To reduce the amount of work, it seems that Logstash could be a good approach.
Is there a way to configure Logstash to initiate data collection from different APIs and DBs when an event is received, and then assemble the document with the combined data, or am I stuck with writing time-consuming custom logic for the indexing? I have searched around a bit but couldn't find any good answer to the problem.
What you need in Logstash is to look up/enrich your messages with data from external APIs, right?
You could use Logstash's http filter plugin.
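A rough sketch of what that pipeline could look like (the broker address, topic, API endpoint, and field names below are placeholders you would replace with your own):

input {
  kafka {
    bootstrap_servers => "localhost:9092"    # placeholder broker
    topics => ["incoming-events"]            # placeholder topic
    codec => "json"
  }
}

filter {
  # Call an external API for every event and merge the response into the document
  http {
    url => "https://api.example.com/enrich/%{[entity_id]}"   # placeholder endpoint; %{[entity_id]} comes from the event
    verb => "GET"
    target_body => "enrichment"              # the API response body lands under this field
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "enriched-events"
  }
}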
I have a problem with creating metrics and later triggering alerts based on those metrics. I have two data sources, both Elasticsearch. One contains documents (logs from a service) saying that a message was produced to Kafka; the second contains documents (also logs from a service) saying that a message was consumed. What I want to achieve is to trigger an alert if the ratio of produced to consumed messages drops below 1.
Unfortunately, it is impossible to use Prometheus, for two reasons:
1) the counter resets each time the service is restarted;
2) the second service doesn't have (and won't have any time soon) Prometheus integration.
The question is how to approach metrics and alerting based on those data sources. Is it possible? Maybe there is another way to achieve my goal?
The question is somewhat generic (no mapping or code is provided), so I'll describe an approach.
You can use a watcher on top of an aggregation that you create.
It's relatively straightforward to compute a consumed/produced percentage, and based on that percentage you can trigger an alert via the watcher.
Take a look at this tutorial (from the official Elasticsearch channel) on how to do this, and also check the tutorials for your specific version of Elasticsearch. From 5.x to 7.x, setting up alerts has improved significantly (for 7.x you might be able to do this through the Kibana UI, but for 5.x you'll probably need to add the alert by indexing JSON into the appropriate .watcher indices).
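As a rough sketch of what such a watch could look like (the index patterns, field names, time window, and threshold below are assumptions you would adapt to your data):

PUT _watcher/watch/produced_vs_consumed
{
  "trigger": { "schedule": { "interval": "5m" } },
  "input": {
    "search": {
      "request": {
        "indices": ["producer-logs-*", "consumer-logs-*"],
        "body": {
          "size": 0,
          "query": { "range": { "@timestamp": { "gte": "now-5m" } } },
          "aggs": {
            "produced": { "filter": { "term": { "event.action": "produced" } } },
            "consumed": { "filter": { "term": { "event.action": "consumed" } } }
          }
        }
      }
    }
  },
  "condition": {
    "script": {
      "source": "long p = ctx.payload.aggregations.produced.doc_count; long c = ctx.payload.aggregations.consumed.doc_count; return c > 0 && (double) p / c < 1.0;"
    }
  },
  "actions": {
    "log_ratio_drop": {
      "logging": { "text": "produced/consumed ratio dropped below 1 over the last 5 minutes" }
    }
  }
}

The two filter aggregations count produced and consumed events over the window, and the condition script fires when produced divided by consumed drops below 1.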
I haven't used Grafana, but I believe the same approach can be applied: you'll need an aggregation as mentioned before, and then you add the alert (https://grafana.com/docs/grafana/latest/alerting/rules/).
I have an Elasticsearch cluster running on an AWS Elasticsearch instance. It has been up and running for a few months. I'd like to know the most-used query requests over the last few months. Does Elasticsearch save all queries somewhere I can search, or do I have to programmatically save the requests for analysis?
As far as I'm aware, Elasticsearch doesn't save a record or frequency histogram of all queries by default. However, there's a way you could have it log all queries and then ship the logs somewhere to be aggregated/searched for the top results (incidentally, this is something you could use Elasticsearch for :D). Sadly, you'll only be able to track queries after you configure this; I doubt you'll be able to find any record of your historical queries from the last few months.
To do this, you'd take advantage of Elasticsearch's slow query log. The default thresholds are designed to only log slow queries, but if you set them to 0s, Elasticsearch will log every query as a slow query, giving you a record of all of them. See the slow log documentation for detailed instructions; for example, you could set this for a whole cluster in your yaml configuration file like
index.search.slowlog.threshold.fetch.debug: 0s
or set it dynamically per-index with
PUT /<my-index-name>/_settings
{
  "index.search.slowlog.threshold.query.debug": "0s"
}
To be clear, the log level you choose doesn't strictly matter, but using debug for this allows you to keep logging actually slow queries at the more serious levels like info and warn, which you might find useful.
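For example, something like this would keep warn reserved for genuinely slow queries while debug captures everything (the warn thresholds here are just illustrative):

PUT /<my-index-name>/_settings
{
  "index.search.slowlog.threshold.query.warn": "10s",
  "index.search.slowlog.threshold.query.debug": "0s",
  "index.search.slowlog.threshold.fetch.warn": "1s",
  "index.search.slowlog.threshold.fetch.debug": "0s"
}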
I'm not familiar with how to configure an AWS Elasticsearch cluster, but since the above are core Elasticsearch settings in all the versions I'm aware of, there should be a way to do it.
Happy searching!
I would like to delete data from Elasticsearch using the API (curl).
I would like to start the deletion process and later query the progress of the deletion.
Is it possible to use a job to do this?
I tried looking at the relevant documentation, but there are very few examples.
I would appreciate any relevant information or links.
You have two solutions:
1) Use the delete-by-query API with a range query, which you can then monitor using the Task API (see the sketch below).
2) Use daily indices (e.g. my-logs-2018-09-10, my-logs-2018-09-11, etc.), so deleting data in the past is simply a matter of deleting the indices for the days you want to ditch. No need to monitor, as this happens instantaneously.
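For the first option, a quick sketch (the index pattern, timestamp field, and date range are placeholders):

POST /my-logs-*/_delete_by_query?wait_for_completion=false
{
  "query": {
    "range": { "@timestamp": { "lt": "now-30d" } }
  }
}

The response contains a task id, which you can then poll for progress:

GET _tasks/<task_id>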
I am a newbie to Elasticsearch. We are currently using the Splunk platform for our analytics application and are looking to migrate to ELK. Splunk provides options to schedule searches to run periodically in the background and to store the search results in a separate summary index. Is similar functionality available in Elasticsearch? If so, please point me to the documentation describing the process.
Thanks,
Keerthana
This is a great use case. Of course Elasticsearch can perform such tasks, but it is more manual: you have to write your own script. For example, if you want to summarize data, you can run Elasticsearch aggregations, take the result (which comes back as JSON), and store it back into an index where you keep summary data. This way, even if you delete your raw data, your summary data lives on.
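As a rough illustration of what such a script would do (the index names and fields below are made up), it would first run an aggregation over the raw data:

POST /my-raw-logs-*/_search
{
  "size": 0,
  "query": { "range": { "@timestamp": { "gte": "now-1d/d", "lt": "now/d" } } },
  "aggs": {
    "events_per_service": { "terms": { "field": "service.name" } }
  }
}

and then write each bucket of the response into a summary index, for example:

POST /daily-summary/_doc
{ "date": "2021-06-01", "service": "checkout", "event_count": 1234 }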
Elasticsearch comes with different clients. I like to use the Python Elasticsearch DSL library.