Grafana: how to sort results with elasticsearch raw_data - elasticsearch

How do I sort Elasticsearch results in Grafana when using the “raw_data” metric?
For example, to query the slowest queries (mysql, postgresql …) I would like to get them sorted by execution time to populate a table widget, but I can’t figure out how to do that.
NOTE: transform functions are not the solution, since they operate AFTER the datasource returns
Grafana version 9.3.2 (the Docker image from Docker Hub) and 8.3.6 (the production one)
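For reference, here is a minimal sketch of the sorted query I would want Grafana to send to Elasticsearch; the index pattern and the query_time field are placeholders for whatever the slow-log documents actually use:
GET /mysql-slowlog-*/_search
{
  "size": 100,
  "sort": [
    { "query_time": { "order": "desc" } }
  ]
}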

Related

elasticsearch query statistics and analysis in near real time

I am pretty new to Elasticsearch and I want to create statistics and Kibana dashboards on queries sent to an Elasticsearch index. What is the best approach to do so? Any advice or recommendations would be highly appreciated.
The idea is to analyze all queries sent to the index and do some performance optimisation in the future when the user base increases.
I am planning for the moment to store the logs in a different index, but parsing seems to be a rather complex activity.
Ideally I need to have:
- Counting of user queries
- Counting of queries that returned no results
- Logging of all search terms
- Sorting of queries, and queries that returned no results, by most frequently contained search term
- A view of top queries, including the search term and the exact query
- A view of top queries returning no results, including the search term no results were found for and the exact query
Thanks
There is no out-of-the-box functionality available in Elasticsearch for search analysis, but there are some workarounds you can use to get the information you are asking for.
First option: you can enable the slow log in Elasticsearch by executing the command below; with the thresholds set to 0s it will log each and every search request coming to Elasticsearch.
PUT /my-index-000001/_settings
{
  "index.search.slowlog.threshold.query.info": "0s",
  "index.search.slowlog.threshold.fetch.info": "0s"
}
Second option: you can log all the queries at the application layer, or at whatever intermediate level your application and Elasticsearch use to talk to each other.
Once you have logs, you can configure Logstash / Filebeat / Fleet to read, transform, and index them into Elasticsearch. Logstash provides different kinds of filters (such as the grok filter) that let you easily transform your plain-text logs into structured logs, as sketched below.
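To illustrate the grok idea, here is a rough sketch using Elasticsearch's ingest pipeline _simulate API instead of Logstash; the sample log line and the pattern are assumptions, since the slow-log line format varies by version:
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": [
            "\\[%{TIMESTAMP_ISO8601:timestamp}\\]\\[%{LOGLEVEL:level}%{SPACE}\\]\\[%{DATA:logger}\\] \\[%{DATA:node}\\] \\[%{DATA:index}\\]\\[%{NUMBER:shard}\\] took\\[%{DATA:took}\\], took_millis\\[%{NUMBER:took_millis}\\]"
          ]
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": "[2023-01-10T12:00:00,123][INFO ][i.s.s.query] [node-1] [my-index][0] took[12.3ms], took_millis[12]"
      }
    }
  ]
}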

Can you run Elasticsearch with Loki?

I have Elasticsearch configured with Grafana and it has logs. I tried to query logs for Elasticsearch in Grafana but did not have much success. I went online to try to learn how to do so, but everything I find talks about Loki. Are you able to use Loki with Elasticsearch? I don't see a definite answer for this online.
Using Loki with ES defeats the purpose of using Loki itself.
Loki prides itself on indexing only the metadata/labels of the logs and storing the actual log data separately in a compressed manner.
This reduces storage costs and leads to faster retrieval of data, as there is less data to index compared to an ES index, which indexes everything in a log line and, worse still, stores the index attribute as empty if the data is missing (roughly similar to the difference between SQL and NoSQL).
As of now, Loki does not support ES as the index store.
It uses two types of storage, labels and log chunks, and stores them separately to be queried as and when required:
Label/metadata/index store: Cassandra, BigTable, DynamoDB, BoltDB
Data chunks: Cassandra, GCS, file system, S3
For more info see Loki storage.

Is there a way to import data (csv data) to the winlogbeat kibana dashboard?

I have just started learning about Elasticsearch and Kibana. I created a Winlogbeat dashboard where the logs are working fine. I want to import additional data (CSV data) which I created using Python. I tried uploading the CSV file, but I am only allowed to create a separate index and not merge it with the Winlogbeat data. Does anyone know how to do this?
Thanks in advance
In many use cases, you don't need to actually combine the data into a single index. Here are a few ways you can show combined data, in approximate order of complexity:
Straightforward methods, using separate indices:
- Use multiple charts on a dashboard
- Use multiple indices in a single chart
More complex methods that combine data into a single index:
- Pivot indices using Data Transforms
- Combine at ingest-time
- Roll your own
Use multiple charts on a dashboard
This is the simplest way: ingest your data into separate indices, make separate visualizations for them, then add those visualizations to one dashboard. If the events are time-based, this simple approach could be all you need.
Use multiple indices in a single chart
Lens, TSVB and Timelion can all use multiple data sources. (Vega can too, but that's playing on hard mode.)
Here's an official Elastic video about how to do it in Lens: youtube
Create pivot indices using Data Transforms
You can use Elasticsearch's Data Transforms functionality to fetch, combine and aggregate your disparate data sources into a combined data structure which is then available for querying with Kibana. The official tutorial on Transforming the eCommerce sample data is a good place to learn more.
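As a rough sketch (the index and field names are assumptions, not your actual mappings), a pivot transform that combines two hypothetical indices and groups them by a shared host.name field might look like this; on older versions the endpoint is _data_frame/transforms rather than _transform:
PUT _transform/combined-example
{
  "source": { "index": ["winlogbeat-*", "my-csv-index"] },
  "pivot": {
    "group_by": {
      "host": { "terms": { "field": "host.name" } }
    },
    "aggregations": {
      "event_count": { "value_count": { "field": "@timestamp" } }
    }
  },
  "dest": { "index": "combined-pivot" }
}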
Combine at ingest-time
If you have (or can add) Logstash in the mix, you have several options for combining datasets during the filter phase of your pipelines:
Using a file-based lookup table and the translate filter plugin
By waiting for related documents to come in then outputting a combined document to Elasticsearch with the aggregate filter plugin
Using external lookups with filter plugins like elasticsearch or http
Executing arbitrary ruby code using the ruby filter plugin
Roll your own
If you're generating the CSV file with a Python program, you might want to think about incorporating the Python Elasticsearch DSL library to run queries on the winlogbeat data, then ingest it in its combined state (whether via a CSV or other means).
Basically, Winlogbeat is a data shipper for Elasticsearch that ships Windows-specific data to an index named winlogbeat with a specific schema and document structure.
You can't merge another document with a different schema into the winlogbeat index.
If your goal is to correlate different data points, use the Time Series Visual Builder (TSVB) to overlay the two different datasets in one visualization.

How can I find the most used query from Elasticsearch?

I have an Elasticsearch cluster running on an AWS Elasticsearch instance. It has been up and running for a few months. I'd like to know the most used query requests over the last few months. Does Elasticsearch save all queries somewhere I can search? Or do I have to programmatically save the requests for analysis?
As far as I'm aware, Elasticsearch doesn't by default save a record or frequency histogram of all queries. However, there's a way you could have it log all queries and then ship the logs somewhere to be aggregated/searched for the top results (incidentally, this is something you could use Elasticsearch for :D). Sadly, you'll only be able to track queries after you configure this; I doubt that you'll be able to find any record of your historical queries from the last few months.
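Once those logs are parsed and indexed, a terms aggregation along these lines could surface the most frequent queries; the index and field names here are placeholders, not anything Elasticsearch creates for you:
GET /parsed-slowlogs/_search
{
  "size": 0,
  "aggs": {
    "top_queries": {
      "terms": {
        "field": "query.keyword",
        "size": 10
      }
    }
  }
}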
To get Elasticsearch to log every query in the first place, you'd take advantage of its slow query log. The default thresholds are designed to only log slow queries, but if you set them to 0s then Elasticsearch will log any query as a slow query, giving you a record of all queries. See the link above for detailed instructions; you could set this for a whole cluster in your YAML configuration file like
index.search.slowlog.threshold.fetch.debug: 0s
or set it dynamically per-index with
PUT /<my-index-name>/_settings
{
  "index.search.slowlog.threshold.query.debug": "0s"
}
To be clear, the log level you choose doesn't strictly matter, but using debug for this lets you keep logging genuinely slow queries at the more serious levels like info and warn, which you might find useful; see the sketch below.
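For instance, a sketch of mixing levels on one index (the threshold values here are made up; pick ones that match your own idea of "slow"):
PUT /my-index-000001/_settings
{
  "index.search.slowlog.threshold.query.warn": "10s",
  "index.search.slowlog.threshold.query.info": "1s",
  "index.search.slowlog.threshold.query.debug": "0s"
}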
I'm not familiar with how to configure an AWS Elasticsearch cluster, but since the above are core Elasticsearch settings in all the versions I'm aware of, there should be a way to do it.
Happy searching!

Elasticsearch API vs Spring Data vs Logstash

I am planning to use Elasticsearch for our dashboard with Spring Boot based REST services. After some research I see these top options:
Option A:
Use the Elasticsearch Java API (from the comments, it looks like this is going away)
Use the Elasticsearch Java REST client
Use spring-data-elasticsearch (planning to use ES 5.6, but this is challenging for the latest ES 6 as I don't see support for it right now)
Option B:
Use the Logstash approach, i.e. sync data between PostgreSQL and Elasticsearch using Logstash.
Which one of these will be the better long-term approach for getting near real time data from ES in a high-load scenario?
Use case: I need to save some data from a PostgreSQL table to Elasticsearch for my dashboard (near real time).
Updates are frequent for both the tables and ES, in order to maintain the current state.
Load is going to increase in a couple of weeks.
The options you listed, in essence, boil down to: should you go with a ready-to-use solution (Logstash), or should you implement your own?
Try Logstash first to see if it works for you: it'll take less time than implementing your own solution, and you can get a working setup in minutes (if it's not hundreds of tables).
If you want near real time, then you need to figure out whether it allows you to:
handle incremental updates, i.e. whether its 'tracking_column' configuration will work for your data structure so that it only loads updated records in each run, not the whole table
run at the desired frequency
and, in general, satisfy your latency requirements
If you decide to go with your own solution, keep in mind that spring-data-elasticsearch is a higher-level wrapper around the underlying Elasticsearch client. If you have latency goals, working at the lower level (the Elasticsearch clients) may give you better control and more options for tuning the pipeline.
Otherwise, the client choice will not matter as much as the data feed characteristics (volume/update frequency) and the DB/ES cluster configuration.
