How to pull all the records from Elasticsearch using Grafana

When I try to pull raw documents into a table from Elasticsearch using Grafana, it does not show me all the documents that are available in the Elasticsearch index. No matter how many docs are in my index, it shows at most 1000 docs.
I guess that when Grafana fires the query to fetch the docs, it fixes the document size at 1000 in the query and does not use scan and scroll.
Is there a way to increase the number of documents retrieved from Elasticsearch?
Can I write a Lucene query in the query box and get all the records? If yes, what kind of query do I need to specify in Grafana's Lucene query box? Any example?

Why do you want to scroll through more than 1000 docs? Is there not a filter / query you can specify to limit the list so you can find the one you want?
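For instance, the Grafana Lucene query box accepts standard Lucene query string syntax, so a narrowing filter could look like the line below (a sketch only; the field names are purely illustrative, not taken from the question):

    level:error AND service:"checkout"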

Related

Grafana and Elasticsearch: How to perform a simple query

Using Grafana 7.2 and Elasticsearch 7.5.1.
I am storing in Elasticsearch a structure that, among other things, indexes an executionTime field in milliseconds.
Using Grafana, how do I filter by that field, so that I get only values with executionTime < 150, for example?
I have tried a couple of expressions in the query field, but none of them works.
Any ideas?
Found!
As stated in the official Grafana documentation, Lucene queries can be used in the query field.
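For example, a range expression typed straight into the Grafana query field should do it. Both of the lines below are standard Elasticsearch query_string / Lucene range syntax for the executionTime field from the question (the second form excludes 150 itself):

    executionTime:<150
    executionTime:[0 TO 150}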

Elasticsearch and Kibana: aggregation to find the name of the most rewarded miner, daily

I created an index from a Storm topology to Elasticsearch (ES). The index mapping is basically:
index: btc-block
miner: text
reward: double
datetime: date
From those documents I would like to create a histogram of the richest miner, on a daily scale.
I am wondering whether I should aggregate first in Storm and just use ES and Kibana to store, query and display the data, or whether ES and Kibana can handle such requests.
I have been looking at Transforms, in the index management section, which allow creating new indices from queries and aggregations in continuous mode, but I cannot get to the expected result.
Any help will be appreciated.
Sometimes we need to ask a question to find the answer...
I kept looking at the documentation and eventually solved the issue by using a sibling pipeline aggregation in the visualization. In my case, a max bucket aggregation of the sum of reward on the Y-axis.
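A rough Dev Tools sketch of the same idea, run directly against the index (this assumes an Elasticsearch 7.x calendar_interval and a miner.keyword subfield for the text field; the aggregation names are just illustrative):

    POST btc-block/_search
    {
      "size": 0,
      "aggs": {
        "per_day": {
          "date_histogram": { "field": "datetime", "calendar_interval": "day" },
          "aggs": {
            "per_miner": {
              "terms": { "field": "miner.keyword" },
              "aggs": {
                "total_reward": { "sum": { "field": "reward" } }
              }
            },
            "richest_miner": {
              "max_bucket": { "buckets_path": "per_miner>total_reward" }
            }
          }
        }
      }
    }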
In this case I get about 6 records/hour, so I guess it's OK to let Kibana and ES do the work. What if I had a lot more data? Would it not be wiser to aggregate in Storm?

Is it possible to write an aggregation query in the Dev Tools of Kibana and then store the result?

I have a field loaded in Elasticsearch that contains information such as:
message: Requesting 30 containers
message: Requesting 40 containers
...
message: Requesting 50 containers
I want to get a total of all containers used in the job. (30+40+50=120, in this case).
Is it more efficient to extract these values into a field in Logstash and then use aggregation queries in Elasticsearch, or, given the message above, is everything possible in Elasticsearch alone?
Also, if I write an aggregation query in the Dev Tools of Kibana, is it possible to store the result to be used for visualization?
It is better, and is the solution here, to extract the number in Logstash and then use it in aggregations.
No, you can't use a string in a sum aggregation; not everything is possible in Elasticsearch alone.
You don't need to write aggregation queries in Dev Tools if you are using Kibana; in Kibana you can do aggregations without writing queries.
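Once the number has been extracted into a numeric field on the Logstash side (for example with a grok pattern along the lines of %{NUMBER:containers:int}, where the field name containers is just an illustration), a plain sum aggregation gives the total. A minimal sketch, with a hypothetical index pattern:

    POST logstash-*/_search
    {
      "size": 0,
      "aggs": {
        "total_containers": {
          "sum": { "field": "containers" }
        }
      }
    }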

Querying Elasticsearch before inserting into it

I am using a Spring Boot application to load messages into Elasticsearch. I have a use case where I need to query the Elasticsearch data, get some id value and populate it in the Elasticsearch JSON document before inserting it into Elasticsearch.
Will querying Elasticsearch before insertion be expensive? If yes, is there some other way to approach this issue?
You can use update_by_query to do it in one step.
But otherwise it shouldn't be slow if you do it in two steps (get + update). It depends on many things - how often you do it, how much data is transferred, etc.
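For reference, a minimal _update_by_query sketch of the one-step approach (the index name, query and field names here are hypothetical; only the API shape is the point):

    POST my-index/_update_by_query
    {
      "query": {
        "term": { "status": "pending" }
      },
      "script": {
        "source": "ctx._source.parentId = params.id",
        "params": { "id": "abc-123" }
      }
    }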

Does Kibana support max in queries?

I am hoping to find some information on the syntax of Kibana queries. I want to be able to have a query that returns the max value of a field. Is this possible? I have seen some stuff on facets but am not sure whether it applies.
I know that max is an option for the histogram, but I would like to use it elsewhere.
Since Kibana queries use the Lucene query syntax or regular expressions, they currently seem to return matched records only (no aggregation).
I believe that aggregation (Max, for example) is only possible in Kibana Panels such as the Histogram.
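For completeness, the max itself is computed by Elasticsearch, not by the query bar, so in current Elasticsearch versions you can always hit the cluster directly with a max aggregation. A minimal sketch with a hypothetical index and field name:

    POST my-index/_search
    {
      "size": 0,
      "aggs": {
        "max_response_time": {
          "max": { "field": "response_time" }
        }
      }
    }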
