Using Kibana, is it possible to display one row of data which is a summary of other rows?
This is our requirement:
Given an entry in an index with the following structure:
string requestId
boolean raisedException
boolean requiredExternalLookup
We want to create a tabular output with the following structure:
requestId numberRaisedException numberNoException numberRequiredLookup
So, if there were three rows (or entries) in the index for the same request id, two of which raised an exception, the output might look like this:
requestId numberRaisedException numberNoException numberRequiredLookup
REQUEST_123 2 1 3
Presumably the correct Kibana visualization widget to represent this would be a Data Table. But how in Kibana would one create a row like the one above, which is a summary of several rows, somewhat akin to a SQL GROUP BY clause? Is it at all possible?
You can probably do this with scripted_fields, but the status of the scripted_fields feature in Kibana isn't clear. I think it was recently blocked in Kibana due to security issues ("Leaving this open is dangerous since you can do anything").
If you have access to your Elasticsearch cluster, then you might be able to create the field on your Elasticsearch index.
You can read about it here: http://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-script-fields.html
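Independent of scripted fields, the summary described in the question maps directly onto a terms aggregation with filter sub-aggregations, which is essentially what Kibana's Data Table builds under the hood. A rough sketch, assuming requestId is indexed as a keyword (use requestId.keyword if it is a text field with a keyword sub-field), the two flags are mapped as boolean, and your_index stands in for the real index name:

POST /your_index/_search
{
  "size": 0,
  "aggs": {
    "by_request": {
      "terms": { "field": "requestId" },
      "aggs": {
        "numberRaisedException": { "filter": { "term": { "raisedException": true } } },
        "numberNoException": { "filter": { "term": { "raisedException": false } } },
        "numberRequiredLookup": { "filter": { "term": { "requiredExternalLookup": true } } }
      }
    }
  }
}

Each filter sub-aggregation's doc_count then gives one of the three columns of the summary row per requestId.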
Related
Problem
I am trying to build a dashboard in elastic with a table to monitor job runs.
I want to have, per run, the minimum timestamp (i.e. the job start) and the number of processed messages. The minimum timestamp is my problem: I can't seem to get it.
What I have done
All my log lines have the following (relevant) fields: #timestamp, nb_messages, run_id. run_id is unique per run, and a run creates multiple log lines.
I create a dashboard, add a TSVB panel, and select Table.
I use run_id as the field to group by.
I can use max(nb_message) in my table without issue.
But if I use min(#timestamp), or any aggregation other than count, I just get a -.
I first tried with a Lens panel instead of a TSVB panel and had the same issue, but with the message: To use this function, select a different field.
I can confirm in the index that logging.timestamp has type date.
Question
Is there a way to use the timestamp as a metric?
I would use a "normal" data table visualization (navigate through the Aggregation based option in the Visualization menu if you're using the latest version of Kibana) instead of the TSVB. There, the default metric is count, representing the number of events of the index pattern in the selected time range. You can use the min metric on the #timestamp field and aggregate/group your data as you want.
The prerequisite is, of course, that the selected index pattern contains a #timestamp field.
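For reference, the raw Elasticsearch request behind such a table would look roughly like the sketch below, assuming run_id is a keyword field, the timestamp field is actually @timestamp, and your-logs-* is a placeholder for the real index pattern:

POST /your-logs-*/_search
{
  "size": 0,
  "aggs": {
    "per_run": {
      "terms": { "field": "run_id" },
      "aggs": {
        "job_start": { "min": { "field": "@timestamp" } },
        "messages": { "max": { "field": "nb_messages" } }
      }
    }
  }
}

If this works against the index but not in the panel, the panel is probably pointing at a different (non-date) field.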
I hope this helps.
I have articles stored in Elasticsearch and I've been wondering if there is a way I can query by date but have the result contain a specific number of articles from each publisher. More specifically, I have 5 different publishers and I want to get the 10 latest articles, 2 from each publisher. I'm storing the publisher's name as a keyword field in Elasticsearch.
The only idea I've come up with is to run a query for each publisher separately and limit the result to the first 2 (and then merge the results programmatically), but I think it would be more efficient if there were a way to do this in a single query.
Thanks
This sounds like a case for field collapsing.
You would collapse on the publisher field (as long as it is a keyword or a number) and then request inner_hits to get the actual articles.
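A sketch of such a request, assuming the keyword field is called publisher and articles carry a date field to sort on (both names are assumptions, adjust to the actual mapping):

POST /articles/_search
{
  "size": 5,
  "sort": [ { "date": "desc" } ],
  "collapse": {
    "field": "publisher",
    "inner_hits": {
      "name": "latest_articles",
      "size": 2,
      "sort": [ { "date": "desc" } ]
    }
  }
}

With 5 publishers, size 5 yields one collapsed hit per publisher, and the latest_articles inner hits hold the 2 most recent articles for each of them.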
I have an index named web with a field called csr that has a datatype of string -- the reason we made it so is that we have migrated from MySQL and are now trying to use Elasticsearch.
The csr field represents the data that was linked/joined with another table during the MySQL phase (it shows that a web record HAS MANY linked csr records).
Here's an example screenshot of the index using Kibana:
Now, I'm trying to do a range query filter to display all the documents that belong to a certain range of csr.csr_story_value. For example, I want to display documents that have csr.csr_story_value values ranging from 4 to 5. However, since there are multiple JSON elements in the csr field, I presume that the query matches against the highest value in the csr field. Here's an example screenshot:
MAIN QUESTION
Is there a way in Elasticsearch to add a match query to this range query so I can match my kgp_id and cli_id in order to pick a specific JSON element in this field?
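For illustration, combining a match with the range is straightforward in a bool query, but tying both conditions to the same csr element only works if csr is mapped as a nested field; with a plain object mapping the inner objects are flattened and the conditions can match across different elements. A sketch under that assumption, where csr.kgp_id and csr.cli_id are guessed paths and the id values are placeholders:

POST /web/_search
{
  "query": {
    "nested": {
      "path": "csr",
      "query": {
        "bool": {
          "must": [
            { "match": { "csr.kgp_id": "KGP_ID_HERE" } },
            { "match": { "csr.cli_id": "CLI_ID_HERE" } },
            { "range": { "csr.csr_story_value": { "gte": 4, "lte": 5 } } }
          ]
        }
      }
    }
  }
}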
I have some sample data in Elasticsearch, which looks like the following:
I am using the data table in the Visualize section to get the counts for each error type. For example, it should output:
Error: Update failed for online booking with id, count is 5.
not a count of 1 for each different id of the same error type.
What I have done is build a query to output the counts for each error type, which looks like this:
However, when I save the query as a saved search and then visualize it as a data table, it still has the same issue as above.
I was thinking of only saving the output of that query as a saved search, but one issue is that the output is too verbose and has a lot of information I don't really need.
Any suggestions, please!
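For illustration, the per-type counts in a data table usually come from a terms aggregation on a keyword field that holds just the error type, not the full message with the id in it. In the sketch below, error.type is a purely hypothetical field name standing in for whatever field carries the error category, and your_index for the real index:

POST /your_index/_search
{
  "size": 0,
  "aggs": {
    "errors_by_type": {
      "terms": { "field": "error.type", "size": 20 }
    }
  }
}

If the error type only exists embedded in a free-text message together with the id, it generally has to be extracted into its own keyword field at index time before the buckets will group the way described above.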
I was successfully able to get data from an individual index, but now I have to get data from 3 indexes, and those 3 indexes are of different types (Student, Employee, School). Can I get data from all three indexes with a single query?
In ES you can specify that in the URL:
POST /gb,us/user,tweet/_search
{
// Your query
}
In your case, if you have one type per index, it could be (note that index names in Elasticsearch must be lowercase):
POST /student,employee,school/_search
More info here
In Java, according to this, you should have something like this:
QueryBuilders.indicesQuery(queryBuilder, "product-a", "product-b");