Elasticsearch has_child query/filter in Kibana 4 - elasticsearch

I cannot seem to get the has_child query (or filter) to work in Kibana 4. My code works against Elasticsearch directly as a curl script, but not in Kibana 4, yet I understood this was a key feature of the upgrade. Can anybody shed any light?
The following curl script works against Elasticsearch directly, returning all of the parents that have a child object:
curl -XPOST localhost:port/indexname/_search?pretty -d '{
    "query" : {
        "has_child" : {
            "type" : "object",
            "query" : {
                "match_all" : {}
            }
        }
    }
}'
The above runs fine. To convert it to a JSON query to submit within Kibana, I've followed the general formatting rules: I've dropped the curl line and added the index name (and sometimes a blank filter [], but it doesn't seem to make much difference). No error is thrown, but the whole dataset is returned.
{
    "index" : "indexname",
    "query" : {
        "has_child" : {
            "type" : "object",
            "query" : {
                "match_all" : {}
            }
        }
    }
}
Am I missing something? Has anybody else got a has_child query to run in Kibana 4?
Many thanks in advance
Toby

Related

How to form index stats API?

ES Version : 7.10.2
I have a requirement to show index statistics, and I came across the Index Stats API, which fulfills it.
The issue is that I don't necessarily need all the fields for a particular metric.
Ex: curl -XGET "http://localhost:9200/order/_stats/docs"
It shows response as below (omitted for brevity)
"docs" : {
    "count" : 7,
    "deleted" : 0
}
But I only want the "count" field, not "deleted".
So, in the Index Stats API documentation, I came across this query param:
fields:
(Optional, string) Comma-separated list or wildcard expressions of fields to include in the statistics.
Used as the default list unless a specific field list is provided in the completion_fields or fielddata_fields parameters
As per the above, when I perform curl -XGET "http://localhost:9200/order/_stats/docs?fields=count"
It throws an exception
{
    "error" : {
        "root_cause" : [
            {
                "type" : "illegal_argument_exception",
                "reason" : "request [/order/_stats/docs] contains unrecognized parameter: [fields]"
            }
        ],
        "type" : "illegal_argument_exception",
        "reason" : "request [/order/_stats/docs] contains unrecognized parameter: [fields]"
    },
    "status" : 400
}
Am I understanding the usage of fields correctly?
Either way, how can I achieve this requirement?
Any help is much appreciated :)
You can use the filter_path parameter, like:
curl -XGET "http://localhost:9200/order/_stats?filter_path=_all.primaries.docs.count"
This returns only that one field:
{
    "_all" : {
        "primaries" : {
            "docs" : {
                "count" : 10
            }
        }
    }
}
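For illustration, here's a minimal Python sketch of what filter_path does to the response for a single dot-separated path (the real pruning happens server-side, and the real parameter also supports wildcards and comma-separated lists):

```python
import json

def filter_path(doc, path):
    """Mimic Elasticsearch's filter_path for one dot-separated path:
    keep only the nested keys named in `path`, dropping all siblings.
    (Client-side illustration only.)"""
    keys = path.split(".")
    node = doc
    for key in keys:
        node = node[key]            # walk down to the requested leaf
    for key in reversed(keys):
        node = {key: node}          # rebuild the pruned structure back up
    return node

# A trimmed-down _stats response, as in the answer above:
stats = {"_all": {"primaries": {"docs": {"count": 10, "deleted": 0}}}}
print(json.dumps(filter_path(stats, "_all.primaries.docs.count")))
# → {"_all": {"primaries": {"docs": {"count": 10}}}}
```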

Elastic search query fail: No mapping found for [#timestamp] in order to sort on

Here is the mapping. I have a property named #timestamp.
{
    "my_index" : {
        "mappings" : {
            "properties" : {
                "#timestamp" : {
                    "type" : "date_nanos"
                }
            }
        }
    }
}
But when I query like this:
{
    "sort" : {
        "#timestamp" : "desc"
    }
}
I got an error: No mapping found for [#timestamp] in order to sort on.
I found some solutions using unmapped_type, but the property is defined in my mapping. Could someone help explain this case? I just started using Elasticsearch. Thanks.
You need to query your specific index:
GET my_index/_search
And not:
GET /_search
Otherwise you'll hit every index in your cluster, and the odds are high that at least one of them doesn't have a #timestamp field.
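If you really do need to search across several indices, the sort clause also accepts an unmapped_type fallback (note the spelling; this appears to be the option the question calls "unmapping_type"). A minimal sketch of such a request body:

```python
import json

# Sort body with a fallback for indices that lack the field.
# `unmapped_type` tells Elasticsearch to treat the missing field as this
# type (sorting those documents as if the field had no value) instead of
# failing the whole request.
sort_body = {
    "sort": [
        {"#timestamp": {"order": "desc", "unmapped_type": "date_nanos"}}
    ]
}
print(json.dumps(sort_body, indent=2))
```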

Elasticsearch delete debug syslog messages older than x days and with a certain term

I'm trying to get rid of debug syslog messages after a certain amount of time.
The query runs without errors but isn't deleting any data:
curl -XPOST 'localhost:9200/logstash-syslog-vmware/_delete_by_query?pretty' -H 'Content-Type: application/json' -d '
{
    "query": {
        "bool" : {
            "must" : {
                "match" : {
                    "syslog_severity" : "debug"
                }
            },
            "filter" : {
                "range" : {
                    "#timestamp" : {
                        "gt" : "2017-10-13T09:00:00",
                        "lt" : "2017-10-13T11:30:00"
                    }
                }
            }
        }
    }
}'
Not really the answer, since the problem was solved as per the comments (by using a wildcard in the index name), but let me say something.
If you intend to purge the old debug-level documents every day, then I'd recommend using different indices:
one for the trace and debug levels, and another for the rest.
Then, when you need to purge old data, just drop the whole index, e.g. DELETE logstash-syslog-vmware-debug-2017-10-13.
It will be much more efficient.
If it was only a one-off operation, then feel free to ignore me :)
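With the per-index approach, the purge becomes simple date arithmetic. A sketch, assuming the hypothetical logstash-syslog-vmware-debug-YYYY-MM-DD naming scheme from above:

```python
from datetime import date, timedelta

def indices_to_drop(today, keep_days, prefix="logstash-syslog-vmware-debug-", lookback=30):
    """Names of daily debug indices older than `keep_days`, scanning back
    `lookback` days. Each one could then be removed with a single
    DELETE <index> call instead of a costly _delete_by_query."""
    return [
        prefix + (today - timedelta(days=d)).isoformat()
        for d in range(keep_days + 1, lookback + 1)
    ]

print(indices_to_drop(date(2017, 10, 16), keep_days=2, lookback=5))
# → ['logstash-syslog-vmware-debug-2017-10-13',
#    'logstash-syslog-vmware-debug-2017-10-12',
#    'logstash-syslog-vmware-debug-2017-10-11']
```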

In elasticsearch after adding a new field to a doc, cannot find with "exists" filter

I'm adding a field to a doc like this:
curl -XPOST 'http://localhost:9200/imap_email/imap_bruce/12/_update' -d '{
    "script" : "ctx._source.simperium = \"12345\""
}'
Looking at that doc, I can verify that the field "simperium" has been added. But the following query (and the many variations of it I've tried) simply returns everything in my index.
{
    "constant_score" : {
        "filter" : {
            "exists" : { "field" : "simperium" }
        }
    }
}
What do I need to do to get a strict list of all docs that do or don't have a specific field?
The majority of Elasticsearch examples exclude the outer "query" : {} object for brevity. It's very annoying when starting out, but eventually you learn to accept it.
You most likely wanted this:
{
    "query" : {
        "constant_score" : {
            "filter" : {
                "exists" : { "field" : "simperium" }
            }
        }
    }
}

Elastic search using aggregations instead of facets

I am trying to figure out how I would do the following query, but using the new aggregations instead of facets. The reason for the change is that I'd then like to take it further: instead of showing just 10 tags, show all tags with a count over 0.
{
    "query" : { "query_string" : {"query" : "T*"} },
    "facets" : {
        "tags" : { "terms" : {"field" : "tags"} }
    }
}
Any help would be greatly appreciated
Most facet types have an equivalent aggregation type. The equivalent of the terms facet type is the terms aggregation type.
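For example, the facet request above would become something like the following. The size value is an assumption: terms aggregations return only the top 10 buckets by default, so raising it is how you'd get closer to "all tags"; buckets with a count of 0 are never returned anyway.

```python
import json

# The terms facet from the question, rewritten as a terms aggregation.
# "size" is raised from the default of 10 to return more buckets.
body = {
    "query": {"query_string": {"query": "T*"}},
    "aggs": {
        "tags": {"terms": {"field": "tags", "size": 1000}}
    }
}
print(json.dumps(body, indent=2))
```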
