I have a query that checks to see if some document is more than a year old with something like:
"range": {
"some_date": {
"gt": "now-1y"
}
}
But I need to also figure out if that document WILL BE more than a year old on some other date.
Is it possible to dynamically change the now value or something similar? Or do I need to replace now with something like doc['passed_in_date']-1y?
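To make it concrete, this is roughly what I am picturing, with a made-up date 2021-06-01 standing in for the passed-in date (I am not sure whether anchoring the date math like this is even valid):

"range": {
  "some_date": {
    "gt": "2021-06-01||-1y"
  }
}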
I'm very new to Elasticsearch. Thanks for your help.
Related
I have two fields, "expiry_date" and "current_date". "current_date" is today's date and it will keep changing.
I want to add a Kibana filter to search and display the documents where "expiry_date" is greater than or equal to "current_date".
I am unable to use the "now" keyword as it only works with date fields, and "expiry_date" is in string format.
I am not sure, but I am trying to add a filter like:
{
  "range": {
    "expiry_date": {
      "gte": "doc[current_date].value"
    }
  }
}
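The closest approach I have come across is a script comparison between the two fields; below is a sketch of what I mean (assuming both fields are keyword strings in the same yyyy-MM-dd format, which I have not verified), but I don't know if this is the right direction either:

{
  "query": {
    "bool": {
      "filter": {
        "script": {
          "script": {
            "lang": "painless",
            "source": "doc['expiry_date.keyword'].value.compareTo(doc['current_date.keyword'].value) >= 0"
          }
        }
      }
    }
  }
}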
Could anybody please guide me here?
I am trying to filter Kibana for a field that contains the string "pH". The field is called extra.monitor_value_name. Examples of potential values are Temperature_ABC01, DO_ABC01, or pH_ABC01.
Kibana's Elasticsearch Query DSL does not seem to have a "contains string" operator, so I need to make a custom query.
I am new to Query DSL, can you help me create the query?
Also, is it proper to call it Query DSL? I'm not even sure of the proper wording.
Okay! Circling back with an answer to my own question.
My initial problem stemmed from not knowing about field_name vs field_name.keyword. Read here for info on keyword: What's the difference between the 'field' and 'field.keyword' fields in Kibana?
Solution 1
Here's the query I ended up using. I used a regexp query. I found this article useful in figuring out syntax for the regexp:
{
  "query": {
    "regexp": {
      "extra.monitor_value_name.keyword": "pH.*"
    }
  }
}
Solution 2
Another way I could have filtered, without Query DSL, was typing this into the search field: extra.monitor_value_name.keyword:pH*.
One interesting thing to note is that the .keyword suffix doesn't seem to be necessary with this method. I am not sure why.
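For reference, my understanding (which I have not verified against Kibana's internals) is that the search-bar Lucene syntax gets translated into roughly this query_string query:

{
  "query": {
    "query_string": {
      "query": "extra.monitor_value_name.keyword:pH*"
    }
  }
}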
Try this as a filter using Elasticsearch Query DSL:
{
  "query": {
    "wildcard": {
      "extra.monitor_value_name": {
        "value": "pH*"
      }
    }
  }
}
I am able to sort data using _score like this:
{
  "version": true,
  "_source": false,
  "sort": [
    {
      "_score": {
        "order": "desc"
      }
    }
  ],
  "query": {
    "match_all": {}
  }
}
Please let me know how I can do the same with _version. By default, fielddata is not supported on the _version field, so maybe I am missing something.
Is there any specific setting to sort or query by _version?
Please help!
You can't do this, and usually you don't have to.
See this thread:
https://discuss.elastic.co/t/filter-by--version-and-show--version-in-elasticsearch-query/22024/2
While using the _version might seem to work in certain cases, I would
recommend to never use it for anything else than optimistic locking of
updates. In particular, versions do not carry any meaning: they might look
like the number of times a document has been modified but it is not always
the case (for instance if you create a new document which has the same ID
as a document that you just deleted, the version number of the new document
will not be 1), and more importantly it is an implementation detail, this
behaviour might change in the future.
The _version field is not indexed, so you can't use it in queries or sorts.
You can create your own custom version field and handle it manually.
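For example, with a hypothetical my_version integer field that your application increments on every update, the sort from the question works as-is:

{
  "version": true,
  "_source": false,
  "sort": [
    {
      "my_version": {
        "order": "desc"
      }
    }
  ],
  "query": {
    "match_all": {}
  }
}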
My documents contain an integer array field, storing the id of tags describing them. Given a specific tag id, I want to extract a list of top tags that occur most frequently together with the provided one.
I can solve this problem by combining a terms aggregation over the tag id field with a term filter over the same field, but the list I get back obviously always starts with the tag id I provide: all documents matching my filter have that tag, so it is always the first in the list.
I thought of using the exclude option to avoid creating the problematic bucket, but as I'm dealing with an integer field, that seems not to be possible: this query
{
  "size": 0,
  "query": {
    "term": {
      "tag_ids": "00001"
    }
  },
  "aggs": {
    "tags": {
      "terms": {
        "size": 3,
        "field": "tag_ids",
        "exclude": "00001"
      }
    }
  }
}
returns an error saying that Aggregation [tags] cannot support the include/exclude settings as it can only be applied to string values.
Is it possible to avoid getting back this bucket?
This is, as of Elasticsearch 1.4, a shortcoming of ES itself.
After the community proposed this change, the functionality has been added and will be included in Elasticsearch 1.5.0.
It's supposed to be fixed as of version 1.5.0.
Look at this: https://github.com/elasticsearch/elasticsearch/pull/7727
While it is en route to being fixed: my workaround is to have the aggregation use a script instead of accessing the field directly, and let that script use the value as a string.
It works well and without measurable performance loss.
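A sketch of what that can look like on 1.x (dynamic scripting enabled; using a value script over the field from the question). Note the exclude pattern now has to match the string form the script produces, so "1" rather than "00001" if the field holds plain integers:

"aggs": {
  "tags": {
    "terms": {
      "size": 3,
      "field": "tag_ids",
      "script": "_value.toString()",
      "exclude": "1"
    }
  }
}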
I am looking for a way to create a facet such that I can essentially return two values for one key.
For instance, I am attempting to retrieve both the amount and schedule properties of an object. I attempted to use a computed-value script, but the calculations that have to be done using the two fields are date based and require an external library to perform them.
Basically, something along the lines of:
"theFacet": {
"terms_stats": {
"key_field": "someKeyProbablyADate",
"value_field": "amount",
"value_field": "simpleSchedule"
}
}
Workarounds are also appreciated. Perhaps some way to return a new dynamic object with both fields?
Sounds like you want to pre-process your data into a single field before you index it, then facet on that.
Something along the lines of a single string containing key#amount#schedule.
Then when you get the facet results back, you can split it up again and run whatever logic you want.
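For illustration (the field name and delimiter are made up), each document would carry one pre-computed string such as "2014-01-01#250.00#weekly", and the facet simply runs on that field:

"facets": {
  "theFacet": {
    "terms": {
      "field": "key_amount_schedule"
    }
  }
}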
Try combining different fields with a script element. For example:
"facets": {
"facet-name": {
"terms": {
"field": "some-field",
"script": "_source['another-field'] + '/' + term
}
}
}