I'm attempting to write a count query that returns the number of unsuccessful attempts to log into my system within the last 10 minutes. I created this query:
{
  "term": {
    "success": false
  },
  "range": {
    "_timestamp": {
      "gt": "now-10m"
    }
  }
}
However, this returns all of the unsuccessful attempts for any time, disregarding the range filter in my query. Am I structuring this query correctly? The query works when I do a search with terms and ranges.
In other words, the output of the above query and curl -XGET localhost:9200/application/_count is the same (I have only tested unsuccessful attempts).
Try using the search_type parameter instead of the count API. This is actually preferred:
curl -XGET 'localhost:9200/application/_search?search_type=count' -d '{
  "query": ....
}'
Documentation:
http://www.elasticsearch.org/guide/reference/api/search/search-type/
The range is a filter, so I think you have to create a filtered query to take it correctly into account:
{
  "filtered": {
    "query": {
      "term": {
        "success": false
      }
    },
    "filter": {
      "range": {
        "_timestamp": {
          "gt": "now-10m"
        }
      }
    }
  }
}
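For completeness, the corrected query could be sent to the count endpoint from the question like this (a sketch assuming an Elasticsearch 1.x-era cluster, where _count accepts a standard query body):
curl -XGET 'localhost:9200/application/_count' -d '{
  "query": {
    "filtered": {
      "query": { "term": { "success": false } },
      "filter": { "range": { "_timestamp": { "gt": "now-10m" } } }
    }
  }
}'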
My mapping has two properties:
"news_from_date" : {
"type" : "string"
},
"news_to_date" : {
"type" : "string"
},
Search results have the properties news_from_date and news_to_date.
curl -X GET 'http://172.2.0.5:9200/test_idx1/_search?pretty=true' 2>&1
Result:
{
  "news_from_date" : "2022-05-30 00:00:00",
  "news_to_date" : "2022-06-23 00:00:00"
}
The question is: how can I boost all results whose "news_from_date"-"news_to_date" interval contains the current date, so that they are shown as the highest-ranking results?
TL;DR
First off, if you are going to work with dates, you should probably use one of the date types provided by Elasticsearch.
There are many ways to approach your problem: using Painless, using a scoring function, or even more classical query types.
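For example, a new index could map both fields as dates matching the "2022-05-30 00:00:00" format shown in the question (a sketch assuming Elasticsearch 7+; an existing string field cannot be changed in place and would require reindexing):
PUT test_idx1
{
  "mappings": {
    "properties": {
      "news_from_date": { "type": "date", "format": "yyyy-MM-dd HH:mm:ss" },
      "news_to_date":   { "type": "date", "format": "yyyy-MM-dd HH:mm:ss" }
    }
  }
}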
Using Should
With the Boolean query type, you have multiple clauses:
Must
Filter
Must_not
Should
Should clauses are optional and are factored into the final score.
So you go with:
GET _search
{
  "query": {
    "bool": {
      "should": [
        {
          "range": {
            "news_from_date": {
              "lte": "now"
            }
          }
        },
        {
          "range": {
            "news_to_date": {
              "gte": "now"
            }
          }
        }
      ]
    }
  }
}
Be aware that:
You can use the minimum_should_match parameter to specify the number or percentage of should clauses returned documents must match.
If the bool query includes at least one should clause and no must or filter clauses, the default value is 1. Otherwise, the default value is 0.
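For instance, if you wanted both window conditions to be mandatory rather than just score-boosting, a sketch could set the parameter explicitly (index name taken from the question):
GET test_idx1/_search
{
  "query": {
    "bool": {
      "should": [
        { "range": { "news_from_date": { "lte": "now" } } },
        { "range": { "news_to_date": { "gte": "now" } } }
      ],
      "minimum_should_match": 2
    }
  }
}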
Using a script
As shown in the documentation, you can create a custom function to score your documents according to your own business rules.
The script uses Painless (a stripped-down, Java-like scripting language). A sketch adapted to the date interval from the question follows the documentation example below.
GET /_search
{
  "query": {
    "function_score": {
      "query": {
        "match": { "message": "elasticsearch" }
      },
      "script_score": {
        "script": {
          "source": "Math.log(2 + doc['my-int'].value)"
        }
      }
    }
  }
}
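Adapted to the question, purely as a sketch: it assumes both fields are mapped as date (as suggested above) and uses Elasticsearch 7+ Painless syntax. Because scoring scripts cannot read the wall clock, the client must pass the current time in epoch milliseconds as a parameter; the value below is only a placeholder.
GET test_idx1/_search
{
  "query": {
    "function_score": {
      "query": { "match_all": {} },
      "script_score": {
        "script": {
          "source": "long from = doc['news_from_date'].value.toInstant().toEpochMilli(); long to = doc['news_to_date'].value.toInstant().toEpochMilli(); return (from <= params.now && params.now <= to) ? 2.0 : 1.0;",
          "params": { "now": 1656000000000 }
        }
      }
    }
  }
}
With the default boost_mode of multiply, documents whose interval contains params.now get double the score of the others.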
I have an Elasticsearch range query like this:
curl 'localhost:9200/myindex/_search?pretty' -d '
{
  "query": {
    "range": {
      "total": {
        "gte": 174,
        "lte": 180
      }
    }
  }
}'
I need to use this query in Grafana for my graph. I am trying to add it as part of the Lucene query, but I am not able to get the desired result. Can anyone help?
If "total" is a field, you can do something like this in Lucene:
total:[174 TO 180]
reference: https://lucene.apache.org/core/2_9_4/queryparsersyntax.html
First off, I think you may be missing the document type from the request URL; it should look like this:
http://localhost:9200/[INDEX]/[TYPE]/_search?pretty
Second, I've looked at previous answers providing detailed examples of range filtering, and the query should work just fine like this:
{
  "query": {
    "filtered": {
      "query": {
        "match_all": {}
      },
      "filter": {
        "range": {
          "total": {
            "gte": 174,
            "lte": 180
          }
        }
      }
    }
  }
}
I have parsed events with a "level" field (DEBUG, INFO, ERROR, FATAL). How can I retrieve the count of events from the last minute with level = ERROR?
[screenshot from Kibana]
I'm trying this:
curl -XGET 'mysite.com:9200/myindex/_count?pretty=true' -d '
{
  "query": {
    "term": {
      "level": "error"
    }
  },
  "filter": {
    "range": {
      "_timestamp": {
        "gt": "now-1m"
      }
    }
  }
}'
You must have a timestamp on your events. If so, write a count aggregation query on the events, with a query filter on the level and a range on the timestamp (Elasticsearch does support ranges on time/date fields with the 'now' parameter).
The confusing part is that you didn't mention what kind of count you want: the total event count, or a count by type or some other parameter (in that case, use a terms aggregation on that parameter; a sketch is shown after the example below).
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-terms-aggregation.html
https://www.elastic.co/guide/en/elasticsearch/reference/1.4/mapping-date-format.html#date-math
{
  "query": {
    "filtered": {
      "filter": {
        "bool": {
          "must": [
            {
              "term": {
                "level": "trace"
              }
            },
            {
              "range": {
                "timestamp": {
                  "gt": "now-1m"
                }
              }
            }
          ]
        }
      }
    }
  }
}
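As a concrete sketch of the count-by-level variant (the aggregation name count_by_level is illustrative, and it assumes the level field is not analyzed; with an analyzed string field the buckets will contain lowercased terms):
GET myindex/_search
{
  "size": 0,
  "query": {
    "filtered": {
      "filter": {
        "range": {
          "timestamp": {
            "gt": "now-1m"
          }
        }
      }
    }
  },
  "aggs": {
    "count_by_level": {
      "terms": {
        "field": "level"
      }
    }
  }
}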
I have to search two fields in a database using Elasticsearch, where the total hits should be equal to the sum of the individual field searches. I did it on port 9200 like this and it's working. How do I write a must/match query for this?
http://localhost:9200/indexname/typename/_search?q=Both:Yes++Type:Comm
Where Both is one field and Type is another.
Thank you
You need to use an "AND" query.
GET hilden1/type1/_search
{
  "query": {
    "filtered": {
      "filter": {
        "and": {
          "filters": [
            {
              "term": {
                "both": "yes"
              }
            },
            {
              "term": {
                "type": "comm"
              }
            }
          ]
        }
      }
    }
  }
}
I think this is what you need:
Elasticsearch URI based query with AND operator
_search?q=%2Bboth:yes%20%2Btype:comm
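For reference, the URL-decoded form of that query string is:
_search?q=+both:yes +type:comm
The leading + marks each clause as required, which is the Lucene query syntax equivalent of a must clause.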
Right now I have a query like this:
{
  "query": {
    "bool": {
      "must": [
        {
          "match": {
            "uuid": "xxxxxxx-xxxx-xxxx-xxxxx-xxxxxxxxxxxxx"
          }
        },
        {
          "range": {
            "date": {
              "from": "now-12h",
              "to": "now"
            }
          }
        }
      ]
    }
  },
  "aggs": {
    "query": {
      "terms": {
        "field": "query",
        "size": 3
      }
    }
  }
}
The aggregation works perfectly well, but I can't seem to find a way to control the hit data that is returned. I can use the size parameter at the top of the DSL, but the hits are not returned in the same order as the buckets, so the bucket results do not line up with the hit results. Is there any way to correct this, or do I have to issue two separate queries?
To expand on Filipe's answer, it seems like the top_hits aggregation is what you are looking for, e.g.
{
  "query": {
    ... snip ...
  },
  "aggs": {
    "query": {
      "terms": {
        "field": "query",
        "size": 3
      },
      "aggs": {
        "top": {
          "top_hits": {
            "size": 42
          }
        }
      }
    }
  }
}
Your query uses exact matches (match and range) and binary logic (must, bool) and thus should probably be converted to use filters instead:
"filtered": {
"filter": {
"bool": {
"must": [
{
"term": {
"uuid": "xxxxxxx-xxxx-xxxx-xxxxx-xxxxxxxxxxxxx"
}
},
{
"range": {
"date": {
"from": "now-12h",
"to": "now"
}
}
}
]
}
}
As for the aggregations,
The hits that are returned do not represent all the buckets that were returned. So if I have buckets for the terms 'a', 'b', and 'c', I want to have hits that represent those buckets as well.
Perhaps you are looking to control the scope of the buckets? You can make an aggregation bucket global so that it will not be influenced by the query or filter.
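A minimal sketch of that global scope (the bucket names all_documents and by_query are illustrative):
{
  "query": {
    "match": { "uuid": "xxxxxxx-xxxx-xxxx-xxxxx-xxxxxxxxxxxxx" }
  },
  "aggs": {
    "all_documents": {
      "global": {},
      "aggs": {
        "by_query": {
          "terms": { "field": "query", "size": 3 }
        }
      }
    }
  }
}
The terms buckets under the global aggregation are computed over all documents in the index, regardless of the uuid query.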
Keep in mind that Elasticsearch will not "group" hits in any way -- it is always a flat list ordered according to score and additional sorting options.
Aggregations can be organized in a nested structure and return computed or extracted values, in a specific order. In the case of the terms aggregation, the default order is descending document count (highest number of hits first). The hits section of the response is never influenced by your choice of aggregations. Similarly, you cannot find hits in the aggregation sections.
If your goal is to group documents by a certain field, yes, you will need to run multiple queries in the current Elasticsearch release.
I'm not 100% sure, but I think there's no way to do that in the current version of Elasticsearch (1.2.x). The good news is that there will be when version 1.3.x gets released:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-metrics-top-hits-aggregation.html