Elasticsearch: how to get docs with 10 or more fields in them?

I want to get all docs that have 10 or more fields in them. I'm guessing something like this:
{
  "query": {
    "range": {
      "fields": {
        "gte": 10
      }
    }
  }
}

What you can do is run a script query like this:
{
  "query": {
    "script": {
      "script": {
        "source": "params._source.size() >= 10"
      }
    }
  }
}
However, be advised that depending on the number of documents you have and the hardware backing your cluster, this can negatively impact performance.
A better idea would be to add an integer field that stores the number of fields each document contains, so you can simply run a range query on it, as in your question.
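For instance, with a hypothetical field_count integer field populated by your indexing code, the whole problem reduces to a cheap range query:
{
  "query": {
    "range": {
      "field_count": {
        "gte": 10
      }
    }
  }
}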

As per the documentation of the _source field, you can either do it like that (scripting over _source) or you simply can't get results based on field count.
https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-source-field.html

Related

Find same text within time range

I'm storing blog articles in Elasticsearch in this format:
{
  blog_id: keyword,
  blog_article_id: keyword,
  timestamp: date,
  article_text: text
}
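Expressed as an explicit mapping, that format would be roughly the following sketch (index name blogs is an assumption; the keyword subfield on article_text is what the scripted approach in the answer below relies on, and very long articles may exceed keyword length limits):
PUT /blogs
{
  "mappings": {
    "properties": {
      "blog_id": { "type": "keyword" },
      "blog_article_id": { "type": "keyword" },
      "timestamp": { "type": "date" },
      "article_text": {
        "type": "text",
        "fields": {
          "keyword": { "type": "keyword" }
        }
      }
    }
  }
}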
Suppose I want to find all blogs with articles that mention X at least twice within the last 30 days. Is there a simple query to find all blog_ids that have articles with the same word at least n times within a date range?
Is this the right way to model the problem, or should I use nested objects for an easier query?
Can this be made into a report in Kibana?
The simplest query that comes to mind is:
{
  "_source": "blog_id",
  "query": {
    "bool": {
      "must": [
        {
          "match": {
            "article_text": "xyz"
          }
        },
        {
          "range": {
            "timestamp": {
              "gte": "now-30d"
            }
          }
        }
      ]
    }
  }
}
Nested objects are most probably not going to simplify anything -- on the contrary.
Can it be made into a Kibana report?
Sure. Just apply the filters either in KQL (Kibana Query Language) or using the dropdowns, and choose a metric you want to track (total blog_id count, time-series frequency, etc.).
EDIT re # of occurrences:
I know of two ways:
There's the term vectors API, which gives you word-frequency information, but it's a standalone API and cannot be used at query time (a sample call is shown after the script example below).
Then there's the scripted approach, whereby you look at the whole article text, treat it as a case-sensitive keyword, and count the number of substring occurrences, thereby eliminating the articles with insufficient word frequency. Note that you don't have to use function_score as I did -- a simple script query will do. It may take a non-trivial amount of time to resolve if you have a non-trivial number of docs.
In your case it could look like this:
{
  "query": {
    "bool": {
      "must": [
        {
          "script": {
            "script": {
              "source": """
                def word = 'xyz';
                // read the whole article as one case-sensitive keyword
                def docval = doc['article_text.keyword'].value;
                // strip every occurrence of the word and derive the count
                // from the difference in length
                String temp = docval.replace(word, "");
                def no_of_occurrences = (docval.length() - temp.length()) / word.length();
                return no_of_occurrences >= 2;
              """
            }
          }
        }
      ]
    }
  }
}
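For reference, the term vectors call mentioned above looks like this (index name blogs and document ID 1 are assumptions); it returns per-document term frequencies but, again, only for one document at a time:
GET /blogs/_termvectors/1?fields=article_text&term_statistics=true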

ES: How do quasi-join queries using global aggregation compare to parent-child / nested queries?

At my work, I came across the following pattern for doing quasi-joins in Elasticsearch. I wonder whether this is a good idea, performance-wise.
The pattern:
Connects docs in one index in a one-to-many relationship.
Somewhat like ES parent-child, but implemented without it.
Child docs need to be indexed with a field called e.g. "my_parent_id", whose value is the parent's ID.
Can be used when querying for a parent, knowing its ID in advance, to also get the children in the same query.
The query with the quasi-join (assume 123 is the parent ID):
GET /my-index/_search
{
  "query": {
    "bool": {
      "must": [
        {
          "term": {
            "id": {
              "value": 123
            }
          }
        }
      ]
    }
  },
  "aggs": {
    "my-global-agg": {
      "global": {},
      "aggs": {
        "my-filtering-all-but-children": {
          "filter": {
            "term": {
              "my_parent_id": 123
            }
          },
          "aggs": {
            "my-returning-children": {
              "top_hits": {
                "_source": {
                  "includes": [
                    "my_child_field1_to_return",
                    "my_child_field2_to_return"
                  ]
                },
                "size": 1000
              }
            }
          }
        }
      }
    }
  }
}
This query returns:
the parent (as search query result), and
its children (as the aggregation result).
Performance-wise, is the above:
definitively a good idea,
definitively a bad idea,
hard to tell / it depends?
It depends ;-) The idea is good; however, by default the maximum number of hits you can return in a top_hits aggregation is 100, and if you try 1000 you'll get an error like this:
Top hits result window is too large, the top hits aggregator [hits]'s from + size must be less than or equal to: [100] but was [1000]. This limit can be set by changing the [index.max_inner_result_window] index level setting.
As the error states, you can increase this limit by changing the index.max_inner_result_window index setting. But if there's a default, there's usually a good reason for it, and I would take that as a hint that it might not be a great idea to increase it too much.
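If you do decide to raise it, it is a dynamic index setting that can be updated in place (index name is a placeholder):
PUT /my-index/_settings
{
  "index.max_inner_result_window": 1000
}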
So, if your parent documents have fewer than 100 children, why not; otherwise, I'd seriously consider another approach.

Elasticsearch: remove all documents with over 1000 fields

I'm getting a total fields limit error while updating a dev Elasticsearch DB from a LIVE one. I believe it is being caused by the live DB sending documents with over 1000 fields in them, while the dev DB's index.mapping.total_fields.limit is set to 1000.
I know I can up the field limit, but for now I would just like to remove all documents with 1000 or more fields.
I'm guessing I'd make a Postman call to the _delete_by_query API with something like:
{
  "query": {
    "range": {
      "fields": {
        "gt": 1000
      }
    }
  }
}
Does anyone know of a simple query that can accomplish this?
You can run a query like this against the LIVE cluster:
POST logger/_delete_by_query
{
  "query": {
    "script": {
      "script": {
        "source": "params._source.size() > 1000"
      }
    }
  }
}
Provided you don't have nested fields/objects, this will delete all documents having more than 1000 top-level fields.
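If you'd rather raise the limit on the dev index instead (as mentioned in the question), that is a dynamic index setting as well (index name and new limit are placeholders):
PUT /dev-index/_settings
{
  "index.mapping.total_fields.limit": 2000
}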

Elasticsearch wildcard query crashes cluster

I run the query below on a large Elasticsearch cluster, and the cluster becomes unresponsive:
{
  "size": 10000,
  "query": {
    "bool": {
      "must": [
        {
          "regexp": {
            "message": {
              "value": ".*exception.*"
            }
          }
        },
        {
          "bool": {
            "should": [
              {
                "term": {
                  "beat.hostname": "ip-xxx-xx-xx-xx"
                }
              }
            ]
          }
        },
        {
          "range": {
            "@timestamp": {
              "gte": 1518459600000,
              "lt": 1518459660000,
              "format": "epoch_millis"
            }
          }
        }
      ]
    }
  }
}
When I remove the wildcarded .*exception.* and replace it with a non-wildcarded string like xyz, it returns fast. Though the query uses a wildcarded expression, it also looks at a small time range and a specific host, so I would have thought this is a very simple query. Any reason why the Elasticsearch cluster can't handle it? The cluster has 10 nodes and 20 TB of data.
See the documentation for the Regexp Query. It clearly states the following:
Note: The performance of a regexp query heavily depends on the regular expression chosen. Matching everything like .* is very slow.
What would be ideal is to change the text analysis on the message field with a WordDelimiterTokenFilter (word_delimiter in Elasticsearch) and set split_on_case_change to true. Then something like NullPointerException will get indexed as three separate tokens [Null, Pointer, Exception]. This lets you search on exception without using a regex. The caveat is that you need to reindex all your documents.
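A minimal sketch of what such a mapping could look like (index and analyzer names are illustrative; the word_delimiter docs recommend pairing it with a whitespace or keyword tokenizer rather than the standard one):
PUT /logs
{
  "settings": {
    "analysis": {
      "filter": {
        "case_split": {
          "type": "word_delimiter",
          "split_on_case_change": true
        }
      },
      "analyzer": {
        "message_analyzer": {
          "type": "custom",
          "tokenizer": "whitespace",
          "filter": [ "case_split", "lowercase" ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "message": {
        "type": "text",
        "analyzer": "message_analyzer"
      }
    }
  }
}
After reindexing into it, a plain match query on exception will hit NullPointerException and friends without any regex.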
Another quick thing to try is to keep your filter conditions on the hostname and timestamp in a filter context, which will pre-filter documents before running your regexp query. This may be a reasonable short-term fix until you change the text analysis.
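Concretely, restructuring the original query so that only the regexp stays in query context would look like this:
{
  "size": 10000,
  "query": {
    "bool": {
      "must": [
        {
          "regexp": {
            "message": {
              "value": ".*exception.*"
            }
          }
        }
      ],
      "filter": [
        {
          "term": {
            "beat.hostname": "ip-xxx-xx-xx-xx"
          }
        },
        {
          "range": {
            "@timestamp": {
              "gte": 1518459600000,
              "lt": 1518459660000,
              "format": "epoch_millis"
            }
          }
        }
      ]
    }
  }
}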

How to search for exact date match in Elasticsearch

I have a couple of items in my ES database with fields containing, for example, 2020-02-26T05:24:55.757Z. Is it possible (with the URI search, _search?q=...) to search for exact dates? In this case, I would like to find items from 2020-02-26. Is that possible?
Yes, it is possible. You can refer to the query string documentation for more info.
https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-source-field.html
curl 'localhost:9200/your_index_name/_search?q=your_date_field:%5B2020-02-26%20TO%20*%5D'
You need to URL-encode the query; decoded, the query part reads q=your_date_field:[2020-02-26 TO *] (square brackets make the lower bound inclusive, matching the gte below).
The above query in the REST API would look like:
{
  "query": {
    "range": {
      "your_date_field": {
        "gte": "2020-02-26"
      }
    }
  }
}
For exact dates, the following would work:
curl 'localhost:9200/your_index_name/_search?q=your_date_field:2020-02-26'
Although this question is old, I came across it, so maybe others will do so too.
If you only want to work in UTC, you can use a match query, like:
{
  "query": {
    "match": {
      "your_date_field": {
        "query": "2020-02-26"
      }
    }
  }
}
If you need to consider things matching on a particular date in a different timezone, you have to use a range query, like:
{
  "query": {
    "range": {
      "your_date_field": {
        "gte": "2020-02-26",
        "lte": "2020-02-26",
        "time_zone": "-08:00"
      }
    }
  }
}
