Does a non-indexed field update trigger reindexing in Elasticsearch 8? - elasticsearch

My index mapping is the following:
{
  "mappings": {
    "dynamic": false,
    "properties": {
      "query_str": {"type": "text", "index": false},
      "search_results": {
        "type": "object",
        "enabled": false
      },
      "query_embedding": {
        "type": "dense_vector",
        "dims": 768
      }
    }
  }
}
The search_results field is disabled. The actual search is performed only via query_embedding; the other fields are just non-searchable data.
If I update the search_results field in an existing document, will it trigger reindexing?
The docs say that "The enabled setting, which can be applied only to the top-level mapping definition and to object fields, causes Elasticsearch to skip parsing of the contents of the field entirely. The JSON can still be retrieved from the _source field, but it is not searchable or stored in any other way". So it seems logical not to re-index docs if the changes took place only in the non-indexed part, but I'm not sure.

Elasticsearch documents (Lucene segments) are immutable, so every change you make to a document will delete the document and create a new one. This is Lucene's behavior:
Lucene's index is composed of segments, each of which contains a
subset of all the documents in the index, and is a complete searchable
index in itself, over that subset. As documents are written to the
index, new segments are created and flushed to directory storage.
Segments are immutable; updates and deletions may only create new
segments and do not modify existing ones. Over time, the writer merges
groups of smaller segments into single larger ones in order to
maintain an index that is efficient to search, and to reclaim dead
space left behind by deleted (and updated) documents.
When you set enabled: false you are just avoiding having the field content in the searchable structures, but the data still lives in Lucene.
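For illustration, a minimal sketch (the index name queries and document id 1 are hypothetical) of a partial update that touches only the disabled field:
POST queries/_update/1
{
  "doc": {
    "search_results": { "hits": [] }
  }
}
Internally the update API fetches the current _source, merges in the change, and indexes the whole document again as a new one, so the indexed fields (query_embedding here) are processed anew even though they did not change.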
You can see a similar answer here:
Partial update on field that is not indexed

Related

Reindexing more than 10k documents in Elasticsearch

Let's say I have an index, A. It contains 26k documents. Now I want to change a field, status, to the Keyword type. As I can't change the type of A's existing status field, I will create a new index, B, with my desired type.
I followed reindex API:
POST _reindex
{
  "source": {
    "index": "A",
    "size": 10000
  },
  "dest": {
    "index": "B",
    "version_type": "external"
  }
}
But the problem is that this way I can migrate only 10k docs. How do I copy the rest?
How can I copy all the docs without losing any?
Delete "size": 10000 and the problem will be solved.
By the way, the size field in the Reindex API's source sets the batch size Elasticsearch uses to fetch and reindex docs on each round trip; in recent versions the default batch size is 1000. (You thought it meant how many documents you want to reindex in total.)
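For reference, a sketch of the corrected request; the wait_for_completion=false parameter is optional and runs the copy as a background task, which helps with larger indices:
POST _reindex?wait_for_completion=false
{
  "source": {
    "index": "A"
  },
  "dest": {
    "index": "B",
    "version_type": "external"
  }
}
All 26k documents are then copied over in scroll batches of the default size.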

go elastic sum text field

I want to sum up the field price, but got the error "Fielddata is disabled on text fields by default".
This is my code:
// Sum aggregation over the price field.
sumAgg := elastic.NewSumAggregation().Field("price")
// Restrict to documents with a positive price.
q := query.Must(elastic.NewRangeQuery("price").Gt(0))
// Size(0): skip the hits, return only the aggregation result.
res, err := p.config.ElasticClient.Search().Index(idx).Query(q).Aggregation("sum", sumAgg).Size(0).Do(ctx)
This is the mapping:
"mappings": {
"properties": {
"price": {
"type": "scaled_float",
"scaling_factor": 100000
},
}
}
Can anybody help?
By default, fielddata is disabled on text fields; a detailed reason is mentioned here. It's very costly on text fields and is mainly required for aggregations (your use case); read more here.
From the official doc:
Instead, text fields use a query-time in-memory data structure called
fielddata. This data structure is built on demand the first time that
a field is used for aggregations, sorting, or in a script. It is built
by reading the entire inverted index for each segment from disk,
inverting the term ↔︎ document relationship, and storing the result in
memory, in the JVM heap.
Make sure the field you are using in your aggregation has this enabled. For numeric fields, doc_values are used for aggregation, and they are enabled by default.
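Since the mapping shown uses scaled_float, a numeric type that has doc_values enabled by default, the error suggests that the index actually being queried maps price as text. A quick way to check is the field mapping API (idx stands in for your index name, as in the Go snippet):
GET idx/_mapping/field/price
If this returns "type": "text", the index was created with a different mapping than the one shown, and you would need to reindex into an index where price is numeric before the sum aggregation can work.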

Sorting Results efficiently by non mapped/searchable Object properties

In my current index, called items, I have previously sorted the results of my queries by some property values of a specific object property; let's call this property just oprop. After I realized that with each new key in oprop the total number of used fields on the items index increased, I had to change the mapping on the index. So I set oprop's mapping to dynamic: false, so the keys of oprop are no longer searchable (not indexed).
Now in some of my queries I need to sort the results on the items index by the values of oprop keys. I don't know how Elasticsearch can still give me the possibility to sort on these key values.
Do I need to use scripts for sorting? Do I have access to non-indexed data when using scripts?
Somehow I don't think that this is a good approach, and I think that in the long term I will run into performance issues.
You could use scripts for sorting, since the data is still stored in the _source field, but that should probably be a last resort. If you know which fields need to be sortable, you could just add those to the mapping (as keyword, since text fields are not sortable without fielddata) and keep oprop non-dynamic otherwise:
"properties": {
"oprop": {
"dynamic": false,
"properties": {
"sortable_key_1": {
"type": "text"
},
"sortable_key_2": {
"type": "text"
}
}
}
}
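With the keys mapped as keyword, an ordinary field sort works; a minimal sketch using the hypothetical key names from the mapping above:
GET items/_search
{
  "sort": [
    { "oprop.sortable_key_1": { "order": "asc" } }
  ]
}
Note that documents indexed before the mapping change won't have the new sub-fields indexed until they are updated or reindexed.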

Elasticsearch: Constant Data Field Type

Is there a way to add an Elasticsearch data field to an index mapping, such that it always returns a constant numeric value?
I know I can just add a numeric datatype, and then reindex everything with the constant, but I would like to avoid reindexing, and I'd also like to be able to change the constant dynamically without reindexing.
Motivation: Our cluster has a lot of different indexes. We routinely search multiple indexes at once for various reasons. However, when searching multiple indices, our search logic still needs to treat each index slightly differently. One way we could do this is by adding a constant numeric field to each index, and then use that field in our search query.
However, because this is a constant, it seems like we should not need to reindex everything (seems pointless to add a constant value to every record).
You could use the _meta field for that purpose:
PUT index1
{
  "mappings": {
    "_meta": {
      "constant": 1
    },
    "properties": {
      ... your fields
    }
  }
}
PUT index2
{
  "mappings": {
    "_meta": {
      "constant": 2
    },
    "properties": {
      ... your fields
    }
  }
}
You can change that constant value at any time, without needing to reindex anything. The value is stored at the index level and can be retrieved at any time by simply retrieving the index mapping with GET index1,index2/_mapping.
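For example, to change the constant later, resend _meta through the put mapping API (a sketch; note that _meta is replaced as a whole, so include everything you want to keep in it):
PUT index1/_mapping
{
  "_meta": {
    "constant": 3
  }
}
No documents are touched by this call; only the index metadata changes.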

ElasticSearch + Kibana - Unique count using pre-computed hashes

I want to perform a unique count on my Elasticsearch cluster.
The cluster contains about 50 million records.
I've tried the following methods:
First method
Mentioned in this section:
Pre-computing hashes is usually only useful on very large and/or high-cardinality fields as it saves CPU and memory.
Second method
Mentioned in this section:
Unless you configure Elasticsearch to use doc_values as the field data format, the use of aggregations and facets is very demanding on heap space.
My property mapping
"my_prop": {
"index": "not_analyzed",
"fielddata": {
"format": "doc_values"
},
"doc_values": true,
"type": "string",
"fields": {
"hash": {
"type": "murmur3"
}
}
}
The problem
When I use unique count on my_prop.hash in Kibana I receive the following error:
Data too large, data for [my_prop.hash] would be larger than limit
Elasticsearch has a 2g heap size.
The above also fails for a single index with 4 million records.
My questions
Am I missing something in my configuration?
Should I scale up my machine? This does not seem like a scalable solution.
ElasticSearch query
Was generated by Kibana:
http://pastebin.com/hf1yNLhE
ElasticSearch Stack trace
http://pastebin.com/BFTYUsVg
That error says you don't have enough memory (more specifically, memory for fielddata) to store all the values of hash, so you need to take them out of the heap and put them on disk, meaning doc_values.
Since you are already using doc_values for my_prop, I suggest doing the same for my_prop.hash (and, no, the settings of the main field are not inherited by its sub-fields): "hash": { "type": "murmur3", "index": "no", "doc_values": true }.
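Putting that together, a sketch of the adjusted property mapping in the same pre-2.x syntax as the question:
"my_prop": {
  "type": "string",
  "index": "not_analyzed",
  "doc_values": true,
  "fields": {
    "hash": {
      "type": "murmur3",
      "index": "no",
      "doc_values": true
    }
  }
}
Since the mapping of an existing field can't be changed in place, the data has to be reindexed for the new doc_values setting on the hash sub-field to take effect.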
