I have many JSON documents to be stored in Elasticsearch.
All the JSON documents have a few fields which always exist, but
some of the fields need not exist all the time.
So at a maximum there would be 5000 fields in the complete JSON.
Some of the fields can hold values of either string or date type, and new fields can also be added in new documents.
Problems Faced:
As some of the fields can have either date type or string type, Elasticsearch gives an exception while inserting new documents.
Exception:
Wrong mapping type current type [Date] mapped Type [text]
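For what it's worth, a minimal way to reproduce this kind of conflict (the index name docs and the field name created are made up; Elasticsearch 7+ syntax, older versions need a type name in the path):

# First document: `created` is dynamically mapped as type date
PUT docs/_doc/1
{ "created": "2021-06-01T10:00:00Z" }

# Second document: the value cannot be parsed as a date, so the insert fails
PUT docs/_doc/2
{ "created": "not a date at all" }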
Question 1:
Can we set the type of a field to a wildcard (*) instead of date or string, so that it can store any type?
Can this be done after insertion, or does it have to be declared initially, before any documents are added?
But if new fields are to be added, I would have to re-insert all the documents.
Question 2:
Although there are 5000 fields, can we index only certain fields?
If yes, how?
I want something similar to SQL:
Step 1: Create a table with 5000 columns.
Step 2: Create indexes on only 10 fields.
Step 3: If more indexes are needed, add them dynamically.
Are the same things possible in Elasticsearch?
I mean, can it be done in the following way?
1. First insert documents with no fields indexed.
2. Then index only certain fields that are needed.
3. Add more indexes, if required, on already existing documents?
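For what it's worth, here is a sketch of how those steps could map onto Elasticsearch mappings (the index name mytable and the field names are made up; Elasticsearch 7+ syntax): with "dynamic": false, unmapped fields are still stored in _source but are not indexed or searchable, and mappings for more fields can be added later.

PUT mytable
{
  "mappings": {
    "dynamic": false,
    "properties": {
      "field1": { "type": "keyword" },
      "field2": { "type": "date" }
    }
  }
}

# Later, make one more field searchable
PUT mytable/_mapping
{
  "properties": {
    "field3": { "type": "keyword" }
  }
}

Note that documents indexed before the new mapping was added only become searchable on field3 after a reindex or _update_by_query.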
Related
I'm using Kibana 7.10.1.
I need it to use a different time field for each index pattern. Is it possible to set multiple time fields for the same index?
You can pick any date (or date_nanos) field as the primary time field in an index pattern; it is chosen on the second page of the creation flow.
@timestamp is just a convention. Though you will need to create a different index pattern for each combination of index(es) and primary time field.
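For instance, assuming a hypothetical index pattern my-logs-* with a time field created_at, one way to create a second index pattern over the same indices is Kibana's saved objects API (this request goes to the Kibana server, not Elasticsearch, with the kbn-xsrf header set):

POST /api/saved_objects/index-pattern
{
  "attributes": {
    "title": "my-logs-*",
    "timeFieldName": "created_at"
  }
}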
I have a DB of documents. Every document has a keyword property called index (nothing to do with the Elasticsearch index) and a keyword property named superIndex. There can be multiple documents with the same index and multiple documents with the same superIndex in the DB; these fields are not unique.
I run a compound query searching free text on the text content of these documents, with sorting, and get the results I want. However, I get many documents having the same index and/or superIndex. Currently I programmatically filter the result list and take only the first result from each index and superIndex. My requirement is that at the end I'm left with the top results from the sort: the first from each index and superIndex.
Can this be done using an Elasticsearch query? If so, how?
Field collapsing allows you to collapse all search results having the same value in a field (e.g. index). (See Elasticsearch Reference: Field Collapsing)
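A sketch of such a request, assuming the documents live in an index called documents and the free text sits in a field called content (your real sort clause would go alongside):

POST documents/_search
{
  "query": {
    "match": { "content": "text to search" }
  },
  "collapse": { "field": "index" }
}

Note that a single request accepts only one collapse field, so getting the first result per index and per superIndex at the same time may still need a second pass.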
I'm using Elasticsearch 5.6.2 with Kibana and I'm currently facing a problem.
My documents are indexed on the field timestamp, which is normally an integer; however, recently somebody logged a document with a timestamp that is not an integer, and Kibana complains of a conflicting type.
The Discover panel displays nothing and the following errors pop up:
Saved "field" parameter is now invalid. Please select a new field.
Discover: "field" is a required parameter
How can I look for the document(s) causing these conflicts, so as to find the service creating the bad logs?
The field type (either integer or text/keyword) is not defined on a per-document basis but rather on a per-index basis (in the mappings). I guess you are manipulating time-series data, and you probably have an index per day (or per month, or ...).
In Kibana Dev Tools:
List the created indices with GET _cat/indices
For each index (logstash-2017.09.28 in my example), do a GET logstash-2017.09.28/_mapping and check the type of the field @timestamp
The field type is probably different between indices.
You won't be able to change the field type on already-created indices, and deleting documents won't solve your problem. The only solution is to drop the index or to reindex the whole index with a new field type (in a specific mapping).
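A sketch of that reindex, reusing the example index name from above (the -fixed suffix is made up, and the destination index has to be created with the corrected mapping beforehand):

POST _reindex
{
  "source": { "index": "logstash-2017.09.28" },
  "dest": { "index": "logstash-2017.09.28-fixed" }
}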
To avoid this problem on future indices, the solution is to create an index template with a mapping declaring that the field @timestamp is of type date (or whatever type fits).
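A minimal sketch of such a template for Elasticsearch 5.x (the template name and the doc mapping type are assumptions; adjust them to what your indices actually use):

PUT _template/timestamp_as_date
{
  "template": "logstash-*",
  "mappings": {
    "doc": {
      "properties": {
        "@timestamp": { "type": "date" }
      }
    }
  }
}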
I am new to Elasticsearch. I have indexes which contain arrays of objects. In order to get data related to a single object only, I need to set the data type as nested.
How can I set the data type of an existing index field to nested?
Thanks.
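For reference, a sketch of what a nested mapping looks like (index and field names are made up; Elasticsearch 7+ syntax). Since an existing field cannot be switched to nested in place, this is the mapping a new index would get before reindexing the data into it:

PUT my_new_index
{
  "mappings": {
    "properties": {
      "items": { "type": "nested" }
    }
  }
}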
I have a document whose structure changes often. How can I index nested documents inside it without changing the mapping on Elasticsearch?
You can index documents in Elasticsearch without providing a mapping, yes.
However, Elasticsearch makes a decision about the type of a field when the first document containing a value for that field arrives. If you add document 1 and it has a field called item_code, and in document 1 item_code is a string, Elasticsearch will set the type of the field item_code to string. If document 2 then has an integer value in item_code, Elasticsearch will have already set the type as string.
Basically, the field type is index-dependent, not document-dependent.
This is mainly because of Apache Lucene and the way it handles this information.
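To make that concrete, a small sketch (the index name demo and the value are made up):

# item_code gets its type from the first document that carries it
PUT demo/_doc/1
{ "item_code": "ABC-123" }

# the resulting mapping can be inspected afterwards
GET demo/_mapping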
If you have a case where some of the data structure changes while the rest doesn't, you could use an object type: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-object-type.html.
You can even go as far as using "enabled": false on it, which makes Elasticsearch just store the data. You couldn't search it anymore, but maybe you actually don't even want that?
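A sketch of such a mapping, assuming a hypothetical index flexible_docs with an object field payload (Elasticsearch 7+ syntax): with "enabled": false the object is kept in _source but its contents are neither parsed nor searchable.

PUT flexible_docs
{
  "mappings": {
    "properties": {
      "payload": {
        "type": "object",
        "enabled": false
      }
    }
  }
}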