I have an Elasticsearch index with vacation rental properties (100K+), each of which contains nested documents for availability dates (1,000+ per 'parent' document). Several times a day I need to replace the entire set of nested documents for each property, so that each vacation rental's availability data stays fresh; however, Elasticsearch's default behavior is to merge nested documents.
Here is a snippet of the mapping (availability dates in the "bookingInfo"):
{
"vacation-rental-properties": {
"mappings": {
"property": {
"dynamic": "false",
"properties": {
"bookingInfo": {
"type": "nested",
"properties": {
"avail": {
"type": "integer"
},
"datum": {
"type": "date",
"format": "dateOptionalTime"
},
"in": {
"type": "boolean"
},
"min": {
"type": "integer"
},
"out": {
"type": "boolean"
},
"u": {
"type": "integer"
}
}
},
// this part left out
}
}
}
}
Unfortunately, our current underlying business logic does not allow us to replace or update parts of the "bookingInfo" nested documents; we need to replace the entire array of nested documents. With the default behavior, updating the 'parent' doc merely adds new nested docs to "bookingInfo" (unless they already exist, in which case they're updated), leaving the index with a lot of old dates that should no longer be there (if they're in the past, they're not bookable anyway).
How do I go about making the update call to ES?
Currently using a bulk call such as (two lines for each doc):
{ "update" : {"_id" : "abcd1234", "_type" : "property", "_index" : "vacation-rental-properties"} }
{ "doc" : {"bookingInfo" : ["all of the documents here"]} }
I have found this question that seems related, and wonder if the following will work (first enabling scripts via script.inline: on in the config file for version 1.6+):
curl -XPOST localhost:9200/the-index-and-property-here/_update -d '{
"script" : "ctx._source.bookingInfo = updated_bookingInfo",
"params" : {
"updated_bookingInfo" : {"field": "bookingInfo"}
}
}'
How do I translate that to a bulk call for the above?
Using ElasticSearch 1.7, this is the way I solved it. I hope it can be of help to someone, as a future reference.
{ "update": { "_id": "abcd1234", "_retry_on_conflict" : 3} }\n
{ "script" : { "inline": "ctx._source.bookingInfo = param1", "lang" : "js", "params" : {"param1" : ["All of the nested docs here"]}}\n
...and so on for each entry in the bulk update call.
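For reference, a minimal sketch of how such a bulk request can be assembled and sent with curl (the file name bulk.json and the sample availability values are made up; the two-line-per-document format follows the answer above):
{ "update": { "_index": "vacation-rental-properties", "_type": "property", "_id": "abcd1234", "_retry_on_conflict": 3 } }
{ "script": { "inline": "ctx._source.bookingInfo = param1", "lang": "js", "params": { "param1": [ { "datum": "2015-09-01", "avail": 1, "in": true, "out": true, "min": 2, "u": 0 } ] } } }
curl -XPOST 'localhost:9200/_bulk' --data-binary @bulk.json
Every line of the bulk body, including the last one, must end with a newline character, and --data-binary (rather than -d) is needed so curl preserves those newlines.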
Related
Is there any way in Elasticsearch to store an index template per alias?
I mean, create an index with multiple aliases (alias1, alias2, ...) and attach a different template to each of them, then index/search documents against a specific alias.
The reason I'm doing this is that I have many different document structures (up to 50 types).
What I have done so far:
1. PUT /dynamic_index
2. POST /_aliases
{ "actions" : [
{ "add" : { "index" : "dynamic_index", "alias" : "alias_type1" } },
{ "add" : { "index" : "dynamic_index", "alias" : "alias_type2" } },
{ "add" : { "index" : "dynamic_index", "alias" : "alias_type3" } }
]}
3.
PUT _template/template1
{
"index_patterns": [
"dynamic_index"
],
"mappings": {
"dynamic_templates": [
{
"strings_as_keywords": {
"match_mapping_type": "string",
"mapping": {
"type": "text",
"analyzer": "standard",
"copy_to": "_all",
"fields": {
"keyword": {
"type": "keyword",
"normalizer": "lowercase_normalizer"
}
}
}
}
}
],
"properties": {
"source": {
"type": "keyword"
}
}
},
"aliases": {
"alias_type1": {
}
}
}
4. The same for alias_type2 and alias_type3, but with different fields...
Indexing/search: trying to create and search docs per alias, as in this example:
POST alias_type1/_doc
{
"source": "foo"
, .....
}
POST alias_type2/_doc
{
"source": "foo123"
, .....
}
GET alias_type1/_search
{
"query": {
"match_all": {}
}
}
GET alias_type2/_search
{
"query": {
"match_all": {}
}
}
What I actually see is that even though I index documents per alias, when searching I don't get results per alias; the results are the same on alias_type1, alias_type2, and even on the index itself.
Is there any way I can achieve separation per alias, in terms of indexing/searching docs per type (alias)?
Any ideas?
You can't have separate mappings for aliases pointing to the same index! An alias is just a virtual link pointing to an index, so if your aliases point to the same index, you will get the same results back.
If you want different mappings based on your data structures, you will need to create multiple indices.
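For example, a minimal sketch of that approach (the index names and the second field are made up for illustration): create one index per document structure and attach each alias to its own index.
PUT /index_type1
{
  "mappings": { "properties": { "source": { "type": "keyword" } } },
  "aliases": { "alias_type1": {} }
}
PUT /index_type2
{
  "mappings": { "properties": { "other_field": { "type": "text" } } },
  "aliases": { "alias_type2": {} }
}
Searches on alias_type1 then only ever hit index_type1, and each index keeps its own mapping (and, if needed, its own template matched via index_patterns).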
Update
You can also use custom routing based on a field; for more information, check the official Elastic documentation.
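A rough sketch of the routing idea (the doc_type discriminator field and its values are hypothetical): pass a routing value when indexing and searching. Routing only controls which shard a document lands on and which shards a search hits, so you still need a filter on a discriminator field to fully separate the result sets.
POST dynamic_index/_doc?routing=type1
{ "doc_type": "type1", "source": "foo" }
GET dynamic_index/_search?routing=type1
{
  "query": {
    "bool": {
      "filter": [ { "term": { "doc_type": "type1" } } ]
    }
  }
}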
{
"error" : {
"root_cause" : [
{
"type" : "illegal_argument_exception",
"reason" : "Types cannot be provided in put mapping requests, unless the include_type_name parameter is set to true."
}
]
},
"status" : 400
}
I've read that the above is because types are deprecated in Elasticsearch 7.7, but is that also true for data types? I mean, how am I supposed to say that I want the data to be treated as a date?
My current mapping has this element:
"Time": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
I just wanted to create it again with type: "date", but I noticed that even copy-pasting the current mapping (which works) yields an error... The index and mappings I have are generated automatically by https://github.com/jayzeng/scrapy-elasticsearch
My goal is simply to have a date field; all my dates are already in my index, but when I want to filter in Kibana I can see it is not considered a date field. And modifying the mapping doesn't seem like an option.
Obvious ELK noob here, please bear with me (:
The error is quite ironic because I pasted the mapping from an existing index/mapping...
You're experiencing this because of the breaking change in 7.0 regarding named doc types.
Long story short, instead of
PUT example_index
{
"mappings": {
"_doc_type": {
"properties": {
"Time": {
"type": "date",
"fields": {
"keyword": {
"type": "keyword"
}
}
}
}
}
}
}
you should do
PUT example_index
{
"mappings": { <-- Note no top-level doc type definition
"properties": {
"Time": {
"type": "date",
"fields": {
"keyword": {
"type": "keyword"
}
}
}
}
}
}
I'm not familiar with your scrapy package but at a glance it looks like it's just a wrapper around the native py ES client which should forward your mapping definitions to ES.
Caveat: if you have your fields defined as text and you intend to change it to date (or add a field of type date), you'll get the following error:
mapper [Time] of different type, current_type [text], merged_type [date]
You basically have two options:
drop the index, set the new mapping & reindex everything (easiest, but introduces downtime)
extend the mapping with a new field, let's call it Time_as_datetime, and update your existing docs using a script (introduces a brand new field/property, which is somewhat verbose); see the sketch below
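A rough sketch of the second option (it reuses the example_index name from above; the yyyy-MM-dd HH:mm:ss format is only an assumption, so match it to whatever your Time strings actually look like):
PUT example_index/_mapping
{
  "properties": {
    "Time_as_datetime": {
      "type": "date",
      "format": "yyyy-MM-dd HH:mm:ss"
    }
  }
}
POST example_index/_update_by_query
{
  "script": {
    "source": "ctx._source.Time_as_datetime = ctx._source.Time",
    "lang": "painless"
  }
}
Once the update by query finishes, Time_as_datetime can be used as a proper date field in Kibana, while the original Time field stays untouched.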
I am new to Elasticsearch and I am struggling with date range queries. I have to query the records which fall between particular dates. The JSON records pushed into the Elasticsearch database are as follows:
"messageid": "Some message id",
"subject": "subject",
"emaildate": "2020-01-01 21:09:24",
"starttime": "2020-01-02 12:30:00",
"endtime": "2020-01-02 13:00:00",
"meetinglocation": "some location",
"duration": "00:30:00",
"employeename": "Name",
"emailid": "abc#xyz.com",
"employeecode": "141479",
"username": "username",
"organizer": "Some name",
"organizer_email": "cde#xyz.com",
I have to query the records whose start time is between "2020-01-02 12:30:00" and "2020-01-10 12:30:00". I have written a query like this:
{
"query":
{
"bool":
{
"filter": [
{
"range" : {
"starttime": {
"gte": "2020-01-02 12:30:00",
"lte": "2020-01-10 12:30:00"
}
}
}
]
}
}
}
This query is not giving the expected results. I assume that the person who pushed the data into the Elasticsearch database at my office did not set a mapping, and Elasticsearch dynamically decided the data type of "starttime" to be "text". Hence I am getting inconsistent results.
I can set the mapping like this:
PUT /meetings
{
"mappings": {
"dynamic": false,
"properties": {
.
.
.
.
"starttime": {
"type": "date",
"format":"yyyy-MM-dd HH:mm:ss"
}
.
.
.
}
}
}
And then the query would work, but I am not allowed to do so (office policies). What alternatives do I have to achieve my task?
Update :
I assumed the data type to be "text", but by default Elasticsearch applies both "text" and "keyword" so that we can run both full-text and keyword-based searches. Since it is also set as "keyword", will this benefit me in any way? I do not have access to a lot of things at the office, which is why I am unable to debug the query; I only have the search API, for which I have to build the query.
GET /meetings/_mapping output :
'
'
'
"starttime" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
}
'
'
'
Date range queries will not work on a text field; for that, you have to use a date field.
Since you are working with dates, best practice is to use the date field type.
I would suggest you reindex your index into another index so that you can change the type of your text field to a date field:
Step 1: Create index2 using index1's mapping, and make sure to change the type of your date field from text to date (see the sketch below).
Step 2: Run the Elasticsearch reindex and copy all your data from index1 to index2. Since you have changed the field type to date, Elasticsearch will now recognize this field as a date.
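For step 1, a minimal sketch (it reuses the index2 name; only starttime is shown here, so copy the rest of index1's mapping alongside it):
PUT /index2
{
  "mappings": {
    "properties": {
      "starttime": {
        "type": "date",
        "format": "yyyy-MM-dd HH:mm:ss"
      }
    }
  }
}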
POST _reindex
{
"source":{ "index": "index1" },
"dest": { "index": "index2" }
}
Now you can run your normal date queries on index2.
As @jzzfs suggested, the idea is to add a date sub-field to the starttime field. You first need to modify the mapping like this:
PUT meetings/_mapping
{
"properties": {
"starttime" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
},
"date": {
"type" : "date",
"format" : "yyyy-MM-dd HH:mm:ss",
}
}
}
}
}
When done, you need to reindex your data using the update by query API so that the starttime.date field gets populated and indexed:
POST meetings/_update_by_query
When the update is done, you'll be able to leverage the starttime.date sub-field in your query:
{
"query": {
"bool": {
"filter": [
{
"range": {
"starttime.date": {
"gte": "2020-01-02 12:30:00",
"lte": "2020-01-10 12:30:00"
}
}
}
]
}
}
}
There are ways of parsing text fields as dates at search time but the overhead is impractical... You could, however, keep the starttime as text by default but make it a multi-field and query it using starttime.as_date, for example.
Is there any way I can rename an element in an existing Elasticsearch mapping without having to add a new element?
If so, what's the best way to do it in order to avoid breaking the existing mapping?
e.g. from fieldCamelcase to fieldCamelCase
{
"myType": {
"properties": {
"timestamp": {
"type": "date",
"format": "date_optional_time"
},
"fieldCamelcase": {
"type": "string",
"index": "not_analyzed"
},
"field_test": {
"type": "double"
}
}
}
}
You could do this by creating an ingest pipeline that contains a Rename Processor, in combination with the Reindex API.
PUT _ingest/pipeline/my_rename_pipeline
{
"description" : "describe pipeline",
"processors" : [
{
"rename": {
"field": "fieldCamelcase",
"target_field": "fieldCamelCase"
}
}
]
}
POST _reindex
{
"source": {
"index": "source"
},
"dest": {
"index": "dest",
"pipeline": "my_rename_pipeline"
}
}
Note that you need to be running Elasticsearch 5.x in order to use ingest. If you're running < 5.x then you'll have to go with what @Val mentioned in his comment :)
Updating a field name in ES (version > 5, where missing has been removed), using the _update_by_query API:
Example:
POST http://localhost:9200/INDEX_NAME/_update_by_query
{
"query": {
"bool": {
"must_not": {
"exists": {
"field": "NEW_FIELD_NAME"
}
}
}
},
"script" : {
"inline": "ctx._source.NEW_FIELD_NAME = ctx._source.OLD_FIELD_NAME; ctx._source.remove(\"OLD_FIELD_NAME\");"
}
}
First of all, you must understand how Elasticsearch and Lucene store data: in immutable segments (you can easily read about this on the Internet).
So any solution will remove/create documents and change the mapping, or create a new index and thus a new mapping as well.
The easiest way is to use the update by query API: https://www.elastic.co/guide/en/elasticsearch/reference/2.4/docs-update-by-query.html
POST /XXXX/_update_by_query
{
"query": {
"missing": {
"field": "fieldCamelCase"
}
},
"script" : {
"inline": "ctx._source.fieldCamelCase = ctx._source.fieldCamelcase; ctx._source.remove(\"fieldCamelcase\");"
}
}
Starting with ES 6.4 you can use "Field Aliases", which allow the functionality you're looking for with close to 0 work or resources.
Do note that aliases can only be used for searching - not for indexing new documents.
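A minimal sketch of such a field alias (my_index is a made-up index name; on 7.x drop the type from the path):
PUT my_index/_mapping/myType
{
  "properties": {
    "fieldCamelCase": {
      "type": "alias",
      "path": "fieldCamelcase"
    }
  }
}
Queries, aggregations and sorts can then refer to fieldCamelCase, while documents keep being indexed into the original fieldCamelcase field.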
I have a Kibana dashboard that contains a terms panel to show the number of instances for a particular field (let's call it field1). Field1 is, effectively, a list of strings. Each string usually contains multiple words. Since it's analyzed, Elasticsearch breaks the terms up into separate columns. I need to keep the text together, so I need a not_analyzed version. Here's my attempt to do that with a template, located at ~\config\templates\doc_template.json on a Windows box, which does not seem to be working. Elasticsearch is running as a Windows service.
{
"doc_template": {
"template": "*",
"mappings": {
"Type-*": {
"properties": {
"Field1": {
"type": "multi_field",
"fields": {
"Field1": { "index": "analyzed" },
"RawField1": { "index": "not_analyzed" }
}
}
}
}
}
}
}
In the terms panel, I expect the necessary field to be either RawField1 or Field1.RawField1, but I've tried other variations including and excluding .raw, with no luck.
New indexes are created daily. Field1 exists in 4 separate types, each of which begin with "Type-". I suspect my attempt at using a wildcard there is problematic, but I'm not sure. All data is being sent to Elasticsearch via NEST in a C# .NET application. Here's the mapping for Field1 as it currently exists for one of the types:
{
"index-2014.12.08" : {
"mappings" : {
"Type-1" : {
"properties" : {
"Field1" : {
"type" : "string"
},
"Field2" : {
"type" : "string"
},
"Field3" : {
"type" : "string"
}
}
}
}
}
}
Obviously, the mapping doesn't look like how I expect. What's the best way to remedy this issue?