I am using this request when creating my index:
PUT some_name
{
  "mappings": {
    "_default_": {
      "_timestamp": {
        "enabled": true,
        "store": true
      },
      "properties": {
        "properties": {
          "properties": {
            "location": {
              "type": "geo_point"
            }
          }
        }
      }
    }
  }
}
However, the _timestamp field is not being returned: when I add a document (without any time field) and request it back, the timestamp is missing. I am running Elasticsearch 1.5, and I have also tried "store": "yes" and "store": "true".
What am I doing wrong? Thanks.
You need to ask for that field explicitly with "fields": ["_timestamp"], because it is not returned by default and is not part of _source (which is what a search returns by default):
GET /some_name/_search
{
  "query": {
    "match_all": {}
  },
  "fields": ["_timestamp"]
}
Related
I created a copy_to field from an existing field by altering its mapping as shown below:
{
  "properties": {
    "ExistingField": {
      "type": "date",
      "copy_to": "CopiedField"
    },
    "CopiedField": {
      "type": "date",
      "store": true
    }
  }
}
I used "store": true because I wanted this new field's value to be retrievable when I search. Aggregations on "CopiedField" work fine, but when I try to retrieve a value from this new CopiedField in a search, nothing comes back:
{
  "stored_fields": [
    "CopiedField"
  ],
  "query": {
    "match_all": {}
  }
}
How do I retrieve the value of this "CopiedField" in a simple search?
The mapping of an existing field cannot be changed.
You will need to create a new index (with the correct mapping) and move the documents from the old index to the new one. You can then delete the old index and use an alias so that the index name seen by clients does not change.
[Mapping](https://www.elastic.co/blog/changing-mapping-with-zero-downtime)
[Reindex](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html)
[Alias](https://www.elastic.co/guide/en/elasticsearch/reference/6.2/indices-aliases.html)
For example, take an old index index35 with the mapping below:
PUT index35
{
  "mappings": {
    "properties": {
      "ExistingField": {
        "type": "date"
      }
    }
  }
}
The query below will not return anything:
GET index35/_search
{
  "stored_fields": [
    "CopiedField"
  ],
  "query": {
    "match_all": {}
  }
}
Create the new index:
PUT index36
{
  "mappings": {
    "properties": {
      "ExistingField": {
        "type": "date",
        "copy_to": "CopiedField"
      },
      "CopiedField": {
        "type": "date",
        "store": true
      }
    }
  }
}
Move the documents from the old index to the new one (note that index36 must be created before the reindex, otherwise it will be auto-created with the wrong mapping):
POST _reindex
{
  "source": {
    "index": "index35"
  },
  "dest": {
    "index": "index36"
  }
}
Make sure the document count is the same in both the old and the new index (to rule out data loss).
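The counts can be compared quickly with the _count API:

GET index35/_count
GET index36/_count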
Delete the old index: DELETE index35
Create an alias for the new index (using the old name) so that existing search queries are not affected:
POST /_aliases
{
  "actions": [
    { "add": { "index": "index36", "alias": "index35" } }
  ]
}
The old query will now return results.
I've installed the Docker containers for Elasticsearch 5.5.2 and Kibana. I started to learn about mapping types, and created an index with the following request through curl:
{
  "mappings": {
    "user": {
      "_all": { "enabled": false },
      "properties": {
        "title": { "type": "text" },
        "name": { "type": "text" },
        "age": { "type": "integer" }
      }
    }
  }
}
The index was created successfully and I decided to insert some data. When I try to put a string into an integer field, e.g. {"age": "hello"}, Elasticsearch returns an error (so the mapping is working). The problem is with other data types:
1. It accepts integers and floats in string fields (I think this could be because of implicit casts).
2. It accepts floats like 22.4 in the age field (when I search with Kibana or curl, the age field content is shown as a float and not as an integer, so it is not casting from float to integer).
What am I doing wrong?
Have you tried to disable coercion? It can be done at field level:
{
  "mappings": {
    "user": {
      "_all": { "enabled": false },
      "properties": {
        "title": { "type": "text" },
        "name": { "type": "text" },
        "age": {
          "type": "integer",
          "coerce": false
        }
      }
    }
  }
}
Or at index level for all fields:
"settings": {
  "index.mapping.coerce": false
},
"mappings": {
...
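With coercion disabled, indexing a document with a float (or a numeric string) in the age field should now fail with a mapper_parsing_exception instead of being silently accepted. A quick way to check (the index name my_index here is just a placeholder):

PUT my_index/user/1
{
  "age": 22.4
}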
I have a mapping with an inner object as follows:
{
  "mappings": {
    "_all": {
      "enabled": false
    },
    "properties": {
      "foo": {
        "name": {
          "type": "string",
          "index": "not_analyzed"
        },
        "address": {
          "type": "object",
          "properties": {
            "address": {
              "type": "string"
            },
            "city": {
              "type": "string",
              "index": "not_analyzed"
            }
          }
        }
      }
    }
  }
}
When I try the following aggregation it does not return any data:
POST data:*/foo/_search?search_type=count
{
  "query": {
    "match_all": {}
  },
  "aggs": {
    "unique": {
      "cardinality": {
        "field": "address.city"
      }
    }
  }
}
When I use the field name city or address.city, the aggregation returns zero, but if I use foo.address.city then I get the correct response from Elasticsearch. This also affects Kibana's behavior.
Any ideas why this is happening? I saw there is a mapping refactoring that might affect this. I use Elasticsearch version 1.7.1.
To add to this: if I use the relative path in a search query as follows, it works normally:
"query": {
  "filtered": {
    "filter": {
      "term": {
        "address.city": "london"
      }
    }
  }
}
It seems to be this same issue.
It occurs when the type name and the field name are the same.
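As the question notes, prefixing the field with the type name works around the conflict, so the aggregation can be written as:

POST data:*/foo/_search?search_type=count
{
  "query": {
    "match_all": {}
  },
  "aggs": {
    "unique": {
      "cardinality": {
        "field": "foo.address.city"
      }
    }
  }
}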
What am I trying to do?
Add doc_type to an existing index.
What have I tried?
Created index and document
POST /my_index-1/my_type/1
{
  "my_prop": "my_value"
}
Added a template
PUT /_template/my_template
{
  "id": "my_template",
  "template": "my_index-*",
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "my_prop_template": {
            "mapping": {
              "index": "not_analyzed",
              "doc_values": true,
              "fielddata": {
                "format": "doc_values"
              },
              "type": "string"
            },
            "match": "my_prop",
            "match_mapping_type": "string"
          }
        }
      ]
    }
  }
}
Reindexed
./stream2es es --source http://localhost:9200/my_index-1 --target http://localhost:9200/my_index-2
What went wrong?
In the new index my_index-2 the property did not receive "doc_values": true:
...
"properties": {
  "my_prop": {
    "type": "string"
  }
}
...
Just for sanity's sake, I also tried adding the same document to a fresh my_index-3, and there it did get "doc_values": true.
My question
How can I reindex my old index with "doc_values": true?
Thanks @Val! Logstash indeed solved the problem.
Both stream2es and elasticsearch-reindex created the new mapping without "doc_values": true.
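For reference, a Logstash pipeline for this kind of reindex might look roughly like the sketch below. The host, the index names, and the exact option names (e.g. host vs. hosts on the output) vary between Logstash versions, so treat this as an outline rather than a drop-in config; docinfo => true exposes the original _type and _id in @metadata so documents keep their ids:

input {
  elasticsearch {
    hosts => "localhost:9200"
    index => "my_index-1"
    docinfo => true
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "my_index-2"
    document_type => "%{[@metadata][_type]}"
    document_id => "%{[@metadata][_id]}"
  }
}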
I have an Elasticsearch index with a bunch of fields, some of which I want to use with the default stopword list. On the other hand, I have a username field which should return results for users called "the", "be", etc.
Of course, when I run the following query:
{
  "query": {
    "constant_score": {
      "filter": {
        "terms": {
          "username": [
            "be"
          ]
        }
      }
    }
  }
}
nothing is returned. I have seen various solutions for changing the standard analyzer to remove stopwords, but am struggling to find how I would do so for this one field only. Thanks for any pointers.
You can do it like this: add a custom analyzer that does not remove stopwords, and then explicitly assign that analyzer only to the fields where stopwords should be searchable (like your username field).
PUT /stopwords
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_english": {
          "type": "english",
          "stopwords": "_none_"
        }
      }
    }
  },
  "mappings": {
    "text": {
      "properties": {
        "title": {
          "type": "string"
        },
        "content": {
          "type": "string"
        },
        "username": {
          "type": "string",
          "analyzer": "my_english"
        }
      }
    }
  }
}
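You can sanity-check that the analyzer really keeps stopwords with the _analyze API (body-style request shown here; older versions also accept analyzer and text as query-string parameters). If "be" comes back as a token, it was indexed rather than dropped:

GET /stopwords/_analyze
{
  "analyzer": "my_english",
  "text": "be"
}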