Elasticsearch 5.x.x cannot disable dynamic mapping

I'm trying to simply disable dynamic mapping for any fields not explicitly defined in the mapping at index creation time. Nothing worked, so I even tried the example from their docs:
PUT my_index
{
  "mappings": {
    "my_type": {
      "dynamic": false,
      "properties": {
        "user": {
          "type": "text"
        }
      }
    }
  }
}
Made a test insert:
POST my_index/my_type
{
  "user": "tester",
  "some_unknown_field": "lsdkfjsd"
}
Then searching the index shows:
{
  "took": 1,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  },
  "hits": {
    "total": 1,
    "max_score": 1,
    "hits": [
      {
        "_index": "my_index",
        "_type": "my_type",
        "_id": "AViPrfwVko8c8Q3co8Qz",
        "_score": 1,
        "_source": {
          "user": "tester",
          "some_unknown_field": "lsdkfjsd"
        }
      }
    ]
  }
}
I'm expecting "some_unknown_field" to not be indexed, since it was not defined in the mapping. So why is it still being indexed? Am I missing something?
UPDATE
It turns out that it isn't currently possible in version 5.0.0 to do what I wanted, so I removed the fields in my app before sending to elasticsearch and achieved the same end result.
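The app-side workaround described above can be sketched in a few lines of Python. This is my own illustration, not the OP's actual code; the `ALLOWED_FIELDS` set is a hypothetical stand-in for whatever fields your mapping defines.

```python
# Strip any fields that are not in the explicit mapping before
# sending the document to Elasticsearch.
ALLOWED_FIELDS = {"user"}  # hypothetical: the fields in your mapping

def strip_unmapped_fields(doc, allowed=ALLOWED_FIELDS):
    """Return a copy of doc containing only explicitly mapped fields."""
    return {k: v for k, v in doc.items() if k in allowed}

doc = {"user": "tester", "some_unknown_field": "lsdkfjsd"}
clean = strip_unmapped_fields(doc)
print(clean)  # {'user': 'tester'}
```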

A mapping fixes the type of each field you declare when you create the index. For a field you haven't mentioned in the mapping, ES will treat it as a new field on insert and add it to the index with a default, dynamically guessed mapping. Note that "dynamic": false only stops unmapped fields from being indexed; they are still stored in _source and returned with the document. So if you don't want to see a particular field within your _source, you can use source filtering.
Workarounds:
If that's not the case, try disabling the default mapping when you're creating the index.
Try setting the dynamic property to strict:
PUT /test
{
  "settings": {
    "index.mapper.dynamic": false
  },
  "mappings": {
    "testing_type": {
      "dynamic": "strict",
      "properties": {
        "field1": {
          "type": "string"
        }
      }
    }
  }
}
If the above two don't work out, try setting index.mapper.dynamic to false. This SO answer could be handy. Hope it helps.

Related

Elasticsearch 7.x mapper [location] cannot be changed from type [geo_point] to [text]

I have an index in elasticsearch with a location field mapped as a geo_point. This is what's in the mappings tab of the index management settings.
{
  "mappings": {
    "_doc": {
      "properties": {
        "location": {
          "type": "geo_point"
        }
      }
    }
  }
}
According to the geo-point docs on elasticsearch I should be able to send this in as a string of "lat,lon"
This is the value of a location in the json being sent to elastic
'location': '32.807078,-89.908972'
I'm getting this error message
RequestError(400, 'illegal_argument_exception', 'mapper [location] cannot be changed from type [geo_point] to [text]')
UPDATE
I was able to get data flowing in, but now it's coming in as text data, not geo_point! The geo_point mapping is in my index mapping along with the other mappings, and the location value is in the data. Even though the mapping is there, the data is text and I'm unable to use it in a map chart.
SOLUTION
Along with the accepted answer, I also needed to do this:
Go to Home -> Manage -> Kibana -> Index Patterns
I deleted the original index pattern and created a new one, and now the field is recognized as geo_point!
Based on your index mapping, it seems you are using an Elasticsearch version before 7.x.
I have indexed a document using the same index mapping as provided in the question (using version 6.8). Don't forget to add _doc (or whatever type you used in your index mapping) in the URL when indexing the data.
Index Mapping:
{
  "mappings": {
    "_doc": {
      "properties": {
        "location": {
          "type": "geo_point"
        }
      }
    }
  }
}
Index Data:
While posting a document to Elasticsearch, make sure the URL is in the format as mentioned below:
PUT my_index/_doc/1
{
  "location": "41.12,-71.34"
}
Response:
{
  "_index": "65076754",
  "_type": "_doc",
  "_id": "1",
  "_version": 1,
  "result": "created",
  "_shards": {
    "total": 2,
    "successful": 1,
    "failed": 0
  },
  "_seq_no": 0,
  "_primary_term": 1
}
EDIT 1:
On the basis of comments below, the OP is using 7.x
Index Mapping using 7.x
{
  "mappings": {
    "properties": {
      "location": {
        "type": "geo_point"
      }
    }
  }
}
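As a side note, a quick client-side sanity check of the "lat,lon" string can catch malformed coordinates before they ever reach Elasticsearch. A minimal sketch (the function name is my own, not part of the answer above):

```python
def parse_latlon(value):
    """Parse a 'lat,lon' string and validate the coordinate ranges."""
    lat_str, lon_str = value.split(",")
    lat, lon = float(lat_str), float(lon_str)
    if not (-90.0 <= lat <= 90.0):
        raise ValueError(f"latitude out of range: {lat}")
    if not (-180.0 <= lon <= 180.0):
        raise ValueError(f"longitude out of range: {lon}")
    return lat, lon

print(parse_latlon("41.12,-71.34"))  # (41.12, -71.34)
```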

Look for items that a field starts with (ElasticSearch) nodejs client

I'm trying to query my Elasticsearch index to retrieve the items whose toto field starts with "hel".
The toto field is mapped as text with a keyword sub-field:
"toto": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
This is what I tried:
client.search({
  index: 'xxxxx',
  type: 'xxxxxx_type',
  body: {
    "query": {
      "regexp": {
        "toto": "hel.*"
      }
    }
  }
}, function (err, resp, status) {
  if (err) {
    res.send(err);
  } else {
    console.log(resp);
    res.send(resp.hits.hits);
  }
});
I tried to find a solution here:
https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-regexp-query.html
and
https://www.elastic.co/guide/en/elasticsearch/guide/current/_wildcard_and_regexp_queries.html
or here
How to search for a part of a word with ElasticSearch
but nothing worked.
This is how my data looks:
{
  "took": 1,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  },
  "hits": {
    "total": 4,
    "max_score": 1,
    "hits": [
      {
        "_index": "xxxxxx",
        "_type": "xxxxx_type",
        "_id": "1",
        "_score": 1,
        "_source": {
          "toto": "hello"
        }
      }
    ]
  }
}
Match phrase prefix query is what you are looking for.
Use the query below:
{
  "query": {
    "match_phrase_prefix": {
      "toto": "hel"
    }
  }
}
It sounds like you are looking for an autocomplete solution. Running a regex search for every character the user types is not efficient.
I would suggest changing the index's tokenizers and analyzers to create the prefix tokens in advance, which allows faster searches.
Some options on how to implement auto complete:
Elasticsearch Completion suggester: https://www.elastic.co/guide/en/elasticsearch/reference/6.0/search-suggesters-completion.html
or do it yourself:
https://hackernoon.com/elasticsearch-building-autocomplete-functionality-494fcf81a7cf
How to suggest (autocomplete) next word in elastic search?
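To illustrate the "prefix tokens in advance" idea from the answer above, here's a tiny Python sketch of edge-n-gram-style indexing (a simplification of what Elasticsearch's edge_ngram token filter does): every prefix of a term is emitted at index time, so an autocomplete query becomes a cheap exact-term lookup instead of a regex scan.

```python
def edge_ngrams(term, min_gram=2):
    """Emit every prefix of the term, from min_gram chars up to the full term."""
    return [term[:i] for i in range(min_gram, len(term) + 1)]

# Build a toy inverted index of prefix tokens -> document ids.
index = {}
for doc_id, text in enumerate(["hello", "helmet", "world"]):
    for prefix in edge_ngrams(text):
        index.setdefault(prefix, set()).add(doc_id)

# Prefix search is now a single dictionary lookup.
print(sorted(index.get("hel", set())))  # [0, 1]
```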

elasticsearch: copying meta-field _id to other field while creating document

I am using Elasticsearch. I see there is a meta-field _id for each document. I want to search documents using this meta-field, as I don't have any other unique field in the document. But _id is a string and can contain dashes, which are not searchable unless we add a mapping for the field with type keyword. But it is possible, as mentioned here. So now I am thinking of adding another field newField to the document and making it the same as _id. One way to do it is to first create the document, then assign its _id to that field and save the document again. But this requires two requests, which is not great. So I want to find a way to set newField while creating the document itself. Is that even possible?
You can search for a document that contains dashes:
PUT my_index/tweet/testwith-
{
  "fullname": "Jane Doe",
  "text": "The twitter test!"
}
We just created a document with a dash in its id
GET my_index/tweet/_search
{
  "query": {
    "terms": {
      "_id": [
        "testwith-"
      ]
    }
  }
}
We search for the document that has the id testwith-:
{
  "took": 9,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  },
  "hits": {
    "total": 1,
    "max_score": 1,
    "hits": [
      {
        "_index": "my_index",
        "_type": "tweet",
        "_id": "testwith-",
        "_score": 1,
        "_source": {
          "fullname": "Jane Doe",
          "text": "The twitter test!"
        }
      }
    ]
  }
}
We found it, so we can search for documents that have a - in their id.
You could also use a set processor in an ingest pipeline to store the id in an additional field; see https://www.elastic.co/guide/en/elasticsearch/reference/5.5/accessing-data-in-pipelines.html and https://www.elastic.co/guide/en/elasticsearch/reference/5.5/set-processor.html
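A hedged sketch of what that pipeline body could look like, expressed as a Python dict. The pipeline id (add-id-field) and the field name (newField) are assumptions from the question, and this has not been run against a live cluster:

```python
import json

# Body of an ingest pipeline that copies the document's _id into
# "newField" at index time, using the set processor.
pipeline_body = {
    "description": "Copy the _id meta-field into newField",
    "processors": [
        {
            "set": {
                "field": "newField",
                "value": "{{_id}}"
            }
        }
    ]
}

# PUT _ingest/pipeline/add-id-field with this body, then index documents
# with ?pipeline=add-id-field so the processor runs on each one.
print(json.dumps(pipeline_body, indent=2))
```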

Spurious results from elasticsearch

I suspect I can't give enough information to work on, but I'm hoping someone may be able to give me an idea of where to investigate...
I have an Elasticsearch index which is in a live system and is working fine. I've added 3 attributes to the core entity in the index (productId). I'm getting the correct data back, but every now and then it includes spurious data in the returned results.
So for example (I've cut the list of fields down, which is why it is a multi_match query):
Using Postman I am sending
{
  "query": {
    "multi_match": {
      "query": "FD41D359-1066-47C5-B930-C839F380FBDE",
      "fields": [ "softwareitem.productId" ]
    }
  }
}
I'm expecting 1 item to come back in this example, but I'm getting 2. I've modified the result a little, but the key thing is the productId. You can see that the 2nd item returned does not have the productId being searched.
Can anyone give me any idea where I should look next with this ? Is there a fault with my query or do you think the index might be corrupt in some way ?
{
  "took": 3,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  },
  "hits": {
    "total": 2,
    "max_score": 27.424479,
    "hits": [
      {
        "_index": "core_products",
        "_type": "softwareitem",
        "_id": "040EEEA1-4758-4F01-A55A-CAE710117C81",
        "_score": 27.424479,
        "_source": {
          "id": "040EEEA1-4758-4F01-A55A-CAE710117C81",
          "productId": "FD41D359-1066-47C5-B930-C839F380FBDE",
          "softwareitem": {
            "id": "040EEEA1-4758-4F01-A55A-CAE710117C81",
            "title": "Code Library",
            "description": "Blah Blah Blah",
            "rmType": "Software",
            "created": 1424445765000,
            "updated": null
          },
          "searchable": true
        }
      },
      {
        "_index": "core_products",
        "_type": "softwareitem",
        "_id": "806B8F04-3E53-4278-BCC2-C2E1A17D2813",
        "_score": 1.049637,
        "_source": {
          "id": "806B8F04-3E53-4278-BCC2-C2E1A17D2813",
          "productId": "9FB80ABA-B09C-47C5-929A-9FB6C48BD5A8",
          "softwareitem": {
            "id": "806B8F04-3E53-4278-BCC2-C2E1A17D2813",
            "title": "Video Game",
            "description": "Blah Blah Blah",
            "rmType": "Software",
            "created": 1424445765000,
            "updated": null
          },
          "searchable": true
        }
      }
    ]
  }
}
It seems softwareitem.productId is a string field that is being analyzed. For exact matching of a string field, use a not_analyzed string field in your mapping, something like:
"productId" : {
"type" : "string",
"index" : "not_analyzed"
}
If your field is already not_analyzed, you still have to make an additional change at query time. You don't need to use a multi_match / match query there: these types of queries analyze the input string and build a more complex query out of it. That is why you are seeing a second, unexpected result (its productId contains 47C5; the analyzer is probably tokenizing the full query string and building a query where only one token needs to match). You should use term / terms queries instead.
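To see concretely why the match query returns the second document, here is a rough Python simulation of what a standard-analyzer-style tokenizer does to the two productId values. This is an approximation for illustration, not Elasticsearch's actual analysis chain:

```python
import re

def tokens(text):
    """Split on non-alphanumeric characters and lowercase -- roughly
    what the standard analyzer does to a UUID-like string."""
    return {t.lower() for t in re.split(r"[^0-9A-Za-z]+", text) if t}

query = tokens("FD41D359-1066-47C5-B930-C839F380FBDE")      # the searched productId
spurious = tokens("9FB80ABA-B09C-47C5-929A-9FB6C48BD5A8")   # the 2nd result's productId

# The two token sets share "47c5", and a match query only needs one
# token in common to produce a hit.
print(query & spurious)  # {'47c5'}
```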

Change the structure of ElasticSearch response json

In some cases, I don't need all of the fields in response json.
For example,
// request json
{
  "_source": "false",
  "aggs": { ... },
  "query": { ... }
}
// response json
{
  "took": 123,
  "timed_out": false,
  "_shards": { ... },
  "hits": {
    "total": 123,
    "max_score": 123,
    "hits": [
      {
        "_index": "foo",
        "_type": "bar",
        "_id": "123",
        "_score": 123
      }
    ],
    ...
  },
  "aggregations": {
    "foo": {
      "buckets": [
        {
          "key": 123,
          "doc_count": 123
        },
        ...
      ]
    }
  }
}
Actually I don't need the _index/_type every time. When I do aggregations, I don't need hits block.
"_source" : false or "_source": { "exclude": [ "foobar" ] } can help ignore/exclude the _source fields in hits block.
But can I change the structure of ES response json in a more common way? Thanks.
I recently needed to "slim down" the Elasticsearch response, as it was well over 1MB of JSON, and I started using the filter_path request parameter.
This allows you to include or exclude specific fields and supports different types of wildcards. Do read the docs in the link above, as there is quite some info there.
e.g.
_search?filter_path=aggregations.**.hits._source,aggregations.**.key,aggregations.**.doc_count
This reduced (in my case) the response size by half without significantly increasing the search duration, so it was well worth the effort.
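For intuition, here is a toy Python sketch of the kind of projection filter_path performs server-side. It handles simple dotted paths only; the real filter_path also supports wildcards and arrays, which this deliberately omits:

```python
def project(obj, paths):
    """Keep only the keys named by simple dotted paths like 'hits.total'."""
    result = {}
    for path in paths:
        head, _, rest = path.partition(".")
        if head not in obj:
            continue
        if rest:
            sub = project(obj[head], [rest])
            if sub:
                result.setdefault(head, {}).update(sub)
        else:
            result[head] = obj[head]
    return result

response = {"took": 3, "hits": {"total": 2, "hits": []}, "_shards": {"total": 5}}
print(project(response, ["hits.total", "took"]))  # {'hits': {'total': 2}, 'took': 3}
```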
In the hits section, you will always have the _index, _type and _id fields. If you want to retrieve only specific fields in your search results, you can use the fields parameter in the root object:
{
  "query": { ... },
  "aggs": { ... },
  "fields": ["fieldName1", "fieldName2", ...]
}
When doing aggregations, you can use the search_type parameter (documentation) with the count value, like this:
GET index/type/_search?search_type=count
It won't return any documents, only the result count, and your aggregations will be computed in exactly the same way. Note that search_type=count was removed in Elasticsearch 5.x; setting "size": 0 in the request body achieves the same effect.