Can only use wildcard queries on keyword, text and wildcard fields - not on [id] which is of type [long] - elasticsearch

Elasticsearch version 7.13.1
GET test/_mapping
{
  "test" : {
    "mappings" : {
      "properties" : {
        "id" : {
          "type" : "long"
        },
        "name" : {
          "type" : "text"
        }
      }
    }
  }
}
POST test/_doc/101
{
  "id": 101,
  "name": "hello"
}
POST test/_doc/102
{
  "id": 102,
  "name": "hi"
}
Wildcard Search pattern
GET test/_search
{
  "query": {
    "query_string": {
      "query": "*101* *hello*",
      "default_operator": "AND",
      "fields": [
        "id",
        "name"
      ]
    }
  }
}
The error is:
"reason" : "Can only use wildcard queries on keyword, text and wildcard fields - not on [id] which is of type [long]"
It was working fine in version 7.6.0. What changed in the newer ES versions, and what is the resolution for this issue?

It's not directly possible to perform wildcard queries on numeric data types; newer versions reject this up front instead of silently skipping the field. It is better to index those values as strings. You need to modify your index mapping so that id is a string type, for example:
PUT /my-index
{
  "mappings": {
    "properties": {
      "id": {
        "type": "keyword"
      }
    }
  }
}
Since the mapping of an existing field cannot be changed in place, this means creating a new index and reindexing your data into it.
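If reindexing isn't an option right away, query_string also accepts a lenient flag that ignores format-based errors such as running a wildcard against a numeric field; the query then simply won't match on id, but it no longer fails. A minimal sketch against the original index:
GET test/_search
{
  "query": {
    "query_string": {
      "query": "*101* *hello*",
      "default_operator": "AND",
      "lenient": true,
      "fields": [
        "id",
        "name"
      ]
    }
  }
}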
Otherwise, if you want to perform a partial search, you can use the edge n-gram tokenizer; a sketch follows below.
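For illustration, a minimal edge n-gram setup might look like this (the index name test-ngram and the gram sizes are assumptions; tune them for your data):
PUT test-ngram
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "edge_tokenizer": {
          "type": "edge_ngram",
          "min_gram": 1,
          "max_gram": 10,
          "token_chars": ["letter", "digit"]
        }
      },
      "analyzer": {
        "edge_analyzer": {
          "type": "custom",
          "tokenizer": "edge_tokenizer",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "name": {
        "type": "text",
        "analyzer": "edge_analyzer",
        "search_analyzer": "standard"
      }
    }
  }
}
With this in place, a plain match query on name finds partial prefixes like "hel" without any wildcards.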

Related

ElasticSearch - Fuzzy search in list elements

I've got some documents stored in ElasticSearch like this:
{
  "tag": ["tag1", "tag2", "tag3"],
  ...
}
I want to search through the "tag" field. I know that it should work with a query like:
{
  "query": {
    "match": { "tag": "tag1" }
  }
}
But, I don't want to use a match, I want to use a fuzzy search through the list, for example, something like:
{
  "query": {
    "fuzzy": { "tag": "tagg1" }
  }
}
The problem is, the above query doesn't return anything. What should I use instead?
What is the type of the tag field in your Elasticsearch mapping?
I have tried with the following type for the tag field (Elasticsearch version 7.2):
"tag" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
}
and it is working well for me. The fuzzy query would be:
{
  "query": {
    "fuzzy": { "tag": "tagg1" }
  }
}
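If the fuzzy query still returns nothing, another option (a sketch, assuming a text mapping like the one above; replace your_index with your own index name) is a match query with the fuzziness parameter, which analyzes the input the same way the field was indexed:
GET your_index/_search
{
  "query": {
    "match": {
      "tag": {
        "query": "tagg1",
        "fuzziness": "AUTO"
      }
    }
  }
}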

ElasticSearch - Copy one field value to other field for all documents

We have a field "name" in the index, and we recently added a new field "alias".
I want to copy the name field value to the new alias field for all documents.
Is there any update query that will do this? If that is not possible, help me achieve this.
Thanks in advance.
I am trying this query
http://URL/index/profile/_update_by_query
{
  "query": {
    "constant_score": {
      "filter": {
        "exists": { "field": "name" }
      }
    }
  },
  "script": "ctx._source.alias = name;"
}
In the script, I am not sure how to reference the name field. I am getting this error:
{
  "error": {
    "root_cause": [
      {
        "type": "class_cast_exception",
        "reason": "java.lang.String cannot be cast to java.util.Map"
      }
    ],
    "type": "class_cast_exception",
    "reason": "java.lang.String cannot be cast to java.util.Map"
  },
  "status": 500
}
Indeed, the syntax has changed a little since then. You need to modify your query to this:
POST index/_update_by_query
{
  "query": {
    "constant_score": {
      "filter": {
        "exists": { "field": "name" }
      }
    }
  },
  "script": {
    "inline": "ctx._source.alias = ctx._source.name;"
  }
}
UPDATE for ES 6: use source instead of inline.
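For example (same query as above, only the script key changes):
POST index/_update_by_query
{
  "query": {
    "constant_score": {
      "filter": {
        "exists": { "field": "name" }
      }
    }
  },
  "script": {
    "source": "ctx._source.alias = ctx._source.name;"
  }
}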

ElasticSearch filtering for a tag in array

I've got a bunch of events that are tagged for their audience:
{ id = 123, audiences = ["Public", "Lecture"], ... }
I'm trying to do an Elasticsearch query with filtering, so that the search will only return events that have an exact entry of "Public" in that audiences array (and won't return events that are tagged "Not Public").
How do I do that?
This is what I have so far, but it's returning zero results, even though I definitely have "Public" events:
curl -XGET 'http://localhost:9200/events/event/_search' -d '
{
  "query": {
    "filtered": {
      "filter": {
        "term": {
          "audiences": "Public"
        }
      },
      "query": {
        "match": {
          "title": "[searchterm]"
        }
      }
    }
  }
}'
You could use this mapping for your content type:
{
  "your_index": {
    "mappings": {
      "your_type": {
        "properties": {
          "audiences": {
            "type": "string",
            "index": "not_analyzed"
          }
        }
      }
    }
  }
}
not_analyzed: Index this field, so it is searchable, but index the value exactly as specified. Do not analyze it.
Alternatively, if you keep the default analyzed mapping, use a lowercase term value in the search query: the standard analyzer lowercases tokens at index time, so "Public" is stored as "public" and a term filter has to match that exactly.
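So, assuming you keep the default analyzed mapping, your original query should work once the term value is lowercased:
curl -XGET 'http://localhost:9200/events/event/_search' -d '
{
  "query": {
    "filtered": {
      "filter": {
        "term": {
          "audiences": "public"
        }
      },
      "query": {
        "match": {
          "title": "[searchterm]"
        }
      }
    }
  }
}'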

Indexing a comma-separated value field in Elastic Search

I'm using Nutch to crawl a site and index it into Elasticsearch. My site has meta tags, some of them containing comma-separated lists of IDs (that I intend to use for search). For example:
contentTypeIds="2,5,15". (note: no square brackets).
When ES indexes this, I can't search for contentTypeIds:5 and find documents whose contentTypeIds contain 5; this query returns only the documents whose contentTypeIds is exactly "5". However, I do want to find documents whose contentTypeIds contain 5.
In Solr, this is solved by setting the contentTypeIds field to multiValued="true" in the schema.xml. I can't find how to do something similar in ES.
I'm new to ES, so I probably missed something. Thanks for your help!
Create a custom analyzer that splits the indexed text into tokens on commas.
Then you can search. If you don't care about relevance, you can use a filter to search through your documents. My example shows how to search with a term filter.
Below you can find how to do this with the Sense plugin.
DELETE testindex
PUT testindex
{
  "index": {
    "analysis": {
      "tokenizer": {
        "comma": {
          "type": "pattern",
          "pattern": ","
        }
      },
      "analyzer": {
        "comma": {
          "type": "custom",
          "tokenizer": "comma"
        }
      }
    }
  }
}
PUT /testindex/_mapping/yourtype
{
  "properties": {
    "contentType": {
      "type": "string",
      "analyzer": "comma"
    }
  }
}
PUT /testindex/yourtype/1
{
  "contentType": "1,2,3"
}
PUT /testindex/yourtype/2
{
  "contentType": "3,4"
}
PUT /testindex/yourtype/3
{
  "contentType": "1,6"
}
GET /testindex/_search
{
  "query": { "match_all": {} }
}
GET /testindex/_search
{
  "filter": {
    "term": {
      "contentType": "6"
    }
  }
}
Hope it helps.
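On newer Elasticsearch versions you can also test this kind of comma splitting on the fly, using the _analyze API with a char_group tokenizer: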
POST _analyze
{
  "tokenizer": {
    "type": "char_group",
    "tokenize_on_chars": [
      "whitespace",
      "-",
      "\n",
      ","
    ]
  },
  "text": "QUICK,brown, fox"
}
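This should return the three tokens QUICK, brown and fox. Note that a bare tokenizer doesn't lowercase anything; wrap it in a custom analyzer with a lowercase filter if you need that.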

Elasticsearch Map case insensitive to not_analyzed documents

I have a type with following mapping
PUT /testindex
{
  "mappings": {
    "products": {
      "properties": {
        "category_name": {
          "type": "string",
          "index": "not_analyzed"
        }
      }
    }
  }
}
I wanted to search for an exact word; that's why I set this as not_analyzed.
But the problem is that I want to search it in lower case or upper case (case-insensitive).
I searched for it and found a way to make it case-insensitive:
curl -XPOST localhost:9200/testindex -d '{
  "mappings": {
    "products": {
      "properties": {
        "category_name": { "type": "string", "index": "analyzed", "analyzer": "lowercase_keyword" }
      }
    }
  }
}'
Is there any way to apply both of these mappings to the same field?
Thanks.
I think this example meets your needs:
$ curl -XPUT localhost:9200/testindex/ -d '
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "analyzer_keyword": {
            "tokenizer": "keyword",
            "filter": "lowercase"
          }
        }
      }
    }
  },
  "mappings": {
    "test": {
      "properties": {
        "title": {
          "analyzer": "analyzer_keyword",
          "type": "string"
        }
      }
    }
  }
}'
Taken from here: How to setup a tokenizer in elasticsearch.
It uses both the keyword tokenizer and the lowercase filter on a string field, which I believe does what you want.
If you want case-insensitive queries ONLY, consider changing both your data AND your query to either all-lower or all-upper case before you go about your business.
That would mean you keep your field not_analyzed and enter data and queries in only one of the cases.
I believe this Gist answers your question best:
* https://gist.github.com/mtyaka/2006966
You can map a field to several sub-fields during mapping, and we do this all the time: one is not_analyzed and another is analyzed. We typically name the not_analyzed version .raw.
Like John P. wrote, you can set up analyzer during runtime, or you can set one up in the config at server start like in link above:
# Register the custom 'lowercase_keyword' analyzer. It doesn't do anything else
# other than changing everything to lower case.
index.analysis.analyzer.lowercase_keyword.type: custom
index.analysis.analyzer.lowercase_keyword.tokenizer: keyword
index.analysis.analyzer.lowercase_keyword.filter: [lowercase]
Then you define your mapping for your field(s) with both the not_analyzed version and the analyzed one:
# Map the 'tags' property to two fields: one that isn't analyzed,
# and one that is analyzed with the 'lowercase_keyword' analyzer.
curl -XPUT 'http://localhost:9200/myindex/images/_mapping' -d '{
  "images": {
    "properties": {
      "tags": {
        "type": "multi_field",
        "fields": {
          "tags": {
            "index": "not_analyzed",
            "type": "string"
          },
          "lowercased": {
            "index": "analyzed",
            "analyzer": "lowercase_keyword",
            "type": "string"
          }
        }
      }
    }
  }
}'
And finally your query (note lowercased values before building query to help find match):
# Issue queries against the index. The search query must be manually lowercased.
curl -XPOST 'http://localhost:9200/myindex/images/_search?pretty=true' -d '{
  "query": {
    "terms": {
      "tags.lowercased": [
        "event:battle at the boardwalk"
      ]
    }
  },
  "facets": {
    "tags": {
      "terms": {
        "field": "tags",
        "size": "500",
        "regex": "^team:.*"
      }
    }
  }
}'
Just create your custom analyzer with the keyword tokenizer and the lowercase token filter.
For this scenario, I suggest combining the lowercase filter and the keyword tokenizer into your custom analyzer, and lowercasing your search keywords.
1. Create the index with an analyzer combining the lowercase filter and the keyword tokenizer:
curl -XPUT localhost:9200/test/ -d '
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "your_custom_analyzer": {
            "tokenizer": "keyword",
            "filter": ["lowercase"]
          }
        }
      }
    }
  }
}'
2. Put the mapping and set the field's analyzer:
curl -XPUT localhost:9200/test/_mapping/twitter -d '
{
  "twitter": {
    "properties": {
      "content": { "type": "string", "analyzer": "your_custom_analyzer" }
    }
  }
}'
3. Then you can search for what you want with a wildcard query:
curl -XPOST localhost:9200/test/twitter/_search -d '{
  "query": {
    "wildcard": { "content": "*the words you want to search*" }
  }
}'
There is another way to search a field in a different way: I suggest using the multi_field type.
You can set the field up as a multi_field:
curl -XPUT localhost:9200/test/_mapping/twitter -d '
{
  "properties": {
    "content": {
      "type": "multi_field",
      "fields": {
        "default": { "type": "string" },
        "search": { "type": "string", "analyzer": "your_custom_analyzer" }
      }
    }
  }
}'
So you can index data with the above mapping and then search it in two ways (default or your_custom_analyzer).
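For instance, a sketch of querying the analyzed sub-field (field and index names as defined in the mapping above; the search term is a placeholder):
curl -XPOST localhost:9200/test/twitter/_search -d '{
  "query": {
    "match": { "content.search": "your lowercased keyword" }
  }
}'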
We could achieve case insensitive searching on non-analyzed strings using ElasticSearch scripting.
Example Query Using Inline Scripting:
{
  "query": {
    "bool": {
      "must": [{
        "query_string": {
          "query": "\"apache\"",
          "default_field": "COLLECTOR_NAME"
        }
      }, {
        "script": {
          "script": "if(doc['verb'].value != null) {doc['verb'].value.equalsIgnoreCase(\"geT\")}"
        }
      }]
    }
  }
}
You need to enable scripting in the elasticsearch.yml file. Using scripts in search queries can reduce your overall search performance. If you want scripts to perform better, make them "native" with a Java plugin.
Example Plugin Code:
public class MyNativeScriptPlugin extends Plugin {
    @Override
    public String name() {
        return "Indexer scripting Plugin";
    }

    public void onModule(ScriptModule scriptModule) {
        scriptModule.registerScript("my_script", MyNativeScriptFactory.class);
    }

    public static class MyNativeScriptFactory implements NativeScriptFactory {
        @Override
        public ExecutableScript newScript(@Nullable Map<String, Object> params) {
            return new MyNativeScript(params);
        }

        @Override
        public boolean needsScores() {
            return false;
        }
    }

    public static class MyNativeScript extends AbstractSearchScript {
        Map<String, Object> params;

        MyNativeScript(Map<String, Object> params) {
            this.params = params;
        }

        @Override
        public Object run() {
            // Look up the doc value for the field named in params ("key")
            // and compare it case-insensitively against params ("value").
            ScriptDocValues<?> docValue = (ScriptDocValues<?>) doc().get(params.get("key"));
            if (docValue instanceof Strings) {
                return ((String) params.get("value")).equalsIgnoreCase(((Strings) docValue).getValue());
            }
            return false;
        }
    }
}
Example Query Using Native Script:
{
  "query": {
    "bool": {
      "must": [{
        "query_string": {
          "query": "\"apache\"",
          "default_field": "COLLECTOR_NAME"
        }
      }, {
        "script": {
          "script": "my_script",
          "lang": "native",
          "params": {
            "key": "verb",
            "value": "GET"
          }
        }
      }]
    }
  }
}
It is quite simple: just create the mapping as follows:
{
  "mappings": {
    "products": {
      "properties": {
        "category_name": {
          "type": "string"
        }
      }
    }
  }
}
There is no need to set the index option if you want case-insensitive search, because the default analyzer is "standard", which lowercases terms and thus takes care of case insensitivity.
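For example, with the default mapping above, a match query is analyzed with the same standard analyzer, so "APPLE", "Apple" and "apple" all hit the same lowercased token (the index, type and value here are just for illustration):
POST /testindex/products/_search
{
  "query": {
    "match": { "category_name": "APPLE" }
  }
}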
I wish I could add a comment, but I can't. So the answer to this question is: this is not possible.
Analyzers are composed of a single tokenizer and zero or more token filters.
I wish I could tell you something else, but after spending 4 hours researching this, that's the answer. I'm in the same situation. You can't skip tokenization: it's either all on or all off.
