I work with ElasticSearch version 1.2.3
I've integrated WordNet 3.0 as a Synonym database for ElasticSearch Synonyms Analyzer. (Full WordNet install: configure, make, make install)
I've added the following code to the ElasticSearch index settings (the index name is local_es)
curl -XPUT 'localhost:9200/local_es/_settings' -d '{
  "settings" : {
    "analysis" : {
      "analyzer" : {
        "synonym" : {
          "tokenizer" : "lowercase",
          "filter" : ["synonym"]
        }
      },
      "filter" : {
        "synonym" : {
          "type" : "synonym",
          "format" : "wordnet",
          "synonyms_path" : "analysis/wn_s.pl"
        }
      }
    }
  }
}'
I've also updated the mapping with the following code:
curl -XPUT 'localhost:9200/local_es/shadowpage/_mapping' -d '{
  "shadowpage" : {
    "shadowPageName" : {
      "enabled" : true,
      "analyzer" : "synonym"
    },
    "properties" : {
      "name" : { "type" : "string", "index" : "analyzed", "analyzer" : "synonym" }
    }
  }
}'
All is working as expected.
As you can see, ElasticSearch reads its synonym data from the file at analysis/wn_s.pl.
The wn_s.pl file is a WordNet Prolog file that contains all of the synonym database entries.
How can I add new synonyms to the database?
Do I add them directly to the WordNet database, or to the wn_s.pl file?
If you are going to be actively modifying your synonym database, you should probably just transform the synsets in the WordNet database into a basic comma-delimited file in this format:
"british,english",
"queen,monarch"
Then use and edit this file as your synonym resource.
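For example, a minimal plain-text synonyms file (the name analysis/synonyms.txt is just an assumption here) could look like this, one comma-separated synonym group per line:
british, english
queen, monarch
The filter definition then points at that file instead of the Prolog one, and the "format": "wordnet" line is simply dropped (a sketch of the filter block only):
"filter" : {
  "synonym" : {
    "type" : "synonym",
    "synonyms_path" : "analysis/synonyms.txt"
  }
}
Note that after editing the synonyms file, the index typically has to be closed and reopened for the analyzer to pick up the changes.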
I have just one index in Elasticsearch, named aa-bb-YYYY-MM.
Documents in this index contain a field I want to use as the date field.
Those documents have been inserted from a custom script (not using Logstash).
When creating the index pattern in Kibana:
If I enter aa-bb-*, the date field is not found.
If I enter aa-*, the date field is not found.
If I enter aa*, the date field is found, and I can create the index pattern.
But I really need to group indices by the first two "dimensions". I tried using "_" instead of "-", with the same result.
Any idea of what is going on?
It's working for me. I'm on the latest build of the 5.0 release branch (just past the beta1 release); I don't know what version you're on.
I created this index and added 2 docs:
curl --basic -XPUT 'http://elastic:changeme@localhost:9200/aa-bb-2016-09' -d '{
  "settings" : {
    "number_of_shards" : 1
  },
  "mappings" : {
    "test" : {
      "properties" : {
        "date" : { "type" : "date" },
        "action" : {
          "type" : "text",
          "analyzer" : "standard",
          "fields" : {
            "raw" : { "type" : "text", "index" : "not_analyzed" }
          }
        },
        "myid" : { "type" : "integer" }
      }
    }
  }
}'
curl -XPUT 'http://elastic:changeme@localhost:9200/aa-bb-2016-09/test/1' -d '{
  "date" : "2015-08-23T00:01:00",
  "action" : "start",
  "myid" : 1
}'
curl -XPUT 'http://elastic:changeme@localhost:9200/aa-bb-2016-09/test/2' -d '{
  "date" : "2015-08-23T14:02:30",
  "action" : "stop",
  "myid" : 1
}'
and I was able to create the index pattern with aa-bb-*
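If the pattern still doesn't pick the field up, it may be worth double-checking how the date field is mapped in every index the pattern matches, for example with the get-field-mapping API (a sketch, assuming the field is called date as above):
curl --basic -XGET 'http://elastic:changeme@localhost:9200/aa-bb-*/_mapping/field/date?pretty'
Any index in the pattern where the field is missing or mapped as something other than date could explain why Kibana doesn't offer it as a time field.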
I'm trying to search text indexed by Elasticsearch with the icu_tokenizer, but I can't get it working.
My test case is to tokenize the sentence “Hello. I am from Bangkok”, in Thai สวัสดี ผมมาจากกรุงเทพฯ, which should be tokenized to the five words สวัสดี, ผม, มา, จาก, กรุงเทพฯ (sample from Elasticsearch - The Definitive Guide).
Searching using any of the last four words fails for me. Searching using any of the space separated words สวัสดี or ผมมาจากกรุงเทพฯ works fine.
If I specify the icu_tokenizer on the command line, like
curl -XGET 'http://localhost:9200/icu/_analyze?tokenizer=icu_tokenizer' -d "สวัสดี ผมมาจากกรุงเทพฯ"
it tokenizes to five words.
My settings are:
curl http://localhost:9200/icu/_settings?pretty
{
  "icu" : {
    "settings" : {
      "index" : {
        "creation_date" : "1474010824865",
        "analysis" : {
          "analyzer" : {
            "nfkc_cf_normalized" : [ "icu_normalizer" ],
            "tokenizer" : "icu_tokenizer"
          }
        },
        "number_of_shards" : "5",
        "number_of_replicas" : "1",
        "uuid" : "tALRehqIRA6FGPu8iptzww",
        "version" : {
          "created" : "2040099"
        }
      }
    }
  }
}
The index is populated with
curl -XPOST 'http://localhost:9200/icu/employee/' -d '
{
  "first_name" : "John",
  "last_name" : "Doe",
  "about" : "สวัสดี ผมมาจากกรุงเทพฯ"
}'
Searching with
curl -XGET 'http://localhost:9200/_search' -d'
{
  "query" : {
    "match" : {
      "about" : "กรุงเทพฯ"
    }
  }
}'
Returns nothing ("hits" : [ ]).
Performing the same search with one of สวัสดี or ผมมาจากกรุงเทพฯ works fine.
I guess I've misconfigured the index; how should it be done?
The missing part is:
"mappings": {
"employee" : {
"properties": {
"about":{
"type": "text",
"analyzer": "icu_analyzer"
}
}
}
}
In the mapping, the document field has to specify which analyzer to use:
[Index] : icu
[type] : employee
[field] : about
PUT /icu
{
  "settings": {
    "analysis": {
      "analyzer": {
        "icu_analyzer" : {
          "char_filter": [
            "icu_normalizer"
          ],
          "tokenizer" : "icu_tokenizer"
        }
      }
    }
  },
  "mappings": {
    "employee" : {
      "properties": {
        "about": {
          "type": "text",
          "analyzer": "icu_analyzer"
        }
      }
    }
  }
}
Test the custom analyzer using the following DSL JSON:
POST /icu/_analyze
{
  "text": "สวัสดี ผมมาจากกรุงเทพฯ",
  "analyzer": "icu_analyzer"
}
The result should be [สวัสดี, ผม, มา, จาก, กรุงเทพฯ]
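Once the index has been recreated with this mapping and the document has been indexed again, the original search should find it; a quick check might be (a sketch using the same field and text as above):
POST /icu/_search
{
  "query" : {
    "match" : {
      "about" : "กรุงเทพฯ"
    }
  }
}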
My suggestion would be: the Kibana Dev Tools console can help you craft queries effectively.
Using the Elasticsearch JDBC importer with this configuration:
bin=/usr/share/elasticsearch/elasticsearch-jdbc-2.1.1.2/bin
lib=/usr/share/elasticsearch/elasticsearch-jdbc-2.1.1.2/lib
echo '{
  "type" : "jdbc",
  "jdbc" : {
    "url" : "ip/db",
    "user" : "myuser",
    "password" : "a7sdf7hsdf8hn78df",
    "sql" : "SELECT title, body, source_id, time_order, type, blablabla...",
    "index" : "importeditems",
    "type" : "item",
    "elasticsearch.host": "_eth0_",
    "detect_json" : false
  }
}' | java \
  -cp "${lib}/*" \
  -Dlog4j.configurationFile=${bin}/log4j2.xml \
  org.xbib.tools.Runner \
  org.xbib.tools.JDBCImporter
I've indexed some documents correctly with the form:
{
  "title": "Tiempo de Opinión: Puede comenzar un ciclo",
  "body": "Sebas Álvaro nos trae cada lunes historias y anécdotas de la montaña<!-- com -->",
  "source_id": 21188,
  "time_order": "1438638043:55c2c6bb96d4c",
  "type": "rss"
}
I'm trying to ignore accents (for example, opinión in title has an ó), so that if a user searches "tiempo de opinión" or "tiempo de opinion" with a match_phrase, it matches the documents with or without the accent.
So after using the importer and indexing everything, I changed my index settings to a default analyzer with an asciifolding filter.
curl -XPOST 'localhost:9200/importeditems/_close'
curl -XPUT 'localhost:9200/importeditems/_settings?pretty=true' -d '{
  "analysis": {
    "analyzer": {
      "default": {
        "tokenizer" : "standard",
        "filter": [ "lowercase", "asciifolding" ]
      }
    }
  }
}'
curl -XPOST 'localhost:9200/importeditems/_open'
Then I run a match_phrase query to match "tiempo de opinion" (no accent) and "tiempo de opinión" (with accent):
# No accent
curl -XGET 'localhost:9200/importeditems/_search?pretty=true' -d'
{
  "query": {
    "match_phrase" : {
      "title" : "tiempo de opinion"
    }
  }
}'
# With accent
curl -XGET 'localhost:9200/importeditems/_search?pretty=true' -d'
{
  "query": {
    "match_phrase" : {
      "title" : "tiempo de opinión"
    }
  }
}'
But neither query returns a match, even though matching documents exist (if I run a match_phrase query for just tiempo de, it returns some hits containing tiempo de opinión).
I think the problem is due to the JDBC importer, because I reproduced the scenario without it: I added another index and entries by hand, changed the index settings to asciifolding in the same way, and everything worked as expected. You can see this working example right here.
If I check the settings of the index created after using the importer (importeditems)
curl -XGET 'localhost:9200/importeditems/_settings?pretty=true'
This outputs:
{
  "importeditems" : {
    "settings" : {
      "index" : {
        "creation_date" : "1457533907278",
        "analysis" : {
          "analyzer" : {
            "default" : {
              "filter" : [ "lowercase", "asciifolding" ],
              "tokenizer" : "standard"
            }
          }
        },
        "number_of_shards" : "5",
        "number_of_replicas" : "1",
        "uuid" : "x",
        "version" : {
          "created" : "2010199"
        }
      }
    }
  }
}
... and if I check the settings of the manually created index (test):
curl -XGET 'localhost:9200/test/_settings?pretty=true'
I get the same exact output:
{
  "test" : {
    "settings" : {
      "index" : {
        "creation_date" : "1457603253278",
        "analysis" : {
          "analyzer" : {
            "default" : {
              "filter" : [ "lowercase", "asciifolding" ],
              "tokenizer" : "standard"
            }
          }
        },
        "number_of_shards" : "5",
        "number_of_replicas" : "1",
        "uuid" : "x",
        "version" : {
          "created" : "2010199"
        }
      }
    }
  }
}
Can someone please tell me why it is not working when I use the Elasticsearch JDBC importer, but it does work when I add raw data?
I finally solved the issue by first changing the settings to add the analysis configuration:
curl -XPOST 'localhost:9200/importeditems/_close'
curl -XPUT 'localhost:9200/importeditems/_settings?pretty=true' -d '{
  "analysis": {
    "analyzer": {
      "default": {
        "tokenizer" : "standard",
        "filter": [ "lowercase", "asciifolding" ]
      }
    }
  }
}'
curl -XPOST 'localhost:9200/importeditems/_open'
... and then importing all the data again.
It's strange, because as I stated in the post, I did exactly the same thing in both cases (with the JDBC importer and with the raw data):
Index data
Change index settings
Make the query with match_phrase
And it worked with the raw data (test) but not with the index I used the importer for (importeditems). The only thing I can think of is that importeditems held more than 12 GB, so it needed time to re-create the content with asciifolding applied; that would explain why the changes were not reflected right after asciifolding was activated.
Anyway, if someone is having the same issue, and especially for those working with a huge amount of data, remember to set the analyzer first, and then index all the data.
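Before kicking off a long re-import, it can also be worth confirming that the new analyzer is really active, for instance with the _analyze API (a sketch against the index above; on 2.x the sample text can be sent as the request body):
curl -XGET 'localhost:9200/importeditems/_analyze?analyzer=default&pretty' -d 'Tiempo de Opinión'
If the settings took effect, the returned tokens should be lowercased and accent-free (tiempo, de, opinion).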
According to the docs:
Queries can find only terms that actually exist in the inverted index, so it is important to ensure that the same analysis process is applied both to the document at index time, and to the query string at search time, so that the terms in the query match the terms in the inverted index.
I am trying to add a custom analyzer.
curl -XPUT 'http://localhost:9200/my_index' -d '{
  "settings" : {
    "analysis" : {
      "filter" : {
        "my_filter" : {
          "type" : "word_delimiter",
          "type_table": [": => ALPHA", "/ => ALPHA"]
        }
      },
      "analyzer" : {
        "my_analyzer" : {
          "type" : "custom",
          "tokenizer" : "whitespace",
          "filter" : ["lowercase", "my_filter"]
        }
      }
    }
  }
}'
It works in my local environment, where I can recreate the index whenever I want; the problem comes when I try to do the same in other environments like QA or prod, where the index has already been created.
{
  "error": "IndexAlreadyExistsException[[my_index] already exists]",
  "status": 400
}
How can I add my custom analyzer through the HTTP API?
In the documentation I found that to update index settings I can do this:
curl -XPUT 'localhost:9200/my_index/_settings' -d '
{
  "index" : {
    "number_of_replicas" : 4
  }
}'
And to update analyzer settings the documentation says:
"...it is required to close the index first and open it after the changes are made."
So I ended up doing this:
curl -XPOST 'http://localhost:9200/my_index/_close'
curl -XPUT 'http://localhost:9200/my_index/_settings' -d '{
  "settings" : {
    "analysis" : {
      "filter" : {
        "my_filter" : {
          "type" : "word_delimiter",
          "type_table": [": => ALPHA", "/ => ALPHA"]
        }
      },
      "analyzer" : {
        "my_analyzer" : {
          "type" : "custom",
          "tokenizer" : "whitespace",
          "filter" : ["lowercase", "my_filter"]
        }
      }
    }
  }
}'
curl -XPOST 'http://localhost:9200/my_index/_open'
Which fixed everything for me.
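To double-check that the new analyzer is actually in place after reopening the index, it can be exercised directly with the _analyze API (a sketch; the sample text is arbitrary):
curl -XGET 'http://localhost:9200/my_index/_analyze?analyzer=my_analyzer&pretty' -d 'Foo:Bar/Baz'
With the word_delimiter type_table above, the colon and slash are treated as letters, so this should come back as a single lowercased token rather than being split.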
For folks using the AWS Elasticsearch service, closing and opening an index is not allowed, so they need to reindex instead, as mentioned here.
Basically: create a temp index with all the mappings and settings of the current original index, add or modify those mappings and settings (which is where the analyzers sit), delete the original index, create a new index with that name, and copy everything back from the temp index, as sketched below.
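A rough sketch of that flow using the _reindex API (assuming a version where _reindex is available; my_index_tmp is a placeholder name, and the settings body is the one from the question above):
# 1) create a temporary index that already carries the new analysis settings
curl -XPUT 'http://localhost:9200/my_index_tmp' -d '{
  "settings" : {
    "analysis" : {
      "filter" : {
        "my_filter" : { "type" : "word_delimiter", "type_table": [": => ALPHA", "/ => ALPHA"] }
      },
      "analyzer" : {
        "my_analyzer" : { "type" : "custom", "tokenizer" : "whitespace", "filter" : ["lowercase", "my_filter"] }
      }
    }
  }
}'
# 2) copy the existing documents into it
curl -XPOST 'http://localhost:9200/_reindex' -d '{
  "source" : { "index" : "my_index" },
  "dest" : { "index" : "my_index_tmp" }
}'
# 3) delete the original, recreate it with the same new settings (repeat step 1 for my_index),
#    copy the documents back, and finally drop the temporary index
curl -XDELETE 'http://localhost:9200/my_index'
curl -XPOST 'http://localhost:9200/_reindex' -d '{
  "source" : { "index" : "my_index_tmp" },
  "dest" : { "index" : "my_index" }
}'
curl -XDELETE 'http://localhost:9200/my_index_tmp'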
I am using Elasticsearch 1.0.2 with a sample dynamic template in my index. Is there any way to derive the field's index name from part of the dynamic field name?
This is my template
{"dynamic_templates":[
"dyn_string_fields": {
"match": "dyn_string_*",
"match_mapping_type": "string",
"mapping": {
"type": "string",
"index" : "analyzed",
"index_name": "{name}"
}
}
}]}
The dynamic templates work and I am able to add fields. Our goal is to add fields with the "dyn_string_" prefix but while searching it should be just the fieldname without the "dyn_string_" prefix. I tested using match_mapping_type to add fields but this will allow any field to be added. Does someone have any suggestions?
I looked at the Elasticsearch API, and there is a transform feature in 1.3 which allows modifying the document before insertion (unfortunately I will not be able to upgrade to that version).
Several aliases can be set in a single template. For a quick illustration, have a look at this dummy example:
curl -XPUT localhost:9200/_template/test_template -d '
{
  "template" : "test_*",
  "settings" : {
    "number_of_shards" : 4
  },
  "aliases" : {
    "name_for_alias" : {}
  },
  "mappings" : {
    "type" : {
      "properties" : {
        "id" : {
          "type" : "integer",
          "include_in_all" : false
        },
        "test_user_id" : {
          "type" : "integer",
          "include_in_all" : false
        }
      }
    }
  }
}
'
There "name_for_alias" is you simple alias. As parameter there can be defined preset filters if you want use alias for filtering data.
More information can be found here: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-templates.html