How to change a global Elasticsearch index creation setting? - elasticsearch

OS: #1 SMP Debian 4.19.152-1 (2020-10-18)
I installed Elasticsearch following this tutorial https://www.elastic.co/guide/en/elasticsearch/reference/current/deb.html:
{
"name" : "web",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "HSeOQvkHRey2P_JN2Nv-GA",
"version" : {
"number" : "7.10.0",
"build_flavor" : "default",
"build_type" : "deb",
"build_hash" : "51e9d6f22758d0374a0f3f5c6e8f3a7997850f96",
"build_date" : "2020-11-09T21:30:33.964949Z",
"build_snapshot" : false,
"lucene_version" : "8.7.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
How can I globally change some of the index creation options listed at https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules.html, such as index.max_ngram_diff, for example?

You need to use the index update settings API:
PUT your-index/_settings
{
"index.max_ngram_diff": 2
}
If you want this setting to be applied to indexes that are going to be created in the future, you can create an index template that will be applied to the matching future indexes:
PUT /_index_template/template_1
{
"index_patterns" : ["my-index-*"],
"priority" : 1,
"template": {
"settings" : {
"index.max_ngram_diff" : 2
}
}
}
After that, every time you create a new index whose name matches my-index-*, that new index will get the settings defined in the template.
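To verify, you can create an index whose name matches the pattern and read its settings back (my-index-000001 is just a hypothetical name here):
PUT /my-index-000001
GET /my-index-000001/_settings
The response should include "index.max_ngram_diff": "2" among the index settings.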

I would recommend using index templates for this. You can preconfigure a template with the settings you need, and it will be applied automatically on index creation.
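If you only need the setting on a single index, you can also pass it directly in the index creation request; a minimal sketch with a hypothetical index name:
PUT /my-new-index
{
  "settings": {
    "index.max_ngram_diff": 2
  }
}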

Related

Problem when running Elasticsearch on Windows 10

When I am running Elasticsearch on Windows 10, I cannot open the Elastic portal on localhost, and I get this message:
{
"name" : "ISBUS-PC02",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "FE38HTACR2evmiVFD7ciag",
"version" : {
"number" : "8.6.0",
"build_flavor" : "default",
"build_type" : "zip",
"build_hash" : "f67ef2df40237445caa70e2fef79471cc608d70d",
"build_date" : "2023-01-04T09:35:21.782467981Z",
"build_snapshot" : false,
"lucene_version" : "9.4.2",
"minimum_wire_compatibility_version" : "7.17.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "You Know, for Search"
}
Nothing, because this is the last issue I got stuck on.
The JSON you provided means Elasticsearch is up and running perfectly fine. Note that the response also includes the cluster name, version, build type, and other information.
You don't have to worry: your Elasticsearch cluster is up and running, and you can execute other requests against it.
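For example, a quick follow-up request to check cluster health (a sketch; adjust the scheme and credentials to your setup, since 8.x clusters enable security and HTTPS by default):
curl -k -u elastic:your_password "https://localhost:9200/_cluster/health?pretty"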

Could not validate a connection to Elasticsearch. Unknown 401 error from Elasticsearch null

{
"name" : "DESKTOP-BNTBBBG",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "DlRfjfPlRg69wUBRfy3kKQ",
"version" : {
"number" : "8.1.3",
"build_flavor" : "default",
"build_type" : "zip",
"build_hash" : "39afaa3c0fe7db4869a161985e240bd7182d7a07",
"build_date" : "2022-04-19T08:13:25.444693396Z",
"build_snapshot" : false,
"lucene_version" : "9.0.0",
"minimum_wire_compatibility_version" : "7.17.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "You Know, for Search"
}
But when running this command
php bin/magento setup:install --base-url="http://versionup.magento.com" --db-host="localhost" --db-name="magento2" --db-user="magento2" --db-password="magento2" --admin-firstname="admin" --admin-lastname="admin" --admin-email="user@example.com" --admin-user="admin" --admin-password="Admin#123456" --language="en_US" --currency="USD" --timezone="America/Chicago" --use-rewrites="1" --backend-frontname="admin" --search-engine=elasticsearch7 --elasticsearch-host="localhost" --elasticsearch-port=9200
Could not validate a connection to Elasticsearch. Unknown 401 error from Elasticsearch null
I had the same problem, but it's fixed now.
Make sure you enable xpack.security.http.ssl in elasticsearch.yml (I had disabled it before).
Then change the host to:
--elasticsearch-host="https://localhost"
Then add these options to your command for authentication:
--elasticsearch-enable-auth=true --elasticsearch-username="elastic" --elasticsearch-password="your password"
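A quick way to confirm the credentials and the HTTPS endpoint before re-running setup:install (a sketch using the built-in elastic user; replace the password placeholder with your own):
curl -k -u elastic:your_password https://localhost:9200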

How to use wildcard field in Aiven's ElasticSearch?

I wanted to test the new wildcard field type in my ElasticSearch instance (Aiven).
I've tried this:
PUT /wildcard_test
{
"mappings" : {
"properties" : {
"wildcard_field" : {
"type" : "wildcard"
}
}
}
}
And I'm getting this response:
{
"error" : {
"root_cause" : [
{
"type" : "mapper_parsing_exception",
"reason" : "No handler for type [wildcard] declared on field [wildcard_field]"
}
],
"type" : "mapper_parsing_exception",
"reason" : "Failed to parse mapping [_doc]: No handler for type [wildcard] declared on field [wildcard_field]",
"caused_by" : {
"type" : "mapper_parsing_exception",
"reason" : "No handler for type [wildcard] declared on field [wildcard_field]"
}
},
"status" : 400
}
Here is the info regarding the instance:
GET /
{
"name" : "...",
"cluster_name" : "...",
"cluster_uuid" : "...",
"version" : {
"number" : "7.9.3",
"build_flavor" : "unknown",
"build_type" : "unknown",
"build_hash" : "c4138e51121ef06a6404866cddc601906fe5c868",
"build_date" : "2020-10-16T10:36:16.141335Z",
"build_snapshot" : false,
"lucene_version" : "8.6.2",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
GET /_license
{
"error" : {
"root_cause" : [
{
"type" : "invalid_index_name_exception",
"reason" : "Invalid index name [_license], must not start with '_'.",
"index_uuid" : "_na_",
"index" : "_license"
}
],
"type" : "invalid_index_name_exception",
"reason" : "Invalid index name [_license], must not start with '_'.",
"index_uuid" : "_na_",
"index" : "_license"
},
"status" : 400
}
My understanding is that this feature is provided by X-Pack, but I don't know whether that is included in Aiven's service. Is there some way to make this work?
Although the wildcard field type was indeed added in 7.9, it is (unfortunately) only available as part of an X-Pack subscription, and I presume the distribution running on Aiven is OSS, which lacks this and other X-Pack features.
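If you want to check which X-Pack features your cluster exposes, the info API is available on the default distribution (on an OSS build it fails, much like the _license call above):
GET /_xpack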

Problem with Elasticsearch Index Lifecycle Policy that doesn't roll over

In order to evaluate its potential to help with our daily operations, I have deployed Elasticsearch and Kibana (7.7.1 with a BASIC license) and created an index template for Ntopng (our monitoring platform).
Since indexes keep growing, I want to delete Ntopng indexes older than 20 days or so. Therefore I created a lifecycle policy called ntopng in which the time-stamped index should roll over after 1 day (for testing purposes) and then be deleted 2 days after the rollover:
Next I picked a time-stamped index created that day and applied the lifecycle policy to it:
Before that, I had to create an alias for that Index, so I did it manually:
POST /_aliases
{
"actions" : [
{ "add" : { "index" : "ntopng-2020.09.09", "alias" : "ntopng_Alias" } }
]
}
All looked good after that (I guess), as no errors or alarms were displayed:
"indices" : {
"ntopng-2020.09.09" : {
"index" : "ntopng-2020.09.09",
"managed" : true,
"policy" : "ntopng",
"lifecycle_date_millis" : 1599609600433,
"age" : "20.14h",
"phase" : "hot",
"phase_time_millis" : 1599681721821,
"action" : "rollover",
"action_time_millis" : 1599680521920,
"step" : "check-rollover-ready",
"step_time_millis" : 1599681721821,
"is_auto_retryable_error" : true,
"failed_step_retry_count" : 1,
"phase_execution" : {
"policy" : "ntopng",
"phase_definition" : {
"min_age" : "0ms",
"actions" : {
"rollover" : {
"max_age" : "1d"
},
"set_priority" : {
"priority" : 100
}
}
},
"version" : 4,
"modified_date_in_millis" : 1599509572867
}
}
My expectation was that the next day the policy would automatically roll over to the next index (ntopng-2020.09.10), so that the initial index would eventually be deleted two days after the rollover.
Instead, I got the following errors:
GET ntopng-*/_ilm/explain
{
"indices" : {
"ntopng-2020.09.09" : {
"index" : "ntopng-2020.09.09",
"managed" : true,
"policy" : "ntopng",
"lifecycle_date_millis" : 1599609600433,
"age" : "1.94d",
"phase" : "hot",
"phase_time_millis" : 1599776521822,
"action" : "rollover",
"action_time_millis" : 1599680521920,
"step" : "ERROR",
"step_time_millis" : 1599777121822,
"failed_step" : "check-rollover-ready",
"is_auto_retryable_error" : true,
"failed_step_retry_count" : 80,
"step_info" : {
"type" : "illegal_argument_exception",
"reason" : """index name [ntopng-2020.09.09] does not match pattern '^.*-\d+$'""",
"stack_trace" : """java.lang.IllegalArgumentException: index name [ntopng-2020.09.09] does not match pattern '^.*-\d+$'
at org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction.generateRolloverIndexName(TransportRolloverAction.java:241)
at org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction.masterOperation(TransportRolloverAction.java:133)
at org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction.masterOperation(TransportRolloverAction.java:73)
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.lambda$doStart$3(TransportMasterNodeAction.java:170)
at org.elasticsearch.action.ActionRunnable$2.doRun(ActionRunnable.java:73)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:225)
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.doStart(TransportMasterNodeAction.java:170)
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.start(TransportMasterNodeAction.java:133)
at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:110)
at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:59)
at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:153)
at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.apply(SecurityActionFilter.java:123)
at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:151)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:129)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:64)
at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:83)
at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:72)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:399)
at org.elasticsearch.xpack.core.ClientHelper.executeAsyncWithOrigin(ClientHelper.java:92)
at org.elasticsearch.xpack.core.ClientHelper.executeWithHeadersAsync(ClientHelper.java:155)
at org.elasticsearch.xpack.ilm.LifecyclePolicySecurityClient.doExecute(LifecyclePolicySecurityClient.java:51)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:399)
at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1234)
at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.rolloverIndex(AbstractClient.java:1736)
at org.elasticsearch.xpack.core.ilm.WaitForRolloverReadyStep.evaluateCondition(WaitForRolloverReadyStep.java:127)
at org.elasticsearch.xpack.ilm.IndexLifecycleRunner.runPeriodicStep(IndexLifecycleRunner.java:173)
at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggerPolicies(IndexLifecycleService.java:329)
at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggered(IndexLifecycleService.java:267)
at org.elasticsearch.xpack.core.scheduler.SchedulerEngine.notifyListeners(SchedulerEngine.java:183)
at org.elasticsearch.xpack.core.scheduler.SchedulerEngine$ActiveSchedule.run(SchedulerEngine.java:211)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
at java.base/java.lang.Thread.run(Thread.java:832)
"""
},
"phase_execution" : {
"policy" : "ntopng",
"phase_definition" : {
"min_age" : "0ms",
"actions" : {
"rollover" : {
"max_age" : "1d"
},
"set_priority" : {
"priority" : 100
}
}
},
"version" : 4,
"modified_date_in_millis" : 1599509572867
}
}
"ntopng-2020.09.10" : {
"index" : "ntopng-2020.09.10",
"managed" : true,
"policy" : "ntopng",
"lifecycle_date_millis" : 1599696000991,
"age" : "22.57h",
"phase" : "hot",
"phase_time_millis" : 1599776521844,
"action" : "rollover",
"action_time_millis" : 1599696122033,
"step" : "ERROR",
"step_time_millis" : 1599777121839,
"failed_step" : "check-rollover-ready",
"is_auto_retryable_error" : true,
"failed_step_retry_count" : 67,
"step_info" : {
"type" : "illegal_argument_exception",
"reason" : "index.lifecycle.rollover_alias [ntopng_Alias] does not point to index [ntopng-2020.09.10]",
"stack_trace" : """java.lang.IllegalArgumentException: index.lifecycle.rollover_alias [ntopng_Alias] does not point to index [ntopng-2020.09.10]
at org.elasticsearch.xpack.core.ilm.WaitForRolloverReadyStep.evaluateCondition(WaitForRolloverReadyStep.java:104)
at org.elasticsearch.xpack.ilm.IndexLifecycleRunner.runPeriodicStep(IndexLifecycleRunner.java:173)
at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggerPolicies(IndexLifecycleService.java:329)
at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggered(IndexLifecycleService.java:267)
at org.elasticsearch.xpack.core.scheduler.SchedulerEngine.notifyListeners(SchedulerEngine.java:183)
at org.elasticsearch.xpack.core.scheduler.SchedulerEngine$ActiveSchedule.run(SchedulerEngine.java:211)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
at java.base/java.lang.Thread.run(Thread.java:832)
"""
},
"phase_execution" : {
"policy" : "ntopng",
"phase_definition" : {
"min_age" : "0ms",
"actions" : {
"rollover" : {
"max_age" : "1d"
},
"set_priority" : {
"priority" : 100
}
}
}
The first index error reads: "index name [ntopng-2020.09.09] does not match pattern '^.*-\d+$'",
while the second one displays: "index.lifecycle.rollover_alias [ntopng_Alias] does not point to index [ntopng-2020.09.10]".
Please note that I'm learning the basics of ES index management, so I'd appreciate any clue as to what the problem might be.
OK, I just found that the index name must end with a numeric pattern like 0001, and not 2020.09.09, so I may need to find an alternative way to make it work.
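A common way to make this work is to bootstrap a numbered index that the alias writes to, so rollover can increment the suffix; a sketch reusing the policy and alias names from the question:
PUT /ntopng-000001
{
  "aliases": {
    "ntopng_Alias": {
      "is_write_index": true
    }
  },
  "settings": {
    "index.lifecycle.name": "ntopng",
    "index.lifecycle.rollover_alias": "ntopng_Alias"
  }
}
Setting is_write_index keeps the alias pointed at the index currently being rolled over, which also addresses the second error above.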
As per that regex, you can use a date pattern as well: instead of 2020.01.01, use 2020-01-01, which ends in digits after the final hyphen and therefore matches.
This should work as well. You can check the regex here: https://regex101.com/r/VclptX/1

Elasticsearch showing error 406 when inserting

Just installed ElasticSearch on my mac (brew install elasticsearch).
Then started it with: brew services start elasticsearch.
I can see it is running:
curl localhost:9200
{
"name" : "84ucGje",
"cluster_name" : "elasticsearch_dnk306",
"cluster_uuid" : "yU6wdZpJT_-1OvaFf7l7Ug",
"version" : {
"number" : "6.5.0",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "816e6f6",
"build_date" : "2018-11-09T18:58:36.352602Z",
"build_snapshot" : false,
"lucene_version" : "7.5.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
When I try to index a document into it, it fails with a 406 error:
curl -XPOST localhost:9200/test/1 -d '{"title":"Hello world"}'
{"error":"Content-Type header [application/x-www-form-urlencoded] is not supported","status":406}
What do I need to do to make this work?
Thanks.
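Since Elasticsearch 6.0, request bodies must be sent with an explicit JSON Content-Type, which is exactly what the 406 error is complaining about. A sketch of the same request with the header added:
curl -XPOST localhost:9200/test/1 -H 'Content-Type: application/json' -d '{"title":"Hello world"}'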
