I've set up an Elasticsearch cluster in Kubernetes, but I'm getting the error "MasterNotDiscoveredException". I'm not sure where to begin debugging this error, as there doesn't appear to be anything useful in the logs of any of the nodes:
{"type": "server", "timestamp": "2022-06-15T00:44:17,226Z", "level": "WARN", "component": "r.suppressed", "cluster.name": "logging-ek", "node.name": "logging-ek-es-master-0", "message": "path: /_bulk, params: {}",
"stacktrace": ["org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized, SERVICE_UNAVAILABLE/2/no master];",
"at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:179) ~[elasticsearch-7.17.1.jar:7.17.1]",
"at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation.handleBlockExceptions(TransportBulkAction.java:635) [elasticsearch-7.17.1.jar:7.17.1]",
"at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation.doRun(TransportBulkAction.java:481) [elasticsearch-7.17.1.jar:7.17.1]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) [elasticsearch-7.17.1.jar:7.17.1]",
"at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$2.onTimeout(TransportBulkAction.java:669) [elasticsearch-7.17.1.jar:7.17.1]",
"at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:345) [elasticsearch-7.17.1.jar:7.17.1]",
"at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:263) [elasticsearch-7.17.1.jar:7.17.1]",
"at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:660) [elasticsearch-7.17.1.jar:7.17.1]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:718) [elasticsearch-7.17.1.jar:7.17.1]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]",
"at java.lang.Thread.run(Thread.java:833) [?:?]",
"Suppressed: org.elasticsearch.discovery.MasterNotDiscoveredException",
"\tat org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$2.onTimeout(TransportMasterNodeAction.java:297) ~[elasticsearch-7.17.1.jar:7.17.1]",
"\tat org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:345) [elasticsearch-7.17.1.jar:7.17.1]",
"\tat org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:263) [elasticsearch-7.17.1.jar:7.17.1]",
"\tat org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:660) [elasticsearch-7.17.1.jar:7.17.1]",
"\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:718) [elasticsearch-7.17.1.jar:7.17.1]",
"\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]",
"\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]",
"\tat java.lang.Thread.run(Thread.java:833) [?:?]"] }
That is pretty much the only log output I've ever seen.
It does appear that the cluster sees all of my master nodes:
{"type": "server", "timestamp": "2022-06-15T00:45:41,915Z", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "logging-ek", "node.name": "logging-ek-es-master-0", "message": "master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and [cluster.initial_master_nodes] is empty on this node: have discovered [{logging-ek-es-master-0}{fHLQrvLsTJ6UvR_clSaxfg}{iLoGrnWSTpiZxq59z7I5zA}{10.42.64.4}{10.42.64.4:9300}{mr}, {logging-ek-es-master-1}{EwF8WLIgSF6Q1Q46_51VlA}{wz5rg74iThicJdtzXZg29g}{10.42.240.8}{10.42.240.8:9300}{mr}, {logging-ek-es-master-2}{jtrThk_USA2jUcJYoIHQdg}{HMvZ_dUfTM-Ar4ROeIOJlw}{10.42.0.5}{10.42.0.5:9300}{mr}]; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, 127.0.0.1:9305, [::1]:9300, [::1]:9301, [::1]:9302, [::1]:9303, [::1]:9304, [::1]:9305, 10.42.0.5:9300, 10.42.240.8:9300] from hosts providers and [{logging-ek-es-master-0}{fHLQrvLsTJ6UvR_clSaxfg}{iLoGrnWSTpiZxq59z7I5zA}{10.42.64.4}{10.42.64.4:9300}{mr}] from last-known cluster state; node term 0, last-accepted version 0 in term 0" }
and I've verified that they can in fact reach each other over the network. Is there anything else, or anywhere else, I need to look for errors? I installed Elasticsearch via the Elastic operator.
Start by doing curl against port 9300 between the nodes. Make sure you get a valid response going both ways.
Also make sure your MTU is set right, or this can happen as well. It's a network problem most of the time.
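As a concrete starting point, you can probe the transport port from inside one master pod and repeat it in the other direction. A sketch only: the pod and service names below are taken from the question's logs and ECK's usual naming, so treat them as assumptions and substitute your own.

```shell
# From one master pod, curl a peer's transport port (9300).
# A reachable transport port typically answers with
# "This is not an HTTP port", which is enough to prove connectivity.
kubectl exec logging-ek-es-master-0 -- \
  curl -s http://10.42.240.8:9300

# Repeat from logging-ek-es-master-1 toward 10.42.64.4:9300 to
# confirm the path works both ways.
```

If either direction hangs or is refused, the problem is in the CNI/network layer rather than in Elasticsearch itself.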
I'm getting this error when I try to run the gulp build command:
[09:08:11] Starting 'build:compile-css'...
Deprecation Warning: Using / for division outside of calc() is deprecated and will be removed in Dart Sass 2.0.0.
Recommendation: math.div($spacer, 2) or calc($spacer / 2)
More info and automated migrator: https://sass-lang.com/d/slash-div
╷
306 │ $headings-margin-bottom: $spacer / 2 !default;
│ ^^^^^^^^^^^
╵
build\_css\clay\bootstrap\_variables.scss 306:31 @import
build\_css\clay\base.scss 10:9 @import
build\_css\clay.scss 1:9 root stylesheet
Deprecation Warning: Using / for division outside of calc() is deprecated and will be removed in Dart Sass 2.0.0.
Recommendation: math.div($input-padding-y, 2) or calc($input-padding-y / 2)
More info and automated migrator: https://sass-lang.com/d/slash-div
╷
501 │ $input-height-inner-quarter: add($input-line-height * .25em, $input-padding-y / 2) !default;
│ ^^^^^^^^^^^^^^^^^^^^
╵
build\_css\clay\bootstrap\_variables.scss 501:73 @import
build\_css\clay\base.scss 10:9 @import
build\_css\clay.scss 1:9 root stylesheet
Deprecation Warning: Using / for division outside of calc() is deprecated and will be removed in Dart Sass 2.0.0.
Recommendation: math.div($custom-control-indicator-size, 2) or calc($custom-control-indicator-size / 2)
More info and automated migrator: https://sass-lang.com/d/slash-div
╷
571 │ $custom-switch-indicator-border-radius: $custom-control-indicator-size / 2 !default;
│ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
╵
build\_css\clay\bootstrap\_variables.scss 571:49 @import
build\_css\clay\base.scss 10:9 @import
build\_css\clay.scss 1:9 root stylesheet
Deprecation Warning: Using / for division outside of calc() is deprecated and will be removed in Dart Sass 2.0.0.
Recommendation: math.div($spacer, 2) or calc($spacer / 2)
More info and automated migrator: https://sass-lang.com/d/slash-div
╷
717 │ $nav-divider-margin-y: $spacer / 2 !default;
│ ^^^^^^^^^^^
╵
build\_css\clay\bootstrap\_variables.scss 717:37 @import
build\_css\clay\base.scss 10:9 @import
build\_css\clay.scss 1:9 root stylesheet
Deprecation Warning: Using / for division outside of calc() is deprecated and will be removed in Dart Sass 2.0.0.
Recommendation: math.div($spacer, 2) or calc($spacer / 2)
More info and automated migrator: https://sass-lang.com/d/slash-div
╷
722 │ $navbar-padding-y: $spacer / 2 !default;
│ ^^^^^^^^^^^
╵
build\_css\clay\bootstrap\_variables.scss 722:37 @import
build\_css\clay\base.scss 10:9 @import
build\_css\clay.scss 1:9 root stylesheet
[09:08:14] 'build:compile-css' errored after 3.14 s
[09:08:14] Error in plugin "sass"
Message:
build\_css\compat\components\_dropdowns.scss
Error: compound selectors may no longer be extended.
Consider `@extend .dropdown-item, .disabled` instead.
╷
34 │ @extend .dropdown-item.disabled;
│ ^^^^^^^^^^^^^^^^^^^^^^^
╵
build\_css\compat\components\_dropdowns.scss 34:11 root stylesheet
Details:
formatted: Error: compound selectors may no longer be extended.
Consider `@extend .dropdown-item, .disabled` instead.
╷
34 │ @extend .dropdown-item.disabled;
│ ^^^^^^^^^^^^^^^^^^^^^^^
╵
build\_css\compat\components\_dropdowns.scss 34:11 root stylesheet
line: 34
column: 11
file: C:\Users\fmateosg\IdeaProjects\test\themes\base-theme\build\_css\compat\components\_dropdowns.scss
status: 1
messageFormatted: build\_css\compat\components\_dropdowns.scss
Error: compound selectors may no longer be extended.
Consider `@extend .dropdown-item, .disabled` instead.
╷
34 │ @extend .dropdown-item.disabled;
│ ^^^^^^^^^^^^^^^^^^^^^^^
╵
build\_css\compat\components\_dropdowns.scss 34:11 root stylesheet
messageOriginal: compound selectors may no longer be extended.
Consider `@extend .dropdown-item, .disabled` instead.
╷
34 │ @extend .dropdown-item.disabled;
│ ^^^^^^^^^^^^^^^^^^^^^^^
╵
build\_css\compat\components\_dropdowns.scss 34:11 root stylesheet
relativePath: build\_css\compat\components\_dropdowns.scss
domainEmitter: [object Object]
domainThrown: false
[09:08:14] 'build' errored after 6.36 s
I know there is a similar question to this one, but the answers there couldn't solve my problem. This is my folder structure:
Folder structure
I copied the _dropdowns.scss file into src/css/compat/components/ and made the modification there, but it still gives me the error when I rebuild.
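For what it's worth, the error text itself contains the suggested rewrite: Dart Sass no longer allows extending a compound selector, so each simple selector has to be extended separately. Applied to line 34 of _dropdowns.scss, that would look something like this (a sketch of the compiler's suggestion, not tested against the theme build, and note the two forms are not strictly equivalent in the CSS they produce):

```scss
// Rejected by Dart Sass: extending the compound selector as a whole
// @extend .dropdown-item.disabled;

// Dart Sass's suggested form: extend each simple selector separately
@extend .dropdown-item, .disabled;
```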
I had the same problem because I accidentally upgraded liferay-theme-tasks in package.json to version 11.2.2.
If that's the case, downgrade liferay-theme-tasks to version ^10.0.2, remove the node_modules folder, and run npm install again. gulp build should pass after that.
I'm using Node.js version 14.17.0 and gulp version 4.0.2.
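A minimal sketch of that sequence (assuming npm is the package manager; the version pin is the one from the answer above):

```shell
# Pin liferay-theme-tasks back to the 10.x line
npm install --save-dev "liferay-theme-tasks@^10.0.2"

# Start from a clean dependency tree
rm -rf node_modules
npm install

# Rebuild the theme
gulp build
```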
I need to send an email when the application shuts down.
I have created the following method:
void onStop(@Observes ShutdownEvent event) throws ParserConfigurationException {
    mailer.send(Mail.withText("test@email.com", "STOP TEST", "HERE IS STOP A TEST"));
    Quarkus.waitForExit();
}
But I always receive the following error:
Caused by: java.net.ConnectException: Connection refused (Connection refused)
    at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412)
    at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255)
    at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237)
    at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.base/java.net.Socket.connect(Socket.java:609)
    at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:368)
    at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
    ... 32 more
I am quite sure that all the connections are closed by the time I try to send the email. This is very similar to the question asked here -> How to accept http requests after shutdown signal in Quarkus?, but it's not clear to me how to implement the solution proposed there.
Any help is greatly appreciated.
When I create table as follows:
CREATE TABLE partition_v3_cluster ON CLUSTER perftest_3shards_3replicas(
ID String,
URL String,
EventTime Date
) ENGINE = MergeTree()
PARTITION BY toYYYYMM(EventTime)
ORDER BY ID;
I get errors:
Query id: fe98c8b6-16af-44a1-b8c9-2bf10d9866ea
┌─host────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ 10.18.1.131 │ 9000 │    371 │ Code: 371, e.displayText() = DB::Exception: There are two exactly the same ClickHouse instances 10.18.1.131:9000 in cluster perftest_3shards_3replicas (version 21.6.3.14 (official build)) │ 2 │ 0 │
│ 10.18.1.133 │ 9000 │    371 │ Code: 371, e.displayText() = DB::Exception: There are two exactly the same ClickHouse instances 10.18.1.133:9000 in cluster perftest_3shards_3replicas (version 21.6.3.14 (official build)) │ 1 │ 0 │
│ 10.18.1.132 │ 9000 │    371 │ Code: 371, e.displayText() = DB::Exception: There are two exactly the same ClickHouse instances 10.18.1.132:9000 in cluster perftest_3shards_3replicas (version 21.6.3.14 (official build)) │ 0 │ 0 │
└─────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
3 rows in set. Elapsed: 0.149 sec.
Received exception from server (version 21.6.3):
Code: 371. DB::Exception: Received from localhost:9000. DB::Exception: There was an error on [10.18.1.131:9000]: Code: 371, e.displayText() = DB::Exception: There are two exactly the same ClickHouse instances 10.18.1.131:9000 in cluster perftest_3shards_3replicas (version 21.6.3.14 (official build)).
And here is my metrika.xml:
<?xml version="1.0" encoding="utf-8"?>
<yandex>
    <remote_servers>
        <perftest_3shards_3replicas>
            <shard>
                <replica>
                    <host>10.18.1.131</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>10.18.1.132</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>10.18.1.133</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <replica>
                    <host>10.18.1.131</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>10.18.1.132</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>10.18.1.133</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <replica>
                    <host>10.18.1.131</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>10.18.1.132</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>10.18.1.133</host>
                    <port>9000</port>
                </replica>
            </shard>
        </perftest_3shards_3replicas>
    </remote_servers>
    <zookeeper>
        <node>
            <host>10.18.1.131</host>
            <port>2181</port>
        </node>
        <node>
            <host>10.18.1.132</host>
            <port>2181</port>
        </node>
        <node>
            <host>10.18.1.133</host>
            <port>2181</port>
        </node>
    </zookeeper>
    <macros>
        <shard>01</shard>
        <replica>01</replica>
    </macros>
</yandex>
I don't know where it went wrong; can someone help me? Thank you.
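For reference, the exception complains that the same host:port pair occurs in more than one `<shard>` of `perftest_3shards_3replicas`: every replica endpoint may appear in a cluster definition only once. A layout along these lines (purely illustrative: one node per shard, no replication, reusing the question's hosts) would not trigger Code 371:

```xml
<perftest_3shards_3replicas>
    <shard>
        <replica><host>10.18.1.131</host><port>9000</port></replica>
    </shard>
    <shard>
        <replica><host>10.18.1.132</host><port>9000</port></replica>
    </shard>
    <shard>
        <replica><host>10.18.1.133</host><port>9000</port></replica>
    </shard>
</perftest_3shards_3replicas>
```

With only three servers you can have 3 shards or 3 replicas, but not 3 shards × 3 replicas; that would need nine distinct endpoints (or distinct ports per instance).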
I upgraded the Elasticsearch chart in Kubernetes from version 6.6 to 7.10.2. The data and master pods are running and ready, but the clients aren't ready (2 clients, 2 data, 3 masters).
This is their status:
NAME READY STATUS RESTARTS AGE
elasticsearch-client-685c875bb5-5mcxg 0/1 Running 0 2m23s
elasticsearch-client-685c875bb5-cs9lq 0/1 Running 0 24m
When I run describe I see this warning:
Warning Unhealthy 10s (x10 over 100s) kubelet, Readiness probe failed: Get http://_cluster/health: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
and in kubectl logs I get this:
{"type": "server", "timestamp": "2021-01-24T13:43:41,318Z", "level": "WARN", "component": "r.suppressed", "cluster.name": "elasticsearch", "node.name": "elasticsearch-client-685c875bb5-5mcxg", "message": "path: /_cluster/health, params: {}",
"stacktrace": ["org.elasticsearch.discovery.MasterNotDiscoveredException: null",
"at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$2.onTimeout(TransportMasterNodeAction.java:230) [elasticsearch-7.10.2.jar:7.10.2]",
"at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:335) [elasticsearch-7.10.2.jar:7.10.2]",
"at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:252) [elasticsearch-7.10.2.jar:7.10.2]",
"at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:601) [elasticsearch-7.10.2.jar:7.10.2]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:684) [elasticsearch-7.10.2.jar:7.10.2]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]",
"at java.lang.Thread.run(Thread.java:832) [?:?]"] }
I set the readiness and liveness initialDelaySeconds to 90.
What might be the problem here?
Since you have upgraded from 6.x to 7.x, make sure that you have set cluster.initial_master_nodes in your env or in the elasticsearch.yml config file.
You must also have an odd number of master nodes, e.g. 1, 3, 5 and so on; usually 3 masters is optimal. Otherwise your cluster won't work due to lack of quorum.
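A minimal sketch of that setting (the node names below are assumptions based on typical chart naming; they must match each master's node.name, and the setting only matters for the first bootstrap of a brand-new 7.x cluster):

```yaml
# elasticsearch.yml on each master-eligible node
cluster.initial_master_nodes:
  - elasticsearch-master-0
  - elasticsearch-master-1
  - elasticsearch-master-2
```

The same value can usually be supplied through the chart as the env var `cluster.initial_master_nodes` instead of editing the config file.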
I'm having trouble changing the config of an Elasticsearch cluster deployed in K8s.
I want to apply this config to my Elasticsearch nodes:
# Force all memory to be locked, forcing the JVM to never swap
bootstrap.mlockall: true
## Threadpool Settings ##
# Search pool
threadpool.search.type: fixed
threadpool.search.size: 20
threadpool.search.queue_size: 100
# Bulk pool
threadpool.bulk.type: fixed
threadpool.bulk.size: 60
threadpool.bulk.queue_size: 300
# Index pool
threadpool.index.type: fixed
threadpool.index.size: 20
threadpool.index.queue_size: 100
# Indices settings
indices.memory.index_buffer_size: 30%
indices.memory.min_shard_index_buffer_size: 12mb
indices.memory.min_index_buffer_size: 96mb
# Cache Sizes
indices.fielddata.cache.size: 15%
indices.fielddata.cache.expire: 6h
indices.cache.filter.size: 15%
indices.cache.filter.expire: 6h
# Indexing Settings for Writes
index.refresh_interval: 30s
index.translog.flush_threshold_ops: 50000
To deploy Elasticsearch in K8s, I follow two steps:
Step 1: Create a .yml file like this:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: esc-lamtest
  namespace: lamtest
spec:
  version: 7.9.0
  nodeSets:
  - name: basic-1
    count: 3
    config:
      node.master: true
      node.data: true
      node.ingest: true
      # Search pool
      thread_pool.search.queue_size: 50
      thread_pool.search.size: 20
      thread_pool.search.min_queue_size: 10
      thread_pool.search.max_queue_size: 100
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 150Gi
        storageClassName: vnptit-nfs
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: -Xms4g -Xmx4g
          resources:
            requests:
              memory: 0Gi
              cpu: 0Gi
            limits:
              memory: 0Gi
              cpu: 0Gi
  http:
    tls:
      selfSignedCertificate:
        disabled: true
Step 2: I run the command:
kubectl apply -f lamtest.yaml
But I can only apply the config for "Search pool".
When I apply the config for # Bulk pool, # Index pool, # Indices settings, # Cache Sizes and # Indexing Settings for Writes, my Elasticsearch fails, and here are my logs:
"Suppressed: java.lang.IllegalArgumentException: unknown setting [thread_pool.bulk.queue_size] did you mean any of [thread_pool.get.queue_size, thread_pool.write.queue_size, thread_pool.analyze.queue_size, thread_pool.search.queue_size, thread_pool.listener.queue_size]?",
"\tat org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:544) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:489) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:460) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:431) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.common.settings.SettingsModule.<init>(SettingsModule.java:149) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.node.Node.<init>(Node.java:385) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.node.Node.<init>(Node.java:277) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:227) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:227) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:393) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:170) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:161) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:127) ~[elasticsearch-cli-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:126) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-7.9.0.jar:7.9.0]",
"Suppressed: java.lang.IllegalArgumentException: unknown setting [thread_pool.bulk.size] did you mean any of [thread_pool.get.size, thread_pool.write.size, thread_pool.analyze.size, thread_pool.search.size]?",
"\tat org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:544) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:489) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:460) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:431) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.common.settings.SettingsModule.<init>(SettingsModule.java:149) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.node.Node.<init>(Node.java:385) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.node.Node.<init>(Node.java:277) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:227) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:227) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:393) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:170) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:161) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:127) ~[elasticsearch-cli-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:126) ~[elasticsearch-7.9.0.jar:7.9.0]",
"\tat org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-7.9.0.jar:7.9.0]"] }
uncaught exception in thread [main]
java.lang.IllegalArgumentException: unknown setting [thread_pool.bulk.type] please check that any required plugins are installed, or check the breaking changes documentation for removed settings
at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:544)
at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:489)
at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:460)
at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:431)
at org.elasticsearch.common.settings.SettingsModule.<init>(SettingsModule.java:149)
at org.elasticsearch.node.Node.<init>(Node.java:385)
at org.elasticsearch.node.Node.<init>(Node.java:277)
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:227)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:227)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:393)
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:170)
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:161)
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:127)
at org.elasticsearch.cli.Command.main(Command.java:90)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:126)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92)
For complete error details, refer to the log at /usr/share/elasticsearch/logs/esc-lamtest.log
The error message is very clear: you have not used valid names for the other thread pool settings. Notice the first line of your error message carefully:
"Suppressed: java.lang.IllegalArgumentException: unknown setting
[thread_pool.bulk.queue_size] did you mean any of
[thread_pool.get.queue_size, thread_pool.write.queue_size,
thread_pool.analyze.queue_size, thread_pool.search.queue_size,
thread_pool.listener.queue_size]?",
You have defined the bulk queue size as thread_pool.bulk.queue_size, which is not correct. As you can read in the thread pools documentation, there is no longer a thread pool named bulk; Elasticsearch uses the write thread pool for bulk requests. Changing this setting to thread_pool.write.queue_size would make this config work.
write: For single-document index/delete/update and bulk requests. Thread pool type is fixed with a size of # of allocated processors, queue_size of 10000. The maximum size for this pool is 1 + # of allocated processors.
Now, in order to fix this for the bulk, index and other settings, verify that you are using the correct names for their configs. The available thread pools can be listed from any running ES instance, and from that list you can easily construct the corresponding config.
http://localhost:9200/_nodes/stats?pretty returns all the thread pools; some are shown below:
"thread_pool": {
  "analyze": {
    "threads": 1,
    "queue": 0,
    "active": 0,
    "rejected": 0,
    "largest": 1,
    "completed": 1
  },
  "ccr": {
    "threads": 0,
    "queue": 0,
    "active": 0,
    "rejected": 0,
    "largest": 0,
    "completed": 0
  }
}
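Putting that together for the manifest in the question, the nodeSet config would use the 7.x names. A sketch only: the sizes are carried over from the question's legacy config, and whether they are sensible for your hardware is a separate question (for instance, the write pool's size is capped at 1 + allocated processors):

```yaml
config:
  node.master: true
  node.data: true
  node.ingest: true
  # 7.x routes bulk and single-document index/delete/update
  # requests through the write pool
  thread_pool.write.size: 20
  thread_pool.write.queue_size: 300
  # Search pool (already accepted in the question's manifest)
  thread_pool.search.size: 20
  thread_pool.search.queue_size: 100
```

The other legacy blocks (indices.cache.filter.*, index.refresh_interval, bootstrap.mlockall, etc.) were likewise renamed or moved to index-level/dynamic settings in later versions, so each needs to be checked against the breaking-changes documentation rather than copied over verbatim.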