I'm facing an issue starting Elasticsearch, which was working fine some time back.
I'm deploying it on Kubernetes.
Now, every time I try to start it, the pod throws this error:
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
{"type": "server", "timestamp": "2019-09-24T13:09:38,995+0000", "level": "WARN", "component": "o.e.b.JNANatives", "cluster.name": "docker-cluster", "node.name": "metrics-es-779d8667c8-tgn9w", "message": "Unable to lock JVM Memory: error=12, reason=Cannot allocate memory" }
{"type": "server", "timestamp": "2019-09-24T13:09:38,998+0000", "level": "WARN", "component": "o.e.b.JNANatives", "cluster.name": "docker-cluster", "node.name": "metrics-es-779d8667c8-tgn9w", "message": "This can result in part of the JVM being swapped out." }
{"type": "server", "timestamp": "2019-09-24T13:09:38,999+0000", "level": "WARN", "component": "o.e.b.JNANatives", "cluster.name": "docker-cluster", "node.name": "metrics-es-779d8667c8-tgn9w", "message": "Increase RLIMIT_MEMLOCK, soft limit: 16777216, hard limit: 16777216" }
{"type": "server", "timestamp": "2019-09-24T13:09:38,999+0000", "level": "WARN", "component": "o.e.b.JNANatives", "cluster.name": "docker-cluster", "node.name": "metrics-es-779d8667c8-tgn9w", "message": "These can be adjusted by modifying /etc/security/limits.conf, for example: \n\t# allow user 'elasticsearch' mlockall\n\telasticsearch soft memlock unlimited\n\telasticsearch hard memlock unlimited" }
{"type": "server", "timestamp": "2019-09-24T13:09:38,999+0000", "level": "WARN", "component": "o.e.b.JNANatives", "cluster.name": "docker-cluster", "node.name": "metrics-es-779d8667c8-tgn9w", "message": "If you are logged in interactively, you will have to re-login for the new limits to take effect." }
{"type": "server", "timestamp": "2019-09-24T13:09:39,699+0000", "level": "WARN", "component": "o.e.b.ElasticsearchUncaughtExceptionHandler", "cluster.name": "docker-cluster", "node.name": "metrics-es-779d8667c8-tgn9w", "message": "uncaught exception in thread [main]" ,
"stacktrace": ["org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: failed to obtain node locks, tried [[/usr/share/elasticsearch/data]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?",
"at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:163) ~[elasticsearch-7.1.0.jar:7.1.0]",
"at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150) ~[elasticsearch-7.1.0.jar:7.1.0]",
"at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-7.1.0.jar:7.1.0]",
"at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-cli-7.1.0.jar:7.1.0]",
"at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-7.1.0.jar:7.1.0]",
"at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:115) ~[elasticsearch-7.1.0.jar:7.1.0]",
"at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-7.1.0.jar:7.1.0]",
"Caused by: java.lang.IllegalStateException: failed to obtain node locks, tried [[/usr/share/elasticsearch/data]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?",
"at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:297) ~[elasticsearch-7.1.0.jar:7.1.0]",
"at org.elasticsearch.node.Node.<init>(Node.java:272) ~[elasticsearch-7.1.0.jar:7.1.0]",
"at org.elasticsearch.node.Node.<init>(Node.java:252) ~[elasticsearch-7.1.0.jar:7.1.0]",
"at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:211) ~[elasticsearch-7.1.0.jar:7.1.0]",
"at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:211) ~[elasticsearch-7.1.0.jar:7.1.0]",
"at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:325) ~[elasticsearch-7.1.0.jar:7.1.0]",
"at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-7.1.0.jar:7.1.0]",
"... 6 more",
"Caused by: java.io.IOException: failed to obtain lock on /usr/share/elasticsearch/data/nodes/0",
"at org.elasticsearch.env.NodeEnvironment$NodeLock.<init>(NodeEnvironment.java:219) ~[elasticsearch-7.1.0.jar:7.1.0]",
"at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:267) ~[elasticsearch-7.1.0.jar:7.1.0]",
"at org.elasticsearch.node.Node.<init>(Node.java:272) ~[elasticsearch-7.1.0.jar:7.1.0]",
"at org.elasticsearch.node.Node.<init>(Node.java:252) ~[elasticsearch-7.1.0.jar:7.1.0]",
"at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:211) ~[elasticsearch-7.1.0.jar:7.1.0]",
"at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:211) ~[elasticsearch-7.1.0.jar:7.1.0]",
"at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:325) ~[elasticsearch-7.1.0.jar:7.1.0]",
"at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-7.1.0.jar:7.1.0]",
"... 6 more",
"Caused by: java.nio.file.FileSystemException: /usr/share/elasticsearch/data/nodes/0/node.lock: Read-only file system",
"at sun.nio.fs.UnixException.translateToIOException(UnixException.java:100) ~[?:?]",
"at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?]",
"at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?]",
"at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:182) ~[?:?]",
"at java.nio.channels.FileChannel.open(FileChannel.java:292) ~[?:?]",
"at java.nio.channels.FileChannel.open(FileChannel.java:345) ~[?:?]",
"at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:125) ~[lucene-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 11:58:55]",
"at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41) ~[lucene-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 11:58:55]",
"at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45) ~[lucene-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 11:58:55]",
"at org.elasticsearch.env.NodeEnvironment$NodeLock.<init>(NodeEnvironment.java:212) ~[elasticsearch-7.1.0.jar:7.1.0]",
"at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:267) ~[elasticsearch-7.1.0.jar:7.1.0]",
"at org.elasticsearch.node.Node.<init>(Node.java:272) ~[elasticsearch-7.1.0.jar:7.1.0]",
"at org.elasticsearch.node.Node.<init>(Node.java:252) ~[elasticsearch-7.1.0.jar:7.1.0]",
"at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:211) ~[elasticsearch-7.1.0.jar:7.1.0]",
"at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:211) ~[elasticsearch-7.1.0.jar:7.1.0]",
"at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:325) ~[elasticsearch-7.1.0.jar:7.1.0]",
"at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-7.1.0.jar:7.1.0]",
"... 6 more"] }
Any suggestions as to what the issue might be?
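The deepest cause in the trace above is the Read-only file system error on /usr/share/elasticsearch/data, with the RLIMIT_MEMLOCK warning as a separate issue. A minimal diagnostic sketch, assuming the pod name taken from node.name in the log and the default namespace (adjust the kubectl flags to your deployment):

# Check whether the data volume is mounted read-only inside the pod
kubectl exec metrics-es-779d8667c8-tgn9w -- sh -c 'mount | grep /usr/share/elasticsearch/data'
# Try writing to the data directory as the container user
kubectl exec metrics-es-779d8667c8-tgn9w -- touch /usr/share/elasticsearch/data/.write-test
# Check the memlock limit the container actually sees (relates to the RLIMIT_MEMLOCK warning)
kubectl exec metrics-es-779d8667c8-tgn9w -- sh -c 'ulimit -l'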
Related
I use https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner and everything works.
But when I use Helm to install Elasticsearch, Elasticsearch shows as running but does not work.
Steps I followed to install Elasticsearch:
Create a certificate and a username/password for Elasticsearch.
Create the PVC manually, or have it created automatically via Elasticsearch's values.yaml.
Elasticsearch shows as running, but I can't curl http://service's ip:9200.
When I change persistence: enabled: true to persistence: enabled: false, everything works and I can curl http://service's ip:9200 and get Elasticsearch's default response.
So I wonder if I misunderstand how to use the PVC. Here is my pvc.yaml and its status after I create it:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: elasticsearch-master-elasticsearch-master-0
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 2Gi
  storageClassName: nfs-client
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
elasticsearch-master-elasticsearch-master-0 Bound pvc-55480402-c7d7-4a52-aa85-961f97ab7f82 2Gi RWO nfs-client 20m
My disk information for the node that runs ES:
10.10.1.134:/mnt/nfs/default-elasticsearch-master-elasticsearch-master-0-pvc-55480402-c7d7-4a52-aa85-961f97ab7f82 15G 1.8G 14G 12% /var/lib/kubelet/pods/b3696f5a-5d9b-4c00-943e-027c2dc7a86c/volumes/kubernetes.io~nfs/pvc-55480402-c7d7-4a52-aa85-961f97ab7f82
drwxrwxrwx 3 root root 19 Sep 19 17:25 pvc-55480402-c7d7-4a52-aa85-961f97ab7f82
My description of the Elasticsearch pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 21m default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
Normal Scheduled 21m default-scheduler Successfully assigned default/elasticsearch-master-0 to a136
Normal Pulled 21m kubelet Container image "docker.elastic.co/elasticsearch/elasticsearch:7.17.3" already present on machine
Normal Created 21m kubelet Created container configure-sysctl
Normal Started 21m kubelet Started container configure-sysctl
Normal Pulled 21m kubelet Container image "docker.elastic.co/elasticsearch/elasticsearch:7.17.3" already present on machine
Normal Created 21m kubelet Created container elasticsearch
Normal Started 21m kubelet Started container elasticsearch
My ES logs:
{"type": "server", "timestamp": "2022-09-19T09:25:49,509Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-voting-only-node]" }
{"type": "server", "timestamp": "2022-09-19T09:25:49,509Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "loaded module [x-pack-watcher]" }
{"type": "server", "timestamp": "2022-09-19T09:25:49,510Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "no plugins loaded" }
{"type": "deprecation.elasticsearch", "timestamp": "2022-09-19T09:25:49,515Z", "level": "CRITICAL", "component": "o.e.d.c.s.Settings", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "[node.ml] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.", "key": "node.ml", "category": "settings" }
{"type": "deprecation.elasticsearch", "timestamp": "2022-09-19T09:25:49,621Z", "level": "CRITICAL", "component": "o.e.d.c.s.Settings", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "[node.data] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.", "key": "node.data", "category": "settings" }
curl when the persistence value is true:
curl -u username:password http://10.101.45.49:9200
curl: (7) Failed connect to 10.101.45.49:9200; Connection refused
curl when the persistence value is false:
curl -u username:password http://10.101.45.49:9200
{
"name" : "elasticsearch-master-0",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "WRR3PXhLS7GGHl5AaUz2DA",
"version" : {
"number" : "7.17.3",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "5ad023604c8d7416c9eb6c0eadb62b14e766caff",
"build_date" : "2022-04-19T08:11:19.070913226Z",
"build_snapshot" : false,
"lucene_version" : "8.11.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
My volumeClaimTemplate in Elasticsearch's values.yaml:
volumeClaimTemplate:
  storageClassName: "nfs-client"
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 2Gi
It seems that ES can't work with a dynamically provisioned PVC, but I don't know how to solve the problem. Thank you for your help.
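Given the symptoms, one hedged check (a diagnostic sketch, not a confirmed fix) is whether the NFS-backed data directory is actually writable by the elasticsearch user once the PVC is mounted, and which mount options the kubelet used. The pod and volume names below are taken from the output above:

# Try writing to the data path as the container user
kubectl exec elasticsearch-master-0 -- sh -c 'id && touch /usr/share/elasticsearch/data/.write-test && echo writable'
# Inspect the dynamically provisioned PV and its NFS mount options
kubectl describe pv pvc-55480402-c7d7-4a52-aa85-961f97ab7f82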
When one of the Elasticsearch pods restarts for any reason, I get an error in the Kibana logs saying that the elastic user was not able to authenticate. I could not find any relevant documentation. Any help would be appreciated.
My Kibana log:
{"type":"error","#timestamp":"2021-11-02T19:33:54Z","tags":["warning","stats-collection"],"pid":1,"level":"error","error":{"message":"[security_exception] failed to authenticate user [elastic], with { header={ WWW-Authenticate="Basic realm=\"security\" charset=\"UTF-8\"" } }","name":"Error","stack":"[security_exception] failed to authenticate user [elastic], with { header={ WWW-Authenticate="Basic realm=\"security\" charset=\"UTF-8\"" } } :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"sort\":[{\"task.runAt\":\"asc\"},{\"_id\":\"desc\"}],\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"filter\":{\"term\":{\"_id\":\"Maps-maps_telemetry\"}}}}]}}}","statusCode":401,"response":"{\"error\":{\"root_cause\":[{\"type\":\"security_exception\",\"reason\":\"failed to authenticate user [elastic]\",\"header\":{\"WWW-Authenticate\":\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"}}],\"type\":\"security_exception\",\"reason\":\"failed to authenticate user [elastic]\",\"header\":{\"WWW-Authenticate\":\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"}},\"status\":401}","wwwAuthenticateDirective":"Basic realm=\"security\" charset=\"UTF-8\""}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:315:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:274:7)\n at HttpConnector. (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4929:19)\n at IncomingMessage.emit (events.js:194:15)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)"},"message":"[security_exception] failed to authenticate user [elastic], with { header={ WWW-Authenticate="Basic realm=\"security\" charset=\"UTF-8\"" } }"}
Now I have enabled xpack.security.transport.ssl.enabled: true, and I am getting this error on the ES master node:
{"type": "server", "timestamp": "2021-11-03T19:17:30,962+0000", "level": "WARN", "component": "o.e.t.TcpTransport", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master", "cluster.uuid": "hO1sgzRlTKWHl-jJVItUmA", "node.id": "PnVpoMi0TACfs_vJNQ1afQ", "message": "exception caught on transport layer [Netty4TcpChannel{localAddress=0.0.0.0/0.0.0.0:9300, remoteAddress=/SOMEIPADDRESS}], closing connection" ,
I am deploying an Elasticsearch cluster on AWS EKS. Below is the k8s spec YAML file.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: datasource
spec:
  version: 7.14.0
  nodeSets:
  - name: node
    count: 3
    config:
      node.store.allow_mmap: true
      xpack.security.http.ssl.enabled: false
      xpack.security.transport.ssl.enabled: false
      xpack.security.enabled: false
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        containers:
        - name: elasticsearch
          readinessProbe:
            exec:
              command:
              - bash
              - -c
              - /mnt/elastic-internal/scripts/readiness-probe-script.sh
            failureThreshold: 3
            initialDelaySeconds: 10
            periodSeconds: 12
            successThreshold: 1
            timeoutSeconds: 12
          env:
          - name: READINESS_PROBE_TIMEOUT
            value: "30"
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        storageClassName: ebs-sc
        resources:
          requests:
            storage: 1024Gi
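For reference, a minimal sketch of applying and checking this spec, assuming the ECK operator is already installed and the manifest is saved as elasticsearch.yaml (an illustrative filename):

kubectl apply -f elasticsearch.yaml
# The Elasticsearch custom resource reports overall health and phase
kubectl get elasticsearch datasource
# Pods created for the nodeSet are labelled by ECK with the cluster name
kubectl get pods -l elasticsearch.k8s.elastic.co/cluster-name=datasource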
After deploying, I see all three pods report this error:
{"type": "server", "timestamp": "2021-10-05T05:19:37,041Z", "level": "INFO", "component": "o.e.c.m.MetadataMappingService", "cluster.name": "datasource", "node.name": "datasource-es-node-0", "message": "[.kibana/g5_90XpHSI-y-I7MJfBZhQ] update_mapping [_doc]", "cluster.uuid": "xJ00drroT_CbJPfzi8jSAg", "node.id": "qmtgUZHbR4aTWsYaoIEDEA" }
{"type": "server", "timestamp": "2021-10-05T05:19:37,622Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "datasource", "node.name": "datasource-es-node-0", "message": "Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana][0]]]).", "cluster.uuid": "xJ00drroT_CbJPfzi8jSAg", "node.id": "qmtgUZHbR4aTWsYaoIEDEA" }
{"timestamp": "2021-10-05T05:19:40+00:00", "message": "readiness probe failed", "curl_rc": "35"}
{"timestamp": "2021-10-05T05:19:45+00:00", "message": "readiness probe failed", "curl_rc": "35"}
{"timestamp": "2021-10-05T05:19:50+00:00", "message": "readiness probe failed", "curl_rc": "35"}
{"timestamp": "2021-10-05T05:19:55+00:00", "message": "readiness probe failed", "curl_rc": "35"}
{"timestamp": "2021-10-05T05:20:00+00:00", "message": "readiness probe failed", "curl_rc": "35"}
{"timestamp": "2021-10-05T05:20:05+00:00", "message": "readiness probe failed", "curl_rc": "35"}
{"timestamp": "2021-10-05T05:20:10+00:00", "message": "readiness probe failed", "curl_rc": "35"}
{"timestamp": "2021-10-05T05:20:15+00:00", "message": "readiness probe failed", "curl_rc": "35"}
The log above shows the cluster health status changing from [YELLOW] to [GREEN] first, and then the readiness probe failed errors begin. I wonder how I can solve this issue. Is it an Elasticsearch-related error or a k8s-related one?
You can increase the readiness probe timeout by declaring READINESS_PROBE_TIMEOUT in your spec like this:
...
env:
- name: READINESS_PROBE_TIMEOUT
  value: "30"
You can also customize the readiness probe itself if necessary; the latest elasticsearch.k8s.elastic.co/v1 API spec is here, and it uses the same K8s PodTemplateSpec that you can use in your Elasticsearch spec.
Update: curl error code 35 refers to an SSL error. Here's a post regarding the readiness probe script. Can you remove the following settings from your spec and re-run:
xpack.security.http.ssl.enabled: false
xpack.security.transport.ssl.enabled: false
xpack.security.enabled: false
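If removing those settings fixes the probe, a quick way to confirm the cluster is reachable over the now-default HTTPS endpoint is a sketch like the following, assuming ECK's usual secret naming for the elastic user (check your cluster if it differs); the pod name comes from the logs above:

# ECK normally stores the elastic user's password in <cluster-name>-es-elastic-user
PASSWORD=$(kubectl get secret datasource-es-elastic-user -o jsonpath='{.data.elastic}' | base64 -d)
# Query cluster health over HTTPS, roughly the same way the readiness script does
kubectl exec datasource-es-node-0 -- curl -s -k -u "elastic:${PASSWORD}" https://localhost:9200/_cluster/health?pretty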
I have a single-node ES cluster.
I have created a new index with 10 shards that is supposed to hold 1 TB of information.
So I started to reindex part of the data into this new index, and I got a java.lang.OutOfMemoryError: Java heap space exception.
I restarted the Docker container and I see the following.
What should I do?
Thanks.
{"type": "server", "timestamp": "2020-12-13T14:35:28,155Z", "level": "WARN", "component": "r.suppressed", "cluster.name": "docker-cluster", "node.name": "607ed4606bec", "message": "path: /.kibana/_count, params: {index=.kibana}", "cluster.uuid": "zNFK_xhtTAuEfr6S_mcdSA", "node.id": "y9BuSdDNTXyo9X0b13fs8w" ,
"stacktrace": ["org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:551) [elasticsearch-7.9.3.jar:7.9.3]",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:309) [elasticsearch-7.9.3.jar:7.9.3]",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:582) [elasticsearch-7.9.3.jar:7.9.3]",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.onShardFailure(AbstractSearchAsyncAction.java:393) [elasticsearch-7.9.3.jar:7.9.3]",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.lambda$performPhaseOnShard$0(AbstractSearchAsyncAction.java:223) [elasticsearch-7.9.3.jar:7.9.3]",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction$2.doRun(AbstractSearchAsyncAction.java:288) [elasticsearch-7.9.3.jar:7.9.3]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3]",
"at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:44) [elasticsearch-7.9.3.jar:7.9.3]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.9.3.jar:7.9.3]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.3.jar:7.9.3]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]",
"at java.lang.Thread.run(Thread.java:832) [?:?]"] }
{"type": "server", "timestamp": "2020-12-13T14:35:42,170Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "607ed4606bec", "message": "adding template [.management-beats] for index patterns [.management-beats]", "cluster.uuid": "zNFK_xhtTAuEfr6S_mcdSA", "node.id": "y9BuSdDNTXyo9X0b13fs8w" }
{"type": "server", "timestamp": "2020-12-13T14:37:52,073Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "docker-cluster", "node.name": "607ed4606bec", "message": "Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[entities][5]]]).", "cluster.uuid": "zNFK_xhtTAuEfr6S_mcdSA", "node.id": "y9BuSdDNTXyo9X0b13fs8w" }
You are reaching your JVM heap size limit. To solve the problem, you can increase your Docker memory size and try again.
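For example, a minimal sketch assuming a standalone Docker container like the one in the question, with the heap raised alongside the container memory (the memory and heap values and the container name are illustrative; keep the heap at roughly half the container memory):

# Give the container more memory and raise the Elasticsearch heap accordingly
docker run -d --name es-bigger-heap -m 8g \
  -e ES_JAVA_OPTS="-Xms4g -Xmx4g" \
  -e discovery.type=single-node \
  -p 9200:9200 \
  docker.elastic.co/elasticsearch/elasticsearch:7.9.3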
Is anyone able to help me with steps to fix/diagnose an Elasticsearch cluster (version 7.3.1) that has fallen over randomly with the following errors, please?
elasticsearch | {"type": "server", "timestamp": "2019-12-06T09:30:49,585+0000", "level": "DEBUG", "component": "o.e.a.a.i.c.TransportCreateIndexAction", "cluster.name": "xxx", "node.name": "bex", "message": "no known master node, scheduling a retry" }
elasticsearch | {"type": "server", "timestamp": "2019-12-06T09:30:50,741+0000", "level": "WARN", "component": "r.suppressed", "cluster.name": "xxx", "node.name": "bex", "message": "path: /_bulk, params: {}" ,
elasticsearch | "stacktrace": ["org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized, SERVICE_UNAVAILABLE/2/no master];",
It has been running without any issues for ages.
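A hedged starting point for diagnosing the missing master (the container name below is taken from the log prefix and may differ in your setup, e.g. under docker compose; adjust host and port as needed):

# See which nodes are up and whether any of them has been elected master
curl -s 'http://localhost:9200/_cat/nodes?v'
curl -s 'http://localhost:9200/_cluster/health?pretty'
# If quorum was lost, review the discovery settings each node was started with
docker exec elasticsearch grep -E 'cluster.initial_master_nodes|discovery.seed_hosts' /usr/share/elasticsearch/config/elasticsearch.yml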