I am new to Kubernetes; my question is related to Google Cloud Platform.
Given a scenario where we need to restart a Kubernetes cluster, and we have some services in Spring Boot: each Spring Boot service is its own JVM and runs as an independent process. Once Kubernetes is restarted, I need help understanding what kind of script or mechanism to use to restart all the Spring Boot services in Kubernetes. Please let me know; thank you, and I appreciate all your inputs.
I am not sure I fully understood your question, but I think the best approach for you would be to pack your Spring Boot app into a Docker container and then use it on GKE.
A good guide about packing a Spring Boot application into a container can be found in the CodeLabs tutorial.
Once you have your application in a container, you will be able to reference it in a Deployment or StatefulSet configuration file and deploy it to your cluster.
As mentioned in Deployment Documentation:
A Deployment provides declarative updates for Pods and ReplicaSets.
You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
In short, the Deployment controller ensures that your application is kept in the desired state.
For example, if you would like to restart your application, you could simply scale the Deployment down to 0 replicas and back up to 5, as sketched below.
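A minimal sketch of that, assuming your Spring Boot Deployment is named spring-boot-app (replace it with your actual Deployment name):
$ kubectl scale deployment spring-boot-app --replicas=0
$ kubectl scale deployment spring-boot-app --replicas=5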
Also, as GKE runs on Google Compute Engine VMs, you can scale the number of cluster nodes as well.
Examples
Restarting Application
For my test I've used an Nginx container in a Deployment, but it should work similarly with your Spring Boot app container.
Let's say you have a 2-node cluster with an application running 5 replicas.
$ kubectl create deployment nginx --image=nginx --replicas=5
deployment.apps/nginx created
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-86c57db685-2x8tj 1/1 Running 0 2m45s 10.4.1.5 gke-cluster-1-default-pool-faec7b51-6kc3 <none> <none>
nginx-86c57db685-6lpfg 1/1 Running 0 2m45s 10.4.1.6 gke-cluster-1-default-pool-faec7b51-6kc3 <none> <none>
nginx-86c57db685-8lvqq 1/1 Running 0 2m45s 10.4.0.9 gke-cluster-1-default-pool-faec7b51-x07n <none> <none>
nginx-86c57db685-lq6l7 1/1 Running 0 2m45s 10.4.0.11 gke-cluster-1-default-pool-faec7b51-x07n <none> <none>
nginx-86c57db685-xn7fn 1/1 Running 0 2m45s 10.4.0.10 gke-cluster-1-default-pool-faec7b51-x07n <none> <none>
Now suppose you need to change some environment variables inside your application using a ConfigMap. To apply this change you can just use a rollout; it restarts your application and provides the additional data from the ConfigMap.
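For illustration only, a Deployment can load its environment from a ConfigMap roughly like this (the ConfigMap name app-config and the key SPRING_PROFILES_ACTIVE are made-up examples, not something from your setup):
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  SPRING_PROFILES_ACTIVE: "prod"
and the container section of the Deployment would reference it with:
        envFrom:
        - configMapRef:
            name: app-config
After editing the ConfigMap data, the rollout below restarts the pods so they pick up the new values: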
$ kubectl rollout restart deployment nginx
deployment.apps/nginx restarted
$ kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-6c98778485-2k98b 1/1 Running 0 6s 10.4.0.13 gke-cluster-1-default-pool-faec7b51-x07n <none> <none>
nginx-6c98778485-96qx7 1/1 Running 0 6s 10.4.1.7 gke-cluster-1-default-pool-faec7b51-6kc3 <none> <none>
nginx-6c98778485-qb89l 1/1 Running 0 6s 10.4.0.12 gke-cluster-1-default-pool-faec7b51-x07n <none> <none>
nginx-6c98778485-qqs97 1/1 Running 0 4s 10.4.1.8 gke-cluster-1-default-pool-faec7b51-6kc3 <none> <none>
nginx-6c98778485-skbwv 1/1 Running 0 4s 10.4.0.14 gke-cluster-1-default-pool-faec7b51-x07n <none> <none>
nginx-86c57db685-2x8tj 0/1 Terminating 0 4m38s 10.4.1.5 gke-cluster-1-default-pool-faec7b51-6kc3 <none> <none>
nginx-86c57db685-6lpfg 0/1 Terminating 0 4m38s <none> gke-cluster-1-default-pool-faec7b51-6kc3 <none> <none>
nginx-86c57db685-8lvqq 0/1 Terminating 0 4m38s 10.4.0.9 gke-cluster-1-default-pool-faec7b51-x07n <none> <none>
nginx-86c57db685-xn7fn 0/1 Terminating 0 4m38s 10.4.0.10 gke-cluster-1-default-pool-faec7b51-x07n <none> <none>
Draining a node to perform node operations
Another example is when you need to do something with your VMs. You can do it by draining the node.
You can use kubectl drain to safely evict all of your pods from a node before you perform maintenance on the node (e.g. kernel upgrade, hardware maintenance, etc.). Safe evictions allow the pod's containers to gracefully terminate and will respect the PodDisruptionBudgets you have specified.
This will reschedule all pods from that node onto other nodes.
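A hedged example of that flow, using one of the node names from the output above (exact flags may differ between kubectl versions):
$ kubectl drain gke-cluster-1-default-pool-faec7b51-6kc3 --ignore-daemonsets
# ...perform the node maintenance...
$ kubectl uncordon gke-cluster-1-default-pool-faec7b51-6kc3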
Restarting Cluster
Keep in mind that GKE is managed by Google, and you cannot restart a single machine on its own, as it is managed by a Managed Instance Group.
You can SSH to each node and change some settings. When you scale the node pool down to 0 and back up, you will get new machines matching your requirements, with new external IPs.
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
gke-cluster-1-default-pool-faec7b51-6kc3 Ready <none> 3d1h v1.17.14-gke.1600 10.128.0.25 34.XX.176.56 Container-Optimized OS from Google 4.19.150+ docker://19.3.6
gke-cluster-1-default-pool-faec7b51-x07n Ready <none> 3d1h v1.17.14-gke.1600 10.128.0.24 23.XXX.50.249 Container-Optimized OS from Google 4.19.150+ docker://19.3.6
$ gcloud container clusters resize cluster-1 --node-pool default-pool \
> --num-nodes 0 \
> --zone us-central1-c
Pool [default-pool] for [cluster-1] will be resized to 0.
$ kubectl get nodes -o wide
No resources found
$ gcloud container clusters resize cluster-1 --node-pool default-pool --num-nodes 2 --zone us-central1-c
Pool [default-pool] for [cluster-1] will be resized to 2.
Do you want to continue (Y/n)? y
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
gke-cluster-1-default-pool-faec7b51-n5hm Ready <none> 68s v1.17.14-gke.1600 10.128.0.26 23.XXX.50.249 Container-Optimized OS from Google 4.19.150+ docker://19.3.6
gke-cluster-1-default-pool-faec7b51-xx01 Ready <none> 74s v1.17.14-gke.1600 10.128.0.27 35.XXX.135.41 Container-Optimized OS from Google 4.19.150+ docker://19.3.6
Conclusion
When you are using GKE you are using pre-defined nodes managed by Google, and those nodes are upgraded automatically (some security features, etc.). Because of that, changing node capacity is easy.
When you pack your application into a container and use it in a Deployment, your application will be handled by the Deployment controller, which will try to keep the desired state all the time.
As mentioned in the Service documentation:
In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them
The Service will still be visible in your cluster even if you scale your cluster to 0 nodes, since it is an abstraction; you don't have to restart it. However, if you change some static Service configuration (like the port), you would need to recreate the Service with the new configuration.
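For completeness, a minimal sketch of such a Service for a Spring Boot Deployment (the name, the label selector and the 8080 target port are assumptions, not taken from your setup):
apiVersion: v1
kind: Service
metadata:
  name: spring-boot-app
spec:
  type: ClusterIP
  selector:
    app: spring-boot-app     # must match the labels on the Deployment's pods
  ports:
  - port: 80
    targetPort: 8080         # assumed Spring Boot container port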
Useful links
Migrating workloads to different machine types
Auto-repairing nodes
Related
I am using Kubernetes (v1.23.13) with containerd and the Flannel CNI. The Kubernetes cluster was created on an Ubuntu 18 VM (VMware ESXi), and a Windows Server runs on another VM. I followed the link below to add the Windows (Windows Server 2019) node to the cluster. The Windows node was added to the cluster, but the Windows kube-proxy and DaemonSet pod deployments have failed.
Link https://web.archive.org/web/20220530090758/https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/
Error: Normal Created (x5 over ) kubelet Created container kube-proxy
Normal Pulled (x5 over ) kubelet Container image "sigwindowstools/kube-proxy:v1.23.13-nanoserver" already present on machine
Warning Failed kubelet Error: failed to create containerd task: hcsshim::CreateComputeSystem kube-proxy: The directory name is invalid.
(extra info: {"Owner":"containerd-shim-runhcs-v1.exe","SchemaVersion":{"Major":2,"Minor":1},"Container":{"GuestOs":{"HostName":"kube-proxy-windows-hq7bb"},"Storage":{"Layers":[{"Id":"e30f10e1-6696-5df6-af3f-156a372bce4e","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\19"},{"Id":"8aa59a8b-78d3-5efe-a3d9-660bd52fd6ce","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\18"},{"Id":"f222f973-9869-5b65-a546-cb8ae78a32b9","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\17"},{"Id":"133385ae-6df6-509b-b342-bc46338b3df4","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\16"},{"Id":"f6f9524c-e3f0-5be2-978d-7e09e0b21299","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\15"},{"Id":"0d9d58e6-47b6-5091-a552-7cc2027ca06f","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\14"},{"Id":"6715ca06-295b-5fba-9224-795ca5af71b9","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\13"},{"Id":"75e64a3b-69a5-52cf-b39f-ee05718eb1e2","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\12"},{"Id":"8698c4b4-b092-57c6-b1eb-0a7ca14fcf4e","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\11"},{"Id":"7c9a6fb7-2ca8-5ef7-bbfe-cabbff23cfa4","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\10"},{"Id":"a10d4ad8-f2b1-5fd6-993f-7aa642762865","Path":"C:\ProgramData\containerd\root\io.containerd.snapshotter.v1.windows\snapshots\9"}],"Path":"\\?\Volume{64336318-a64f-436e-869c-55f9f8e4ea62}\"},"MappedDirectories":[{"HostPath":"c:\","ContainerPath":"c:\host"},{"HostPath":"c:\var\lib\kubelet\pods\1cd0c333-3cd0-4c90-9d22-884ea73e8b69\containers\kube-proxy\0e58a001","ContainerPath":"c:\dev\termination-log"},{"HostPath":"c:\var\lib\kubelet\pods\1cd0c333-3cd0-4c90-9d22-884ea73e8b69\volumes\kubernetes.io~configmap\kube-proxy","ContainerPath":"c:\var\lib\kube-proxy","ReadOnly":true},{"HostPath":"c:\var\lib\kubelet\pods\1cd0c333-3cd0-4c90-9d22-884ea73e8b69\volumes\kubernetes.io~configmap\kube-proxy-windows","ContainerPath":"c:\var\lib\kube-proxy-windows","ReadOnly":true},{"HostPath":"c:\var\lib\kubelet\pods\1cd0c333-3cd0-4c90-9d22-884ea73e8b69\volumes\kubernetes.io~projected\kube-api-access-4zs46","ContainerPath":"c:\var\run\secrets\kubernetes.io\serviceaccount","ReadOnly":true},{"HostPath":"c:\var\lib\kubelet\pods\1cd0c333-3cd0-4c90-9d22-884ea73e8b69\etc-hosts","ContainerPath":"C:\Windows\System32\drivers\etc\hosts"}],"MappedPipes":[{"ContainerPipeName":"rancher_wins","HostPath":"\\.\pipe\rancher_wins"}],"Networking":{"Namespace":"4a4d0354-251a-4750-8251-51ae42707db2"}},"ShouldTerminateOnLastHandleClosed":true}): unknown
Warning BackOff (x23 over ) kubelet Back-off restarting failed container
kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-64897985d-2mkd5 1/1 Running 0 19h
kube-system coredns-64897985d-qhhbz 1/1 Running 0 19h
kube-system etcd-scspa2658542001 1/1 Running 2 19h
kube-system kube-apiserver-scspa2658542001 1/1 Running 8 (3h4m ago) 19h
kube-system kube-controller-manager-scspa2658542001 1/1 Running 54 (126m ago) 19h
kube-system kube-flannel-ds-hjw8s 1/1 Running 14 (18h ago) 19h
kube-system kube-flannel-ds-windows-amd64-xfhjl 0/1 ImagePullBackOff 0 29m
kube-system kube-proxy-windows-hq7bb 0/1 CrashLoopBackOff 10 (<invalid> ago) 29m
kube-system kube-proxy-wx2x9 1/1 Running 0 19h
kube-system kube-scheduler-scspa2658542001 1/1 Running 92 (153m ago) 19h
From this issue, it seems Windows nodes with Flannel have known problems that have been solved with different workarounds.
As mentioned in the issue, they have written a guide to make Windows work properly; follow this doc with the installation guide and requirements.
Attaching a troubleshooting blog and an issue for the CrashLoopBackOff.
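Before applying the workarounds, it is usually worth pulling more detail from the failing pod; the pod name below is taken from your kubectl output:
kubectl -n kube-system describe pod kube-proxy-windows-hq7bb
kubectl -n kube-system logs kube-proxy-windows-hq7bb --previous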
I had a similar error, failed to create containerd task: hcsshim::CreateComputeSystem, with Flannel on k8s v1.24. The cause was that Windows OS patches had not been applied. Make sure the patch related to KB4489899 has been applied.
https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/guides/guide-for-adding-windows-node.md#before-you-begin
I'm trying to install Kafka with Strimzi on a local minikube cluster running on Windows 10, to test the impact of different parameters (especially the TLS configuration). Before moving to TLS, I'd simply like to connect to my cluster :)
Here is my YAML configuration:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 2.3.0
    replicas: 1
    listeners:
      external:
        type: nodeport
        tls: false
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      log.message.format.version: "2.3"
    storage:
      type: persistent-claim
      size: 1Gi
  zookeeper:
    replicas: 1
    storage:
      type: persistent-claim
      size: 2Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}
For the listener, I first started with plain: {}, but this only gives me services of type ClusterIP, which are not accessible from outside minikube (and I really need to connect from outside).
I then moved to a listener of type external.
You can find below the configuration of the cluster:
kubectl get all -n kafka
NAME READY STATUS RESTARTS AGE
pod/my-cluster-entity-operator-9657c9d79-8hknc 3/3 Running 0 17m
pod/my-cluster-kafka-0 2/2 Running 0 18m
pod/my-cluster-zookeeper-0 2/2 Running 0 18m
pod/strimzi-cluster-operator-f77b7d544-hq5pq 1/1 Running 0 5h22m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/my-cluster-kafka-0 NodePort 10.99.3.204 <none> 9094:30117/TCP 18m
service/my-cluster-kafka-bootstrap ClusterIP 10.106.176.111 <none> 9091/TCP 18m
service/my-cluster-kafka-brokers ClusterIP None <none> 9091/TCP 18m
service/my-cluster-kafka-external-bootstrap NodePort 10.109.235.156 <none> 9094:32372/TCP 18m
service/my-cluster-zookeeper-client ClusterIP 10.97.2.69 <none> 2181/TCP 18m
service/my-cluster-zookeeper-nodes ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 18m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-cluster-entity-operator 1/1 1 1 17m
deployment.apps/strimzi-cluster-operator 1/1 1 1 5h22m
The IP address of the minikube cluster is 192.168.49.2 (given by minikube ip)
So far, is everything correct in my configuration? I cannot connect to the cluster with a producer (I get a timeout error when I try to publish data).
I tried to connect to 192.168.49.2:32372 and 192.168.49.2:30117 and I always get the same timeout error. I also tried to run
minikube service -n kafka my-cluster-kafka-external-bootstrap
and
minikube service -n kafka my-cluster-kafka-0
and I still get the same error.
What is wrong with what I'm trying to do?
Thanks!
OK, I got the answer.
I changed the type of the service to LoadBalancer and started minikube tunnel.
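A sketch of one way to get there with Strimzi (the loadbalancer listener type makes the operator create LoadBalancer services instead of NodePorts; adjust to your exact spec):
    listeners:
      external:
        type: loadbalancer
        tls: false
and then, in a separate terminal:
minikube tunnel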
One other point: as I'm running this on Windows, I noticed that if I run everything using PowerShell it works, but if I use another command-line tool (like Moba) it does not; I can't explain this.
I deployed an Elasticsearch cluster to EKS; below is the spec:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elk
spec:
  version: 7.15.2
  serviceAccountName: docker-sa
  http:
    tls:
      selfSignedCertificate:
        disabled: true
  nodeSets:
  - name: node
    count: 3
    config:
      ...
I can see it has been deployed correctly and all pods are running.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
elk-es-node-0 1/1 Running 0 19h
elk-es-node-1 1/1 Running 0 19h
elk-es-node-2 1/1 Running 0 11h
But I can't restart the Elasticsearch deployment:
$ kubectl rollout restart Elasticsearch elk-es-node
Error from server (NotFound): elasticsearches.elasticsearch.k8s.elastic.co "elk-es-node" not found
The Elasticsearch resource uses a StatefulSet, so I tried to restart the StatefulSet:
$ kubectl rollout restart statefulset elk-es-node
statefulset.apps/elk-es-node restarted
The above command says restarted, but the actual pods are not restarting.
What is the right way to restart a custom kind in Kubernetes?
Use kubectl get all to identify whether the resource that was created is a Deployment or a StatefulSet.
Use -n <namespace> along with the above command if you are working in a specific namespace.
Assuming you are using a StatefulSet, issue the command below to understand the properties with which it is configured:
kubectl get statefulset <statefulset-name> -o yaml > statefulsetContent.yaml
This will create a YAML file named statefulsetContent.yaml in the same directory.
You can use it to explore the different options configured in the StatefulSet.
Check for .spec.updateStrategy in the YAML file. Based on this we can identify its update strategy.
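Alternatively, the strategy can be read directly with jsonpath (the StatefulSet name is a placeholder):
kubectl get statefulset <statefulset-name> -o jsonpath='{.spec.updateStrategy.type}'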
Below is from the official documentation
There are two possible values:
OnDelete
When a StatefulSet's .spec.updateStrategy.type is set to OnDelete, the StatefulSet controller will not automatically update the Pods in a StatefulSet. Users must manually delete Pods to cause the controller to create new Pods that reflect modifications made to a StatefulSet's .spec.template.
RollingUpdate
The RollingUpdate update strategy implements automated, rolling update for the Pods in a StatefulSet. This is the default update strategy.
As a workaround, you can try to scale the StatefulSet down and back up:
kubectl scale sts <statefulset-name> --replicas=<count>
With ECK as the operator, you do not need to use rollout restart. Apply your updated Elasticsearch spec and the operator will perform a rolling update for you. If for any reason you need to restart a pod, use kubectl delete pod <es pod> -n <your es namespace> to remove the pod, and the operator will spin up a new one for you.
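A minimal sketch of that flow, assuming the manifest from the question is saved as elk.yaml:
kubectl apply -f elk.yaml
kubectl get elasticsearch elk -w      # watch the operator roll the nodes one by one
kubectl delete pod elk-es-node-0      # only if a single pod really has to be bounced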
I see this error when trying to use Gitea with microk8s on Ubuntu 21.10:
$ k logs gitea-0 -c configure-gitea
Wait for database to become avialable...
gitea-postgresql (10.152.183.227:5432) open
...
2021/11/20 05:49:40 ...om/urfave/cli/app.go:277:Run() [I] PING DATABASE postgres
2021/11/20 05:49:45 cmd/migrate.go:38:runMigrate() [F] Failed to initialize ORM engine: dial tcp: lookup gitea-postgresql.default.svc.cluster.local: Try again
I am looking for some clues as to how to debug this please.
The other pods seem to be running as expected:
$ k get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system hostpath-provisioner-5c65fbdb4f-nfx7d 1/1 Running 0 11h
kube-system calico-node-h8tpk 1/1 Running 0 11h
kube-system calico-kube-controllers-f7868dd95-dpp8n 1/1 Running 0 11h
kube-system coredns-7f9c69c78c-cnpkj 1/1 Running 0 11h
default gitea-memcached-584956987c-zb8kp 1/1 Running 0 20s
default gitea-postgresql-0 1/1 Running 0 20s
default gitea-0 0/1 Init:1/2 1 20s
The services are not as expected, since gitea-0 is not starting:
$ k get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 11h
kube-system kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 11h
default gitea-postgresql-headless ClusterIP None <none> 5432/TCP 3m25s
default gitea-ssh ClusterIP None <none> 22/TCP 3m25s
default gitea-http ClusterIP None <none> 3000/TCP 3m25s
default gitea-memcached ClusterIP 10.152.183.15 <none> 11211/TCP 3m25s
default gitea-postgresql ClusterIP 10.152.183.227 <none> 5432/TCP 3m25s
Also see:
https://github.com/ubuntu/microk8s/issues/2741
https://gitea.com/gitea/helm-chart/issues/249
I worked through to the point where I had the logs below, specifically:
cmd/migrate.go:38:runMigrate() [F] Failed to initialize ORM engine: dial tcp: lookup gitea-postgresql.default.svc.cluster.local: Try again
Using k cluster-info dump I saw:
[ERROR] plugin/errors: 2 gitea-postgresql.default.svc.cluster.local.cisco.com. A: read udp 10.1.147.194:56647->8.8.8.8:53: i/o timeout
That led me to test the DNS with dig and 8.8.8.8. That test didn't reveal any errors, in that DNS seemed to work. Even so, DNS seemed suspect.
So then I tried microk8s enable storage dns:<IP address of DNS in lab>, whereas I had previously only used microk8s enable storage dns. The storage part enables the persistent volumes that the database needs.
The key piece here is the lab DNS server IP address argument when enabling DNS with microk8s.
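For completeness, the working sequence looks roughly like this (the DNS address is a placeholder for your lab resolver), followed by a quick in-cluster lookup to confirm that resolution works:
microk8s enable storage dns:<IP address of DNS in lab>
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup gitea-postgresql.default.svc.cluster.local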
I deployed Elasticsearch to minikube with the configuration file below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      name: elasticsearch
  template:
    metadata:
      labels:
        name: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: elasticsearch:7.10.1
        ports:
        - containerPort: 9200
        - containerPort: 9300
I ran the command kubectl apply -f es.yml to deploy the Elasticsearch cluster.
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
elasticsearch-fb9b44948-bchh2 1/1 Running 5 6m23s
The Elasticsearch pod keeps restarting every few minutes. When I run the kubectl describe pod command, I can see these events:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m11s default-scheduler Successfully assigned default/elasticsearch-fb9b44948-bchh2 to minikube
Normal Pulled 3m18s (x5 over 7m11s) kubelet Container image "elasticsearch:7.10.1" already present on machine
Normal Created 3m18s (x5 over 7m11s) kubelet Created container elasticsearch
Normal Started 3m18s (x5 over 7m10s) kubelet Started container elasticsearch
Warning BackOff 103s (x11 over 5m56s) kubelet Back-off restarting failed container
The last event is Back-off restarting failed container, but I don't know why the pod is being restarted. Is there any way I can check why it keeps restarting?
The first step (kubectl describe pod) you've already done. As a next step I suggest checking the container logs: kubectl logs <pod_name>. 99% of the time you'll get the reason from the logs in this case (I'd bet on a bootstrap check failure).
When neither describe pod nor the logs say anything about the error, I get into the container with exec: kubectl exec -it <pod_name> -c <container_name> sh. With this you'll get a shell inside the container (of course, only if there IS a shell binary in it), and so you can use it to investigate the problem manually. Note that to keep a failing container alive you may need to change command and args to something like this:
command:
- /bin/sh
- -c
args:
- cat /dev/stdout
Be sure to disable probes when doing this. A container may restart if its liveness probe fails; you will see that in kubectl describe pod if it happens. Since your snippet doesn't have any probes specified, you can skip this.
Checking the logs of the pod using kubectl logs podname gives a clue about what could be going wrong:
ERROR: [2] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/docker-cluster.log
Check this post for a solution.
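Not from that post, but the two failed checks above map to well-known fixes: raise vm.max_map_count on the node, and for a single-node test cluster set discovery.type to single-node. A hedged sketch against the Deployment from the question:
# on the minikube node (262144 is the documented Elasticsearch minimum)
minikube ssh 'sudo sysctl -w vm.max_map_count=262144'
# added to the elasticsearch container in es.yml
        env:
        - name: discovery.type
          value: single-node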