Does Elasticsearch need persistent storage when deployed on Kubernetes?

In the Kubernetes example of an Elasticsearch production deployment, there is a warning about using emptyDir, advising that the volume "be adapted according to your storage needs", with a link to the Kubernetes documentation on persistent storage.
Is it better to use persistent storage, which is external to the node and therefore requires (high) I/O over the network, or can we deploy a reliable Elasticsearch cluster using multiple data nodes with local emptyDir storage?
Context: We're deploying Kubernetes on commodity hardware, and we prefer not to use a SAN for the storage layer (because it doesn't seem like commodity hardware).

The warning is there so that folks don't assume that using emptyDir provides a persistent storage layer. An emptyDir volume only persists as long as the pod is running on the same host. If the host is replaced or its disk becomes corrupted, all of that data is lost. Using network-mounted storage is one way to work around both of these failure modes. If you want to use replicated storage instead, that works as well.
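To make the difference concrete, here's a rough sketch of a data-node StatefulSet, first with emptyDir and then with a per-pod PersistentVolumeClaim (the names, image tag, sizes and storage class are just placeholders, adapt them to your deployment):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-data                        # hypothetical name
spec:
  serviceName: es-data
  replicas: 3
  selector:
    matchLabels:
      app: es-data
  template:
    metadata:
      labels:
        app: es-data
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      # Option 1: emptyDir keeps the data on the node's local disk; it is gone
      # as soon as the pod is removed or the node is replaced.
      #
      # volumes:
      # - name: data
      #   emptyDir: {}
  # Option 2: a volumeClaimTemplate gives each replica its own
  # PersistentVolumeClaim that survives pod rescheduling.
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: standard       # assumption: adjust to your provisioner
      resources:
        requests:
          storage: 100Gi
```

Even with emptyDir on multiple data nodes, Elasticsearch replica shards only protect you until enough nodes lose their local disks at once, so the PVC route (or replicated storage underneath it) is what actually buys durability.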

Related

Change persistent disk type to SSD

I have an Elasticsearch cluster running via ECK on a GKE cluster for production purposes, and in order to increase its performance I'm thinking of changing the persistent disk type to SSD. I came across solutions that suggest creating a snapshot of the disk in GCE and then creating another SSD disk from the data stored in the snapshot. I'm still concerned whether this carries a risk of data loss, and if I create another disk, will my Elasticsearch be able to pick it up, given that it is a StatefulSet?
Since this is a production deployment, I would advise proceeding as follows (a sketch of the SSD storage class change is shown after the steps):
Create a volume snapshot (doc).
Set up a secondary cluster (doc).
Modify the deployment so that it uses an SSD (doc).
Deploy to the secondary cluster.
Once this new deployment has been fully tested, you can switch over the traffic.
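For the SSD step, a minimal sketch on GKE, assuming a new StorageClass called fast-ssd (hypothetical name) that the ECK nodeSet's volumeClaimTemplates then reference; as far as I know ECK won't change the storage class of existing claims in place, which is another reason to bring this up on the secondary cluster:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                       # hypothetical name
provisioner: kubernetes.io/gce-pd      # or pd.csi.storage.gke.io on newer clusters
parameters:
  type: pd-ssd                         # SSD persistent disks instead of pd-standard
---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: production                     # hypothetical cluster name
spec:
  version: 7.17.0
  nodeSets:
  - name: data
    count: 3
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data       # the volume name ECK expects
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: fast-ssd
        resources:
          requests:
            storage: 200Gi
```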

Image pull over multiple K8s nodes

When I create a pod, the corresponding image is pulled to the node where the pod is scheduled.
Can I have those images shared among the cluster nodes, instead of being stored locally on each node?
It's possible if you have shared storage across all the Kubernetes nodes. However, it's not a good idea 🙅, since typically the place where images get stored is also the place where the container runtime stores its files when it's actually running the container. For example, if you are using Docker, everything gets stored under /var/lib/docker, or in the case of containerd under /var/lib/containerd.
So in summary, it's possible with shared/cluster file systems like NFS, Ceph, GlusterFS, AWS EFS, etc., but it's not a good idea in my opinion 🚫.
Update (per @BMitch):
Make sure that the container storage driver you are using supports the filesystem that you are using.
✌️

How to configure Elasticsearch snapshots using persistent volumes as the "shared file system repository" in Kubernetes (on GCP)?

I have registered the snapshot repository and have been able to create snapshots of the cluster for a pod. I have used a mounted persistent volume as the "shared file system repository" for the backup storage.
However, in a production cluster with multiple nodes, the shared file system must be mounted on all the data and master nodes.
Hence I would have to mount the persistent volume on both the data nodes and the master nodes.
But the Kubernetes persistent volume doesn't give me a "read write many" option, so I can't mount it on all the nodes and am therefore unable to register the snapshot repository. Is there a way to use persistent volumes as the backup snapshot storage for a production Elasticsearch cluster in Google Kubernetes Engine?
Reading this, I guess that you are using a self-managed cluster rather than GKE, since on GKE you cannot install agents on master nodes and workers get recreated whenever there is a node pool update. Please make this clear, since it can be misleading.
There are multiple volume types that allow access from many nodes, such as CephFS, GlusterFS and NFS. You can take a look at the different volume types in the Kubernetes volumes documentation.
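As an illustration, a rough sketch assuming an RWX-capable storage class named nfs-rwx exists (hypothetical, e.g. backed by NFS, CephFS or GCP Filestore) and an ECK-managed cluster for concreteness; once the volume is mounted on every node, you register the repository with the usual `PUT _snapshot/my_backup` call using type `fs` and `location: /snapshots`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es-snapshots
spec:
  accessModes: ["ReadWriteMany"]       # needs NFS/CephFS/Filestore-style storage, not a GCE PD
  storageClassName: nfs-rwx            # assumption: an RWX-capable class exists
  resources:
    requests:
      storage: 500Gi
---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: production                     # hypothetical cluster name
spec:
  version: 7.17.0
  nodeSets:
  - name: default
    count: 3
    config:
      path.repo: ["/snapshots"]        # whitelist the mount for the "fs" repository type
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          volumeMounts:
          - name: snapshots
            mountPath: /snapshots
        volumes:
        - name: snapshots
          persistentVolumeClaim:
            claimName: es-snapshots
```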

Kubernetes distributed filesystem

Well, my company is considering moving from Hadoop to Kubernetes. We can find Kubernetes solutions for tools such as Cassandra, Spark, etc. So the last problem for us is how to store a massive amount of files in Kubernetes, say 1 PB. FYI, we DO NOT want to use online storage services such as S3.
As far as I know, HDFS is rarely used on Kubernetes, and there are a few replacement products such as Torus and Quobyte. So my question is: any recommendation for a filesystem on Kubernetes? Or any better solution?
Many thanks.
You can use a Hadoop-compatible filesystem such as Ceph or MinIO, both of which offer S3-compatible REST APIs for reading and writing. In Kubernetes, Ceph can be deployed using the Rook project.
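For example, once a Rook Ceph cluster is running, an S3-compatible object store can be declared with a CephObjectStore resource; a minimal sketch (pool sizes and the store name are illustrative):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store                       # hypothetical name
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPool:
    replicated:
      size: 3                          # 3x replication; erasure coding is the other option at PB scale
  gateway:
    port: 80
    instances: 1                       # RGW gateways serving the S3 API
```

Spark and other Hadoop-compatible tools can then talk to it through the s3a:// connector pointed at the gateway endpoint.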
But overall, running HDFS in Kubernetes would require stateful services like the NameNode and DataNodes, with proper affinity and network rules in place. The Hadoop Ozone project is a recognition that object storage is more common for microservice workloads than HDFS block storage, since trying to analyze PBs of data using distributed microservices wasn't really feasible. (I'm only speculating.)
The alternative is to use the Docker support in Hadoop & YARN 3.x.

How Do I Make A Persistent Volume Accessible to Multiple Kubernetes Pods?

I've done quite a bit of research and have yet to find an answer to this. Here's what I'm trying to accomplish:
I have an ELK stack container running in a pod on a k8s cluster in GCE - the cluster also contains a PersistentVolume (format: ext4) and a PersistentVolumeClaim.
In order to scale the ELK stack to multiple pods/nodes and keep persistent data in Elasticsearch, I either need to have all pods write to the same PV (using the node/index structure of the ES file system), or have some volume logic to scale up/create these PVs/PVCs.
Currently, if I spin up a second pod via the replication controller, it can't mount the PV.
So I'm wondering if I'm going about this the wrong way, and what is the best way to architect this solution to allow for persistent data in ES when my cluster/nodes autoscale.
Persistent volumes have access semantics. On GCE, I'm assuming you are using a persistent disk, which can either be mounted as writable by a single pod or as read-only by multiple pods. If you want multi-writer semantics, you need to set up NFS or some other storage that lets you write from multiple pods.
In case you are interested in running NFS - https://github.com/kubernetes/kubernetes/blob/release-1.2/examples/nfs/README.md
FYI: We are still working on supporting auto-provisioning of PVs as you scale your deployment. As of now it is a manual process.
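For reference, the NFS route looks roughly like this, assuming an NFS server already exports /exports/es at 10.0.0.20 (both are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-shared
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteMany                      # GCE persistent disks only offer ReadWriteOnce / ReadOnlyMany
  nfs:
    server: 10.0.0.20                  # hypothetical NFS server address
    path: /exports/es
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es-shared
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""                 # bind to the pre-created PV instead of a dynamic provisioner
  resources:
    requests:
      storage: 100Gi
```

Every pod behind the replication controller can then mount the es-shared claim simultaneously. That said, for Elasticsearch specifically it's usually better to give each data node its own volume and let index replicas handle redundancy, rather than pointing multiple nodes at one shared data directory.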
