We are using Elasticsearch 1.1.1.
We have a cluster with 3 nodes, and the three nodes run on 3 different machines.
Accessing the cluster and performing index operations work fine.
But when we use the snapshot feature to take a backup, the backup fails.
However, if all three nodes are on the same machine, the snapshot command works fine.
Has anybody faced this issue?
I have not included the configuration details here, as the cluster and indexing operations work without any issues.
Thanks in advance.
For those who are looking for a solution: when the nodes are on different machines, the snapshot repository has to live on a shared filesystem (NFS in our case) that is mounted at the same path on every node.
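For anyone setting this up, here is a minimal sketch of what a shared-filesystem repository looks like. The mount point /mnt/es_backups and the repository name my_backup are made up for illustration; the API calls are the standard snapshot endpoints.

# Every node must be able to read and write the same path, e.g. an NFS mount.
# Newer Elasticsearch versions also require the location to be whitelisted in
# elasticsearch.yml on every node:  path.repo: ["/mnt/es_backups"]

# Register the repository once, against any node:
curl -X PUT 'http://localhost:9200/_snapshot/my_backup' \
  -H 'Content-Type: application/json' \
  -d '{ "type": "fs", "settings": { "location": "/mnt/es_backups" } }'

# Take a snapshot:
curl -X PUT 'http://localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true'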
Hello enthusiastic people.
I am a student trying to learn the Elastic Stack.
I have 1 node installed on my local machine. I have also successfully installed Beats on my other local machine to collect data and deliver it to my Logstash.
My question is: if I add another node, do I still need to install Kibana and Elasticsearch on it, and then connect it to my first node?
I have read in many places that a single node is prone to data loss.
Sorry for my noob question.
Your answer is very appreciated.
Thanks in advance.
Having a cluster with at least 3 nodes is a good idea to ensure resilience and data integrity.
A cluster can have one or more nodes.
It will be easier for you to install with Docker during the learning and development process. I recommend you follow the link below; it explains how to set up a 3-node Elasticsearch cluster on Docker.
Start a multi-node cluster with Docker Compose
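If you would rather experiment without Compose first, here is a rough sketch using plain docker commands. The image tag, network name, and node names are my own examples, not from the linked guide, and the host may also need vm.max_map_count raised as described in the Elasticsearch Docker docs.

docker network create esnet

# First node, with the HTTP port published so Kibana/Beats can reach it:
docker run -d --name es01 --net esnet -p 9200:9200 \
  -e node.name=es01 \
  -e cluster.name=demo-cluster \
  -e discovery.seed_hosts=es01,es02,es03 \
  -e cluster.initial_master_nodes=es01,es02,es03 \
  -e xpack.security.enabled=false \
  -e ES_JAVA_OPTS="-Xms512m -Xmx512m" \
  docker.elastic.co/elasticsearch/elasticsearch:8.13.4

# es02 and es03 are started the same way (change node.name and the container
# name, and drop the -p mapping).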
Sorry for what may be an obvious question, but I have a 3-node Elasticsearch cluster and I want it to take a nightly snapshot that is sent to S3 for recovery. I have done this for my test cluster, which is a single node. As I was starting to do it for my 3-node production cluster, I was left wondering: do I have to configure the repository and snapshot on each node separately, or can I just do it on one node via Kibana and have it replicated across the cluster? I have looked through the documentation but didn't see anything about this.
Thank you!
Yes, part of the configuration needs to be done on every node.
First, you need to install the repository-s3 plugin on every node; this is explained in the documentation.
After that, you also need to add the access and secret keys to the elasticsearch-keystore on every node (documentation).
The rest of the configuration, creating the repository and setting up the snapshots, is done through Kibana only once.
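Roughly, the per-node and one-time steps look like this. The repository name, bucket, and policy name below are made up for illustration; the plugin and keystore commands are the documented ones.

# On every node, then restart it:
bin/elasticsearch-plugin install repository-s3
bin/elasticsearch-keystore add s3.client.default.access_key
bin/elasticsearch-keystore add s3.client.default.secret_key

# Once, from Kibana Dev Tools or curl against any node, register the repository:
curl -X PUT 'http://localhost:9200/_snapshot/my_s3_repo' \
  -H 'Content-Type: application/json' \
  -d '{ "type": "s3", "settings": { "bucket": "my-es-snapshots" } }'

# Once, a nightly snapshot policy via SLM (available in recent versions):
curl -X PUT 'http://localhost:9200/_slm/policy/nightly-snapshots' \
  -H 'Content-Type: application/json' \
  -d '{ "schedule": "0 30 1 * * ?", "name": "<nightly-snap-{now/d}>", "repository": "my_s3_repo" }'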
I'm having an issue setting up my cluster according to the documents, as seen here: https://docs.sensu.io/sensu-go/5.5/guides/clustering/
This is a non-HTTPS setup to get my feet wet; I'm not concerned with that at the moment. I just want a running cluster to begin with.
I've set up sensu-backend on my three nodes and have configured backend.yml accordingly on all three through an Ansible playbook. However, the nodes do not discover each other. Each one simply shows the following:
For backend1:
=== Etcd Cluster ID: 3b0efc7b379f89be
ID Name Peer URLs Client URLs
────────────────── ─────────────────── ─────────────────────── ───────────────────────
8927110dc66458af backend1 http://127.0.0.1:2380 http://localhost:2379
For backend2 and backend3 it's the same, except each shows itself as the only member of its own cluster.
I've tried both the configuration in the docs and the configuration in this GitHub issue: https://github.com/sensu/sensu-go/issues/1890
Neither has panned out for me. I've ensured all the ports are open, so that's not the issue.
When I do a manual sensuctl cluster member-add X X, I get an error message and the sensu-backend process fails. I can't remove the member either, because that prevents the process from starting at all; I have to revert to an earlier snapshot to fix it.
The configs on all machines are the same, except that the IPs and names are adjusted for each machine:
etcd-advertise-client-urls: "http://XX.XX.XX.20:2379"
etcd-listen-client-urls: "http://XX.XX.XX.20:2379"
etcd-listen-peer-urls: "http://0.0.0.0:2380"
etcd-initial-cluster: "backend1=http://XX.XX.XX.20:2380,backend2=http://XX.XX.XX.31:2380,backend3=http://XX.XX.XX.32:2380"
etcd-initial-advertise-peer-urls: "http://XX.XX.XX.20:2380"
etcd-initial-cluster-state: "new" # have also tried existing
etcd-initial-cluster-token: ""
etcd-name: "backend1"
Did you find the answer to your question? I saw that you posted over on the Sensu forums as well.
In any case, the easiest thing to do here would be to stop the cluster, blow away /var/lib/sensu/sensu-backend/etcd/ and reconfigure the cluster. As it stands, the behavior you're seeing suggests the cluster members were started individually as standalone backends before the clustering configuration was applied, which is what is likely causing the issue and is the reason for blowing the etcd directory away.
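A sketch of that reset, assuming the backends run under systemd (the service name and ordering are assumptions; adjust to your environment):

# On all three backends, stop sensu-backend first:
sudo systemctl stop sensu-backend

# Remove the stale single-node etcd state on each backend:
sudo rm -rf /var/lib/sensu/sensu-backend/etcd/

# Double-check backend.yml (etcd-initial-cluster, advertise/listen URLs) on
# each node, then start the backends again at roughly the same time:
sudo systemctl start sensu-backend

# Verify that all three members show up:
sensuctl cluster member-list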
I manage a small ELK stack which resides on one Docker host with a 3-node Elasticsearch cluster: a master, a client node, and a data node, each running Elasticsearch 2.4.x. Each of these nodes had the same host directory bind-mounted as the Elasticsearch data directory, even though only the data node needed it.
While testing the upgrade path to 5.x, I ran into a very strange issue. The cluster would come back up, but would not initialize any of the indices created under 2.x, throwing errors:
[o.e.g.DanglingIndicesState] [elastic-data] [[logstash-2017.02.01/pBco8d7dQAqmZoI37vUIOQ]] dangling index exists on local file system, but not in cluster metadata, auto import to cluster state
The indices would never initialize and remained red. The stack would create new indices fine, and if I deleted the old indices the system worked perfectly, but that data loss is definitely sub-optimal if I were going to do this on a production system.
The fact that the master and client nodes had been mounting the data directory turned out to be the cause of this issue. Elasticsearch 5.x enforces a default limit of one node per data directory (node.max_local_storage_nodes), and while the master and client 2.x nodes had not been actively managing data, they had affected the folder structure. I was able to get a clean upgrade with all-green indices by first removing the bind mounts from the master and client on the 2.x cluster and letting that sort itself out, then upgrading to 5.x. Hope this helps anybody else who runs into this issue.
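If anyone wants to double-check which nodes are actually pointing at (and reporting) a given data directory before upgrading, something like this is a quick sketch; the host and port are assumptions for a locally reachable node.

# Each node's configured data path:
curl -s 'http://localhost:9200/_nodes?pretty&filter_path=nodes.*.name,nodes.*.settings.path.data'

# The filesystem data paths each node actually reports:
curl -s 'http://localhost:9200/_nodes/stats/fs?pretty'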
I'm sorry that this is probably a kind of broad question, but I haven't found a solution for this problem yet.
I'm trying to run an Elasticsearch cluster on Mesos through Marathon with Docker containers. To do this, I built a Docker image that can start on Marathon and dynamically scale via either the frontend or the API.
This works great for test setups, but the question remains how to persist the data so that if the cluster is either scaled down (I know this also depends on the index configuration itself) or stopped, I can restart later (or scale up) with the same data.
The thing is that Marathon decides where (on which Mesos slave) the nodes run, so from my point of view it's not predictable whether all the data will be available to the "new" nodes upon restart if I persist the data to the Docker hosts via Docker volumes.
The only things that come to my mind are:
Using a distributed file system like HDFS or NFS, with volumes mounted either on the Docker host or in the Docker containers themselves. Still, that leaves the question of how to load all the data during the new cluster's startup if the "old" cluster had, for example, 8 nodes and the new one only has 4.
Using the Snapshot API of Elasticsearch to save to a common drive somewhere in the network. I assume that this will have performance penalties...
Is there any other way to approach this? Are there any recommendations? Unfortunately, I didn't find a good resource on this kind of topic. Thanks a lot in advance.
Elasticsearch and NFS are not the best of pals ;-). You don't want to run your cluster's data on NFS; it's much too slow, and Elasticsearch works better the faster the storage is. If you introduce the network into that equation, you'll get into trouble. I have no idea about Docker or Mesos, but I definitely recommend against NFS as the data store. Use snapshot/restore.
The first snapshot will take some time, but the rest of the snapshots should take less space and less time. Also, note that "incremental" means incremental at file level, not document level.
The snapshot itself involves all the nodes that hold primaries of the indices you want snapshotted, and those nodes all need access to the common location (the repository) so that they can write to it. This shared access to the same location is usually not that obvious, which is why I'm mentioning it.
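As a concrete sketch, assuming a repository (here called common_repo, an illustrative name) has already been registered on a location that every data node can reach:

# Take a snapshot before scaling down or stopping the cluster:
curl -X PUT 'http://localhost:9200/_snapshot/common_repo/before_scale_down?wait_for_completion=true'

# Later, restore it into the new (possibly smaller) cluster:
curl -X POST 'http://localhost:9200/_snapshot/common_repo/before_scale_down/_restore'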
The best way to run Elasticsearch on Mesos is to use a specialized Mesos framework. The first effort in this area is https://github.com/mesosphere/elasticsearch-mesos. There is a more recent project which is, AFAIK, currently under development: https://github.com/mesos/elasticsearch. I don't know what its status is, but you may want to give it a try.