Elasticsearch multinode setup

I want to set up a 3-node cluster in Elasticsearch, but I am unable to: I get a "connection refused" error on the data machine. The master machine starts fine, but it shows 0 nodes added.

I would recommend reading a tutorial first, such as:
https://www.digitalocean.com/community/tutorials/how-to-set-up-a-production-elasticsearch-cluster-on-ubuntu-14-04
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery-zen.html
and then asking a precise question here about a specific issue.
Regarding your question, I think you didn't configure discovery.zen.ping.unicast.hosts correctly, so the nodes don't know about each other.
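For example, assuming a pre-7.x cluster (the zen discovery link above) and placeholder hostnames es-node-1/2/3, the relevant part of elasticsearch.yml on each node might look roughly like this:

cluster.name: my-cluster                                  # must be identical on all three nodes
node.name: es-node-1                                      # unique per node
network.host: 0.0.0.0                                     # bind to a reachable interface, not just localhost
discovery.zen.ping.unicast.hosts: ["es-node-1", "es-node-2", "es-node-3"]
discovery.zen.minimum_master_nodes: 2                     # (master-eligible nodes / 2) + 1

If network.host is left at its default (loopback only), the other nodes cannot reach the transport port and you get exactly the "connection refused" you describe.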
Also, when you post a question, please include:
the Elasticsearch version
the environment (AWS, VM, ...)
a configuration sample
Welcome to SO!

Related

How to set up 2 nodes on Elasticsearch?

Hello enthusiastic people.
I am a student trying to learn Elastic stack.
I have 1 node installed on my local machine. I have also successfully installed Beats on my other local machine to collect data and deliver it to my Logstash.
My question is: if I add another node, do I still need to install Kibana and Elasticsearch on it, and then connect it to my first node?
I have read in many places that a single node is prone to data loss.
Sorry for my noob question.
Your answer is very appreciated.
Thanks in advance.
A cluster can have one or more nodes, and having at least 3 nodes is a good way to ensure data safety and integrity.
During learning and development, it will be easier for you to install with Docker. I recommend you follow the link below, which explains how to set up a 3-node Elasticsearch cluster with Docker.
Start a multi-node cluster with Docker Compose
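As a rough sketch of what that Compose file ends up looking like (trimmed to two of the three nodes, security disabled for simplicity, and the image tag, memory settings, and names being placeholders):

services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.0   # placeholder version
    environment:
      - node.name=es01
      - cluster.name=demo-cluster
      - discovery.seed_hosts=es02,es03                # where this node looks for peers
      - cluster.initial_master_nodes=es01,es02,es03   # bootstraps the first master election
      - xpack.security.enabled=false                  # only acceptable for local learning setups
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
    ports:
      - 9200:9200
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.0
    environment:
      - node.name=es02
      - cluster.name=demo-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - xpack.security.enabled=false
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
  # es03 follows the same pattern, differing only in node.name and discovery.seed_hosts

The two settings doing the clustering work are discovery.seed_hosts (how the nodes find each other) and cluster.initial_master_nodes (which nodes may bootstrap the very first cluster).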

elasticsearch node not alive

I need some help and I will try and offer as much information as I can as I am unfamiliar with Elasticsearch.
I have been given access to a server that has Elasticsearch installed and, I am guessing, uses one node to run it.
When running docker ps -a I can see the name of the container and its ID, and I can also log into it.
however, in a certain part of the application I am getting this error message:
production.INFO: Exception at search page No alive nodes found in your cluster
When digging in a little more I can see the following:
production.ERROR: No alive nodes found in your cluster {"userId":1639,"exception":"[object] (Elasticsearch\Common\Exceptions\NoNodesAvailableException(code: 0): No alive nodes found in your cluster at /var/www/vendor/elasticsearch/elasticsearch/src/Elasticsearch/ConnectionPool/StaticNoPingConnectionPool.php:50)
I am assuming the problem is that there is no connection with the node, but the answers I found on the web do not explain how to fix the issue, or when I try the fixes I get other errors on my side (systemctl not installed and such).
Can anyone explain how I can restart the nodes through the cli? I know for certain the code was not changed so it has to be something to do with the server.
If anyone can help me out, that would be great! Thanks for your time.
So my issue was that I needed to run:
sysctl -w vm.max_map_count=262144
I understand this is to increase virtual memory for the container (I found this in a document that was left in the system).
But I would really appreciate it if someone could explain why this issue suddenly appeared and whether there is a better solution I can use.
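One likely explanation (an assumption, since the host setup isn't shown): Elasticsearch requires vm.max_map_count to be at least 262144 and fails its bootstrap checks otherwise, and a value set with sysctl -w only lasts until the next reboot. So after the Docker host restarts, the Elasticsearch process in the container exits, and the PHP client correctly reports that no node is alive. A more durable fix is to persist the setting on the Docker host:

# /etc/sysctl.d/99-elasticsearch.conf  (on the Docker host, not inside the container)
vm.max_map_count = 262144

# apply it without rebooting
sysctl --system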

Setting up a Sensu-Go cluster - cluster is not synchronizing

I'm having an issue setting up my cluster according to the documents, as seen here: https://docs.sensu.io/sensu-go/5.5/guides/clustering/
This is a non-https setup to get my feet wet, I'm not concerned with that at the moment. I just want a running cluster to begin with.
I've set up sensu-backend on my three nodes and configured backend.yml accordingly on all three through an Ansible playbook. However, each node does not discover the other two. It simply shows the following:
For backend1:
=== Etcd Cluster ID: 3b0efc7b379f89be
ID Name Peer URLs Client URLs
────────────────── ─────────────────── ─────────────────────── ───────────────────────
8927110dc66458af backend1 http://127.0.0.1:2380 http://localhost:2379
For backend2 and backend3, it's the same, except it shows those individual nodes as the only nodes in their cluster.
I've tried both the configuration in the docs, as well as the configuration in this git issue: https://github.com/sensu/sensu-go/issues/1890
None of these have panned out for me. I've ensured all the ports are open, so that's not an issue.
When I do a manual sensuctl cluster member-add X X, I get an error message and it results in the sensu-backend process failing. I can't remove the member, either, because it causes the entire process to not be able to start. I have to revert to an earlier snapshot to fix it.
The configs on all machines are the same, except the IPs and names are adjusted for each machine:
etcd-advertise-client-urls: "http://XX.XX.XX.20:2379"
etcd-listen-client-urls: "http://XX.XX.XX.20:2379"
etcd-listen-peer-urls: "http://0.0.0.0:2380"
etcd-initial-cluster: "backend1=http://XX.XX.XX.20:2380,backend2=http://XX.XX.XX.31:2380,backend3=http://XX.XX.XX.32:2380"
etcd-initial-advertise-peer-urls: "http://XX.XX.XX.20:2380"
etcd-initial-cluster-state: "new" # have also tried existing
etcd-initial-cluster-token: ""
etcd-name: "backend1"
Did you find the answer to your question? I saw that you posted over on the Sensu forums as well.
In any case, the easiest thing to do would be to stop the cluster, blow away /var/lib/sensu/sensu-backend/etcd/, and reconfigure the cluster. The behavior you're seeing suggests the cluster members were each started individually first, so each one bootstrapped its own single-node etcd cluster; that is what is likely causing the issue and is the reason for blowing the etcd dir away.

CouchBase Replication Error to Elastic Search

I have an existing replication from Couchbase to Elasticsearch. I found out that there are now errors in replicating:
I tried to Create Replication again, but it gave the same error:
I already checked my Elasticsearch head plugin and I can see data in there and can query it with results. I also restarted my Elasticsearch batch file, but the error persists.
Can anyone help me with what else I need to check to investigate the issue further? Thank you in advance.
You may have a connectivity problem, which can happen due to networking issues like an IP address change since you initially set up the replication.
You might try the troubleshooting steps outlined here if you haven't already:
http://developer.couchbase.com/documentation/server/4.1/connectors/elasticsearch-2.1/trouble-intro.html
You should also check the goxdcr logs, which you can find here depending on the OS you're using:
http://developer.couchbase.com/documentation/server/4.0/troubleshooting/troubleshooting-logs.html
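If it helps, on a default Linux install those logs normally land under /opt/couchbase/var/lib/couchbase/logs/ (this path is an assumption and differs on Windows/macOS), so watching the XDCR log while the replication retries is a quick way to see the underlying error:

# default Linux path; adjust for your install
tail -f /opt/couchbase/var/lib/couchbase/logs/goxdcr.log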

Datastax Opscenter issue: dashboard timeout

I installed the Datastax community version on an EC2 server and it worked fine. After that I tried to add one more server; I see two nodes in the Nodes menu, but on the main dashboard I see the following error:
Error: Call to /Test_Cluster__No_AMI_Parameters/rc/dashboard_presets/ timed out.
One potential root cause I can see is the name of the cluster: I specified something else in cassandra.yaml, but it looks like OpsCenter is still using the original name. Any help would be greatly appreciated.
It was because the cluster name change wasn't made properly. I found it easier to change the cluster name before starting the Cassandra cluster. On top of this, only one instance of opscenterd needs to run in a single cluster. datastax-agent needs to be running on all nodes in the cluster, but they all need to point to the same opscenterd (the change needs to be made in /var/lib/datastax-agent/conf/address.yaml).
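For reference, the relevant part of that address.yaml typically looks something like this (the IP below is a hypothetical placeholder for the machine running opscenterd):

# /var/lib/datastax-agent/conf/address.yaml on every node
stomp_interface: "10.0.0.5"    # hypothetical IP of the opscenterd host

After changing it, restart the datastax-agent service on each node so it reconnects to the right opscenterd.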
