I'm trying to set up a two-node system on separate Linux servers. Based on the "start multiple instances" documentation, they run rethinkdb on the primary server with the bind all switch, and then on the secondary they use the join switch to point to the primary's IP/port.
I would like to use the config file.
On my node1 (primary) I have the following:
bind 192.168.1.177
canonical-address 192.168.1.177
On node2 (secondary) I have the following:
bind 192.168.1.178
canonical-address 192.168.1.178
join 192.168.1.177:29015
On startup, node2 doesn't connect to node1. The only way I can get it to work is to also add join on node1 and point it at node2's IP/port. Is that correct? An example of the two configs would be appreciated.
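For concreteness, here's roughly what I expected the two config files to look like, using the key=value format from the packaged sample config (the file paths and the cluster port 29015 are the defaults I'm assuming):

node1 (/etc/rethinkdb/instances.d/node1.conf):
bind=192.168.1.177
canonical-address=192.168.1.177
cluster-port=29015

node2 (/etc/rethinkdb/instances.d/node2.conf):
bind=192.168.1.178
canonical-address=192.168.1.178
cluster-port=29015
join=192.168.1.177:29015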
I have a service called workload.
I need to run this service across 3 nodes: node1 is my primary node, and node2 & node3 are secondary.
The service should run only on node1; if anything goes wrong, it should fail over to one of the secondary nodes so that there is no interruption to our service.
I have a cluster of Consul servers in two datacenters, with 3 servers in each datacenter. When I execute consul members -wan I can see all 6 servers.
I want to separate these into two individual clusters with no connection between them.
I tried to use the force-leave and leave commands as per the Consul documentation:
https://www.consul.io/commands/force-leave: When I used this command, the result was a 500 - no node found. I tried using the node name as server.datacenter, the full FQDN of the server, and the IP of the server; none of them worked for me.
https://www.consul.io/commands/leave: When I used this command from the node which I want to remove from the cluster, the response was success, but when I execute consul members -wan I can still see this node.
I tried another approach: I stopped Consul on the node I want to remove from the cluster, then executed the command consul force-leave node-name. After that, consul members -wan showed this node as left. But when I started Consul on this node again, it was back in the cluster.
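For reference, the exact sequence I ran was roughly this (I manage Consul with systemd, so the stop/start commands are specific to my setup, and node-name is a placeholder for the real node name):

sudo systemctl stop consul      # on the node being removed
consul force-leave node-name    # run from one of the remaining servers
consul members -wan             # the removed node now shows as "left"
sudo systemctl start consul     # on the removed node; after this it shows up in the WAN pool again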
What steps am I missing here?
I think I solved the problem I had. I followed the instructions here:
https://support.hashicorp.com/hc/en-us/articles/1500011413401-Remove-WAN-federation-between-Consul-clusters
Currently I'm trying to change the data node's SSH port in Hadoop. I have a master and one data node, and each host's SSH port is different.
What I did:
Generated an SSH key, and I can connect to the data node without a password.
Added both hosts to /etc/hosts as master and worker1.
Changed the SSH port for the master node in the hadoop-env.sh file (see the snippet below).
Changed the same file on the data node as well.
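For reference, the change I made in hadoop-env.sh looks roughly like this (2222 stands in for my actual port; as far as I understand, HADOOP_SSH_OPTS is passed to every ssh call the start/stop scripts make, so one port applies to all hosts):

export HADOOP_SSH_OPTS="-p 2222"    # same ssh options are used for the master and the data node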
The problem is that Hadoop then uses the same SSH port for the master and the data node.
How do I make Hadoop use a different SSH port for the master and for the data node?
Any help would be appreciated :)
I want to set up an Elasticsearch cluster. As it is a distributed system, I should be able to add more nodes on the fly (meaning: adding new nodes after it has been deployed). How is this done, and how does Elasticsearch manage it?
Elasticsearch handles this using Zen Discovery
The zen discovery is the built in discovery module for elasticsearch
and the default. It provides unicast discovery, but can be extended to
support cloud environments and other forms of discovery.
This is done through the elasticsearch.yml configuration file. You have two options, multicast and unicast:
Multicast lets your new node connect to the cluster without specifying any IPs; however, it's not recommended.
Unicast: you specify a list of the nodes in your cluster (their IPs).
Either way, the node you start will try to ping the other nodes, and if the cluster names match, it will join the cluster.
Configuration example:
cluster.name: elasticsearch_media
node.name: "media-dev"
node.master: true
node.data: true
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["153.32.228.250[9300-9400]", "10.122.234.19[9300-9400]"]
All you have to do is edit the main configuration file on the new node and set the cluster name to that of the cluster you are currently running. Of course, the new node must be discoverable; this depends on your network settings.
Try writing a script that accepts command-line arguments for the cluster name, IP addresses, authentication, etc. This script would open and modify the elasticsearch.yml file on the remote server, as sketched below.
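A minimal sketch of such a script, assuming SSH access to the new node, the default package path /etc/elasticsearch/elasticsearch.yml, and a service-based install (all of those are assumptions, not part of the question):

#!/bin/sh
# Usage:   ./join_cluster.sh <cluster_name> <new_node_host> '<unicast_hosts>'
# Example: ./join_cluster.sh elasticsearch_media 10.122.234.19 '["153.32.228.250:9300", "10.122.234.19:9300"]'
CLUSTER_NAME="$1"
NEW_NODE="$2"
UNICAST_HOSTS="$3"

# Append the cluster settings to the new node's config...
ssh "$NEW_NODE" "sudo tee -a /etc/elasticsearch/elasticsearch.yml" <<EOF
cluster.name: ${CLUSTER_NAME}
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ${UNICAST_HOSTS}
EOF

# ...and restart Elasticsearch so it picks up the settings and joins the cluster.
ssh "$NEW_NODE" "sudo service elasticsearch restart"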
I have 2 Linux VMs (both at the same datacenter of a cloud provider): Elastic1 and Elastic2, where Elastic2 is a clone of Elastic1. Both have the same version of CentOS, the same cluster name, and the same version of ES; again, Elastic2 is a clone.
I use the service wrapper to automatically start them both at boot, and added each other's IP to their respective iptables files, so now I can successfully ping between the nodes.
I thought this would be enough to allow ES to form a cluster, but to no avail.
Both Elastic1 and Elastic2 have 1 index each, named e1 and e2 respectively. Each index has 1 shard with no replicas.
I can use the head and paramedic plugins on each server successfully, and use curl -XGET 'http://localhost:9200/_cluster/nodes?pretty=true' to validate that the cluster name is the same and that each server lists only 1 node.
Is there anything glaring as to why these nodes aren't talking? I've restarted the ES service and rebooted both servers to no avail. Could cloning be the problem?
In your elasticsearch.yml:
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ['host1:9300', 'host2:9300']
So, just list your node IPs with the transport port (default is 9300) under unicast hosts. Multicast is enabled by default, but is generally impossible in cloud environments without the use of external plugins.
Also, make sure to check your IP rules / security groups! That's easy to forget.
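As a rough illustration of both points (the source IP below is a placeholder for your other VM, 9300 is the default transport port, and _cluster/health is the standard health endpoint):

# Allow the other node to reach this node's transport port:
iptables -A INPUT -p tcp -s 10.0.0.2 --dport 9300 -j ACCEPT

# Once both services are restarted, number_of_nodes in the health output should read 2:
curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'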