Bind Elasticsearch to localhost as well as an IP address - elasticsearch

The modules-network page in the Elasticsearch documentation says that a node can bind to more than one network address by specifying an array of IP addresses in network.bind_host.
I put the following in my config/elasticsearch.yml:
# Used a real IP address in the below settings
network.bind_host: ["10.10.10.10","_local_"]
network.publish_host: 10.10.10.10
But it does not work and I get the following error:
failed to send join request to master [{Perseus}{AB0...B-kw}{10.10.10.172}{10.10.10.172:9300}],
reason [RemoteTransportException[[Perseus][10.10.10.172:9300][internal:discovery/zen/join]];
nested: ConnectTransportException[[Glitch][10.10.10.164:9300]
connect_timeout[30s]]; nested: NotSerializableExceptionWrapper[Connection refused: /10.10.10.164:9300];
Any ideas what I am doing wrong?
All my other Elasticsearch nodes are running happily on other machines.
I want to bind every Elasticsearch node to its own IP address as well as localhost, so that I can run Storm jobs on each machine whose EsBolts feed only into localhost.
That way, none of my EsBolts needs to round-robin its feed across several Elasticsearch nodes.
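For reference, a sketch of what I am aiming for on every machine; the second address below is just a placeholder for another node's own IP:
# node running on 10.10.10.10
network.bind_host: ["10.10.10.10", "_local_"]
network.publish_host: 10.10.10.10
# another node, e.g. on 10.10.10.11
network.bind_host: ["10.10.10.11", "_local_"]
network.publish_host: 10.10.10.11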

Extending Usable IPs for Mikrotik

I can't seem to extend my pool for my additional users. I don't have any problem with my connection when the IP address handed out is 192.168.10.xxx, but when it reaches 192.168.11.xxx to 192.168.16.xxx, the client can no longer use the internet. What am I missing in my setup?
Updated:
If you want to add more IPs in your DHCP server, just increase the range of your current network. You currently have 253 clients (192.168.10.0/24).
Don't add more /24 networks, it's not needed; just use, for example, 192.168.8.0/21 (range .8.1 to .15.254) to get 2046 IPs. See http://www.subnet-calculator.com/subnet.php to test various network ranges.
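To spell out the arithmetic: a /21 mask leaves 32 - 21 = 11 host bits, so 2^11 - 2 = 2046 usable addresses, spanning 192.168.8.1 through 192.168.15.254.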
So I suggest this:
remove the parasite /24 networks and NAT rules (192.168.11.0/24 to 192.168.16.0/24)
increase the range of your current network: change the IP address from 192.168.10.1/24 to 192.168.10.1/21 (subnet 255.255.248.0, network 192.168.8.0)
change the NAT/masquerade rule: src-address=192.168.8.0/21
change the dhcp network range to 192.168.8.0/21
change the dhcp pool to two ranges: 192.168.8.1-192.168.9.254 and 192.168.10.100-192.168.15.254
and, normally, it should work
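On RouterOS that translates roughly into the following sketch (the pool name dhcp_pool1 is an assumption, and the find expressions assume a single matching address, NAT rule and DHCP network; check with the corresponding print commands first):
/ip address set [find address="192.168.10.1/24"] address=192.168.10.1/21
/ip firewall nat set [find chain=srcnat] src-address=192.168.8.0/21
/ip dhcp-server network set [find address="192.168.10.0/24"] address=192.168.8.0/21
/ip pool set dhcp_pool1 ranges=192.168.8.1-192.168.9.254,192.168.10.100-192.168.15.254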

How to get hostname from Elasticsearch Python module?

Using the Elasticsearch Python client, how do we get the working Elasticsearch host?
from elasticsearch import Elasticsearch
es = Elasticsearch()
health = es.cluster.health()
The statement above returns the health of the Elasticsearch cluster, but how do I get the working host from this?
Use hosts = es.transport.hosts to get the list of hosts.
You can also use something like:
con = elasticsearch.connection.RequestsHttpConnection(**hosts[0])
con.perform_request(...)
But even better, you can do something like:
es.transport.perform_request('GET', '/_cat/tasks')
in order to perform any request on a healthy host from the list of configured hosts.
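Putting that together, a minimal sketch for elasticsearch-py versions before 8 (the default localhost:9200 cluster and the _cat/health endpoint are just examples):
from elasticsearch import Elasticsearch

es = Elasticsearch()                       # defaults to localhost:9200
print(es.transport.hosts)                  # list of configured host dicts
# arbitrary request, routed to a healthy host from that list
print(es.transport.perform_request('GET', '/_cat/health'))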
For newer versions (>= 8.x) this is possible by accessing .transport.node_pool.all():
# get a list of all connections
nodes = [node.base_url for node in es.transport.node_pool.all()]
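A self-contained sketch for the 8.x client (the URL is an assumption for a local single-node setup):
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
# base_url of every node the client knows about
print([node.base_url for node in es.transport.node_pool.all()])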

Unable to add another node to an existing node to form a cluster. Couldn't change num_tokens to vnodes

I have installed Cassandra on two individual nodes, both on Amazon. When I try to configure the nodes to form a cluster, I receive the following error.
ERROR [main] 2016-05-12 11:01:26,402 CassandraDaemon.java:381 - Fatal configuration error
org.apache.cassandra.exceptions.ConfigurationException: Cannot change the number of tokens from 1 to 256.
I am using these settings in the cassandra.yaml file:
listen_address and rpc_address: private IP address
seeds: public IP (Elastic IP address)
num_tokens: 256
This message usually appears when num_tokens is changed after the node has been bootstrapped.
The solution is:
Stop Cassandra on all nodes
Delete the data directory (inc. datafiles, commitlog and saved_caches)
Double check that num_tokens is set to 256, initial_token is commented out and auto_bootstrap is set to true in cassandra.yaml
Start Cassandra on all nodes
This will wipe your existing cluster and cause the nodes to bootstrap from scratch again.
Cassandra doesn't support changing between vnodes and static tokens after a datacenter is bootstrapped. If you need to change from vnodes to static tokens, or vice versa, in an already running cluster, you'll need to create a second datacenter using the new configuration, stream your data across, and then decommission the original nodes.
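As a sketch, the relevant cassandra.yaml lines on each node before restarting (the "double check" step above) would look like this; auto_bootstrap is not present in the default file and defaults to true if omitted:
num_tokens: 256
# initial_token:       # leave commented out when using vnodes
auto_bootstrap: true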

ElasticSearch... Not So Elastic?

I have used this method to build Elasticsearch clusters in the cloud. It works 30%-50% of the time.
I start with 2 CentOS nodes on 2 servers in DigitalOcean's cloud. I then install ES and set the same cluster name in each config/elasticsearch.yml. Then I also set (uncomment):
discovery.zen.ping.multicast.enabled: false
as well as set and uncomment:
discovery.zen.ping.unicast.hosts: ['192.168.10.1:9300', '192.168.10.2:9300']
in each of the 2 servers. SO reference here.
Then, to give ES the benefit of the doubt, I run service iptables stop and then restart the service on each node. Sometimes the servers see each other and I get a "cluster" out of Elasticsearch; most of the time, though, the servers don't see each other, even though multicast is disabled and the unicast hosts array lists specific IP addresses that have NO firewall on and point to each other.
WHY, ES community? Why does a hello-world equivalent of Elasticsearch prove to be inelastic, to say the least? (Let me openly and readily admit this MUST be user error/idiocy, else no one would use this technology.)
At first I was trying to build a simple 4-node cluster, but goodness gracious, the issues that came along with that before indexing a single document were ridiculous. I had a 0% success rate. Some nodes saw some other nodes (via head and paramedic) while others had 'dangling indices' and 'unassigned indexes'. When I googled this I found tons of relevant/similar issues and no workable answers.
Can someone send me an example of how to build an elastic search cluster, that works?
#Ben_Lim's Answer: Did everyone who needs this as a resource get that?
I took 1 node (this is not for prod), Server1, and changed the following settings in config/elasticsearch.yml:
uncomment node.master: true
uncomment and set network.host: 192.XXX.1.10
uncomment transport.tcp.port: 9300
uncomment discovery.zen.ping.multicast.enabled: false
uncomment and set discovery.zen.ping.unicast.hosts: ["192.XXX.1.10:9300"]
That sets the master. Then, in each subsequent node (example above) that wants to join:
uncomment node.master: false
uncomment and set network.host: 192.XXX.1.11
uncomment transport.tcp.port: 9301
uncomment discovery.zen.ping.multicast.enabled: false
uncomment and set discovery.zen.ping.unicast.hosts: ["192.XXX.1.10:9300"]
Obviously, make sure all nodes have the same cluster name and your iptables firewalls etc. are set up right.
NOTE AGAIN: this is not for prod, but a way to start testing ES in the cloud; you can tighten up the screws from here.
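For anyone who wants the two files spelled out, here is a sketch of the elasticsearch.yml settings from the steps above (the cluster name mycluster is just an example):
# Server1 (master), config/elasticsearch.yml
cluster.name: mycluster
node.master: true
network.host: 192.XXX.1.10
transport.tcp.port: 9300
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.XXX.1.10:9300"]
# Server2 (joining node), config/elasticsearch.yml
cluster.name: mycluster
node.master: false
network.host: 192.XXX.1.11
transport.tcp.port: 9301
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.XXX.1.10:9300"]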
The most probable problem you ran into is that port 9300 is used by another application, or the master node is not started on port 9300, so the nodes can't communicate with each other.
When you start 2 ES nodes to build up a cluster, one node must be elected as the master node. The master node will have a communication address: hostIP:port. For example:
[2014-01-27 15:15:44,389][INFO ][cluster.service ] [Vakume] new_master [Vakume][FRtqGG4xSKGbM_Yw9_oBLg][inet[/10.0.0.10:9302]], reason: zen-disco-join (elected_as_master)
When you need to start another node to build up the cluster, you can specify the master's IP:port. For the example above, you would set:
discovery.zen.ping.unicast.hosts: ["10.0.0.10:9302"]
Then the second node can find the master node and join the cluster.

Multiple IP for One Host

I am setting up a grid-enabled cluster. I plan to assign 2 IPs to my head node: one for local connections (the LAN for distributing jobs to compute nodes) and one public (the internet, for user access). So my /etc/hosts file looks something like this:
111.111.111.111 myserver.whatever.com myserver #for public IP
11.11.11.11 myserver.whatever.com myserver #for local LAN
22.22.22.22 computenode01
33.33.33.33 computenode03
My concern here is: will the hostname of myserver get messed up, since it is mapped to two IPs?
I fear the system will always choose the first entry (111.111.111.111) when you resolve the "myserver" address and will simply ignore the second entry.
Choose different hostnames for each entry, e.g. myserver.local and myserver.remote.
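A sketch of what that could look like, keeping the short name on the LAN entry so that local resolution of myserver uses the LAN IP (the names myserver.remote and myserver.local are just examples):
111.111.111.111 myserver.whatever.com myserver.remote   #for public IP
11.11.11.11     myserver.local myserver                 #for local LAN
22.22.22.22     computenode01
33.33.33.33     computenode03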
