Problem with elasticsearch tribe node not discovering index - elasticsearch

I have a number of clusters running elasticsearch 5.2 and a tribe cluster for cross-cluster search which is all hosted on GCP.
I also have an alias setup on the one index on one of the clusters.
In the process of making changes to the masters of the clusters, I updated the tribe config on one of the tribe nodes and restarted the elasticsearch service. The new config looks like this:
cluster.name: name of the cluster
network.host: 0.0.0.0
cloud:
  gce:
    project_id: Project ID
    zone: [zones]
discovery:
  type: gce
  gce:
    tags: network-tag
tribe:
  blocks:
    write: true
    metadata: true
  cluster_1:
    cluster.name: cluster_1_name
    discovery.zen.ping.unicast.hosts: ["new_master_1.1", "new_master_1.2"]
  cluster_2:
    cluster.name: cluster_2_name
    discovery.zen.ping.unicast.hosts: ["new_master_2.1", "new_master_2.2"]
action.search.shard_count.limit: XXXX
Now when I run curl localhost:9200/alias/_search it returns an index_not_found_exception, but when I run
curl localhost:9200/index_name/_search I get the expected output.
The old tribe node, whose config I still haven't updated, works fine with both curl commands, which is intriguing. The only difference between the configs is the set of masters for the clusters.
So I don't know how to fix it. I appreciate all the help I can get in solving this issue.
Thanks a lot.
Edit:
When I inspect the tribe log, the index is discovered for the cluster it does not belong to, but not for the cluster that owns it. I'm not sure how this helps in identifying and resolving the issue.
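For anyone debugging something similar: one way to see which indices and aliases the tribe node actually picked up (a hedged diagnostic, not from the original post) is to query it directly:
curl localhost:9200/_cat/indices?v
curl localhost:9200/_alias?pretty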

I managed to fix it: there was a conflict of indices. Two clusters had the same index name, so once one of them added it, the tribe node failed to add the second one. All I had to do was set the on_conflict setting to prefer one cluster.
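For reference, the setting involved is tribe.on_conflict; a minimal sketch of the fix, assuming the tribe node should prefer cluster_1's copy of any conflicting index name:
tribe:
  on_conflict: prefer_cluster_1
The documented values are any (the default), drop, and prefer_{cluster_name}.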

Related

ElasticSearch start up error - the default discovery settings are unsuitable for production use;

I have tried giving the following configuration in the elasticsearch.yml file:
network.host: aa.bbb.ccc.dd (that being my IPv4 address)
http.port: 9200
When I try to run elasticsearch.bat on my Windows machine, the response is as follows:
the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
I am really not quite sure what to configure for the cluster initialization. The placeholder values in the file are discovery.seed_hosts: ["host1", "host2"] and cluster.initial_master_nodes: ["node-1", "node-2"].
In short, if you are running Elasticsearch locally (single node), or with just a single node in the cloud, then use the config below in your elasticsearch.yml to avoid the production check and make it work (more info about this config in this SO answer):
discovery.type: single-node
This is the configuration I used, since I had only one machine running the Elasticsearch db (1 node only):
node.data : true
network.host : 0.0.0.0
discovery.seed_hosts : []
cluster.initial_master_nodes : []
Elasticsearch 7 requires information to form a cluster. This is provided by the following two properties in elasticsearch.yml:
cluster.initial_master_nodes: provides the initial set of nodes whose votes will be considered in the master election process.
discovery.seed_hosts: provides the set of master-eligible nodes; it should contain the names of all master-eligible nodes.
So, for example, if you are forming a cluster with three master-eligible nodes n0, n1, and n2, your config will look something like this:
cluster.initial_master_nodes: ["n0", "n1", "n2"]
discovery.seed_hosts: ["n0", "n1", "n2"]
Note: cluster.initial_master_nodes is used only once, during the very first formation (bootstrapping) of the cluster.
For more detailed information read this guide.
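Putting those two properties together, a per-node sketch of elasticsearch.yml for that three-node cluster might look like this (the cluster name and address are illustrative, not from the original answer):
# on node n0; repeat on n1 and n2 with their own node.name and network.host
cluster.name: my-cluster
node.name: n0
network.host: 10.0.0.10
discovery.seed_hosts: ["n0", "n1", "n2"]
cluster.initial_master_nodes: ["n0", "n1", "n2"]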
I also faced the same issue with Elasticsearch 7.6.2. To solve the above-mentioned problem, you just need to either add discovery.seed_hosts: 127.0.0.1:9300 or set discovery.type: single-node in elasticsearch.yml to avoid the production-use error.
Click here for discovery and cluster formation settings.
I have provided the detailed answer here.
I am adding my answer from a Docker container perspective. I initially tried running 3 Elasticsearch nodes in the same cluster, then tried running only 1 and faced the same issue. To resolve it, I deleted the Docker volumes. Please note, my Docker Elasticsearch nodes held no data, so deleting the volumes caused no data loss.
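A rough sketch of that cleanup, assuming the containers were started with docker-compose and the volumes hold nothing you need (deleting them destroys the data):
docker-compose down          # stop the elasticsearch containers
docker volume ls             # find the volumes the nodes were using
docker volume rm <volume_name>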
https://discuss.elastic.co/t/how-my-config-file-should-be-on-publish-mode-with-a-single-node/189034

Datastax - Cassandra Amazon EC2 Multiregion Setup - Cluster with 3 node

I have launched 3 Amazon EC2 instances and set up DataStax Cassandra as follows.
1. Region - US EAST:
cassandra.yaml configuration:
a. listen_address as the private IP of this instance
b. broadcast_address as the public IP of this instance
c. seeds as 50.XX.XX.X1, 50.XX.XX.X2 (public IPs of node1 and node2)
cassandra-rackdc.properties configuration:
dc=DC1
rack=RAC1
dc_suffix=US_EAST_1
2. Region - US WEST:
I followed the same procedure as above.
3. Region - EU IRELAND: (same procedure again)
The result of the above configuration: all the nodes work fine individually, but when I run
$nodetool status
on each of the three nodes, it lists only the local node.
What I tried to achieve:
1. Launch 3 Cassandra nodes in three different regions, say US-EAST, US-WEST, EU-IRELAND,
with the following configuration or methodology:
a. Ec2MultiRegionSnitch
b. Replication strategy: SimpleStrategy
c. Replication factor: 3
d. Read & write consistency level: QUORUM
I wish to attain only one thing: if any two of the regions are down, or any two of the nodes are down, I can survive with the remaining one node.
My questions here are:
Where did I make a mistake? And how do I attain my requirements?
Any help or inputs are much appreciated.
Thanks.
This is what worked for me with Cassandra 3.0:
endpoint_snitch: Ec2MultiRegionSnitch
listen_address: <leave_blank>
broadcast_address: <public_ip_of_server>
rpc_address: 0.0.0.0
broadcast_rpc_address: <public_ip_of_server>
- seeds: "one_ip_from_other_DC"
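For context, that seeds entry lives inside the seed_provider block of cassandra.yaml, which in a stock install looks like this:
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "one_ip_from_other_DC"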
Finally, I found the resolution to my issue. I am using SimpleStrategy as the replication strategy, so I do not need to configure cassandra-rackdc.properties.
Once I removed the cassandra-rackdc.properties file from all nodes, everything worked as expected.
Thanks

How to require one pod per minion/kubelet when configuring a replication controller?

I have 4 nodes (kubelets) configured with a label role=nginx
master ~ # kubectl get node
NAME LABELS STATUS
10.1.141.34 kubernetes.io/hostname=10.1.141.34,role=nginx Ready
10.1.141.40 kubernetes.io/hostname=10.1.141.40,role=nginx Ready
10.1.141.42 kubernetes.io/hostname=10.1.141.42,role=nginx Ready
10.1.141.43 kubernetes.io/hostname=10.1.141.43,role=nginx Ready
I modified the replication controller and added these lines:
spec:
  replicas: 4
  selector:
    role: nginx
But when I fire it up I get 2 pods on one host. What I want is 1 pod on each host. What am I missing?
Prior to DaemonSet being available, you can also specify that your pod uses a host port and set the number of replicas in your replication controller to something greater than your number of nodes. The host port constraint will allow only one such pod per host.
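A minimal sketch of that approach (image and port are illustrative): the hostPort below means the scheduler can place at most one of these pods on any given node, because a second pod could not bind the same host port.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 4
  selector:
    role: nginx
  template:
    metadata:
      labels:
        role: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 80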
I was able to achieve this by modifying the labels as follows:
master ~ # kubectl get nodes -o wide
NAME LABELS STATUS
10.1.141.34 kubernetes.io/hostname=10.1.141.34,role=nginx1 Ready
10.1.141.40 kubernetes.io/hostname=10.1.141.40,role=nginx2 Ready
10.1.141.42 kubernetes.io/hostname=10.1.141.42,role=nginx3 Ready
10.1.141.43 kubernetes.io/hostname=10.1.141.43,role=nginx4 Ready
I then created 4 nginx replication controllers, each referencing one of the nginx{1|2|3|4} roles and labels.
A replication controller doesn't guarantee one pod per node, as the scheduler will find the best fit for each pod. I think what you want is the DaemonSet controller, which is still under development. Your workaround posted above would work too.
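For readers landing here later: once DaemonSet shipped, the equivalent would look roughly like this (apps/v1 API from later Kubernetes releases, not available when this answer was written):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      role: nginx
  template:
    metadata:
      labels:
        role: nginx
    spec:
      containers:
      - name: nginx
        image: nginx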

Elasticsearch clustering on multiple machines - master election

I am trying to implement an Elasticsearch cluster. I have 2 machines with 2 nodes each, and the following configuration in the yml file. I have given each node a unique name, and all of them are master and data nodes.
cluster.name: elasticsearch
node.master: true
node.data: true
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["machine1", "machine2"]
discovery.zen.minimum_master_nodes: 3
The four nodes work correctly in the cluster. I would like to bring one of the nodes down and have the other 3 keep running. But when I bring down one of the first three nodes, the cluster goes down and I get this error:
{
  "error": "ClusterBlockException[blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];]",
  "status": 503
}
If I bring down the last node that joined the cluster, the cluster works fine. My understanding is that if I have 4 masters and any one of them goes down, the other three should keep the cluster running. Is there an issue with my configuration?
If you are running two nodes on one machine, it is probably better to add ports to your configuration:
discovery.zen.ping.unicast.hosts: ["machine1:9300", "machine2:9300", "machine2:9301"]
Then also configure the ports yourself so you know which node has which port:
transport.tcp.port: 9300
http.port: 9200
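For example, the second node on machine2 could be pinned to its own ports so it matches the machine2:9301 entry above (a sketch; the node name is illustrative):
# elasticsearch.yml for the second node on machine2
node.name: machine2-node2
transport.tcp.port: 9301
http.port: 9201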

ElasticSearch... Not So Elastic?

I have used the following method to build Elasticsearch clusters in the cloud. It works 30%-50% of the time.
I start with 2 CentOS nodes on 2 servers in DigitalOcean's cloud. I then install ES and set the same cluster name in each config/elasticsearch.yml. Then I also set (uncomment):
discovery.zen.ping.multicast.enabled: false
as well as set and uncomment:
discovery.zen.ping.unicast.hosts: ['192.168.10.1:9300', '192.168.10.2:9300']
in each of the 2 servers. SO Reference here
Then, to give ES the benefit of the doubt, I run service iptables stop and restart the ES service on each node. Sometimes the servers see each other and I get a """cluster""" out of Elasticsearch; sometimes, if not most of the time, the servers don't see each other, even though multicast is disabled and the unicast hosts array contains specific IP addresses that have NO firewall on and point to each other.
WHY, ES community? Why does a hello-world equivalent of Elasticsearch prove to be inelastic, to say the least? (Let me openly and readily admit this MUST be user error/idiocy, else no one would use this technology.)
At first I was trying to build a simple 4-node cluster, but goodness gracious, the issues that came along with that before indexing a single document were ridiculous. I had a 0% success rate. Some nodes saw some other nodes (via head and paramedic) while others had 'dangling indices' and 'unassigned indexes'. When I googled this I found tons of relevant/similar issues and no workable answers.
Can someone send me an example of how to build an elastic search cluster, that works?
#Ben_Lim's answer worked. Did everyone who needs this as a resource get that?
I took 1 node (this is not for prod), Server1, and changed the following settings in /config/elasticsearch.yml:
uncomment node.master: true
uncomment and set network.host: 192.XXX.1.10
uncomment transport.tcp.port: 9300
uncomment discovery.zen.ping.multicast.enabled: false
uncomment and set discovery.zen.ping.unicast.hosts: ["192.XXX.1.10:9300"]
That sets up the master. Then, in each subsequent node (example above) that wants to join --
uncomment node.master: false
uncomment and set network.host: 192.XXX.1.11
uncomment transport.tcp.port: 9301
uncomment discovery.zen.ping.multicast.enabled: false
uncomment and set discovery.zen.ping.unicast.hosts: ["192.XXX.1.10:9300"]
Obviously, make sure all nodes have the same cluster name and your iptables firewalls etc. are set up right.
NOTE AGAIN -- this is not for prod, but a way to start testing ES in the cloud; you can tighten up the screws from here.
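Putting those uncommented lines together, a joining node's elasticsearch.yml would look roughly like this (the cluster name is illustrative; the IPs follow the example above):
cluster.name: mycluster
node.master: false
network.host: 192.XXX.1.11
transport.tcp.port: 9301
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.XXX.1.10:9300"]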
The most probable problem you've met is that port 9300 is already used by another application, or the master node is not listening on port 9300, so the nodes can't communicate with each other.
When you start 2 ES nodes to build a cluster, one node must be elected master. The master node will have a communication address hostIP:port. For example:
[2014-01-27 15:15:44,389][INFO ][cluster.service ] [Vakume] new_master [Vakume][FRtqGG4xSKGbM_Yw9_oBLg][inet[/10.0.0.10:9302]], reason: zen-disco-join (elected_as_master)
When you start another node to join the cluster, try specifying the master's IP:port; for the example above you would set:
discovery.zen.ping.unicast.hosts: ["10.0.0.10:9302"]
Then the second node can find the master node and join the cluster.
