Indexed data not getting shared in Elasticsearch Cluster 5.2 - elasticsearch

I installed Elasticsearch as an RPM service.
I have 5 cluster nodes: one is a tribe node, three are master and data nodes, and the last one is a data node. I pointed Logstash to that data node and indexing succeeded. But when I checked the other boxes, the indexed data is not shared among the cluster nodes.
When I check the URL xxx.xx.xx:9200/, I see a new field called cluster_uuid, which I didn't see in my previous version of ELK (2.3.1). I suspect the problem is that each node has a different cluster_uuid.
node 1:
{
  "name" : "node2",
  "cluster_name" : "cluster_sft",
  "cluster_uuid" : "wwbCT_wJRryT6ko-g_lm7A",
  "version" : {
    "number" : "5.2.1",
    "build_hash" : "db0d481",
    "build_date" : "2017-02-09T22:05:32.386Z",
    "build_snapshot" : false,
    "lucene_version" : "6.4.1"
  },
  "tagline" : "You Know, for Search"
}
node 2:
{
  "name" : "node3",
  "cluster_name" : "cluster_sft",
  "cluster_uuid" : "qq3YUgZGR9O7nBXTLe8SUg",
  "version" : {
    "number" : "5.2.1",
    "build_hash" : "db0d481",
    "build_date" : "2017-02-09T22:05:32.386Z",
    "build_snapshot" : false,
    "lucene_version" : "6.4.1"
  },
  "tagline" : "You Know, for Search"
}
Here is my elasticsearch.yml config:
cluster.name: cluster_sft
node.name: node1
node.master: true
node.data: true
bootstrap.memory_lock: true
http.host: xx.xx.xxx.xxx
network.host: 127.0.0.1
http.port: 9200
discovery.zen.ping.unicast.hosts: ["xx.xx.xxx.xxx:9200","xx.xx.xxx.xxx:9200","xx.xx.xxx.xxx:9200","xx.xx.xxx.xxx:9200"]
discovery.zen.minimum_master_nodes: 1
I am not sure whether I am missing any attributes or going wrong somewhere in the configuration.
Thanks,
Rajeshkumar

Related

Elasticsearch issue with "cluster_uuid" : "_na_" and license is null

After installing Elasticsearch 7.12.1 with this config:
network.host: 127.0.0.1
http.port: 9200
discovery.seed_hosts: ["127.0.0.1", "[::1]"]
the main page of Elasticsearch shows this JSON:
{
  "name" : "master",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "_na_",
  "version" : {
    "number" : "7.12.1",
    "build_flavor" : "default",
    "build_type" : "zip",
    "build_hash" : "3186837139b9c6b6d23c3200870651f10d3343b7",
    "build_date" : "2021-04-20T20:56:39.040728659Z",
    "build_snapshot" : false,
    "lucene_version" : "8.8.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
and in X-Pack the license is null.
You should add
node.name: master
cluster.initial_master_nodes: ["master"]
to your elasticsearch.yml file and restart the Elasticsearch service. A cluster_uuid of "_na_" means the node has not formed or joined a cluster yet; on a fresh 7.x install, cluster.initial_master_nodes is needed to bootstrap the first master election, after which the cluster UUID is assigned and the X-Pack license becomes available.
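For reference, a minimal single-node elasticsearch.yml combining the settings above with the bootstrap lines (the node name "master" is just the example used here) could look like:
network.host: 127.0.0.1
http.port: 9200
discovery.seed_hosts: ["127.0.0.1", "[::1]"]
node.name: master
cluster.initial_master_nodes: ["master"]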

Cross-cluster Elasticsearch on Kubernetes

I have 2 Elasticsearch clusters on 2 different Kubernetes VMs. I tried to connect them with cross-cluster search, but it's not working. I've added details below; can someone assist and tell me what I did wrong or missed?
I tried to connect from one cluster to the other as below:
GET _cluster/settings
{
  "persistent" : {
    "cluster" : {
      "remote" : {
        "cluster_three" : {
          "mode" : "proxy",
          "proxy_address" : "122.22.111.222:30005"
        },
        "cluster_two" : {
          "mode" : "sniff",
          "skip_unavailable" : "false",
          "transport" : {
            "compress" : "true"
          },
          "seeds" : [
            "122.22.222.182:30005"
          ]
        },
        "cluster_one" : {
          "seeds" : [
            "127.0.0.1:9200"
          ],
          "transport" : {
            "ping_schedule" : "30s"
          }
        }
      }
    }
  },
  "transient" : { }
}
I tried to search on cluster two and I get the following error:
{"statusCode":502,"error":"Bad Gateway","message":"Client request timeout"}
but when I do a curl from Elasticsearch to cluster_two I get this:
curl 122.22.222.182:30005
{
  "name" : "elasticsearch-client-7dcc49ddsdsd4-ljwasdpl",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "bOkaIrcFTgetsadaaY114N4a1EQ",
  "version" : {
    "number" : "7.10.2",
    "build_flavor" : "oss",
    "build_type" : "docker",
    "build_hash" : "747e1cc71def077253878a59143c1f785asdasafa92b9",
    "build_date" : "2021-01-13T00:42:12.435326Z",
    "build_snapshot" : false,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
This is my svc configured on Kubernetes for cluster_two:
NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
elasticsearch-client      NodePort    10.111.11.28   <none>        9200:30005/TCP   27m
elasticsearch-discovery   ClusterIP   10.111.11.11   <none>        9300/TCP         27m
Cross-cluster connections go over the Elasticsearch transport layer on port 9300, not the HTTP port 9200; when you run curl, it goes as a client (HTTP) request over port 30005.
Please check that 9300 is reachable for cross-cluster connections. Since your elasticsearch-discovery service is of type ClusterIP, you may have to change its type to expose it outside of Kubernetes using NodePort or LoadBalancer as required (a Service sketch follows at the end of this answer).
For example:
# From cluster 1, we’ll define how the cluster-2 can be accessed
PUT /_cluster/settings
{
  "persistent" : {
    "cluster" : {
      "remote" : {
        "us-cluster" : {
          "seeds" : [
            "127.0.0.1:9300"
          ]
        }
      }
    }
  }
}
You can also look into: https://www.elastic.co/blog/cross-datacenter-replication-with-elasticsearch-cross-cluster-replication
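As a minimal sketch of exposing the transport port with a NodePort Service: the pod label app: elasticsearch-client and the node port 30006 below are assumptions, so adjust both to your deployment. The remote cluster would then point its proxy_address or seeds at <node-ip>:30006 rather than the HTTP port, and GET _remote/info on the calling cluster shows whether the remote is actually connected.
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-transport
spec:
  type: NodePort
  selector:
    app: elasticsearch-client   # assumed label, match your Elasticsearch pods
  ports:
  - name: transport
    port: 9300
    targetPort: 9300
    nodePort: 30006             # assumed free port in the NodePort range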

Launching Elastic Kibana - internal server 500 error - [illegal_argument_exception] application privileges must refer to at least one resource

I launched Kibana in my Elastic Cloud account and see this message. Why can I not log in to my Kibana account? I restarted my deployment and see the same error.
If this is relevant, I should add that there is an issue with my Elasticsearch deployment: it is apparently "unhealthy".
However, when I query the Elasticsearch instance directly, I get an apparently healthy response:
{
  "name" : "instance-0000000003",
  "cluster_name" : "d972<hidden for privacy>665aee2",
  "cluster_uuid" : "9IOP<hidden for privacy>iflw",
  "version" : {
    "number" : "7.5.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "8bec5<hidden for privacy>580cd1afcdf",
    "build_date" : "2020-01-15T12:11:52.313576Z",
    "build_snapshot" : false,
    "lucene_version" : "8.3.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
If you stop the Kibana server and restart it, the default space document will be created.
You get this error if the .kibana index gets deleted while Kibana is running. It seems something is removing your .kibana index; in my case it was Curator. Thus, I added these two lines to Curator's action_file.yml under actions/1/filters:
---
actions:
  1:
    filters:
    - filtertype: kibana
      exclude: True
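For context, here is a sketch of how those two lines might sit inside a complete action file; the delete_indices action and the pattern/age filters are just illustrative assumptions, only the kibana/exclude filter comes from the answer above:
---
actions:
  1:
    action: delete_indices
    description: "Delete old logstash indices, but never touch the .kibana index"
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 30
    - filtertype: kibana
      exclude: True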

X-Pack 401: Logstash can't connect to Elasticsearch

I changed the JVM heap size and, after restarting Elasticsearch, Logstash can't connect to it.
logstash.yml
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: ["es:9200"]
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: pass
Logs from Logstash:
[WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://es:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://es:9200/'"}
curl from Logstash to ES with the same credentials:
logstash: curl -ulogstash_system es:9200
Enter host password for user 'logstash_system':
{
  "name" : "elasticsearch",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "",
  "version" : {
    "number" : "7.5.0",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "",
    "build_date" : "2019-11-26T01:06:52.518245Z",
    "build_snapshot" : false,
    "lucene_version" : "8.3.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Elasticsearch cluster on AWS EC2: node fails to join cluster

I installed oracle-jdk8 and Elasticsearch on an EC2 instance and created an AMI out of it. In the next copy of the EC2 machine I just changed the node name in elasticsearch.yml.
However, both nodes run fine when started individually. [NOTE: the node id appears to be the same.] But when run simultaneously, the one started later fails with the following in the logs:
[2018-08-07T16:35:06,260][INFO ][o.e.d.z.ZenDiscovery ] [node-1]
failed to send join request to master
[{node-2}{uQHBhDuxTeWOgmZHsuaZmA}{akWOcJ47SZKpR_EpA2lpyg}{10.127.114.212}{10.127.114.212:9300}{aws_availability_zone=us-east-1c, ml.machine_memory=66718932992, ml.max_open_jobs=20,
xpack.installed=true, ml.enabled=true}], reason
[RemoteTransportException[[node-2][10.127.114.212:9300][internal:discovery/zen/join]];
nested: IllegalArgumentException[can't add node
{node-1}{uQHBhDuxTeWOgmZHsuaZmA}{Ba1r1GoMSZOMeIWVKtPD2Q}{10.127.114.194}{10.127.114.194:9300}{aws_availability_zone=us-east-1c, ml.machine_memory=66716696576, ml.max_open_jobs=20,
xpack.installed=true, ml.enabled=true}, found existing node
{node-2}{uQHBhDuxTeWOgmZHsuaZmA}{akWOcJ47SZKpR_EpA2lpyg}{10.127.114.212}{10.127.114.212:9300}{aws_availability_zone=us-east-1c, ml.machine_memory=66718932992, xpack.installed=true,
ml.max_open_jobs=20, ml.enabled=true} with the same id but is a
different node instance];
My elasticsearch.yml:
cluster.name: elk
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
network.publish_host: _ec2:privateIp_
transport.publish_host: _ec2:privateIp_
discovery.zen.hosts_provider: ec2
discovery.ec2.tag.ElasticSearch: elk-tag
cloud.node.auto_attributes: true
cluster.routing.allocation.awareness.attributes: aws_availability_zone
Output from _nodes endpoint:
//----Output when node-1 is run individually/at first----
{
  "_nodes" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0
  },
  "cluster_name" : "elk",
  "nodes" : {
    "uQHBhDuxTeWOgmZHsuaZmA" : {
      "name" : "node-1",
      "transport_address" : "10.127.114.194:9300",
      "host" : "10.127.114.194",
      "ip" : "10.127.114.194",
      "version" : "6.3.2",
      "build_flavor" : "default",
      "build_type" : "rpm",
      "build_hash" : "053779d",
      "roles" : [
        "master",
        "data",
        "ingest"
      ],
      "attributes" : {
        "aws_availability_zone" : "us-east-1c",
        "ml.machine_memory" : "66716696576",
        "xpack.installed" : "true",
        "ml.max_open_jobs" : "20",
        "ml.enabled" : "true"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 3110,
        "mlockall" : true
      }
    }
  }
}
//----Output when node-2 is run individually/at first----
{
  "_nodes" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0
  },
  "cluster_name" : "elk",
  "nodes" : {
    "uQHBhDuxTeWOgmZHsuaZmA" : {
      "name" : "node-2",
      "transport_address" : "10.127.114.212:9300",
      "host" : "10.127.114.212",
      "ip" : "10.127.114.212",
      "version" : "6.3.2",
      "build_flavor" : "default",
      "build_type" : "rpm",
      "build_hash" : "053779d",
      "roles" : [
        "master",
        "data",
        "ingest"
      ],
      "attributes" : {
        "aws_availability_zone" : "us-east-1c",
        "ml.machine_memory" : "66718932992",
        "xpack.installed" : "true",
        "ml.max_open_jobs" : "20",
        "ml.enabled" : "true"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 4869,
        "mlockall" : true
      }
    }
  }
}
Solved this by deleting the copied node state with rm -rf /var/lib/elasticsearch/nodes/ on every instance and restarting Elasticsearch. Because the AMI included the original node's data directory, both machines came up with the same node ID, which is why the second node could not join.
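On each instance the steps look roughly like this, assuming the default RPM data path and systemd service name; note that deleting the nodes directory wipes that node's local data, so only do it on nodes whose data you can recreate or afford to lose:
sudo systemctl stop elasticsearch
sudo rm -rf /var/lib/elasticsearch/nodes/
sudo systemctl start elasticsearch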