cross cluster elasticsearch on kubernetes - elasticsearch

I have 2 Elasticsearch clusters on 2 different Kubernetes VMs. I tried to connect them with cross-cluster search, but it's not working. I've added details below; can someone assist and tell me what I did wrong or missed?
I tried to connect from one Elasticsearch cluster to the other as below:
GET _cluster/settings
{
"persistent" : {
"cluster" : {
"remote" : {
"cluster_three" : {
"mode" : "proxy",
"proxy_address" : "122.22.111.222:30005"
},
"cluster_two" : {
"mode" : "sniff",
"skip_unavailable" : "false",
"transport" : {
"compress" : "true"
},
"seeds" : [
"122.22.222.182:30005"
]
},
"cluster_one" : {
"seeds" : [
"127.0.0.1:9200"
],
"transport" : {
"ping_schedule" : "30s"
}
}
}
}
},
"transient" : { }
}
I tried to search on cluster_two and I got the following error:
{"statusCode":502,"error":"Bad Gateway","message":"Client request timeout"}
But when I curl from Elasticsearch to cluster_two I get this:
curl 122.22.222.182:30005
{
"name" : "elasticsearch-client-7dcc49ddsdsd4-ljwasdpl",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "bOkaIrcFTgetsadaaY114N4a1EQ",
"version" : {
"number" : "7.10.2",
"build_flavor" : "oss",
"build_type" : "docker",
"build_hash" : "747e1cc71def077253878a59143c1f785asdasafa92b9",
"build_date" : "2021-01-13T00:42:12.435326Z",
"build_snapshot" : false,
"lucene_version" : "8.7.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
This is my svc configured on Kubernetes for cluster_two:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch-client NodePort 10.111.11.28 <none> 9200:30005/TCP 27m
elasticsearch-discovery ClusterIP 10.111.11.11 <none> 9300/TCP 27m

Elasticsearch cross-cluster communication goes over the transport port 9300, not 9200; your curl is an HTTP client request, which is what NodePort 30005 (mapped to 9200) serves.
Please check that 9300 is reachable for cross-cluster connections. Since your elasticsearch-discovery service is running as ClusterIP, you may have to change its type to NodePort or LoadBalancer to expose it outside of K8s, as per your requirements.
For example:
# From cluster 1, we'll define how cluster 2 can be accessed
PUT /_cluster/settings
{
"persistent" : {
"cluster" : {
"remote" : {
"us-cluster" : {
"seeds" : [
"127.0.0.1:9300"
]
}
}
}
}
}
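On the Kubernetes side, a minimal sketch of exposing the transport port with a NodePort service could look like this (the service name, selector, and nodePort below are assumptions; adjust them to your deployment):
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-transport   # hypothetical name
spec:
  type: NodePort
  selector:
    app: elasticsearch            # must match the labels on your Elasticsearch pods
  ports:
    - name: transport
      port: 9300
      targetPort: 9300
      nodePort: 30006             # example port; use whatever you then put in the remote seeds
The seeds (or proxy_address) of the remote cluster should then point at <node-ip>:30006, not at the 9200 NodePort.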
You can also look into: https://www.elastic.co/blog/cross-datacenter-replication-with-elasticsearch-cross-cluster-replication

Related

elasticsearch - moving from multiple servers to one server

I have a cluster of 5 servers for elasticsearch, all with the same version of elasticsearch.
I need to move all data from servers 2, 3, 4, 5 to server 1.
How can I do it?
How can I know which server has data at all?
After changing _cluster/settings with:
PUT _cluster/settings
{
"persistent" : {
"cluster.routing.allocation.require._host" : "server1"
}
}
For curl -GET http://localhost:9200/_cat/allocation?v I get the following:
shards disk.indices disk.used disk.avail disk.total disk.percent host ip node
6 54.5gb 170.1gb 1.9tb 2.1tb 7 *.*.*.* *.*.*.* node-5
6 50.4gb 167.4gb 1.9tb 2.1tb 7 *.*.*.* *.*.*.* node-3
6 22.6gb 139.8gb 2tb 2.1tb 6 *.*.*.* *.*.*.* node-2
6 49.8gb 166.6gb 1.9tb 2.1tb 7 *.*.*.* *.*.*.* node-4
6 54.8gb 172.1gb 1.9tb 2.1tb 7 *.*.*.* *.*.*.* node-1
And for GET _cluster/settings?include_defaults I get the following:
#! Deprecation: [node.max_local_storage_nodes] setting was deprecated in Elasticsearch and will be removed in a future release!
{
"persistent" : {
"cluster" : {
"routing" : {
"allocation" : {
"require" : {
"_host" : "server1"
}
}
}
}
},
"transient" : { },
"defaults" : {
"cluster" : {
"max_voting_config_exclusions" : "10",
"auto_shrink_voting_configuration" : "true",
"election" : {
"duration" : "500ms",
"initial_timeout" : "100ms",
"max_timeout" : "10s",
"back_off_time" : "100ms",
"strategy" : "supports_voting_only"
},
"no_master_block" : "write",
"persistent_tasks" : {
"allocation" : {
"enable" : "all",
"recheck_interval" : "30s"
}
},
"blocks" : {
"read_only_allow_delete" : "false",
"read_only" : "false"
},
"remote" : {
"node" : {
"attr" : ""
},
"initial_connect_timeout" : "30s",
"connect" : "true",
"connections_per_cluster" : "3"
},
"follower_lag" : {
"timeout" : "90000ms"
},
"routing" : {
"use_adaptive_replica_selection" : "true",
"rebalance" : {
"enable" : "all"
},
"allocation" : {
"node_concurrent_incoming_recoveries" : "2",
"node_initial_primaries_recoveries" : "4",
"same_shard" : {
"host" : "false"
},
"total_shards_per_node" : "-1",
"shard_state" : {
"reroute" : {
"priority" : "NORMAL"
}
},
"type" : "balanced",
"disk" : {
"threshold_enabled" : "true",
"watermark" : {
"low" : "85%",
"flood_stage" : "95%",
"high" : "90%"
},
"include_relocations" : "true",
"reroute_interval" : "60s"
},
"awareness" : {
"attributes" : [ ]
},
"balance" : {
"index" : "0.55",
"threshold" : "1.0",
"shard" : "0.45"
},
"enable" : "all",
"node_concurrent_outgoing_recoveries" : "2",
"allow_rebalance" : "indices_all_active",
"cluster_concurrent_rebalance" : "2",
"node_concurrent_recoveries" : "2"
}
},
...
"nodes" : {
"reconnect_interval" : "10s"
},
"service" : {
"slow_master_task_logging_threshold" : "10s",
"slow_task_logging_threshold" : "30s"
},
...
"name" : "cluster01",
...
"max_shards_per_node" : "1000",
"initial_master_nodes" : [ ],
"info" : {
"update" : {
"interval" : "30s",
"timeout" : "15s"
}
}
},
...
You can use shard allocation filtering to move all your data to server 1.
Simply run this:
PUT _cluster/settings
{
"persistent" : {
"cluster.routing.allocation.require._name" : "node-1",
"cluster.routing.allocation.exclude._name" : "node-2,node-3,node-4,node-5"
}
}
Instead of _name you can also use _ip or _host depending on what is more practical for you.
After running this command, all primary shards will migrate to server1 (the replicas will be unassigned). You just need to make sure that server1 has enough storage space to store all the primary shards.
If you want to get rid of the unassigned replicas (and get back to green state), simply run this:
PUT _all/_settings
{
"index" : {
"number_of_replicas" : 0
}
}
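To verify the move, you can reuse the same cat APIs as in the question, for example (a quick check, not part of the original procedure):
GET _cat/allocation?v
GET _cat/recovery?v&active_only=true
Once _cat/allocation shows all shards on node-1 and none on the other nodes, the data has been relocated.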

Elasticsearch remote connection from Spring Boot application

I have a Spring Boot application. If I use my local Elasticsearch I am able to connect, but if I use the remote Elasticsearch I am not able to connect.
spring:
  data:
    elasticsearch:
      cluster-name: es_psc
      cluster-nodes: 100.84.57.2:9300
If I open http://100.84.57.2:9200/ in the browser, I can see the details:
{
"name" : "i58Q9JC",
"cluster_name" : "es_psc",
"cluster_uuid" : "EKeTJwviQvWeTzbS1h1w4w",
"version" : {
"number" : "6.4.3",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "fe40335",
"build_date" : "2018-10-30T23:17:19.084789Z",
"build_snapshot" : false,
"lucene_version" : "7.4.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
But if I run my Spring Boot application it gives the error below:
Caused by: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{6SiLGoTHTmmiuzBTvweb3A}{100.84.57.2}{100.84.57.2:9300}]
You are using the wrong port in your config file. You should be using port 9200, which is meant for REST communication.
Port 9300 is used for internal communication between Elasticsearch nodes.
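If you move to the REST client instead, a minimal configuration sketch (assuming a Spring Boot 2.x version that auto-configures the high-level REST client; the property names vary between versions) could look like:
spring:
  elasticsearch:
    rest:
      uris: http://100.84.57.2:9200   # REST endpoint (9200), not the transport port (9300)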

X-Pack 401: Logstash can't connect to Elasticsearch

I changed the JVM heap size, and after restarting Elasticsearch, Logstash can't connect to Elasticsearch.
logstash.yml
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: ["es:9200"]
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: pass
Logs from Logstash:
[WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://es:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://es:9200/'"}
curl from Logstash to ES with the same credentials:
logstash: curl -ulogstash_system es:9200
Enter host password for user 'logstash_system':
{
"name" : "elasticsearch",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "",
"version" : {
"number" : "7.5.0",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "",
"build_date" : "2019-11-26T01:06:52.518245Z",
"build_snapshot" : false,
"lucene_version" : "8.3.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}

Elasticsearch cluster on EC2 (AWS) fails to join cluster

I installed oracle-jdk8 and Elasticsearch on an EC2 instance and created an AMI out of it. In the next copy of the EC2 machine I just changed the node name in elasticsearch.yml.
Both nodes run fine if started individually. (Note: the node id appears to be the same.) But if run simultaneously, the one started later fails with the following in the logs:
[2018-08-07T16:35:06,260][INFO ][o.e.d.z.ZenDiscovery ] [node-1]
failed to send join request to master
[{node-2}{uQHBhDuxTeWOgmZHsuaZmA}{akWOcJ47SZKpR_EpA2lpyg}{10.127.114.212}{10.127.114.212:9300}{aws_availability_zone=us-east-1c, ml.machine_memory=66718932992, ml.max_open_jobs=20,
xpack.installed=true, ml.enabled=true}], reason
[RemoteTransportException[[node-2][10.127.114.212:9300][internal:discovery/zen/join]];
nested: IllegalArgumentException[can't add node
{node-1}{uQHBhDuxTeWOgmZHsuaZmA}{Ba1r1GoMSZOMeIWVKtPD2Q}{10.127.114.194}{10.127.114.194:9300}{aws_availability_zone=us-east-1c, ml.machine_memory=66716696576, ml.max_open_jobs=20,
xpack.installed=true, ml.enabled=true}, found existing node
{node-2}{uQHBhDuxTeWOgmZHsuaZmA}{akWOcJ47SZKpR_EpA2lpyg}{10.127.114.212}{10.127.114.212:9300}{aws_availability_zone=us-east-1c, ml.machine_memory=66718932992, xpack.installed=true,
ml.max_open_jobs=20, ml.enabled=true} with the same id but is a
different node instance];
My elasticsearch.yml:
cluster.name: elk
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
network.publish_host: _ec2:privateIp_
transport.publish_host: _ec2:privateIp_
discovery.zen.hosts_provider: ec2
discovery.ec2.tag.ElasticSearch: elk-tag
cloud.node.auto_attributes: true
cluster.routing.allocation.awareness.attributes: aws_availability_zone
Output from _nodes endpoint:
//----Output when node-1 is run individually/at first----
{
"_nodes" : {
"total" : 1,
"successful" : 1,
"failed" : 0
},
"cluster_name" : "elk",
"nodes" : {
"uQHBhDuxTeWOgmZHsuaZmA" : {
"name" : "node-1",
"transport_address" : "10.127.114.194:9300",
"host" : "10.127.114.194",
"ip" : "10.127.114.194",
"version" : "6.3.2",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "053779d",
"roles" : [
"master",
"data",
"ingest"
],
"attributes" : {
"aws_availability_zone" : "us-east-1c",
"ml.machine_memory" : "66716696576",
"xpack.installed" : "true",
"ml.max_open_jobs" : "20",
"ml.enabled" : "true"
},
"process" : {
"refresh_interval_in_millis" : 1000,
"id" : 3110,
"mlockall" : true
}
}
}
}
//----Output when node-2 is run individually/at first----
{
"_nodes" : {
"total" : 1,
"successful" : 1,
"failed" : 0
},
"cluster_name" : "elk",
"nodes" : {
"uQHBhDuxTeWOgmZHsuaZmA" : {
"name" : "node-2",
"transport_address" : "10.127.114.212:9300",
"host" : "10.127.114.212",
"ip" : "10.127.114.212",
"version" : "6.3.2",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "053779d",
"roles" : [
"master",
"data",
"ingest"
],
"attributes" : {
"aws_availability_zone" : "us-east-1c",
"ml.machine_memory" : "66718932992",
"xpack.installed" : "true",
"ml.max_open_jobs" : "20",
"ml.enabled" : "true"
},
"process" : {
"refresh_interval_in_millis" : 1000,
"id" : 4869,
"mlockall" : true
}
}
}
}
Solved this by deleting the nodes directory (rm -rf /var/lib/elasticsearch/nodes/) on every instance and restarting Elasticsearch.
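For an rpm/systemd install with the default data path (as in the elasticsearch.yml above), the reset might look roughly like this. Note that this wipes the node's local data, so it is only appropriate while bootstrapping an empty cluster:
sudo systemctl stop elasticsearch
sudo rm -rf /var/lib/elasticsearch/nodes/   # removes the node id copied from the AMI (and any local data)
sudo systemctl start elasticsearch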

How to fix the Elasticsearch node version in a cluster (2 nodes on different versions)

In my Elasticsearch cluster I have 2 nodes: one is version 1.7.3 and the other is version 1.7.5 (my Elasticsearch 1.7.3 install got corrupted, so I reinstalled with 1.7.5).
How can I upgrade the node from 1.7.3 to 1.7.5?
I referred to: https://www.elastic.co/guide/en/elasticsearch/reference/1.7/setup-upgrade.html#rolling-upgrades.
But I could not work out the procedure for upgrading a node's version.
Kindly help me through this.
My cluster is green, and the nodes are as follows:
{
"cluster_name" : "graylog2",
"nodes" : {
"mC4Osz5IS0OLy2E8QbqZLQ" : {
"name" : "Decay II",
"transport_address" : "inet[/127.0.0.1:9300]",
"host" : "localhost",
"ip" : "127.0.0.1",
"version" : "1.7.5",
"build" : "00f95f4",
"http_address" : "inet[/127.0.0.1:9200]",
"process" : {
"refresh_interval_in_millis" : 1000,
"id" : 957,
"max_file_descriptors" : 65535,
"mlockall" : false
}
},
"qCDvg4XCREmj_iGmbt4v4w" : {
"name" : "graylog2-server",
"transport_address" : "inet[/127.0.0.1:9350]",
"host" : "localhost",
"ip" : "127.0.0.1",
"version" : "1.7.3",
"build" : "05d4530",
"attributes" : {
"client" : "true",
"data" : "false",
"master" : "false"
},
"process" : {
"refresh_interval_in_millis" : 1000,
"id" : 8937,
"max_file_descriptors" : 64000,
"mlockall" : false
}
}
}
}
I suspect the difference in versions is the cause of Graylog refusing the connection to the Elasticsearch cluster.
Please help.
Hmmm, I can think of one of two ways to do this:
Add a 3rd node on 1.7.5 and wait for the cluster to go green. Shut down node 2, upgrade it, and re-introduce it to the cluster.
Shut down the cluster. Upgrade the node to 1.7.5. Start node mC4Osz5IS0OLy2E8QbqZLQ first and then the other one.
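Either way, the rolling-upgrade page linked above suggests disabling shard allocation before taking a node down and re-enabling it afterwards; a minimal sketch using the 1.x setting name:
PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "none"
  }
}
# ... stop the node, upgrade it to 1.7.5, start it again, then re-enable allocation:
PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "all"
  }
}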
