Get ipAllowlist in InnoDB Cluster - mysql-8.0

I am new to InnoDB Cluster, and while setting it up there is a step to set the ipAllowlist:
c.addInstance("cluster_admin@service_name_of_new_node:3306", {ipAllowlist: "node1_service_name,new_node_service_name,...all_existing_service_names_in_this_cluster", recoveryMethod: "clone"});
How do you retrieve the current ipAllowlist?

You can use
cluster = dba.getCluster()
cluster.options()
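As a concrete sketch (the URI, user, and host are placeholders), you can run this from MySQL Shell non-interactively and print the full options JSON; the current ipAllowlist appears in the per-instance options:

```shell
# Connect with MySQL Shell (placeholder credentials/host) and dump the
# cluster options as JSON; look for ipAllowlist in the output.
mysqlsh --uri cluster_admin@node1:3306 --js -e '
  var cluster = dba.getCluster();
  print(JSON.stringify(cluster.options({all: true}), null, 2));
'
```

Passing `{all: true}` includes all Group Replication system variables, not just the ones that differ from defaults.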

Related

Can you run an elasticsearch data node after deleting the data folder?

I am running a three-node Elasticsearch (ELK) cluster. All nodes have the same roles (data, master, etc.). The disk on node 3 that holds the data folder became corrupt, and that data is probably unrecoverable. The other nodes are running normally, and one of them took over the master role.
Will the cluster work normally if I replace the disk and make the empty directory available to Elasticsearch again, or am I risking crashing the whole cluster?
EDIT: As this is not explicitly mentioned in the answer: yes, if you add your node with an empty data folder, the cluster will continue normally as if you had added a new node, but you have to deal with the missing data. In my case, I lost the data as I did not have replicas.
Let me try to explain that in a simple way.
Your data got corrupted on node-3, so if you add that node again, it will not have the older data, i.e. the shards stored on node-3 will remain unavailable to the cluster.
Did you have replica shards configured for the indexes?
What is the current status (yellow/red) of the cluster with node-3 removed?
If a primary shard isn't available, the master node promotes one of the active replicas to become the new primary. If there are currently no active replicas, the status of the cluster will remain red.
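You can check both of those things directly (assuming Elasticsearch is reachable on localhost:9200) with the cluster health and cat-shards APIs:

```shell
# Overall status: green (all shards assigned), yellow (some replicas
# unassigned), red (at least one primary shard unassigned).
curl -s 'http://localhost:9200/_cluster/health?pretty'

# Per-shard view: which shards are UNASSIGNED and why.
curl -s 'http://localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason'
```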

Elasticsearch cluster in AWS ECS

I'm trying to create an Elasticsearch cluster in AWS ECS, but I'm getting the warning "master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes". My elasticsearch.yml and task definition are the same for all nodes. How can I differentiate between the master and the other nodes? Should I have a separate elasticsearch.yml/task definition for the master node?
My elasticsearch.yml:
cluster.name: "xxxxxxxxxxx"
bootstrap.memory_lock: false
network.host: 0.0.0.0
network.publish_host: _ec2:privateIp_
transport.publish_host: _ec2:privateIp_
discovery.seed_providers: ec2
discovery.ec2.tag.project: xxxxxxx-elasticsearch
discovery.ec2.endpoint: ec2.${REGION}.amazonaws.com
s3.client.default.endpoint: s3.${REGION}.amazonaws.com
cloud.node.auto_attributes: true
cluster.routing.allocation.awareness.attributes: aws_availability_zone
xpack.security.enabled: false
I have faced a similar problem as well. First, you need to bootstrap an initial cluster that the other nodes can then join. You can do this with an initial-node configuration in elasticsearch.yml. The solution I am using is one ECS instance running a single Elasticsearch Docker container (Elasticsearch requires a good amount of memory):
cluster.initial_master_nodes: '<<INITIAL_NODE_IPADDRESS>>'
The configuration above bootstraps the cluster, meaning Elasticsearch is ready for nodes to join. As the next step, add the configuration below:
cluster.initial_master_nodes: [<<MASTER_NODE_IPADDRESS>>,<<INITIAL_NODE_IPADDRESS>>]
discovery.seed_hosts: [<<MASTER_NODE_IPADDRESS>>,<<INITIAL_NODE_IPADDRESS>>]
Then you can add as many data nodes as you want; how many depends on how much data you have.
Note: The IP addresses come from different nodes, so use the AWS SSM Parameter Store to store them securely, and use an entrypoint.sh to fetch them and update elasticsearch.yml dynamically when building the Docker images.
I hope this will solve the problem.
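Once the containers are up, you can verify (assuming port 9200 is reachable from where you run this) that a master was actually elected and that every node joined:

```shell
# Which node currently holds the elected-master role.
curl -s 'http://localhost:9200/_cat/master?v'

# All nodes that have joined the cluster, with their roles.
curl -s 'http://localhost:9200/_cat/nodes?v'
```

If `_cat/master` returns nothing, the "master not discovered yet" warning is still in effect and the bootstrap/discovery settings need another look.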

Adding cluster to existing elastic search in elk

Currently I have an existing:
1. Elasticsearch
2. Logstash
3. Kibana
I have existing data on them.
Now I have set up an ELK cluster with 3 master nodes, 5 data nodes, and 3 client nodes.
But I am not sure how I can get the existing data into it.
Is it possible to make the existing ES node a data node and attach it to the cluster? Will that data then get replicated to the other data nodes as well, so that I can afterwards take that node offline?
Option 1
How about just trying with fewer nodes first? It is easy to test whether this is supported: set up one node, feed in some data, then add one more node, configure them as a cluster, and see if the data gets synchronized.
Option 2
Another option is to use an Elasticsearch migration tool like https://github.com/taskrabbit/elasticsearch-dump: basically, you set up a clean cluster and migrate all the data from your old node into it.
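As a sketch of that option (the hosts and index name are placeholders), elasticsearch-dump copies one index at a time; a common pattern is to transfer the mapping first and then the documents:

```shell
# Copy the index mapping, then the documents, from the old node to the
# new cluster (placeholder hosts and index name).
npx elasticdump \
  --input=http://old-node:9200/my_index \
  --output=http://new-cluster:9200/my_index \
  --type=mapping
npx elasticdump \
  --input=http://old-node:9200/my_index \
  --output=http://new-cluster:9200/my_index \
  --type=data
```

Repeat for each index you want to migrate.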

Is there a way to set number of shards and replicas while using JavaEsSpark api in spark

I am using JavaEsSpark to write to Elasticsearch from Spark, and I want to change Elasticsearch's default number of shards at index creation.
Below is the line of code I am using; it creates the index and writes the RDD into it, but how can I set the number of shards?
JavaEsSpark.saveToEs(stringRDD, "index_name/docs");
I have also tried setting it in the SparkConf object, but it is still not working.
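This question has no answer in the thread. One common workaround (an assumption here, not something the connector documents for this snippet) is to create the index with the desired settings yourself before the Spark job runs, since an auto-created index just gets the cluster defaults:

```shell
# Pre-create the index with explicit shard/replica counts before calling
# saveToEs (host, index name, and counts are placeholders).
curl -s -X PUT 'http://localhost:9200/index_name' \
  -H 'Content-Type: application/json' \
  -d '{"settings": {"number_of_shards": 5, "number_of_replicas": 1}}'
```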

If you create a table with 32 shards on one server, when you add more servers will those shards rebalance?

When you have a one-node cluster and you create a table with 32 shards, and then you add, say, 7 more nodes to the cluster, will those shards automatically migrate to the rest of the cluster so that I have 4 shards per node?
Is manual intervention required for this?
How about the replicas created on one node? Do those migrate to other nodes as well?
Nothing will be automatically redistributed. In current versions of RethinkDB, changing the number or distribution of replicas, or changing shard boundaries, will cause a loss of availability, so you have to explicitly ask for it to happen (either in the web UI or with the command-line administration tool).
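For example, the explicit redistribution can be requested with the ReQL `reconfigure` command; a minimal sketch using the Python driver (host, port, and table name are placeholders):

```shell
# Requires the RethinkDB Python driver (pip install rethinkdb).
# reconfigure() reshards/re-replicates the table across available nodes.
python - <<'EOF'
from rethinkdb import r

conn = r.connect('localhost', 28015)  # placeholder host/port
result = r.table('mytable').reconfigure(shards=4, replicas=2).run(conn)
print(result)
EOF
```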
