I work with standalone on-premises clusters. When adding the RepairManager via the cluster config, it appears to be auto-allocated to nodes. Is it possible to select which nodes or node types the RepairManager runs on?
Related
I have provisioned a Rancher 3-node HA cluster with an RKE cluster.yml. Is it possible to add Windows worker nodes to the existing cluster? I was able to add Linux worker nodes to the cluster by adding entries to "cluster.yml" and updating the configuration with "rke up --update-only". Is there any way to add Windows worker nodes to the existing cluster using RKE?
I think this should be done with custom nodes.
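For reference, the Linux-worker path the asker describes is just an extra node entry in cluster.yml (addresses and users below are placeholders):

```yaml
# cluster.yml -- addresses/users are placeholders; only the last entry is new
nodes:
  - address: 192.168.1.10          # existing node
    user: ubuntu
    role: [controlplane, etcd]
  - address: 192.168.1.20          # new Linux worker being added
    user: ubuntu
    role: [worker]
```

After editing, "rke up --update-only" applies the change without touching the existing nodes. Windows hosts don't follow the same flow, which matches the suggestion above to register them as custom nodes through Rancher instead.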
I want to create a failover cluster for MSMQ across two VMs in Azure. I created the two VMs and domain-joined them, and I can create the failover cluster with both nodes. However, when I try to add a role for MSMQ, I need a cluster shared disk. I tried creating a new managed disk in Azure and attaching it to the VMs, but the cluster still wasn't able to find the disk.
I also tried fileshare-sync, but that didn't work either.
I found out I need an iSCSI disk; there is this article https://learn.microsoft.com/en-us/azure/storsimple/storsimple-virtual-array-deploy3-iscsi-setup , but StorSimple reaches end of life next year.
So I am wondering whether it is possible to set up a failover cluster for MSMQ on Azure, and if so, how can I do it?
Kind regards,
You should be able to create a Cluster Shared Volume using Storage Spaces Direct across a cluster of Azure VMs. Here are instructions for a SQL failover cluster. I assume this should work for MSMQ as well, but I haven't set up MSMQ in over 10 years and I don't know whether its requirements are different.
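As a rough sketch (not MSMQ-specific, and assuming the failover cluster already exists and each VM has spare data disks attached), the Storage Spaces Direct steps in PowerShell look like:

```powershell
# Run on one cluster node; volume name and size are placeholders
Enable-ClusterStorageSpacesDirect                  # pools the VMs' attached data disks
New-Volume -StoragePoolFriendlyName "S2D*" `
           -FriendlyName "MsmqData" `
           -FileSystem CSVFS_ReFS `
           -Size 100GB                             # creates a Cluster Shared Volume
```

The resulting volume appears under C:\ClusterStorage\ on every node, which is what the clustered role needs in place of a shared iSCSI disk.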
I have an Elasticsearch cluster in a VPN.
How can my Spring Boot application access the cluster securely if it is located on a separate server outside of the VPN, and how do I configure this in the Spring Boot configuration (application.yml/application.properties)?
I also want the application to connect to the cluster in such a way that if I have, e.g., 2 master-eligible nodes and one fails, the connection remains intact.
If you have only 2 master-eligible nodes, you are at risk of the "split-brain" problem. There is an easy formula for the required number of master nodes:
M = 2F + 1 (M = master node count, F = number of master nodes that can fail at the same time). For example, to survive one failed master (F = 1) you need M = 3.
In your application, define all master nodes as targets for the Elasticsearch client; the client will handle the failover. See the Elasticsearch client documentation, or https://qbox.io/blog/rest-calls-made-easy-part-2-sniffing-elasticsearch for an example.
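For example, with Spring Boot's built-in Elasticsearch support you can list several nodes in application.yml (hostnames/ports are placeholders; on older Spring Boot versions the property is spring.elasticsearch.rest.uris):

```yaml
# application.yml -- hostnames/ports are placeholders
spring:
  elasticsearch:
    uris:
      - http://es-node-1:9200
      - http://es-node-2:9200
      - http://es-node-3:9200
```

With several URIs configured, the client can keep working against the remaining nodes if one of them goes down.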
The VPN should not be handled by your application; the infrastructure (servers, firewalls) is the right place to address it. Try to keep your application environment-agnostic: this will make your app easier to develop and maintain, and more robust to infrastructure changes.
I have a cluster whose nodes are not reliable and can go down (they are AWS spot instances). I am trying to make sure that my application master only launches on the reliable nodes (AWS on-demand instances) of the cluster. Is there a way to do this? My cluster is managed by Hortonworks Ambari.
This can be achieved using node labels. I was able to use the Spark property spark.yarn.am.nodeLabelExpression to restrict my application master to a set of nodes while running Spark on YARN. Add a node label to whichever nodes you want to use for application masters.
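A sketch of the steps, assuming a label named "ondemand" and placeholder hostnames (node labels require YARN 2.6+ and must be enabled in the ResourceManager configuration):

```shell
# Create the label and assign it to the reliable (on-demand) hosts
yarn rmadmin -addToClusterNodeLabels "ondemand"
yarn rmadmin -replaceLabelsOnNode "ondemand-host-1=ondemand ondemand-host-2=ondemand"

# Restrict the Spark application master to those nodes
spark-submit --master yarn \
  --conf spark.yarn.am.nodeLabelExpression=ondemand \
  your-app.jar   # placeholder application
```

Executors can still run on the unlabeled (spot) nodes; only the application master is pinned to the labeled hosts.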
I have set up a cluster using Ambari that consists of 3 nodes.
When I want to refer to HDFS, YARN, etc. (services installed using Ambari), do I have to use a URI for individual nodes? Or is there a unified URI that represents the whole cluster?
Maybe providing some more context into what you are trying to use the URI for will help us better answer your question.
However, in general each service consists of one or more components. It's common for a component to be installed on some nodes and not others, in which case a unified URI would not be useful. You address a component by the node and port it's running on if it has a running process (a Master or Slave component).
For example, the YARN service has several components, among them the Resource Manager, the Node Manager, and the client. The YARN client is installed on 1 or more nodes (a cardinality of 1+), the Resource Manager has a cardinality of 1, and the Node Manager has a cardinality of 1+.
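As a concrete illustration: if what you need is the HDFS URI, it points at the NameNode host (a single Master component), not at the cluster as a whole. The hostname below is a placeholder; 8020 is the common NameNode RPC port:

```xml
<!-- core-site.xml -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode-host.example.com:8020</value>
</property>
```

Clients then use URIs like hdfs://namenode-host.example.com:8020/path, which resolve to that one node rather than to "the cluster".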