Is it possible to assign multiple roles to one node in an IBM Cloud Private (ICP) cluster?
For example: if I have 5 physical servers and I want to implement an HA cluster - can I deploy the following architecture?
server 1: master + boot + worker
server 2: master + proxy + worker
server 3: master + worker
server 4: worker
server 5: worker
Will it be a supported architecture?
Yes, it is supported by ICP. You can assign multiple roles to each node.
Yes, your planned HA architecture is supported by ICP. However, it is recommended to have a node dedicated to master + boot and a dedicated proxy node, such as the following architecture:
server 1: master + boot
server 2: master + worker
server 3: master + worker
server 4: worker
server 5: proxy
You can find more information here: https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/getting_started/architecture.html
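In the installer's cluster/hosts file, a node simply appears in every host group that matches a role it should have. A minimal sketch of the recommended layout above, with hypothetical IPs (the boot role just means the installer is run from that machine, here server 1):

    # cluster/hosts -- hypothetical IPs
    [master]
    10.0.0.1
    10.0.0.2
    10.0.0.3

    [worker]
    10.0.0.2
    10.0.0.3
    10.0.0.4

    [proxy]
    10.0.0.5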
Note: If you assign multiple roles to one node, you have to increase the hardware resources of that node to meet the combined minimum requirements of those roles.
"If you do not use a management node in your multi-node cluster, ensure that the master node meets the requirements of the management node plus the master node."
https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/supported_system_config/hardware_reqs.html#multi
"If multiple cluster roles are installed on one node, the disk requirement is the sum of the disk requirement for each role. In the production environment, it is not recommended to install multiple cluster roles on one node."
https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/supported_system_config/hardware_reqs.html#multi
ICP Support Portal: http://ibm.biz/icpsupport
ICP Public Slack Channel: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/W1559b1be149d_43b0_881e_9783f38faaff/page/Connect
Related
I am currently analyzing a cluster environment with a distributed cache.
I have 5 nodes, each one with an application that must cache a value. Two nodes are in one datacenter and the other three in another. I was thinking of installing an ISPN instance on each node (where the application is hosted) to build the cluster.
Do you have any suggestions for me for further analysis?
Many thanks!
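As a minimal sketch of what "an ISPN (Infinispan) instance on each node" could look like with the embedded Java API (the cache name and replication mode are assumptions; a two-datacenter setup usually needs additional JGroups or cross-site configuration on top of this):

    import org.infinispan.Cache;
    import org.infinispan.configuration.cache.CacheMode;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.configuration.global.GlobalConfigurationBuilder;
    import org.infinispan.manager.DefaultCacheManager;

    public class EmbeddedCacheNode {
        public static void main(String[] args) {
            // One embedded, clustered cache manager runs inside each application node.
            DefaultCacheManager manager = new DefaultCacheManager(
                    GlobalConfigurationBuilder.defaultClusteredBuilder().build());
            // Replicated cache: every node keeps a copy of the cached value.
            manager.defineConfiguration("values", new ConfigurationBuilder()
                    .clustering().cacheMode(CacheMode.REPL_SYNC).build());
            Cache<String, String> cache = manager.getCache("values");
            cache.put("shared-key", "shared-value");
            manager.stop();
        }
    }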
I have provisioned a Rancher 3-node HA cluster with rke cluster.yml. Is it possible to add Windows worker nodes to the existing cluster? I was able to add Linux worker nodes to the cluster by adding entries into the "cluster.yml" and updating the configuration using "rke up --update-only". Is there any possible way to add Windows worker nodes to the existing cluster using rke?
I think this should be done with custom nodes.
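For reference, adding a Linux worker as described in the question is just another entry in cluster.yml followed by rke up --update-only (addresses, user, and key path below are hypothetical); Windows workers would instead be registered as custom nodes through Rancher:

    nodes:
      - address: 10.0.0.10
        user: rancher
        role: [controlplane, etcd, worker]
      - address: 10.0.0.20          # new Linux worker appended to the list
        user: rancher
        role: [worker]
        ssh_key_path: ~/.ssh/id_rsa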
I have created a GCP Dataproc cluster with Standard (1 master, N workers). Now I want to upgrade it to High Availability (3 masters, N workers) - Is it possible?
I tried the gcloud, gcloud alpha, and gcloud beta commands. For example, gcloud beta is documented here: https://cloud.google.com/sdk/gcloud/reference/beta/dataproc/clusters/update.
It has an option to scale worker nodes; however, it does not have an option to switch from standard to high availability mode. Am I correct?
You can upgrade the master node by going into the VM Instances section under your cluster, stopping your master VM, and editing its configuration.
You may always upgrade your master node's machine type and also add more worker nodes.
While that would improve your cluster's job performance, it has nothing to do with HA.
The answer is no. Once an HA cluster is created, it can't be downgraded, and vice versa. You can add worker nodes; however, the master node setup can't be altered.
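Scaling the workers can be done with the update command referenced in the question, for example (cluster name and region are hypothetical):

    gcloud dataproc clusters update my-cluster --region us-central1 --num-workers 4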
Yes, you can always do that. To change the machine type of the master node, you first need to stop the master VM instance, then you can change the machine type.
Even the machine type of a worker node can be changed; all you need to do is stop the machine and edit its configuration.
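A sketch of that stop/edit/start sequence with the gcloud CLI (instance name, zone, and machine type are hypothetical; Dataproc master VMs are typically named <cluster-name>-m):

    gcloud compute instances stop my-cluster-m --zone us-central1-a
    gcloud compute instances set-machine-type my-cluster-m --machine-type n1-highmem-8 --zone us-central1-a
    gcloud compute instances start my-cluster-m --zone us-central1-a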
I have an Elasticsearch cluster in a VPN.
How can my Spring Boot application access the cluster securely if it is located on a separate server outside of the VPN, and how can I configure this in the Spring Boot configuration (application.yml/application.properties)?
I also want the application to connect to the cluster in a way so that if I have e.g. 2 master-eligible nodes and one fails, the connection remains intact.
If you have only 2 master-eligible nodes, you are at risk of the "split-brain" problem. There is an easy formula for calculating the required number of master-eligible nodes:
M = 2F + 1 (M = master-eligible node count, F = number of master nodes that may fail at the same time). For example, to tolerate the loss of one master node you need three master-eligible nodes.
In your application, define all master nodes as targets for the Elasticsearch client; the client will handle the failover. See the Elasticsearch client documentation or https://qbox.io/blog/rest-calls-made-easy-part-2-sniffing-elasticsearch for an example, and the configuration sketch below.
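A rough application.yml sketch, assuming Spring Boot 2.1+ with the auto-configured Elasticsearch REST client and hypothetical hostnames; listing every node lets the client retry the others when one fails:

    spring:
      elasticsearch:
        rest:
          uris:
            - https://es-node-1.internal:9200
            - https://es-node-2.internal:9200
            - https://es-node-3.internal:9200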
The VPN should not be handled by your application; the infrastructure (server, firewall) is the right place to address it. Try to develop your application environment-agnostic. This will make your app easier to develop and maintain, and more robust to infrastructure changes.
I have set up a cluster using Ambari that consists of 3 nodes.
When I want to refer to HDFS, YARN, etc. (services installed using Ambari), do I have to use URIs for individual nodes? Or is there a unified URI that represents the whole cluster?
Maybe providing some more context into what you are trying to use the URI for will help us better answer your question.
However, in general each service consists of one or more components. It's common for components to be installed on some nodes and not others, in which case a unified URI would not be useful. You would address a component by the node and port it's running on if it has a running process (a Master or Slave component).
For example, the YARN service has a few components, some of which are: ResourceManager, NodeManager, and client. The YARN client will be installed on 1 or more nodes (a cardinality of 1+). The ResourceManager has a cardinality of 1. The NodeManager has a cardinality of 1+.
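Concretely, clients usually address HDFS through the NameNode (a Master component) and submit YARN applications to the ResourceManager rather than to every node. A sketch with hypothetical hostnames and the default ports:

    <!-- core-site.xml -->
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://node1.example.com:8020</value>
    </property>

    <!-- yarn-site.xml -->
    <property>
      <name>yarn.resourcemanager.address</name>
      <value>node2.example.com:8032</value>
    </property>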