I recently installed DC/OS (https://dcos.io/). For now, I am using it for testing, so my architecture is composed of:
- 1 bootstrap node
- 1 master node
- 2 slave nodes
Can you explain why DC/OS does not properly distribute services based on the available resources of the various nodes?
In my case, all services are installed on the same node. Once that node runs out of resources, DC/OS no longer allows me to install new services.
Thank you in advance!
You have configured the second node as a public agent.
If you want to deploy your service on the public agent, you have to set "slave_public" under Deploy New (or Edit) Service -> Optional -> Accepted Resource Roles.
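The same setting can also be made directly in the service's Marathon app definition via the acceptedResourceRoles field. A minimal sketch (the id, command, and resource values here are placeholders):

{
  "id": "/my-public-service",
  "cmd": "./start-server.sh",
  "cpus": 0.5,
  "mem": 256,
  "instances": 1,
  "acceptedResourceRoles": ["slave_public"]
}

Without acceptedResourceRoles, Marathon only considers unreserved ("*") resources, which is why everything lands on the private agent.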
Is it possible to assign multiple roles to one node in an IBM Cloud Private (ICP) cluster?
For example: if I have 5 physical servers and I want to implement an HA cluster, can I deploy the following architecture?
server 1: master + boot + worker
server 2: master + proxy + worker
server 3: master + worker
server 4: worker
server 5: worker
Will it be a supported architecture?
Yes, it is supported by ICP. You can assign multiple roles to each node.
Yes, your planned HA architecture is supported by ICP. However, it is recommended to have one node for only master + boot, and a dedicated proxy node, as in the following architecture:
server 1: master + boot
server 2: master + worker
server 3: master + worker
server 4: worker
server 5: proxy
You can find more information here: https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/getting_started/architecture.html
Note: if you assign multiple roles to one node, you have to increase the hardware resources of that node to meet the combined minimum requirements of those roles.
"If you do not use a management node in your multi-node cluster, ensure that the master node meets the requirements of the management node plus the master node."
https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/supported_system_config/hardware_reqs.html#multi
"If multiple cluster roles are installed on one node, the disk requirement is the sum of the disk requirement for each role. In the production environment, it is not recommended to install multiple cluster roles on one node."
https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/supported_system_config/hardware_reqs.html#multi
ICP Support Portal: http://ibm.biz/icpsupport
ICP Public Slack Channel: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/W1559b1be149d_43b0_881e_9783f38faaff/page/Connect
IBM Cloud Private supports multiple node types (x, p, z) in the same cluster. What should we define in the Helm deployment to make sure a deployment goes to a particular node type?
IBM Cloud Private supports mixed architectures on the worker nodes.
For example, if you deploy a z application, it will only try to run on z nodes.
All the master nodes should be of one architecture, either x or p.
From the ICP App Center, you can create different charts for different platforms, as shown below:
[screenshot: ICP App Center]
We opened an issue to enable 'nodeSelector' for selecting which platform to deploy to. This issue was tracked in both the ICP and Kubernetes communities, as below.
ICP issue: https://github.ibm.com/IBMPrivateCloud/roadmap/issues/1737
Kubernetes Chart issue: https://github.com/kubernetes/charts/issues/1899
You can get the node information from Infrastructure -> Node.
Different platforms have different arch images, so we have to use nodeSelector to ensure pods are scheduled to the matching nodes. We are now trying to enable multi-arch Docker images at https://github.com/docker-library/official-images; once that work is finished, nodeSelector will no longer be needed.
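As a sketch of the nodeSelector approach (this assumes the beta.kubernetes.io/arch node label used by Kubernetes at the time; the pod name and image are placeholders):

{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": { "name": "arch-pinned-pod" },
  "spec": {
    "nodeSelector": { "beta.kubernetes.io/arch": "ppc64le" },
    "containers": [
      { "name": "app", "image": "my-registry/my-app:latest" }
    ]
  }
}

In a Helm chart, the same nodeSelector block is typically templated into the deployment's pod spec and driven from values.yaml, which answers the original question of what to define in the Helm deployment.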
For the usage metrics of the different nodes in the cluster, you can get resource usage from the dashboard by default. If you install an add-on monitoring framework such as Prometheus on ICP, you will get more metrics.
When setting up a multi-node Hadoop cluster through Ambari, do we need the same type of operating system on both hosts, or will different ones work too? For example, one of my hosts has CentOS 7 and the other has CentOS 6, so will the setup be successful or will it throw an error?
I think it does not depend on CentOS 7 versus CentOS 6, because Hadoop is deployed with master and slave machines; once you have deployed the master nodes, the other nodes can join the Hadoop cluster directly. Also, YARN is responsible for the task scheduling.
Ambari supports clusters that consist of heterogeneous OSs. Just make sure to provide valid repo URLs for the OSs in use when deploying the cluster.
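As an illustration, repository base URLs can also be set through Ambari's REST API; the payload for the per-OS repositories resource is shaped roughly like this (a sketch based on the Ambari 2.x API; the mirror URL is a placeholder):

{
  "Repositories": {
    "base_url": "http://my-mirror.example.com/hdp/centos6/2.x/updates/2.6.5.0",
    "verify_base_url": true
  }
}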
I have a cluster of 3 Mesos slaves with two applications: "redis" and "memcached". Redis depends on memcached, and the requirement is that both applications/services start on the same node rather than on different slave nodes.
So I created the application group and added the dependency in the JSON file. After launching the JSON file via the "v2/groups" REST API, I observe that sometimes both applications start on the same node, but sometimes they start on different slaves, which breaks our requirement.
The intent/requirement is: if either application fails to start on a slave, both applications should fail over to another slave node. Also, can I configure the JSON file to tell Marathon to start the application group on slave-1 (a specific slave) first if it is available, and otherwise on another slave in the cluster? And if for some reason the application group starts on another slave, can Marathon relaunch it on slave-1 once it is available again?
Thanks in advance for the help.
Edit/Update (2):
Pod support is now available in Mesos, Marathon, and DC/OS:
DC/OS: https://dcos.io/docs/1.9/usage/pods/using-pods/
Mesos: https://github.com/apache/mesos/blob/master/docs/nested-container-and-task-group.md
Marathon: https://github.com/mesosphere/marathon/blob/master/docs/docs/pods.md
I assume you are talking about Marathon apps.
Marathon application groups don't have any semantics concerning co-location on the same node, and the same is true for dependencies.
You seem to be looking for a Kubernetes-like pod abstraction in Marathon, which was on the roadmap but not available when this answer was first written (see the update above :-)).
Hope this helps!
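For reference, a minimal DC/OS pod definition that would co-locate the two containers on one agent might look like the following (a sketch; the ids, images, and resource values are placeholders):

{
  "id": "/cache-pod",
  "containers": [
    {
      "name": "memcached",
      "resources": { "cpus": 0.5, "mem": 128 },
      "image": { "kind": "DOCKER", "id": "memcached:1.4" }
    },
    {
      "name": "redis",
      "resources": { "cpus": 0.5, "mem": 256 },
      "image": { "kind": "DOCKER", "id": "redis:3.2" }
    }
  ],
  "networks": [ { "mode": "host" } ]
}

All containers in a pod are launched together on the same agent, which is exactly the co-location guarantee the question asks for.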
I think this should be possible (as a workaround) if you specify the correct app constraints within the group's JSON.
Have a look at the example request at
https://mesosphere.github.io/marathon/docs/generated/api.html#v2_groups_post
and the constraints syntax at
https://mesosphere.github.io/marathon/docs/constraints.html
e.g.
"constraints": [["hostname", "CLUSTER", "slave-1"]]
should do the trick. The downside is that there will be no automatic failover to another slave that way. Still, I'd be curious why both apps specifically need to run on the same slave node...
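Putting it together, a group definition along these lines should pin both apps to slave-1 (a sketch; the commands and resource values are placeholders, and "slave-1" must match the agent's actual hostname as Mesos reports it):

{
  "id": "/cache",
  "apps": [
    {
      "id": "memcached",
      "cmd": "memcached -p 11211",
      "cpus": 0.5,
      "mem": 128,
      "instances": 1,
      "constraints": [["hostname", "CLUSTER", "slave-1"]]
    },
    {
      "id": "redis",
      "cmd": "redis-server",
      "cpus": 0.5,
      "mem": 256,
      "instances": 1,
      "dependencies": ["/cache/memcached"],
      "constraints": [["hostname", "CLUSTER", "slave-1"]]
    }
  ]
}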
I have set up a cluster using Ambari that consists of 3 nodes.
When I want to refer to HDFS, YARN, etc. (services installed using Ambari), do I have to use URIs for the individual nodes? Or is there a unified URI that represents the whole cluster?
Maybe providing some more context about what you are trying to use the URI for will help us better answer your question.
However, in general each service consists of one or more components. It's common for components to be installed on some nodes and not others, in which case a unified URI would not be useful. You address a component by the node and port it's running on, if it has a running process (a Master or Slave component), e.g. the Resource Manager web UI at its default port http://<resourcemanager-host>:8088, or HDFS through the NameNode at hdfs://<namenode-host>:8020.
For example, the YARN service has a few components, some of which are: Resource Manager, Node Manager, and client. The YARN client will be installed on one or more nodes (a cardinality of 1+). The Resource Manager has a cardinality of 1; the Node Manager has a cardinality of 1+.
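If you need to find out which node a component is actually running on, Ambari's REST API exposes that. A response to something like GET /api/v1/clusters/<cluster>/services/YARN/components/RESOURCEMANAGER is shaped roughly as follows (a sketch; the cluster and host names are placeholders):

{
  "ServiceComponentInfo": {
    "cluster_name": "mycluster",
    "component_name": "RESOURCEMANAGER",
    "service_name": "YARN"
  },
  "host_components": [
    {
      "HostRoles": {
        "component_name": "RESOURCEMANAGER",
        "host_name": "node2.example.com"
      }
    }
  ]
}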