I previously created a cluster containing different nodes, deployed an application, and accessed it on port 9080.
How can I create a cluster with different AppSrv nodes and access the application on the same port?
Can anyone advise me on this point?
I'm not sure I fully understand your question, but I do have an answer for you. If you delete the old clusters/servers on a node, you will not get the default ports (i.e. 9080) when creating a new cluster/server on the same node. It actually remembers the most recently used ports and uses that value + 1 (so 9081), regardless of whether 9080 is available. My understanding is that you want the default ports to be used (so 9080). In that case you need to ensure that the "generate unique ports" option/flag is not selected when creating the new cluster/servers. This link may help: https://www.ibm.com/support/knowledgecenter/SSRMWJ_6.0.0.21/com.ibm.isim.doc/installing/tsk/tsk_ic_ins_was_85_cluster.htm
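If you create the members through wsadmin instead of the console, the same flag appears as -genUniquePorts. A minimal Jython sketch (cluster, node, and member names are placeholders):
# Create a cluster member without generating unique ports,
# so it keeps the defaults (e.g. HTTP on 9080).
AdminTask.createClusterMember('[-clusterName myCluster -memberConfig [-memberNode node01 -memberName member1 -genUniquePorts false -replicatorEntry false]]')
AdminConfig.save()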
The addNode command best practices below should help you create the cluster with different nodes:
https://www.ibm.com/support/knowledgecenter/SSAW57_9.0.5/com.ibm.websphere.nd.multiplatform.doc/ae/rxml_nodetips.html
For information about port numbers, see the Port number settings topic.
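For example, federating a node into the deployment manager's cell typically looks like this (the host name is a placeholder; 8879 is the default Dmgr SOAP port):
addNode.sh dmgr.example.com 8879 -includeapps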
To be frank with you, you can't create another cluster that accesses the same port, because it's already in use. If you don't specify the port, it will get the default 9081; but if you force the application to be redirected to 9080, then neither will work and you'll get a socket error.
Your solution: only one of the clusters should use port 9080.
I need to get the Dmgr host and port dynamically to sync the node.
AdminControl.getHost() and AdminControl.getPort()
I am not sure whether it works. Thanks in advance.
Would something like this work instead at the end of your administrative script?
# Save the configuration changes, then sync the node (ND installs only).
AdminConfig.save()
if (NDInstall == "ND"):
    nodeSync = AdminControl.completeObjectName("type=NodeSync,node=" + nodeLongName + ",*")
    AdminControl.invoke(nodeSync, "sync")
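For the Dmgr host and port themselves, AdminControl reports the endpoint wsadmin is currently connected to, so in an ND cell (connected to the deployment manager) this should give you what you need:
# Host and port of the process wsadmin is connected to (the Dmgr in an ND cell).
dmgrHost = AdminControl.getHost()
dmgrPort = AdminControl.getPort()
print "Connected to " + dmgrHost + ":" + dmgrPort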
A save and sync by itself doesn't require nodes or application servers to be down. Depending on the nature of the change you may need to recycle application servers to bring the change into effect. One feature that's in ND to help with high availability is the ability to ripple start servers in a cluster. This way one or more application servers stay up to service requests while a change is 'rippled' into effect.
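If you script it, a ripple start can be driven through the cluster MBean as well. A minimal Jython sketch, assuming a cluster named myCluster:
# Look up the running cluster MBean and restart its members one at a time.
cluster = AdminControl.completeObjectName("type=Cluster,name=myCluster,*")
AdminControl.invoke(cluster, "rippleStart")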
A cluster is also an administrative unit that can be stopped and started. You can arrange your clusters however you want across your nodes.
I have a 3-Mesos-master setup for a cluster, on 3 different machines. But I have a silly question here: how does an application resolve which URI to access? Let's say I am just browsing the admin console: do I have to try all 3 ip:5050 endpoints to get a hit?
Since there is only ever one active (leading) Mesos master in an HA setup, you only need to access that one IP. There seems to be a second question intermingled with the master-related one, which I think revolves around the general case of mapping Mesos tasks to IP:PORT combinations; for this, Mesos-DNS is a useful solution.
You should be able to try just one Master endpoint. It will redirect to the "real" Master automatically after 5 seconds if you hit a non-leading Master.
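To find the leader programmatically, the master's /redirect endpoint answers with a redirect to the current leader. A minimal Python sketch (the host is a placeholder for any one of your three masters):
import urllib.request

# Ask any master for the leader; urllib follows the redirect automatically.
resp = urllib.request.urlopen("http://master1.example.com:5050/redirect")
print(resp.geturl())  # URL of the leading master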
If you use Mesos-DNS and add it to your local system's DNS settings, you can, for example, just enter http://leader.mesos:5050/ in your browser to access the currently leading master's WebUI.
I have a customer whose Hadoop installation is managed by us. In the current setup all the nodes in the cluster have all their ports open to each other, but the customer is quite reluctant to keep all the ports open. Can anyone let me know whether a configuration is possible at all where we instruct Hadoop to use only a restricted set of ports?
My findings: I have been able to configure a test setup where I opened only the required ports, as per the page below:
https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
But I still see that the MR jobs are not executed in a distributed manner.
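One possible cause, and this is a guess since no logs are shown: the MapReduce ApplicationMaster binds to a random ephemeral port by default, which a restrictive firewall will block. YARN lets you pin that to a fixed range in mapred-site.xml, which you can then open between the nodes:
<property>
  <name>yarn.app.mapreduce.am.job.client.port-range</name>
  <value>50100-50200</value>
</property>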
I have two different machines running Elasticsearch server instances. They automatically form a cluster, and changes made on one instance are reflected on the instance on the other machine. I have changed the cluster.name property in the elasticsearch.yml file in the config folder, and the issue is resolved. I wanted to know whether I can start an Elasticsearch server instance in non-cluster mode at all.
You can't start the Elasticsearch server in non-cluster mode.
But if you want the two servers to run independently (each in its own cluster), there are 2 options I can think of (both appear in the sketch below):
Disable multicast and don't set any unicast hosts
Change the cluster.name on each so they have different names
The easiest is to set node.local: true
This prevents elasticsearch from trying to connect to other nodes.
Using a custom cluster name is also a good idea in any case, just to prevent unintended exchange of data: use a different name for each of production, testing, and development.
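A minimal elasticsearch.yml sketch combining these suggestions (settings as they existed in the 0.x/1.x line; the cluster name is a placeholder):
node.local: true        # run purely locally; never discover or join other nodes
cluster.name: myapp-dev # distinct name per environment, to avoid accidental joins
# Alternative route: keep networking but disable multicast and leave unicast empty.
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: []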
I usually use Munin as monitoring software, but it (like other such software, I presume) needs an IP to make the ICMP or whatever pings to collect data.
In Amazon EC2, instances are created on the fly, with IPs you don't know in advance.
How can they be monitored?
I was thinking about using the Amazon console commands to read the IPs of the running instances and also change the Munin configuration file on the fly, but that may be too complicated... or not?
Any other solution/suggestion?
Thank you
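For what it's worth, the "read the IPs and rewrite the config" idea is workable. A minimal Python sketch using boto3 (the output path and filter are assumptions for illustration):
import boto3

# Emit a munin.conf host entry for every running EC2 instance.
ec2 = boto3.client("ec2")
result = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)
with open("/etc/munin/munin-conf.d/ec2.conf", "w") as conf:
    for reservation in result["Reservations"]:
        for inst in reservation["Instances"]:
            ip = inst.get("PrivateIpAddress")
            if ip:
                conf.write("[%s]\n    address %s\n    use_node_name yes\n"
                           % (inst["InstanceId"], ip))
Run it from cron or a boot hook; the Munin master rereads its config on each polling run, so new nodes get picked up automatically.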
I use RevealCloud to monitor my Amazon instances. You can install it once and create an AMI from that system, or bootstrap the install command if that's your method. Since the install is just one command, it's easy enough to put into rc.local (or similar). You can then see all the instances in the dashboard or top view as soon as they boot up.
Our instances are bootstrapped using Chef recipes, so it's easier for me to provide IPs/hosts as they (= all members of my cluster) get entered into /etc/hosts on start-up. Generally, it doesn't hurt to use an Elastic IP for the master server and allow all connections (in /etc/munin/munin.conf by default).
I'd solve the security 'question' at the security-group level, e.g. allow only instances with a certain security group to connect to the munin-node process (on port 4949). The question which remains is how to grant that access between groups.
Using ec2-authorize you can achieve this:
ec2-authorize mygroup -o monitorgroup -u <AWS-USER-ID>
This means that all instances in the group monitorgroup can access resources on instances in mygroup.
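With the current AWS CLI, the equivalent would be something like this (same placeholder group names, scoped to the munin-node port):
aws ec2 authorize-security-group-ingress --group-name mygroup --protocol tcp --port 4949 --source-group monitorgroup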
Let me know if this helps!
If your Munin master and nodes are all hosted on EC2, then it's better to use internal hostnames like domU-00-00-00-00-00-00.compute-1.internal, because this way you don't have to deal with IP addresses and security groups.
You also have to set this in /etc/munin/munin-node.conf:
allow ^.*$
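On the master side, the matching entry in /etc/munin/munin.conf would look roughly like this (reusing the internal hostname from above); note that allow ^.*$ accepts connections from any host, so keep your security groups tight:
[domU-00-00-00-00-00-00.compute-1.internal]
    address domU-00-00-00-00-00-00.compute-1.internal
    use_node_name yes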
You can read more about it in Monitoring AWS Ubuntu Instances using Munin
But if your Munin master is not on EC2, your best bet is to attach an Elastic IP to your EC2 instance.