How to get the dmgr host and port number dynamically using Jython or Jacl in IBM WebSphere Application Server on Linux?

I need to get the Dmgr host and port dynamically to sync the node. Will
AdminControl.getHost() and AdminControl.getPort()
do that? I am not sure whether it works. Thanks in advance.

Would something like this work instead at the end of your administrative script?
AdminConfig.save()
if (NDInstall == "ND"):
    nodeSync = AdminControl.completeObjectName("type=NodeSync,node=" + nodeLongName + ",*")
    AdminControl.invoke(nodeSync, "sync")
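On the original question: AdminControl.getHost() and AdminControl.getPort() do return the host and port of the process wsadmin is connected to, so they can be used to pick up the dmgr connection details at run time. A minimal sketch follows; the _AdminControlStub class and all its return values are stand-ins so the snippet runs outside wsadmin — inside wsadmin the real AdminControl object is already defined and the stub should be deleted:

```python
# Sketch: resolve the dmgr host/port dynamically, then sync a node.
# The stub below only simulates wsadmin's AdminControl so this runs
# standalone; its host, port, and node names are hypothetical.
class _AdminControlStub:
    def getHost(self):
        return "dmgrhost.example.com"   # hypothetical dmgr host
    def getPort(self):
        return "8879"                   # hypothetical SOAP connector port
    def completeObjectName(self, template):
        return "WebSphere:type=NodeSync,node=node01"  # fake ObjectName
    def invoke(self, objname, operation):
        return None                     # real call performs the sync

AdminControl = _AdminControlStub()      # delete this line inside wsadmin

dmgrHost = AdminControl.getHost()       # host wsadmin is connected to
dmgrPort = AdminControl.getPort()       # port of that connection
print("dmgr is at %s:%s" % (dmgrHost, dmgrPort))

# Then sync the node as in the answer above (node01 is a placeholder):
nodeSync = AdminControl.completeObjectName("type=NodeSync,node=node01,*")
AdminControl.invoke(nodeSync, "sync")
```

Note that these calls describe the connection wsadmin itself is using, so they only return the dmgr's details if the script was started against the dmgr (e.g. wsadmin -conntype SOAP -host dmgrhost -port 8879).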

A save and sync by itself doesn't require nodes or application servers to be down. Depending on the nature of the change you may need to recycle application servers to bring the change into effect. One feature that's in ND to help with high availability is the ability to ripple start servers in a cluster. This way one or more application servers stay up to service requests while a change is 'rippled' into effect.
A cluster is also an administrative unit that can be stopped and started. You can arrange your clusters however you want across your nodes.

Related

Creating a cluster server in WAS

I previously created a cluster containing different nodes, deployed an application, and accessed it on port 9080.
How can I create a cluster with different AppSrv nodes and access the application on the same port?
Can anyone advise me on this point?
I'm not sure if I fully understand your question, but I do have an answer for you. If you delete the old clusters/servers on a node, you will not get the default ports (i.e. 9080) when making a new cluster/server on the same node. WAS remembers the most recently used ports and uses that value +1 (so 9081), regardless of whether 9080 is available. My understanding is that you want the default ports (9080) to be used. In that case you need to ensure that the "generate unique ports" option/flag is not selected when creating the new cluster/servers. This link may help: https://www.ibm.com/support/knowledgecenter/SSRMWJ_6.0.0.21/com.ibm.isim.doc/installing/tsk/tsk_ic_ins_was_85_cluster.htm
addNode command best practices below should help you to create the cluster with different nodes.
https://www.ibm.com/support/knowledgecenter/SSAW57_9.0.5/com.ibm.websphere.nd.multiplatform.doc/ae/rxml_nodetips.html
For information about port numbers, see the Port number settings topic.
To be frank, you can't create another cluster to access the same port, because it's already in use. If you don't specify the port it will get the default 9081, but if you force the application to redirect to 9080, then neither will work; you'll get a socket error.
Your solution: one of the clusters should use port 9080.

Wildfly 11 - High Availability - Single deploy on slave

I have two servers in HA mode. I'd like to know if it is possible to deploy an application on the slave server only. If yes, how do I configure it in JGroups? I need to run a specific program that accesses the master database, but I would not like to run it on the master server, to avoid overhead there.
JGroups itself does not know much about WildFly and the deployments; it only creates a communication channel between nodes. I don't know where you get the notion of master/slave, but JGroups always has a single* node marked as coordinator. You can check the membership through Channel.getView().
However, you still need to deploy the app on both nodes and just make it inactive if this is not its target node.
*) If there's no split-brain partition or similar rare/temporary issue

H2 Database Cluster Recovery

I have a SpringMVC application which runs on Apache Tomcat and uses an H2 database.
The infrastructure contains two application servers (let's name them A and B), each running its own Tomcat server. I also have H2 database clustering in place.
On one system (A) I ran the following command
java org.h2.tools.Server -tcp -tcpPort 9101 -tcpAllowOthers -baseDir server1
On the other (B) I ran
java org.h2.tools.Server -tcp -tcpPort 9101 -tcpAllowOthers -baseDir server2
I started the cluster in machine A
java org.h2.tools.CreateCluster
-urlSource jdbc:h2:tcp://IpAddrOfA:9101/~/test
-urlTarget jdbc:h2:tcp://IpAddrOfB:9101/~/test
-user sa
-serverList IpAddrOfA:9101,IpAddrOfB:9101
When any one of the server is down, it has been mentioned that, one has to delete the database that failed, restart the server and rerun the CreateCluster.
I have the following questions:
1. If both servers are down, how can I ascertain which database to delete, so that I can restart that server and rerun the cluster?
2. CreateCluster contains a urlSource and urlTarget. Do I need to give them the same values as previously, or can I interchange them without any side effect?
3. Do I need to run the CreateCluster command from both machines? If so, do I need to interchange the urlSource and urlTarget?
4. Is there a way to know whether both, one, or none of the servers are running? I want both IP addresses returned if both are up, one IP address if only one is up, and none if all are down.
If both servers are down, how can I ascertain, which database to delete
The idea of the cluster is that a second database adds redundancy to the system. Let's assume a server fails once every 100 days (hard disk failure, power failure, and so on). That is 99% availability. This might not be good enough for you; that's why you may want to use a cluster with two servers. Even if each server fails every 100 days, the chance of both failing at the same time is very low. Ideally, the risks of failure are completely independent, which would mean the risk of both failing on the exact same day is 1 in 10,000 (100 times 100), giving you 99.99% availability. So the risk that both servers are down is exactly what the cluster feature should prevent.
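As a quick sanity check on the availability arithmetic above:

```python
# Each server is assumed down 1 day in 100 (99% availability),
# and the two failures are assumed independent.
p_down = 1 / 100
p_both_down = p_down * p_down      # roughly 1 in 10,000
availability = 1 - p_both_down     # roughly 99.99%
print(p_both_down, availability)
```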
CreateCluster contains a urlSource and urlTarget. Do I need to be specific as to give them the same value as was previously
It depends which one you want to use as the source and which one as the target: the source database is the one that gets copied to the target.
Do I need to run the CreateCluster command from both the machines?
No.
Is there a way to know whether both, one or none of the servers are running ?
You could try to open a TCP/IP connection to each of them, to check if the listener is running. What I usually do is run telnet <server> <port> on the command line.
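The telnet check can also be scripted. A minimal Python sketch that probes each server's TCP listener follows; the throwaway local listener is only there so the example runs standalone — in practice you would probe IpAddrOfA:9101 and IpAddrOfB:9101:

```python
import socket

def is_listening(host, port, timeout=2.0):
    """Return True if a TCP listener accepts connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Throwaway listener so the example is self-contained; substitute the
# real H2 server addresses (e.g. IpAddrOfA:9101) in practice.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
live_port = server.getsockname()[1]

candidates = [("127.0.0.1", live_port),  # up
              ("127.0.0.1", 1)]          # nothing listens here
up = [host for host, port in candidates if is_listening(host, port)]
print(up)  # only addresses with a live listener remain
server.close()
```

This answers question 4 directly: run the probe against both addresses, and the resulting list contains both, one, or none of them.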

Accessing Clustered MSMQ with an application

We are switching from a non-clustered to a 2-node clustered MSMQ Windows Server 2008 R2 SP1 Enterprise environment. Previously, when it was non-clustered, we wrote a .NET 3.5 C# Windows Form application to help us manage our environment (so it does tasks such as create queues with the right permissions, read messages, forward messages, etc.). I would like to make this application work with our new cluster.
Per these articles,
http://blog.terranspot.com/2011/07/accessing-microsoft-message-queuing.html
http://blogs.msdn.com/b/johnbreakwell/archive/2008/02/18/clustering-msmq-applications-rule-1.aspx
I understand that I need to add the application as a resource on the cluster as when I don't, I am accessing the node's MSMQ instance. To help with my debugging, I have turned the local MSMQ services off. No matter what I do, however, the program keeps trying to access the node's instance. I added it as an application resource (with the command line of "Q:\QueueManagerConsole.exe". The Q:\ is the disk that is shared between the 2 nodes that is part of the failover cluster), but when I run it via Windows Explorer, it doesn't see the cluster instance, only the local. I have seen no way to execute a program from Failover Cluster Manager, so I don't understand what I am doing wrong. I switched the code to access everything via "." (so MessageQueue.GetPrivateQueuesByMachine(".")), which, per my meager understanding is how you access the local queue. Could someone explain, preferably acting as if I had no clue what I was doing, on a. if this IS possible and b. HOW to do this correctly?
Hi, I did something similar a while ago. Try Deploy a service in a failover cluster; it worked for me to:
configure the app to use clustered MSMQ
configure the app as a clustered resource
configure the app to connect under the host name
set the permission set required for transport
At least this will give you a good starting point.
I finally got this working by creating a shortcut to the application and putting it on the server that was actually accessing the clustered queues.
Please try adding the following environment variables to the environment used by your application:
_CLUSTER_NETWORK_NAME_
_CLUSTER_NETWORK_HOSTNAME_
with the cluster server name as the value. It worked in the system being developed by my team: it contains a few services which had to access clustered MSMQ, and this solved the problem.
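A quick way to try this suggestion is to launch the tool with those variables already set. A hedged Python sketch follows; the cluster name MYCLUSTER is a placeholder, and the child process here just echoes the variable back (standing in for QueueManagerConsole.exe) so the override is visible:

```python
import os
import subprocess
import sys

# Hypothetical value -- replace with your clustered MSMQ network name.
env = dict(os.environ)
env["_CLUSTER_NETWORK_NAME_"] = "MYCLUSTER"
env["_CLUSTER_NETWORK_HOSTNAME_"] = "MYCLUSTER"

# Stand-in child process; a real launcher would start the .exe instead.
out = subprocess.check_output(
    [sys.executable, "-c",
     "import os; print(os.environ['_CLUSTER_NETWORK_NAME_'])"],
    env=env).decode().strip()
print(out)
```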

Monitoring instances in cloud

I usually use Munin as monitoring software, but it (as other monitoring software, I presume) needs an IP to make the ICMP or whatever pings to collect data.
In Amazon EC2, instances are created on the fly, with IPs you don't know in advance.
How can they be monitored?
I was thinking about using the Amazon console commands to read the IPs of the instances that are up, and also change the Munin configuration file on the fly, but that may be too complicated... or not?
Any other solution / suggestion?
Thank you
I use revealcloud to monitor my Amazon instances. You can install it once and create an AMI from that system, or bootstrap the install command if that's your method. Since the install is just one command, it's easy enough to put into rc.local (or similar). You can then see all the instances in the dashboard or top view as soon as they boot up.
Our instances are bootstrapped using chef recipes, so it's easier for me to provide IPs/hosts as they (= all members of my cluster) get entered into /etc/hosts on start-up. Generally, it doesn't hurt to use elastic IPs for a master server and allow all connections (in /etc/munin/munin.conf by default).
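The "rewrite the config on the fly" idea from the question is mostly plain templating. Assuming you have already fetched the instance names and IPs (e.g. from the EC2 API or your chef run), generating the host entries for /etc/munin/munin.conf could look like this (the instance names and addresses below are hypothetical):

```python
def munin_conf_entries(instances):
    """Render [host] sections for /etc/munin/munin.conf
    from (name, ip) pairs."""
    blocks = []
    for name, ip in instances:
        blocks.append("[%s]\n    address %s\n    use_node_name yes"
                      % (name, ip))
    return "\n\n".join(blocks)

# Hypothetical instances; in practice the IPs come from the EC2 API.
conf = munin_conf_entries([("web1", "10.0.0.11"),
                           ("web2", "10.0.0.12")])
print(conf)
```

Regenerating this fragment and reloading the Munin master whenever instances come and go is the "on the fly" part the question asks about.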
I'd solve the security question at the security-group level, e.g. allow only instances with a certain security group to connect to the munin-node process (on port 4949).
For example, using ec2-authorize you can achieve this:
ec2-authorize mygroup -o monitorgroup -u <AWS-USER-ID>
This means that all instances in group monitorgroup can access resources on instances in group mygroup.
Let me know if this helps!
If your Munin master and nodes are all hosted on EC2, then it's better to use internal hostnames like domU-00-00-00-00-00-00.compute-1.internal, because this way you don't have to deal with IP addresses and security groups.
You also have to set this in /etc/munin/munin-node.conf:
allow ^.*$
You can read more about it in Monitoring AWS Ubuntu Instances using Munin
But if your Munin master is not on EC2, your best bet is to attach an Elastic IP to your EC2 instance.