WebSphere Application Server 8.5.5 clustering the same application - websphere

I have the same application running on two WAS clusters. Each cluster has 3 application servers based in different datacenters. In front of each cluster are 3 IHS servers.
Can I specify a primary cluster, and a failover cluster within the plugin-cfg.xml? Currently I have both clusters defined within the plugin, but I'm only hitting 1 cluster for every request. The second cluster is completely ignored.
Thanks!

As noted already, the WAS HTTP server plugin doesn't provide the function you're seeking, as documented in the WAS Knowledge Center: http://www-01.ibm.com/support/knowledgecenter/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/rwsv_plugincfg.html?lang=en
This assumes that by "failover cluster" what is actually meant is "BackupServers" in the plugin-cfg.xml.
The ODR alternative mentioned previously likely isn't an option either, because the ODR isn't supported for use in the DMZ (it hasn't been security hardened for DMZ deployment): http://www-01.ibm.com/support/knowledgecenter/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/twve_odoecreateodr.html?lang=en
From an effective HA/DR perspective, what you're seeking to accomplish should be handled at the network layer, using the global load balancer (global site selector, global traffic manager, etc.) that routes traffic into the data centers. This is usually accomplished by setting a "site cookie" using the load balancer.

This is by design. IHS, at least at the 8.5.5 level, does not allow for what you are trying to do. You will have to implement that level of high availability at a higher layer in your topology.

There are a few options.
If the environment is relatively static, you could post-process the plugin-cfg.xml files and combine them into a single ServerCluster, with the "dc2" servers listed as <BackupServer> entries in the cluster. The "dc1" servers are probably already listed as <PrimaryServer> entries.
BackupServers are only used when no PrimaryServers are reachable.
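A hand-merged ServerCluster could look roughly like the sketch below. The cluster name, hostnames and ports are made-up placeholders; copy the real <Server> stanzas from the two generated plugin-cfg.xml files (only two members per data center are shown to keep it short).

    <!-- Hypothetical merged cluster: dc1 members as primaries, dc2 members as backups -->
    <ServerCluster Name="MergedCluster" LoadBalance="Round Robin" RetryInterval="60">
      <Server Name="dc1_member1">
        <Transport Hostname="dc1app1.example.com" Port="9080" Protocol="http"/>
      </Server>
      <Server Name="dc1_member2">
        <Transport Hostname="dc1app2.example.com" Port="9080" Protocol="http"/>
      </Server>
      <Server Name="dc2_member1">
        <Transport Hostname="dc2app1.example.com" Port="9080" Protocol="http"/>
      </Server>
      <Server Name="dc2_member2">
        <Transport Hostname="dc2app2.example.com" Port="9080" Protocol="http"/>
      </Server>
      <PrimaryServers>
        <Server Name="dc1_member1"/>
        <Server Name="dc1_member2"/>
      </PrimaryServers>
      <BackupServers>
        <Server Name="dc2_member1"/>
        <Server Name="dc2_member2"/>
      </BackupServers>
    </ServerCluster>

The UriGroup and Route elements still have to point at this merged cluster, exactly as the generated files already do for the original clusters.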
Another option is to use the Java On-Demand Router, which has first-class awareness of applications running in two cells. Rules can be written that dictate the behavior of applications residing in two clusters (load balancing, failover, etc.). I believe these are "ODR Routing Rules".

Related

Veritas alternatives for MQ file sharing

We have an application that can use only one MQ queue manager to put/get messages, so to get high availability we are using an active/passive cluster with IP and file system sharing between 2 Linux servers, with a Veritas service taking care of failing over the resources. While this is a working solution, we are running into multiple issues and are limited to using physical servers rather than VMs.
I'm looking for a solution that can replace Veritas, something that can run on a VM and takes care of moving the filesystem/IP between servers seamlessly.
There is a service in MQ Advanced that can do this, but it is considerably more expensive than the base edition, so I'm not really interested.
If anyone has a similar MQ architecture or knows of a product that can do such a failover, please share the details.
Thanks!

Setting Up NCache (Distributed?/Shared)

I have two servers where I will be deploying the same application. Basically these two servers will handle work from a common Web API; the work that is handed out will be transformed, go through some logic, and be loaded into the DB. I want to cache the data that gets loaded, updated, or deleted in the database, so that when the same data is referenced I can get it from the cache (that roughly explains the caching mechanism). Now, I am using NCache and it is working perfectly fine within one application. I am trying to have a shared cache of sorts, so that both of my applications can access it. How do I go about doing that?
NCache is a distributed cache so you can continue to use that.
There is good general documentation available and very good getting started material that walks you through all the steps required.
In essence you install NCache on both the servers and then reference both servers in your client configuration (%NCHOME%\config\client.ncconf)
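As a rough illustration (attribute names differ between NCache versions, and the cache id and addresses below are placeholders), the clustered cache's entry in client.ncconf simply lists every cache server:

    <!-- Hypothetical client.ncconf fragment: both cache servers registered for one clustered cache -->
    <configuration>
      <cache id="demoCache" load-balance="True">
        <server name="192.168.1.11"/>
        <server name="192.168.1.12"/>
      </cache>
    </configuration>

Both applications then initialize the cache with the same cache id and automatically talk to the same clustered cache.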
In clustered caches, a single logical cache instance is distributed over multiple server nodes. Because the cache process runs outside the application address space, multiple applications can share the cache and see exactly the same data changes in terms of addition, removal, and update of the cache content.
Local out-proc caches are limited to one server node but as they are outside the application address space, they also support sharing of data between applications.
In fact, besides allowing multiple applications to share data, NCache supports a pub/sub infrastructure to allow for multiple applications to actually communicate with each other. This allows NCache to play a key part in setting up a fast and reliable microservices environment wherein all the participating services send messages to each other through the NCache platform.
See the links below, where they have shared information about NCache topologies and getting started:
http://www.alachisoft.com/resources/docs/ncache/admin-guide/cache-topologies.html
http://www.alachisoft.com/resources/videos/five-steps-getting-started.html

Consul infrastructure footprint and performance

Is there any documentation on Consul's requirements as they pertain to infrastructure footprint (e.g. memory/disk/CPU requirements and typical usage of the Consul agents and servers themselves)? How does this compare with other similar service discovery solutions?
I know of no official documentation on this. It will depend in part on the total number of nodes you are running (i.e. server agents plus client agents on each machine running services).
This thread includes comments (and a good link to a presentation by Darron at DataDog) from production users. They indicate using AWS m3.medium to m3.large instances in their production situations (you can find specs on those instance types here). Darron's presentation includes some information on # of nodes in their scenario as well as comments on how they scaled as the number of nodes grew.

How can a Phoenix application tailored only to use channels scale on multiple machines? Using HAProxy? How to broadcast messages to all nodes?

I use the node application purely for socket.io channels with Redis PubSub, and at the moment I have it spread across 3 machines, backed by nginx load balancing on one of the machines.
I want to replace this node application with a Phoenix application, and I'm still new to the Erlang/Elixir world, so I haven't figured out how a single Phoenix application can span more than one machine. Googling all possible scaling and load balancing terms yielded nothing.
The 1.0 release notes mention this regarding channels:
Even on a cluster of machines, your messages are broadcasted across the nodes automatically
1) So I basically deploy my application to N servers, starting the Cowboy servers in each one of them, similarly to how I do with node, and then tie them together with nginx/HAProxy?
2) If that is the case, how are channel messages broadcast across all nodes, as mentioned in the release notes?
EDIT 3: Taking Theston's answer, which clarifies that there is no such thing as a Phoenix application, only Elixir/Erlang applications, I updated my search terms and found some interesting results regarding scaling and load balancing.
A free extensive book: Stuff Goes Bad: Erlang in Anger
Erlang pooling libraries recommendations
EDIT 2: Found this from Elixir's creator:
Elixir provides conveniences for process grouping and global processes (shared between nodes) but you can still use external libraries like Consul or Zookeeper for service discovery or rely on HAProxy for load balancing for the HTTP based frontends.
EDITED: Connecting Elixir nodes on the same LAN is the first result that mentions inter-Elixir communication, but it isn't related to Phoenix itself, and it is not clear how it relates to load balancing and to each Phoenix node communicating with the others.
Phoenix isn't the application; when you generate a Phoenix project you create an Elixir application, with Phoenix being just a dependency (effectively a bunch of things that make building the web part of your application easier).
Therefore any Node distribution you need to do can still happen within your Elixir application.
You could just use Phoenix for the web routing and then pass the data on to your underlying Elixir app to handle the distribution across nodes.
It's worth reading http://www.phoenixframework.org/v1.0.0/docs/channels (if you haven't already) where it explains how Phoenix channels are able to use PubSub to distribute (which can be configured to use different adapters).
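For example, a broadcast through the endpoint reaches channel subscribers on every connected node, whichever adapter is in use (MyApp.Endpoint and the topic name here are placeholders):

    # Hypothetical module/topic names; with PG2 or Redis PubSub configured,
    # this message is delivered to subscribers on all nodes, not just the local one.
    MyApp.Endpoint.broadcast("room:lobby", "new_msg", %{body: "hello from any node"})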
Also, are you spinning up Cowboy on your deployment servers by running mix phoenix.server?
If so, then I'd recommend looking at EXRM https://github.com/bitwalker/exrm
This will bundle your Elixir application into a self contained file that you can simply deploy to your production servers (with Capistrano if you like) and then you start your application.
It also means you don't need any Erlang/Elixir dependencies installed on the production machines either.
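A minimal sketch of that flow, assuming the app is called my_app (dependency versions and paths are illustrative):

    # mix.exs -- add exrm to the dependencies
    defp deps do
      [{:phoenix, "~> 1.0"},
       {:exrm, "~> 0.19"}]   # version is illustrative
    end

    # build a self-contained release and start it on the target server
    $ mix deps.get
    $ MIX_ENV=prod mix release
    $ rel/my_app/bin/my_app start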
In short, Phoenix is not like Rails, Phoenix is not the application, not the stack. It's just a dependency that provides useful functionality to your Elixir application.
Unless I am misunderstanding your use case, you can still use the exact scaling technique your node version of the application uses. Simply deploy the Phoenix application to > 1 machines and use an nginx load balancer configured to forward requests to one of the many application machines.
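A minimal nginx sketch of that setup is below; the upstream host names and port are assumptions, and the Upgrade/Connection headers matter because Phoenix channels ride on WebSockets:

    # Hypothetical upstream of two Phoenix/Cowboy nodes listening on port 4000
    upstream phoenix_app {
      server app1.internal:4000;
      server app2.internal:4000;
    }

    server {
      listen 80;

      location / {
        proxy_pass http://phoenix_app;
        # pass the WebSocket upgrade through so channel connections survive the proxy hop
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
      }
    }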
The built-in node communication features of Erlang are used for applications that scale in a different way than a web app, for instance distributed databases or queues.
Look at Phoenix.PubSub
It's where Phoenix internally has the Channel communication bits.
It currently has two adapters:
Phoenix.PubSub.PG2 - uses Distributed Elixir, directly exchanging notifications between servers. (This requires that you deploy your application as an Elixir/Erlang distributed cluster.)
Phoenix.PubSub.Redis - uses Redis to exchange data between servers. (This should be similar to solutions found in socket.io and others)
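In a Phoenix 1.0-era project the adapter is chosen in the endpoint configuration, roughly as sketched below (application and module names, and the Redis host, are placeholders):

    # config/config.exs -- default PG2 adapter; relies on the nodes being joined
    # into one Erlang cluster (e.g. via Node.connect/1 or a vm.args -name/-setcookie setup)
    config :my_app, MyApp.Endpoint,
      pubsub: [name: MyApp.PubSub, adapter: Phoenix.PubSub.PG2]

    # Redis adapter alternative -- nodes stay independent and meet at a shared Redis,
    # much like the socket.io + Redis PubSub setup being replaced
    # config :my_app, MyApp.Endpoint,
    #   pubsub: [name: MyApp.PubSub, adapter: Phoenix.PubSub.Redis,
    #            host: "redis.internal", port: 6379]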

Failover proxy on Amazon aws?

This is a fairly generic question. Suppose I have three ec2 boxes: two app boxes and a box that hosts nginx as a reverse proxy, delegating requests to the two app boxes (my database is hosted elsewhere). Now, the two app machines can absorb a failure amongst themselves, however the third one represents a single point of failure. How can I configure my setup so that if the reverse proxy goes down, the site is still available?
I am looking at keepalived and HAproxy. For me this stuff is non-obvious, and any help for the ears of a beginner is appreciated.
If your nginx does not do much more than proxy HTTP requests, have a look at Amazon Elastic Load Balancer. You can set up your two (or more) app boxes, leave some spare ones (in order to always keep two or more up, if you need it), set up health checks, have SSL termination at the balancer, make use of sticky sessions, etc.
There are a lot of people, though, who would like to see the ability to assign Elastic IP addresses to ELBs, and others with good arguments why it is not needed.
My suggestion is that you take a look at the ELB documentation, as it seems to fit your needs perfectly. I also recommend reading this interesting post for a good discussion on the subject.
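As a rough sketch with the AWS CLI (load balancer name, zones, health-check path and instance ids are all placeholders; the classic ELB commands are shown since that matches this setup):

    # create a classic ELB listening on port 80 and forwarding to the app instances
    aws elb create-load-balancer \
      --load-balancer-name app-elb \
      --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
      --availability-zones us-east-1a us-east-1b

    # health check so a dead app box is taken out of rotation
    aws elb configure-health-check \
      --load-balancer-name app-elb \
      --health-check Target=HTTP:80/health,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2

    # attach the two app instances
    aws elb register-instances-with-load-balancer \
      --load-balancer-name app-elb \
      --instances i-0123456789abcdef0 i-0fedcba9876543210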
I think if you are a beginner with HA and clusters, your best solution is Elastic Load Balancer (ELB), which is maintained by Amazon. It scales up automatically and implements a highly available cluster of balancers, so by using the ELB service you already mitigate the single point of failure you mentioned. It's also important to keep in mind that an ELB is cheaper than 2 instances in AWS, and of course it's easier to launch and maintain.
You don't see multiple ELB instances because it is a managed service, so you don't have to take care of its availability yourself.
Another important point is that AWS Elastic IPs aren't assigned to the NIC interface of your OS instance, so using virtual IPs the way you would in classical infrastructures is difficult.
After this explanation, if you still want Nginx as a reverse proxy in AWS for your own reasons, I think you can implement an autoscaling group with a layer composed of Nginx instances. But if you aren't an expert in autoscaling technology, it could be very tricky.
