How does AppFabric caching failover work?

We are looking to use Windows AppFabric caching in a high availability scenario.
We would like to place the caching on the web servers, but the web servers do not have access to a database.
Therefore, the only option is the xml configuration file. This is on a share on the lead web server.
What happens if this server goes down? Is high availability only available when you have a clustered SQL Server or access to a SAN?

You can designate multiple lead hosts; however, for high availability Microsoft recommends a clustered SQL Server configuration.

The XML configuration storage location can be a single point of failure, so Microsoft recommends using failover clustering, which gives you a highly available share for the configuration folder.

Related

WebSphere Application Server 8.5.5: clustering the same application

I have the same application running on two WAS clusters. Each cluster has 3 application servers based in different datacenters. In front of each cluster are 3 IHS servers.
Can I specify a primary cluster, and a failover cluster within the plugin-cfg.xml? Currently I have both clusters defined within the plugin, but I'm only hitting 1 cluster for every request. The second cluster is completely ignored.
Thanks!
As noted already, the WAS HTTP server plugin doesn't provide the function you're seeking, as documented in the WAS Knowledge Center: http://www-01.ibm.com/support/knowledgecenter/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/rwsv_plugincfg.html?lang=en
This assumes that by "failover cluster" you actually mean "BackupServers" in the plugin-cfg.xml.
The ODR alternative mentioned previously likely isn't an option either, because the ODR isn't supported for use in the DMZ (it hasn't been security-hardened for DMZ deployment): http://www-01.ibm.com/support/knowledgecenter/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/twve_odoecreateodr.html?lang=en
From an effective HA/DR perspective, what you're trying to accomplish should be handled at the network layer, using the global load balancer (global site selector, global traffic manager, etc.) that routes traffic into the data centers. This is usually done by setting a "site cookie" via the load balancer.
This is by design. IHS, at least at the 8.5.5 level, does not allow for what you are trying to do. You will have to implement such level of high availability in a higher level in your topology.
There are a few options.
If the environment is relatively static, you could post-process the plugin-cfg.xml files and combine them into a single ServerCluster, with the "dc2" servers listed as <BackupServer> entries in the cluster. The "dc1" servers are probably already listed as <PrimaryServer> entries.
BackupServers are only used when no PrimaryServers are reachable.
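A combined cluster along those lines could look like the following sketch of a plugin-cfg.xml fragment. All server and host names here are made up for illustration; ports and transport details would come from your own generated plugin configs.

```xml
<!-- Sketch only: merged ServerCluster with dc1 as primary and dc2 as backup. -->
<ServerCluster Name="merged_cluster" LoadBalance="Round Robin">
  <Server Name="dc1_member1">
    <Transport Hostname="dc1host1" Port="9080" Protocol="http"/>
  </Server>
  <Server Name="dc1_member2">
    <Transport Hostname="dc1host2" Port="9080" Protocol="http"/>
  </Server>
  <Server Name="dc2_member1">
    <Transport Hostname="dc2host1" Port="9080" Protocol="http"/>
  </Server>
  <PrimaryServers>
    <Server Name="dc1_member1"/>
    <Server Name="dc1_member2"/>
  </PrimaryServers>
  <BackupServers>
    <Server Name="dc2_member1"/>
  </BackupServers>
</ServerCluster>
```

With this shape, the plugin round-robins across the PrimaryServers and only falls through to the BackupServers once none of the primaries respond.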
Another option is to use the Java On-Demand Router, which has first-class awareness of applications running in two cells. Rules can be written that dictate the behavior of applications residing in two clusters (load balancer, failover, etc.). I believe these are "ODR Routing Rules".

Replicated cache system on each HTTP server

I recently set up memcached for a PHP site with a lot of traffic. Before that we used APC, but it lacks the possibility of a unified cache: invalidating a key on one server doesn't invalidate it on the others.
I noticed a big difference depending on whether memcached runs on the same machine as the HTTP server or on a separate one:
http + memcached on the same server -> 0.06 s average to deliver a page
http and memcached on different servers (but behind NAT) -> 0.15-0.20 s to deliver a page
So it's a huge difference, and I am wondering whether it would be better to have the cache system on the same machine as the HTTP server. The added complexity is that the website is served by a couple of HTTP servers (behind a load balancer). So I actually need a cache system with replication: each HTTP server holding a cache "copy" and writing changes only to the "master" (or another approach that achieves something similar).
There are a couple of such systems (Couchbase, Redis, and so on). I think Couchbase is not a good fit for this, as it won't let you connect to the local cache server but only to the "gate". Redis may work; I am still checking the others.
The main thing is: has anyone tried this approach to speed up a website, by keeping a cache "copy" on each machine (kept in sync with the others)?
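The local-copy-plus-master pattern asked about above can be sketched in a few lines. This is a toy illustration, not any particular product: a plain dict stands in for the shared master store (e.g. Redis or memcached), and invalidation fan-out to the other nodes' local copies is deliberately left out, since that is exactly the hard part the replication products solve.

```python
class ReplicatedCache:
    """Toy sketch: local read copy with write-through to a shared master.

    `master` stands in for a shared store such as Redis; fan-out
    invalidation of other nodes' local copies is not modeled here.
    """

    def __init__(self, master):
        self.master = master   # shared store, same object on every node
        self.local = {}        # this node's fast local copy

    def get(self, key):
        if key in self.local:              # local hit: no network round trip
            return self.local[key]
        value = self.master.get(key)       # local miss: read through master
        if value is not None:
            self.local[key] = value        # warm the local copy
        return value

    def set(self, key, value):
        self.master[key] = value           # write-through to the master
        self.local[key] = value            # keep our own copy current

    def invalidate(self, key):
        self.master.pop(key, None)
        self.local.pop(key, None)          # note: other nodes keep stale copies
```

The point of the sketch is the read path: repeated reads on one node are served from memory at local-memcached speed, while writes still go to the shared master so other nodes can pick them up.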
You can use the GigaSpaces XAP solution, which is a distributed in-memory data grid that also integrates with Jetty, allowing you to deploy your web app and manage it from a single management system. The central distributed data grid (which can be used as a simple cache) can have a local cache on each web container, kept in sync with the main cache. You don't have to use the Jetty integration for this: you can keep your own web container and just create a proxy to the distributed cache with an embedded local cache via code. Alternatively, you can use a fully replicated topology between the web containers without a main distributed cache, where each web container holds a full copy of the entire cache, kept in sync with the other instances.
You can read more in:
http://wiki.gigaspaces.com/wiki/display/SBP/Web+Service+PU
http://wiki.gigaspaces.com/wiki/display/XAP9/Web+Jetty+Processing+Unit+Container
http://wiki.gigaspaces.com/wiki/display/XAP9/Client+Side+Caching
Disclaimer: I am a developer working for GigaSpaces.

Can AppFabric Cache be used like memcached with independent nodes?

With Memcached, it is my understanding that each of the cache servers doesn't need to know diddly about the other servers. With AppFabric Cache on the other hand, the shared configuration links the servers (and consequently becomes a single point of failure).
Is it possible to use AppFabric cache servers independently? In other words, can the individual clients choose where to store their key/values based on the available cache servers, and would that decision be the same for all clients (the way it is with memcached)?
NOTE: I do realize that more advanced features such as tagging would be broken without all the servers knowing about each other.
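For context, the "same decision for all clients" property of memcached comes from deterministic client-side hashing: every client hashes the key against the same server list, so they all agree without the servers knowing about each other. A minimal sketch of the consistent-hashing variant (server names are made up):

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    # md5 gives a stable, well-distributed hash across processes/machines
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    """Deterministic key -> server mapping via a consistent hash ring."""

    def __init__(self, servers, replicas=100):
        # Place `replicas` virtual points per server on the ring so keys
        # spread evenly and only 1/N of them move when a server changes.
        self.ring = sorted(
            (_hash(f"{server}:{i}"), server)
            for server in servers
            for i in range(replicas)
        )
        self.points = [point for point, _ in self.ring]

    def server_for(self, key: str) -> str:
        # First ring point clockwise from the key's hash (wrapping at the end)
        idx = bisect.bisect(self.points, _hash(key)) % len(self.points)
        return self.ring[idx][1]
```

Two independent clients built with the same server list will always pick the same server for a given key, which is the behavior the question describes.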
Are you viewing the shared configuration as a single point of failure? If you are using SQL Server as your configuration repository, then this shouldn't be an issue with a redundant SQL Server setup.
This approach would obviously lose you all of the benefits of using a distributed cache. However, if you really want to do this, then simply don't use a shared configuration. When you configure a new AppFabric node, create a new configuration file or database; choosing an existing one basically says "add this new node to the existing cache cluster".

Horizontal Scaling of Tomcat in Microsoft Azure

I have been working on this for quite a while, but still have no conclusion.
I want to do horizontal scaling of Tomcat instances in Microsoft Azure (1, 2, 3, ... Tomcat instances for one service). I have read lots of articles about session replication and clustering with Tomcat. Since Azure does not support multicast, there is no easy way to cluster Tomcat. Sticky sessions are not an option either, because Azure does round-robin load balancing. Setting up two services, one with Terracotta or Apache mod_jk and the other with the Tomcat instances, seems like overkill to me (if it is even doable)...
Is this even possible?
Thanks in advance for reading and answering my question. Every comment/idea is highly appreciated.
There is the new AppFabric caching service you could use, and there are also examples of using memcached on Azure; would that help?
http://code.msdn.microsoft.com/winazurememcached
Why do you feel that running 2 services is overkill, exactly? If you have no issue with scaling out to n Tomcat instances, adding another service for load distribution is a perfectly acceptable solution in my book. By running that service on a minimum of 2 instances, that service itself meets the Azure SLA requirements: your uptime will be as good as it is going to get on Azure, and you avoid a SPoF (single point of failure).
You could go with a product like Terracotta, but it is also pretty straightforward to write a simple socket server that routes HTTP sessions back to a particular instance running in Windows Azure. You would have to be aware of node recycles, but that is quite doable.
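The routing idea mentioned above can be sketched as a pure routing decision: inspect the session cookie and map it to the instance that owns the session. This is a toy illustration, not production code; the backend addresses are made up, and the assumption that the session id carries a numeric route suffix (modeled loosely on Tomcat's <id>.<route> jvmRoute convention) is mine.

```python
import zlib
from http.cookies import SimpleCookie

# Hypothetical addresses of the Tomcat instances behind the router
BACKENDS = ["10.0.0.1:8080", "10.0.0.2:8080"]

def backend_for(cookie_header: str) -> str:
    """Pick the backend that owns this session.

    Assumes session ids look like '<id>.<route>' where <route> is a
    numeric backend index; otherwise falls back to stable hash routing.
    """
    cookie = SimpleCookie(cookie_header)
    session = cookie.get("JSESSIONID")
    if session is None:
        return BACKENDS[0]                  # no session yet: any backend works
    _, _, route = session.value.partition(".")
    if route.isdigit() and int(route) < len(BACKENDS):
        return BACKENDS[int(route)]         # honor the owning instance
    # Stable fallback: same session always hashes to the same backend
    return BACKENDS[zlib.crc32(session.value.encode()) % len(BACKENDS)]
```

A real router would wrap this in a socket/proxy loop and would also need to re-route when the chosen instance has been recycled, which is the caveat the answer mentions.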
Be aware that memcached requires an additional Azure service as well (web roles); the AppFabric caching service does not (but it also has a cost associated with it). I do not know Tomcat, but for IIS you can easily move session state from in-memory to persisted (either SQL Azure or Azure Storage). Something to be aware of: for high-volume sites, the transaction cost of Azure Storage can actually become a cost driver for your deployment if you store session info there. SQL Azure could well be the more cost-effective solution, but on the other hand it might not be supported out of the box for your solution.
I do not think that you can run Tomcat on Azure. Even if you could (using the virtual machine role) it is probably cheaper to run it on a Linux VM on Amazon EC2.
Edit
I see that this is possible using the Tomcat Solution Accelerator. But look at the disclaimer:
"This solution accelerator is provided for informational purposes only and Microsoft or Infosys makes no warranties, either express or implied"
This is an unsupported solution. I know that it is often difficult to question management decisions. But using unsupported software for production systems, when a cheaper supported alternative is available, is generally not a good idea.

THE FASTEST Smarty Cache Handler

Does anyone know of an overview of the performance of different cache handlers for Smarty?
I compared Smarty's file cache with a memcached handler, but memcached seemed to have a negative impact on performance.
I figured there would be a faster way to cache than through the filesystem... am I wrong?
I don't have a systematic answer for you, but in my experience, the file cache is the fastest. I should clarify that I haven't done any serious performance tests, but in all the time I've used Smarty, I have found the file cache to work best.
One thing that definitely improves performance is to disable checking whether the template files have changed ($smarty->compile_check = false;). This avoids having to stat the .tpl files.
File caching is OK when you have a single server instance, or a shared drive (NFS) in a server cluster. But when you have a web server cluster (two or more web servers serving the same content), the problem with file-based caching is that it is not synced across the web servers. Performing a simple rsync on the caching directories is error prone; it may work flawlessly for a while, but it is not a stable solution.
The best solution for a cluster is distributed caching, i.e. memcached: a separate server running a memcached instance, with the PHP memcache extension installed on each web server. Each server then checks for the existence of a cached page/item and, if it exists, pulls it from memcached; otherwise it generates the page from the database and then saves it into memcached.
When you are dealing with clusters, you cannot skimp on a good caching mechanism. If you are dealing with clusters, your site already has (or soon will have) more traffic than a single server can handle.
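The check-then-generate-then-save flow described above is the classic cache-aside pattern. A minimal language-neutral sketch (shown in Python, with a dict-backed stand-in for a memcached client, since only get/set are needed):

```python
class FakeMemcache:
    """Stand-in for a memcached client exposing just get/set."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)        # None signals a cache miss

    def set(self, key, value):
        self._store[key] = value


def get_page(key, cache, render):
    """Cache-aside: check the cache first; on a miss, generate and store."""
    page = cache.get(key)
    if page is None:
        page = render(key)     # expensive: e.g. built from the database
        cache.set(key, page)   # save so every web server can reuse it
    return page
```

With a real shared memcached instance, any web server in the cluster that runs get_page after the first render gets the cached copy, which is exactly the cross-server consistency that per-server file caches lack.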
There is a beginner-level cluster environment that can be implemented at relatively low cost. You set up two colocated servers (an nginx load balancer and a memcached server), then, using free shared web hosting, you create accounts for the same domain on those free hosts and install your content. You configure the nginx load balancer to point to the IP addresses of the free web hosts. The free web hosts must have the PHP 5 memcache extension installed, or the solution will not work.
Then you set the DNS for the domain at the registrar to point to the nginx IP (which would be a static IP if you are colocating). Now when someone accesses your domain, nginx forwards the request to one of the web servers in your cluster located on the free hosting.
You may also want to consider a CDN to offload traffic when serving static content.
