THE FASTEST Smarty Cache Handler - performance

Does anyone know if there is an overview of the performance of different cache handlers for smarty?
I compared Smarty's file cache with a memcache handler, but memcache seemed to have a negative impact on performance.
I figured there would be a faster way to cache than through the filesystem... am I wrong?

I don't have a systematic answer for you, but in my experience the file cache is the fastest. I should clarify that I haven't run any serious benchmarks, but in all the time I've used Smarty, the file cache has worked best.
One thing that definitely improves performance is to disable checking whether the template files have changed (`$smarty->compile_check = false;`). This avoids having to stat the .tpl files on every request.

File caching is fine when you have a single server instance, or a shared drive (NFS) in a server cluster. But in a web server cluster (two or more web servers serving the same content), the problem with file-based caching is that it is not synced across the web servers. Running a simple rsync on the caching directories is error prone: it may work flawlessly for a while, but it is not a stable solution. The best solution for a cluster is distributed caching, i.e. memcache: a separate server runs a memcached instance, and each web server has the PHP Memcache extension installed. Each server then checks whether a cached page/item exists; if it does, it pulls it from memcache, otherwise it generates the page from the database and then saves it into memcached. When you are dealing with clusters, you cannot skimp on a good caching mechanism. If you are dealing with clusters, your site already has (or soon will have) more traffic than a single server can handle.
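The check-then-generate flow described above is the classic cache-aside pattern. Here is a minimal sketch, using a plain dict as a stand-in for the memcached client (a real client such as PHP Memcache exposes the same get/set calls; the function names are made up for illustration):

```python
# Cache-aside lookup: check the shared cache first; on a miss, generate
# the page from the database and store it so the next request (from any
# web server in the cluster) gets a hit.
# A plain dict stands in for the memcached client here.

cache = {}

def render_from_database(page_id):
    # Placeholder for the expensive database query + template render.
    return f"<html>page {page_id}</html>"

def get_page(page_id):
    key = f"page:{page_id}"
    html = cache.get(key)       # 1. check memcached
    if html is None:            # 2. miss: generate from the database
        html = render_from_database(page_id)
        cache[key] = html       # 3. save into memcached for next time
    return html
```

With a real memcached you would also set an expiry on the key, so stale pages eventually age out even if invalidation is missed.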
There is a beginner-level cluster environment that can be implemented at relatively low cost. You set up two colocated servers (an nginx load balancer and a memcached server), then, using free shared web hosting, you create accounts for the same domain on those free hosts and install your content. You configure your nginx load balancer to point to the IP addresses of the free web hosts. The free web hosts must have the PHP memcache extension installed or the solution will not work.
Then you set your DNS for the domain with the registrar to point at the nginx IP (which would be a static IP if you are colocating). Now when someone accesses your domain, nginx proxies the request to one of the web servers in your cluster hosted on the free hosting.
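As a rough illustration of that setup, the load-balancer side would look something like this (the IP addresses and domain here are placeholders):

```nginx
# nginx.conf fragment on the colocated load balancer
upstream webfarm {
    server 203.0.113.10;   # free web host A
    server 203.0.113.20;   # free web host B
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://webfarm;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```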
You may also want to consider a CDN to offload traffic when serving the static content.


Where should I install varnish?

So my web app is hosted on amazon using Opswork.
Currently I have a 1 dedicated instance for Postgresql, 1 instance as my webserver, and another dedicated instance running Redis for caching purposes.
I would like to improve performance by adding Varnish. Given my architecture, where should I install Varnish? Also take into account that I may soon outgrow this solution and be running more web servers behind a load balancer.
Any help would be appreciated!
Bye
Varnish will always be quicker if you run it with memory storage, so the machine with the most free memory would be a good pick. Even if you don't have enough to spare for the storage, Varnish also uses a fair amount of memory for connection handling once you get a bit more traffic.
Further down the road, when you want a load balancer, a good start would be a dedicated server for Varnish, which can also handle load balancing just fine. It's not as efficient as a lightweight dedicated load balancer, but until you need multiple Varnish servers (way down the road) there is generally no point in using anything else in front of it.
You should put Varnish in front of the Apache web server. However, it's fine for Varnish to reside on the web server itself, with the load balancers pointing at Varnish.
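To illustrate the "Varnish can also do load balancing" point: a single Varnish instance can round-robin across back ends itself. A minimal VCL sketch (Varnish 4+; the host addresses are placeholders):

```vcl
vcl 4.0;
import directors;

backend web1 { .host = "10.0.0.11"; .port = "80"; }
backend web2 { .host = "10.0.0.12"; .port = "80"; }

sub vcl_init {
    # One round-robin director over both web servers.
    new lb = directors.round_robin();
    lb.add_backend(web1);
    lb.add_backend(web2);
}

sub vcl_recv {
    set req.backend_hint = lb.backend();
}
```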

Changes to VMs in Availability Set and Load Balancing

I have gone through the whole process of testing and setting up two or even three VMs under availability sets and load-balanced endpoints, and I have verified that the different VM instances get served when accessing the domain (I put a different title on each instance of a CMS web site to test availability). The main reason I am looking into this is that the current VM/web site has had problems when Windows ran its periodic updates, which at times stopped FTP or changed server settings.
While this is working almost the way I thought it would, my question is about what happens when a client, who this will be set up for, makes changes to the CMS web site. My concern is that any change would apply to only one VM instance in the availability set, and since the load balancer serves different instances, each VM could end up with a different set of changes.
What I am trying to determine, without finding anything concrete, is whether there is a way to set up a shared network or system that mirrors changes to each VM so that the web site stays consistent, or whether using an availability set for the current VM and web site is even still applicable.
If anyone can give me some insight that would be great.
Is using the server's file system necessary for the CMS software? Could the CMS software read/write content to/from a database instead?
If using the server file system is the only option, you could probably set up a file share on one server that all the other servers work against. This creates the problem, though, that if the primary server (the one containing the file share) goes down for some reason, the site goes down with it.
Another option could be to leverage Web Deploy to help publish the content changes. Here are two blog posts that discuss this further:
http://www.wadewegner.com/2013/03/using-windows-azure-virtual-machines-to-publish-and-synchronize-a-web-farm/
http://michaelwasham.com/2012/08/13/publishing-and-synchronizing-web-farms-using-windows-azure-virtual-machines/
This really depends on the CMS system you're using.
Some CMS systems, especially modern ones, will persist settings in some shared storage, like SQL Server database and thus any actions that users make to the CMS will be stored in this shared storage and available to all web servers that are housing the CMS.
Other CMS systems may not be compatible with load-balanced web servers. Doing file sharing/replication/etc of the files stored on local servers may or may not work, depending on the particular CMS and its architecture. I would really try to avoid this approach.
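The drift problem described above, each instance keeping its own copy of CMS changes versus all instances reading shared storage, can be sketched like this (the class and method names are hypothetical, purely for illustration):

```python
# Two web servers behind a load balancer. With per-server local storage,
# a CMS edit made through one instance is invisible to the other; with a
# shared store (e.g. a SQL database), both instances see the same state.

class WebServer:
    def __init__(self, shared_store=None):
        # Fall back to a private per-server dict when no shared store is given.
        self.store = shared_store if shared_store is not None else {}

    def save_setting(self, key, value):
        self.store[key] = value

    def read_setting(self, key):
        return self.store.get(key)

# Local storage: the edit only lands on server A.
a, b = WebServer(), WebServer()
a.save_setting("site_title", "New Title")
print(b.read_setting("site_title"))   # None: B never saw the change

# Shared storage: both servers read the same state.
shared = {}
a, b = WebServer(shared), WebServer(shared)
a.save_setting("site_title", "New Title")
print(b.read_setting("site_title"))   # "New Title"
```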

Replicated cache system to each http servers

I recently set up memcached for a PHP site with a lot of traffic. Before that we used APC, but APC lacks a unified cache: invalidating a key on one server doesn't invalidate it on the others.
I noticed a big difference depending on whether memcached runs on the same machine as the HTTP server or on a separate one:
http + memcached on the same server -> 0.06 s average to deliver a page
http and memcached on different servers (but behind NAT) -> 0.15-0.20 s to deliver a page
So it's a huge difference, and I am wondering whether it wouldn't be better to have the cache system on the same machine as HTTP. The additional complexity is that the website is served by a couple of HTTP servers (behind a load balancer). So what I actually need is a cache system with replication: each HTTP server holding a cache "copy" and writing changes only to the "master" (or some other approach that does something similar).
There are a couple of such systems (Couchbase, Redis, and so on). I think Couchbase is not a good fit for this, as it won't allow connecting to the local cache server but only to the "gate". Redis may work; I am still checking the others.
The main thing is: has someone tried this approach to speed up a website? That is, keeping a cache "copy" on each machine, kept in sync with the others?
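The topology in the question (reads from a local copy, writes through a master that pushes to every replica) can be sketched in a few lines. Dicts stand in for the cache servers, and the class names are made up for illustration; a real setup would use e.g. Redis replication instead:

```python
# Write-through-master, read-local replication. Every web server holds
# a full local copy; writes go to the master, which propagates them to
# all attached replicas.

class MasterCache:
    def __init__(self):
        self.data = {}
        self.replicas = []

    def attach(self, replica):
        self.replicas.append(replica)
        replica.data = dict(self.data)   # initial sync of existing keys

    def set(self, key, value):
        self.data[key] = value
        for r in self.replicas:          # push the change to every web server
            r.data[key] = value

class LocalReplica:
    def __init__(self, master):
        self.master = master
        self.data = {}
        master.attach(self)

    def get(self, key):                  # reads never leave the machine
        return self.data.get(key)

    def set(self, key, value):           # writes always go via the master
        self.master.set(key, value)

master = MasterCache()
web1, web2 = LocalReplica(master), LocalReplica(master)
web1.set("page:home", "<html>...</html>")
print(web2.get("page:home"))             # the write is visible on web2
```

The trade-off is the usual one: reads are as fast as local memory, but writes fan out to every node and the master is a single point of failure.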
You can use the GigaSpaces XAP solution, which is a distributed in-memory data grid that also has a Jetty integration, allowing you to deploy your web app and manage it from a single management system. The central distributed data grid (which can be used as a simple cache) can have a local cache on each web container that is kept in sync with the main cache. You don't have to use the Jetty integration for this: you can keep your own web container and just create a proxy to the distributed cache with an embedded local cache, via code. Alternatively, you can have a fully replicated topology between the web containers without a main distributed cache, where each web container holds a full copy of the entire cache, kept in sync with the other web container instances.
You can read more in:
http://wiki.gigaspaces.com/wiki/display/SBP/Web+Service+PU
http://wiki.gigaspaces.com/wiki/display/XAP9/Web+Jetty+Processing+Unit+Container
http://wiki.gigaspaces.com/wiki/display/XAP9/Client+Side+Caching
Disclaimer: I am a developer working for GigaSpaces.

Amazon EC2 + Windows Server 2008 + Memcached = how?

We are building a system that would benefit greatly from a distributed caching mechanism like Memcached. But I can't get my head around how the memcached daemons and clients find each other in an Amazon data center. Do we manually set up the IP addresses of each memcache instance (they won't be dedicated; they will run on the web servers or worker boxes), or is there an automagic way of getting them to talk to each other? I was looking at Microsoft Windows Server AppFabric Caching, but it seems to need either a file share or a domain to work correctly, and I have neither at the moment... Given that internal IP addresses are transient on Amazon, I am wondering how you get around this...
I haven't set up a cluster of memcached servers before, but Membase is a solution that could take away all of the pain you are experiencing with memcached. Membase is basically memcached with a persistence layer underneath, and it comes with great cluster-management software. Clustering servers together is easy, since all you need to do is tell the cluster the IP address of the new node. If you already have an application written for memcached, it will also work with Membase, since Membase speaks the memcached protocol. It might be worth taking a look.
I believe you could create an Elastic IP in EC2 for each of the boxes that hold your memcached servers. These Elastic IPs can be dynamically mapped to any EC2 instance. Then your memcached clients just use the Elastic IPs as if they were static IP addresses.
http://alestic.com/2009/06/ec2-elastic-ip-internal
As you seem to have discovered, Route 53 is commonly used for these discovery purposes. For your specific use case, however, I would just use Amazon ElastiCache. Amazon offers both memcached- and Redis-compatible versions of ElastiCache and manages the infrastructure for you, including providing a DNS entry point. Also, for managing things like ASP.NET session state, you might consider this article on the DynamoDB session state provider.
General rule of thumb: if you are developing a new app, try to leverage what the cloud provides rather than building it yourself; it'll make your life way simpler.

Should I host Website and REST API on the same server or split?

I have a web application that consists of Website and REST API. Should I host them on the same server or should I host them on different servers? By "server" I mean a server cluster - several servers behind load balancer.
The API is mostly inbound traffic; the website is mostly outbound.
If it matters - hosted on Rackspace and/or AWS.
Here is what I see so far:
Benefits of having Website and REST API on the same server
Simple deployment
Simple scaling - something is slow - just launch another instance
Single load balancer configuration
Simple monitoring
Simple, simple, simple ...
Effective use of full duplex network (API - inbound, website - outbound)
Benefits of splitting
API overload will not affect website load time
Detailed monitoring (I will know which component uses resources at this moment)
Any comments?
Thank you
Alexander
Just as you stated, in most situations there are more advantages to hosting the API on the same server as the website, so I would stick with that option.
But if you expect a lot of traffic to either the website or the API, then a separate server might be a better fit.
If this is behind a load balancer, why not leave the services and pages on the same site and let the load balancer/cluster do its job?
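To expand on that: even on one cluster, you can keep the option of splitting later by routing on path at the balancer, so the API and website only get separate pools when you actually need them. A hypothetical nginx fragment (the upstream name and addresses are made up):

```nginx
upstream app {
    server 10.0.0.11;
    server 10.0.0.12;
}

server {
    listen 80;

    # Today both locations point at the same pool; splitting the API out
    # later is just a matter of pointing /api/ at a different upstream.
    location /api/ {
        proxy_pass http://app;
    }

    location / {
        proxy_pass http://app;
    }
}
```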
Your list of advantages/disadvantages are operational considerations, but you should consider application needs as well.
Caching?
Security?
Other resources, e.g. the filesystem
These may or may not apply, but if your application architecture is different between the two, be sure to factor this into your decision.