Changes to VMs in Availability Set and Load Balancing - Windows

I have gone through the whole process of testing and setting up two or even three VMs under availability sets and load-balanced endpoints, and I can see the different VM instances being served when I access the domain, since I put a different title on each instance of a CMS web site to test the availability. The main reason I am looking into this is that the current VM/web site has had problems when Windows ran its periodic updates, which at times stopped FTP or changed server settings.
While this is working almost the way I thought it would, my question is about what happens when the client this will be set up for makes changes to the CMS web site. My thought is that if they make changes in the CMS, those changes apply only to the one VM instance that served their request, and since the load balancer spreads requests across the instances, each VM in the availability set could end up with a different set of changes.
What I am trying to determine, but not finding anything concrete on, is whether there is a way to set up a shared network location or some system that mirrors any changes to each VM so that the web site stays consistent, or whether an availability set is even still appropriate for the current VM and web site.
If anyone can give me some insight that would be great.

Is using the server's file system necessary for the CMS software? Could the CMS software read/write content to/from a database instead?
If using the server file system is the only option, you could probably set up a file share on one server that all the other servers would work against. This creates the problem, though, that if the primary server (the one containing the file share) goes down for some reason, the site goes down with it.
Another option could be to leverage Web Deploy to help publish the content changes. Here are two blog posts that discuss this further:
http://www.wadewegner.com/2013/03/using-windows-azure-virtual-machines-to-publish-and-synchronize-a-web-farm/
http://michaelwasham.com/2012/08/13/publishing-and-synchronizing-web-farms-using-windows-azure-virtual-machines/

This really depends on the CMS system you're using.
Some CMS systems, especially modern ones, persist settings in some shared storage, like a SQL Server database, so any change a user makes in the CMS is stored in that shared storage and is available to all of the web servers hosting the CMS.
Other CMS systems may not be compatible with load-balanced web servers. File sharing/replication/etc. of the files stored on each local server may or may not work, depending on the particular CMS and its architecture. I would really try to avoid this approach.

Related

Setting Up NCache (Distributed?/Shared)

I have two servers where I will be deploying the same application. Basically these two servers will handle work from a common Web API; the work that is handed out gets transformed, goes through some logic, and is loaded into a DB. I want to cache the data that gets loaded/updated or deleted in the database, so that when the same data is referenced I can get it from the cache (that roughly explains the caching mechanism). Now I am using NCache and it is working perfectly fine within one application. I am trying to have a kind of shared cache that both my applications can access. How do I go about doing it?
NCache is a distributed cache so you can continue to use that.
There is good general documentation available and very good getting started material that walks you through all the steps required.
In essence you install NCache on both the servers and then reference both servers in your client configuration (%NCHOME%\config\client.ncconf)
In clustered caches, a single logical cache instance is distributed over multiple server nodes, and because the cache process runs outside the application address space, multiple applications can share the cache and see exactly the same data changes: additions, removals, and updates of the cache content.
Local out-of-process caches are limited to one server node, but since they also live outside the application address space, they too support sharing data between applications.
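To make the "out-of-process cache shared by multiple applications" idea concrete, here is a minimal Python sketch. It uses memcached's pymemcache client purely as a stand-in because it is easy to run anywhere; with NCache you would call its own client API against the cluster defined in client.ncconf, but the pattern of one process writing and another process reading the same shared cache is the same. The hostname and keys are placeholders.

```python
from pymemcache.client.base import Client

# Shared out-of-process cache; the hostname is a placeholder.
cache = Client(("cacheserver.example.com", 11211))

# --- application A (running on server 1): store the result of its work ---
cache.set("job:1001:status", b"transformed-and-loaded", expire=600)

# --- application B (running on server 2): read what application A wrote ---
status = cache.get("job:1001:status")   # b'transformed-and-loaded' on a hit, None on a miss
if status is not None:
    print(status.decode("utf-8"))
```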
In fact, besides allowing multiple applications to share data, NCache supports a pub/sub infrastructure to allow for multiple applications to actually communicate with each other. This allows NCache to play a key part in setting up a fast and reliable microservices environment wherein all the participating services send messages to each other through the NCache platform.
See the links below, where they have shared information about NCache topologies and getting started:
http://www.alachisoft.com/resources/docs/ncache/admin-guide/cache-topologies.html
http://www.alachisoft.com/resources/videos/five-steps-getting-started.html

How to host sites on a single server?

I have five websites that I designed and now manage on a month-to-month basis. Currently, each website is hosted individually via HostGator. I am realizing this is the improper (and costly) way to manage multiple websites, and I am curious how I could move the websites to a single server, and which hosts you guys find reliable.
Below is a snapshot of one site's usage; these are all static sites that are quite small. How much space would I need on my new, single server to accommodate 20 of these websites?
Current site usage:
http://imgur.com/18BvsC2
Your image shows you are using 6.7 megabytes of data for one website. If the space usage is similar for all 20 of your anticipated domains, that is only about 6.7 MB × 20 ≈ 134 MB, which is virtually nothing as far as storage goes these days. Most entry-level virtual hosting plans come with more than enough to meet your 20-domain expectation at that kind of usage.
You want virtual hosting regardless. Most web hosting providers have plans that allow you to host many domains, including HostGator. Here is a link to compare their plans: http://www.hostgator.com/shared-compare
I've used DreamHost and HostMonster in the past, with nothing bad to say about them.
Perhaps you should brush up more on the pros, cons and hows of web hosting. Here is a link I just googled that might get you started. http://www.webhostingsecretrevealed.net/web-hosting-beginner-guide/

Linode backup for Heroku

How would I go about setting up a backup for Heroku downtime on a VPS like Linode? (using nginx/unicorn)
Essentially very simply, but also with a whole world of hurt.
Simply create an instance of your application on said VPS.
Then you need to ensure that you're able to flip your DNS from Heroku to said VPS without waiting for a TTL to expire, or have some other way of letting the world know your application has moved.
Then figure out a reliable way of ensuring that the code in both environments is exactly the same and works on both of the different server setups.
Then figure out how you can keep the data up to date in both environments so that when you do need to flip, the data will be the same in both environments.
Then you need to figure out a way to remind yourself to keep this secondary VPS up to date from a server management point of view. Software updates, security patches etc etc.
Then you need to figure out a way to be notified when Heroku is down, 24/7 (a rough monitoring sketch follows below).
Then you need to hope that when Heroku is down, Linode isn't.
... or just accept that any host will go down, and it can cost a hell of a lot of money to ensure that your site doesn't. To be honest, it's probably better for you to look at some sort of hosting setup that allows redundancy and failover across several locations (which won't be cheap)
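For the "be notified when Heroku is down" step, even a crude poller gets you most of the way there. A minimal sketch, assuming your app exposes a health-check URL and that you wire in your own alert or DNS-flip action; the URL and interval are placeholders:

```python
import time
from urllib.request import urlopen, Request
from urllib.error import URLError, HTTPError

APP_URL = "https://your-app.herokuapp.com/health"  # placeholder health-check endpoint
CHECK_EVERY = 60                                    # seconds between checks

def is_up(url, timeout=10):
    """Return True if the app answers with a 2xx/3xx status."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (HTTPError, URLError, OSError):
        return False

if __name__ == "__main__":
    while True:
        if not is_up(APP_URL):
            # Replace this with your real alert (email, SMS, pager) or DNS-flip hook.
            print("Heroku app looks down; time to fail over to the Linode box")
        time.sleep(CHECK_EVERY)
```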
There are third-party services that can keep your site (or parts of it) up if your server goes down; at least it appears to the user that your site is up, even though it's not working properly behind the scenes. CloudFlare is one such service. It sits in front of your site/application and, to put it simply, performs magic. It works with static/dynamic sites, and if your server goes offline they are able to serve the static parts of your site. See http://support.cloudflare.com/kb/what-do-the-various-cloudflare-settings-do/what-does-enabling-cloudflare-offline-browsing-do

THE FASTEST Smarty Cache Handler

Does anyone know if there is an overview of the performance of different cache handlers for smarty?
I compared smarty file cache with a memcache handler, but it seemed memcache has a negative impact on performance.
I figured there would be a faster way to cache than through the filesystem... am I wrong?
I don't have a systematic answer for you, but in my experience, the file cache is the fastest. I should clarify that I haven't done any serious performance tests, but in all the time I've used Smarty, I have found the file cache to work best.
One thing that definitely improves performance is to disable checking whether the template files have changed. This avoids having to stat the .tpl files.
File caching is fine when you have a single server instance, or when you use a shared drive (NFS) in a server cluster, but when you have a web server cluster (two or more web servers serving the same content), the problem with file-based caching is keeping the cache in sync across the web servers. Running a simple rsync on the caching directories is error prone; it may work flawlessly for a while, but it is not a stable solution.
The best solution for a cluster is distributed caching, i.e. memcached: a separate server runs a memcached instance, and each web server has the PHP Memcache extension installed. Each web server then checks for the existence of a cached page/item and, if it exists, pulls it from memcached; otherwise it generates the page from the database and then saves it into memcached. When you are dealing with clusters, you cannot skimp on a good caching mechanism. If you are dealing with clusters, your site already has (or soon will have) more traffic than a single server can handle.
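As a rough illustration of that check-the-cache-first flow (not Smarty-specific, since Smarty's own cache handlers are PHP), here is a small Python sketch using pymemcache. The hostnames are placeholders and the database/template step is a stand-in; every web server in the cluster runs the same logic against the same memcached node(s), so they all serve the same cached copy.

```python
from pymemcache.client.hash import HashClient

# Every web server points at the same memcached node(s); hostnames are placeholders.
cache = HashClient([("cache1.example.com", 11211), ("cache2.example.com", 11211)])

def render_from_database(page_id):
    """Stand-in for the real DB query and template render."""
    return f"<html><body>page {page_id}</body></html>"

def fetch_page(page_id):
    key = f"page:{page_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached.decode("utf-8")                     # hit: same copy on every web server
    html = render_from_database(page_id)                   # miss: rebuild from the database
    cache.set(key, html.encode("utf-8"), expire=300)       # share the rendered page with the cluster
    return html
```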
There is a beginner-level cluster environment that can be implemented for a relatively low cost. You can set up two colocated servers (an nginx load balancer and a memcached server), then, using free shared web hosting, create accounts for the same domain on those free hosts and install your content there. You configure your nginx load balancer to point to the IP addresses of the free web hosts. The free web hosts must have the PHP 5 Memcache extension installed or the solution will not work.
Then you set your DNS for the domain with the registrar to point at the nginx IP (which would be a static IP if you are colocating). Now when someone accesses your domain, nginx passes the request to one of the web servers in your cluster located on the free hosting.
You may also want to consider a CDN to offload traffic when serving the static content.

100mbps Dedicated Server same download speed as a Shared Host!

I have two specs from two different hosts I am using:
(a) Dedicated server with a full-duplex 100 Mbit internet connection ($140 per month)
(b) Shared host on a server that has a 100 Mbit internet connection ($7 per month)
I have tested my application, which downloads files from other servers and in turn lets users download them from my site. I have tested this again and again and both hosts take the same time to download the files! But the dedicated server is much faster in the final download to the client's computer.
Firstly, are there any Linux commands or tools I can use to test bandwidth properly for each server?
Secondly, why the hell do they have the same download speed from other servers??
Please shed some light on this as I feel I've been wasting money for no reason!!
Thanks all
First, you can use iperf to test your network speeds. Second, you're not paying for the speed, you're paying for the power and flexibility of having essentially your own server configured however you want. With a shared host, your site is most likely on a machine with a hundred other sites, each competing for resources.
Also, the bottleneck is probably not on your end or on your host's end, but rather somewhere in between the content you're fetching and your servers.
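iperf (with an iperf server running on a box you control at the other end) is the proper tool for measuring raw bandwidth. As a rougher sanity check you can also just time the same large download from each server; a quick sketch, with the URL as a placeholder for any big test file reachable from both machines:

```python
import time
from urllib.request import urlopen

def download_mbps(url, chunk_size=256 * 1024):
    """Time a full download of `url` and return the average speed in Mbit/s."""
    start = time.time()
    total_bytes = 0
    with urlopen(url) as resp:
        while True:
            chunk = resp.read(chunk_size)
            if not chunk:
                break
            total_bytes += len(chunk)
    elapsed = time.time() - start
    return (total_bytes * 8) / elapsed / 1_000_000

if __name__ == "__main__":
    # Placeholder URL: use the same large file from both servers and compare.
    print(f"{download_mbps('http://example.com/testfile.bin'):.1f} Mbit/s")
```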
If I read correctly, your shared server is just as fast as the dedicated one when fetching a file, but much slower when serving it.
I'd say that the box your shared server is on has its "out" bandwidth mostly used up by the other clients' slices, while the "in" bandwidth is mostly unused, so you get almost full performance there.
That sounds right, since serving files is a much more common task than fetching them.
The big difference between shared hosting and dedicated hosting is that with dedicated hosting, you're the only account using that box. With shared hosting, there could be (and most likely are) thousands of other web sites hosted on it.
If one of those sites goes wonky and takes the whole box with it, your site goes too. On a dedicated box, the only site that's going to go wonky is yours.
With dedicated, you probably also have full admin rights to the box, which you probably don't have with the shared host.
Well, one possibility is that the shared site is on a host that has very little load from the other shared sites. If all the shared sites are just sitting there getting very few hits, then your site basically gets full use of the box, so it's no different from a dedicated box.
But if those other sites start getting traffic, your site will be impacted.
Not sure if the shared site is full duplex or not, but that doesn't always make a difference (not an expert there).
Perhaps the servers they are downloading from are the bottleneck? You could have a dedicated gigabit pipe but it won't help if you can only get 10mbps from the other servers.
Remember the benefit of a dedicated host is that your performance will not be affected by other processes on the machine. The extra money guarantees you that 100mbit, not that you'll see better performance than a hosted machine at any given time.
