I'm about to start a new project, and there's a hosting question that has come up: mirroring the servers and having some kind of backup.
A different team is proposing a mirroring option that has server A with one hosting provider and server B with another provider. They are working on a solution that will detect when server A is down so it can redirect to server B.
At first glance I'm not sure that's possible. As far as I know, both servers would need to be within the same network; otherwise, how can one domain resolve to two servers sitting behind different DNS setups?
I've been doing some research and so far have come up empty-handed, so I was wondering if someone here could offer some input on this issue we are facing.
Thanks!
-----[EDIT]-----
Well, I'll try to clarify it a little bit more. (even for me)
Server A (SA) will be with hosting provider A (HPA).
Server B (SB) will be with hosting provider B (HPB).
Each server has the website and the database installed. SA is supposed to be the primary server and SB would just be there as a backup.
First, there should be some sort of process that keeps the database on SB updated.
So, if and when SA goes down, people visiting the site should be redirected to SB, which has (or should have) an up-to-date copy of the database, so that for visitors the redirection is "transparent".
Our question is whether this can be done through proxies, or load balancers, or just through DNS settings (the domain pointing to several IPs on different servers).
Look into reverse proxy servers. It should be a simple configuration in nginx.
They are typically used for load balancing, or providing backup sites/servers.
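For illustration, a minimal sketch of what that nginx configuration might look like, assuming placeholder IPs for server A and server B; the `backup` flag tells nginx to send traffic to server B only when server A is unavailable:

```nginx
# Sketch only: addresses are placeholders for the two hosting providers.
upstream mirrored_site {
    server 203.0.113.10;          # server A (primary, hosting provider A)
    server 198.51.100.20 backup;  # server B (hosting provider B, used only if A is down)
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://mirrored_site;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Keep in mind the proxy box itself then becomes the thing that has to stay up, which is the same chicken-and-egg issue raised in the next answer.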
Not a perfect solution, but there could be a third server that checks the state of the two servers hosting the project. This is simple, but not perfect, because if that monitoring server itself goes down, the same problem occurs again.
Related
I have one database with one domain, but my database serves 3 websites. I want my 2nd website to publish to that database. Is that possible?
You might want to make sure that you're not violating the terms of service of the company that is hosting your database. Having many outside domains hitting an inside database may put undue stress on that server that the company is not counting on, or eat up more bandwidth than is allotted for that machine.
In the same breath though, if you set up some type of data-layer web service which you can connect to, then your many other domains are not directly hitting the database; they do essentially the same thing, but in a more orderly fashion of predictable database calls. This may not be what you're looking for, but if set up correctly it could make developing against your database much easier.
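As a rough illustration of that idea (route names, the file path, and the query below are made up for the example), a thin data-layer service exposes only the specific, predictable queries the outside sites need instead of raw database access:

```python
# Hypothetical sketch of a "data layer" web service using Flask and SQLite.
# Outside domains call these HTTP endpoints rather than hitting the database directly.
import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)
DB_PATH = "site.db"  # placeholder path to the shared database

def query(sql, params=()):
    with sqlite3.connect(DB_PATH) as conn:
        conn.row_factory = sqlite3.Row
        return [dict(row) for row in conn.execute(sql, params)]

@app.route("/articles/<int:article_id>")
def get_article(article_id):
    # One predictable, parameterised query per endpoint.
    rows = query("SELECT id, title, body FROM articles WHERE id = ?", (article_id,))
    return (jsonify(rows[0]), 200) if rows else ("Not found", 404)

if __name__ == "__main__":
    app.run()
```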
I'm hosting a web application which should be highly-available. I'm hosting on multiple linodes and using a nodebalancer to distribute the traffic. My question might be stupid simple - but not long ago I was affected by a DDoS hitting the data-center. That made me think how I can be better prepared next time this happens.
The nodebalancer and servers are all in the same datacenter which should, of course, be fixed. But how does one go about doing this? If I have two load balancers in two different data centers - how can I setup the domain to point to both, but ignore the one affected by DDoS? Should I look into the DNS manager? Am I making things too complicated?
Really would appreciate some insights.
Thanks everyone...
You have to look at ways to load balance across datacenters. There are a few ways to do this, each with pros and cons.
If you have a lot of DB calls, running two datacenters hot can introduce a lot of latency problems. What I would do is as follows.
Have the second datacenter (DC2) be a warm location. It is configured for everything to work and is constantly getting data from the master DB in DC1, but isn't actively getting traffic.
Use a service like CloudFlare for their extremely fast DNS switching. Have a service in DC2 that constantly pings the load balancer in DC1 to make sure that everything is up and well. When it has trouble contacting DC1, it can connect to CloudFlare via the API and switch the main 'A' record to point to DC2, which then picks up the traffic.
I forget what CloudFlare calls it, but it has a DNS feature that allows you to switch 'A' records almost instantly because the actual IP address given to the public is their own; they just route the traffic for you.
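A rough sketch of that watcher in Python, in case it helps; the health URL, zone/record IDs, token, and IP are placeholders, and the request shape follows CloudFlare's v4 API, so double-check their current docs before relying on it:

```python
# Runs in DC2: if the DC1 load balancer stops responding, repoint the A record at DC2.
import time
import requests

CHECK_URL = "http://dc1-loadbalancer.example.com/health"  # hypothetical health endpoint
CF_RECORD_URL = "https://api.cloudflare.com/client/v4/zones/{zone}/dns_records/{record}"
ZONE_ID, RECORD_ID, API_TOKEN = "your-zone-id", "your-record-id", "your-api-token"
DC2_IP = "198.51.100.20"  # placeholder address of the DC2 load balancer

def dc1_is_up():
    try:
        return requests.get(CHECK_URL, timeout=5).status_code == 200
    except requests.RequestException:
        return False

def point_dns_at_dc2():
    resp = requests.put(
        CF_RECORD_URL.format(zone=ZONE_ID, record=RECORD_ID),
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"type": "A", "name": "example.com", "content": DC2_IP, "ttl": 120},
    )
    resp.raise_for_status()

failures = 0
while True:
    failures = failures + 1 if not dc1_is_up() else 0
    if failures >= 3:  # require a few consecutive failures before flipping
        point_dns_at_dc2()
        break
    time.sleep(30)
```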
Amazon also has a similar feature with CloudFront, I believe.
This plan is costly, however, as you're running much more infrastructure that rarely gets used. Linode is rolling out more network improvements (and will keep doing so), so hopefully this becomes less necessary.
For more advanced load balancing and HA, you can go with more "cloud" providers but it does come at a cost.
-Ricardo
Developer Evangelist, CircleCI, formerly Linode
How would I go about setting up a backup on a VPS like Linode (using nginx/unicorn) to cover Heroku downtimes?
Essentially very simply, but also with a whole world of hurt.
Simply create an instance of your application on said VPS.
Then you need to ensure that you're able to flip your DNS from Heroku to said VPS without waiting for a TTL to expire (see the zone-file sketch after this list), or find some other way of letting the world know your application has moved.
Then figure out a reliable way of ensuring that the code in both environments is exactly the same and works on both server setups.
Then figure out how you can keep the data up to date in both environments so that when you do need to flip, the data will be the same in both environments.
Then you need to figure out a way to remind yourself to keep this secondary VPS up to date from a server management point of view. Software updates, security patches etc etc.
Then you need to figure out a way to be notified 24/7 when Heroku is down.
Then you need to hope that when Heroku is down, Linode isn't.
... or just accept that any host will go down, and it can cost a hell of a lot of money to ensure that your site doesn't. To be honest, it's probably better for you to look at some sort of hosting setup that allows redundancy and failover across several locations (which won't be cheap)
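On the DNS flip mentioned above: one common trick is to keep the record's TTL very low ahead of time so that a change propagates quickly when you do fail over. A sketch of what that might look like in a BIND-style zone file (the names and address are placeholders):

```
; Very low TTL so resolvers pick up a failover change within about a minute.
www   60   IN   CNAME   yourapp.herokuapp.com.   ; normal operation, pointing at Heroku
; on failover, replace it with an A record for the VPS, e.g.:
; www   60   IN   A       198.51.100.20
```

Bear in mind that some resolvers ignore very low TTLs, so this shortens the window rather than eliminating it.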
There are third party services which provide the ability to keep your site (parts of) up if your server goes down - At least it appears to the user that your site is up but it's not working properly behind the scenes. CloudFlare is one such service. It sits in front of your site/application and performs magic (quite simply). It works with static/dynamic sites - and if your server goes offline then they are able to serve static parts of your site. See http://support.cloudflare.com/kb/what-do-the-various-cloudflare-settings-do/what-does-enabling-cloudflare-offline-browsing-do
After researching various hosts, I still get the feeling that it is somewhat impossible to get a host that would never go down.
Maybe these hosts employ redundancy, maybe they don't. In either case, how would one display a friendly message to the user along the lines of "BRB"? What if your host goes down completely for an hour? You would need a way to tell users you'll be back. How do you accomplish that?
I doubt any ISP or hosting provider would do that for you. To achieve that you need very expensive and complicated infrastructure, like redundant fail-safe routers and backbones in addition to servers, of course, and you need multiples of everything. Concepts like Simple Failover require DNS updates, which normally take minutes to hours to propagate, so that's not a 100% solution either. See Joel's article on the subject for a related discussion.
If the host is down and you're on a single server, then you are definitely down. This is a limitation of shared hosting... there's not much you can do about it. You can ask your host if you are hosted on multiple servers for redundancy... if so, then you wouldn't have to worry about it.
If you host your own server, then you could maybe get your hands on Simple Failover and maybe have a cheap Virtual Dedicated server that goes UP when your primary goes down.
OK, every host will have downtime at some point. Your best bet would be to go with someone who has great customer service and can help get your box back up. 99% of the time when your box goes down it's your fault (if you have access to the OS/Apache etc.).
The people at Rackspace are awesome for hosting + customer service. The Rackspace cloud is great, allowing you to create and take down servers instantly. (Slicehost is good for persistent boxes charged by the month, and is also owned by Rackspace.)
As for a way to communicate with your users, I would use Twitter, Tumblr, or a hosted blog service. That way, if your box goes down you can communicate your message via these services, which are most likely on a different host/network.
Most solutions I've read here for supporting subdomain-per-user at the DNS level are to point everything to one IP using *.domain.com.
It is an easy and simple solution, but what if I want to point the first 1000 registered users to serverA and the next 1000 registered users to serverB? This is the preferred solution for us to keep our software and hardware costs for clustering down.
(diagram quoted from the MS IIS site: http://learn.iis.net/file.axd?i=1101)
The most logical solution seems to be one A record per subdomain in the zone data files. BIND doesn't seem to have any size limit on zone data files; they are only restricted by available memory.
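For illustration, the per-subdomain records versus the wildcard approach might look like this in a zone data file (names and addresses are made up):

```
; one A record per registered user
user0001   IN   A   203.0.113.10    ; serverA
user0999   IN   A   203.0.113.10    ; serverA
user1001   IN   A   198.51.100.20   ; serverB
; ...versus the single wildcard entry most answers suggest:
; *        IN   A   203.0.113.10
```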
However, my team is worried about the latency of getting a new subdomain up and ready, since creating a new subdomain consists of inserting a new A record and restarting the DNS server.
Is performance of restarting DNS server something we should worry about?
Thank you in advance.
UPDATE:
It seems like most of you suggest using a reverse proxy setup instead:
(diagram from the MS IIS site: http://learn.iis.net/file.axd?i=1102; ARR is IIS7's reverse proxy solution)
However, here are the CONS I can see:
single point of failure
cannot strategically set up servers in different locations based on IP geolocation.
Use the wildcard DNS entry, then use load balancing to distribute the load between servers, regardless of what client they are.
While you're at it, skip the URL rewriting step and have your application determine which account it is based on the URL as entered (you can just as easily determine what X is in X.domain.com as in domain.com?user=X).
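For example, pulling the account out of the Host header is only a few lines in most stacks; here's a small sketch in Python (the base domain is a placeholder):

```python
# Work out which account "alice.domain.com" refers to, without any URL rewriting.
BASE_DOMAIN = "domain.com"  # placeholder

def account_from_host(host):
    host = host.split(":")[0].lower()                  # strip any port
    suffix = "." + BASE_DOMAIN
    if host == BASE_DOMAIN or not host.endswith(suffix):
        return None                                    # bare domain or unknown host
    return host[: -len(suffix)]                        # "alice" from "alice.domain.com"

assert account_from_host("alice.domain.com:8080") == "alice"
assert account_from_host("domain.com") is None
```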
EDIT:
Based on your additional info, you may want to develop a "broker" that stores which clients are to access which servers. Make that public-facing, then draw from the resources associated with the client as stored with the broker. Your front-end can be load balanced, and then you can pull from the file/db servers based on who the client is.
The front-end proxy with a wild-card DNS entry really is the way to go with this. It's how big sites like LiveJournal work.
Note that this is not just a TCP-layer load balancer - there are plenty of solutions that will examine the host part of the URL to figure out which back-end server to forward the query to. You can easily do it with Apache running on a low-spec server with suitable configuration.
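A hedged sketch of that kind of configuration with Apache's mod_proxy, assuming made-up backend addresses and a simple split of client subdomains between two back-end servers:

```apache
# Front-end proxy: the requested hostname decides which back-end gets the traffic.
# Requires mod_proxy and mod_proxy_http; all names and addresses are placeholders.
<VirtualHost *:80>
    ServerName  clienta.domain.com
    ServerAlias clientb.domain.com
    ProxyPreserveHost On
    ProxyPass        / http://10.0.0.11/
    ProxyPassReverse / http://10.0.0.11/
</VirtualHost>

<VirtualHost *:80>
    ServerName  clientc.domain.com
    ServerAlias clientd.domain.com
    ProxyPreserveHost On
    ProxyPass        / http://10.0.0.12/
    ProxyPassReverse / http://10.0.0.12/
</VirtualHost>
```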
The proxy ensures that each user's session always goes to the right back-end server and most any session handling methods will just keep on working.
Also the proxy needn't be a single point of failure. It's perfectly possible and pretty easy to run two or more front-end proxies in a redundant configuration (to avoid failure) or even to have them share the load (to avoid stress).
I'd also second John Sheehan's suggestion that the application just look at the left-hand part of the URL to determine which user's content to display.
If using Apache for the back-end, see this post too for info about how to configure it.
If you use tinydns, you don't need to restart the nameserver when you modify its database, and it should not be a bottleneck because it is generally very fast. I don't know whether it performs well with 10,000+ entries, though (it would surprise me if it didn't).
http://cr.yp.to/djbdns.html
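For reference, adding a subdomain with tinydns is just appending a line to its data file and rebuilding the .cdb it serves from, with no daemon restart; a sketch with placeholder values:

```
# in tinydns's "data" file, "+fqdn:ip:ttl" adds an A record
+user1001.domain.com:198.51.100.20:300
# then rebuild the constant database tinydns reads (typically "make" in its root directory)
```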