Does the Cloud solve the hosting location dilemma?

My startup is located in Europe where most of our current users are.
I'm looking for a host that will allow us to scale to the US and Asia without latency taking its toll on performance.
Does the cloud solve the distance = latency problem?
If not, where would be the ideal hosting location for a growing startup?
Some data:
ASP.NET 3.5
SQL Server 2005
jQuery (lots of Ajax)
MVC
Thanks

The Cloud is just an abstraction. It doesn't affect the underlying physical nature of the servers running your code and hosting your data. If the systems storing your data are a long way from your users, there will be some latency, no matter how you access them.
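To put rough numbers on it: light in optical fibre travels at about 200,000 km/s (roughly two-thirds of c), so a hard lower bound on round-trip time is

    RTT >= 2 x distance / 200,000 km/s

For a user ~6,000 km away (Europe to the US east coast; actual fibre routes are usually longer), that is at least 2 x 6,000 / 200,000 s = 60 ms before your server does any work at all. No hosting product can engineer that away; you can only move data closer to users.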
Most Cloud providers allow you to choose where you want your data - for example, Amazon S3 lets you choose to store your data in either the US or Europe - but no provider is going to be able to magically store all your data in multiple locations simultaneously.
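As an aside, here is what that region choice looks like in today's AWS SDK for Python (boto3) - a minimal sketch, with the bucket name and region as placeholder choices:

    import boto3

    # Create an S3 client pinned to the EU (Ireland) region.
    s3 = boto3.client("s3", region_name="eu-west-1")

    # Objects in this bucket will physically live in eu-west-1 until
    # you move or re-replicate them yourself.
    s3.create_bucket(
        Bucket="my-startup-assets",  # placeholder name
        CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
    )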
If you want the benefit of multiple data centres, you'd have to allow simultaneous updates at each location, and there is no way to synchronise such updates without knowledge of the application's business logic, so you're going to have to write some code to do this.
You're still going to have to look at what each Cloud provider offers and work out how each can help solve your problems, but you're going to have to do some work yourself.

What you're looking for is CDN (Content Delivery Network) hosting. With a CDN, your content is cached on various POPs (points of presence) located across the continents. So if a request comes from India, the cached copy stored on an Indian POP is served; the same goes for US, EU and other clients.
This technology is still in an early phase of development, and there are two types of CDN technology - PUSH & PULL. PUSH means content is pushed to the POPs immediately whenever there is a change on the master server; PULL means the POP servers pull content from the master server at a regular interval, usually 12 to 24 hours.
If your site is database-driven and frequently updated, a PUSH-based CDN is the right choice.
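One nuance worth adding: with most pull CDNs the refresh interval is governed by the cache headers your origin sends, so you can shorten it for frequently updated pages. A minimal sketch of the idea, using Flask purely as an assumed example framework:

    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/article/<int:article_id>")
    def article(article_id):
        resp = make_response(f"<h1>Article {article_id}</h1>")  # stand-in for real rendering
        # Downstream caches (including a pull CDN edge) may keep this
        # response for at most 5 minutes before re-fetching the origin.
        resp.headers["Cache-Control"] = "public, max-age=300"
        return resp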

Related

Which t3 EC2 instance should I pick when launching a Spring/Angular web application?

I have built a Spring Boot/Angular web application that uses a MySQL database for storage. The web application's main purpose is to be a social-media-style website for gardeners. In addition, it has a couple of tools that let the user generate a personalized planting calendar based on the monthly average temperature curve of the region where the user lives. Alternatively, the user can generate a personalized planting calendar based on planting journals made by other users who live within a certain radius of the user doing the generating. I am using Hibernate Search for this.
I do not expect to get millions of visits in the first months after launching the web application, so my question is: what would be the best EC2 instance type to start out with? Could a t3.micro support an application like that for the first month or two? Also, how will I know when the current instance type can no longer handle the incoming traffic without lag, so that I need to upgrade to a bigger instance like t3.medium or t3.large?
Thank you
Whether an instance is suitable depends on many things. In my experience, a micro instance is not enough for many use cases.
My suggestion is to start with a t3.small instance and start gathering metrics in CloudWatch to establish your baseline over a few days. Then decide whether it is enough or not.
If you are exhausting your resources, you can upgrade to a bigger instance. However, since your app runs on Java, I think a medium size is the realistic minimum.
As for the lag: the first suggestion is to put CloudFront in front of the EC2 instance, at least for all your static content (suggestion: put your static content on S3; don't let EC2 serve it). Beyond that, I think the only option is to rely on some third-party performance tool external to AWS.
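To make the CloudWatch baseline concrete, here is a sketch with boto3 (instance ID and region are placeholders). On burstable t3 types, CPUCreditBalance is worth watching alongside CPUUtilization, because exhausting CPU credits is what typically shows up as lag:

    from datetime import datetime, timedelta, timezone

    import boto3

    cw = boto3.client("cloudwatch", region_name="eu-west-1")  # placeholder region
    now = datetime.now(timezone.utc)

    for metric in ("CPUUtilization", "CPUCreditBalance"):
        stats = cw.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName=metric,
            Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
            StartTime=now - timedelta(days=3),
            EndTime=now,
            Period=3600,                 # one datapoint per hour
            Statistics=["Average"],
        )
        for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
            print(metric, point["Timestamp"], round(point["Average"], 2))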
By the way, I built the same kind of app on iOS many years ago, with a support website hosted on AWS. Now the app is gone and the website is unmaintained :-)

How to create a Windows Azure application hosted in different datacenters

I'm trying to figure out how to scale a Windows Azure app, where there are some web roles and some worker roles.
The objective is to have some instances in a US datacenter and some others in a Europe datacenter, so that users in America and Europe each get the best response time. My problem is how to replicate all my storage (for users in Europe who travel to America and vice versa), and also to cope with trouble in one datacenter.
So far, I understand that it's possible to use Traffic Manager to route each user to the closest datacenter.
I know I can replicate data between databases with SQL Data Sync.
The blob storages can also be replicated using the Copy Blob API.
I understand the queues cannot be automatically replicated but I don't have much problem with that.
My problem is I cannot find a way to replicate table storages.
As a matter of fact I really don't know if this is the best strategy for my problem...
Thank you.
DX - you are right on with Traffic Manager and Data Sync; those are the best options for roles & SQL. However, BLOBs are much easier - enable the CDN and your BLOBs are replicated across 24 data centers automatically. Read Using CDN for Windows Azure for how to set up the CDN from your primary Storage account.
For table storage, I would handle this programmatically: keep a list of the Table connections and then use a parallel foreach to insert into the different data centers.
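The parallel foreach above is .NET; as a rough sketch of the same fan-out idea in Python (azure-data-tables, with connection strings and table name as placeholders):

    from concurrent.futures import ThreadPoolExecutor

    from azure.data.tables import TableClient

    # One connection string per datacenter - placeholders.
    CONNECTION_STRINGS = [
        "<us-datacenter-connection-string>",
        "<europe-datacenter-connection-string>",
    ]

    def insert_everywhere(entity):
        """Write the same entity to the 'users' table in every datacenter."""
        def insert(conn_str):
            client = TableClient.from_connection_string(conn_str, table_name="users")
            client.create_entity(entity)

        with ThreadPoolExecutor() as pool:
            futures = [pool.submit(insert, cs) for cs in CONNECTION_STRINGS]
            for future in futures:
                future.result()  # re-raise if any datacenter write failed

    insert_everywhere({"PartitionKey": "eu", "RowKey": "42", "Name": "Alice"})

Note this gives you no transactionality across datacenters; if one write fails, you have to decide whether to retry it or roll back the others.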
We maintain a different Service Configuration file for each Data Center to simplify deployment.
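On the Copy Blob point from the question: it is a single server-side operation, so Azure moves the bytes between accounts itself. A sketch with the current Python SDK (azure-storage-blob); the URL, names and connection string are placeholders:

    from azure.storage.blob import BlobClient

    # Source blob in the US storage account. The URL must be readable by
    # the destination service, e.g. public or carrying a SAS token.
    source_url = "https://usaccount.blob.core.windows.net/assets/logo.png"

    # Destination blob in the Europe storage account.
    dest = BlobClient.from_connection_string(
        conn_str="<europe-account-connection-string>",
        container_name="assets",
        blob_name="logo.png",
    )

    # Starts an asynchronous server-side copy; nothing flows through
    # your own machine.
    result = dest.start_copy_from_url(source_url)
    print(result["copy_status"])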

Migrate Azure Web Site to Azure Cloud Service

I have a project and I'm planning to start the web app as an Azure Web Site and then migrate it to an Azure Cloud Service (also called Hosted Service) if it is needed as a scale strategy.
The decision is because I read that Azure Web Sites are more simple and fast to develop with almost no Azure-specific configurations or code. So starting fast and simple is a good starting point for the project.
But, is that a good starting point for you?
Is migrating an Azure Web Site to an Azure Cloud Service the same as migrating a normal ASP.NET website to an Azure Cloud Service?
Would you start with an Azure Cloud Service right from the beginning? If yes, why?
Thanks for your time.
There are benefits to both deployment models; it will eventually come down to what you are trying to achieve and, ultimately, the success of your application.
Below I've outlined the pros and cons of each model to ensure that you're making the right choice for your application's goals.
Windows Azure Web Sites
You have correctly identified that Windows Azure Web Sites is a great starting point for an application; consider also that Web Sites offers enough scalability for many solutions.
Pros
10 Free sites during preview [Free for 12 months]
Easy Deployment (use Git, TFS, Web Deploy or FTP)
Quick Scalability (You can move to your own dedicated cluster [aka reserved standard])
Simple Development (Supports Classic ASP, ASP.NET, Node.js, Python & PHP)
Persistent Environment (most people are used to this)
Cons
No SSL Support on Custom Domains
In preview (currently no SLA)
Windows Azure Cloud Services
Cloud Services (formerly known as Hosted Services) is definitely the vision for the future of web applications. It is built with resiliency in mind, keeping the cost of applications affordable by scaling to meet demand and dialing capacity back when your traffic slows.
Pros
Increased control over the cost of your application (if architected correctly)
Flexibility (You have full control over the environment)
SSL Support
Language Agnostic
Web Server Agnostic (although IIS is available by default)
Auto Management of Servers
Cons
Architecture should be carefully considered
Deployment time is slower (Slows development cycle)
Things to consider for Portability
The items above should give you enough to plan the immediate future of the application, and it is very likely that you will want to consider Cloud Services eventually (it fits a number of application scenarios better in the long run).
Here is a list of things to help portability from Web Sites to Cloud Services:
Start thinking Stateless
Windows Azure Web Sites is nice in that it is a persistent environment, which means you are able to store things like session state and assets on disk.
Although this is a good feature, it's best to start planning towards a stateless application, if your end goal is to be in Cloud Services. Here are a few things you can do to start thinking stateless:
Don't rely on Session State
If you need it, come up with a strategy to make it scale (Caching Service, SQL, or Storage)
Use the Storage Service
Assets such as static HTML, CSS, JavaScript and images are better placed in Storage (see the sketch after this list)
Avoids additional bandwidth on your Web Site (you can potentially stay on the shared tier longer, for lower cost)
Can be CDN Enabled, provides a better experience for International markets
Easier to update web assets once the application is migrated to Cloud Services
Storing User content
If your application already stores to the Storage Service, that's one less code modification when moving to Cloud Services.
Make it easy to discover patterns in your Data
The benefit of Cloud Services is that it enables you to reduce cost by scaling only what needs to be scaled. Start identifying your scale units now, i.e. how you partition your database or your tables in Storage.
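Picking up the "Use the Storage Service" point above: moving static assets into Blob storage is essentially a one-time upload. A minimal sketch with azure-storage-blob; the connection string, container and local folder are placeholders:

    import mimetypes
    from pathlib import Path

    from azure.storage.blob import BlobServiceClient, ContentSettings

    service = BlobServiceClient.from_connection_string("<connection-string>")
    container = service.get_container_client("static")

    # Push every file under ./assets (CSS, JS, images...) into the
    # container, preserving relative paths and setting a Content-Type.
    for path in Path("assets").rglob("*"):
        if path.is_file():
            content_type, _ = mimetypes.guess_type(path.name)
            with open(path, "rb") as data:
                container.upload_blob(
                    name=str(path.relative_to("assets")),
                    data=data,
                    overwrite=True,
                    content_settings=ContentSettings(
                        content_type=content_type or "application/octet-stream"
                    ),
                )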
I read all the posts and all of them are very helpful.
In addition, I found some info on MSDN: Windows Azure Websites, Cloud Services, and VMs: When to use which?
With Windows Azure Websites you can:
Build highly scalable web sites on Windows Azure.
Quickly and easily deploy sites to a highly scalable cloud environment that allows you to start small and scale as needed.
Use the languages and open source applications of your choice then deploy with FTP, Git or TFS, and easily integrate Windows Azure services like SQL Database, Caching, CDN and Storage.
With Cloud Services you can:
Build or extend your enterprise applications on Windows Azure.
Create highly-available, scalable applications and services using a rich PaaS environment. Support advanced multi-tier scenarios, automated deployments and elastic scale. Deliver great SaaS solutions to customers anywhere around the world.
MSDN also has a table that summarizes the options and compares the features of Web Sites and Cloud Services.
Azure is a great place to have your app, but there are some considerations you should know about before you start migrating.
Azure Websites and Hosted Services are really trivial to deploy. With Visual Studio you generate the package and simply upload it. You then have a staging environment to check it: if it's OK for you, swap the VIPs; if it's not, upgrade again.
Your instances have some properties that could be annoying. For example, you cannot be sure about your IP, so if your app works with a provider that uses IP restrictions, you will need to figure out how to proceed.
More considerations: your "server" could be reimaged at any moment, so anything you store on the local disk could go away at any moment.
Azure works very nicely if you have at least 2 instances for each website. Maybe your app is not prepared for that. The first step will be managing sessions with AppFabric. It's really easy, just a change in your web.config. Be careful, though: this session state doesn't work exactly like the "old one". You cannot store non-serializable objects (should be easy to adapt) or very large objects (more than 8 MB).
If you are going to develop something from zero, I suggest you start on Azure from the beginning. The reason is simple: it's really cheap to start, and you will not pay serious money until the app has lots of visits. It's also very cheap to set up SQL Azure and a storage account. Once you have everything in place, it's easy to add more instances or scale up.
Example:
Imagine you have an idea and you wish to show up to some possible investors.
You start by setting up a little SQL Azure database (1 GB), $9.99 monthly.
Then you build a site and put up 2 extra small instances, $18.72 monthly.
Let's say you need 100 GB of space (images, backups, ...), $12.50 monthly.
At this point, you have everything in place to start your business paying less than $50 monthly.
If your site takes off and the visits start to come, you change your instances to small instances (it's really risky to run a production environment on extra small instances, because they have no CPU reservation). Then the extra small cost ($18.72) goes up to $57.60. Maybe you need more space for that SQL Azure database? etc...
Prices calculated here: http://www.windowsazure.com/en-us/pricing/calculator/?scenario=web .
Those are few tips, there is a lot more. My advice is to start a trial account and play with it.
Final advice: it's very easy to solve everything by just purchasing more resources. Sometimes you need to refactor and optimize your code instead. If you simply add more resources each time you have a problem, you could end up with a huge bill and very messy code.
Hope it helps!
Another advantage of Windows Azure Cloud Services over Web Sites is that a cloud service can be added to an Azure Virtual Network. This can give it access to on-premises resources like databases. So if your requirements are such that you need the scalability offered by Azure but need to keep your data on-premises due to security restrictions, cloud services is a better choice.
Azure Web Sites cannot be part of an Azure virtual network. To access on-premises resources, mechanisms such as Azure Service Bus Relay must be configured.
We had our web site running on PHP on some hosting, and at some point decided to move it to Azure (where the main part of our service sits). We started with Azure Web Sites, which was great from a development point of view (mainly the integration with git). But after about a week of testing (when we decided to actually move the production web site) we found that currently:
No SSL for custom domains
Custom domains are available only for reserved instances (no shared infrastructure)
No SLA
So we moved to Hosted Service. The main problem for us was the lack of simple deployment (you need to build and upload a whole package of the web site). The solution we found was Dropbox: as a startup task for the role, we install the Dropbox service on the machine, which pulls the whole web site from Dropbox, which in turn holds an SVN checkout, so site updates became very easy.

Amazon EC2 as web server? [closed]

I have thought a lot recently about the different hosting types that are available out there. We can get pretty decent (average) latency from an EC2 instance in Europe (we're situated in Sweden) and the cost is pretty good. Obviously, the possibility of scaling instances up and down is amazing for us, being in a really expansive phase right now.
From a logical perspective, I also believe that Amazon can probably provide better availability and stability than most hosting companies on the market. That probably also outweighs not having a phone number to dial whenever we wonder about anything, even if it forces us to google things ourselves :)
So, what should we be concerned about if we were to run our web server on EC2? What are the pros and cons?
To clarify, we will run a pretty standard LAMP configuration, probably with memcached added.
Thanks
So, what should we be concerned about if we were to run our web server on EC2? What are the pros and cons?
The pros and cons of EC2 are somewhat dependent on your business. Below is a list of issues that I believe affect large organizations:
Separation of duties Your existing company probably has separate networking and server operations teams. With EC2 it may be difficult to separate these concerns, i.e. the guy defining your Security Groups (firewall) is probably the same person who can spin up servers.
Home access to your servers Corporate environments are usually administered on-premise or through a Virtual Private Network (VPN) with two-factor authentication. Administrators with access to your EC2 control panel can likely make changes to your environment from home. Note further that your EC2 access keys/accounts may remain available to people who leave or get fired from your company, making home access an even bigger problem...
Difficulty in validating security Some security controls may inadvertently become weak. Within your premises you can be 99% certain that all servers are behind a firewall that restricts any admin access from outside your premises. When you're in the cloud it's a lot more difficult to ensure such controls are in place for all your systems.
Appliances and specialized tools do not go in the cloud This may impact your security posture. For example, you may have network intrusion detection appliances sitting in front of on-premise servers, and you will not be able to move these into the cloud.
Legislation and Regulations I am not sure about regulations in your country, but you should be aware of cross-border issues. For example, running European systems on American EC2 soil may open you up to Patriot Act regulations. If you're dealing with credit card numbers or personally identifiable information then you may also have various issues to deal with if infrastructure is outside of your organization.
Organizational processes Who has access to EC2 and what can they do? Can someone spin up an Extra Large machine and install their own software? (Side note: Our company http://LabSlice.com actually adds policies to stop this from happening). How do you backup and restore data? Will you start replicating processes within your company simply because you've got a separate cloud infrastructure?
Auditing challenges Any auditing activities that you normally undertake may be complicated if data is in the cloud. A good example is PCI -- Can you actually always prove data is within your control if it's hosted outside of your environment somewhere in the ether?
Public/private connectivity is a challenge Do you ever need to mix data between your public and private environments? It can become a challenge to send data between these two environments, and to do so securely.
Monitoring and logging You will likely have central systems monitoring your internal environment and collecting logs from your servers. Will you be able to achieve the monitoring and log collection activities if you run servers off-premise?
Penetration testing Some companies run periodic penetration testing activities directly on public infrastructure. I may be mistaken, but I think that running pen testing against Amazon infrastructure is against their contract (which makes sense, as to them it would just look like hacking activity against infrastructure they own).
I believe that EC2 is definitely a good idea for small/medium businesses. They are rarely encumbered by the above issues, and usually Amazon can offer better services than an SMB could achieve themselves. For large organizations EC2 can obviously raise some concerns and issues that are not easily dealt with.
Simon # http://blog.LabSlice.com
The main negative is that you are fully responsible for ALL server administration: security patches, firewall, backup, server configuration and optimization.
Amazon will not provide you with any OS or higher level support.
If you would be FULLY comfortable running your own hardware, then it can be a great cost saving.
I work at a company hosting with Amazon EC2; we run one high-CPU instance and two small instances.
I won't say Amazon EC2 is good or bad, but here is a list of our experiences over time.
Reliability: bad. They have a lot of outages. Mostly only individual segments, but still...
Cost: expensive. It's cloud computing, not server hosting! A friend works at a company that does complex calculations which have to be finished at a certain time sharp every day, and the calculation time depends on the amount of data they get. They run some servers themselves, and when capacity gets scarce they kick in a bunch of EC2 instances.
That's the perfect use case, but if you run a server 24/7 anyway, you are better off with a dedicated root server.
A dedicated root server will also give you better performance, e.g. disk reads will be faster since it has a local disk!
Traffic is expensive too.
Support: good, fast and flexible; that's definitely very OK.
We had a big product launch with a lot of press going on, and there were problems with the reverse DNS for email sending. The Amazon guys got it all set up RIPE-conform and nice in no time.
Amazon's S3 hosting service is nice too, if you need it.
In Europe I would suggest going with a German hosting provider; they have very good connectivity as well.
For example:
http://www.hetzner.de/de/hosting/produkte_rootserver/eq4/
http://www.ovh.de/produkte/superplan_mini.xml
http://www.server4you.de/root-server/server-details.php?products=0
http://www.hosteurope.de/produkt/Dedicated-Server-Linux-L
http://www.klein-edv.de/rootserver.php
I have hosted with all of them and had good experiences. The best was definitely Hosteurope, but they are a bit more expensive.
I ran a CDN with around 40 servers there for two years and never experienced ANY outage on ANY of them.
Amazon had 3 outages in the last two months on our segments.
One minus that forced me to move away from Amazon EC2:
spamhaus.org lists the whole Amazon EC2 block on its Policy Block List (PBL).
This means that when you send email, all mail servers using spamhaus.org will report "blocked using zen.dnsbl" in your /var/log/mail.info.
The server I run uses email to register users and reset passwords; this no longer works.
Read more about it at Spamhaus: http://www.spamhaus.org/pbl/query/PBL361340
Summary: Need to send email? Do not use Amazon EC2.
The other con no one has mentioned:
With a stock EC2 server, if an instance goes down, it "goes away": any information on the local disk is gone, and gone forever. You have the added responsibility of ensuring that any information you want to survive a server restart is persisted off the EC2 instance (to S3, RDS, EBS, or some other off-server service).
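In practice, "persist it off the instance" can be as small as copying each file you care about to S3 as soon as you write it. A sketch with boto3; the local path, bucket and key are placeholders:

    import boto3

    s3 = boto3.client("s3")

    # Copy a locally written file to S3 so it survives the instance
    # disappearing.
    s3.upload_file(
        Filename="/var/app/uploads/report.pdf",
        Bucket="my-app-durable-data",
        Key="uploads/report.pdf",
    )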
I haven't tried Amazon EC2 in production, but I understand the appeal of it. My main issue with EC2 is that while it provides a great and affordable way to move all the blinking lights in your server room to the cloud, it doesn't provide a higher-level architecture to scale your application as demand increases. That is all left for you to figure out on your own.
This is not an issue for more experienced shops that can maintain all the needed infrastructure by themselves, but I think smaller shops are better served by something more along the lines of Microsoft's Azure or Google's AppEngine: Platforms that enforce constraints on your architecture in return for one-click scalability when you need it.
And I think the importance of quality support cannot be overstated. Look at the BitBucket blog: for a while there, it seemed every other post was about downtime and the long hours it took for Amazon to get back to them with a resolution to their issues.
Compare that to Github, which uses the Rackspace cloud hosting service. I don't use Github, but I understand that they also have their share of downtime. Yet it doesn't seem that any of that downtime is attributed to Rackspace's slow customer support.
Two big pluses come to mind:
1) Cost - With Amazon EC2 you only pay for what you use and the prices are hard to beat. Being able to scale up quickly to meet demands and then later scale down and "return" the unneeded capacity is a huge win depending on your needs / use case.
2) Integration with other Amazon web services - this advantage is often overlooked. Having integration with Amazon SimpleDB or the Amazon Relational Database Service means that your data can live separately from the computing power that EC2 provides. This is a huge win that sets EC2 apart from others.
Amazon's cloud monitoring service (CloudWatch) and support are charged extra; the first is quite useful and you should consider it, and the second too if your app is mission-critical.

Enough bandwidth to support

I have a client that is paying $1500 per month for hosting of 1 website (1 domain name; email is hosted elsewhere). The website is pretty low-traffic. Like, 100 unique visitors a week. The only catch (and why it is so expensive) is that their database is 15 GB, and is replicated from the hosting company to inside my small company's office.
Inside the office, there is a desktop application that hits the internal database quite a bit. From the website, some data is entered into THAT version of the database. Replication keeps both databases in sync on a schedule of every 5 minutes.
My client has a T1 that runs into their office. I want to knock out the hosting provider altogether, host their website from a server they already have (more than capable of handling this website), and dump the replication altogether. This would save them $1500 per month, and for a company of 5, it would really make a difference to them.
Assuming I already have a backup strategy in place (way to move a copy of the DB offsite every day), what are the problems with this?
Support? They can reboot their server as easily as the hosting provider can.
What if the server goes down for good? There is a duplicate that I can bring up in a couple of hours, and that is the level of service they really require.
What am I missing here? I want to save them money, but I don't want to screw them over...
EDIT: Some of the answers and comments make it clear that I wasn't clear myself. My client (Company A, not a hosting provider) is paying Company B to host their website. The website has a database (MS SQL Server 2000) that is 15 GB. That SQL Server DB is being replicated back to a server at Company A.
Company B is charging Company A $1500 per month for this service.
Company A already has a T1 for connectivity to the internet. They are located inside of a run of the mill business park.
I am proposing doing away with any outside hosting, getting a DNS provider to point the website to Company A's static IP and hosting the website on a server inside Company A. Then there would be no need for any replication at all, and they wouldn't be paying company B $1500 per month.
I hope that explains it. I'm going to re-read and comment on all the current answers.
Really, any advice is very appreciated.
Sounds to me like your only risk in moving the server in-house is if your T1 goes down. If you have a backup strategy in place for that, go for it.
The other option is to co-loc your own server with your own SQL Server licence on it. Hosting companies charge a lot for hosting SQL Server databases because they have to pay per-CPU licencing for it. So they build up a powerful server to serve lots of clients' databases, but SQL Server offers no way to do usage accounting, so the only way they can bill/screw you is on database size.
Sounds like the traffic on your site is low enough that you can get a dual-core server and a 1-CPU licence of SQL Server for a one-off cost of a few thousand dollars, and then you're only paying the monthly co-loc price.
A hosting provider can monitor the server 24x7. What if the server crashes at 8 pm? I assume the people at the small company are not working around the clock?
Depends on the service this DB is providing. What are the requirements to its uptime?
Database replication isn't that expensive in bandwidth terms - assuming you're not doing a hot copy of the entire DB files across the link, that is.
Check out log shipping, or any of the supported replication options that will replicate the DB using minimal bandwidth. (you never said what the DB was, so I can't comment further there)
I would move to the new server and keep replication. At the very least, if you're really worried about data loss, get another server in the same facility and copy across to that one - even if you copy 15 GB every 5 minutes, it'll be using non-chargeable bandwidth without even going outside the switch they're connected to.
