Why choose RDS/Aurora over an EC2 instance on AWS, or CloudSQL over Compute Engine (VMs), for a database server? [closed]

Recently, I've done some comparison between using a database as a service (e.g. CloudSQL on GCP and RDS/Aurora on AWS) and running the database on VMs (e.g. Compute Engine on GCP and EC2 on AWS).
It turns out that, for the same machine type, database as a service costs roughly double the price of setting up our own VMs.
For example, on AWS, the r5.4xlarge EC2 instance costs $1.208/hour, while the equivalent r5.4xlarge RDS instance costs $2.28/hour. Worse than that, Aurora costs $2.8/hour.
On GCP, the n1-highmem-16 Compute Engine instance costs $686.33/month, while the n1-highmem-16 CloudSQL instance costs $1387.98/month.
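To compare like with like, here is a rough normalization of the numbers above to monthly figures; it assumes about 730 hours per month and ignores storage, I/O, backups, and any sustained-use or reserved discounts:

    # Back-of-envelope normalization of the prices quoted above to monthly figures.
    # Assumes ~730 hours/month; storage, I/O, backups and discounts are ignored.
    HOURS_PER_MONTH = 730

    aws_options = {
        "EC2 r5.4xlarge (self-managed MySQL)": 1.208 * HOURS_PER_MONTH,
        "RDS r5.4xlarge": 2.28 * HOURS_PER_MONTH,
        "Aurora r5.4xlarge": 2.8 * HOURS_PER_MONTH,
    }
    gcp_options = {
        "Compute Engine n1-highmem-16 (self-managed MySQL)": 686.33,
        "CloudSQL n1-highmem-16": 1387.98,
    }

    for name, monthly in {**aws_options, **gcp_options}.items():
        print(f"{name}: ~${monthly:,.0f}/month")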
Why don't people just spin up an EC2 or Compute Engine instance and set up their own MySQL?
I would appreciate it if you could write down your reasons for choosing a database as a service (CloudSQL, RDS, or Aurora) over setting up a database on VMs.

I can't generalise, but at one of the previous companies I worked for, they did a thorough analysis.
They included the cost of the people needed to set up the database in high-availability mode, and added the ongoing costs of backups and of keeping up to date with security patches, each of which had to be prepared for in advance.
With a managed service you get an all-in-one package, and it turned out to be cheaper and less risky than having to hire a (part-time) DBA.
They also factored in how quickly they could adopt new technology. When they needed e.g. MongoDB or Redis, a managed service could be adopted in a week, instead of waiting several months for someone to analyse all the risks and options, set it up in a highly available state, and come up with a security plan.
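As a rough illustration of that kind of analysis, here is a minimal total-cost-of-ownership sketch. All the labour figures are hypothetical placeholders, not the numbers from that analysis:

    # Hypothetical total-cost-of-ownership comparison: managed database vs. self-managed on VMs.
    # Every figure below is a placeholder assumption -- plug in your own prices and salaries.
    HOURS_PER_MONTH = 730

    vm_hourly, managed_hourly = 1.208, 2.28   # instance prices taken from the question
    dba_monthly_cost = 12_000                 # assumed fully loaded monthly cost of a DBA
    dba_fraction = 0.25                       # assumed share of a DBA needed for HA setup,
                                              # patching, backups and monitoring
    replicas_for_ha = 2                       # assumed primary + standby when self-managing

    self_managed = (vm_hourly * HOURS_PER_MONTH * replicas_for_ha
                    + dba_monthly_cost * dba_fraction)
    managed = managed_hourly * HOURS_PER_MONTH * replicas_for_ha  # Multi-AZ roughly doubles the rate

    print(f"self-managed: ~${self_managed:,.0f}/month, managed: ~${managed:,.0f}/month")

With these placeholder numbers the managed option comes out ahead, which is exactly the kind of result the analysis above arrived at; with a larger fleet per DBA the balance can tip the other way.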

Related

Best AWS server for eCommerce [closed]

I'm trying to deploy my Laravel app to an AWS server, and I have two options:
ECS, which allows me to use Docker to manage the services I need (RAM, CPU, etc.)
The AWS e-commerce platform, with monthly plans (a fixed amount of RAM, CPU, and storage according to the plan)
So which one should I use for my e-commerce platform? The comparison should cover:
performance: which one is better at handling API requests (I've heard Docker slows down processing)
price: is it safer to choose a monthly plan instead of paying per request or per resource?
security: AWS offers more security options on the AWS e-commerce platform
The issue with managing your own instance is that you have to work out the security aspects in depth, especially if you are handling payments or credit card information. For an e-commerce site this may be at the core of the requirements. Personally I would go for a managed service rather than ECS, as you will spend a lot of time configuring and securing ECS. In the ECS case you also have to buy an SSL certificate on top, plus do penetration testing to make sure the site is secure, etc.
The managed platform is hopefully already PCI-DSS compliant and easy to configure.
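As a side note, if you do end up on ECS behind a load balancer, AWS Certificate Manager can issue the TLS certificate at no extra charge for use with supported AWS services. A minimal boto3 sketch, with a hypothetical domain name and assuming DNS validation is acceptable:

    import boto3

    # Request a public TLS certificate from AWS Certificate Manager (ACM).
    # The domain names below are placeholders; DNS validation still has to be completed
    # before the certificate can be attached to a load balancer or CloudFront.
    acm = boto3.client("acm", region_name="us-east-1")

    response = acm.request_certificate(
        DomainName="shop.example.com",                      # placeholder domain
        ValidationMethod="DNS",
        SubjectAlternativeNames=["www.shop.example.com"],   # placeholder alternative name
    )
    print("Certificate ARN:", response["CertificateArn"])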

Amazon AWS - best setup for geographically optimized delivery within USA? [closed]

We run a busy web app, with most traffic being USA based. Our IT department is currently in the process of migrating our web server to the Amazon AWS EC2 environment and I'd like to make sure we set it up correctly geographically ...
Currently, the test environment AMI instance they set up is in the USA West region (North California). However, our traffic originates much more from Eastern than Western/Mountain regions (though the latter are hardly negligible). Our original hosting server is in Texas.
1) I think we will need to move the test AMI instance to the US East (Virginia) region, where most of our traffic is. Is it easy to "move" or "mirror" an AMI to another region (see the copy sketch after this list)? Ideally, our primary instance would be in Virginia and we'd have another instance "mirrored" in California, which would sync any changes that we make to the primary instance.
2) Perhaps we should create a CloudFront CDN distribution instead of multiple AMIs in different regions? Or use both: a CloudFront CDN plus two AMI instances, one for each coast?
3) [side question] It seems we would need two separate instances set up anyway to enable the load balancing feature in EC2?
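Regarding question 1, copying an AMI between regions is a single API call these days. A minimal boto3 sketch, with a placeholder AMI ID and the two regions mentioned above:

    import boto3

    # Copy an existing AMI from us-west-1 (N. California) to us-east-1 (Virginia).
    # The source AMI ID is a placeholder; the copy produces a new AMI ID in the target region.
    ec2_east = boto3.client("ec2", region_name="us-east-1")

    copy = ec2_east.copy_image(
        Name="webapp-ami-copy",
        SourceImageId="ami-0123456789abcdef0",   # placeholder source AMI
        SourceRegion="us-west-1",
    )
    print("New AMI in us-east-1:", copy["ImageId"])

Note that this copies the machine image only; keeping running instances in two regions in sync is a separate problem (replication, shared storage, or deployment tooling).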
First of all, think about having some kind of configuration management system available. With this you basically define a set of "roles" or "recipes" and assign your servers or instances to these roles; they then install their software automatically, which makes tasks like changing the AWS region very easy. Have a look at this Wiki page for an overview.
While latency inside the US is not that big compared to sending packets across the ocean, I would suggest moving your instance to your main traffic region, that is, US East. However, I do not think it is necessary as a first step to have another machine up and running in a second region. I would suggest using a service like Pingdom to measure your latency from different geographical regions; if you find you do need another server, you can still add it afterwards.
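For a quick sanity check before paying for a monitoring service, a crude sketch that times TCP connection setup to the regional EC2 API endpoints (the hostnames are real endpoints, but this only measures connection latency from wherever the script runs, not full page loads):

    import socket
    import time

    # Rough TCP connect latency to two regional EC2 API endpoints.
    # This measures only connection setup from this machine's location;
    # a service like Pingdom gives a fuller picture from many vantage points.
    ENDPOINTS = {
        "us-east-1": "ec2.us-east-1.amazonaws.com",
        "us-west-1": "ec2.us-west-1.amazonaws.com",
    }

    for region, host in ENDPOINTS.items():
        start = time.perf_counter()
        with socket.create_connection((host, 443), timeout=5):
            elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{region}: {elapsed_ms:.1f} ms")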
What I would strongly suggest, however, is using CloudFront or another CDN. Most webpages have a lot of static files such as images, CSS, and JavaScript, so make sure they get delivered to the user quickly. With CloudFront you can ensure they are served from the nearest edge location, which will give you a significant speedup.

How does AWS EC2 scale if more resources are needed? [closed]

I launched a t1.micro instance running Apache and MySQL servers on Ubuntu. Basically I'm using it to host my photo sharing app that may have huge random spikes in terms of visitors.
How does AWS go about it?
Will the instance automatically upgrade to the appropriate horsepower to keep up with demand and growing storage needs?
No, you have to make your instance more powerful manually: first make sure it is in the stopped state (this requires an EBS-backed instance, or you'll lose your data), then go to the AWS console, right-click your instance, and select 'Change Instance Type'.
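The same resize can be scripted. A minimal boto3 sketch with a placeholder instance ID and target type; the instance must be EBS-backed and remains stopped while the type is changed:

    import boto3

    # Stop an EBS-backed instance, change its instance type, then start it again.
    # Instance ID and target type are placeholders.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    instance_id = "i-0123456789abcdef0"

    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={"Value": "m5.large"},   # placeholder target size
    )

    ec2.start_instances(InstanceIds=[instance_id])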
If you are interested in a more automated approach, I suggest an Elastic Load Balancer with an Auto Scaling policy. With Auto Scaling, Amazon will spin instances up or down based on set points that you provide (e.g. CPU usage reaching 80% for 10 minutes).
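A rough sketch of such a policy with boto3, assuming an Auto Scaling group named web-asg already exists (target tracking is the modern equivalent of the CPU set point described above):

    import boto3

    # Attach a target-tracking scaling policy to an existing Auto Scaling group.
    # The group name is a placeholder; the policy keeps average CPU near 80%.
    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",              # placeholder group name
        PolicyName="keep-cpu-near-80",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization",
            },
            "TargetValue": 80.0,
        },
    )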

Will it be fast if I use amazon web services for India? [closed]

My portal will be accessed mainly in India, and it involves uploading and viewing images, which means a good amount of data transfer will be involved.
If I host my portal on servers located in India, it will surely be faster to access the pages, but I would like to use Amazon Web Services. Does Amazon give us the option to host our Tomcat server and store images on servers located in India, or at least in Singapore, so that access is reasonably fast?
Amazon Web Services offers several datacenter regions for most of their products and services within their steadily expanding global infrastructure, amongst them the Asia Pacific (Singapore) region (usually referred to as ap-southeast-1).
Furthermore, they offer even more so-called edge locations for Amazon CloudFront, their content delivery network (CDN) web service.
You can see an overview of the current regions and edge locations on their Global Infrastructure map.
There is an API-oriented Regions and Endpoints listing as well; see e.g. the one for Amazon Elastic Compute Cloud (EC2). Please note that not every region necessarily supports every available product; beta offerings in particular are usually available only in us-east-1 initially.
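The same listing is available programmatically. A small boto3 sketch that prints the regions visible to the account and pins a client to the Singapore region for the actual workload:

    import boto3

    # List the EC2 regions visible to this account, then keep using a client
    # pinned to the Singapore region for the portal's own resources.
    ec2 = boto3.client("ec2", region_name="ap-southeast-1")

    for region in ec2.describe_regions()["Regions"]:
        print(region["RegionName"], region["Endpoint"])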
Consequently, you should be fine using ap-southeast-1 for your use case, though as usual you might want to give it a try before settling on it, which is fairly easy to do by means of the AWS Free Usage Tier.

Automatic Ejabberd clustering with EC2 (Amazon Web Services) [closed]

Using ejabberd in EC2 as an XMPP server to send real-time information to clients...
How is it possible to set up clustering so that, if the load on the server gets too high, Auto Scaling creates a new EC2 instance that becomes part of the ejabberd cluster?
The documentation I've read suggests that you must already have the machines and manually configure each new one to be added to the cluster. Surely you don't have to run redundant EC2 instances just in case?
You'll need to do this manually; however, a single ejabberd server can handle quite a lot of traffic. Each server adds a significant number of available connections to your cluster, so it's not a common task.
That said, I'd be really careful running ejabberd in EC2. I've been doing it for about a year, and we fight Mnesia network partitioning pretty regularly. Clustered ejabberd servers don't work very reliably on the EC2 network.
I am setting up an infrastructure based on EC2 + ejabberd and have read this post. Do you not recommend it? I planned to use MySQL (in AWS RDS) as the backend for tables that store large amounts of data. What do you think?
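For what it's worth, newer ejabberd releases ship an ejabberdctl join_cluster command, so an instance launched by Auto Scaling could attempt to join the cluster from its bootstrap script. A rough sketch, with a placeholder node name and none of the error handling the Mnesia caveats above would require in practice:

    import subprocess

    # Run on a freshly launched instance (e.g. from user data) after ejabberd is installed
    # and its Erlang cookie matches the rest of the cluster. The node name is a placeholder.
    EXISTING_NODE = "ejabberd@node1.internal.example.com"

    subprocess.run(["ejabberdctl", "start"], check=True)
    subprocess.run(["ejabberdctl", "started"], check=True)       # block until the node is up
    subprocess.run(["ejabberdctl", "join_cluster", EXISTING_NODE], check=True)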
