Benchmarking Amazon EC2 vs Google Compute Engine

For EC2: t2.micro, hosted in Oregon
For Compute Engine: n1-standard-1, in us-west
Both running Ubuntu 16.04, single instance. I installed nvm, then Node.js 7.7.2, and started a simple server:
require('http').createServer(function(req, res) {
  res.end('Hello world')
}).listen(8080)
Then, from my local machine, I used wrk for benchmarking:
wrk -t12 -c400 -d10s http://myinstanceaddress
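Since single wrk runs are noisy, one way to average several runs is a small wrapper that extracts the `Requests/sec` summary line from each run. The target address mirrors the question, and the live wrk invocation is commented out because it needs a running server; the parsing below runs against a sample output line.

```shell
# Hypothetical wrapper: run wrk several times and collect Requests/sec figures.
TARGET="http://myinstanceaddress"
RUNS=5
# for i in $(seq $RUNS); do wrk -t12 -c400 -d10s "$TARGET"; done

# Extracting the throughput figure from one run's summary line:
SAMPLE_OUTPUT="Requests/sec:   1203.45"
RPS=$(echo "$SAMPLE_OUTPUT" | awk '{ print $2 }')
echo "$RPS"   # 1203.45
```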
Result, average of 5:
EC2: ~1200 requests/s
Compute engine: ~500 requests/s
I expected the Compute Engine n1-standard-1 to perform better than the EC2 t2.micro, since the former has more CPU power and memory. However, the results indicated otherwise.
So, my questions are: is this simple benchmark accurate, and what other factors do I need to consider? If the benchmark is accurate, does this mean an AWS EC2 t2.micro actually performs better than a Google Compute Engine n1-standard-1, despite the latter having more power (CPU and memory, as advertised)?

Have you checked the network path (e.g. latency) between the two nodes, using a tool like ping?
I'm guessing that you're actually being limited by the network path somewhere along the way.
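A quick way to act on that suggestion is to ping each instance and compare the average RTT. The live ping is commented out here; the extraction below runs against a sample summary line of the kind Linux ping prints.

```shell
# Compare the network path to each instance by average round-trip time.
# ping -c 20 myinstanceaddress

# Parsing the avg RTT (ms) out of ping's summary line:
SAMPLE="rtt min/avg/max/mdev = 20.112/23.441/30.208/2.115 ms"
AVG_MS=$(echo "$SAMPLE" | awk -F'/' '{ print $5 }')
echo "$AVG_MS"   # 23.441
```

If the two averages differ substantially, the benchmark is measuring the network path at least as much as the instances themselves.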

Related

What exactly is a Heroku compute on a Dyno?

Heroku describes their dynos here; the page lists the amount of memory each one has along with the amount of Compute resources. Nowhere do I see a definition of a "Compute".
When I run this command on the performance-l dynos, it tells me it has 8 cores:
grep -c processor /proc/cpuinfo
I don't see how this relates to the 46x Compute listed in the chart. It seems like an arbitrary number to me, and I don't understand exactly what it is.
Heroku's compute units are just Amazon's compute units (because Heroku runs on top of AWS).
One compute unit on AWS is defined as the compute power of a 1.0-1.2 GHz 2007-era server CPU.
Keep in mind, though: these units are typically quite variable, depending on how many other active dynos are on the same underlying EC2 host.

EC2 instance types for running a TitanDB cluster

I'm currently getting started on building a graph database. For that I'm using Titan 1.0 with Cassandra 2.1.12 as the storage backend. For now I'll rely on Titan's internal mechanisms for indexing and won't add an external indexing service like Elasticsearch.
As for the general setting the graph will be used in: for now, the graph will mostly contain friendship and follower relations of my user base. Regarding read and write load, I expect some write load (e.g. when a user bulk-adds a lot of friends) alongside a lot of reads (e.g. when a user wants a list of his friendships).
Today I ran some load tests and repeatedly saw spikes in the metrics that Titan outputs.
I was wondering what kind of EC2 instances are best for running Titan. Right now I'm using r3.large, but I was wondering whether more CPU-optimized instances would work better. Are there any benchmarks for different instance types out there?
Since the answer to your question is a little subjective, I am going to point you toward a post on Performance Tuning Titan in AWS. The post's author compares the m4.large and m4.2xlarge with a Titan stack.
As you can see, moving from a m4.large (2 vCPU, 8 GiB memory) instance
to an m4.2xlarge (8 vCPU, 32 GiB) only gives a 9% gain in performance
when running this particular query, which shows it isn’t bound by
memory or CPU.
He points out that running each service on its own instances allows for fine-grained tuning. This will help you once the architecture is in production, since the expected read/write percentages are unknown. I think splitting the services onto specific instances will give you more freedom to tune the stack than simply moving to a larger instance.

AWS EC2 high ping and S3 download slow

I am new to AWS and have some questions.
My EC2 instance:
Instance type: t2.micro - Windows Server
EC2 region: Asia Pacific (Tokyo)
S3 region: AWS says "S3 does not require region selection."
User location: Taiwan
The ping to my EC2 instance is too high for my real-time game, and S3 downloads are sometimes very slow.
My network types are WiFi and cellular (3G/4G).
I have tested with my local server: 5 users connected and everything worked fine.
With EC2, each user averages about 80 KB/s.
Questions:
1. Why is the ping from my client to EC2 always over 100 ms?
2. How can I reduce the ping to under 100 ms?
3. Why is the S3 download speed so unstable (50 KB/s ~ 5 MB/s)?
4. How can I keep the download speed stable?
Regarding S3:
It DOES matter where you create your S3 bucket:
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html
"Amazon S3 creates buckets in a region you specify. You can choose any AWS region that is geographically close to you to optimize latency, minimize costs, or address regulatory requirements. For example, if you reside in Europe, you might find it advantageous to create buckets in the EU (Ireland) region."
On the other hand, "S3 does not require region selection" means you can access any bucket from any region, though access is optimized for the region where the bucket is located. A bucket's location can only be set at the time it is created.
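A minimal sketch of pinning a bucket to a nearby region with the AWS CLI (the bucket name is hypothetical, and the actual create-bucket call is commented out since it needs credentials and a globally unique name):

```shell
# Create the bucket in Tokyo, the region geographically closest to Taiwan.
REGION="ap-northeast-1"
BUCKET="my-game-assets-example"   # hypothetical bucket name
# aws s3api create-bucket --bucket "$BUCKET" --region "$REGION" \
#   --create-bucket-configuration LocationConstraint="$REGION"
echo "$BUCKET -> $REGION"
```

Downloads from a Tokyo bucket should then take roughly the same network path as traffic to the Tokyo EC2 instance.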
Regarding unstable times:
A t2.micro is not designed to provide stable capacity. Instead, its capacity is throttled once its burst credits run out. In fact, any instance type starting with t2 has burstable (non-constant) CPU capacity. I suggest you try a larger instance, like an m1.small.
See http://www.ec2instances.info/ ; any instance marked "burstable" has non-stable performance.
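A back-of-the-envelope view of why t2 performance is uneven: per AWS's published figures, a t2.micro earns 6 CPU credits per hour, and one credit is one minute of a vCPU at 100%. That works out to a sustainable baseline of 10% of one core; anything above that burns banked credits and eventually gets throttled.

```shell
# Sustainable baseline CPU for a t2.micro:
# 6 credits/hour = 6 vCPU-minutes per 60 minutes = 10% of one core.
CREDITS_PER_HOUR=6
BASELINE_PCT=$(awk -v c="$CREDITS_PER_HOUR" 'BEGIN { printf "%.0f", c / 60 * 100 }')
echo "$BASELINE_PCT"   # 10
```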

Does Google Compute use a CDN?

I'm already using a Google Compute instance, located in us-central.
I live in Taiwan, and pinging the instance gives an average time of 180~210 ms.
Pinging an Amazon EC2 instance located in Singapore gives an average time of 70~80 ms.
I think this latency difference depends on where your server is located, right?
So I guess Google Compute Engine doesn't use a CDN, right?
And Amazon EC2 is the same?
Kind Regards,
PinLiang
Google Compute runs code, while a CDN (**C**ontent **D**elivery **N**etwork) delivers content, so they aren't the same thing. If you get better latency to Amazon EC2, then use that instead, but be aware that Google Compute and EC2 work very differently, and you won't be able to run the same code on both.
If you want low-latency (to Taiwan) compute resources, you might want to consider using Compute Engine instances in the Asia zones; see: 4/14/2014 - Google Cloud Platform expands to Asia.
Yes, location and network connectivity will determine your latency. This is not always obvious though. Submarine cables tend to take particular paths. In some cases a geographically closer location may have higher latency.
A CDN is generally used for distributing static files at lower latency to more users. CloudFront can use any site as a custom origin.
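A rough speed-of-light check on the numbers in the question: light in fiber travels at roughly 200,000 km/s, so the round-trip floor is 2 x distance / 200,000. The great-circle distances below are approximate assumptions, but the observed pings (180~210 ms to us-central, 70~80 ms to Singapore) sit plausibly above these physical floors.

```shell
# Minimum possible RTT in ms for a given one-way fiber distance in km.
floor_ms() { awk -v d="$1" 'BEGIN { printf "%.0f", 2 * d / 200000 * 1000 }'; }

TW_TO_US_CENTRAL=$(floor_ms 12000)   # Taiwan -> us-central, ~12,000 km assumed
TW_TO_SINGAPORE=$(floor_ms 3200)     # Taiwan -> Singapore, ~3,200 km assumed
echo "$TW_TO_US_CENTRAL $TW_TO_SINGAPORE"   # 120 32
```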

AWS RDS Provisioned IOPS really worth it?

As I understand it, RDS Provisioned IOPS is quite expensive compared to the standard I/O rate.
In the Tokyo region, the P-IOPS rate is $0.15/GB and $0.12/IOPS for a standard deployment (double the price for a Multi-AZ deployment...).
For P-IOPS, the minimum required storage is 100 GB and the minimum IOPS is 1,000.
Therefore, the starting cost for P-IOPS is $135, excluding instance pricing.
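The starting-cost arithmetic, using the Tokyo standard-deployment rates quoted above:

```shell
# 100 GB at $0.15/GB plus 1,000 IOPS at $0.12/IOPS.
TOTAL=$(awk 'BEGIN { printf "%.0f", 100 * 0.15 + 1000 * 0.12 }')
echo "$TOTAL"   # 135
```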
For my case, using P-IOPS costs about 100X more than using standard I/O rate.
This may be a very subjective question, but please give some opinion.
For a database fully optimized for RDS P-IOPS, would the performance be worth the price?
or
The AWS site gives some insights into how P-IOPS can benefit performance. Are there any actual benchmarks?
SELF ANSWER
In addition to the answer that zeroSkillz wrote, I did some more research. Please note, however, that I am not an expert at reading database benchmarks. Also, both the benchmark and the answer are based on EBS.
According to an article written by Rodrigo Campos, the performance does actually improve significantly.
Going from 1,000 IOPS to 2,000 IOPS, the read/write (including random read/write) performance doubles. From what zeroSkillz said, a standard EBS block provides about 100 IOPS. Imagine the performance improvement when 100 IOPS goes up to 1,000 IOPS (the minimum for a P-IOPS deployment).
Conclusion
According to the benchmark, the performance/price seems reasonable. In performance-critical situations, I guess some people or companies should choose P-IOPS even if they are charged 100X more.
However, if I were a financial consultant at a small or medium business, I would just scale up (CPU, memory) my RDS instances gradually until the performance/price matched P-IOPS.
OK, this is a bad question because it doesn't mention the size of the allocated storage or any other details of the setup. We use RDS, and it has its pluses and minuses. First: you can't use an ephemeral storage device with RDS. You can't even access the storage device directly when using the RDS service.
That being said, the storage medium for RDS is presumed to be based on a variant of Amazon's EBS. Performance for standard IOPS depends on the size of the volume, and many sources state that above 100 GB of storage, EBS volumes start to be "striped". This provides better average-case data access for both reads and writes.
We currently run about 300 GB of allocated storage and can get 2k write IOPS and 1k read IOPS about 85% of the time over a several-hour period. We use Datadog to log this, so we can actually see it. We've seen bursts of up to 4k write IOPS, but nothing sustained at that level.
The main symptom we see on the application side is lock contention when the write IOPS are not enough. The number and frequency of these in your application logs will tell you when you're exhausting the IOPS of standard RDS. You can also use a service like Datadog to monitor the IOPS.
The problem with Provisioned IOPS is that it assumes a steady-state volume of reads/writes in order to be cost-effective, which is almost never a realistic use case (and exactly the kind of problem Amazon started cloud services to fix). The only assurance you get with P-IOPS is a reserved maximum throughput capability. If you don't use it, you still pay for it.
If you're OK with running replicas, we recommend running a read-only replica as a non-RDS instance on a regular EC2 instance. You can get better read IOPS at a much cheaper price by managing the replica yourself. We even set up replicas outside AWS using stunnel, with SSD drives as the primary block device, and we get ridiculous read speeds for our reporting systems: literally 100 times faster than we get from RDS.
I hope this gives some real-world detail. In short, in my opinion, unless you must ensure a certain level of throughput capability on a constant basis (or your application will fail), there are better alternatives to Provisioned IOPS, including read/write splitting with read replicas, memcache, etc.
So, I just got off a call with an Amazon systems engineer, and he had some interesting insights related to this question (i.e. this is second-hand knowledge).
Standard EBS blocks can handle bursty traffic well, but eventually throughput tapers off to about 100 IOPS. The engineer suggested several alternatives:
Some customers use multiple small EBS blocks and stripe them. This improves IOPS and is the most cost-effective option. You don't need to worry about mirroring, because EBS is mirrored behind the scenes.
Some customers use the ephemeral storage on the EC2 (or RDS) instance, with multiple slaves to "ensure" durability. Ephemeral storage is local and much faster than EBS. You can even use SSD-provisioned EC2 instances.
Some customers configure the master to use Provisioned IOPS or SSD ephemeral storage, then use standard EBS storage for the slave(s). Expected performance is good, but failover performance is degraded (though still available).
Anyway, if you decide to use any of these strategies, I would double-check with Amazon to make sure I haven't forgotten any important steps. As I said before, this is second-hand knowledge.
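The striping option in the first suggestion could be sketched with mdadm as below. The device names are hypothetical, and the mdadm/mkfs commands are commented out since they require real attached EBS volumes and root access.

```shell
# RAID-0 stripe across several small EBS volumes to multiply IOPS.
DEVICES="/dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi"   # hypothetical attachments
N=$(echo "$DEVICES" | wc -w | tr -d ' ')
# mdadm --create /dev/md0 --level=0 --raid-devices="$N" $DEVICES
# mkfs.ext4 /dev/md0 && mount /dev/md0 /data
echo "$N"   # 4
```

RAID-0 is normally risky, but as the engineer notes, EBS is already mirrored behind the scenes, which is why plain striping without redundancy is considered acceptable here.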