At what requested disk size does Amazon RDS use striping? - amazon-ec2

From the Amazon RDS FAQ Page
http://aws.amazon.com/rds/faqs/
"Depending on the size of storage requested, Amazon RDS automatically stripes across multiple EBS volumes to enhance IOPS performance"
What disk size do I need to request to trigger disk striping? I've only heard rumors of 300GB.

I had a conversation with upper management at AWS regarding this. Here is the direct answer provided by them.
You will realize improvements in RDS throughput by scaling storage as high as 500GB, and this effect starts at a level well under 100GB (i.e., striping occurs at a far lower level than 300GB).
The most important factor in realizing this throughput potential is the instance class. Specifically, the following instance classes are considered High I/O instances:
m1.xlarge
m2.2xlarge
m2.4xlarge
These instances have large network bandwidth available to them, so the upgrade that you mentioned on stackoverflow (to the m2.2xlarge instance) was likely the main reason you saw a leap in throughput. If you scale your current storage as high as 500GB, this will continue to increase. With Provisioned IOPS support for RDS (PIOPS, announced last night), throughput will now scale linearly all the way to 1TB.
With PIOPS, the throughput rate you can expect is currently associated with the amount of allocated storage. For Oracle and MySQL databases, you will realize a very consistent 1,000 IOPS for each 100GB you allocate – resulting in a potential throughput max of 10K IOPS. The (current, temporary) downside is that you will need to unload/load data to migrate an existing app to the PIOPS RDS.
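The scaling rule quoted above (1,000 IOPS per 100GB allocated, topping out at 10K) is easy to sanity-check with a few lines of Python. This is just the arithmetic from the answer, not an AWS API:

```python
def piops_for_storage(storage_gb):
    """Provisioned IOPS at the quoted ratio of 1,000 IOPS per 100GB,
    capped at the 10K maximum mentioned above."""
    return min(storage_gb // 100 * 1000, 10_000)

for gb in (100, 500, 1000):
    print(gb, "GB ->", piops_for_storage(gb), "IOPS")
# 100 GB -> 1000 IOPS
# 500 GB -> 5000 IOPS
# 1000 GB -> 10000 IOPS
```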

Last I checked, 300GB triggers back-end striping.

I was told recently by an AWS engineer during an architecture review that 100GB will trigger EBS striping.


Read IOPS limits for RDS read-replica?

I've noticed a strange thing happening on my PostgreSQL Amazon RDS read replica.
We ran a stress test of dozens of parallel high-load read requests. Performance was really good at the beginning of the test, but then dropped rapidly, while PostgreSQL kept holding dozens of SELECT queries that had run fast before it got stuck.
I opened the monitoring statistics tab in the RDS console and saw that, along with the visible performance drop, the Read IOPS number also decreased from 3000/sec to 300/sec and didn't go above 300 IOPS for a long time.
At the same time CPU usage was really low ~3%, there weren't any problems with RAM or storage space.
So my question: are there any documented limits on Read IOPS for a read replica? It looks like Amazon RDS automatically reduced the IOPS limit after a really high load (3000/sec).
The read-replica server runs on a db.t2.large instance with 100 GB of General Purpose (SSD) storage and the Provisioned IOPS feature disabled.
The behavior you describe is exactly as documented for the underlying storage class GP2.
GP2 is designed to [...] deliver a consistent baseline performance of 3 IOPS/GB
GP2 volumes smaller than 1 TB can also burst up to 3,000 IOPS.
https://aws.amazon.com/ebs/details/
3 IOPS/GB on a 100GB volume is 300 IOPS.
See also http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html for a description of how IOPS credits work. While your system isn't busy, it will build up credits that can be used for the next burst.
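The GP2 numbers quoted above can be turned into a small calculation. This is a sketch of the documented credit model (5.4 million I/O credits per full bucket is the figure from the EBS documentation; credits refill at the baseline rate while you burst):

```python
BURST_BUCKET = 5_400_000  # I/O credits in a full GP2 burst bucket
BURST_RATE = 3_000        # max burst IOPS for GP2 volumes under 1 TB

def baseline_iops(size_gb):
    # 3 IOPS per GB, as quoted above
    return 3 * size_gb

def burst_seconds(size_gb):
    """Seconds a full credit bucket sustains a 3,000 IOPS burst,
    net of the credits earned at the baseline rate meanwhile."""
    return BURST_BUCKET / (BURST_RATE - baseline_iops(size_gb))

print(baseline_iops(100))         # 300 -- matching the drop observed
print(round(burst_seconds(100)))  # 2000 -- roughly half an hour of burst
```

So a 100GB GP2 volume bursting at 3,000 IOPS exhausts a full bucket in about 33 minutes, then falls back to its 300 IOPS baseline, which is exactly the pattern described in the question.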

What's the point of remote/cloud memcached service?

As far as I understand, memcached is mainly used to cache key value objects in local memory to speed up access.
But on a platform like Heroku, to use memcached you have to choose an add-on like MemCachier, which is cloud based. I don't understand why that is useful: the network latency is orders of magnitude higher than accessing local memory, and completely unpredictable.
So what am I missing?
In the applicable use cases, e.g. accessing a remote disk-based RDBMS or performing an expensive computation, the network latency is orders of magnitude lower than the alternative. Furthermore, while it is true that networks are generally unreliable, during normal operation you still get sub-millisecond latency.
That said, usually a local cache beats a remote cache in terms of latency but on the other hand it could prove problematic to scale.
Edit: answering the OP's comment.
You can essentially think of a disk-based DB as a memory cache over the data on disk, but the DB server's RAM is limited (like any other server's). An external cache is therefore used to offload some of that stress, reduce contention on the DB server's resources, and free it for other tasks.
As for latency, yes - I was referring to AWS' network. While I'm less familiar with Memcachier's offer, we (Redis Labs) make sure that our Memcached Cloud and Redis Cloud instances are co-located in the same data region as Heroku's dynos are to ensure minimal possible latency. In addition, we also have an Availability Zone Mapping utility that makes it possible to have the application and cache instances reside within the same zone for the same purpose.
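The use case described above is the classic cache-aside pattern. Here is a minimal Python sketch; `FakeMemcached` is a hypothetical in-memory stand-in for a real client such as pymemcache, and `expensive_query` is a placeholder for the slow DB call:

```python
import time

class FakeMemcached:
    """Hypothetical stand-in for a memcached client; just enough
    to illustrate the pattern without a network round-trip."""
    def __init__(self):
        self._store = {}
    def get(self, key):
        return self._store.get(key)
    def set(self, key, value, expire=0):
        self._store[key] = value

cache = FakeMemcached()

def expensive_query(user_id):
    # Placeholder for a slow disk-based RDBMS query or computation
    time.sleep(0.01)
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    """Cache-aside: try the (remote) cache first, fall back to the DB,
    and populate the cache on a miss."""
    key = f"user:{user_id}"
    value = cache.get(key)
    if value is None:
        value = expensive_query(user_id)
        cache.set(key, value, expire=300)
    return value
```

With a real memcached service the `get` costs a sub-millisecond network round-trip, which is still far cheaper than the disk-bound query it replaces.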

Best practices for use of Neo4j on Google Compute Engine / Amazon EC2 instances

There is a very nice guide on optimizing linux machine for Neo4j. But this guide assumes the typical characteristics of a physical hard drive. I am running my Neo4j instances on Google CE and Amazon EC2. I am unable to find any document detailing an optimal setup for these virtual machines. What resources do I need in terms of memory (for heap or extended use) and disk speed / IOPS to get an optimal performance? I currently have a couple of million nodes and about ten million relationships (2 GBs) and the data size is increasing with imports.
On EC2 I used to rely on SSD scratch disks and then make regular backups to permanent disks. There is no such thing available on Compute Engine, and the write speeds don't seem very high to me, at least at normal disk sizes (because speed changes with size). Is there any way to get reasonable performance for my import/index operations? Or do these operations have more to do with memory and compute capacity?
Any additional reading is welcome...
Use local disks whenever possible; SSDs are better than spinning disks; try provisioned IOPS on AWS.
EBS is not a good fit: it is slow and jittery.
No idea for Compute Engine, though; you might want to use more RAM and try to load larger parts of the graph into memory.
Additional reading: http://structr.org/blog/neo4j-performance-on-ext4
You should still check the other things mentioned in that blog post, like the Linux scheduler, write barriers, etc.
It's better to set those memory-mapping settings manually. And for the 2nd-level caches, probably check out the enterprise version with the HPC cache.
See also this webinar on hardware sizing: https://vimeo.com/46049647

AWS RDS Provisioned IOPS really worth it?

As I understand it, RDS Provisioned IOPS is quite expensive compared to the standard I/O rate.
In the Tokyo region, the P-IOPS rate is $0.15/GB and $0.12/IOPS for a standard deployment (double the price for a Multi-AZ deployment...).
For P-IOPS, the minimum required storage is 100GB and the minimum IOPS is 1000.
Therefore, the starting cost for P-IOPS is $135, excluding instance pricing.
For my case, using P-IOPS costs about 100X more than using standard I/O rate.
This may be a very subjective question, but please give some opinion.
In the most optimized database for RDS P-IOPS, would the performance be worth the price?
or
The AWS site gives some insights on how P-IOPS can benefit the performance. Is there any actual benchmark?
SELF ANSWER
In addition to the answer that zeroSkillz wrote, I did some more research. However, please note that I am not an expert at reading database benchmarks. Also, the benchmark and the answer were based on EBS.
According to an article written by "Rodrigo Campos", the performance does actually improve significantly.
From 1000 IOPS to 2000 IOPS, the read/write (including random read/write) performance doubles. From what zeroSkillz said, a standard EBS volume provides about 100 IOPS. Imagine the improvement in performance when 100 IOPS goes up to 1000 IOPS (the minimum IOPS for a P-IOPS deployment).
Conclusion
According to the benchmark, the performance/price seems reasonable. For performance critical situations, I guess some people or companies should choose P-IOPS even when they are charged 100X more.
However, if I were a financial consultant for a small or medium business, I would just scale up (CPU, memory) my RDS instances gradually until the performance/price matches P-IOPS.
Ok. This is a bad question because it doesn't mention the size of the allocated storage or any other details of the setup. We use RDS and it has its pluses and minuses. First: you can't use an ephemeral storage device with RDS. You can't even access the storage device directly when using the RDS service.
That being said - the storage medium for RDS is presumed to be based on a variant of EBS from amazon. Performance for standard IOPS depends on the size of the volume and there are many sources stating that above 100GB storage they start to "stripe" EBS volumes. This provides better average case data access both on read and write.
We currently run about 300GB of allocated storage and can get 2K write IOPS and 1K IOPS about 85% of the time over a several-hour period. We use Datadog to log this, so we can actually see it. We've seen bursts of up to 4K write IOPS, but nothing sustained like that.
The main symptom we see from an application side is lock contention if the IOPS for writing is not enough. The number and frequency you get of these in your application logs will give you symptoms for exhausting the IOPS of standard RDS. You can also use a service like datadog to monitor the IOPS.
The problem with provisioned IOPS is that it assumes a steady-state volume of reads/writes in order to be cost effective. That is almost never a realistic use case, and it is the very problem cloud services were meant to fix. The only assurance you get with P-IOPS is that a maximum throughput capability is reserved for you. If you don't use it, you still pay for it.
If you're ok with running replicas, we recommend running a read-only replica as a NON-RDS instance, and putting it on a regular EC2 instance. You can get better read-IOPS at a much cheaper price by managing the replica yourself. We even setup replicas outside AWS using stunnel and put SSD drives as the primary block device and we get ridiculous read speeds for our reporting systems - literally 100 times faster than we get from RDS.
I hope this helps give some real-world details. In short, in my opinion, unless you must ensure a certain level of throughput capability on a constant basis (or your application will fail), there are better alternatives to provisioned IOPS, including read/write splitting with read replicas, memcache, etc.
So, I just got off of a call with an Amazon System Engineer, and he had some interesting insights related to this question. (ie. this is 2nd hand knowledge.)
Standard EBS volumes can handle bursty traffic well, but eventually taper off to about 100 IOPS. The engineer suggested several alternatives.
Some customers use multiple small EBS volumes and stripe them. This improves IOPS and is the most cost effective. You don't need to worry about mirroring because EBS is mirrored behind the scenes.
Some customers use the ephemeral storage on the EC2 (or RDS) instance and keep multiple slaves to "ensure" durability. The ephemeral storage is local storage and much faster than EBS. You can even use SSD-provisioned EC2 instances.
Some customers configure the master to use provisioned IOPS or SSD ephemeral storage, then use standard EBS storage for the slave(s). Expected performance is good, but failover performance is degraded (though still available).
Anyway, if you decide to use any of these strategies, I would recheck with Amazon to make sure I haven't forgotten any important steps. As I said before, this is 2nd-hand knowledge.

Two Questions Regarding AWS' RDS Multi AZ

I understand that when upgrading to a Multi-AZ RDS from a Single-AZ, a "brief I/O freeze" occurs. What exactly does that mean?
When an upgrade is made to a Multi-AZ deployment, say from small to large, will the production database be impacted at all? Will it be able to use the backup database, then fail over?
Answers to your questions:
When you move from Single-AZ to Multi-AZ, a brief I/O freeze happens. It means that for some duration the database won't be accessible: no read or write operations can be performed on it. Usually this lasts around 3-4 minutes.
Yes, the production database will be affected when you resize the compute (from small to large). The best time to perform a resize is during the scheduled maintenance window. If you select the Apply Immediately option, the database will be inaccessible for a while (the time it takes to switch control to the backup server).
Regards,
Sanket Dangi
the downtime when converting from Single-AZ to Multi-AZ is essentially the time it takes for a new instance to launch and become fully functional. as Sanket said, it may take a few minutes.
scaling up a multi-AZ deployment first scales up the slave instance, then performs a failover. the downtime is the time it takes to do the actual failover - usually closer to a minute.
scaling out a multi-AZ deployment is done by adding additional read-replicas (sourced off of the standby) which incurs no interruption. keep in mind that adding read-replicas creates an eventually consistent system which may or may not be desirable.
it's also worth noting that you should use the same instance types across all Multi-AZ instances, otherwise the imbalance may incur replica lag.
as you're probably realizing, it's best to start with a multi-AZ configuration from the beginning. it makes scaling up and scaling out a lot easier and with less downtime.
Is question 1 still valid? According to AWS documentation (2022) there is no downtime, but there is a small decrease in performance.
Quoting AWS documentation:
When converting a DB instance from Single-AZ to Multi-AZ, Amazon RDS creates a snapshot of the database volumes and restores these to new volumes in a different Availability Zone. Although the newly restored volumes are available almost immediately, they don't reach their specified performance until the underlying storage blocks are copied from the snapshot.
Therefore, during the conversion from Single-AZ to Multi-AZ, you can experience elevated latency and performance impacts. This impact is a function of volume type, your workload, instance, and volume size, and can be significant and may impact large write-intensive DB instances during peak hours of operation.
