Azure SQL Managed Instance latency to read replica - metrics

We are using the Business Critical service tier for Azure SQL Managed Instance and we are leveraging the read-only replicas heavily. How can I get some visibility into the latency between the read/write replica and the read-only replica?
Ultimately I would like to alert if that falls behind.
Thank you,
Michael

I don't think you will have much visibility into the latency. The BC tier uses local SSDs, and to my knowledge the lag to the read-only replica is usually less than 5 seconds.
In the BC tier, there are always two secondary nodes running on the back end using Always On availability groups.
Here are some DMV queries you can run when you connect to the read-only replica:
https://learn.microsoft.com/en-us/azure/azure-sql/database/read-scale-out#monitoring-and-troubleshooting-read-only-replicas
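If you'd rather measure it end-to-end instead of relying on a particular DMV, an application-level heartbeat also works. Here's a minimal sketch, assuming a Python client with pyodbc and a dbo.replica_heartbeat table you create yourself (the table name, connection strings, and the 5-second threshold are placeholders, not anything Azure provides):

```python
# Sketch: application-level lag check. dbo.replica_heartbeat is a hypothetical
# one-row table (id, stamped_at datetime2) that you create and update yourself.
import pyodbc

PRIMARY = ("Driver={ODBC Driver 18 for SQL Server};Server=<managed-instance-host>;"
           "Database=<db>;UID=<user>;PWD=<password>;Encrypt=yes")
# ApplicationIntent=ReadOnly routes the connection to the read-only replica.
REPLICA = PRIMARY + ";ApplicationIntent=ReadOnly"

def write_heartbeat():
    """Stamp the current UTC time on the primary (run this on a schedule)."""
    with pyodbc.connect(PRIMARY) as conn:
        conn.execute("UPDATE dbo.replica_heartbeat SET stamped_at = SYSUTCDATETIME() WHERE id = 1")
        conn.commit()

def replica_lag_seconds() -> int:
    """Compare the last heartbeat visible on the read-only replica with its clock."""
    with pyodbc.connect(REPLICA) as conn:
        row = conn.execute(
            "SELECT DATEDIFF(SECOND, stamped_at, SYSUTCDATETIME()) "
            "FROM dbo.replica_heartbeat WHERE id = 1").fetchone()
        return row[0]

if __name__ == "__main__":
    write_heartbeat()
    lag = replica_lag_seconds()
    print(f"approximate replica lag: {lag}s")
    if lag > 5:  # alerting threshold is arbitrary; wire this into your monitoring
        print("WARNING: read-only replica is falling behind")
```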
Hopefully this helps.

Related

What's the point of remote/cloud memcached service?

As far as I understand, memcached is mainly used to cache key-value objects in local memory to speed up access.
But on a platform like Heroku, to use memcached you have to choose an add-on like Memcachier, which is cloud based. I don't understand why that is useful. The network latency is orders of magnitude higher than accessing local memory, and completely unpredictable.
So what am I missing?
In the applicable use cases, e.g. accessing a remote disk-based RDBMS or performing an expensive computation, the network latency is orders of magnitude lower than the alternative. Furthermore, while it is true that networks are generally unreliable, during normal operation you still get sub-millisecond latency.
That said, usually a local cache beats a remote cache in terms of latency but on the other hand it could prove problematic to scale.
Edit: answering the OP's comment.
You can essentially think of a disk-based DB as a memory cache over the data on disk - but the DB server's RAM is limited (like any other server's). An external cache is therefore used to offload some of that stress, reduce contention on the DB server's resources and free it for other tasks.
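As a rough illustration of that offloading, here is a minimal cache-aside sketch in Python using pymemcache; the host, key format, and query_db function are hypothetical stand-ins for your own setup and an expensive DB call:

```python
# Cache-aside sketch: check the remote memcached first, fall back to the database
# on a miss, then populate the cache. query_db is a hypothetical expensive call.
from pymemcache.client.base import Client

cache = Client(("memcached.example.com", 11211))  # placeholder host/port

def query_db(user_id: int) -> str:
    # Stand-in for a slow, disk-bound RDBMS query or an expensive computation.
    return f"profile-for-{user_id}"

def get_user_profile(user_id: int) -> str:
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached.decode("utf-8")                  # typically sub-millisecond
    value = query_db(user_id)                          # typically tens of milliseconds or more
    cache.set(key, value.encode("utf-8"), expire=300)  # keep it warm for 5 minutes
    return value
```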
As for latency, yes - I was referring to AWS' network. While I'm less familiar with Memcachier's offering, we (Redis Labs) make sure that our Memcached Cloud and Redis Cloud instances are co-located in the same region as Heroku's dynos to ensure the minimal possible latency. In addition, we also have an Availability Zone Mapping utility that makes it possible to have the application and cache instances reside within the same zone for the same purpose.

AWS RDS Provisioned IOPS really worth it?

As I understand it, RDS Provisioned IOPS is quite expensive compared to standard I/O rate.
In the Tokyo region, the P-IOPS rate is $0.15/GB plus $0.12 per provisioned IOPS for a standard deployment. (Double the price for a Multi-AZ deployment.)
For P-IOPS, the minimum required storage is 100 GB and the minimum provisioned IOPS is 1,000.
Therefore, the starting cost for P-IOPS is $135, excluding instance pricing.
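For reference, that figure is just the two minimums multiplied by the rates quoted above:

```python
# Back-of-the-envelope starting cost, using the Tokyo-region rates quoted above.
storage_gb = 100          # minimum storage for P-IOPS
provisioned_iops = 1000   # minimum provisioned IOPS
storage_rate = 0.15       # $/GB
iops_rate = 0.12          # $/IOPS

print(storage_gb * storage_rate + provisioned_iops * iops_rate)  # 135.0, before instance pricing
```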
For my case, using P-IOPS costs about 100X more than using standard I/O rate.
This may be a very subjective question, but please give some opinion.
For a database fully optimized for RDS P-IOPS, would the performance be worth the price?
or
The AWS site gives some insights on how P-IOPS can benefit the performance. Is there any actual benchmark?
SELF ANSWER
In addition to the answer that zeroSkillz wrote, I did some more research. However, please note that I am not an expert at reading database benchmarks. Also, the benchmark and the answer were based on EBS.
According to an article written by Rodrigo Campos, the performance does actually improve significantly.
Going from 1,000 IOPS to 2,000 IOPS, the read/write (including random read/write) performance doubles. From what zeroSkillz said, a standard EBS block provides about 100 IOPS. Imagine the improvement in performance when 100 IOPS goes up to 1,000 IOPS (the minimum for a P-IOPS deployment).
Conclusion
According to the benchmark, the performance/price seems reasonable. For performance-critical situations, I guess some people or companies should choose P-IOPS even when they are charged 100X more.
However, if I were a financial consultant at a small or medium business, I would just scale up (CPU, memory) my RDS instances gradually until the performance/price matched P-IOPS.
OK. This is a bad question because it doesn't mention the size of the allocated storage or any other details of the setup. We use RDS and it has its pluses and minuses. First, you can't use an ephemeral storage device with RDS. You can't even access the storage device directly when using the RDS service.
That being said, the storage medium for RDS is presumed to be based on a variant of EBS from Amazon. Performance for standard IOPS depends on the size of the volume, and there are many sources stating that above 100 GB of storage they start to "stripe" EBS volumes. This provides better average-case data access on both reads and writes.
We currently run about 300 GB of storage allocation and can get 2k write IOPS and 1k read IOPS about 85% of the time over a several-hour period. We use Datadog to log this so we can actually see it. We've seen bursts of up to 4k write IOPS, but nothing sustained like that.
The main symptom we see on the application side is lock contention if the IOPS for writing is not enough. The number and frequency of these in your application logs will tell you when you are exhausting the IOPS of standard RDS. You can also use a service like Datadog to monitor the IOPS.
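If you are not on Datadog, you can pull the same signal straight from CloudWatch. A rough boto3 sketch (the instance identifier, region, and the 900-IOPS flag threshold are placeholders for your own setup):

```python
# Sketch: pull recent WriteIOPS for an RDS instance from CloudWatch with boto3.
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-northeast-1")  # placeholder region

def recent_write_iops(db_instance_id: str, minutes: int = 60):
    now = datetime.datetime.utcnow()
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName="WriteIOPS",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": db_instance_id}],
        StartTime=now - datetime.timedelta(minutes=minutes),
        EndTime=now,
        Period=300,                      # 5-minute buckets
        Statistics=["Average", "Maximum"],
    )
    return sorted(resp["Datapoints"], key=lambda d: d["Timestamp"])

if __name__ == "__main__":
    for point in recent_write_iops("my-rds-instance"):   # placeholder identifier
        flag = "  <-- nearing the sustained ceiling?" if point["Maximum"] > 900 else ""
        print(f"{point['Timestamp']:%H:%M}  avg={point['Average']:.0f}  max={point['Maximum']:.0f}{flag}")
```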
The problem with provisioned IOPS is that it assumes a steady-state volume of writes/reads in order to be cost effective. This is almost never a realistic use case, and it is exactly the kind of problem cloud services were started to fix. The only assurance you get with P-IOPS is that a maximum throughput capability is reserved for you. If you don't use it, you still pay for it.
If you're OK with running replicas, we recommend running a read-only replica as a NON-RDS instance, i.e. putting it on a regular EC2 instance. You can get better read IOPS at a much cheaper price by managing the replica yourself. We even set up replicas outside AWS using stunnel, with SSD drives as the primary block device, and we get ridiculous read speeds for our reporting systems - literally 100 times faster than we get from RDS.
I hope this helps give some real-world details. In short, in my opinion, unless you must ensure a certain level of throughput capability on a constant basis (or your application will fail), there are better alternatives to provisioned IOPS, including read/write splitting with read replicas, memcached, etc.
So, I just got off a call with an Amazon systems engineer, and he had some interesting insights related to this question. (i.e. this is second-hand knowledge.)
Standard EBS blocks can handle bursty traffic well, but they will eventually taper off to about 100 IOPS. The engineer suggested several alternatives.
Some customers use multiple small EBS blocks and stripe them. This will improve IOPS and is the most cost-effective option. You don't need to worry about mirroring because EBS is mirrored behind the scenes. (A provisioning sketch for this approach appears at the end of this answer.)
Some customers use the ephemeral storage on the EC2 instance (or RDS instance) and have multiple slaves to "ensure" durability. The ephemeral storage is local storage and much faster than EBS. You can even use SSD-provisioned EC2 instances.
Some customers will configure the master to use provisioned IOPS or SSD ephemeral storage, then use standard EBS storage for the slave(s). Expected performance is good, but failover performance is degraded (though still available).
Anyway, if you decide to use any of these strategies, I would recheck with Amazon to make sure I haven't forgotten any important steps. As I said before, this is second-hand knowledge.
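If you go down the stripe-it-yourself road from the first suggestion, here is a rough boto3 sketch of provisioning and attaching the volumes. The instance ID, availability zone, sizes, and device names are placeholders, and you would still assemble the RAID-0 array yourself (e.g. with mdadm) on the host:

```python
# Sketch: create several small EBS volumes and attach them to one EC2 instance so
# they can be striped (RAID-0) on the host. IDs, AZ, sizes, and devices are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="ap-northeast-1")

INSTANCE_ID = "i-0123456789abcdef0"      # placeholder; must be your EC2 (not RDS) instance
AZ = "ap-northeast-1a"                   # must match the instance's availability zone
DEVICES = ["/dev/sdf", "/dev/sdg", "/dev/sdh", "/dev/sdi"]

volume_ids = []
for device in DEVICES:
    vol = ec2.create_volume(AvailabilityZone=AZ, Size=25, VolumeType="standard")
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
    ec2.attach_volume(VolumeId=vol["VolumeId"], InstanceId=INSTANCE_ID, Device=device)
    volume_ids.append(vol["VolumeId"])

print("attached:", volume_ids)
# On the instance itself you would then build the stripe, e.g.:
#   mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdf /dev/sdg /dev/sdh /dev/sdi
```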

At what requested disk size does Amazon RDS use striping?

From the Amazon RDS FAQ Page
http://aws.amazon.com/rds/faqs/
"Depending on the size of storage requested, Amazon RDS automatically stripes across multiple EBS volumes to enhance IOPS performance"
What disk size do I need to request to trigger disk striping? I've only heard rumors of 300GB.
I had a conversation with upper management at AWS regarding this. Here is the direct answer provided by them.
You will realize improvements in RDS throughput by scaling storage as high as 500 GB, and this effect starts at a level well under 100 GB (i.e. striping occurs at a far lower level than 300 GB).
The most important factor in realizing this throughput potential is the instance class. Specifically, the following instance classes are considered High I/O instances:
m1.xlarge
m2.2xlarge
m2.4xlarge
These instances have large network bandwidth available to them, so the upgrade you mentioned on Stack Overflow (to the m2.2xlarge instance) was likely the main reason you saw a leap in throughput. If you scale your current storage as high as 500 GB, this will continue to increase. With provisioned IOPS support for RDS (PIOPS, announced last night), throughput will now scale linearly all the way to 1 TB.
With PIOPS, the throughput rate you can expect is currently associated with the amount of allocated storage. For Oracle and MySQL databases, you will realize a very consistent 1,000 IOPS for each 100GB you allocate – resulting in a potential throughput max of 10K IOPS. The (current, temporary) downside is that you will need to unload/load data to migrate an existing app to the PIOPS RDS.
Last I checked, 300 GB triggers back-end striping.
I was told recently by an AWS engineer during an architecture review that 100 GB will trigger EBS striping.

Load balance/distribution for postgresql

I am coming here after spending considerable time trying to understand how to implement load balancing (distributing database processing load) between PostgreSQL database servers.
I have a PostgreSQL system which handles hundreds of transactions per second, and this is likely to grow. Please note that my workload has many updates and inserts as well as selects, so any solution needs to cater to both writes (inserts/updates) and reads.
I am planning to use PL/Proxy, as suggested in the database tools from Skype at http://www.slideshare.net/adorepump/database-tools-by-skype.
Now I am also hearing that "PostgreSQL streaming replication + hot standby" in PostgreSQL 9.0 can be considered.
Can someone suggest whether there is a simple (or complex) solution to implement for the above scenario?
If your database is smaller than 100 GB, then you should first try to maximize what you can get from one computer.
You'd need:
a good storage controller with a large battery-backed cache;
a bunch of fast disks in RAID10;
another bunch of disks in RAID10 for WAL;
more RAM than you have data;
as many fast processor cores as you can get.
You'd be able to do several thousand TPS with this one computer.
If that isn't enough, I'd try to add a second server as a hot standby with streaming replication. You'd use it to run long-running read-only report queries, backups, etc., so your master server doesn't have to do these.
Only if that proves insufficient should you try to add more streaming-replication hot standby servers to load balance read-only queries. This will be complicated, though - because replication is asynchronous, there's a delay between the master confirming a change and the standby seeing it. You'd have to deal with that in your client application, and your setup will be a lot more complicated.
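One way to "deal with it in your client application" is to check how far the standby has replayed before routing a read to it. A minimal sketch with psycopg2 (DSNs and the lag threshold are placeholders; pg_last_xact_replay_timestamp() is available on standbys from PostgreSQL 9.1 onward):

```python
# Sketch: prefer the hot standby for reads, but only when it is reasonably caught up.
# DSNs and the threshold are placeholders; on an idle master the computed "lag" can
# look large simply because no transactions have been replayed recently.
import psycopg2

MASTER_DSN = "host=master.example.com dbname=app user=app"
STANDBY_DSN = "host=standby.example.com dbname=app user=app"
MAX_LAG_SECONDS = 5.0

def standby_lag_seconds(conn) -> float:
    with conn.cursor() as cur:
        cur.execute("SELECT EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp()))")
        lag = cur.fetchone()[0]
        return float(lag) if lag is not None else float("inf")

def connection_for_read():
    """Return a connection suitable for a read-only query."""
    standby = psycopg2.connect(STANDBY_DSN)
    if standby_lag_seconds(standby) <= MAX_LAG_SECONDS:
        return standby
    standby.close()
    return psycopg2.connect(MASTER_DSN)
```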

Two Questions Regarding AWS' RDS Multi AZ

I understand that when upgrading from a Single-AZ RDS instance to Multi-AZ, a "brief I/O freeze" occurs. What exactly does that mean?
When an upgrade is made to a Multi-AZ deployment, say from small to large, will the production database be impacted at all? Will it be able to use the backup database, then fail over?
Answers to your questions are below:
When you choose to move from Single-AZ to Multi-AZ, a brief I/O freeze happens. It means that for some duration the database won't be accessible; no read/write operations can be performed on it. Typically, the duration is around 3-4 minutes.
Yes, the production database will be affected when you resize the compute (from small to large). The best approach is to perform the resize during a scheduled maintenance window. If you select the Apply Immediately option, the database won't be accessible for some time (while control switches to the backup server).
Regards,
Sanket Dangi
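For what it's worth, the Apply Immediately choice Sanket describes corresponds to a flag on the modify call. A minimal boto3 sketch (instance identifier, region, and target class are placeholders):

```python
# Sketch: resize an RDS instance class with boto3. ApplyImmediately=False defers the
# change to the next maintenance window; True triggers it right away (on Multi-AZ
# that means a failover). Identifier, region, and class are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.modify_db_instance(
    DBInstanceIdentifier="my-production-db",
    DBInstanceClass="db.m1.large",
    ApplyImmediately=False,      # wait for the scheduled maintenance window
)
```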
The downtime when converting from Single-AZ to Multi-AZ is essentially the time it takes for a new instance to launch and become fully functional. As Sanket said, it may take a few minutes.
Scaling up a Multi-AZ deployment first scales up the standby instance, then performs a failover. The downtime is the time it takes to do the actual failover - usually closer to a minute.
Scaling out a Multi-AZ deployment is done by adding additional read replicas (sourced off of the standby), which incurs no interruption. Keep in mind that adding read replicas creates an eventually consistent system, which may or may not be desirable.
It's also worth noting that you should use the same instance type across all Multi-AZ instances; otherwise the imbalance may cause replica lag.
As you're probably realizing, it's best to start with a Multi-AZ configuration from the beginning. It makes scaling up and scaling out a lot easier, with less downtime.
Is question 1 still valid? According to AWS documentation (2022) there is no downtime, but there is a small decrease in performance.
Quoting AWS documentation:
When converting a DB instance from Single-AZ to Multi-AZ, Amazon RDS creates a snapshot of the database volumes and restores these to new volumes in a different Availability Zone. Although the newly restored volumes are available almost immediately, they don't reach their specified performance until the underlying storage blocks are copied from the snapshot.
Therefore, during the conversion from Single-AZ to Multi-AZ, you can experience elevated latency and performance impacts. This impact is a function of volume type, your workload, instance, and volume size, and can be significant and may impact large write-intensive DB instances during peak hour of operations.

Resources