I want to migrate an EC2 instance of MySQL 5.5 to at least MySQL 5.6 (ideally to MariaDB 10.x) on RDS. Naturally, I'm going to use replication to do that. Part of the procedure requires a master-master setup between EC2 MySQL and RDS MySQL, and I don't think that's doable (I've been trying for a while).
The reason for the master-master setup is this: once the application is pointing to the RDS endpoint, I'd like any changes made to the RDS database to be replicated back to the EC2 database in case we decide to roll back to using the EC2 instance. The MM setup won't be for long; probably overnight at most. And I can't think of a reason we'd roll back to the EC2 instance, but I'm overly paranoid when it comes to production data.
Is it possible to have a master-master setup with EC2 and RDS? Some sites say "no", others, "yes". I've been trying but keep running into problems (internal timestamp formats, lack of STATEMENT based replication on RDS, etc.).
If I can't do MM replication, is there another way for me to be able to roll back, just in case?
Oh, and I can't use AWS' Database Migration Service since I have a couple thousand databases to migrate (don't ask).
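For what it's worth, the one-way EC2 -> RDS leg does work. This is roughly what I've been running on the RDS side, sketched here with pymysql; the hosts, credentials, and binlog coordinates (taken from SHOW MASTER STATUS on the EC2 box) are all placeholders:

    # Sketch: configure the RDS instance to replicate FROM the EC2 master.
    # RDS blocks CHANGE MASTER TO (no SUPER), so it ships stored procedures instead.
    import pymysql

    rds = pymysql.connect(host="my-rds-endpoint.rds.amazonaws.com",
                          user="admin", password="secret")
    with rds.cursor() as cur:
        cur.execute(
            "CALL mysql.rds_set_external_master("
            "'ec2-master.example.com', 3306, "
            "'repl_user', 'repl_password', "
            "'mysql-bin.000042', 107, 0)"  # binlog file/pos from SHOW MASTER STATUS
        )
        cur.execute("CALL mysql.rds_start_replication")
    rds.close()

It's the reverse leg (RDS -> EC2), which would make it master-master, that I can't get stable.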
We are currently setting up AWS hosting for our Web Application.
This Laravel web application will have a schema per company that registers, meaning it will end up with a large MySQL server.
I have gone through the motions of setting up a VPC with EC2 instances and an RDS instance for this MySQL server.
However we are currently looking at using Laravel Forge as a tool to host.
What Forge does differently is that it puts the MySQL server on the EC2 instance, not on an RDS instance.
The question I have come to ask here is: what are the implications, if any, of having the MySQL server on the EC2 instance rather than an RDS instance?
Would there be performance issues?
Is it better practice to have an RDS?
Or is Forge's out-of-the-box way of packaging this all together on an EC2 server fine?
By running this on an EC2 instance you will be taking on more of the responsibility of managing the database: not just installation, but also patching, backups, and recovery. Harder-to-maintain functionality such as replication and HA will also be on you to implement and monitor.
By running on RDS, AWS does the heavy lifting and implements a best-practice version of MySQL, which gives you the flexibility of running a MySQL stack in the cloud without really having to think about the implementation details under the hood, other than deciding whether you want it to be HA and how many replicas you want.
That said, by using RDS you're also giving up the ability to run it however you want: you are limited to the versions of the database that RDS supports (although new versions now arrive quite soon after release). In addition, not all plugins or extensions are available, so check that the functionality you need is supported before deciding.
Is there a way to upgrade from Aurora 1 (MySQL 5.6) to Aurora 2 (MySQL 5.7) without downtime on an active database? This seems like a simple task given we should be able to simply do major version upgrades from either the CLI or the Console, but that is not the case.
We tried:
Creating a snapshot of the database
Creating a new cluster using Aurora 2 (MySQL 5.7) from the snapshot
Configuring replication from the primary cluster to the new cluster
However, because you can't run commands that require SUPER privileges in Aurora, you're not able to stop transactions long enough to get a good binlog pointer from the master, which results in a ton of SQL errors that are impossible to skip on an active database.
Also, because Aurora doesn't use binlog replication for its read replicas, I can't simply stop replication on a read replica and grab the pointer there.
I have seen this semi-related question, but it certainly requires downtime: How to upgrade AWS RDS Aurora MySQL 5.6 to 5.7
UPDATE: AWS just announced an in-place upgrade option for 5.6 > 5.7:
https://aws.amazon.com/about-aws/whats-new/2021/01/amazon-aurora-supports-in-place-upgrades-mysql-5-6-to-5-7/
It's as simple as choosing Modify and selecting a 2.x version. :)
I tested this (Aurora MySQL 5.6 > 5.7 on a 25 GB database, many minor versions behind) and it took 10 minutes, with 8 minutes of downtime. Not zero downtime, but a very easy option, and it can be scheduled in AWS to happen automatically during off-peak times (maintenance window).
Additionally, consider RDS Proxy to reduce downtime. During brief windows when the database is unavailable (e.g. a reboot for minor updates), the proxy holds connections open, so instead of the database appearing completely unavailable, clients just see a brief delay in latency.
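If you'd rather script the upgrade than click Modify in the console, a minimal boto3 sketch (the cluster identifier and the exact 2.x version string are placeholders):

    # Sketch: in-place Aurora MySQL 5.6 -> 5.7 major version upgrade.
    import boto3

    rds = boto3.client("rds")
    rds.modify_db_cluster(
        DBClusterIdentifier="my-aurora-cluster",
        EngineVersion="5.7.mysql_aurora.2.07.3",  # any supported 2.x version
        AllowMajorVersionUpgrade=True,
        ApplyImmediately=True,  # set False to wait for the maintenance window
    )

The same options exist on the modify-db-cluster CLI command if you want to schedule it yourself.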
We needed to upgrade AWS RDS Aurora MySQL from 5.6 to 5.7 without causing any downtime in production. Being a SaaS solution, we could not afford any downtime.
Background
We have a distributed architecture based on microservices running in AWS Fargate and AWS Lambda. AWS RDS Aurora MySQL is used for data persistence. While other services are in use, they are not of interest in this use case.
Approach
After deliberating over an in-place upgrade with a declared downtime and maintenance window, we realized that a zero-downtime upgrade was what we needed, as anything else would have created a processing backlog for us.
The high-level approach was:
Create an AWS RDS Cluster with the required version and copy the data from the existing RDS Cluster to this new Cluster
Set up AWS DMS (Database Migration Service) between these two clusters (see the sketch below)
Once the replication has caught up and is ongoing, switch the application to point to the new DB. In our case, the microservices running in AWS Fargate had to be updated with the new endpoint, and Fargate took care of draining the old tasks and using the new ones.
For the complete post, please check out
https://bharatnainani1997.medium.com/aws-rds-major-version-upgrade-with-zero-downtime-5-6-to-5-7-b0aff1ea1f4
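As a rough illustration of step 2, here is a boto3 sketch of the DMS task; all ARNs and names are placeholders, and the source/target endpoints and the replication instance must already exist:

    # Sketch: DMS task that does a full load, then keeps replicating changes (CDC).
    import json
    import boto3

    dms = boto3.client("dms")
    dms.create_replication_task(
        ReplicationTaskIdentifier="aurora-56-to-57",
        SourceEndpointArn="arn:aws:dms:region:account:endpoint:SOURCE",
        TargetEndpointArn="arn:aws:dms:region:account:endpoint:TARGET",
        ReplicationInstanceArn="arn:aws:dms:region:account:rep:INSTANCE",
        MigrationType="full-load-and-cdc",  # initial copy + ongoing replication
        TableMappings=json.dumps({
            "rules": [{
                "rule-type": "selection",
                "rule-id": "1",
                "rule-name": "all-tables",
                "object-locator": {"schema-name": "%", "table-name": "%"},
                "rule-action": "include",
            }]
        }),
    )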
When you create a new Aurora cluster from a snapshot, you get a binlog pointer in the error log from the point at which the snapshot was taken. You can use that to set up replication from the old cluster to the new cluster.
I've followed a process similar to what you've described in your question (multiple times, in fact) and was able to keep the actual downtime in the low-seconds range.
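In practice the wiring looks something like this sketch (pymysql; the binlog file and position come from the new cluster's error log, everything else is a placeholder):

    # Sketch: point the restored 5.7 cluster at the old 5.6 cluster using the
    # coordinates logged when the snapshot was restored.
    import pymysql

    new_cluster = pymysql.connect(
        host="new-57-cluster.cluster-xyz.us-east-1.rds.amazonaws.com",
        user="admin", password="secret")
    with new_cluster.cursor() as cur:
        cur.execute(
            "CALL mysql.rds_set_external_master("
            "'old-56-cluster.cluster-abc.us-east-1.rds.amazonaws.com', 3306, "
            "'repl_user', 'repl_password', "
            "'mysql-bin-changelog.000123', 154, 0)"  # values from the error log
        )
        cur.execute("CALL mysql.rds_start_replication")
    new_cluster.close()

Just make sure the old cluster has binary logging enabled (binlog_format in its cluster parameter group) and retains binlogs long enough, e.g. CALL mysql.rds_set_configuration('binlog retention hours', 24).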
I have an application that we are currently running on a number of co-located servers and I'm interested in moving everything to the cloud.
I have a legacy application running Postgres and its replacement application using MySQL as its data store. I'm interested in moving to EC2 and am looking to do this as painlessly as possible. I was planning on using Amazon RDS for the MySQL data store but am looking for options for the Postgres install.
I know that Heroku is built on top of EC2 and has Postgres support, and was wondering: has anyone had any experience accessing a Heroku Postgres database from an application running in EC2? Comments on performance, reliability, and ease of administration would be appreciated.
The other alternative is to install Postgres on EC2 with EBS volumes, but I've heard mixed reviews on performance, reliability, and ease of administration.
Thanks in advance, any experience and suggestions would be greatly appreciated.
I've done this with several colocated boxes on the east coast. Heroku actually has a completely independent service, Heroku Postgres, which is built for this specific use case. The databases you create are all independent (not tied to any Heroku apps).
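Accessing one from EC2 is just an ordinary Postgres connection over the hostname Heroku gives you. A minimal sketch with psycopg2 (the URL is a stand-in for your real credentials; Heroku Postgres requires SSL):

    # Sketch: connect to a standalone Heroku Postgres database from an EC2 box.
    import psycopg2

    conn = psycopg2.connect(
        "postgres://user:password@ec2-203-0-113-10.compute-1.amazonaws.com:5432/dbname",
        sslmode="require",  # Heroku Postgres rejects unencrypted connections
    )
    with conn.cursor() as cur:
        cur.execute("SELECT version()")
        print(cur.fetchone())
    conn.close()

And since Heroku itself runs on EC2 (as you noted), an app in the same region keeps latency reasonable.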
Reading about and using Amazon Web Services, I'm not really able to grasp how to use it correctly. Sorry about the long question:
I have an EC2 instance which mostly does the work of a web server (Apache for file sharing and Tomcat with Play Framework for the web app). As it's a web server, the instance is running 24/7.
It just came to my attention that the data on the EC2 instance is not persistent. This means I lose my database and files if it's stopped. But I guess it also means my server settings and installed applications are lost, as they are just files in the same way as the other data.
This means that I will either have to rewrite the whole app to use Amazon CloudDB, or write some code which stores the db on S3 and make my own AMI with the correct applications installed and configured. Or can this be quick-fixed by using EBS somehow?
My questions are: 1. Is my understanding of AWS correct? 2. Is it worth it? A possibility would be to just set up a regular dedicated server where everything is persistent, as you would expect. I would love to have the scalability of AWS though...
If you use an EBS volume with your EC2 instance, you can attach/detach it to get persistent storage. You can also use Amazon RDS to handle your database, which is handy (but can be slightly on the pricier side).
So a way to think of it is:
Your EC2 instance: Get the OS set up exactly like you'd like it along with your web application - basically, get your static stuff all in place.
EBS volume: That can be attached and mounted, and can be used for things like user uploads (see the sketch after this list).
RDS instance: This is a dedicated database server with no hassles. It's nice - I use a MySQL RDS instance and it automatically makes daily backups, and it's scalable like EC2 instances.
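To make the EBS part concrete, here's a minimal sketch of attaching a volume with boto3; the IDs and device name are placeholders, and the volume must be in the same Availability Zone as the instance:

    # Sketch: attach an EBS volume to a running instance.
    import boto3

    ec2 = boto3.client("ec2")
    ec2.attach_volume(
        VolumeId="vol-0123456789abcdef0",
        InstanceId="i-0123456789abcdef0",
        Device="/dev/sdf",  # typically shows up as /dev/xvdf on Linux AMIs
    )
    # Then, on the instance itself (first use only):
    #   mkfs -t ext4 /dev/xvdf
    #   mkdir -p /data && mount /dev/xvdf /data
    # Add an /etc/fstab entry so the mount survives reboots.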
Amazon Web Services is a good approach to hosting your applications, Jon. You have a basic understanding of AWS, but you should know that you can also launch an instance that is persistent: just launch an instance from an EBS-backed (persistent) AMI. You can then install your database and web server on the instance like on a regular server. There are probably only minimal differences between running an EC2 instance and a dedicated server. If you have any other questions you can contact me.
I am planning to deploy my web app to EC2. I have several webserver instances. I have 1 primary database instance. I have 1 failover database instance. I need a strategy to redirect the webservers to the failover database instance IP when the primary database instance fails.
I was hoping I could use an Elastic IP in my connection strings. But, the webservers are not able to access/ping the Elastic IP. I have several brute force ideas to solve the problem. However, I am trying to find the most elegant solution possible.
I am using all .Net and SQL Server. My connection strings are encrypted.
Does anybody have a strategy for failing over a database instance in EC2 using some form of automation or DNS configuration?
Please let me know.
http://alestic.com/2009/06/ec2-elastic-ip-internal tells you how to use the Elastic IP's public DNS name.
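The short version of the trick, as a sketch (the hostname is a placeholder):

    # Sketch: an Elastic IP's *public* DNS name resolves to the instance's
    # *private* IP when looked up from inside EC2, so the same connection
    # string works from both inside and outside AWS.
    import socket

    db_host = "ec2-203-0-113-25.compute-1.amazonaws.com"  # Elastic IP public DNS
    print(socket.gethostbyname(db_host))
    # From an EC2 instance this prints a 10.x.x.x private address;
    # from the internet it prints the Elastic IP itself.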
I haven't used EC2, but surely you need to either:
(a) put your front end into some custom maintenance mode that you define while you switch the IP over, and have the front end perform the steps required to manage potential data integrity and data loss issues (related to the previous server going down and the new server coming up) when it enters and leaves your custom maintenance mode;
OR, for a zero-downtime system:
(b) design the system at the object/relational and transaction levels from the ground up to support zero-downtime failover. It's not something you can bolt on quickly to just any application.
(c) use some database support for automatic failover. I am unaware whether SQL Server failover support suitable for your application exists or is appropriate here. I suggest adding a "sql-server" tag to the question to reach the right audience.
If Elastic IPs don't work (which sounds odd, to say the least - shouldn't you talk to AWS about that?), you may have to be able to instruct your front end which new database IP to use at the same time as telling it to go from maintenance mode to normal mode.
If you're willing to shell out a bit of extra money, take a look at RightScale's tools; they've built custom server images and supporting tools that handle database failover (among many other things). This link explains how to do it with MySQL, so it will hopefully show you some principles even though it doesn't use SQL Server.
I always thought this was possible in the connection string. This is taken (but not yet tested) from How to add Failover Partner to a connection string in VB.NET:
If you connect with ADO.NET or the SQL Native Client to a database that is being mirrored, your application can take advantage of the driver's ability to automatically redirect connections when a database mirroring failover occurs. You must specify the initial principal server and database in the connection string, as well as the failover partner server.

    Data Source=myServerAddress;Failover Partner=myMirrorServerAddress;
    Initial Catalog=myDataBase;Integrated Security=True;

There are of course many other ways to write the connection string using database mirroring; this is just one example pointing out the failover functionality. You can combine this with the other connection string options available.
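If part of your stack isn't ADO.NET, the same idea is exposed through ODBC. A sketch with pyodbc, assuming the SQL Server Native Client driver (which, as far as I know, spells the keyword Failover_Partner); server names are placeholders:

    # Sketch: mirroring-aware connection via ODBC instead of ADO.NET.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={SQL Server Native Client 10.0};"
        "SERVER=myServerAddress;"
        "Failover_Partner=myMirrorServerAddress;"
        "DATABASE=myDataBase;"
        "Trusted_Connection=yes;"
    )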
To broaden Gareth's answer, cloud management software usually solves this type of problem. RightScale is one option, but you can also try enStratus or Scalr (disclaimer: I work at Scalr). These tools provide failover solutions like:
Backups: you can schedule automated snapshots of the EBS volume containing the data
Fault-tolerant database: in the event of failure, a slave is promoted to master, and the mounted storage is either switched over (if the failed master and the new master are in the same AZ) or recreated from a snapshot of the volume
If you want to build your own solution, you could replicate the process detailed below, which we use at Scalr (a rough sketch of the first case follows the list):
Is there a slave in the same AZ? If so, promote it, switch the EBS volumes (which are limited to a single AZ), switch any Elastic IP you might have, and reconfigure replication on the remaining slaves.
If not, is there a slave fully replicated in another AZ? If so, promote it, then do the above.
If there is no slave in the same AZ and no slave fully replicated in another AZ, create a snapshot of the master's volume and use that snapshot to create a new volume in an AZ where a slave is running. Then do the above.
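A rough boto3 sketch of the first case; all IDs are placeholders, and promoting the MySQL slave itself plus reconfiguring replication are left out:

    # Sketch: move the data volume and Elastic IP from the dead master to
    # the slave being promoted (both must be in the same AZ).
    import boto3

    VOL = "vol-0123456789abcdef0"       # data volume of the failed master
    NEW_MASTER = "i-0fedcba9876543210"  # the slave being promoted
    EIP_ALLOC = "eipalloc-0123456789abcdef0"

    ec2 = boto3.client("ec2")

    # Detach the data volume from the failed master and wait for it to free up.
    ec2.detach_volume(VolumeId=VOL, Force=True)
    ec2.get_waiter("volume_available").wait(VolumeIds=[VOL])

    # Attach it to the promoted slave (same-AZ requirement applies).
    ec2.attach_volume(VolumeId=VOL, InstanceId=NEW_MASTER, Device="/dev/sdf")

    # Repoint the Elastic IP at the new master.
    ec2.associate_address(InstanceId=NEW_MASTER, AllocationId=EIP_ALLOC)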