Scale down Heroku DB from production to hobby - heroku

Is it possible to scale down a Heroku DB from production to hobby (the free Dev plan), as long as we stay within the row limits? A site I'm working on requires a production-grade DB for a few weeks, but then it'll be quiet for a while. Haven't been able to find any info on this.

It shouldn't be a problem. Heroku has a guide to upgrading with backups, so I'd recommend taking a backup of your current database, downloading it to your local computer, and then spinning up a development database.
Once the development (free) DB is ready, restore from the pgbackup on your local machine. As long as you're under the row limit, you should be fine.
Obviously, you'd want to put the site in maintenance mode when you do all of this - but it shouldn't be down for more than 5-10 minutes.
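For illustration, a rough command sequence (the app name, add-on plan, and attachment color are placeholders, and the backup commands have been renamed across CLI versions, so check heroku help pg first):

heroku pg:backups:capture --app your-app                                 # snapshot the production DB
heroku pg:backups:download --app your-app                                # optional local copy
heroku addons:create heroku-postgresql:hobby-dev --app your-app          # provision the free DB
heroku maintenance:on --app your-app
heroku pg:backups:restore b001 HEROKU_POSTGRESQL_PINK --app your-app     # restore the captured backup into the new DB
heroku pg:promote HEROKU_POSTGRESQL_PINK --app your-app
heroku maintenance:off --app your-app

The attachment color (PINK here) will be whatever Heroku assigns when the add-on is created.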

Related

Downgrade Heroku Postgres from standard to hobby

I'm trying to downgrade a Heroku Postgres DB from standard to hobby basic. I'm not fully using the web app currently, but there is still some data in there that needs to be kept. How can I downgrade? (Some downtime is fine.)
Update: I managed to set up and promote a new database based on the instructions below, but I can't deprovision the old one.
heroku info shows:
Heroku's instructions for upgrading with pg:copy will also work for downgrading. Here's the summary:
Provision a new database
Enter maintenance mode to prevent database writes
Transfer data to the new database
Promote the new database
Exit maintenance mode
If your app isn't live (not being actively written to), you can skip the maintenance mode steps.
Once you've done that, you can deprovision your old database.
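A minimal sketch of those steps with the CLI (the plan name and attachment colors are just examples; yours will differ):

heroku addons:create heroku-postgresql:hobby-basic --app your-app        # provision the new, smaller database
heroku maintenance:on --app your-app                                     # stop writes while copying
heroku pg:copy DATABASE_URL HEROKU_POSTGRESQL_PINK_URL --app your-app    # copy primary -> new DB
heroku pg:promote HEROKU_POSTGRESQL_PINK_URL --app your-app              # make the new DB the primary
heroku maintenance:off --app your-app
heroku addons:destroy HEROKU_POSTGRESQL_SILVER --app your-app            # deprovision the old database by its attachment name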

Minimizing downtime in a multi-tenant SaaS web application with a separate database per tenant

We have a separate database for each tenant, which creates a lot of downtime when we deploy changes to the cloud. The steps (in brief) we follow whenever we have to deploy changes are:
Take down the client site.
Take a snapshot of the current RDS instance (in case anything goes south).
Run the migration scripts (changes) on each tenant database on the RDS instance.
If everything goes well, make the client site live again.
Now the problem is, we have around 250 tenants as of now, and the third step, running the update scripts, takes too much time, which in turn increases the downtime. Any suggestions on how to improve this process, or should we be doing it some other way? There is a clear lack of enterprise-level expertise on our end, so any help will be appreciated. Thanks!
Without knowing anything about your application, here are some things to think about:
If your application would still have some value when running in a 'read-only' mode, you could limit the actual downtime by doing the following.
Make sure all of your RDS databases have a read replica.
Set your application into 'read-only' mode (e.g. through some application code).
Let your read replica catch up with your master.
Promote your read replica to a stand-alone DB.
Run your updates against this copy of the database.
Redirect your application to the new master.
Create a new read replica from this new master.
Delete/archive your old database.
You still have to do all the work, and it still takes a while to run, but the actual downtime for the user should be minimal.
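For the replica steps, the AWS CLI sketch below assumes a single MySQL tenant database named tenant1 (identifiers, endpoint, and credentials are placeholders); in practice you would loop this over all 250 tenants:

aws rds create-db-instance-read-replica --db-instance-identifier tenant1-replica --source-db-instance-identifier tenant1   # if no replica exists yet
aws rds promote-read-replica --db-instance-identifier tenant1-replica                                                      # break replication; the replica becomes writable
mysql -h tenant1-replica.abc123.us-east-1.rds.amazonaws.com -u admin -p tenant1_db < migration.sql                         # run the migration against the promoted copy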

How to set up auto backup on a Heroku pg follower?

From pgbackups documentation:
Note that capturing a backup does add some load on your database for the duration of the backup. How this impacts your application will vary with the size of your database and the nature of the app. Consider taking backups on a follower if there is a significant impact from running them on the master.
I know I can create a manual backup using the command heroku pgbackups:capture FOLLOWER_DATABASE_URL
But when I add the pgbackups addon through the website https://addons.heroku.com/pgbackups it comes with auto backup that I don't know how to turn off. When installing the addon, it asks me which app to add it to, but not which database. I have no idea when the automatic backup will run, nor do I know which database it will run on, the primary or the follower.
The autobackup will run on the primary database -- you can only capture backups on a follower manually.
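If you do want to automate follower backups yourself, one option is to run the manual capture command on a schedule (e.g. from Heroku Scheduler or a cron job on a box that has the CLI and your credentials). A sketch, with WHITE standing in for whatever attachment name your follower actually has:

heroku pg:info --app your-app                                                    # find the follower's attachment name
heroku pgbackups:capture HEROKU_POSTGRESQL_WHITE_URL --expire --app your-app     # capture from the follower; --expire rotates the oldest manual backup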

Magento upgrading process and infrastructure for smallest possible downtime

I have a client who currently has one server with Magento, and his admin takes the whole site down for updates for multiple hours. I would like to make this a near-instant process, so I wanted to propose a new solution for how he should set it up:
Magento Production Server 1 (WEB+DB)
Magento Production Server 2 (WEB+DB)
Magento Dev Server 1
The DB would have to be synced somehow between those two servers (cluster? replication?). For the smallest possible downtime, I was thinking the updates should first be tested on the Dev Server (DB/WEB synced from a Production server just before upgrading). After checking that it works fine, and knowing what the process looks like, I would disable load balancing or round-robin DNS so only Server 1 is live, do the upgrades/updates on Server 2, then switch to Server 2 as the production server and update Server 1. When both are done, switch load balancing/round-robin back on.
I come from a Windows environment, so this is how I would do it on Windows (maybe with separate database and web tiers too), and with tools like Red Gate SQL Compare/SQL Data Compare etc. it should work.
But I don't know Magento at all, so please let me know what's possible and maybe how this should be done if the client doesn't want to end up with his shop being down...
You'll definitely need a production server, and some sort of staging/version management system.
I recommend checking out Subversion or Git for version management.
Changes can be committed to a repository first, and then updated to the live site with no downtime. This would be more than sufficient for a development environment.
For bigger changes, like a Magento version upgrade, you might still want/need to take the site down for a few hours in the middle of the night, as this is a much bigger process.
As for multiple servers, as an example I run a load balancer which balances between a primary and a secondary server. There is one database server that is separate. Changes are made to a development server, committed to the primary server with Subversion, and then any changes between the primary and secondary servers are rsynced to the secondary server every 60 seconds.
For this solution, session and cache data are stored in the database.
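For reference, the 60-second sync between the primary and secondary web roots can be as simple as a cron entry on the primary server (paths, user, and hostname are placeholders):

* * * * * rsync -az --delete /var/www/magento/ deploy@secondary:/var/www/magento/   # runs every minute; mirrors code to the secondary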
IMHO, with a good hosting environment, you won't need multiple servers unless you literally have thousands of simultaneous visitors. Plugins are the usual cause of admin-related problems.
We've had great success with "cloud" environments. Instantiate a new cloud instance, get that IP, then in your "hosts" file, point something like dev.yourdomain.com to it for testing. The only real downtime is that you should freeze the production site while the database converts to the new version, which can be a couple of hours. Our MySQL DB backup is 3 GB or so, but thankfully tgz's down to 280 MB.
We're using nginx and php-fpm and they are obscenely fast.
Typical migration path for me:
backup production site
start new cloud instance and copy production site to dev site (restore production database)
try upgrading dev site one step at a time to see what breaks
start new cloud instance and do completely fresh install of newest Magento version
once working, restore production database and watch as it grinds on converting it, see what breaks
pick between upgrade versus fresh install
back up production MySQL, put production site in maintenance mode while dev site converts the database
point domain to new IP address
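A rough sketch of the "freeze and back up" steps above for a Magento 1.x site (the path, database name, and credentials are placeholders):

touch /var/www/magento/maintenance.flag                               # Magento 1.x serves the maintenance page while this file exists
mysqldump -u magento_user -p magento_db | gzip > magento_db.sql.gz    # dump and compress the production DB for the new instance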

Setting up a collaborative environment for web application development

My office is growing and I've been tasked with building out the IT for our web development.
What's the best tool/setup for doing web development in a group setting? The requirements are a centralized code repository, a place to test development code, and finally a way to push tagged code out to a staging server. What I'm thinking is SVN/Redmine for the code repo; each user has an account on a central development machine to allow for SSH access (Eclipse over SSH) and their own virtual host on the dev server, which gives everyone a centralized development sandbox. Code is written and tested on this dev box, then checked back into SVN and later tagged and pushed out to the staging server. Thoughts, comments, or recommendations?
*Also, in a dev environment, what is the best way to handle databases? Is it wise to pull from the production database? Also, should each developer have his/her own DB or work off a master DB?
**We are building a Magento application and also have some custom back-office tools that run on CakePHP.
Although this subject is off-topic on StackOverflow and has been flagged as such, you need to concentrate on the following areas:
VERSION-CONTROL
Git has all the glory, and you don't need your own box for this, as https://bitbucket.org/ offers unlimited data and private/public repos where you can host your codebase. http://github.com is also powerful and the de facto most popular version-control-oriented tool out there, although it comes at a small price.
So your master branches live in your version control, and your devs will check out from there and commit to it as well.
Your deployment tools will deploy data to your live and staging environments from your master.
ENVIRONMENTS
Usually three are used: LIVE, STAGE, DEV.
LIVE is, well, live, and only approved code gets deployed there.
STAGE is the pre-live environment and should be an exact replica of LIVE, so everything can be tested there by the merchant.
DEV is nice to have as an exact replica too, but it can just as well be on the developer's local environment; it is meant for loose testing and experimenting.
DATABASES AND DEPLOYMENT
MySQL databases are a pain to sync, so you'd better have a script for it that syncs from LIVE to the other environments and prevents syncing from other environments to LIVE. This limitation also means that all configuration and content should be added on LIVE only and only then synced down the line. Every change to the schema or a permanent setting should be handled by update scripts (as we are talking Magento CE; Magento EE has migrations built in).
For deployment, I also suggest you build a Fabric or Capistrano script that resets the dev and staging environments, handles the database reset and pull from the LIVE DB, and imports code from the central repository.
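A minimal sketch of such a one-way sync script (hosts, database names, and credentials are placeholders; a real script would also scrub customer data):

#!/bin/bash
# one-way sync: LIVE -> STAGE; never run this in the other direction
mysqldump -h live-db-host -u sync_user -p"$LIVE_DB_PASS" live_db | gzip > /tmp/live_db.sql.gz
gunzip -c /tmp/live_db.sql.gz | mysql -h stage-db-host -u sync_user -p"$STAGE_DB_PASS" stage_db
# rewrite the Magento base URLs so STAGE doesn't answer for the LIVE domain
mysql -h stage-db-host -u sync_user -p"$STAGE_DB_PASS" stage_db -e "UPDATE core_config_data SET value='https://stage.example.com/' WHERE path IN ('web/unsecure/base_url','web/secure/base_url');"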
It's also a good idea to target the following everyday tasks:
The client needs to reset STAGE for their tests.
The project manager, developers, or testers need to test, so spawning a test clone should be a one-click action (take the current DB and code and make it live in some subfolder for a specific test only), as should deleting the test.
Third-party devs might need access to a specific test or dev environment (this is relevant with Magento, as on average there are at least 10 external extensions installed in every Magento store).
