Downgrade Heroku Postgres from standard to hobby

I'm trying to downgrade a Heroku Postgres database from standard to hobby basic. I'm not fully using the web app at the moment, but there is still some data in there that needs to be kept. How can I downgrade? (Some downtime is fine.)
Update: I managed to set up and promote a new database based on the instructions below, but I can't deprovision the old one.
heroku info shows:

Heroku's instructions for upgrading with pg:copy will also work for downgrading. Here's the summary:
Provision a new database
Enter maintenance mode to prevent database writes
Transfer data to the new database
Promote the new database
Exit maintenance mode
If your app isn't live (not being actively written to), you can skip the maintenance mode steps.
Once you've done that, you can deprovision your old database.
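Putting those steps together, a minimal sketch with the Heroku CLI (the PINK/GRAY attachment names are placeholders; use whatever names heroku pg:info reports for your add-ons):
# provision the new hobby-basic database and wait for it to come online
heroku addons:create heroku-postgresql:hobby-basic
heroku pg:wait
# stop writes while the data is copied (skip if the app isn't live)
heroku maintenance:on
# copy the current primary database into the new one
heroku pg:copy DATABASE_URL HEROKU_POSTGRESQL_PINK
# make the new database the default for DATABASE_URL
heroku pg:promote HEROKU_POSTGRESQL_PINK
heroku maintenance:off
# once you've confirmed the app works, deprovision the old database
heroku addons:destroy HEROKU_POSTGRESQL_GRAY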

Related

A couple of heroku postgres questions (just started, am lost)

I have provisioned Postgres on my Heroku app and also installed Postgres locally, to maintain parity with the online database (as the documentation recommends), but I don't understand how this will work. Am I supposed to be accessing a local copy of a database when running on my own computer (while building and before deploying) and then using Heroku's separate Postgres database once it is deployed? If it is parity, shouldn't they both be using the Heroku Postgres database?
In other words, will my local app (during development) and Heroku app (deployed and live) be using the same online Postgres database?
Thanks.
Am I supposed to be accessing a local copy of a database when running on my own computer (while building and before deploying) and then using Heroku's separate Postgres database once it is deployed?
Yes, that's exactly it. Without seeing the bit of documentation you're referencing it's hard to say exactly what they mean, but perhaps there's another way to explain it.
In your local development environment, you may find that you need to test database schema changes (that's just one example; there are many). If you only had the one Heroku Postgres database, you'd be forced to test these changes in production, which might result in poor usability for your users, and that doesn't even account for the possibility of making a mistake and accidentally destroying your production data. There are a number of other shortcomings and challenges with a single-database configuration.
For these reasons and more, it's best to keep your production data completely separated from your development/staging/test environments by creating a local/staging database. You might reasonably ask, "What about the data? I need data to test!" There are many ways to put together your test database, and which one you choose will likely depend on your needs. A shortlist of possibilities:
Use a seed file to generate mock data in your db
Use a model factory (usually runs in conjunction with your testing framework)
Take a dump of your production database, anonymize and redact sensitive information, and use that for local testing (see the sketch below).
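For that last option, a minimal sketch assuming the standard Heroku CLI and a local Postgres install (mylocaldb and your-app-name are placeholders; anonymize anything sensitive once the data is local):
# pull the remote primary database into a brand-new local database
# (mylocaldb must not already exist)
heroku pg:pull DATABASE_URL mylocaldb --app your-app-name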

Development versus Production in Parse.com

I want to understand how people are handling an update to a production app on the Parse.com platform. Here is the scenario that I am not sure about.
Create an app called myApp_DEV. The app contains a database as well as associated cloud code.
Once testing is complete and the app is ready for go-live, I will clone it into myApp_PRD (the production version). Cloning it will copy the database as well as the cloud code.
So far so good.
Now, 3 months down the line, I have added some functionality, which includes adding some cloud code functions as well as adding some new columns to the tables in the DB.
How do I update myApp_PRD with this new database structure? If I try to clone it from my DEV app, it tells me the app already exists.
If I clone a new app (say myApp_PRD2) from DEV, then all the data will be lost, since the customer is already live.
Any ideas on how to handle this scenario?
Cloud code supports deploying to production and development environments.
You'll first need to link your production app to your existing cloud code. This can be done on the command line:
parse add production
When you're ready to release, it's a simple matter of:
parse deploy production
See the Parse Documentation for all the details.
As for the schema changes, I guess we just have to manually add all the new columns.
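Manually adding the columns works; for what it's worth, Parse would also create a missing column automatically the first time an object was saved with that field, for example via the REST API. A hypothetical sketch (the app ID, REST key, class, and field names are all placeholders):
# saving an object with a new field auto-creates the column on that class
curl -X POST \
  -H "X-Parse-Application-Id: YOUR_APP_ID" \
  -H "X-Parse-REST-API-Key: YOUR_REST_KEY" \
  -H "Content-Type: application/json" \
  -d '{"newColumn": "example value"}' \
  https://api.parse.com/1/classes/MyTable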

upgrading to postgres on Heroku

What is the recommended way to upgrade a Heroku Postgres production database to 9.2 with minimal downtime? Is it possible to use a follower, or should we take the pgbackups/snapshots route?
Until logical followers arrive in 9.4, you'll have to dump and restore (for the reasons Craig describes). You can simplify this with pgbackups:transfer. The direct transfer is faster than a dump and restore, but note that you won't have a snapshot to keep.
The script below is basically Heroku's Using PG Backups to Upgrade Heroku Postgres Databases, modified to use pgbackups:transfer. (If you have multiple instances, say a staging server, add "-a" or "--remote" to each heroku line to specify which app.)
# get the pgbackups plugin
heroku plugins:install git://github.com/heroku/heroku-pg-extras.git
# provision new db
heroku addons:add heroku-postgresql:crane --version=9.2
# wait for it to come online, make note of new color
heroku pg:wait
# prevent new data from arriving during dump
heroku ps:scale worker=0 web=0
heroku maintenance:on
# copy over the DB. could take a while.
heroku pgbackups:transfer OLDCOLOR NEWCOLOR
# promote new database as default for DATABASE_URL
heroku pg:promote NEWCOLOR
# start everything back up and test
heroku ps:scale worker=N web=N
heroku maintenance:off
heroku open
# remove old database
heroku addons:remove HEROKU_POSTGRESQL_OLDCOLOR
Note that if you compare data sizes between the two, the new one may be much smaller because of storage efficiencies in 9.2. (My 9.2 database was about 70% of the size of the 9.1 one.)
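One way to check, while both databases are still attached: pg:info reports the data size for each, so you can compare them side by side before removing the old one.
# compare data sizes of the attached databases
heroku pg:info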
Heroku followers are, AFAIK, just PostgreSQL streaming replica servers. This means you can't use them across versions; you must have binary-compatible databases.
The same techniques apply as for ordinary PostgreSQL, except that you may not be able to use pg_upgrade on Heroku. That requires shell (ssh, etc.) access as the postgres user on the system that hosts the database, so I doubt it's possible on Heroku unless they've provided a tool to run pg_upgrade for you. I can't find much information on this.
You will probably have to look at using Slony-I, Bucardo, or another trigger-based replication solution to do the upgrade, unless you can find a way to run pg_upgrade on a Heroku database instance. The general idea is that you set up a new 9.2 instance, use Slony to clone data from the 9.1 instance into it, and then, once they're fully in sync, stop the 9.1 instance, remove the Slony triggers, and switch clients over to the 9.2 instance.
Search for more information on "postgresql low downtime upgrade slony" and the like, and see how you go.

Heroku shared database not resetting

Trying to reset my Rails app's shared database on Heroku.
Doing the following appears to work.
heroku pg:reset SHARED_DATABASE --confirm rabid-raccoon-2000
I get: Resetting SHARED_DATABASE (DATABASE_URL)... done
And running heroku run rake db:migrate after that appears to work as well. But when I run heroku run console, or try to use the app, it does not reflect the changes (it still uses an ancient DB schema, even right after I reset it).
I've tried this with both the free 5 MB DB and the $15 shared DB, both to no avail. I have no idea which DB it's actually working with.
My database.yml is checked into version control, but I don't see how that could be a problem.
Just deleted the app and started over. Explanations are welcome.
Just a thought... I followed the directions here to set up a beta PostgreSQL database. The plus is that it gives me direct access to the database, so I can change anything needed by my tables.
I then removed the generated .sql file with git rm conf/evolutions/default/1.sql, committed, and pushed to Heroku. Happily, the app is now working!
This issue is very frustrating, especially since it mostly affects people using Heroku for the first time (with the shared database). It wasn't the database script, since it worked just fine on the local dev database. Hope this helps you out next time.
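For anyone debugging something similar, one sanity check (not from the original thread) is to confirm which database the dynos actually connect to, since a checked-in database.yml can shadow the Heroku-provided configuration:
# confirm which database the app is really pointed at
heroku config:get DATABASE_URL
heroku pg:info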

Magento upgrading process and infrastructure for smallest possible downtime

I have a client who currently has one server running Magento, and his admin takes the whole site down for multiple hours to apply updates. I would like to make this a near-instant process, so I want to propose a new setup:
Magento Production Server 1 (WEB+DB)
Magento Production Server 2 (WEB+DB)
Magento Dev Server 1
The DB would have to be synced somehow between those two servers (cluster? replication?). For the smallest possible downtime, I was thinking the updates should first be tested on the Dev Server (DB/web synced from the production servers just before upgrading). After checking that everything works and knowing what the process looks like, I would disable load balancing or round-robin DNS so traffic only hits Server 1, do the upgrades/updates on Server 2, switch to Server 2 as the production server, and then update Server 1. When both are done, switch load balancing/round robin back on.
I come from a Windows environment, so this is how I would do it on Windows (maybe with separate database and web servers too), and with tools like Red Gate SQL Compare/SQL Data Compare etc. it should work.
But I don't know Magento at all, so please let me know what's possible, and maybe how this should be done if the client doesn't want to end up with his shop being down...
You'll definitely need a production server, and some sort of staging/version management system.
I recommend checking out Subversion or Git for version management.
Changes can be committed to a repository first, and then updated to the live site with no downtime. This would be more than sufficient for a development environment.
For bigger changes, like a Magento version upgrade, you might still want/need to take the site down for a few hours in the middle of the night, as this is a much bigger process.
As for multiple servers, as an example I run a load balancer which balances between a primary and a secondary server. There is one database server that is separate. Changes are made to a development server, committed to the primary server with Subversion, and then any changes between the primary and secondary servers are rsynced to the secondary server every 60 seconds.
For this solution, session and cache data are stored in the database.
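As a rough illustration of that sync step (the paths and hostname are invented; the answer doesn't include its exact configuration), a crontab entry on the primary could push the docroot to the secondary once a minute:
# hypothetical crontab entry on the primary server:
# mirror the Magento docroot to the secondary every 60 seconds
* * * * * rsync -az --delete /var/www/magento/ secondary:/var/www/magento/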
IMHO, with a good hosting environment, you won't need multiple servers unless you're literally serving thousands of simultaneous visitors. Plugins are the usual cause of admin-related problems.
We've had great success with "cloud" environments. Instantiate a new cloud instance, get its IP, and then in your hosts file point something like dev.yourdomain.com at it for testing. The only real downtime is that you should freeze the production site while the database converts to the new version, which can take a couple of hours. Our MySQL DB backup is 3 GB or so, but thankfully it tgz's down to 280 MB.
We're using nginx and php-fpm and they are obscenely fast.
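For the hosts-file trick, the entry is just the new instance's IP next to the test hostname (203.0.113.10 is a placeholder):
# example /etc/hosts line pointing dev.yourdomain.com at the new cloud instance
203.0.113.10    dev.yourdomain.com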
Typical migration path for me:
backup production site
start new cloud instance and copy production site to dev site (restore production database)
try upgrading dev site one step at a time to see what breaks
start new cloud instance and do completely fresh install of newest Magento version
once working, restore production database and watch as it grinds on converting it, see what breaks
pick between upgrade versus fresh install
back up production MySQL, put production site in maintenance mode while dev site converts the database
point domain to new IP address
