One downside of pushing to Heroku is that I must push the code (and the server restarts automatically) before running my db migrations.
This can obviously cause 500 errors for users navigating the website, because the new code is live without the new tables/attributes. The solution Heroku proposes is maintenance mode, but I want an approach with no downside that keeps my web app running the whole time!
Is there a way? For example, with Capistrano:
I prepare the code to deploy in a new dir
I run (backward-compatible) migrations and the old code continues to work perfectly
I switch the Mongrel instance to the new dir and restart the server
...and I have no downtime!
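To make the idea concrete, here is a rough sketch of the kind of symlink-switch deploy I mean on a self-managed server; the paths, repo URL, and Mongrel restart command are placeholders, not anything Heroku provides:
# hypothetical symlink-switch deploy (placeholders throughout, not a Heroku feature)
RELEASE=/var/www/app/releases/$(date +%Y%m%d%H%M%S)
git clone --depth 1 git@example.com:me/app.git "$RELEASE"   # prepare the new code in a new dir
cd "$RELEASE" && bundle install --deployment                # install gems for the new release
RAILS_ENV=production bundle exec rake db:migrate            # backward-compatible migrations; old code keeps working
ln -sfn "$RELEASE" /var/www/app/current                     # switch the running instance to the new dir
/etc/init.d/mongrel_cluster restart                         # restart the server; no maintenance page needed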
You could set up a second Heroku app that points to the same DB as your primary production app and use the secondary app to run your DB migrations without interrupting production (assuming the migrations don't break the previous version of your app).
Let's call the Heroku apps PRODUCTION and STAGING.
Your deploy sequence would become something like:
Deploy new code to STAGING
git push heroku staging
Run database migrations on STAGING (to update PROD db)
heroku run -a staging-app rake db:migrate
Deploy new code to PRODUCTION
git push heroku production
The staging app won't cost you anything, since you won't need to exceed Heroku's free tier, and it would be pretty trivial to set up a rake deploy script to do this for you automatically; for example:
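A minimal sketch of such a deploy script, shown here as a shell script you could wrap in a rake task; the git remotes staging and production are assumptions about your setup:
#!/bin/sh
set -e
git push staging master                    # deploy new code to STAGING
heroku run rake db:migrate -a staging-app  # run migrations against the shared production DB
git push production master                 # deploy new code to PRODUCTION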
Good luck!
If you're able to live with two versions of the same app live at the same time, Heroku now has a preboot feature.
https://devcenter.heroku.com/articles/preboot
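If you want to try it, preboot is toggled per app from the CLI (your-app is a placeholder; on older CLI versions the command may live under labs: instead of features:):
heroku features:enable preboot -a your-app    # turn preboot on for the app
heroku features:disable preboot -a your-app   # turn it back off if needed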
The only method to improve the process somewhat is what this guy suggests. This is still not a hot deploy scenario though:
http://casperfabricius.com/site/2009/09/20/manage-and-rollback-heroku-deployments-capistrano-style/
One thing I would suggest is pushing only your migrations up to Heroku first and running them before you push your codebase. This would entail committing the migrations as standalone commits and manually pushing them each time (which is not ideal). I'm very surprised there is not a better solution to this issue with all of the large apps hosted on Heroku now.
You actually will have some downtime when Heroku restarts your app. They have a new feature called Preboot that starts up new dynos before taking out the old ones: https://devcenter.heroku.com/articles/labs-preboot/
As for database migrations, that article links to this one on how to deal with that issue: http://pedro.herokuapp.com/past/2011/7/13/rails_migrations_with_no_downtime/
I first commit the migrations, run them, then push the rest of the code. Add a single file like so:
git commit -m 'added migration' -- db/migrate/2012-your-migration.rb
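Spelled out, the whole sequence looks roughly like this (the commit messages and migration filename are just examples):
git commit -m 'added migration' -- db/migrate/2012-your-migration.rb  # commit only the migration file
git push heroku master                                                # ship the migration on its own
heroku run rake db:migrate                                            # run it before the new code arrives
git commit -am 'rest of the feature'                                  # now commit the remaining changes
git push heroku master                                                # ship the code that relies on the new schema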
Heroku can't be deployed to with Capistrano; you are limited to the tooling Heroku provides.
A zero-downtime system is impossible in all cases. How would you apply a big schema change without stopping your server? If you don't stop it, some changes may be missed and your database can become inconsistent, so using a maintenance page is a normal solution.
If you want a small workaround, you can balance across two servers, one serving a read-only database during your migration. You switch to this instance during the migration, avoiding the maintenance page, and after the migration you switch back to your master.
Right now I don't see any possibility to do this without downtime. I hate it too.
This console command does it in the smallest amount of time I can think of:
git push heroku master &&
heroku maintenance:on &&
sleep 5 &&
heroku run rails db:migrate &&
sleep 3 &&
heroku ps:restart &&
heroku maintenance:off
git push heroku master to push the master branch to heroku
heroku maintenance:on to turn on maintenance mode so users don't get 500s
sleep 5 to let the dynos start up the new code (without it, the migration might fail)
heroku run rails db:migrate to do the actual migration
heroku ps:restart because, in my experience, the restart makes sure the new dynos have the latest schema
heroku maintenance:off turns off maintenance mode
You might have to add -a <app name> after each heroku command if you have multiple apps.
Joined with &&, they run in series as a single command in the terminal on Mac OS X.
Related
Very inexperienced user here...please be patient!
I inherited maintenance of Heroku app from someone no longer with the company. Having to re-deploy an app update is probably a once-a-year event, and here we are.
The instructions I have include building a standalone jar file containing my app and then deploying it to Heroku. Specifically the procedure for this is to use the Heroku CLI with the following command:
heroku deploy:jar webapp.jar -a my-app
Easy enough. Except he had his own instance of the Heroku CLI, and when I went to download my own copy, it appears that the deploy command no longer exists! Is this the case? Is this a deprecated command? Do I need to go through the process of figuring out how to set up a git repository to deploy this? (We are in fact using git to manage the source for this app, but it's behind our company firewall, so I'm not sure how practical/difficult it will be to set this up for Heroku). I just want to make sure I'm not missing something simple before investing a significant amount of time re-inventing the deployment process. Thanks.
The most popular mechanism is indeed to push the code from git to Heroku, providing the necessary files (i.e. a Procfile) to deploy the runtime.
An alternative is to create a Docker image and push it to the Heroku Registry (which in your case would require more reworking).
Refer to Deploy with Git; the firewall should not be a problem, as Heroku will not access your code directly, but you will need to perform the push yourself (git push heroku master).
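Assuming you have the Heroku CLI installed and access to the app, the git-based deploy boils down to something like this (the local path is a placeholder; my-app is the app name from your existing deploy command):
cd /path/to/local/clone       # the repo behind your firewall
heroku git:remote -a my-app   # adds a git remote named "heroku" pointing at the app
git push heroku master        # pushes the code and triggers the build on Heroku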
I have to answer my own question because I was able to find the solution.
It turns out there is a plugin available for the heroku CLI that provides the deploy command. Running heroku plugins:install java will install the plugin that provides the deploy command in the heroku CLI.
See https://devcenter.heroku.com/articles/deploying-executable-jar-files for more information.
I forked a heroku application (on the cli, using heroku fork). However, when I checked the fork application's config vars, the DATABASE_URL that it's set to is exactly the same as in the original application which I forked.
Can I push database schema changes to the new fork without affecting the original application? Or is there a need to fork the database as well?
From the rather obscure warning in the Heroku documentation, it sounds like sometimes the Heroku Postgres setup in the target app is not 100% correct after forking your app (i.e. as you observed, your DATABASE_URL is still pointing at the original app's DB, instead of at the forked app's DB).
The remedy in this case is to promote the new DB (i.e. your new HEROKU_POSTGRESQL_COLOR_URL) to be the primary DB for the forked app, using heroku pg:promote, e.g.:
heroku pg:promote HEROKU_POSTGRESQL_COLOR_URL --app theForkedApp
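If the forked app really is still sharing the original database and you want a fully independent copy, one approach is to provision a new database as a fork of the current one and then promote it; the crane plan name and the --fork flag are assumptions based on the Heroku Postgres tooling of that era:
heroku addons:add heroku-postgresql:crane --fork DATABASE_URL --app theForkedApp   # provision a new DB as a fork of the current one
heroku pg:wait --app theForkedApp                                                  # wait for the fork to finish preparing
heroku pg:promote HEROKU_POSTGRESQL_COLOR_URL --app theForkedApp                   # point DATABASE_URL at the new DB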
My workflow encompasses the following steps:
Git push (to BitBucket or GitHub depending on the project).
BitBucket/GitHub is integrated with CodeShip, tests are run.
If tests are ok, CodeShip automatically deploys to Heroku.
Everything works fine: pushing to the remote repo triggers the deployment tasks, which ends with the new version going live when everything is ok.
My question is:
Sometimes I simply do a git push heroku master, which defeats the whole purpose of this workflow.
How can I prevent it from happening? Is there a way to make Heroku only accept the deploy when the source is CodeShip?
After looking around for quite some time, I noticed that there are some ways to accomplish this, all of them related to simply not giving the developer access to the Heroku account:
If you're a single developer ("one-man / one-woman show"):
Do not add the Heroku remote to your Git repository. If it is already added, remove it (see the commands after this list). That way you're not going to push to it by mistake.
If you're managing a team:
Do not give the team a user/pass to access Heroku Toolbelt. That way, the only remote repo they will have access to should be GitHub/BitBucket/Whatever.
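For the single-developer case, checking for and removing the remote mentioned above takes two commands:
git remote -v              # check whether a "heroku" remote exists
git remote remove heroku   # drop it so an accidental git push heroku master can't happen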
You could just create another branch called dev and push your changes to that branch; when you are ready to deploy to Heroku, merge the changes into the master branch.
I just came across your issue and this is what I did as the quickest resolution.
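A rough sketch of that branch-based flow (the branch name dev is just an example):
git checkout -b dev            # do your work on a dev branch
git push origin dev            # push to BitBucket/GitHub; tests can run here too if configured
# when you are ready to release:
git checkout master
git merge dev
git push origin master         # CodeShip picks this up and deploys to Heroku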
What is the recommended way to upgrade a Heroku Postgres production database to 9.2 with minimal downtime? Is it possible to use a follower, or should we take the pgbackups/snapshots route?
Until logical followers arrive in 9.4, you'll have to dump and restore (for the reasons Craig describes). You can simplify this with pgbackups:transfer. The direct transfer is faster than dump and restore, but know that you won't have a snapshot to keep.
The script below is basically Heroku's Using PG Backups to Upgrade Heroku Postgres Databases, modified to use pgbackups:transfer. (If you have multiple instances, say a staging server, add "-a" or "--remote" to each heroku line to specify which app.)
# get the pgbackups plugin
heroku plugins:install git://github.com/heroku/heroku-pg-extras.git
# provision new db
heroku addons:add heroku-postgresql:crane --version=9.2
# wait for it to come online, make note of new color
heroku pg:wait
# prevent new data from arriving during dump
heroku ps:scale worker=0 web=0
heroku maintenance:on
# copy over the DB. could take a while.
heroku pgbackups:transfer OLDCOLOR NEWCOLOR
# promote new database as default for DATABASE_URL
heroku pg:promote NEWCOLOR
# start everything back up and test
heroku ps:scale worker=N web=N
heroku maintenance:off
heroku open
# remove old database
heroku addons:remove HEROKU_POSTGRESQL_OLDCOLOR
Note that if you compare your data size between them, the new one may be much smaller because of efficiencies in 9.2. (My 9.2 was about 70% of the 9.1.)
Heroku followers are, AFAIK, just PostgreSQL streaming replica servers. This means you can't use them across versions, you must have binary-compatible databases.
The same techniques should apply as ordinary PostgreSQL, except that you may not be able to use pg_upgrade on Heroku. This requires shell (ssh, etc) access as the postgres user on the system that hosts the database, so I doubt it's possible on Heroku unless they've provided a tool to run pg_upgrade for you. I can't find much information on this.
You will probably have to look at using Slony-I, Bucardo, or another trigger-based replication solution to do the upgrade unless you can find a way to run pg_upgrade on a Heroku database instance. The general idea is that you set up a new 9.2 instance, use Slony to clone data from the 9.1 instance into it, then once they're fully in sync you stop the 9.1 instance, remove the Slony triggers, and switch clients over to the 9.2 instance.
Search for more information on "postgresql low downtime upgrade slony" etc., and see how you go.
Trying to reset my Rails app's shared database on Heroku.
Doing the following appears to work.
heroku pg:reset SHARED_DATABASE --confirm rabid-raccoon-2000
I get: Resetting SHARED_DATABASE (DATABASE_URL)... done
And running heroku run rake db:migrate after that appears to work as well. But when I run heroku run console, or try to use the app, it does not reflect the changes (it still uses an ancient db schema, even right after I reset it).
I've tried this with both the free 5 MB db and the $15 shared db, to no avail. No idea what db it's working with.
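For what it's worth, one way to check which database the app is actually using before resetting (your-app is a placeholder; pg:info may report less detail for the old shared databases):
heroku config:get DATABASE_URL -a your-app   # the database URL the running app actually uses
heroku pg:info -a your-app                   # lists the databases attached to the app and their plans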
My database.yml is checked into version control, but I don't see how that can be a problem.
Just deleted the app and started over. Explanations are welcome.
Just a thought... I followed the directions here to set up a beta postgresql database. The plus is that it gives me direct access to the database so I can change anything needed by my tables.
I then removed the generated .sql file with git rm conf/evolutions/default/1.sql, committed, and pushed to Heroku. Happily, the app is now working!
This issue is very frustrating, especially since it mostly affects people using Heroku for the first time (w/ the shared database). It wasn't the database script since it worked just fine on the local dev database. Hope this helps you out for next time.
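For reference, the sequence described above boils down to something like this (the commit message is just an example):
git rm conf/evolutions/default/1.sql         # drop the generated evolution script
git commit -m "remove generated evolution"   # commit the removal
git push heroku master                       # redeploy without the script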