rails app deployment on heroku - database does not update - ruby

Hello, I am trying to deploy my Rails app from Cloud9 to Heroku via Git. I am able to push all of my changes to Git and then push the code to Heroku with
git push heroku master
Next, I run my migrations on Heroku:
heroku run rake db:migrate
However, when I visit the app on Heroku, my data from the database does not show up. How can I fix this?

You're using two separate databases. Migrations don't copy your data; they only reproduce the database structure, so the database on Heroku starts out empty.
Now you either re-create the data in the database on Heroku, or you connect to your local database remotely through database.yml.
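If the data is small, the simplest way to re-create it is Rails' seed file; if your local database is already PostgreSQL, heroku pg:push can copy schema and data in one step. A sketch (the local database name myapp_development is a placeholder, and pg:push requires the remote database to be empty):
$ heroku run rake db:seed    # re-runs db/seeds.rb against the Heroku database
$ heroku pg:push myapp_development DATABASE_URL --app your-app-name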

Related

Is a new database created when a Heroku application is forked?

I forked a Heroku application (on the CLI, using heroku fork). However, when I checked the forked application's config vars, its DATABASE_URL is exactly the same as in the original application I forked from.
Can I push database schema changes to the new fork without affecting the original application? Or is there a need to fork the database as well?
From the rather obscure warning in the Heroku documentation, it sounds like sometimes the Heroku Postgres setup in the target app is not 100% correct after forking your app (i.e. as you observed, your DATABASE_URL is still pointing at the original app's DB, instead of at the forked app's DB).
The remedy in this case is to promote the new DB (i.e. your new HEROKU_POSTGRESQL_COLOR_URL) to be the primary DB for the forked app, using heroku pg:promote, e.g.:
heroku pg:promote HEROKU_POSTGRESQL_COLOR_URL --app theForkedApp
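To confirm the promotion took effect, you can check the forked app's config afterwards; DATABASE_URL should now match the promoted HEROKU_POSTGRESQL_COLOR_URL:
$ heroku config:get DATABASE_URL --app theForkedApp
$ heroku pg:info --app theForkedApp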

How to south migrate a forked heroku database

Before south migrating our heroku production db, I thought of trying the migrations on a forked db, to see if they complete successfully. Is this at all possible with heroku?
The standard command for migrating is:
heroku run python manage.py migrate [app]
but manage.py would direct to our production db of course. How would I go about making it migrate the forked db?
What you need is a staging environment, which is simply another (free) heroku app, linked to your forked or duplicated database. There you can push your new code and migrate, exactly as you would with your production env.
Here's heroku's explanation on how to do that.
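A minimal version of that setup might look like the following (the app and remote names are placeholders, and the forked database is assumed to be attached as HEROKU_POSTGRESQL_COLOR_URL):
$ heroku create myapp-staging --remote staging
$ heroku config:set DATABASE_URL=$(heroku config:get HEROKU_POSTGRESQL_COLOR_URL --app myapp) --app myapp-staging
$ git push staging master
$ heroku run python manage.py migrate --app myapp-staging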

Heroku. db:push when I app contains two databases

I have a Rails 3.2 application hosted on Heroku. The application uses two databases (one for my models, the second a kind of dictionary with static data).
I need to push the second database (the dictionary) to Heroku, but when I try db:push, Heroku assumes I mean the first database (with the Rails models).
The question is: how can I specify that I want to push my local dictionary.sqlite database to dictionary.pg on Heroku?
You could use the Heroku pg:transfer plugin, which will let you set the target destination by its URL.
https://github.com/ddollar/heroku-pg-transfer
Alternatively, use the psql client locally, but restore to the Heroku pg instance.
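That alternative might look like the following, assuming you have first converted the dictionary into a local Postgres database (pg_dump cannot read SQLite directly); the database and app names here are placeholders:
$ pg_dump -Fc --no-acl --no-owner dictionary_development > dictionary.dump
$ pg_restore --verbose --no-acl --no-owner -d "$(heroku config:get HEROKU_POSTGRESQL_COLOR_URL --app app-name)" dictionary.dump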
Don't use db:push/pull; those methods are deprecated. Use pgbackups:capture/restore for things like this. It accepts the HEROKU_POSTGRESQL_COLOR name as part of the command:
$ heroku pgbackups:restore HEROKU_POSTGRESQL_COLOR 'https://example.com/data.dump' --app app-name
See Importing and Exporting Heroku Postgres Databases with PG Backups for a more detailed explanation.
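For completeness, a capture-then-restore round trip between two apps might look like this (app names are placeholders; pgbackups:url prints a signed URL for the latest backup):
$ heroku pgbackups:capture --app source-app
$ heroku pgbackups:restore HEROKU_POSTGRESQL_COLOR "$(heroku pgbackups:url --app source-app)" --app target-app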
Also, heroku-pg-transfer has been integrated into pg-extras; check that out here: https://github.com/heroku/heroku-pg-extras

Heroku: Migration issues when pulling production database to testing and running rake db:migrate

I have 3 instances of my rails app on heroku (test, stage and production). When I want to test an issue that is happening with real users' data, I would like to heroku db:pull --app production and then heroku db:push --app test. The problem is that at this point heroku rake db:migrate --app test throws an error because the columns the migration is trying to create have already been created.
My understanding is that heroku db:push pushes data into an existing database schema, rather than literally pushing the entire database (schema included). This means the schema we are pushing to may be more advanced than the schema_migrations table we are pushing, since that table will be missing records for migrations that have not run on the database we pulled from but have obviously run on the database we are pushing to.
My first question is, am I correct in my understanding of how this works? My second question is how do I fix this so that I can pull production data, stick it in testing and run migrations without receiving this error. Ideally, I would want to copy the production database and stick it in test and then migrate it fully since if I could do this I wouldn't have to worry about the existing schema on test. Is there a way to do this?
If not, is there a way to fake that migrations have already run by populating the new migrations table with records for each migration that has already run on my test database?
No, db:push pushes the local schema and data. You can push your local DB into an empty DB on Heroku; this is how I put sites live. When you run it, you see it create the schema and then push the data in.
I work like this: the test environment on Heroku runs the same code as live (i.e. a branch of master, what's live pushed to test). Pull the DB from live, fix on my local system, push to test and run migrations, then test the release against the DB on Heroku. When I'm happy, merge the test code into master, then deploy and run migrations. Rinse and repeat for future bugs. The production DB should never have a more advanced schema version than test. You can always check this by looking in the schema_migrations table; this is how Rails knows which migrations have run so far, so you can compare it against the files in db/migrate.
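To do that comparison, you can open a psql session against the Heroku database and list the applied versions (the app name is a placeholder):
$ heroku pg:psql --app test-app
select version from schema_migrations order by version;
Each version should correspond to a timestamped file in db/migrate.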

Hot deploy on Heroku with no downtime

A bad side of pushing to Heroku is that I must push the code (and the server restarts automatically) before running my db migrations.
This can obviously cause some 500 errors for users navigating the website, who hit the new code without the new tables/attributes. The solution proposed by Heroku is to use maintenance mode, but I want a way with no downside that keeps my webapp running the whole time!
Is there a way? For example with Capistrano:
I prepare the code to deploy in a new dir
I run the (backward-compatible) migrations and the old code continues to work perfectly
I switch the mongrel instances to the new dir and restart the server
...and I have no downtime!
You could setup a second Heroku app which points to the same DB as your primary production app and use the secondary app to run your DB migrations without interrupting production (assuming the migrations don't break the previous version of your app).
Let's call the Heroku apps PRODUCTION and STAGING.
Your deploy sequence would become something like:
Deploy new code to STAGING
git push staging master   # assuming a Git remote named staging
Run database migrations on STAGING (this updates the shared production DB)
heroku run -a staging-app rake db:migrate
Deploy new code to PRODUCTION
git push production master   # assuming a Git remote named production
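The one-time setup for pointing STAGING at the production database might look like this (app and remote names are placeholders):
$ heroku create myapp-staging --remote staging
$ heroku config:set DATABASE_URL=$(heroku config:get DATABASE_URL --app myapp-production) --app myapp-staging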
The staging app won't cost you anything since you won't need to exceed Heroku's free tier, and it would be pretty trivial to set up a rake deploy script to do this for you automatically.
Good luck!
If you're able to live with two versions of the same app live at the same time, Heroku now has a preboot feature.
https://devcenter.heroku.com/articles/preboot
The only method to improve the process somewhat is what this guy suggests. This is still not a hot-deploy scenario, though:
http://casperfabricius.com/site/2009/09/20/manage-and-rollback-heroku-deployments-capistrano-style/
One thing I would suggest is pushing only your migrations up to Heroku first and running them before you push your codebase. This would entail committing the migrations as standalone commits and manually pushing them each time (which is not ideal). I'm very surprised there is not a better solution to this issue with all of the large apps hosted on Heroku now.
You actually will have some downtime when Heroku restarts your app. They have a new feature called Preboot that starts up new dynos before taking out the old ones: https://devcenter.heroku.com/articles/labs-preboot/
As for database migrations, that article links to this one on how to deal with that issue: http://pedro.herokuapp.com/past/2011/7/13/rails_migrations_with_no_downtime/
I first commit the migrations, run them, then push the rest of the code. To commit just the migration file (git add it first if it's a new file):
git add db/migrate/2012-your-migration.rb
git commit -m 'added migration' -- db/migrate/2012-your-migration.rb
You can't deploy to Heroku with Capistrano; you're limited to the tooling Heroku provides.
A zero-downtime system is impossible in every case: how do you make a big schema change without stopping your server? If you don't stop it, requests can hit the database mid-change and leave it inconsistent, so using the maintenance page is the normal solution.
If you want a partial workaround, balance across two servers: one serving the database read-only during your migration. You switch to that instance while migrating, avoiding the maintenance page, and switch back to your master afterwards.
Right now I don't see any possibility to do this without downtime. I hate it too.
This chain of console commands does it in the smallest amount of time I can think of:
git push heroku master &&
heroku maintenance:on &&
sleep 5 &&
heroku run rails db:migrate &&
sleep 3 &&
heroku ps:restart &&
heroku maintenance:off
git push heroku master to push the master branch to heroku
heroku maintenance:on to put on maintenance so no 500s
sleep 5 to let the dynos start up the new code (without it, the migration might fail)
heroku run rails db:migrate to do the actual migration
heroku ps:restart because, in my experience, the restart makes sure the new dynos have the latest schema
heroku maintenance:off turns off maintenance mode
You might have to add -a <app name> behind all heroku commands if you have multiple apps.
Chained with &&, these run in series as just one command in the terminal on Mac OS X.
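If you deploy often, the same chain is easier to keep in a small script; a sketch, assuming your app is named your-app-name:
#!/bin/sh
# deploy.sh: push, migrate behind maintenance mode, then restart.
# set -e aborts on the first failure, leaving maintenance mode on so
# users never see a half-migrated app.
set -e
APP=your-app-name
git push heroku master
heroku maintenance:on -a "$APP"
sleep 5
heroku run rails db:migrate -a "$APP"
sleep 3
heroku ps:restart -a "$APP"
heroku maintenance:off -a "$APP"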
