I've created an app on Heroku with two PostgreSQL databases, and set up automatic backups with the command:
heroku addons:add pgbackups:auto-month
But the backups are not being created. Am I missing something else?
Edit:
It was suggested that I promote a default database, but this would make the backup work for only one database. How do I enable backups for both databases?
With the pgbackups auto plans, backups are taken of the database pointed at by DATABASE_URL in your config. Is either of those databases DATABASE_URL? If not, promote one of them, your primary, with heroku pg:promote HEROKU_POSTGRESQL_<color> -a <app>, and backups should be taken nightly.
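To check which database DATABASE_URL currently points at, you can inspect the app's config and attachments (the app name here is a placeholder):

heroku config:get DATABASE_URL -a <app>
heroku pg:info -a <app>

heroku pg:info lists each attachment (e.g. HEROKU_POSTGRESQL_<color>_URL) and shows which one also carries DATABASE_URL.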
In addition to these backups, physical snapshots of all databases are taken nightly; together with WAL archival, these are the best way to do disaster recovery, especially as your database grows in size. Think of pgbackups dumps/restores as a way of exporting and importing data, not as a DR tool. See https://devcenter.heroku.com/articles/heroku-postgres-data-safety-and-continuous-protection
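For the second, non-promoted database, the pgbackups tooling also lets you capture a backup of a specific attachment by name. A sketch, with the attachment color and app name as placeholders:

heroku pgbackups:capture HEROKU_POSTGRESQL_<color> -a <app>
heroku pgbackups -a <app>

The second command lists the backups taken so far, which is also a quick way to confirm whether the auto-month plan is actually producing anything.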
Related
What is a neat way to recreate Heroku dataclips on my local machine, so that I have immediate access locally to the same useful queries I use on an instance of my app on Heroku?
I'm referring to the ability to query the state of the local database one is working with during application development, i.e. testing data, if you like (though of course after I pg:pull it's simply a copy of production data for testing purposes).
I have found I have come to rely on the views the dataclips give me into production data, which gives me the confidence not to treat the raw readability of bare tables as a significant design consideration when adding to or adjusting my database schema. That means I can pursue more normalisation, which can be wonderfully freeing.
So, I just realised this morning that this could be really quite useful, so let's consider it in two steps:
A high level overview of the concepts involved.
Details of how to do it, with some examples.
So to start with, do Heroku dataclips correspond directly to (Postgres) database views?
Heroku Dataclips does nothing more than execute a given query and display/visualize the resulting data set. Additionally, dataclips are only able to query against Heroku Postgres databases. Simply put, there's no way to target a local database with the heroku dataclip tooling.
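One practical consequence: since a dataclip is just a saved SQL query, you can copy its SQL out of the dataclip editor into a local file and run it against your development database yourself. A minimal sketch, assuming a local database named myapp_development and a saved query file (both names are placeholders):

# after saving the dataclip's SQL into a file:
psql -d myapp_development -f dataclips/recent_signups.sql

That gives you the same query results locally, just without Heroku's sharing and visualization layer.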
You could potentially create a Heroku Postgres database with the express purpose of modelling the state of your local development database, and use that. For instance, every time you'd like to run a dataclip against your local instance, you'd push the data up to this purposed database and then execute the dataclip against it. It's an extra step, but if you need to use Dataclips it's likely the only reasonable way to do it for the purposes you've expressed here.
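A sketch of that workflow with the CLI, assuming a dedicated app for this purpose (all names are placeholders; pg:push requires the target database to be empty, hence the reset):

heroku addons:create heroku-postgresql:hobby-dev -a myapp-clips
heroku pg:reset DATABASE_URL -a myapp-clips --confirm myapp-clips
heroku pg:push myapp_development DATABASE_URL -a myapp-clips

After the push, any dataclip pointed at that database reflects your local data.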
I have unintentionally deleted my Postgres DB on my heroku "Hobby Dev" instance. Does heroku keep a backup that could be used to restore it?
Hobby databases do not have access to the point-in-time recovery feature that is available on the production tier of databases. If you haven't captured any logical backups with heroku pg:backups then there's no way to recover what you've deleted.
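For next time, capturing and downloading a logical backup is only a couple of commands (the app name is a placeholder):

heroku pg:backups:capture -a <app>
heroku pg:backups:download -a <app>   # writes latest.dump locally

Even on a hobby tier, keeping an occasional captured backup around is cheap insurance against exactly this kind of accident.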
I have a production and staging app in my pipeline. I would like to do one of two things.
Copy the production Postgres database, but with limited data (as the current amount requires that I pay). Really, I want to copy all of the data except for one table. Is it possible to copy it and then just delete a table?
If this is not possible, can I share the production database with the staging app, but not allow it to add or delete data unless I know it is ready?
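On the first option, a sketch of what that could look like, assuming pg:copy between the two apps and a large table named events (the app names and table name are hypothetical):

heroku pg:copy prod-app::DATABASE_URL DATABASE_URL -a staging-app --confirm staging-app
heroku pg:psql -a staging-app -c 'TRUNCATE TABLE events;'

pg:copy replaces the staging database's contents with production's, after which truncating or dropping the unwanted table on staging brings the size back down.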
Yesterday I uploaded a Rails 5 application with multiple databases to Heroku. I have a hobby-dev postgres add-on. This morning I successfully imported my 7 Postgres databases using pg_dump backups created according to the Heroku documentation.
PGPASSWORD=mypassword pg_dump -Fc --no-acl --no-owner -h localhost -U myuser mydb > mydb.dump
heroku pg:backups:restore 'https://s3.amazonaws.com/me/items/3H0q/mydb.dump' HEROKU_POSTGRESQL_COLOR_URL
Several times I have imported these databases, because after a few minutes they disappear. I ran my Heroku app and the first database was accessed successfully, but by the time I tried to access the second one I received a 'relation "xxxx" does not exist' error. When I went back to my datastore, the databases were gone. When I tried to run my app a second time, I got a 'relation "xxxx" does not exist' error on the first table that I had successfully accessed the previous time.
I'm not seeing any errors when I look at the datastore for the databases; they just disappear. I checked to see whether there is a limit to the number of databases I can have on hobby-dev but did not find one. The row count is under 10,000, though each time I import my pg_dump files I get a warning email about the number of rows.
UPDATE 2/17/2017 10:42 AM central: The only thing I have found so far are some posts stating that the Heroku filesystem is ephemeral, and does not persist between dyno restarts. If this is my problem:
How do I know when dynos restart if I don't restart it? I had not restarted my app when my databases disappeared.
How can I permanently store my databases using the Postgres add-on or do I have to store my databases elsewhere? Surely the add-on has a way to permanently store databases.
I assume you are using the Heroku PostgreSQL offering (rather than trying to run Postgres on your own dynos). If that's the case, the ephemeral nature of dyno filesystems shouldn't be your concern: the add-on's databases live on Heroku's database servers, not on your dynos' filesystems.
I recommend that you first create the seven (empty) databases and see whether they disappear. You can create a single database with
heroku addons:create heroku-postgresql:hobby-dev
After each call, run heroku pg:wait to wait until the database has been provisioned. If the databases don't disappear, try restoring your backups then.
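A sketch of that as a loop (the app name is a placeholder):

for i in 1 2 3 4 5 6 7; do
  heroku addons:create heroku-postgresql:hobby-dev -a myapp
  heroku pg:wait -a myapp
done

Each iteration adds a new HEROKU_POSTGRESQL_<color>_URL attachment to the app; heroku pg:info -a myapp should then list all seven.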
Consider a basic Rails development pipeline, going from development -> staging -> production. When going upstream it is easy to push code, then run migrations. However, after a while data will build up in the production database that I want to have in the staging database. I assume that creating a backup of the production database, then overwriting the staging database, and finally running migrations on the staging environment is the correct way to do this?
My assumption is based on the schema_migrations table, which should reflect the current schema state; the schema in the staging database might be different from production's. Thank you!
I assume that creating a backup of the production database, then overwriting the staging database, and finally running migrations on the staging environment is the correct way to do this?
This is how I would do it. The schema_migrations table will automatically be transferred to your staging environment, so when you run the migrations it will start the update at the correct migration point. At the same time, this is a good test to see that the production DB can indeed be migrated properly. I do this often in my own development cycle before I do complex, big upgrades. It provides one extra "free" migration test case with real-world data.
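As a concrete sketch of that flow (the app names are placeholders, and I'm assuming pg:backups and a Rails app on the staging side):

heroku pg:backups:capture -a prod-app
heroku pg:backups:restore $(heroku pg:backups:url -a prod-app) DATABASE_URL -a staging-app --confirm staging-app
heroku run rails db:migrate -a staging-app

The restore overwrites staging wholesale, including schema_migrations, and the final migrate step then applies only whatever migrations staging has that production doesn't yet.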