Heroku share database in pipeline with restricted permissions? - heroku

I have a production and a staging app in my pipeline. I would like to do one of two things.
1. Copy the production Postgres database, but with limited data (the current amount would require me to pay). Really, I want to copy all of the data except for one table. Is it possible to copy it and then just delete that table?
2. If this is not possible, can I share the production database with the staging app, but not allow it to add or delete data unless I know it is ready?
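For the first option, one possible approach (just a sketch; the app names my-prod-app / my-staging-app and the oversized table events are hypothetical) is to dump production while skipping that table's rows, then load the result into the staging database:

    # Sketch only: app and table names are hypothetical.
    PROD_URL=$(heroku config:get DATABASE_URL --app my-prod-app)
    STAGING_URL=$(heroku config:get DATABASE_URL --app my-staging-app)

    # Dump production, keeping the schema of "events" but none of its rows.
    pg_dump --format=custom --no-owner --no-acl \
            --exclude-table-data=events \
            "$PROD_URL" > prod_minus_events.dump

    # Restore into the staging database, replacing what is already there.
    pg_restore --clean --if-exists --no-owner --no-acl \
               --dbname "$STAGING_URL" prod_minus_events.dump

The copy-then-delete idea also works in principle (heroku pg:copy followed by a TRUNCATE via heroku pg:psql), but staging would briefly hold the full data set, which may still exceed the plan's row limit.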

Related

Production Database Creation / Migration dilemma for ASP.NET Core MVC

I have been building my ASP.NET Core MVC web application since last year, and there are currently 100 migration files in my project, as the database has evolved along with features and capabilities. My development / test database is obviously in sync with these migrations.
Now the time has come to create a Production environment, and I want an empty database with only the table schema. So I can't simply copy my test database to create the production database.
The question / dilemma I have regarding creating this Production database is as follows:
1. I can either create my Production database by running the Add-Migration command (for which I would need to delete the existing migration files from the project), or
2. Create the table schema in SQL Server Management Studio, keeping the __EFMigrationsHistory table from the test database.
With [1], I am not sure how I will be able to manage my test database with this same project, going forward.
With [2], I am not sure if there is any drawback.
So, what is the standard or best practice for Production deployment?
You can certainly use Visual Studio to create or update database tables, but this would normally just be in your development environment.
If you are worried about the number of migrations you have, you still have the option of starting over: delete them, delete your migration history (and manually delete your tables and data), and create a fresh 'Initial create' migration. If you do this, you will probably want to export any test or config data first, or make sure you have a way to recreate it.
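As a rough sketch of that 'start over' route with the EF Core CLI (assuming the dotnet-ef tools are installed and you have already exported anything you want to keep; the migration name is illustrative):

    # Sketch only: run from the project directory after backing up any needed data.
    rm -r Migrations/                        # remove the old migration files
    # ...also drop the tables and the __EFMigrationsHistory table in your dev/test DB

    dotnet ef migrations add InitialCreate   # one fresh migration for the whole schema
    dotnet ef database update                # rebuild the dev/test database from it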
For the staging and production deployments, it's best to use SQL Server Management Studio to generate a script to build your tables. The database administrator (or you, if it's just you) can create the database and run the script to generate the tables.
This is because staging and production environments tend to be more tightly controlled than development, so it's best to understand the processes that are most appropriate for that environment.
You need to check that your dev and staging/production SQL databases are set to a matching 'compatibility level', and you also need to decide whether to add any seed or configuration data on creation.
In SQL Server Management Studio, select the appropriate option to generate a script for the required tables.
https://learn.microsoft.com/en-us/sql/ssms/scripting/generate-scripts-sql-server-management-studio?view=sql-server-ver15
You can also use this process to export and import data; this is ideal for config or test data.
https://dzone.com/articles/generate-database-scripts-with-data-in-sql-server

How to add existing heroku dataclips to local postgres development database?

What is a neat way to recreate Heroku dataclips on my local machine, so that I have immediate access to the same useful queries locally that I have on an instance of my app on Heroku?
I'm referring to the ability to query the state of the local database one is working with during application development, i.e. testing data, if you like (though of course after I pg:pull it's simply a copy of production data for testing purposes).
I have found I have come to rely on the views the dataclips give me into production data, which gives me the confidence not to treat the raw readability of bare tables as a significant design consideration when adding to or adjusting my database schema. That means I can pursue more normalisation with confidence, which can be wonderfully freeing.
So, I just realised this morning that this could be really quite useful, so let's consider it in two steps:
1. A high-level overview of the concepts involved.
2. Details of how to do it, with some examples.
So to start with, do Heroku dataclips correspond directly to (Postgres) database views?
Heroku Dataclips does nothing more than execute a given query and display/visualize the resulting data set. Additionally, dataclips are only able to query against Heroku Postgres databases. Simply put, there's no way to target a local database with the heroku dataclip tooling.
You could potentially create a Heroku Postgres database with the express purpose to model the state of your local development database and use that. For instance, every time you'd like to run a dataclip against your local instance you'd push the data up to this purposed database and then execute the dataclip against that database. It's an extra step but if you need to use Dataclips it's likely the only reasonable way to do it for the purposes you've expressed here.
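That extra step might look roughly like this (a sketch; the spare app my-dataclip-mirror and the local database myapp_development are hypothetical names):

    # Sketch only: app and database names are hypothetical.
    # pg:push requires an empty target, so reset the mirror database first.
    heroku pg:reset DATABASE_URL --app my-dataclip-mirror --confirm my-dataclip-mirror

    # Push the current local development database up to the mirror app...
    heroku pg:push myapp_development DATABASE_URL --app my-dataclip-mirror

    # ...then point the dataclip at the mirror app's database and run it there.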

Nitrous.io/Heroku: limit updating/deleting records to the extent possible

Using: Rails 4, Nitrous.io, Heroku.
I would like to restrict, to the extent possible, my ability to update or destroy records. I scrape ephemeral data daily and my PostgreSQL database is the only copy in existence. I want to keep my data unaltered and safe from accidental/malicious deletion.
Heroku does not allow multiple PostgreSQL users (therefore solutions involving creating additional users with limited database permissions are not applicable).

Pulling data from a production environment into staging with ActiveRecord & Ruby

Consider a basic Rails development pipeline, going from development -> staging -> production. When going upstream it is easy to push code, then run migrations. However, after a while data will build up in the production database that I want to have in the staging database. I assume that creating a backup of the production database, then overwriting the staging database, and finally running migrations on the staging environment is the correct way to do this?
My assumption is based on the schema_migrations table, which should reflect the current schema state, and the schema in the staging database might be different from production's. Thank you!
I assume that creating a backup of the production database, then overwriting the staging database, and finally running migrations on the staging environment is the correct way to do this?
This is how I would do it. The schema_migrations table will automatically be transferred to your staging environment, and thus when you run the migrations it will start the update at the correct migration point. At the same time, this is a good test to see that the production DB can indeed be migrated properly. I do this often in my own development cycle before I do complex, big upgrades. It provides one extra "free" migration test case with real-world data.
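With the Heroku CLI, that sequence might look roughly like this (a sketch; my-prod-app and my-staging-app are hypothetical app names):

    # Sketch only: app names are hypothetical.
    heroku pg:backups:capture --app my-prod-app     # fresh backup of production, just in case

    # Overwrite staging's database with production's.
    heroku pg:copy my-prod-app::DATABASE_URL DATABASE_URL --app my-staging-app

    # Apply any migrations staging has that production does not yet have.
    heroku run rake db:migrate --app my-staging-app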

oracle database sandbox

Is it possible to create a database server "sandbox"? So there is a master server that contains real data and a sandbox server that should dispatch read requests to the master server in case the sandbox does not have cached data. In the case of a write request, it should create a local copy of the data and apply changes to that copy without any impact on the master server.
You could build such a thing.
Create a local Oracle database with a database link that points back to the master database.
Copy the DDL for every object you're interested in from the master database to your local database renaming each table (i.e. EMP becomes EMP_LOC).
Create a view in the local database for each table that does a UNION ALL between the remote and local copies of the table.
Create an INSTEAD OF trigger on the local view that writes any changes only to the local table.
You could do such a thing, but it's not obvious why you'd want to. It would be a fair amount of work to set up and maintain, and performance could get dodgy rather easily. It's also not obvious what problem this approach solves: it wouldn't replace the need to have isolated development, test, and staging environments, and I'm hard-pressed to come up with many use cases where this sort of "sandbox" would be preferable to one of those environments.
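For concreteness, a minimal sketch of the approach described above, run in the local database and using the classic EMP table with hypothetical connection details (UPDATE and DELETE would need further triggers and de-duplication logic, which is part of why this gets unwieldy):

    -- Sketch only: the link name, credentials and table are hypothetical.
    -- Database link pointing back at the master database
    CREATE DATABASE LINK master_link
      CONNECT TO scott IDENTIFIED BY tiger USING 'MASTERDB';

    -- Local, empty copy of the table's structure, renamed
    CREATE TABLE emp_loc AS SELECT * FROM emp@master_link WHERE 1 = 0;

    -- View that unions the remote rows with any locally written rows
    CREATE OR REPLACE VIEW emp AS
      SELECT * FROM emp@master_link
      UNION ALL
      SELECT * FROM emp_loc;

    -- INSTEAD OF trigger so inserts land only in the local table
    CREATE OR REPLACE TRIGGER emp_ins
      INSTEAD OF INSERT ON emp
      FOR EACH ROW
    BEGIN
      INSERT INTO emp_loc VALUES
        (:NEW.empno, :NEW.ename, :NEW.job, :NEW.mgr,
         :NEW.hiredate, :NEW.sal, :NEW.comm, :NEW.deptno);
    END;
    /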
@Justin Cave gives a good approach. However, maybe you should consider creating a virtual machine and taking a snapshot of your PROD instance whenever you want to work on something new with the latest data.
