Using DropCreateDatabaseIfModelChanges in a production environment - asp.net-mvc-3

I've just started learning .NET MVC so this may be a silly question, but I've yet to find a good answer.
I'm following the Code First approach using the Entity Framework to build my database for me. I've included the following in my Application_Start() method in order to allow me to edit my database by making changes to my Model objects.
Database.SetInitializer<ContactManagerDB>(new DropCreateDatabaseIfModelChanges<ContactManagerDB>());
I was just wondering: what would happen if I pushed this application to a production environment, then made a few changes to my models and updated the application? Would this really drop and recreate the database in the production environment?
What's the best practice for pushing changes to a production environment using the Code First approach?

DropCreateDatabaseIfModelChanges should only be used early on in development, never on a production machine. If you pushed to a production machine and made schema changes, you'd lose all your data.

You could delete the EdmMetadata table in your production environment. In that case, EF would have no record of the current schema to compare the new model against, so it would assume you know what you are doing and would not touch the database schema.
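Concretely, that is a one-line script against the production database (EdmMetadata is the default name EF generates):
-- EdmMetadata holds the model hash EF compares on startup;
-- dropping it makes EF skip the compatibility check entirely.
DROP TABLE EdmMetadata;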

Code first does not have the ability to upgrade your database while keeping your data intact.
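If you do push a Code First app to production, a common safeguard is to switch the initializer off there so EF never touches the schema. A minimal sketch, assuming the same ContactManagerDB context:
// In Application_Start(): passing null disables database initialization,
// so EF will never drop, create, or alter the production schema.
Database.SetInitializer<ContactManagerDB>(null);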

Related

A couple of heroku postgres questions (just started, am lost)

I have provisioned postgres on my heroku app and also installed postgres locally to maintain parity with the online database (as the documentation recommends), but I'm not understanding how this will work. Am I supposed to access a local copy of the database when running on my own computer (while building, before deploying), and then use heroku's separate postgres database once the app is deployed? If it's parity, shouldn't they both be using the heroku postgres database?
In other words, will my local app (during development) and my heroku app (deployed and live) be using the same online postgres database?
Thanks.
Am I supposed to access a local copy of the database when running on my own computer (while building, before deploying), and then use heroku's separate postgres database once the app is deployed?
Yes, that's exactly it. Without seeing what bit of documentation you're referencing, it's hard to say what they mean, but perhaps there's another way to explain it.
In your local development environment, you may find that you need to test database schema changes (this is just one example; there are many). If you only had the one heroku postgres database, you'd be forced to test these changes in production, which might result in poor usability for your users, and that doesn't even account for the possibility of making a mistake and accidentally destroying your production data. There are a number of other shortcomings and challenges with this single-database configuration.
For these reasons and more, it's best to keep your production data completely separated from your development/staging/test environment by creating a local/staging database. You might reasonably ask, "What about the data? I need data to test!". There are many ways to put together your test database and which you choose will likely depend on your needs. A shortlist of possibilities:
Use a seed file to generate mock data in your db
Use a model factory (usually runs in conjunction with your testing framework)
Take a dump of your production database, anonymize and redact sensitive information and use that for local testing.
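As an illustration of the first option, a seed file can be as simple as a hand-written SQL script (table and column names here are hypothetical):
-- seed.sql: mock data for local development only; never run against production
INSERT INTO users (email, name) VALUES ('dev@example.com', 'Dev User');
INSERT INTO contacts (user_id, phone) VALUES (1, '555-0100');
Locally you would load it with something like psql -d myapp_dev -f seed.sql.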

Entity Framework implement Code first Migrations and prevent data loss

I would like to know the best guide to follow and the things to consider if I want to add Migrations to a project, given that the project:
is live (dev/staging/production environments)
has a changed model in the live version, with some fields/tables removed/added
is hosted with Azure App Service (publish settings)
is an MVC project with Entity Framework 6 using code first
I know the basics of adding/using migrations, but that's it.
I would like to know how I can add migrations to my solution and publish the new project (changed model) without losing any data.
Is this possible, and can anyone suggest anything well explained to look at for this kind of setup?
EDIT
I am testing this on development but I can't make it work without having my database recreated, hence losing existing data...
My configuration file:
internal sealed class Configuration : DbMigrationsConfiguration<ContractCare.Models.ApplicationDbContext>
{
    public Configuration()
    {
        AutomaticMigrationsEnabled = true; // tried false as well
        ContextKey = "ContractCare.Models.ApplicationDbContext";
        AutomaticMigrationDataLossAllowed = false; // EF throws instead of dropping data
    }
}
Kind regards
Add an empty snapshot migration to your DEV environment. This will capture the current state of the model:
enable-migrations
Add-Migration InitialBaseline -IgnoreChanges  # -IgnoreChanges tells EF not to generate Up() code for objects that already exist
update-database
Now all subsequent changes in DEV can be deployed to the other environments, either by changing the connection string and re-running update-database, or by generating a script that can be run on those servers with update-database -Script.
Before that, you have to "catch up" the other environments to the state of DEV using the processes you already have in place. Then you apply the InitialBaseline migration to those environments.
Moving forward you can apply the DEV migrations to UAT, STG and eventually PROD. Since a lot of migrations tend to accumulate in DEV, you can roll those up into a single migration, as Chris explains here.
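One more thing worth checking, since the database keeps getting recreated in your test: make sure no destructive initializer is still registered at startup. A minimal sketch (assuming the ApplicationDbContext and Configuration types from the question) that applies pending migrations instead of rebuilding the database:
// Run pending code-first migrations at startup instead of dropping
// and recreating the database whenever the model changes.
Database.SetInitializer(new MigrateDatabaseToLatestVersion<ApplicationDbContext, Configuration>());
With AutomaticMigrationDataLossAllowed = false, EF will throw rather than silently drop columns, which is what you want outside DEV.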

Liquibase Rollback and Update errors

I have been assigned to a team currently using liquibase to version their database, and being new to the environment, a problem came up from the developers and I want to find out what the best practice would be. I have never worked with liquibase before this, so bear with what I am trying to ask. The issue is that developers are adding unintended changes (spaces or new lines) to a liquibase script inside the repo without noticing, then pushing their changes, and liquibase sees it as a change. This was an example one of the developers gave me. I know this can seem careless, especially since the developers should be paying attention and git has ways of preventing this, but I was given the task of creating some way to roll back if this issue were to arise. I was wondering if creating a rollback procedure is the best way, or whether there is some way to implement a block so things like this cannot happen, and if so, what? Would it be a block on the db side, or is there something in liquibase that can prevent this? Also, since I think a rollback procedure is important for those one-off accidents, what's the best way to create one for liquibase? As in best practice for a production-level environment.
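For reference, liquibase supports declaring a rollback inline with each changeset, which covers those one-off accidents; in a SQL-formatted changelog it looks like this (author, id and table names are hypothetical):
--liquibase formatted sql

--changeset alice:42
ALTER TABLE contacts ADD phone VARCHAR(20);
--rollback ALTER TABLE contacts DROP COLUMN phone;
With that in place, liquibase rollbackCount 1 undoes the most recent changeset. As for the whitespace problem: liquibase detects edits via per-changeset checksums, so the usual practice is to treat already-run changesets as immutable and catch accidental edits in code review.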

Why use Oracle version control?

At work we use Oracle (12c client) to store most of our data and I use SQL Developer to connect to the database environments.
Issue:
We have issues where tables are being modified for one reason or another (someone too lazy to create a new table adds new columns or changes data types or lengths). This in turn breaks the table for others who actually use it for its real purpose.
Update:
We have DEV, TST, UAT, and PRD environments. We test and have scripts approved before we promote to PRD. The problem resides in DEV, when we want to go back to an existing table to make a change but that table has already been modified for different reasons.
Question 1:
Is the versioning just for stored procedures or is it possible to track changes to table structures, functions, triggers, sequences, synonyms, etc.?
As Bob Jarvis indicates, you need far more than an answer to this one question: you need policies and practices enforced for all developers. Some ideas from places I have worked:
every developer has a VM with a copy of the database installed. They can do whatever they like on it, but must supply scripts to move their changes to production. These scripts are applied on a test instance, and again on a QA instance, before going to production.
Subversion works on every OS, and TortoiseSVN works well on Windows. Committing scripts to a repository works well; this is integrated with SQL Developer and can also be done with Toad.
you have a permissions issue. Too many people have the privilege to alter tables. Remove these permissions and centralize them with one or two people. Changes are funnelled through them as scripts, so oversight can be applied there. Developers can have their own schema to test in, or a VM with a copy of the database for development.
run this script to see who can alter tables:
SELECT *
FROM dba_tab_privs
WHERE privilege = 'ALTER';
The key is a separation of concerns. Developers should have access to a schema where they can do what they need. The company needs to know who did what, when and where.
If you have more than one developer working on multiple changes to a dev environment then you need coordination and communication as well as source control. A weekly meeting to discuss overlap areas or a heads up chat message are just some ways to work together.
The approach I think works best is to have a DEV database where all the developers manage their own set of schemas.
Scripted builds are provided, along with test data loads, to allow any developer to create his own working schema. He then works there, tests his changes, and commits his changes via scripts to source control. DEV databases do not need to be large; they just need enough test cases to allow for unit tests.
Script all the changes so that they can be checked into a version control system and merged with other changes. The goal is to have a system where devA checks in changeA, and then, when it is merged with the main trunk, devB gets changeA as he builds his schemaA.
This approach requires care if the main project schema employs PUBLIC synonyms. You will need to consider this as you go forward.
I would also advise that, with each change checked in, an accompanying back-out script be checked in as well, as sketched below.
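For instance, a change script and its back-out script might be checked in as a pair (file names and table are hypothetical):
-- 0042_add_phone_to_contacts.sql
ALTER TABLE contacts ADD (phone VARCHAR2(20));

-- 0042_add_phone_to_contacts_backout.sql
ALTER TABLE contacts DROP COLUMN phone;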
The advantage of this approach is that devs can manage their own schemas. With a scripted approach they don't all need DBA knowledge, and they don't need to manage the database either. Having all these schemas on one database makes it easier to manage and control resources.
I've used this approach in teams with 50+ developers and it has worked very well.
This approach also paves the way for devs checking scripts in and having a deployment package created automatically.
There is so much that can be done to make the development-test-deploy-backout cycle easier to manage.

Flyway migration in Development and Production

I'm searching for a way to do different migrations in production and development.
I want to create a Spring web application with Maven.
In development I want to update the database schema AND load test data.
In production, when a new version of the application is deployed, I want only to change the schema and not load test data.
My first idea was to save the schema updates and the insert statements in different folders.
I think everybody has solved this problem and can help me, thank you very much.
Basically, you have two options:
You could use different locations for your migrations via the flyway.locations property, e.g.:
for Test
flyway.locations=sql/structure,sql/test
for Production
flyway.locations=sql/structure
That way, you include your test data in the sql/test folder. You would have to take care with numbering, of course.
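For example, the two locations might contain (file names hypothetical; note the test migration is versioned after the structure it depends on):
sql/structure/V1__create_tables.sql
sql/structure/V2__add_customer_index.sql
sql/test/V3__insert_test_data.sql
where the test-only migration is plain SQL such as:
-- V3__insert_test_data.sql: applied only where flyway.locations includes sql/test
INSERT INTO customers (id, name) VALUES (1, 'Test Customer');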
The second option (the one I prefer) is to not include test data in your migrations at all.
Rather, create your test data any way you want and make an SQL dump of it, which you keep separate from your migrations.
This works best if you have a separate database (instance, schema, whatever) containing your pristine test data, where you apply each migration as part of your build process. This build job can then create a dump that always matches the current migration.
When preparing your test machine, you first apply your migrations, then you load the contents of the matching dump.
I think this is a lot cleaner than the first version, especially because your test data can be prepared using other tools (even your application itself) and does not have to be hand-coded.
