Currently I'm using Visual Studio 2012 RC and SQL Server 2012 RTM.
I'd like to know how to re-deploy/re-create a test database for each test run.
Keep in mind I have a SQL Server database project for the database, created with Visual Studio 2012's template.
I'm not entirely sure about one idea I have: the .testsettings file has setup and cleanup scripts. Is this the way to go? For example, a PowerShell script that reads the script generated by the database project and executes it against the database?
I suspect there are better ways of doing this, and that there should be an out-of-the-box solution, but I don't know of one and Google isn't helping me find the right approach.
As mentioned, you'll probably want to use the VS 2012 .Local.testsettings > Setup and Cleanup scripts to create and tear down your SQL Server database.
For the script you may want to use PowerShell with a .dacpac (rather than just a T-SQL script), since you are using an SSDT project. Here's a link to some example code; in particular you may want to take a look at the 'Deploy-Dac' command.
If you are unfamiliar with .dacpacs as the (build) output of SSDT-created database projects, take a look at this reference link.
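If you'd rather drive the .dacpac deployment from .NET code instead of the linked PowerShell sample, here's a minimal C# sketch using the DacFx API (the Microsoft.SqlServer.Dac assembly that ships with SSDT); the connection string, .dacpac path, and database name are placeholders for your own values:

```csharp
using System;
using Microsoft.SqlServer.Dac; // DacFx, installed with SSDT

class TestDatabaseDeployer
{
    static void Main()
    {
        // Placeholder values; adjust to your environment.
        const string connectionString = @"Data Source=(localdb)\v11.0;Integrated Security=True";
        const string dacpacPath = @"..\MyDatabase\bin\Debug\MyDatabase.dacpac";
        const string databaseName = "MyDatabase_Test";

        var services = new DacServices(connectionString);

        // Load the .dacpac that the SSDT project build produced...
        using (var package = DacPackage.Load(dacpacPath))
        {
            // ...and deploy it, creating the database or upgrading it in place.
            services.Deploy(package, databaseName, upgradeExisting: true);
        }

        Console.WriteLine("Deployed {0} from {1}", databaseName, dacpacPath);
    }
}
```

Pointing the setup script at a small console app like this (or the equivalent PowerShell) gives you a fresh deployment before each test run.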
Edit: although this doesn't answer the question in plain SQL Server terms, an easy Entity Framework approach is the following: I found I could reliably create and destroy my database on every run by calling the DbContext.Database.CreateIfNotExists() and DbContext.Database.Delete() methods in the setup and cleanup phases of my tests.
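As a sketch of how that looks in a test class (MSTest attributes; MyDbContext is a stand-in for your own EF context):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DatabaseTests
{
    private MyDbContext _context; // your own DbContext subclass

    [TestInitialize]
    public void Setup()
    {
        _context = new MyDbContext();
        // Create the database from the EF model if it doesn't exist yet.
        _context.Database.CreateIfNotExists();
    }

    [TestCleanup]
    public void Cleanup()
    {
        // Drop the database so the next test starts from a clean slate.
        _context.Database.Delete();
        _context.Dispose();
    }

    [TestMethod]
    public void SomeTest()
    {
        // ... exercise _context against the freshly created database ...
    }
}
```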
The fastest solution, while a bit of a hack, is really straightforward. You can set the DB project's properties, under the Debug tab, to "Always re-create database". Then testing takes two clicks: do a build, then run all tests, and you get a freshly built DB on LocalDB for your tests to run against. You can also change the target debugging DB (again in the DB project's properties) to whatever you want, so you can deploy the .dacpac to an existing SQL DB or wherever. It does mean testing in two steps, and if your build is long it may be annoying, but it works. Otherwise, I believe scripting is your only option.
Related
I've always been intrigued by Visual Studio Database Projects, and while they seem to be quite capable, I've never used them to any great degree outside of simplistic proof-of-concept work. I want to try this for a new project, and I'm also interested in using an EF layer on top of it, but in past test projects this has involved some decent effort.
I'm curious: has Visual Studio matured its product integration to support a single workflow that builds the database project, builds the EF layer on top of it, and finally builds the code, without intermediate steps involved?
We are a small team without dedicated SQL developers. Our primary goals are to bring the database into Visual Studio, get it nicely under source control (TFS), and achieve strong integration from end to end. We're interested in growing into EF, and will probably start by treating it as a simple ORM tool if possible.
Has anyone actually done this that can provide insight into the process?
We have used VS 2014; the tooling seems much the same as in earlier versions, and I don't think there have been many changes over the years.
We have an EDMX model and a DB project in the solution. That does mean you need to keep the DB project up to date, but this is easy to do: you just publish your EDMX to a local box/target, then import the changes into the project with a schema compare of the local database against the project.
So you can still have model-driven DB design, use the DB project to deploy changes to the Dev/Stage/Live boxes, and publish with automated deployments as well.
The DB project has a post-deployment script option that you can use to seed data, and a pre-deployment script where you can manipulate the DB if you need to change the structure or field types while the data is on a live DB.
The schema compare tools in Visual Studio are rather good: you can compare DB to DB, DB to project, or a schema file to either.
I have a SQL Server 2008 database project in Visual Studio 2010 that is synced on a regular basis via schema comparison during the development phase. This same project is also under TFS source control. I have two environments, Debug and Production; each is a single machine running both IIS and SQL Server. The production environment, however, uses different data and log paths for the database (D:\Data\ and E:\Logs\) versus my development server's standard c:\program files\sql....\data.
What I'm trying to do is set up how I move deployments from the debug to the production environment. I've got WebDeploy 2.1 set up, and I build my deployment packages in Visual Studio via the right-click context menu on the website project. I want to copy deployment packages to the production server manually via RDP, so there are no over-the-wire concerns here. The deployment package settings are set to include all databases configured in the Package/Publish SQL tab. In that tab I don't pull data/schema from an existing database, because I want to deploy from the SQL database project instead; I just point to the pre-generated .sql script file located in my database project's /sql/release folder. To top it off, I generate the .sql script in the SQL project's post-build events via VSDBCMD.exe /dd:- /a:Deploy /manifest:... so that a simple solution rebuild-all followed by a website project deploy ensures I always have the latest .sql script in the deployment package.
This is great and all, but I have a major problem I can't seem to overcome, and it has to do with the database data and log file paths differing between the debug and production environments. During the WebDeploy, IIS on the production server throws an exception saying it can't find the c:\programs files...\MyDatabase.mdf file. What's scary is that after this exception, the entire database is deleted, including the empty databases I create right before deploying. This happened both times I tried messing around with it. I'm not sure how I feel about that, but I'm hoping I can find a reliable solution.
I have been feverishly looking for a way to change the paths during a deployment. Many places mention changing the paths in the *.sqlfiles.sql files under Schema Objects\Database level objects\Storage\Files, because the path the deployment targets is whatever is specified there, which in turn comes from schema comparisons and writes against the Debug SQL Server database. Changing the paths there works temporarily, until my next schema comparison and write overwrites the .sqlfiles.sql files with the Debug database's info again. And I don't want to have to remember to never update these files during a schema comparison, because any mistake has the potential to delete the production database.
I think my salvation lies in my Release.sqlcmdvars file. It's a tease, actually: I can see a place where I "could" type the default database path, but it appears to be a read-only field, as it mentions "Location where database files are created by default (set when you deploy)." It would be grand if I could specify the paths here. Is there any way at all to specify a path in a variable here that would override the paths from the *.sqlfiles.sql files?
In the solution where I work, there are two custom variables in the sqlcmdvars called Path1 and Path2 that I thought were reserved names that do exactly that. However, this doesn't work in my solution; the difference between the two solutions is that the other one gets deployed via a TFS build controller. Going the TFS build controller route isn't really an option, because I opted out of it to save money while using a third-party source control service.
Any help with this would be great. I have even gone so far as to create separate *.sqlfiles.sql files for debug and release and configured the dbproj file to use one or the other depending on the configuration, but this doesn't seem to work either. Using the custom PATH1 variable in the sqlfiles.sql file, like FILENAME = '$(PATH1)\Cameleon_log.ldf', doesn't work either. I seriously think it shouldn't be this difficult. Am I missing something simple here?
Thanks!
Okay, this was an exercise in futility. Apparently, without syncing to the target database during script generation, the script is exactly what's needed to build the database from scratch; so even if I could override the file paths, the deployment would complain about database objects already existing. I needed to specify the connection string of the target database in the deploy settings, so that a comparison is done during script generation and only the relevant differences are added to the script. I really wanted to avoid exposing my production SQL Server to the outside world, but it is what it is. No need to override the paths anymore, because it looks like the database file paths are conveniently ignored during this comparison!
Why does Visual Studio (optionally) ask to add a database .mdf file to the project output folder? The .mdf file still has to be part of a running SQL Server instance for the application to work with the database.
For instance, if I stop the SQL Server instance and run the application, it throws exceptions and so on. So why does the file stay in the VS solution folders? Is there any advantage to this?
I think this is generally to allow for the "User Instance" feature, which lets you make a copy of the MDF file for local debugging purposes (without impacting the database that's running within SQL Server).
You can see this URL for more information on how this feature works, but I would just ignore it: it is deprecated, and in SQL Server 2012 it is replaced by LocalDB, a fundamentally better and different way of dealing with isolation and avoiding instance maintenance (no more AttachDbFileName nonsense).
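To make the difference concrete, here's roughly what the two styles of connection string look like (server, file, and database names here are examples only):

```csharp
static class ConnectionStrings
{
    // Deprecated "User Instance" style: SQL Server Express spins up a
    // private per-user instance and attaches the .mdf from the project folder.
    public const string UserInstance =
        @"Data Source=.\SQLEXPRESS;" +
        @"AttachDbFilename=|DataDirectory|\MyDatabase.mdf;" +
        @"Integrated Security=True;User Instance=True";

    // The SQL Server 2012 replacement: LocalDB, with no user-instance machinery.
    public const string LocalDb =
        @"Data Source=(LocalDB)\v11.0;" +
        @"AttachDbFilename=|DataDirectory|\MyDatabase.mdf;" +
        @"Integrated Security=True";
}
```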
Personally, I think it's much better to work with a single copy of the database, attached to a proper instance of SQL Server, because these other methods just seem far too convoluted and confusing for very little gain. But maybe that's just me.
I usually create a solution folder in Visual Studio and put my DB scripts in them. I always use at least this set of scripts:
Drop model
Create model script
User functions
Stored procedures
Static data (lookup tables)
Test data (not deployed)
Then I simply combine them into a single script and execute it against a SQL Server, so I'm able to recreate the whole DB in a single step.
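For what it's worth, that combine-and-run step is easy to automate. Here's a rough C# sketch; the file names and connection string are mine, and it assumes batches within the scripts are separated by GO on its own line:

```csharp
using System;
using System.Data.SqlClient;
using System.IO;
using System.Text.RegularExpressions;

class RecreateDatabase
{
    static void Main()
    {
        // Script order matters: drop first, then create, then the rest.
        string[] scripts =
        {
            "01_drop_model.sql",
            "02_create_model.sql",
            "03_user_functions.sql",
            "04_stored_procedures.sql",
            "05_static_data.sql"
            // test data deliberately left out of deployments
        };

        using (var connection = new SqlConnection(
            @"Data Source=.\SQLEXPRESS;Initial Catalog=MyDb;Integrated Security=True"))
        {
            connection.Open();
            foreach (var script in scripts)
            {
                var sql = File.ReadAllText(script);
                // SqlCommand doesn't understand GO, so split into batches first.
                var batches = Regex.Split(sql, @"^\s*GO\s*$",
                    RegexOptions.Multiline | RegexOptions.IgnoreCase);
                foreach (var batch in batches)
                {
                    if (string.IsNullOrWhiteSpace(batch)) continue;
                    using (var command = new SqlCommand(batch, connection))
                        command.ExecuteNonQuery();
                }
            }
        }
        Console.WriteLine("Database recreated.");
    }
}
```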
Anyway. I've never used projects in either:
Visual Studio or
SQL Management Studio
I've tried creating a SQL Server 2008 Database Project in Visual Studio 2010, but I'm somewhat overwhelmed by all the possible server settings (which I'd prefer to leave at the server defaults anyway). So I'm a bit confused: should I use this project template, or should I just keep doing what I've always done?
What do you use and why? What are advantages I may benefit from by using either?
If I were you I would continue to do it the way you are doing it. In fact, I do! In my opinion, the advantage of having the actual .sql files right there in a folder for you to use, edit, and look at far outweighs the advantages you get from a DB project. A DB project would be useful for something like storage reports, where you have to communicate with, say, 8 databases, compare them to 8 other databases, save result sets, etc. Don't get me wrong: there are advantages to database projects; I just don't think they help much when you have such a simple setup that already works.
Advantages of the SQL Server 2008 Database Project in VS10:
Not having to switch back and forth from the client you currently use to communicate with your SQL Server.
Decent data and schema compare tools.
A one-click way to reverse engineer a database into source control and keep it up to date.
You can compare projects to physical databases and vice versa. (This makes it pretty easy to keep your database up to date, no matter where you make changes: in the file-system database project or in the physical database itself.)
If the current tool you're using is not specifically tailored to SQL Server, this one is.
Extremely helpful if you need to do unit tests directly on the database without using abstractions.
If you're looking for something a little less complicated, you might want to try SQL Source Control. It won't even require you to maintain scripts, as it does this for you behind the scenes. It will, however, only work for you if you use either TFS or SVN. And it costs $295...
It has a 28-day trial period, so if you're happy to try it out, I'd be interested in your feedback.
I'm working in a multi-developer environment in Oracle with a large package. We have a DEV => TST => PRD promotion pattern. Currently, all package edits are made directly in TOAD and then compiled into the DEV package.
We run into two problems:
Concurrent changes need to be promoted on different schedules. For instance, developer A makes a change that needs to be promoted tomorrow while developer B is concurrently working on a change that won't be promoted for another two weeks. When it comes promotion time, we find ourselves manually commenting out stuff that isn't being promoted yet and then uncommenting it afterwards...yuck!!!
If two developers are making changes at the same exact time and one of them compiles, it wipes out the other developer's changes. There isn't a nice merge; instead the latest compile wins.
What strategies would you recommend to get around this? We are using TFS for our source-control but haven't yet utilized this with our Oracle packages.
P.S. I've seen this posting, but it doesn't fully answer my question.
The key is to adopt a practice of only deploying code from the source control system. I'm not familiar with TFS, but it must implement the concepts of branches, tags, etc. The question of what to deploy then falls out of the build and release tagging in the source control system.
Additional tips (for Oracle):
It works best if you split the package spec and body into different files with a consistent file pattern for each (e.g. ".pks" for package specs and ".pkb" for package bodies). If you use an automated build process that can handle file patterns, you can then build all of the specs and then the bodies; see the sketch after these tips. This also minimizes object invalidations if you are only deploying a package body.
Put the time in to configure an automated build process driven from a release or build state of your source control system. If you have even a moderate number of DB code objects, it will pay to be able to build the code into a reference system and compare it to your QA or production system.
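As a very rough illustration of that two-pass build in C# (it shells out to SQL*Plus, assumes sqlplus is on the PATH, uses placeholder credentials, and assumes each script ends with a / to compile followed by EXIT):

```csharp
using System;
using System.Diagnostics;
using System.IO;

class BuildPackages
{
    static void Main()
    {
        const string scriptDir = @"C:\src\db\packages"; // your source control checkout
        const string connect = "scott/tiger@DEV";       // placeholder credentials

        // Two passes: all specs first, then all bodies, to minimize invalidations.
        foreach (var pattern in new[] { "*.pks", "*.pkb" })
        {
            foreach (var file in Directory.GetFiles(scriptDir, pattern))
            {
                // Each script is assumed to end with "/" (compile) and "EXIT".
                var info = new ProcessStartInfo("sqlplus",
                    string.Format("-S {0} @\"{1}\"", connect, file))
                {
                    UseShellExecute = false
                };
                using (var process = Process.Start(info))
                {
                    process.WaitForExit();
                    if (process.ExitCode != 0)
                        Console.WriteLine("Failed to run: " + file);
                }
            }
        }
    }
}
```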
See my answer about Tools to work with stored procedures in Oracle, in a team (which I have just retagged).
Bottom line: don't modify procedures directly in TOAD. Store the source as files that you keep in source control, then modify those and execute them against the database.
Plus, I would highly recommend that each developer work on their own copy of the database (use Oracle Express, which is free). You can do that if you keep all the scripts needed to create the database in source control. More insight can be found here.
To avoid 2 developers working on the same package at the same time:
1) Use your version control system as the source of the package code. To work on a package, the developer must first check out the package from version control; nobody else can check the package out until this developer checks it back in.
2) Don't work directly on the package code in Toad or any other IDE. You have no clue whether the code you are working on there is correct or has been modified by one or more other developers. Work on the code in the script you have checked out from version control, and run that into the database to compile the package. My preference is to use a nice text editor (TextPad) and SQL Plus, but you can do this in Toad too.
3) When you have finished, check the script back into version control. Do not copy and paste code out of the database into your script (see point 2 again).
The downside (if it is one) of this controlled approach is that only one developer at a time can work on a package. This shouldn't be a major problem as long as:
You keep packages down to a reasonable size (in terms of WHAT they do, not how many lines of code or number of procedures in them). Don't have one big package that holds all the code.
Developers are encouraged to check out code only when ready to work on it, and to check it back in as soon as they have finished making and testing their changes.
We use Oracle Developer Tools for Visual Studio .NET; it plugs right into TFS.
We do it with a dev database for every stream, and labels for the different streams.
Our Oracle licensing gives us unlimited dev/test instances, but we are an ISV; you may have a different licensing arrangement.
You can use the Oracle Developer Tools for VS, or you can use SQL Developer. SQL Developer integrates with Subversion and CVS, and you can download it for free. See here: http://www.oracle.com/technology/products/database/sql_developer/files/what_is_sqldev.html
We use Toad for Oracle with the TFS MSSCCI provider against TFS 2008, along with a custom tool that pulls database check-ins from source control and packages them for release.
To my knowledge, Oracle Developer Tools for Visual Studio .NET doesn't have any real source control integration, with TFS or otherwise.
You might consider Toad Extensions for Visual Studio, though it's not cheap; maybe $4k, I think.
Another option is the Oracle Change Management Pack, but I believe it requires the Enterprise Edition of Oracle, which is much pricier.
You may be interested in Gitora (www.gitora.com). It helps manage Oracle database objects with Git.
This article about collaborative development with the Oracle database can also be helpful: http://blog.gitora.com/plsql-how-to-develop-two-features-simultaneously-but-deploy-only-one/
Full disclosure: I am the developer and author of the article.