Is there any way to set SQLCMD variables in Visual Studio Schema Compare - visual-studio-2013

I'm using VS2013 data tools and trying to compare my database project against databases in different environments, but my T-SQL code uses synonyms to access other databases.
I can handle this with publish profiles, since I can set each SQLCMD variable to the correct environmental setting; when the script is generated, the correct database/server is inserted.
For example:
DEV $(Contoso) = "Contoso_dev"
TEST $(Contoso) = "Contoso_Test"
PROD $(Contoso) = "Contoso_Prod"
However, when I'm doing a database comparison (using a .scmp file), I have no such option to set SQLCMD variables, so I can't successfully compare against the TEST environment: the synonyms are resolved from the project properties, which point at the dev environment.
Is there any way to set SQLCMD variables in a .scmp file?
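For context, the synonyms involved are defined against the SQLCMD variable, roughly like this (the object names are illustrative, not from the original project):

```sql
-- Hypothetical synonym from the database project: $(Contoso) is the
-- SQLCMD variable that resolves to Contoso_Dev / Contoso_Test /
-- Contoso_Prod at publish time.
CREATE SYNONYM [dbo].[Customers]
    FOR [$(Contoso)].[dbo].[Customers];
```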

Revisited this issue with VS2015 and SSDT 14.0.51215.0 (Dec 2015); I'm not sure whether this works with the original configuration listed above.
Setting the Local value of the SQLCMD variable makes the comparison substitute the variable correctly, regardless of which DB server you're looking at.
(Having only the Default value set did not work.)

In the current SSDT version there is no way to use SQLCMD variables in a schema comparison.
There is a workaround, though: debug your database project (F5) using the right SQLCMD variable values and compare the resulting staging database with the target one.
Hope this helps.

Related

Addressing ORACLE_HOME values diversity

I have checked my ORACLE_HOME in three different ways on my 32-bit Windows PC, in this order:
1. In cmd, if I type echo %ORACLE_HOME% the result is just %ORACLE_HOME%, so no current path. Why?
2. In regedit, under Computer\HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\KEY_OraClient11g_home1, the stored value is C:\oracle\product\11.2.0\client_1
3. Finally, these are the Oracle-related entries in the Path environment variable (I've masked the others with asterisks), in the order they appear:
C:\ProgramData\Oracle\Java\javapath;C:\oracle_python\instantclient_11_2;C:\oracle\32bit\product\11.2.0\client_1\bin;*;*;C:\oracle\product\11.2.0\client_1\bin;%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;%SYSTEMROOT%\System32\WindowsPowerShell\v1.0\;*;*;*;*;*;*;*;*;C:\Program Files\Java\jdk1.7.0_60\bin
In addition, I basically work with Oracle in only two ways: I use SQL Developer daily, and I also do some scripting in Python using the cx_Oracle library.
My question is whether I have properly set up the ORACLE_HOME variable; that is, would an Oracle expert agree with this current scenario?
When you run echo %ORACLE_HOME% you interrogate the environment variable ORACLE_HOME. In your case it is not set.
Some tools use only the environment variable for ORACLE_HOME, others use the registry value. I think most programs check both and give precedence to the environment variable.
I assume the following folders are relevant for your Oracle client:
C:\oracle_python\instantclient_11_2
C:\oracle\32bit\product\11.2.0\client_1\bin
C:\oracle\product\11.2.0\client_1\bin
It looks like you installed the Oracle client three times, each into a different folder. I do not consider this an optimized setup. My recommendation is to remove all of them and make one single, proper installation.
In case of problems, also check How to uninstall / completely remove Oracle 11g (client)?
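If you do keep a single installation, here is a quick sketch of checking and setting ORACLE_HOME from cmd (the path is assumed from the registry value quoted above):

```bat
:: Prints the literal text %ORACLE_HOME% if the variable is not set
echo %ORACLE_HOME%

:: Set it for the current cmd session only
set ORACLE_HOME=C:\oracle\product\11.2.0\client_1

:: Persist it for future sessions (takes effect in new cmd windows)
setx ORACLE_HOME "C:\oracle\product\11.2.0\client_1"
```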

Seriously, overriding the DefaultDataPath in the sqlcmdvars for a SQL Database project deployment

I have a SQL Server 2008 database project in Visual Studio 2010 that is synced on a regular basis from a schema comparison during the development phase. This same project is also under TFS source control. I have two environments, Debug and Production. Each environment is a single machine that runs both IIS and SQL Server. The production environment, however, has different data and log paths for the database, D:\Data\ and E:\Logs\, versus my development server at the standard c:\program files\sql....\data.
What I'm trying to do is set up the way I transact my deployments from the debug to production environments. I've got WebDeploy 2.1 set up, and I build my deployment packages in Visual Studio via the right-click context menu on the website project. I want to manually copy deployment packages to the production server via RDP, so there are no over-the-wire concerns here. The deployment package settings are set up to include all databases configured in the Package/Publish SQL tab. In that tab I don't pull data/schema from an existing database, because I want to deploy from the SQL database project instead, so I just point to the pre-generated .sql script file located in my database project's /sql/release folder. To top it off, I generate the .sql script in the post-build events of the SQL project via VSDBCMD.exe /dd:- /a:Deploy /manifest:... so that a simple solution rebuild-all followed by a website project deploy ensures I always have the latest .sql script in the deployment package.
This is great and all, but I have a major problem here I can't seem to overcome, and it has to do with the database data and log file paths differing between the debug and production environments. During the WebDeploy in IIS on the production server, I actually receive an exception saying it can't find the c:\programs files...\MyDatabase.mdf file. And what's scary is that after this exception, the entire database is deleted: the empty database I create right before doing the deployment. This happened both times I tried messing around with it. I'm not sure how I feel about that, but I'm hoping I can find a reliable solution to this.
I have been feverishly looking for a way to change the paths during a deployment. Many places mention changing the paths in the *.sqlfiles.sql files under Schema Objects\Database level objects\Storage\Files, because the path it tries to deploy to is the path specified there, written by the schema comparisons against the Debug SQL Server database. Changing the paths here works, but only temporarily: the next time I do a schema compare and write, the *.sqlfiles.sql files get overwritten with the info from the Debug database again. And I don't want to have to remember to never update these files during a schema comparison, because any mistake has the potential to delete the production database.
I think my salvation lies in my Release.sqlcmdvars file. It's a tease, actually: I can see a place where I "could" type the default database path, but it appears to be a read-only field, as it mentions "Location where database files are created by default (set when you deploy)." It would be grand if I could specify the paths here. Is there any way at all to specify a path in a variable here that would override the paths from the *.sqlfiles.sql files?
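For reference, a storage entry in those *.sqlfiles.sql files looks roughly like this (names are illustrative); the hard-coded FILENAME path is what every schema compare writes back from the debug server:

```sql
-- Hypothetical *.sqlfiles.sql storage object. The FILENAME path below
-- is captured from the debug server by Schema Compare, which is why
-- hand edits here do not survive the next compare-and-write.
ALTER DATABASE [$(DatabaseName)]
    ADD LOG FILE (
        NAME = [Cameleon_log],
        FILENAME = 'C:\Program Files\Microsoft SQL Server\...\Cameleon_log.ldf'
    );
```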
In the solution where I work, there are two custom variables in the sqlcmdvars file called Path1 and Path2 that I thought were reserved names that do exactly that. However, this doesn't work in my solution; the difference between the two solutions is that the other one gets deployed via a TFS build controller. Going the TFS build controller route isn't really an option, because I opted out of it to save money while using a third-party source control service.
Any help with this would be great. I have even gone so far as to create separate *.sqlfiles.sql files for debug and release and configured the dbproj file to use one or the other depending on the configuration, but this doesn't seem to work either. Also, using the custom PATH1 variable in the sqlfiles.sql file, like FILENAME = '$(PATH1)\Cameleon_log.ldf', doesn't work either. I seriously think it shouldn't be this difficult. Am I missing something simple here?
Thanks!
Okay, this was an exercise in futility. Apparently, without syncing with the target database during script generation, the script is exactly what is needed to build the database from scratch. Even if I could override the file paths, the deployment would complain about database objects already existing. I needed to specify the connection string of the target database in the deploy settings so that a comparison is done during script generation and only the relevant differences are added to the script. I really wanted to avoid exposing my production SQL Server to the outside world, but it is what it is. No need to override the paths anymore, because it looks like the database file paths are conveniently ignored during this comparison!

VS2010 Database Project backup location

Is there a way to change the backup location for only my database in my database project settings, or do I have to do that in a pre-deployment script and uncheck "Back up database before deployment" in the deployment configuration file?
The backup option uses the SQL Server default backup directory. You can change that location only by editing the registry. Unfortunately, you can't set a different backup location for each database.
Location (For SQL 2008)
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQLServer
Change...
BackupDirectory
For more on changing SQL Server default paths:
http://www.mssqltips.com/tip.asp?tip=1583
If you are feeling adventurous, you could change the registry value in your pre-deployment script and reset it in your post-deployment script. Use xp_instance_regread and xp_instance_regwrite to do that. USE WITH CAUTION!
More on that -> http://sqladm.blogspot.com/2010/09/xpinstanceregwrite-syntax.html
If you look in your deployment script, the code that reads the registry entry looks like this:
EXEC @rc = [master].[dbo].[xp_instance_regread] N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'BackupDirectory', @dir OUTPUT, 'no_output'
If you are careful, you could read, change, and restore the path during your deployment.
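A sketch of that read-change-restore pattern (the paths are illustrative; note that pre- and post-deployment scripts run as separate batches, so the saved value has to be carried in a table or re-read rather than in a local variable; test this on a non-production instance first):

```sql
-- Pre-deployment sketch: capture the current default backup directory,
-- then point it at the path this deployment should use.
DECLARE @dir NVARCHAR(4000);
EXEC [master].[dbo].[xp_instance_regread]
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'BackupDirectory', @dir OUTPUT, 'no_output';

-- Illustrative override for the duration of the deployment.
EXEC [master].[dbo].[xp_instance_regwrite]
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'BackupDirectory', REG_SZ, N'E:\Backups';

-- Post-deployment: write the original value back the same way,
-- using xp_instance_regwrite with the value captured earlier.
```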
Hope this helps!

How to create incremental scripts to update database schemas using Visual Studio 2010?

I'm trying to use a VS 2010 SQL Server Database Project to keep track of changes made to my database and to generate appropriate scripts when a change needs to be deployed from the dev to the production environment.
I have created a schema comparison between my dev database and the project schema, which does a great job. However, I cannot find a way to create incremental scripts; the only things I get are scripts with CREATE statements (the Export to Editor option).
Am I doing something wrong?
Thanks in advance.
As part of our automated build process, we store .dbschema files for each environment in source control. During the build, we create the .dbschema file based on the database project and then use a vsdbcmd command-line call to generate the change script between the project schema and each destination DB schema. If you need the specific command-line call, let me know.
If you're using "Data Dude" correctly, these are generated for you and run when you choose Deploy. Just keep your schema (tables, stored procs, populate scripts, etc.) as project items and change it as you need to. The build-and-deploy process will generate the scripts. http://msdn.microsoft.com/en-us/library/ff678491.aspx is a not-bad starting point if you want to get these scripts and run them yourself against various staging, production, etc. databases.
In the .deploymentmanifest file there are two settings:
<DeployToDatabase>False</DeployToDatabase>
and
<DeployToScript>True</DeployToScript>
Running vsdbcmd will then generate the change scripts without affecting the target database. All you need is a version of the database that matches production, or access to point vsdbcmd at production to generate the script.
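Assuming those manifest settings, a script-only run looks roughly like this (server, database, and file names are placeholders; check vsdbcmd /? for the exact property names on your version):

```bat
vsdbcmd /a:Deploy /dd:- /manifest:MyDatabase.deploymentmanifest /cs:"Data Source=ProdServer;Integrated Security=True" /p:TargetDatabase=MyDatabase /p:DeploymentScriptFile=MyDatabase_Upgrade.sql
```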

How are the different collation settings related and applied in the SQL Server project type in Visual Studio 2010?

In the Project Settings tab of the project properties page there is an option called Collation for database model.
There is also an option in the project settings (the .sqlsettings file) called Database collation.
And then in the Deploy tab of the project properties page there is a link to the Deployment configuration file which has an option called Deployment comparison collation.
I'm confused, but more importantly, even though I have set that last option to Use the collation of the server the deployment script always contains the following statement:
ALTER DATABASE [$(DatabaseName)] COLLATE Latin1_General_CI_AS;
Which results in the following error:
ALTER DATABASE failed. The default collation of database 'Database' cannot be set to Latin1_General_CI_AS.
Ideally I don't want to think about collation, and always follow what is set at the target database level, but somehow the various options of the SQL Server project make it hard to predict what's going to happen at actual deployment.
Can you explain what each of these options do and how they interact with and/or override each other?
While I cannot shed much light on what the plethora of different collation settings do, I can point out one setting that helped me when searching for a remedy to the ALTER DATABASE ... COLLATE statement always being in the deployment script.
In Database Project => Properties => Database.sqldeployment settings there is a ScriptDatabaseCollation option that, when unticked, fixed my issue.
I believe the COLLATE Latin1_General_CI_AS that you are seeing comes from the default collation specified in your project settings. This can be accessed via:
Project Properties > Project Settings > Edit the catalog properties file
This opens Database.sqlsettings, and you will see that the first entry is the default database collation your project uses when generating scripts.
The Deployment comparison collation is used when comparing database models as you deploy the project. I think it only takes effect if you are deploying directly to a database, not if you are using the default setting of generating a .sql script file.
As eddie noted, there is a setting in the Database.sqldeployment file that, when unchecked, removes the annoying collation specifier from the CREATE and ALTER scripts.
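In the .sqldeployment file itself, the option is stored as an element like this (element name inferred from the UI label; verify against your own file):

```xml
<!-- Hypothetical fragment of Database.sqldeployment: prevents the
     deployment script from emitting ALTER DATABASE ... COLLATE. -->
<ScriptDatabaseCollation>False</ScriptDatabaseCollation>
```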
