Octopus - SQL Deploy DACPAC Community Contributed Step

I am using the SQL Deploy DACPAC community contributed step to deploy my dacpac to the server within Octopus.
It has been set up correctly and had been working fine until the situation below occurred.
I have a situation where I am dropping columns, but the deployment keeps failing because rows are detected in the affected tables (i.e. possible data loss). I am attempting to pass /p:BlockOnPossibleDataLoss=false via the "Additional deployment contributor arguments" field, but it seems to be ignored.
Can anyone guide me to what is wrong?

The publish properties include DropObjectsNotInSource; try setting it to True.
You might want to fine-tune it to avoid dropping users, permissions, etc.

After multiple updates by the original author, this issue was still not resolved; the parameter has actually been removed entirely as of version 11.
Initially, I added a pre-deployment script that copied all the data from the tables that were expected to fail and then deleted it, allowing the table schema to update as normal, and a post-deployment script that re-inserted all the data into the new structure. The problem with this was that a pre-deployment and post-deployment script were required even for data that was fine to lose, where they weren't really needed.
Finally, I got around this by duplicating the community step "SQL - Deploy DACPAC" (https://library.octopus.com/step-templates/58399364-4367-41d5-ad35-c2c6a8258536/actiontemplate-sql-deploy-dacpac) by saving it as a copy from within Octopus. I then went into the code, into the function Invoke-DacPacUtility, and added the following code:
[bool]$BlockOnPossibleDataLoss into the parameter list
Write-Debug (" Block on possible data loss: {0}" -f $BlockOnPossibleDataLoss) into the list of debugging
if (!$BlockOnPossibleDataLoss) { $dacProfile.DeployOptions.BlockOnPossibleDataLoss = $BlockOnPossibleDataLoss; } into the list of deployment options
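Pieced together, the relevant parts of the modified Invoke-DacPacUtility look roughly like this (a sketch showing only the added fragments in context; the surrounding template code is elided):

function Invoke-DacPacUtility {
    param (
        # ... the step's existing parameters ...
        [bool]$BlockOnPossibleDataLoss
    )

    # ... existing debug output ...
    Write-Debug (" Block on possible data loss: {0}" -f $BlockOnPossibleDataLoss)

    # ... after the DacProfile ($dacProfile) has been loaded ...
    # Only override the profile when the checkbox is unticked; otherwise
    # the DacDeployOptions default (block on data loss) stays in effect.
    if (!$BlockOnPossibleDataLoss) {
        $dacProfile.DeployOptions.BlockOnPossibleDataLoss = $BlockOnPossibleDataLoss;
    }

    # ... existing dacpac deployment logic ...
}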
Then, I went into the list of parameters and added as follows:
Variable name: BlockOnPossibleDataLoss
Label: Block on possible data loss
Help text: True to stop deployment if possible data loss is detected; otherwise, false. Default is true.
Control type: Checkbox
Default value: true
With this, I am able to change the value of this parameter with the checkbox when using the step in the project's deployment process.
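One detail worth noting (an assumption about the template's plumbing, not something spelled out above): Octopus passes checkbox values to the script as the strings "True"/"False", so the copied template has to convert the value before it can bind to the new [bool] parameter, along these lines:

# Convert the Octopus checkbox string ("True"/"False") into a real boolean
# before handing it to Invoke-DacPacUtility.
$blockOnPossibleDataLoss = [System.Convert]::ToBoolean($OctopusParameters["BlockOnPossibleDataLoss"])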

Related

Octopus Deploy - variable replacement for deployment target machine name

Problem:
I've got a manual intervention step with textual steps for the person performing the deployment to follow.
I'd like to pass in the name of the target server so that the person doesn't need to look up the server name being targeted.
For example, as you see below, I need them to unzip to a location on the target server.
**SECTION 1: (Main installation)**
1. Navigate to: #{InstallationZipLocation}.
2. Download zip file named: #{ZipFileName}
3. Unzip to the desktop on: #{DeploymentTargetMachineName} --need help here
4. Run executable named: #{ExecutableName}
5. Accept default settings
What I have tried:
Octopus Deploy - System Variables Documentation offers:
#{Octopus.Deployment.Machines} results in: Machines-6
#{Octopus.Deployment.SpecificMachines} results in: (empty string)
What I expect to see:
3. Unzip to the desktop on: FTPServer05
Additional Comment:
I realize I could set the name of the target server in my variables list for each target environment/scope, resulting in only 4 variables (not a big deal, and easy to maintain), but I was curious if there was a way to simplify it. We are running Octopus Deploy 3.12.9.
So I was looking for an easier approach, but stumbled on something that I found to be rather interesting so I went ahead and implemented it.
Output variables... "After a step runs, Octopus captures the output variables, and keeps them for use in subsequent steps."
What I did to resolve my issue:
I set up a custom step template whose sole purpose is to set "output variables" for use in my subsequent steps. You could have this be the first step in your project, or at a minimum have it come before the step that references the variable you are setting.
Custom step setup:
PowerShell:
# Echo the value into the deployment log, then publish it as an output
# variable for subsequent steps.
Write-Host "TargetDeploymentMachineName $TargetDeploymentMachineName"
Set-OctopusVariable -name "TargetDeploymentMachineName" -value $TargetDeploymentMachineName
Parameters: a single step-template parameter named TargetDeploymentMachineName.
Then in my Manual Intervention step, I use the output value like so:
3. Unzip to the desktop on: #{Octopus.Action[MyProject-Set-Output-Variables].Output.TargetDeploymentMachineName}
(Where [MyProject-Set-Output-Variables] represents the name of the step in my deployment project which is responsible for assigning the output variables)
Explanation for why I was having trouble in my question:
Turns out the variable binding syntax to use for my question would have been:
Octopus.Machine.Name = The name that was used to register the machine in Octopus. Not the same as Hostname.
However, the Manual Intervention step specifically does not have a "Deployment Target"; it instead just runs on the Octopus Server.
So I am pretty sure that is why I was not getting a value for the "target". For example, I simply tested another new basic step that used the "Deployment Targets" radio buttons, which resulted in the FTPServer05 value I was expecting.
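For reference, such a test step only needs to be a script step scoped to deployment targets, along the lines of this sketch:

# Runs on a deployment target, so machine-scoped system variables resolve.
Write-Host ("Deploying to: {0}" -f $OctopusParameters["Octopus.Machine.Name"])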

Flyway does not ignore out of order migration scripts, with outOfOrder=false

I am not using outOfOrder.
I would like to be able to add a migration script that would not be the latest (e.g. to bugfix an existing script, without changing that script).
I would like the new script to be run, as part of the normal ordering, on databases that haven't been migrated yet.
Any databases that are up to date (e.g. manually repaired) should ignore the new script.
From the documentation:
OutOfOrder - Allows migrations to be run "out of order". If you already have versions 1 and 3 applied, and now a version 2 is found, it will be applied too instead of being ignored.
This suggests that the new script will be ignored, but I get the error:
ERROR: Validate failed: Detected resolved migration not applied to database
Will the new script only be ignored if the db baseline is ahead of it?
Is this the expected behaviour?
If so, I guess my solution here is either to:
Use outOfOrder, and complicate all my scripts to be idempotent.
Baseline my db after every migration.
There is a pull request for this that will be merged in time for Flyway 5.1.0: https://github.com/flyway/flyway/pull/1866
Until then, you also have the option to disable validation by setting validateOnMigrate to false, for example:
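A sketch of the command-line form (adjust to however you invoke Flyway):

# With validation disabled, a database already past the new script's version
# simply ignores it (outOfOrder stays false), while databases that are
# behind pick it up in the normal order.
flyway -validateOnMigrate=false migrate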

Publish fails when I select Execute Code Migrations and Have Config transformation enabled

I have enabled config file transformation for deployment.
When I try to select the Execute Code First Migrations option, select the Update Database option, and also select a SQL data script, the publish fails with the following error.
Web deployment task failed. (The value '' is not a valid connection string or an absolute path. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_INVALID_CONNECTION_STRING.)
Also, when I open the publish profile again to republish, I don't see the Update Database option any more, which is surprising.
Has anyone faced anything like this?
If you have multiple projects in your solution, Code First Migrations may be searching the wrong project for the connection string.
Make sure the selected startup project has the right connection string in its app.config. It is not necessarily the project that contains your migration scripts.
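To check which project's config is being picked up, you can also run the migration explicitly from the Package Manager Console and point it at the right startup project (a sketch; the project names here are placeholders):

# -StartUpProjectName controls which project's config file supplies the
# connection string; -ProjectName is the project containing the migrations.
Update-Database -ProjectName "MyApp.Data" -StartUpProjectName "MyApp.Web" -Verbose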

Publishing SQL Data Tools 2012 project: Forces into Single User Mode

I have a CLR project that I'm trying to publish using Visual Studio. I had to change the project to a SQL Server Data Tools project, and now it's not publishing. Each time I try, I get a timeout error. When I take it step by step, I find that this block of code hangs on my server.
IF EXISTS (
    SELECT 1
    FROM [master].[dbo].[sysdatabases]
    WHERE [name] = N'fwDrawings')
BEGIN
    ALTER DATABASE [fwDrawings]
    SET READ_COMMITTED_SNAPSHOT OFF;
END
Basically, I know it's trying to force the database into single-user mode when I publish. It's just my staging server and not a production server, but this is still a problem. I can't keep kicking everyone off the server and switching it to single-user mode every time I want to update the CLR while I'm testing its functionality. And I don't want to wait for a maintenance cycle or downtime to promote it up to production. Is there a way around this?
Presumably you have READ_COMMITTED_SNAPSHOT turned on for your database.
If this is the case, you need to change your Database project settings to match. Check "Read committed snapshot" transaction isolation, within the Operational tab in Database Settings for the project.
For me, this prevented the publish timing out, i.e. I can now publish successfully.
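Alternatively (an untested sketch based on SqlPackage's documented ScriptDatabaseOptions publish property, not part of the fix above; server, database, and dacpac names are placeholders), you can stop the publish from scripting database-level options at all, which leaves the existing READ_COMMITTED_SNAPSHOT setting alone:

# ScriptDatabaseOptions=False tells SqlPackage not to set or update
# database properties as part of the publish.
& SqlPackage.exe /Action:Publish `
    /SourceFile:"MyClrProject.dacpac" `
    /TargetServerName:"StagingServer" `
    /TargetDatabaseName:"fwDrawings" `
    /p:ScriptDatabaseOptions=False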
For a safer way to deploy to a server that's in use, try using a schema comparison instead.

Is there a way to suppress SQL03006 error in VS2010 database project?

First of all, I know that the error I am getting can be resolved by creating a reference project (of type Database Server) and then referencing it in my Database project...
However, I find this to be overkill, especially for small teams where there is no specific role separation between developers and DB admins... but let's leave this discussion for another time. The same goes for DACs: I can't use a DAC because of the limited set of objects it supports...
Question
Now, the question is: can I (and how do I) disable the SQL03006 error when building my Database project? In my case this error is generated because I am creating some users whose logins are "unresolved"... I think this should be possible, since I "know" that the logins will exist on the server before I deploy the script... I also don't want to maintain a Database Server project just to keep the references resolved (I have nothing besides logins at the server level)...
Workaround
Using pre/post-deployment scripts, it is trivial to get the script working...
Workaround Issue
You have to comment out the user scripts (which use login references) for the workaround...
As soon as you do that, the .sqlpermissions files bomb out, saying there are no referenced users... and then you have to comment the permissions out as well and put them in post-deployment scripts...
The main disadvantage of this workaround is that you cannot leverage schema compare to its fullest extent (you have to tell it to ignore users/logins/permissions).
So again, all I want is
1. to maintain only DB project (no references to DB Server projects)
2. disable/suppress SQL03006 error
3. be able to use schema compare in my DB project
Am I asking for the impossible? :)
Cheers
P.S.
If someone is aware of better VS2010 database project templates/tools (for SQL Server 2008 R2) please do share...
There are two workarounds:
1. Turn off any schema checking (Tools > Options > Database Tools > Schema Compare > SQL Server 200x, then the Object Type tab) for anything user or security related. This is a permanent fix.
2. Go through the schema comparison and mark anything user or security related as Skip, then generate your SQL compare script. This is a per-comparison fix.
It should be obvious, but if you already have scripts in your project that reference logins or roles, then delete them and they won't get created.
