At the moment I have two databases: a local one and a dev one. I configured the local one fine using config files and a specific Gradle task, and that runs okay.
However, since I'm trying to simulate what would happen in my pipeline, how can I set it up so that it reads from the environment variables on my machine? So far this is what I have tried.
I checked the environment variable by running
echo $FLYWAY_URL
This returned
jdbc:postgresql://localhost:5432/postgres
which means the variable exists. Next, I set up my task like this in Gradle:
def jdbcDevUrl = System.getenv()['FLYWAY_URL']

task migrateDev(type: FlywayMigrateTask) {
    url = jdbcDevUrl
    user = 'myUsr2'
    password = 'mySecretPwd2'
    locations = ['filesystem:doc/flyway/migrations']
}
However, this does not work at all. I have also tried not setting the url property here, hoping it would be picked up automatically, but that does not help.
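One thing I have not ruled out is whether the Gradle process itself actually sees the variable (as opposed to just my interactive shell). This is the sanity check I am planning to add for that (a minimal sketch; the task name printFlywayUrl is just for illustration):

task printFlywayUrl {
    doLast {
        // Print what the Gradle JVM sees; if this is null, the daemon/IDE
        // was started without the variable and the migrate task can't read it either.
        println "FLYWAY_URL = ${System.getenv('FLYWAY_URL')}"
    }
}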
Also, using a config file with
flyway.url=${FLYWAY_URL}
does not work either. I'm using the Community edition.
All I get is
Unable to connect to the database. Configure the url, user and password!
Any help would be highly appreciated.
I have an Ansible script to update and maintain my WildFly installation. One of the tasks in this setup manages the MySQL driver, and in order to update that driver I first have to disable the application that uses it before I can replace the driver and set up all my datasources anew.
My CLI script starts with the following lines:
if (outcome == success) of /deployment=my-app-1.1.ear:read-resource
    deployment disable my-app-1.1.ear
end-if
My problem is that this makes me very dependent on the actual name of the application, and that name can change over time since I keep the version information in it.
I tried the following:
set foo=`ls /deployment`
deployment disable $foo
It did not work: when I look at foo I see that it is not my-app-1.1.ear but ["my-app-1.1.ear"], so I feel I might be going in the right direction, even though I have not got it quite right.
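For what it's worth, what I am essentially hoping for is something along these lines (an untested sketch; I believe newer WildFly CLI versions support for loops, but I have not verified the exact syntax):

# Rough sketch, not working code: iterate over whatever is deployed
# instead of hard-coding my-app-1.1.ear. I would still need to filter
# the names so that only my-app* gets disabled.
for deploymentName in :read-children-names(child-type=deployment)
    deployment disable $deploymentName
done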
I've read many questions on this such as:
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/da4bdb11-fe42-49db-bb8d-288dd1bb72a2/sqlcmd-vars-in-create-table-script?forum=ssdt
and
How to run different pre and post SSDT publish scripts depending on the deploy profile
What I'm trying to achieve is a way of defining a set of scripts based on the environment being deployed to. The idea is that the environment is passed in from the Azure DevOps pipeline as a SQLCMD variable called $(ServerName), which I've set up in the SQL Server database project under Properties with a default of 'DEV'.
This is then used in the post deployment script like this:
:r .\PostDeploymentScripts\$(ServerName)\index.sql
This should therefore pick up the correct index.sql file based on the $(ServerName) variable. When testing this by publishing, entering 'QA' for the $(ServerName) variable, and generating the script, it was still displaying the 'DEV' scripts, even though the top of the generated script showed that the variable had been set correctly.
How do I get the post deployment script to reference the $(ServerName) variable correctly so I can dynamically set the correct reference path?
Contrary to this nice post: https://stackoverflow.com/a/54482350/11035005, it appears that the :r directive is evaluated at compile time and inlined into the DACPAC before the XML publish profiles are even evaluated, so this is not possible as described there.
The values used are the defaults or locals from the build configuration and can only be controlled from there.
I have two ODI instances installed on the same machine with the same home directory.
I am trying to import scenarios using the ./startcmd.sh command and it works, but it always deploys the scenario to instance1.
The question is: how can I redirect the deployment to instance2 instead of instance1?
Are there any properties files or anything else that provide that?
A scenario is not imported into an instance or a path on the machine; it is imported into a work repository in a database.
OdiImportScen does not have a parameter to specify the repository into which you want to import.
Instead you can use OdiImportObject and specify the WORKREPNAME parameter.
Is it possible to deploy different sets of seed data for different publish profiles using visual studio Sql Server Data tools database project?
We know you can deploy seed data using a post deployment script.
We know you can deploy to different environments using the publish profiles facility.
What we don't know is how you can deploy different seed data to the different environments.
Why would we want to do this?
We want to be able to do this so we can have a small, explicit set of seed data to unit test against.
We need a wider set of data for the test team's environment, so the test team can test the whole application against it.
We need a specific set of seed data for the pre-prod environment.
There are a few ways you can achieve this. The first approach is to check for the environment in the post-deploy script, such as:
if @@servername = 'dev_server'
begin
    -- insert dev-specific seed data here
end
A slightly cleaner version is to have a different script file for each environment and import them via the :r sqlcmd include directive, so you could have:
PostDeploy.sql
DevServer.sql
QAServer.sql
then
if @@servername = 'dev_server'
begin
:r DevServer.sql
end
if @@servername = 'qa_server'
begin
:r QAServer.sql
end
You will need to make sure the paths to the .sql files are correct and that you copy them along with the dacpac.
You don't have to use @@servername; you can use SQLCMD variables and pass them in for each environment, which again is a little cleaner than hardcoding server names.
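For example, with a SQLCMD variable it could look roughly like this (a sketch; the variable name DeployEnv is just an illustration and would be declared under the project's SQLCMD Variables and set per publish profile):

-- Sketch: branch on a SQLCMD variable instead of @@servername.
-- The :r includes are still inlined at build time, but only the
-- matching block actually executes at deploy time.
IF '$(DeployEnv)' = 'Dev'
BEGIN
:r DevServer.sql
END
IF '$(DeployEnv)' = 'QA'
BEGIN
:r QAServer.sql
END

One caveat with either version: the included files cannot contain GO separators, or the BEGIN/END blocks will be split across batches.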
The second approach is to modify the dacpac to replace the post-deploy script with your environment-specific one. This is my preferred approach and works best as part of a CI build; my process is:
Check-in changes
Build Server builds dacpac
Build takes the dacpac and copies it to the dev, qa, prod, etc. environment folders
Build replaces the post-deploy script in each copy with the environment-specific script
I call the scripts PostDeploy.dev.sql, PostDeploy.Qa.sql, etc., and set their Build Action to "None" (or they are added as "Script, Not in Build").
To replace the post-deploy script you just need to use the .NET Packaging API; for some examples, take a look at my Dir2Dac demo, which does that and more:
https://github.com/GoEddie/Dir2Dac
more specifically:
https://github.com/GoEddie/Dir2Dac/blob/master/src/Dir2Dac/DacCreator.cs
// Create the /postdeploy.sql part inside the dacpac package and copy the
// replacement script into it.
var part = package.CreatePart(new Uri("/postdeploy.sql", UriKind.Relative), "text/plain");
using (var reader = new StreamReader(_postDeployScript))
{
    reader.BaseStream.CopyTo(part.GetStream(FileMode.OpenOrCreate, FileAccess.ReadWrite));
}
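Pulled together, the surrounding code looks roughly like this (my own sketch rather than a verbatim copy from Dir2Dac; the paths and the PostDeploy.Qa.sql name are illustrative):

using System;
using System.IO;
using System.IO.Packaging; // reference WindowsBase or the System.IO.Packaging package

// A dacpac is an OPC (zip) package, so the Packaging API can swap the
// post-deploy part for an environment-specific script.
var dacpacPath = @"output\MyDatabase.dacpac";
var envScriptPath = @"scripts\PostDeploy.Qa.sql";

using (var package = Package.Open(dacpacPath, FileMode.Open, FileAccess.ReadWrite))
{
    var uri = new Uri("/postdeploy.sql", UriKind.Relative);

    // Drop the script the build put in, if any, then write the replacement.
    if (package.PartExists(uri))
        package.DeletePart(uri);

    var part = package.CreatePart(uri, "text/plain");
    using (var source = File.OpenRead(envScriptPath))
    using (var target = part.GetStream(FileMode.Create, FileAccess.Write))
    {
        source.CopyTo(target);
    }
}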
I have solved this by writing a PowerShell script that is executed automatically when publishing, via an Exec command in the project file.
It creates a script file that includes all the scripts found in a folder in the project (the folder is named after the target environment).
This generated script is then included in the post-deploy script.
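A minimal sketch of that generator (my reconstruction of the idea, not the exact script; the folder layout and the GeneratedInclude.sql name are illustrative):

# Build one include file that pulls in every .sql script found in the
# folder named after the target environment.
param([string]$Environment = 'DEV')

$folder   = Join-Path $PSScriptRoot "PostDeploymentScripts\$Environment"
$includes = Get-ChildItem -Path $folder -Filter *.sql |
    ForEach-Object { ":r .\PostDeploymentScripts\$Environment\$($_.Name)" }

Set-Content -Path (Join-Path $PSScriptRoot 'PostDeploymentScripts\GeneratedInclude.sql') -Value $includes

The project file then runs this from its Exec command when publishing, and the post-deploy script contains a single :r line pointing at the generated GeneratedInclude.sql.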
I am struggling to get Jenkins to work with SSH, and after looking at a number of questions and answers, the solution seems to involve setting the Windows environment variable HOME.
When I set this environment variable and restart Jenkins, it starts properly but I can't access it via the URL:
http://localhost:8080
Once I get rid of this variable and restart Jenkins it works well.
I am not sure why this variable is wreaking havoc, or how others have managed to set it and still get things to work.
The outcome is the same when I remove the Windows environment variable and replace it with the system property inside Jenkins.
Appreciate any suggestions/advice.
Thanks