How to read .runsettings test parameters in an xUnit fixture

I'm writing xUnit test cases for a .NET Core application that uses DocumentDB (Cosmos DB) as storage. The unit tests are written to execute against the local Cosmos DB emulator. In the Azure DevOps build environment, I've set up the Azure Cosmos DB CI/CD task, which internally creates a container to install the emulator. However, I'm not able to figure out how the emulator's endpoint can be passed to the xUnit fixture.
Is there any way for an xUnit fixture to read the .runsettings test run parameters, or can the parameters be passed in via some other source?
Update
Currently, I have implemented the scenario using an environment variable, but I'm still not happy about defining the connection string as an environment variable via PowerShell in a build task and reading it in code during unit test execution. I was wondering whether there is another way of achieving this.
A snapshot (not included here) showed how the build tasks are currently configured as a workaround, and the code to read the value is:
var serviceEndpoint = Environment.GetEnvironmentVariable("CosmosDbEmulatorEndpointEnvironmentVariable");
Since the unit test task provides the option to pass a .runsettings/.testsettings file and to override the test run parameters, I was thinking something could be achieved using those options.

This is not supported in xUnit.
See the SO answers here and here, and this GitHub issue indicating that it is not something that will be supported in xUnit.
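If you do stay with the environment-variable workaround, a minimal sketch of an xUnit fixture that reads the endpoint might look like this (class names and the fallback endpoint are illustrative, not part of the original setup):
using System;
using Xunit;

public class CosmosDbEmulatorFixture
{
    public Uri ServiceEndpoint { get; }

    public CosmosDbEmulatorFixture()
    {
        // Set by the PowerShell build task; falls back to the emulator's
        // default local endpoint for developer machines (assumption).
        var endpoint = Environment.GetEnvironmentVariable("CosmosDbEmulatorEndpointEnvironmentVariable")
                       ?? "https://localhost:8081";
        ServiceEndpoint = new Uri(endpoint);
    }
}

public class CosmosDbTests : IClassFixture<CosmosDbEmulatorFixture>
{
    private readonly CosmosDbEmulatorFixture _fixture;

    public CosmosDbTests(CosmosDbEmulatorFixture fixture)
    {
        _fixture = fixture;
    }
}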


Related

How do you reference defined variables in a SQL Server Database Project?

I've read many questions on this such as:
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/da4bdb11-fe42-49db-bb8d-288dd1bb72a2/sqlcmd-vars-in-create-table-script?forum=ssdt
and
How to run different pre and post SSDT pubish scripts depending on the deploy profile
What I'm trying to achieve is a way of defining a set of scripts based on the environment being deployed to. The idea is that the environment is passed in as a SQLCMD variable as part of the Azure DevOps pipeline, into a variable called $(ServerName), which I've set up in the SQL Server database project under Properties with a default of 'DEV'.
This is then used in the post deployment script like this:
:r .\PostDeploymentScripts\$(ServerName)\index.sql
This should therefore pick up the correct index.sql file based on the $(ServerName) variable. When I tested this by publishing, entering 'QA' for the $(ServerName) variable, and generating the script, it was still including the 'DEV' scripts, even though the top of the generated script showed the variable had been set correctly.
How do I get the post deployment script to reference the $(ServerName) variable correctly so I can dynamically set the correct reference path?
Contrary to this nice post, https://stackoverflow.com/a/54482350/11035005, it appears that the :r directive is evaluated at compile time and inserted into the DACPAC before the XML publish profiles are even evaluated, so this is not possible as described there.
The values used are the defaults or locals from the build configuration and can only be controlled from there.

SonarQube Generic Execution Report is ignored

I have spent the whole morning trying to set up e2e test reporting via SonarQube's Generic Test Data -> Generic Execution feature.
I created a custom xml report that gets added to the scan properties like this:
sonar.testExecutionReportPaths=**/e2e-report.xml
So far, SonarQube seems to completely ignore this property, and I see no attempt to parse the file in the logs. Has anyone made this work?
These are links by Sonar about the Generic Execution feature:
https://docs.sonarqube.org/display/SONAR/Generic+Test+Data
https://github.com/SonarSource/sonarqube/blob/master/sonar-scanner-engine/src/main/java/org/sonar/scanner/genericcoverage/GenericTestExecutionSensor.java
This is a SonarQube 6.2+ feature, so make sure you are using an appropriate SonarQube version.
In addition, sonar.testExecutionReportPaths does not allow wildcard patterns (like *); provide relative or absolute paths, comma separated.
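For example, assuming the report lives at reports/e2e-report.xml relative to the analysis base directory (an illustrative path), a working property would be:
sonar.testExecutionReportPaths=reports/e2e-report.xml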
See also:
The official documentation of the Generic Test Data feature
The source code, that looks up the generic execution files
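For reference, a minimal report in the Generic Execution format looks roughly like this (file paths and test names are illustrative):
<testExecutions version="1">
  <file path="e2e/login.spec.js">
    <testCase name="logs in with valid credentials" duration="1200"/>
    <testCase name="rejects an invalid password" duration="800">
      <failure message="expected error banner to be visible"/>
    </testCase>
  </file>
</testExecutions>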

global variable for poor man's dependency injection in parse cloud code

I would like a variable to be shared among the various modules that I use for my cloud code.
For example, I was hoping I would be able to do the following:
In main.js, I would have the following:
Env = 'prod';
var Foo = require('cloud/foo.js').Foo;
Then in foo.js, I'd want to be able to access the value of Env
console.log("environment is: " + Env);
This does not work when deployed on Parse, but it does work if I run this in node.js.
Essentially, what I am looking for is a poor man's way to do dependency injection to allow me to easily test my cloud code in a local environment using node.js.
In the case above, Env would store the information that differs whether the cloud code executes in production (as a cloud function in Parse) or in a test (in node.js run locally).
[In the simple example above, I set Env to prod in main.js, and I'd set it to 'test' in my test script.]
Thanks for any insight.
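For what it's worth, one way to avoid the implicit global entirely is to pass the environment in explicitly when the module is loaded; a rough sketch (the init function is an assumption, not a Parse API):
// main.js
var env = 'prod'; // 'test' when running locally under node.js
var Foo = require('cloud/foo.js').init(env);

// foo.js
exports.init = function (env) {
    console.log("environment is: " + env);
    // build and return the module's API, closing over env
    return {
        currentEnv: function () { return env; }
    };
};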

Deploy different seed data for different publish profiles using visual studio ssdt?

Is it possible to deploy different sets of seed data for different publish profiles using visual studio Sql Server Data tools database project?
We know you can deploy seed data using a post deployment script.
We know you can deploy to different environments using the publish profiles facility.
What we don't know is how you can deploy different seed data to the different environments.
Why would we want to do this?
We want to be able to do this so we can have a small explicit set of seed data for unit testing against.
We need a wider set of data to deploy to the test team's environment so they can test the whole application against it.
We need a specific set of seed data for the pre-prod environment.
There are a few ways you can achieve this. The first approach is to check for the environment in the post-deploy script, such as:
IF @@SERVERNAME = 'dev_server'
BEGIN
    -- insert seed data here
END
A slightly cleaner version is to have a different script file for each environment and import it via the :r sqlcmd directive, so you could have:
PostDeploy.sql
DevServer.sql
QAServer.sql
then:
IF @@SERVERNAME = 'dev_server'
BEGIN
    :r DevServer.sql
END

IF @@SERVERNAME = 'qa_server'
BEGIN
    :r QAServer.sql
END
You will need to make sure the paths to the .sql files are correct and you copy them with the dacpac.
You don't have to use @@SERVERNAME; you can use sqlcmd variables and pass them in for each environment, which again is a little cleaner than hardcoding server names, as in the sketch below.
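For example, with an assumed sqlcmd variable $(DeployEnv), declared in the project and set per publish profile, the post-deploy script could branch like this:
-- $(DeployEnv) is substituted textually by sqlcmd before execution
IF '$(DeployEnv)' = 'QA'
BEGIN
    :r QAServer.sql
END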
The second approach is to modify the dacpac to replace the post-deploy script with your environment-specific one. This is my preferred approach and works best as part of a CI build; my process is:
Check-in changes
Build Server builds dacpac
Build takes the dacpac and copies it to the dev, qa, prod, etc. environment folders
Build replaces the post-deploy script in each with the environment-specific script
I call the scripts PostDeploy.dev.sql, PostDeploy.Qa.sql, etc., and set their Build Action to "None" (or they are added as "Script, Not in Build").
To replace the post-deploy script you just need to use the .NET Packaging API; for some examples, take a look at my Dir2Dac demo, which does that and more:
https://github.com/GoEddie/Dir2Dac
more specifically:
https://github.com/GoEddie/Dir2Dac/blob/master/src/Dir2Dac/DacCreator.cs
// A dacpac is an OPC package; create (or overwrite) the post-deploy part
// and copy the environment-specific script into it.
var part = package.CreatePart(new Uri("/postdeploy.sql", UriKind.Relative), "text/plain");
using (var reader = new StreamReader(_postDeployScript))
{
    reader.BaseStream.CopyTo(part.GetStream(FileMode.OpenOrCreate, FileAccess.ReadWrite));
}
I have solved this by writing a PowerShell script that is executed automatically when publishing, via an Exec command in the project file. It creates a script file that includes all scripts found in a project folder named after the target environment, and that generated script is then included in the post-deploy script.
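As a rough sketch of such a hook, where the target name, script path, and $(TargetEnvironment) property are all assumptions rather than the exact setup described above:
<Target Name="MergeSeedScripts" BeforeTargets="BeforeBuild">
  <!-- Concatenate every .sql file under Seed\$(TargetEnvironment)\ into the
       single script that the post-deploy script includes via :r -->
  <Exec Command="powershell -ExecutionPolicy Bypass -File &quot;$(ProjectDir)Scripts\MergeSeedScripts.ps1&quot; -Environment $(TargetEnvironment)" />
</Target>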

Jenkins Integration/Unit Testing

My group will be implementing CI using Jenkins, so I want to make sure that any unit and/or integration tests we create integrate easily with Jenkins. We have several different technologies in our stack, from C++ code to Oracle PL/SQL packages to Groovy code. We want to develop test drivers (code that wraps and tests these individual code units) that we can integrate with Jenkins, so that these tests run automatically on every commit (git) as well as on a nightly basis. My question is: what are the best practices for writing these test drivers so that they will easily integrate with Jenkins when we implement it?
For example, we have a PL/SQL stored procedure that we want to run tests against as part of our CI testing. I could write a bash shell script that wraps calls to it, or I could write a Java program that calls it. Basically, I could wrap it in anything. Then the next question is: is there some sort of standard for outputting results so that Jenkins can easily determine whether a test passed or failed?
"...is there some sort of standard for outputting results so that Jenkins can easily determine if the test passed or failed?"
If your test results are JUnit-compliant, Jenkins has a JUnit plugin that gives you a good way to trace test reports (result trend graphs) and to archive test results. Converting an Ant test log to JUnit format is also straightforward.
useful links:
http://nose2.readthedocs.org/en/latest/plugins/junitxml.html
https://wiki.jenkins-ci.org/display/JENKINS/JUnit+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/xUnit+Plugin
Jenkins and JUnit
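For reference, a minimal JUnit-style result file that the plugin can consume looks roughly like this (suite and test names are illustrative):
<testsuite name="PlSqlProcedureTests" tests="2" failures="1">
  <testcase classname="billing" name="calculates_invoice_total" time="0.120"/>
  <testcase classname="billing" name="rejects_negative_amounts" time="0.050">
    <failure message="expected ORA-20001 to be raised"/>
  </testcase>
</testsuite>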
"Basically I could wrap it in anything."
Among your choices, I personally prefer Java, because it gives you better APIs for creating XML files.
Use Python's unittest to wrap any of your tests and produce JUnit XML test results.
One easy way to get any Python unittest to write out JUnit XML is from the command line:
yum install pytest
And call your test script like this:
py.test --junitxml result.xml testscript.py
Then, in the Jenkins build configuration, under Post-build Actions, add a "Publish JUnit test result report" action pointing at result.xml and any other test result files you produce.
https://docs.python.org/2.7/library/unittest.html
This is just one way of producing JUnit XML results with Python; there are a good few other methods, using the unittest module, junitxml, or other tools.
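For illustration, a minimal testscript.py that py.test can collect and report on might look like this (the procedure call is a placeholder, not a real driver):
import unittest

class StoredProcedureTest(unittest.TestCase):
    def test_procedure_returns_expected_total(self):
        # A real test would invoke the PL/SQL procedure (e.g. via cx_Oracle)
        # and assert on its output; 42 is a stand-in result.
        result = 42
        self.assertEqual(result, 42)

if __name__ == '__main__':
    unittest.main()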
