Run multiple tests in the same build from different test projects - visual-studio

Need your help in the following scenario:
I have a solution with 2 projects containing different unit tests.
Those projects generate 2 different DLLs: *deployment.dll and *database.dll.
I have a build on TFS that I want to use to run those tests. I'm using "Test Case Filter" to filter the categories of my tests
(TestCategory=TEST1|TestCategory=TEST2|TestCategory=TEST3|TestCategory=TEST4)
and in "Test Sources Spec" I'm filtering both DLLs (*deployment.dll;*database.dll).
*.deployment.dll has TEST2, TEST3, TEST4.
*.database.dll has TEST1.
This doesn't work: the tests in *database.dll do not run.
Could you please help with that? If I make the build with only 1 DLL, for example *.database.dll, TEST1 runs fine.
(UPDATE) SCENARIO 1
Test Case Filter: TestCategory=TEST1|TestCategory=TEST2|TestCategory=TEST3|TestCategory=TEST4
Test Sources Spec: *database.dll;*deployment.dll
Only TEST1 runs.
(UPDATE) SCENARIO 2
Test Case Filter: TestCategory=TEST1|TestCategory=TEST2|TestCategory=TEST3|TestCategory=TEST4
Test Sources Spec: **\*deployment.dll;*database.dll
Only TEST2, TEST3, and TEST4 run.
(UPDATE) The build does not find the tests in Database.dll.

I've tested in TFS 2015.3 with a XAML build, but couldn't reproduce your issue. I'd like to share my steps here for your reference:
I have a solution with some projects, 2 of them being unit test projects (UnitTestProject1, UnitTestProject2).
In the UnitTestProject1 project, I added a TestCategory to two test cases:
[TestMethod()]
[TestCategory("Pro")]
public void M1Test()
{
    //
}

[TestMethod()]
[TestCategory("Dev")]
public void M2Test()
{
    //
}
Similarly, in the UnitTestProject2 project, I added a TestCategory to two test cases:
[TestMethod()]
[TestCategory("Pro1")]
public void M3Test()
{
    //
}

[TestMethod()]
[TestCategory("Dev1")]
public void M4Test()
{
    //
}
Edit "Test Case Filter" and "Test Sources Spec" in build definition like below screenshot and queue a build:
The test result is as expected. Only M1Test and M2Test in UnitTestProject1, M3Test and M4Test in UnitTestProject2 are tested.

Finally, it's solved :)
What I did to solve this problem was to change the Build Process Template.
There is one step in this process called "FindMatchingFiles".
I changed its value as described below (however, from now on, I must use "**\*" in all my filters that use this process template). This change makes sure that the matched assemblies come back with their complete full paths.
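(For illustration: the FindMatchingFiles activity takes a MatchPattern expression, and the fix amounts to making the pattern recursive so matches carry full paths. The expressions below are an assumption for illustration, not the verbatim template values:)

' Before (illustrative): matches assemblies only directly under the binaries folder
MatchPattern = String.Format("{0}\{1}", outputDirectory, testAssemblySpec)
' After (illustrative): the "**\" segment makes the match recursive
MatchPattern = String.Format("{0}\**\{1}", outputDirectory, testAssemblySpec)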
If you have different solutions, please post them here :)
Special thank you to @Cece - MSFT for all the support

Related

Customising Junit5 test output via Gradle

I'm trying to output BDD-style text from my JUnit tests, like the following:
Feature: Adv Name Search
Scenario: Search by name v1
Given I am at the homepage
When I search for name Brad Pitt
And I click the search button2
Then I expect to see results with name 'Brad Pitt'
When running in the IntelliJ IDE, this displays nicely, but when running in Gradle nothing is displayed. I did some research and enabled the showStandardStreams boolean on the test logging, i.e.
in my build.gradle file I've added:
test {
    useJUnitPlatform()
    testLogging {
        showStandardStreams = true
    }
}
This produces ...
> Task :test
Adv Name Search STANDARD_OUT
Feature: Adv Name Search
Tests the advanced name search feature in IMDB
Adv Name Search > Search by name v1 STANDARD_OUT
Scenario: Search by name v1
Given I am at the homepage
When I search for name Brad Pitt
And I click the search button2
Then I expect to see results with name 'Brad Pitt'
... which is pretty close but I don't really want to see the output from gradle (the lines with STANDARD_OUT + extra blank lines).
Adv Name Search STANDARD_OUT
Is there a way to not show the additional Gradle logging in the test section?
Or maybe my tests shouldn't be using System.out.println at all, but rather use proper logging (eg. log4j) + gradle config to display these?
Any help / advice is appreciated.
Update (1)
I've created a minimum reproducible example at https://github.com/bobmarks/stackoverflow-junit5-gradle if anyone wants to quickly clone it and run ./gradlew clean test.
You can replace your test { … } configuration with the following to get what you need:
test {
    useJUnitPlatform()
    systemProperty "file.encoding", "utf-8"
    onOutput { descriptor, event ->
        if (event.destination == TestOutputEvent.Destination.StdOut) {
            // Print the test's output directly, minus Gradle's trailing whitespace.
            logger.lifecycle(event.message.replaceFirst(/\s+$/, ''))
        }
    }
}
See also the docs for onOutput.
FWIW, I had originally posted the following (incomplete) answer, which turned out to be focusing on the wrong approach of configuring the test logging:
I don't believe that this is possible. Let me try to explain why.
Looking at the code which produces the lines that you don’t want to see, it doesn’t seem possible to simply configure this differently:
Here’s the code that runs when something is printed to standard out in a test.
The method it calls next unconditionally adds the test descriptor and event name (→ STANDARD_OUT) which you don’t want to see. There’s no way to switch this off.
So the way standard output is logged probably cannot be changed.
What about using a proper logger in the tests, though? I doubt that this will work either:
Running tests basically means running some testing tool – JUnit 5 in your case – in a separate process.
This tool doesn’t know anything/much about who runs it; and it probably shouldn’t care either. Even if the tool should provide a logger or if you create your own logger and run it as part of the tests, then the logger still has to print its log output somewhere.
The most obvious “somewhere” for the testing tool process is standard out again, in which case we wouldn’t win anything.
Even if there was some interprocess communication between Gradle and the testing tool for exchanging log messages, then you’d still have to find some configuration possibility on the Gradle side which configures how Gradle prints the received log messages to the console. I don’t think such configuration possibility (let alone the IPC for log messages) exists.
One thing that can be done is to set the displayGranularity property in the testLogging options.
From the documentation
"The display granularity of the events to be logged. For example, if set to 0, a method-level event will be displayed as "Test Run > Test Worker x > org.SomeClass > org.someMethod". If set to 2, the same event will be displayed as "org.someClass > org.someMethod".

Tagged hooks are running for every test

My Hooks class looks like this:

@Before("@Firefox")
public void setUpFirefox() {}

@Before("@Chrome")
public void setUpChrome() {}

@After
public void tearDown() {}
When I run the command mvn test -Dcucumber.options="--tags @Chrome", both @Before functions are called.
How can I run a specific @Before method depending on the Maven command?
My Runner class (I have already tried the tags option; it is also not working for me):
@RunWith(Cucumber.class)
@CucumberOptions(
    plugin = {"pretty", "json:target/cucumber-reports/Cucumber.json",
        "junit:target/cucumber-reports/Cucumber.xml",
        "html:target/cucumber-reports"},
    features = {"src/test/features/"},
    glue = {"steps"})
public class RunCukesTest {
}
My feature file:

Feature: Storybook

    @Test @Widgets @Smoke @Chrome @Firefox
    Scenario: Storybook example
        Given The user clicks on "storybook" index on the homepage
        And Storybook HomePage should be displayed
It looks like it's because you've got both tags on that scenario; the Before hooks execute based on the scenario that's running, e.g.:
- The command-line option --tags @Chrome specifies which scenarios to run.
- Based on the selected scenario, Cucumber then executes the Before hooks whose tags are attached to that scenario (Test, Widgets, Smoke, Chrome, Firefox).
If you were to have a Before hook for the tag @Smoke, I would imagine that would also run.
For example:
(It's in Scala)
Before("#test1") { _ =>
println("test1 before actioned")
}
Before("#test2") { _ =>
println("test2 before actioned")
}
With the feature file:
[...]
@test1
Scenario: Test scenario 1
    Given Some precondition occurs

@test2
Scenario: Test scenario 2
    Given Some precondition occurs
When I run either of those tags, I get the output
test1 before actioned or test2 before actioned
However, if both tags are on one scenario, then both lines are printed.
What's being actioned in those setUpChrome and setUpFirefox functions - just setting the driver up? You could create a new system property, such as browser, match on its value, and execute the corresponding setup. Then you could pass in
-Dbrowser=chrome and it would do the setup that way, as in the sketch below.
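A minimal sketch of that idea (assuming Selenium WebDriver and the classic cucumber.api.java hook annotations; the browser property name is purely illustrative):

import cucumber.api.java.After;
import cucumber.api.java.Before;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class Hooks {

    private WebDriver driver;

    @Before
    public void setUp() {
        // Reads -Dbrowser=... from the Maven command line; defaults to chrome.
        String browser = System.getProperty("browser", "chrome");
        if ("firefox".equalsIgnoreCase(browser)) {
            driver = new FirefoxDriver();
        } else {
            driver = new ChromeDriver();
        }
    }

    @After
    public void tearDown() {
        if (driver != null) {
            driver.quit();
        }
    }
}

You would then run, e.g., mvn test -Dcucumber.options="--tags @Chrome" -Dbrowser=chrome.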

Hide test from Visual Studio Test Explorer

Using Fact(Skip = "Manual Only") is not entirely satisfactory because, if I click directly on the test to run it, it's still ignored.
I want it to not appear in Test Explorer but I can still run it by clicking on it. Is this possible?
A nice trick from Jimmy Bogard is to use the fact that Skip is writable and react to something in the environment:
using System.Diagnostics;
using Xunit;

public class RunnableInDebugOnlyAttribute : FactAttribute
{
    public RunnableInDebugOnlyAttribute()
    {
        // Skip the test unless a debugger is attached (i.e. run interactively).
        if (!Debugger.IsAttached)
            Skip = "Only running in interactive mode.";
    }
}
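Usage is then simply (the test class and method names below are hypothetical):

public class ManualOnlyTests
{
    [RunnableInDebugOnly]
    public void LongRunningDiagnostic()
    {
        // Runs normally under "Debug Selected Tests"; skipped in ordinary runs.
    }
}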
(Aside from that, no, xUnit does not have an [Interactive] attribute; the closest thing is [Trait("Interactive", "True")], which you can use with the trait grouping in Test Explorer to filter such tests out.)
Finally, a 'cheat' way is to use TestDriven.Net, which doesn't care whether there is an attribute (along with lots of other facilities).

CodedUI TestCaseFilter

I'm using a CSV file as a DataSource in my CodedUI tests. The file looks like so:
Environment,URL
Live,www.example.com
Stage,stage.example.com
Test,test.example.com
I'd like to be able to setup my TestCaseFilter to selectively run the tests on only one of the environments when running the vstest.console.exe commandline. I can't seem to find any way to do that, i.e. it looks like the TestCaseFilter commandline parameter only supports specific properties. Am I wrong? Is there a way to pass a custom property to TestCaseFilter so that only the tests that pertain to a specific DataRow are executed?
The DataSource in my tests is setup like so:
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "|DataDirectory|\environments.csv", "environments#csv", DataAccessMethod.Sequential)]
And I am referencing the environment in each test like so:
var url = TestContext.DataRow["URL"].ToString();
Thanks for any insight.
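(For reference, /TestCaseFilter only understands test-level properties such as TestCategory, Priority, Name, and FullyQualifiedName, along these lines - assembly name illustrative:)

vstest.console.exe CodedUITests.dll /TestCaseFilter:"TestCategory=Live|Priority=1"

Data rows are iterations of a single test case rather than separate test cases, which is why a per-row property cannot be filtered this way.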
The simple and best way is to add another column next to the Environment column in your test data file. Say the column name is RunStatus; its values should be either 'Yes' or 'No'. The logic is that only rows whose RunStatus is Yes are included in the execution.
Inside the test method, check whether the current row's RunStatus is Yes; if it is, run the actual test logic:
[TestMethod]
public void RunTheTest()
{
    // TestContext is a property on the test class, not a method parameter.
    if (TestContext.DataRow["RunStatus"].ToString() == "Yes")
    {
        TestMethod1();
    }
}
Hope it helps. Good luck !!

How to cleanly separate two instances of the Test task in a Gradle build

Following on from this question.
If I have a build with two instances of the Test task, what is the best (cleanest, least code, most robust) way to completely separate those two tasks so that their outputs don't overlap?
I've tried setting their testResultsDir and testReportsDir properties, but that didn't seem to work as expected. (That is, the output got written to separate directories, but still the two tasks re-ran their respective tests with each run.)
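(For context, a second Test task instance can be declared roughly like this - Gradle 4+ property names, task name and wiring illustrative:)

task integrationTest(type: Test) {
    // Runs the same compiled test classes as the built-in "test" task here;
    // a real build would typically point this at its own source set.
    testClassesDirs = sourceSets.test.output.classesDirs
    classpath = sourceSets.test.runtimeClasspath
}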
Update for the current situation as of Gradle 1.8: the testReportDir and reportsDir properties in dty's answer have been deprecated since Gradle 1.3. Test results are now separated automatically in the "test-results" directory, and to set different destination directories for the HTML reports, call:
tasks.withType(Test) {
    reports.html.destination = file("${reporting.baseDir}/${name}")
}
Yet again, Rene has pointed me in the right direction. Thank you, Rene.
It turns out that this approach does work, but I must have been doing something wrong.
For reference, I added the following to my build after all the Test tasks had been defined:
tasks.withType(Test) {
    testReportDir = new File("${reportsDir}/${testReportDirName}/${name}")
    testResultsDir = new File("${buildDir}/${testResultsDirName}/${name}")
}
This will cause all instances of the Test task to be isolated from each other by having their task name as part of their directory hierarchy.
However, I still feel that this is a bit evil and there must be a cleaner way of achieving this that I haven't yet found!
Ingo Kegel's answer doesn't address the results directory, only the reports directory, which means that a test report for one test type could be built that includes test results from other types as well. This can be addressed by setting the results directory too:
tasks.withType(Test) {
    reports.html.destination = file("${reporting.baseDir}/${name}")
    reports.junitXml.destination = file("${testResultsDir}/${name}")
}
Just an update: the reports.html.destination assignment shown above is deprecated.
This is the "new" way (Gradle 4.x and later):

tasks.withType(Test) {
    reports.html.setDestination file("${reporting.baseDir}/${name}")
}
