How to run parameterised xUnit test cases via xunit.console?

I need to run my xUnit test cases from the command line. I have some test cases like the ones below:
[Theory]
[InlineData(2)]
[InlineData(4)]
[InlineData(6)]
public void GivenNumberMustBeAnEvenNumber(int val)
{
    Assert.Equal(0, val % 2); // Assert.Equal takes (expected, actual)
}
How can I run the above test case with only 4 as the inline data? I passed -method "MyClass.GivenNumberMustBeAnEvenNumber(4)" to xunit.console, but it didn't run. I even tried -method "MyClass.GivenNumberMustBeAnEvenNumber(val: 4)", with no success.
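For what it's worth, the v2 console runner's -method switch appears to match only the fully qualified method name (namespace included) and has no syntax for selecting a single InlineData row, so the closest invocation I have found runs every row of the theory (assuming the test assembly is MyTests.dll and the class lives in a namespace called MyNamespace):

xunit.console MyTests.dll -method "MyNamespace.MyClass.GivenNumberMustBeAnEvenNumber"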
I also have some test cases that take strings as parameters, as below:

[Theory]
[InlineData("abc")]
[InlineData("xyz")]
public void GivenStringLengthIsAlwaysThree(string val)
{
    Assert.Equal(3, val.Length);
}

How can I run the above test case with only "xyz" as the inline data?
Please help.

Related

How to get unit test results in the browser with Laravel

I would like to see the results of individual unit tests in the browser. I would like to do something like this:

public function test() {
    $test = new \Tests\Unit\ExampleTest();
    dd($test->testBasicTest());
}

That just returns null. I also tried exec('vendor/bin/phpunit'); but I would like to stay away from exec() (and, for what it's worth, that snippet creates an endless loop for some reason).
What if you try doing an echo of your dd?

public function test() {
    $test = new \Tests\Unit\ExampleTest();
    echo(dd($test->testBasicTest()));
}

testng-failed.xml does not get refreshed before running, and runs older failed test cases

Questions about testng-failed.xml have been asked many times already, but my problem is a little different. I want to run all the failed test cases together, so what I did is pass testng-failed.xml in my pom.
The problem I am facing is that first my testng.xml runs, then testng-failed.xml, and then testng-failed.xml gets overwritten. Because of this, on a second, fresh run of my test cases, testng.xml runs first; testng-failed.xml still holds the previously failed test cases, so it reruns those, and only afterwards is testng-failed.xml updated with the cases that failed this time.
I don't know which listener to add so that whenever I run, testng.xml runs first, then testng-failed.xml is overwritten, and only then does testng-failed.xml run.
I am using Maven, Selenium and TestNG.
I just entered testng-failed.xml in my pom as shown below. Please let me know which listener to use:
<suiteXmlFiles>
    <suiteXmlFile>src/resources/testng/testng.xml</suiteXmlFile>
    <suiteXmlFile>test-output/testng-failed.xml</suiteXmlFile>
</suiteXmlFiles>
Create a class 'RetryListener' by implementing 'IAnnotationTransformer':

import java.lang.reflect.Constructor;
import java.lang.reflect.Method;

import org.testng.IAnnotationTransformer;
import org.testng.IRetryAnalyzer;
import org.testng.annotations.ITestAnnotation;

public class RetryListener implements IAnnotationTransformer {
    @Override
    public void transform(ITestAnnotation testannotation, Class testClass,
            Constructor testConstructor, Method testMethod) {
        // Attach the Retry analyzer to every test that does not already have one
        IRetryAnalyzer retry = testannotation.getRetryAnalyzer();
        if (retry == null) {
            testannotation.setRetryAnalyzer(Retry.class);
        }
    }
}
Now create another class:

import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

public class Retry implements IRetryAnalyzer {
    private int retryCount = 0;
    private int maxRetryCount = 1;

    // Returns 'true' if the test method has to be retried, else 'false';
    // it takes the result of the test method that just ran as its parameter.
    @Override
    public boolean retry(ITestResult result) {
        if (retryCount < maxRetryCount) {
            System.out.println("Retrying test " + result.getName() + " with status "
                    + getResultStatusName(result.getStatus()) + " for the " + (retryCount + 1) + " time(s).");
            retryCount++;
            return true;
        }
        return false;
    }

    // Maps ITestResult status codes (SUCCESS == 1, FAILURE == 2, SKIP == 3) to names
    public String getResultStatusName(int status) {
        String resultName = null;
        if (status == 1)
            resultName = "SUCCESS";
        if (status == 2)
            resultName = "FAILURE";
        if (status == 3)
            resultName = "SKIP";
        return resultName;
    }
}
Now add the lines below to your TestNG XML file:

<listeners>
    <listener class-name="com.pack.test.RetryListener"/>
</listeners>

and do not pass the XML file in pom.xml.
Hope it works.
Thanks
Why are you running the testng.xml and the failed-tests XML in the same TestNG task? You should have two separate build tasks: a first one that runs testng.xml and generates the failed-tests XML, and then another task that runs the failed-tests XML. It will work; a sketch follows below.
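A rough sketch of that two-pass idea using two maven-surefire-plugin executions (untested; the execution ids and the testFailureIgnore setting are my assumptions, and note that testng-failed.xml is only written when something actually fails, so the all-green case needs a guard):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <executions>
        <!-- First pass: run the main suite; ignore failures so the build
             continues on to the rerun execution -->
        <execution>
            <id>default-test</id>
            <configuration>
                <suiteXmlFiles>
                    <suiteXmlFile>src/resources/testng/testng.xml</suiteXmlFile>
                </suiteXmlFiles>
                <testFailureIgnore>true</testFailureIgnore>
            </configuration>
        </execution>
        <!-- Second pass: rerun only what the first pass just wrote to testng-failed.xml -->
        <execution>
            <id>rerun-failed</id>
            <phase>test</phase>
            <goals>
                <goal>test</goal>
            </goals>
            <configuration>
                <suiteXmlFiles>
                    <suiteXmlFile>test-output/testng-failed.xml</suiteXmlFile>
                </suiteXmlFiles>
            </configuration>
        </execution>
    </executions>
</plugin>

Because both executions run inside the test phase of the same build, the second one picks up the testng-failed.xml freshly written by the first, which avoids the staleness described in the question.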
I implemented one run, then rerunning only the newly failed tests three times:
mvn $par1=$pSuiteXmlFile test > $test1log
mvn $par1=$failedRelPath test > $failed1log
mvn $par1=$failedRelPath test > $failed2log
mvn $par1=$failedRelPath test > $failed3log
It works, but only with a small test-case count. I have a suite with 300 tests in it, and somehow testng-failed.xml is not created by Surefire/TestNG after the main (first) run. When the suite is smaller, testng-failed.xml is created as required.

Test missing when using AutoFixture with NSubstitute auto data attribute

A test class with the following test is discovered as expected:
[Theory]
[AutoData]
public void MyDiscoveredTest() { }
However, the following test is missing:
[Theory]
[AutoNSubstituteData]
public void MyMissingTest() { }
Interestingly, if I put MyDiscoveredTest after MyMissingTest, then MyDiscoveredTest is also now missing. I have tried both the xUnit Visual Studio runner and the xUnit console runner, with the same results.
My AutoNSubstituteData attribute is defined here:
internal class AutoNSubstituteDataAttribute : AutoDataAttribute
{
    internal AutoNSubstituteDataAttribute()
        : base(new Fixture().Customize(new AutoNSubstituteCustomization()))
    {
    }
}
A related question: since the AutoNSubstituteDataAttribute above seems like a fairly common attribute, I'm wondering why it's not included with AutoFixture.AutoNSubstitute. Similarly useful would be an InlineAutoNSubstituteDataAttribute. Should I submit a pull request for these?
Nuget package versions used:
AutoFixture 3.30.8
AutoFixture.Xunit2 3.30.8
AutoFixture.AutoNSubstitute 3.30.8
xunit 2.0.0
xunit.runner.visualstudio 2.0.0
xunit.runner.console 2.0.0
NSubstitute 1.8.2.0
I am using Visual Studio 2013 Update 4 and targeting the .NET 4.5.1 Framework
Update: As recommended, I tried TestDriven.NET-3.9.2897 Beta 2. The missing test now runs; however, there still seems to be some bug. New example:
[Theory]
[AutoData]
public void MyWorkingTest(string s)
{
    Assert.NotNull(s); // Pass
}

[Theory]
[AutoNSubstituteData]
public void MyBrokenTest(string s)
{
    Assert.NotNull(s); // Fail
}

[Theory]
[AutoData]
public void MyWorkingTestThatIsNowBroken(string s)
{
    Assert.NotNull(s); // Fail even though identical to MyWorkingTest above!
}
Both MyBrokenTest and MyWorkingTestThatIsNowBroken fail at Assert.NotNull, while MyWorkingTest passes even though it is identical to MyWorkingTestThatIsNowBroken. So not only does the AutoNSubstituteData attribute not work properly, but it is causing the downstream test to misbehave!
Update 2: Changing the definition of AutoNSubstituteDataAttribute to public instead of internal fixes everything. The xUnit runner now discovers and passes all the tests, as does TestDriven.Net. Any idea about this behavior? Is it expected?
Both the xUnit Visual Studio runner and the TestDriven.Net runner are causing these weird issues because the AutoNSubstituteDataAttribute class and its constructor are internal. Changing these to public resolves all the issues. If the attribute were simply being ignored, I would expect an error like this: System.InvalidOperationException : No data found for ...
This doesn't explain why the downstream tests are affected by the offending AutoNSubstituteData attribute from a totally different test. It seems like the unit test runners should be more robust in this case.
For completeness, here is the working implementation of AutoNSubstituteDataAttribute:

public class AutoNSubstituteDataAttribute : AutoDataAttribute
{
    public AutoNSubstituteDataAttribute()
        : base(new Fixture().Customize(new AutoNSubstituteCustomization()))
    {
    }
}

NUnit v3 alpha: trying to get parallel tests to work

I'm trying to get parallel tests to work in NUnit v3; however, the tests don't seem to run in parallel.
Considering the following test class:
namespace NUnitAlpha3Experimental
{
    [TestFixture]
    [Parallelizable(ParallelScope.Children)]
    class DummyTests
    {
        [Test]
        public void MustSuccess()
        {
            Assert.IsTrue(true);
            FileIO.appendToFile("output.txt", Reflexion.GetCurrentMethodName());
        }

        [Test]
        public void MustFail()
        {
            Thread.Sleep(500);
            FileIO.appendToFile("output.txt", Reflexion.GetCurrentMethodName());
            Assert.IsFalse(true);
        }
    }
}
Whenever I run my tests, "MustFail" is always output before "MustSuccess". "MustSuccess" should be output first if the tests were run in parallel (it writes immediately, while "MustFail" sleeps for 500 ms first). Maybe there's something wrong with my attributes; I don't know.
Please help. Thank you.
edit: I added -workers=8 to my command line:

[...] \NUnit3\nunit-console NUnitAlpha3Experimental.exe /framework:net-4.5 -workers=8

but still, my tests don't seem to run in parallel.
More info here: https://groups.google.com/forum/#!topic/nunit-discuss/_Zcd3EjiJGo
From the author of NUnit, parallel test cases are not yet implemented: https://groups.google.com/forum/#!topic/nunit-discuss/_Zcd3EjiJGo
Parallel execution of fixtures is implemented, though, so at this stage only separate [TestFixture] classes can run in parallel, not the individual test cases inside one fixture.

TestNG: specify different users

I am running our automated tests using TestNG. The reason we picked TestNG is that we can pass variable inputs into the test methods, for example public void testXX(String userId), where the userId can change for each test.
The code below shows three different userIds I can use to execute my tests, so my exact same test will run three times, once for each of the three different users. This feature is awesome and really enables me to have multiple tests under different scenarios, because each of our users carries a different profile.
// All valid Pricing Leads
@DataProvider(name = "userIds")
public Object[][] createPricingLeadUsersParameters() {
    return new Object[][] {
        { "TestUser001" },
        { "TestUser002" },
        { "TestUser003" }
    };
}

@Test(dataProvider = "userIds")
public void createGroup(String userIds) {
    ............
}
The problem I am having right now is that under certain conditions only one userId can be used, or else all of my tests will fail. I would like to keep the exact same test but pass in only one userId, not the three shown above. Is there a way to configure this variable on the TestNG command line, so that at times I would use the three defined, but under another condition only one of the three, or a new userId?
Sure, there are plenty of ways to do this. How about passing a system property when you run TestNG?
java -Dfoo=bar org.testng.TestNG...
and then your data provider can test the value of foo with System.getProperty() and adjust what it returns accordingly, along the lines of the sketch below.
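A minimal sketch of that idea against the data provider above (the wrapper class name, the property name userId and the fallback list are illustrative, not part of the original answer):

import org.testng.annotations.DataProvider;

public class PricingLeadData {

    @DataProvider(name = "userIds")
    public Object[][] createPricingLeadUsersParameters() {
        // e.g. java -DuserId=TestUser002 org.testng.TestNG ... runs only that user;
        // without the property, the full default list is used.
        String single = System.getProperty("userId");
        if (single != null) {
            return new Object[][] { { single } };
        }
        return new Object[][] {
            { "TestUser001" },
            { "TestUser002" },
            { "TestUser003" }
        };
    }
}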
