Customising JUnit 5 test output via Gradle

I'm trying to output BDD-style text from my JUnit tests, like the following:
Feature: Adv Name Search
Scenario: Search by name v1
Given I am at the homepage
When I search for name Brad Pitt
And I click the search button2
Then I expect to see results with name 'Brad Pitt'
When running in the IntelliJ IDE this displays nicely, but when running via Gradle nothing is displayed. I did some research and enabled the showStandardStreams test-logging flag, i.e.
In my build.gradle file I've added ...
test {
    useJUnitPlatform()
    testLogging {
        showStandardStreams = true
    }
}
This produces ...
> Task :test
Adv Name Search STANDARD_OUT
Feature: Adv Name Search
Tests the advanced name search feature in IMDB
Adv Name Search > Search by name v1 STANDARD_OUT
Scenario: Search by name v1
Given I am at the homepage
When I search for name Brad Pitt
And I click the search button2
Then I expect to see results with name 'Brad Pitt'
... which is pretty close, but I don't really want to see the output from Gradle (the lines with STANDARD_OUT plus the extra blank lines), e.g.
Adv Name Search STANDARD_OUT
Is there a way to not show the additional Gradle logging in the test section?
Or maybe my tests shouldn't be using System.out.println at all, but rather proper logging (e.g. Log4j) plus Gradle config to display these?
Any help / advice is appreciated.
Update (1)
I've created a minimum reproducible example at https://github.com/bobmarks/stackoverflow-junit5-gradle if anyone wants to quickly clone it and run ./gradlew clean test.

You can replace your test { … } configuration with the following to get what you need:
test {
    useJUnitPlatform()
    systemProperty "file.encoding", "utf-8"
    onOutput { descriptor, event ->
        if (event.destination == TestOutputEvent.Destination.StdOut) {
            // Echo the test's stdout without Gradle's "<descriptor> STANDARD_OUT"
            // prefix, stripping the trailing newline from the message.
            logger.lifecycle(event.message.replaceFirst(/\s+$/, ''))
        }
    }
}
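Note that with this configuration you no longer need showStandardStreams = true: the onOutput callback receives the output events directly, and logger.lifecycle() prints them without the descriptor prefix (keeping both would print every line twice).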
See also the docs for onOutput.
FWIW, I had originally posted the following (incomplete) answer, which turned out to focus on the wrong approach of configuring the test logging:
I doubt that this is possible. Let me try to explain why.
Looking at the code which produces the lines that you don’t want to see, it doesn’t seem possible to simply configure this differently:
Here’s the code that runs when something is printed to standard out in a test.
The method it calls next unconditionally adds the test descriptor and event name (→ STANDARD_OUT) which you don’t want to see. There’s no way to switch this off.
So the way standard output is logged can probably not be changed.
What about using a proper logger in the tests, though? I doubt that this will work either:
- Running tests basically means running some testing tool – JUnit 5 in your case – in a separate process.
- This tool doesn't know anything/much about who runs it; and it probably shouldn't care either. Even if the tool provides a logger, or if you create your own logger and run it as part of the tests, the logger still has to print its log output somewhere.
- The most obvious "somewhere" for the testing tool process is standard out again, in which case we wouldn't gain anything.
- Even if there were some interprocess communication between Gradle and the testing tool for exchanging log messages, you'd still have to find some configuration option on the Gradle side which controls how Gradle prints the received log messages to the console. I don't think such a configuration option (let alone the IPC for log messages) exists.

One thing that can be done is to set the displayGranularity property in the testLogging options.
From the documentation:
"The display granularity of the events to be logged. For example, if set to 0, a method-level event will be displayed as "Test Run > Test Worker x > org.SomeClass > org.someMethod". If set to 2, the same event will be displayed as "org.someClass > org.someMethod"."
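For reference, a minimal sketch of how that might look in build.gradle (the granularity value 2 and the chosen events are just illustrative):
test {
    testLogging {
        // 2 drops the "Test Run > Test Worker x >" prefix from logged events
        displayGranularity = 2
        events "passed", "failed", "skipped"
    }
}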

Related

Tagging JMeter Test Cases

I am looking for a way to tag JMeter test cases.
We are using JMeter for functional testing, so we have a lot of test cases, and not every test case is run for every application configuration.
So, based on the configuration, we need to tag our test cases and run the matching test set accordingly from the command line.
(This is something we can do in TestNG and other frameworks, where you tag test cases and provide the tag at run time so that only the test cases with that tag are executed.)
If there is no tagging available, then I feel I will need to create multiple test sets, one per configuration, and run them accordingly.
In most cases the tests overlap between these test sets, which will result in duplication and require quite a lot of maintenance.
Please suggest any solution you can see to this problem.
One solution that I can think of is explained below:
- Declare a property for every test case.
- Process that property to decide whether to execute the test case; use an If Controller to check whether the property was passed or not.
- Control that property via the command line or via the GUI. If you want to run only some of the test cases, pass only their properties.
Practical example is shown below:
The test plan will look like this:
The test case steps sit inside an If Controller, and the If Controller decides whether to run that test case or not, depending on what you pass in the corresponding property.
Property declaration for all your test cases:
I have designed this in such a way that you can execute it via the GUI as well as via the command prompt.
If Controller logic
${__BeanShell("${TC1}"=="ON",)}
Execution via command line
jmeter -n -t <>.jmx -JTC1=ON -JTC3=ON -j sample.log
Here I am running TC1 and TC3; depending on your requirements, you can pass whichever scenarios you need to execute.
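As a side note (not from the original answer): on JMeter 3.1 or newer, the __groovy function is recommended over __BeanShell for performance. Assuming TC1 is a test-plan variable as above, an equivalent If Controller condition would be:
${__groovy(vars.get("TC1") == "ON",)}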

nightwatchjs, run same test on multiple pages

I have written some tests for my homepage, but the tests are very generic, e.g. footer and header checking.
My test structure is like:
const footerCheck = function (browser) {
    browser.url("example.com");
    browser.verify.elementPresent(".footer-top", "Footer-top is present.");
    browser.verify.elementPresent(".footer-middle", "Legal notice bar is present");
    browser.verify.elementPresent(".footer-bottom", "Copyright bar is present");
};

module.exports = {
    "Footer Check": footerCheck
};
Let's say I have 100 pages. I would like to run the footerCheck function on all hundred pages.
The URLs are like example.com/page1, example.com/page2, example.com/page3...
Since all the tests are valid for the other pages, I would like to loop over all the pages with the same test cases, but somehow I could not get my head around it.
How is that possible? Any help would be appreciated.
Thanks
In my personal experience, the best way to do BDD is to add Cucumber, which uses Gherkin syntax. It is clearer and helps a lot to reduce redundant code if you know how to use it well. There is a Nightwatch npm plugin to add Cucumber; once you have added it, you have to create your .feature file like the following:
Feature: Check elements are present
  Scenario Outline:
    Given the user enters on a <page>
    Then .footer-top, .footer-middle and .footer-bottom class should be enabled
    Examples:
      | page           |
      | page.com/page1 |
      | page.com/page2 |
      | page.com/page3 |
With your step definitions in place (where you declare what each step will do), it will automatically run each step for each URL provided in the examples (note the <page> placeholder, which is replaced by the values from the Examples table; the first row is the name of the column).
Take a look at the examples.

Turn off Firefox driver refresh POST warning

I have inherited some Geb tests that are testing logging into a site (and various error cases/validation warnings).
The test runs through some validation failures and then attempts to re-navigate to the same page (just to refresh the page/DOM) and attempt a valid login. When Geb's to() method detects that you are navigating to the page you are already on, it just calls refresh. The problem here is that this attempts to repeat the last POST request, and the driver displays the
"To display this page, Firefox must send information that will repeat any action (such as a search or order confirmation) that was performed earlier"
message. As the test is not expecting this popup, it hangs and the tests time out.
Is there a way to turn off these warnings in the Firefox WebDriver? Or to auto-ignore/accept them via Selenium or Geb?
GEB Version: 0.9.2,
Selenium Version: 2.39.0
(Also tried with the next minor versions up: 0.9.3 & 2.40.0)
Caveats:
I know about the POST/Redirect/GET pattern, but I am not at liberty to change the application code in this case.
The warning message only causes an issue intermittently (maybe 1 in 5 times); I have put this down to speed/race conditions whereby the test completes the next actions before the message appears. I know a possible solution is to update the tests to wait for the message to appear and then accept it, but my question is: is there a global setting that can avoid these being triggered/displayed at all?
That refresh() is there to work around an issue with the IE driver, which ignores calls to driver.get() with the same URL as the current one.
Instead of monkey patching the Browser class (which might bite you somewhere down the line, or might not), I would change the url of your login page class. You might, for example, add an insignificant query string; I think that simply a ? at the end should suffice. The driver.currentUrl == newUrl condition will then evaluate to false and you will not see that popup anymore.
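A minimal sketch of that idea, assuming a hypothetical Geb page class (the LoginPage name and "login" path are illustrative, not from the question):
import geb.Page

class LoginPage extends Page {
    // The trailing "?" makes the calculated URL differ from driver.currentUrl
    // after the failed-login POST, so to() calls driver.get() instead of refresh().
    static url = "login?"
    static at = { title.contains("Login") }
}
Navigating with to(LoginPage) should then always issue a fresh GET rather than re-submitting the POST.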
If I understand your issue properly, this might help. In Groovy you can modify a class on the fly.
We use Spock with Geb, and I placed this in a superclass which all Spock specs inherit from, e.g. QSpec extends GebSpec.
It is the original method, slightly modified, with the original code commented out so you know what has been changed. I use this technique in several places where required to alter Geb's behaviour.
static {
    Browser.metaClass.go = { Map params, String url ->
        def newUrl = calculateUri(url, params)
        // if (driver.currentUrl == newUrl) {
        //     driver.navigate().refresh()
        // } else {
        //     driver.get(newUrl)
        // }
        driver.get(newUrl)
        if (!page) {
            page(Page)
        }
    }
}

NUnit - Custom Property Attribute display in Test Explorer window

I created a custom property attribute to link every system test to its driving requirements, similar to something described in the link below:
NUnit - Multiple properties of the same name? Linking to requirements
I used the code given in the above link:
[Requirements(new string[] { "FR50082", "FR50084" })]
[Test]
public void TestSomething(string a, string b)
{
    // blah, blah, blah
    Assert.AreNotEqual(a, b);
}
which gets displayed in Test Explorer (filtered by traits) as:
Requirements[System.String[]] (1)
TestSomething.....
But this is not what I was expecting. I need every requirement to be displayed individually in the Test Explorer window, even though they are associated with the same test case.
I want it to be displayed as (in Test Explorer):
Requirements[FR50082] (1)
TestSomething.....
Requirements[FR50084] (1)
TestSomething.....
and so on....
So, if I associate n requirements with a test case, the Test Explorer should display the same test case n times, once under each requirement. Please let me know how this could be achieved.
It sounds like you are heading down the BDD (Behavior Driven Development) route. SpecFlow is a good choice in .NET if you don't mind a VS extension.
The big win for you, I think, would be that you can reuse step definitions (what you're calling TestSomething). You can set up different contexts (your requirements, as I'm reading them) and, in the Then step, call your TestSomething to verify all is well.

How to cleanly separate two instances of the Test task in a Gradle build

Following on from this question.
If I have a build with two instances of the Test task, what is the best (cleanest, least code, most robust) way to completely separate those two tasks so that their outputs don't overlap?
I've tried setting their testResultsDir and testReportsDir properties, but that didn't seem to work as expected. (That is, the output got written to separate directories, but the two tasks still re-ran their respective tests with each run.)
Update for the current situation as of Gradle 1.8: the testReportDir and reportsDir properties in dty's answer have been deprecated since Gradle 1.3. Test results are now separated automatically in the "test-results" directory, and to set different destination directories for the HTML reports, call:
tasks.withType(Test) {
    reports.html.destination = file("${reporting.baseDir}/${name}")
}
Yet again, Rene has pointed me in the right direction. Thank you, Rene.
It turns out that this approach does work, but I must have been doing something wrong.
For reference, I added the following to my build after all the Test tasks had been defined:
tasks.withType(Test) {
    testReportDir = new File("${reportsDir}/${testReportDirName}/${name}")
    testResultsDir = new File("${buildDir}/${testResultsDirName}/${name}")
}
This will cause all instances of the Test task to be isolated from each other by having their task name as part of their directory hierarchy.
However, I still feel that this is a bit evil and there must be a cleaner way of achieving this that I haven't yet found!
Ingo Kegel's answer doesn't address the results directory, only the reports directory. This means that a test report for a particular test type could be built that includes more test results than just that type's. This can be addressed by setting the results directory as well:
tasks.withType(Test) {
    reports.html.destination = file("${reporting.baseDir}/${name}")
    reports.junitXml.destination = file("${testResultsDir}/${name}")
}
Just an update: the reports.html.destination way is deprecated.
This is the "new" way (Gradle 4.x and later):
tasks.withType(Test) {
    reports.html.setDestination file("${reporting.baseDir}/${name}")
}
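On Gradle 7.x and later the destination setters are themselves deprecated in favour of the outputLocation properties. A sketch of the equivalent per-task configuration (my assumption about the newer API, not from the original answers):
tasks.withType(Test).configureEach {
    // Give each Test task its own report and results directories, keyed by task name
    reports.html.outputLocation = layout.buildDirectory.dir("reports/tests/${name}")
    reports.junitXml.outputLocation = layout.buildDirectory.dir("test-results/${name}")
}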
