Cucumber reporting hide Before and After hooks - maven

In version 2.0.0 of the maven-cucumber-reporting plugin, is it possible to hide @Before and @After hooks from the report?
For example, in the HTML report I see:
Before BusinessStepDefs.prepare()
Scenario: Business scenario
Given business specific input
When performing action
Then result is success
After HelperStepDefs.cleanUp()
I am using version 2.0.0; an older version did not show those lines. Any ideas?
I would like it to appear as:
Scenario: Business scenario
Given business specific input
When performing action
Then result is success

See this issue for a discussion and possible resolution.
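Until the plugin offers a dedicated option, one workaround sometimes used (not taken from the linked issue) is to strip the hook entries out of the Cucumber JSON before the report plugin runs. A minimal sketch, assuming Jackson is available and the standard Cucumber JSON layout in which each scenario element carries "before" and "after" arrays; the file path is a placeholder:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;
import java.io.File;

public class HookStripper {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        File report = new File("target/cucumber.json"); // placeholder path to the Cucumber JSON output
        JsonNode features = mapper.readTree(report);
        for (JsonNode feature : features) {
            for (JsonNode element : feature.path("elements")) {
                // Drop the hook entries so they no longer show up in the generated HTML report
                ((ObjectNode) element).remove("before");
                ((ObjectNode) element).remove("after");
            }
        }
        mapper.writerWithDefaultPrettyPrinter().writeValue(report, features);
    }
}

Run this step between test execution and report generation so the plugin only ever sees the stripped JSON.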

Why do we need `afterEach(cleanup);`?

This is a question about unit tests (Jest + @testing-library/react).
Hi. I recently started using @nrwl/react.
It is an amazing product and I'm excited to work on a monorepo project with Nx.
By the way, there is afterEach(cleanup); in the generated template test file.
This is my sample project.
https://github.com/tyankatsu0105/reproducibility-react-test-nx/blob/master/apps/client/src/app/app.spec.tsx#L7
However, react-testing-library doesn't need cleanup when using Jest:
https://testing-library.com/docs/react-testing-library/api#cleanup
Please note that this is done automatically if the testing framework you're using supports the afterEach global (like mocha, Jest, and Jasmine). If not, you will need to do manual cleanups after each test.
In fact, I see an error when I remove afterEach(cleanup); from the test files:
Found multiple elements with the text:
thanks!

OpenTest Reporting Library

I am currently seeking information about the available reporting capabilities of OpenTest. I need information regarding the following:
Portability of reports/logging - can these results be published in various formats?
Granularity of reports/logging - is there a way to get very detailed reporting, and/or strategies to ensure that enough information is logged to allow debugging of automated tests and the System Under Test (SUT)?
Screenshots - is there current functionality to allow screenshots to be taken and published/posted to external systems?
Portability of reports/logging
You can obtain the test session results by using the API, either in JSON format (which contains a lot of detail) or the JUnit XML format:
http://localhost:3000/api/session/<SESSION_ID>?format=json
http://localhost:3000/api/session/<SESSION_ID>?format=junit
The detailed log of the test session can be retrieved in JSON or human-readable format:
http://localhost:3000/api/session/<SESSION_ID>/log?format=json
http://localhost:3000/api/session/<SESSION_ID>/log?format=pretty
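Any of these endpoints can be called from your own tooling. For example, here is a minimal Java sketch using the JDK 11 HttpClient to pull the JSON results of a session; the host, port and session ID are placeholders based on the URLs above:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FetchSessionResults {
    public static void main(String[] args) throws Exception {
        String sessionId = "SID1554380072"; // placeholder session ID
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:3000/api/session/" + sessionId + "?format=json"))
                .GET()
                .build();
        // The response body is the detailed test session result in JSON format
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}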
Granularity of reports/logging
The test results in JSON format will tell you everything you need to know about the pass/fail status of each test and each individual test action within a test, the arguments that were used for test actions, the name of the screenshot captured for each test action, execution times and lots of other useful information.
When you want to troubleshoot a failed test, most of the time you'll need the detailed log information, which can be retrieved using the APIs I mentioned above. Aside from the log information generated by OpenTest itself, you can always log additional information that is specific to your application or test scenario using the $log JavaScript API.
Screenshots
Screenshots are automatically captured for Web and UI tests, whenever a test action fails. If you need to capture additional screenshots during your test, you can do so using the TakeScreenshot keyword for either Web testing or mobile testing. You can also capture a screenshot after any test action by using the $screenshot global test action argument:
- description: Click product 1 and capture a screenshot
  action: org.getopentest.selenium.Click
  args:
    locator: { id: product1 }
    $screenshot: true
You can download screenshots using this API:
https://localhost:3000/api/screenshot/SID1554380072_WEB_T05_SG01_ST01_after_03.png
SID1554380072_WEB_T05_SG01_ST01_after_03.png is the name of the screenshot file, which you can find in the test execution results in JSON format.
Integrating with custom reporting solutions
At some point you will need to integrate with a dedicated reporting product that can give you all the nice features that OpenTest is not able to provide out of the box. This is possible using the APIs I described. In order to notify interested parties of the current status of test sessions, OpenTest also provides a WebSocket API. You can use that to be notified when a test session has finished, and then extract all the information you need via the APIs. You can find a Java project that does all that here: https://github.com/adrianth/opentest-monitor. This project is intended as a starting point for your own custom integration.

Extracting test outcomes in Serenity BDD

I went through the Serenity documentation for extracting the test outcomes.
Below is the code; it didn't work:
OutcomeFormat format = OutcomeFormat.XML;
TestOutcomes outcomes = TestOutcomeLoader.loadTestOutcomes().inFormat(format)
I tried the code below and it works:
OutcomeFormat format = OutcomeFormat.JSON;
TestOutcomeLoaderBuilder outcomes = TestOutcomeLoader.loadTestOutcomes().inFormat(format);
TestOutcomes out = outcomes.from(new File(""));
The issue is that I need the test outcomes in @AfterScenario, but Serenity reports only get generated after the entire execution; I tried changing the pom but it didn't help. Is there any other way we can extract the test results?
Serenity uses JSON format by default now. Why are you trying to obtain the test outcomes? (i.e. what problem are you trying to solve?)
I created a separate Java class for extracting the report and wired it into the Maven build so that it gets executed after the Serenity report is generated.
As @John Smart has mentioned, JSON and HTML are the default output formats.
Still, if you want to access the outcomes after test execution, you can create a custom listener and listen to the Serenity event bus.
A TestRunFinished event will be published with the outcome as a parameter.
You can use the outcome to get the required details.
For creating a custom listener you can follow this page.
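If you instead go with the approach of a separate class that runs after the reports have been written (as described above), a minimal sketch could look like this. It assumes the JSON outcomes are in target/site/serenity, Serenity's default report directory; the getOutcomes()/getTitle()/getResult() accessors and the net.thucydides import packages are assumptions to verify against your Serenity version:

import java.io.File;
import net.thucydides.core.reports.OutcomeFormat;
import net.thucydides.core.reports.TestOutcomeLoader;
import net.thucydides.core.reports.TestOutcomes;

public class OutcomeExtractor {
    public static void main(String[] args) throws Exception {
        // target/site/serenity is Serenity's default report output directory
        File reportDir = new File("target/site/serenity");
        TestOutcomes outcomes = TestOutcomeLoader.loadTestOutcomes()
                .inFormat(OutcomeFormat.JSON)
                .from(reportDir);
        // Accessor names below are assumptions; check the TestOutcomes Javadoc for your version
        outcomes.getOutcomes().forEach(outcome ->
                System.out.println(outcome.getTitle() + " -> " + outcome.getResult()));
    }
}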

Common asserts in any automation project

Can anyone briefly explain which common asserts to consider in any automation project, whether it is an in-house or a public web application? Presently I am using Selenium (Java) to automate an eCommerce web application. As this is my first website to automate, I am running out of ideas for what to verify, except the few I know, mentioned below:
1. Verify each page title
2. Verify a button, text, link, image, custom text, etc.
Apart from these, is there anything else I can verify? Please feel free to correct my question, and if you have worked on various automation projects, in which areas did you add asserts to verify or validate something on a webpage?
Basically, you do automation to decrease the execution time of regression cycles by automating the test cases related to the functionality of the application. So, first develop test cases using test design techniques like ECP, BVA, etc.
Each test case must have an assertion, i.e. an expected result (otherwise it wouldn't be called a test case).
This assertion can be anything, like:
Whether login is successful after giving valid credentials
Whether an error message is shown after entering wrong credentials, etc.
Selenium helps us automate web interactions (navigation, clicks, entering text, etc.) and doesn't perform any assertions for you.
Assertions are provided by frameworks like JUnit and TestNG (in Java) through their assertion classes. There is also built-in support in programming languages, like the assert keyword in Python and Java (http://docs.oracle.com/javase/7/docs/technotes/guides/language/assert.html).
So, what you mentioned in your question as common assertions (verify each page title, etc.) are just web interactions; by themselves they don't decide whether a test is PASS or FAIL. It is you who defines the criteria for whether a test is PASS/FAIL.
For example, there is a test case related to successful login.
Here, you automate web interactions like navigating to the login page, entering credentials, and clicking the Submit button.
Then, to validate whether you successfully logged in or not, you look for a web element on the logged-in user's home page (like "welcome user"), just as you would manually. In automation, you try to find the text "welcome user" through a web element, and then you use the assertions provided by the framework to assert whether the expected message is present on the webpage, like:
Assertions.assertEquals(expected_message, actual_message); // just an example
If expected_message and actual_message are the same, the method doesn't throw any exception, and the framework marks the test case as PASS.
If expected_message and actual_message are NOT the same, an AssertionError is thrown by assertEquals, and the framework marks the test case as FAIL.
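Putting that login example together with Selenium and JUnit 5, a minimal sketch might look like this; the URL, locators, credentials and expected text are made-up placeholders:

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginTest {
    @Test
    public void successfulLoginShowsWelcomeMessage() {
        WebDriver driver = new ChromeDriver();
        try {
            // Web interactions: navigate, enter credentials, submit
            driver.get("https://example-shop.test/login");                // placeholder URL
            driver.findElement(By.id("username")).sendKeys("demo-user");  // placeholder locator and credentials
            driver.findElement(By.id("password")).sendKeys("demo-pass");
            driver.findElement(By.id("submit")).click();

            // Assertion: this is what decides PASS or FAIL
            String expectedMessage = "welcome user";
            String actualMessage = driver.findElement(By.id("welcome")).getText(); // placeholder locator
            Assertions.assertEquals(expectedMessage, actualMessage);
        } finally {
            driver.quit();
        }
    }
}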

GWT: Automating UI tests with Selenium/FluentLenium

I have such a big problem and I really need your help.
Basically, I'm working on a project whose core technology is GWT, and I have to write functional tests and UI tests. I also have to use Cucumber, a BDD framework.
Now I come to the main problem: at every Maven build, GWT automatically generates the IDs of the widgets, so Selenium cannot find these widgets because their IDs keep changing. Moreover, I can't find some widgets with the usual methods (findByName/xPath/cssSelector, etc.). I'm now working with FluentLenium, which is a wrapper around Selenium. I don't know how to fix this problem because I have no control over how GWT generates the IDs behind the scenes.
Has anyone met the same problem before?
Thanks a lot.
I've worked with GWT/Selenium/Cucumber. We had a single class file with public static String fields for each ID used in the whole application. These IDs were set with ensureDebugId. The same class file is then used in the Selenium/Cucumber tests to find the widgets by ID. I don't know if this works for you, but in our case the tester was in control of the IDs.
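A minimal sketch of that approach, with placeholder names; it assumes GWT debug IDs are enabled (the com.google.gwt.user.Debug module), so that ensureDebugId writes the DOM id with the "gwt-debug-" prefix. On the application side each widget would call something like loginButton.ensureDebugId(WidgetIds.LOGIN_BUTTON); the test side then resolves the same constant:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

// Shared between the GWT application and the Selenium/Cucumber tests
final class WidgetIds {
    static final String LOGIN_BUTTON = "loginButton"; // placeholder id
    private WidgetIds() {}
}

public class GwtWidgetFinder {
    // ensureDebugId renders the id as "gwt-debug-" + name
    // (requires <inherits name="com.google.gwt.user.Debug"/> in the GWT module XML)
    private static final String DEBUG_ID_PREFIX = "gwt-debug-";

    public static WebElement byDebugId(WebDriver driver, String debugId) {
        return driver.findElement(By.id(DEBUG_ID_PREFIX + debugId));
    }
}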

Resources