Active Tag in Cucumber Java - maven

Is there any way to identify the currently running Cucumber tag in Cucumber-Java for API testing?
I am using Cucumber-Java, along with TestNG. For grouping and executing in different environments, I am using cucumber tags.
The tags are given at the feature level. Multiple tags have been specified for the same feature, like:
@regression-staging @regression-production
Feature: Add to cart
Scenario:
.
.
.
The same code is used for staging and production, and as the build tool I am using Maven. Since TestNG is used, the test is triggered from a RunTestNGTest class that I wrote. Inside RunTestNGTest, I have @BeforeSuite and @AfterSuite methods.
The test is run using the mvn command:
mvn test -Dcucumber.filter.tags=@regression-staging
or
mvn test -Dcucumber.filter.tags=@regression-production
Is there any way I can get which tag is currently running? (In my case, only one will be active at a time.) I want to log which tag is currently in use, and also want to use the same in the HTML report. I tried scenario.getSourceTagNames(), but that returns all the tags for the scenario, not the currently running one.

You can simply access cucumber.filter.tags from your code like you would access any other system property:
String myCurrentTag = System.getProperty("cucumber.filter.tags");
Now you can parse the value (split by commas, remove the @ symbol, etc.).
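For example, here is a minimal sketch of a hook class built on that idea (the class and method names are mine, and the parsing is deliberately naive since only one tag is active at a time in your case):

import io.cucumber.java.Before;
import io.cucumber.java.Scenario;

public class ActiveTagHook {

    // Reads the tag expression passed on the command line,
    // e.g. mvn test -Dcucumber.filter.tags=@regression-staging
    public static String activeTag() {
        String tagExpression = System.getProperty("cucumber.filter.tags", "");
        // Naive parsing: strip the @ and keep the first tag if several are combined
        return tagExpression.replace("@", "").split("[,\\s]+")[0];
    }

    // Attach the value to every scenario so it also shows up in the HTML report.
    // Scenario.log(...) exists in Cucumber-JVM 5+; older versions use scenario.write(...).
    @Before
    public void logActiveTag(Scenario scenario) {
        scenario.log("Active environment tag: " + activeTag());
    }
}

The same activeTag() helper can also be called once from the @BeforeSuite method in your RunTestNGTest class to log the environment at the start of the run.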

Related

How to check if ModuleController has selected Controller before the test runs?

I have tests (JMX) with Module Controllers, and sometimes they throw an error on startup when there are Module Controllers without a controller selected.
Is there a way to verify that every (enabled) Module Controller has an associated controller?
As of JMeter 5.4.1 it is not possible to "validate" Module Controllers without actually running your tests. The Test Plan Tree is built at runtime from the test elements provided, and controllers like the Module Controller or Include Controller basically modify the Test Plan Tree on the fly, creating one big test plan from reusable fragments or external scripts.
So the options are:
Perform a dry run with 1-2 users/iterations to see whether your test still works, produces a .jtl file without errors, etc.
It's also possible to run JMeter from Java code, so you could come up with a utility which does the test plan checking based on your acceptance criteria (see the sketch below).
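As a rough illustration of that second option, a sketch along these lines could load the .jmx file and list the Module Controllers. It assumes a local JMeter installation for the properties, and the "ModuleController.node_path" property key used to detect a missing target is an assumption about how the controller stores its configuration, so verify it against your JMeter version:

import java.io.File;
import java.util.Collection;

import org.apache.jmeter.control.ModuleController;
import org.apache.jmeter.save.SaveService;
import org.apache.jmeter.testelement.property.JMeterProperty;
import org.apache.jmeter.testelement.property.NullProperty;
import org.apache.jmeter.util.JMeterUtils;
import org.apache.jorphan.collections.HashTree;
import org.apache.jorphan.collections.SearchByClass;

public class ModuleControllerCheck {

    public static void main(String[] args) throws Exception {
        String jmeterHome = args[0]; // e.g. the apache-jmeter-5.4.1 install directory
        String jmxFile = args[1];    // the test plan to check

        // Minimal initialisation required before SaveService can parse a .jmx file
        JMeterUtils.setJMeterHome(jmeterHome);
        JMeterUtils.loadJMeterProperties(jmeterHome + "/bin/jmeter.properties");
        JMeterUtils.initLocale();
        SaveService.loadProperties();

        HashTree testPlanTree = SaveService.loadTree(new File(jmxFile));

        // Collect every Module Controller in the plan
        SearchByClass<ModuleController> search = new SearchByClass<>(ModuleController.class);
        testPlanTree.traverse(search);
        Collection<ModuleController> controllers = search.getSearchResults();

        for (ModuleController controller : controllers) {
            if (!controller.isEnabled()) {
                continue; // only enabled controllers matter
            }
            // Assumption: the target node path is stored under this property key;
            // an absent or empty value would mean "no controller selected".
            JMeterProperty nodePath = controller.getProperty("ModuleController.node_path");
            boolean hasTarget = !(nodePath instanceof NullProperty)
                    && !nodePath.getStringValue().isEmpty();
            System.out.println(controller.getName() + " -> "
                    + (hasTarget ? "target selected" : "NO TARGET SELECTED"));
        }
    }
}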

Is there a way to publish Protractor test results in Confluence and have an overview?

We run our Protractor regression tests in GitLab CI and we have Jasmine HTML reports. Right now it is only the QA team that monitors them and checks failures, if any.
But we would like to make it more visible. The devs have also asked us if we can make the results visible in a single place instead of having to go to the GitLab job and browse for artifacts. Also, would it be possible to have an overview of pass/fail tests over time?
I'm not sure how and where to start. Any pointers would be appreciated.
You're looking for the expose_as keyword for artifacts. The full docs are here: https://docs.gitlab.com/ee/ci/yaml/#artifactsexpose_as.
If you use expose_as with your artifacts, GitLab CI will link them to any applicable Merge Request with the name you give in this field.
For example (from the docs):
test:
  script: ["echo 'test' > file.txt"]
  artifacts:
    expose_as: 'artifact 1'
    paths: ['file.txt']
In this example, a Merge Request for this pipeline will have a link called "artifact 1" that opens the file "file.txt".
This also works for directories, but if there's more than one file it will open in the job's artifacts browser (like you currently do).
There are some caveats, like:
If you use a variable in the artifacts path field, expose_as won't work
Max of 10 artifacts can be exposed
Glob patterns won't work
If Gitlab Pages is enabled, some file extensions will be automatically rendered using Pages (html, xml, txt, etc.).

Jasmine - How to get only specs within current suite?

I have multiple suites (describes) inside a spec.js file. I would like to implement jasmine-fail-fast and fail only one suite (a single describe block) at a time, then proceed with the rest of the spec.js file. The available plugin does not support this feature, as it gets all the specs inside the spec.js file at once.
Is there any workaround for this? TIA

Why do we need `afterEach(cleanup);`?

This is a question about unit testing (Jest + @testing-library/react).
Hi. I started using @nrwl/react recently.
It is an amazing product and I'm excited about monorepo projects with Nx.
Btw, there is afterEach(cleanup); in the generated template test file.
This is my sample project:
https://github.com/tyankatsu0105/reproducibility-react-test-nx/blob/master/apps/client/src/app/app.spec.tsx#L7
However, react-testing-library doesn't need cleanup when using Jest:
https://testing-library.com/docs/react-testing-library/api#cleanup
Please note that this is done automatically if the testing framework you're using supports the afterEach global (like mocha, Jest, and Jasmine). If not, you will need to do manual cleanups after each test.
In fact, I see an error when I remove afterEach(cleanup); from test files:
Found multiple elements with the text:
thanks!

Extracting test outcomes in Serenity BDD

Went through the Serenity documentation for extracting the test outcomes.
Below is the code; it didn't work:
OutcomeFormat format = OutcomeFormat.XML;
TestOutcomes outcomes = TestOutcomeLoader.loadTestOutcomes().inFormat(format)
Tried with the below code and it's working:
OutcomeFormat format = OutcomeFormat.JSON;
TestOutcomeLoaderBuilder outcomes = TestOutcomeLoader.loadTestOutcomes().inFormat(format);
TestOutcomes out = outcomes.from(new File(""));
The issue is I need the test outcomes in @AfterScenario, but Serenity reports only get generated after the entire execution; I tried changing the pom but it didn't help. Is there any other way we can extract the test results?
Serenity uses JSON format by default now. Why are you trying to obtain the test outcomes? (i.e. what problem are you trying to solve?)
Created a separate Java class for report extraction and added it to the Maven build so that it gets executed after the Serenity report is generated.
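As a rough sketch of such a post-processing class, reusing the TestOutcomeLoader API shown in the question (the output directory is a placeholder for wherever your Serenity JSON outcomes land, and the imports assume the Serenity 2.x net.thucydides packages):

import java.io.File;
import java.io.IOException;

import net.thucydides.core.model.TestOutcome;
import net.thucydides.core.reports.OutcomeFormat;
import net.thucydides.core.reports.TestOutcomeLoader;
import net.thucydides.core.reports.TestOutcomes;

public class SerenityOutcomeExtractor {

    public static void main(String[] args) throws IOException {
        // Placeholder: the default Serenity output directory; adjust to your build
        File outcomeDirectory = new File("target/site/serenity");

        TestOutcomes outcomes = TestOutcomeLoader.loadTestOutcomes()
                .inFormat(OutcomeFormat.JSON)
                .from(outcomeDirectory);

        // Print a one-line summary per test, e.g. to feed another report or a log
        for (TestOutcome outcome : outcomes.getOutcomes()) {
            System.out.println(outcome.getTitle() + " -> " + outcome.getResult());
        }
    }
}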
As @John Smart has mentioned, JSON and HTML are the default output formats.
Still, if you want to access the outcomes after test execution, you can create a custom listener and listen to the Serenity event bus.
A TestRunFinished event will be published with the outcome as a parameter.
You can use the outcome to get the required details.
For creating a custom listener you can follow this page.
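If your Serenity suite runs through Cucumber, one alternative (not the Serenity event-bus API referred to above, just a plain Cucumber plugin) is to react to Cucumber's own TestRunFinished event and load the JSON outcomes at that point. A hedged sketch, assuming Cucumber-JVM 5+ and a hypothetical com.example package:

import io.cucumber.plugin.ConcurrentEventListener;
import io.cucumber.plugin.event.EventPublisher;
import io.cucumber.plugin.event.TestRunFinished;

// Register via -Dcucumber.plugin=com.example.OutcomeExtractingPlugin
// or plugin = {"com.example.OutcomeExtractingPlugin"} in the runner options.
public class OutcomeExtractingPlugin implements ConcurrentEventListener {

    @Override
    public void setEventPublisher(EventPublisher publisher) {
        publisher.registerHandlerFor(TestRunFinished.class, this::onTestRunFinished);
    }

    private void onTestRunFinished(TestRunFinished event) {
        // All scenarios have run by now; the Serenity JSON outcome files written
        // during the run can be loaded here (see the extractor sketch above).
        System.out.println("Test run finished, extracting Serenity outcomes...");
    }
}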
