I have multiple suites (describe blocks) inside a spec.js file. I would like to use jasmine-fail-fast to fail only one suite (a single describe block) at a time and proceed with the rest of the spec.js file. The available plugin does not support this, as it bails on all specs in the spec.js file at once.
Is there any workaround for this? TIA
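For what it's worth, a rough workaround sketch (my own idea, not a feature of jasmine-fail-fast; it assumes Jasmine 2+ and top-level, non-nested describe blocks): a reporter records a failure in the current suite, a global beforeEach marks the suite's remaining specs pending, and the next suite resets the flag.

// Sketch: skip the rest of a describe block after its first failure,
// but let the following describe blocks run normally.
let currentSuiteFailed = false;

jasmine.getEnv().addReporter({
  suiteStarted: function () { currentSuiteFailed = false; },
  specDone: function (result) {
    if (result.status === 'failed') { currentSuiteFailed = true; }
  }
});

beforeEach(function () {
  if (currentSuiteFailed) {
    pending('an earlier spec in this suite failed');
  }
});

Nested describes would need extra bookkeeping, since suiteStarted also fires for inner suites.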
Is there any way to identify the currently running cucumber tag in Cucumber-Java for API testing?
I am using Cucumber-Java along with TestNG. For grouping and executing tests in different environments, I am using cucumber tags.
The tags are given at the feature level, and multiple tags are specified for the same feature, like:
@regression-staging @regression-production
Feature: Add to cart
Scenario:
.
.
.
The same code is used for staging and production, and Maven is the build tool. Since TestNG is used, the test is triggered from a RunTestNGTest class that I wrote. Inside RunTestNGTest, I have @BeforeSuite and @AfterSuite methods, roughly as sketched below.
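A rough sketch of such a runner, for context (package names and options are illustrative, following the standard cucumber-testng setup):

import io.cucumber.testng.AbstractTestNGCucumberTests;
import io.cucumber.testng.CucumberOptions;
import org.testng.annotations.AfterSuite;
import org.testng.annotations.BeforeSuite;

// Runs the tagged features through TestNG; tags are passed on the command line.
@CucumberOptions(features = "src/test/resources/features", glue = "steps")
public class RunTestNGTest extends AbstractTestNGCucumberTests {
    @BeforeSuite
    public void setUpSuite() { /* environment setup */ }

    @AfterSuite
    public void tearDownSuite() { /* teardown, reporting */ }
}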
The test is run using the mvn command:
mvn test -Dcucumber.filter.tags="@regression-staging"
or
mvn test -Dcucumber.filter.tags="@regression-production"
Is there any way I can get which tag is currently running? (In my case, only one will be active at a time.) I want to log which tag is currently in use, and also use it in the HTML report. I tried scenario.getSourceTagNames(), but that returns all the tags on the scenario, not the currently active one.
You can simply access cucumber.filter.tags from your code like you would access any other system property:
String myCurrentTag = System.getProperty("cucumber.filter.tags");
Now you can parse the value (split on commas, strip the @ prefix, etc.).
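For example, a minimal sketch (assuming the property holds a single tag or a comma-separated list; the logging is illustrative):

String raw = System.getProperty("cucumber.filter.tags", "");
for (String tag : raw.split(",")) {
    String name = tag.trim().replace("@", "");  // e.g. "regression-staging"
    System.out.println("Running tag: " + name);
}

The resulting value can then be logged and written into the HTML report.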
This is a question about unit tests (jest + @testing-library/react).
Hi. I recently started using @nrwl/react.
It is an amazing product and I'm excited to work on a monorepo project with nx.
By the way, there is an afterEach(cleanup); in the generated template test file.
This is my sample project.
https://github.com/tyankatsu0105/reproducibility-react-test-nx/blob/master/apps/client/src/app/app.spec.tsx#L7
However, react-testing-library doesn't need cleanup when using jest:
https://testing-library.com/docs/react-testing-library/api#cleanup
Please note that this is done automatically if the testing framework you're using supports the afterEach global (like mocha, Jest, and Jasmine). If not, you will need to do manual cleanups after each test.
In fact, I see an error when I remove afterEach(cleanup); from the test files:
Found multiple elements with the text:
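For context, a rough sketch of the generated test file (simplified from the repo linked above; component and test names are illustrative):

import React from 'react';
import { cleanup, render } from '@testing-library/react';
import App from './app';

// Explicit unmount between tests; recent versions of @testing-library/react
// register this automatically when the framework exposes an afterEach global.
afterEach(cleanup);

describe('App', () => {
  it('should render successfully', () => {
    const { baseElement } = render(<App />);
    expect(baseElement).toBeTruthy();
  });
});

Without the cleanup, each render mounts a fresh copy of the component next to the previous one, which would explain the "Found multiple elements" error.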
thanks!
I am using guard-jasmine to create coverage reports for my JavaScript app written with Backbone.js. I would like to exclude the template files from the coverage. Is there a way to do this currently? I have also tried looking through the source and passing the -x option to the instrument command in coverage.rb, but that doesn't seem to help at all. Any pointers would be appreciated.
Thanks!
There is currently no way to configure Guard::Jasmine to skip specific files when generating coverage.
A possible way to add this would be a coverage_skip option containing a regex, checked as a precondition in the coverage Tilt template:
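# Skip instrumentation and return the file unchanged when it matches: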
return data if file =~ Regexp.new(ENV['COVERAGE_SKIP'])
Since we do not have access to the Guard::Jasmine options, we need to set it as an environment variable in the server process.
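For illustration only (the pattern and invocation are mine, not an existing option), that could look like:
COVERAGE_SKIP='\.jst$' bundle exec guard
with the Tilt template reading ENV['COVERAGE_SKIP'] as above.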
A pull request is heartily welcome ;)
Nowhere in the documentation does it tell you which scripts to include, and in which order. I would like to print to the console because I am running Jasmine inside PhantomJS. Which files should I use for this?
I'm trying:
bootstrap.js
console.js
jasmine.js
jasmine.getEnv().addReporter(new jasmine.ConsoleReporter(console.log));
but it gives:
TypeError: 'undefined' is not a constructor (evaluating 'new jasmine.ConsoleReporter(console.log)')
jasmine.js needs to come before console.js, as console.js adds ConsoleReporter() as a method to the jasmine object.
Look at the included sample HTML runner. I found that any add-on must load after boot.js, for example jasmine-jquery and the TeamCity reporters. These usually attach themselves to the global jasmine object, which is configured in boot.js.
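Putting both answers together, a load order like the following should work (boot.js stands in for whichever bootstrap file ships with your Jasmine version):

jasmine.js
boot.js
console.js

and only after those are loaded, register the reporter:

jasmine.getEnv().addReporter(new jasmine.ConsoleReporter(console.log));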
I am trying to figure out how to run unit tests using Google Test and send the results to TeamCity.
I have run my tests and output the results to an XML file, using the command-line argument --gtest_output="xml:test_results.xml".
I am trying to get this XML read by TeamCity. I don't see how I can get XML reports passed to TeamCity during the build/run...
Except through XML Report Processing:
I added XML Report Processing, selected Google Test, then...
it asks me to specify monitoring rules, and I added the path to the XML file... I don't understand what monitoring rules are, or how to create them...
[Still, nowhere in the generated XML can I see any indication that it intends to talk to TeamCity...]
In the log, I have:
Google Test report watcher
[13:06:03][Google Test report watcher] No reports found for paths:
[13:06:03][Google Test report watcher] C:\path\test_results.xml
[13:06:03]Publishing internal artifacts
And, of course, no report results.
Can anyone please direct me to a proper way to import the XML test results file into TeamCity? Thank you so much!
Edit: is it possible that XML Report Processing only processes reports that were created during the build (which Google Test does not do here, since the tests are not run as a build step), ignoring previously generated reports as "out of date" while simply saying that it can't find them, or that they are in the wrong format, or... however I should read the message above?
I found a bug report showing that XML reports not generated during the build are ignored, which can make a newbie like me believe they were not generated correctly.
Two simple solutions:
1) Create a post-build script, or
2) Add a build step that calls the test executable with the --gtest_output command-line argument, then add the XML report processing build feature. Example:
Add build step
Add build feature - XML report processing
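The build step's command line would then look something like this (the executable name is illustrative):

MyTests.exe --gtest_output=xml:test_results.xml

Because the report is now produced during the build, the XML report processing feature's monitoring rule (pointing at test_results.xml) can pick it up right away.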
I had similar problems getting this to work; here is how I got it working.
When you call your Google Test executable from the command line, prepend %teamcity.build.checkoutDir% to the name of your XML file to set its path, like this:
--gtest_output=xml:%teamcity.build.checkoutDir%test_results.xml
Then when configuring your additional build features on the build steps page, add this line to your monitoring rules:
%teamcity.build.checkoutDir%test_results.xml
Now the paths match and are in the build directory.