I have six test cases that I need to run, but only three of them can be active in the test target environment at any one time. When I run a test, I want to check whether the environment is set up correctly for that particular case. If not, I want to mark the test as skipped. How can I dynamically mark a test case as skipped in Nightwatch?
From your question, I imagine you are looking for something like this: test tags.
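For illustration, a tagged test module might look roughly like this; the tag name, URL, and test name here are just placeholders, not anything from your project:

// tests/loginTest.js — hypothetical test module tagged for one environment
module.exports = {
    '@tags': ['env-a'], // only run this test when the "env-a" tag is requested

    'user can open the login page': function (browser) {
        browser
            .url('https://example.com/login')
            .waitForElementVisible('body')
            .end();
    }
};

You would then pass the tag matching the current environment when invoking Nightwatch, e.g. nightwatch --tag env-a, or exclude a group with nightwatch --skiptags env-a.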
I was writing tests using the Ginkgo framework and wanted to reduce duplication within my tests. Suppose I have two tests which have an identical middle section but different start and end sections. Ginkgo provides the BeforeEach function to remove duplication from the start, but I couldn't find any syntax to define a simple utility function within a Describe node. The best I could think of was assigning a function to a variable, but variable initialization is not allowed in container nodes.
I am not completely sure what you mean by syntax to write a utility in a Describe node. If you go through the documentation, Describe, Context, etc. are container nodes and are just syntactic sugar to better manage test descriptions and readability. We cannot hold code in those container nodes; the only code that executes is inside ginkgo.Specify (or It).
Refer to this link: https://onsi.github.io/ginkgo/#adding-specs-to-a-suite
Now, to solve your problem: it's basically a test design issue, and it totally depends on how you design your test cases. You can simply introduce fixture files for test data and reusable functions. So, for example, we have a structure like this:
Testsuite:
|- a_runnertest.go - only controls spec runs
|- b_case.go - handles cases
|- c_fixture.go - handles all reusable functions and test data
Now, any functions that are reusable and need to be shared across various Describes are moved to the fixture file and called from b_case.go. It will also scale well moving forward.
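A minimal sketch of that layout could look like this; the package, function, and value names are just placeholders:

// c_fixture.go — reusable helpers and test data
package testsuite

import "fmt"

// BuildUser is a shared fixture helper usable from any Describe.
func BuildUser(name string) string {
    return fmt.Sprintf("user-%s", name)
}

// b_case.go — specs call the fixture helpers from setup and It nodes
package testsuite

import (
    . "github.com/onsi/ginkgo"
    . "github.com/onsi/gomega"
)

var _ = Describe("User handling", func() {
    var user string

    BeforeEach(func() {
        user = BuildUser("alice") // reusable setup lives in the fixture file
    })

    It("builds the expected user id", func() {
        Expect(user).To(Equal("user-alice"))
    })
})

Note that the variable is only declared in the container node and assigned inside BeforeEach, which is allowed.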
How can I run only one UI test with XCTest from fastlane?
I know about the only_testing parameter for fastlane, but I don't understand how to use it. Can you give an example?
I run all my UI tests as:
fastlane ios RunningUITests
but I want:
fastlane ios RunningUITests only_testing:GTUITests/GT00FirstClass/testFunc
This does not work for me. Can you give an exact example of this?
You have to use the scan (also known as run_tests) "action". Read this documentation for more information.
There, you can see the instructions for calling it directly on the command line. In your example it would be:
fastlane scan --workspace "<YourRunningUITests>.xcworkspace" --scheme "<YourRunningUITestsScheme>" --only-testing "GTUITests/GT00FirstClass/testFunc"
Replace the values inside of the angled brackets (< >) with the values appropriate to your code.
However, rather than running that multi-parameter call from the command line, I recommend using a Fastfile to consolidate your logic and allow you to perform more sophisticated logic (such as these Fastfiles).
If you were to follow the logic suggested here, you could then simply call fastlane tests from the command line. Much simpler.
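For example, a minimal Fastfile lane might look roughly like this; the workspace, scheme, and test identifier below are placeholders taken from the question, so adjust them to your project:

# Fastfile — a hypothetical lane wrapping run_tests (scan)
lane :tests do
  run_tests(
    workspace: "RunningUITests.xcworkspace",
    scheme: "RunningUITestsScheme",
    only_testing: ["GTUITests/GT00FirstClass/testFunc"]
  )
end

With that in place, fastlane tests from the command line runs just that one test; only_testing also accepts an array with several identifiers.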
The answer above is very useful; the only thing I want to add is that if you want to run multiple tests, write something like the following:
--only-testing "GTUITests/GT00FirstClass/testFunc,GTUITests/GT00FirstClass/testFunc2"
You should always write the full path to the test function.
As the title says, I would like to run a single test, not the whole spec. The naive way I tried was using a case like this:
describe("MyCase",function() {
it("has a test",function() {
expect(something).toBe(something);
}
it("has another test",function() {
expect(something_else).toBe(something_else);
}
}
This is saved in a file called MyCase.spec.js (if this matters). I would have thought that it would be possible to run just the first case using the following from the command line:
jasmine-node --match="MyCase has a test"
But this is apparently not the way to do it. So how is it done?
Thanks!
This may be a pretty old thread, but it could help someone who is looking to run a specific test case with Jasmine 2.0. Use fdescribe to run a specific suite and fit to run a specific spec. This skips all other tests except the marked ones.
Keep an eye out and don't commit fdescribe or fit to the repo. Here the f stands for "focus".
For lower versions you can use ddescribe and iit, as described in the other answers.
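Applied to the spec from the question, a focused version in Jasmine 2.0 would look roughly like this:

describe("MyCase", function() {
    // fit focuses this spec; everything else in the run is skipped
    fit("has a test", function() {
        expect(something).toBe(something);
    });
    // this spec will not run while a focused spec exists
    it("has another test", function() {
        expect(something_else).toBe(something_else);
    });
});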
Change it to iit and run your tests as usual.
Then only this test will be run and all others will be ignored.
E.g.
iit('should run only this test', function () {
//expect(x).toBe(y);
});
The same works for the describe block, just rename it to ddescribe.
You can also ignore a single it test by renaming it to xit.
And xdescribe works too.
Not sure if it is applicable for jasmine-node, but using ts-node and Jasmine's node modules path, I can use the filter flag to match the spec's string. For me it looks like:
ts-node node_modules/jasmine/bin/jasmine --filter="employees*"
This would match all the 'it' blocks within a 'describe' block that begin with 'employees'.
It might not be what you need exactly, but I would like to propose using karma / karma-jasmine.
Within Karma, Jasmine is "patched" and provides additional ddescribe and iit methods. If you rename one of your suites to ddescribe or one of your specs to iit (takes precedence over ddescribe), only this suite or spec will be executed. Concerning your question, you could rename your first spec to iit and then only this spec would be executed. Of course, this is only useful while developing the spec.
A downside of this is that one can easily end up having only a fraction of one's test suites being tested for a long time. So don't forget to rename it back to the usual version (no double d's, no double i's).
I was wondering if it is possible inside LiteIDE to run only one function, using some parameters, and see what a variable contains after executing one specific line of code.
Thanks.
Indeed, I could (and should!) write a test for it.
And for seeing what a variable contains at some step in the execution, either:
debug-print it inside the function, then run the test
Use a debugger (for Go it's called Delve) and step through either the test or the real program; a small sketch of both approaches follows.
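For instance, assuming a made-up package and function (mycalc and Add are placeholders), a plain Go test with a debug print might look like this:

// mycalc.go — a made-up function to exercise
package mycalc

func Add(a, b int) int {
    return a + b
}

// mycalc_test.go — a test that runs just that one function
package mycalc

import (
    "fmt"
    "testing"
)

func TestAdd(t *testing.T) {
    result := Add(2, 3)
    fmt.Println("result after Add:", result) // quick debug-print of the variable
    if result != 5 {
        t.Errorf("Add(2, 3) = %d, want 5", result)
    }
}

From a terminal, go test -run TestAdd runs only that test, and dlv test -- -test.run TestAdd lets you step through it with Delve.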
With test/unit and minitest, is it possible to fail any test that doesn't contain an assertion, or would monkey-patching be required (for example, checking if the assertion count increased after each test was executed)?
Background: I shouldn't write unit tests without assertions - at a minimum, I should use assert_nothing_raised if I'm smoke testing to indicate that I'm smoke testing.
Usually I write tests that fail first, but I'm writing some regression tests. Alternatively, I could supply an incorrect expected value to see if the test is comparing the expected and actual value.
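For reference, the monkey-patching idea described above might look roughly like this sketch, which leans on Minitest's per-test assertions counter and its after_teardown hook; the module name is made up, and this is only an illustration, not an established pattern:

# test_helper.rb — rough sketch: fail any test that made no assertions
require "minitest/autorun"

module FailWithoutAssertions
  def after_teardown
    super
    # `assertions` is Minitest's per-test assertion counter
    if assertions.zero? && passed? && !skipped?
      flunk "no assertions were made in #{name}"
    end
  end
end

Minitest::Test.prepend(FailWithoutAssertions)

Whether this is worth doing is debatable; as the answers below argue, the presence of an assertion doesn't by itself make a test useful.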
To ensure that unit tests actually verify anything, a technique called mutation testing is used.
For Ruby, you can take a look at Mutant.
As PK's link points out too, the presence of assertions in itself doesn't mean the unit test is meaningful and correct. I believe there is no automatic replacement for careful thinking and awareness.
Ensuring the tests fail first is a good practice, which should be made into a habit.
Apart from the things you mention, I often set wrong values in asserts in new tests, to check that the test really runs and fails. (Then I fix it of course :-) This is less obtrusive than editing the production code.
I don't really think that forcing the test to fail without an assert is really helpful. Having an assert in a test is not a goal in itself - the goal is to write useful tests.
The missing assert is just an indication that the test may not be useful. The interesting question is: will the test fail if something breaks? If it doesn't, it's obviously useless.
If all you're testing for is that the code doesn't crash, then assert_nothing_raised around it is just a kind of comment. But testing for "no explosions" probably indicates a weak test in itself. In most cases, it doesn't give you any useful information about your code (because "no crash != correct"), so why did you write the test in the first place? Plus, I'd rather have a method that explodes properly than one that just returns a wrong result.
I find the best regression tests come from the field: bang on your app (or have your tester do it), and for each problem you find, write a test that fails. Fix it, and make the test pass.
Otherwise I'd test the behavior, not the absence of crashes. In cases where I have "empty" tests (meaning I haven't written the test code yet), I usually put a #flunk inside to remind me.
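For example, a placeholder like this (the test name is made up) keeps the empty test from silently passing:

def test_handles_empty_input
  flunk "not implemented yet" # reminder to write the real assertions
end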