I have Jasmine tests for a data service that does CRUD operations. Some functions are only executed on user actions such as adding, updating, or deleting data. These functions are marked red by Istanbul, which decreases the code coverage.
How should I handle these functions?
Either:
- exercise the user actions manually, using a hand-written script so you can reliably repeat them, or
- add a programmable way for your application to trigger these actions, and then write unit tests that use the triggers.
I much prefer the latter because you can then simply run the tests whenever needed.
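To illustrate the second option: expose the add/update/delete operations as plain service methods that the UI handlers delegate to, and call those methods directly from your tests. The question is about Jasmine, but the pattern itself is language-neutral; here is a minimal sketch in JUnit terms, where DataService and its methods are made-up names standing in for your own code:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;
import java.util.HashMap;
import java.util.Map;
import org.junit.Test;

public class DataServiceTest {

    // Hypothetical service: the UI's add/update/delete handlers delegate here,
    // so the same logic is reachable from a unit test without any user action.
    static class DataService {
        private final Map<Integer, String> store = new HashMap<>();

        void add(int id, String value)    { store.put(id, value); }
        void update(int id, String value) { store.replace(id, value); }
        void delete(int id)               { store.remove(id); }
        String get(int id)                { return store.get(id); }
    }

    @Test
    public void addUpdateDeleteAreCovered() {
        DataService service = new DataService();

        service.add(1, "first");
        assertEquals("first", service.get(1));

        service.update(1, "updated");
        assertEquals("updated", service.get(1));

        service.delete(1);
        assertNull(service.get(1));
    }
}

Once the user-action logic lives in methods like these, the coverage tool sees them exercised on every test run, with no manual clicking involved.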
Parallel execution of Cucumber 4 works for me, but I want to execute some actions just once for all tests. Is there a way to run some hooks in another thread?
As per your requirement, you want to execute some actions once for all test cases. Is that before or after all test case execution? If so, add @BeforeClass from JUnit/TestNG (and similarly @AfterClass) in your Cucumber runner class, as sketched below. That code runs once before your first test and once after all test execution has completed.
Would that work for you? Alternatively, tagged hooks may give you a clue: for some specific test cases you can use tagged hooks and run those specific actions inside that hook only.
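A minimal sketch of that runner-class approach, assuming Cucumber-JVM 4.x with the JUnit runner (package names vary across Cucumber versions, and the features/glue paths here are placeholders):

import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@CucumberOptions(features = "src/test/resources/features", glue = "com.example.steps")
public class RunCukesTest {

    @BeforeClass
    public static void beforeAllScenarios() {
        // Runs once, before the first scenario of the run,
        // e.g. start a shared server or seed test data.
    }

    @AfterClass
    public static void afterAllScenarios() {
        // Runs once, after every scenario has finished,
        // e.g. tear down the shared resources.
    }
}

Because these hooks belong to the runner class rather than to any scenario, they run once per execution of that class.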
I'm using JMeter for integration and non-regression testing.
The tests are automated and reports are working.
But since it is scenario testing and not performance testing, the report doesn't give real added business value for that kind of test.
My question: is there any way to have scenario (transaction-controller-based) reporting?
For the moment, to have more meaningful results, transaction controllers and dummy samplers are used.
What we would like to have is the number of success/failure scenarios of the last test run, and also a history of successes/failures per test run (one per day).
Thank you for your advice.
The easiest way of getting this done is putting your JMeter test under Jenkins orchestration, so it will be executed automatically based on a VCS hook or according to a schedule.
Once done, you will be able to use the Jenkins Performance Plugin, which adds test-result trend charts and the ability to mark a build as unstable/failed depending on various criteria.
If I am not wrong, you want to create a suite based on particular test cases, e.g. a single case that includes the execution of more than one request in a single run.
If this is the case, you can simply create a test fragment through the JMeter GUI and copy all the samplers into a single fragment.
Now, to control their execution, you can use any controller of your choice; I would suggest a Module Controller for HTTP samplers.
I'm using TestComplete keyword tests for a lot of UI test cases. Quite a lot of them have the same steps.
Is there any macro functionality which can add multiple preset actions/checkpoints easily?
Sure, you can call another keyword test using the Run Keyword Test operation or a script function using the Run Script Routine operation. Both operations allow specifying parameters for a test. Also, you can use the Run Test operation to run any item that can be treated as a separate test (keyword or script test, network suite job or task, a load test).
Moreover, I think you will find TestComplete's Data-Driven Testing functionality useful: it allows running a test for every record in a specified data source. You can find more information on this feature in the Data-Driven Testing help topic. Videos demonstrating the data-driven approach can be found here and here.
Since application tests can now be run on the simulator from Xcode, what would the advantage be, apart from possibly a small saving in execution time, of still separating your tests into logic and application tests?
The differentiation as per the Apple docs:
Logic tests. These tests check the correct functionality of your code in a clean-room environment; that is, your code is not run inside an application. Logic tests let you put together very specific test cases to exercise your code at a very granular level (a single method in a class) or as part of a workflow (several methods in one or more classes). You can use logic tests to perform stress-testing of your code to ensure that it behaves correctly in extreme situations that are unlikely in a running application. These tests help you produce robust code that works correctly when used in ways that you did not anticipate. Logic tests are iOS Simulator SDK–based; however, the application is not run in iOS Simulator: The code being tested is run during the corresponding target’s build phase.
Application tests. These tests check the functionality of your code in a running application. You can use application tests to ensure that the connections of your user-interface controls (outlets and actions) remain in place, and that your controls and controller objects work correctly with your object model as you work on your application. Because application tests run only on a device, you can also use these tests to perform hardware testing, such as getting the location of the device.
Application tests and logic tests are really used for two different things:
Logic tests/unit tests are used to test very small behavior for one or a few methods, e.g. "Given that I create my object like this, is the value of a certain property what I expect it to be?"
Application tests however are used to test the big picture, e.g. "Do I get the right data in my detail view when I tap on a certain table view cell?"
In our development environment, we run a Continuous Integration service (TeamCity) which responds to code checkins by running build/test jobs and reporting the results. While the job is in progress, we can easily see how many unit tests have executed so far, how many have failed, etc.
My automated testing team is delivering UI tests developed in Rational Functional Tester. Extracting those tests from the source control system, compiling them, and executing them from the command line all seem to be pretty straightforward exercises.
What I haven't been able to find is a way to report the test results automatically - there don't appear to be any hooks for listeners, for example, or any way to customize the messages that are emitted.
From my research thus far, I've come to the conclusion that my only option is to (a) wait until the tests finish, then (b) parse the HTML report that RFT generates.
Does anybody have a better answer than that?
Here is the workaround I've used for a similar purpose:
1. Write a helper superclass that overrides the onTerminate callback method, and implement your log-parsing logic there (see the sketch after this list).
2. Change the helper superclass of your test scripts to the helper superclass created in step 1.
3. Use the RFT CLI to invoke your scripts from your Continuous Integration job.
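A hedged sketch of steps 1 and 2, assuming the standard com.rational.test.ft.script.RationalTestScript base class and the onTerminate callback described above; ReportingHelper is a made-up name and the logging body is a placeholder:

import com.rational.test.ft.script.RationalTestScript;

// Step 1: a helper superclass that hooks script termination.
public abstract class ReportingHelper extends RationalTestScript {

    // Called by RFT when the script ends; parse the RFT log here
    // and write results wherever your CI job can read them.
    public void onTerminate() {
        logInfo("Script finished, collecting results");
        // ... parse the log / emit your own result format here ...
        super.onTerminate();
    }
}

// Step 2: each test script then declares this class as its helper superclass,
// e.g. public class MyUiTest extends ReportingHelper { ... }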
Expanding on @eric2323223's answer: in your onTerminate override, you can use TeamCity's build script interaction functionality to have your RFT pass/fail status rolled up to TeamCity. You just need these TeamCity-specific messages emitted to the command line, so that TeamCity picks them up:
##teamcity[testStarted name='test1']
##teamcity[testFailed name='test1' message='failure message' details='message and stack trace']
##teamcity[testFinished name='test1']
##teamcity[testStarted name='test2']
##teamcity[testFailed type='comparisonFailure' name='test2' message='failure message' details='message and stack trace' expected='expected value' actual='actual value']
##teamcity[testFinished name='test2']
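Those messages are plain writes to stdout, so the onTerminate override from the previous answer can emit them directly. A minimal sketch (the TeamCityMessages class name is made up; note that TeamCity requires attribute values to be escaped, e.g. | as ||, ' as |', and ] as |]):

// Minimal TeamCity service-message emitter for use from RFT scripts.
public final class TeamCityMessages {

    // Escape per TeamCity's service-message rules; '|' must be escaped first.
    private static String escape(String s) {
        return s.replace("|", "||")
                .replace("'", "|'")
                .replace("[", "|[")
                .replace("]", "|]")
                .replace("\r", "|r")
                .replace("\n", "|n");
    }

    public static void testStarted(String name) {
        System.out.println("##teamcity[testStarted name='" + escape(name) + "']");
    }

    public static void testFailed(String name, String message, String details) {
        System.out.println("##teamcity[testFailed name='" + escape(name)
                + "' message='" + escape(message)
                + "' details='" + escape(details) + "']");
    }

    public static void testFinished(String name) {
        System.out.println("##teamcity[testFinished name='" + escape(name) + "']");
    }
}

Call testStarted when a script begins, testFailed if it ends in error, and testFinished when it completes; TeamCity reads the build log and shows each RFT script as a regular test in the build results.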