I see that Gradle already supports showing test output grouped by test class, which is quite useful for debugging when a test fails on the continuous integration server.
I'm wondering whether there is a way to show output by method instead of by class, which would be even better: a test class may contain many test methods, which still makes the output hard to inspect.
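One possible approach, as a minimal sketch assuming the onOutput hook on Gradle's Test task (the log format below is purely illustrative):

```groovy
// build.gradle: forward each chunk of test output, labelled with the
// class and method that produced it. The message format is illustrative.
test {
    onOutput { descriptor, event ->
        // descriptor.className / descriptor.name identify the test method
        logger.lifecycle("${descriptor.className} > ${descriptor.name}: ${event.message.trim()}")
    }
}
```

Whether this is easier to read than the built-in per-class grouping will depend on how noisy your tests are.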
Related
I am searching for a way to assign failed unit tests to resolvers. The way Sonar raises an issue at the class level whenever one or more unit tests fail does not fit my needs; I would like to assign a specific test to a specific developer.
Since Sonar can raise an issue for unit test failures and is able to determine which particular test case failed, I wonder whether I could assign each failed test case to a different developer rather than the whole test class. And if it is possible, how can I do such a thing?
This is not possible at the moment. You can indeed only assign the issue that reports how many errors (or failures) you have on a specific file. Most of the time this will work out of the box, as most teams try to avoid having several people work on the same class at the same time, but it's true that this can happen.
The question is quite straightforward. I'm following the TDD principle, so basically I write the controller test cases first and then the functions. One controller action might have multiple test cases; some pass, but the one I'm currently writing fails.
Usually I open http://domain.com/proj/test.php in a browser and check whether there are any failed test cases. My question is: how can I explicitly run only the failed test cases? I want to ignore the passing test cases and focus on the failing ones.
If there is no such feature in CakePHP 2.4 stable, how could I implement it? Please guide me.
Thanks
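I don't know of a built-in rerun-failures mode in 2.4, but since the CakePHP test shell delegates to PHPUnit, one possible workaround is to rerun just the failing method by name; this assumes the shell passes --filter through to PHPUnit, and the controller and method names below are illustrative:

```sh
# Rerun a single (hypothetical) failing test method by name
Console/cake test app Controller/PostsController --filter testAdd
```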
Since application tests can now be run on the simulator from Xcode, what would the advantage be, apart from possibly a small saving in execution time, of still separating your tests into logic and application tests?
Here is the distinction as drawn in the Apple docs:
Logic tests. These tests check the correct functionality of your code in a clean-room environment; that is, your code is not run inside an application. Logic tests let you put together very specific test cases to exercise your code at a very granular level (a single method in a class) or as part of a workflow (several methods in one or more classes). You can use logic tests to perform stress-testing of your code to ensure that it behaves correctly in extreme situations that are unlikely in a running application. These tests help you produce robust code that works correctly when used in ways that you did not anticipate. Logic tests are iOS Simulator SDK–based; however, the application is not run in iOS Simulator: The code being tested is run during the corresponding target’s build phase.
Application tests. These tests check the functionality of your code in a running application. You can use application tests to ensure that the connections of your user-interface controls (outlets and actions) remain in place, and that your controls and controller objects work correctly with your object model as you work on your application. Because application tests run only on a device, you can also use these tests to perform hardware testing, such as getting the location of the device.
Application tests and logic tests are really used for two different things:
Logic tests/unit tests are used to test a very small piece of behavior in one or a few methods, e.g. "Given that I create my object like this, is the value of a certain property what I expect it to be?"
Application tests, however, are used to test the big picture, e.g. "Do I get the right data in my detail view when I tap on a certain table view cell?"
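To make the first kind concrete, here is a minimal logic-test sketch (written against today's XCTest for illustration; Invoice and its total property are hypothetical stand-ins for your own model code):

```swift
import XCTest

// `Invoice` is a hypothetical class under test: pure model code,
// no running application and no UI involved.
struct Invoice {
    let lineItems: [Double]
    var total: Double { lineItems.reduce(0, +) }
}

final class InvoiceLogicTests: XCTestCase {
    func testTotalSumsLineItems() {
        let invoice = Invoice(lineItems: [12.50, 7.50])
        XCTAssertEqual(invoice.total, 20.00, accuracy: 0.001)
    }
}
```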
I am writing a few Ruby scripts, and I want to write unit tests and integration tests for them.
I learned that the test/unit module is available in Ruby.
In my scripts, I wrote Ruby classes A and B, which extend a common helper class Common.
In both classes, I am trying to set up S3 connections.
I wrote A_test and B_test test cases.
When I run them individually, they work.
When I run them together, they do not: some of the variables set in initialize are only set for the class/tests that run first.
If A_test runs first, it works, but B_test doesn't.
Any idea why?
I cannot answer your question well without some code samples, but consider the possibility that it's the code being tested that's the problem, not the tests.
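For example (purely hypothetical, since no code was posted): if Common caches the S3 connection in a class variable, Ruby shares that variable across the whole class hierarchy, so whichever test file runs first locks in its settings for everyone else:

```ruby
# Hypothetical reconstruction of the symptom described above.
class Common
  @@s3_connection = nil

  def self.s3_connection(options = {})
    # ||= caches the first connection; options on later calls are ignored
    @@s3_connection ||= "connection(#{options.inspect})" # stand-in for a real S3 client
  end
end

class A < Common; end
class B < Common; end

puts A.s3_connection(bucket: "a-bucket") # => connection({:bucket=>"a-bucket"})
puts B.s3_connection(bucket: "b-bucket") # => connection({:bucket=>"a-bucket"}), still A's!
```

If something like this is going on, resetting the shared state in each test's setup (or avoiding class-level caching altogether) should make the two suites independent.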
In our development environment, we run a Continuous Integration service (TeamCity) which responds to code checkins by running build/test jobs and reporting the results. While the job is in progress, we can easily see how many unit tests have executed so far, how many have failed, etc.
My automated testing team is delivering UI tests developed in Rational Functional Tester (RFT). Extracting those tests from the source control system, compiling them, and executing them from the command line all seem to be pretty straightforward exercises.
What I haven't been able to find is a way to report the test results automatically - there don't appear to be any hooks for listeners, for example, or any way to customize the messages that are emitted.
From my research thus far, I've come to the conclusion that my only option is to (a) wait until the tests finish, then (b) parse the HTML report that RFT generates.
Does anybody have a better answer than that?
Here is the workaround I've used for a similar purpose:
1. Write a helper superclass that overrides the onTerminate callback method, and implement your log-parsing logic there (a sketch follows this list).
2. Change the helper superclass of your test scripts to the one created in step 1.
3. Use the RFT CLI to invoke your scripts from your continuous integration job.
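A minimal sketch of step 1, assuming the onTerminate callback that RationalTestScript exposes; parseAndPublishResults is a hypothetical placeholder for your own log-parsing logic:

```java
import com.rational.test.ft.script.RationalTestScript;

// Helper superclass for RFT scripts: RFT calls onTerminate when a script
// finishes, which makes it a convenient hook for publishing results.
public abstract class ReportingHelper extends RationalTestScript {
    public void onTerminate() {
        parseAndPublishResults(); // hypothetical: read the RFT log and report
    }

    // Implemented per project, e.g. to push results to the CI server.
    protected abstract void parseAndPublishResults();
}
```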
Expanding on @eric2323223's answer: in your onTerminate override, you can use TeamCity's build-script interaction functionality to have your RFT pass/fail status rolled up to TeamCity. You just need to emit these TeamCity-specific messages on standard output so that TeamCity picks them up:
##teamcity[testStarted name='test1']
##teamcity[testFailed name='test1' message='failure message' details='message and stack trace']
##teamcity[testFinished name='test1']
##teamcity[testStarted name='test2']
##teamcity[testFailed type='comparisonFailure' name='test2' message='failure message' details='message and stack trace' expected='expected value' actual='actual value']
##teamcity[testFinished name='test2']
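A sketch of emitting those messages from Java, for instance from the onTerminate hook above; testName and failure are placeholders for values parsed from the RFT log, and note that TeamCity requires |, ', [, ] and line breaks to be escaped in attribute values:

```java
// Emits TeamCity service messages for one test result.
public final class TeamCityReporter {

    // TeamCity's escaping rules: | doubles, the rest get a | prefix.
    private static String escape(String s) {
        return s.replace("|", "||").replace("'", "|'")
                .replace("\n", "|n").replace("\r", "|r")
                .replace("[", "|[").replace("]", "|]");
    }

    public static void report(String testName, String failure) {
        System.out.println("##teamcity[testStarted name='" + escape(testName) + "']");
        if (failure != null) {
            System.out.println("##teamcity[testFailed name='" + escape(testName)
                    + "' message='" + escape(failure) + "']");
        }
        System.out.println("##teamcity[testFinished name='" + escape(testName) + "']");
    }
}
```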