How do I make RFT report test results in real time? - teamcity

In our development environment, we run a Continuous Integration service (TeamCity) which responds to code checkins by running build/test jobs and reporting the results. While the job is in progress, we can easily see how many unit tests have executed so far, how many have failed, etc.
My automated testing team is delivering UI tests developed in Rational Functional Tester. Extracting those tests from the source control system, compiling them, and executing them from the command line all seem to be pretty straightforward exercises.
What I haven't been able to find is a way to report the test results automatically - there don't appear to be any hooks for listeners, for example, or any way to customize the messages that are emitted.
From my research thus far, I've come to the conclusion that my only option is to (a) wait until the tests finish, then (b) parse the HTML report that RFT generates.
Does anybody have a better answer than that?

Here is the workaround I've used for the similar purpose:
1. Write a helper superclass that overrides the onTerminate callback method, and implement your log-parsing logic there (a sketch follows after these steps).
2. Change the helper superclass of your test scripts to the helper superclass created in step 1.
3. Use the RFT CLI to invoke your scripts from your Continuous Integration code.
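A minimal sketch of such a helper superclass, assuming RFT's RationalTestScript base class and the onTerminate() callback described above (the class name and parseLatestLog() helper are hypothetical, and the real parsing depends on where and how your logs are written):

import com.rational.test.ft.script.RationalTestScript;

// Hypothetical helper superclass: test scripts extend this instead of
// RationalTestScript directly, so every script shares the same teardown hook.
public abstract class ReportingHelper extends RationalTestScript {

    @Override
    public void onTerminate() {
        // Parse whatever log this script produced and report the outcome.
        boolean passed = parseLatestLog();
        System.out.println(getClass().getName() + (passed ? " PASSED" : " FAILED"));
        super.onTerminate();
    }

    // Placeholder: read the RFT log for this script and decide pass/fail.
    protected boolean parseLatestLog() {
        return true;
    }
}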

Expanding on eric2323223's answer: in your onTerminate override, you can use TeamCity's build script interaction functionality to have your RFT pass/fail status rolled up to TeamCity. You just need these TeamCity-specific service messages emitted to the command line, so that TeamCity picks them up:
##teamcity[testStarted name='test1']
##teamcity[testFailed name='test1' message='failure message' details='message and stack trace']
##teamcity[testFinished name='test1']
##teamcity[testStarted name='test2']
##teamcity[testFailed type='comparisonFailure' name='test2' message='failure message' details='message and stack trace' expected='expected value' actual='actual value']
##teamcity[testFinished name='test2']
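Building on the sketch from the previous answer, the onTerminate override could emit those messages directly. A rough sketch (lastRunPassed() and failureMessage() are hypothetical placeholders for your own log parsing; escape() applies TeamCity's standard service-message escaping):

import com.rational.test.ft.script.RationalTestScript;

// Hypothetical variant of the helper superclass that reports to TeamCity.
public abstract class TeamCityReportingHelper extends RationalTestScript {

    @Override
    public void onTerminate() {
        String testName = getClass().getName();
        System.out.println("##teamcity[testStarted name='" + escape(testName) + "']");
        if (!lastRunPassed()) {
            System.out.println("##teamcity[testFailed name='" + escape(testName)
                    + "' message='" + escape(failureMessage()) + "']");
        }
        System.out.println("##teamcity[testFinished name='" + escape(testName) + "']");
        super.onTerminate();
    }

    // Placeholders: derive these from your own RFT log parsing.
    protected abstract boolean lastRunPassed();
    protected abstract String failureMessage();

    // TeamCity service-message values need |, ', [, ] and line breaks escaped.
    private static String escape(String s) {
        return s.replace("|", "||").replace("'", "|'")
                .replace("[", "|[").replace("]", "|]")
                .replace("\r", "|r").replace("\n", "|n");
    }
}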

Related

Unexecuted user action code in Istanbul Code Coverage

I have Jasmine tests for a data service that does CRUD operations. There are some functions that are only executed on user actions, such as adding, updating or deleting data. These functions are marked red by Istanbul, which decreases the code coverage.
How should I handle these functions?
Either exercise the user actions manually, using a hand-written script so you can reliably repeat them, or add a programmable way for your application to trigger these actions and then write unit tests that use the triggers.
I much prefer the latter, because you can then simply run the tests whenever needed.

TDD Scenario: Looking for advice

I'm currently in an environment where we are parsing data off of the client's website. I want my tests to tell me when the client changes their site and we are no longer receiving the information.
My first approach was to do pure integration tests where my tests hit the client's site and assert that the data was found. However, halfway through and 500 tests in, the test runs have become unbearable and in some cases have started timing out. So I cleared out as many tests as I could without losing the core protection they provide, and I'm down to 350 or so. I'm now afraid that adding more tests will only break the rest. I also find myself no longer running the 5+ minute suite (some clients take longer, since the duration depends on the speed of communication with their site) when I make changes. I consider this a complete failure.
I've been putting a lot of thought into this and asking around the office. My plan for my next attempt is to pull down the client's pages and write tests against these embedded resources in my projects. This will give me higher test coverage and allow me to go back to testing in isolation. However, I would need to be notified when they make changes and then re-pull the pages to test against, and I don't think the clients will adhere to this.
A suggestion was made to me to augment this with a suite of 'random' integration tests that serve the same function as my failed tests (hit the client's site), but in far smaller numbers than before. I really don't like the idea of random testing, where the same code can sometimes produce red lights and sometimes green. But so far this sounds like the best idea I've heard for staying aware of when the client's site has changed and my code no longer finds the data.
Has anyone found themselves testing an environment like this? Any suggestions from the testing community for me?
When you say the big test suite has become unbearable, it suggests that you are running it manually. You shouldn't have to. It should just be running constantly in the background, at whatever speed it takes to complete the suite, and then starting over again (perhaps after a delay, if there are associated costs). Only when something goes wrong should you get an alert.
If there is something about your tests that causes them to get slower as their number grows - find it and fix it. Tests should be independent of one another, so simply having more of them shouldn't cause individual tests to time out.
My recommendation would be to isolate, as much as possible, the part of the code that deals with the uncertainty. This part should sit behind an API that works as a service used by all the other code. That way you protect most of your code against changes.
The stable parts of the code should be unit-tested. With that part independent of the connection to the client's site, running the tests should be much quicker, and it would also make those tests more reliable.
The part that has to deal with changes on the client's websites is thereby reduced. You are not solving the problem, but at least you're minimising it and centralising it in a single module of your code.
Suggesting that the clients expose the data as a web service would be best for you, but I guess that doesn't depend on you :P.
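A minimal sketch of that kind of isolation, in Java with hypothetical names (ClientDataSource, ScrapingClientDataSource and FakeClientDataSource are illustrative, not from the original post, and your own language and types may differ):

// The only abstraction the rest of the code depends on.
public interface ClientDataSource {
    // Returns the value scraped for one field of one record, or null if missing.
    String fetchField(String recordId, String fieldName);
}

// Production implementation: the single module that knows about the client's
// HTML, and therefore the only thing that breaks when their site changes.
class ScrapingClientDataSource implements ClientDataSource {
    public String fetchField(String recordId, String fieldName) {
        // ... download the client's page and parse out the field ...
        return null; // placeholder
    }
}

// Fake used by the fast, reliable unit tests of everything downstream.
class FakeClientDataSource implements ClientDataSource {
    public String fetchField(String recordId, String fieldName) {
        return "known test value for " + fieldName;
    }
}

Only ScrapingClientDataSource needs the slower, site-hitting integration tests; everything that consumes ClientDataSource can be unit-tested against the fake.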
You should look at dividing your tests up, maybe into separate assemblies that can be run independently. I typically have a unit tests assembly and a slower running integration tests assembly.
My unit tests assembly is very fast (because the code is tested in isolation using mocks) and gets run very frequently as I develop. The integration tests are slower and I only run them when I finish a feature / check in or if I have a bad feeling about breaking something.
Maybe you could do something similar or even take the idea further and have 3 test suites with the third containing even slower client UI polling tests.
If you don't have a continuous integration server / process, you should look at setting one up. This would continuously build your software and execute the tests. It could be set up to monitor check-ins and work in the background, sending out a notification if anything fails. With this in place you wouldn't care how long your client UI polling tests take, because you wouldn't ever have to run them yourself.
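As a hedged illustration of that split on the JVM, using JUnit 5 tags (the test names and the 'integration' tag are my own; the equivalent in .NET would be separate test assemblies or categories):

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Fast unit test: code is tested in isolation against saved data and fakes,
// so it runs on every check-in build.
class DataExtractorTest {
    @Test
    void extractsPriceFromSavedPage() {
        // assert against a saved copy of the page, no network access
    }
}

// Slow integration test: hits the real client site, so it is tagged and
// only selected in the nightly or on-demand build configuration.
@Tag("integration")
class ClientSiteIntegrationTest {
    @Test
    void livePageStillContainsPrice() {
        // fetch the live page and assert the data is still found
    }
}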
Definitely split the tests out - separate unit tests from integration tests as a minimum.
As Martyn said, get a Continuous Integration system in place. I use TeamCity, which is excellent, easy to use, free for up to 20 build configurations, and you can happily run it on your own machine if you don't have a server at your disposal - http://www.jetbrains.com/teamcity/
Set up one build to run on every check in, and make that build run your unit tests, or fast-running tests if you will.
Set up a second build to run at midnight every night (or some other convenient time), and include in this the longer running client-calling integration tests. With this in place, it won't matter how long the tests take, and you'll get a big red flag first thing in the morning if your client has broken your stuff. You can also run these manually on demand, if you suspect there might be a problem.

cucumber re-run failed scenarios automatically with a tag?

In our build there are certain scenarios that fail for reasons which are out of our control or would take too long to debug properly - things such as asynchronous JavaScript, etc.
Anyway, the point is that sometimes they work and sometimes they don't, so I was thinking it would be nice to add a tag to a scenario, such as @rerun_on_failure or @retry, which would retry the scenario X number of times before failing the build.
I understand this is not an ideal solution, but the test is still valuable and we would like to keep it without the false negatives.
The actual test that fails clicks a link and expects a tracking event to be sent to a server for analytics (via JavaScript). Sometimes Selenium WebDriver loads the next page too quickly and the event does not have time to be sent.
Thanks
More recent versions of Cucumber have a retry flag
cucumber --retry 2
This will retry a failing test up to two times.
I've been considering writing something like what you're describing, but I found this:
http://web.archive.org/web/20160713013212/http://blog.crowdint.com/2011/08/22/auto-retry-failed-cucumber-tests.html
If you're tired of having to re-kick builds in your CI server because of non-deterministic failures, this post is for you.
In a nutshell: he makes a new rake task called cucumber:rerun that uses rerun.txt to retry failed tests. It should be pretty easy to add some looping in there to retry at most 3x (for example).
For Cucumber + Java on Maven, I found this command:
mvn clean test -Dsurefire.rerunFailingTestsCount=2
You must have a recent version of the Surefire plugin; mine is 3.0.0-M5.
Nothing else special is needed.

Testcomplete : Is there any Macro which can add multiple keyword test steps at one time?

I'm using TestComplete keyword tests for a lot of UI test cases, and quite a lot of them have the same steps.
Is there any macro functionality which can add multiple preset actions/checkpoints easily?
Sure, you can call another keyword test using the Run Keyword Test operation or a script function using the Run Script Routine operation. Both operations allow specifying parameters for a test. Also, you can use the Run Test operation to run any item that can be treated as a separate test (keyword or script test, network suite job or task, a load test).
Moreover, I think you will find the Data-Driven Testing functionality of TestComplete useful: it allows running a test for every record in a specified data source. Find more information on this feature in the Data-Driven Testing help topic.

How do I test DelayedJob with Cucumber?

We use DelayedJob to run some of our long running processes and would like to test with Cucumber/Webrat.
Currently, we are calling Delayed::Job.work_off in a Ruby thread to get work done in the background, but we are looking for a more robust solution.
What is the best approach for this?
Thanks.
The main problem I see with the Delayed::Job.work_off approach is that you are making explicit in your Cucumber scenarios something that belongs to the internals of your system. Mixing both concerns is against the spirit of functional testing:
When I click some link # Some operation is launched in the background
And Jobs are dispatched # Delayed::Job.work_off invoked here
Then I should see the results...
Another problem is that you populate your Cucumber scenarios with repetitive steps for dispatching jobs when needed.
The approach I am currently using is launching delayed_job in the background while cucumber scenarios are being executed. You can check the Cucumber hooks I am using in that link.
