EMMA not recording line coverage even though the line executed

I am using EMMA to record code coverage, and I am particularly interested in line coverage (line %). We are planning to increase the line coverage of our source code through automation: we first execute the scenarios manually and then check with EMMA whether there is an increase in line %. If there is, we go ahead and automate that feature. I am stuck with a particular IF-ELSE block where I see the desired result when I run the scenario manually, but EMMA is not recording the lines as covered. Here is the sample code:
if (a == null)
{
    final class1 c1 = new class1();
    if (c1.isSE())
    {
        c1.sendRedirect(req, res, "error.html");
    }
    else
    {
        c1.sendRedirect(req, res, "testpage.html");
    }
    return;
}
The first three lines are green in the EMMA report, but the following lines are red (meaning they are not covered):
c1.sendRedirect(req, res, "error.html");
c1.sendRedirect(req, res, "testpage.html");
return;
But when I execute the scenario manually, I see the desired result (i.e. I am redirected to testpage.html). Why is EMMA not recording these lines as covered?
Note: I have already tried the troubleshooting below (from http://emma.sourceforge.net/faq.html):
3.18. EMMA started reporting that it instrumented 0 classes even though I gave it some input...
You might be getting tripped up by the incremental nature of EMMA instrumentation. When debugging an EMMA-enabled build, try either a clean recompile and/or delete all instrumentation output directories and all .em/.ec files by hand to reset EMMA to a clean state.

This may be useful for future readers:
When you instrument the JARs, you may see EMMA listing some of the classes with the message "Class compiled without debug mode". If you see these messages while instrumenting, the line % coverage will not be generated for those classes. To overcome this you either need to compile those classes in debug mode or consider excluding them if they are not required. Usually the classes with the above-mentioned message are third-party classes.
If you don't see the "Class compiled without debug mode" message while instrumenting, then you should see line coverage in your report.
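For illustration of both options, the debug-mode part with an Ant build looks roughly like this (paths are placeholders):
<javac srcdir="src" destdir="build/classes" debug="true" debuglevel="lines,vars,source"/>
With plain javac the equivalent is the -g flag. For the exclusion option, EMMA's instrumentation filters (the -ix option of emma instr) can leave out third-party packages; check the EMMA coverage-filter documentation for the exact pattern syntax.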

Related

What is the grey indicator in a unit test case?

When using XCTExpectFailure and running the test cases (Cmd + U), the Test navigator shows a grey cross (x) for that test, while the other test cases show up in green.
What is the grey indicator (x) in test cases?
Do I need to do anything here?
Xcode has four indicators for the result of a test.
Green Tick – The test passed
Red Cross – The test failed
Gray Cross – The test failed, but it was expected to
Xcode will show you this only if you already declared that a test was expected to fail using the XCTExpectFailure API.
It's Xcode's way of telling you "hey, this test failed, but I'm not going to report it as a failure because you told me it was expected." This is useful to avoid blocking a CI build when there's an expected failure.
I would encourage you to use XCTExpectFailure parsimoniously. It's useful to avoid being blocked by a test failure that is unrelated to your work, but you should always take the time to go back and fix the failure; don't leave the XCTExpectFailure call in your test suite for long.
Gray Arrow – The test was skipped
Xcode will show this when a test was skipped using XCTSkipIf or XCTSkipUnless.
Similarly to an expected failure, you don't need to do anything about this, because either you or someone else on your team added the API call to skip the test, meaning the behavior is expected. Skipping tests is useful when they depend on certain runtime capabilities that might not be available on the device or Simulator executing a test run (see the short XCTest sketch below).
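To make the last two indicators concrete, here is a minimal XCTest sketch; the bug reference, the environment check, and the deliberately failing assertion are all made up for illustration:
import XCTest

final class IndicatorExamplesTests: XCTestCase {

    // Shows the gray cross: the failure is declared as expected up front.
    func testKnownBug() {
        XCTExpectFailure("FB-1234: known rounding bug, fix in progress") // hypothetical ticket reference
        XCTAssertEqual(1 + 1, 3) // fails, but is reported as an expected failure
    }

    // Shows the gray arrow: the test is skipped at runtime.
    func testCameraCapture() throws {
        try XCTSkipIf(ProcessInfo.processInfo.environment["SIMULATOR_DEVICE_NAME"] != nil,
                      "Camera is not available on the Simulator")
        // ...exercise the camera here...
    }
}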
A grey sign can also appear when a performance test runs successfully; you can click on it to check the performance results in the left bar.

Troubleshooting Gradle Performance Issue

Recently our build times have been fluctuating wildly. Our baseline was between 1-2 minutes, but now it sometimes takes up to 20 minutes. I have been trying to track down the cause, but am having some trouble. It feels like the changes that trigger fluctuations in the build time are completely arbitrary, or at least I cannot find the underlying connection between them.
Here is what the current build time looks like:
Now this obviously looks like the issue lies in task execution time, so let's look at that in more detail:
This indicates that the test task for the umlgen-transformer project is the culprit. The only difference between this project and the one that built in ~1 min is one tiny XML file. That XML file is read in by each of the unit tests as input. Here is what a typical test looks like:
@Test
def void testComponentNamesUpdated() {
    loadModel("/testData/ProducerConsumer.uml")
    val trafo = new ComponentTransformation
    trafo.execute(model)
    val componentNames = model.allOwnedElements.filter(Component).map[name]
    assertThat(componentNames).contains(#{"ProducerComponent", "ConsumerComponent"})
}
(The .uml file is just a UML model stored as XML.) There are several reasons why I don't think this is the underlying cause of our increase in build times:
1) the XML file is incredibly small, ~200 KB
2) the behavior in the unit tests is extremely simple and executes quite fast
3) when the unit tests are run through Eclipse they execute in seconds
4) the build times of unrelated aspects, such as the time to compile Kotlin code, have also increased significantly
5) this is not the only project where we are having this issue.
What I have tried:
Setting org.gradle.parallel = true (see the gradle.properties sketch after this list)
Increasing the memory of the Gradle daemon to 3 GB
Updating to Gradle 5
Clearing all Gradle caches
Building without the Gradle daemon
Simplifying the build.gradle scripts. I have removed several plugins and tasks to rule them out as possible causes.
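For reference, the first two items above correspond to entries like these in gradle.properties (only a sketch of the settings already tried, not a fix):
# gradle.properties
org.gradle.parallel=true
# Gradle daemon JVM memory; 3g matches the value mentioned above
org.gradle.jvmargs=-Xmx3g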
I have only been using Gradle for a couple months now, so I am hoping to get feedback on more conclusive ways to find out what gradle is doing (or to find some way to cross it off the list of possible causes). Please let me know if any additional information would be helpful.

Understanding SonarQube C code coverage measures

I have a SonarQube 5.6 installation, using the C/C++ plugin 3.12 to analyse our project. I have generated coverage results (gcov), but so far only for one of the application's C files. Its coverage is 98.3%.
When analysing the whole project, that application's coverage results get 'imported' and I can trace them in the web interface.
On the top-level Code page, the folder containing that file then shows 98.3%, which in my view is not correct, since no coverage is available yet for any of the other C files. I've tried to show that in the following series of snapshots:
(1) Top-level Code Tree:
(2) Going down the 'Implementation' tree:
(3) Going down the 'Implementation/ComponentTemplate' tree:
(4) Going down the 'Implementation/ComponentTemplate/code' tree:
EXMPL.c has only 113 lines of code (snapshot 4). Compared to the total lines of code of 'Implementation', 61k (snapshot 4), that is only about 0.2%.
Showing 98.3% coverage for 'Implementation' in (1), based on EXMPL.c alone, is therefore wrong!
My project consists of several applications; EXMPL is one - the smallest one - of all the applications within the project. So I have to produce separate coverage results for each application and 'import' them separately into Sonar. The coverage result files are therefore located in different folders.
Maybe that project structure or the 'incomplete import' of coverage results is the cause of the 'wrong' coverage measures, but so far I have not found any useful information on how Sonar handles the provided gcov coverage measures.
Any help or information will be appreciated.
Thanks
Your second guess is right: the incomplete import of coverage results is what's skewing the numbers. Lines that aren't included in the coverage report aren't included in the coverage calculations. Since the current coverage report includes only one file, which is 98.3% covered, all the numbers are based on that file alone.
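To make the arithmetic concrete (the covered-line count below is back-calculated from the 113 lines of code and 98.3% in the question, so it is an assumption): folder coverage is computed only over lines that appear in the imported report, i.e. roughly 111 covered / 113 coverable ≈ 98.2%, and not 111 / 61,000 ≈ 0.2%, because the other files under 'Implementation' contribute nothing to the denominator until their gcov results are imported as well.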

How do I find which test Karma is skipping?

Karma has started skipping a test from my Jasmine test suite:
Chrome 45.0.2454 (Windows 7 0.0.0): Executed 74 of 75 (skipped 1) SUCCESS (0.163 secs / 0.138 secs)
However, I have no idea why it's doing this. I'm not trying to skip any tests. How do I find out which test is being skipped?
I've searched to see if ddescribe/iit/xit are being used, and they're not.
I'm running Karma 0.13.10 on Windows.
The ddescribe and iit functions are used for focusing on specific suites/tests, not for skipping them. The xit function is used for skipping a specific test, and the xdescribe function is used for skipping suites. From the looks of what you have described, you have a suite with just one test in it that is being skipped. Search your test code for xdescribe. If that doesn't turn it up, choose half of your files and remove them from the config. If you still get the skip, look in that half; otherwise look in the other half. Continue splitting the list in half and removing files from the config until you have isolated the one file that has the skip in it. Then search that file. It has to be in there somewhere.
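For reference, this is the kind of construct to look for (the suite and spec names are made up):
// xdescribe marks every spec inside the suite as skipped
xdescribe('shopping cart', function () {
  it('adds items to the cart', function () {
    // this spec is reported as skipped, not run
  });
});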
Yet-to-be-written tests (without a function body) are marked as skipped by Karma.
You might have one in your suite.
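A pending spec like this (description made up) also shows up in the count as skipped, even though no x-function is involved:
// no function body, so Jasmine treats the spec as pending and Karma reports it as skipped
it('should recalculate totals when an item is removed');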
If you use karma-spec-reporter, you can specify in your karma.conf.js which outputs to suppress or show:
specReporter: {
suppressSkipped: false
},
Make sure your tests have at least one expect statement, otherwise they will appear as "skipped".

Is there a good way to debug order dependent test failures in RSpec (RSpec2)?

Too often people write tests that don't clean up after themselves when they mess with state. Often this doesn't matter, since objects tend to be torn down and recreated for most tests, but there are some unfortunate cases where there's global state on objects that persists for the entire test run, and when you run tests that depend on and modify that global state in a certain order, they fail.
These tests, and possibly the implementations, obviously need to be fixed, but it's a pain to try to figure out what's causing the failure when the tests that affect each other may not be the only things in the full test suite. It's especially difficult when it's not initially clear that the failures are order-dependent and they fail intermittently or on one machine but not another. For example:
rspec test1_spec.rb test2_spec.rb # failures in test2
rspec test2_spec.rb test1_spec.rb # no failures
In RSpec 1 there were some options (--reverse, --loadby) for ordering test runs, but those have disappeared in RSpec 2 and were only minimally helpful in debugging these issues anyway.
I'm not sure of the ordering that either RSpec 1 or RSpec 2 uses by default, but one custom-designed test suite I used in the past randomly ordered the tests on every run so that these failures came to light more quickly. In the test output, the seed that was used to determine the ordering was printed with the results, so it was easy to reproduce the failures even if you had to do some work to narrow down the individual tests in the suite that were causing them. There were then options that allowed you to start and stop at any given test file in the order, which made it easy to do a binary search to find the problem tests.
I have not found any such utilities in RSpec, so I'm asking here: What are some good ways people have found to debug these types of order dependent test failures?
There is now a --bisect flag that will find the minimum set of tests to run to reproduce the failure. Try:
$ rspec --bisect=verbose
It might also be useful to use the --fail-fast flag with it.
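If the failure only shows up under a particular random ordering, you can pass the seed from that run to bisect as well (1234 is just a placeholder for whatever seed your failing run printed):
$ rspec --seed 1234 --bisect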
I wouldn't say I have a good answer, and I'd love to hear some better solutions than mine. That said...
The only real technique I have for debugging these issues is adding a global hook (via spec_helper) for printing some aspect of database state (my usual culprit) before and after each test (conditioned to check whether I care or not). A recent example was adding something like this to my spec_helper.rb:
Spec::Runner.configure do |config|
  config.before(:each) do
    $label_count = Label.count
  end

  config.after(:each) do
    label_diff = Label.count - $label_count
    $label_count = Label.count
    puts "#{self.class.description} #{description} altered label count by #{label_diff}" if label_diff != 0
  end
end
We have a single test in our continuous integration setup that globs the spec/ directory of a Rails app and runs each spec file against each of the others.
It takes a lot of time, but we found 5 or 6 dependencies that way.
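Something along these lines is a rough sketch of that idea (the paths and the pairwise strategy are assumptions, not the poster's actual script):
# run every pair of spec files together and flag combinations that fail
Dir.glob("spec/**/*_spec.rb").combination(2).each do |a, b|
  ok = system("bundle exec rspec #{a} #{b}")
  puts "possible order dependency: #{a} -> #{b}" unless ok
end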
Here is a quick and dirty script I wrote to debug order-dependent failures - https://gist.github.com/biomancer/ddf59bd841dbf0c448f7
It consists of 2 parts.
The first part is intended to run the rspec suite multiple times with different seeds and dump the results to rspec_[ok|fail]_[seed].txt files in the current directory, to gather stats.
The second part iterates through all these files, extracts the test group names and analyzes their position relative to the affected test, to make assumptions about dependencies and form some 'risk' groups - safe, unsafe, etc. The script output explains the other details and group meanings.
This script will work correctly only for simple dependencies, and only if the affected test fails for some seeds and passes for others, but I think it's still better than nothing.
In my case it was a complex dependency whose effect could be cancelled by another test, but this script helped me to get directions after running its analyze part multiple times on different sets of dumps, specifically only on the failed ones (I just moved the 'ok' dumps out of the current directory).
I found my own question 4 years later, and now RSpec has an --order flag that lets you set random ordering; if you get order-dependent failures, you can reproduce the order with --seed 123, where the seed is printed out on every spec run.
https://www.relishapp.com/rspec/rspec-core/v/2-13/docs/command-line/order-new-in-rspec-core-2-8
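In practice that looks something like this (52703 is just a placeholder for whatever seed your failing run printed):
$ rspec --order random    # the output ends with something like "Randomized with seed 52703"
$ rspec --seed 52703      # re-runs the suite in that same order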
It's most likely some state persisting between tests, so make sure your database and any other data stores (including class variables and globals) are reset after every test. The database_cleaner gem might help.
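A minimal database_cleaner setup in spec_helper.rb looks roughly like this (a sketch of the commonly documented pattern, not tailored to the asker's app):
RSpec.configure do |config|
  config.before(:suite) do
    DatabaseCleaner.strategy = :transaction
    DatabaseCleaner.clean_with(:truncation)   # start from a known-empty database
  end

  config.before(:each) { DatabaseCleaner.start }
  config.after(:each)  { DatabaseCleaner.clean }
end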
Rspec Search and Destroy is meant to help with this problem: https://github.com/shepmaster/rspec-search-and-destroy
