I am trying to get code coverage for REST API endpoints written in Node, but my functional (not unit) test cases calling these Node endpoints are written in Ruby and Cucumber. Which approach should I use in this situation?
Thanks, appreciate your help!!
As long as you run an instrumented version of the code, it shouldn't matter how it is exercised. The coverage logging will still run.
My solution would be to
Make sure that the code under test is properly instrumented, using Cobertura or a similar tool
Run the test suite
Collect all coverage files and generate a coverage report
This blog post describes a solution in which a multi-module Maven project is measured. It is somewhat related to your problem.
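Cobertura itself is a JVM instrumenter; for a Node service the analogous tool is Istanbul's nyc. A minimal sketch of the three steps above, assuming an entry point at server.js (the file name and reporter choices are placeholders, not part of the original answer):

    # 1. Install the nyc command-line wrapper for Istanbul
    npm install --save-dev nyc

    # 2. Start the API under instrumentation instead of plain `node server.js`
    npx nyc --reporter=lcov node server.js &

    # 3. Run the Ruby/Cucumber functional suite against the running service
    cucumber

    # Stop the server so nyc flushes its coverage data to .nyc_output,
    # then render a report from the collected files
    kill %1; wait
    npx nyc report --reporter=html

The key point is that the instrumentation lives in the Node process, so it makes no difference that the traffic driving it comes from Ruby.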
Generally we measure code coverage with unit tests using JaCoCo; however, my dev manager is asking whether we can use the existing functional automation tests to measure the code coverage of our Spring Boot application. My functional tests use TestNG/REST Assured and live in a separate code repo, not in the development source code repo.
Please suggest how this can be done, if it is really possible. I'd appreciate it if anyone can guide me and share links for reference.
Thanks.
Scenario
Compile code & run unit tests
Execute sonar scan on code
Once the scan completes, code coverage for unit tests + other analysis gets stored in Sonar (let's call it report1)
All good till now. Once the application is deployed, we run some integration tests, and manual tests are also performed.
All of these tests also generate code coverage.
Now we have to merge the unit test, integration test, and manual test code coverage and store it in Sonar under the same analysis report (basically, update report1)
Any thoughts/suggestions on the best practice or a solution for this in Sonar?
It's pretty simple. Don't run the SonarQube scan until you've produced all the data you want to integrate into the report.
However, I would also point out that it's generally not worthwhile to generate code coverage for anything but unit tests. If you can't reach it with a unit test, I don't see how you could reach it with any other test.
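To make that ordering concrete, here is a sketch using the JaCoCo agent and command-line tools (jacocoagent.jar, jacococli.jar, app.jar, and all paths are placeholders; the exact Sonar property depends on your SonarQube version):

    # 1. Unit tests: the jacoco maven plugin writes target/jacoco.exec
    mvn clean verify

    # 2. Deploy the app with the agent attached, then run the integration
    #    and manual tests; the agent writes it.exec when the JVM stops
    java -javaagent:jacocoagent.jar=destfile=it.exec -jar app.jar
    # ... integration tests + manual tests run here, then stop the app ...

    # 3. Merge all execution data into one file
    java -jar jacococli.jar merge target/jacoco.exec it.exec \
        --destfile merged.exec

    # 4. Only now produce the report and run the Sonar analysis
    java -jar jacococli.jar report merged.exec \
        --classfiles target/classes --xml jacoco-merged.xml
    mvn sonar:sonar \
        -Dsonar.coverage.jacoco.xmlReportPaths=jacoco-merged.xml

Manual tests are covered by the same mechanism: as long as the manual testing happens against the agent-instrumented deployment, the clicks count toward it.exec just like the automated calls do.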
I recently started working on creating jacoco reports for maven projects including unit and integration tests and they seem to work out correctly.
Now I have encountered a different scenario which I am not sure how to approach.
I have one workspace that contains the integration test cases (application A), but the source code does not live in the same workspace/code base. The code that actually runs when these integration tests are invoked is in a different workspace/code base (application B); the tests call it through REST API calls against localhost URLs, with a JBoss server started for application B so that the localhost context is up.
The aim is to invoke the integration tests from application A, which in turn exercise the code in application B, and to generate a JaCoCo report of the code coverage for application B.
I am not actually sure how to achieve this.
Can someone provide some input?
Thanks.
If I understand you correctly, you actually have 2 different processes in your scenario:
The "client" process that runs the integration tests, to which jacoco can easily be applied, but that's not what you need
The "server" process that runs the actual JBoss server and executes the actual code.
Client process contacts the server via HTTP.
In this case, I'm afraid jacoco won't be able to provide coverage for you if you're running the tests from maven/gradle, because jacoco instruments bytecode only within the JVM it is attached to, and here the code under test runs in the server's JVM. So you have to be "creative" here :)
I'll list some possible approaches here.
Disclaimer: I haven't tried them myself (I haven't worked with JBoss/Java EE), but maybe you'll be able to at least borrow some ideas
The first approach would be running the tests together with the application somehow, like it's done for example in Spring tests (I'm not sure whether JBoss provides similar capabilities).
The idea is simple:
You run the integration test, it runs JBoss "embedded" in the same JVM, and you can inject beans / EJB session beans into the test (like autowiring with Spring).
The advantage of such a method is that you'll be able to just use the jacoco maven plugin, and it will instrument everything for you.
I don't know how easy it will be to achieve this architecture technically. I know that recent JBoss versions support an embedded mode, so maybe you'll find this link to be a useful foundation.
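If you do get the server running embedded in the test JVM, no pom changes are strictly needed; the plugin's standard goals can be invoked directly (a sketch, assuming a plain Maven build):

    # prepare-agent wires the JaCoCo agent into the forked test JVM;
    # report renders target/site/jacoco from the recorded jacoco.exec
    mvn org.jacoco:jacoco-maven-plugin:prepare-agent verify \
        org.jacoco:jacoco-maven-plugin:report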
Another direction is to take a look at the Arquillian project. It has a jacoco extension that will probably help, but I've never tried it.
And the last approach I can think of is running the JBoss server with the jacoco agent directly, instead of relying on the build system to run jacoco for you.
The idea here is to stream the coverage results from the server into a file or TCP endpoint. So you run JBoss with -javaagent:[yourpath/]jacocoagent.jar and it streams the results wherever you need them. After the tests you gather these results and prepare a report. You can find more information about this approach here.
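As a sketch of this last approach, assuming jacocoagent.jar and jacococli.jar from the JaCoCo distribution and a standard JBoss/WildFly layout (the paths, port, and application B's directories are placeholders):

    # Start JBoss with the agent in tcpserver mode: coverage is kept in
    # memory and served on port 6300 instead of being written to a file
    export JAVA_OPTS="$JAVA_OPTS -javaagent:/path/to/jacocoagent.jar=output=tcpserver,address=*,port=6300"
    ./bin/standalone.sh &

    # ... run the integration tests in application A against localhost ...

    # Pull the execution data out of the still-running server
    java -jar jacococli.jar dump --address localhost --port 6300 \
        --destfile jacoco-it.exec

    # Build the HTML report against application B's compiled classes
    java -jar jacococli.jar report jacoco-it.exec \
        --classfiles /path/to/appB/target/classes \
        --sourcefiles /path/to/appB/src/main/java \
        --html jacoco-report

Note that the report step needs application B's class files, which is why it runs from (or points at) application B's workspace even though the tests live in application A.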
I am calling Sonar from my Jenkins job.
I noticed that the classes in my domain module all show 0% test coverage, even though the domain classes are used in other modules that have high test coverage. I am using Cobertura within Sonar to measure the coverage.
Can anyone offer any insight into how to get Sonar/Cobertura to recognise the test coverage for the classes in the domain module?
Thanks
Damien
By default, Sonar computes unit test coverage. So if your domain classes are not unit-tested in the module where they are defined, it is normal that you don't get unit test coverage for them.
You can have a look at how to add integration test coverage in Sonar: see https://github.com/SonarSource/sonar-examples/tree/master/projects/code-coverage/combined%20ut-it/combined-ut-it-multimodule-maven-jacoco
But it would be best to have "real" unit tests for your domain classes that test their logic in an isolated context.
Edit
Link to sonar-example-multiple-modules
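For reference, the linked example boils down to feeding Sonar two separate JaCoCo exec files in one analysis. A sketch with the legacy property names that era of the example used; recent SonarQube versions have replaced these with sonar.coverage.jacoco.xmlReportPaths:

    # Run unit and integration tests (producing jacoco.exec and
    # jacoco-it.exec), then push both to Sonar in a single analysis
    mvn clean verify sonar:sonar \
        -Dsonar.jacoco.reportPath=target/jacoco.exec \
        -Dsonar.jacoco.itReportPath=target/jacoco-it.exec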
I have some tests that depend on the success or failure of other tests. How can I define this dependency? I am using VS2010 MSTest and Selenium.
E.g.
If test1 fails, then don't run test5 and test6. Is this possible?
Unit tests should always be isolated and completely independent of anything else in order to run; otherwise they become fragile.
You could set up categories with MSTest to separate them into different logical groups.
A great book with more details is The Art of Unit Testing: http://artofunittesting.com
Roy (the author) also does a lot of public speaking, which is recorded online.
Cheers
Tests shouldn't have dependencies between them.
If you have dependencies, then running the tests in a different order, or in isolation, will cause them to fail sporadically; this can be very confusing for anyone else who runs the tests.
It's much better to define tests that set up their own data and assert something specific. You can use a mocking framework like Rhino Mocks to reduce the dependencies between modules of code by faking (mocking) areas that aren't relevant to your test. This is made much easier if you also use a dependency injection framework like Microsoft Unity, as your code will have many more seams where mocking can be applied.