Test Report splitting for specific testcase groups belonging to the same test node - capl

Is it possible to execute all the test cases of a test node in CANoe and generate multiple reports instead of a single HTML report containing the whole result?
I would like to get one report for each test group in the same node.

Yes.
In the configuration dialog of your test modules, test configurations, and test units, you can select an XSLT stylesheet that is used to generate the HTML report from the XML result file.
Several XSLT stylesheets that create multi-page output are already included. If none of them fits your use case exactly, you can write your own stylesheet.
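If you want to prototype a custom stylesheet outside of CANoe, you can apply it to the XML result file with any XSLT processor. A minimal sketch using Python's lxml (the file names here are placeholders; the stylesheet itself is the part you would write):

from lxml import etree

# Apply a custom stylesheet to the CANoe XML result file
# ("TestReport.xml" and "my_report.xslt" are placeholder names).
xml_result = etree.parse("TestReport.xml")
transform = etree.XSLT(etree.parse("my_report.xslt"))
html = str(transform(xml_result))
with open("TestReport.html", "w", encoding="utf-8") as out:
    out.write(html)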

Related

Merge JMeter csv results and plot a graph

I executed load tests using JMeter for two different scenarios and I have the Simple Data Writer output CSV files for each. How do I merge these two results to see how the performance varies from one scenario to another?
I downloaded the MergeResults plugin; after adding the input files and clicking Merge, the merged output CSV file gets created, but there is no graphical representation that would aid comparing the results.
I checked the instructions given in the link, but I don't know what I am missing in order to plot the graph using the merged CSV file. Can someone please help?
MergeResults generates a CSV file; if you want to see a graphical representation (chart), you need to open the file in the Listener of your choice.
Alternatively, if you need to generate a chart as a PNG image, you can use the JMeter Plugins Command Line Graph Plotting Tool, for example:
JMeterPluginsCMD --generate-png responsetimes.png --input-jtl your_merged_file.csv --plugin-type ResponseTimesOverTime
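If you prefer to script the comparison yourself, here is a minimal sketch using pandas and matplotlib (assuming the merged file keeps the standard JMeter columns timeStamp, elapsed, and label, and that the label distinguishes the two scenarios):

import pandas as pd
import matplotlib.pyplot as plt

# Load the merged Simple Data Writer output (file name as in the command above).
df = pd.read_csv("your_merged_file.csv")

# JMeter timestamps are milliseconds since the epoch.
df["time"] = pd.to_datetime(df["timeStamp"], unit="ms")

# One line per label, showing response time over time for each scenario.
for label, group in df.groupby("label"):
    plt.plot(group["time"], group["elapsed"], label=label)

plt.xlabel("Time")
plt.ylabel("Response time (ms)")
plt.legend()
plt.savefig("responsetimes.png")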
You can compare JMeter test results easily with https://sense.blazemeter.com:
1. Sign in to https://sense.blazemeter.com and create a project.
2. Upload the CSV files.
3. Select a file.
4. Click the Add to compare button at the bottom.
5. Select the second file and click the Compare button.
You can compare the test results in tabular form and in graphs.
Note: it is possible to compare multiple test result files.

Sphinx docs including unit test output

This is a how-to/best-practice question.
I have a code base with a suite of unit tests run with pytest.
I have a set of *.rst files which provide an explanation of each test, along with a table of results and images of some mathematical plots.
Each time the pytest suite runs, it dynamically updates the *.rst files with the results of the latest test run, updating numerical values, time-stamping the tests, etc.
I would like to integrate this with the project docs. I could:
1. build these .rst files separately with sphinx-build whenever I want to view the test results [this seems bad, since it's labor-intensive and not automated]
2. tell Sphinx to render these pages separately and include them in the project docs [better, but I'm not sure how to configure this]
3. have a separate set of Sphinx docs for the test results which I can build after each run of the test suite
Which approach (or another approach) is most effective? Is there a best practice for doing this type of thing?
Maybe take a look at Sphinx-Test-Reports, which reads all information from JUnit-based XML files (pytest supports generating these) and produces the output during the normal Sphinx build phase.
So you are free to add custom information around the test results.
Example from the project page:
.. test-report:: My Report
   :id: REPORT
   :file: ../tests/data/pytest_sphinx_data_short.xml
So the complete answer to your question: take none of the given approaches and let a Sphinx extension do it at build time.
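For completeness, a minimal configuration sketch; the extension module name below is how Sphinx-Test-Reports is usually registered, but verify it against the version you install:

# conf.py
extensions = [
    "sphinxcontrib.test_reports",  # Sphinx-Test-Reports (check the exact name in its docs)
]

# Produce the JUnit XML that the test-report directive reads, e.g.:
#   pytest --junitxml=tests/data/pytest_sphinx_data_short.xml
# then build the docs as usual with sphinx-build / make html.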

How to show text file results for code coverage in SonarQube?

I have written unit test cases for my adapter code. The results are in a text file containing the module name and whether the unit test passed or failed, marked with the strings SUCCESS and FAILURE. How can I use this text file to show code coverage in a SonarQube analysis? Please help me with this.
I want to set covered to true for the entire folder level, not per line number. How do I specify that in the generic XML format? – Umap
Your best bet is to try to convert this into the Generic Test Data format. However, that format is designed to take coverage data about lines, not modules, so you may face difficulties with your data granularity.
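As a rough illustration, here is a Python sketch that converts such a text file into the generic test execution report (the closest match for per-module pass/fail data). The input line format, source paths, and file names are assumptions; the generated report would be imported via the sonar.testExecutionReportPaths analysis property:

import xml.etree.ElementTree as ET

# Assumed input: one line per module, e.g. "my_module SUCCESS" or "my_module FAILURE".
root = ET.Element("testExecutions", version="1")

with open("unit_test_results.txt") as fh:  # input file name is a placeholder
    for line in fh:
        if not line.strip():
            continue
        module, status = line.split()
        # Paths must point at real test files in your project;
        # "tests/<module>.c" is only an illustration.
        file_el = ET.SubElement(root, "file", path="tests/" + module + ".c")
        case = ET.SubElement(file_el, "testCase", name=module, duration="0")
        if status.upper() != "SUCCESS":
            ET.SubElement(case, "failure", message="unit test failed")

ET.ElementTree(root).write("test_execution_report.xml",
                           encoding="utf-8", xml_declaration=True)

Note that this imports test results rather than coverage; actual coverage figures still need line-level data from a coverage tool.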

How to write any variable created in a test to the jasmine-reporters output file using Protractor?

In parts of my test, I have some variables I dynamically create that simply capture some strings. I have jasmine-reporters set up and working, and writing to an output.xml file. How do I get any variables I create in my tests to write to that output file?
For example, if I do a search in my test, the results display the number of lines in a string as part of what's returned. I do a getText() on that and store it in a variable. I have figured out how to write it to the console, but it would be great to write it to the output file instead.
Yes, as @bloveridge mentioned, Jasmine does not allow you to add data from the tests into the report, and you should not try to do so, as it's not the concern of the test reporter. If you want to use Protractor to collect some kind of information while testing, you should write it into your own (i.e. separate) file (http://nodejs.org/api/fs.html) from your test.

Directly embedded subreport in JasperReports

I am so close to having this work. I am trying to directly embed one Jasper subreport into the main report's XML. You'd think this would be easy, but I can't find a single example of doing it. Everyone seems to use files or resources or whatever. I have one report working straight from a string, and I want it to contain its subreport.
Anyone? Syntax? Thanks!
The only way I know of to do this with JasperReports is to use a separate .jrxml file for the subreport, and include it in the main report using the subreport command.
Another option you have for any embedded reports is to use subdatasets, but as far as I know they're only useful for graphs.
As it sounds like you control the code surrounding the generation of the report, you could come up with a simple format to define multiple reports in the one string, and then have your code extract each report at runtime.
When we've needed to deal with a single file but have subreports for a JasperReport, we've used zip files: simply zip up the main report and all its necessary subreports, and then unzip them into a temporary directory when needed (all in code, of course).
