This is a how-to/best-practice question.
I have a code base with a suite of unit tests run with pytest.
I have a set of *.rst files which provide an explanation of each test, along with a table of results and images of some mathematical plots.
Each time the pytest suite runs, it dynamically updates the *.rst files with the results of the latest test data, updating numerical values, time-stamping the tests, etc.
I would like to integrate this with the project docs. I could:
1. Build these rst files separately with sphinx-build whenever I want to view the test results [this seems bad, since it's labor-intensive and not automated]
2. Tell Sphinx to render these pages separately and include them in the project docs [better, but I'm not sure how to configure this]
3. Have a separate set of Sphinx docs for the test results which I can build after each run of the test suite
Which approach (or another approach) is most effective? Is there a best practice for doing this type of thing?
Maybe take a look at Sphinx-Test-Reports, which reads all its information from JUnit-based XML files (pytest can produce these) and generates the output during the normal Sphinx build phase.
So you are free to add custom information around the test results.
Example from the webpage:
.. test-report:: My Report
   :id: REPORT
   :file: ../tests/data/pytest_sphinx_data_short.xml
So the complete answer to your question: take none of the given approaches and let a Sphinx extension do it at build time.
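A minimal sketch of the wiring, assuming the extension registers under the module name below (taken from the Sphinx-Test-Reports docs as I remember them; verify it for your installed version) and with illustrative paths:

# conf.py -- enable the extension (check the Sphinx-Test-Reports docs for the exact module name)
extensions = [
    "sphinxcontrib.test_reports",
]

Then each docs build only needs the JUnit XML to exist at the path the directive's :file: option points to:

pytest --junitxml=tests/data/pytest_sphinx_data_short.xml
sphinx-build -b html docs docs/_build/html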
Related
I have written unit test cases for my adapter code. The results are in a text file containing the module name and whether each unit test is a success or failure, marked with the strings SUCCESS and FAILURE. How can I use this text file to show code coverage in SonarQube analysis? Please help me with this.
I want to set covered to true at the folder level rather than per line number. How do I specify that in the generic XML format? – Umap
Your best bet is to try to convert the file into the Generic Test Data format. However, that format is designed to take coverage data about lines, not modules, so you may face difficulties with your data granularity.
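For reference, a sketch of the generic coverage flavour of that format (file paths and line numbers are illustrative; every line of every module needs its own entry, which is where the granularity problem bites):

<coverage version="1">
  <file path="src/adapter/module_a.py">
    <lineToCover lineNumber="1" covered="true"/>
    <lineToCover lineNumber="2" covered="true"/>
  </file>
  <file path="src/adapter/module_b.py">
    <lineToCover lineNumber="1" covered="false"/>
  </file>
</coverage>

The report is then handed to the scanner via the sonar.coverageReportPaths analysis property (as far as I recall; check the Generic Test Data docs for your SonarQube version).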
As I learned from the DevGuide, testing ReSharper plugins works as follows:
1. The plugin is loaded and a test input file is passed to it
2. The plugin performs its actions on the passed file
3. ReSharper's test environment writes the plugin's results to a .tmp file in a special format that depends on the type of functionality tested (for example, if we test completion, the .tmp file will contain the list of generated completion items)
4. ReSharper's test environment compares the .tmp file with a .gold file to decide whether the test failed or succeeded
But I need the following scenario. The first two steps are the same as the above ones, then:
I write code that obtains the results of the plugin's actions and checks whether they are what I expect, so I can make the test fail if needed.
How can I achieve this?
I need this because I have code that uses the AST generated by ReSharper to build some graphs, and I want to test whether the graphs are built correctly.
Yes, you can do this. You need to create your own test base class, instead of using one of the provided ones.
There is a hierarchy of base classes, each adding extra functionality. Usually, you'll derive from something like QuickFixAvailabilityTestBase or QuickFixTestBase, which add the functionality for testing quick fixes. These are the classes that will do something and write the output to a .tmp file that is then compared to the .gold file.
These classes themselves derive from something like BaseTestWithSingleProject, which provides the functionality to set up an in-memory solution and project populated with files you specify in your test, or BaseTestWithTextControl, which also gives you a text control for the file you're testing. If you derive from one of these classes directly (or via your own custom base class), you can perform the action you need for the actual test and either assert something in memory or write the appropriate text to the .tmp file to compare against the .gold file.
You should override the DoTest method. This will give you an IProject that is already set up, and you can do whatever you need to in order to test your extension's functionality. You can use project.Solution.GetComponent<> to get at any shell or solution component, and use the ExecuteWithGold method to execute something, write to the .tmp file and have ReSharper compare to the .gold file for you.
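A rough sketch of what that can look like (base-class and method signatures differ between ReSharper SDK versions, and IGraphBuilder/BuildGraph/Dump are hypothetical stand-ins for your own code):

[TestFixture]
public class GraphBuilderTest : BaseTestWithSingleProject
{
    // Folder under the test data directory holding the input files for these tests.
    protected override string RelativeTestDataPath
    {
        get { return @"GraphBuilder"; }
    }

    protected override void DoTest(IProject project)
    {
        // Resolve your own solution component that builds graphs from the AST (hypothetical);
        // the solution accessor name may differ between SDK versions.
        var builder = project.Solution.GetComponent<IGraphBuilder>();
        var graph = builder.BuildGraph(project);

        // Option 1: assert directly in memory (NUnit assertions).
        Assert.IsTrue(graph.IsConnected);

        // Option 2: dump the graph to the .tmp file and let ReSharper diff it against the .gold file.
        ExecuteWithGold(writer => writer.Write(graph.Dump()));
    }

    [Test]
    public void TestSimpleGraph()
    {
        // Helper name varies by SDK version (DoOneTest / DoTestFiles / etc.).
        DoOneTest("simpleGraph");
    }
}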
I have two Cucumber features (DeleteAccountingYear.feature and AddAccountingYear.feature).
How can I make DeleteAccountingYear.feature run before AddAccountingYear.feature?
I concur with @alannichols about tests being independent of each other. That's a fundamental aspect of an automation suite; otherwise we will end up with an unmaintainable, flaky test suite.
Needing to run a certain feature file before another feature looks to me like a test design issue.
Cucumber provides a few options to solve issues like this:
a) Is DeleteAccountingYear.feature really a feature of its own? If not, you can use the Cucumber Background: option. The steps provided in the background will be run before each scenario in that feature file. So your AddAccountingYear.feature will look like this:
Feature: AddingAccountingYear

  Background:
    Given I have deleted accounting year

  Scenario: add new accounting year
    Then I add new account year
b) If DeleteAccountingYear.feature is indeed a feature of its own and needs to be in its own feature file, then you can use setup and teardown functions. In Cucumber this can be achieved using hooks. You can tag AddAccountingYear.feature with a certain tag, say @doAfterDeleteAccountYear. Now from the Before hook you can do the required setup for this specific tag. The Before hook (for Ruby) will look like this:
Before('@doAfterDeleteAccountYear') do
  # Call the function to delete the accounting year
end
If deleting the accounting year is written as a function, then the only thing required is to call this method in the Before hook. This way the code stays DRY as well, as sketched below.
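For example, a shared helper keeps the Background step and the Before hook on the same code path (file and method names here are illustrative):

# features/support/accounting_year_helpers.rb
module AccountingYearHelpers
  def delete_accounting_year
    # call whatever application/API code removes the accounting year
  end
end
World(AccountingYearHelpers)

# features/step_definitions/accounting_year_steps.rb
Given('I have deleted accounting year') do
  delete_accounting_year
end

# features/support/hooks.rb
Before('@doAfterDeleteAccountYear') do
  delete_accounting_year
end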
If these options don't work for you, another way of forcing the order of execution is a batch/shell script. You can add an individual cucumber command for each feature, in the order you would like them to execute, and then just run the script. The downside is that a separate report will be generated for each feature file, and it is something I wouldn't recommend for the reasons mentioned above.
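For what it's worth, such a wrapper script is tiny (feature paths are illustrative):

#!/bin/sh
# Run the features in an explicit order; each invocation produces its own report,
# which is the downside mentioned above.
cucumber features/DeleteAccountingYear.feature
cucumber features/AddAccountingYear.feature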
From Justin Ko's website - https://jkotests.wordpress.com/2013/08/22/specify-execution-order-of-cucumber-features/ the run order is determined in the following way:
Alphabetically by feature file directory
Alphabetically by feature file name
Order of scenarios within the feature file
So to run one feature before the other you could change the name of the feature file or put it in a separate feature folder with a name that comes alphabetically first.
However, it is good practice to make all of your tests independent of one another. One of the easiest ways to do this is to use mocks to create your data (i.e. the data you want to delete), but that isn't always an option. Another way would be to create the data you want to delete in the setup of the delete tests. The downside to doing this is that it duplicates effort, but it won't matter what order the tests run in. This may not be an issue now, but with a larger test suite and/or multiple coders using the test repo it may be difficult to maintain test ordering based solely on alphabetical sorting.
Another option would be to combine the add and delete tests. This goes against the general rule that one test should test one thing, but it is often a pragmatic approach if your tests take a long time to run and adding the data-creation step to the setup for the delete tests would add a lot of time to your test suite.
Edit: After reading that link to Justin Ko's site, it turns out you can specify the features when you run cucumber, and it will run them in the order you give. For any whose order you don't care about, you can just put the whole feature folder at the end, and cucumber will run through them, skipping any that have already been run. Copy-paste example from the link above:
cucumber features\folder2\another.feature features\folder1\some.feature features
We implemented a Magento module, https://github.com/firegento/firegento-pdf/, and I plan to write tests for it.
The problem is: the extension generates PDFs.
Is there any framework, or anything else, to test PDFs? It would be totally fine if I could just check for text in the PDF; I don't want to check correct placement.
Any ideas?
This looks promising but feels oversized: http://webcheatsheet.com/php/reading_clean_text_from_pdf.php
For a similar module I use PdfBox, a Java-based command line utility that extracts text from a PDF and optionally converts it to HTML: http://pdfbox.apache.org/commandline/#extractText
To use it within PHPUnit tests, I wrote a PHP interface for the relevant PdfBox methods: https://github.com/schmengler/PdfBox
Example
use SGH\PdfBox;
//$pdf = GENERATED_PDF;
$converter = new PdfBox;
$converter->setPathToPdfBox('/usr/bin/pdfbox-app-1.7.0.jar');
$text = $converter->textFromPdfStream($pdf);
Further reading: Unit Test Generated PDFs with PHPUnit and PDFBox
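To give an idea of how the converter above reads inside an actual test, here is a sketch (the invoice-generation helper and the expected invoice number are hypothetical; construction of the converter mirrors the snippet above):

use SGH\PdfBox;

class InvoicePdfTest extends \PHPUnit\Framework\TestCase
{
    public function testPdfContainsInvoiceNumber()
    {
        $pdf = $this->generateInvoicePdf(); // hypothetical helper returning the raw PDF string
        $converter = new PdfBox;
        $converter->setPathToPdfBox('/usr/bin/pdfbox-app-1.7.0.jar');
        $text = $converter->textFromPdfStream($pdf);
        // assertContains() on older PHPUnit versions
        $this->assertStringContainsString('Invoice #100000001', $text);
    }
}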
Maybe you could use Inkscape to convert it into SVG and assert on some SVG nodes.
That would do if you only want to check the text or some simple formatting.
$ inkscape -f invoice.pdf --export-plain-svg=thepdf.svg
For the correct position you need to be a bit fuzzy, though.
Nevertheless the PDF source can be plain enough to be checked with simple strpos().
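A rough sketch of that check inside a PHPUnit test method (file names and the expected string are illustrative):

// Convert the generated PDF to plain SVG, then do a simple text check.
$svg = tempnam(sys_get_temp_dir(), 'pdf') . '.svg';
exec(sprintf(
    'inkscape -f %s --export-plain-svg=%s',
    escapeshellarg('invoice.pdf'),
    escapeshellarg($svg)
));
$this->assertTrue(strpos(file_get_contents($svg), 'Invoice #100000001') !== false);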
You have to resign yourself to testing "we sent the right commands to Magento". Testing the output PDF would cause fragility.
Mock the PDF-writing library, and test that your code called the library correctly. This has the added benefit of speed, but requires a little more discipline. If a PDF output fails a manual inspection, you MUST fix it test-first, to keep your mocks honest.
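Sketched with PHPUnit mocks and a hypothetical PdfWriter interface standing in for whatever your module actually calls:

// Hypothetical seam around the real PDF library.
interface PdfWriter
{
    public function drawText($text, $x, $y);
}

class InvoiceRendererTest extends \PHPUnit\Framework\TestCase
{
    public function testRendererEmitsInvoiceNumber()
    {
        $writer = $this->createMock(PdfWriter::class);
        $writer->expects($this->once())
            ->method('drawText')
            ->with($this->stringContains('Invoice #100000001'), $this->anything(), $this->anything());

        // InvoiceRenderer is a hypothetical wrapper around your PDF generation code.
        $renderer = new InvoiceRenderer($writer);
        $renderer->render();
    }
}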
What kind of (small) tool can we use to render test results graphically?
Actually, I would like to display a graphic instead of this text output (for example):
Finished in 3.44 seconds
5 examples, 0 failures
Maybe a JS graph, for instance, where each failing test could be shown in red...
Thanks.
If you use a Continuous Integration tool such as Hudson you can chart failures over time.
Well, RSpec comes with an HTML formatter. You could just pass the option on the command line, like:
rspec -f html -o index.html
You could also inherit from an RSpec formatter to write your own.
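For instance, a bare-bones custom formatter against the RSpec 3 formatter API (RSpec 2 registered formatters differently) might look like this:

# red_green_formatter.rb -- minimal sketch; prints each example in green or red HTML
class RedGreenFormatter
  RSpec::Core::Formatters.register self, :example_passed, :example_failed

  def initialize(output)
    @output = output
  end

  def example_passed(notification)
    @output << "<p style='color:green'>#{notification.example.description}</p>\n"
  end

  def example_failed(notification)
    @output << "<p style='color:red'>#{notification.example.description}</p>\n"
  end
end

and it would be used with something like:

rspec --require ./red_green_formatter.rb --format RedGreenFormatter -o results.html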
Maybe you are looking for fuubar; it's not JS, but it shows a nice progress bar in colors. I also use it to get a quick overview of passing and failing tests.
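If you go the fuubar route, it is just a gem plus a formatter switch (names as I recall them from the fuubar README; double-check against the gem's docs):

gem install fuubar
rspec --format Fuubar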