Graphical visualization for Ruby testing

What kind of (small) tool can we use to render a graphical result from testing?
I would like to display a graphic instead of this plain test output, for example:
Finished in 3.44 seconds
5 examples, 0 failures
Maybe a JS graph, for instance, where each failing test could be shown in red...
Thanks.

If you use a Continuous Integration tool such as Hudson, you can chart failures over time.

Well, RSpec comes with an HTML formatter. You could just pass the option on the command line, like:
rspec -f html -o index.html
You could also inherit from RSpec's formatter base classes to write your own formatter.
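For instance, here is a minimal sketch of a custom formatter (assuming RSpec 3's formatter API; the class name, file name, and HTML output are illustrative):

# color_graph_formatter.rb
require 'rspec/core/formatters'

class ColorGraphFormatter
  # Subscribe to the notifications this formatter cares about
  RSpec::Core::Formatters.register self, :example_passed, :example_failed

  def initialize(output)
    @output = output
  end

  def example_passed(notification)
    @output << "<span style=\"color:green\">.</span>"
  end

  def example_failed(notification)
    @output << "<span style=\"color:red\">F</span>"
  end
end

Run it with: rspec --require ./color_graph_formatter.rb --format ColorGraphFormatter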

Maybe you are looking for fuubar; it's not JS, but it shows a nice colored progress bar. I also use it to get a quick overview of passing and failing tests.
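A quick way to try it (gem name and formatter class as per the fuubar README):

gem install fuubar
rspec --require fuubar --format Fuubar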


Sphinx docs including unit test output

This is a how-to/best-practice question.
I have a code base with a suite of unit tests run with pytest
I have a set of *.rst files which provide explanation of each test, along with a table of results and images of some mathematical plots
Each time the pytest suite runs, it dynamically updates the *.rst files with the results of the latest test data, updating numerical values, time-stamping the tests, etc
I would like to integrate this with the project docs. I could:
1. Build these rst files separately with sphinx-build whenever I want to view the test results [this seems bad, since it's labor-intensive and not automated]
2. Tell Sphinx to render these pages separately and include them in the project docs [better, but I'm not sure how to configure this]
3. Have a separate set of Sphinx docs for the test results, which I can build after each run of the test suite
Which approach (or another approach) is most effective? Is there a best practice for doing this type of thing?
Maybe take a look at Sphinx-Test-Reports, which reads in all information from JUnit-based XML files (pytest supports this) and generates the output during the normal Sphinx build phase.
So you are free to add custom information around the test results.
Example from webpage:
.. test-report:: My Report
   :id: REPORT
   :file: ../tests/data/pytest_sphinx_data_short.xml
So the complete answer to your question: take none of the given approaches and let a Sphinx extension do it at build time.
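To produce the JUnit XML that the extension reads, pytest can be pointed at an output file directly (the path here is illustrative):

pytest --junitxml=tests/data/pytest_report.xml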

How to handle a multi-language website in a JMeter script

I have a website which supports English and French. I have already created a script for the English website, but now they want me to test against the French website. How can I extend my script so that assertions do not fail when I run it against either language?
You can easily add flexibility to your assertions so they check for the presence of either the English OR the French word in the response.
For instance, if you want a single assertion to check whether the word Welcome OR Bienvenue is in the response, you can combine them using a pipe as follows:
Welcome|Bienvenue
As per the How to Use JMeter Assertions in 3 Easy Steps guide, the Response Assertion in "Contains" and "Matches" mode accepts Perl5-style regular expressions, so you should have enough flexibility to check both the English and French website versions.
In short
Your tests should be language-agnostic, especially performance-/load-tests.
Explanation
UI tests should use generic selectors such as tags (<p>, <div>, <table>), element IDs (<div id="basket">) or CSS classes (<p class="message">) for looking up elements. As you're using JMeter, I assume you're running some sort of performance/load tests. If so, you most likely want to look for action elements to progress your tests.
If you cannot omit some language dependency (for example, localized URL paths), I would suggest using JMeter variables that are set according to the language you're testing with. See here for details.
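A minimal sketch of that idea (the variable name and paths are illustrative): define the language in a User Defined Variables element and reference it wherever the URL differs per language:

lang = fr                      (User Defined Variables)
Path: /${lang}/accueil         (HTTP Request sampler)

Switching languages is then a one-line change, or the value can be passed on the command line with -Jlang=fr and read via ${__P(lang)}.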
In contrast to performance tests, acceptance or general web UI tests would incorporate testing of labels. Selenium and other HTML-capturing tests are usually backed by test code written by you or your team. That code can rely on resource bundles, translations, etc., so you can test for the correct labels.
HTH, Mark

How to make one feature run before another

I have two Cucumber features (DeleteAccountingYear.feature and AddAccountingYear.feature).
How can I make the second feature (AddAccountingYear.feature) run before the first one (DeleteAccountingYear.feature)?
I concur with @alannichols about tests being independent of each other. That's a fundamental aspect of an automation suite; otherwise we will end up with an unmaintainable, flaky test suite.
Wanting to run a certain feature file before another looks to me like a test-design issue.
Cucumber provides a few options to solve issues like this:
a) Is DeleteAccountingYear.feature really a feature of its own? If not, you can use Cucumber's Background: option. The steps provided in the background will be run before each scenario in that feature file. So your AddAccountingYear.feature will look like this:
Feature: AddingAccountingYear

  Background:
    Given I have deleted accounting year

  Scenario: add new accounting year
    Then I add new account year
b) If DeleteAccountingYear.feature is indeed a feature of its own and needs to be in its own feature file, then you can use setup and teardown functions. In Cucumber this can be achieved using hooks. You can tag AddAccountingYear.feature with a certain tag, say @doAfterDeleteAccountYear. Now from the Before hook you can do the required setup for this specific tag. The Before hook (in Ruby) will look like:
Before('@doAfterDeleteAccountYear') do
  # Call the function to delete the accounting year
end
If the delete-accounting-year logic is written as a function, then the only thing required is to call that method in the Before hook. This way the code will be DRY-compliant as well.
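A minimal sketch of that layout (module, file, and method names are illustrative):

# features/support/accounting_helpers.rb
module AccountingHelpers
  def delete_accounting_year
    # drive the UI or API to remove the existing accounting year
  end
end
World(AccountingHelpers)

# features/support/hooks.rb
Before('@doAfterDeleteAccountYear') do
  delete_accounting_year
end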
If these options don't work for you, another way of forcing the order of execution is a batch/shell script. You can add an individual cucumber command for each feature, in the order you would like them to execute, and then just run the script. The downside is that a separate report will be generated for each feature file, and it is something I wouldn't recommend anyway, for the reasons mentioned above.
From Justin Ko's website (https://jkotests.wordpress.com/2013/08/22/specify-execution-order-of-cucumber-features/), the run order is determined in the following way:
Alphabetically by feature file directory
Alphabetically by feature file name
Order of scenarios within the feature file
So to run one feature before the other, you could change the name of the feature file or put it in a separate feature folder with a name that sorts alphabetically first.
However, it is good practice to make all of your tests independent of one another. One of the easiest ways to do this is to use mocks to create your data (i.e. the data you want to delete), but that isn't always an option. Another way would be to create the data you want to delete in the set-up of the delete tests. The downside is that it duplicates effort, but it won't matter what order the tests run in. This may not be an issue now, but with a larger test suite and/or multiple coders using the test repo it may be difficult to maintain test ordering based solely on alphabetical sorting.
Another option would be to combine the add and delete tests. This goes against the general rule that one test should test one thing, but it is often a pragmatic approach if your tests take a long time to run and adding the add-data step to the set-up for delete would add a lot of time to your test suite.
Edit: after reading that link to Justin Ko's site, you can specify the features to run when you invoke cucumber, and it will run them in the order you give. For any features whose order you don't care about, you can just put the whole feature folder at the end, and cucumber will run through them, skipping any that have already been run. Copy-paste example from the link above:
cucumber features\folder2\another.feature features\folder1\some.feature features

(Unit)testing PDF generation

We implemented a Magento module, https://github.com/firegento/firegento-pdf/, and I plan to write tests for it.
The problem is: the extension generates PDFs.
Is there any framework, or whatever, to test PDFs? It would be totally fine if I could check for text in the PDF. I don't want to check correct placement.
Any ideas?
This looks promising but feels oversized: http://webcheatsheet.com/php/reading_clean_text_from_pdf.php
I use PdfBox for a similar module: a Java-based command-line utility that extracts text from a PDF and optionally converts it to HTML: http://pdfbox.apache.org/commandline/#extractText
To use it within PHPUnit tests, I wrote a PHP interface for the relevant PdfBox methods: https://github.com/schmengler/PdfBox
Example
use SGH\PdfBox\PdfBox;

//$pdf = GENERATED_PDF;
$converter = new PdfBox;
// point the wrapper at the PdfBox jar installed on the machine
$converter->setPathToPdfBox('/usr/bin/pdfbox-app-1.7.0.jar');
// extract plain text from the generated PDF content
$text = $converter->textFromPdfStream($pdf);
Further reading: Unit Test Generated PDFs with PHPUnit and PDFBox
Maybe you could use Inkscape to convert the PDF into SVG and make assertions on some SVG nodes.
That would do if you only want to check the text or some simple formatting.
$ inkscape -f invoice.pdf --export-plain-svg=thepdf.svg
For the correct position you need to be a bit fuzzy, though.
Nevertheless, the PDF source can be plain enough to be checked with a simple strpos().
You have to resign yourself to testing "we sent the right commands to Magento"; testing the output PDF itself would cause fragility.
Mock the PDF-writing library, and test that your code calls the library correctly. This has the added benefit of speed, but requires a little more discipline: if a PDF output fails manual inspection, you MUST fix that test-first, to keep your mocks honest.

Automated testing with Ruby: select an option from a drop-down list

I am writing automated tests with Ruby (the Selenium framework) and I need to know how to select an option from a drop-down list.
Thanks in advance!
Building on floehopper's answer:
selenium.addSelection(locator, value)
or
selenium.select(locator, value)
You almost certainly want "id=my_select_box_id" (with the quotes) for the locator, though other locator strategies will work. value is the literal value (not the display text) of the option to be selected.
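If you are on the newer selenium-webdriver gem rather than Selenium RC, the same task looks roughly like this (URL, element id, and option label are illustrative):

require 'selenium-webdriver'

driver = Selenium::WebDriver.for :firefox
driver.get 'http://example.com/form'

# wrap the <select> element in the Select support class
element = driver.find_element(id: 'my_select_box_id')
select  = Selenium::WebDriver::Support::Select.new(element)

select.select_by(:text, 'Ruby')    # by visible label
# select.select_by(:value, 'rb')   # or by the value attribute

driver.quit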
It sounds like you are writing a functional test here. Selecting the option probably won't do much good on its own; you need to submit the form in order to test the controller. :)
It might help people answering to know which testing framework you are using, because there are several to choose from.
If you are using RSpec, check out this screencast.
Hope that helps anyway.
Aside from functional tests, if you're looking for something that acts a bit more like the real app, have a look at Webrat. For non-AJAX integration tests, it has a very nice DSL for selecting your DOM elements and taking appropriate actions against them (link-clicking, form-filling, etc.).
On the other hand, if your app is an external web app that you just want to run acceptance tests against, you can also check out Selenium or Watir.
Note that Webrat is heavily web-framework based, whereas Selenium and Watir use the browser to interact with your web app directly (like a real user).
I think you want this command:
select(selectLocator, optionLocator)
selectLocator identifies the drop down list
optionLocator identifies the option within the list
The easiest way of doing this: select(selectLocator, optionLocator), as suggested above.
selectLocator: name or XPath of the drop-down object
optionLocator: name or XPath of the drop-down option to be selected
E.g.
selenium.select "Language", "label=Ruby"
