Outputting Xamarin.UITest REPL tree to failed test results

I am trying to make my Xamarin.UITest output clearer and easier to work with. Every so often, when Xamarin.Forms updates, the view tree changes in subtle ways that break our UITests. Also, when developing a test it isn't always clear what the query should look like to reach the view element we want the test to interact with.
To address both issues, when a test fails with an "Unable to find element" error, I want to capture the app's view tree and output it to the test results.
Currently, in these cases, we have to modify the test code by adding app.Repl(); (see Working With the REPL), re-run the test, wait for the REPL window to appear, type tree, look at the output, type exit to leave the REPL, make code changes based on what the tree command showed, and repeat until the test works. If the test results instead contained the output of the REPL's tree command, I could start fixing the test code immediately and greatly shorten my feedback loop.
How could I most easily achieve this?

I think this is what you're looking for:
app.Print.Tree();
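If it helps, here is a minimal sketch of how that can be wired into failing tests so the tree ends up in the test results. The wrapper below is my own illustration (NUnit assumed as the test runner), not part of Xamarin.UITest; app.Print.Tree() writes the current visual tree to the console, which the runner captures alongside the failure.

using System;
using Xamarin.UITest;

public static class TestHelpers
{
    // Hypothetical helper: run the test body and dump the view tree if it throws,
    // so "Unable to find element" failures include the tree in the test results.
    public static void RunWithTreeOnFailure(IApp app, Action testBody)
    {
        try
        {
            testBody();
        }
        catch (Exception)
        {
            app.Print.Tree();   // printed to stdout, captured with the failed test
            throw;
        }
    }
}

Usage inside a test would look roughly like this (the queries are placeholders):

TestHelpers.RunWithTreeOnFailure(app, () =>
{
    app.Tap(c => c.Marked("SubmitButton"));
    app.WaitForElement(c => c.Marked("ConfirmLabel"));
});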

Related

Is there a way to step into CasperJS code and debug step by step?

I have been using CasperJS for some time and rely on console logging for debugging. I was wondering whether there is any IDE that supports step-by-step debugging of CasperJS, or some other way (remote debugging) to step into CasperJS code. Has anybody successfully done it? Any information will be helpful.
Thanks,
When I want to debug with CasperJS, I do the following: I launch my script with SlimerJS. It opens a Firefox window, so I can easily see click problems, form-filling problems (AJAX return errors, media uploads, and so on), and the step at which the code blocks.
With that I don't often need to look at the console, and I don't have to call this.capture('img.jpg') several times to debug. (For now I don't test responsive design, so I don't need capture; take a look at PhantomCSS if you do.)
I use SlimerJS to debug (always with Casper), but PhantomJS in continuous integration (Jenkins, headless), though you can also run SlimerJS headless with xvfb on Linux or Mac.
But sometimes I have to look at the console for more details, so I use these options (you can pass them on the command line too):
casper.options.verbose = true;
casper.options.logLevel ="debug";
Naming your closures is useful with these options, because the name is displayed.
I don't think there is an IDE: if a step fails, the stack with all the following steps stops anyway. (It is still possible to do a sort of hack using multiple wait() calls, encapsulating them, to run different closures and get the results of all of them even if one fails; but in that case you are not stacking synchronous steps, you are executing asynchronous instructions, so take care with timeouts and logical flow if you try it.) Note that I said 'if a step fails': if it's just an instruction inside a closure that fails, the following steps will of course still execute.
So: a closure that fails -> the following steps are still executed.
A step that fails (e.g. thenOpen(fssfsf) with fssfsf not defined) -> the stack stops.
Multiple wait() calls can run asynchronously.
So if you have a lot of bugs and execute your tests sequentially (stacking steps), you can only debug them one by one, or per closure for independent step functions (I think an IDE could work in this case). That applies to one file; the stacks are of course independent if you launch a folder. Usually, at the beginning, you run your file each time you finish a step; once you're used to the tool, you write the whole script at once.
In most cases, bugs are due to asynchrony, scope, and context problems (really JavaScript problems, though Casper simplifies the asynchronous ones: "The callback/listener stuff is an implementation of the Promise pattern."):
If you want to loop over a suite of instructions, don't forget the IIFE, or use Casper's each() function, and wrap everything in a then() step (or use eachThen() directly). Otherwise the loop will run with the last index 'i.length' times: there is no loop scope in JS, so by default i keeps the same reference.
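A minimal sketch of the loop pitfall and the each() fix (the URLs and page list are placeholders, not from the original post):

// With a plain for-loop, every callback closes over the same `i`,
// so either wrap the body in an IIFE or let casper.each() bind the item.
var pages = ['a.html', 'b.html', 'c.html'];

casper.start('http://example.com/');

casper.each(pages, function (self, page) {
    // `page` is bound per iteration, so each step opens the right URL
    self.thenOpen('http://example.com/' + page, function () {
        this.echo('Opened ' + page + ' -> ' + this.getTitle());
    });
});

casper.run();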
When you click a link at the end of a step, follow it with a wait() statement (also a step function) instead of a then(). If I've understood then() correctly, it is launched as soon as the previous step completes, so it runs right after the click(). If that click triggers an AJAX response, and your following step scrapes or tests the result of that response, it will fail randomly, because you haven't explicitly asked to wait for the resource. I saw issues like that in my first tests.
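Sketched with a placeholder selector: click, then wait explicitly for the AJAX result instead of chaining a bare then():

casper.then(function () {
    this.click('#load-more');
});

// waitForSelector polls until the element exists (or the timeout expires),
// so the next step reliably sees the AJAX-injected content.
casper.waitForSelector('#results .item', function () {
    this.echo('Results loaded: ' + this.getElementsInfo('#results .item').length);
}, function onTimeout() {
    this.echo('AJAX result never appeared', 'ERROR');
});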
Don't mix the two contexts: the Casper environment and the page DOM environment. Use the evaluate() function to pass from one to the other. In the evaluate function you can pass an argument from the Casper context to the page DOM, like this:
var casperContext = "phantom";
casper.evaluate(function(pageDomContext) {
console.log("will echo ->phantom<- in the page DOM environment : " + pageDomContext + ", use casper.on('remote.message') to see it in the console");
}, casperContext);
Or you can see it directly in the browser with SlimerJS by using alert() instead of console.log().
Use setFilter() to handle prompt and confirm boxes.
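A sketch of those filters (return whatever your test needs the dialog to answer):

// filters run in the Casper context and decide what the page's
// confirm()/prompt() calls return
casper.setFilter('page.confirm', function (msg) {
    return true;                      // accept every confirm dialog
});

casper.setFilter('page.prompt', function (msg, defaultValue) {
    return 'automated answer';        // placeholder reply to any prompt()
});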
If your website also exists in a mobile version, you can manipulate the userAgent for your mobile tests.
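For instance (the user-agent string and URL are only placeholders):

casper.userAgent('Mozilla/5.0 (iPhone; CPU iPhone OS 9_0 like Mac OS X) AppleWebKit/601.1');
casper.start('http://example.com/', function () {
    this.echo('Mobile page title: ' + this.getTitle());
});
casper.run();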
You can call Node modules in a CasperJS file, including files that use the tester module. Well, that's not entirely true; see "use node module from casper". Some core Node features are implemented in Phantom (and Slimer too), like fs and child process, but they are not always well documented. So I prefer to drive my tests with Node. Node is useful for launching your tests in parallel (child processes); I suggest running as many processes as you have cores. It depends on your kind of script: with a normal scenario (open a page and check some elements) I can run 10 child processes in parallel without random failures (on a local machine), but with elements that are slow to load (multiple SVGs, sometimes XML...), use require('os').cpus().length or a script like this: Repeat a step X times. Otherwise you will get random failures even if you increase the timeout, and when it crashes you can't do anything other than reload() the page.
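A rough sketch of that parallel launch with plain Node (my own script; the file names are placeholders):

var spawn = require('child_process').spawn;
var os = require('os');

var files = ['login_spec.js', 'search_spec.js', 'upload_spec.js'];
var workers = os.cpus().length;           // one casperjs process per core

function runNext() {
    var file = files.shift();
    if (!file) { return; }
    var child = spawn('casperjs', ['test', file], { stdio: 'inherit' });
    child.on('exit', runNext);            // start the next file when this one ends
}

for (var i = 0; i < workers; i++) {
    runNext();
}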
You can then integrate your tests into Jenkins using the xUnit output. Just give each log .xml file a different index; Jenkins (xUnit -> JUnit) will pick them up with the pattern *.xml.
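For example (the suite paths and log names are placeholders):

# give each xUnit log its own index so the Jenkins pattern *.xml picks them all up
casperjs test suite1/ --xunit=log-1.xml
casperjs test suite2/ --xunit=log-2.xml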
I know I didn't really answer your question, but I think that for debugging, listing the main specific problems remains the best approach.
There are also some useful functions for debugging:
var fs = require('fs');
fs.write("results.html", this.getPageContent(), 'w');
I prefer this to this.debugHTML(). I can check in the results.html file whether tags are missing (compared to the browser, using Firebug or another tool). Or sometimes, if I need to check just one tag, outputting the result to the console isn't a problem, so: this.getHTML("my selector"); and you can still pipe the log output: casperjs test test.js > test.html
Another trick: if you run your tests locally, the default timeout (5 s) sometimes isn't sufficient (network freezes).
So raise it to 10 s:
casper.options.waitTimeout = 10000;
Some differences between Phantom and Slimer:
If you set casper.options.pageSettings.loadImages = false; and in your file you try to scrape or test the width/height of an element, it will work with Slimer but not with Phantom. So set it to true in that specific file to keep compatibility.
You need to specify an absolute path with Slimer (for includes, imports, input media, ...).
Example :
this.page.uploadFile('input[name="media"]', fs.absolute(require('system').args[4]).split(fs.separator).slice(0, -1).join(fs.separator) + '/../../../../avatar.jpg');
To include a file from the root folder (works in every subdirectory/OS, better than the previous inclusion); you could also do it Node-style using require():
phantom.injectJs(fs.workingDirectory + '/../../../../global.js');

Can I turn off Mocha's "watching watching watching" behavior when using --watch?

I like Mocha so far, but I'm not fond of this when I'm doing continuous testing:
watching
watching
watching
watching
watching
watching
[repeat many times]
If I haven't run a test for a while and I want to see the output of the last test, it's scroll, scroll, scroll. It quickly swamps my console buffer. Can I change this behavior without changing Mocha's source code?
EDIT: this has been fixed and pulled into master.
Per https://github.com/visionmedia/mocha/blob/master/bin/_mocha#L287-303, the mocha command gives the test runner a callback that prints a series of strings to the console, and there aren't any command-line flags that affect that behavior.
The root of the problem, though, seems to be that the animation's control characters are failing in your terminal: it's supposed to show a pretty spinning symbol at the beginning and then issue a carriage return (not a line feed) to rewrite the line after every printout.
If you're really dead-set on changing that behavior without modifying Mocha's source, you could make a copy of the bin/_mocha file and just replace the play() function with one that suits your needs. Make sure to fix up all the relative paths.
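For example, the replacement in your copy could simply be a no-op (the exact signature of play() in your Mocha version may differ; the point is just to skip the animation):

// hypothetical replacement: never start the spinner / "watching" printout
function play(frames, interval) {
  // intentionally empty: nothing gets written to the console on each tick
}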

Can I skip a test if another test fails?

Say I have 2 tests:
describe "Data Processor" do
it "should parse simple sample data correctly" do
...
end
it "should parse complex sample data correctly" do
...
end
end
Is there a way, using RSpec, to make the second test dependent on the first one? The reason I ask is that I'm using Watchr and the second test takes quite a bit more time. If the first test fails, the second one will very likely fail as well. They aren't really dependent on each other, but it's kind of a waste of time to run the second test if the first one fails.
I can skip the test with Ctrl-C in Watchr, but I'm wondering if there's a better way. I didn't find anything via Google.
Sometimes I add this line to my .rspec:
--fail-fast
which is a command-line flag telling rspec to exit after the first failure. It may not be exactly what you want; it doesn't tie two particular examples together but rather quits your entire test suite when anything fails.
But when I'm hammering out a new bit of code with autotest/guard/watchr, I like to turn this on since I only want to work on one failure at a time, anyway. Much faster.

Is there a good way to debug order-dependent test failures in RSpec (RSpec 2)?

Too often people write tests that don't clean up after themselves when they mess with state. Often this doesn't matter, since objects tend to be torn down and recreated for most tests, but there are some unfortunate cases where global state on objects persists for the entire test run, and when you run tests that depend on and modify that global state in a certain order, they fail.
These tests and possibly implementations obviously need to be fixed, but it's a pain to try to figure out what's causing the failure when the tests that affect each other may not be the only things in the full test suite. It's especially difficult when it's not initially clear that the failures are order dependent, and may fail intermittently or on one machine but not another. For example:
rspec test1_spec.rb test2_spec.rb # failures in test2
rspec test2_spec.rb test1_spec.rb # no failures
In RSpec 1 there were some options (--reverse, --loadby) for ordering test runs, but those have disappeared in RSpec 2 and were only minimally helpful in debugging these issues anyway.
I'm not sure what ordering either RSpec 1 or RSpec 2 uses by default, but one custom-designed test suite I used in the past randomly ordered the tests on every run so that these failures came to light more quickly. In the test output, the seed used to determine the ordering was printed with the results, so it was easy to reproduce the failures even if you had to do some work to narrow down the individual tests in the suite that were causing them. There were then options that allowed you to start and stop at any given test file in the order, which let you easily do a binary search to find the problem tests.
I have not found any such utilities in RSpec, so I'm asking here: What are some good ways people have found to debug these types of order dependent test failures?
There is now a --bisect flag that will find the minimum set of tests to run to reproduce the failure. Try:
$ rspec --bisect=verbose
It might also be useful to use the --fail-fast flag with it.
I wouldn't say I have a good answer, and I'd love to hear some better solutions than mine. That said...
The only real technique I have for debugging these issues is adding a global hook (via spec_helper) that prints some aspect of database state (my usual culprit) before and after each test (conditioned on whether I care or not). A recent example was adding something like this to my spec_helper.rb:
Spec::Runner.configure do |config|
  config.before(:each) do
    $label_count = Label.count
  end

  config.after(:each) do
    label_diff = Label.count - $label_count
    $label_count = Label.count
    puts "#{self.class.description} #{description} altered label count by #{label_diff}" if label_diff != 0
  end
end
We have a single test in our continuous integration setup that globs the spec/ directory of a Rails app and runs each spec file against each of the others.
It takes a lot of time, but we found 5 or 6 dependencies that way.
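A rough sketch of that idea (my own script, not the poster's CI job; the glob pattern is an assumption):

# run every ordered pair of spec files together to flush out order dependencies
specs = Dir.glob("spec/**/*_spec.rb")

specs.product(specs).each do |first, second|
  next if first == second
  ok = system("bundle", "exec", "rspec", first, second)
  puts "possible order dependency: #{first} then #{second}" unless ok
end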
Here is a quick and dirty script I wrote to debug order-dependent failures: https://gist.github.com/biomancer/ddf59bd841dbf0c448f7
It consists of 2 parts.
The first part runs the rspec suite multiple times with different seeds and dumps the results to rspec_[ok|fail]_[seed].txt files in the current directory to gather statistics.
The second part iterates through all these files, extracts the test group names, and analyzes their position relative to the affected test to make assumptions about dependencies, forming some 'risk' groups (safe, unsafe, etc.). The script's output explains the other details and group meanings.
This script will work correctly only for simple dependencies, and only if the affected test fails for some seeds and passes for others, but I think it's still better than nothing.
In my case it was a complex dependency, where the effect could be cancelled by another test, but the script helped me find a direction after running its analysis part multiple times on different sets of dumps, specifically only on the failed ones (I just moved the 'ok' dumps out of the current directory).
I found my own question 4 years later: rspec now has an --order flag that lets you run in random order, and if you get order-dependent failures you can reproduce the order with --seed 123, where the seed is printed out on every spec run.
https://www.relishapp.com/rspec/rspec-core/v/2-13/docs/command-line/order-new-in-rspec-core-2-8
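In practice that looks something like this (the seed value is whatever your failing run printed):

# run in random order; RSpec prints "Randomized with seed NNNN"
rspec --order random
# reproduce the exact same ordering later
rspec --order random --seed 1234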
It's most likely some state persisting between tests, so make sure your database and any other data stores (including class variables and globals) are reset after every test. The database_cleaner gem might help.
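A common spec_helper setup for that, sketched from database_cleaner's standard usage rather than from this answer:

require 'database_cleaner'

RSpec.configure do |config|
  config.before(:suite) do
    DatabaseCleaner.strategy = :transaction
    DatabaseCleaner.clean_with(:truncation)   # start from a known-clean database
  end
  config.before(:each) { DatabaseCleaner.start }
  config.after(:each)  { DatabaseCleaner.clean }
end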
RSpec Search and Destroy is meant to help with this problem:
https://github.com/shepmaster/rspec-search-and-destroy

JMeter - saving results + configuring "graph results" time span

I am using JMeter and have 2 questions (I have read the FAQ + Wiki etc):
I use the Graph Results listener. It seems to have a fixed span, e.g. 2 hours (just guessing, as this is not indicated anywhere AFAIK), after which it wraps around and starts drawing on the same canvas from the left again. Hence, after a long weekend run it only shows the results of the last 2 hours. Can I configure that span or other properties (beyond the check boxes I see on the Graph Results listener itself)?
Can I save the results of a run and later open them? I know I can save the test plan or parts of it. I am unclear whether I can separately save just the test results data, and later open it and perform comparisons etc. Furthermore, can I open the data with different listeners even if they weren't part of the original test? (I think of the test as accumulating data, and later on I want to view and interpret that data using different "viewers".)
Thanks,
-- Shaul
Don't know about 1. Regarding 2: listeners typically have a configuration field for "Write All Data to a File", which lets you specify the file name. You can use the Simple Data Writer to store results efficiently for later analysis.
You can load results from a previous test into a visualizer by choosing "Write All Data to a File" and browsing for the file you wish to load. Somewhat counterintuitively, selecting a file for writing also loads that file into the visualizer and displays the results. Just make sure you don't run the test again while that file is selected, otherwise you will lose your saved test data. :-)
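As a side note that goes beyond the answer above: a non-GUI run can write the same kind of results file (a .jtl), which you can later load into any listener in the same way; the file names here are only examples:

jmeter -n -t test_plan.jmx -l results.jtl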
Well, I later found a JMeter group that was discussing the issue raised in my first question, and B.Ramann gave me an excellent suggestion to use a better graph instead, found here.
-- Shaul
