Say I have 2 tests:
describe "Data Processor" do
  it "should parse simple sample data correctly" do
    ...
  end

  it "should parse complex sample data correctly" do
    ...
  end
end
Is there a way, using RSpec, to make the second test dependent on the first one? The reason I ask is that I'm using Watchr, and the second test takes quite a bit more time. If the first test fails, the second one will very likely fail as well. They aren't really dependent on each other, but it's kind of a waste of time to run the second test if the first one fails.
I can skip the test with Ctrl-C in Watchr, but I'm wondering if there's a better way. I didn't find anything via Google.
Sometimes I add this line to my .rspec:
--fail-fast
which is a command-line flag telling rspec to exit after the first failure. It may not be exactly what you want; it doesn't tie two particular examples together but rather quits your entire test suite when anything fails.
But when I'm hammering out a new bit of code with autotest/guard/watchr, I like to turn this on since I only want to work on one failure at a time, anyway. Much faster.
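If you'd rather not keep the flag in your .rspec file, the same behavior can be enabled from spec_helper.rb via RSpec's configuration API; here's a minimal sketch (fail_fast is the config-level equivalent of the flag):
RSpec.configure do |config|
  # Abort the whole run as soon as one example fails, same as --fail-fast.
  config.fail_fast = true
end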
I am trying to make my Xamarin.UITest output clearer and easier to work with. Every so often when Xamarin Forms updates, the tree changes in subtle ways that break our UITests. Also, when developing a test, it isn't always clear what the query should look like to reach a view element we want our test to interact with.
To address these, when a test fails with an "Unable to find element" error, I want to capture the app's view tree and output it to the test results.
Currently in these cases we have to modify the test code by adding app.Repl() (see Working With the REPL), re-run the test, wait for the REPL window to appear, type tree, look at the output, type exit to leave the REPL, make code changes based on what the tree command printed, and repeat until the test works. If the test results contained the output of the REPL's tree command instead, I could start fixing the test code immediately and greatly speed up my testing feedback loop.
How could I most easily achieve this?
app.Print.Tree();
I think this is what you're looking for. It prints the app's current view hierarchy to the test output (the same tree the REPL's tree command shows), so you can call it from a catch block or teardown when a query fails.
I have a Ruby script, apparently correct, that sometimes stops working (probably on some calls to PostgreSQL through the pg gem). The problem is that it freezes without producing any error, so I can't see the line number; I always have to isolate the offending line by adding puts "ok1", puts "ok2", etc. and seeing where the script stops.
Is there any better way to see the current line being executed (without changing the script)? And maybe the current stack?
You could investigate ruby-debug, a project that has been rewritten several times for several different versions of Ruby; it should allow you to step through your code line by line. I personally prefer printf debugging in a lot of cases, though. Also, if I had to take a completely random guess at your problem, I might investigate whether you're running into a race condition and/or a deadlock in your DB.
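If a tiny change to the script is acceptable after all, one low-impact trick (my sketch, not part of the original suggestion) is to install a signal handler once at the top and dump the stack on demand when the script freezes. Note that the handler may not fire while the process is blocked inside a C extension call such as a pg query:
# Add once at the top of the script: print the current backtrace
# whenever the process receives SIGUSR1.
Signal.trap("USR1") do
  puts "=== backtrace at #{Time.now} ==="
  puts caller
end

# When the script hangs, from another shell run:
#   kill -USR1 <pid_of_the_ruby_process>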
I am making an app with a TON of features. My problem is that AppleScript seems to have a cut-off point: after a certain number of lines, the script stops working. It basically works only up to a certain point; once it gets there, it stops. I have moved the code around to make sure that it is not an error within the code. Help?
I might be wrong, but I believe a long script is not a good way to organize your code.
It's really hard to read, debug, or maintain, as one slight change in one part can have unexpected consequences in another part of your file.
If your script is very long, I suggest you break your code in multiple parts.
First, you may use functions if some part of the code is reused several times.
Another benefit of the functions is that you can validate them separately from the rest of the execution code.
Besides, it makes your code easier to read.
on doWhatYouHaveTo(anArgument)
  say "hello!"
end doWhatYouHaveTo
If the functions are used by different scripts, you may want to keep your functions in a separate library that you load as needed:
set cc to load script alias ((path to library folder as string) & "Scripts:Common:CommonLibrary.app")
cc's doWhatYouHaveTo(oneArgument)
Lastly, a thing I sometimes do is call a different script with some arguments, when one long piece of code serves slightly different purposes:
run script file {mainFileName} with parameters {oneWay}
This last trick has a great yet curious benefit: it can speed up execution for a reason I have never been able to explain (and when I say speed up, I mean reduce the execution time by a factor of 17 or so for the very same code).
Does anybody know if it's possible in TestComplete to run a test from a specific point in a function?
I see options like run routine, run test, run project.
There is no way to start execution of a script test from a specific line, but it is possible to move the execution point to the desired line. You can set a breakpoint to the first line of a script routine and run this routine. Once the test execution is stopped at the breakpoint, use the Set Next Statement action to move the execution point to the needed line and continue the execution.
This feature is documented in the Setting Next Execution Point help topic.
TestComplete's Keyword Driven Tests have the possibility to add "Labels" and "Go To Labels".
You can put a label anywhere in your KDT script and set a GoToLabel at the very beginning of your test to move to that point.
It may even be possible to combine this with Project Variables to get a "Last Reached Milestone/Label". This is only speculation, but worth looking into.
No, there is no way to run TestComplete scripts from a specific point. The closest you can get is to set a breakpoint and then press F10 to execute the code line by line.
Too often, people write tests that don't clean up after themselves when they mess with state. Often this doesn't matter, since objects tend to be torn down and recreated for most tests, but there are some unfortunate cases where global state on objects persists for the entire test run, and when you run tests that depend on and modify that global state in a certain order, they fail.
These tests and possibly implementations obviously need to be fixed, but it's a pain to try to figure out what's causing the failure when the tests that affect each other may not be the only things in the full test suite. It's especially difficult when it's not initially clear that the failures are order dependent, and may fail intermittently or on one machine but not another. For example:
rspec test1_spec.rb test2_spec.rb # failures in test2
rspec test2_spec.rb test1_spec.rb # no failures
In RSpec 1 there were some options (--reverse, --loadby) for ordering test runs, but those have disappeared in RSpec 2 and were only minimally helpful in debugging these issues anyway.
I'm not sure of the ordering that either RSpec 1 or RSpec 2 use by default, but one custom designed test suite I used in the past randomly ordered the tests on every run so that these failures came to light more quickly. In the test output the seed that was used to determine ordering was printed with the results so that it was easy to reproduce the failures even if you had to do some work to narrow down the individual tests in the suite that were causing them. There were then options that allowed you to start and stop at any given test file in the order, which allowed you to easily do a binary search to find the problem tests.
I have not found any such utilities in RSpec, so I'm asking here: What are some good ways people have found to debug these types of order dependent test failures?
There is now a --bisect flag that will find the minimum set of tests to run to reproduce the failure. Try:
$ rspec --bisect=verbose
It might also be useful to use the --fail-fast flag with it.
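If a particular seed is already known to produce the failure, you can pass it along so bisect reproduces that exact ordering (the seed value below is just a placeholder):
$ rspec --seed 1234 --bisect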
I wouldn't say I have a good answer, and I'd love to hear some better solutions than mine. That said...
The only real technique I have for debugging these issues is adding a global (via spec_helper) hook for printing some aspect of database state (my usual culprit) before and after each test (conditioned to check if I care or not). A recent example was adding something like this to my spec_helper.rb.
Spec::Runner.configure do |config|
  config.before(:each) do
    # Snapshot the state we care about before every example.
    $label_count = Label.count
  end

  config.after(:each) do
    # Flag any example that changed the label count without cleaning up.
    label_diff = Label.count - $label_count
    $label_count = Label.count
    puts "#{self.class.description} #{description} altered label count by #{label_diff}" if label_diff != 0
  end
end
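For what it's worth, the same idea under the modern RSpec.configure API would look something like this (RSpec 3 syntax; Label is the model from the example above):
RSpec.configure do |config|
  config.before(:each) do
    # Snapshot the state we care about before every example.
    @label_count_before = Label.count
  end

  config.after(:each) do |example|
    # Flag any example that changed the label count without cleaning up.
    diff = Label.count - @label_count_before
    puts "#{example.full_description} altered label count by #{diff}" if diff != 0
  end
end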
We have a single test in our Continuous Integration setup that globs the spec/ directory of a Rails app and runs each of them against each other.
Takes a lot of time but we found 5 or 6 dependencies that way.
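A rough sketch of that pairwise approach (the glob pattern and the rspec invocation are assumptions about a typical Rails layout):
specs = Dir.glob("spec/**/*_spec.rb")

specs.product(specs).each do |first, second|
  next if first == second
  # Run each ordered pair in a fresh process; a pair that fails here but
  # passes in isolation points at a dependency between the two files.
  puts "ORDER DEPENDENCY? #{first} then #{second}" unless system("rspec", first, second)
end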
Here is a quick and dirty script I wrote to debug order-dependent failures - https://gist.github.com/biomancer/ddf59bd841dbf0c448f7
It consists of 2 parts.
The first part is intended to run the RSpec suite multiple times with different seeds, dumping the results to rspec_[ok|fail]_[seed].txt files in the current directory to gather stats.
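The first part boils down to something like this sketch (not the gist itself; the number of runs and the seed range are illustrative):
10.times do
  seed = rand(10_000)
  # Run the whole suite with a random seed and keep the output, named by
  # outcome, for the later analysis pass.
  ok = system("rspec --seed #{seed} > rspec_run_#{seed}.txt 2>&1")
  File.rename("rspec_run_#{seed}.txt", "rspec_#{ok ? 'ok' : 'fail'}_#{seed}.txt")
end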
The second part iterates through all these files, extracts the test group names, and analyzes their positions relative to the affected test to make assumptions about dependencies, forming some 'risk' groups - safe, unsafe, etc. The script output explains the other details and group meanings.
This script will work correctly only for simple dependencies, and only if the affected test fails for some seeds and passes for others, but I think it's still better than nothing.
In my case it was a complex dependency, where the effect could be cancelled by another test, but this script helped me get directions after running its analysis part multiple times on different sets of dumps, specifically only the failed ones (I just moved the 'ok' dumps out of the current directory).
Found my own question 4 years later, and now RSpec has an --order flag that lets you set random ordering; if you get order-dependent failures, reproduce the order with --seed 123, where the seed is printed out on every spec run.
https://www.relishapp.com/rspec/rspec-core/v/2-13/docs/command-line/order-new-in-rspec-core-2-8
It's most likely some state persisting between tests, so make sure your database and any other data stores (including class variables and globals) are reset after every test. The database_cleaner gem might help.
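A typical database_cleaner setup, sketched from the gem's README (the strategy choice is up to you):
RSpec.configure do |config|
  config.before(:suite) do
    # Wipe anything left over from previous runs, then rely on
    # transactions to roll back whatever each example does.
    DatabaseCleaner.clean_with(:truncation)
    DatabaseCleaner.strategy = :transaction
  end

  config.around(:each) do |example|
    DatabaseCleaner.cleaning { example.run }
  end
end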
Rspec Search and Destroy is meant to help with this problem:
https://github.com/shepmaster/rspec-search-and-destroy