They look exactly the same; it's just that the toNotEqual matcher does not work on my Node.js/Protractor installation.
It would have been more useful if you had specified the Jasmine version you are using.
In any case, the answer to your question is: Jasmine versions 1.3 and 2.0 through 2.5 don't support toNotEqual. If you want to check inequality, you have to chain .not onto expect before the matcher.
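For illustration, a minimal Jasmine spec showing the .not chain (the values are hypothetical):

```javascript
// Plain Jasmine spec illustrating .not; the compared values are made up.
describe('inequality check', function () {
  it('chains .not instead of using toNotEqual', function () {
    var actual = { answer: 42 };
    expect(actual).not.toEqual({ answer: 41 });
  });
});
```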
Jasmine docs
If a project passes the tests in Ruby 2.0.0-p648 and Ruby 2.3.1, does it make sense to also test with versions such as 2.1.8 and 2.2.3?
Have there been any language features that worked in Ruby 2.0 and Ruby 2.3 but were temporarily broken or worked differently in, for example, Ruby 2.2?
You should test your code in the environments you intend it to be used in. The Ruby language version is one thing that can vary. You might also consider testing against JRuby and Rubinius if your code is supposed to support them - e.g. if it is provided as a public gem.
Logically, testing on the earliest and latest versions should cover most failure scenarios with respect to language features (although not necessarily all language bugs, since new ones can be introduced). As far as I know there has not been a Ruby feature that was deliberately added or removed in one version and then had that decision reversed in a later version. One exception: perhaps in your production code you are detecting a feature's existence and then using the feature in full - in which case an intermediate Ruby version that has the feature, but not in its latest state, could fail.
There may be other caveats too, and philosophically speaking, once you start testing you want to avoid too much "this should work because..." reasoning. The point of testing is to demonstrate that your code doesn't fail in the ways you have covered (there's more depth to it than that, but the answer would get far too long if it dove into test philosophies). If you want to declare "works in all versions of Ruby MRI from 2.0.0 to 2.3.1", then you will feel safer making that statement if you have actually tested it. In fact, when making such a statement in a public place, I would usually stick closer to raw fact - "tested in versions 2.0.0, 2.1.4 and 2.3.1".
Obviously there are diminishing returns. If you have no problem in 2.1.9, it is very unlikely you will have a problem in 2.1.10 - at some point it will cost you more to check every minor variation, even just to look at the test results, than the benefit of extra coverage.
The usual answer to this problem is to test as many variations as your automated test environment can handle and you can be bothered to set up and maintain. If using multiple versions of Ruby is done for you in parallel by your test service provider - e.g. you are using Travis - then it is relatively cheap to test multiple versions. So you may as well get as much coverage on environment variations that you expect to see "in the wild" as you can be bothered to look at.
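As a rough illustration, a Travis CI configuration for a Ruby project can simply list the interpreter versions to run the suite against, one build job per version (the versions below are just examples):

```yaml
# .travis.yml -- each listed interpreter gets its own build job
language: ruby
rvm:
  - 2.0.0
  - 2.1.10
  - 2.3.1
```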
I have imported my Visual Studio tests into the NUnit test runner. The tests are set up using if-statement validations so that they run all the way through. The tests are written in C#, using Selenium WebDriver to drive them, with an NUnit framework. After I run a test I see Pass, but I see 0 Assertions, which is correct because I never added any; but since I did add ifs, should I see some kind of output when these fail, like I would in Visual Studio?
I have googled this and looked through the NUnit documentation and Visual Studio and have not found an exact answer.
Looks like you want to check a few cases within the same test. This is not a good idea. Here is a good explanation why.
If you rewrite the tests with a single-assert-per-test approach, you will see that it is fine to use assertions rather than validations. Assertions will work exactly how you need them to (a minimal sketch follows below):
They don't interfere with other cases (because the other cases are in other tests).
They alert NUnit when a validation fails (because that is what assertions are for).
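For illustration, a minimal NUnit + Selenium sketch of the one-assertion-per-test idea; the page URL and element ID are hypothetical:

```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class LoginPageTests
{
    private IWebDriver _driver;

    [SetUp]
    public void SetUp()
    {
        _driver = new ChromeDriver();
        _driver.Navigate().GoToUrl("https://example.com/login"); // hypothetical page
    }

    [TearDown]
    public void TearDown()
    {
        _driver.Quit();
    }

    [Test]
    public void PageTitle_IsLogin()
    {
        // One assertion per test: a failure here cannot hide the other checks.
        Assert.AreEqual("Login", _driver.Title);
    }

    [Test]
    public void UsernameField_IsDisplayed()
    {
        Assert.IsTrue(_driver.FindElement(By.Id("username")).Displayed);
    }
}
```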
Hope that helps.
You can use multiple assertions in a single test case. I have used this approach with MSTest (not with NUnit). You can wrap each assertion in a try/catch block in your test case. This lets you catch failed assertions while the test keeps running. At the end of the test case, you check the number of failed assertions: if the count is greater than zero, you forcefully fail that test case, otherwise you continue to your next test case.
This approach is explained with an example in this blog post.
http://www.binaryclips.com/2016/03/coded-ui-test-testing-all-assertions-in.html
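Translated to NUnit, a rough sketch of that count-the-failures pattern might look like this; the checks themselves are placeholder helpers, not real page interactions:

```csharp
using NUnit.Framework;

[TestFixture]
public class MultipleChecksTests
{
    [Test]
    public void HomePage_AllChecks()
    {
        var failures = 0;

        // Each check is wrapped so a failure is recorded but the test keeps going.
        try { Assert.AreEqual("Home", GetPageTitle()); }   // hypothetical helper
        catch (AssertionException) { failures++; }

        try { Assert.IsTrue(IsLogoDisplayed()); }          // hypothetical helper
        catch (AssertionException) { failures++; }

        // Fail once at the end if any individual check failed.
        if (failures > 0)
            Assert.Fail($"{failures} assertion(s) failed.");
    }

    // Placeholders standing in for real Selenium page interactions.
    private static string GetPageTitle() => "Home";
    private static bool IsLogoDisplayed() => true;
}
```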
I am learning Ruby development with RSpec and Cucumber. I have a hard time knowing when I have to switch from one to the other. I know RSpec is used for logical errors while Cucumber is for structural/exception errors.
How can I know what type of error it gives? Is there a certain pattern to the error reporting?
For example, expected ... is a logical error.
Cucumber is usually used for acceptance tests, but under the hood you're using Capybara steps for browser automation, which of course are available to RSpec too.
So it's up to you. You can either use Cucumber for acceptance tests and RSpec for unit tests, or use RSpec for everything. I personally go with the latter, but it's really up to you. Try both and see how each works for you.
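To illustrate the second option, here is a rough sketch of an RSpec feature spec driving the browser through Capybara; the file path, page, fields, and credentials are all made up, and it assumes the capybara and rspec-rails gems are set up:

```ruby
# spec/features/sign_in_spec.rb -- hypothetical acceptance-style spec written in RSpec
require "rails_helper"

RSpec.feature "Signing in" do
  scenario "with valid credentials" do
    visit "/login"
    fill_in "Email",    with: "user@example.com"
    fill_in "Password", with: "secret"
    click_button "Sign in"

    expect(page).to have_content("Welcome back")
  end
end
```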
I'm thinking of removing some tests from my test suite. I don't think it'll lead to code being untested, but I'm not certain. Are there any tools that would enable me to identify code that's tested by the tests I want to remove, but not by anything else?
If you are using Ruby 1.9, how about SimpleCov?
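As a rough sketch, SimpleCov is started at the top of the test helper before any application code loads, so you can run the suite with and without the suspect tests and compare the coverage reports (the entry-point path is hypothetical):

```ruby
# spec/spec_helper.rb -- minimal SimpleCov setup
# SimpleCov must be started before the application code is required.
require "simplecov"
SimpleCov.start

require_relative "../lib/my_project" # hypothetical entry point
```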
A wee while ago I ended up on a page which hosted several Ruby tools with 'crazy' names like 'mangler' or 'executor' or something. The tool's job was to modify your production code (at runtime) in order to prove that your tests were precise.
Unfortunately I would now like to find that tool again, but can't remember what it was called. Any ideas?
I think you're thinking about Heckle, which flips your code to make sure your tests are accurate. Here:
http://seattlerb.rubyforge.org/heckle/
Maybe you're thinking of the Flay project and related modules:
http://ruby.sadi.st/Ruby_Sadist.html
You can also try my mutant. It's AST-based and currently runs under MRI and RBX in > 2.0 mode. It only has a killer for RSpec 3, but others are possible too.