I'm thinking of removing some tests from my test suite. I don't think it'll lead to code being untested, but I'm not certain. Are there any tools that would enable me to identify code that's tested by the tests I want to remove, but not by anything else?
If you are using Ruby 1.9, how about SimpleCov?
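For example, a minimal setup might look like the sketch below (the paths and the environment-variable trick are just one way to do it; adjust them for your project). Run the suite twice, once complete and once without the tests you plan to delete, writing each run to its own coverage directory, then diff the two reports: anything covered only in the first run is exercised solely by the tests you want to remove.

# spec/spec_helper.rb -- sketch of a SimpleCov setup for comparing two runs
require 'simplecov'

SimpleCov.coverage_dir(ENV.fetch('COVERAGE_DIR', 'coverage'))  # e.g. COVERAGE_DIR=coverage_full or coverage_reduced
SimpleCov.start do
  add_filter '/spec/'  # don't report coverage of the specs themselves
end

require_relative '../lib/my_project'  # assumption: your library's entry point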
I am learning Ruby development with RSpec and Cucumber. I have a hard time knowing when I have to switch from one to the other. I understand that RSpec is used for logical errors, while Cucumber is for structural/exception errors.
How can I know what type of error it gives? Is there a certain pattern of error reporting?
For example, expected ... is a logical error.
Cucumber is usually used for acceptance tests, but under the hood you're using Capybara steps for browser automation, which of course are available to RSpec too.
So it's up to you: you can use Cucumber for acceptance tests and RSpec for unit tests, or use RSpec for everything. I personally go with the latter, but it's really a matter of preference. Try both and see what works for you.
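For instance, a minimal RSpec feature spec driving Capybara directly might look like this (the page, route and button names are invented for illustration; it assumes Capybara is already configured against your app):

# spec/features/sign_in_spec.rb -- sketch only
require 'spec_helper'
require 'capybara/rspec'

RSpec.describe 'Signing in', type: :feature do
  it 'greets the user after a valid sign-in' do
    visit '/sign_in'  # hypothetical route
    fill_in 'Email', with: 'user@example.com'
    fill_in 'Password', with: 'secret'
    click_button 'Sign in'
    expect(page).to have_content('Welcome back')
  end
end

The same steps would read almost identically inside a Cucumber step definition, which is why the choice mostly comes down to taste.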
You have some code associated with an obsolete piece of functionality that you want to remove from a Ruby project. How do you ensure that you get rid of all of the code?
Some of the guidelines that usually help when refactoring Ruby apply, but there is an added challenge: code that isn't being called by anything won't break any unit tests.
Update: has anyone written anything that can guess, based on your version control history, whether there are commits whose code you have since mostly (but not entirely) deleted, and point out the remaining code?
Current thoughts:
Identify the outermost part of the stack associated with the obsolete functionality: the binary script calling it, or the unit tests calling it.
Look for methods that are only called by methods associated with the obsolete functionality. I often use git grep for this (a rough Ruby equivalent is sketched after this list).
In theory, running mutation testing and looking for code that used to be mutation-resistant under the old test suite but is now mutation-prone might help. It only helps if your code was well tested in the first place! (Or you can use code coverage tools such as rcov rather than mutation testing.)
Running test suites will ensure you haven't removed anything you shouldn't have!
Using autotest can save you time if you're constantly running tests.
If your code was well-structured, it should be easier to find related methods that need to be removed.
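As a very rough, purely illustrative sketch of the git grep approach mentioned above, here it is done in plain Ruby: list the method names defined under lib/ that are never mentioned anywhere else. All the paths, and the textual approach itself, are assumptions; a real dead-code hunt needs more care around send, method_missing and other metaprogramming.

# find_orphans.rb -- crude textual search, not real static analysis
defined_methods = Dir['lib/**/*.rb'].map do |file|
  File.readlines(file).map { |line| line[/^\s*def\s+(?:self\.)?(\w+)/, 1] }.compact
end.flatten.uniq

sources = Dir['{lib,bin,test,spec}/**/*.rb'].map { |f| File.read(f) }

defined_methods.each do |name|
  mentions = sources.inject(0) { |count, src| count + src.scan(/\b#{Regexp.escape(name)}\b/).size }
  puts "#{name} appears only at its definition" if mentions <= 1
end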
Especially in a dynamically typed language, there is no easy way to do this. If you have unit tests, thank the developer who wrote them, because they will help you remove the code correctly. But you're basically SOL: remove the code; if something breaks, put it back, figure out where it broke, work around it, and repeat.
Look at your code coverage. Any code which isn't covered may be part of the code you have left to remove (if any). (Just be sure you have removed your tests first. =])
A wee while ago I ended up on a page which hosted several Ruby tools, which had 'crazy' names like 'mangler' or 'executor' or something. The tool's job was to modify your production code (at runtime) in order to prove that your tests were precise.
Unfortunately I would now like to find that tool again, but can't remember what it was called. Any ideas?
I think you're thinking of Heckle, which mutates ("flips") your code to make sure your tests catch the change. Here:
http://seattlerb.rubyforge.org/heckle/
Maybe you're thinking of the Flay project and related modules:
http://ruby.sadi.st/Ruby_Sadist.html
You can also try my mutant. It's AST-based and currently runs under MRI and RBX in > 2.0 mode. It only has a killer for RSpec 3, but others are possible too.
I know there's a lot of stuff on TDD, and I'm trying to pick up the practice too.
But I wonder: is it a good idea to TDD your bug fixes too?
I was thinking along the lines of: find the bug and narrow it down.
Write a unit test to ensure that it will now pass whatever problem it previously caused.
Write more unit tests for other breakable conditions.
And finally write integration tests, as we don't have any tests prior to this, so whenever I'm fixing a bug I'm always worried that I might accidentally break something.
So is TDD suitable for debugging too? Or is there another method or resource/book that would be more useful for this purpose? I am more concerned about the integration test part, as I've mentioned above. I'm looking for anything related to LAMP/Axkit/Perl methods.
Thanks.
Write a test that fails because of the bug.
Fix the bug in your code.
Re-run test.
Re-run all tests.
So - yes - very much so.
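As a concrete (and entirely invented) illustration of the first step: suppose Invoice#total ignored a 100% discount. The regression test is written first and stays red until the fix lands.

# spec/invoice_regression_spec.rb -- the class, the bug and the numbers are hypothetical
require_relative '../lib/invoice'

RSpec.describe Invoice do
  it 'returns a zero total when a 100% discount is applied' do
    invoice = Invoice.new(amount: 50.0, discount: 1.0)
    expect(invoice.total).to eq(0.0)  # red before the fix, green after
  end
end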
When a tester finds a bug I usually write a unit test for it. When the test succeeds, I know the bug is fixed and it will always be covered again in the future.
When you have a bug in your system it is good practice in TDD to write a test that spots the bug (i.e. a red test that proves the bug). When you have to fix the bug, the test should eventually turn green. It may be a good idea to ferret out any other bugs if they're close enough.
Regarding debugging, TDD should be used to shift work away from debugging sessions. You can still debug if you have no idea where a bug is, but it's easier to pinpoint a bug if you have a regression test suite with enough granularity.
You have to remember, though, that TDD is more about unit testing than integration testing. There is nothing wrong with writing integration tests, since they're a sanity check to make sure your application or system under test works.
One book about testing with xUnit frameworks is xUnit Test Patterns, which is general enough to work with any unit testing framework (even PerlUnit, I would guess); it's a cookbook of interesting tricks you can do with xUnit frameworks.
UPDATE:
There is a chapter about unit testing in Perl in Extreme Perl.
The answer, in short, is yes. The basic structure is to write a test case that reproduces the bug and fails. Then fix the bug so that the test case passes.
Yes. Of course all the tests you performed during TDD of your release will have been added to a regression test suite. But, in the case of a bug, that regression suite obviously wasn't detailed enough.
The first step in fixing a bug is replicating it and this is TDD. Once you find a test case that replicates the bug, you can expand on it if you wish (to catch other obvious problems of the same class), but I don't tend to do a lot of expansion since we have specific turnaround times for fixing a single bug.
Once you have a fix for that bug, add the test case to the regression suite. The idea is to keep adding test cases to the regression suite for both releases and bug fixes to give it very good coverage.
I always wrote tests before the actual code while fixing a bug.
This way I had an example of what to expect from working code, and I could just focus on making this test (and all the others, for regression) pass.
Yes, but beware: if you write as many bugs as I do, you will soon have a huge set of tests covering all the bugs you have written and then fixed.
This will mean test runs will be slower, and the intent of the behaviour will become muddied by your bug tests.
You could keep these tests logically separate, or revisit your original set of specified behaviour checks (read: tests) to see if you really have covered all your expected behaviour.
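One way to keep them separate, at least in RSpec, is to tag the bug-regression examples and exclude that tag from the fast everyday run; the tag name and example below are only illustrations.

# the :regression tag name is an arbitrary choice
RSpec.describe Invoice do
  it 'handles a 100% discount', :regression do
    expect(Invoice.new(amount: 50.0, discount: 1.0).total).to eq(0.0)
  end
end

Running rspec --tag ~regression then skips the bug tests, while a plain rspec run (or the CI build) still exercises them.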
I think it's important to differentiate between the two.
I think you'd fix the bug initially, and then the regression tests would be written TDD-style.
Does Ruby have any tools along the lines of pylint for analyzing source code for errors and simple coding standards?
It would be nice if it could be integrated with cruisecontrolrb for continuous integration.
Or does everyone write such good tests that they don't need source code checkers!
I reviewed a bunch of Ruby tools that are available here
http://devver.wordpress.com/2008/10/03/ruby-tools-roundup/
Most of the tools were mentioned by webmat, but if you want more information, I go pretty in depth with examples.
I also highly recommend using Metric-Fu: in one gem/plugin install it gives you three of the more popular tools, and it is built with cruisecontrolrb integration in mind.
The creator has a great post that should help get you up and running in no time.
http://jakescruggs.blogspot.com/2008/04/dead-simple-rails-metrics-with-metricfu.html
There has been a lot of activity in Ruby tools lately which I think is a good sign of a growing and maturing language.
Check these out:
On Ruby Inside: an article presenting Towelie, Flay and Simian, all tools for finding code duplication
Towelie
flay
Simian (a more general tool that supports Ruby among other languages)
reek: a code smell detector for Ruby
Roodi: checks the style of your Ruby code
flog: a code complexity analyzer
rcov: will give you C0 (if I remember correctly) code coverage analysis. Be careful though: 100% coverage is very costly and doesn't mean perfect code.
heckle: changes your code in subtle manners and runs your test suite to see if it catches it. Brutal :-)
Since they're all command-line tools they can all be integrated simply as cc.rb tasks. Just whip out your regex skillz to pick the important part of the output.
I recommend you first try them out by hand to see if they play well with your codebase and if you like the info they give you. Once you find a few that give you value, then spend time integrating them in your cc.
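As a rough illustration of that kind of integration, here is a sketch of a Rake task that shells out to flog and fails the build above an arbitrary threshold. The regex is an assumption about flog's output format, and the threshold is made up; adjust both to taste.

# Rakefile -- sketch only
task :flog do
  output = `flog lib/ 2>&1`
  average = output[/([\d.]+):\s*flog\/method average/, 1]
  abort "Could not parse flog output:\n#{output}" unless average
  puts "flog/method average: #{average}"
  abort "flog average #{average} is above the allowed 20.0" if average.to_f > 20.0
end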
One recently-updated interesting-looking tool is Ruby Object Oriented Design Inferometer - roodi for short. It's at v1.3.0, so I'm guessing it's fairly mature.
I haven't tried it myself, because my code is of course already beyond reproach (hah).
As for test coverage (oh dear, I haven't tried this one either), there's rcov.
Also (look, I am definitely going to try some of these today. One at least) you might care to take a look at flog and flay for another style checker and a refactoring candidate finder.
There are also the built-in warnings you can enable with a quick:
ruby -w
Or by setting the global variable $VERBOSE to true at any point.
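For example (the required file name is made up), putting this at the top of an entry point turns warnings on for the code loaded afterwards, much like running the script with ruby -w:

$VERBOSE = true
require_relative 'my_app'  # warnings are now printed for code parsed and run after this point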
There's a code metrics category on the Ruby Toolbox website.
RuboCop is a widely used static code analyzer.
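For example, installing the gem and pointing it at a directory is enough to get a report (the path is just an example):

gem install rubocop
rubocop lib/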
I just released Excellent, which implements several checks on Ruby code. It combines roodi, reek and flog, and also adds some Rails-specific checks:
gem sources -a http://gems.github.com
sudo gem install simplabs-excellent
May be helpful...