How to test the seemingly "untestable" in Ruby code? [closed] - ruby

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. Closed 9 years ago.
I'm writing a gem to wrap the xmpp4r RubyGem. I want to add tests for everything I can, but I'm new to testing and a little stuck here.
Essentially, I want one of my classes to have a 'connect' function, but I have no idea whatsoever how to test such a thing. All the examples I can find online about testing are over-simplified (it "is green broccoli", etc.); you get the idea.
How can you test something seemingly more complex? Links to good documentation on this stuff would be nice, as well as more personalized answers from your own experience.
EDIT: Possibly letting the test pass if it doesn't encounter any exceptions? Would I be on the right path by doing this?

Just remember that as a programmer, you are the lord of your own code world.
The same way you build applications that are easy for others to use, you can help yourself and make your code easy to test. One of the main benefits of OO programming is that it encapsulates behaviours into methods, so it is easier to test your code using those methods as key testing points. If your code is hard to test, it is because you are not seeing yourself, or the tester, as another user of your application/code.
Testing is not directly related to any framework or technology.
Try to see it as something separated from the code itself, something bigger than that.
If you do not know how to test something, it is most probably because you do not know why you need it or how it is going to be used.
Try doing some TDD (Test Driven Development) and you will get my point.
Also keep in mind there are different types of testing:
Functional testing / High Level
Unit test / Low Level
UAT / User Acceptance Testing
Performance Testing
Exploratory Testing / Hand testing
Integration testing (you can build some mockups of the pieces that you code will work with to anticipate any problem prior the integration)
Testing is expensive, so it is not a matter of testing everything; it is about testing what needs to be tested.
Even NASA does not test "everything".
Testing is as deep as programming.
Nice dive!
http://en.wikipedia.org/wiki/Software_testability

First, check if the gem itself is tested. If it is and tests are passing, you should probably trust the gem's code. Then, you can just mock any calls to the gem and return the appropriate calls when testing for success or failure scenarios.
i.e.
xmpp4r.should_receive(:connect).and_return(mock("xmpp4r"))
If the gem is not tested, it's about the same as not having any tests yourself. You should consider using another gem, or testing it yourself (or even creating one using TDD).
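Beyond mocking the gem at the RSpec level, a common way to make a connect method testable is to inject the client object, so a test can hand in a fake and never touch the network. Here's a minimal sketch; `ChatSession` and `FakeClient` are illustrative names invented for the example, not part of xmpp4r:

```ruby
# The wrapper takes its client as a constructor argument, so tests can
# substitute a fake and avoid real XMPP traffic entirely.
class ChatSession
  def initialize(client)
    @client = client
  end

  def connect(host)
    @client.connect(host)
    @connected = true
  rescue StandardError
    @connected = false
  end

  def connected?
    !!@connected
  end
end

# A hand-rolled fake standing in for the real client (e.g. Jabber::Client):
class FakeClient
  attr_reader :connected_to

  def connect(host)
    @connected_to = host
  end
end

fake = FakeClient.new
session = ChatSession.new(fake)
session.connect("example.org")
puts session.connected?   # true
puts fake.connected_to    # example.org
```

In production code you'd pass the real xmpp4r client in; in tests you pass the fake, and you can also make the fake raise to exercise the failure path.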

Related

What makes a "production code"? [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 4 years ago.
I am a Research Scientist, and the way we code is definitely not considered "production code"; it's prototypical code. But then, what makes code "production code"?
Testing for scalability, the ability to handle real-time traffic, and testing all edge cases?
But what else? For instance, I also hear that Python is less of a "production language" than Java or C#. What are the criteria for them to be production-ready?
Any books/references driving this point home would also be great!
Thanks in advance
Production code generally means it's ready to ship to a client:
Most obvious bugs are fixed.
The code is well-structured and self-documenting.
Automated tests are written and have a sufficient level of coverage.
It has gone through a peer review process before being incorporated into the main code base.
It passes the build system, which may automatically check rules like coding conventions, complexity, linting, testing, and compilation. Sometimes this includes a successful deployment to a testing environment.
How would this compare to non-production code?
Nearly all developers start with prototype/non-production code, even developers that use Test Driven Development (TDD). The goal of that code is to "just make this work" so they can develop a first-pass approach to a problem. Often this leads to poorly named variables, excessively long functions, improper formatting, and little or no tests.
Once a developer has a satisfactory working solution, they go back and clean up the code. They fix spelling errors; use design patterns, if they see one that's helpful; and make their code fit the team's coding conventions and style guide, some of which lead to real heated debates (tabs vs. spaces, for instance).
The best way to think about it is:
The first pass at writing code is a software draft: it gets the developer's ideas on the page until they have their "story" or functionality set. The goal is for the developer to understand it.
The second pass, i.e., getting it ready for production, refines it so that other people can understand the code. In paper-writing terms, you're giving it a more coherent structure and trying to convey your meaning to other people better.
That's not all.
While this will generally apply to writing code, part of saying something is "ready for production" is including all the steps not involved with the actual application code.
Often it's all the steps needed to get the code into the client's hands, or live.
This could be creating a Continuous Integration and Continuous Deployment system. Setting up servers, deployment groups, monitoring systems and so on.
Without these things your application may not be considered production ready by your organization.

Which testing technology to use for a command-line Ruby app? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. Closed 9 years ago.
I have written a Ruby program that I would like to release as a Gem. It is built using Thor and command_line_reporter. I have been building it while learning, which for me means that I have no tests. Seeing as the community likes and expects tests, which I understand, I feel I should implement this before making the program public.
While this could be taken as asking for opinions, I feel there must be something that fits my specific needs much better than anything else.
Which testing technology should/can be used for a Thor-based Ruby CLI app?
More info: The app allows the user to create a list of their favorite programs with a few fields of accompanying info. It saves all data to a file in JSON format. This is my first complete program and I have never written any tests before.
Perhaps it might help to address HOW to write tests. There are lots of test frameworks, and lots of philosophies of how we should write tests but I try to keep it simple. I generally start with these:
Test to see I got back nil or an object first.
Test to see if the object is the right type.
Test to see if mandatory attributes are set, then if they're the right types.
Once I've got those out of the way I'll start antagonizing the code, throwing out-of-bounds and evil values at it, forcing it to raise its exceptions if it's supposed to do that.
Then, as further use/testing reveals bugs, I'd add specific tests to check that those don't reappear as I screw around with the code. ("Code screwing-around" is gonna happen, so it's important I know I didn't make the program go out in flames.)
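The checklist above can be followed with plain assertions before you even pick a framework. Everything here (`Program`, `build_program`) is a made-up stand-in for whatever your own code constructs:

```ruby
# A stand-in object and constructor, invented for illustration.
Program = Struct.new(:name, :rating)

def build_program(name, rating)
  return nil if name.nil? || name.empty?
  Program.new(name, Integer(rating))   # Integer() rejects non-numeric input
end

prog = build_program("vim", "5")
raise "got nil back"   if prog.nil?                 # 1. nil or an object?
raise "wrong type"     unless prog.is_a?(Program)   # 2. right type?
raise "attrs missing"  unless prog.name == "vim" && prog.rating == 5  # 3. mandatory attributes set?

# Antagonize it: out-of-bounds and evil values should fail loudly.
begin
  build_program("vim", "not-a-number")
  raise "expected an ArgumentError"
rescue ArgumentError
  # good: the garbage input was rejected
end
```

Each of these checks translates directly into an `assert` (Test::Unit/Minitest) or an `expect` (RSpec) once you adopt a framework.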
ZenTest has the autotest command which looks for a change in your test files and runs the tests automatically. It makes it really easy to make sure I haven't borked things, because in a separate console window autotest will be doing its thing each time I save. It's a great-big safety net you'll get used to having very quickly. From the docs:
autotest is a continuous testing facility meant to be used during
development. As soon as you save a file, autotest will run the
corresponding dependent tests.
Writing tests is a necessary evil. They'll double your code-writing load, but they're very important to start early and continue maintaining. Trying to add them later to a large code base is a major problem, which is why too many apps never get unit tests. Icky.
Overwhelmingly, the answer to this is "whatever you feel like."
TestUnit and RSpec are both widely used, but it ultimately boils down to whatever you feel fits the needs of your app the best.
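Whichever framework you pick, the cheapest first step for a CLI app like this is to pull the JSON save/load logic out of the Thor commands into a plain object, which any framework can test without driving the terminal at all. A sketch under that assumption (`FavoritesStore` is a hypothetical name, not from Thor or command_line_reporter):

```ruby
require "json"
require "tempfile"

# The persistence layer extracted from the CLI: it only knows about a file
# path and an array of program records, so it needs no terminal to test.
class FavoritesStore
  def initialize(path)
    @path = path
  end

  def save(programs)
    File.write(@path, JSON.pretty_generate(programs))
  end

  def load
    File.exist?(@path) ? JSON.parse(File.read(@path)) : []
  end
end

file = Tempfile.new("favorites")
store = FavoritesStore.new(file.path)
store.save([{ "name" => "vim", "reason" => "fast" }])
puts store.load.first["name"]   # vim
```

With the store isolated like this, the Thor commands become thin wrappers, and testing them is mostly a matter of checking that they call the store with the right arguments.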

Cucumber/Capybara vs Selenium? [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 8 years ago.
The other day I was showing one of the testers at my company some tests I had written in cucumber (2 features, 5 scenarios).
Then he asked me a question that I could not answer:
How is this better than selenium or any other functionality test recording tool?
I understand that cucumber is a different technology and it's placed at a different level of testing, but I can't understand why I should bother to write and maintain Cucumber/Capybara tests.
Can someone give me a reasonable explanation for using Cucumber/Capybara instead of just Selenium?
This question is borderline asking for an opinion. Your question actually reads to me as, "What tool is right for me?" I say this because you don't give a reason for why you chose Cucumber and Capybara. I believe that to answer that tester's question, you need to answer a couple more questions first:
1.) What stage in the process are you going to be writing these tests?
Cucumber may not be the right choice for unit tests, depending on the language you're using. But it can be used for any level of testing, from unit to integration to end-user.
2.) Who is going to maintain your tests? You? Other developers? Testers? Business Analysts? Project Managers?
Automated tests must be maintained, and knowing who will be doing that can help you decide on a tool - as some will be too technical for certain users.
3.) Who is going to be defining new tests?
Cucumber is meant to be used collaboratively between development, QA, and business owners. It is the perfect tool for leveraging everyone's knowledge in the automated testing process. It requires the development of a ubiquitous language to be effective, however. You can read up on that on James Shore's Art of Agile page.
Once you've answered these questions, you're ready to address the tester's question.
However, there are a couple of points to keep in mind when comparing recording tools (such as Selenium IDE, HP Quick Test Pro, IBM Rational Functional Tester) vs. development tools (nUnit, jUnit, RSpec, Selenium WebDriver, Capybara): they are targeted towards different audiences, and they have different plusses and minuses.
Recording tools are easy for anyone to use, but the scripts they create are fragile. They break easily and require more maintenance. They are great for one-off automated testing, where you need to get it done quickly and have non-technical manpower.
Development tools have a larger learning curve and require programming (or at the least scripting) experience. The scripts are generally more robust, but require more technical knowledge to maintain. They are a good solution when you want repeatability and plan to use tests for a long time.
I strongly suggest you read The Cucumber Book. It will really help you decide if Cucumber is the right choice for you.
Cucumber isn't only a testing tool. Besides testing Cucumber features also take a role of documentation, a mechanism to collaborate with stakeholders and requirements storage (if you write them in declarative style).
You don't have to use Cucumber with Capybara. You can use Selenium directly. But Capybara has the same high-level API for all supported drivers. It's more high-level than Selenium's and lets you write tests a bit faster. You don't have to change code when you switch from one driver to another.
Tests built using test recording tools are generally much worse. Selenium IDE may produce valid programming code, but it's not good-looking and is thus quite difficult to maintain.
Cucumber is a tool used to make tests readable to business users. It consists of plain-English sentences that are matched, using regexes, to your Capybara steps.
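That matching layer is ordinary Ruby regex work. A stripped-down illustration of how a Gherkin line maps onto a step definition's pattern (the sentence and pattern here are invented for the example):

```ruby
# A Cucumber step definition registers a regex; at run time, Cucumber matches
# each plain-English line of a scenario against it and passes the captures
# (here, the program name) to the step's block.
step_pattern = /^I add "([^"]*)" to my favorites$/

line = 'I add "vim" to my favorites'
match = step_pattern.match(line)

puts match[1]                                    # vim
puts step_pattern.match?("unrelated sentence")   # false
```

In a real project, the block attached to that pattern would then call Capybara helpers like `fill_in` and `click_button` to drive the page.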
Using recording tools won't do you any good in the long run. They were meant for beginners and aren't that powerful so I recommend you go straight to coding.
You can use Selenium alone for your tests, but I would recommend you continue to use Cucumber for documentation purposes, if you find them useful and easy to work with. After all, Cucumber can use Capybara or the Selenium web driver.
Selenium IDE is good for testing features that are mostly visual (links, text, etc.). But often web apps have features that don't present themselves as visual elements, like sending emails, queueing jobs, communicating with 3rd-party services, etc. You could, for example, test whether an 'Email has been sent' message is present after submitting a form. But that doesn't really tell you whether an email was actually sent, and therefore you aren't really testing the whole feature.

How to do full stack integration testing of web applications [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 7 years ago.
I'm looking to enhance our current test suites, and continuous integration builds with full stack integration/acceptance testing.
I'm looking into tools like Culerity and Selenium that can execute front end javascript while running user stories. I'm looking for something that can provide coverage of front-end javascript and high level features without sucking up tons of development time maintaining a complex test environment. We're currently using Rspec, Cucumber, and CruiseControl.rb, so easy integration with those tools would be ideal.
Are any of the headless browsers and js-capable test environments at a point where they are worth the trouble of setting up and maintaining? What are the best options you've come across, and pitfalls to avoid?
Thanks.
You sound like you are way further down this road than I am, but I'll comment anyway.
I am working on a JavaScript project (with a Java + MySQL back end) and decided to use Selenium for testing, and to try to achieve as thorough coverage as I could. I also poked around with a few other testing tools, but I can't say I really got to know any of them. None of them appeared, from their web sites, to be very polished or popular compared to Selenium. I am planning to integrate to CruiseControl eventually, but haven't done so yet.
This has been an interesting project and at the end of the day, I am quite happy with Selenium. Selenium plusses:
Test 'scripts' can all be written in Java, no obscure scripting language involved. Among other things, you can easily do things like manipulate and verify the data in your database before and after tests.
Se also supports Perl, C#, etc. I think, although that is of no interest to me.
Selenium IDE is a great tool for quickly understanding how Se works, how locators work, etc. You don't want to actually run tests long-term using the IDE, but it's great for getting your feet wet, and for ongoing figuring things out.
Se seems to work flawlessly with jUnit. Probably TestNG as well, but have not tried that yet, it's on my todo list.
Excellent documentation and web site.
Minuses:
I spent a LOT of time figuring out how to locate elements in all cases. This is partially the 'fault' of the framework I am using (ExtJS), not Selenium.
It seems no matter what you do, Se has timing dependencies, e.g. places where you have to inject artificial pauses to make it work.
There are also monitor-size dependencies in my tests. I think this is extremely undesirable, but in some places it seems to be unavoidable. Basically, this is because there are many element types that JS doesn't let you click on programmatically.
Related to #3, in places I am forced to drive the mouse. That means you have to have a dedicated test PC. Which is no big deal, but doesn't seem right.
Tests are slow, mainly due to the time it takes Se to invoke Firefox. No doubt this is partially my environment, and I suspect I could do lots of things to improve this. However, it is really noticeable and not obvious why. It takes about 10 minutes to run about 40 tests.
Support forum is very spotty. Well, you get what you pay for. But time and again I found someone had posted about my problem, and the post was ignored or else an invalid solution was offered with no follow-up when the OP pointed out that the suggestion was bogus.
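On the timing-dependency minus above: the usual cure is to replace fixed pauses with a condition-based wait that polls until something becomes true or a deadline passes. Selenium ships such a helper (`Selenium::WebDriver::Wait`); the plain-Ruby sketch below just illustrates the mechanism without needing a browser:

```ruby
# A generic poll-and-retry helper: keep checking a condition until it holds
# or the deadline passes, instead of sleeping a fixed amount of time.
def wait_until(timeout: 5, interval: 0.1)
  deadline = Time.now + timeout
  loop do
    result = yield
    return result if result
    raise "timed out after #{timeout}s" if Time.now > deadline
    sleep interval
  end
end

ready_at = Time.now + 0.3
puts wait_until { Time.now >= ready_at }   # true, after ~0.3s of polling
```

The win over `sleep 2` is that the wait ends as soon as the condition holds, and a genuine failure surfaces as a timeout error rather than a mysteriously flaky assertion.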
HTH, cheers.

I know I may not write production code until I have written a failing unit test, so can I tell my manager I cannot write UIs? [closed]

Closed. This question is off-topic. It is not currently accepting answers. Closed 10 years ago.
I've been using TDD for server-side development. I'm not really sure if the benefits of having all of my production code surrounded by unit tests outweigh the disadvantage of spending 4x more time than needed on refactoring.
But when I'm developing UI code, I simply cannot apply TDD. To all fundamentalists out there, the first law of TDD states that "you may not write production code until you have written a failing unit test". But how can this be if you are developing UIs?
(One can use acceptance test frameworks like Selenium, but that doesn't count, because you don't interact directly with the source code.)
So, can I tell my manager that because of the new >90% code coverage policy I cannot write user interface code?
If you find that doing TDD causes you to spend 4x more time on refactoring, you need to write better, more isolated tests, and really let the tests drive the design, as intended. You are also not counting the time you spend in the debugger when you refactor without tests, not to mention how much time everyone else spends on bugs you introduce when you refactor.
Anyway, here is some good advice about what TDD means for UI development. How much that will translate into code coverage depends heavily on the UI framework.
Definitely don't tell your manager you can't do it, he may just replace you with someone who can.
First off, even Robert Martin has testing challenges with UIs.
When TDDing a UI, you write "behavioral contracts" that get as close to the action as possible. Ideally that means unit tests. But some UI frameworks make that inordinately difficult, requiring that you step back and use integration or "acceptance" tests to capture how you expect the UI to behave.
Does it not count if you can't use unit tests? That depends on which rules you're using to keep score. The "only unit tests count" rule is a good one for beginners to try to live with, in the same vein as "don't split infinitives" or "avoid the passive voice". Eventually, you learn where the boundaries of that rule are. In one podcast, Kent Beck talks about using combinations of unit and integration tests, as appropriate (adding, if I recall correctly, that it doesn't bother him).
And if TDD is your goal, you can most certainly write Selenium tests first, though that can be a slow way to proceed. I've worked on several projects that have used Selenium RC to great effect (and great pain, because the tests run so slowly).
Whatever your framework, you can Google around for TDD tips from people who've fought the same battles.
TDD is about testing methods in isolation. If you want to test your UI you are doing integration tests and not unit tests. So if you carefully separate the concerns in your application you will be able to successfully apply TDD to ANY kind of project.
That policy sounds a little artificial, but I would agree with the answer that UIs require functional test cases, not unit test. I disagree however with the point about which comes first. I've worked in an environment where the UI functional tests had to be written before the UI was developed and found it to work extremely well. Of course, this assumes that you do some design work up front too. As long as the test case author and the developer agree on the design it's possible for someone to write the test cases before you start coding; then your code has to make all the test cases pass. Same basic principle but it doesn't follow the law to the letter.
Unit tests are inappropriate for UI code. Functional tests are used to test the UI, but you cannot feasibly write those first. You should check with your manager to see if the >90% code coverage policy covers UI code as well. If it does, he should probably seriously rethink that move.
Separate the business logic from the UI and ensure that the UI code takes less than 10% of the total? Separation of concerns is the main goal of TDD, so that's actually a good thing.
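In Ruby terms, that separation usually means pulling the display decisions into a plain presenter object that needs no UI framework at all to test. A minimal sketch (`LoginPresenter` and its rules are invented for illustration):

```ruby
# All the decisions the UI would otherwise make inline live here, in a plain
# object with no framework dependency, so ordinary unit tests cover them.
class LoginPresenter
  def initialize(attempts:, locked:)
    @attempts = attempts
    @locked = locked
  end

  def button_label
    @locked ? "Account locked" : "Log in"
  end

  def show_warning?
    !@locked && @attempts >= 3
  end
end

presenter = LoginPresenter.new(attempts: 4, locked: false)
puts presenter.button_label    # Log in
puts presenter.show_warning?   # true
```

The view layer then just renders `button_label` and `show_warning?`, leaving only a thin, hard-to-test skin outside the coverage requirement.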
As far as 90% coverage goes ... well, best course is to review extant literature (I'd focus on Kent Beck and Bob Martin), and I think you'll find support for not following a mindless coverage percentage (in fact, I think Uncle Bob wrote a blog post on this recently).
Having a >90% code coverage policy is dumb, because a smart dev can get 100% coverage in one test. ;)
If you are using WPF, you can test your UI code if you use the MVVM pattern. By testing your UI code, I mean you can test the ViewModel, but there is nothing that I know of that can test XAML.
Read Phlip's book
