Closed 9 years ago as needing more focus; no longer accepting answers.
This is supposed to be an open question, but I would like the answers to focus on code design aspects.
To help narrow the scope of the answers:
How do you decide whether a class should be a concrete class or an interface that is mocked, when it's not obvious?
What is your experience of allocating roles and responsibilities?
To what dependency depth do you typically go?
How much does the already-perceived target design influence the TDD process?
What is your experience of making a TDD-driven implementation fit within pre-existing code?
Any other design considerations?
Thanks!
Uncle Bob defined the three laws of TDD:
You are not allowed to write any production code unless it is to make a failing unit test pass.
You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
Following the classic Red-Green-Refactor cycle, remember about the four rules of simple design defined by Kent Beck. Apply them during the refactor phase. The code must (in priority order):
Run all the tests
Contain no duplicate code
Express all the ideas the author wants to express
Minimize classes and methods
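To make the cycle concrete, here is a minimal sketch of one Red-Green-Refactor pass using Python's unittest; the `add` function and the test names are made up for illustration:

```python
import unittest

# RED: this test is written first, against an add() that does not exist yet,
# so the first run fails (a compilation/NameError failure counts as red).
class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negatives(self):
        self.assertEqual(add(-1, -1), -2)

# GREEN: the simplest production code sufficient to make the failing tests pass.
def add(a, b):
    return a + b

# REFACTOR: with the suite green, remove duplication and improve names,
# re-running all the tests after every small change.
```

Note that only enough production code is written to satisfy the tests, per the third law; any further behaviour would first need another failing test.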
The 0th law of TDD:
Break the TDD process whenever it is too tedious, motherfucker!
How do you know that TDD is too tedious? When you regularly write tests in
5 minutes, and it suddenly takes you more than an 8-hour shift to write
the tests for that one part which calls some third-party component, then it is too
tedious. Forget unit tests there and test it manually from time to time. The goal
is to have 95% covered, not 100%.
Closed 6 years ago as opinion-based; no longer accepting answers.
I'm working at a company with a big system that has a messy code structure. I want to apply proper standards such as polymorphism and design patterns.
But the code is such a mess that it would need heavy refactoring to do so. My company also keeps assigning me tasks, and if I were to refactor heavily, that would open up many bugs in the system, since it's not unit tested, of course.
What do you think? Should I work on the tasks within this bad structure just to finish the work? Or tell them that we need to rebuild many things (even though they won't notice a difference, as the features already work)?
I think you need to start off with some unit tests.
Whilst doing the tasks you have been assigned, you can first write tests covering the code you are about to change; then you can refactor it safely.
Now you can start to write the code for your task, test-first.
If the code that is already there works, then refactoring is the best option. If it doesn't work, then a rewrite becomes possible.
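The first step above, pinning down the existing behaviour before touching it, can be sketched as a "characterization" test. The `legacy_price` function here is a made-up stand-in for whatever messy code you are about to change:

```python
import unittest

# A made-up stand-in for some messy legacy function you need to change.
def legacy_price(quantity, unit_price):
    total = quantity * unit_price
    if total > 100:          # undocumented bulk discount buried in the code
        total = total * 0.9
    return total

class TestLegacyPrice(unittest.TestCase):
    """Characterization tests: they assert what the code DOES today,
    not what it should do, so a refactoring can't silently change it."""

    def test_small_order(self):
        self.assertEqual(legacy_price(2, 10), 20)

    def test_bulk_discount_kicks_in(self):
        self.assertAlmostEqual(legacy_price(20, 10), 180.0)
```

Once these pass against the untouched code, you can refactor freely and re-run them after every change.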
Well...you need to work on multiple aspects.
First, learn the best practices for writing clean code (if you haven't yet) and ask your team members to do the same. There are many useful books and online resources available.
Second, do not expect the situation to change overnight. Adopt the "Boy Scout Rule" (leave the code cleaner than you found it); it will gradually improve the code quality.
Third, start building your corpus of unit tests. Slowly, testable code will emerge out of the sea of the untestable monolith.
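One common way that testable code "emerges" is to extract pure logic out of a tangled method each time you touch it. A hypothetical before/after sketch (the order/database names are illustrative, not from any real codebase):

```python
# Before: calculation tangled with a side effect, hard to unit test.
# def process_order(order):
#     total = sum(i["price"] * i["qty"] for i in order["items"])
#     if order["vip"]:
#         total *= 0.95
#     database.save(order["id"], total)   # I/O mixed into the logic

# After: the calculation is a pure function with its own tests,
# and the side effect is injected, so the wrapper stays thin.
def order_total(items, vip):
    total = sum(i["price"] * i["qty"] for i in items)
    return total * 0.95 if vip else total

def process_order(order, save):
    save(order["id"], order_total(order["items"], order["vip"]))
```

Each such extraction is a small Boy-Scout-Rule improvement: the pure function gets a unit test, and the untestable surface shrinks a little.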
Closed 9 years ago as opinion-based; no longer accepting answers.
Knowing all the advantages TDD offers: is it possible to apply it strictly, all the time, in all kinds of projects (including large ones)? Or is that nothing more than an exaggeration?
One has to consider both advantages and disadvantages, the main disadvantage being the time cost: you typically write roughly one line of test code for every line of production code. TDD makes little sense if your functionality is very likely to change a lot in the future, because you will have to keep rewriting the tests. An example is an early prototype, where you mostly want to test an idea rather than build a super-stable application.
It is another matter if you want your application to support change; then TDD will pay off in the long term. Is it possible to apply strictly? Certainly it is possible, but not easy, as business people typically try to reduce short-term costs and disallow using TDD fully. TDD creates short-term costs (writing the tests) in the hope of reducing long-term costs (losing quality to sudden bugs, or extensive manual testing).
Sometimes it is not possible to use TDD.
For example, when writing C++ classes to display GUI components, it makes no sense to unit test everything. If you use the MVP design pattern, the models in the MVP trio should be unit tested, but not the views.
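In the same spirit, the non-view parts of an MVP trio can be tested against a fake view, so the real GUI class stays thin and untested. A sketch in Python; the class and method names are illustrative, not from any particular GUI framework:

```python
class LoginPresenter:
    """Holds the logic worth unit testing; knows the view only by its interface."""
    def __init__(self, view):
        self.view = view

    def submit(self, username, password):
        if not username or not password:
            self.view.show_error("Both fields are required")
        else:
            self.view.show_welcome(username)

class FakeView:
    """Test double standing in for the real GUI widget."""
    def __init__(self):
        self.errors, self.welcomes = [], []
    def show_error(self, message):
        self.errors.append(message)
    def show_welcome(self, name):
        self.welcomes.append(name)

# The logic is fully exercised without ever constructing a real widget.
view = FakeView()
LoginPresenter(view).submit("", "secret")
assert view.errors == ["Both fields are required"]
```

The real view then only forwards clicks to the presenter and renders what it is told, leaving almost nothing in it that could usefully be unit tested.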
Closed 5 years ago as off-topic (not about programming); no longer accepting answers.
What exactly does TDD stand for?
I have seen two versions:
Test Driven Design
Test Driven Development
So, which one is correct and why?
They are broadly the same, but differ in actual meaning.
Test-driven development means that you write your tests and then your classes (yes, in that order).
The design part means that you end up with better-designed classes if you write your tests first. You won't write quick-and-dirty code if you know it will have to go through some tests. Thus, test-driven development results in test-driven design.
I think it rather means Test Driven Development, as it is a way to write software (i.e., to develop) by first writing some tests and then the production code that satisfies all the requirements defined by those tests.
Closed 3 years ago as needing more focus; no longer accepting answers.
I work for a web development company, and quite often some of our projects come back from the client after release with small bugs.
This frustrates my boss, as it means we must carry out development work to fix the issues, for which we are not getting paid.
How can we prevent errors from ever occurring, or is this just something that should be factored into the initial cost?
Our philosophy of time versus money applies to your circumstances as well. The more time spent on unit testing, the less likely it is that the code will contain bugs after it is released. At the same time, the more time you spend unit testing, the less profitable the code is.
We take a more cavalier approach. Our programmers test their own code, pass it to a fellow programmer for testing, and then a final review and unit test is performed by a supervisor. If all three tests pass, the code is stamped complete and passed on to the end users.
Bugs are an inherent part of programming and must be anticipated as inevitable. No amount of testing can truly guarantee that code is 100% bug free.
Some good methods of avoiding writing bugs are found at this site.
http://sites.google.com/site/yacoset/Home/how-to-avoid-writing-bugs
Released software will always contain some bugs; even the big companies like Microsoft, Google, and Apple can't release without bugs. Of course you can do lots of things to prevent them, like unit testing, smoke testing, stress testing, etc., but there will always be bugs. That's as certain as rain when you're on holiday in England.
Make sure you discuss things like this during the sales process. For example, three weeks of aftercare (bugs fixed for free); after that, they can buy maintenance hours.
Closed 3 years ago as opinion-based; no longer accepting answers.
Ruby coders: How do you monitor your productivity?
I'm curious to know what you use to keep track of how much you do and how well you do it.
For any programming task, the best way to track productivity is by tracking requirements/features delivered. Every agile methodology puts the emphasis on delivering working software (read: meeting part of the requirements each sprint). So indicators such as the number of lines of code are moot, especially when people pair-program most of the time and check in code under the other person's login.
As with any language, you must set goals/milestones for your project, then break those goals down into individual tasks. The smaller and more specific a task is, the easier it will be to track your progress. I use a project management web application called Redmine to keep track of these tasks. For each task, I first write the tests that outline the criteria the code must meet. My primary use of Ruby has been with Ruby on Rails, which has excellent support for testing. Once I am done with the tests, I begin coding the application. When the application passes all the tests for a given task, the task can be marked as completed.
At the beginning of a project, you can judge by the relevancy and number of tests. Afterward, the number of passing tests.
Relevancy is the key word, of course. If the code doesn't do anything yet, or doesn't deliver any value, then getting it to that point is your number one test of productivity.
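If "number of passing tests" is the indicator you settle on, most test runners report it directly. A minimal sketch with Python's unittest run in-process (the example tests are placeholders):

```python
import unittest

class ExampleSuite(unittest.TestCase):
    """Placeholder tests standing in for a real project's suite."""
    def test_feature_a(self):
        self.assertEqual(1 + 1, 2)

    def test_feature_b(self):
        self.assertTrue("done".isalpha())

# Run the suite programmatically and count results instead of reading dots.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ExampleSuite)
result = unittest.TestResult()
suite.run(result)
passing = result.testsRun - len(result.failures) - len(result.errors)
print(f"{passing}/{result.testsRun} tests passing")
```

Tracking that ratio over time only means something if the tests stay relevant, which is exactly the caveat above.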