For a number of years now, I have been interested in TDD, but one or two things just didn't click. I am pretty sure these are the usual thoughts most people have when trying it: "The examples in the book are wonderful, but my code is a lot more complicated than that. I never have a procedure that does one thing; it will call three others, and they will call three others, and those will get data from the DB... bla bla bla."
A little while ago, I found some videos on SOLID (anyone who is stuck, thinking TDD would be awesome, but... go find a few videos on SOLID, trust me). Each point seemed slightly more confusing than the last, but by the end everything just fell into place, including how I think about testing code and TDD.
I, of course, have a lot of old code that isn't written like this, but I am okay with that, because I now have a better idea of how it should be. Whenever I work on anything, I can take it out and do it properly (even when that means cutting out the small part of a method that needs updating, giving it its own class, and calling that).
That said, I have a few more questions. I would like to know where I might be able to find answers to them, or whether there is a standard approach.
How much should be tested?
My assumption is all of it. A lot of my functions take input parameters and run a stored procedure. My guess on how to test that would be: with a given set of input parameters, is the correct stored procedure being called, and are the parameters being passed in correctly? Often this will be obvious (sometimes an input array of numbers will be transformed into a comma-separated string). If nothing else, in this example, while the test might not be as valuable, it will serve as documentation.
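For illustration, a sketch of that kind of test using JUnit 5 and Mockito; every name here (StoredProcExecutor, EmployeeGateway, usp_UpdateEmployees) is invented, not taken from the question:

import static org.mockito.Mockito.*;

import java.util.Arrays;
import java.util.stream.Collectors;
import org.junit.jupiter.api.Test;

// Hypothetical seam around the database call, so the test never touches the DB.
interface StoredProcExecutor {
    void execute(String procName, String... params);
}

class EmployeeGateway {
    private final StoredProcExecutor sp;
    EmployeeGateway(StoredProcExecutor sp) { this.sp = sp; }

    void updateEmployees(int[] ids) {
        // Transform the array into the comma-separated string the proc expects.
        String csv = Arrays.stream(ids)
                           .mapToObj(String::valueOf)
                           .collect(Collectors.joining(","));
        sp.execute("usp_UpdateEmployees", csv);
    }
}

class EmployeeGatewayTest {
    @Test
    void callsTheRightProcWithTransformedParameters() {
        StoredProcExecutor spy = mock(StoredProcExecutor.class);
        new EmployeeGateway(spy).updateEmployees(new int[] { 1, 2, 3 });
        // Asserts both: which procedure was called, and how the params were built.
        verify(spy).execute("usp_UpdateEmployees", "1,2,3");
    }
}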
How do I name things?
This is the old problem in development. Should the class be named like the method would be, UpdateEmployee, or should there be a whole lot of -er classes (EmployeeUpdater, EmployeeGetter, etc.)?
How is IOC generally handled?
This is still fine for now: I am creating interfaces, implementations, setting up IoC, and so on.
I can see, though, that pretty soon I am going to have pages and pages of interface/class mappings in my IoC initialization method, or I imagine it splitting into sections, with one method that calls a few other methods, each registering classes (by namespace, or something). Is this how it generally works, or are there smarter ways of managing this?
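A sketch of that kind of sectioned registration, shown with Google Guice for illustration (the domain types are invented; most containers have an equivalent module or registry mechanism):

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;

interface EmployeeReader {}
class SqlEmployeeReader implements EmployeeReader {}
interface EmployeeUpdater {}
class SqlEmployeeUpdater implements EmployeeUpdater {}

// One module per functional area, instead of one giant initialization method.
class EmployeeModule extends AbstractModule {
    @Override protected void configure() {
        bind(EmployeeReader.class).to(SqlEmployeeReader.class);
        bind(EmployeeUpdater.class).to(SqlEmployeeUpdater.class);
    }
}

class CompositionRoot {
    public static void main(String[] args) {
        // The root lists one line per area, not one line per class.
        Injector injector = Guice.createInjector(new EmployeeModule());
        EmployeeUpdater updater = injector.getInstance(EmployeeUpdater.class);
    }
}

Many containers also offer convention-based registration (scanning by namespace or assembly), which can shrink even the per-area modules.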
I recommend reading Clean Code by Robert C. Martin.
In my view...
How much should be tested?
There is a big difference between how much and how well.
Ultimately it's a judgment call and/or a simple cost/benefit analysis.
Critical apps/code should be tested more thoroughly.
Working pure TDD means your code will be highly tested - easily > 90% coverage, but remember there is a difference between test quality and coverage. You may decide to test more edge cases.
You can get 100% coverage with one test case, but it's pragmatic to test a range of values, e.g. 0, 1, many, and the boundaries.
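A minimal sketch of that range-of-values style with a single parameterized test (JUnit 5; the clamp() helper is invented purely to have something to exercise):

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class Clamp {
    static int clamp(int value, int min, int max) {
        return Math.max(min, Math.min(max, value));
    }
}

class ClampTest {
    // One test method, many cases: below, on, inside, and above the boundaries.
    @ParameterizedTest
    @CsvSource({ "-1, 0", "0, 0", "1, 1", "9, 9", "10, 10", "11, 10" })
    void clampsInto0To10(int input, int expected) {
        assertEquals(expected, Clamp.clamp(input, 0, 10));
    }
}

Any single row here already gives 100% line coverage of clamp(); the extra rows are what actually pin down the boundary behaviour.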
How do I name things?
For Java, as an example, look at the standard Java API documentation and see how they do it.
Referring to Clean Code: naming is difficult and deserves the effort; refactor (rename) when a name no longer fits.
Example Classes from Java's API's
FileFilter
DesktopManager
Names should make it obvious what the class/method/variable does.
Refer to Kent Beck's Four Rules of Simple Design (Express intent)
How is IOC generally handled?
Maybe someone else can expand on this point more, but referring to Extreme Programming: don't use interfaces for the sake of it, only when you need them. If you only have one concrete implementation, you probably don't need an interface. Refactor to add interfaces, following known design patterns, when you have a real need for them.
https://www.martinfowler.com/articles/designDead.html
A few weeks ago I started my first project with TDD. Up to now, I have only read one book about it.
My main concern: how to write tests for complex methods/classes. I wrote a class that calculates a binomial distribution. A method of this class takes n, k, and p as input and calculates the corresponding probability. (In fact it does a bit more; that's why I had to write it myself, but let's stick to this description of the class, for ease of argument.)
What I did to test this method: I copied some tables with different n that I found on the web into my code, randomly picked an entry in a table, fed the corresponding values for n, k, and p into my function, and checked whether the result was near the value in the table. I repeated this a number of times for every table.
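A sketch of that table-driven style (JUnit 5; the pmf() implementation and the table values are illustrative, not the actual class from the question):

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class Binomial {
    // Reference implementation for the sketch: C(n, k) * p^k * (1-p)^(n-k).
    static double pmf(int n, int k, double p) {
        double coeff = 1.0;
        for (int i = 1; i <= k; i++) {
            coeff = coeff * (n - k + i) / i;
        }
        return coeff * Math.pow(p, k) * Math.pow(1 - p, n - k);
    }
}

class BinomialTest {
    @Test
    void matchesPublishedTable() {
        // rows: n, k, p, expected probability (as printed in a table)
        double[][] table = {
            {  5, 2, 0.5, 0.3125  },
            {  5, 0, 0.5, 0.03125 },
            { 10, 3, 0.2, 0.2013  },
        };
        for (double[] row : table) {
            assertEquals(row[3], Binomial.pmf((int) row[0], (int) row[1], row[2]),
                         1e-4);  // tolerance, since table values are rounded
        }
    }
}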
This all works well now, but after writing the test, I had to code for a few hours before the functionality was really done. From reading the book, I had the impression that I should not code for longer than a few minutes before the test shows green again. What did I do wrong here? Of course, I have broken this task down into a lot of methods, but they are all private.
A related question: was it a bad idea to pick numbers randomly from the table? In case of an error, I display the random seed used by that run, so that I can reproduce the bug.
I don't agree with the people saying that it's OK to test private code, even if you make it into separate classes. You should test the entry points to your application (or your library, if it's a library you're coding). When you test private code, you limit your refactoring possibilities later (because refactoring your private classes means refactoring your test code, which you should refrain from doing). If you end up reusing this private code elsewhere, then sure, create separate classes and test them, but until you do, assume that You Ain't Gonna Need It.
To answer your question, I think that yes, in some cases, it's not a "2 minutes until you go green" situation. In those cases, I think it's ok for the tests to take a long time to go green. But most situations are "2 minutes until you go green" situations. In your case (I don't know squat about binomial distribution), you wrote you have 3 arguments, n, k and p. If you keep k and p constant, is your function any simpler to implement? If yes, you should start by creating tests that always have constant k and p. When your tests pass, introduce a new value for k, and then for p.
"I had the impression that I should not code longer than a few minutes, until the test shows green again. What did I do wrong here?"
Westphal is correct up to a point.
Some functionality starts simple and can be tested simply and coded simply.
Some functionality does not start out simple. Simple is hard to achieve. Dijkstra (EWD) says that simplicity is not valued because it is so difficult to achieve.
If your function body is hard to write, it isn't simple. This means you have to work much harder to reduce it to something simple.
After you eventually achieve simplicity, you, too, can write a book showing how simple it is.
Until you achieve simplicity, it will take a long time to write things.
"Was it a bad idea to pick randomly numbers from the table?"
Yes. If you have sample data, run your test against all the sample data. Use a loop or something, and test everything you can possibly test.
Don't select one row, randomly or otherwise; select all rows.
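One way to do that so each row still fails individually (JUnit 5 @MethodSource; this reuses the illustrative Binomial.pmf from the earlier sketch):

import static org.junit.jupiter.api.Assertions.assertEquals;
import java.util.stream.Stream;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.MethodSource;

class BinomialTableTest {
    // Every row from the published table, not a random sample of it.
    static Stream<Arguments> tableRows() {
        return Stream.of(
            Arguments.of( 5, 2, 0.5, 0.3125),
            Arguments.of( 5, 0, 0.5, 0.03125),
            Arguments.of(10, 3, 0.2, 0.2013));
    }

    @ParameterizedTest
    @MethodSource("tableRows")
    void matchesTableRow(int n, int k, double p, double expected) {
        assertEquals(expected, Binomial.pmf(n, k, p), 1e-4);
    }
}

Unlike a loop inside one @Test, a parameterized test reports each failing row by name and keeps running the remaining rows.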
You should TDD using baby steps. Try thinking of tests that will require less code to be written. Then write the code. Then write another test, and so on.
Try to break your problem into smaller problems (you probably used some other methods to complete your code). You could TDD those smaller methods.
EDIT (based on the comments):
Testing private methods is not necessarily a bad thing. Sometimes they really are implementation details, but sometimes they act like an interface (in that case, you could follow my suggestion in the next paragraph).
Another option is to create other classes (implemented against interfaces that are injected) to take on some of the responsibilities (maybe some of those smaller methods), test them separately, and mock them when testing your main class.
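A sketch of that suggestion (Mockito; the extracted FactorialSource responsibility and all the stubbed values are invented):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.*;
import org.junit.jupiter.api.Test;

// The extracted responsibility, injected into the main class.
interface FactorialSource {
    double logFactorial(int n);
}

class LogBinomialCoefficient {
    private final FactorialSource factorials;
    LogBinomialCoefficient(FactorialSource factorials) { this.factorials = factorials; }

    double of(int n, int k) {
        // ln C(n, k) = ln n! - ln k! - ln (n-k)!
        return factorials.logFactorial(n)
             - factorials.logFactorial(k)
             - factorials.logFactorial(n - k);
    }
}

class LogBinomialCoefficientTest {
    @Test
    void combinesTheInjectedPieces() {
        FactorialSource fake = mock(FactorialSource.class);
        when(fake.logFactorial(5)).thenReturn(4.787);  // stubbed ln(5!)
        when(fake.logFactorial(2)).thenReturn(0.693);  // stubbed ln(2!)
        when(fake.logFactorial(3)).thenReturn(1.792);  // stubbed ln(3!)

        double result = new LogBinomialCoefficient(fake).of(5, 2);

        assertEquals(4.787 - 0.693 - 1.792, result, 1e-9);
    }
}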
Finally, I don't see spending more time coding as a really big problem. Some problems are really more complex to implement than to test, and require much thinking time.
You are correct about short, quick refactors; I rarely go more than a few minutes between rebuild/test, no matter how complicated the change. It takes a little practice.
The test you described is more of a system test than a unit test, though. A unit test tries never to test more than a single method; in order to reduce complexity, you should probably break your problem down into quite a few methods.
The system test should probably be done after you have built up your functionality with small unit tests on small, straightforward methods.
Even if the methods are just taking a part of the formula out of a longer method, you get the advantage of readability (the method name should be more readable than the formula part it replaces) and if the methods are final the JIT should inline them so you don't lose any speed.
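A small sketch of that kind of extraction, splitting the illustrative pmf() formula from earlier into a named piece:

class Binomial {
    static double pmf(int n, int k, double p) {
        // The extracted method's name now documents what this factor is.
        return coefficient(n, k) * Math.pow(p, k) * Math.pow(1 - p, n - k);
    }

    // "n choose k", pulled out of the longer formula; small static methods
    // like this are easily inlined by the JIT, so readability costs nothing.
    private static double coefficient(int n, int k) {
        double c = 1.0;
        for (int i = 1; i <= k; i++) {
            c = c * (n - k + i) / i;
        }
        return c;
    }
}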
On the other hand, if your formula isn't that big, maybe you should just write it all in one method, test it like you did, and take the downtime; rules are made to be broken.
It's difficult to answer your question without knowing a bit more about what you wanted to implement. It sounds like it was not easily partitionable into testable parts: either the functionality works as a whole or it doesn't. If that is the case, it's no wonder it took you hours to implement.
As to your second question: Yes, I think it's a bad idea to make the test fixture random. Why did you do this in the first place? Changing the fixture changes the test.
Avoid developing complex methods with TDD until you have developed simple methods as building blocks for the more complex ones. TDD is typically used to create a quantity of simple functionality that can be combined to produce more complex behaviour. Complex methods/classes should always be able to be broken down into simpler parts, but it is not always obvious how, and it is often problem-specific. The test you have written sounds like it might be more of an integration test, making sure all the components work together correctly, although the complexity of the problem you describe only borders on requiring a set of components to solve it. The situation you describe sounds like this:
class A {
    public void doLotsOfStuff() { /* calls doTask1..n */ }
    private void doTask1() { /* ... */ }
    private void doTask2() { /* ... */ }
    private void doTask3() { /* ... */ }
}
You will find it quite hard to develop with TDD if you start by writing a test for the greatest unit of functionality (i.e. doLotsOfStuff()). By breaking the problem down into more manageable chunks and approaching it from the end of simplest functionality, you will also be able to create more discrete tests (much more useful than tests that check for everything!). Perhaps your potential solution could be reformulated like this:
class A {
    public void doLotsOfStuff() { /* calls doTask1..n */ }
    public void doTask1() { /* ... */ }
    public void doTask2() { /* ... */ }
    public void doTask3() { /* ... */ }
}
Whilst your private methods may be implementation detail, that is not a reason to avoid testing them in isolation. As with many problems, a divide-and-conquer approach proves effective here. The real question is: what size is a suitably testable and maintainable chunk of functionality? Only you can answer that, based on your knowledge of the problem and your own judgement of applying your abilities to the task.
I think the style of testing you have is totally appropriate for code that's primarily a computation. Rather than picking a random row from your known-results table, it'd be better to just hardcode the significant edge cases. That way your tests consistently verify the same thing, and when one breaks you know what it was.
Yes, TDD prescribes short spans from test to implementation, but what you've done is still well beyond the standard you'll find in industry. You can now rely on the code to calculate what it should, and you can refactor/extend it with a degree of certainty that you aren't breaking it.
As you learn more testing techniques, you may find different approaches that shorten the red/green cycle. In the meantime, don't feel bad about it. It's a means to an end, not an end in itself.
Do you use any metrics to decide which parts of the code (classes, modules, libraries) should be consolidated or refactored next?
I don't use any metrics which can be calculated automatically.
I use code smells and similar heuristics to detect bad code, and then I fix it as soon as I notice it. I don't have a checklist for finding problems; mostly it's a gut feeling that "this code looks messy", followed by reasoning about why it is messy and figuring out a solution. Simple refactorings, like giving a variable a more descriptive name or extracting a method, take only a few seconds. More intensive refactorings, such as extracting a class, might take an hour or two (in which case I might leave a TODO comment and refactor it later).
One important heuristic I use is the Single Responsibility Principle. It makes classes nicely cohesive. In some cases, I use the size of the class in lines of code as a cue to look more carefully at whether it has multiple responsibilities. In my current project I've noticed that, when writing Java, most classes come out at less than 100 lines, and often when the size approaches 200 lines the class does many unrelated things and can be split up into more focused, cohesive classes.
Each time I need to add new functionality, I search for existing code that does something similar. Once I find such code, I consider refactoring it to solve both the original task and the new one. Of course, I don't decide to refactor every time; most often I reuse the code as it is.
I generally only refactor "on-demand", i.e. if I see a concrete, immediate problem with the code.
Often when I need to implement a new feature or fix a bug, I find that the current structure of the code makes this difficult, such as:
too many places to change because of copy&paste
unsuitable data structures
things hardcoded that need to change
methods/classes too big to understand
Then I will refactor.
I sometimes see code that seems problematic and which I'd like to change, but I resist the urge if the area is not currently being worked on.
I see refactoring as a balance between future-proofing the code, and doing things which do not really generate any immediate value. Therefore I would not normally refactor unless I see a concrete need.
I'd like to hear about experiences from people who refactor as a matter of routine. How do you stop yourself from polishing so much you lose time for important features?
We use cyclomatic complexity to identify the code that needs to be refactored next.
I use Source Monitor and routinely refactor methods when the complexity metric goes above about 8.0.
It's prudent to break a long function into a chief function and helper functions.
I know that, outside the module, only the chief function will be called, but its length may prove intimidating.
Textbooks put a limit on the number of lines, but I feel that this is too rigid.
P.S. I am programming in Python and need to process incoming messages. The function returns a tuple containing the message, but converted to Python's internal data types.
So there is somewhat independent code for each message type.
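The question is about Python, but the chief/helper shape is language-neutral; a sketch in Java (the message formats and type tags are invented): a chief function that only dispatches, with one helper per message type.

class MessageParser {
    Object parse(String raw) {
        // Chief function: recognize the type, then delegate; it stays short
        // no matter how many message types there are.
        String type = raw.length() >= 4 ? raw.substring(0, 4) : "";
        switch (type) {
            case "HRTB": return parseHeartbeat(raw);
            case "ORDR": return parseOrder(raw);
            default:     throw new IllegalArgumentException("unknown type: " + type);
        }
    }

    // Each helper owns the independent, type-specific code,
    // so it can be read (and tested) on its own.
    private Heartbeat parseHeartbeat(String raw) { return new Heartbeat(raw.substring(4)); }
    private Order parseOrder(String raw) { return new Order(raw.substring(4)); }
}

record Heartbeat(String body) {}
record Order(String body) {}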
Duplicate question: When is a function too long?
I think you need to go about this from the other end of the problem. Think bottom-up. Identify small units of work, as small as possible, and start composing your code that way. You will only run into spaghetti-code issues when you code top-down and don't keep a structured approach.
If you already have spaghetti code and need to refactor, you pretty much have to start over. It is probably more work to break up existing spaghetti code than to rewrite it, and the result may not be as good.
I don't think there should be a hard number for lines of code in a method either, but well-written code rarely has methods with more than 5 to 10 lines in the lower layers, or 20 to 30 lines in the business logic, to give you some kind of metric.
I'm not a big fan of breaking a function into multiple functions unnecessarily. It's not a hard and fast thing - if there are things that seem like distinct logical units, then by all means, break those out and think about them separately. But don't just break things out for the sake of some guideline like "one page per function" or "N lines per function".
One good rule of thumb is that if it doesn't fit on a single screen, it is worth thinking about splitting it up. But only if it makes sense to split it up; some long functions are perfectly readable, and it doesn't make any sense to slavishly split them into multiple functions just for the sake of it.
Never write a function that, when printed on fanfold paper, is taller than you are.
I like the rule of thumb that you should break out the subfunction if you can think of a good domain-relevant name for it.
When someone can understand the top-level function without necessarily having to look up the definition of the sub-function, you've likely made a net gain. (But when you break it down too far, your names start referring to your implementation artifacts rather than the domain)
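A small invented example of that rule of thumb: each extracted piece has a domain-relevant name, so the top-level function reads without looking anything up.

class Payroll {
    double netPay(Employee e) {
        return grossPay(e) - withheldTax(e);  // reads like the domain, not the implementation
    }

    private double grossPay(Employee e) {
        return e.hoursWorked() * e.hourlyRate();
    }

    private double withheldTax(Employee e) {
        return grossPay(e) * 0.2;  // flat rate, purely illustrative
    }
}

record Employee(double hoursWorked, double hourlyRate) {}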
I was recently discussing this with a friend, and he suggested refactoring to separate concerns; I must say I have to agree. That is, one function should do one thing; if it does more than one thing, split it up. If not, leave it together; it makes no sense to split up a function only to obfuscate the meaning. After all, a function is a block of code that does one thing!
A limit in terms of number of lines is often impractical because it doesn't account well for readability. It's better to try to separate groups of lines of code that have just a few inputs and just a few outputs and make that a separate function. It's not always possible; then it's often wise to just leave the code as it is and not refactor for the sake of refactoring.
Well, since I am coding in Python, I have the liberty to write functions inside functions, unlike in C, C++, or Java. This, I feel, is a better choice.
It's not specified, but the line count should be as low as possible. You may follow the Rule of 30; I follow it in my PHP scripts when needed.
Rule of 30:
“Rule of 30” in Refactoring in Large Software Projects by Martin Lippert and Stephen Roock:
Methods should not have more than an average of 30 code lines.
A class should contain an average of less than 30 methods.
A package/library shouldn’t contain more than 30 classes.
Subsystems should avoid more than 30 packages.
A system with more than 30 subsystems may create problems.
If an element consists of more than 30 subelements, it is highly probable that there is a serious problem.
Personally, I break out a function if it either saves total lines or total processing time.
If I only run the helper once per chief function, I don't bother.
The point is that in principle it's better to have specialized functions, but where one sets the limit depends very much on:
1) The "usual" programming style in a given language (one can observe that object-oriented languages tend toward shorter procedures than, say, C or the like).
2) Your own way of programming. Every hard limit must be questioned, IMHO; overall there will probably be some "natural" distribution of function lengths.
3) The fact that a function should accomplish a certain task. Take, for example, a function for parsing: it is usually much longer than a function that just sets some field in a structure. Or consider how an event loop in the Windows API may look. All of which suggests that there may be good reasons for long methods...
If there is independent code (in your case, the specifics for each message type), those areas should be broken out.
Size matters not. Judge me by my size do you? - Yoda
Your main concerns are readability, simplicity, and maintainability. A good indicator: if you need to write comments to explain a section of a function, that section is a good candidate for becoming a separate function.
There are many reasons to break a long function into its constituent pieces. The most important are:
readability
maintainability
code clarity/intent
Some functions simply cannot be broken into smaller pieces without negatively impacting the listed goals, so there is no hard-and-fast rule.
If you didn't write it and it's already in production: NEVER!!! If you break it up, you're likely to break it, it's that simple.
If you are writing it and you're not sure, the on-screen rule applies, as others have said.
Back in college, only the use of pseudocode was evangelized more than OOP in my curriculum. Just like commenting (and other preached 'best practices'), I found that in crunch time pseudocode was often neglected. So my question is: who actually uses it a lot of the time? Or do you only use it when an algorithm is really hard to conceptualize entirely in your head? I'm interested in responses from everyone: wet-behind-the-ears junior developers to grizzled vets who were around back in the punch-card days.
As for me personally, I mostly only use it for the difficult stuff.
I use it all the time. Any time I have to explain a design decision, I'll use it. Talking to non-technical staff, I'll use it. It has application not only for programming, but for explaining how anything is done.
Working with a team on multiple platforms (Java front-end with a COBOL backend, in this case) it's much easier to explain how a bit of code works using pseudocode than it is to show real code.
During design stage, pseudocode is especially useful because it helps you see the solution and whether or not it's feasible. I've seen some designs that looked very elegant, only to try to implement them and realize I couldn't even generate pseudocode. Turned out, the designer had never tried thinking about a theoretical implementation. Had he tried to write up some pseudocode representing his solution, I never would have had to waste 2 weeks trying to figure out why I couldn't get it to work.
I use pseudocode when away from a computer and only have paper and pen. It doesn't make much sense to worry about syntax for code that won't compile (can't compile paper).
I almost always use it nowadays when creating any non-trivial routine. I create the pseudocode as comments and continue to expand it until I get to the point where I can just write the equivalent code below it. I have found this significantly speeds up development, reduces the "just write code" syndrome that often leads to rewrites for things that weren't originally considered (it forces you to think through the entire process before writing actual code), and serves as a good base for code documentation after it is written.
I and the other developers on my team use it all the time: in emails, on the whiteboard, or just in conversation. Pseudocode is taught to help you think the way you need to in order to program. If you really understand pseudocode, you can catch on to almost any programming language, because the main difference between them all is syntax.
If I'm working out something complex, I use it a lot, but I use it as comments. For instance, I'll stub out the procedure and put in each step I think I need to do. As I then write the code, I'll leave the comments in: they say what I was trying to do.
procedure GetTextFromValidIndex (input int indexValue, output string textValue)
// initialize
// check to see if indexValue is within the acceptable range
// get min, max from db
// if indexValue not between min and max
// then return with an error
// find corresponding text in db based on indexValue
// return textValue
return "Not Written";
end procedure;
I've never, not even once, needed to write the pseudocode of a program before writing it.
However, occasionally I've had to write pseudocode after writing code, which usually happens when I'm trying to describe the high-level implementation of a program to get someone up to speed with new code in a short amount of time. And by "high-level implementation", I mean one line of pseudocode describes 50 or so lines of C#, for example:
Core dumps a bunch of XML files to a folder and runs the process.exe
executable with a few commandline parameters.
The process.exe reads each file
Each file is read line by line
Unique words are pulled out of the file and stored in a database
The file is deleted when it has finished processing
That kind of pseudocode is good enough to describe roughly 1000 lines of code, and good enough to accurately inform a newbie what the program is actually doing.
On many occasions when I don't know how to solve a problem, I find myself drawing my modules on a whiteboard in very high-level terms to get a clear picture of how they're interacting, drawing a prototype of a database schema, drawing a data structure (especially trees, graphs, arrays, etc.) to get a good handle on how to traverse and process it, and so on.
I use it when explaining concepts. It helps to trim out the unnecessary bits of language so that examples only have the details pertinent to the question being asked.
I use it a fair amount on StackOverflow.
I don't use pseudocode as it is taught in school, and haven't in a very long time.
I do use english descriptions of algorithms when the logic is complex enough to warrant it; they're called "comments". ;-)
When explaining things to others, or working things out on paper, I use diagrams as much as possible; the simpler, the better.
Steve McConnell's Code Complete, in its chapter 9, "The Pseudocode Programming Process", proposes an interesting approach: when writing a function longer than a few lines, use simple pseudocode (in the form of comments) to outline what the function/procedure needs to do before writing the actual code that does it. The pseudocode comments can then become actual comments in the body of the function.
I tend to use this for any function that does more than what can be quickly understood by looking at a screenful (max) of code. It works especially well if you are already used to separating your function body into code "paragraphs": units of semantically related code separated by a blank line. The "pseudocode comments" then work like "headers" to these paragraphs.
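A small invented sketch of what that looks like once the code is written, with the original pseudocode comments left in as paragraph headers:

import java.util.Collections;
import java.util.Map;

class Lookup {
    static String textForIndex(int indexValue, Map<Integer, String> texts) {
        // check that indexValue is within the acceptable range
        int min = Collections.min(texts.keySet());
        int max = Collections.max(texts.keySet());
        if (indexValue < min || indexValue > max) {
            throw new IllegalArgumentException("index out of range: " + indexValue);
        }

        // find the corresponding text based on indexValue
        return texts.get(indexValue);
    }
}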
PS: Some people may argue that "you shouldn't comment what, but why, and only when it's not trivial to understand for a reader who knows the language in question better than you". I generally agree with this, but I make an exception for the PPP. The criteria for the presence and form of a comment shouldn't be set in stone, but should ultimately be governed by a wise, well-thought-out application of common sense anyway. If you find yourself refusing to try a slight bend of a subjective "rule" just for the sake of it, you might need to step back and consider whether you're facing it critically enough.
Mostly I use it for nutting out really complex code, or when explaining code to other developers or to non-developers who understand the system.
I also use flow diagrams or UML-type diagrams when trying to do the above...
I generally use it when developing nested if/else statements, which can get confusing.
This way I don't need to go back and document the code afterwards, since it's already been done.
Fairly rarely, although I often document a method before writing the body of it.
However, if I'm helping another developer with how to approach a problem, I'll often write an email with a pseudocode solution.
I don't use pseudocode at all.
I'm more comfortable with the syntax of C style languages than I am with Pseudocode.
What I do do quite frequently for design purposes is essentially a functional decomposition style of coding.
public void doBigJob(params)
{
    doTask1(params);
    doTask2(params);
    doTask3(params);
}

private void doTask1(params)
{
    doSubTask1_1(params);
    ...
}
Which, in an ideal world, would eventually turn into working code as methods become more and more trivial. However, in real life, there is a heck of a lot of refactoring and rethinking of design.
We find this works well enough, as we rarely come across an algorithm that is both incredibly complex and hard to code, and not better solved using UML or another modelling technique.
I never use it, and never have.
I always try to prototype in a real language when I need to do something complex, usually writing unit tests first to figure out what the code needs to do.