How to print all methods executed in a Go program

Similar to the question: How to dump goroutine stacktraces?
I took over a Go program, but it's fairly complex and I'm not able to follow the code flow just by reading it: some methods get executed and I have no idea why.
Is there a way I could print out all the methods that were executed after main.go was run?
I am aware of https://golang.org/pkg/net/http/pprof/, but with it I can only see which goroutines are running, not the specific method calls I'm unable to trace back / reverse engineer.

When I'm not sure how a method got called, I find the easiest thing is to call debug.PrintStack() at the start of it. If that scrolls past too fast, make it panic instead.
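For example, a minimal sketch of that (the processOrder function and its caller are made up for illustration):

package main

import "runtime/debug"

// processOrder stands in for whatever method you're puzzled by.
func processOrder(id int) {
    // Print the call stack to stderr so you can see how execution got here.
    debug.PrintStack()
    // panic("how did we get here?") // alternative if the output scrolls past too fast
    _ = id
}

func main() {
    processOrder(42)
}

Run the program once and each printed stack shows the chain of callers that led into the method.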
I think if you printed all the methods executed, the signal would get lost in the noise.
Another helpful way is to write a unit test that just calls a part of the code you are interested in. Run it and watch what happens. Put some print statements in. Once you figure out what's happening, add some checks to your unit test (to make it a real test), and you've improved the code already.
Once you have a unit test (or any entry point), you can also step into it with delve.
Michael Feathers' "Working Effectively with Legacy Code" is pretty good; it gives lots of strategies for attacking legacy code and refactoring it to goodness.
Finally, I've found it's difficult to make Go really confusing, so I'm usually really glad if the complex code I inherit is in Go :-)

Related

Is there a way to force all Ruby(MRI) finalizers to run synchronously?

I'm trying to unit test some logic around finalizer code, but in order to do so, I need to know when all finalizers that can run have run. That way, I can tell whether the effects the finalizers should have produced were in fact realized.
Thus far I've had no luck (on MRI 2.2), and the finalizers always seem to run "later".
To be clear, I have no interest in forcing GC and finalizers to run anywhere except in the unit tests, but the finalization logic is nuanced, and so I'm really not comfortable leaving it untested.
V8 provides a method just for this purpose, and it is really handy: https://github.com/v8/v8/blob/master/include/v8.h#L5667 but I haven't been able to find anything equally reliable in Ruby.
So my question is: how can I force a complete and synchronous run of all GC and all finalizers? Could this be done with a C extension even if there's no public API to do it with?
Here is a link to a script demonstrating the problem:
https://gist.github.com/cowboyd/6caf13104a26210ec525
Googling, someone asked this same question in 2003 (!), and Matz himself responded saying basically no:
http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/87753
In message "How to unit test finalizers?"
on 03/12/11, Samuel Tesla <samuel / thoughtlocker.net> writes:
|> Nothing. There's no way to "ensure" instance to be GC'ed except for
|> program termination. That's a weak (or charm) point of Ruby's GC.
|
|That's fine. I'm guessing that the GC will eventually, most likely,
|collect those references. The question then becomes, how can I
|reliably unit test my finalizer code if I can't reliably get it to
|execute?
How about not relying on finalizers, i.e. separate finalizing process
into a method, then call the method explicitly in the test? You don't
have to test whether finalizers are called. That's my responsibility.
matz.
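The same advice carries over to other garbage-collected languages. Since this thread started from a Go question, here is a hedged sketch of the pattern in Go terms (the Resource type and its release method are hypothetical): keep the cleanup in an ordinary method that tests can call directly, and register it as a finalizer only as a safety net.

package resource

import (
    "os"
    "runtime"
)

// Resource owns a file handle that must eventually be released.
type Resource struct {
    f *os.File
}

// Open creates the resource and registers release as a last-resort finalizer;
// correctness should never depend on the finalizer actually running.
func Open(path string) (*Resource, error) {
    f, err := os.Open(path)
    if err != nil {
        return nil, err
    }
    r := &Resource{f: f}
    runtime.SetFinalizer(r, (*Resource).release)
    return r, nil
}

// release holds all of the cleanup logic. A unit test calls it directly and
// asserts on its effects, instead of trying to force the GC to run finalizers.
func (r *Resource) release() {
    if r.f != nil {
        r.f.Close()
        r.f = nil
    }
}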

MVC3 and TDD pre or post coding

I am still getting to grips with MVC3, and now I am looking into TDD. The thing that keeps coming up that makes no sense to me is:
The first step is to quickly add a test, basically just enough code to fail.
Why create a test just for your code to pass? To me it makes much more sense to write my code, then test it and see if it fails, and fix any and all bugs that occur then.
If you write the code and then write the tests, then you are not doing Test-Driven Development...
That's what TDD stands for; you write your code to enable pre-written tests to pass. If you don't do it that way, you aren't doing TDD.
The idea is that your tests represent your application's requirements. You write those first, just like you would otherwise write your requirements down on paper before you started coding.
This way, you know when all your tests pass, you are done.
Writing the test first makes you start thinking about how the method will pass and fail - you start thinking about the method more deeply.
Otherwise, it's easy to dive straight into the method without much thought, leading to methods that aren't so easy to test. And it's all too easy to leave the unit test for later - it often doesn't happen!
Moreover, if you write the method first, at what point do you write the test? When you know it passes? When you're "happy" with it? It's a slippery slope to writing code without thinking about testing.
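As a rough illustration of that order of work, here is a minimal sketch (in Go rather than C#, with a hypothetical Add function): the test is written first and fails until the implementation exists.

// add_test.go - written first; it fails until Add exists and behaves correctly.
package calc

import "testing"

func TestAddSumsTwoNumbers(t *testing.T) {
    if got := Add(2, 3); got != 5 {
        t.Fatalf("Add(2, 3) = %d, want 5", got)
    }
}

// Written second, with just enough code to make the test pass.
func Add(a, b int) int {
    return a + b
}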

In test driven development, do you write every possible test first, then the code?

In doing test driven development I have been in the habit of writing the first unit test for a new piece of functionality first, then writing the code for that functionality. If I have additional tests to write to cover all scenarios, I usually write them after the code is written. Is this considered bad form? Should I try and write every conceivable test for a piece of functionality first, before ever writing that code?
In order to do TDD properly, you always write the test first, and then the functionality second.
To add to that, I would take one scenario at a time: don't write 20 tests and then write the code for those 20 tests. Write one test, red/green flag it, then move on to your next test. This way you're also following one of the core tenets of TDD, which is to do the simplest implementation possible that meets all of your requirements/scenarios.
Actually, no - I often discover functionality "on the go". Let me explain the "no" a bit further:
I usually start out writing a test case for a high-level feature, defining its Interface. After that, I usually set this test to ignored and continue writing tests for each piece of the Interface's functionality. My cycle goes like:
1. Integration Test for Story A (high-level API)
2. Write Unit Test for method xyz called in Integration Test
3. Implement method (red/green/refactor)
4. Repeat 2+3 till Integration Test passes.
While doing so, I often realize I have forgotten some small functionality in my main test. I then usually take time to look back at my customer's requirements. If it's a fit, I go back and add a test for it, set to ignored, as I first want to finish what I started.
Sometimes I see the chance to do a refactoring. I usually finish an implementation until I reach a commit point and do the refactoring then; however, sometimes I stash my changes, go back, and do the refactoring (including new tests if necessary) first. This workflow is powered by Mercurial MQ.
For most people, TDD and incremental/agile development go together. This looks something like:
Write a test for some feature
Write just enough code to make the test pass, refactoring as necessary
Repeat.
If you happen to have a detailed specification ahead of time, you could write all of the tests first, but you'd have to live with having some tests not passing for a while.
The sooner you write the tests, the better. I usually find writing tests to be a harder task than actually implementing the functionality, because you have to be aware of all the possible outcomes. So I tend to write more tests when I'm "in the zone". And when, during coding, I realize I might have missed a test case, I just note that down on the to-do list.
So in my opinion it's up to you, but I would write the tests in multiple batches.
The way I see it, test driven development isn't necessarily tests first development. Your tests drive your development and you are really writing your tests as you develop your application. You start by writing a simple test that fails because you haven't written the functionality yet. Then you write your code to implement that so that the tests pass.
Then you go back to your test, make modifications that will force you to add more functionality or refactor your code to follow better practices or reduce duplicate code, go fix your code to make the test pass...repeat, repeat, repeat.
A couple of videos that demonstrate this are below, although you can probably find a lot more by googling "TDD video"
http://agilesoftwaredevelopment.com/videos/test-driven-development-basic-tutorial
(oops, only one video, new users can't insert more than one link)
I try to write a test at some level before each bit of functionality. Sometimes, I have to write a little more code to get through the compiler, but I try to minimise that. Writing the test first means that I've thought about what the code is supposed to achieve before writing it.
One technique I find useful is to keep an index card or notepad handy, and make a note of all the cases that I think of along the way. That allows me to focus on the current task without losing track of all the other things I'm supposed to think about. Afterwards, I can work through the list and either complete the extra cases or drop them as not necessary.
You could do that, but you wouldn't be doing TDD. The problem (well, one of them, anyway) with writing all of your tests up front is that in any case where the requirements are non-trivial, your tests will be building in a lot of assumptions about the structure of the code you're test-driving. Big steps lead to missteps.
One of the keys of successful TDD involves taking small steps. Small steps mean fewer changes to back out when something goes wrong. Small steps mean you can more often get your head around the effects of the changes you're making. And because small steps are easier to take with confidence, they have the paradoxical effect of increasing your velocity.
The TDD cycle starts with requirements. Start by choosing a requirement you know how to define through tests immediately, in a few short steps. If you look at a requirement and you're not sure how to test it, or you think, "Yeah, but to do that, I'd need to [insert ill-defined steps] first", then you should either skip to another requirement that you know how to do, or you should break this requirement into smaller requirements that you know how to do.
Once you have that, you work in a short red-green-refactor cycle: Write a test that quantifies some part of the requirement ("red", because it fails, because it has no implementation to test yet), write any code that will pass the test ("green"), then rework the code to remove duplication, magic numbers, and other code smells ("refactor"). During the refactoring phase, you should continue working in small steps, frequently re-running the test to make sure you haven't broken anything. Continue this cycle until you can look your boss/client in the eye and call the requirement met.
Now that you have one simple piece of your system defined, you've opened up the list of requirements available to implement - requirements that are adjacent to or dependent on the one you just implemented can now be tested and implemented in smaller steps building on what you've already done.
So the upshot of all that is: Don't try to do all your tests at once. One (small) thing at a time.
The point of TDD is that you have to observe the test failing when the feature is not yet implemented. So you have to write the test before the code.
When you get into the TDD rhythm you write one test at a time and make it work. Very short red-green-refactor cycles really give you that rhythm. That being said, there is nothing wrong with other approaches (and they may even make more sense for some types of problems), but typically the only thing you need to do about other tests you are thinking of is write them down (or have your pair, if you are pair programming, write them down) so you don't forget them. You have to do that anyway, because you could forget about a test in the middle of writing a different test.
Write just enough tests to test one unit of code at a time... then write the actual code until it passes the test... rinse, wash, repeat until done.
If you find yourself needing to write many tests for one unit of code (a method, a function, etc.), it might be a sign that you are trying to do too much in that unit... which in turn makes the unit difficult to test and to refactor at a later time.

What helps you improve your ability to find a bug?

I want to know if there are methods to quickly find bugs in a program.
It seems that the more you master the architecture of your software, the more quickly you can locate the bugs.
How do programmers improve their ability to find a bug?
Logging, and unit tests. The more information you have about what happened, the easier it is to reproduce it. The more modular you can make your code, the easier it is to check that it really is misbehaving where you think it is, and then check that your fix solves the problem.
Divide and conquer. Whenever you are debugging, you should be thinking about cutting down the possible locations of the problem. Every time you run the app, you should be trying to eliminate a possible source and zero in on the actual location. This can be done with logging, with a debugger, assertions, etc.
Here's a prophylactic method after you have found a bug: I find it really helpful to take a minute and think about the bug.
What exactly was the bug, in essence?
Why did it occur?
Could you have found it earlier, or more easily?
Is there anything else you learned from the bug?
I find taking a minute to think about these things will make it far less likely that you will produce the same bug in the future.
I will assume you mean logic bugs. The best way I have found to capture logic bugs is to implement some sort of testing scheme. Check out JUnit as the standard. Essentially, you define a set of expected outputs for your methods. Every time you build your system, it checks all of your test cases. If you have introduced new logic that breaks your tests, you will know about it instantly and know exactly what you have to fix.
Test driven design is a pretty big movement in programming right now. You will be hard pressed to find a language that doesn't support some kind of testing. Even JavaScript has a multitude of test suites.
Experience makes you a better debugger. Pay close attention to the bugs that you AND others commonly make. Try to figure out if/how these bugs apply to ALL code that affects you, not the single instance of where the bug was seen.
Raymond Chen is famous for his powers of psychic debugging.
Most of what looks like psychic debugging is really just knowing what people tend to get wrong.
That means that you don't necessarily have to be intimately familiar with the architecture / system. You just need enough knowledge to understand the types of bugs that apply and are easy to make.
I personally take the approach of thinking about where the bug may be in the code before actually opening up the code and taking a look. When you first start with this approach, it may not actually work very well, especially if you are pretty unfamiliar with the code base. However, over time someone will be able to tell you the behavior they are experiencing and you'll have a good idea where the problem is located or you may even know what to fix in the code to remedy the problem before even looking at the code.
I was on a project for several years that was maintained by a vendor. They were not very good debuggers, and most of the time it was up to us to point them to the area of the code that had the problem. What made our problem worse was that we didn't have a nice way to view the source code, so a lot of our "debugging" was done by feel.
Error checking and reporting. The #1 newbie coder debugging mistake is to turn off error reporting, avoid checking whether what's going on makes sense, etc. In general, people feel that if they can't see anything going wrong, then nothing is going wrong - which of course could not be further from the truth.
Instead, your code should be chock full of error checks that make lots of noise, with detailed reporting, somewhere you will see it. (This doesn't mean inside a production web page.) Then, instead of having to trace an error all over the place because it got passed through sixteen layers of execution before it finally reached something that broke, your errors start surfacing close to the actual issue.
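A rough sketch of that idea in Go (loadConfig and the file name are made up): check every error where it happens and wrap it with context, so the failure surfaces next to its cause rather than many layers later.

package main

import (
    "fmt"
    "log"
    "os"
)

// loadConfig fails loudly and immediately, with enough context to tell
// exactly which file and which step went wrong.
func loadConfig(path string) ([]byte, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return nil, fmt.Errorf("loadConfig: reading %q: %w", path, err)
    }
    if len(data) == 0 {
        return nil, fmt.Errorf("loadConfig: %q is empty", path)
    }
    return data, nil
}

func main() {
    if _, err := loadConfig("app.conf"); err != nil {
        log.Fatal(err) // noisy: the report points at the proximate cause
    }
}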
It seems that the more you master the architecture of your software, the more quickly you can locate the bugs.
After understanding the architecture, one's ability to find bugs in the application increases with their ability to identify and write extensive tests.
Know your tools.
Make sure that you know how to use conditional breakpoints and watches in your debugger.
Use static analysis tools as well - they can point out the more obvious issues.
Sleep and rest.
Use programming methods that produce fewer bugs in the first place.
If to implement a single stand-alone functional requirement it takes N separate point-edits to source code, the number of bugs put into the code is roughly proportional to N, so find programming methods that minimize N. Ways to do this: DRY (don't repeat yourself), code generation, and DSL (domain-specific-language).
Where bugs are likely, have unit tests.
Obviously. IMHO, the best unit tests are Monte Carlo tests.
Make intermediate results visible.
For example, compilers have intermediate representations, in the form of 4-tuples. If there is a bug, the intermediate code can be examined. That tells if the bug is in the first or second half of the compiler.
P.S. Most programmers are not aware that they have a choice of how much data structure to use. The less data structure you use, the less are the chances for bugs (and performance issues) caused by it.
I find tracepoints to be an invaluable debugging tool. They are a bit like logging, except you create them during a debugging session to solve a particular issue, like breakpoints.
Printing the stacktrace in a tracepoint can be especially useful. For example, you can print the hash code and stacktrace in the constructor of an object, and then later on when the object is used again you can search for its hashcode to see which client code created it. Same for seeing who disposed it or called a certain method etc.
They are also great for debugging issues related to window focus changes etc, where the debugger would interfere if you drop in break mode.
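If your debugger doesn't support tracepoints, plain logging can approximate the trick. A hedged Go sketch (the Widget type is made up): record the object's address and the creating stack in the constructor, then grep for the same address later to see who created, used, or disposed it.

package widget

import (
    "log"
    "runtime/debug"
)

type Widget struct {
    name string
}

// NewWidget logs the new object's address and the stack of whoever created it,
// so later log lines mentioning the same address can be traced back to their origin.
func NewWidget(name string) *Widget {
    w := &Widget{name: name}
    log.Printf("created widget %p (%s)\n%s", w, name, debug.Stack())
    return w
}

// Close logs the same address, so the disposer shows up in the log too.
func (w *Widget) Close() {
    log.Printf("closed widget %p (%s)", w, w.name)
}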
Static code tools like FindBugs
Assertions, assertions, and assertions.
Some areas of our code have 4 or 5 assertions for each line of real code. When we get a bug report, the first thing that happens is that the customer data is processed in our debug build; 99 times out of a hundred an assert will fire near the cause of the bug.
Additionally, our debug build performs redundant calculations to ensure that an optimized algorithm is returning the correct result, and debug functions are used to examine the sanity of data structures.
The hardest thing new developers have to contend with is getting their code to survive the assertions of the code they are calling.
Additionally, we do not allow any code to be put back to the top-level branch if it causes any integration or unit test to fail.
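Go has no built-in assert, but a rough equivalent of a debug-build assertion can be sketched with a package-level flag (the package and names here are assumptions, not a standard API):

package assert

import "fmt"

// Enabled would be switched off for release builds (for example via a build
// tag or a linker flag) so assertions cost nothing in production.
const Enabled = true

// That panics with a descriptive message when the condition is false,
// roughly like a C assert in a debug build.
func That(cond bool, format string, args ...interface{}) {
    if Enabled && !cond {
        panic(fmt.Sprintf("assertion failed: "+format, args...))
    }
}

A caller would then write something like assert.That(len(items) > 0, "items must not be empty"), and the debug build would fail near the cause instead of far downstream.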
Stepping through the code, examining flow/state where unexpected behavior is occurring. (Then develop a test for it, of course).
Writing Debug.Write(message) in your code and using DebugView is another option; then run your application to find out what is going on.
"Architecture" in software means something like:
Several components
The components interact across clearly-defined interfaces
Each component has a well-defined responsibility
The responsibility of one component is unlike the responsibilities of other components
So, as you said, the better the architecture the easier it is to find bugs.
First: knowing the bug, you can decide which functionality is broken, and therefore know which component implements that functionality. For example, if the bug is that something isn't being logged properly, then the bug should be in one of 3 places:
In the component that's responsible for logging (your logging library)
Or, above that in the application code which is using this library
Or, below that in the system code which this library is using
Second: examine the data transferred across the interfaces between components. To continue the previous example:
Set a debugger breakpoint on the application code which invokes the logger API, to verify whether the logger API is being used correctly (e.g. whether it's being invoked at all, whether parameters are as-expected, etc.).
Doing this tells you whether the bug is in the component above this interface, or in the component that's below this interface.
Repeat (perhaps using binary search if the call stack is very deep) until you've found which component is at fault.
When you come to the point that you think there must be a bug in the OS, check your assertions -- and put them into the code with "assert" statements.
Conversely, as you are writing the code, think of the range of valid inputs for your algorithms and put in assertions to make sure you have what you think you have. Same goes for output: Check that you produced what you think you produced.
E.g. if you expect a non-empty list:
l = getList(input)
assert l, "List was empty for input: %s" % str(input)
I'm part of the QA team at work, and knowing as much as possible about the product and how it is developed helps a lot in finding bugs. Also, when I make new QA tools I pass them to our dev team to test; finding bugs in your own code is just plain hard!
Some people say programmers are tainted, so we cannot see the bugs in our own product; and we are not talking about code here, we are beyond that: usability and the functionality itself.
Meanwhile, unit testing seems to be a nice way to find bugs in your own code, but it's totally pointless if you're wrong even before writing the unit test. How are you going to find the bugs then? You don't! Let your co-workers find them; hire a QA guy.
Scientific debugging is what I always used, and it greatly helps.
Basically, if you can replicate a bug, you can track down its origin. You should then run some experiments, observe the results, and form hypotheses about why the bug happens.
Writing down all your hypotheses, attempts, expected results, and observed results can help you track down the bugs, particularly if they're nasty.
There are automated tools that can help you with that process, particularly git-bisect (and similar bisection tools on other revision systems) to quickly find which change introduced the bug, unit testing to reproduce a bug and prevent regressions in your code (can be used in combination with bisect), and delta debugging to find the culprit in your code (similar to git-bisect but whereas git-bisect works on the code history, delta debugging works on the code directly).
But whatever the tools you are using, the most important benefit is in the scientific methodology, as this is the formalization of what most experienced debuggers do.
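For instance, a hedged sketch of combining two of those tools in Go terms (the Cart type and the test name are made up): a small regression test reproduces the bug once, and git bisect run replays it across the history to find the commit that introduced it.

// A regression test pins the bug down. Once it exists, something like
//   git bisect start <bad-commit> <good-commit>
//   git bisect run go test -run TestTotalIgnoresCancelledItems ./...
// can walk the history automatically until the first bad commit is found.
package cart

import "testing"

type Item struct {
    Price     int
    Cancelled bool
}

type Cart struct{ Items []Item }

// Total should sum only the non-cancelled items; the reported bug was that
// cancelled items were being counted.
func (c Cart) Total() int {
    sum := 0
    for _, it := range c.Items {
        if !it.Cancelled {
            sum += it.Price
        }
    }
    return sum
}

func TestTotalIgnoresCancelledItems(t *testing.T) {
    c := Cart{Items: []Item{
        {Price: 10},
        {Price: 99, Cancelled: true},
    }}
    if got := c.Total(); got != 10 {
        t.Fatalf("Total() = %d, want 10", got)
    }
}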

TDD. When can you move on?

When doing TDD, how to tell "that's enough tests for this class / feature"?
I.e. when could you tell that you completed testing all edge cases?
With Test Driven Development, you'll write a test before you write the code it tests. Once you've written the code and the test passes, then it's time to write another test. If you follow TDD correctly, you've written enough tests once your code does all that is required.
As for edge cases, let's take an example such as validating a parameter in a method. Before you add the parameter to your code, you create tests which verify that the code will handle each case correctly. Then you can add the parameter and the associated logic, and ensure the tests pass. If you think up more edge cases, then more tests can be added.
By taking it one step at a time, you won't have to worry about edge cases when you've finished writing your code, because you'll have already written the tests for them all. Of course, there's always human error, and you may miss something... When that situation occurs, it's time to add another test and then fix the code.
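For instance, a hedged sketch in Go (the ParseAge function is hypothetical; test and implementation are shown together for brevity): each edge case for the parameter gets its own entry in a table-driven test, written before the validation logic exists.

package user

import (
    "errors"
    "testing"
)

// ParseAge is written after the test below, with just enough logic to make it pass.
func ParseAge(n int) (int, error) {
    if n < 0 || n > 150 {
        return 0, errors.New("age out of range")
    }
    return n, nil
}

func TestParseAgeEdgeCases(t *testing.T) {
    cases := []struct {
        name    string
        input   int
        wantErr bool
    }{
        {"negative age is rejected", -1, true},
        {"zero is allowed", 0, false},
        {"typical value is allowed", 30, false},
        {"implausibly large age is rejected", 200, true},
    }
    for _, c := range cases {
        t.Run(c.name, func(t *testing.T) {
            if _, err := ParseAge(c.input); (err != nil) != c.wantErr {
                t.Fatalf("ParseAge(%d) error = %v, wantErr %v", c.input, err, c.wantErr)
            }
        })
    }
}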
Kent Beck's advice is to write tests until fear turns into boredom. That is, until you're no longer afraid that anything will break, assuming you start with an appropriate level of fear.
On some level, it's a gut feeling of "Am I confident that the tests will catch all the problems I can think of now?"
On another level, you've already got a set of user or system requirements that must be met, so you could stop there.
While I do use code coverage to tell me if I didn't follow my TDD process and to find code that can be removed, I would not count code coverage as a useful way to know when to stop. Your code coverage could be 100%, but if you forgot to include a requirement, well, then you're not really done, are you.
Perhaps a misconception about TDD is that you have to know everything up front in order to test. This is misguided, because the tests that result from the TDD process are like a breadcrumb trail: you know what has been tested in the past, and they can guide you to an extent, but they won't tell you what to do next.
I think TDD could be thought of as an evolutionary process. That is, you start with your initial design and its set of tests. As your code gets battered in production, you add more tests, and code that makes those tests pass. Each time you add a test here and a test there, you're also doing TDD, and it doesn't cost all that much. You didn't know those cases existed when you wrote your first set of tests, but you have gained the knowledge now, and can check for those problems at the touch of a button. This is the great power of TDD, and one reason why I advocate for it so much.
Well, when you can't think of any more cases in which the code doesn't work as intended.
Part of TDD is to keep a list of things you want to implement, and problems with your current implementation... so when that list runs out, you are essentially done....
And remember, you can always go back and add tests when you discover bugs or new issues with the implementation.
That's common sense; there is no perfect answer. TDD's goal is to remove fear, so if you feel confident you have tested it well enough, move on...
Just don't forget that if you find a bug later on, write a test first to reproduce the bug, then correct it, so future changes won't break it again!
Some people complain when they don't have X percent of coverage... but some tests are useless, and 100% coverage does not mean you've tested everything that can make your code break, only that it won't break for the ways you have used it!
A test is a way of precisely describing something you want. Adding a test expands the scope of what you want, or adds details of what you want.
If you can't think of anything more that you want, or any refinements to what you want, then move on to something else. You can always come back later.
Tests in TDD are about covering the specification, in fact they can be a substitute for a specification. In TDD, tests are not about covering the code. They ensure the code covers the specification, because the code will fail a test if it doesn't cover the specification. Any extra code you have doesn't matter.
So you have enough tests when the tests look like they describe all the expectations that you or the stakeholders have.
Maybe I missed something somewhere in the Agile/XP world, but my understanding of the process was that the developer and the customer specify the tests as part of the Feature. This allows the test cases to substitute for more formal requirements documentation, helps identify the use cases for the feature, etc. So you're done testing and coding when all of these tests pass... plus any more edge cases that you think of along the way.
Alberto Savoia says that "if all your tests pass, chances are that your tests are not good enough". I think that is a good way to think about tests: ask whether you are covering the edge cases, passing some unexpected parameters, and so on. A good way to improve the quality of your tests is to work with a pair - especially a tester - and get help with more test cases. Pairing with testers is good because they have a different point of view.
Of course, you could use a tool to do mutation testing and get more confidence in your tests. I have used Jester, and it improved both my tests and the way I wrote them. Consider using something like it.
Theoretically you should cover all possible input combinations and test that the output is correct but sometimes it's just not worth it.
Many of the other comments have hit the nail on the head. Do you feel confident about the code you have written given your test coverage? As your code evolves do your tests still adequately cover it? Do your tests capture the intended behaviour and functionality for the component under test?
There must be a happy medium. As you add more and more test cases, your tests may become brittle, as what is considered an edge case continuously changes. Following many of the earlier suggestions, it can be very helpful to get everything you can think of up front and then add new tests as the software grows. This kind of organic growth can help your tests grow without all the effort up front.
I am not going to lie: I often get lazy when going back to write additional tests. I might miss that property that contains no code, or the default constructor that I do not care about. Sometimes not being completely anal about the process can save you time in areas that are less than critical (the 100% code coverage myth).
You have to remember that the end goal is to get a top-notch product out the door, not to kill yourself testing. If you have that gut feeling that you are missing something, then chances are you are, and you need to add more tests.
Good luck and happy coding.
You could always use a test coverage tool like EMMA (http://emma.sourceforge.net/) or its Eclipse plugin EclEmma (http://www.eclemma.org/) or the like. Some developers believe that 100% test coverage is a worthy goal; others disagree.
Just try to come up with every way, within reason, that you could cause something to fail: null values, values out of range, etc. Once you can't easily come up with anything more, just continue on to something else.
If down the road you ever find a new bug or come up with a new failure case, add the test.
It is not about code coverage. That is a dangerous metric, because code is "covered" long before it is "tested well".
