We've all had them: errors or bugs that have cost us lots of time. I've seen it time and time again; the first 90% of the coding work for a given project takes 10% of the total time. It's that last 90% of the time you spend looking for that rogue bug that's really only about 10% of the coding work. That one thing that just doesn't want to work. Sometimes it's something big, and other times it's just that one character that was off.
What is the bug or error that has cost you and/or your team the most time?
Once upon a time, I worked on a database project for an apartment management company. We had tables like Customer, CustomerStatus, Apartment, ApartmentStatus, and so forth. Queries I wrote would look like:
SELECT cu.Name, ap.ApartmentUnit, as.DateOccupied
from Customer cu
inner join CustomerStatus cs
on cs.CustomerId = cu.CustomerId
inner join ApartmentStatus as
on as.ResidentId = cu.CustomerId
and as.Status = 5
inner join Apartment ap
on ap.ApartmentId = as.ApartmentId
where cu.CustomerId = #CustomerId
This query and ones like it simply would not run, no matter how hard I tried, how much I modified it, or how long I stared at it. It took days before I realised that my entirely reasonable table alias of "as" was a reserved word...
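For the record, the fix was simply to pick a non-reserved alias (the "aps" name below is my own choice, nothing special):
SELECT cu.Name, ap.ApartmentUnit, aps.DateOccupied
from Customer cu
inner join CustomerStatus cs
on cs.CustomerId = cu.CustomerId
inner join ApartmentStatus aps
on aps.ResidentId = cu.CustomerId
and aps.Status = 5
inner join Apartment ap
on ap.ApartmentId = aps.ApartmentId
where cu.CustomerId = #CustomerId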
A Heisenbug is IMO one of the worst kinds of bug. Hunting such a beast is a real nightmare. That said, a Bohrbug, a Mandelbug, a Schroedinbug, a phase-of-the-moon bug or a statistical bug will give you a serious headache too.
Getting Oracle to work: we are a SQL shop that now has to support Oracle, and nobody knew anything about Oracle.
I've spent over 2 days trying to figure out a CSS issue that was breaking my site. It turned out that I'd mistaken a curly brace for a parenthesis in one of the classes, and my screen resolution is set too small to tell the difference easily.
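For illustration, the difference is this subtle (hypothetical class):
.sidebar ( width: 200px; ) /* parenthesis: invalid, silently dropped, and can break the rules after it */
.sidebar { width: 200px; } /* curly brace: correct */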
In C++: 2 days trying to figure out why a particular script worked for everything except one particular class. Copying the class and renaming it didn't fix the problem.
Rewriting the class from scratch did fix the problem, but didn't seem to bring me any closer to the reason for the issue in the first place.
Diffing the files turned up nothing.
However, I then noticed that one of my new files, while visually identical, was only half the size of the original.
Different encodings in header and cpp file broke my script :)
Debugging some error that the Yahoo user interface library was spitting out. Spent a couple of days on it. It turned out that YUI spits out errors that are supposed to happen and don't need fixing.
3 months trying to track down an error in our engine rendering code. We had implemented our own custom vertex-pooling scheme, and it worked great in DX8. Once I upgraded the engine to DX9, all the geometry came out as garbled messes. Luckily I was able to turn that off with a #define, but hunting it down was a painful month of trial and error, and in the end it boiled down to setting the wrong parameter in an interface function that had changed in DX9 - we were setting firstvertex instead of startvertex, which caused the index lists to read the wrong vertices. Fun stuff.
Who is asking? - The question is coming from a PHP developer with less than 6 months' experience who fell completely in love with PHP due to its awesomeness. Also, I just joined Stack Overflow today, 7th Dec 2019.
Reason for the question: I have a form which I have completely built and validated with Laravel, but I want to protect it from spam, not with reCAPTCHA but with a PIN (a kind of generated key). I've seen it used on various websites and I also want to apply it.
Plan of action: The generated code will be placed at the end of the form next to an input field, and what the user fills in must match the code, which is regenerated on every page refresh. If it doesn't match, I want to kill the page or perhaps display a page with a "WELL DONE" message.
My thoughts: I'm new here and maybe the question has been asked before, but honestly I've been on the computer for over a week (spending at least 18 hours searching and searching) and found no really understandable solution.
What I can't do: Because I'm using Laravel, I don't know where to start this functionality or how to end it.
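In Laravel terms, the plan above sketches out to something like this; the controller methods (showForm, submitForm) and the form_pin session key are made-up names for illustration, not Laravel built-ins:
use Illuminate\Http\Request;
use Illuminate\Support\Str;

class PinFormController extends Controller
{
    // On every page load, generate a fresh code and remember it in the session.
    public function showForm()
    {
        $pin = Str::upper(Str::random(6));
        session(['form_pin' => $pin]);
        return view('myform', ['pin' => $pin]); // print $pin next to the input at the end of the form
    }

    // On submit, what the user typed must match the generated code.
    public function submitForm(Request $request)
    {
        if ($request->input('pin') !== session('form_pin')) {
            return view('well-done'); // or abort(403) to kill the page
        }
        // ...run the existing validation and processing here...
    }
}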
My helper: You are reading this and I believe you have the skills and techniques to help me without sweating at all. Just imagine a friend whose head is floating but the body is already in the ocean and about to drown. Also imagine a friend who has only one shot (2 days) to change his life, and if not done, only God knows what's to come. PLEASE HELP ME!
To everyone: Forgive me for the long message, I just believe that if I can express myself deeply enough, someone out there will help me out.
Thank you to all the awesome developers around the world.
When I'm writing it?
After I got a part done (Single class/function/if-elses)?
After I got the whole thing working?
The short answer
The short answer is: any time something is non-obvious relative to who's going to be reading it. If it's code that is still in flux and you are the only consumer, just write comments for you (hours and days). Ready to check in for others to try out - comments for you and your team (days and weeks, possibly months). Ready for wide release - comments for the immediate and future public (months and years). You have to think of comments as tools, not documentation.
The long answer:
When I'm writing it? - Yes
After I got a part done (Single class/function/if-elses)? - Yes
After I got the whole thing working? - Yes
When I'm writing it? - Yes
Drop comments any time you hit a place where the code isn't immediately clear. For example, describe the class when the class name isn't clear or could be interpreted too widely. Another example: if I'm about to write a non-obvious code block, I'll first add a comment reminding me of what I want/need. Or if I've just added some code and immediately realized there was a gotcha in there, I drop a comment to remind myself. These comments are implementor comments - less to help future maintainers than to help yourself in the coding process.
Drop FIXME and TODO reminders, with explanations, as you go.
The code is still in flux, so I'm not yet documenting each and every method and parameter.
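For example (a hypothetical snippet - the ParseTimestamp call is invented for illustration):
// TODO: handle the case where the feed returns zero items
// FIXME: breaks when the server clock is ahead of local time
var lastSync = ParseTimestamp(response.Header);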
After I got a part done (Single class/function/if-elses)? - Yes
When I'm reasonably done with a method or class, now is the time to review it. Along with checking the scopes of methods, ordering methods, and other code cleanup to improve understandability, now's the time to begin standardizing it against your team standards. Consider what comments are needed based on the audience it will be released to (future you is part of the audience too!). Does the class have a header block? Are there non-obvious conditions under which this method should not be called? Does this parameter have any conditions on it, e.g. it should not be null?
Check the FIXME and TODO items - still valid? Any you should address now before moving on?
These are still notes for you and your team, but the beginnings of standardized notes for future maintainers.
After I got the whole thing working? - Yes
Now is the time to review everything and finalize comments against your standards.
All FIXME and TODO items addressed (fixed or captured as known issue)?
These notes now are for future maintainers.
Now the dirty little secret
More is not always better. Like unit tests, you have to balance the use of your tools, weighing costs vs benefits. The fact is that a coder can only type so many physical lines per hour - what percent should be comments? A low percentage means I've got a lot of code, but it's confusing and difficult to understand and use correctly. A high percentage means that when, an hour from now, someone changes a method signature or redefines an interface, all the time spent fully commenting every parameter of those methods just got trashed.
Find the right percentage based on the stability of the code, how long it will live, and how widely it will be released. Not stable yet - minimal comments to help you and your team. Stable and ready for the project - fully commented. Public release? - fully commented (check again!) with copyrights (if applicable). As you gain experience, adjust the percentage.
You should never "add" comments - they are not additions. Comments are part of the code - you use them when you need them. Asking when you should add them is like asking when you should add functions or classes. Thinking about it, though, I remember doing a programming advice slot at a university I worked for, where one of the students came in with about 1000 lines of Pascal and no functions. When I queried why he hadn't used functions, his response was "I'll add them later, once I've got it working."
This is subjective, but sometimes it's better to add them before the actual code, e.g. when you implement an algorithm that has clearly defined steps. That way it's harder to miss steps.
This is a matter of style. Personally, I like writing comments during the coding, not after, because if I leave it until after, I usually get lazy and don't write them at all. That said, sometimes it's useful to go over a completed piece of code, figure out what isn't obvious from the code itself, and document it - in particular, the parts where assumptions are made.
I would suggest writing comments whenever you edit any code, while you are editing it. According to Robert C. Martin in Clean Code, a disadvantage of comments is that the code can change without the comments being updated, making the comments not only useless, but dangerous. To reduce this problem, if you must use comments (because you are unable to express yourself in the code itself), make sure you update them every time you update the code.
You should try writing comments BEFORE you write any code, e.g.
public string getCurrentUserName() {
    //init user database repository
    //retrieve logged in user
    //return name if a user is logged in, otherwise return null
}
Writing comments before you code helps you learn how to structure your code without actually coding it and then realising that you should have done it another way. It's also a good way to quickly visualise a clean solution to a complex problem without getting bogged down in implementation. And it's good because if you get interrupted, when you come back to your work you can go straight back to it, as opposed to having to figure out again what you have done and what you need to do next.
Not suited to all situations, but often a good option!
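To finish the illustration, here is roughly what that method might look like once the comments are "filled in" (the UserRepository and its methods are invented for the example):
public string getCurrentUserName() {
    //init user database repository
    var users = new UserRepository();
    //retrieve logged in user
    var user = users.GetLoggedInUser();
    //return name if a user is logged in, otherwise return null
    return user != null ? user.Name : null;
}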
A disadvantage of adding comments later is that a lot of the time it will simply not be done, due to laziness, other tasks, etc.
If you find you can always go back and add the appropriate comments without any problem, then by all means do so, but otherwise making a conscious effort to add them as you're coding or before you code a section may be a way to ensure that you don't leave the code uncommented.
Put a comment ANYWHERE the programmer reading your code may have a WTF moment.
If you find yourself commenting every line, perhaps you need to take a look at trying to improve your code with simpler, more elegant statements.
Comments should reflect why you are doing things the way you do, not what the code does. Most of the time the one reading your code can see what it does.
You should explain the things one cannot deduce from the code.
I tend to put basic comments as I'm going, just to remind myself what I was thinking at the time when I wrote it (i.e. why I wrote it that way). I do this especially if it's code that looks like it might be wrong but is actually right, or code that has an inherent race condition that I don't care about, or code that might not be optimal but is a quick way to get something working, so that even ten minutes later when I go back and look at it I can see that I've thought about the problem already and don't have to waste any brain cycles on it.
When the code is more complete, I'll often go back and review the comments I've written and then have a think about whether I still think the decisions made are reasonable, and whether things could be done better. I'll also often expand the basic comment into a longer comment that's more useful for other people when they come to maintain the code; I usually save comment expansion to the end because a lot of the time basic comments just get deleted during refactoring, so writing a long comment is a waste of time until you know you're going to keep it.
In a nutshell, write basic comments as you go along, and then improve them as your code becomes more stable.
Oh, and also, any time you review a bit of existing code and you're struck with a WTF?! moment but then realise the code is actually decent, put a comment in to save yourself and the next person time when they look at it in the future.
The question should be, when do I add code to my comments?
My practice is to write out the functionality of a module/object/function as a series of comments. Not comments like "add one to counter". Higher level comments like "sort list by account number". Detailed comments are pretty much redundant with the code, so I avoid those unless I'm writing a very tricky algorithm. Once I have the functionality "designed" in comments, I act like a human compiler and add in the code after each line of comments.
Give it a try and let us know how it works!
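A hypothetical illustration of the result - each comment line gets its code "compiled in" underneath (LoadAccounts and WriteReport are invented names):
// load the accounts from the import file
var accounts = LoadAccounts(importPath);
// sort list by account number
accounts.Sort((a, b) => a.Number.CompareTo(b.Number));
// write one report line per account
WriteReport(accounts, reportPath);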
Personally, I tend to write comments to summarise code where necessary - often before I write the code - as well as to save WTFs. I treat them exactly as notes: things to do, things I have done this way, or will do this way. As such, they are put in when and where I feel the need for them.
Before you forget what specification and design the code is required to implement.
Before you forget that some unfortunate coder will have to read it later on.
Before you forget that the unfortunate coder could well be you.
When you do something non-trivial, as you're writing it.
You gave a lot of cases in your question. I think it depends on what you're doing at the time.
If you're writing a function or a class, comments are a way to declare what's supposed to happen in the function: things like input variables, output type, special behavior, exceptions, etc. IMHO that kind of comment should be written before the actual code is started, in your "code design" phase. Most languages have packages which process those kinds of comments into documentation (javadoc, epydoc, POD, etc.), so that stuff will be read by users.
If you're making a bit of code work, I think it's OK to wait until you've got it working to put in a comment triumphantly describing your working solution. That kind of comment is only going to get read by a code reviewer.
Then, as others have said, you want to avoid WTF moments, by yourself or others. I once got an attaboy for a comment I made in an open-source project. The comment was "Yes, I really do want = and not == on that line."
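The classic shape of such a line, in C, is the assign-and-test idiom (this is an illustration, not the actual project code):
#include <stdio.h>

int main(void)
{
    int c;
    /* Yes, I really do want = and not == on this line:
       it reads a character and tests it in one step. */
    while ((c = getchar()) != EOF)
        putchar(c);
    return 0;
}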
A. When you make an arbitrary decision that would be difficult to re-understand.
B. Anything that you feel you should remember while writing the code.
C. At the beginning of a program, explain the logic and usage.
Advice: instead of commenting a lot, use long names for functions and variables that really explain what the function does or what the variable stands for.
Mostly at the time you write the code. You can come back after the function/block/whatever is done and organize your comments with a fresh mind. Most of the stuff we write while coding isn't meaningful later.
Early on in my career I added comments to nearly every line of code, as you might do in an ASM program. As time went by I ran into many of the problems mentioned here. It was a bear to maintain, which resulted in comments not being updated, and then they became stale at best, usually moldy.
I feel that the number of comments should reflect how complex or non-obvious the code itself is. In a more challenging environment, such as ASM, you will probably need more comments to understand what is going on. In more modern languages like C# you shouldn't need a whole lot of comments in most cases.
Generally I use tools that evaluate the complexity of my methods in C#. Those that are high on the complexity scale first get refactored. Then when I'm satisfied with the complexity remaining and I still have some code that is not obvious, or even more important, seems obvious but does something different, then I tack a comment on it.
I add comments while writing any code that is not easily understandable. I find that if I don't do it immediately, it gets forgotten, and I (or more likely someone else) then spend more time figuring out what I did than it would have taken to write the comment.
To be more precise, commenting immediately after the code is written is the best avenue to ensure comments actually get written.
I'm looking for a way to make programs appear (frequently) used, so that they would appear in the Start menu's "Recently Used Programs" (after a zero touch install).
I'm trying to figure out how Windows stores information related to program usage frequency.
The only (maybe) related things I can see being changed when I run a program from the Start Menu are some (seemingly undocumented) BagMRU registry keys, which have no meaning to me.
I did find a way to get programs pinned, but that's not what I'm looking for here.
Update: please see the comments for explanation why I would like to do this...
Update 2: I'm making progress... Now I know where the keys are stored, and I know that the key names are ROT13 "encrypted", and that the second 4 bytes of the values are the counter: http://blog.didierstevens.com/2006/07/24/rot13-is-used-in-windows-you’re-joking/
This ROT13 (Wikipedia) encryption thing is funny. Well, of course there is a reason: they don't want you to be able to find it by a simple search.
Lol, and in Windows 7 they are using a Vigenère cipher! Much better :D
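If you want to poke at the XP-era keys yourself, here's a rough C# sketch. The UserAssist location comes from the blog post above; it is undocumented and version-specific, so treat the path and layout as assumptions:
using System;
using Microsoft.Win32;

class UserAssistDump
{
    // ROT13: rotate letters 13 places, leave everything else alone.
    static string Rot13(string s)
    {
        var chars = s.ToCharArray();
        for (int i = 0; i < chars.Length; i++)
        {
            char c = chars[i];
            if (c >= 'a' && c <= 'z') chars[i] = (char)('a' + (c - 'a' + 13) % 26);
            else if (c >= 'A' && c <= 'Z') chars[i] = (char)('A' + (c - 'A' + 13) % 26);
        }
        return new string(chars);
    }

    static void Main()
    {
        const string path = @"Software\Microsoft\Windows\CurrentVersion\Explorer\UserAssist";
        using (var root = Registry.CurrentUser.OpenSubKey(path))
        {
            foreach (var guid in root.GetSubKeyNames())
            {
                using (var count = root.OpenSubKey(guid + @"\Count"))
                {
                    foreach (var name in count.GetValueNames())
                        Console.WriteLine(Rot13(name)); // decoded entry name
                }
            }
        }
    }
}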
At the risk of downvotes, this is not something you should be doing. The "Recently Used Programs" belongs to the owner of the computer, not your program.
If your program is as useful as you think it is, it will automagically show up there.
Raymond Chen has done quite a few articles as to why this sort of thing is a bad idea.
This rates among all those other bad ideas such as:
how can I force my program to be the handler for certain file types?
how can I keep my program always on top.
how can I annoy my users by making decisions for them when they previously had the power to make their own decisions as to how their software was configured? :-)
Update:
A couple of things you may want to try.
Copy a program (explorer.exe) to axolotl.exe and run it enough times to get it on the list. Then search the registry for it (assuming there's not another axolotl.exe somewhere on your disk). Be aware that some strings are stored as Unicode, so it might not be a simple search. It also wouldn't surprise me if MS encoded them some way to make this more difficult.
Microsoft's Sysinternals has a tool that can monitor the registry (Regmon); you could run that while you run a program a few times to see what gets updated when it's added to the list.
I found what I was looking for here:
http://blog.didierstevens.com/2006/07/24/rot13-is-used-in-windows-you’re-joking/
If this is possible, I do recommend against it. It is, as you say, undocumented behaviour and circumvents the intended usage of the frequently used programs list. What's wrong with a desktop icon and quick launch shortcut?
Use Win32 Shell COM interfaces
It has been explained for decades, like all undocumented features, on Google Groups (Win32) - same method as on Win95.
Scenario
You've got several bug reports all showing the same problem. They're all cryptic with similar tales of how the problem occurred. You follow the steps but it doesn't reliably reproduce the problem. After some investigation and web searching, you suspect what might be going on and you are pretty sure you can fix it.
Problem
Unfortunately, without a reliable way to reproduce the original problem, you can't verify that it actually fixes the issue rather than having no effect at all or exacerbating and masking the real problem. You could just not fix it until it becomes reproducible every time, but it's a big bug and not fixing it would cause your users a lot of other problems.
Question
How do you go about verifying your change?
I think this is a very familiar scenario to anyone who has engineered software, so I'm sure there are a plethora of approaches and best practices to tackling bugs like this. We are currently looking at one of these problems on our project where I have spent some time determining the issue but have been unable to confirm my suspicions. A colleague is soak-testing my fix in the hopes that "a day of running without a crash" equates to "it's fixed". However, I'd prefer a more reliable approach and I figured there's a wealth of experience here on SO.
Bugs that are hard to reproduce are the hardest ones to solve. What you need is to make sure you have found the root of the problem, even if the problem itself cannot be reproduced successfully.
The most common intermittent bugs are caused by race conditions - by eliminating the race, or by ensuring that one side always wins, you have eliminated the root of the problem even if you can't successfully confirm it by testing the results. The only thing you can test is that the cause does not repeat itself.
Sometimes fixing what is seen as the root does indeed solve a problem, but not the right one - there is no avoiding it. The best way to avoid intermittent bugs is to be careful and methodical with the system design and architecture.
You'll never be able to verify the fix without identifying the root cause and coming up with a reliable way to reproduce the bug.
For identifying the root cause: If your platform allows it, hook some post-mortem debugging into the problem.
For example, on Windows, get your code to create a minidump file (core dump on Unix) when it encounters this problem. You can then get the customer (or WinQual, on Windows) to send you this file. This should give you more information about how your code's gone wrong on the production system.
But without that, you'll still need to come up with a reliable way to reproduce the bug. Otherwise you'll never be able to verify that it's fixed.
Even with all of this information, you might end up fixing a bug that looks like, but isn't, the one that the customer is seeing.
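For what it's worth, the Windows side of that boils down to something like this sketch (link against dbghelp.lib; error handling and unique dump-file naming omitted):
#include <windows.h>
#include <dbghelp.h>

/* Unhandled-exception filter that writes a crash.dmp for post-mortem debugging. */
static LONG WINAPI WriteCrashDump(EXCEPTION_POINTERS *info)
{
    HANDLE file = CreateFileA("crash.dmp", GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file != INVALID_HANDLE_VALUE)
    {
        MINIDUMP_EXCEPTION_INFORMATION mei;
        mei.ThreadId = GetCurrentThreadId();
        mei.ExceptionPointers = info;
        mei.ClientPointers = FALSE;
        MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(),
                          file, MiniDumpNormal, &mei, NULL, NULL);
        CloseHandle(file);
    }
    return EXCEPTION_EXECUTE_HANDLER;
}

/* Install it as early as possible, e.g. first thing in main():
   SetUnhandledExceptionFilter(WriteCrashDump); */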
Instrument the build with more extensive (possibly optional) logging and data saving that allows exact reproduction of the variable UI steps the users took before the crash occurred.
If that data does not reliably allow you to reproduce the issue then you've narrowed the class of bug. Time to look at sources of random behaviour, such as variations in system configuration, pointer comparisons, uninitialized data, etc.
Sometimes you "know" (or rather feel) that you can fix the issue without extensive testing or unit testing scaffolding, because you truly understand the issue. However, if you don't, it very often boils down to something like "we ran it 100 times and the error no longer occurred, so we'll consider it fixed until the next time it's reported.".
I use what I call "heavy-style defensive programming": add asserts in all the modules that seem linked to the problem. What I mean is: add A LOT of asserts - assert your evidence, assert the state of objects in all their members, assert the "environment" state, etc.
Asserts help you identify the code that is NOT linked to the problem.
Most of the time I find the origin of the problem just by writing the assertions, as it forces you to reread all the code and plunge under the guts of the application to understand it.
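In C, "heavy style" looks something like this (the account example and its invariants are invented for illustration):
#include <assert.h>
#include <stddef.h>

typedef struct { long balance; } Account;

void transfer(Account *from, Account *to, long amount)
{
    /* assert the "environment" and the objects' state on the way in... */
    assert(from != NULL && to != NULL);
    assert(from != to);
    assert(amount > 0);
    assert(from->balance >= amount);

    from->balance -= amount;
    to->balance += amount;

    /* ...and the invariants on the way out. */
    assert(from->balance >= 0);
}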
There is no one answer to this problem. Sometimes the solution you've found helps you figure out the scenario to reproduce the problem, in which case you can test that scenario before and after the fix. Sometimes, though, that solution you've found only fixes one of the problems but not all of them, or like you say masks a deeper problem. I wish I could say "do this, it works every time", but there isn't a "this" that fits that scenario.
You say in a comment that you think it is a race condition. If you think you know what "feature" of the code is generating the condition, you can write a test to try to force it.
Here is some risky code in C - the shared counter is updated with a non-atomic read-modify-write, so concurrent updates can be lost:
#include <pthread.h>
#include <stdio.h>

const int NITER = 1000;
int thread_unsafe_count = 0;
int thread_unsafe_tracker = 0;

void* thread_unsafe_plus(void *a){
    int i, local;
    thread_unsafe_tracker++;
    for (i = 0; i < NITER; i++){
        /* non-atomic read-modify-write: another thread can slip in
           between the read and the write, and its update is lost */
        local = thread_unsafe_count;
        local++;
        thread_unsafe_count = local;
    }
    return NULL;
}

void* thread_unsafe_minus(void *a){
    int i, local;
    thread_unsafe_tracker--;
    for (i = 0; i < NITER; i++){
        local = thread_unsafe_count;
        local--;
        thread_unsafe_count = local;
    }
    return NULL;
}
which I can test (in a pthreads environment) with:
int main(void){
    pthread_t th1, th2;
    pthread_create(&th1, NULL, &thread_unsafe_plus, NULL);
    pthread_create(&th2, NULL, &thread_unsafe_minus, NULL);
    pthread_join(th1, NULL);
    pthread_join(th2, NULL);
    if (thread_unsafe_count != 0) {
        printf("Ah ha!\n"); /* the race hit: an update was lost */
    }
    return 0;
}
In real life, you'll probably have to wrap your suspect code in some way to help the race hit more often.
If it works, adjust the number of threads and other parameters to make it hit most of the time, and now you have a chance.
First, you need to get stack traces from your clients; that way you can actually do some forensics.
Next, do fuzz tests with random input, and keep these tests running for long stretches. They're great at finding those irrational border cases that human programmers and testers can't find through use cases and understanding of the code.
In this situation, where nothing else works, I introduce additional logging.
I also add in email notifications that show me the state of the application when it breaks down.
Sometimes I add in performance counters... I put that data in a table and look at trends.
Even if nothing shows up, you are narrowing things down. One way or another, you will end up with useful theories.
These are horrible and almost always resistant to the 'fixes' the engineer thinks he is putting in, as they have a habit of coming back to bite months later. Be wary of any fixes made to intermittent bugs. Be prepared for a bit of grunt work and intensive logging, as this sounds more like a testing problem than a development problem.
My own problem when overcoming bugs like these was that I was often too close to the problem, not standing back and looking at the bigger picture. Try and get someone else to look at how you approach the problem.
Specifically, my bug was to do with the setting of timeouts and various other magic numbers that in retrospect were borderline and so worked almost all of the time. The trick in my own case was to do a lot of experimentation with settings so that I could find out which values would 'break' the software.
Do the failures happen during specific time periods? If so, where and when? Is it only certain people that seem to reproduce the bug? What set of inputs seem to invite the problem? What part of the application does it fail on? Does the bug seem more or less intermittent out in the field?
When I was a software tester, my main tools were a pen and paper to record notes of my previous actions - remembering a lot of seemingly insignificant details is vital. By observing and collecting little bits of data all the time, the bug will appear to become less intermittent.
For a difficult-to-reproduce error, the first step is usually documentation. In the area of the code that is failing, modify the code to be hyper-explicit: One command per line; heavy, differentiated exception handling; verbose, even prolix debug output. That way, even if you can't reproduce or fix the error, you can gain far more information about the cause the next time the failure is seen.
The second step is usually assertion of assumptions and bounds checking. Everything you think you know about the code in question, write asserts and checks for. Specifically, check objects for nullity and (if your language is dynamic) existence.
Third, check your unit test coverage. Do your unit tests actually cover every fork in execution? If you don't have unit tests, this is probably a good place to start.
The problem with unreproducible errors is that they're only unreproducible to the developer. If your end users insist on reproducing them, leveraging the crash in the field is a valuable tool.
I've run into bugs on systems that seem to consistently cause errors, but when stepping through the code in a debugger the problem mysteriously disappears. In all of these cases the issue was one of timing.
When the system was running normally there was some sort of conflict for resources or taking the next step before the last one finished. When I stepped through it in the debugger, things were moving slowly enough that the problem disappeared.
Once I figured out it was a timing issue it was easy to find a fix. I'm not sure if this is applicable in your situation, but whenever bugs disappear in the debugger timing issues are my first suspects.
Once you fully understand the bug (and that's a big "once"), you should be able to reproduce it at will. When the reproduction code (automated test) is written, you fix the bug.
How to get to the point where you understand the bug?
Instrument the code (log like crazy). Work with your QA - they are good at re-creating the problem, and you need to arrange to have full dev toolkit available to you on their machines. Use automated tools for uninitialized memory/resources. Just plain stare at the code. No easy solution there.
Those types of bugs are very frustrating. Extrapolate it out to different machines with different types of custom hardware that might be in them (like at my company), and boy oh boy does it become a nightmare. I currently have several bugs like this at the moment at my job.
My rule of thumb: I don't fix it unless I can reproduce it myself or I'm presented with a log that clearly shows something wrong. Otherwise I cannot verify my change, nor can I verify that my change has not broken anything else. Of course, it's just a rule of thumb - I do make exceptions.
I think you're quite right to be concerned with your colleague's approach.
These problems have always been caused by:
Memory Problems
Threading Problems
To solve the problem, you should:
Instrument your code (Add log statements)
Code Review threading
Code Review memory allocation / dereferencing
The code reviews will most likely only happen if it is a priority, or if you have a strong suspicion about which code is shared by the multiple bug reports. If it's a threading issue, then check your thread safety - make sure variables accessible by both threads are protected. If it's a memory issue, then check your allocations and dereferences, and especially be suspicious of code that allocates and returns memory, or code that uses memory allocated by someone else who may be releasing it.
Some questions you could ask yourself:
When did this piece of code last work without a problem?
What has been done since it stopped working?
If the code has never worked, the approach would naturally be different.
At least when many users change a lot of code all the time, this is a very common scenario.
Specific scenario
While I don't want to concentrate on only the issue I am having, here are some details of the current issue we face and how I've tackled it so far.
The issue occurs when the user interacts with the user interface (a TabControl to be exact) at a particular phase of a process. It doesn't always occur and I believe this is because the window of time for the problem to be exhibited is small. My suspicion is that the initialization of a UserControl (we're in .NET, using C#) coincides with a state change event from another area of the application, which leads to a font being disposed. Meanwhile, another control (a Label) tries to draw its string with that font, and hence the crash.
However, actually confirming what leads to the font being disposed has proved difficult. The current fix has been to clone the font so that the drawing label still has a valid font, but this really masks the root problem which is the font being disposed in the first place. Obviously, I'd like to track down the full sequence, but that is proving very difficult and time is short.
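In code, the workaround amounts to a one-liner like this (the label and font names are invented; note the label's copy then needs disposing along with the control):
// Give the label its own copy of the shared font so that a Dispose()
// elsewhere can no longer invalidate the font mid-paint.
statusLabel.Font = (Font)sharedFont.Clone();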
Approach
My approach was first to look at the stack trace from our crash reports and examine the Microsoft code using Reflector. Unfortunately, this led to a GDI+ call with little documentation, which only returns a number for the error - .NET turns this into a pretty useless message indicating something is invalid. Great.
From there, I went to look at what call in our code leads to this problem. The stack starts with a message loop, not in our code, but I found a call to Update() in the general area under suspicion and, using instrumentation (traces, etc), we were able to confirm to about 75% certainty that this was the source of the paint message. However, it wasn't the source of the bug - asking the label to paint is no crime.
From there, I looked at each aspect of the paint call that was crashing (DrawString) to see what could be invalid and started to rule each one out until it fell on the disposable items. I then determined which ones we had control over and the font was the only one. So, I took a look at how we handled the font and under what circumstances we disposed it to identify any potential root causes. I was able to come up with a plausible sequence of events that fit the reports from users, and therefore able to code a low risk fix.
Of course, it crossed my mind that the bug was in the framework, but I like to assume we screwed up before passing the blame to Microsoft.
Conclusion
So, that's how I approached one particular example of this kind of problem. As you can see, it's less than ideal, but fits with what many have said.
Unless there are major time constraints, I don't start testing changes until I can reliably reproduce the problem.
If you really had to, I suppose you could write a test case that appears to sometimes trigger the problem, and add it to your automated test suite (you do have an automated test suite, right?), and then make your change and hope that test case never fails again, knowing that if you didn't really fix anything at least you now have more chance of catching it. But by the time you can write a test case, you almost always have things reduced down to the point where you're no longer dealing with such an (apparently) non-deterministic situation.
Simply: ask the user who reported it.
I just use one of the reporters as a verification system.
Usually the person who was willing to report a bug is more than happy to help you to solve her problem [1].
Just give her your version with a possible fix and ask if the problem is gone.
In cases where the bug is a regression, the same method can be used to bisect where the problem occurred by giving the user with the problem multiple versions to test.
In other cases the user can also help you to debug the problem by giving her a version with more debugging capabilities.
This will limit any negative effects from a possible fix to that person instead of guessing that something will fix the bug and then later on realising that you've just released a "bug fix" that has no effect or in worst case a negative effect for the system stability.
You can also limit the possible negative effects of the "bug fix" by giving the new version to a limited number of users (for example to all of the ones that reported the problem) and releasing the fix only after that.
Also, once she can confirm that the fix you've made works, it is easy to add tests that ensure your fix will stay in the code (at least at the unit-test level, if the bug is hard to reproduce at a higher system level).
Of course this requires that whatever you are working on supports this kind of approach. But if it doesn't I would really do whatever I can to enable it - end users are more satisfied and many of the hardest tech problems just go away and priorities come clear when development can directly interact with the system end users.
[1] If you have ever reported a bug, you most likely know that many times the response from the development/maintenance team is somehow negative from the end users point of view or there will be no response at all - especially in situations where the bug can not be reproduced by the development team.
I would like to know your experience when you need to take over somebody else's software project - more so when the original software developer has already resigned.
The most success that we've had with that is to "wiki" everything. During the notice period, ask the leaving developer to help you document everything in the team/company wiki, and see if you can do code reviews with him/her, adding comments that explain sections to the code while doing the reviews. It's best for the "taking over" developer to write the comments in the code under the supervision of the leaver.
Cases where the original devs left before handing over the project are always the most interesting: you're stuck with a codebase in an unknown state. What I always find intriguing is how the new devs often do their utmost best to comment on how badly designed the code is: they forget about the constraints the old devs might have been under, and the shortcuts they might have been forced to make. The saying is always: old dev == bad dev. What do you people think?
I would even call this out as an official bad practice: bad-mouthing the ones who came before us.
I try to take as pragmatic an approach as possible: learn the codebase, wander around a bit. Try to understand the relation between requirements and code, even if there is no clear initial relationship at all. There will always be the "aha moment" when you realise why something was done this way or that. If you're still convinced something is implemented the wrong way, do your refactorings if possible. And isolate the pieces of code you cannot change: unit test them by using a mocking framework.
Hail to the maintenance developer.
I once joined a team which had been handed a pile of steaming crap from outsourcing. The original project - a multimedia content manager based on Java, Struts, Hibernate|Oracle - was well structured (it seems it was the work of a couple of people; pair programming, wise use of design patterns, some unit testing). Then someone else inherited the project and endlessly copy-pasted features, loosened the business rules, patched, and branched until it became a huge spaghetti monster with finely crafted pieces of code like:
List<Stuff> stuff = null;
if (LOG.isDebugEnabled())
{
    stuff = findStuff();
    LOG.debug("Yeah, I'm a smart guy!");
    for (Stuff stu : stuff)
    {
        LOG.debug("I've got this stuff: " + stu);
    }
}
methodThatUsesStuff(stuff); // stuff is still null whenever debug logging is off
hidden amongst the other brilliant ingenuity.
I tamed the beast via patient refactoring (extracting methods and classes most of the time), commenting the code from time to time, and reorganizing everything till the codebase shrank by 30%, getting more and more manageable over time.
I have had to take over someone else's code of varying degrees of quality on several occasions. Hence the tips:
Make effort to take structured notes of any piece of significant information from minute one: names of stakeholders, business rules, code and document locations etc. It is best to dedicate a fresh spiral notebook, so you could tear pages out if you had to.
Make use of one of the better free indexing and desktop search tools available on the market (Google Desktop Search, MS Windows Search will do). Add all document, e-mail, code locations to it.
Before developing anything, do document analysis: find everything you can get your hands on electronically, on the network and in printed-out docs, and make the effort to simply read it. There is an amazing amount of useful information even in unfinished drafts.
Mind map the code, architecture etc as you go.
With lesser documented and maintained systems you inevitably will have moments of despair that are likely to push you into procrastination mode. Especially during your first days or week when amount of new information your mind has to digest is overwhelming. At these times it is nice to have someone to remind you (or just do it yourself) to take it easy, concentrate on important things first and revert to making smaller steps in trying to gain understanding instead of trying to leap forward.
Keep taking notes, making diagrams, drawing rich pictures, mind mapping. It really helps to digest the copious amounts of new information, mostly disorganised.
Hey, good luck!
We actually have a specified set of "Deliverables" that has to be present for us to take over a project.
If we have the chance, we try to push one of our folks into the group developing the project at first. That way we get some firsthand knowledge before our group takes over the code (in line with what #Guy wrote).
That being said, the most important part for me would be:
Some kind of high-level overview (drawing?) of what the code does.
Easy access to ask questions of the people who actually wrote the code
This for me is the alpha and omega when taking over code and projects.