Using ReSharper -- is it really a "personal decision"?

My team lead recommends that all the developers use ReSharper, but he does not "enforce" this recommendation. As a result, whenever I open some code it immediately jumps out at me whether the developer who wrote it used ReSharper or not. Tell-tale signs are unnecessary nesting, use of redundant type declarations and generic parameters, typos in symbol names (because it would be too hard for them to fix them), etc.
The unstated assumption seems to be that the use of ReSharper is a "personal decision" that does not affect anyone else. But is this really true? What level of "enforcement" on this issue is ideal?

If you work in a group, then nothing that affects your code is purely a personal decision.

<Opinion>
I think the use of tools like ReSharper should be determined the same way style conventions are implemented. Everyone does it. Or no-one does it.
It's really annoying as a developer having hundreds of warnings from other developers who just didn't write to the same standards as everyone else.
</Opinion>

You should write code that conforms to the coding standards agreed on by your team (something like this is sufficient). Your choice of VS plugins, keyboard bindings, font colours and sizes should remain your own.
I wouldn't get overly worked up about minutiae such as "redundant type declarations". It's more important to get people on board with the SOLID principles.

After using R# for over 2 years now, I sympathize with you. I find my code substantially cleaner, terser and more readable, and my standards, for my own code as well as for the code I have to review/maintain from others, have taken a quantum leap upwards. However, a fundamental rule of politics (human nature really) is that most people resist change... and forcing it upon them never seems to lower that resistance, so persuasion is always a better approach...

I think that tools should be used by the whole team in order to be effective.
Having said that it seems that the "problem" you've encountered is a code quality issue and not related to R#.
The mistakes you describe can be created with or without R# and could be avoided by either performing code reviews or pair programming.
R# helps you write good code faster, and I can't think of a developer that doesn't want to increase his productivity - so if you want everyone in your team to benefit from R#, convince them that they'll be more productive using it. Again, pair programming is a good way to demonstrate the merits of a new tool.

Personally, I think that standards on a team work best if they are enforceable, and enforceable via a tool. Resharper does a good job (provided you set up the same rules) of giving warnings on files for formatting differences. Rules only matter if there is a tool that enforces them - whether that's Resharper's warnings or something else doesn't matter so much.
My current team is about half-and-half. I use R#, as do a couple other members, but some of our team just uses VS without it. We do, however, enforce that we all follow the standards provided by StyleCop, though - so the R# users are happy (provided we have StyleCop for Resharper installed), and the non-Resharper users can work fine.

It's best to use a consistent set of rules across a team/project, regardless of how they are enforced (by a CA tool or code reviews, etc).
In the absence of a clear standard on a given point, you need to relax your rules (or resharper's) to accept things that other programmers do, even if they don't fit your personal style. Indeed, programmers are all individuals and you will never find two such people who do things exactly the same - some flexibility is always needed.
CA tools often make time-wasting suggestions (the majority are pretty rubbish in my experience). What I mean is that if all the members of your team can effectively read, understand, and maintain a piece of code as it is written, then there is often little point refactoring that code. Beware of wasting time trying to satisfy Resharper when the changes you are making aren't really going to make any difference to the readability, maintainability, robustness, portability, or efficiency of your code. Turn off these warnings rather than wasting time refactoring.
Having said that, you should definitely campaign and promote CA to your team and your manager. The team/project will benefit, and everyone will personally benefit from the use of CA tools.

To me using a code refactoring tool is a no-brainer. Think about it--you are not writing code for yourself, you are writing it for others :) Architects have to make their plans precise, readable, and understandable because they know someone else is coming behind them. They have tools that help them do this. Why, then, do programmers not do the same? We complain it's "too much work" or "Why fix what works?" The truth is that not refactoring boils down to pure laziness.


Working in a team environment, how would you handle a developer who refuses to follow team-defined standards?
Developer is at a junior level
Developer is at a peer level
Developer is at a senior level
I know this is subjective but I feel that it would benefit developers by making them more professional.
1) Developer is at a junior level
- Mentor; be kind & gentle. Explain the need for standards in general and then explain the need for the particular standard which is not being followed. Do this with an open mind; if you cannot justify the standard then perhaps it ought not to be a standard?
2) Developer is at a peer level
- this ought to be easy enough – if you can keep it technical and not let it dissolve into a clash of personalities. Again, if you can justify it, it probably ought to be a standard, but if he has an equally compelling argument against, then maybe not. However, do not accept that there ought to be no standard. Ask him for a suggested standard to replace the one which he does not like. If he will not comply, then escalate. If you don't like it, then put it to the vote/escalate. Try to avoid escalation, but try to ensure that there is a standard.
3) Developer is at a senior level
Try to reason. Listen carefully, he may be right. If in doubt, then put it to the vote/ escalate.
Caveat: standards are nice (imo, absolutely required, but ymmv), but they are difficult to “enforce” unless reached by consensus.
Exception: “cowboy coders” need to be slapped down hard; no exceptions.
Do not feel bad about “tattling” to the boss. When it comes to a cowboy coder then follow the cowboy motto “this team ain’t big enough for both of us”; either he stops cowboying or one of you has to get the hell out of Dodge.
Pair programming may be my best suggestion as this can help ensure everyone gets up to the same level and help foster a sense of community within the team. This does shift responsibility to some extent but the idea is to have someone try to get the other person to do things the way others do it. How to Win Friends and Influence People has the following points that may apply though these are general:
Fundamental Techniques in Handling People
Don't criticize, condemn, or complain.
Give honest and sincere appreciation.
Arouse in the other person an eager want.
Six Ways to Make People Like You
Become genuinely interested in other people.
Smile.
Remember that a man's name is to him the sweetest and most important sound in any language.
Be a good listener. Encourage others to talk about themselves.
Talk in the terms of the other man's interest.
Make the other person feel important and do it sincerely.
Twelve Ways to Win People to Your Way of Thinking
Avoid arguments.
Show respect for the other person's opinions. Never tell someone they are wrong.
If you're wrong, admit it quickly and emphatically.
Begin in a friendly way.
Start with questions the other person will answer yes to.
Let the other person do the talking.
Let the other person feel the idea is his/hers.
Try honestly to see things from the other person's point of view.
Sympathize with the other person.
Appeal to noble motives.
Dramatize your ideas.
Throw down a challenge, and don't talk negatively when the person is absent; talk about only the positive.
Be a Leader: How to Change People Without Giving Offense or Arousing Resentment
Begin with praise and honest appreciation.
Call attention to other people's mistakes indirectly.
Talk about your own mistakes first.
Ask questions instead of directly giving orders.
Let the other person save face.
Praise every improvement.
Give them a fine reputation to live up to.
Encourage them by making their faults seem easy to correct.
Make the other person happy about doing what you suggest.
If there is a standards document in place, then simply point to the document and tell them that they need to adhere to the standard. If there is no document in place and it is sort of ad-hoc "this is de facto how this team has been coding", then organize a meeting to create a consensus on what the team standards should be and create a standards document. I think it is fairly hard to argue with the need for a consistent style for the sake of readability and maintenance, and when there are rules in place saying "do it this way", it is much harder to diverge from them than if it is merely established practice.
We use TFS and code check-in policies as a way to enforce code standards. I agree completely with the other responses for the people part of it. For some coding standards, like variable naming standards, you can spend a tiny bit of time writing automated checks (maybe the dev in question can write these). If you incorporate them into your build process, then part of validating a build involves checking the source for correct code standards. We use MSBuild with Visual Studio 2008 and it works great. That helps somewhat once a system has been developed to enforce standards, as it's harder to argue with a build system. Also, for further enforcement, it helps to have the builds treat those violations as errors in Visual Studio rather than just warnings.
Above all, the "why" part of the standards is the most important thing for a developer of any rank to understand. If they understand why standards are useful, and have the right forum/opportunity (monthly dev meeting?) where they can voice reasoning against a particular standard, hopefully they can begin to follow them alongside the successful team.

Do you think VS and Intellisense make us dumber? [closed]

I read this article, the parts of "Intellisense" and "Generated Code":
http://www.charlespetzold.com/etc/DoesVisualStudioRotTheMind.html
Do you think the author is right?
I don't agree that Intellisense is soooo bad for programmers. VS for C# tends to "hide" the controls' events in another file, but you can find them if you know enough about the language, and you can modify them by hand. And with VS I don't need to memorize all the .Net classes I use.
I think it doesn't matter if you use an IDE or Notepad, but if these RAD tools exist and are free... why not use them?
No, I very much disagree with this point.
Yes, I do agree that Intellisense allows me to keep less of an object's growing number of members in my head. I am dumber in the sense that I often know less about the intricate details of projects where I use Intellisense heavily.
For instance, I can probably rattle off all of the members of the C++ types I use with great accuracy. I tend to be a VIM-only guy for my C++ projects and hence don't really use Intellisense. In C# and VB.Net projects, though, I couldn't rattle off the members with the same accuracy, as I rely on Intellisense more often.
But there is a trade off. Keeping all of the members in my head comes with a cost. When writing code, instead of focusing on the algorithm, I focus on the members. I have to constantly think about the naming convention of a particular type, or the parameter list, what's byref or by val, when writing out an algorithm in C++. In C#/VB.Net I'm more free to think about the algorithm as the IDE takes care of finding the members for me.
Does this mean I'm dumber? No it simply means I'm able to focus on the problem I'm actually trying to solve. I feel this makes me more productive and hence smarter not dumber.
It doesn't make smart people dumber, but it makes dumb people look smarter
No, modern programming tools and languages help the programmer focus less on the little things and more on the big picture.
The main goal is to design solid software. If a programmer doesn't have to worry about memorizing every method of a class, they can spend more time on engineering the product.
Our physics prof always said: why memorize something you can look up? He always listed the required formulas on the board during exams. Seems to me Intellisense is the same idea. Rather than remembering whether the object uses a Count or a Length property, let VS tell me.
No, it enables us to code faster I think. Anything to make the coding process faster, easier and simpler is a step in the right direction in my opinion.
Not dumber, it makes us faster :)
I use intellisense and generated code to speed up development, not because I don't know what I'm doing. Therefore, I can't agree that using them makes you dumber.
I am the kind of person that will try to learn as much about a language as possible before attempting to use the tools that facilitate development in that language. In that regard, I have to agree with Matthew Jones' comment that "tools do not make people dumber...laziness and lack of drive do."
Programming is just moving forward to make life easier for the programmer and making him more productive.
It would be like complaining that we don't write assembly code anymore... it's important to know the big concepts and ideas behind it, but working with it would be weird (in most cases).
I don't think so.
Intellisense makes things like case sensitive spelling easier.
Is it MyArray.Count() or MyArray.Size() or Length(MyArray)...? What's the return type of a particular method, again? Intellisense saves me a few minutes every day on Google for things like this.
Detail memorization is not the most important skill in software development. It is better to have problem-solving skills and the ability to find the information you need. If you invest more time in the details, you will be lost when the next greatest language is born, but algorithms and patterns will still be relevant.
The question, of course, is: does Intellisense make programming less of a skilled profession?
Yes, I agree with the author. Intellisense (and many other Visual Studio features) is indeed "making us dumber" for the reasons mentioned in the article.
That's not always a bad thing. Sometimes it's more desirable to be productive than it is to get smarter. The challenge is striking the right balance. :)
The only qualm with IntelliSense that the author seems to have is the autocomplete when you press the space bar, which apparently he doesn't realize you can turn off in the Options menu.
He also claims that coding "has become a constant dialog with IntelliSense"... which makes no sense, because you still have to pick the correct methods from the list! Without it, you'd simply have to search online for the name of the method instead of getting an instantaneous search.
It's interesting how the author ignores that IntelliSense can't tell you whether to use a StringBuilder or a String, etc.
Not at all. When the intellisense list pops up, does a programmer search through the whole list every time to find the function they were looking for? Maybe at first, but normally you keep typing until intellisense narrows down the list to the point where it's faster to use the up/down arrows and tab to complete.
Without intellisense, it would take a little longer to code given that you are experienced with the classes that you're using and a lot longer given that you aren't. It only serves as a speed tool and quick documentation of everything that's available.
It doesn't make us dumber; it is a necessity.
Back in the day (MS BASIC for me), there was no need for intellisense. The scope of the language was limited enough for a programmer to remember all keywords and functions.
Jump to today, intellisense is an absolute requirement. Take .Net for example. There is simply no way to remember or discover the many thousands of types, properties and methods. Oh sure, for a very small project you may know a bunch (100s?) of items. But let's be honest - there is no way a modern working programmer could exist without it.
Adding my two cents here.
From my own experience, and as mentioned in TFA, I would say that the only drawback I've encountered so far is that when you learn the language you might pick up bad habits - say, using ArrayList instead of List<T> only because you're not aware that changing your using clauses would make a better datatype available.
The author complains that he gets the wrong datatypes suggested when entering certain datatypes. While some of you will probably get a license, a weapon, and start the manhunt, I've found that using naming conventions is an excellent way of forcing Intellisense to work my way, especially when working in GUI-control-intensive forms & stuff.
No more so than calculators made for poorer mathematicians and physicists. Sure, using a slide rule forces you to keep a mental model of the order of magnitude of things, but it is really just a tool ... and better tools let you do better work.
This can be abstracted into the traditional question:
Does knowing more about the details help or hurt?
As a rule, experienced engineers and craftsman say, help. But knowing the details also lets you know when the details don't matter, which is what Visual Studio/Intellisense provides. (I'm sure there's a pithy proverb that could be said here, but I don't feel up to thinking up a quip).
Dumb & Lazy.
Interesting question. Sure I find Intellisense in some sense makes the job easier, but it's kind of like money. The more you have, the more you spend, not necessarily on things you need. I learned to program around '62, and somehow I got along without Intellisense for a really long time. What Intellisense does for me now is help me remember lots of classes and members that as little as 4 years ago I never knew I needed.
There's one tendency I've seen in software that never fails. Nature abhors a vacuum. Machines get bigger, so guess what, software gets bigger (but not always better). Machines get faster, so software gets slower. Now people can get help typing long names, so the code gets really verbose. Now people get help remembering lots of classes, so guess what, there are lots more classes to remember. This goes a long way to helping the software get bigger and slower.
I do a lot of performance tuning, and what is the dominant cause of slowdown? It is galloping generality caused by overdesign with too much data structure, too many classes, and too many layers of abstraction. In a word, "bloat". Here is just a small example.
I find Visual Studio's tools conducive towards more experimentation. When you're dealing with the Win32 API in C (for example) you can't really poke around too easily. When you're working with C#, it's a snap to have a little explore around a library and learn what it does without breaking out MSDN or a disassembler for the entire evening.
If you're a naturally curious programmer, Intellisense won't change that. If you're not, Intellisense won't change that either. To paraphrase one of my colleagues "I think it's a waste of time looking through huge books when you can just take an implementation from the web and move on to the next thing".
It's an old argument anyway, pre-Intellisense. Does BASIC rot the mind where writing in x86 doesn't? Is knowing an algorithm inside out relevant when every single programming language you're going to use in your role has a tried and tested library?
I find that those who consider programming a hobby or a skill are inclined to comprehend and investigate. Those who consider it the day job don't. Regardless of any frippery around it, it's more about a programmer's mindset than what is made available.

Besides "treat warnings as errors" and fixing memory leaks, what other ideas should we implement as part of our coding standards?

First let me say, I am not a coder, but I help manage a coding team. No one on the team has more than about 5 years' experience, and most of them have only worked for this company, so we are flying a bit blind - hence the question.
We are trying to make our software more stable and are looking to implement some "best practices" and coding standards. Recently we started taking this very seriously, as we determined that much of the instability in our product could be linked back to the fact that we allowed warnings to go through unfixed when compiling. We also never bothered to take memory leaks seriously enough.
In reading through this site, we are now quickly fixing this problem with our team, but it raises the question: what other practices can we implement team-wide that will help us?
Edit: We do fairly complex 2D/3D Graphics Software that is cross-platform Mac/Windows in C++.
Typically, the level of precision/exactingness in coding standards/process is directly connected to the safety level required. E.g., if you are working in aerospace, you will tightly control pretty much everything. But, on the other end of the spectrum, if you are working on a computer gaming forum site...if something breaks, no biggie. You can have slop. So YMMV, depending on your field.
The classic book on coding is Code Complete, 2nd edition, by Steve McConnell. Have a team copy & strongly recommend your developers purchase it (or have the company get it for them). That will satisfy probably 70% of the stylistic questions. CC addresses the majority of development cases.
edit:
Graphics software, C++, Mac/Windows.
Since you're doing cross-platform work, I would recommend having an automated "compile-on-checkin" process for your Mac (10.4 (maybe), 10.5, 10.6) and Windows (XP (maybe), Vista, 7) targets. This ensures your software at the least compiles, and you know when it doesn't.
Your source control (which you are using, I assume) should support branching, and your branching strategy can reflect cross-platformy-ness as well. It's also advantageous to have mainline branches, dev branches, and experimental branches. YMMV; you will probably need to iterate on that and consult with people who are familiar with configuration management.
Since it's C++, you will probably want to be running Valgrind or similar to know if there is a memory leak. There are some static analyzers which you can get; I don't know how effective they are with the modern C++ idiom. You can also invest in writing some wrappers to help watch memory allocations.
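As a rough sketch of that wrapper idea (a toy invented here for illustration, not thread-safe and no replacement for Valgrind), overloading the global allocation operators lets a test build count allocations that are never freed:
#include <cstdio>
#include <cstdlib>
#include <new>
// Toy leak counter: every global new/delete adjusts a counter,
// so a test run can report outstanding allocations at exit.
static long g_outstanding = 0;
void* operator new(std::size_t size) {
    if (void* p = std::malloc(size)) {
        ++g_outstanding;
        return p;
    }
    throw std::bad_alloc();
}
void operator delete(void* p) noexcept {
    if (p) {
        --g_outstanding;
        std::free(p);
    }
}
int main() {
    int* leaked = new int(42); // deliberately never freed
    (void)leaked;
    std::printf("outstanding allocations: %ld\n", g_outstanding); // prints 1
}
Valgrind will find far more than this (invalid reads, use-after-free, and so on), so treat a wrapper like this as a cheap smoke test only.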
Regarding C++... The books Effective C++, More Effective C++, and Effective STL (all by Scott Meyers) should be on someone's shelf, as well as Modern C++ Design by Andrei Alexandrescu. You may find Lippman's book on the C++ object model useful as well, I don't know.
HTH.
There are a lot of consultants/companies who have coding rules to sell you, you should have no difficulty finding one. However, one that doesn't first ask you the field you are in (you didn't mention it in your question) is providing you with snake oil.
Test-Driven Development. TDD helps check for logic errors at the development phase.
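A minimal sketch of that rhythm, with names invented for the example: the test is written first against a function that does not exist yet, and the implementation follows until the test passes.
#include <cassert>
// Test written first, against a function that is not implemented yet.
int clamp(int value, int lo, int hi);
void test_clamp() {
    assert(clamp(5, 0, 10) == 5);   // in range: unchanged
    assert(clamp(-3, 0, 10) == 0);  // below range: clamped to lo
    assert(clamp(99, 0, 10) == 10); // above range: clamped to hi
}
// Implementation added only after the failing test exists.
int clamp(int value, int lo, int hi) {
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}
int main() { test_clamp(); }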
Get everyone to read and discuss various standards and guidelines. I (as well as Stroustrup) suggest the Joint Strike Fighter coding standards. Ask your developers to classify the guidelines therein among
Already met
Could be met easily (few changes from current condition)
Should work toward in old code and follow in new development
Not worth it
Have the long technical discussions, and settle on a set for the team to adopt.
Code reviews have been shown to provide significant benefits to code quality, even more so than traditional testing. I would suggest getting in the habit of performing routine design and code reviews; the number of stages at which reviews are performed, the formality and detail of the reviews, and the percentage of work subject to review can all be set according to your business requirements. Coding standards can be useful when done right (and if everyone's code looks similar, it is also easier to review), but where you put your braces and how far you indent blocks isn't really going to affect defect rates.
Also, it's worth familiarizing yourself and your peers with the concept of technical debt and working bit by bit to redesign and improve parts of the system as you come in contact with them. However, unless you have comprehensive unit testing and/or processes in place to ensure high code quality, this may not help things.
Given that this is Stack Overflow, someone should reference The Joel Test. I like to automate as much as possible, so using Lint is also a must.
These basics are good for most any industry or team size:
Use Agile methodology (scrum is a good example).
http://www3.software.ibm.com/ibmdl/pub/software/rational/web/whitepapers/2003/rup_bestpractices.pdf
Use Test-driven development. http://www.agiledata.org/essays/tdd.html
Use consistent coding standards. Here is an example document:
http://www.dotnetspider.com/tutorials/BestPractices.aspx
Get your team familiar with good design patterns. http://www.dofactory.com/Patterns/Patterns.aspx
You can't go wrong with these basics. Build from there with new team members who have been there and done that. I'd strongly suggest pair programming once you've got those guys on the team. It is the best way to infect people with best practices.
Best of luck to you!
The first thing you need to consider when adding coding standards/best practices is the effect it will have on your team's morale and cohesiveness. Developers usually resent any practices that are imposed on them even if they are good ideas. The people issues have to be addressed for a big change to be successful.
You will need to involve your group in developing the standards and try to achieve consensus. That said, you will never get universal agreement on anything, so you will have to balance consensus and getting to standards. I've seen major fights over something as simple as tabs versus spaces in source.
The best book I've seen for C/C++ guidelines in complicated projects is Large Scale C++ Software Design. That book along with Code Complete (which is a must-read classic) are good starting points.
You don't mention any language, and while it is true that most coding standards are language-independent, knowing the language will also help you in your search. At most of the companies where I have worked, there were different coding standards for different programming languages. So my advice would be:
Choose your language
Search the web since there are plenty of standards out there for your language
Gather all the standards you found
Divide your team into groups and give each a few of the documents to analyze. They should come up with a list of things they think are worth having in their new standards.
Have a meeting where each group presents its findings to everybody (there will be a lot of redundancy between groups). It should be an open discussion, and everybody's opinion should be taken into account.
Compile a list of the standards that were selected by the majority of the coders and that should be your starting point.
Perform semi annual reviews of the standards, to add or remove things.
Now, the logic behind this is: most of the problems with introducing a coding standard from scratch come down to developer acceptance. Each of us has a way of doing things, and it sucks when somebody from the outside decrees that one way of doing things is better than another. So, if developers understand the logic and the purpose of the coding standards, then you have half of the work done. The other thing is that standards should be designed and created specifically for your company's needs. There will be some things that make sense and some that don't, and with the above approach you can discriminate between those. Finally, standards should be able to change over time to reflect the company's needs, so a coding standard should be a living document.
This blog post describes a lot of the common practices of mediocre programming. These are some of the potential issues your team is having. It includes a quick explanation of the "best practice" for each one.
One thing you should have rules about is some kind of naming standard. It just makes life easier for people while not being really invasive.
Other than that, I'd have to say it depends on the level of your team. Some need more rules than others. The better people are, the less "support" they need from rules.
If you want a complete set of coding rules to control every little detail, you're going to spend lots of time arguing about rules and exceptions to rules and what you should write rules about. I'd go with something already written instead.
If you are concerned about quality then one thing you could do that really isn't about rules, is:
Automated building and testing. This has helped me a lot. Once you find a problem, it really helps to have an environment where you can write a test to verify the problem. Fix the problem and then easily add your test to an automatic test suite that makes sure that sort of problem can't come back without being spotted.
Then make sure these run often. Preferably every time someone checks something in.
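Since the question is about C++, here is a toy sketch of what "easily add your test to an automatic test suite" can look like; in practice an existing framework such as Google Test or CppUnit does this job better:
#include <cstdio>
#include <functional>
#include <string>
#include <utility>
#include <vector>
// Each Test object registers itself in a global list at startup, so
// adding a regression test is just defining one more static object.
static std::vector<std::pair<std::string, std::function<bool()>>>& tests() {
    static std::vector<std::pair<std::string, std::function<bool()>>> all;
    return all;
}
struct Test {
    Test(std::string name, std::function<bool()> fn) {
        tests().emplace_back(std::move(name), std::move(fn));
    }
};
// Hypothetical regression test, added once a bug has been found and fixed.
static Test bug1234("bug 1234: empty input must not crash", [] {
    std::string input;
    return input.empty(); // stand-in for the real reproduction steps
});
int main() {
    int failures = 0;
    for (const auto& t : tests()) {
        const bool ok = t.second();
        std::printf("%s %s\n", ok ? "PASS" : "FAIL", t.first.c_str());
        if (!ok) ++failures;
    }
    return failures; // non-zero exit fails the automated build
}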
If your framework requires certain rules to function well, put those in your coding standard.
If you decide to have coding standards, you want to be very careful about what you put in. If the document is too long or focuses on arbitrary stylistic details, it will just get ignored and nobody will bother to read it. Often a lot of what goes into coding standards is just the preferences of the person that wrote the document (or some standards that have been copied off the web!). If something is in the standard, it needs to be very clear to the reader how it improves quality and why it is important.
I would argue that a large proportion of what makes code readable is to do with design rather than the layout of the code. I have seen a lot of code that would adhere to the standards but still be difficult to read (really long methods, bad naming, etc.). You can't have everything in the standards; at some point it comes down to how skilled and disciplined your developers are - do what you can to increase their skills.
Perhaps rather than a coding standards document, try to get the team to learn about good design (easier said than done, I know). Make them aware of things like the SOLID principles, how to separate concerns, how to handle exceptions properly. If they design well, the code will be easy to read and it won't matter if there are enough white lines or the curly braces are in the right place.
Get some books about design principles (see a couple of recommendations below). Maybe get the team to do some workshops to discuss some of the topics. Perhaps get them to collectively write a document on what principles might be important for their project. Whatever you do, make sure it is the team as a whole who decides what the standards/principles are.
http://www.amazon.co.uk/Principles-Patterns-Practices-Robert-Martin/dp/0131857258/
http://www.amazon.co.uk/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882
Don't write your own standards from scratch.
Chances are there are several out there that define what you want already, and are more complete than you could come up with on your own. That said, don't worry too much if you don't agree 100% with one on minor matters; you can swap in parts of others, or call some infraction of it a warning rather than an error, depending on your own needs. (For example, some standards would throw a warning if a line is more than 80 characters long; I prefer no more than 120 as a hard limit, but would make sure there was a good reason - readability and clarity, for example - for going over 80.)
Also, do try to find automated methods of checking your code against the standard - including your own minor changes as required.
Besides books already recommended, I would also mention,
C++ Coding Standards: 101 Rules, Guidelines, and Best Practices by Herb Sutter and Andrei Alexandrescu (Paperback - Nov 4, 2004)
If you're programming on VB.NET, make sure Option Explicit and Option Strict are set to On. This will save you a lot of grief tracking down mysterious bugs. These can be set at the project level so that you never have to remember to set them in your code files.
I really like:
the MISRA C standard (it's a little strict, though, but the ideas hold for C++)
and the High Integrity C++ standard (http://www.codingstandard.com/HICPPCM/index.html), which borrows heavily from MISRA.
LDRA (a static analysis tool) uses these standards to grade your work (this I don't use, as it's expensive), but I can vouch for running cppcheck as a good 'free/libre' static analysis checker.
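To give a feel for what such a checker catches, here is an invented fragment that a tool like cppcheck should flag twice, once for the out-of-bounds write and once for the leak, without ever running the program:
// Both defects below are findable by static analysis alone.
void fill() {
    int* buf = new int[10];
    for (int i = 0; i <= 10; ++i) // off-by-one: writes buf[10]
        buf[i] = i;
    // buf goes out of scope without delete[]: memory leak
}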

Debugging is a bad smell - how to persuade them?

I've been working on a project that can't be described as 'small' anymore (40+ months), with a team that can't be defined as 'small' anymore (~30 people). We've been using Agile/Scrum (1) practices all along, and a healthy dose of TDD.
I'm not sure if I picked this up from Agile or TDD, more likely a combination of the two, but I'm now clearly in the camp of people that looks at debugging as a bad smell. By 'debugging' I'm not referring to the more abstract concept of figuring out what might be wrong with the system, but the specific activity of running the system in Debug mode, stepping through the code to figure out details that are otherwise inscrutable.
Since I'm fairly convinced, this question is not about whether debugging is a bad smell or not. Rather, I'd like to know how I can persuade my team-mates about this.
People that believe debugging mode is the 'standard' mode tend to write code that can be understood only by debugging through it, which leads to a lot of time wasted, since every time you work on an item on top of code developed by someone else, you first get to spend a considerable amount of time debugging it (and, since there's no bug involved, the term is becoming increasingly ridiculous) - and then silos happen. So I'd love to convince a few of my team-mates that avoiding debug mode is a Good Thing (2). Since they are used to living in debug mode, however, they don't seem to see the problem; to them, spending hours debugging someone else's code before they even start doing anything related to their new item is the norm, and they don't see anything wrong with it. Plus, as they spend time 'figuring it out', they know the developer that worked that area will eventually become available and the item will be passed on to them (leading to yet another silo).
Help me come up with a plan to turn them from the Dark Side!
Thanks in advance.
(1) Also referred to as SCRUM (all caps). Capitalization arguments aside, I think an asterisk after the term must be used since - unsurprisingly - our organization 'tweaked' the Agile and Scrum process to fit the perceived needs of all stakeholders involved. So, in all honesty, I won't pretend this has been 100% according to theory, but that's beside the point of my question.
(2) Yes, there will always be times when we'll have to get in debug mode, I'm not trying to absolutely avoid it, just.. trying to minimize the number of times we have to dive into it.
If you want to persuade your coworkers that your programming practices are better, first demonstrate by your productiveness that you are more effective than they are, at least for some tasks. Then they'll believe you when you explain how you get so much done.
It's also sometimes easier to focus on something concrete. Do your coworkers even talk in terms of "code smell"? Perhaps you could focus on specifics like "When the ABC module fails, it takes forever to debug it; it's much faster to use technique XYZ. Here, let me demonstrate." Then afterwards you can mention your basic principle, which is yeah the debugger is a useful tool, but there's usually other more useful ones.
This is a cross-post, because the first time around it was more of an aside on someone else's answer to a different question. To this question it's a direct answer.
Debugging degrades the quality of the code we produce because it allows us to get away with a lower level of preparation and less mental discipline. I learnt this from an accidental controlled experiment in early 2000, which I now relate:
I took on a contract as a Delphi coder, and the first task assigned was to write a template engine conceptually similar to a reporting engine - using Java, a language with which I was unfamiliar.
Bizarrely, the employer was quite happy to pay me contract rates to spend months becoming proficient with a new language, but wouldn't pay for books or debuggers. I was told to download the compiler and learn using online resources (Java Trails were pretty good).
The golden rule of arts and sciences is that whoever has the gold makes the rules, so I proceeded as instructed. I got my editor macros rigged up so I could launch the Java compiler on the current edit buffer with a single keystroke, I found syntax-colouring definitions for my editor, and I used regexes to parse the compiler output and put my cursor on the reported location of compile errors. When the dust settled, I had a little IDE with everything but a debugger.
To trace my code I used the good old-fashioned technique of inserting writes to the console that logged position in the code and the state of any variables I cared to inspect. It was crude, it was time-consuming, it had to be pulled out once the code worked, and it sometimes had confusing side-effects (eg forcing initialisation earlier than it might otherwise have occurred, resulting in code that only works while the trace is present).
Under these conditions my class methods got shorter and more and more sharply defined, until typically they did exactly one very well defined operation. They also tended to be specifically designed for easy testing, with simple and completely deterministic output so I could test them independently.
The long and the short of it is that when debugging is more painful than designing, the path of least resistance is better design.
What turned this from an observation to a certainty was the success of the project. Suddenly there was budget and I had a "proper" IDE with an integrated debugger. Over the course of the next two weeks I noticed a reversion to prior habits, with "sketch" code made to work by iterative refinement in the debugger.
Having noticed this I recreated some earlier work using a debugger in place of thoughtful design. Interestingly, taking away the debugger slowed development only slightly, and the finished code was vastly better quality, particularly from a maintenance perspective.
Don't get me wrong: there is a place for debuggers. Personally, I think that place is in the hands of the team leader, to be brought out in times of dire need to figure out a mystery, and then taken away again before people lose their discipline.
People won't want to ask for it because that would be an admission of weakness in front of their peers, and the act of explaining the need and the surrounding context may well induce peer insights that solve the problem - or even better designs free from the problem.
So, FOR, I not only agree with your position, I have real data from a controlled experiment to support it. It is, however, a rather small sample. More elaborate tests are required before my conclusions are supportable.
Why don't you take what I've said to your team and suggest trials? You have more data than they do (I just gave it to you), and in order to have a credible basis for disagreeing with you they basically have to test the idea, and the only way to do that is to give your idea a go.
You should be ready for it to all fall apart, though, because the whole thing is predicated on the assumption that the developers have the talent and experience to rise to the challenge of stronger design in the absence of step-through debugging.
Step-through debugging was created to make debugging easier. The direct effect of lowering the bar is that people with less talent can participate - if you build a tool that even jackasses can use, you will get jackasses using it -- a lot of them, if the newly accessible activity is well-remunerated.
This causes an exodus of people with talent because they generally use that talent to do rare and precious things in order to be well paid without working too hard, and the market doesn't want to pay for excellence because it cannot distinguish talent well enough to know when paying for it is justified.
Another thought: more recent work with problems on production servers, where it was impossible to install a debugger, has shown the importance of having a codebase for which maintenance doesn't depend on the availability of a debugger. Code that's grown in the absence of debuggers is much less hassle. Choose not to use them when you can change your mind, and then when you can't change your mind it won't be so awful.
Since I'm fairly convinced, this question is not about whether debugging is a bad smell or not.
Well, your local church might be a more appropriate place for your question, then.
That aside, convince them by arguments. You might want to reconsider your fundamentalist stance, however, because this is the very opposite of persuasive. One thing you might want to do is drop the term "debugging" in your whole discussion and replace it with "stepping through the code" or the like, emphasizing that you oppose the uninformed guesswork/patchwork practice of probing that you condemn, rather than an informed reflection about the code.
(I would still disagree with you, but that's besides the point since you didn't want a discussion.)
I think the real problem here is:
People that believe debugging mode is the 'standard' mode tend to write code that can be understood only by stepping through it.
This, if true, should be self-evidently wrong, and there should be no need to discuss it. If it's not evident, it's because they don't see how the badly written code could be improved. Show them; do code reviews where you show how that code could be refactored in a way that is clear without stepping through it.
Code stepping will automatically diminish once better code is written, it just doesn't work the other way around. People will still write bad code and if they avoid stepping through it that will only lead to more wasted time (damn I wish I could step through this spaghetti mess), not to better code.
There is something wrong here, but it's hard to put my finger on it. Perhaps the real issue is that the code has other smells that make it difficult to readily understand. I agree that with TDD one ought to use the debugger less rather than more, since you'll be developing the code in small increments. But, if you can't look at the code and understand it, perhaps it's because the design is too coupled -- there are too many interrelated classes required to make things work.
If the code really needs to be so complex that observation won't suffice, then maybe you need to invest in some good commenting, explaining what is happening -- though I would prefer to see things refactored to the point where comments are not needed. My suspicion is that the debugger may be a symptom rather than the problem.
I know that for me, switching from traditional, code-first development to test-first development has resulted in less time spent debugging... and it's not something I miss. Typically I'll only involve the debugger when it's not obvious why the code I just wrote to pass a test didn't.
This is going to sound like the argument you said you don't want to have, but I think if you want to convince your teammates, you're going to have to make a stronger case. I don't understand your objection. I frequently step through code I'm trying to understand with the debugger. It's a great way to see what's going on. You have not established your claim that people who use the debugger in this way tend to write code which is otherwise difficult to understand. The only convincing way to do so would be through some kind of case/control study which tried to measure and compare the readability of code written by people with varying approaches to the debugger. And you have not even told a plausible story explaining why you think using a tool to understand code execution tends to lead to sloppier code construction. For me it's a complete non sequitur.
A "plan" to convince them of the advantage of another approach is by establishing metrics linked to the number of time you debug the same function for different bugs.
By analysis the trend of that metric, you may convince them that non-regression tests are more useful to spend time writing, and will help them to debug more efficiently.
That way, you do not write completely off the "debug" habit, but you convince them of establishing a solid set of test, allowing them to focus on really useful debug session, if needed.
Should you consider this course of action (metrics), you should know its implementation involves the all hierarchy (stakeholder, project manager, architect, developers). They all need to be implicated in those metrics in order to act on them.
Regarding developers, you could try to suggest:
some new ways of closing a bug case (close it only with the test scenario played to reproduce that bug, meaning they need an independent test in order to, if needed, launch their debug session)
a clear relationship between those metrics and their evaluation by the management (it would be a bad practice to debug over and over the same function)
a larger involvement in architectural decisions: sometimes, knowing some functional or applicative features rather than just classes and code can incite a developer to think more in terms of black-box tests rather than white-box ones (which can more easily lead to debug sessions)
participation in the "operational architecture" process (where you need to deploy your app and run full front-to-back integration tests). Again, a larger picture of the whole system can help a developer get more interested in features rather than 'lines of code'
I think a better phrasing of this question would be "Is non-TDD a code smell?" TDD seems to lead to less time spent in the debugger due to more time spent writing/failing/passing tests. Without TDD, you are more likely to spend time in the debugger to diagnose errors.
At least within Visual Studio, using the debugger is not that painful, so the challenge for you would be to explain to your teammates how TDD would make their development more enjoyable, productive and successful. Just avoiding the debugger is probably not reason enough for a team to switch their development methodology.
Right on roadwarrior.
Debugging isn't the problem; it's poorly commented and/or documented code and bad architecture. I work on a smaller team, but when a bug does surface, I do step through the code. Frequently it's a very small job because the app is well planned out and the docs on the code are clear.
That said, let's get to my point. Want the team to not debug? Comment, comment, comment. Nothing beats down the urge to debug faster. Sure, they'll still do it, but they'll be more likely to step over well-documented code.
Oh, and though it should go without saying, I'll say it anyway: don't have bugs in your code. :)
I agree with those above who expressed the relative irrelevance of this "debugger issue."
IMO, the 2 most important goals of a developer are:
1) Make the software do what it's supposed to do.
2) Write the code so that a maintenance developer 2 years down the road enjoys the experience of changing existing features or adding new ones.
Before you make a plan, you should decide how important this change is to you. Although I agree that debugging is a smell, it is also a very well accepted and ingrained practice for developers, so convincing them that they should stop doing it won't be easy or quick - and for good reasons. How much energy do you want to put into this topic?
Second, why do you want to persuade them in the first place? If your motivation is to help them, is it really their top priority problem? When you help people in ways they want to be helped, change becomes easy.
Once you have decided that you want to go on with your change initiative, you need to take into account that different people are convinced by different things. Some people will already be convinced by trying something new and exciting. Some will be convinced by numbers (metrics). Some by getting told about it while eating their favorite type of cookie (seriously!), some by hearing about it from their favorite guru. Some by reading about it in a magazine. Some by seeing that "everyone else is doing it, too". Etc. pp.
There is an insightful interview with Linda Rising on this topic at InfoQ: http://www.infoq.com/interviews/Linda-Rising-Fearless-Change. She can say it much better than me. The book is quite good, too.
Whatever you do, don't press too much, but also don't give up. Change can happen - especially if you take resistance as a resource -, and sometimes it happens at unexpected times, so always keep a sense of wonder.
#FOR: You have a second problem too; here it is:
sadly it doesn't seem the devs are interested in being more productive (they get paid the same anyway)
How do you intend to make them want to be more productive when there is nothing (visible) for them to gain?
Designing software by debugging is a good practice.
The number of environments supporting this way of developing is very small: the best known is Smalltalk. In Smalltalk, you can write a test describing your object's protocol without the methods being implemented. Running this test will then trigger the debugger, and you can add the method to the right class in the debugger, and continue stepping through the code until all functionality is implemented and the test is green.
This needs a compiler to be available at run-time, and first-class invocations. It offers a very short feedback cycle, and is one of the primary reasons for Smalltalk's productivity.

Do you think a software company should impose a coding style on developers? [closed]

If you think it shouldn't, explain why.
If yes, how deep should the guidelines be, in your opinion? For example, should indentation of code be included?
I think a team (rather than a company) needs to agree on a set of guidelines for reasonably consistent style. It makes maintenance more straightforward.
How deep? As shallow as you can agree on. The shorter and clearer it is the more likely it is that all the team members can agree to it and will abide by it.
You want everybody reading and writing code in a standard way. There are two ways you can achieve this:
Clone a single developer several times and make sure they all go through the same training. Hopefully they should all be able to write the same codebase.
Give your existing developers explicit instruction on what you require. Tabs or spaces for indentation. Where braces sit. How to comment. Version-control commit guidelines.
The more you leave undefined, the higher the probability one of the developers will clash on style.
The company should impose that some style should be followed. What style that is and how deep the guidelines are should be decided collectively by the developer community in the company.
I'd definitely lay down guidelines on braces, indentation, naming etc...
You write code for readability and maintainability. Always assume someone else is going to read your code.
There are tools that will automagically format your code, and you can mandate that everyone uses the tool.
If you are on .NET, look at StyleCop, FxCop and ReSharper.
Do you think a software company should impose a coding style on developers?
Not in a top-down manner. Developers in a software company should agree on a common coding style.
If yes, how deep should the guidelines be in your opinion?
They should only describe the differences from well-known conventions, trying to keep the deviation minimal. This is easy for languages like Python or Java, somewhat blurry for C/C++, and almost impossible for Perl and Ruby.
For example, indentation of code should be included?
Yes, it makes code much more readable. Keep indentation consistent in terms of spaces vs tabs and (if you opt for spaces) number of space characters. Also, agree on a margin (e.g. 76 chars or 120 chars) for long lines.
Yes, but within reason.
All modern IDEs offer one-keystroke code pretty-print, so the "indentation" point is quite irrelevant, in my opinion.
What is more important is to establish best practices: for example, use as few "out" or "ref" parameters as possible... In this example, you have two advantages: it improves readability and also prevents a lot of mistakes (a lot of out parameters is a code smell and should probably be refactored).
Going beyond that is, in my honest opinion, a bit "anal" and unnecessarily annoying for the devs.
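To make the out/ref point concrete, here is a small sketch with invented names (in C++, though the same idea applies to C# out/ref parameters):
#include <optional>
#include <string>
// Before: the result is smuggled out through a reference parameter,
// and callers must declare a variable before the call.
bool tryParsePort(const std::string& text, int& port) {
    if (text.empty()) return false;
    port = std::stoi(text); // simplified: ignores non-numeric input
    return true;
}
// After: one return value carries both success and the result, so the
// data flow is visible in the signature itself.
std::optional<int> parsePort(const std::string& text) {
    if (text.empty()) return std::nullopt;
    return std::stoi(text); // simplified: ignores non-numeric input
}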
Good point by Hamish Smith:
Style is quite different from best practices. It's a shame that 'coding standards' tend to roll the two together. If people could keep the style part to a minimum and concentrate on best practices, that would probably add more value.
I don't believe a dev team should have style guidelines they must follow as a general rule. There are exceptions, for example the use of <> vs. "" in #include statements, but these exceptions should come from necessity.
The most common reason I hear people use to explain why style guidelines are necessary is that code written in a common style is easier to maintain than code written in individual styles. I disagree. A professional programmer isn't going to be bogged down when they see this:
for( int n = 0; n < 42; ++n ) {
// blah
}
...when they are used to seeing this:
for(int n = 0; n < 42; ++n )
{
// blah
}
Moreover, I have found it's actually easier to maintain code in some cases if you can identify the programmer who wrote the original code by simply recognizing their style. Go ask them why they implemented the gizmo in such a convoluted way in 10 minutes instead of spending the better part of a day figuring out the very technical reason why they did something unexpected. True, the programmer should have commented the code to explain their reasoning, but in the real world programmers often don't.
Finally, if it takes Joe 10 minutes backspacing & moving his curly braces so that Bill can spend 3 fewer seconds looking at the code, did it really save any time to make Bill do something that doesn't come natural to him?
I believe having a consistent codebase is important. It increases the maintainability of your code. If everyone expects the same kind of code, they can easily read and understand it.
Besides it is not much of a hassle given today's IDEs and their autoformatting capabilities.
P.S:
I have this annoying habit of putting my braces on the next line :). No one else seems to like it.
I think that programmers should be able to adapt to the style of other programmers. If a new programmer is unable to adapt, that usually means that the new programmer is too stubborn to use the style of the company. It would be nice if we could all do our own thing; however, if we all code to some basic guideline, it makes debugging and maintenance easier. This is only true if the standard is well thought out and not too restrictive.
While I don't agree with everything, this book contains an excellent starting point for standards
The best solution would be for IDEs to regard such formatting as meta data. For example, the opening curly brace position (current line or next line), indentation and white space around operators should be configurable without changing the source file.
In my opinion I think it's highly necessary with standards and style guides. Because when your code-base grows you will want to have it consistent.
As a side note, that is why I love Python: it already imposes quite a lot of rules on how to structure your applications and such. Compare that with Perl, Ruby or whatever, where you have extreme freedom (which isn't that good in this case).
There are plenty of good reasons for standards that define the way applications are developed and the way the code should look. For example, when everyone uses the same standard, an automatic style-checker can run as part of the project's CI.
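As a sketch of what that can look like (GitHub Actions syntax and dotnet format are just one possible pairing here; StyleCop, FxCop or any other checker can play the same role on any CI server):

name: style-check
on: [push, pull_request]
jobs:
  style:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
      - run: dotnet format --verify-no-changes   # fails the build on style drift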
Using the same standards improves code readability and helps reduce tension between team members about re-factoring the same code in different ways.
Therefore:
All the code developed by the particular team should follow precisely the same standard.
All the code developed for a particular project should follow precisely the same standard.
It is desirable that teams belonging to the same company use the same standard.
In an outsourcing company an exception could be made for a team working for a customer if the customer wants to enforce a standard of their own. In this case the team adopts the customer's standard which could be incompatible with the one used by their company.
Like others have mentioned, I think it needs to be by engineering or by the team--the company (i.e. business units) should not be involved in that sort of decision.
But one other thing I'd add is any rules that are implemented should be enforced by tools and not by people. Worst case scenario, IMO, is some over-zealous grammar snob (yes, we exist; I know because we can smell our own) writes some documentation outlining a set of coding guidelines which absolutely nobody actually reads or follows. They become obsolete over time, and as new people are added to the team and old people leave, they simply become stale.
Then, some conflict arises, and someone is put in the uncomfortable position of having to confront someone else about coding style--this sort of confrontation should be done by tools and not by people. In short, this method of enforcement is the least desirable, in my opinion, because it is far too easy to ignore and simply begs programmers to argue about stupid things.
A better option (again, IMO) is to have warnings thrown at compile time (or something similar), so long as your build environment supports this. It's not hard to configure this in VS.NET, but I'm unaware of other development environments that have similar features.
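For instance, in a modern MSBuild project the compiler can be told to treat warnings as errors, so a style or analysis warning stops the build outright. A minimal sketch (older VS.NET versions exposed the same switch in the project properties dialog):

<!-- Sketch of an SDK-style .csproj: promote all compiler warnings to errors
     so violations fail the build instead of relying on people noticing them. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
    <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
  </PropertyGroup>
</Project>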
Style guidelines are extremely important, whether they're for design or development, because they speed the communication and performance of people who work collaboratively (or even alone, sequentially, as when picking up the pieces of an old project). Not having a system of convention within a company is just asking people to be as unproductive as they can. Most projects require collaboration, and even those that don't can be vulnerable to our natural desire to exercise our programming chops and keep current. Our desire to learn gets in the way of our consistency - which is a good thing in and of itself, but can drive a new employee crazy trying to learn the systems they're jumping in on.
Like any other system that's meant for good and not evil, the real power of the guide lies in the hands of its people. The developers themselves will determine what the essential and useful parts are and then, hopefully, use them.
Like the law. Or the English language.
Style guides should be as deep as they want to be - if it comes up in the brainstorm session, it should be included. It's odd how you worded the question, because at the end of the day there is no way to "impose" a style guide: it's only a GUIDE.
RTFM, then glean the good stuff and get on with it.
Yes, I think companies should. Developers may need to get used to the coding style, but in my opinion a good programmer should be able to work with any coding style. As Midhat said: it is important to have a consistent codebase.
I think this is also important for open-source projects: there is no supervisor to tell you how to write your code, but many languages have conventions for how code should be named and organised. This helps a lot when integrating open-source components into your project.
Sure, guidelines are good, and unless it's badly-used Hungarian notation (ha!), it'll probably improve consistency and make reading other people's code easier. The guidelines should just be guidelines though, not strict rules enforced on programmers. You could tell me where to put my braces or not to use names like temp, but what you can't do is force me to have spaces around index values in array brackets (they tried once...)
Yes.
Coding standards are a common way of ensuring that code within a certain organization will follow the Principle of Least Surprise: consistency in standards starting from variable naming to indentation to curly brace use.
Coders having their own styles and their own standards will only produce a code-base that is inconsistent, confusing, and frustrating to read, especially on larger projects.
These are the coding standards for a company I used to work for. They're well defined, and, while it took me a while to get used to them, meant that the code was readable by all of us, and uniform all the way through.
I do think coding standards are important within a company; if none are set, there are going to be clashes between developers and issues with readability.
Having the code uniform all the way through presents a better product to the end user (so it looks as if it's written by one person - which, from an end user's point of view, it should be, that person being "the company"), and it also helps with readability within the team...
A common coding style promotes consistency and makes it easy for different people to understand, maintain and expand the whole code base, not only their own pieces. It also makes it easier for new people to learn the code faster. Thus, any team should have guidelines on how code is expected to be written.
Important guidelines include (in no particular order):
whitespace and indentation
standard comments - file, class or method headers
naming convention - classes, interfaces, variables, namespaces, files (a brief sketch follows this list)
code annotations
project organization - folder structures, binaries
standard libraries - what templates, generics, containers and so on to use
error handling - exceptions, HRESULTs, error codes
threading and synchronization
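As the brief sketch promised for the naming-convention item (the names Acme.Billing, IInvoiceStore and Invoice are invented; the exact rules matter far less than picking one set and applying it everywhere):

namespace Acme.Billing                    // namespaces: PascalCase
{
    public interface IInvoiceStore        // interfaces: 'I' prefix + PascalCase
    {
        void Save(Invoice invoice);
    }

    public class Invoice                  // classes: PascalCase, one per file
    {
        private decimal _total;           // private fields: _camelCase

        public decimal Total => _total;   // public members: PascalCase

        public void AddLine(decimal amount)   // parameters and locals: camelCase
        {
            var newTotal = _total + amount;
            _total = newTotal;
        }
    }
}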
Also, be wary of programmers that can't or won't adapt to the style of the team, no matter how bright they might be. If they don't play by one of the team rules, they probably won't play by other team rules as well.
I would agree that consistency is key. You can't rely on IDE pretty-printing to save the day: some of your developers may not like using an IDE, and when you're trawling through a code base of thousands of source files, it's simply not feasible to pretty-print every file as you start working on it and then roll back afterwards so your VCS doesn't try to commit all the changes (clogging the repository with needless updates that burden everyone).
I would suggest standardizing at least the following (in decreasing order of importance):
Whitespace (it's easiest if you choose a style that conforms to the automatic pretty-printing of some shared tool)
Naming (files and folders, classes, functions, variables, ...)
Commenting (using a format that allows automatic documentation generation)
My opinion:
Some basic rules are good as it helps everyone to read and maintain the code
Too many rules are bad as it stops developers innovating with clearer ways of laying out code
Individual style can be useful to determine the history of a code file. Diff/blame tools can be used but the hint is still useful
Modern IDEs let you define a formatting template. If there is a corporate standard, then develop a configuration file that defines all the formatting values you care about and make sure everyone runs the formatter before they check in their code. If you want to be even more rigorous about it you could add a commit hook for your version control system to indent the code before it is accepted.
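A hedged sketch of such a hook, as a .git/hooks/pre-commit script: this variant verifies rather than rewrites, which avoids silently changing the author's files, and it assumes dotnet format reads the team's settings from the repository, so substitute whatever formatter your stack actually uses.

#!/bin/sh
# Reject the commit if the working tree doesn't match the agreed format.
if ! dotnet format --verify-no-changes --verbosity quiet
then
    echo "Code does not match the team format; run 'dotnet format' and retry."
    exit 1
fi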
Yes, in terms of using a common naming standard as well as a common layout of classes and code-behind files. Everything else is open.
Every company should. A consistent coding style ensures higher readability and maintainability of the codebase across your whole team.
The shop I work at does not have a unified coding standard, and I can say we (as a team) vastly suffer from that. When there is no will from the individuals (like in case of some of my colleagues), the team leader has to bang his fist on the table and impose some form of standardised coding guidelines.
Every language has general standards that are used by the community. You should follow those as closely as possible so that your code can be maintained by other people used to the language, but there's no need to be dictatorial about it.
The creation of an official standard is wrong because a company coding standard is usually too rigid, and unable to flow with the general community using the language.
If a team member's coding style is really out there, a code review is an excellent place for the group to gently suggest that it's not a good idea.
Coding standards: YES. For reasons already covered in this thread.
Styling standards: NO. Your "readable" is my bewildering junk, and vice versa. Good commenting and code factoring have a far greater benefit. Also, GNU indent.
I like Ilya's answer because it incorporates the importance of readability, and the use of continuous integration as the enforcement mechanism. Hibri mentioned FxCop, and I think its use in the build process as one of the criteria for determining whether a build passes or fails would be more flexible and effective than merely documenting a standard.
I entirely agree that coding standards should be applied, and that it should almost always be at the team level. However there are a couple of exceptions.
If the team is writing code that is to be used by other teams (and here I mean that other teams will have to look at the source, not just use it as a library) then there are benefits to making common standards across all the teams using it. Similarly if the policy of the company is to frequently move programmers from one team to another, or is in a position where one team frequently wants to reuse code from another team then it is probably best to impose standards across the company.
There are two types of conventions.
Type A conventions: "please do these, it is better"
and Type B: "please drive on the right-hand side of the road", where it would be just as fine to drive on the other side, as long as everyone picks the same side.
There's no such thing as a separate team. All code in a good firm is connected somehow, and style should be consistent. It's easier to get yourself used to one new style than to twenty different styles.
Also, a new developer should be able to respect the practices of the existing codebase and follow them.
